Database and DBMS Basics
These problems and others led to the development of database management systems.
Oracle History
3 ORACLE DATABASE ADMINISTRATION
Oracle has a three-decade history, outlasting many of its predecessors. This brief summary traces the
evolution of Oracle from its inception to its current status as the world's most flexible and robust
database management system.
Founded in August 1977 by Larry Ellison, Bob Miner, Ed Oates, and Bruce Scott, Oracle was initially
named after "Project Oracle," a project for one of their clients, the CIA, and the company that developed
Oracle was dubbed Systems Development Labs, or SDL. Although they may not have realized it at the
time, these four men would change the history of database management forever.
In 1978 SDL was renamed Relational Software Inc. (RSI) to market its new database.
1979 - Oracle release 2
The first commercial RDBMS was built using PDP-11 assembly language. Although the company created a
commercial RDBMS in 1977, it was not available for sale until 1979, with the launch of Oracle
version 2. The company decided against starting with version 1 because it feared the term
"version 1" might be viewed negatively in the marketplace. The U.S. Air Force and then the CIA were the first
customers to use Oracle 2.
In 1982 the company's name changed again, from RSI to Oracle Systems Corporation, to match its
popular database name. The current company name comes from the CIA project, code-named "Oracle,"
that Larry Ellison had previously worked on.
1983 - Oracle release 3
Oracle version 3 was developed in 1983. This version was written in the C programming language
and could run on mainframes, minicomputers, and PCs, or any hardware with a C compiler. It supported
the execution of SQL statements and transactions. This version also included new options for pre-joining
data to improve query optimization.
1984 - Oracle release 4
Despite the advances introduced in version 3, demand was so great that Oracle was compelled to
improve the software even further with the release of version 4 in 1984. Oracle version 4 included
support for read consistency, which made it much faster than any previous version. Oracle version 4
also introduced the export/import utilities and the report writer, which lets users create reports
based on queries.
1985 - Oracle release 5
With the introduction of version 5 in 1985, Oracle addressed the increasing use of networked
client/server computing in business. This version was equipped with the capability to connect client
software through a network to a database server. Clustering technology was introduced in this version
as well, and Oracle became a pioneer of this new concept, which would later be known as Oracle Real
Application Clusters in version 9i. Oracle version 5 added new security features such as auditing,
which helped determine who accessed the database and when.
Oracle version 5.1 was launched in 1986 and added support for distributed queries. Later that same
year Oracle released SQL*Plus, a tool that offers ad hoc data access and report writing. 1986 also
brought the release of SQL*Forms, an application generator and runtime system with facilities for simple
application deployment.
1988 - Oracle release 6
The PL/SQL language came with Oracle version 6 in 1988. This version provided a host of new features,
including support for high-speed OLTP systems, hot backup capability, and row-level locking, which
locks only the row or rows being used during a write operation rather than an entire table.
Before the hot backup feature, database administrators were required to shut down the database to back
it up. Once the hot backup feature was introduced, DBAs could perform a backup while the database was still
online.
Oracle Parallel Server was introduced in Oracle version 6.2 and was used with DEC VAX clusters. This new
feature provided high availability, because more than one node (server) could access the data in the
database. In addition to increased availability, this feature improved performance by sharing user
connections among nodes.
1992 - Oracle release 7
1992 was a memorable year for Oracle. The company announced Oracle version 7, which was the
culmination of four years of hard work and two years of customer testing before release to market. This
version of Oracle provided a vast array of new features and capabilities in areas such as security,
administration, development, and performance. Oracle 7 also addressed security concerns by providing
full control over who was doing what in the database, and when. Version 7 also allowed administrators to
monitor every command, the use of privileges, and each user's access to particular objects. With Oracle 7,
users could create stored procedures and triggers to enforce business rules. Roles were introduced in this
version to simplify the management of users and privileges. Two-phase commit was
added to support distributed transactions.
Oracle7 Release 7.1 introduced several new capabilities for database administrators, such as parallel
recovery and read-only tablespaces. For application developers, Oracle added dynamic SQL,
user-defined SQL functions, and multiple same-type triggers. The first 64-bit DBMS was introduced with
this version, as was the VLM (Very Large Memory) option. The Oracle Parallel Query feature could
make some complex queries run 5 to 20 times faster.
In 1996 Oracle 7.3 was shipped, offering customers the ability to manage all kinds of data types,
including video, color images, sound, and spatial data. 1996 also brought the release of Oracle's first
biometric authentication for a commercially available database. This technology could analyze human
characteristics, both physical and behavioral, for purposes of authentication.
1997 - Oracle release 8
The Oracle 8 Database was launched in 1997 and was designed to work with Oracle's network computer
(NC). This version supported Java, HTML and OLTP.
1998 - Oracle release 8i
Just one year later Oracle released Oracle 8i, the first database to support Web technologies
such as Java and HTTP. In 2000 Oracle 8i Parallel Server became available on Linux, which helped eliminate
costly downtime.
2001 - Oracle release 9i
Oracle Real Application Clusters came with the Oracle 9i Database in 2001. This feature provides software for
clustering and high availability in Oracle database environments. Native XML support was also a new
feature of Oracle 9i, the first relational database to offer it. Version 9i
Release 2 enabled Oracle to integrate relational and multidimensional databases. Although
hard disks were becoming cheaper, data in databases was growing very quickly, and Oracle 9i came
with a technology named table compression that reduced the size of tables by 3 to 10 times and
increased performance when accessing those tables.
2003 - Oracle release 10g
Although Oracle 9i had been on the market for only two years, Oracle launched version 10g in 2003. The
release of 10g introduced Grid Computing technology: data centers could now share
hardware resources, lowering the cost of computing infrastructure. 10g was also the first Oracle
version to support 64-bit Linux. With Oracle Database 10g and Real Application Clusters it became
possible to move from very expensive SMP boxes and mainframes to low-cost infrastructure such as
UNIX or Windows servers, while retaining high availability, scalability, and performance.
Oracle has long strived to make its software products available through the internet, and this effort
was further enhanced with the creation of the 10g Express Edition. With the introduction of the 10g Express
Edition in 2005, Oracle gave small businesses and startup corporations a viable option to integrate Oracle
into the workplace at no cost.
2007 - Oracle release 11g
The latest version of Oracle Database is 11g, which was released on July 11, 2007. This version
introduced more features than any other in Oracle history. This version includes:
Oracle Database Replay, a tool that captures SQL statements and lets you replay them all in
another database to test changes before you actually apply them on a production database;
Transaction Management, using LogMiner and Flashback Data Archive to get DML statements
from redo log files;
Virtual Column Partitioning;
Case-sensitive passwords;
Online Patching;
Parallel backups of the same file using RMAN; and many others.
Oracle is known for growth and change, which is why it is important to continually study its history and
previous lessons learned while embracing new features and functionality. Throughout its history Oracle
has acquired database and software application companies in order to provide more complete solutions
to its customers and increase the credibility of its products. Today Oracle has more than 320,000
customers and is present in 145 countries, making it one of the elite companies in its field.
Database Administrators
Each database requires at least one database administrator (DBA). An Oracle Database system can be
large and can have many users. Therefore, database administration is sometimes not a one-person job,
but a job for a group of DBAs who share responsibility. A database administrator's responsibilities can
include the following tasks:
Installing and upgrading the Oracle Database server and application tools.
Allocating system storage and planning future storage requirements for the database system.
Creating primary database storage structures (tablespaces) after application developers have
designed an application.
Creating primary objects (tables, views, indexes) once application developers have designed an
application.
Modifying the database structure, as necessary, from information given by application
developers.
Enrolling users and maintaining system security.
Ensuring compliance with Oracle license agreements.
Controlling and monitoring user access to the database.
Monitoring and optimizing the performance of the database.
Planning for backup and recovery of database information.
Maintaining archived data on tape.
Backing up and restoring the database.
Contacting Oracle for technical support.
Security Officers
In some cases, a site assigns one or more security officers to a database. A security officer enrolls users,
controls and monitors user access to the database, and maintains system security. As a DBA, you might
not be responsible for these duties if your site has a separate security officer.
Network Administrators
Some sites have one or more network administrators. A network administrator, for example, administers
Oracle networking products, such as Oracle Net Services.
Database Users
Database users interact with the database through applications or utilities. A typical user's
responsibilities include the following tasks:
• Entering, modifying, and deleting data, where permitted
• Generating reports from the data
Application Developers
Application developers design and implement database applications. Their responsibilities include the
following tasks:
Designing and developing the database application
Designing the database structure for an application
Estimating storage requirements for an application
Specifying modifications of the database structure for an application
Relaying this information to a database administrator
Tuning the application during development
Establishing security measures for an application during development
Application developers can perform some of these tasks in collaboration with DBAs.
Application Administrators
An Oracle Database site can assign one or more application administrators to administer a particular
application.
Each application can have its own administrator.
1.2. Tasks of a Database Administrator
The following tasks present a prioritized approach for designing, implementing, and maintaining an
Oracle Database:
Task 1: Evaluate the Database Server Hardware
Task 2: Install the Oracle Database Software
Task 3: Plan the Database
Task 4: Create and Open the Database
Task 5: Back Up the Database
Task 6: Enroll System Users
Task 7: Implement the Database Design
Task 8: Back Up the Fully Functional Database
Task 9: Tune Database Performance
Task 10: Download and Install Patches
Task 11: Roll Out to Additional Hosts
Note: When upgrading to a new release, back up your existing production environment, both software
and database, before installation.
The easy connect syntax takes the form:
connect username@host[:port][/service_name]
where:
host is the host name or IP address of the computer hosting the remote database. Both IP version 4
(IPv4) and IP version 6 (IPv6) addresses are supported; IPv6 addresses must be enclosed in square
brackets.
port is the TCP port on which the Oracle Net listener on host listens for database connections. If
omitted, 1521 is assumed.
service_name is the database service name to which to connect. It can be omitted if the Net Services
listener configuration on the remote host designates a default service; if no default service is
configured, service_name must be supplied. Each database typically offers a standard service with a
name equal to the global database name, which is made up of the DB_NAME and DB_DOMAIN
initialization parameters as follows:
DB_NAME.DB_DOMAIN
edition={edition_name | DATABASE_DEFAULT} specifies the edition in which the new database session
starts. If you specify an edition, it must exist and you must have the USE privilege on it. If this clause
is not specified, the database default edition is used for the session.
Example:
This simple example connects to a local database as user SYSTEM. SQL*Plus prompts for the SYSTEM
user password.
connect system
Example
This example connects to a local database as user SYS with the SYSDBA privilege. SQL*Plus prompts for
the SYS user password.
connect sys as sysdba
When connecting as user SYS, you must connect AS SYSDBA.
Example
This example connects locally with operating system authentication.
connect /
Example
This example connects locally with the SYSDBA privilege with operating system authentication.
connect / as sysdba
Example
This example uses easy connect syntax to connect as user salesadmin to a remote database running on
the host db1.mycompany.com. The Oracle Net listener (the listener) is listening on the default port
(1521). The database service is sales.mycompany.com. SQL*Plus prompts for the salesadmin user
password.
connect salesadmin@db1.mycompany.com/sales.mycompany.com
Example
This example is identical to the previous one, except that the listener is listening on the non-default port number 1522.
connect salesadmin@db1.mycompany.com:1522/sales.mycompany.com
Example
This example connects remotely as user salesadmin to the database service designated by the net
service name sales1. SQL*Plus prompts for the salesadmin user password.
connect salesadmin@sales1
Example
This example connects remotely with external authentication to the database service designated by the
net service name sales1.
connect /@sales1
Example
This example connects remotely with the SYSDBA privilege and with external authentication to the
database service designated by the net service name sales1.
connect /@sales1 as sysdba
Because Oracle Database continues to evolve and can require maintenance, Oracle periodically produces
new releases. Not all customers initially subscribe to a new release or require specific maintenance for
their existing database software. As many as five numbers may be required to fully identify a release. The
significance of these numbers is discussed in the sections that follow.
1.4. Release Number Format
To understand the release nomenclature used by Oracle, examine the format of an Oracle Database
release number, which consists of up to five numbers separated by periods (for example, 11.1.0.6.0).
Note: Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to
the second digit of a release number. In previous releases, the third digit indicated a particular
maintenance release.
Major Database Release Number
The first digit is the most general identifier. It represents a major new version of the software that
contains significant new functionality.
Database Maintenance Release Number
The second digit represents a maintenance release level. Some new features may also be included.
Application Server Release Number
The third digit reflects the release level of the Oracle Application Server (OracleAS).
Component-Specific Release Number
The fourth digit identifies a release level specific to a component. Different components can have
different numbers in this position depending upon, for example, component patch sets or interim
releases.
Platform-Specific Release Number
The fifth digit identifies a platform-specific release. Usually this is a patch set. When different platforms
require the equivalent patch set, this digit will be the same across the affected platforms.
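The five positions described above can be pulled apart mechanically. The following sketch is illustrative only; the field labels paraphrase the section headings, and the sample release string is an assumption used for demonstration.

```python
# Labels paraphrasing the five release-number positions described above.
FIELDS = (
    "major release",               # 1st digit: major new version
    "maintenance release",         # 2nd digit: maintenance level
    "application server release",  # 3rd digit: OracleAS release level
    "component-specific release",  # 4th digit: e.g. component patch sets
    "platform-specific release",   # 5th digit: usually a patch set
)

def parse_release(release):
    """Split a dotted Oracle release number into its five named parts."""
    parts = release.split(".")
    if len(parts) != len(FIELDS):
        raise ValueError("expected five dot-separated numbers")
    return dict(zip(FIELDS, (int(p) for p in parts)))

parts = parse_release("11.1.0.7.0")  # hypothetical sample string
print(parts["major release"])        # 11
print(parts["maintenance release"])  # 1
```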
Note: The SYSDBA and SYSOPER system privileges allow access to a database instance
even when the database is not open. Control of these privileges is totally outside of the
database itself. While referred to as system privileges, SYSDBA and SYSOPER can also be
thought of as types of connections (for example, you specify: CONNECT AS SYSDBA) that
enable you to perform certain database operations for which privileges cannot be granted
in any other fashion.
The manner in which you are authorized to use these privileges depends upon the method of
authentication that you use. When you connect with SYSDBA or SYSOPER privileges, you connect with a
default schema, not with the schema that is generally associated with your username. For SYSDBA this
schema is SYS; for SYSOPER the schema is PUBLIC.
Connecting with Administrative Privileges: Example
This example illustrates that a user is assigned another schema (SYS) when connecting with the SYSDBA
system privilege. Assume that the sample user oe has been granted the SYSDBA system privilege and
has issued the following statements:
CONNECT oe
CREATE TABLE admin_test (name VARCHAR2(20));
Later, user oe issues these statements:
CONNECT oe AS SYSDBA
SELECT * FROM admin_test;
User oe now receives the following error:
ORA-00942: table or view does not exist
Having connected as SYSDBA, user oe now references the SYS schema, but the table was created in the
oe schema.
Database administrators can be authenticated through the data dictionary, using an
account password, like other users. Keep in mind that beginning with Oracle Database 11g Release 1,
database passwords are case-sensitive. (You can disable case sensitivity and return to pre–Release
11g behavior by setting the SEC_CASE_SENSITIVE_LOGON initialization parameter to FALSE.)
In addition to normal data dictionary authentication, the following methods are available
for authenticating database administrators with the SYSDBA or SYSOPER privilege:
Password files
Strong authentication with a network-based authentication service, such as Oracle Internet
Directory
These methods are required to authenticate a database administrator when the database is not started
or otherwise unavailable. (They can also be used when the database is available.)
The remainder of this section focuses on operating system authentication and password file
authentication.
Notes:
These methods replace the CONNECT INTERNAL syntax provided with earlier versions of Oracle
Database. CONNECT INTERNAL is no longer supported.
Operating system authentication takes precedence over password file authentication. If you meet
the requirements for operating system authentication, then even if you use a password file, you
will be authenticated by operating system authentication.
Your choice will be influenced by whether you intend to administer your database locally on the same
system where the database resides, or whether you intend to administer many different databases from
a single remote client. Figure 1-2 illustrates the choices you have for database administrator
authentication schemes.
If you are performing remote database administration, consult your Oracle Net documentation to
determine whether you are using a secure connection. Most popular connection protocols, such as TCP/IP
and DECnet, are not secure.
To connect to Oracle Database as a privileged user over a nonsecure connection, you must be
authenticated by a password file. When using password file authentication, the database uses a password
file to keep track of database user names that have been granted the SYSDBA or SYSOPER system
privilege.
You can connect to Oracle Database as a privileged user over a local connection or a secure remote
connection in two ways:
If the database has a password file and you have been granted
the SYSDBA or SYSOPER system privilege, then you can connect and be authenticated by a
password file.
If the server is not using a password file, or if you have not been
granted SYSDBA or SYSOPER privileges and are therefore not in the password file, you can use
operating system authentication. On most operating systems, authentication for database
administrators involves placing the operating system username of the database administrator in
a special group, generically referred to as OSDBA. Users in that group are granted
SYSDBA privileges. A similar group, OSOPER, is used to grant SYSOPER privileges to users.
This section describes how to authenticate an administrator using the operating system.
Membership in one of two special operating system groups enables a DBA to authenticate to the
database through the operating system rather than with a database user name and password. This is
known as operating system authentication. These operating system groups are generically referred to as
OSDBA and OSOPER. The groups are created and assigned specific names as part of the database
installation process. The default names vary depending upon your operating system, and are listed in the
following table:
Oracle Universal Installer uses these default names, but you can override them. One reason to override
them is if you have multiple instances running on the same host computer. If each instance is to have a
different person as the principal DBA, you can improve the security of each instance by creating a
different OSDBA group for each instance. For example, for two instances on the same host, the OSDBA
group for the first instance could be named dba1, and OSDBA for the second instance could be
named dba2. The first DBA would be a member of dba1 only, and the second DBA would be a member
of dba2 only. Thus, when using operating system authentication, each DBA would be able to connect
only to his assigned instance.
Membership in the OSDBA or OSOPER group affects your connection to the database in the following
ways:
If you are a member of the OSDBA group and you specify AS SYSDBA when you connect to the
database, then you connect to the database with the SYSDBA system privilege.
If you are a member of the OSOPER group and you specify AS SYSOPER when you connect to
the database, then you connect to the database with the SYSOPER system privilege.
If you are not a member of either of these operating system groups and you attempt to connect
as SYSDBA or SYSOPER, the CONNECT command fails.
To enable operating system authentication of an administrative user:
1. Create an operating system account for the user.
2. Add the account to the OSDBA or OSOPER operating system defined groups.
Connecting Using Operating System Authentication
A user can be authenticated, enabled as an administrative user, and connected to a local database by
typing one of the following SQL*Plus commands:
CONNECT / AS SYSDBA
CONNECT / AS SYSOPER
For the Windows platform only, remote operating system authentication over a secure connection is
supported. You must specify the net service name for the remote database:
CONNECT /@net_service_name AS SYSDBA
Both the client computer and the database host computer must be on a Windows domain.
This section describes how to authenticate an administrative user using password file authentication.
To enable authentication of an administrative user using password file authentication you must do the
following:
1. If not already created, create the password file using the ORAPWD utility:
Notes:
When you invoke Database Configuration Assistant (DBCA) as part of the Oracle
Database installation process, DBCA creates a password file.
Beginning with Oracle Database 11g Release 1, passwords in the password file are case-sensitive
unless you include the IGNORECASE = Y command-line argument.
4. Connect to the database as user SYS (or as another user with the administrative privileges).
5. If the user does not already exist in the database, create the user and assign a password.
Keep in mind that beginning with Oracle Database 11g Release 1, database passwords are
case-sensitive. (You can disable case sensitivity and return to pre-Release 11g behavior by setting
the SEC_CASE_SENSITIVE_LOGON initialization parameter to FALSE.)
This statement adds the user to the password file, thereby enabling connection AS SYSDBA.
Administrative users can be connected and authenticated to a local or remote database by using the
SQL*Plus CONNECT command. They must connect using their username and password and the AS
SYSDBA or AS SYSOPER clause. Note that beginning with Oracle Database 11g Release 1, passwords
are case-sensitive unless the password file was created with the IGNORECASE = Y option.
For example, user oe has been granted the SYSDBA privilege, so oe can connect as follows:
CONNECT oe AS SYSDBA
However, user oe has not been granted the SYSOPER privilege, so the following command will fail:
CONNECT oe AS SYSOPER
Note:
Operating system authentication takes precedence over password file authentication. Specifically, if you
are a member of the OSDBA or OSOPER group for the operating system, and you connect as SYSDBA or
SYSOPER, you will be connected with associated administrative privileges regardless of
the username/password that you specify.
If you are not in the OSDBA or OSOPER groups, and you are not in the password file, then attempting to
connect as SYSDBA or as SYSOPER fails.
You can create a password file using the password file creation utility, ORAPWD. For some operating
systems, you can create this file as part of your standard installation.
The ORAPWD command has the following general syntax:
ORAPWD FILE=filename [ENTRIES=numusers] [FORCE={Y|N}] [IGNORECASE={Y|N}]
Argument Description
FILE: Name to assign to the password file. You must supply a complete path. If you supply
only a file name, the file is written to the current directory.
ENTRIES: (Optional) Maximum number of entries (user accounts) to permit in the file.
FORCE: (Optional) If Y, permits overwriting an existing password file.
IGNORECASE: (Optional) If Y, passwords are treated as case-insensitive.
The command prompts for the SYS password and stores the password in the created password file.
Example
The following command creates a password file named orapworcl that allows up to 30 privileged users
with different passwords:
ORAPWD FILE=orapworcl ENTRIES=30
FILE
This argument sets the name of the password file being created. You must specify the full path
name for the file. This argument is mandatory.
The file name required for the password file is operating system specific. Some operating
systems require the password file to adhere to a specific format and be located in a specific
directory. Other operating systems allow the use of environment variables to specify the name
and location of the password file.
Table 1-1 lists the required name and location for the password file on the UNIX, Linux, and
Windows platforms. For other platforms, consult your platform-specific documentation.
Table 1-1 Required Password File Name and Location on UNIX, Linux, and Windows
For example, for a database instance with the SID orcldw, the password file must be
named orapworcldw on Linux and PWDorcldw.ora on Windows.
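The platform naming convention shown in the example above can be expressed as a small helper. This is an illustration of the convention described in the text, not an Oracle utility; always confirm the required name against your platform-specific documentation.

```python
def password_file_name(sid, platform):
    """Return the default password file name for an instance SID.

    Follows the convention described in the text: orapw<SID> on
    UNIX/Linux and PWD<SID>.ora on Windows.
    """
    if platform in ("unix", "linux"):
        return f"orapw{sid}"
    if platform == "windows":
        return f"PWD{sid}.ora"
    raise ValueError("consult platform-specific documentation")

print(password_file_name("orcldw", "linux"))    # orapworcldw
print(password_file_name("orcldw", "windows"))  # PWDorcldw.ora
```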
Caution:
It is critically important to the security of your system that you protect your password file and
the environment variables that identify the location of the password file. Any user with access to
these could potentially compromise the security of the connection.
ENTRIES
This argument specifies the number of entries that you require the password file to accept. This
number corresponds to the number of distinct users allowed to connect to the database
as SYSDBA or SYSOPER. The actual number of allowable entries can be higher than the number
of users, because the ORAPWD utility continues to assign password entries until an operating
system block is filled. For example, if your operating system block size is 512 bytes, it holds four
password entries. The number of password entries allocated is always a multiple of four.
Entries can be reused as users are added to and removed from the password file. If you intend to
specify REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE, and to allow the granting
of SYSDBA and SYSOPER privileges to users, this argument is required.
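The block-based allocation described for ENTRIES can be sketched numerically. The four-entries-per-512-byte-block figure comes from the example in the text and varies by platform; the helper merely illustrates the rounding rule.

```python
import math

ENTRIES_PER_BLOCK = 4  # example in text: a 512-byte OS block holds 4 entries

def allocated_entries(requested):
    """Round a requested entry count up to a whole number of OS blocks."""
    blocks = math.ceil(requested / ENTRIES_PER_BLOCK)
    return blocks * ENTRIES_PER_BLOCK

# ENTRIES=30 actually allows up to 32 SYSDBA/SYSOPER entries:
print(allocated_entries(30))  # 32
```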
Caution:
When you exceed the allocated number of password entries, you must create a new password
file. To avoid this necessity, allocate more entries than you think you will ever need.
FORCE
This argument, if set to Y, enables you to overwrite an existing password file. An error is
returned if a password file of the same name already exists and this argument is omitted or set
to N.
IGNORECASE
If this argument is set to y, passwords are case-insensitive. That is, case is ignored when
comparing the password that the user supplies during login with the password in the password
file.
The REMOTE_LOGIN_PASSWORDFILE initialization parameter controls how the password file is used,
and can be set to one of the following values:
NONE: Setting this parameter to NONE causes Oracle Database to behave as if the password file
does not exist. That is, no privileged connections are allowed over nonsecure connections.
EXCLUSIVE: (The default) An EXCLUSIVE password file can be used with only one instance of
one database. Only an EXCLUSIVE file can be modified. Using an EXCLUSIVE password file
enables you to add, modify, and delete users. It also enables you to change the SYS password
with the ALTER USER command.
SHARED: A SHARED password file can be used by multiple databases running on the same
server, or multiple instances of an Oracle Real Application Clusters (Oracle RAC) database.
A SHARED password file cannot be modified. Therefore, you cannot add users to a SHARED
password file. Any attempt to do so or to change the password of SYS or other users with
the SYSDBA or SYSOPER privileges generates an error. All users
needing SYSDBA or SYSOPER system privileges must be added to the password file
when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all users are added, you
can change REMOTE_LOGIN_PASSWORDFILE to SHARED, and then share the file.
This option is useful if you are administering multiple databases or an Oracle RAC database.
Suggestion: To achieve the greatest level of security, you should set the
REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE immediately after
creating the password file.
Note:
You cannot change the password for SYS if REMOTE_LOGIN_PASSWORDFILE is set to SHARED. An
error message is issued if you attempt to do so.
To synchronize the SYS passwords, use the ALTER USER statement to change the SYS password.
The ALTER USER statement updates and synchronizes both the dictionary and password file passwords.
To synchronize the passwords for non-SYS users who log in using the SYSDBA or SYSOPER privilege,
you must revoke and then regrant the privilege to the user, as follows:
1. Find all users who have been granted the SYSDBA privilege.
2. Revoke and then regrant the SYSDBA privilege to these users.
3. Find all users who have been granted the SYSOPER privilege.
4. Revoke and then regrant the SYSOPER privilege to these users.
When you grant SYSDBA or SYSOPER privileges to a user, that user's name and privilege information
are added to the password file. If the server does not have an EXCLUSIVE password file (that is, if the
initialization parameter REMOTE_LOGIN_PASSWORDFILE is NONE or SHARED, or the password file
is missing), Oracle Database issues an error if you attempt to grant these privileges.
A user's name remains in the password file only as long as that user has at least one of these two
privileges. If you revoke both of these privileges, Oracle Database removes the user from the password
file.
Use the following procedure to create a password file and add new users to it:
1. Follow the instructions for creating a password file as explained in "Creating a Password File with
ORAPWD".
2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE (the default).
3. Connect with SYSDBA privileges as shown in the following example, and enter
the SYS password when prompted:
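For example (a sketch; the exact connect string depends on your environment):

```
$ sqlplus /nolog
SQL> CONNECT SYS AS SYSDBA
Enter password:
```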
4. Start up the instance and create the database if necessary, or mount and open an existing
database.
5. Create users as necessary. Grant SYSDBA or SYSOPER privileges to yourself and other users as appropriate.
Granting and Revoking SYSDBA and SYSOPER Privileges
If your server is using an EXCLUSIVE password file, use the GRANT statement to grant
the SYSDBA or SYSOPER system privilege to a user, as shown in the following example:
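A hedged example, using a hypothetical user mydba:

```sql
GRANT SYSDBA TO mydba;
```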
Use the REVOKE statement to revoke the SYSDBA or SYSOPER system privilege from a user, as shown
in the following example:
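For example, again with the hypothetical user mydba:

```sql
REVOKE SYSDBA FROM mydba;
```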
Because SYSDBA and SYSOPER are the most powerful database privileges, the WITH ADMIN
OPTION is not used in the GRANT statement. That is, the grantee cannot in turn grant
the SYSDBA or SYSOPER privilege to another user. Only a user currently connected as SYSDBA can
grant or revoke another user's SYSDBA or SYSOPER system privileges. These privileges cannot be
granted to roles, because roles are available only after database startup. Do not confuse
the SYSDBA and SYSOPER database privileges with operating system roles.
Use the V$PWFILE_USERS view to see the users who have been granted the SYSDBA, SYSOPER,
or SYSASM system privileges. The columns displayed by this view are as follows:
Column      Description
USERNAME    The name of the user as recognized by the password file.
SYSDBA      If the value of this column is TRUE, then the user can log on with SYSDBA system privileges.
SYSOPER     If the value of this column is TRUE, then the user can log on with SYSOPER system privileges.
SYSASM      If the value of this column is TRUE, then the user can log on with SYSASM system privileges.
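For example, the view can be queried as follows (output will vary by installation):

```sql
SELECT * FROM v$pwfile_users;
```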
Note:
Expand the number of password file users if the password file becomes full
If you receive an error when you try to grant SYSDBA or SYSOPER system privileges to a user because
the password file is full, then you must create a larger password file and regrant the privileges to the users.
To do so, follow the instructions for creating a new password file using the ORAPWD utility in "Creating a
Password File with ORAPWD", and ensure that the ENTRIES parameter is set to a number larger
than you think you will ever need.
If you determine that you no longer require a password file to authenticate users, you can delete the
password file and then optionally reset the REMOTE_LOGIN_PASSWORDFILE initialization parameter
to NONE. After you remove this file, only those users who can be authenticated by the operating system
can perform SYSDBA or SYSOPER database administration operations.
Preinstallation Requirements
Log in as root.
Memory
RAM: At least 4 GB
The following table describes the relationship between installed RAM and the configured swap space
requirement:
To determine the size of the configured swap space, check the SwapTotal value in /proc/meminfo (or run swapon -s). If additional swap space is needed, create a file to hold it:
# mkdir /data/
# dd if=/dev/zero of=/data/swapfile.1 bs=1024 count=65536
65536+0 records in
65536+0 records out
67108864 bytes (67 MB) copied, 1.3094 seconds, 51.3 MB/s
Format the file as a swap file:
# /sbin/mkswap /data/swapfile.1
Setting up swapspace version 1, size = 67104 kB
# /sbin/swapon /data/swapfile.1
Add an entry for it to /etc/fstab so that it is activated at boot.
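A sketch of the corresponding /etc/fstab entry (the mount options may vary by distribution):

```
/data/swapfile.1   none   swap   sw   0   0
```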
On SUSE Linux, you can alternatively use the yast or yast2 tools to configure swap.
# getconf PAGESIZE
4096
# getconf PAGE_SIZE
4096
Troubleshooting
# /sbin/swapon /data/swapfile.1
swapon: /data/swapfile.1: Invalid argument
If you see this error, verify that the file (/data/swapfile.1) was formatted as a Linux swap file with the
/sbin/mkswap command.
System Architecture
Verify that the processor architecture matches the Oracle software release that you want to install.
Disk Space
The following tables describe the disk space requirements on Linux x86:
Check that you have the minimum required operating system and kernel versions.
Package - RPM
Oracle recommends that you install your Linux operating system with the default software packages
(RPMs), unless you specifically intend to perform a minimal installation.
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
numactl-devel-0.9.8.x86_64
sysstat-7.0.2
The numactl package installs its library under /usr/lib on Linux x86 and under /usr/lib64 on Linux x86-64.
To determine whether the required packages are installed, enter commands similar to the following:
If a package is not installed, then install it from the Linux distribution media or download the required
package version from the Linux vendor's Web site.
Database Connectivity
If you intend to use ODBC, then install the most recent ODBC Driver Manager for Linux. Download and
install the Driver Manager from the following URL:
http://www.unixodbc.org
To use ODBC, you must also install the following additional ODBC RPMs, depending on your operating
system.
You can use JDK 6 Update 10 (Java SE Development Kit 1.6 u10) or JDK 5 (1.5.0_16) with the JNDI
extension with the Oracle Java Database Connectivity and Oracle Call Interface drivers. However, these
are not mandatory for the database installation. Please note that IBM JDK 1.5 is installed with this
release.
Parameters
During installation, for certain prerequisite check failures, you can click Fix & Check Again to generate a
fixup script (runfixup.sh). You can run this script as a root user to complete the required preinstallation
steps.
Checks and sets kernel parameters to values required for successful installation, including:
Shared memory parameters
Semaphore parameters
Open file descriptor and UDP send/receive parameters
Sets permissions on the Oracle Inventory directory.
Reconfigures primary and secondary group memberships for the installation owner, if necessary,
for the Oracle Inventory directory, and for the operating system privileges groups.
Sets up virtual IP and private IP addresses in /etc/hosts.
Sets shell limits to required values, if necessary.
Installs the Cluster Verification Utility packages (cvuqdisk rpm).
Using fixup scripts will not ensure that all the prerequisites for installing Oracle Database are satisfied.
You must still verify that all the preinstallation requirements are met to ensure a successful installation.
Network Setup
DNS
Verify the contents of the DNS configuration file /etc/resolv.conf. The nameserver entry must either be absent or point to a
valid DNS server, and you can optionally add the two time-out parameters.
Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as
follows:
SELINUX=disabled
Log in as root.
Installation Groups
Create OS groups.
This section provides instructions on how to create the operating system user and groups that will be used to
install and manage the Oracle Database 11g Release 2 software. In addition to the Oracle software owner,
another OS user (jhunter) will be configured with the appropriate DBA related OS groups to manage the
Oracle database.
OS Group Name   OS Group ID   OS Users Assigned to this Group   Oracle Privilege   Oracle Group Name   Description
OS Group Descriptions
This group must be created the first time you install Oracle software on the system.
Members of the OINSTALL group are considered the "owners" of the Oracle software and
are granted privileges to write to the Oracle central inventory (oraInventory). When you
install Oracle software on a Linux system for the first time, OUI creates
the /etc/oraInst.loc file. This file identifies the name of the Oracle Inventory group (by
default, oinstall), and the path of the Oracle Central Inventory directory.
Ensure that this group is available as a primary group for all planned Oracle software
installation owners. For the purpose of this guide, the oracle installation owner will be
configured with oinstall as its primary group.
Members of the OSDBA group can use SQL to connect to an Oracle instance
as SYSDBA using operating system authentication. Members of this group can perform
critical database administration tasks, such as creating the database and starting up
and shutting down the instance. The default name for this group is dba. The SYSDBA system privilege allows
access to a database instance even when the database is not open. Control of this privilege
is totally outside of the database itself.
The oracle installation owner should be a member of the OSDBA group (configured as a
secondary group) along with any other DBA user accounts (e.g., jhunter) needing access
to an Oracle instance as SYSDBA using operating system authentication.
The SYSDBA system privilege should not be confused with the database role DBA.
The DBA role does not include the SYSDBA or SYSOPER system privileges.
Members of the OSOPER group can use SQL to connect to an Oracle instance
as SYSOPER using operating system authentication. Members of this optional group have a
limited set of database administrative privileges such as managing and running backups.
The default name for this group is oper. The SYSOPER system privilege allows access to a
database instance even when the database is not open. Control of this privilege is totally
outside of the database itself. To use this group, choose the advanced installation type to
install the Oracle database software.
The database being created in this guide will not make use of Automatic Storage
Management (ASM) and therefore will not create or assign the ASM related OS groups like
asmadmin, asmdba, and asmoper.
Create the recommended OS groups and user for the Oracle Database software owner.
Optionally, configure any other OS users with the appropriate DBA related OS groups to manage the Oracle
database. Remember to use the append option (-a) to the usermod command so that the user will not be
removed from groups not listed.
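A sketch of the commands, run as root; the group and user IDs shown here match the id output presented later in this guide, and jhunter is the optional extra DBA account:

```
# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 503 oper
# useradd -m -u 501 -g oinstall -G dba,oper oracle
# passwd oracle
# usermod -aG dba,oper jhunter
```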
Log in to the machine as the oracle user account and create the following login script (.bash_profile).
The values below (such as ORACLE_SID, ORACLE_UNQNAME, and ORACLE_HOSTNAME) should be customized for your environment.
# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User: oracle
# Application: Oracle Database Software Owner
# Version: Oracle 11g Release 2
# ---------------------------------------------------
# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID) for
# the Oracle instance running on this node. When
# using RAC, each node must have a unique ORACLE_SID.
# (i.e. racdb1, racdb2,...)
# ---------------------------------------------------
ORACLE_SID=testdb1; export ORACLE_SID
# ---------------------------------------------------
# ORACLE_UNQNAME and ORACLE_HOSTNAME
# ---------------------------------------------------
# In previous releases of Oracle Database, you were
# required to set environment variables for
# ORACLE_HOME and ORACLE_SID to start, stop, and
# check the status of Enterprise Manager. With
# Oracle Database 11g Release 2 (11.2) and later, you
# need to set the environment variables ORACLE_HOME,
# ORACLE_UNQNAME, and ORACLE_HOSTNAME to use
# Enterprise Manager. Set ORACLE_UNQNAME equal to
# the database unique name and ORACLE_HOSTNAME to
# the hostname of the machine.
# ---------------------------------------------------
ORACLE_UNQNAME=testdb1; export ORACLE_UNQNAME
ORACLE_HOSTNAME=testnode1.idevelopment.info; export ORACLE_HOSTNAME
# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME
# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# database software installations.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Database software.
# ---------------------------------------------------
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/sql:$ORACLE_HOME/rdbms/admin; export
ORACLE_PATH
# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
# SQLPATH=/u01/app/oracle/dba_scripts/sql; export SQLPATH
# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM
# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
# NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:$JAVA_HOME/db/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/bin
export PATH
# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# The class path is the path that the Java runtime
# environment searches for classes and other resource
# files. The class search path (more commonly known
# by the shorter name, "class path") can be set using
# either the -classpath option when calling a JDK
# tool (the preferred method) or by setting the
# CLASSPATH environment variable. The -classpath
# option is preferred because you can set it
# individually for each application without affecting
# other applications and without other applications
# modifying its value.
# ---------------------------------------------------
CLASSPATH=.:$ORACLE_HOME/jdbc/lib/ojdbc6.jar
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG
# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022
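As a quick sanity check, the effect of umask 022 on newly created files can be demonstrated:

```shell
# With umask 022, new files are created with 666 & ~022 = 644 permissions
umask 022
tmpdir=$(mktemp -d)
touch "$tmpdir/demo.txt"
stat -c '%a' "$tmpdir/demo.txt"   # prints 644
```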
The next step is to configure an Oracle base path compliant with an Optimal Flexible Architecture
(OFA) structure and correct permissions. The Oracle base path will be used to store the Oracle Database
software.
Create the directory that will be used to store the Oracle data files.
Create the directory that will be used to store the Oracle recovery files.
At the end of this section, you should have the following user, groups, and directory path configuration.
A separate OSDBA group (dba), whose members include oracle, and who are granted the
SYSDBA privilege to administer the Oracle Database.
A separate OSOPER group (oper), whose members include oracle, and who are granted limited
Oracle database administrator privileges.
An Oracle Database software owner (oracle), with the oraInventory group as its primary group,
and with the OSDBA (dba) and OSOPER (oper) group as its secondary group.
OFA-compliant mount points /u01, /u02, and /u03 that will be used for the Oracle software
installation, data files, and recovery files.
To improve the performance of the software on Linux systems, you must increase the following resource
limits for the Oracle software owner (oracle).
Resource                                         Item in limits.conf   Soft Limit          Hard Limit
Open file descriptors                            nofile                at least 1024       at least 65536
Number of processes available to a single user   nproc                 at least 2047       at least 16384
Size of the stack segment of the process         stack                 at least 10240 KB   at least 10240 KB, and at most 32768 KB
2. Check the soft and hard limits for the file descriptor setting. Ensure that the result is in the
recommended range. For example:
3. Check the soft and hard limits for the number of processes available to a user. Ensure that the result
is in the recommended range. For example:
4. Check the soft limit for the stack setting. Ensure that the result is in the recommended range. For
example:
5. If necessary, update the resource limits in the /etc/security/limits.conf configuration file for
the Oracle installation owner by adding the following lines.
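A sketch of the lines, assuming the installation owner is oracle and using the minimum values from the table above:

```
oracle   soft   nofile   1024
oracle   hard   nofile   65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
```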
Add the following line to the /etc/pam.d/login file, if it does not already exist.
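The line in question is the pam_limits entry, which makes the limits.conf settings take effect at login:

```
session    required     pam_limits.so
```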
Depending on your shell environment, make the following changes to the default shell startup file in order to
change ulimit settings for the Oracle installation owner.
For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file.
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file.
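A sketch of the /etc/profile addition as commonly documented for Oracle installations (the /etc/csh.login form is analogous, using the limit command):

```
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
```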
This section focuses on preparing the Linux operating system for the Oracle Database 11g Release 2
installation. This includes verifying enough memory and swap space, setting shared memory and
semaphores, setting the maximum number of file handles, setting the IP local port range, and how to
activate all kernel parameters for the system.
The kernel parameters discussed in this section will need to persist through machine reboots. Although there
are several methods used to set these parameters, I will be making all changes permanent through reboots
by placing all values in the /etc/sysctl.conf file.
Kernel Parameters
The kernel parameters presented in this section are only recommended values as documented by Oracle.
For production database systems, Oracle recommends that you tune these values to optimize the
performance of the system.
Verify that the kernel parameters described in this section are set to values greater than or equal to the
recommended values. Also note that when setting the four semaphore values that all four values need to be
entered on one line.
Oracle Database 11g Release 2 for Linux requires the kernel parameter settings shown below. The values
given are minimums, so if your system uses a larger value, do not change it.
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.file-max = 6815744
fs.aio-max-nr = 1048576
RHEL/OL/CentOS 6 already comes configured with default values defined for the following
kernel parameters.
kernel.shmmax
kernel.shmall
The default values for these two kernel parameters should be overwritten with the
recommended values defined in this guide.
# +---------------------------------------------------------+
# | KERNEL PARAMETERS FOR ORACLE DATABASE 11g R2 ON LINUX |
# +---------------------------------------------------------+
# +---------------------------------------------------------+
# | SHARED MEMORY |
# +---------------------------------------------------------+
# +---------------------------------------------------------+
# | SEMAPHORES |
# +---------------------------------------------------------+
# +---------------------------------------------------------+
# | NETWORKING |
# ----------------------------------------------------------+
# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500
# +---------------------------------------------------------+
# | FILE HANDLES |
# ----------------------------------------------------------+
Placing the kernel parameters in the /etc/sysctl.conf startup file persists the required kernel
parameters through reboots. Linux allows modification of these kernel parameters to the current system
while it is up and running, so there's no need to reboot the system after making kernel parameter changes.
To activate the new kernel parameter values for the currently running system, run the following as root.
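The command in question is:

```
# /sbin/sysctl -p
```

This re-reads /etc/sysctl.conf and applies the values to the running kernel.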
Memory
The minimum required RAM for Oracle Database 11g Release 2 running on the Linux platform is 1 GB
(although 2 GB or more of RAM is highly recommended).
Use the following command to check the amount of installed RAM on the system.
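A likely form of the check, reading MemTotal from /proc/meminfo:

```shell
# Display total installed RAM in kilobytes
grep MemTotal /proc/meminfo
```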
If the size of the installed RAM is less than the required size, then you must install more memory before
continuing.
Swap Space
The following table describes the relationship between installed RAM and the configured swap space
recommendation.
Installed RAM          Recommended Swap Space
More than 16 GB        16 GB
Use the following command to determine the size of the configured swap space.
On Linux, the HugePages feature allocates non-swappable memory for large page tables
using memory-mapped files. If you enable HugePages, then you should deduct the memory
allocated to HugePages from the available RAM before calculating swap space.
If necessary, additional swap space can be configured by creating a temporary swap file and adding it to the
current swap. This way you do not have to use a raw device or, even more drastic, rebuild your system.
1. As root, make a file that will act as additional swap space, let's say about 500 MB.
2. Format the file as swap and add it to the available swap space.
To determine the available RAM and swap space, enter the following command.
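One way to do this without any extra tools is to read both values from /proc/meminfo (the guide's original, elided command was likely free):

```shell
# Show installed RAM and configured swap, in kilobytes
grep -E 'MemTotal|SwapTotal' /proc/meminfo
```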
Starting with Oracle Database 11g, the Automatic Memory Management feature requires more shared
memory (/dev/shm) and file descriptors. The shared memory should be sized to be at least the greater of
MEMORY_MAX_TARGET and MEMORY_TARGET for each Oracle instance on the computer.
To determine the amount of shared memory available, enter the following command.
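A likely form of the check:

```shell
# Report the size and current usage of the shared memory filesystem
df -h /dev/shm
```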
Network Configuration
During the Linux OS install, we already configured the IP address and host name for the database node.
This section contains additional network configuration steps that will prepare the machine to run the Oracle
database.
Note that the Oracle database server should have a static IP address configured for the public network
(eth0 for this guide). Do not use DHCP naming for the public IP address; you need a static IP address.
Ensure that the node name (testnode1) is not included in the loopback address entry in the /etc/hosts file. If
the machine name is listed in the loopback address entry, it must be removed.
The /etc/hosts file must contain a fully qualified name for the server.
For example.
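A sketch of a suitable /etc/hosts file (the IP address here is a placeholder; testnode1 is this guide's node name):

```
127.0.0.1        localhost.localdomain         localhost
192.168.1.100    testnode1.idevelopment.info   testnode1
```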
During the Linux OS install, I indicated to disable the firewall. By default, the option to configure a firewall is
selected by the installer. This has burned me several times, so I like to double-check that the firewall
option is not configured and to ensure UDP ICMP filtering is turned off.
1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my
example below) you do not have to proceed with the following steps.
2. If the firewall option is operating, you will need to first manually disable UDP ICMP rejections.
3. Then, turn UDP ICMP rejections off for all subsequent server reboots so that they remain
disabled.
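On RHEL/CentOS 6-era systems, the checks and changes sketched above are typically done with the iptables service scripts (run as root):

```
# service iptables status
# service iptables stop
# chkconfig iptables off
```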
Oracle Database 11g Release 2 (11.2.0.1) - OTN / eDelivery / MOS:
    linux.x64_11gR2_database_1of2.zip
    linux.x64_11gR2_database_2of2.zip
Oracle Database 11g Release 2 (11.2.0.2) - MOS patch 10098816:
    p10098816_112020_Linux-x86-64_1of7.zip
    p10098816_112020_Linux-x86-64_2of7.zip
Oracle Database 11g Release 2 (11.2.0.3) - MOS patch 10404530:
    p10404530_112030_Linux-x86-64_1of7.zip
    p10404530_112030_Linux-x86-64_2of7.zip
You should now have a single directory called database and the optional examples directory containing
the Oracle installation files.
For the purpose of this example, we will forgo the "Create Database" option when installing the Oracle
Database software. The database will be created later in this guide using the Database Configuration
Assistant (DBCA) after all installs have been completed.
Log into the node as the Oracle software owner (oracle). If you are using X emulation then set
the DISPLAY environmental variable accordingly.
Start the Oracle Universal Installer (OUI) by issuing the following command in the database install
directory.
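The command is, run from the unzipped database directory:

```
$ ./runInstaller
```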
[oracle@testnode1 ~]$ id
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper)
At any time during installation, if you have a question about what you are being asked to do, click
the Help button on the OUI page.
The prerequisite checks will fail for the following version-dependent reasons. As mentioned at
the beginning of this guide, RHEL6 and OL6 are not certified or supported for use with any Oracle Database
version at the time of this writing.
11.2.0.1: The installer shows multiple "missing package" failures because it does not recognize
several of the newer version packages that were installed. These "missing package" failures can be
ignored as the packages are present. The failure for the "pdksh" package can be ignored because it
is no longer part of RHEL6 and we installed the "ksh" package in its place.
11.2.0.2: The installer should only show a single "missing package" failure for the "pdksh" package.
The failure for the "pdksh" package can be ignored because it is no longer part of RHEL6 and we
installed the "ksh" package in its place.
Configure Security Updates: To stay informed with the latest security issues, enter your e-mail address, preferably your
My Oracle Support e-mail address or user name, in the Email field. You can select the "I wish
to receive security updates via My Oracle Support" check box to receive security updates, and
enter your My Oracle Support password in the "My Oracle Support Password" field.
For the purpose of this example, un-check the security updates check-box and click
the [Next] button to continue. Acknowledge the warning dialog indicating you have not provided
an email address by clicking the [Yes] button.
Specify Installation Location: Specify the Oracle base and Software location (Oracle home) as follows.
    Oracle Base: /u01/app/oracle
    Software Location: /u01/app/oracle/product/11.2.0/dbhome_1
Create Inventory: Since this is the first install on the host, you will need to create the Oracle
Inventory. Use the default values provided by the OUI.
    Inventory Directory: /u01/app/oraInventory
    oraInventory Group Name: oinstall
Prerequisite Checks: The installer will run through a series of checks to determine if the machine
and OS configuration meet the minimum requirements for installing the
Oracle Database software.
Starting with 11g Release 2, if any checks fail, the installer (OUI) will create
shell script programs called fixup scripts to resolve many incomplete system
configuration requirements. If OUI detects an incomplete task that is
marked "fixable", then you can easily fix the issue by generating the fixup
script by clicking the [Fix & Check Again] button.
The fixup script is generated during installation. You will be prompted to run
the script as root in a separate terminal session. When you run the script,
it raises kernel values to required minimums, if necessary, and completes
other operating system configuration tasks.
If all prerequisite checks pass, the OUI continues to the Summary screen. If
the OUI detected any failed checks, take the appropriate action to resolve them,
or click the "Ignore All" check box to acknowledge it is safe to continue with
the installation without resolving the issue (the "pdksh-5.2.14" missing
package, for example).
Install Product: The installer performs the Oracle Database software installation.
Finish: At the end of the installation, click the [Close] button to exit the OUI.
As already mentioned, Oracle writes to its online redo log files in a circular manner. When the current online
redo log fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the
database into "Archive Log Mode", which makes a copy of each online redo log after it fills (and before it gets
reused). This process is known as archiving.
The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log
mode within the Recovery Configuration section; however, most DBAs opt to bypass this option during initial
database creation. In cases like this, where the database is in NOARCHIVELOG mode, it is a simple task to put
the database into archive log mode. Note, however, that this requires a short database outage.
1. Log in to the database as a user with SYSDBA privileges and shut down the instance.
Database altered.
4. Open the database.
Database altered.
5. Verify Archive Log Mode is enabled.
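The intermediate commands are abbreviated in the steps above; a typical full sequence for enabling archive log mode (a sketch, run as a SYSDBA user during a planned outage) looks like this:

```sql
-- 1. Shut down the instance cleanly
SHUTDOWN IMMEDIATE;
-- 2. Mount the database without opening it
STARTUP MOUNT;
-- 3. Enable archive log mode
ALTER DATABASE ARCHIVELOG;
-- 4. Open the database
ALTER DATABASE OPEN;
-- 5. Verify: "Database log mode" should report "Archive Mode"
ARCHIVE LOG LIST;
```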
In this section you will download and install a collection of Oracle DBA scripts that can be used to manage
many aspects of your database including space management, performance, backups, security, and session
management. As the Oracle software owner (oracle), download
the dba_scripts_archive_Oracle.zip archive to the $ORACLE_BASE directory. For the purpose of
this example, the dba_scripts_archive_Oracle.zip archive will be copied to /u01/app/oracle.
Next, unzip the archive file to the $ORACLE_BASE directory.
For example:
ORACLE_PATH=$ORACLE_BASE/dba_scripts/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH
SQL> @dba_tablespaces
Status Tablespace Name TS Type Ext. Mgt. Seg. Mgt. Tablespace Size Used (in bytes) Pct. Used
-------- ------------------ ------------ ---------- --------- ------------------ ------------------ -------
ONLINE EXAMPLE PERMANENT LOCAL AUTO 157,286,400 85,131,264 54
ONLINE SYSAUX PERMANENT LOCAL AUTO 629,145,600 487,718,912 78
ONLINE SYSTEM PERMANENT LOCAL MANUAL 734,003,200 705,953,792 96
ONLINE TEMP TEMPORARY LOCAL MANUAL 67,108,864 66,060,288 98
ONLINE UNDOTBS1 UNDO LOCAL MANUAL 560,988,160 419,102,720 75
ONLINE USERS PERMANENT LOCAL AUTO 5,242,880 1,048,576 20
------------------ ------------------ ---------
avg                                          70
sum  2,153,775,104      1,765,015,552
6 rows selected.
To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script.
SQL> @help.sql
========================================
Automatic Shared Memory Management
========================================
asmm_components.sql
========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql
perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql
========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql
...
testdb1:/u01/app/oracle/product/11.2.0/dbhome_1:Y
...
Next, create a text file named /etc/init.d/dbora as the root user, containing the following.
#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
#
# Set ORA_HOME to be equivalent to the $ORACLE_HOME
# from which you wish to execute dbstart and dbshut;
#
# Set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME.

ORA_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ORA_OWNER=oracle

if [ ! -f $ORA_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit
fi

case "$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME"
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME"
        ;;
esac
Note that the /etc/init.d/dbora script listed above may look a little different from a
similar one used for Oracle9i — most notably the omission of the commands to start/stop the
Oracle TNS listener process. As of Oracle Database 10g Release 2, the dbstart script
includes the commands to automatically start/stop the listener.
Use the chmod command to set the privileges to 750.
Oracle database:
An Oracle Database is a relational database management system (RDBMS) used to store and retrieve
related information.
An Oracle Database server/instance consists of shared memory structures, background processes, and
storage, which together handle the functional requirements for managing concurrent, shared data access
by users. The Oracle Database product has evolved through its 8i, 9i, 10g, and 11g versions. The Oracle
database server is part of a multitier architecture that includes the Client Machine/Webserver, Middleware
Application, and Database Server.
Client Machine/Webserver:
The client is the end user who accesses the database to retrieve information.
Clients can access the database through SQL*Plus, SQL Developer, a web URL, or third-party tools such
as TOAD or PL/SQL Developer. A client can be remote or local to the database server, which means that
the Webserver and Middleware layers are optional and the database can be accessed directly from its
local server. In complex and critical application setups, a multitier approach is followed to make
administration, security enforcement, patching/upgrades, backup, restoration, monitoring, license
management, and hardware management of every component more efficient.
Middleware Application:
This layer stands between the client and the database and consists of data retrieval policies, functions,
application/Java/PL-SQL code, the user interface, and so on. Oracle CRM, Fusion Middleware, and
application products from other vendors are found in this layer.
Database Server:
This is the Oracle Database itself, located on a server running any supported platform, such as Windows,
Solaris, AIX, HP-UX, or Linux.
The following section simplifies the correlation between the basic database components and their usage,
with reference to the Oracle basic architecture above. (From 8i to 11g various new components have been
added by Oracle; only the most common ones are covered here, to keep things easy to understand rather
than add confusion.)
User and server processes: The processes shown in the figure are
called user and server processes. These processes are used to manage the execution of SQL
statements.
· A Shared Server Process can share memory and variable processing for multiple user processes.
· A Dedicated Server Process manages memory and variables for a single user process.
This figure from the Oracle Database Administration Guide provides another way of viewing the SGA.
System users can connect to an Oracle database through SQL*Plus or through an application program like
the Internet Developer Suite (the program becomes the system user). This connection enables users to
execute SQL statements.
The act of connecting creates a communication pathway between a user process and an Oracle
Server. As is shown in the figure above, the User Process communicates with the Oracle Server through
a Server Process. The User Process executes on the client computer. The Server Process executes on
the server computer, and actually executes SQL statements submitted by the system user.
The figure shows a one-to-one correspondence between the User and Server Processes. This is called
a Dedicated Server connection. An alternative configuration is to use a Shared Server where more
than one User Process shares a Server Process.
Sessions: When a user connects to an Oracle server, this is termed a session. The User Global
Area is session memory and these memory structures are described later in this document. The session
starts when the Oracle server validates the user for connection. The session ends when the user logs out
(disconnects) or if the connection terminates abnormally (network failure or client computer failure).
A user can typically have more than one concurrent session, e.g., the user may connect using SQLPlus
and also connect using Internet Developer Suite tools at the same time. The limit of concurrent session
connections is controlled by the DBA.
If a system user attempts to connect while the Oracle Server is not running, the system user receives
the Oracle Not Available error message.
As was noted above, an Oracle database consists of physical files. The database itself has:
· Datafiles – these contain the organization's actual data.
· Redo log files – these contain a chronological record of changes made to the database, and enable
recovery when failures occur.
· Control files – these are used to synchronize all database activities and are covered in more detail in
a later module.
Other key files as noted above include:
Parameter file – there are two types of parameter files.
o The init.ora file (also called the PFILE) is a static parameter file. It contains parameters that specify
how the database instance is to start up. For example, some parameters will specify how to allocate
memory to the various parts of the system global area.
o The spfile.ora is a dynamic parameter file. It also stores parameters that specify how to start up a
database; however, its parameters can be modified while the database is running.
Password file – specifies which *special* users are authenticated to startup/shut down an Oracle
Instance.
Archived redo log files – these are copies of the redo log files and are necessary for recovery in an
online, transaction-processing environment in the event of a disk failure.
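As a hedged illustration of working with the two parameter file types (the path and value below are examples, not values from this guide):

```sql
-- Generate an editable text PFILE from the binary SPFILE (requires SYSDBA)
CREATE PFILE='/u01/app/oracle/init_copy.ora' FROM SPFILE;

-- Change a dynamic parameter both in the running instance and in the SPFILE;
-- SCOPE=BOTH is only valid when the instance was started with an SPFILE
ALTER SYSTEM SET shared_pool_size = 128M SCOPE=BOTH;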
If you create a database with Database Configuration Assistant (DBCA) and choose the basic installation
option, then automatic memory management is the default.
The SGA is a read/write memory area that stores information shared by all database processes and by
all users of the database (sometimes it is called the Shared Global Area).
o This information includes both organizational data and control information used by the Oracle Server.
o The SGA is allocated in memory and virtual memory.
o The size of the SGA can be established by a DBA by assigning a value to the parameter
SGA_MAX_SIZE in the parameter file—this is an optional parameter.
The SGA is allocated when an Oracle instance (database) is started up based on values specified in the
initialization parameter file (either PFILE or SPFILE).
The SHOW SGA SQL command will show you the SGA memory allocations.
This is a recent clip of the SGA for the DBORCL database at SIUE.
In order to execute SHOW SGA you must be connected with the special privilege SYSDBA (which is
only available to user accounts that are members of the DBA Linux group).
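For example, a SYSDBA connection followed by the SHOW SGA command (the figures reported vary from instance to instance):

```sql
CONNECT / AS SYSDBA
SHOW SGA
-- Reports: Total System Global Area, Fixed Size, Variable Size,
-- Database Buffers, and Redo Buffers
```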
Oracle 11g uses a Dynamic SGA. Memory configurations for the system global area can be made
without shutting down the database instance. The DBA can resize the Database Buffer Cache and
Shared Pool dynamically.
Several initialization parameters are set that affect the amount of random access memory dedicated to
the SGA of an Oracle Instance. These are:
SGA_MAX_SIZE: This optional parameter is used to set a limit on the amount of virtual
memory allocated to the SGA – a typical setting might be 1 GB; however, if the value for
SGA_MAX_SIZE in the initialization parameter file or server parameter file is less than the sum of the
memory allocated for all components, either explicitly in the parameter file or by default, at the time the
instance is initialized, then the database ignores the setting for SGA_MAX_SIZE. For optimal
performance, the entire SGA should fit in real memory to eliminate paging to/from disk by the operating
system.
DB_CACHE_SIZE: This optional parameter is used to tune the amount of memory allocated to the
Database Buffer Cache in standard database blocks. Block sizes vary among operating systems. The
DBORCL database uses 8 KB blocks. The total blocks in the cache defaults to 48 MB on LINUX/UNIX
and 52 MB on Windows operating systems.
LOG_BUFFER: This optional parameter specifies the number of bytes allocated for the Redo Log
Buffer.
SHARED_POOL_SIZE: This optional parameter specifies the number of bytes of memory allocated
to shared SQL and PL/SQL. The default is 16 MB. If the operating system is based on a 64-bit
configuration, then the default size is 64 MB.
LARGE_POOL_SIZE: This is an optional memory object – the size of the Large Pool defaults to
zero. If the init.ora parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default size
is automatically calculated.
JAVA_POOL_SIZE: This is another optional memory object. The default is 24 MB of memory.
The size of the SGA cannot exceed the parameter SGA_MAX_SIZE minus the combination of the size of
the additional parameters, DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE,
LARGE_POOL_SIZE, and JAVA_POOL_SIZE.
Memory is allocated to the SGA as contiguous virtual memory in units termed granules. Granule size
depends on the estimated total size of the SGA, which as was noted above, depends on the
SGA_MAX_SIZE parameter. Granules are sized as follows:
If the SGA is less than 1 GB in total, each granule is 4 MB.
If the SGA is greater than 1 GB in total, each granule is 16 MB.
Granules are assigned to the Database Buffer Cache, Shared Pool, Java Pool, and other memory
structures, and these memory components can dynamically grow and shrink. Using contiguous memory
improves system performance. The actual number of granules assigned to one of these memory
components can be determined by querying the database view named V$BUFFER_POOL.
Granules are allocated when the Oracle server starts a database instance in order to provide memory
addressing space to meet the SGA_MAX_SIZE parameter. The minimum is 3 granules: one each for the
fixed SGA, Database Buffer Cache, and Shared Pool. In practice, you'll find the SGA is allocated much
more memory than this. The SELECT statement shown below shows a current_size of 1,152 granules.
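A query along these lines, against the standard V$SGA_DYNAMIC_COMPONENTS view, shows the current size and granule size for each dynamically sized SGA component (output varies by instance):

```sql
-- Current size and granule size for each dynamically sized SGA component
SELECT component, current_size, granule_size
FROM   v$sga_dynamic_components;
```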
For additional information on the dynamic SGA sizing, enroll in Oracle's Oracle11g Database Performance
Tuning course.
A PGA is:
a nonshared memory region that contains data and control information exclusively for use by an
Oracle process.
A PGA is created by Oracle Database when an Oracle process is started.
One PGA exists for each Server Process and each Background Process. It stores data and control
information for a single Server Process or a single Background Process.
It is allocated when a process is created and the memory is scavenged by the operating system when
the process terminates. This is NOT a shared part of memory – one PGA to each process only.
The collection of individual PGAs is termed the total instance PGA, or simply the instance PGA.
Database initialization parameters set the size of the instance PGA, not individual PGAs.
The Program Global Area is also termed the Process Global Area (PGA) and is a part of allocated
memory that is outside of the SGA.
The content of the PGA varies, but as shown in the figure above, generally includes the following:
Private SQL Area: Stores information for a parsed SQL statement – stores bind variable values and
runtime memory allocations. A user session issuing SQL statements has a Private SQL Area that may be
associated with a Shared SQL Area if the same SQL statement is being executed by more than one
system user. This often happens in OLTP environments where many users are executing and using the
same application program.
o Dedicated Server environment – the Private SQL Area is located in the Program Global Area.
o Shared Server environment – the Private SQL Area is located in the System Global Area.
Session Memory: Memory that holds session variables and other session information.
SQL Work Areas: Memory allocated for sort, hash-join, bitmap merge, and bitmap create types of
operations.
o Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by setting the
WORKAREA_SIZE_POLICY = AUTO parameter (this is the default!) and PGA_AGGREGATE_TARGET
= n (where n is some amount of memory established by the DBA). However, the DBA can let the Oracle
DBMS determine the appropriate amount of memory.
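A minimal sketch of these settings (the target value below is illustrative, not a recommendation):

```sql
-- AUTO is already the default work-area policy; shown here for completeness
ALTER SYSTEM SET workarea_size_policy = AUTO;
-- n: the total PGA memory target established by the DBA
ALTER SYSTEM SET pga_aggregate_target = 512M;
```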
A session that loads a PL/SQL package into memory has the package state stored in the
UGA. The package state is the set of values stored in all the package variables at a specific time. The
state changes as program code modifies the variables. By default, package variables are unique to and
persist for the life of the session.
The OLAP page pool is also stored in the UGA. This pool manages OLAP data pages, which are equivalent
to data blocks. The page pool is allocated at the start of an OLAP session and released at the end of the
session. An OLAP session opens automatically whenever a user queries a dimensional object such as
a cube.
Note: Oracle OLAP is a multidimensional analytic engine embedded in Oracle Database 11g. Oracle
OLAP cubes deliver sophisticated calculations using simple SQL queries - producing results with speed of
thought response times.
The UGA must be available to a database session for the life of the session. For this reason, the UGA
cannot be stored in the PGA when using a shared server connection because the PGA is specific to a
single process. Therefore, the UGA is stored in the SGA when using shared server connections, enabling
any shared server process access to it. When using a dedicated server connection, the UGA is stored in
the PGA.
Automatic Shared Memory Management (10g)
Prior to Oracle 10G, a DBA had to manually specify SGA Component sizes through the initialization
parameters, such as SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, and LARGE_POOL_SIZE
parameters.
Automatic Shared Memory Management enables a DBA to specify the total SGA memory available
through the SGA_TARGET initialization parameter. The Oracle Database automatically distributes this
memory among various subcomponents to ensure most effective memory utilization.
With automatic SGA memory management, the different SGA components are flexibly sized to adapt to
the SGA available. Setting a single parameter simplifies the administration task – the DBA only specifies
the amount of SGA memory available to an instance – the DBA can forget about the sizes of individual
components. No out of memory errors are generated unless the system has actually run out of
memory. No manual tuning effort is needed.
The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for the
following components:
Fixed SGA and other internal allocations needed by the Oracle Database instance
The log buffer
The shared pool
The Java pool
The buffer cache
The keep and recycle buffer caches (if specified)
Nonstandard block size buffer caches (if specified)
The Streams Pool
If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the SGA_MAX_SIZE
value is bumped up to accommodate SGA_TARGET.
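Enabling Automatic Shared Memory Management therefore reduces to setting a single parameter; for example (the value is illustrative):

```sql
-- With SGA_TARGET set to a nonzero value, the auto-tuned components are
-- sized by Oracle; explicitly sized components act as minimums.
ALTER SYSTEM SET sga_target = 1G SCOPE=BOTH;
```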
When you set a value for SGA_TARGET, Oracle Database 11g automatically sizes the most commonly
configured components, including:
The shared pool (for SQL and PL/SQL execution)
The Java pool (for Java execution state)
The large pool (for large allocations such as RMAN backup buffers)
The buffer cache
There are a few SGA components whose sizes are not automatically adjusted. The DBA must specify the
sizes of these components explicitly, if they are needed by an application. Such components are:
Keep/Recycle buffer caches (controlled by DB_KEEP_CACHE_SIZE and
DB_RECYCLE_CACHE_SIZE)
Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n =
{2, 4, 8, 16, 32})
Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)
The granule size that is currently being used for the SGA for each component can be viewed in the
view V$SGAINFO. The size of each component and the time and type of the last resize operation
performed on each component can be viewed in the view V$SGA_DYNAMIC_COMPONENTS.
Shared Pool
The Shared Pool is a memory structure that is shared by all system users.
It caches various types of program data. For example, the shared pool stores parsed SQL, PL/SQL
code, system parameters, and data dictionary information.
The shared pool is involved in almost every operation that occurs in the database. For example, if a
user executes a SQL statement, then Oracle Database accesses the shared pool.
It consists of both fixed and variable structures.
The variable component grows and shrinks depending on the demands placed on memory size by
system users and application programs.
Memory can be allocated to the Shared Pool by the parameter SHARED_POOL_SIZE in the parameter
file. The default value of this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms.
Increasing the value of this parameter increases the amount of memory reserved for the shared pool.
You can alter the size of the shared pool dynamically with the ALTER SYSTEM SET command. You
must keep in mind that the total memory allocated to the SGA is set by the SGA_TARGET parameter
(and may also be limited by SGA_MAX_SIZE if it is set), and since the Shared Pool is part of the
SGA, you cannot exceed the maximum size of the SGA. It is recommended to let Oracle optimize the
Shared Pool size.
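An illustrative resize command (the value shown is an example only):

```sql
-- Dynamically grow the shared pool; the result must still fit within the SGA
ALTER SYSTEM SET shared_pool_size = 128M;
```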
The Shared Pool stores the most recently executed SQL statements and used data definitions. This is
because some system users and application programs will tend to execute the same SQL statements
often. Saving this information in memory can improve system performance.
Library Cache
Memory is allocated to the Library Cache whenever an SQL statement is parsed or a program unit is
called. This enables storage of the most recently used SQL and PL/SQL statements. If the Library Cache
is too small, the Library Cache must purge statement definitions in order to have space to load new SQL
and PL/SQL statements. Actual management of this memory structure is through a Least-Recently-
Used (LRU) algorithm. This means that the SQL and PL/SQL statements that are oldest and least
recently used are purged when more storage space is needed.
The Data Dictionary Cache is a memory structure that caches data dictionary information that has been
recently used.
This cache is necessary because the data dictionary is accessed so often.
Information accessed includes user account information, datafile names, table descriptions, user
privileges, and other information.
The database server manages the size of the Data Dictionary Cache internally and the size depends on
the size of the Shared Pool in which the Data Dictionary Cache resides. If the size is too small, then the
data dictionary tables that reside on disk must be queried often for information and this will slow down
performance.
The Server Result Cache holds result sets and not data blocks. The server result cache contains the SQL query result
cache and PL/SQL function result cache, which share the same infrastructure.
Buffer Caches
A number of buffer caches are maintained in memory in order to improve system response time.
Cache hit/miss: If a block that an Oracle process requests is found in the database buffer cache, this
is known as a cache hit; otherwise the block must be fetched from the data file into the buffer cache
(a physical read), which is considered a cache miss.
The database buffer cache also holds the static KEEP (DB_KEEP_CACHE_SIZE) and RECYCLE
(DB_RECYCLE_CACHE_SIZE) buffer pools:
· Data blocks of the segments allocated to the KEEP buffer pool are retained in memory.
· Data blocks of the segments allocated to the RECYCLE buffer pool are wiped out of memory as soon
as they are no longer needed, making room for other RECYCLE segment blocks.
· The DEFAULT buffer pool holds segment blocks that are not assigned to either of the above buffer
pools. By default, segments are allocated to the DEFAULT buffer pool.
Oracle also supports non-default block sizes (2K, 4K, 8K, 16K, and 32K) in the database buffer cache
via the parameters DB_2K_CACHE_SIZE, DB_4K_CACHE_SIZE, DB_8K_CACHE_SIZE,
DB_16K_CACHE_SIZE, and DB_32K_CACHE_SIZE.
The Database Buffer Cache is a fairly large memory object that stores the actual data blocks that are
retrieved from datafiles by system queries and other data manipulation language commands.
When Database Smart Flash Cache (flash cache) is enabled, part of the buffer cache can reside in
the flash cache.
This buffer cache extension is stored on a flash disk device, which is a solid state storage device
that uses flash memory.
The database can improve performance by caching buffers in flash memory instead of reading from
magnetic disk.
Database Smart Flash Cache is available only in Solaris and Oracle Enterprise Linux.
Database blocks are kept in the Database Buffer Cache according to a Least Recently Used (LRU)
algorithm; blocks that have not been used recently are aged out of memory to provide space for
the insertion of newly needed database blocks.
The write list (also called a write queue) holds dirty buffers – these are buffers holding data that
has been modified, but whose blocks have not yet been written back to disk.
The LRU list holds free clean buffers, pinned buffers, and dirty buffers that have not yet
been moved to the write list. Free clean buffers do not contain any useful data and are available for
use. Pinned buffers are currently being accessed.
When an Oracle process accesses a buffer, the process moves the buffer to the most recently used
(MRU) end of the LRU list – this causes dirty buffers to age toward the LRU end of the LRU list.
When an Oracle user process needs a data row, it searches for the data in the database buffer cache
because memory can be searched more quickly than hard disk can be accessed. If the data row is
already in the cache (a cache hit), the process reads the data from memory; otherwise a cache miss
occurs and data must be read from hard disk into the database buffer cache.
Before reading a data block into the cache, the process must first find a free buffer. The process searches
the LRU list, starting at the LRU end of the list. The search continues until a free buffer is found or until
the search reaches the threshold limit of buffers.
Each time a user process finds a dirty buffer as it searches the LRU, that buffer is moved to the write list
and the search for a free buffer continues.
When a user process finds a free buffer, it reads the data block from disk into the buffer and moves the
buffer to the MRU end of the LRU list.
If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process
stops searching the LRU list and signals the DBWn background process to write some of the dirty buffers
to disk. This frees up some buffers.
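One rough way to observe hit/miss behavior is to query the cumulative instance statistics (statistic names as exposed by the standard V$SYSSTAT view):

```sql
-- 'db block gets' + 'consistent gets' = logical reads from the cache;
-- 'physical reads' counts blocks read from disk (cache misses)
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('db block gets', 'consistent gets', 'physical reads');
```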
The block size for a database is set when a database is created and is determined by the init.ora
parameter file parameter named DB_BLOCK_SIZE.
Typical block sizes are 2KB, 4KB, 8KB, 16KB, and 32KB.
The size of blocks in the Database Buffer Cache matches the block size for the database.
The DBORCL database uses an 8KB block size.
This figure shows that the use of non-standard block sizes results in multiple database buffer cache
memory allocations.
Because tablespaces that store oracle tables can use different (non-standard) block sizes, there can be
more than one Database Buffer Cache allocated to match block sizes in the cache with the block sizes in
the non-standard tablespaces. The size of the Database Buffer Caches can be controlled by the
parameters DB_CACHE_SIZE and DB_nK_CACHE_SIZE to dynamically change the memory allocated
to the caches without restarting the Oracle instance.
You can dynamically change the size of the Database Buffer Cache with the ALTER SYSTEM command.
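For example (the value is illustrative only):

```sql
-- Dynamically resize the default buffer cache without restarting the instance
ALTER SYSTEM SET db_cache_size = 400M;
```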
You can have the Oracle Server gather statistics about the Database Buffer Cache to help you size it to
achieve an optimal workload for the memory allocation. This information is displayed from the
V$DB_CACHE_ADVICE view. In order for statistics to be gathered, you can dynamically alter the
system by using the ALTER SYSTEM SET DB_CACHE_ADVICE = {OFF | READY | ON}
command. However, gathering statistics on system performance always incurs some overhead
that will slow down system performance.
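A hedged example of enabling the advisory and reading it back (column names per the standard V$DB_CACHE_ADVICE view):

```sql
ALTER SYSTEM SET db_cache_advice = ON;

-- Estimated physical reads for candidate cache sizes of the DEFAULT pool
SELECT size_for_estimate, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT';
```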
This pool retains blocks in memory (data from tables) that are likely to be reused throughout daily
processing. An example might be a table containing user names and passwords or a validation table of
some type. The DB_KEEP_CACHE_SIZE parameter sizes the KEEP Buffer Pool.
This pool is used to store table data that is unlikely to be reused throughout daily processing – thus the
data blocks are quickly removed from memory when not needed.
The DB_RECYCLE_CACHE_SIZE parameter sizes the Recycle Buffer Pool.
The Redo Log Buffer memory object stores images of all changes made to database blocks.
· Database blocks typically store several table rows of organizational data. This means that if a single
column value from one row in a block is changed, the block image is stored. Changes include INSERT,
UPDATE, DELETE, CREATE, ALTER, or DROP.
· LGWR writes redo sequentially to disk while DBWn performs scattered writes of data blocks to disk.
o Scattered writes tend to be much slower than sequential writes.
o Because LGWR enables users to avoid waiting for DBWn to complete its slow writes, the database
delivers better performance.
The Redo Log Buffer is a circular buffer that is reused over and over. As the buffer fills up, copies of the
images are stored to the Redo Log Files, which are covered in more detail in a later module.
Large Pool
The Large Pool is an optional memory structure that primarily relieves the memory burden placed on
the Shared Pool. The Large Pool is used for the following tasks if it is allocated:
· Allocating space for session memory requirements from the User Global Area where a Shared Server is
in use.
· Transactions that interact with more than one database, e.g., a distributed database scenario.
· Backup and restore operations by the Recovery Manager (RMAN) process.
o RMAN uses this only if the BACKUP_DISK_IO = n and BACKUP_TAPE_IO_SLAVES =
TRUE parameters are set.
o If the Large Pool is too small, memory allocation for backup will fail and memory will be allocated from
the Shared Pool.
· Parallel execution message buffers for parallel server operations. The
PARALLEL_AUTOMATIC_TUNING = TRUE parameter must be set.
The Large Pool size is set with the LARGE_POOL_SIZE parameter – this is not a dynamic parameter. It
does not use an LRU list to manage memory.
Java Pool
The Java Pool is an optional memory object, but is required if the database has Oracle Java installed and
in use for Oracle JVM (Java Virtual Machine).
· The size is set with the JAVA_POOL_SIZE parameter that defaults to 24MB.
· The Java Pool is used for memory allocation to parse Java commands and to store data associated with
Java commands.
· Storing Java code and data in the Java Pool is analogous to SQL and PL/SQL code cached in the Shared
Pool.
Streams Pool
This pool stores data and control structures to support the Oracle Streams feature of Oracle Enterprise
Edition.
· Oracle Streams manages the sharing of data and events in a distributed environment.
· It is sized with the parameter STREAMS_POOL_SIZE.
· If STREAMS_POOL_SIZE is not set or is zero, the size of the pool grows dynamically.
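On a running instance, the relative sizes of the Shared, Large, Java, and Streams pools can be summarized with a query along these lines (a sketch; the pool names reported in V$SGASTAT can vary by version):

```sql
-- Total bytes allocated to each SGA pool, in megabytes
SELECT pool, ROUND(SUM(bytes) / 1024 / 1024, 1) AS mb
FROM   v$sgastat
WHERE  pool IS NOT NULL
GROUP  BY pool;
```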
Processes
Client Process
In order to use Oracle, you must connect to the database. This must occur whether you're using
SQL*Plus, an Oracle tool such as Designer or Forms, or an application program. The client process is also
termed the user process in some Oracle documentation.
Connecting creates a User Process (a memory object) that issues programmatic calls through your user
interface (SQL*Plus, Integrated Developer Suite, or an application program), establishes a session, and
causes the generation of a Server Process that is either dedicated or shared.
Server Process
A Server Process is the go-between for a Client Process and the Oracle Instance.
· Dedicated Server environment – there is a single Server Process to serve each Client Process.
· Shared Server environment – a Server Process can serve several User Processes, although with some
performance reduction.
· Allocation of server process in a dedicated environment versus a shared environment is covered in
further detail in the Oracle11g Database Performance Tuning course offered by Oracle Education.
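A simple way to see which connection mode is in effect is to query V$SESSION, whose SERVER column reports DEDICATED or SHARED per session (a sketch; requires a privileged account):

```sql
-- List user sessions and the type of server process serving each one
SELECT username, server, program
FROM   v$session
WHERE  type = 'USER';
```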
The first components of the Oracle instance that we will examine are the Oracle background
processes. These processes run in the background of the operating system and are not interacted with
directly. Each process is highly specialized and has a specific function in the overall operation of the
Oracle kernel. While these processes accomplish the same functions regardless of the host operating
system, their implementation is significantly different. On Unix-based systems, owing to Unix's
multiprocess architecture, each Oracle process runs as a separate operating system process. Thus, we
can actually see the processes themselves from within the operating system.
For instance, we can use the ps command on Linux to see these processes, as shown in the following
screenshot. We've highlighted a few of them that we will examine in depth. Note that our background
processes are named in the format ora_processtype_SID. Since the SID for our database is ORCL, that
name forms a part of the full process name:
Background Processes
As shown here, there are mandatory, optional, and slave background processes that are started
whenever an Oracle Instance starts up. These background processes serve all system users. We will
cover the mandatory processes in detail.
Optional Processes
· Archiver Process (ARCn)
· Coordinator Job Queue (CJQ0)
· Dispatcher (number “nnn”) (Dnnn)
· Others
This query will display all background processes running to serve a database:
SELECT PNAME
FROM V$PROCESS
WHERE PNAME IS NOT NULL
ORDER BY PNAME;
PMON
The core process of the Oracle architecture is the PMON process—the Process Monitor. The
PMON is tasked with monitoring and regulating all other Oracle-related processes. This includes
not only background processes but server processes as well. Most databases run in a dedicated
server mode. In this mode, any user that connects to the database is granted a server process
with which to do work. In Linux systems, this process can actually be viewed at the server level
with the ps -ef command. When the user connects over the network, the process will be labeled
with LOCAL=NO in the process description. Privileged users such as database administrators can
also make an internal connection to the database, provided that we are logging in from the
server that hosts the database. When an internal connection is made, the process is labeled
with LOCAL=YES. We see an example of each in the following screenshot of the ps –ef
command on a Linux machine hosting Oracle:
Under ordinary circumstances, when a user properly disconnects his or her session from the database by
exiting the tool used to connect to it, the server process given to that user terminates cleanly. However,
what if instead of disconnecting the connection properly, the machine that the user was connected to was
rebooted? In situations like these, the server process on the database is left running since it hasn't
received the proper instructions to terminate. When this occurs, it is the job of PMON to monitor sessions
and clean up orphaned processes. The PMON normally "wakes up" every 3 seconds to check these
processes and clean them up. In addition to this primary function, PMON is also responsible for
registering databases with network listeners.
Since the instance cannot run unless PMON is running, DBAs sometimes check
for it using the ps command as a way of determining whether the instance is
down, because, on Unix-based systems, we can actually see the processes at
the server level using the command ps –ef | grep pmon. If a process is not
returned, we know the instance is down.
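The same check can be made from inside the database; this query against V$PROCESS (a sketch) returns the PMON row along with its operating system process id:

```sql
-- Confirm PMON is running and find its OS process id (SPID)
SELECT pname, spid
FROM   v$process
WHERE  pname = 'PMON';
```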
SMON
If an Oracle Instance fails, all information in memory not written to disk is lost. SMON is responsible for
recovering the instance when the database is started up again. It does the following:
· Rolls forward to recover data that was recorded in a Redo Log File, but that had not yet been recorded
to a datafile by DBWn. SMON reads the Redo Log Files and applies the changes to the data blocks. This
recovers all transactions that were committed because these were written to the Redo Log Files prior to
system failure.
· Opens the database to allow system users to logon.
· Rolls back uncommitted transactions.
SMON also does limited space management. It combines (coalesces) adjacent areas of free space in the
database's datafiles for tablespaces that are dictionary managed.
It also deallocates temporary segments to create free space in the datafiles.
The SMON, or System Monitor process, has several very important duties. Chiefly SMON is responsible
for instance recovery. Under normal circumstances, databases are shut down using the proper
commands to do so. When this occurs, all of the various components, mainly the datafiles, are properly
recorded and synchronized so that the database is left in a consistent state. However, if the database
crashes for some reason (the database's host machine loses power, for instance), this synchronization
cannot occur. When the database is restarted, it will begin from an inconsistent state. Every time the
instance is started, SMON will check for these marks of synchronization. In a situation where the
database is in an inconsistent state, SMON will perform instance recovery to resynchronize these
inconsistencies. Once this is complete, the instance and database can open correctly. Unlike database
recovery, where some data loss has occurred, instance recovery occurs without intervention from the
DBA. It is an automatic process that is handled by SMON.
The SMON process is also responsible for various cleanup operations within the datafiles themselves.
Tempfiles are the files that hold the temporary data that is written when an overflow from certain
memory caches occurs. This temporary data is written in the form of temporary segments within the
tempfile. When this data is no longer needed, SMON is tasked with removing it. The SMON process
can also coalesce data within datafiles, removing gaps, which allows the data to be stored more
efficiently.
The Database Writer writes modified blocks from the database buffer cache to the datafiles.
The purpose of DBWn is to improve system performance by caching writes of database blocks from
the Database Buffer Cache back to datafiles.
· Blocks that have been modified and that need to be written back to disk are termed "dirty blocks."
· The DBWn also ensures that there are enough free buffers in the Database Buffer Cache to service
Server Processes that may be reading data from datafiles into the Database Buffer Cache.
· Performance improves because by delaying writing changed database blocks back to disk, a Server
Process may find the data that is needed to meet a User Process request already residing in memory!
· DBWn writes to datafiles when one of the events illustrated in the figure below occurs.
For all of the overhead duties of processes such as PMON and SMON, we can probably intuit that there
must be a process that actually writes data to the datafiles. Until later versions, that
process was named DBWR – the Database Writer process. The DBWR is responsible for writing the
data that services user operations, but it doesn't do it in the way that we might expect.
In Oracle, almost no operation is executed directly on the disk. The Oracle processing paradigm is to
read data into memory, complete a given operation while the data is still in memory, and write it back to
the disk. We will cover the reason for this in greater depth when we discuss memory caches, but for now
let's simply say it is for performance reasons. A server process reads a unit of data from the
disk, called a database block, and places it into a specialized memory cache. If data is changed using
an UPDATE statement, for instance, it is changed in memory. After some time, it is written back to the
disk in its new state. If we think about it, it should be obvious that the amount of reading and writing in
a database would constitute a great deal of work for one single process. It is certainly possible that a
single DBWR process would become overloaded and begin to affect performance. That's why, in more
recent versions of Oracle, we have the ability to instantiate multiple database writer processes. So we
can refer to DBWR as DBWn, where "n" is a given instantiation of a database writer process. If our
instance is configured to spawn three database writers, they would be dbw0, dbw1, and dbw2. The
number of the DBWn processes that are spawned is governed by one of our initialization parameters,
namely, db_writer_processes.
Let's take a closer look at how the value for db_writer_processes affects the database writer processes
that we can see in the Linux operating system. We won't go into great depth with the commands that
we'll be using at this point, but we can still see how the spawning of multiple DBWn processes works. We
will become very familiar with commands such as these as we revisit them frequently throughout many
of the examples in this book. First, let's examine the number of DBWn processes on our system using
the ps command, with which we're familiar:
From the Linux command line, we use the ps –ef command along with the grep command that searches
through the processes in the system with the string dbw in their names. This restricts our output to only
those processes that contain dbw, which will be the database writer processes. As we can see in the
preceding screenshot, there is only one database writer process named ora_dbw0_orcl.
As mentioned, the number of the database writer processes is determined by an initialization parameter.
The name of that parameter is db_writer_processes. We can determine the value of this parameter by
logging into the database using SQL*Plus (the command sqlplus / as sysdba) and showing its value using
the show parameter command, as in the following screenshot:
Since we've already determined that we only have a single dbw0 process, it should come as no surprise
that the value for our parameter is 1. However, if we wish to add more database writers, it is simple to
do so. From the SQL*Plus command line, we issue the following command, followed by the shutdown
immediate and startup commands to shut down and start up the database:
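The command in question (not reproduced here as a screenshot) would look something like the following sketch; because db_writer_processes is a static parameter, SCOPE = SPFILE and a restart are required:

```sql
-- Request four database writer processes at the next startup
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;

SHUTDOWN IMMEDIATE
STARTUP
```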
What's the optimal number of database writers? The answer is that, as with
many aspects of database administration, it depends. The parameter has a
maximum value of 20, so does that mean more is better? Not necessarily. The
simplest answer is that the default value, either 1 or the integer value
resulting from the number of CPUs divided by 8 (whichever is greater), will
generally provide the best performance. Most opinions regarding best
practices vary greatly and are usually based on the number of CPUs in the
host box. Generally, the default value will serve you well unless your server
is very large or heavy tuning is needed.
The alter system command instructs Oracle to set the db_writer_processes parameter to 4. The
change is recognized when the database is restarted. From here, we type exit to leave SQL*Plus and
return to the Linux command line. We then issue our ps command again and view the results:
As we can see in the preceding screenshot, there are four database writer processes,
called ora_dbw0_orcl, ora_dbw1_orcl, ora_dbw2_orcl, and ora_dbw3_orcl, which aligns with our
value for db_writer_processes. We now have four database writer processes with which to write
data.
LGWR
The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log File that is in use.
· These are sequential writes since the Redo Log Files record database modifications based on the actual
time that the modification takes place.
· LGWR actually writes before the DBWn writes and only confirms that a COMMIT operation has
succeeded when the Redo Log Buffer contents are successfully written to disk.
· LGWR can also call the DBWn to write contents of the Database Buffer Cache to disk.
· The LGWR writes according to the events illustrated in the figure shown below.
LGWR writes to disk when:
– A transaction is COMMITTED
– A timeout occurs (every 3 seconds)
– The redo log buffer is 1/3 full
– There is more than 1 megabyte of redo entries
– Before DBWn writes out ‘dirty’ blocks to datafiles
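A rough feel for LGWR activity can be obtained from the system statistics (a sketch; the counter values are cumulative since instance startup):

```sql
-- Cumulative redo activity since the instance started
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'redo writes', 'user commits');
```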
CKPT
The Checkpoint (CKPT) process writes information to update the database control files and headers of
datafiles.
· A checkpoint identifies a point in time with regard to the Redo Log Files where instance recovery is to
begin should it be necessary.
· It can tell DBWn to write blocks to disk.
· A checkpoint is taken, at a minimum, once every three seconds.
Think of a checkpoint record as a starting point for recovery. DBWn will have completed writing all
buffers from the Database Buffer Cache to disk prior to the checkpoint, thus those records will not
require recovery. This does the following:
· Ensures modified data blocks in memory are regularly written to disk – CKPT can call the DBWn
process in order to ensure this and does so when writing a checkpoint record.
· Reduces Instance Recovery time by minimizing the amount of work needed for recovery since only
Redo Log File entries processed since the last checkpoint require recovery.
· Causes all committed data to be written to datafiles during database shutdown.
We mentioned in the preceding section that the purpose of the DBWn process is to move data in and out
of memory. Once a block of data is moved into memory, it is referred to as a buffer. When a buffer in
memory is changed using an UPDATE statement, for instance, it is called a dirty buffer. Dirty buffers
can remain in memory for a time and are not automatically flushed to disk. The event that signals the
writing of dirty buffers to disk is known as a checkpoint. The checkpoint ensures that memory is kept
available for other new buffers and establishes a point for recovery. In earlier versions of Oracle, the type
of checkpoint that occurred was known as a full checkpoint. This checkpoint will flush all dirty buffers
back to the datafiles on the disk. While full checkpoints represent a complete flush of the dirty buffers,
they are expensive in terms of performance. Since Version 8i, the Oracle kernel makes use of an
incremental checkpoint that intelligently flushes only part of the available dirty buffers when needed. Full
checkpoints only occur now during a shutdown of the database or on demand, using a command.
The process in the instance that orchestrates checkpointing is the CKPT process. The CKPT process uses
incremental checkpoints at regular intervals to ensure that dirty buffers are written out and any changes
recorded in the redo logs are kept consistent for recovery purposes. Unlike the DBWn process, there is
only one CKPT process. Although the incremental checkpoint method is used by CKPT, we can also force
a full checkpoint using the command shown in the following screenshot:
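A full checkpoint can be forced with the standard command below (a sketch; requires the ALTER SYSTEM privilege):

```sql
-- Force a full checkpoint: all dirty buffers are flushed to the datafiles
ALTER SYSTEM CHECKPOINT;
```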
PURPOSE OF CHECKPOINTS
Database blocks are temporarily stored in Database buffer cache. As blocks are read, they are stored
in DB buffer cache so that if any user accesses them later, they are available in memory and need not be
read from the disk. When we update any row, the buffer in DB buffer cache corresponding to the block
containing that row is updated in memory. A record of the change made is kept in the redo log buffer. On
commit, the changes we made are written to the disk thereby making them permanent. But where are
those changes written? To the datafiles containing data blocks? No!!! The changes are recorded in online
redo log files by flushing the contents of redo log buffer to them. This is called write ahead logging. If
the instance crashed right now, the DB buffer cache will be wiped out but on restarting the database,
Oracle will apply the changes recorded in redo log files to the datafiles.
Why doesn’t Oracle write the changes to datafiles right away when we commit the transaction? The
reason is simple. If it chose to write directly to the datafiles, it would have to physically locate the data
block in the datafile first and then update it, which means that after committing, the user would have to wait
until DBWR searches for the block and writes it before issuing the next command. This would bring down
the performance drastically. That is where the role of redo logs comes in. The writes to the redo logs are
sequential writes – LGWR simply dumps the contents of the redo log buffer to the log files sequentially and
synchronously so that the user does not have to wait for long. Moreover, DBWR will always write in units of Oracle blocks
whereas LGWR will write only the changes made. Hence, write ahead logging also improves performance
by reducing the amount of data written synchronously. When will the changes be applied to the
datablocks in datafiles? The data blocks in the datafiles will be updated by the DBWR asynchronously in
response to certain triggers. These triggers are called checkpoints.
Checkpoint is a synchronization event at a specific point in time which causes some / all dirty blocks to
be written to disk thereby guaranteeing that blocks dirtied prior to that point in time get written.
Whenever dirty blocks are written to datafiles, it allows Oracle:
- to reuse a redo log: A redo log can’t be reused until DBWR writes all the dirty blocks protected by
that logfile to disk. If we attempt to reuse it before DBWR has finished its checkpoint, we get the
following message in the alert log: Checkpoint not complete.
- to reduce instance recovery time : As the memory available to a database instance increases, it is
possible to have database buffer caches as large as several million buffers. This requires that the database
checkpoint advance frequently to limit recovery time, since infrequent checkpoints and large buffer
caches can exacerbate crash recovery times significantly.
- to free buffers for reads: Dirtied blocks can’t be used to read new data into them until they are
written to disk. Thus DBWR writes dirty blocks from the buffer cache to make room in the cache.
Various types of checkpoints in Oracle:
– Full checkpoint
– Thread checkpoint
– File checkpoint
– Parallel Query checkpoint
– Object checkpoint
– Log switch checkpoint
– Incremental checkpoint
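The checkpoint positions that result can be observed in the data dictionary (a sketch; the SCN values shown will differ on every instance):

```sql
-- SCN of the last full/thread checkpoint recorded in the controlfile
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN recorded in each datafile header
SELECT file#, checkpoint_change# FROM v$datafile_header;
```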
AGEING ALGORITHM
This strategy involves writing changed blocks that have been dirty for the longest time and is called
aging writes. This algorithm relies on the CKPT Q running through the cache, with buffers being linked to
the end of this list the first time they are made dirty.
The LRU list contains all the buffers – free / pinned / dirty. Whenever a buffer in the LRU list is dirtied, it is
placed in the CKPT Q as well, i.e., a buffer can simultaneously have pointers in both the LRU list and the CKPT Q,
but the buffers in the CKPT Q are arranged in the order in which they were dirtied. Thus, the checkpoint queue
contains dirty blocks in the order of the SCN at which they were dirtied.
Every 3 seconds DBWR wakes up and checks whether there are enough dirty buffers in the CKPT Q that need
to be written to satisfy the instance recovery requirement.
If that many dirty buffers are not found,
DBWR goes to sleep
else (dirty buffers found)
. The CKPT target RBA is calculated based on:
– The most recent RBA
– log_checkpoint_interval
– log_checkpoint_timeout
– fast_start_mttr_target
– fast_start_io_target
– 90% of the size of the smallest redo log file
DBWR walks the CKPT Q from the low end (lowest RBA, i.e., dirtied earliest), collecting buffers for
writing to disk until it reaches a buffer that is more recent than the target RBA. These buffers are
placed in the write list-main.
DBWR walks the write list-main and checks all the buffers:
– If the changes made to the buffer have already been written to the redo log files
. Move those buffers to the write-aux list
else
. Trigger LGWR to write the changes to those buffers to the redo logs
. Move those buffers to the write-aux list
. Write buffers from the write-aux list to disk
. Update the checkpoint RBA in the SGA
. Delink those buffers from the CKPT Q
. Delink those buffers from the write-aux list
- Statistics updated:
. DBWR checkpoint buffers written
- Controlfile updated every 3 seconds by CKPT:
. Checkpoint progress record
- Statistics Updated:
. DBWR checkpoint buffers written
- Controlfile updated every 3 secs by CKPT
. Checkpoint progress record
As sessions link buffers to one end of the list, DBWR can effectively unlink buffers from the other end
and copy them to disk. To reduce contention between DBWR and foreground sessions, there are two
linked lists in each working set so that foreground sessions can link buffers to one while DBWR is
unlinking them from the other.
LRU/TCH ALGORITHM
The LRU/TCH algorithm writes to disk the cold dirty blocks that are on the point of being pushed out of
the cache.
As per ageing algorithm, DBWR will wake up every 3 seconds to flush dirty blocks to disk. But if blocks
get dirtied at a fast pace during those 3 seconds and a server process needs some free buffers, some
buffers need to be flushed to the disk to make room. That’s when LRU/TCH algorithm is used to write
those dirty buffers which are on the cold end of the LRU list.
Whenever a server process needs some free buffers to read data, it scans the LRU list from its cold
end to look for free buffers.
While searching:
If unused buffers are found
Read blocks from disk into the buffers and link them to the corresponding hash bucket
If it finds some clean buffers (containing data but not dirtied, or dirtied and already flushed to disk),
if they are candidates to be aged out (low touch count)
Read blocks from disk into the buffers and link them to the corresponding hash bucket
else (they have been accessed recently and should not be aged out)
Move them to the MRU end depending upon their touch count.
If it finds dirty buffers (they are already in the CKPT Q),
Delink them from the LRU list
Link them to the write-main list (now these buffers are in both the CKPT Q and the write-main list)
The server process scans a threshold number of buffers (_db_block_max_scan_pct = 40 by default). If it
does not find the required number of free buffers,
it triggers DBWR to write the dirty blocks in the write-main list to disk
. DBWR walks the write list-main and checks all the buffers
– If the changes made to the buffer have already been written to the redo log files
. Move those buffers to the write-aux list
else
. Trigger LGWR to write the changes to those buffers to the redo logs
. Move those buffers to the write-aux list
. Write buffers from the write-aux list to disk
. Delink those buffers from the CKPT Q and the write-aux list
. Link those buffers to the LRU list as free buffers
Note that:
- In this algorithm, the dirty blocks are delinked from the LRU list before being linked to the write-main list,
in contrast to the ageing algorithm, where the blocks can simultaneously be in both the CKPT Q and the LRU list.
– In this algorithm, the checkpoint is not advanced, because the dirty blocks at the LRU end may not
actually be the ones that were dirtied earliest. They may be there because the server process did not
move them to the MRU end earlier. There might be blocks present in the CKPT Q that were dirtied
earlier than the blocks in question.
If a Redo Log File fills up and a switch is made to a new Redo Log File (this is covered in more detail in a
later module), the CKPT process also writes checkpoint information into the headers of the datafiles.
Checkpoint information written to control files includes the system change number (the SCN is a number
stored in the control file and in the headers of the database files that is used to ensure that all files in
the system are synchronized), the location of the Redo Log File to be used for recovery, and other
information.
CKPT does not write data blocks or redo blocks to disk – it calls DBWn and LGWR as necessary.
The Manageability Monitor Lite Process (MMNL) writes statistics from the Active Session History
(ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.
The information stored by these processes is used for performance tuning – we survey performance
tuning in a later module.
Prior to Oracle Version 10g, database performance tuning was accomplished primarily using data
dictionary views. Oracle's extensive data dictionary provided a great deal of insight into the inner
workings of the database. However, these views had limitations as to how much internal data was stored
and how often it was updated. In short, the performance tuning needs of today's databases required a
more extensive interface into Oracle. With Version 10g, the Oracle database included what amounts to a
second data dictionary, the Automatic Workload Repository (AWR) that focuses solely on
performance tuning metrics. The MMON process, the Manageability Monitor, extracts these metrics
from the Oracle memory caches and writes them to the AWR. MMON essentially takes point-in-time
snapshots of performance data, allowing the data to be used in trend analysis. MMON also invokes
the ADDM, the Automatic Database Diagnostic Monitor, which analyses these metrics and can offer
performance optimization suggestions in the form of a report. MMON is assisted by another
process, MMNL, the Manageability Monitor Lite, to gather these statistics. The following screenshot
displays some of these secondary processes:
RECO
The Recoverer Process (RECO) is used to resolve failures of distributed transactions in a distributed
database.
· Consider a database that is distributed on two servers – one in St. Louis and one in Chicago.
· Further, the database may be distributed on servers of two different operating systems, e.g. LINUX
and Windows.
· The RECO process of a node automatically connects to other databases involved in an in-doubt
distributed transaction.
· When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt
transactions, removing from each database's pending transaction table any rows that correspond to the
resolved transactions.
Of these, you will most often use ARCn (archiver) when you automatically archive redo log file
information (covered in a later module).
ARCn
While the Archiver (ARCn) is an optional background process, we cover it in more detail because it is
almost always used for production systems storing mission-critical information.
· The ARCn process must be used to recover from loss of a physical disk drive for systems that are
"busy" with lots of transactions being completed.
· It performs the tasks listed below.
When a Redo Log File fills up, Oracle switches to the next Redo Log File.
· The DBA creates several of these and the details of creating them are covered in a later module.
· If all Redo Log Files fill up, then Oracle switches back to the first one and uses them in a round-robin
fashion by overwriting ones that have already been used.
· Overwritten Redo Log Files have information that, once overwritten, is lost forever.
ARCHIVELOG Mode:
· If ARCn is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill up, they are
individually written to Archived Redo Log Files.
· LGWR does not overwrite a Redo Log File until archiving has completed.
· Committed data is not lost forever and can be recovered in the event of a disk failure.
· Only the contents of the SGA will be lost if an Instance fails.
In NOARCHIVELOG Mode:
· The Redo Log Files are overwritten and not archived.
· Recovery can only be made to the last full backup of the database files.
· All committed transactions after the last full backup are lost, and you can see that this could cost the
firm a great deal of money.
When running in ARCHIVELOG mode, the DBA is responsible for ensuring that the Archived Redo Log Files
do not consume all available disk space! Usually after two complete backups are made, any Archived
Redo Log Files for prior backups are deleted.
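The archiving mode of a database can be confirmed from SQL*Plus (a sketch; ARCHIVE LOG LIST is a SQL*Plus command and needs a privileged connection):

```sql
-- Returns ARCHIVELOG or NOARCHIVELOG
SELECT log_mode FROM v$database;

-- SQL*Plus summary: mode, archive destination, and log sequence numbers
ARCHIVE LOG LIST
```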
Slave Processes
Slave processes are background processes that perform work on behalf of other processes.
Innn: I/O slave processes -- simulate asynchronous I/O for systems and devices that do not support it.
In asynchronous I/O, there is no timing requirement for transmission, enabling other processes to start
before the transmission has finished.
· For example, assume that an application writes 1000 blocks to a disk on an operating system that
does not support asynchronous I/O.
· Each write occurs sequentially and waits for a confirmation that the write was successful.
· With asynchronous I/O, the application can write the blocks in bulk and perform other work while
waiting for a response from the operating system that all blocks were written.
Parallel Query Slaves -- In parallel execution or parallel processing, multiple processes work together
simultaneously to run a single SQL statement.
· By dividing the work among multiple processes, Oracle Database can run the statement more quickly.
· For example, four processes handle four different quarters in a year instead of one process handling all
four quarters by itself.
· Parallel execution reduces response time for data-intensive operations on large databases such as data
warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the largest performance
benefits from parallel execution because statement processing can be split up among multiple CPUs.
Parallel execution can also benefit certain types of OLTP and hybrid systems.
Logical Structure
It is helpful to understand how an Oracle database is organized in terms of a logical structure that is
used to organize physical objects.
Segment: When logical storage objects are created within a tablespace, for example, an employee
table, a segment is allocated to the object.
· A tablespace typically has many segments.
· A segment cannot span tablespaces but can span datafiles that belong to a single tablespace.
Extent: Each object has one segment which is a physical collection of extents.
· Extents are simply collections of contiguous disk storage blocks. A logical storage object such as
a table or index always consists of at least one extent – ideally the initial extent allocated to an object
will be large enough to store all data that is initially loaded.
· As a table or index grows, additional extents are added to the segment.
· A DBA can add extents to segments in order to tune performance of the system.
· An extent cannot span a datafile.
Block: The Oracle Server manages data at its smallest unit of storage, termed a block or data
block. Data are actually stored in blocks.
A physical block is the smallest addressable location on a disk drive for read/write operations.
An Oracle data block consists of one or more physical blocks (operating system blocks) so the data block,
if larger than an operating system block, should be an even multiple of the operating system block size,
e.g., if the Linux operating system block size is 2K or 4K, then the Oracle data block should be 2K, 4K,
8K, 16K, etc., in size. This optimizes I/O.
The data block size is set at the time the database is created and cannot be changed. It is set with
the DB_BLOCK_SIZE parameter. The maximum data block size depends on the operating system.
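A DBA can verify the block size of an open database from SQL*Plus:

```sql
-- Display the data block size fixed at database creation time.
SHOW PARAMETER db_block_size
```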
Thus, the Oracle database architecture includes both logical and physical structures as follows:
· Physical: Control files; Redo Log Files; Datafiles; Operating System Blocks.
· Logical: Tablespaces; Segments; Extents; Data Blocks.
SQL Statements are processed differently depending on whether the statement is a query, data
manipulation language (DML) to update, insert, or delete a row, or data definition language (DDL) to
write information to the data dictionary.
Processing a query:
· Parse:
o Search for identical statement in the Shared SQL Area.
o Check syntax, object names, and privileges.
o Lock objects used during parse.
o Create and store execution plan.
· Bind: Obtains values for variables.
· Execute: Process statement.
· Fetch: Return rows to user process.
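The bind phase can be made visible in SQL*Plus with a bind variable. The table and column names below are assumed for illustration:

```sql
-- The SELECT is parsed once; the value of :dept_id is supplied at
-- bind time, so re-executing with a new value can reuse the cached
-- execution plan (a soft parse) rather than re-parsing.
VARIABLE dept_id NUMBER
EXECUTE :dept_id := 10
SELECT * FROM emp WHERE department_id = :dept_id;
```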
Processing a DML statement:
· Parse: Same as the parse phase used for processing a query.
· Bind: Same as the bind phase used for processing a query.
· Execute:
o If the data and undo blocks are not already in the Database Buffer Cache, the server process reads them
from the datafiles into the Database Buffer Cache.
o The server process places locks on the rows that are to be modified. The undo block is used to store the
before image of the data, so that the DML statements can be rolled back if necessary.
o The data blocks record the new values of the data.
o The server process records the before image to the undo block and updates the data block. Both of these
changes are made in the Database Buffer Cache. Any changed blocks in the Database Buffer Cache are
marked as dirty buffers – that is, buffers that are not the same as the corresponding blocks on the disk.
o The processing of a DELETE or INSERT command uses similar steps. The before image for a DELETE
contains the column values in the deleted row, and the before image of an INSERT contains the row
location information.
When a SQL statement is issued, the Oracle server process scans the Library Cache to see if a parsed
copy of the statement is already cached. If so, the existing SQL information (execution plan) is reused
directly; this is called a soft parse, also known as a library cache hit. If not, the Oracle server process
must fully parse the statement; this is called a hard parse.
Parsing means that before executing the statement, the server must determine its meaning and build
an execution plan. For a statement referencing emp and its sal column, the server must answer
questions such as: What is emp – a table, a synonym, or a view? Does the emp object exist? Does the
sal column exist in emp? Does this user have permission to view or modify it?
Having determined the meaning of the statement, the server must decide how to execute it in the best
way. Is there an index on the id column? If so, is an index lookup faster, or is a full table scan faster?
To answer these questions, the Oracle server must query the data dictionary. When querying the data
dictionary, the server process first scans the Data Dictionary Cache to see if the needed dictionary data
is already in memory. If it is, it is reused directly; otherwise, it is read from disk into the Data
Dictionary Cache and cached there for future use.
2. After the SQL statement is parsed, execution begins. Assume the statement modifies the emp row
with id = 1, whose sal value is currently 80.
First, the server process scans the Database Buffer Cache for the emp data block containing the row
with id = 1. If it is not there, the data block is loaded from disk into the Database Buffer Cache. (A data
block may contain multiple rows, and one row's data may also be distributed across several data
blocks.)
Next, an undo segment is assigned to service the transaction, and that undo segment's data blocks are
also loaded into the Database Buffer Cache. Note: a transaction can be serviced by only one undo
segment – the undo data generated by one transaction cannot be spread across multiple undo
segments – but one undo segment can service multiple transactions.
The before image of the data is saved in the undo segment's data block, and the modification made to
the undo block is itself recorded in the Redo Log Buffer.
3. Then the emp data block in the Database Buffer Cache for the row with id = 1 is modified: sal is set
to 50. The Redo Log Buffer records this modification as well.
4. At this point the Redo Log Buffer holds a redo record for the modified emp data block; since the
undo block has not been modified again, no additional undo changes need to be recorded.
5. Suppose the user now issues a COMMIT. A redo record marked as a commit is generated in the
Redo Log Buffer, containing the current SCN and a timestamp. Note: after the commit record is written
to the Redo Log Buffer, the user does not yet receive feedback that the commit succeeded. Only after
the LGWR process writes the contents of the Redo Log Buffer to the redo log file does the user receive
feedback that the commit was successful.
6. After receiving the commit-success feedback, the transaction has executed successfully.
At this point, the emp data block on disk for id = 1 may still contain sal = 80, or it may already contain
sal = 50 – it is not known, and it does not matter. When other transactions operate on this data block,
they read the committed value, sal = 50, from the Database Buffer Cache. If the instance crashes
before the modified block is synchronized to disk, then at the next startup the SMON background
process will use the records in the redo log files to recover the database.
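The scenario walked through above corresponds to a statement sequence such as the following, using the table and values assumed in the example:

```sql
-- Change sal from 80 to 50 for the emp row with id = 1, then commit.
UPDATE emp
SET    sal = 50
WHERE  id  = 1;

COMMIT;
```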
Environment Variables
Oracle makes use of environment variables on the server and client computers in both LINUX and
Windows operating systems in order to:
· establish standard locations for files, and
· make it easier for you to use Oracle.
On LINUX, environment variable values can be displayed by typing the command env at the operating
system prompt. It is common to have quite a few environment variables. This example highlights those
variables associated with the logged on user and with the Oracle database and software:
dbock/@sobora2.isg.siue.edu=>env
_=/bin/env
SSH_CONNECTION=::ffff:24.207.183.37 25568 ::ffff:146.163.252.102 22
PATH=/bin:/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin:.:/u01/app/oracle/produ
ct/11.2.0/dbhome_1/bin
SHELL=/bin/ksh
HOSTNAME=sobora2.isg.siue.edu
USER=dbock
ORACLE_BASE=/u01/app/oracle/
SSH_CLIENT=::ffff:24.207.183.37 25568 22
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
TERM=xterm
ORACLE_SID=DBORCL
LANG=en_US.UTF-8
SSH_TTY=/dev/pts/2
LOGNAME=dbock
MAIL=/var/spool/mail/oracle1
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib
HOME=/u01/home/dbock
ORACLE_TERM=vt100
In the Korn shell, an environment variable is set by assigning it a value and then exporting it:
VARIABLE_NAME=value
export VARIABLE_NAME
For example, to set the ORACLE_SID variable:
dbock/@sobora2.isg.siue.edu=> ORACLE_SID=USER350
dbock/@sobora2.isg.siue.edu=> export ORACLE_SID
The following environment variables in a LINUX environment are used for the server.
HOME
Command: HOME=/u01/student/dbock
Use: Stores the location of the home directory for your files for your assigned LINUX account. You can
always easily change directories to your HOME by typing the command: cd $HOME
Note: The $ is used as the first character of the environment variable so that LINUX uses the value of
the variable as opposed to the actual variable name.
LD_LIBRARY_PATH
Command: LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib
Use: Stores the path to the library products used most commonly by you. Here the first entry in the
path points to the library products for Oracle that are located in the
directory /u01/app/oracle/product/11.2.0/dbhome_1/lib. For multiple entries, separate the
path entries with colons.
ORACLE_BASE
Command: ORACLE_BASE=/u01/app/oracle
Use: Stores the base directory for the installation of Oracle products. Useful if more than one version of
Oracle is loaded on a server. Other than that, this variable does not have much use. We are not using it
at SIUE.
ORACLE_HOME
Command: ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
Use: Enables easy changing to the home directory for Oracle products. All directories that you will use
are hierarchically below this one. The most commonly used subdirectories are named dbs and rdbms.
ORACLE_SID
Command: ORACLE_SID=USER350 (or the name of your database)
Use: Tells the operating system the system identifier for the database. One of the databases on
the SOBORA2 server is named DBORCL – when you create your own database, you will use a
database name assigned by your instructor as the ORACLE_SID system identifier for your database.
ORACLE_TERM
Command: ORACLE_TERM=vt100
Use: In LINUX, this specifies the terminal emulation type. The vt100 is a very old type of emulation for
keyboard character input.
PATH
Command: PATH=/u01/app/oracle/product/11.2.0/dbhome_1/bin:/bin:/usr/bin:/usr/local/
bin:.
Use: This specifies path pointers to the most commonly used binary files. A critical entry for using
Oracle is the /u01/app/oracle/product/11.2.0/dbhome_1/bin entry that points to the Oracle
binaries. If you upgrade to a new version of Oracle, you will need to update this path entry to point to
the new binaries.
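The effect of this entry can be sketched in the Korn shell as follows. The Oracle home path is the one used in the examples above; substitute your own:

```shell
# Prepend the (assumed) Oracle binaries directory to the search path.
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH
# The shell now searches the Oracle bin directory first for
# executables such as sqlplus.
echo "$PATH" | cut -d: -f1
```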
Windows Variables
In a Windows operating system environment, environment variables are established by storing entries
into the system registry. Your concern here is primarily with the installation of Oracle tools software on a
client computer.
Windows and Oracle allow and recommend the creation of more than one ORACLE_HOME directory
(folder) on a Windows client computer. This is explained in more detail in the installation manuals for the
various Oracle software products.
Basically, you should use one folder as an Oracle Home for Oracle Enterprise Manager software and a
different folder as an Oracle Home for Oracle's Internet Developer Suite – this suite of software includes
Oracle's Forms, Reports, Designer, and other tools for developing internet-based applications.
Oracle contains a set of underlying views that are maintained by the database server and accessible to
the database administrator user SYS. These views are called dynamic performance views because
they are continuously updated while a database is open and in use, and their contents relate primarily to
performance.
Although these views appear to be regular database tables, they are not. These views provide data on
internal disk structures and memory structures. You can select from these views, but you can never
update or alter them.
The catalog.sql script contains definitions of the views and public synonyms for the dynamic
performance views.
You must run catalog.sql to create these views and synonyms. After installation, only user SYS or
anyone with the SYSDBA privilege has access to the dynamic performance tables.
V$ Views
The actual dynamic performance views are identified by the prefix V_$. Public synonyms for these views
have the prefix V$. Database administrators and other users should access only the V$ objects, not
the V_$ objects.
The dynamic performance views are used by Oracle Enterprise Manager, which is the primary interface
for accessing information about system performance. After an instance is started, the V$ views that read
from memory are accessible. Views that read data from disk require that the database be mounted, and
some require that the database be open.
GV$ Views
For almost every V$ view described in this chapter, Oracle has a corresponding GV$ (global V$) view. In
Real Application Clusters, querying a GV$ view retrieves the V$ view information from all qualified
instances. In addition to the V$ information, each GV$ view contains an extra column named
INST_ID of datatype NUMBER. The INST_ID column displays the instance number from which the
associated V$ view information was obtained. The INST_ID column can be used as a filter to
retrieve V$ information from a subset of available instances. For example, the following query retrieves
the information from the V$LOCK view on instances 2 and 5:
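Such a query takes this form, with the instance numbers 2 and 5 as described in the text:

```sql
-- Retrieve V$LOCK information from instances 2 and 5 only.
SELECT * FROM GV$LOCK WHERE inst_id IN (2, 5);
```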
DATA DICTIONARY VIEWS: An important part of an Oracle database is its data dictionary, which is a
read-only set of tables that provides administrative metadata about the database. A data dictionary
contains information such as the following:
The definitions of every schema object in the database, including default values for columns
and integrity constraint information
The amount of space allocated for and currently used by the schema objects
The names of Oracle Database users, privileges and roles granted to users, and auditing
information related to users
The data dictionary is a central part of data management for every Oracle database. For
example, the database performs the following actions:
Accesses the data dictionary to find information about users, schema objects, and storage
structures
Modifies the data dictionary every time that a DDL statement is issued
Because Oracle Database stores data dictionary data in tables, just like other data, users can query the
data with SQL. For example, users can run SELECT statements to determine their privileges, which
tables exist in their schema, which columns are in these tables, whether indexes are built on these
columns, and so on.
Views with the prefix DBA_ show all relevant information in the entire database. DBA_ views are
intended only for administrators.
For example, the following query shows information about all objects in the database:
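A query of this kind (the column choice is illustrative) would be:

```sql
-- Show every object in the database, with its owner and type.
SELECT owner, object_name, object_type
FROM   dba_objects;
```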
Views with the prefix ALL_ refer to the user's overall perspective of the database. These views return
information about schema objects to which the user has access through public or explicit grants of
privileges and roles, in addition to schema objects that the user owns.
For example, the following query returns information about all the objects to which you have access:
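A sketch of such a query, with the same illustrative columns:

```sql
-- Show every object the current user can access, whether owned
-- or granted through privileges and roles.
SELECT owner, object_name, object_type
FROM   all_objects;
```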
Because the ALL_ views obey the current set of enabled roles, query results depend on which roles are
enabled, as shown in the following example:
Application developers should be cognizant of the effect of roles when using ALL_ views in a stored
procedure, where roles are not enabled by default.
The views most likely to be of interest to typical database users are those with the prefix USER_. These
views:
Refer to the user's private environment in the database, including metadata about schema
objects created by the user, grants made by the user, and so on
Display only rows pertinent to the user, returning a subset of the information in the ALL_ views
Have columns identical to the other views, except that the column OWNER is implied
For example, the following query returns all the objects contained in your schema:
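A sketch of such a query:

```sql
-- Show the objects in the current user's own schema
-- (no OWNER column is needed; the owner is implied).
SELECT object_name, object_type
FROM   user_objects;
```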
DUAL is a small table in the data dictionary that Oracle Database and user-written programs can
reference to guarantee a known result. The dual table is useful when a value must be returned only
once, for example, the current date and time. All database users have access to DUAL.
The DUAL table has one column called DUMMY and one row containing the value X. The following
example queries DUAL to perform an arithmetical operation:
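A sketch of such a query; DUAL supplies the single row needed to evaluate the expression once:

```sql
-- Evaluate an arithmetic expression; DUAL guarantees exactly one row.
SELECT ((3 * 4) + 5) / 3 AS answer
FROM   dual;
```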
The data dictionary base tables are the first objects created in any Oracle database. All data dictionary
tables and views for a database are stored in the SYSTEM tablespace. Because the SYSTEM tablespace
is always online when the database is open, the data dictionary is always available when the database is
open.
The Oracle Database user SYS owns all base tables and user-accessible views of the data dictionary.
Data in the base tables of the data dictionary is necessary for Oracle Database to function. Therefore,
only Oracle Database should write or change data dictionary information. No Oracle Database user
should ever alter rows or schema objects contained in the SYS schema because such activity can
compromise data integrity. The security administrator must keep strict control of this central account.
During database operation, Oracle Database reads the data dictionary to ascertain that schema objects
exist and that users have proper access to them. Oracle Database also updates the data dictionary
continuously to reflect changes in database structures, auditing, grants, and data.
For example, if user hr creates a table named interns, then new rows are added to the data dictionary
that reflect the new table, columns, segment, extents, and the privileges that hr has on the table. This
new information is visible the next time the dictionary views are queried.
Oracle Database creates public synonyms for many data dictionary views so users can access them
conveniently. The security administrator can also create additional public synonyms for schema objects
that are used system wide. Users should avoid naming their own schema objects with the same names
as those used for public synonyms.
Much of the data dictionary information is in the data dictionary cache because the database
constantly requires the information to validate user access and verify the state of schema
objects. Parsing information is typically kept in the caches. The COMMENTS columns describing the
tables and their columns are not cached in the dictionary cache, but may be cached in the database
buffer cache.
Other Oracle Database products can reference existing views and create additional data dictionary tables
or views of their own. Application developers who write programs that refer to the data dictionary should
refer to the public synonyms rather than the underlying tables. Synonyms are less likely to change
between releases.
When an Oracle Instance is started, the characteristics of the Instance are established by parameters
specified within the initialization parameter file that is read during startup. In the figure shown below,
the initialization parameter file is named spfiledb01.ora; however, you can select any name for the
parameter file—the database here has an ORACLE_SID value of db01.
PFILE
This is a plain text file. It is common to maintain this file either by editing it with the vi editor, or
by FTPing it to a client computer, modifying it with Notepad, and then FTPing it back to
the SOBORA2 server.
The file is only read during database startup so any modifications take effect the next time the database
is started up. This is an obvious limitation since shutting down and starting up an Oracle database is not
desirable in a 24/7 operating environment.
The naming convention followed is to name the file initSID.ora where SID is the system identifier. For
example, the PFILE for the departmental SOBORA2 server for the database named DBORCL is
named initDBORCL.ora.
When Oracle software is installed, a sample init.ora file is created. You can create one for your
database by simply copying the init.ora sample file and renaming it. The sample command shown here
creates an init.ora file for a database named USER350. Here the file was copied to the
user's HOME directory and named initUSER350.ora.
$ cp $ORACLE_HOME/dbs/init.ora $HOME/initUSER350.ora
You can also create an init.ora file by typing commands into a plain text file using an editor such as
Notepad.
NOTE: For a Windows operating system, the default location for the init.ora file
is C:\Oracle_Home\database.
This is a listing of the initDBORCL.ora file for the database named DBORCL. We will cover these
parameters in our discussion below.
· The example below shows the format for specifying values: keyword = value.
· Each parameter has a default value that is often operating system dependent.
· Generally parameters can be specified in any order.
· Comment lines can be entered and marked with the # symbol at the beginning of the
comment.
· Enclose parameters in quotation marks to include literals.
· Usually operating systems such as LINUX are case sensitive so remember this in specifying file
names.
The basic initialization parameters – there are about 255 of them, and the actual number changes
with each version of Oracle. Most are optional, and Oracle will use default settings for them if you do
not assign values to them. Here the most commonly specified parameters are sorted according to their
category.
· DB_BLOCK_SIZE (mandatory) – specifies the size of the default Oracle block in the
database. At database creation time, the SYSTEM, TEMP, and SYSAUX tablespaces are created
with this block size. An 8KB block size is about the smallest you should use for any database
although 2KB and 4KB block sizes are legal values.
· CONTROL_FILES (mandatory) – tells Oracle the location of the control files to be read
during database startup and operation. The control files are typically multiplexed (multiple
copies).
diagnostic_dest='/u01/student/dbockstd/diag'
#Archive
log_archive_dest_1='LOCATION=/u01/student/dbockstd/oradata/arch'
log_archive_format='USER350_%t_%s_%r.arc'
#Miscellaneous
COMPATIBLE='11.2.0'
INSTANCE_NAME=USER350
#Memory sizing
MEMORY_TARGET=1G
· PGA_AGGREGATE_TARGET (recommended, but not needed if MEMORY_TARGET is
set) and SORT_AREA_SIZE (no longer recommended) – PGA_AGGREGATE_TARGET specifies the
target aggregate PGA memory available to all server processes attached to the instance.
o When managing memory manually, the Oracle RDBMS tries to ensure the total PGA memory
allocated for all database server processes and background processes does not exceed
this target.
o In the past, SORT_AREA_SIZE was an often-used parameter for improving sorting
performance; it specifies (in bytes) the maximum amount of memory Oracle will use
for a sort.
o Oracle now recommends against using SORT_AREA_SIZE unless the instance is configured
with the shared server option. Use the PGA_AGGREGATE_TARGET parameter instead
(use a minimum of 10MB; the default Oracle setting is 20% of the size of the SGA).
#Pool sizing
SGA_TARGET=134217728
#Alternatively you can set these individually to establish minimum sizes for these
caches, but this is not recommended
DB_CACHE_SIZE=1207959552
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=123232153 #This is the minimum for 10g
So, which parameters should you include in your PFILE when you create a database? I suggest a simple
init.ora file initially - you can add to it as time goes on in this course.
SPFILE
The SPFILE is a binary file. You must NOT manually modify the file and it must always reside on the
server. After the file is created, it is maintained by the Oracle server.
The SPFILE enables you to make changes that are termed persistent across startup and shutdown
operations. You can make dynamic changes to Oracle while the database is running and this is the main
advantage of using this file. The default location is in the $ORACLE_HOME/dbs directory with a default
name of spfileSID.ora. For example, a database named USER350 would have an SPFILE with a name
of spfileUSER350.ora.
As is shown in the figure above, you can create an SPFILE from an existing PFILE by typing in the
command shown while using SQL*Plus. Note that the filenames are enclosed in single-quote marks.
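The command takes this form; the filenames below are illustrative, following the naming conventions described above for a database named USER350:

```sql
-- Build a binary SPFILE from a text PFILE; note the single quotes.
CREATE SPFILE = '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileUSER350.ora'
FROM PFILE = '/u01/home/dbock/initUSER350.ora';
```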
Recreating a PFILE
You can also create a PFILE from an SPFILE by exporting the contents through use of
the CREATE command. You do not have to specify file names as Oracle will use the spfile associated
with the ORACLE_SID for the database to which you are connected.
CREATE PFILE FROM SPFILE;
You would then edit the PFILE and use the CREATE command to create a new SPFILE from the
edited PFILE.
The STARTUP command is used to startup an Oracle database. You have learned about two different
initialization parameter files. There is a precedence to which initialization parameter file is read when an
Oracle database starts up as only one of them is used.
These priorities are used when you simply issue the STARTUP command within SQL to startup a
database.
· Oracle knows which database to startup based on the value of ORACLE_SID.
· Oracle uses the priorities listed below to decide which parameter file to use during startup.
STARTUP
· First Priority: the spfileSID.ora on the server side is used to start up the instance.
· Second Priority: If the spfileSID.ora is not found, the default SPFILE on the server side is
used to start the instance.
· Third Priority: If the default SPFILE is not found, the initSID.ora on the server side will be
used to start the instance.
A specified PFILE can override the use of the default SPFILE to start an instance. Examples:
STARTUP PFILE=$ORACLE_HOME/dbs/initUSER350.ora
Or
STARTUP PFILE=$HOME/initUSER350.ora
SPFILE=$HOME/initUSER350.ora
Earlier you read that an advantage of the SPFILE is that certain dynamic parameters can be changed
without shutting down the Oracle database. These changes are made as shown in the figure below by
using the ALTER SYSTEM command. Modifications made in this way change the contents of
the SPFILE. If you shutdown the database and startup again, the modifications you previously made will
take effect because the SPFILE was modified.
The ALTER SYSTEM SET command is used to change the value of instance parameters and has a
number of different options as shown here.
Here is an example coding script within SQL*Plus that demonstrates how to display current parameter
values and to alter these values.
You can also use the ALTER SYSTEM RESET command to delete a parameter setting or revert to a default
value for a parameter.
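A sketch of such a script; the parameter sga_target and the value 150M are illustrative:

```sql
-- Display the current setting.
SHOW PARAMETER sga_target

-- Change the value both in memory and in the SPFILE, so the
-- change persists across shutdown and startup.
ALTER SYSTEM SET sga_target = 150M SCOPE = BOTH;

-- Revert the SPFILE entry to its default value.
ALTER SYSTEM RESET sga_target SCOPE = SPFILE;
```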
Starting Up a Database
Instance Stages
Databases can be started up in various states or stages. The diagram shown below illustrates the stages
through which a database passes during startup and shutdown.
NOMOUNT: This stage is only used when first creating a database or when it is necessary to recreate a
database's control files. Startup includes the following tasks.
· Read the spfileSID.ora or spfile.ora or initSID.ora.
· Allocate the SGA.
· Startup the background processes.
· Open a log file named alert_SID.log and any trace files specified in the initialization
parameter file.
· Example startup commands for creating the Oracle database and for the database belonging
to USER350 are shown here.
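A minimal sketch of the command, issued from SQL*Plus while connected as SYSDBA (a PFILE may also be named explicitly, as described earlier):

```sql
-- Start only the instance: read the parameter file, allocate the SGA,
-- and start the background processes. No control file is opened.
STARTUP NOMOUNT
```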
MOUNT: This stage is used for specific maintenance operations. The database is mounted, but not
open. You can use this option if you need to:
· Rename datafiles.
· Enable/disable redo log archiving options.
· Perform full database recovery.
· When a database is mounted it
o is associated with the instance that was started during NOMOUNT stage.
o locates and opens the control files specified in the parameter file.
o reads the control file to obtain the names/status of datafiles and redo log files, but it does
not check to verify the existence of these files.
· Example startup commands for maintaining the Oracle database and for the database
belonging to USER350 are shown here.
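A minimal sketch of the command:

```sql
-- Start the instance and mount the database: control files are opened
-- and read, but datafiles and redo log files are not yet opened.
STARTUP MOUNT
```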
OPEN: This stage is used for normal database operations. Any valid user can connect to the
database. Opening the database includes opening datafiles and redo log files. If any of these files are
missing, Oracle will return an error. If errors occurred during the previous database shutdown, the
SMON background process will initiate instance recovery. An example command to startup the database
in OPEN stage is shown here.
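A minimal sketch of the command:

```sql
-- Start the instance, mount, and open the database in one step.
STARTUP
-- Equivalently, the stage can be named explicitly:
-- STARTUP OPEN
```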
If the database initialization parameter file is in the default location at $ORACLE_HOME/dbs, then you
can simply type the command STARTUP and the database associated with the current value
of ORACLE_SID will startup.
You can force a restart of a running database that aborts the current Instance and starts a new normal
instance with the FORCE option.
Sometimes you will want to startup the database, but restrict connection to users with the RESTRICTED
SESSION privilege so that you can perform certain maintenance activities such as exporting or importing
part of the database.
You may also want to begin media recovery when the database starts if your system has suffered a
disk crash.
On a LINUX server, you can automate startup/shutdown of an Oracle database by making entries in a
special operating system file named oratab located in the /var/opt/oracle directory.
IMPORTANT NOTE: If an error occurs during a STARTUP command, you must issue
a SHUTDOWN command prior to issuing another STARTUP command.
You can change the stage of a database. This example changes the database from OPEN to READ ONLY.
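An open read-write database must first be restarted to the MOUNT stage before it can be opened read-only; a sketch of the sequence:

```sql
-- Close and restart to the MOUNT stage, then open read-only.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;
```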
Restricted Mode
Earlier you learned to startup the database in a restricted mode with the RESTRICT option. If the
database is open, you can change to a restricted mode with the ALTER SYSTEM command as shown
here. The first command restricts logon to users with restricted privileges. The second command
enables all users to connect.
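The two commands take this form:

```sql
-- Restrict logons to users with the RESTRICTED SESSION privilege.
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Allow all users to connect again.
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```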
One of the tasks you may perform during restricted session is to kill current user sessions prior to
performing a task such as the export of objects (tables, indexes, etc.). The ALTER SYSTEM KILL
SESSION 'integer1, integer2' command is used to do this. The values of integer1 and integer2 are
obtained from the SID and SERIAL# columns in the V$SESSION view. The first six SID values shown
below are for background processes and should be left alone! Notice that the
users SYS and USER350 are connected. We can kill the session for user account name DBOCKSTD.
Now when DBOCKSTD attempts to select data, the following message is received.
When a session is killed, PMON will rollback the user's current transaction and release all table and row
locks held and free all resources reserved for the user.
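A sketch of the sequence described above; the SID and SERIAL# values 15 and 8 are invented for illustration:

```sql
-- Find the target session's identifiers.
SELECT sid, serial#, username
FROM   v$session
WHERE  username = 'DBOCKSTD';

-- Kill the session using the values returned above
-- (assumed here to be SID 15, SERIAL# 8).
ALTER SYSTEM KILL SESSION '15,8';
```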
You can open a database as read-only provided it is not already open in read-write mode. This is useful
when you have a standby database that you want to use to enable system users to execute queries while
the production database is being maintained.
An Oracle database can be started in various modes. Each mode is used by the DBA to perform some
specific operation in the database.
There are three stages through which the database passes when starting:
NOMOUNT ==> MOUNT ==> OPEN
STARTUP NOMOUNT MODE:

SQL> STARTUP NOMOUNT

SQL> SELECT STATUS FROM V$INSTANCE;

STATUS
------------
STARTED
STARTUP MOUNT MODE: (Maintenance phase)
Mounting a database includes the following tasks:
Locating and opening the control file specified in the parameter file.
Reading the control file to obtain the names, status, and destinations of the data files and online
redo log files.
The MOUNT stage is used to perform special maintenance operations such as:
Renaming data files (data files for an offline tablespace can be renamed when the database is
open)
Enabling and disabling online redo log file archiving and flashback options.
Performing full database recovery.
SQL> ALTER DATABASE MOUNT;

Database altered.

SQL> SELECT STATUS FROM V$INSTANCE;

STATUS
------------
MOUNTED
(Or)
We can go directly from a shut-down database to a mounted database by typing the commands below.

SQL> SHUTDOWN
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> STARTUP MOUNT

SQL> SELECT STATUS FROM V$INSTANCE;

STATUS
------------
MOUNTED
STARTUP OPEN MODE: (Available for user access)
The last stage of the startup process is opening the database. When the database is started in open mode, all valid users can connect to the database and perform database operations. Prior to this stage, general users cannot connect to the database at all. You can bring the database into open mode by issuing the ALTER DATABASE command as follows:
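A sketch of that command:

```sql
SQL> ALTER DATABASE OPEN;
```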
To open the database, the Oracle server first opens all the data files and the online redo log files, and verifies that the database is consistent. If the database isn't consistent (for example, if the SCNs in the control files don't match some of the SCNs in the data file headers), the SMON background process will automatically perform an instance recovery before opening the database. If media recovery rather than instance recovery is needed, Oracle will signal that a database recovery is called for and won't open the database until you perform the recovery.
Opening a database includes the following tasks:
Open online data files
Open online redo log files
Command to bring the database from mount to open mode:
Database altered.
STATUS
------------
OPEN
(Or)
We can go directly from a shutdown database to an open database by typing the command below.
SQL> STARTUP
ORACLE instance started.
Apart from the above modes, there are other startup modes, as described below.
STARTUP FORCE MODE: (shutdown abort + startup)
STARTUP RESTRICT MODE:
If we start an Oracle database in restricted mode, then only those users who have the RESTRICTED SESSION privilege will be able to connect to the database.
STARTUP RESTRICT includes the following tasks:
It opens the database in restricted mode, where only users with the RESTRICTED SESSION privilege can access it.
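A sketch of starting in restricted mode and later releasing the restriction without a restart:

```sql
SQL> STARTUP RESTRICT
-- Later, allow general users to connect again:
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
```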
Whenever we shut a database down in a normal way, before shutting down, Oracle writes a common SCN to the file headers of the data files and to the control file.
But in the case of a SHUTDOWN ABORT, Oracle does not get the chance to write the common SCN. When we restart the database, Oracle will find that the SCNs in the data files and the control file do not match, and it will call SMON to perform 'crash recovery' or 'instance recovery'.
Database Shutdown
The SHUTDOWN command is used to shut down a database instance. You must be connected as either SYSOPER or SYSDBA to shut down a database. There are four options:
Shutdown Normal
Shutdown Transactional
Shutdown Immediate
Shutdown Abort: This is used if the normal, transactional, or immediate options fail. This is the LEAST favored option because the next startup will require instance recovery, and you CANNOT back up a database that has been shut down with the ABORT option.
· Current SQL statements are immediately terminated.
· Users are disconnected.
· Database and redo buffers are NOT written to disk.
· Uncommitted transactions are NOT rolled back.
· The Instance is terminated without closing files.
· The database is NOT closed or dismounted.
· Database recovery by SMON must occur on the next startup.
· The shutdown command is:
Shutdown Abort
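For comparison, the commonly preferred option in practice is SHUTDOWN IMMEDIATE, which rolls back uncommitted transactions and closes files cleanly (a sketch):

```sql
SQL> SHUTDOWN IMMEDIATE
```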
Diagnostic Files
These files are used to store information about database activities and are useful tools for
troubleshooting and managing a database. There are several types of diagnostic files.
Starting with Oracle 11g, the $ORACLE_BASE parameter value is the anchor for diagnostic and alert
files. New in Oracle 11g is the new ADR (Automatic Diagnostic Repository) and Incident Packaging
System. It is designed to allow quick access to alert and diagnostic information.
· The new $ADR_HOME directory is located by default at $ORACLE_BASE/diag.
· There are directories for each instance at $ORACLE_BASE/diag/rdbms/<db_name>/$ORACLE_SID.
· The new initialization parameter DIAGNOSTIC_DEST can be used to specify an alternative
location for the diag directory contents.
You can locate the alert log and other diagnostic files via standard SQL using the new V$DIAG_INFO view:
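The listing below can be produced with a simple query against the view:

```sql
SQL> SELECT name, value FROM v$diag_info;
```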
NAME VALUE
---------------------- -------------------------------------------------------
Diag Enabled TRUE
ADR Base /u01/app/oracle
ADR Home /u01/app/oracle/diag/rdbms/dborcl/DBORCL
Diag Trace /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace
Diag Alert /u01/app/oracle/diag/rdbms/dborcl/DBORCL/alert
Diag Incident /u01/app/oracle/diag/rdbms/dborcl/DBORCL/incident
Diag Cdump /u01/app/oracle/diag/rdbms/dborcl/DBORCL/cdump
Health Monitor /u01/app/oracle/diag/rdbms/dborcl/DBORCL/hm
Default Trace File /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace/DBORCL_o
ra_25119.trc
Active Problem Count 1
Active Incident Count 2
11 rows selected.
You can enable or disable user tracing with the ALTER SESSION command as shown here.
· You can also set the SQL_TRACE = TRUE parameter in the initialization parameter files.
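A sketch of session-level tracing (the resulting trace file is written to the Diag Trace directory shown earlier):

```sql
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
-- ... run the statements to be traced ...
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
```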
4. If we start the database using a PFILE, then the System Global Area is static, i.e., we cannot modify the System Global Area at runtime.
7. We can create a PFILE from an SPFILE using the command: SQL> CREATE PFILE FROM SPFILE;
SPFILE:
1. The spfile is a server parameter file.
4. If we start the database using an SPFILE, then the System Global Area is dynamic, i.e., we can modify the System Global Area at runtime without shutting down the database.
SQL> startup
7. We can create an SPFILE from a PFILE using the command: SQL> CREATE SPFILE FROM PFILE;
As the SPFILE is a server-side binary file, a local copy of the PFILE is not required to start Oracle from a remote machine. This eliminates configuration problems.
The SPFILE is a binary file, and modifications to it can only be made through the ALTER SYSTEM SET command.
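A sketch of such a change (the parameter and size are illustrative); the SCOPE clause controls whether the change applies to the running instance only (MEMORY), to the SPFILE for the next startup (SPFILE), or both (BOTH):

```sql
SQL> ALTER SYSTEM SET db_cache_size = 64M SCOPE = BOTH;
```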
As the SPFILE is maintained by the server, human errors can be eliminated because parameters are checked before any modification of the SPFILE is accepted.
Changes to parameters in the SPFILE can take immediate effect without a restart of the instance, i.e., dynamic changing of parameters is possible.
When an Oracle instance is started, the different memory structures of the instance are established by parameters specified within the initialization parameter file. These initialization parameters are stored in either a PFILE or an SPFILE. SPFILEs are available in Oracle 9i and above; all prior releases of Oracle use PFILEs.
3. The SPFILE is maintained by the server. Parameters are checked before changes are accepted.
4. Eliminate configuration problems (no need to have a local PFILE if you want to start Oracle from a
remote machine)
Execute the following query to see if your database was started with a PFILE or SPFILE:
NOTE
1) To fire the above command, you must at least be connected to an idle instance of the target database.
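One common form of that query (a sketch): if the spfile parameter has a value, the instance was started with an SPFILE.

```sql
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') AS init_file_type
  FROM v$parameter
 WHERE name = 'spfile';
```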
In Oracle 10g you can make memory management automatic: Oracle will allocate and deallocate memory for each dynamic memory component based on changing database workloads at runtime.
The benefits of ASMM are:
1) SGA_MAX_SIZE: This parameter sets the upper bound for SGA_TARGET; SGA_TARGET <= SGA_MAX_SIZE must always hold. It is a static parameter: if we alter it, the instance must be restarted for the change to take effect.
2) SGA_TARGET: This parameter specifies the total size of the SGA. If SGA_TARGET > 0, then Automatic Shared Memory Management (ASMM) is enabled. It is a dynamic parameter; however, SGA_TARGET cannot exceed SGA_MAX_SIZE.
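A sketch of resizing the SGA target on a running instance (the size is illustrative and must not exceed SGA_MAX_SIZE):

```sql
SQL> ALTER SYSTEM SET sga_target = 600M SCOPE = BOTH;
```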
STORAGE MANAGEMENT
Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks, extents, and segments. Figure 2-1 shows the relationships among these data structures:
At the finest level of granularity, Oracle stores data in data blocks (also called logical blocks, Oracle
blocks, or pages). One data block corresponds to a specific number of bytes of physical database space
on disk.
The next level of logical database space is an extent. An extent is a specific number of contiguous data
blocks allocated for storing a specific type of information.
The level of logical database storage greater than an extent is called a segment. A segment is a set of
extents, each of which has been allocated for a specific data structure and all of which are stored in the
same tablespace. For example, each table's data is stored in its own data segment, while each index's
data is stored in its own index segment. If the table or index is partitioned, each partition is stored in its
own segment.
Oracle allocates space for segments in units of one extent. When the existing extents of a segment are
full, Oracle allocates another extent for that segment. Because extents are allocated as needed, the
extents of a segment may or may not be contiguous on disk.
A segment and all its extents are stored in one tablespace. Within a tablespace, a segment can include
extents from more than one file; that is, the segment can span datafiles. However, each extent can
contain data from only one datafile.
Although you can allocate additional extents, the blocks themselves are allocated separately. If you
allocate an extent to a specific instance, the blocks are immediately allocated to the free list. However, if
the extent is not allocated to a specific instance, then the blocks themselves are allocated only when the
high water mark moves. The high water mark is the boundary between used and unused space in a
segment.
Segment Types
Objects in an Oracle database such as tables, indexes, clusters, sequences, etc., are comprised
of segments. There are several different types of segments.
Table: Data are stored in tables. When a table is created with the CREATE TABLE command, a table
segment is allocated to the new object.
· Table segments do not store table rows in any particular order.
· Table segments do not store data that is clustered or partitioned.
· The DBA has almost no control over the location of rows in a table.
· The segment belongs to a single tablespace.
Table Partition: If a table has high concurrent usage, that is simultaneous access by many different
system users as would be the case for a SALES_ORDER table in an online-transaction processing
environment, you will be concerned with scalability and availability of information as the DBA. This may
lead you to create a table that is partitioned into more than one table partition segment.
· A partitioned table has a separate segment for each partition.
· Each partition may reside in a different tablespace.
· Each partition may have different storage parameters.
· The Oracle Enterprise Edition must have the partitioning option installed in order to create a
partitioned table.
Cluster: Rows in a cluster segment are stored based on key value columns. Clustering is sometimes
used where two tables are related in a strong-weak entity relationship.
· A cluster may contain rows from two or more tables.
· All of the tables in a cluster belong to the same segment and have the same storage
parameters.
· Clustered table rows can be accessed by either a hashing algorithm or by indexing.
Index: When an index is created as part of the CREATE TABLE or CREATE INDEX command, an index
segment is created.
· Tables may have more than one index, and each index has its own segment.
· Each index segment has a single purpose – to speed up the process of locating rows in a table
or cluster.
Index-Organized Table: This special type of table has data stored within the index based on primary
key values. All data is retrievable directly from the index structure (a tree structure).
Index Partition: Just as a table can be partitioned, so can an index. The purpose of using a
partitioned index is to minimize contention for the I/O path by spreading index input-output across more
than one I/O path.
· Each partition can be in a different tablespace.
· The partitioning option of Oracle Enterprise Edition must be installed.
Undo: An undo segment is used to store "before images" of data or index blocks prior to changes being
made during transaction processing. This allows a rollback using the before image information.
Temporary: Temporary segments are created when commands and clauses such as CREATE INDEX, SELECT DISTINCT, GROUP BY, and ORDER BY cause Oracle to perform memory sort operations.
· Often sort actions require more memory than is available.
· When this occurs, intermediate results of sort actions are written to disk so that the sort
operation can continue – this allows information to swap in and out of memory by writing/reading
to/from disk.
· Temporary segments store intermediate sort results.
LOB: Large objects can be stored as one or more columns in a table. Large objects (LOBs) include
images, separate text documents, video, sound files, etc.
· These LOBs are not stored in the table – they are stored as separate segment objects.
· The table with the column actually has a "pointer" value stored in the column that points to
the location of the LOB.
Nested Table: A column in one table may consist of another table definition. The inner table is called a
"nested table" and is stored as a separate segment. This would be done for a SALES_ORDER table that
has the SALES_DETAILS (order line rows) stored as a nested table.
Bootstrap Segment: This is a special cache segment created by the sql.bsq script that runs when a
database is created.
· It stores initial data dictionary cache information when a database is opened.
· This segment cannot be queried or updated and requires no DBA maintenance.
Storage Clauses/Parameters
When database objects are created, the object always has a set of storage parameters. This figure
shows three ways that an object can obtain storage clause parameters.
Extents
Extents are allocated in chunks that are not necessarily uniform in size, but the space allocated is
contiguous on the disk drive as is shown in this figure.
· When a database object such as a table grows, additional disk space is allocated to its
segment of the tablespace in the form of an extent.
· This figure shows two extents of different sizes for the Department table segment.
In order to develop an understanding of extent allocation to segments, review this CREATE
TABLESPACE command.
· INITIAL specifies the initial extent size (the first extent allocated).
o A size that is too large here can cause failure of the database if there is not any area on
the disk drive with sufficient contiguous disk space to satisfy the INITIAL parameter.
o When a database is built to store information from an older system that is being
converted to Oracle, a DBA may have some information about how large initial extents
need to be in general and may specify a larger size as is done here at 128K.
· NEXT specifies the size of the next extent (2nd, 3rd, etc).
o This is termed an incremental extent.
o This can also cause failure if the size is too large.
o Usually a smaller value is used, but if the value is too small, segment fragmentation can
result.
o This must be monitored periodically by a DBA which is why dictionary managed
tablespaces are NOT preferred.
· PCTINCREASE can be very troublesome.
o If you set this very high, e.g. 50% as is shown here, the segment extent size can
increase by 7,655% over just 10 extents.
o Best solution: a single INITIAL extent of the correct size followed by a small value
for NEXT and a value of 0 (or a small value such as 5) for PCTINCREASE.
Use smaller default INITIAL and NEXT values for a dictionary-managed tablespace's default storage clauses, as these defaults can be overridden during the creation of individual objects (tables, indexes, etc.) where the STORAGE clause is used.
· MINEXTENTS and MAXEXTENTS parameters specify the minimum and maximum number of
extents allocated by default to segments that are part of the tablespace.
The default storage parameters can be overridden when a segment is created as is illustrated in this next
section.
PCTFREE 5 PCTUSED 65
STORAGE (
INITIAL 48K
NEXT 48K
PCTINCREASE 5
MINEXTENTS 1
MAXEXTENTS UNLIMITED)
TABLESPACE Data01;
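Read together with the preceding text, the fragment above is the tail of a CREATE TABLE statement. A complete sketch, with illustrative table and column names:

```sql
CREATE TABLE orders (           -- hypothetical table
    order_id   NUMBER,
    order_date DATE )
  PCTFREE 5 PCTUSED 65
  STORAGE (
    INITIAL 48K
    NEXT 48K
    PCTINCREASE 5
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED )
  TABLESPACE Data01;
```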
Allocation/Deallocation: When a tablespace is initially created, the first datafile (and subsequent
datafiles) created to store the tablespace has a header which may be one or more blocks at the
beginning of the file as is shown in the figure below.
· As segments are created, extended, or altered free extents are allocated.
· The below figure shows that extents can vary in size.
· This figure represents a Locally Managed tablespace where the Locally Managed tablespace's
extent size is specified by the EXTENT MANAGEMENT LOCAL AUTOALLOCATE clause—recall
that AUTOALLOCATE enables Oracle to decide the appropriate extent size for a segment. In an
older Oracle database, it could also represent a Dictionary Managed tablespace.
· As segments are dropped, altered, or truncated, extents are released to become free extents
available for reallocation.
· The first extent is allocated to a segment, even though the data blocks may be empty.
· Oracle formats the blocks for an extent only as they are used - they can actually contain old
data.
· Extents for a segment must always be in the same tablespace, but can be in different datafiles.
· The first data block of every segment contains a directory of the extents in the segment.
· If you delete data from a segment, the extents/blocks are not returned to the tablespace for
reuse. Deallocation occurs when:
o You DROP a segment.
o You use an online segment shrink to reclaim fragmented space in a segment.
Segment is the generic name used in Oracle databases to represent objects like tables, indexes or
partitions. These are stored in Data Files in pieces called Extents.
The Segments can be either in MANUAL mode or in AUTO mode - Automatic Segment Space
Management (ASSM)
In earlier Oracle versions, the MANUAL mode managed free blocks and free space in a Free List stored in the Data Dictionary, which overloaded the SYSTEM tablespace. Since Oracle 10g and ASSM, the free blocks and the free space are managed in a bitmap in the segment header blocks of each segment.
The 2 main views to find segments and extents information are: dba_segments and dba_extents
Another important concept to understand, in the case of table segments, is the "High Water Mark" (HWM). It defines the position of the last formatted block of the segment. This means that in the case of a full table scan (FTS, i.e., SELECT * FROM table1;), Oracle will read all of the segment's blocks up to the HWM position.
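A sketch of querying those views for a single table (owner and segment names are illustrative):

```sql
-- Segment-level summary
SELECT segment_name, segment_type, bytes, extents
  FROM dba_segments
 WHERE owner = 'SCOTT' AND segment_name = 'EMP';

-- Individual extents backing the segment
SELECT extent_id, file_id, block_id, blocks
  FROM dba_extents
 WHERE owner = 'SCOTT' AND segment_name = 'EMP';
```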
Database Block
The Database Block or simply Data Block, as you have learned, is the smallest size unit for
input/output from/to disk in an Oracle database.
· A data block may be equal in size to an operating system block, or may be larger, and should be a multiple of the operating system block size.
· The DB_BLOCK_SIZE parameter sets the size of a database's standard blocks at the time
that a database is created.
· DB_BLOCK_SIZE has to be a multiple of the physical block size allowed by the operating
system for a server’s storage devices.
· If DB_BLOCK_SIZE is not set, then the default data block size is operating system-specific.
The standard data block size for a database is 4KB or 8KB.
· Oracle also supports the creation of databases that have more than one block size. This is
primarily done when you need to specify tablespaces with different block sizes in order to
maximize I/O performance.
· You've already learned that a database can have up to four nonstandard block
sizes specified.
· Block sizes must be sized as a power of two between 2K and 32K in size,
e.g., 2K, 4K, 8K, 16K, or 32K.
· A sub cache of the Database Buffer Cache is configured by Oracle for each nonstandard block
size.
Standard Block Size: The DB_CACHE_SIZE parameter specifies the size of the Database Buffer
Cache. However, if SGA_TARGET is set and DB_CACHE_SIZE is not, then Oracle decides how much
memory to allocate to the Database Buffer Cache. The minimum size for DB_CACHE_SIZE must be
specified as follows:
· One granule where a granule is a unit of contiguous virtual memory allocation in RAM.
· If the total System Global Area (SGA) based on SGA_MAX_SIZE is less than 128MB, then a
granule is 4MB.
· If the total SGA is greater than 128MB, then a granule is 16MB.
· The default value for DB_CACHE_SIZE is 48MB rounded up to the nearest granule size.
Nonstandard Block Size: If a DBA wishes to specify one or more nonstandard block sizes, the following parameters are set.
· The data block sizes should be a multiple of the operating system's block size within the
maximum limit to avoid unnecessary I/O.
· Oracle data blocks are the smallest units of storage that Oracle can use or allocate.
· Do not use the specified DB_BLOCK_SIZE value to set nonstandard block sizes.
· For example, if the standard block size is 8K, do not use
the DB_8K_CACHE_SIZE parameter.
· DB_2K_CACHE_SIZE -- parameter for 2K nonstandard block sizes.
· DB_4K_CACHE_SIZE -- parameter for 4K nonstandard block sizes.
· DB_8K_CACHE_SIZE -- parameter for 8K nonstandard block sizes.
· DB_16K_CACHE_SIZE -- parameter for 16K nonstandard block sizes.
· DB_32K_CACHE_SIZE -- parameter for 32K nonstandard block sizes.
Nonstandard Block Size Tablespaces: The BLOCKSIZE parameter is used to create a tablespace
with a nonstandard block size. Example:
· Here the nonstandard block size specified with the BLOCKSIZE clause is 32K.
· This command will not execute unless the DB_32K_CACHE_SIZE parameter has already
been specified because buffers of size 32K must already be allocated in the Database Buffer
Cache as part of a sub cache.
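A sketch of such a command (tablespace name, datafile path, and sizes are illustrative; DB_32K_CACHE_SIZE must already be set):

```sql
CREATE TABLESPACE big_block_ts
  DATAFILE '/u01/app/oracle/oradata/DBORCL/bigblock01.dbf' SIZE 100M
  BLOCKSIZE 32K;
```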
There are some additional rules regarding the use of multiple block sizes:
· If an object is partitioned and resides in more than one tablespace, all of the tablespaces
where the object resides must be the same block size.
· Temporary tablespaces must be the standard block size. This also applies to permanent
tablespaces that have been specified as default temporary tablespaces for system users.
This figure shows the components of a data block. This is the structure regardless of the type of
segment to which the block belongs.
Block header – contains common and variable components including the block address, segment type,
and transaction slot information.
· The block header also includes the table directory and row directory.
· On average, the fixed and variable portions of block overhead total 84 to 107 bytes.
· Table Directory – used to track the tables to which row data in the block belongs.
o Data from more than one table may be in a single block if the data are clustered.
o The Table Directory is only used if data rows from more than one table are
stored in the block, for example, a cluster.
· Row Directory - used to track which rows from a table are in this block.
o The Row Directory includes an entry for each row or row fragment in the row data area.
o When space is allocated in the Row Directory to store information about a row, this
space is not reclaimed upon deletion of a row, but is reclaimed when new rows are
inserted into the block.
o A block can be empty of rows, but if it once contained rows, then data will be allocated in
the Row Directory (2 bytes per row) for each row that ever existed in the block.
· Transaction Slots are space that is used when transactions are in progress that will modify
rows in the block.
· The block header grows from top down.
· Data space (Row Data) – stores row data that is inserted from the bottom up.
Free space in the middle of a block can be allocated to either the header or data space, and is
contiguous when the block is first allocated.
· Free space is allocated to allow variable character and numeric data to expand and contract as
data values in existing rows are modified.
· New rows are also inserted into free space.
· Free space may fragment as rows in the block are modified or deleted.
Oracle (the SMON background process) automatically and transparently coalesces the free space of a
data block periodically only when the following conditions are true:
· An INSERT or UPDATE statement attempts to use a block that contains sufficient free space
to contain a new row piece.
· The free space is fragmented so that the row piece cannot be inserted in a contiguous section
of the block.
After coalescing, the amount of free space is identical to the amount before the operation, but the space
is now contiguous. This figure shows before and after coalescing free space.
Table Data in a Segment: Table data is stored in the form of rows in a data block.
· The figures below show the block header then the data space (row data) and the free space.
· Each row consists of columns with associated overhead.
· The storage overhead is in the form of "hidden" columns accessible by the DBMS that specify
the length of each succeeding column.
· Rows are stored right next to each other with no spaces in between.
· Column values are stored right next to each other in a variable length format.
· The length of a field indicates the length of each column value (variable length - Note the
Length Column 1, Length Column 2, etc., entries in the figure).
· Column length of 0 indicates a null field.
· Trailing null fields are not stored.
When a row is chained or migrated, I/O performance associated with this row decreases because Oracle
must scan more than one data block to retrieve the information for the row.
Manual Data Block Free Space Management -- Database Block Space Utilization Parameters
Manual data block management requires a DBA to specify how block space is used and when a block is
available for new row insertions.
· This is the default method for data block management for dictionary managed
tablespace objects (another reason for using locally managed tablespaces with UNIFORM
extents).
· Database block space utilization parameters are used to control space allocation for data and
index segments.
Example: Suppose a DBA sets INITRANS to 4 and MAXTRANS to 10. Initially, 4 transaction slots are allocated in the block header. If 6 system users process concurrent transactions for a given block, then the number of transaction slots increases by 2, to 6 slots. Once this space is allocated in the header, it is not deallocated.
What happens if 11 system users attempt to process concurrent transactions for a given block? The
11th system user is denied access – an Oracle error message is generated – until current transactions
complete (either are committed or rolled back).
You, as the DBA, must decide how much Free Space is needed for data blocks in manual management
of data blocks.
You set the free space with the PCTFREE and PCTUSED parameters at the time that you create an
object like a Table or Index.
PCTFREE: The PCTFREE parameter is used at the time an object is created to set the percentage of
usable block space to be reserved during row insertion for possible later updates to rows in the block.
· PCTFREE is the only space parameter used for Automatic Segment Space Management.
· The parameter guarantees that at least PCTFREE space is reserved for updates to existing
data rows. PCTFREE reserves space for growth of existing rows through the modification of data
values.
· This figure shows the situation where the PCTFREE parameter is set to 20 (20%).
· The default value for PCTFREE is 10%.
· New rows can be added to a data block as long as the amount of space remaining is at or
greater than PCTFREE.
· After PCTFREE is met (this means that there is less space available than
the PCTFREE setting), Oracle considers the block full and will not insert new rows to the block.
PCTUSED: The parameter PCTUSED is used to set the level at which a block can again be considered
by Oracle for insertion of new rows. It is like a low water mark whereas PCTFREE is a high water
mark. The PCTUSED parameter sets the minimum percentage of a block that can be used for row data
plus overhead before new rows are added to the block.
· After a data block is filled to the limit determined by PCTFREE, Oracle Database considers the
block unavailable for the insertion of new rows until the percentage of that block falls beneath
the parameter PCTUSED.
· As free space grows (the space allocated to rows in a database block decreases due to
deletions or updates), the block can again have new rows inserted but only if the percentage of
the data block in use falls below PCTUSED.
· Example: if PCTUSED is set at 40, once PCTFREE is hit, the percentage of block space
used must drop to 39% or less before row insertions are again made.
· The system default for PCTUSED is 40.
· Oracle tries to keep a data block at least PCTUSED full before using new blocks.
· The PCTUSED parameter is not set when Automatic Segment Space Management is
enabled. This parameter only applies when Manual Segment Space Management is in use.
This figure depicts the situation where PCTUSED is set to 40 and PCTFREE is set to 20 (40% and 20%
respectively).
Both PCTFREE and PCTUSED are calculated as percentages of the available data space – Oracle deducts
the space allocated to the block header from the total block size when computing these parameters.
Generally, PCTFREE plus PCTUSED should add up to 80. The sum of PCTFREE and PCTUSED cannot exceed 100. For example, if PCTFREE is 20 and PCTUSED is 60, this ensures at least 60% of each block is used while reserving 20% for row updates.
A low PCTFREE has these effects (basically the opposite effect of high PCTFREE):
· There is less space for growth of existing rows.
· Performance may suffer due to the need to reorganize data in data blocks more frequently:
o Oracle may need to migrate a row that will no longer fit into a data block due to
modification of data within the row.
o If the row will no longer fit into a single database block, as may be the case for very large
rows, then database blocks are chained together logically with pointers. This also
causes a performance hit. This may also cause a DBA to consider the use of a
nonstandard block size. In these situations, I/O performance will degrade.
o Examine the extent of chaining or migrating with the ANALYZE command. You may
resolve row chaining and migration by exporting the object (table), dropping the object,
and then importing the object.
· Chaining may increase resulting in additional Input/output operations.
· Very little storage space within a data block is wasted.
If data for an object tends to be fairly stable (doesn't change in value very much), not much free space is
needed (as little as 5%). If changes occur extremely often and data values are very volatile, you may
need as much as 40% free space. Once this parameter is set, it cannot be changed without at least
partially recreating the object affected.
· Update activity with high row growth – the application uses tables that are frequently
updated affecting row size – set PCTFREE moderately high and PCTUSED moderately low to
allow for space for row growth.
PCTFREE = 20 to 25
PCTUSED = 35 to 40
(100 – PCTFREE) – PCTUSED = 35 to 45
· Insert activity with low row growth – the application has more insertions of new rows with very little modification of existing rows – set PCTFREE low and PCTUSED at a moderate level. This will avoid row chaining. Each data block has its space well utilized, but once new row insertion stops, there are no more row insertions until a lot of storage space again becomes available in a data block.
PCTFREE = 5 to 10
PCTUSED = 50 to 60
(100 – PCTFREE) – PCTUSED = 30 to 45
· Performance primary importance and disk space is readily available – when disk space
is abundant and performance is the critical issue, a DBA must ensure minimal migration or
chaining occurs by using very high PCTFREE and very low PCTUSED settings. A lot of storage
space will be wasted to minimize migration and chaining.
PCTFREE = 30
PCTUSED = 30
(100 – PCTFREE) – PCTUSED = 40
· Disk space usage is important and performance is secondary – the application uses
large tables and disk space usage is critical. Here PCTFREE should be very low
while PCTUSED is very high – the tables will experience some data row migration and chaining
with a performance hit.
PCTFREE = 5
PCTUSED = 90
(100 – PCTFREE) – PCTUSED = 5
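PCTFREE and PCTUSED are set per object in the storage specification of a CREATE TABLE (or
CREATE INDEX) statement. A hedged sketch for the update-heavy case, with an illustrative table name
and columns:
SQL> CREATE TABLE orders
( order_id NUMBER,
  status VARCHAR2(30) )
PCTFREE 25 PCTUSED 40;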
Free lists: With Manual Segment Space Management, when a segment is created, it is created with
a Free List that is used to track the blocks allocated to the segment that are available for row
insertions.
· A segment can have more than one free list if the FREELISTS parameter is specified in the
storage clause when an object is created.
· If a block has free space that falls below PCTFREE, that block is removed from the free list.
· Oracle improves performance by not considering blocks that are almost full as candidates for
row insertions.
With Automatic Segment Space Management, the free and used space for a segment is tracked with bitmaps instead of free lists.
· The bitmap is stored in the header section of the segment, in a separate set of blocks
called bitmapped blocks.
· The bitmap tracks the status of each block in a segment with respect to available space.
· Think of an individual bit as either being "on" to indicate the block is available for
insertions or "off" to indicate that it is not.
· When a new row needs to be inserted into a segment, the bitmap is searched for a candidate
block. This search occurs much more rapidly than with a Free List because
the bitmap can often be stored entirely in memory, whereas the use of a Free List requires
searching a chained data structure (linked list).
Automatic segment management can only be enabled at the tablespace level, and only if the tablespace
is locally managed. An example CREATE TABLESPACE command is shown here.
The SEGMENT SPACE MANAGEMENT AUTO clause specifies the creation of the bitmapped segments.
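The example command (missing from this copy) can be sketched as follows; the tablespace name and
file path are illustrative:
SQL> CREATE TABLESPACE data01 DATAFILE '/oradata/data01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;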
Statements that can increase the amount of free space in a database block:
· DELETE statements that delete rows, and
· UPDATE statements that update a column value to a smaller value than was previously
required.
· INSERT statements, but only if the tablespace allows for compression and the INSERT causes
data to be compressed, thereby freeing up some space in a block.
· The space released by DELETE and UPDATE statements can be used subsequently by an INSERT
statement.
· Released space may or may not be contiguous with the main area of free space in a data
block.
Oracle performs this compression only in such situations, because otherwise the performance of a
database system would decrease due to continuous compression of the free space in data blocks.
Periodically you will need to obtain information from the data dictionary about storage parameter
settings. The following views are useful.
· DBA_EXTENTS – information on space allocation for segments.
· DBA_SEGMENTS – stores information on segments.
· DBA_TABLESPACES – a row is added when a tablespace is created.
· DBA_DATA_FILES – a row is added for each datafile in the database.
· DBA_FREE_SPACE – shows the space in each datafile that is free.
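For example, an illustrative query against DBA_FREE_SPACE to summarize free space per tablespace:
SQL> SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM dba_free_space
GROUP BY tablespace_name;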
TABLESPACE MANAGEMENT:
Tablespace Types
There are three types of tablespaces: (1) permanent, (2) undo, and (3) temporary.
· Permanent – These tablespaces store objects in segments that are permanent – that persist
beyond the duration of a session or transaction.
· Undo – These tablespaces store segments that may be retained beyond a transaction, but are
basically used to:
o Provide read consistency for SELECT statements that access tables that have rows that
are in the process of being modified.
o Provide the ability to roll back a transaction that fails to commit.
· Temporary – This tablespace stores segments that are transient and only exist for the
duration of a session or a transaction. Mostly, a temporary tablespace stores rows for sort and
join operations.
Beginning with Oracle 10g (and continuing in Oracle 11g), the smallest Oracle database consists of two tablespaces.
o SYSTEM – stores the data dictionary.
o SYSAUX – stores data for auxiliary applications (covered in more detail later in these notes).
In reality, a typical production database has numerous tablespaces. These include SYSTEM and NON-
SYSTEM tablespaces.
SYSTEM – a tablespace that is always used to store SYSTEM data that includes data about tables,
indexes, sequences, and other objects – this metadata comprises the data dictionary.
· Every Oracle database has to have a SYSTEM tablespace—it is the first tablespace created
when a database is created.
· Accessing it requires a higher level of privilege.
· You cannot rename or drop a SYSTEM tablespace.
· You cannot take a SYSTEM tablespace offline.
· The SYSTEM tablespace could store user data, but this is not normally done—a good rule to
follow is to never allow the storage of user segments in the SYSTEM tablespace.
· This tablespace always has a SYSTEM Undo segment.
The SYSAUX tablespace stores data for auxiliary applications such as the LogMiner, Workspace Manager,
Oracle Data Mining, Oracle Streams, and many other Oracle tools.
· This tablespace is automatically created if you use the Database Creation
Assistant software to build an Oracle database.
· Like the SYSTEM tablespace, SYSAUX requires a higher level of security and it cannot be
dropped or renamed.
· Do not allow user objects to be stored in SYSAUX. This tablespace should only store system
specific objects.
· This is a permanent tablespace.
All other tablespaces are referred to as Non-SYSTEM. Typically one tablespace stores
organizational data in tables accessed by application programs, another stores undo
information, and so on. There are several reasons for having more than one tablespace:
· Flexibility in database administration.
· Separate data by backup requirements.
· Separate dynamic and static data to enable database tuning.
· Control space allocation for both applications and system users.
· Reduce contention for input/output path access (to/from memory/disk).
The full CREATE TABLESPACE (and CREATE TEMPORARY TABLESPACE) command syntax is shown
here.
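The syntax figure is missing from this copy; a simplified reconstruction covering the clauses discussed
below (not the complete grammar – see the Oracle SQL Reference for that) is:
CREATE TABLESPACE tablespace_name
  DATAFILE 'filename' [SIZE integer [K|M]] [REUSE] [AUTOEXTEND ON|OFF]
  [MINIMUM EXTENT integer [K|M]]
  [BLOCKSIZE integer [K]]
  [LOGGING | NOLOGGING]
  [DEFAULT storage_clause]
  [ONLINE | OFFLINE]
  [PERMANENT | TEMPORARY]
  [extent_management_clause]
  [segment_management_clause];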
As you can see, almost all of the clauses are optional. The clauses are defined as follows:
· MINIMUM EXTENT: Every used extent for the tablespace will be a multiple of this integer
value. Use either T, G, M or K to specify terabytes, gigabytes, megabytes, or kilobytes.
· BLOCKSIZE: This specifies a nonstandard block size – this clause can only be used if the
DB_CACHE_SIZE parameter is used and at least one DB_nK_CACHE_SIZE parameter is set, and
the integer value for BLOCKSIZE must correspond with one of the DB_nK_CACHE_SIZE parameter
settings.
· LOGGING: This is the default – all tables, indexes, and partitions within a tablespace have
modifications written to Online Redo Logs.
· NOLOGGING: This option is the opposite of LOGGING and is used most often when large
direct loads of clean data are done during database creation for systems that are being ported
from another file system or DBMS to Oracle.
· DEFAULT storage_clause: This specifies default parameters for objects created inside the
tablespace. Individual storage clauses can be used when objects are created to override the
specified DEFAULT.
· OFFLINE: This parameter causes a tablespace to be unavailable after creation.
· PERMANENT: A permanent tablespace can hold permanent database objects.
· TEMPORARY: A temporary tablespace can hold temporary database objects, e.g., segments
created during sorts resulting from ORDER BY clauses or joins of multiple tables. A
temporary tablespace cannot be specified for EXTENT MANAGEMENT LOCAL or have the
BLOCKSIZE clause specified.
· extent_management_clause: This clause specifies how the extents of the tablespace are
managed and is covered in detail later in these notes.
· segment_management_clause: This specifies how Oracle will track used and free space in
segments in a tablespace that is using free lists or bitmap objects.
· datafile_clause: filename [SIZE integer [K|M]] [REUSE] [AUTOEXTEND ON | OFF]
o filename: includes the path and the filename.
o REUSE: specified to reuse an existing file.
· NEXT: Specifies the size of the next extent.
· MAXSIZE: Specifies the maximum disk space allocated to the tablespace. Usually set in
megabytes, e.g., 400M or specified as UNLIMITED.
When you create a tablespace, if you do not specify extent management, the default is locally managed.
Locally Managed
The extents allocated to a locally managed tablespace are managed through the use of bitmaps.
· Each bit corresponds to a block or group of blocks (an extent).
· The bitmap value (on or off) corresponds to whether or not an extent is allocated or free for
reuse.
Using LMT, each tablespace manages its own free and used space within a bitmap structure stored in one
of the tablespace's data files. Each bit corresponds to a database block or group of blocks. Execute one
of the following statements to create a locally managed tablespace:
SQL> CREATE TABLESPACE ts2 DATAFILE '/oradata/ts2_01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
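The UNIFORM alternative might look like this; the tablespace name and file path are illustrative:
SQL> CREATE TABLESPACE ts3 DATAFILE '/oradata/ts3_01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;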
Segment Space Management eliminates the need to specify and tune the PCTUSED, FREELISTS, and
FREELIST GROUPS storage parameters for schema objects. The Automatic Segment Space Management
feature significantly improves the performance of concurrent DML operations, since different parts of the
bitmap can be used simultaneously, eliminating serialization of free space lookups against the
FREELISTS. This is of particular importance when using RAC, or if "buffer busy waits" are detected.
Convert between LMT and DMT:
The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert between LMT and DMT
mode. Look at these examples:
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.
· Local management is the default for the SYSTEM tablespace beginning with Oracle 10g.
· When the SYSTEM tablespace is locally managed, the other tablespaces in the database must
also be either locally managed or read-only.
· Local management reduces contention for the SYSTEM tablespace because space allocation
and deallocation operations for other tablespaces do not need to use data dictionary tables.
· The LOCAL option is the default so it is normally not specified.
· With the LOCAL option, you cannot specify any DEFAULT STORAGE, MINIMUM EXTENT,
or TEMPORARY clauses.
A locally managed tablespace maintains a bitmap in the data file header to track free and used space in
the data file body. Each bit corresponds to a group of blocks. When space is allocated or freed, Oracle
Database changes the bitmap values to reflect the new status of the blocks.
In this way, the database eliminates the need to coalesce free extents.
Alternatively, all extents can have the same size in a locally managed tablespace, overriding
object storage options.
Extent Management
· UNIFORM – a specification of UNIFORM means that the tablespace is managed in uniform
extents of the SIZE specified.
o use UNIFORM to enable exact control over unused space and when you can predict the space
that needs to be allocated for an object or objects.
o Use K, M, G, T, etc to specify the extent size in kilobytes, megabytes, gigabytes, terabytes,
etc. The default is 1M; however, you can specify the extent size with the SIZE clause of
the UNIFORM clause.
o For our small student databases, a good SIZE clause value is 128K.
o You must ensure with this setting that each extent has at least 5 database blocks.
· AUTOALLOCATE – if AUTOALLOCATE is specified instead of UNIFORM, the
tablespace is system managed and you cannot specify extent sizes.
o AUTOALLOCATE is the default.
§ this simplifies disk space allocation because the database automatically selects the
appropriate extent size.
§ this does waste some space but simplifies management of tablespace.
o Tablespaces with AUTOALLOCATE are allocated minimum extent sizes of 64K with a minimum
of 5 database blocks per extent.
Advantages of Local Management: Basically all of these advantages lead to improved system
performance in terms of response time, particularly the elimination of the need to coalesce free extents.
· Local management avoids recursive space management operations. This can occur in
dictionary managed tablespaces if consuming or releasing space in an extent results in another
operation that consumes or releases space in an undo segment or data dictionary table.
· Because locally managed tablespaces do not record free space in data dictionary tables, they
reduce contention on these tables.
· Local management of extents automatically tracks adjacent free space, eliminating the need to
coalesce free extents.
· The sizes of extents that are managed locally can be determined automatically by the system.
· Changes to the extent bitmaps do not generate undo information because they do not update
tables in the data dictionary (except for special cases such as tablespace quota information).
Example: CREATE TABLESPACE command – this creates a locally managed Inventory tablespace with
AUTOALLOCATE management of extents.
Example: CREATE TABLESPACE command – this creates a locally managed Inventory tablespace with
UNIFORM management of extents with extent sizes of 128K.
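The two example commands are missing from this copy; hedged reconstructions, with illustrative file
paths and sizes:
SQL> CREATE TABLESPACE inventory DATAFILE '/oradata/inventory01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
SQL> CREATE TABLESPACE inventory DATAFILE '/oradata/inventory01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;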
Possible Errors
You cannot specify the following clauses when you explicitly specify EXTENT MANAGEMENT LOCAL:
o DEFAULT storage clause
o MINIMUM EXTENT
o TEMPORARY
Segment Space Management in Locally Managed Tablespaces
Use the SEGMENT SPACE MANAGEMENT clause to specify how free and used space within a segment is
to be managed. Once established, you cannot alter the segment space management method for a
tablespace.
MANUAL: This setting uses free lists to manage free space within segments.
o Free lists are lists of data blocks that have space available for inserting rows.
o You must specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage
parameters.
o MANUAL is usually NOT a good choice.
AUTO: This uses bitmaps to manage free space within segments.
o This is the default.
o A bitmap describes the status of each data block within a segment with regard to the data
block's ability to have additional rows inserted.
o Bitmaps allow Oracle to manage free space automatically.
o Specify automatic segment-space management only for permanent, locally managed
tablespaces.
o Automatic generally delivers better space utilization than manual, and it is self-tuning.
Example CREATE TABLESPACE command – this creates a locally managed Inventory tablespace with
AUTO segment space management.
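A hedged sketch of such a command; the file path and size are illustrative:
SQL> CREATE TABLESPACE inventory DATAFILE '/oradata/inventory01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;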
Dictionary Managed
With this approach the data dictionary contains tables that store information that is used to manage
extent allocation and deallocation manually.
Oracle uses the data dictionary (tables in the SYS schema) to track allocated and free extents for
tablespaces that are in "dictionary managed" mode. Free space is recorded in the SYS.FET$ table, and
used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST
(space transaction) enqueue must be obtained to do inserts and deletes against these tables. As
only one process can acquire the ST enqueue at a given time, this often leads to contention.
Execute the following statement to create a dictionary managed tablespace:
SQL> CREATE TABLESPACE ts1 DATAFILE '/oradata/ts1_01.dbf' SIZE 50M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE ( INITIAL 50K NEXT 50K MINEXTENTS 2 MAXEXTENTS 50 PCTINCREASE 0);
NOTE: Keep in mind you will NOT be able to create any tablespaces of this type in your 11g
database. This information is provided in the event you have to work with older databases.
The DEFAULT STORAGE clause enables you to customize the allocation of extents. This provides
increased flexibility, but less efficiency than locally managed tablespaces.
Example – this example creates a tablespace using all DEFAULT STORAGE clauses.
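The example is missing from this copy; a hedged sketch, with illustrative names and sizes. On an 11g
database with a locally managed SYSTEM tablespace, this attempt fails with the error shown below:
SQL> CREATE TABLESPACE dict_data DATAFILE '/oradata/dict_data01.dbf' SIZE 20M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 2 MAXEXTENTS 50 PCTINCREASE 0);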
ORA-12913: Cannot create dictionary managed tablespace
SQL> execute dbms_space_admin.tablespace_migrate_from_local('XXXX');
SQL> exec dbms_space_admin.tablespace_migrate_to_local('TEST');
PL/SQL procedure successfully completed.
If we create a database with DBCA, it will have a locally managed SYSTEM tablespace by default, and
we cannot then create new dictionary managed tablespaces.
The sizes of locally managed extents can be determined automatically by the system. Alternatively,
all extents in an LMT can be the same size; to create extents with the same size, specify UNIFORM.
Changes to the extent bitmaps do NOT generate rollback information because they do NOT update tables
in the data dictionary (except for special cases such as tablespace quota information). Locally managed
tablespaces also reduce fragmentation.
As noted, the keyword AUTOALLOCATE is why extents are allocated with different sizes: AUTOALLOCATE
specifies that extent sizes are system generated. By default, a tablespace will be an AUTOALLOCATE
LMT; to create extents with the same sizes, specify UNIFORM.
Let's start with two users, each assigned a different tablespace. User ROSE is assigned to the test
tablespace (UNIFORM); user SONA is assigned to the samp tablespace (AUTOALLOCATE).
When creating tablespace test, I specified UNIFORM, so all extent sizes are the same, as the BYTES
column in the following screenshot shows.
When creating tablespace samp, I did NOT specify UNIFORM, so the extent sizes are NOT the same, as
the BYTES column in the following screenshot shows.
Whether an LMT uses AUTOALLOCATE or UNIFORM determines how new extents are allocated as space
pressure increases in the tablespace:
UNIFORMLY SIZED EXTENTS – UNIFORM
AUTOMATICALLY SIZED EXTENTS – AUTOALLOCATE
AUTOALLOCATE
Means that the extent sizes are managed by Oracle, which chooses the optimal next extent size
starting with 64 KB. As segments grow and more extents are needed, Oracle allocates progressively
larger extents, moving to 1 MB, 8 MB, and ultimately 64 MB extents. We can make the initial extent
size greater than 64 KB, in which case Oracle allocates extents of at least that amount of space.
UNIFORM
Creates extents of the same size, specified when the tablespace is created; i.e., UNIFORM
specifies that the tablespace is managed with uniform extents of SIZE bytes (use K or M to specify the
extent size).
In 10g, if EXTENT MANAGEMENT DICTIONARY is not specified, the tablespace is automatically created
as LOCALLY MANAGED.
The SYSTEM tablespace must be DICTIONARY MANAGED; otherwise, dictionary managed tablespaces
cannot be created.
PROPERTY_VALUE – this column shows whether SMALLFILE or BIGFILE is already set as the default
tablespace type.
FINDING THE DEFAULT_PERMANENT_TABLESPACE
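These defaults can be queried from the DATABASE_PROPERTIES view; a sketch:
SQL> SELECT property_value FROM database_properties
WHERE property_name = 'DEFAULT_TBS_TYPE';
SQL> SELECT property_value FROM database_properties
WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE';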
PERMANENT TABLESPACE
Permanent tablespaces can be either smallfile or bigfile tablespaces. A smallfile tablespace can be
made up of a number of datafiles. A bigfile tablespace is made up of only one datafile, which can get
extremely large. We cannot add a datafile to a bigfile tablespace.
A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile tablespace with 32K
blocks can contain a 128 terabyte datafile. The maximum number of datafiles in an Oracle Database is
limited (usually to 64K files). We can specify SIZE in kilobytes (K), megabytes (M), gigabytes (G), or
terabytes (T).
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space
management.
ORA-12905
The DEFAULT TEMPORARY TABLESPACE cannot be taken offline, and it cannot be dropped until another
one is created. We cannot change a default temporary tablespace into a permanent tablespace.
Oracle 10g introduced a feature that creates the temp file automatically when we restart the
database.
If we take a temporary tablespace offline, all of its associated temp files also get OFFLINE status.
Even if the tablespace is offline, we can add a temp file under it (the new file gets ONLINE status
by default), and we can still set the tablespace as the default temporary tablespace.
UNDO Tablespace
The Undo tablespace is used for automatic undo management. Note the required use of
the UNDO clause within the CREATE command shown in the figure here.
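The figure is missing from this copy; a hedged sketch of such a command, with an illustrative name
and file path:
SQL> CREATE UNDO TABLESPACE undotbs02
DATAFILE '/oradata/undotbs02_01.dbf' SIZE 100M;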
More than one UNDO tablespace can exist, but only one can be active at a time.
WHAT IS UNDO?
The word UNDO means to reverse or erase a change. In the Oracle world, UNDO allows us to reverse a
transaction so that the database looks as it did before the transaction started. It also provides a
read-consistent image of data: if, during a transaction, one session is changing data while another
session wants to read that data, UNDO provides the state of the data before the transaction started.
USE OF UNDO
There are a few main uses of UNDO in an Oracle database:
* UNDO is used to reverse an uncommitted transaction when a ROLLBACK command is issued.
* It is used to provide a read-consistent image of a record.
* It is used during database recovery to roll back any uncommitted transactions applied to the datafiles from the redo logs.
* Flashback Query also uses UNDO to get an image of the data back in time.
UNDO MANAGEMENT
Oracle needs an UNDO tablespace to create undo segments. If an undo tablespace is not created, Oracle
will use the SYSTEM tablespace for undo, which is not recommended. To create an UNDO tablespace you
have to use the UNDO keyword in the CREATE TABLESPACE command. The most common configuration of
UNDO_MANAGEMENT is AUTO (the default is MANUAL in 8i and 9i).
One can have multiple UNDO tablespaces in a database, but only one can be active at any given time.
In AUTOMATIC undo management, Oracle creates and manages undo itself and DBAs do not have to worry
about its management. Oracle attempts to assign one undo segment to each transaction; when it
cannot, it creates and assigns additional undo segments. Multiple transactions can write to one undo
segment if space in the undo tablespace is depleted, which can cause contention. To avoid this
situation, add more space to the undo tablespace. Later, this article provides scripts to monitor
fragmentation in the UNDO tablespace and a method to overcome it.
UNDO_MANAGEMENT If AUTO, use automatic undo management mode. If MANUAL, use manual undo
management mode. The default is MANUAL through Oracle 10g; beginning with 11g, the default is AUTO.
UNDO_TABLESPACE An optional dynamic parameter specifying the name of an undo tablespace to use.
This parameter should be used only when the database has multiple undo tablespaces and you want to
direct the database instance to use a particular undo tablespace.
UNDO_RETENTION A dynamic parameter specifying the minimum length of time to retain undo. The
default is 900 seconds. The setting of this parameter should take into account any flashback
requirements of the system.
When Oracle introduced UNDO management it was based on segment management. Segment management is
expensive in terms of CPU, memory, and I/O, so Oracle added a new feature in later versions of Oracle
10g to do undo management in memory, called In-Memory Undo (IMU).
a) IN-MEMORY UNDO
The main benefit of IMU is that, because no block-level change to an UNDO segment is involved, IMU
does not generate redo entries for undo. IMU thus reduces undo header contention and undo block
contention by doing UNDO management in memory, in a structure called an IMU node: rather than writing
a change to the undo buffer, undo is written to an IMU node. On the other hand, memory structures
require latches to maintain serial execution, so make sure enough IMU latches are created by adjusting
the PROCESSES parameter. Oracle uses the 'in memory undo latch' to access IMU structures in the
shared pool. If you have high waits on this latch, you can increase the number of latches by
increasing the PROCESSES parameter, or switch off in-memory undo by setting _in_memory_undo to false.
The following is a summary of the initialization parameters for automatic undo management mode:
_imu_pools – default is 3 on some systems. This sets the number of IMU pools. It is not related to
memory allocation for IMU.
_recursive_imu_transactions – this enables Oracle's own SQL to use IMU. Default is FALSE.
_db_writer_flush_imu – allows Oracle the freedom to artificially age a transaction for increased
automatic cache management. Default is TRUE.
There is no parameter that allows us to change the memory allocation for IMU nodes, but changing the
SHARED_POOL_SIZE parameter can help adjust the IMU memory allocation.
To find out how much memory is allocated, run the command on a UAT database.
As mentioned earlier, huge waits on the 'in memory undo latch' together with high CPU usage show that
you have latch contention with IMU. Since sessions waiting for this latch are very CPU hungry, you
will want to resolve this issue as early as possible.
There are a few ways to address contention with IMU:
* Increase the PROCESSES parameter so that the number of latches increases. Be careful and
understand the other impacts of increasing the PROCESSES parameter.
* Increase SHARED_POOL_SIZE. Again, understand the impact of increasing SHARED_POOL_SIZE on
overall database performance.
* The last resort is to disable IMU and force Oracle to use segment management by changing
_in_memory_undo to FALSE.
Segment-based undo management is the same as any table or index segment management and is normally
more costly than IMU; on the other hand, Oracle has more experience in handling segment-based undo
management than IMU, which was released in later versions of Oracle 10g. Oracle also provides a tool
called the Undo Advisor, which checks the information in AWR and advises and helps you in setting up
the undo environment. A sample example of the Undo Advisor from the Oracle documentation is mentioned
below.
undo_management: It is recommended to set this to AUTO and let Oracle manage the space.
undo_retention: The number is in seconds: how long Oracle keeps the data in the extent after it
is committed. A higher number avoids the "snapshot too old" error and lets Flashback Query
access older data. However, the higher the number, the more space is used.
undo_tablespace: Defines the undo tablespace name.
We can add more datafiles to the undo tablespace, but only one undo tablespace can be active per database.
Since I specified the path after the DATAFILE keyword, the file is not named and managed by OMF;
hence dropping the tablespace will not remove the file. We have to remove it manually using an OS
command, such as rm.
ORA-30013
TEMPORARY Tablespace
A TEMPORARY tablespace is used to manage space for sort operations. Sort operations generate
segments – sometimes large segments, or lots of them – depending on the sort required to satisfy a
SELECT statement (for example, its ORDER BY or GROUP BY clause).
Sort operations are also generated by SELECT statements that join rows from within tables and between
tables.
Note the use of the TEMPFILE instead of a DATAFILE specification for a temporary tablespace in the
figure shown below.
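The figure is missing from this copy; a hedged sketch of such a command, with an illustrative name,
path, and sizes:
SQL> CREATE TEMPORARY TABLESPACE temp01
TEMPFILE '/oradata/temp01_01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;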
Each database needs to have a specified default temporary tablespace. If one is not specified, then
any user account created without specifying a TEMPORARY TABLESPACE clause is assigned a
temporary tablespace in the SYSTEM tablespace!
This should raise a red flag as you don't want system users to execute SELECT commands that cause
sort operations to take place within the SYSTEM tablespace.
If a default temporary tablespace is not specified at the time a database is created, a DBA can create one
by altering the database.
After this, new system user accounts are automatically allocated temp as their temporary tablespace. If
you ALTER DATABASE to assign a new default temporary tablespace, all system users are automatically
reassigned to the new default tablespace for temporary operations.
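The ALTER DATABASE step described above can be sketched as follows; the tablespace name temp is
illustrative:
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;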
Limitations:
· A default temporary tablespace cannot be dropped unless a replacement is created. This is
usually done only if you are moving the tablespace from one disk drive to another.
· You cannot take a default temporary tablespace offline – this is done only for system
maintenance or to restrict access to a tablespace temporarily. None of these activities apply to
default temporary tablespaces.
· You cannot alter a default temporary tablespace to make it permanent.
· Example continued: This code changes the database's default temporary tablespace
to TEMPGRP – you use the same command that would be used to assign a temporary
tablespace as the default because temporary tablespace groups are treated logically the same as
an individual temporary tablespace.
· To drop a tablespace group, first drop all of its members. Drop a member by assigning the
temporary tablespace to a group with an empty string.
Tablespace groups allow users to use more than one tablespace to store temporary segments. A group
contains only temporary tablespaces. It is created implicitly when the first temporary tablespace is
assigned to it, and is deleted when the last temporary tablespace is removed from the group.
Benefits:
-It allows the user to use multiple temporary tablespaces in different sessions at the same time.
-It allows a single SQL operation to use multiple temporary tablespaces for sorting.
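The group operations described above can be sketched as follows, with illustrative names: assigning a
tablespace to a group (creating the group implicitly), making the group the database default, and
removing a member by assigning it an empty group name:
SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP tempgrp;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempgrp;
SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP '';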
Oracle databases having a DATA (or more than one DATA) tablespace will also have an accompanying
INDEXES tablespace.
· The purpose of separating tables from their associated indexes is to improve I/O efficiency.
· The DATA and INDEXES tablespaces will typically be placed on different disk drives thereby
providing an I/O path for each so that as tables are updated, the indexes can also be updated
simultaneously.
Bigfile Tablespaces
A Bigfile tablespace is best used with a server that uses a RAID storage device with disk striping – a
single datafile is allocated, and it can be up to 8 EB (exabytes, a million terabytes) in size with up
to 4G blocks.
· Bigfile tablespaces can only be locally managed with automatic segment space management
except for locally managed undo tablespaces, temporary tablespaces, and the SYSTEM
tablespace.
· If a Bigfile tablespace is used for automatic undo or temporary segments, the segment space
management must be set to MANUAL.
· Bigfile tablespaces save space in the SGA and control file because fewer datafiles need to be
tracked.
· ALTER TABLESPACE commands on a Bigfile tablespace do not reference a datafile because
only one datafile is associated with each Bigfile tablespace.
Example – this example creates a Bigfile tablespace named Graph01 (to store data that is graphical in
nature and that consumes a lot of space). Note use of the BIGFILE keyword.
· Example continued: This resizes the Bigfile tablespace to increase the capacity from 10
gigabytes to 40 gigabytes.
Notice in the above two examples that there was no need to refer to the datafile by name since the
Bigfile tablespace has only a single datafile.
Compressed Tablespaces
This type of tablespace is used to compress all tables stored in the tablespace.
· The keyword DEFAULT is used to specify compression when followed by the compression type.
· You can override the type of compression used when creating a table in the tablespace.
This example creates a compressed tablespace named COMP_DATA. Here the Compress for
OLTP clause specifies the type of compression. You can study the other types of compression on your
own from your readings.
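A statement matching this description might look like the following sketch (the datafile path is illustrative):

CREATE TABLESPACE comp_data
DATAFILE '/u01/student/dbockstd/oradata/comp_data01.dbf' SIZE 100M
DEFAULT COMPRESS FOR OLTP;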
Tablespace created.
Encrypted Tablespaces
Encryption requires creation of an Oracle wallet to store the master encryption key.
This example creates an encrypted tablespace named SECURE_DATA that uses 256-bit keys.
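A sketch of such a statement (the datafile path is illustrative; ENCRYPTION USING 'AES256' selects 256-bit keys and DEFAULT STORAGE (ENCRYPT) encrypts the stored data):

CREATE TABLESPACE secure_data
DATAFILE '/u01/student/dbockstd/oradata/secure_data01.dbf' SIZE 100M
ENCRYPTION USING 'AES256'
DEFAULT STORAGE (ENCRYPT);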
Tablespace created.
You cannot encrypt an existing tablespace with the ALTER TABLESPACE statement. You would need to
export the data from an unencrypted tablespace and then import it into an encrypted tablespace.
If the tablespace being modified is locally managed, the segments associated with dropped tables and
indexes are changed to temporary segments so that the bitmap is not updated.
To change a tablespace from read only to read/write, all datafiles for the tablespace must be online.
Another reason for making a tablespace read only is to support the movement of the data to read only
media such as CD-ROM. This type of change would probably be permanent. This approach is sometimes
used for the storage of large quantities of static data that doesn’t change. This also eliminates the need
to perform system backups of the read only tablespaces. To move the datafiles to a read only media,
first alter the tablespaces as read only, then rename the datafiles to the new location by using
the ALTER TABLESPACE RENAME DATAFILE option.
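The sequence described above might be sketched as follows (the tablespace name and paths are illustrative):

ALTER TABLESPACE history_data READ ONLY;
-- move the datafile at the operating system level, then:
ALTER TABLESPACE history_data
RENAME DATAFILE '/u01/student/dbockstd/oradata/history01.dbf'
TO '/cdrom/oradata/history01.dbf';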
Offline Tablespaces
Most tablespaces are online all of the time; however, a DBA can take a tablespace offline. This enables
part of the database to be available – the tablespaces that are online – while enabling maintenance on
the offline tablespace. Typical activities include:
· Offline tablespace backup – a tablespace can be backed up while online, but offline backup is
faster.
· Recover an individual tablespace or datafile.
· Move a datafile without closing the database.
You cannot use SQL to reference offline tablespaces – this simply generates a system error. Additionally,
the action of taking a tablespace offline/online is always recorded in the data dictionary and control
file(s). Tablespaces that are offline when you shutdown a database are offline when the database is
again opened.
The commands to take a tablespace offline and online are simple ALTER TABLESPACE commands. These
also take the associated datafiles offline.
NORMAL: All data blocks for all datafiles that form the tablespace are written from the SGA to the
datafiles. A tablespace that is offline NORMAL does not require any type of recovery when it is brought
back online.
TEMPORARY: A checkpoint is performed for all datafiles in the tablespace. Any offline files may require
media recovery.
IMMEDIATE: A checkpoint is NOT performed. Media recovery on the tablespace is required before it is
brought back online to synchronize the database objects.
FOR RECOVER: Used to place a tablespace in offline status to enable point-in-time recovery.
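The offline options above are used with simple ALTER TABLESPACE commands, for example (tablespace name illustrative):

ALTER TABLESPACE app_data OFFLINE NORMAL;
-- perform maintenance, then:
ALTER TABLESPACE app_data ONLINE;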
Note: You will not be able to practice the commands in this section because Dictionary-
Managed tablespaces cannot be created in Oracle 11g.
Any of the storage settings for Dictionary-Managed tablespaces can be modified with the ALTER
TABLESPACE command. This only alters the default settings for future segment allocations.
Tablespace Sizing
Normally over time tablespaces need to have additional space allocated. This can be accomplished by
setting the AUTOEXTEND option to enable a tablespace to increase automatically in size.
· This can be dangerous if a “runaway” process or application generates data and consumes all
available storage space.
· An advantage is that applications will not ABEND because a tablespace runs out of storage
capacity.
· This can be accomplished when the tablespace is initially created or by using the ALTER
TABLESPACE command at a later time.
This query uses the DBA_DATA_FILES view to determine if AUTOEXTEND is enabled for selected
tablespaces in the SIUE DBORCL database.
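The query was probably of this form – the AUT heading in the output suggests the AUTOEXTENSIBLE column was formatted to a three-character width:

SELECT tablespace_name, autoextensible
FROM dba_data_files;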
TABLESPACE_NAME AUT
------------------------------ ---
SYSTEM NO
SYSAUX NO
UNDOTBS1 YES
USERS NO
ALTER DATABASE
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
AUTOEXTEND ON MAXSIZE 600M;
This command looks similar to the above command, but this one resizes a datafile while the above
command sets the maxsize of the datafile.
ALTER DATABASE
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
RESIZE 600M;
· Add a new datafile to a tablespace with the ALTER TABLESPACE command.
Moving/Relocating Tablespaces/Datafiles
The ALTER TABLESPACE command can be used to move datafiles by renaming them. This cannot be
used if the tablespace is the SYSTEM tablespace or contains active undo or temporary segments.
The ALTER DATABASE command can also be used with the RENAME option. This is the method that
must be used to move the SYSTEM tablespace because it cannot be taken offline. The steps are:
1. Shut down the database.
2. Use an operating system command to move the files.
3. Mount the database.
4. Execute the ALTER DATABASE RENAME FILE command.
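The four steps above can be sketched as follows (paths illustrative):

SHUTDOWN IMMEDIATE
-- move the file with an operating system command, for example:
-- mv /u01/oradata/system01.dbf /u02/oradata/system01.dbf
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/u01/oradata/system01.dbf'
TO '/u02/oradata/system01.dbf';
ALTER DATABASE OPEN;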
Dropping Tablespaces
Occasionally tablespaces are dropped due to database reorganization. A tablespace that contains data
cannot be dropped unless the INCLUDING CONTENTS clause is added to the DROP command. Since
tablespaces will almost always contain data, this clause is almost always used.
A DBA cannot drop the SYSTEM tablespace or any tablespace with active segments. Normally you should
take a tablespace offline to ensure no active transactions are being processed.
An example command set that drops the compressed tablespace COMP_DATA created earlier is:
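A command matching the description would be:

DROP TABLESPACE comp_data
INCLUDING CONTENTS AND DATAFILES
CASCADE CONSTRAINTS;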
The AND DATAFILES clause causes the datafiles to also be deleted. Otherwise, the tablespace is
removed from the database as a logical unit, and the datafiles must be deleted with operating system
commands.
The CASCADE CONSTRAINTS clause drops all referential integrity constraints where objects in one
tablespace are constrained/related to objects in another tablespace.
Non-Standard Block Sizes: It may be advantageous to create a tablespace with a nonstandard block
size in order to import data efficiently from another database. This also enables transporting tablespaces
with unlike block sizes between databases.
· A block size is nonstandard if it differs from the size specified by
the DB_BLOCK_SIZE initialization parameter.
· The BLOCKSIZE clause of the CREATE TABLESPACE statement is used to specify
nonstandard block sizes.
· In order for this to work, you must have already set DB_CACHE_SIZE and at least
one DB_nK_CACHE_SIZE initialization parameter values to correspond to the nonstandard
block size to be used.
· The DB_nK_CACHE_SIZE initialization parameters that can be used are:
o DB_2K_CACHE_SIZE
o DB_4K_CACHE_SIZE
o DB_8K_CACHE_SIZE
o DB_16K_CACHE_SIZE
o DB_32K_CACHE_SIZE
· Note that the DB_nK_CACHE_SIZE parameter corresponding to the standard block size
cannot be used – it will be invalid – instead use the DB_CACHE_SIZE parameter for the
standard block size.
Example – these parameters specify a standard block size of 8K with a cache for standard block size
buffers of 12M. The 2K and 16K caches will be configured with cache buffers of 8M each.
DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_16K_CACHE_SIZE=8M
Example – this creates a tablespace with a blocksize of 2K (assume the standard block size for the
database was 8K).
As you learned earlier, when you use an OMF approach, the DB_CREATE_FILE_DEST parameter in the
parameter file specifies that datafiles are to be created and defines their location. The DATAFILE clause
to name files is not used because filenames are automatically generated by the Oracle Server, for
example, ora_tbs1_2xfh990x.dbf.
You can also use the ALTER SYSTEM command to dynamically set this parameter in the SPFILE
parameter file.
Additional tablespaces are specified with the CREATE TABLESPACE command shown here that specifies
not the datafile name, but the datafile size. You can also add datafiles with the ALTER
TABLESPACE command.
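A sketch of the OMF approach described above (the destination directory is illustrative):

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata/dborcl';

CREATE TABLESPACE tbs1 DATAFILE SIZE 100M;  -- no filename; Oracle generates it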
Setting the DB_CREATE_ONLINE_LOG_DEST_n parameter prevents log files and control files from
being located with datafiles – this will reduce I/O contention.
When OMF tablespaces are dropped, their associated datafiles are also deleted at the operating system
level.
The following data dictionary views can be queried to display information about tablespaces.
· Tablespaces: DBA_TABLESPACES, V$TABLESPACE
· Datafiles: DBA_DATA_FILES, V$DATAFILE
· Temp files: DBA_TEMP_FILES, V$TEMPFILE
You should examine these views in order to familiarize yourself with the information stored in them.
Let’s try to add a datafile to tablespace USERS and then try to add the same datafile to UNDOTBS, to
demonstrate that one datafile can be associated with ONLY one tablespace. Trying to add a datafile that
is already associated with one tablespace to another tablespace will error out – ‘ORA-01537 - ... file
already part of database’.
Let’s create a new Tablespace in Database MyDB created in my Virtual Linux server.
To create or alter a tablespace, the user must have the CREATE TABLESPACE or ALTER TABLESPACE
system privilege. (To see the system privileges granted to a user, query the DBA_SYS_PRIVS view. Here
the connection was made as SYS, so the user is SYS.)
Below is the statement to create a locally managed tablespace with a 100 MB datafile, extent
management local, and AUTOALLOCATE.
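The statement was likely of this form (the datafile path is illustrative):

CREATE TABLESPACE example1
DATAFILE '/u01/oradata/mydb/example1_01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
AUTOALLOCATE;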
AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K.
The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with
extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then
the default size is 1M.
MANUAL – Manual segment space management uses linked lists called "freelists" to manage free space
in the segment.
AUTO – Automatic segment space management uses bitmaps. It is the more efficient method, and is the
default for all new permanent, locally managed tablespaces.
Let’s drop the EXAMPLE1 tablespace created above and recreate it using the CREATE TABLESPACE
statement, explicitly specifying segment space management as AUTO.
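A sketch of the two statements (path illustrative):

DROP TABLESPACE example1 INCLUDING CONTENTS AND DATAFILES;

CREATE TABLESPACE example1
DATAFILE '/u01/oradata/mydb/example1_01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;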
Bigfile Tablespaces
- If the database is created with Bigfile specified as the default for tablespace creation, then the
CREATE TABLESPACE statement creates the tablespace as a Bigfile tablespace.
- Bigfile tablespaces are by default EXTENT MANAGEMENT LOCAL and SEGMENT SPACE
MANAGEMENT AUTO.
- If you specify EXTENT MANAGEMENT DICTIONARY or SEGMENT SPACE MANAGEMENT MANUAL,
the tablespace creation will error out.
- A Bigfile tablespace with 8K blocks can contain a 32-terabyte datafile. A Bigfile tablespace
with 32K blocks can contain a 128-terabyte datafile. The maximum number of datafiles in an Oracle
Database is limited (usually to 64K files). Therefore, Bigfile tablespaces can significantly enhance the
storage capacity of an Oracle Database.
- The default tablespace type with which the database was created can be found by querying the
DATABASE_PROPERTIES view.
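For example:

SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TBS_TYPE';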
Encrypted Tablespaces
- Tablespace encryption is applicable to permanent tablespaces ONLY.
- Any user who is granted privileges on objects stored in an encrypted tablespace can access those
objects without providing any kind of additional password or key.
- Data from an encrypted tablespace is automatically encrypted when written to the undo tablespace,
to the redo logs, and to any temporary tablespace. There is no need to explicitly create encrypted undo
or temporary tablespaces, and in fact, you cannot specify encryption for those tablespace types.
- Transparent data encryption supports industry-standard encryption algorithms, including the
following Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES) algorithms:
§ 3DES168
§ AES128 (the default when the USING keyword is not specified)
§ AES192
§ AES256
- You cannot encrypt an existing tablespace with an ALTER TABLESPACE statement. However, you can
use Data Pump or SQL statements such as CREATE TABLE AS SELECT or ALTER TABLE MOVE to move
existing table data into an encrypted tablespace.
- The encryption algorithm implemented for a tablespace can be determined by querying
V$ENCRYPTED_TABLESPACES.
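For example, joining to V$TABLESPACE to show tablespace names:

SELECT t.name, e.encryptionalg, e.encryptedts
FROM v$tablespace t JOIN v$encrypted_tablespaces e ON t.ts# = e.ts#;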
- Tablespace encryption uses the transparent data encryption feature of Oracle Database, which
requires that you create an Oracle wallet to store the master encryption key for the database. The wallet
must be open before you can create the encrypted tablespace and before you can store or retrieve
encrypted data.
When we try to create an encrypted tablespace without creating and opening the Oracle wallet,
‘ORA-28365: wallet is not open’ will be thrown.
To correct this error, create a directory named ‘wallet’ under $ORACLE_HOME/admin/
$ORACLE_SID/.
Then shut down and restart the instance, open the Oracle wallet using ALTER SYSTEM, and create the
encrypted tablespace.
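In 11g the wallet commands are of this form (the password is illustrative):

-- creates the wallet if absent, sets the master key, and opens the wallet
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd_1";

-- after each instance restart the wallet must be reopened
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd_1";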
The DBA_TABLESPACE_USAGE_METRICS view can also be queried – it reports usage metrics (used
space, total size, and percentage used) for all tablespaces.
Undo Purpose
Transactions
Transaction – collection of SQL data manipulation language (DML) statements treated as a logical unit.
· Failure of any statement results in the transaction being "undone".
· If all statements process, SQL*Plus or the programming application will issue a COMMIT to
make database changes permanent.
· Transactions implicitly commit if a user disconnects from Oracle normally.
· Abnormal disconnections result in transaction rollback.
· The command ROLLBACK is used to cancel (not commit) a transaction that is in progress.
SET TRANSACTION – Transaction boundaries can be defined with the SET TRANSACTION command.
· There is no performance benefit from setting transaction boundaries, but doing so
enables defining a savepoint.
· Savepoint – allows a sequence of DML statements in a transaction to be partitioned so you
can roll back one or more or commit the DML statements up to the savepoint.
· Savepoints are created with the SAVEPOINT savepoint_name command.
· DML statements since the last savepoint are rolled back with the ROLLBACK TO
SAVEPOINT savepoint_name command.
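The savepoint mechanism can be sketched as follows (table and column names are illustrative):

INSERT INTO orders VALUES (1001, SYSDATE);
SAVEPOINT after_insert;
UPDATE orders SET order_date = SYSDATE + 1 WHERE order_id = 1001;
ROLLBACK TO SAVEPOINT after_insert;  -- undoes only the UPDATE
COMMIT;                              -- makes the INSERT permanent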
In earlier versions of Oracle, the term rollback was used instead of undo, and instead of
managing undo segments, the DBA was responsible for managing rollback segments.
· Rollback segments were one of the primary areas where problems often arose; thus, the
conversion to automatic undo management is a significant improvement.
· You will see parts of the data dictionary and certain commands still use the term Rollback for
backward compatibility.
Undo Segments
Undo segment header – this stores a transaction table where information about current transactions
using this particular segment is stored.
· A serial transaction uses only one undo segment to store all of its undo data.
· A single undo segment can support multiple concurrent transactions.
Purpose of Undo Segments – Undo segments have three purposes: (1) Transaction Rollback, (2)
Transaction Recovery, and (3) Read Consistency.
Transaction Rollback: Old images of modified columns are saved as undo data to undo segments.
· If a transaction is rolled back because it cannot be committed or the application program
directs a rollback of the transaction, the Oracle server uses the undo data to restore the original
values by writing the undo data back to the table/index row.
· If you disconnect abnormally, rollback of uncommitted transactions is automatic.
Transaction Recovery: Sometimes an Oracle Instance will fail and transactions in progress will not
complete nor be committed.
· Redo Logs bring both committed and uncommitted transactions forward to the point of
instance failure.
· Undo data is used to undo any transactions that were not committed at the point of failure.
· Recovery is covered in more detail in a later set of notes.
In the figure shown below, an UPDATE command has a lock on a data block from
the EMPLOYEE table and an undo image of the block is written to the undo segment. The update
transaction has not yet committed, so any concurrent SELECT statement by a different system user will
result in data being displayed from the undo segment, not from the EMPLOYEE table. This read-
consistent image is constructed by the Oracle Server.
A SYSTEM undo segment is created in the SYSTEM tablespace when a database is created.
· SYSTEM undo segments are used for modifications to objects stored in the SYSTEM
tablespace.
· This type of Undo Segment works identically in both manual and automatic mode.
Databases with more than one tablespace must have at least one non-SYSTEM undo segment for
manual mode or a separate Undo tablespace for automatic mode.
Manual Mode: A non-SYSTEM undo segment is created by a DBA and is used for changes to objects
in a non-SYSTEM tablespace. There are two types of non-SYSTEM undo segments: (1) Private and
(2) Public.
Private Undo Segments: These are brought online by an instance if they are listed in the parameter
file.
· They can also be brought online by issuing an ALTER ROLLBACK SEGMENT
segment_name ONLINE command.
· Prior to Oracle 9i, undo segments were named rollback segments and the command has
not changed.
· Private undo segments are used for a single Database Instance.
Public Undo Segments: These form a pool of undo segments available in a database.
· These are used with Oracle Real Application Clusters as a pool of undo segments available to
any of the Real Application Cluster instances.
· You can learn more about public undo segments by studying the Oracle Real Application
Clusters and Administration manual.
Deferred Undo Segments: These are maintained by the Oracle Server so a DBA does not have to
maintain them.
· They can be created when a tablespace is brought offline (immediate, temporary, or recovery).
· They are used for undo transactions when the tablespace is brought back online.
· They are dropped by the Oracle Server automatically when they are no longer needed.
Automatic Undo Segments are named with a naming convention of: _SYSMUn_<generated
number>$
For example, they may be named: _SYSMU1_1872589076$ and _SYSMU2_1517779068$, etc.
Examples:
UNDO_MANAGEMENT=AUTO or UNDO_MANAGEMENT=MANUAL
UNDO_TABLESPACE=UNDO01
1. An Undo tablespace can be created when the database is created by including the UNDO
TABLESPACE clause in the CREATE DATABASE command.
· In the example command shown above, the Undo tablespace is named UNDO01.
· If the Undo tablespace cannot be created, the entire CREATE DATABASE command fails.
2. You can also create an Undo tablespace with the CREATE UNDO TABLESPACE command.
· This is the same as the normal CREATE TABLESPACE command but with the UNDO keyword
added.
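For example (name and path illustrative):

CREATE UNDO TABLESPACE undo02
DATAFILE '/u01/oradata/undo02.dbf' SIZE 100M;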
The ALTER TABLESPACE command can be used to modify an Undo tablespace. For example, the DBA
may need to add an additional datafile to the Undo tablespace.
Use the ALTER SYSTEM command to switch between Undo tablespaces – remember only one Undo
tablespace can be active at a time.
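For example, to make an Undo tablespace named UNDO02 (name illustrative) the active one:

ALTER SYSTEM SET UNDO_TABLESPACE = undo02;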
The database is online while the switch operation is performed, and user transactions can be executed
while this command is being executed.
· When the switch operation completes successfully, all transactions started after the switch
operation began are assigned to transaction tables in the new undo tablespace.
· The switch operation does not wait for transactions in the old undo tablespace to commit.
· If there are any pending transactions in the old undo tablespace, the old undo tablespace
enters into a PENDING OFFLINE mode (status).
· In this mode, existing transactions can continue to execute, but undo records for new user
transactions cannot be stored in this undo tablespace.
The DROP TABLESPACE command can be used to drop an Undo tablespace that is no longer needed –
it cannot be an active undo tablespace.
Older application programs may have PL/SQL code that uses the SET TRANSACTION USE ROLLBACK
SEGMENT statement to specify a specific rollback segment when processing large, batch transactions.
Such a program has not been modified for Automatic Undo Management, and normally this command
would return an Oracle error: ORA-30019: Illegal rollback segment operation in Automatic
Undo mode.
You can suppress these errors by specifying the UNDO_SUPPRESS_ERRORS parameter in the
initialization file with a value of TRUE.
A DBA can also determine how long to retain undo data to provide consistent reads. If undo data is not
retained long enough, and a system user attempts to access data that should be located in an Undo
Segment, then an Oracle error: ORA-1555 snapshot too old error is returned – this means read-
consistency could not be achieved by Oracle.
Undo Retention
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery
purposes.
· However, for consistent read purposes, long-running queries may require this old undo
information for producing older images of data blocks.
· Several Oracle Flashback features can also depend upon the availability of older undo
information.
· For these reasons, it is desirable to retain the old undo information for as long as possible.
Oracle Database automatically tunes the undo retention period based on undo tablespace size and
system activity.
· You can optionally specify a minimum undo retention period (in seconds) by setting
the UNDO_RETENTION initialization parameter.
· The exact impact of this parameter on undo retention is as follows:
o The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database
always tunes the undo retention period for the best possible retention, based on system
activity and undo tablespace size.
o For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to
honor the minimum retention period specified by UNDO_RETENTION.
o When space is low, instead of overwriting unexpired undo information, the tablespace auto-
extends.
o If the MAXSIZE clause is specified for an auto-extending undo tablespace, when the
maximum size is reached, the database may begin to overwrite unexpired undo information.
If Undo Segment data is to be retained a long time, then the Undo tablespace will need larger datafiles.
· The UNDO_RETENTION parameter defines the period in seconds.
· You can set this parameter in the initialization file or you can dynamically alter it with
the ALTER SYSTEM command:
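The command referenced is presumably of this form (43,200 seconds = 720 minutes):

ALTER SYSTEM SET UNDO_RETENTION = 43200;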
· The above command will retain undo segment data for 720 minutes (12 hours) – the default
value is 900 seconds (15 minutes).
· This sets the minimum undo retention period.
· If the tablespace is too small to store Undo Segment data for 720 minutes, then the data is
not retained – instead space is recovered by the Oracle Server to be allocated to new active
transactions.
Oracle 11g automatically tunes undo retention by collecting database use statistics
whenever AUTOEXTEND is on.
· Specifying UNDO_RETENTION sets a low threshold so that undo data is retained at a
minimum for the threshold value specified, providing there is sufficient Undo tablespace capacity.
· The RETENTION GUARANTEE clause of the CREATE UNDO TABLESPACE statement can
guarantee retention of Undo data to support DML operations, but may cause database failure if
the Undo tablespace is not large enough – unexpired Undo data segments are not overwritten.
· The TUNED_UNDORETENTION column of the V$UNDOSTAT dynamic performance view can
be queried to determine the amount of time Undo data is retained for an Oracle database.
· Query the RETENTION column of the DBA_TABLESPACES view to determine the setting for
the Undo tablespace – possible values are GUARANTEE, NOGUARANTEE, and NOT APPLY (for
tablespaces other than Undo).
When space is inadequate to support changes to uncommitted transactions for rollback operations, the
error message ORA-30036: Unable to extend segment by space_qtr in undo
tablespace tablespace_name is displayed, and the DBA must increase the size of the Undo
tablespace.
Initial Size – enable automatic extension (use the AUTOEXTEND ON clause with the CREATE
TABLESPACE or ALTER TABLESPACE commands) for Undo tablespace datafiles so they automatically
increase in size as more Undo space is needed.
· After the system stabilizes, if you decide to use a fixed-size Undo tablespace, then Oracle
recommends setting the Undo tablespace maximum size to about 10% more than the current
size.
· The Undo Advisor software available in Oracle Enterprise Manager can be used to calculate
the amount of Undo retention disk space a database needs.
The V$UNDOSTAT view displays statistical data to show how well a database is performing.
· Each row in the view represents statistics collected for a 10-minute interval.
· You can use this to estimate the amount of undo storage space needed for the current
workload.
· If workloads vary considerably throughout the day, then a DBA should conduct estimations
during peak workloads.
· The column SSOLDERRCNT displays the number of queries that failed with a "Snapshot too
old" error.
In order to size an Undo tablespace, a DBA needs three pieces of information. Two are obtained from the
initialization file: UNDO_RETENTION and DB_BLOCK_SIZE. The third piece of information is
obtained by querying the database: the number of undo blocks generated per second.
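The undo blocks generated per second can be obtained with the standard V$UNDOSTAT calculation, which matches the output below:

SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400)
FROM v$undostat;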
(SUM(UNDOBLKS))/SUM((END_TIME-BEGIN_TIME)*86400)
------------------------------------------------
.063924708
In this query, the END_TIME and BEGIN_TIME columns are DATE data, and subtracting them yields
days – converting days to seconds is done by multiplying by 86,400, the number of seconds in a day.
This value then needs to be multiplied by the size of an undo block – the same size as the database
block defined by the DB_BLOCK_SIZE parameter.
The number of bytes of Undo tablespace storage needed is calculated by this query:
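The query is presumably the widely used sizing formula – undo retention (UR) × undo blocks per second (UPS) × database block size (DBS):

SELECT (UR * (UPS * DBS)) AS "Bytes"
FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
     (SELECT SUM(undoblks)/SUM((end_time-begin_time)*86400) AS UPS
      FROM v$undostat),
     (SELECT value AS DBS FROM v$parameter WHERE name = 'db_block_size');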
Bytes
----------
668641.879
Convert this figure to megabytes of storage by dividing by 1,048,576 (the number of bytes in a
megabyte). The Undo tablespace needs to be about 0.64 MB according to this calculation, although this
is because the sample database has very few transactions.
Undo Quota
An object called a resource plan can be used to group users and place limits on the amount of
resources that can be used by a given group.
· This may become necessary when long transactions or poorly written transactions consume
limited database resources.
· If the database has no resource bottlenecks, then the allocating of quotas can be ignored.
Sometimes undo data space is a limited resource. A DBA can limit the amount of undo data space used
by a group by setting the UNDO_POOL resource plan directive, which defaults to unlimited.
· If the group exceeds the quota, then new transactions are not processed until old ones
complete.
· The group members will receive the ORA-30027: Undo quota violation – failed to get %s
(bytes) error message.
This query lists information about undo segments in the SIUE DBORCL database. Note the two
segments in the SYSTEM tablespace and the remaining segments in the UNDO tablespace.
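The query was probably against DBA_ROLLBACK_SEGS, for example:

SELECT segment_name, owner, tablespace_name, status
FROM dba_rollback_segs;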
The owner column above specifies the type of undo segment. SYS means a private undo segment.
This query is a join of the V$ROLLSTAT and V$ROLLNAME views to display statistics on undo
segments currently in use by the Oracle Instance. The USN column is the undo segment number.
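For example:

SELECT n.name, s.usn, s.extents, s.rssize, s.xacts, s.status
FROM v$rollname n JOIN v$rollstat s ON n.usn = s.usn;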
This query checks the use of an undo segment by any currently active transaction by joining
the V$TRANSACTION and V$SESSION views.
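For example:

SELECT s.username, t.xidusn, t.used_ublk, t.start_time
FROM v$session s JOIN v$transaction t ON s.taddr = t.addr;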
Flashback Features
Flashback features allow DBAs and users to access database information from a previous point in time.
· Undo information must be available so the retention period is important.
· Example: If an application requires a version of the database that is up to 12 hours old,
the UNDO_RETENTION must be set to 43200.
· The RETENTION GUARANTEE clause needs to be specified.
The Oracle Flashback Query option is supplied through the DBMS_FLASHBACK package at the
session level.
At the object level, Flashback Query uses the AS OF clause to specify the point in time for which data is
viewed.
Flashback Version Query enables users to query row history through use of a VERSIONS clause of
a SELECT statement.
Example: This SELECT statement retrieves the state of an employee record for an employee named Sue
at 9:30 AM on June 13, 2013 because it was discovered that Sue's employee record was erroneously
deleted.
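A sketch of such a Flashback Query (table and column names are assumed for illustration):

SELECT *
FROM employee AS OF TIMESTAMP
     TO_TIMESTAMP('2013-06-13 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE first_name = 'Sue';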
For illustration purposes, we will assume that a session overwrites the rollback information it requires
resulting in this error. To understand how this results in ORA 01555, consider the following sequence of
events:
1. Session A executes a Query at time T1 . The SCN is 100.
2. Session A selects a Block B1 during this Query
3. Session A does an update on Block B1. The SCN becomes 101.
4. Session A updates some other tables, generating some more rollback information.
5. Session A issues a COMMIT for the updates made in Step 3 and Step 4. This would mean that
other transactions are free to overwrite the rollback information generated due to the updates
performed by Session A.
6. Session A selects different data from Block B1.
At this point the header of Block B1 will have an SCN that is different from the SCN value that was
current at the start of the query (i.e., 100 in our case). This means that Oracle, to maintain read-
consistent information, must now get the block image as of the time the query was executed (i.e., the
image of the block as of SCN 100). This is depicted below.
In such a case, if Oracle is not able to get the rollback information that it is after (Session A has
generated quite a lot of rollback information, which could have overwritten the data that Oracle is
looking for), we get the ORA-01555 Snapshot too old error. Another cause of this error is the rollback
transaction slot being overwritten.
What Is Undo?
Oracle Database creates and manages information that is used to roll back, or undo, changes to the
database. Such information consists of records of the actions of transactions, primarily before they are
committed. These records are collectively referred to as undo.
When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the
database by the uncommitted transaction. During database recovery, undo records are used to undo any
uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency
by maintaining the before image of the data for users who are accessing the data at the same time that
another user is changing it.
Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing
undo information and space. With automatic undo management, the database manages undo segments
in an undo tablespace. Beginning with Release 11g, automatic undo management is the default mode for
a newly installed database. An auto-extending undo tablespace named UNDOTBS1 is automatically
created when you create the database with Database Configuration Assistant (DBCA).
When the instance starts, the database automatically selects the first available undo tablespace. If no
undo tablespace is available, the instance starts without an undo tablespace and stores undo records in
the SYSTEM tablespace. This is not recommended, and an alert message is written to the alert log file to
warn that the system is running without an undo tablespace.
If the database contains multiple undo tablespaces, you can optionally specify at startup that you want to
use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter,
as shown in this example:
UNDO_TABLESPACE = undotbs_01
If the tablespace specified in the initialization parameter does not exist, the STARTUP command fails.
The UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance in
an Oracle Real Application Clusters environment.
The database can also run in manual undo management mode. In this mode, undo space is managed
through rollback segments, and no undo tablespace is used.
Note:
Space management for rollback segments is complex. Oracle strongly recommends leaving the database
in automatic undo management mode.
Initialization Parameter   Description
UNDO_MANAGEMENT            If AUTO or null, enables automatic undo management. If MANUAL, sets manual undo
                           management mode. The default is AUTO.
UNDO_TABLESPACE            Optional, and valid only in automatic undo management mode. Specifies the name
                           of an undo tablespace. Use only when the database has multiple undo tablespaces
                           and you want to direct the database instance to use a particular undo tablespace.
When automatic undo management is enabled, if the initialization parameter file contains parameters
relating to manual undo management, they are ignored.
Note:
Earlier releases of Oracle Database default to manual undo management mode. To change to automatic
undo management, you must first create an undo tablespace and then change
the UNDO_MANAGEMENT initialization parameter to AUTO. A
null UNDO_MANAGEMENT initialization parameter defaults to automatic undo management mode in
Release 11g and later, but defaults to manual undo management mode in earlier releases. You must
therefore use caution when upgrading a previous release to Release 11g.
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery
purposes. However, for consistent read purposes, long-running queries may require this old undo
information for producing older images of data blocks. Furthermore, the success of several Oracle
Flashback features can also depend upon the availability of older undo information. For these reasons, it
is desirable to retain the old undo information for as long as possible.
When automatic undo management is enabled, there is always a current undo retention period, which
is the minimum amount of time that Oracle Database attempts to retain old undo information before
overwriting it. Old (committed) undo information that is older than the current undo retention period is
said to be expired and its space is available to be overwritten by new transactions. Old undo information
with an age that is less than the current undo retention period is said to be unexpired and is retained for
consistent read and Oracle Flashback operations.
Oracle Database automatically tunes the undo retention period based on undo tablespace size and
system activity. You can optionally specify a minimum undo retention period (in seconds) by setting
the UNDO_RETENTION initialization parameter. The exact impact of this parameter on undo retention is
as follows:
The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database
always tunes the undo retention period for the best possible retention, based on system activity
and undo tablespace size.
For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor
the minimum retention period specified by UNDO_RETENTION. When space is low, instead of
overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is
specified for an auto-extending undo tablespace, when the maximum size is reached, the
database may begin to overwrite unexpired undo information. The UNDOTBS1 tablespace that is
automatically created by DBCA is auto-extending.
Oracle Database automatically tunes the undo retention period based on how the undo tablespace is
configured.
If the undo tablespace is configured with the AUTOEXTEND option, the database dynamically
tunes the undo retention period to be somewhat longer than the longest-running active query on
the system. However, this retention period may be insufficient to accommodate Oracle Flashback
operations. Oracle Flashback operations resulting in snapshot too old errors are the indicator
that you must intervene to ensure that sufficient undo data is retained to support these
operations. To better accommodate Oracle Flashback features, you can either set
the UNDO_RETENTION parameter to a value equal to the longest expected Oracle Flashback
operation, or you can change the undo tablespace to fixed size.
If the undo tablespace is fixed size, the database dynamically tunes the undo retention period for
the best possible retention for that tablespace size and the current system load. This best
possible retention time is typically significantly greater than the duration of the longest-running
active query.
If you decide to change the undo tablespace to fixed-size, you must choose a tablespace size
that is sufficiently large. If you choose an undo tablespace size that is too small, the following
two errors could occur:
DML could fail because there is not enough space to accommodate undo for new
transactions.
Long-running queries could fail with a snapshot too old error, which means that there
was insufficient undo data for read consistency.
Note:
Automatic tuning of undo retention is not supported for LOBs. This is because undo information for LOBs
is stored in the segment itself and not in the undo tablespace. For LOBs, the database attempts to honor
the minimum undo retention period specified by UNDO_RETENTION. However, if space becomes low,
unexpired LOB undo information may be overwritten.
Retention Guarantee
To guarantee the success of long-running queries or Oracle Flashback operations, you can enable
retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is
guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail
due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can
overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This
option is disabled by default.
WARNING:
Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.
You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo
tablespace when you create it with either the CREATE DATABASE or CREATE UNDO
TABLESPACE statement. Or, you can later specify this clause in an ALTER TABLESPACE statement.
You disable retention guarantee with the RETENTION NOGUARANTEE clause.
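As a sketch (the tablespace name and datafile path here are illustrative, not from the original example), the clause can be set at creation time or toggled later:

```sql
-- Enable the guarantee when creating the undo tablespace
CREATE UNDO TABLESPACE undotbs_03
    DATAFILE '/u01/oracle/rbdb1/undo0301.dbf' SIZE 100M
    RETENTION GUARANTEE;

-- Enable or disable the guarantee on an existing undo tablespace
ALTER TABLESPACE undotbs_03 RETENTION GUARANTEE;
ALTER TABLESPACE undotbs_03 RETENTION NOGUARANTEE;
```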
You can use the DBA_TABLESPACES view to determine the retention guarantee setting for the undo
tablespace. A column named RETENTION contains a value of GUARANTEE, NOGUARANTEE,
or NOT APPLY, where NOT APPLY is used for tablespaces other than the undo tablespace.
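For example, a query along these lines shows the setting for all undo tablespaces:

```sql
SELECT tablespace_name, retention
  FROM dba_tablespaces
 WHERE contents = 'UNDO';
```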
For a fixed-size undo tablespace, the database calculates the best possible retention based on database
statistics and on the size of the undo tablespace. For optimal undo management, rather than tuning
based on 100% of the tablespace size, the database tunes the undo retention period based on 85% of
the tablespace size, or on the warning alert threshold percentage for space used, whichever is lower.
(The warning alert threshold defaults to 85%, but can be changed.) Therefore, if you set the warning
alert threshold of the undo tablespace below 85%, this may reduce the tuned size of the undo retention
period.
You can determine the current retention period by querying the TUNED_UNDORETENTION column of
the V$UNDOSTAT view. This view contains one row for each 10-minute statistics collection interval over
the last 4 days. (Beyond 4 days, the data is available in
the DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in seconds.
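For example (the timestamp formatting is illustrative):

```sql
SELECT TO_CHAR(begin_time, 'DD-MON-RR HH24:MI') AS begin_time,
       tuned_undoretention
  FROM v$undostat
 ORDER BY begin_time;
```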
You specify the minimum undo retention period (in seconds) by setting
the UNDO_RETENTION initialization parameter. As described in "About the Undo Retention Period", the
current undo retention period may be automatically tuned to be greater than UNDO_RETENTION, or,
unless retention guarantee is enabled, less than UNDO_RETENTION if space in the undo tablespace is
low.
UNDO_RETENTION = 1800
The effect of an UNDO_RETENTION parameter change is immediate, but it can only be honored if the
current undo tablespace has enough space.
Automatic tuning of undo retention typically achieves better results with a fixed-size undo tablespace. If
you decide to use a fixed-size undo tablespace, the Undo Advisor can help you estimate needed capacity.
You can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR PL/SQL
package. Enterprise Manager is the preferred method of accessing the advisor. For more information on
using the Undo Advisor through Enterprise Manager, see Oracle Database 2 Day DBA.
The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). It
is therefore important that the AWR have adequate workload statistics available so that the Undo Advisor
can make accurate recommendations. For newly created databases, adequate statistics may not be
available immediately. In such cases, continue to use the default auto-extending undo tablespace until at
least one workload cycle completes.
An adjustment to the collection interval and retention period for AWR statistics can affect the precision
and the type of recommendations that the advisor produces. See Oracle Database Performance Tuning
Guide for more information.
To use the Undo Advisor, you first estimate these two values:
The length of your expected longest running query
After the database has completed a workload cycle, you can view the Longest Running Query
field on the System Activity subpage of the Automatic Undo Management page.
The longest interval that you will require for Oracle Flashback operations
For example, if you expect to run Oracle Flashback queries for up to 48 hours in the past, your
Oracle Flashback requirement is 48 hours.
You then take the maximum of these two values and use that value as input to the Undo Advisor.
Running the Undo Advisor does not alter the size of the undo tablespace. The advisor just returns a
recommendation. You must use ALTER DATABASE statements to change the tablespace datafiles to
fixed sizes.
The following example assumes that the undo tablespace has one auto-extending datafile
named undotbs.dbf. The example changes the tablespace to a fixed size of 300MB.
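A sketch of the two statements involved (the datafile path is illustrative):

```sql
-- Fix the datafile at 300MB and stop it from auto-extending
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undotbs.dbf' RESIZE 300M;
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undotbs.dbf' AUTOEXTEND OFF;
```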
Note:
If you want to make the undo tablespace fixed-size, Oracle suggests that you first allow enough time
after database creation to run a full workload, thus allowing the undo tablespace to grow to its minimum
required size to handle the workload. Then, you can use the Undo Advisor to determine, if desired, how
much larger to set the size of the undo tablespace to allow for long-running queries and Oracle Flashback
operations.
You can activate the Undo Advisor by creating an undo advisor task through the advisor framework. The
following example creates an undo advisor task to evaluate the undo tablespace. The name of the
advisor is 'Undo Advisor'. The analysis is based on Automatic Workload Repository snapshots, which you
must specify by setting parameters START_SNAPSHOT and END_SNAPSHOT. In the following
example, the START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
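One possible shape of such a task, sketched with the DBMS_ADVISOR package (variable names are illustrative):

```sql
DECLARE
    tid   NUMBER;         -- advisor task ID
    tname VARCHAR2(30);   -- advisor task name
    oid   NUMBER;         -- advisor object ID
BEGIN
    DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
    DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', NULL, NULL, NULL, 'null', oid);
    DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
    DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
    DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
    DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/
```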
After you have created the advisor task, you can view the output and recommendations in the Automatic
Database Diagnostic Monitor in Enterprise Manager. This information is also available in
the DBA_ADVISOR_* data dictionary views
(DBA_ADVISOR_TASKS, DBA_ADVISOR_OBJECTS,DBA_ADVISOR_FINDINGS, DBA_ADVISOR_
RECOMMENDATIONS, and so on).
This section describes the various steps involved in undo tablespace management.
Although Database Configuration Assistant (DBCA) automatically creates an undo tablespace for new
installations of Oracle Database Release 11g, there may be occasions when you want to manually create
an undo tablespace.
There are two methods of creating an undo tablespace. The first method creates the undo tablespace
when the CREATE DATABASE statement is issued. This occurs when you are creating a new database,
and the instance is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO).
The second method is used with an existing database. It uses the CREATE UNDO
TABLESPACE statement.
You cannot create database objects in an undo tablespace. It is reserved for system-managed undo data.
Oracle Database enables you to create a single-file undo tablespace. Single-file, or bigfile, tablespaces
are discussed in "Bigfile Tablespaces".
You can create a specific undo tablespace using the UNDO TABLESPACE clause of the CREATE
DATABASE statement.
The following statement illustrates using the UNDO TABLESPACE clause in a CREATE
DATABASE statement. The undo tablespace is named undotbs_01 and one
datafile, /u01/oracle/rbdb1/undo0101.dbf, is allocated for it.
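A sketch of such a statement (clauses other than UNDO TABLESPACE are abbreviated, and the database name is an assumption):

```sql
CREATE DATABASE rbdb1
    CONTROLFILE REUSE
    MAXINSTANCES 1
    MAXLOGHISTORY 1
    UNDO TABLESPACE undotbs_01
        DATAFILE '/u01/oracle/rbdb1/undo0101.dbf';
```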
If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire CREATE
DATABASE operation fails. You must clean up the database files, correct the error and retry the CREATE
DATABASE operation.
The CREATE DATABASE statement also lets you create a single-file undo tablespace at database
creation. This is discussed in "Supporting Bigfile Tablespaces During Database Creation".
The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement, but
the UNDO keyword is specified. The database determines most of the attributes of the undo tablespace,
but you can specify the DATAFILE clause.
This example creates the undotbs_02 undo tablespace with the AUTOEXTEND option:
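A sketch (the datafile path and sizing are illustrative):

```sql
CREATE UNDO TABLESPACE undotbs_02
    DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE
    AUTOEXTEND ON;
```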
You can create more than one undo tablespace, but only one of them can be active at any one time.
Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most aspects of
undo tablespaces are system managed, you need only be concerned with the following actions:
Adding a datafile
Renaming a datafile
If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add more files
to it or resize existing datafiles.
You can use the ALTER DATABASE...DATAFILE statement to resize or extend a datafile.
Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the
undo tablespace undotbs_01:
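The statement itself:

```sql
DROP TABLESPACE undotbs_01;
```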
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo
tablespace contains any outstanding transactions (for example, a transaction died but has not yet been
recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an
undo tablespace even if it contains unexpired undo information (within retention period), you must be
careful not to drop an undo tablespace if undo information is needed by some existing queries.
You can switch from using one undo tablespace to another. Because
the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM
SET statement can be used to assign a new undo tablespace.
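For example, to switch the instance to the undotbs_02 tablespace:

```sql
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;
```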
Assuming undotbs_01 is the current undo tablespace, after this command successfully executes, the
instance uses undotbs_02 in place of undotbs_01 as its undo tablespace.
If any of the following conditions exist for the tablespace being switched to, an error is reported and no
switching occurs:
The tablespace does not exist
The tablespace is not an undo tablespace
The tablespace is already being used by another instance (in a RAC environment only)
The database is online while the switch operation is performed, and user transactions can be executed
while this command is being executed. When the switch operation completes successfully, all transactions
started after the switch operation began are assigned to transaction tables in the new undo tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit. If there are
any pending transactions in the old undo tablespace, the old undo tablespace enters into a PENDING
OFFLINE mode (status). In this mode, existing transactions can continue to execute, but undo records
for new user transactions cannot be stored in this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation
completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor
can it be dropped. Eventually, after all active transactions have committed, the undo tablespace
automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo
tablespace is available for other instances (in an Oracle Real Application Cluster environment).
If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then the current undo
tablespace is switched out and the next available undo tablespace is switched in. Use this statement with
care because there may be no undo tablespace available.
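For example:

```sql
ALTER SYSTEM SET UNDO_TABLESPACE = '';
```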
The Oracle Database Resource Manager can be used to establish user quotas for undo space. The
Database Resource Manager directive UNDO_POOL allows DBAs to limit the amount of undo space
consumed by a group of users (resource consumer group).
You can specify an undo pool for each consumer group. An undo pool controls the amount of total undo
that can be generated by a consumer group. When the total undo generated by a consumer group
exceeds its undo limit, the current UPDATE transaction generating the undo is terminated. No other
members of the consumer group can perform further updates until undo space is freed from the pool.
When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space.
Oracle Database also provides proactive help in managing tablespace disk space use by alerting you
when tablespaces run low on available space. Please refer to "Managing Tablespace Alerts" for
information on how to set alert thresholds for the undo tablespace.
In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system has
long-running queries that cause "snapshot too old" errors. To prevent excessive alerts, the long query
alert is issued at most once every 24 hours. When the alert is generated, you can check the Undo
Advisor Page of Enterprise Manager to get more information about the undo tablespace.
This section lists views that are useful for viewing information about undo space in the automatic undo
management mode and provides some examples. In addition to views listed here, you can obtain
information from the views available for viewing tablespace and datafile information. Please refer to
"Datafiles Data Dictionary Views" for information about those views.
The following dynamic performance views are useful for obtaining space information about the undo
tablespace:
View Description
V$UNDOSTAT Contains statistics for monitoring and tuning undo space. Use this view to
help estimate the amount of undo space required for the current workload.
The database also uses this information to help tune undo usage in the
system. This view is meaningful only in automatic undo management mode.
V$ROLLSTAT For automatic undo management mode, information reflects behavior of the
undo segments in the undo tablespace
V$TRANSACTION Contains undo segment information
DBA_UNDO_EXTENTS Shows the status and size of each extent in the undo tablespace.
DBA_HIST_UNDOSTAT Contains statistical snapshots of V$UNDOSTAT information. Please refer
to Oracle Database 2 Day DBA for more information.
Controlfile Structure
Information about the database is stored in different sections of the control file. Each section is a set of
records about an aspect of the database. For example, one section in the control file tracks data files and
contains a set of records, one for each data file. Each section is stored in multiple logical control file
blocks. Records can span blocks within a section.
The control file contains the following types of records:
Circular reuse records
These records contain noncritical information that is eligible to be overwritten if needed. When all
available record slots are full, the database either expands the control file to make room for a new record
or overwrites the oldest record. Examples include records about:
LOG HISTORY
OFFLINE RANGE
ARCHIVED LOG
BACKUP SET
BACKUP PIECE
BACKUP DATAFILE
BACKUP REDOLOG
DATAFILE COPY
BACKUP CORRUPTION
COPY CORRUPTION
DELETED OBJECT
PROXY COPY
Noncircular reuse records
These records contain critical information that does not change often and cannot be overwritten.
Examples of information include tablespaces, data files, online redo log files, and redo threads. Oracle
Database never reuses these records unless the corresponding object is dropped from the tablespace.
Examples of non-circular controlfile sections (the ones that can only expand)
DATABASE (info)
CKPT PROGRESS (Checkpoint progress)
REDO THREAD, REDO LOG (Logfile)
DATAFILE (Database File)
FILENAME (Datafile Name)
TABLESPACE
TEMPORARY FILENAME
RMAN CONFIGURATION
Reading and writing the control file blocks is different from reading and writing data blocks. For the
control file, Oracle Database reads and writes directly from the disk to the program global area (PGA).
Each process allocates a certain amount of its PGA memory for control file blocks.
A Control File is a small binary file that stores information needed to start up an Oracle database and to
operate the database.
If you are using an SPFILE, you can use the steps specified in the figure shown here. The difference is
you name the control file in the first step and create the copy in step 3.
· The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo
log files.
· It can only be issued as a command in the NOMOUNT stage.
· Omitting a filename can cause loss of the data in that file, or loss of access to the entire
database.
· If the database had forced logging enabled before creating the new control file, and you
want it to continue to be enabled, then you must specify the FORCE LOGGING clause in
the CREATE CONTROLFILE statement.
Control file names generated with OMF can be found within the alertSID.log that is automatically
generated by the CREATE DATABASE command and maintained by the Oracle Server.
Control File Information
Several dynamic performance views and SQL*Plus commands can be used to obtain information about
control files.
· V$CONTROLFILE – gives the names and status of control files for an Oracle Instance.
· V$DATABASE – displays database information from a control file.
· V$PARAMETER – lists the status and location of all parameters.
· V$CONTROLFILE_RECORD_SECTION – lists information about the control file record
sections.
· SHOW PARAMETER CONTROL_FILES command – lists the name, status, and location of
control files.
The queries shown here were executed against the DBORCL database used for general instruction in our
department.
CONNECT / AS SYSDBA
SELECT name FROM v$controlfile;
NAME
--------------------------------------------------------------------------------
/u01/student/dbockstd/oradata/USER350control01.ctl
/u02/student/dbockstd/oradata/USER350control02.ctl
SELECT name, value FROM v$parameter WHERE name='control_files';
NAME VALUE
--------------------------------------------------------------------------------
control_files /u01/student/dbockstd/oradata/USER350control01.ctl,
/u02/student/dbockstd/oradata/USER350control02.ctl
DESC v$controlfile_record_section;
Name Null? Type
--------------------- -------- ----------------------------
TYPE VARCHAR2(28)
RECORD_SIZE NUMBER
RECORDS_TOTAL NUMBER
RECORDS_USED NUMBER
FIRST_INDEX NUMBER
LAST_INDEX NUMBER
LAST_RECID NUMBER
SELECT type, record_size, records_total, records_used
FROM v$controlfile_record_section
WHERE type='DATAFILE';
TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED
--------------- ----------- ------------- ------------
DATAFILE 520 25 4
The RECORDS_TOTAL shows the number of records allocated for the section that stores information on
data files.
Several dynamic performance views display information from control files including:
· V$BACKUP
· V$DATAFILE,
· V$TEMPFILE
· V$TABLESPACE
· V$ARCHIVE
· V$LOG
· V$LOGFILE
SQL> @control_clonedb.sql
Control file created.
STEP7: OPEN DATABASE WITH RESETLOGS OPTION.
SQL> alter database open resetlogs;
Database altered.
SQL> select database_name from v$database;
DATABASE_NAME
--------------------------------------------------------------------------------
CLONEDBN
DBNEWID is a database utility that can change the internal database identifier (DBID)
and the database name (DBNAME) for an operational database.
The DBNEWID utility allows you to change any of the following:
• Only the DBID of a database
• Only the DBNAME of a database
• Both the DBNAME and DBID of a database
I prefer to change both DBNAME and DBID at the same time as a best practice during
creation of test environments.
Step-1. We will change both the db_name (to CLONE) and the DBID of the cloned database.
export ORACLE_SID=CLONEDB
sqlplus / as sysdba
shutdown immediate;
startup mount;
Step-3. Execute nid command and check the log file “/tmp/nid.log”:
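A typical invocation (the SYS credentials are illustrative; DBNEWID writes its progress to the file named in the LOGFILE parameter):

```shell
nid TARGET=SYS/password DBNAME=CLONE LOGFILE=/tmp/nid.log
```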
/data/oracle/app/oracle/oradata/CLONEDB/control02.ctl
Changing database ID from 953825422 to 1066065334
Changing database name from CLONEDBN to CLONE
Control File /data/oracle/app/oracle/oradata/CLONEDB/control01.ctl - modified
Control File /data/oracle/app/oracle/oradata/CLONEDB/control02.ctl - modified
Datafile /data/oracle/app/oracle/oradata/CLONEDB/system.db - dbid changed, wrote new
name
Datafile /data/oracle/app/oracle/oradata/CLONEDB/user04.db - dbid changed, wrote new
name
Datafile /data/oracle/app/oracle/oradata/CLONEDB/sysaux.db - dbid changed, wrote new
name
Datafile /data/oracle/app/oracle/oradata/CLONEDB/undo.db - dbid changed, wrote new
name
Datafile /data/oracle/app/oracle/oradata/CLONEDB/test1_tmp.db - dbid changed, wrote
new name
Step-4. Start up the instance with the nomount option and change the db_name to CLONE.
Then shut down and start up (mount) the instance again to activate the new db_name. At last, open the
database with the resetlogs option.
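The commands behind this step, as a sketch (assuming an SPFILE is in use, so db_name can be changed with ALTER SYSTEM):

```sql
startup nomount
alter system set db_name=CLONE scope=spfile;
shutdown immediate
startup mount
alter database open resetlogs;
```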
System altered.
Database altered.
Step-5. Check the DBID and the name of the new database.
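For example:

```sql
select dbid, name from v$database;
```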
Step-6. You should create a new password file for the new environment if you need one:
cd $ORACLE_HOME/dbs
orapwd file=orapwCLONE password=clone entries=3
Every Oracle Database has a control file, which is a small binary file that records the physical structure
of the database. The control file includes:
The database name
Names and locations of associated datafiles and online redo log files
The timestamp of the database creation
The current log sequence number
Checkpoint information
The control file must be available for writing by the Oracle Database server whenever the database is
open. Without the control file, the database cannot be mounted and recovery is difficult.
The control file of an Oracle Database is created at the same time as the database. By default, at least
one copy of the control file is created during database creation. On some operating systems the default is
to create multiple copies. You should create two or more copies of the control file during database
creation. You can also create control files later, if you lose control files or want to change particular
settings in the control files.
This section describes guidelines you can use to manage the control files for a database.
You specify control file names using the CONTROL_FILES initialization parameter in the database
initialization parameter file (see "Creating Initial Control Files"). The instance recognizes and opens all
the listed files during startup, and the instance writes to and maintains all listed control files during
database operation.
If you are not using Oracle-managed files, then the database creates a control file and uses a
default filename. The default name is operating system specific.
If you are using Oracle-managed files, then the initialization parameters you set to enable that
feature determine the name and location of the control files, as described in Chapter 15, "Using
Oracle-Managed Files".
If you are using Automatic Storage Management, you can place incomplete ASM filenames in
the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. ASM
then automatically creates control files in the appropriate places. See the sections "About ASM
Filenames" and "Creating a Database That Uses ASM" in Oracle Database Storage Administrator's
Guide for more information.
Every Oracle Database should have at least two control files, each stored on a different physical disk. If a
control file is damaged due to a disk failure, the associated instance must be shut down. Once the disk
drive is repaired, the damaged control file can be restored using the intact copy of the control file from
the other disk and the instance can be restarted. In this case, no media recovery is required.
The database writes to all filenames listed for the initialization parameter CONTROL_FILES in
the database initialization parameter file.
The database reads only the first file listed in the CONTROL_FILES parameter during database
operation.
If any of the control files become unavailable during database operation, the instance becomes
inoperable and should be aborted.
Note:
Oracle strongly recommends that your database has a minimum of two control files and that they
are located on separate physical disks.
One way to multiplex control files is to store a control file copy on every disk drive that stores members
of redo log groups, if the redo log is multiplexed. By storing control files in these locations, you minimize
the risk that all control files and all groups of the redo log will be lost in a single disk failure.
It is very important that you back up your control files. This is true initially, and every time you change
the physical structure of your database. Such structural changes include:
The main determinants of the size of a control file are the values set for
the MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and
MAXINSTANCES parameters in the CREATE DATABASE statement that created the associated
database. Increasing the values of these parameters increases the size of a control file of the associated
database.
This section describes ways to create control files.
The initial control files of an Oracle Database are created when you issue the CREATE
DATABASE statement. The names of the control files are specified by the CONTROL_FILES parameter
in the initialization parameter file used during database creation. The filenames specified
in CONTROL_FILES should be fully specified and are operating system specific. The following is an
example of a CONTROL_FILES initialization parameter:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u03/oracle/prod/control03.ctl)
If files with the specified names currently exist at the time of database creation, you must specify
the CONTROLFILE REUSE clause in the CREATE DATABASE statement, or else an error occurs. Also, if
the size of the old control file differs from the SIZE parameter of the new one, you cannot use
the REUSE clause.
The size of the control file changes between some releases of Oracle Database, as well as when the
number of files specified in the control file changes. Configuration parameters such
as MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES,
and MAXINSTANCES affect control file size.
You can subsequently change the value of the CONTROL_FILES initialization parameter to add more
control files or to change the names or locations of existing control files.
You can create an additional control file copy for multiplexing by copying an existing control file to a new
location and adding the file name to the list of control files. Similarly, you can rename an existing control
file by copying the file to its new name or location and changing the file name in the control file list. In
both cases, to guarantee that control files do not change during the procedure, shut down the database
before copying the control file.
To add a multiplexed copy of the current control file or to rename a control file:
1. Shut down the database.
2. Copy an existing control file to a new location, using operating system commands.
3. Edit the CONTROL_FILES parameter in the database initialization parameter file to add the new
control file name, or to change the existing control file name.
4. Restart the database.
This section discusses when and how to create new control files.
You must create new control files in the following situations:
All control files for the database have been permanently damaged and you do not have a control
file backup.
For example, you would change a database name if it conflicted with another database name in a
distributed environment.
Note:
You can change the database name and DBID (internal database identifier) using the DBNEWID
utility. See Oracle Database Utilities for information about using this utility.
The compatibility level is set to a value that is earlier than 10.2.0, and you must make a change
to an area of database configuration that relates to any of the following parameters from
the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES,
MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES. If compatibility is 10.2.0 or later,
you do not have to create new control files when you make such a change; the control files
automatically expand, if necessary, to accommodate the new configuration information.
For example, assume that when you created the database or recreated the control files, you
set MAXLOGFILES to 3. Suppose that now you want to add a fourth redo log file group to the
database with the ALTER DATABASE command. If compatibility is set to 10.2.0 or later, you can
do so, and the control files automatically expand to accommodate the new log file information.
However, with compatibility set earlier than 10.2.0, your ALTER DATABASE command would
generate an error, and you would have to first create new control files.
For information on compatibility level, see "About The COMPATIBLE Initialization Parameter".
You can create a new control file for a database using the CREATE CONTROLFILE statement. The
following statement creates a new control file for the prod database (a database that formerly used a
different database name):
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log',
'/u01/oracle/prod/redo01_02.log'),
GROUP 2 ('/u01/oracle/prod/redo02_01.log',
'/u01/oracle/prod/redo02_02.log'),
GROUP 3 ('/u01/oracle/prod/redo03_01.log',
'/u01/oracle/prod/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
'/u01/oracle/prod/rbs01.dbs' SIZE 5M,
'/u01/oracle/prod/users01.dbs' SIZE 5M,
'/u01/oracle/prod/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
Cautions:
The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo log
files. Omitting a filename can cause loss of the data in that file, or loss of access to the entire
database. Use caution when issuing this statement and be sure to follow the instructions in
"Steps for Creating New Control Files".
If the database had forced logging enabled before creating the new control file, and you want it
to continue to be enabled, then you must specify the FORCE LOGGING clause in the CREATE
CONTROLFILE statement. See "Specifying FORCE LOGGING Mode".
1. Make a list of all datafiles and redo log files of the database.
If you follow recommendations for control file backups as discussed in "Backing Up Control
Files" , you will already have a list of datafiles and redo log files that reflect the current structure
of the database. However, if you have no such list, executing the following statements will
produce one.
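The statements referred to above are queries against the dynamic performance views; a minimal sketch using the standard view and column names:

```sql
-- List all datafiles of the database
SELECT NAME FROM V$DATAFILE;

-- List all online redo log members
SELECT MEMBER FROM V$LOGFILE;
```

Save the output of both queries; together they give the file lists needed for the CREATE CONTROLFILE statement in step 5.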
If you have no such lists and your control file has been damaged so that the database cannot be
opened, try to locate all of the datafiles and redo log files that constitute the database. Any files
not specified in step 5 are not recoverable once a new control file has been created. Moreover, if
you omit any of the files that make up the SYSTEM tablespace, you might not be able to recover
the database.
2. Shut down the database.
If the database is open, shut down the database normally if possible. Use
the IMMEDIATE or ABORT clauses only as a last resort.
3. Back up all datafiles and online redo log files of the database.
4. Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT
5. Create a new control file for the database using the CREATE CONTROLFILE statement.
When creating a new control file, specify the RESETLOGS clause if you have lost any redo log
groups in addition to control files. In this case, you will need to recover from the loss of the redo
logs (step 8). You must specify the RESETLOGS clause if you have renamed the
database. Otherwise, select the NORESETLOGS clause.
6. Store a backup of the new control file on an offline storage device. See "Backing Up Control
Files" for instructions for creating a backup.
7. Edit the CONTROL_FILES initialization parameter for the database to indicate all of the control
files now part of your database as created in step 5 (not including the backup control file). If you
are renaming the database, edit the DB_NAME parameter in your instance parameter file to
specify the new name.
8. Recover the database if necessary. If you are not recovering the database, skip to step 9.
If you are creating the control file as part of recovery, recover the database. If the new control
file was created using the NORESETLOGS clause (step 5), you can recover the database with
complete, closed database recovery.
If the new control file was created using the RESETLOGS clause, you must specify USING
BACKUP CONTROL FILE. If you have lost online or archived redo logs or datafiles, use the
procedures for recovering those files.
9. Open the database. If you did not perform recovery, or if you performed complete, closed
database recovery in step 8, open the database normally.
If you specified RESETLOGS when creating the control file, use the ALTER
DATABASE statement, indicating RESETLOGS.
After issuing the CREATE CONTROLFILE statement, you may encounter some errors. This section
describes the most common control file errors:
After creating a new control file and using it to open the database, check the alert log to see if the
database has detected inconsistencies between the data dictionary and the control file, such as a datafile
that the data dictionary includes but the control file does not list.
If a datafile exists in the data dictionary but not in the new control file, the database creates a
placeholder entry in the control file under the name MISSINGnnnn, where nnnn is the file number in
decimal. MISSINGnnnn is flagged in the control file as being offline and requiring media recovery.
If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal, then you can make
the datafile accessible by renaming MISSINGnnnn to the name of the actual datafile.
If MISSINGnnnn corresponds to a datafile that was not read-only or offline normal, then you cannot
use the rename operation to make the datafile accessible, because the datafile requires media recovery
that is precluded by the results of RESETLOGS. In this case, you must drop the tablespace containing
the datafile.
Conversely, if a datafile listed in the control file is not present in the data dictionary, then the database
removes references to it from the new control file. In both cases, the database includes an explanatory
message in the alert log to let you know what was found.
If Oracle Database returns an error (usually ORA-01173, ORA-01176, ORA-01177, ORA-01215,
or ORA-01216) when you attempt to mount and open the database after creating a new control
file, the most likely cause is that you omitted a file from the CREATE CONTROLFILE statement or
included one that should not have been listed. In this case, restore the files you backed up in
step 3 and repeat the procedure from step 4, using the correct filenames.
Use the ALTER DATABASE BACKUP CONTROLFILE statement to back up your control files. You have
two options:
Back up the control file to a binary file (duplicate of existing control file) using the following
statement:
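The binary backup statement itself is not shown above; it takes the following form (the destination path is illustrative):

```sql
-- Create a binary copy of the current control file;
-- REUSE overwrites an existing file of the same name
ALTER DATABASE BACKUP CONTROLFILE TO '/u01/oracle/prod/backup/control.bkp' REUSE;
```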
Produce SQL statements that can later be used to re-create your control file:
This command writes a SQL script to a trace file where it can be captured and edited to
reproduce the control file. View the alert log to determine the name and location of the trace file.
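The statement that produces the trace script is the TO TRACE variant:

```sql
-- Write re-creation SQL (CREATE CONTROLFILE statements) to a trace file
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```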
This section presents ways that you can recover your control file from a current backup or from a
multiplexed copy.
This procedure assumes that one of the control files specified in the CONTROL_FILES parameter is
corrupted, that the control file directory is still accessible, and that you have a multiplexed copy of the
control file.
1. With the instance shut down, use an operating system command to overwrite the bad control file
with a good copy:
% cp /u03/oracle/prod/control03.ctl /u02/oracle/prod/control02.ctl
This procedure assumes that one of the control files specified in the CONTROL_FILES parameter is
inaccessible due to a permanent media failure and that you have a multiplexed copy of the control file.
1. With the instance shut down, use an operating system command to copy the current copy of the
control file to a new, accessible location:
% cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control03.ctl
2. Edit the CONTROL_FILES parameter in the initialization parameter file to replace the bad
location with the new location:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u04/oracle/prod/control03.ctl)
3. Restart the database:
SQL> STARTUP
If you have multiplexed control files, you can get the database started up quickly by editing
the CONTROL_FILES initialization parameter: remove the bad control file from
the CONTROL_FILES setting and you can restart the database immediately. You can then
reconstruct the bad control file and, at some later time, shut down and restart the database after
editing the CONTROL_FILES initialization parameter to include the recovered control file.
You may want to drop a control file from the database, for example if its location is no longer
appropriate. Remember that the database should have at least two control files at all times.
1. Shut down the database.
2. Edit the CONTROL_FILES parameter in the database initialization parameter file to delete the
old control file name.
3. Restart the database. This operation does not physically delete the unwanted control file; use
operating system commands to delete it if desired.
View                              Description
V$DATABASE                        Displays database information from the control file
V$CONTROLFILE                     Lists the names of control files
V$CONTROLFILE_RECORD_SECTION      Displays information about control file record sections
V$PARAMETER                       Displays the names of control files as specified in
                                  the CONTROL_FILES initialization parameter
NAME
-------------------------------------
/u01/oracle/prod/control01.ctl
/u02/oracle/prod/control02.ctl
/u03/oracle/prod/control03.ctl
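The NAME listing above is the kind of output produced by querying V$CONTROLFILE; a minimal example:

```sql
-- Show the names of all control files known to the instance
SELECT NAME FROM V$CONTROLFILE;
```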
Redo Log Files enable the Oracle Server or DBA to redo transactions if a database failure occurs. This
is their ONLY purpose – to enable recovery.
Transactions are written synchronously to the Redo Log Buffer in the System Global Area.
· All database changes are written to redo logs to enable recovery.
· As the Redo Log Buffer fills, the contents are written to Redo Log Files.
· This includes uncommitted transactions, undo segment data, and schema/object management
information.
· During database recovery, the information in Redo Log Files enables data that has not yet been
written to datafiles to be recovered.
Redo Thread
If a database is accessed by multiple instances, each instance has its own redo log, called a redo thread.
· This applies mostly in an Oracle Real Application Clusters environment.
· Having a separate thread for each instance avoids contention when writing to what would
otherwise be a single set of redo log files - this eliminates a performance bottleneck.
A Redo Log File stores Redo Records (also called redo log entries).
· Each record consists of "vectors" that store information about:
o changes made to a database block.
o undo block data.
o transaction table of undo segments.
· These enable the protection of rollback information as well as the ability to roll forward for
recovery.
· Each time a Redo Log Record is written from the Redo Log Buffer to a Redo Log File, a System
Change Number (SCN) is assigned to the committed transaction.
You will not always be able to accomplish all of the above guidelines – your ability to meet these
guidelines will depend on the availability of a sufficient number of independent physical disk drives.
· The Redo Log file to which LGWR is actively writing is called the current log file.
· Log files required for instance recovery are categorized as active log files.
· Log files no longer needed for instance recovery are categorized as inactive log files.
· Active log files cannot be overwritten by LGWR until ARCn has archived the data when
archiving is enabled.
4. A Redo Log Group fails while LGWR is writing to the members – Oracle generates an error
message and the database instance shuts down. Check whether the disk drive simply needs to
be turned back on or whether media recovery is required. If the drive only needs to be turned
back on, do so and Oracle will perform automatic instance recovery.
Sometimes a Redo Log File in a Group becomes corrupted while a database instance is in operation.
· Database activity halts because archiving cannot continue.
· Clear the Redo Log Files in a Group (here Group #2) with the statement:
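The clearing statement itself is not shown above; for group 2 it takes the following form (the UNARCHIVED keyword is needed when the corrupted group cannot be archived first):

```sql
-- Reinitialize the corrupted log group without archiving it
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
```

After clearing an unarchived group, take a full backup of the database, since the cleared redo is no longer available for media recovery.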
How large should Redo Log Files be, and how many Redo Log Files are enough?
The size of the redo log files can influence performance, because the behavior of the DBWn and ARCn
processes (but not the LGWR process) depend on the redo log sizes.
· Generally, larger redo log files provide better performance.
· Undersized log files increase checkpoint activity and reduce performance.
· It may not always be possible to provide a specific size recommendation for redo log files, but
redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable.
· Size your online redo log files according to the amount of redo your system generates. A rough
guide is to switch logs at most once every twenty minutes; however, more frequent switches are
common when using Data Guard for primary and standby databases.
· It is also good for the file size to be such that a filled group can be archived to a single offline
storage unit when such an approach is used.
· If LGWR generates trace files or alert log entries indicating that Oracle is waiting because a
checkpoint has not completed or a group has not been archived, consider adding another redo
log group (with its files).
This provides facts and guidelines for sizing Redo Log files.
· Minimum size for an On-line Redo Log File is 4MB.
· Maximum size and Default size depends on the operating system.
· The file size depends on the size of transactions that process in the database.
o Large batch update transactions require larger Redo Log Files, 5MB or more in size.
o Databases that primarily support on-line, transaction-processing (OLTP) can work
successfully with smaller Redo Log Files.
· Set the size large enough so that the On-line Redo Log Files switch about once every 20
minutes.
o If your Log Files are 4MB in size and switches are occurring on the average of once
every 10 minutes, then double their size!
o You can set the log switch interval to 20 minutes (a typical value) with
the init.ora setting shown here, which sets the ARCHIVE_LAG_TARGET parameter in
seconds (there are 1,200 seconds in 20 minutes).
ARCHIVE_LAG_TARGET = 1200
· Determine if LGWR has to wait (meaning you need more groups) by:
o Check the LGWR trace files – the trace files will provide information about LGWR waits.
o Check the alert_SID.log file for messages indicating that LGWR has to wait for a group
because a checkpoint has not completed or a group has not been archived.
The parameter MAXLOGFILES in the CREATE DATABASE command specifies the maximum number of
Redo Log Groups you can have – group numbers range from 1 to MAXLOGFILES.
· Override this parameter only by recreating the database or control files.
· When MAXLOGFILES is not specified, the CREATE DATABASE command uses a default value
specific to each operating system – check the operation system documentation.
· With Oracle 11g, if you exceed the maximum number of Redo Log Groups, Oracle
automatically expands the control file to accommodate the new maximum number.
LGWR writes from the Redo Log Buffer to the current Redo Log File when:
· a transaction commits
· the Redo Log Buffer is one-third or more full
· there is more than 1 MB of redo in the Redo Log Buffer
· before DBWn writes modified blocks from the Database Buffer Cache to the datafiles
Checkpoints can occur for all datafiles in the database or only for specific datafiles. A checkpoint occurs,
for example, in the following situations:
· when a log switch occurs.
· when an Oracle Instance is shut down with the normal, transactional, or immediate option.
· when forced by setting the initialization parameter FAST_START_MTTR_TARGET that
controls the number of dirty buffers written by DBWn to datafiles.
· when a DBA issues the command to create a checkpoint.
· when the ALTER TABLESPACE [OFFLINE NORMAL | READ ONLY | BEGIN
BACKUP] command causes checkpointing of specific datafiles.
OPTIMAL_LOGFILE_SIZE
--------------------
256
o You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise
Manager Database Control.
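The OPTIMAL_LOGFILE_SIZE value shown above (in megabytes) comes from the V$INSTANCE_RECOVERY view; a minimal query, assuming FAST_START_MTTR_TARGET has been set so the advice column is populated:

```sql
-- Redo log sizing advice, in MB
SELECT OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY;
```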
Result:
NAME VALUE
---------------------------- ---------
redo wastage 17941684
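The "redo wastage" figure shown above is a system statistic; it can be retrieved with a query such as:

```sql
-- Bytes of redo log space wasted by padding when LGWR writes partial blocks
SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME = 'redo wastage';
```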
With Oracle 11g Release 2 you can specify a block size for online redo log files with
the BLOCKSIZE keyword in the CREATE DATABASE, ALTER DATABASE,
and CREATE CONTROLFILE statements. The permissible block sizes are 512, 1024, and 4096.
This example shows use of the BLOCKSIZE parameter to create 512-byte blocks.
BLOCKSIZE
---------
512
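A sketch of creating such a group and verifying the result (the group number, file paths, and size are illustrative):

```sql
-- Add a log group whose members use 512-byte log blocks
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oracle/prod/redo04_01.log',
   '/u02/oracle/prod/redo04_02.log') SIZE 100M BLOCKSIZE 512;

-- Confirm the block size of each group
SELECT GROUP#, BLOCKSIZE FROM V$LOG;
```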
Log Switches and Checkpoints
This figure shows commands used to cause Redo Log File switches and Checkpoints.
Remember, you must keep at least two groups of On-line Redo Log Files working. You also cannot drop
an active (current) Group. Further, the actual operating system files are not deleted when you drop a
Group. You must use operating system commands to delete the files that stored the Redo Logs of the
dropped Group.
Sometimes an individual Redo Log File will become damaged (invalid). You can use the following
command to drop the file. Then use the operating system command to delete the file that stored the
invalid Redo Log File, and then recreate the Redo Log File.
Each Redo Log File member in a Group must be identical in size. If you need to make your Redo Log
Files larger, use the following steps.
1. Use the V$LOG view to identify the current active Redo Log Group.
GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 CURRENT
4 INACTIVE
2. Drop one or more of the inactive Redo Log Groups keeping at least two current On-line Redo Log
Groups.
3. Use operating system commands to delete the files that stored the dropped Redo Log Files.
4. Recreate the groups with larger file sizes. Continue this sequence until all groups have been resized.
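Steps 2 through 4 above can be sketched as follows (the group number, paths, and new size are illustrative; the paths follow the listing below):

```sql
-- 2. Drop an inactive group
ALTER DATABASE DROP LOGFILE GROUP 1;

-- 3. (Delete the old member files at the operating system level.)

-- 4. Recreate the group with larger members
ALTER DATABASE ADD LOGFILE GROUP 1
  ('/u01/oradata/DBORCL/DBORCLredo01a.log',
   '/u02/oradata/DBORCL/DBORCLredo01b.log') SIZE 200M;
```

Repeat for each remaining group, forcing log switches as needed so that each group becomes INACTIVE before it is dropped.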
MEMBER STATUS
---------------------------------------- ----------
/u01/oradata/DBORCL/DBORCLredo01a.log
/u01/oradata/DBORCL/DBORCLredo02a.log
/u01/oradata/DBORCL/DBORCLredo03a.log
/u02/oradata/DBORCL/DBORCLredo01b.log
/u02/oradata/DBORCL/DBORCLredo02b.log
/u02/oradata/DBORCL/DBORCLredo03b.log
6 rows selected.
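The member listing above is typical of a query against V$LOGFILE, for example:

```sql
-- List all redo log members and their status
SELECT MEMBER, STATUS FROM V$LOGFILE ORDER BY MEMBER;
```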
A Redo Log File is the file into which LGWR (Log Writer) records the information held in the Redo Log
Buffer. Changes to the database are first recorded in the Redo Log Buffer so that the data can be
recovered in the event of a future failure. The role the Redo Log File plays while changes make their way
into the datafiles is as follows.
① When data is changed by a DML statement, the change is stored in the Redo Log Buffer, while the
modified block is held in the Database Buffer Cache before it is eventually written to the datafile.
② When a COMMIT is issued, the LGWR process attaches an SCN (System Change Number) to the
changes saved in the Redo Log Buffer and writes them to the Redo Log File. Once the changes are stored
in the Redo Log File, the corresponding entries in the Redo Log Buffer are freed.
③ LGWR records the last SCN assigned to the committed data in the commit SCN section of the
Control File.
① When LGWR runs out of space in the current Redo Log File while recording changes from the Redo
Log Buffer, a log switch occurs and LGWR continues writing to the next file. The log switch raises a
checkpoint signal, which the CKPT process detects.
② Redo Log File status
Current
o The file to which LGWR is currently recording the information from the Redo Log Buffer
is in the CURRENT state.
Active
o A file that has been completely filled and handed over to the next file by a log switch,
but whose stored information has not yet all been written to the Data File, is in the
ACTIVE state.
InActive
o A log file whose contents have all been written to the Data File is in the INACTIVE state.
o Only a redo log file in this state can be deleted.
① After detecting a log switch, the CKPT process passes a checkpoint signal to DBWR, which saves the
contents of the Database Buffer Cache to the Data File. When DBWR finishes writing the data to disk, the
last SCN of the data written is recorded.
② After signaling DBWR, the CKPT process writes the checkpoint SCN directly into the Control File.
③ Both the commit SCN and the checkpoint SCN are stored in the Control File, and the two can differ:
commit SCN
o Granted each time LGWR writes the committed contents of the Redo Log Buffer to the
Redo Log File, keeping the two synchronized. (The commit SCN in the Control File and
the commit SCN of the data in the Redo Log File are consistent; it is updated on every
commit.)
checkpoint SCN
o Updated not on commit but when, in response to the checkpoint signal, the information
in the Database Buffer Cache is written to the Data File; it indicates how much data has
been stored in the Data File up to that time.
① Instance Recovery proceeds by using the header information of the Control File and the data files. As
indicated above, the Control File stores the SCN of the data currently saved in the Redo Log File, while
each data file stores the SCN of the data currently stored in it. If the SCN in the Control File is the greater
of the two, committed data exists in the Redo Log File that has not yet been properly stored in the Data
File, and that data can be recovered from the Redo Log File.
* However, if the SCN stored in the Control File is the smaller one during startup, an error is produced
because the Control File is an older version.
① Oracle requires a minimum of two groups of Redo Log Files, with at least one member per group.
* In production, for stability, Oracle should be run with a minimum of three groups of two or
more members each.
② Log switches, which generate checkpoints, cycle through the groups in round-robin fashion.
③ The members of a group are identical in size and content.
④ Spreading the members of a group across different locations is safe for administration and plays an
important role in database recovery.
⑤ When the members of a group are on different disks, they are written in parallel at the same time;
members on the same disk, however, are written serially.
To check the groups and members of the files that make up the current redo log configuration, query
the data dictionary (for example V$LOGFILE), ordering the output with
order by 1,2;
When you add a group, the one or more member files that belong to it must be enclosed in ()
parentheses, as in:
('/home/oracle/oradata/redo04_a.log',
... SIZE 5M means each member of the group will be set to a file size of 5 MB (this clause can be
omitted).
When adding a member to an existing group, the ALTER DATABASE statement must specify at its end
the number of the group to which the member is added, as in:
'/home/oracle/disk5/redo03_d.log' TO GROUP 3;
After dropping group 3, you can see that the group has been deleted. Note, however, that the ALTER
DATABASE statement only removes the information about the Redo Log File recorded in the control file;
the actual file is not deleted and remains on disk.
SQL> !rm /home/oracle/disk5/redo03_d.log
SQL> !rm /home/oracle/disk5/redo03_c.log
SQL> !rm /home/oracle/disk4/redo03_b.log
SQL> !rm /home/oracle/disk3/redo03_a.log
As mentioned above, the minimum number of Redo Log File groups is two. If you try to delete one
group while only two remain, the group is not removed and the following error occurs:
ERROR at line 1:
ORA-01567: dropping log 1 would leave less than 2 log files for instance testdb (thread 1)
Next, remove the member file redo03_a.log from group 3 in the list above:
ALTER DATABASE DROP LOGFILE MEMBER '/home/oracle/oradata/redo03_a.log';
As noted above, each Redo Log Group must keep at least one member. When you try to remove
redo03.log, the last member of group 3, the following error occurs:
ALTER DATABASE DROP LOGFILE MEMBER '/home/oracle/oradata/redo03.log';
ERROR at line 1:
ORA-00361: cannot remove last log member /home/oracle/oradata/redo03.log for group 3
As mentioned above, a log file can be deleted only when its status is INACTIVE, which indicates
that all of its data has been stored in the Data File. (A file in the ACTIVE state still holds committed
data that has not yet been stored in the Data File, so it must never be deleted.)
If the status of the log file whose member you try to delete is CURRENT, the following error is generated:
ALTER DATABASE DROP LOGFILE MEMBER '/home/oracle/disk5/redo01_c.log';
ERROR at line 1:
ORA-01609: log 1 is the current log for thread 1 - cannot drop members
In that case you must force a log switch to change the state of the log file.
If the current state of the Redo Log File is as shown above, running the following
command changes the status.
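The command referred to above, which forces a log switch, is:

```sql
-- Force LGWR to switch to the next log group,
-- moving the CURRENT group toward ACTIVE and then INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
```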
Redo Log
- When data changes occur, Oracle records the full details of the change, both the content before the
change and the content after it, so that recovery is possible if a fault occurs.
- The in-memory location where these records are kept is the Redo Log Buffer; the on-disk location is
the Redo Log File.
※ Write Ahead Log - the before and after images are recorded in the redo log before the actual data
is changed.
※ Log Force at Commit - when a commit request arrives, all redo records related to the transaction are
saved to the redo log file before the commit completes.
The block is then locked (a page fix) so that the row area cannot be changed by another user while the
change is made.
: A Change Vector is the set of all information about changed data that is recorded in the redo log, its
purpose being the future recovery of that data.
For example, if you insert one row, the multiple items that change together form one change vector.
In general, the Redo Log records changes for the purpose of recovering transactions.
This means it is used not only to recover committed data but also to roll back uncommitted data. After a
user commits, the data can be rolled forward; but if the instance is killed before a rollback completes, the
database must still be able to finish rolling back everything that was never committed, even data that a
checkpoint has already written to disk.
The Change Vector created in the PGA is copied, in redo record format, row by row into the Redo Log
Buffer.
2) After the change vector has been created in the PGA, the server process calculates the space needed
in the Redo Log Buffer and must acquire a latch before copying.
All memory resources (shared pool, database buffer cache, etc.) each have an appropriate latch.
The Redo Log Buffer is no different: a process that wants to write into the Redo Log Buffer must first
obtain a Redo Copy Latch. If many server processes change data at the same time, the process of
obtaining a Redo Copy Latch can become overloaded.
The Redo Copy Latch must be held until all of the change vectors have been recorded in the Redo Log
Buffer.
The number of Redo Copy Latches can be adjusted with the hidden parameter
_log_simultaneous_copies (default: CPU count x 2).
3) After securing the Redo Copy Latch, the server process must also acquire the Redo Allocation Latch
to record the information in the Redo Log Buffer.
Starting with 9i, the Shared Redo Strand feature divides the Redo Log Buffer into several spaces, each
assigned its own Redo Allocation Latch; the number of strands can be set with the LOG_PARALLELISM
parameter (default: 1).
Beginning with 10g, the Shared Redo Strand concept was extended by the Private Redo Strand feature.
Starting with 10g, each server process writes the change vectors it creates into its own Private Redo
Strand, a space from which LGWR writes directly to the Redo Log File when necessary.
With the introduction of Private Redo Strands the performance of each process improved further, and
the latch contention that previously existed was reduced; this mechanism is also referred to as Zero
Copy Redo.
SQL> select count(*)
       from v$latch_children
      where name = 'redo allocation';
4) Under certain conditions, LGWR writes some of the information contained in the Redo Log Buffer to
the Redo Log File. LGWR is asked to record the information in the Redo Log Buffer to the Redo Log File:
- Every 3 seconds
The LGWR process, when it has nothing to do, sleeps on the 'rdbms ipc message' wait event with a
timeout of 3 seconds. Each time it wakes up, it checks whether there is content in the Redo Log Buffer
that should be recorded in the Redo Log File; if there is, it flushes that portion of the Redo Log Buffer to
the Redo Log File.
- When the Redo Log Buffer is one-third full, or holds more than 1 MB in total: each time a server
process is allocated blocks of the log buffer, it computes the number of log buffer blocks currently in use,
and if that number exceeds the value of _LOG_IO_SIZE it asks LGWR to write the contents of the Redo
Log Buffer to the Redo Log File.
- When the user performs a Commit: the commit record is written to the Redo Log File in what is
referred to as a sync write.
- Starting with Oracle 8i, when DBWR is asked to write a data block whose high-RBA is greater than
LGWR's on-disk RBA, DBWR places that block on the Deferred Write Queue and asks LGWR to write the
corresponding redo first; only after LGWR has written that redo is the data block written, keeping the
two in sync.
When the above conditions occur, LGWR writes what is in the Redo Log Buffer to the Redo Log File.
Like DBWR, LGWR writes in block units. The size of the blocks DBWR writes is determined by
DB_BLOCK_SIZE, but the log block size that LGWR writes is not the DB_BLOCK_SIZE value; it is the OS
block size and may differ depending on the OS type.
SQL> select max(lebsz) from sys.x$kccle;
- When a table or index is created with the NOLOGGING option, direct-path operations are not logged (ordinary insert, update, and delete operations are still all recorded).
INACTIVE: redo logs not currently in use and no longer needed for instance recovery
ACTIVE: redo logs still needed for instance recovery
CURRENT: the redo log that LGWR is currently writing to
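The status of each group can be checked from V$LOG; a minimal sketch, run as a privileged user such as SYSDBA:

```sql
-- Show each redo log group with its sequence number, member count, and status
SELECT group#, sequence#, members, status
FROM   v$log
ORDER  BY group#;
```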
Health Check
Redo configuration
As shown above, mirroring LOG1 on physically separate disks improves availability; to manage this, the two redo members are enclosed in a single group (GROUP 1).
Redo Log Management
ALTER DATABASE ADD LOGFILE
GROUP 1 ('/home/oracle/MYDB/disk1/redolog01_01.log',
         '/home/oracle/MYDB/disk2/redolog01_02.log') SIZE 20M,
GROUP 2 ('/home/oracle/MYDB/disk3/redolog02_01.log',
         '/home/oracle/MYDB/disk4/redolog02_02.log') SIZE 20M;
ALTER DATABASE DROP LOGFILE GROUP 1;
※ Caution: a Redo Log Group whose status is CURRENT or ACTIVE can never be dropped. Perform a log switch first so that the group becomes INACTIVE. Note that a single log switch does not necessarily make the group INACTIVE; check the status and switch again if needed, and the group will eventually change to INACTIVE.
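The caution above can be followed as a short sequence, assuming group 1 is the group to be dropped:

```sql
ALTER SYSTEM SWITCH LOGFILE;            -- force the group out of CURRENT
SELECT group#, status FROM v$log;       -- repeat until group 1 shows INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;    -- now the drop succeeds
```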
ALTER DATABASE ADD LOGFILE MEMBER '/home/oracle/MYDB/disk3/redolog01_03.log'
TO ('/home/oracle/MYDB/disk1/redolog01_01.log', '/home/oracle/MYDB/disk2/redolog01_02.log');
ALTER DATABASE DROP LOGFILE MEMBER '/home/oracle/MYDB/disk3/redolog01_03.log';
- Renaming Redo Log files (used to change a file's location or name):
ALTER DATABASE RENAME FILE '/diska/logs/log1a.log', '/diska/logs/log2a.log'
TO '/diskc/logs/log1c.log', '/diskc/logs/log2c.log';
ALTER SYSTEM SWITCH LOGFILE;
The online redo logs are crucial for recovery operations of an Oracle database. The online redo log consists of two or more pre-allocated files that store all changes made to the database. Each instance of an Oracle database has its own online redo log to protect the database in case of a crash.
Each instance of a database has its own redo log groups. These redo log files, multiplexed or not, form a single redo thread per Oracle instance when Oracle Parallel Server is not used, and are written by the LGWR (Log Writer) process.
Redo entries record all changes made to the database, including the rollback segments. So the online
redo log also protects rollback data.
Redo records are placed in a circular fashion in the redo log buffer of the SGA and are written to the redo log files by the LGWR background process (Log Writer). When a transaction is committed, LGWR writes the transaction's redo records from the redo log buffer in the SGA to a redo log file, and an SCN (system change number) is assigned to identify the redo records of each committed transaction.
It is only when all redo records associated with a given transaction have been written to disk in the redo log files that the user process is notified that the transaction is committed.
Redo records can also be written to a redo log file before the corresponding transaction is committed. If the redo log buffer is full, or another transaction commits, the LGWR process flushes all of the redo log buffer entries to a redo log file, even though some of them belong to uncommitted transactions. If necessary, Oracle can later reverse these changes.
The online redo log consists of at least two online redo log files. Oracle requires a minimum of two files to ensure that one redo log file is always available for writing while the other is being archived (if ARCHIVELOG mode is active).
The LGWR process writes to the redo log files in circular mode: when the current redo log file is filled, LGWR begins writing to the next redo log file. When the last redo log file is filled, LGWR returns to the first redo log file and writes to it, starting a new cycle.
A filled redo log file becomes available to LGWR for reuse depending on whether ARCHIVELOG mode is active:
If archiving is not enabled (NOARCHIVELOG mode), a filled redo log file is available once the changes recorded in it have been written to the data files.
If archiving is enabled (ARCHIVELOG mode), a filled redo log file is available to LGWR once the changes recorded in it have been written to the data files and the file has been archived.
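Whether the database runs in ARCHIVELOG or NOARCHIVELOG mode can be verified with, for example:

```sql
-- Current archiving mode of the database
SELECT log_mode FROM v$database;
-- In SQL*Plus, ARCHIVE LOG LIST also summarizes the archiving configuration
```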
Oracle uses one redo log file at a time to write the redo records from the redo log buffer. The redo log file that LGWR is currently filling is called the current redo log file. Redo log files required for recovery are called active redo log files; those that are not required for recovery are called inactive redo log files.
If archiving is enabled, Oracle cannot reuse or overwrite an active redo log file until its content has been archived in full.
A log switch is the point at which Oracle stops writing to one redo log file and switches to another. A log switch normally occurs when the current redo log file is full and writing must continue in the next redo log file. Log switches can also be forced manually.
Oracle automatically assigns a log sequence number to a redo log file each time a log switch occurs. If Oracle archives redo log files, the archived redo log files retain that sequence number. During crash recovery, Oracle reapplies the redo log files in the correct order by using the log sequence numbers.
When redo log file multiplexing is applied, the LGWR process writes the same information to multiple identical redo log files, so a read failure on one damaged redo log file does not cause loss of redo data.
Redo log file multiplexing is implemented by creating redo log file groups. Each redo log file in a group is called a member, and all members of a group must have the same size.
According to the example given in the diagram, the LGWR process writes simultaneously to the A_LOG1 and B_LOG1 members of the Group1 redo log file group, and then, after a log switch, to the A_LOG2 and B_LOG2 members of the Group2 group.
It is recommended to place the members of a redo log file group on different disks.
It is imperative to consider the parameters that limit the number of online redo log files before altering the redo log file configuration of an instance.
The MAXLOGFILES parameter of the CREATE DATABASE command determines the maximum number of redo log file groups. The only way to change this limit is to recreate the database or alter its control files. If MAXLOGFILES is not specified, Oracle applies a default value that depends on the operating system.
The initialization parameter LOG_FILES (in the initialization file of the instance) can temporarily decrease the maximum number of redo log file groups, without exceeding the MAXLOGFILES setting.
The MAXLOGMEMBERS parameter of the CREATE DATABASE command determines the maximum number of members in a redo log file group. As with MAXLOGFILES, the only way to increase this value is to recreate the database or alter the control files. If MAXLOGMEMBERS is not specified, Oracle applies a default value that depends on the operating system.
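The redo log limit recorded in the control file can be read from V$CONTROLFILE_RECORD_SECTION; a query of this form presumably produced the output below:

```sql
-- Number of redo log records the control file can hold (reflects MAXLOGFILES)
SELECT records_total
FROM   v$controlfile_record_section
WHERE  type = 'REDO LOG';
```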
records_total
-------------
32
In this practical case, a maximum of 32 redo log file groups can be created.
3- Case study
The practical case that follows presents the main commands for handling Oracle redo log files, in order to perform a reorganization of the redo log files. The original CREATE DATABASE command used to create the practice database is recalled below:
MAXLOGHISTORY 1
DATAFILE '/sdata/oracle/v8/TSTT1ORA/data/system01.dbf' SIZE 264M REUSE AUTOEXTEND
OFF
MAXDATAFILES 254
MAXINSTANCES 1
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET WE8ISO8859P1;
The CREATE DATABASE command therefore limits us to 32 redo log file groups, and each group may contain no more than 2 members (2 redo log files).
When the GSC database was created, 3 redo log file groups were created, each containing only one redo log file. After reorganization, there will still be three redo log file groups, but each group will contain two redo log files, as shown in the following diagram:
In this practical case, the members of the same group cannot be placed on different disks.
The views V$LOG and V$LOGFILE provide information on redo log file groups. These views are based on the information in the control files. The V$LOG view gives precise information about the redo log files:
select group#,
       thread#,
       sequence#,
       bytes,
       members,
       archived,
       status,
       first_change#,
       to_char(first_time, 'dd/mm/yy hh:mi:ss') as first_time
from v$log;
The V$LOG view shows that there are indeed three redo log file groups, each with only one member or redo log file (MEMBERS = 1). Each redo log file has a size of 1 MB.
The active redo log file is the one in group 1, whose status is CURRENT (STATUS column).
The log sequence number (SEQUENCE#) is 1066 for the first redo log file group, 1064 for the second group and 1065 for the third, which makes perfect sense given the circular use of the redo log files.
The FIRST_CHANGE# column indicates the first SCN (system change number) in the redo log file group, and the FIRST_TIME column gives the time of that SCN.
The V$LOGFILE view, for its part, provides the physical constitution and locations of the members of the redo log file groups:
The status is INVALID when a redo log file in a group cannot be accessed.
The status is STALE when Oracle suspects that a redo log file is incomplete or incorrect; it remains so until the redo log file in question becomes a member of the active group again.
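A sketch of a V$LOGFILE query showing member locations and statuses:

```sql
SELECT group#, status, type, member
FROM   v$logfile
ORDER  BY group#;
```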
The ALTER SYSTEM command allows switching the redo log file group:
The V$LOG view confirms the log switch at the end of the command: group 2 becomes the current redo log file group, with a log sequence number incremented by 1.
The instance's alert file confirms the manual switch:
Before deleting a group of redo log files, some concepts must be known:
it is imperative to ensure that at least two redo log file groups remain available after the deletion.
an error message appears when attempting to remove a member of an active group of redo log files. A log switch must be performed beforehand.
it is allowed to remove a single member of a redo log file group, provided that this member is not the unique, last member of the group (otherwise the ORA-00361 message is displayed: cannot remove last log member).
the physical file is not deleted from disk.
In the practical case: group 1 is not active and has only one member, therefore only the following syntax can be used.
Redo log file group 1 will be recreated with only 1 member (redo1_01.log). The ALTER DATABASE syntax is used to add a redo log file group; the number of members in a redo log file group cannot exceed the MAXLOGMEMBERS parameter.
The ALTER DATABASE ADD LOGFILE command allows the user to specify a number for the group.
In the practical case: the redo1_02.log member will be added to the newly created group 1 with the ALTER DATABASE ADD LOGFILE MEMBER command.
The V$LOG view now shows two members in redo log file group 1; when a redo log file group is new, its status in the V$LOG view is UNUSED:
select group#,
       thread#,
       sequence#,
       bytes,
       members,
       archived,
       status,
       first_change#,
       to_char(first_time, 'dd/mm/yy hh:mi:ss') as first_time
from v$log;
Group 2 will be recreated by adding and removing members (log switches are made):
The V$LOGHIST view, based on the information in the control file, provides a history of the redo log switches.
The log sequence number is recorded in the V$LOGHIST view, which also gives the SCN (system change number) starting a sequence (FIRST_CHANGE#) and the SCN corresponding to the log switch (SWITCH_CHANGE#).
For example, log sequence 1068 started on 14/01/2005 at 01:50:01: its first SCN is number 312770 (FIRST_CHANGE#) and its last is SCN number 312820, since a log switch was made at SCN 312820 (SWITCH_CHANGE#). Log sequence 1068 therefore contains 50 redo records.
The MAXLOGHISTORY parameter, set when creating the database, governs the maximum number of log switch records retained in the control files. To change this setting, either the control file must be recreated or the database must be rebuilt.
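The log history retention recorded in the control file can be read the same way; a query of this form presumably produced the output below:

```sql
-- Number of log history entries the control file can retain (MAXLOGHISTORY)
SELECT records_total
FROM   v$controlfile_record_section
WHERE  type = 'LOG HISTORY';
```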
records_total
-------------
1815
In this practical case, 1815 log switches are recorded in the V$LOGHIST history.
The online redo log files play the important role of recording the data transactions, and provide the recovery mechanism that ensures data integrity.
The online redo log files provide the means to redo transactions in the event of a database failure. When a transaction comes in, it is written into the redo log buffer (please refer to chapter 1 for the architecture) and then flushed to the online redo log files. When a media failure occurs, the redo log files provide the recovery mechanism to recover the transaction data. This includes not-yet-committed transactions, undo segment information, and schema and object management statements.
A transaction is not logged in the redo log files when the statement includes the NOLOGGING clause or when a direct-load insert is used.
Redo log files are managed in groups, and at least two online redo log groups are required.
A set of identical copies of online redo log files is called an online redo log file group.
The LGWR background process concurrently writes the same information to all online redo log
files in a group.
The Oracle server needs a minimum of two online redo log file groups for the normal operation of
a database.
MAXLOGFILES: the parameter of the CREATE DATABASE command that specifies the absolute maximum number of online redo log file groups.
The maximum and default values for MAXLOGFILES depend on the operating system.
MAXLOGMEMBERS: the parameter of the CREATE DATABASE command that determines the maximum number of members per redo log file group.
You need to know how online redo log files work in order to use them to ensure data availability.
The Oracle server sequentially records all changes made to the database in the Redo Log Buffer. The redo entries are written from the Redo Log Buffer to the current online redo log file group by the LGWR process. LGWR writes in the following situations:
Log Switches
LGWR writes to the online redo log files sequentially. When the current online redo log file group is filled,
LGWR begins writing to the next group. This is called a log switch.
Checkpoints
During a checkpoint:
DBWn writes a number of dirty database buffers, which are covered by the log that is being
checkpointed, to the data files.
The checkpoint background process CKPT updates the control file to reflect that it has completed
a checkpoint successfully. If the checkpoint is caused by a log switch, CKPT also updates the headers of
the data files.
FAST_START_MTTR_TARGET = 600
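The parameter above bounds instance recovery time and thereby influences checkpointing; a sketch of setting and monitoring it (the 600-second target is taken from the line above):

```sql
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 600;
-- Estimated mean time to recover, in seconds
SELECT estimated_mttr FROM v$instance_recovery;
```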
There are a couple of dynamic performance views from which you can retrieve information about the online redo log files, e.g.:
V$LOG
V$LOGFILE
Oracle uses the redo files to be sure that any changes made by a user will not be lost if there is a system failure. Redo files are essential for the recovery process. When an instance stops abnormally, it is possible that some information in the redo files has not been written to the data files. For this reason, Oracle organizes the redo files into redo log groups.
Each group has at least one redo file. A database must have at least two distinct groups of redo files, each containing at least one member. If there were only one redo file, Oracle would overwrite it and all transactions could be lost. Each database instance has its own redo file groups. These groups, multiplexed or not, are called the instance's redo thread. In typical configurations, only one instance accesses the Oracle database, so only one thread is present. In a RAC environment, two or more instances simultaneously access a single database and each instance has its own redo thread.
Redo files are filled with redo records. A redo record, also called a redo entry, is composed of a group of change vectors, each of which describes a change made to a single block of the database. For example, if you change the value of an employee's salary in a table, this generates a redo record containing change vectors describing the changes to the data segment block of the table, the undo segment data block, and the transaction table of the undo segment.
The redo entries record data that can be used to reconstruct all changes made to the database, undo segments included. The redo file therefore also protects the undo data. When recovering the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.
The redo records are placed in a circular fashion in the redo log buffer in the SGA. They are written to the current redo file by the LGWR process. When a transaction is committed, LGWR writes the transaction's redo records from the redo log buffer of the SGA to a redo file and assigns an SCN to identify the redo records of each committed transaction. Only when all redo records associated with the transaction are safely written to disk in the online redo files is the user process notified that the transaction has been committed.
Redo records can also be written to a redo file before the corresponding transaction is committed. If the redo log buffer is full or another transaction commits, LGWR flushes all entries in the redo log buffer to the redo file, even though some redo records may not yet be committed. If necessary, Oracle can reverse these changes.
Figure 1
If archiving is disabled (the database is in NOARCHIVELOG mode), a full redo file is available after the changes recorded in it have been written to the data files.
If archiving is enabled (the database is in ARCHIVELOG mode), a full redo file is available to LGWR after the changes recorded in it have been written to the data files and the file has been archived.
Active (current) and inactive redo files
Oracle uses one redo file at a time to store redo records from the redo log buffer. The redo file that LGWR is currently writing to is called the current redo file.
Redo files needed for database recovery are called active redo files. Redo files that are no longer needed for recovery are called inactive redo files.
If you have activated archiving (ARCHIVELOG mode), the database cannot reuse or overwrite an online redo file until one of the ARCn processes has archived its contents. If archiving is disabled (NOARCHIVELOG mode), when the last redo file is full, LGWR continues by overwriting the first available inactive file.
Log switches and log sequence numbers
A log switch is the point where the database stops writing to one of the online redo files and starts writing to another. Normally, a log switch occurs when the current redo file is completely filled and writing must continue in the next redo file. However, you can configure log switches to occur at regular intervals, regardless of whether the current redo file is completely filled. Log switches can also be forced manually.
Oracle assigns each redo file a new log sequence number each time a log switch happens and LGWR begins writing to it. When Oracle archives redo files, the archived file keeps its log sequence number. A recycled redo file receives the next available log sequence number.
Each online or archived redo file is uniquely identified by its log sequence number.
During crash, instance, or media recovery, the database applies the redo files strictly in ascending order of log sequence number, using the required archived and online redo files.
The views V$THREAD, V$LOG, V$LOGFILE and V$LOG_HISTORY provide information about the redo files.
V$THREAD gives information about the redo thread currently in use.
THREAD#              NUMBER
STATUS               VARCHAR2(6)
ENABLED              VARCHAR2(8)
GROUPS               NUMBER
INSTANCE             VARCHAR2(80)
OPEN_TIME            DATE
CURRENT_GROUP#       NUMBER
SEQUENCE#            NUMBER
CHECKPOINT_CHANGE#   NUMBER
CHECKPOINT_TIME      DATE
ENABLE_CHANGE#       NUMBER
ENABLE_TIME          DATE
DISABLE_CHANGE#      NUMBER
DISABLE_TIME         DATE
LAST_REDO_SEQUENCE#  NUMBER
LAST_REDO_BLOCK      NUMBER
LAST_REDO_CHANGE#    NUMBER
LAST_REDO_TIME       DATE
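A minimal query against V$THREAD using the columns listed above:

```sql
SELECT thread#, status, groups, current_group#, sequence#
FROM   v$thread;
```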
The V$LOG view provides information by reading the control file rather than the data dictionary.
GROUP#         NUMBER
THREAD#        NUMBER
SEQUENCE#      NUMBER
BYTES          NUMBER
MEMBERS        NUMBER
ARCHIVED       VARCHAR2(3)
STATUS         VARCHAR2(16)
FIRST_CHANGE#  NUMBER
FIRST_TIME     DATE
The V$LOGFILE view describes the individual members:
GROUP#                 NUMBER
STATUS                 VARCHAR2(7)
TYPE                   VARCHAR2(7)
MEMBER                 VARCHAR2(513)
IS_RECOVERY_DEST_FILE  VARCHAR2(3)
STATUS takes the value INVALID if the file is inaccessible, STALE if the file is incomplete, DELETED if the file is no longer used, and empty if the file is in use.
Starting with 10g, V$LOGFILE has a new column: IS_RECOVERY_DEST_FILE. This column also appears in the views V$CONTROLFILE, V$ARCHIVED_LOG, V$DATAFILE_COPY, V$DATAFILE and V$BACKUP_PIECE; it is set to YES if the file was created in the flash recovery area.
To create a new redo file group or member, you must have the ALTER DATABASE system privilege. The database can have a maximum of MAXLOGFILES groups.
Creating redo groups
To create a new group of redo files, use the ALTER DATABASE statement with the ADD LOGFILE clause.
For example:
ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;
You must specify the full path and name for the new members; otherwise they will be created in the default directory or in the current directory, depending on the OS.
You can specify the number that identifies the group using the GROUP clause:
Using group numbers facilitates the administration of redo file groups. The group number must be between 1 and MAXLOGFILES. Do not skip group numbers (e.g. 10, 20, 30), otherwise space in the control files will be consumed unnecessarily.
In some cases, it is not necessary to create a completely new group of redo files. The group may already exist because one or more of its members were removed (e.g. following a disk failure). In this case, you can add new members to the existing group.
To create a new redo file member in an existing group, use the ALTER DATABASE statement with the ADD LOGFILE MEMBER clause. In the following example, we add a new member to redo group number 2:
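The example itself does not appear in the text; a plausible sketch, with a hypothetical file path chosen for illustration, would be:

```sql
-- '/oracle/dbs/log2b.rdo' is a hypothetical path for illustration
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;
```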
Note that the file name must be specified, but its size is not required. The size of the new member is determined by the size of the existing members.
When the group number is not known, you can alternatively identify the target group by specifying all the other group members in the TO clause, as shown in the following example:
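The example is not reproduced in the text; a sketch using hypothetical paths in the style of the earlier examples:

```sql
-- Identify group 2 by listing its existing member instead of its number
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo'
TO ('/oracle/dbs/log2a.rdo');
```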
You can use OS commands to move the redo files, then use ALTER DATABASE to make their new names (locations) known to the database. This procedure is necessary, for example, if a disk currently used for some redo files is to be removed, or if data files and some redo files are on the same disk and should be separated to minimize contention.
To rename redo file members, you must have the ALTER DATABASE system privilege. In addition, you must have the operating system privileges to copy files to the desired directory, as well as privileges to open and back up the database.
Before moving the redo files, or making any other change to the database structure, back up the database completely. As a precaution, after renaming or moving a set of redo files, immediately back up the control file.
SHUTDOWN IMMEDIATE
The HOST command can be used to run OS commands without leaving SQL*Plus. Some operating systems use a character instead of HOST; for example, on UNIX the exclamation point (!) is used.
The following example uses OS commands (UNIX) to move members of the redo files to a new location:
CONNECT / AS SYSDBA
STARTUP MOUNT
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
The redo file change takes effect when the database is opened.
In some cases, you must remove an entire group, for example to reduce the number of groups after testing. In other cases, you must remove one or more members, for example if some members are on a failed disk.
Deleting a Group
To delete a group of redo files, you must have the ALTER DATABASE system privilege.
Before deleting a group of redo files, you need to consider the following restrictions and precautions:
An instance requires at least two groups of redo files, regardless of the number of members in each group. (A group contains one or more members.)
You can delete a redo file group only if it is inactive. If you need to remove the current group, first force a log switch.
Make sure the redo file group is archived (if archiving is enabled) before deleting it.
Delete a group of redo files with the ALTER DATABASE command using the DROP LOGFILE clause.
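For example, assuming group 3 is inactive and already archived:

```sql
ALTER DATABASE DROP LOGFILE GROUP 3;
```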
When a group is deleted from the database and OMF is not used, the OS files are not deleted from disk. You must use OS commands to remove them physically.
To remove a member from a redo file group, you must have the ALTER DATABASE system privilege. To remove an inactive member, use the ALTER DATABASE statement with the DROP LOGFILE MEMBER clause.
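For example, with a hypothetical member path:

```sql
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
```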
Deleting a member
When a log member is deleted, the OS file is not deleted from disk.
A log switch occurs when LGWR stops writing to one log group and begins writing to another. By default, a log switch occurs automatically when the current redo file group is full.
You can force a log switch so that the current group becomes inactive and available for maintenance on the redo files. For example, to delete the currently active group, you must first force a log switch, since the group cannot be removed while it is active. You should also force a log switch if the currently active group needs to be archived at a specific time before its members are completely filled. This is useful in configurations where the redo files are quite large and take a long time to fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM command with
SWITCH LOGFILE clause.
You can configure the database to use checksums so that the blocks of the redo files are verified. If the initialization parameter DB_BLOCK_CHECKSUM is set to TRUE, Oracle computes a checksum for each Oracle block when it is written to disk, including log blocks. The checksum is stored in the block header.
Oracle uses the checksum to detect corrupt blocks in redo files. The database verifies a log block when the block is read from the archived log during recovery and when the block is written to the archived log. An error is detected and written to the alert file if corruption is found.
If corruption is detected in a log block during archiving, the system attempts to read the block from another member of the group. If the block is corrupted in all members of the log group, archiving cannot continue.
The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can be changed dynamically using ALTER SYSTEM.
A redo file can be cleared without stopping the database, for example if the redo file is corrupt.
Clearing a group
This command can be used if you cannot delete the redo files. There are two situations:
If the corrupt redo file has not yet been archived, use the UNARCHIVED keyword.
This command clears the corrupt redo files and prevents their archiving.
If the cleared redo file is needed for recovery from a backup, that backup can no longer be used for recovery. If you clear an unarchived redo file, you should take another full backup of the database.
To clear an unarchived redo file that is needed to bring an offline tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER DATABASE CLEAR LOGFILE command.
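Sketches of the two forms of the command, assuming group 2 is the corrupt group:

```sql
-- Clear a corrupt, unarchived group
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
-- Same, when the redo is needed to bring an offline tablespace online
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2 UNRECOVERABLE DATAFILE;
```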
If you clear a redo file needed to bring an offline tablespace online, you will be unable to bring the tablespace online again. You are then obliged to drop the tablespace or perform an incomplete recovery. Note that a tablespace taken offline normally does not need recovery.
One thread
Two threads
Four threads
No thread
T2 Redo files are filled with records of
Undo
Redo
Log vectors
Change vectors
T3 The redo records are written to the redo file by the process
DBWR
CKPT
LGWR
RDWR
no log file.
a log file.
Unconditional
an RBA
SCN
the RDBA
T10 The following views provide information about the log files
V$LOGFILE
V$LOGFILES
V$THREADS
V$THREAD
1 and MAXLOGMEMBER
1 and 10
1 and MAXLOGFILES
1 and MAXLOGMEMBERS
T13 To delete a group of logs, one must have the system privilege
ALTER LOGFILE
ALTER SYSTEM
ALTER DATABASE
No system privilege
INACTIVE
OFFLINE
ACTIVE
Default is FALSE
Default is TRUE
T19 We can clear a log file without stopping the database with the command
T20 To clear an unarchived log that is needed to bring an offline tablespace online, use
Solutions:
T1: A
T2: B and D
T3: C
T4: A and C
T5: D
T6: B and C
T7: D
T8: A and C
T9: A
T10: A and D
T11: A and D
T12: C
T13: C
T14: A
T15: B and D
T16: C
T17: A
T18: B and D
T19: D
T20: B
We cannot resize redo log files. We must drop the redo log files and recreate them; this is the only method to resize them. A database requires at least two groups of redo log files, regardless of the number of members. We cannot drop a redo log file whose status is CURRENT or ACTIVE; we must wait for the status to change to INACTIVE, and only then can we drop it.
When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file. In my case, I have four redo log files of 50 MB each, and I will resize them to 100 MB. Below are the steps to resize the redo log files.
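The steps can be sketched as follows; the group numbers and paths are illustrative, and the switch/checkpoint step is repeated until each old group shows INACTIVE in V$LOG:

```sql
-- 1. Add new groups at the desired size (100 MB)
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05.log') SIZE 100M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/redo06.log') SIZE 100M;
-- 2. Switch and checkpoint until the old groups become INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- 3. Drop each old 50 MB group once it is INACTIVE (repeat per group)
ALTER DATABASE DROP LOGFILE GROUP 1;
```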
Here, we cannot drop the current and active redo log files.
Redo entries record data that you can use to reconstruct all changes made to the database,
including the rollback segments. Therefore, the online redo log also protects rollback data.
When you recover the database using redo data, Oracle reads the change vectors in the redo
records and applies the changes to the relevant blocks.
How Oracle Writes to the Online Redo Log.
The online redo log of a database consists of two or more online redo log files.
Oracle requires a minimum of two files to guarantee that one is always available for writing while
the other is being archived (if in ARCHIVELOG mode).
LGWR writes to online redo log files in a circular fashion. When the current online redo log file
fills, LGWR begins writing to the next available online redo log file.
When the last available online redo log file is filled, LGWR returns to the first online redo log file
and writes to it, starting the cycle again.
The above figure illustrates the circular writing of the online redo log file. The numbers next to
each line indicate the sequence in which LGWR writes to each online redo log file.
NOARCHIVELOG mode : If archiving is disabled a filled online redo log file is available once the
changes recorded in it have been written to the datafiles.
ARCHIVELOG mode : If archiving is enabled a filled online redo log file is available to LGWR
once the changes recorded in it have been written to the datafiles and once the file has been
archived.
· NOARCHIVELOG mode:
o The Redo Log Files are overwritten each time a log switch occurs, but the files are never
archived.
o When a Redo Log File (group) becomes inactive it is available for reuse by LGWR.
o This mode protects a database from instance failure, but NOT from media failure.
o In the event of media failure, database recovery can only be accomplished to the last full
backup of the database!
o You cannot perform tablespace backups in NOARCHIVELOG mode.
· ARCHIVELOG mode:
o Full On-line Redo Log Files are written by the ARCn process to specified archive locations,
either disk or tape – you can create more than one archiver process to improve
performance.
o A database control file tracks which Redo Log File groups are available for reuse (those
that have been archived).
o The DBA can use the last full backup and the Archived Log Files to recover the database.
o A Redo Log File that has not been archived cannot be reused until the file is archived – if
the database stalls waiting for archiving to complete, add an additional Redo Log Group.
This figure shows the archiving of log files by the ARCn process as log files are reused by LGWR.
While archiving can be set to either manual or automatic, the preferred setting for normal production
database operation is automatic. In manual archiving, the DBA must manually archive each On-line
Redo Log File.
1. Connect to the database with administrator privileges (AS SYSDBA) – shutdown the database
instance normally with the command:
SHUTDOWN
Note: You cannot change from ARCHIVELOG to NOARCHIVELOG if any datafiles require media
recovery.
2. Backup the database – it is always recommended to backup a database before making any major
changes.
3. Edit the init.ora file to add parameters to specify the destinations for archive log files (the next
section provides directions on how to specify archive destinations).
4. Startup a new instance in MOUNT stage – do not open the database – archive status can only be
modified in MOUNT stage:
5. Issue the command to turn on archiving and then open the database:
6. Shut down the database:
SHUTDOWN IMMEDIATE
7. Backup the database – necessary again because the archive status has changed. The previous
backup was taken in NOARCHIVELOG mode and is no longer usable.
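Put together, the commands for these steps might look like the following sketch (the statement for step 5 is the ALTER DATABASE ARCHIVELOG command covered in the "Changing the Database Archiving Mode" discussion; backups themselves are taken outside SQL*Plus):

```sql
-- Step 1: connect AS SYSDBA and shut down cleanly
CONNECT / AS SYSDBA
SHUTDOWN
-- Step 2: take a backup of the database (outside SQL*Plus)
-- Step 3: edit init.ora to add the LOG_ARCHIVE_DEST_n parameters
-- Step 4: mount without opening
STARTUP MOUNT
-- Step 5: enable archiving, then open the database
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Step 6: shut down, Step 7: take a fresh backup
SHUTDOWN IMMEDIATE
```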
Archive Redo Log files can be written to a single disk location or they can be multiplexed, i.e. written to
multiple disk locations.
· Archiving to a single destination was once accomplished by specifying
the LOG_ARCHIVE_DEST initialization parameter in the init.ora file – it has since been
deprecated in favor of the LOG_ARCHIVE_DEST_n parameter (see next bullet).
· Multiplexing can be specified for up to 31 locations by using
the LOG_ARCHIVE_DEST_n parameters (where n is a number from 1 to 31). This can also be
used to duplex the files by specifying a value for
the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 parameters.
· When multiplexing, you can specify remote disk drives if they are available to the server.
These examples show setting the init.ora parameters for the possible archive destination specifications:
3. Example of Multiplexing Three Archive Log Destinations (for those DBAs that are very
risk averse):
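The parameter settings for this third example are not shown here, but were presumably along the lines of the settings that appear later in the "Specifying Archive Destinations" section:

```sql
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'
```

With three local destinations, every filled log is written to all three disks, so the loss of any one archive disk does not compromise recoverability.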
Specify the naming pattern to use for naming Archive Redo Log Files with
the LOG_ARCHIVE_FORMAT command in the init.ora file.
LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc
This example shows a sequence of Archive Redo Log files generated using
the LOG_ARCHIVE_FORMAT to specify naming the Redo Log Files – all of the logs are for thread 1 with
log sequence numbers of 100, 101, and 102 with reset logs ID 509210197 indicating the files are from
the same database.
/disk1/archive/arch_1_100_509210197.arc,
/disk1/archive/arch_1_101_509210197.arc,
/disk1/archive/arch_1_102_509210197.arc
/disk2/archive/arch_1_100_509210197.arc,
/disk2/archive/arch_1_101_509210197.arc,
/disk2/archive/arch_1_102_509210197.arc
/disk3/archive/arch_1_100_509210197.arc,
/disk3/archive/arch_1_101_509210197.arc,
/disk3/archive/arch_1_102_509210197.arc
Information about the status of archiving can be obtained from the V$INSTANCE dynamic
performance view. The following shows the status for the DBORCL database.
ARCHIVE
-----------
STARTED
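The output above was presumably produced by a query along these lines (the column is named ARCHIVER in V$INSTANCE, with possible values such as STARTED, STOPPED, and FAILED):

```sql
SELECT archiver FROM v$instance;
```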
Several dynamic performance views contain useful information about archived redo logs, as summarized
in the following table.
Dynamic Performance
View Description
V$DATABASE Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG
mode and whether MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG Displays historical archived log information from the control
file. If you use a recovery catalog, the RC_ARCHIVED_LOG view
contains similar information.
V$ARCHIVE_DEST Describes the current instance, all archive destinations, and the
current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES Displays information about the state of the various archive
processes for an instance.
V$BACKUP_REDOLOG Contains information about any backups of archived logs. If you
use a recovery catalog, the RC_BACKUP_REDOLOG contains similar
information.
V$LOG Displays all redo log groups for the database and indicates which
need to be archived.
V$LOG_HISTORY Contains log history information such as which logs have been
archived and the SCN range for each archived log.
A final caution about automatic archiving – Archive Redo Log files can consume a large quantity of
space. As you dispose of old copies of database backups, dispose of the associated Archive Redo Log
files.
ARCHIVELOG MODE:
Advantages:
1. You can perform hot backups (backups when the database is online).
2. The archive logs combined with the last full backup (offline or online), or an older backup, can completely
recover the database without losing any data, because all changes made in the database are stored in the log files.
Disadvantages:
1. It requires additional disk space to store archived log files. However, the agent offers the option to purge the
logs after they have been backed up, giving you the opportunity to free disk space if you need it.
NOARCHIVELOG MODE:
Advantages:
1. It requires no additional disk space to store archived log files.
Disadvantages:
1. If you must recover a database, you can only restore the last full offline backup. As a result, any changes made
to the database after the last full offline backup are lost.
2. Database downtime is significant because you cannot back up the database online. This limitation becomes a
very serious consideration for large databases.
Important!!!
NOARCHIVELOG mode does not support Oracle database point-in-time recovery (PITR) if there is a disaster.
If the Oracle database is to remain in NOARCHIVELOG mode, you must take full backups of the database
files while the database is offline, and the database can be restored only to the time of the last full offline
backup (all changes after that backup are lost).
When backups are scheduled using the RMAN utility, ensure that the database runs in ARCHIVELOG mode.
Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known
collectively as the archived redo log. The process of turning redo log files into archived redo log files is
called archiving. This process is only possible if the database is running in ARCHIVELOG mode. You
can choose automatic or manual archiving.
An archived redo log file is a copy of one of the filled members of a redo log group. It includes the redo
entries and the unique log sequence number of the identical member of the redo log group. For example,
if you are multiplexing your redo log, and if group 1 contains identical member files a_log1 and b_log1,
then the archiver process (ARCn) will archive one of these member files. Should a_log1 become
corrupted, ARCn can still archive the identical b_log1. The archived redo log contains a copy of
every group created since you enabled archiving.
When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot reuse and
hence overwrite a redo log group until it has been archived. The background process ARCn automates
archiving operations when automatic archiving is enabled. The database starts multiple archiver
processes as needed to ensure that the archiving of filled redo logs does not fall behind.
You can use archived redo logs to:
· Recover a database
· Get information about the history of a database using the LogMiner utility
This section describes the issues you must consider when choosing to run your database
in NOARCHIVELOG or ARCHIVELOG mode, and contains these topics:
The choice of whether to enable the archiving of filled groups of redo log files depends on the availability
and reliability requirements of the application running on the database. If you cannot afford to lose any
data in your database in the event of a disk failure, use ARCHIVELOG mode. The archiving of filled redo
log files can require you to perform extra administrative operations.
When you run your database in NOARCHIVELOG mode, you disable the archiving of the redo log. The
database control file indicates that filled groups are not required to be archived. Therefore, when a filled
group becomes inactive after a log switch, the group is available for reuse by LGWR.
NOARCHIVELOG mode protects a database from instance failure but not from media failure. Only the
most recent changes made to the database, which are stored in the online redo log groups, are available
for instance recovery. If a media failure occurs while the database is in NOARCHIVELOG mode, you can
only restore the database to the point of the most recent full database backup. You cannot recover
transactions subsequent to that backup.
In NOARCHIVELOG mode you cannot perform online tablespace backups, nor can you use online
tablespace backups taken earlier while the database was in ARCHIVELOG mode. To restore a database
operating in NOARCHIVELOG mode, you can use only whole database backups taken while the
database is closed. Therefore, if you decide to operate a database in NOARCHIVELOG mode, take whole
database backups at regular, frequent intervals.
When you run a database in ARCHIVELOG mode, you enable the archiving of the redo log. The
database control file indicates that a group of filled redo log files cannot be reused by LGWR until the
group is archived. A filled group becomes available for archiving immediately after a redo log switch
occurs.
· A database backup, together with online and archived redo log files, guarantees that you can
recover all committed transactions in the event of an operating system or disk failure.
· If you keep an archived log, you can use a backup taken while the database is open and in
normal system use.
· You can keep a standby database current with its original database by continuously applying the
original archived redo logs to the standby.
You can configure an instance to archive filled redo log files automatically, or you can archive manually.
For convenience and efficiency, automatic archiving is usually best. Figure 11-1 illustrates how the
archiver process (ARC0 in this illustration) writes filled redo log files to the database archived redo log.
If all databases in a distributed database operate in ARCHIVELOG mode, you can perform coordinated
distributed database recovery. However, if any database in a distributed database is
in NOARCHIVELOG mode, recovery of a global distributed database (to make all databases consistent)
is limited by the last full backup of any database operating in NOARCHIVELOG mode.
Controlling Archiving
This section describes how to set the archiving mode of the database and how to control the archiving
process. The following topics are discussed:
Setting the Initial Database Archiving Mode
You set the initial archiving mode as part of database creation in the CREATE DATABASE statement.
Usually, you can use the default of NOARCHIVELOG mode at database creation because there is no
need to archive the redo information generated by that process. After creating the database, decide
whether to change the initial archiving mode.
If you specify ARCHIVELOG mode, you must have initialization parameters set that specify the
destinations for the archived redo log files (see "Specifying Archive Destinations").
To change the archiving mode of the database, use the ALTER DATABASE statement with
the ARCHIVELOG or NOARCHIVELOG clause. To change the archiving mode, you must be connected
to the database with administrator privileges (AS SYSDBA).
The following steps switch the database archiving mode from NOARCHIVELOG to ARCHIVELOG:
1. Shut down the database instance:
SHUTDOWN
An open database must first be closed and any associated instances shut down before you can
switch the database archiving mode. You cannot change the mode
from ARCHIVELOG to NOARCHIVELOG if any datafiles need media recovery.
2. Back up the database and edit the initialization parameter file.
Before making any major change to a database, always back up the database to protect against
any problems. This will be your final backup of the database in NOARCHIVELOG mode and can
be used if something goes wrong during the change to ARCHIVELOG mode. Edit the
initialization parameter file to include the initialization parameters that specify the destinations
for the archived redo log files (see "Specifying Archive Destinations").
3. Start a new instance and mount, but do not open the database.
STARTUP MOUNT
To enable or disable archiving, the database must be mounted but not open.
4. Change the database archiving mode. Then open the database for normal operations.
5. Shut down the database:
SHUTDOWN IMMEDIATE
6. Back up the database.
Changing the database archiving mode updates the control file. After changing the database
archiving mode, you must back up all of your database files and control file. Any previous backup
is no longer usable because it was taken in NOARCHIVELOG mode.
To operate your database in manual archiving mode, follow the procedure shown in "Changing the
Database Archiving Mode". However, when you specify the new mode in step 4, use the following
statement:
When you operate your database in manual ARCHIVELOG mode, you must archive inactive groups of
filled redo log files or your database operation can be temporarily suspended. To archive a filled redo log
group manually, connect with administrator privileges. Ensure that the database is mounted but not
open. Use the ALTER SYSTEM statement with the ARCHIVE LOG clause to manually archive filled redo
log files. The following statement archives all unarchived log files:
When you use manual archiving mode, you cannot specify any standby databases in the archiving
destinations.
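The two statements referred to above are not shown in the text; they are presumably the standard Oracle ones:

```sql
-- The mode-change statement for manual archiving mode:
ALTER DATABASE ARCHIVELOG MANUAL;
-- Manually archive all filled, unarchived log files:
ALTER SYSTEM ARCHIVE LOG ALL;
```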
Even when automatic archiving is enabled, you can use manual archiving for such actions as rearchiving
an inactive group of filled redo log members to another location. In this case, it is possible for the
instance to reuse the redo log group before you have finished manually archiving, and thereby overwrite
the files. If this happens, the database writes an error message to the alert log.
However, to avoid any runtime overhead of invoking additional ARCn processes, you can set
the LOG_ARCHIVE_MAX_PROCESSES initialization parameter to specify up to ten ARCn processes to
be started at instance startup. The LOG_ARCHIVE_MAX_PROCESSES parameter is dynamic, and can
be changed using the ALTER SYSTEM statement. The database must be mounted but not open. The
following statement increases (or decreases) the number of ARCn processes currently running:
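The statement is presumably of this form (the value 4 is only an example; choose a value appropriate to your archiving load):

```sql
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;
```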
Before you can archive redo logs, you must determine the destination to which you will archive and
familiarize yourself with the various destination states. The dynamic performance (V$) views, listed
in "Viewing Information About the Archived Redo Log", provide all needed archive information.
You can choose whether to archive redo logs to a single destination or multiplex them. If you want to
archive only to a single destination, you specify that destination in
the LOG_ARCHIVE_DEST initialization parameter. If you want to multiplex the archived logs, you can
choose whether to archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or to
archive only to a primary and secondary destination (using LOG_ARCHIVE_DEST and
LOG_ARCHIVE_DUPLEX_DEST). The following table summarizes the multiplexing alternatives, which
are further described in the sections that follow.
LOG_ARCHIVE_DUPLEX_DEST    LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc'
Use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 10) to specify from one to
ten different destinations for archival. Each numerically suffixed parameter uniquely identifies an
individual destination.
You specify the location for LOG_ARCHIVE_DEST_n using the keywords explained in the following
table:
If you use the LOCATION keyword, specify a valid path name for your operating system. If you
specify SERVICE, the database translates the net service name through the tnsnames.ora file to a
connect descriptor. The descriptor contains the information necessary for connecting to the remote
database. The service name must have an associated database SID, so that the database correctly
updates the log history of the control file for the standby database.
Perform the following steps to set the destination for archived redo logs using
the LOG_ARCHIVE_DEST_n initialization parameter:
SHUTDOWN
2. Set the LOG_ARCHIVE_DEST_n initialization parameter to specify from one to ten archiving
locations. The LOCATION keyword specifies an operating system specific path name. For
example, enter:
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'
If you are archiving to a standby database, use the SERVICE keyword to specify a valid net
service name from the tnsnames.ora file. For example, enter:
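The elided example is presumably of this form, where standby1 is a hypothetical net service name defined in tnsnames.ora:

```sql
LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1'
```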
Note:
If the COMPATIBLE initialization parameter is set to 10.0.0 or higher, the database requires
the LOG_ARCHIVE_FORMAT parameter. The default for this parameter is operating system
dependent. For example, this is the default format for UNIX:
LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
The incarnation of a database changes when you open it with the RESETLOGS option.
Specifying %r causes the database to capture the resetlogs ID in the archived redo log file
name.
LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc
This setting will generate archived logs as follows for thread 1; log sequence numbers 100, 101,
and 102; resetlogs ID 509210197. The identical resetlogs ID indicates that the files are all from
the same database incarnation:
/disk1/archive/arch_1_100_509210197.arc,
/disk1/archive/arch_1_101_509210197.arc,
/disk1/archive/arch_1_102_509210197.arc
/disk2/archive/arch_1_100_509210197.arc,
/disk2/archive/arch_1_101_509210197.arc,
/disk2/archive/arch_1_102_509210197.arc
/disk3/archive/arch_1_100_509210197.arc,
/disk3/archive/arch_1_101_509210197.arc,
/disk3/archive/arch_1_102_509210197.arc
To specify a maximum of two locations, use the LOG_ARCHIVE_DEST parameter to specify a primary
archive destination and the LOG_ARCHIVE_DUPLEX_DEST to specify an optional secondary archive
destination. All locations must be local. Whenever the database archives a redo log, it archives it to every
destination specified by either set of parameters.
SHUTDOWN
LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
Each archive destination has the following variable characteristics that determine its status:
Valid/Invalid: indicates whether the disk location or service name information is specified and
valid
Enabled/Disabled: indicates the availability state of the location and whether the database can
use the destination
Several combinations of these characteristics are possible. To obtain the current status and other
information about each destination for an instance, query the V$ARCHIVE_DEST view.
The characteristics determining a location's status that appear in the view are shown in Table 11-1. Note
that for a destination to be used, its characteristics must be valid, enabled, and active.
Status     Valid  Enabled  Active  Meaning
VALID      True   True     True    The user has properly initialized the destination, which is
                                   available for archiving.
INACTIVE   False  n/a      n/a     The user has not provided or has deleted the destination
                                   information.
ERROR      True   True     False   An error occurred creating or writing to the destination file;
                                   refer to error data.
DEFERRED   True   False    True    The user manually and temporarily disabled the destination.
DISABLED   True   False    False   The user manually and temporarily disabled the destination
                                   following an error; refer to error data.
BAD PARAM  n/a    n/a      n/a     A parameter error occurred; refer to error data.
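To see these characteristics for each destination, a query along these lines can be used (DEST_NAME, STATUS, and DESTINATION are columns of V$ARCHIVE_DEST):

```sql
SELECT dest_name, status, destination FROM v$archive_dest;
```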
The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter lets
you control the availability state of the specified destination (n). The state can be ENABLE (the default,
meaning the database can use the destination) or DEFER (the destination is temporarily disabled). An
ALTERNATE destination's availability state is DEFER, unless there is a failure of its parent destination, in
which case its state becomes ENABLE.
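For example, to temporarily disable destination 2 (a sketch; the destination number is illustrative):

```sql
-- In the initialization parameter file:
LOG_ARCHIVE_DEST_STATE_2 = DEFER
-- Or dynamically, while the instance is running:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
```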
The two modes of transmitting archived logs to their destination are normal archiving
transmission and standby transmission mode. Normal transmission involves transmitting files to a
local disk. Standby transmission involves transmitting files through a network to either a local or remote
standby database.
In normal transmission mode, the archiving destination is another disk drive of the database server. In
this configuration archiving does not contend with other files required by the instance and can complete
more quickly. Specify the destination with either the LOG_ARCHIVE_DEST_n or
LOG_ARCHIVE_DEST parameters.
It is good practice to move archived redo log files and corresponding database backups from the local
disk to permanent inexpensive offline storage media such as tape. A primary value of archived logs is
database recovery, so you want to ensure that these logs are safe should disaster strike your primary
database.
In standby transmission mode, the archiving destination is either a local or remote standby database.
Caution:
You can maintain a standby database on a local disk, but Oracle strongly encourages you to maximize
disaster protection by maintaining your standby database at a remote site.
If you are operating your standby database in managed recovery mode, you can keep your standby
database synchronized with your source database by automatically applying transmitted archived redo
logs.
To transmit files successfully to a standby database, either ARCn or a server process must do the
following:
Transmit the archived logs in conjunction with a remote file server (RFS) process that resides
on the remote server
Each ARCn process has a corresponding RFS for each standby destination. For example, if three
ARCn processes are archiving to two standby databases, then Oracle Database establishes six RFS
connections.
You transmit archived logs through a network to a remote location by using Oracle Net Services. Indicate
a remote archival by specifying an Oracle Net service name as an attribute of the destination. Oracle
Database then translates the service name, through the tnsnames.ora file, to a connect descriptor. The
descriptor contains the information necessary for connecting to the remote database. The service name
must have an associated database SID, so that the database correctly updates the log history of the
control file for the standby database.
The RFS process, which runs on the destination node, acts as a network server to the ARCn client.
Essentially, ARCn pushes information to RFS, which transmits it to the standby database.
The RFS process, which is required when archiving to a remote destination, is responsible for the
following tasks:
Updating the standby database control file (which Recovery Manager can then use for recovery)
Archived redo logs are integral to maintaining a standby database, which is an exact replica of a
database. You can operate your database in standby archiving mode, which automatically updates a
standby database with archived redo logs from the original database.
Sometimes archive destinations can fail, causing problems when you operate in automatic archiving
mode. Oracle Database provides procedures to help you minimize the problems associated with
destination failure. These procedures are discussed in the sections that follow:
The LOG_ARCHIVE_DEST_n parameter lets you specify whether a destination is OPTIONAL (the
default) or MANDATORY. The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter uses
all MANDATORY destinations plus some number of non-standby OPTIONAL destinations to determine
whether LGWR can overwrite the online log. The following rules apply:
Omitting the MANDATORY attribute for a destination is the same as specifying OPTIONAL.
You must have at least one local destination, which you can
declare OPTIONAL or MANDATORY.
If you DEFER a MANDATORY destination, and the database overwrites the online log without
transferring the archived log to the standby site, then you must transfer the log to the standby
manually.
If you are duplexing the archived logs, you can establish which destinations are mandatory or optional by
using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. The following rules
apply:
In this scenario, you archive to three local destinations, each of which you declare
as OPTIONAL. Table 11-2 illustrates the possible values for
LOG_ARCHIVE_MIN_SUCCEED_DEST=n in this case.
Value Meaning
1 The database can reuse log files only if at least one of the OPTIONAL destinations
succeeds.
2 The database can reuse log files only if at least two of the OPTIONAL destinations
succeed.
3 The database can reuse log files only if all of the OPTIONAL destinations succeed.
This scenario shows that even though you do not explicitly set any of your destinations
to MANDATORY using the LOG_ARCHIVE_DEST_n parameter, the database must successfully archive
to one or more of these locations when LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1, 2, or 3.
Value Meaning
1 The database ignores the value and uses the number of MANDATORY destinations (in
this example, 2).
2 The database can reuse log files even if no OPTIONAL destination succeeds.
3 The database can reuse logs only if at least one OPTIONAL destination succeeds.
4 The database can reuse logs only if both OPTIONAL destinations succeed.
This case shows that the database must archive to the destinations you specify as MANDATORY,
regardless of whether you set LOG_ARCHIVE_MIN_SUCCEED_DEST to archive to a smaller number of
destinations.
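A parameter-file sketch combining these attributes might look like the following; the paths and the standby service name are illustrative:

```sql
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive MANDATORY'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive OPTIONAL'
LOG_ARCHIVE_DEST_3 = 'SERVICE = standby1 OPTIONAL'
-- The MANDATORY destination plus at least one other destination
-- must succeed before LGWR can reuse an online log:
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
```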
Use the REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter to specify whether and when
ARCn should attempt to rearchive to a failed destination following an error. REOPEN applies to all errors,
not just OPEN errors.
REOPEN=n sets the minimum number of seconds before ARCn should try to reopen a failed destination.
The default value for n is 300 seconds. A value of 0 is the same as turning off the REOPEN attribute;
ARCn will not attempt to archive after a failure. If you do not specify the REOPEN keyword, ARCn will
never reopen a destination following an error.
You cannot use REOPEN to specify the number of attempts ARCn should make to reconnect and transfer
archived logs. The REOPEN attempt either succeeds or fails.
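For example, to retry a failed destination after 60 seconds instead of the 300-second default (the path is illustrative):

```sql
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive MANDATORY REOPEN=60'
```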
When you specify REOPEN for an OPTIONAL destination, the database can overwrite online logs if
there is an error. If you specify REOPEN for a MANDATORY destination, the database stalls the
production database when it cannot successfully archive. In this situation, consider the following options:
Change the destination by deferring the destination, specifying the destination as optional, or
changing the service.
ARCn reopens a destination only when starting an archive operation from the beginning of the
log file, never during a current operation. ARCn always retries the log copy from the beginning.
If you specified REOPEN, either with a specified time or accepting the default, ARCn checks whether
the time of the recorded error plus the REOPEN interval is less than the current time. If it is,
ARCn retries the log copy.
Background processes always write to a trace file when appropriate. (See the discussion of this topic
in "Monitoring Errors with Trace Files and the Alert Log".) In the case of the archivelog process, you can
control the output that is generated to the trace file. You do this by setting the LOG_ARCHIVE_TRACE
initialization parameter to specify a trace level. The following values can be specified:
You can combine tracing levels by specifying a value equal to the sum of the individual levels that you
would like to trace. For example, setting LOG_ARCHIVE_TRACE=12 will generate trace level 8 and
trace level 4 output. You can set different values for the primary and any standby database.
The default value for the LOG_ARCHIVE_TRACE parameter is 0. At this level, the archivelog process
generates appropriate alert and trace entries for error conditions.
You can change the value of this parameter dynamically using the ALTER SYSTEM statement. The
database must be mounted but not open. For example:
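The elided example is presumably of this form, using the combined level 12 (8 + 4) mentioned above:

```sql
ALTER SYSTEM SET LOG_ARCHIVE_TRACE = 12;
```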
Changes initiated in this manner will take effect at the start of the next archiving operation.
You can display information about the archived redo log using dynamic performance views or
the ARCHIVE LOG LIST command.
Several dynamic performance views contain useful information about archived redo logs, as summarized
in the following table.
Dynamic Performance View    Description
V$DATABASE Shows if the database is in ARCHIVELOG or NOARCHIVELOG mode
and if MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG Displays historical archived log information from the control file. If you
use a recovery catalog, the RC_ARCHIVED_LOG view contains similar
information.
V$ARCHIVE_DEST Describes the current instance, all archive destinations, and the current
value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES Displays information about the state of the various archive processes for
an instance.
V$BACKUP_REDOLOG Contains information about any backups of archived logs. If you use a
recovery catalog, the RC_BACKUP_REDOLOG contains similar
information.
V$LOG Displays all redo log groups for the database and indicates which need to
be archived.
V$LOG_HISTORY Contains log history information such as which logs have been archived
and the SCN range for each archived log.
For example, the following query displays which redo log group requires archiving:
GROUP# ARC
-------- ---
1 YES
2 NO
LOG_MODE
------------
NOARCHIVELOG
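The two result sets above were presumably produced by queries along these lines (SQL*Plus truncates the ARCHIVED column header to ARC because the column is only three characters wide):

```sql
-- Which redo log groups still need archiving:
SELECT group#, archived FROM v$log;
-- Whether the database is in ARCHIVELOG or NOARCHIVELOG mode:
SELECT log_mode FROM v$database;
```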
The SQL*Plus command ARCHIVE LOG LIST displays archiving information for the connected instance.
For example:
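The display itself is elided here; it was presumably similar to the following, where the destination path is hypothetical and the sequence numbers come from the surrounding text:

```sql
SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /disk1/archive
Oldest online log sequence     11160
Next log sequence to archive   11163
Current log sequence           11163
```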
This display tells you all the necessary information regarding the archived redo log settings for the
current instance:
The oldest filled redo log group has a sequence number of 11160.
The next filled redo log group to archive has a sequence number of 11163.
USER MANAGEMENT
Authentication methods
database level, OS level, network level
Application security
User session information using Application CONTEXT
Application Context: name-value pair that holds session info.
You can retrieve info about a user (e.g., username/terminal, username/deptid) and
restrict database and application access based on this information.
Virtual Private Database: restrict database access on the row and column levels.
VPD policy: dynamically embeds a WHERE clause into SQL statements
(2) Authorization
With Virtual Private Databases (VPDs), Oracle allows column masking to hide columns.
When you select the row, Oracle will only display NULL for the secure columns.
If you're securing at the row level and column level, it's probably easier to just implement VPDs and not the
secure views.
A VPD simply asks Oracle to add a WHERE clause to DML against an object that has a security policy on it.
A security policy is defined with the DBMS_RLS package.
The policy function normally uses an application CONTEXT (session data that determines how the WHERE
clause should be built).
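A minimal sketch of registering a policy with the DBMS_RLS package (the schema, table, and function names are illustrative assumptions):

```sql
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'HR',            -- schema owning the protected table (illustrative)
    object_name     => 'EMPLOYEES',     -- table to protect
    policy_name     => 'emp_policy',
    function_schema => 'SEC',
    policy_function => 'emp_predicate', -- function returning the WHERE-clause text
    statement_types => 'SELECT, INSERT, UPDATE, DELETE');
END;
/
```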
Audit
Authentication Methods
Application Security
Encryption
Audit
GRANT CREATE SESSION instead of CONNECT role
Grant organization-specific roles
Specify a DEFAULT tablespace (otherwise SYSTEM will be used, causing disk contention)
Note: A tablespace designated as the default permanent tablespace cannot be dropped.
Specify a PROFILE for the user
Limits can be imposed at the user session level, or for each database call.
You can define limits on CPU time, number of logical reads, number of concurrent sessions for each user,
session idle time, session elapsed connect time and the amount of private SGA space for a session.
Use AUDIT SESSION to gather information for setting limits such as CONNECT_TIME and LOGICAL_READS_PER_SESSION.
About Profiles
Create the profile, then assign it to a user and check the user's resource constraints:
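A sketch of the two steps (the profile name and limit values are illustrative; AUDIT SESSION data can guide the actual values):

```sql
-- (a) Create a profile with session-level resource limits
CREATE PROFILE accountant LIMIT
  SESSIONS_PER_USER          2
  CPU_PER_SESSION            10000    -- hundredths of a second
  LOGICAL_READS_PER_SESSION  50000
  IDLE_TIME                  30       -- minutes
  CONNECT_TIME               480;     -- minutes

-- (b) Assign the profile to a user and check the constraints
ALTER USER scott PROFILE accountant;

SELECT resource_name, limit
FROM   dba_profiles
WHERE  profile = 'ACCOUNTANT';
```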
· The user Scott is a standard "dummy" user account found on many Oracle systems for the
purposes of system testing – it needs to be disabled to remove a potential hacker access route.
· The IDENTIFIED BY clause specifies the user password.
· In order to create a user, a DBA must have the CREATE USER system privilege.
· Users also have a privilege domain – initially the user account has NO privileges – it is empty.
· In order for a user to connect to Oracle, you must grant the user the CREATE SESSION
system privilege.
· Each username must be unique within a database. A username cannot be the same as the
name of a role (roles are described in a later module).
Each user has a schema for the storage of objects within the database (see the figure below).
· Two users can name objects identically because the objects are referred to globally by using a
combination of the username and object name.
· Example: User350.Employee – each user account can have a table named Employee
because each table is stored within the user's schema.
Scott has two tablespaces identified, one for DEFAULT storage of objects and one
for TEMPORARY objects.
Scott has a quota set on 2 tablespaces. More details about tablespace allocation are given later in these
notes.
Scott has the resource limitations allocated by the PROFILE named accountant. The account is
unlocked (the default – alternatively the account could be created initially with the LOCK specification).
The PASSWORD EXPIRE clause requires Scott to change the password prior to connecting to the
database. After the password is set, when the user logs on using SQL*Plus or any other software product
that connects to the database, the user receives the following message at logon, and is prompted to
enter a new password:
ERROR:
ORA-28001: the account has expired
Changing password for SCOTT
Old password:
New password:
Retype new password:
Password changed
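The CREATE USER statement that the description above walks through is not reproduced in these notes; a sketch consistent with it (tablespace names and quota sizes are illustrative assumptions):

```sql
CREATE USER scott IDENTIFIED BY tiger
  DEFAULT TABLESPACE   users
  TEMPORARY TABLESPACE temp
  QUOTA 10M ON users        -- quotas set on two tablespaces
  QUOTA  5M ON data01
  PROFILE accountant
  ACCOUNT UNLOCK            -- the default
  PASSWORD EXPIRE;          -- forces a password change at first logon
```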
Database Authentication
Database authentication involves the use of a standard user account and password. Oracle performs
the authentication.
· System users can change their password at any time.
· Passwords are stored in an encrypted format.
· Each password must be made up of single-byte characters, even if the database uses a multi-
byte character set.
· Advantages:
o User accounts and all authentication are controlled by the database. There is no reliance
on anything outside of the database.
o Oracle provides strong password management features to enhance security when using
database authentication.
o It is easier to administer when there are small user communities.
Oracle recommends using password management that includes password aging/expiration, account
locking, password history, and password complexity verification.
External Authentication
External authentication requires the creation of user accounts that are maintained by
Oracle, but passwords are administered by an external service such as the operating system or a network
service (network authentication through Oracle Net is covered in the
course Oracle Database Administration Fundamentals II). This option is generally useful when a user
logs on directly to the machine where the Oracle server is running.
· A database password is not used for this type of login.
· In order for the operating system to authenticate users, a DBA sets
the init.ora parameter OS_AUTHENT_PREFIX to some set value – the default value is OPS$ in
order to provide backward compatibility with earlier versions of Oracle.
· This prefix is prepended to the operating system username to form the Oracle account username.
· You can also use a NULL string (a set of empty double quotes: "" ) for the prefix so that the
Oracle username exactly matches the operating system user name. This eliminates the need for
any prefix.
#init.ora parameter
OS_AUTHENT_PREFIX=OPS$
When Scott attempts to connect to the database, Oracle will check to see if there is a database user
named OPS$Scott and allow or deny the user access as appropriate. Thus, to use SQL*Plus to log on to
the system, the LINUX/UNIX user Scott enters the following command from the operating system:
$ sqlplus /
All references in commands that refer to a user that is authenticated by the operating system must
include the defined prefix OPS$.
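A sketch of creating such an externally authenticated account (assuming the OPS$ prefix shown above):

```sql
-- No database password: authentication is delegated to the OS
CREATE USER ops$scott IDENTIFIED EXTERNALLY;
GRANT CREATE SESSION TO ops$scott;
```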
Oracle allows operating-system authentication only for secure connections – this is the default. This
precludes use of Oracle Net or a shared server configuration and prevents a remote user from
impersonating another operating system user over a network.
The REMOTE_OS_AUTHENT parameter can be set to force acceptance of a client operating system user
name from a nonsecure connection.
· This is NOT a good security practice.
· Setting REMOTE_OS_AUTHENT = FALSE creates a more secure configuration based on
server-based authentication of clients.
· Changes in the parameter take effect the next time the instance starts and the database is
mounted.
Global Authentication
Central authentication can be accomplished through the use of Oracle Advanced Security software for a
directory service.
Global users, termed Enterprise Users, are authenticated by SSL (Secure Sockets Layer) and the user
accounts are managed outside of the database.
Global Roles are defined in a database and known only to that database; authorization for the roles
is done through the directory service. The roles can be used to provide access privileges within that database.
Enterprise Roles can be created to provide access across multiple databases. They can consist of one
or more global roles and are essentially containers for global roles.
Creating a Global User Example:
· In the directory, create multiple enterprise users and a mapping object to tell the database how
to map user DNs to the shared schema.
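A sketch of the database-side statement (the distinguished name is an illustrative assumption):

```sql
-- Account authenticated through the enterprise directory service
CREATE USER global_user
  IDENTIFIED GLOBALLY AS 'CN=analyst,OU=division1,O=example,C=US';
```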
Default Tablespace
If one is not specified, the default tablespace for a user is the SYSTEM tablespace – not a good choice
for a default tablespace. The standard practice is to always set a default tablespace, as was shown in
the CREATE USER command.
Temporary Tablespace
The default Temporary Tablespace for a user is also the SYSTEM tablespace.
· Allowing this situation to exist for system users will guarantee that user processing will cause
contention with access to the data dictionary.
· Generally a DBA will create a TEMP tablespace that will be shared by all users for processing
that requires sorting and joins.
Tablespace Quotas
Assigning a quota ensures that users with privileges to create objects can create those objects in the
tablespace.
A quota also ensures the amount of space allocated for storage by an individual user is not
exceeded. The default is NO QUOTA on any tablespace so a quota must be set or else the Oracle user
account cannot be used to create any objects.
Assigning Other Tablespace Quotas: You can assign a quota on tablespaces other than
the DEFAULT and TEMPORARY tablespaces for users.
· This enables the user to create objects in the other tablespaces.
· This is often done for senior systems analysts and programmers who are authorized to create
objects in a DATA tablespace.
If you change a quota and the new quota is smaller than the old one, then the following rules apply:
· For users who have already exceeded the new quota, new objects cannot be created, and
existing objects cannot be allocated more space until the combined space of the user's objects is
within the new quota.
· For users who have not exceeded the new quota, user objects can be allocated additional
space up to the new quota.
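A sketch of assigning a quota on an additional tablespace (the user, tablespace name, and size are illustrative):

```sql
ALTER USER scott QUOTA 20M ON data01;
```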
Granting the UNLIMITED TABLESPACE privilege to a user account overrides all quota settings for all
tablespaces.
If a user's quota on a tablespace is reduced to zero, existing objects for the user remain within the
tablespace, but they cannot be allocated additional disk space.
A user can always change his or her own password with the ALTER USER command. To make any other
use of the command, a user must have the ALTER USER system privilege -
something the DBA should not give to individual users.
Changing a user's security setting with the ALTER USER command changes future sessions, not a
current session to which the user may be connected.
· Dropping a user causes the user and the user schema to be immediately deleted from the
database.
· If the user has created objects within their schema, it is necessary to use the CASCADE option
in order to drop a user.
· If you fail to specify CASCADE when user objects exist, an error message is generated and the
user is not dropped.
· In order for a DBA to drop a user, the DBA must have the DROP USER system privilege.
CAUTION: You need to exercise caution with the CASCADE option to ensure that you don't drop a user
where views or procedures exist that depend upon tables that the user created. In those cases,
dropping a user requires a lot of detailed investigation and careful deletion of objects.
If you want to deny access to the database, but do not want to drop the user and the user's objects, you
should revoke the CREATE SESSION privilege for the user temporarily.
You cannot drop a user who is connected to the database - you must first terminate the user's session
with the ALTER SYSTEM KILL SESSION command.
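A sketch of the sequence (the SID and serial# are illustrative; query V$SESSION first to identify them):

```sql
-- Identify the session to terminate
SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';

-- Terminate it, then drop the user and all objects in the schema
ALTER SYSTEM KILL SESSION '121,45';   -- '<sid>,<serial#>' from the query above
DROP USER scott CASCADE;
```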
Site Licensing
One of the DBA's responsibilities is to ensure that the Oracle Server license agreement is maintained.
A DBA can track and limit session access for users concurrently accessing the database through use of
the LICENSE_MAX_SESSIONS, LICENSE_SESSIONS_WARNING,
and LICENSE_MAX_USERS parameters in the PFILE. If an organization's license is unlimited, these
parameters may have their value set to 0.
If the limit for the number of authorized connections to an Oracle Instance session is met, Oracle will
only allow users with the RESTRICTED SESSION privilege (usually DBAs) to connect to the database.
When the maximum limit is reached, Oracle writes a message in the ALERT file indicating the maximum
number of connections was reached. A DBA can also set a warning limit on the number of concurrent sessions so
that Oracle writes a message to the ALERT file indicating that the warning limit was reached.
When the maximum limit is reached, Oracle enforces the limit by restricting access to the
database. Oracle also tracks the highest number of concurrent sessions for each instance. This is
termed the "high water mark" and the information is written to the ALERT file.
Set the maximum number of concurrent sessions in the init.ora file with the command:
LICENSE_MAX_SESSIONS = 80
A DBA does not have to set the warning limit (LICENSE_SESSIONS_WARNING), but this parameter
makes it easier to manage site licensing. Set the warning limit in the init.ora file with the command:
LICENSE_SESSIONS_WARNING = 70
The usage limits can be changed while the database is running with the ALTER SYSTEM command. This
example alters the number of concurrent sessions and the warning limit:
ALTER SYSTEM
SET LICENSE_MAX_SESSIONS = 100
LICENSE_SESSIONS_WARNING = 90;
If the new value is lower than the number of users currently logged on, Oracle does not force any users
off of the system, but enforces the new limit for new users who attempt to connect.
A limit on the number of named users can also be set in the init.ora file:
LICENSE_MAX_USERS = 100
Attempting to create users after the limit is reached generates an error and a message is written to
the ALERT file. A DBA can change the maximum named users limit with the ALTER SYSTEM command
as shown here:
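The statement itself is missing from these notes; a sketch (the new limit value is illustrative):

```sql
ALTER SYSTEM SET LICENSE_MAX_USERS = 300;
```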
To view the current session limits, query the V$LICENSE data dictionary view as shown in this SELECT
statement.
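The SELECT statement is missing from these notes; a likely form of the query:

```sql
SELECT sessions_max, sessions_warning, sessions_current,
       sessions_highwater, users_max
FROM   v$license;
```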
Privileges
General
Authentication means verifying the identity of a system user account ID for access to an Oracle database.
Authorization means verifying that a system user account ID has been granted the right, called
a privilege, to execute a particular type of SQL statement or to access objects belonging to another
system user account.
In order to manage system user access and use of various system objects, such as tables, indexes, and
clusters, Oracle provides the capability to grant and revoke privileges to individual user accounts.
System Privileges
As Oracle has matured as a product, the number of system privileges has grown. The current number
is over 100. A complete listing is available by querying the view named SYSTEM_PRIVILEGE_MAP.
If you can create an object – for example, via the CREATE TABLE privilege – then you
can also drop the objects you create.
Category     Privileges
SESSION      Create Session, Alter Session
TABLESPACE   Create Tablespace, Alter Tablespace, Drop Tablespace, Unlimited Tablespace
TABLE        Create Table, Create Any Table, Alter Any Table, Drop Any Table, Select Any Table
INDEX        Create Any Index, Alter Any Index
Some privileges that you might expect to exist, such as CREATE INDEX, do not exist since if you
can CREATE TABLE, you can also create the indexes that go with it and use the ANALYZE command.
Some privileges, such as UNLIMITED TABLESPACE cannot be granted to a role (roles are covered in
Module 14-3)
In general, you can grant a privilege to either a user or to a role. You can also grant a privilege
to PUBLIC - this makes the privilege available to every system user.
The WITH ADMIN OPTION clause enables the grantee (person receiving the privilege) to grant the
privilege or role to other system users or roles; however, you cannot use this clause unless you have,
yourself, been granted the privilege with this clause.
The GRANT ANY PRIVILEGE system privilege also enables a system user to grant or revoke privileges.
The GRANT ANY ROLE system privilege is a dangerous one that you don't give to the average system
user since then the user could grant any role to any other system user.
This table lists example privileges associated with each of these special privileges.
SYSOPER: STARTUP, SHUTDOWN, ALTER DATABASE OPEN | MOUNT, RECOVER DATABASE,
ALTER DATABASE ARCHIVELOG, RESTRICTED SESSION, ALTER DATABASE BEGIN/END BACKUP
SYSDBA: the SYSOPER privileges WITH ADMIN OPTION, plus CREATE DATABASE and
RECOVER DATABASE UNTIL
You cannot grant the SYSDBA or SYSOPER privileges by using the WITH ADMIN OPTION. Also, you
must have these privileges in order to grant/revoke them from another system user.
You can display system privileges by querying the DBA_SYS_PRIVS view. Here is the result of a query
of the SIUE Oracle database.
SELECT * FROM dba_sys_privs WHERE Grantee = 'USER349';
You can view the users who have SYSOPER and SYSDBA privileges by
querying v$pwfile_users. Note: Your student databases will display no rows selected—this output
comes from the DBORCL database.
The view SESSION_PRIVS gives the privileges held by a user for the current logon session.
There are no cascading effects when a system privilege is revoked. For example, if the DBA grants
SELECT ANY TABLE WITH ADMIN OPTION to user1, and user1 grants
SELECT ANY TABLE to user2, then if user1 has the privilege revoked,
user2 still has the privilege.
For example, if data dictionary protection is in place, the SELECT ANY TABLE privilege that allows a user
to access views and tables in other schemas would not enable the system user to access dictionary objects.
A user account automatically has all object privileges for schema objects created within his/her
schema. Any privilege owned by a user account can be granted to another user account or to a role.
The following table provided by Oracle Corporation gives a map of object privileges and the type of
object to which a privilege applies.
Here the SELECT and ALTER privileges were granted for the Orders table belonging to the system
user User350. These two privileges were granted to all system users through the PUBLIC specification.
In the 3rd example, User349 receives the SELECT privilege on User350's Order_Details table and
can also grant that privilege to other system users via the WITH GRANT OPTION.
In the 4th example, the Accountant_Role role receives ALL privileges associated with
the Order_Details table.
In the 5th example UPDATE privilege is allocated for only two columns (Price and Description) of
the Order_Details table.
Notice the difference between WITH ADMIN OPTION and WITH GRANT OPTION - the first applying to
System privileges (these are administrative in nature), the second applying to Object privileges.
Several example REVOKE commands are shown here. Note the use of ALL (to revoke all object privileges
granted to a system user) and ON (to identify the object).
In the latter example, the CASCADE CONSTRAINTS clause drops any referential integrity constraints
that the grantee defined using the REFERENCES privilege being revoked.
There is a difference in how the revocation of object privileges affects other users. If user1 grants
SELECT on a table WITH GRANT OPTION to user2, and user2 grants SELECT on the table to user3, then if
the SELECT privilege is revoked from user2 by user1, user3 also loses the SELECT privilege. This
is a critical difference.
Table Privileges
Table privileges are schema object privileges specifically applicable to Data Manipulation Language
(DML) operations and Data Definition Language (DDL) operations for tables.
DML Operations
As was noted earlier, privileges to DELETE, INSERT, SELECT, and UPDATE for a table or view should
only be granted to system user accounts or roles that need to query or manipulate the table data.
INSERT and UPDATE privileges can be restricted for a table to specific columns.
· A selective INSERT causes a new row to have values inserted for columns that are specified in
a privilege – all other columns store NULL or pre-defined default values.
· A selective UPDATE restricts updates only to privileged columns.
DDL Operations
The ALTER, INDEX, and REFERENCES privileges allow DDL operations on a table.
· Grant these privileges conservatively.
· Users attempting DDL on a table may need additional system or object schema privileges, e.g.,
to create a table trigger, the user requires the CREATE TRIGGER system privilege as well as
the ALTER TABLE object privilege.
View Privileges
As you've learned, a view is a virtual table that presents data from one or more tables in a database.
· Views derive their structure from the underlying tables and are essentially a stored query.
· Views store no actual data – the data displayed is derived from the tables (or views) upon
which the view is based.
· A view can be queried.
· A view can be used to update data, providing the view is "updatable" by definition.
To use a view, a system user account only requires appropriate privileges on the view itself – privileges
on the underlying base objects are NOT required.
Procedure Privileges
The EXECUTE ANY PROCEDURE system privilege provides the ability to execute any procedure in a
database.
A user of a procedure requires only the EXECUTE privilege on the procedure, and does NOT require
privileges on underlying objects. A user of a procedure is termed the Invoker.
At runtime, the privileges of the Definer are checked – if required privileges on referenced objects have
been revoked, then neither the Definer nor any Invoker granted EXECUTE on the procedure can
execute the procedure.
Other Privileges
CREATE PROCEDURE or CREATE ANY PROCEDURE system privileges must be granted to a user
account in order for that user to create a procedure.
To alter a procedure (manually recompile), a user must own the procedure or have the ALTER ANY
PROCEDURE system privilege.
Procedure owners must have appropriate schema object privileges for any objects referenced in the
procedure body – these must be explicitly granted and cannot be obtained through a role.
Type Privileges
Type privileges are typically system privileges for named types that include object types, VARRAYs, and
nested tables. The system privileges in this area are detailed in this table.
The CONNECT and RESOURCE roles are granted the CREATE TYPE system privilege and the DBA role
includes all of the above privileges.
Object Privileges
The EXECUTE privilege permits a user account to use the type's methods. The user can use the named
type to:
· Define a table.
· Define a column in a table.
· Declare a variable or parameter of the named type.
Example from Oracle Database Security Guide Part Number B10773-01 documentation:
Assume that three users exist with the CONNECT and RESOURCE roles:
User1
User2
User3
User1 performs the following DDL in his schema:
CREATE TYPE Type1 AS OBJECT (
Attribute_1 NUMBER);
In the full example (partially elided in these notes), User1 also creates a second type, Type2, granting
User2 EXECUTE on Type1 without the GRANT OPTION but EXECUTE on Type2 WITH GRANT OPTION;
User2 then creates Type3 based on Type2, table Tab1 with a Type1 column, and table Tab2 with
a Type2 column. The following statements succeed because User2 has EXECUTE privilege on User1's
Type2 with the GRANT OPTION:
GRANT EXECUTE ON Type3 TO User3;
GRANT SELECT on Tab2 TO User3;
However, the following grant fails because User2 does not have EXECUTE privilege
on User1's TYPE1 with the GRANT OPTION:
GRANT SELECT ON Tab1 TO User3;
Several views provide information about object privileges. These can be queried as you have time and
include:
DBA_TAB_PRIVS - all object privileges granted to a user.
DBA_COL_PRIVS - all privileges granted on specific columns of a table.
Roles
General
The Role database object is used to improve the management of various system objects, such as tables,
indexes, and clusters by granting privileges to access these objects to roles. As you learned in earlier
studies, there are two types of privileges, System and Object. Both types of privileges can be allocated
to roles.
The concept of a role is a simple one – a role is created as a container for groups of privileges that are
granted to system users who perform similar, typical tasks in a business.
Example: A system user fills the position of Account_Manager. This is a business role. The role is
created as a database object and privileges are allocated to the role. In turn the role is allocated to all
employees that work as account managers, and all account managers thereby inherit the privileges
needed to perform their duties.
This figure shows privileges being allocated to roles, and the roles being allocated to two types of system
users – Account_Mgr and Inventory_Mgr.
From the figure it should be obvious that if you add a new system user who works as
an Account_Manager, then you can allocate almost all of the privileges this user will need by simply
allocating the role named ACCOUNT_MGR to the system user.
Role Benefits
· Easier privilege management: Use roles to simplify privilege management. Rather than
granting the same set of privileges to several users, you can grant the privileges to a role, and
then grant that role to each user.
· Dynamic privilege management: If the privileges associated with a role are modified, all
the users who are granted the role acquire the modified privileges automatically and
immediately.
· Selective availability of privileges: Roles can be enabled and disabled to turn privileges on
and off temporarily. Enabling a role can also be used to verify that a user has been granted that
role.
· Can be granted through the operating system: Operating system commands or utilities
can be used to assign roles to users in the database.
Predefined Roles
Numerous predefined roles are created as part of a database. These are listed and described in the
following table.
The first three roles are provided to maintain compatibility with previous versions of Oracle and may not
be created automatically in future versions of Oracle. Oracle Corporation recommends that you design
your own roles for database security, rather than relying on these roles.
Each predefined role is listed below with the script that creates it and a description:

CONNECT (SQL.BSQ): Includes the system privilege ALTER SESSION. (This role has been deprecated
and has only been retained with the ALTER SESSION privilege for compatibility with previous Oracle
versions.)

RESOURCE (SQL.BSQ): Includes system privileges: CREATE CLUSTER, CREATE INDEXTYPE, CREATE
OPERATOR, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER, CREATE TYPE.

DBA (SQL.BSQ): Gives all system privileges to the grantee WITH ADMIN OPTION.

EXP_FULL_DATABASE (CATEXP.SQL): Provides the privileges required to perform full and incremental
database exports. Includes: SELECT ANY TABLE, BACKUP ANY TABLE, EXECUTE ANY PROCEDURE,
EXECUTE ANY TYPE, ADMINISTER RESOURCE MANAGER, and INSERT, DELETE, and UPDATE on the
tables SYS.INCVID, SYS.INCFIL, and SYS.INCEXP. Also the following roles:
EXECUTE_CATALOG_ROLE and SELECT_CATALOG_ROLE.

IMP_FULL_DATABASE (CATEXP.SQL): Provides the privileges required to perform full database imports.
Includes an extensive list of system privileges (use the view DBA_SYS_PRIVS to view the privileges)
and the following roles: EXECUTE_CATALOG_ROLE and SELECT_CATALOG_ROLE.

DELETE_CATALOG_ROLE (SQL.BSQ): Provides DELETE privilege on the system audit table (AUD$).

EXECUTE_CATALOG_ROLE (SQL.BSQ): Provides EXECUTE privilege on objects in the data dictionary.
Also, HS_ADMIN_ROLE.

SELECT_CATALOG_ROLE (SQL.BSQ): Provides SELECT privilege on objects in the data dictionary.
Also, HS_ADMIN_ROLE.

RECOVERY_CATALOG_OWNER (CATALOG.SQL): Provides privileges for the owner of the recovery
catalog. Includes: CREATE SESSION, ALTER SESSION, CREATE SYNONYM, CREATE VIEW, CREATE
DATABASE LINK, CREATE TABLE, CREATE CLUSTER, CREATE SEQUENCE, CREATE TRIGGER, and
CREATE PROCEDURE.

HS_ADMIN_ROLE (CATHS.SQL): Used to protect access to the HS (Heterogeneous Services) data
dictionary tables (grants SELECT) and packages (grants EXECUTE). It is granted to
SELECT_CATALOG_ROLE and EXECUTE_CATALOG_ROLE so that users with generic data dictionary
access also can access the HS data dictionary.

AQ_ADMINISTRATOR_ROLE (CATQUEUE.SQL): Provides privileges to administer Advanced Queuing.
Includes ENQUEUE ANY QUEUE, DEQUEUE ANY QUEUE, and MANAGE ANY QUEUE, SELECT privileges
on AQ tables, and EXECUTE privileges on AQ packages.
RESOURCE role – when granted to a system user, the system user automatically has
the UNLIMITED TABLESPACE privilege.
· We grant this role to students that need to design with the Internet Developer Suite that
includes Oracle Designer, Reports, Forms and other rapid application development software.
· Normally the RESOURCE role would not be granted to organizational members who are not
information technology professionals.
Creating Roles
Sample commands to create roles are shown here. You must have the CREATE ROLE system privilege.
CREATE ROLE Account_Mgr;
The IDENTIFIED BY clause specifies how the user must be authorized before the role can be enabled
for use by a specific user to which it has been granted. If this clause is not specified, or NOT
IDENTIFIED is specified, then no authorization is required when the role is enabled.
· Externally by the operating system, network, or other external source – the following
statement creates a role named ACCTS_REC and requires that the user be authorized by an
external source before it can be enabled:
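The statement itself is missing from these notes; a sketch of what the bullet above describes:

```sql
CREATE ROLE accts_rec IDENTIFIED EXTERNALLY;
```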
Altering Roles
Use the ALTER ROLE command as is shown in these examples.
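The examples themselves are missing from these notes; hedged sketches (the role names and password are illustrative):

```sql
ALTER ROLE account_mgr IDENTIFIED BY mgr_password;  -- require a password to enable the role
ALTER ROLE accts_rec NOT IDENTIFIED;                -- remove the authorization requirement
```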
Granting Roles
General facts about roles:
· Grant system privileges and roles to users and to other roles.
· To grant a privilege to a role, you must be granted a system privilege with the ADMIN
OPTION or have the GRANT ANY PRIVILEGE system privilege.
· To grant a role, you must have been granted the role yourself with the ADMIN OPTION or
have the GRANT ANY ROLE system privilege.
· You cannot grant a role that is IDENTIFIED GLOBALLY as global roles are controlled entirely
by the enterprise directory service.
Use the GRANT command to grant a role to a system user or to another role, as is shown in these
examples.
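The examples are missing from these notes; hedged sketches (role and user names are illustrative):

```sql
GRANT account_mgr TO scott;           -- grant a role to a user
GRANT account_mgr TO inventory_mgr;   -- grant a role to another role
```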
When you create a role, the role is automatically granted to you with the ADMIN OPTION.
Granting with ADMIN OPTION is rarely done except to allocate privileges to security administrators, not
to other administrators or system users.
Example: This example creates a new user dbock with the specified password.
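The statement is missing from these notes; a sketch (the password is illustrative, since the original value is not shown):

```sql
CREATE USER dbock IDENTIFIED BY some_password;
```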
You cannot grant system privileges and roles with object privileges in the same GRANT statement.
Example: This grants SELECT, INSERT, and DELETE privileges for all columns of the EMPLOYEE table
to two user accounts.
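A sketch of the grant described above (the two user names are illustrative):

```sql
GRANT SELECT, INSERT, DELETE ON employee TO user349, user350;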
Example: This grants all object privileges on the SUPERVISOR view to a user by use of
the ALL keyword.
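A sketch of the grant described above (the grantee is illustrative):

```sql
GRANT ALL ON supervisor TO user349;
```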
Example: This specifies the WITH GRANT OPTION to enable User350 to grant the object privileges to
other users and roles.
· The grantee can grant object privileges to other users and roles in the database.
· The grantee can create views on the table.
· The grantee can grant corresponding privileges on the views to other users and roles.
· The grantee CANNOT use the WITH GRANT OPTION when granting object privileges to a
role.
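A sketch of the grant described above (the table and grantee are illustrative):

```sql
GRANT SELECT ON user350.orders TO user349 WITH GRANT OPTION;
```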
Example: This grants the INSERT and UPDATE privileges on the Employee_ID, Last_Name,
and First_Name columns of the Employee table.
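A sketch of the column-level grant described above (the grantee is illustrative):

```sql
GRANT INSERT (employee_id, last_name, first_name),
      UPDATE (employee_id, last_name, first_name)
ON employee TO user349;
```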
Default Roles
Oracle enables all privileges granted directly to a user, plus privileges granted through the user's default roles, when the user logs on.
The ALTER USER statement enables a DBA to specify the roles to be enabled when a system user
connects to the database without requiring the user to specify the roles' passwords. These roles must
have already been granted to the user with the GRANT statement.
Using the ALTER USER command to limit the default role causes privileges assigned to the user by other
roles to be temporarily removed.
The last example limits User153 only to privileges granted directly to the user, with no privileges being
allowed through roles.
You can also enable/disable roles through the SET ROLE command.
You cannot set a user's default roles with the CREATE USER statement.
The maximum number of roles that can be concurrently enabled for a user is specified with the MAX_ENABLED_ROLES parameter.
Example: This enables the role Inventory_Mgr that you have been granted by specifying the
password.
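A sketch of the statement (the password is illustrative):

```sql
SET ROLE inventory_mgr IDENTIFIED BY inv_password;
```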
The second example revokes the role Account_Mgr from the role Inventory_Mgr. The third example
revokes the role Access_MyBank_Acct from PUBLIC.
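Hedged sketches of these revocations (the first example's grantee is not named in the notes, so scott is illustrative):

```sql
REVOKE account_mgr FROM scott;            -- revoke a role from a user
REVOKE account_mgr FROM inventory_mgr;    -- second example: role from role
REVOKE access_mybank_acct FROM PUBLIC;    -- third example
```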
Example: If you are the original grantor, this REVOKE revokes the specified privileges from the users
specified.
Example: You granted User350 the privilege to UPDATE the Birth_Date, Last_Name,
and First_Name columns for the Employee table, but now want to revoke the UPDATE privilege on
the Birth_Date column.
You must first revoke the UPDATE privilege on all columns, then issue
a GRANT to regrant the UPDATE privilege on the specified columns.
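A sketch of the revoke-then-regrant sequence described above:

```sql
-- Column-level UPDATE cannot be revoked directly; revoke the whole privilege...
REVOKE UPDATE ON employee FROM user350;
-- ...then regrant it on only the desired columns
GRANT UPDATE (last_name, first_name) ON employee TO user350;
```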
Example:
· You as the DBA grant the CREATE VIEW system privilege to User350 WITH ADMIN
OPTION.
· User350 creates a view named Employee_Supervisor.
· User350 grants the CREATE VIEW system privilege to user349.
· User349 creates a view named Special_Inventory.
· You as the DBA revoke CREATE VIEW from User350.
· The Employee_Supervisor view continues to exist.
· User349 still has the CREATE VIEW system privilege and the Special_Inventory view
continues to exist.
Cascading revoke effects do occur for system privileges related to DML operations.
Example:
· You as the DBA grant the UPDATE ANY TABLE to User350.
· User350 creates a procedure that updates the Employee table, but User350 has not received
specific privileges on the Employee table.
· You as the DBA revoke the UPDATE ANY TABLE privilege.
· The procedure will fail.
Dropping Roles
If you drop a role:
· Oracle revokes the role from all system users and roles.
· The role is removed from the data dictionary.
· The role is automatically removed from all user default role lists.
· There is NO impact on objects created, such as tables, because previously created objects do
not depend on privileges received through a role.
In order to drop a role, you must have been granted the role with the ADMIN OPTION or have
the DROP ANY ROLE system privilege.
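A minimal sketch of the command, using the role name from the earlier examples:

```sql
DROP ROLE Inventory_Mgr;
```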
Use the following steps to create, assign, and grant users roles:
1. Create a role for each application task. The name of the application role corresponds to a task in the
application, such as PAYROLL.
2. Assign the privileges necessary to perform the task to the application role.
3. Create a role for each type of user. The name of the user role corresponds to a job title, such
as PAY_CLERK.
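The three steps above might be sketched as follows; the Payroll_Tab table and the passwords are assumed names for illustration:

```sql
-- Steps 1 and 2: an application role named for a task, with the needed privileges
CREATE ROLE payroll;
GRANT SELECT, INSERT, UPDATE ON Payroll_Tab TO payroll;

-- Step 3: a user role named for a job title, protected by a password
CREATE ROLE pay_clerk IDENTIFIED BY clerk_pwd;
GRANT payroll TO pay_clerk;
```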
The DBA has granted the user two roles, PAY_CLERK and PAY_CLERK_RO.
· The PAY_CLERK role has been granted all of the privileges that are necessary to perform the
payroll clerk function.
· The PAY_CLERK_RO (RO for read only) role has been granted only SELECT privileges on
the tables required to perform the payroll clerk function.
· The user can log in to SQL*Plus to perform queries, but cannot modify any of the data,
because PAY_CLERK is not a default role and the user does not know the password
for PAY_CLERK.
· When the user logs in to the payroll application, the application enables PAY_CLERK by
providing the password. The password is coded in the program; the user is not prompted for it.
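Inside the application, the statement that enables the role might be a simple SET ROLE (password assumed):

```sql
SET ROLE pay_clerk IDENTIFIED BY clerk_pwd;
```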
The important thing here is to identify the session correctly: DO NOT KILL THE WRONG SESSION!
Connecting to SQL*Plus as SYSDBA:
[oracle@ora ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sun Feb 5 19:26:28 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Check user
SQL> show user
USER is "SYS"
setting format
SQL> set linesize 100
SQL> column spid format A10
SQL> column username format A10
SQL> column program format A45
SQL> set pagesize 60
Now select the session to be killed:
SQL> Select
2 x.inst_id,
3 x.sid,
4 x.serial#,
5 y.spid,
6 x.username,
7 x.program
8 From gv$session x
9 Join gv$process y ON y.addr = x.paddr AND y.inst_id = x.inst_id
10 Where x.type != 'BACKGROUND';
Using the SPID value from the query, kill the operating system process from the shell:
% kill -9 <spid>
OR kill the session from within the database, using the SID and SERIAL# values:
SQL> ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
Profiles only take effect when resource limits are "turned on" for the database as a whole.
· Specify the RESOURCE_LIMIT initialization parameter.
RESOURCE_LIMIT = TRUE
Profile Specifications
Profile specifications include:
· Password aging and expiration
· Password history
· Password complexity verification
· Account locking
· CPU time
· Input/output (I/O) operations
· Idle time
· Connect time
· Memory space (private SQL area for Shared Server only)
· Concurrent sessions
This query lists the resource limits for the DEFAULT profile.
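The query itself is not reproduced in these notes; one way to list the limits is to select from the DBA_PROFILES data dictionary view:

```sql
SELECT resource_name, resource_type, limit
  FROM DBA_PROFILES
 WHERE profile = 'DEFAULT';
```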
16 rows selected.
Creating a Profile
A DBA creates a profile with the CREATE PROFILE command.
· This command has clauses that explicitly set resource limits.
· A DBA must have the CREATE PROFILE system privilege in order to use this command.
· Example:
Resource limits that are not specified for a new profile inherit the limit set in
the DEFAULT profile. These clauses are covered in detail later in these notes.
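The example figure is not reproduced here; a CREATE PROFILE command of the kind described might look like this (profile name and limit values assumed):

```sql
CREATE PROFILE Accountant LIMIT
  SESSIONS_PER_USER          4
  CPU_PER_SESSION            10000     -- hundredths of a second
  IDLE_TIME                  30        -- minutes
  CONNECT_TIME               480       -- minutes
  LOGICAL_READS_PER_SESSION  DEFAULT;  -- inherit from the DEFAULT profile
```

Any clause left unspecified (or set to DEFAULT) inherits its limit from the DEFAULT profile, as noted above.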
Assigning Profiles
Profiles can only be assigned to system users if the profile has first been created. Each system user is
assigned only one profile at a time. When a profile is assigned to a system user who already has a
profile, the new profile replaces the old one – the current session, if one is taking place, is not affected,
but subsequent sessions are affected. Also, you cannot assign a profile to a role or another profile (Roles
are covered in Module 16).
As was noted above, profiles are assigned with the CREATE USER and ALTER USER commands. An
example CREATE USER command is shown here – this command is covered in more detail in Module 14.
User created.
USERNAME PROFILE
-------------- -----------------
USER349 ACCOUNTANT
Altering Profiles
Profiles can be altered with the ALTER PROFILE command.
· A DBA must have the ALTER PROFILE system privilege to use this command.
· When a profile limit is adjusted, the new setting overrides the previous setting for the limit,
but these changes do not affect current sessions in process.
· Example:
Profile dropped.
USERNAME PROFILE
------------------------------ ----------
USER349 DEFAULT
· Changes that result from dropping a profile only apply to sessions that are created after the
change – current sessions are not modified.
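The statements behind the "Profile dropped." output above are not shown; assuming the Accountant profile from the earlier example, they might have been:

```sql
ALTER PROFILE Accountant LIMIT SESSIONS_PER_USER 4;  -- adjust one limit

DROP PROFILE Accountant CASCADE;  -- CASCADE revokes the profile from users,
                                  -- who revert to the DEFAULT profile
```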
Password Management
Password management can be easily controlled by a DBA through the use of profiles.
Password limits set in this fashion are always enforced. When password management is in use, an
existing user account can be locked or unlocked by the ALTER USER command.
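For example, locking and unlocking an account with ALTER USER (user name assumed from the earlier examples):

```sql
ALTER USER User349 ACCOUNT LOCK;
ALTER USER User349 ACCOUNT UNLOCK;
```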
Password Account Locking: This option automatically locks a system user account if the user fails to
execute proper login account name/password entries after a specified number of login attempts.
Password Expiration/Aging: Specifies the lifetime of a password – after the specified period, the
password must be changed.
Password History: This option ensures that a password is not reused within a specified period of time
or number of password changes.
Password Complexity Verification: This option checks that a new password is sufficiently complex.
· This is implemented by use of a password verification function. A DBA can write such a
function or can use the default function named VERIFY_FUNCTION.
· The function that is used for password complexity verification is specified with the profile
parameter, PASSWORD_VERIFY_FUNCTION.
· If NULL is specified (the default), no password verification is performed.
· The default VERIFY_FUNCTION has the characteristics shown in the figure below.
When a DBA connected as the user SYS executes the utlpwdmg.sql script (located
at $ORACLE_HOME/rdbms/admin/utlpwdmg.sql), the Oracle Server creates
the VERIFY_FUNCTION. The script also executes the ALTER PROFILE command given below – the
command modifies the DEFAULT profile.
Function created.
Profile altered.
This ALTER PROFILE command is part of the utlpwdmg.sql script and does not need to be executed
separately.
Creating a Profile with Password Protection: The figure shown below provides an example CREATE
PROFILE command.
Use these parameter values when setting parameters to values that are less than a day:
· 1 hour: PASSWORD_LOCK_TIME = 1/24
· 10 minutes: PASSWORD_LOCK_TIME = 10/1440
· 5 minutes: PASSWORD_LOCK_TIME = 5/1440
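The figure mentioned above is not reproduced; a CREATE PROFILE command with password protection might look like this (profile name and values assumed for illustration):

```sql
CREATE PROFILE Secure_Login LIMIT
  FAILED_LOGIN_ATTEMPTS     3          -- lock account after 3 failures
  PASSWORD_LOCK_TIME        5/1440     -- locked for 5 minutes
  PASSWORD_LIFE_TIME        60         -- password expires after 60 days
  PASSWORD_GRACE_TIME       10         -- 10-day grace period after expiration
  PASSWORD_REUSE_TIME       1800       -- days before a password can be reused
  PASSWORD_REUSE_MAX        UNLIMITED
  PASSWORD_VERIFY_FUNCTION  verify_function;
```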
Resource Management
As noted earlier, resource limits are enabled by setting the RESOURCE_LIMIT initialization parameter
to TRUE (the default is FALSE) or by enabling the parameter with the ALTER SYSTEM command.
System altered.
Resource – Description
CPU_PER_SESSION – Total CPU time for a session, measured in hundredths of seconds.
CPU_PER_CALL – Maximum CPU time allowed for a statement parse, execute, or fetch operation, in hundredths of a second.
SESSIONS_PER_USER – Maximum number of concurrent sessions allowed for each user name.
CONNECT_TIME – Maximum total elapsed connect time, measured in minutes.
IDLE_TIME – Maximum continuous inactive time in a session, measured in minutes, while a query or other operation is not in progress.
LOGICAL_READS_PER_SESSION – Number of data blocks read per session from either memory or disk (physical and logical reads).
LOGICAL_READS_PER_CALL – Maximum number of data blocks read for a statement parse, execute, or fetch operation.
COMPOSITE_LIMIT – Total resource cost, in service units, as a composite weighted sum of CPU_PER_SESSION, CONNECT_TIME, LOGICAL_READS_PER_SESSION, and PRIVATE_SGA.
PRIVATE_SGA – Maximum amount of memory a session can allocate in the shared pool of the SGA, measured in bytes, kilobytes, or megabytes (applies to Shared Server only).
· Profile limits enforced at the session level apply to each connection; a system user can have
more than one concurrent connection.
· If a session-level limit is exceeded, then the Oracle Server issues an error message such
as ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit, and then disconnects
the system user.
· Resource limits can also be set at the Call-level, but this applies to PL/SQL programming
limitations and we do not cover setting these Call-level limits in this course.
The ALTER RESOURCE COST command is used to adjust weightings for resource costs. This can affect
the impact of the COMPOSITE_LIMIT parameter.
Example: Here the weights are changed so CPU_PER_SESSION favors CPU usage over connect time
by a factor of 50 to 1. This means it is much more likely that a system user will be disconnected from
excessive CPU usage than from the use of excessive connect time.
RESOURCE_NAME UNIT_COST
-------------------------------- ----------
CPU_PER_SESSION 50
LOGICAL_READS_PER_SESSION 0
CONNECT_TIME 1
PRIVATE_SGA 0
Profile altered.
User altered.
· Step 3. Test the new limit. The composite cost can be computed with the weighted-sum
formula. The (omitted) table compares high and low values for CPU and CONNECT usage to
compute the composite cost and indicates whether the resource limit is exceeded.
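The formula itself is not reproduced in these notes; it is a weighted sum of the session's resource usage. With the weights set above (CPU_PER_SESSION = 50 per hundredth of a second, CONNECT_TIME = 1 per minute, the other two weighted 0), sample computations look like:

```
cost = 50 * CPU_PER_SESSION + 0 * LOGICAL_READS_PER_SESSION
     + 1 * CONNECT_TIME     + 0 * PRIVATE_SGA

High CPU usage:    50 * 100 + 1 * 10  = 5010 service units
Long connect time: 50 * 10  + 1 * 100 =  600 service units
```

This illustrates why, with these weights, a session is far more likely to hit the COMPOSITE_LIMIT through CPU usage than through connect time.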
The Database Resource Manager can provide the Oracle server more control over resource management
decisions; thus, avoiding problems from inefficient operating system management.
Oracle Database Resource Manager (the Resource Manager) enables you to manage multiple workloads
within a database through the creation of resource plans and resource groups, and the allocation of
individual user accounts to resource groups that are, in turn, allocated resource plans.
Generally the operating system handles resource management. However, within an Oracle database, this
can result in a number of problems:
· Excessive overhead from operating system context switching between Oracle Database server
processes when the number of server processes is high.
· Inefficient scheduling, because the operating system may deschedule database server
processes while they hold latches.
· Inappropriate allocation of resources by not prioritizing tasks properly among active processes.
· Inability to manage database-specific resources, such as parallel execution servers and active
sessions
Example: Allocate 80% of available CPU resources to online users leaving 20% for batch users and
jobs.
The Resource Manager enables you to classify sessions into groups based on session attributes,
and to then allocate resources to those groups in a way that optimizes hardware utilization for your
application environment.
You can use the DBMS_RESOURCE_MANAGER PL/SQL package to create and maintain these
elements. The objects created are stored in the data dictionary.
Some special consumer groups always exist in the data dictionary and cannot be modified or deleted:
· SYS_GROUP – the initial consumer group for all sessions created by SYS or SYSTEM.
· OTHER_GROUPS – this group contains all sessions not assigned to a consumer group. Any
resource plan must always have a directive for the OTHER_GROUPS.
This figure from your readings shows a simple resource plan for an OLTP and reporting set of
applications.
· The plan is named DAYTIME.
· It allocates CPU resources among three resource consumer groups
named OLTP, REPORTING, and OTHER_GROUPS.
Oracle provides a predefined procedure named CREATE_SIMPLE_PLAN so that a DBA can create
simple resource plans.
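A sketch of how the DAYTIME plan from the figure might be created with this procedure (CPU percentages taken from the figure's description; OTHER_GROUPS receives the remainder automatically):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(
    SIMPLE_PLAN     => 'DAYTIME',
    CONSUMER_GROUP1 => 'OLTP',      GROUP1_PERCENT => 80,
    CONSUMER_GROUP2 => 'REPORTING', GROUP2_PERCENT => 20);
END;
/
```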
A resource plan can reference subplans. This figure illustrates a top plan and all descending plans and
groups.
The Resource Manager is not enabled by default. This initialization parameter (set in the init.ora file
or with ALTER SYSTEM) activates the Resource Manager and sets the top plan:
RESOURCE_MANAGER_PLAN = DAYTIME
Activate or deactivate the Resource Manager dynamically or change plans with the ALTER SYSTEM
command.
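For example (plan name taken from the text above):

```sql
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME';  -- activate or switch plans
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';         -- deactivate the Resource Manager
```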
Note: The Database Resource Manager is covered further in the Oracle course Oracle Performance
Tuning.
The following initialization parameters allow the database server to use the Oracle-managed files feature: DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, and DB_RECOVERY_FILE_DEST.
The file system directory specified by either of these parameters must already exist: the database does
not create it. The directory must also have permissions to allow the database to create the files in it. The
default location is used whenever a location is not explicitly specified for the operation creating the file.
The database creates the filename, and a file thus created is an Oracle-managed file. Both of these
initialization parameters are dynamic, and can be set using the ALTER SYSTEM or ALTER SESSION
statement.
Include the DB_RECOVERY_FILE_DEST initialization parameter in your initialization parameter file to
identify the default location for the database server to create:
• Archived logs
• Flashback logs
You specify the name of a file system directory that becomes the default location for creation of the
operating system files for these entities. For example:
DB_RECOVERY_FILE_DEST = '/u01/oradata'
DB_RECOVERY_FILE_DEST_SIZE = 20G
Include the DB_CREATE_ONLINE_LOG_DEST_n initialization parameter in your initialization parameter
file to identify the default location for the database server to create:
• Redo log files
• Control files
You specify the name of a file system directory that becomes the default location for the creation of the
operating system files for these entities. You can specify up to five multiplexed locations. For the creation
of redo log files and control files only, this parameter overrides any default location specified in the
DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. If you do not specify a
DB_CREATE_FILE_DEST parameter, but you do specify the DB_CREATE_ONLINE_LOG_DEST_n
parameter, then only redo log files and control files can be created as Oracle-managed files. It is
recommended that you specify at least two parameters. For example:
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'
This allows multiplexing, which provides greater fault-tolerance for the redo log and control file if one of
the destinations fails.
Files of one type are easily distinguishable from files of other types.
Files are clearly associated with important attributes specific to the file type.
For example, a datafile name may include the tablespace name to allow for easy association of datafile to
tablespace, and an archived log name may include the thread, sequence, and creation date. No two
Oracle-managed files are given the same name. The name that is used for creation of an Oracle-
managed file is constructed from three sources:
· The default creation location.
· A file name template that is chosen based on the type of the file. The template also depends on
the operating system platform and whether or not Automatic Storage Management is used.
· A unique string created by Oracle Database or the operating system, which ensures that file
creation does not damage an existing file and that the file cannot be mistaken for some other
file.
As a specific example, filenames for Oracle-managed files have the following format on a Solaris file
system:
<destination_prefix>/o1_mf_%t_%u_.dbf
where <destination_prefix> is
<destination_location>/<db_unique_name>/<datafile>
For example:
/u01/app/oracle/oradata/PAYROLL/datafile/o1_mf_tbs1_2ixh90q_.dbf
Names for other file types are similar. Names on other platforms are also similar, subject to the
constraints of the naming rules of the platform.
The examples on the following pages use Oracle-managed file names as they might appear with a
Solaris file system as an OMF destination.
If the CONTROL_FILES parameter is not set and none of these initialization parameters are set, then the
Oracle Database default behavior is operating system dependent. At least one copy of a control file is
created in an operating system dependent default location. Any copies of control files created in this
fashion are not Oracle managed files, and you must add a CONTROL_FILES initialization parameter to
any initialization parameter file. If the database creates an Oracle-managed control file, and if there is a
server parameter file, then the database creates a CONTROL_FILES initialization parameter entry in the
server parameter file. If there is no server parameter file, then you must manually include a
CONTROL_FILES initialization parameter entry in the text initialization parameter file.
Specifying Redo Log Files at Database Creation
The LOGFILE clause is not required in the CREATE DATABASE statement, and omitting it provides a
simple means of creating Oracle-managed redo log files. If the LOGFILE clause is omitted, then redo log
files are created in the default redo log file destinations. In order of precedence, the default destination is
defined as follows:
• If any DB_CREATE_ONLINE_LOG_DEST_n parameter is set, then the database creates a log file member in
each directory specified, up to the value of the MAXLOGMEMBERS initialization parameter.
• If no DB_CREATE_ONLINE_LOG_DEST_n parameter is set, but both the DB_CREATE_FILE_DEST
and DB_RECOVERY_FILE_DEST initialization parameters are set, then the database creates one Oracle-
managed log file member in each of those locations. The log file in the DB_CREATE_FILE_DEST
destination is the first member.
• If only the DB_CREATE_FILE_DEST initialization parameter is specified, then the database creates a log
file member in that location.
• If only the DB_RECOVERY_FILE_DEST initialization parameter is specified, then the database creates a
log file member in that location.
The default size of an Oracle-managed redo log file is 100 MB. Optionally, you can create Oracle-
managed redo log files, and override default attributes, by including the LOGFILE clause but omitting a
filename. Redo log files are created the same way, except for the following: If no filename is provided in
the LOGFILE clause of CREATE DATABASE, and none of the initialization parameters required for creating
Oracle-managed files are provided, then the CREATE DATABASE statement fails.
The default size for an Oracle-managed datafile is 100 MB and the file is autoextensible. When
autoextension is required, the database extends the datafile by its existing size or 100 MB, whichever is
smaller. You can also explicitly specify the autoextensible unit using the NEXT parameter of the STORAGE
clause when you specify the datafile (in a CREATE or ALTER TABLESPACE operation). Optionally, you can
create an Oracle-managed datafile for the
SYSTEM or SYSAUX tablespace and override default attributes. This is done by including the DATAFILE
clause, omitting a filename, but specifying overriding attributes. When a filename is not supplied and the
DB_CREATE_FILE_DEST parameter is set, an Oracle-managed datafile for the SYSTEM or SYSAUX
tablespace is created in the DB_CREATE_FILE_DEST directory with the specified attributes being
overridden. However, if a filename is not supplied and the DB_CREATE_FILE_DEST parameter is not set,
then the CREATE DATABASE statement fails. When overriding the default attributes of an Oracle-
managed file, if a SIZE value is specified but no AUTOEXTEND clause is specified, then the datafile is not
autoextensible.
The following parameter settings relating to Oracle-managed files, are included in the initialization
parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'
Example: this example creates a database with the following Oracle-managed files:
• A 100 MB SYSTEM tablespace datafile in directory /u01/oradata that is autoextensible up to an
unlimited size.
• A SYSAUX tablespace datafile in directory /u01/oradata that is 100 MB and autoextensible up to an
unlimited size. The tablespace is locally managed with automatic segment-space management.
• Two redo log groups of 100 MB each, with one member in /u02/oradata and one in /u03/oradata
(multiplexed).
• An undo tablespace datafile in directory /u01/oradata that is 10 MB and autoextensible up to an
unlimited size. An undo tablespace named SYS_UNDOTBS is created.
• A control file in /u01/oradata.
The following parameter settings are included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'
Examples: The following are some examples of creating tablespaces with Oracle-managed files.
Example: The following example sets the default location for datafile creations to /u01/oradata and then
creates a tablespace tbs_1 with a datafile in that location. The datafile is 100 MB and is autoextensible
with an unlimited maximum size.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TABLESPACE tbs_1;
Example: This example creates a tablespace named tbs_2 with a datafile in the directory /u01/oradata.
The datafile initial size is 400 MB, and because the SIZE clause is specified, the datafile is not
autoextensible. The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
The following statement is issued at the SQL prompt:
SQL> CREATE TABLESPACE tbs_2 DATAFILE SIZE 400M;
Example: This example creates a tablespace named tbs_3 with an autoextensible datafile in the
directory /u01/oradata with a maximum size of 800 MB and an initial size of 100 MB:
The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
The following statement is issued at the SQL prompt:
SQL> CREATE TABLESPACE tbs_3 DATAFILE AUTOEXTEND ON MAXSIZE 800M;
Example: The following example sets the default location for datafile creations to /u01/oradata and then
creates a tablespace named tbs_4 in that directory with two datafiles. Both datafiles have an initial size
of 200 MB, and because a SIZE value is specified, they are not autoextensible
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TABLESPACE tbs_4 DATAFILE SIZE 200M, SIZE 200M;
Example: The following example creates an undo tablespace named undotbs_1 with a datafile in the
directory /u01/oradata. The datafile for the undo tablespace is 100 MB and is autoextensible with an
unlimited maximum size.
The following parameter setting is included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
The following statement is issued at the SQL prompt:
SQL> CREATE UNDO TABLESPACE undotbs_1;
Example: This example adds an Oracle-managed autoextensible datafile to the tbs_1 tablespace. The
datafile has an initial size of 100 MB and a maximum size of 800 MB. The following parameter setting is
included in the initialization parameter file:
DB_CREATE_FILE_DEST = '/u01/oradata'
The following statement is entered at the SQL prompt:
SQL> ALTER TABLESPACE tbs_1 ADD DATAFILE AUTOEXTEND ON MAXSIZE 800M;
Example: The following example sets the default location for datafile creations to /u01/oradata and then
creates a tablespace named temptbs_1 with a tempfile in that location. The tempfile is 100 MB and is
autoextensible with an unlimited maximum size.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TEMPORARY TABLESPACE temptbs_1;
Example: The following example sets the default location for datafile creations to /u03/oradata and then
adds a tempfile in the default location to a tablespace named temptbs_1. The tempfile initial size is 100
MB. It is autoextensible with an unlimited maximum size.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u03/oradata';
SQL> ALTER TABLESPACE temptbs_1 ADD TEMPFILE;
Later, you must issue the ALTER DATABASE OPEN RESETLOGS statement to re-create the redo log files.
Renaming Files
The following statements are used to rename files:
• ALTER DATABASE RENAME FILE
• ALTER TABLESPACE ... RENAME DATAFILE
These statements do not actually rename the files on the operating system, but rather, the names in the
control file are changed. If the old file is an Oracle-managed file and it exists, then it is deleted. You must
specify each filename using the conventions for filenames on your operating system when you issue this
statement.
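A minimal sketch of the two statements, with hypothetical paths; the file must already have been moved or copied at the operating system level before either is issued:

```sql
-- rename at the database level (database mounted)
ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf'
                        TO '/u02/oradata/users01.dbf';

-- or rename at the tablespace level (tablespace offline)
ALTER TABLESPACE users RENAME DATAFILE '/u01/oradata/users01.dbf'
                                    TO '/u02/oradata/users01.dbf';
```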
2. Creating a database
Once the initialization parameters are set, the database can be created by using this statement:
SQL> CREATE DATABASE sample
2> DEFAULT TEMPORARY TABLESPACE dflttmp;
Because a DATAFILE clause is not present and the DB_CREATE_FILE_DEST initialization parameter is set,
the SYSTEM tablespace datafile is created in the default file system (/u01/oradata in this scenario). The
filename is uniquely generated by the database. The file is autoextensible with an initial size of 100 MB
and an unlimited maximum size. The file is an Oracle-managed file. A similar datafile is created for the
SYSAUX tablespace. Because a LOGFILE clause is not present, two redo log groups are created. Each log
group has two members, with one member in the DB_CREATE_ONLINE_LOG_DEST_1 location and the
other member in the DB_CREATE_ONLINE_LOG_DEST_2 location. The filenames are uniquely generated
by the database. The log files are created with a size of 100 MB. The log file members are Oracle-
managed files.
Similarly, because the CONTROL_FILES initialization parameter is not present, and two
DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are specified, two control files are created.
The control file located in the DB_CREATE_ONLINE_LOG_DEST_1 location is the primary control file; the
control file located in the DB_CREATE_ONLINE_LOG_DEST_2 location is a multiplexed copy. The
filenames are uniquely generated by the database. They are Oracle-managed files. Assuming there is a
server parameter file, a CONTROL_FILES initialization parameter is generated.
Automatic undo management mode is specified, but because an undo tablespace is not specified and the
DB_CREATE_FILE_DEST initialization parameter is set, a default undo tablespace named SYS_UNDOTBS
is created in the directory specified by DB_CREATE_FILE_DEST. The datafile is a 10 MB datafile that is
autoextensible. It is an Oracle-managed file. Lastly, a default temporary tablespace named dflttmp is
specified. Because DB_CREATE_FILE_DEST is included in the parameter file, the tempfile for dflttmp is
created in the directory specified by that parameter. The tempfile is 100 MB and is autoextensible with an
unlimited maximum size. It is an Oracle-managed file. The internally generated filenames can be seen
when selecting from the usual views. For example:
SQL> SELECT NAME FROM V$DATAFILE;
NAME
----------------------------------------------------
/u01/oradata/SAMPLE/datafile/o1_mf_system_cmr7t30p_.dbf
/u01/oradata/SAMPLE/datafile/o1_mf_sysaux_cmr7t88p_.dbf
/u01/oradata/SAMPLE/datafile/o1_mf_sys_undotbs_2ixfh90q_.dbf
3 rows selected.
5. Managing tablespaces
The default storage for all datafiles for future tablespace creations in the sample database is the location
specified by the DB_CREATE_FILE_DEST initialization parameter (/u01/oradata in this scenario). Any
datafiles for which no filename is specified, are created in the file system specified by the initialization
parameter DB_CREATE_FILE_DEST. For example:
SQL> CREATE TABLESPACE tbs_1;
The preceding statement creates a tablespace whose storage is in /u01/oradata. The datafile is created
with an initial size of 100 MB and is autoextensible with an unlimited maximum size. The datafile is an
Oracle-managed file. When the tablespace is dropped, the Oracle-managed files for the tablespace are
automatically removed. The following statement drops the tablespace and all the Oracle-managed files
used for its storage:
SQL> DROP TABLESPACE tbs_1;
Once the first datafile is full, the database does not automatically create a new datafile. More space can
be added to the tablespace by adding another Oracle-managed datafile. The following statement adds
another datafile in the location specified by DB_CREATE_FILE_DEST:
SQL> ALTER TABLESPACE tbs_1 ADD DATAFILE;
The default file system can be changed by changing the initialization parameter. This does not change
any existing datafiles. It only affects future creations. This can be done dynamically using the following
statement:
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u04/oradata';
2. Creating tablespaces
Once DB_CREATE_FILE_DEST is set, the DATAFILE clause can be omitted from a CREATE TABLESPACE
statement. The datafile is created in the location specified by DB_CREATE_FILE_DEST by default. For
example:
SQL> CREATE TABLESPACE tbs_2;
When the tbs_2 tablespace is dropped, its datafiles are automatically deleted.
An E-Commerce Architecture
This figure shows a typical Internet architecture.
· The organization has an Intranet that connects client computers to one or more Database
Servers.
· The client computers also connect to the Internet through an Application Web Server.
Oracle Net
Oracle Net Services is Oracle's solution for providing enterprise wide connectivity in distributed,
heterogeneous computing environments.
· The objective of Oracle Net Services is to make it easy to manage network configurations
while maximizing performance and enabling network diagnostic capabilities when problems arise.
· Connectivity is provided by Oracle Net.
o Oracle Net is a component of Oracle Net Services and is the software that enables a
connection from a client application to an Oracle database server.
o Oracle Net maintains the connection and exchanges messages between client and server
computers.
o Oracle Net software is located on each computer in the network.
o Oracle Net is a layer of software that interfaces with the network protocol, that is, the set
of rules that determine how data is subdivided and transmitted into packets on a
network.
o Oracle Net uses the TCP/IP protocol for connectivity.
Oracle Net includes two components:
· Oracle Net foundation layer establishes and maintains connections.
· Oracle protocol support that maps the foundation layer's technology to industry-standard
protocols.
Oracle supports Java client applications that access an Oracle database with a Java Database
Connectivity (JDBC) Driver. This is a standard Java interface to connect to a relational DBMS. Oracle
offers the following drivers:
· JDBC OCI Driver – used for clients with Oracle client software installed.
· JDBC Thin Driver – used for clients without an Oracle installation that use applets.
Web Client Connections Without an Application Server
Web clients can run programs that access Oracle databases directly without an application server.
· A database can accept HTTP, FTP, or WebDAV protocol connections that can connect to Oracle
XML DB in an Oracle database instance.
The figure shows a client with an HTTP connection that connects through a web server such as Apache.
This figure shows a client using a Web Browser such as Internet Explorer with a JDBC Thin driver that
uses a Java version of Oracle Net called JavaNet to communicate with the Oracle database server that is
configured with Oracle Net.
Location Transparency
Many companies have more than one database, often distributed, supporting different client
applications.
Each database is represented in Oracle Net by one or more services.
· Service – identified by a service name.
· Client computers use the service name to identify the database to be accessed.
· The information about the database service and its location in the network is transparent to
the client because the information needed for a connection is stored in a repository.
Oracle Net and Oracle software are scalable, meaning that an organization can maximize the use of
system resources. One way this is done is through a shared server architecture that allows many client
computers to connect to a server.
The shared server approach:
· Client computers communicate their requests for data by routing requests through one or
more dispatcher processes.
· The dispatcher process(es) will queue client requests in a common queue.
· When a server process becomes idle, it will select the next client to serve in the queue.
· Server processes are pooled, and a small pool of server processes can serve a large number
of client computers.
· Client computers are configured with protocol addresses that enable them to send connection
requests to a listener.
· After a connection is established, the client computer and Oracle Database Server
communicate directly.
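The queueing behavior described above can be sketched in a short Python toy. This is purely illustrative, not Oracle internals: dispatchers place client requests on a common queue, and a small pool of "server processes" (threads here) each serve whichever client is next whenever they become idle.

```python
import queue
import threading

common_queue = queue.Queue()
results = {}

def dispatcher(client_requests):
    """Route each client request onto the common queue."""
    for req in client_requests:
        common_queue.put(req)

def shared_server(worker_id):
    """An idle server process selects the next queued client to serve."""
    while True:
        try:
            client, sql = common_queue.get(timeout=0.5)
        except queue.Empty:
            return                     # no more work: the pool drains and exits
        results[client] = f"served {sql!r} by server {worker_id}"
        common_queue.task_done()

# Many clients, few pooled servers: 6 requests handled by a pool of 2.
dispatcher([(f"client{i}", f"SELECT {i}") for i in range(6)])
pool = [threading.Thread(target=shared_server, args=(w,)) for w in range(2)]
for t in pool:
    t.start()
for t in pool:
    t.join()
print(len(results))                    # -> 6
```

The point of the sketch is that the number of server workers is decoupled from the number of clients, which is exactly the resource saving the shared server architecture provides.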
Database Service and Database Instance Identification
To a client computer, an Oracle database is a service that runs on a server (on a Windows server, you
can see these services quite easily through the Control Panel).
· A database can have more than one service associated with it although one is typical.
· For example, one service might be dedicated to system users accessing financial data while
another one is dedicated to system users accessing warehouse data.
· Using more than one service can enable a DBA to allocate system resources.
Service Name:
· The SERVICE_NAMES init.ora parameter specifies the service name in the database’s initialization
parameter file.
· The service name defaults to the global database name when it is not specified – this is a
name that comprises the database name from the DB_NAME parameter and the domain name
from the DB_DOMAIN parameter.
· The SERVICE_NAMES parameter in the initialization parameter file (init.ora) can specify
more than one service entry, as shown below.
o This also enables a DBA to limit resource allocations for clients requesting a service.
· This enables a pool of multi-threaded server dispatchers to serve clients requesting
sobora1.siue.edu, for example, while a different dispatcher or pool of dispatchers
could be configured to service sobora2.siue.edu.
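The default-service-name rule above can be illustrated with a short sketch. The parameter values are hypothetical, and this is not Oracle's actual resolution code; it only mimics the documented fallback from SERVICE_NAMES to DB_NAME plus DB_DOMAIN.

```python
def default_service_name(params):
    """Mimic the documented rule: SERVICE_NAMES wins if set, otherwise
    the global database name DB_NAME.DB_DOMAIN is used."""
    explicit = params.get("SERVICE_NAMES")
    if explicit:
        # SERVICE_NAMES may list several services, comma-separated.
        return [s.strip() for s in explicit.split(",")]
    return [params["DB_NAME"] + "." + params["DB_DOMAIN"]]

print(default_service_name({"DB_NAME": "DBORCL", "DB_DOMAIN": "siue.edu"}))
# -> ['DBORCL.siue.edu']
print(default_service_name({"DB_NAME": "DBORCL", "DB_DOMAIN": "siue.edu",
                            "SERVICE_NAMES": "sobora1.siue.edu, sobora2.siue.edu"}))
# -> ['sobora1.siue.edu', 'sobora2.siue.edu']
```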
Instance Name:
412 ORACLE DATABASE ADMINISTRATION
· Each database instance is identified by an instance name.
· INSTANCE_NAME parameter in the initialization parameter file specifies the instance name.
· This figure shows two database servers, each connected to a single database that is opened as
two separate instances, each with a unique parameter file called an instance parameter file
(ifile).
Accessing a Service
· The connect description describes the database location and database service name.
· It includes the PORT= specification. The standard listener port for Oracle software is 1521;
other ports can be used as long as no other service is using the port on the server. An
alternative port, such as 1523, could be assigned if port 1521 were already in use for another
service on the host.
· The listener process for a database instance knows the services for which it can handle
connection requests, because an Oracle database dynamically registers this information with the
listener.
· Service registration provides a listener process with information about the database instances
and the service handlers available for each instance.
INSTANCE_NAME parameter:
· Can be added to the connect descriptor to listen for a specific instance of a database where
multiple instances may be in use.
DBORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = DBORCL)
(INSTANCE_NAME = DBORCL_repository)
)
)
SERVER= parameter – another approach is to specify a particular service handler as part of the connect
descriptor.
DBORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=DBORCL)
(SERVER=shared)
)
)
This figure shows more detail with a Listener and a Dispatcher for a Shared Server Process.
· The Listener hands the connection request to the Dispatcher, which handles all further
communication with the client.
This figure shows more detail with a Listener for a Dedicated Server Process.
· The Listener passes a connection request to a dedicated server process -- first it starts the
process. The steps are:
1. The listener receives a client connection request.
2. The listener starts a dedicated server process.
3. The listener provides the location of the dedicated server process to the client in a redirect
message.
4. The client connects directly to the dedicated server.
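The four steps can be simulated with plain sockets. This is a toy sketch only; real Oracle Net messages are far richer, and the "listener" and "dedicated server" here are ordinary threads standing in for processes.

```python
import socket
import threading

def dedicated_server(ready):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))         # OS-assigned port for the new "process"
    srv.listen(1)
    ready["addr"] = srv.getsockname()
    ready["event"].set()
    conn, _ = srv.accept()             # step 4: client connects directly
    conn.sendall(b"connected to dedicated server")
    conn.close()
    srv.close()

def listener(info):
    lsn = socket.socket()
    lsn.bind(("127.0.0.1", 0))
    lsn.listen(1)
    info["addr"] = lsn.getsockname()
    info["event"].set()
    conn, _ = lsn.accept()             # step 1: receive client connection request
    conn.recv(1024)
    ready = {"event": threading.Event()}
    threading.Thread(target=dedicated_server, args=(ready,)).start()  # step 2
    ready["event"].wait()
    host, port = ready["addr"]
    conn.sendall(f"REDIRECT {host}:{port}".encode())  # step 3: redirect message
    conn.close()
    lsn.close()

info = {"event": threading.Event()}
threading.Thread(target=listener, args=(info,)).start()
info["event"].wait()

c = socket.socket()
c.connect(info["addr"])
c.sendall(b"CONNECT SERVICE_NAME=DBORCL")
host, port = c.recv(1024).decode().split()[1].rsplit(":", 1)
c.close()
c2 = socket.socket()
c2.connect((host, int(port)))          # step 4: bypass the listener entirely
reply = c2.recv(1024).decode()
c2.close()
print(reply)
```

Note how, after the redirect, the listener is out of the picture: the client and the dedicated server talk directly, which matches the statement above that client and server communicate directly after the connection is established.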
CONNECT dbock/password@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)
(HOST=sobora2.siue.edu)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=DBORCL)))
· Example: This example uses a simple net service name of DBORCL as the connect
identifier.
o The net service name is mapped to the proper connect descriptor by using a repository
of connection information that is accessed through one of Oracle’s naming methods.
CONNECT dbock/password@dborcl
Oracle Net supports the following naming methods:
· Local Naming.
o With this approach a local configuration file named tnsnames.ora is stored on each client
computer.
o Net service names are stored in the tnsnames.ora file as was described above.
o The file can be configured for individual client machines and client needs. This is the
approach taken at SIUE.
o Local naming is most appropriate for simple distributed networks with a small number of
services that change infrequently.
· Directory Naming.
o This approach was described earlier in these notes.
o Service addresses and net service names are stored in a Lightweight Directory Access
Protocol (LDAP)-compliant directory server.
· Easy Connect Naming.
o Clients connect to a database without any configuration.
o Clients use a connect string for a simple TCP/IP address that consists of a host name and
optional port and service name.
o Example: CONNECT username/password@host[:port][/service_name]
o Recommended for simple TCP/IP environments.
· External Naming.
o A third-party naming service already configured for your environment is used.
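The Easy Connect string above (username/password@host[:port][/service_name]) is simple enough to pull apart with a short sketch. This is illustrative parsing, not Oracle's actual parser, and it assumes the default listener port 1521 when none is given.

```python
def parse_easy_connect(connect_string):
    """Split username/password@host[:port][/service_name] into its parts."""
    credentials, _, address = connect_string.partition("@")
    user, _, password = credentials.partition("/")
    service = None
    if "/" in address:
        address, service = address.split("/", 1)   # optional service name
    host, _, port = address.partition(":")          # optional port
    return {"user": user, "password": password, "host": host,
            "port": int(port) if port else 1521,    # default listener port
            "service_name": service}

print(parse_easy_connect("dbock/secret@sobora2.siue.edu:1521/DBORCL"))
print(parse_easy_connect("dbock/secret@sobora2.siue.edu"))  # port/service omitted
```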
After a naming method is configured, client computers connect to a database using the naming method
in three steps:
1. The client contacts a naming method –
o This step converts the connect identifier to a connect descriptor.
o With local naming for a Windows computer, this is accomplished by storing
the tnsnames.ora file in the $Oracle_Home/Network/Admin directory specified for
the client machine when the Oracle software was initially loaded onto the machine.
2. Based on the identified connect descriptor, the client forwards a request to the listener
address given in the connect descriptor.
3. The listener accepts the client connection (usually over the TCP/IP protocol). If the
client information received in the connect descriptor matches client information in the
database and in its listener configuration file (named listener.ora), a connection is made;
otherwise, an error message is returned.
Configuring the Local Naming Method
Client Configuration
Local Naming configuration requires storing a tnsnames.ora file on each client computer.
· The local naming method adds net service names to the tnsnames.ora file.
· Each net service name maps to a connect descriptor.
· The tnsnames.ora file specifies connect descriptors for one or more databases.
· Examine the tnsnames.ora file on a computer in the computer
classroom/laboratory, located in $Oracle_Home/Network/Admin.
· Example from the tnsnames.ora file on a client computer in our laboratory:
DBORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu )(PORT = 1521))
)
(CONNECT_DATA =
(SID = DBORCL)
)
)
Oracle Net Configuration Assistant – Oracle software that runs automatically during installation of the
Oracle RDBMS.
· Provides a “wizard” interface that prompts for information needed to build
a tnsnames.ora file automatically.
· If you select Custom Installation as an option when configuring your network connection, you
can select the naming method to use.
· If you select Directory Naming or any other method other than Local Naming, the naming
method has to already be set up.
You can also configure the tnsnames.ora file manually by adding service names to the file by using a
simple text editor like Notepad.
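Since a tnsnames.ora entry is just nested (KEY = value) pairs, a minimal parser sketch can show how a net service name maps to its connect descriptor fields. This is an illustrative toy, not Oracle Net's parser, and it handles only the simple form shown above.

```python
import re

def parse_tns(text):
    """Parse one NAME = (KEY = value)... tnsnames.ora entry into nested pairs."""
    tokens = re.findall(r"\(|\)|=|[^\s()=]+", text)
    pos = 0
    def parse_value():
        nonlocal pos
        if tokens[pos] == "(":
            pairs = []
            while pos < len(tokens) and tokens[pos] == "(":
                pos += 1                         # consume "("
                key = tokens[pos]; pos += 1
                assert tokens[pos] == "="; pos += 1
                pairs.append((key.upper(), parse_value()))
                assert tokens[pos] == ")"; pos += 1
            return pairs
        val = tokens[pos]; pos += 1              # a bare value like TCP or 1521
        return val
    name = tokens[pos]; pos += 1
    assert tokens[pos] == "="; pos += 1
    return {name: parse_value()}

def find(pairs, key):
    for k, v in pairs:
        if k == key:
            return v

entry = """
DBORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = DBORCL))
  )
"""
tree = parse_tns(entry)
desc = find(tree["DBORCL"], "DESCRIPTION")
addr = find(find(desc, "ADDRESS_LIST"), "ADDRESS")
print(find(addr, "HOST"), find(addr, "PORT"))   # -> sobora2.isg.siue.edu 1521
```

This mirrors step 1 of the connection sequence: the client turns a net service name (DBORCL) into the host, port, and service information it needs to contact the listener.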
Listener Configuration on the Server
Here is the sample code stored in the listener.ora file on the SIUE sobora2 server.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu )(PORT = 1521))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = DBORCL.siue.edu)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = DBORCL )
)
)
###########################################
# Listener alias
###########################################
local_listener = "LISTENER_DBACLASS"
dbock/@sobora2.isg.siue.edu=>lsnrctl
LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 22-JUL-2009 11:12:36
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL>
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sobora2.isg.siue.edu)
(PORT=1521)))
Services Summary...
Service "DBORCL.siue.edu" has 2 instance(s).
Instance "DBORCL", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:25 refused:3
LOCAL SERVER
Instance "DBORCL", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3281 refused:0 state:ready
LOCAL SERVER
Service "DBORCLXDB.siue.edu" has 1 instance(s).
Instance "DBORCL", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:972 state:ready
DISPATCHER <machine: sobora2.isg.siue.edu, pid: 15972>
(ADDRESS=(PROTOCOL=tcp)(HOST=sobora2.isg.siue.edu)(PORT=11615))
Service "DBORCL_XPT.siue.edu" has 1 instance(s).
Instance "DBORCL", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3281 refused:0 state:ready
LOCAL SERVER
Service "USER305.siue.edu" has 1 instance(s).
Instance "USER305", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "USER305_XPT.siue.edu" has 1 instance(s).
Instance "USER305", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "USER350.siue.edu" has 1 instance(s).
Instance "USER350", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3661 refused:0 state:ready
LOCAL SERVER
Service "USER350_XPT.siue.edu" has 1 instance(s).
Instance "USER350", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3661 refused:0 state:ready
LOCAL SERVER
The command completed successfully
LSNRCTL>
The DBA can assign different names to listener processes. This is done in the listener.ora file. The
default name of a listener is LISTENER and is configured to listen on the following default protocol
addresses:
· TCP/IP protocol - port 1521.
(address=(protocol=tcp)(host=host_name)(port=1521))
· IPC protocol.
(address=(protocol=ipc)(key=PNPKEY))
When a listener service is contacted by a client, one of these actions is performed, as shown in this
figure.
If the database service is running a dispatcher service, then the listener hands the request to
the dispatcher – the process that manages the connection of many clients to the same server in a
multi-threaded server environment.
If a dispatcher is not in use, the listener can spawn a dedicated server process or allocate a pre-
spawned dedicated server process and pass the client connection to this dedicated server process
(one server per client as we have discussed in earlier lectures).
Either way, a redirect message is sent back to the client informing the client of the location of the
dispatcher or dedicated server process.
If a user or application requests disconnection from a server, the server disconnects when all
transactions are complete. If this server is connected to a second server in order to support the
user/application, then these additional connections are also disconnected.
Additional Connection Request. When an application is connected to a server and attempts to access
another user account (same or different server), the application is usually disconnected from the current
connection.
Abnormal Connection Termination. If communications are aborted without Oracle Net being notified,
Oracle Net will recognize the failure and eventually clean up the client/server operations (during the next
data operation) and disconnect the operation.
Timer Initiated Disconnect or Dead Connection Detection. This feature is enabled to minimize
wasted resources by invalid connections. Uncommitted transactions are automatically rolled back and
locks are released for the broken connection. Oracle detects dead connections by periodically sending a
small probe packet to each client at a user-defined interval (several minutes is typical) and initiates the
disconnection through the allocated Server process if the connection is invalid.
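The probe-and-disconnect idea can be sketched as a counter of missed probes. In real deployments this feature is configured with the SQLNET.EXPIRE_TIME parameter; the class and threshold below are simulated stand-ins, and the lock-release step is only a comment.

```python
class Connection:
    """Simulated client connection; `alive` models whether probes are answered."""
    def __init__(self, name, alive=True):
        self.name, self.alive, self.missed = name, alive, 0

def probe_cycle(connections, max_missed=2):
    """One timer tick: send a small probe to each client; flag dead ones."""
    dead = []
    for conn in connections:
        if conn.alive:
            conn.missed = 0            # probe acknowledged
        else:
            conn.missed += 1           # probe packet went unanswered
            if conn.missed >= max_missed:
                # Here the server process would roll back uncommitted
                # transactions and release the broken connection's locks.
                dead.append(conn.name)
    return dead

conns = [Connection("dbock"), Connection("ghost", alive=False)]
print(probe_cycle(conns))              # first missed probe: not yet declared dead
print(probe_cycle(conns))              # second miss crosses the threshold
```

Requiring several missed probes before declaring a connection dead is what keeps a brief network hiccup from tearing down a healthy session.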
Refer to the Net Services Administrator's Guide for additional information on configuring other naming
methods, pre-spawned dedicated servers, and handling large connection volumes.
First method: manually create the listener.ora file and start the listener from the command line.
LISTENER_TEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST =orcl.localdomain )(PORT = 1522))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
)
)
ADR_BASE_LISTENER_ORACLEDB1 = /u01/app/oracle
[oracle@orcl admin]$ lsnrctl start LISTENER_TEST
[oracle@orcl admin]$ lsnrctl stop LISTENER_TEST
[oracle@orcl admin]$ lsnrctl status LISTENER_TEST
Second method: start the Oracle Net Configuration Assistant (NETCA).
Here you can choose Add (to add a new listener), Reconfigure (an existing listener), Delete (an
existing listener), or Rename (an existing listener).
Choose Add and click Next.
The important thing is that you must select TCP here. Click Next.
The existing listener uses port 1521; use a different port for each additional listener, for example 1522.
Click Finish.
As you can see, there are two listener entries in the script: my first and second listeners.
The listener listens for incoming network requests from clients and forwards them to the Oracle instance.
PMON registers the instance with the listener. It generally takes about a minute for PMON to register.
Until PMON has registered with the listener, users cannot connect to the database remotely and will get
‘ORA-12514: TNS: listener does not currently know of service requested in connect descriptor’.
We can use ALTER SYSTEM REGISTER to force PMON to register with the listener.
Default Listener
By default, we do not need to configure which listener the Oracle database registers with. It registers
with the default listener, ‘LISTENER’.
Listener command
lsnrctl start
lsnrctl stop
lsnrctl status
lsnrctl services
If there is no listener, users cannot connect remotely and will get “ORA-12541: TNS: no listener”.
The section below is optional, but if you do set LOCAL_LISTENER in the parameter file,
tnsnames.ora must have the corresponding entry. If there is no SID_LIST defined in
listener.ora, then LOCAL_LISTENER and tnsnames.ora must both be configured; otherwise the SID does
not know which listener to register with.
tnsnames.ora
Listener configuration file: $TNS_ADMIN/listener.ora
One listener can be shared by multiple database instances (SIDs). We can also create a dedicated
listener for each SID. The screenshot below shows the SID orcl registered with LISTENER_A, while SID
PODB uses LISTENER_PODB.
When we specifically tell the listener about an instance in the SID_LIST section, the listener just assumes
it is there and creates a listening point for it. It does not check the status, so the status shows as
UNKNOWN. This does not affect the database connection.
DATABASE LINKS
The central concept in a distributed database system is the database link. A dblink allows (client) users
to access data on a remote database. It can be a connection from one database to another on the same
host, or a connection between two physical database servers (i.e., from one Oracle database server to
another).
POINTS TO NOTE:
When many users require an access path to a remote Oracle database, Oracle recommends creating a
PUBLIC database link for all users.
When Oracle uses a directory server, an administrator can easily manage global database links for all
databases (DB LINK is centralized).
The SAMP table exists in the ‘orcltest’ database. I want to access the SAMP table from the ‘orclprod’
database using a dblink, so I create a dblink in ORCLPROD pointing to ORCLTEST.
In the orclprod database:
user1 (which exists in the ORCLPROD database) tries to access the SAMP table in the ORCLTEST
database using the dblink.
SQL> create database link testlink connect to scott identified by tiger using 'orcltest';
Database link created.
SQL> select * from scott.samp@testlink;
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
A database link is a pointer that defines a one-way communication path from an Oracle Database server
to another database server. The link pointer is actually defined as an entry in a data dictionary table. To
access the link, you must be connected to the local database that contains the data dictionary entry.
A database link connection is one-way in the sense that a client connected to local database A can use a
link stored in database A to access information in remote database B, but users connected to database B
cannot use the same link to access data in database A. If local users on database B want to access data
on database A, then they must define a link that is stored in the data dictionary of database B.
A database link connection allows local users to access data on a remote database. For this connection to
occur, each database in the distributed system must have a unique global database name in the
network domain. The global database name uniquely identifies a database server in a distributed system.
The figure below shows an example of user scott accessing the emp table on the remote database with
the global name hq.acme.com.
Database links are either private or public. If they are private, then only the user who created the link
has access; if they are public, then all database users have access.
One principal difference among database links is the way that connections to a remote database occur.
Users access a remote database through the following types of links:
Create database links using the CREATE DATABASE LINK statement. After a link is created, you can
use it to specify schema objects in SQL statements.
A shared database link is a link between a local server process and the remote database. The link is
shared because multiple client processes can use the same link simultaneously.
When a local database is connected to a remote database through a database link, either database can
run in dedicated or shared server mode. The following table illustrates the possibilities:
Different users accessing the same schema object through a database link can share a network
connection.
When a user needs to establish a connection to a remote server from a particular server process,
the process can reuse connections already established to the remote server. The reuse of the
connection can occur if the connection was established on the same server process with the
same database link, possibly in a different session. In a non-shared database link, a connection
is not shared across multiple sessions.
When you use a shared database link in a shared server configuration, a network connection is
established directly out of the shared server process in the local server. For a non-shared
database link on a local shared server, this connection would have been established through the
local dispatcher, requiring context switches for the local dispatcher, and requiring data to go
through the dispatcher.
The great advantage of database links is that they allow users to access another user's objects in a
remote database so that they are bounded by the privilege set of the object owner. In other words, a
local user can access a link to a remote database without having to be a user on the remote database.
For example, assume that employees submit expense reports to Accounts Payable (A/P), and further
suppose that a user using an A/P application needs to retrieve information about employees from
the hq database. The A/P users should be able to connect to the hq database and execute a stored
procedure in the remote hq database that retrieves the desired information. The A/P users should not
need to be hq database users to do their jobs; they should only be able to access hq information in a
controlled way as limited by the procedure.
To understand how a database link works, you must first understand what a global database name is.
Each database in a distributed database is uniquely identified by its global database name. The database
forms a global database name by prefixing the database network domain, specified by the
DB_DOMAIN initialization parameter at database creation, with the individual database name, specified
by the DB_NAME initialization parameter.
The name of a database is formed by starting at the leaf of the tree and following a path to the root. For
example, the mfg database is in division3 of the acme_tools branch of the com domain. The global
database name for mfg is created by concatenating the nodes in the tree as follows:
mfg.division3.acme_tools.com
While several databases can share an individual name, each database must have a unique global
database name. For example, the network
domains us.americas.acme_auto.com and uk.europe.acme_auto.com each contain
a sales database. The global database naming system distinguishes the sales database in
the americas division from the sales database in the europe division as follows:
sales.us.americas.acme_auto.com
sales.uk.europe.acme_auto.com
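Forming a global database name by walking from the leaf to the root of the network domain tree can be shown in a one-line helper. This is a sketch only; the names are taken from the examples above.

```python
def global_db_name(db_name, domain_path):
    """Concatenate the individual database name with its domain nodes,
    ordered from the database up to the root of the tree."""
    return ".".join([db_name] + domain_path)

print(global_db_name("mfg", ["division3", "acme_tools", "com"]))
# -> mfg.division3.acme_tools.com

# Two databases may share the individual name "sales" yet remain unique:
print(global_db_name("sales", ["us", "americas", "acme_auto", "com"]))
print(global_db_name("sales", ["uk", "europe", "acme_auto", "com"]))
```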
Typically, a database link has the same name as the global database name of the remote database that it
references. For example, if the global database name of a database is sales.us.oracle.com, then the
database link is also called sales.us.oracle.com.
When you set the initialization parameter GLOBAL_NAMES to TRUE, the database ensures that the
name of the database link is the same as the global database name of the remote database. For
example, if the global database name for hq is hq.acme.com, and GLOBAL_NAMES is TRUE, then the
link name must be called hq.acme.com. Note that the database checks the domain part of the global
database name as stored in the data dictionary, not the DB_DOMAIN setting in the initialization
parameter file.
If you set the initialization parameter GLOBAL_NAMES to FALSE, then you are not required to use
global naming. You can then name the database link whatever you want. For example, you can name a
database link to hq.acme.com as foo.
After you have enabled global naming, database links are essentially transparent to users of a distributed
database because the name of a database link is the same as the global name of the database to which
the link points. For example, the following statement creates a database link in the local database to
remote database sales:
Oracle Database lets you create private, public, and global database links. These basic link types differ
according to which users are allowed access to the remote database:
· Private database link – owned by the user who created the link; view ownership data through the
USER_DB_LINKS and ALL_DB_LINKS data dictionary views. Creates a link in a specific schema of the
local database. Only the owner, or PL/SQL subprograms in the same schema, can use the link to
access database objects in the corresponding remote database.
· Public database link – owned by a special user called PUBLIC; view ownership data through the
views shown for private database links. Creates a database-wide link. All users and PL/SQL
subprograms in the database can use the link to access database objects in the corresponding
remote database.
· Global database link – owned by the user called PUBLIC; view ownership data through the views
shown for private database links. Creates a network-wide link. When an Oracle network uses a
directory server, the directory server automatically creates and manages global database links (as
net service names) for every Oracle Database in the network. Users and PL/SQL subprograms in any
database can use a global link to access objects in the corresponding remote database.
Determining the type of database links to employ in a distributed database depends on the specific
requirements of the applications using the system. Consider these features when making your choice:
· Private database link – This link is more secure than a public or global link, because only the owner
of the private link, or subprograms within the same schema, can use the link to access the remote
database.
· Public database link – When many users require an access path to a remote Oracle Database, you
can create a single public database link for all users in a database.
· Global database link – When an Oracle network uses a directory server, an administrator can
conveniently manage global database links for all databases in the system. Database link
management is centralized and simple.
When creating the link, you determine which user should connect to the remote database to access the
data. The following table explains the differences among the categories of users involved in database
links:
· Fixed user – a user whose username/password is part of the link definition. If a link includes a
fixed user, the fixed user's username and password are used to connect to the remote database.
Example:
CREATE PUBLIC DATABASE LINK hq
CONNECT TO jane IDENTIFIED BY doe
USING 'hq';
Note:
The REMOTE_OS_AUTHENT initialization parameter is deprecated. It is retained for backward
compatibility only.
Create database links using the CREATE DATABASE LINK statement. The table gives examples of SQL
statements that create database links in a local database to the
remote sales.us.americas.acme_auto.com database:
After you have created a database link, you can execute SQL statements that access objects on the
remote database. For example, to access remote object emp using database link foo, you can issue:
SELECT * FROM emp@foo;
You must also be authorized in the remote database to access specific remote objects.
Constructing properly formed object names using database links is an essential aspect of data
manipulation in distributed systems.
Oracle Database uses the global database name to name the schema objects globally using the following
scheme:
schema.schema_object@global_database_name
where:
schema_object is a logical data structure like a table, index, view, synonym, procedure,
package, or a database link.
global_database_name is the name that uniquely identifies a remote database. This name
must be the same as the concatenation of the remote database initialization
parameters DB_NAME and DB_DOMAIN, unless the parameter GLOBAL_NAMES is set
to FALSE, in which case any name is acceptable.
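The naming scheme and the GLOBAL_NAMES check can be sketched together. These are illustrative helper functions, not Oracle's actual validation code.

```python
def remote_object_name(schema, obj, link_name):
    """Form schema.schema_object@global_database_name (or @link_name)."""
    return f"{schema}.{obj}@{link_name}"

def validate_link_name(link_name, remote_global_name, global_names=True):
    """When GLOBAL_NAMES is TRUE, the link name must equal the remote
    database's global database name; when FALSE, any name is acceptable."""
    if global_names and link_name.upper() != remote_global_name.upper():
        raise ValueError("link name must match remote global database name")
    return link_name

link = validate_link_name("sales.division3.acme.com", "sales.division3.acme.com")
print(remote_object_name("scott", "emp", link))
# -> scott.emp@sales.division3.acme.com

# With GLOBAL_NAMES = FALSE, any link name is acceptable, e.g. "foo":
link = validate_link_name("foo", "sales.division3.acme.com", global_names=False)
print(remote_object_name("scott", "emp", link))
# -> scott.emp@foo
```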
For example, using a database link to database sales.division3.acme.com, a user or application can
reference remote data as follows:
SELECT * FROM scott.emp@sales.division3.acme.com;
If GLOBAL_NAMES is set to FALSE, then you can use any name for the link
to sales.division3.acme.com. For example, you can call the link foo. Then, you can access the remote
database as follows:
SELECT name FROM scott.emp@foo; # link name different from global name
To access a remote schema object, you must be granted access to the remote object in the remote
database. Further, to perform any updates, inserts, or deletes on the remote object, you must be granted
the SELECT privilege on the object, along with the UPDATE, INSERT, or DELETE privilege. Unlike when
accessing a local object, the SELECT privilege is necessary for accessing a remote object because the
database has no remote describe capability. The database must do a SELECT * on the remote object in
order to determine its structure.
Oracle Database lets you create synonyms so that you can hide the database link name from the user. A
synonym allows access to a table on a remote database using the same syntax that you would use to
access a table on a local database. For example, assume you issue the following query against a table in
a remote database:
SELECT * FROM emp@hq.acme.com;
You can create the synonym emp for emp@hq.acme.com so that you can issue the following query
instead to access the same data:
SELECT * FROM emp;
To resolve application references to schema objects (a process called name resolution), the database
forms object names hierarchically. For example, the database guarantees that each schema within a
database has a unique name, and that within a schema each object has a unique name. As a result, a
schema object name is always unique within the database. Furthermore, the database resolves
application references to the local name of the object.
In a distributed database, a schema object such as a table is accessible to all applications in the
system. The database extends the hierarchical naming model with global database names to effectively
create global object names and resolve references to the schema objects in a distributed database
system. For example, a query can reference a remote table by specifying its fully qualified name,
including the database in which it resides.
For example, assume that you connect to the local database as user SYSTEM:
CONNECT SYSTEM@sales1
You then issue the following statements using database link hq.acme.com to access objects in
the scott and jane schemas on remote database hq:
You cannot execute DESCRIBE operations on some remote objects. The following remote objects,
however, do support DESCRIBE operations:
Tables
Views
Procedures
Functions
You cannot obtain nondefault roles on a remote database. For example, if jane connects to the local
database and executes a stored procedure that uses a fixed user link connecting as scott, jane
receives scott's default roles on the remote database. Jane cannot issue SET ROLE to obtain a
nondefault role.
You cannot use a current user link without authentication through SSL, password, or Windows NT
native authentication.
Materialized Views
Materialized views in Oracle
Oracle materialized views were first introduced in Oracle8.
Materialized views are schema objects that can be used to summarize, precompute, replicate and
distribute data.
In a materialized view (mview), the query result is cached as a concrete table that may be updated from
the original base tables from time to time. This enables much more efficient access, at the cost of some
data being potentially out-of-date. It is most useful in datawarehousing scenarios, where frequent
queries against the actual base tables would be extremely expensive.
Oracle uses materialized views (also known as snapshots in prior releases) to replicate data to non-
master sites in a replication environment and to cache expensive queries in a datawarehouse
environment. A materialized view is a database object that contains the results of a query. They are local
copies of data located remotely, or are used to create summary tables based on aggregations of a table's
data.
A materialized view is a replica of a target master from a single point in time. We can define a
materialized view on a base/master table (at a master site), partitioned table, view, synonym or a
master materialized view (at a materialized view site). Whereas in multi-master replication tables are
continuously updated by other master sites, materialized views are updated from one or more masters
through individual batch updates, known as refreshes, from a single master site or master materialized
view site.
A materialized view provides indirect access to table data by storing the results of a query in a separate
schema object. Unlike an ordinary view, which does not take up any storage space or contain any data
but stores only its defining query, a materialized view stores the query's result data. The existence of a
materialized view is transparent to SQL, but when used for query rewrite it will improve the performance
of SQL execution. An updatable materialized view lets you insert, update, and delete rows.
A materialized view can be stored in the same database as its base table(s) or in a different database.
Materialized views stored in the same database as their base tables can improve query performance
through query rewrites. Query rewrites are particularly useful in a data warehouse environment. A
materialized view can query tables, views, and other materialized views. Collectively these are called
master tables (a replication term) or detail tables (a data warehouse term).
For replication purposes, materialized views allow us to maintain copies of remote data on the local node.
These copies are read-only. If we want to update the local copies, we have to use the Advanced
Replication feature. We can select data from a materialized view as we would from a table or view.
For data warehousing purposes, the materialized views commonly created are aggregate views, single-table
aggregate views, and join views. In replication environments, the materialized views commonly created are
primary key, rowid, and subquery materialized views.
Whenever you create a materialized view, regardless of its type, always specify the schema name of the
table owner in the query for the materialized view.
Prerequisites:
To create materialized views, the user should have either the
CREATE MATERIALIZED VIEW or the CREATE ANY MATERIALIZED VIEW privilege,
and
SQL> GRANT QUERY REWRITE TO user-name;
The background processes responsible for these materialized view refreshes are the coordinated job
queue (CJQ) processes, so the job_queue_processes initialization parameter must be set to a nonzero value:
job_queue_processes=n
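The parameter can also be set dynamically; a minimal sketch (the value 10 is illustrative):

```sql
-- Enable 10 job queue processes so scheduled refreshes can run
ALTER SYSTEM SET job_queue_processes = 10 SCOPE = BOTH;
```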
Complete Refresh
To perform COMPLETE refresh of a materialized view, the server that manages the materialized view
executes the materialized view's defining query, which essentially recreates the materialized view. To
refresh the materialized view, the result set of the query replaces the existing materialized view data.
Oracle can perform a complete refresh for any materialized view. Depending on the amount of data that
satisfies the defining query, a complete refresh can take a substantially longer amount of time to perform
than a fast refresh.
Note: If a materialized view is complete refreshed, then set its PCTFREE to 0 and PCTUSED to 99 for
maximum efficiency.
The complete refresh re-creates the entire materialized view. If we request a complete refresh, Oracle
performs a complete refresh even if a fast refresh is possible.
From Oracle 10g, a complete refresh of a single materialized view does a delete instead of a truncate. To force
the refresh to do a truncate instead of a delete, the parameter ATOMIC_REFRESH must be set to false.
With ATOMIC_REFRESH = FALSE, the materialized view is truncated and the whole data set is inserted. The refresh goes
faster, and no undo is generated.
With ATOMIC_REFRESH = TRUE (the default), the materialized view rows are deleted and the whole data set is inserted. Undo is
generated, but we have access to the view at all times, even while it is being refreshed.
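The two behaviours can be requested through DBMS_MVIEW.REFRESH; a sketch assuming a materialized view named mv_emp exists:

```sql
-- Non-atomic complete refresh: truncate plus insert,
-- faster with little undo, but the view appears empty mid-refresh
EXEC DBMS_MVIEW.REFRESH('mv_emp', method => 'C', atomic_refresh => FALSE);

-- Atomic (default) complete refresh: delete plus insert in one
-- transaction, so readers see consistent data throughout
EXEC DBMS_MVIEW.REFRESH('mv_emp', method => 'C', atomic_refresh => TRUE);
```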
If we perform a complete refresh of a master materialized view, then the next refresh performed on any
materialized view based on this master materialized view must be a complete refresh. If a fast refresh is
attempted for such a materialized view after its master materialized view has performed a complete
refresh, then Oracle returns the following error:
ORA-12034: materialized view log is younger than last refresh
Fast Refresh
To perform FAST refresh, the master that manages the materialized view first identifies the changes that
occurred in the master since the most recent refresh of the materialized view and then applies these
changes to the materialized view. Fast refreshes are more efficient than complete refreshes when there
are few changes to the master because the participating server and network replicate a smaller amount
of data. We can perform fast refreshes of materialized views only when the master table or master
materialized view has a materialized view log. Also, for fast refreshes to be faster than complete
refreshes, each join column in the CREATE MATERIALIZED VIEW statement must have an index on it.
A materialized view log is a schema object that records changes to a master table's data so that a
materialized view defined on the master table can be refreshed incrementally.
We should create a materialized view log for the master tables if we specify the REFRESH FAST clause.
SQL> CREATE MATERIALIZED VIEW LOG ON emp;
To refresh this mview,
SQL> EXEC DBMS_MVIEW.REFRESH('mv_emp', 'F');
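The refresh call above assumes a fast-refreshable materialized view already exists; a minimal sketch (the name mv_emp is illustrative):

```sql
-- Requires the materialized view log on emp created above
CREATE MATERIALIZED VIEW mv_emp
REFRESH FAST
AS SELECT * FROM emp;
```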
After a direct path load on a master table or master materialized view using SQL*Loader, a fast refresh
does not apply the changes that occurred during the direct path load. Also, fast refresh does not apply
changes that result from other types of bulk load operations on masters. Examples of these operations
include some INSERT statements with an APPEND hint and some INSERT ... SELECT * FROM statements.
Note:
->> Fast refreshable materialized views can be created based on master tables and master materialized
views only.
->> Materialized views based on a synonym or a view must be complete refreshed.
->> Materialized views are not eligible for fast refresh if the defined subquery contains an analytic
function.
Force Refresh
To perform FORCE refresh of a materialized view, the server that manages the materialized view
attempts to perform a fast refresh. If fast refresh is not possible, then Oracle performs complete refresh.
Use the force setting when you want a materialized view to refresh if fast refresh is not possible.
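With DBMS_MVIEW.REFRESH, force is the '?' method; a sketch against the mv_emp example:

```sql
-- Try a fast refresh first; fall back to a complete refresh if needed
EXEC DBMS_MVIEW.REFRESH('mv_emp', method => '?');
```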
Partition Change Tracking (PCT) refresh refers to MV refresh using only the changed partitions of the
base tables of an MV. This refresh method is possible only if the base tables are partitioned and changes
to base tables are tracked on a partition basis.
Enhanced Partition Change Tracking (EPCT) Refresh refers to PCT based refresh applied to MVs
containing columns that are partition-join dependent on the partitioning column of the base table.
In the above example, the first copy of the materialized view is made at SYSDATE (immediately) and the
interval at which the refresh has to be performed is every two days.
SQL> CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 2/(24*60)
WITH ROWID
AS SELECT * FROM emp@remote_db;
In this example, the interval is two minutes. For every two minutes, fast refresh will happen.
How to know when was the last refresh happened on materialized views:
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from
dba_mviews;
(or)
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from
dba_mview_analysis;
(or)
SQL> select NAME, to_char(LAST_REFRESH,'YYYY-MM-DD HH24:MI:SS')
from dba_mview_refresh_times;
Materialized views fall into three types:
1. Read only
Cannot be updated; complex materialized views are supported.
2. Updatable
Can be updated even when disconnected from the master site.
Are refreshed on demand.
Consume fewer resources.
Require the Advanced Replication option to be installed.
3. Writeable
Created with the FOR UPDATE clause.
Changes are lost when the view is refreshed.
Requires the Advanced Replication option to be installed.
Note: For read-only, updatable, and writeable materialized views, the defining query of the materialized
view must reference all of the primary key columns in the master.
In addition, using read-only materialized views eliminates the possibility of a materialized view
introducing data conflicts at the master site or master materialized view site, although this convenience
means that updates cannot be made at the remote materialized view site.
Updatable materialized views enable us to decrease the load on master sites because users can make
changes to the data at the materialized view site.
Note:
1. Do not use column aliases when creating an updatable materialized view. Column aliases
cause an error when we attempt to add the materialized view to a materialized view group using the
CREATE_MVIEW_REPOBJECT procedure.
2. An updatable materialized view based on a master table or master materialized view that has
defined column default values does not automatically use the master's default values.
3. Updatable materialized views do not support the DELETE CASCADE constraint.
The following types of materialized views cannot be masters for updatable materialized views:
ROWID materialized views
Complex materialized views
Read-only materialized views
However, these types of materialized views can be masters for read-only materialized views.
Additional restrictions apply to updatable materialized views based on materialized views; such views must:
Belong to a materialized view group that has the same name as the materialized view group at
its master materialized view site.
Reside in a different database than the materialized view group at its master materialized view
site.
Be based on another updatable materialized view or other updatable materialized views, not on a
read-only materialized view.
Be based on a materialized view in a materialized view group that is owned by PUBLIC at the
master materialized view site.
While multimaster replication also distributes a database among multiple sites, the networking
requirements for multimaster replication are greater than those for replicating with materialized views
because of the transaction by transaction nature of multimaster replication. Further, the ability of
multimaster replication to provide real-time or near real-time replication may result in greater network
traffic, and might require a dedicated network link.
Materialized views are updated through an efficient batch process from a single master site or master
materialized view site. They have lower network requirements and dependencies than multimaster
replication because of the point in time nature of materialized view replication. Whereas multimaster
replication requires constant communication over the network, materialized view replication requires only
periodic refreshes.
In addition to not requiring a dedicated network connection, replicating data with materialized views
increases data availability by providing local access to the target data. These benefits, combined with
mass deployment and data subsetting (both of which also reduce network loads), greatly enhance the
performance and reliability of your replicated database.
Note:
Both the master site and the materialized view site must have compatibility level (COMPATIBLE
initialization parameter) 9.0.1 or higher to replicate user-defined types and any objects on which they
are based.
We cannot create refresh-on-commit materialized views based on a master with user-defined
types. Refresh-on-commit materialized views are those created using the ON COMMIT REFRESH clause in
the CREATE MATERIALIZED VIEW statement.
Advanced Replication does not support type inheritance.
Group A at the materialized view site contains only some of the objects in the corresponding Group A at
the master site. Group B at the materialized view site contains all objects in Group B at the master site.
Under no circumstances, however, could Group B at the materialized view site contain objects from
Group A at the master site. A materialized view group has the same name as the master group on which
the materialized view group is based. For example, a materialized view group based on a personnel
master group is also named personnel.
In addition to maintaining organizational consistency between materialized view sites and their master
sites or master materialized view sites, materialized view groups are required for supporting updatable
materialized views. If a materialized view does not belong to a materialized view group, then it must be a
read-only or writeable materialized view.
Refresh Groups
Managing materialized views is much easier in Oracle 10g with the introduction of powerful new tuning advisors
that can tell us a lot about the design of the materialized views. Tuning recommendations can generate a complete
script that can be implemented quickly, saving significant time and effort. The ability to force rewriting or
abort the query can be very helpful in decision-support systems where resources must be conserved, and
where a query that is not rewritten should not be allowed to run amok inside the database.
Related Views
DBA_MVIEWS
DBA_MVIEW_LOGS
DBA_MVIEW_KEYS
DBA_REGISTERED_MVIEWS
DBA_REGISTERED_MVIEW_GROUPS
DBA_MVIEW_REFRESH_TIMES
DBA_MVIEW_ANALYSIS
Related Package/Procedures
DBMS_MVIEW package
REFRESH
REFRESH_ALL
REFRESH_ALL_MVIEWS
REFRESH_DEPENDENT
REGISTER_MVIEW
UNREGISTER_MVIEW
PURGE_LOG
DBMS_REPCAT package
DBMS_REFRESH package
A materialized view log is required on a master if we want to fast refresh materialized views based on the
master. When we create a materialized view log for a master table or master materialized view, Oracle
creates an underlying table as the materialized view log. A materialized view log can hold the primary keys, rowids,
or object identifiers of the rows that have been updated in the master table or master materialized view, or a
combination of these. A materialized view log can also contain other columns to support fast refreshes of materialized views with
subqueries.
The name of a materialized view log's table is MLOG$_master_name. The materialized view log is created
in the same schema as the target master. One materialized view log can support multiple materialized
views on its master table or master materialized view.
When changes are made to the master table or master materialized view using DML, an internal trigger
records information about the affected rows in the materialized view log. This information includes the
values of the primary key, rowid, or object id, or both, as well as the values of the other columns logged
in the materialized view log. This is an internal AFTER ROW trigger that is automatically activated when
we create a materialized view log for the target master table or master materialized view. It inserts a
row into the materialized view log whenever an INSERT, UPDATE, or DELETE statement modifies the
table's data. This trigger is always the last trigger to fire.
SQL> CREATE MATERIALIZED VIEW LOG ON emp WITH SEQUENCE, ROWID INCLUDING NEW VALUES;
A combination materialized view log works in the same manner as a materialized view log that tracks
only one type of value, except that more than one type of value is recorded. For example, a combination
materialized view log can track both the primary key and the rowid of the affected row.
Though the difference between materialized view logs based on primary keys and rowids is small (one
records affected rows using the primary key, while the other records affected rows using the physical
rowid), the practical impact is large. Using rowid materialized views and materialized view logs makes
reorganizing and truncating the master tables difficult because it prevents the ROWID materialized
views from being fast refreshed. If we reorganize or truncate the master table, then the rowid
materialized view must be COMPLETE refreshed because the rowids of the master table have changed.
If there is a conflict between an updatable materialized view and a master, then, during a refresh, the conflict may
result in an entry in the updatable materialized view log that is not in the materialized view log at the
master site or master materialized view site. In this case, Oracle uses the updatable materialized view
log to remove or overwrite the row in the materialized view.
The updatable materialized view log is also used when we fast refresh a writeable materialized view, as
illustrated in the following scenario:
1. A user inserts a row into a writeable materialized view that has a remote master. Because the
materialized view is writeable and not updatable, the transaction is not stored in the deferred transaction
queue at the materialized view site.
2. Oracle logs information about this insert in the updatable materialized view log.
3. The user fast refreshes the materialized view.
4. Oracle uses the information in the updatable materialized view log and deletes the inserted row.
A materialized view must be an exact copy of the master when the fast refresh is complete. Therefore,
Oracle must delete the inserted row.
Primary key materialized views are the default type of materialized views in Oracle. They are updatable if
the materialized view was created as part of a materialized view group and FOR UPDATE was specified
when defining the materialized view. An updatable materialized view must belong to a materialized view
group that has the same name as the replication group at its master site or master materialized view
site. In addition, an updatable materialized view must reside in a different database than the master
replication group.
The following statement creates the primary key materialized view on the table emp located on a remote
database.
SQL> CREATE MATERIALIZED VIEW mv_emp_pk
BUILD DEFERRED
REFRESH FAST
START WITH SYSDATE NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
Changes are propagated according to the row-level changes that have occurred, as identified by the
primary key value of the row (not the ROWID).
The following is an example of a SQL statement for creating an updatable, primary key materialized
view:
SQL> CREATE MATERIALIZED VIEW offshore.customers
FOR UPDATE
AS SELECT * FROM onsite.customers@orcl;
Primary key materialized views allow materialized view master tables to be reorganized without affecting the
eligibility of the materialized view for fast refresh.
Materialized views may contain a subquery so that we can create a subset of rows at the remote
materialized view site. A subquery is a query embedded within the primary query, so that we have more
than one SELECT statement in the CREATE MATERIALIZED VIEW statement. This subquery may be as
simple as a basic WHERE clause or as complex as a multilevel WHERE EXISTS clause. Primary key
materialized views that contain a selected class of subqueries can still be incrementally (or fast)
refreshed, if each master referenced has a materialized view log. A fast refresh uses materialized view
logs to update only the rows that have changed since the last refresh.
The following statement creates a subquery materialized view based on the emp and dept tables located
on the remote database:
SQL> CREATE MATERIALIZED VIEW mv_empdept
DISABLE QUERY REWRITE
AS SELECT * FROM emp@remote_db e
WHERE EXISTS
(SELECT * FROM dept@remote_db d WHERE e.dept_no = d.dept_no);
For backward compatibility, Oracle supports ROWID materialized views in addition to the default primary
key materialized views. A ROWID materialized view is based on the physical row identifiers (rowids) of
the rows in a master. ROWID materialized views should be used only for materialized views based on
master tables from an Oracle7 database, and should not be used from Oracle8 or higher.
The following statement creates the rowid materialized view on table emp located on a remote database:
SQL> CREATE MATERIALIZED VIEW mv_emp_rowid
REFRESH WITH ROWID
ENABLE QUERY REWRITE
AS SELECT * FROM emp@remote_db;
ROWID materialized views should have a single master table and cannot contain any of the following:
DISTINCT or aggregate functions, GROUP BY or CONNECT BY clauses, subqueries, joins, or set operations.
An object materialized view inherits the object identifier (OID) specifications of its master. If the master
has a primary key-based OID, then the OIDs of row objects in the materialized view are primary key-
based. If the master has a system generated OID, then the OIDs of row objects in the materialized view
are system generated. Also, the OID of each row in the object materialized view matches the OID of the
same row in the master, and the OIDs are preserved during refresh of the materialized view.
Consequently, REFs to the rows in the object table remain valid at the materialized view site.
A materialized view is considered complex when the defining query of the materialized view contains:
i) A CONNECT BY clause
ii) A set operation, such as UNION ALL, INTERSECT, or MINUS
iii) In some cases, the DISTINCT or UNIQUE keyword, although it is possible to have the DISTINCT
or UNIQUE keyword in the defining query and still have a simple materialized view
vi) In some cases, a UNION operation. Specifically, a materialized view with a UNION operation is
complex if any one of these conditions is true:
o Any query within the UNION is complex. The previous bullet items specify when a query makes a
materialized view complex.
o The outermost SELECT list columns do not match for the queries in the UNION. In the following
example, the first query only has order_total in the outermost SELECT list while the second query has
customer_id in the outermost SELECT list. Therefore, the materialized view is complex.
SQL> CREATE MATERIALIZED VIEW oe.orders AS
SELECT order_total FROM oe.orders@orcl o
WHERE EXISTS (SELECT cust_first_name, cust_last_name
FROM oe.customers@orcl c
WHERE o.customer_id = c.customer_id AND c.credit_limit > 50)
UNION
SELECT customer_id FROM oe.orders@orcl o
WHERE EXISTS (SELECT cust_first_name, cust_last_name
FROM oe.customers@orcl c
WHERE o.customer_id = c.customer_id AND c.account_mgr_id = 30);
o The innermost SELECT list has no bearing on whether a materialized view is complex. In the previous
example, the innermost SELECT list is cust_first_name and cust_last_name for both queries in the
UNION.
Note: If possible, we should avoid using complex materialized views because they cannot be fast
refreshed, which may degrade network performance.
A refresh group can contain materialized views from more than one materialized view group to maintain
transactional (read) consistency across replication group boundaries.
To preserve referential integrity and transactional (read) consistency among multiple materialized views,
Oracle has the ability to refresh individual materialized views as part of a refresh group. After refreshing
all of the materialized views in a refresh group, the data of all materialized views in the group correspond
to the same transactionally consistent point in time.
While you may want to define a single refresh group for each materialized view group, it may be more
efficient to use one large refresh group that contains objects from multiple materialized view groups.
Such a configuration reduces the amount of overhead needed to refresh your materialized views. A
refresh group can contain up to 400 materialized views.
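A refresh group is built with the DBMS_REFRESH package; a minimal sketch (the group and view names are illustrative):

```sql
-- Create refresh group hr_refg containing two materialized views,
-- refreshed together every hour for transactional consistency
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'hr_refg',
    list      => 'mv_emp, mv_dept',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');
END;
/
```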
One configuration that we want to avoid is using multiple refresh groups to refresh the contents of a
single materialized view group. Doing so may introduce inconsistencies in the materialized view data, which may cause
referential integrity problems at the materialized view site. Only use this type of configuration when we
have in-depth knowledge of the database environment and can prevent any referential integrity
problems.
During the refresh of a refresh group, each materialized view in the group is locked at the materialized
view site for the amount of time required to refresh all of the materialized views in the refresh group.
This locking is required to prevent users from updating the materialized views during the refresh
operation, because updates may make the data inconsistent. Therefore, having smaller refresh groups
means that the materialized views are locked for less time when you perform a refresh.
Network connectivity must be maintained while performing a refresh. If the connectivity is lost or
interrupted during the refresh, then all changes are rolled back so that the database remains consistent.
Therefore, in cases where the network connectivity is difficult to maintain, consider using smaller refresh
groups.
Advanced Replication includes an optimization for null refresh. That is, if there were no changes to the
master tables or master materialized views since the last refresh for a particular materialized view, then
almost no extra time is required for the materialized view during materialized view group refresh.
However, for materialized views in a database prior to release 8.1, consider separating materialized views
of master tables that are not updated often into a separate refresh group of their own. Doing so shortens
the refresh time required for other materialized view groups that contain materialized views of master
tables that are updated frequently.
On-Demand Refresh
Scheduled materialized view refreshes may not always be the appropriate solution for your environment.
For example, immediately following a bulk data load into a master table, dependent materialized views
no longer represent the master table's data. Rather than wait for the next scheduled automatic group
refreshes, you can manually refresh dependent materialized view groups to immediately propagate the
new rows of the master table to associated materialized views.
You may also want to refresh your materialized views on-demand when your materialized views are
integrated with a sales force automation system located on a disconnected laptop.
The following example illustrates an on-demand refresh of the hr_refg refresh group:
SQL> EXECUTE DBMS_REFRESH.REFRESH('hr_refg');
ADR
Automatic Diagnostic Repository (ADR)
In an effort to make trouble resolution easier for the DBA, Oracle 11g introduced the Fault Diagnosability
Infrastructure. The Fault Diagnosability Infrastructure assists in preventing, detecting, diagnosing, and
resolving database-related problems. Problems such as database bugs and various forms of corruption
are made easier to support with the Fault Diagnosability Infrastructure. A number of changes come with
the Fault Diagnosability Infrastructure, such as where the alert log is generated.
The Automatic Diagnostic Repository (ADR)
Perhaps one of the biggest Oracle 11g changes associated with the Fault Diagnosability Infrastructure is
the ADR. The ADR is a structure that contains all files associated with the Fault Diagnosability
Infrastructure. The ADR is a physical location for file storage, which has a pre-defined and standard
directory structure. Within the ADR, different Oracle components (such as individual database instances)
store data in their own ADR home. The ADR provides for standardization of the location for files that
Oracle is required to support. This standardized file structure also makes it easy for Oracle to package
these files so that they can be sent to Oracle as a part of a Service Request.
Associated with the ADR is the new diagnostic_dest parameter. This parameter defines the root of the
ADR. The diagnostic_dest parameter deprecates the user_dump_dest, core_dump_dest, and
background_dump_dest parameters. Any Oracle 11g database will ignore these parameters and will use
the diagnostic_dest parameter. This can be an upgrade issue, because if you do not define the correct
diagnostic destination directory, then the default values will be used, which may not be your intent.
Additionally, if the background_dump_dest parameter is set, a warning will appear during the startup of
the database. The database will start using the default diagnostic directory location. Additionally, Oracle
will create a small alert log entry in the background_dump_dest location with just a few lines indicating
that the background_dump_dest parameter is obsolete and indicating the new location where Oracle will
be creating the alert log.
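Setting the parameter explicitly avoids surprises from the default; a sketch (the path is illustrative):

```sql
-- Point the ADR base at the intended directory; trace, alert, and dump
-- locations are then derived under <diagnostic_dest>/diag/...
ALTER SYSTEM SET diagnostic_dest = '/u01/oracle' SCOPE = BOTH;
```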
As an example, assume $ORACLE_BASE is /u01/oracle, the database name is mydb, and the
database is a two-node RAC instance. The structure of the ADR directory for that instance will be
/u01/oracle/diag/rdbms/mydb/mydb1, and this would be the ADR home directory for that database
instance. Each Oracle component within the ADR infrastructure (instances, ASM, networking) will have
its own ADR home. ADR supports the use of shared storage if you are using RAC, or you can use
individual storage on each node. Shared storage in a RAC environment provides the ability to see the
aggregate diagnostic data from any node. Also, a shared ADR allows for more robust recovery options for
the Data Recovery Advisor.
Under the ADR home for a given Oracle component will be a number of other directories. For the Oracle
database, some of the most common directories include the following:
cdump - This is the location of the core dumps for the database.
trace - This contains trace files generated by the system, as well as a text copy of the alert log.
incident - This directory contains multiple subdirectories, one for each incident.
There is a lot of Metadata to be stored with regards to ADR. Each Oracle database (and ASM instance)
has a V$DIAG_INFO view that provides information on the various ADR directories and other metadata
related to ADR, such as active incidents. Here is an example of a query against the V$DIAG_INFO view:
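The query referred to above can be sketched as follows (the output paths will vary with your diagnostic_dest setting):

```sql
-- List ADR locations and metadata, e.g. ADR Base, ADR Home,
-- Diag Trace, Default Trace File, Active Problem Count
SELECT name, value
FROM   v$diag_info
ORDER  BY name;
```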
ADR is a special repository, automatically maintained by Oracle 11g, that records information about
critical errors. ADRCI, the ADR Command Interpreter introduced in Oracle Database Release 11g, enables:
Viewing diagnostic data within the Automatic Diagnostic Repository (ADR).
Viewing Health Monitor reports.
Packaging of incident and problem information into a zip file for transmission to Oracle Support.
ADR is made up of a directory structure like the following:
/u01/app/oracle/diag/rdbms/orcl/orcl/alert
/u01/app/oracle/diag/rdbms/orcl/orcl/cdump
/u01/app/oracle/diag/rdbms/orcl/orcl/hm
/u01/app/oracle/diag/rdbms/orcl/orcl/incident
/u01/app/oracle/diag/rdbms/orcl/orcl/trace
Automatic Diagnostic Repository (ADR) is a file-based repository that aids the DBA in identifying,
diagnosing, and resolving problems. Oracle’s stated goals for ADR are:
Providing first-failure diagnosis
Allowing for problem prevention
Limiting damage and interruptions after a problem is detected
Reducing problem diagnostic time
Reducing problem resolution time
Simplifying customer interaction with Oracle Support
ADR accomplishes this with new features like an always-on memory-based tracing system to capture
diagnosis information from many different database components when a problem is detected, similar to
an aircraft’s “black box”.
Another new feature, Incident Packaging Services (IPS), simplifies the task of collecting diagnostic data
(traces, dumps, log files) related to a critical error. ADR assigns an incident number to a detected error
and adds it to all diagnostic information that’s related to it. A DBA can then easily package all related
information into a zip file to upload to Oracle Support. ADR defines a problem as an error such as an
ORA-00600 internal error. Problems are tracked inside of ADR by a problem key, which consists of a text
string, an error code and parameters that describe the problem.
An incident is a specific occurrence of a problem. ADR assigns a unique number for each incident, writes
an entry in the alert log, sends an alert to OEM, gathers diagnostic information, and stores that
information in an ADR sub-folder.
Using the ADRCI command-line application, you can then see the information saved for an incident, add
or remove files from the incident inventory, and save all the related files into a zip file.
To use ADRCI, you just need execute permissions. Since ADR is outside of the database, you can access
it without having the instance available.
DIAGNOSTIC DATA            PREVIOUS LOCATION (10g)                  ADR LOCATION (11g)
Foreground process traces  USER_DUMP_DEST                           $ADR_HOME/trace
Background process traces  BACKGROUND_DUMP_DEST                     $ADR_HOME/trace
Alert log data             BACKGROUND_DUMP_DEST                     $ADR_HOME/alert and $ADR_HOME/trace
Core dumps                 CORE_DUMP_DEST                           $ADR_HOME/cdump
Incident dumps             USER_DUMP_DEST or BACKGROUND_DUMP_DEST   $ADR_HOME/incident/incdir_n
ADR is a file-based repository for diagnostic data such as trace files, process dumps, data structure
dumps, and so on. In Oracle 11g, trace and alert data are no longer saved in the *_DUMP_DEST
directories, even if you set those parameters in init.ora; 11g ignores *_DUMP_DEST and stores the data
in a new format. The directory structure is given below.
Note: ADR_HOME is a user-defined variable; I have defined it to make life easier.
The ADR root is where the ADR directory structure starts. The new 11g initialization parameter
DIAGNOSTIC_DEST decides the location of the ADR root.
In 11g the alert log is saved in two locations: in the alert directory (in XML format) and as an old-style
text alert file in the trace directory. Within the ADR base, there can be many ADR homes, where each
ADR home is the root directory for all diagnostic data for a particular instance. The location of an ADR
home for a database is shown in the above graphic.
Note: I have created an environment variable ADR_HOME=
<Diag/product_type/database_name/instance_name> and use it throughout this document.
Retention policy
There is a retention policy for ADR that allows you to specify how long to keep the data.
ADR incidents are controlled by two different policies: the incident metadata retention policy
(LONGP_POLICY, one year by default) and the incident files and dumps retention policy
(SHORTP_POLICY, one month by default).
We can change the retention policy using adrci; the MMON background process automatically purges
expired ADR data.
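The two policies can be inspected and changed from within ADRCI. The SET CONTROL syntax below is standard ADRCI, but the values (expressed in hours) are only illustrative, and a live Oracle 11g installation is required:

```
$ adrci
adrci> show control
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 4380)
```

SHORTP_POLICY governs how long dumps and core files are kept; LONGP_POLICY governs the incident metadata.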
Change Retention
Oracle 11g introduces a new command-line utility called ADRCI, the ADR command interpreter. This tool
allows the user to interact with ADR: check the alert log, check Health Monitor (HM) status, create
reports on HM runs, and package incident and problem information into a zip file to send to Oracle
Support. No username/password is needed to log in to ADRCI; ADRCI interacts with the file system, and
ADR data is secured only by operating system permissions on the ADR directories.
HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL
460 ORACLE DATABASE ADMINISTRATION
adrci>
One can view the alert log content with the help of ADRCI:
$adrci
adrci> set editor vi
adrci> show alert ( it will open alert in vi editor )
adrci> show alert -tail ( Similar to Unix tail command )
adrci> show alert -tail 200 ( Similar to Unix Command tail -200 )
adrci> show alert -tail -f ( Similar to Unix command tail -f )
Since the alert log is saved in XML format (log.xml), you can query the XML file as well.
You can spool ADRCI output using the SPOOL command, the same as in SQL*Plus.
Problem and Incident
Problem
ADR introduces the new concepts of problem and incident. A problem is a critical error in the database;
in ADR, a problem is identified by a problem key, which consists of the Oracle error number, error
parameter values, and so on,
for example ORA600kci.
Incident
An incident is a single occurrence of a problem. Each incident is identified by a unique number called
the incident ID, which is unique within an ADR home; all incident data is stored in ADR. Each incident
has a problem key and is mapped to a single problem. When the error occurs, a background process
makes an entry in alert.log and collects data about the incident (such as process dumps and data
structure dumps).
If similar incidents happen frequently, Oracle will not collect data for every incident: by default, only
five dumps per hour are allowed for a given problem. This is called flood control in 11g; you will
sometimes see "flood control" messages in alert<SID>.log / log.xml. Incidents can also be created
manually, if needed.
A DBA need not search for the traces, dumps, etc. related to a particular error in order to send them to
Oracle Support. In ADR, diagnostic data is tagged with an incident ID; IPS identifies the traces and
dumps for a particular incident and allows the end user to create a package from ADR to send to Oracle
Support. Using IPS, the end user can add more files to the package if needed.
adrci>SHOW INCIDENT
---------------------------------------------------------
9817 ORA 600 [kcidr_reeval_3] 2008-08-14 18:41:03.609077 +05:30
We can use the IPS CREATE PACKAGE command to create a logical package for the above incident.
You can add additional files if needed, but the files must reside in ADR; in the example below we add
the alert log to the package.
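A minimal ADRCI session for this incident might look as follows (the package number comes from the IPS CREATE PACKAGE output; the alert log path under <ADR_HOME> is an assumption based on the ADR layout shown in this chapter):

```
adrci> ips create package incident 9817
Created package 4 based on incident id 9817, correlation level typical
adrci> ips add file <ADR_HOME>/trace/alert_orcl2.log package 4
adrci> ips generate package 4 in /tmp
```

IPS GENERATE PACKAGE writes the final zip file into the named directory, ready to upload to Oracle Support.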
diag/rdbms/orcl2/orcl2/incident/incdir_9817/orcl2_ora_5967_i9817.trm
2302506 05-14-08 18:41
diag/rdbms/orcl2/orcl2/incident/incdir_9817/orcl2_ora_5967_i9817.trc
186887 05-14-08 18:41 diag/rdbms/orcl2/orcl2/trace/alert_orcl2.log
491982 05-14-08 18:41 diag/rdbms/orcl2/orcl2/alert/log.xml
1122 05-14-08 18:41 diag/rdbms/orcl2/orcl2/trace/orcl2_diag_5931.trc
189 05-14-08 18:41 diag/rdbms/orcl2/orcl2/trace/orcl2_diag_5931.trm
1342 05-14-08 18:41 diag/rdbms/orcl2/orcl2/trace/orcl2_ora_5967.trc
773 05-14-08 18:41 diag/rdbms/orcl2/orcl2/trace/orcl2_ora_5967.trm
831 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_CONFIGURATION.dmp
338 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_PACKAGE.dmp
193 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_PACKAGE_INCIDENT.dmp
1094 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_PACKAGE_FILE.dmp
234 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_PACKAGE_HISTORY.dmp
6004 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_FILE_METADATA.dmp
214 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/IPS_FILE_COPY_LOG.dmp
1273 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/DDE_USER_ACTION_DEF.dmp
1813 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/DDE_USER_ACTION_PARAMETER_DEF.dmp
204 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/DDE_USER_ACTION.dmp
198 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/DDE_USER_ACTION_PARAMETER.dmp
353 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/DDE_USER_INCIDENT_TYPE.dmp
163 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/DDE_USER_INCIDENT_ACTION_MAP.dmp
614 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/INCIDENT.dmp
357 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/INCCKEY.dmp
202 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/INCIDENT_FILE.dmp
406 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/PROBLEM.dmp
710 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/HM_RUN.dmp
843 05-14-08 18:49 diag/rdbms/orcl2/orcl2/hm/HMREPORT_HM_RUN_21.hm
708 05-14-08 18:49 diag/rdbms/orcl2/orcl2/hm/HMREPORT_HM_RUN_41.hm
207 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/export/EM_USER_ACTIVITY.dmp
62624 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/config.xml
489 05-14-07 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/metadata.xml
9508 05-14-08 18:49
diag/rdbms/orcl2/orcl2/incpkg/pkg_4/seq_1/manifest_4_1.xml
0 05-05-08 04:00 diag/rdbms/orcl2/orcl2/alert/
0 05-05-08 04:00 diag/rdbms/orcl2/orcl2/cdump/
0 05-14-08 18:49 diag/rdbms/orcl2/orcl2/hm/
0 05-14-08 18:41 diag/rdbms/orcl2/orcl2/incident/
0 05-14-08 18:45 diag/rdbms/orcl2/orcl2/incpkg/
0 05-13-08 22:51 diag/rdbms/orcl2/orcl2/ir/
0 05-14-08 18:41 diag/rdbms/orcl2/orcl2/lck/
0 05-05-08 04:00 diag/rdbms/orcl2/orcl2/metadata/
0 05-14-08 18:41 diag/rdbms/orcl2/orcl2/stage/
0 05-14-08 18:41 diag/rdbms/orcl2/orcl2/sweep/
0 05-14-08 18:41 diag/rdbms/orcl2/orcl2/trace/
Log an SR and upload this zip file to Oracle Support for diagnosis and resolution.
IPS in Summary
$ adrci
adrci> help ips
adrci> show incident
(For example, the above command shows incident No. 9817 for ORA-600 [XYZ].)
adrci> ips create package incident 9817 <= ( it will give package No.)
adrci> ips create package incident 9817
Created package 4 based on incident id 9817, correlation level typical
adrci>
Health Monitor runs diagnostic checks on various components of the database, including files,
memory, transaction integrity, metadata, and process usage. To collect more data after a critical error
(incident), Oracle invokes Health Monitor implicitly. If needed, the end user can also run Health Monitor
checks manually.
Reactive: the fault diagnosability infrastructure can invoke Health Monitor checks automatically
in response to critical errors.
Manual: the DBA can run Health Monitor checks manually.
Look at the V$HM_CHECK view; it lists all Health Monitor checks:
NAME
-------------------------
HM Test Check
Database Cross Check
Data Block Check
Redo Check
Logical Block Check
Table Check
Table-Index Cross Check
Table Row Check
Table-Index Row Mismatch
Transaction Check
Undo Segment Check
All Control Files Check
CF Member Check
All Datafiles Check
Single Datafile Check
Log Group Check
Log Group Member Check
Archived Log Check
Redo Revalidation Check
IO Revalidation Check
Block IO Revalidation Check
Txn Revalidation Check
Failure Simulation Check
Database Dictionary Check
25 rows selected.
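A manual run can be sketched like this (the check name is taken from the V$HM_CHECK list above; the run name my_dict_run is arbitrary, and a live 11g instance is required):

```sql
SQL> BEGIN
       DBMS_HM.RUN_CHECK(check_name => 'Database Dictionary Check',
                         run_name   => 'my_dict_run');
     END;
     /
SQL> SELECT run_id, name, status FROM v$hm_run WHERE name = 'my_dict_run';
```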
The checker generates a report of its execution in XML and stores the reports in ADR.
You can view these reports using either V$HM_RUN, DBMS_HM, ADRCI, or Enterprise Manager.
$adrci
adrci>SHOW HM_RUN
*************************************************************************
----------------------------------------------------------
RUN_ID 1
RUN_NAME HM_RUN_1
CHECK_NAME Database Cross Check
NAME_ID 2
MODE 2
START_TIME 2008-08-05 04:01:56.783059 +05:30
RESUME_TIME
END_TIME 2008-08-08 04:02:04.007178 +05:30
MODIFIED_TIME 2008-08-08 04:02:04.007178 +05:30
TIMEOUT 0
FLAGS 0
STATUS 5
SRC_INCIDENT_ID 0
NUM_INCIDENTS 0
ERR_NUMBER 0
REPORT_FILE
RUN_ID 21
RUN_NAME HM_RUN_21
2 rows fetched
Create HM Report
You can create and view Health Monitor checker reports using the ADRCI utility. Make sure the Oracle
environment variables are set properly; the ADRCI utility starts and displays its prompt as shown above.
Enter the SHOW HM_RUN command to list all the checker runs registered in the ADR repository.
Locate the checker run for which you want a report and note its name from the corresponding
RUN_NAME field. You can then generate the report with the CREATE REPORT HM_RUN command and
view it with the SHOW REPORT HM_RUN command, or by running DBMS_HM.GET_RUN_REPORT at
the SQL prompt.
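For the HM_RUN_21 run registered above, the whole sequence is (a sketch; it requires an Oracle 11g installation):

```
adrci> show hm_run
adrci> create report hm_run HM_RUN_21
adrci> show report hm_run HM_RUN_21
```

From SQL*Plus, the same report is returned by SELECT DBMS_HM.GET_RUN_REPORT('HM_RUN_21') FROM DUAL;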
DBMS_HM.GET_RUN_REPORT('HM_RUN_21')
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="US-ASCII"?>
<HM-REPORT REPORT_ID="HM_RUN_21">
<TITLE>HM Report: HM_RUN_21</TITLE>
<RUN_INFO>
<CHECK_NAME>Database Dictionary Check</CHECK_NAME>
<RUN_ID>21</RUN_ID>
<RUN_NAME>HM_RUN_21</RUN_NAME>
<RUN_MODE>MANUAL</RUN_MODE>
<RUN_STATUS>COMPLETED</RUN_STATUS>
<RUN_ERROR_NUM>0</RUN_ERROR_NUM>
<SOURCE_INCIDENT_ID>0</SOURCE_INCIDENT_ID>
<NUM_INCIDENTS_CREATED>0</NUM_INCIDENTS_CREATED>
<RUN_START_TIME>2008-08-13 23:09:43.831573 +05:30</RUN_START_TIME>
<RUN_END_TIME>2008-08-13 23:09:47.713191 +05:30</RUN_END_TIME>
</RUN_INFO>
<RUN_PARAMETERS>
<RUN_PARAMETER>TABLE_NAME=ALL_CORE_TABLES</RUN_PARAMETER>
<RUN_PARAMETER>CHECK_MASK=ALL</RUN_PARAMETER>
</RUN_PARAMETERS>
<RUN-FINDINGS/>
</HM-REPORT>
adrci>
View HM generated reports on OS level ( In ADR repository )
[oracle@apps001 hm]$ ls
HMREPORT_HM_RUN_21.hm
[oracle@apps001 hm]$
FLASHBACK TECHNOLOGY
Oracle Flashback Technology is a group of Oracle Database features that let us view past states of
database objects, or return database objects to a previous state, without using point-in-time media
recovery. Flashback Database is part of the backup and recovery enhancements in Oracle Database
10g that are called Flashback Features.
Flashback Database enables us to wind the entire database backward in time, reversing the effects of
unwanted database changes within a given time window. Its effects are similar to conventional
database point-in-time recovery: it returns the database to its state at a time in the recent past.
Flashback Database can be used to reverse most unwanted changes to a database, as long as the
datafiles are intact. Oracle Flashback Database lets us quickly recover an Oracle database to a previous
time to correct problems caused by logical data corruptions or user errors.
In most cases, a disastrous logical failure caused by human error can be solved by performing a
Database Point-in-Time Recovery (DBPITR). Before 10g, the only way to do a DBPITR was incomplete
media recovery, a slow and time-consuming process that can take many hours. With Flashback
Database, on the other hand, a DBPITR can be done extremely fast (25 to 105 times faster than usual
incomplete media recovery), which minimizes the downtime significantly.
The maximum allowed memory for the flashback buffer is 16 MB. We don’t have direct control on its size.
The flashback buffer size depends on the size of the current redo log buffer that is controlled by Oracle.
Starting at 10g R2, the log buffer size cannot be controlled manually by setting the initialization
parameter LOG_BUFFER.
In 10g R2, Oracle combines the fixed SGA area and the redo buffer together. If there is free space after
Oracle puts the combined buffers into a granule, that space is added to the redo buffer. The sizing of
the redo log buffer is fully controlled by Oracle: in line with the SGA and its atomic sizing by granules,
Oracle automatically calculates the size of the log buffer depending on the current granule size. For
smaller SGA sizes with 4 MB granules, it is possible for the redo log buffer size plus the fixed SGA size
to be a multiple of the granule size. For SGAs bigger than 128 MB, the granule size is 16 MB. We can
see the current sizes of the redo log buffer, fixed SGA, and granules by querying the V$SGAINFO view,
and we can query the V$SGASTAT view to display detailed information on the SGA and its structures.
To find current size of the flashback buffer, we can use the following query:
SQL> SELECT * FROM v$sgastat WHERE NAME = 'flashback generation buff';
There is no official information from Oracle confirming the relation between the 'flashback generation
buff' structure in the SGA and the real flashback buffer structure; this is only a supposition. A message
similar to the following is written to the alert<SID>.log file during opening of the database:
Allocated 3981204 bytes in shared pool for flashback generation buffer
Starting background process RVWR
RVWR started with pid=16, OS id=5392
RVWR periodically writes the flashback buffer contents to the flashback database logs. It is an
asynchronous process and we have no control over it. All available sources say that RVWR writes to the
flashback logs periodically; the explanation for this behavior is that Oracle is trying to reduce the I/O
and CPU overhead, which can be an issue in many production environments.
Flashback log files can be created only under the Flash Recovery Area (which must be configured
before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory
named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under
Oracle's control. In the current Oracle environment, during normal database activity flashback log files
have a size of 8200192 bytes, which is very close to the current redo log buffer size. The size of a
generated flashback log file can differ during shutdown and startup activities, and during
write-intensive activity as well.
Flashback log files can be written only under the FRA (Flash Recovery Area). The FRA is closely related
to, and built on top of, Oracle Managed Files (OMF). OMF is a service that automates the naming,
location, creation, and deletion of database files. By using OMF and the FRA, Oracle easily manages
flashback log files. They are created with automatically generated names with the extension .FLB; for
instance: O1_MF_26ZYS69S_.FLB
By their nature, flashback logs are similar to redo log files: LGWR writes the contents of the redo log
buffer to the online redo log files, while RVWR writes the contents of the flashback buffer to the
flashback database log files. Redo log files contain all changes performed in the database; that data is
needed in case of media or instance recovery. Flashback log files contain only the changes needed in
case of a flashback operation. The main differences between redo log files and flashback log files are:
Flashback log files are never archived; they are reused in a circular manner.
Redo log files are used to roll changes forward in case of recovery, while flashback log files are used
to roll changes back in case of a flashback operation.
Flashback log files can be compared with UNDO data (contained in UNDO tablespaces) as well.
While UNDO data contains changes at the transaction level, flashback log files contain UNDO data at
the data block level. And while the UNDO tablespace doesn't record all operations performed on the
database (for instance, DDL operations), flashback log files record that data as well. In a few words,
flashback log files contain the UNDO data for our database.
To Summarize :
UNDO data doesn't contain all changes performed in the database, while flashback logs contain
all altered blocks in the database.
UNDO data is used to roll back changes at the transaction level, while flashback logs are used to
roll back changes at the database level.
We can query the V$FLASHBACK_DATABASE_LOGFILE to find detailed info about our flashback log files.
Although this view is not documented it can be very useful to check and monitor generated flashback
logs.
There is a new record section within the control file header that is named FLASHBACK LOGFILE
RECORDS. It is similar to LOG FILE RECORDS section and contains info about the lowest and highest SCN
contained in every particular flashback database log file .
***************************************************************************
FLASHBACK LOGFILE RECORDS
***************************************************************************
(size = 84, compat size = 84, section max = 2048, section in-use = 136,
last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 139, numrecs = 2048)
FLASHBACK LOG FILE #1:
(name #4) E:\ORACLE\FLASH_RECOVERY_AREA\ORCL102\FLASHBACK\O1_MF_26YR1CQ4_.FLB
Thread 1 flashback log links: forward: 2 backward: 26
size: 1000 seq: 1 bsz: 8192 nab: 0x3e9 flg: 0x0 magic: 3 dup: 1
Low scn: 0x0000.f5c5a505 05/20/2006 21:30:04
High scn: 0x0000.f5c5b325 05/20/2006 22:00:38
In current environment this is the file with name: O1_MF_26YSTQ6S_.FLB and with values of:
Low SCN : 4123374373
High SCN : 4123376446
Note: If we want to perform successfully a flashback operation we will always need to have available at
least one archived (or online redo) log file. This is a particular file that contains redo log information
about changes around the desired flashback point in time (SCN 4123376440). In this case, this is the
archived redo log with name: ARC00097_0587681349.001 that has values of:
First change#: 4123361850
Next change#: 4123380675
The flashback operation will not succeed without this particular archived redo log. The reason:
flashback log files contain before-images of data blocks, related to some SCN (System Change
Number). When we perform a flashback operation to SCN 4123376440, Oracle cannot complete the
operation by applying flashback logs alone, because the flashback logs apply before-images of data.
Oracle needs to restore each data block copy (by applying flashback log files) to its state at the closest
possible point in time before SCN 4123376440. This guarantees that the subsequent "redo apply"
operation will roll the database forward to SCN 4123376440 and leave the database in a consistent
state. After applying the flashback logs, Oracle performs a forward operation by applying all needed
archived log files (in this case, the redo information from the file ARC00097_0587681349.001), which
rolls the database forward to the desired SCN.
Oracle cannot start applying redo log files before it is sure that all data blocks have been returned to
their state before the desired point in time. So, if the desired restore point is 10:00 AM and the oldest
restored data block is from 09:47 AM, then we will need all archived log files that contain redo data for
the interval between 09:47 AM and 10:00 AM. Without that redo data, the flashback operation cannot
succeed. When a database is restored to its state at some past target time using Flashback Database,
each block changed since that time is restored from the copy of the block in the flashback logs most
immediately prior to the desired target time. The redo log is then used to re-apply the changes made
since the time that block was copied to the flashback logs.
Note: Redo logs must be available for the entire time period spanned by the flashback logs, whether on
tape or on disk. (In practice, however, redo logs are generally needed much longer than the flashback
retention target to support point-in-time recovery.)
Flashback logs are not independent: they can be used only together with the redo data that contains
the database changes around the desired SCN. This means that if we want a working flashback window
(and to be able to restore the database to any point in time within that window), we need to ensure
the availability of the redo logs as well. If we keep this in mind, we will be able to work with this
feature more effectively and ensure that it helps us perform faster recovery without unexpected
problems.
A flashback log is created whenever necessary to satisfy the flashback retention target, as long
as there is enough space in the flash recovery area.
A flashback log can be reused once it is old enough that it is no longer needed to satisfy the
flashback retention target.
If the database needs to create a new flashback log and the flash recovery area is full or there is
no disk space, then the oldest flashback log is reused instead.
If the flash recovery area is full, then an archived redo log may be automatically deleted by the
flash recovery area to make space for other files. In such a case, any flashback logs that would require
the use of that redo log file for the use of FLASHBACK DATABASE are also deleted.
Note : Re-using the oldest flashback log shortens the flashback database window. If enough flashback
logs are reused due to a lack of disk space, the flashback retention target may not be satisfied.
If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING
operation. Also, perform a full or incremental backup of the affected datafiles immediately after any
NOLOGGING operation to ensure recoverability to points in time after the operation. If we expect to use
Flashback Database to return to a point in time during an operation such as a direct-path INSERT,
consider performing the operation in LOGGING mode.
Oracle Flashback features use the Automatic Undo Management to obtain metadata and transaction
historical data.
Undo data is persistent and survives database shutdown.
You can use the Flashback options to:
o recover data from user errors,
o compare table data at two points in time,
o view transaction actions (the set of actions performed in a given transaction),
o undo table drops,
o revert the entire database to a previous point in time.
If the database Flashback feature is off, then follow the steps below:
5.) Set the recovery file destination size. This is the hard limit on the total space to be used by
target database recovery files created in the flash recovery area.
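The full enabling sequence can be sketched as follows (the size, path, and retention values are only illustrative; the database must be in ARCHIVELOG mode, and on 10g/11gR1 it must be mounted, not open, when flashback is switched on):

```sql
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH;  -- minutes
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
```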
The main application of Flashback Technologies is to pinpoint logical errors and undo erroneous
changes without performing point-in-time recovery. There are various technologies that come under
the Flashback umbrella; each of them is discussed and demonstrated in this tutorial.
First of all, set the undo retention to 1 hour and enable the retention guarantee to avoid lower-limit
errors.
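As a sketch (the undo tablespace name UNDOTBS1 is an assumption; check your own instance for the actual name):

```sql
SQL> ALTER SYSTEM SET undo_retention = 3600 SCOPE=BOTH;  -- 1 hour, in seconds
SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
```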
1) Flashback Drop
In earlier database releases, if a table was accidentally dropped, one had to recover the database using
point-in-time recovery. While this would restore the table, it would also revert all other database
objects to that same point. Alternatively, one could import the table back into the database if an
appropriate export file happened to exist. But invariably, neither of these alternatives was well suited
to the task. This has been vastly simplified and improved with Flashback Drop, which simply reverses
the effects of a DROP TABLE operation.
Note: Only tables in locally managed (as opposed to dictionary-managed) tablespaces, and not
contained within the SYSTEM tablespace, may be the subject of a Flashback Drop operation.
Other objects excluded from Flashback Drop include partitioned index-organized tables (IOTs) and
objects to which fine-grained auditing (FGA) or virtual private database (VPD) policies have been
applied.
To support Flashback Drop, a structure called the recycle bin exists within the database.
It is used to undrop dropped tables, using a LIFO order when the same name has been dropped
multiple times. After an undrop, the table is renamed back to its original name, while its related
indexes, triggers, etc. keep their system-generated names and cannot be reverted to their original
names automatically.
RECYCLEBIN=ON
Prior to Oracle 10g, a DROP command permanently removed objects from the database. In Oracle 10g,
a DROP command places the object in the recycle bin. The extents allocated to the segment are not
reallocated until we purge the object, and we can restore the object from the recycle bin at any time.
This feature eliminates the need to perform a point-in-time recovery operation and therefore has
minimal impact on other database users.
In Oracle 10g the default action of a DROP TABLE command is to move the table to the recycle bin (or
rename it), rather than actually dropping it. The PURGE option can be used to permanently drop a table.
The recycle bin is a logical collection of previously dropped objects, with access tied to the DROP
privilege. The contents of the recycle bin can be shown using the SHOW RECYCLEBIN command and
purged using the PURGE TABLE command. As a result, a previously dropped table can be recovered from
the recycle bin.
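A minimal session illustrating this behavior (the EMP table is hypothetical):

```sql
SQL> DROP TABLE emp;                       -- moved to the recycle bin, not erased
SQL> SHOW RECYCLEBIN                       -- lists EMP under its BIN$... name
SQL> FLASHBACK TABLE emp TO BEFORE DROP;   -- restores the table
SQL> DROP TABLE emp PURGE;                 -- permanently drops, bypassing the bin
```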
Recycle Bin: the recycle bin contains all dropped database objects until they are purged, either
explicitly or automatically under space pressure.
If an object is dropped and recreated multiple times, all dropped versions are kept in the recycle bin,
subject to space. Where multiple versions are present, it is best to reference the tables via their
recycle-bin names; any reference to the ORIGINAL_NAME is assumed to mean the most recently
dropped version. During the flashback operation the table can be renamed.
The RECYCLE BIN concept was introduced in Oracle 10g. It is similar to the Windows Recycle Bin,
except that the dropped objects remain in their original tablespace.
The Recycle Bin is a virtual container where all dropped objects reside. Underneath the covers, the
objects are occupying the same space as when they were created. If table EMP was created in the USERS
tablespace, the dropped table EMP remains in the USERS tablespace. Dropped tables and any associated
objects such as indexes, constraints, nested tables, and other dependent objects are not moved; they
are simply renamed with a BIN$ prefix. You can continue to access the data in a
dropped table, or even use Flashback Query against it. Each user has the same rights and privileges on
recycle bin objects as before they were dropped. You can view your dropped tables by querying the new
RECYCLEBIN view. Objects in the Recycle Bin will remain in the database until the owner of the dropped
objects decides to permanently remove them using the new PURGE command. The Recycle Bin objects
are counted against a user's quota. But Flashback Drop is a non-intrusive feature. Objects in the Recycle
Bin will be automatically purged by the space reclamation process if
o A user creates a new table or adds data that causes their quota to be exceeded.
o The tablespace needs to extend its file size to accommodate create/insert operations.
There are no issues with dropping the table, behavior-wise; it is the same as in 8i/9i. The space is not
released immediately and is still accounted for within the same tablespace/schema after the drop.
Both individual users and administrators have their own view of the recycle bin. In the first example
below, notice the user view once a table has been dropped. A superficial look at the recycle bin is
available via the SHOW RECYCLEBIN command while a more comprehensive one is obtained by querying
the USER_RECYCLEBIN view or RECYCLEBIN synonym.
While the object remains in the recycle bin, one can still query it or perform a Flashback Query on the
object.
The administrator view into the bin is available from the data dictionary view DBA_RECYCLEBIN. This
likewise maintains the relationships between bin-resident objects and their original names.
Any given object may be dropped several times, so the bin must be able to uniquely identify each
instance. You will therefore notice that the renamed form of an object in the bin follows this basic form:
Unique_ID - a 26-character unique identifier for the object, since the same object name could be
dropped from many different schemas.
Version - a version number, since the same schema object could be dropped several times before the
bin is purged.
The command FLASHBACK TABLE…TO BEFORE DROP restores the table and all its dependent objects
from the recycle bin. Dependent objects restored along with the table include indexes, triggers and
constraints.
If a database object with the same name already exists in the database, an error is returned unless you
also specify the RENAME TO clause. Since the dependent objects keep their system-generated names
after the restore, you may need to rename them manually afterwards.
If the same object was dropped multiple times, the instance that was most recently moved to the recycle
bin is recovered. To restore an older version of that object, use the system-generated name. To illustrate,
the following query indicates that there are several instances of the database object within the bin. Using
the appropriate technique, we can restore the oldest one to the schema.
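As a sketch (the BIN$ name shown is illustrative; take the real one from the recycle bin query, and note that it must be double-quoted):

```sql
SQL> SELECT object_name, original_name, droptime FROM user_recyclebin;
SQL> FLASHBACK TABLE "BIN$zbjra9wy==$0" TO BEFORE DROP RENAME TO customers_old;
```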
Objects are removed from the recycle bin in one of two ways. First, one can explicitly remove the objects using the PURGE command, after which they are obviously no longer available for reclamation. Also, objects may be automatically purged by the database instance under space pressure: if the owner is about to exceed a tablespace quota, or if the tablespace needs the space for new objects and would otherwise have to extend. There are several different forms of the explicit PURGE command. In this first example the recycle bin for a user schema is explicitly purged in its entirety.
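This first form is simply:

```sql
SQL> PURGE RECYCLEBIN;
```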
In other cases, a more focused purge may be performed. The examples shown next purge previous
incarnations of the CUSTOMERS table or the CUSTOMERS_INDEX index.
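These more focused purges might look as follows (object names taken from the text):

```sql
SQL> PURGE TABLE customers;
SQL> PURGE INDEX customers_index;
```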
One could also refer to an object as part of the PURGE command by using its BIN$ bin-resident name as
well. If one has sufficient privileges, all recycled objects previously stored in a given tablespace, or the
entire database, may be purged, as shown next.
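A sketch of these broader forms (the tablespace and user names are illustrative; the last command requires administrative privileges):

```sql
SQL> PURGE TABLESPACE users;
SQL> PURGE TABLESPACE users USER scott;
SQL> PURGE DBA_RECYCLEBIN;
```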
Bypassing the Recycle Bin
One can bypass the recycle bin and permanently and immediately drop a table and its dependent
objects. If you issue the DROP TABLE…PURGE command, it will not move the objects to the recycle bin.
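For example:

```sql
SQL> DROP TABLE customers PURGE;
```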
About Implicitly Dropped Objects
Objects which are implicitly dropped as a result of DROP TABLESPACE…INCLUDING CONTENTS, DROP CLUSTER or DROP USER…CASCADE commands are never moved to the recycle bin. Such objects cannot be recovered using FLASHBACK DROP.
Example recycle-bin names: BIN$zbjrBdpw==$0, BIN$zbjra9wy==$0
As long as the space used by recycle bin objects is not reclaimed, you can recover those objects by using
Flashback Drop. The following are recycle bin object reclamation policies:
• Automatic cleanup under space pressure: While objects are in the recycle bin,
their corresponding space is also reported in DBA_FREE_SPACE because their
space is automatically reclaimable. The free space in a particular tablespace is
then consumed in the following order:
1. Free space not corresponding to recycle bin objects.
2. Free space corresponding to recycle bin objects. In this case, recycle bin objects are automatically purged from the recycle bin using a first in, first out (FIFO) algorithm.
3. Free space automatically allocated if the tablespace is auto-extensible. Suppose that you create a
new table inside the TBS1 tablespace. If there is free space allocated to this tablespace that does
not correspond to a recycle bin object, this free space is used as a first step. If this is not enough,
free space is used that corresponds to recycle bin objects that reside inside TBS1. If the free
space of some recycle bin objects is used, these objects are purged automatically from the
recycle bin. At this time, you can no longer recover these objects by using the Flashback Drop
feature. As a last resort, the TBS1 tablespace is extended (if possible) if the space requirement is
not yet satisfied.
2) Flashback Query
Use to view data at a specific point in time. Flashback Query uses undo data, so the greater the UNDO_RETENTION parameter, the more historical data can be queried. Moreover, a Flashback Data Archive can be created to retain undo data for comparatively longer periods, supporting flashback queries much further back in time (e.g. one year or more).
This feature causes either a session or just a single query to be placed into flashback mode. It employs
the system-supplied package DBMS_FLASHBACK (), either directly or using the standard SQL interface.
When a session or query has been placed in this mode, it will operate upon data that has flashed back to
a specific point in time or database system change number (SCN). Undo data retained by the database is
referenced to achieve accurate flashback results.
There are various methods by which one may perform a flashback query:
Using the AS OF TIMESTAMP clause within a SQL statement to flashback to a specific point-in-
time.
Using the AS OF SCN clause within a SQL statement to flashback to a database SCN.
Explicitly calling the DBMS_FLASHBACK () package for a session to perform similar flashback
operations at the entire session level.
SELECT…AS OF TIMESTAMP
To illustrate a brief initial example, notice in the query below the average value for ListPrice within the
Products table as of what is the current point-in-time.
Thereafter, a 10% price increase is implemented for all products and this is reflected in a new average
value. The transaction is committed and the update made permanent to the database.
Suppose a sophisticated sales analysis application running at a later point in time has noticed a
significant decline in sales as of certain time. Management might inquire as to what the average list price
of products was at that point, as compared with the current average. Now notice how a query can include
the AS OF TIMESTAMP clause to satisfy this request. This clause allows one to specify a timestamp value,
often with the help of the TO_TIMESTAMP () system-supplied function. The undo data is read and the
query results reflect the prior point-in-time.
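The whole sequence might be sketched as follows (table and column names follow the text; the timestamp value is illustrative):

```sql
SQL> SELECT AVG(listprice) FROM products;

SQL> UPDATE products SET listprice = listprice * 1.10;
SQL> COMMIT;

SQL> SELECT AVG(listprice) FROM products
> AS OF TIMESTAMP TO_TIMESTAMP('15-01-2024 09:00:00', 'DD-MM-YYYY HH24:MI:SS');
```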
With a little bit of logic, any table can thus revert to an earlier state using this feature as well. Notice the
following example. At the same time, we have other far more elegant means of actually undoing an
update using flashback technology, but this example illustrates the underlying capability.
This means that user or application errors, such as rows deleted inadvertently or other updates performed erroneously, may be undone. Bear in mind that the
flashback operation actually pertains to the object and not the query or the database as a whole. This
becomes clear when performing a join operation. In the example below, the Products table is flashed back
to a prior point but is joined with the Members table in its current state.
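Such a join might be sketched as follows (the join columns are illustrative):

```sql
SQL> SELECT m.name, p.name, p.listprice
> FROM products AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) p,
>      members m
> WHERE p.member_id = m.member_id;
```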
By using the AS OF TIMESTAMP clause on all tables within the query, one can create a hybrid query which
uses different states of the tables within the same query.
Using our hypothetical scenario above, one could revert the PRODUCTS table to a prior point in time, and
flashback the SALES table to a later point in time. One could then examine what the net sales value
would have been over that period without the list price increase.
SELECT…AS OF SCN
One may alternately flashback a query to a particular SCN. There are various methods by which the
desired SCN might be computed. One helpful method involves the use of the pseudo-column
ORA_ROWSCN. This pseudo-column refers to the most recent COMMIT SCN which resulted in the row
being updated.
Notice how this is used within a query. If one wishes to obtain whatever is the current SCN for the
entire database, the GET_SYSTEM_CHANGE_NUMBER() function within the DBMS_FLASHBACK() package
provides this information. You will learn how to utilize this system-supplied package below.
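A sketch of both techniques (the column names are illustrative):

```sql
SQL> SELECT ora_rowscn, name, listprice FROM products;

SQL> SELECT dbms_flashback.get_system_change_number FROM dual;
```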
Regardless of the method used to determine the desired SCN point, this example shows another update
to the table is issued and committed. The query, however, flashes back to the SCN when the rows were
still present.
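For instance (the SCN value is illustrative):

```sql
SQL> DELETE FROM products WHERE listprice > 100;
SQL> COMMIT;

SQL> SELECT COUNT(*) FROM products AS OF SCN 1234567;
```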
Using DBMS_FLASHBACK()
Using the DBMS_FLASHBACK() system-supplied package the entire database session, or perhaps just a
transaction, may be flashed back to a prior point. This allows all queries, PL/SQL program units, and so on
to operate in that state without changes. In fact, perhaps using a logon system event trigger, one could
implicitly set one or more database sessions to a prior point in time and then have the applications
operate for the session as if the application was running at a prior point. In this way an application user
could use their application and issue transactions as if the time period were a point in the past. Or
consider a PL/SQL application which opens a cursor while in flashback mode, and then opens a cursor on
the same database objects while in normal mode, with the results of the two compared. Operating in this
mode involves the following simple steps:
The transaction must first enable flashback query to a specific point in time or SCN point using
the ENABLE_AT_TIME() or ENABLE_AT_SYSTEM_CHANGE_NUMBER() program units.
The transaction must complete by disabling flashback queries, using the DISABLE() program unit.
The FLASHBACK object privilege is required in order to perform flashback queries on an object, as shown
above. (Of course the SELECT object privilege would also be required). Like other object privileges, this is
implicitly available to the owner but must be explicitly granted to other users. Also, in the case of the DBMS_FLASHBACK() package, which like other system-supplied packages is owned by SYS, the EXECUTE privilege must be granted to each user who will employ it.
Consider the scenario where the ListPrice of products has been increased. A call to DBMS_FLASHBACK()
allows the session to operate back to the point in time where the increases had not become effective. All
SQL and PL/SQL program units will now operate in this mode, until the flashback has been disabled.
Next, the database session returns to using the latest state of the production database.
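The scenario above might be sketched as follows (the interval is illustrative):

```sql
SQL> EXECUTE DBMS_FLASHBACK.ENABLE_AT_TIME(SYSTIMESTAMP - INTERVAL '15' MINUTE);

SQL> SELECT AVG(listprice) FROM products;

SQL> EXECUTE DBMS_FLASHBACK.DISABLE;
```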
Note: Flashback query does not apply to such objects as data dictionary fixed tables, dynamic performance tables, external tables, and so on. Part of what this means is that system functions and pseudo-columns like SYSDATE and others will retain their current values even if the transaction or session is operating in flashback mode.
Alternately one may flashback a transaction or session to an SCN. Notice this similar example shown
next.
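For example (the SCN value is illustrative):

```sql
SQL> EXECUTE DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER(1234567);
SQL> SELECT AVG(listprice) FROM products;
SQL> EXECUTE DBMS_FLASHBACK.DISABLE;
```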
3) Flashback Versions Query
See all versions of rows between two times or SCNs, and the transaction that changed each row.
Like Flashback Query it also depends on UNDO DATA. Oracle Flashback Versions Query is an extension to
SQL that can be used to retrieve the versions of rows in a given table that existed in a specific time
interval. Oracle Flashback Versions Query returns a row for each version of the row that existed in the
specified time interval. For any given table, a new row version is created each time the COMMIT
statement is executed. Flashback version query allows the versions of a specific row to be tracked during
a specified time period using the VERSIONS BETWEEN clause.
The Flashback Versions Query feature enables you to perform queries of specific rows as of a certain time
or SCN number. The FLASHBACK and SELECT object privileges are required for this operation. This feature
can be combined with the VERSIONS clause, which can be added to display all the versions of the
committed rows between two points-in-time or two SCNs. This displays a history of row changes that
allows one to evaluate all the states of any given row. Hence, this feature can be used as a means of
auditing activity on a table. Any uncommitted row versions are not displayed. However, the display includes all deleted rows as well as subsequently reinserted versions of the rows. Pseudo-columns may be referenced directly in the SELECT statement or used in the WHERE clause. There are several pseudo-columns that relate directly to the Flashback Versions Query feature, which are listed next. As you can
see, these support either timestamp or SCN references.
Example: Consider the initial setup for this example. We obtain the current SCN number as an initial reference
point. Thereafter, several updates on the Teams table are performed, including the insertion and then
deletion of a Team row. These updates are part of committed transactions.
The current SCN is determined and this is the reference point that we will use. The clause VERSIONS
BETWEEN SCN MINVALUE AND MAXVALUE will reference undo data within the range specified. If explicit
SCN values are not used, the keywords MINVALUE and MAXVALUE will use the full range of undo data
available. The clause AS OF SCN xxxx provides a reference point for which the row versions should be
evaluated. If this is omitted, then the most recent SCN is used.
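Such a query might be sketched as follows, using the Teams table from the example:

```sql
SQL> SELECT versions_startscn, versions_endscn, versions_xid,
>        versions_operation, name
> FROM teams
> VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
> ORDER BY versions_startscn;
```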
The results of the below query can be interpreted as follows:
The first row corresponds to the version of the row that was deleted. Given that
VERSIONS_ENDSCN is null, it means that the row still existed as of that VERSIONS_STARTSCN
number.
The second row corresponds to the inserted row with Name value of ‘Support’. The
VERSIONS_ENDSCN value indicates this version of the row no longer existed as of that SCN.
The third row corresponds to the row with a Name value ‘HR’ when it was inserted. It also still
exists as of the current SCN.
4) Flashback Transaction Query
See all changes made by a transaction. An UNDO_SQL statement for each statement executed within the transaction is available in the FLASHBACK_TRANSACTION_QUERY view to revert the changes (minimal supplemental logging must be enabled to obtain UNDO_SQL from FLASHBACK_TRANSACTION_QUERY). Also uses undo data.
Flashback transaction query can be used to get extra information about the transactions listed by
flashback version queries. The VERSIONS_XID column values from a flashback version query can be
used to query the FLASHBACK_TRANSACTION_QUERY view.
Flashback Transaction Query is complementary to the Flashback Versions Query feature. Using a Versions
Query, one might identify all of the versions of a given row within a table, as you have just seen. Next,
using Flashback Transaction Query, one can use Versions Query information to query a view named
FLASHBACK_TRANSACTION_QUERY. The FLASHBACK_TRANSACTION_QUERY view indicates the transaction
which created the row version and the SQL code necessary to undo each of the changes made by that
transaction. By invoking that SQL code, one could undo the changes, thereby reverting one or more
tables to their original state.
Database Configuration
Flashback Transaction Query requires that supplemental redo log data be added to the standard redo
processing of the database. While more extensive options of this feature are required for other database
facilities such as standby databases using Oracle Data Guard, Flashback Transaction Query only requires
that minimal supplemental redo logging be enabled. This is done with the following command:
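The command is:

```sql
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```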
When this option is first enabled, all existing shared SQL cursors within the SQL cache are invalidated,
meaning that a temporary performance loss will occur until the cache is reloaded over the course of
time. A query to the V$DATABASE view can confirm that minimal supplemental redo logging is enabled.
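For example:

```sql
SQL> SELECT supplemental_log_data_min FROM v$database;
```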
Quer
ying FLASHBACK_TRANSACTION_QUERY
In addition to a properly configured database, to query the view FLASHBACK_TRANSACTION_QUERY one
must have the SELECT ANY TRANSACTION system privilege. The first example shown here returns
information about all transactions, both active and committed, for the TEAMS table.
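Such a query might look as follows:

```sql
SQL> SELECT xid, start_scn, commit_scn, operation, logon_user
> FROM flashback_transaction_query
> WHERE table_name = 'TEAMS';
```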
This next example identifies all of the database updates which were part of a given transaction.
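A sketch of such a query (the transaction identifier is illustrative):

```sql
SQL> SELECT operation, table_name, undo_sql
> FROM flashback_transaction_query
> WHERE xid = HEXTORAW('0600030021000000');
```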
Performance Note: Queries against the FLASHBACK_TRANSACTION_QUERY view can be notoriously slow
due to the size of this view. One helpful hint is to use the index which has been built on the XID column.
Since this is a RAW data type column, however, the index will only be selected by the optimizer if a
compatible RAW search value is included in the query. For this reason we use the HEXTORAW() function in
the previous example.
This last example returns information about all transactions within a given time period.
Flashback Versions Query and Flashback Transaction Query can be used in conjunction with each other to audit transactions. Flashback Versions Query provides a history of the changes made to a row, as well as the transaction identifier. However, you may want to know how a row evolved to a given value. By using the transaction identifier, you can use Flashback Transaction Query to see which operations were performed, as well as which SQL statements are necessary to undo the transaction. To accomplish this, follow the steps outlined below. First, use Flashback Versions Query to display a history of changes:
Next, display the exact operations that were performed by using Flashback Transaction Query:
5) Flashback Transaction
With Flashback Transaction, you can reverse a transaction and its dependent transactions. It uses the DBMS_FLASHBACK package (the TRANSACTION_BACKOUT procedure) to back out a transaction.
6) Flashback Table
Use to recover tables to a specific point in time. Requires undo data, and row movement must be enabled for the respective table. There are two distinct table-related flashback features in Oracle: Flashback Table, which relies on undo segments, and Flashback Drop, which relies on the recycle bin rather than the undo segments.
Flashback Table lets us recover a table to a previous point in time. We do not have to take the tablespace offline during the recovery; Oracle acquires exclusive DML locks on the table or tables being recovered, but the tables remain online. When using Flashback Table, Oracle does not preserve ROWIDs when it restores rows in the changed data blocks, since it uses DML operations to perform its work. We must therefore have enabled row movement on the tables we are going to flash back; only Flashback Table requires row movement to be enabled. If the data is no longer in the undo segments, we cannot recover the table by using Flashback Table, although we can use other means to recover it.
This feature allows one to permanently flashback one or more tables to a specific point-in-time or SCN. It
is most useful to recover from user or application error. For example, suppose that a serious application
logic bug was found indicating that updates performed over a recent period of time were all erroneous
and must be permanently undone. This must take place while the application continues to operate. The
Flashback Table operation would be the ideal solution. The source of the original data for the flashback
table operation is also the undo data. The undo data is read online and the table restored to the point
designated. Previously, one might need to take a portion of the database offline and perform a
complicated point-in-time recovery operation. Or, a more intricate set of steps would be needed using
only Flashback Query. However, this task is simpler and more efficient using Flashback Table. While
Flashback Table primarily restores tables, it also automatically maintains dependent objects such as
indexes (either standard indexes or partitioned indexes in the case of partitioned tables), triggers, and
constraints. Furthermore, if the table had been replicated as part of a distributed database configuration,
the replicated objects are maintained during the flashback operation too. Once performed, this statement
is executed as a single transaction. This means that either all updates must be flashed back successfully
or the entire flashback transaction is rolled back. The flashback operation may itself be undone, reverting
the table to a different point in time if necessary.
Restrictions on Flashback Table recovery: we cannot use Flashback Table on SYS objects. We cannot flash back a table that has had preceding DDL operations performed on it, such as table structure changes or dropped columns. The flashback must entirely succeed or it will fail; if flashing back multiple tables, all tables must be flashed back or none. Any constraint violation aborts the flashback operation. We also cannot flash back a table that has had any shrink or storage changes (PCTFREE, INITRANS, MAXTRANS). The following example creates a table, inserts some data, and flashes back to a point prior to the data insertion. Finally, it flashes back to the time after the data insertion.
You must have been granted the FLASHBACK ANY TABLE system privilege or have the FLASHBACK object privilege.
You must also have the SELECT, INSERT, DELETE and ALTER privileges on the table.
Row movement must be enabled on the table by means of the ALTER TABLE…ENABLE ROW
MOVEMENT statement.
To determine the appropriate flashback time, you can use Flashback Versions Query and Flashback
Transaction Query. Both allow you to establish the specific time to flashback the table. Once the proper
flashback time is determined, the FLASHBACK TABLE command can be used to flashback one or more
tables either to a point-in-time or a SCN.
Example: The FLASHBACK TABLE statement is executed as a single transaction. Therefore, the
ROLLBACK statement cannot be used as a method to bring the tables back to their prior state. However,
if you need to undo the effects of the flashback statement, another FLASHBACK TABLE command can be
executed specifying a different time or SCN that occurred prior to the first executed FLASHBACK TABLE
statement. Before using Flashback Table, the administrator must enable row movement on the impacted
tables since Flashback Table does not preserve the row IDs. To enable row movement, issue the
following statement:
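For example:

```sql
SQL> ALTER TABLE customers ENABLE ROW MOVEMENT;
```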
If this prerequisite step has not been taken then a flashback operation will result in the following
error: ORA-08189: cannot flashback the table because row movement is not enabled
Prepare Your Tables For Flashback
One cannot flashback a table to a point prior to its ability to
support row movement. In other words, if one wishes to flashback a table and is prevented from doing so
because row movement was not enabled, simply enabling row movement will not allow that same
flashback operation to then be performed. One may only flashback a table to a point after row movement
has been enabled.
Next, perform the Flashback Table operation. This first example uses a time stamp to flashback the
CUSTOMERS table. You can use either of the methods shown to specify the timestamp:
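The two methods might be sketched as follows (the timestamp values are illustrative):

```sql
SQL> FLASHBACK TABLE customers
> TO TIMESTAMP TO_TIMESTAMP('15-01-2024 09:00:00', 'DD-MM-YYYY HH24:MI:SS');

SQL> FLASHBACK TABLE customers
> TO TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE;
```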
The structure of the table must be stable and must have existed at a time consistent with the timestamp indicated. Otherwise an error such as the following would occur:
ORA-01466: unable to read data - table definition has changed
This next example uses an SCN to flash back the tables. Typically, an SCN will be used if a referential integrity constraint exists. In this case referential integrity exists between the CUSTOMERS and SALES tables, thus a single FLASHBACK TABLE statement will be used to group the tables within the same operation. By default, triggers are disabled when executing this statement. However, if you need to override the default behavior, use the ENABLE TRIGGERS clause. In the following example, the triggers are enabled throughout the Flashback operation:
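A sketch of such a statement (the SCN value is illustrative):

```sql
SQL> FLASHBACK TABLE customers, sales
> TO SCN 1234567
> ENABLE TRIGGERS;
```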
7) Flashback Database
Rewinds the entire database. Uses flashback logs to perform its operations, so flashback logging must first be enabled.
Flashback Database is not enabled by default. When it is enabled, a background process (RVWR, the Recovery Writer) copies modified blocks to the flashback buffer, which is then flushed to disk as flashback logs. Remember that flashback logging is not a log of changes but a log of complete block images. Not every changed block is logged, as this would be too much for the database to cope with; only as many blocks are copied as can be sustained without impacting performance. Flashback Database constructs a version of the data files that is just before the requested time. The data files will probably be in an inconsistent state, as different blocks will be at different SCNs. To complete the flashback process, Oracle then uses the redo logs to recover all blocks to the exact time requested, synchronizing all the data files to the same SCN. ARCHIVELOG mode must be enabled to use Flashback Database. An important note to remember: flashback logs can only take blocks back in time; the redo logs are then needed to roll them forward to the exact point requested.
The advantage of Flashback Database is the speed and convenience with which we can take the database back in time. We can use RMAN, SQL, or Enterprise Manager to flash back a database. If the flash recovery area does not have enough room, the database will continue to function, but flashback operations may fail. It is not possible to flash back a single tablespace; we must flash back the whole database. If performance is being affected by flashback data collection, flashback logging can be turned off for some tablespaces.
We cannot undo a resized data file to a smaller size. When using BACKUP RECOVERY AREA and BACKUP RECOVERY FILES, control files, redo logs, permanent files, and flashback logs will not be backed up.
There are several components within the database that support this feature. A description of each
component appears in the table below. As well, you will find an illustration of the Flashback Database
architecture following the table.
About target parameters: Parameters such as DB_FLASHBACK_RETENTION_TARGET are, as the name implies, parameters that specify target values and not absolute values. This means that while the database will endeavor to achieve the target, it is not guaranteed and is dependent upon other factors. In the case of DB_FLASHBACK_RETENTION_TARGET, the actual retention time also depends upon the flashback area having sufficient space, as directed by the parameter DB_RECOVERY_FILE_DEST_SIZE.
By default, flashback logs are generated for all permanent tablespaces. If you have a tablespace for
which you do not want to log flashback data, you can execute the ALTER TABLESPACE command to
exclude a tablespace. Such a tablespace must be taken offline prior to flashing back the database. This
next example excludes the SIDERISUSERS tablespace from participating in the flashback of the database:
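For example:

```sql
SQL> ALTER TABLESPACE siderisusers FLASHBACK OFF;
```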
To determine which tablespaces are to be excluded from participating in the flashback of the database,
query the V$TABLESPACE as displayed in the following example.
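For example:

```sql
SQL> SELECT name, flashback_on FROM v$tablespace;
```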
The database must be mounted but not open before launching the flashback. The following command
flashes back to the SCN.
Any valid timestamp expressions or literal values may also be stated instead if one wishes to perform a
point-in-time flashback. Notice this example.
Thereafter, the database must be opened. Generally one will open it with the RESETLOGS option.
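The overall sequence might be sketched as follows (the SCN and interval values are illustrative; use one form or the other):

```sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO SCN 1234567;
-- or: FLASHBACK DATABASE TO TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;
SQL> ALTER DATABASE OPEN RESETLOGS;
```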
About Restore Points
Restore points are simply alias or mnemonic names assigned to SCNs. In this way, rather than recording
tedious SCN numbers and potentially causing a serious database recovery error due to a typographical
error, one can instead refer to an easily recognizable restore point name. A restore point is created at any
time using the CREATE RESTORE POINT command. The current SCN is associated with this label.
If the database is operating in ARCHIVELOG mode and the flash recovery area has been configured, then
one may define a guaranteed restore point. This will ensure that the flashback logs are maintained for as
long as necessary so as to support a flashback database operation to that point.
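A sketch of both kinds of restore point (the names are illustrative):

```sql
SQL> CREATE RESTORE POINT before_upgrade;

SQL> CREATE RESTORE POINT before_upgrade_grp
> GUARANTEE FLASHBACK DATABASE;

SQL> FLASHBACK DATABASE TO RESTORE POINT before_upgrade_grp;
```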
The V$RESTORE_POINT view will list the current set of restore points and which are guaranteed. In the
case of guaranteed restore points, it will also indicate the amount of flashback log storage currently
required to maintain this point.
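For example:

```sql
SQL> SELECT name, scn, time, guarantee_flashback_database, storage_size
> FROM v$restore_point;
```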
A number of views exist to support the information presented via EM and also to provide additional
details. First, space usage within the flash recovery area is found within the view
V$FLASH_RECOVERY_AREA_USAGE.
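For example:

```sql
SQL> SELECT file_type, percent_space_used, percent_space_reclaimable
> FROM v$flash_recovery_area_usage;
```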
The V$FLASHBACK_DATABASE_LOG data dictionary view likewise reports useful information. It reveals
the SCN and point-in-time currently supported by the flashback area. If the point-in-time does not match
the number of minutes specified by RETENTION_TARGET then one may need to find additional space for
the recovery area.
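The relevant columns might be queried as follows:

```sql
SQL> SELECT oldest_flashback_scn, oldest_flashback_time, retention_target,
>        flashback_size, estimated_flashback_size
> FROM v$flashback_database_log;
```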
Two other important pieces of information are FLASHBACK_SIZE and ESTIMATED_FLASHBACK_SIZE. The
first reveals the current size of the flashback data while the second indicates the size actually needed,
based upon current transaction history, to satisfy the retention target. In the example above it is
expected that much more space will eventually be needed for the flash recovery area. The view
V$FLASHBACK_DATABASE_STAT maintains statistics used to compute the amount of flashback space needed. At various sample points, usually hourly, it indicates the number of flashback log bytes written, data file bytes read and written, and redo bytes written. Data file I/O is more resource-intensive since it is random, while log writes are sequential in nature.
V$SYSSTAT reveals the number of operations, rather than the bytes, which utilize the flashback logs. The number of flashback log writes is indicative of the amount of block changes made by transactions, while the number of physical reads for flashback data reflects activity during a flashback database operation.
8) Flashback Data Archive
Create an archive of undo data and retain it for longer periods, such as one year or more. It is also referred to as Total Recall.
Flashback Data Archive (Oracle Total Recall) provides the ability to track and store all transactional
changes to a table over its lifetime. It is no longer necessary to build this intelligence into our application.
A Flashback Data Archive is useful for compliance with record retention policies and audit reports.
Prior to Oracle 11g, flashback technology was largely based on the availability of undo data or flashback logs, and both are subject to recycling when space pressure exists. The UNDO tablespace in Oracle was primarily meant for transaction consistency, not data archival. A Flashback Data Archive is configured with a retention time, and data archived in it is retained for that retention time. Let's look at an example:
Remove Flashback Data Archive and all its historical data, but not its tablespaces:
SQL> DROP FLASHBACK ARCHIVE near_term ;
Use Cases :
Flashback Data Archive is handy for many purposes. Here are some ideas:
• To audit for recording how data changed
• To enable an application to undo changes (correct mistakes)
• To debug how data has been changed
• To comply with regulations that require that data not be changed after some time. Flashback Data Archives are not regular tables, so they can't be changed by typical users.
• Recording audit trails on cheaper storage, thereby allowing more retention at less cost
Create a Tablespace for Data Archive.
SQL> CREATE TABLESPACE TBS1
> DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\PROD\TBS01.DBF'
> SIZE 500M AUTOEXTEND ON;
Create Flashback Data Archive.
SQL> CREATE FLASHBACK ARCHIVE DEFAULT FLA1
> TABLESPACE TBS1 QUOTA 10G RETENTION 5 YEAR;
Add Tables to Flashback Archive.
SQL> ALTER TABLE HR.EMPLOYEES FLASHBACK ARCHIVE;
SQL> ALTER TABLE HR.DEPARTMENTS FLASHBACK ARCHIVE;
Now undo data of at most 5 years will be retained for the above tables.
As you are aware, other flashback facilities within the database allow one to view the past state of rows within a table. The difficulty with those facilities, though, is that they rely upon undo data, which typically does not persist for an extended period of time. This means that while these other features are certainly useful, they do not have sufficient duration for ILM regulatory compliance requirements. The Flashback Data Archive facility instead uses a special object known by the same name, a flashback data archive, which can be retained for as long as the ILM requirements dictate. This dedicated resource is therefore not dependent upon other database operations for its success. The feature is configured by means of these steps:
1. Create a tablespace specifically dedicated to flashback data archives, or designate an existing tablespace with sufficient free space for this purpose.
2. Create one or more flashback data archives within the appropriate tablespace(s), indicating what the retention period should be for each one.
3. Decide whether or not a default flashback archive should exist for the database.
4. Enable flashback archiving for selected tables, associating each designated table with an appropriate flashback data archive.
Once this is done and properly configured, one can assume that a simple
flashback query will always succeed when it falls within the defined retention period, even if undo data
has long since been discarded.
While we could use an existing tablespace, in this example we create a tablespace dedicated to supporting all the flashback data archives within our application database.
Based upon the data retention requirements for our organization and the regulatory obligations placed
upon us, we will create the appropriate flashback data archives within the designated tablespace. In each
case we decide how much of the designated data archive space this particular archive object should be
allocated.
The RETENTION clause permits the keyword designations YEAR, MONTH and DAY. Most of the attributes of a flashback archive may be modified using the ALTER FLASHBACK ARCHIVE command. In this example we expand the quota permitted for the tablespace and allow additional archive space to be taken from another tablespace.
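The commands referred to here are not shown above; a sketch of what they might look like, using the FLA1 archive and TBS1 tablespace from the earlier example (the quota sizes and the second tablespace TBS2 are illustrative assumptions):

```sql
-- Expand the quota this archive may consume in TBS1 (size is illustrative)
ALTER FLASHBACK ARCHIVE FLA1 MODIFY TABLESPACE TBS1 QUOTA 20G;

-- Allow the archive to also take space from another tablespace
-- (TBS2 is a hypothetical additional tablespace)
ALTER FLASHBACK ARCHIVE FLA1 ADD TABLESPACE TBS2 QUOTA 5G;
```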
For the most part one will rely upon the database to retain the data archive for the duration specified. On occasion one might want to manually purge this data, which is permitted. The clauses PURGE BEFORE SCN <scn> and PURGE BEFORE TIMESTAMP <timestamp> are also supported. Once the data is purged from the archive, the historical row state information is only available if it still exists within the undo data, and that is almost certainly not sufficient to support the retention period in our scenario.
Of course, a flashback archive which is no longer needed and no longer in use may be dropped.
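A sketch of the purge and drop commands described here, reusing the FLA1 archive name from the earlier example (the two-year interval is illustrative):

```sql
-- Manually purge archived history older than two years
ALTER FLASHBACK ARCHIVE FLA1 PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' YEAR);

-- Purge everything currently held in the archive
ALTER FLASHBACK ARCHIVE FLA1 PURGE ALL;

-- Drop an archive that is no longer needed and no longer in use
DROP FLASHBACK ARCHIVE FLA1;
```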
The data dictionary maintains metadata for the flashback archives defined. General information is
available from the view DBA_FLASHBACK_ARCHIVE.
The storage space allocated for each flashback archive is maintained within the view
DBA_FLASHBACK_ARCHIVE_TS.
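These views can be queried as follows (a minimal sketch; the column lists are abbreviated):

```sql
-- General attributes of each archive, including retention and default status
SELECT flashback_archive_name, retention_in_days, status
  FROM dba_flashback_archive;

-- Space allocated to each archive, per tablespace
SELECT flashback_archive_name, tablespace_name, quota_in_mb
  FROM dba_flashback_archive_ts;
```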
Note The system privilege FLASHBACK ARCHIVE ADMINISTER is required in order to administer
flashback archives within the database.
The next step is to enable flashback archiving for selected tables. We may designate which flashback
archive is appropriate for each table in question, or a default flashback archive can be designated for use
when a specific one is not selected. First, in order to designate one of the flashback archives as the
default, this would be done as shown here:
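A sketch of designating an existing archive as the default, using FLA1 from the earlier example (this requires administrator privileges):

```sql
-- Make FLA1 the database default flashback archive
ALTER FLASHBACK ARCHIVE FLA1 SET DEFAULT;
```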
The STATUS column within DBA_FLASHBACK_ARCHIVE will indicate if a default archive has been
established for the database.
The owner of the tables now has flashback archives available in the database for their use. Before they
may utilize these however, they must be granted the FLASHBACK ARCHIVE object privilege on one or
more of the flashback archives. This preparatory step would be performed by the flashback archive
administrator, as shown here:
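A sketch of the grant described here, assuming the FLA1 archive and the HR schema from the earlier examples:

```sql
-- Performed by the flashback archive administrator;
-- HR is the owner of the tables being archived
GRANT FLASHBACK ARCHIVE ON FLA1 TO HR;
```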
The table owner may now manage archiving on individual tables, utilizing the attributes of each one to
which they have access. In this example a table is associated with a specific archive.
In this case archiving is enabled for a table, but the default flashback archive is implicitly selected.
Archiving may be disabled for a table, which will no longer consume space allocated to the archive and
will therefore be dependent upon undo data for any flashback queries issued against it.
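Sketches of the three table-level operations just described (FLA1 and the HR tables follow the earlier examples):

```sql
-- Associate a table with a specific archive
ALTER TABLE HR.EMPLOYEES FLASHBACK ARCHIVE FLA1;

-- Enable archiving with the default archive implicitly selected (no name given)
ALTER TABLE HR.DEPARTMENTS FLASHBACK ARCHIVE;

-- Disable archiving; flashback queries against this table
-- now depend on undo data again
ALTER TABLE HR.DEPARTMENTS NO FLASHBACK ARCHIVE;
```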
Note Nearly all DDL operations which affect the logical structure of the table will be forbidden
once archiving is enabled for a table. The only exception is the ALTER TABLE…ADD COLUMN command,
which is permitted. If this were not the case then one could contravene the purpose and intention of
archiving by modifying its logical structure. Illegal DDL operations attempted on such tables will generate
the error ORA-55610: Invalid DDL statement on history-tracked table.
Once archiving is enabled, an internal object is used within the designated tablespace to support the
archive records. The administrator may view these internal objects from the view
DBA_FLASHBACK_ARCHIVE_TABLES.
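A minimal query sketch against this view:

```sql
-- Tables with archiving enabled and their internal history tables
SELECT owner_name, table_name, flashback_archive_name, archive_table_name
  FROM dba_flashback_archive_tables;
```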
There is no need to enable flashback logging in advance in order to flash the database back to a guaranteed restore point: a guaranteed restore point retains all of the logs required to flash the database back to that specific restore point.
Database Cloning
What is Cloning?
Database cloning is a procedure used to create an identical copy of an existing Oracle database. DBAs sometimes need to clone databases to test backup and recovery strategies, or to export a table that was dropped from the production database and import it back into production. Cloning can be done on a separate host or on the same host, and is different from a standby database.
Cold Cloning
Hot Cloning
RMAN Cloning
Here is a brief explanation of how to perform cloning with each of these three methods.
Cold cloning is one of the most reliable methods and is done using a cold backup. The drawback of this method is that the database has to be shut down while the cold backup is taken.
Considerations:
Source Database Name: RIS
Clone Database Name: RISCLON
Source Database physical files path=/u01/RIS/oradata
Cloned Database physical files path=/u02/RISCLON/oradata
Steps to be followed:
Start up the source database (in production the database will usually already be running, in which case this step is unnecessary).
$ export ORACLE_SID=RIS
$ sqlplus / as sysdba
SQL> startup
Find out the path and names of datafiles, control files, and redo log files.
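These paths can be found with queries such as:

```sql
SQL> SELECT name FROM v$datafile;
SQL> SELECT name FROM v$controlfile;
SQL> SELECT member FROM v$logfile;
```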
If database is using pfile, use OS command to copy the pfile to a backup location.
Shut down the 'RIS' database; since this is cold cloning, all database services must be stopped.
SQL> shutdown
Copy all data files, control files, and redo log files of ‘RIS’ database to a target database location.
$ mkdir /u02/RISCLON/oradata
$ cp /u01/RIS/oradata/* /u02/RISCLON/oradata/
Create appropriate directory structure in clone database for dumps and specify them in the parameter
file.
$ mkdir -p /u02/RISCLON/{bdump,udump}
Edit the clone database parameter file and make necessary changes to the clone database
$ cd /u02/RISCLON/
$ vi initRISCLON.ora
db_name=RISCLON
control_files=/u02/RISCLON/oradata/cntrl01.ctl
background_dump_dest=/u02/RISCLON/bdump
user_dump_dest=/u02/RISCLON/udump
. . .
. . .
:wq!
$ export ORACLE_SID=RISCLON
Create the control file for the clone database by running the control file trace script, after editing it to specify the appropriate paths for the redo log files and datafiles.
SQL> @/u01/RIS/source/udump/cntrl.sql
Once the control file is successfully created, open the database with the RESETLOGS option.
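The trace script mentioned above is generated on the source database. A sketch of the overall flow (the datafile and redo log file names below are hypothetical placeholders):

```sql
-- On the source database, dump a CREATE CONTROLFILE script to the udump/trace area
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- Edit the generated script for the clone, roughly as follows:
CREATE CONTROLFILE SET DATABASE "RISCLON" RESETLOGS NOARCHIVELOG
  LOGFILE GROUP 1 '/u02/RISCLON/oradata/redo01.log' SIZE 50M,
          GROUP 2 '/u02/RISCLON/oradata/redo02.log' SIZE 50M
  DATAFILE '/u02/RISCLON/oradata/system01.dbf',
           '/u02/RISCLON/oradata/sysaux01.dbf',
           '/u02/RISCLON/oradata/users01.dbf';

-- After the control file is created, open the clone
SQL> ALTER DATABASE OPEN RESETLOGS;
```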
Hot database cloning is more suitable for databases running 24x7x365 and is done using a hot backup. For hot cloning, the database has to be in archivelog mode, and there is no need to shut down the database.
Considerations:
Source Database Name: RIS
Clone Database Name: RISCLON
Source Database physical files path=/u01/RIS/oradata
Cloned Database physical files path=/u02/RISCLON/oradata
Steps to be followed:
If database is using pfile, use OS command to copy the pfile to a backup location.
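The numbered steps below imply that the source database was first placed in backup mode before the datafiles were copied (step 6 releases it). A sketch of that step, assuming Oracle 10g or later:

```sql
-- Place the whole source database in backup mode before copying datafiles
-- (on older releases this is done per tablespace with
--  ALTER TABLESPACE ... BEGIN BACKUP)
SQL> ALTER DATABASE BEGIN BACKUP;
```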
$ mkdir /u02/RISCLON/oradata
$ cp /u01/RIS/source/oradata/*.dbf /u02/RISCLON/oradata/
6. After copying all datafiles, release the database from backup mode.
7. Switch the current log file and note down the oldest log sequence number.
8. Copy all archive log files generated from the FIRST old log sequence number to the LAST old log sequence number, covering the period during which the database was in backup mode.
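Steps 7 and 8 can be sketched as:

```sql
-- Force a log switch, then note the oldest and current sequence numbers
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ARCHIVE LOG LIST;
```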
10. Create appropriate directory structure for the clone database and specify the same
$ cd /u02/RISCLON
$ mkdir bdump udump
11. Edit the clone database parameter file and make necessary changes to the clone database
$ cd /u02/RISCLON
$ vi initRISCLON.ora
db_name=RISCLON
control_files=/u02/RISCLON/oradata/cntrl01.ctl
background_dump_dest=/u02/RISCLON/bdump
user_dump_dest=/u02/RISCLON/udump
. . .
. . .
:wq!
$ export ORACLE_SID=RISCLON
SQL> startup nomount pfile='/u02/RISCLON/initRISCLON.ora'
13. Create the control file for the clone database using the control file trace script.
14. Run the trace script from the trace path:
SQL> @/u01/RIS/source/udump/cntrl.sql
16. You will be prompted to supply the archive log files. Specify the absolute path and file name for each archive log file, and keep supplying them until you cross the LAST old sequence number (refer to step 8); then type CANCEL to end the media recovery.
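A sketch of this media recovery step (the archive log path shown is a hypothetical placeholder):

```sql
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- Supply each archived log path when prompted, e.g.:
--   /u02/RISCLON/arch/arch_1_101.arc
-- After the last required sequence, type CANCEL, then open the clone:
SQL> ALTER DATABASE OPEN RESETLOGS;
```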
RMAN provides the DUPLICATE command, which uses backups of the source database to create the clone. Files are restored to the clone database destination, after which an incomplete recovery is performed and the clone database is opened with the RESETLOGS option. All of these steps are performed automatically by RMAN without any intervention from the DBA.
Considerations:
Source Database Name: RIS
Clone Database Name: RISCLON
Source Database physical files path=/u01/RIS/oradata
Cloned Database physical files path=/u02/RISCLON/oradata
Steps to be followed:
If database is using pfile, use OS command to copy the pfile to a backup location.
$ cd /u02/RISCLON
$ mkdir bdump udump
$ cd /u02/RISCLON
$ vi initRISCLON.ora
db_name=RISCLON
control_files=/u02/RISCLON/oradata/cntrl01.ctl
db_file_name_convert=('/u01/RIS/oradata','/u02/RISCLON/oradata')
# This parameter specifies from where to where the datafiles should be cloned
log_file_name_convert=('/u01/RIS/oradata','/u02/RISCLON/oradata')
# This parameter specifies from where to where the redo log files should be cloned
background_dump_dest=/u02/RISCLON/bdump
user_dump_dest=/u02/RISCLON/udump
. . .
. . .
:wq!
NOTE: The db_file_name_convert and log_file_name_convert parameters are required only if the source database directory structure and the clone database directory structure differ.
4. Configure the listener using the 'listener.ora' file and start the listener; the con_RISCLON connection alias below belongs in 'tnsnames.ora'.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = RIS)
(ORACLE_HOME = /u01/oracle/product/10.2.0/db_1/)
(SID_NAME =RIS)
)
(SID_DESC =
(GLOBAL_DBNAME = RISCLON)
(ORACLE_HOME = /u02/oracle/product/10.2.0/db_1/)
(SID_NAME =RISCLON)
)
)
con_RISCLON =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 200.168.1.22)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RISCLON)
)
)
$ export ORACLE_SID=RISCLON
SQL> startup nomount pfile='/u02/RISCLON/initRISCLON.ora'
SQL> exit
$ export ORACLE_SID=RIS
$ rman target / auxiliary sys/sys@con_RISCLON
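Once connected, the clone is created with the DUPLICATE command. A minimal sketch (the exact clause set depends on the Oracle version and configuration):

```sql
RMAN> DUPLICATE TARGET DATABASE TO RISCLON;
```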
NOTE: The DUPLICATE command restores all files from the backup of the target (source) database to the clone database destination using all available archive log files, and RMAN then opens the clone database with the RESETLOGS option.
Are these three cloning methods the same? Reviewing the activities above helps in identifying the differences between them.
What is MOS?
MOS stands for My Oracle Support, previously called MetaLink, Oracle's official electronic online support service.
MOS requires a paid software license support contract. It offers technical support notes, bug access, request tracking and patches. Users with a valid support contract can register on Oracle's MOS site.
Oracle Support
Oracle uses a CSI (Customer Support Identifier) number to verify whether a customer is eligible to receive Oracle support. Customers with valid CSI numbers can log SRs (Service Requests) on the MOS website. When an SR is created, Oracle can start a web conference (using OCS, Oracle Collaboration Suite) to collect more specific information about the problem.
The CSI is also used to identify a customer's account and track service requests. Information contained within My Oracle Support is made accessible strictly to registered MOS users, for reference purposes only. With a MOS account we can download patches; a support contract is needed to obtain one.
Types of Patches
Definition of an Oracle patch: patches are software programs for individual bug fixes. Oracle issues product fix software, usually called patches, used to fix a particular problem (bugs, security weaknesses, performance improvements, etc.).
Patches are associated with particular versions of Oracle products. When we apply a patch to the Oracle software, a small collection of files is replaced to fix certain bugs, and the database version number does not change.
Patches are available as single interim patches and patchsets (patch releases). Patch releases have release numbers. If we installed Oracle 10.2.0.0, the first patch release will have a release number of 10.2.0.1.
POINTS TO NOTE
An interim patch is given to customers in critical need; its main purpose is for business customers who cannot wait until the next patch set or new product release to get a fix.
The first digit (10) is the most general identifier: the major Oracle Database release number, representing significant new functionality.
The second digit (2) is the database maintenance release number; some new features may also be included, along with bug fixes to the existing release (10.1.0).
The third digit (0) is the Application Server (OracleAS) release number.
The fourth digit (4) is the component-specific/patch release number. A patch release contains fixes for serious bugs; different components have different numbers (e.g. component patch sets).
The fifth digit (0) is the platform-specific release number. Usually this is a patchset; it typically fixes or works around a particular, critical problem.
Check Current Release Number
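The current release number can be checked from SQL*Plus, for example:

```sql
SQL> SELECT banner FROM v$version;
-- or, for just the instance version:
SQL> SELECT version FROM v$instance;
```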
Overview of CPU
The CPU (Critical Patch Update) was introduced in January 2005 to provide security fixes.
CPUs are sets of patches containing fixes for security faults.
Critical Patch Updates are collections of security fixes for Oracle products. They are available to customers with valid support contracts.
CPU patches are always cumulative, which means fixes from previous Oracle security alerts and Critical Patch Updates are included in the current patch. However, each advisory describes only the security fixes added since the previous Critical Patch Update advisory. (It is not required to have previous security patches applied before applying the latest patches.)
See Critical Patch Updates and Security Alerts for information about Oracle security advisories.
CPU patches are a collection of patches applied to fix multiple security vulnerabilities. If bugs occur after applying the latest patchset for the current release, Oracle releases CPU patches at regular intervals to fix them; each CPU patch is based on the latest patchset.
Overview of PSU
The PSU (Patch Set Update) was introduced in July 2009.
A PSU is limited to between 25 and 100 new bug fixes.
PSUs are also better tested by Oracle compared to one-off patches.
PSUs resemble patch sets, but with some major differences from regular patch sets.
Oracle introduced this new method of patching, i.e. Patch Set Updates, or PSUs.
PSUs are cumulative and include all of the security fixes from CPU patches, plus additional fixes. An Oracle PSU contains recommended bug fixes and "proactive" cumulative patches, so the DBA can choose to apply all patches in the PSU patch bundle (which includes additional fixes).
Once a PSU patch is applied, we cannot apply a CPU patch until the database is upgraded to a new base version.
In 10.2.0.4.1, the final 1 indicates the PSU number.
If we have 10.2.0.4, it will contain all fixes in 10.2.0.3.
So the fifth number of the database version is incremented for each PSU. All PSUs are denoted by the last digit (10.2.0.4.1, 10.2.0.4.2). The initial PSU is version 10.2.0.4.1, the next PSU for the release will be 10.2.0.4.2, and so on.
What is OPatch?
OPatch is a Java-based Oracle utility.
OPatch is the Oracle database interim (one-off) patch installer.
For one-off bug fixes, we can use OPatch to apply them. OPatch assists in applying interim patches to Oracle software and in removing interim patches from it. OPatch can also report already-installed interim patches and can detect conflicts (when an interim patch has already been applied).
Opatch Supports
Applying an interim Patch.
Reporting on installed products and interim patches.
Rolling back (Removes) the application of an interim patch.
Detecting a conflict and raising an error about the conflict situation.
OUI vs Opatch
Why are two different utilities used?
Oracle Offers two utilities for software deployment.
OUI to install Oracle Products.
Opatch to apply interim Patches.
OPatch assists with the process of applying interim patches to Oracle's software. OUI performs component-based installations as well as complex installations, such as integrated bundle and suite installations.
Patching is one of the most common tasks performed by DBAs in day-to-day life. Here, we will discuss the various types of patches provided by Oracle. Oracle issues product fixes for its software called patches. When we apply a patch to our Oracle software installation, it updates the executable files, libraries, and object files in the software home directory. The patch application can also update configuration files and Oracle-supplied SQL schemas. Patches are applied by using OPatch (a utility supplied by Oracle), OUI, or Enterprise Manager Grid Control.
Oracle patches come in various kinds. Here, we broadly categorize them into two groups.
1.) Patchset:
2.) Patchset Updates:
1.) Patchset: A group of patches forms a patch set. Patchsets are applied by invoking OUI (Oracle Universal Installer). Patchsets are generally applied for upgrade purposes. This results in a version change for our Oracle software, for example, from Oracle Database 11.2.0.1.0 to Oracle Database 11.2.0.3.0. We will cover this later.
2.) Patchset Updates: Patch Set Updates are proactive cumulative patches containing recommended bug fixes that are released on a regular and predictable schedule. Oracle categorizes them as follows:
i.) Critical Patch Update (CPU) now refers to the overall release of security fixes each quarter
rather than the cumulative database security patch for the quarter. Think of the CPU as the
overarching quarterly release and not as a single patch.
ii.) Patch Set Updates (PSU) are the same cumulative patches that include both the security fixes and
priority fixes. The key with PSUs is they are minor version upgrades (e.g., 11.2.0.1.1 to
11.2.0.1.2). Once a PSU is applied, only PSUs can be applied in future quarters until the
database is upgraded to a new base version.
iii.) Security Patch Update (SPU) terminology is introduced in the October 2012 Critical Patch
Update as the term for the quarterly security patch. SPU patches are the same as previous CPU
patches, just with a new name. For the database, SPUs cannot be applied once PSUs have been
applied until the database is upgraded to a new base version.
iv.) Bundle Patches are the quarterly patches for Windows and Exadata which include both the
quarterly security patches as well as recommended fixes.
PSUs (Patch Set Updates), CPUs (Critical Patch Updates) and SPUs are applied via the OPatch utility.
A patchset or CPU/PSU (or one-off) patch contains post-installation tasks to be executed on all Oracle database instances after completing the installation tasks. If we are planning to apply a patchset along with required one-off patches (either CPU or PSU or any other one-off patch), then we can complete the installation tasks of the patchset plus the CPU/PSU/one-off patches at once, and then execute the post-installation tasks of the patchset plus the CPU/PSU/one-off patches in the same sequence as they were installed.
This approach minimizes the requirement for database shutdown across each patching activity and simplifies the patching mechanism into two tasks:
Software update, and then
Database update.
Here, we will cover the OPatch utility in detail, along with examples.
OPatch is the recommended (Oracle-supplied) tool that customers are supposed to use in order to apply or roll back patches. OPatch is platform specific. Its release is based on the Oracle Universal Installer version. OPatch resides in $ORACLE_HOME/OPatch. OPatch supports the following:
Applying an interim patch.
Rolling back the application of an interim patch.
Detecting conflicts when applying an interim patch after previous interim patches have been applied, and suggesting the best options to resolve a conflict.
Reporting on installed products and interim patches.
The patch metadata exists in the inventory.xml and action.xml files under <stage_area>/<patch_id>/etc/config/. The metadata includes:
Bug number
Unique Patch ID
Date of patch year
Required and Optional components
OS platforms ID
Instance shutdown is required or not
Patch can be applied online or not
Log in to MetaLink.
Click the "Patches & Updates" link on the top menu.
In the patch search section, enter the patch number and select the platform of your database.
Click Search.
On the search results page, download the zip file.
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch apply .
We can check the final status of the applied patch in the Oracle Home by using the command below.
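One commonly used command for this is opatch lsinventory, which reports the Oracle Home contents and the installed interim patches; a sketch:

```shell
$ opatch lsinventory
```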
Notes :
i.) If we are using a Data Guard physical standby database, we must install this patch on both the primary database and the physical standby database.
ii.) While applying patches, take care of the mount point status; there should be sufficient space.
Compatibility Matrix
Database upgrades are common but risky tasks for a DBA if not done properly. Here, I am listing a detailed upgrade method with verification and validation.
There is a minimum version of the Oracle database software that can be directly upgraded to Oracle 11g Release 2, so the DBA needs to check this before the upgrade.
Database software versions below that minimum require an indirect upgrade path. In this case the DBA needs to make double the effort, because two upgrades are needed.
Log in to the system as the owner of the Oracle Database 11g Release 2 (11.2) Oracle home directory.
Copy the Pre-Upgrade Information Tool (utlu112i.sql) and utltzuv2.sql from the Oracle Database 11g Release 2 (11.2)
ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the temporary directory on
your system.
$ORACLE_HOME/rdbms/admin/utlu112i.sql
Change to the directory where utlu112i.sql was copied in the previous step. Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges. Then run and spool the utlu112i.sql file. Please note that the database should be started using the source Oracle Home.
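A sketch of running the Pre-Upgrade Information Tool with spooling (the spool file name is illustrative):

```sql
SQL> SPOOL upgrade_info.log
SQL> @utlu112i.sql
SQL> SPOOL OFF
```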
Check the spool file and examine the output of the upgrade information tool.
Check for the integrity of the source database prior to starting the upgrade by downloading and running dbupgdiag.sql
script from below My Oracle Support article
Note 556610.1 Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql). (Skip this step if you don't have support access.)
If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to
validate the invalid objects in the database, until there is no change in the number of invalid objects.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> @utlrp.sql
After validating the invalid objects, re-run dbupgdiag.sql in the database once again and make sure that everything is
fine.
5. Optimizer Statistics:
When upgrading to Oracle Database 11g Release 2 (11.2), optimizer statistics are collected for dictionary tables that lack
statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but
statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade
$ lsnrctl stop
Connect to RMAN:
RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT '%U' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO '';
}
Note: Once the Parameter file is modified as per your requirement, copy the file to $ORACLE_HOME/dbs (11g Oracle
Home )
11. Set Environment Variables:
If your operating system is UNIX then complete this step, else skip to next Step.
1. Make sure the following environment variables point to the Oracle database software 11g Release directories:
- ORACLE_BASE
- ORACLE_HOME
- PATH
$ export ORACLE_HOME=
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_BASE=
Note: If ORACLE_BASE is not known, after setting PATH towards the 11g Oracle Home, execute 'orabase', which prints the location of the base.
$ orabase
/u01/app/oracle
2. Update the oratab entry, to set the new ORACLE_HOME pointing towards ORCL and disable automatic startup
Sample /etc/oratab
#orcl:/opt/oracle/product/10.2/db_1:N
orcl:/opt/oracle/product/11.2/db_1:N
Note: After /etc/oratab is updated with the SID and the Oracle Home (11.2), you can execute oraenv (/usr/local/bin/oraenv) to set the environment. The input has to be the SID entered in /etc/oratab against the 11g home.
At the operating system prompt, change to the $ORACLE_HOME/rdbms/admin directory of 11gR2 Oracle Home.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> startup UPGRADE
Set the system to spool results to a log file for later verification after the upgrade is completed and start the upgrade
script.
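In 11g Release 2 the upgrade script is catupgrd.sql. A sketch, assuming the current directory is $ORACLE_HOME/rdbms/admin and the spool file name is illustrative:

```sql
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql
SQL> SPOOL OFF
```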
These measures are an important final step to ensure the integrity and consistency of the newly upgraded Oracle
Database software. Also, if you encountered a message listing obsolete initialization parameters when you started the
database for upgrade, then remove the obsolete initialization parameters from the parameter file before restarting. If
necessary, convert the SPFILE to a PFILE so you can edit the file to delete parameters.
Check for the integrity of the upgraded database by running dbupgdiag.sql script from below Metalink article
Note 556610.1 Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)
If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to
validate the invalid objects in the database, until there is no change in the number of invalid objects.
After validating the invalid objects, re-run dbupgdiag.sql in the upgraded database once again and make sure that
everything is fine.
For the upgraded instance(s), modify the ORACLE_HOME parameter in the listener configuration to point to the new ORACLE_HOME, then start the listener:
$ lsnrctl start
1. Make sure the following environment variables point to the Oracle 11g Release directories:
- ORACLE_BASE
- ORACLE_HOME
- PATH
Also check that your oratab file and any client scripts that set the value of ORACLE_HOME point to the Oracle Database software 11g Release 2 (11.2) home.
Note : If you are upgrading a cluster database, then perform these checks on all nodes in which this cluster database
has instances configured.
- If you changed the CLUSTER_DATABASE parameter prior to the upgrade, set it back to TRUE.
- Migrate your initialization parameter file to a server parameter file.
This creates an spfile as a copy of the init.ora file located in $ORACLE_HOME/dbs (UNIX) or %ORACLE_HOME%\database (Windows).
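The migration to a server parameter file can be sketched as:

```sql
-- Create the spfile from the pfile in the default location
SQL> CREATE SPFILE FROM PFILE;

-- Restart so the instance picks up the new spfile
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
```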
Oracle Enterprise Manager (OEM) is a web-based tool for managing the Oracle database. OEM is used to perform administrative tasks and view performance statistics.
How to use Database Control
a) $ORACLE_HOME/bin/emctl start dbconsole [to start DB Control]
b) $ORACLE_HOME/bin/emctl status dbconsole [to check the status of DB Control]
c) $ORACLE_HOME/bin/emctl stop dbconsole [to stop DB Control]
If you did not install OEM during the Oracle Database 11g installation, you need to download and install it yourself; that procedure is not covered here. This section covers only the options for checking the status of, starting, and stopping OEM.
Checking OEM status
Starting OEM
Stopping OEM