[Diagram: user request flow between RAM and the hard disk]
1. Whenever the user sends a request, a primary search for the data is done in RAM. If the
information is available, it is given to the user
2. Otherwise, a secondary search is done in the hard disk and a copy of that information is placed in
RAM before it is given to the user
3. The request and response between RAM and the hard disk is called an I/O operation
4. Larger files are fitted into RAM using swapping (flushing old data using the Least Recently
Used algorithm)
Note: The primary goal of a DBA is to reduce response time, thereby increasing performance, and also
to avoid I/O (all these 3 are interlinked)
Note: Any request and response between memory and disk is called I/O
[Diagram: user connecting to an Oracle instance – SGA components (data dictionary cache, Java pool, stream pool, LRU end), parameter file, password file and the database]
1. Whenever a user starts an application, a user process will be created on the client side. E.g.: the
sqlplusw.exe process will be started when a user clicks on the sqlplus executable on a Windows
operating system
2. This user process will send a request to establish a connection to the server by providing login
credentials (sometimes a host string as well)
3. On the server side, the Listener service will accept all incoming connections and will hand over
the user information (like username, password, IP address, network details etc.) to a background process
called PMON (process monitor)
4. PMON will then perform authentication of the user using base tables. For this it will do a
primary search in the data dictionary cache and, if a copy of the base table is not available there, it
will copy it from the database
5. Once authenticated, the user will receive an acknowledgement statement. This can be either a
successful / unsuccessful message
6. If the connection is successful, PMON will create a server process and memory will be allocated to that
server process, which is called the PGA (Program Global Area)
7. Server process is the one which will do work on behalf of user process
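As an illustration of the above flow, a client-side connection request could look like the following (scott/tiger and the host string prod are placeholder values, not taken from these notes):
$ sqlplus scott/tiger@prod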
BASE TABLES
1. Base tables store information that is helpful for database functionality. This info is also called
dictionary information
2. Base tables will be in the form of XXX$ (i.e. the name is suffixed with a $ sign) and will reside in the
SYSTEM tablespace
3. Information in base tables will be in a cryptic format and because of this we can access but cannot
understand the data inside them
4. An attempt to modify base tables (performing DML or DDL operations) may lead to database
corruption. Only Oracle processes have the authority to modify them
5. Base tables will be created at the time of database creation using the SQL.BSQ script
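As an illustration, base tables can be viewed (but should never be modified) from a sysdba session; OBJ$ is one such base table:
SQL> conn / as sysdba
SQL> select obj#, name from obj$ where rownum <= 5;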
Every SQL statement will be processed in the following phases
a. PARSING : This phase will perform following actions
i. Syntax checking of the statement
ii. Semantic checking of the statement i.e. checking for the privileges using base
tables
iii. Dividing the statement into literals
b. EXECUTION : This phase will perform following actions
i. Converting the statement into ASCII format
ii. Compiling the statement
iii. Running or executing the statement
c. FETCH : Data will be retrieved in this phase
Note: For a PL/SQL program, BINDING will happen after PARSING phase (so it will have 4 phases to go)
1. Server process will receive the statement sent by user process on server side and will handover
that to library cache of shared pool
2. The 1st phase of sql execution i.e. Parsing will be done in library cache
3. Then, OPTIMIZER (brain of oracle sql engine) will generate many execution plans, but chooses
the best one based on time & cost (time – response time, cost – cpu resource utilization)
4. Server process will send the parsed statement with its execution plan to PGA and 2nd phase i.e.
EXECUTION will be done there
5. After execution, the server process will start searching for the data from the LRU end of the LRU list and this
search will continue until it finds the data or reaches the MRU end. If it finds the data, it will be given to the
user. If it doesn't find any data, it means the data is not there in the database buffer cache
6. In such cases, server process will copy data from datafiles to MRU end of LRU list of database
buffer cache
7. From MRU end again blocks will be copied to PGA for filtering required rows and then it will be
given to user (displayed on user’s console)
Note: the server process will not start searching from the MRU end because there may be a chance of missing
the data by the time it reaches the LRU end while searching
Note: for statements issued for the second time, parsing and fetch phases are skipped, subject to the
availability of data and parsed statement in the instance
1. The following are the logical structures of database and will be helpful in easy manageability of
the database
a. TABLESPACE – an alias name to a group of datafiles (or) group of segments (or) a space
where tables reside
b. SEGMENT – group of extents (or) object that occupies space
c. EXTENT – group of oracle data blocks (or) memory unit allocated to the object
d. ORACLE DATA BLOCK – basic unit of data storage (or) group of operating system blocks
2. The following tablespaces are mandatory to exist in 10g database
a. SYSTEM – stores base tables (dictionary information)
b. SYSAUX – auxiliary tablespace to SYSTEM which also stores base tables required for
reporting purpose
c. TEMP – used for performing sort operations
d. UNDO – used to store before images helpful for rollback of transaction or instance
recovery
Note: An Oracle 9i database should have all the above tablespaces except SYSAUX. SYSAUX was introduced in 10g to
reduce the burden on the SYSTEM tablespace
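A quick way to check the tablespaces present in a database (the view and columns are standard dictionary objects):
SQL> select tablespace_name, contents from dba_tablespaces;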
1. Server process performs parsing in library cache by taking the statement information. Optimizer
will generate the best execution plan based on time & cost
2. By following the execution plan statement will get executed in PGA
3. After execution, the server process will search for the data in the LRU list. If it exists, it will copy an undo
block to the LRU list. If the data is not found, then it will copy both the data block and an undo block to the LRU list.
4. From there those blocks will be copied to the PGA where the modifications are done, by which redo
entries are generated in the PGA; these are copied to the redolog buffer cache by the server process
Note: A single atomic change to the database is called a redo entry or redo record or change
vector. E.g.: if 100 rows are modified, then we will have 200 redo entries (a change to the data block and a change to the undo block for each row)
5. Modification is done by copying the previous image from the data block to the undo block, and the new value
is inserted into the data block, thus making both blocks DIRTY
6. The dirty blocks will be moved to write list from where DBWRn will write them to corresponding
datafiles. But before DBWRn writes, LGWR writes the content of log buffer cache to redolog files
1. DDL statement processing is the same as DML processing, as internally all DDL statements are DML statements
against the base tables
2. For every DDL statement, base tables will get modified with update/delete/insert statements.
Because of this reason, undo will be generated in the case of DDL also
USER PROCESS – It is the process which places request from client side and will be created when user
starts any application
SERVER PROCESS – It is the process which will do work on behalf of user process on server side
PROGRAM GLOBAL AREA (PGA) –
1. It is the memory area allocated to the server process to perform execution of the SQL statement &
to store session information
2. The size of memory allocated will be defined using PGA_AGGREGATE_TARGET
3. Before 9i, PGA is configured using
a. WORK_AREA_SIZE
b. BITMAP_WORK_AREA
c. SORT_AREA_SIZE
d. HASH_AREA_SIZE etc
4. Sorting takes place in the PGA if the data is small in size. This is called an in-memory sort.
5. If the data size is larger than the sort area size of the PGA, Oracle will use both the PGA and the TEMP tablespace,
which requires a number of I/Os, and database performance will automatically get degraded.
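A minimal sketch of setting the PGA size and checking whether sorts are happening in memory or on disk (the 500M value is illustrative only):
SQL> alter system set pga_aggregate_target=500M;
SQL> select name, value from v$sysstat where name in ('sorts (memory)', 'sorts (disk)');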
ORACLE INSTANCE – It is a way through which users will access / modify data in the database. It is a
combination of memory structures and background processes
SHARED GLOBAL AREA (SGA) – It is the memory area which contains several memory caches helpful in
reading and writing data
SHARED POOL – It is the memory area which contains the library cache (where parsing of SQL statements is done) and the data dictionary cache (where a copy of base table information is kept)
DATABASE BUFFER CACHE (DBC) –
1. It is the memory area where a copy of the data is placed in the LRU list
2. The status of block in DBC will be any of the following status
a. UNUSED – block which is never used
b. FREE – block which is used already but currently it is free
c. PINNED – block currently in use
LOG BUFFER CACHE – It is the memory area where a copy of the redo entries is maintained and its size is
defined by LOG_BUFFER
Note: LBC is allotted the smallest size of any memory component in the SGA
LARGE POOL –
JAVA POOL – It is memory area used to run java executables (like JDBC driver) and size is defined using
JAVA_POOL_SIZE
STREAM POOL –
1. It is the memory area used when replicating a database using oracle streams
2. The stream pool was introduced in 10g and its size can be defined using STREAM_POOL_SIZE
3. If the stream pool is not defined and streams are used, then 10% of memory from the shared pool will
be used. This may affect database performance
3. It will release the temporary segments occupied by transactions when they are completed (a
more detailed post is available @ http://pavandba.wordpress.com/2010/04/20/how-temporary-tablespace-works/ )
1. We know that LGWR will write redo entries into redolog files. But if more and more
redo entries are generated (for huge transactions), the redolog file size increases and even terabytes of
storage would not be sufficient.
2. To overcome this, Oracle designed its architecture so that LGWR will write into 2 or more
redolog files in a cyclic order (shown in the diagram below)
3. While doing this, certain events will be triggered, which are listed below
[Diagram: LGWR writing to redolog member 1 and redolog member 2 in cyclic order – LOG SWITCH]
LGWR moving from one redolog file to another is called a LOG SWITCH. At the time of a log switch, the
following actions will take place
- Checkpoint event will occur – this tells that committed data should be made permanent in the
datafiles. (E.g.: it's just like the automatic saving of an email while composing in Gmail)
- CKPT process will update the latest SCN to the datafile headers and controlfiles by taking the info
from the redolog files
- DBWRn will write the corresponding dirty blocks from the write list to the datafiles
- ARCHn process will generate archives (copies of the online redolog files) only if the database is in
archivelog mode
Note: A checkpoint event does not occur only at a log switch. It can also occur at a repeated interval and this is
decided by the parameter LOG_CHECKPOINT_INTERVAL (till 8i) or FAST_START_MTTR_TARGET (from 9i)
EXAMPLE : 1
[Diagram: two redolog files containing committed transactions 1, 2 and 3 and ongoing transaction 4; controlfile SCN = 3]
- In the above diagram, assume that transactions 1, 2 and 3 are committed and 4 is going on. Also,
as a checkpoint occurred at the log switch, the complete data of 1 and 2, and also part of 3, was written to
the datafiles
- Assume LGWR is writing to the second file for transaction 4 and an instance crash occurred.
- During recovery, SMON will start comparing the SCN between the datafile header and the redolog file.
Also it will check for the commit point.
- In the above example, 1 & 2 are written and committed. So there is nothing to recover. But for 3,
only half the data is written. To write the other half, SMON will initiate DBWRn. But DBWRn will be
unable to do that as the dirty blocks were cleaned out from the write list (due to the instance restart)
- Now DBWRn will take the help of the server process, which will regenerate the dirty blocks with the
help of the redo entries that are already written to the redolog files
EXAMPLE : 2
[Diagram: two redolog files; uncommitted transaction 3 spans a log switch; controlfile SCN = 3]
- In the above example, transaction 3 is not yet committed, but because a log switch occurred and the
checkpoint event was triggered, part of its data was written to the datafiles
- Assume an instance crash occurred and SMON is performing instance recovery
- SMON will start comparing SCNs as usual and when it comes to 3 it identifies that the data is written to the
datafiles, but 3 is actually not committed. So this data needs to be reverted. Again it will ask
DBWRn to take up this job
- DBWRn in turn takes help from the server process, which will regenerate the blocks with old values with
the help of the undo tablespace
Note: For roll forward, redo entries from the redolog files will be used, whereas for rollback, before images
from the undo tablespace will be used
Note: whenever any user performs DML transactions on a table, Oracle will apply a lock. This is to
maintain read consistency
DBWRn – It is responsible for writing dirty buffers from the write list to the datafiles and it will do this action in the
following situations
LGWR – It is responsible for writing redo entries from the log buffer cache to the redolog files and it will perform
this in the following situations
CKPT – it will update the latest SCN to the control files and datafile headers by taking information from the
redolog files. This will happen at every log switch
ARCHn – It will generate offline redolog files (archives) in the specified location. This will be done only if the database is
in archivelog mode
REDOLOG FILES – files contains redo entries which are helpful in database recovery. To avoid space
constraints oracle will create two or more redolog files and LGWR will write into them in a cyclic order
[Diagram: LGWR writing to redo log file 1 and redo log file 2 in a cyclic order]
CONTROL FILES – These files will store crucial database information like the database name, the locations of datafiles and redolog files, the latest SCN and backup information
ARCHIVED REDOLOG FILES – These files will be created by ARCHn process if archivelog mode is enabled.
The size of archives will be equal or less than redolog files
PARAMETER FILE
1. This file contains parameters that will define the characteristics of database.
2. It will be in the form of init<SID>.ora [SID – instance name] and resides in ORACLE_HOME/dbs
(on unix) and ORACLE_HOME/database (on windows)
3. Parameters are divided into two types
a. Static parameters – the value for these parameters cannot be changed when the
database is up and running
b. Dynamic parameters – the value for these parameters can be changed even when DB is
up and running
4. The difference between static and dynamic parameters can be seen from the ISSES_MODIFIABLE and
ISSYS_MODIFIABLE columns of the V$PARAMETER view
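For example, a parameter can be checked as below (the parameter names used here are just illustrations):
SQL> select name, isses_modifiable, issys_modifiable from v$parameter where name in ('db_cache_size', 'log_buffer');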
Note: Instance name can be different from database name. This is to provide security
1. This file is a binary copy of the pfile which provides different scopes for changing parameter values
a. Scope=spfile -> changes the values from the next startup
b. Scope=memory -> changes the values immediately (values will revert at the next startup)
c. Scope=both -> changes the values immediately and also makes them permanent from the next
startup
2. Spfile will be in the form of spfile<SID>.ora and resides in ORACLE_HOME/dbs (on unix) and
ORACLE_HOME/database (on windows)
3. We can create spfile from pfile or pfile from spfile using below commands
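The commands referred to above are typically the following (run as sysdba):
SQL> create spfile from pfile;
SQL> create pfile from spfile;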
1. In 9i, the SGA was made dynamic i.e. the sizes of SGA components can be changed without
shutting down the database (not possible in 8i)
2. Many times DBAs faced problems in calculating correct memory sizes, which led to performance
problems at the instance level, and DBAs were more involved in handling instance tuning issues. To
avoid this, Oracle 10g introduced ASMM (Automatic Shared Memory Management)
3. The following memory components are automatically sized when using ASMM
a. Shared pool
b. database buffer cache
c. large pool
d. java pool
e. stream pool
Note: LOG_BUFFER will not be automatically sized in any version. It’s a static parameter
4. Using ASMM, we can define the total memory for the SGA and Oracle will decide how much to distribute
to each cache. This is possible by setting the SGA_TARGET parameter (new in 10g)
Note: to enable ASMM, we should define STATISTICS_LEVEL = TYPICAL (default) or ALL. To know more
about ASMM, read http://pavandba.wordpress.com/2009/10/24/63/
5. The maximum size for the SGA is defined by SGA_MAX_SIZE. Depending on the transaction load, the SGA size will
vary from SGA_TARGET to SGA_MAX_SIZE (see the example after this list)
6. It has been observed that individual parameters are also defined in some 10g databases, which
means those values will act as minimum values, the SGA_TARGET value will act as the medium value and
SGA_MAX_SIZE as the maximum value
7. Oracle 10g introduced new background process MMAN in order to manage the memory for SGA
components
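A minimal sketch of enabling ASMM as per points 4 and 5 above (the sizes are illustrative; SGA_MAX_SIZE is static, so a restart is needed for it):
SQL> alter system set sga_max_size=1G scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> alter system set sga_target=800M scope=both;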
Note: Memory allocation to both SGA and PGA has been made automated in 11g using
MEMORY_TARGET and MEMORY_MAX_TARGET parameters
Note: SGA size is not at all dependent on database size and will be calculated based on the transactions
hitting the database
1. Alert log is the monitoring file which records following information useful for DBA in diagnosing
the errors in the database
a. All oracle related errors
b. Every startup and shutdown timestamp
c. Non-default parameters
d. Archivelog generation information
e. Checkpoint information (optional) etc
2. The alert log file specifies the error message only in brief, but it will point to a file which will have
more information about the error. These files are called trace files
3. Trace files are of 3 types – background trace files, core trace files and user trace files
4. If any background process fails to perform its job, it will throw an error and a trace file will be generated
(called a background trace file) in the ORACLE_HOME/admin/SID/bdump location
5. For all operating-system-related errors with Oracle, trace files will be generated (called core
trace files) in the ORACLE_HOME/admin/SID/cdump location
6. For all user related errors, trace files (called user trace files) will be generated in
ORACLE_HOME/admin/SID/udump location
7. The default location of these files can be changed by defining following parameters
a. BACKGROUND_DUMP_DEST
b. CORE_DUMP_DEST
c. USER_DUMP_DEST
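The current values of these locations can be checked as below:
SQL> show parameter dump_dest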
Note: The above 3 parameters are replaced with a single parameter DIAGNOSTIC_DEST in Oracle 11g
1. Reading and writing to a hard disk will be done with the help of an I/O header
2. For a hard disk there exists only one I/O header
3. As per OFA, Oracle recommends storing all the physical files separately on different hard drives
4. In such a case, different I/O headers will be working for different hard drives, thereby increasing
database performance
5. If not possible, at least we should separate redolog files, archivelog files and controlfiles from
datafiles
STARTUP PHASES
SPFILE or PFILE
Note: A database can be started without issuing shutdown command using SQL> startup force
SHUTDOWN TYPES
Mode          | New users can connect? | Existing users can issue new transactions? | Current transactions by existing users complete? | Checkpoint occurs?
NORMAL        | No                     | Yes                                        | Yes                                              | Yes
TRANSACTIONAL | No                     | No                                         | Yes                                              | Yes
IMMEDIATE     | No                     | No                                         | No                                               | Yes
ABORT         | No                     | No                                         | No                                               | No
[Diagram: shared server architecture – dispatcher, request queue, shared server processes, SGA with UGA, parameter file, password file and database]
1. Multiple user requests will be received by the dispatcher, which places them in the request
queue
2. Shared server processes will take information from the request queue and it will be processed
inside the database
3. The results will be placed in the response queue, from where the dispatcher will send them to the
corresponding users
4. Instead of the PGA, statements will get executed in the UGA (User Global Area) in shared server
architecture
5. Shared server architecture can be enabled by specifying following parameters
a. DISPATCHERS
b. MAX_DISPATCHERS
c. SHARED_SERVER_PROCESSES
d. MAX_SHARED_SERVER_PROCESSES
e. CIRCUITS and MAX_CIRCUITS (optional)
6. This architecture should be enabled only if ORA-04030 or ORA-04031 errors are observed
frequently in the alert log file
7. To make shared server architecture effective, SERVER=SHARED should be mentioned in the
client TNSNAMES.ORA file (a sample entry is shown after this list)
8. A single dispatcher can handle 20 user requests, whereas a single shared server process
can handle 16 requests concurrently
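A sample TNSNAMES.ORA entry requesting a shared server connection (the alias and service name are illustrative; the host reuses the one shown in the listener example later in these notes):
PROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
(CONNECT_DATA = (SERVER = SHARED)(SERVICE_NAME = prod))
)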
Note: SMON can have 16 slave processes and DBWRn can have 20 slave processes working
concurrently
Note: startup and shutdown are not possible if sysdba connects through a shared server
connection
[Diagram: instance and database – server process, shared pool (data dictionary cache), database buffer cache (write list and LRU list), large pool, Java pool, parameter file, password file, and the physical files: datafiles, redolog files, control files and archived redolog files]
1. Server process will receive the statement sent by user process on server side and will handover
that to library cache of shared pool
2. The 1st phase of sql execution i.e Parsing will be done in library cache
3. Then, OPTIMIZER (brain of oracle sql engine) will generate many execution plans, but chooses
the best one based on time & cost (time – response time, cost – cpu resource utilization)
4. Server process will send the parsed statement with its execution plan to PGA and 2nd phase i.e
EXECUTION will be done there
5. After execution, the server process will start searching for the data from the LRU end of the LRU list and this
search will continue until it finds the data or reaches the MRU end. If it finds the data, it will be given to the
user. If it doesn't find any data, it means the data is not there in the database buffer cache
6. In such cases, server process will copy data from datafiles to MRU end of LRU list of database
buffer cache
7. From the MRU end, the rows pertaining to the requested table will be filtered and placed in the SERVER
RESULT CACHE along with the execution plan id, and then it will be given to the user (displayed on the user's
console)
Note: for statements issued a second time, the server process will get the parsed tree and plan id from the
library cache and go straight to the server result cache to compare the plan id. If the plan id
matches, the corresponding rows will be given to the user. So, in this case, it skips all 3 phases of SQL
execution, by which the response time is much faster than in a 10g database.
6. If we specify the MEMORY_TARGET parameter, Oracle will allocate 0.25% of MEMORY_TARGET as the
result cache. If we specify SGA_TARGET (which is of 10g), the result cache will be 0.5% of SGA_TARGET.
If we use individual parameters (like in 9i), the result cache will be 1% of the shared pool size
7. When any DML/DDL statement modifies table data or structure, the data in the result cache will become
invalid and needs to be processed again
Note: http://pavandba.wordpress.com/2010/07/15/how-result-cache-works/
You can improve the response times of frequently executed SQL queries by using the result cache. The
result cache stores results of SQL queries and PL/SQL functions in a new component of the SGA called
the Result Cache Memory. The first time a repeatable query executes, the database caches its results.
On subsequent executions, the database simply fetches the results from the result cache instead of
executing the query again. The database manages the result cache. You can turn result caching on only
at the database level. If any of the objects that are part of a query are modified, the database invalidates
the cached query results. Ideal candidates for result caching are queries that access many rows to return
a few rows, as in many data warehousing solutions.
The result cache consists of two components, the SQL Query Result Cache that stores SQL query results
and the PL/SQL Function Result Cache that stores the values returned by PL/SQL functions, with both
components sharing the same infrastructure. I discuss the two components of the result cache in the
following sections.
The result cache is always enabled by default, and its size depends on the memory the database
allocates to the shared pool. If you specify the MEMORY_TARGET parameter for allocating memory,
Oracle allocates 0.25% of the MEMORY_TARGET parameter value to the result cache. If you specify the
SGA_TARGET parameter instead, Oracle allocates 0.5% of the SGA_TARGET value to the result cache.
You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE
initialization parameter. This parameter can range from a value of zero to a system-dependent
maximum. You disable result caching by setting the parameter to zero, as shown here:
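The command referred to here would be:
SQL> alter system set result_cache_max_size=0;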
Since result caching is enabled by default, it means that the RESULT_CACHE_MAX_SIZE parameter has a
positive default value as well, based on the size of the MEMORY_TARGET parameter (or the
SGA_TARGET parameter if you have that parameter instead). In addition to the
RESULT_CACHE_MAX_SIZE parameter, two other initialization parameters have a bearing on the
functioning of the result cache: the RESULT_CACHE_MAX_RESULT parameter specifies the maximum
amount of the result cache a single result can use. By default, a single cached result can occupy up to 5
percent of the result cache, and you can specify a percentage between 1 and 100. The
RESULT_CACHE_REMOTE_EXPIRATION parameter determines the length of time for which a cached
result that depends on remote objects is valid. By default, this parameter is set to zero, meaning you
aren’t supposed to use the result cache for queries involving remote objects. The reason for this is over
time remote objects could be modified, leading to invalid results in the cache.
Whether the database caches a query result or not depends on the value of the RESULT_CACHE_MODE
initialization parameter, which can take two values: MANUAL or FORCE. Here’s how the two values
affect result caching behavior in the database:
1. If you set the parameter to FORCE, the database will try to use the cache for all results,
wherever it’s possible to do so. You can, however, skip the cache by specifying
NO_RESULT_CACHE hint within a query.
2. If you set the parameter to MANUAL, the database caches the results of a query only if you
include the RESULT_CACHE hint in the query.
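A hypothetical query using the hint (the table and column names are placeholders, not from these notes):
SQL> select /*+ RESULT_CACHE */ deptno, count(*) from emp group by deptno;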
By default, the RESULT_CACHE_MODE parameter is set to MANUAL and you can change the value
dynamically as shown here:
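The dynamic change referred to here can be done at session or system level, for example:
SQL> alter session set result_cache_mode = FORCE;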
Using the RESULT_CACHE hint as a part of a query adds the ResultCache operator to a query's execution
plan. The ResultCache operator will search the result cache to see whether there's a stored result in
there for the query. It retrieves the result if it's already in the cache; otherwise, the ResultCache
operator will execute the query and store its results in the result cache. The NO_RESULT_CACHE hint
works the opposite way. If you add this hint to a query, it'll lead the ResultCache operator to bypass the
result cache and reexecute the query to get the results.
Note: The RESULT_CACHE and the NO_RESULT_CACHE hints always take precedence over the value you
set for the RESULT_CACHE_MODE initialization parameter.
You can use the following views to manage the result cache:
• V$RESULT_CACHE_DEPENDENCY: Lists the dependency information between the cached results and
dependencies
You can’t cache results in the SQL Query Result Cache for the following objects:
1. Temporary tables
2. Dictionary tables
3. Nondeterministic PL/SQL functions
4. The CURRVAL and NEXTVAL pseudocolumns
5. The SYSDATE, SYSTIMESTAMP, CURRENT_DATE, CURRENT_TIMESTAMP, LOCALTIMESTAMP,
USERENV, SYS_CONTEXT, and SYS_GUID functions
6. You also won’t be able to cache subqueries, but you can use the RESULT_CACHE hint in an inline
view.
If you are using any OCI applications and drivers such as JDBC and ODP.NET, you can also use Oracle's
client-side caching of SQL result sets in the Client Query Result Cache, which is located on the client. The
database keeps the result sets consistent with changes in session attributes. If you’ve frequently
repeated statements in your applications, client-side caching could offer tremendous improvement in
query performance benefits. Since the database caches results on the clients, server round-trips are
minimized and scalability improves as a result, with lower I/O and CPU load.
Unlike server-side caching, client-side caching isn’t enabled by default. If your applications produce small
result sets that are static over a period of time, client-side caching may be a good thing to implement.
Frequently executed queries and queries involving lookup tables are also good candidates for client-side
caching.
As with server-side caching, you use the RESULT_CACHE_MODE initialization parameter to enable and
disable client-side caching. The RESULT_CACHE and the NO_RESULT_CACHE hints work the same way as
they do for server-side caching. If you choose to specify the MANUAL setting for the
RESULT_CACHE_MODE parameter, you must use the RESULT_CACHE hint in a query for the query’s
results to be cached. Also, the two hints override the setting of the RESULT_CACHE_MODE parameter,
as in the case of server-side caching. You pass the RESULT_CACHE and the NO_RESULT_CACHE hints to
SQL statements by using the OCIStmtPrepare() and the OCIStmtPrepare2() calls.
There are two initialization parameters that control how the Client Query Result Cache works. Here’s a
brief description of these parameters:
1. CLIENT_RESULT_CACHE_SIZE: Determines the maximum client per-process result set cache size
(in bytes). If you set this parameter to zero, you disable the Client Query Result Cache. The
database allocates the maximum-size memory to every OCI client process by default.
Note: You can override the setting of the CLIENT_RESULT_CACHE_SIZE parameter with the server-side
parameter OCI_RESULT_CACHE_MAX_SIZE. By setting the latter to zero, you can disable the Client Query
Result Cache.
2. CLIENT_RESULT_CACHE_LAG: Determines the Client Query Result Cache lag time. A low value
means more round-trips to the database from the OCI client library. Set this parameter to a low
value if your application accesses the database infrequently.
Restrictions
You can’t cache queries that use the following types of objects, even though you may be able to cache
them in a server-side result cache:
• Views
• Remote objects
• Flashback queries
1. This is a new feature in 11g which enables the DBA to manage both SGA and PGA automatically by
setting the MEMORY_TARGET and MEMORY_MAX_TARGET parameters.
2. MEMORY_TARGET = SGA_TARGET + PGA_AGGREGATE_TARGET
3. MEMORY_TARGET is a dynamic parameter, so its value can be changed at any time, whereas
MEMORY_MAX_TARGET is static
4. We can check memory sufficiency and tune it by taking advice from
V$MEMORY_TARGET_ADVICE
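A minimal sketch of setting these parameters (the sizes are illustrative; MEMORY_MAX_TARGET is static, so the values take effect after a restart):
SQL> alter system set memory_max_target=2G scope=spfile;
SQL> alter system set memory_target=1500M scope=spfile;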
Connect to the server system by configuring the required host name. Here we require a middleware tool called
PuTTY. It is used to connect to the server from the client.
Now add the group oinstall. This group will allow the user to install the Oracle software.
Add the dba group. This group allows users to perform database administration.
Now add the required user with primary group oinstall and secondary group dba.
Now set a password for the user. Here we have to enter the password two times, once for new and once for
verification.
In 10g, we have to configure the kernel parameters. These kernel parameters are set to define the
limits on resource usage by Oracle.
fs.file-max=65536. This defines the maximum number of file handles that can be opened.
net.core.rmem_default=262144
net.core.rmem_max=262144
net.core.wmem_default=262144
net.core.wmem_max=262144
WinSCP: It is a third-party tool which is used to copy software from the Windows box to the Linux box.
Configuring WinSCP: Open WinSCP and give the Linux box IP address and username followed by the
password as shown below. Then click the Login button.
Then the following wizard will appear. It will show the Windows box contents on the left and the Linux box contents on
the right.
Then there comes a wizard. In the transfer settings button click "binary" and then go for copy.
Now switch to the user and go to the location where the unzipped file is located.
VNC Server: It is software which is used to invoke a graphical user interface (GUI) on a Unix box.
Configuring VNC server:
Now switch to the root user. Start the VNC server by giving the command 'service vncserver start'.
Again switch to the user, go to the /usr/bin location and type the ls -ltr vnc* command to list the VNC files.
Type the same name or IP address followed by a serial number after the colon and click OK.
Now go to the unzipped file location. There we can find a directory named database.
Find the installer file by giving the ls -ltr command and then run the installer.
In this wizard go for the 'install software only' option, as we require to install the software only.
Now open the user's bash profile with the vi editor and export ORACLE_HOME, PATH and ORACLE_SID.
Now enter dbca in the user's VNC session and wait for some time.
Here we have to choose a template based on the requirement. I go for general purpose.
In the management options, uncheck the 'configure the database with Enterprise Manager' option and click next.
Here we have to specify the passwords. I go for the same password for all accounts.
This wizard specifies the file storage mechanism. Go for file system.
Here we have to give the location for the database files. I use a common location for all database files.
In this wizard, we have to specify the recovery area. I go for flash recovery area. Then click next.
command: . .bash_profile
$ sqlplus "/ as sysdba"
Now open PuTTY, provide the required server IP address and click Open.
Adding groups: From the root user type the below command to add the oinstall group.
- The dba group will allow the user to perform database administration actions.
- Here I am adding the user with primary group oinstall and secondary group dba.
Now give ownership of all the mount points to the user by giving the below command.
WinSCP: It is a third-party tool which is used to copy software from the Windows box to the Linux box.
Configuring WinSCP: Open WinSCP and give the Linux box IP address and username followed by the
password as shown below. Then click the Login button.
Then the following wizard will appear. It will show the Windows box contents on the left and the Linux box contents on
the right.
Go to the required location and simply drag and drop the software.
Then there comes a wizard. In the transfer settings button click "binary" and then go for copy.
Now switch to the bhanu user, go to the software location and unzip the files in sequence order.
VNC Server: It is software which is used to invoke a graphical user interface (GUI) on a Unix box.
Configuring VNC server:
- Switch to the root user and type the command 'service vncserver start' as shown.
- Open the VNC viewer and give the above host name or IP address followed by a number after the colon.
It prompts for a password. Give the same password which we gave earlier when creating the authority file.
Uncheck the security updates box and click Yes even if we get an error.
- In 11g there is a bug in creating the database later. To avoid this we choose this option.
Here go for the advanced installation, as we can get more features in this type of installation.
In this wizard we have to specify the language for the product. I go for English.
- In the character sets tab go for the Unicode option as it provides multiple language support.
- Here specify the database location and the type of storage. Go for file system.
- Here we can set different passwords for different users, but I go for the same password for all.
- After some time it prompts us to perform some prerequisite steps. Click the 'fix & check
again' button.
- From the root user go to that location and execute the script with 'sh root.sh' as shown below.
1. When we install oracle software, sqlplus or any other oracle commands will not work.
To make them work we need to set environment variables in .bash_profile file
2. Below are the steps to do the same
[oracle@pc1 ~] $ cd /u02/oraInventory/contentsXML
From the above command, we will get oracle home name and path. We need to set
ORACLE_HOME path and bin location in .bash_profile file as follows
[oracle@pc1 ~] $ vi .bash_profile
export ORACLE_HOME=/u02/ora11g
export PATH=$PATH:$ORACLE_HOME/bin:/usr/bin
1. Copy any existing database pfile to a new name. If no database exists on this server, use
pfile of another database which is residing on another server
2. Open pfile with vi editor and do necessary changes like changing database name, dump
locations etc and save it
3. Create necessary directories as mentioned in pfile
4. Copy the database creation script to the server and edit it to your need
5. Export the SID
$ export ORACLE_SID=dev
6. Start the instance in nomount phase using the pfile
SQL> startup nomount
7. Execute the create database script (a minimal sample script is sketched below, after the note)
SQL> @db.sql
8. Once database is created, it will be opened automatically
9. Execute the catalog.sql and catproc.sql scripts
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
10. Finally add this database entry to oratab file
Note: Sometimes we may get the error "Oracle Instance terminated. Disconnection forced". This is
due to the reason that the undo tablespace name mentioned in the pfile is different from the one
mentioned in the database creation script
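A minimal sketch of what the db.sql script from step 7 might contain (all names, paths and sizes are illustrative; the undo tablespace name must match the one in the pfile, as the note above says):
CREATE DATABASE dev
DATAFILE '/u02/dev/system01.dbf' SIZE 500M
SYSAUX DATAFILE '/u02/dev/sysaux01.dbf' SIZE 300M
UNDO TABLESPACE undotbs1 DATAFILE '/u02/dev/undotbs01.dbf' SIZE 200M
DEFAULT TEMPORARY TABLESPACE temp TEMPFILE '/u02/dev/temp01.dbf' SIZE 100M
LOGFILE GROUP 1 ('/u02/dev/redo01.log') SIZE 50M,
GROUP 2 ('/u02/dev/redo02.log') SIZE 50M
CHARACTER SET AL32UTF8;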
1. Redolog files are mainly used for recovering a database and also to ensure data commit
2. If a redolog file is lost, it will lead to data loss. To avoid this, we can maintain multiplexed copies
of redolog files in different locations. These copies together are called a redolog group and the
individual files are called redolog members
3. Oracle recommends maintaining a minimum of 2 redolog groups with a minimum of 2 members in each group
4. LGWR will write into members of the same group in parallel only if ASYNC I/O is enabled at the OS level
5. Redolog files will have 3 states – CURRENT, ACTIVE and INACTIVE. These states will always
change in cyclic order
6. We cannot have different sizes for members in the same group, whereas we can have different
sizes for different groups, but it is not recommended to implement this
[Diagram: redolog GROUP 1 and GROUP 2, each with multiplexed members]
COMMANDS
Note: Even after we drop a logfile group or member, the file will still exist at the OS level
# Reusing a member
Note: We cannot resize a redolog member; instead we need to create a new group with the required
size and drop the old group
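A sketch of that workaround (group numbers, sizes and paths are illustrative; the group being dropped must not be the CURRENT group):
SQL> alter database add logfile group 3 ('/u02/prod/redo03a.log', '/u02/prod/redo03b.log') size 100m;
SQL> alter database drop logfile group 1;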
1. The control file contains crucial database information and the loss of this file will lead to loss of
important data about the database. So it is recommended to have multiplexed copies of the file in
different locations
2. If a control file is lost in 9i, the database may go for a forced shutdown, whereas the database will continue
to run if it is the 10g version
COMMANDS
ARCHIVELOG FILES
1. Archivelog files are offline copies of the online redolog files and are required to recover the
database if we have an old backup
2. Archivelog generation can be done in two ways – manual and automatic. It is always preferred to use the
automatic method as DBAs cannot be dedicated to performing manual archiving
3. The following are parameters that are used for archivelog mode with their description
a. LOG_ARCHIVE_START – it will enable automatic archiving and useful only till 9i
(deprecated in 10g)
b. LOG_ARCHIVE_TRACE – it is used to generate a trace file to know how ARCHn process
working
c. LOG_ARCHIVE_MIN_SUCCEED_DEST – defines the minimum number of destinations to which the ARCHn
process should complete archiving by the time LGWR starts writing to the online redolog file
d. LOG_ARCHIVE_MAX_PROCESSES – will start multiple ARCH processes and helpful in
faster writing
e. LOG_ARCHIVE_LOCAL_FIRST – if enabled, ARCHn process will first generate archive in
local machine and then in remote machine. It is used in case of dataguard setup
f. LOG_ARCHIVE_FORMAT – defines the archive log file format
g. LOG_ARCHIVE_DUPLEX_DEST – if want to archive in only 2 locations, we should use this
h. LOG_ARCHIVE_DEST_1...10 – if want to archive to more than 2, we should enable this
i. LOG_ARCHIVE_DEST_STATE_1...10 – to enable / disable archive locations
j. LOG_ARCHIVE_CONFIG – it enables / disables sending redologs to remote location. Used
in dataguard environment
4. When we want to multiplex into only 2 locations, from 10g we should use LOG_ARCHIVE_DEST
and LOG_ARCHIVE_DUPLEX_DEST parameters
5. The default location for archivelogs in 10g is the Flash Recovery Area (FRA). The archives in this
location are deleted when space pressure arises. The location and size of the FRA can be known
using the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters respectively
6. To disable archivelog generation into the FRA, we shouldn't use LOG_ARCHIVE_DEST, but should use
LOG_ARCHIVE_DEST_1
COMMANDS
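Archivelog mode is typically enabled as below (this is the method the following note refers to):
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;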
Note: When we enable archivelog mode using above method, the archives will be generated by default
in Flash Recovery Area (FRA). It is the location where files required for recovery exist and introduced in
10g
[Diagram: Oracle data block layout – block header, PCTFREE level (20%) and PCTUSED level (40%)]
Note: Block space utilization parameters are deprecated from locally managed tablespaces
[Diagram: datafile with header and data blocks 121, 122, 123]
1. In DMT (dictionary managed tablespaces), free block information is maintained in the form of a freelist, which will be there
in the data dictionary cache
2. Every time DBWRn requires a free block, the server process will perform an I/O to get the free block
information from the freelist. This will happen for all the free blocks. Because more I/Os
are being performed, it will degrade the performance of the database
[Diagram: datafile header bitmap for blocks 121–124]
Block id | Status
121      | 1 (used)
122      | 0 (free)
123      | 1 (used)
124      | 1 (used)
1. In locally managed tablespace, free block information is maintained in datafile header itself in
the form of bitmap blocks
2. These bitmaps are represented with 0 and 1 where 0 means free and 1 means used
3. When a free block is required, the server process will search in the bitmap block and will inform DBWRn,
thus avoiding the I/O, which increases database performance
4. In 8i the default tablespace type is dictionary, but we can still create locally managed tablespaces,
whereas in 9i the default is local
Note: In any version, we can create dictionary managed tablespace only if SYSTEM tablespace is
dictionary
Note: Even though we specify a tablespace as NOLOGGING, still all DML transactions will
generate redo entries (this is to help in instance recovery). NOLOGGING is applicable in only
below situations
COMMANDS
# To create a tablespace
# To create a tablespace in 9i
# To enable/disable autoextend
# To resize a datafile
# To add a datafile
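Sketches of the above commands (tablespace names, paths and sizes are illustrative):
SQL> create tablespace userdata datafile '/u02/prod/userdata01.dbf' size 100m;
SQL> create tablespace userdata datafile '/u02/prod/userdata01.dbf' size 100m extent management local;  -- 9i, where local must be requested explicitly
SQL> alter database datafile '/u02/prod/userdata01.dbf' autoextend on maxsize 2g;
SQL> alter database datafile '/u02/prod/userdata01.dbf' resize 200m;
SQL> alter tablespace userdata add datafile '/u02/prod/userdata02.dbf' size 100m;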
Note: If we have multiple datafiles, extents will be allocated in round robin fashion
Note: Adding the datafile for tablespace size increment is the best option if we have multiple
hard disks
Note: Local to dictionary conversion is possible only if SYSTEM tablespace is not local
Note: The above steps can also be used for normal datafile renaming/relocation
# To drop a tablespace
# To reuse a datafile
BIGFILE TABLESPACE
1. For managing the datafiles in VLDB, oracle introduced bigfile tablespace in 10g
2. A bigfile tablespace's datafile can grow into terabytes based on the block size. For
example, with an 8KB block size a single file can grow up to 32TB
3. Bigfile tablespaces should be created only when we have striping and mirroring
implemented at the storage level in real time
4. We can't add another datafile to a bigfile tablespace; it has only a single datafile that grows up to the maximum value
5. Bigfile tablespaces can be created only as LMT and with ASSM
COMMANDS
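A sketch of creating one (the name, path and size are illustrative):
SQL> create bigfile tablespace bigtbs datafile '/u02/prod/bigtbs01.dbf' size 10g;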
CAPACITY PLANNING
UNDO MANAGEMENT
2. The data will be selected from undo tablespace, if any DML operation is being performed on the table
on which select query is also fired. This is to maintain read consistency.
[Diagram: datafile blocks of table B (ORA-1555 scenario)]
1. In the above situation, Tx1 issued an update statement on table A and committed. Because of
this dirty blocks are generated in DBC and undo blocks are used from undo tablespace. Also,
dirty blocks of A are not yet written to datafiles
2. Tx2 is updating table B and because of non availability of undo blocks, Tx2 overrided expired
undo blocks of Tx1
3. Tx3 is selecting the data from A. This operation will first look for data in undo tablespace, but
already blocks of A are occupied by B (Tx2), it will not retrieve any data. Then it will check for
latest data in datafiles, but as dirty blocks are not yet written to datafiles, there are transaction
will be unable to get data. In this situation it will throw ORA-1555 (snapshot too old) error
1. Re-issuing the SELECT statement will be a solution when we are getting ORA-1555 very rarely
2. It may occur due to an undersized undo tablespace. So increasing the undo tablespace size is one
solution
3. Increasing the UNDO_RETENTION value is also a solution
4. Avoiding frequent commits
5. Using the "retention guarantee" clause on the undo tablespace. This is available only from 10g
COMMANDS
# To add a tempfile
# To resize a tempfile
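Sketches of the above commands (paths and sizes are illustrative):
SQL> alter tablespace temp add tempfile '/u02/prod/temp02.dbf' size 500m;
SQL> alter database tempfile '/u02/prod/temp01.dbf' resize 1g;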
TABLESPACE ENCRYPTION
Oracle Database 10g introduced Transparent Data Encryption (TDE), which enabled you to
encrypt columns in a table. The feature is called “transparent” because the database takes care
of all the encryption and decryption details. In Oracle Database 11g, you can also encrypt an
entire tablespace. In fact, tablespace encryption helps you get around some of the restrictions
imposed on encrypting a column in a table through the TDE feature. For example, you can get
around the restriction that makes it impossible for you to encrypt a column that’s part of a
foreign key or that’s used in another constraint, by using tablespace encryption.
As with TDE, you need to create an Oracle Wallet to implement tablespace encryption.
Therefore, let’s first create an Oracle Wallet before exploring how to encrypt a tablespace.
Tablespace encryption uses Oracle Wallets to store the encryption master keys. Oracle Wallets
could be either encryption wallets or auto-open wallets. When you start the database, the
auto-open wallet opens automatically, but you must open the encryption wallet yourself.
Oracle recommends that you use an encryption wallet for tablespace encryption, unless you’re
dealing with a Data Guard setup, where it’s better to use the auto-open wallet.
You can create the wallet easily by executing the following command in SQL*Plus:
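The command would be of the form below (the wallet password is a placeholder):
SQL> alter system set encryption key identified by "wallet_password";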
The previous command creates an Oracle Wallet if there isn’t one already and adds a master
key to that wallet. By default, Oracle stores the Oracle Wallet, which is simply an operating
system file named ewallet.p12, in an operating system–determined location. You can, however,
specify a location for the file by setting the parameter encryption_wallet_location in the
sqlnet.ora file, as shown here:
ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=file)
(METHOD_DATA= (DIRECTORY=/apps/oracle/general/wallet) ) )
You must first create a directory named wallet under the $ORACLE_BASE/admin/$ORACLE_SID
directory. Otherwise, you’ll get an error when creating the wallet: ORA-28368: cannot auto-
create wallet. Once you create the directory named wallet, issue the following command to
create the Oracle Wallet:
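(The wallet password below is a placeholder.)
SQL> alter system set encryption key identified by "wallet_password";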
System altered.
The ALTER SYSTEM command shown here will create a new Oracle Wallet if you don’t have one.
It also opens the wallet and creates a master encryption key. If you have an Oracle Wallet, the
command opens the wallet and re-creates the master encryption key. Once you've created the
Oracle Wallet, you can encrypt your tablespaces.
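For example (the tablespace name, path and size are illustrative):
SQL> create tablespace mytbsp1 datafile '/u01/app/oracle/oradata/orcl/mytbsp1_01.dbf' size 100m
encryption
default storage (encrypt);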
Tablespace created.
The storage clause ENCRYPT tells the database to encrypt the new tablespace. The clause
ENCRYPTION tells the database to use the default encryption algorithm, AES128. You can
specify an alternate algorithm such as 3DES168, AES192, or AES256 through the clause USING,
which you specify right after the ENCRYPTION clause. Since I chose the default encryption
algorithm, I didn’t use the USING clause here.
The following example shows how to specify the optional USING clause, to define a nondefault
encryption algorithm.
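(The tablespace name, path and size below are illustrative.)
SQL> create tablespace mytbsp2 datafile '/u01/app/oracle/oradata/orcl/mytbsp2_01.dbf' size 100m
encryption using '3DES168'
default storage (encrypt);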
Tablespace created.
The example shown here creates an encrypted tablespace, MYTBSP2, that uses the 3DES168
encryption algorithm instead of the default algorithm.
Note: You can check whether a tablespace is encrypted by querying the DBA_TABLESPACES
view
The database automatically encrypts data during the writes and decrypts it during reads. Since
both encryption and decryption aren’t performed in memory, there’s no additional memory
requirement. There is, however, a small additional I/O overhead. The data in the undo
segments and the redo log will keep the encrypted data in the encrypted form. When you
perform operations such as a sort or a join operation that use the temporary tablespace, the
encrypted data remains encrypted in the temporary tablespace.
COMMANDS
USER MANAGEMENT
1. User creation should be done after clearly understanding requirement from application team
2. Whenever we create user, we should assign a default permanent tablespace (which allows to
create tables) and default temporary tablespace(which allows to do sorting). At any moment of
time we can change them
3. After creating user, don’t grant connect and resource roles (in 9i). In 10g, we can grant connect
role as it contains only create session privilege
4. Resource role internally contains unlimited tablespace privilege and because of this, it will
override the quota that is granted initially. So, it should not be granted in real time until it is
required
5. Privileges for a user are of 2 types
a. System level privs – eg: create table, create view etc
b. Object level privs – eg: select on A, update on B etc
6. A role is a set of privileges which will reduce the risk of issuing many commands
7. To find out the roles and privileges assigned to a user, use the following views
a. DBA_SYS_PRIVS
b. DBA_TAB_PRIVS
c. DBA_ROLE_PRIVS
d. ROLE_SYS_PRIVS
e. ROLE_TAB_PRIVS
f. ROLE_ROLE_PRIVS
8. System-level privileges can be granted with the ADMIN OPTION so that the grantee can grant the same
privilege to other users. If we revoke the privilege from the first grantee, the others will still have that
privilege
9. Object-level privileges can be granted with the GRANT OPTION so that the grantee can grant the same
privilege to other users. If we revoke the privilege from the first grantee, it will be revoked from all the
users
COMMANDS
# To create a user
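A sketch of the commands (the user name, password, tablespaces and quota are illustrative):
SQL> create user kanna identified by kanna default tablespace users temporary tablespace temp quota 10m on users;
SQL> grant create session to kanna;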
Note: Allocating a quota doesn't mean reserving the space. If 2 or more users are sharing a
tablespace, the quota will be filled up on a first come, first served basis
Note: The objects created in the old tablespace remain unchanged even after changing the default
tablespace for a user
# To drop a user
PROFILE MANAGEMENT
COMMANDS
# To create a profile
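A sketch of creating and assigning a profile (the profile name and limits are illustrative):
SQL> create profile app_user limit failed_login_attempts 5 password_life_time 60 idle_time 30;
SQL> alter user kanna profile app_user;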
SQL> @$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
Note: sessions terminated because of idle time are marked as SNIPED in v$session and the DBA needs to
manually kill the related OS process to clear the session
# To kill a session
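A sketch of killing a session (the sid and serial# values are placeholders taken from v$session):
SQL> select sid, serial#, status from v$session where username = 'KANNA';
SQL> alter system kill session '23,1234';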
Note: Resource management parameters are effective only if RESOURCE_LIMIT is set to TRUE
Note: from 11g onwards passwords for all users are case-sensitive
AUDITING
Auditing is enabled by setting the AUDIT_TRAIL parameter, which can take the following values
a. NONE – disables auditing
b. DB – audited information will be stored in base tables inside the database
c. OS – audited information will be stored in the form of trace files at the OS level. For this we
need to set the AUDIT_FILE_DEST parameter
d. DB, Extended – it is the same as the DB option but will additionally record info like SQL_TEXT,
BIND_VALUE etc
e. XML – it will generate XML files to store the auditing information
f. XML, Extended – same as XML but will record much more information
3. Even though we set AUDIT_TRAIL parameter to some value, oracle will not start auditing until
one of the following types of auditing commands are issued
a. Statement level auditing
b. Schema level auditing
c. Object level auditing
d. Database auditing (it is available only till 9i; due to performance issues it was removed from 10g)
4. By default some activities like startup & shutdown of the database and any structural changes to the
database are audited and recorded in the alert log file
5. If auditing is enabled with DB, then we need to monitor space in the SYSTEM tablespace as there is a
chance of it getting full when more and more information keeps on getting recorded
6. SYS user activities can also be captured by setting AUDIT_SYS_OPERATIONS to TRUE
7. Auditing should use following scopes
a. Whenever successful / not successful
b. By session / By access
8. The disadvantage of auditing is that it will not capture the changed values. For that, DBAs used triggers.
This is replaced with Fine Grained Auditing (FGA), which will capture old and new values when a
record is modified
Note: when we use triggers, they will create separate tables to store the audited information, which are
called trigger tables. To access them we need to create indexes and maintain them, which is quite a
difficult task
9. FGA can be initiated using DBMS_FGA and by creating and setting audit policies. Information
that is captured can be viewed using DBA_FGA_AUDIT_TRAIL view
10. Enabling auditing at database level will have adverse impact on the database performance
COMMANDS
# To enable auditing
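Sketches of enabling auditing (the user and object names are illustrative):
SQL> alter system set audit_trail=db scope=spfile;   -- restart required, see the note below
SQL> audit create table by kanna by access;
SQL> audit select on scott.emp whenever not successful;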
Note: The AUDIT_TRAIL parameter is static and requires a restart of the database before it becomes effective
11. The Sarbanes-Oxley Act defines rules to provide security for the database and some rules are
as follows...
a. Default users should not have default passwords
b. Passwords for users should be implemented using password_verify_function
c. Apart from CREATE SESSION (in 10g we can grant the CONNECT role), no other
privilege should be given to PUBLIC
d. Lock the unused accounts
e. Auditing should be enabled with AUDIT_SYS_OPERATIONS=TRUE
f. Following parameters need to be set
i. REMOTE_OS_AUTHENT = FALSE
ii. REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE
g. Never connect using SYS AS SYSDBA
h. The listener should have a password
1. We need to have oracle client software installed on client machine in order to connect
to server
2. The following files are required to establish a successful connection to the server
a. Client – TNSNAMES.ORA and SQLNET.ORA
b. Server – LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA
3. TNSNAMES.ORA file contains the description of the database to which connection
should establish
4. SQLNET.ORA will define the type of connection between client and server
5. Apart from using tnsnames.ora we can also use EZCONNECT, LDAP, bequeath protocol
etc to establish connection to server
6. These files will reside in $ORACLE_HOME/network/admin
7. LISTENER service will run based on LISTENER.ORA file and we can manage listener using
below commands
a. $ lsnrctl start / stop / status / reload
8. We can have host string different from service name i.e instance name or SID and even
it can be different from database name. This is to provide security for the database
9. Tnsping is the command to check the connectivity to the database from client machine
10. Do create separate listeners for multiple databases
11. If the connections are very high, create multiple listeners for the same database
12. Any network related problem should be resolved in the following steps
a. Check whether listener is up and running or not on server side
b. Check the output is ok for tnsping command
c. If still problem exist, check firewall on both client and server. If not known take
the help of network admin
13. We can know the free port information from netstat command
14. We need to set password for listener so as to provide security
15. TNSNAMES.ORA and SQLNET.ORA files can also be seen on the server side because the server
will act as a client when connecting to another server
16. If the listener is down, existing users will not have any impact. Only new users will not be
able to connect to the instance
17. From 10g, a SYSDBA connection to the database will use the bequeath protocol as it doesn't
require any protocol like TCP/IP
Error 1: Connect failed because target host or object does not exist
Solution: go to the hosts file (drivers\etc\hosts under the Windows system directory), open it with notepad and add an entry for the server
Solution: Generally it will occur if network connectivity is not there. So ping the server and if it's
working fine, then check whether a firewall is enabled either at the client side or the server side.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u02)
(PROGRAM = extproc)
))
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
))
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = prod)
(ORACLE_HOME = /u02)
))
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
))
1. This file contains the sysdba password and will be used if any user with sysdba permission is trying to
connect to the database remotely.
2. It will be in the form of orapw<SID> and resides in ORACLE_HOME/dbs (on unix) and
ORACLE_HOME/database (on windows)
3. If we forget the password of the sys user or lose the password file, it can be recreated using the ORAPWD utility
as follows
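A sketch of the ORAPWD command (the password and entries values are illustrative):
$ orapwd file=$ORACLE_HOME/dbs/orapwprod password=oracle entries=5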
2. Even though manageability is complex, having multiple small databases will give more
benefit
3. A DDMS (distributed database management system) will use a two-phase commit mechanism i.e. a transaction should be committed in
both the databases before the data is made permanent
4. A DDMS can have different databases like Oracle, DB2, SQL Server etc. In this case they will
talk to each other using the Oracle gateway component (which needs to be configured
separately)
5. If we have the same databases in a DDMS, it's called a homogeneous DDMS. If we have different
databases then it's called a heterogeneous DDMS
Database Links
1. It is an object which will pull remote database data to the local database
2. While creating a dblink, we need to know the username and password of the remote database
3. Apart from the username and password of the remote db, we need to have a tns entry in the
tnsnames.ora of the local db and the tnsping command should work
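A sketch of creating a dblink (the link name, credentials and tns alias are illustrative):
SQL> create database link prod_link connect to scott identified by tiger using 'PROD';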
Materialized Views
1. It is an object used to pull remote database data frequently at a specified time, which is
called refreshing the data using materialized views
2. Snapshot is the object which was used to do the same till 8i, but the disadvantage is the time
constraint in pulling a huge number of rows
3. An MV uses an MV log to store information about already transferred rows. The MVLOG will store
the rowids of the table rows, which helps in further refreshes
4. The MV should be created in the database where we store the data and the MVLOG will be
created automatically in the remote database
5. MVLOG is a table, not a file, and its name will always start with MLOG$
6. MV refresh can happen in the following three modes
a. Complete – pulling the entire data
b. Fast – pulling only the changed rows with the help of the MV log
c. Force – performs a fast refresh if possible, otherwise a complete refresh
Note: we can use "refresh fast on commit" in order to refresh the data immediately on commit, without
waiting for the scheduled refresh interval
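A minimal sketch of an MV that is fast-refreshed every hour over a dblink (object names and the interval are assumptions):
SQL> create materialized view log on emp;   -- run on the remote (master) database
SQL> create materialized view emp_mv refresh fast start with sysdate next sysdate + 1/24 as select * from emp@prod_link;   -- run on the local database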
ORACLE UTILITIES
SQL * LOADER
5. Export will convert the command into select statements and the final output will be
written to the dumpfile
6. The server process will take the responsibility of writing the data to the dumpfile
7. Export will transfer the data to the dumpfile in block-sized chunks. To increase the speed of
writing we can set BUFFER = 10 * avg row length, but we will rarely use this formula in
real time
8. DIRECT=Y will make the export process faster by performing in the following way
d. Partitioned table
Note: when we give direct=y option, if oracle cannot export a table with that option, it will
automatically convert to conventional path
10. By mentioning CONSISTENT=Y, export will take the data only from the undo tablespace if a DML
operation is being performed on the table
11. Import is the utility to load the contents of an export dumpfile into a schema
12. Import internally converts the contents of the export dump file to DDL and DML statements:
imp → create table → insert the data → create indexes and other objects → add constraints and
enable them
13. SHOW=Y can be used to check corruption in export dump file. This will not actually
import the contents
14. IGNORE=Y should be used if an object already exists with the same name. It will append
the data to the existing object
Note: whenever import fails with warning for constraints or grants, do import again with
ROWS=N option
Note: when we are importing tables with LONG, LOB datatypes or partitioned tables, the
destination database should also contain same tablespace name as source database
COMMANDS
# To import a schema
# To import a table
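The actual commands are not preserved in these notes; an illustrative sketch (paths and credentials are assumptions):
$ exp system/manager file=/u01/expbkp/scott.dmp log=scott_exp.log owner=scott
$ imp system/manager file=/u01/expbkp/scott.dmp log=scott_imp.log fromuser=scott touser=scott
$ imp system/manager file=/u01/expbkp/scott.dmp tables=emp fromuser=scott touser=scott ignore=y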
DATAPUMP
15. Datapump is an extension to traditional exp/imp which provides more advantages like
security, speed etc
16. During datapump export, oracle will create master table in the corresponding schema
and data will be transferred parallely from tables to dumpfile
[Diagram: multiple tables exported in parallel through the master table into the dumpfile]
17. During datapump import this will happen in reverse order i.e from dumpfile a master
table will be created and from that original tables
18. After finishing either export or import in datapump, oracle will automatically drop the
master table
19. Just like exp/imp, datapump also contains 4 (database, schema, table and row) levels
20. In datapump, the dumpfile will reside only on the server (in the location given by the
directory option) and cannot be created on the client side. This provides security to the dumpfile
21. DBA_DATAPUMP_JOBS view can be used to find the status of datapump export or
import process
Note: whenever datapump export is done using the PARALLEL option, import also should be done
with the same option. Otherwise it will affect the time taken for the import
22. Oracle will try to import tables into the tablespace with the same name and if the tablespace
doesn't exist, it will go to the user's default tablespace
COMMANDS
# To create a directory
Directory created.
Grant succeeded.
OWNER DIRECTORY_NAME
------------------------------ ------------------------------
DIRECTORY_PATH
SYS DPBKP
/u01/expbkp
# To import a schema
# To import a table
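The actual commands are not preserved here; an illustrative sketch using the DPBKP directory created above (credentials and dumpfile names are assumptions):
$ impdp system/manager directory=DPBKP dumpfile=scott.dmp logfile=scott_imp.log schemas=scott
$ impdp system/manager directory=DPBKP dumpfile=scott.dmp logfile=emp_imp.log tables=scott.emp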
COLD BACKUP
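The cold backup commands are not preserved in these notes; a minimal sketch (the backup path is an assumption) – shut down cleanly, copy the datafiles, controlfiles and redolog files at the OS level, then start the database (below):
SQL> shutdown immediate
SQL> !cp /datafiles/prod/*.dbf /backup/cold/
SQL> !cp /datafiles/prod/*.ctl /backup/cold/
SQL> !cp /datafiles/prod/*.log /backup/cold/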
SQL> startup
Note: archivelogs are not required to be backed up with a cold backup
HOT BACKUP
1. Taking the backup while the database is up and running is called hot backup
2. During hot backup the database will be in a fuzzy state and users can still perform
transactions, which makes the backup inconsistent
3. Whenever we place a tablespace or database in begin backup mode, the following happens
a. The corresponding datafile headers will be frozen i.e. the CKPT process will not
update the latest SCN
b. The body of the datafile is still active i.e. DBWRn will still write dirty blocks to the datafiles
4. After end backup, the datafile header will be unfrozen and the CKPT process will update the latest
SCN immediately by taking that information from the controlfiles
5. During hot backup, we will observe much redo generated because oracle will copy
entire data block as redo entry into LBC. This is to avoid fractured block
6. A block fracture occurs when a block is being read by the backup, and being written to
at the same time by DBWR. Because the OS (usually) reads blocks at a different rate
than Oracle, your OS copy will pull pieces of an Oracle block at a time. What if the OS
copy pulls half a block, and while that is happening, the block is changed by DBWR?
When the OS copy pulls the second half of the block it will result in mismatched halves,
which Oracle would not know how to reconcile.
7. This is also why the SCN of the datafile header does not change when a tablespace
enters hot backup mode. The current SCNs are recorded in redo, but not in the datafile.
This is to ensure that Oracle will always recover over the datafile contents with redo
entries. When recovery occurs, the fractured datafile block will be replaced with a
complete block from redo, making it whole again.
Since we are placing entire database into begin backup mode, no repetition for all the
tablespaces is required
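The backup-mode commands themselves are not preserved here; a minimal hot backup sketch (the backup path is an assumption):
SQL> alter database begin backup;
SQL> !cp /datafiles/prod/*.dbf /backup/hot/
SQL> alter database end backup;
SQL> alter system switch logfile;   -- make sure the redo generated during the backup gets archived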
Note: In any version, during hot backup we will not take redolog files backup
DATABASE RECOVERY
1. Recovery is of 2 types
a. Complete recovery – recovering database till the point of failure. No data loss
b. Incomplete recovery – recovering to a certain time or scn. Has data loss
2. We will perform complete recovery if we lost only datafiles
3. We will perform incomplete recovery if we lost either redolog files, controlfiles or
archivelog files
4. Recovery process involves two phases
a. RESTORE – copying a file from backup location to original location as that file is
lost now
b. RECOVER – applying archivelogs and redologs to bring the file SCN on par with the
latest SCN
STEPS for recovering database (we will perform this when we lost more than 50% of datafiles)
When we use the above command, it will delete the file at the OS level, but the data dictionary will not be
updated and we can never get that file back even if we have a backup. So don't use this in real
time
SQL> startup
SQL> recover database until change 12345;   (or)   SQL> recover database until time '2011-01-05:11:00:00';
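After an until-SCN or until-time recovery, the database has to be opened with resetlogs (discussed next):
SQL> alter database open resetlogs;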
Using RESETLOGS – When used resetlogs option to open the database, it will
1. Create new redolog files at OS level (location and size will be taken from controlfile) if
not already existing
2. Resets the log seq number (LSN) to 1, 2, 3 etc for the created files
3. Whenever database is opened with resetlogs option, we will say database entered into
new incarnation. If database is in new incarnation, the backups which were taken till
now are no more useful. So, whenever we perform an incomplete recovery we need to
take full backup of database immediately
Note: All the archives generated from the date of datafile creation should be available to do
this
The above command may not work sometimes, in which case we need to use the trace file already taken
during the backup. That command generates a controlfile creation script in udump in the form
of a trace file
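The command being referred to is the controlfile trace backup, which would have been run while the database was still healthy; a sketch:
SQL> alter database backup controlfile to trace;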
[oracle@server1 ~]$ # go to the udump location and copy the first create controlfile script into a file
called control.sql
SQL> @control.sql
Note: After creating control files using above procedure, there will be no SCN in that. So server
process will write the latest SCN to control files in this situation by taking info from datafile
header
Note: we will perform until time recovery in case we lost a single table and need to recover it.
But to do this we need to have approval from all the users in the database
RMAN (RECOVERY MANAGER)
1. It is the backup methodology introduced in 8.0 which performs block level backup i.e.
RMAN will take the backup of only used blocks
2. RMAN will take the information from bitmap block about used blocks and while
performing this RMAN will make sure DBWRn is not writing into free blocks
3. Advantages of RMAN
a. Block level backup
b. Parallelism
c. Duplexing of archives
d. Detection of corruption in datafiles
e. Validating backup
f. Incremental backup
g. Recovery catalog etc
4. Components of RMAN
a. RMAN executable file
b. Target database
c. Auxiliary database
d. Recovery catalog
e. Media management layer – it is responsible in interacting with tape drive while
taking RMAN backup directly to tape
COMMANDS
# To connect to RMAN
# To backup archivelogs
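The actual commands are not preserved here; an illustrative sketch (credentials and TNS alias are assumptions):
$ rman target /
$ rman target sys/oracle@prod
RMAN> backup archivelog all;
RMAN> backup archivelog all delete input;   -- optionally delete archives after backing them up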
Note: By default in 10g, rman backup will go to flash recovery area. To override that, use below
command
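A hedged sketch of overriding the flash recovery area destination (the path is an assumption):
RMAN> backup database format '/u01/rmanbkp/%U';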
The above command will get the information from controlfile of the database
RMAN> run
{
allocate channel c1 device type sbt_tape;
backup database plus archivelog;
}
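The command described by the following note is not preserved; a sketch of a time-limited backup matching the 5-hour window it mentions:
RMAN> backup duration 5:00 partial database;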
The above command will run backup for 5 hours and will pause after that. It will continue on
next day at scheduled time
RMAN> run
{
sql 'alter tablespace mydata offline';
restore tablespace mydata;
recover tablespace mydata;
sql 'alter tablespace mydata online';
}
RMAN>run
{
shutdown immediate;
startup mount;
restore datafile 1;
recover datafile 1;
sql 'alter database open';
}
RMAN> run
{
shutdown immediate;
startup mount;
set until scn 1234;   (or)   set until time "to_date('2011-01-05 11:30:00','YYYY-MM-DD hh24:mi:ss')";
restore database;
recover database;
sql 'alter database open resetlogs';
}
RMAN> run
{
shutdown immediate;
startup nomount;
restore controlfile from autobackup;
sql 'alter database mount';
recover database;
sql 'alter database open resetlogs';
}
RECOVERY CATALOG
1. RMAN will store the backup information in the target database controlfile. If we lose this
controlfile and perform either complete or incomplete recovery, we will lose the backup
info even though the backups are physically available
2. To avoid this situation RMAN introduced recovery catalog. It is a database which stores
target database backup information
3. Single recovery catalog can support multiple target databases
4. We cannot obtain recovery catalog information from target but vice versa is possible
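An illustrative sketch of setting up a catalog (the catalog database, schema and credentials are assumptions):
$ rman target sys/oracle@prod catalog rman/rman@catdb
RMAN> create catalog;        -- run once while connected to the catalog database
RMAN> register database;     -- register the target database in the catalog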
INCREMENTAL BACKUP
1. Taking a backup of a very large database (VLDB) will take time if the backup size is
increasing frequently
2. In such cases, we can go for incremental backup which will take a backup of only the changes
that happened from the last full backup till date
3. Incremental backups are two types
a. Differential (default)
b. Cumulative
4. Both incremental backup types will have level 0 and level 1 (level 0 –full backup, level 1-
incremental backup)
5. First time incremental backup will do level 0 backup always
6. RMAN will perform incremental backup by identifying changed blocks with the help of
block SCN
7. We cannot recover database using level 1 backup applying on full database backup
8. We can apply level 1 backup on image copies and can recover the database
9. 10g RMAN can perform faster incremental backups using block change tracking. With this,
whenever any block changes, the CTWR (change tracking writer) background process will write
that information to a tracking file
10. The change tracking file resides in DB_CREATE_FILE_DEST
COMMANDS
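Illustrative incremental backup commands (a sketch of the levels described above):
RMAN> backup incremental level 0 database;               -- base full backup
RMAN> backup incremental level 1 database;               -- differential (default)
RMAN> backup incremental level 1 cumulative database;    -- cumulative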
You can also create the change tracking file in a location you choose yourself, using the
following SQL statement:
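A sketch of that statement (the file location is an assumption):
SQL> alter database enable block change tracking using file '/u01/app/oracle/bct.chg' reuse;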
The REUSE option tells Oracle to overwrite any existing file with the specified name.
RMAN> RUN {
RECOVER COPY OF DATABASE WITH TAG 'incr_update';
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update'
DATABASE;
}
PERFORMANCE TUNING
NETWORKING TUNING
1. When a performance problem is reported, first we need to check if the problem is only for
one user or for multiple users
2. If it is for only one user, then it could be because of a network problem, so check
tnsping to the database
3. If the tnsping value is too high then intimate the network admin about this. If not, move to the next
phase
APPLICATION TUNING
1. In this phase we need to find whether any new applications added or are there any
changes in the code
2. If any additions or changes, ask application team to revert them and then check the
performance. If working fine, then problem is with those changes
3. If no additions/changes happened or if performance problem exists after reverting, then
proceed for next phase
SQL TUNING
1. By running reports like ADDM or ASH, we can know which queries are giving
problems and can send them to the application team (or the team which is responsible for
writing sql queries) for tuning
2. Sometimes DBA help may be required, so the DBA should have expert knowledge of SQL
3. In real time, most (90%) of tuning problems get resolved in this phase. If not solved,
proceed to the next phase
OBJECT TUNING
1. In this phase, first we need to check what is the last analyzed date for the tables
involved in the query.
2. If we see the last_analyzed date as an old date, it means that table statistics haven't been
gathered for a long time. That may be the reason for the performance problem.
3. The optimizer generates the best execution plan based on these statistics. But if the statistics
are old, the optimizer may choose a poor plan which affects performance. In such cases, we
need to analyze manually using the below commands
# To analyze a table
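An illustrative command (the table name is an assumption):
SQL> analyze table scott.emp compute statistics;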
4. If the table contains a huge no. of rows, analyze will take time as it collects info for each and
every row. In such cases, we can estimate statistics, which means collecting statistics for
some percentage of rows. This can be done using the below command
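An illustrative command (the table name and sample size are assumptions):
SQL> analyze table scott.emp estimate statistics sample 30 percent;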
# To analyze a schema
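An illustrative command (the schema name and options are assumptions):
SQL> exec dbms_stats.gather_schema_stats(ownname => 'SCOTT', estimate_percent => 30, cascade => true);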
Note: In 10g, oracle automatically collects statistics every night at 10 PM server time for tables which
were modified by more than 10%. But due to practical complications, experts recommend
disabling that automated job and creating a new one manually
Note: The statistics gathering job should always run in non-peak hours of server time as it takes
maximum cpu power and memory for processing
EXPLAIN PLAN
1. It is a plan which shows the flow of execution for any sql statement
2. To generate explain plan we require plan_table in SYS schema. If not there, we can
create using $ORACLE_HOME/rdbms/admin/utlxplan.sql script
3. After creating plan_table, use below command to generate explain plan
SQL> grant select,insert on plan_table to scott;
SQL> conn scott/tiger
SQL> explain plan for select * from emp;
4. To view the content of explain plan, run $ORACLE_HOME/rdbms/admin/utlxpls.sql
script
5. The optimizer may sometimes deviate from the best execution plan depending on resource (CPU
or memory) availability
6. If there is no index on the table, create an index on the column which appears after the where
condition in the query
7. Also choose any one of following types of index to be created
a. B-Tree index – must be used for high cardinality (no.of distinct values) columns
b. Bitmap index – for low cardinality columns
c. Function based index – for columns with defined functions
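Illustrative index creation statements for these types (table and column names are assumptions):
SQL> create index emp_empno_idx on emp(empno);                 -- B-Tree, high cardinality
SQL> create bitmap index emp_deptno_bix on emp(deptno);        -- bitmap, low cardinality
SQL> create index emp_ename_fidx on emp(upper(ename));         -- function based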
[Diagram: cluster tables A and B sharing the common column X]
In the above diagram, X is the common column shared by both A & B. The problem with
using a cluster table is that any modification to X cannot be done easily
c. Index organized table (IOT) – it avoids creating indexes separately as data itself
will be stored in index form. The performance of IOT is fast but DML and DDL
operations are very costly
d. Partition table – a normal table can be split logically into partitions so that we
can make queries to search only in 1 partition which improves search time. The
following are the types of partitions available
i. Range
ii. List
iii. Hash
We can also have composite partition of following types
a. Range – range
b. Range – list
c. Range – hash
d. List – list
e. List – hash
f. List – range (from 11g)
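An illustrative range-partitioned table (names and boundaries are assumptions):
SQL> create table sales_part (sale_id number, sale_date date)
     partition by range (sale_date)
     (partition p2010 values less than (to_date('01-01-2011','DD-MM-YYYY')),
      partition pmax  values less than (maxvalue));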
DATABASE TUNING
Fragmentation
# To move a table
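An illustrative command (table and tablespace names are assumptions):
SQL> alter table scott.emp move tablespace users;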
The above command will create a duplicate table and copy the data, then drop the original
table
Note: The above command is used even to normally move a table to another tablespace in case
of a space constraint. Also, we can move the table to the same tablespace, but we need free
space of double the size of the table
Note: After a table move, the corresponding indexes will become UNUSABLE because the rowids
will change. We need to use any of the below commands to rebuild the indexes
SQL> alter index pk_emp rebuild online nologging;  -- always prefer to use this command as it
executes faster because no redo is generated
# To shrink a table
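Illustrative commands (the table name is an assumption; row movement must be enabled first):
SQL> alter table scott.emp enable row movement;
SQL> alter table scott.emp shrink space cascade;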
As the rowids don't change with the above commands, it is not necessary to rebuild the indexes. While
shrinking, users can still access the table, but it will use a full scan instead of an index scan
Note: Apart from table fragmentation, we have tablespace fragmentation and that will occur
only in DMT or LMT with manual segment space management. The only solution is to export &
import the objects in that tablespace. So, it is always preferred to use LMT with ASSM
ROW CHAINING
1. If the data size is more than the block size, the data will spread into multiple blocks forming a
chain, which is called row chaining
2. For example, when we store an image of 20k size in a database with an 8k block size, it will spread
into 3 blocks
3. Because the data is spread across multiple blocks, oracle needs to perform multiple I/Os to
retrieve it, which leads to performance degradation
4. The solution for row chaining is to create a new tablespace with a non-default block size and
move the tables into it
5. We can create tablespaces with a non-default block size of 2k, 4k, 16k and 32k (8k is the
default)
6. A non-default block size cannot be fitted into the default database buffer cache, so it is required to
enable a separate buffer cache for that block size
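A sketch of enabling a 16k cache and moving a table into a 16k tablespace (names, sizes and path are assumptions):
SQL> alter system set db_16k_cache_size = 100m;
SQL> create tablespace ts_16k datafile '/datafiles/prod/ts16k01.dbf' size 500m blocksize 16k;
SQL> alter table scott.images move tablespace ts_16k;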
Note: Once defined, we can’t change the default block size of the database
ROW MIGRATION
1. Updating a row may increase the row size and in such a case it will use the PCTFREE space
2. If PCTFREE is full but a row still requires more space for the update, oracle will move that
entire row to another block
3. If many rows are moved like this, more I/Os must be performed to retrieve the data, which
degrades performance
4. The solution to avoid row migration is to increase the PCTFREE percentage (see the sketch after
this list); sometimes creating a non-default block size also acts as a solution
5. Because PCTFREE is managed automatically in LMT, we will not observe any row
migration in LMT
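An illustrative PCTFREE change for a migration-prone table (table name and value are assumptions):
SQL> alter table scott.emp pctfree 30;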
INSTANCE TUNING
1. If the performance problem still exists even after performing all the steps in database tuning,
we need to do instance level tuning
TKPROF report
1. TKPROF (transient kernel profiler) is a report which shows details like time taken and cpu utilization in
every phase (parse, execute and fetch) of sql execution
2. From the TKPROF report, if we observe that a statement is getting parsed every time and it is a
frequently executed query, the reason could be the statement getting flushed out of the shared pool
because of its small size. So increasing the shared pool size is the solution
3. If we observe that fetching is happening every time, it could be because of data getting flushed from the
buffer cache, for which increasing its size is the solution
4. If the size of the database buffer cache is enough to hold the data but still the data is flushing
out, we can use the keep & recycle caches
5. If a table is placed in the KEEP cache, it will stay in the instance for its lifetime without being
flushed. If a table is placed in the RECYCLE cache, it will be flushed immediately without
waiting for LRU to occur
Note: Frequently used tables should be placed in keep cache whereas full scan tables should be
placed in recycle cache
STATSPACK REPORT
SQL> @$ORACLE_HOME/rdbms/admin/spcreate.sql
This will create a PERFSTAT user who is responsible for storing statistical data
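Snapshots then have to be taken around the workload before generating the report; a minimal sketch:
SQL> exec statspack.snap;   -- run as PERFSTAT, at least twice (before and after the workload)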
SQL> @$ORACLE_HOME/rdbms/admin/spreport.sql
*A separate hand out is given for learning how to analyze statspack report
# To generate an AWR report
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
# To generate an ASH report
SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql
ADDM
1. It is a tool which can be used to get recommendations from oracle on performance
issues
2. It will take the snapshots generated by AWR and based on them will generate
recommendations
SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql
ENTERPRISE MANAGER(EM)
1. It is a tool through which we can manage the entire database and can perform all database
actions in a single click
2. Till 9i, it was called oracle enterprise manager (OEM) and was restricted to use within the
network of the database
3. From 10g it was made browser based so that we can manage the database from
anywhere in the world
4. EM can be configured either through DBCA or the manual way
or
# To drop repository
# To recreate repository
# To manage EM
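Illustrative emca/emctl commands for these three tasks (a sketch; exact options can vary by version):
$ emca -deconfig dbcontrol db -repos drop
$ emca -config dbcontrol db -repos recreate
$ emctl status dbconsole        -- also emctl start dbconsole / emctl stop dbconsole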
[oracle@server1 ~ ]$ vi control.sql
Here change the database name, replace the word REUSE with SET and make sure it has
RESETLOGS
SQL> ! rm /datafiles/prod/*.ctl
SQL> @control.sql
SQL> startup
FLASHBACK FEATURES
FLASHBACK QUERY
—————————–
CURRENT_SCN TO_CHAR(SYSTIMESTAM
----------- -------------------
722452 2004-03-29 13:34:12
4. COMMIT;
COUNT(*)
----------
1
COUNT(*)
----------
0
COUNT(*)
----------
0
3. COMMIT;
6. COMMIT;
8. COMMIT;
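The demo statements above are only partially preserved; a minimal flashback query sketch (the table name is an assumption, the SCN is the one captured above):
SQL> select count(*) from emp as of scn 722452;
SQL> select count(*) from emp as of timestamp (systimestamp - interval '5' minute);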
FLASHBACK TABLE
—————————–
CURRENT_SCN
-----------
715315
CURRENT_SCN
-----------
715340
COUNT(*)
----------
0
COUNT(*)
----------
1
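The demo statements are only partially preserved; a minimal flashback table sketch (the table name is an assumption, the SCN is the one captured above):
SQL> alter table emp enable row movement;
SQL> flashback table emp to scn 715315;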
FLASHBACK DATABASE
———————————–
The database must be in archivelog mode and flashback must be enabled to perform this. When placed
in flashback mode, we can observe flashback logs getting generated in the flash_recovery_area
-- Flashback 5 minutes.
CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12);
ALTER DATABASE OPEN RESETLOGS;
DATAGUARD
Oracle Data Guard is one of the most effective and comprehensive data availability, data
protection and disaster recovery solutions available today for enterprise data. Oracle Data
Guard is the management, monitoring, and automation software infrastructure that creates,
maintains, and monitors one or more standby databases to protect enterprise data from
failures, disasters, errors, and corruptions.
Data Guard maintains these standby databases as transactionally consistent copies of the
production database. These standby databases can be located at remote disaster recovery sites
thousands of miles away from the production data center, or they may be located in the same
city, same campus, or even in the same building. If the production database becomes
unavailable because of a planned or an unplanned outage, Data Guard can switch any standby
database to the production role, thus minimizing the downtime associated with the outage, and
preventing any data loss.
Available as a feature of the Enterprise Edition of the Oracle Database, Data Guard can be used
in combination with other Oracle High Availability (HA) solutions such as Real Application
Clusters (RAC), Oracle Flashback and Oracle Recovery Manager (RMAN), to provide a very high
level of data protection and data availability that is unprecedented in the industry.
A Data Guard configuration consists of one production (or primary) database and up to nine
standby databases. The databases in a Data Guard configuration are connected by Oracle Net
and may be dispersed geographically. There are no restrictions on where the databases are
located, provided that they can communicate with each other. However, for disaster recovery,
it is recommended that the standby databases are hosted at sites that are geographically
separated from the primary site.
A standby database is initially created from a backup copy of the primary database. Once
created, Data Guard automatically maintains the standby database as a transactional consistent
copy of the primary database by transmitting primary database redo data to the standby
system and then applying the redo logs to the standby database. Data Guard provides two
methods to apply this redo data to the standby database and keep it transactional consistent
with the primary, and these methods correspond to the two types of standby databases
supported by Data Guard.
A physical standby database provides a physically identical copy of the primary database, with
on-disk database structures that are identical to the primary database on a block-for-block
basis. The database schemas, including indexes, are the same. The Redo Apply technology
applies redo data on the physical standby database using standard Oracle media recovery
techniques.
A logical standby database contains the same logical information as the production database,
although the physical organization and structure of the data can be different. The SQL apply
technology keeps the logical standby database synchronized with the primary database by
transforming the data in the redo logs received from the primary database into SQL statements
and then executing the SQL statements on the standby database. This makes it possible for the
logical standby database to be accessed for queries and reporting purposes at the same time
the SQL is being applied to it. Thus, a logical standby database can be used concurrently for
data protection and reporting.
Role Management:
Using Data Guard, the role of a database can be switched from a primary role to a standby role
and vice versa, ensuring no data loss in the process, and minimizing downtime. There are two
kinds of role transitions – a switchover and a failover. A switchover is a role reversal between
the primary database and one of its standby databases. This is typically done for planned
maintenance of the primary system. During a switchover, the primary database transitions to a
standby role and the standby database transitions to the primary role. The transition occurs
without having to re-create either database. A failover is an irreversible transition of a standby
database to the primary role. This is only done in the event of a catastrophic failure of the
primary database, which is assumed to be lost; to be used again in the Data Guard
configuration, it must be re-instantiated as a standby from the new primary.
In some situations, a business cannot afford to lose data at any cost. In other situations, some
applications require maximum database performance and can tolerate a potential loss of data.
Data Guard provides three distinct modes of data protection to satisfy these varied
requirements:
Maximum Protection— This mode offers the highest level of data protection. Data is
synchronously transmitted to the standby database from the primary database and
transactions are not committed on the primary database unless the redo data is available on at
least one standby database configured in this mode. If the last standby database configured in
this mode becomes unavailable, processing stops on the primary database. This mode ensures
no-data-loss.
Maximum Availability— This mode is similar to the maximum protection mode, including zero
data loss. However, if a standby database becomes unavailable (for example, because of
network connectivity problems), processing continues on the primary database. When the fault
is corrected, the standby database is automatically resynchronized with the primary database.
Maximum Performance— This mode offers slightly less data protection on the primary
database, but higher performance than maximum availability mode. In this mode, as the
primary database processes transactions, redo data is asynchronously shipped to the standby
database. The commit operation of the primary database does not wait for the standby
database to acknowledge receipt of redo data before completing write operations on the
primary database. If any standby destination becomes unavailable, processing continues on the
primary database and there is little effect on primary database performance.
The Oracle Data Guard Broker is a distributed management framework that automates and
centralizes the creation, maintenance, and monitoring of Data Guard configurations. All
management operations can be performed either through Oracle Enterprise Manager, which
uses the Broker, or through the Broker’s specialized command-line interface (DGMGRL).
The following diagram shows an overview of the Oracle Data Guard architecture.
Fast-Start Failover
This capability allows Data Guard to automatically, and quickly fail over to a previously chosen,
synchronized standby database in the event of loss of the primary database, without requiring
any manual steps to invoke the failover, and without incurring any data loss. Following a fast-
start failover, once the old primary database is repaired, Data Guard automatically reinstates it
to be a standby database. This act restores high availability to the Data Guard configuration.
Several enhancements have been made in the redo transmission architecture to make sure
redo data generated on the primary database can be transmitted as quickly and efficiently as
possible to the standby database(s).
A physical standby database can be activated as a primary database, opened read/write for
reporting purposes, and then flashed back to a point in the past to be easily converted back to a
physical standby database. At this point, Data Guard automatically synchronizes the standby
database with the primary database. This allows the physical standby database to be utilized for
read/write reporting and cloning activities.
Automatic deletion of applied archived redo log files in logical standby databases
Archived logs, once they are applied on the logical standby database, are automatically deleted,
reducing storage consumption on the logical standby and improving Data Guard manageability.
Physical standby databases have already had this functionality since Oracle Database 10g
Release 1, with Flash Recovery Area.
Oracle Enterprise Manager has been enhanced to provide granular, up-to-date monitoring of
Data Guard configurations, so that administrators may make an informed and expedient
decision regarding managing this configuration.
With this feature, redo data can be applied on the standby database (whether Redo Apply or
SQL Apply) as soon as it has been written to a Standby Redo Log (SRL). Prior releases of Data
Guard required this redo data to be archived at the standby database in the form of archivelogs
before it could be applied. The Real Time Apply feature allows standby databases to be closely
synchronized with the primary database, enabling up-to-date and real-time reporting
(especially for Data Guard SQL Apply). This also enables faster switchover and failover times,
which in turn reduces planned and unplanned downtime for the business.
The impact of a disaster is often measured in terms of Recovery Point Objective (RPO – i.e. how
much data can a business afford to lose in the event of a disaster) and Recovery Time Objective
(RTO – i.e. how much time a business can afford to be down in the event of a disaster). With
Oracle Data Guard, when Maximum Protection is used in combination with Real Time Apply,
businesses get the benefits of both zero data loss as well as minimal downtime in the event of a
disaster and this makes Oracle Data Guard the only solution available today with the best RPO
and RTO benefits for a business.
Data Guard in 10g has been integrated with the Flashback family of features to bring the
Flashback feature benefits to a Data Guard configuration. One such benefit is human error
protection. In Oracle9i, administrators may configure Data Guard with an apply delay to protect
standby databases from possible logical data corruptions that occurred on the primary
database. The side-effects of such delays are that any reporting that gets done on the standby
database is done on old data, and switchover/failover gets delayed because the accumulated
logs have to be applied first. In Data Guard 10g, with the Real Time Apply feature, such delayed-
reporting or delayed-switchover/failover issues do not exist, and – if logical corruptions do land
up affecting both the primary and standby database, the administrator may decide to use
Flashback Database on both the primary and standby databases to quickly revert the databases
to an earlier point-in-time to back out such user errors.
Another benefit that such integration provides is during failovers. In releases prior to 10g,
following any failover operation, the old primary database must be recreated (as a new standby
database) from a backup of the new primary database, if the administrator intends to bring it
back in the Data Guard configuration. This may be an issue when the database sizes are fairly
large, and the primary/standby databases are hundreds/thousands of miles away. However, in
Data Guard 10g, after the primary server fault is repaired, the primary database may simply be
brought up in mounted mode, “flashed back” (using flashback database) to the SCN at which
the failover occurred, and then brought back as a standby database in the Data Guard
configuration. No re-instantiation is required.
Logical standby database can now be created from an online backup of the primary database,
without shutting down or quiescing the primary database, as was the case in prior releases. No
shutdown of the primary system implies production downtime is eliminated, and no quiesce
implies no waiting for quiescing to take effect and no dependence on Resource Manager.
Rolling Upgrades:
Oracle Database 10g supports database software upgrades (from Oracle Database 10g Patchset
1 onwards) in a rolling fashion, with near zero database downtime, by using Data Guard SQL
Apply. The steps involve upgrading the logical standby database to the next release, running in
a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the
upgraded database, and then finally upgrading the old primary database. While running in a
mixed mode for testing purpose, the upgrade can be aborted and the software downgraded,
without data loss. For additional data protection during these steps, a second standby database
may be used.
By supporting rolling upgrades with minimal downtimes, Data Guard reduces the large
maintenance windows typical of many administrative tasks, and enables the 24×7 operation of
the business.
Data Guard provides an efficient and comprehensive disaster recovery and high availability
solution. Automatic failover and easy-to-manage switchover capabilities allow quick role
reversals between primary and standby databases, minimizing the downtime of the primary
database for planned and unplanned outages.
A standby database also provides an effective safeguard against data corruptions and user
errors. Storage level physical corruptions on the primary database do not propagate to the
standby database. Similarly, logical corruptions or user errors that cause the primary database
to be permanently damaged can be resolved. Finally, the redo data is validated at the time it is
received at the standby database and further when applied to the standby database.
A physical standby database can be used for backups and read-only reporting, thereby reducing
the primary database workload and saving valuable CPU and I/O cycles. In Oracle Database 10g
Release 2, a physical standby database can also be easily converted back and forth between
being a physical standby database and an open read/write database. A logical standby database
allows its tables to be simultaneously available for read-only access while they are updated
from the primary database. A logical standby database also allows users to perform data
manipulation operations on tables that are not updated from the primary database. Finally,
additional indexes and materialized views can be created in the logical standby database for
better reporting performance.
If network connectivity is lost between the primary and one or more standby databases, redo
data cannot be sent from the primary to those standby databases. Once connectivity is re-
established, the missing redo data is automatically detected by Data Guard and the necessary
archive logs are automatically transmitted to the standby databases. The standby databases are
resynchronized with the primary database, with no manual intervention by the administrator.
Data Guard Broker automates the management and monitoring tasks across the multiple
databases in a Data Guard configuration. Administrators may use either Oracle Enterprise
Manager or the Broker’s own specialized command-line interface (DGMGRL) to take advantage
of this integrated management framework.
Create standby redo log groups on standby database (start with next group number; create one more
group than current number of groups) after switching out of managed recovery mode:
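An illustrative sketch (group numbers, paths and sizes are assumptions):
SQL> alter database recover managed standby database cancel;
SQL> alter database add standby logfile group 4 ('/datafiles/prod/srl04.log') size 50m;
SQL> alter database add standby logfile group 5 ('/datafiles/prod/srl05.log') size 50m;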
Add a tempfile to the standby database for switchover or read-only access, then, switch back to
managed recovery:
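An illustrative sketch (path and size are assumptions):
SQL> alter tablespace temp add tempfile '/datafiles/prod/temp01.dbf' size 500m;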
SQL> alter database recover managed standby database disconnect from session;
SQL> exit
Create standby logfile groups on the primary database for switchovers (start with next group number;
create one more group than current number of groups):
$ sqlplus “/ as sysdba”
Switch to the desired “maximum availability” protection mode on the primary database (from the
default “maximum performance”):
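A sketch of the mode-change command (run on the primary):
SQL> alter database set standby database to maximize availability;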
SQL> select value from v$parameter where name = 'log_archive_dest_2'; -- must show LGWR SYNC
If in read-only access, switch back to managed recovery (after terminating any other active sessions):
SQL> alter database recover managed standby database disconnect from session;
SQL> alter database recover managed standby database disconnect from session;
SQL> startup
On the primary:
SQL> alter database recover managed standby database disconnect from session;
On the standby:
Change tnsnames.ora entry on all servers to swap the connect strings (myserver_prod and
myserver_prod2).
On the standby: SQL> alter database recover managed standby database finish;
SQL> startup
Change tnsnames.ora entry on all servers to point the primary connect string to the standby database.
This query detects gaps in the logs that have been received. If any rows are returned by this query then
there is a gap in the sequence numbers of the logs that have been received.
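The gap-detection query itself is not preserved in these notes; one simple way to check for a gap on the standby (a sketch):
SQL> select * from v$archive_gap;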
This query detects bad statuses. When a bad status is present this query will return a '1'.
The 'ARCH' process should always be 'CONNECTED'. The 'MRP0' process should always be waiting for a
log or applying a log, and when this is not true it will report the error in the status. The 'RFS' process
exists when the Primary is connected to the Standby and should always be 'IDLE' or 'RECEIVING'.
This query detects missing processes. If we do not have exactly 3 distinct processes then there is a
problem, and this query will return a '1'.
The most likely process to be missing is the ‘RFS’ which is the connection to the Primary database. You
must resolve the problem preventing the Primary from connecting to the Standby before this process
will start running again.
# Verify all STANDBY PROCESSES are running normally on the STANDBY database.
A query with good results follows proving all processes are connected with normal statuses.
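The process-check query is not preserved here; a sketch of the usual check:
SQL> select process, status, sequence# from v$managed_standby;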
SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG WHERE FIRST_TIME >
TRUNC(SYSDATE) ORDER BY SEQUENCE#;
V$DATABASE