Oracle DBA: Backup and Recovery Scripts
By Rajendra Gutta
Having the right backup and recovery procedures is the lifeblood of any database. Companies live on data,
and, if that data is not available, the whole company collapses. It is therefore the responsibility of the
database administrator to protect the database from system faults, crashes, and natural calamities arising
from a variety of circumstances. This chapter shows how to choose the best backup and recovery
mechanism for your Oracle system.
The choice of a backup and recovery mechanism depends mainly on the archiving mode of the database.
When the database is running in ARCHIVELOG mode, the choice of backup is as follows:
Export
Hot backup
Cold backup
When the database is running in NOARCHIVELOG mode, the choice of backup is as follows:
Export
Cold backup
Cold Backup
Offline or cold backups are performed when the database is completely shut down. The disadvantage of an
offline backup is that it cannot be done if the database must run 24/7. Additionally, you can only
recover the database up to the point when the last backup was made, unless the database is running in
ARCHIVELOG mode.
The general steps involved in performing a cold backup are shown in Figure 3.1. These general steps are
used in writing cold backup scripts for Unix and Windows NT. A cold backup consists of backing up the
following files:
Data files
Control files
Init.ora and config.ora files
CAUTION
Backing up online redo log files is not advised, except when performing a cold backup with the
database running in NOARCHIVELOG mode. If you make a cold backup in ARCHIVELOG mode, do not
back up the redo log files. There is a chance that you may accidentally overwrite your real online redo logs
when restoring, preventing you from doing a complete recovery.
If your database is running in ARCHIVELOG mode, when you perform a cold backup you should also back up
any archive logs that exist.
Before performing a cold backup, you need to know the location of the files that need to be backed up.
Because the database structure changes from day to day as files are added or moved between directories,
it is always better to query the database for the physical structure of the database before making a cold
backup.
To get the structure of the database, query dynamic performance views such as V$DATAFILE, V$LOGFILE,
and V$CONTROLFILE.
Back up the control file, and create a trace of the control file, using:
SQL>alter database backup controlfile to '/u10/backup/control.ctl';
SQL>alter database backup controlfile to trace;
Hot Backup
An online backup, or hot backup, is also referred to as an ARCHIVELOG backup. An online backup can only be
done when the database is running in ARCHIVELOG mode and the database is open. When the database
is running in ARCHIVELOG mode, the archiver (ARCH) background process makes a copy of each online
redo log file at the archive log destination.
An online backup consists of backing up the same set of files as a cold backup. But, because the database
is open while performing the backup, you have to follow the procedure shown in Figure 3.2 to back up the files.
The general steps involved in performing hot backup are shown in Figure 3.2. These general steps are
used in writing hot backup scripts for Unix and Windows NT.
The steps in Figure 3.2 are explained as follows.
Step 1: Put the tablespace in backup mode and copy the data files.
Assume that your database has two tablespaces, USERS and TOOLS. To back up the files for these two
tablespaces, first put the tablespace in backup mode by using the ALTER statement as follows:
SQL>alter tablespace USERS begin backup;
After the tablespace is in backup mode, you can use the SELECT statement to list the data files for the
USERS tablespace, and the copy (cp) command to copy the files to the backup location. Assume that the
USERS tablespace has two data files: users01.dbf and users02.dbf.
SQL>select file_name from dba_data_files
where tablespace_name='USERS';
$cp /u01/oracle/users01.dbf /u10/backup
$cp /u01/oracle/users02.dbf /u10/backup
The following command ends the backup process and puts the tablespace back in normal mode.
SQL>alter tablespace USERS end backup;
You have to repeat this process for all tablespaces. You can get the list of tablespaces by using the
following SQL statement:
SQL>select tablespace_name from dba_tablespaces;
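The begin backup / copy / end backup cycle then repeats for every tablespace. The loop can be sketched in shell; here run_sql is a hypothetical stub standing in for the sqlplus calls, and the tablespace names, file names, and directories are invented, so the control flow can be traced without a live database:

```shell
#!/bin/sh
# Loop over tablespaces: begin backup, copy each datafile, end backup.
# run_sql is a stub standing in for sqlplus; names and paths are examples.
DEMO=/tmp/hotloop_demo
rm -rf ${DEMO}
mkdir -p ${DEMO}/copies
run_sql() {
    # Stub: log the SQL instead of executing it against a database
    echo "SQL> $1" >> ${DEMO}/sql.log
}
datafiles_for() {
    # Stand-in for "select file_name from dba_data_files where ..."
    case $1 in
        USERS) echo "${DEMO}/users01.dbf ${DEMO}/users02.dbf";;
        TOOLS) echo "${DEMO}/tools01.dbf";;
    esac
}
# Create dummy datafiles to stand in for the real ones
for f in `datafiles_for USERS` `datafiles_for TOOLS`; do echo data > $f; done
for ts in USERS TOOLS; do
    run_sql "alter tablespace ${ts} begin backup;"
    for df in `datafiles_for ${ts}`; do
        cp -p ${df} ${DEMO}/copies
    done
    run_sql "alter tablespace ${ts} end backup;"
done
```

In the real script the two lists come from dba_tablespaces and dba_data_files, and run_sql is an sqlplus here-document.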
Step 2: Back up the control and Init.ora files.
To back up the control file:
SQL>alter database backup controlfile to '/u10/backup/control.ctl';
You can copy the Init.ora file to a backup location using
$cp $ORACLE_HOME/dbs/initorcl.ora /u10/backup
Step 3: Stop archiving.
Archiving is a continuous process and, without stopping the archiver, you might unintentionally copy the file
that the archiver is currently writing. To avoid this, first stop the archiver and then copy the archive files to
the backup location. You can stop the archiver as follows:
SQL>alter system switch logfile;
SQL>alter system archive log stop;
The first command switches redo log file and the second command stops the archiver process.
Step 4: Back up the archive files.
To avoid backing up the archive file that is currently being written, we find the lowest sequence number that
has not yet been archived from the V$LOG view, and then back up all the archive files before that sequence
number. The archive file location is defined by the LOG_ARCHIVE_DEST_n parameter in the Init.ora file.
select min(sequence#) from v$log
where archived='NO';
Step 5: Restart the archiver process.
The following command restarts the archiver process:
SQL>alter system archive log start;
Now you have completed the hot backup of the database.
An online backup keeps the database open and functional for 24/7 operations. It is advisable
to schedule online backups when there is the least user activity on the database, because backing up the
database is very I/O intensive and users may see slower response during the backup period. Additionally, if
user activity is very high, the archive destination might fill up very fast.
Logical Export
Export is the single most versatile utility available to perform a backup of the database, defragment the
database, and port the database or individual objects from one operating system to another. Export can
operate in one of two modes:
Conventional path (the default): Uses the SQL layer to create the export file. The SQL layer
introduces CPU overhead for character set conversion, converting numbers, dates, and so on, which makes
it time consuming.
Direct path (DIRECT=YES): Skips the SQL layer and reads directly from the database buffers or
private buffers. It is therefore much faster than the conventional path.
We will discuss scripts to perform full, user-level, and table-level exports of the database. The scripts also
show you how to compress and split the export file while performing the export. This is especially useful if
the underlying operating system has a 2GB maximum file size limit.
Understand scripting
This chapter requires an understanding of the basic Unix shell and DOS batch programming techniques that
are described in Chapter 2, "Building Blocks." That chapter explained some of the common routines that are
used across most of the scripts presented here.
This book could have provided much simpler scripts. But, in the interest of standardization across all scripts
and the reusability of individual sections in your own scripts, I am focusing on providing
comprehensive scripts rather than temporary fixes. After you understand one script, it is easy to follow the
flow of the rest.
Cold Backup
The cold backup program (see Listing 3.1) performs a cold backup of the database under the Unix
environment. The script takes two input parameters: SID and OWNER. SID is the instance to be backed
up, and OWNER is the Unix account under which Oracle is running. Figure 3.3 describes the functionality of
the cold backup program. Each box represents a corresponding function in the program.
cp -p ${datafile} ${DATAFILE_DIR}
funct_chk_ux_cmd_stat "Failed to copy datafile to backup location"
done
#Copy current init<SID>.ora file to backup directory
echo " Copying current init.ora file" >> ${BACKUPLOGFILE}
cp -p ${init_file} ${INITFILE_DIR}/init${ORA_SID}.ora
funct_chk_ux_cmd_stat "Failed to copy init.ora file to backup location"
echo "################ Init.ora File " >> ${RESTOREFILE}
echo cp -p ${INITFILE_DIR}/init${ORA_SID}.ora ${init_file} >> ${RESTOREFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "COLDBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"
BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"
if [ ! -d ${RESTOREFILE_DIR} ]; then mkdir -p ${RESTOREFILE_DIR}; fi
if [ ! -d ${BACKUPLOG_DIR} ]; then mkdir -p ${BACKUPLOG_DIR}; fi
if [ ! -d ${DATAFILE_DIR} ]; then mkdir -p ${DATAFILE_DIR}; fi
if [ ! -d ${CONTROLFILE_DIR} ]; then mkdir -p ${CONTROLFILE_DIR}; fi
if [ ! -d ${REDOLOG_DIR} ]; then mkdir -p ${REDOLOG_DIR}; fi
if [ ! -d ${ARCLOG_DIR} ]; then mkdir -p ${ARCLOG_DIR}; fi
if [ ! -d ${INITFILE_DIR} ]; then mkdir -p ${INITFILE_DIR}; fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
init_file=$ORA_HOME/dbs/init$ORA_SID.ora
#log_arch_dest1=`sed /#/d $init_file|grep -i log_archive_dest|nawk -F "=" '{print $2}'`
#log_arch_dest=`echo $log_arch_dest1|tr -d "'"|tr -d '"'`
udump_dest=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select value from v\\$parameter
where name='user_dump_dest';
exit
EOF`
if [ x$ORA_HOME = 'x' ]; then
echo "COLDBACKUP_FAIL: Can't get ORACLE_HOME from oratab file
for $ORA_SID"|tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
if [ ! -f $init_file ]; then
echo "COLDBACKUP_FAIL: init$ORA_SID.ora does not exist in
ORACLE_HOME/dbs"|tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
if [ x$udump_dest = 'x' ]; then
echo "COLDBACKUP_FAIL: user_dump_dest not defined in init$ORA_SID.ora"|
tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_ux_cmd_stat(): Check the exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_ux_cmd_stat() {
if [ $? != 0 ]; then
echo `date` |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "COLDBACKUP_FAIL: ${1} "| tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}
############################################################
# MAIN
############################################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set environment variables
BACKUPDIR="/u02/${ORA_SID}/cold"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="dbcoldbackup"
echo " Starting coldbackup of ${ORA_SID} "
funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_verify
funct_build_dynfiles
funct_shutdown_i
funct_startup_r
funct_shutdown_n
funct_verify_shutdown
funct_cold_backup
funct_startup_n
echo "${ORA_SID}, Coldbackup Completed successfully on `date +\"%c\"`" |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
######## END MAIN ##########################
In the main function, set correct values for the BACKUPDIR, ORATABDIR, and TOOLS variables
highlighted in the cold backup script. The default location of ORATABDIR is different for each flavor
of Unix. For information about the default location of the ORATAB file for different flavors of Unix,
refer to Chapter 13, "Unix, Windows NT, and Oracle."
Check for the existence of SID in oratab file. If not already there, you must add the instance.
Check for existence of initSID.ora file in the ORACLE_HOME/dbs directory. If it is in a different
location, you can create a soft link to the ORACLE_HOME/dbs directory.
Pass SID and OWNER as parameters to the program.
The database must be running when you start the program. It gets required information by querying
the database and then shuts down the database and performs cold backup.
main() The main function defines the variables required and calls the functions to be executed.
The variable BACKUPDIR defines the backup location, and ORATABDIR defines the oratab file
location. The oratab file maintains the list of instances and their home directories on the machine. This
file is created by default when Oracle is installed. If it is not there, you must create one. OWNER is the
owner of the Oracle software directories. A sample oratab file can be found at the end of the chapter.
funct_get_vars() This function gets ORACLE_HOME from the oratab file and
USER_DUMP_DEST from the initSID.ora file. The value of USER_DUMP_DEST is used to back up
the trace of the control file.
funct_build_dynfiles() This function generates a list of files from the database for backup. It
also creates SQL statements for temporary files. These temporary files do not need to be backed
up, but can be recreated when a restore is performed. These temporary files are session-specific
and do not have any content when the database is closed.
funct_shutdown_i() This function shuts down the database in Immediate mode, so that any
user connected to the database will be disconnected immediately.
funct_startup_r() This function starts up the database in Restricted mode, so that no one can
connect to the database except users with Restrict privileges.
funct_shutdown_n() This function performs a clean shutdown of the database.
funct_chk_ux_cmd_stat() This function is used to check the status of Unix commands,
especially after copying files to a backup location.
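The oratab parsing performed by funct_get_vars can be exercised on its own. A minimal sketch against a throwaway oratab file follows; awk stands in here for nawk (which is Solaris-specific), and the SID and paths are invented:

```shell
#!/bin/sh
# Parse an oratab-style file the way funct_get_vars does: drop comment
# lines, find the SID's entry, take the second colon-delimited field.
# The file contents below are throwaway example values.
ORATABDIR=/tmp/demo_oratab
cat > ${ORATABDIR} <<EOF
# This line is a comment
orcl:/u01/app/oracle/product/8.1.7:Y
dev1:/u01/app/oracle/product/8.0.5:N
EOF
ORA_SID=orcl
ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | awk -F ":" '{print $2}'`
echo "ORACLE_HOME for ${ORA_SID}: ${ORA_HOME}"
```

The real script then exports this value as ORACLE_HOME and derives init_file from it.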
Restore File
A cold backup program creates a restore file that contains the commands to restore the database. This
functionality is added based on the fact that a lot of DBAs perform backups but, when it comes to recovery,
they will not have any procedures to make the recovery faster. With the restore file, it is easier to restore
files to the original location because it has all the commands ready to restore the backup. Otherwise, you
need to know the structure of the databasewhat files are located where. A sample restore file is shown in
Listing 3.2.
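The idea behind the restore file can be sketched in a few lines of shell: for every file copied into the backup area, append the reverse cp command to the restore file. All paths below are invented for illustration:

```shell
#!/bin/sh
# Build a restore file the way the cold backup script does: one reverse
# "cp" line per backed-up datafile. All paths are example values.
BKPDIR=/tmp/restore_demo/datafile_dir
RESTOREFILE=/tmp/restore_demo/restorefile_orcl
rm -rf /tmp/restore_demo
mkdir -p ${BKPDIR}
echo "##### Data Files" > ${RESTOREFILE}
for datafile in /u01/oracle/users01.dbf /u01/oracle/users02.dbf
do
    # Strip the directory part to get the name of the copy in the backup area
    base=`echo ${datafile} | awk -F"/" '{print $NF}'`
    echo "cp -p ${BKPDIR}/${base} ${datafile}" >> ${RESTOREFILE}
done
cat ${RESTOREFILE}
```

At recovery time, the restore file can simply be executed as a shell script.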
Hot Backup
Listing 3.4 provides the script to perform the hot backup of a database under the Unix
environment. The hot backup script takes two input parameters: SID and OWNER. SID is the
instance to be backed up, and OWNER is the Unix account under which Oracle is running.
Figure 3.4 shows the functionality of the hot backup program. Each box represents a
corresponding function in the program.
exit
EOF
# This gets the redo sequence number that is being archived
# and remove this from the list of files to be backed up
ARCSEQ=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select min(sequence#) from v\\$log
where archived='NO';
exit
EOF`
#Get current list of archived redo log files
ARCLOG_FILES=`ls ${log_arch_dest}/*|grep -v $ARCSEQ`
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter system archive log start;
exit
EOF
#Prepare restore file for arc log files
echo "##### Archive Log Files" >> ${RESTOREFILE}
for arc_file in `echo $ARCLOG_FILES`
do
echo cp -p ${ARCLOG_DIR}/`echo $arc_file|awk -F"/" '{print $NF}'` $arc_file >> ${RESTOREFILE}
done
#Copy arc log files to backup location
#remove the archived redo logs from the log_archive_dest if copy is successful
cp -p ${ARCLOG_FILES} ${ARCLOG_DIR}
if [ $? = 0 ]; then
rm ${ARCLOG_FILES}
else
echo `date` |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL: Failed to copy Archive log files" |
tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
echo "End backup of archived redo logs" >> ${BACKUPLOGFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_init_backup(): Backup init.ora file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_init_backup(){
#Copy current init<SID>.ora file to backup directory
echo " Copying current init${ORA_SID}.ora file" >> ${BACKUPLOGFILE}
cp -p ${init_file} ${INITFILE_DIR}/init${ORA_SID}.ora
funct_chk_ux_cmd_stat "Failed to copy init.ora file to backup location"
# Prepare restore file for init.ora
echo "############# Parameter Files" >> ${RESTOREFILE}
echo cp -p ${INITFILE_DIR}/init${ORA_SID}.ora ${init_file} >> ${RESTOREFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_temp_backup(): Prepre SQL for temp files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_temp_backup(){
echo "############# Recreate the following Temporary Files" >> ${RESTOREFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF >> ${RESTOREFILE}
/ as sysdba
set heading off feedback off
select 'alter tablespace '||tablespace_name||' add tempfile '||''''||
file_name||''''||' reuse'||';'
from dba_temp_files;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#funct_hot_backup(): Backup datafiles
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_hot_backup(){
# Get the list of tablespaces
echo "Building tablespace list " >> ${BACKUPLOGFILE}
tablespace_list=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select distinct tablespace_name from dba_data_files
order by tablespace_name;
exit
EOF`
echo "##### DATE:" `date` > ${RESTOREFILE}
echo "####Data Files(Please restore only corrupted files)" >> ${RESTOREFILE}
for tblspace in `echo $tablespace_list`
do
# Get the datafiles for the current tablespace
datafile_list=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select file_name from dba_data_files
where tablespace_name = '${tblspace}';
exit
EOF`
echo " Beginning back up of tablespace ${tblspace}..." >> ${BACKUPLOGFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} begin backup;
exit
EOF
# Copy datafiles of current tablespace
for datafile in `echo $datafile_list`
do
echo "Copying datafile ${datafile}..." >> ${BACKUPLOGFILE}
# The next command prepares restore file
echo cp -p ${DATAFILE_DIR}/`echo $datafile|awk -F"/" '{print $NF}'` $datafile >> ${RESTOREFILE}
cp -p ${datafile} ${DATAFILE_DIR}
if [ $? != 0 ]; then
echo `date` |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL: Failed to copy file to backup location "|
tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
# Ending the tablespace backup before exiting
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} end backup;
exit
EOF
exit 1
fi
done
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} end backup;
exit
EOF
done
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"
BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"
if [ ! -d ${RESTOREFILE_DIR} ]; then mkdir -p ${RESTOREFILE_DIR}; fi
if [ ! -d ${BACKUPLOG_DIR} ]; then mkdir -p ${BACKUPLOG_DIR}; fi
if [ ! -d ${DATAFILE_DIR} ]; then mkdir -p ${DATAFILE_DIR}; fi
if [ ! -d ${CONTROLFILE_DIR} ]; then mkdir -p ${CONTROLFILE_DIR}; fi
if [ ! -d ${REDOLOG_DIR} ]; then mkdir -p ${REDOLOG_DIR}; fi
if [ ! -d ${ARCLOG_DIR} ]; then mkdir -p ${ARCLOG_DIR}; fi
if [ ! -d ${INITFILE_DIR} ]; then mkdir -p ${INITFILE_DIR}; fi
rm -f ${REDOLOG_DIR}/*
rm -f ${ARCLOG_DIR}/*
rm -f ${INITFILE_DIR}/*
echo "${JOBNAME}: hotbackup of ${ORA_SID} begun on `date +\"%c\"`" > ${BACKUPLOGFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
init_file=$ORA_HOME/dbs/init$ORA_SID.ora
#log_arch_dest1=`sed /#/d $init_file|grep -i log_archive_dest|nawk -F "=" '{print $2}'`
#log_arch_dest=`echo $log_arch_dest1|tr -d "'"|tr -d '"'`
udump_dest=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select value from v\\$parameter
where name='user_dump_dest';
exit
EOF`
if [ x$ORA_HOME = 'x' ]; then
echo "HOTBACKUP_FAIL: can't get ORACLE_HOME from oratab file for $ORA_SID" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
if [ ! -f $init_file ]; then
echo "HOTBACKUP_FAIL: init$ORA_SID.ora does not exist in ORACLE_HOME/dbs" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}
############################################################
# MAIN
############################################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set environment variables
BACKUPDIR="/u02/${ORA_SID}/hot"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
log_arch_dest="/export/home/orcl/arch"
DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="dbhotbackup"
echo " Starting hotbackup of .... ${ORA_SID}"
funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_verify
funct_chk_dblogmode
funct_hot_backup
funct_temp_backup
funct_control_backup
funct_archivelog_backup
funct_init_backup
echo "${ORA_SID}, hotbackup Completed successfully on `date +\"%c\"`" |
tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
######## END MAIN #########################
In the main function, set the correct values for BACKUPDIR, ORATABDIR, TOOLS, and
log_arch_dest variables highlighted in the script. The default location of ORATABDIR is
different for each flavor of Unix.
Check for existence of the SID instance in the oratab file. If not already there, you must
add the instance.
Check for the existence of the initSID.ora file in the ORACLE_HOME/dbs directory. If it is
in a different location, you must create a soft link to the ORACLE_HOME/dbs directory.
Pass SID and OWNER as parameters to the program.
main() BACKUPDIR defines the backup location. ORATABDIR defines the oratab file
location. oratab files maintain the list of instances and their home directories on the
machine. This file is created by default when Oracle is installed. If it is not there, you must
create one. OWNER is the owner of the Oracle software directories.
funct_get_vars() Make sure that the USER_DUMP_DEST parameter is set correctly in the
Init.ora file. I was reluctant to get LOG_ARCHIVE_DEST from the Init.ora file because
there are some changes between Oracle 7 and 8 in the way the archive destination is
defined. There are a variety of ways to define log_archive_dest, depending on how
many destinations you are using. Consequently, I have provided the option to define
log_archive_dest in the main function.
funct_temp_backup() Oracle 7 and Oracle 8 support permanent temporary tablespaces
(created with create tablespace tablespace_name ... temporary). Apart from this,
Oracle 8i has new features to create temporary tablespaces that do not need backup
(created with create temporary tablespace ...). Data in these temporary tablespaces is
session-specific and is deleted as soon as the session disconnects. Because of the
nature of these temporary tablespaces, you do not need to back them up; in the case of a
restore, you can just add the data files for these temporary tablespaces. The files for these
temporary tablespaces are listed in the dba_temp_files data dictionary view.
funct_control_backup() In addition to taking backup of control file, this function also
backs up the trace of the control file. The trace of the control file will be useful to examine
the structure of the database. This is the single most important piece of information that
you need to perform a good recovery, especially if the database has hundreds of files.
funct_chk_bkup_dir() This function creates backup directories for data, control, redo
log, archivelog, init files, restore files, and backup log files.
Restore file
The restore file for a hot backup looks similar to the one for a cold backup. Refer to the explanation
under the "Restore File" heading in the cold backup section.
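The archive-log handling described for Step 4 — skip the sequence still being written, copy the rest, and remove the copies from the archive destination — can be sketched on its own. The directory names and the arch_<seq>.log naming below are invented, not the script's real values:

```shell
#!/bin/sh
# Sketch of the archive-log filtering idea from funct_archivelog_backup:
# exclude the sequence still being written, copy the rest, then remove
# the originals. Directories and file names are example values.
ARCH_SRC=/tmp/archdemo/arch
ARCH_BKP=/tmp/archdemo/backup
rm -rf /tmp/archdemo
mkdir -p ${ARCH_SRC} ${ARCH_BKP}
for seq in 101 102 103 104; do
    echo "redo ${seq}" > ${ARCH_SRC}/arch_${seq}.log
done
ARCSEQ=104   # pretend v$log said sequence 104 is not yet fully archived
ARCLOG_FILES=`ls ${ARCH_SRC}/* | grep -v ${ARCSEQ}`
cp -p ${ARCLOG_FILES} ${ARCH_BKP}
# Remove the originals only if the copy succeeded
if [ $? = 0 ]; then
    rm ${ARCLOG_FILES}
fi
```

After the run, only the in-flight sequence remains in the archive destination; everything older sits in the backup area.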
Export
The export program (see Listing 3.5) performs a full export of the database under the Unix
environment. The export script takes two input parameters: SID and OWNER. SID is the instance to
be backed up, and OWNER is the Unix account under which Oracle is running. Figure 3.5 shows the
functionality of the export and split export programs. Each box represents a corresponding
function in the program.
Figure 3.5 Functions in export and split export scripts for Unix.
######################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID),
Owner($ORA_OWNER). Can't perform export "
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cleanup() {
echo "Left for user convenience" > /dev/null
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): This will create parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
# if you use connect string. see next line.
#userid=system/manager@${CON_STRING}
#echo "Owner=scott">>${PARFILE}
#echo "Tables=scott.T1">>${PARFILE}
echo "Full=Y">>${PARFILE}
#echo "Direct=Y">>${PARFILE}
echo "Grants=Y">>${PARFILE}
echo "Indexes=Y">>${PARFILE}
echo "Rows=Y">>${PARFILE}
echo "Constraints=Y">>${PARFILE}
echo "Compress=N">>${PARFILE}
echo "Consistent=Y">>${PARFILE}
echo "File=${FILE}">>${PARFILE}
echo "Log=${EXPORT_DIR}/${ORA_SID}.exp.log">>${PARFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_export(): Export the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_export() {
# Remove old export file
rm -f ${FILE}
${ORACLE_HOME}/bin/exp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "EXPORT_FAIL: ${ORA_SID}, Export Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "EXPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
EXPORT_DIR=${BACKUPDIR}
if [ ! -d ${EXPORT_DIR} ]; then mkdir -p ${EXPORT_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
FILE="${EXPORT_DIR}/${ORA_SID}.dmp"
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
#CON_STRING=${ORA_SID}.company.com
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo `date` >> ${LOGDIR}/${ORA_SID}.log
echo "EXPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}
######################################
# MAIN
######################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set up the environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/export.par"
LOGDIR="${TOOLS}/localog"
echo "... Now exporting .... ${ORA_SID}"
funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_build_parfile
funct_export
funct_cleanup
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "${ORA_SID}, export completed successfully" >> $LOGDIR/${ORA_SID}.log
####################### END MAIN ###############################
In the main function, set the correct values for BACKUPDIR, ORATABDIR, and TOOLS
variables highlighted in the export script. The default location of ORATABDIR is different for
each flavor of Unix.
Check for existence of SID in the oratab file. If not already there, you must add the
instance.
The funct_build_parfile() function builds the parameter file. By default, it performs a
full export. You can modify the parameters to perform a user- or table-level export.
Pass SID and OWNER as parameters to the program.
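As an aside, the parameter file that funct_build_parfile assembles with repeated echo commands can equally be written with a single here-document. The sketch below builds an equivalent file; the directory, SID, and credentials are example values:

```shell
#!/bin/sh
# Build the export parameter file with one here-document instead of a
# chain of echo commands. Paths, SID, and password are example values.
ORA_SID=orcl
EXPORT_DIR=/tmp/demo_export
PARFILE=/tmp/demo_export.par
mkdir -p ${EXPORT_DIR}
cat > ${PARFILE} <<EOF
userid=system/manager
Full=Y
Grants=Y
Indexes=Y
Rows=Y
Constraints=Y
Compress=N
Consistent=Y
File=${EXPORT_DIR}/${ORA_SID}.dmp
Log=${EXPORT_DIR}/${ORA_SID}.exp.log
EOF
cat ${PARFILE}
```

The echo-per-line style in the listing has one advantage: individual parameters are easy to comment out, as the script does for Owner, Tables, and Direct.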
Split Export
The split export program (see Listing 3.6) performs an export of the database. Additionally, if the
export file is larger than 2GB, the script compresses the export file and splits it into multiple files to
overcome the 2GB file size limitation of the file system. This is the only way to split the export
file prior to Oracle 8i. New features in 8i allow you to split the export file into multiple files, but they
do not compress the files on-the-fly to save space. The script uses the Unix commands split
and compress to perform the splitting and compressing of the files. The functions of the script are
explained in Figure 3.5.
The split export script takes two input parameters: SID and OWNER. SID is the instance to be
backed up, and OWNER is the Unix account under which Oracle is running.
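The pipe arrangement the script uses can be tried at a small scale without Oracle. In this miniature sketch, gzip stands in for compress (which is absent on many modern systems), a 1KB chunk size stands in for the script's 1000MB, and seq stands in for the export stream:

```shell
#!/bin/sh
# Compress-and-split through a named pipe, miniaturized: gzip stands in
# for compress, 1KB chunks stand in for 1000MB, seq for the export stream.
DEMO=/tmp/splitdemo
rm -rf ${DEMO}
mkdir -p ${DEMO}
mkfifo ${DEMO}/exp_pipe
# Reader side, in the background: compress the stream and split it into
# 1KB chunks named dump.Z.aa, dump.Z.ab, ...
gzip -c < ${DEMO}/exp_pipe | split -b 1024 - ${DEMO}/dump.Z. &
# Writer side: the "export" writes into the pipe
seq 1 20000 > ${DEMO}/exp_pipe
wait
# Restore path: concatenate the chunks in order and decompress
cat ${DEMO}/dump.Z.* | gzip -dc > ${DEMO}/restored.txt
```

Because split names the chunks alphabetically, a plain shell glob concatenates them back in the right order, which is exactly how a split export would be fed to import.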
Export New Features in 8i
In 8i, Oracle introduced two new export parameters, FILESIZE and QUERY. FILESIZE
specifies the maximum size of each dump file. This overcomes the 2GB file size limitation
of the export utility on some operating systems. By using the QUERY parameter, you can export a subset of
a table's data. During an import that uses split export files, you have to specify the same
FILESIZE limit.
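As a sketch, a parfile using these 8i parameters might look like the following; the parameter names are real, but the file names, size, table, and query are example values:

```text
userid=system/manager
tables=(scott.emp)
query="where deptno=10"
file=(exp01.dmp,exp02.dmp,exp03.dmp)
filesize=1GB
log=exp_emp.log
```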
funct_splitcompress_pipe() {
# Creates pipe for compressing
if [ ! -r ${PIPE_DEVICE} ]; then
/etc/mknod ${PIPE_DEVICE} p
fi
#Creates pipe for splitting
if [ ! -r ${SPLIT_PIPE_DEVICE} ]; then
/etc/mknod ${SPLIT_PIPE_DEVICE} p
fi
# Splits the file for every 1000MB
# As it splits, it adds aa, ab, ac, and so on to the name
nohup split -b1000m - ${ZFILE} < ${SPLIT_PIPE_DEVICE} &
nohup compress < ${PIPE_DEVICE} >${SPLIT_PIPE_DEVICE} &
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): Creates parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
echo "Full=Y">>${PARFILE}
#echo "tables=scott.t1">>${PARFILE}
echo "Grants=Y">>${PARFILE}
echo "Indexes=Y">>${PARFILE}
echo "Rows=Y">>${PARFILE}
echo "Constraints=Y">>${PARFILE}
echo "Compress=N">>${PARFILE}
echo "Consistent=Y">>${PARFILE}
echo "File=${PIPE_DEVICE}">>${PARFILE}
echo "Log=${EXPORT_DIR}/${ORA_SID}.exp.log">>${PARFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_export(): Export the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_export() {
# Remove old export file
rm -f ${ZFILE}
${ORACLE_HOME}/bin/exp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "EXPORT_FAIL: ${ORA_SID}, Export Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "EXPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
EXPORT_DIR=${BACKUPDIR}
if [ ! -d ${EXPORT_DIR} ]; then mkdir -p ${EXPORT_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
ZFILE="${EXPORT_DIR}/${ORA_SID}.dmp.Z"
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed '/#/d' ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++)
print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}
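The field extraction in funct_get_vars() can be tried standalone. This sketch uses a hypothetical oratab entry and awk in place of nawk (nawk is the Solaris name); printf suppresses newlines, which avoids the separate tr step:

```shell
# A hypothetical /etc/oratab entry: SID:ORACLE_HOME:AUTOSTART_FLAG
ORATAB_LINE="ORCL:/u01/app/oracle/product/8.1.7:Y"
# Field 2 of the colon-delimited entry is ORACLE_HOME
ORA_HOME=$(echo "$ORATAB_LINE" | awk -F ":" '{print $2}')
# Drop the last two path components to derive ORACLE_BASE
ORA_BASE=$(echo "$ORA_HOME" | awk -F "/" '{for (i=2; i<=NF-2; i++) printf "/"$i}')
echo "ORACLE_HOME=$ORA_HOME ORACLE_BASE=$ORA_BASE"
```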
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "date" >> ${LOGDIR}/${ORA_SID}.log
echo "EXPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}
#######################################
## MAIN
#######################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set up environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/export.par"
LOGDIR="${TOOLS}/localog"
PIPE_DEVICE="/tmp/export_${ORA_SID}_pipe"
SPLIT_PIPE_DEVICE="/tmp/split_${ORA_SID}_pipe"
echo "... Now exporting .... ${ORA_SID}"
funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_splitcompress_pipe
funct_build_parfile
funct_export
funct_cleanup
date >> $LOGDIR/${ORA_SID}.log
echo "${ORA_SID}, export completed successfully" >> $LOGDIR/${ORA_SID}.log
####################### END MAIN ###############################
funct_splitcompress_pipe(): This function creates the named pipes used to compress and split the export output.
Split Import
The split import program (see Listing 3.7) performs an import using the compressed split export
dump files created by the splitZxport program. The script takes two input parameters: SID and
OWNER. SID is the instance to be imported, and OWNER is the Unix account under which Oracle is
running.
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cleanup() {
rm -f ${PIPE_DEVICE}
rm -f ${SPLIT_PIPE_DEVICE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_desplitcompress_pipe(): Creates pipes for uncompressing and
# desplitting the file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_desplitcompress_pipe() {
# Creates pipe for uncompressing
if [ ! -r ${PIPE_DEVICE} ]; then
/etc/mknod ${PIPE_DEVICE} p
fi
#Creates pipe for desplitting
if [ ! -r ${SPLIT_PIPE_DEVICE} ]; then
/etc/mknod ${SPLIT_PIPE_DEVICE} p
fi
# Feed the concatenated split files into the pipe chain
nohup cat ${ZFILES} > ${SPLIT_PIPE_DEVICE} &
sleep 5
nohup uncompress < ${SPLIT_PIPE_DEVICE} > ${PIPE_DEVICE} &
sleep 5
}
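The import side of the pipe chain can be exercised the same way. In this sketch (gzip and mkfifo again stand in for compress and /etc/mknod, and wc plays the role of imp reading from the pipe), fabricated split chunks are concatenated and decompressed into the pipe a reader consumes:

```shell
set -e
WORK=$(mktemp -d)
cd "$WORK"
# Fabricate split, compressed chunks such as the export script leaves behind
seq 1 50000 > original.txt
gzip -c original.txt | split -b 64K - chunk.
mkfifo import_pipe                   # stands in for PIPE_DEVICE
# Reader: plays the role of imp consuming the pipe
wc -l < import_pipe > linecount.txt &
# Writer chain: concatenate the chunks in order, decompress into the pipe
cat chunk.* | gzip -dc > import_pipe
wait
echo "lines delivered to reader: $(cat linecount.txt)"
```

The shell glob chunk.* sorts the suffixes aa, ab, ac, ... lexicographically, which is exactly the order split wrote them in.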
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): Creates parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
#echo "indexfile=${BACKUPDIR}/${ORA_SID}.ddl">>${PARFILE}
#echo "Owner=scott">>${PARFILE}
#echo "Fromuser=kishan">>${PARFILE}
#echo "Touser=aravind">>${PARFILE}
#echo "Tables=T1,T2,t3,t4">>${PARFILE}
echo "Full=Y">>${PARFILE}
echo "Ignore=Y">>${PARFILE}
echo "Commit=y">>${PARFILE}
echo "File=${PIPE_DEVICE}">>${PARFILE}
echo "Log=${BACKUPDIR}/${ORA_SID}.imp.log">>${PARFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_import(): Import the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_import() {
${ORACLE_HOME}/bin/imp parfile=${PARFILE}
if [ $? != 0 ]; then
date >> $LOGDIR/${ORA_SID}.log
echo "IMPORT_FAIL: ${ORA_SID}, Import Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "IMPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Check for backup directories
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed '/#/d' ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++)
print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "date" >> ${LOGDIR}/${ORA_SID}.log
echo "IMPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}
#######################################
## MAIN
#######################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set up environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
In the main() function, set the correct values for the BACKUPDIR, ORATABDIR, and TOOLS
variables highlighted in the import script. The default location of ORATABDIR is different for
each flavor of Unix.
Check for the existence of the SID in the oratab file. If not already there, you must add the
instance.
List all split filenames in the ZFILES variable in the main() function.
The funct_build_parfile() function builds the parameter file. By default, it performs a
full import. You can modify the settings to perform a user or table import.
Pass SID and OWNER as parameters to the program:
funct_desplitcompress_pipe(): This function creates the named pipes used to uncompress and reassemble the split dump files.
Listing 3.8 contains the script to perform a backup of the Oracle software. The script takes two input
parameters: SID and OWNER. SID is the instance to be backed up, and OWNER is the Unix account
under which Oracle is running.
In the main function, set correct values for BACKUPDIR, ORATABDIR, and TOOLS variables
highlighted in the software backup script. The default location of ORATABDIR is different for
each flavor of Unix.
Check for the existence of the SID in the oratab file. If not already there, you must add the
instance.
If your Oracle software directory structure does not follow OFA guidelines, set ORA_BASE
and ORA_HOME manually in funct_get_vars().
Pass SID and OWNER as parameters to the program:
ORA_BASE and ORA_HOME are derived
assuming that the Oracle software is installed using OFA (Optimal Flexible Architecture)
guidelines. If not, you have to manually set ORA_BASE and ORA_HOME in the
funct_get_vars() function. The 'nohup' commands submit the tar command on the server.
main(): If the database is running, the script shuts down the database and starts backing up the
software directories. When the backup is complete, it restarts the database.
Troubleshooting and status check:
The important thing here is that the backup log file defined by BACKUPLOGFILE contains detailed
information about each step of the backup process. This is a very good place to start investigating
why a backup has failed. This file also records the start and end time of the
backup.
A single line about the success or failure of the backup is appended to the SID.log file every time a backup
is performed. This file is located under the directory defined by the LOGDIR variable. The messages
for a software backup are 'SOFTWAREBACKUP_FAIL' if the software backup failed, and 'Software
Backup Completed Successfully' if the backup completes successfully.
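Checking that status line is a one-line grep. The sketch below fabricates a SID.log under a temporary LOGDIR (the messages follow the conventions just described; the paths are invented for the example):

```shell
LOGDIR=$(mktemp -d)                  # stands in for the real LOGDIR
LOG="$LOGDIR/ORCL.log"
# Simulate two runs: one failed, one succeeded
echo "SOFTWAREBACKUP_FAIL: ORCL, tar failed" >> "$LOG"
echo "ORCL, Software Backup Completed Successfully" >> "$LOG"
# Count failure markers recorded for this SID
FAILS=$(grep -c "FAIL" "$LOG")
echo "failures recorded: $FAILS"
```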
File to extract
Destination directory
Cold Backup
Listing 3.9 performs a cold backup of a database under the Windows NT environment. The cold
backup script takes SID, the instance to be backed up, as the input parameter. The general steps to
write a backup script in Unix and Windows NT are the same. The only difference is that we will be
using commands that are understood by Windows NT. Figure 3.6 shows the functionality of a cold
backup program under Windows NT. Each box represents a corresponding section in the program.
For example, the Parameter Checking section checks for the necessary input parameters and also
checks for the existence of the backup directories.
set CFILE=%BACKUP_DIR%\log\coldbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\cerrors.log
set LOG_FILE=%BACKUP_DIR%\log\cbackup.log
set BKP_DIR=%BACKUP_DIR%
%ORA_HOME%\sqlplus -s %CONNECT_USER% @%CFILE%
%ORA_HOME%\sqlplus -s %CONNECT_USER% @shutdown_i_nt.sql
%ORA_HOME%\sqlplus -s %CONNECT_USER% @startup_r_nt.sql
%ORA_HOME%\sqlplus -s %CONNECT_USER% @shutdown_n_nt.sql
Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to
your directory structure. These variables are highlighted in the script.
Verify that CONNECT_USER is set to the correct username and password.
Define the INIT_FILE variable to the location of the Init.ora file.
Be sure that the user running the program has Write access to backup directories.
When you run the program, pass SID as a parameter.
You can schedule automatic backups using the 'at' command, as shown in the following:
at 23:00 "c:\backup\coldbackup_nt.bat ORCL"
This command runs the backup at 23:00 hours on the current date.
at 23:00 /every:M,T,W,Th,F "c:\backup\coldbackup_nt.bat ORCL"
This command runs a backup at 23:00 hours every Monday, Tuesday, Wednesday, Thursday, and
Friday.
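For comparison, the Unix scripts earlier in this chapter would typically be scheduled with cron rather than at. A hypothetical crontab entry (the script filename is an assumption; the directory matches the TOOLS path used in the scripts) running the export at 23:00 on weekdays might look like:

```
# minute hour day-of-month month day-of-week  command
0 23 * * 1-5 /u01/oracomn/admin/my_dba/splitZxport.sh ORCL oracle
```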
The "Create Dynamic Files" section in the coldbackup_nt.bat program creates the
coldbackup.sql file (see Listing 3.10) under the log directory. coldbackup.sql is called from
coldbackup_nt.bat and generates a list of data, control, and redo log files to be backed up from
the database. A sample coldbackup.sql is shown in Listing 3.10 for your understanding. The
contents of this file are derived based on the structure of the database.
When the coldbackup.sql file is called from the coldbackup_nt.bat program, it spools output to the
coldbackup_list.bat DOS batch file (see Listing 3.11). This file has the commands necessary for
performing the cold backup.
This is only a sample file. Note that in the contents of the file, the data, control, redo log, and Init.ora
files are copied to their respective backup directories.
Hot Backup
The hot backup program (see Listing 3.12) performs a hot backup of a database under the
Windows NT environment. The hot backup script takes SID, the instance to be backed up, as the
input parameter.
set ORA_HOME=c:\oracle\ora81\bin
set CONNECT_USER="/ as sysdba"
set ORACLE_SID=%1
set BACKUP_DIR=c:\backup\%ORACLE_SID%\hot
set INIT_FILE=c:\oracle\admin\orcl\pfile\init.ora
set ARC_DEST=c:\oracle\oradata\orcl\archive
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
set HFILE=%BACKUP_DIR%\log\hotbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\herrors.log
set LOG_FILE=%BACKUP_DIR%\log\hbackup.log
set BKP_DIR=%BACKUP_DIR%
REM :::::::::::::::::::: End Declare Variables Section
REM :::::::::::::::::::: Begin Parameter Checking Section
if "%1" == " goto usage
REM Create backup directories if they do not already exist
if not exist %BACKUP_DIR% mkdir %BACKUP_DIR%
if not exist %BACKUP_DIR%\log mkdir %BACKUP_DIR%\log
if not exist %BACKUP_DIR%\control mkdir %BACKUP_DIR%\control
if not exist %BACKUP_DIR%\arch mkdir %BACKUP_DIR%\arch
if not exist %LOGDIR% mkdir %LOGDIR%
echo dbms_output.put_line('exit;'); >>%HFILE%
echo End; >>%HFILE%
echo / >>%HFILE%
echo spool off >>%HFILE%
echo exit; >>%HFILE%
The hot backup program's functionality can be shown with a diagram similar to the one for the cold backup. The
sections and their purposes in the program are the same as for a cold backup.
Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to the correct values according
to your directory structure. These variables are highlighted in the script.
Verify that CONNECT_USER is set to the correct username and password.
Define the INIT_FILE variable to the location of the Init.ora file.
Define the ARC_DEST variable to the location of the archive log destination.
Be sure that the user running the program has Write access to the backup directories.
The hotbackup.sql file is called from hotbackup_nt.bat and it spools output to the
hotbackup_list.sql SQL file (see Listing 3.14). This file has the commands necessary for
performing a hot backup.
This is only a sample file. Note in the file that the data, control, archive log, and Init.ora files are
copied to their respective backup directories. First, it puts the tablespace into Backup mode, copies
the corresponding files to the backup location, and then turns off Backup mode for that tablespace.
This process is repeated for each tablespace, and each copy command puts the status of the copy
operation to hbackup.log and reports any errors to the herrors.log file.
Listing 3.14 is generated based on the structure of the database. In a real environment, the database
structure changes as more data files or tablespaces get added. Because of this, it is important to
generate the backup commands dynamically, as shown in hotbackup_list.sql. It performs the
actual backup and is called from hotbackup_nt.bat.
c:\backup\orcl\hot\control
c:\backup\orcl\hot\arch
Export
The export program (see Listing 3.15) performs a full export of the database under a Windows NT
environment. The export script takes SID, the instance to be backed up, as the input parameter.
set ORA_HOME=c:\oracle\ora81\bin
set ORACLE_SID=%1
set CONNECT_USER=system/manager
set BACKUP_DIR=c:\backup\%ORACLE_SID%\export
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
REM :::::::::::::::::::: End Declare Variables Section
REM :::::::::::::::::::: Begin Parameter Checking Section
if "%1" == " goto usage
REM Create backup directories if they do not already exist
if not exist %BACKUP_DIR% mkdir %BACKUP_DIR%
if not exist %LOGDIR% mkdir %LOGDIR%
REM Check to see that there were no create errors
if not exist %BACKUP_DIR% goto backupdir
REM Deletes previous backup. Make sure you have it on tape.
del /q %BACKUP_DIR%\*
REM :::::::::::::::::::: End Parameter Checking Section
REM :::::::::::::::::::: Begin Export Section
%ORA_HOME%\exp %CONNECT_USER% parfile=export_par.txt
(echo Export Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end
REM :::::::::::::::::::: End Export Section
REM :::::::::::::::::::: Begin Error handling section
:usage
echo Error, Usage: export_nt.bat SID
goto end
:backupdir
This program performs an export of the database by using the parameter file specified by
export_par.txt. Listing 3.16 shows a sample parameter file that performs a full export of the
database. You can modify the parameter file to suit your requirements.
Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to
your directory structure. These variables are highlighted in the program.
Verify that CONNECT_USER is set to the correct username and password.
Be sure that the user running the program has Write access to the backup directories.
Edit the parameter file to your specific requirements. Specify the full path of the location of
your parameter file in the program.
When you run the program, pass SID as a parameter.
Recovery Principles
Recovery principles are the same, regardless of whether you are in a Unix or Windows NT
environment. The following are general guidelines for recovery using a cold backup, hot backup,
and export.
Definitions
Control File: The control file contains records that describe and maintain information
about the physical structure of a database. The control file is updated continuously during
database use and must be available for writing whenever the database is open. If the control
file is not accessible, the database will not open.
System Change Number (SCN): The system change number is a clock value for the
database that describes a committed version of the database. The SCN functions as a
sequence generator for a database and controls concurrency and redo record ordering.
Think of the SCN as a timestamp that helps ensure transaction consistency.
Checkpoint: A checkpoint is a data structure in the control file that defines a consistent
point of the database across all threads of a redo log. Checkpoints are similar to SCNs and
they also describe which threads exist at that SCN. Checkpoints are used by recovery to
ensure that Oracle starts reading the log threads for the redo application at the correct point.
For a parallel server, each checkpoint has its own redo information.
Suppose you have determined that you are not able to connect to the database.
As a second step, try to see whether the processes are running by using the following command.
$ ps -ef | grep -i ORCL
This should list the processes that are running. If it does not list any processes, you are sure that the
database is down.
As a third step, check the alert log file for any errors. The alert log file is located under the
directory defined by BACKGROUND_DUMP_DEST in the Init.ora file.
This file lists any errors encountered by the database. If you see any errors, note the time of the error,
the error number, and the error message. If you do not see any errors, start up the database (sometimes it
will report an error when you try to start up the database). If the database starts, that is wonderful!
If it doesn't start, it will generally complain about the error onscreen and also report the error in the
alert log file. Check the alert log again for more information.
Now suppose you determine from the error that the database cannot find one of its data files.
As a fourth step, inform the project manager that somebody has caused a problem in the database
and try to find out what happened (a hard disk problem or perhaps somebody deleted the file).
Limit your time to this research based on time available.
As a fifth step, try to determine what kind of backups you have taken recently and see which one is
most beneficial for recovering as much data as possible. This depends on the types of backups your
site is employing to protect from database crashes.
If you have a hot backup mechanism in place, you can be sure that you can recover all or most of
the data. If you have an export or cold backup mechanism in place, the data changes since the time
of last backup will be lost.
As a sixth step, follow the instructions in this chapter, given your recovery scenario.
Recovery When a Redo Log File Is Lost: To recover the database when a redo log
file is lost or corrupted, clear the affected log group:
alter database clear logfile group 1;
missing logfiles. To create the new control file, you need to know the full structure of the database.
We have taken a trace of the control file by using alter database backup controlfile to trace
as part of the backup. Follow the steps explained in Chapter 10, "Database Maintenance and
Reorganization," for creating a new control file.
There was a database corruption at 5 p.m. and the database crashed. When I tried to
bring up the database, it opened and immediately died as soon as I executed any
SQL statement. This crippled my ability to troubleshoot the problem. I restored the
database from a backup and applied the archived redo log files up to just before the time of the crash,
and the database came up fine. Remember, you have to use the latest control file to roll forward
with the archived redo log files, so that Oracle knows which archived redo log files to apply.
At this point, you will be prompted for the location of the archived redo log files, if
necessary.
4. Open the database:
alter database open
5. After the database is open, take the tablespace offline. For example, if the corrupted data
file belongs to USERS tablespace, use the following command:
alter tablespace users offline;
Here, the tablespace can be taken offline with either a normal, temporary, or immediate
priority. If possible, take the damaged tablespace offline with a normal or temporary
priority to minimize the amount of recovery.
6. Start the recovery on the tablespace:
recover tablespace users;
At this point, you will be prompted for the location of the archived redo log files, if
necessary.
7. Bring the tablespace online:
alter tablespace users online;
5. After the database is open, take the tablespace offline. For example, if the corrupted data
file belongs to USERS tablespace, use the following command:
alter tablespace users offline;
Here, the tablespace can be taken offline with either a normal, temporary, or immediate
priority. If possible, take the damaged tablespace offline with a normal or temporary
priority to minimize the amount of recovery.
6. Start the recovery on the data file:
recover datafile '/u01/oradata/users01.dbf';
At this point, you will be prompted for the location of the archived redo log files, if
necessary.
7. Bring the tablespace online:
alter tablespace users online;
At this point, you will be prompted for the location of the archived redo log files, if
necessary. Enter cancel to cancel recovery after Oracle has applied the archived redo log
file just prior to the point of corruption. If a backup control file or recreated control file is
being used with incomplete recovery, you should specify the using backup controlfile
option. In cancel-based recovery, you cannot stop in the middle of applying a redo log file.
You either completely apply a redo log file or you don't apply it at all. In time-based
recovery, you can apply to a specific point in time, regardless of the archived redo log
number.
4. Open the database:
alter database open resetlogs
Whenever an incomplete media recovery is being performed or the backup control file is
used for recovery, the database should be opened with the resetlogs option. The resetlogs
option will reset the redo log files.
5. Perform a full backup of database.
If you open the database with resetlogs, a full backup of the database should be
performed immediately after recovery. Otherwise, you will not be able to recover changes
made after you reset the logs.
6. Verify that the recovery worked.
For example
recover database until time '1999-01-01:12:00:00' using backup controlfile
At this point, you will be prompted for the location of the archived redo log files, if
necessary. Oracle automatically terminates the recovery when it reaches the correct time. If
a backup control file or recreated control file is being used with incomplete recovery, you
should specify the using backup controlfile option.
4. Open the database:
alter database open resetlogs
Whenever an incomplete media recovery is being performed or the backup control file is
used, the database should be opened with the resetlogs option, so that it resets the log
numbering.
For example
recover database until change 2315 using backup controlfile
At this point, you will be prompted for the location of the archived redo log files, if
necessary. Oracle automatically terminates the recovery when it reaches the correct system
change number (SCN).
If a backup control file or a recreated control file is being used with an incomplete
recovery, you should specify the using backup controlfile option.
4. Open the database.
alter database open resetlogs
Before you actually start recovering the database, you can obtain information about the files that
need recovery by executing the following command. To execute the statement, the database must
be mounted. The command also gives error information.
select b.name, a.error from v$recover_file a, v$datafile b
where a.file# = b.file#;
Full
User-level
Table-level
Full Import
A full import can be used to restore the database in case of a database crash. For example, you
have a full export of the database from yesterday and your database crashed this afternoon. You
can use the import command to restore the database from the previous day's backup. The restore
steps are as follows.
1. Create a blank database: Refer to Chapter 10 for instructions on how to create a database.
2. Import the database: The following command performs a full database import, assuming
that your export dump filename is export.dmp. The IGNORE=Y option ignores any create
errors, and the DESTROY=N option does not destroy the existing tablespaces.
C:\>imp system/manager file=export.dmp log=import.log full=y ignore=y destroy=n
3. Verify the import log for any errors: With this import, the data changes between your
previous backup and the crash will be lost.
Table-Level Import
A table-level import allows you to import specific objects without importing the whole database.
Example 1:
For example, suppose one of the developers requests that you transfer the EMP and DEPT tables of user
SCOTT from database ORCL to TEST. You can use the following steps to transfer these two tables.
1. Set your ORACLE_SID to ORCL.
C:\>set ORACLE_SID=ORCL
This step sets the correct database to which to connect.
Or
SQL> drop table EMP;
SQL> drop table DEPT;
6. C:\>set ORACLE_SID=TEST
7. C:\>imp system/manager fromuser=scott touser=scott tables=(EMP,DEPT)
   file=export.dmp log=import.log ignore=Y
This command imports the EMP and DEPT tables from the previous backup. After the import, check the
import log file for any errors.
RMAN executable
Recovery catalog database (Database to hold the catalog)
Recovery catalog schema in the recovery catalog database (Schema to hold the metadata
information)
Optional Media Management Software (for tape backups)
Sample Files
Sample oratab File
Listing 3.17 shows the oratab file, which is created by the Oracle installer when you install the Oracle database under the Unix
operating system. The installer adds the instance name, Oracle home directory, and auto startup
flag (Y/N) for the database in the format [SID:ORACLE_HOME:FLAG]. The auto startup flag tells
whether the Oracle database should be started automatically when the system is rebooted.