
manual standby database under oracle standard edition

sean hull,

oracle's standby technology has been rebranded as dataguard in recent versions
of oracle. oracle has added a layer of automation on top of the standby
technology, making standby databases nearly seamless to manage. but what
about the folks on standard edition oracle? are they left out in the cold?
well, it turns out that it is still possible to create a *manual* standby database on
oracle se. here's how you do it.
1. first you need to create the initial standby database. here are the steps to
do that:
a. put the primary database in archivelog mode, if it is not already, and add at
least log_archive_dest and log_archive_start to your init.ora.
sql> shutdown immediate
sql> startup mount
sql> alter database archivelog;
sql> alter system archive log start;
sql> alter database open;
b. next, create a hotbackup of the primary database. although you can do this
with rman, it is probably easiest to just do it manually so you know what is going
on. for each tablespace do:
sql> alter tablespace example begin backup;
sql> !cp example01.dbf /my/db/backup/
sql> !cp example02.dbf /my/db/backup/
sql> !cp example03.dbf /my/db/backup/
sql> alter tablespace example end backup;
in the above example, the '!' symbol tells sqlplus to run the command from the
shell, so we're using the unix 'cp' command to make copies of those files (which
are now frozen in backup mode) in another location.
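the per-tablespace loop above is easy to script. as a sketch, here's a
hypothetical helper that just prints the command sequence for one tablespace
(the names and paths are placeholders) so you can eyeball the output before
feeding it to sqlplus:

```shell
# hotbackup_cmds: print the hot backup command sequence for one tablespace.
# this is a sketch -- it only echoes the commands, it does not run sqlplus.
# usage: hotbackup_cmds TABLESPACE DEST_DIR FILE...
hotbackup_cmds() {
  ts="$1"; dest="$2"; shift 2
  echo "alter tablespace $ts begin backup;"
  for f in "$@"; do
    # '!' tells sqlplus to run the copy from the unix shell
    echo "!cp $f $dest"
  done
  echo "alter tablespace $ts end backup;"
}

hotbackup_cmds example /my/db/backup/ example01.dbf example02.dbf example03.dbf
```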
c. now, create a standby controlfile from the primary database:
sql> alter database create standby controlfile as '/oracle/dbs/stbycf.ctl';
d. at this point, you want to copy everything over to the standby server including
datafiles, standby controlfile & config files:
$ scp /my/db/backup/*.dbf oracle@
e. from the standby machine, edit the standby init.ora file. use the
db_file_name_convert parameter to tell oracle where files on the primary
database will be located on the standby. for example, if you had files in
/ora/oracle on primary, and they are moved to /export/home/oracle on standby,
the pair of values '/ora/oracle', '/export/home/oracle' would work for you.
note that you can use multiple pairs of values here, if you have files in different
locations. alternatively, you can startup and mount the standby database, then
rename each file by hand, for example:
sql> alter database rename file '/ora/oracle/myfile.dbf' to
'/export/home/oracle/myfile.dbf';
now you're also likely to have a new location for your archived redo log files, and
that's where the log_file_name_convert parameter comes into play.
important note: neither of these two parameters works for the online redolog files.
those you will have to rename yourself. if you do not do so, you will get an error
at the time you try to switchover your standby database. such errors are easily
remedied by renaming the offending files with alter database rename file.
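as a sketch, the standby-specific entries in the standby init.ora might look like
the following. the paths follow the example above and the exact behavior of each
parameter varies by version, so check the parameter reference for your release:

```
# standby init.ora entries (sketch) -- paths are illustrative
control_files = ('/export/home/oracle/stbycf.ctl')
db_file_name_convert = ('/ora/oracle', '/export/home/oracle')
log_file_name_convert = ('/ora/oracle', '/export/home/oracle')
```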
f. now, it's time to start the standby instance and mount it.
sql> startup nomount
sql> alter database mount standby database;
g. almost there. lastly, we need to recover the standby database using the auto
option. note that you should build a simple shell script to startup sqlplus and run
these commands, and give it a descriptive name. you can then run it
periodically, say every half hour, from cron to apply any new archived redolog
files that have shown up, using the command below.
sql> recover standby database;
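a sketch of such a script, assuming the hypothetical name apply_logs.sh and a
sqlplus sysdba login; the 'auto' reply tells the recover prompt to keep applying
suggested logs without asking:

```shell
# write out a minimal apply script for the standby; apply_logs.sh and the
# sqlplus login are placeholders -- adapt them to your environment.
cat > apply_logs.sh <<'EOF'
#!/bin/sh
# apply any newly arrived archived redologs to the standby database
sqlplus -s "/ as sysdba" <<SQL
recover standby database;
auto
SQL
EOF
chmod +x apply_logs.sh
# a crontab entry to run it every half hour might look like:
# 0,30 * * * * /export/home/oracle/apply_logs.sh >> /tmp/apply_logs.log 2>&1
```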
h. now, of course, you'll want to test your standby database. you do this by
starting up in read-only mode.
sql> alter database open read only;
i. don't forget to put it back in standby mode so that when your script runs from cron, it won't return errors.
sql> shutdown immediate;
sql> startup nomount
sql> alter database mount standby database;
2. what scripts should run via cron on the primary and standby database?
as we mentioned earlier, a small recovery script works well on the standby
database. this script applies new archived redologs that have arrived from the
production system. run it every half hour and see how that works for you. the
database must be mounted in standby mode (not read-only) or this script will
fail.
you'll also want a shipping script on the production server; run it every thirty
minutes to start with. this can use rsync to move archived redolog files from
production to standby. a command like this would work:
$ rsync -e ssh -pazv /ora/oracle/arch/
note that you may want to adjust options to ssh to your needs. in addition, this
presumes you have ssh autologin configured. read up on the ssh-keygen
command. the .ssh directory contains a public key, which is shipped over to the
standby machine, and put in the "authorized_keys" file. ssh will then login without
a password. rsync uses ssh as the transport mechanism, so it also executes
without a password. rsync is very smart and only copies blocks and pieces of
files that are different, so it is very fast, and it also does checksums to verify
that the copies are correct.
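as a sketch, the shipping script can be a one-line wrapper around that rsync
command; the name ship_logs.sh, the 'standby' hostname, and the directories
here are placeholders for illustration:

```shell
# write out a minimal shipping script for the primary; ship_logs.sh, the
# 'standby' hostname, and the paths are illustrative assumptions.
cat > ship_logs.sh <<'EOF'
#!/bin/sh
# push archived redologs to the standby box; relies on ssh autologin
rsync -e ssh -pazv /ora/oracle/arch/ oracle@standby:/export/home/oracle/arch/
EOF
chmod +x ship_logs.sh
```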
3. is the standby database behind the production database?
yes, keep in mind we are creating a manual standby database. the standby
database will tend to be behind production by about half a redolog file's worth of
changes. so if those files are 100m, and you generate 100m of transactions in 30
minutes, then on average the standby will be fifteen minutes behind.
4. what types of changes and statements on production will not be
automatically applied to standby?
in database parlance, any physical changes to the db, plus any commands
issued with the nologging option. physical changes include creation of new
tablespaces, adding new datafiles, renaming datafiles, autoextending of datafiles,
altering redolog files, altering controlfiles, and so on. in addition, primary database
processes or commands using the unrecoverable option will not be propagated
to the standby database.
there are specific and detailed instructions for making some of these physical
changes on the standby db manually; however, in many cases recreating the
entire standby database per the instructions above might be the best option.
5. how can we verify that the standby database is up to date?
if you already have the script running from cron, disable it.
then login with sqlplus and issue:
sql> alter database open read only;
now that you have the database open read-only, run whatever sql commands
you want to in order to verify some change which you know about on production.
when you are done, shutdown, and startup in standby mode again. don't forget to
reenable the script in the crontab.
6. what happens if the standby system restarts?
you could have it automatically start the standby database. in that case, be sure
to just check the logfiles. if you want to do it manually in those instances, fire up
sqlplus and then issue:
sql> startup nomount
sql> alter database mount standby database;
7. what kind of messages can i expect to see in the standby alert.log?
the alert.log is going to have a lot of extra messages since we are repeatedly
trying to recover when there may or may not be new transaction logs. when it
does this it will say, "looking for archived logfile 1_356.dbf, not found". on the
other hand, if it finds it, it will say that it is applying it. you can use unix
commands "grep" and "less" to scan through the alert.log file quickly.
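for example, against a synthetic alert.log (these messages are made up for
illustration, though ora-00308 is the real "cannot open archived log" error), you
could scan like this:

```shell
# build a tiny synthetic alert.log to demonstrate the scanning commands;
# the messages are made-up examples, not verbatim oracle output.
cat > alert_demo.log <<'EOF'
Media Recovery Log 1_355.dbf
ORA-00308: cannot open archived log '1_356.dbf'
Media Recovery Log 1_356.dbf
EOF
# scan for errors the way you would on a real alert.log:
grep 'ORA-' alert_demo.log
```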
8. what other scripts should be put in place?
a. a script to cleanup old archived redo logs on primary.
b. a script to cleanup old archived redo logs on standby
c. a script to rotate and archive the alert.log file when it gets large
d. a script to watch the alert.log file for ora-xxxxx errors and report them to nagios
if it finds any (on both primary and standby)
e. a script to login (via ssh autologin) and check what the latest archived redolog
file is, and then also login to the standby and check the alert.log file to verify that
those transactions have been applied.
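as a sketch for (a) and (b), a cleanup script could lean on find. the directory,
the '.arc' extension, and the seven-day retention are all assumptions; make sure
a log has been shipped and applied before you ever delete it:

```shell
# delete archived redologs older than seven days; the directory, extension
# and retention window are illustrative assumptions, not recommendations.
cleanup_arch() {
  arch_dir="$1"
  # only remove plain files oracle last touched more than 7 days ago
  find "$arch_dir" -name '*.arc' -type f -mtime +7 -exec rm -f {} \;
}
```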
9. how do we switchover in the event of a failure of the primary?
switchover *can* be done with a script, however i recommend with our manual
standby database that you (a) monitor for emergencies on production and (b)
manually perform the failover if necessary. this will avoid false positives. also, it
allows you to ship additional redolog data if you have it available from production.
the switchover is a two-step process.
a. apply remaining redo, as we have done before, with the recover standby
database command.
b. startup the database normally, in read-write mode.
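a sketch of that two-step sequence as a script; failover.sh is a placeholder
name, and 'alter database activate standby database' is the classic command
for bringing a manual standby up read-write -- verify it against the docs for
your release before relying on it:

```shell
# write out a failover script; failover.sh is a placeholder name and the
# sqlplus login should be adapted to your environment.
cat > failover.sh <<'EOF'
#!/bin/sh
sqlplus -s "/ as sysdba" <<SQL
-- step a: apply whatever redo has arrived
recover standby database;
auto
-- step b: bring the standby up read-write
alter database activate standby database;
shutdown immediate
startup
SQL
EOF
chmod +x failover.sh
```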
10. what network changes need to happen to failover?
the listener.ora file should be already configured. you can use the same config as
primary with a different ip, or you can give this db a different tnsname. for
instance, you could call primary seana and standby seanb. then in your
application server configs, when you failover, your database connection
configurations need to be updated to point to seanb. the app servers will
probably also need to be restarted at this point.
11. why can't the primary ship redologs and changes synchronously?
basically they call it a *manual* standby database for a reason. dataguard
supports options that look like the following:
log_archive_dest_3='service=stby1 lgwr sync affirm'
again, these are not available in oracle se.
12. once we've failed over, how do we switch back to the primary?
switching back to the primary database involves these steps:
a. follow the steps in item 1 above to create a standby database on what was the
primary system.
b. if you want the final sync to be perfectly clean, do the following:
sql> shutdown immediate
sql> startup restrict
sql> alter system switch logfile;
sql> shutdown immediate
c. copy over the last archived redolog files
d. apply them and switchover as described in item 8 above.
13. are there special init.ora parameters? what makes our standby
database special?
the main two things that make it a standby database are:
a. the standby control file (created from primary)
- alter database create standby controlfile as '/my/path/to/standby.ctl';
b. the process of mounting as a standby database
- startup nomount pfile=standby.ora
- alter database mount standby database;
there are of course some init.ora parameters which are special for the standby
database as well, such as the file name convert parameters described in item 1
above.
so if you do a "shutdown immediate" on the standby, you would start again with:
sql> startup nomount
sql> alter database mount standby database;

standby database technology in oracle is a powerful high availability solution.
even if you're using oracle se, you can still take advantage of these features built
into oracle, with just a little scripting, hand holding, and ample monitoring. do
your research, test, test, and test again on a development server. and don't forget
to monitor all your logfiles for errors. following these guidelines, you should be in
very good shape, at a much lower cost.