1 Installation Guide
Other company, product, and service names are the properties of their respective owners.
Table of Contents

4 Oracle Migration
    4.1 Prerequisites
    4.2 Introduction
    4.3 Procedure
        4.3.1 General Issues
        4.3.2 Generate List of Users in Original Database
        4.3.3 Generate Migration Scripts
        4.3.4 Exporting Users from Old Database
        4.3.5 Initializing User Accounts in New Database
        4.3.6 Importing Data into New Database
    4.4 Database-Specific Considerations
        4.4.1 WHSM
        4.4.2 SLM
            4.4.2.1 Adding Administrative Users
            4.4.2.2 Setting the SLM_MAIN_Q_SEQUENCE
    4.5 Updating Omega Configuration
    4.6 Updating Configuration.xml File
Introduction to Oracle Guide for Omega 2017.1
This guide explains how to set up Oracle specifically for Omega* 2017.1.
A PDF of this document is included on the DVD when the DVD is created. However, the documentation is updated continuously as issues are discovered, so the PDF on the DVD is not necessarily the most recent version available. Check the www.software.slb.com Web site for the most recent version of this document: click Support, log in if necessary, click Omega, and then click Documentation to find the latest documentation.
The following sections discuss installing Oracle for Omega. They cover installing Oracle 11 or 12 and creating
Oracle 11 or 12 database instances using the most recent procedures.
1.1 Prerequisites
The installation procedures make the following assumptions:
Note: Do not change the Oracle host name or IP address after Oracle is installed. If you do, Oracle will not start and several Oracle configuration files will have to be reconfigured.
Assumptions/prerequisites
• The Oracle installation files have been copied to /download/dir on the Oracle host.
• The oracle user account must exist and must be in group dba. A local account is recommended. Whether it is a local or remote account, the oracle account's home directory (~oracle) must exist and the root user on the Oracle server system must be able to update the oracle user's .cshrc file.
• The install-oracle and create-instance scripts use the hostname and hostname -f commands to determine
the host name and to populate various files ($ORACLE_HOME/network/admin/listener.ora,
$ORACLE_HOME/network/admin/tnsnames.ora and the SP file for each instance in
$ORACLE_HOME/dbs). If you don't wish to use this name (because, for example, all of your clients are
on a private network) you may need to modify your name service files to return the host name that you
want.
• The installation program will automatically add some entries to /etc/sysctl.conf to match Oracle
requirements. The following parameters are not set by the Oracle installation program and you will need
to verify that they are set to the specified values (or larger). If the server you are using has an OS that has
been installed by Schlumberger to Schlumberger standards, then these will have been set at OS
installation time and these parameters should already be present:
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
• The maximum number of file descriptors usable by the oracle account also needs to be increased. At a
minimum, you should have the following set in /etc/security/limits.conf.
• Make sure that the soft limit for the number of user processes (nproc) for the Oracle user is not set to the default (1024 for RHEL 6, 4096 for RHEL 7). See the settings in /etc/security/limits.d/90-nproc.conf (RHEL 6) or /etc/security/limits.d/20-nproc.conf (RHEL 7). It is recommended to simply comment out the line setting the soft limit for non-root users.
• The initial release of RHEL 7.2 has the default setting RemoveIPC=yes in /etc/systemd/logind.conf.
Either change this to no and restart systemd-logind or upgrade the systemd RPM to version
219-19.el7_2.4 or newer. The Oracle installation scripts should add RemoveIPC=no but will not restart
systemd-logind.
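The kernel-parameter and process-limit checks above can be scripted. The following is a minimal sketch: the parameter names and the 262144 minimum are from the list above; everything else (the loop structure, the message wording) is illustrative only.

```shell
# Check the four net.core parameters against the Oracle minimum of 262144.
min=262144
for p in rmem_default rmem_max wmem_default wmem_max; do
    cur=$(cat "/proc/sys/net/core/$p")
    if [ "$cur" -ge "$min" ]; then
        echo "net.core.$p = $cur (ok)"
    else
        echo "net.core.$p = $cur (too small, need >= $min)"
    fi
done

# Show any soft nproc limit that may need commenting out (RHEL 6/7 paths):
grep -hs nproc /etc/security/limits.d/90-nproc.conf \
              /etc/security/limits.d/20-nproc.conf \
    || echo "no nproc override files found"
```

Run this as any user before starting the installation; values larger than the minimum are acceptable.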
su - root
cd /download/dir
gpg oracle.bin-XXXX-SE-64.tgz.gpg
gpg oracle-scripts-XXXX.tgz.gpg
If the gpg commands fail, unset the DISPLAY variable and try again:
su - root
unset DISPLAY
cd /download/dir
gpg oracle.bin-XXXX-SE-64.tgz.gpg
gpg oracle-scripts-XXXX.tgz.gpg
su - root
cd /oracle
tar xzf /download/dir/oracle-scripts-XXXX.tgz
This will install all the scripts to be used for the installation into /oracle/scripts. If the /oracle directory already
belongs to the oracle user account this may be done as the oracle account. Otherwise, it will need to be done as
the root account.
In version 4.6 of the installation scripts, we introduced an option to perform a scripted (non-interactive)
installation. Decide how you want to proceed and follow the steps in either the next section for an interactive
installation, or the following section for a scripted installation.
cd /oracle/scripts
./install-oracle
This will install a version of the Oracle binaries into /oracle and will
initialise various parameter files. It will not create any instances
(databases) - use the create-instance script for that.
At this point, the script will check for any previously installed versions of Oracle and any running Oracle
instances.
Reinstalling the same version of Oracle on a system is supported to fix problems with corrupted files. This should
only be done under expert guidance. If you attempt to install Oracle on a system with previously installed
versions, the script will list the pre-existing versions and any previously created instances that it can find,
otherwise it will go straight to the following question on the location of the tar file:
The script will then ask for the location of the Oracle binary tar file:
Please enter the name of the directory containing the Oracle binary
tar kit [/install] : /download/dir
If no Oracle binary files are found in /download/dir, the following message will be written out:
The program will terminate. If you have more than one Oracle binary file in the /download/dir directory, you will
be given the choice of which to install:
1. 11.2.0.4.8 SE (64bit)
2. 12.1.0.2.5 SE (64bit)
The script will now do some further checks to make sure that you are not installing a 32-bit version of Oracle onto a 64-bit OS or vice versa. As we no longer support 32-bit versions of either Oracle or the OS, you should not see these messages.
If you are attempting to re-install an Oracle version the program will exit if there are running instances:
The script will now unpack the binary tar file, change the ownership of the installed files (from both tar kits) to
the oracle account and update various system files:
or
Note: If more than one tar file is in the directory, then the first one in the standard collating sequence
(output of ls) will be used.
Note: The scripted installation cannot be used to overwrite an existing Oracle installation; this must be done in interactive mode.
cd /oracle/scripts
./install-oracle --tardir=directory
or
cd /oracle/scripts
./install-oracle -t directory
Where "directory" is the name of the directory containing the binary tar file.
• The instance creation script must be run as the oracle user account.
• $ORACLE_HOME must be set to the location of the correct Oracle version before starting the script.
• The installation scripts have been installed as described above.
• The Oracle binaries have been installed as described above (or have been installed via one of the older
installation kits).
• We recommend that you do not change the host name once you have created a database instance. If you
do change the host name, then you must reconfigure several network-related components. For the
database instance to run normally, make sure that your host name stays the same after you create any
database instances.
Notes: Each run of the instance creation script will create one instance of the selected type. You do not need to
create all database instance types. Smaller sites can create all required instances on a single server. Larger sites
will need to consider the expected load on the system and put different database instances on different servers as
required.
• The OPM instance is the fundamental instance that is needed on all sites.
• The InVA instance is only needed if you plan to use InVA.
• The WHSM instance is only needed if you plan to set up WHSM. Large centers that have heavy tape
reading or writing requirements might need a separate Oracle server for the WHSM instance.
• The Storage Library Manager (SLM) instance is only needed if you are going to use SLM.
• The OCM instance is only needed if you are going to install OCM. The load of an OCM database on
Oracle is relatively low. Only a large site needs a dedicated OCM server with separate Oracle database.
There are two ways to create an instance: interactive or scripted. In either case the instance creation script must be run as the oracle account, so you must become the oracle user by whatever means is appropriate: login, su, or sudo.
cd /oracle/scripts
./create-instance
- /oracle/11.2/network/admin/tnsnames.ora
- /oracle/11.2/network/admin/listener.ora
- /etc/oratab
The script will then list the instances that currently exist on the system and again confirm whether you wish to
proceed:
Unless you use one of these instance names, the running databases
will not be affected
The installation script will now ask for the type of instance to create. This example will create an OPM database:
What type of Oracle instance do you wish to create? Your choices are:
1. OPM
2. InVA 5
3. WHSM
4. SLM
5. OCM
It will now ask for the name of the instance you wish to create. The default name is constructed from the site code (the first two letters of the hostname of the Oracle server system), the type of instance being created, and 001:
Please enter the name of the new instance (<=8 characters) [gyopm001] : gyopm101
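The default-name construction described above can be illustrated as follows; the hostname gytest01 is a made-up example, not a value from this guide.

```shell
# Default instance name = first two letters of hostname + type + "001".
host=gytest01   # example; on a real system use: host=$(hostname)
itype=opm
printf '%s%s001\n' "$(printf '%s' "$host" | cut -c1-2)" "$itype"
# prints: gyopm001
```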
Finally, the script will ask for the size of the database instance to be created. For most database types two sizes are supported: Small for standalone systems or laptops, and Standard for everything else. For OPM databases, two additional sizes are supported: Large and Custom. Custom defaults to the same values as Large, but also allows you to adjust parameters individually. The Custom option should only be used under guidance from GS-IT support/R&E. See the section on additional options for the custom OPM size below for the parameters that can be varied.
Please enter the size of the instance to create? Your choices are:
Unless using size option 4, the script requires no further interaction and should go on to create a new instance.
This will take 5 to 10 minutes depending on which database type/size is being created. The process will start to
print out information regarding different stages of the database creation; the last line of the printout should be the
following:
All the log files are in $ORACLE_HOME/admin/$ORACLE_SID/scripts. Check the log files to see if there are
any error messages. Pay attention to the following points when you check the logs.
• The log files generated by the database installation program are useful for verifying the settings of the sys and system passwords. These are generated according to an unpublished algorithm. The new passwords are logged in the file cloneDBCreation.log; look for the alter user lines.
cd /oracle/scripts
./create-instance --type=DB_type --name=DB_name --size=option
or
cd /oracle/scripts
./create-instance -t DB_type -n DB_name -s option
where:
DB_type
is one of the five standard database types (OPM, IN5, WHS, SLM, OCM)
DB_name
is the name you wish to give the database (note this must be the full name; the script will not attempt to
guess a name based on the host name and database type).
option
is the size option (the numerical value 1 or 2, or additionally 3 or 4 for OPM instances). See the size selection in the interactive instance creation above for the interpretation of the different numbers.
As with the interactive installation, the script will check for the following prerequisites and will exit if they are not met:
See the notes above in the interactive instance creation section about checking the logs.
Please enter values for the following parameters. Default values as indicated
System Global Area (SGA) size [4G] :
Space available for database backup/recovery files [400G] :
Size of REDO logs [512M] :
Size of UNDO tablespace [4G] :
If installing an Oracle 12 database, memory management is based on the Memory target size. In this case the question about the "System Global Area (SGA) size" will be replaced with:
Note: Before editing /etc/sysctl.conf, it is a good idea to back up the file as /etc/sysctl.conf_timestamp.
1. The processes mediating communications between Omega and Oracle need to know what these
usernames and passwords are. So far as possible these usernames/passwords should not be visible to other
users.
2. The default values are well known; that is, they appear in various scripts in various places.
Issue (1) is mitigated by storing the username/password pair in a file that is only readable by the processes that
need this information.
Issue (2) is mitigated by changing the password (and, possibly, the username) and then updating the files used in
the previous point.
The administrative accounts' login files are set up using the Setup command, which is covered elsewhere in the Omega Installation Guide. However, because the files must exist before any Omega processes are run, and the information in these files must match the information in the database, it can be useful to set them up manually immediately after the creation of the database instance, particularly if you wish to change the passwords or usernames (Setup will not overwrite existing files).
All of the administrative account login files should be placed in the directory /etc/omega. The permissions on the files should be as restrictive as possible, with the files belonging to the user account running the daemons that need to read them (generally opm) and access restricted to that account alone (i.e., mode 600, -rw-------).
All the login files consist of two lines of text: the username on the first line and the password on the second line.
These files are read by the processes accessing the OPM and InVA databases respectively. Permissions should be
similar to:
opmadmin
opmadmin!
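Creating such a file with the recommended layout and permissions can be sketched as below. The directory /tmp/omega-demo stands in for /etc/omega so the sketch can be run by any user; on a real system write to /etc/omega and perform the chown to opm as root.

```shell
# Create a two-line login file (username on line 1, password on line 2)
# readable only by its owner. /tmp/omega-demo stands in for /etc/omega.
dir=/tmp/omega-demo
mkdir -p "$dir"
umask 077   # files created below are owner-only from the start
printf '%s\n%s\n' 'opmadmin' 'opmadmin!' > "$dir/OPMAdminLogin"
chmod 600 "$dir/OPMAdminLogin"
# On the real system, additionally (as root): chown opm /etc/omega/OPMAdminLogin
ls -l "$dir/OPMAdminLogin"
```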
2.0.1.1.2 RDMAdminLogin
If you have an RDM schema in your OPM database you will need an RDMAdminLogin file for processes to
communicate with this schema. The permissions for this file should be the same as for the OPMAdminLogin
file. The default contents will be:
RDMDB
RDMDB
2.0.1.1.3 OPMAdminLogin_WHSM_XXWHS001
This file contains the name and password of the administrative account in the WHSM database with the name
XXWHS001. The permissions for this file will depend on the location as it is read by two distinct processes:
1. The WHSM Event Consumer Daemon (normally run as the opm account)
2. The WHSM Master Daemon (normally run as the whsmusr account)
If the processes run on separate servers, then the file should belong to the relevant user and have permissions of mode 600 (-rw-------), e.g.
If the two processes run on the same server the recommended permissions are:
You can change the password of any of the above accounts. A script called update-passwd, found in the /oracle/scripts directory, is supplied for this. Running the script without arguments will show the usage:
cd /oracle/scripts
./update-passwd
Usage: ./update-passwd dbname syspasswd user newpasswd
where: dbname is the name of the database
syspasswd is the password of the system account in dbname
user is the name of the oracle user whose password is to be changed
newpasswd is the new password for the user
Note: For this script syspasswd is the password of the system account. For example, running the script will change the password of the RDMDB administrative account to newRDMpass (in practice we do not currently use case-sensitive passwords, so the case of the new password is immaterial). Once the password has been changed in the database you will then need to edit /etc/omega/RDMAdminLogin to reflect the new password.
This cannot be done for the RDM administrative account. For this account the username cannot be changed; only the password can.
You can create a new administrative user to replace opmadmin with the create-adminuser script. Running
the script without arguments will show the usage:
cd /oracle/scripts
./create-adminuser
Usage: ./create-adminuser dbname syspasswd [adminuser] [adminpasswd]
where: dbname is the name of the database
syspasswd is the password of the sys account in dbname
adminuser is the name of the administrative user to be created (default: opmadmin)
adminpasswd is the password of the administrative user to be created
(default: opmadmin!)
Note: For this script syspasswd is the password of the sys account. For example, running the script will create a new administrative account for database xxwhs001. You would then need to edit the /etc/omega/OPMAdminLogin_WHSM_XXWHS001 file to use the new account and password (in both locations if appropriate).
It is strongly recommended to delete the existing administrative account. Use the sqlplus command and run
drop user opmadmin cascade.
Note: Only one username/password combination is allowed in the default configuration (i.e., there is only one administrative login file). If you have two or more OPM/InVA instances on a system, either use create-adminuser to set up the same username/password for their administrative accounts, or configure the startup of the server processes by hand. The correct dbname must be specified for the sqlplus command.
To stop the logging from occurring, you will need to create an sqlnet.ora file, which should be installed in the directory given by the TNS_ADMIN environment variable defined in the Omega configuration database. This file should contain the following two lines:
TRACE_LEVEL_CLIENT=OFF
DIAG_ADR_ENABLED=OFF
To stop the tracing for the oracle user on the database server itself, you should also put a copy of sqlnet.ora containing these lines in the default TNS_ADMIN directory defined for the oracle environment, which can be different from the Omega environment (su oracle ; echo $TNS_ADMIN). For Oracle 12, similar actions should be taken, but note that the sqlnet.ora file should be created with additional entries to allow connections from older Oracle clients, like the following:
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
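Creating the file can be sketched as follows. The path /tmp/tns-demo is illustrative only; use the TNS_ADMIN directory from your environment, and include the ALLOWED_LOGON_VERSION lines only where Oracle 12 applies, as described above.

```shell
# Write an sqlnet.ora that disables client tracing/logging.
# /tmp/tns-demo stands in for the real TNS_ADMIN directory.
tns=/tmp/tns-demo
mkdir -p "$tns"
cat > "$tns/sqlnet.ora" <<'EOF'
TRACE_LEVEL_CLIENT=OFF
DIAG_ADR_ENABLED=OFF
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
EOF
cat "$tns/sqlnet.ora"
```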
For sites running WHSM, please check/copy the tnsnames.ora file for all Omega versions if Omega and Oracle
TNS_ADMIN locations are different.
If you would like to understand Oracle backup and recovery in detail, refer to the Oracle manuals on the Oracle
website.
For Omega, the Oracle RMAN database backup is scheduled for every installed database. The last successful full backup is stored on the Oracle database server. A database can be recovered from the last successful backup. If the archive logs are available, the database can be recovered from the point of failure.
OS backup procedures should be scheduled to ensure that Oracle backup files are copied to tapes or other devices
before they are overwritten by the next Oracle backup task.
The Oracle utility RMAN (Recovery Manager) performs the physical backup of an Oracle database. This utility
generates a set of Oracle backup files that can be used to reconstruct the Oracle database. While the RMAN
recovery task is running, regular OS database files cannot be used.
Both full database and incremental database backup options are installed with all database instances; however, only the full database option is enabled. The full database backup is performed between 2 a.m. and 5 a.m. local time daily.
The Oracle Export utility adds a logical backup (database dump) option for an Oracle database. This utility
provides the ability to recover tablespaces and single objects. Enabling Oracle Export for all databases is optional
but highly recommended.
The /oracle/scripts/export.pl script runs the Export utility. To schedule an export, add the following to /etc/cron.d/oracle:
The default output location for exported data is determined by the DB_RECOVERY_FILE_DEST parameter
(which is the same location as used for the RMAN backups; default: /oracle_data/archive/<dbname>).
For centers that do not have enough space on this filesystem for both RMAN and Export backup files, change the
location of the Export .dmp and .log files by adding the comment # export:<directory> to the
/etc/oratab file.
# export:/wgdisk/hn0001/archive
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.
#
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
hyopm002:/oracle/10.2:Y
hyin5002:/oracle/10.2:Y
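The export-directory override and the instance entries can be read back with standard tools. The following sketch builds a sample oratab from the example contents above and extracts both; only the sample file path is invented.

```shell
# Build a sample oratab and extract (a) the export-directory override,
# (b) the SIDs flagged "Y" for start at system boot.
cat > /tmp/oratab.demo <<'EOF'
# export:/wgdisk/hn0001/archive
hyopm002:/oracle/10.2:Y
hyin5002:/oracle/10.2:Y
EOF
sed -n 's/^# export://p' /tmp/oratab.demo                # -> /wgdisk/hn0001/archive
awk -F: '!/^#/ && $3=="Y" {print $1}' /tmp/oratab.demo   # -> hyopm002, hyin5002
```

On a real system, point both commands at /etc/oratab instead of the sample file.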
A common reason for wanting to do an additional backup is to recover disk space in the flash recovery area. If
you have generated a large number of transactions in a small period of time (e.g., by importing a large project),
disk space may be filled with archive log files. Because backup files are compressed and the backup procedure
removes the archive files, disk space is recovered.
4.1 Prerequisites
This procedure is intended to be as general as possible but it does have some restrictions. Currently it assumes:
• a standard Schlumberger Oracle installation for Omega on both the source and target systems. If the target system is the same as the source (i.e., you are upgrading a system in situ) you will need to make sure that the /oracle partition is large enough for the newer Oracle version - see the Oracle installation procedures for minimum requirements.
• an Oracle 10.2, 11.2 or 12.1 installation.
• one of the standard Schlumberger Oracle database instances (OPM, InVA, WHSM, OCM or SLM) which
has been created using the standard procedures and assumptions.
• the name of the Oracle instance you are migrating to and the version of Oracle it will run.
• you will need to know the SYSTEM password for both the source and target databases and be able to shut down/start up all Omega and Oracle services (either as the root user or via sudo). To run sqlplus you will need to be able to sudo to a user in the dba group (usually oracle). For all other commands any valid Linux account should be sufficient (though most people will use the oracle account for all commands that do not require root privileges). Make sure that you have permissions to write to/read from the backup directories as the user running the export/import.
4.2 Introduction
This migration procedure is based on the use of two perl scripts which in turn are used to create three shell
scripts and one SQL procedure. The two perl scripts are:
gen-mig-list
generates list of Oracle users to be migrated. Note that the Oracle users represent different things in
different databases so the lists are used in different ways.
gen-mig-scripts
takes the list of Oracle users created by gen-mig-list plus additional information about the source
and target database instances and writes scripts to export the users from the old database, run any
necessary SQL to set up users in the new database and finally to import users into the new database.
The two scripts must both be run on the system from which the information is being migrated. It should be
possible to run the scripts using any valid Linux account if migrating databases in a standard user environment.
4.3 Procedure
4.3.1 General Issues
You are strongly encouraged to take suitable backups if you are going to destroy the original database before
creating the new database (e.g., if the reason for the migration is to install a new OS version on the existing
hardware). If the migration is to new hardware the copies of the database on the old server should be the best form
of backup. There is an example cold-backup script in /oracle/scripts (cold-backup-ex) which can be used as the
basis of a cold backup (you will need to make a backup of /etc/oratab separately).
The database instance to be migrated must be idle (with no user connections) when you run the export script and
must not be used after that point. It does not have to be idle when you run the two script generation scripts but you
must not create any new users/projects after this point (though you could manually edit the user lists if you do so).
It is very difficult to come up with a hard and fast estimate of the time taken to do a migration. It should be
possible to safely run the export script on a "live" database to get an estimate of how long the export will take
(though note that this export will not be usable for an import because the results may not be consistent; it is also
recommended to do the backup at a relatively quiet time particularly for larger databases). The import will
typically take longer than the export - probably by a factor of two or three.
Where INSTANCE_NAME is the name of the database instance to be exported and SYSTEM_PASSWORD is the
password of the system account in that database.
The list created by the script is sorted and written to standard output so this information will need to be captured
in some way - most easily by redirecting standard output to a file. What information is captured and reported
differs depending on the type of database:
• OPM or InVA. The list of users is essentially the same as the list of projects in the database. For OPM
this will also include the RDM information if this is present in the database. If you don't want to migrate
all of the projects in a database you can edit the list and remove all the projects that you don't want to
move. (Note that some databases will report the RDMDB users twice - remove the second version if this
is the case).
• OCM. The list of users is essentially the list of OCM releases that have historically been supported by this
database, one or more OCMTIMER users plus the JAM user. If you know what you are doing you should be
able to remove the older OCM release versions and, possibly, the older OCMTIMER users. The gains from
doing this are likely to be small, however, so it's probably not worth doing.
• WHSM: The list of users is equivalent to the Linux accounts that have been given access rights to the WHSM
database. It does not include the WHSMUSR user which is the one which actually "owns" all the data. This user
is handled separately. You can delete any of the users from the list that you no longer require.
• SLM: The list of users is equivalent to the Linux accounts that have been given access rights to the SLM
database. It does not include the TPMGR user which is the one which actually "owns" all the data. This user is
handled separately. Note that if any users have been given SLM_ADMIN rights they will appear twice in the
user list. Both entries must be removed and these users added to the new database instance using the
add-users script. As for WHSM you can delete any of the other users from the list that you no longer
require.
Note that for WHSM and SLM there is no actual requirement to migrate any of these users and the list could be
ignored completely (though you will need to supply a dummy file as an argument to the gen-mig-scripts
program). In general any users required can be recreated at a later date by the add-users script that is a part of
the standard Oracle installation scripts tar kit.
FILENAME
is the name of the file created by gen-mig-list
INPUT_INSTANCE
is the name of the input database (which, of course, should be the same as that used by
gen-mig-list to create FILENAME)
OUTPUT_INSTANCE
is the name of the target database. This is optional and if not supplied is assumed to be the same
as INPUT_INSTANCE
OUTPUT_ORACLE_VERSION
is the version of Oracle to be used on the target system. This is optional and if not supplied is
assumed to be the same as the Oracle version used by INPUT_INSTANCE. The only acceptable
values for this currently are 10.2 or 11.2
The script will create four more scripts in your current working directory:
INPUT_INSTANCE_export.sh
a script to export all of the required users from the database INPUT_INSTANCE
OUTPUT_INSTANCE_ddl.sql
a set of SQL commands to create the required new users in the new database OUTPUT_INSTANCE
OUTPUT_INSTANCE_run_ddl.sh
a wrapper script to run OUTPUT_INSTANCE_ddl.sql
OUTPUT_INSTANCE_import.sh
a script to import all of the required users into the database OUTPUT_INSTANCE
This will ask for the system password of the database (and for the dump directory if you forget to set
$DUMPLOC). It will then create a set of export dumps in the directory $DUMPLOC. For OPM, InVA and OCM
there will be one dump file for each project/user in the FILENAME supplied to gen-mig-scripts. For
WHSM and SLM there will be one dump file - for the WHSMUSR and TPMGR users respectively.
If not using a networked file system the dump files will need to be copied to the new server.
For OPM, InVA and OCM database instances you must run the OUTPUT_INSTANCE_run_ddl.sh script to
initialize all the necessary user accounts and to create the appropriate table spaces for these accounts. For SLM
and WHSM it is not required to run OUTPUT_INSTANCE_run_ddl.sh at this point (it can be left until after
the import or skipped entirely) as we will not be importing any data associated with these users.
This will ask for the password of the system account for the new database.
Note: For OPM and InVA instances this process may take quite some time. This is because the data files containing the table spaces are created at approximately the same size as the data files in the original database. This means that the data file is likely to be less fragmented (which will be useful when using the database later on) and that the import should be faster (though the overall speed of the two steps is, in fact, little changed).
Note that the procedures listed here for stopping/starting archive logging require the Linux user account to be
a member of the dba group. The import itself does not have this requirement and can be run from any Linux
account.
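The guide's own stop/start procedures are not reproduced in this copy. For reference, the generic Oracle commands for toggling archive logging look like the following; these are standard Oracle statements, not this guide's scripts, and would be run in sqlplus as sysdba. They are printed here rather than executed:

```shell
# Standard Oracle sequence for disabling archive logging before the
# import and re-enabling it afterwards (run each list in sqlplus as
# sysdba, e.g. "sqlplus / as sysdba"). Printed only, so this sketch
# runs without a database.
DISABLE='SHUTDOWN IMMEDIATE; STARTUP MOUNT; ALTER DATABASE NOARCHIVELOG; ALTER DATABASE OPEN;'
ENABLE='SHUTDOWN IMMEDIATE; STARTUP MOUNT; ALTER DATABASE ARCHIVELOG; ALTER DATABASE OPEN;'
echo "Before import: $DISABLE"
echo "After import:  $ENABLE"
```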
Note: the import script will ask for the password of the system account for the OUTPUT_INSTANCE
database and also the location of the dump directory if you do not specify $DUMPLOC.
Once the import script has completed and you have re-enabled archive logging, your database is ready for use
(though see the section on setting the SLM_MAIN_Q_SEQUENCE for SLM databases below). If your database is
on a new system or has a new name, you will need to update the appropriate Omega configuration parameters,
and then you can start the relevant Omega daemons.
4.4.1 WHSM
In general there should be no additional requirements for WHSM. However, you might decide not to use the
OUTPUT_INSTANCE_run_ddl.sh script and instead use the add-users script to add users with the
whsmuser role (WHSM_ROLE). Run /oracle/scripts/add-users --help for more information on
using it.
4.4.2 SLM
There may be some issues with SLM databases created via the create-instance script prior to revisions made in
August 2013, because the old creation procedure used bitmapped indexes, which require the Enterprise
Edition of Oracle. If in doubt, consult support before migrating an SLM database.
A warning may be reported when running the SLM import. This can be ignored (there is only one row and it already contains the necessary
information).
As with WHSM, you may wish to add users with the slmuser (SLM_USER) role by running the add-users script.
There are also two additional steps that must be carried out separately for this database.
Users that have the slmadmin (SLM_ADMIN) role will not be created by the
OUTPUT_INSTANCE_run_ddl.sh script. As noted above, these should have been removed from the user list created
in the first stage of this procedure and must now be recreated using the add-users script:
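The add-users command line itself is not preserved in this copy of the document. A purely hypothetical invocation is sketched below; the actual option syntax (including how the slmadmin role is selected) must be taken from /oracle/scripts/add-users --help. The command is printed rather than run:

```shell
# Hypothetical add-users invocation; the real flags are NOT documented
# here, so check /oracle/scripts/add-users --help before running it.
# PASSWORD, user1 and user2 are the placeholders described below; any
# option needed to select the slmadmin (SLM_ADMIN) role is omitted.
CMD="/oracle/scripts/add-users PASSWORD user1 user2"
echo "$CMD"
```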
where:
PASSWORD
is the password of the Oracle system account
user1, user2
are the usernames of the accounts to which you wish to give administrative privileges. These can
either be obtained from reviewing the directory list obtained in the first procedure or can be a
different set of users.
In the SLM database there is a sequence that keeps track of the history of the usage of the database.
When the data is imported into the new database by the OUTPUT_INSTANCE_import.sh script, this sequence
number is reset to 1. You will need to determine the value in the original database and initialize the sequence
number in the new database to the same value.
To obtain the current value in the original database use sqlplus to connect to the database and run the
following command to get the current value of SLM_MAIN_Q_SEQUENCE in that database:
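The original command is not preserved in this copy of the document. A hedged reconstruction that would return the LAST_NUMBER value shown below is a query against the USER_SEQUENCES view; the owning schema (TPMGR, per the export step above) and connection details are assumptions. The SQL is printed so this sketch runs without a database:

```shell
# Hedged reconstruction: read the sequence's LAST_NUMBER from
# USER_SEQUENCES. Connect as the owning schema (assumed to be TPMGR)
# before running it, e.g.: sqlplus tpmgr@INPUT_INSTANCE
SQL="SELECT last_number FROM user_sequences WHERE sequence_name = 'SLM_MAIN_Q_SEQUENCE';"
echo "$SQL"
```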
LAST_NUMBER
-----------
8099
On the new database server, use sqlplus to connect to the database and update the value of
SLM_MAIN_Q_SEQUENCE to match.
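The update command is not preserved here either. One standard way to set an Oracle sequence to a known value is to drop and recreate it with the desired START WITH; this loses any non-default attributes (cache size and so on), so check the original sequence's settings first. The sequence name, owner and target value below are taken from the example above; the statements are printed rather than executed:

```shell
# Sketch: recreate the sequence in the new database at the value read
# from the old one (8099 in the example above). Run the printed
# statements in sqlplus as the owning schema (assumed to be TPMGR).
# Note: DROP/CREATE resets cache and other attributes to defaults.
TARGET=8099
DDL="CREATE SEQUENCE slm_main_q_sequence START WITH $TARGET;"
echo "DROP SEQUENCE slm_main_q_sequence;"
echo "$DDL"
```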
This database will have to be updated for every baseline being run in a center that is affected by the migration.
Depending on which services/databases are migrated, you will need to make the following changes. If doing a
partial migration you could, in principle, scope these values on a per-project basis; however, this is not generally
recommended - you should move all of the projects using a given baseline.
Corba->ORBInitialHost
• The following all take the form HOSTNAME:PORT:DBNAME. The PORT should be unchanged and
should be 1521 in all cases. Update the HOSTNAME to the FQDN of the new server and the
DBNAME to the name of the new database (if they change):
OpmDatabase
RdmDatabase
InVa->Database->ConnectionString
• You will need to change these if you move the location of the whsm2opm server:
Environment->WHSM2OPM_SERVER
Environment->WHSM2OPM_SERVER_NEXT
• You will need to change these if you move the SLM or WHSM servers. Make sure that the
tnsnames.ora file in the directory pointed to by TNS_ADMIN contains the correct information
for both databases if necessary:
Environment->TNS_ADMIN
Environment->TPSLM_SERVER
Environment->WHSM_SERVER
• ORACLE_HOME is currently used only by the SLM programs TPslmreports and TPslmdbup:
Environment->ORACLE_HOME