Oracle Real Application Clusters Upgrade from Oracle9i to Oracle Database 10g on HP-UX

HP/ORACLE CTC Technical Report

November 2004

by Rebecca Schlecht (HP), Rainer Marekwia (Oracle), Jaime Blasco (HP), Roland Knapp (Oracle)

Oracle Real Application Clusters Upgrade from Oracle9i to Oracle Database 10g on HP-UX (Step-by-Step)
The latest updated version of this document can always be found at: http://www.hporaclectc.com/cug/assets/RACupgrade9i-10g.doc

Setup and Configuration
Phase 1: 9i RAC -> 10g RAC
  1.0 Gather Optimizer Statistics Before the Upgrade (to minimize downtime)
  1.1 Install the Oracle 10g Software
    1.1.1 Install 10g Cluster Ready Services
    1.1.2 Install Oracle 10g Software Database Binaries
  1.2 Upgrade of Existing Oracle9i RAC Database Using Oracle Database Upgrade Assistant (DBUA)
  1.3 Re-Initialize the OCR Only if Needed
Phase 2: Migration from Raw Devices to ASM
Phase 3: Migrate Serviceguard to Oracle Clusterware
  3.1 Copy the OCR from a Shared Logical Volume to a Raw Device
  3.2 Re-Install CRS

Setup and Configuration:
This document provides a step-by-step description of how to upgrade an Oracle9i Real Application Clusters (RAC) installation on HP-UX to Oracle Database 10g with RAC. The first step is to upgrade the clustered database to Oracle Database 10g using the Database Upgrade Assistant (DBUA), as described in Phase 1.

Page 1/30

11/22/2004

HP/Oracle CTC


A second option for migrating to Oracle Database 10g is to migrate the Oracle database files from the raw devices that were required with Oracle9i RAC to Oracle Automatic Storage Management (ASM) using RMAN. This is described in Phase 2. Another new option with Oracle Database 10g with RAC is to run RAC without a vendor cluster manager. Phase 3 describes the steps to migrate an Oracle RAC database that uses HP Serviceguard to the Oracle integrated clusterware, known as Cluster Ready Services (CRS). Report any feedback about this documentation to rainer.marekwia@oracle.com or roland.knapp@oracle.com. In our example, we used a two-node cluster (called “RAC”) on HP-UX with Serviceguard on nodes ev02 and ev03:
root@ev02:/root$ cmviewcl

CLUSTER      STATUS
RAC          up

  NODE         STATUS       STATE
  ev02         up           running
  ev03         up           running

The database files are stored on shared logical volumes (SLVM) in volume group vg_rac on a DS2405 storage:
root@ev02:/root$ vgdisplay vg_rac
--- Volume groups ---
VG Name                     /dev/vg_rac
VG Write Access             read/write
VG Status                   available, shared, server
Max LV                      255
Cur LV                      23
Open LV                     23
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               8683
VGDA                        4
PE Size (Mbytes)            4
Total PE                    17362
Alloc PE                    14672
Free PE                     2690
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0


To improve manageability, soft links were created in Oracle’s admin directory to point to the logical volumes, for example:
lrwxrwxrwx   1 oracle  dba  34 Jun 21 09:56 control01 -> /dev/vg_rac/rRAC_raw_control01_170
lrwxrwxrwx   1 oracle  dba  32 Jun 21 09:56 system -> /dev/vg_rac/rRAC_raw_system_1000

HP’s Serviceguard Products version 11.15.00 are also installed:
root@ev02:/root$ swlist | grep -i guard
  T1905BA   A.11.15.00   MC / ServiceGuard
  T1907BA   A.11.15.00   ServiceGuard Extension for RAC

This is an extract from the cluster ASCII configuration file:
CLUSTER_NAME RAC

# Cluster Lock Parameters
FIRST_CLUSTER_LOCK_VG     /dev/vg_rac

NODE_NAME                 ev02
  NETWORK_INTERFACE       lan0
    HEARTBEAT_IP          15.136.26.153
  NETWORK_INTERFACE       lan1
    HEARTBEAT_IP          10.0.0.1
  FIRST_CLUSTER_LOCK_PV   /dev/dsk/c4t1d0

NODE_NAME                 ev03
  NETWORK_INTERFACE       lan0
    HEARTBEAT_IP          15.136.26.155
  NETWORK_INTERFACE       lan1
    HEARTBEAT_IP          10.0.0.2
  FIRST_CLUSTER_LOCK_PV   /dev/dsk/c4t1d0

OPS_VOLUME_GROUP          /dev/vg_rac

Tests were performed on an HP Itanium system with HP-UX 11.23 installed:
root@ev02:/etc/cmcluster$ uname -a HP-UX ev02 B.11.23 U ia64 3452104077 unlimited-user license

Oracle9i Release 9.2 was installed, plus the latest available patchset 9.2.0.5. Make sure that you have the appropriate /etc/hosts entries for all of the RAC nodes that are part of your Oracle9i RAC environment, and make sure that the entries for the private interconnects are in the /etc/hosts files on all systems. Also add the virtual IP addresses (VIP) and names to the /etc/hosts files. The VIP Configuration Assistant (VIPCA) uses the virtual IP addresses to create the VIP services after you have installed the Oracle binaries.
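The entries described above can be illustrated with an /etc/hosts fragment. The public and private addresses below are the ones shown in the cluster configuration file; the VIP addresses and the -priv/-vip host names are placeholders chosen for this sketch and must match your own network plan:

```
# Public interfaces (lan0)
15.136.26.153   ev02
15.136.26.155   ev03
# Private interconnect (lan1)
10.0.0.1        ev02-priv
10.0.0.2        ev03-priv
# Virtual IP addresses for the VIP services (placeholder addresses)
15.136.26.160   ev02-vip
15.136.26.161   ev03-vip
```

The same entries must be present on every node of the cluster.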


Phase 1: 9i RAC -> 10g RAC
1.0 Gather Optimizer Statistics Before the Upgrade (to minimize downtime)

Refer to chapter 3 of the Oracle Database Upgrade Guide, part number B10763-02.

1.1 Install the Oracle 10g Software

Stop the Oracle9i GSD daemons by running the following command on each of the nodes in your cluster database environment:
$ gsdctl stop
Successfully stopped GSD on local node

1.1.1 Install 10g Cluster Ready Services

Install Oracle Database 10g CRS on top of the running Serviceguard clusterware.


The installer detects the existing Serviceguard cluster (through the existence of /opt/nmapi/) and shows the cluster nodes.

On all of the nodes, change the owner and access rights of the raw devices that you want to use for Oracle directly (such as the Oracle Cluster Registry (OCR), voting disk, disks for ASM, and so on):
chown oracle:dba /dev/rdsk/c4t….d0
chmod 660 /dev/rdsk/c4t….d0

for i in 3 4 5 6 7 8 9 10 11 12 13 14
do
    remsh ev03 chown oracle:dba /dev/rdsk/c4t"$i"d0
done


The installation steps are similar to the steps that you would follow for a new Oracle Database 10g installation, except that the installation does not request an OCR location. Instead, the installer uses the Oracle9i srvconfig.loc file and upgrades it to the Oracle Database 10g OCR format.


After the installation is finished, run the root.sh script on all of the nodes that are part of this cluster upgrade. On node1 run the following command. Sample output appears after the command:
# ./root.sh
Checking to see if any 9i GSD is up
Checking to see if Oracle CRS stack is already up...
:
Oracle Cluster Registry configuration upgraded successfully
:
Now formatting voting device: /dev/rdsk/c4t3d0
Successful in setting block0 for voting disk.
Format complete.
Adding daemons to inittab
Preparing Oracle Cluster Ready Services (CRS):
Jun 23 04:34:52 ev02 syslog: (Oracle CSSD will be run out of init)
Jun 23 04:34:52 ev02 syslog: (Oracle EVMD will be run out of init)
Jun 23 04:34:52 ev02 syslog: (Oracle CRSD will be run out of init, set to start boot services)
Expecting the CRS daemons to be up within 600 seconds.
:
CSS is active on these nodes.
  ev02
CSS is inactive on these nodes.
  ev03
Local node checking complete.


Run the root.sh script on the remaining nodes to start the CRS daemons as shown in the following example: On Node 2:
ev03:{/opt/oracle/product/CRS#/root/} $ ./root.sh
Checking to see if any 9i GSD is up
Checking to see if Oracle CRS stack is already up...
:
Oracle Cluster Registry configuration upgraded successfully
:
clscfg: EXISTING configuration version 2 detected.
clscfg: version 2 is 10G Release 1.
Successfully accumulated necessary OCR keys.
:
Oracle Cluster Registry for cluster has already been initialized
Adding daemons to inittab
Preparing Oracle Cluster Ready Services (CRS):
Jun 22 23:39:03 ev03 root: (Oracle CSSD will be run out of init)
Jun 22 23:39:03 ev03 root: (Oracle EVMD will be run out of init)
Jun 22 23:39:04 ev03 root: (Oracle CRSD will be run out of init, set to start boot services)
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
  ev02
  ev03
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)

During this installation, the Oracle Universal Installer (OUI) will upgrade the Oracle9i srvconfig.loc file to the Oracle Database 10g OCR format and change the pointers in /var/opt/oracle accordingly as shown in the following pre- and post-processing examples: Before upgrade:
oracle@ev03RAC2:/var/opt/oracle> ls -slrt
total 32
  16 -rw-rw-r--   1 oracle   sys   50 Jun 21 06:19 srvConfig.loc
  16 -rw-r--r--   1 root     sys   62 Jun 21 22:35 oraInst.loc

oracle@ev03RAC2:/var/opt/oracle> pg srvConfig.loc
srvconfig_loc=/dev/vg_rac/rRAC_raw_srvmconfig_110


After upgrade:
# ll
total 48
-rw-r--r--   1 root     dba   67 Jun 23 04:33 ocr.loc
drwxrwxr-x   5 root     sys   96 Jun 23 04:33 oprocd
-rw-r--r--   1 oracle   dba   62 Jun 22 04:35 oraInst.loc
drwxr-xr-x   3 root     sys   96 Jun 23 04:33 scls_scr
-rw-r--r--   1 root     dba   24 Jun 23 04:33 srvConfig.loc

# pg srvConfig.loc
srvconfig_loc=/dev/null

# pg ocr.loc
ocrconfig_loc=/dev/vg_rac/rRAC_raw_srvmconfig_110
local_only=FALSE

The following processes will be running after a successful CRS installation:
ev03:{/opt/oracle/product/CRS/bin#/root/} $ ps -ef | grep CRS
  oracle  7567     1  /opt/oracle/product/CRS/bin/evmd.bin
  oracle  7692  7567  /opt/oracle/product/CRS/bin/evmlogger.bin -o /opt/oracle/produc
  root    7572     1  /opt/oracle/product/CRS/bin/crsd.bin
  oracle  7672  7570  /opt/oracle/product/CRS/bin/ocssd || exit 137
  oracle  7673  7672  /opt/oracle/product/CRS/bin/ocssd.bin

1.1.2 Install Oracle 10g Software Database Binaries

Run the runInstaller command to start the OUI on node one:

Screen “Specify Hardware Cluster Installation Mode” -> select cluster installation, select all nodes
Screen “Select Database Configuration” -> select “Do not create a starter Database”


After the OUI has copied the binaries to the second node, the VIPCA is started when root.sh is called on the first node. Make sure that the VIP name and IP address are in the /etc/hosts file.


Afterwards, run root.sh on node2:
# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/10g
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file “dbhome” already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file “oraenv” already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file “coraenv” already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...
Adding entry to /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

CRS resources are already configured


1.2 Upgrade of Existing Oracle9i RAC Database Using Oracle Database Upgrade Assistant (DBUA)

If you did not have any database listeners configured in the Oracle9i srvconfig repository, then start the Oracle Database 10g Network Configuration Assistant (NETCA, $ORACLE_HOME/bin/netca) to configure a standard listener in your Oracle Database 10g RAC environment before starting the DBUA (see Bug 3724769, B10763-01):
Hdr: 3724769 10.1.0.2 RDBMS 10.1.0.2 UPGRADE PRODID-5 PORTID-59 Abstract: DBUA DOES NOT COVER THE NONE EXISTANCE FROM A LISTERNER IN 9I

Also note Bug 3098076 if you see the following SecurityException:
Hdr: 3098076 10.1.0.1 ODBMIG 10.1.0.1 PRODID-888 PORTID-453 Abstract: DBUA FAILS WITH SECURITYEXCEPTION WHILE UPGRADING RAC DATABASE

Important note when running the database that you want to upgrade in ARCHIVELOG mode: the Oracle 10.1.0.2 DBUA turns off archiving during the upgrade (the original archiving mode is restored after the upgrade is done). This improves upgrade performance, but it can cause severe problems when Data Guard or standby databases are also used in the environment. The DBUA gets the information to turn off archiving during the upgrade from an XML file that is generated by a PL/SQL package. As a workaround, you can make the following simple one-word change in this PL/SQL file to make sure ARCHIVELOG mode is not disabled during the upgrade:
a) Open the file $ORACLE_HOME/rdbms/admin/utlu101x.sql in the Oracle 10g home
b) Search for a line with the text ArchiveLog and modify the line
       '< DisableArchiveLogMode value="true"/ >');
   to
       '< DisableArchiveLogMode value="false"/ >');
c) Save the file and then run DBUA
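The edit in steps a) through c) can also be made non-interactively. The sketch below is illustrative, not part of the documented procedure: it applies the same one-word change with sed to a sample file standing in for $ORACLE_HOME/rdbms/admin/utlu101x.sql (on a real system, point F at the actual file after taking a backup copy).

```shell
# Sample file standing in for $ORACLE_HOME/rdbms/admin/utlu101x.sql
F=/tmp/utlu101x.sample.sql
printf '%s\n' "dbms_output.put_line('< DisableArchiveLogMode value=\"true\"/ >');" > "$F"

# Flip the flag from "true" to "false" (the one-word workaround)
sed 's/DisableArchiveLogMode value="true"/DisableArchiveLogMode value="false"/' "$F" > "$F.new" \
    && mv "$F.new" "$F"

# Show the modified line
grep 'DisableArchiveLogMode' "$F"
```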

The DBUA will set its own environment variables, so you do not have to change to the new Oracle Database 10g environment yet. Instead, leave your ORACLE_HOME variable pointing to your Oracle9i environment and start the Database Upgrade Assistant (DBUA) in the new Oracle Database 10g ORACLE_HOME.
$ /opt/oracle/product/10g/dbua &


The DBUA for RAC uses the following algorithm to discover existing databases:
1) Check whether the DBUA is invoked in a cluster environment. If NO, then the selected database is assumed to be a single-instance Oracle database by default.
2) If YES, then the DBUA checks whether the selected database exists in the OCR. If it does exist, then the DBUA obtains the necessary information from the OCR.
3) If the OCR does not contain the database information, then the DBUA checks the registry (on UNIX, the /etc/oratab files) on the remote nodes to see if it can find the same database with a matching ORACLE_HOME. If YES, it assumes that it is a cluster database and asks you for the local instance name, because it cannot find or determine the local instance name anywhere else. Once the DBUA obtains the local instance details, it connects to the instance, gathers the database information from the gv$ views, and populates the OCR. When required in subsequent steps, the upgrade process can read information from the OCR.
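Step 3 can be illustrated with a small sketch. This is not Oracle code; it only mimics the oratab lookup, using a sample file in place of a remote node's /etc/oratab and the database name and ORACLE_HOME used in this document:

```shell
# Sample file standing in for a remote node's /etc/oratab
ORATAB=/tmp/oratab.sample
cat > "$ORATAB" <<'EOF'
# db_name:oracle_home:autostart
rac:/opt/oracle/product/10g:N
EOF

db=rac
home=/opt/oracle/product/10g

# Does any oratab entry list this database with a matching ORACLE_HOME?
if awk -F: -v d="$db" -v h="$home" '$1 == d && $2 == h { found = 1 } END { exit !found }' "$ORATAB"
then
    echo "cluster database candidate: $db ($home)"
else
    echo "no matching entry for $db"
fi
```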


To check the status and version of your OCR, run the following command:
oracle@ev03RAC2:/opt/oracle/product/10g/bin> ./ocrcheck
Status of Oracle Cluster Registry is as follows :
        Version                  :          2
        Total space (kbytes)     :     122792
        Used space (kbytes)      :       1932
        Available space (kbytes) :     120860
        Cluster registry integrity check succeeded

Version 2 indicates that this is an Oracle Database 10g OCR file.

When asked for a datafile for the Oracle Database 10g SYSAUX tablespace, use lvcreate to create a raw logical volume for SYSAUX, size 500MB. De-activate the shared volume group if necessary, and execute the following command as the root user:
# lvcreate -i 2 -I 1024 -L 501 -n sysaux /dev/vg_rac

-i: number of disks to stripe across
-I: stripe size in kilobytes
-L: size of logical volume in MB


If you have not used a binary server parameter file (spfile) with your Oracle9i RAC database, then you have to create an additional logical volume to hold the spfile. You will need the spfile later during the installation.
# lvcreate -i 2 -I 1024 -L 51 -n spfile /dev/vg_rac

Create a symbolic link, if necessary, from the database’s admin directory.

On the next page, specify “recompile packages” YES and choose the degree of parallelism.
When asked on the next page to specify an spfile, use a shared logical volume such as /dev/vg_rac/rRAC_raw_spfile_50.
On the password page, specify one or multiple passwords for your Oracle Database 10g RAC environment.



Now the individual upgrade steps are performed. After the successful upgrade to Oracle Database 10g, the detailed upgrade results are displayed. The following is an example of these results.

Upgrade Results

Database upgrade has been completed successfully, and the database is ready to use.

                Source Database           Target Database
Name:           rac                       rac
Version:        9.2.0                     10.1
Oracle Home:    /opt/oracle/product/9.2   /opt/oracle/product/10g

Services Configured
Service Name   Preferred Instances   Available Instances   TAF Policy
TAFboth        RAC2, RAC1            None
Schalke04      RAC1                  RAC2                  Basic

Upgrade Details
The following is a summary of the steps performed during the database upgrade. Log files for all the steps, as well as this summary, are available in "/opt/oracle/product/admin/rac/upgrade1".

Step Name                       Log File Name       Status
Pre Upgrade                     PreUpgrade.log      Successful
Oracle Server                   Oracle_Server.log   Successful
Enterprise Manager Repository   emRepository.log    Successful
Post Upgrade                    PostUpgrade.log     Successful*
:
Parameters Added:
Name                   Value
pga_aggregate_target   25165824
large_pool_size        8388608
java_pool_size         50331648
job_queue_processes    1

Parameters Updated:
Name               Old Value   New Value
shared_pool_size   100663296   301989888

Obsolete Parameters Removed:
max_enabled_roles

Parameters Commented:
Name                  Value
RAC1.local_listener   null
RAC2.local_listener   null


1.3 Re-Initialize the OCR Only if Needed

If at any point in time you have to completely re-create your Oracle Cluster Registry (OCR), then follow the steps described in the following MetaLink note:

Doc ID:  Note:239998.1
Subject: 10g RAC: How to Clean Up After a Failed CRS Install

First, restart your system, then run the following commands:
I.   dd if=/dev/zero of=/dev/vg_rac/rRAC_raw_srvmconfig_110 bs=1024000 count=130
II.  execute root.sh (both nodes) in CRS_HOME
III. execute root.sh (both nodes) in ORACLE_HOME (calling vipca)
IV.  call netca
V.   srvctl add database / instances
srvctl add database -d rac -o /opt/oracle/product/10g srvctl add instance -d rac -i RAC1 -n ev02 srvctl add instance -d rac -i RAC2 -n ev03


Phase 2: Migration from Raw Devices to ASM
To create the ASM instances on both nodes, create a password file and the parameter files init+ASM[1-2].ora in the Oracle Database 10g directory $ORACLE_HOME/dbs.
Example init+ASM1.ora:

cluster_database=true
+ASM2.instance_number=2
+ASM1.instance_number=1
asm_diskgroups='DG1'
asm_diskstring='/dev/rdsk/*'
background_dump_dest='/opt/oracle/product/admin/ASM/bdump'
core_dump_dest='/opt/oracle/product/admin/ASM/cdump'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='exclusive'
user_dump_dest='/opt/oracle/product/admin/ASM/udump'

Then do the following:
• Start up the ASM instances on both nodes.
• Create a diskgroup with the needed redundancy:

create diskgroup DG1 external redundancy
  disk '/dev/rdsk/c4t5d0', '/dev/rdsk/c4t6d0';

• Verify that the diskgroup is mounted.
• Edit the pfile of both RAC database instances (initRAC1.ora, initRAC2.ora) to make the following additional entries:

*.db_create_file_dest=+DG1
*.db_create_online_log_dest_1=+DG1
*.control_files='+DG1/rac/controlfile/control01'

Determine whether block change tracking is enabled. If block change tracking is enabled, then disable change tracking by running the following commands:
SQL> select * from V$BLOCK_CHANGE_TRACKING;
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

Now you can start with the RMAN migration of your Oracle Database 10g RAC environment from shared logical raw devices to ASM. Start up the Oracle Database 10g RAC instance on one node with the current Oracle Database 10g initialization parameter file using RMAN:
# rman target /


RMAN> startup nomount pfile=initRAC1.ora

connected to target database (not started)
Oracle instance started

Total System Global Area     465567744 bytes
Fixed Size                     1298080 bytes
Variable Size                361558368 bytes
Database Buffers             100663296 bytes
Redo Buffers                   2048000 bytes

RMAN reads the controlfile from its existing location and restores it into ASM:
RMAN> restore controlfile from '/opt/oracle/RAC/control01';

Starting restore at 30-JUN-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=207 devtype=DISK
channel ORA_DISK_1: copied controlfile copy
output filename=+DG1/rac/controlfile/control01
Finished restore at 30-JUN-04
released channel: ORA_DISK_1

RMAN> alter database mount;

database mounted

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/product/10g/dbs/snapcf_RAC1.f'; # default


Using the RMAN command shown in the following example, RMAN copies the datafiles from the shared logical volumes to the ASM diskgroup (specified with *.db_create_file_dest=+DG1):
RMAN> backup as copy database format '+DG1';

Starting backup at 30-JUN-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=207 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=206 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=/opt/oracle/RAC/user
channel ORA_DISK_2: starting datafile copy
input datafile fno=00002 name=/opt/oracle/RAC/undo1_01
output filename=+DG1/rac/datafile/undo10.258.1 tag=TAG20040630T012817 recid=1 stamp=530155995
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:05:05
channel ORA_DISK_2: starting datafile copy
input datafile fno=00006 name=/opt/oracle/RAC/undo2_01
output filename=+DG1/rac/datafile/undo20.259.1 tag=TAG20040630T012817 recid=2 stamp=530156033
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile copy
input datafile fno=00001 name=/opt/oracle/RAC/system
output filename=+DG1/rac/datafile/system.260.1 tag=TAG20040630T012817 recid=3 stamp=530156062
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting datafile copy
input datafile fno=00004 name=/opt/oracle/RAC/tools1_01
output filename=+DG1/rac/datafile/user01.257.1 tag=TAG20040630T012817 recid=4 stamp=530156075
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:06:21
channel ORA_DISK_1: starting datafile copy
input datafile fno=00005 name=/opt/oracle/RAC/tools2_01
output filename=+DG1/rac/datafile/tools01.261.1 tag=TAG20040630T012817 recid=5 stamp=530156075
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_2: starting datafile copy
input datafile fno=00007 name=/opt/oracle/RAC/sysaux01.dbf
output filename=+DG1/rac/datafile/tools02.262.1 tag=TAG20040630T012817 recid=6 stamp=530156090
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
copying current controlfile
output filename=+DG1/rac/datafile/sysaux.263.1 tag=TAG20040630T012817 recid=7 stamp=530156092
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:15
output filename=+DG1/rac/controlfile/backup.264.1 tag=TAG20040630T012817 recid=8 stamp=530156094
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 30-JUN-04


At this point, switch your target database to use the datafiles that RMAN copied to the ASM diskgroup:
RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DG1/rac/datafile/system.260.1"
datafile 2 switched to datafile copy "+DG1/rac/datafile/undo10.258.1"
datafile 3 switched to datafile copy "+DG1/rac/datafile/user01.257.1"
datafile 4 switched to datafile copy "+DG1/rac/datafile/tools01.261.1"
datafile 5 switched to datafile copy "+DG1/rac/datafile/tools02.262.1"
datafile 6 switched to datafile copy "+DG1/rac/datafile/undo20.259.1"
datafile 7 switched to datafile copy "+DG1/rac/datafile/sysaux.263.1"

Open the database:
SQL> alter database open; Database altered.

The controlfile(s) and datafiles, including the UNDO files, are now located in ASM. However, the RMAN “backup as copy” command does not copy the online redo log files or the temporary files, as shown by the output from the following commands:
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DG1/rac/datafile/system.260.1
+DG1/rac/datafile/undo10.258.1
+DG1/rac/datafile/user01.257.1
+DG1/rac/datafile/tools01.261.1
+DG1/rac/datafile/tools02.262.1
+DG1/rac/datafile/undo20.259.1
+DG1/rac/datafile/sysaux.263.1

7 rows selected.

SQL> select GROUP#, MEMBER from v$logfile;

GROUP#  MEMBER
---------------------------------------------------------------------
     1  /opt/oracle/RAC/redo1_01
     2  /opt/oracle/RAC/redo1_02
     3  /opt/oracle/RAC/redo1_03
     4  /opt/oracle/RAC/redo2_01
     5  /opt/oracle/RAC/redo2_02
     6  /opt/oracle/RAC/redo2_03

6 rows selected.


To also move the redo logs to the ASM disks, you can either:
• manually add a new log file to every log file group and then drop the log files located on the shared raw devices after the necessary log switches (you must do this for both instances), or
• use the following PL/SQL procedure to perform the redo log migration:

declare
  cursor orlc is
    select lf.member, l.bytes
      from v$log l, v$logfile lf
     where l.group# = lf.group#
       and lf.type = 'ONLINE'
     order by l.thread#, l.sequence#;
  type numTab_t  is table of number         index by binary_integer;
  type charTab_t is table of varchar2(1024) index by binary_integer;
  byteslist numTab_t;
  namelist  charTab_t;

  procedure migrateorlfile(name IN varchar2, bytes IN number) is
    retry number;
    stmt  varchar2(1024);
    als   varchar2(1024) := 'alter system switch logfile';
  begin
    select count(*) into retry from v$logfile;
    stmt := 'alter database add logfile size ' || bytes;
    execute immediate stmt;
    stmt := 'alter database drop logfile ''' || name || '''';
    for i in 1..retry loop
      begin
        execute immediate stmt;
        exit;
      exception
        when others then
          if i > retry then
            raise;
          end if;
          execute immediate als;
      end;
    end loop;
  end;
begin
  open orlc;
  fetch orlc bulk collect into namelist, byteslist;
  close orlc;
  for i in 1..namelist.count loop
    migrateorlfile(namelist(i), byteslist(i));
  end loop;
end;
/


After migrating the redo logs from the shared volumes to ASM:
SQL> select * from v$logfile;

GROUP#  STATUS  TYPE    MEMBER                        IS_
---------------------------------------------------------------------
     1          ONLINE  +DG1/rac/onlinelog/redo1_04   NO
     2          ONLINE  +DG1/rac/onlinelog/redo1_05   NO
     3          ONLINE  +DG1/rac/onlinelog/redo1_06   NO
     4          ONLINE  +DG1/rac/onlinelog/redo2_04   NO
     5          ONLINE  +DG1/rac/onlinelog/redo2_05   NO
     6          ONLINE  +DG1/rac/onlinelog/redo2_06   NO

6 rows selected.

Manually create a temporary tablespace (tempfile). Because the control files do not have a record for the temporary tablespace, RMAN does not migrate or overwrite the tempfile. Thus, re-create the temporary tablespace manually using the following command; because *.db_create_file_dest points to +DG1, the new tempfile is created in ASM automatically:
SQL> create temporary tablespace temp_tempfile extent management local; Tablespace created.

In addition, move the spfile to ASM using the following command:
SQL> create spfile='+DG1/RAC/parameterfile/spfileRAC1.ora' from pfile='/opt/oracle/product/10g/dbs/initRAC1.ora'; File created.

Now, optionally, you can delete the old database-files. RMAN can manage the deletion of the old datafiles, except for the control files and online redo logs. Therefore, delete the control files and online redo logs manually as in the following example:
RMAN> run {
  # delete datafiles
  DELETE COPY OF DATABASE;
  HOST 'rm <old_online_redo_logs>';
  HOST 'rm <old_control_file_copies>';
}

SQL> select FILE_NUMBER, BLOCKS, BYTES, SPACE, TYPE, STRIPE from v$asm_file;

FILE_NUMBER     BLOCKS       BYTES       SPACE TYPE          STRIPE
----------- ---------- ----------- ----------- ------------- ------
        256        325     2662400     8388608 CONTROLFILE   FINE
        257    2048001  1.6777E+10  1.6779E+10 DATAFILE      COARSE
        258    1536001  1.2583E+10  1.2585E+10 DATAFILE      COARSE
        259     153601  1258299392  1260388352 DATAFILE      COARSE
        260     128001  1048584192  1050673152 DATAFILE      COARSE
        261      64001   524296192   526385152 DATAFILE      COARSE
        262      64001   524296192   526385152 DATAFILE      COARSE
        263      64001   524296192   526385152 DATAFILE      COARSE
        264        325     2662400     8388608 CONTROLFILE   FINE
        265     512001   524289024   529530880 ONLINELOG     FINE
        266     512001   524289024   529530880 ONLINELOG     FINE
        267     512001   524289024   529530880 ONLINELOG     FINE
        268     512001   524289024   529530880 ONLINELOG     FINE
        269     512001   524289024   529530880 ONLINELOG     FINE
        270     512001   524289024   529530880 ONLINELOG     FINE
        271     512001   524289024   529530880 ONLINELOG     FINE
        272      12801   104865792   105906176 TEMPFILE      COARSE
        273          5        5120     1048576 PARAMETERFILE COARSE

18 rows selected.

After the migration is complete, enable change tracking if it was enabled before the migration by running the following command:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Update the initRAC[1-2].ora initialization parameter files to point to the new location of the spfile on ASM as in the following example:
SPFILE='+DG1/RAC/parameterfile/spfileRAC.ora'
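As a sketch, the init files can be reduced to the single SPFILE pointer with a short loop. The /tmp paths below are stand-ins for $ORACLE_HOME/dbs/initRAC[1-2].ora so the sketch is safe to run anywhere; the spfile path must match the one used in the CREATE SPFILE command earlier:

```shell
# Write the SPFILE pointer into each instance's init file
# (/tmp stand-ins for $ORACLE_HOME/dbs/initRAC1.ora and initRAC2.ora)
for f in /tmp/initRAC1.ora /tmp/initRAC2.ora
do
    printf '%s\n' "SPFILE='+DG1/RAC/parameterfile/spfileRAC1.ora'" > "$f"
done

cat /tmp/initRAC1.ora
```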

Phase 3: Migrate Serviceguard to Oracle Clusterware
If you use Serviceguard as a cluster manager on HP-UX, then you can see the following entries in the alert.log file:
Mon Jun 21 19:13:31 2004
cluster interconnect IPC version: Oracle UDP/IP
IPC Vendor 1 proto 2 Version 1.0

This indicates that you are using the vendor cluster manager, HP Serviceguard.

3.1 Copy the OCR from a Shared Logical Volume to a Raw Device

The current OCR location is shown in the ocr.loc file:
# pg ocr.loc
ocrconfig_loc=/dev/vg_rac/rRAC_raw_srvmconfig_110
local_only=FALSE

Shut down all instances (both the database instances and the ASM instances) using the following commands:


srvctl stop database -d RAC
SQL> shutdown immediate;
ASM instance shutdown
SQL> exit

Shut down the Oracle cluster services and Serviceguard (cmhaltcl -v), and comment out the last three CRS entries in the file /etc/inittab as follows:
#h1:3:respawn:/sbin/init.d/init.evmd run >/dev/null 2>&1 </dev/null
#h2:3:respawn:/sbin/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
#h3:3:respawn:/sbin/init.d/init.crsd run >/dev/null 2>&1 </dev/null
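Commenting out these entries by hand on every node is error-prone, so it can be scripted with sed. A minimal sketch, run here against a sample copy in /tmp rather than the real /etc/inittab (on a node you would back up and edit /etc/inittab itself, on every node):

```shell
#!/bin/sh
# Sketch: comment out the three CRS respawn entries in an inittab copy.
# /tmp/inittab holds sample content; on a real node, back up and edit
# /etc/inittab instead.
cat > /tmp/inittab <<'EOF'
h1:3:respawn:/sbin/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:3:respawn:/sbin/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:3:respawn:/sbin/init.d/init.crsd run >/dev/null 2>&1 </dev/null
EOF

# Put a '#' in front of each h1/h2/h3 respawn entry
sed 's/^h\([123]\):3:respawn:/#h\1:3:respawn:/' /tmp/inittab > /tmp/inittab.new
grep -c '^#h' /tmp/inittab.new
```

The final grep counts the commented entries; it should report 3 if all three CRS lines were disabled.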

On both nodes, shut down the CRS services by restarting the server (the commented-out inittab entries prevent them from respawning). Then temporarily activate the shared logical volume group to copy the OCR, as in the following example:
ev02:/sbin/init.d $ vgchange -a s vg_rac
Activated volume group in Shared Mode.
This node is the Server.
Volume group "vg_rac" has been successfully changed.
chgrp dba /dev/rdsk/c4t13d0
dd if=/dev/vg_rac/rRAC_raw_srvmconfig_110 of=/dev/rdsk/c4t13d0 bs=1024000 count=120
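dd only reports how many blocks it transferred, so it is worth confirming that source and target really match afterwards. The sketch below demonstrates the check with cksum on two ordinary files standing in for the raw logical volume and the raw disk; on the cluster you would run the same two dd | cksum pipelines against /dev/vg_rac/rRAC_raw_srvmconfig_110 and /dev/rdsk/c4t13d0 with bs=1024000 and count=120:

```shell
#!/bin/sh
# Sketch: verify a dd copy by comparing checksums over the copied range.
# /tmp/ocr_src and /tmp/ocr_dst stand in for the raw LV and the raw disk.
dd if=/dev/zero of=/tmp/ocr_src bs=1024 count=100 2>/dev/null
dd if=/tmp/ocr_src of=/tmp/ocr_dst bs=1024 count=100 2>/dev/null

# Checksum exactly the byte range that was copied, on both sides
src_sum=`dd if=/tmp/ocr_src bs=1024 count=100 2>/dev/null | cksum`
dst_sum=`dd if=/tmp/ocr_dst bs=1024 count=100 2>/dev/null | cksum`
if [ "$src_sum" = "$dst_sum" ]; then
    echo "copy verified"
else
    echo "copy MISMATCH" >&2
fi
```

Limiting the checksum to the copied range matters on real devices, because the target raw disk is usually larger than the source logical volume.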

Modify the ocr.loc file on both nodes to point to the raw disk instead of the shared volume. This is an example of a modified ocr.loc file:
ocrconfig_loc=/dev/rdsk/c4t13d0
local_only=FALSE

Before de-installing Serviceguard, de-activate the shared volume groups that you are using and stop the cluster using the following commands:
vgchange -a n vg_rac
cmhaltcl

To de-install Serviceguard, either a) (unsupported) move the NMAPI library aside with mv /opt/nmapi /opt/nmapi.org, or b) use swremove to remove both Serviceguard and SGeRAC.


3.2 Re-Install CRS
Before installing a new CRS home, you must de-install the existing CRS home. You can use the runInstaller provided with the CRS staging area. Also clean up the CRS directories on all of the nodes. (Re-installation into an existing CRS home fails with an error while the Oracle9i GSD is running.)


Run the root.sh script after the installation completes on all of the nodes in your cluster, one node at a time. Note that, in contrast to the process stack with Serviceguard as the cluster manager, an additional oprocd process is now running:
root   25520     1 /opt/oracle/product/OCL/bin/oprocd start -t 1000 -m 50
oracle 22971     1 /opt/oracle/product/OCL/bin/evmd.bin
oracle 25558 22971 /opt/oracle/product/OCL/bin/evmlogger.bin -o /opt/oracle/produc
oracle 25524 22976 /opt/oracle/product/OCL/bin/ocssd || exit 137
oracle 25525 25524 /opt/oracle/product/OCL/bin/ocssd.bin
root   22978     1 /opt/oracle/product/OCL/bin/crsd.bin
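The presence of the expected daemons can be checked with a small script. A sketch that greps sample process-listing text for the daemon names (on a live node you would use the real output of ps -ef instead of the sample below):

```shell
#!/bin/sh
# Sketch: confirm the expected CRS daemons, including oprocd, are present.
# $ps_out holds sample text; on a real node use:  ps_out=`ps -ef`
ps_out='root 25520 1 /opt/oracle/product/OCL/bin/oprocd start -t 1000 -m 50
oracle 22971 1 /opt/oracle/product/OCL/bin/evmd.bin
root 22978 1 /opt/oracle/product/OCL/bin/crsd.bin'

# Report each daemon found in the listing
for d in oprocd evmd crsd; do
    echo "$ps_out" | grep "bin/$d" >/dev/null && echo "$d running"
done
```

If oprocd is missing from the listing, the cluster is still running on the vendor clusterware rather than on CRS alone.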

You must still re-link the Oracle binaries before starting the instances. To do this, first re-link with rac_on and switch to Oracle's own skgxp/skgxn libraries using the following commands:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on
rm -f /opt/oracle/product/10g/lib/libskgxp10.a
cp /opt/oracle/product/10g/lib/libskgxpu.a /opt/oracle/product/10g/lib/libskgxp10.a


- Use stub SKGXN library
rm -f /opt/oracle/product/10g/lib/libskgxn2.so
cp /opt/oracle/product/10g/lib/libskgxns.so \
   /opt/oracle/product/10g/lib/libskgxn2.so
/usr/ccs/bin/ar cr /opt/oracle/product/10g/rdbms/lib/libknlopt.a /opt/oracle/product/10g/rdbms/lib/kcsm.o

Then re-link everything using make -f ins_rdbms.mk install or $ORACLE_HOME/bin/relink all.

After re-installing CRS, verify that you are using Oracle's integrated clusterware, CRS. Use SQL*Plus to query the x$ksxpia view to see which cluster manager Oracle is using. After migrating to Oracle Database 10g, you should see output such as the following:
SQL> select * from x$ksxpia;

ADDR             INDX       INST_ID    P PICK NAME_KSXPIA     IP_KSXPIA
---------------- ---------- ---------- - ---- --------------- ---------
000000039088CA50          0          1 Y OCR  ce1             10.1.1.25

On HP-UX, when using Serviceguard as the cluster manager, you see output like the following instead:
SQL> select * from x$ksxpia;

ADDR             INDX       INST_ID    P PICK NAME_KSXPIA     IP_KSXPIA
---------------- ---------- ---------- - ---- --------------- ---------
00000003936B8580          0          1   OSD  lo0:1           172.16.193.1

with:
OCR … Oracle Clusterware
OSD … Operating System dependent
CI  … indicates that the init.ora parameter cluster_interconnects is specified

The PICK column of this query therefore lets you verify that you are now using the Oracle CRS clusterware.
