Bill Burton and Joshua Ort, Oracle RAC Assurance Team, Real Application Clusters Product Development
The following is intended to outline our general product direction. It is intended for information purposes only and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remain at the sole discretion of Oracle.
Oracle Confidential
Agenda
Upgrade Concepts and Considerations
Grid Infrastructure
- Upgrade paths
- Planning, preparation, and prerequisites
- Completing the upgrade and post-upgrade validation
RAC Database
- Upgrade paths
- Planning, preparation, and prerequisites
- Completing the upgrade and post-upgrade validation
Upgrade Concepts
Upgrade Considerations
With Oracle Grid Infrastructure 11.2, ASM and Oracle Clusterware are installed into a single home directory, which is referred to as the Grid Infrastructure home.
However, Oracle Clusterware and ASM remain separate products.
The Grid Infrastructure version must be greater than or equal to the version of the resources it manages (e.g., ASM, RDBMS).
All 11.2 upgrades are out-of-place.
Upgrade Considerations
Oracle Home/Base
- The ORACLE_BASE for GI should be different from the ORACLE_BASE for the Oracle Database; each installation user should have its own Oracle Base.
- The 11.2 Grid Infrastructure home must be owned by the same user as the pre-11.2 clusterware home.
- rootupgrade.sh will change the ownership of the Grid Infrastructure home directory and its parents to root.
Oracle 10gR1
- Clusterware and ASM support direct upgrade from 10.1.0.5; earlier versions require upgrade to 10.1.0.5 first.
- Clusterware can be rolling upgraded.
- ASM cannot be rolling upgraded.
Oracle 11gR1
- Clusterware and ASM support direct upgrade from 11.1.0.6 and 11.1.0.7.
- Clusterware can be rolling upgraded.
- ASM can be rolling upgraded from 11.1.0.6; this requires the patch for bug 6872001 on the 11.1.0.6 ASM home.
- ASM can also be rolling upgraded from 11.1.0.7; no patch is required.
- Check Metalink for upgrade issues.
- Check the Upgrade Companion (Metalink Note: 785351.1).
- Check the Certification Matrix:
http://www.oracle.com/technology/support/metalink/index.html
RAC Assurance Support Team: 11gR2 RAC Starter Kit and Best Practices
The goal of the Oracle Real Application Clusters (RAC) Starter Kit is to provide you with the latest information on generic and platform-specific best practices for implementing an Oracle RAC cluster. It is meant to supplement the Oracle documentation set. See Metalink Note: 45715.1.
- Unset ORACLE_HOME and ORACLE_BASE in the environment of the installing user, as the install scripts handle these. This avoids installer AttachHome issues.
- Set the following parameter in the SSH daemon configuration file /etc/ssh/sshd_config on all cluster nodes before running OUI: LoginGraceTime 0. Then restart sshd (see the sketch after this list).
- Provision network resources for the Single Client Access Name (SCAN).
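A minimal sketch of the first two items on Linux, assuming a typical Red Hat-style system for the sshd restart:

As the installing user, before launching OUI:
$ unset ORACLE_HOME ORACLE_BASE
As root on every cluster node, add or edit the entry LoginGraceTime 0 in /etc/ssh/sshd_config, then restart the daemon:
# /sbin/service sshd restart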
Oracle Database 11g Release 2 clients connect to the database using SCAN VIPs. The SCAN is associated with the entire cluster rather than with an individual node, and resolves to up to three IP addresses in DNS or GNS. The IP addresses are returned in a round-robin manner. SCAN listeners run from the Grid Infrastructure home and provide load balancing and failover for client connections. See this white paper for more details:
http://www.oracle.com/technology/products/database/clustering/pdf/scan.pdf
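For example, resolving the SCAN name returns all three addresses; the cluster name and addresses here are illustrative:
$ nslookup mycluster-scan1.mydomain.com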
Name: mycluster-scan1.mydomain.com
Address: 10.148.46.79
Name: mycluster-scan1.mydomain.com
Address: 10.148.46.77
Name: mycluster-scan1.mydomain.com
Address: 10.148.46.78
Top-Level Flow
1. Verify the hardware/software environment
2. Install the software
3. Configure the software
4. Finalize the upgrade
Completing the Upgrade: Install the Software (Node Selection and SSH)
Completing the Upgrade: Configure the Software (Run the Root Scripts)
Run GI_HOME/rootupgrade.sh on each node of the cluster as root.
- High-level logging goes to stdout; see the log file in GI_HOME/cfgtoollogs/crsconfig/ for more detail.
- Must be run on all nodes: the first node on its own, to successful completion; all but the last node can then be run in parallel; the last node on its own (see the sketch below).
- Performs a rolling upgrade of the Clusterware stack.
- The upgrade is finalized only as the last node completes; prior to that, the clusterware is effectively running on the old version.
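As a sketch, the required run order on a three-node cluster looks like this (the Grid Infrastructure home path is illustrative):

Node 1, as root; wait for it to complete successfully:
# /u01/app/grid11gR2/rootupgrade.sh
Node 2 (and any other intermediate nodes, in parallel if desired):
# /u01/app/grid11gR2/rootupgrade.sh
Node 3, the last node, which finalizes the upgrade:
# /u01/app/grid11gR2/rootupgrade.sh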
- The ohasd process replaces init.cssd, init.crsd, init.evmd, and oprocd.
- Starts CSSD in exclusive mode. This only works on the first node; all others get:
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ratus-vm1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
- Upgrades the CSS voting disks.
- Starts/restarts CSS clustered, along with the other Clusterware daemons.
- Finalizes the local node's upgrade.
Completing the Upgrade: Configure the Software (rootupgrade.sh Actions Automatically Performed on the Last Node)
Actions performed when all other nodes already have the new software version installed and have completed rootupgrade.sh:
- Sets the new active version; the cluster is effectively on the old release until this point.
- Adds the ASM resource if needed (i.e., if not already present).
- Adds new resources, e.g., SCAN and the SCAN listener.
- Starts nodeapps and the new resources.
Post-upgrade validation
Clusterware processes
$ ps -ef | grep -v grep | grep d.bin
oracle 9824 1 0 Jul14 ? 00:00:00 /u01/app/grid11gR2/bin/oclskd.bin
root 22161 1 0 Jul13 ? 00:00:15 /u01/app/grid11gR2/bin/ohasd.bin reboot
oracle 24161 1 0 Jul13 ? 00:00:00 /u01/app/grid11gR2/bin/mdnsd.bin
oracle 24172 1 0 Jul13 ? 00:00:00 /u01/app/grid11gR2/bin/gipcd.bin
oracle 24183 1 0 Jul13 ? 00:00:03 /u01/app/grid11gR2/bin/gpnpd.bin
oracle 24257 1 0 Jul13 ? 00:01:26 /u01/app/grid11gR2/bin/ocssd.bin
root 24309 1 0 Jul13 ? 00:00:06 /u01/app/grid11gR2/bin/octssd.bin
root 24323 1 0 Jul13 ? 00:01:03 /u01/app/grid11gR2/bin/crsd.bin reboot
root 24346 1 0 Jul13 ? 00:00:00 /u01/app/grid11gR2/bin/oclskd.bin
oracle 24374 1 0 Jul13 ? 00:00:03 /u01/app/grid11gR2/bin/evmd.bin
Clusterware checks
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

$ crsctl check has
CRS-4638: Oracle High Availability Services is online

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
Post-upgrade validation
$ crsctl check cluster -all
**************************************************************
rat-rm2-ipfix006:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rat-rm2-ipfix007:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rat-rm2-ipfix008:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Post-upgrade validation
$CRS_HOME/bin/crs_stat is deprecated in 11.2; use crsctl instead:
$ crsctl status resource -t
NAME               TARGET   STATE    SERVER             STATE_DETAILS
---------------------------------------------------------------------
ora.DATA.dg        (ASM disk group; new resource)
                   ONLINE   ONLINE   rat-rm2-ipfix006
                   ONLINE   ONLINE   rat-rm2-ipfix007
                   ONLINE   ONLINE   rat-rm2-ipfix008
ora.LISTENER.lsnr
                   ONLINE   ONLINE   rat-rm2-ipfix006
                   ONLINE   ONLINE   rat-rm2-ipfix007
                   ONLINE   ONLINE   rat-rm2-ipfix008
ora.asm
                   ONLINE   ONLINE   rat-rm2-ipfix006   Started
                   ONLINE   ONLINE   rat-rm2-ipfix007   Started
                   ONLINE   ONLINE   rat-rm2-ipfix008   Started
ora.eons           (new resource)
                   ONLINE   ONLINE   rat-rm2-ipfix006
                   ONLINE   ONLINE   rat-rm2-ipfix007
                   ONLINE   ONLINE   rat-rm2-ipfix008
ora.gsd            (OFFLINE is normal unless also running 9i RAC)
                   OFFLINE  OFFLINE  rat-rm2-ipfix006
                   OFFLINE  OFFLINE  rat-rm2-ipfix007
                   OFFLINE  OFFLINE  rat-rm2-ipfix008
ora.net1.network   (new resource)
                   ONLINE   ONLINE   rat-rm2-ipfix006
                   ONLINE   ONLINE   rat-rm2-ipfix007
                   ONLINE   ONLINE   rat-rm2-ipfix008
Clusterware Downgrade
- Restores the OCR from the backup taken before the upgrade.
- On all but the last node, run: rootcrs.pl -downgrade [-force]
- On the last node, run: rootcrs.pl -downgrade -lastnode -oldcrshome <CH> -version 11.1.0.6.0 [-force]
  This last step restores the OCR.
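As a sketch, the downgrade sequence on a three-node cluster; rootcrs.pl lives under crs/install in the 11.2 Grid Infrastructure home, and the old clusterware home path below is illustrative:

On nodes 1 and 2, as root:
# cd /u01/app/grid11gR2/crs/install
# perl rootcrs.pl -downgrade -force
On node 3, the last node:
# perl rootcrs.pl -downgrade -lastnode -oldcrshome /u01/app/crs -version 11.1.0.6.0 -force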
Interoperability Success Factors: 11.2 GI with an Older Database
If you want to create a database in the old DB home after the GI upgrade:
- Apply the patch for bug 8288940 on the 10.2.0.4 or 11.1 home to create a database using DBCA from that release; otherwise DBCA fails to see that the 11.2 ASM instance is running.
- All nodes must be pinned (the GI upgrade does this; see the check below).
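Should node pinning ever need to be checked or redone manually, a sketch (node names are illustrative):

$ olsnodes -n -t
(lists each node with its node number and Pinned/Unpinned status)
# crsctl pin css -n node1 node2
(run as root to pin the named nodes)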
Oracle 10gR1
Database supports direct upgrade from 10.1.0.5. Earlier versions require upgrade to 10.1.0.5 first.
Oracle 10gR2
Database supports direct upgrade from 10.2.0.2. Earlier versions require upgrade to 10.2.0.2 or higher first.
Oracle 11gR1
Database supports direct upgrade from 11.1.0.6 and 11.1.0.7
Back up everything
Define and test a fallback strategy
- Be aware of known issues (Metalink Note: 161818.1).
- Make decisions regarding the database COMPATIBLE init parameter; some changes cannot be reverted.
- Check for and apply critical patches to the 11.2 home prior to the database upgrade.
- Test, test, and test until comfortable with the results.
- Gather baseline stats (OS, DB, etc.).
- After the upgrade, check the logs.
- Verify that you meet the minimum requirements for shared memory pool sizes.
- Check for invalid SYS- and SYSTEM-owned objects (a sample query follows this list).
- Disable all jobs that are executed at the OS level or by third parties before the database upgrade.
- See the upgrade guide for the full list of requirements.
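A generic sketch of the invalid-object check; the query below is illustrative rather than an Oracle-supplied script:

SQL> SELECT owner, object_name, object_type
  2    FROM dba_objects
  3   WHERE status = 'INVALID' AND owner IN ('SYS', 'SYSTEM');

Invalid objects can usually be recompiled ahead of the upgrade by running $ORACLE_HOME/rdbms/admin/utlrp.sql as SYSDBA.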
- CVU is still used to check O/S prerequisites, and fix-up scripts are generated.
- The software is copied to all selected nodes prior to DBUA running.
- Provides a detailed summary report prior to the upgrade.
- Shuts down all database instances to perform the upgrade.
- Provides a detailed results report.
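DBUA is started from the new 11.2 database home; the path below is illustrative:

$ /u01/app/oracle/product/11.2.0/dbhome_1/bin/dbua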
DB Datafiles
The command can be run from any node and handles removal of the old files and creation of the new ones:
[root bin]$ ./crsctl replace votedisk +DATA
Disk group redundancy determines how many voting files reside on an ASM disk group:
- External: 1 voting file
- Normal: 3 voting files (minimum of 3 failure groups)
- High: 5 voting files (minimum of 5 failure groups)
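After replacing the voting files, the new locations can be confirmed with:

$ crsctl query css votedisk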
- ACFS file systems are created on ASM volumes; ASM volumes are created in ASM disk groups (see the sketch below).
- ACFS requires disk groups with compatibility 11.2 or greater.
- Managed from:
  - asmcmd (many new commands in 11.2)
  - asmca (new in 11.2)
  - SQL*Plus (connect / as sysasm)
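A sketch of creating an ACFS file system on Linux; the disk group, volume name, size, device name, and mount point are all illustrative:

$ asmcmd volcreate -G DATA -s 10G acfsvol1
$ asmcmd volinfo -G DATA acfsvol1
(volinfo reports the volume device, e.g. /dev/asm/acfsvol1-123)
# mkfs -t acfs /dev/asm/acfsvol1-123
# mount -t acfs /dev/asm/acfsvol1-123 /u01/app/acfsmounts/vol1
(mkfs and mount run as root)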
Questions & Answers
The preceding is intended to outline our general product direction. It is intended for information purposes only and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remain at the sole discretion of Oracle.