
Using Serviceguard Extension for RAC

Manufacturing Part Number: T1859-90038
May 2006 Update
Copyright 2006 Hewlett-Packard Development Company, L.P. All rights reserved.

Legal Notices
Copyright 2003-2006 Hewlett-Packard Development Company, L.P. Publication Dates: June 2003, June 2004, February 2005, December 2005, March 2006, May 2006. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under the vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Oracle is a registered trademark of Oracle Corporation. UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group. VERITAS is a registered trademark of VERITAS Software Corporation. VERITAS File System is a trademark of VERITAS Software Corporation.

Contents
1. Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster? . . . 16
Group Membership . . . 17
Using Packages in a Cluster . . . 18
Serviceguard Extension for RAC Architecture . . . 19
Group Membership Daemon . . . 19
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM) . . . 20
Package Dependencies . . . 20
Storage Configuration Options . . . 20
Overview of SGeRAC and Oracle 10g RAC . . . 22
Overview of SGeRAC and Oracle 9i RAC . . . 23
How Serviceguard Works with Oracle 9i RAC . . . 23
Group Membership . . . 23
Configuring Packages for Oracle RAC Instances . . . 24
Configuring Packages for Oracle Listeners . . . 25
Node Failure . . . 26
Larger Clusters . . . 28
Up to Four Nodes with SCSI Storage . . . 28
Point to Point Connections to Storage Devices . . . 30
Extended Distance Cluster Using Serviceguard Extension for RAC . . . 32

2. Serviceguard Configuration for Oracle 10g RAC


Interface Areas . . . 34
Group Membership API (NMAPI2) . . . 34
SGeRAC Detection . . . 34
Cluster Timeouts . . . 34
Oracle Cluster Software . . . 35
Shared Storage . . . 36
Listener . . . 37
Network Monitoring . . . 38
RAC Instances . . . 40
Automated Startup and Shutdown . . . 40
Manual Startup and Shutdown . . . 40
Shared Storage . . . 40
Planning Storage for Oracle Cluster Software . . . 41
Planning Storage for Oracle 10g RAC . . . 42
Volume Planning with SLVM . . . 42

Storage Planning with CFS . . . 42
Volume Planning with CVM . . . 44
Installing Serviceguard Extension for RAC . . . 46
Configuration File Parameters . . . 47
Creating a Storage Infrastructure with LVM . . . 48
Building Volume Groups for RAC on Mirrored Disks . . . 48
Building Mirrored Logical Volumes for RAC with LVM Commands . . . 50
Creating RAC Volume Groups on Disk Arrays . . . 52
Creating Logical Volumes for RAC on Disk Arrays . . . 54
Oracle Demo Database Files . . . 55
Displaying the Logical Volume Infrastructure . . . 57
Exporting the Logical Volume Infrastructure . . . 57
Installing Oracle Real Application Clusters . . . 59
Cluster Configuration ASCII File . . . 60
Creating a Storage Infrastructure with CFS . . . 65
Creating a SGeRAC Cluster with CFS 4.1 for Oracle 10g . . . 65
Initializing the VERITAS Volume Manager . . . 65
Deleting CFS from the Cluster . . . 70
Creating a Storage Infrastructure with CVM . . . 73
Initializing the VERITAS Volume Manager . . . 73
Using CVM 4.x . . . 73
Using CVM 3.x . . . 77
Creating Volumes . . . 79
Oracle Demo Database Files . . . 80
Adding Disk Groups to the Cluster Configuration . . . 82
Prerequisites for Oracle 10g (Sample Installation) . . . 83
Installing Oracle 10g Cluster Software . . . 88
Installing on Local File System . . . 88
Installing Oracle 10g RAC Binaries . . . 89
Installing RAC Binaries on a Local File System . . . 89
Installing RAC Binaries on Cluster File System . . . 89
Creating a RAC Demo Database . . . 90
Creating a RAC Demo Database on SLVM or CVM . . . 90
Creating a RAC Demo Database on CFS . . . 91
Verify that Oracle Disk Manager is Configured . . . 92
Configuring Oracle to Use Oracle Disk Manager Library . . . 93

Verify that Oracle Disk Manager is Running . . . 94
Configuring Oracle to Stop Using Oracle Disk Manager Library . . . 96
Using Serviceguard Packages to Synchronize with Oracle 10g RAC . . . 97
Preparing Oracle Cluster Software for Serviceguard Packages . . . 97
Configure Serviceguard Packages . . . 97

3. Serviceguard Configuration for Oracle 9i RAC


Planning Database Storage . . . 102
Volume Planning with SLVM . . . 102
Storage Planning with CFS . . . 104
Volume Planning with CVM . . . 106
Installing Serviceguard Extension for RAC . . . 109
Configuration File Parameters . . . 110
Operating System Parameters . . . 111
Creating a Storage Infrastructure with LVM . . . 112
Building Volume Groups for RAC on Mirrored Disks . . . 112
Building Mirrored Logical Volumes for RAC with LVM Commands . . . 114
Creating RAC Volume Groups on Disk Arrays . . . 116
Creating Logical Volumes for RAC on Disk Arrays . . . 118
Oracle Demo Database Files . . . 119
Displaying the Logical Volume Infrastructure . . . 121
Exporting the Logical Volume Infrastructure . . . 121
Installing Oracle Real Application Clusters . . . 123
Cluster Configuration ASCII File . . . 124
Creating a Storage Infrastructure with CFS . . . 129
Creating a SGeRAC Cluster with CFS 4.1 for Oracle 9i . . . 129
Initializing the VERITAS Volume Manager . . . 129
Deleting CFS from the Cluster . . . 134
Creating a Storage Infrastructure with CVM . . . 136
Initializing the VERITAS Volume Manager . . . 136
Using CVM 4.x . . . 136
Using CVM 3.x . . . 140
Creating Volumes . . . 143
Mirror Detachment Policies with CVM . . . 143
Oracle Demo Database Files . . . 145
Adding Disk Groups to the Cluster Configuration . . . 147
Installing Oracle 9i RAC . . . 148
Install Oracle Software into CFS Home . . . 148

Create Database with Oracle Tools . . . 149
Verify that Oracle Disk Manager is Configured . . . 151
Configure Oracle to use Oracle Disk Manager Library . . . 152
Verify Oracle Disk Manager is Running . . . 153
Configuring Oracle to Stop using Oracle Disk Manager Library . . . 155
Using Packages to Configure Startup and Shutdown of RAC Instances . . . 156
Starting Oracle Instances . . . 156
Creating Packages to Launch Oracle RAC Instances . . . 157
Configuring Packages that Access the Oracle RAC Database . . . 158
Adding or Removing Packages on a Running Cluster . . . 159
Writing the Package Control Script . . . 159

4. Maintenance and Troubleshooting


Reviewing Cluster and Package States with the cmviewcl Command . . . 168
Types of Cluster and Package States . . . 168
Examples of Cluster and Package States . . . 175
Online Reconfiguration . . . 183
Online Node Addition and Deletion . . . 183
Managing the Shared Storage . . . 184
Single Node Online volume Re-Configuration (SNOR) . . . 184
Making LVM Volume Groups Shareable . . . 186
Activating an LVM Volume Group in Shared Mode . . . 187
Making Offline Changes to Shared Volume Groups . . . 188
Adding Additional Shared LVM Volume Groups . . . 190
Changing the VxVM or CVM Storage Configuration . . . 190
Removing Serviceguard Extension for RAC from a System . . . 192
Monitoring Hardware . . . 193
Using Event Monitoring Service . . . 193
Using EMS Hardware Monitors . . . 193
Adding Disk Hardware . . . 194
Replacing Disks . . . 195
Replacing a Mechanism in a Disk Array Configured with LVM . . . 195
Replacing a Mechanism in an HA Enclosure Configured with Exclusive LVM . . . 195
Online Replacement of a Mechanism in an HA Enclosure Configured with Shared LVM (SLVM) . . . 196
Offline Replacement of a Mechanism in an HA Enclosure Configured with Shared LVM (SLVM) . . . 197

Replacing a Lock Disk . . . 198
On-line Hardware Maintenance with In-line SCSI Terminator . . . 198
Replacement of I/O Cards . . . 201
Replacement of LAN Cards . . . 202
Off-Line Replacement . . . 202
On-Line Replacement . . . 202
After Replacing the Card . . . 203
Monitoring RAC Instances . . . 204

A. Software Upgrades
Rolling Software Upgrades . . . 207
Steps for Rolling Upgrades . . . 207
Example of Rolling Upgrade . . . 209
Limitations of Rolling Upgrades . . . 215
Non-Rolling Software Upgrades . . . 216
Steps for Non-Rolling Upgrades . . . 217
Limitations of Non-Rolling Upgrades . . . 218
Migrating a SGeRAC Cluster with Cold Install . . . 219

B. Blank Planning Worksheets


LVM Volume Group and Physical Volume Worksheet . . . 222
VxVM Disk Group and Disk Worksheet . . . 223
Oracle Logical Volume Worksheet . . . 224

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225


Printing History
Table 1    Document Edition and Printing Date

Printing Date    Part Number    Edition
June 2003        T1859-90006    First Edition. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
June 2004        T1859-90017    Second Edition. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
February 2005    T1859-90017    Second Edition, February 2005 Update. Web (http://www.docs.hp.com/)
October 2005     T1859-90033    Third Edition. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
December 2005    T1859-90033    Third Edition, First Reprint. Web (http://www.docs.hp.com/)
March 2006       T1859-90038    Third Edition, Second Reprint. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
May 2006         T1859-90038    Third Edition, May 2006 Update. Web (http://www.docs.hp.com/)

The last printing date and part number indicate the current edition, which applies to the 11.14, 11.15, 11.16, and 11.17 versions of Serviceguard Extension for RAC (Oracle Real Application Cluster). Changes in the May 2006 update include software upgrade procedures for SGeRAC clusters.

The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number is revised when extensive technical changes are incorporated. New editions of this manual will incorporate all material updated since the previous edition. To ensure that you receive the new editions, you should subscribe to the appropriate product support service. See your HP sales representative for details. HP Printing Division:
Business Critical Computing Hewlett-Packard Co. 19111 Pruneridge Ave. Cupertino, CA 95014


Preface
The May 2006 update includes a new appendix on software upgrade procedures for SGeRAC clusters. This guide describes how to use Serviceguard Extension for RAC (Oracle Real Application Cluster) to configure Serviceguard clusters for use with Oracle Real Application Cluster software on HP High Availability clusters running the HP-UX operating system. The contents are as follows:

Chapter 1, Introduction, describes a Serviceguard cluster and provides a roadmap for using this guide. This chapter should be used as a supplement to Chapters 1-3 of the Managing Serviceguard user's guide.

Chapter 2, Serviceguard Configuration for Oracle 10g RAC, describes the additional steps you need to take to use Serviceguard with Real Application Clusters when configuring Oracle 10g RAC. This chapter should be used as a supplement to Chapters 4-6 of the Managing Serviceguard user's guide.

Chapter 3, Serviceguard Configuration for Oracle 9i RAC, describes the additional steps you need to take to use Serviceguard with Real Application Clusters when configuring Oracle 9i RAC. This chapter should be used as a supplement to Chapters 4-6 of the Managing Serviceguard user's guide.

Chapter 4, Maintenance and Troubleshooting, describes tools and techniques necessary for ongoing cluster operation. This chapter should be used as a supplement to Chapters 7-8 of the Managing Serviceguard user's guide.

Appendix A, Software Upgrades, describes rolling, non-rolling, and migration-with-cold-install upgrade procedures for SGeRAC clusters.

Appendix B, Blank Planning Worksheets, contains planning worksheets for LVM, VxVM, and Oracle logical volumes.

Related Publications

The following documents contain additional useful information:

   Clusters for High Availability: a Primer of HP Solutions. Hewlett-Packard Professional Books: Prentice Hall PTR, 2001 (ISBN 0-13-089355-2)
   Managing Serviceguard Twelfth Edition (B3936-90100)


   Using High Availability Monitors (B5736-90046)
   Using the Event Monitoring Service (B7612-90015)
   Using Advanced Tape Services (B3936-90032)
   Designing Disaster Tolerant High Availability Clusters (B7660-90017)
   Managing Serviceguard Extension for SAP (T2803-90002)
   Managing Systems and Workgroups (5990-8172)
   Managing Serviceguard NFS (B5140-90017)
   HP Auto Port Aggregation Release Notes

Before attempting to use VxVM storage with Serviceguard, please refer to the following:

   VERITAS Volume Manager Administrator's Guide. This contains a glossary of VERITAS terminology.
   VERITAS Volume Manager Storage Administrator Administrator's Guide
   VERITAS Volume Manager Reference Guide
   VERITAS Volume Manager Migration Guide
   VERITAS Volume Manager for HP-UX Release Notes

If you will be using VERITAS CVM 4.1 or the VERITAS Cluster File System with Serviceguard, please refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes (T2771-90028). These release notes describe suite bundles for the integration of HP Serviceguard A.11.17 with Symantec's VERITAS Storage Foundation. Use the following URL to access HP's high availability web page: http://www.hp.com/go/ha

Use the following URL for access to a wide variety of HP-UX documentation: http://docs.hp.com/hpux

Problem Reporting

If you have any problems with the software or documentation, please contact your local Hewlett-Packard Sales Office or Customer Service Center.


Conventions

We use the following typographical conventions.

audit (5)       An HP-UX manpage. audit is the name and 5 is the section in the HP-UX Reference. On the web and on the Instant Information CD, it may be a hot link to the manpage itself. From the HP-UX command line, you can enter "man audit" or "man 5 audit" to view the manpage. See man (1).
Book Title      The title of a book. On the web and on the Instant Information CD, it may be a hot link to the book itself.
KeyCap          The name of a keyboard key. Note that Return and Enter both refer to the same key.
Emphasis        Text that is emphasized.
Emphasis        Text that is strongly emphasized.
Term            The defined use of an important word or phrase.
ComputerOut     Text displayed by the computer.
UserInput       Commands and other text that you type.
Command         A command name or qualified command phrase.
Variable        The name of a variable that you may replace in a command or function or information in a display that represents several possible values.
[ ]             The contents are optional in formats and command descriptions. If the contents are a list separated by |, you must choose one of the items.
{ }             The contents are required in formats and command descriptions. If the contents are a list separated by |, you must choose one of the items.
...             The preceding element may be repeated an arbitrary number of times.
|               Separates items in a list of choices.


Introduction to Serviceguard Extension for RAC



Serviceguard Extension for RAC (SGeRAC) enables the Oracle Real Application Cluster (RAC), formerly known as Oracle Parallel Server RDBMS, to run on HP high availability clusters under the HP-UX operating system. This chapter introduces Serviceguard Extension for RAC and shows where to find different kinds of information in this book. The following topics are presented:

   What is a Serviceguard Extension for RAC Cluster?
   Serviceguard Extension for RAC Architecture
   Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
   Overview of SGeRAC and Oracle 10g RAC
   Overview of SGeRAC and Oracle 9i RAC
   Node Failure
   Larger Clusters
   Extended Distance Cluster Using Serviceguard Extension for RAC



What is a Serviceguard Extension for RAC Cluster?


A high availability cluster is a grouping of HP servers having sufficient redundancy of software and hardware components that a single point of failure will not disrupt the availability of computer services. High availability clusters configured with Oracle Real Application Cluster software are known as RAC clusters. Figure 1-1 shows a very simple picture of the basic configuration of a RAC cluster on HP-UX.

Figure 1-1    Overview of Oracle RAC Configuration on HP-UX

In the figure, two loosely coupled systems (each one known as a node) are running separate instances of Oracle software that read data from and write data to a shared set of disks. Clients connect to one node or the other via LAN.


RAC on HP-UX lets you maintain a single database image that is accessed by the HP servers in parallel, thereby gaining added processing power without the need to administer separate databases. Further, when properly configured, Serviceguard Extension for RAC provides a highly available database that continues to operate even if one hardware component should fail.

Group Membership
Oracle RAC systems implement the concept of group membership, which allows multiple instances of RAC to run on each node. Related processes are configured into groups. Groups allow processes in different instances to choose which other processes to interact with. This allows the support of multiple databases within one RAC cluster. A Group Membership Service (GMS) component provides a process monitoring facility to monitor group membership status. GMS is provided by the cmgmsd daemon, which is an HP component installed with Serviceguard Extension for RAC. Figure 1-2 shows how group membership works. Nodes 1 through 4 of the cluster share the Sales database, but only Nodes 3 and 4 share the HR database. Consequently, there is one instance of RAC each on Node 1 and Node 2, and there are two instances of RAC each on Node 3 and Node 4. The RAC processes accessing the Sales database constitute one group, and the RAC processes accessing the HR database constitute another group.


Figure 1-2    Group Membership Services

Using Packages in a Cluster


In order to make other important applications highly available (in addition to the Oracle Real Application Cluster), you can configure your RAC cluster to use packages. Packages group applications and services together; in the event of a service, node, or network failure, Serviceguard Extension for RAC can automatically transfer control of all system resources in a designated package to another node within the cluster, allowing your applications to remain available with minimal interruption.

NOTE

In RAC clusters, you create packages to start and stop RAC itself as well as to run applications that access the database instances. For details on the use of packages with RAC, refer to the section Using Packages to Configure Startup and Shutdown of RAC Instances on page 156 in Chapter 3.



Serviceguard Extension for RAC Architecture


This chapter discusses the main software components used by Serviceguard Extension for RAC in some detail. The components are:

   Oracle Components
      Custom Oracle Database
   Serviceguard Extension for RAC Components
      Group Membership Services (RAC)
   Serviceguard Components
      Package Manager
      Cluster Manager
      Network Manager
   Operating System
      Volume Manager Software
      HP-UX Kernel

Group Membership Daemon


In addition to the Serviceguard daemon processes mentioned in Chapter 3 of the Managing Serviceguard user's guide, there is another daemon that is used by Oracle to enable communication with Serviceguard Extension for RAC: cmgmsd, the Group Membership Daemon for RAC 9i or later.

This HP daemon provides group membership services for Oracle Real Application Cluster 9i or later. Group membership allows multiple Oracle instances to run on the same cluster node. GMS is illustrated in Figure 1-2 on page 18.



Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
SGeRAC supports Cluster File System (CFS) through Serviceguard. For more detailed information on CFS support, refer to the Managing Serviceguard Twelfth Edition user's guide.

Package Dependencies
When CFS is used as shared storage, applications and software using the CFS storage should be configured to start and stop through Serviceguard packages. These application packages should be configured with a package dependency on the underlying multi-node packages, which manage the CFS and CVM storage resources. Starting and stopping the application through a Serviceguard package ensures that storage activation and deactivation are synchronized with application startup and shutdown; a sketch follows this paragraph. Similarly, with CVM configurations using multi-node packages, CVM shared storage should be configured in Serviceguard packages with package dependencies. Refer to the Managing Serviceguard Twelfth Edition user's guide for detailed information on multi-node packages.
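For example, an application package that uses a CFS mount point could declare a dependency on the corresponding mount-point multi-node package in its package ASCII configuration file. The excerpt below is a minimal sketch; the dependency name and the multi-node package name (SG-CFS-MP-1) are examples and must match the packages actually created in your cluster:

    DEPENDENCY_NAME       cfs_mount_dep        # example name
    DEPENDENCY_CONDITION  SG-CFS-MP-1 = UP     # example mount-point multi-node package
    DEPENDENCY_LOCATION   SAME_NODE

With this dependency, Serviceguard starts the application package on a node only after the CFS mount point is up on that node, and halts the application package before the mount point is taken down.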

Storage Configuration Options


Prior to CFS, the only way to provide shared storage for Oracle data files in a SGeRAC cluster was through raw volumes, using either SLVM or CVM; the application software was installed on a local file system. In addition to SLVM and CVM, SGeRAC supports CFS. CFS provides SGeRAC with additional options, such as improved manageability. When planning a RAC cluster, application software can be installed once and be visible to all cluster nodes. A central location is available to store runtime logs, for example, RAC alert logs.


Oracle RAC data files can be created on a CFS, allowing the database administrator or Oracle software to create additional data files without the need for root system administrator privileges. The archive area can now be on a CFS. Oracle instances on any cluster node can access the archive area when database recovery requires the archive logs.



Overview of SGeRAC and Oracle 10g RAC


Starting with Oracle 10g RAC, Oracle has bundled its own cluster software. The initial release is called Oracle Cluster Ready Service (CRS). CRS is used both as a generic term referring to the Oracle cluster software and as a specific term referring to a component within the Oracle cluster software. In subsequent releases, the generic CRS is renamed Oracle Clusterware; Oracle Clusterware is the generic term referring to the Oracle Cluster Software. The Oracle Cluster Software includes the following components: Cluster Synchronization Services (CSS), Cluster Ready Service (CRS), and Event Management (EVM). CSS manages the Oracle cluster membership and provides its own group membership service to RAC instances. When installed on a SGeRAC cluster, CSS utilizes the group membership service provided by SGeRAC. CRS manages Oracle's cluster resources based on configuration, including starting, stopping, monitoring, and failing over the resources. EVM publishes events generated by CRS and may run scripts when certain events occur. When installed on a SGeRAC cluster, both the Oracle cluster software and RAC can continue to rely on the shared storage capability, network monitoring, and other capabilities provided through Serviceguard and SGeRAC.

NOTE

In this document, the generic terms CRS and Oracle Clusterware will subsequently be referred to as Oracle Cluster Software. The term CRS will still be used when referring to a sub-component of Oracle Cluster Software.

For more detailed information on Oracle 10g RAC, refer to Chapter 2, Serviceguard Configuration for Oracle 10g RAC.



Overview of SGeRAC and Oracle 9i RAC


How Serviceguard Works with Oracle 9i RAC
Serviceguard provides the cluster framework for Oracle, a relational database product in which multiple database instances run on different cluster nodes. A central component of Real Application Clusters is the distributed lock manager (DLM), which provides parallel cache management for database instances. Each node in a RAC cluster starts an instance of the DLM process when the Oracle instance starts and the instances then communicate with each other over the network.

Group Membership
The group membership service (GMS) is the means by which Oracle instances communicate with the Serviceguard cluster software. GMS runs as a separate daemon process that communicates with the cluster manager. This daemon is an HP component known as cmgmsd. The cluster manager starts up, monitors, and shuts down the cmgmsd. When an Oracle instance starts, the instance registers itself with cmgmsd; thereafter, if an Oracle instance fails, cmgmsd notifies other members of the same group to perform recovery. If cmgmsd dies unexpectedly, Serviceguard will fail the node with a TOC (Transfer of Control).



Configuring Packages for Oracle RAC Instances


Oracle instances can be configured as packages with a single node in their node list.

NOTE

Packages that start and halt Oracle instances (called instance packages) do not fail over from one node to another; they are single-node packages. You should include only one NODE_NAME in the package ASCII configuration file, as in the excerpt following this note. The AUTO_RUN setting in the package configuration file determines whether the RAC instance starts up as the node joins the cluster. RAC and non-RAC packages may coexist in the same cluster.
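For example, the package ASCII configuration file for an instance package might contain entries like the following. This is only a sketch; the package name and node name are examples, and the control script that actually starts and halts the RAC instance is described in the chapters that follow:

    PACKAGE_NAME    rac_inst1        # example name
    NODE_NAME       ftsys9           # only one NODE_NAME for an instance package
    AUTO_RUN        YES              # start the instance when this node joins the cluster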



Configuring Packages for Oracle Listeners


Oracle listeners can be configured as packages within the cluster (called listener packages). Each node with a RAC instance can be configured with a listener package. Listener packages are configured to automatically fail over from the original node to an adoptive node. When the original node is restored, the listener package automatically fails back to the original node. In the listener package ASCII configuration file, the FAILBACK_POLICY is set to AUTOMATIC and the SUBNET is a set of monitored subnets; the package can be set to start up automatically with the AUTO_RUN setting (see the excerpt below). Each RAC instance can be configured to register with listeners that are assigned to handle client connections. The listener package control script is configured to add the package IP address and start the listener on the node. For example, on a two node cluster with one database, each node can have one RAC instance and one listener package. Oracle clients can be configured to connect to either package IP address (or corresponding hostname) using Oracle Net Services. When a node failure occurs, existing client connections to the package IP address are reset after the listener package fails over and adds the package IP address. For subsequent connections, clients configured with basic failover connect to the next available listener package's IP address and listener.
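The following excerpt is a minimal sketch of a listener package ASCII configuration file; the package name, node names, and subnet are examples only:

    PACKAGE_NAME      lsnr_pkg1          # example name
    NODE_NAME         ftsys9             # original node
    NODE_NAME         ftsys10            # adoptive node
    AUTO_RUN          YES
    FAILOVER_POLICY   CONFIGURED_NODE
    FAILBACK_POLICY   AUTOMATIC          # fail back when the original node returns
    SUBNET            15.13.168.0        # monitored subnet for the package IP address

The package control script would add the package (virtual) IP address and then start the listener, for example with lsnrctl start.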



Node Failure
RAC cluster configuration is designed so that in the event of a node failure, another node with a separate instance of Oracle can continue processing transactions. Figure 1-3 shows a typical cluster with instances running on both nodes.

Figure 1-3    Before Node Failure

Figure 1-4 shows the condition where Node 1 has failed and Package 1 has been transferred to Node 2. Oracle instance 1 is no longer operating, but it does not fail over to Node 2. Package 1's IP address was transferred to Node 2 along with the package. Package 1 continues to be available and is now running on Node 2. Also note that Node 2 can now access both Package 1's disk and Package 2's disk. Oracle instance 2 now handles all database access, since instance 1 has gone down.

Figure 1-4    After Node Failure

In the above figure, pkg1 and pkg2 are not instance packages. They are shown to illustrate the movement of packages in general.



Larger Clusters
Serviceguard Extension for RAC supports clusters of up to 16 nodes. The actual cluster size is limited by the type of storage and the type of volume manager used.

Up to Four Nodes with SCSI Storage


You can configure up to four nodes using a shared F/W SCSI bus; for more than 4 nodes, FibreChannel must be used. An example of a four-node RAC cluster appears in the following figure.


Figure 1-5    Four-Node RAC Cluster


In this type of configuration, each node runs a separate instance of RAC and may run one or more high availability packages as well. The figure shows a dual Ethernet configuration with all four nodes connected to a disk array (the details of the connections depend on the type of disk array). In addition, each node has a mirrored root disk (R and R'). Nodes may have multiple connections to the same array using alternate links (PV links) to take advantage of the array's use of RAID levels for data protection. Alternate links are further described in the section Creating RAC Volume Groups on Disk Arrays on page 116.
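For example, alternate links are configured in LVM by creating the volume group with the primary path to a LUN and then extending it with the alternate path to the same LUN. This is a sketch only; the volume group name and device file names are examples and depend on your hardware (it also assumes the volume group directory and group file have already been created with mkdir and mknod):

    # primary link to the LUN
    vgcreate /dev/vg_ops /dev/dsk/c3t0d0
    # alternate link (a second physical path to the same LUN)
    vgextend /dev/vg_ops /dev/dsk/c5t0d0

If the primary path fails, LVM switches I/O to the alternate link automatically.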

Point to Point Connections to Storage Devices


Some storage devices allow point-to-point connection to a large number of host nodes without using a shared SCSI bus. An example is shown in Figure 1-6, a cluster consisting of eight nodes with a FibreChannel interconnect. (Client connection is provided through Ethernet.) The nodes access shared data on an HP StorageWorks XP series or EMC disk array configured with 16 I/O ports. Each node is connected to the array using two separate Fibre channels configured with PV Links. Each channel is a dedicated bus; there is no daisy-chaining.


Figure 1-6    Eight-Node Cluster with XP or EMC Disk Array

FibreChannel switched configurations also are supported using either an arbitrated loop or fabric login topology. For additional information about supported cluster configurations, refer to the HP 9000 Servers Configuration Guide, available through your HP representative.



Extended Distance Cluster Using Serviceguard Extension for RAC


Basic Serviceguard clusters are usually configured in a single data center, often in a single room, to provide protection against failures in CPUs, interface cards, and software. Extended Serviceguard clusters are specialized cluster configurations, which allow a single cluster to extend across two or three separate data centers for increased disaster tolerance. Depending on the type of links employed, distances of up to 100 km between data centers can be achieved. Refer to Chapter 2 of the Designing Disaster Tolerant High Availability Clusters manual, which discusses several types of extended distance cluster that use basic Serviceguard technology with software mirroring (using MirrorDisk/UX or CVM) and Fibre Channel.


Serviceguard Configuration for Oracle 10g RAC



This chapter shows the additional planning and configuration that is needed to use Oracle Real Application Clusters 10g with Serviceguard. The following topics are presented:

   Interface Areas
   Oracle Cluster Software
   Planning Storage for Oracle Cluster Software
   Planning Storage for Oracle 10g RAC
   Installing Serviceguard Extension for RAC
   Installing Oracle Real Application Clusters
   Creating a Storage Infrastructure with CFS
   Creating a Storage Infrastructure with CVM
   Installing Oracle 10g Cluster Software



Interface Areas
This section documents interface areas where there is expected interaction between SGeRAC and Oracle 10g Cluster Software and RAC.

Group Membership API (NMAPI2)


The NMAPI2 client links with the SGeRAC provided NMAPI2 library for group membership service. The group membership is layered on top of the SGeRAC cluster membership where all the primary group members are processes within cluster nodes. Cluster membership has node names and group membership has process names. Upon a SGeRAC group membership change, SGeRAC delivers the new group membership to other members of the same group.

SGeRAC Detection
When Oracle 10g Cluster Software is installed on a SGeRAC cluster, Oracle Cluster Software detects the existence of SGeRAC and CSS uses SGeRAC group membership.

Cluster Timeouts
SGeRAC uses heartbeat timeouts to determine when any SGeRAC cluster member has failed or when any cluster member is unable to communicate with the other cluster members. CSS uses a similar mechanism for CSS memberships. Each RAC instance group membership also has a timeout mechanism, which triggers Instance Membership Recovery (IMR).

NOTE

HP and Oracle support SGeRAC to provide group membership to CSS.

Serviceguard Cluster Timeout

The Serviceguard cluster heartbeat timeout is set according to user requirements for availability. The Serviceguard cluster reconfiguration time is determined by the cluster timeout, the configuration, the reconfiguration algorithm, and activities during reconfiguration.

CSS Timeout

When SGeRAC is on the same cluster as Oracle Cluster Software, the CSS timeout is set to a default value of 600 seconds (10 minutes) at Oracle software installation. This timeout is configurable with Oracle tools and should not be changed without ensuring that the CSS timeout allows enough time for Serviceguard Extension for RAC (SGeRAC) reconfiguration and for multipath (if configured) reconfiguration to complete. On a single point of failure, for example a node failure, Serviceguard reconfigures first and SGeRAC delivers the new group membership to CSS via NMAPI2. If there is a change in group membership, SGeRAC delivers the updated membership to the group members. After receiving the new group membership, CSS in turn initiates its own recovery action as needed and propagates the new group membership to the RAC instances.

NOTE

As a general guideline, the CSS TIMEOUT should be the greater of 180 seconds or 25 times the Serviceguard NODE_TIMEOUT.
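For example (NODE_TIMEOUT is specified in microseconds in the cluster ASCII configuration file; the values here are expressed in seconds for readability): with a NODE_TIMEOUT of 8 seconds, 25 x 8 = 200 seconds, so the CSS TIMEOUT should be at least 200 seconds; with the default 2-second NODE_TIMEOUT, 25 x 2 = 50 seconds, so the 180-second floor applies.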

RAC IMR Timeout

RAC instance IMR timeout is configurable. RAC IMR expects group membership changes to occur within this time or IMR will begin evicting group members. The IMR timeout must be above the SGeRAC reconfiguration time and adhere to any Oracle-specified relation to CSS reconfiguration time.

Oracle Cluster Software


Oracle Cluster Software should be started after its required shared storage resources have been activated. Shared storage resources can be activated after SGeRAC completes startup. Oracle Cluster Software should not activate any shared storage. Similarly, Oracle Cluster Software must be halted first, before SGeRAC is halted at run level 3 and before its shared storage resources are deactivated.


Automated Oracle Cluster Software Startup and Shutdown

The preferred mechanism for Serviceguard to notify Oracle Cluster Software to start, and to request Oracle Cluster Software to shut down, is the use of Serviceguard packages.

Monitoring

Oracle Cluster Software daemon monitoring is performed through programs initiated by the HP-UX init process. SGeRAC monitors Oracle Cluster Software to the extent that CSS is a NMAPI2 group membership client and group member. SGeRAC provides group membership notification to the remaining group members when CSS enters and leaves the group membership.

Shared Storage
SGeRAC supports shared storage using HP Shared Logical Volume Manager (SLVM), VERITAS Cluster File System (CFS), and VERITAS Cluster Volume Manager (CVM). The file /var/opt/oracle/oravg.conf must not be present, so that Oracle Cluster Software will not activate or deactivate any shared storage.

Multipath

Multipath is supported through either SLVM PV links or CVM Dynamic Multipathing (DMP). In some configurations, SLVM or CVM does not need to be configured for multipath, because multipathing is provided by the storage array. Since Oracle Cluster Software checks availability of the shared device for the vote disk through periodic monitoring, the multipath detection and failover time must be less than CRS's timeout, which is specified by the Cluster Synchronization Service (CSS) MISSCOUNT. On SGeRAC configurations, the CSS MISSCOUNT value is set to 600 seconds. Multipath failover time is typically between 30 and 120 seconds.

OCR and Vote Device

Shared storage for the OCR and vote device should be on supported shared storage volume managers, with multipath configured and with either the correct multipath failover time or CSS timeout.


Mirroring and Resilvering

For node and cluster-wide failures, when SLVM mirroring is used and Oracle resilvering is available, the recommended logical volume mirror recovery policy is full mirror resynchronization (NOMWC) for the control and redo log files, and no mirror resynchronization (NONE) for the datafiles, since Oracle performs resilvering on the datafiles based on the redo log.

NOTE

If Oracle resilvering is not available, the mirror recovery policy should be set to full mirror resynchronization (NOMWC) for all control, redo, and datafiles.
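With HP-UX LVM, the mirror recovery policy corresponds to the Mirror Write Cache and mirror consistency recovery options set when the logical volume is created (or later with lvchange). The commands below are a sketch only, assuming MirrorDisk/UX is installed and a shared volume group named /dev/vg_ops; the logical volume names and sizes are examples taken from the sample worksheet later in this chapter:

    # NOMWC for a redo log: Mirror Write Cache off, mirror consistency recovery on
    lvcreate -m 1 -M n -c y -L 120 -n ops1log1.log /dev/vg_ops

    # NONE for a datafile: Mirror Write Cache off, mirror consistency recovery off
    lvcreate -m 1 -M n -c n -L 200 -n opsdata1.dbf /dev/vg_ops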

Shared Storage Activation

Depending on your version of Oracle Cluster Software, the default configuration for activation of the shared storage for Oracle Cluster Software may be controlled by the /var/opt/oracle/oravg.conf file. For the default configuration, where the shared storage is activated by SGeRAC before starting Oracle Cluster Software or the RAC instance, the oravg.conf file should not be present.

Listener
Automated Startup and Shutdown

CRS can be configured to automatically start, monitor, restart, and halt listeners. If CRS is not configured to start the listener automatically at Oracle Cluster Software startup, the listener startup can be automated with supported commands, such as srvctl and lsnrctl, through scripts or SGeRAC packages. If the SGeRAC package is configured to start the listener, the SGeRAC package would contain the virtual IP address required by the listener.

Manual Startup and Shutdown

Manual listener startup and shutdown is supported through the following commands: srvctl and lsnrctl.
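For example, the listener on a node can be started and stopped manually with either Oracle command set. These are sketches only; the node name and listener name are examples and depend on your Oracle Net configuration:

    # using srvctl, for the listener on node ftsys9
    srvctl start listener -n ftsys9
    srvctl stop listener -n ftsys9

    # or using lsnrctl with the listener name defined in listener.ora
    lsnrctl start LISTENER_FTSYS9
    lsnrctl stop LISTENER_FTSYS9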



Network Monitoring
The SGeRAC cluster provides network monitoring. For networks that are redundant and monitored by the Serviceguard cluster, Serviceguard provides local failover between local network interfaces (LANs) that is transparent to applications using the User Datagram Protocol (UDP) or Transmission Control Protocol (TCP). For virtual IP addresses (floating or package IP addresses), Serviceguard also provides remote failover of network connection endpoints between cluster nodes and transparent local failover of network connection endpoints between redundant local network interfaces.

NOTE

Serviceguard cannot be responsible for networks or connection endpoints that it is not configured to monitor.

SGeRAC Heartbeat Network

Serviceguard supports multiple heartbeat networks, private or public. The Serviceguard heartbeat network can be configured as a single network connection with a redundant LAN, or as multiple connections with multiple LANs (single or redundant).

CSS Heartbeat Network

The CSS IP addresses for peer communications are fixed IP addresses. CSS heartbeats are carried on a single network connection; CSS does not support multiple heartbeat networks. To protect against a network single point of failure, the CSS heartbeat network should be configured with redundant physical networks under SGeRAC monitoring. Since SGeRAC does not support heartbeat over Hyperfabric (HF) networks, the preferred configuration is for CSS and Serviceguard to share the same cluster interconnect.

RAC Cluster Interconnect

Each set of RAC instances maintains peer communications on a single connection and may not support multiple connections on HP-UX with SGeRAC. To protect against a network single point of failure, the RAC cluster interconnect should be configured with redundant networks under Serviceguard monitoring, and Serviceguard should be configured to take action (either a local failover or an instance package shutdown, or both) if the RAC cluster interconnect fails. Serviceguard does not monitor Hyperfabric networks directly (integration of Serviceguard and the HF/EMS monitor is supported).

Public Client Access

When the client connection endpoint (virtual or floating IP address) is configured using Serviceguard packages, Serviceguard provides monitoring, local failover, and remote failover capabilities. When Serviceguard packages are not used, Serviceguard does not provide monitoring or failover support.



RAC Instances
Automated Startup and Shutdown
CRS can be configured to automatically start, monitor, restart, and halt RAC instances. If CRS is not configured to automatically start the RAC instance at Oracle Cluster Software startup, RAC instance startup and shutdown can be automated through scripts in a SGeRAC package that use supported commands, such as srvctl or sqlplus.

NOTE

srvctl and sqlplus are Oracle commands.

Manual Startup and Shutdown


Manual RAC instance startup and shutdown is supported through the following commands: srvctl or sqlplus
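For example, assuming a database named ops with an instance named ops1 (these names are examples), an instance could be started and stopped manually as follows:

    # using srvctl
    srvctl start instance -d ops -i ops1
    srvctl stop instance -d ops -i ops1

    # or using sqlplus on the node that runs the instance
    sqlplus /nolog
    SQL> connect / as sysdba
    SQL> startup
    SQL> shutdown immediate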

Shared Storage
The shared storage must be available when the RAC instance is started, so ensure that it is activated beforehand, as sketched below. For SLVM, the shared volume groups must be activated; for CVM, the disk groups must be activated; for CFS, the cluster file systems must be mounted.
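The following commands are a sketch of how shared storage might be activated on a node before instance startup; the volume group, disk group, and mount point names are examples, and in practice activation is usually handled by the Serviceguard package or multi-node package that manages the storage:

    # SLVM: activate the shared volume group in shared mode
    vgchange -a s /dev/vg_ops

    # CVM: set shared-write activation for the disk group
    vxdg -g ops_dg set activation=sw

    # CFS: mount the cluster file system (managed by the CFS multi-node packages)
    cfsmount /cfs/oracle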



Planning Storage for Oracle Cluster Software


Oracle Cluster Software requires shared storage for the Oracle Cluster Registry (OCR) and a vote device. Automatic Storage Management cannot be used for the OCR and vote device, since these files must be accessible before Oracle Cluster Software starts. The minimum required size is 100 MB for the OCR and 20 MB for the vote disk. The Oracle OCR and vote device can be created on supported shared storage, including SLVM logical volumes, CVM raw volumes, and CFS file systems.
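For example, with SLVM the OCR and vote device can be provided as two raw logical volumes in a shared volume group. This is a sketch only, assuming a shared volume group named /dev/vg_ops and using the minimum sizes given above; Oracle is then pointed at the character (raw) device files:

    lvcreate -L 100 -n ora_ocr /dev/vg_ops      # OCR, 100 MB
    lvcreate -L 20 -n ora_vote /dev/vg_ops      # vote disk, 20 MB

    # corresponding raw device files:
    #   /dev/vg_ops/rora_ocr
    #   /dev/vg_ops/rora_vote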



Planning Storage for Oracle 10g RAC


Volume Planning with SLVM
Storage capacity for the Oracle database must be provided in the form of logical volumes located in shared volume groups. The Oracle software requires at least two log files for each Oracle instance, several Oracle control files, and data files for the database itself. For all these files, Serviceguard Extension for RAC uses HP-UX raw logical volumes, which are located in volume groups that are shared between the nodes in the cluster. High availability is achieved by using high availability disk arrays in RAID modes. The logical units of storage on the arrays are accessed from each node through multiple physical volume links (PV links, also known as alternate links), which provide redundant paths to each unit of storage.

Fill out a Logical Volume worksheet to provide logical volume names for the logical volumes that you will create with the lvcreate command. The Oracle DBA and the HP-UX system administrator should prepare this worksheet together. Create entries for shared volumes only. For each logical volume, enter the full pathname of the raw logical volume device file, and be sure to include the desired size in MB. A sample filled-out worksheet follows; the sample is only representative, and file sizes differ for different versions of the Oracle database. Refer to Appendix B, Blank Planning Worksheets, for blank worksheets. Make as many copies as you need, fill out the worksheet, and keep it for future reference.
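For example, a control file volume and a redo log volume from the sample worksheet could be created in the shared volume group with lvcreate. This is a sketch only; the volume group name, logical volume names, and sizes are taken from the sample worksheet rather than being required values:

    lvcreate -L 110 -n opsctl1.ctl /dev/vg_ops      # control file, raw device /dev/vg_ops/ropsctl1.ctl
    lvcreate -L 120 -n ops1log1.log /dev/vg_ops     # instance 1 redo log, raw device /dev/vg_ops/rops1log1.log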

Storage Planning with CFS


With CFS, the database software, database files (control, redo, data files), and archive logs may reside on a cluster file system visible by all nodes. Also, the OCR and vote device can reside on CFS directories.

ORACLE LOGICAL VOLUME WORKSHEET FOR LVM                      Page ___ of ____
===============================================================================
                          RAW LOGICAL VOLUME NAME            SIZE (MB)
Oracle Cluster Registry:  /dev/vg_ops/rora_ocr                 100   (once per cluster)
Oracle Cluster Vote Disk: /dev/vg_ops/rora_vote                 20   (once per cluster)
Oracle Control File:      /dev/vg_ops/ropsctl1.ctl             110
Oracle Control File 2:    /dev/vg_ops/ropsctl2.ctl             110
Oracle Control File 3:    /dev/vg_ops/ropsctl3.ctl             110
Instance 1 Redo Log 1:    /dev/vg_ops/rops1log1.log            120
Instance 1 Redo Log 2:    /dev/vg_ops/rops1log2.log            120
Instance 1 Redo Log 3:    /dev/vg_ops/rops1log3.log            120
Instance 1 Redo Log:      __________________________________________________
Instance 1 Redo Log:      __________________________________________________
Instance 2 Redo Log 1:    /dev/vg_ops/rops2log1.log            120
Instance 2 Redo Log 2:    /dev/vg_ops/rops2log2.log            120
Instance 2 Redo Log 3:    /dev/vg_ops/rops2log3.log            120
Instance 2 Redo Log:      __________________________________________________
Instance 2 Redo Log:      __________________________________________________
Data: System              /dev/vg_ops/ropssystem.dbf           500
Data: Sysaux              /dev/vg_ops/ropssysaux.dbf           800
Data: Temp                /dev/vg_ops/ropstemp.dbf             250
Data: Users               /dev/vg_ops/ropsusers.dbf            120
Data: User data           /dev/vg_ops/ropsdata1.dbf            200
Data: User data           /dev/vg_ops/ropsdata2.dbf            200
Data: User data           /dev/vg_ops/ropsdata3.dbf            200
Parameter: spfile1        /dev/vg_ops/ropsspfile1.ora            5
Password:                 /dev/vg_ops/rpwdfile.ora               5
Instance 1 undotbs1:      /dev/vg_ops/ropsundotbs1.dbf         500
Instance 2 undotbs2:      /dev/vg_ops/ropsundotbs2.dbf         500
Data: example1            /dev/vg_ops/ropsexample1.dbf         160


Volume Planning with CVM


Storage capacity for the Oracle database must be provided in the form of volumes located in shared disk groups. The Oracle software requires at least two redo log files for each Oracle instance, several Oracle control files, and data files for the database itself. For all these files, Serviceguard Extension for RAC uses HP-UX raw volumes, which are located in disk groups that are shared between the nodes in the cluster.

High availability is achieved by using high availability disk arrays in RAID modes. The logical units of storage on the arrays are accessed from each node through multiple physical volume links via DMP (Dynamic Multi-pathing), which provides redundant paths to each unit of storage.

Fill out the VERITAS Volume worksheet to provide volume names for the volumes that you will create using the VERITAS utilities. The Oracle DBA and the HP-UX system administrator should prepare this worksheet together. Create entries for shared volumes only. For each volume, enter the full pathname of the raw volume device file, and be sure to include the desired size in MB. Following is a sample worksheet filled out. Refer to Appendix B, Blank Planning Worksheets, for samples of blank worksheets. Make as many copies as you need. Fill out the worksheet and keep it for future reference.

ORACLE LOGICAL VOLUME WORKSHEET FOR CVM                      Page ___ of ____
===============================================================================
                          RAW VOLUME NAME                             SIZE (MB)
Oracle Cluster Registry:  /dev/vx/rdsk/ops_dg/ora_ocr                  100   (once per cluster)
Oracle Cluster Vote Disk: /dev/vx/rdsk/ops_dg/ora_vote                  20   (once per cluster)
Oracle Control File:      /dev/vx/rdsk/ops_dg/opsctl1.ctl              110
Oracle Control File 2:    /dev/vx/rdsk/ops_dg/opsctl2.ctl              110
Oracle Control File 3:    /dev/vx/rdsk/ops_dg/opsctl3.ctl              110
Instance 1 Redo Log 1:    /dev/vx/rdsk/ops_dg/ops1log1.log             120
Instance 1 Redo Log 2:    /dev/vx/rdsk/ops_dg/ops1log2.log             120
Instance 1 Redo Log 3:    /dev/vx/rdsk/ops_dg/ops1log3.log             120
Instance 1 Redo Log:      __________________________________________________
Instance 1 Redo Log:      __________________________________________________
Instance 2 Redo Log 1:    /dev/vx/rdsk/ops_dg/ops2log1.log             120
Instance 2 Redo Log 2:    /dev/vx/rdsk/ops_dg/ops2log2.log             120
Instance 2 Redo Log 3:    /dev/vx/rdsk/ops_dg/ops2log3.log             120
Instance 2 Redo Log:      __________________________________________________
Instance 2 Redo Log:      __________________________________________________
Data: System              /dev/vx/rdsk/ops_dg/opssystem.dbf            500
Data: Sysaux              /dev/vx/rdsk/ops_dg/opssysaux.dbf            800
Data: Temp                /dev/vx/rdsk/ops_dg/opstemp.dbf              250
Data: Users               /dev/vx/rdsk/ops_dg/opsusers.dbf             120
Data: User data           /dev/vx/rdsk/ops_dg/opsdata1.dbf             200
Data: User data           /dev/vx/rdsk/ops_dg/opsdata2.dbf             200
Data: User data           /dev/vx/rdsk/ops_dg/opsdata3.dbf             200
Parameter: spfile1        /dev/vx/rdsk/ops_dg/opsspfile1.ora             5
Password:                 /dev/vx/rdsk/ops_dg/pwdfile.ora                5
Instance 1 undotbs1:      /dev/vx/rdsk/ops_dg/opsundotbs1.dbf          500
Instance 2 undotbs2:      /dev/vx/rdsk/ops_dg/opsundotbs2.dbf          500
Data: example1            /dev/vx/rdsk/ops_dg/opsexample1.dbf          160


Installing Serviceguard Extension for RAC


Installing Serviceguard Extension for RAC includes updating the software and rebuilding the kernel to support high availability cluster operation for Oracle Real Application Clusters. Prior to installing Serviceguard Extension for RAC, the following must be installed:

   Correct version of HP-UX
   Correct version of Serviceguard

To install Serviceguard Extension for RAC, use the following steps for each node:

NOTE

For up-to-date version compatibility information for Serviceguard and HP-UX, see the SGeRAC release notes.

1. Mount the distribution media in the tape drive, CD, or DVD reader.
2. Run Software Distributor, using the swinstall command.
3. Specify the correct input device.
4. Choose the following bundle from the displayed list:
Serviceguard Extension for RAC

5. After choosing the bundle, select OK to install the software.
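Alternatively, the installation can be run from the command line. The depot path and bundle name below are placeholders only; substitute the values for your distribution media and SGeRAC version:

# swinstall -s <depot_path> <SGeRAC_bundle_name>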


Configuration File Parameters


You need to code specific entries for all the storage groups that you want to use in an Oracle RAC configuration. If you are using LVM, the OPS_VOLUME_GROUP parameter is included in the cluster ASCII file. If you are using VERITAS CVM, the STORAGE_GROUP parameter is included in the package ASCII file. Details are as follows:

OPS_VOLUME_GROUP   The name of an LVM volume group whose disks are attached to at least two nodes in the cluster; the disks will be accessed by more than one node at a time using SLVM with concurrency control provided by Oracle RAC. Such disks are considered cluster aware. Volume groups listed under this parameter are marked for activation in shared mode. The entry can contain up to 40 characters.

STORAGE_GROUP      This parameter is used for CVM disk groups. Enter the names of all the CVM disk groups the package will use. In the ASCII package configuration file, this parameter is called STORAGE_GROUP. Unlike LVM volume groups, CVM disk groups are not entered in the cluster configuration file; they are entered in the package configuration file.

NOTE

CVM 4.x with CFS does not use the STORAGE_GROUP parameter because the disk group activation is performed by the multi-node package. CVM 3.x or 4.x without CFS uses the STORAGE_GROUP parameter in the ASCII package configuration file in order to activate the disk group. Do not enter the names of LVM volume groups or VxVM disk groups in the package ASCII configuration file.
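For example (the volume group and disk group names are illustrative), the entries would look like this.

In the cluster configuration ASCII file (SLVM):

OPS_VOLUME_GROUP        /dev/vg_ops

In the package configuration ASCII file (CVM 3.x, or CVM 4.x without CFS):

STORAGE_GROUP           ops_dg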


Creating a Storage Infrastructure with LVM


In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Cluster Volume Manager (CVM), or VERITAS Volume Manager (VxVM). LVM and VxVM configuration are done before cluster configuration, and CVM configuration is done after cluster configuration.

This section describes how to create LVM volume groups for use with Oracle data. Separate procedures are given for the following:

   Building Volume Groups for RAC on Mirrored Disks
   Building Mirrored Logical Volumes for RAC with LVM Commands
   Creating RAC Volume Groups on Disk Arrays
   Creating Logical Volumes for RAC on Disk Arrays

The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure them in physical volume groups. For more information, refer to the manual Using HA Monitors.

Building Volume Groups for RAC on Mirrored Disks


The procedure described in this section uses physical volume groups for mirroring of individual disks to ensure that each logical volume is mirrored to a disk on a different I/O bus. This kind of arrangement is known as PVG-strict mirroring. It is assumed that your disk hardware is already configured in such a way that a disk to be used as a mirror copy is connected to each node on a different bus than the bus that is used for the other (primary) copy. For more information on using LVM, refer to the HP-UX Managing Systems and Workgroups manual.


Creating Volume Groups and Logical Volumes

If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section Installing Oracle Real Application Clusters.

Selecting Disks for the Volume Group

Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both. Use the following command on each node to list available disks as they are known to each system:
# lssf /dev/dsk/*

In the following examples, we use /dev/rdsk/c1t2d0 and /dev/rdsk/c0t2d0, which happen to be the device names for the same disks on both ftsys9 and ftsys10. In the event that the device file names are different on the different nodes, make a careful note of the correspondences.

Creating Physical Volumes

On the configuration node (ftsys9), use the pvcreate command to define disks as physical volumes. This only needs to be done on the configuration node. Use the following commands to create two physical volumes for the sample configuration:
# pvcreate -f /dev/rdsk/c1t2d0
# pvcreate -f /dev/rdsk/c0t2d0

Creating a Volume Group with PVG-Strict Mirroring

Use the following steps to build a volume group on the configuration node (ftsys9). Later, the same volume group will be created on other nodes.

1. First, set up the group directory for vg_ops:

# mkdir /dev/vg_ops

2. Next, create a control file named group in the directory /dev/vg_ops, as follows:

# mknod /dev/vg_ops/group c 64 0xhh0000

The major number is always 64, and the hexadecimal minor number has the form
0xhh0000


where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:

# ls -l /dev/*/group

3. Create the volume group and add physical volumes to it with the following commands:

# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c0t2d0

The first command creates the volume group and adds a physical volume to it in a physical volume group called bus0. The second command adds the second drive to the volume group, locating it in a different physical volume group named bus1. The use of physical volume groups allows the use of PVG-strict mirroring of disks and PV links.

4. Repeat this procedure for additional volume groups.

Building Mirrored Logical Volumes for RAC with LVM Commands


After you create volume groups and define physical volumes for use in them, you define mirrored logical volumes for data, logs, and control files. It is recommended that you use a shell script to issue the commands described in the next sections. The commands you use for creating logical volumes vary slightly depending on whether you are creating logical volumes for RAC redo log files or for use with Oracle data.

Creating Mirrored Logical Volumes for RAC Redo Logs and Control Files

Create logical volumes for use as redo log and control files by selecting mirror consistency recovery. Use the same options as in the following example:

# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y means that mirror consistency recovery is enabled; the -s g means that mirroring is


PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.

NOTE

It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds. If these options are not set correctly, you may not be able to continue with database recovery.

If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/redo1.log has been successfully created with
character device /dev/vg_ops/rredo1.log
Logical volume /dev/vg_ops/redo1.log has been successfully extended

Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the RAC database.

Creating Mirrored Logical Volumes for RAC Data Files

Following a system crash, the mirrored logical volumes need to be resynchronized, which is known as resilvering. If Oracle does not perform resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NOMWC. This is done by disabling mirror write caching and enabling mirror consistency recovery. With NOMWC, SLVM performs the resynchronization. Create logical volumes for use as Oracle data files by using the same options as in the following example:
# lvcreate -m 1 -M n -c y -s g -n system.dbf -L 408 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y means that mirror consistency recovery is enabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes.


If Oracle performs resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NONE by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of NONE, SLVM does not perform the resynchronization.

NOTE

Contact Oracle to determine if your version of Oracle RAC allows resilvering and to appropriately configure the mirror consistency recovery policy for your logical volumes.

Create logical volumes for use as Oracle data files by using the same options as in the following example:

# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c n means that mirror consistency recovery is disabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes. If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/system.dbf has been successfully created with
character device /dev/vg_ops/rsystem.dbf
Logical volume /dev/vg_ops/system.dbf has been successfully extended

Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the RAC database.

Creating RAC Volume Groups on Disk Arrays


The procedure described in this section assumes that you are using RAID-protected disk arrays and LVM's physical volume links (PV links) to define redundant data paths from each node in the cluster to every logical unit on the array.


On your disk arrays, you should use redundant I/O channels from each node, connecting them to separate controllers on the array. Then you can define alternate links to the LUNs or logical disks you have defined on the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line. The following example shows how to configure alternate links using LVM commands. The following disk configuration is assumed:
8/0.15.0  /dev/dsk/c0t15d0  /* I/O Channel 0 (8/0)  SCSI address 15  LUN 0 */
8/0.15.1  /dev/dsk/c0t15d1  /* I/O Channel 0 (8/0)  SCSI address 15  LUN 1 */
8/0.15.2  /dev/dsk/c0t15d2  /* I/O Channel 0 (8/0)  SCSI address 15  LUN 2 */
8/0.15.3  /dev/dsk/c0t15d3  /* I/O Channel 0 (8/0)  SCSI address 15  LUN 3 */
8/0.15.4  /dev/dsk/c0t15d4  /* I/O Channel 0 (8/0)  SCSI address 15  LUN 4 */
8/0.15.5  /dev/dsk/c0t15d5  /* I/O Channel 0 (8/0)  SCSI address 15  LUN 5 */
10/0.3.0  /dev/dsk/c1t3d0   /* I/O Channel 1 (10/0) SCSI address 3   LUN 0 */
10/0.3.1  /dev/dsk/c1t3d1   /* I/O Channel 1 (10/0) SCSI address 3   LUN 1 */
10/0.3.2  /dev/dsk/c1t3d2   /* I/O Channel 1 (10/0) SCSI address 3   LUN 2 */
10/0.3.3  /dev/dsk/c1t3d3   /* I/O Channel 1 (10/0) SCSI address 3   LUN 3 */
10/0.3.4  /dev/dsk/c1t3d4   /* I/O Channel 1 (10/0) SCSI address 3   LUN 4 */
10/0.3.5  /dev/dsk/c1t3d5   /* I/O Channel 1 (10/0) SCSI address 3   LUN 5 */

Assume that the disk array has been configured, and that both the following device files appear for the same LUN (logical disk) when you run the ioscan command:
/dev/dsk/c0t15d0 /dev/dsk/c1t3d0

Use the following procedure to configure a volume group for this logical disk: 1. First, set up the group directory for vg_ops:
# mkdir /dev/vg_ops

2. Next, create a control file named group in the directory /dev/vg_ops, as follows:
# mknod /dev/vg_ops/group c 64 0xhh0000

The major number is always 64, and the hexadecimal minor number has the form
0xhh0000


where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group

3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0

It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group. 4. Use the following to create the volume group with the two links:
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0 /dev/dsk/c1t3d0

LVM will now recognize the I/O channel represented by /dev/dsk/c0t15d0 as the primary link to the disk; if the primary link fails, LVM will automatically switch to the alternate I/O channel represented by /dev/dsk/c1t3d0. Use the vgextend command to add additional disks to the volume group, specifying the appropriate physical volume name for each PV link. Repeat the entire procedure for each distinct volume group you wish to create. For ease of system administration, you may wish to use different volume groups to separate logs from data and control files.
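For example, a second LUN presented through the device files /dev/dsk/c0t15d1 and /dev/dsk/c1t3d1 (device names taken from the sample configuration above) could be added with its PV link as follows:

# pvcreate /dev/rdsk/c0t15d1
# vgextend /dev/vg_ops /dev/dsk/c0t15d1 /dev/dsk/c1t3d1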

NOTE

The default maximum number of volume groups in HP-UX is 10. If you intend to create enough new volume groups that the total exceeds ten, you must increase the maxvgs system parameter and then re-build the HP-UX kernel. Use SAM and select the Kernel Configuration area, then choose Configurable Parameters. Maxvgs appears on the list.

Creating Logical Volumes for RAC on Disk Arrays


After you create volume groups and add PV links to them, you define logical volumes for data, logs, and control files. The following are some examples:

# lvcreate -n ops1log1.log -L 4 /dev/vg_ops
# lvcreate -n opsctl1.ctl -L 4 /dev/vg_ops
# lvcreate -n system.dbf -L 28 /dev/vg_ops
# lvcreate -n opsdata1.dbf -L 1000 /dev/vg_ops

Oracle Demo Database Files


The following set of files is required for the Oracle demo database which you can create during the installation process.

Table 2-1  Required Oracle File Names for Demo Database

Logical Volume Name   Oracle File Size (MB)*   LV Size (MB)   Raw Logical Volume Path Name
opsctl1.ctl           110                      118            /dev/vg_ops/ropsctl1.ctl
opsctl2.ctl           110                      118            /dev/vg_ops/ropsctl2.ctl
opsctl3.ctl           110                      118            /dev/vg_ops/ropsctl3.ctl
ops1log1.log          120                      128            /dev/vg_ops/rops1log1.log
ops1log2.log          120                      128            /dev/vg_ops/rops1log2.log
ops1log3.log          120                      128            /dev/vg_ops/rops1log3.log
ops2log1.log          120                      128            /dev/vg_ops/rops2log1.log
ops2log2.log          120                      128            /dev/vg_ops/rops2log2.log
ops2log3.log          120                      128            /dev/vg_ops/rops2log3.log
opssystem.dbf         400                      408            /dev/vg_ops/ropssystem.dbf
opssysaux.dbf         800                      808            /dev/vg_ops/ropssysaux.dbf
opstemp.dbf           250                      258            /dev/vg_ops/ropstemp.dbf
opsusers.dbf          120                      128            /dev/vg_ops/ropsusers.dbf
opsdata1.dbf          200                      208            /dev/vg_ops/ropsdata1.dbf
opsdata2.dbf          200                      208            /dev/vg_ops/ropsdata2.dbf
opsdata3.dbf          200                      208            /dev/vg_ops/ropsdata3.dbf
opsspfile1.ora        5                        5              /dev/vg_ops/ropsspfile1.ora
pwdfile.ora           5                        5              /dev/vg_ops/rpwdfile.ora
opsundotbs1.dbf       500                      508            /dev/vg_ops/ropsundotbs1.dbf
opsundotbs2.dbf       500                      508            /dev/vg_ops/ropsundotbs2.dbf
example1.dbf          160                      168            /dev/vg_ops/ropsexample1.dbf

The size of the logical volume is larger than the Oracle file size because Oracle needs extra space to allocate a header in addition to the file's actual data capacity. Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vg_ops/ropsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl

After creating these files, set the owner to oracle and the group to dba with a file mode of 660. The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
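For example (a sketch; adjust the file list to match the logical volumes you actually created), the ownership and permissions on the raw device files could be set as follows:

# cd /dev/vg_ops
# chown oracle:dba r*
# chmod 660 r*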


Displaying the Logical Volume Infrastructure


To display the volume group, use the vgdisplay command:
# vgdisplay -v /dev/vg_ops

Exporting the Logical Volume Infrastructure


Before the Oracle volume groups can be shared, their configuration data must be exported to other nodes in the cluster. This is done either in Serviceguard Manager or by using HP-UX commands, as shown in the following sections.

Exporting with LVM Commands

Use the following commands to set up the same volume group on another cluster node. In this example, the commands set up a new volume group on a system known as ftsys10. This volume group holds the same physical volume that was created on a configuration node known as ftsys9. To set up the volume group on ftsys10 (and other nodes), use the following steps:

1. On ftsys9, copy the mapping of the volume group to a specified file.
# vgexport -s -p -m /tmp/vg_ops.map /dev/vg_ops

2. Still on ftsys9, copy the map file to ftsys10 (and to additional nodes as necessary.)
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map

3. On ftsys10 (and other nodes, as necessary), create the volume group directory and the control file named group:
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0xhh0000

For the group file, the major number is always 64, and the hexadecimal minor number has the form
0xhh0000


where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group

4. Import the volume group data using the map file from node ftsys9. On node ftsys10 (and other nodes, as necessary), enter:
# vgimport -s -m /tmp/vg_ops.map /dev/vg_ops


Installing Oracle Real Application Clusters


NOTE Some versions of Oracle RAC require installation of additional software. Refer to the documentation for your version of Oracle for specific requirements.

Before installing the Oracle Real Application Clusters software, make sure the cluster is running. Log in as the oracle user on one node, and then use the Oracle installer to install the Oracle software and build the correct Oracle runtime executables. When executables are installed to a local file system on each node, the Oracle installer copies the executables to the other nodes in the cluster. For details on Oracle installation, refer to the Oracle installation documentation.

As part of this installation, the Oracle installer installs the executables and, optionally, can build an Oracle demo database on the primary node. The demo database files can be the character (raw) device file names for the logical volumes created earlier. For a demo database on SLVM or CVM, create logical volumes as shown in Table 2-1, Required Oracle File Names for Demo Database. As the installer prompts for the database file names, enter the pathnames of the raw logical volumes instead of using the defaults. If you do not wish to install the demo database, select the option to install the software only.


Cluster Configuration ASCII File


The following is an example of an ASCII configuration file generated with the cmquerycl command using the -w full option on a system with Serviceguard Extension for RAC. The OPS_VOLUME_GROUP parameters appear at the end of the file.
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME            cluster 1

# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations in which a
# running cluster fails, and then two equal-sized sub-clusters are both
# trying to form a new cluster. The cluster lock may be configured using
# only one of the following alternatives on a cluster:
#          the LVM lock disk
#          the quorum server
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock.
# For a cluster of three or four nodes, a cluster lock is strongly recommended.
# For a cluster of more than four nodes, a cluster lock is recommended.
# If you decide to configure a lock for a cluster of more than four nodes,
# it must be a quorum server.
#
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that holds the
# cluster lock. This volume group should not be used by any other
# cluster as a cluster lock device.

# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,

# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system that is
# running the quorum server process. The QS_POLLING_INTERVAL
# (microseconds) is the interval at which Serviceguard checks to make
# sure the quorum server is running. The optional QS_TIMEOUT_EXTENSION
# (microseconds) is used to increase the time interval after which the
# quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the Serviceguard
# cluster parameters, including NODE_TIMEOUT and HEARTBEAT_INTERVAL.
# If you are experiencing quorum server timeouts, you can adjust these
# parameters, or you can include the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly affect the amount of
# time it takes for cluster reformation in the event of failure. For
# example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the
# QS_TIMEOUT_EXTENSION was set to 0. This delay applies even if there is
# no delay in contacting the Quorum Server. The recommended value for
# QS_TIMEOUT_EXTENSION is 0, which is used as the default, and the
# maximum supported value is 30000000 (5 minutes).
#
# For example, to configure a quorum server running on node qshost with
# 120 seconds for the QS_POLLING_INTERVAL and to add 2 seconds to the
# system assigned value for the quorum server timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000

# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address, must have ONLY
# one IPv4 address entry with it which could be either HEARTBEAT_IP or
# STATIONARY_IP. Each NETWORK_INTERFACE, if configured with IPv6
# address(es) can have multiple IPv6 address entries (up to a maximum of
# 2, only one IPv6 address entry belonging to site-local scope and only
# one belonging to global scope) which must be all STATIONARY_IP.
# They cannot be HEARTBEAT_IP.


NODE_NAME ever3a
  NETWORK_INTERFACE lan0
    STATIONARY_IP 15.244.64.140
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.77.1.1
  NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0

# Primary Network Interfaces on Bridged Net 1: lan0.
#   Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
#   Possible standby Network Interfaces on Bridged Net 2: lan2.

# Cluster Timing Parameters (microseconds).

# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).

HEARTBEAT_INTERVAL              1000000
NODE_TIMEOUT                    2000000

# Configuration/Reconfiguration Timing Parameters (microseconds).

AUTO_START_TIMEOUT              600000000
NETWORK_POLLING_INTERVAL        2000000

# Network Monitor Configuration Parameters. # The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected. # If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound # message count stops increasing or when both inbound and outbound # message counts stop increasing. # If set to INOUT, both the inbound and outbound message counts must

# stop increasing before the card is considered down.

NETWORK_FAILURE_DETECTION       INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.

MAX_CONFIGURED_PACKAGES         150

# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of 8 login names
#    from the /etc/passwd file on user host.
# 2. USER_HOST is where the user can issue Serviceguard commands.
#    If using Serviceguard Manager, it is the COM server.
#    Choose one of these three values: ANY_SERVICEGUARD_NODE, or
#    (any) CLUSTER_MEMBER_NODE, or a specific node. For node, use the
#    official hostname from domain name server, and not an IP address
#    or fully qualified name.
# 3. USER_ROLE must be one of these three values:
#    * MONITOR: read-only capabilities for the cluster and packages
#    * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
#      in the cluster
#    * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
#      commands for the cluster.
#
# Access control policy does not set a role for configuration capability.
# To configure, a user must log on to one of the cluster's nodes as root
# (UID=0). Access control policy cannot limit root user's access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration
# file, and they apply to the entire cluster. PACKAGE_ADMIN can be set
# in the cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.

# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN

# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02

# List of OPS Volume Groups.
# Formerly known as DLM Volume Groups, these volume groups
# will be used by OPS or RAC cluster applications via
# the vgchange -a s command. (Note: the name DLM_VOLUME_GROUP
# is also still supported for compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP /dev/vgdatabase
# OPS_VOLUME_GROUP /dev/vg02


Creating a Storage Infrastructure with CFS


Creating a SGeRAC Cluster with CFS 4.1 for Oracle 10g
With CFS, the database software, database files (control, redo, and data files), and archive logs may reside on a cluster file system that is visible to all nodes. The following software needs to be installed in order to use this configuration:

   SGeRAC
   CFS

CFS and SGeRAC are available in selected HP Serviceguard Storage Management Suite bundles. Refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes. In the example below, both the Oracle RAC software and datafiles reside on CFS. There is a single Oracle home. Three CFS file systems are created for Oracle home, Oracle datafiles, and for the Oracle Cluster Registry (OCR) and vote device. The Oracle Cluster Software home is on a local file system.
/cfs/mnt1 - for Oracle Base and Home
/cfs/mnt2 - for Oracle datafiles
/cfs/mnt3 - for OCR and Vote device

Initializing the VERITAS Volume Manager


Use the following steps to create a two node SGeRAC cluster with CFS and Oracle.

1. Initialize the VERITAS Volume Manager

If not already done, install the VxVM license key on all nodes. Use the following command:

# vxinstall


NOTE

CVM 4.1 does not require a rootdg.

2. Create the Cluster ASCII file:

# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b

Edit Cluster file

3. Create the Cluster

# cmapplyconf -C clm.asc

4. Start the Cluster

# cmruncl
# cmviewcl

The following output will be displayed:
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS       STATE
  ever3a          up           running
  ever3b          up           running

5. Configure the Cluster Volume Manager (CVM)

Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM/CFS stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets.

# cfscluster config -s

The following output will be displayed:

CVM is now configured
Starting CVM...
It might take a few minutes to complete

When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:


# vxdctl -c mode

The following output will be displayed:

mode: enabled: cluster active - SLAVE
master: ever3b

or

mode: enabled: cluster active - MASTER
slave: ever3b

6. Converting Disks from LVM to CVM

You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in the Managing Serviceguard Twelfth Edition users guide Appendix G.

7. Initializing Disks for CVM/CFS

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /etc/vx/bin/vxdisksetup -i c4t4d0

8. Create the Disk Group for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init cfsdg1 c4t4d0

9. Create the Disk Group Multi-Node package.

Use the following command to add the disk group to the cluster:

# cfsdgadm add cfsdg1 all=sw

The following output will be displayed:


Package name SG-CFS-DG-1 was generated to control the resource.
Shared disk group cfsdg1 is associated with the cluster.

10. Activate the Disk Group

# cfsdgadm activate cfsdg1

11. Creating Volumes and Adding a Cluster Filesystem

# vxassist -g cfsdg1 make vol1 10240m
# vxassist -g cfsdg1 make vol2 10240m
# vxassist -g cfsdg1 make vol3 300m
# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol1

The following output will be displayed:

version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol2

The following output will be displayed:

version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol3

The following output will be displayed:

version 6 layout
307200 sectors, 307200 blocks of size 1024, log size 1024 blocks
largefiles supported

12. Configure Mount Point

# cfsmntadm add cfsdg1 vol1 /cfs/mnt1 all=rw

The following output will be displayed:


Package name SG-CFS-MP-1 was generated to control the resource.
Mount point /cfs/mnt1 was associated with the cluster.

# cfsmntadm add cfsdg1 vol2 /cfs/mnt2 all=rw

The following output will be displayed:

Package name SG-CFS-MP-2 was generated to control the resource.
Mount point /cfs/mnt2 was associated with the cluster.

# cfsmntadm add cfsdg1 vol3 /cfs/mnt3 all=rw

The following output will be displayed:

Package name SG-CFS-MP-3 was generated to control the resource.
Mount point /cfs/mnt3 was associated with the cluster.

NOTE

The diskgroup and mount point multi-node packages (SG-CFS-DG_ID# and SG-CFS-MP_ID#) do not monitor the health of the disk group and mount point. They check that the application packages that depend on them have access to the disk groups and mount points. If the dependent application package loses access and cannot read and write to the disk, it will fail; however that will not cause the DG or MP multi-node package to fail.

13. Mount Cluster Filesystem

# cfsmount /cfs/mnt1
# cfsmount /cfs/mnt2
# cfsmount /cfs/mnt3

14. Check CFS Mount Points

# bdf | grep cfs
/dev/vx/dsk/cfsdg1/vol1   10485760   19651   9811985    0%   /cfs/mnt1
/dev/vx/dsk/cfsdg1/vol2   10485760   19651   9811985    0%   /cfs/mnt2
/dev/vx/dsk/cfsdg1/vol3     307200    1802    286318    1%   /cfs/mnt3


15. View the Configuration

# cmviewcl
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS       STATE        GMS_STATE
  ever3a          up           running      unknown
  ever3b          up           running      unknown

MULTI_NODE_PACKAGES

  PACKAGE         STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-pkg      up        running    enabled     yes
  SG-CFS-DG-1     up        running    enabled     no
  SG-CFS-MP-1     up        running    enabled     no
  SG-CFS-MP-2     up        running    enabled     no
  SG-CFS-MP-3     up        running    enabled     no

CAUTION

Once you create the disk group and mount point packages, it is critical that you administer the cluster with the cfs commands, including cfsdgadm, cfsmntadm, cfsmount, and cfsumount. If you use general commands such as mount and umount, it could cause serious problems, such as writing to the local file system instead of the cluster file system. Any form of the mount command (for example, mount -o cluster, dbed_chkptmount, or sfrac_chkptmount) other than cfsmount or cfsumount in an HP Serviceguard Storage Management Suite environment with CFS should be used with caution. These non-cfs commands could cause conflicts with subsequent command operations on the file system or Serviceguard packages. Use of these other forms of mount will not create an appropriate multi-node package, which means that the cluster packages are not aware of the file system changes.

Deleting CFS from the Cluster


Halt the applications that are using CFS file systems.

1. Unmount CFS Mount Points

# cfsumount /cfs/mnt1
# cfsumount /cfs/mnt2

# cfsumount /cfs/mnt3

2. Delete Mount Point Multi-node Package

# cfsmntadm delete /cfs/mnt1

The following output will be generated:

Mount point /cfs/mnt1 was disassociated from the cluster

# cfsmntadm delete /cfs/mnt2

The following output will be generated:

Mount point /cfs/mnt2 was disassociated from the cluster

# cfsmntadm delete /cfs/mnt3

The following output will be generated:

Mount point /cfs/mnt3 was disassociated from the cluster
Cleaning up resource controlling shared disk group cfsdg1
Shared disk group cfsdg1 was disassociated from the cluster.

NOTE

The disk group package is deleted if there is no dependency.

3. Delete Disk Group Multi-node Package

# cfsdgadm delete cfsdg1

The following output will be generated:

Shared disk group cfsdg1 was disassociated from the cluster.

NOTE

cfsmntadm delete also deletes the disk group if there is no dependent package. To ensure the disk group deletion is complete, use the above command to delete the disk group package.

4. De-configure CVM

# cfscluster stop


The following output will be generated:

Stopping CVM...CVM is stopped

# cfscluster unconfig

The following output will be displayed:

CVM is now unconfigured


Creating a Storage Infrastructure with CVM


In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Volume Manager (VxVM), or VERITAS Cluster Volume Manager (CVM). LVM and VxVM configuration are done before cluster configuration, and CVM configuration is done after cluster configuration. This section shows how to configure storage using the VERITAS Cluster Volume Manager (CVM). The examples show how to configure RAC disk groups, but you can also create CVM disk groups for non-RAC use. For more information, including details about configuration of plexes (mirrors), multi-pathing, and RAID, refer to the HP-UX documentation for the VERITAS Volume Manager.

Initializing the VERITAS Volume Manager


If you are about to create disk groups for the first time, you need to initialize the Volume Manager. This is done by creating a disk group known as rootdg that contains at least one disk. Use the following command after installing CVM on each node:

# vxinstall

This displays a menu-driven program that steps you through the CVM initialization sequence. From the main menu, choose the Custom option, and specify the disk you wish to include in rootdg.

IMPORTANT

Creating a rootdg disk group is only necessary the first time you use the Volume Manager. CVM 4.1 does not require a rootdg.

Using CVM 4.x


This section has information on how to prepare the cluster and the system multi-node package with CVM 4.x only (without the CFS filesystem).


Preparing the Cluster and the System Multi-node Package for use with CVM 4.x

1. Create the Cluster file:

# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b

Edit Cluster file

2. Create the Cluster

# cmapplyconf -C clm.asc

Start the Cluster

# cmruncl
# cmviewcl

The following output will be displayed:
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS       STATE
  ever3a          up           running
  ever3b          up           running

3. Configure the Cluster Volume Manager (CVM)

Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets. Use the cmapplyconf command:

# cmapplyconf -P /etc/cmcluster/cfs/SG-CFS-pkg.conf
# cmrunpkg SG-CFS-pkg

When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:

# vxdctl -c mode

The following output will be displayed:


mode: enabled: cluster active - SLAVE
master: ever3b

or

mode: enabled: cluster active - MASTER
slave: ever3b

Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in the Managing Serviceguard Thirteenth Edition users guide Appendix G.

Initializing Disks for CVM

It is necessary to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /etc/vx/bin/vxdisksetup -i c4t4d0

Create the Disk Group for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init ops_dg c4t4d0

4. Creating Volumes and Adding a Cluster Filesystem

# vxassist -g ops_dg make vol1 10240m
# vxassist -g ops_dg make vol2 10240m
# vxassist -g ops_dg make vol3 300m

5. View the Configuration


# cmviewcl
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS       STATE
  ever3a          up           running
  ever3b          up           running

MULTI_NODE_PACKAGES

  PACKAGE         STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-pkg      up        running    enabled     yes

IMPORTANT

After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:

# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *

The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.

Mirror Detachment Policies with CVM

The required CVM disk mirror detachment policy is global, which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is local, which means that if one node cannot see a specific mirror copy, then CVM will deactivate access to the volume for that node only. This policy can be re-set on a disk group basis by using the vxedit command, as follows:

# vxedit set diskdetpolicy=global <DiskGroupName>


NOTE

The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.

Using CVM 3.x


This section has information on how to prepare the cluster and the system multi-node package with CVM 3.x.

Preparing the Cluster for Use with CVM 3.x

In order to use the VERITAS Cluster Volume Manager (CVM) version 3.x, the cluster must be running with a special CVM package. This means that the cluster must already be configured and running before you create disk groups.

NOTE

Cluster configuration is described in the previous section.

To prepare the cluster for CVM disk group configuration, you need to ensure that only one heartbeat subnet is configured. Then use the following command, which creates the special package that communicates cluster information to CVM:

# cmapplyconf -P /etc/cmcluster/cvm/VxVM-CVM-pkg.conf

WARNING

This file should never be edited.

After the above command completes, start the cluster and create disk groups for shared use as described in the following sections.

Starting the Cluster and Identifying the Master Node

Run the cluster, which will activate the special CVM package:

# cmruncl


After the cluster is started, it will now run with a special system multi-node package named VxVM-CVM-pkg, which is on all nodes. This package is shown in the following output of the cmviewcl -v command:
CLUSTER           STATUS
bowls             up

  NODE            STATUS       STATE
  spare           up           running
  split           up           running
  strike          up           running

SYSTEM_MULTI_NODE_PACKAGES:

  PACKAGE          STATUS     STATE
  VxVM-CVM-pkg     up         running

When CVM starts up, it selects a master node, and this is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:

# vxdctl -c mode

One node will identify itself as the master. Create disk groups from this node.

Converting Disks from LVM to CVM

You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in the Managing Serviceguard Thirteenth Edition users guide Appendix G.

Initializing Disks for CVM

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2


Creating Disk Groups for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init ops_dg c0t3d2

Verify the configuration with the following command:

# vxdg list
NAME         STATE                 ID
rootdg       enabled               971995699.1025.node1
ops_dg       enabled,shared        972078742.1084.node2

Creating Volumes
Use the vxassist command to create logical volumes. The following is an example:

# vxassist -g ops_dg make log_files 1024m

This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files. Verify the configuration with the following command:

# vxdg list

IMPORTANT

After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:

# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *

The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.


Mirror Detachment Policies with CVM

The required CVM disk mirror detachment policy is global, which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is local, which means that if one node cannot see a specific mirror copy, then CVM will deactivate access to the volume for that node only. This policy can be re-set on a disk group basis by using the vxedit command, as follows:

# vxedit set diskdetpolicy=global <DiskGroupName>

NOTE

The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.

Oracle Demo Database Files


The following set of volumes is required for the Oracle demo database, which you can create during the installation process.

Table 2-2   Required Oracle File Names for Demo Database

Volume Name       Oracle File Size (MB)   Size (MB)   Raw Device File Name
opsctl1.ctl       110                     118         /dev/vx/rdsk/ops_dg/opsctl1.ctl
opsctl2.ctl       110                     118         /dev/vx/rdsk/ops_dg/opsctl2.ctl
opsctl3.ctl       110                     118         /dev/vx/rdsk/ops_dg/opsctl3.ctl
ops1log1.log      120                     128         /dev/vx/rdsk/ops_dg/ops1log1.log
ops1log2.log      120                     128         /dev/vx/rdsk/ops_dg/ops1log2.log
ops1log3.log      120                     128         /dev/vx/rdsk/ops_dg/ops1log3.log
ops2log1.log      120                     128         /dev/vx/rdsk/ops_dg/ops2log1.log
ops2log2.log      120                     128         /dev/vx/rdsk/ops_dg/ops2log2.log


Table 2-2   Required Oracle File Names for Demo Database (Continued)

Volume Name        Oracle File Size (MB)   Size (MB)   Raw Device File Name
ops2log3.log       120                     128         /dev/vx/rdsk/ops_dg/ops2log3.log
opssystem.dbf      500                     508         /dev/vx/rdsk/ops_dg/opssystem.dbf
opssysaux.dbf      800                     808         /dev/vx/rdsk/ops_dg/opssysaux.dbf
opstemp.dbf        250                     258         /dev/vx/rdsk/ops_dg/opstemp.dbf
opsusers.dbf       120                     128         /dev/vx/rdsk/ops_dg/opsusers.dbf
opsdata1.dbf       200                     208         /dev/vx/rdsk/ops_dg/opsdata1.dbf
opsdata2.dbf       200                     208         /dev/vx/rdsk/ops_dg/opsdata2.dbf
opsdata3.dbf       200                     208         /dev/vx/rdsk/ops_dg/opsdata3.dbf
opsspfile1.ora     500                     508         /dev/vx/rdsk/ops_dg/opsspfile1.ora
opspwdfile.ora     500                     508         /dev/vx/rdsk/ops_dg/opspwdfile.ora
opsundotbs1.dbf    500                     508         /dev/vx/rdsk/ops_dg/opsundotbs1.dbf
opsundotbs2.dbf    500                     508         /dev/vx/rdsk/ops_dg/opsundotbs2.dbf
opsexample1.dbf    160                     168         /dev/vx/rdsk/ops_dg/opsexample1.dbf

Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vx/rdsk/ops_dg/opsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl

The following example shows how to create a mapping file for the Oracle Database Configuration Assistant (DBCA) and point DBCA to it:


1. Create an ASCII file, and define the path for each database object.
control1=/u01/ORACLE/db001/ctrl01_1.ctl

2. Set the following environment variable where filename is the name of the ASCII file created.
# export DBCA_RAW_CONFIG=<full path>/filename

Adding Disk Groups to the Cluster Configuration


For CVM 4.x, if the multi-node package was configured for disk group activation, the application package should be configured with a package dependency to ensure the CVM disk group is active. For CVM 3.5 and CVM 4.x (without using the multi-node package), after creating units of CVM storage with VxVM commands, you need to specify the disk groups in each package configuration ASCII file. Use one STORAGE_GROUP parameter for each CVM disk group the package will use. You also need to identify the CVM disk groups, file systems, logical volumes, and mount options in the package control script. For more detailed information on the package configuration process, refer to the Managing Serviceguard Thirteenth Edition user's guide.
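As an example (a hypothetical excerpt; the disk group name follows the samples in this chapter), the package configuration ASCII file would contain:
STORAGE_GROUP ops_dg
and the package control script would name the same disk group:
CVM_DG[0]=ops_dg
If the package also mounts file systems built on the disk group, the LV, FS, and FS_MOUNT_OPT arrays in the standard control script template identify the logical volumes, mount points, and mount options.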



Prerequisites for Oracle 10g (Sample Installation)


The following sample steps prepare an SGeRAC cluster for Oracle 10g. Refer to the Oracle documentation for Oracle installation details.

1. Create Inventory Groups on each Node

Create the Oracle Inventory group if one does not exist, create the OSDBA group, and create the Operator group (optional).
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/groupadd oper

2. Create Oracle User on each Node

# /usr/bin/useradd -u 203 -g oinstall -G dba,oper oracle

3. Change the Password on each Node

# passwd oracle

4. Create Symbolic Links

Required if the Motif 2.1 Development Environment package is not installed.
# ln -s /usr/lib/libX11.3 /usr/lib/libX11.sl
# ln -s /usr/lib/libXIE.2 /usr/lib/libXIE.sl
# ln -s /usr/lib/libXext.3 /usr/lib/libXext.sl
# ln -s /usr/lib/libXhp11.3 /usr/lib/libXhp11.sl
# ln -s /usr/lib/libXi.3 /usr/lib/libXi.sl
# ln -s /usr/lib/libXm.4 /usr/lib/libXm.sl
# ln -s /usr/lib/libXp.2 /usr/lib/libXp.sl
# ln -s /usr/lib/libXt.3 /usr/lib/libXt.sl
# ln -s /usr/lib/libXtst.2 /usr/lib/libXtst.sl



5. Enable Remote Access (ssh and remsh) for Oracle User on all Nodes

6. Create File System for Oracle Directories

In the following samples, /mnt/app is a mounted file system for Oracle software. Assume there is a private 18 GB disk, c4t5d0, on all nodes. Create the local file system on each node.
# umask 022
# pvcreate /dev/rdsk/c4t5d0
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c4t5d0
# lvcreate -L 16000 /dev/vg01
# newfs -F vxfs /dev/vg01/rlvol1
# mkdir -p /mnt/app
# mount /dev/vg01/lvol1 /mnt/app
# chmod 775 /mnt/app

7. Create Oracle Cluster Software Home Directory

To install Oracle Cluster Software on a local file system, create the directories on each node.
# mkdir -p /mnt/app/crs/oracle/product/10.2.0/crs
# chown -R oracle:oinstall /mnt/app/crs/oracle/product/10.2.0/crs
# chmod -R 775 /mnt/app/crs/oracle/product/10.2.0/crs

8. Create Oracle Base Directory (for RAC Binaries on Local File System)

If installing RAC binaries on a local file system, create the Oracle base directory on each node.
# mkdir -p /mnt/app/oracle
# chown -R oracle:oinstall /mnt/app/oracle
# chmod -R 775 /mnt/app/oracle
# usermod -d /mnt/app/oracle oracle


9. Create Oracle Base Directory (for RAC Binaries on Cluster File System)

If installing RAC binaries on a Cluster File System, create the Oracle base directory once, since this CFS directory is visible by all nodes. The CFS file system used is /cfs/mnt1.
# mkdir -p /cfs/mnt1/oracle
# chown -R oracle:oinstall /cfs/mnt1/oracle
# chmod -R 775 /cfs/mnt1/oracle
# chmod 775 /cfs
# chmod 775 /cfs/mnt1
Modify the oracle user to use the new home directory on each node.
# usermod -d /cfs/mnt1/oracle oracle

10. Prepare Shared Storage on SLVM

This section assumes the OCR, Vote device, and database files are created on SLVM volume group vg_ops.

a. Change Permission of Shared Logical Volume Group
# chmod 755 /dev/vg_ops

b. Change Permission and Ownership of Oracle Cluster Software Vote Device and Database Files
# chown oracle:oinstall /dev/vg_ops/r*
# chmod 660 /dev/vg_ops/r*

c. Change Permission of OCR Device
# chown root:oinstall /dev/vg_ops/rora_ocr
# chmod 640 /dev/vg_ops/rora_ocr

d. Create Raw Device Mapping File for Oracle Database Configuration Assistant
In this example, the database name is ver10.
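The commands above assume that vg_ops already contains raw logical volumes for the OCR and vote device (rora_ocr and rora_vote). A minimal sketch of creating them with SLVM, where the sizes are illustrative assumptions rather than Oracle requirements:
# lvcreate -n ora_ocr -L 108 /dev/vg_ops
# lvcreate -n ora_vote -L 28 /dev/vg_ops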


# ORACLE_BASE=/mnt/app/oracle; export ORACLE_BASE
# mkdir -p $ORACLE_BASE/oradata/ver10
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 755 $ORACLE_BASE/oradata
The following is a sample of the mapping file for DBCA:
system=/dev/vg_ops/ropssystem.dbf
sysaux=/dev/vg_ops/ropssysaux.dbf
undotbs1=/dev/vg_ops/ropsundotbs01.dbf
undotbs2=/dev/vg_ops/ropsundotbs02.dbf
example=/dev/vg_ops/ropsexample1.dbf
users=/dev/vg_ops/ropsusers.dbf
redo1_1=/dev/vg_ops/rops1log1.log
redo1_2=/dev/vg_ops/rops1log2.log
redo2_1=/dev/vg_ops/rops2log1.log
redo2_2=/dev/vg_ops/rops2log2.log
control1=/dev/vg_ops/ropsctl1.ctl
control2=/dev/vg_ops/ropsctl2.ctl
control3=/dev/vg_ops/ropsctl3.ctl
temp=/dev/vg_ops/ropstmp.dbf
spfile=/dev/vg_ops/ropsspfile1.ora

In this sample, create the DBCA mapping file and place it at /mnt/app/oracle/oradata/ver10/ver10_raw.conf.

11. Prepare Shared Storage on CFS

This section assumes the OCR, Vote device, and database files are created on CFS directories. The OCR and vote device reside on /cfs/mnt3, and the demo database files reside on /cfs/mnt2.

a. Create OCR and Vote Device Directories on CFS
Create the OCR and vote device directories on the Cluster File System. Run commands only on one node.
# chmod 775 /cfs
# chmod 755 /cfs/mnt3
# cd /cfs/mnt3
# mkdir OCR
# chmod 755 OCR
# mkdir VOTE


# chmod 755 VOTE
# chown -R oracle:oinstall /cfs/mnt3

b. Create Directory for Oracle Demo Database on CFS
Create the CFS directory to store Oracle database files. Run commands only on one node.
# chmod 775 /cfs
# chmod 775 /cfs/mnt2
# cd /cfs/mnt2
# mkdir oradata
# chown oracle:oinstall oradata
# chmod 775 oradata



Installing Oracle 10g Cluster Software


The following are sample steps for installing Oracle 10g Cluster Software on an SGeRAC cluster. Refer to the Oracle documentation for Oracle installation details.

Installing on Local File System


Log on as the oracle user:
$ export DISPLAY={display}:0.0
$ cd <10g Cluster Software disk directory>
$ ./runInstaller
Use the following guidelines when installing on a local file system:
1. Specify CRS HOME as /mnt/app/crs/oracle/product/10.2.0/crs. This is a local file system.
2. Specify the OCR location as /dev/vg_ops/rora_ocr if using SLVM for the OCR, or as /cfs/mnt3/OCR/ocr_file if using CFS for the OCR.
3. Specify the Vote Disk location as /dev/vg_ops/rora_vote if using an SLVM vote device, or as /cfs/mnt3/VOTE/vote_file if using CFS for the vote device.
4. When prompted, run orainstRoot.sh on each node.
5. When prompted, run root.sh on each node.


Installing Oracle 10g RAC Binaries


The following are sample steps for installing the Oracle 10g RAC binaries on an SGeRAC cluster. Refer to the Oracle documentation for Oracle installation details.

Installing RAC Binaries on a Local File System


Log on as the oracle user:
$ export ORACLE_BASE=/mnt/app/oracle
$ export DISPLAY={display}:0.0
$ cd <10g RAC installation disk directory>
$ ./runInstaller
Use the following guidelines when installing on a local file system:
1. In this example, the path to ORACLE_HOME is on a local file system: /mnt/app/oracle/product/10.2.0/db_1.
2. Select installation for database software only.
3. When prompted, run root.sh on each node.

Installing RAC Binaries on Cluster File System


Log on as the oracle user:
$ export ORACLE_BASE=/cfs/mnt1/oracle
$ export DISPLAY={display}:0.0
$ cd <10g RAC installation disk directory>
$ ./runInstaller
Use the following guidelines when installing on a cluster file system:
1. In this example, the path to ORACLE_HOME is located on a CFS directory: /cfs/mnt1/oracle/product/10.2.0/db_1.
2. Select installation for database software only.
3. When prompted, run root.sh on each node.


Creating a RAC Demo Database


This section demonstrates the steps for creating a demo database with datafiles on raw volumes with SLVM or CVM, or with Cluster File System.

Creating a RAC Demo Database on SLVM or CVM


Export environment variables for oracle user:
export ORACLE_BASE=/mnt/app/oracle
export DBCA_RAW_CONFIG=/mnt/app/oracle/oradata/ver10/ver10_raw.conf
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/mnt/app/crs/oracle/product/10.2.0/crs
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/usr/local/bin:
CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0

1. Set up Listeners with the Oracle Network Configuration Assistant
Use the Oracle Network Configuration Assistant to configure the listeners with the following command:
$ netca

2. Create the Database with the Database Configuration Assistant
Use the Oracle Database Configuration Assistant to create the database with the following command:
$ dbca
Use the following guidelines:


a. In this sample, the database name and SID prefix are ver10.

b. Select the storage option for raw devices.

Creating a RAC Demo Database on CFS


Export environment variables for oracle user:
export ORACLE_BASE=/cfs/mnt1/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/mnt/app/crs/oracle/product/10.2.0/crs
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/usr/local/bin:
CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0

1. Set up Listeners with the Oracle Network Configuration Assistant
Use the Oracle Network Configuration Assistant to configure the listeners with the following command:
$ netca

2. Create the Database with the Database Configuration Assistant
Use the Oracle Database Configuration Assistant to create the database with the following command:
$ dbca
Use the following guidelines:
a. In this sample, the database name and SID prefix are ver10.
b. Select the storage option for Cluster File System.
c. Enter /cfs/mnt2/oradata as the common directory.


Verify that Oracle Disk Manager is Configured


NOTE The following steps are specific to CFS.

1. Check the license:
# /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f ODM
output:
Using VERITAS License Manager API Version 3.00, Build 2
ODM feature is licensed

2. Check that the VRTSodm package is installed:
# swlist VRTSodm
output:
VRTSodm            4.1m   VERITAS Oracle Disk Manager
VRTSodm.ODM-KRN    4.1m   VERITAS ODM kernel files
VRTSodm.ODM-MAN    4.1m   VERITAS ODM manual pages
VRTSodm.ODM-RUN    4.1m   VERITAS ODM commands

3. Check that libodm.sl is present:
# ll -L /opt/VRTSodm/lib/libodm.sl
output:
-rw-r--r-- 1 root sys 14336 Apr 25 18:42 /opt/VRTSodm/lib/libodm.sl


Configuring Oracle to Use Oracle Disk Manager Library


NOTE The following steps are specific to CFS.

1. Log in as the Oracle user.
2. Shut down the database.
3. Link the Oracle Disk Manager library into the Oracle home for Oracle 10g.
For HP 9000 systems:
$ rm ${ORACLE_HOME}/lib/libodm10.sl
$ ln -s /opt/VRTSodm/lib/libodm.sl \
${ORACLE_HOME}/lib/libodm10.sl
For Integrity systems:
$ rm ${ORACLE_HOME}/lib/libodm10.so
$ ln -s /opt/VRTSodm/lib/libodm.sl \
${ORACLE_HOME}/lib/libodm10.so
4. Start the Oracle database.


Verify that Oracle Disk Manager is Running


NOTE The following steps are specific to CFS.

1. Start the cluster and Oracle database (if not already started).
2. Check that the Oracle instance is using the Oracle Disk Manager function:
# cat /dev/odm/stats
abort:          0
cancel:         0
commit:         18
create:         18
delete:         0
identify:       349
io:             12350590
reidentify:     78
resize:         0
unidentify:     203
mname:          0
vxctl:          0
vxvers:         10
io req:         9102431
io calls:       6911030
comp req:       73480659
comp calls:     5439560
io mor cmp:     461063
io zro cmp:     2330
cl receive:     66145
cl ident:       18
cl reserve:     8
cl delete:      1
cl resize:      0
cl same op:     0
cl opt idn:     0
cl opt rsv:     332
**********:     17

3. Verify that the Oracle Disk Manager is loaded:


# kcmodule -P state odm
Output:
state loaded

4. In the alert log, verify the Oracle instance is running. The log should contain output similar to the following:
Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1


Configuring Oracle to Stop Using Oracle Disk Manager Library


NOTE The following steps are specific to CFS.

1. Log in as the Oracle user.
2. Shut down the database.
3. Change directories:
$ cd ${ORACLE_HOME}/lib
4. Remove the file linked to the ODM library.
For HP 9000 systems:
$ rm libodm10.sl
$ ln -s ${ORACLE_HOME}/lib/libodmd10.sl \
${ORACLE_HOME}/lib/libodm10.sl
For Integrity systems:
$ rm libodm10.so
$ ln -s ${ORACLE_HOME}/lib/libodmd10.so \
${ORACLE_HOME}/lib/libodm10.so
5. Restart the database.


Using Serviceguard Packages to Synchronize with Oracle 10g RAC


It is recommended that you start and stop Oracle Cluster Software in a Serviceguard package; this ensures that Oracle Cluster Software starts after SGeRAC has started and stops before SGeRAC is halted. Serviceguard packages should also be used to synchronize storage activation and deactivation with Oracle Cluster Software and the RAC instances.

Preparing Oracle Cluster Software for Serviceguard Packages


Stopping the Oracle Cluster Software on each Node
For 10g 10.1.0.04 or later:
# /sbin/init.d/init.crs stop
For 10g 10.2.0.01 or later:
# <CRS HOME>/bin/crsctl stop crs
Wait until Oracle Cluster Software completely stops. (Check the CRS logs, or check for Oracle processes with ps -ef | grep ocssd.bin.)

Change Oracle Cluster Software from Starting at Boot Time on each Node
For 10g 10.1.0.04 or later:
# /sbin/init.d/init.crs disable
For 10g 10.2.0.01 or later:
# <CRS HOME>/bin/crsctl disable crs

Configure Serviceguard Packages


A Serviceguard package is needed for each node to start and stop Oracle Cluster Software.

Storage Activation (SLVM)


When the storage required by Oracle Cluster Software is configured on SLVM volume groups or CVM disk groups, the Serviceguard package should be configured to activate and deactivate the required storage in the package control script. As an example, for SLVM volume groups, modify the control script to activate the volume group in shared mode and set the VG variable:
VG[0]=vg_ops

Storage Activation (CVM)

When the storage required by Oracle Cluster Software is configured on CVM disk groups, the Serviceguard package should be configured to activate and deactivate the required storage in the package configuration file and control script. In the package configuration file, specify the disk group with the STORAGE_GROUP keyword. In the package control script, specify the disk group with the CVM_DG variable. As an example, the package configuration file should contain the following:
STORAGE_GROUP ops_dg
Modify the package control script to set the CVM disk group to activate for shared write and to specify the disk group:
CVM_DG[0]=ops_dg

Storage Activation (CFS)

When the storage required by Oracle Cluster Software is configured on a Cluster File System (CFS), the Serviceguard package should be configured to depend on the CFS multi-node package through package dependency. With package dependency, the Serviceguard package that starts Oracle Cluster Software will not run until the CFS multi-node package it depends on is up, and it will halt before the CFS multi-node package is halted. Set up the dependency conditions in the Serviceguard package configuration file (SAMPLE):
DEPENDENCY_NAME         mp1
DEPENDENCY_CONDITION    SG-CFS-MP-1=UP
DEPENDENCY_LOCATION     SAME_NODE

DEPENDENCY_NAME         mp2
DEPENDENCY_CONDITION    SG-CFS-MP-2=UP
DEPENDENCY_LOCATION     SAME_NODE

DEPENDENCY_NAME         mp3
DEPENDENCY_CONDITION    SG-CFS-MP-3=UP
DEPENDENCY_LOCATION     SAME_NODE

Starting and Stopping Oracle Cluster Software

In the Serviceguard package control script, configure the Oracle Cluster Software start in the customer_defined_run_cmds function.
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs start
For 10g 10.2.0.01 or later:
<CRS HOME>/bin/crsctl start crs
In the Serviceguard package control script, configure the Oracle Cluster Software stop in the customer_defined_halt_cmds function.
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs stop
For 10g 10.2.0.01 or later:
<CRS HOME>/bin/crsctl stop crs
When stopping Oracle Cluster Software in a Serviceguard package, it may be necessary to verify that the Oracle processes have stopped and exited before deactivating storage or halting the CFS multi-node packages. The verification can be done with a script that loops and checks for the successful stop message in the Oracle Cluster Software logs, or for the existence of the Oracle processes that need to be stopped, specifically the CSS daemon (ocssd.bin). For example, this script could be called by the Serviceguard package control script after the command to halt Oracle Cluster Software and before storage deactivation, as in the sketch below.
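A minimal sketch (not the shipped control script template) of how these two functions might look for 10g 10.2; the CRS home path matches the sample installation earlier in this chapter, and the loop limit is an assumption to be adjusted for your environment:

function customer_defined_run_cmds
{
    # Start Oracle Cluster Software (10g 10.2.0.01 or later syntax).
    CRS_HOME=/mnt/app/crs/oracle/product/10.2.0/crs
    $CRS_HOME/bin/crsctl start crs
}

function customer_defined_halt_cmds
{
    # Stop Oracle Cluster Software, then wait for the CSS daemon
    # (ocssd.bin) to exit before the rest of the control script
    # deactivates storage or halts the CFS multi-node packages.
    CRS_HOME=/mnt/app/crs/oracle/product/10.2.0/crs
    $CRS_HOME/bin/crsctl stop crs
    tries=0
    while ps -ef | grep ocssd.bin | grep -v grep > /dev/null
    do
        tries=$((tries + 1))
        if [ "$tries" -gt 60 ]    # assumed limit: roughly 5 minutes
        then
            echo "ocssd.bin still running after timeout; continuing halt"
            break
        fi
        sleep 5
    done
}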



Serviceguard Configuration for Oracle 9i RAC


This chapter shows the additional planning and configuration that is needed to use Oracle Real Application Clusters 9i with Serviceguard. The following topics are presented:

Planning Database Storage
Installing Serviceguard Extension for RAC
Creating a Storage Infrastructure with LVM
Installing Oracle Real Application Clusters
Cluster Configuration ASCII File
Creating a Storage Infrastructure with CFS
Creating a Storage Infrastructure with CVM
Configuring Packages that Access the Oracle RAC Database
Using Packages to Configure Startup and Shutdown of RAC Instances


Planning Database Storage


The files needed by the Oracle database must be placed on shared storage that is accessible to all RAC cluster nodes. This section shows how to plan the storage using SLVM, VERITAS CFS, or VERITAS CVM.

Volume Planning with SLVM


Storage capacity for the Oracle database must be provided in the form of logical volumes located in shared volume groups. The Oracle software requires at least two log files (and one undo tablespace for Oracle9) for each Oracle instance, several Oracle control files, and data files for the database itself. For all these files, Serviceguard Extension for RAC uses HP-UX raw logical volumes, which are located in volume groups that are shared between the nodes in the cluster. High availability is achieved by using high availability disk arrays in RAID modes. The logical units of storage on the arrays are accessed from each node through multiple physical volume links (PV links, also known as alternate links), which provide redundant paths to each unit of storage. Fill out a Logical Volume worksheet to provide logical volume names for logical volumes that you will create with the lvcreate command. The Oracle DBA and the HP-UX system administrator should prepare this worksheet together. Create entries for shared volumes only. For each logical volume, enter the full pathname of the raw logical volume device file. Be sure to include the desired size in MB. Following is a sample worksheet filled out. However, this sample is only representative; for different versions of the Oracle database, the file sizes are different. Refer to Appendix B, Blank Planning Worksheets, for samples of blank worksheets. Make as many copies as you need. Fill out the worksheet and keep it for future reference.


ORACLE LOGICAL VOLUME WORKSHEET FOR LVM                    Page ___ of ____
===============================================================================
                       RAW LOGICAL VOLUME NAME                     SIZE (MB)
Oracle Control File _____/dev/vg_ops/ropsctl1.ctl_______108______
Oracle Control File 2: ___/dev/vg_ops/ropsctl2.ctl______108______
Oracle Control File 3: ___/dev/vg_ops/ropsctl3.ctl______104______
Instance 1 Redo Log 1: ___/dev/vg_ops/rops1log1.log_____20______
Instance 1 Redo Log 2: ___/dev/vg_ops/rops1log2.log_____20_______
Instance 1 Redo Log 3: ___/dev/vg_ops/rops1log3.log_____20_______
Instance 1 Redo Log: __________________________________________________
Instance 1 Redo Log: __________________________________________________
Instance 2 Redo Log 1: ___/dev/vg_ops/rops2log1.log____20________
Instance 2 Redo Log 2: ___/dev/vg_ops/rops2log2.log____20________
Instance 2 Redo Log 3: ___/dev/vg_ops/rops2log3.log____20_________
Instance 2 Redo Log: _________________________________________________
Instance 2 Redo Log: __________________________________________________
Data: System ___/dev/vg_ops/ropssystem.dbf___400__________
Data: Temp ___/dev/vg_ops/ropstemp.dbf______100_______
Data: Users ___/dev/vg_ops/ropsusers.dbf_____120_________
Data: Tools ___/dev/vg_ops/ropstools.dbf____15___________
Data: User data ___/dev/vg_ops/ropsdata1.dbf_200__________
Data: User data ___/dev/vg_ops/ropsdata2.dbf__200__________
Data: User data ___/dev/vg_ops/ropsdata3.dbf__200__________
Data: Rollback ___/dev/vg_ops/ropsrollback.dbf__300_________
Parameter: spfile1 /dev/vg_ops/ropsspfile1.ora __5_____
Instance 1 undotbs1: /dev/vg_ops/ropsundotbs1.dbf___312___
Instance 2 undotbs2: /dev/vg_ops/ropsundotbs2.dbf___312___
Data: example1__/dev/vg_ops/ropsexample1.dbf__________160____
data: cwmlite1__/dev/vg_ops/ropscwmlite1.dbf__100____
Data: indx1__/dev/vg_ops/ropsindx1.dbf____70___
Data: drsys1__/dev/vg_ops/ropsdrsys1.dbf___90___


Storage Planning with CFS


With CFS, the database software, the database files (control, redo, data files), and archive logs may reside on a cluster file system, which is visible by all nodes. The following software needs to be installed in order to use this configuration:

SGeRAC
CFS

NOTE

CFS and SGeRAC are available in the following HP Serviceguard Storage Management Suite bundle: HP Serviceguard Cluster File System for RAC (SGCFSRAC) (T2777BA).

For specific bundle information, refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes.

Configuration Combinations with Oracle 9i RAC

The following configuration combinations show a sample of the configuration choices (see Table 3-1):

Configuration 1 is for Oracle software and database files to reside on CFS.
Configuration 2 is for Oracle software and archives to reside on CFS, while the database files are on raw volumes, either SLVM or CVM.
Configuration 3 is for Oracle software and archives to reside on a local file system, while database files reside on CFS.


Configuration 4 is for Oracle software and archives to reside on a local file system, while the database files are on raw volumes, either SLVM or CVM.

Table 3-1

Configuration   RAC Software, Archive   RAC Datafiles, SRVM
1               CFS                     CFS
2               CFS                     Raw (SLVM or CVM)
3               Local FS                CFS
4               Local FS                Raw (SLVM or CVM)

NOTE

Mixing CFS database files and raw volumes is allowable, but not recommended. RAC datafiles on CFS require Oracle Disk Manager (ODM).

Using Single CFS Home or Local Home

With a single CFS home, the software installs once and all the files are visible on all nodes. Serviceguard cluster services need to be up for the CFS home to be accessible. For Oracle RAC, some files are still local (/etc/oratab, /var/opt/oracle/, /usr/local/bin/). With a local file system home, the software is installed and copied to every node's local file system. Serviceguard cluster services do not need to be up for the local file system to be accessible. There would be multiple homes for Oracle.

Considerations on using CFS for RAC datafiles and Server Management Storage (SRVM)

Use the following list when considering whether to use CFS for database storage:

Single file system view
Simpler setup for archive recovery, since the archive area is visible by all nodes


Oracle creates database files
Online changes (OMF - Oracle Managed Files) within CFS
Better manageability
Manual intervention when modifying volumes, DGs, disks
Requires the SGeRAC and CFS software (CFS and SGeRAC are available in selected HP Serviceguard Storage Management Suite bundles; refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes)

Considerations on using Raw Volumes for RAC datafiles and Server Management storage (SRVM)

Use the following list when considering whether to use raw volumes for database storage:

SLVM is part of SGeRAC.
Create volumes for each datafile.
CVM 4.x: Disk group activation is performed by the disk group multi-node package, or by the application package (without the HP Serviceguard Storage Management Suite bundle; see bundle information above).
CVM 3.x: Disk group activation is performed by the application package.

Volume Planning with CVM


Storage capacity for the Oracle database must be provided in the form of volumes located in shared disk groups. The Oracle software requires at least two log files (and one undo tablespace for Oracle9) for each Oracle instance, several Oracle control files and data files for the database itself. For all these files, Serviceguard Extension for RAC uses HP-UX raw volumes, which are located in disk groups that are shared between the nodes in the cluster. High availability is achieved by using high


availability disk arrays in RAID modes. The logical units of storage on the arrays are accessed from each node through multiple physical volume links via DMP (Dynamic Multi-pathing), which provides redundant paths to each unit of storage. Fill out the VERITAS Volume worksheet to provide volume names for volumes that you will create using the VERITAS utilities. The Oracle DBA and the HP-UX system administrator should prepare this worksheet together. Create entries for shared volumes only. For each volume, enter the full pathname of the raw volume device file. Be sure to include the desired size in MB. Following is a sample worksheet filled out. Refer to Appendix B, Blank Planning Worksheets, for samples of blank worksheets. Make as many copies as you need. Fill out the worksheet and keep it for future reference.


ORACLE LOGICAL VOLUME WORKSHEET FOR CVM                    Page ___ of ____
===============================================================================
                       RAW LOGICAL VOLUME NAME                     SIZE (MB)
Oracle Control File 1: ___/dev/vx/rdsk/ops_dg/opsctl1.ctl______100______
Oracle Control File 2: ___/dev/vx/rdsk/ops_dg/opsctl2.ctl______100______
Oracle Control File 3: ___/dev/vx/rdsk/ops_dg/opsctl3.ctl______100______
Instance 1 Redo Log 1: ___/dev/vx/rdsk/ops_dg/ops1log1.log_____20______
Instance 1 Redo Log 2: ___/dev/vx/rdsk/ops_dg/ops1log2.log_____20_______
Instance 1 Redo Log 3: ___/dev/vx/rdsk/ops_dg/ops1log3.log_____20_______
Instance 1 Redo Log: ___________________________________________________
Instance 1 Redo Log: ___________________________________________________
Instance 2 Redo Log 1: ___/dev/vx/rdsk/ops_dg/ops2log1.log____20________
Instance 2 Redo Log 2: ___/dev/vx/rdsk/ops_dg/ops2log2.log____20________
Instance 2 Redo Log 3: ___/dev/vx/rdsk/ops_dg/ops2log3.log____20_________
Instance 2 Redo Log: _________________________________________________
Instance 2 Redo Log: __________________________________________________
Data: System ___/dev/vx/rdsk/ops_dg/system.dbf___400__________
Data: Temp ___/dev/vx/rdsk/ops_dg/temp.dbf______100_______
Data: Users ___/dev/vx/rdsk/ops_dg/users.dbf_____120_________
Data: Tools ___/dev/vx/rdsk/ops_dg/tools.dbf____15___________
Data: User data ___/dev/vx/rdsk/ops_dg/data1.dbf_200__________
Data: User data ___/dev/vx/rdsk/ops_dg/data2.dbf__200__________
Data: User data ___/dev/vx/rdsk/ops_dg/data3.dbf__200__________
Data: Rollback ___/dev/vx/rdsk/ops_dg/rollback.dbf__300_________
Parameter: spfile1 /dev/vx/rdsk/ops_dg/spfile1.ora __5_____
Instance 1 undotbs1: /dev/vx/rdsk/ops_dg/undotbs1.dbf___312___
Instance 2 undotbs2: /dev/vx/rdsk/ops_dg/undotbs2.dbf___312___
Data: example1__/dev/vx/rdsk/ops_dg/example1.dbf__________160____
data: cwmlite1__/dev/vx/rdsk/ops_dg/cwmlite1.dbf__100____
Data: indx1__/dev/vx/rdsk/ops_dg/indx1.dbf____70___
Data: drsys1__/dev/vx/rdsk/ops_dg/drsys1.dbf___90___


Installing Serviceguard Extension for RAC


Installing Serviceguard Extension for RAC includes updating the software and rebuilding the kernel to support high availability cluster operation for Oracle Real Application Clusters. Prior to installing Serviceguard Extension for RAC, the following must be installed:

Correct version of HP-UX
Correct version of Serviceguard

NOTE

For the most current version compatibility for Serviceguard and HP-UX, see the SGeRAC release notes.

To install Serviceguard Extension for RAC, use the following steps for each node:
1. Mount the distribution media in the tape drive, CD, or DVD reader.
2. Run Software Distributor, using the swinstall command.
3. Specify the correct input device.
4. Choose the following bundle from the displayed list:
Serviceguard Extension for RAC

5. After choosing the bundle, select OK. The software is loaded.


Configuration File Parameters


You need to code specific entries for all the storage groups that you want to use in an Oracle RAC configuration. If you are using LVM, the OPS_VOLUME_GROUP parameter is included in the cluster ASCII file. If you are using VERITAS CVM, the STORAGE_GROUP parameter is included in the package ASCII file. Details are as follows:

OPS_VOLUME_GROUP
The name of an LVM volume group whose disks are attached to at least two nodes in the cluster; the disks will be accessed by more than one node at a time using SLVM with concurrency control provided by Oracle RAC. Such disks are considered cluster aware. Volume groups listed under this parameter are marked for activation in shared mode. The entry can contain up to 40 characters.

STORAGE_GROUP
This parameter is used for CVM disk groups. Enter the names of all the CVM disk groups the package will use. In the ASCII package configuration file, this parameter is called STORAGE_GROUP.

NOTE

CVM 4.x with CFS does not use the STORAGE_GROUP parameter because the disk group activation is performed by the multi-node package. CVM 3.x or 4.x without CFS uses the STORAGE_GROUP parameter in the ASCII package configuration file in order to activate the disk group. Do not enter the names of LVM volume groups or VxVM disk groups in the package ASCII configuration file.
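For example (a hypothetical excerpt, reusing the vg_ops and ops_dg names from the samples in this manual), the cluster configuration ASCII file would end with an entry such as:
OPS_VOLUME_GROUP /dev/vg_ops
and a package configuration ASCII file for a CVM 3.x package, or a CVM 4.x package without CFS, would contain:
STORAGE_GROUP ops_dg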


Operating System Parameters


The maximum number of Oracle server processes cmgmsd can handle is 8192. When there are more than 8192 server processes connected to cmgmsd, cmgmsd will start to reject new requests. Oracle foreground server processes are needed to handle the requests of the DB clients connected to the DB instance. Each foreground server process can be either a dedicated or a shared server process. In the case of a dedicated process, there is a one-to-one correspondence between the DB client and the foreground server process it invokes. Shared server processes can handle multiple DB clients. In the case where a DB instance is configured to support a large number of DB clients, it is necessary to adjust the maxfiles parameter. This is to make sure there are enough file descriptors to support the necessary number of Oracle foreground and background server processes.
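For example, on recent HP-UX 11i releases you might inspect and raise maxfiles with the kernel tuning command (a sketch; the target value is an assumption and should be sized for your expected number of server processes, and older releases use kmtune or SAM instead):
# kctune maxfiles
# kctune maxfiles=4096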


Creating a Storage Infrastructure with LVM


In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Cluster Volume Manager (CVM), or VERITAS Volume Manager (VxVM). LVM and VxVM configuration are done before cluster configuration, and CVM configuration is done after cluster configuration. This section describes how to create LVM volume groups for use with Oracle data. Separate procedures are given for the following:

Building Volume Groups for RAC on Mirrored Disks
Building Mirrored Logical Volumes for RAC with LVM Commands
Creating RAC Volume Groups on Disk Arrays
Creating Logical Volumes for RAC on Disk Arrays

The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure them in physical volume groups. For more information, refer to the manual Using HA Monitors.

Building Volume Groups for RAC on Mirrored Disks


The procedure described in this section uses physical volume groups for mirroring of individual disks to ensure that each logical volume is mirrored to a disk on a different I/O bus. This kind of arrangement is known as PVG-strict mirroring. It is assumed that your disk hardware is already configured in such a way that a disk to be used as a mirror copy is connected to each node on a different bus than the bus that is used for the other (primary) copy. For more information on using LVM, refer to the HP-UX Managing Systems and Workgroups manual.


Creating Volume Groups and Logical Volumes

If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section Installing Oracle Real Application Clusters.

Selecting Disks for the Volume Group

Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both. Use the following command on each node to list available disks as they are known to each system:
# lssf /dev/dsk/*

In the following examples, we use /dev/rdsk/c1t2d0 and /dev/rdsk/c0t2d0, which happen to be the device names for the same disks on both ftsys9 and ftsys10. In the event that the device file names are different on the different nodes, make a careful note of the correspondences.

Creating Physical Volumes

On the configuration node (ftsys9), use the pvcreate command to define disks as physical volumes. This only needs to be done on the configuration node. Use the following commands to create two physical volumes for the sample configuration:
# pvcreate -f /dev/rdsk/c1t2d0
# pvcreate -f /dev/rdsk/c0t2d0

Creating a Volume Group with PVG-Strict Mirroring

Use the following steps to build a volume group on the configuration node (ftsys9). Later, the same volume group will be created on other nodes.

1. First, set up the group directory for vg_ops:
# mkdir /dev/vg_ops

2. Next, create a control file named group in the directory /dev/vg_ops, as follows:
# mknod /dev/vg_ops/group c 64 0xhh0000

The major number is always 64, and the hexadecimal minor number has the form
0xhh0000


where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group

3. Create the volume group and add physical volumes to it with the following commands:
# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c0t2d0

The first command creates the volume group and adds a physical volume to it in a physical volume group called bus0. The second command adds the second drive to the volume group, locating it in a different physical volume group named bus1. The use of physical volume groups allows the use of PVG-strict mirroring of disks and PV links.

4. Repeat this procedure for additional volume groups.

Building Mirrored Logical Volumes for RAC with LVM Commands


After you create volume groups and define physical volumes for use in them, you define mirrored logical volumes for data, logs, and control files. It is recommended that you use a shell script to issue the commands described in the next sections. The commands you use for creating logical volumes vary slightly depending on whether you are creating logical volumes for RAC redo log files or for use with Oracle data.

Creating Mirrored Logical Volumes for RAC Redo Logs and Control Files

Create logical volumes for use as redo log and control files by selecting mirror consistency recovery. Use the same options as in the following example:
# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y means that mirror consistency recovery is enabled; the -s g means that mirroring is


PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.

NOTE

It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds. If these options are not set correctly, you may not be able to continue with database recovery.

If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/redo1.log has been successfully created with character device /dev/vg_ops/rredo1.log
Logical volume /dev/vg_ops/redo1.log has been successfully extended

Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the RAC database.

Creating Mirrored Logical Volumes for RAC Data Files

Following a system crash, the mirrored logical volumes need to be resynchronized, which is known as resilvering. If Oracle does not perform resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NOMWC. This is done by disabling mirror write caching and enabling mirror consistency recovery. With NOMWC, SLVM performs the resynchronization. Create logical volumes for use as Oracle data files by using the same options as in the following example:
# lvcreate -m 1 -M n -c y -s g -n system.dbf -L 408 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y means that mirror consistency recovery is enabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes.


If Oracle performs the resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NONE by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of NONE, SLVM does not perform the resynchronization.

NOTE

Contact Oracle to determine if your version of Oracle RAC allows resilvering and to appropriately configure the mirror consistency recovery policy for your logical volumes.

Create logical volumes for use as Oracle data files by using the same options as in the following example:
# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c n means that mirror consistency recovery is disabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes. If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/system.dbf has been successfully created with character device /dev/vg_ops/rsystem.dbf
Logical volume /dev/vg_ops/system.dbf has been successfully extended

Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the OPS database.

Creating RAC Volume Groups on Disk Arrays


The procedure described in this section assumes that you are using RAID-protected disk arrays and LVM's physical volume links (PV links) to define redundant data paths from each node in the cluster to every logical unit on the array. On your disk arrays, you should use redundant I/O channels from each node, connecting them to separate controllers on the array. Then you can define alternate links to the LUNs or logical disks you have defined on

the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line. The following example shows how to configure alternate links using LVM commands. The following disk configuration is assumed:
8/0.15.0   /dev/dsk/c0t15d0   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 0 */
8/0.15.1   /dev/dsk/c0t15d1   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 1 */
8/0.15.2   /dev/dsk/c0t15d2   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 2 */
8/0.15.3   /dev/dsk/c0t15d3   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 3 */
8/0.15.4   /dev/dsk/c0t15d4   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 4 */
8/0.15.5   /dev/dsk/c0t15d5   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 5 */
10/0.3.0   /dev/dsk/c1t3d0    /* I/O Channel 1 (10/0) SCSI address 3   LUN 0 */
10/0.3.1   /dev/dsk/c1t3d1    /* I/O Channel 1 (10/0) SCSI address 3   LUN 1 */
10/0.3.2   /dev/dsk/c1t3d2    /* I/O Channel 1 (10/0) SCSI address 3   LUN 2 */
10/0.3.3   /dev/dsk/c1t3d3    /* I/O Channel 1 (10/0) SCSI address 3   LUN 3 */
10/0.3.4   /dev/dsk/c1t3d4    /* I/O Channel 1 (10/0) SCSI address 3   LUN 4 */
10/0.3.5   /dev/dsk/c1t3d5    /* I/O Channel 1 (10/0) SCSI address 3   LUN 5 */

Assume that the disk array has been configured, and that both the following device files appear for the same LUN (logical disk) when you run the ioscan command:
/dev/dsk/c0t15d0
/dev/dsk/c1t3d0

Use the following procedure to configure a volume group for this logical disk:

1. First, set up the group directory for vg_ops:
# mkdir /dev/vg_ops

2. Next, create a control file named group in the directory /dev/vg_ops, as follows:
# mknod /dev/vg_ops/group c 64 0xhh0000

The major number is always 64, and the hexadecimal minor number has the format:
0xhh0000

where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:

# ls -l /dev/*/group

3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0

It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group.

4. Use the following to create the volume group with the two links:
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0 /dev/dsk/c1t3d0

LVM will now recognize the I/O channel represented by /dev/dsk/c0t15d0 as the primary link to the disk; if the primary link fails, LVM will automatically switch to the alternate I/O channel represented by /dev/dsk/c1t3d0. Use the vgextend command to add additional disks to the volume group, specifying the appropriate physical volume name for each PV link. Repeat the entire procedure for each distinct volume group you wish to create. For ease of system administration, you may wish to use different volume groups to separate logs from data and control files.
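For example, to add the next LUN from the listing above to the volume group using both of its PV links (device files taken from the sample disk configuration):
# pvcreate /dev/rdsk/c0t15d1
# vgextend /dev/vg_ops /dev/dsk/c0t15d1 /dev/dsk/c1t3d1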

NOTE

The default maximum number of volume groups in HP-UX is 10. If you intend to create enough new volume groups that the total exceeds ten, you must increase the maxvgs system parameter and then re-build the HP-UX kernel. Use SAM and select the Kernel Configuration area, then choose Configurable Parameters. Maxvgs appears on the list.

Creating Logical Volumes for RAC on Disk Arrays


After you create volume groups and add PV links to them, you define logical volumes for data, logs, and control files. The following are some examples:
# lvcreate -n ops1log1.log -L 28 /dev/vg_ops
# lvcreate -n opsctl1.ctl -L 108 /dev/vg_ops
# lvcreate -n system.dbf -L 408 /dev/vg_ops
# lvcreate -n opsdata1.dbf -L 208 /dev/vg_ops


Oracle Demo Database Files


The following set of files is required for the Oracle demo database, which you can create during the installation process.

Table 3-2   Required Oracle File Names for Demo Database

Logical Volume Name   Oracle File Size (MB)*   LV Size (MB)   Raw Logical Volume Path Name
opsctl1.ctl           100                      108            /dev/vg_ops/ropsctl1.ctl
opsctl2.ctl           100                      108            /dev/vg_ops/ropsctl2.ctl
opsctl3.ctl           100                      108            /dev/vg_ops/ropsctl3.ctl
ops1log1.log          20                       28             /dev/vg_ops/rops1log1.log
ops1log2.log          20                       28             /dev/vg_ops/rops1log2.log
ops1log3.log          20                       28             /dev/vg_ops/rops1log3.log
ops2log1.log          20                       28             /dev/vg_ops/rops2log1.log
ops2log2.log          20                       28             /dev/vg_ops/rops2log2.log
ops2log3.log          20                       28             /dev/vg_ops/rops2log3.log
opssystem.dbf         400                      408            /dev/vg_ops/ropssystem.dbf
opstemp.dbf           100                      108            /dev/vg_ops/ropstemp.dbf
opsusers.dbf          120                      128            /dev/vg_ops/ropsusers.dbf
opstools.dbf          15                       24             /dev/vg_ops/ropstools.dbf
opsdata1.dbf          200                      208            /dev/vg_ops/ropsdata1.dbf
opsdata2.dbf          200                      208            /dev/vg_ops/ropsdata2.dbf
opsdata3.dbf          200                      208            /dev/vg_ops/ropsdata3.dbf
opsrollback.dbf       300                      308            /dev/vg_ops/ropsrollback.dbf
opsspfile1.ora        5                        5              /dev/vg_ops/ropsspfile1.ora
opsundotbs1.dbf       312                      320            /dev/vg_ops/ropsundotbs1.dbf

Table 3-2   Required Oracle File Names for Demo Database (Continued)

Logical Volume Name   Oracle File Size (MB)*   LV Size (MB)   Raw Logical Volume Path Name
opsundotbs2.dbf       312                      320            /dev/vg_ops/ropsundotbs2.dbf
opsexample1.dbf       160                      168            /dev/vg_ops/ropsexample1.dbf
opscwmlite1.dbf       100                      108            /dev/vg_ops/ropscwmlite1.dbf
opsindx1.dbf          70                       78             /dev/vg_ops/ropsindx1.dbf
opsdrsys1.dbf         90                       98             /dev/vg_ops/ropsdrsys1.dbf

* The size of the logical volume is larger than the Oracle file size because Oracle needs extra space to allocate a header in addition to the file's actual data capacity.

Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vg_ops/ropsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl

After creating these files, set the owner to oracle and the group to dba with a file mode of 660. The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
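For example (a sketch that assumes every raw logical volume in vg_ops belongs to Oracle; adjust the list if the volume group also holds non-Oracle volumes):
# chown oracle:dba /dev/vg_ops/r*
# chmod 660 /dev/vg_ops/r*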


Displaying the Logical Volume Infrastructure


To display the volume group, use the vgdisplay command:
# vgdisplay -v /dev/vg_ops

Exporting the Logical Volume Infrastructure


Before the Oracle volume groups can be shared, their configuration data must be exported to other nodes in the cluster. This is done either in Serviceguard Manager or by using HP-UX commands, as shown in the following sections.

Exporting with LVM Commands

Use the following commands to set up the same volume group on another cluster node. In this example, the commands set up a new volume group on a system known as ftsys10. This volume group holds the same physical volume that was created on a configuration node known as ftsys9. To set up the volume group on ftsys10 (and other nodes), use the following steps:

1. On ftsys9, copy the mapping of the volume group to a specified file.
# vgexport -s -p -m /tmp/vg_ops.map /dev/vg_ops

2. Still on ftsys9, copy the map file to ftsys10 (and to additional nodes as necessary.)
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map

3. On ftsys10 (and other nodes, as necessary), create the volume group directory and the control file named group:
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0xhh0000

For the group file, the major number is always 64, and the hexadecimal minor number has the format:
0xhh0000


where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group

4. Import the volume group data using the map file from node ftsys9. On node ftsys10 (and other nodes, as necessary), enter:
# vgimport -s -m /tmp/vg_ops.map /dev/vg_ops


Installing Oracle Real Application Clusters


NOTE Some versions of Oracle RAC require installation of additional software. Refer to your version of Oracle for specific requirements.

Before installing the Oracle Real Application Clusters software, make sure the cluster is running. Log in as the oracle user on one node and then use the Oracle installer to install Oracle software and to build the correct Oracle runtime executables. When the executables are installed to a cluster file system, the Oracle installer has an option to install the executables once. When executables are installed to a local file system on each node, the Oracle installer copies the executables to the other nodes in the cluster. For details on Oracle installation, refer to the Oracle installation documentation. As part of this installation, the Oracle installer installs the executables and, optionally, the Oracle installer can build an Oracle demo database on the primary node. The demo database files can be either the character (raw) device file names for the logical volumes created earlier, or the database can reside on a cluster file system. For a demo database on SLVM or CVM, create logical volumes as shown in Table 3-2, Required Oracle File Names for Demo Database, earlier in this chapter. As the installer prompts for the database file names, use the pathnames of the raw logical volumes instead of using the defaults.

NOTE

If you do not wish to install the demo database, select install software only.


Cluster Configuration ASCII File


The following is an example of an ASCII configuration file generated with the cmquerycl command using the -w full option on a system with Serviceguard Extension for RAC. The OPS_VOLUME_GROUP parameters appear at the end of the file.
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME            cluster 1

# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations in which a
# running cluster fails, and then two equal-sized sub-clusters are both
# trying to form a new cluster. The cluster lock may be configured using
# only one of the following alternatives on a cluster:
#          the LVM lock disk
#          the quorum server
#
# Consider the following when configuring a cluster. For a two-node
# cluster, you must use a cluster lock. For a cluster of three or four
# nodes, a cluster lock is strongly recommended. For a cluster of more
# than four nodes, a cluster lock is recommended. If you decide to
# configure a lock for a cluster of more than four nodes, it must be a
# quorum server.
#
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that holds the
# cluster lock. This volume group should not be used by any other
# cluster as a cluster lock device.

# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,



# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system that is
# running the quorum server process. The QS_POLLING_INTERVAL
# (microseconds) is the interval at which Serviceguard checks to make
# sure the quorum server is running. The optional QS_TIMEOUT_EXTENSION
# (microseconds) is used to increase the time interval after which the
# quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the Serviceguard
# cluster parameters, including NODE_TIMEOUT and HEARTBEAT_INTERVAL.
# If you are experiencing quorum server timeouts, you can adjust these
# parameters, or you can include the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly affect the amount of
# time it takes for cluster reformation in the event of failure. For
# example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the
# QS_TIMEOUT_EXTENSION was set to 0. This delay applies even if there
# is no delay in contacting the Quorum Server. The recommended value
# for QS_TIMEOUT_EXTENSION is 0, which is used as the default, and the
# maximum supported value is 300000000 (5 minutes).
#
# For example, to configure a quorum server running on node qshost with
# 120 seconds for the QS_POLLING_INTERVAL and to add 2 seconds to the
# system assigned value for the quorum server timeout, enter:
#
# QS_HOST                 qshost
# QS_POLLING_INTERVAL     120000000
# QS_TIMEOUT_EXTENSION    2000000

# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname, and both cannot contain the full domain name.
# Each NETWORK_INTERFACE, if configured with an IPv4 address, must have
# ONLY one IPv4 address entry with it, which could be either HEARTBEAT_IP
# or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es), can have
# multiple IPv6 address entries (up to a maximum of 2, only one IPv6
# address entry belonging to site-local scope and only one belonging to
# global scope), which must all be STATIONARY_IP. They cannot be
# HEARTBEAT_IP.


NODE_NAME               ever3a
  NETWORK_INTERFACE     lan0
    STATIONARY_IP       15.244.64.140
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        192.77.1.1
  NETWORK_INTERFACE     lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE    /dev/tty0p0

# Primary Network Interfaces on Bridged Net 1: lan0.
#   Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
#   Possible standby Network Interfaces on Bridged Net 2: lan2.

# Cluster Timing Parameters (microseconds).

# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).

HEARTBEAT_INTERVAL      1000000
NODE_TIMEOUT            2000000

# Configuration/Reconfiguration Timing Parameters (microseconds).

AUTO_START_TIMEOUT          600000000
NETWORK_POLLING_INTERVAL    2000000

# Network Monitor Configuration Parameters. # The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected. # If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound # message count stops increasing or when both inbound and outbound # message counts stop increasing. # If set to INOUT, both the inbound and outbound message counts must



# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION       INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the
# cluster. You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES         150

# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of 8 login names
#    from the /etc/passwd file on the user host.
# 2. USER_HOST is where the user can issue Serviceguard commands.
#    If using Serviceguard Manager, it is the COM server.
#    Choose one of these three values: ANY_SERVICEGUARD_NODE, or
#    (any) CLUSTER_MEMBER_NODE, or a specific node. For node, use the
#    official hostname from the domain name server, and not an IP
#    address or fully qualified name.
# 3. USER_ROLE must be one of these three values:
#    * MONITOR: read-only capabilities for the cluster and packages
#    * PACKAGE_ADMIN: MONITOR, plus administrative commands for
#      packages in the cluster
#    * FULL_ADMIN: MONITOR and PACKAGE_ADMIN, plus the administrative
#      commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the cluster's
# nodes as root (UID=0). Access control policy cannot limit the root
# user's access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration
# file, and they apply to the entire cluster. PACKAGE_ADMIN can be set
# in the cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.



# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
#
# USER_NAME     john
# USER_HOST     noir
# USER_ROLE     FULL_ADMIN

# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP          /dev/vgdatabase
# VOLUME_GROUP          /dev/vg02

# List of OPS Volume Groups.
# Formerly known as DLM Volume Groups, these volume groups will be used
# by OPS or RAC cluster applications via the vgchange -a s command.
# (Note: the name DLM_VOLUME_GROUP is also still supported for
# compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP      /dev/vgdatabase
# OPS_VOLUME_GROUP      /dev/vg02


Creating a Storage Infrastructure with CFS


The following are example steps for creating a storage infrastructure for CFS.

Creating a SGeRAC Cluster with CFS 4.1 for Oracle 9i


The following software needs to be pre-installed in order to use this configuration:

SGeRAC and CFS are included with the HP Serviceguard Storage Management Suite bundle T2777BA or Mission Critical Operating Environment (MCOE) T2797BA.

CFS is included with the following bundles (SGeRAC needs to be installed separately):
- T2776BA HP Serviceguard Cluster File System for Oracle (SGCFSO)
- T2775BA HP Serviceguard Cluster File System (SGCFS)

Refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes.

In the following example, both the Oracle software and datafiles reside on CFS, and there is a single Oracle home. The following three CFS file systems are created for the Oracle home, the Oracle datafiles, and SRVM data:
/cfs/mnt1    - for Oracle Base and Home
/cfs/mnt2    - for Oracle datafiles
/cfs/cfssrvm - for SRVM data

Initializing the VERITAS Volume Manager


Use the following steps to create a two-node SGeRAC cluster with CFS and Oracle:

1. Initialize the VERITAS Volume Manager

If not already done, install the VxVM license key on all nodes. Use the following command:

# vxinstall


NOTE

CVM 4.1 does not require a rootdg.

2. Create the Cluster file:

# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b

Edit the cluster file.

3. Create the Cluster

# cmapplyconf -C clm.asc

4. Start the Cluster

# cmruncl
# cmviewcl

The following output will be displayed:
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS        STATE
  ever3a          up            running
  ever3b          up            running

5. Configure the Cluster Volume Manager (CVM)

Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM/CFS stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets.

# cfscluster config -s

The following output will be displayed:

CVM is now configured
Starting CVM...
It might take a few minutes to complete

When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:

# vxdctl -c mode

The following output will be displayed:

mode: enabled: cluster active - SLAVE
master: ever3b

or

mode: enabled: cluster active - MASTER
slave: ever3b

6. Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Twelfth Edition user's guide.

7. Initializing Disks for CVM/CFS

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /etc/vx/bin/vxdisksetup -i c4t4d0

8. Create the Disk Group for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init cfsdg1 c4t4d0

9. Create the Disk Group Multi-Node Package

Use the following command to add the disk group to the cluster:

# cfsdgadm add cfsdg1 all=sw

The following output will be displayed:

Package name SG-CFS-DG-1 was generated to control the resource.
Shared disk group cfsdg1 is associated with the cluster.

10. Activate the Disk Group

# cfsdgadm activate cfsdg1

11. Creating Volumes and Adding a Cluster Filesystem

# vxassist -g cfsdg1 make vol1 10240m
# vxassist -g cfsdg1 make vol2 10240m
# vxassist -g cfsdg1 make volsrvm 300m

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol1

The following output will be displayed:

version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol2

The following output will be displayed:

version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/volsrvm

The following output will be displayed:

version 6 layout
307200 sectors, 307200 blocks of size 1024, log size 1024 blocks
largefiles supported

12. Configure Mount Point

# cfsmntadm add cfsdg1 vol1 /cfs/mnt1 all=rw

The following output will be displayed:

Package name SG-CFS-MP-1 was generated to control the resource.
Mount point /cfs/mnt1 was associated with the cluster.

# cfsmntadm add cfsdg1 vol2 /cfs/mnt2 all=rw

The following output will be displayed:

Package name SG-CFS-MP-2 was generated to control the resource.
Mount point /cfs/mnt2 was associated with the cluster.

# cfsmntadm add cfsdg1 volsrvm /cfs/cfssrvm all=rw

The following output will be displayed:

Package name SG-CFS-MP-3 was generated to control the resource.
Mount point /cfs/cfssrvm was associated with the cluster.

13. Mount Cluster Filesystem

# cfsmount /cfs/mnt1
# cfsmount /cfs/mnt2
# cfsmount /cfs/cfssrvm

14. Check CFS Mount Points

# bdf | grep cfs
/dev/vx/dsk/cfsdg1/vol1      10485760    19651   9811985    0%   /cfs/mnt1
/dev/vx/dsk/cfsdg1/vol2      10485760    19651   9811985    0%   /cfs/mnt2
/dev/vx/dsk/cfsdg1/volsrvm     307200     1802    286318    1%   /cfs/cfssrvm

15. View the Configuration

# cmviewcl


CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS        STATE
  ever3a          up            running
  ever3b          up            running

MULTI_NODE_PACKAGES

PACKAGE          STATUS       STATE        AUTO_RUN     SYSTEM
SG-CFS-pkg       up           running      enabled      yes
SG-CFS-DG-1      up           running      enabled      no
SG-CFS-MP-1      up           running      enabled      no
SG-CFS-MP-2      up           running      enabled      no
SG-CFS-MP-3      up           running      enabled      no

Deleting CFS from the Cluster


Halt the applications that are using CFS file systems.

1. Unmount CFS Mount Points

# cfsumount /cfs/mnt1
# cfsumount /cfs/mnt2
# cfsumount /cfs/cfssrvm

2. Delete MP MNP

# cfsmntadm delete /cfs/mnt1

The following output will be generated:

Mount point /cfs/mnt1 was disassociated from the cluster

# cfsmntadm delete /cfs/mnt2

The following output will be generated:

Mount point /cfs/mnt2 was disassociated from the cluster

# cfsmntadm delete /cfs/cfssrvm

The following output will be generated:

Mount point /cfs/cfssrvm was disassociated from the cluster
Cleaning up resource controlling shared disk group cfsdg1
Shared disk group cfsdg1 was disassociated from the cluster.

NOTE

The disk group is deleted if there is no dependency.

3. Delete DG MNP

# cfsdgadm delete cfsdg1

The following output will be generated:

Shared disk group cfsdg1 was disassociated from the cluster.

NOTE

cfsmntadm delete also deletes the disk group if there is no dependent package. To ensure the disk group deletion is complete, use the above command to delete the disk group package.

4. De-configure CVM

# cfscluster stop

The following output will be generated:

Stopping CVM...CVM is stopped

# cfscluster unconfig

The following output will be generated:

CVM is now unconfigured


Creating a Storage Infrastructure with CVM


In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Volume Manager (VxVM), or VERITAS Cluster Volume Manager (CVM). LVM and VxVM configuration are done before cluster configuration, and CVM configuration is done after cluster configuration. This section shows how to configure storage using the VERITAS Cluster Volume Manager (CVM). The examples show how to configure RAC disk groups, but you can also create CVM disk groups for non-RAC use. For more information, including details about configuration of plexes (mirrors), multi-pathing, and RAID, refer to the HP-UX documentation for the VERITAS Volume Manager.

Initializing the VERITAS Volume Manager


If you are about to create disk groups for the first time, you need to initialize the Volume Manager. This is done by creating a disk group known as rootdg that contains at least one disk. Use the following command after installing CVM on each node:

# vxinstall

This displays a menu-driven program that steps you through the CVM initialization sequence. From the main menu, choose the Custom option, and specify the disk you wish to include in rootdg.

IMPORTANT

Creating a rootdg disk group is only necessary the first time you use the Volume Manager. CVM 4.1 does not require a rootdg.

Using CVM 4.x


This section has information on how to prepare the cluster and the system multi-node package with CVM 4.x only (without the CFS filesystem).

For more detailed information on how to configure CVM 4.x, refer to the Managing Serviceguard Twelfth Edition user's guide.

Preparing the Cluster and the System Multi-node Package for use with CVM 4.x

1. Create the Cluster file:

# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b

Edit the cluster file.

NOTE

To prepare the cluster for CVM configuration, make sure MAX_CONFIGURED_PACKAGES is set to a minimum of 3 in the cluster configuration file (the default value for MAX_CONFIGURED_PACKAGES for Serviceguard A.11.17 is 150). In this sample, the value is set to 10.
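For example, the corresponding line in clm.asc would read (the value 10 follows the sample above):

MAX_CONFIGURED_PACKAGES    10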

2. Create the Cluster

# cmapplyconf -C clm.asc

Start the Cluster:

# cmruncl
# cmviewcl

The following output will be displayed:
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS        STATE
  ever3a          up            running
  ever3b          up            running

3. Configure the Cluster Volume Manager (CVM)

Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets.

# cmapplyconf -P /etc/cmcluster/cfs/SG-CFS-pkg.conf

# cmrunpkg SG-CFS-pkg

When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:

# vxdctl -c mode

The following output will be displayed:

mode: enabled: cluster active - SLAVE
master: ever3b

Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Thirteenth Edition user's guide.

Initializing Disks for CVM

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /etc/vx/bin/vxdisksetup -i c4t4d0

Create the Disk Group for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init ops_dg c4t4d0

4. Creating Volumes and Adding a Cluster Filesystem

# vxassist -g ops_dg make vol1 10240m

# vxassist -g ops_dg make vol2 10240m
# vxassist -g ops_dg make volsrvm 300m

5. View the Configuration

# cmviewcl
CLUSTER           STATUS
ever3_cluster     up

  NODE            STATUS        STATE
  ever3a          up            running
  ever3b          up            running

MULTI_NODE_PACKAGES

PACKAGE          STATUS       STATE        AUTO_RUN     SYSTEM
SG-CFS-pkg       up           running      enabled      yes

IMPORTANT

After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:

# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *

The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.

Mirror Detachment Policies with CVM

The required CVM disk mirror detachment policy is global, which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is local, which means that if one node cannot see a specific mirror copy, then CVM will deactivate access to the volume for that node only. This policy can be re-set on a disk group basis by using the vxedit command, as follows:

# vxedit set diskdetpolicy=global <DiskGroupName>


NOTE

The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.

Using CVM 3.x


This section has information on how to prepare the cluster with CVM 3.x.

Preparing the Cluster for Use with CVM 3.x

In order to use the VERITAS Cluster Volume Manager (CVM) version 3.5, the cluster must be running with a special CVM package. This means that the cluster must already be configured and running before you create disk groups.

NOTE

Cluster configuration is described in the previous section.

To prepare the cluster for CVM disk group configuration, you need to ensure that only one heartbeat subnet is configured. Then use the following command, which creates the special package that communicates cluster information to CVM:

# cmapplyconf -P /etc/cmcluster/cvm/VxVM-CVM-pkg.conf

WARNING

This file should never be edited.

After the above command completes, start the cluster and create disk groups for shared use as described in the following sections.

Starting the Cluster and Identifying the Master Node

Run the cluster, which will activate the special CVM package:

# cmruncl

After the cluster is started, it will run with a special system multi-node package named VxVM-CVM-pkg, which is on all nodes. This package is shown in the following output of the cmviewcl -v command:
CLUSTER       STATUS
bowls         up

  NODE        STATUS        STATE
  spare       up            running
  split       up            running
  strike      up            running

SYSTEM_MULTI_NODE_PACKAGES:

PACKAGE          STATUS       STATE
VxVM-CVM-pkg     up           running

When CVM starts up, it selects a master node, and this is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:

# vxdctl -c mode

One node will identify itself as the master. Create disk groups from this node.

Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Twelfth Edition user's guide.

Initializing Disks for CVM

Initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2

Creating Disk Groups for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init ops_dg c0t3d2

Verify the configuration with the following command:

# vxdg list
NAME         STATE             ID
rootdg       enabled           971995699.1025.node1
ops_dg       enabled,shared    972078742.1084.node2


Creating Volumes
Use the vxassist command to create logical volumes. The following is an example:

# vxassist -g ops_dg make log_files 1024m

This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files.

Verify the configuration with the following command:

# vxdg list

IMPORTANT

After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:

# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *

The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.

Mirror Detachment Policies with CVM


The required CVM disk mirror detachment policy is global, which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is local, which means that if one node cannot see a specific mirror copy, then CVM will deactivate access to the volume for that node only. This policy can be re-set on a disk group basis by using the vxedit command, as follows:

# vxedit set diskdetpolicy=global <DiskGroupName>


NOTE

The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.


Oracle Demo Database Files


The following set of volumes is required for the Oracle demo database, which you can create during the installation process.

Table 3-3   Required Oracle File Names for Demo Database

Volume Name          Oracle File Size (MB)   Size (MB)   Raw Device File Name
opsctl1.ctl          100                     108         /dev/vx/rdsk/ops_dg/opsctl1.ctl
opsctl2.ctl          100                     108         /dev/vx/rdsk/ops_dg/opsctl2.ctl
opsctl3.ctl          100                     108         /dev/vx/rdsk/ops_dg/opsctl3.ctl
ops1log1.log         20                      28          /dev/vx/rdsk/ops_dg/ops1log1.log
ops1log2.log         20                      28          /dev/vx/rdsk/ops_dg/ops1log2.log
ops1log3.log         20                      28          /dev/vx/rdsk/ops_dg/ops1log3.log
ops2log1.log         20                      28          /dev/vx/rdsk/ops_dg/ops2log1.log
ops2log2.log         20                      28          /dev/vx/rdsk/ops_dg/ops2log2.log
ops2log3.log         20                      28          /dev/vx/rdsk/ops_dg/ops2log3.log
opssystem.dbf        400                     408         /dev/vx/rdsk/ops_dg/opssystem.dbf
opstemp.dbf          100                     108         /dev/vx/rdsk/ops_dg/opstemp.dbf
opsusers.dbf         120                     128         /dev/vx/rdsk/ops_dg/opsusers.dbf
opstools.dbf         15                      24          /dev/vx/rdsk/ops_dg/opstools.dbf
opsdata1.dbf         200                     208         /dev/vx/rdsk/ops_dg/opsdata1.dbf
opsdata2.dbf         200                     208         /dev/vx/rdsk/ops_dg/opsdata2.dbf
opsdata3.dbf         200                     208         /dev/vx/rdsk/ops_dg/opsdata3.dbf
opsrollback.dbf      300                     308         /dev/vx/rdsk/ops_dg/opsrollback.dbf
opsspfile1.ora       5                       5           /dev/vx/rdsk/ops_dg/opsspfile1.ora

Table 3-3   Required Oracle File Names for Demo Database (Continued)

Volume Name          Oracle File Size (MB)   Size (MB)   Raw Device File Name
opsundotbs1.dbf      312                     320         /dev/vx/rdsk/ops_dg/opsundotbs1.dbf
opsundotbs2.dbf      312                     320         /dev/vx/rdsk/ops_dg/opsundotbs2.dbf
opsexample1.dbf      160                     168         /dev/vx/rdsk/ops_dg/opsexample1.dbf
opscwmlite1.dbf      100                     108         /dev/vx/rdsk/ops_dg/opscwmlite1.dbf
opsindx1.dbf         70                      78          /dev/vx/rdsk/ops_dg/opsindx1.dbf

Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vx/rdsk/ops_dg/opsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl

Example, Oracle9: 1. Create an ASCII file, and define the path for each database object.
control1=/dev/vx/rdsk/ops_dg/opsctl1.ctl

or

control1=/u01/ORACLE/db001/ctrl01_1.ctl

2. Set the following environment variable where filename is the name of the ASCII file created.
# export DBCA_RAW_CONFIG=<full path>/filename
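A fuller mapping file typically contains one entry per database object, pointing at the raw volumes from Table 3-3. The sketch below is illustrative only: the key names on the left (system, users, temp, undotbs1, control1, redo1_1, spfile, and so on) follow common Oracle 9i DBCA conventions and are assumptions here; check the Oracle documentation for the exact object names your release expects.

system=/dev/vx/rdsk/ops_dg/opssystem.dbf
users=/dev/vx/rdsk/ops_dg/opsusers.dbf
temp=/dev/vx/rdsk/ops_dg/opstemp.dbf
undotbs1=/dev/vx/rdsk/ops_dg/opsundotbs1.dbf
undotbs2=/dev/vx/rdsk/ops_dg/opsundotbs2.dbf
control1=/dev/vx/rdsk/ops_dg/opsctl1.ctl
control2=/dev/vx/rdsk/ops_dg/opsctl2.ctl
redo1_1=/dev/vx/rdsk/ops_dg/ops1log1.log
redo1_2=/dev/vx/rdsk/ops_dg/ops1log2.log
spfile=/dev/vx/rdsk/ops_dg/opsspfile1.ora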


Adding Disk Groups to the Cluster Configuration


For CVM 4.x, if the multi-node package was configured for disk group activation, the application package should be configured with a package dependency to ensure the CVM disk group is active.

For CVM 3.5 and CVM 4.x (without using the multi-node package), after creating units of CVM storage with VxVM commands, you need to specify the disk groups in each package configuration ASCII file. Use one STORAGE_GROUP parameter for each CVM disk group the package will use. It is necessary to identify the CVM disk groups, file systems, logical volumes, and mount options in the package control script.

For more detailed information on the package configuration process, refer to the Managing Serviceguard Thirteenth Edition user's guide.
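As a minimal sketch of the two places a CVM disk group is named (the disk group ops_dg and the activation mode are carried over from the examples earlier in this chapter; adjust for your configuration):

# In the package ASCII configuration file (CVM 3.5, or CVM 4.x without
# the disk group multi-node package):
STORAGE_GROUP           ops_dg

# In the package control script:
CVM_DG[0]="ops_dg"
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedwrite"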


Installing Oracle 9i RAC


The following are sample steps for setting up a SGeRAC cluster for Oracle 9i. Refer to the Oracle documentation for Oracle installation details.

Install Oracle Software into CFS Home


Oracle RAC software is installed using the Oracle Universal Installer. This section describes installation of Oracle RAC software onto a CFS home.

1. Oracle Pre-installation Steps

a. Create user accounts.

Create the user and group for the Oracle accounts on all nodes using the following commands:

# groupadd -g 99 dba
# useradd -g dba -u 999 -d /cfs/mnt1/oracle oracle

Create the Oracle home directory on CFS:

# cd /cfs/mnt1
# mkdir /cfs/mnt1/oracle
# chown oracle:dba oracle

Change the password for the oracle account on all nodes:

# passwd oracle

b. Set up for remote commands.

Set up user equivalence for all nodes by adding node name entries to /etc/hosts.equiv, or add entries to the .rhosts file of the oracle account (see the sketch after these steps).

c. Set up the CFS directory for Oracle datafiles.

# cd /cfs/mnt2
# mkdir oradata
# chown oracle:dba oradata
# chmod 755 oradata

# ll


total 0
drwxr-xr-x   2 root     root      96 Jun  3 11:43 lost+found
drwxr-xr-x   2 oracle   dba       96 Jun  3 13:45 oradata

d. Set up the CFS directory for Server Management.

Preallocate space for SRVM (200 MB):

# prealloc /cfs/cfssrvm/ora_srvm 209715200
# chown oracle:dba /cfs/cfssrvm/ora_srvm

2. Install Oracle RAC Software

a. Install Oracle (software only) with the Oracle Universal Installer as the oracle user:

# su - oracle

When using CFS for SRVM, set SRVM_SHARED_CONFIG:

$ export SRVM_SHARED_CONFIG=/cfs/cfssrvm/ora_srvm

b. Set DISPLAY:

$ export DISPLAY=${display}:0.0

c. Run the Oracle Universal Installer and follow the installation steps:

$ cd <Oracle installation disk directory>
$ ./runInstaller
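The remote-command setup mentioned in step 1.b might look like the following minimal sketch, using the example node names ever3a and ever3b from this chapter (your node names will differ); the alternative is equivalent entries in the oracle user's .rhosts file:

# /etc/hosts.equiv on every cluster node
ever3a  oracle
ever3b  oracle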

Create Database with Oracle Tools


Refer to the Oracle documentation for more detailed information on creating an Oracle database.

1. Set up Environment Variables

Use the following as an example:
export ORACLE_BASE=/cfs/mnt1/oracle
export ORACLE_HOME=$ORACLE_BASE/product/9.2.0.2
export PATH=$ORACLE_HOME/bin:$PATH



LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export CLASSPATH=/opt/java1.3/lib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY=${display}:0.0

2. Set up Listeners with the Oracle Network Configuration Assistant

$ netca

3. Start GSD on all Nodes

$ gsdctl start
Output: Successfully started GSD on local node

4. Run the Database Configuration Assistant to Create the Database on the CFS File System

$ dbca -datafileDestination /cfs/mnt2/oradata


Verify that Oracle Disk Manager is Configured


NOTE The following steps are specific to CFS.

1. Check the license:

# /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f ODM

The following output will be displayed:

Using VERITAS License Manager API Version 3.00, Build 2
ODM feature is licensed

2. Check that the VRTSodm package is installed:

# swlist VRTSodm

The following output will be displayed:

VRTSodm            4.1m    VERITAS Oracle Disk Manager
VRTSodm.ODM-KRN    4.1m    VERITAS ODM kernel files
VRTSodm.ODM-MAN    4.1m    VERITAS ODM manual pages
VRTSodm.ODM-RUN    4.1m    VERITAS ODM commands

3. Check that libodm.sl is present:

# ll -L /opt/VRTSodm/lib/libodm.sl

The following output will be displayed:

-rw-r--r--   1 root   sys   14336 Apr 25 18:42 /opt/VRTSodm/lib/libodm.sl


Configure Oracle to use Oracle Disk Manager Library


NOTE The following steps are specific to CFS.

1. Log on as the oracle user.

2. Shut down the database.

3. Link the Oracle Disk Manager library into the Oracle home using the following commands:

For HP 9000 systems:

$ rm ${ORACLE_HOME}/lib/libodm9.sl
$ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm9.sl

For Integrity systems:

$ rm ${ORACLE_HOME}/lib/libodm9.so
$ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm9.so

4. Start the Oracle database.


Verify Oracle Disk Manager is Running


NOTE The following steps are specific to CFS.

1. Start the cluster and the Oracle database (if not already started).

2. Check that the Oracle instance is using the Oracle Disk Manager function with the following command:

# cat /dev/odm/stats
abort:                      0
cancel:                     0
commit:                    18
create:                    18
delete:                     0
identify:                 349
io:                  12350590
reidentify:                78
resize:                     0
unidentify:               203
mname:                      0
vxctl:                      0
vxvers:                    10
io req:               9102431
io calls:             6911030
comp req:            73480659
comp calls:           5439560
io mor cmp:            461063
io zro cmp:              2330
cl receive:             66145
cl ident:                  18
cl reserve:                 8
cl delete:                  1
cl resize:                  0
cl same op:                 0
cl opt idn:                 0
cl opt rsv:               332
**********:                17

3. Verify that the Oracle Disk Manager is loaded with the following command:

# kcmodule -P state odm

The following output will be displayed:

state   loaded

4. In the alert log, verify that the Oracle instance is running. The log should contain output similar to the following:

Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1


Configuring Oracle to Stop using Oracle Disk Manager Library


NOTE The following steps are specific to CFS.

1. Log in as the oracle user.

2. Shut down the database.

3. Change directories:

$ cd ${ORACLE_HOME}/lib

4. Remove the file linked to the ODM library and restore the default library link:

For HP 9000 systems:

$ rm libodm9.sl
$ ln -s ${ORACLE_HOME}/lib/libodmd9.sl ${ORACLE_HOME}/lib/libodm9.sl

For Integrity systems:

$ rm libodm9.so
$ ln -s ${ORACLE_HOME}/lib/libodmd9.so ${ORACLE_HOME}/lib/libodm9.so

5. Restart the database.


Using Packages to Configure Startup and Shutdown of RAC Instances


To automate the startup and shutdown of RAC instances on the nodes of the cluster, you can create packages which activate the appropriate volume groups and then run RAC. Refer to the section "Creating Packages to Launch Oracle RAC Instances."

NOTE

The maximum number of RAC instances for Oracle 9i is 127 per cluster. For Oracle 10g, refer to Oracle's requirements.

Starting Oracle Instances


Once the Oracle installation is complete, ensure that all package control scripts are in place on each node and that each /etc/rc.config.d/cmcluster script contains the entry AUTOSTART_CMCLD=1. Then reboot each node. Within a couple of minutes following reboot, the cluster will reform, and the package control scripts will bring up the database instances and application programs.

When Oracle has been started, you can use the SAM process management area or the ps -ef command on both nodes to verify that all RAC daemons and Oracle processes are running.

Starting Up and Shutting Down Manually

To start up and shut down RAC instances without using packages, you can perform the following steps. A command-level sketch follows the shutdown list below.

Starting up involves the following sequence:

1. Start up the cluster (cmrunnode or cmruncl).
2. Activate the database volume groups or disk groups in shared mode.
3. Bring up Oracle in shared mode.
4. Bring up the Oracle applications, if any.

Shutting down involves the following sequence:

1. Shut down the Oracle applications, if any.
2. Shut down Oracle.
3. Deactivate the database volume groups or disk groups.
4. Shut down the cluster (cmhaltnode or cmhaltcl).

If the shutdown sequence described above is not followed, cmhaltcl or cmhaltnode may fail with a message that GMS clients (RAC 9i) are active or that shared volume groups are active.
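The following is a minimal command-level sketch of the manual sequences, using the example volume group /dev/vg_ops from earlier in this chapter and assuming the instance's init parameters already enable cluster (shared) mode; your volume group names and startup method may differ:

# Startup, on each node that runs an instance:
# cmruncl                          # or cmrunnode on a single node
# vgchange -a s /dev/vg_ops        # activate shared volume groups
$ sqlplus "/ as sysdba"            # as the oracle user
SQL> startup

# Shutdown, in the reverse order:
SQL> shutdown immediate
# vgchange -a n /dev/vg_ops        # deactivate shared volume groups
# cmhaltcl                         # or cmhaltnode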

Creating Packages to Launch Oracle RAC Instances


To coordinate the startup and shutdown of RAC instances with cluster node startup and shutdown, you create a one-node package for each node that runs an RAC instance. In the package configuration file, you should specify only the single node on which the instance will run and specify the control script that is to be executed every time the instance node or the entire RAC cluster starts up or shuts down.

NOTE

You must create the RAC instance package with a PACKAGE_TYPE of FAILOVER, but the fact that you are entering only one node ensures that the instance will only run on that node.

To simplify the creation of RAC instance packages, you can use the Oracle template provided with the separately purchasable ECM Toolkits product (T1909BA). Use the special toolkit scripts that are provided, and follow the instructions that appear in the README file. Also refer to the section Customizing the Control Script for RAC Instances below for more information.

To create the package with Serviceguard Manager, select the cluster, go to the Actions menu, and choose Configure Package. To modify a package, select the package. For an instance package, create one package for each instance. On each node, supply the SID name for the package name.

To create a package on the command line, use the cmmakepkg command to get an editable configuration file. Set the AUTO_RUN parameter to YES if you want the instance to start up as soon as the node joins the cluster. In addition, you should set the NODE_FAILFAST_ENABLED parameter to NO.
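An edited excerpt of such a configuration file might look like the following sketch. The package and node names (ORACLE_TEST0, ever3a) are the examples used later in this chapter, and the exact parameter spellings should be taken from the template that cmmakepkg generates for your Serviceguard release:

# cmmakepkg -p /etc/cmcluster/pkg/ORACLE_TEST0/ORACLE_TEST0.conf

PACKAGE_NAME                ORACLE_TEST0
PACKAGE_TYPE                FAILOVER
# Only one node, so the instance never fails over:
NODE_NAME                   ever3a
AUTO_RUN                    YES
NODE_FAIL_FAST_ENABLED      NO
RUN_SCRIPT                  /etc/cmcluster/pkg/ORACLE_TEST0/control.sh
HALT_SCRIPT                 /etc/cmcluster/pkg/ORACLE_TEST0/control.sh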

If you are using CVM disk groups for the RAC database, be sure to include the name of each disk group on a separate STORAGE_GROUP line in the configuration file.

If you are using CFS or CVM for RAC shared storage with multi-node packages, the package containing the RAC instance should be configured with a package dependency on the multi-node packages. The following is a sample of the dependency conditions in an application package configuration file:
DEPENDENCY_NAME         mp1
DEPENDENCY_CONDITION    SG-CFS-MP-1=UP
DEPENDENCY_LOCATION     SAME_NODE

DEPENDENCY_NAME         mp2
DEPENDENCY_CONDITION    SG-CFS-MP-2=UP
DEPENDENCY_LOCATION     SAME_NODE

DEPENDENCY_NAME         mp3
DEPENDENCY_CONDITION    SG-CFS-MP-3=UP
DEPENDENCY_LOCATION     SAME_NODE

Configuring Packages that Access the Oracle RAC Database


You can also use packages to start up applications that access the RAC instances. If an application is intended to fail over among cluster nodes, then you must set it up as a distinct package, separate from the package that starts and stops the RAC instance. Use the following procedures for packages that contain applications which access the RAC database:

1. In the ASCII package configuration file, set the AUTO_RUN parameter to NO, or if you are using Serviceguard Manager to configure packages, set Automatic Switching to Disabled. This keeps the package from starting up immediately when the node joins the cluster, and before RAC is running.

2. You can then manually start the package using the cmmodpkg -e packagename command after RAC is started. Alternatively, you can choose to automate the process of package activation by writing your

own script, and copying it to all nodes that can run the package. This script should contain the cmmodpkg -e command and activate the package after RAC and the cluster manager have started.
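A minimal sketch of such a script is shown below. The package name pkg_app, the ORACLE_SID value, and the polling interval are hypothetical placeholders; the only essential element, per the text above, is the cmmodpkg -e call issued once RAC is up:

#!/usr/bin/sh
# Enable the application package once the local RAC instance is running.
ORACLE_SID=ORACLE_TEST0        # placeholder SID

# Wait until the instance's PMON background process appears.
while ! ps -ef | grep -q "[o]ra_pmon_${ORACLE_SID}"
do
    sleep 30
done

# RAC is up on this node; allow the application package to run.
cmmodpkg -e pkg_app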

Adding or Removing Packages on a Running Cluster


You can add or remove packages while the cluster is running, subject to the limit of MAX_CONFIGURED_PACKAGES. For more detailed information on adding or removing packages online, refer to the section Cluster and Package Maintenance in the Managing Serviceguard Thirteenth Edition user's guide.

Writing the Package Control Script


The package control script contains all the information necessary to run all the services in the package, monitor them during operation, react to a failure, and halt the package when necessary. You can use either Serviceguard Manager or HP-UX commands to create or modify the package control script. For security reasons, the control script must reside in a directory with the string cmcluster in the path.

Using Serviceguard Manager to Write the Package Control Script

As you complete the tabs for the configuration, the control script can be generated automatically. When asked to supply the pathname of the package run and halt scripts, use the filenames from the ECM toolkit. For more information, use the Help key. When you create a package control script this way, you do not need to do any further editing, but you may customize the script if you wish.

Using Commands to Write the Package Control Script

Each package must have a separate control script, which must be executable. The control script is placed in the package directory and is given the same name as specified in the RUN_SCRIPT and HALT_SCRIPT parameters in the package ASCII configuration file. The package control script template contains both the run instructions and the halt instructions for the package. You can use a single script for both run and halt operations, or, if you wish, you can create separate scripts.

Use the following procedure to create a control script for the sample package pkg1.

First, generate a control script template:
# cmmakepkg -s /etc/cmcluster/pkg1/control.sh

You may customize the script, as described in the section Customizing the Package Control Script.

Customizing the Package Control Script

Check the definitions and declarations at the beginning of the control script using the information in the Package Configuration worksheet. You need to customize as follows:

- Update the PATH statement to reflect any required paths needed to start your services.
- If you are using LVM, enter the names of volume groups to be activated using the VG[] array parameters, and select the appropriate options for the storage activation command, including options for mounting and unmounting filesystems, if desired. Do not use the VXVM_DG[] or CVM_DG[] parameters for LVM volume groups.
- If you are using CVM, enter the names of disk groups to be activated using the CVM_DG[] array parameters, and select the appropriate storage activation command, CVM_ACTIVATION_CMD. Do not use the VG[] or VXVM_DG[] parameters for CVM disk groups.
- If you are using VxVM disk groups without CVM, enter the names of VxVM disk groups that will be imported using the VXVM_DG[] array parameters. Enter one disk group per array element. Do not use the CVM_DG[] or VG[] parameters for VxVM disk groups without CVM. Also, do not specify an activation command.
- Add the names of logical volumes and the file systems that will be mounted on them.
- If you are using mirrored VxVM disks, specify the mirror recovery option VXVOL.
- Select the appropriate options for the storage activation command (not applicable for basic VxVM disk groups), and also include options for mounting and unmounting filesystems, if desired.
- Specify the filesystem mount retry and unmount count options.
- Define IP subnet and IP address pairs for your package.
- Add service name(s).

- Add service command(s).
- Add a service restart parameter, if desired.

NOTE

Use care in defining service run commands. Each run command is executed by the control script in the following way: the cmrunserv command executes each run command and then monitors the process ID of the process created by the run command. When the command started by cmrunserv exits, Serviceguard determines that a failure has occurred and takes appropriate action, which may include transferring the package to an adoptive node. If a run command is a shell script that runs some other command and then exits, Serviceguard will consider this normal exit as a failure.

To avoid problems in the execution of control scripts, ensure that each run command is the name of an actual service and that its process remains alive until the actual service stops.
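To illustrate the point, a service run command is usually a long-lived monitor rather than a one-shot script. The following is a minimal sketch of such a monitor (the PMON check and the 30-second interval are assumptions; the ECM Toolkit supplies a full-featured monitor you would normally use instead):

#!/usr/bin/sh
# Hypothetical monitor used as a SERVICE_CMD: stays alive while the RAC
# instance's PMON process exists, and exits only when it disappears, so
# cmrunserv treats the exit as a service failure.
ORACLE_SID=ORACLE_TEST0        # placeholder SID

while ps -ef | grep -q "[o]ra_pmon_${ORACLE_SID}"
do
    sleep 30
done
exit 1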

If you need to define a set of run and halt operations in addition to the defaults, create functions for them in the sections under the heading CUSTOMER DEFINED FUNCTIONS.

Optimizing for Large Numbers of Storage Units

A set of variables is provided to allow performance improvement when employing a large number of filesystems or storage groups. For more detail, see the comments in the control script template. They are:

- CONCURRENT_VGCHANGE_OPERATIONS defines the number of parallel LVM volume group activations during package startup, as well as deactivations during package shutdown.
- CONCURRENT_FSCK_OPERATIONS defines the number of parallel fsck operations that will be carried out at package startup.
- CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS defines the number of parallel mount operations during package startup and unmount operations during package shutdown.
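In the control script these appear as simple assignments; the values below are purely illustrative:

CONCURRENT_VGCHANGE_OPERATIONS=2
CONCURRENT_FSCK_OPERATIONS=16
CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS=16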

Customizing the Control Script for RAC Instances

Use the package control script to perform the following:

- Activation and deactivation of RAC volume groups.
- Startup and shutdown of the RAC instance.
- Monitoring of the RAC instance.

Set RAC environment variables in the package control script to define the correct execution environment for RAC. Enter the names of the LVM volume groups you wish to activate in shared mode in the VG[] array. Use a different array element for each RAC volume group. (Remember that RAC volume groups must also be coded in the cluster configuration file using OPS_VOLUME_GROUP parameters.) Be sure to specify shared activation with the vgchange command by setting the VGCHANGE parameter as follows:
VGCHANGE="vgchange -a s

If your disks are mirrored with LVM mirroring on separate physical paths and you want to override quorum, use the following setting:
VGCHANGE="vgchange -a s -q n

Enter the names of the CVM disk groups you wish to activate in shared mode in the CVM_DG[] array. Use a different array element for each RAC disk group. (Remember that CVM disk groups must also be coded in the package ASCII configuration file using STORAGE_GROUP parameters.) Be sure to specify an appropriate type of shared activation with the CVM activation command. For example:
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedwrite"

Do not define the RAC instance as a package service. Instead, include the commands that start up an RAC instance in the customer_defined_run_commands section of the package control script. Similarly, you should include the commands that halt an RAC instance in the customer_defined_halt_commands section of the package control script. Define the Oracle monitoring command as a service command, or else use the special Oracle script provided with the ECM Toolkit.
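A minimal sketch of those two sections is shown below. The sqlplus invocation, the test_return convention, and the assumption that the oracle user's profile sets ORACLE_HOME and ORACLE_SID are illustrative only; the ECM Toolkit scripts referenced above provide a complete, supported implementation:

function customer_defined_run_cmds
{
    # Start the RAC instance on this node as the oracle user.
    su - oracle -c "echo startup | \$ORACLE_HOME/bin/sqlplus -S '/ as sysdba'"
    test_return 51
}

function customer_defined_halt_cmds
{
    # Halt the RAC instance on this node as the oracle user.
    su - oracle -c "echo 'shutdown immediate' | \$ORACLE_HOME/bin/sqlplus -S '/ as sysdba'"
    test_return 52
}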

Using the Command Line to Configure an Oracle RAC Instance Package

Serviceguard Manager provides a template to configure package behavior that is specific to an Oracle RAC instance package. The RAC instance package starts the Oracle RAC instance, monitors the Oracle processes, and stops the RAC instance. The configuration of the RAC instance package makes use of the Enterprise Cluster Master Toolkit (ECMT) to start, monitor, and stop the Oracle database instance. For details on the use of ECMT, refer to the ECMT documentation.

Each Oracle RAC database can have a database instance running on all nodes of a SGeRAC cluster. Therefore, it is not necessary to fail over the database instance to a different SGeRAC node. This is the main difference between an Oracle RAC instance package and a single-instance Oracle package.

Information for Creating the Oracle RAC Instance Package on a SGeRAC Node

Use the following steps to set up the pre-package configuration on a SGeRAC node:

1. Gather the RAC instance SID_NAME. If you are using Serviceguard Manager, this is in the cluster Properties.

Example: SID_NAME=ORACLE_TEST0

For an Oracle RAC instance in a two-node cluster, each node would have an SID_NAME.

2. Gather the RAC instance package name for each node, which should be the same as the SID_NAME for each node.

Example: ORACLE_TEST0

3. Gather the shared volume group name for the RAC database. In Serviceguard Manager, see the cluster Properties.

Example: /dev/vgora92db

4. Create the Oracle RAC instance package directory /etc/cmcluster/pkg/${SID_NAME}

Example: /etc/cmcluster/pkg/ORACLE_TEST0

5. Copy the Oracle shell script templates from the ECMT default source directory to the package directory:

# cd /etc/cmcluster/pkg/${SID_NAME}
# cp -p /opt/cmcluster/toolkit/oracle/* .

Example:

# cd /etc/cmcluster/pkg/ORACLE_TEST0
# cp -p /opt/cmcluster/toolkit/oracle/* .

Edit haoracle.conf as described in the README.

6. Gather the package service name for monitoring Oracle instance processes. In Serviceguard Manager, this information can be found under the Services tab.

SERVICE_NAME[0]=${SID_NAME}
SERVICE_CMD[0]=/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh
SERVICE_RESTART[0]="-r 2"

Example:

SERVICE_NAME[0]=ORACLE_TEST0
SERVICE_CMD[0]=/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh
SERVICE_RESTART[0]="-r 2"

7. Gather how to start the database using an ECMT script. In Serviceguard Manager, enter this filename for the control script start command.

/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh start

Example:

/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh start

8. Gather how to stop the database using an ECMT script. In Serviceguard Manager, enter this filename for the control script stop command.

/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh stop

Example:

/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh stop

Using Serviceguard Manager to Configure an Oracle RAC Instance Package

The following steps use the information from the example in section 2.2. It is assumed that the SGeRAC cluster environment is configured and that ECMT can be used to start the Oracle RAC database instance.

1. Start Serviceguard Manager and connect to the cluster. Figure 3-1 shows a RAC instance package for node sg21. The package name is ORACLE_TEST0.

Figure 3-1   Serviceguard Manager display for a RAC Instance package

2. Create the Package.

3. Select Parameters and select the parameters to edit.

Next select the check box Enable template(x) to enable the Package Template for Oracle RAC. The template defaults can be reset with the Reset template defaults push button. When enabling the template for Oracle RAC, the package can only be run on one node.

4. Select the Node tab and select the node to run this package.

5. Select the Networks tab and add the monitored subnet for the package.

6. Select the Services tab and configure services.

7. Select the Control Script tab and configure parameters. Configure volume groups and customer defined run/halt functions.

8. Apply the package configuration after filling in the specified parameters.

Enabling DB Provider Monitoring

To monitor a remote Serviceguard RAC cluster, the entry of the GUI user name and server name must be in the /etc/cmcluster/cmclnodelist file on all nodes in the cluster to be viewed.

DB Provider Monitoring Using Control Access Policy (CAP)

To monitor a local SGeRAC cluster as a non-root user, the GUI user name, server name, and user role (at least the monitor role) must be configured through CAP. For a remote cluster, any GUI user name (non-root or root), server name, and user role (at least the monitor role) should be configured through CAP. Please refer to the Control Access Policy for Serviceguard Commands and API Clients External Specification for details.


Maintenance and Troubleshooting


This chapter includes information about carrying out routine maintenance on a Real Application Clusters configuration. As presented here, these tasks differ in some details from the similar tasks described in the Managing Serviceguard user's guide. Tasks include:

- Reviewing Cluster and Package States with the cmviewcl Command
- Online Reconfiguration
- Managing the Shared Storage
- Removing Serviceguard Extension for RAC from a System
- Monitoring Hardware
- Adding Disk Hardware
- Replacing Disks
- Replacement of I/O Cards
- Replacement of LAN Cards


Reviewing Cluster and Package States with the cmviewcl Command


A cluster or its component nodes may be in several different states at different points in time. Status information for clusters, packages, and other cluster elements is shown in the output of the cmviewcl command and in some displays in Serviceguard Manager. This section explains the meaning of many of the common conditions the cluster or package may be in.

Information about cluster status is stored in the status database, which is maintained on each individual node in the cluster. You can display information contained in this database by issuing the cmviewcl command:

# cmviewcl -v

The command, when issued with the -v option, displays information about the whole cluster. See the man page for a detailed description of other cmviewcl options.

TIP

Some commands take longer to complete in large configurations. In particular, you can expect Serviceguard's CPU utilization to increase during cmviewcl -v as the number of packages and services increases.

You can also specify that the output should be formatted as it was in a specific earlier release by using the -r option to indicate the release format you want. Example:

# cmviewcl -r A.11.16

See the man page for a detailed description of other cmviewcl options.



Examples of Cluster and Package States

The following is an example of the output generated by the cmviewcl command:

CLUSTER        STATUS
cluster_mo     up

  NODE         STATUS       STATE
  minie        up           running

  Quorum_Server_Status:
  NAME         STATUS       STATE
  white        up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH            NAME
  PRIMARY      up           0/0/0/0         lan0
  PRIMARY      up           0/8/0/0/4/0     lan1
  STANDBY      up           0/8/0/0/6/0     lan3

  NODE         STATUS       STATE
  mo           up           running

  Quorum_Server_Status:
  NAME         STATUS       STATE
  white        up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH            NAME
  PRIMARY      up           0/0/0/0         lan0
  PRIMARY      up           0/8/0/0/4/0     lan1
  STANDBY      up           0/8/0/0/6/0     lan3

MULTI_NODE_PACKAGES

  PACKAGE        STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-pkg     up        running    enabled     yes

  NODE_NAME    STATUS       SWITCHING
  minie        up           enabled

  Script_Parameters:
  ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
  Service    up       0              0          SG-CFS-vxconfigd
  Service    up       5              0          SG-CFS-sgcvmd
  Service    up       5              0          SG-CFS-vxfsckd
  Service    up       0              0          SG-CFS-cmvxd
  Service    up       0              0          SG-CFS-cmvxpingd

  NODE_NAME    STATUS       SWITCHING
  mo           up           enabled

  Script_Parameters:
  ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
  Service    up       0              0          SG-CFS-vxconfigd
  Service    up       5              0          SG-CFS-sgcvmd
  Service    up       5              0          SG-CFS-vxfsckd
  Service    up       0              0          SG-CFS-cmvxd
  Service    up       0              0          SG-CFS-cmvxpingd

  PACKAGE        STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-DG-1    up        running    enabled     no

  NODE_NAME    STATUS       STATE        SWITCHING
  minie        up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-pkg             yes

  NODE_NAME    STATUS       STATE        SWITCHING
  mo           up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-pkg             yes

  PACKAGE        STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-MP-1    up        running    enabled     no

  NODE_NAME    STATUS       STATE        SWITCHING
  minie        up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-DG-1            yes

  NODE_NAME    STATUS       STATE        SWITCHING
  mo           up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-DG-1            yes

  PACKAGE        STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-MP-2    up        running    enabled     no

  NODE_NAME    STATUS       STATE        SWITCHING
  minie        up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-DG-1            yes

  NODE_NAME    STATUS       STATE        SWITCHING
  mo           up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-DG-1            yes

  PACKAGE        STATUS    STATE      AUTO_RUN    SYSTEM
  SG-CFS-MP-3    up        running    enabled     no

  NODE_NAME    STATUS       STATE        SWITCHING
  minie        up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-DG-1            yes

  NODE_NAME    STATUS       STATE        SWITCHING
  mo           up           running      enabled

  Dependency_Parameters:
  DEPENDENCY_NAME        SATISFIED
  SG-CFS-DG-1            yes

Types of Cluster and Package States

A cluster or its component nodes may be in several different states at different points in time. The following sections describe many of the common conditions the cluster or package may be in.

Cluster Status

The status of a cluster may be one of the following:

Up. At least one node has a running cluster daemon, and reconfiguration is not taking place.
Down. No cluster daemons are running on any cluster node.
Starting. The cluster is in the process of determining its active membership. At least one cluster daemon is running.
Unknown. The node on which the cmviewcl command is issued cannot communicate with other nodes in the cluster.


Node Status and State

The status of a node is either up (active as a member of the cluster) or down (inactive in the cluster), depending on whether its cluster daemon is running or not. Note that a node might be down from the cluster perspective, but still up and running HP-UX.

A node may also be in one of the following states:

Failed. A node never sees itself in this state. Other active members of the cluster will see a node in this state if that node was in an active cluster, but is no longer, and is not halted.
Reforming. A node is in this state when the cluster is re-forming. The node is currently running the protocols which ensure that all nodes agree to the new membership of an active cluster. If agreement is reached, the status database is updated to reflect the new cluster membership.
Running. A node in this state has completed all required activity for the last re-formation and is operating normally.
Halted. A node never sees itself in this state. Other nodes will see it in this state after the node has gracefully left the active cluster, for instance with a cmhaltnode command.
Unknown. A node never sees itself in this state. Other nodes assign a node this state if it has never been an active cluster member.

Package Status and State

The status of a package can be one of the following:

Up. The package control script is active.
Down. The package control script is not active.
Unknown.

A system multi-node package is up when it is running on all the active cluster nodes. A multi-node package is up if it is running on any of its configured nodes.

The state of the package can be one of the following:

Starting. The start instructions in the control script are being run.
Running. Services are active and being monitored.
Halting. The halt instructions in the control script are being run.

Package Switching Attributes

Packages also have the following switching attributes:

Package Switching. Enabled means that the package can switch to another node in the event of failure.
Switching Enabled for a Node. Enabled means that the package can switch to the referenced node. Disabled means that the package cannot switch to the specified node until the node is enabled for the package using the cmmodpkg command.

Every package is marked Enabled or Disabled for each node that is either a primary or adoptive node for the package. For multi-node packages, node switching Disabled means the package cannot start on that node.

Status of Group Membership

The state of the cluster for Oracle RAC is one of the following:

Up. Services are active and being monitored. The membership appears in the output of cmviewcl -l group.
Down. The cluster is halted and GMS services have been stopped. The membership does not appear in the output of cmviewcl -l group.

The following is an example of the group membership output shown in the cmviewcl command:
# cmviewcl -l group

GROUP        MEMBER   PID      MEMBER_NODE
DGop         1        10394    comanche
             0        10499    chinook
DBOP         1        10501    comanche
             0        10396    chinook
DAALL_DB     0        10396    comanche
             1        10501    chinook
IGOPALL      2        10423    comanche
             1        10528    chinook

where the cmviewcl output values are:

GROUP          the name of a configured group
MEMBER         the ID number of a member of a group
PID            the Process ID of the group member
MEMBER_NODE    the node on which the group member is running

Service Status

Services have only status, as follows:

Up. The service is being monitored.
Down. The service is not running. It may have halted or failed.
Uninitialized. The service is included in the package configuration, but it was not started with a run command in the control script.
Unknown.

Network Status

The network interfaces have only status, as follows:

Up.
Down.
Unknown. We cannot determine whether the interface is up or down. This can happen when the cluster is down. A standby interface has this status.

Serial Line Status

The serial line has only status, as follows:

Up. Heartbeats are received over the serial line.
Down. Heartbeat has not been received over the serial line within 2 times the NODE_TIMEOUT value.
Recovering. A corrupt message was received on the serial line, and the line is in the process of resynchronizing.
Unknown. We cannot determine whether the serial line is up or down. This can happen when the remote node is down.


Failover and Failback Policies

Packages can be configured with one of two values for the FAILOVER_POLICY parameter:

CONFIGURED_NODE. The package fails over to the next node in the node list in the package configuration file.
MIN_PACKAGE_NODE. The package fails over to the node in the cluster with the fewest running packages on it.

Packages can also be configured with one of two values for the FAILBACK_POLICY parameter:

AUTOMATIC. With this setting, a package, following a failover, returns to its primary node when the primary node becomes available again.
MANUAL. With this setting, a package, following a failover, must be moved back to its original node by a system administrator.

Failover and failback policies are displayed in the output of the cmviewcl -v command.
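For reference, these policies are set in the package ASCII configuration file before the configuration is applied with cmapplyconf. The following is a minimal sketch of the relevant lines only; the package name and node list are hypothetical, and all other package parameters are omitted:

PACKAGE_NAME        ops_pkg1
FAILOVER_POLICY     CONFIGURED_NODE
FAILBACK_POLICY     MANUAL
NODE_NAME           ftsys9
NODE_NAME           ftsys10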

Examples of Cluster and Package States


The following sample output from the cmviewcl -v command shows status for the cluster in the sample configuration.

Normal Running Status

Everything is running normally; both nodes in a two-node cluster are running, and each Oracle RAC instance package is running as well. The only packages running are Oracle RAC instance packages.
CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           56/36.1       lan0
  STANDBY      up           60/6          lan1

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  ops_pkg1     up           running      disabled     ftsys9

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Start           configured_node
  Failback        manual

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      ftsys9      (current)

  NODE         STATUS       STATE
  ftsys10      up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           28.1          lan0
  STANDBY      up           32.1          lan1

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  ops_pkg2     up           running      disabled     ftsys10

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Start           configured_node
  Failback        manual

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      ftsys10     (current)
  Alternate    up           enabled      ftsys9

Quorum Server Status

If the cluster is using a quorum server for tie-breaking services, the display shows the server name, state and status following the entry for each node, as in the following excerpt from the output of cmviewcl -v:

CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  Quorum Server Status:
  NAME         STATUS       STATE
  lp-qs        up           running
...

  NODE         STATUS       STATE
  ftsys10      up           running

  Quorum Server Status:
  NAME         STATUS       STATE
  lp-qs        up           running

CVM Package Status

If the cluster is using the VERITAS Cluster Volume Manager for disk storage, the system multi-node package VxVM-CVM-pkg must be running on all active nodes for applications to be able to access CVM disk groups. This package is shown in the following output of the cmviewcl command:

CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys8       down         halted
  ftsys9       up           running

SYSTEM_MULTI_NODE_PACKAGES:

  PACKAGE           STATUS       STATE
  VxVM-CVM-pkg      up           running

When you use the -v option, the display shows the system multi-node package associated with each active node in the cluster, as in the following:

SYSTEM_MULTI_NODE_PACKAGES:

  PACKAGE           STATUS       STATE
  VxVM-CVM-pkg      up           running

  NODE         STATUS       STATE
  ftsys8       down         halted
  ftsys9       up           running

  Script_Parameters:
  ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
  Service    up       0              0          VxVM-CVM-pkg.srv

Status After Moving the Package to Another Node

After issuing the following command:

# cmrunpkg -n ftsys9 pkg2

the output of the cmviewcl -v command is as follows:

CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           56/36.1       lan0
  STANDBY      up           60/6          lan1

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  pkg1         up           running      enabled      ftsys9

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Failover        min_package_node
  Failback        manual

  Script_Parameters:
  ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
  Service    up       0              0          service1
  Subnet     up       0              0          15.13.168.0
  Resource   up                                 /example/float

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      ftsys9      (current)
  Alternate    up           enabled      ftsys10

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  pkg2         up           running      disabled     ftsys9

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Failover        min_package_node
  Failback        manual

  Script_Parameters:
  ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
  Service    up       0              0          service2.1
  Subnet     up       0              0          15.13.168.0

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      ftsys10
  Alternate    up           enabled      ftsys9      (current)

  NODE         STATUS       STATE
  ftsys10      up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           28.1          lan0
  STANDBY      up           32.1          lan1

Now pkg2 is running on node ftsys9. Note that it is still disabled from switching.

Status After Package Switching is Enabled

The following command changes package status back to Package Switching Enabled:
# cmmodpkg -e pkg2

The output of the cmviewcl command is now as follows:


CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  pkg1         up           running      enabled      ftsys9
  pkg2         up           running      enabled      ftsys9

  NODE         STATUS       STATE
  ftsys10      up           running


Both packages are now running on ftsys9 and pkg2 is enabled for switching. Ftsys10 is running the daemon and no packages are running on ftsys10.

Status After Halting a Node

After halting ftsys10 with the following command:
# cmhaltnode ftsys10

the output of cmviewcl is as follows on ftsys9:


CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  pkg1         up           running      enabled      ftsys9
  pkg2         up           running      enabled      ftsys9

  NODE         STATUS       STATE
  ftsys10      down         halted

This output is seen on both ftsys9 and ftsys10.

Viewing RS232 Status

If you are using a serial (RS232) line as a heartbeat connection, you will see a list of configured RS232 device files in the output of the cmviewcl -v command. The following shows normal running status:

CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           56/36.1       lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME      STATUS       CONNECTED_TO:
  /dev/tty0p0           up           ftsys10 /dev/tty0p0

  NODE         STATUS       STATE
  ftsys10      up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           28.1          lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME      STATUS       CONNECTED_TO:
  /dev/tty0p0           up           ftsys9 /dev/tty0p0

The following shows status when the serial line is not working:

CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           56/36.1       lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME      STATUS       CONNECTED_TO:
  /dev/tty0p0           down         ftsys10 /dev/tty0p0

  NODE         STATUS       STATE
  ftsys10      up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH          NAME
  PRIMARY      up           28.1          lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME      STATUS       CONNECTED_TO:
  /dev/tty0p0           down         ftsys9 /dev/tty0p0

Viewing Data on Unowned Packages

The following example shows packages that are currently unowned, that is, not running on any configured node. Information on monitored resources is provided for each node on which the package can run; this information allows you to identify the cause of a failure and decide where to start the package up again.


UNOWNED_PACKAGES

  PACKAGE      STATUS       STATE        AUTO_RUN     NODE
  PKG3         down         halted       enabled      unowned

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Failover        min_package_node
  Failback        automatic

  Script_Parameters:
  ITEM       STATUS   NODE_NAME    NAME
  Resource   up       manx         /resource/random
  Subnet     up       manx         192.8.15.0
  Resource   up       burmese      /resource/random
  Subnet     up       burmese      192.8.15.0
  Resource   up       tabby        /resource/random
  Subnet     up       tabby        192.8.15.0
  Resource   up       persian      /resource/random
  Subnet     up       persian      192.8.15.0

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      manx
  Alternate    up           enabled      burmese
  Alternate    up           enabled      tabby
  Alternate    up           enabled      persian



Online Reconfiguration
The online reconfiguration feature provides a method to make configuration changes online to a Serviceguard Extension for RAC (SGeRAC) cluster. Specifically, this provides the ability to add and/or delete nodes from a running SGeRAC cluster, and to reconfigure an SLVM volume group (VG) while it is being accessed by only one node.

Online Node Addition and Deletion


Online node addition and deletion enables nodes to be added to or deleted from a running SGeRAC cluster. Node(s) can be added and/or deleted by changing the cluster configuration. This is done by editing the cluster specification file and re-applying the configuration to the already running cluster. When deleting nodes online, the node(s) must be halted before they are deleted from the cluster.

Use the following steps for adding a node using online node reconfiguration (a consolidated command sketch follows this list):

1. Export the mapfile for the volume groups that need to be visible on the new node (vgexport -s -m mapfile -p <sharedvg>).
2. Copy the mapfile to the new node.
3. Import the volume groups into the new node (vgimport -s -m mapfile <sharedvg>).
4. Add the node to the cluster online: edit the cluster configuration file to add the node details and run cmapplyconf.
5. Make the new node join the cluster (cmrunnode) and run the services.

Use the following steps for deleting a node using online node reconfiguration:

1. Halt the node in the cluster by running cmhaltnode.
2. Edit the cluster configuration file to delete the node(s).
3. Run cmapplyconf.
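The following is a minimal command sketch of the node-addition sequence above. The volume group name (vg_ops), node names (node1, node3), map file path, and configuration file name are placeholders; substitute the names used in your cluster.

On node1 (an existing cluster member):

# vgexport -s -p -m /tmp/vg_ops.map vg_ops
# rcp /tmp/vg_ops.map node3:/tmp/vg_ops.map

On node3 (the node being added; choose an unused minor number hh):

# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0xhh0000
# vgimport -s -m /tmp/vg_ops.map vg_ops

On the configuration node, obtain the current cluster configuration, edit it to add the node details for node3, and re-apply it:

# cmgetconf clusterconf.ascii
# cmapplyconf -C clusterconf.ascii

On node3, join the cluster:

# cmrunnode node3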



Managing the Shared Storage


Single Node Online volume Re-Configuration (SNOR)
The SLVM Single Node Online volume Re-configuration (SNOR) feature provides a method for changing the configuration of an active shared volume group (VG) in a SGeRAC cluster. SLVM SNOR allows the reconfiguration of a shared volume group and of logical and physical volumes in the VG. This is done while keeping the VG active on a single node in exclusive mode. After the reconfiguration, the VG can be reactivated in shared mode and later can be activated on other nodes. In addition, because the VG stays active on a single node in exclusive mode, the application can remain up and running on that node. The volume group changes need to be performed on the node where the vgchange command is executed. A command line option to the vgchange command (vgchange -a <e or s> -x) allows activation mode switches between shared and exclusive on a shared VG that is active on a single node.

Use the following procedure to perform an online re-configuration:

1. Identify the shared volume group on which a configuration change is required, for example vg_shared.

2. Identify one node of the cluster, for example node1, that is running an application, such as RAC, using the shared volume group. The applications using the volume group vg_shared on this node will remain unaffected during the procedure. The cluster application needs to be scaled down to the single cluster node using the volume group vg_shared.

3. On all other nodes of the cluster, except node1, deactivate the volume group using vgchange(1M) with the -a n option. For example:

# vgchange -a n vg_shared

Ensure the volume group vg_shared is now active only on the single cluster node, node1.

4. Change the activation mode to exclusive on node1, where the volume group is active, using the vgchange(1M) command option. For example:


# vgchange -a e -x vg_shared

NOTE

Ensure that none of the mirrored logical volumes in this volume group have Consistency Recovery set to MWC (refer to lvdisplay(1M)). Changing the mode back to shared will not be allowed in that case, since Mirror Write Cache (MWC) consistency recovery is not valid in volume groups activated in shared mode.

5. Make the desired configuration change for the volume group on the node where the volume group is active; run the required command to change the configuration. For example, to add a mirror copy, use the following command:

# lvextend -m 2 /dev/vg_shared/lvol1

6. Export the changes to other cluster nodes if required. If the configuration change required the creation or deletion of a logical or physical volume (that is, any of the following commands were used: lvcreate(1M), lvreduce(1M), vgextend(1M), vgreduce(1M), lvsplit(1M), lvmerge(1M)), then the following sequence of steps is required.

a. From the same node, export the mapfile for vg_shared. For example:

# vgexport -s -p -m /tmp/vg_shared.map vg_shared

b. Copy the mapfile thus obtained to all the other nodes of the cluster.

c. On the other cluster nodes, export vg_shared and re-import it using the new map file. For example:

# vgexport vg_shared
# mkdir /dev/vg_shared
# mknod /dev/vg_shared/group c 64 0xhh0000
# vgimport -s -m /tmp/vg_shared.map vg_shared

CAUTION

If Business Copies, or Business Continuity Volumes (BCs or BCVs) are in use, then run vgchgid(1M) before starting the procedure.


The vgimport(1M)/vgexport(1M) sequence will not preserve the order of physical volumes in the /etc/lvmtab file. If the ordering is significant due to the presence of active-passive devices, or if the volume group has been configured to maximize throughput by ordering the paths accordingly, the ordering would need to be repeated.

7. Change the activation mode back to shared on the node in the cluster where the volume group vg_shared is active:

# vgchange -a s -x vg_shared

On the other cluster nodes, activate vg_shared in shared mode:

# vgchange -a s vg_shared

8. Back up the changes made to the volume group using vgcfgbackup on all nodes:

# vgcfgbackup vg_shared
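For convenience, the following is a minimal end-to-end sketch of the SNOR cycle described above, using the example names vg_shared and lvol1; the mirror-copy change shown in the middle is only one possible reconfiguration.

On every node except node1:

# vgchange -a n vg_shared

On node1 (the VG stays active and the application keeps running):

# vgchange -a e -x vg_shared
# lvextend -m 2 /dev/vg_shared/lvol1
# vgexport -s -p -m /tmp/vg_shared.map vg_shared      (only if volumes were created or deleted)
# vgchange -a s -x vg_shared
# vgcfgbackup vg_shared

On the other nodes:

# vgchange -a s vg_shared
# vgcfgbackup vg_shared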

Making LVM Volume Groups Shareable


Normally, volume groups are marked to be activated in shared mode when they are listed with the OPS_VOLUME_GROUP parameter in the cluster configuration file or in Serviceguard Manager; the marking occurs when the configuration is applied. However, in some cases you may want to manually make a volume group shareable. For example, if you wish to add a new shared volume group without shutting down the cluster, you can use the manual method to do it online. However, when convenient, it's a good practice to bring down the cluster and reconfigure it to include the new volume group.

1. Use the vgchange command on each node to ensure that the volume group to be shared is currently inactive on all nodes. Example:
# vgchange -a n /dev/vg_ops

2. On the configuration node, use the vgchange command to make the volume group shareable by members of the cluster:
# vgchange -S y -c y /dev/vg_ops


This command is issued from the configuration node only, and the cluster must be running on all nodes for the command to succeed. Note that both the -S and the -c options are specified. The -S y option makes the volume group shareable, and the -c y option causes the cluster ID to be written out to all the disks in the volume group. In effect, this command specifies the cluster to which a node must belong in order to obtain shared access to the volume group.

Making a Volume Group Unshareable

Use the following steps to unmark a previously marked shared volume group:

1. Remove the volume group name from the ASCII cluster configuration file.
2. Enter the following command:
# vgchange -S n -c n /dev/volumegroup

The above example marks the volume group as non-shared and not associated with a cluster.

Activating an LVM Volume Group in Shared Mode


Activation and deactivation of shared volume groups is normally done through a control script. If you need to perform activation from the command line, you can issue the following command from each node to activate the volume group in shared mode. (The node on which you first enter the command becomes the server node.)
# vgchange -a s -p /dev/vg_ops

The following message is displayed:


Activated volume group in shared mode. This node is the Server.

When the same command is entered on the second node, the following message is displayed:
Activated volume group in shared mode. This node is a Client.



NOTE

Do not share volume groups that are not part of the RAC configuration unless shared access is controlled.

Deactivating a Shared Volume Group

Issue the following command from each node to deactivate the shared volume group:
# vgchange -a n /dev/vg_ops

Remember that volume groups remain shareable even when nodes enter and leave the cluster.

NOTE

If you wish to change the capacity of a volume group at a later time, you must deactivate and unshare the volume group first. If you add disks, you must specify the appropriate physical volume group name and make sure the /etc/lvmpvg file is correctly updated on both nodes.

Making Offline Changes to Shared Volume Groups


You may need to change the volume group configuration of RAC shared logical volumes to add capacity to the data files or to add log files. No configuration changes are allowed on shared LVM volume groups while they are activated. The volume group must be deactivated first on all nodes, and marked as non-shareable. Use the following procedure (examples assume the volume group vg_ops is being shared by node 1 and node 2):

1. Ensure that the Oracle RAC database is not active on either node.

2. From node 2, use the vgchange command to deactivate the volume group:
# vgchange -a n /dev/vg_ops

3. From node 2, use the vgexport command to export the volume group:
# vgexport -m /tmp/vg_ops.map.old /dev/vg_ops

This dissociates the volume group from node 2.


4. From node 1, use the vgchange command to deactivate the volume group:
# vgchange -a n /dev/vg_ops

5. Use the vgchange command to mark the volume group as unshareable:


# vgchange -S n -c n /dev/vg_ops

6. Prior to making configuration changes, activate the volume group in normal (non-shared) mode:
# vgchange -a y /dev/vg_ops

7. Use normal LVM commands to make the needed changes. Be sure to set the raw logical volume device file's owner to oracle and group to dba, with a mode of 660.

8. Next, still from node 1, deactivate the volume group:
# vgchange -a n /dev/vg_ops

9. Use the vgexport command with the options shown in the example to create a new map file:
# vgexport -p -m /tmp/vg_ops.map /dev/vg_ops

Make a copy of /etc/lvmpvg in /tmp/lvmpvg, then copy the file to /tmp/lvmpvg on node 2. Copy the file /tmp/vg_ops.map to node 2.

10. Use the following command to make the volume group shareable by the entire cluster again:
# vgchange -S y -c y /dev/vg_ops

11. On node 2, issue the following command:


# mkdir /dev/vg_ops

12. Create a control file named group in the directory /dev/vg_ops, as in the following:
# mknod /dev/vg_ops/group c 64 0xhh0000

The major number is always 64, and the hexadecimal minor number has the format:
0xhh0000

where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured.

13. Use the vgimport command, specifying the map file you copied from the configuration node. In the following example, the vgimport command is issued on the second node for the same volume group that was modified on the first node:
# vgimport -v -m /tmp/vg_ops.map /dev/vg_ops /dev/dsk/c0t2d0 /dev/dsk/c1t2d0

14. Activate the volume group in shared mode by issuing the following command on both nodes:
# vgchange -a s -p /dev/vg_ops

Skip this step if you use a package control script to activate and deactivate the shared volume group as a part of RAC startup and shutdown.
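The following is a compact sketch of the sequence above for the example volume group vg_ops shared by node 1 and node 2; the LVM change in the middle (here an lvextend) is only a placeholder for whatever reconfiguration you actually need.

On node 2:

# vgchange -a n /dev/vg_ops
# vgexport -m /tmp/vg_ops.map.old /dev/vg_ops

On node 1:

# vgchange -a n /dev/vg_ops
# vgchange -S n -c n /dev/vg_ops
# vgchange -a y /dev/vg_ops
# lvextend -L 2048 /dev/vg_ops/lvol1      (example change only)
# vgchange -a n /dev/vg_ops
# vgexport -p -m /tmp/vg_ops.map /dev/vg_ops
# vgchange -S y -c y /dev/vg_ops

On node 2, after copying /tmp/vg_ops.map and /tmp/lvmpvg from node 1:

# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0xhh0000
# vgimport -v -m /tmp/vg_ops.map /dev/vg_ops /dev/dsk/c0t2d0 /dev/dsk/c1t2d0

On both nodes (skip if the package control script activates the volume group):

# vgchange -a s -p /dev/vg_ops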

Adding Additional Shared LVM Volume Groups


To add capacity or to organize your disk resources for ease of management, you may wish to create additional shared volume groups for your Oracle RAC databases. If you decide to use additional shared volume groups, they must conform to the following rules:

Volume groups should include different PV links to each logical unit on the disk array.
Volume group names must be the same on all nodes in the cluster.
Logical volume names must be the same on all nodes in the cluster.

If you are adding or removing shared LVM volume groups, make sure that you modify the cluster configuration file and any package control script that activates and deactivates the shared LVM volume groups.

Changing the VxVM or CVM Storage Configuration


You can add VxVM disk groups to the cluster configuration while the cluster is running. To add new CVM disk groups, the cluster must be running. If you are creating new CVM disk groups, be sure to determine the master node on which to do the creation by using the following command:

# vxdctl -c mode


One node will identify itself as the master. Create disk groups from this node. Similarly, you can delete VxVM or CVM disk groups provided they are not being used by a cluster node at the time.

NOTE

For CVM without CFS, if you are adding a disk group to the cluster configuration, make sure you also modify any package or create the package control script that imports and deports this disk group. If you are adding a CVM disk group, be sure to add the STORAGE_GROUP entry for the disk group to the package ASCII file.

For CVM with CFS, if you are adding a disk group to the cluster configuration, make sure you also create the corresponding multi-node package. If you are adding a CVM disk group, be sure to add the necessary package dependency to the packages that depend on the CVM disk group.

If you are removing a disk group from the cluster configuration, make sure that you also modify or delete any package control script that imports and deports this disk group. If you are removing a CVM disk group, be sure to remove the STORAGE_GROUP entries for the disk group from the package ASCII file. When removing a disk group that is activated and deactivated through a multi-node package, make sure to modify or remove any configured package dependencies on the multi-node package.



Removing Serviceguard Extension for RAC from a System


If you wish to remove a node from Serviceguard Extension for RAC operation, use the swremove command to delete the software (a brief example follows this list). Note the following:

The cluster service should not be running on the node from which you will be deleting Serviceguard Extension for RAC.
The node from which you are deleting Serviceguard Extension for RAC should not be in the cluster configuration.
If you are removing Serviceguard Extension for RAC from more than one node, swremove should be issued on one node at a time.
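A minimal sketch of the removal, run as root on a node that has already been removed from the cluster configuration; the bundle name is a placeholder, so first list the installed bundles to find the exact SGeRAC product name on your system:

# swlist -l bundle | grep -i serviceguard      (identify the SGeRAC bundle name)
# swremove <SGeRAC_bundle_name>                (remove the bundle identified above)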

NOTE

After removing Serviceguard Extension for RAC, your cluster will still have Serviceguard installed. For information about removing Serviceguard, refer to the Managing Serviceguard users guide for your version of the product.



Monitoring Hardware
Good standard practice in handling a high availability system includes careful fault monitoring so as to prevent failures if possible or at least to react to them swiftly when they occur. The following should be monitored for errors or warnings of all kinds:

Disks
CPUs
Memory
LAN cards
Power sources
All cables
Disk interface cards

Some monitoring can be done through simple physical inspection, but for the most comprehensive monitoring, you should examine the system log file (/var/adm/syslog/syslog.log) periodically for reports on all configured HA devices. The presence of errors relating to a device will show the need for maintenance.

Using Event Monitoring Service


Event Monitoring Service (EMS) allows you to configure monitors of specific devices and system resources. You can direct alerts to an administrative workstation where operators can be notified of further action in case of a problem. For example, you could configure a disk monitor to report when a mirror was lost from a mirrored volume group being used in a non-RAC package. Refer to the manual Using the Event Monitoring Service (B7609-90022) for additional information.

Using EMS Hardware Monitors


A set of hardware monitors is available for monitoring and reporting on memory, CPU, and many other system values. Refer to the EMS Hardware Monitors Users Guide (B6191-90020) for additional information.



Adding Disk Hardware


As your system expands, you may need to add disk hardware. This also means modifying the logical volume structure. Use the following general procedure (a command sketch for the volume group steps follows this list):

1. Halt packages.
2. Ensure that the Oracle database is not active on either node.
3. Deactivate and mark as unshareable any shared volume groups.
4. Halt the cluster.
5. Deactivate automatic cluster startup.
6. Shut down and power off the system before installing new hardware.
7. Install the new disk hardware with connections on all nodes.
8. Reboot all nodes.
9. On the configuration node, add the new physical volumes to existing volume groups, or create new volume groups as needed.
10. Start up the cluster.
11. Make the volume groups shareable, then import each shareable volume group onto the other nodes in the cluster.
12. Activate the volume groups in shared mode on all nodes.
13. Start up the Oracle RAC instances on all nodes.
14. Activate automatic cluster startup.
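The following is a minimal sketch of steps 9 through 12 for a hypothetical new disk /dev/dsk/c4t5d0 placed in a new shared volume group vg_rac2 on a two-node cluster; substitute your own device files, volume group name, and minor number.

On the configuration node:

# pvcreate /dev/rdsk/c4t5d0
# mkdir /dev/vg_rac2
# mknod /dev/vg_rac2/group c 64 0xhh0000
# vgcreate /dev/vg_rac2 /dev/dsk/c4t5d0
# vgchange -a n /dev/vg_rac2
# vgchange -S y -c y /dev/vg_rac2           (the cluster must be running)
# vgexport -s -p -m /tmp/vg_rac2.map /dev/vg_rac2

On each other node, after copying /tmp/vg_rac2.map:

# mkdir /dev/vg_rac2
# mknod /dev/vg_rac2/group c 64 0xhh0000
# vgimport -s -m /tmp/vg_rac2.map /dev/vg_rac2

On all nodes:

# vgchange -a s /dev/vg_rac2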

NOTE

As you add new disks to the system, update the planning worksheets (described in Appendix B, Blank Planning Worksheets) so as to record the exact configuration you are using.



Replacing Disks
The procedure for replacing a faulty disk mechanism depends on the type of disk configuration you are using and on the type of Volume Manager software. For a description of replacement procedures using VERITAS VxVM or CVM, refer to the chapter on Administering Hot-Relocation in the VERITAS Volume Manager Administrator's Guide. Additional information is found in the VERITAS Volume Manager Troubleshooting Guide. The following paragraphs describe how to replace disks that are configured with LVM. Separate descriptions are provided for replacing a disk in an array and replacing a disk in a high availability enclosure.

Replacing a Mechanism in a Disk Array Configured with LVM


With any HA disk array configured in RAID 1 or RAID 5, refer to the array's documentation for instructions on how to replace a faulty mechanism. After the replacement, the device itself automatically rebuilds the missing data on the new disk. No LVM activity is needed. This process is known as hot swapping the disk.

NOTE

If your LVM installation requires online replacement of disk mechanisms, the use of disk arrays may be required, because software mirroring of JBODs with MirrorDisk/UX does not permit hot swapping for disks that are activated in shared mode.

Replacing a Mechanism in an HA Enclosure Configured with Exclusive LVM


Non-Oracle data that is used by packages may be configured in volume groups that use exclusive (one-node-at-a-time) activation. If you are using exclusive activation and software mirroring with MirrorDisk/UX and the mirrored disks are mounted in a high availability disk enclosure, you can use the following steps to hot plug a disk mechanism:


1. Identify the physical volume name of the failed disk and the name of the volume group in which it was configured. In the following examples, the volume group name is shown as /dev/vg_sg01 and the physical volume name is shown as /dev/dsk/c2t3d0. Substitute the volume group and physical volume names that are correct for your system.

2. Identify the names of any logical volumes that have extents defined on the failed physical volume.

3. On the node on which the volume group is currently activated, use the following command for each logical volume that has extents on the failed physical volume:

# lvreduce -m 0 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0

4. At this point, remove the failed disk and insert a new one. The new disk will have the same HP-UX device name as the old one.

5. On the node from which you issued the lvreduce command, issue the following command to restore the volume group configuration data to the newly inserted disk:

# vgcfgrestore /dev/vg_sg01 /dev/dsk/c2t3d0

6. Issue the following command to extend the logical volume to the newly inserted disk:

# lvextend -m 1 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0

7. Finally, use the lvsync command for each logical volume that has extents on the failed physical volume. This synchronizes the extents of the new disk with the extents of the other mirror:

# lvsync /dev/vg_sg01/lvolname
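For quick reference, the complete replacement sequence for one mirrored logical volume looks like this; vg_sg01, lvol1, and c2t3d0 are the example names from the steps above:

# lvreduce -m 0 /dev/vg_sg01/lvol1 /dev/dsk/c2t3d0      (drop the mirror copy on the failed disk)
  ...physically replace the failed mechanism...
# vgcfgrestore /dev/vg_sg01 /dev/dsk/c2t3d0             (restore LVM configuration data to the new disk)
# lvextend -m 1 /dev/vg_sg01/lvol1 /dev/dsk/c2t3d0      (re-create the mirror copy on the new disk)
# lvsync /dev/vg_sg01/lvol1                             (resynchronize the mirror)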

Online Replacement of a Mechanism in an HA Enclosure Configured with Shared LVM (SLVM)


If you are using software mirroring for shared concurrent activation of Oracle RAC data with MirrorDisk/UX and the mirrored disks are mounted in a high availability disk enclosure, use the following steps to carry out online replacement: 1. Make a note of the physical volume name of the failed mechanism (for example, /dev/dsk/c2t3d0).


2. Halt all the applications using the SLVM VG on all the nodes but one, and deactivate the volume group on those nodes (vgchange -a n <slvm vg>).

3. Switch the volume group to exclusive mode on the remaining node of the cluster:

# vgchange -a e -x <slvm vg>

4. Reconfigure the volume group as needed (vgextend, lvextend, disk addition, and so on). A sketch of one possible mirror replacement follows this list.

5. Switch the volume group back to shared mode:

# vgchange -a s -x <slvm vg>
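The reconfiguration in step 4 depends on what failed. As one example, if a mirror copy created with MirrorDisk/UX was lost on disk /dev/dsk/c2t3d0 of volume group vg_ops (hypothetical names), the same commands used for an exclusive-activation enclosure earlier in this chapter can be run on the single node where the VG is now active in exclusive mode:

# lvreduce -m 0 /dev/vg_ops/lvol1 /dev/dsk/c2t3d0
  ...replace the failed mechanism...
# vgcfgrestore /dev/vg_ops /dev/dsk/c2t3d0
# lvextend -m 1 /dev/vg_ops/lvol1 /dev/dsk/c2t3d0
# lvsync /dev/vg_ops/lvol1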

Offline Replacement of a Mechanism in an HA Enclosure Configured with Shared LVM (SLVM)


Hot plugging of disks is not supported for Oracle RAC data, which is configured in volume groups with Shared LVM (SLVM). If you need this capability, you should use disk arrays for your Oracle RAC data. If you are using software mirroring for shared concurrent activation of Oracle RAC data with MirrorDisk/UX and the mirrored disks are mounted in a high availability disk enclosure, use the following steps to carry out offline replacement:

1. Make a note of the physical volume name of the failed mechanism (for example, /dev/dsk/c2t3d0).

2. Deactivate the volume group on all nodes of the cluster:

# vgchange -a n vg_ops

3. Replace the bad disk mechanism with a good one.

4. From one node, initialize the volume group information on the good mechanism using vgcfgrestore(1M), specifying the name of the volume group and the name of the physical volume that is being replaced:

# vgcfgrestore /dev/vg_ops /dev/dsk/c2t3d0

5. Activate the volume group on one node in exclusive mode:

# vgchange -a e vg_ops


This will synchronize the stale logical volume mirrors. This step can be time-consuming, depending on hardware characteristics and the amount of data.

6. Deactivate the volume group:

# vgchange -a n vg_ops

7. Activate the volume group on all the nodes in shared mode using vgchange -a s:

# vgchange -a s vg_ops

Replacing a Lock Disk


Replacing a failed lock disk mechanism is the same as replacing a data disk. If you are using a dedicated lock disk (one with no user data on it), then you need to issue only one LVM command:

# vgcfgrestore /dev/vg_lock /dev/dsk/c2t1d0

After doing this, wait at least an hour, then review the syslog file for a message showing that the lock disk is healthy again.

On-line Hardware Maintenance with In-line SCSI Terminator


Serviceguard allows on-line SCSI disk controller hardware repairs to all cluster nodes if you use HP's in-line terminator (C2980A) on nodes connected to the end of the shared FW/SCSI bus. The in-line terminator cable is a 0.5 meter extension cable with the terminator on the male end, which connects to the controller card for an external bus. The in-line terminator is used instead of the termination pack that is attached to the controller card and makes it possible to physically disconnect the node from the end of the F/W SCSI bus without breaking the bus's termination. (Nodes attached to the middle of a bus using a Y cable also can be detached from the bus without harm.) When using in-line terminators and Y cables, ensure that all orange-socketed termination packs are removed from the controller cards.



NOTE

You cannot use inline terminators with internal FW/SCSI buses on D and K series systems, and you cannot use the inline terminator with single-ended SCSI buses. You must not use an inline terminator to connect a node to a Y cable.

Figure 4-1 shows a three-node cluster with two F/W SCSI buses. The solid line and the dotted line represent different buses, both of which have inline terminators attached to nodes 1 and 3. Y cables are also shown attached to node 2.

Figure 4-1 F/W SCSI Buses with In-line Terminators

The use of in-line SCSI terminators allows you to do hardware maintenance on a given node by temporarily moving its packages to another node and then halting the original node while its hardware is serviced. Following the replacement, the packages can be moved back to the original node.


Use the following procedure to disconnect a node that is attached to the bus with an in-line SCSI terminator or with a Y cable:

1. Move any packages on the node that requires maintenance to a different node.
2. Halt the node that requires maintenance. The cluster will re-form, and activity will continue on other nodes. Packages on the halted node will switch to other available nodes if they are configured to switch.
3. Disconnect the power to the node.
4. Disconnect the node from the in-line terminator cable or Y cable if necessary. The other nodes accessing the bus will encounter no problems as long as the in-line terminator or Y cable remains connected to the bus.
5. Replace or upgrade hardware on the node, as needed.
6. Reconnect the node to the in-line terminator cable or Y cable if necessary.
7. Reconnect power and reboot the node. If AUTOSTART_CMCLD is set to 1 in the /etc/rc.config.d/cmcluster file, the node will rejoin the cluster.
8. If necessary, move packages back to the node from their alternate locations and restart them.



Replacement of I/O Cards


After an I/O card failure, you can replace the card using the following steps. It is not necessary to bring the cluster down to do this if you are using SCSI inline terminators or Y cables at each node.

1. Halt the node by using Serviceguard Manager or the cmhaltnode command. Packages should fail over normally to other nodes.
2. Remove the I/O cable from the card. With SCSI inline terminators, this can be done without affecting the disks or other nodes on the bus.
3. Using SAM, select the option to do an on-line replacement of an I/O card.
4. Remove the defective I/O card.
5. Install the new card. The new card must be exactly the same card type, and it must be installed in the same slot as the card you removed.
6. In SAM, select the option to attach the new I/O card.
7. Add the node back into the cluster by using Serviceguard Manager or the cmrunnode command.



Replacement of LAN Cards


If you have a LAN card failure, which requires the LAN card to be replaced, you can replace it on-line or off-line depending on the type of hardware and operating system you are running. It is not necessary to bring the cluster down to do this.

Off-Line Replacement
The following steps show how to replace a LAN card off-line. These steps apply to both HP-UX 11.0 and 11i:

1. Halt the node by using the cmhaltnode command.
2. Shut down the system using /etc/shutdown, then power down the system.
3. Remove the defective LAN card.
4. Install the new LAN card. The new card must be exactly the same card type, and it must be installed in the same slot as the card you removed.
5. Power up the system.
6. If necessary, add the node back into the cluster by using the cmrunnode command. (You can omit this step if the node is configured to join the cluster automatically.)

On-Line Replacement
If your system hardware supports hotswap I/O cards, and if the system is running HP-UX 11i (B.11.11 or later), you have the option of replacing the defective LAN card on-line. This will significantly improve the overall availability of the system. To do this, follow the steps provided in the section How to On-line Replace (OLR) a PCI Card Using SAM in the document Configuring HP-UX for Peripherals. The OLR procedure also requires that the new card must be exactly the same card type as the card you removed to avoid improper operation of the network driver. Serviceguard will automatically recover the LAN card once it has been replaced and reconnected to the network.



After Replacing the Card


After the on-line or off-line replacement of LAN cards has been done, Serviceguard will detect that the MAC address (LLA) of the card has changed from the value stored in the cluster binary configuration file, and it will notify the other nodes in the cluster of the new MAC address. The cluster will operate normally after this. It is also recommended that you update the new MAC address in the cluster binary configuration file by re-applying the cluster configuration. Use the following steps for on-line reconfiguration: 1. Use the cmgetconf command to obtain a fresh ASCII configuration file, as follows:
# cmgetconf config.ascii

2. Use the cmapplyconf command to apply the configuration and copy the new binary file to all cluster nodes:
# cmapplyconf -C config.ascii

This procedure updates the binary file with the new MAC address and thus avoids data inconsistency between the outputs of the cmviewcl and lanscan commands.



Monitoring RAC Instances


The DB Provider provides the capability to monitor RAC databases. RBA (Role Based Access) enables a non-root user to have the capability to monitor RAC instances using Serviceguard Manager.


Software Upgrades

Serviceguard Extension for RAC (SGeRAC) software upgrades can be done in the following two ways:

rolling upgrade
non-rolling upgrade

Instead of an upgrade, moving to a new version can be done with: migration with cold install

Rolling upgrade is a feature of SGeRAC that allows you to perform a software upgrade on a given node without bringing down the entire cluster. SGeRAC supports rolling upgrades on version A.11.15 and later, and requires all nodes to be running on the same operating system revision and architecture.

Non-rolling upgrade allows you to perform a software upgrade from any previous revision to any higher revision or between operating system versions, but requires halting the entire cluster.

The rolling and non-rolling upgrade processes can also be used any time one system needs to be taken offline for hardware maintenance or patch installations. Until the upgrade process is complete on all nodes, you cannot change the cluster configuration files, and you will not be able to use any of the new features of the Serviceguard/SGeRAC release.

There may be circumstances when, instead of doing an upgrade, you prefer to do a migration with cold install. The cold install process erases the pre-existing operating system and data and then installs the new operating system and software; you must then restore the data. The advantage of migrating with cold install is that the software can be installed without regard for the software currently on the system or concern for cleaning up old software.

A significant factor when deciding to either do an upgrade or cold install is overall system downtime. A rolling upgrade will cause the least downtime, because only one node in the cluster is down at any one time. A non-rolling upgrade may require more downtime, because the entire cluster has to be brought down during the upgrade process.


One advantage of both rolling and non-rolling upgrades versus cold install is that upgrades retain the pre-existing operating system, software and data. Conversely, the cold install process erases the pre-existing system; you must re-install the operating system, software and data. For these reasons, a cold install may require more downtime.

The sections in this appendix are as follows:

Rolling Software Upgrades
Steps for Rolling Upgrades
Example of Rolling Upgrade
Limitations of Rolling Upgrades
Non-Rolling Software Upgrades
Steps for Non-Rolling Upgrades
Limitations of Non-Rolling Upgrades
Migrating a SGeRAC Cluster with Cold Install



Rolling Software Upgrades


SGeRAC version A.11.15 and later allow you to roll forward to any higher revision provided all of the following conditions are met:

The upgrade must be done on systems of the same architecture (HP 9000 or Integrity servers).
All nodes in the cluster must be running on the same version of HP-UX.
Each node must be running a version of HP-UX that supports the new SGeRAC version.
Each node must be running a version of Serviceguard that supports the new SGeRAC version.

For more information on support, compatibility, and features for SGeRAC, refer to the Serviceguard Compatibility and Feature Matrix, located at http://docs.hp.com -> High Availability -> Serviceguard Extension for RAC.

Steps for Rolling Upgrades


Use the following steps when performing a rolling SGeRAC software upgrade:

1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on the local node (if running).

2. Halt Serviceguard/SGeRAC on the local node by issuing the Serviceguard cmhaltnode command.

3. Edit the /etc/rc.config.d/cmcluster file to include the following line:
AUTOSTART_CMCLD = 0

4. Upgrade the node to the new Serviceguard and SGeRAC release. (SGeRAC requires the compatible version of Serviceguard.)

5. Edit the /etc/rc.config.d/cmcluster file on the local node to include the following line:
AUTOSTART_CMCLD = 1



NOTE

It is optional to set this parameter to 1. If you want the node to join the cluster at boot time, set this parameter to 1, otherwise set it to 0.

6. Restart the cluster on the upgraded node (if desired). You can do this in Serviceguard Manager, or from the command line, issue the Serviceguard cmrunnode command.

7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on the local node.

8. Repeat steps 1-7 on the other nodes, one node at a time, until all nodes have been upgraded. A consolidated command sketch for a single node follows.
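The following is a minimal per-node sketch of steps 1 through 7, assuming node1 is being upgraded and the new Serviceguard and SGeRAC depots have been copied to a hypothetical local depot directory /var/depots/sg_sgerac; the actual bundle names and depot location depend on your installation media:

# cmhaltnode -f node1                     (halt Serviceguard/SGeRAC on node1 after halting Oracle)
  ...set AUTOSTART_CMCLD = 0 in /etc/rc.config.d/cmcluster...
# swinstall -s /var/depots/sg_sgerac      (install Serviceguard first, then SGeRAC, from the depot)
  ...set AUTOSTART_CMCLD = 1 in /etc/rc.config.d/cmcluster, if desired...
# cmrunnode node1                         (rejoin the cluster)
  ...restart the Oracle software on node1...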

NOTE

Be sure to plan sufficient system capacity to allow moving the packages from node to node during the upgrade process to maintain optimum performance.

If a cluster fails before the rolling upgrade is complete (perhaps because of a catastrophic power failure), the cluster could be restarted by entering the cmruncl command from a node which has been upgraded to the latest revision of the software.

Keeping Kernels Consistent

If you change kernel parameters or perform network tuning with ndd as part of doing a rolling upgrade, be sure to change the parameters to the same values on all nodes that can run the same packages in a failover scenario. The ndd command allows the examination and modification of several tunable parameters that affect networking operation and behavior.
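As an illustration of keeping network tuning consistent, the following sketch examines and sets one TCP tunable with ndd; the parameter name and value are examples only, and any change must be applied identically on every node that can run the same packages (and, if you rely on it, made persistent across reboots, for example through /etc/rc.config.d/nddconf):

# ndd -get /dev/tcp tcp_conn_request_max          (display the current value)
# ndd -set /dev/tcp tcp_conn_request_max 2048     (set the same value on every node)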



Example of Rolling Upgrade


The following example shows a simple rolling upgrade on two nodes, each running standard Serviceguard and RAC instance packages, as shown in Figure A-1. (This and the following figures show the starting point of the upgrade as SGeRAC A.11.15 for illustration only. A roll to SGeRAC version A.11.16 is shown.) SGeRAC rolling upgrade requires the same operating system version on all nodes. The example assumes all nodes are running HP-UX 11i v2. For your systems, substitute the actual release numbers of your rolling upgrade path.

NOTE

While you are performing a rolling upgrade, warning messages may appear while the node is determining what version of software is running. This is a normal occurrence and not a cause for concern.

Figure A-1

Running Cluster Before Rolling Upgrade


Step 1.

1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 1.

2. Halt node 1. This will cause the node's packages to start up on an adoptive node. You can do this in Serviceguard Manager, or from the command line issue the following:
# cmhaltnode -f node1

This will cause the failover package to be halted cleanly and moved to node 2. The Serviceguard daemon on node 1 is halted, and the result is shown in Figure A-2.

Figure A-2 Running Cluster with Packages Moved to Node 2


Step 2. Upgrade node 1 and install the new version of Serviceguard and SGeRAC (A.11.16), as shown in Figure A-3.

NOTE

If you install Serviceguard and SGeRAC separately, Serviceguard must be installed before installing SGeRAC.

Figure A-3

Node 1 Upgraded to SG/SGeRAC 11.16


Step 3.

1. Restart the cluster on the upgraded node (node 1), if desired. You can do this in Serviceguard Manager, or from the command line issue the following:
# cmrunnode node1

2. At this point, different versions of the Serviceguard daemon (cmcld) are running on the two nodes, as shown in Figure A-4.

3. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 1.

Figure A-4 Node 1 Rejoining the Cluster


Step 4.

1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 2.

2. Halt node 2. You can do this in Serviceguard Manager, or from the command line issue the following:
# cmhaltnode -f node2

This causes both packages to move to node 1; see Figure A-5.

3. Upgrade node 2 to Serviceguard and SGeRAC (A.11.16), as shown in Figure A-5.

4. When upgrading is finished, enter the following command on node 2 to restart the cluster on node 2:
# cmrunnode node2

5. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 2.

Figure A-5 Running Cluster with Packages Moved to Node 1


Step 5. Move pkg2 back to its original node. Use the following commands:
# cmhaltpkg pkg2
# cmrunpkg -n node2 pkg2
# cmmodpkg -e pkg2

The cmmodpkg command re-enables switching of the package, which is disabled by the cmhaltpkg command. The final running cluster is shown in Figure A-6.

Figure A-6 Running Cluster After Upgrades



Limitations of Rolling Upgrades


The following limitations apply to rolling upgrades:

During a rolling upgrade, you should issue Serviceguard/SGeRAC commands (other than cmrunnode and cmhaltnode) only on a node containing the latest revision of the software. Performing tasks on a node containing an earlier revision of the software will not work or will cause inconsistent results.

You cannot modify the cluster or package configuration until the upgrade is complete. Also, you cannot modify the hardware configuration, including the cluster's network configuration, during a rolling upgrade. This means that you must upgrade all nodes to the new release before you can modify the configuration file and copy it to all nodes.

The new features of the Serviceguard/SGeRAC release may not work until all nodes have been upgraded.

Binary configuration files may be incompatible between releases of Serviceguard/SGeRAC. Do not manually copy configuration files between nodes.

Within a Serviceguard/SGeRAC cluster, no more than two versions of Serviceguard and SGeRAC can be running while the rolling upgrade is in progress.

You can perform a rolling upgrade only on a configuration that has not been modified since the last time the cluster was started.

Rolling upgrades are not intended as a means of using mixed releases of Serviceguard and SGeRAC within the same cluster. SGeRAC requires the compatible version of Serviceguard. Upgrade all cluster nodes as quickly as possible to the new release level. For more information on support, compatibility, and features for SGeRAC, refer to the Serviceguard Compatibility and Feature Matrix, located at http://docs.hp.com -> High Availability -> Serviceguard Extension for RAC.

You cannot delete Serviceguard/SGeRAC software (via swremove) from a node while the cluster is in the process of a rolling upgrade.



Non-Rolling Software Upgrades


A non-rolling upgrade allows you to perform a software upgrade from any previous revision to any later revision, or between operating system versions. For example, you may do a non-rolling upgrade from SGeRAC A.11.14 on HP-UX 11i v1 to A.11.16 on HP-UX 11i v2, provided both releases run on the same hardware architecture. The cluster cannot be running during a non-rolling upgrade, so you must halt the entire cluster before performing the upgrade. The next section describes the steps for doing a non-rolling software upgrade.


Steps for Non-Rolling Upgrades


Use the following steps for a non-rolling software upgrade:

1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on all nodes in the cluster.

2. Halt all nodes in the cluster:

# cmhaltcl -f

3. If necessary, upgrade all the nodes in the cluster to the new HP-UX release.

4. Upgrade all the nodes in the cluster to the new Serviceguard/SGeRAC release. (A sketch of this step appears after this list.)

5. Restart the cluster. Use the following command:

# cmruncl

6. If necessary, upgrade all the nodes in the cluster to the new Oracle (RAC, CRS, Clusterware, OPS) software release.

7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on all nodes in the cluster, and configure the Serviceguard/SGeRAC packages and Oracle as needed.
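For step 4, the Serviceguard and SGeRAC software is typically installed on each node with swinstall from your distribution depot. The following is only a sketch: the depot path and the bracketed bundle names are placeholders, not actual product numbers, so substitute the bundles delivered with your release (installing Serviceguard before SGeRAC if they are delivered separately):

# swinstall -s /var/depots/new_release <Serviceguard_bundle> <SGeRAC_bundle>

Repeat the installation on every node in the cluster before restarting the cluster in step 5.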


Limitations of Non-Rolling Upgrades


The following limitations apply to non-rolling upgrades:

• Binary configuration files may be incompatible between releases of Serviceguard. Do not manually copy configuration files between nodes.

• It is necessary to halt the entire cluster when performing a non-rolling upgrade.


Migrating a SGeRAC Cluster with Cold Install


There may be circumstances when you prefer a cold install of the HP-UX operating system rather than an upgrade. The cold install process erases the pre-existing operating system and data and then installs the new operating system and software; you must then restore the data.

CAUTION

The cold install process erases the pre-existing software, operating system, and data. If you want to retain any existing software, make sure to back up that software before migrating.

Use the following process as a checklist to prepare the migration:

1. Back up the required data, including databases, user and application data, volume group configurations, etc. (One way to capture LVM volume group information is sketched after this list.)

2. Halt cluster applications, including RAC, and then halt the cluster.

3. Do a cold install of the HP-UX operating system. For more information on the cold install process, see the HP-UX Installation and Update Guide located at http://docs.hp.com -> By OS Release -> Installing and Updating

4. Install additional required software that did not come with your version of the HP-UX OE.

5. Install a Serviceguard/SGeRAC version that is compatible with the new HP-UX operating system version. For more information on support, compatibility, and features for SGeRAC, refer to the Serviceguard Compatibility and Feature Matrix, located at http://docs.hp.com -> High Availability -> Serviceguard Extension for RAC

6. Recreate any user accounts needed for the cluster applications.

7. Recreate the network and storage configurations (that is, set up stationary IP addresses and create the LVM volume groups and/or CVM disk groups required for the cluster).

8. Recreate the SGeRAC cluster.

9. Restart the cluster.

10. Reinstall the cluster applications, such as RAC.

11. Restore the data.
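If shared storage is on LVM, the data on the shared volume groups survives the cold install of the OS disk, but the map files and device entries needed to re-import the groups do not. The commands below are a minimal sketch, not part of the original checklist; the volume group name, file names, and minor number are examples only:

# vgexport -p -s -m /backup/vg_rac.map vg_rac

The -p (preview) option leaves the volume group in place while writing the map file. Copy the map file, along with your other backups, to media that will survive the cold install. After the new operating system and Serviceguard/SGeRAC are installed, recreate the group device file and re-import the volume group on each node, for example:

# mkdir /dev/vg_rac
# mknod /dev/vg_rac/group c 64 0x080000
# vgimport -s -m /backup/vg_rac.map vg_rac

The minor number (0x080000 here) must be unique among the volume groups on each node; see the volume group creation procedures earlier in this manual for the full details.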



Blank Planning Worksheets


This appendix reprints blank planning worksheets used in preparing the RAC cluster. You can duplicate any of these worksheets that you find useful and fill them in as a part of the planning process.



LVM Volume Group and Physical Volume Worksheet


VG and PHYSICAL VOLUME WORKSHEET                               Page ___ of ____
==========================================================================

Volume Group Name: ______________________________________________________

                                          PV Link 1            PV Link 2
Physical Volume Name: ____________________________________________________
Physical Volume Name: ____________________________________________________
Physical Volume Name: ____________________________________________________
Physical Volume Name: ____________________________________________________
Physical Volume Name: ____________________________________________________
Physical Volume Name: ____________________________________________________
Physical Volume Name: ____________________________________________________

Volume Group Name: _______________________________________________________

                                          PV Link 1            PV Link 2
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________


VxVM Disk Group and Disk Worksheet


DISK GROUP WORKSHEET                                           Page ___ of ____
===========================================================================

Disk Group Name: __________________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________

Disk Group Name: __________________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________


Oracle Logical Volume Worksheet


                              NAME                                                SIZE
Oracle Control File 1:  _____________________________________________________
Oracle Control File 2:  _____________________________________________________
Oracle Control File 3:  _____________________________________________________
Instance 1 Redo Log 1:  _____________________________________________________
Instance 1 Redo Log 2:  _____________________________________________________
Instance 1 Redo Log 3:  _____________________________________________________
Instance 1 Redo Log:    _____________________________________________________
Instance 1 Redo Log:    _____________________________________________________
Instance 2 Redo Log 1:  _____________________________________________________
Instance 2 Redo Log 2:  _____________________________________________________
Instance 2 Redo Log 3:  _____________________________________________________
Instance 2 Redo Log:    _____________________________________________________
Instance 2 Redo Log:    _____________________________________________________
Data: System            _____________________________________________________
Data: Rollback          _____________________________________________________
Data: Temp              _____________________________________________________
Data: Users             _____________________________________________________
Data: Tools             _____________________________________________________


Index

A
activation of volume groups in shared mode, 187
adding packages on a running cluster, 159
administration, cluster and package states, 168
array, replacing a faulty mechanism, 195, 196, 197
AUTO_RUN parameter, 157
AUTO_START_TIMEOUT, in sample configuration file, 60, 124

B
building a cluster, CVM infrastructure, 73, 136
building an RAC cluster
    displaying the logical volume infrastructure, 57, 121
    logical volume infrastructure, 48, 112
building logical volumes for RAC, 54, 118

C
CFS, 65, 70
    creating storage infrastructure, 129
    deleting from the cluster, 134
cluster
    state, 175
    status options, 171
cluster configuration file, 60, 124
cluster node, startup and shutdown OPS instances, 157
cluster volume group, creating physical volumes, 49, 113
CLUSTER_NAME (cluster name), in sample configuration file, 60, 124
control script
    creating with commands, 159
    creating with SAM, 159
    in package configuration, 159
    starting OPS instances, 162
creating a SGeRAC Cluster, 65
creating a storage infrastructure, 65
CVM
    creating a storage infrastructure, 73, 136
    use of the VxVM-CVM-pkg, 77, 140
CVM_ACTIVATION_CMD, in package control script, 161
CVM_DG, in package control script, 161

D
deactivation of volume groups, 188
deciding when and where to run packages, 24
deleting from the cluster, 70
deleting nodes while the cluster is running, 190
demo database files, 55, 80, 119, 145
disk, choosing for volume groups, 49, 113
disk arrays, creating logical volumes, 53, 117
disk storage, creating the infrastructure with CVM, 73, 136
disks, replacing, 195

E
eight-node cluster with disk array, figure, 31
EMS, for preventive monitoring, 193
enclosure for disks, replacing a faulty mechanism, 195
Event Monitoring Service, in troubleshooting, 193
exporting shared volume group data, 57, 121
exporting files, LVM commands, 57, 121
extended distance cluster, building, 32

F
figures
    eight-node cluster with EMC disk array, 31
    node 1 rejoining the cluster, 212
    node 1 upgraded to SG/SGeRAC 11.16, 211
    running cluster after upgrades, 214
    running cluster before rolling upgrade, 209
    running cluster with packages moved to node 1, 213
    running cluster with packages moved to node 2, 210
FIRST_CLUSTER_LOCK_PV, in sample configuration file, 60, 124
FIRST_CLUSTER_LOCK_VG, in sample configuration file, 60, 124
FS, in sample package control script, 161
FS_MOUNT_OPT, in sample package control script, 161

G
GMS, group membership services, 23
group membership services, defined, 23

H
hardware
    adding disks, 194
    monitoring, 193
heartbeat subnet address parameter, in cluster manager configuration, 47, 110
HEARTBEAT_INTERVAL, in sample configuration file, 60, 124
HEARTBEAT_IP, in sample configuration file, 60, 124
high availability cluster, defined, 16

I
in-line terminator, permitting online hardware maintenance, 198
installing Oracle RAC, 59, 123
installing software, Serviceguard Extension for RAC, 46, 109
IP, in sample package control script, 161
IP address, switching, 27

L
lock disk, replacing a faulty mechanism, 198
logical volumes
    blank planning worksheet, 223, 224
    creating, 54, 118
    creating for a cluster, 50, 79, 114, 142, 143
    creating the infrastructure, 48, 112
    disk arrays, 53, 117
    filled in planning worksheet, 42, 44, 102, 106
lssf, using to obtain a list of disks, 49, 113
LV, in sample package control script, 161
LVM, creating on disk arrays, 53, 117
LVM commands, exporting files, 57, 121

M
maintaining a RAC cluster, 167
maintenance
    adding disk hardware, 194
    making changes to shared volume groups, 188
    monitoring hardware, 193

N
network, status, 174
NETWORK_INTERFACE, in sample configuration file, 60, 124
NETWORK_POLLING_INTERVAL (network polling interval), in sample configuration file, 60, 124
node
    halting status, 180
    in an RAC cluster, 16
    status and state, 172
NODE_FAILFAST_ENABLED parameter, 157
NODE_TIMEOUT (heartbeat timeout), in sample configuration file, 60, 124

O
online hardware maintenance, by means of in-line SCSI terminators, 198
online node addition and deletion, 183
online reconfiguration, 183
OPS
    control scripts for starting instances, 162
    packages to access database, 158
    startup and shutdown instances, 157
    startup and shutdown volume groups, 156
OPS cluster, starting up with scripts, 156
opsctl.ctl, Oracle demo database files, 55, 80, 119, 145
opslog.log, Oracle demo database files, 55, 80, 119, 145
optimizing packages for large numbers of storage units, 161
Oracle, demo database files, 55, 80, 119, 145
Oracle 10g RAC
    installing binaries, 89
    introducing, 33
Oracle 9i RAC
    installing, 148
    introducing, 101
Oracle Disk Manager, configuring, 92
Oracle Parallel Server, starting up instances, 156
Oracle RAC, installing, 59, 123
Oracle10g, installing, 88

P
package
    basic concepts, 17, 18
    moving status, 178
    state, 175
    status and state, 172
    switching status, 179
package configuration
    service name parameter, 47, 110
    writing the package control script, 159
package control script, generating with commands, 159
packages
    accessing OPS database, 158
    deciding where and when to run, 24
    launching OPS instances, 157
    startup and shutdown volume groups, 156
parameter
    AUTO_RUN, 157
    NODE_FAILFAST_ENABLED, 157
performance, optimizing packages for large numbers of storage units, 161
physical volumes
    creating for clusters, 49, 113
    filled in planning worksheet, 222
planning
    worksheets for logical volume planning, 42, 44, 102, 106
    worksheets for physical volume planning, 222
planning worksheets, blanks, 221
point to point connections to storage devices, 30
PVG-strict mirroring, creating volume groups with, 49, 113

Q
quorum server, status and state, 176

R
RAC
    group membership services, 23
    overview of configuration, 16
    status, 173
RAC cluster, defined, 16
removing packages on a running cluster, 159
removing Serviceguard Extension for RAC from a system, 192
replacing disks, 195
rollback.dbf, Oracle demo database files, 55, 56, 81, 119, 145
rolling software upgrades
    example, 209
    steps, 207, 217
rolling upgrade, limitations, 215, 218
RS232 status, viewing, 180
running cluster, adding or removing packages, 159

S
serial line, status, 174
service, status, 174
service name parameter, in package configuration, 47, 110
SERVICE_CMD, in sample package control script, 161
SERVICE_NAME
    in sample package control script, 161
    parameter in package configuration, 47, 110
SERVICE_RESTART, in sample package control script, 161
Serviceguard Extension for RAC
    installing, 46, 109
    introducing, 15
shared mode
    activation of volume groups, 187
    deactivation of volume groups, 188
shared volume groups, making volume groups shareable, 186
sharing volume groups, 57, 121
SLVM, making volume groups shareable, 186
SNOR configuration, 184
software upgrades, 205
state
    cluster, 175
    node, 172
    of cluster and package, 168
    package, 172, 175
status
    cluster, 171
    halting node, 180
    moving package, 178
    network, 174
    node, 172
    normal running RAC, 176
    of cluster and package, 168
    package, 172
    RAC, 173
    serial line, 174
    service, 174
    switching package, 179
SUBNET, in sample package control script, 161
switching IP addresses, 27
system multi-node package, used with CVM, 77, 140
system.dbf, Oracle demo database files, 55, 81

T
temp.dbf, Oracle demo database files, 55, 81, 119, 145
tools.dbf, Oracle demo database files, 119, 145
troubleshooting
    monitoring hardware, 193
    replacing disks, 195

V
VG, in sample package control script, 161
VGCHANGE, in package control script, 161
viewing RS232 status, 180
volume group
    creating for a cluster, 49, 113
    creating physical volumes for clusters, 49, 113
volume groups
    adding shared volume groups, 190
    displaying for RAC, 57, 121
    exporting to other nodes, 57, 121
    making changes to shared volume groups, 188
    making shareable, 186
    making unshareable, 187
    OPS startup and shutdown, 156
VOLUME_GROUP, in sample configuration file, 60, 124
VXVM_DG, in package control script, 161
VxVM-CVM-pkg, 77, 140

W
worksheet, logical volume planning, 42, 44, 102, 106
worksheets, physical volume planning, 222
worksheets for planning, blanks, 221
