T1859-90038 (SGextRAC)
Manufacturing Part Number: T1859-90038. May 2006 Update. Copyright 2006 Hewlett-Packard Development Company, L.P. All rights reserved.
Legal Notices
Copyright 2003-2006 Hewlett-Packard Development Company, L.P. Publication Dates: June 2003, June 2004, February 2005, December 2005, March 2006, May 2006. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Oracle is a registered trademark of Oracle Corporation. UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group. VERITAS is a registered trademark of VERITAS Software Corporation. VERITAS File System is a trademark of VERITAS Software Corporation.
Contents
1. Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster?
Group Membership
Using Packages in a Cluster
Serviceguard Extension for RAC Architecture
Group Membership Daemon
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
Package Dependencies
Storage Configuration Options
Overview of SGeRAC and Oracle 10g RAC
Overview of SGeRAC and Oracle 9i RAC
How Serviceguard Works with Oracle 9i RAC
Group Membership
Configuring Packages for Oracle RAC Instances
Configuring Packages for Oracle Listeners
Node Failure
Larger Clusters
Up to Four Nodes with SCSI Storage
Point to Point Connections to Storage Devices
Extended Distance Cluster Using Serviceguard Extension for RAC
2. Serviceguard Configuration for Oracle 10g RAC
Storage Planning with CFS
Volume Planning with CVM
Installing Serviceguard Extension for RAC
Configuration File Parameters
Creating a Storage Infrastructure with LVM
Building Volume Groups for RAC on Mirrored Disks
Building Mirrored Logical Volumes for RAC with LVM Commands
Creating RAC Volume Groups on Disk Arrays
Creating Logical Volumes for RAC on Disk Arrays
Oracle Demo Database Files
Displaying the Logical Volume Infrastructure
Exporting the Logical Volume Infrastructure
Installing Oracle Real Application Clusters
Cluster Configuration ASCII File
Creating a Storage Infrastructure with CFS
Creating a SGeRAC Cluster with CFS 4.1 for Oracle 10g
Initializing the VERITAS Volume Manager
Deleting CFS from the Cluster
Creating a Storage Infrastructure with CVM
Initializing the VERITAS Volume Manager
Using CVM 4.x
Using CVM 3.x
Creating Volumes
Oracle Demo Database Files
Adding Disk Groups to the Cluster Configuration
Prerequisites for Oracle 10g (Sample Installation)
Installing Oracle 10g Cluster Software
Installing on Local File System
Installing Oracle 10g RAC Binaries
Installing RAC Binaries on a Local File System
Installing RAC Binaries on Cluster File System
Creating a RAC Demo Database
Creating a RAC Demo Database on SLVM or CVM
Creating a RAC Demo Database on CFS
Verify that Oracle Disk Manager is Configured
Configuring Oracle to Use Oracle Disk Manager Library
Verify that Oracle Disk Manager is Running
Configuring Oracle to Stop Using Oracle Disk Manager Library
Using Serviceguard Packages to Synchronize with Oracle 10g RAC
Preparing Oracle Cluster Software for Serviceguard Packages
Configure Serviceguard Packages
3. Serviceguard Configuration for Oracle 9i RAC
Create Database with Oracle Tools
Verify that Oracle Disk Manager is Configured
Configure Oracle to use Oracle Disk Manager Library
Verify Oracle Disk Manager is Running
Configuring Oracle to Stop using Oracle Disk Manager Library
Using Packages to Configure Startup and Shutdown of RAC Instances
Starting Oracle Instances
Creating Packages to Launch Oracle RAC Instances
Configuring Packages that Access the Oracle RAC Database
Adding or Removing Packages on a Running Cluster
Writing the Package Control Script
4. Maintenance and Troubleshooting
Replacing a Lock Disk
On-line Hardware Maintenance with In-line SCSI Terminator
Replacement of I/O Cards
Replacement of LAN Cards
Off-Line Replacement
On-Line Replacement
After Replacing the Card
Monitoring RAC Instances
A. Software Upgrades
Rolling Software Upgrades
Steps for Rolling Upgrades
Example of Rolling Upgrade
Limitations of Rolling Upgrades
Non-Rolling Software Upgrades
Steps for Non-Rolling Upgrades
Limitations of Non-Rolling Upgrades
Migrating a SGeRAC Cluster with Cold Install
Index
Printing History
Table 1  Document Edition and Printing Date

Printing Date   Part Number   Edition
June 2003       T1859-90006   First Edition. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
June 2004       T1859-90017   Second Edition. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
February 2005   T1859-90017   Second Edition, February 2005 Update. Web (http://www.docs.hp.com/)
October 2005    T1859-90033   Third Edition. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
December 2005   T1859-90033   Third Edition, First Reprint. Web (http://www.docs.hp.com/)
March 2006      T1859-90038   Third Edition, Second Reprint. Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
May 2006        T1859-90038   Third Edition, May 2006 Update. Web (http://www.docs.hp.com/)

The last printing date and part number indicate the current edition, which applies to the 11.14, 11.15, 11.16, and 11.17 versions of Serviceguard Extension for RAC (Oracle Real Application Clusters). Changes in the May 2006 update include software upgrade procedures for SGeRAC clusters.
The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number is revised when extensive technical changes are incorporated. New editions of this manual will incorporate all material updated since the previous edition. To ensure that you receive the new editions, you should subscribe to the appropriate product support service. See your HP sales representative for details. HP Printing Division:
Business Critical Computing Hewlett-Packard Co. 19111 Pruneridge Ave. Cupertino, CA 95014
Preface
The May 2006 update includes a new appendix on software upgrade procedures for SGeRAC clusters. This guide describes how to use Serviceguard Extension for RAC (Oracle Real Application Clusters) to configure Serviceguard clusters for use with Oracle Real Application Clusters software on HP high availability clusters running the HP-UX operating system. The contents are as follows:

Chapter 1, Introduction, describes a Serviceguard cluster and provides a roadmap for using this guide. This chapter should be used as a supplement to Chapters 1 through 3 of the Managing Serviceguard user's guide.

Chapter 2, Serviceguard Configuration for Oracle 10g RAC, describes the additional steps you need to take to use Serviceguard with Real Application Clusters when configuring Oracle 10g RAC. This chapter should be used as a supplement to Chapters 4 through 6 of the Managing Serviceguard user's guide.

Chapter 3, Serviceguard Configuration for Oracle 9i RAC, describes the additional steps you need to take to use Serviceguard with Real Application Clusters when configuring Oracle 9i RAC. This chapter should be used as a supplement to Chapters 4 through 6 of the Managing Serviceguard user's guide.

Chapter 4, Maintenance and Troubleshooting, describes tools and techniques necessary for ongoing cluster operation. This chapter should be used as a supplement to Chapters 7 and 8 of the Managing Serviceguard user's guide.

Appendix A, Software Upgrades, describes rolling, non-rolling, and migration-with-cold-install upgrade procedures for SGeRAC clusters.

Appendix B, Blank Planning Worksheets, contains planning worksheets for LVM, VxVM, and Oracle logical volumes.
Related Publications
The following documents contain additional useful information:

Clusters for High Availability: A Primer of HP Solutions. Hewlett-Packard Professional Books: Prentice Hall PTR, 2001 (ISBN 0-13-089355-2)
Managing Serviceguard, Twelfth Edition (B3936-90100)
Using High Availability Monitors (B5736-90046)
Using the Event Monitoring Service (B7612-90015)
Using Advanced Tape Services (B3936-90032)
Designing Disaster Tolerant High Availability Clusters (B7660-90017)
Managing Serviceguard Extension for SAP (T2803-90002)
Managing Systems and Workgroups (5990-8172)
Managing Serviceguard NFS (B5140-90017)
HP Auto Port Aggregation Release Notes
Before attempting to use VxVM storage with Serviceguard, please refer to the following:

VERITAS Volume Manager Administrator's Guide (this contains a glossary of VERITAS terminology)
VERITAS Volume Manager Storage Administrator Administrator's Guide
VERITAS Volume Manager Reference Guide
VERITAS Volume Manager Migration Guide
VERITAS Volume Manager for HP-UX Release Notes
If you will be using VERITAS CVM 4.1 or the VERITAS Cluster File System with Serviceguard, please refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes (T2771-90028). These release notes describe suite bundles for the integration of HP Serviceguard A.11.17 with Symantec's VERITAS Storage Foundation. Use the following URL to access HP's high availability web page: http://www.hp.com/go/ha
Use the following URL for access to a wide variety of HP-UX documentation: http://docs.hp.com/hpux
Problem Reporting If you have any problems with the software or documentation, please contact your local Hewlett-Packard Sales Office or Customer Service Center.
Conventions
We use the following typographical conventions.

audit (5)     An HP-UX manpage. audit is the name and 5 is the section in the HP-UX Reference. On the web and on the Instant Information CD, it may be a hot link to the manpage itself. From the HP-UX command line, you can enter man audit or man 5 audit to view the manpage. See man (1).
Book Title    The title of a book. On the web and on the Instant Information CD, it may be a hot link to the book itself.
KeyCap        The name of a keyboard key. Note that Return and Enter both refer to the same key.
Emphasis      Text that is emphasized.
Bold          Text that is strongly emphasized, or the defined use of an important word or phrase.
ComputerOut   Text displayed by the computer.
UserInput     Commands and other text that you type.
Command       A command name or qualified command phrase.
Variable      The name of a variable that you may replace in a command or function, or information in a display that represents several possible values.
[ ]           The contents are optional in formats and command descriptions. If the contents are a list separated by |, you must choose one of the items.
{ }           The contents are required in formats and command descriptions. If the contents are a list separated by |, you must choose one of the items.
...           The preceding element may be repeated an arbitrary number of times.
|             Separates items in a list of choices.
1. Introduction to Serviceguard Extension for RAC

What is a Serviceguard Extension for RAC Cluster?
In the figure, two loosely coupled systems (each one known as a node) are running separate instances of Oracle software that read data from and write data to a shared set of disks. Clients connect to one node or the other via LAN.
RAC on HP-UX lets you maintain a single database image that is accessed by the HP servers in parallel, thereby gaining added processing power without the need to administer separate databases. Further, when properly configured, Serviceguard Extension for RAC provides a highly available database that continues to operate even if one hardware component should fail.
Group Membership
Oracle RAC systems implement the concept of group membership, which allows multiple instances of RAC to run on each node. Related processes are configured into groups. Groups allow processes in different instances to choose which other processes to interact with. This allows the support of multiple databases within one RAC cluster. A Group Membership Service (GMS) component provides a process monitoring facility to monitor group membership status. GMS is provided by the cmgmsd daemon, which is an HP component installed with Serviceguard Extension for RAC. Figure 1-2 shows how group membership works. Nodes 1 through 4 of the cluster share the Sales database, but only Nodes 3 and 4 share the HR database. Consequently, there is one instance of RAC each on Node 1 and Node 2, and there are two instances of RAC each on Node 3 and Node 4. The RAC processes accessing the Sales database constitute one group, and the RAC processes accessing the HR database constitute another group.
Figure 1-2  Group Membership Services
NOTE
In RAC clusters, you create packages to start and stop RAC itself as well as to run applications that access the database instances. For details on the use of packages with RAC, refer to the section "Using Packages to Configure Startup and Shutdown of RAC Instances" in Chapter 3.
Serviceguard Extension for RAC Architecture

Group Membership Daemon
This HP daemon provides group membership services for Oracle Real Application Cluster 9i or later. Group membership allows multiple Oracle instances to run on the same cluster node. GMS is illustrated in Figure 1-2 on page 18.
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
SGeRAC supports Cluster File System (CFS) through Serviceguard. For more detailed information on CFS support, refer to the Managing Serviceguard Twelfth Edition user's guide.
Package Dependencies
When CFS is used as shared storage, applications and software using the CFS storage should be configured to start and stop using Serviceguard packages. These application packages should be configured with a package dependency on the underlying multi-node packages, which manage the CFS and CVM storage resources. Configuring the application to start and stop through Serviceguard packages ensures that storage activation and deactivation are synchronized with application startup and shutdown. With CVM configurations using multi-node packages, CVM shared storage should likewise be configured in Serviceguard packages with package dependencies. Refer to the Managing Serviceguard Twelfth Edition user's guide for detailed information on multi-node packages.
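As an illustration, such a dependency might be expressed in the application package's ASCII configuration file roughly as follows. The package names and exact parameter spellings are assumptions for this sketch, not values from this guide; generate the real template with cmmakepkg on your system:

```
# Hypothetical fragment of an application package configuration file.
# The application package declares a dependency on the CFS mount-point
# multi-node package, so the application starts only after the CFS
# storage is available on the node.
PACKAGE_NAME          rac_app_pkg
DEPENDENCY_NAME       cfs_mp_dep
DEPENDENCY_CONDITION  SG-CFS-MP-1 = UP
DEPENDENCY_LOCATION   SAME_NODE
```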
Oracle RAC data files can be created on a CFS, allowing the database administrator or Oracle software to create additional data files without the need of root system administrator privileges. The archive area can now be on a CFS. Oracle instances on any cluster node can access the archive area when database recovery requires the archive logs.
Overview of SGeRAC and Oracle 10g RAC
NOTE
In this document, the generic terms CRS and Oracle Clusterware are subsequently referred to as Oracle Cluster Software. The term CRS is still used when referring to a sub-component of Oracle Cluster Software.

For more detailed information on Oracle 10g RAC, refer to Chapter 2, Serviceguard Configuration for Oracle 10g RAC.
Overview of SGeRAC and Oracle 9i RAC
Group Membership
The group membership service (GMS) is the means by which Oracle instances communicate with the Serviceguard cluster software. GMS runs as a separate daemon process that communicates with the cluster manager. This daemon is an HP component known as cmgmsd. The cluster manager starts up, monitors, and shuts down the cmgmsd. When an Oracle instance starts, the instance registers itself with cmgmsd; thereafter, if an Oracle instance fails, cmgmsd notifies other members of the same group to perform recovery. If cmgmsd dies unexpectedly, Serviceguard will fail the node with a TOC (Transfer of Control).
Configuring Packages for Oracle RAC Instances
NOTE
Packages that start and halt Oracle instances (called instance packages) do not fail over from one node to another; they are single-node packages. You should include only one NODE_NAME in the package ASCII configuration file. The AUTO_RUN setting in the package configuration file determines whether the RAC instance starts up as the node joins the cluster. A cluster may include both RAC and non-RAC packages.
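A minimal sketch of such an instance package configuration follows. All names and paths are hypothetical; generate the real template with cmmakepkg on your system:

```
# Hypothetical RAC instance package: one NODE_NAME, no failover.
PACKAGE_NAME    rac_inst1_pkg
NODE_NAME       node1        # single node only; instance packages do not fail over
AUTO_RUN        YES          # start the instance when the node joins the cluster
RUN_SCRIPT      /etc/cmcluster/rac_inst1/rac_inst1.cntl
HALT_SCRIPT     /etc/cmcluster/rac_inst1/rac_inst1.cntl
```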
Configuring Packages for Oracle Listeners
Node Failure
RAC cluster configuration is designed so that in the event of a node failure, another node with a separate instance of Oracle can continue processing transactions. Figure 1-3 shows a typical cluster with instances running on both nodes. Figure 1-3 Before Node Failure
Figure 1-4 shows the condition where Node 1 has failed and Package 1 has been transferred to Node 2. Oracle instance 1 is no longer operating, but it does not fail over to Node 2. Package 1's IP address was transferred to Node 2 along with the package. Package 1 continues to be available and is now running on Node 2. Also note that Node 2 can now access both Package 1's disk and Package 2's disk. Oracle instance 2 now handles all database access, since instance 1 has gone down.

Figure 1-4  After Node Failure
In the above figure, pkg1 and pkg2 are not instance packages. They are shown to illustrate the movement of packages in general.
Larger Clusters
Serviceguard Extension for RAC supports clusters of up to 16 nodes. The actual cluster size is limited by the type of storage and the type of volume manager used.
Figure 1-5  Four-Node RAC Cluster
In this type of configuration, each node runs a separate instance of RAC and may run one or more high availability packages as well. The figure shows a dual Ethernet configuration with all four nodes connected to a disk array (the details of the connections depend on the type of disk array). In addition, each node has a mirrored root disk (R and R'). Nodes may have multiple connections to the same array using alternate links (PV links) to take advantage of the array's use of RAID levels for data protection. Alternate links are further described in the section Creating RAC Volume Groups on Disk Arrays.
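In LVM, an alternate link is configured simply by extending the volume group with a second physical path to the same LUN. The following is a hedged sketch with invented device files (additional setup such as creating the volume group device directory is omitted):

```
pvcreate -f /dev/rdsk/c1t2d0              # initialize the LUN via its primary path
vgcreate /dev/vg_rac /dev/dsk/c1t2d0      # create the volume group on the primary link
vgextend /dev/vg_rac /dev/dsk/c2t2d0      # add the alternate path (PV link) to the same LUN
```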
Figure 1-6  Eight-Node Cluster with XP or EMC Disk Array
FibreChannel switched configurations also are supported using either an arbitrated loop or fabric login topology. For additional information about supported cluster configurations, refer to the HP 9000 Servers Configuration Guide, available through your HP representative.
Extended Distance Cluster Using Serviceguard Extension for RAC
2. Serviceguard Configuration for Oracle 10g RAC
Interface Areas
This section documents interface areas where there is expected interaction between SGeRAC and Oracle 10g Cluster Software and RAC.
SGeRAC Detection
When Oracle 10g Cluster Software is installed on a SGeRAC cluster, Oracle Cluster Software detects the existence of SGeRAC and CSS uses SGeRAC group membership.
Cluster Timeouts
SGeRAC uses heartbeat timeouts to determine when any SGeRAC cluster member has failed or when any cluster member is unable to communicate with the other cluster members. CSS uses a similar mechanism for CSS memberships. Each RAC instance group membership also has a timeout mechanism, which triggers Instance Membership Recovery (IMR).
NOTE
Serviceguard Cluster Timeout

The Serviceguard cluster heartbeat timeout is set according to user requirements for availability. The Serviceguard cluster reconfiguration time is determined by the cluster timeout, configuration, the reconfiguration algorithm, and activities during reconfiguration.
CSS Timeout

When SGeRAC is on the same cluster as Oracle Cluster Software, the CSS timeout is set to a default value of 600 seconds (10 minutes) at Oracle software installation. This timeout is configurable with Oracle tools; it should not be changed without ensuring that the CSS timeout allows enough time for Serviceguard Extension for RAC (SGeRAC) reconfiguration and for multipath reconfiguration (if configured) to complete. On a single point of failure, for example a node failure, Serviceguard reconfigures first and SGeRAC delivers the new group membership to CSS via NMAPI2. If there is a change in group membership, SGeRAC updates the members of the new membership. After receiving the new group membership, CSS in turn initiates its own recovery action as needed and propagates the new group membership to the RAC instances.
NOTE
As a general guideline, the CSS TIMEOUT should be the greater of 180 seconds or 25 times the Serviceguard NODE_TIMEOUT.
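The guideline can be illustrated with a small calculation. The NODE_TIMEOUT value below (8, taken here as already converted to seconds) is purely illustrative:

```shell
# Illustrative only: NODE_TIMEOUT expressed in seconds.
NODE_TIMEOUT=8
CANDIDATE=$((25 * NODE_TIMEOUT))      # 25 x NODE_TIMEOUT
if [ "$CANDIDATE" -gt 180 ]; then     # take the greater of the two values
    CSS_TIMEOUT=$CANDIDATE
else
    CSS_TIMEOUT=180
fi
echo "Suggested CSS timeout: ${CSS_TIMEOUT} seconds"
```

Here 25 x 8 = 200 exceeds 180, so 200 seconds would be the suggested floor.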
RAC IMR Timeout

The RAC instance IMR timeout is configurable. RAC IMR expects group membership changes to occur within this time, or IMR will begin evicting group members. The IMR timeout must be above the SGeRAC reconfiguration time and adhere to any Oracle-specified relation to the CSS reconfiguration time.
Automated Oracle Cluster Software Startup and Shutdown

The preferred mechanism for Serviceguard to notify Oracle Cluster Software to start, and to request Oracle Cluster Software to shut down, is the use of Serviceguard packages.

Monitoring

Oracle Cluster Software daemon monitoring is performed through programs initiated by the HP-UX init process. SGeRAC monitors Oracle Cluster Software to the extent that CSS is an NMAPI2 group membership client and group member. SGeRAC provides group membership notification to the remaining group members when CSS enters and leaves the group membership.
Shared Storage
SGeRAC supports shared storage using HP Shared Logical Volume Manager (SLVM), VERITAS Cluster File System (CFS), and VERITAS Cluster Volume Manager (CVM). The file /var/opt/oracle/oravg.conf must not be present, so that Oracle Cluster Software will not activate or deactivate any shared storage.

Multipath

Multipath is supported through either SLVM PV links or CVM Dynamic Multipathing (DMP). In some configurations, SLVM or CVM does not need to be configured for multipath, as multipath is provided by the storage array. Since Oracle Cluster Software checks availability of the shared device for the vote disk through periodic monitoring, the multipath detection and failover time must be less than CRS's timeout, which is specified by the Cluster Synchronization Service (CSS) MISSCOUNT. On SGeRAC configurations, the CSS MISSCOUNT value is set to 600 seconds. Multipath failover time is typically between 30 and 120 seconds.

OCR and Vote Device

Shared storage for the OCR and vote device should be on supported shared storage volume managers, with multipath configured and with either the correct multipath failover time or CSS timeout.
Mirroring and Resilvering

On node and cluster-wide failures, when SLVM mirroring is used and Oracle resilvering is available, the recommended logical volume mirror recovery policy is full mirror resynchronization (NOMWC) for control and redo files, and no mirror resynchronization (NONE) for the datafiles, since Oracle performs resilvering on the datafiles based on the redo log.
NOTE
If Oracle resilvering is not available, the mirror recovery policy should be set to full mirror resynchronization (NOMWC) for all control, redo, and datafiles.
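On HP-UX LVM, these recovery policies correspond to the Mirror Write Cache (-M) and Mirror Consistency Recovery (-c) options of lvcreate/lvchange. The following is a hedged sketch: the volume group, logical volume names, and sizes are invented, and the option semantics should be verified against the lvcreate(1M) manpage for your release:

```
# NOMWC (Mirror Write Cache off, consistency recovery on) for redo/control:
lvcreate -m 1 -M n -c y -L 512  -n lv_redo1 /dev/vg_rac
# NONE (Mirror Write Cache off, consistency recovery off) for datafiles,
# which Oracle resilvers from the redo log:
lvcreate -m 1 -M n -c n -L 2048 -n lv_data1 /dev/vg_rac
```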
Shared Storage Activation

Depending on your version of Oracle Cluster Software, the default configuration for activation of the shared storage for Oracle Cluster Software may be controlled by the /var/opt/oracle/oravg.conf file. For the default configuration, where the shared storage is activated by SGeRAC before starting Oracle Cluster Software or the RAC instance, the oravg.conf file should not be present.
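A trivial check of this precondition can be scripted; this sketch only inspects for the file's presence and makes no Oracle-specific assumptions:

```shell
# Warn if oravg.conf exists, since its presence lets Oracle Cluster
# Software activate or deactivate shared storage instead of SGeRAC.
ORAVG_CONF=/var/opt/oracle/oravg.conf
if [ -e "$ORAVG_CONF" ]; then
    echo "WARNING: $ORAVG_CONF exists; Oracle Cluster Software may manage shared storage activation"
else
    echo "OK: $ORAVG_CONF not present; SGeRAC controls shared storage activation"
fi
```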
Listener
Automated Startup and Shutdown

CRS can be configured to automatically start, monitor, restart, and halt listeners. If CRS is not configured to start the listener automatically at Oracle Cluster Software startup, listener startup can be automated with supported commands, such as srvctl and lsnrctl, through scripts or SGeRAC packages. If a SGeRAC package is configured to start the listener, that package would contain the virtual IP address required by the listener.

Manual Startup and Shutdown

Manual listener startup and shutdown is supported through the srvctl and lsnrctl commands.
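For example, manual control might look like the following. The node and listener names are invented, and srvctl syntax varies by Oracle release, so treat this as a sketch rather than exact command lines:

```
srvctl start listener -n node1     # start the listener on a given node
srvctl stop listener -n node1      # stop it again
lsnrctl start LISTENER_NODE1       # or drive a specific listener directly
lsnrctl status LISTENER_NODE1
```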
Network Monitoring
The SGeRAC cluster provides network monitoring. For networks that are redundant and monitored by the Serviceguard cluster, Serviceguard provides local failover capability between local network interfaces (LANs) that is transparent to applications using User Datagram Protocol (UDP) and Transmission Control Protocol (TCP). For virtual IP addresses (floating or package IP addresses), Serviceguard also provides remote failover capability of network connection endpoints between cluster nodes, as well as transparent local failover capability of network connection endpoints between redundant local network interfaces.
NOTE
Serviceguard cannot be responsible for networks or connection endpoints that it is not configured to monitor.
SGeRAC Heartbeat Network

Serviceguard supports multiple heartbeat networks, private or public. The Serviceguard heartbeat can be configured as a single network connection with a redundant LAN, or as multiple connections over multiple LANs (single or redundant).

CSS Heartbeat Network

The CSS IP addresses for peer communications are fixed IP addresses. CSS heartbeats are carried on a single network connection; CSS does not support multiple heartbeat networks. To protect against a network single point of failure, the CSS heartbeat network should be configured with redundant physical networks under SGeRAC monitoring. Since SGeRAC does not support heartbeat over Hyperfabric (HF) networks, the preferred configuration is for CSS and Serviceguard to share the same cluster interconnect.

RAC Cluster Interconnect

Each set of RAC instances maintains peer communications on a single connection and may not support multiple connections on HP-UX with SGeRAC. To protect against a network single point of failure, the RAC cluster interconnect should be configured with redundant networks under Serviceguard monitoring, and Serviceguard should be configured to take action (either a local failover, an instance package shutdown, or both) if the RAC cluster interconnect fails. Serviceguard does not monitor Hyperfabric networks directly (integration of Serviceguard with the HF/EMS monitor is supported).

Public Client Access

When the client connection endpoint (virtual or floating IP address) is configured using Serviceguard packages, Serviceguard provides monitoring, local failover, and remote failover capabilities. When Serviceguard packages are not used, Serviceguard does not provide monitoring or failover support.
RAC Instances
Automated Startup and Shutdown
CRS can be configured to automatically start, monitor, restart, and halt RAC instances. If CRS is not configured to automatically start the RAC instance at Oracle Cluster Software startup, RAC instance startup and shutdown can be automated through scripts that use supported commands, such as srvctl or sqlplus, in an SGeRAC package.
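Such a package script can wrap srvctl; a minimal sketch using the 10g srvctl instance syntax (the database and instance names, and the SRVCTL override variable, are illustrative):

```shell
# Sketch of package hooks that start/halt one RAC instance with srvctl
# (10g syntax: srvctl {start|stop} instance -d <db> -i <instance>).
# SRVCTL is overridable so the sketch can be exercised without Oracle.
SRVCTL=${SRVCTL:-srvctl}
DB_NAME=${DB_NAME:-ops}
INST_NAME=${INST_NAME:-ops1}

start_rac_instance() {
    $SRVCTL start instance -d "$DB_NAME" -i "$INST_NAME"
}

halt_rac_instance() {
    $SRVCTL stop instance -d "$DB_NAME" -i "$INST_NAME"
}
```

The package run function would call start_rac_instance after shared storage is activated, and the halt function would call halt_rac_instance before deactivating it.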
NOTE
Shared Storage
The shared storage must be available when the RAC instance is started. Since the RAC instance expects the shared storage to be available, ensure the storage is activated: for SLVM, the shared volume groups must be activated; for CVM, the disk group must be activated; and for CFS, the cluster file system must be mounted.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle Cluster Software
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC
ORACLE LOGICAL VOLUME WORKSHEET FOR LVM                     Page ___ of ____
===============================================================================
                          RAW LOGICAL VOLUME NAME              SIZE (MB)
Oracle Cluster Registry:  ____/dev/vg_ops/rora_ocr_______      100   (once per cluster)
Oracle Cluster Vote Disk: ____/dev/vg_ops/rora_vote______       20   (once per cluster)
Oracle Control File:      ____/dev/vg_ops/ropsctl1.ctl___      110
Oracle Control File 2:    ____/dev/vg_ops/ropsctl2.ctl___      110
Oracle Control File 3:    ____/dev/vg_ops/ropsctl3.ctl___      110
Instance 1 Redo Log 1:    ____/dev/vg_ops/rops1log1.log__      120
Instance 1 Redo Log 2:    ____/dev/vg_ops/rops1log2.log__      120
Instance 1 Redo Log 3:    ____/dev/vg_ops/rops1log3.log__      120
Instance 1 Redo Log:      _______________________________     ____
Instance 1 Redo Log:      _______________________________     ____
Instance 2 Redo Log 1:    ____/dev/vg_ops/rops2log1.log__      120
Instance 2 Redo Log 2:    ____/dev/vg_ops/rops2log2.log__      120
Instance 2 Redo Log 3:    ____/dev/vg_ops/rops2log3.log__      120
Instance 2 Redo Log:      _______________________________     ____
Instance 2 Redo Log:      _______________________________     ____
Data: System              ____/dev/vg_ops/ropssystem.dbf_      500
Data: Sysaux              ____/dev/vg_ops/ropssysaux.dbf_      800
Data: Temp                ____/dev/vg_ops/ropstemp.dbf___      250
Data: Users               ____/dev/vg_ops/ropsusers.dbf__      120
Data: User data           ____/dev/vg_ops/ropsdata1.dbf__      200
Data: User data           ____/dev/vg_ops/ropsdata2.dbf__      200
Data: User data           ____/dev/vg_ops/ropsdata3.dbf__      200
Parameter: spfile1        ____/dev/vg_ops/ropsspfile1.ora        5
Password:                 ____/dev/vg_ops/rpwdfile.ora___        5
Instance 1 undotbs1:      ____/dev/vg_ops/ropsundotbs1.dbf     500
Instance 2 undotbs2:      ____/dev/vg_ops/ropsundotbs2.dbf     500
Data: example1            ____/dev/vg_ops/ropsexample1.dbf     160
ORACLE LOGICAL VOLUME WORKSHEET FOR CVM                     Page ___ of ____
===============================================================================
                          RAW VOLUME NAME                          SIZE (MB)
Oracle Cluster Registry:  ____/dev/vx/rdsk/ops_dg/ora_ocr_____      100   (once per cluster)
Oracle Cluster Vote Disk: ____/dev/vx/rdsk/ops_dg/ora_vote____       20   (once per cluster)
Oracle Control File:      ____/dev/vx/rdsk/ops_dg/opsctl1.ctl_      110
Oracle Control File 2:    ____/dev/vx/rdsk/ops_dg/opsctl2.ctl_      110
Oracle Control File 3:    ____/dev/vx/rdsk/ops_dg/opsctl3.ctl_      110
Instance 1 Redo Log 1:    ____/dev/vx/rdsk/ops_dg/ops1log1.log      120
Instance 1 Redo Log 2:    ____/dev/vx/rdsk/ops_dg/ops1log2.log      120
Instance 1 Redo Log 3:    ____/dev/vx/rdsk/ops_dg/ops1log3.log      120
Instance 1 Redo Log:      __________________________________       ____
Instance 1 Redo Log:      __________________________________       ____
Instance 2 Redo Log 1:    ____/dev/vx/rdsk/ops_dg/ops2log1.log      120
Instance 2 Redo Log 2:    ____/dev/vx/rdsk/ops_dg/ops2log2.log      120
Instance 2 Redo Log 3:    ____/dev/vx/rdsk/ops_dg/ops2log3.log      120
Instance 2 Redo Log:      __________________________________       ____
Instance 2 Redo Log:      __________________________________       ____
Data: System              ____/dev/vx/rdsk/ops_dg/opssystem.dbf     500
Data: Sysaux              ____/dev/vx/rdsk/ops_dg/opssysaux.dbf     800
Data: Temp                ____/dev/vx/rdsk/ops_dg/opstemp.dbf_      250
Data: Users               ____/dev/vx/rdsk/ops_dg/opsusers.dbf      120
Data: User data           ____/dev/vx/rdsk/ops_dg/opsdata1.dbf      200
Data: User data           ____/dev/vx/rdsk/ops_dg/opsdata2.dbf      200
Data: User data           ____/dev/vx/rdsk/ops_dg/opsdata3.dbf      200
Parameter: spfile1        ____/dev/vx/rdsk/ops_dg/opsspfile1.ora      5
Password:                 ____/dev/vx/rdsk/ops_dg/pwdfile.ora_        5
Instance 1 undotbs1:      ____/dev/vx/rdsk/ops_dg/opsundotbs1.dbf   500
Instance 2 undotbs2:      ____/dev/vx/rdsk/ops_dg/opsundotbs2.dbf   500
Data: example1            ____/dev/vx/rdsk/ops_dg/opsexample1.dbf   160
Serviceguard Configuration for Oracle 10g RAC Installing Serviceguard Extension for RAC
To install Serviceguard Extension for RAC, use the following steps for each node:
NOTE
For up-to-date version compatibility information for Serviceguard and HP-UX, see the SGeRAC release notes.
1. Mount the distribution media in the tape drive, CD, or DVD reader.
2. Run Software Distributor, using the swinstall command.
3. Specify the correct input device.
4. Choose the following bundle from the displayed list:
Serviceguard Extension for RAC
NOTE
CVM 4.x with CFS does not use the STORAGE_GROUP parameter because the disk group activation is performed by the multi-node package. CVM 3.x or 4.x without CFS uses the STORAGE_GROUP parameter in the ASCII package configuration file in order to activate the disk group. Do not enter the names of LVM volume groups or VxVM disk groups in the package ASCII configuration file.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM
The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure them in physical volume groups. For more information, refer to the manual Using HA Monitors.
Creating Volume Groups and Logical Volumes

If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section Installing Oracle Real Application Clusters.

Selecting Disks for the Volume Group

Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both. Use the following command on each node to list available disks as they are known to each system:
# lssf /dev/dsk/*
In the following examples, we use /dev/rdsk/c1t2d0 and /dev/rdsk/c0t2d0, which happen to be the device names for the same disks on both ftsys9 and ftsys10. In the event that the device file names are different on the different nodes, make a careful note of the correspondences.

Creating Physical Volumes

On the configuration node (ftsys9), use the pvcreate command to define disks as physical volumes. This only needs to be done on the configuration node. Use the following commands to create two physical volumes for the sample configuration:
# pvcreate -f /dev/rdsk/c1t2d0
# pvcreate -f /dev/rdsk/c0t2d0
Creating a Volume Group with PVG-Strict Mirroring

Use the following steps to build a volume group on the configuration node (ftsys9). Later, the same volume group will be created on other nodes.

1. First, set up the group directory for vg_ops:

# mkdir /dev/vg_ops

2. Next, create a control file named group in the directory /dev/vg_ops, as follows:

# mknod /dev/vg_ops/group c 64 0xhh0000

The major number is always 64, and the hexadecimal minor number has the form
0xhh0000
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:

# ls -l /dev/*/group

3. Create the volume group and add physical volumes to it with the following commands:

# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c0t2d0

The first command creates the volume group and adds a physical volume to it in a physical volume group called bus0. The second command adds the second drive to the volume group, locating it in a different physical volume group named bus1. The use of physical volume groups allows the use of PVG-strict mirroring of disks and PV links.

4. Repeat this procedure for additional volume groups.
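The "next available hexadecimal number" rule in step 2 can be scripted; a minimal sketch (the helper name is illustrative; it takes the hh values already in use, as read from the ls -l /dev/*/group listing):

```shell
# Sketch: compute the next free volume-group minor number of the form
# 0xhh0000, given the hh values already in use on the system.
next_vg_minor() {
    max=-1
    for h in "$@"; do
        v=$((0x$h))                      # hex hh -> decimal
        [ "$v" -gt "$max" ] && max=$v
    done
    printf '0x%02x0000\n' $((max + 1))   # next free minor number
}

next_vg_minor 00 01 02   # existing minors 0x000000, 0x010000, 0x020000 -> prints 0x030000
```

The printed value can then be used directly in the mknod command of step 2.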
Creating Mirrored Logical Volumes for RAC Redo Logs and Control Files

Create logical volumes for use as redo logs and control files by using options as in the following example:

# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y option means that mirror consistency recovery is enabled; the -s g option means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.
NOTE
It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds. If these options are not set correctly, you may not be able to continue with database recovery.
If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/redo1.log has been successfully created with character device /dev/vg_ops/rredo1.log Logical volume /dev/vg_ops/redo1.log has been successfully extended
Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the RAC database.

Creating Mirrored Logical Volumes for RAC Data Files

Following a system crash, mirrored logical volumes need to be resynchronized; when Oracle performs this resynchronization itself, it is known as resilvering. If Oracle does not perform resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NOMWC. This is done by disabling mirror write caching and enabling mirror consistency recovery. With NOMWC, SLVM performs the resynchronization. Create logical volumes for use as Oracle data files by using the same options as in the following example:
# lvcreate -m 1 -M n -c y -s g -n system.dbf -L 408 /dev/vg_ops
The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y means that mirror consistency recovery is enabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes.
If Oracle performs resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NONE by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of NONE, SLVM does not perform the resynchronization.
NOTE
Contact Oracle to determine if your version of Oracle RAC allows resilvering and to appropriately configure the mirror consistency recovery policy for your logical volumes.
Create logical volumes for use as Oracle data files by using the same options as in the following example:

# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops

The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c n option means that mirror consistency recovery is disabled; the -s g option means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes. If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/system.dbf has been successfully created with character device /dev/vg_ops/rsystem.dbf Logical volume /dev/vg_ops/system.dbf has been successfully extended
Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the OPS database.
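The flag combinations described above (and in the earlier NOTE) amount to a small mapping from mirror recovery policy to lvcreate consistency options; a sketch of that mapping (the helper name is illustrative):

```shell
# Sketch: lvcreate mirror-consistency flags per recovery policy.
#   NOMWC - SLVM resynchronizes (control/redo files, or datafiles when
#           Oracle resilvering is not available)
#   NONE  - Oracle resilvers the datafiles itself
lv_mirror_flags() {
    case "$1" in
        NOMWC) printf '%s\n' '-M n -c y' ;;   # MWC off, consistency recovery on
        NONE)  printf '%s\n' '-M n -c n' ;;   # MWC off, consistency recovery off
        *)     printf 'unknown policy: %s\n' "$1" >&2; return 1 ;;
    esac
}

lv_mirror_flags NONE   # prints: -M n -c n
```

Keeping the mapping in one place helps avoid mixing up -c y and -c n when creating many logical volumes.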
On your disk arrays, you should use redundant I/O channels from each node, connecting them to separate controllers on the array. Then you can define alternate links to the LUNs or logical disks you have defined on the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line. The following example shows how to configure alternate links using LVM commands. The following disk configuration is assumed:
8/0.15.0   /dev/dsk/c0t15d0   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 0 */
8/0.15.1   /dev/dsk/c0t15d1   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 1 */
8/0.15.2   /dev/dsk/c0t15d2   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 2 */
8/0.15.3   /dev/dsk/c0t15d3   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 3 */
8/0.15.4   /dev/dsk/c0t15d4   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 4 */
8/0.15.5   /dev/dsk/c0t15d5   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 5 */
10/0.3.0   /dev/dsk/c1t3d0    /* I/O Channel 1 (10/0) SCSI address 3   LUN 0 */
10/0.3.1   /dev/dsk/c1t3d1    /* I/O Channel 1 (10/0) SCSI address 3   LUN 1 */
10/0.3.2   /dev/dsk/c1t3d2    /* I/O Channel 1 (10/0) SCSI address 3   LUN 2 */
10/0.3.3   /dev/dsk/c1t3d3    /* I/O Channel 1 (10/0) SCSI address 3   LUN 3 */
10/0.3.4   /dev/dsk/c1t3d4    /* I/O Channel 1 (10/0) SCSI address 3   LUN 4 */
10/0.3.5   /dev/dsk/c1t3d5    /* I/O Channel 1 (10/0) SCSI address 3   LUN 5 */
Assume that the disk array has been configured, and that both the following device files appear for the same LUN (logical disk) when you run the ioscan command:
/dev/dsk/c0t15d0 /dev/dsk/c1t3d0
Use the following procedure to configure a volume group for this logical disk: 1. First, set up the group directory for vg_ops:
# mkdir /dev/vg_ops
2. Next, create a control file named group in the directory /dev/vg_ops, as follows:
# mknod /dev/vg_ops/group c 64 0xhh0000
The major number is always 64, and the hexadecimal minor number has the form
0xhh0000
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0
It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group. 4. Use the following to create the volume group with the two links:
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0 /dev/dsk/c1t3d0
LVM will now recognize the I/O channel represented by /dev/dsk/c0t15d0 as the primary link to the disk; if the primary link fails, LVM will automatically switch to the alternate I/O channel represented by /dev/dsk/c1t3d0. Use the vgextend command to add additional disks to the volume group, specifying the appropriate physical volume name for each PV link. Repeat the entire procedure for each distinct volume group you wish to create. For ease of system administration, you may wish to use different volume groups to separate logs from data and control files.
NOTE
The default maximum number of volume groups in HP-UX is 10. If you intend to create enough new volume groups that the total exceeds ten, you must increase the maxvgs system parameter and then re-build the HP-UX kernel. Use SAM and select the Kernel Configuration area, then choose Configurable Parameters. Maxvgs appears on the list.
# lvcreate -n ops1log1.log -L 4 /dev/vg_ops
# lvcreate -n opsctl1.ctl -L 4 /dev/vg_ops
# lvcreate -n system.dbf -L 28 /dev/vg_ops
# lvcreate -n opsdata1.dbf -L 1000 /dev/vg_ops
Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name
opsctl1.ctl           118            /dev/vg_ops/ropsctl1.ctl
opsctl2.ctl           118            /dev/vg_ops/ropsctl2.ctl
opsctl3.ctl           118            /dev/vg_ops/ropsctl3.ctl
ops1log1.log          128            /dev/vg_ops/rops1log1.log
ops1log2.log          128            /dev/vg_ops/rops1log2.log
ops1log3.log          128            /dev/vg_ops/rops1log3.log
ops2log1.log          128            /dev/vg_ops/rops2log1.log
ops2log2.log          128            /dev/vg_ops/rops2log2.log
ops2log3.log          128            /dev/vg_ops/rops2log3.log
opssystem.dbf         408            /dev/vg_ops/ropssystem.dbf
opssysaux.dbf         808            /dev/vg_ops/ropssysaux.dbf
opstemp.dbf           258            /dev/vg_ops/ropstemp.dbf
opsusers.dbf          128            /dev/vg_ops/ropsusers.dbf
opsdata1.dbf          208            /dev/vg_ops/ropsdata1.dbf
opsdata2.dbf          208            /dev/vg_ops/ropsdata2.dbf
opsdata3.dbf          208            /dev/vg_ops/ropsdata3.dbf
Table 2-1 Required Oracle File Names for Demo Database (Continued)

Oracle File Size (MB)*   Raw Logical Volume Path Name
5                        /dev/vg_ops/ropsspfile1.ora
5                        /dev/vg_ops/rpwdfile.ora
500                      /dev/vg_ops/ropsundotbs1.log
500                      /dev/vg_ops/ropsundotbs2.log
160                      /dev/vg_ops/ropsexample1.dbf
The size of the logical volume is larger than the Oracle file size because Oracle needs extra space to allocate a header in addition to the file's actual data capacity. Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vg_ops/ropsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl
After creating these files, set the owner to oracle and the group to dba with a file mode of 660. The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
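The linking and ownership steps can be combined in a small helper; a sketch under the assumption that the oracle user and dba group exist on the node (the helper name is illustrative, and chown failures are tolerated so the sketch can run outside a cluster node):

```shell
# Sketch: link a raw logical volume into the ORACLE_BASE tree and set
# the documented ownership (oracle:dba) and mode (660) on the device.
link_oracle_file() {
    raw=$1 link=$2
    ln -sf "$raw" "$link"
    chmod 660 "$raw"
    chown oracle:dba "$raw" 2>/dev/null || true
}

# Example (paths from the text):
# link_oracle_file /dev/vg_ops/ropsctl1.ctl /u01/ORACLE/db001/ctrl01_1.ctl
```

Running the helper once per worksheet entry keeps the permissions consistent across all the raw logical volumes.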
Serviceguard Configuration for Oracle 10g RAC Displaying the Logical Volume Infrastructure
2. Still on ftsys9, copy the map file to ftsys10 (and to additional nodes as necessary.)
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map
3. On ftsys10 (and other nodes, as necessary), create the volume group directory and the control file named group:
# mkdir /dev/vg_ops # mknod /dev/vg_ops/group c 64 0xhh0000
For the group file, the major number is always 64, and the hexadecimal minor number has the form
0xhh0000
where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
4. Import the volume group data using the map file from node ftsys9. On node ftsys10 (and other nodes, as necessary), enter:
# vgimport -s -m /tmp/vg_ops.map /dev/vg_ops
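Steps 2 through 4 can be wrapped in a single loop over the backup nodes; a sketch (the RCP and REMSH variables are illustrative overrides so the logic can be exercised without remote hosts, and the sketch assumes the group control file was already created on each node as in step 3):

```shell
# Sketch: distribute the vgexport map file and import the shared volume
# group on each backup node. Assumes /dev/vg_ops/group already exists on
# each target node. RCP/REMSH default to the HP-UX commands.
RCP=${RCP:-rcp}
REMSH=${REMSH:-remsh}

import_vg_on_nodes() {
    map=$1 vg=$2; shift 2
    for node in "$@"; do
        $RCP "$map" "$node:$map"                    # copy the map file over
        $REMSH "$node" vgimport -s -m "$map" "$vg"  # import on the remote node
    done
}

# Example (node name from the text):
# import_vg_on_nodes /tmp/vg_ops.map /dev/vg_ops ftsys10
```

With more than one backup node, extra node names can simply be appended to the argument list.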
Serviceguard Configuration for Oracle 10g RAC Installing Oracle Real Application Clusters
Before installing the Oracle Real Application Clusters software, make sure the storage cluster is running. Log in as the oracle user on one node, and then use the Oracle installer to install the Oracle software and build the correct Oracle runtime executables. When executables are installed to a local file system on each node, the Oracle installer copies the executables to the other nodes in the cluster. For details on Oracle installation, refer to the Oracle installation documentation. As part of this installation, the Oracle installer installs the executables and, optionally, can build an Oracle demo database on the primary node. The demo database files can be the character (raw) device file names for the logical volumes created earlier. For a demo database on SLVM or CVM, create logical volumes as shown in Table 2-1, Required Oracle File Names for Demo Database. As the installer prompts for the database file names, enter the pathnames of the raw logical volumes instead of using the defaults. If you do not wish to install the demo database, select install software only.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File
# Enter a name for this cluster. This name will be used to identify the # cluster when viewing or manipulating it. CLUSTER_NAME cluster 1
# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations in which a
# running cluster fails, and then two equal-sized sub-clusters are both
# trying to form a new cluster. The cluster lock may be configured
# using only one of the following alternatives on a cluster:
#     the LVM lock disk
#     the quorum server
#
# Consider the following when configuring a cluster. For a two-node
# cluster, you must use a cluster lock. For a cluster of three or four
# nodes, a cluster lock is strongly recommended. For a cluster of more
# than four nodes, a cluster lock is recommended. If you decide to
# configure a lock for a cluster of more than four nodes, it must be a
# quorum server.
#
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk. The
# FIRST_CLUSTER_LOCK_VG is the LVM volume group that holds the cluster
# lock. This volume group should not be used by any other cluster as a
# cluster lock device.
# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server. The
# QS_HOST is the host name or IP address of the system that is running
# the quorum server process. The QS_POLLING_INTERVAL (microseconds) is
# the interval at which Serviceguard checks to make sure the quorum
# server is running. The optional QS_TIMEOUT_EXTENSION (microseconds)
# is used to increase the time interval after which the quorum server
# is marked DOWN.
#
# The default quorum server timeout is calculated from the Serviceguard
# cluster parameters, including NODE_TIMEOUT and HEARTBEAT_INTERVAL.
# If you are experiencing quorum server timeouts, you can adjust these
# parameters, or you can include the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION directly affects the amount of time
# it takes for cluster reformation in the event of failure. For
# example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in
# contacting the quorum server. The recommended value for
# QS_TIMEOUT_EXTENSION is 0, which is used as the default; the maximum
# supported value is 300000000 (5 minutes).
#
# For example, to configure a quorum server running on node qshost with
# 120 seconds for the QS_POLLING_INTERVAL and to add 2 seconds to the
# system-assigned value for the quorum server timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000
# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
#
# NODE_NAME is the specified nodename in the cluster. It must match the
# hostname, and neither can contain the full domain name.
#
# Each NETWORK_INTERFACE, if configured with an IPv4 address, must have
# ONLY one IPv4 address entry with it, which could be either
# HEARTBEAT_IP or STATIONARY_IP.
#
# Each NETWORK_INTERFACE, if configured with IPv6 address(es), can have
# multiple IPv6 address entries (up to a maximum of 2: only one IPv6
# address entry belonging to site-local scope and only one belonging to
# global scope), which must all be STATIONARY_IP. They cannot be
# HEARTBEAT_IP.
NODE_NAME ever3a
  NETWORK_INTERFACE lan0
    STATIONARY_IP 15.244.64.140
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.77.1.1
  NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
# Primary Network Interfaces on Bridged Net 1: lan0.
#   Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
#   Possible standby Network Interfaces on Bridged Net 2: lan2.
# Cluster Timing Parameters (microseconds).
#
# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds). This
# default setting yields the fastest cluster reformations. However, the
# use of the default value increases the potential for spurious
# reformations due to momentary system hangs or network load spikes.
# For a significant portion of installations, a setting of 5000000 to
# 8000000 (5 to 8 seconds) is more appropriate. The maximum value
# recommended for NODE_TIMEOUT is 30000000 (30 seconds).

HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
# Network Monitor Configuration Parameters. # The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected. # If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound # message count stops increasing or when both inbound and outbound # message counts stop increasing. # If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down. NETWORK_FAILURE_DETECTION INOUT # Package Configuration Parameters. # Enter the maximum number of packages which will be configured in the cluster. # You can not add packages beyond this limit. # This parameter is required. MAX_CONFIGURED_PACKAGES 150
# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster: the
# first line must be USER_NAME, the second USER_HOST, and the third
# USER_ROLE. Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of 8 login names
#    from the /etc/passwd file on the user host.
# 2. USER_HOST is where the user can issue Serviceguard commands. If
#    using Serviceguard Manager, it is the COM server. Choose one of
#    these three values: ANY_SERVICEGUARD_NODE, (any)
#    CLUSTER_MEMBER_NODE, or a specific node. For a node, use the
#    official hostname from the domain name server, not an IP address
#    or fully qualified name.
# 3. USER_ROLE must be one of these three values:
#    * MONITOR: read-only capabilities for the cluster and packages
#    * PACKAGE_ADMIN: MONITOR, plus administrative commands for
#      packages in the cluster
#    * FULL_ADMIN: MONITOR and PACKAGE_ADMIN, plus the administrative
#      commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the cluster's
# nodes as root (UID=0). Access control policy cannot limit a root
# user's access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration
# file, and they apply to the entire cluster. PACKAGE_ADMIN can be set
# in the cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will be
# used by package applications via the vgchange -a e command. Neither
# CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
# List of OPS Volume Groups. Formerly known as DLM Volume Groups, these
# volume groups will be used by OPS or RAC cluster applications via the
# vgchange -a s command. (Note: the name DLM_VOLUME_GROUP is also still
# supported for compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP /dev/vgdatabase
# OPS_VOLUME_GROUP /dev/vg02
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
CFS and SGeRAC are available in selected HP Serviceguard Storage Management Suite bundles. Refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes. In the example below, both the Oracle RAC software and datafiles reside on CFS. There is a single Oracle home. Three CFS file systems are created for Oracle home, Oracle datafiles, and for the Oracle Cluster Registry (OCR) and vote device. The Oracle Cluster Software home is on a local file system.
/cfs/mnt1 - for Oracle Base and Home
/cfs/mnt2 - for Oracle datafiles
/cfs/mnt3 - for OCR and Vote device
NOTE
2. Create the Cluster ASCII file:
   # cd /etc/cmcluster
   # cmquerycl -C clm.asc -n ever3a -n ever3b
   Edit the cluster ASCII file.
3. Create the Cluster:
   # cmapplyconf -C clm.asc
4. Start the Cluster:
   # cmruncl
   # cmviewcl
   The following output will be displayed:
CLUSTER          STATUS
ever3_cluster    up

  NODE      STATUS    STATE
  ever3a    up        running
  ever3b    up        running
5. Configure the Cluster Volume Manager (CVM)
   Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM/CFS stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet; it supports multiple subnets.
   # cfscluster config -s
   The following output will be displayed:
   CVM is now configured
   Starting CVM...
   It might take a few minutes to complete
   When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b
6. Converting Disks from LVM to CVM
   You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Twelfth Edition user's guide.
7. Initializing Disks for CVM/CFS
   You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM). To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:
   # /etc/vx/bin/vxdisksetup -i c4t4d0
8. Create the Disk Group for RAC
   Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:
   # vxdg -s init cfsdg1 c4t4d0
9. Create the Disk Group Multi-Node Package
   Use the following command to add the disk group to the cluster:
   # cfsdgadm add cfsdg1 all=sw
   The following output will be displayed:
Package name SG-CFS-DG-1 was generated to control the resource.
Shared disk group cfsdg1 was associated with the cluster.
10. Activate the Disk Group
    # cfsdgadm activate cfsdg1
11. Creating Volumes and Adding a Cluster Filesystem
    # vxassist -g cfsdg1 make vol1 10240m
    # vxassist -g cfsdg1 make vol2 10240m
    # vxassist -g cfsdg1 make vol3 300m
    # newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol1
    The following output will be displayed:
    version 6 layout
    10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
    largefiles supported
    # newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol2
    The following output will be displayed:
    version 6 layout
    10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
    largefiles supported
    # newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol3
    The following output will be displayed:
    version 6 layout
    307200 sectors, 307200 blocks of size 1024, log size 1024 blocks
    largefiles supported
12. Configure Mount Point
    # cfsmntadm add cfsdg1 vol1 /cfs/mnt1 all=rw
    The following output will be displayed:
Package name SG-CFS-MP-1 was generated to control the resource.
Mount point /cfs/mnt1 was associated with the cluster.
# cfsmntadm add cfsdg1 vol2 /cfs/mnt2 all=rw
The following output will be displayed:
Package name SG-CFS-MP-2 was generated to control the resource.
Mount point /cfs/mnt2 was associated with the cluster.
# cfsmntadm add cfsdg1 vol3 /cfs/mnt3 all=rw
The following output will be displayed:
Package name SG-CFS-MP-3 was generated to control the resource.
Mount point /cfs/mnt3 was associated with the cluster.
NOTE
The disk group and mount point multi-node packages (SG-CFS-DG_ID# and SG-CFS-MP_ID#) do not monitor the health of the disk group and mount point. They check that the application packages that depend on them have access to the disk groups and mount points. If a dependent application package loses access and cannot read and write to the disk, it will fail; however, that will not cause the DG or MP multi-node package to fail.
13. Mount Cluster Filesystem
    # cfsmount /cfs/mnt1
    # cfsmount /cfs/mnt2
    # cfsmount /cfs/mnt3
14. Check CFS Mount Points
    # bdf | grep cfs
/dev/vx/dsk/cfsdg1/vol1  10485760  19651  9811985   0%  /cfs/mnt1
/dev/vx/dsk/cfsdg1/vol2  10485760  19651  9811985   0%  /cfs/mnt2
/dev/vx/dsk/cfsdg1/vol3    307200   1802   286318   1%  /cfs/mnt3
15. View the Configuration
    # cmviewcl
CLUSTER          STATUS
ever3_cluster    up

  NODE      STATUS    STATE      GMS_STATE
  ever3a    up        running    unknown
  ever3b    up        running    unknown
MULTI_NODE_PACKAGES

  PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
  SG-CFS-pkg    up       running   enabled    yes
  SG-CFS-DG-1   up       running   enabled    no
  SG-CFS-MP-1   up       running   enabled    no
  SG-CFS-MP-2   up       running   enabled    no
  SG-CFS-MP-3   up       running   enabled    no
CAUTION
Once you create the disk group and mount point packages, it is critical that you administer the cluster with the cfs commands, including cfsdgadm, cfsmntadm, cfsmount, and cfsumount. Using general commands such as mount and umount could cause serious problems, such as writing to the local file system instead of the cluster file system. Any form of the mount command (for example, mount -o cluster, dbed_chkptmount, or sfrac_chkptmount) other than cfsmount or cfsumount should be used with caution in an HP Serviceguard Storage Management Suite environment with CFS. These non-cfs commands can conflict with subsequent command operations on the file system or Serviceguard packages. Use of these other forms of mount will not create an appropriate multi-node package, which means that the cluster packages are not aware of the file system changes.
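Before relying on the cluster file systems in a script, the multi-node package status reported by cmviewcl can be checked mechanically. A minimal sketch (the helper name is invented, not from this manual) that counts SG-CFS-MP packages whose status is not "up":

```shell
#!/bin/sh
# Count SG-CFS-MP-* packages whose STATUS column is not "up".
# Expects cmviewcl-style "PACKAGE STATUS STATE ..." lines on stdin.
mp_down_count()
{
    awk '$1 ~ /^SG-CFS-MP/ && $2 != "up" { n++ } END { print n+0 }'
}

# Example with lines modeled on the cmviewcl output in this section;
# prints 0 when every mount-point package is up:
printf 'SG-CFS-pkg up running enabled yes\nSG-CFS-MP-1 up running enabled no\n' \
    | mp_down_count
```

In practice the input would come from `cmviewcl | mp_down_count`, and a non-zero count would abort the calling script.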
# cfsumount /cfs/mnt3
2. Delete Mount Point Multi-node Package
   # cfsmntadm delete /cfs/mnt1
   The following output will be generated:
   Mount point /cfs/mnt1 was disassociated from the cluster
   # cfsmntadm delete /cfs/mnt2
   The following output will be generated:
   Mount point /cfs/mnt2 was disassociated from the cluster
   # cfsmntadm delete /cfs/mnt3
   The following output will be generated:
   Mount point /cfs/mnt3 was disassociated from the cluster
   Cleaning up resource controlling shared disk group cfsdg1
   Shared disk group cfsdg1 was disassociated from the cluster.
NOTE
3. Delete Disk Group Multi-node Package
   # cfsdgadm delete cfsdg1
   The following output will be generated:
   Shared disk group cfsdg1 was disassociated from the cluster.
NOTE
cfsmntadm delete also deletes the disk group if there is no dependent package. To ensure the disk group deletion is complete, use the above command to delete the disk group package.
The following output will be generated:
Stopping CVM...CVM is stopped
# cfscluster unconfig
The following output will be displayed:
CVM is now unconfigured
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
IMPORTANT
Creating a rootdg disk group is only necessary the first time you use the Volume Manager. CVM 4.1 does not require a rootdg.
Preparing the Cluster and the System Multi-node Package for use with CVM 4.x
1. Create the Cluster ASCII file:
   # cd /etc/cmcluster
   # cmquerycl -C clm.asc -n ever3a -n ever3b
   Edit the cluster ASCII file.
2. Create the Cluster:
   # cmapplyconf -C clm.asc
   Start the Cluster:
   # cmruncl
   # cmviewcl
   The following output will be displayed:
CLUSTER          STATUS
ever3_cluster    up

  NODE      STATUS    STATE
  ever3a    up        running
  ever3b    up        running
3. Configure the Cluster Volume Manager (CVM)
   Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet; it supports multiple subnets. Use the cmapplyconf command:
   # cmapplyconf -P /etc/cmcluster/cfs/SG-CFS-pkg.conf
   # cmrunpkg SG-CFS-pkg
   When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:
   # vxdctl -c mode
   The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b

Converting Disks from LVM to CVM
Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Thirteenth Edition user's guide.

Initializing Disks for CVM
It is necessary to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM). To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:
# /etc/vx/bin/vxdisksetup -i c4t4d0

Create the Disk Group for RAC
Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:
# vxdg -s init ops_dg c4t4d0
4. Creating Volumes and Adding a Cluster Filesystem
   # vxassist -g ops_dg make vol1 10240m
   # vxassist -g ops_dg make vol2 10240m
   # vxassist -g ops_dg make vol3 300m
5. View the Configuration
# cmviewcl
CLUSTER          STATUS
ever3_cluster    up

  NODE      STATUS    STATE
  ever3a    up        running
  ever3b    up        running
MULTI_NODE_PACKAGES

  PACKAGE      STATUS   STATE     AUTO_RUN   SYSTEM
  SG-CFS-pkg   up       running   enabled    yes
IMPORTANT
After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:
# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *
The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
Mirror Detachment Policies with CVM
The required CVM disk mirror detachment policy is "global", which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is "local", which means that if one node cannot see a specific mirror copy, then CVM deactivates access to the volume for that node only. This policy can be reset on a disk group basis by using the vxedit command, as follows:
# vxedit set diskdetpolicy=global <DiskGroupName>
NOTE
The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.
NOTE
To prepare the cluster for CVM disk group configuration, you need to ensure that only one heartbeat subnet is configured. Then use the following command, which creates the special package that communicates cluster information to CVM: # cmapplyconf -P /etc/cmcluster/cvm/VxVM-CVM-pkg.conf
WARNING
After the above command completes, start the cluster and create disk groups for shared use as described in the following sections.

Starting the Cluster and Identifying the Master Node
Run the cluster, which will activate the special CVM package:
# cmruncl
After the cluster is started, it will run with a special system multi-node package named VxVM-CVM-pkg, which is on all nodes. This package is shown in the following output of the cmviewcl -v command:
CLUSTER   STATUS
bowls     up

  NODE     STATUS    STATE
  spare    up        running
  split    up        running
  strike   up        running
When CVM starts up, it selects a master node, and this is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:
# vxdctl -c mode
One node will identify itself as the master. Create disk groups from this node.

Converting Disks from LVM to CVM
You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Thirteenth Edition user's guide.

Initializing Disks for CVM
You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM). To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:
# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2
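The per-node master check above can also be scripted. A hypothetical sketch (the helper name is invented) that extracts the local node's CVM role from the `vxdctl -c mode` output shown in this section:

```shell
#!/bin/sh
# Print the local node's CVM role (MASTER or SLAVE), given
# "vxdctl -c mode" output on stdin.
cvm_role()
{
    sed -n 's/^mode: enabled: cluster active - \(.*\)$/\1/p'
}

# Example using the sample output from this section; prints: SLAVE
printf 'mode: enabled: cluster active - SLAVE\nmaster: ever3b\n' | cvm_role
```

On a live node the input would come from `vxdctl -c mode | cvm_role`, and disk group creation would proceed only where the result is MASTER.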
Creating Disk Groups for RAC
Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:
# vxdg -s init ops_dg c0t3d2
Verify the configuration with the following command:
# vxdg list
NAME     STATE            ID
rootdg   enabled          971995699.1025.node1
ops_dg   enabled,shared   972078742.1084.node2
Creating Volumes
Use the vxassist command to create logical volumes. The following is an example: # vxassist -g ops_dg make log_files 1024m This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files. Verify the configuration with the following command: # vxdg list
IMPORTANT
After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:
# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *
The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
Mirror Detachment Policies with CVM
The required CVM disk mirror detachment policy is "global", which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is "local", which means that if one node cannot see a specific mirror copy, then CVM deactivates access to the volume for that node only. This policy can be reset on a disk group basis by using the vxedit command, as follows:
# vxedit set diskdetpolicy=global <DiskGroupName>
NOTE
The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.
Table 2-2  Required Oracle File Names for Demo Database

Volume Name     Size (MB)   Raw Device File Name
opsctl1.ctl     118         /dev/vx/rdsk/ops_dg/opsctl1.ctl
opsctl2.ctl     118         /dev/vx/rdsk/ops_dg/opsctl2.ctl
opsctl3.ctl     118         /dev/vx/rdsk/ops_dg/opsctl3.ctl
ops1log1.log    128         /dev/vx/rdsk/ops_dg/ops1log1.log
ops1log2.log    128         /dev/vx/rdsk/ops_dg/ops1log2.log
ops1log3.log    128         /dev/vx/rdsk/ops_dg/ops1log3.log
ops2log1.log    128         /dev/vx/rdsk/ops_dg/ops2log1.log
ops2log2.log    128         /dev/vx/rdsk/ops_dg/ops2log2.log
Table 2-2  Required Oracle File Names for Demo Database (Continued)

Volume Name       Oracle File Size (MB)   Volume Size (MB)   Raw Device File Name
ops2log3.log      120                     128                /dev/vx/rdsk/ops_dg/ops2log3.log
opssystem.dbf     500                     508                /dev/vx/rdsk/ops_dg/opssystem.dbf
opssysaux.dbf     800                     808                /dev/vx/rdsk/ops_dg/opssysaux.dbf
opstemp.dbf       250                     258                /dev/vx/rdsk/ops_dg/opstemp.dbf
opsusers.dbf      120                     128                /dev/vx/rdsk/ops_dg/opsusers.dbf
opsdata1.dbf      200                     208                /dev/vx/rdsk/ops_dg/opsdata1.dbf
opsdata2.dbf      200                     208                /dev/vx/rdsk/ops_dg/opsdata2.dbf
opsdata3.dbf      200                     208                /dev/vx/rdsk/ops_dg/opsdata3.dbf
opsspfile1.ora    500                     508                /dev/vx/rdsk/ops_dg/opsspfile1.ora
opspwdfile.ora    500                     508                /dev/vx/rdsk/ops_dg/opspwdfile.ora
opsundotbs1.dbf   500                     508                /dev/vx/rdsk/ops_dg/opsundotbs1.dbf
opsundotbs2.dbf   500                     508                /dev/vx/rdsk/ops_dg/opsundotbs2.dbf
opsexample1.dbf   160                     168                /dev/vx/rdsk/ops_dg/opsexample1.dbf
Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vx/rdsk/ops_dg/opsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl
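Creating one raw volume per demo file is repetitive; a hypothetical sketch (the helper name is invented, not from this manual) that prints the corresponding vxassist commands from name/size pairs such as those in Table 2-2, for review before running:

```shell
#!/bin/sh
# Print a "vxassist make" command for each "name size-in-MB" pair on stdin.
gen_vxassist_cmds()
{
    dg="$1"
    while read -r name size; do
        [ -z "$name" ] && continue
        echo "vxassist -g $dg make $name ${size}m"
    done
}

# Example with two entries from Table 2-2; prints:
#   vxassist -g ops_dg make opsctl1.ctl 118m
#   vxassist -g ops_dg make ops1log1.log 128m
gen_vxassist_cmds ops_dg <<EOF
opsctl1.ctl 118
ops1log1.log 128
EOF
```

Piping the output through sh would actually create the volumes, but printing first lets the list be checked against the table.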
Example:
1. Create an ASCII file, and define the path for each database object.
control1=/u01/ORACLE/db001/ctrl01_1.ctl
2. Set the following environment variable where filename is the name of the ASCII file created.
# export DBCA_RAW_CONFIG=<full path>/filename
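Before pointing DBCA at the mapping file, each entry can be checked against the actual device files. A hedged sketch (the helper name is invented; not part of the manual):

```shell
#!/bin/sh
# Read "name=path" mapping lines on stdin and report whether each path
# exists as a raw (character) or block device file.
mapping_check()
{
    while IFS='=' read -r name path; do
        [ -z "$name" ] && continue
        if [ -c "$path" ] || [ -b "$path" ]; then
            echo "OK      $name -> $path"
        else
            echo "MISSING $name -> $path"
        fi
    done
}

# Example entry; the result depends on whether the device exists:
printf 'system=/dev/vg_ops/ropssystem.dbf\n' | mapping_check
```

In practice the whole file would be checked with `mapping_check < $DBCA_RAW_CONFIG` and any MISSING lines resolved before running dbca.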
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation)
2.
3.
4.
5. Enable Remote Access (ssh and remsh) for Oracle User on all Nodes
6. Create File System for Oracle Directories
In the following samples, /mnt/app is a mounted file system for the Oracle software. Assume there is a private disk, c4t5d0, of 18 GB on all nodes. Create the local file system on each node:
# umask 022
# pvcreate /dev/rdsk/c4t5d0
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c4t5d0
# lvcreate -L 16000 /dev/vg01
# newfs -F vxfs /dev/vg01/rlvol1
# mkdir -p /mnt/app
# mount /dev/vg01/lvol1 /mnt/app
# chmod 775 /mnt/app
7.
8. Create Oracle Base Directory (for RAC Binaries on Local File System)
If installing the RAC binaries on a local file system, create the oracle base directory on each node:
# mkdir -p /mnt/app/oracle
# chown -R oracle:oinstall /mnt/app/oracle
# chmod -R 775 /mnt/app/oracle
# usermod -d /mnt/app/oracle oracle
9. Create Oracle Base Directory (For RAC Binaries on Cluster File System)
If installing the RAC binaries on the Cluster File System, create the oracle base directory once, since this CFS directory is visible on all nodes. The CFS file system used is /cfs/mnt1.
# mkdir -p /cfs/mnt1/oracle
# chown -R oracle:oinstall /cfs/mnt1/oracle
# chmod -R 775 /cfs/mnt1/oracle
# chmod 775 /cfs
# chmod 775 /cfs/mnt1
Modify the oracle user to use the new home directory on each node:
# usermod -d /cfs/mnt1/oracle oracle
10.
# ORACLE_BASE=/mnt/app/oracle; export ORACLE_BASE
# mkdir -p $ORACLE_BASE/oradata/ver10
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 755 $ORACLE_BASE/oradata
The following is a sample of the mapping file for DBCA:
system=/dev/vg_ops/ropssystem.dbf
sysaux=/dev/vg_ops/ropssysaux.dbf
undotbs1=/dev/vg_ops/ropsundotbs01.dbf
undotbs2=/dev/vg_ops/ropsundotbs02.dbf
example=/dev/vg_ops/ropsexample1.dbf
users=/dev/vg_ops/ropsusers.dbf
redo1_1=/dev/vg_ops/rops1log1.log
redo1_2=/dev/vg_ops/rops1log2.log
redo2_1=/dev/vg_ops/rops2log1.log
redo2_2=/dev/vg_ops/rops2log2.log
control1=/dev/vg_ops/ropsctl1.ctl
control2=/dev/vg_ops/ropsctl2.ctl
control3=/dev/vg_ops/ropsctl3.ctl
temp=/dev/vg_ops/ropstmp.dbf
spfile=/dev/vg_ops/ropsspfile1.ora
In this sample, create the DBCA mapping file and place it at /mnt/app/oracle/oradata/ver10/ver10_raw.conf.
11.
# chmod 755 VOTE
# chown -R oracle:oinstall /cfs/mnt3
b. Create Directory for Oracle Demo Database on CFS
   Create the CFS directory to store the Oracle database files. Run these commands on one node only:
   # chmod 775 /cfs
   # chmod 775 /cfs/mnt2
   # cd /cfs/mnt2
   # mkdir oradata
   # chown oracle:oinstall oradata
   # chmod 775 oradata
Serviceguard Configuration for Oracle 10g RAC Installing Oracle 10g Cluster Software
Serviceguard Configuration for Oracle 10g RAC Installing Oracle 10g RAC Binaries
Serviceguard Configuration for Oracle 10g RAC Creating a RAC Demo Database
1. Setting up Listeners with Oracle Network Configuration Assistant
   Use the Oracle network configuration assistant to configure the listeners with the following command:
   $ netca
2. Create Database with Database Configuration Assistant
   Use the Oracle database configuration assistant to create the database with the following command:
   $ dbca
Use the following guidelines when installing on a local file system:
a. In this sample, the database name and SID prefix are ver10.
1. Setting up Listeners with Oracle Network Configuration Assistant
   Use the Oracle network configuration assistant to configure the listeners with the following command:
   $ netca
2. Create Database with Database Configuration Assistant
   Use the Oracle database configuration assistant to create the database with the following command:
   $ dbca
Use the following guidelines when installing on the Cluster File System:
a. In this sample, the database name and SID prefix are ver10.
b. Select the storage option for Cluster File System.
c. Enter /cfs/mnt2/oradata as the common directory.
Serviceguard Configuration for Oracle 10g RAC Verify that Oracle Disk Manager is Configured
1. Check the license:
   # /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f ODM
   Output:
   Using VERITAS License Manager API Version 3.00, Build 2
   ODM feature is licensed
2. Check that the VRTSodm package is installed:
   # swlist VRTSodm
   Output:
   VRTSodm              4.1m   VERITAS Oracle Disk Manager
   VRTSodm.ODM-KRN      4.1m   VERITAS ODM kernel files
   VRTSodm.ODM-MAN      4.1m   VERITAS ODM manual pages
   VRTSodm.ODM-RUN      4.1m   VERITAS ODM commands
3. Check that libodm.sl is present:
   # ll -L /opt/VRTSodm/lib/libodm.sl
   Output:
   -rw-r--r-- 1 root sys 14336 Apr 25 18:42 /opt/VRTSodm/lib/libodm.sl
Serviceguard Configuration for Oracle 10g RAC Configuring Oracle to Use Oracle Disk Manager Library
1. Log in as the oracle user.
2. Shut down the database.
3. Link the Oracle Disk Manager library into the Oracle home for Oracle 10g.
   For HP 9000 systems:
   $ rm ${ORACLE_HOME}/lib/libodm10.sl
   $ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm10.sl
   For Integrity systems:
   $ rm ${ORACLE_HOME}/lib/libodm10.so
   $ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm10.so
4. Start the Oracle database.
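Since only the library suffix differs by architecture (.sl on HP 9000, .so on Integrity), the link step can be parameterized. A sketch under the assumption that uname -m reports "ia64" on Integrity servers; it only prints the command it would run:

```shell
#!/bin/sh
# Pick the ODM library suffix by architecture: Integrity (ia64) uses .so,
# HP 9000 (PA-RISC) uses .sl. Assumption: "uname -m" reports "ia64" on
# Integrity servers; anything else is treated as HP 9000 here.
case "$(uname -m 2>/dev/null)" in
    ia64) odm_suffix=so ;;
    *)    odm_suffix=sl ;;
esac

# Print, rather than run, the link command for review:
echo "ln -s /opt/VRTSodm/lib/libodm.sl \${ORACLE_HOME}/lib/libodm10.${odm_suffix}"
```

The rm of the old link and the database shutdown from the steps above would still be needed before actually running the printed command.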
Serviceguard Configuration for Oracle 10g RAC Verify that Oracle Disk Manager is Running
1. Start the cluster and the Oracle database (if not already started).
2. Check that the Oracle instance is using the Oracle Disk Manager function:
   # cat /dev/odm/stats
abort:        0
cancel:       0
commit:       18
create:       18
delete:       0
identify:     349
io:           12350590
reidentify:   78
resize:       0
unidentify:   203
mname:        0
vxctl:        0
vxvers:       10
io req:       9102431
io calls:     6911030
comp req:     73480659
comp calls:   5439560
io mor cmp:   461063
io zro cmp:   2330
cl receive:   66145
cl ident:     18
cl reserve:   8
cl delete:    1
cl resize:    0
cl same op:   0
cl opt idn:   0
cl opt rsv:   332
**********:   17
# kcmodule -P state odm
Output:
state loaded
4. In the alert log, verify the Oracle instance is running. The log should contain output similar to the following:
   Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1
Serviceguard Configuration for Oracle 10g RAC Configuring Oracle to Stop Using Oracle Disk Manager Library
1. Log in as the oracle user.
2. Shut down the database.
3. Change directories:
   $ cd ${ORACLE_HOME}/lib
4. Remove the file linked to the ODM library.
   For HP 9000 systems:
   $ rm libodm10.sl
   $ ln -s ${ORACLE_HOME}/lib/libodmd10.sl ${ORACLE_HOME}/lib/libodm10.sl
   For Integrity systems:
   $ rm libodm10.so
   $ ln -s ${ORACLE_HOME}/lib/libodmd10.so ${ORACLE_HOME}/lib/libodm10.so
5. Restart the database.
Serviceguard Configuration for Oracle 10g RAC Using Serviceguard Packages to Synchronize with Oracle 10g RAC
When the Oracle Cluster Software required storage is configured on SLVM volume groups or CVM disk groups, the Serviceguard package should be configured to activate and deactivate the required storage in the package control script. As an example, for SLVM volume groups, modify the control script to activate the volume group in shared mode and set VG:
VG[0]=vg_ops

Storage Activation (CVM)
When the Oracle Cluster Software required storage is configured on CVM disk groups, the Serviceguard package should be configured to activate and deactivate the required storage in the package configuration file and control script. In the package configuration file, specify the disk group with the STORAGE_GROUP keyword. In the package control script, specify the disk group with the CVM_DG variable. As an example, the package configuration file should contain the following:
STORAGE_GROUP ops_dg
Modify the package control script to set the CVM disk group to activate for shared write and to specify the disk group:
CVM_DG[0]=ops_dg

Storage Activation (CFS)
When the Oracle Cluster Software required storage is configured on a Cluster File System (CFS), the Serviceguard package should be configured to depend on the CFS multi-node package through package dependency. With package dependency, the Serviceguard package that starts Oracle Cluster Software will not run until the CFS multi-node package it depends on is up, and will halt before the CFS multi-node package is halted. Set up the dependency conditions in the Serviceguard package configuration file (sample):
DEPENDENCY_NAME       mp1
DEPENDENCY_CONDITION  SG-CFS-MP-1=UP
DEPENDENCY_LOCATION   SAME_NODE
DEPENDENCY_NAME       mp2
DEPENDENCY_CONDITION  SG-CFS-MP-2=UP
DEPENDENCY_LOCATION   SAME_NODE

DEPENDENCY_NAME       mp3
DEPENDENCY_CONDITION  SG-CFS-MP-3=UP
DEPENDENCY_LOCATION   SAME_NODE
Starting and Stopping Oracle Cluster Software
In the Serviceguard package control script, configure the Oracle Cluster Software start in the customer_defined_run_cmds function.
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs start
For 10g 10.2.0.01 or later:
<CRS HOME>/bin/crsctl start crs
In the Serviceguard package control script, configure the Oracle Cluster Software stop in the customer_defined_halt_cmds function.
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs stop
For 10g 10.2.0.01 or later:
<CRS HOME>/bin/crsctl stop crs
When stopping Oracle Cluster Software in a Serviceguard package, it may be necessary to verify that the Oracle processes have stopped and exited before deactivating storage or halting the CFS multi-node package. The verification can be done with a script that loops and checks for the successful stop message in the Oracle Cluster Software logs, or for the existence of the Oracle processes that need to be stopped, specifically the CSS daemon (ocssd.bin). Such a script could be called by the Serviceguard package control script after the command to halt Oracle Cluster Software and before storage deactivation.
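The verification loop described above might be sketched as follows; the function name and timeout value are illustrative, not from this manual:

```shell
#!/bin/sh
# Wait until a named process (for example ocssd.bin) has exited, up to a
# timeout in seconds. Returns 0 once the process is gone, 1 on timeout.
wait_for_process_exit()
{
    proc="$1"
    timeout="$2"
    i=0
    while [ "$i" -lt "$timeout" ]; do
        # "grep -v grep" drops our own pipeline from the ps listing
        if ! ps -ef | grep -v grep | grep -q "$proc"; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Example: called from customer_defined_halt_cmds after stopping
# Oracle Cluster Software and before storage deactivation:
# wait_for_process_exit ocssd.bin 300 || echo "WARNING: ocssd.bin still up"
```

A production version might also check the Oracle Cluster Software logs for the successful stop message, as the text suggests, rather than relying on the process table alone.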
Chapter 3
ORACLE LOGICAL VOLUME WORKSHEET FOR LVM                        Page ___ of ____
===============================================================================
                         RAW LOGICAL VOLUME NAME                     SIZE (MB)
Oracle Control File 1:   /dev/vg_ops/ropsctl1.ctl                    108
Oracle Control File 2:   /dev/vg_ops/ropsctl2.ctl                    108
Oracle Control File 3:   /dev/vg_ops/ropsctl3.ctl                    104
Instance 1 Redo Log 1:   /dev/vg_ops/rops1log1.log                   20
Instance 1 Redo Log 2:   /dev/vg_ops/rops1log2.log                   20
Instance 1 Redo Log 3:   /dev/vg_ops/rops1log3.log                   20
Instance 1 Redo Log:     __________________________________________  _________
Instance 1 Redo Log:     __________________________________________  _________
Instance 2 Redo Log 1:   /dev/vg_ops/rops2log1.log                   20
Instance 2 Redo Log 2:   /dev/vg_ops/rops2log2.log                   20
Instance 2 Redo Log 3:   /dev/vg_ops/rops2log3.log                   20
Instance 2 Redo Log:     __________________________________________  _________
Instance 2 Redo Log:     __________________________________________  _________
Data: System             /dev/vg_ops/ropssystem.dbf                  400
Data: Temp               /dev/vg_ops/ropstemp.dbf                    100
Data: Users              /dev/vg_ops/ropsusers.dbf                   120
Data: Tools              /dev/vg_ops/ropstools.dbf                   15
Data: User data          /dev/vg_ops/ropsdata1.dbf                   200
Data: User data          /dev/vg_ops/ropsdata2.dbf                   200
Data: User data          /dev/vg_ops/ropsdata3.dbf                   200
Data: Rollback           /dev/vg_ops/ropsrollback.dbf                300
Parameter: spfile1       /dev/vg_ops/ropsspfile1.ora                 5
Instance 1 undotbs1:     /dev/vg_ops/ropsundotbs1.dbf                312
Instance 2 undotbs2:     /dev/vg_ops/ropsundotbs2.dbf                312
Data: example1           /dev/vg_ops/ropsexample1.dbf                160
Data: cwmlite1           /dev/vg_ops/ropscwmlite1.dbf                100
Data: indx1              /dev/vg_ops/ropsindx1.dbf                   70
Data: drsys1             /dev/vg_ops/ropsdrsys1.dbf                  90
NOTE
CFS and SGeRAC are available in the following HP Serviceguard Storage Management Suite bundle: HP Serviceguard Cluster File System for RAC (SGCFSRAC) (T2777BA).
For specific bundle information, refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes.
Configuration Combinations with Oracle 9i RAC
The following configuration combinations show a sample of the configuration choices (see Table 3-1):
Configuration 1 is for Oracle software and database files to reside on CFS.
Configuration 2 is for Oracle software and archives to reside on CFS, while the database files are on raw volumes (either SLVM or CVM).
Configuration 3 is for Oracle software and archives to reside on a local file system, while the database files reside on CFS.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage

Configuration 4 is for Oracle software and archives to reside on a local file system, while the database files are on raw volumes (either SLVM or CVM).

Table 3-1  Configuration Combinations

Configuration   RAC Software, Archive   RAC Datafiles, SRVM
1               CFS                     CFS
2               CFS                     Raw (SLVM or CVM)
3               Local FS                CFS
4               Local FS                Raw (SLVM or CVM)
NOTE
Mixing database files between CFS and raw volumes is allowable, but not recommended. RAC datafiles on CFS require Oracle Disk Manager (ODM).
Using Single CFS Home or Local Home

With a single CFS home, the software installs once and all the files are visible on all nodes. Serviceguard cluster services need to be up for the CFS home to be accessible. For Oracle RAC, some files are still local (/etc/oratab, /var/opt/oracle/, /usr/local/bin/). With a local file system home, the software is installed and copied to every node's local file system. Serviceguard cluster services do not need to be up for the local file system to be accessible. There would be multiple homes for Oracle.

Considerations on using CFS for RAC datafiles and Server Management Storage (SRVM)

Use the following list when considering whether to use CFS for database storage:

- Single file system view
- Simpler setup for archive recovery, since the archive area is visible to all nodes
- Oracle can create database files
- Online changes (OMF - Oracle Managed Files) within CFS
- Better manageability
- Manual intervention when modifying volumes, DGs, disks
- Requires the SGeRAC and CFS software. CFS and SGeRAC are available in selected HP Serviceguard Storage Management Suite bundles. Refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes.

Considerations on using Raw Volumes for RAC datafiles and Server Management Storage (SRVM)

Use the following list when considering whether to use raw volumes for database storage:

- SLVM is part of SGeRAC.
- Create volumes for each datafile.
- CVM 4.x:
  - Disk group activation performed by the disk group multi-node package.
  - Disk group activation performed by the application package (without the HP Serviceguard Storage Management Suite bundle; see bundle information above).
- CVM 3.x:
  - Disk group activation is performed by the application package.
availability disk arrays in RAID modes. The logical units of storage on the arrays are accessed from each node through multiple physical volume links via DMP (Dynamic Multi-pathing), which provides redundant paths to each unit of storage.

Fill out the VERITAS Volume worksheet to provide volume names for the volumes that you will create using the VERITAS utilities. The Oracle DBA and the HP-UX system administrator should prepare this worksheet together. Create entries for shared volumes only. For each volume, enter the full pathname of the raw volume device file. Be sure to include the desired size in MB. Following is a sample worksheet filled out. Refer to Appendix B, Blank Planning Worksheets, for samples of blank worksheets. Make as many copies as you need. Fill out the worksheet and keep it for future reference.
ORACLE LOGICAL VOLUME WORKSHEET FOR CVM                     Page ___ of ____
===============================================================================
                         RAW LOGICAL VOLUME NAME                  SIZE (MB)
Oracle Control File 1:   /dev/vx/rdsk/ops_dg/opsctl1.ctl          100
Oracle Control File 2:   /dev/vx/rdsk/ops_dg/opsctl2.ctl          100
Oracle Control File 3:   /dev/vx/rdsk/ops_dg/opsctl3.ctl          100
Instance 1 Redo Log 1:   /dev/vx/rdsk/ops_dg/ops1log1.log         20
Instance 1 Redo Log 2:   /dev/vx/rdsk/ops_dg/ops1log2.log         20
Instance 1 Redo Log 3:   /dev/vx/rdsk/ops_dg/ops1log3.log         20
Instance 1 Redo Log:     ____________________________________________
Instance 1 Redo Log:     ____________________________________________
Instance 2 Redo Log 1:   /dev/vx/rdsk/ops_dg/ops2log1.log         20
Instance 2 Redo Log 2:   /dev/vx/rdsk/ops_dg/ops2log2.log         20
Instance 2 Redo Log 3:   /dev/vx/rdsk/ops_dg/ops2log3.log         20
Instance 2 Redo Log:     ____________________________________________
Instance 2 Redo Log:     ____________________________________________
Data: System             /dev/vx/rdsk/ops_dg/system.dbf           400
Data: Temp               /dev/vx/rdsk/ops_dg/temp.dbf             100
Data: Users              /dev/vx/rdsk/ops_dg/users.dbf            120
Data: Tools              /dev/vx/rdsk/ops_dg/tools.dbf            15
Data: User data          /dev/vx/rdsk/ops_dg/data1.dbf            200
Data: User data          /dev/vx/rdsk/ops_dg/data2.dbf            200
Data: User data          /dev/vx/rdsk/ops_dg/data3.dbf            200
Data: Rollback           /dev/vx/rdsk/ops_dg/rollback.dbf         300
Parameter: spfile1       /dev/vx/rdsk/ops_dg/spfile1.ora          5
Instance 1 undotbs1:     /dev/vx/rdsk/ops_dg/undotbs1.dbf         312
Instance 2 undotbs2:     /dev/vx/rdsk/ops_dg/undotbs2.dbf         312
Data: example1           /dev/vx/rdsk/ops_dg/example1.dbf         160
Data: cwmlite1           /dev/vx/rdsk/ops_dg/cwmlite1.dbf         100
Data: indx1              /dev/vx/rdsk/ops_dg/indx1.dbf            70
Data: drsys1             /dev/vx/rdsk/ops_dg/drsys1.dbf           90
Serviceguard Configuration for Oracle 9i RAC Installing Serviceguard Extension for RAC
NOTE
For the most current version compatibility for Serviceguard and HP-UX, see the SGeRAC release notes.
To install Serviceguard Extension for RAC, use the following steps for each node: 1. Mount the distribution media in the tape drive, CD, or DVD reader. 2. Run Software Distributor, using the swinstall command. 3. Specify the correct input device. 4. Choose the following bundle from the displayed list:
Serviceguard Extension for RAC
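The interactive steps above can be expressed as a single command line. The depot path below is an assumption (a mounted CD/DVD); the command is echoed for review rather than executed here.

```shell
# One-line equivalent of the swinstall steps above. The depot path is an
# assumed mount point for the distribution media; substitute your own.
depot=/cdrom
# Echo the command for review rather than executing it.
echo swinstall -s "$depot"
```

Running swinstall with only a source depot opens the same interactive selection list from which the Serviceguard Extension for RAC bundle is chosen.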
NOTE
CVM 4.x with CFS does not use the STORAGE_GROUP parameter because the disk group activation is performed by the multi-node package. CVM 3.x or 4.x without CFS uses the STORAGE_GROUP parameter in the ASCII package configuration file in order to activate the disk group. Do not enter the names of LVM volume groups or VxVM disk groups in the package ASCII configuration file.
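As a sketch, a package ASCII configuration file for a CVM configuration without CFS might contain lines like the following (the package and disk group names are illustrative, taken from the examples in this chapter):

```
# Package ASCII configuration file excerpt (illustrative names).
# CVM 3.x, or CVM 4.x without CFS: name the shared CVM disk group here so
# the package activates it at startup. Do not list LVM volume groups or
# VxVM disk groups in this parameter.
PACKAGE_NAME      ops_pkg1
STORAGE_GROUP     ops_dg
```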
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure them in physical volume groups. For more information, refer to the manual Using HA Monitors.
Creating Volume Groups and Logical Volumes

If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section Installing Oracle Real Application Clusters.

Selecting Disks for the Volume Group

Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both. Use the following command on each node to list available disks as they are known to each system:
# lssf /dev/dsk/*
In the following examples, we use /dev/rdsk/c1t2d0 and /dev/rdsk/c0t2d0, which happen to be the device names for the same disks on both ftsys9 and ftsys10. In the event that the device file names are different on the different nodes, make a careful note of the correspondences.

Creating Physical Volumes

On the configuration node (ftsys9), use the pvcreate command to define disks as physical volumes. This only needs to be done on the configuration node. Use the following commands to create two physical volumes for the sample configuration:
# pvcreate -f /dev/rdsk/c1t2d0 # pvcreate -f /dev/rdsk/c0t2d0
Creating a Volume Group with PVG-Strict Mirroring

Use the following steps to build a volume group on the configuration node (ftsys9). Later, the same volume group will be created on other nodes.

1. First, set up the group directory for vg_ops:
# mkdir /dev/vg_ops
2. Next, create a control file named group in the directory /dev/vg_ops, as follows:
# mknod /dev/vg_ops/group c 64 0xhh0000
The major number is always 64, and the hexadecimal minor number has the form
0xhh0000
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Create the volume group and add physical volumes to it with the following commands:
# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0 # vgextend -g bus1 /dev/vg_ops /dev/dsk/c0t2d0
The first command creates the volume group and adds a physical volume to it in a physical volume group called bus0. The second command adds the second drive to the volume group, locating it in a different physical volume group named bus1. The use of physical volume groups allows the use of PVG-strict mirroring of disks and PV links. 4. Repeat this procedure for additional volume groups.
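The option summary that follows describes the lvcreate command used to create a mirrored redo log logical volume. Reconstructed from that summary (the redo1.log name and 28 MB size come from the text), the invocation would look like this; it is echoed for review rather than executed:

```shell
# Mirrored redo log logical volume, per the option summary in the text:
# -m 1 single mirror, -M n mirror write cache off, -c y mirror consistency
# recovery on, -s g PVG-strict mirroring, -L 28 for a 28 MB volume.
lvcreate_redo='lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops'
echo "$lvcreate_redo"
```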
These lvcreate options work as follows: the -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y option means that mirror consistency recovery is enabled; the -s g option means that mirroring is
PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.
NOTE
It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds. If these options are not set correctly, you may not be able to continue with database recovery.
If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/redo1.log has been successfully created with character device /dev/vg_ops/rredo1.log Logical volume /dev/vg_ops/redo1.log has been successfully extended
Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the RAC database.

Creating Mirrored Logical Volumes for RAC Data Files

Following a system crash, the mirrored logical volumes need to be resynchronized, which is known as resilvering. If Oracle does not perform resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NOMWC. This is done by disabling mirror write caching and enabling mirror consistency recovery. With NOMWC, SLVM performs the resynchronization. Create logical volumes for use as Oracle data files by using the same options as in the following example:
# lvcreate -m 1 -M n -c y -s g -n system.dbf -L 408 /dev/vg_ops
The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c y means that mirror consistency recovery is enabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes.
If Oracle performs the resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of NONE by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of NONE, SLVM does not perform the resynchronization.
NOTE
Contact Oracle to determine if your version of Oracle RAC allows resilvering and to appropriately configure the mirror consistency recovery policy for your logical volumes.
Create logical volumes for use as Oracle data files by using the same options as in the following example:
# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops
The -m 1 option specifies single mirroring; the -M n option ensures that mirror write cache recovery is set off; the -c n means that mirror consistency recovery is disabled; the -s g means that mirroring is PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes. If the command is successful, the system will display messages like the following:
Logical volume /dev/vg_ops/system.dbf has been successfully created with character device /dev/vg_ops/rsystem.dbf Logical volume /dev/vg_ops/system.dbf has been successfully extended
Note that the character device file name (also called the raw logical volume name) is used by the Oracle DBA in building the OPS database.
the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line. The following example shows how to configure alternate links using LVM commands. The following disk configuration is assumed:
8/0.15.0   /dev/dsk/c0t15d0   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 0 */
8/0.15.1   /dev/dsk/c0t15d1   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 1 */
8/0.15.2   /dev/dsk/c0t15d2   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 2 */
8/0.15.3   /dev/dsk/c0t15d3   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 3 */
8/0.15.4   /dev/dsk/c0t15d4   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 4 */
8/0.15.5   /dev/dsk/c0t15d5   /* I/O Channel 0 (8/0)  SCSI address 15  LUN 5 */
10/0.3.0   /dev/dsk/c1t3d0    /* I/O Channel 1 (10/0) SCSI address 3   LUN 0 */
10/0.3.1   /dev/dsk/c1t3d1    /* I/O Channel 1 (10/0) SCSI address 3   LUN 1 */
10/0.3.2   /dev/dsk/c1t3d2    /* I/O Channel 1 (10/0) SCSI address 3   LUN 2 */
10/0.3.3   /dev/dsk/c1t3d3    /* I/O Channel 1 (10/0) SCSI address 3   LUN 3 */
10/0.3.4   /dev/dsk/c1t3d4    /* I/O Channel 1 (10/0) SCSI address 3   LUN 4 */
10/0.3.5   /dev/dsk/c1t3d5    /* I/O Channel 1 (10/0) SCSI address 3   LUN 5 */
Assume that the disk array has been configured, and that both the following device files appear for the same LUN (logical disk) when you run the ioscan command:
/dev/dsk/c0t15d0 /dev/dsk/c1t3d0
Use the following procedure to configure a volume group for this logical disk: 1. First, set up the group directory for vg_ops:
# mkdir /dev/vg_ops
2. Next, create a control file named group in the directory /dev/vg_ops, as follows:
# mknod /dev/vg_ops/group c 64 0xhh0000
The major number is always 64, and the hexadecimal minor number has the format:
0xhh0000
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0
It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group. 4. Use the following to create the volume group with the two links:
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0 /dev/dsk/c1t3d0
LVM will now recognize the I/O channel represented by /dev/dsk/c0t15d0 as the primary link to the disk; if the primary link fails, LVM will automatically switch to the alternate I/O channel represented by /dev/dsk/c1t3d0. Use the vgextend command to add additional disks to the volume group, specifying the appropriate physical volume name for each PV link. Repeat the entire procedure for each distinct volume group you wish to create. For ease of system administration, you may wish to use different volume groups to separate logs from data and control files.
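Following the same pattern, the remaining LUNs from the disk listing above can be added with one vgextend per LUN, giving the channel-0 device first (primary link) and the channel-1 device second (alternate link). The loop below only prints the commands for review; it does not execute them.

```shell
# Print vgextend commands for LUNs 1-5 of the example disk configuration.
# Channel 0 (c0t15dN) becomes the primary link; channel 1 (c1t3dN) becomes
# the alternate link. Nothing is executed here.
pvlink_cmds() {
  for lun in 1 2 3 4 5; do
    echo vgextend /dev/vg_ops "/dev/dsk/c0t15d${lun}" "/dev/dsk/c1t3d${lun}"
  done
}
pvlink_cmds
```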
NOTE
The default maximum number of volume groups in HP-UX is 10. If you intend to create enough new volume groups that the total exceeds ten, you must increase the maxvgs system parameter and then re-build the HP-UX kernel. Use SAM and select the Kernel Configuration area, then choose Configurable Parameters. Maxvgs appears on the list.
Table 3-2  Required Oracle File Names for Demo Database

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name
opsctl1.ctl           108            /dev/vg_ops/ropsctl1.ctl
opsctl2.ctl           108            /dev/vg_ops/ropsctl2.ctl
opsctl3.ctl           108            /dev/vg_ops/ropsctl3.ctl
ops1log1.log          28             /dev/vg_ops/rops1log1.log
ops1log2.log          28             /dev/vg_ops/rops1log2.log
ops1log3.log          28             /dev/vg_ops/rops1log3.log
ops2log1.log          28             /dev/vg_ops/rops2log1.log
ops2log2.log          28             /dev/vg_ops/rops2log2.log
ops2log3.log          28             /dev/vg_ops/rops2log3.log
opssystem.dbf         408            /dev/vg_ops/ropssystem.dbf
opstemp.dbf           108            /dev/vg_ops/ropstemp.dbf
opsusers.dbf          128            /dev/vg_ops/ropsusers.dbf
opstools.dbf          24             /dev/vg_ops/ropstools.dbf
opsdata1.dbf          208            /dev/vg_ops/ropsdata1.dbf
opsdata2.dbf          208            /dev/vg_ops/ropsdata2.dbf
opsdata3.dbf          208            /dev/vg_ops/ropsdata3.dbf
opsrollback.dbf       308            /dev/vg_ops/ropsrollback.dbf
opsspfile1.ora        5              /dev/vg_ops/ropsspfile1.ora
opsundotbs1.dbf       320            /dev/vg_ops/ropsundotbs1.dbf
Table 3-2  Required Oracle File Names for Demo Database (Continued)

Logical Volume Name   Oracle File Size (MB)*   Raw Logical Volume Path Name
opsundotbs2.dbf       312                      /dev/vg_ops/ropsundotbs2.dbf
opsexample1.dbf       160                      /dev/vg_ops/ropsexample1.dbf
opscwmlite1.dbf       100                      /dev/vg_ops/ropscwmlite1.dbf
opsindx1.dbf          70                       /dev/vg_ops/ropsindx1.dbf
opsdrsys1.dbf         90                       /dev/vg_ops/ropsdrsys1.dbf
The size of the logical volume is larger than the Oracle file size because Oracle needs extra space to allocate a header in addition to the file's actual data capacity.
Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vg_ops/ropsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl
After creating these files, set the owner to oracle and the group to dba with a file mode of 660. The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
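The ownership and mode settings just described can be scripted. The sketch below covers the three control-file volumes from the table (extend the list to the remaining raw volumes) and prints the commands rather than executing them:

```shell
# Print chown/chmod commands for the raw control-file logical volumes, per
# the oracle:dba owner/group and mode 660 requirement above. Extend the
# list to cover all the raw logical volumes; nothing is executed here.
perm_cmds() {
  for lv in ropsctl1.ctl ropsctl2.ctl ropsctl3.ctl; do
    echo chown oracle:dba "/dev/vg_ops/$lv"
    echo chmod 660 "/dev/vg_ops/$lv"
  done
}
perm_cmds
```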
Serviceguard Configuration for Oracle 9i RAC Displaying the Logical Volume Infrastructure
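Step 1 of this procedure (creating the map file on ftsys9) is not shown in this excerpt. On HP-UX, the map file referenced below is typically produced with a preview vgexport such as the following; treat the exact flags as an assumption and check vgexport(1M). The command is echoed for review rather than executed.

```shell
# Assumed step 1: create a sharable map file for vg_ops on ftsys9 without
# actually exporting the volume group (-p preview, -s sharable, -m map file).
vgexport_cmd='vgexport -p -s -m /tmp/vg_ops.map /dev/vg_ops'
echo "$vgexport_cmd"
```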
2. Still on ftsys9, copy the map file to ftsys10 (and to additional nodes as necessary.)
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map
3. On ftsys10 (and other nodes, as necessary), create the volume group directory and the control file named group:
# mkdir /dev/vg_ops # mknod /dev/vg_ops/group c 64 0xhh0000
For the group file, the major number is always 64, and the hexadecimal minor number has the format:
0xhh0000
where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
4. Import the volume group data using the map file from node ftsys9. On node ftsys10 (and other nodes, as necessary), enter:
# vgimport -s -m /tmp/vg_ops.map /dev/vg_ops
Serviceguard Configuration for Oracle 9i RAC Installing Oracle Real Application Clusters
Before installing the Oracle Real Application Clusters software, make sure the cluster is running. Log in as the oracle user on one node and then use the Oracle installer to install Oracle software and to build the correct Oracle runtime executables. When the executables are installed to a cluster file system, the Oracle installer has an option to install the executables once. When executables are installed to a local file system on each node, the Oracle installer copies the executables to the other nodes in the cluster. For details on Oracle installation, refer to the Oracle installation documentation. As part of this installation, the Oracle installer installs the executables and, optionally, can build an Oracle demo database on the primary node. The demo database files can be either the character (raw) device file names for the logical volumes created earlier, or the database can reside on a cluster file system. For a demo database on SLVM or CVM, create logical volumes as shown in Table 3-2, Required Oracle File Names for Demo Database, earlier in this chapter. As the installer prompts for the database file names, use the pathnames of the raw logical volumes instead of the defaults.
NOTE
If you do not wish to install the demo database, select install software only.
# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME cluster 1
# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations in which a
# running cluster fails, and then two equal-sized sub-clusters are both
# trying to form a new cluster. The cluster lock may be configured using
# only one of the following alternatives on a cluster:
#     the LVM lock disk
#     the quorum server
# Consider the following when configuring a cluster. For a two-node
# cluster, you must use a cluster lock. For a cluster of three or four
# nodes, a cluster lock is strongly recommended. For a cluster of more
# than four nodes, a cluster lock is recommended. If you decide to
# configure a lock for a cluster of more than four nodes, it must be a
# quorum server.
#
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk. The
# FIRST_CLUSTER_LOCK_VG is the LVM volume group that holds the cluster
# lock. This volume group should not be used by any other cluster as a
# cluster lock device.
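In the cluster ASCII file, a lock disk is defined with lines like the following (the volume group and device file names are illustrative):

```
# Cluster ASCII configuration file excerpt (illustrative device names).
FIRST_CLUSTER_LOCK_VG   /dev/vglock
# (and, within each node's NODE_NAME definition)
FIRST_CLUSTER_LOCK_PV   /dev/dsk/c1t2d0
```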
# Definition of nodes in the cluster. Repeat node definitions as
# necessary for additional nodes. NODE_NAME is the specified nodename in
# the cluster. It must match the hostname, and neither can contain the
# full domain name. Each NETWORK_INTERFACE, if configured with an IPv4
# address, must have ONLY one IPv4 address entry with it, which could be
# either HEARTBEAT_IP or STATIONARY_IP. Each NETWORK_INTERFACE, if
# configured with IPv6 address(es), can have multiple IPv6 address
# entries (up to a maximum of 2, only one IPv6 address entry belonging to
# site-local scope and only one belonging to global scope), which must
# all be STATIONARY_IP. They cannot be HEARTBEAT_IP.
NODE_NAME ever3a
  NETWORK_INTERFACE lan0
    STATIONARY_IP 15.244.64.140
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.77.1.1
  NETWORK_INTERFACE lan2

# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0

# Primary Network Interfaces on Bridged Net 1: lan0.
#   Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
#   Possible standby Network Interfaces on Bridged Net 2: lan2.
# Cluster Timing Parameters (microseconds).
#
# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds). This
# default setting yields the fastest cluster reformations. However, the
# use of the default value increases the potential for spurious
# reformations due to momentary system hangs or network load spikes. For
# a significant portion of installations, a setting of 5000000 to
# 8000000 (5 to 8 seconds) is more appropriate. The maximum value
# recommended for NODE_TIMEOUT is 30000000 (30 seconds).

HEARTBEAT_INTERVAL    1000000
NODE_TIMEOUT          2000000
# Network Monitor Configuration Parameters. # The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected. # If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound # message count stops increasing or when both inbound and outbound # message counts stop increasing. # If set to INOUT, both the inbound and outbound message counts must
# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of 8 login names
#    from the /etc/passwd file on the user host.
# 2. USER_HOST is where the user can issue Serviceguard commands. If
#    using Serviceguard Manager, it is the COM server. Choose one of
#    these three values: ANY_SERVICEGUARD_NODE, or (any)
#    CLUSTER_MEMBER_NODE, or a specific node. For node, use the official
#    hostname from the domain name server, and not an IP address or
#    fully qualified name.
# 3. USER_ROLE must be one of these three values:
#    * MONITOR: read-only capabilities for the cluster and packages
#    * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
#      in the cluster
#    * FULL_ADMIN: MONITOR and PACKAGE_ADMIN, plus the administrative
#      commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the cluster's
# nodes as root (UID=0). Access control policy cannot limit root users'
# access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration
# file, and they apply to the entire cluster. PACKAGE_ADMIN can be set
# in the cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only. Conflicting or redundant policies will cause an error
# while applying the configuration, and stop the process. The maximum
# number of access policies that can be configured in the cluster is 200.
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
#
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will be
# used by package applications via the vgchange -a e command. Neither
# CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
# List of OPS Volume Groups. Formerly known as DLM Volume Groups, these
# volume groups will be used by OPS or RAC cluster applications via the
# vgchange -a s command. (Note: the name DLM_VOLUME_GROUP is also still
# supported for compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP /dev/vgdatabase
# OPS_VOLUME_GROUP /dev/vg02
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
NOTE
2. Create the Cluster file:

# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b

Edit the Cluster file.

3. Create the Cluster

# cmapplyconf -C clm.asc

4. Start the Cluster

# cmruncl
# cmviewcl

The following output will be displayed:
CLUSTER          STATUS
ever3_cluster    up

  NODE           STATUS       STATE
  ever3a         up           running
  ever3b         up           running
5. Configure the Cluster Volume Manager (CVM)

Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM/CFS stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets.

# cfscluster config -s

The following output will be displayed:

CVM is now configured
Starting CVM...
It might take a few minutes to complete

When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:
# vxdctl -c mode

The following output will be displayed:

mode: enabled: cluster active - SLAVE
master: ever3b

or

mode: enabled: cluster active - MASTER
slave: ever3b

6. Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Twelfth Edition user's guide.

7. Initializing Disks for CVM/CFS

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM). To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /etc/vx/bin/vxdisksetup -i c4t4d0

8. Create the Disk Group for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init cfsdg1 c4t4d0

9. Create the Disk Group Multi-Node package

Use the following command to add the disk group to the cluster:

# cfsdgadm add cfsdg1 all=sw

The following output will be displayed:
Package name SG-CFS-DG-1 was generated to control the resource.
Shared disk group cfsdg1 is associated with the cluster.

10. Activate the Disk Group

# cfsdgadm activate cfsdg1

11. Creating Volumes and Adding a Cluster Filesystem

# vxassist -g cfsdg1 make vol1 10240m
# vxassist -g cfsdg1 make vol2 10240m
# vxassist -g cfsdg1 make volsrvm 300m

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol1

The following output will be displayed:

version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol2

The following output will be displayed:

version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/volsrvm

The following output will be displayed:

version 6 layout
307200 sectors, 307200 blocks of size 1024, log size 1024 blocks
largefiles supported

12. Configure Mount Point

# cfsmntadm add cfsdg1 vol1 /cfs/mnt1 all=rw

The following output will be displayed:
Package name SG-CFS-MP-1 was generated to control the resource.
Mount point /cfs/mnt1 was associated with the cluster.

# cfsmntadm add cfsdg1 vol2 /cfs/mnt2 all=rw

The following output will be displayed:

Package name SG-CFS-MP-2 was generated to control the resource.
Mount point /cfs/mnt2 was associated with the cluster.

# cfsmntadm add cfsdg1 volsrvm /cfs/cfssrvm all=rw

The following output will be displayed:

Package name SG-CFS-MP-3 was generated to control the resource.
Mount point /cfs/cfssrvm was associated with the cluster.

13. Mount Cluster Filesystem

# cfsmount /cfs/mnt1
# cfsmount /cfs/mnt2
# cfsmount /cfs/cfssrvm

14. Check CFS Mount Points

# bdf | grep cfs
/dev/vx/dsk/cfsdg1/vol1      10485760    19651   9811985    0%   /cfs/mnt1
/dev/vx/dsk/cfsdg1/vol2      10485760    19651   9811985    0%   /cfs/mnt2
/dev/vx/dsk/cfsdg1/volsrvm     307200     1802    286318    1%   /cfs/cfssrvm
MULTI_NODE_PACKAGES
STATUS up up up up up
SYSTEM yes no no no no
3. Delete the Disk Group Multi-Node Package (DG MNP)
# cfsdgadm delete cfsdg1
The following output will be generated:
Shared disk group cfsdg1 was disassociated from the cluster.
NOTE
cfsmntadm delete also deletes the disk group if there is no dependent package. To ensure that the disk group deletion is complete, use the cfsdgadm delete command shown above to delete the disk group package.
4. De-configure CVM
# cfscluster stop
The following output will be generated:
Stopping CVM...CVM is stopped
# cfscluster unconfig
The following output will be generated:
CVM is now unconfigured
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM
IMPORTANT
Creating a rootdg disk group is only necessary the first time you use the Volume Manager. CVM 4.1 does not require a rootdg.
For more detailed information on how to configure CVM 4.x, refer to the Managing Serviceguard Twelfth Edition user's guide.

Preparing the Cluster and the System Multi-node Package for use with CVM 4.x

1. Create the Cluster Configuration File
# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b
Edit the cluster configuration file as needed.
NOTE
To prepare the cluster for CVM configuration, make sure MAX_CONFIGURED_PACKAGES is set to a minimum of 3 in the cluster configuration file (the default value for MAX_CONFIGURED_PACKAGES in Serviceguard A.11.17 is 150). In this sample, the value is set to 10.
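Because the cluster ASCII file produced by cmquerycl is plain text, the MAX_CONFIGURED_PACKAGES edit can be scripted. This is a sketch, not from the manual: the helper name is an assumption, and only the parameter name and file layout come from the example above.

```shell
#!/bin/sh
# Sketch (hypothetical helper): set MAX_CONFIGURED_PACKAGES in the
# cluster ASCII file (e.g. clm.asc) created by cmquerycl.
set_max_packages() {
    file=$1
    value=$2
    # Rewrite the existing MAX_CONFIGURED_PACKAGES line in place.
    sed "s/^MAX_CONFIGURED_PACKAGES[[:space:]].*/MAX_CONFIGURED_PACKAGES $value/" \
        "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}
```

On the cluster it would be used as `set_max_packages /etc/cmcluster/clm.asc 10` before running cmapplyconf.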
2. Create the Cluster
# cmapplyconf -C clm.asc

Start the Cluster
# cmruncl
# cmviewcl
The following output will be displayed:
CLUSTER        STATUS
ever3_cluster  up

  NODE     STATUS   STATE
  ever3a   up       running
  ever3b   up       running
3. Configure the Cluster Volume Manager (CVM) Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets. # cmapplyconf -P /etc/cmcluster/cfs/SG-CFS-pkg.conf
# cmrunpkg SG-CFS-pkg

When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b

Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Thirteenth Edition user's guide.

Initializing Disks for CVM

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:
# /etc/vx/bin/vxdisksetup -i c4t4d0

Create the Disk Group for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:
# vxdg -s init ops_dg c4t4d0

4. Creating Volumes and Adding a Cluster Filesystem
# vxassist -g ops_dg make vol1 10240m
# vxassist -g ops_dg make vol2 10240m
# vxassist -g ops_dg make volsrvm 300m

5. View the Configuration
# cmviewcl
CLUSTER        STATUS
ever3_cluster  up

  NODE     STATUS   STATE
  ever3a   up       running
  ever3b   up       running
MULTI_NODE_PACKAGES

  PACKAGE      STATUS   STATE     AUTO_RUN   SYSTEM
  SG-CFS-pkg   up       running   enabled    yes
IMPORTANT
After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:

# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *

The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
Mirror Detachment Policies with CVM

The required CVM disk mirror detachment policy is global, which means that as soon as one node cannot see a specific mirror copy (plex), no node can see it. The alternate policy is local, which means that if one node cannot see a specific mirror copy, CVM deactivates access to the volume for that node only. This policy can be set on a per-disk-group basis with the vxedit command, as follows:

# vxedit set diskdetpolicy=global <DiskGroupName>
NOTE
The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.
NOTE
To prepare the cluster for CVM disk group configuration, you need to ensure that only one heartbeat subnet is configured. Then use the following command, which creates the special package that communicates cluster information to CVM:

# cmapplyconf -P /etc/cmcluster/cvm/VxVM-CVM-pkg.conf
WARNING
After the above command completes, start the cluster and create disk groups for shared use as described in the following sections.

Starting the Cluster and Identifying the Master Node

Run the cluster, which will activate the special CVM package:
# cmruncl
After the cluster is started, it will run with a special system multi-node package named VxVM-CVM-pkg on all nodes. This package is shown in the following output of the cmviewcl -v command:
CLUSTER   STATUS
bowls     up

  NODE     STATUS   STATE
  spare    up       running
  split    up       running
  strike   up       running
When CVM starts up, it selects a master node, and this is the node from which you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:
# vxdctl -c mode
One node will identify itself as the master. Create disk groups from this node.

Converting Disks from LVM to CVM

Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Twelfth Edition user's guide.

Initializing Disks for CVM

Initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:
# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2
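Rather than eyeballing `vxdctl -c mode` on every node, a script can extract the master name from its output. This is a sketch, not from the manual: the helper name is an assumption, and the output format is taken from the example shown earlier in this section.

```shell
#!/bin/sh
# Sketch (hypothetical helper): print the CVM master node name from
# `vxdctl -c mode` output supplied on stdin.
cvm_master() {
    # Output format assumed (from the example above):
    #   mode: enabled: cluster active - SLAVE master: ever3b
    awk '{ for (i = 1; i < NF; i++) if ($i == "master:") print $(i + 1) }'
}
```

On the master-selection step it would be used as `vxdctl -c mode | cvm_master`, and disk group creation would then be run only on the node whose hostname matches.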
Creating Disk Groups for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:
# vxdg -s init ops_dg c0t3d2
Verify the configuration with the following command:
# vxdg list
NAME     STATE            ID
rootdg   enabled          971995699.1025.node1
ops_dg   enabled,shared   972078742.1084.node2
Creating Volumes
Use the vxassist command to create logical volumes. The following is an example:

# vxassist -g ops_dg make log_files 1024m

This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files. Verify the configuration with the following command:

# vxdg list
IMPORTANT
After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660. Example:

# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *

The logical volumes are now available on the primary node, and the raw logical volume names can now be used by the Oracle DBA.
NOTE
The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.
Volume Name       Size (MB)   Raw Device File Name
opsctl1.ctl       108         /dev/vx/rdsk/ops_dg/opsctl1.ctl
opsctl2.ctl       108         /dev/vx/rdsk/ops_dg/opsctl2.ctl
opsctl3.ctl       108         /dev/vx/rdsk/ops_dg/opsctl3.ctl
ops1log1.log      28          /dev/vx/rdsk/ops_dg/ops1log1.log
ops1log2.log      28          /dev/vx/rdsk/ops_dg/ops1log2.log
ops1log3.log      28          /dev/vx/rdsk/ops_dg/ops1log3.log
ops2log1.log      28          /dev/vx/rdsk/ops_dg/ops2log1.log
ops2log2.log      28          /dev/vx/rdsk/ops_dg/ops2log2.log
ops2log3.log      28          /dev/vx/rdsk/ops_dg/ops2log3.log
opssystem.dbf     408         /dev/vx/rdsk/ops_dg/opssystem.dbf
opstemp.dbf       108         /dev/vx/rdsk/ops_dg/opstemp.dbf
opsusers.dbf      128         /dev/vx/rdsk/ops_dg/opsusers.dbf
opstools.dbf      24          /dev/vx/rdsk/ops_dg/opstools.dbf
opsdata1.dbf      208         /dev/vx/rdsk/ops_dg/opsdata1.dbf
opsdata2.dbf      208         /dev/vx/rdsk/ops_dg/opsdata2.dbf
opsdata3.dbf      208         /dev/vx/rdsk/ops_dg/opsdata3.dbf
opsrollback.dbf   308         /dev/vx/rdsk/ops_dg/opsrollback.dbf
opsspfile1.ora    5           /dev/vx/rdsk/ops_dg/opsspfile1.ora
Serviceguard Configuration for Oracle 9i RAC
Oracle Demo Database Files

Table 3-3 Required Oracle File Names for Demo Database (Continued) lists the remaining entries, with sizes of 312, 312, 160, 100, and 70 MB.
Create these files if you wish to build the demo database. The three logical volumes at the bottom of the table are included as additional data files, which you can create as needed, supplying the appropriate sizes. If your naming conventions require, you can include the Oracle SID and/or the database name to distinguish files for different instances and different databases. If you are using the ORACLE_BASE directory structure, create symbolic links to the ORACLE_BASE files from the appropriate directory. Example:
# ln -s /dev/vx/rdsk/ops_dg/opsctl1.ctl \ /u01/ORACLE/db001/ctrl01_1.ctl
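When many raw volumes need ORACLE_BASE links, the ln command above can be driven from a simple mapping list. This is a sketch, not from the manual: the function name and the "volume link-name" list format are assumptions; only the path convention comes from the example.

```shell
#!/bin/sh
# Sketch (hypothetical helper): create ORACLE_BASE symbolic links for
# raw volumes, reading "<volume> <link-name>" pairs from stdin.
create_db_links() {
    dg_path=$1    # e.g. /dev/vx/rdsk/ops_dg
    base=$2       # e.g. /u01/ORACLE/db001
    while read -r vol link; do
        [ -n "$vol" ] || continue          # skip blank lines
        ln -sf "$dg_path/$vol" "$base/$link"
    done
}
```

A mapping file with one line per entry of Table 3-3 (for example `opsctl1.ctl ctrl01_1.ctl`) could then be piped into `create_db_links /dev/vx/rdsk/ops_dg /u01/ORACLE/db001`.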
Example, Oracle9: 1. Create an ASCII file, and define the path for each database object.
control1=/dev/vx/rdsk/ops_dg/opsctl1.ctl
or
control1=/u01/ORACLE/db001/ctrl01_1.ctl
2. Set the following environment variable where filename is the name of the ASCII file created.
# export DBCA_RAW_CONFIG=<full path>/filename
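The two steps above (write the mapping file, then export DBCA_RAW_CONFIG) can be combined in one short script. This is a sketch under assumptions: the file location and the set of object names are illustrative; only the `name=path` line format and the DBCA_RAW_CONFIG variable come from the steps above.

```shell
#!/bin/sh
# Sketch: generate the DBCA raw-device mapping file and point
# DBCA_RAW_CONFIG at it. Paths and object names are illustrative.
cfg=/tmp/dbca_raw.cfg
cat > "$cfg" <<'EOF'
control1=/dev/vx/rdsk/ops_dg/opsctl1.ctl
control2=/dev/vx/rdsk/ops_dg/opsctl2.ctl
system=/dev/vx/rdsk/ops_dg/opssystem.dbf
EOF
DBCA_RAW_CONFIG=$cfg
export DBCA_RAW_CONFIG
```

The Database Configuration Assistant would then pick up the mapping from the exported variable when it is run in the same session.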
Serviceguard Configuration for Oracle 9i RAC Adding Disk Groups to the Cluster Configuration
d. Set up the CFS directory for Server Management. Preallocate space for SRVM (200 MB):
# prealloc /cfs/cfssrvm/ora_srvm 209715200
# chown oracle:dba /cfs/cfssrvm/ora_srvm

2. Install Oracle RAC Software
a. Install Oracle (software only) with the Oracle Universal Installer as the oracle user:
# su - oracle
When using CFS for SRVM, set SRVM_SHARED_CONFIG:
$ export SRVM_SHARED_CONFIG=/cfs/cfssrvm/ora_srvm
b. Set DISPLAY:
$ export DISPLAY=${display}:0.0
c. Run the Oracle Universal Installer and follow the installation steps:
$ cd <Oracle installation disk directory>
$ ./runInstaller
2. Set up Listeners with the Oracle Network Configuration Assistant
$ netca

3. Start GSD on all Nodes
$ gsdctl start
Output: Successfully started GSD on local node
4. Run Database Configuration Assistant to Create the Database on CFS File System $ dbca -datafileDestination /cfs/mnt2/oradata
Serviceguard Configuration for Oracle 9i RAC Verify that Oracle Disk Manager is Configured
1. Check the license:
# /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f ODM
The following output will be displayed:
Using VERITAS License Manager API Version 3.00, Build 2
ODM feature is licensed

2. Check that the VRTSodm package is installed:
# swlist VRTSodm
The following output will be displayed:
VRTSodm          4.1m   VERITAS Oracle Disk Manager
VRTSodm.ODM-KRN  4.1m   VERITAS ODM kernel files
VRTSodm.ODM-MAN  4.1m   VERITAS ODM manual pages
VRTSodm.ODM-RUN  4.1m   VERITAS ODM commands

3. Check that libodm.sl is present:
# ll -L /opt/VRTSodm/lib/libodm.sl
The following output will be displayed:
-rw-r--r-- 1 root sys 14336 Apr 25 18:42 /opt/VRTSodm/lib/libodm.sl
Serviceguard Configuration for Oracle 9i RAC Configure Oracle to use Oracle Disk Manager Library
1. Log on as the Oracle user.
2. Shut down the database.
3. Link the Oracle Disk Manager library into the Oracle home using the following commands:

For HP 9000 systems:
$ rm ${ORACLE_HOME}/lib/libodm9.sl
$ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm9.sl

For Integrity systems:
$ rm ${ORACLE_HOME}/lib/libodm9.so
$ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm9.so

4. Start the Oracle database.
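The HP 9000 and Integrity cases above differ only in the library suffix. A script that has to run on both architectures could select the suffix from `uname -m`; this is a sketch, and the mapping of `uname -m` values (ia64 for Integrity, 9000/8xx for HP 9000) is an assumption to verify on your systems.

```shell
#!/bin/sh
# Sketch (hypothetical helper): choose the Oracle ODM library suffix
# for the link step above, based on the machine architecture string.
odm_lib_suffix() {
    case "$1" in
        ia64) echo so ;;   # Integrity (Itanium) - assumption
        *)    echo sl ;;   # HP 9000 (PA-RISC), e.g. 9000/800 - assumption
    esac
}
```

The link step would then be written once, e.g. `sfx=$(odm_lib_suffix "$(uname -m)"); ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm9.$sfx`.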
Serviceguard Configuration for Oracle 9i RAC Verify Oracle Disk Manager is Running
1. Start the cluster and Oracle database (if not already started).
2. Check that the Oracle instance is using the Oracle Disk Manager function with the following command:
# cat /dev/odm/stats

abort:        0
cancel:       0
commit:       18
create:       18
delete:       0
identify:     349
io:           12350590
reidentify:   78
resize:       0
unidentify:   203
mname:        0
vxctl:        0
vxvers:       10
io req:       9102431
io calls:     6911030
comp req:     73480659
comp calls:   5439560
io mor cmp:   461063
io zro cmp:   2330
cl receive:   66145
cl ident:     18
cl reserve:   8
cl delete:    1
cl resize:    0
cl same op:   0
cl opt idn:   0
cl opt rsv:   332
**********:   17
3. Verify that Oracle Disk Manager is loaded with the following command:
# kcmodule -P state odm
The following output will be displayed:
state loaded

4. In the alert log, verify that the Oracle instance is running. The log should contain output similar to the following:
Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1
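A monitoring script can pull a single counter out of the /dev/odm/stats output shown in step 2. This is a sketch, not from the manual: the helper name and the stdin interface are assumptions; the "name: value" line format is taken from the stats listing above.

```shell
#!/bin/sh
# Sketch (hypothetical helper): print one counter from
# /dev/odm/stats-style "name: value" lines read on stdin.
odm_stat() {
    key=$1
    # Split on the colon so multi-word keys such as "io req" work.
    awk -F': *' -v k="$key" '$1 == k { print $2 }'
}
```

On a node it would be used as, for example, `odm_stat "io req" < /dev/odm/stats` to track I/O request counts over time.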
Serviceguard Configuration for Oracle 9i RAC Configuring Oracle to Stop using Oracle Disk Manager Library
1. Log in as the Oracle user.
2. Shut down the database.
3. Change directories:
$ cd ${ORACLE_HOME}/lib
4. Remove the file linked to the ODM library and restore the default link:

For HP 9000 systems:
$ rm libodm9.sl
$ ln -s ${ORACLE_HOME}/lib/libodmd9.sl ${ORACLE_HOME}/lib/libodm9.sl

For Integrity systems:
$ rm libodm9.so
$ ln -s ${ORACLE_HOME}/lib/libodmd9.so ${ORACLE_HOME}/lib/libodm9.so

5. Restart the database.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances
NOTE
The maximum number of RAC instances for Oracle 9i is 127 per cluster. For Oracle 10g, refer to Oracle's requirements.
1. Shut down the Oracle applications, if any.
2. Shut down Oracle.
3. Deactivate the database volume groups or disk groups.
4. Shut down the cluster (cmhaltnode or cmhaltcl).

If the shutdown sequence described above is not followed, cmhaltcl or cmhaltnode may fail with a message that GMS clients (RAC 9i) are active or that shared volume groups are active.
NOTE
You must create the RAC instance package with a PACKAGE_TYPE of FAILOVER, but the fact that you are entering only one node ensures that the instance will only run on that node.
To simplify the creation of RAC instance packages, you can use the Oracle template provided with the separately purchasable ECM Toolkits product (T1909BA). Use the special toolkit scripts that are provided, and follow the instructions in the README file. Also refer to the section Customizing the Control Script for RAC Instances below for more information.

To create the package with Serviceguard Manager, select the cluster, go to the Actions menu, and choose Configure Package. To modify a package, select the package. For an instance package, create one package for each instance. On each node, supply the SID name as the package name.

To create a package on the command line, use the cmmakepkg command to get an editable configuration file. Set the AUTO_RUN parameter to YES if you want the instance to start up as soon as the node joins the cluster. In addition, set the NODE_FAILFAST_ENABLED parameter to NO.
If you are using CVM disk groups for the RAC database, be sure to include the name of each disk group on a separate STORAGE_GROUP line in the configuration file. If you are using CFS or CVM for RAC shared storage with multi-node packages, the package containing the RAC instance should be configured with a package dependency on the multi-node packages. The following is a sample of the dependency conditions in an application package configuration file:
DEPENDENCY_NAME        mp1
DEPENDENCY_CONDITION   SG-CFS-MP-1=UP
DEPENDENCY_LOCATION    SAME_NODE

DEPENDENCY_NAME        mp2
DEPENDENCY_CONDITION   SG-CFS-MP-2=UP
DEPENDENCY_LOCATION    SAME_NODE

DEPENDENCY_NAME        mp3
DEPENDENCY_CONDITION   SG-CFS-MP-3=UP
DEPENDENCY_LOCATION    SAME_NODE
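For clusters with many mount-point packages, the dependency stanzas can be generated instead of typed. This is a sketch, not from the manual: the helper name and the mpN naming are assumptions; the three-keyword stanza format comes from the sample above.

```shell
#!/bin/sh
# Sketch (hypothetical helper): emit DEPENDENCY_ stanzas for a list of
# CFS mount-point packages, in the format shown in the sample above.
emit_mp_dependencies() {
    n=1
    for pkg in "$@"; do
        printf 'DEPENDENCY_NAME %s\n' "mp$n"
        printf 'DEPENDENCY_CONDITION %s=UP\n' "$pkg"
        printf 'DEPENDENCY_LOCATION SAME_NODE\n'
        n=$((n + 1))
    done
}
```

The output of `emit_mp_dependencies SG-CFS-MP-1 SG-CFS-MP-2 SG-CFS-MP-3` could then be appended to the application package configuration file before running cmapplyconf.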
own script, and copying it to all nodes that can run the package. This script should contain the cmmodpkg -e command and activate the package after RAC and the cluster manager have started.
First, generate a control script template:
# cmmakepkg -s /etc/cmcluster/pkg1/control.sh
You may customize the script, as described in the section Customizing the Package Control Script.

Customizing the Package Control Script

Check the definitions and declarations at the beginning of the control script using the information in the Package Configuration worksheet. You need to customize as follows:

- Update the PATH statement to reflect any required paths needed to start your services.
- If you are using LVM, enter the names of volume groups to be activated using the VG[] array parameters, and select the appropriate options for the storage activation command, including options for mounting and unmounting filesystems, if desired. Do not use the VXVM_DG[] or CVM_DG[] parameters for LVM volume groups.
- If you are using CVM, enter the names of disk groups to be activated using the CVM_DG[] array parameters, and select the appropriate storage activation command, CVM_ACTIVATION_CMD. Do not use the VG[] or VXVM_DG[] parameters for CVM disk groups.
- If you are using VxVM disk groups without CVM, enter the names of VxVM disk groups that will be imported using the VXVM_DG[] array parameters. Enter one disk group per array element. Do not use the CVM_DG[] or VG[] parameters for VxVM disk groups without CVM. Also, do not specify an activation command.
- Add the names of logical volumes and the file systems that will be mounted on them.
- If you are using mirrored VxVM disks, specify the mirror recovery option VXVOL.
- Select the appropriate options for the storage activation command (not applicable for basic VxVM disk groups), and also include options for mounting and unmounting filesystems, if desired.
- Specify the filesystem mount retry and unmount count options.
- Define IP subnet and IP address pairs for your package.
- Add service name(s).
- Add service command(s).
- Add a service restart parameter, if desired.
NOTE
Use care in defining service run commands. Each run command is executed by the control script in the following way: The cmrunserv command executes each run command and then monitors the process id of the process created by the run command. When the command started by cmrunserv exits, Serviceguard determines that a failure has occurred and takes appropriate action, which may include transferring the package to an adoptive node. If a run command is a shell script that runs some other command and then exits, Serviceguard will consider this normal exit as a failure.
To avoid problems in the execution of control scripts, ensure that each run command is the name of an actual service and that its process remains alive until the actual service stops.
If you need to define a set of run and halt operations in addition to the defaults, create functions for them in the sections under the heading CUSTOMER DEFINED FUNCTIONS.

Optimizing for Large Numbers of Storage Units

A set of variables is provided to allow performance improvement when employing a large number of filesystems or storage groups. For more detail, see the comments in the control script template. They include:

- CONCURRENT_VGCHANGE_OPERATIONS: defines the number of parallel LVM volume group activations during package startup and deactivations during package shutdown.
- CONCURRENT_FSCK_OPERATIONS: defines the number of parallel fsck operations that will be carried out at package startup.
- CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS: defines the number of parallel mount operations during package startup and unmount operations during package shutdown.
Customizing the Control Script for RAC Instances

Use the package control script to perform the following:
- Activation and deactivation of RAC volume groups.
- Startup and shutdown of the RAC instance.
- Monitoring of the RAC instance.
Set RAC environment variables in the package control script to define the correct execution environment for RAC.

Enter the names of the LVM volume groups you wish to activate in shared mode in the VG[] array. Use a different array element for each RAC volume group. (Remember that RAC volume groups must also be coded in the cluster configuration file using OPS_VOLUME_GROUP parameters.) Be sure to specify shared activation with the vgchange command by setting the VGCHANGE parameter as follows:

VGCHANGE="vgchange -a s"

If your disks are mirrored with LVM mirroring on separate physical paths and you want to override quorum, use the following setting:

VGCHANGE="vgchange -a s -q n"

Enter the names of the CVM disk groups you wish to activate in shared mode in the CVM_DG[] array. Use a different array element for each RAC disk group. (Remember that CVM disk groups must also be coded in the package ASCII configuration file using STORAGE_GROUP parameters.) Be sure to specify an appropriate type of shared activation with the CVM activation command. For example:

CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedwrite"
Do not define the RAC instance as a package service. Instead, include the commands that start up an RAC instance in the customer_defined_run_commands section of the package control script. Similarly, you should include the commands that halt an RAC instance in the customer_defined_halt_commands section of the package control script. Define the Oracle monitoring command as a service command, or else use the special Oracle script provided with the ECM Toolkit.
Using the Command Line to Configure an Oracle RAC Instance Package

Serviceguard Manager provides a template to configure package behavior that is specific to an Oracle RAC instance package. The RAC instance package starts the Oracle RAC instance, monitors the Oracle processes, and stops the RAC instance. The configuration of the RAC instance package makes use of the Enterprise Cluster Master Toolkit (ECMT) to start, monitor, and stop the Oracle database instance. For details on the use of ECMT, refer to the ECMT documentation.

Each Oracle RAC database can have a database instance running on every node of a SGeRAC cluster, so it is not necessary to fail over the database instance to a different SGeRAC node. This is the main difference between an Oracle RAC instance package and a single-instance Oracle package.

Information for Creating the Oracle RAC Instance Package on a SGeRAC Node

Use the following steps to set up the pre-package configuration on a SGeRAC node:

1. Gather the RAC instance SID_NAME. If you are using Serviceguard Manager, this is in the cluster Properties.
Example: SID_NAME=ORACLE_TEST0
For an Oracle RAC instance in a two-node cluster, each node would have an SID_NAME.

2. Gather the RAC instance package name for each node, which should be the same as the SID_NAME for each node.
Example: ORACLE_TEST0

3. Gather the shared volume group name for the RAC database. In Serviceguard Manager, see the cluster Properties.
Example: /dev/vgora92db

4. Create the Oracle RAC instance package directory /etc/cmcluster/pkg/${SID_NAME}
Example: /etc/cmcluster/pkg/ORACLE_TEST0
5. Copy the Oracle shell script templates from the ECMT default source directory to the package directory:
# cd /etc/cmcluster/pkg/${SID_NAME}
# cp -p /opt/cmcluster/toolkit/oracle/* .
Example:
# cd /etc/cmcluster/pkg/ORACLE_TEST0
# cp -p /opt/cmcluster/toolkit/oracle/* .
Edit haoracle.conf as described in the README.

6. Gather the package service name for monitoring Oracle instance processes. In Serviceguard Manager, this information can be found under the Services tab.
SERVICE_NAME[0]=${SID_NAME}
SERVICE_CMD[0]=/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh
SERVICE_RESTART[0]=-r 2
Example:
SERVICE_NAME[0]=ORACLE_TEST0
SERVICE_CMD[0]=/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh
SERVICE_RESTART[0]=-r 2

7. Gather how to start the database using an ECMT script. In Serviceguard Manager, enter this filename for the control script start command.
/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh start
Example: /etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh start

8. Gather how to stop the database using an ECMT script. In Serviceguard Manager, enter this filename for the control script stop command.
/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh stop
Example: /etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh stop
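Because every per-instance value in the steps above is derived from SID_NAME, the service configuration lines can be generated mechanically. This is a sketch, not from the manual: the helper name is an assumption; the variable names and path convention come directly from step 6.

```shell
#!/bin/sh
# Sketch (hypothetical helper): derive the per-instance service
# variables from SID_NAME, following the naming used in the steps above.
emit_rac_service_config() {
    sid=$1
    pkgdir=/etc/cmcluster/pkg/$sid
    printf 'SERVICE_NAME[0]=%s\n' "$sid"
    printf 'SERVICE_CMD[0]=%s/toolkit.sh\n' "$pkgdir"
    printf 'SERVICE_RESTART[0]=-r 2\n'
}
```

Running `emit_rac_service_config ORACLE_TEST0` reproduces the example lines from step 6 and could be redirected into the package control script during setup.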
Using Serviceguard Manager to Configure an Oracle RAC Instance Package

The following steps use the information from the example in section 2.2. It is assumed that the SGeRAC cluster environment is configured and that ECMT can be used to start the Oracle RAC database instance.

1. Start Serviceguard Manager and connect to the cluster. Figure 3-1 shows a RAC instance package for node sg21. The package name is ORACLE_TEST0.
Figure 3-1 Serviceguard Manager display for a RAC Instance package
2. Create the Package. 3. Select the Parameters and select the parameters to edit.
Next, select the check box Enable template(x) to enable the Package Template for Oracle RAC. The template defaults can be reset with the Reset template defaults push button. When the template for Oracle RAC is enabled, the package can run on only one node.

4. Select the Node tab and select the node to run this package.
5. Select the Networks tab and add a monitored subnet for the package.
6. Select the Services tab and configure services.
7. Select the Control Script tab and configure parameters. Configure volume groups and customer-defined run/halt functions.
8. Apply the package configuration after filling in the specified parameters.

Enabling DB Provider Monitoring

To monitor a remote Serviceguard RAC cluster, an entry with the GUI user name and server name must be in the /etc/cmcluster/cmclnodelist file on all nodes in the cluster to be viewed.

DB Provider Monitoring Using Control Access Policy (CAP)

To monitor a local SGeRAC cluster as a non-root user, the GUI user name, server name, and user role (at least the monitor role) must be configured through CAP. For a remote cluster, any GUI user name (non-root or root), server name, and user role (at least the monitor role) should be configured through CAP. Refer to the Control Access Policy for Serviceguard Commands and API Clients External Specification for details.
Chapter 4
167
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
TIP
Some commands take longer to complete in large configurations. In particular, you can expect Serviceguard's CPU utilization to increase during cmviewcl -v as the number of packages and services increases.
You can also specify that the output be formatted as it was in a specific earlier release by using the -r option to indicate the release format you want. Example:
# cmviewcl -r A.11.16
See the man page for a detailed description of other cmviewcl options.
Examples of Cluster and Package States

The following is an example of the output generated by the cmviewcl command:

CLUSTER      STATUS
cluster_mo   up

  NODE    STATUS   STATE
  minie   up       running

  Quorum_Server_Status:
  NAME    STATUS
  white   up

  Network_Parameters:
  INTERFACE   STATUS
  PRIMARY     up
  PRIMARY     up
  STANDBY     up

  NODE    STATUS   STATE
  mo      up       running

  Quorum_Server_Status:
  NAME    STATUS
  white   up

  Network_Parameters:
  INTERFACE   STATUS
  PRIMARY     up
  PRIMARY     up
  STANDBY     up

MULTI_NODE_PACKAGES

  PACKAGE      STATUS   STATE     AUTO_RUN   SYSTEM
  SG-CFS-pkg   up       running   enabled    yes

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

    Script_Parameters:
    ITEM      STATUS   MAX_RESTARTS   RESTARTS
    Service   up       0              0
    Service   up       5              0
    Service   up       5              0
    Service   up       0              0
    Service   up       0              0

  NODE_NAME   STATUS   SWITCHING
  mo          up       enabled

    Script_Parameters:
    ITEM      STATUS   MAX_RESTARTS   RESTARTS
    Service   up       0              0
    Service   up       5              0
    Service   up       5              0
    Service   up       0              0
    Service   up       0              0

  PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
  SG-CFS-DG-1   up       running   enabled    no

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

    Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-pkg        yes

  NODE_NAME   STATUS   SWITCHING
  mo          up       enabled

    Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-pkg        yes

  PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
  SG-CFS-MP-1   up       running   enabled    no

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

    Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-DG-1       yes

  NODE_NAME   STATUS   SWITCHING
  mo          up       enabled

    Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-DG-1       yes
Types of Cluster and Package States

A cluster or its component nodes may be in several different states at different points in time. The following sections describe many of the common conditions the cluster or package may be in.

Cluster Status

The status of a cluster may be one of the following:

- Up. At least one node has a running cluster daemon, and reconfiguration is not taking place.
- Down. No cluster daemons are running on any cluster node.
- Starting. The cluster is in the process of determining its active membership. At least one cluster daemon is running.
- Unknown. The node on which the cmviewcl command is issued cannot communicate with other nodes in the cluster.
Node Status and State

The status of a node is either up (active as a member of the cluster) or down (inactive in the cluster), depending on whether its cluster daemon is running. Note that a node might be down from the cluster perspective, but still up and running HP-UX.

A node may also be in one of the following states:

- Failed. A node never sees itself in this state. Other active members of the cluster will see a node in this state if that node was in an active cluster, but is no longer, and is not halted.
- Reforming. A node is in this state when the cluster is re-forming. The node is currently running the protocols which ensure that all nodes agree to the new membership of an active cluster. If agreement is reached, the status database is updated to reflect the new cluster membership.
- Running. A node in this state has completed all required activity for the last re-formation and is operating normally.
- Halted. A node never sees itself in this state. Other nodes will see it in this state after the node has gracefully left the active cluster, for instance with a cmhaltnode command.
- Unknown. A node never sees itself in this state. Other nodes assign a node this state if it has never been an active cluster member.
Package Status and State

The status of a package can be one of the following:

- Up. The package control script is active.
- Down. The package control script is not active.
- Unknown.
A system multi-node package is up when it is running on all the active cluster nodes. A multi-node package is up if it is running on any of its configured nodes.

The state of the package can be one of the following:

- Starting. The start instructions in the control script are being run.
- Running. Services are active and being monitored.
- Halting. The halt instructions in the control script are being run.
Package Switching Attributes

Packages also have the following switching attributes:

- Package Switching. Enabled means that the package can switch to another node in the event of failure.
- Switching Enabled for a Node. Enabled means that the package can switch to the referenced node. Disabled means that the package cannot switch to the specified node until the node is enabled for the package using the cmmodpkg command.

Every package is marked Enabled or Disabled for each node that is either a primary or adoptive node for the package. For multi-node packages, node switching Disabled means the package cannot start on that node.

Status of Group Membership

The state of the cluster for Oracle RAC is one of the following:

- Up. Services are active and being monitored. The membership appears in the output of cmviewcl -l group.
- Down. The cluster is halted and GMS services have been stopped. The membership does not appear in the output of the cmviewcl -l group command.
The following is an example of the group membership output from the cmviewcl command:
# cmviewcl -l group

GROUP      MEMBER   PID     MEMBER_NODE
DGop       1        10394   comanche
DGop       0        10499   chinook
DBOP       1        10501   comanche
DBOP       0        10396   chinook
DAALL_DB   0        10396   comanche
DAALL_DB   1        10501   chinook
IGOPALL    2        10423   comanche
IGOPALL    1        10528   chinook

where the cmviewcl output values are:

GROUP         the name of a configured group
MEMBER        the ID number of a member of the group
PID           the process ID of the group member
MEMBER_NODE   the node on which the group member is running

Service Status

Services have only status, as follows:

- Up. The service is being monitored.
- Down. The service is not running. It may have halted or failed.
- Uninitialized. The service is included in the package configuration, but it was not started with a run command in the control script.
- Unknown.
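As a quick sanity check, the four-column cmviewcl -l group output shown earlier can be summarized per node with a short awk pass. This is a sketch only: the here-document stands in for live cmviewcl output, and the rows are sample values.

```shell
# Summarize group memberships per node. The here-document stands in for
# live `cmviewcl -l group` output in the four-column form shown above;
# the rows are sample values, not live data.
cat <<'EOF' > /tmp/groups.txt
GROUP      MEMBER  PID     MEMBER_NODE
DGop       1       10394   comanche
DGop       0       10499   chinook
DBOP       1       10501   comanche
DBOP       0       10396   chinook
EOF

# Count how many group memberships each node holds (skip the header row).
awk 'NR > 1 { count[$4]++ } END { for (n in count) print n, count[n] }' /tmp/groups.txt
```

On a live cluster you would pipe the real command into the same awk program: `cmviewcl -l group | awk ...`.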
Network Status

The network interfaces have only status, as follows:

- Up.
- Down.
- Unknown. We cannot determine whether the interface is up or down. This can happen when the cluster is down. A standby interface has this status.
Serial Line Status

The serial line has only status, as follows:

- Up. Heartbeats are received over the serial line.
- Down. A heartbeat has not been received over the serial line within 2 times the NODE_TIMEOUT value.
- Recovering. A corrupt message was received on the serial line, and the line is in the process of resynchronizing.
- Unknown. We cannot determine whether the serial line is up or down. This can happen when the remote node is down.
Failover and Failback Policies

Packages can be configured with one of two values for the FAILOVER_POLICY parameter:

- CONFIGURED_NODE. The package fails over to the next node in the node list in the package configuration file.
- MIN_PACKAGE_NODE. The package fails over to the node in the cluster with the fewest running packages on it.
Packages can also be configured with one of two values for the FAILBACK_POLICY parameter:

- AUTOMATIC. With this setting, a package, following a failover, returns to its primary node when the primary node becomes available again.
- MANUAL. With this setting, a package, following a failover, must be moved back to its original node by a system administrator.
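For reference, these policies are set in the package ASCII configuration file. A minimal sketch follows; the package and node names are placeholders, and only the policy-related parameters are shown:

```
PACKAGE_NAME      pkg2
FAILOVER_POLICY   MIN_PACKAGE_NODE
FAILBACK_POLICY   MANUAL
NODE_NAME         ftsys10
NODE_NAME         ftsys9
```

With MIN_PACKAGE_NODE, the order of the NODE_NAME entries no longer determines the failover target by itself; the eligible node currently running the fewest packages is chosen.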
Failover and failback policies are displayed in the output of the cmviewcl -v command.
  NODE         STATUS       STATE
  ftsys9       up           running

    Policy_Parameters:
    POLICY_NAME     CONFIGURED_VALUE
    Start           configured_node
    Failback        manual

    Node_Switching_Parameters:
    NODE_TYPE    STATUS       SWITCHING    NAME
    Primary      up           enabled      ftsys9     (current)

  NODE         STATUS       STATE
  ftsys10      up           running

    Policy_Parameters:
    POLICY_NAME     CONFIGURED_VALUE
    Start           configured_node
    Failback        manual

    Node_Switching_Parameters:
    NODE_TYPE    STATUS       SWITCHING    NAME
    Primary      up           enabled      ftsys10    (current)
    Alternate    up           enabled      ftsys9
Quorum Server Status

If the cluster is using a quorum server for tie-breaking services, the display shows the server name, state, and status following the entry for each node, as in the following excerpt from the output of cmviewcl -v:
CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys9       up           running

  Quorum Server Status:
  NAME         STATUS       STATE
  lp-qs        up           running
...

  NODE         STATUS       STATE
  ftsys10      up           running

  Quorum Server Status:
  NAME         STATUS       STATE
  lp-qs        up           running
CVM Package Status

If the cluster is using the VERITAS Cluster Volume Manager for disk storage, the system multi-node package VxVM-CVM-pkg must be running on all active nodes for applications to be able to access CVM disk groups. This package is shown in the following output of the cmviewcl command:
CLUSTER      STATUS
example      up

  NODE         STATUS       STATE
  ftsys8       down         halted
  ftsys9       up           running
When you use the -v option, the display shows the system multi-node package associated with each active node in the cluster, as in the following:
SYSTEM_MULTI_NODE_PACKAGES:

PACKAGE        STATUS       STATE
VxVM-CVM-pkg   up           running

  NODE         STATUS       STATE
  ftsys8       down         halted

  NODE         STATUS       STATE
  ftsys9       up           running

    ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
    Service    up       0              0          VxVM-CVM-pkg.srv
Status After Moving the Package to Another Node

After issuing the following command:
# cmrunpkg -n ftsys9 pkg2
PACKAGE      STATUS       STATE        AUTO_RUN     NODE
pkg1         up           running      enabled      ftsys9

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Failover        min_package_node
  Failback        manual

  Script_Parameters:
  ITEM       STATUS   MAX_RESTARTS   RESTARTS
  Service    up       0              0
  Subnet     up
  Resource   up

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      ftsys9     (current)
  Alternate    up           enabled      ftsys10

PACKAGE      STATUS       STATE        NODE
pkg2         up           running      ftsys9

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Failover        min_package_node
  Failback        manual

  Script_Parameters:
  ITEM       STATUS   NAME           MAX_RESTARTS   RESTARTS
  Service    up       service2.1     0              0
  Subnet     up       15.13.168.0    0              0

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING    NAME
  Primary      up           enabled      ftsys10
  Alternate    up           enabled      ftsys9     (current)

NODE         STATUS       STATE
ftsys10      up           running
Now pkg2 is running on node ftsys9. Note that it is still disabled from switching.

Status After Package Switching Is Enabled

The following command enables package switching again:
# cmmodpkg -e pkg2
Both packages are now running on ftsys9, and pkg2 is enabled for switching. ftsys10 is running the cluster daemon, but no packages are running on it.

Status After Halting a Node

After halting ftsys10 with the following command:
# cmhaltnode ftsys10
The resulting output is seen on both ftsys9 and ftsys10.

Viewing RS232 Status

If you are using a serial (RS232) line as a heartbeat connection, you will see a list of configured RS232 device files in the output of the cmviewcl -v command. The following shows normal running status:
CLUSTER      STATUS
example      up

NODE         STATUS       STATE
ftsys9       up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH         NAME
  PRIMARY      up           56/36.1      lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME     STATUS       CONNECTED_TO:
  /dev/tty0p0          up           ftsys10 /dev/tty0p0

NODE         STATUS       STATE
ftsys10      up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH         NAME
  PRIMARY      up           28.1         lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME     STATUS       CONNECTED_TO:
  /dev/tty0p0          up           ftsys9 /dev/tty0p0
The following shows status when the serial line is not working:
CLUSTER      STATUS
example      up

NODE         STATUS       STATE
ftsys9       up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH         NAME
  PRIMARY      up           56/36.1      lan0

  Serial_Heartbeat:
  DEVICE_FILE_NAME     STATUS       CONNECTED_TO:
  /dev/tty0p0          down         ftsys10 /dev/tty0p0

NODE         STATUS       STATE
ftsys10      up           running

  Network_Parameters:
  INTERFACE    STATUS       PATH         NAME
  PRIMARY      up           28.1         lan0
Viewing Data on Unowned Packages

The following example shows packages that are currently unowned, that is, not running on any configured node. Information on monitored resources is provided for each node on which the package can run; this information allows you to identify the cause of a failure and decide where to start the package up again.
UNOWNED_PACKAGES

PACKAGE      STATUS       STATE        AUTO_RUN     NODE
PKG3         down         halted       enabled      unowned

  Policy_Parameters:
  POLICY_NAME     CONFIGURED_VALUE
  Failover        min_package_node
  Failback        automatic

  Script_Parameters:
  ITEM       STATUS
  Resource   up
  Subnet     up
  Resource   up
  Subnet     up
  Resource   up
  Subnet     up
  Resource   up
  Subnet     up

  Node_Switching_Parameters:
  NODE_TYPE    STATUS       SWITCHING
  Primary      up           enabled
  Alternate    up           enabled
  Alternate    up           enabled
  Alternate    up           enabled
Online Reconfiguration
The online reconfiguration feature provides a method to make configuration changes online to a Serviceguard Extension for RAC (SGeRAC) cluster. Specifically, it provides the ability to add and/or delete nodes from a running SGeRAC cluster, and to reconfigure a shared LVM (SLVM) volume group while it is being accessed by only one node.
NOTE
Ensure that none of the mirrored logical volumes in this volume group have Consistency Recovery set to MWC (refer to lvdisplay(1M)). Changing the mode back to shared will not be allowed in that case, since Mirror Write Cache consistency recovery (MWC) is not valid in volume groups activated in shared mode.
5. Make the desired configuration change for the volume group. On the node where the volume group is active, run the required command to change the configuration. For example, to add a mirror copy, use the following command:

# lvextend -m 2 /dev/vg_shared/lvol1

6. Export the changes to the other cluster nodes if required. If the configuration change created or deleted a logical or physical volume (that is, if any of the following commands were used: lvcreate(1M), lvreduce(1M), vgextend(1M), vgreduce(1M), lvsplit(1M), lvmerge(1M)), then the following sequence of steps is required.

a. From the same node, export the map file for vg_shared. For example:

# vgexport -s -p -m /tmp/vg_shared.map vg_shared

b. Copy the map file thus obtained to all the other nodes of the cluster.

c. On the other cluster nodes, export vg_shared and re-import it using the new map file. For example:

# vgexport vg_shared
# mkdir /dev/vg_shared
# mknod /dev/vg_shared/group c 64 0xhh0000
# vgimport -s -m /tmp/vg_shared.map vg_shared
CAUTION
If Business Copies, or Business Continuity Volumes (BCs or BCVs) are in use, then run vgchgid(1M) before starting the procedure.
The vgimport(1M)/vgexport(1M) sequence does not preserve the order of physical volumes in the /etc/lvmtab file. If the ordering is significant, due to the presence of active-passive devices or because the volume group has been configured to maximize throughput by ordering the paths accordingly, the ordering must be repeated.
7. On the node in the cluster where the volume group vg_shared is active, change the activation mode back to shared:

# vgchange -a s -x vg_shared

On the other cluster nodes, activate vg_shared in shared mode:

# vgchange -a s vg_shared

8. Back up the changes made to the volume group using vgcfgbackup on all nodes:

# vgcfgbackup vg_shared
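The map-file distribution in steps 6 and 7 can be wrapped in a small script for review before running it against the cluster. This is a sketch only: commands are echoed rather than executed, the peer node names are placeholders, and the mkdir/mknod portion of step 6c is omitted for brevity.

```shell
# Sketch of steps 6-7 above as one reviewable sequence: export the updated
# map file, distribute it, re-import on the peers, and return to shared
# mode. Commands are echoed, not executed; node2/node3 are placeholders.
VG=vg_shared
PEERS="node2 node3"
run() { echo "would run: $*"; }   # change the body to "$@" to execute for real

run vgexport -s -p -m /tmp/$VG.map $VG          # step 6a: export the map file
for n in $PEERS; do
  run rcp /tmp/$VG.map $n:/tmp/$VG.map          # step 6b: copy it to each peer
  run remsh $n vgexport $VG                     # step 6c: re-import on each peer
  run remsh $n vgimport -s -m /tmp/$VG.map $VG
done
run vgchange -a s -x $VG                        # step 7: back to shared mode
```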
2. On the configuration node, use the vgchange command to make the volume group shareable by members of the cluster:
# vgchange -S y -c y /dev/vg_ops
This command is issued from the configuration node only, and the cluster must be running on all nodes for the command to succeed. Note that both the -S and the -c options are specified: the -S y option makes the volume group shareable, and the -c y option causes the cluster ID to be written out to all the disks in the volume group. In effect, this command specifies the cluster to which a node must belong in order to obtain shared access to the volume group.

Making a Volume Group Unshareable

Use the following steps to unmark a previously marked shared volume group:

1. Remove the volume group name from the ASCII cluster configuration file.
2. Enter the following command:
# vgchange -S n -c n /dev/volumegroup
The above example marks the volume group as non-shared and not associated with a cluster.
When the same command is entered on the second node, the following message is displayed:
Activated volume group in shared mode. This node is a Client.
NOTE
Do not share volume groups that are not part of the RAC configuration unless shared access is controlled.
Deactivating a Shared Volume Group Issue the following command from each node to deactivate the shared volume group:
# vgchange -a n /dev/vg_ops
Remember that volume groups remain shareable even when nodes enter and leave the cluster.
NOTE
If you wish to change the capacity of a volume group at a later time, you must deactivate and unshare the volume group first. If you add disks, you must specify the appropriate physical volume group name and make sure the /etc/lvmpvg file is correctly updated on both nodes.
3. From node 2, use the vgexport command to export the volume group:
# vgexport -m /tmp/vg_ops.map.old /dev/vg_ops
4. From node 1, use the vgchange command to deactivate the volume group:
# vgchange -a n /dev/vg_ops
6. Prior to making configuration changes, activate the volume group in normal (non-shared) mode:
# vgchange -a y /dev/vg_ops
7. Use normal LVM commands to make the needed changes. Be sure to set the raw logical volume device file's owner to oracle and group to dba, with a mode of 660. 8. Next, still from node 1, deactivate the volume group:
# vgchange -a n /dev/vg_ops
9. Use the vgexport command with the options shown in the example to create a new map file:
# vgexport -p -m /tmp/vg_ops.map /dev/vg_ops
Make a copy of /etc/lvmpvg in /tmp/lvmpvg, then copy the file to /tmp/lvmpvg on node 2. Copy the file /tmp/vg_ops.map to node 2. 10. Use the following command to make the volume group shareable by the entire cluster again:
# vgchange -S y -c y /dev/vg_ops
12. Create a control file named group in the directory /dev/vg_ops, as in the following:
# mknod /dev/vg_ops/group c 64 0xhh0000
The major number is always 64, and the hexadecimal minor number has the format:
0xhh0000
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured.
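Choosing the next free minor number can be scripted. In this sketch, the hh values already in use are passed as arguments; in practice you would gather them from the existing group files (for example, from the output of ls -l /dev/*/group). The values shown are hypothetical.

```shell
# Sketch: pick the next free volume-group minor number in the 0xhh0000
# format described above. The hh values already in use are passed as
# arguments (illustrative only; gather real values from /dev/*/group).
next_vg_minor() {
  i=0
  while :; do
    hh=$(printf '%02x' "$i")
    used=no
    for u in "$@"; do [ "$u" = "$hh" ] && used=yes; done
    [ "$used" = no ] && { printf '0x%s0000\n' "$hh"; return; }
    i=$((i + 1))
  done
}

next_vg_minor 00 01 02   # with 00-02 taken, prints 0x030000
```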
13. Use the vgimport command, specifying the map file you copied from the configuration node. In the following example, the vgimport command is issued on the second node for the same volume group that was modified on the first node:
# vgimport -v -m /tmp/vg_ops.map /dev/vg_ops /dev/dsk/c0t2d0 /dev/dsk/c1t2d0
14. Activate the volume group in shared mode by issuing the following command on both nodes:
# vgchange -a s -p /dev/vg_ops
Skip this step if you use a package control script to activate and deactivate the shared volume group as a part of RAC startup and shutdown.
If you are adding or removing shared LVM volume groups, make sure that you modify the cluster configuration file and any package control script that activates and deactivates the shared LVM volume groups.
One node will identify itself as the master. Create disk groups from this node. Similarly, you can delete VxVM or CVM disk groups provided they are not being used by a cluster node at the time.
NOTE
For CVM without CFS, if you are adding a disk group to the cluster configuration, make sure you also modify any package or create the package control script that imports and deports this disk group. If you are adding a CVM disk group, be sure to add the STORAGE_GROUP entry for the disk group to the package ASCII file.

For CVM with CFS, if you are adding a disk group to the cluster configuration, make sure you also create the corresponding multi-node package. If you are adding a CVM disk group, be sure to add the necessary package dependency to the packages that depend on the CVM disk group.

If you are removing a disk group from the cluster configuration, make sure that you also modify or delete any package control script that imports and deports this disk group. If you are removing a CVM disk group, be sure to remove the STORAGE_GROUP entries for the disk group from the package ASCII file. When removing a disk group that is activated and deactivated through a multi-node package, make sure to modify or remove any configured package dependencies on the multi-node package.
Removing Serviceguard Extension for RAC from a System
NOTE
After removing Serviceguard Extension for RAC, your cluster will still have Serviceguard installed. For information about removing Serviceguard, refer to the Managing Serviceguard user's guide for your version of the product.
Monitoring Hardware
Good standard practice in handling a high availability system includes careful fault monitoring, so as to prevent failures if possible, or at least to react to them swiftly when they occur. The following should be monitored for errors or warnings of all kinds:

- Disks
- CPUs
- Memory
- LAN cards
- Power sources
- All cables
- Disk interface cards
Some monitoring can be done through simple physical inspection, but for the most comprehensive monitoring, you should examine the system log file (/var/adm/syslog/syslog.log) periodically for reports on all configured HA devices. The presence of errors relating to a device will show the need for maintenance.
NOTE
As you add new disks to the system, update the planning worksheets (described in Appendix B, Blank Planning Worksheets) so as to record the exact configuration you are using.
Replacing Disks
The procedure for replacing a faulty disk mechanism depends on the type of disk configuration you are using and on the type of volume manager software. For a description of replacement procedures using VERITAS VxVM or CVM, refer to the chapter on administering hot-relocation in the VERITAS Volume Manager Administrator's Guide. Additional information is found in the VERITAS Volume Manager Troubleshooting Guide. The following paragraphs describe how to replace disks that are configured with LVM. Separate descriptions are provided for replacing a disk in an array and replacing a disk in a high availability enclosure.
NOTE
If your LVM installation requires online replacement of disk mechanisms, the use of disk arrays may be required, because software mirroring of JBODs with MirrorDisk/UX does not permit hot swapping for disks that are activated in shared mode.
1. Identify the physical volume name of the failed disk and the name of the volume group in which it was configured. In the following examples, the volume group name is shown as /dev/vg_sg01 and the physical volume name is shown as /dev/dsk/c2t3d0. Substitute the volume group and physical volume names that are correct for your system.

2. Identify the names of any logical volumes that have extents defined on the failed physical volume.

3. On the node on which the volume group is currently activated, use the following command for each logical volume that has extents on the failed physical volume:

# lvreduce -m 0 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0

4. At this point, remove the failed disk and insert a new one. The new disk will have the same HP-UX device name as the old one.

5. On the node from which you issued the lvreduce command, issue the following command to restore the volume group configuration data to the newly inserted disk:

# vgcfgrestore /dev/vg_sg01 /dev/dsk/c2t3d0

6. Issue the following command to extend each logical volume to the newly inserted disk:

# lvextend -m 1 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0

7. Finally, use the lvsync command for each logical volume that has extents on the failed physical volume. This synchronizes the extents of the new disk with the extents of the other mirror:

# lvsync /dev/vg_sg01/lvolname
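The steps above can be collected into a reviewable dry-run script. This is a sketch only: commands are echoed rather than executed, and the volume group, logical volume, and disk names are placeholders.

```shell
# Sketch of the mirrored-disk replacement sequence above. Commands are
# echoed so the order can be reviewed first; names are placeholders.
VG=/dev/vg_sg01
LV=$VG/lvol1
PV=/dev/dsk/c2t3d0
run() { echo "would run: $*"; }   # change the body to "$@" to execute for real

run lvreduce -m 0 "$LV" "$PV"     # step 3: detach the mirror copy on the failed disk
# ...physically replace the disk here (step 4)...
run vgcfgrestore "$VG" "$PV"      # step 5: restore the LVM configuration to the new disk
run lvextend -m 1 "$LV" "$PV"     # step 6: re-establish the mirror on the new disk
run lvsync "$LV"                  # step 7: resynchronize the stale extents
```

Steps 3, 6, and 7 are repeated for each logical volume with extents on the failed disk.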
2. Halt all the applications using the SLVM volume group on all the nodes but one.

3. Re-activate the volume group in exclusive mode on all nodes of the cluster:

# vgchange -a e -x <slvm vg>

4. Reconfigure the volume group: vgextend, lvextend, disk addition, etc.

5. Activate the volume group in shared mode:

# vgchange -a s -x <slvm vg>
This will synchronize the stale logical volume mirrors. This step can be time-consuming, depending on hardware characteristics and the amount of data.

6. Deactivate the volume group:

# vgchange -a n vg_ops

7. Activate the volume group on all the nodes in shared mode using vgchange -a s:

# vgchange -a s vg_ops
NOTE
You cannot use inline terminators with internal FW/SCSI buses on D and K series systems, and you cannot use the inline terminator with single-ended SCSI buses. You must not use an inline terminator to connect a node to a Y cable.
Figure 4-1 shows a three-node cluster with two F/W SCSI buses. The solid line and the dotted line represent different buses, both of which have inline terminators attached to nodes 1 and 3. Y cables are also shown attached to node 2.

Figure 4-1 F/W SCSI Buses with In-line Terminators
The use of in-line SCSI terminators allows you to do hardware maintenance on a given node by temporarily moving its packages to another node and then halting the original node while its hardware is serviced. Following the replacement, the packages can be moved back to the original node.
Use the following procedure to disconnect a node that is attached to the bus with an in-line SCSI terminator or with a Y cable:

1. Move any packages on the node that requires maintenance to a different node.
2. Halt the node that requires maintenance. The cluster will re-form, and activity will continue on other nodes. Packages on the halted node will switch to other available nodes if they are configured to switch.
3. Disconnect the power to the node.
4. Disconnect the node from the in-line terminator cable or Y cable if necessary. The other nodes accessing the bus will encounter no problems as long as the in-line terminator or Y cable remains connected to the bus.
5. Replace or upgrade hardware on the node, as needed.
6. Reconnect the node to the in-line terminator cable or Y cable if necessary.
7. Reconnect power and reboot the node. If AUTOSTART_CMCLD is set to 1 in the /etc/rc.config.d/cmcluster file, the node will rejoin the cluster.
8. If necessary, move packages back to the node from their alternate locations and restart them.
Off-Line Replacement
The following steps show how to replace a LAN card off-line. These steps apply to both HP-UX 11.0 and 11i:

1. Halt the node by using the cmhaltnode command.
2. Shut down the system using /etc/shutdown, then power down the system.
3. Remove the defective LAN card.
4. Install the new LAN card. The new card must be exactly the same card type, and it must be installed in the same slot as the card you removed.
5. Power up the system.
6. If necessary, add the node back into the cluster by using the cmrunnode command. (You can omit this step if the node is configured to join the cluster automatically.)
On-Line Replacement
If your system hardware supports hotswap I/O cards, and if the system is running HP-UX 11i (B.11.11 or later), you have the option of replacing the defective LAN card on-line. This will significantly improve the overall availability of the system. To do this, follow the steps provided in the section How to On-line Replace (OLR) a PCI Card Using SAM in the document Configuring HP-UX for Peripherals. The OLR procedure also requires that the new card must be exactly the same card type as the card you removed to avoid improper operation of the network driver. Serviceguard will automatically recover the LAN card once it has been replaced and reconnected to the network.
2. Use the cmapplyconf command to apply the configuration and copy the new binary file to all cluster nodes:
# cmapplyconf -C config.ascii
This procedure updates the binary file with the new MAC address and thus avoids data inconsistency between the outputs of the cmviewcl and lanscan commands.
Software Upgrades
Serviceguard Extension for RAC (SGeRAC) software upgrades can be done in the following two ways:

- rolling upgrade
- non-rolling upgrade
Instead of an upgrade, moving to a new version can be done with:

- migration with cold install
Rolling upgrade is a feature of SGeRAC that allows you to perform a software upgrade on a given node without bringing down the entire cluster. SGeRAC supports rolling upgrades on version A.11.15 and later, and requires all nodes to be running on the same operating system revision and architecture. Non-rolling upgrade allows you to perform a software upgrade from any previous revision to any higher revision, or between operating system versions, but requires halting the entire cluster.

The rolling and non-rolling upgrade processes can also be used any time one system needs to be taken offline for hardware maintenance or patch installations. Until the upgrade process is complete on all nodes, you cannot change the cluster configuration files, and you will not be able to use any of the new features of the Serviceguard/SGeRAC release.

There may be circumstances when, instead of doing an upgrade, you prefer to do a migration with cold install. The cold install process erases the pre-existing operating system and data and then installs the new operating system and software; you must then restore the data. The advantage of migrating with cold install is that the software can be installed without regard for the software currently on the system or concern for cleaning up old software.

A significant factor when deciding between an upgrade and a cold install is overall system downtime. A rolling upgrade causes the least downtime, because only one node in the cluster is down at any one time. A non-rolling upgrade may require more downtime, because the entire cluster has to be brought down during the upgrade process.
Appendix A
205
One advantage of both rolling and non-rolling upgrades versus cold install is that upgrades retain the pre-existing operating system, software, and data. Conversely, the cold install process erases the pre-existing system; you must re-install the operating system, software, and data. For these reasons, a cold install may require more downtime.

The sections in this appendix are as follows:

- Rolling Software Upgrades
  - Steps for Rolling Upgrades
  - Example of Rolling Upgrade
  - Limitations of Rolling Upgrades
- Non-Rolling Software Upgrades
  - Steps for Non-Rolling Upgrades
  - Limitations of Non-Rolling Upgrades
- Migrating a SGeRAC Cluster with Cold Install
For more information on support, compatibility, and features for SGeRAC, refer to the Serviceguard Compatibility and Feature Matrix, located at http://docs.hp.com -> High Availability -> Serviceguard Extension for RAC.
4. Upgrade the node to the new Serviceguard and SGeRAC release. (SGeRAC requires the compatible version of Serviceguard.)
5. Edit the /etc/rc.config.d/cmcluster file on the local node to include the following line:
AUTOSTART_CMCLD=1
NOTE
Setting this parameter to 1 is optional. If you want the node to join the cluster at boot time, set it to 1; otherwise, set it to 0.
6. Restart the cluster on the upgraded node (if desired). You can do this in Serviceguard Manager, or from the command line issue the Serviceguard cmrunnode command.
7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on the local node.
8. Repeat steps 1-7 on the other nodes, one node at a time, until all nodes have been upgraded.
NOTE
Be sure to plan sufficient system capacity to allow moving the packages from node to node during the upgrade process to maintain optimum performance.
If a cluster fails before the rolling upgrade is complete (perhaps because of a catastrophic power failure), the cluster can be restarted by entering the cmruncl command from a node which has been upgraded to the latest revision of the software.

Keeping Kernels Consistent

If you change kernel parameters or perform network tuning with ndd as part of doing a rolling upgrade, be sure to change the parameters to the same values on all nodes that can run the same packages in a failover scenario. The ndd command allows the examination and modification of several tunable parameters that affect networking operation and behavior.
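A cross-node consistency check can be scripted. In this sketch, get_tunable is a hypothetical stand-in for the remote query you would actually use (for example, running ndd on each node over a remote shell); the node names and stub values are illustrative only.

```shell
# Sketch: verify that a tunable has the same value on every node that can
# run the same packages. get_tunable is a hypothetical stub standing in
# for a remote ndd query; the values below are illustrative only.
get_tunable() {
  case $1 in
    node1) echo 4096 ;;
    node2) echo 4096 ;;
    node3) echo 8192 ;;
  esac
}

ref=$(get_tunable node1)
for n in node2 node3; do
  val=$(get_tunable "$n")
  [ "$val" = "$ref" ] || echo "MISMATCH: $n has $val, node1 has $ref"
done
```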
NOTE
While you are performing a rolling upgrade, warning messages may appear while the node is determining what version of software is running. This is a normal occurrence and not a cause for concern.
Figure A-1
Step 1.

1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 1.
2. Halt node 1. This will cause the node's packages to start up on an adoptive node. You can do this in Serviceguard Manager, or from the command line issue the following:
# cmhaltnode -f node1
This will cause the failover package to be halted cleanly and moved to node 2. The Serviceguard daemon on node 1 is halted, and the result is shown in Figure A-2.

Figure A-2 Running Cluster with Packages Moved to Node 2
Step 2.

Upgrade node 1 and install the new version of Serviceguard and SGeRAC (A.11.16), as shown in Figure A-3.
NOTE
If you install Serviceguard and SGeRAC separately, Serviceguard must be installed before installing SGeRAC.
Figure A-3
Step 3.

1. Restart the cluster on the upgraded node (node 1), if desired. You can do this in Serviceguard Manager, or from the command line issue the following:
# cmrunnode node1
2. At this point, different versions of the Serviceguard daemon (cmcld) are running on the two nodes, as shown in Figure A-4.
3. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 1.

Figure A-4 Node 1 Rejoining the Cluster
Step 4.

1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 2.
2. Halt node 2. You can do this in Serviceguard Manager, or from the command line issue the following:
# cmhaltnode -f node2
This causes both packages to move to node 1; see Figure A-5.

3. Upgrade node 2 to Serviceguard and SGeRAC (A.11.16), as shown in Figure A-5.
4. When upgrading is finished, enter the following command on node 2 to restart the cluster on node 2:
# cmrunnode node2
5. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 2.

Figure A-5 Running Cluster with Packages Moved to Node 1
Step 5.

Move pkg2 back to its original node. Use the following commands:
# cmhaltpkg pkg2 # cmrunpkg -n node2 pkg2 # cmmodpkg -e pkg2
The cmmodpkg command re-enables switching of the package, which is disabled by the cmhaltpkg command. The final running cluster is shown in Figure A-6.

Figure A-6  Running Cluster After Upgrades
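The rolling sequence in Steps 1 through 5 can be sketched as a per-node loop. This is a hypothetical wrapper, not part of the product: halt_oracle, start_oracle, and upgrade_sg_sgerac are site-specific placeholder functions, while cmhaltnode, cmrunnode, cmhaltpkg, cmrunpkg, and cmmodpkg are the Serviceguard commands shown in the steps above. With DRY_RUN=1 (the default here) the script only prints what it would run.

```shell
#!/bin/sh
# Sketch of the rolling-upgrade procedure above as a per-node loop.
# halt_oracle, start_oracle, and upgrade_sg_sgerac are placeholders
# for site-specific procedures; DRY_RUN=1 only echoes the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for node in node1 node2; do
    run halt_oracle "$node"         # stop RAC/CRS/Clusterware/OPS first
    run cmhaltnode -f "$node"       # packages fail over to the other node
    run upgrade_sg_sgerac "$node"   # install Serviceguard, then SGeRAC
    run cmrunnode "$node"           # rejoin the (mixed-version) cluster
    run start_oracle "$node"
done

# Finally, return pkg2 to its original node and re-enable switching.
run cmhaltpkg pkg2
run cmrunpkg -n node2 pkg2
run cmmodpkg -e pkg2
```

Keeping Oracle halted before cmhaltnode matches the ordering the steps require: the cluster software must never be halted out from under a running instance.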
3. If necessary, upgrade all the nodes in the cluster to the new HP-UX release.
4. Upgrade all the nodes in the cluster to the new Serviceguard/SGeRAC release.
5. Restart the cluster. Use the following command:
# cmruncl
6. If necessary, upgrade all the nodes in the cluster to the new Oracle (RAC, CRS, Clusterware, OPS) software release.
7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on all nodes in the cluster and configure the Serviceguard/SGeRAC packages and Oracle as needed.
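Unlike the rolling procedure, this sequence upgrades every node while the whole cluster is down and then restarts it in one step with cmruncl. A minimal sketch of that ordering follows; upgrade_node and restart_oracle are site-specific placeholders, the node names are assumed from the two-node example, and DRY_RUN=1 (the default) only prints the commands.

```shell
#!/bin/sh
# Sketch of the non-rolling upgrade above: all nodes are upgraded while
# the cluster is halted, then the cluster and Oracle are restarted.
# upgrade_node and restart_oracle are site-specific placeholders.
DRY_RUN=${DRY_RUN:-1}
NODES="node1 node2"

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for n in $NODES; do
    # New HP-UX release (if needed), then Serviceguard, then SGeRAC,
    # then the new Oracle release (if needed), per steps 3-6 above.
    run upgrade_node "$n"
done

run cmruncl                 # step 5: restart the whole cluster at once

for n in $NODES; do
    run restart_oracle "$n" # step 7: bring Oracle back up on each node
done
```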
CAUTION
The cold install process erases the pre-existing software, operating system, and data. If you want to retain any existing software, make sure to back up that software before migrating.
Use the following process as a checklist to prepare the migration:
1. Back up the required data, including databases, user and application data, volume group configurations, etc.
2. Halt cluster applications, including RAC, and then halt the cluster.
3. Do a cold install of the HP-UX operating system. For more information on the cold install process, see the HP-UX Installation and Update Guide located at http://docs.hp.com -> By OS Release -> Installing and Updating.
4. Install additional required software that did not come with your version of the HP-UX OE.
5. Install a Serviceguard/SGeRAC version that is compatible with the new HP-UX operating system version. For more information on support, compatibility, and features for SGeRAC, refer to the Serviceguard Compatibility and Feature Matrix, located at http://docs.hp.com -> High Availability -> Serviceguard Extension for RAC.
6. Recreate any user accounts needed for the cluster applications.
7. Recreate the network and storage configurations (that is, set up stationary IP addresses and create LVM volume groups and/or CVM disk groups required for the cluster).
8. Recreate the SGeRAC cluster.
9. Restart the cluster.
10. Reinstall the cluster applications, such as RAC.
11. Restore the data.
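Step 1 of the checklist is the critical one, since the cold install erases everything on the OS disk. The sketch below shows one way to capture the LVM volume group configurations (with vgcfgbackup, so they can be replayed when recreating storage in step 7) and the network settings. The volume group names and backup target are placeholders for your own configuration, and DRY_RUN=1 (the default here) only prints the commands.

```shell
#!/bin/sh
# Sketch of checklist step 1: save LVM volume group configurations and
# the network configuration to media that survives the cold install.
# Volume group names and BACKUP_DIR are placeholders.
DRY_RUN=${DRY_RUN:-1}
BACKUP_DIR=/backup/pre-coldinstall

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mkdir -p "$BACKUP_DIR"

# vgcfgbackup writes an LVM configuration file that vgcfgrestore can
# replay after the reinstall, when step 7 recreates the volume groups.
for vg in /dev/vg_rac /dev/vg_ops; do
    run vgcfgbackup -f "$BACKUP_DIR/$(basename $vg).conf" "$vg"
done

# Keep a record of the stationary IP configuration for step 7.
run cp /etc/rc.config.d/netconf "$BACKUP_DIR/"
```

Database and application data still need their own backups (for example, with your normal Oracle backup tooling); this sketch covers only the configuration pieces named in the checklist.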
Appendix B
Blank Planning Worksheets

LVM Volume Group and Physical Volume Worksheet
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Disk Group Name: __________________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Index
A activation of volume groups in shared mode, 187 adding packages on a running cluster, 159 administration cluster and package states, 168 array replacing a faulty mechanism, 195, 196, 197 AUTO_RUN parameter, 157 AUTO_START_TIMEOUT in sample configuration file, 60, 124 B building a cluster CVM infrastructure, 73, 136 building an RAC cluster displaying the logical volume infrastructure, 57, 121 logical volume infrastructure, 48, 112 building logical volumes for RAC, 54, 118 C CFS, 65, 70 creating storage infrastructure, 129 deleting from the cluster, 134 cluster state, 175 status options, 171 cluster configuration file, 60, 124 cluster node startup and shutdown OPS instances, 157 cluster volume group creating physical volumes, 49, 113 CLUSTER_NAME (cluster name) in sample configuration file, 60, 124 control script creating with commands, 159 creating with SAM, 159 in package configuration, 159 starting OPS instances, 162 creating a SGeRAC Cluster, 65 creating a storage infrastructure, 65 CVM creating a storage infrastructure, 73, 136 use of the VxVM-CVM-pkg, 77, 140 CVM_ACTIVATION_CMD in package control script, 161 CVM_DG in package control script, 161 D deactivation of volume groups, 188 deciding when and where to run packages, 24 deleting from the cluster, 70 deleting nodes while the cluster is running, 190
demo database files, 55, 80, 119, 145 disk choosing for volume groups, 49, 113 disk arrays creating logical volumes, 53, 117 disk storage creating the infrastructure with CVM, 73, 136
disks replacing, 195 E eight-node cluster with disk array figure, 31 EMS for preventive monitoring, 193 enclosure for disks replacing a faulty mechanism, 195 Event Monitoring Service in troubleshooting, 193 exporting shared volume group data, 57, 121 exporting files LVM commands, 57, 121 extended distance cluster building, 32 F figures eight-node cluster with EMC disk array, 31 node 1 rejoining the cluster, 212 node 1 upgraded to HP-UX 11.00, 211 running cluster after upgrades, 214 running cluster before rolling upgrade, 209 running cluster with packages moved to node 1, 213 running cluster with packages moved to node 2, 210 FIRST_CLUSTER_LOCK_PV in sample configuration file, 60, 124 FIRST_CLUSTER_LOCK_VG in sample configuration file, 60, 124 FS
in sample package control script, 161 FS_MOUNT_OPT in sample package control script, 161 G GMS group membership services, 23 group membership services define, 23 H hardware adding disks, 194 monitoring, 193 heartbeat subnet address parameter in cluster manager configuration, 47, 110 HEARTBEAT_INTERVAL in sample configuration file, 60, 124 HEARTBEAT_IP in sample configuration file, 60, 124 high availability cluster defined, 16 I in-line terminator permitting online hardware maintenance, 198
using to obtain a list of disks, 49, 113 LV in sample package control script, 161 LVM creating on disk arrays, 53, 117 LVM commands exporting files, 57, 121 M maintaining a RAC cluster, 167 maintenance adding disk hardware, 194 making changes to shared volume groups, 188
monitoring hardware, 193 N network status, 174 NETWORK_INTERFACE in sample configuration file, 60, 124 NETWORK_POLLING_INTERVAL (network polling interval) in sample configuration file, 60, 124 node halting status, 180 in an RAC cluster, 16 status and state, 172 NODE_FAILFAST_ENABLED parameter, 157
installing Oracle RAC, 59, 123 installing software Serviceguard Extension for RAC, 46, 109 IP in sample package control script, 161 IP address switching, 27 L lock disk replacing a faulty mechanism, 198 logical volumes blank planning worksheet, 223, 224 creating, 54, 118 creating for a cluster, 50, 79, 114, 142, 143 creating the infrastructure, 48, 112 disk arrays, 53, 117 filled in planning worksheet, 42, 44, 102, 106
NODE_TIMEOUT (heartbeat timeout) in sample configuration file, 60, 124 O online hardware maintenance by means of in-line SCSI terminators, 198 Online node addition and deletion, 183 Online reconfiguration, 183 OPS control scripts for starting instances, 162 packages to access database, 158 startup and shutdown instances, 157 startup and shutdown volume groups, 156 OPS cluster starting up with scripts, 156 opsctl.ctl Oracle demo database files, 55, 80, 119, 145 opslog.log Oracle demo database files, 55, 80, 119, 145
lssf
optimizing packages for large numbers of storage units, 161 Oracle demo database files, 55, 80, 119, 145 Oracle 10 RAC installing binaries, 89 Oracle 10g RAC introducing, 33 Oracle 9i RAC installing, 148 introducing, 101 Oracle Disk Manager configuring, 92 Oracle Parallel Server starting up instances, 156 Oracle RAC installing, 59, 123 Oracle10g installing, 88 P package basic concepts, 17, 18 moving status, 178 state, 175 status and state, 172 switching status, 179 package configuration service name parameter, 47, 110 writing the package control script, 159 package control script generating with commands, 159 packages accessing OPS database, 158 deciding where and when to run, 24 launching OPS instances, 157 startup and shutdown volume groups, 156 parameter AUTO_RUN, 157 NODE_FAILFAST_ENABLED, 157 performance optimizing packages for large numbers of storage units, 161 physical volumes creating for clusters, 49, 113 filled in planning worksheet, 222 planning worksheets for logical volume planning, 42, 44, 102, 106
PVG-strict mirroring creating volume groups with, 49, 113 Q quorum server status and state, 176 R RAC group membership services, 23 overview of configuration, 16 status, 173 RAC cluster defined, 16 removing packages on a running cluster, 159 removing Serviceguard Extension for RAC from a system, 192 replacing disks, 195 rollback.dbf Oracle demo database files, 55, 56, 81, 119, 145
rolling software upgrades example, 209 steps, 207, 217 rolling upgrade limitations, 215, 218 RS232 status, viewing, 180 running cluster adding or removing packages, 159 S serial line status, 174 service status, 174 service name parameter in package configuration, 47, 110 SERVICE_CMD in sample package control script, 161 SERVICE_NAME in sample package control script, 161 parameter in package configuration, 47, 110 SERVICE_RESTART in sample package control script, 161
Serviceguard Extension for RAC installing, 46, 109 introducing, 15 shared mode activation of volume groups, 187 deactivation of volume groups, 188 shared volume groups making volume groups shareable, 186 sharing volume groups, 57, 121 SLVM making volume groups shareable, 186 SNOR configuration, 184 software upgrades, 205 state cluster, 175 node, 172 of cluster and package, 168 package, 172, 175 status cluster, 171 halting node, 180 moving package, 178 network, 174 node, 172 normal running RAC, 176 of cluster and package, 168 package, 172 RAC, 173 serial line, 174 service, 174 switching package, 179 SUBNET in sample package control script, 161 switching IP addresses, 27 system multi-node package used with CVM, 77, 140 system.dbf Oracle demo database files, 55, 81 T temp.dbf Oracle demo database files, 55, 81, 119, 145 tools.dbf Oracle demo database files, 119, 145 troubleshooting monitoring hardware, 193 replacing disks, 195 V VG in sample package control script, 161 VGCHANGE in package control script, 161 viewing RS232 status, 180 volume group creating for a cluster, 49, 113 creating physical volumes for clusters, 49, 113
volume groups adding shared volume groups, 190 displaying for RAC, 57, 121 exporting to other nodes, 57, 121 making changes to shared volume groups, 188
making shareable, 186 making unshareable, 187 OPS startup and shutdown, 156 VOLUME_GROUP in sample configuration file, 60, 124 VXVM_DG in package control script, 161 VxVM-CVM-pkg, 77, 140 W worksheet logical volume planning, 42, 44, 102, 106 worksheets physical volume planning, 222 worksheets for planning blanks, 221