Add a Node to an Existing Oracle RAC 11g R2 Cluster
by Michael New, MichaelNew@earthlink.net, Gradation LLC
Contents
Introduction
Hardware and Costs
Install and Configure the Linux Operating System on the New Node
Configure Access to the Shared Storage
Install and Configure ASMLib 2.0
Pre-installation Tasks for Oracle Grid Infrastructure for a Cluster
Extend Oracle Grid Infrastructure for a Cluster to the New Node
Extend Oracle Database Software to the New Node
Add New Instance to the Cluster Database
Extend Oracle ACFS Cluster File System to the New Node
About the Author
Introduction
As your organization grows, so too does your need for more application and database resources to support the company's IT systems. Oracle RAC 11g provides a scalable
framework that allows DBAs to extend the database tier to support this increased demand. As the number of users and transactions increases, additional Oracle
instances can be added to the Oracle database cluster to distribute the extra load.
This document is an extension to my article "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5)". Contained in this new article are the steps required to add a
single node to an already running and configured two-node Oracle RAC 11g Release 2 (11.2.0.3) for Linux x86_64 environment on the CentOS 5 Linux platform. All shared
disk storage for Oracle RAC is based on iSCSI using Openfiler running on a separate node (also known as the Network Storage Server). Although this article was written
and tested on CentOS 5 Linux, it should work unchanged with Red Hat Enterprise Linux 5 or Oracle Linux 5.
To add nodes to an existing Oracle RAC, Oracle Corporation recommends using the Oracle cloning procedures described in the Oracle Universal Installer and
OPatch User's Guide. This article, however, uses manual procedures to add nodes and instances to the existing Oracle RAC. The manual method described in
this guide involves using the addNode.sh script to install the Oracle Grid Infrastructure home and then the Oracle Database home on the new node, and finally extending the
cluster database by adding a new instance on the new node. In other words, you extend the software and the new instance onto the new Oracle RAC node in the same
order as you installed the Grid Infrastructure and Oracle Database software components on the existing RAC.
It is assumed that the reader has already built and configured a two-node Oracle RAC using the article "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5)".
Note: The current Oracle RAC has been upgraded from its base release 11.2.0.1 to 11.2.0.3 by applying the 10404530 patchset.
To maintain the existing naming convention, the new Oracle RAC node being added to the existing cluster will be named racnode3, running a new instance named
racdb3 and making it a three-node cluster.
All shared disk storage for the existing Oracle RAC is based on iSCSI using a Network Storage Server; namely Openfiler Release 2.3 (Final) x86_64.
The existing Oracle RAC is not using Grid Naming Service (GNS) to assign IP addresses.
The new Oracle RAC node has the same operating system version and installed patches as the existing Oracle RAC nodes.
The existing Oracle RAC does not use shared Oracle homes for the Grid Infrastructure or Database software.
The Oracle Grid Infrastructure and Oracle Database software is installed using the optional Job Role Separation configuration. One OS user is created to own each
Oracle software product — "grid" for the Oracle Grid Infrastructure owner and "oracle" for the Oracle Database software.
Oracle ASM is being used for all Clusterware files, database files, and the Fast Recovery Area. The OCR file and the voting files are stored in an ASM disk group
named +CRS. The ASMLib support library is configured to provide persistent paths and permissions for storage devices used with Oracle ASM.
All database files are configured using Oracle Managed Files (OMF) with four ASM disk groups (CRS, RACDB_DATA, DOCSDG1, FRA).
The existing Oracle RAC is configured with Oracle ASM Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (ADVM) which is being used
as a shared file system to store files maintained outside of the Oracle database. The mount point for the cluster file system is /oradocs on all Oracle RAC nodes
which mounts the docsvol1 ASM dynamic volume created in the DOCSDG1 ASM disk group. The cluster file system will be extended to the new Oracle RAC node.
During the creation of the existing Oracle RAC, the installation of Oracle Grid Infrastructure and the Oracle Database software was performed from only one node in
the RAC cluster, namely from racnode1 as the grid and oracle user accounts, respectively. The Oracle Universal Installer (OUI) on that particular node would then
use the ssh and scp commands to run remote commands on, and copy the Oracle software to, all other nodes within the RAC cluster. The grid and oracle user
accounts on the node running the OUI (runInstaller) had to be trusted by all other nodes in the cluster. This meant that the grid and oracle user accounts had to
be able to run the secure shell commands (ssh or scp) on the Linux server executing the OUI (racnode1) against all other Linux servers in the cluster without being prompted
for a password. The same security requirements hold true when extending Oracle RAC.
User equivalence will be configured so that the Oracle Grid Infrastructure home and Oracle Database home will be securely copied from racnode1 to the new Oracle
RAC node using ssh and scp without being prompted for a password. Setting up SSH user equivalence has been greatly simplified in Oracle 11g by running the
runSSHSetup.sh script which can be found in the GRID_HOME/oui/bin and ORACLE_HOME/oui/bin directories.
Oracle Documentation
While this guide provides detailed instructions for successfully extending an Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation
(see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options,
installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.
Example Configuration
The example configuration used in this guide stores all physical database files (data, online redo logs, control files, archived redo logs) on ASM in an ASM disk group named
+RACDB_DATA while the Fast Recovery Area is created in a separate ASM disk group named +FRA.
The new three-node Oracle RAC and the network storage server will be configured as described in the table below after adding the new Oracle RAC node (racnode3).
Node Name   Instance Name   Database Name             Processor                            RAM   Operating System
racnode1    racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode2    racdb2          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode3    racdb3          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
Oracle Software Components
Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid
                                                                                             /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle
                                                                                             /u01/app/oracle/product/11.2.0/dbhome_1
Storage Components
Storage Component File System Volume Size ASM Volume Group Name ASM Redundancy Openfiler Volume Name
The following is a conceptual look at what the environment will look like after adding the third Oracle RAC node (racnode3) to the cluster.
Figure 1: Adding racnode3 to the current Oracle RAC 11g Release 2 System
This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking
equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final
Release).
Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
4GB, DDR2, 800MHz
160GB 7.2K RPM SATA 3Gbps Hard Drive
Integrated Graphics - (ATI ES1000)
Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme IITM 5722)
16x DVD Drive
No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)
US$500
Each Linux server for Oracle RAC should contain at least two NIC adapters. The Dell PowerEdge T100 includes an
embedded Broadcom(R) NetXtreme IITM 5722 Gigabit Ethernet NIC that will be used to connect to the public network.
A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select
the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be
used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card)
for the private network.
Miscellaneous Components
2 x Network Cables
Total US$630
Install and Configure the Linux Operating System on the New Node
Install the Linux operating system on the new Oracle RAC node using the same procedures documented in the original guide describing the two-node Oracle RAC 11g
Release 2 configuration.
When configuring the machine name and networking, be sure to follow the same conventions used in the existing Oracle RAC system. For example, the following describes
the network configuration used for the new node.
racnode3 network devices: eth0 is used for the public network and eth1 for the private network (RAC interconnect and Openfiler networked storage).
Continue by manually setting your hostname. I used racnode3 for the new Oracle RAC node. Finish the dialog by supplying your gateway and DNS servers.
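A minimal sketch of the interface configuration files follows. Only the public address 192.168.1.153 is confirmed by the DNS tests later in this guide; the private address 192.168.2.153 is an assumption that follows the cluster's existing addressing scheme, so verify it against your environment.

# /etc/sysconfig/network-scripts/ifcfg-eth0 - (public network)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.153
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 - (private network / storage - assumed address)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.153
NETMASK=255.255.255.0
ONBOOT=yes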
Install all required Linux packages on the new Oracle RAC node using the same procedures documented in the original guide describing the two-node Oracle RAC 11g
Release 2 configuration.
Network Configuration
The current Oracle RAC is not using Grid Naming Service (GNS) to assign IP addresses. The existing cluster uses the traditional method of manually assigning static IP
addresses in Domain Name Service (DNS). I often refer to this traditional method as the "DNS method" because all IP
addresses should be resolved using DNS.
When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS for the new Oracle RAC node
before extending the Oracle Grid Infrastructure software. This includes the node's public IP address, the private interconnect IP address, and the virtual IP address (VIP).
The following table displays the network configuration that will be used when adding a third node to the existing Oracle RAC. Note that every IP address will be registered in
DNS and the hosts file for each Oracle RAC node with the exception of the SCAN virtual IP. The SCAN virtual IP will only be registered in DNS.
Update DNS
Update DNS by adding an entry for the new Oracle RAC node in the forward and reverse zone definition files.
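For example, assuming BIND-style zone files, entries along these lines would be added (the private and VIP addresses are illustrative; only 192.168.1.153 is confirmed by the lookups below):

; forward zone file - idevelopment.info
racnode3        IN    A      192.168.1.153
racnode3-priv   IN    A      192.168.2.153
racnode3-vip    IN    A      192.168.1.253

; reverse zone file - 1.168.192.in-addr.arpa
153             IN    PTR    racnode3.idevelopment.info.

After editing the zone files, reload the name server (for example, service named restart).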
Next, configure the new node for name resolution by editing the "/etc/resolv.conf" file to contain the IP address of the name server and domain that matches those of
your DNS server and the domain you have configured.
nameserver 192.168.1.195
search idevelopment.info
After modifying the /etc/resolv.conf file on the new node, verify that DNS is functioning correctly by testing forward and reverse lookups using the nslookup command-
line utility. Perform tests similar to the following from each node to all other nodes in your cluster.
Name: racnode3.idevelopment.info
Address: 192.168.1.153
Name: racnode3.idevelopment.info
Address: 192.168.1.153
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
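The output shown above comes from lookups similar to the following, run from each node:

# nslookup racnode3.idevelopment.info
# nslookup 192.168.1.153
# nslookup racnode-cluster-scan.idevelopment.info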
Update /etc/hosts
After configuring DNS, update the /etc/hosts file on all Oracle RAC nodes to include entries for the new node being added, and remove any IPv6 entries.
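A minimal sketch of the new entries for racnode3 follows. Only the public address is confirmed earlier in this guide; the private and VIP addresses are assumptions that follow the existing addressing scheme, so adjust them to match your configuration.

# Public Network - (eth0)
192.168.1.153   racnode3.idevelopment.info        racnode3

# Private Interconnect - (eth1) - assumed address
192.168.2.153   racnode3-priv.idevelopment.info   racnode3-priv

# Public Virtual IP (VIP) - assumed address
192.168.1.253   racnode3-vip.idevelopment.info    racnode3-vip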
Verify the network configuration by using the ping command to test the connection from each node in the cluster to the new Oracle RAC node being added.
# ping -c 3 racnode1.idevelopment.info
# ping -c 3 racnode2.idevelopment.info
# ping -c 3 racnode3.idevelopment.info
# ping -c 3 racnode1-priv.idevelopment.info
# ping -c 3 racnode2-priv.idevelopment.info
# ping -c 3 racnode3-priv.idevelopment.info
# ping -c 3 openfiler1.idevelopment.info
# ping -c 3 openfiler1-priv.idevelopment.info
# ping -c 3 racnode1
# ping -c 3 racnode2
# ping -c 3 racnode3
# ping -c 3 racnode1-priv
# ping -c 3 racnode2-priv
# ping -c 3 racnode3-priv
# ping -c 3 openfiler1
# ping -c 3 openfiler1-priv
The current Oracle RAC uses Oracle Cluster Time Synchronization Service (CTSS) for cluster time synchronization, which means that the NTP service will need to be
deconfigured and deinstalled on the new Oracle RAC node before installing Oracle Grid Infrastructure.
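A sketch of the deconfiguration steps on racnode3, following the standard Oracle 11g Release 2 pre-installation procedure (CTSS runs in active mode only when it finds no NTP configuration):

[root@racnode3 ~]# service ntpd stop
[root@racnode3 ~]# chkconfig ntpd off
[root@racnode3 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
[root@racnode3 ~]# rm -f /var/run/ntpd.pid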
Create Job Role Separation Operating System Privileges Groups, Users, and Directories
Perform the following user, group, directory configuration, and setting shell limit tasks for the grid and oracle users on the new Oracle RAC node.
The Oracle Grid Infrastructure and Oracle Database software is installed using the optional Job Role Separation configuration. One OS user is created to own each Oracle
software product — "grid" for the Oracle Grid Infrastructure owner and "oracle" for the Oracle Database software. Both Oracle software owners must have the Oracle
Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and
Oracle Clusterware resource permissions are set correctly.
It is important that the UID and GID of the grid and oracle user accounts on the new Oracle RAC node be identical to that of the existing RAC nodes.
grid
Start by creating the recommended OS groups and user for Grid Infrastructure on the new Oracle RAC node.
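The group and user IDs below match the id output captured from the existing nodes later in this guide; create them with identical values on racnode3:

[root@racnode3 ~]# groupadd -g 1000 oinstall
[root@racnode3 ~]# groupadd -g 1200 asmadmin
[root@racnode3 ~]# groupadd -g 1201 asmdba
[root@racnode3 ~]# groupadd -g 1202 asmoper
[root@racnode3 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid
[root@racnode3 ~]# passwd grid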
Log in to the new Oracle RAC node as the grid user account and create a .bash_profile. When setting the Oracle environment variables in the login script for the new
node, make certain to assign a unique Oracle SID for ASM (i.e. ORACLE_SID=+ASM3).
.bash_profile (grid)
oracle
Next, create the recommended OS groups and user for the Oracle database software on the new Oracle RAC node.
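Again, the group and user IDs below match the id output captured from the existing nodes later in this guide:

[root@racnode3 ~]# groupadd -g 1300 dba
[root@racnode3 ~]# groupadd -g 1301 oper
[root@racnode3 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle
[root@racnode3 ~]# passwd oracle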
Log in to the new Oracle RAC node as the oracle user account and create a .bash_profile. When setting the Oracle environment variables in the login script for the new
node, make certain to assign a unique Oracle SID for the instance (i.e. ORACLE_SID=racdb3).
.bash_profile (oracle)
Verify that the unprivileged user nobody exists on the new Oracle RAC node.
1. To determine if the user exists, enter the following command:
# id nobody
If this command displays information about the nobody user, then you do not have to create that user.
2. If the user nobody does not exist, then enter the following command to create it:
# /usr/sbin/useradd nobody
Configure an Oracle base path compliant with an Optimal Flexible Architecture (OFA) structure and correct permissions.
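A sketch of the directory creation, using the Oracle base and home paths from the software components table earlier in this guide:

[root@racnode3 ~]# mkdir -p /u01/app/grid
[root@racnode3 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode3 ~]# chown -R grid:oinstall /u01
[root@racnode3 ~]# mkdir -p /u01/app/oracle
[root@racnode3 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode3 ~]# chmod -R 775 /u01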
To improve the performance of the software on Linux systems, you must increase the following resource limits for the Oracle software owner users (grid, oracle).
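A sketch of the /etc/security/limits.conf entries, using the values recommended in the Oracle 11g Release 2 installation guide (match whatever is configured on the existing nodes):

grid    soft    nproc     2047
grid    hard    nproc     16384
grid    soft    nofile    1024
grid    hard    nofile    65536
oracle  soft    nproc     2047
oracle  hard    nproc     16384
oracle  soft    nofile    1024
oracle  hard    nofile    65536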
As described in the introduction, the installation of Oracle Grid Infrastructure and the Oracle Database software was performed from only one node in the RAC cluster
(racnode1) as the grid and oracle user accounts, respectively. The OUI on that node used ssh and scp to run remote commands on, and copy the Oracle software to,
all other nodes within the cluster, which required that the grid and oracle user accounts on racnode1 be trusted by all other nodes; that is, able to run ssh and scp
against them without being prompted for a password. The same security requirements hold true when extending Oracle RAC.
User equivalence will be configured so that the Oracle Grid Infrastructure and Oracle Database software will be securely copied from racnode1 to the new Oracle RAC node
(racnode3) using ssh and scp without being prompted for a password. Setting up SSH user equivalence has been greatly simplified in Oracle 11g by running the
runSSHSetup.sh script, which can be found in the GRID_HOME/oui/bin and ORACLE_HOME/oui/bin directories. The script sets up SSH equivalence between the local
host and the specified remote hosts so that commands can be executed on the remote hosts without providing a password or confirmation.
From the machine you will be using to extend the Oracle RAC (racnode1), run the runSSHSetup.sh script first as grid and then as oracle to set up SSH equivalence.
grid
[grid@racnode1 ~]$ $GRID_HOME/oui/bin/runSSHSetup.sh -user grid -hosts "racnode2 racnode3" -advanced -exverify
This script will setup SSH Equivalence from the host 'racnode1.idevelopment.info' to specified remote hosts.
ORACLE_HOME = /u01/app/11.2.0/grid
JAR_LOC = /u01/app/11.2.0/grid/oui/jlib
SSH_LOC = /u01/app/11.2.0/grid/oui/jlib
OUI_LOC = /u01/app/11.2.0/grid/oui
JAVA_HOME = /u01/app/11.2.0/grid/jdk
NOTE :
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. You may be prompted for
the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
If The files containing the client public and private keys already exist on the local
host. The current private key may or may not have a passphrase associated with it. In
case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'. If
you type 'yes', the script will remove the old private/public key files and, any
previous SSH user setups would be reset.
Enter 'yes', 'no'
no
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racnode2:--
Running /usr/bin/ssh -x -l grid racnode2 date to verify SSH connectivity
has been setup from local host to racnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL. Please note that being prompted for a passphrase may be OK
but being prompted for a password is ERROR.
Wed Apr 18 17:01:13 EDT 2012
------------------------------------------------------------------------
--racnode3:--
Running /usr/bin/ssh -x -l grid racnode3 date to verify SSH connectivity
has been setup from local host to racnode3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL. Please note that being prompted for a passphrase may be OK
but being prompted for a password is ERROR.
Wed Apr 18 17:00:17 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 17:01:14 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 17:00:17 EDT 2012
------------------------------------------------------------------------
-Verification from racnode2 complete-
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 17:01:14 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 17:00:18 EDT 2012
------------------------------------------------------------------------
-Verification from racnode3 complete-
SSH verification complete.
oracle
[oracle@racnode1 ~]$ $ORACLE_HOME/oui/bin/runSSHSetup.sh -user oracle -hosts "racnode2 racnode3" -advanced -exverify
This script will setup SSH Equivalence from the host 'racnode1.idevelopment.info' to specified remote hosts.
ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
JAR_LOC = /u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib
SSH_LOC = /u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib
OUI_LOC = /u01/app/oracle/product/11.2.0/dbhome_1/oui
JAVA_HOME = /u01/app/oracle/product/11.2.0/dbhome_1/jdk
NOTE :
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. You may be prompted for
the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
If The files containing the client public and private keys already exist on the local
host. The current private key may or may not have a passphrase associated with it.
In case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'.
If you type 'yes', the script will remove the old private/public key files and, any
previous SSH user setups would be reset.
Enter 'yes', 'no'
no
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racnode2:--
Running /usr/bin/ssh -x -l oracle racnode2 date to verify SSH connectivity
has been setup from local host to racnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL. Please note that being prompted for a passphrase may be OK
but being prompted for a password is ERROR.
Wed Apr 18 16:53:40 EDT 2012
------------------------------------------------------------------------
--racnode3:--
Running /usr/bin/ssh -x -l oracle racnode3 date to verify SSH connectivity
has been setup from local host to racnode3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL. Please note that being prompted for a passphrase may be OK
but being prompted for a password is ERROR.
Wed Apr 18 16:52:44 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 16:53:41 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 16:52:45 EDT 2012
------------------------------------------------------------------------
-Verification from racnode2 complete-
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 16:53:41 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF
YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN
SUCCESSFUL.
Wed Apr 18 16:52:45 EDT 2012
------------------------------------------------------------------------
-Verification from racnode3 complete-
SSH verification complete.
Perform the following OS configuration procedures on the new Oracle RAC node.
# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744
# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500
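The two parameters above are only part of the required set. The remaining Oracle-recommended 11g Release 2 kernel parameters are along these lines; kernel.shmall and kernel.shmmax depend on the amount of RAM, so copy the values from the existing nodes rather than from this sketch:

kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
fs.aio-max-nr = 1048576
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Apply the new settings with sysctl -p.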
If the shared volumes for your Oracle RAC are configured using Openfiler and iSCSI, then certain tasks will need to be performed so that the new Oracle RAC node can
access the iSCSI volumes.
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:
https://openfiler1.idevelopment.info:446/
Username: openfiler
Password: password
Network access will need to be setup in Openfiler in order to identify the new Oracle RAC node so that it can access the iSCSI volumes through the storage (private)
network. Navigate to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to setup networks and/or
hosts that will be allowed to access resources exported by the Openfiler appliance. Add the new Oracle RAC node individually rather than allowing the entire network to have
access to Openfiler resources.
The following image shows the results of adding the new Oracle RAC node.
Figure 2: Configure Openfiler Network Access for the New Oracle RAC Node
Before the iSCSI client on the new Oracle RAC node can access the shared volumes, it needs to be granted the appropriate permissions to the associated iSCSI targets.
From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Under the "Target Configuration" sub-tab, use the pull-down menu to select one of the
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle11gRAC/CLUSTER_22.shtml 11/26
11/15/2018 DBA Tips Archive for Oracle
current RAC iSCSI targets in the section "Select iSCSI Target" and then click the [Change] button.
Click on the grey sub-tab named "Network ACL" (next to "LUN Mapping" sub-tab). For the currently selected iSCSI target, change the "Access" for the new Oracle RAC node
from 'Deny' to 'Allow' and click the [Update] button. This needs to be performed for all of the RAC iSCSI targets.
The following image shows the results of granting access to the crs1 target for the new Oracle RAC node.
Figure 4: Update Network ACL for the New Oracle RAC Node
After granting access to the iSCSI targets from Openfiler, configure the iSCSI initiator on the new Oracle RAC node in order to access the shared volumes.
Determine if the iscsi-initiator-utils package is installed on the new Oracle RAC node.
If the iscsi-initiator-utils package is not installed, load CD/DVD #1 into the machine and perform the following:
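For example (the exact package path on the CentOS 5 media may differ):

[root@racnode3 ~]# rpm -q iscsi-initiator-utils
[root@racnode3 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode3 ~]# cd /media/cdrom/CentOS
[root@racnode3 CentOS]# rpm -Uvh iscsi-initiator-utils-*.rpm
[root@racnode3 CentOS]# cd ; umount /media/cdrom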
Next, start the iscsid service and enable it to start automatically when the system boots. Also configure the iscsi service, which logs in to the iSCSI
targets needed at system startup, to start automatically.
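On CentOS 5 this amounts to:

[root@racnode3 ~]# service iscsid start
[root@racnode3 ~]# chkconfig iscsid on
[root@racnode3 ~]# chkconfig iscsi on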
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server to verify the configuration is
functioning properly.
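Using the Openfiler private network portal shown in the commands that follow, the discovery should return output similar to:

[root@racnode3 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.195
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.acfs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1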
Manually log in to each of the available iSCSI targets using the iscsiadm command-line interface.
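Using the same target names and portal as the automatic-startup commands below:

[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --login
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --login
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --login
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --login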
Ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted).
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
Create persistent local SCSI device names for each of the iSCSI target names using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it
maps to, helps to differentiate between the volumes when configuring ASM. Although this is not a strict requirement since ASMLib 2.0 will be used for all volumes, it
provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.
Start by creating the following rules file /etc/udev/rules.d/55-openiscsi.rules on the new Oracle RAC node.
# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"
Next, create a directory where udev scripts can be stored and then create the UNIX SHELL script that will be called when this event is received.
#!/bin/sh
# FILE: /etc/udev/scripts/iscsidev.sh
BUS=${1}
HOST=${BUS%%:*}
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
echo "${target_name##*.}"
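Make the script executable and restart the iscsi service so udev (re)creates the symbolic links; this is a sketch, so verify before restarting services on a host with other iSCSI sessions:

[root@racnode3 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
[root@racnode3 ~]# service iscsi stop
[root@racnode3 ~]# service iscsi start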
Let's see if our hard work paid off. Verify that the new Oracle RAC node is able to see each of the volumes that contain the partition (i.e. part1).
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Apr 30 10:30 part -> ../../sdc
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sdc1
/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Apr 30 10:30 part -> ../../sde
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sde1
/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Apr 30 10:30 part -> ../../sdd
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sdd1
Copy the ASMLib 2.0 libraries and the kernel driver from one of the existing Oracle RAC nodes. Install the ASMLib software on the new Oracle RAC node.
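For example (package versions are illustrative; use the same versions installed on the existing nodes, with the kernel driver matching the running kernel):

[root@racnode3 ~]# rpm -Uvh oracleasm-support-2.1.*.rpm \
                            oracleasm-`uname -r`-*.rpm \
                            oracleasmlib-2.0.*.rpm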
Enter the following command to run the oracleasm initialization script with the configure option.
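On CentOS 5 the initialization script is typically /etc/init.d/oracleasm; when prompted, answer with the same driver owner and group used on the existing nodes (grid and asmadmin in this guide):

[root@racnode3 ~]# /etc/init.d/oracleasm configure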
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Enter the following command to load the oracleasm kernel module.
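A minimal sketch, assuming the /usr/sbin/oracleasm command-line front end provided by oracleasm-support:

[root@racnode3 ~]# /usr/sbin/oracleasm init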
To make the volumes available on the new Oracle RAC node enter the following command.
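With ASMLib, this is the scandisks operation:

[root@racnode3 ~]# /usr/sbin/oracleasm scandisks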
Verify that the new Oracle RAC node has identified the disks that are marked as Automatic Storage Management disks.
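For example, listdisks should report the disks stamped when the cluster was first built. Names other than CRSVOL1, which appears in the CVU output later in this guide, are illustrative:

[root@racnode3 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
DOCSVOL1
FRAVOL1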
Install the operating system package cvuqdisk on the new Oracle RAC node. Without cvuqdisk, CVU cannot discover shared disks and you will receive the error message
"Package cvuqdisk not installed" when the CVU is run. Copy the cvuqdisk RPM from one of the existing Oracle RAC nodes.
Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which for this article is oinstall and then install the cvuqdisk package.
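For example (the cvuqdisk version is illustrative; use the RPM copied from the existing node):

[root@racnode3 ~]# export CVUQDISK_GRP=oinstall
[root@racnode3 ~]# rpm -iv cvuqdisk-1.0.9-1.rpm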
From one of the active nodes in the existing cluster, log in as the Oracle Grid Infrastructure owner and run the CVU post-hardware installation stage check to ensure that
racnode3 (the Oracle RAC node to be added) is ready from the perspective of the hardware and operating system.
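A sketch of the post-hardware stage check:

[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy stage -post hwos -n racnode3 -verbose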
As the Oracle Grid Infrastructure owner, run the CVU again, this time to determine the readiness of the new Oracle RAC node. Use the comp peer option to obtain a detailed
comparison of the properties of a reference node that is part of your existing cluster environment with the node to be added in order to determine any conflicts or
compatibility issues with required Linux packages, kernel settings, and so on.
Specify the reference node (racnode1 in this example) against which you want CVU to compare the node to be added (the node(s) specified after the -n option). Also
provide the name of the Oracle inventory O/S group as well as the name of the OSDBA O/S group.
[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy comp peer -refnode racnode1 -n racnode3 -orainv oinstall -osdba dba -verbose
This is because the report simply looks for mismatches between the properties of the nodes being compared. Certain properties will undoubtedly differ; for
example, the amount of available memory, the amount of free disk space for the Grid_home, and the free space in /tmp will rarely match exactly. Such mismatches
can be safely ignored. Differences with kernel settings and required Linux packages, however, should be addressed before extending the cluster.
Finally, use the CVU as the Oracle Grid Infrastructure owner one last time to determine the integrity of the cluster and whether it is ready for the new Oracle RAC node to be
added. If the verification fails, the CVU will create fixup scripts (when the -fixup option is specified) with instructions to fix the cluster or node.
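A sketch of the pre-nodeadd stage check:

[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n racnode3 -fixup -verbose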
When the shared storage is Oracle ASM and ASMLib is being used, there are cases where you may receive the following error from CVU:
ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:CRSVOL1(ORCL:CRSVOL1)" failed on the following nodes:
As documented in Oracle BUG #10310848, this error can be safely ignored. The error was a result of having the voting disks stored in Oracle ASM which is a new feature of
Oracle 11g Release 2.
There are several applications used when extending the Oracle RAC that have a Graphical User Interface (GUI) and require the use of an X11 display server.
The most notable of these GUI applications, better known as X applications, is the Database Configuration Assistant (DBCA). If you are not logged directly on to the
graphical console of a node but are instead using a remote client like SSH, PuTTY, or Telnet to connect to the node, any X application will require an X11 display server
installed on the client. For example, if you are making a remote terminal connection to racnode1 from a Windows workstation, you would need to install an X11 display
server on that Windows client (Xming, for example). If you intend to run any of the Oracle GUI applications from a Windows workstation or other system with an X11 display
server installed, then perform the following actions.
1. Start the X11 display server software on the client workstation.
2. Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.
3. From the client workstation, SSH or Telnet to the server from which you want to run the GUI applications.
Figure 5: Test X11 Display Server on Windows; Run xterm from Node 1 (racnode1)
The addNode.sh script will not complete unless all of the prerequisite checks for Grid
Infrastructure are successful. For example, users who receive the PRVF-5449 message
(or any other error message) from the CVU will need to set the environment variable
IGNORE_PREADDNODE_CHECKS=Y before running addNode.sh in order to bypass the node
addition pre-check; otherwise, the silent node addition will fail without showing any errors
to the console.
Navigate to the Grid_home/oui/bin directory on one of the existing nodes in the cluster and run the addNode.sh script using the following syntax, where racnode3 is the
name of the node that you are adding and racnode3-vip is the VIP name for the node.
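Using the documented silent syntax (set IGNORE_PREADDNODE_CHECKS=Y first only if you need to bypass the pre-check, as described above):

[grid@racnode1 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racnode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}"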
[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
If the command is successful, you should see a prompt similar to the following:
...
The following configuration scripts need to be executed as the "root" user in each new cluster node.
Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes racnode3
/u01/app/11.2.0/grid/root.sh #On nodes racnode3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
Run the orainstRoot.sh and root.sh commands on the new Oracle RAC node. The root.sh script performs the work of configuring Grid Infrastructure on the new node
and includes adding High Availability Services to the /etc/inittab so that CRS starts up when the machine starts. When root.sh completes, all services for Oracle Grid
Infrastructure will be running.
It is best practice to run the CVU from one of the initial nodes in the Oracle RAC one last time to verify the cluster is integrated and that the new node has been successfully
added to the cluster at the network, shared storage, and clusterware levels.
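A sketch of the post-nodeadd stage check:

[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy stage -post nodeadd -n racnode3 -verbose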
After extending Oracle Grid Infrastructure, run the following tests as the grid user to verify that the installation and configuration were successful on the new Oracle RAC
node. If successful, the Oracle Clusterware daemons, the TNS listener, the ASM instance, and so on will have been started by the root.sh script.
[grid@racnode3 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER
If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle
ASM installation is running.
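For example, srvctl reports the ASM instances cluster-wide with output similar to:

[grid@racnode3 ~]$ srvctl status asm
ASM is running on racnode1,racnode2,racnode3

Next, extend the Oracle Database software to the new node. As the oracle user on racnode1, run addNode.sh from the existing Oracle Database home; a sketch of the documented silent syntax (the database home takes no VIP argument):

[oracle@racnode1 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@racnode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"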
[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
If the command is successful, you should see a prompt similar to the following:
...
The following configuration scripts need to be executed as the "root" user in each new cluster node.
Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes racnode3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
Run the root.sh command on the new Oracle RAC node as directed:
Change Group Ownership of 'oracle' Binary when using Job Role Separation
If your Oracle RAC is configured using Job Role Separation, the $ORACLE_HOME/bin/oracle binary may not have the proper group ownership after extending the Oracle
Database software on the new node. This will prevent the Oracle Database software owner (oracle) from accessing the ASMLib driver or ASM disks on the new node, as
stated in My Oracle Support [ID 1084186.1] and [ID 1054033.1]. For example, after extending the Oracle Database software, the oracle binary on the new node is owned
by the oinstall group instead of asmadmin.
The group ownership for the $ORACLE_HOME/bin/oracle binary on the new node should be set to the value of the ASM Administrators Group (OSASM) which in this guide is
asmadmin. When using ASMLib, you can always determine the OSASM group using the oracleasm configure command.
As the grid user, run the setasmgidwrap command to set the $ORACLE_HOME/bin/oracle binary to the proper group ownership.
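A sketch, following the procedure in My Oracle Support [ID 1084186.1] (verify the exact invocation against the note):

[grid@racnode1 ~]$ /u01/app/11.2.0/grid/bin/setasmgidwrap o=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle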
This section describes both methods that can be used to add a new Oracle instance to an existing cluster database — DBCA or SRVCTL.
To use the GUI method, log in to one of the active nodes in the existing Oracle RAC as the Oracle owner (oracle) and execute the DBCA.
Instance Management: Select Add an Instance.

List of Cluster Databases: From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Enter the user name and password for a database user that has SYSDBA privileges.

List of Cluster Database Instances: Review the existing instances for the cluster database and click Next to add a new instance.

Instance Naming and Node Selection: On the Adding an Instance page, enter the instance name in the field at the top of the page if the instance name that DBCA provides does not match your existing instance naming scheme. Then select the target node name from the list and click Next.

Instance Storage: Expand the Tablespaces, Datafiles, and Redo Log Groups nodes to verify that a new UNDO tablespace and Redo Log Groups for a new thread are being created for the new instance, then click Finish.

Summary: Review the information on the Summary dialog and click OK.

Progress: The DBCA displays a progress dialog while it performs the instance addition operation.

End of Add Instance: When DBCA completes the instance addition operation, it displays a dialog asking whether you want to perform another operation. Click No to exit from the DBCA.
To use the command-line method (SRVCTL), start by logging in to the new Oracle RAC node as the Oracle owner (oracle) and create the Oracle dependencies such as the
password file, init.ora, oratab, and admin directories for the new instance.
From one of the active nodes in the existing Oracle RAC, log in as the Oracle owner (oracle) and issue the following commands to create the needed public log thread,
undo tablespace, and instance parameter entries for the new instance.
SQL> alter database add logfile thread 3 group 5 ('+FRA','+RACDB_DATA') size 50M, group 6 ('+FRA','+RACDB_DATA') size 50M;
Database altered.
Database altered.
SQL> create undo tablespace undotbs3 datafile '+RACDB_DATA' size 500M autoextend on next 100m maxsize 8g;
Tablespace created.
System altered.
System altered.
System altered.
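Taken together, the confirmations above typically correspond to a sequence along these lines; the enable-thread and alter system statements are reconstructed from context, so verify the values against your environment:

SQL> alter database add logfile thread 3
       group 5 ('+FRA','+RACDB_DATA') size 50M,
       group 6 ('+FRA','+RACDB_DATA') size 50M;
SQL> alter database enable public thread 3;
SQL> create undo tablespace undotbs3 datafile '+RACDB_DATA' size 500M autoextend on next 100m maxsize 8g;
SQL> alter system set undo_tablespace='UNDOTBS3' scope=spfile sid='racdb3';
SQL> alter system set instance_number=3 scope=spfile sid='racdb3';
SQL> alter system set thread=3 scope=spfile sid='racdb3';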
Update the Oracle Cluster Registry (OCR) with the new instance being added to the cluster database as well as changes to any existing service(s). Specifically, add racdb3
to the racdb cluster database and verify the results.
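For example:

[oracle@racnode1 ~]$ srvctl add instance -d racdb -i racdb3 -n racnode3
[oracle@racnode1 ~]$ srvctl config database -d racdb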
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2,racdb3
Disk Groups: RACDB_DATA,FRA
Mount point paths:
Services: racdbsvc.idevelopment.info
Type: RAC
Database is administrator managed
With all of the prerequisites satisfied and OCR updated, start the racdb3 instance on the new Oracle RAC node.
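For example:

[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb3
[oracle@racnode1 ~]$ srvctl status database -d racdb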
After adding the new instance to the configuration using either DBCA or SRVCTL, add the new instance to any services you may have.
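A sketch, assuming the racdbsvc.idevelopment.info service shown in the configuration above should prefer all three instances:

[oracle@racnode1 ~]$ srvctl modify service -d racdb -s racdbsvc.idevelopment.info -n -i racdb1,racdb2,racdb3
[oracle@racnode1 ~]$ srvctl start service -d racdb -s racdbsvc.idevelopment.info -i racdb3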
Configure TNSNAMES
When extending the Oracle Database software, a copy of the current $ORACLE_HOME/network/admin/tnsnames.ora file was copied to the new node which contains entries
for all of the initial instances. Update the tnsnames.ora file on each node by adding entries for the new instance.
RACDB3.IDEVELOPMENT.INFO =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = racnode3-vip.idevelopment.info)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = racdb.idevelopment.info)
(INSTANCE_NAME = racdb3)
)
)
LISTENERS_RACDB3.IDEVELOPMENT.INFO =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = racnode3-vip.idevelopment.info)(PORT = 1521))
)
LISTENERS_RACDB.IDEVELOPMENT.INFO =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip.idevelopment.info)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip.idevelopment.info)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = racnode3-vip.idevelopment.info)(PORT = 1521))
)
If you configured Oracle Enterprise Manager (Database Control), add the new instance to DB Control monitoring without recreating the repository.
After extending the cluster database by adding a new instance, use the emca utility from one of the original Oracle RAC nodes to check the current DB Control cluster
configuration.
In this example, OEM Database Control agent is running on the initial two Oracle RAC nodes. The agent for the new instance will need to be configured and started from one
of the original Oracle RAC nodes.
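A sketch of the emca invocations; emca prompts interactively for values such as the database unique name, listener port, and passwords:

[oracle@racnode1 ~]$ emca -displayConfig dbcontrol -cluster
[oracle@racnode1 ~]$ emca -addInst db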
Apr 30, 2012 11:13:20 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
**************** Current Configuration ****************
INSTANCE NODE DBCONTROL_UPLOAD_HOST
---------- ---------- ---------------------
Management Repository has been placed in secure mode wherein Enterprise Manager data will be encrypted.
***********************************************************
Enterprise Manager configuration completed successfully
FINISHED EMCA at Apr 30, 2012 11:13:20 PM
Verify the volume devices are externalized to the OS on the new node and appear dynamically as special files in the /dev/asm directory.
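For example, look for a block device whose name is derived from the dynamic volume name (docsvol1 in this guide), such as docsvol1-<nnn>:

[root@racnode3 ~]# ls -l /dev/asm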
Manually start the Oracle ASM volume driver on the new Oracle RAC node (if necessary).
[root@racnode3 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
Configure the Oracle ASM volume driver to load automatically on system startup.
For example, create an init script that calls acfsload (the file name acfsload is illustrative):

[root@racnode3 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF
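Then make the script executable and register it with chkconfig (the runlevel header above drives the start/stop priorities):

[root@racnode3 ~]# chmod 755 /etc/init.d/acfsload
[root@racnode3 ~]# chkconfig --add acfsload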
Copy the Oracle ACFS executables to /sbin and set the appropriate permissions. The Oracle ACFS executables are located in the
GRID_HOME/install/usm/EL5/<ARCHITECTURE>/<KERNEL_VERSION>/<FULL_KERNEL_VERSION>/bin directory or in the /u01/app/11.2.0/grid/install/usm/cmds/bin
directory (12 files) and include any file without the *.ko extension.
Modify any of the Oracle ACFS shell scripts copied to the /sbin directory (above) to include the ORACLE_HOME for Grid Infrastructure. The successful execution of these
scripts requires access to certain Oracle shared libraries found in the Grid Infrastructure Oracle home. Since many of the Oracle ACFS shell scripts will be executed
as the root user account, the ORACLE_HOME environment variable will typically not be set in the shell, which will cause the executable to fail. For example:
An easy workaround to get past this error is to set the ORACLE_HOME environment variable for the Oracle Grid Infrastructure home in the Oracle ACFS shell scripts on all
Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as shown in the following example:
#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#
ORACLE_HOME=/u01/app/11.2.0/grid
ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
ORA_CRS_HOME=$ORACLE_HOME
fi
...
Add the ORACLE_HOME environment variable for the Oracle Grid Infrastructure home as noted above to the following Oracle ACFS shell scripts on all Oracle RAC nodes:
/sbin/acfsdbg
/sbin/acfsutil
/sbin/advmutil
/sbin/fsck.acfs
/sbin/mkfs.acfs
/sbin/mount.acfs
Create a directory that will be used to mount the new Oracle ACFS.
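For example (the device name under /dev/asm is derived from the volume name and its minor-number suffix varies; verify it with ls /dev/asm before mounting):

[root@racnode3 ~]# mkdir /oradocs
[root@racnode3 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /oradocs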
Verify that the volume (and mount point) is registered in the Oracle ACFS mount registry so that Oracle Grid Infrastructure will mount and unmount volumes on startup and
shutdown.
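Query the mount registry with acfsutil; the volume and mount point registered during the original build should be listed:

[root@racnode3 ~]# /sbin/acfsutil registry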
All articles, scripts and material located at the Internet address of http://www.idevelopment.info is the copyright of Jeffrey M. Hunter and is protected under copyright laws of the United States. This document
may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.
I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of
data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.