Remove a Node from an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5)
Contents
Introduction
Remove Oracle Instance
Remove Oracle Database Software
Remove Node from Clusterware
Remove Remaining Components
About the Author
Introduction
Although not as exciting as building an Oracle RAC or adding a new node and instance to a cluster database; removing a node from a clustered environment is just
as important to understand for a DBA managing Oracle RAC. While it is true that most of the attention in a cluster database environment is focused on extending
the database tier to support increased demand, the exact opposite is just as likely to be encountered where the DBA needs to remove a node from an existing
Oracle RAC. One scenario may be a node failure or that an underutilized server in the database cluster could be better served in another business unit. In either
case, a node can be removed from the cluster while the remaining nodes continue to service ongoing requests.
This document is an extension to two of my articles: "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5) " and "Add a Node to an Existing Oracle
RAC 11g R2 Cluster on Linux - (RHEL 5) ". Contained in this new article are the steps required to remove a single node from an existing three-node Oracle RAC
11g Release 2 (11.2.0.3.0) environment on the CentOS 5 Linux platform. The node being removed is the third node I added in the second article. Although this
article was written and tested on CentOS 5 Linux, it should work unchanged with Red Hat Enterprise Linux 5 or Oracle Linux 5.
As part of removing a node from Oracle RAC, you must first remove the Oracle instance, de-install the Oracle Database software and then remove Oracle Grid
Infrastructure from the node you are deleting. In other words, you remove the software components from the node you are deleting in the reverse order that you
originally installed them. It is important that you perform each step contained in this article in the order provided.
This guide makes the following assumptions:
- The reader has already built and configured a three-node Oracle RAC using the articles "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5)"
  and "Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5)".
  Note: The current Oracle RAC has been upgraded from its base release 11.2.0.1 to 11.2.0.3 by applying the 10404530 patchset.
- The third node in the existing Oracle RAC, named racnode3 (running the racdb3 instance), will be removed from the cluster, making it a two-node cluster. It is
  assumed that the node to be removed is available.
- All shared disk storage for the existing Oracle RAC is based on iSCSI using a network storage server; namely, Openfiler Release 2.3 (Final) x86_64.
- The existing Oracle RAC does not use Grid Naming Service (GNS) to assign IP addresses.
- The existing Oracle RAC does not use shared Oracle homes for the Grid Infrastructure or Database software.
- The Oracle Grid Infrastructure and Oracle Database software were installed using the optional Job Role Separation configuration. A separate OS user owns
  each Oracle software product: "grid" for the Oracle Grid Infrastructure and "oracle" for the Oracle Database software.
- Oracle ASM is used for all Clusterware files, database files, and the Fast Recovery Area. The OCR file and the voting files are stored in an ASM disk
  group named +CRS. The ASMLib support library is configured to provide persistent paths and permissions for the storage devices used with Oracle ASM.
- All database files are configured using Oracle Managed Files (OMF) with four (4) ASM disk groups (CRS, RACDB_DATA, DOCS, FRA).
- The existing Oracle RAC is configured with Oracle ASM Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (ADVM), which provide a
  shared file system for files maintained outside of the Oracle database. The cluster file system is mounted at /oradocs on all Oracle RAC nodes and uses the
  docsvol1 ASM dynamic volume created in the DOCSDG1 ASM disk group.
- User equivalence is configured for the grid and oracle OS accounts between all nodes in the cluster so that the Oracle Grid Infrastructure and Oracle
  Database software can be securely removed from the node to be deleted (racnode3). User equivalence needs to be configured from a node that is to
  remain a member of the Oracle RAC (racnode1 in this guide) to the node being removed so that ssh can be executed without being prompted for a
  password.
Oracle Documentation
While this guide provides detailed instructions for removing a node from an existing Oracle RAC 11g system, it is by no means a substitute for the official
Oracle documentation. In addition to this guide, users should consult the relevant Oracle documents to gain a full understanding of alternative configuration
options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.
Example Configuration
The example configuration used in this guide stores all physical database files (data, online redo logs, control files, archived redo logs) on ASM in an ASM disk
group named +RACDB_DATA while the Fast Recovery Area is created in a separate ASM disk group named +FRA.
The existing three-node Oracle RAC and the network storage server are configured as described in the table below.

Node Name           Instance Name   Database Name             Processor                            RAM   Operating System
racnode1            racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode2            racdb2          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode3 [Remove]   racdb3          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
Network Configuration
Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid / /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle / /u01/app/oracle/product/11.2.0/dbhome_1
Storage Components
Storage Component File System Volume Size ASM Volume Group Name ASM Redundancy Openfiler Volume Name
The following is a conceptual look at what the environment will look like after removing the third Oracle RAC node (racnode3) from the cluster.
Figure 1: Remove racnode3 from the existing Oracle RAC 11g Release 2 System
This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines,
networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and
Openfiler 2.3 (Final Release).
If Oracle Enterprise Manager (Database Control) is configured for the existing Oracle RAC, remove the instance from the DB Control cluster configuration before
removing it from the cluster database.
Run the emca command from any node in the cluster except the node hosting the instance that is to be removed from Database Control monitoring.
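As a sketch (the exact prompts vary by release), the current configuration can be reviewed and the instance removed with emca; when prompted, supply the database unique name (racdb) and the SID of the instance being removed (racdb3):

[oracle@racnode1 ~]$ emca -displayConfig dbcontrol -cluster
[oracle@racnode1 ~]$ emca -deleteInst db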
Doc ID: 578011.1 - How to manage DB Control 11.x for RAC Database with emca
Doc ID: 394445.1 - emca -deleteInst db fails with Database Instance unavailable
Backup OCR
Backup the OCR using ocrconfig -manualbackup from a node that is to remain a member of the Oracle RAC.
Note that voting disks are automatically backed up in OCR after the changes we will be making to the cluster.
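For example, a manual OCR backup can be taken as root from racnode1 (a sketch using the Grid home path from this guide):

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual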
The instance racdb3 is hosted on node racnode3 which is part of the existing Oracle RAC and the node being removed in this guide. The racdb3 instance is in
the preferred list of the service racdbsvc.idevelopment.info.
As the Oracle software owner, run the Oracle Database Configuration Assistant (DBCA) in silent mode from a node that will remain in the cluster to remove the
racdb3 instance from the existing cluster database. The instance that's being removed by DBCA must be up and running.
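A sketch of the DBCA silent-mode syntax, using the database and instance names from this guide (the SYS password is a placeholder):

[oracle@racnode1 ~]$ dbca -silent -deleteInstance -nodeList racnode3 \
    -gdbName racdb.idevelopment.info -instanceName racdb3 \
    -sysDBAUserName sys -sysDBAPassword <sys_password>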
Verify that the racdb3 database instance was removed from the cluster database.
As seen from the output above, the racdb3 database instance was removed and only racdb1 and racdb2 remain.
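For reference, the remaining instances can be confirmed with srvctl and a quick query against gv$instance (a sketch; output omitted):

[oracle@racnode1 ~]$ srvctl status database -d racdb
[oracle@racnode1 ~]$ srvctl config database -d racdb
[oracle@racnode1 ~]$ sqlplus / as sysdba
SQL> SELECT inst_id, instance_name, status FROM gv$instance ORDER BY inst_id;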
When DBCA is used to delete the instance, it also takes care of removing any Oracle dependencies like the public redo log thread, undo tablespace, and all
instance related parameter entries for the deleted instance. This can be examined in the trace file produced by DBCA
/u01/app/oracle/cfgtoollogs/dbca/trace.log_OraDb11g_home1_<DATE>:
...
... thread SQL = SELECT THREAD# FROM V$THREAD WHERE UPPER(INSTANCE) = UPPER('racdb3')
... threadNum.length=1
... threadNum=3
... redoLog SQL =SELECT GROUP# FROM V$LOG WHERE THREAD# = 3
... redoLogGrNames length=2
... Group numbers=(5,6)
... logFileName SQL=SELECT MEMBER FROM V$LOGFILE WHERE GROUP# IN (5,6)
... logFiles length=4
... SQL= ALTER DATABASE DISABLE THREAD 3
... archive mode = false
... SQL= ALTER DATABASE DROP LOGFILE GROUP 5
... SQL= ALTER DATABASE DROP LOGFILE GROUP 6
... SQL=DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES
... sidParams.length=2
... SQL=ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID = 'racdb3'
... SQL=ALTER SYSTEM RESET instance_number SCOPE=SPFILE SID = 'racdb3'
...
Check whether the redo log thread and UNDO tablespace for the deleted instance have been removed (in my example, they were successfully removed). If not,
remove them manually.
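If manual cleanup is needed, the checks and statements would look roughly like the following sketch, run from an instance that remains in the cluster (thread 3, log groups 5 and 6, and tablespace UNDOTBS3 correspond to the trace output above); the output shown below reflects these checks and statements:

SQL> SELECT thread# FROM v$thread WHERE upper(instance) = upper('racdb3');
SQL> SELECT group# FROM v$log WHERE thread# = 3;
SQL> SELECT member FROM v$logfile WHERE group# IN (5,6);
SQL> ALTER DATABASE DISABLE THREAD 3;
SQL> ALTER DATABASE DROP LOGFILE GROUP 5;
SQL> ALTER DATABASE DROP LOGFILE GROUP 6;
SQL> DROP TABLESPACE undotbs3 INCLUDING CONTENTS AND DATAFILES;
SQL> ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID='racdb3';
SQL> ALTER SYSTEM RESET instance_number SCOPE=SPFILE SID='racdb3';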
THREAD#
----------
3
GROUP#
----------
5
6
MEMBER
--------------------------------------------------
+RACDB_DATA/racdb/onlinelog/group_5.270.781657813
+FRA/racdb/onlinelog/group_5.281.781657813
+RACDB_DATA/racdb/onlinelog/group_6.271.781657815
+FRA/racdb/onlinelog/group_6.289.781657815
Database altered.
Database altered.
Database altered.
Tablespace dropped.
System altered.
System altered.
Log in as the Oracle Database software owner when executing the tasks in this section.
If any listeners are running from the Oracle home being removed, they will need to be disabled and stopped.
Check if any listeners are running from the Oracle home to be removed.
In Oracle 11g Release 2 (11.2) the default listener runs from Grid home. Since the listener is running from the Grid home (shown above), disabling and stopping the
listener can be skipped in release 11.2.
If any listeners were explicitly created to run from the Oracle home being removed, they would need to be disabled and stopped.
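A sketch of checking for listeners running from the database home, and of disabling and stopping one if it were found (the listener name is a placeholder):

[oracle@racnode3 ~]$ ps -ef | grep tns | grep -v grep
[oracle@racnode3 ~]$ srvctl config listener -a
# only needed if a listener runs from the database home being removed:
[oracle@racnode3 ~]$ srvctl disable listener -l <listener_name> -n racnode3
[oracle@racnode3 ~]$ srvctl stop listener -l <listener_name> -n racnode3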
As the Oracle software owner, execute runInstaller from Oracle_home/oui/bin on the node being removed to update the inventory. Set "CLUSTER_NODES=
{name_of_node_to_delete}".
Checking swap space: must be greater than 500 MB. Actual 9983 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Make sure to specify the -local flag so as not to update the inventory on all nodes in the cluster.
After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will
show only the node to be deleted under the Oracle home name.
...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="racnode3"/>
</NODE_LIST>
</HOME>
...
The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the
Oracle Database software.
...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="racnode1"/>
<NODE NAME="racnode2"/>
<NODE NAME="racnode3"/>
</NODE_LIST>
</HOME>
...
Before attempting to de-install the Oracle Database software, review the /etc/oratab file on the node to be deleted and remove any entries that contain a
database instance running out of the Oracle home being de-installed. Do not remove any +ASM entries.
...
#
+ASM3:/u01/app/11.2.0/grid:N # line added by Agent
racdb3:/u01/app/oracle/product/11.2.0/dbhome_1:N # line added for DBA scripts
...
If a rogue entry exists in the /etc/oratab file that references the Oracle home being deleted, the deinstall described in the next step will fail:
ERROR: The option -local will not modify any database configuration for this Oracle home.
Following databases have instances configured on local node : 'racdb3'. Remove these
database instances using dbca before de-installing the local Oracle home.
When using a non-shared Oracle home (as is the case in this example guide), run deinstall as the Oracle Database software owner from the node to be
removed in order to delete the Oracle Database software.
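A sketch of the de-install, run as the oracle user from the node being removed:

[oracle@racnode3 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/deinstall
[oracle@racnode3 deinstall]$ ./deinstall -local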
#######################################################################
After the de-install completes, the Oracle home will have been removed from the inventory.xml file on the local node.
Make sure to specify the -local flag so as not to remove more than just the local node's
software. If -local is not specified, the deinstall applies to the entire cluster.
If this were a shared home then instead of de-installing the Oracle Database software, you
would simply detach the Oracle home from the inventory.
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Oracle software owner to update the
inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.
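A sketch of the command as run from racnode1, listing the two nodes that remain:

[oracle@racnode1 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \
    "CLUSTER_NODES={racnode1,racnode2}"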
Checking swap space: must be greater than 500 MB. Actual 9521 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Review the inventory.xml file on each remaining node in the cluster to verify the Oracle home name does not include the node being removed.
...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="racnode1"/>
<NODE NAME="racnode2"/>
</NODE_LIST>
</HOME>
...
Verify Grid_home
Most of the commands in this section will be run as root. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on
each node, where Grid_home is the location of the installed Oracle Clusterware software.
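One way to confirm the Clusterware home on a node is to check the Oracle Local Registry pointer file, which typically records the Grid home in its crs_home entry (a sketch; in this guide the Grid home is /u01/app/11.2.0/grid):

[root@racnode1 ~]# cat /etc/oracle/olr.loc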
Unpin Node
Run the following command as root to determine whether the node you want to delete is active and whether it is pinned.
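For example (a sketch), olsnodes from the Grid home reports both the active state and the pin state of each node:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/olsnodes -s -t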
If the node being removed is already unpinned then you do not need to run the crsctl unpin css command below and can proceed to the next step.
If the node being removed is pinned with a fixed node number, then run the crsctl unpin css command as root from a node that is to remain a member of the
Oracle RAC in order to unpin the node and expire the CSS lease on the node you are deleting.
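A sketch of the unpin command, run as root from racnode1:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n racnode3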
CRS-4667: Node racnode3 successfully unpinned.
If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.
An Oracle RAC node will only be pinned if it is using CTSS or used with a database
version < 11.2.
Before executing the rootcrs.pl script described in this section, you must ensure EMAGENT is not running on the node being deleted.
If you have been following along in this guide, the EMAGENT should not be running since the instance on the node being deleted was removed from OEM Database
Control Monitoring.
Next, disable the Oracle Clusterware applications and daemons running on the node to be deleted from the cluster. Run the rootcrs.pl script as root from the
Grid_home/crs/install directory on the node to be deleted.
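A sketch of the command as run on racnode3, using the Grid home path from this guide:

[root@racnode3 ~]# cd /u01/app/11.2.0/grid/crs/install
[root@racnode3 install]# ./rootcrs.pl -deconfig -force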
If you do not use the -force option in the preceding command or the node you are
deleting is not accessible for you to execute the preceding command, then the VIP
resource remains running on the node. You must manually stop and remove the VIP
resource using the following commands as root from any node that you are not deleting:
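A sketch of those commands (vip_name is a placeholder for the VIP resource name):

[root@racnode1 ~]# srvctl stop vip -i vip_name -f
[root@racnode1 ~]# srvctl remove vip -i vip_name -f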
Where vip_name is the VIP for the node to be deleted. If you specify multiple VIP names,
then separate the names with commas and surround the list in double quotation marks
("").
For example,
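assuming the VIP resource for racnode3 is named racnode3-vip (an assumed name; the actual resource name can be confirmed with "srvctl config vip -n racnode3"):

[root@racnode1 ~]# srvctl stop vip -i racnode3-vip -f
[root@racnode1 ~]# srvctl remove vip -i racnode3-vip -f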
If you are deleting all nodes from a cluster, then append the -lastnode option to the
preceding command to clear OCR and the voting disks, as follows:
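A sketch (the same rootcrs.pl invocation as above with -lastnode appended):

[root@racnode3 install]# ./rootcrs.pl -deconfig -force -lastnode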
Only use the -lastnode option if you are deleting all cluster nodes because that option
causes the rootcrs.pl script to clear OCR and the voting disks of data.
From a node that is to remain a member of the Oracle RAC, run the following command from the Grid_home/bin directory as root to update the Clusterware
configuration to delete the node from the cluster.
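A sketch of the command as run from racnode1:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n racnode3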
As the Oracle Grid Infrastructure owner, execute runInstaller from Grid_home/oui/bin on the node being removed to update the inventory. Set
"CLUSTER_NODES={name_of_node_to_delete}". Note that this step is missing in the official Oracle Documentation (Oracle Clusterware Administration and
Deployment Guide 11g Release 2 (11.2) E10717-11 April 2010).
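A sketch of the command as run from racnode3 against the Grid home (CRS=TRUE marks the home as the Clusterware home):

[grid@racnode3 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid \
    "CLUSTER_NODES={racnode3}" CRS=TRUE -local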
Checking swap space: must be greater than 500 MB. Actual 9983 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Make sure to specify the -local flag so as not to update the inventory on all nodes in the cluster.
After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will
show only the node to be deleted under the Grid home name.
...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="racnode3"/>
</NODE_LIST>
</HOME>
...
The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the
Oracle Grid Infrastructure software.
...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="racnode1"/>
<NODE NAME="racnode2"/>
<NODE NAME="racnode3"/>
</NODE_LIST>
</HOME>
...
When using a non-shared Grid home (as is the case in this example guide), run deinstall as the Grid Infrastructure software owner from the node to be removed
in order to delete the Oracle Grid Infrastructure software.
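A sketch of the de-install, run as the grid user from the node being removed:

[grid@racnode3 ~]$ cd /u01/app/11.2.0/grid/deinstall
[grid@racnode3 deinstall]$ ./deinstall -local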
The following information can be collected by running "/sbin/ifconfig -a" on node "racnode3"
Enter the IP netmask of Virtual IP "192.168.1.253" on node "racnode3"[255.255.255.0]
>
[ENTER]
Enter the network interface name on which the virtual IP address "192.168.1.253" is active
>
[ENTER]
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured
[LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes.
Execute the command on the local node after the execution completes on all the
remote nodes.
Run the following command as the root user or the administrator on node "racnode3".
/tmp/deinstall2012-05-07_01-21-53PM/perl/bin/perl \
-I/tmp/deinstall2012-05-07_01-21-53PM/perl/lib \
-I/tmp/deinstall2012-05-07_01-21-53PM/crs/install \
/tmp/deinstall2012-05-07_01-21-53PM/crs/install/rootcrs.pl \
-force \
-deconfig \
-paramfile "/tmp/deinstall2012-05-07_01-21-53PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
<----------------------------------------
Run the above command as root on the specified node(s) from a different shell:
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'racnode3' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'racnode3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.
Make sure to specify the -local flag so as not to remove more than just the local node's
software. If -local is not specified, the deinstall applies to the entire cluster.
If this were a shared home then instead of de-installing the Grid Infrastructure software,
you would simply detach the Grid home from the inventory.
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Grid Infrastructure software owner to update
the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.
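A sketch of the command as run from racnode1 against the Grid home, listing the two nodes that remain:

[grid@racnode1 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid \
    "CLUSTER_NODES={racnode1,racnode2}" CRS=TRUE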
Checking swap space: must be greater than 500 MB. Actual 9559 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Review the inventory.xml file on each remaining node in the cluster to verify the Grid home name does not include the node being removed.
...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="racnode1"/>
<NODE NAME="racnode2"/>
</NODE_LIST>
</HOME>
...
Run the following CVU command to verify that the specified node has been successfully deleted from the cluster.
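A sketch of the post-deletion check, run as the grid user from a remaining node:

[grid@racnode1 ~]$ cluvfy stage -post nodedel -n racnode3 -verbose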
At this point, racnode3 is no longer a member of the cluster. However, if an OCR dump is taken from one of the remaining nodes, information about the deleted
node is still contained in the OCRDUMPFILE.
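For example, the OCR contents can be dumped to a text file as root from racnode1 (a sketch; the output file name is illustrative):

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrdump /tmp/ocrdump_racdb.txt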
[SYSTEM.crs.e2eport.racnode3]
ORATEXT : (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.3.153)(PORT=50989))
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION :
PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}
...
[SYSTEM.OCR.BACKUP.2.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION :
PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
...
[SYSTEM.OCR.BACKUP.DAY.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION :
PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
...
[SYSTEM.OCR.BACKUP.DAY_.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : root, GROUP_NAME : root}
...
[SYSTEM.OCR.BACKUP.WEEK_.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : root, GROUP_NAME : root}
...
[DATABASE.ASM.racnode3]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
[DATABASE.ASM.racnode3.+asm3]
ORATEXT : +ASM3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
[DATABASE.ASM.racnode3.+asm3.ORACLE_HOME]
ORATEXT : /u01/app/11.2.0/grid
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
[DATABASE.ASM.racnode3.+asm3.ENABLED]
ORATEXT : true
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
[DATABASE.ASM.racnode3.+asm3.VERSION]
ORATEXT : 11.2.0.3.0
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION :
PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
This does not mean that the node wasn't removed properly. It is still possible to add the node again with the same hostname, IP address, VIP, etc. anytime in the
future.
Remove ASMLib
Remove the ASMLib kernel driver, supporting software, and associated directories from racnode3.
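A sketch of the removal; the kernel driver package name is typically oracleasm-<kernel version>, so adjust the rpm names to match the packages actually installed on racnode3:

[root@racnode3 ~]# /etc/init.d/oracleasm stop
[root@racnode3 ~]# rpm -qa | grep oracleasm
[root@racnode3 ~]# rpm -e oracleasmlib oracleasm-support oracleasm-`uname -r`
[root@racnode3 ~]# rm -f /etc/sysconfig/oracleasm*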
Modify the iSCSI initiator service on racnode3 so it will not automatically start and therefore will not attempt to discover iSCSI volumes from the Openfiler server.
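Before disabling the service, log out of the Openfiler iSCSI sessions on racnode3 (a sketch; the target names and portal are the same ones used later in this section):

[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --logout
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --logout
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --logout
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --logout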
Verify we are logged out of the iSCSI target by looking at the /dev/disk/by-path directory. If no other iSCSI targets exist on the client node, then after logging out
from the iSCSI target, the mappings for all targets should be gone and the following command should not find any files or directories:
[root@racnode3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ls: *openfiler*: No such file or directory
Update the record entry on the client node to disable automatic logins to the iSCSI targets.
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --op update -n node.startup -v manual
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v manual
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v manual
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v manual
If the iSCSI targets being removed are the only remaining targets and you don't plan on adding any further iSCSI targets in the future, then it is safe to remove the
iSCSI rules file and its call-out script.
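If so, a sketch of removing them; the file names shown here are the ones used in the companion build article and are assumptions, so confirm the actual names on racnode3 before deleting:

[root@racnode3 ~]# rm -f /etc/udev/rules.d/55-openiscsi.rules
[root@racnode3 ~]# rm -f /etc/udev/scripts/iscsidev.sh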
If the iSCSI targets being removed are the only remaining targets and you don't plan on adding any further iSCSI targets in the future, then it is safe to disable the
iSCSI Initiator Service.
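A sketch of stopping the service and disabling it from starting at boot on racnode3:

[root@racnode3 ~]# service iscsi stop
[root@racnode3 ~]# chkconfig iscsi off
[root@racnode3 ~]# chkconfig iscsid off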
Network access to racnode3 will need to be revoked from Openfiler so that it cannot access the iSCSI volumes through the storage (private) network.
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:
https://openfiler1.idevelopment.info:446/
Username: openfiler
Password: password
From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Under the "Target Configuration" sub-tab, use the pull-down menu to select
one of the current RAC iSCSI targets in the section "Select iSCSI Target" and then click the [Change] button.
Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the currently selected iSCSI target, change the "Access" for the Oracle
RAC node being removed from 'Allow' to 'Deny' and click the [Update] button. This needs to be performed for all of the RAC iSCSI targets.
Figure 4: Update Network ACL for the Deleted Oracle RAC Node
Navigate to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to setup networks and/or
hosts that will be allowed to access resources exported by the Openfiler appliance. Remove racnode3 from the network access configuration.
Figure 5: Revoke Openfiler Network Access for the Deleted Oracle RAC Node
Finally, remove the Grid and Oracle user accounts and all associated UNIX groups from racnode3.
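A sketch of removing the accounts and groups on racnode3; the group names match the Job Role Separation configuration described earlier in this guide:

[root@racnode3 ~]# userdel -r oracle
[root@racnode3 ~]# userdel -r grid
[root@racnode3 ~]# groupdel dba
[root@racnode3 ~]# groupdel oper
[root@racnode3 ~]# groupdel asmadmin
[root@racnode3 ~]# groupdel asmdba
[root@racnode3 ~]# groupdel asmoper
[root@racnode3 ~]# groupdel oinstall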
All articles, scripts and material located at the Internet address of http://www.idevelopment.info is the copyright of Jeffrey M. Hunter and is protected under copyright laws of the United States. This
document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at
jhunter@idevelopment.info.
I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or
destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.
Last modified on Thursday, 14-Jun-2012 23:36:50 EDT