
Table of Contents

1. Introduction
   1.1. What you need to know
      1.1.1. Software required for install
      1.1.2. Processor Model
   1.2. Installation steps
   1.3. Schematic
      1.3.1. Hardware/software configuration BEFORE Oracle software install
      1.3.2. Hardware/software configuration AFTER Oracle software install
   1.4. Installation Method
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
   2.2. Check AIX Filesets
   2.3. Check For Required APARS
   2.4. Check Shell Limits
   2.5. Check System and Network Parameters
   2.6. Check Network Interfaces
   2.7. Check RSH or SSH Configuration
   2.8. Create Symbolic Link for VIP Configuration Assistant
   2.9. Set Environment Variables
   2.10. Run rootpre.sh
4. Oracle Clusterware Installation and Configuration
   4.1. Oracle Clusterware Install
   4.2. Verifying Oracle Clusterware Installation
5. Oracle Clusterware patching
   5.1. Patch Oracle Clusterware to 10.2.0.3
10. Oracle RAC Database Home Software Install
   10.1. Install RAC 10.2.0.1
11. Oracle RAC Software Home Patching
   11.1. Patch Oracle RAC to 10.2.0.3
12. Oracle RAC Database Creation
   12.1. Oracle RAC Listeners Configuration
   12.2. Oracle RAC Database Configuration
      12.2.1. Oracle RAC Database Instance has been created

Rac10gR2OnAIX

1. Introduction
1.1. What you need to know
This document details the installation procedure for Oracle Real Application Clusters (RAC) 10g Release 2 (10.2.0.3) on IBM AIX 5.3, using the General Parallel File System (GPFS) 2.3. The end architecture consists of the following components:

Real Application Clusters (2 nodes), 10.2.0.3
General Parallel Filesystem, 2.3.0.29
AIX, 5.3 ML06

GPFS is IBM's shared (cluster) filesystem and stands for General Parallel Filesystem. It will be used to store the Oracle Clusterware files (OCR and voting disk) and the Oracle database files. At the time of this deployment the certified version of GPFS was 2.3; however, GPFS 3.1 has since been certified for 10.2.0.3 and above. Before commencing your install, check which technology components are certified at http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html

1.1.1. Software required for install


The following is a list of software required for this installation:

AIX 64-Bit kernel (required for 10gR2 RAC)
General Parallel File System (GPFS) v2.3.0.3
AIX 5L, Version 5.3
AIX 64-Bit Base Oracle 10gR2 DVD
AIX 64-Bit Base Oracle 10g Release 2 (10.2.0.3) Patchset

1.1.2. Processor Model


This document covers the install on a 64-Bit kernel for a P-Series server.
Do note that all 10gR2 installations require a 64-Bit kernel.

1.2. Installation steps


An overview of the steps required to achieve the architecture discussed in section 1.1 is as follows:

Preparation
   Complete the prerequisites to make sure the cluster is set up correctly.
   Stage all the software on one node, typically Node1.
Establish Oracle Clusterware
   Install the Oracle Clusterware (using the push mechanism to install on the other nodes in the cluster).
   Patch the Clusterware to the latest patchset.
Establish RAC Database
   Install an Oracle software home for the RAC database.
   Patch the RAC database home to the latest patchset.
   Create the RAC database instances.

1.3. Schematic
1.3.1. Hardware/software configuration BEFORE Oracle software install
1.3.2. Hardware/software configuration AFTER Oracle software install

1.4. Installation Method


This document details the installation of Real Application Clusters 10g Release 2 on two nodes:

dbpremia1
dbpremia2

The table below provides more information on the end result of the installation:

O/S User   Product              ORACLE_HOME                  Instances   Host
oracle     Oracle Clusterware   /oracrs/product/oracle/crs   n/a         dbpremia1
oracle     Oracle RDBMS         /oracle/product/oracle/db    dbpremia1   dbpremia1
oracle     Oracle Clusterware   /oracrs/product/oracle/crs   n/a         dbpremia2
oracle     Oracle RDBMS         /oracle/product/oracle/db    dbpremia2   dbpremia2

In addition, the volume /dbpremia was formatted with GPFS and will be used to store the following files:

Voting disks
Oracle Cluster Registry (OCR)
Oracle RAC database files

2. Prepare the cluster nodes for Oracle RAC


Prior to commencing the installation, the following tasks must be completed to ensure a smooth install process:

1. Create user accounts
2. Configure networking
3. Complete O/S-specific checks/tasks
4. Stage the Oracle software for installation

2.1. User Accounts


Ensure the user 'oracle' and the group 'dba' (assigned as the primary group) exist on all nodes in the cluster, with the same user and group IDs on all nodes. Also ensure that the UID for the oracle user is less than 65536. Then run the 'smit' command to create the group and user as follows:


# smit group
# smit user
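As a non-interactive alternative to smit, the group and user can also be created with mkgroup and mkuser; the commands below are only a sketch, and the numeric ID 500 is a hypothetical value chosen for illustration - use whatever IDs are free and identical on all nodes:

# mkgroup id=500 dba
# mkuser id=500 pgrp=dba groups=dba home=/home/oracle oracle
# passwd oracle

To confirm the IDs match, run the following on every node and compare the output:

# id oracle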

2.2. Check AIX Filesets


Use the following command to determine whether the required filesets listed below are installed. If any are found to be missing, the appropriate action must be taken to install them.

#lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools rsct.basic.rte

Since HACMP will not be used for this installation, use the following commands to ensure that it is not installed - if it is, the necessary action must be taken to remove it before proceeding.
#lslpp -l cluster.es.*
#lslpp -l rsct.hacmp.rte
#lslpp -l rsct.compat.basic.hacmp.rte
#lslpp -l rsct.compat.clients.hacmp.rte

2.3. Check For Required APARS


Use the following command to check whether the required APARs have been applied:
#/usr/sbin/instfix -i -k "IYnumber"

Substitute "number" for the numeric value of the APAR.

2.4. Check Shell Limits


Using either the 'smit' utility or by editing the /etc/security/limits file directly, verify that unlimited values have been specified for both the oracle and root users. The requirement for root exists because the CRS daemon runs as root. Add the following lines to the limits file:
rss = -1
data = -1
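As a sketch, a stanza for the oracle user in /etc/security/limits might look like the following (a value of -1 means unlimited); the exact set of attributes to raise should be taken from the 10g RAC installation guide for AIX, and the root stanza is adjusted in the same way:

oracle:
        fsize = -1
        data = -1
        rss = -1
        stack = -1
        cpu = -1
        core = -1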

2.5. Check System and Network Parameters


Use the following command to verify the system parameters are set correctly:
#smit chgsys

The maximum number of processes for the oracle user should be 2048 or greater. Use the following command to verify the network parameters; match these against those documented in the 10g RAC installation guide for AIX.
#/usr/sbin/no -a | more
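As a sketch, the network options most often checked for RAC on AIX can be filtered out of the no output as shown below; the required values themselves should be taken from the installation guide rather than from this example:

#/usr/sbin/no -a | egrep "udp_sendspace|udp_recvspace|tcp_sendspace|tcp_recvspace|ipqmaxlen|sb_max"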

2.6. Check Network Interfaces


Ensure that the network interfaces that will be used for both public and private traffic have the same interface names on all nodes in the cluster:
#ifconfig -a

The environment that this document is based on had the following configuration:

Network Interface   Purpose                  Comments
en1                 Oracle Private Network   Used by Oracle cache fusion and CSS heartbeat
en0                 Public Network           Used for client connections
en2                 GPFS Private Network     Used for the GPFS internetwork

Public Host   Public IP       Virtual Host    Virtual IP       Private Host     Private IP
dbpremia1     192.168.38.82   dbpremia1-vip   192.168.38.182   dbpremia1_priv   10.0.0.82
dbpremia2     192.168.38.83   dbpremia2-vip   192.168.38.183   dbpremia2_priv   10.0.0.83
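Based on the addresses above, the corresponding /etc/hosts entries on each node would look roughly like the sketch below (the loopback line and any site-specific entries are omitted); every node should resolve all of these names identically:

192.168.38.82    dbpremia1
192.168.38.83    dbpremia2
192.168.38.182   dbpremia1-vip
192.168.38.183   dbpremia2-vip
10.0.0.82        dbpremia1_priv
10.0.0.83        dbpremia2_priv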

2.7. Check RSH or SSH Configuration


SSH was not implemented in this particular configuration, so only checks related to rsh were performed. The files that need to be modified to achieve this are:

/etc/hosts.equiv on each node
.rhosts (found in $HOME; if not present, create it) for each user

Below is a sample entry of each file for a particular node. This should be replicated to all nodes, and to the .rhosts file of the root and oracle operating system accounts.
dbpremia1 root
dbpremia2 root
dbpremia1 oracle
dbpremia2 oracle


From each node the following command was executed:


# rsh hostname date

The value for hostname was both the public and private alias of each node. This command should succeed for each alias without prompting for a password. You should also be able to rsh to the node itself using both the public and private aliases. In addition, enter the following lines into the oracle user's .profile file:
if [ -t 0 ]; then
   stty intr ^C
fi
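A quick way to exercise all of the combinations described above is a small loop such as the one below, run as both oracle and root from each node; it is only a sketch and assumes the host aliases shown earlier in this guide:

for h in dbpremia1 dbpremia2 dbpremia1_priv dbpremia2_priv
do
   echo "checking $h"
   rsh $h date
done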

2.8. Create Symbolic Link for VIP Configuration Assistant


The following symbolic link needs to be created for the VIP configuration assistant:
# ln -s /usr/sbin/lsattr /etc/lsattr

2.9. Set Environment Variables


Add the following lines to the .profile file of the oracle user; these are required as part of the installation process:
export AIXTHREAD_SCOPE=S
umask 022


2.10. Run rootpre.sh


The rootpre.sh script must be run on all nodes that will have Oracle products installed on them. This script is located on the 10g installation media. The following is a sample run of the script:

# ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_07-10-08.15:28:19
Saving the original files in /etc/ora_save_07-10-08.15:28:19....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc
Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation
Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x4485900
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x4485900
The kernel extension was successfuly loaded.
Configuring Asynchronous I/O....
Configuring POSIX Asynchronous I/O....

Checking if group services should be configured....
Nothing to configure.



4. Oracle Clusterware Installation and Configuration



4.1. Oracle Clusterware Install


The first step in configuring RAC is the deployment of Oracle Clusterware; this is a mandatory requirement, and without this component Oracle 10g RAC cannot be installed. This section details how to install this component and verify the installation. Invoke the Oracle Universal Installer from the second DVD of the 10g media pack, as shown below.


Notes
   You will be asked whether you have run the rootpre.sh script.
Actions
   This was already run as a prerequisite step in the previous chapter. Enter Y to proceed.

You will then be presented with a screen for specifying the details of the Oracle inventory. Specify the path and the operating system group as seen in figure 2.


Notes
   Specify the location where the Oracle inventory will be created.
   Specify the operating system group name that will have permissions to modify entries in the inventory.
Actions
   The location of the Oracle inventory in our example is /oracrs/oraInventory.
   The group name specified is dba - you may choose to specify oinstall (if it exists) if you require separate groups to administer the installation and the database.

Next, specify the ORACLE_HOME details for the Oracle Clusterware installation, as seen below.

Notes
   Specify a location where the clusterware will be installed. This will be the ORACLE_HOME for CRS.
   Specify a name which will act as a reference for this installation.
Actions
   In our example the ORACLE_HOME is /oracrs/product/oracle/crs.
   In our example the name for this ORACLE_HOME is oracrshome.

Once this is done, the product-specific checks screen appears. It may fail for a few filesets in the O/S check; this can safely be ignored, as the OUI checks for all component dependencies such as HACMP and other filesets required for ASM, which are not needed for this particular installation. Next you will be presented with the cluster configuration screen, where details about the nodes participating in the cluster have to be specified; if any nodes do not appear here, add them as necessary.


Notes
   Information for only one node (dbpremia1) is displayed (this is the node from which runInstaller was invoked).
   We need to add information relevant to all nodes in our cluster.
Actions
   Click the 'Add' button to enter details for the node dbpremia2.

Notes
   Clicking the Add button results in the Add Node dialog box appearing.
Actions
   Enter details pertaining to the public, private and virtual node names and IP addresses for the additional node, dbpremia2. Once all nodes have been entered, click OK to proceed.

Once all the required information has been specified in the Specify Cluster Configuration step, the figure above shows how this looks. We now have two nodes as part of our cluster configuration. Next is the Specify Network Interface Usage screen, shown below. Here we will be using the en0 and en1 interfaces for the public and private interfaces.

Since the Specify Network Interface Usage screen shows an additional network interface, en2 (the GPFS private interface), which we do not require, we click Edit and select not to use this interface. The figure below shows the result - notice how the 'Interface Type' column now shows 'Do Not Use'.


Notes
   The next step is to specify the location for the Oracle Cluster Registry (OCR).
Actions
   Here we enter /premia/crsdata/ocrfile/ocr_file.dbf.
   Since RAID 5 has been implemented, we select external redundancy for this file.

In this particular scenario, the location for the OCR and voting disk (next step) is the same as for the database files. If possible, consider using separate volumes for the clusterware files and the Oracle database files. Similar to the previous screen, the screen below depicts the location and file name specified for the clusterware voting disk.


Notes
   Specify a location for the voting disk.
Actions
   The location specified here was /premia/crsdata/votefile/vote_file.dbf.
   This too has external redundancy specified.

Finally, a screen displaying the summary of the tasks the OUI will perform is displayed, after which the installation commences.


Notes The installation progress for the CRS install.

Once the installation reaches the end, a pop-up dialog box appears which requires the following scripts to be run on each node in the RAC configuration:

orainstRoot.sh
root.sh

Below is the output of root.sh on dbpremia1:

# ./root.sh
WARNING: directory '/oracrs/product/oracle' is not owned by root
WARNING: directory '/oracrs/product' is not owned by root
WARNING: directory '/oracrs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.


Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracrs/product/oracle' is not owned by root
WARNING: directory '/oracrs/product' is not owned by root
WARNING: directory '/oracrs' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: dbpremia1 dbpremia1_priv dbpremia1
node 2: dbpremia2 dbpremia2_priv dbpremia2
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Now formatting voting device: /premia/crsdata/votefile/vote_file.dbf
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 dbpremia1
CSS is inactive on these nodes.
 dbpremia2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons

The following is the output of root.sh on dbpremia2:

# ./root.sh
WARNING: directory '/oracrs/product/oracle' is not owned by root
WARNING: directory '/oracrs/product' is not owned by root
WARNING: directory '/oracrs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracrs/product/oracle' is not owned by root
WARNING: directory '/oracrs/product' is not owned by root
WARNING: directory '/oracrs' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: dbpremia1 dbpremia1_priv dbpremia1
node 2: dbpremia2 dbpremia2_priv dbpremia2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 dbpremia1
 dbpremia2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "en0" is not public. Public interfaces should be used to configure virtual IPs.


The following message: "The given interface(s), "en0" is not public. Public interfaces should be used to configure virtual IPs" is expected due to the network chosen for the public interface (see metalink note id: ). The workaround is to run 'vipca' from the last node as the root user. The screens below show vipca being executed as the root user out of the CRS ORACLE_HOME.

Notes
   The error returned during the run of root.sh on our second node requires that vipca be rerun as the root user. Failure to do this will result in the VIPs not being assigned to the public interfaces of each node.
Actions
   Invoke vipca from the command line as the root user. The first screen displayed is the welcome screen, as seen above.

Next, specify the network interface that the VIPs will use; this will be the same as the public interface, en0. Details about the VIP are then specified on the next screen, as shown above. The IP Alias Name and IP Address are those of the VIP. Once the necessary information has been added, proceed to the summary screen and click Next to start the various configuration assistants, as seen below.

Once the assistants complete successfully, another summary screen is displayed detailing what has been done; see below.

To verify that the VIPs have been configured correctly, run the 'ifconfig -a' command and check that the public interface has the VIP address assigned to it. The screen below shows that the IP 192.168.38.182 has been assigned to the public interface en0, in addition to the public IP address 192.168.38.82, for node dbpremia1. A further check is to ping the VIP of the second node, dbpremia2, to see if it is up.
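For example, the reachability check of the second node's VIP mentioned above can be done from dbpremia1 with a simple ping of the VIP alias from the earlier host table (press Ctrl-C to stop after a few replies):

# ping dbpremia2-vip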


Below is a similar check performed on dbpremia2, confirming that its VIP address of 192.168.38.183 has been assigned to the public interface en0.

$ ifconfig -a
en0: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
        inet 192.168.38.83 netmask 0xffffff00 broadcast 192.168.38.255
        inet 192.168.38.183 netmask 0xffffff00 broadcast 192.168.38.255
        tcp_sendspace 131072 tcp_recvspace 65536


4.2. Verifying Oracle Clusterware Installation


To verify that the Oracle Clusterware was installed successfully, the following checks can be performed. The olsnodes utility lists the nodes that are part of the cluster configuration; execute it from any node as follows:

$ $CRS_ORACLE_HOME/bin/olsnodes
dbpremia1 dbpremia2

This confirms that the nodes dbpremia1 and dbpremia2 are part of the cluster configuration. Next we can check whether the clusterware daemons (crsd, evmd and cssd) are running by issuing the following ps command on each node. From dbpremia1:

# ps -ef | grep -i d.bin
oracle  495776  606376   0 21:14:47      -  0:00 /oracrs/product/oracle/crs/bin/ocssd.bin
oracle  610330       1   0 21:14:46      -  0:00 /oracrs/product/oracle/crs/bin/evmd.bin
root    626762       1   0 21:14:46      -  0:00 /oracrs/product/oracle/crs/bin/crsd.bin reboot

From dbpremia2:

$ ps -ef | grep -i d.bin
oracle  241760       1   0 21:16:32      -  0:00 /oracrs/product/oracle/crs/bin/evmd.bin
root    426042       1   0 21:16:32      -  0:01 /oracrs/product/oracle/crs/bin/crsd.bin reboot
oracle  471290  364684   0 21:16:33      -  0:00 /oracrs/product/oracle/crs/bin/ocssd.bin


Additionally, we can use the clusterware control utility crsctl to verify that the Oracle Clusterware is up and healthy on each node. From dbpremia1:

# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


From dbpremia2:

# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


We can use the crs_stat command to show the status of the entire cluster; this needs to be executed from one node only, as it shows the status of every resource in the cluster:

$ crs_stat -t
Name            Type          Target    State     Host
------------------------------------------------------------
ora....ia1.gsd  application   ONLINE    ONLINE    dbpremia1
ora....ia1.ons  application   ONLINE    ONLINE    dbpremia1
ora....ia1.vip  application   ONLINE    ONLINE    dbpremia1
ora....ia2.gsd  application   ONLINE    ONLINE    dbpremia2
ora....ia2.ons  application   ONLINE    ONLINE    dbpremia2
ora....ia2.vip  application   ONLINE    ONLINE    dbpremia2


At this point we should also verify that the correct network interfaces have been chosen for the RAC environment, via the oifcfg utility, as follows (run from only one node):

$ oifcfg getif
en0  192.168.38.0  global  public
en1  10.0.0.0      global  cluster_interconnect



Finally, since the Oracle Cluster Registry (OCR) is a key component of Oracle Clusterware, we can verify its integrity using the ocrcheck utility from any node, as follows:

$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2004
         Available space (kbytes) :     260116
         ID                       : 1923955616
         Device/File Name         : /premia/crsdata/ocrfile/ocr_file.dbf
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded


5. Oracle Clusterware patching



5.1. Patch Oracle Clusterware to 10.2.0.3


Prior to applying the 10.2.0.3 patchset to the Oracle Clusterware, we need to stop the clusterware on both nodes, dbpremia1 and dbpremia2. This can be achieved via the following command, run as the root user:
# crsctl stop crs

There is a requirement to run the 'slibclean' utility on each node where the patch will be applied; this utility should be run as the root user. Next, the 10.2.0.3 patchset is extracted to a stage area and the Oracle Universal Installer (OUI) is invoked from this stage. After the patchset welcome screen, we are presented with the 'Specify Home Details' screen, seen below. Here we must select the ORACLE_HOME that corresponds to our existing Oracle Clusterware installation; this home should be available from the drop-down list.
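As a sketch of that preparation sequence, run the following on each node as root; the stage path /stage/10.2.0.3 and the Disk1 subdirectory are assumptions about where and how the patchset was extracted, so substitute your own stage location:

# crsctl stop crs
# /usr/sbin/slibclean

Then, as the oracle user on the node driving the patch:

$ cd /stage/10.2.0.3/Disk1
$ ./runInstaller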


Eventually we will be presented with a screen titled 'Specify Hardware Cluster Installation Node'; here, make sure both nodes, dbpremia1 and dbpremia2, are listed. The patchset will automatically be applied to both nodes in the cluster configuration.

Finally we will be presented with the summary screen with the list of actions to be performed by this patchset:

Once the patchset application is complete, we will be presented with a few manual tasks that need to be performed as the root user.


Manual tasks include running the root102.sh script on both nodes; below is a sample run of this script:

During the run of the root102.sh script, various TOC-related warnings will be generated; these can safely be ignored. The screen below shows the same for dbpremia2.


Again, the TOC-related warnings can safely be ignored. Once the root102.sh scripts complete on both nodes, return to the patchset 'End of Installation' screen and click Exit to complete the 10.2.0.3 patchset application to Oracle Clusterware. To verify that the clusterware has been upgraded to version 10.2.0.3, the following checks can be performed. From dbpremia1:

# crsctl query crs softwareversion


CRS software version on node [dbpremia1] is [10.2.0.3.0]

From dbpremia2:

# crsctl query crs softwareversion


CRS software version on node [dbpremia2] is [10.2.0.3.0]
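In addition to the per-node software version, the active version of the clusterware for the cluster as a whole can be queried from any node; once both nodes have been patched it should also report 10.2.0.3.0:

# crsctl query crs activeversion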



10. Oracle RAC Database Home Software Install


10.1. Install RAC 10.2.0.1
Prior to commencing the RAC installation, verify that the Oracle Clusterware is running on both nodes using the following command:
# crsctl check crs

If the clusterware needs to be started, execute the following command as the root user on both nodes:
# crsctl start crs

As with the patchset installation, be sure to run the slibclean utility on both nodes as the root user. Next, run the Oracle Universal Installer from the database directory of the first DVD. As before, you will be presented with the Specify Home Details screen; be sure to enter a new home name and path, as the Oracle RAC binaries must be located in a separate ORACLE_HOME from the Oracle Clusterware. See below.

If the Oracle Clusterware is up, you should then be presented with a screen listing both nodes as part of the installation; ensure both are selected, as shown below.


Next, we select the custom installation method and choose to do a software-only install, since we plan to apply the 10.2.0.3 patchset to this ORACLE_HOME before creating a database. After this, the Available Product Components screen is presented, where the various Oracle components can be selected for install; the screen below shows this.

As with the clusterware installation, the OUI performs its own checks and fails for certain filesets; as before, this can safely be ignored. A listing of the OUI checks is presented below, showing which checks were successful and which failed checks we ignored.

Checking operating system requirements ...
Expected result: One of 5200.004,5300.002
Actual Result: 5300.002
Check complete. The overall result of this check is: Passed
==========================================================
Checking operating system package requirements ...


Checking for bos.adt.base(0.0); found bos.adt.base(5.3.0.61). Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.60). Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.0.60). Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.0.60). Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.0.60). Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.0.61). Passed
Check complete. The overall result of this check is: Passed
==========================================================
Checking recommended operating system patches
Checking for IY59386(bos.rte.bind_cmds,5.3.0.1); found (bos.rte.bind_cmds,5.3.0.60). Passed
Checking for IY60930(bos.mp,5.3.0.1); found (bos.mp,5.3.0.61). Passed
Checking for IY60930(bos.mp64,5.3.0.1); found (bos.mp64,5.3.0.61). Passed
Checking for IY66513(bos.mp64,5.3.0.20); found (bos.mp64,5.3.0.61). Passed
Checking for IY66513(bos.mp,5.3.0.20); found (bos.mp,5.3.0.61). Passed
Checking for IY70159(bos.mp,5.3.0.22); found (bos.mp,5.3.0.61). Passed
Checking for IY70159(bos.mp64,5.3.0.22); found (bos.mp64,5.3.0.61). Passed
Checking for IY58143(bos.mp64,5.3.0.1); found (bos.mp64,5.3.0.61). Passed
Checking for IY58143(bos.acct,5.3.0.1); found (bos.acct,5.3.0.60). Passed
Checking for IY58143(bos.adt.include,5.3.0.1); found (bos.adt.include,5.3.0.61). Passed
Checking for IY58143(bos.adt.libm,5.3.0.1); found (bos.adt.libm,5.3.0.60). Passed
Checking for IY58143(bos.adt.prof,5.3.0.1); found (bos.adt.prof,5.3.0.61). Passed
Checking for IY58143(bos.alt_disk_install.rte,5.3.0.1); found (bos.alt_disk_install.rte,5.3.0.60). Passed
Checking for IY58143(bos.cifs_fs.rte,5.3.0.1); found Not found. Failed <<<<
Checking for IY58143(bos.diag.com,5.3.0.1); found (bos.diag.com,5.3.0.61). Passed
Checking for IY58143(bos.perf.libperfstat,5.3.0.1); found (bos.perf.libperfstat,5.3.0.60). Passed
Checking for IY58143(bos.perf.perfstat,5.3.0.1); found (bos.perf.perfstat,5.3.0.60). Passed
Checking for IY58143(bos.perf.tools,5.3.0.1); found (bos.perf.tools,5.3.0.61). Passed
Checking for IY58143(bos.rte.boot,5.3.0.1); found (bos.rte.boot,5.3.0.60). Passed
Checking for IY58143(bos.rte.archive,5.3.0.1); found (bos.rte.archive,5.3.0.60). Passed
Checking for IY58143(bos.rte.bind_cmds,5.3.0.1); found (bos.rte.bind_cmds,5.3.0.60). Passed
Checking for IY58143(bos.rte.control,5.3.0.1); found (bos.rte.control,5.3.0.61). Passed
Checking for IY58143(bos.rte.filesystem,5.3.0.1); found (bos.rte.filesystem,5.3.0.61). Passed
Checking for IY58143(bos.rte.install,5.3.0.1); found (bos.rte.install,5.3.0.61). Passed
Checking for IY58143(bos.rte.libc,5.3.0.1); found (bos.rte.libc,5.3.0.61). Passed
Checking for IY58143(bos.rte.lvm,5.3.0.1); found (bos.rte.lvm,5.3.0.61). Passed
Checking for IY58143(bos.rte.man,5.3.0.1); found (bos.rte.man,5.3.0.60). Passed
Checking for IY58143(bos.rte.methods,5.3.0.1); found (bos.rte.methods,5.3.0.61). Passed
Checking for IY58143(bos.rte.security,5.3.0.1); found (bos.rte.security,5.3.0.61). Passed
Checking for IY58143(bos.rte.serv_aid,5.3.0.1); found (bos.rte.serv_aid,5.3.0.61). Passed
Check complete. The overall result of this check is: Failed <<<<
Problem: Some recommended patches are missing (see above).
Recommendation: You may actually have installed patches which have obsoleted these, in which case you can successfully continue with the install. If you have not, it is recommended that you do not continue. Refer to the readme to find out how to get the missing patches.
==========================================================
Validating ORACLE_BASE location (if set) ...
Check complete. The overall result of this check is: Passed
==========================================================
Checking for proper system clean-up....
Check complete. The overall result of this check is: Passed
==========================================================
Checking for Oracle Home incompatibilities ....
Actual Result: NEW_HOME
Check complete. The overall result of this check is: Passed
==========================================================
Checking Oracle Clusterware version ...
Check complete. The overall result of this check is: Passed
==========================================================

After this, standard screens appear which have not been included, as they are self-explanatory. We skip ahead to the end of the installation, where a dialog box pops up and asks for the root.sh script to be executed on each node in the cluster. This is shown below.


Below is a sample run of the root.sh script for each node. From dbpremia1:

# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/product/oracle/db
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory...
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.


From dbpremia2:

# ./root.sh


Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/product/oracle/db
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory...
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

Once the scripts have been run on both nodes, the 10g RAC software installation is complete and you should be presented with the End of Installation screen, as shown below.

We will now proceed by applying the 10.2.0.3 patch set to this ORACLE_HOME.

11. Oracle RAC Software Home Patching


11.1. Patch Oracle RAC to 10.2.0.3
Prior to applying the 10.2.0.3 patchset to the RAC ORACLE_HOME, stop any Oracle processes running out of that ORACLE_HOME. As before, also run slibclean as the root user on both nodes. When done, invoke the OUI from the patchset stage area. Similar to the process of applying the clusterware patchset, make sure the ORACLE_HOME corresponding to the RAC binaries location is selected; it should be available from the drop-down list, as seen below.
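A simple way to confirm nothing is still running out of the RAC home before patching is to grep the process list for the ORACLE_HOME path used in this guide; the check below is only a sketch and should return no Oracle processes on either node:

# ps -ef | grep /oracle/product/oracle/db | grep -v grep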


Next make sure that the cluster installation node screen shows both nodes in the cluster, as shown below.

Notes
   Both nodes, dbpremia1 and dbpremia2, are shown above.

Next, proceed through the standard screens to commence the patchset application. Once this completes, the configuration assistants screen appears.


Finally, as before, the Execute Configuration Scripts screen appears and requires root.sh to be run on each node.

Run root.sh on both nodes before proceeding with the next step which is to create the network listeners and the RAC database.

12. Oracle RAC Database Creation


12.1. Oracle RAC Listeners Configuration
Since all software components are now installed and ready for use, we proceed by following the necessary steps to create a RAC database. Prior to executing dbca, we must create the network listeners on each node of the cluster using the netca utility. This utility need only be run from one of the nodes. The first screen presented when netca is run is important; ensure you select Cluster configuration before proceeding to the next screen.


In the next screen, select all nodes that are part of the cluster configuration; below we can see that both dbpremia1 and dbpremia2 are selected.


Clicking Next takes us to the screen where we choose to perform a listener configuration. The screens that follow are the standard screens seen when creating a listener.

Since there are no listeners present, the only option we have is the default which is to add a listener to our configuration.


When presented with the screen for the listener name, keep the default name LISTENER. During the listener creation process, netca automatically appends each node's hostname to the listener name.

Once the creation process is complete, you should see messages similar to those shown in the screen above on the xconsole/xterm from which netca was invoked.
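To confirm that the node-specific listeners created by netca are registered with the clusterware and online, the cluster status can be queried again from any node; this is only a quick sketch, and the resource names are abbreviated by crs_stat:

$ crs_stat -t | grep lsnr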

12.2. Oracle RAC Database Configuration


After the successful creation of the network listeners, we can proceed with creating the RAC database. This is done by running dbca (the Database Configuration Assistant) from any node as the oracle user. This utility is found under the ORACLE_HOME of the database software. Below is the welcome screen displayed once dbca is invoked; be sure to select the Oracle Real Application Clusters database option and click Next to proceed.


Next select the option to create a database. As with previous screens that show node selection, ensure that both nodes are selected before clicking next.


You may then select any template, depending on the type of database you wish to create. This choice results in different initialization parameters for the Oracle instances; these can, however, like any Oracle parameters, be modified even after the creation of the database. In our case we chose the General Purpose option. Next, we specify the database global name and a SID prefix for our database.


Next, we chose to have Database Control configured to administer the database (screen not shown). On the Storage Options step, be sure to select Cluster File System, as we are using GPFS; this is shown below.

After this, standard screens appear as they do for a single-instance database; those screens have been omitted. Finally, once the database has been created, a dialog box pops up indicating that the cluster database is being brought up.


To verify that both the instances and the database are up and were created successfully, the crsstat command can be used. Note that the crsstat utility is a script that calls crs_stat; this script must be created manually and is not supplied with the installation (metalink note id: 259301.1). The output of this script once the database has been created is shown below.

From the output above one can see that all services on each node are up and running. A similar result can be seen by issuing 'crs_stat -t' from any node. Also note how each listener has the node name of its host appended to its name. Completing this step concludes the process of installing 10g RAC on AIX using GPFS as the storage option.
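The database and instance status can also be checked with srvctl from either node; the database name below (racdb) is a placeholder, since the actual global name chosen during dbca is not recorded in this guide, so substitute your own:

$ srvctl status database -d racdb
$ srvctl status nodeapps -n dbpremia1
$ srvctl status nodeapps -n dbpremia2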

12.2.1. Oracle RAC Database Instance has been created

