
Oracle 10g RAC Setup for Solaris Running on HP Hardware


Version        : 0.2
Date           : 23-Mar-2009
Status         : Valid
Author         : Shibu Mathai Varghese
File           : Oracle 10g RAC Setup on HP Hardware.doc
Pages          : 146
Classification : Open
Distribution   : Public


History

Version   Date          Author   Remark
0.1       23-Feb-2009   Shibu    Draft
0.2       25-Mar-2009   Shibu    Valid



Table of Contents

1 Introduction
2 Scope
3 Pre-Requisites
4 Configuration
5 Storage Allocation
  5.1 Shared-Storage Allocation
  5.2 iSCSI Initiator
  5.3 iSCSI Target
  5.4 Hardware
    5.4.1 Nodes
    5.4.2 HP StorageWorks All-in-One SB600c Storage Blade
    5.4.3 Network infrastructure
    5.4.4 Data files, OCR and VOTING disks
    5.4.5 Logical Drive Creation using Array Configuration Utility (ACU)
    5.4.6 iSCSI Target Creation
    5.4.7 iSCSI Virtual Disk Creation
    5.4.8 Configure iSCSI Volumes on Oracle RAC Nodes
    5.4.9 Configure Solaris Partitions on Oracle RAC Nodes
    5.4.10 Create UNIX Users and Groups on Oracle RAC Nodes
    5.4.11 Change Privilege/Ownership of the Created Raw Disks on all Oracle RAC Nodes
    5.4.12 Create Symbolic Links for all the Created Raw Disks on all Oracle RAC Nodes
    5.4.13 Create Symbolic Link for SSH on all Oracle RAC Nodes
    5.4.14 Updating the /etc/hosts file
    5.4.15 Configure SSH on Oracle RAC Nodes
    5.4.16 Checking the Hardware Requirements of BLTEST1 and BLTEST2
    5.4.17 Node Time Requirements
    5.4.18 Configuring Kernel Parameters on Solaris 10
    5.4.19 Host Naming of the RAC Nodes in Solaris 10
    5.4.20 Time Zones of the RAC Nodes in Solaris 10
    5.4.21 Network infrastructure
    5.4.22 Update Hosts File in all the Cluster Nodes
6 Oracle 10g RAC Installation
  6.1 10.2.0.1 Clusterware Installation
    6.1.1 Verify Clusterware Pre-requisites
    6.1.2 Create the Default Home for CRS in all the Nodes involved in the Cluster
    6.1.3 Run Root Pre-Requisite Check to ensure No Sun Cluster is running
    6.1.4 Ensure the Display is set correctly and any X Server Software is working as required
    6.1.5 Clusterware Setup using Oracle Universal Installer (OUI)
  6.2 10.2.0.1 Database Home Installation
    6.2.1 Create the Default Home for Oracle Software in all the Nodes involved in the Cluster
    6.2.2 Database Home Setup using Oracle Universal Installer (OUI)
  6.3 10.2.0.1 Database Companion Installation
    6.3.1 Companion Installation using Oracle Universal Installer (OUI)
  6.4 10.2.0.3 Patch Installation
    6.4.1 10.2.0.3 Patch Installation for Clusterware using Oracle Universal Installer (OUI)
    6.4.2 10.2.0.3 Patch Installation for Database Home using Oracle Universal Installer (OUI)
  6.5 10.2.0.3 ASM Installation
    6.5.1 ASM Installation using Oracle Universal Installer (OUI)
  6.6 10.2.0.3 Database Installation
    6.6.1 Database Installation using Oracle Universal Installer (OUI)
7 Acknowledgements


1 Introduction

This document describes the steps for setting up storage on the HP Storage CS3000 using iSCSI and allocating that storage to the HP blade servers BLTEST1 and BLTEST2. In addition, it covers the steps for installing Oracle 10g Real Application Clusters on Sun Solaris running on those blade servers.

2 Scope
The scope of this document covers the iSCSI storage allocation on the HP infrastructure and the Oracle 10g RAC setup on Solaris running on this infrastructure.

3 Pre-Requisites
o Windows 2003 Storage Server running on the HP storage blade server
o iSCSI SAN storage on the HP Storage CS3000
o Solaris 10 with the latest patch set installed on HP blade servers BLTEST1 and BLTEST2, which will be the two nodes of the Oracle 10g Real Application Clusters
o Host names of the RAC nodes must be in lower case ONLY

4 Configuration
The environment consists of a storage blade server, two blade servers for RAC, and an HP Onboard Administrator:

RAC Blade Server 1:
  o Hostname     : bltest1
  o Server model : BL460c G1
  o IP address   : 192.168.15.6
RAC Blade Server 2:
  o Hostname     : bltest2
  o Server model : BL460c G1
  o IP address   : 192.168.15.7
Storage Blade Server:
  o Server model : BL460c G1
  o IP address   : 192.168.15.8
HP Onboard Administrator:
  o IP address   : 192.168.15.2
Storage:
  o Server model : AiO SB600c

5 Storage Allocation
5.1 Shared-Storage Allocation
Today, fibre channel is one of the most popular solutions for shared storage. Fibre channel is a high-speed serial-transfer interface used to connect systems and storage devices in point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched (FC-SW) topologies. Protocols supported by fibre channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and provide throughput of up to 2.12 Gbps in each direction, with 4.25 Gbps expected. Fibre channel, however, is very expensive.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.
Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage, but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K.

The shared storage that will be used for this setup is based on iSCSI technology, using a Windows 2003 storage server installed with HP storage software. This solution offers a lower-cost alternative to fibre channel.

5.2 iSCSI Initiator


Basically, an iSCSI initiator is a client device that connects to, and initiates requests against, a service offered by a server (in this case an iSCSI target). The iSCSI initiator software needs to be present on each of the Oracle RAC nodes (BLTEST1 and BLTEST2).
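On Solaris 10 the initiator is managed with the iscsiadm(1M) utility. As a quick sanity check (a minimal sketch; the node name shown below is an assumed example, as the real value is generated per host), display the initiator node configuration on each RAC node:

# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba0e0000.49a1b2c3
Initiator node alias: bltest1

If the command is not available, verify that the SUNWiscsiu and SUNWiscsir packages are installed (see section 5.4.8).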

5.3 iSCSI Target


An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). In this setup, the node 192.168.15.8 is the iSCSI target.

5.4 Hardware
5.4.1 Nodes
For our infrastructure, we used a cluster composed of two HP ProLiant BL460c servers running Solaris 10, with HP SB600c blade storage. With features equal to standard 1U rack-mount servers, the dual-processor, multi-core BL460c combines power-efficient compute power and high density with expanded memory and I/O for maximum performance. With Low Voltage or Standard Quad-Core and Dual-Core Intel Xeon processors, DDR2 Fully Buffered DIMMs, optional Serial Attached SCSI (SAS) or SATA hard drives, and support for multi-function NICs and multiple I/O cards, the BL460c provides a performance system ideal for the full range of scale-out applications. In this small form factor, the BL460c also includes high-availability features such as optional hot-plug hard drives, mirrored memory, online spare memory, memory interleaving, embedded RAID capability, and enhanced remote Lights-Out management.

5.4.2 HP StorageWorks All-in-One SB600C Storage Blade


The All-in-One (AiO) SB600c is a preferred storage solution for customers who desire a shared storage solution in their blade chassis. The AiO SB600c storage blade provides the shared storage infrastructure required to support the Oracle RAC database.

The SB600c consists of:

Hardware
  o BL460c (E200i Smart Array Controller + 2x 146 GB 10K RPM SAS drives)
  o SB40c (P400 Smart Array Controller + 6x 146 GB 10K RPM SAS drives)
  o Total raw storage = 8x 146 GB = 1.16 TB

Operating system: Windows Storage Server 2003 R2
  o Includes the Microsoft iSCSI Software Target
  o Storage-optimized OS from Microsoft for block (iSCSI) and file shares (NFS, CIFS)
  o Snapshot technology

Software
  o HP All-in-One Storage System Manager: provides an easy-to-use graphical interface that allows the end user to set up physical and logical volumes, and to create and present the iSCSI LUNs to the Solaris machines.

5.4.3 Network infrastructure


A private network (for instance a gigabit Ethernet network, using a gigabit switch to link the cluster nodes) is dedicated to the Oracle interconnect (cache fusion between instances). This dedicated network is mandatory.

Standard network architecture:
o Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect).
o Network cards for the public network must have the same name on each participating node in the RAC cluster.
o Network cards for the interconnect (private) network must have the same name on each participating node in the RAC cluster.
o One virtual IP per node must be reserved, and must not be in use on the network before or after the Oracle Clusterware installation.
o A public network interface is required for both the public IP and the VIP (virtual IP).
o For the public network, each network adapter must support TCP/IP.
o For the private network, the interconnect must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better recommended).
o For increased reliability, configure redundant public and private network adapters for each node.
o For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network; every node must be connected to every private network. You can test whether an interconnect interface is reachable using the ping command (see the example below).
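A minimal reachability sketch (the host names assume the /etc/hosts entries defined in section 5.4.14): from each node, ping the other node's public and private names. Solaris ping simply reports whether the host answers:

$ ping bltest2
bltest2 is alive
$ ping bltest2-priv
bltest2-priv is alive

Repeat from BLTEST2 towards bltest1 and bltest1-priv. If a private name does not answer, fix the interconnect cabling or switch configuration before continuing.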

5.4.4 Data files, OCR and VOTING disks


The drives required for the Oracle 10g RAC setup are given below:

1. OCR/Voting disk: 1024 MB
2. ASM disk: 683 GB

5.4.5 Logical Drive Creation using Array Configuration Utility (ACU)


Open the HP All-in-One Storage System Management Console.

Click on Array Configuration Utility and click on Smart Array P400 in Slot 3 on the left-hand side.


Create Array with the Entire Available Storage Attached



Assign RAID Level 5 Storage to the UNUSED Space allocated to the Array

Note: Assigning RAID level 5 will reduce the available storage. Click Save to save the configuration.

Creation of LOGICAL Partitions from Microsoft Computer Management


In the HP AiO Storage Management Console, select Disk and Volume Management > Disk Management. Upon selection, a message is displayed which identifies the logical drives newly created with ACU. For demonstration purposes, only the configuration of the last LUN created is shown here.

5.4.5.1.1 Initialize the Disk for the OCR/Voting Disk


Click NEXT

Click NEXT

Note: DO NOT CONVERT THE DISK TO A DYNAMIC DISK

Click NEXT

Click FINISH

5.4.5.1.2 Creating the Logical Partition for the OCR and Voting Disk Files

Right-click on the unallocated space (107.4 GB) and click New Partition.



Select Primary Partition and click NEXT



Change the partition size to 1024 MB and click NEXT.

Click NEXT. Change the drive letter to Q and click NEXT.


Change the volume label to ocr_voting, enable Perform a quick format, and click NEXT.

Click on Finish


Creation of the logical partition for the OCR/voting disk is now complete:

Now select Disk Management in Computer Management to see the newly added logical partition for the OCR/voting disk:

5.4.5.1.3 Creating the Logical Partition for ASM Disk Locations

Select Disk 1, right-click, and select New Partition.

Click Next

Select Primary Partition and Click on Next

Change the partition size to 699907 MB and click Next.



Change the drive letter to E and click Next.

Change the volume label to ASM_FILES, enable quick format, and click Next.

Click on Finish

5.4.6 iSCSI Target Creation


23-Mar-2009 In HP AIO Storage Management Console, Click Oracle 10g RAC Microsoft Solaris Target Management Setup for iSCSI on

HP

in HP All-in-One

Storage System

Right-click iSCSI Target and click Create iSCSI Target.

Click on Next

Set the iSCSI target name to NAS1 and click Next.

Click on Advanced

Click on Add


Select Identifier Type = IP Address and enter IP address 192.168.15.6 (one of the RAC node IP addresses).

Click OK and then add IP address 192.168.15.7 (the other RAC node's address).


Now both IP addresses (the RAC node addresses, i.e. the iSCSI clients) are displayed. Click OK.

Now view the newly created iSCSI target, right-click NAS1, and select Properties:

Change the Description for NAS1:



5.4.7 iSCSI Virtual Disk Creation


This section details the creation of virtual disks/LUNs from the logical partitions created using ACU and the Microsoft Disk Management console. Click on Devices; the available devices are shown in the top pane and the available disks in the bottom pane.

iSCSI Virtual Disk Creation: OCR/Voting Disk

Right-click the ocr_voting partition (drive Q) and select Create Virtual Disk.

Change the Virtual Disk name to ocr_voting.vhd and Click Next



Set the size of the virtual disk to 1020 MB and click Next.



Enter a Meaningful description for the Virtual Disk and Click Next

Click Add (to add the iSCSI targets through which this virtual disk/LUN will be available).

Add NAS1 as the iSCSI target and Click Ok



Click Finish to complete the Virtual Disk Creation



Warning: Make sure you select the correct logical disk. If a disk you select is already in use by another target, the risk of data loss is severe.

After virtual disks have been created for all the disks, the console looks like the following:

iSCSI Virtual Disk Creation: ASM Disk


Right-click the asm_files partition (drive E) and select Create Virtual Disk.

Click Next

Change the Virtual Disk name to asm_files.vhd and Click Next



Set the size of the virtual disk to 698368 MB and then click Next.

Add meaningful description to the Virtual Disk and Click Next



Click Add (to add the iSCSI targets which will have access to this virtual disk/LUN).

Add iSCSI Target = NAS1 and Click Ok



Click Next

Click Finish to complete the Virtual Disk Creation:



This is how the console looks after the virtual disks/LUNs ocr_voting.vhd and asm_files.vhd have been created.

5.4.8 Configure iSCSI Volumes on Oracle RAC Nodes


iSCSI Configuration

We first need to verify that the iSCSI software packages are installed on our servers before we can proceed further:

# pkginfo SUNWiscsiu SUNWiscsir
system      SUNWiscsir Sun iSCSI Device Driver (root)
system      SUNWiscsiu Sun iSCSI Management Utilities (usr)

We will now configure the iSCSI target device to be discovered dynamically:

# iscsiadm add discovery-address 192.168.15.8:3260

The iSCSI connection is not initiated until the discovery method is enabled. This is done using the following command:

# iscsiadm modify discovery --sendtargets enable

Now we need to create the iSCSI device links for the local system. The following command can be used to do this:

# devfsadm -i iscsi
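As a quick check (a minimal sketch using standard Solaris 10 iscsiadm subcommands; the output shown is illustrative, not captured from this build), confirm that SendTargets discovery is enabled and that the target has been discovered on each node:

# iscsiadm list discovery
Discovery:
        Static: disabled
        Send Targets: enabled
        iSNS: disabled
# iscsiadm list target -v

The -v output should list the iqn.1991-05.com.microsoft target name seen in the format listings in section 5.4.9, together with its connection state.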

5.4.9 Configure Solaris Partitions on Oracle RAC Nodes


To verify that the iSCSI devices are available on the node, connect to 192.168.15.6 as the root user and run the format command:

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 8917 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,25e3@3/pci1166,103@0/pci103c,3211@8/sd@0,0
       1. c4t3d0 <DEFAULT cyl 1017 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.1991-05.com.microsoft%3Aaio600c-nas1-target0001,0
       2. c4t5d0 <DEFAULT cyl 44512 alt 2 hd 255 sec 126>
          /iscsi/disk@0000iqn.1991-05.com.microsoft%3Aaio600c-nas1-target0001,1
Specify disk (enter its number): 2
selecting c4t5d0
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format>

Note: The disks c4t3d0 and c4t5d0 refer to the iSCSI SAN disks.

Now the disks c4t3d0 and c4t5d0 have to be given Solaris partitions using fdisk:

format> p
WARNING - This disk may be in use by an application that has
modified the fdisk table. Ensure that this disk is not currently
in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk cylinders available: 44511 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)            0
  1 unassigned    wm       0              0         (0/0/0)            0
  2     backup    wu       0 - 44510    681.94GB    (44511/0/0) 1430138430
  3 unassigned    wm       0              0         (0/0/0)            0
  4 unassigned    wm       0              0         (0/0/0)            0
  5 unassigned    wm       0              0         (0/0/0)            0
  6 unassigned    wm       0              0         (0/0/0)            0
  7 unassigned    wm       0              0         (0/0/0)            0
  8       boot    wu       0 - 0         15.69MB    (1/0/0)        32130
  9 unassigned    wm       0              0         (0/0/0)            0
partition> q

format> disk

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 8917 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,25e3@3/pci1166,103@0/pci103c,3211@8/sd@0,0
       1. c4t3d0 <DEFAULT cyl 1017 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.1991-05.com.microsoft%3Aaio600c-nas1-target0001,0
       2. c4t5d0 <DEFAULT cyl 44511 alt 2 hd 255 sec 126>
          /iscsi/disk@0000iqn.1991-05.com.microsoft%3Aaio600c-nas1-target0001,1
Specify disk (enter its number)[2]: 1
selecting c4t3d0
[disk formatted]
format> p
WARNING - This disk may be in use by an application that has
modified the fdisk table. Ensure that this disk is not currently
in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> p

partition> p
Current partition table (original):
Total disk cylinders available: 1016 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)          0
  1 unassigned    wm       0              0         (0/0/0)          0
  2     backup    wu       0 - 1015    1016.00MB    (1016/0/0) 2080768
  3 unassigned    wm       0              0         (0/0/0)          0
  4 unassigned    wm       0              0         (0/0/0)          0
  5 unassigned    wm       0              0         (0/0/0)          0
  6 unassigned    wm       0              0         (0/0/0)          0
  7 unassigned    wm       0              0         (0/0/0)          0
  8       boot    wu       0 - 0          1.00MB    (1/0/0)       2048
  9 unassigned    wm       0              0         (0/0/0)          0

Partitioning of the available Solaris disks, based on the files required for the OCR, voting disks, ASM files, etc., is shown below. Only one raw slice creation is demonstrated for this purpose.

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 8917 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,25e3@3/pci1166,103@0/pci103c,3211@8/sd@0,0
       1. c4t3d0 <DEFAULT cyl 1017 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.1991-05.com.microsoft%3Aaio600c-nas1-target0001,0
       2. c4t5d0 <DEFAULT cyl 44512 alt 2 hd 255 sec 126>
          /iscsi/disk@0000iqn.1991-05.com.microsoft%3Aaio600c-nas1-target0001,1
Specify disk (enter its number): 2
selecting c4t5d0
[disk formatted]
format> p
partition> p
Current partition table (original):
Total disk cylinders available: 44511 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)            0
  1 unassigned    wm       0              0         (0/0/0)            0
  2     backup    wu       0 - 44510    681.94GB    (44511/0/0) 1430138430
  3 unassigned    wm       0              0         (0/0/0)            0
  4 unassigned    wm       0              0         (0/0/0)            0
  5 unassigned    wm       0              0         (0/0/0)            0
  6 unassigned    wm       0              0         (0/0/0)            0
  7 unassigned    wm       0              0         (0/0/0)            0
  8       boot    wu       0 - 0         15.69MB    (1/0/0)        32130
  9 unassigned    wm       0              0         (0/0/0)            0
partition> 0
Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)            0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 12925e, 0.00mb, 0.00gb]: 100mb
partition> label
Ready to label disk, continue? y
partition> p
Current partition table (original):
Total disk cylinders available: 44511 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       1            100MB
  1 unassigned    wm       0              0         (0/0/0)            0
  2     backup    wu       0 - 44510    681.94GB    (44511/0/0) 1430138430
  3 unassigned    wm       0              0         (0/0/0)            0
  4 unassigned    wm       0              0         (0/0/0)            0
  5 unassigned    wm       0              0         (0/0/0)            0
  6 unassigned    wm       0              0         (0/0/0)            0
  7 unassigned    wm       0              0         (0/0/0)            0
  8       boot    wu       0 - 0         15.69MB    (1/0/0)        32130
  9 unassigned    wm       0              0         (0/0/0)            0

The following table lists the seven partitions (slices) on the Solaris disk on the two nodes to be used for RAC. Symbolic links have been created, as the disk names are different on the two nodes of the RAC (see section 5.4.12).

File Name   Symbolic Name                Node 1 (BLTEST1)     Node 2 (BLTEST2)
OCR1        /oracle_files/ocr_disk1      /dev/rdsk/c2t3d0s5   /dev/rdsk/c4t5d0s5
OCR2        /oracle_files/ocr_disk2      /dev/rdsk/c2t3d0s0   /dev/rdsk/c4t5d0s0
VOTING1     /oracle_files/voting_disk1   /dev/rdsk/c2t3d0s6   /dev/rdsk/c4t5d0s6
VOTING2     /oracle_files/voting_disk2   /dev/rdsk/c2t3d0s7   /dev/rdsk/c4t5d0s7
VOTING3     /oracle_files/voting_disk3   /dev/rdsk/c2t3d0s1   /dev/rdsk/c4t5d0s1
DATA        /oracle_files/data_disk1     /dev/rdsk/c2t3d0s3   /dev/rdsk/c4t5d0s3
ARCH        /oracle_files/arch_disk1     /dev/rdsk/c2t3d0s4   /dev/rdsk/c4t5d0s4
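Once the slices exist, a quick way to confirm that both nodes see the same slice layout (a sketch; the device names are taken from the table above) is to print the VTOC of the shared disk on each node and compare:

# prtvtoc /dev/rdsk/c2t3d0s2
Expected: run on BLTEST1
# prtvtoc /dev/rdsk/c4t5d0s2
Expected: run on BLTEST2

Slice 2 (backup) covers the whole disk, so prtvtoc against it lists every slice with its start sector and size; the two listings should match slice for slice.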

5.4.10 Create UNIX Users and Groups on Oracle RAC Nodes

The UNIX user oracle and the groups dba and oinstall will be used for the Oracle installation.

Connect to bltest1 (192.168.15.6) as the root user and run the following:

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -u 200 -g oinstall -G dba oracle
# id oracle
uid=200(oracle) gid=100(oinstall)
# passwd oracle

The Oracle home directory should be /applns/oracle:

mkdir -p /applns/oracle
chown oracle:oinstall /applns/oracle
vi /etc/passwd

Note: Edit the passwd file and replace
oracle:x:200:100::/home/oracle:/bin/sh
with
oracle:x:200:100::/applns/oracle:/bin/bash

Create the default directory for CRS. This will be used as the location for the Oracle Clusterware:

mkdir -p /applns/crs/oracle/product/10.2.0/app
chown -R oracle:oinstall /applns/crs
chmod -R 775 /applns/crs

Connect to bltest2 (192.168.15.7) as the root user and run the following:

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -u 200 -g oinstall -G dba oracle
# id oracle
uid=200(oracle) gid=100(oinstall)
# passwd oracle

Note: The UID and GID of the users created must be the same on both RAC nodes. This is a pre-requisite for the Oracle 10g Clusterware installation to work.
The Oracle home directory should be /applns/oracle:

mkdir -p /applns/oracle
chown oracle:oinstall /applns/oracle

Create the default directory for CRS. This will be used as the location for the Oracle Clusterware:

mkdir -p /applns/crs/oracle/product/10.2.0/app
chown -R oracle:oinstall /applns/crs

Note: Edit the passwd file and replace
oracle:x:200:100::/home/oracle:/bin/sh
with
oracle:x:200:100::/applns/oracle:/bin/bash
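Because mismatched IDs are a common cause of Clusterware installation failures, the note above is worth verifying explicitly (a trivial sketch; run the same command on each node and compare):

# id oracle
uid=200(oracle) gid=100(oinstall)

Both nodes must print identical uid and gid values before you continue.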

5.4.11 Change Privilege/Ownership of the Created Raw Disks on all Oracle RAC Nodes

Connect to BLTEST1 (192.168.15.6) as the root user and run the following:

chown oracle:dba /dev/rdsk/c2t3d0s5
chown oracle:dba /dev/rdsk/c2t3d0s0
chown oracle:dba /dev/rdsk/c2t3d0s6
chown oracle:dba /dev/rdsk/c2t3d0s7
chown oracle:dba /dev/rdsk/c2t3d0s1
chown oracle:dba /dev/rdsk/c2t3d0s3
chown oracle:dba /dev/rdsk/c2t3d0s4
chmod 660 /dev/rdsk/c2t3d0s5
chmod 660 /dev/rdsk/c2t3d0s0
chmod 660 /dev/rdsk/c2t3d0s6
chmod 660 /dev/rdsk/c2t3d0s7
chmod 660 /dev/rdsk/c2t3d0s1
chmod 660 /dev/rdsk/c2t3d0s3
chmod 660 /dev/rdsk/c2t3d0s4

Connect to BLTEST2 (192.168.15.7) as the root user and run the following:

chown oracle:dba /dev/rdsk/c4t5d0s5
chown oracle:dba /dev/rdsk/c4t5d0s0
chown oracle:dba /dev/rdsk/c4t5d0s6
chown oracle:dba /dev/rdsk/c4t5d0s7
chown oracle:dba /dev/rdsk/c4t5d0s1
chown oracle:dba /dev/rdsk/c4t5d0s3
chown oracle:dba /dev/rdsk/c4t5d0s4
chmod 660 /dev/rdsk/c4t5d0s5
chmod 660 /dev/rdsk/c4t5d0s0
chmod 660 /dev/rdsk/c4t5d0s6
chmod 660 /dev/rdsk/c4t5d0s7
chmod 660 /dev/rdsk/c4t5d0s1
chmod 660 /dev/rdsk/c4t5d0s3
chmod 660 /dev/rdsk/c4t5d0s4
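Equivalently, a small shell loop avoids typos across the seven slices (a sketch; the slice numbers come from the table in section 5.4.9, and the device prefix must be adjusted per node: c2t3d0 on BLTEST1, c4t5d0 on BLTEST2). On BLTEST1:

for s in 5 0 6 7 1 3 4
do
    chown oracle:dba /dev/rdsk/c2t3d0s$s
    chmod 660 /dev/rdsk/c2t3d0s$s
done

On BLTEST2, run the same loop with /dev/rdsk/c4t5d0s$s.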


5.4.12 Create Symbolic Links for all the Created Raw Disks on all Oracle RAC Nodes

Since the same disks have different device names on the two nodes of the RAC, we require symbolic links for each of the raw disks, as the shared files have to have the same path name across all the nodes for RAC to work.

Connect to Node 1 (BLTEST1) as the root user and do the following:

mkdir /oracle_files
ln -s /dev/rdsk/c2t3d0s5 /oracle_files/ocr_disk1
ln -s /dev/rdsk/c2t3d0s0 /oracle_files/ocr_disk2
ln -s /dev/rdsk/c2t3d0s6 /oracle_files/voting_disk1
ln -s /dev/rdsk/c2t3d0s7 /oracle_files/voting_disk2
ln -s /dev/rdsk/c2t3d0s1 /oracle_files/voting_disk3
ln -s /dev/rdsk/c2t3d0s3 /oracle_files/data_disk1
ln -s /dev/rdsk/c2t3d0s4 /oracle_files/arch_disk1

Connect to Node 2 (BLTEST2) as the root user and do the following:

mkdir /oracle_files
ln -s /dev/rdsk/c4t5d0s5 /oracle_files/ocr_disk1
ln -s /dev/rdsk/c4t5d0s0 /oracle_files/ocr_disk2
ln -s /dev/rdsk/c4t5d0s6 /oracle_files/voting_disk1
ln -s /dev/rdsk/c4t5d0s7 /oracle_files/voting_disk2
ln -s /dev/rdsk/c4t5d0s1 /oracle_files/voting_disk3
ln -s /dev/rdsk/c4t5d0s3 /oracle_files/data_disk1
ln -s /dev/rdsk/c4t5d0s4 /oracle_files/arch_disk1
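A simple read test (a sketch, run as the oracle user on each node) confirms that every link resolves and that oracle has read permission on each raw device:

$ for f in /oracle_files/*
> do
>     dd if=$f of=/dev/null bs=8192 count=1 && echo "$f OK"
> done

Each link should report "1+0 records in / 1+0 records out" followed by "OK"; a permission error here means the chown/chmod step in section 5.4.11 was missed for that slice.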

5.4.13 Create Symbolic Link for SSH on all Oracle RAC Nodes

Connect to Node 1 (BLTEST1) as the root user and run the following:

mkdir -p /usr/local
cd /usr/local
ln -s /usr/bin bin

Connect to Node 2 (BLTEST2) as the root user and run the following:

mkdir -p /usr/local
cd /usr/local
ln -s /usr/bin bin

Note: This is required because Clusterware looks for the ssh executable in the /usr/local/bin folder.
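To confirm the link works on each node (a one-line sketch):

# ls -lL /usr/local/bin/ssh

The -L option follows the symbolic link chain, so the command should list the real /usr/bin/ssh binary; an error here means the link was created in the wrong directory.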

5.4.14 Updating the /etc/hosts file

The hosts file has to be updated with the private IP and the virtual IP of each node. This is a pre-requisite for the Oracle 10g RAC installation.

Connect as the root user on BLTEST1 and edit the /etc/hosts file:

#
# Internet host table
#
::1             localhost
127.0.0.1       localhost
192.168.15.6    bltest1 loghost
192.168.15.7    bltest2
10.10.1.1       bltest1-priv
10.10.1.2       bltest2-priv
192.168.15.201  bltest1-vip
192.168.15.202  bltest2-vip

Connect as the root user on BLTEST2 and edit the /etc/hosts file:

#
# Internet host table
#
::1             localhost
127.0.0.1       localhost
192.168.15.7    bltest2 loghost
192.168.15.6    bltest1
10.10.1.1       bltest1-priv
10.10.1.2       bltest2-priv
192.168.15.201  bltest1-vip
192.168.15.202  bltest2-vip


5.4.15 Configure SSH on Oracle RAC Nodes

Before you install and use Oracle Real Application Clusters, you should configure secure shell (SSH) for the oracle user on all cluster nodes (BLTEST1 and BLTEST2). Using SSH provides greater security than the Berkeley services remote shell (RSH). Oracle Universal Installer uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH (or RSH) so that these commands do not prompt for a password.

To configure SSH, you must first create RSA and DSA keys on each cluster node, and then copy the keys from all cluster node members into an authorized keys file on each node. For example, with the two-node cluster BLTEST1 and BLTEST2, you create RSA and DSA keys on the local host BLTEST1, create RSA and DSA keys on the second node BLTEST2, and then copy the keys from both BLTEST1 and BLTEST2 to each node.

Complete the following steps on each node to create the RSA and DSA keys:

o Log in as the oracle user to BLTEST1 (192.168.15.6). Create the .ssh directory in the oracle user's home directory and set the correct permissions on it:

  $ mkdir ~/.ssh
  $ chmod 700 ~/.ssh

o Enter the following command to generate an RSA key for version 2 of the SSH protocol:

  $ /usr/bin/ssh-keygen -t rsa

  At the prompts, accept the default location for the key file, and enter and confirm a pass phrase that is different from the oracle user's password (no pass phrase is also fine). This command writes the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.

o Enter the following command to generate a DSA key for version 2 of the SSH protocol:

  $ /usr/bin/ssh-keygen -t dsa

  At the prompts, accept the default location for the key file, and enter and confirm a pass phrase that is different from the oracle user's password (no pass phrase is also fine). This command writes the public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file. Never distribute the private key to anyone.

Now repeat these steps on the node BLTEST2 (192.168.15.7).

Now connect to BLTEST1 and add the keys to an authorized key file by completing the following steps:

o On the local node, determine whether you have an authorized key file (~/.ssh/authorized_keys). If the authorized key file already exists, then proceed to the next step. Otherwise, enter the following commands:

  $ touch ~/.ssh/authorized_keys
  $ cd ~/.ssh
  $ ls

  You should see the id_dsa.pub and id_rsa.pub keys that you have created.

o Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files to the file ~/.ssh/authorized_keys, and provide the oracle user's password as prompted. This process is illustrated in the following syntax, where the oracle user's home is /applns/oracle:

  [oracle@bltest1 .ssh]$ ssh bltest1 cat /applns/oracle/.ssh/id_rsa.pub >> authorized_keys
  oracle@bltest1's password:
  [oracle@bltest1 .ssh]$ ssh bltest1 cat /applns/oracle/.ssh/id_dsa.pub >> authorized_keys
  [oracle@bltest1 .ssh]$ ssh bltest2 cat /applns/oracle/.ssh/id_rsa.pub >> authorized_keys
  oracle@bltest2's password:
  [oracle@bltest1 .ssh]$ ssh bltest2 cat /applns/oracle/.ssh/id_dsa.pub >> authorized_keys
  oracle@bltest2's password:

o Change the permissions on the oracle user's ~/.ssh/authorized_keys file on all cluster nodes:

  $ chmod 600 ~/.ssh/authorized_keys

  At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass phrase that you specified when you created the keys.

o Use SCP (secure copy) or SFTP (secure FTP) to copy the authorized_keys file to the oracle user's .ssh directory on the remote node (BLTEST2):

  [oracle@bltest1 .ssh]$ scp authorized_keys bltest2:/applns/oracle/.ssh/

Enabling SSH User Equivalency on Cluster Member Nodes


To enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a pass phrase, follow these steps:

o On the system (BLTEST1) where you want to run Oracle Universal Installer, log in as the oracle user.
o Enter the following commands:

  $ exec /usr/bin/ssh-agent $SHELL
  $ /usr/bin/ssh-add

  At the prompts, enter the pass phrase for each key that you generated.
o If you have configured SSH correctly, you can now use the ssh and scp commands without being prompted for a password or pass phrase. To test the SSH configuration, enter the following commands from the same terminal session, testing each cluster node:

  $ ssh bltest1 date
  $ ssh bltest2 date

Note: The above commands display the following the first time only:

The authenticity of host 'bltest1 (192.168.15.6)' can't be established.
RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?
The authenticity of host 'bltest2 (192.168.15.7)' can't be established.
RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?

5.4.16 Checking the Hardware Requirements of BLTEST1 and BLTEST2

Each system must meet the following minimum hardware requirements:

o At least 1 GB of physical RAM
o A minimum of 10 GB of swap space, sized as a multiple of the available RAM
o 400 MB of disk space in the /tmp directory
o 4 GB of disk space for the Oracle software, depending on the installation type and platform
o 1.2 GB of disk space for a preconfigured database that uses file system storage (optional)
o Additional disk space, either on a file system or in an Automatic Storage Management disk group, for the flash recovery area if you choose to configure automated backups

To ensure that each system meets these requirements:

o To determine the physical RAM size, enter the following command:

  # /usr/sbin/prtconf | grep "Memory size"

  If the size of the physical RAM installed in the system is less than the required size, then you must install more memory before continuing.

o To determine the size of the configured swap space, enter the following command:

  # /usr/sbin/swap -s

  If necessary, refer to your operating system documentation for information about how to configure additional swap space.

o To determine the amount of disk space available in the /tmp directory, enter the following command:

  # df -k /tmp

  If there is less than 400 MB of disk space available in the /tmp directory, delete unnecessary files from the /tmp directory to meet the disk space requirement.

o To determine whether the system architecture can run the Oracle software you have obtained, enter the following command:

  # /bin/isainfo -kv

  Note: The following is the expected output of this command:

  64-bit amd64 kernel modules

  Ensure that the Oracle software you have is the correct build for your processor type. If the output of this command indicates that your system architecture does not match the software, then you cannot install it; obtain the correct software for your system architecture before proceeding further.

5.4.17 Node Time Requirements

Before starting the installation, ensure that each member node of the cluster is set as closely as possible to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with all nodes using the same reference Network Time Protocol server.
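A quick skew check (a sketch; it relies on the SSH user equivalency configured in section 5.4.15):

$ ssh bltest1 date
$ ssh bltest2 date

Run both from one terminal session and compare the timestamps; they should agree to within a few seconds. If they drift, point both nodes at the same reference NTP server before starting the installation.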

5.4.18 Configuring Kernel Parameters on Solaris 10

Create a backup copy of the /etc/system file, for example:

# cp /etc/system /etc/system.orig

Open the /etc/system file in any text editor and, if necessary, add the following lines:

set noexec_user_stack=1
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmns=2000
set semsys:seminfo_semmsl=1000
set semsys:seminfo_semmni=100
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295

Enter the following command to restart the system:

# /usr/sbin/reboot

Repeat this procedure on all other nodes in the cluster.
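After the reboot, a quick way to confirm the settings were picked up in /etc/system (a sketch):

# egrep 'noexec_user_stack|shmsys|semsys' /etc/system

This only shows what is configured in the file; the values take effect at boot time, which is why the reboot above is required on every node.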

5.4.19 Host Naming of the RAC Nodes in Solaris 10

It is a pre-requisite that the RAC node names be in lower case.

5.4.20 Time Zones of the RAC Nodes in Solaris 10

For this system, it is recommended to set GMT as the time zone on all nodes of the RAC cluster.

5.4.21 Network infrastructure

A private network (for instance a gigabit Ethernet network, using a gigabit switch to link the cluster nodes) is dedicated to the Oracle interconnect (cache fusion between instances). This dedicated network is mandatory.

Standard network architecture:
o Network cards for the public network must have the same name on each participating node in the RAC cluster.
o Network cards for the interconnect (private) network must have the same name on each participating node in the RAC cluster.
o One virtual IP per node must be reserved, and must not be in use on the network before or after the Oracle Clusterware installation.
o A public network interface is required for both the public IP and the VIP (virtual IP).

Based on the above needs, the following are the public IP, interconnect IP and VIP of the cluster nodes:
Node Name   Interface   Type      IP Address
bltest1     bnx0        Public    192.168.15.6
bltest1     bnx1        Private   10.10.1.1
bltest1     bnx2        VIP       192.168.15.201
bltest2     bnx0        Public    192.168.15.7
bltest2     bnx1        Private   10.10.1.2
bltest2     bnx2        VIP       192.168.15.202

5.4.22 Update Hosts File in all the Cluster Nodes

Connect to Node 1 (BLTEST1) as the root user and ensure the following entries exist in the /etc/hosts file:

192.168.15.6    bltest1 loghost
192.168.15.7    bltest2
10.10.1.1       bltest1-priv
10.10.1.2       bltest2-priv
192.168.15.201  bltest1-vip
192.168.15.202  bltest2-vip

Connect to Node 2 (BLTEST2) as the root user and ensure the following entries exist in the /etc/hosts file:

192.168.15.7    bltest2 loghost
192.168.15.6    bltest1
10.10.1.1       bltest1-priv
10.10.1.2       bltest2-priv
192.168.15.201  bltest1-vip
192.168.15.202  bltest2-vip

6 Oracle 10g RAC Installation


6.1 10.2.0.1 Clusterware Installation
6.1.1 Verify Clusterware Pre-requisites
Before the Clusterware installation, we need to check that the nodes involved in the setup are ready for Clusterware; the Cluster Verification Utility is used for this. The Cluster Verification Utility (CVU) is a tool that performs system checks, and this guide provides CVU commands to assist you in confirming that your system is properly configured for Oracle Clusterware and Oracle Real Application Clusters installation.

Run the following to start CVU:

$ cd /applns/setup/clusterware/cluvfy/
$ ./runcluvfy.sh stage -pre crsinst -n bltest1,bltest2 -r 10gR2 -verbose

Note: The above path refers to the location where the Clusterware binaries/cluvfy are staged.

The output after running the above command on 192.168.15.213 is given in the attached log file below:


cluvfy.log


Note: The expected response after running the above command is "Pre-check for cluster services setup was successful on all the nodes." Although I did not get that response, because some of the OS patches required for Clusterware have not been installed, I am ignoring it, as it will not cause any issues for the Clusterware installation; the OS patches can be installed later. The swap space failure in the attached log is also not corrected, as the actual swap space allocated is 20 GB. This will be evident during the Clusterware pre-requisite check in the Oracle Universal Installer (OUI).
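If a pre-check failure needs closer investigation, CVU can also run individual component checks. For example, a node connectivity check (a sketch using the standard cluvfy component syntax, run from the same staging directory):

$ ./runcluvfy.sh comp nodecon -n bltest1,bltest2 -verbose

This verifies that all node interfaces can reach each other, which is the usual first suspect when the stage -pre crsinst check reports interconnect problems.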

6.1.2 Create the Default Home for CRS in all the Nodes involved in the Cluster
The following has to be run as the root user:

ssh root@bltest1
mkdir -p /applns/crs/oracle/product/10.2.0/app
cd /applns
chown -R oracle:oinstall crs

ssh root@bltest2
mkdir -p /applns/crs/oracle/product/10.2.0/app
cd /applns
chown -R oracle:oinstall crs

6.1.3 Run Root Pre-Requisite Check to ensure No Sun Cluster is running


The following has to be run as the root user on both nodes of the cluster:

ssh root@bltest1
cd /applns/setup/clusterware/rootpre
./rootpre.sh
Expected Result: No SunCluster running

ssh root@bltest2
cd /applns/setup/clusterware/rootpre
./rootpre.sh
Expected Result: No SunCluster running

6.1.4 Ensure the Display is set correctly and any X Server Software is working as required
This is applicable only if you are initiating the Clusterware setup from a remote system with X server software installed. If so, do the following:

o Start the X server software. I used Xming on Windows for my setup.
o Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.
o Set and export the display:

  export DISPLAY=192.168.73.27:0.0

  (192.168.73.27 is the Windows machine from which I initiated the Clusterware setup.)
o Connect to the remote system where you want to install the software and start a terminal session on that system, for example, an X terminal (xterm).

Note: While using Xming, ensure that all the fonts exist; otherwise the installation could get stuck in the middle.
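Before launching the installer, it is worth confirming that the display redirection actually works (a sketch; xclock ships with Solaris under /usr/openwin):

$ export DISPLAY=192.168.73.27:0.0
$ /usr/openwin/bin/xclock &

If a clock window appears on the Windows machine running Xming, the DISPLAY setting is good; if not, fix the X server access control before starting OUI.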

6.1.5 Clusterware Setup using Oracle Universal Installer (OUI)


Connect to bltest1 as the oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
cd /applns/setup/clusterware/
-bash-3.00$ ./runInstaller
********************************************************************************
Please run the script rootpre.sh as root on all machines/nodes. The script can
be found at the toplevel of the CD or stage-area. Once you have run the script,
please type Y to proceed

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle
Clusterware installation. Answer 'n' to abort installation and then ask root
to run 'rootpre.sh'.
********************************************************************************
Has 'rootpre.sh' been run by root? [y/n] (n)
y
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.10.    Actual 5.10    Passed
Checking Temp space: must be greater than 250 MB.    Actual 11238 MB    Passed
Checking swap space: must be greater than 500 MB.    Actual 11483 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 65536    Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-02-23_11-56-33AM. Please wait ...
Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct

Note: Xming should be running on the local machine 192.168.73.27, from which bltest1 is connected remotely.
Welcome screen is displayed; click Next.

Click Next

Note: The Default Inventory Location displayed by the Installer is being used for this installation.

Click Next

Note: Enter the CRS Home Path /applns/crs/oracle/product/10.2.0/app

Click Next

Note: The check "Checking operating system package requirements" does not succeed, and I check the box indicating that it has been manually verified. The check failed because of the missing OS patches required for Clusterware. As mentioned earlier, this can be ignored and we can proceed with the Clusterware installation.

Click Add

Add the VIP, interconnect IP and public IP of the remote node involved in the cluster, and click OK.

Note: These should be present in the /etc/hosts file on both nodes.

Click Next

Click Edit

Change Interface bnx0 to Public and Click Next



Enter the OCR and OCR mirror locations and click Next.



Enter the voting disk and the two mirrored voting disk locations and click Next.

Click Install

Progress of Clusterware Installation is displayed below:



Note: Clusterware Installation is first completed on BLTEST1 (node in which OUI was initiated) and then it is done remotely on BLTEST2.

Run the following scripts as the root user on bltest1 and then on bltest2, and click OK:

The log after running the root.sh script in both the Cluster Nodes is given below:

clusterware_root.log

The VIPCA installation is shown below:



Clusterware installation is complete:
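At this point a quick status check from either node confirms that both nodes joined the cluster and the daemons are up (a sketch using the standard 10g Clusterware tools in the CRS home; the output shown is the typical 10.2 format, and the node numbering may differ):

$ /applns/crs/oracle/product/10.2.0/app/bin/olsnodes -n
bltest1 1
bltest2 2
$ /applns/crs/oracle/product/10.2.0/app/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy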



6.2 10.2.0.1 Database Home Installation


6.2.1 Create the Default Home for Oracle Software in all the Nodes involved in the Cluster
The following has to be run as the root user:

ssh root@bltest1
mkdir -p /applns/oracle/product
cd /applns
chown -R oracle:oinstall oracle

ssh root@bltest2
mkdir -p /applns/oracle/product
cd /applns
chown -R oracle:oinstall oracle

6.2.2 Database Home Setup using Oracle Universal Installer (OUI)


Connect to bltest1 as the oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/database/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.10.    Actual 5.10    Passed
Checking Temp space: must be greater than 250 MB.    Actual 11238 MB    Passed
Checking swap space: must be greater than 500 MB.    Actual 11483 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 65536    Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-02-23_11-56-33AM. Please wait ...
Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct

Welcome screen is displayed; click Next.



Click Next

Click Next

Click Next

Click Next

Note: Change the ORACLE_HOME path to /applns/oracle/product/10.2.0

Select bltest2 and Click Next



Click Next

Note: The check "Checking operating system package requirements" does not succeed, and I check the box indicating that it has been manually verified. The check failed because of the missing OS patches. As mentioned earlier, this can be ignored and we can proceed with the installation.

Select "Install database Software only" and click Next.



Note: The database will be created only after the Oracle software has been installed and patched to 10.2.0.3.

Click Install

Progress of Database Home Installation is displayed below:



Note: Database Home Installation is first completed on BLTEST1 (node in which OUI was initiated) and then it is done remotely on BLTEST2.

Installation is complete.

6.3 10.2.0.1 Database Companion Installation


6.3.1 Companion Installation using Oracle Universal Installer (OUI)
Connect to bltest1 as the oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/companion/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.10.    Actual 5.10    Passed
Checking Temp space: must be greater than 250 MB.    Actual 11238 MB    Passed
Checking swap space: must be greater than 500 MB.    Actual 11483 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 65536    Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-02-23_11-56-33AM. Please wait ...
Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct

Welcome screen is displayed; click Next.

Click Next

Select Oracle Database 10g Products 10.2.0.1 and Click Next



Click Next

Select bltest2 and Click Next



Click Next

Note: The check "Checking operating system package requirements" does not succeed, and I check the box indicating that it has been manually verified. The check failed because of the missing OS patches. As mentioned earlier, this can be ignored and we can proceed with the installation.

Click Install
Installation is completed
6.4 10.2.0.3 Patch Installation


6.4.1 10.2.0.3 Patch Installation for Clusterware using Oracle Universal Installer (OUI)
Ensure that the CRS service is brought down on both the nodes before the activity is started:

ssh root@bltest1 /applns/crs/oracle/product/10.2.0/app/bin/crsctl stop crs
ssh root@bltest2 /applns/crs/oracle/product/10.2.0/app/bin/crsctl stop crs

Connect to bltest1 as an Oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/Disk1/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...
Checking operating system version: must be 5.10. Actual 5.10 Passed
Checking Temp space: must be greater than 250 MB. Actual 11238 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11483 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536 Passed
All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-02-23_11-5633AM. Please wait ...
-bash-3.00$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct
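
Before working through the installer screens, it may be worth confirming from another session that the CRS stack really is down on both nodes; a minimal sketch:

# Sketch: both nodes should report that CRS is not running while the patch is applied
ssh root@bltest1 /applns/crs/oracle/product/10.2.0/app/bin/crsctl check crs
ssh root@bltest2 /applns/crs/oracle/product/10.2.0/app/bin/crsctl check crs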

Welcome Screen is displayed, Click Next

Click Next
Click Next
Select bltest2 and Click Next


Click Install
Installation in Progress
Run the Root Script in the RAC Nodes and click OK.
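
The dialog names the exact script to run. For the 10.2.0.3 Clusterware patch set this is typically root102.sh under the CRS home, run as root on one node at a time; a hedged sketch (confirm the script name and order against the dialog):

# Sketch: run the patch set root script as root, one node at a time
ssh root@bltest1 /applns/crs/oracle/product/10.2.0/app/install/root102.sh
ssh root@bltest2 /applns/crs/oracle/product/10.2.0/app/install/root102.sh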
Installation is completed
6.4.2 10.2.0.3 Patch Installation for Database Home using Oracle Universal Installer (OUI)
Connect to bltest1 as an Oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/Disk1/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...
Checking operating system version: must be 5.10. Actual 5.10 Passed
Checking Temp space: must be greater than 250 MB. Actual 11238 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11483 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536 Passed
All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-02-23_11-5633AM. Please wait ...
-bash-3.00$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct

Welcome Screen is displayed, Click Next

Click Next
Click Next
Select bltest2 and Click Next


Click Install
Installation in Progress
Run the Root Script in the RAC Nodes and click OK.
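
For the database home the dialog normally points at root.sh in the patched home; a minimal sketch, assuming the db_1 home used in this document:

# Sketch: run root.sh from the patched database home on each node
ssh root@bltest1 /applns/oracle/product/10.2.0/db_1/root.sh
ssh root@bltest2 /applns/oracle/product/10.2.0/db_1/root.sh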
The Log after running the Root Script is attached below:

10.2.0.3_root.log

Installation is completed
6.5 10.2.0.3 ASM Installation


6.5.1 ASM Configuration using Database Configuration Assistant (DBCA)
Connect to bltest1 as an Oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
-bash-3.00$ export PATH=$PATH:/applns/oracle/product/10.2.0/db_1/bin:/applns/crs/oracle/product/10.2.0/app/bin
-bash-3.00$ /applns/oracle/product/10.2.0/db_1/bin/dbca

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct

Welcome Screen is displayed


Select Configure Automatic Storage Management and Click Next

Select All Nodes and Click Next


Select Create Initialization Parameter File and Click Next


Click Yes so that LISTENERS are created on both the RAC Nodes

Note: Listener Names would be LISTENER_BLTEST1, LISTENER_BLTEST2
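
Once DBCA has created them, the listeners are registered as CRS resources and can be checked from either node; a minimal sketch:

# Sketch: listener resources created by DBCA should show ONLINE
/applns/crs/oracle/product/10.2.0/app/bin/crs_stat -t | grep -i lsnr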

ASM Instance Creation in Progress


Select Create New Disk Group


Complete the Installation by Clicking Finish
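
After the disk group is created, it can be cross-checked from the ASM instance on either node; a minimal sketch, assuming +ASM1 (the default instance name DBCA assigns on the first node):

# Sketch: query the ASM instance for mounted disk groups
export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
export ORACLE_SID=+ASM1
$ORACLE_HOME/bin/sqlplus -S / as sysdba <<EOF
select name, state, total_mb, free_mb from v\$asm_diskgroup;
EOF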


6.6 10.2.0.3 Database Installation


6.6.1 Database Creation using Database Configuration Assistant (DBCA)
Connect to bltest1 as an Oracle user and run the following:

ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
-bash-3.00$ export PATH=$PATH:/applns/oracle/product/10.2.0/db_1/bin:/applns/crs/oracle/product/10.2.0/app/bin
-bash-3.00$ /applns/oracle/product/10.2.0/db_1/bin/dbca

Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-iso8859-1" to type FontStruct

Welcome Screen is displayed


Select Create a Database and Click Next

Select All Nodes and Click Next


Select Transaction Processing and Click Next


Enter Global DB Name and Click Next


Select Database with Enterprise Manager and Click Next


Enter Password and Click Next


Select Automatic Storage Management and Click Next


Select all the Disk Groups to mount (or click Mount All) and Click Next
Select Use Oracle-Managed Files, Enter +DATA1 and Select Multiplex Redo Logs and Control Files
Enter the Redo Log and Control File Destinations and Click OK
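
For reference, the Oracle-Managed Files and multiplexing choices above map onto the db_create_* and control_files initialization parameters; once the database exists they can be confirmed with a quick query. A minimal sketch (the ORACLE_SID value is a placeholder for instance 1 of the database created here):

# Sketch: confirm the OMF destinations chosen above (run after the database is created)
export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
export ORACLE_SID=<SID1>   # placeholder: instance 1 of the new database
$ORACLE_HOME/bin/sqlplus -S / as sysdba <<EOF
show parameter db_create
show parameter control_files
EOF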
Click Next

Enable Archiving and Click Edit Archive Mode Parameters


Enter the Archive Log Destination and Click OK
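
Once the database is up, archiving can be verified from any instance; a minimal sketch (instance name again a placeholder):

# Sketch: confirm the database is running in ARCHIVELOG mode
export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
export ORACLE_SID=<SID1>   # placeholder
$ORACLE_HOME/bin/sqlplus -S / as sysdba <<EOF
select log_mode from v\$database;
archive log list
EOF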


Click Next
Click Next
Click Next
Click Next
Click Next
Click Next
Click Finish
Click OK
Click OK
Database Installation in Progress


Database Installation Completed
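
As a final check, confirm that all cluster resources are online and that CRS sees both database instances; a minimal sketch (the database name is a placeholder for the global database name entered earlier):

# Sketch: verify cluster resources and the new database
/applns/crs/oracle/product/10.2.0/app/bin/crs_stat -t
/applns/oracle/product/10.2.0/db_1/bin/srvctl status database -d <db_name>   # placeholder name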


7 Acknowledgements
Special thanks to Sethunath, from whom I learnt the basics of Real Application Clusters. This document wouldn't have been complete without his help in configuring the storage on the HP hardware. Thanks to the Almighty for the help rendered always.
