Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI

DBA: Linux

by Jeffrey Hunter

Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Enterprise Linux for less than US$2,700. The information in this guide is not validated by Oracle, is not supported by Oracle, and should only be used at your own risk; it is for educational purposes only.

Updated November 2009

Contents

 1. Introduction
 2. Oracle RAC 11g Overview
 3. Shared-Storage Overview
 4. iSCSI Technology
 5. Hardware and Costs
 6. Install the Linux Operating System
 7. Install Required Linux Packages for Oracle RAC
 8. Network Configuration
 9. Cluster Time Synchronization Service
10. Install Openfiler
11. Configure iSCSI Volumes using Openfiler
12. Configure iSCSI Volumes on Oracle RAC Nodes
13. Create Job Role Separation Operating System Privileges Groups, Users, and Directories
14. Logging In to a Remote System Using X Terminal
15. Configure the Linux Servers for Oracle
16. Configure RAC Nodes for Remote Access using SSH - (Optional)
17. All Startup Commands for Both Oracle RAC Nodes
18. Install and Configure ASMLib 2.0
19. Download Oracle RAC 11g Release 2 Software
20. Preinstallation Tasks for Oracle Grid Infrastructure for a Cluster
21. Install Oracle Grid Infrastructure for a Cluster
22. Postinstallation Tasks for Oracle Grid Infrastructure for a Cluster
23. Create ASM Disk Groups for Data and Fast Recovery Area
24. Install Oracle Database 11g with Oracle Real Application Clusters
25. Install Oracle Database 11g Examples (formerly Companion)
26. Create the Oracle Cluster Database
27. Post Database Creation Tasks - (Optional)
28. Create / Alter Tablespaces
29. Verify Oracle Grid Infrastructure and Database Configuration
30. Starting / Stopping the Cluster
31. Troubleshooting
32. Conclusion
33. Acknowledgements

Downloads for this guide:

- Oracle Enterprise Linux Release 5 Update 4 — (Available for x86 and x86_64)
- Oracle Database 11g Release 2, Grid Infrastructure, Examples — (11.2.0.1.0) — (Available for x86 and x86_64)
- Openfiler 2.3 Respin (21-01-09) — (openfiler-2.3-x86-disc1.iso -OR- openfiler-2.3-x86_64-disc1.iso)
- ASMLib 2.0 Library RHEL5 - (2.0.4-1) — (oracleasmlib-2.0.4-1.el5.i386.rpm -OR- oracleasmlib-2.0.4-1.el5.x86_64.rpm)

1. Introduction

One of the most efficient ways to become familiar with Oracle Real Application Clusters (RAC) 11g technology is to have access to an actual Oracle RAC 11g cluster. There's no better way to understand its benefits — including fault tolerance, security, load balancing, and scalability — than to experience them directly. Unfortunately, for many shops, the price of the hardware required for a typical production RAC configuration makes this goal impossible. A small two-node cluster can cost from US$10,000 to well over US$20,000, and that does not even include the heart of a production RAC environment, the shared storage. In most cases this would be a Storage Area Network (SAN), which generally starts at US$10,000.

For those who want to become familiar with Oracle RAC 11g without a major cash outlay, this guide provides a low-cost alternative: configuring an Oracle RAC 11g Release 2 system using commercial off-the-shelf components and downloadable software at an estimated cost of US$2,200 to US$2,700. The system will consist of a two node cluster, both running Oracle Enterprise Linux (OEL) Release 5 Update 4 for x86_64, Oracle RAC 11g Release 2 for Linux x86_64, and ASMLib 2.0. All shared disk storage for Oracle RAC will be based on iSCSI using Openfiler release 2.3 x86_64 running on a third node (known in this article as the Network Storage Server). Although this article should work with Red Hat Enterprise Linux, Oracle Enterprise Linux (available for free) will provide the same if not better stability and already includes the ASMLib software packages (with the exception of the ASMLib userspace libraries, which are a separate download).

This guide is provided for educational purposes only, so the setup is kept simple to demonstrate ideas and concepts. For example, the shared Oracle Clusterware files (OCR and voting files) and all physical database files in this article will be set up on only one physical disk, while in practice they should be configured on multiple physical drives. In addition, each Linux node will only be configured with two network interfaces — one for the public network (eth0) and one that will be used for both the Oracle RAC private interconnect and the network storage server for shared iSCSI access (eth1). For a production RAC implementation, the private interconnect should be at least Gigabit (or more) with redundant paths and should be used only by Oracle to transfer Cluster Manager and Cache Fusion related data. A third dedicated network interface (eth2, for example) should be configured on another redundant Gigabit network for access to the network storage server (Openfiler).

Oracle Documentation

While this guide provides detailed instructions for successfully installing a complete Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation (see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.
- Oracle Grid Infrastructure Installation Guide - 11g Release 2 (11.2) for Linux
- Clusterware Administration and Deployment Guide - 11g Release 2 (11.2)
- Oracle Real Application Clusters Installation Guide - 11g Release 2 (11.2) for Linux and UNIX
- Real Application Clusters Administration and Deployment Guide - 11g Release 2 (11.2)
- Oracle Database 2 Day + Real Application Clusters Guide - 11g Release 2 (11.2)
- Oracle Database Storage Administrator's Guide - 11g Release 2 (11.2)

Network Storage Server

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g.

The operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single "Volume Group" that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle grid infrastructure and the Oracle RAC database.

Oracle Grid Infrastructure 11g Release 2 (11.2)
With Oracle grid infrastructure 11g Release 2 (11.2), the Automatic Storage Management (ASM) and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. You must install the grid infrastructure in order to use Oracle RAC 11g Release 2. Configuration assistants that configure ASM and Oracle Clusterware start after the installer interview process. While the installation of the combined products is called Oracle grid infrastructure, Oracle Clusterware and Automatic Storage Manager remain separate products.

After Oracle grid infrastructure is installed and configured on both nodes in the cluster, the next step will be to install the Oracle RAC software on both Oracle RAC nodes.

In this article, the Oracle grid infrastructure and Oracle RAC software will be installed on both nodes using the optional Job Role Separation configuration. One OS user will be created to own each Oracle software product — "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, the user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.

Automatic Storage Management and Oracle Clusterware Files

As previously mentioned, Automatic Storage Management (ASM) is now fully integrated with Oracle Clusterware in the Oracle grid infrastructure. Oracle ASM and Oracle Database 11g Release 2 provide a more enhanced storage solution than previous releases. Part of this solution is the ability to store the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the Voting Files (VF, also known as the Voting Disks), on ASM. This feature enables ASM to provide a unified storage solution, storing all the data for the clusterware and the database, without the need for third-party volume managers or cluster file systems.

Just like database files, Oracle Clusterware files are stored in an ASM disk group and therefore utilize the ASM disk group configuration with respect to redundancy. For example, a Normal Redundancy ASM disk group will hold a two-way-mirrored OCR. A failure of one disk in the disk group will not prevent access to the OCR. With a High Redundancy ASM disk group (three-way-mirrored), two independent disks can fail without impacting access to the OCR. With External Redundancy, no protection is provided by Oracle. Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups. If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy.

The Voting Files are managed in a similar way to the OCR.
They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group. Instead, each voting disk is placed on a specific disk in the disk group. The disk and the location of the Voting Files on the disks are stored internally within Oracle Clusterware. The following example describes how the Oracle Clusterware files are stored in ASM after installing Oracle grid infrastructure using this guide. To view the OCR, use ASMCMD:
[grid@racnode1 ~]$ asmcmd
ASMCMD> ls -l +CRS/racnode-cluster/OCRFILE
Type     Redund  Striped  Time             Sys  Name
OCRFILE  UNPROT  COARSE   NOV 22 12:00:00  Y    REGISTRY.255.703024853

From the example above, you can see that after listing all of the ASM files in the +CRS/racnode-cluster/OCRFILE directory, only the OCR (REGISTRY.255.703024853) is shown. The listing does not show the Voting File(s) because they are not managed as normal ASM files. To find the location of all Voting Files within Oracle Clusterware, use the crsctl query css votedisk command as follows:
[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                  File Name        Disk group
--  -----    -----------------                  ---------        ----------
 1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4   (ORCL:CRSVOL1)   [CRS]
Located 1 voting disk(s).

If you decide against using ASM for the OCR and voting disk files, Oracle Clusterware still allows these files to be stored on a cluster file system like Oracle Cluster File System release 2 (OCFS2) or on an NFS file system. Please note that installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded. Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. This guide will store the OCR and voting disk files on ASM in an ASM disk group named +CRS using external redundancy, which provides one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size.
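The integrity and location of the OCR can also be verified at any time with the standard ocrcheck utility that ships with Oracle Clusterware. This is only a quick sketch (exact output varies by version); on the configuration built in this guide it should report the OCR version, the space used, the +CRS disk group as the OCR location, and the result of a logical integrity check:

[grid@racnode1 ~]$ ocrcheck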

The Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed on ASM in an ASM disk group named +RACDB_DATA, while the Fast Recovery Area will be created in a separate ASM disk group named +FRA.

The two Oracle RAC nodes and the network storage server will be configured as follows:

Nodes

Node Name    Instance Name   Database Name             Processor                            RAM   Operating System
racnode1     racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   OEL 5.4 - (x86_64)
racnode2     racdb2          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   OEL 5.4 - (x86_64)
openfiler1                                             2 x Intel Xeon, 3.00 GHz             6GB   Openfiler 2.3 - (x86_64)

Network Configuration

Node Name    Public IP       Private IP      Virtual IP
racnode1     192.168.1.151   192.168.2.151   192.168.1.251
racnode2     192.168.1.152   192.168.2.152   192.168.1.252
openfiler1   192.168.1.195   192.168.2.195

SCAN Name: racnode-cluster-scan    SCAN IP: 192.168.1.187

Oracle Software Components

Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid
                                                                                             /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle
                                                                                             /u01/app/oracle/product/11.2.0/dbhome_1

Storage Components

Storage Component    File System   Volume Size   ASM Volume Group Name   ASM Redundancy   Openfiler Volume Name
OCR/Voting Disk      ASM           2GB           +CRS                    External         racdb-crs1
Database Files       ASM           32GB          +RACDB_DATA             External         racdb-data1
Fast Recovery Area   ASM           32GB          +FRA                    External         racdb-fra1

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Enterprise Linux 5 and Openfiler 2.3 (Final Release). If you are looking for an example that takes advantage of Oracle RAC 10g release 2 with Oracle Enterprise Linux 5.3, click here.

2. Oracle RAC 11g Overview

Before introducing the details for building a RAC cluster, it might be helpful to first clarify what a cluster is. A cluster is a group of two or more interconnected computers or servers that appear as if they are one server to end users and applications and generally share the same set of physical disks. The key benefit of clustering is to provide a highly available framework where the failure of one node (for example, a database server running an instance of Oracle) does not bring down an entire application. In the case of failure with one of the servers, the other surviving server (or servers) can take over the workload from the failed server and the application continues to function normally as if nothing has happened.

The concept of clustering computers actually started several decades ago. The first successful cluster product was developed by DataPoint in 1977 named ARCnet. The ARCnet product enjoyed much success by academia types in research labs, but didn't really take off in the commercial market. It wasn't until the 1980's when Digital Equipment Corporation (DEC) released its VAX cluster product for the VAX/VMS operating system.

With the release of Oracle 6 for the Digital VAX cluster product, Oracle was the first commercial database to support clustering at the database level. It wasn't long, however, before Oracle realized the need for a more efficient and scalable distributed lock manager (DLM), as the one included with the VAX/VMS cluster product was not well suited for database applications. Oracle decided to design and write their own DLM for the VAX/VMS cluster product which provided the fine-grain block level locking required by the database. Oracle's own DLM was included in Oracle 6.2, which gave birth to Oracle Parallel Server (OPS), the first database to run the parallel server.

By Oracle 7, OPS was extended to include support for not only the VAX/VMS cluster product but also most flavors of UNIX. This framework required vendor-supplied clusterware which worked well, but made for a complex environment to setup and manage given the multiple layers involved. By Oracle8, Oracle introduced a generic lock manager that was integrated into the Oracle kernel. In later releases of Oracle, this became known as the Integrated Distributed Lock Manager (IDLM) and relied on an additional layer known as the Operating System Dependant (OSD) layer. This new model paved the way for Oracle to not only have their own DLM, but to also create their own clusterware product in future releases.

Oracle Real Application Clusters (RAC), introduced with Oracle9i, is the successor to Oracle Parallel Server. Using the same IDLM, Oracle 9i could still rely on external clusterware but was the first release to include their own clusterware product named Cluster Ready Services (CRS). With Oracle 9i, CRS was only available for Windows and Linux. By Oracle 10g release 1, Oracle's clusterware product was available for all operating systems and was the required cluster technology for Oracle RAC. With the release of Oracle Database 10g Release 2 (10.2), Cluster Ready Services was renamed to Oracle Clusterware. When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for Tru cluster, in which case you need vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified, but keep in mind that Oracle RAC still requires Oracle Clusterware as it is fully integrated with the database software. This guide uses Oracle Clusterware which, as of 11g Release 2 (11.2), is now a component of Oracle grid infrastructure.

Like OPS, Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out, and at the same time, since all instances access the same database, the failure of one node will not cause the loss of access to the database.

At the heart of Oracle RAC is a shared disk subsystem. Each instance in the cluster must be able to access all of the data, redo log files, control files and parameter file for all other instances in the cluster. The data disks must be globally available in order to allow all instances to access the database. Each instance has its own redo log files and UNDO tablespace that are locally read-writeable. The redo log files for an instance are only writeable by that instance and will only be read from another instance during system failure. The UNDO, on the other hand, is read all the time during normal database operation (e.g. for CR fabrication).

A big difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS a request for data from one instance to another required the data to be written to disk first, then the requesting instance could read that data (after acquiring the required locks). This process was called disk pinging. With cache fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all database clustering solutions use shared storage. Some vendors use an approach known as a Federated Cluster, in which data is spread across several machines rather than shared by all. With Oracle RAC, however, multiple instances use the same set of disks for storing data. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

Pre-configured Oracle RAC solutions are available from vendors such as Dell, IBM and HP for production environments. This article, however, focuses on putting together your own Oracle RAC 11g environment for development and testing by using Linux servers and a low cost shared disk solution: iSCSI. For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.

3. Shared-Storage Overview

Today, fibre channel is one of the most popular solutions for shared storage. Fibre channel is a high-speed serial-transfer interface that is used to connect systems and storage devices in either point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched topologies (FC-SW). Protocols supported by Fibre Channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 Gigabits per second in each direction, and 4.25 Gbps is expected.

Fibre channel, however, is very expensive. Just the fibre channel switch alone can start at around US$1,000. This does not even include the fibre channel storage array and high-end drives, which can reach prices of about US$300 for a single 36GB drive. A typical fibre channel setup which includes fibre channel cards for the servers is roughly US$10,000, which does not include the cost of the servers that make up the cluster.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K. See the Certify page on Oracle Metalink for supported Network Attached Storage (NAS) devices that can be used with Oracle RAC.
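As an aside, the direct I/O, TCP, and 32K block size requirements just mentioned are normally expressed as NFS mount options on the database servers. The following /etc/fstab line is an illustrative sketch only: this guide uses iSCSI rather than NFS, and the server name nas1 and export path are hypothetical (consult your NAS vendor's documentation and the Certify page for the exact options):

# Hypothetical NFS mount for Oracle files (not used in this guide)
nas1:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0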

One of the key drawbacks that has limited the benefits of using NFS and NAS for database storage has been performance degradation and complex configuration requirements. Standard NFS client software (client systems that use the operating system provided NFS driver) is not optimized for Oracle database file I/O access patterns. With the introduction of Oracle 11g, a new feature known as Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between the Oracle software and the NFS server, resulting in significant performance gains. Direct NFS Client can simplify, and in many cases automate, the performance optimization of the NFS client configuration for database workloads. To learn more about Direct NFS Client, see the Oracle White Paper entitled "Oracle Database 11g Direct NFS Client".

The shared storage that will be used for this article is based on iSCSI technology using a network storage server installed with Openfiler. This solution offers a low-cost alternative to fibre channel for testing and educational purposes, but given the low-end hardware being used, it should not be used in a production environment.

4. iSCSI Technology

For many years, the only technology that existed for building a network based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network. Several of the advantages to FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and most important to us — support for server clustering! Still today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a common manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a second set of skill sets that need to exist with the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs. Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage; however, its components can be much the same as in a typical IP network (LAN).

The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regards to performance. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel, which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on each of the Oracle RAC nodes (racnode1 and racnode2). An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC), a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.
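To make the initiator side concrete, here is a sketch of the commands this guide will eventually use on each RAC node to verify and exercise the Open-iSCSI initiator. The discovery address 192.168.2.195 is openfiler1's private interface from the configuration table earlier; the detailed walkthrough appears in a later section:

# On each Oracle RAC node (racnode1 and racnode2)
[root@racnode1 ~]# rpm -q iscsi-initiator-utils       # verify the Open-iSCSI software initiator is installed
[root@racnode1 ~]# service iscsid start               # start the iSCSI initiator daemon
[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.195   # ask the target for its available LUNs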

iSCSI Target

An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.

Before closing out this section, I thought it would be appropriate to present the following chart that shows speed comparisons of the various types of disk interfaces and network technologies. For each interface, the maximum transfer rate per second is shown:

Disk Interface / Network / BUS                 Maximum Transfer Rate
Serial                                         115 kb/s
Parallel (standard)                            115 KB/s
10Base-T Ethernet                              10 Mb/s
IEEE 802.11b wireless Wi-Fi (2.4 GHz band)     11 Mb/s
USB 1.1                                        12 Mb/s
IEEE 802.11g wireless WLAN (2.4 GHz band)      54 Mb/s
SCSI-2 (Fast SCSI / Fast Narrow SCSI)          10 MB/s
100Base-T Ethernet (Fast Ethernet)             100 Mb/s
ATA/100 (parallel) IDE                         100 MB/s
FireWire 400 - (IEEE1394a)                     400 Mb/s
USB 2.0                                        480 Mb/s
Wide Ultra2 SCSI                               80 MB/s
Ultra3 SCSI                                    80 MB/s
FireWire 800 - (IEEE1394b)                     800 Mb/s
Gigabit Ethernet                               1000 Mb/s
PCI - (33 MHz / 32-bit)                        133 MB/s
Serial ATA I - (SATA I)                        150 MB/s
Wide Ultra3 SCSI                               160 MB/s
Ultra160 SCSI                                  160 MB/s
Serial ATA II - (SATA II)                      300 MB/s
Ultra320 SCSI                                  320 MB/s
Serial ATA III - (SATA III)                    600 MB/s
PCI-X - (133 MHz / 64-bit)                     1,064 MB/s
10G Ethernet - (IEEE 802.3ae)                  10 Gb/s
PCI-Express x16 - (bidirectional)              8 GB/s

5. Hardware and Costs

The hardware used to build our example Oracle RAC 11g environment consists of three Linux servers (two Oracle RAC nodes and one Network Storage Server) and components that can be purchased at many local computer stores or over the Internet.

Oracle RAC Node 1 - (racnode1)

Dell PowerEdge T100   (US$450)
- Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
- 4GB, DDR2, 800MHz
- 160GB 7.2K RPM SATA 3Gbps Hard Drive
- Integrated Graphics - (ATI ES1000)
- Integrated Gigabit Ethernet - (Broadcom NetXtreme II 5722)
- 16x DVD Drive
- No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

1 - Ethernet LAN Card: Gigabit Ethernet Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT)   (US$90)
Used for RAC interconnect to racnode2 and Openfiler networked storage. Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom NetXtreme II 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Oracle RAC Node 2 - (racnode2)

Dell PowerEdge T100   (US$450)
- Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
- 4GB, DDR2, 800MHz
- 160GB 7.2K RPM SATA 3Gbps Hard Drive
- Integrated Graphics - (ATI ES1000)
- Integrated Gigabit Ethernet - (Broadcom NetXtreme II 5722)
- 16x DVD Drive
- No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

1 - Ethernet LAN Card: Gigabit Ethernet Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT)   (US$90)
Used for RAC interconnect to racnode1 and Openfiler networked storage. Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom NetXtreme II 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Network Storage Server - (openfiler1)

Dell PowerEdge 1800   (US$800)
- Dual 3.0GHz Xeon / 1MB Cache / 800FSB (SL7PE)
- 6GB of ECC Memory
- 500GB SATA Internal Hard Disk
- 73GB 15K SCSI Internal Hard Disk
- Integrated Graphics
- Single embedded Intel 10/100/1000 Gigabit NIC
- 16x DVD Drive
- No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

Note: The operating system and Openfiler application will be installed on the 500GB internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured for the database storage. The Openfiler server will be configured to use this second hard disk for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as the clustered database files. Please be aware that any type of hard disk (internal or external) should work for database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

1 - Ethernet LAN Card: Gigabit Ethernet Intel(R) PRO/1000 MT Server Adapter - (PWLA8490MT)   (US$125)
Used for networked storage on the private network. The Network Storage Server (Openfiler server) should contain two NIC adapters. The Dell PowerEdge 1800 machine included an integrated 10/100/1000 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private network (Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and 1Gb Ethernet cards) for the private network.

Miscellaneous Components

1 - Ethernet Switch: Gigabit Ethernet D-Link 8-port 10/100/1000 Desktop Switch - (DGS-2208)   (US$50)
Used for the interconnect between racnode1-priv and racnode2-priv which will be on the 192.168.2.0 network. This switch will also be used for network storage traffic for Openfiler.

6 - Network Cables   (US$10 each)
- Category 6 patch cable - (Connect racnode1 to public network)
- Category 6 patch cable - (Connect racnode2 to public network)
- Category 6 patch cable - (Connect openfiler1 to public network)
- Category 6 patch cable - (Connect racnode1 to interconnect Ethernet switch)
- Category 6 patch cable - (Connect racnode2 to interconnect Ethernet switch)
- Category 6 patch cable - (Connect openfiler1 to interconnect Ethernet switch)

Optional Components

KVM Switch   (US$340)
This guide requires access to the console of all nodes (servers) in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated computer which would include a single monitor, keyboard, and mouse that would have direct access to the console of each server. This solution is made possible using a Keyboard, Video, Mouse Switch — better known as a KVM Switch. A KVM switch is a hardware device that allows a user to control multiple computers from a single keyboard, video monitor and mouse. Avocent provides a high quality and economical 4-port switch which includes four 6' cables: SwitchView 1000 - (4SV1000BND1-001). For a detailed explanation and guide on the use of KVM switches, please see the article "KVM Switches For the Home and the Enterprise".

Total: US$2,455

We are about to start the installation process. Now that we have talked about the hardware that will be used in this example, let's take a conceptual look at what the environment would look like after connecting all of the hardware components (click on the graphic below to view larger image):

Figure 1: Architecture

As we start to go into the details of the installation, note that most of the tasks within this document will need to be performed on both Oracle RAC nodes (racnode1 and racnode2). I will indicate at the beginning of each section whether or not the task(s) should be performed on both Oracle RAC nodes or on the network storage server (openfiler1).

6. Install the Linux Operating System

Perform the following installation on both Oracle RAC nodes in the cluster.

This section provides a summary of the screens used to install the Linux operating system. This guide is designed to work with Oracle Enterprise Linux release 5 update 4 for x86_64 and follows Oracle's suggestion of performing a "default RPMs" installation type to ensure all expected Linux O/S packages are present for a successful Oracle RDBMS installation.

Before installing the Oracle Enterprise Linux operating system on both Oracle RAC nodes, you should have both NIC interface cards installed that will be used for the public and private network.

Download the following ISO images for Oracle Enterprise Linux release 5 update 4 from the Oracle E-Delivery Web site, for either x86 or x86_64 depending on your hardware architecture.

32-bit (x86) Installations

V17787-01.zip   (620 MB)
V17789-01.zip
V17790-01.zip
V17791-01.zip
V17792-01.zip

After downloading the Oracle Enterprise Linux operating system, unzip each of the files. You will then have the following ISO images which will need to be burned to CDs:

Enterprise-R5-U4-Server-i386-disc1.iso
Enterprise-R5-U4-Server-i386-disc2.iso
Enterprise-R5-U4-Server-i386-disc3.iso
Enterprise-R5-U4-Server-i386-disc4.iso
Enterprise-R5-U4-Server-i386-disc5.iso

Note: If the Linux RAC nodes have a DVD installed, you may find it more convenient to make use of the single DVD image instead: V17793-01.zip. Unzip the single DVD image file and burn it to a DVD:

Enterprise-R5-U4-Server-i386-dvd.iso

64-bit (x86_64) Installations

V17795-01.zip   (580 MB)
V17796-01.zip   (615 MB)
V17797-01.zip   (605 MB)
V17798-01.zip   (616 MB)
V17799-01.zip   (597 MB)
V17800-01.zip   (198 MB)

After downloading the Oracle Enterprise Linux operating system, unzip each of the files. You will then have the following ISO images which will need to be burned to CDs:

Enterprise-R5-U4-Server-x86_64-disc1.iso
Enterprise-R5-U4-Server-x86_64-disc2.iso
Enterprise-R5-U4-Server-x86_64-disc3.iso
Enterprise-R5-U4-Server-x86_64-disc4.iso
Enterprise-R5-U4-Server-x86_64-disc5.iso
Enterprise-R5-U4-Server-x86_64-disc6.iso

Note: If the Linux RAC nodes have a DVD installed, you may find it more convenient to make use of the single DVD image instead: V17794-01.zip   (3.2 GB). Unzip the single DVD image file and burn it to a DVD:

Enterprise-R5-U4-Server-x86_64-dvd.iso

If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD/DVD. You may already be familiar with and have the proper software to burn images to a CD/DVD. If you are not familiar with this process and do not have the required software to burn images to a CD/DVD, here are just two (of many) software packages that can be used: UltraISO and Magic ISO Maker.

After downloading and burning the Oracle Enterprise Linux images (ISO files) to CD/DVD, insert OEL Disk #1 into the first server (racnode1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node while substituting the node name racnode1 for racnode2 and the different IP addresses where appropriate.
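If you prefer to stage the media from an existing Linux machine instead of MS Windows, something along the following lines should work. This is only a sketch: growisofs is part of the dvd+rw-tools package, and the device name /dev/dvd may differ on your system:

# Unzip the E-Delivery archive, then burn the resulting DVD image
unzip V17794-01.zip
growisofs -dvd-compat -Z /dev/dvd=Enterprise-R5-U4-Server-x86_64-dvd.iso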

Boot Screen

The first screen is the Oracle Enterprise Linux boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Oracle Enterprise Linux

At the welcome screen, click [Next] to continue.

Language / Keyboard Selection

The next two screens prompt you for the Language and Keyboard settings. Make the appropriate selections for your configuration.

Detect Previous Installation

Note that if the installer detects a previous version of Oracle Enterprise Linux, it will ask if you would like to "Install Enterprise Linux" or "Upgrade an existing Installation". Always select to "Install Enterprise Linux".

Disk Partitioning Setup

Select [Remove all partitions on selected drives and create default layout] and check the option to [Review and modify partitioning layout]. Click [Next] to continue. You will then be prompted with a dialog window asking if you really want to remove all Linux partitions. Click [Yes] to acknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically selected. For most automatic layouts, the installer will choose 100MB for /boot, double the amount of RAM (systems with <= 2,048MB RAM) or an amount equal to RAM (systems with > 2,048MB RAM) for swap, and the rest going to the root (/) partition. Starting with RHEL 4, the installer will create the same disk configuration as just noted but will create them using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/sda for my configuration) into two partitions — one for the /boot partition (/dev/sda1) and the remainder of the disk dedicated to a LVM named VolGroup00 (/dev/sda2). The LVM Volume Group (VolGroup00) is then partitioned into two LVM partitions — one for the root filesystem (/) and another for swap.

The main concern during the partitioning phase is to ensure enough swap space is allocated as required by Oracle (which is a multiple of the available RAM). The following is Oracle's minimum requirement for swap space:

Available RAM                   Swap Space Required
Between 1,024MB and 2,048MB     1.5 times the size of RAM
Between 2,049MB and 8,192MB     Equal to the size of RAM
More than 8,192MB               0.75 times the size of RAM

For the purpose of this install, I will accept all automatically preferred sizes. (Including 5,952MB for swap since I have 4GB of RAM installed.)

If for any reason the automatic layout does not configure an adequate amount of swap space, you can easily change that from this screen. To increase the size of the swap partition, [Edit] the volume group VolGroup00. This will bring up the "Edit LVM Volume Group: VolGroup00" dialog. First, [Edit] and decrease the size of the root file system (/) by the amount you want to add to the swap partition. For example, to add another 512MB to swap, you would decrease the size of the root file system by 512MB (i.e. 36,032MB - 512MB = 35,520MB). Now add the space you decreased from the root file system (512MB) to the swap partition. When completed, click [OK] on the "Edit LVM Volume Group: VolGroup00" dialog. Once you are satisfied with the disk layout, click [Next] to continue.
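Oracle's swap rule in the table above is easy to check from a running system. The following small script is a sketch that reads physical memory from /proc/meminfo and prints the minimum swap Oracle expects for that amount of RAM:

#!/bin/bash
# Print the minimum swap size (in MB) Oracle requires for the installed RAM.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_mb=$(( ram_kb / 1024 ))
if [ $ram_mb -le 2048 ]; then
    swap_mb=$(( ram_mb * 3 / 2 ))      # 1.5 times the size of RAM
elif [ $ram_mb -le 8192 ]; then
    swap_mb=$ram_mb                    # equal to the size of RAM
else
    swap_mb=$(( ram_mb * 3 / 4 ))      # 0.75 times the size of RAM
fi
echo "RAM: ${ram_mb}MB => minimum swap required: ${swap_mb}MB"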
Boot Loader Configuration

The installer will use the GRUB boot loader by default. To use the GRUB boot loader, accept all default values and click [Next] to continue.

Network Configuration

I made sure to install both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. This screen should have successfully detected each of the network devices. The installer may choose to not activate eth1 by default.

First, make sure that each of the network devices are checked to [Active on boot]. Second, [Edit] both eth0 and eth1 as follows. Verify that the option "Enable IPv4 support" is selected. Click off the option to use "Dynamic IP configuration (DHCP)" by selecting the "Manual configuration" radio button and configure a static IP address and Netmask for your environment. Click off the option to "Enable IPv6 support". You may choose to use different IP addresses for both eth0 and eth1 than the ones I have documented in this guide and that is OK. Put eth1 (the interconnect) on a different subnet than eth0 (the public network):

eth0:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
     IPv4 Address: 192.168.1.151
     Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

eth1:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
     IPv4 Address: 192.168.2.151
     Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

Continue by manually setting your hostname. The key point to make is that the machine should never be configured with DHCP since it will be used to host the Oracle database server. You will need to configure the machine with static IP addresses and a real host name. I used "racnode1" for the first node and "racnode2" for the second. Finish this dialog off by supplying your gateway and DNS servers. The settings you make here will, of course, depend on your network configuration.
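For reference, the choices made on this screen are written to the interface configuration files under /etc/sysconfig/network-scripts. A sketch of what racnode1's files should roughly contain after the install (key names can vary slightly between releases):

# /etc/sysconfig/network-scripts/ifcfg-eth0  (public network)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.151
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1  (private interconnect and storage network)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.2.151
NETMASK=255.255.255.0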

Time Zone Selection

Select the appropriate time zone for your environment and click [Next] to continue.

Set Root Password

Select a root password and click [Next] to continue.

Package Installation Defaults

By default, Oracle Enterprise Linux installs most of the software required for a typical server. There are several other packages (RPMs), however, that are required to successfully install the Oracle software. The installer includes a "Customize software" selection that allows the addition of RPM groupings such as "Development Libraries" or "Legacy Library Support". The addition of such RPM groupings is not an issue. De-selecting any "default RPM" groupings or individual RPMs, however, can result in failed Oracle grid infrastructure and Oracle RAC installation attempts. Select the radio button [Customize now] and click [Next] to continue.

This is where you pick the packages to install. Most of the packages required for the Oracle software are grouped into "Package Groups" (i.e. Application -> Editors). Since these nodes will be hosting the Oracle grid infrastructure and Oracle RAC software, verify that at least the package groups listed below are selected for install. Note the "Optional packages" button after selecting a package group: although a package group gets selected for install, not all of the packages associated with that group get selected for installation, so some of the packages required by Oracle do not get installed. In fact, there are some packages that are required by Oracle that do not belong to any of the available package groups (i.e. libaio-devel). These packages will need to be manually installed from the Oracle Enterprise Linux CDs after the operating system install.
A complete list of required packages for Oracle grid infrastructure 11g Release 2 and Oracle RAC 11g Release 2 for Oracle Enterprise Linux 5 will be provided in the next section. For now, install the following package groups:

Desktop Environments
- GNOME Desktop Environment

Applications
- Editors
- Graphical Internet
- Text-based Internet

Development
- Development Libraries
- Development Tools
- Legacy Software Development

Servers
- Server Configuration Tools

Base System
- Administration Tools
- Base
- Java
- Legacy Software Support
- System Tools
- X Window System

In addition to the above packages, select any additional packages you wish to install for this node, keeping in mind to NOT de-select any of the "default" RPM packages. After selecting the packages to install, click [Next] to continue.

About to Install

This screen is basically a confirmation screen. Click [Next] to start the installation. If you are installing Oracle Enterprise Linux using CDs, you will be asked to switch CDs during the installation process depending on which packages you selected.

Congratulations

And that's it. You have successfully installed Oracle Enterprise Linux on the first node (racnode1). The installer will eject the CD/DVD from the CD-ROM drive. Take out the CD/DVD and click [Reboot] to reboot the system.

Post Installation Wizard Welcome Screen

When the system boots into Oracle Enterprise Linux for the first time, it will prompt you with another Welcome screen for the "Post Installation Wizard". The post installation wizard allows you to make final O/S configuration settings. On the "Welcome" screen, click [Forward] to continue.

License Agreement

Read through the license agreement. Choose "Yes, I agree to the License Agreement" and click [Forward] to continue.

Firewall

On this screen, make sure to select the [Disabled] option and click [Forward] to continue. You will be prompted with a warning dialog about not setting the firewall. When this occurs, click [Yes] to continue.

SELinux

On the SELinux screen, choose the [Disabled] option and click [Forward] to continue. You will be prompted with a warning dialog warning that changing the SELinux setting will require rebooting the system so the entire file system can be relabeled. When this occurs, click [Yes] to acknowledge that a reboot of the system will occur after firstboot (Post Installation Wizard) is completed.

Kdump

Accept the default setting on the Kdump screen (disabled) and click [Forward] to continue.

Date and Time Settings

Adjust the date and time settings if necessary and click [Forward] to continue.

Create User

Create any additional (non-oracle) operating system user accounts if desired and click [Forward] to continue. For the purpose of this article, I will not be creating any additional operating system accounts. I will be creating the "grid" and "oracle" user accounts later in this guide. If you chose not to define any additional operating system user accounts, click [Continue] to acknowledge the warning dialog.

Sound Card

This screen will only appear if the wizard detects a sound card. On the sound card screen click [Forward] to continue.

Additional CDs

On the "Additional CDs" screen click [Finish] to continue.

Reboot System

Given we changed the SELinux option (to disabled), we are prompted to reboot the system. Click [OK] to reboot the system for normal use.
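Once the system comes back up (see the login screen below), the firewall and SELinux choices can be double-checked from the command line. A quick sketch (the exact status message wording may differ):

# Verify SELinux is disabled (the persistent setting lives in /etc/selinux/config)
[root@racnode1 ~]# getenforce
Disabled

# Verify no firewall rules are loaded
[root@racnode1 ~]# service iptables status
Firewall is stopped.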

Login Screen

After rebooting the machine, you are presented with the login screen. Log in using the "root" user account and the password you provided during the installation.

Perform the same installation on the second node

After completing the Linux installation on the first node, repeat the above steps for the second node (racnode2), substituting the node name racnode1 for racnode2 and the different IP addresses where appropriate. When configuring the machine name and networking, ensure to configure the proper values. For my installation, this is what I configured for racnode2:

First, make sure that each of the network devices are checked to [Active on boot]. The installer may choose to not activate eth1.

Second, [Edit] both eth0 and eth1 as follows. Verify that the option "Enable IPv4 support" is selected. Click off the option to use "Dynamic IP configuration (DHCP)" by selecting the "Manual configuration" radio button and configure a static IP address and Netmask for your environment. Click off the option to "Enable IPv6 support". You may choose to use different IP addresses for both eth0 and eth1 than the ones I have documented in this guide and that is OK. Put eth1 (the interconnect) on a different subnet than eth0 (the public network):

eth0:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
     IPv4 Address: 192.168.1.152
     Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

eth1:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
     IPv4 Address: 192.168.2.152
     Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

Continue by setting your hostname manually. I used "racnode2" for the second node. Finish this dialog off by supplying your gateway and DNS servers.

7. Install Required Linux Packages for Oracle RAC

Install the following required Linux packages on both Oracle RAC nodes in the cluster.

After installing Enterprise Linux, the next step is to verify and install all packages (RPMs) required by both Oracle Clusterware and Oracle RAC. The Oracle Universal Installer (OUI) performs checks on your machine during installation to verify that it meets the appropriate operating system package requirements. To ensure that these checks complete successfully, verify the software requirements documented in this section before starting the Oracle installs.

Although many of the required packages for Oracle were installed during the Enterprise Linux installation, several will be missing either because they were considered optional within the package group or simply didn't exist in any package group! The packages listed in this section (or later versions) are required for Oracle grid infrastructure 11g Release 2 and Oracle RAC 11g Release 2 running on the Enterprise Linux 5 platform.

7. Install Required Linux Packages for Oracle RAC
Install the following required Linux packages on both Oracle RAC nodes in the cluster.

After installing Enterprise Linux, the next step is to verify and install all packages (RPMs) required by both Oracle Clusterware and Oracle RAC. The Oracle Universal Installer (OUI) performs checks on your machine during installation to verify that it meets the appropriate operating system package requirements. To ensure that these checks complete successfully, verify the software requirements documented in this section before starting the Oracle installs.

Although many of the required packages for Oracle were installed during the Enterprise Linux installation, several will be missing either because they were considered optional within the package group or simply didn't exist in any package group!

The packages listed in this section (or later versions) are required for Oracle grid infrastructure 11g Release 2 and Oracle RAC 11g Release 2 running on the Enterprise Linux 5 platform.

32-bit (x86) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

Each of the packages listed above can be found on CD #1, CD #2, and CD #3 of the Enterprise Linux 5.4 (x86) CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From Enterprise Linux 5.4 (x86) - [CD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libgomp-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From Enterprise Linux 5.4 (x86) - [CD #2]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
cd /
eject

# From Enterprise Linux 5.4 (x86) - [CD #3]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject
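Before feeding in the CDs, one way (my own sketch, not from the original guide) to see exactly which required packages are missing is a simple rpm -q loop. Adjust the package list for your architecture; on x86_64 the 32-bit compatibility packages are reported under the same base names:

# Print any required package that is not yet installed.
[root@racnode1 ~]# for pkg in binutils compat-libstdc++-33 elfutils-libelf \
    elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel \
    glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp \
    libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel; do
      rpm -q "$pkg" > /dev/null 2>&1 || echo "MISSING: $pkg"
done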

64-bit (x86_64) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

Each of the packages listed above can be found on CD #1, CD #2, CD #3, and CD #4 of the Enterprise Linux 5.4 (x86_64) CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From Enterprise Linux 5.4 (x86_64) - [CD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From Enterprise Linux 5.4 (x86_64) - [CD #2]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
cd /
eject

# From Enterprise Linux 5.4 (x86_64) - [CD #3]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From Enterprise Linux 5.4 (x86_64) - [CD #4]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh sysstat-7.*
cd /
eject

8. Network Configuration
Perform the following network configuration on both Oracle RAC nodes in the cluster.

Although we configured several of the network settings during the Linux installation, it is important to not skip this section as it contains critical steps to check that you have the networking hardware and Internet Protocol (IP) addresses required for an Oracle grid infrastructure for a cluster installation.

Network Hardware Requirements
The following is a list of hardware requirements for network configuration:

- Each Oracle RAC node must have at least two network adapters or network interface cards (NICs): one for the public network interface, and one for the private network interface (the interconnect). To use multiple NICs for the public network or for the private network, Oracle recommends that you use NIC bonding. Use separate bonding for the public and private networks (i.e. bond0 for the public network and bond1 for the private network), because during installation each interface is defined as a public or private interface. NIC bonding is not covered in this article.

- The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adapters should be the same on all nodes. For example, with our two-node cluster, you cannot configure network adapters on racnode1 with eth0 as the public interface, but on racnode2 have eth1 as the public interface. Public interface names must be the same, so you must configure eth0 as public on both nodes. You should configure the private interfaces on the same network adapters as well. If eth1 is the private interface for racnode1, then eth1 must be the private interface for racnode2.

- For the public network, each network adapter must support TCP/IP.

- For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet). UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect. Oracle recommends that you use a dedicated switch. Oracle does not support token-rings or crossover cables for the interconnect.

- For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is not connected to every private network interface.
You can test if an interconnect interface is reachable using ping.
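For example, once the private host names are defined in /etc/hosts (shown later in this section), a couple of quick ping tests from each node will confirm the interconnect endpoints are reachable:

[root@racnode1 ~]# ping -c 3 racnode2-priv    # from node 1 to node 2's private interface
[root@racnode2 ~]# ping -c 3 racnode1-priv    # from node 2 to node 1's private interface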

If you use more than one NIC for the private interconnect, then Oracle recommends that you use NIC bonding. Note that multiple private interfaces provide load balancing but not failover, unless bonded.

Starting with Oracle Clusterware 11g Release 2, you no longer need to provide a private name or IP address for the interconnect. IP addresses on the subnet you identify as private are assigned as private IP addresses for cluster member nodes. You do not need to configure these addresses manually in a hosts directory. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for the private subnet. During installation of Oracle grid infrastructure, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. You must identify each interface as a public interface, a private interface, or not used, and you must use the same private interfaces for both Oracle Clusterware and Oracle RAC. You can bond separate interfaces to a common interface to provide redundancy in case of a NIC failure, but Oracle recommends that you do not create separate interfaces for Oracle Clusterware and Oracle RAC.

Assigning IP Addresses
Recall that each node requires at least two network interfaces configured: one for the private IP address and one for the public IP address. Prior to Oracle Clusterware 11g Release 2, all IP addresses needed to be manually assigned by the network administrator using static IP addresses, never DHCP. This would include the public IP address for the node, the RAC interconnect, the virtual IP address (VIP), and, new to 11g Release 2, the Single Client Access Name (SCAN) IP address(es). In fact, in all of my previous articles, I would emphatically state that DHCP should never be used to assign any of these IP addresses. Things change a bit in Oracle grid infrastructure 11g Release 2. You now have two options that can be used to assign IP addresses to each Oracle RAC node: Grid Naming Service (GNS), which uses DHCP, or the traditional method of manually assigning static IP addresses using DNS.

Grid Naming Service (GNS)
Starting with Oracle Clusterware 11g Release 2, a second method for assigning IP addresses named Grid Naming Service (GNS) was introduced that allows all private interconnect addresses, as well as most of the VIP addresses, to be assigned using DHCP. GNS and DHCP are key elements of Oracle's new Grid Plug and Play (GPnP) feature that, as Oracle states, eliminates per-node configuration data and the need for explicit add and delete nodes steps. GNS enables a dynamic grid infrastructure through the self-management of the network requirements for the cluster. While configuring IP addresses using GNS certainly has its benefits and offers more flexibility over manually defining static IP addresses, it does come at the cost of complexity and requires components not defined in this guide on building an inexpensive Oracle RAC. For example, activating GNS in a cluster requires a DHCP server on the public network, which I felt was out of the scope of this article. To learn more about the benefits and how to configure GNS, please see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Manually Assigning Static IP Addresses - (The DNS Method)
If you choose not to use GNS, manually defining static IP addresses is still available with Oracle Clusterware 11g Release 2 and will be the method used in this article to assign all required Oracle Clusterware networking components (public IP address for the node, the RAC interconnect, virtual IP address, and SCAN). Notice that the title of this section includes the phrase "The DNS Method". Oracle recommends that static IP addresses be manually configured in a domain name server (DNS) before starting the Oracle grid infrastructure installation. However, when building an inexpensive Oracle RAC, it is not always possible you will have access to a DNS server. Previously, this would not present a huge obstacle as it was possible to define each IP address in the hosts file (/etc/hosts) on all nodes without the use of DNS.

Let's start with the RAC private interconnect. It is no longer a requirement to provide a private name or IP address for the interconnect during the Oracle grid infrastructure install (i.e. racnode1-priv or racnode2-priv). Oracle Clusterware now assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for the private subnet,
which for this article is 192.168.2.0. In practice, and for the purpose of this guide, I will continue to include a private name and IP address on each node for the RAC interconnect. It provides self-documentation and a set of end-points on the private network I can use for troubleshooting purposes:

192.168.2.151   racnode1-priv
192.168.2.152   racnode2-priv

In a production environment that uses iSCSI for network storage, it is highly recommended to configure a redundant third network interface (eth2, for example) for that storage traffic using a TCP/IP Offload Engine (TOE) card. The basic idea of a TOE is to offload the processing of TCP/IP protocols from the host processor to the hardware on the adapter or in the system. A TOE is often embedded in a network interface card (NIC) or a host bus adapter (HBA) and is used to reduce the amount of TCP/IP processing handled by the CPU and server I/O subsystem and improve overall performance. For the sake of brevity, this article will configure the iSCSI network storage traffic on the same network as the RAC private interconnect (eth1). Combining the iSCSI storage traffic and cache fusion traffic for Oracle RAC on the same network interface works great for an inexpensive test system but should never be considered for production.

The public IP address for the node and the virtual IP address (VIP) remain the same in 11g Release 2. Oracle recommends defining the name and IP address for each to be resolved through DNS and included in the hosts file for each node.
Oracle Clusterware has no problem resolving the public IP address for the node and the VIP using only a hosts file:

192.168.1.151   racnode1
192.168.1.251   racnode1-vip
192.168.1.152   racnode2
192.168.1.252   racnode2-vip

Single Client Access Name (SCAN) for the Cluster
If you have ever been tasked with extending an Oracle RAC cluster by adding a new node (or shrinking a RAC cluster by removing a node), then you know the pain of going through a list of all clients and updating their SQL*Net or JDBC configuration to reflect the new or deleted node! To address this problem, Oracle 11g Release 2 introduced a new feature known as Single Client Access Name or SCAN for short. SCAN is a new feature that provides a single host name for clients to access an Oracle Database running in a cluster. Clients using SCAN do not need to change their TNS configuration if you add or remove nodes in the cluster. The SCAN resource and its associated IP address(es) provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. The SCAN virtual IP name is similar to the names used for a node's virtual IP addresses, such as racnode1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and can be associated with multiple IP addresses, not just one address. You will be asked to provide the host name and up to three IP addresses to be used for the SCAN resource during the interview phase of the Oracle grid infrastructure installation. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.

The SCAN must be configured in GNS or DNS for Round Robin resolution to three addresses (recommended) or at least one address. The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution. If you choose not to use GNS, then Oracle states the SCAN must be resolved through DNS and not through the hosts file. If the SCAN cannot be resolved through DNS (or GNS), the Cluster Verification Utility check will fail during the Oracle grid infrastructure installation. If you do not have access to a DNS, I provide an easy workaround in the section Configuring SCAN without DNS. The workaround involves modifying the nslookup utility and should be performed before installing Oracle grid infrastructure.

In this article, I will configure SCAN to resolve to only one, manually configured static IP address using the DNS method (but not actually defining it in DNS):

192.168.1.187   racnode-cluster-scan

Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.
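Once name resolution for the SCAN is in place (whether through DNS or the nslookup workaround mentioned above), a quick sanity check from either node is to resolve the SCAN name. This is my own sketch, assuming the single 192.168.1.187 address used in this guide:

[root@racnode1 ~]# nslookup racnode-cluster-scan
# Expect the name to resolve to 192.168.1.187. Note that the address will
# not answer pings until Oracle grid infrastructure brings the SCAN VIP online.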

Configuring Public and Private Network
In our two node example, we need to configure the network on both Oracle RAC nodes for access to the public network as well as their private interconnect. The easiest way to configure network settings in Enterprise Linux is with the program "Network Configuration". Network Configuration is a GUI application that can be started from the command-line as the "root" user account as follows:

[root@racnode1 ~]# /usr/bin/system-config-network &

Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts settings are the same for both nodes and that I removed any entry that has to do with IPv6. For example:

::1     localhost6.localdomain6 localhost6

Our example Oracle RAC configuration will use the following network settings:

Oracle RAC Node 1 - (racnode1)
Device   IP Address       Subnet           Gateway        Purpose
eth0     192.168.1.151    255.255.255.0    192.168.1.1    Connects racnode1 to the public network
eth1     192.168.2.151    255.255.255.0                   Connects racnode1 (interconnect) to racnode2 (racnode2-priv)

Oracle RAC Node 2 - (racnode2)
Device   IP Address       Subnet           Gateway        Purpose
eth0     192.168.1.152    255.255.255.0    192.168.1.1    Connects racnode2 to the public network
eth1     192.168.2.152    255.255.255.0                   Connects racnode2 (interconnect) to racnode1 (racnode1-priv)

/etc/hosts (identical on both Oracle RAC nodes):

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.151    racnode1
192.168.1.152    racnode2

# Private Interconnect - (eth1)
192.168.2.151    racnode1-priv
192.168.2.152    racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251    racnode1-vip
192.168.1.252    racnode2-vip

# Single Client Access Name (SCAN)
192.168.1.187    racnode-cluster-scan

# Private Storage Network for Openfiler - (eth1)
192.168.1.195    openfiler1
192.168.2.195    openfiler1-priv

# Miscellaneous Nodes
192.168.1.1      router
192.168.1.105    packmule
192.168.1.106    melody
192.168.1.121    domo
192.168.1.122    switch1
192.168.1.125    oemprod
192.168.1.245    accesspoint

In the screen shots below, only Oracle RAC Node 1 (racnode1) is shown. Be sure to make all the proper network settings to both Oracle RAC nodes.
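With the hosts file in place on both nodes, a short loop (my own addition, not from the original text) confirms that the cluster names resolve and respond. Only the node, private, and Openfiler names are tested; the VIP and SCAN addresses will not respond until Oracle Clusterware is running, and the Openfiler names will only respond once the storage server from section 10 is on the network:

[root@racnode1 ~]# for host in racnode1 racnode2 racnode1-priv racnode2-priv \
    openfiler1 openfiler1-priv; do
      ping -c 1 $host > /dev/null 2>&1 && echo "$host OK" || echo "$host UNREACHABLE"
done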

Figure 2 : Network Configuration Screen, Node 1 (racnode1)

Figure 3 : Ethernet Device Screen, eth0 (racnode1)

Figure 4 : Ethernet Device Screen, eth1 (racnode1)

Figure 5 : Network Configuration Screen, /etc/hosts (racnode1)

Verify Network Configuration
Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from racnode1:

[root@racnode1 ~]# /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:14:6C:76:5C:71
          inet addr:192.168.1.151  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::214:6cff:fe76:5c71/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:759780 errors:0 dropped:0 overruns:0 frame:0
          TX packets:771948 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:672708275 (641.5 MiB)  TX bytes:727861314 (694.1 MiB)
          Interrupt:177 Base address:0xcf00

eth1      Link encap:Ethernet  HWaddr 00:0E:0C:64:D1:E5
          inet addr:192.168.2.151  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:cff:fe64:d1e5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:120 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24544 (23.9 KiB)  TX bytes:8634 (8.4 KiB)
          Base address:0xddc0 Memory:fe9c0000-fe9e0000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3191 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4296868 (4.0 MiB)  TX bytes:4296868 (4.0 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Confirm the RAC Node Name is Not Listed in Loopback Address
Ensure that the node names (racnode1 or racnode2) are not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1   racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1   localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation

Check and Turn Off UDP ICMP Rejections
During the Linux installation process, I indicated to not configure the firewall option. By default the option to configure a firewall is selected by the installer. This has burned me several times so I like to do a double-check that the firewall option is not configured and to ensure udp ICMP filtering is turned off.

If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware process fails, you will have something similar to the following in the <machine_name>_evmocr.log file:

08/29/2005 22:17:19
oac_init:2: Could not connect to server, clsc retcode = 9
08/29/2005 22:17:19
a_init:12!: Client init unsuccessful : [32]
ibctx:1:ERROR: INVALID FORMAT
proprinit:problem reading the bootblock or superbloc 22

When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule - or to simply have the firewall option turned off. The Oracle Clusterware software will then start to operate normally and not crash. The following commands should be executed as the root user account:

1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below) you do not have to proceed with the following steps.

[root@racnode1 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.

2. If the firewall option is operating you will need to first manually disable UDP ICMP rejections:

[root@racnode1 ~]# /etc/rc.d/init.d/iptables stop
Flushing firewall rules:                   [ OK ]
Setting chains to policy ACCEPT: filter    [ OK ]
Unloading iptables modules:                [ OK ]

3. Then, to turn UDP ICMP rejections off for next server reboot (which should always be turned off):

[root@racnode1 ~]# chkconfig iptables off
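As one extra check (not in the original text), you can confirm the firewall will stay off across all runlevels by listing the service's chkconfig settings. Output similar to the following indicates the service is disabled everywhere:

[root@racnode1 ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off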

9. Cluster Time Synchronization Service
Perform the following Cluster Time Synchronization Service configuration on both Oracle RAC nodes in the cluster.

Oracle Clusterware 11g Release 2 and later requires time synchronization across all nodes within a cluster where Oracle RAC is deployed. Oracle provides two options for time synchronization: an operating system configured network time protocol (NTP), or the new Oracle Cluster Time Synchronization Service (CTSS). Oracle Cluster Time Synchronization Service (ctssd) is designed for organizations whose Oracle RAC databases are unable to access NTP services.

Note: Please note that this guide will use Cluster Time Synchronization Service for time synchronization across both Oracle RAC nodes in the cluster. Configuring NTP is outside the scope of this article, which will therefore rely on the Cluster Time Synchronization Service as the network time protocol.

Configure Cluster Time Synchronization Service - (CTSS)
If you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then de-configure and de-install the Network Time Protocol (NTP). To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences and remove the ntp.conf file. To complete these steps on Oracle Enterprise Linux, run the following commands as the root user on both Oracle RAC nodes:

[root@racnode1 ~]# /sbin/service ntpd stop
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file:

[root@racnode1 ~]# rm /var/run/ntpd.pid

This file maintains the pid for the NTP daemon.

When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is automatically installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner (grid):

[grid@racnode1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Configure Network Time Protocol - (only if not using CTSS as documented above)
Note: This section is provided for documentation purposes only and can be used by organizations already setup to use NTP within their domain.

If you are using NTP, and you prefer to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP initialization file to set the -x flag, which prevents time from being adjusted backward. Restart the network time protocol daemon after you complete this task.

To do this on Oracle Enterprise Linux, Red Hat Linux, and Asianux systems, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:

# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""

Then, restart the NTP service:

# /sbin/service ntpd restart

On SUSE systems, modify the configuration file /etc/sysconfig/ntp with the following settings:

NTPD_OPTIONS="-x -u ntp"

Restart the daemon using the following command:

# service ntp restart
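Before moving on, a quick way to confirm that NTP is fully deconfigured on both nodes, so that the installer will start CTSS in active mode, is to verify the service is disabled and the configuration file is gone. A verification sketch, not from the original text:

[root@racnode1 ~]# chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@racnode1 ~]# ls /etc/ntp.conf
ls: /etc/ntp.conf: No such file or directory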

10. Install Openfiler
Perform the following installation on the network storage server (openfiler1).

With the network configured on both Oracle RAC nodes, the next step is to install the Openfiler software to the network storage server (openfiler1). Later in this article, the network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements.

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g.

The operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single "Volume Group" that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware and the Oracle RAC database.

Please be aware that any type of hard disk (internal or external) should work for database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target,
but decided to make use of the faster SCSI disk for this example.

To learn more about Openfiler, please visit their website at http://www.openfiler.com/

Download Openfiler
Use the links below to download Openfiler NAS/SAN Appliance, version 2.3 (Final Release), for either x86 or x86_64 depending on your hardware architecture. This guide uses x86_64. After downloading Openfiler, you will then need to burn the ISO image to CD.

32-bit (x86) Installations
openfiler-2.3-x86-disc1.iso   (322 MB)

64-bit (x86_64) Installations
openfiler-2.3-x86_64-disc1.iso   (336 MB)

If you are downloading the above ISO file to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just two (of many) software packages that can be used: UltraISO and Magic ISO Maker.
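If you are downloading to an existing Linux machine instead, the image can also be burned from the command line with cdrecord (or its wodim fork). A sketch only; the device specification is an assumption and varies by version and hardware (older builds want the dev=bus,target,lun triple reported by cdrecord -scanbus):

# Burn the Openfiler ISO from a Linux host (device name will vary).
cdrecord -v dev=/dev/cdrom openfiler-2.3-x86_64-disc1.iso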

Install Openfiler
This section provides a summary of the screens used to install the Openfiler software. For more detailed installation instructions, please visit http://www.openfiler.com/learn/. I would suggest, however, that the instructions I have provided below be used for this Oracle RAC 11g configuration. I opted to install Openfiler with all default options; the only manual change required was for configuring the local network settings.

Before installing the Openfiler software to the network storage server, you should have both NIC interfaces (cards) installed and any external hard drives connected and turned on (if you will be using external hard drives).

After downloading and burning the Openfiler ISO image (ISO file) to CD, insert the CD into the network storage server (openfiler1 in this example), power it on, and answer the installation screen prompts as noted below.

Boot Screen
The first screen is the Openfiler boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test
When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Openfiler NSA
At the welcome screen, click [Next] to continue.

Keyboard Configuration
The next screen prompts you for the Keyboard settings. Make the appropriate selection for your configuration.

Disk Partitioning Setup
The next screen asks whether to perform disk partitioning using "Automatic Partitioning" or "Manual Partitioning with Disk Druid". Although the official Openfiler documentation suggests using Manual Partitioning, I opted to use "Automatic Partitioning" given the simplicity of my example configuration. Select [Automatically partition] and click [Next] to continue.

Automatic Partitioning
If there were a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove all partitions on this system]. For my example configuration, I selected ONLY the 500GB SATA internal hard drive [sda] for the operating system and Openfiler application installation. I de-selected the 73GB SCSI internal hard drive since this disk will be used exclusively in the next section to create a single "Volume Group" that will be used for all iSCSI based shared disk storage requirements for Oracle Clusterware and Oracle RAC. I also keep the checkbox [Review (and modify if needed) the partitions created] selected. Click [Next] to continue. You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] to acknowledge this warning.

Partitioning
The installer will then allow you to view (and modify if needed) the disk partitions it automatically chose for the hard disks selected in the previous screen. In almost all cases, the installer will choose 100MB for /boot, an adequate amount of swap, and the rest going to the root (/) partition for that disk (or disks). In this example, I am satisfied with the installer's recommended partitioning for /dev/sda. The installer will also show any other internal hard disks it discovered. For my example configuration,
the installer found the 73GB SCSI internal hard drive as /dev/sdb.

For now, I will "Delete" any and all partitions on this drive (there was only one, /dev/sdb1). In the next section, I will create the required partition for this particular hard disk.

Network Configuration
I made sure to install both NIC interfaces (cards) in the network storage server before starting the Openfiler installation. This screen should have successfully detected each of the network devices.

First, make sure that each of the network devices are checked to [Active on boot]. The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 and that is OK. You must, however, configure eth1 (the storage network) to be on the same subnet you configured for eth1 on racnode1 and racnode2:

eth0:
- Check off the option to [Configure using DHCP]
- Leave the [Activate on boot] checked
- IP Address: 192.168.1.195
- Netmask: 255.255.255.0

eth1:
- Check off the option to [Configure using DHCP]
- Leave the [Activate on boot] checked
- IP Address: 192.168.2.195
- Netmask: 255.255.255.0

Continue by setting your hostname manually. I used a hostname of "openfiler1". Finish this dialog off by supplying your gateway and DNS servers.

Time Zone Selection
The next screen allows you to configure your time zone information. Make the appropriate selection for your location.

Set Root Password
Select a root password and click [Next] to continue.

About to Install
This screen is basically a confirmation screen. Click [Next] to start the installation.

Congratulations
And that's it. You have successfully installed Openfiler on the network storage server. The installer will eject the CD from the CD-ROM drive. Take out the CD and click [Reboot] to reboot the system. Once the install has completed, the server will reboot to make sure all required components, services and drivers are started and recognized. After the reboot, any external hard drives (if connected) will be discovered by the Openfiler server. If everything was successful after the reboot, you should now be presented with a text login screen and the URL to use for administering the Openfiler server.

Modify /etc/hosts File on Openfiler Server
Although not mandatory, I typically copy the contents of the /etc/hosts file from one of the Oracle RAC nodes to the new Openfiler server. This allows convenient name resolution when testing the network for the cluster.

11. Configure iSCSI Volumes using Openfiler
Perform the following configuration tasks on the network storage server (openfiler1).

Openfiler administration is performed using the Openfiler Storage Control Center, a browser based tool over an https connection on port 446. For example:

https://openfiler1.idevelopment.info:446/

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

Username: openfiler
Password: password

The first page the administrator sees is the [Status] / [System Information] screen.

To use Openfiler as an iSCSI storage server, we have to perform six major tasks: set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.

Services
To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Manage Services]:

Figure 6 : Enable iSCSI Openfiler Service

To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'. The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:

[root@openfiler1 ~]# service iscsi-target status
ietd (pid 14243) is running...

Network Access Configuration
The next step is to configure network access in Openfiler to identify both Oracle RAC nodes (racnode1 and racnode2) that will need to access the iSCSI volumes through the storage (private) network. Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volumes required by both Oracle RAC nodes. That will be accomplished later in this section by updating the ACL for each new logical volume.

As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to setup networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add both Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.

When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may

already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255. It is important to remember that you will be entering the IP address of the private network (eth1) for each of the RAC nodes in the cluster.

The following image shows the results of adding both Oracle RAC nodes:

Figure 7 : Configure Openfiler Network Access for Oracle RAC Nodes

Physical Storage
In this section, we will be creating the three iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple steps that will be performed on the internal 73GB 15K SCSI hard disk connected to the Openfiler server.

Storage devices like internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or ANY other storage can be connected to the Openfiler server and served to the clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.

In our case, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server this drive is seen as /dev/sdb (MAXTOR ATLAS15K2_73SCA). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage Control Center:

Figure 8 : Openfiler Physical Storage - Block Device Management

Partitioning the Physical Disk
The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left to their default setting where the only modification would be to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values I specified to create the primary partition on /dev/sdb:

Mode: Primary
Partition Type: Physical volume
Starting Cylinder: 1
Ending Cylinder: 8924

The size now shows 68.36 GB. To accept that we click on the "Create" button. This results in a new partition (/dev/sdb1) on our internal hard disk:

Figure 9 : Partition the Physical Volume

Volume Group Management
The next step is to create a Volume Group. We will be creating a single volume group named racdbvg that contains the newly created primary partition. From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, click on the checkbox in front of /dev/sdb1 to select that partition, enter the name of the new volume group (racdbvg), and finally click on the 'Add volume group' button. After that we are presented with the list that now shows our newly created volume group named "racdbvg":

Figure 10 : New Volume Group Created

Logical Volumes
We can now create the three logical volumes in the newly created volume group (racdbvg). From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group (racdbvg) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group - (Create a volume in "racdbvg"). Use this screen to create the following three logical (iSCSI) volumes. After creating each logical volume, the application will point you to the "Manage Volumes" screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all three iSCSI volumes are created:

iSCSI / Logical Volumes

Volume Name    Volume Description           Required Space (MB)    Filesystem Type
racdb-crs1     racdb - ASM CRS Volume 1     2,208                  iSCSI
racdb-data1    racdb - ASM Data Volume 1    33,888                 iSCSI
racdb-fra1     racdb - ASM FRA Volume 1     33,888                 iSCSI

In effect we have created three iSCSI disks that can now be presented to iSCSI clients (racnode1 and racnode2) on the network. The "Manage Volumes" screen should look as follows:

Figure 11 : New Logical (iSCSI) Volumes

iSCSI Targets
At this point, we have three iSCSI logical volumes. Before an iSCSI client can have access to them, however, an iSCSI target will need to be created for each of these three volumes. Each iSCSI logical volume will be mapped to a specific iSCSI target and the appropriate network access permissions to that target will be granted to both Oracle RAC nodes. For the purpose of this article, there will be a one-to-one mapping between an iSCSI logical volume and an iSCSI target. The following table lists the new iSCSI target names (the Target IQN) and which iSCSI logical volume it will be mapped to:

iSCSI Target / Logical Volume Mappings

Target IQN                                 iSCSI Volume Name    Volume Description
iqn.2006-01.com.openfiler:racdb.crs1       racdb-crs1           racdb - ASM CRS Volume 1
iqn.2006-01.com.openfiler:racdb.data1      racdb-data1          racdb - ASM Data Volume 1
iqn.2006-01.com.openfiler:racdb.fra1       racdb-fra1           racdb - ASM FRA Volume 1

We are now ready to create the three new iSCSI targets - one for each of the iSCSI logical volumes. There are three steps involved in creating and configuring an iSCSI target: create a unique Target IQN (basically, the universal name for the new iSCSI target), map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally, grant both of the Oracle RAC nodes access to the new iSCSI target. Please note that this process will need to be performed for each of the three iSCSI logical volumes created in the previous section. The example below illustrates the three steps required to create a new iSCSI target by creating the Oracle Clusterware / racdb-crs1 target (iqn.2006-01.com.openfiler:racdb.crs1). This three step process will need to be repeated for


each of the three new iSCSI targets listed in the table above.

Create New Target IQN
From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab "Target Configuration" is selected. This page allows you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the "Target IQN"). An example Target IQN is "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":

Figure 12 : Create New iSCSI Target : Default Target IQN

I prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (Oracle Clusterware / racdb-crs1), I will modify the default Target IQN by replacing the string "tsn.ae4683b67fd3" with "racdb.crs1" as shown in Figure 13 below:

Figure 13 : Create New iSCSI Target : Replace Default Target IQN

Once you are satisfied with the new Target IQN, click the "Add" button. This will create a new iSCSI target and then bring up a page that allows you to modify a number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.

LUN Mapping
After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volumes to it. Under the "Target Configuration" sub-tab, verify the correct iSCSI target is selected in the section "Select iSCSI Target". If not, use the pull-down menu to select the correct iSCSI target and hit the "Change" button. Next, click on the grey sub-tab named "LUN Mapping" (next to the "Target Configuration" sub-tab). Locate the appropriate iSCSI logical volume (/dev/racdbvg/racdb-crs1 in this case) and click the "Map" button. You do not need to change any settings on this page. Your screen should look similar to Figure 14 after clicking the "Map" button for volume /dev/racdbvg/racdb-crs1:


Figure 14 : Create New iSCSI Target : Map LUN

Network ACL
Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. A while back, we configured network access in Openfiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private) network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target. Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the current iSCSI target, change the "Access" for both hosts from 'Deny' to 'Allow' and click the 'Update' button:

Figure 15 : Create New iSCSI Target : Update Network ACL


Go back to the Create New Target IQN section and perform these three tasks for the remaining two iSCSI logical volumes while substituting the values found in the "iSCSI Target / Logical Volume Mappings" table.

12. Configure iSCSI Volumes on Oracle RAC Nodes

Configure the iSCSI initiator on both Oracle RAC nodes in the cluster. Creating partitions, however, should only be executed on one of the nodes in the RAC cluster.
An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two Linux servers, racnode1 and racnode2, running Oracle Enterprise Linux 5.4.

In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. Oracle Enterprise Linux 5.4 includes the Open-iSCSI iSCSI software initiator, which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of Oracle Enterprise Linux (4.x), which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm, which is included with Open-iSCSI.

The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/crs1) for each of the iSCSI target names discovered using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Before we can do any of this, however, we must first install the iSCSI initiator software.

Note: This guide makes use of ASMLib 2.0, which is a support library for the Automatic Storage Management (ASM) feature of the Oracle Database. ASMLib will be used to label all iSCSI volumes used in this guide. By default, ASMLib already provides persistent paths and permissions for storage devices used with ASM. This feature eliminates the need for updating udev or devlabel files with storage device paths and permissions. For the purpose of this article and in practice, I still opt to create persistent local SCSI device names for each of the iSCSI target names discovered using udev. This provides a means of self-documentation which helps to quickly identify the name and location of each volume.

Installing the iSCSI (initiator) service
With Oracle Enterprise Linux 5.4, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on both Oracle RAC nodes:
[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi-initiator-utils

If the iscsi-initiator-utils package is not installed, load CD #1 into each of the Oracle RAC nodes and perform the following:
[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Verify the iscsi-initiator-utils package is now installed:
[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi-initiator-utils iscsi-initiator-utils-6.2.0.871-0.10.el5 (x86_64)

Configure the iSCSI (initiator) service
After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and enable it to automatically start when the system boots. We will also configure the iscsi service to automatically start, which logs into the iSCSI targets needed at system startup.
[root@racnode1 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]

[root@racnode1 ~]# chkconfig iscsid on
[root@racnode1 ~]# chkconfig iscsi on
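If you want to confirm the daemon actually came up before moving on, a simple status check works on both nodes (the pid shown is just an example):

[root@racnode1 ~]# service iscsid status
iscsid (pid  2370) is running...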

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed

on both Oracle RAC nodes to verify the configuration is functioning properly:

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1

Manually Log In to iSCSI Targets
At this point the iSCSI initiator service has been started and each of the Oracle RAC nodes were able to discover the available targets from the network storage server. The next step is to manually log in to each of the available targets, which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-priv); I believe this is required given the discovery (above) shows the targets using the IP address.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l

Configure Automatic Log In
The next step is to ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted). As with the manual log in process described above, perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
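To confirm the node.startup change took effect, you can dump the stored node record for a target and look for the startup setting. A verification sketch, assuming the record format used by this Open-iSCSI release:

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 | grep node.startup
node.startup = automatic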
Create Persistent Local SCSI Device Names

In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev. Having a consistent local SCSI device name that identifies which iSCSI target it maps to helps to differentiate between the three volumes when configuring ASM. Although this is not a strict requirement since we will be using ASMLib 2.0 for all volumes, it provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.

When either of the Oracle RAC nodes boots and the iSCSI initiator service is started, it will automatically log in to each of the configured targets in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:racdb.crs1 may get mapped to /dev/sdb. I can determine the current mappings for all targets by looking at the /dev/disk/by-path directory:

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc

Using the output from the above listing, we can establish the following current mappings:

Current iSCSI Target Name to Local SCSI Device Name Mappings

    iSCSI Target Name                        SCSI Device Name
    iqn.2006-01.com.openfiler:racdb.crs1     /dev/sdb
    iqn.2006-01.com.openfiler:racdb.data1    /dev/sdd
    iqn.2006-01.com.openfiler:racdb.fra1     /dev/sdc

This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1 gets mapped to the local SCSI device /dev/sdc. It is therefore impractical to rely on the local SCSI device name given that there is no way to predict the iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference (i.e. /dev/iscsi/crs1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names, and instruct udev to run additional programs (a SHELL script, for example) as part of the device event handling process.

Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:

[root@racnode1 ~]# mkdir -p /etc/udev/scripts

The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to receive the events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

We now need to create the UNIX SHELL script that will be called when this event is received. Create the following script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

After creating the UNIX SHELL script, change it to executable:

[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:

[root@racnode1 ~]# service iscsi stop
Logging out of session [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 7, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging out of session [sid: 8, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logout of [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 7, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 8, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode1 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]
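Note: If the expected symlinks do not appear after the restart, the rule can be tested without logging out of any targets. This is a hedged troubleshooting aid, not part of the original procedure; it assumes the udevtest utility included with the udev package on RHEL/OEL 5, which takes a device path relative to /sys:

[root@racnode1 ~]# udevtest /block/sdb

udevtest replays the configured rules for /sys/block/sdb and prints the rules that matched, the output of the PROGRAM call-out (our iscsidev.sh script), and the symlinks that would be created, without touching /dev.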

Let's see if our hard work paid off:

[root@racnode1 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sde

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdd

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent iSCSI target name to local device name mapping, which is described in the following table:

iSCSI Target Name to Local Device Name Mappings

    iSCSI Target Name                        Local Device Name
    iqn.2006-01.com.openfiler:racdb.crs1     /dev/iscsi/crs1/part
    iqn.2006-01.com.openfiler:racdb.data1    /dev/iscsi/data1/part
    iqn.2006-01.com.openfiler:racdb.fra1     /dev/iscsi/fra1/part

Create Partitions on iSCSI Volumes

We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume. As mentioned earlier in this article, I will be using Automatic Storage Management (ASM) to store the shared files required for Oracle Clusterware, the physical database files (data/index files, online redo log files, and control files), and the Fast Recovery Area (FRA) for the clustered database.

The Oracle Clusterware shared files (OCR and voting disk) will be stored in an ASM disk group named +CRS which will be configured for external redundancy. The physical database files for the clustered database will be stored in an ASM disk group named +RACDB_DATA which will also be configured for external redundancy. Finally, the Fast Recovery Area (RMAN backups and archived redo log files) will be stored in a third ASM disk group named +FRA which too will be configured for external redundancy.

The following table lists the three ASM disk groups that will be created and which iSCSI volume they will contain:

Oracle Shared Drive Configuration

    File Types                 ASM Diskgroup Name   iSCSI Target (short) Name   ASM Redundancy   Size   ASMLib Volume Name
    OCR and Voting Disk        +CRS                 crs1                        External         2GB    ORCL:CRSVOL1
    Oracle Database Files      +RACDB_DATA          data1                       External         32GB   ORCL:DATAVOL1
    Oracle Fast Recovery Area  +FRA                 fra1                        External         32GB   ORCL:FRAVOL1

As shown in the table above, we will need to create a single Linux primary partition on each of the three iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the three iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).

In this example, I will be running the fdisk command from racnode1 to create a single primary partition on each iSCSI target using the local device names created by udev in the previous section:

/dev/iscsi/crs1/part
/dev/iscsi/data1/part
/dev/iscsi/fra1/part

Note: Creating the single partition on each of the iSCSI volumes must only be run from one of the nodes in the Oracle RAC cluster! (i.e. racnode1)
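Note: Since all three volumes take the fdisk defaults, the interactive sessions below can also be driven from a here-document. This loop is a convenience sketch only and is not part of the original procedure; fdisk prompts can vary, so verify the result afterwards (for example with fdisk -l). Run it only from racnode1:

# Create one primary partition spanning each entire iSCSI volume.
# The two blank lines accept the default first and last cylinders.
for vol in crs1 data1 fra1; do
fdisk /dev/iscsi/${vol}/part <<EOF
n
p
1


w
EOF
done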

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/crs1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012

Command (m for help): p

Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

                 Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/crs1/part1                 1        1012     2258753   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/data1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/data1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                  Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/data1/part1                 1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/fra1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/fra1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                 Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/fra1/part1                 1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Verify New Partitions

After creating all required partitions from racnode1, you should now inform the kernel of the partition changes using the following command as the "root" user account from all remaining nodes in the Oracle RAC cluster (racnode2). Note that the mapping of iSCSI target names discovered from Openfiler to local SCSI device names will be different on the two Oracle RAC nodes. This is not a concern and will not cause any problems since we will not be using the local SCSI device names but rather the local device names created by udev in the previous section.

From racnode2, run the following commands:

[root@racnode2 ~]# partprobe

[root@racnode2 ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       19452   156143767+  8e  Linux LVM

Disk /dev/sdb: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       33888    34701296   83  Linux

Disk /dev/sdc: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       33888    34701296   83  Linux

Disk /dev/sdd: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1012     2258753   83  Linux

As a final step you should run the following command on both Oracle RAC nodes to verify that udev created the new symbolic links for each new partition:

[root@racnode2 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sdb1

The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for ASMLib later in this guide:

/dev/iscsi/crs1/part1
/dev/iscsi/data1/part1
/dev/iscsi/fra1/part1
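Note: A quick way to confirm the three partition links on each node is a small loop. This is a convenience sketch only (readlink -f is part of coreutils on OEL 5); it assumes the /dev/iscsi names created by the udev rule above:

# Run as root on each Oracle RAC node.
for vol in crs1 data1 fra1; do
    dev=/dev/iscsi/${vol}/part1
    if [ -L ${dev} ]; then
        echo "${vol}: ${dev} -> $(readlink -f ${dev})"
    else
        echo "${vol}: MISSING ${dev}" >&2
    fi
done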

13. Create Job Role Separation Operating System Privileges Groups, Users, and Directories

Perform the following user, group, directory configuration, and shell limit tasks for the grid and oracle users on both Oracle RAC nodes in the cluster.

This section provides the instructions on how to create the operating system users and groups to install all Oracle software using a Job Role Separation configuration. The commands in this section should be performed on both Oracle RAC nodes as root to create these groups, users, and directories. Note that the group and user IDs must be identical on both Oracle RAC nodes in the cluster. Check to make sure that the group and user IDs you want to use are available on each cluster member node, and confirm that the primary group for each grid infrastructure for a cluster installation owner has the same name and group ID, which for the purpose of this guide is oinstall (GID 1000).

A Job Role Separation privileges configuration of Oracle is a configuration with operating system groups and users that divide administrative access privileges to the Oracle grid infrastructure installation from other administrative privileges users and groups associated with other Oracle installations (e.g. the Oracle Database software). Administrative privileges access is granted by membership in separate operating system groups, and installation privileges are granted by using different installation owners for each Oracle installation.

One OS user will be created to own each Oracle software product: "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, a user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle Database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.

This type of configuration is optional but highly recommended by Oracle for organizations that need to restrict user access to Oracle software by responsibility areas for different administrator users. For example, a small organization could simply allocate operating system user privileges so that one administrative user and one group are used for operating system authentication for all system privileges on the storage and database tiers. With this type of configuration, you can designate the oracle user to be the sole installation owner for all Oracle software (Grid infrastructure and the Oracle database software), and designate oinstall to be the single group whose members are granted all system privileges for Oracle Clusterware, Automatic Storage Management, and all Oracle Databases on the servers, and all privileges as installation owners. Other organizations, however, have specialized system roles who will be responsible for installing the Oracle software, such as system administrators, network administrators, or storage administrators. These different administrator users can configure a system in preparation for an Oracle grid infrastructure for a cluster installation, and complete all configuration tasks that require operating system root privileges. When grid infrastructure installation and configuration is completed successfully, a system administrator should only need to provide configuration information and to grant access to the database administrator to run scripts as root during an Oracle RAC installation.

The following O/S groups will be created:

    Description                                 OS Group Name   OS Users Assigned to this Group   Oracle Privilege   Oracle Group Name
    Oracle Inventory and Software Owner         oinstall        grid, oracle
    Oracle Automatic Storage Management Group   asmadmin        grid                              SYSASM             OSASM
    ASM Database Administrator Group            asmdba          grid, oracle                      SYSDBA for ASM     OSDBA for ASM
    ASM Operator Group                          asmoper         grid                              SYSOPER for ASM    OSOPER for ASM
    Database Administrator                      dba             oracle                            SYSDBA             OSDBA
    Database Operator                           oper            oracle                            SYSOPER            OSOPER

Oracle Inventory Group (typically oinstall)

Members of the OINSTALL group are considered the "owners" of the Oracle software and are granted privileges to write to the Oracle central inventory (oraInventory). When you install Oracle software on a Linux system for the first time, OUI creates the /etc/oraInst.loc file. This file identifies the name of the Oracle Inventory group (by default, oinstall) and the path of the Oracle Central Inventory directory. By default, if an oraInventory group does not exist, then the installer lists the primary group of the installation owner for the grid infrastructure for a cluster as the oraInventory group. Ensure that this group is available as a primary group for all planned Oracle software installation owners. For the purpose of this guide, the grid and oracle installation owners must be configured with oinstall as their primary group.

The Oracle Automatic Storage Management Group (typically asmadmin)

This is a required group. Create this group as a separate group if you want to have separate administration privilege groups for Oracle ASM and Oracle Database administrators. In Oracle documentation, the operating system group whose members are granted these privileges is called the OSASM group, and in code examples, where there is a group specifically created to grant this privilege, it is referred to as asmadmin.

Members of the OSASM group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system authentication. The SYSASM privilege, which was introduced in Oracle ASM 11g release 1 (11.1), is now fully separated from the SYSDBA privilege in Oracle ASM 11g release 2 (11.2). SYSASM privileges no longer provide access privileges on an RDBMS instance. Providing system privileges for the storage tier using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between ASM administration and database administration, and helps to prevent different databases using the same storage from accidentally overwriting each other's files. The SYSASM privileges permit mounting and dismounting disk groups, and other storage administration tasks.

The ASM Database Administrator group (OSDBA for ASM, typically asmdba)

Members of the ASM Database Administrator group (OSDBA for ASM) are granted a subset of the SYSASM privileges: read and write access to files managed by Oracle ASM. The grid infrastructure installation owner (grid) and all Oracle Database software owners (oracle) must be members of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.

ASM Operator Group (OSOPER for ASM, typically asmoper)

This is an optional group. Create this group if you want a separate group of operating system users to have a limited set of Oracle ASM instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege. To use the ASM Operator group to create an ASM administrator group with fewer privileges than the default asmadmin group, you must choose the Advanced installation type to install the Grid infrastructure software. In this case, OUI prompts you to specify the name of this group. For this example, this group is asmoper. If you want to have an OSOPER for ASM group, then the grid infrastructure for a cluster software owner (grid) must be a member of this group.

Database Administrator (OSDBA, typically dba)

Members of the OSDBA group can use SQL to connect to an Oracle instance as SYSDBA using operating system authentication. Members of this group can perform critical database administration tasks, such as creating the database and instance startup and shutdown. The default name for this group is dba. The SYSDBA system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself. The SYSDBA system privilege should not be confused with the database role DBA. The DBA role does not include the SYSDBA or SYSOPER system privileges.

Database Operator (OSOPER, typically oper)

Members of the OSOPER group can use SQL to connect to an Oracle instance as SYSOPER using operating system authentication. Members of this optional group have a limited set of database administrative privileges such as managing and running backups. The default name for this group is oper. The SYSOPER system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself. To use this group, choose the Advanced installation type to install the Oracle database software.

Create Groups and User for Grid Infrastructure

Let's start this section by creating the recommended OS groups and user for Grid Infrastructure on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

Set the password for the grid account:

[root@racnode1 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the grid User Account

Log in to both Oracle RAC nodes as the grid user account and create the following login script (.bash_profile):

Note: When setting the Oracle environment variables for each Oracle RAC node, make certain to assign each RAC node a unique Oracle SID. For this example, I used:

    racnode1 : ORACLE_SID=+ASM1
    racnode2 : ORACLE_SID=+ASM2

[root@racnode1 ~]# su - grid

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User:      grid
# Application:  Oracle Grid Infrastructure
# Version:      Oracle 11g release 2
# ---------------------------------------------------

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID)
# for the Automatic Storage Management (ASM) instance
# running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. +ASM1, +ASM2,...)
# ---------------------------------------------------
ORACLE_SID=+ASM1; export ORACLE_SID

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# installations. The Oracle base directory for the
# grid installation owner is the location where
# diagnostic and administrative logs, and other logs
# associated with Oracle ASM and Oracle Clusterware
# are stored.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Grid Infrastructure software. For grid
# infrastructure for a cluster installations, the Grid
# home must not be placed under one of the Oracle base
# directories, or under Oracle home directories of
# Oracle Database installation owners, or in the home
# directory of an installation owner. During
# installation, ownership of the path to the Grid
# home is changed to root. This change causes
# permission errors for other installations.
# ---------------------------------------------------
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
SQLPATH=/u01/app/common/oracle/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
#   NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022
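Note: Once the login script is in place, a quick sanity check (not part of the original guide) is to confirm the environment resolves as expected. The values shown follow directly from the .bash_profile above:

[root@racnode1 ~]# su - grid -c "env | egrep '^ORACLE_(SID|BASE|HOME)='"
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid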

Create Groups and User for Oracle Database Software

Next, create the recommended OS groups and user for the Oracle database software on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1300 dba
[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Set the password for the oracle account:

[root@racnode1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the oracle User Account

Log in to both Oracle RAC nodes as the oracle user account and create the following login script (.bash_profile):

Note: When setting the Oracle environment variables for each Oracle RAC node, make certain to assign each RAC node a unique Oracle SID. For this example, I used:

    racnode1 : ORACLE_SID=racdb1
    racnode2 : ORACLE_SID=racdb2

[root@racnode1 ~]# su - oracle

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User:      oracle
# Application:  Oracle Database Software Owner
# Version:      Oracle 11g release 2
# ---------------------------------------------------

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID) for
# the Oracle instance running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. racdb1, racdb2,...)
# ---------------------------------------------------
ORACLE_SID=racdb1; export ORACLE_SID

# ---------------------------------------------------
# ORACLE_UNQNAME
# ---------------------------------------------------
# In previous releases of Oracle Database, you were
# required to set environment variables for
# ORACLE_HOME and ORACLE_SID to start, stop, and
# check the status of Enterprise Manager. With
# Oracle Database 11g release 2 (11.2) and later, you
# need to set the environment variables ORACLE_HOME
# and ORACLE_UNQNAME to use Enterprise Manager.
# Set ORACLE_UNQNAME equal to the database unique
# name.
# ---------------------------------------------------
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# database software installations.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Database software.
# ---------------------------------------------------
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
SQLPATH=/u01/app/common/oracle/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
#   NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022
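Note: Since all group and user IDs must be identical on both Oracle RAC nodes, this is a good point to compare them. A simple convenience check, using the six groups and two users created above; run the same commands on both nodes and confirm the output matches:

[root@racnode1 ~]# for g in oinstall asmadmin asmdba asmoper dba oper; do getent group $g; done
[root@racnode1 ~]# id grid; id oracle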

Verify That the User nobody Exists

Before installing the software, complete the following procedure to verify that the user nobody exists on both Oracle RAC nodes:

1. To determine if the user exists, enter the following command:

   # id nobody
   uid=99(nobody) gid=99(nobody) groups=99(nobody)

   If this command displays information about the nobody user, then you do not have to create that user.

2. If the user nobody does not exist, then enter the following command to create it:

   # /usr/sbin/useradd nobody

3. Repeat this procedure on all the other Oracle RAC nodes in the cluster.

Create the Oracle Base Directory Path

The final step is to configure an Oracle base path compliant with an Optimal Flexible Architecture (OFA) structure and correct permissions. This will need to be performed on both Oracle RAC nodes in the cluster as root.

This guide assumes that the /u01 directory is being created in the root file system. Please note that this is being done for the sake of brevity and is not recommended as a general practice. Normally, the /u01 directory would be provisioned as a separate file system with either hardware or software mirroring configured.

[root@racnode1 ~]# mkdir -p /u01/app/grid
[root@racnode1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode1 ~]# chown -R grid:oinstall /u01
[root@racnode1 ~]# mkdir -p /u01/app/oracle
[root@racnode1 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]# chmod -R 775 /u01

At the end of this section, you should have the following on both Oracle RAC nodes:

An Oracle central inventory group, or oraInventory group (oinstall), whose members that have the central inventory group as their primary group are granted permissions to write to the oraInventory directory.

A separate OSASM group (asmadmin), whose members are granted the SYSASM privilege to administer Oracle Clusterware and Oracle ASM.

A separate OSDBA for ASM group (asmdba), whose members include grid and oracle, and who are granted access to Oracle ASM.

A separate OSOPER for ASM group (asmoper), whose members include grid, and who are granted limited Oracle ASM administrator privileges, including the permissions to start and stop the Oracle ASM instance.

An Oracle grid installation for a cluster owner (grid), with the oraInventory group as its primary group, and with the OSASM (asmadmin), OSDBA for ASM (asmdba) and OSOPER for ASM (asmoper) groups as secondary groups.

A separate OSDBA group (dba), whose members are granted the SYSDBA privilege to administer the Oracle Database.

A separate OSOPER group (oper), whose members include oracle, and who are granted limited Oracle database administrator privileges.

An Oracle Database software owner (oracle), with the oraInventory group as its primary group, and with the OSDBA (dba), OSOPER (oper), and the OSDBA for ASM group (asmdba) as secondary groups.

An OFA-compliant mount point /u01 owned by grid:oinstall before installation.

An Oracle base for the grid /u01/app/grid owned by grid:oinstall with 775 permissions, and changed during the installation process to 755 permissions. The grid installation owner Oracle base directory is the location where Oracle ASM diagnostic and administrative log files are placed.

A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775 (drwxrwxr-x) permissions. These permissions are required for installation, and are changed during the installation process to root:oinstall with 755 permissions (drwxr-xr-x).

During installation, OUI creates the Oracle Inventory directory in the path /u01/app/oraInventory. This path remains owned by grid:oinstall, to enable other Oracle software owners to write to the central inventory.

An Oracle base /u01/app/oracle owned by oracle:oinstall with 775 permissions.

Set Resource Limits for the Oracle Software Installation Users

To improve the performance of the software on Linux systems, you must increase the following resource limits for the Oracle software owner users (grid, oracle):

    Shell Limit                                              Item in limits.conf   Hard Limit
    Maximum number of open file descriptors                  nofile                65536
    Maximum number of processes available to a single user   nproc                 16384
    Maximum size of the stack segment of the process         stack                 10240

To make these changes, run the following as root:

1. On each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the following example shows the software account owners oracle and grid):

[root@racnode1 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

2. On each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

[root@racnode1 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF

3. Depending on your shell environment, make the following changes to the default shell startup file to change the ulimit settings for all Oracle installation owners (note that these examples show the users oracle and grid). For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following command:

[root@racnode1 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file by running the following command:

[root@racnode1 ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF
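Note: You can confirm the new shell limits take effect at login with a quick check. This is not part of the original text; it assumes the pam_limits entry and profile changes above are in place, and the expected values follow from those settings:

[root@racnode1 ~]# su - grid -c "ulimit -n"
65536
[root@racnode1 ~]# su - oracle -c "ulimit -u"
16384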

14. Logging In to a Remote System Using X Terminal

This guide requires access to the console of all machines (Oracle RAC nodes and Openfiler) in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated computer which would include a single monitor, keyboard, and mouse that would have direct access to the console of each machine. This solution is made possible using a Keyboard, Video, Mouse Switch, better known as a KVM Switch.

After installing the Linux operating system, there are several applications which are needed to install and configure Oracle RAC which use a Graphical User Interface (GUI) and require the use of an X11 display server. The most notable of these GUI applications (better known as X applications) is the Oracle Universal Installer (OUI), although others like the Virtual IP Configuration Assistant (VIPCA) also require the use of an X11 display server.

Given the fact that I created this article on a system that makes use of a KVM Switch, I am able to toggle to each node and rely on the native X11 display server for Linux in order to display X applications.

If you are not logged directly on to the graphical console of a node but rather you are using a remote client like SSH, PuTTY, or Telnet to connect to the node, any X application will require an X11 display server installed on the client. For example, if you are making a terminal remote connection to racnode1 from a Windows workstation, you would need to install an X11 display server on that Windows client (Xming for example). If you intend to install the Oracle grid infrastructure and Oracle RAC software from a Windows workstation or other system with an X11 display server installed, then perform the following actions:

1. Start the X11 display server software on the client workstation.

2. Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.

3. From the client workstation, log in to the server where you want to install the software as the Oracle grid infrastructure for a cluster software owner (grid) or the Oracle RAC software owner (oracle).

4. As the software owner (grid, oracle), set the DISPLAY environment:

[root@racnode1 ~]# su - grid

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY

[grid@racnode1 ~]$ # TEST X CONFIGURATION BY RUNNING xterm
[grid@racnode1 ~]$ xterm &

Figure 16: Test X11 Display Server on Windows; run xterm from Node 1 (racnode1)
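Note: As an alternative to opening the X server to remote hosts, SSH X11 forwarding accomplishes the same result and is usually enabled by default in the OpenSSH server shipped with OEL 5 (the X11Forwarding directive in /etc/ssh/sshd_config). This is a hedged alternative, not the method used in this article:

workstation$ ssh -X grid@racnode1
[grid@racnode1 ~]$ xterm &

With ssh -X, the DISPLAY variable is set automatically for the session, so no manual export is needed.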

15. Configure the Linux Servers for Oracle

Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

Overview

This section focuses on configuring both Oracle RAC Linux servers, getting each one prepared for the Oracle 11g release 2 grid infrastructure and Oracle RAC 11g release 2 installations on the Oracle Enterprise Linux 5 platform. This includes verifying enough memory and swap space, setting shared memory and semaphores, setting the maximum number of file handles, setting the IP local port range, and finally how to activate all kernel parameters for the system.

The kernel parameters discussed in this section will need to be defined on both Oracle RAC nodes in the cluster every time the machine is booted. This section provides information about setting those kernel parameters required for Oracle. Instructions for placing them in a startup script (/etc/sysctl.conf) are included in Section 17 ("All Startup Commands for Both Oracle RAC Nodes").

Memory and Swap Space Considerations

The minimum required RAM on RHEL/OEL is 1.5 GB for grid infrastructure for a cluster, or 2.5 GB for grid infrastructure for a cluster and Oracle RAC. In this guide, each Oracle RAC node will be hosting Oracle grid infrastructure and Oracle RAC and will therefore require at least 2.5 GB in each server. Each of the Oracle RAC nodes used in this article is equipped with 4 GB of physical RAM.

The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to 1.5 times the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 16 GB RAM, use swap space equal to RAM. For systems with more than 16 GB RAM, use 16 GB of RAM for swap space.

To check the amount of memory you have, type:

[root@racnode1 ~]# cat /proc/meminfo | grep MemTotal
MemTotal:      4038564 kB

To check the amount of swap you have allocated, type:

[root@racnode1 ~]# cat /proc/meminfo | grep SwapTotal
SwapTotal:     6094840 kB

If you have less than 4GB of memory (between your RAM and SWAP), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system. As root, make a file that will act as additional swap space, let's say about 500MB:

# dd if=/dev/zero of=tempswap bs=1k count=500000

Now we should change the file permissions:

# chmod 600 tempswap

Finally we format the "partition" as swap and add it to the swap space:

# mke2fs tempswap
# mkswap tempswap
# swapon tempswap
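Note: You can confirm the temporary swap file is active with swapon -s, and remove it again once the installation is complete. A convenience example using the tempswap file created above:

# swapon -s | grep tempswap
# swapoff tempswap
# rm -f tempswap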

Configure Kernel Parameters

The kernel parameters presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system. On both Oracle RAC nodes, verify that the kernel parameters described in this section are set to values greater than or equal to the recommended values.

Oracle Database 11g release 2 on RHEL/OEL 5 requires the kernel parameter settings shown below. The values given are minimums, so if your system uses a larger value, do not change it.

kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576

There are several different ways to configure (set) these parameters. For the purpose of this article, I will be making all changes permanent (through reboots) by placing all values in the /etc/sysctl.conf startup file. Also note that when setting the four semaphore values, all four values need to be entered on one line.

RHEL/OEL 5 already comes configured with default values defined for the following kernel parameters:

kernel.shmall
kernel.shmmax

Use the default values if they are the same or larger than the required values. This article assumes a fresh new install of Oracle Enterprise Linux 5 and as such, many of the required kernel parameters are already set (see above). This being the case, you can simply copy / paste the following to both Oracle RAC nodes while logged in as root:

[root@racnode1 ~]# cat >> /etc/sysctl.conf <<EOF

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

Activate All Kernel Parameters for the System

The above command persisted the required kernel parameters through reboots by inserting them in the /etc/sysctl.conf startup file. Linux allows modification of these kernel parameters on the current system while it is up and running, so there's no need to reboot the system after making kernel parameter changes. To activate the new kernel parameter values for the currently running system, run the following as root on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

Verify the new kernel parameter values by running the following on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# /sbin/sysctl -a | grep shm
vm.hugetlb_shm_group = 0
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 68719476736

[root@racnode1 ~]# /sbin/sysctl -a | grep sem
kernel.sem = 250 32000 100 128

[root@racnode1 ~]# /sbin/sysctl -a | grep file-max
fs.file-max = 6815744

[root@racnode1 ~]# /sbin/sysctl -a | grep ip_local_port_range
net.ipv4.ip_local_port_range = 9000 65500

[root@racnode1 ~]# /sbin/sysctl -a | grep 'core\.[rw]mem'
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576

16. Configure RAC Nodes for Remote Access using SSH - (Optional)

16. Configure RAC Nodes for Remote Access using SSH - (Optional)

Perform the following optional procedures on both Oracle RAC nodes to manually configure passwordless SSH connectivity between the two cluster member nodes as the "grid" and "oracle" user.

One of the best parts about this section of the document is that it is completely optional! That's not to say configuring Secure Shell (SSH) connectivity between the Oracle RAC nodes is not necessary. To the contrary, the Oracle Universal Installer (OUI) uses the secure shell tools ssh and scp during installation to run remote commands on and copy files to the other cluster nodes. During the Oracle software installations, SSH must be configured so that these commands do not prompt for a password. The ability to run SSH commands without being prompted for a password is sometimes referred to as user equivalence.

The reason this section of the document is optional is that the OUI interface in 11g release 2 includes a new feature that can automatically configure SSH during the actual install phase of the Oracle software for the user account running the installation. The automatic configuration performed by OUI creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure whenever possible.

In addition to installing the Oracle software, SSH is used after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features that perform configuration operations from local to remote nodes.

Note: When SSH is not available, the installer attempts to use the rsh and rcp commands instead of ssh and scp. These services, however, are disabled by default on most Linux systems. The use of RSH will not be discussed in this article.

Passwordless SSH is required for Oracle 11g release 2 and higher. Configuring SSH with a passphrase is no longer supported for Oracle Clusterware 11g release 2 and later releases.

Verify SSH Software is Installed

The supported version of SSH for Linux distributions is OpenSSH. OpenSSH should be included in the Linux distribution minimal installation. To confirm that SSH packages are installed, run the following command on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep ssh
openssh-askpass-4.3p2-36.el5 (x86_64)
openssh-clients-4.3p2-36.el5 (x86_64)
openssh-server-4.3p2-36.el5 (x86_64)
openssh-4.3p2-36.el5 (x86_64)

If you do not see a list of SSH packages, then install those packages for your Linux distribution. For example, load CD #1 into each of the Oracle RAC nodes and perform the following to install the OpenSSH packages:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh openssh-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Why Configure SSH User Equivalence Using the Manual Method Option?

So, if the OUI already includes a feature that automates the SSH configuration between the Oracle RAC nodes, then why provide a section on how to manually configure passwordless SSH connectivity? In fact, for the purpose of this article, I decided to forgo manually configuring SSH connectivity in favor of Oracle's automatic methods included in the installer.

One reason to include this section on manually configuring SSH is to make mention of the fact that you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run. Further documentation on preventing installation errors caused by stty commands can be found later in this section.

Another reason you may decide to manually configure SSH for user equivalence is to have the ability to run the Cluster Verification Utility (CVU) prior to installing the Oracle software. The CVU (runcluvfy.sh) is a valuable tool located in the Oracle Clusterware root directory that not only verifies all prerequisites have been met before software installation, it also has the ability to generate shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. The CVU does, however, have a prerequisite of its own: SSH user equivalency must be configured correctly for the user account running the installation. If you intend to configure SSH connectivity using the OUI, know that the CVU utility will fail before having the opportunity to perform any of its critical checks:

[grid@racnode1 ~]$ /media/cdrom/grid/runcluvfy.sh stage -pre crsinst -fixup -n racnode1,racnode2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "racnode1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  racnode1                              yes
  racnode2                              yes
Result: Node reachability check passed from node "racnode1"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racnode2                              failed
  racnode1                              failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Please note that it is not required to run the CVU utility before installing the Oracle software. Starting with Oracle 11g release 2, the installer detects when minimum requirements for installation are not completed and performs the same tasks done by the CVU to generate fixup scripts to resolve incomplete system configuration requirements.
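Relating to the first reason above, a quick way to see whether any of the software owners' login scripts contain stty commands is a simple grep. This is only an illustrative sketch; adjust the file list to whatever startup files actually exist in the grid and oracle home directories on your nodes (the -s flag silences errors for files that do not exist):

[root@racnode1 ~]# grep -s stty /home/grid/.bashrc /home/grid/.bash_profile \
                                /home/oracle/.bashrc /home/oracle/.bash_profile

Any lines returned should be wrapped in an interactive-shell test as shown later in this section.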

Configure SSH Connectivity Manually on All Cluster Nodes

To reiterate, it is not required to manually configure SSH connectivity before running the OUI. The OUI in 11g release 2 provides an interface during the install for the user account running the installation to automatically create passwordless SSH connectivity between all cluster member nodes. This is the recommended approach by Oracle and the method used in this article. The tasks below to manually configure SSH connectivity between all cluster member nodes are included for documentation purposes only. Keep in mind that this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software. If you decide to manually configure SSH connectivity, it should be performed for both user accounts.

The goal in this section is to set up user equivalence for the grid and oracle OS user accounts. User equivalence enables the grid and oracle user accounts to access all other nodes in the cluster (running commands and copying files) without the need for a password. Oracle added support in 10g release 1 for using the SSH tool suite for setting up user equivalence. Before Oracle Database 10g, user equivalence had to be configured using remote shell (RSH).

Checking Existing SSH Configuration on the System

To determine if SSH is installed and running, enter the following command:

[grid@racnode1 ~]$ pgrep sshd
2535
19852

If SSH is running, then the response to this command is a list of process ID number(s). Run this command on both Oracle RAC nodes in the cluster to verify the SSH daemons are installed and running.

You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.

Note: Automatic passwordless SSH configuration using the OUI creates RSA encryption keys on all nodes of the cluster. In the examples that follow, the DSA key is used.

Configuring Passwordless SSH on Cluster Nodes

To configure passwordless SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (grid, oracle), as SSH ignores a private key file if it is accessible by others. You must configure passwordless SSH separately for each Oracle software installation owner that you intend to use for installation (grid, oracle). In the examples that follow, the Oracle software owner listed is the grid user.

Create SSH Directory, and Create SSH Keys On Each Node

Complete the following steps on each node:

1. Log in as the software owner (in this example, the grid user):

[root@racnode1 ~]# su - grid

2. To ensure that you are logged in as grid, and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Ensure that the group and user IDs reported for your terminal window process match those assigned to the grid user. For example:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

3. If necessary, create the .ssh directory in the grid user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:

[grid@racnode1 ~]$ mkdir ~/.ssh
[grid@racnode1 ~]$ chmod 700 ~/.ssh

Note: SSH configuration will fail if the permissions are not set to 700.

4. Enter the following command to generate a DSA key pair (public and private key) for the SSH protocol. At the prompts, accept the default key file location and no passphrase (press [Enter]):

[grid@racnode1 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
7b:e9:e8:47:29:37:ea:10:10:c6:b6:7d:d2:73:e9:03 grid@racnode1

Note: SSH with passphrase is not supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file. Never distribute the private key to anyone not authorized to perform Oracle software installations.

5. Repeat steps 1 through 4 for all remaining nodes that you intend to make a member of the cluster (racnode2).
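As noted above, OpenSSH supports either key type. If you prefer RSA keys (the type the OUI's automatic configuration generates), the key-generation step would simply be the following; the rest of this section is unchanged except that the file names become id_rsa and id_rsa.pub:

[grid@racnode1 ~]$ /usr/bin/ssh-keygen -t rsa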

Add All Keys to a Common authorized_keys File

Now that both Oracle RAC nodes contain a public and private key for DSA, you will need to create an authorized key file (authorized_keys) on one of the nodes. An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) DSA public key. The ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes. Once the authorized key file contains all of the public keys, it is then distributed to all other nodes in the cluster.

Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file. For the purpose of this article, I am using the primary node in the cluster, racnode1:

1. From racnode1 (the local node) determine if the authorized key file ~/.ssh/authorized_keys already exists in the .ssh directory of the owner's home directory. In most cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist, create it now:

[grid@racnode1 ~]$ touch ~/.ssh/authorized_keys
[grid@racnode1 ~]$ ls -l ~/.ssh
total 8
-rw-r--r-- 1 grid oinstall   0 Nov 12 12:34 authorized_keys
-rw------- 1 grid oinstall 668 Nov 12 09:24 id_dsa
-rw-r--r-- 1 grid oinstall 603 Nov 12 09:24 id_dsa.pub

2. On the local node (racnode1), use SCP (Secure Copy) or SFTP (Secure FTP) to copy the content of the ~/.ssh/id_dsa.pub public key from both Oracle RAC nodes in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from racnode1. You will be prompted for the grid OS user account password for both Oracle RAC nodes accessed. The following example is being run from racnode1 and assumes a two-node cluster, with nodes racnode1 and racnode2:

[grid@racnode1 ~]$ ssh racnode1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
grid@racnode1's password: xxxxx

[grid@racnode1 ~]$ ssh racnode2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 97:ab:db:26:f6:01:20:cc:e0:63:d0:d1:73:7e:c2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
grid@racnode2's password: xxxxx

The first time you use SSH to connect to a node from a particular system, you will see a message similar to the following:

The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes

Enter yes at the prompt to continue. The public hostname will then be added to the known_hosts file in the ~/.ssh directory and you will not see this message again when you connect from this system to the same node.

3. At this point, we have the DSA public key from every node in the cluster in the authorized key file (~/.ssh/authorized_keys) on racnode1:

[grid@racnode1 ~]$ ls -l ~/.ssh
total 16
-rw-r--r-- 1 grid oinstall 1206 Nov 12 12:45 authorized_keys
-rw------- 1 grid oinstall  668 Nov 12 09:24 id_dsa
-rw-r--r-- 1 grid oinstall  603 Nov 12 09:24 id_dsa.pub
-rw-r--r-- 1 grid oinstall  808 Nov 12 12:45 known_hosts

We now need to copy it to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is racnode2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:

[grid@racnode1 ~]$ scp ~/.ssh/authorized_keys racnode2:.ssh/authorized_keys
grid@racnode2's password: xxxxx
authorized_keys                100% 1206     1.2KB/s   00:00

4. Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by logging into the node and running the following:

[grid@racnode1 ~]$ chmod 600 ~/.ssh/authorized_keys
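For clusters with more than two nodes, running the gather step (step 2 above) once per node quickly becomes tedious. The same work can be expressed as a small loop; the node list below is this article's two-node cluster, so substitute your own. You will still be prompted for the grid password once per node until the finished authorized_keys file has been distributed:

[grid@racnode1 ~]$ for node in racnode1 racnode2
> do
>   ssh $node cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
> done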

Enable SSH User Equivalency on Cluster Nodes

After you have copied the authorized_keys file that contains all public keys to each node in the cluster, complete the steps in this section to ensure passwordless SSH connectivity between all cluster member nodes is configured correctly. In this example, the Oracle grid infrastructure software owner will be used, which is named grid.

When running the test SSH commands in this section, if you see any other messages or text, apart from the date and host name, then the Oracle installation will fail. Make any changes required to ensure that only the date and host name are displayed when you enter these commands. You should ensure that any part of a login script that generates any output, or asks any questions, is modified so it acts only when the shell is an interactive shell.

1. On the system where you want to run OUI from (racnode1), log in as the grid user:

[root@racnode1 ~]# su - grid

2. If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from the terminal session:

[grid@racnode1 ~]$ ssh racnode1 "date;hostname"
Fri Nov 13 09:46:56 EST 2009
racnode1

[grid@racnode1 ~]$ ssh racnode2 "date;hostname"
Fri Nov 13 09:47:34 EST 2009
racnode2

3. Perform the same actions above from the remaining nodes in the Oracle RAC cluster (racnode2) to ensure they too can access all other nodes without being prompted for a password or pass phrase and get added to the known_hosts file:

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
Fri Nov 13 10:19:57 EST 2009
racnode1

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
Fri Nov 13 10:20:58 EST 2009
racnode1

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 97:ab:db:26:f6:01:20:cc:e0:63:d0:d1:73:7e:c2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
Fri Nov 13 10:22:00 EST 2009
racnode2

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"
Fri Nov 13 10:22:01 EST 2009
racnode2

4. If any of the nodes prompt for a password or pass phrase, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys and that you have created an Oracle software owner with identical group membership and IDs.

The Oracle Universal Installer is a GUI interface and requires the use of an X Server. From the terminal session enabled for user equivalence (the node you will be performing the Oracle installations from), set the environment variable DISPLAY to a valid X Windows display:

Bourne, Korn, and Bash shells:

[grid@racnode1 ~]$ DISPLAY=<Any X-Windows Host>:0
[grid@racnode1 ~]$ export DISPLAY

C shell:

[grid@racnode1 ~]$ setenv DISPLAY <Any X-Windows Host>:0

Note that having X11 Forwarding enabled will cause the Oracle installation to fail. After setting the DISPLAY variable to a valid X Windows display, you should perform another test of the current terminal session to ensure that X11 forwarding is not enabled:

[grid@racnode1 ~]$ ssh racnode1 hostname
racnode1

[grid@racnode1 ~]$ ssh racnode2 hostname
racnode2

If you are using a remote client to connect to the node performing the installation, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding," then this means that your authorized keys file is configured correctly; however, your SSH configuration has X11 forwarding enabled. For example:

[grid@racnode1 ~]$ export DISPLAY=melody:0
[grid@racnode1 ~]$ ssh racnode2 hostname
Warning: No xauth data; using fake authentication data for X11 forwarding.
racnode2

To correct this problem, create a user-level SSH client configuration file for the oracle OS user account that disables X11 Forwarding:

1. Using a text editor, edit or create the file ~/.ssh/config
2. Make sure that the ForwardX11 attribute is set to no. For example, insert the following into the ~/.ssh/config file:

Host *
    ForwardX11 no
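If you would rather not create a ~/.ssh/config file, OpenSSH can also disable X11 forwarding for a single invocation with the -x option. This is standard OpenSSH behavior, not specific to this article, and is handy for a one-off test:

[grid@racnode1 ~]$ ssh -x racnode2 hostname
racnode2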

Preventing Installation Errors Caused by stty Commands

During an Oracle grid infrastructure or Oracle RAC software installation, OUI uses SSH to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause makefile and other installation errors if they contain stty commands. To avoid this problem, you must modify these files in each Oracle installation owner user home directory to suppress all output on STDERR, as in the following examples:

Bourne, Bash, or Korn shell:

if [ -t 0 ]; then
   stty intr ^C
fi

C shell:

test -t 0
if ($status == 0) then
   stty intr ^C
endif

Note: If there are hidden files that contain stty commands that are loaded by the remote shell, then OUI indicates an error and stops the installation.

17. All Startup Commands for Both Oracle RAC Nodes

Up to this point, we have talked in great detail about the parameters and resources that need to be configured on both nodes in the Oracle RAC 11g configuration. This section will review those parameters, commands, and entries from previous sections that need to occur on both Oracle RAC nodes when they are booted. Verify that the following startup commands are included on both of the Oracle RAC nodes in the cluster. For each of the startup files below, entries in red should be included in each startup file.

/etc/sysctl.conf

We wanted to adjust the default and maximum send buffer size as well as the default and maximum receive buffer size for the interconnect. This file also contains those parameters responsible for configuring shared memory, semaphores, file handles, and the local IP range used by the Oracle instance.

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message queue, in bytes
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum amount of shared memory, in pages
kernel.shmall = 4294967296

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default = 262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max = 4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default = 262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max = 1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr = 1048576

Then, ensure that each of these parameters is truly in effect by running the following command on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

/etc/hosts

All machine/IP entries for nodes in our RAC cluster:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.151    racnode1
192.168.1.152    racnode2

# Private Interconnect - (eth1)
192.168.2.151    racnode1-priv
192.168.2.152    racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251    racnode1-vip
192.168.1.252    racnode2-vip

# Single Client Access Name (SCAN)
192.168.1.187    racnode-cluster-scan

# Private Storage Network for Openfiler - (eth1)
192.168.1.195    openfiler1
192.168.2.195    openfiler1-priv

# Miscellaneous Nodes
192.168.1.1      router
192.168.1.105    packmule
192.168.1.106    melody
192.168.1.121    domo
192.168.1.122    switch1
192.168.1.125    oemprod
192.168.1.245    accesspoint
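With all of the host entries in place, a short loop can confirm that every machine in the configuration resolves and responds. This is only a convenience sketch using this article's host names; the VIP and SCAN addresses are intentionally left out because they are not brought online until Oracle Clusterware is running:

[root@racnode1 ~]# for host in racnode1 racnode2 racnode1-priv racnode2-priv openfiler1 openfiler1-priv
> do
>   ping -c 1 $host > /dev/null && echo "$host: ok" || echo "$host: FAILED"
> done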

/etc/udev/rules.d/55-openiscsi.rules

Rules file to be used by udev to mount iSCSI volumes. This file contains all name=value pairs used to receive events and the call-out SHELL script to handle the event.

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

/etc/udev/scripts/iscsidev.sh

Call-out SHELL script that handles the events passed to it from the udev rules file (above) and used to mount iSCSI volumes.

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
   exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

18. Install and Configure ASMLib 2.0

The installation and configuration procedures in this section should be performed on both of the Oracle RAC nodes in the cluster. Creating the ASM disks, however, will only need to be performed on a single node within the cluster (racnode1).

In this section, we will install and configure ASMLib 2.0, which is a support library for the Automatic Storage Management (ASM) feature of the Oracle Database. In this article, ASM will be used as the shared file system and volume manager for Oracle Clusterware files (OCR and voting disk), Oracle Database files (data, online redo logs, control files, archived redo logs), and the Fast Recovery Area. All of the files and directories to be used for Oracle will be contained in a disk group (or, for the purpose of this article, three disk groups). The ASM software will be installed as part of Oracle grid infrastructure later in this guide. Starting with Oracle grid infrastructure 11g release 2 (11.2), the Automatic Storage Management and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. The Oracle grid infrastructure software will be owned by the user grid.

Automatic Storage Management simplifies database administration by eliminating the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. ASM is built into the Oracle kernel and can be used for both single and clustered instances of Oracle. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns.

There are two different methods to configure ASM on Linux:

ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. RAW devices are not required with this method as ASMLib works with block devices.

ASM with Standard Linux I/O: This method does not make use of ASMLib. Oracle database files are created on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create RAW devices for all disk partitions used by ASM.

In this article, I will be using the "ASM with ASMLib I/O" method. Keep in mind that ASMLib is only a support library for the ASM software. So, is ASMLib required for ASM? Not at all. In fact, Oracle states in Metalink Note 275315.1 that "ASMLib was provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API". ASMLib allows an Oracle Database using ASM more efficient and capable access to the disk groups it is using. I plan on performing several tests in the future to identify the performance gains in using ASMLib. Those performance metrics and testing details are out of scope of this article and therefore will not be discussed.

If you would like to learn more about Oracle ASMLib 2.0, visit http://www.oracle.com/technology/tech/linux/asmlib/

Install ASMLib 2.0 Packages

In previous editions of this article, here would be the time where you would need to download the ASMLib 2.0 software from Oracle ASMLib Downloads for Red Hat Enterprise Linux Server 5. This is no longer necessary since the ASMLib software is included with Oracle Enterprise Linux (with the exception of the Userspace Library, which is a separate download). The ASMLib 2.0 software stack includes the following packages:

32-bit (x86) Installations

ASMLib Kernel Driver
oracleasm-x.x.x-x.el5-x.x.x-x.el5.i686.rpm - (for default kernel)
oracleasm-x.x.x-x.el5xen-x.x.x-x.el5.i686.rpm - (for xen kernel)

Userspace Library
oracleasmlib-x.x.x-x.el5.i386.rpm

Driver Support Files
oracleasm-support-x.x.x-x.el5.i386.rpm

64-bit (x86_64) Installations

ASMLib Kernel Driver
oracleasm-x.x.x-x.el5-x.x.x-x.el5.x86_64.rpm - (for default kernel)
oracleasm-x.x.x-x.el5xen-x.x.x-x.el5.x86_64.rpm - (for xen kernel)

Userspace Library
oracleasmlib-x.x.x-x.el5.x86_64.rpm

Driver Support Files
oracleasm-support-x.x.x-x.el5.x86_64.rpm

With Oracle Enterprise Linux 5, the ASMLib 2.0 software packages do not get installed by default. The ASMLib 2.0 kernel drivers can be found on CD #5 while the Driver Support File can be found on CD #3. The Userspace Library will need to be downloaded as it is not included with Enterprise Linux. To determine if the Oracle ASMLib packages are installed (which in most cases, they will not be), perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort

If the Oracle ASMLib 2.0 packages are not installed, load the Enterprise Linux CD #3 and then CD #5 into each of the Oracle RAC nodes and perform the following:

From Enterprise Linux 5.4 (x86_64) - [CD #3]

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

From Enterprise Linux 5.4 (x86_64) - [CD #5]

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Download Oracle ASMLib Userspace Library

As mentioned in the previous section, the ASMLib 2.0 software is included with Enterprise Linux with the exception of the Userspace Library (a.k.a. the ASMLib support library). The Userspace Library is required and can be downloaded for free:

32-bit (x86) Installations
oracleasmlib-2.0.4-1.el5.i386.rpm

64-bit (x86_64) Installations
oracleasmlib-2.0.4-1.el5.x86_64.rpm

After downloading the Userspace Library to both Oracle RAC nodes in the cluster, install it using the following:

[root@racnode1 ~]# rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

For information on obtaining the ASMLib support library through the Unbreakable Linux Network (which is not a requirement for this article), please visit Getting Oracle ASMLib via the Unbreakable Linux Network.

After installing the ASMLib packages, verify from both Oracle RAC nodes that the software is installed:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort
oracleasm-2.6.18-164.el5-2.0.5-1.el5 (x86_64)
oracleasm-support-2.1.3-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)

Configure ASMLib

Now that you have installed the ASMLib packages for Linux, you need to configure and load the ASM kernel module. This task needs to be run on both Oracle RAC nodes as the root user account.

Note: The oracleasm command by default is in the path /usr/sbin. The /etc/init.d path, which was used in previous releases, is not deprecated, but the oracleasm binary in that path is now used typically for internal commands. If you enter the command oracleasm configure without the -i flag, then you are shown the current configuration. For example:

[root@racnode1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

1. Enter the following command to run the oracleasm initialization script with the configure option:

[root@racnode1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

The script completes the following tasks:
- Creates the /etc/sysconfig/oracleasm configuration file
- Creates the /dev/oracleasm mount point
- Mounts the ASMLib driver file system

Note: The ASMLib driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.

2. Enter the following command to load the oracleasm kernel module:

[root@racnode1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

3. Repeat this procedure on all nodes in the cluster (racnode2) where you want to install Oracle RAC.

Create ASM Disks for Oracle

Creating the ASM disks only needs to be performed from one node in the RAC cluster as the root user account. I will be running these commands on racnode1. On the other Oracle RAC node(s), you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on both Oracle RAC nodes to verify that all ASM disks were created and available.

In the section "Create Partitions on iSCSI Volumes", we configured (partitioned) three iSCSI volumes to be used by ASM. ASM will be used for storing Oracle Clusterware files, Oracle database files like online redo logs, database files, control files, archived redo log files, and the Fast Recovery Area. Use the local device names that were created by udev when configuring the three ASM volumes.

To create the ASM disks using the iSCSI target names to local device name mappings, type the following:

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk CRSVOL1 /dev/iscsi/crs1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk DATAVOL1 /dev/iscsi/data1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk FRAVOL1 /dev/iscsi/fra1/part1
Writing disk header: done
Instantiating disk: done

To make the disks available on the other nodes in the cluster (racnode2), enter the following command as root on each node:

[root@racnode2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "FRAVOL1"
Instantiating disk "DATAVOL1"
Instantiating disk "CRSVOL1"

We can now test that the ASM disks were successfully created by using the following command on both nodes in the RAC cluster as the root user account. This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks:

[root@racnode1 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1

[root@racnode2 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
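If you ever need to confirm which label an ASMLib disk carries, or whether a given device has been marked, the oracleasm querydisk command can be pointed at either a disk name or a device path. Treat the following as a sketch only; the exact output format varies with the oracleasm-support version:

[root@racnode1 ~]# /usr/sbin/oracleasm querydisk CRSVOL1
[root@racnode1 ~]# /usr/sbin/oracleasm querydisk /dev/iscsi/crs1/part1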
19. Download Oracle RAC 11g Release 2 Software

The following download procedures only need to be performed on one node in the cluster. The next step is to download and extract the required Oracle software packages from the Oracle Technology Network (OTN).

Note: If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account! Oracle offers a development and testing license free of charge. No support, however, is provided and the license does not permit production use. A full description of the license agreement is available on OTN.

32-bit (x86) Installations
http://www.oracle.com/technology/software/products/database/oracle11g/112010_linuxsoft.html

64-bit (x86_64) Installations
http://www.oracle.com/technology/software/products/database/oracle11g/112010_linx8664soft.html

You will be downloading and extracting the required software from Oracle to only one of the Linux nodes in the cluster: racnode1. You will perform all Oracle software installs from this machine. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration using remote access (scp).

Log in to the node that you will be performing all of the Oracle installations from (racnode1) as the appropriate software owner. For example, log in and download the Oracle grid infrastructure software to the directory /home/grid/software/oracle as the grid user. Next, log in and download the Oracle Database and Oracle Examples (optional) software to the /home/oracle/software/oracle directory as the oracle user.

Download and Extract the Oracle Software

Download the following software packages (all downloads are available from the same page):
- Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux
- Oracle Database 11g Release 2 (11.2.0.1.0) for Linux
- Oracle Database 11g Release 2 Examples (optional)

Extract the Oracle grid infrastructure software as the grid user:

[grid@racnode1 ~]$ mkdir -p /home/grid/software/oracle
[grid@racnode1 ~]$ mv linux.x64_11gR2_grid.zip /home/grid/software/oracle
[grid@racnode1 ~]$ cd /home/grid/software/oracle
[grid@racnode1 oracle]$ unzip linux.x64_11gR2_grid.zip

Extract the Oracle Database and Oracle Examples software as the oracle user:

[oracle@racnode1 ~]$ mkdir -p /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_1of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_2of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_examples.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_1of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_2of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_examples.zip

20. Preinstallation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following checks on both Oracle RAC nodes in the cluster. This section contains any remaining preinstallation tasks for Oracle grid infrastructure that have not already been discussed. Please note that manually running the Cluster Verification Utility (CVU) before running the Oracle installer is not required. The CVU is run automatically at the end of the Oracle grid infrastructure installation as part of the Configuration Assistants process.

Install the cvuqdisk Package for Linux

Install the operating system package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when the Cluster Verification Utility is run (either manually or at the end of the Oracle grid infrastructure installation). Use the cvuqdisk RPM for your hardware architecture (for example, x86_64 or i386). The cvuqdisk RPM can be found on the Oracle grid infrastructure installation media in the rpm directory. For the purpose of this article, the Oracle grid infrastructure media was extracted to the /home/grid/software/oracle/grid directory on racnode1 as the grid user.

To install the cvuqdisk RPM, complete the following procedures:

1. Locate the cvuqdisk RPM package, which is in the directory rpm on the installation media from racnode1:

[racnode1]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

2. Copy the cvuqdisk package from racnode1 to racnode2 as the grid user account:

[racnode2]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

3. Log in as root on both Oracle RAC nodes:

[grid@racnode1 rpm]$ su
[grid@racnode2 rpm]$ su

4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which for this article is oinstall:

[root@racnode1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@racnode2 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

5. In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package on both Oracle RAC nodes:

[root@racnode1 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1

[root@racnode2 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1
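A quick way to confirm the package landed on both nodes is a standard rpm query:

[root@racnode1 ~]# rpm -q cvuqdisk
cvuqdisk-1.0.7-1

[root@racnode2 ~]# rpm -q cvuqdisk
cvuqdisk-1.0.7-1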

Verify Oracle Clusterware Requirements with CVU - (optional)

As stated earlier in this section, running the Cluster Verification Utility before running the Oracle installer is not required. Starting with Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, then it generates fixup scripts (runfixup.sh). You can run the fixup script after you click the Fix and Check Again button during the Oracle grid infrastructure installation.

You also can have CVU generate fixup scripts before installation. If you decide that you want to run the CVU, please keep in mind that it should be run as the grid user from the node you will be performing the Oracle installation from (racnode1). In addition, SSH connectivity with user equivalence must be configured for the grid user. If you intend to configure SSH connectivity using the OUI, the CVU utility will fail before having the opportunity to perform any of its critical checks and generate the fixup scripts:

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racnode2                              failed
  racnode1                              failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Once all prerequisites for running the CVU utility have been met, run the following as the grid user account from racnode1 with user equivalence configured:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose

Review the CVU report. The only failure that should be found given the configuration described in this article is:

Check: Membership of user "grid" in group "dba"
  Node Name     User Exists   Group Exists  User in Group  Comment
  ------------  ------------  ------------  -------------  ----------------
  racnode2      yes           yes           no             failed
  racnode1      yes           yes           no             failed
Result: Membership check for user "grid" in group "dba" failed

The check fails because this guide creates role-allocated groups and users by using a Job Role Separation configuration, which is not accurately recognized by the CVU. Creating a Job Role Separation configuration was described in the section Create Job Role Separation Operating System Privileges Groups, Users, and Directories. The CVU fails to recognize this type of configuration and assumes the grid user should always be part of the dba group. This failed check can be safely ignored. All other checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

Verify Hardware and Operating System Setup with CVU

The next CVU check to run will verify the hardware and operating system setup. Again, run the following as the grid user account from racnode1 with user equivalence configured:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -post hwos -n racnode1,racnode2 -verbose

Review the CVU report. All checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

21. Install Oracle Grid Infrastructure for a Cluster

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle grid infrastructure software (Oracle Clusterware and Automatic Storage Management) will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer.

You are now ready to install the "grid" part of the environment: Oracle Clusterware and Automatic Storage Management. Complete the following steps to install Oracle grid infrastructure on your cluster. At any time during installation, if you have a question about what you are being asked to do, click the Help button on the OUI page.

Typical and Advanced Installation

Starting with 11g release 2, Oracle now provides two options for installing the Oracle grid infrastructure software:

Typical Installation: The typical installation option is a simplified installation with a minimal number of manual configuration choices. This new option provides streamlined cluster installations, especially for those customers who are new to clustering. Typical installation defaults as many options as possible to those recommended as best practices.

Advanced Installation: The advanced installation option is an advanced procedure that requires a higher degree of system knowledge. It enables you to select particular configuration choices, including additional storage and network choices, use of operating system group authentication for role-based administrative privileges, integration with IPMI, and more granularity in specifying Automatic Storage Management roles.

Given the fact that this article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles, we will be using the "Advanced Installation" option.

Configuring SCAN without DNS

For the purpose of this article, although I indicated I will be manually assigning IP addresses using the DNS method for name resolution (as opposed to GNS), I will not actually be defining the SCAN in any DNS server (or GNS for that matter). I felt it beyond the scope of this article to configure DNS. Instead, I will only be defining the SCAN host name and IP address in the hosts file (/etc/hosts) on each Oracle RAC node and any clients attempting to connect to the database cluster. Although Oracle strongly discourages this practice and highly recommends the use of GNS or DNS resolution, this section includes a workaround (OK, a total hack) to the nslookup binary that allows the Cluster Verification Utility to finish successfully during the Oracle grid infrastructure install. Please note that the workaround documented in this section is only for the sake of brevity and should not be considered for a production implementation.
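To make the workaround concrete, the only SCAN "definition" this article relies on is a single line in /etc/hosts on every RAC node and client. The host name and address below are the ones used throughout this guide; substitute your own:

# Single Client Access Name (SCAN)
192.168.1.187    racnode-cluster-scan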

Defining the SCAN in only the hosts file and not in either Grid Naming Service (GNS) or DNS is an invalid configuration and will cause the Cluster Verification Utility to fail during the Oracle grid infrastructure installation:

Figure 17: Oracle Grid Infrastructure / CVU Error - (Configuring SCAN without DNS)

INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "racnode-cluster-scan"...
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 216.24.138.153) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 192.168.1.187) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racnode-cluster-scan"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error reported by the CVU, it would be safe to ignore this check and continue by clicking the [Next] button in OUI and move forward with the Oracle grid infrastructure installation. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.

If on the other hand you want the CVU to complete successfully while still only defining the SCAN in the hosts file, simply modify the nslookup utility as root on both Oracle RAC nodes as follows. First, rename the original nslookup binary to nslookup.original on both Oracle RAC nodes:

[root@racnode1 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original

Next, create a new shell script named /usr/bin/nslookup as shown below while replacing 24.154.1.34 with your primary DNS, racnode-cluster-scan with your SCAN host name, and 192.168.1.187 with your SCAN IP address:

#!/bin/bash

HOSTNAME=${1}

if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then
    echo "Server:         24.154.1.34"
    echo "Address:        24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name:   racnode-cluster-scan"
    echo "Address: 192.168.1.187"
else
    /usr/bin/nslookup.original $HOSTNAME
fi

Finally, change the new nslookup shell script to executable:

[root@racnode1 ~]# chmod 755 /usr/bin/nslookup

Remember to perform these actions on both Oracle RAC nodes. The new nslookup shell script simply echoes back your SCAN IP address whenever the CVU calls nslookup with your SCAN host name; otherwise, it calls the original nslookup binary. The CVU will now pass during the Oracle grid infrastructure installation when it attempts to verify your SCAN:

[grid@racnode1 ~]$ cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...
  SCAN VIP name         Node      Running?  ListenerName  Port   Running?
  --------------------  --------  --------  ------------  -----  --------
  racnode-cluster-scan  racnode1  true      LISTENER      1521   true

Checking name resolution setup for "racnode-cluster-scan"...
  SCAN Name             IP Address       Status   Comment
  --------------------  ---------------  -------  ----------
  racnode-cluster-scan  192.168.1.187    passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

===============================================================================

[grid@racnode2 ~]$ cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...
  SCAN VIP name         Node      Running?  ListenerName  Port   Running?
  --------------------  --------  --------  ------------  -----  --------
  racnode-cluster-scan  racnode1  true      LISTENER      1521   true

Checking name resolution setup for "racnode-cluster-scan"...
  SCAN Name             IP Address       Status   Comment
  --------------------  ---------------  -------  ----------
  racnode-cluster-scan  192.168.1.187    passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.
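You can also invoke the wrapper directly to confirm it behaves as intended before the installer exercises it. For the SCAN name it should print the hard-coded answer from the script's echo statements; for any other host it should fall through to nslookup.original:

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         24.154.1.34
Address:        24.154.1.34#53
Non-authoritative answer:
Name:   racnode-cluster-scan
Address: 192.168.1.187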

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings, which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Grid Infrastructure

Perform the following tasks as the grid user to install Oracle grid infrastructure:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runInstaller

Screen Name: Select Installation Option
Response: Select "Install and Configure Grid Infrastructure for a Cluster".

Screen Name: Select Installation Type
Response: Select "Advanced Installation".

Screen Name: Select Product Languages
Response: Make the appropriate selection(s) for your environment.

Screen Name: Grid Plug and Play
Response: Instructions on how to configure Grid Naming Service (GNS) are beyond the scope of this article. Un-check the option to "Configure GNS" and use the values below:

  Cluster Name: racnode-cluster
  SCAN Name:    racnode-cluster-scan
  SCAN Port:    1521

After clicking [Next], the OUI will attempt to validate the SCAN information.

Screen Name: Cluster Node Information
Response: Use this screen to add the node racnode2 to the cluster and to configure SSH connectivity. Click the "Add" button to add "racnode2" and its virtual IP address "racnode2-vip" according to the table below:

  Public Node Name    Virtual Host Name
  racnode1            racnode1-vip
  racnode2            racnode2-vip

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the grid user and click the [Setup] button. This will start the "SSH Connectivity" configuration process. After the SSH configuration process successfully completes, acknowledge the dialog box.

    Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Specify Network Interface Usage:
    Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:

        Interface Name    Subnet         Interface Type
        eth0              192.168.1.0    Public
        eth1              192.168.2.0    Private

Storage Option Information:
    Select "Automatic Storage Management (ASM)".

Create ASM Disk Group:
    Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the values in the table below:

        Disk Group Name: CRS
        Redundancy:      External
        Disk Path:       ORCL:CRSVOL1

Specify ASM Password:
    For the purpose of this article, I choose to "Use same passwords for these accounts".

Failure Isolation Support:
    Configuring Intelligent Platform Management Interface (IPMI) is beyond the scope of this article. Select "Do not use Intelligent Platform Management Interface (IPMI)".

Privileged Operating System Groups:
    This article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles using a Job Role Separation configuration. Make any changes necessary to match the values in the table below:

        OSDBA for ASM:  asmdba
        OSOPER for ASM: asmoper
        OSASM:          asmadmin

Specify Installation Location:
    Set the "Oracle Base" ($ORACLE_BASE) and "Software Location" ($ORACLE_HOME) for the Oracle grid infrastructure installation:

        Oracle Base:       /u01/app/grid
        Software Location: /u01/app/11.2.0/grid

Create Inventory:
    Since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI:

        Inventory Directory:     /u01/app/oraInventory
        oraInventory Group Name: oinstall

Prerequisite Checks:
    The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.

    Starting with Oracle Clusterware 11g release 2 (11.2), the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.
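If the installer does flag fixable items, it names the generated script in the dialog. The path below is illustrative only (the directory is generated per session by the installer), but running it follows this general pattern on each node that failed the check:

[root@racnode1 ~]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh

After the script completes on all flagged nodes, return to the OUI and re-run the checks.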

    If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary:
    Click [Finish] to start the installation.

Setup:
    The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Execute Configuration scripts:
    After the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account.

    Run the orainstRoot.sh script on both nodes in the RAC cluster:

    [root@racnode1 ~]# /u01/app/oraInventory/orainstRoot.sh
    [root@racnode2 ~]# /u01/app/oraInventory/orainstRoot.sh

    Within the same new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from), stay logged in as the root user account. Run the root.sh script on both nodes in the RAC cluster one at a time starting with the node you are performing the install from:

    [root@racnode1 ~]# /u01/app/11.2.0/grid/root.sh
    [root@racnode2 ~]# /u01/app/11.2.0/grid/root.sh

    The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following which signifies a successful install:

    ...
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Configure Oracle Grid Infrastructure for a Cluster:
    The installer will run configuration assistants for Oracle Net Services (NETCA), Automatic Storage Management (ASMCA), and Oracle Private Interconnect (VIPCA). The final step performed by OUI is to run the Cluster Verification Utility (CVU).

    As described earlier in this section, if you configured SCAN "only" in your hosts file (/etc/hosts) and not in either Grid Naming Service (GNS) or manually using DNS, this is considered an invalid configuration and will cause the Cluster Verification Utility to fail. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.

    Provided this is the only error reported by the CVU, it would be safe to ignore this check and continue by clicking [Next] and then the [Close] button to exit the OUI. If on the other hand you want the CVU to complete successfully while still only defining the SCAN in the hosts file, do not click the [Next] button in OUI to bypass the error. Instead, follow the instructions in the section Configuring SCAN without DNS to modify the nslookup utility. After completing the steps documented in that section, return to the OUI and click the [Retry] button. The CVU should now finish with no errors. If the configuration assistants and CVU run successfully, you can exit OUI by clicking [Next] and then [Close].

Finish:
    At the end of the installation, click the [Close] button to exit the OUI.
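As an optional belt-and-braces step, the Cluster Verification Utility can also be re-run by hand after the installer exits to confirm the clusterware stack post-install. A minimal sketch (run as grid from either node; expect the same SCAN caveat noted above if you resolve the SCAN through /etc/hosts):

[grid@racnode1 ~]$ cluvfy stage -post crsinst -n racnode1,racnode2 -verbose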

22. Postinstallation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following postinstallation procedures on both Oracle RAC nodes in the cluster.

Verify Oracle Clusterware Installation

After the installation of Oracle grid infrastructure, you should run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.

Check CRS Status

[grid@racnode1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources

Note: The crs_stat command is deprecated in Oracle Clusterware 11g release 2 (11.2).

[grid@racnode1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.scan1.vip  ora....ip.type 0/5    0/0    ONLINE    ONLINE    racnode1

Check Cluster Nodes

[grid@racnode1 ~]$ olsnodes -n
racnode1        1
racnode2        2

Check Oracle TNS Listener Process on Both Nodes

[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

[grid@racnode2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER

Caution: After installation is complete, do not remove manually or run cron jobs that remove /tmp/.oracle or /var/tmp/.oracle or its files while Oracle Clusterware is up. If you remove these files, then Oracle Clusterware could encounter intermittent hangs, and you will encounter error CRS-0184: Cannot communicate with the CRS daemon.
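Since crs_stat is deprecated in this release, the same resource overview is also available through the cluster-aware interface; a quick sketch of the modern equivalent, which this guide uses again later:

[grid@racnode1 ~]$ crsctl status resource -t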

Check Oracle Cluster Registry (OCR)

[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2404
         Available space (kbytes) :     259716
         ID                       : 1259866904
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

Check Voting Disk

[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name        Disk group
--  -----    -----------------                 ---------        ---------
 1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4  (ORCL:CRSVOL1)   [CRS]
Located 1 voting disk(s).

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.

Note: To manage Oracle ASM or Oracle Net 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home). When we install Oracle Real Application Clusters (the Oracle database software), you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home.

Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.

Backing up the voting disks in Oracle Clusterware 11g release 2 is no longer required. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added. To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR), please refer to the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).
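Related to the automatic OCR backups mentioned above, you can list the backups the cluster has taken so far. A small sketch (on a freshly built cluster the list may still be empty, as the first automatic backup is taken a few hours after installation):

[grid@racnode1 ~]$ ocrconfig -showbackup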

Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh file copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@racnode1 ~]# cd /u01/app/11.2.0/grid
[root@racnode1 grid]# cp root.sh root.sh.racnode1.AFTER_INSTALL_NOV-20-2009

[root@racnode2 ~]# cd /u01/app/11.2.0/grid
[root@racnode2 grid]# cp root.sh root.sh.racnode2.AFTER_INSTALL_NOV-20-2009

Install Cluster Health Management Software - (Optional)

To address troubleshooting issues, Oracle recommends that you install Instantaneous Problem Detection OS Tool (IPD/OS) if you are using Linux kernel 2.6.9 or higher. This article was written using Oracle Enterprise Linux 5 update 4 which uses the 2.6.18 kernel:

[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-164.el5 #1 SMP Thu Sep 3 04:15:13 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

If you are using a Linux kernel earlier than 2.6.9, then you would use OS Watcher and RACDDT, which are available through the My Oracle Support website (formerly Metalink).

The IPD/OS tool is designed to detect and analyze operating system and cluster resource-related degradation and failures. The tool can provide better explanations for many issues that occur in clusters where Oracle Clusterware, Oracle ASM and Oracle RAC are running, such as node evictions. It tracks the operating system resource consumption at each node, process, and device level continuously. It collects and analyzes clusterwide data. In real time mode, when thresholds are reached, an alert is shown to the operator. For root cause analysis, historical data can be replayed to understand what was happening at the time of failure.

Instructions for installing and configuring the IPD/OS tool are beyond the scope of this article and will not be discussed. You can download the IPD/OS tool along with a detailed installation and configuration guide at the following URL:

http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html

23. Create ASM Disk Groups for Data and Fast Recovery Area

Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (racnode1) to create the additional ASM disk groups which will be used to create the clustered database.

During the installation of Oracle grid infrastructure, we configured one ASM disk group named +CRS which was used to store the Oracle clusterware files (OCR and voting disk). In this section, we will create two additional ASM disk groups using the ASM Configuration Assistant (asmca). These new ASM disk groups will be used later in this guide when creating the clustered database.

The first ASM disk group will be named +RACDB_DATA and will be used to store all Oracle physical database files (data, online redo logs, control files, archived redo logs). A second ASM disk group will be created for the Fast Recovery Area named +FRA.

Verify Terminal Shell Environment

Before starting the ASM Configuration Assistant, log in to racnode1 as the owner of the Oracle grid infrastructure software which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@racnode1 ~]$ asmca &

Disk Groups:
    From the "Disk Groups" tab, click the "Create" button.

Create Disk Group:
    The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide. If the ASMLib volumes we created earlier in this article do not show up in the "Select Member Disks" window as eligible (ORCL:DATAVOL1 and ORCL:FRAVOL1), then click on the "Change Disk Discovery Path" button and input "ORCL:*".

    When creating the "Data" ASM disk group, use "RACDB_DATA" for the "Disk Group Name". In the "Redundancy" section, choose "External (none)". Finally, check the ASMLib volume "ORCL:DATAVOL1" in the "Select Member Disks" section. After verifying all values in this dialog are correct, click the "[OK]" button.

Disk Groups:
    After creating the first ASM disk group, you will be returned to the initial dialog. Click the "Create" button again to create the second ASM disk group.

Create Disk Group:
    The "Create Disk Group" dialog should now show the final remaining ASMLib volume. When creating the "Fast Recovery Area" disk group, use "FRA" for the "Disk Group Name". In the "Redundancy" section, choose "External (none)". Finally, check the ASMLib volume "ORCL:FRAVOL1" in the "Select Member Disks" section. After verifying all values in this dialog are correct, click the "[OK]" button.

Disk Groups:
    Exit the ASM Configuration Assistant by clicking the [Exit] button.
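If you prefer a command-line route, the same two disk groups can be created from SQL*Plus while connected to the local ASM instance. This is only a sketch of an alternative, not a step in this guide; it assumes the grid user's environment points at the Grid home and the +ASM1 instance. Note that a disk group created this way is mounted only on the node where you ran the statement (asmca mounts it on all nodes for you):

[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> create diskgroup racdb_data external redundancy disk 'ORCL:DATAVOL1';
SQL> create diskgroup fra external redundancy disk 'ORCL:FRAVOL1';

On the second node you would then mount each group manually (alter diskgroup racdb_data mount;) or register it with srvctl.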

24. Install Oracle Database 11g with Oracle Real Application Clusters

Perform the Oracle Database software installation from only one of the Oracle RAC nodes in the cluster (racnode1)! The Oracle Database software will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the grid infrastructure software is functional, you can install the Oracle Database software on the one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all the other nodes in the cluster during the installation process.

For the purpose of this guide, we will forgo the "Create Database" option when installing the Oracle Database software. The clustered database will be created later in this guide using the Database Configuration Assistant (DBCA) after all installs have been completed.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Software

Perform the following tasks as the oracle user to install the Oracle Database software:

[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[oracle@racnode1 ~]$ export DISPLAY

[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/database
[oracle@racnode1 database]$ ./runInstaller

Configure Security Updates:
    For the purpose of this article, un-check the security updates checkbox and click the [Next] button to continue. Acknowledge the warning dialog indicating you have not provided an email address by clicking the [Yes] button.

Installation Option:
    Select "Install database software only".

Grid Options:
    Select the "Real Application Clusters database installation" radio button (default) and verify that both Oracle RAC nodes are checked in the "Node Name" window. Next, click the [SSH Connectivity] button. Enter the "OS Password" for the oracle user and click the [Setup] button. This will start the "SSH Connectivity" configuration process. After the SSH configuration process successfully completes, acknowledge the dialog box. Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Product Languages:
    Make the appropriate selection(s) for your environment.

Database Edition:
    Select "Enterprise Edition".

Installation Location:
    Specify the Oracle base and Software location (Oracle_home) as follows:

        Oracle Base:       /u01/app/oracle
        Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Operating System Groups:
    Select the OS groups to be used for the SYSDBA and SYSOPER privileges:

        Database Administrator (OSDBA) Group: dba
        Database Operator (OSOPER) Group:     oper

Prerequisite Checks:
    The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database software.

    Starting with 11g release 2 (11.2), the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

    If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary:
    Click [Finish] to start the installation.

Install Product:
    The installer performs the Oracle Database software installation process on both Oracle RAC nodes.

Execute Configuration scripts:
    After the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on both Oracle RAC nodes. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account. Run the root.sh script on all nodes in the RAC cluster:

    [root@racnode1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
    [root@racnode2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

    Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Finish:
    At the end of the installation, click the [Close] button to exit the OUI.

25. Install Oracle Database 11g Examples (formerly Companion)

Now that the Oracle Database 11g software is installed, you have the option to install the Oracle Database 11g Examples. Like the Oracle Database software install, the Examples software is only installed from one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all the other nodes in the cluster during the installation process.

Perform the Oracle Database 11g Examples software installation from only one of the Oracle RAC nodes in the cluster (racnode1)! The Oracle Database Examples software will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Examples

Perform the following tasks as the oracle user to install the Oracle Database Examples:

[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/examples
[oracle@racnode1 examples]$ ./runInstaller

Installation Location:
    Specify the Oracle base and Software location (Oracle_home) as follows:

        Oracle Base:       /u01/app/oracle
        Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Prerequisite Checks:
    The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database Examples software. Starting with 11g release 2 (11.2), the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary:
    Click [Finish] to start the installation.

Install Product:
    The installer performs the Oracle Database Examples software installation process on both Oracle RAC nodes.

Finish:
    At the end of the installation, click the [Close] button to exit the OUI.

26. Create the Oracle Cluster Database

The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (racnode1).

Use the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the DBCA, make certain that the $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.2.0/dbhome_1 environment. Setting environment variables in the login script for the oracle user account was covered in Section 13.

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running before attempting to start the clustered database creation process:

[oracle@racnode1 ~]$ su - grid -c "crs_stat -t -v"
Password: *********
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1

ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.scan1.vip  ora....ip.type 0/5    0/0    ONLINE    ONLINE    racnode1

Verify Terminal Shell Environment

Before starting the Database Configuration Assistant (DBCA), log in to racnode1 as the owner of the Oracle Database software which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create the Clustered Database

To start the database creation process, run the following as the oracle user:

[oracle@racnode1 ~]$ dbca &

Welcome Screen:
    Select Oracle Real Application Clusters database.

Operations:
    Select Create a Database.

Database Templates:
    Select Custom Database.

Database Identification:
    Cluster database configuration.

        Configuration Type: Admin-Managed

    Database naming.

        Global Database Name: racdb.idevelopment.info
        SID Prefix:           racdb

    Note: I used idevelopment.info for the database domain. You may use any database domain. Keep in mind that this domain does not have to be a valid DNS domain.

    Node Selection. Click the [Select All] button to select all servers: racnode1 and racnode2.

Management Options:
    Leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local management.

Database Credentials:
    I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit number.

Database File Locations:
    Specify storage type and locations for database files.

        Storage Type:      Automatic Storage Management (ASM)
        Storage Locations: Use Oracle-Managed Files
        Database Area:     +RACDB_DATA

Specify ASMSNMP Password:
    Specify the ASMSNMP password for the ASM instance.

Recovery Configuration:
    Check the option for Specify Fast Recovery Area. For the Fast Recovery Area, click the [Browse] button and select the disk group name +FRA. My disk group has a size of about 33GB. When defining the Fast Recovery Area size, use the entire volume minus 10% for overhead (33-10%=30 GB). I used a Fast Recovery Area Size of 30 GB (30413 MB).

Database Content:
    I left all of the Database Components (and destination tablespaces) set to their default value although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.

Initialization Parameters:
    Change any parameters for your environment. I left them all at their default settings.

Database Storage:
    Change any parameters for your environment. I left them all at their default settings.

Creation Options:
    Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start. Click OK on the "Summary" screen.

End of Database Creation:
    At the end of the database creation, exit from the DBCA.

When the DBCA has completed, you will have a fully functional Oracle RAC cluster running!

Verify Clustered Database is Open

[oracle@racnode1 ~]$ su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"
Password: *********
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.FRA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.RACDB_DATA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.asm
               ONLINE  ONLINE       racnode1                 Started
               ONLINE  ONLINE       racnode2                 Started
ora.eons
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.gsd
               OFFLINE OFFLINE      racnode1
               OFFLINE OFFLINE      racnode2
ora.net1.network
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.ons
               ONLINE  ONLINE       racnode1

               ONLINE  ONLINE       racnode2
ora.registry.acfs
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode1
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       racnode1                 Open
      2        ONLINE  ONLINE       racnode2                 Open
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2
ora.scan1.vip
      1        ONLINE  ONLINE       racnode1

Oracle Enterprise Manager

If you configured Oracle Enterprise Manager (Database Control), it can be used to view the database configuration and current status of the database. The URL for this example is: https://racnode1:1158/em

[oracle@racnode1 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://racnode1:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log

Figure 18: Oracle Enterprise Manager - (Database Console)
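Should you ever need to bounce Database Control on this node (for example, after changing its configuration), the same utility stops and starts it. A quick sketch, run as the oracle user:

[oracle@racnode1 ~]$ emctl stop dbconsole
[oracle@racnode1 ~]$ emctl start dbconsole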

27. Post Database Creation Tasks - (Optional)

This section offers several optional tasks that can be performed on your new Oracle 11g database in order to enhance availability as well as database management.

Re-compile Invalid Objects

Run the utlrp.sql script to recompile all invalid PL/SQL packages now instead of when the packages are accessed for the first time. This step is optional but recommended.

[oracle@racnode1 ~]$ sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql

Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks and logs all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its group of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration: the single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may be different depending on the workload and backup / recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode", which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode; however, most DBAs opt to bypass this option during initial database creation. In cases like this where the database is in no archive log mode, it is a simple task to put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, use the following tasks to put a RAC enabled database into archive log mode. For the purpose of this article, I will use the node racnode1 which runs the racdb1 instance:

1. Log in to one of the nodes (i.e. racnode1) as oracle and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:

   [oracle@racnode1 ~]$ sqlplus / as sysdba
   SQL> alter system set cluster_database=false scope=spfile sid='racdb1';
   System altered.

2. Shutdown all instances accessing the clustered database as the oracle user:

   [oracle@racnode1 ~]$ srvctl stop database -d racdb

3. Using the local instance, MOUNT the database:

   [oracle@racnode1 ~]$ sqlplus / as sysdba

   SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009
   Copyright (c) 1982, 2009, Oracle.  All rights reserved.
   Connected to an idle instance.

   SQL> startup mount
   ORACLE instance started.

   Total System Global Area 1653518336 bytes
   Fixed Size                  2213896 bytes
   Variable Size            1073743864 bytes
   Database Buffers          570425344 bytes
   Redo Buffers                7135232 bytes

4. Enable archiving:

   SQL> alter database archivelog;
   Database altered.

5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:

   SQL> alter system set cluster_database=true scope=spfile sid='racdb1';
   System altered.

6. Shutdown the local instance:

   SQL> shutdown immediate
   ORA-01109: database not open
   Database dismounted.
   ORACLE instance shut down.

7. Bring all instances back up as the oracle account using srvctl:

   [oracle@racnode1 ~]$ srvctl start database -d racdb

8. Log in to the local instance and verify Archive Log Mode is enabled:

   [oracle@racnode1 ~]$ sqlplus / as sysdba

   SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:33:38 2009
   Copyright (c) 1982, 2009, Oracle.  All rights reserved.
   Connected to:
   Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
   With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
   Data Mining and Real Application Testing options

   SQL> archive log list
   Database log mode              Archive Mode
   Automatic archival             Enabled
   Archive destination            USE_DB_RECOVERY_FILE_DEST
   Oldest online log sequence     69
   Next log sequence to archive   70
   Current log sequence           70

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!

Download and Install Custom Oracle Database Scripts

DBAs rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database including space management, performance, backups, security, and session management. The Oracle DBA scripts archive can be downloaded using the following link:

http://www.idevelopment.info/data/Oracle/DBA_scripts/common.zip

As the oracle user account, download the common.zip archive to the $ORACLE_BASE directory of each node in the cluster.
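A minimal sketch of pulling the archive down from the command line on each node (assuming the nodes have outbound HTTP access; downloading with a browser and copying the file over works just as well):

[oracle@racnode1 ~]$ cd /u01/app/oracle
[oracle@racnode1 oracle]$ wget http://www.idevelopment.info/data/Oracle/DBA_scripts/common.zip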

For the purpose of this example, the common.zip archive will be copied to /u01/app/oracle. Next, unzip the archive file to the $ORACLE_BASE directory. For example, perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

[oracle@racnode1 ~]$ mv common.zip /u01/app/oracle
[oracle@racnode1 ~]$ cd /u01/app/oracle
[oracle@racnode1 ~]$ unzip common.zip

The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from within SQL*Plus while in any directory. For UNIX, verify the following environment variable is set and included in your login shell script:

ORACLE_PATH=$ORACLE_BASE/common/oracle/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

Note: The ORACLE_PATH environment variable should already be set in the .bash_profile login script that was created in the section Create Login Script for the oracle User Account.

Now that the Oracle DBA scripts have been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you should be able to run any of the SQL scripts in $ORACLE_BASE/common/oracle/sql while logged into SQL*Plus. For example, to query tablespace information while logged into the Oracle database as a DBA user:

SQL> @dba_tablespaces

Status   Tablespace Name  TS Type    Ext. Mgt.  Seg. Mgt.  Tablespace Size    Used (in bytes)  Pct. Used
-------  ---------------  ---------  ---------  ---------  -----------------  ---------------  ---------
ONLINE   SYSAUX           PERMANENT  LOCAL      AUTO         < --- SNIP --- >                          81
ONLINE   UNDOTBS1         UNDO       LOCAL      MANUAL       < --- SNIP --- >                          90
ONLINE   USERS            PERMANENT  LOCAL      AUTO         < --- SNIP --- >                          20
ONLINE   SYSTEM           PERMANENT  LOCAL      MANUAL       < --- SNIP --- >                          96
ONLINE   EXAMPLE          PERMANENT  LOCAL      AUTO         < --- SNIP --- >                          54
ONLINE   UNDOTBS2         UNDO       LOCAL      MANUAL       < --- SNIP --- >                          10
ONLINE   TEMP             TEMPORARY  LOCAL      MANUAL       < --- SNIP --- >                          88
                                                                                               ---------
avg                                                                                                    63

7 rows selected.

To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script:

SQL> @help.sql

========================================
Automatic Shared Memory Management
========================================
asmm_components.sql

========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql

========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql

28. Create / Alter Tablespaces - (Optional)

When creating the clustered database, we left all tablespaces set to their default size. If you are using a large drive for the shared storage, you may want to make a sizable testing database.

Below are several optional SQL commands for modifying and creating all tablespaces for the test database. Please keep in mind that the database file names (OMF files) used in this example may differ from what the Oracle Database Configuration Assistant (DBCA) creates for your environment. When working through this section, substitute the data file names that were created in your environment where appropriate.

The following query can be used to determine the file names for your environment:

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files;

TABLESPACE_NAME   FILE_NAME
---------------   --------------------------------------------------
EXAMPLE           +RACDB_DATA/racdb/datafile/example.263.703530435
SYSAUX            +RACDB_DATA/racdb/datafile/sysaux.260.703530411
SYSTEM            +RACDB_DATA/racdb/datafile/system.259.703530397
TEMP              +RACDB_DATA/racdb/tempfile/temp.262.703530429
UNDOTBS1          +RACDB_DATA/racdb/datafile/undotbs1.261.703530423
UNDOTBS2          +RACDB_DATA/racdb/datafile/undotbs2.264.703530441
USERS             +RACDB_DATA/racdb/datafile/users.265.703530447

7 rows selected.

[oracle@racnode1 ~]$ sqlplus "/ as sysdba"

SQL> create user scott identified by tiger default tablespace users;
User created.

SQL> grant dba, resource, connect to scott;
Grant succeeded.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/users.265.703530447' resize 1024m;
Database altered.

SQL> alter tablespace users add datafile '+RACDB_DATA' size 1024m autoextend off;
Tablespace altered.

SQL> create tablespace indx datafile '+RACDB_DATA' size 1024m
  2  autoextend on next 100m maxsize unlimited
  3  extent management local autoallocate
  4  segment space management auto;
Tablespace created.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/system.259.703530397' resize 1024m;
Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/sysaux.260.703530411' resize 1024m;
Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs1.261.703530423' resize 1024m;
Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs2.264.703530441' resize 1024m;
Database altered.

SQL> alter database tempfile '+RACDB_DATA/racdb/tempfile/temp.262.703530429' resize 1024m;
Database altered.

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status   Tablespace Name  TS Type    Ext. Mgt.  Seg. Mgt.  Tablespace Size    Used (in bytes)  Pct. Used
-------  ---------------  ---------  ---------  ---------  -----------------  ---------------  ---------
ONLINE   SYSAUX           PERMANENT  LOCAL      AUTO           1,073,741,824  < --- SNIP --- >        48
ONLINE   UNDOTBS1         UNDO       LOCAL      MANUAL         1,073,741,824  < --- SNIP --- >        88
ONLINE   USERS            PERMANENT  LOCAL      AUTO           2,147,483,648  < --- SNIP --- >         0
ONLINE   SYSTEM           PERMANENT  LOCAL      MANUAL         1,073,741,824  < --- SNIP --- >        65
ONLINE   EXAMPLE          PERMANENT  LOCAL      AUTO             157,286,400  < --- SNIP --- >        54
ONLINE   INDX             PERMANENT  LOCAL      AUTO           1,073,741,824  < --- SNIP --- >         0
ONLINE   UNDOTBS2         UNDO       LOCAL      MANUAL         1,073,741,824  < --- SNIP --- >         2
ONLINE   TEMP             TEMPORARY  LOCAL      MANUAL         1,073,741,824  < --- SNIP --- >         6
                                                              ---------------                  ---------
avg                                                                                                   33
sum                                                            8,747,220,992

8 rows selected.

29. Verify Oracle Grid Infrastructure and Database Configuration

The following Oracle Clusterware and Oracle RAC verification checks can be performed on any of the Oracle RAC nodes in the cluster. For the purpose of this article, I will only be performing checks from racnode1 as the oracle OS user.

Most of the checks described in this section use the Server Control Utility (SRVCTL) and can be run as either the oracle or grid OS user. There are five node-level tasks defined for SRVCTL:

    Adding and deleting node-level applications
    Setting and un-setting the environment for node-level applications
    Administering node applications
    Administering ASM instances
    Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)

Oracle also provides the Oracle Clusterware Control (CRSCTL) utility. CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects.

Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:

    Starting and stopping Oracle Clusterware resources
    Enabling and disabling Oracle Clusterware daemons
    Checking the health of the cluster
    Managing resources that represent third-party applications
    Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity
    Debugging Oracle Clusterware components

For the purpose of this article (and this section), we will only make use of the "Checking the health of the cluster" operation, which uses the Clusterized (Cluster Aware) Command:

    crsctl check cluster

Many subprograms and commands were deprecated in Oracle Clusterware 11g release 2 (11.2):

    crs_stat
    crs_register
    crs_unregister
    crs_start
    crs_stop
    crs_getperm
    crs_profile
    crs_relocate
    crs_setperm
    crsctl check crsd
    crsctl check cssd
    crsctl check evmd
    crsctl debug log
    crsctl set css votedisk
    crsctl start resources
    crsctl stop resources

Check the Health of the Cluster - (Clusterized Command)

Run as the grid user.

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All Oracle Instances - (Database Status)

[oracle@racnode1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2

Single Oracle Instance - (Status of Specific Instance)

[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node racnode1

Node Applications - (Status)

[oracle@racnode1 ~]$ srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2

Node Applications - (Configuration)

[oracle@racnode1 ~]$ srvctl config nodeapps
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016

List all Configured Databases

[oracle@racnode1 ~]$ srvctl config database
racdb

Database - (Configuration)

[oracle@racnode1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed

ASM - (Status)

[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2

ASM - (Configuration)

$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

TNS listener - (Status)

[oracle@racnode1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2

TNS listener - (Configuration)

[oracle@racnode1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

SCAN - (Status)

[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode1

SCAN - (Configuration)

[oracle@racnode1 ~]$ srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187

VIP - (Status of Specific Node)

[oracle@racnode1 ~]$ srvctl status vip -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1

[oracle@racnode1 ~]$ srvctl status vip -n racnode2
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2

VIP - (Configuration of Specific Node)

[oracle@racnode1 ~]$ srvctl config vip -n racnode1
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[oracle@racnode1 ~]$ srvctl config vip -n racnode2
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Configuration for Node Applications - (VIP, GSD, ONS, Listener)

[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

Verifying Clock Synchronization across the Cluster Nodes

[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode1                              passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racnode1                              Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------

  racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes: "[racnode1]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.

All running instances in the cluster - (SQL)

SELECT inst_id
     , instance_number inst_no
     , instance_name inst_name
     , parallel
     , status
     , database_status db_status
     , active_state state
     , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO  INST_NAME  PAR  STATUS   DB_STATUS  STATE    HOST
--------  -------  ---------  ---  -------  ---------  -------  --------
       1        1  racdb1     YES  OPEN     ACTIVE     NORMAL   racnode1
       2        2  racdb2     YES  OPEN     ACTIVE     NORMAL   racnode2

All database files and the ASM disk group they reside in - (SQL)

select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429

19 rows selected.

ASM Disk Volumes - (SQL)

SELECT path FROM v$asm_disk;

PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1
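To round out the storage picture, a quick look at disk group capacity works from either instance as well. A small sketch using the standard dynamic performance view (the values returned will of course reflect your own volumes):

SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;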

30. Starting / Stopping the Cluster

At this point, everything has been installed and configured for Oracle RAC 11g release 2. Oracle grid infrastructure was installed by the grid user while the Oracle RAC software was installed by oracle. We also have a fully functional clustered database running named racdb.

After all of that hard work, you may ask, "OK, so how do I start and stop services?". If you have followed the instructions in this guide, all services - including Oracle Clusterware, ASM, network, SCAN, VIP, the Oracle Database, and so on - should start automatically on each reboot of the Linux nodes.

There are times, however, when you might want to take down the Oracle services on a node for maintenance purposes and restart the Oracle Clusterware stack at a later time. Or you may find that Enterprise Manager is not running and needs to be started. This section provides the commands necessary to stop and start the Oracle Clusterware stack on a local server ( racnode1 ).

The following stop/start actions need to be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the " crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded        <-- Notice racnode1 VIP moved to racnode2
CRS-2676: Start of 'ora.scan1.vip' on 'racnode2' succeeded           <-- Notice SCAN moved to racnode2
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode2' succeeded <-- Notice LISTENER_SCAN1 moved to racnode2
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

Note: If any resources that Oracle Clusterware manages are still running after you run the " crsctl stop cluster" command, then the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.

Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying -all . The following will bring down the Oracle Clusterware stack on both racnode1 and racnode2 :
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

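Similarly, if a normal " crsctl stop cluster" fails because a managed resource is still running, the -f option described in the note above forces the shutdown. For example (use this only after a clean stop has failed, since it unconditionally stops all resources):

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f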
Starting the Oracle Clusterware Stack on the Local Server

Use the " crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

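Once the start command returns, a quick health check confirms the major daemons are back online. This is a standard 11g release 2 command; on a healthy node, Cluster Ready Services, Cluster Synchronization Services, and the Event Manager should each be reported as online:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl check cluster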
Note: You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all :
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

Start/Stop All Instances with SRVCTL

Finally, you can start/stop all instances and associated services using the following:

[oracle@racnode1 ~]$ srvctl stop database -d racdb

[oracle@racnode1 ~]$ srvctl start database -d racdb

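When maintenance is limited to a single node, srvctl can also stop and start an individual instance rather than the full database. For example, to bounce only the racdb1 instance created in this guide:

[oracle@racnode1 ~]$ srvctl stop instance -d racdb -i racdb1
[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb1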
  31. Troubleshooting

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node names ( racnode1 or racnode2 ) are not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:
127.0.0.1 racnode1 localhost.localdomain localhost

it will need to be removed as shown below:
127.0.0.1 localhost.localdomain localhost

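A quick way to verify the fix is to inspect the loopback entry directly; the node name should no longer appear on the 127.0.0.1 line:

# grep "^127.0.0.1" /etc/hosts
127.0.0.1       localhost.localdomain localhost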
If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:
ORA-00603: ORACLE server session terminated by fatal error

or
ORA-29702: error occurred in Cluster Group Service operation


Openfiler - Logical Volumes Not Active on Boot

One issue that I have run into several times occurs when using a USB drive connected to the Openfiler server. When the Openfiler server is rebooted, the system is able to recognize the USB drive; however, it is not able to load the logical volumes, and it writes the following message to /var/log/messages - (also available through dmesg ):
iSCSI Enterprise Target Software - version 0.4.14
iotype_init(91) register fileio
iotype_init(91) register blockio
iotype_init(91) register nullio
open_path(120) Can't open /dev/rac1/crs -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm1 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm2 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm3 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm4 -2
fileio_attach(268) -2

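When this happens, you can also ask the iSCSI Enterprise Target daemon which volumes it actually attached. IET (as shipped with Openfiler 2.3) exposes this through the proc file system; any logical volume it failed to open will simply be missing from the list:

# cat /proc/net/iet/volume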
Please note that I am not suggesting this only occurs with USB drives connected to the Openfiler server. It may occur with other types of drives; however, I have only seen it with USB drives. If you do receive this error, you should first check the status of all logical volumes using the lvscan command from the Openfiler server:
# lvscan
  inactive          '/dev/rac1/crs' [2.00 GB] inherit
  inactive          '/dev/rac1/asm1' [115.94 GB] inherit
  inactive          '/dev/rac1/asm2' [115.94 GB] inherit
  inactive          '/dev/rac1/asm3' [115.94 GB] inherit
  inactive          '/dev/rac1/asm4' [115.94 GB] inherit

Notice that the status for each of the logical volumes is set to inactive - (the status for each logical volume on a working system would be set to ACTIVE). I currently know of two methods to get Openfiler to automatically load the logical volumes on reboot, both of which are described below.

Method 1

One of the first steps is to shut down both of the Oracle RAC nodes in the cluster ( racnode1 and racnode2 ). Then, from the Openfiler server, manually set each of the logical volumes to ACTIVE. This must be repeated after every reboot:
# lvchange -a y /dev/rac1/crs
# lvchange -a y /dev/rac1/asm1
# lvchange -a y /dev/rac1/asm2
# lvchange -a y /dev/rac1/asm3
# lvchange -a y /dev/rac1/asm4

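If you prefer, the same activation can be scripted as a single loop. This is just a convenience sketch that assumes the five logical volume names used in this article:

# for lv in crs asm1 asm2 asm3 asm4; do lvchange -a y /dev/rac1/$lv; done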
Another method to set the status to active for all logical volumes is to use the Volume Group change command as follows:
# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "rac1" using metadata type lvm2

# vgchange -ay
  5 logical volume(s) in volume group "rac1" now active

After setting each of the logical volumes to active, use the lvscan command again to verify the status:
# lvscan
  ACTIVE            '/dev/rac1/crs' [2.00 GB] inherit
  ACTIVE            '/dev/rac1/asm1' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm2' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm3' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm4' [115.94 GB] inherit

As a final test, reboot the Openfiler server to ensure each of the logical volumes will be set to ACTIVE after the boot process. After you have verified that each of the logical volumes will be active on boot, check that the iSCSI target service is running:
# service iscsi-target status
ietd (pid 2668) is running...

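If the iSCSI target service is not running, start (or restart) it through its init script before bringing the RAC nodes back up:

# service iscsi-target restart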
Finally, restart each of the Oracle RAC nodes in the cluster - ( racnode1 and racnode2 ).

Method 2

This method was kindly provided by Martin Jones. His workaround includes amending the /etc/rc.sysinit script to wait for the USB disk ( /dev/sda in my example) to be detected. After making the changes to the /etc/rc.sysinit script (described below), verify the external drives are powered on and then reboot the Openfiler server. The following is a small portion of the /etc/rc.sysinit script on the Openfiler server with the changes proposed by Martin (marked by the "MJONES - Customisation" comment blocks):
..............................................................
# LVM2 initialization, take 2
if [ -c /dev/mapper/control ]; then
    if [ -x /sbin/multipath.static ]; then
        modprobe dm-multipath >/dev/null 2>&1
        /sbin/multipath.static -v 0
        if [ -x /sbin/kpartx ]; then
            /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
        fi
    fi

    if [ -x /sbin/dmraid ]; then
        modprobe dm-mirror > /dev/null 2>&1
        /sbin/dmraid -i -a y
    fi

#-----
#----- MJONES - Customisation Start
#-----

    # Check if /dev/sda is ready
    while [ ! -e /dev/sda ]
    do
        echo "Device /dev/sda for first USB Drive is not yet ready."
        echo "Waiting..."
        sleep 5
    done
    echo "INFO - Device /dev/sda for first USB Drive is ready."

#-----
#----- MJONES - Customisation END
#-----

    if [ -x /sbin/lvm.static ]; then
        if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then
            action $"Setting up Logical Volume Management:" /sbin/lvm.static vgscan --mknodes --ignorelockingfailure && /sbin/lvm.static vgchange -a y --ignorelockingfailure
        fi
    fi
fi

# Clean up SELinux labels
if [ -n "$SELINUX" ]; then
    for file in /etc/mtab /etc/ld.so.cache ; do
        [ -r $file ] && restorecon $file >/dev/null 2>&1
    done
fi
..............................................................

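One caveat with the wait loop above is that it will spin forever if the USB disk never appears, hanging the boot. A variation worth considering - (my own sketch, not part of Martin's original change) - adds a bounded timeout so the boot can proceed either way:

#----- Wait for /dev/sda, but give up after 60 seconds (hypothetical variation)
timeout=60
while [ ! -e /dev/sda ] && [ $timeout -gt 0 ]
do
    echo "Device /dev/sda for first USB Drive is not yet ready. Waiting..."
    sleep 5
    timeout=$((timeout - 5))
done
if [ -e /dev/sda ]; then
    echo "INFO - Device /dev/sda for first USB Drive is ready."
else
    echo "WARNING - /dev/sda not detected; logical volumes may remain inactive."
fi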

  32. Conclusion

Oracle11g RAC allows the DBA to configure a database solution with superior fault tolerance and load balancing. DBAs who want to become more familiar with the features and benefits of Oracle11g RAC, however, will find that the cost of configuring even a small RAC cluster typically runs from US$15,000 to US$20,000.

This article has hopefully given you an economical solution to setting up and configuring an inexpensive Oracle 11g release 2 RAC cluster using Oracle Enterprise Linux and iSCSI technology. The RAC solution presented in this article can be put together for around US$2,700 and will provide the DBA with a fully functional Oracle 11g release 2 RAC cluster. While the hardware used for this article should be stable enough for educational purposes, it should never be considered for a production environment.

  33. Acknowledgements

An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of the components that make up this configuration, there are several other individuals who deserve credit in making this article a success.

First, I would like to thank Bane Radulovic from the Server BDE Team at Oracle. Bane not only introduced me to Openfiler, but shared with me his experience and knowledge of the product and how best to utilize it for Oracle RAC. His research and hard work made the task of configuring Openfiler seamless. Bane was also involved with hardware recommendations and testing.

A special thanks to K Gopalakrishnan for his assistance in delivering the Oracle RAC 11g Overview section of this article. Much of the content in that section regarding the history of Oracle RAC can be found in his very popular book, Oracle Database 10g Real Application Clusters Handbook. This book comes highly recommended for both DBAs and developers wanting to successfully implement Oracle RAC and fully understand how many of the advanced services like Cache Fusion and Global Resource Directory operate.

Lastly, I would like to express my appreciation to the following vendors for generously supplying the hardware for this article: Avocent Corporation, Seagate, and Intel.

Jeffrey M. Hunter [www.idevelopment.info] is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc., located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, capacity planning, database security, and physical / logical database design in a UNIX, Linux, and Windows server environment. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. Jeff has been a Sr. Database Administrator and Software Engineer for over 16 years and maintains his own website at http://www.idevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science.
