DBA Tips Archive for Oracle

Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5.5)
by Jeff Hunter, Sr. Database Administrator

Contents
  Introduction
  Oracle RAC 11g Overview
  Shared-Storage Overview
  iSCSI Technology
  Hardware and Costs
  Install the Linux Operating System
  Install Required Linux Packages for Oracle RAC
  Install Openfiler
  Network Configuration
  Cluster Time Synchronization Service
  Configure iSCSI Volumes using Openfiler
  Configure iSCSI Volumes on Oracle RAC Nodes
  Create Job Role Separation Operating System Privileges Groups, Users, and Directories
  Logging In to a Remote System Using X Terminal
  Configure the Linux Servers for Oracle
  Configure RAC Nodes for Remote Access using SSH - (Optional)
  Install and Configure ASMLib 2.0
  Download Oracle RAC 11g release 2 Software
  Pre-installation Tasks for Oracle Grid Infrastructure for a Cluster
  Install Oracle Grid Infrastructure for a Cluster
  Post-installation Tasks for Oracle Grid Infrastructure for a Cluster
  Create ASM Disk Groups for Data and Fast Recovery Area
  Install Oracle Database 11g with Oracle Real Application Clusters
  Install Oracle Database 11g Examples (formerly Companion)
  Create the Oracle Cluster Database
  Post Database Creation Tasks - (Optional)
  Create / Alter Tablespaces
  Verify Oracle Grid Infrastructure and Database Configuration
  Starting / Stopping the Cluster
  Troubleshooting
  Conclusion
  Acknowledgements
  About the Author

Introduction
Oracle RAC 11g release 2 allows DBAs to configure a clustered database solution with superior fault tolerance, load balancing, and scalability. However, DBAs who want to become more familiar with the features and benefits of database clustering will find that even a small RAC cluster costs in the range of US$10,000 to US$20,000. This cost does not even include the heart of a production RAC configuration, the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at US$10,000. Unfortunately, for many shops, the price of the hardware required for a typical RAC configuration exceeds most training budgets.

For those who want to become familiar with Oracle RAC 11g without a major cash outlay, this guide provides a low-cost alternative: configuring an Oracle RAC 11g release 2 system using commercial off-the-shelf components and downloadable software at an estimated cost of US$2,800. The system will consist of a two node cluster, both running Linux (CentOS 5.5 for x86_64), Oracle RAC 11g release 2 for Linux x86_64, and ASMLib 2.0. All shared disk storage for Oracle RAC will be based on iSCSI using Openfiler release 2.3 x86_64 running on a third node (known in this article as the Network Storage Server).

This guide is provided for educational purposes only, so the setup is kept simple to demonstrate ideas and concepts. For example, the shared Oracle Clusterware files (OCR and voting files) and all physical database files in this article will be set up on only one physical disk, while in practice they should be stored on multiple physical drives configured for increased performance and redundancy (i.e. RAID). In addition, each Linux node will only be configured with two network interfaces — one for the public network (eth0) and one that will be used for both the Oracle RAC private interconnect "and" the network storage server for shared iSCSI access (eth1). For a production RAC implementation, the private interconnect should be at least Gigabit (or more) with redundant paths and "only" be used by Oracle to transfer Cluster Manager and Cache Fusion related data. A third dedicated network interface (eth2, for example) should be configured on another redundant Gigabit network for access to the network storage server (Openfiler).

Oracle Documentation

While this guide provides detailed instructions for successfully installing a complete Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation (see list below). In addition to this guide, users should consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.

  Grid Infrastructure Installation Guide - 11g Release 2 (11.2) for Linux
  Clusterware Administration and Deployment Guide - 11g Release 2 (11.2)
  Oracle Real Application Clusters Installation Guide - 11g Release 2 (11.2) for Linux and UNIX
  Real Application Clusters Administration and Deployment Guide - 11g Release 2 (11.2)
  Oracle Database 2 Day + Real Application Clusters Guide - 11g Release 2 (11.2)
  Oracle Database Storage Administrator's Guide - 11g Release 2 (11.2)

Network Storage Server

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g.

The operating system (rPath Linux) and the Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single volume group that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle grid infrastructure and the Oracle RAC database.

Oracle Grid Infrastructure 11g Release 2 (11.2)

With Oracle grid infrastructure 11g release 2 (11.2), the Automatic Storage Management (ASM) and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. You must install the grid infrastructure in order to use Oracle RAC 11g release 2. Configuration assistants that start after the installer interview process will be responsible for configuring ASM and Oracle Clusterware. While the installation of the combined products is called Oracle grid infrastructure, Oracle Clusterware and Automatic Storage Manager remain separate products.
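To make the "single home per product" idea concrete, the following is a sketch of the directory layout this guide ends up building, assuming the /u01 Optimal Flexible Architecture root used throughout (the exact users, groups, and paths are defined in the Oracle Software Components table later in this section; the product subdirectory shown for the database home is simply the default OFA location):

    /u01/app/grid                  <- Oracle base for the grid user
    /u01/app/11.2.0/grid           <- Grid Infrastructure home (Clusterware + ASM binaries)
    /u01/app/oracle                <- Oracle base for the oracle user
    /u01/app/oracle/product/...    <- Oracle RAC database home (owned by the oracle user)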

After Oracle grid infrastructure is installed and configured on both nodes in the cluster, the next step will be to install the Oracle Real Application Clusters (Oracle RAC) software on both Oracle RAC nodes.

In this article, the Oracle grid infrastructure and Oracle RAC software will be installed on both nodes using the optional Job Role Separation configuration. One OS user will be created to own each Oracle software product — "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, the user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.

Assigning IP Addresses

Prior to Oracle Clusterware 11g release 2, the only method available for assigning IP addresses to each of the Oracle RAC nodes was to have the network administrator manually assign static IP addresses in DNS — never to use DHCP. This would include the public IP address for the node, the RAC interconnect, the virtual IP address (VIP), and, new to 11g release 2, the Single Client Access Name (SCAN) virtual IP address(es). Oracle Clusterware 11g release 2 now provides two methods for assigning IP addresses to all Oracle RAC nodes:

  1. Assigning IP addresses dynamically using Grid Naming Service (GNS), which makes use of DHCP
  2. The traditional method of manually assigning static IP addresses in Domain Name Service (DNS)
Assigning IP Addresses Dynamically using Grid Naming Service (GNS)

Oracle Clusterware 11g release 2 introduced a new method for assigning IP addresses named Grid Naming Service (GNS), which allows all private interconnect addresses, as well as most of the VIP addresses, to be dynamically assigned using DHCP. GNS and DHCP are key elements in Oracle's new Grid Plug and Play (GPnP) feature that, as Oracle states, eliminates per-node configuration data and the need for explicit add and delete node steps. GNS enables a dynamic grid infrastructure through self-management of the network requirements for the cluster. While assigning IP addresses using GNS certainly has its benefits and offers more flexibility over manually defining static IP addresses, it does come at the cost of complexity and requires components not defined in this guide. For example, activating GNS in a cluster requires a DHCP server on the public network, which falls outside the scope of building an inexpensive Oracle RAC. The example Oracle RAC configuration described in this guide will use the traditional method of manually assigning static IP addresses in DNS. To learn more about the benefits and how to configure GNS, please see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.
Assigning IP Addresses Manually using Static IP Addresses - (The DNS Method)

If you choose not to use GNS, manually defining static IP addresses is still available with Oracle Clusterware 11g release 2 and will be the method used in this article to assign all required Oracle Clusterware networking components (public IP address for the node, RAC interconnect, virtual IP address, and SCAN virtual IP).

It should be pointed out that in releases previous to Oracle 11g release 2, DNS was not a strict requirement for successfully configuring Oracle RAC. It was technically possible (although not recommended for a production system) to define all IP addresses only in the hosts file on all nodes in the cluster (i.e. /etc/hosts). This actually worked to my advantage in my previous articles on building an inexpensive RAC because it was one less component to document and configure. So, why is the use of DNS now a requirement when manually assigning static IP addresses? The answer is SCAN. Oracle Clusterware 11g release 2 requires the use of DNS in order to store the SCAN virtual IP address(es). In addition to configuring the SCAN virtual IP address in DNS, we will also configure the public and virtual IP address for all Oracle RAC nodes in DNS for name resolution. If you do not have access to a DNS, instructions will be included later in this guide on how to install a minimal DNS server on the Openfiler network storage server.

When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS before starting the Oracle grid infrastructure installation. Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

Single Client Access Name (SCAN) for the Cluster

If you have ever been tasked with extending an Oracle RAC cluster by adding a new node (or shrinking a RAC cluster by removing a node), then you know the pain of going through a list of all clients and updating their SQL*Net or JDBC configuration to reflect the new or deleted node. To address this problem, Oracle 11g release 2 introduced a new feature known as Single Client Access Name or SCAN for short. SCAN provides a single host name for clients to access an Oracle Database running in a cluster. Clients using SCAN do not need to change their TNS configuration if you add or remove nodes in the cluster. The SCAN resource and its associated IP address(es) provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. Clients that access the Oracle RAC database should use the SCAN name, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, the network configuration files on the client computer do not need to be modified when nodes are added to or removed from the cluster. The SCAN virtual IP name is similar to the names used for a node's virtual IP address, such as racnode1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and can be associated with multiple IP addresses, not just one address.

You will be asked to provide the host name (also called the SCAN name in this document) and up to three IP addresses to be used for the SCAN resource during the interview phase of the Oracle grid infrastructure installation. For high availability and scalability, Oracle recommends that you configure the SCAN name for round-robin resolution to three IP addresses. At a minimum, the SCAN must resolve to at least one address. The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster or by using the traditional method of assigning static IP addresses using Domain Name Service (DNS) resolution. In this article, I will configure SCAN for round-robin resolution to three manually configured static IP addresses using the DNS method:

    racnode-cluster-scan    IN A    192.168.1.187
    racnode-cluster-scan    IN A    192.168.1.188
    racnode-cluster-scan    IN A    192.168.1.189

During installation of the Oracle grid infrastructure, a listener is created for each of the SCAN addresses. Further details regarding the configuration of SCAN will be provided in the section "Verify SCAN Configuration" during the network configuration phase of this guide.
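Once the three A records are in place, a quick sanity check from any node should show DNS cycling through the addresses. The following is a minimal sketch, assuming the minimal DNS server described later in this guide is running on openfiler1 (192.168.1.195) and the zone is idevelopment.info:

    [root@racnode1 ~]# nslookup racnode-cluster-scan
    Server:         192.168.1.195
    Address:        192.168.1.195#53

    Name:   racnode-cluster-scan.idevelopment.info
    Address: 192.168.1.187
    Name:   racnode-cluster-scan.idevelopment.info
    Address: 192.168.1.188
    Name:   racnode-cluster-scan.idevelopment.info
    Address: 192.168.1.189

Repeated runs should rotate the order of the three addresses, which is exactly the round-robin behavior the grid infrastructure installer expects.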

Automatic Storage Management and Oracle Clusterware Files

Automatic Storage Management (ASM) is now fully integrated with Oracle Clusterware in the Oracle grid infrastructure. Oracle ASM and Oracle Database 11g release 2 provide a more enhanced storage solution than previous releases. Part of this solution is the ability to store the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the Voting Files (VF, also known as the Voting Disks), on ASM. This feature enables ASM to provide a unified storage solution, storing all the data for the clusterware and the database without the need for third-party volume managers or cluster file systems.

Just like database files, Oracle Clusterware files are stored in an ASM disk group and therefore utilize the ASM disk group configuration with respect to redundancy. For example, a Normal Redundancy ASM disk group will hold a two-way-mirrored OCR. A failure of one disk in the disk group will not prevent access to the OCR. With a High Redundancy ASM disk group (three-way-mirrored), two independent disks can fail without impacting access to the OCR. With External Redundancy, no protection is provided by Oracle. Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups. If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy.

The Voting Files are managed in a similar way to the OCR. They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group. Instead, each voting disk is placed on a specific disk in the disk group. The disk and the location of the Voting Files on the disks are stored internally within Oracle Clusterware.

The following example describes how the Oracle Clusterware files are stored in ASM after installing Oracle grid infrastructure using this guide. To view the OCR, use ASMCMD:

    [grid@racnode1 ~]$ asmcmd
    ASMCMD> ls -l +CRS/racnode-cluster/OCRFILE
    Type     Redund  Striped  Time             Sys  Name
    OCRFILE  UNPROT  COARSE   NOV 22 12:00:00  Y    REGISTRY.255.703024853

From the example above, you can see that after listing all of the ASM files in the +CRS/racnode-cluster/OCRFILE directory, it only shows the OCR (REGISTRY.255.703024853). The listing does not show the Voting File(s) because they are not managed as normal ASM files. To find the location of all Voting Files within Oracle Clusterware, use the crsctl query css votedisk command as follows:

    [grid@racnode1 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                    File Name        Disk group
    --  -------  -----------------------------------  ---------------  ----------
     1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4     (ORCL:CRSVOL1)   [CRS]
    Located 1 voting disk(s).

This guide will store the OCR and voting disk files on ASM in an ASM disk group named +CRS using external redundancy, which is one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size. The Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area will be created in a separate ASM disk group named +FRA.

Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. Oracle Clusterware still allows these files to be stored on a cluster file system like Oracle Cluster File System release 2 (OCFS2) or a NFS system. Please note that installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded.

The two Oracle RAC nodes and the network storage server will be configured as follows:

Oracle RAC / Openfiler Nodes

  Node Name    Instance Name   Database Name             Processor                            RAM
  racnode1     racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB
  racnode2     racdb2          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB
  openfiler1   -               -                         2 x Intel Xeon, 3.00 GHz             6GB
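The integrity and location of the OCR itself can also be confirmed with the ocrcheck utility from the grid home. The output below is only a sketch of what a healthy installation based on this guide might report; the space figures and registry ID will differ on your system, and the logical corruption check is bypassed when the command is not run as a privileged user:

    [grid@racnode1 ~]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          3
             Total space (kbytes)     :     262120
             Used space (kbytes)      :       2404
             Available space (kbytes) :     259716
             ID                       :  662415357
             Device/File Name         :       +CRS
                                        Device/File integrity check succeeded
             Cluster registry integrity check succeeded
             Logical corruption check bypassed due to non-privileged user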

152 192.195 Virtual IP 192.168.152 192.1.168.168.195 Private IP 192.168.168.2.2.151 192.168.168.1.shtml Network Configuration Node Name racnode1 racnode2 openfiler1 Public IP 192.151 192.DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12.1.1.168.1.251 192.252 racnode-cluster-scan SCAN Name 6 of 136 4/18/2011 10:17 PM .2.

Oracle Software Components

  Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base       Oracle Home
  Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid     /u01/app/11.2.0/grid
  Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle   /u01/app/oracle/pro...

Storage Components

  Storage Component    File System   Volume Size   ASM Volume Group Name   ASM Redundancy
  OCR/Voting Disk      ASM           2GB           +CRS                    External
  Database Files       ASM           32GB          +RACDB_DATA             External
  Fast Recovery Area   ASM           32GB          +FRA                    External

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final Release).

If you are looking for an example that takes advantage of Oracle RAC 10g release 2 with RHEL 5.3 using iSCSI, click here.

If you are looking for an example that takes advantage of Oracle RAC 11g release 1 with RHEL 5.1 using iSCSI, click here.

Oracle RAC 11g Overview

Before introducing the details for building a RAC cluster, it might be helpful to first clarify what a cluster is. A cluster is a group of two or more interconnected computers or servers that appear as if they are one server to end users and applications and generally share the same set of physical disks. The key benefit of clustering is to provide a highly available framework where the failure of one node (for example a database server running an instance of Oracle) does not bring down an entire application. In the case of failure with one of the servers, the other surviving server (or servers) can take over the workload from the failed server and the application continues to function normally as if nothing has happened.

The concept of clustering computers actually started several decades ago. The first successful cluster product was developed by DataPoint in 1977 named ARCnet. The ARCnet product enjoyed much success by academia types in research labs, but didn't really take off in the commercial market. It wasn't until the 1980's when Digital Equipment Corporation (DEC) released its VAX cluster product for the VAX/VMS operating system.

With the release of Oracle 6 for the Digital VAX cluster product, Oracle was the first commercial database to support clustering at the database level. It wasn't long, however, before Oracle realized the need for a more efficient and scalable distributed lock manager (DLM), as the one included with the VAX/VMS cluster product was not well suited for database applications. Oracle decided to design and write their own DLM for the VAX/VMS cluster product which provided the fine-grain block level locking required by the database. Oracle's own DLM was included in Oracle 6.2, which gave birth to Oracle Parallel Server (OPS) — the first database to run the parallel server.

By Oracle 7, OPS was extended to include support for not only the VAX/VMS cluster product but also most flavors of UNIX. This framework required vendor-supplied clusterware which worked well, but made for a complex environment to setup and manage given the multiple layers involved. By Oracle8, Oracle introduced a generic lock manager that was integrated into the Oracle kernel. In later releases of Oracle, this became known as the Integrated Distributed Lock Manager (IDLM) and relied on an additional layer known as the Operating System Dependant (OSD) layer. This new model paved the way for Oracle to not only have their own DLM, but to also create their own clusterware product in future releases.

Oracle Real Application Clusters (RAC), introduced with Oracle9i, is the successor to Oracle Parallel Server. Using the same IDLM, Oracle9i could still rely on external clusterware but was the first release to include their own clusterware product named Cluster Ready Services (CRS).

With Oracle9i, CRS was only available for Windows and Linux. By Oracle 10g release 1, Oracle's clusterware product was available for all operating systems and was the required cluster technology for Oracle RAC. With the release of Oracle Database 10g release 2 (10.2), Cluster Ready Services was renamed to Oracle Clusterware. When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for Tru cluster, in which case you need vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified, but keep in mind that Oracle RAC still requires Oracle Clusterware as it is fully integrated with the database software. This guide uses Oracle Clusterware, which as of 11g release 2 (11.2) is now a component of Oracle grid infrastructure.

A big difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one instance to another required the data to be written to disk first; only then could the requesting instance read that data (after acquiring the required locks). This process was called disk pinging. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all database clustering solutions use shared storage. Some vendors use an approach known as a Federated Cluster, in which data is spread across several machines rather than shared by all. With Oracle RAC, however, multiple instances use the same set of disks for storing data. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security. Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out, and at the same time, since all instances access the same database, the failure of one node will not cause the loss of access to the database.

At the heart of Oracle RAC is a shared disk subsystem. Each instance in the cluster must be able to access all of the data, redo log files, control files and parameter file for all other instances in the cluster. The data disks must be globally available in order to allow all instances to access the database. Each instance has its own redo log files and UNDO tablespace that are locally read-writeable. The other instances in the cluster must be able to access them (read-only) in order to recover that instance in the event of a system failure. The redo log files for an instance are only writeable by that instance and will only be read from another instance during system failure. The UNDO, on the other hand, is read all the time during normal database operation (e.g. for CR fabrication).

Pre-configured Oracle RAC solutions are available from vendors such as Dell, IBM and HP for production environments. This article, however, focuses on putting together your own Oracle RAC 11g environment for development and testing by using Linux servers and a low cost shared disk solution: iSCSI. For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.

Shared-Storage Overview

Today, fibre channel is one of the most popular solutions for shared storage. As mentioned earlier, fibre channel is a high-speed serial-transfer interface that is used to connect systems and storage devices in either point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched topologies (FC-SW). Protocols supported by Fibre Channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 Gigabits per second in each direction, with 4.25 Gbps expected.

Fibre channel, however, is very expensive. Just the fibre channel switch alone can start at around US$1,000. This does not even include the fibre channel storage array and high-end drives, which can reach prices of about US$300 for a single 36GB drive. A typical fibre channel setup which includes fibre channel cards for the servers is roughly US$10,000, which does not include the cost of the servers that make up the Oracle database cluster.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K. See the Certify page on Oracle Metalink for supported Network Attached Storage (NAS) devices that can be used with Oracle RAC.

One of the key drawbacks that has limited the benefits of using NFS and NAS for database storage has been performance degradation and complex configuration requirements. Standard NFS client software (client systems that use the operating system provided NFS driver) is not optimized for Oracle database file I/O access patterns. With the introduction of Oracle 11g, a new feature known as Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between the Oracle software and the NFS server resulting in significant performance gains. Direct NFS Client can simplify, and in many cases automate, the performance optimization of the NFS client configuration for database workloads. To learn more about Direct NFS Client, see the Oracle White Paper entitled "Oracle Database 11g Direct NFS Client".

The shared storage that will be used for this article is based on iSCSI technology using a network storage server installed with Openfiler. This solution offers a low-cost alternative to fibre channel for testing and educational purposes, but given the low-end hardware being used, it is not often used in a production environment.

iSCSI Technology

For many years, the only technology that existed for building a network based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network. Several of the advantages to FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and most important to us — support for server clustering! Still today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a common manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a second set of skill sets that need to exist with the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs. Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage; however, its components can be much the same as in a typical IP network (LAN).

While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regards to performance. The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel, which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on each of the Oracle RAC nodes (racnode1 and racnode2). An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC) — a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.

iSCSI Target

An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.
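To make the initiator/target roles concrete, the following is a minimal sketch of how the Open-iSCSI initiator on a RAC node will eventually discover targets presented by openfiler1. The target name shown in the output is only illustrative (the actual iSCSI Qualified Names are defined later when the Openfiler volumes are created), and openfiler1-priv assumes the private hostname convention used in this guide:

    [root@racnode1 ~]# yum install -y iscsi-initiator-utils
    [root@racnode1 ~]# service iscsid start
    [root@racnode1 ~]# chkconfig iscsi on

    # Ask the Openfiler server (over the private storage network) for its targets
    [root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs

The full walkthrough of discovery, login, and persistent device naming appears in the section "Configure iSCSI Volumes on Oracle RAC Nodes".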

Hardware and Costs

The hardware used to build our example Oracle RAC 11g environment consists of three Linux servers (two Oracle RAC nodes and one Network Storage Server) and components that can be purchased at many local computer stores or over the Internet.

Oracle RAC Node 1 - (racnode1) - US$500
  Dell PowerEdge T100
    Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
    4GB, DDR2, 800MHz
    160GB 7.2K RPM SATA 3Gbps Hard Drive
    Integrated Graphics - (ATI ES1000)
    Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme II 5722)
    16x DVD Drive
    No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

1 x Ethernet LAN Card - US$90
  Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT)
  Used for RAC interconnect to racnode2 and Openfiler networked storage. Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Oracle RAC Node 2 - (racnode2) - US$500
  Dell PowerEdge T100
    Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
    4GB, DDR2, 800MHz
    160GB 7.2K RPM SATA 3Gbps Hard Drive
    Integrated Graphics - (ATI ES1000)
    Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme II 5722)
    16x DVD Drive
    No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

1 x Ethernet LAN Card - US$90
  Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT)
  Used for RAC interconnect to racnode1 and Openfiler networked storage.

Network Storage Server - (openfiler1) - US$800
  Dell PowerEdge 1800
    Dual 3.0GHz Xeon / 1MB Cache / 800FSB (SL7PE)
    6GB of ECC Memory
    500GB SATA Internal Hard Disk
    73GB 15K SCSI Internal Hard Disk
    Integrated Graphics
    Single embedded Intel 10/100/1000 Gigabit NIC
    16x DVD Drive
    No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

  Note: The rPath Linux operating system and Openfiler application will be installed on the 500GB internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured for the shared database storage. The Openfiler server will be configured to use this second hard disk for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as the clustered database files. Please be aware that any type of hard disk (internal or external) should work for the shared disk storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example. Finally, although the Openfiler server used in this example configuration contains 6GB of memory, this is by no means a requirement. The Openfiler server could be configured with as little as 2GB for a small test / evaluation network storage server.

1 x Ethernet LAN Card - US$125
  Intel(R) PRO/1000 MT Server Adapter - (PWLA8490MT)
  Used for networked storage on the private network. The Network Storage Server (Openfiler server) should contain two NIC adapters. The Dell PowerEdge 1800 machine included an integrated 10/100/1000 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private network (Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Miscellaneous Components

1 x Ethernet Switch - US$50
  D-Link 8-port 10/100/1000 Desktop Switch - (DGS-2208)
  Used for the interconnect between racnode1-priv and racnode2-priv, which will be on the 192.168.2.0 network. This switch will also be used for network storage traffic for Openfiler. For the purpose of this article, I used a Gigabit Ethernet switch (and 1Gb Ethernet cards) for the private network.

6 x Network Cables - US$10 each
  Category 6 patch cable - (Connect racnode1 to public network)
  Category 6 patch cable - (Connect racnode2 to public network)
  Category 6 patch cable - (Connect openfiler1 to public network)
  Category 6 patch cable - (Connect racnode1 to interconnect Ethernet switch)
  Category 6 patch cable - (Connect racnode2 to interconnect Ethernet switch)
  Category 6 patch cable - (Connect openfiler1 to interconnect Ethernet switch)

Optional Components

KVM Switch - US$350
  This guide requires access to the console of all machines in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated device which would include a single monitor, keyboard, and mouse that would have direct access to the console of each server. This solution is made possible using a Keyboard, Video, Mouse Switch — better known as a KVM Switch. A KVM switch is a hardware device that allows a user to control multiple computers from a single keyboard, video monitor and mouse. Avocent provides a high quality and economical 4-port switch which includes four 6' cables: AutoView(R) Analog KVM Switch. For a detailed explanation and guide on the use of KVM switches, please see the article "KVM Switches For the Home and the Enterprise".

Total - US$2,565

Note: This article assumes you already have a switch or VLAN in place that will be used for the public network.

We are about to start the installation process. Now that we have talked about the hardware that will be used in this example, let's take a conceptual look at what the environment would look like after connecting all of the hardware components (click on the graphic below to view larger image):

Figure 1: Oracle RAC 11g release 2 Test Configuration

Install the Linux Operating System

Perform the following installation on both Oracle RAC nodes in the cluster.

This section provides a summary of the screens used to install the Linux operating system. This guide is designed to work with CentOS release 5.5 for x86_64 or Red Hat Enterprise Linux 5.5 for x86_64 and follows Oracle's suggestion of performing a "default RPMs" installation type to ensure all expected Linux O/S packages are present for a successful Oracle RDBMS installation.

Although I have used Red Hat Fedora in the past, I wanted to switch to a Linux environment that would guarantee all of the functionality contained with Oracle. This is where CentOS comes in. The CentOS project takes the Red Hat Enterprise Linux 5 source RPMs and compiles them into a free clone of the Red Hat Enterprise Server 5 product. This provides a free and stable version of the Red Hat Enterprise Linux 5 (AS/ES) operating environment that I can use for Oracle testing and development. I have moved away from Fedora as I need a stable environment that is not only free, but as close to the actual Oracle supported operating system as possible. While CentOS is not the only project performing the same functionality, I tend to stick with it as it is stable and reacts fast with regards to updates by Red Hat.

As we start to go into the details of the installation, note that most of the tasks within this document will need to be performed on both Oracle RAC nodes (racnode1 and racnode2). I will indicate at the beginning of each section whether or not the task(s) should be performed on both Oracle RAC nodes or on the network storage server (openfiler1).

Download CentOS

Use the links below to download CentOS 5.5 for either x86 or x86_64 depending on your hardware architecture.

32-bit (x86) Installations

  CentOS-5.5-i386-bin-1of7.iso   (623 MB)
  CentOS-5.5-i386-bin-2of7.iso   (621 MB)
  CentOS-5.5-i386-bin-3of7.iso   (630 MB)
  CentOS-5.5-i386-bin-4of7.iso   (619 MB)
  CentOS-5.5-i386-bin-5of7.iso   (629 MB)
  CentOS-5.5-i386-bin-6of7.iso   (637 MB)
  CentOS-5.5-i386-bin-7of7.iso   (231 MB)

Note: If the Linux RAC nodes have a DVD installed, you may find it more convenient to make use of the single DVD image:

  CentOS-5.5-i386-bin-DVD.iso    (3.9 GB)

64-bit (x86_64) Installations

  CentOS-5.5-x86_64-bin-1of8.iso (623 MB)
  CentOS-5.5-x86_64-bin-2of8.iso (587 MB)
  CentOS-5.5-x86_64-bin-3of8.iso (634 MB)
  CentOS-5.5-x86_64-bin-4of8.iso (633 MB)
  CentOS-5.5-x86_64-bin-5of8.iso (634 MB)
  CentOS-5.5-x86_64-bin-6of8.iso (627 MB)
  CentOS-5.5-x86_64-bin-7of8.iso (624 MB)
  CentOS-5.5-x86_64-bin-8of8.iso (242 MB)

Note: If the Linux RAC nodes have a DVD installed, you may find it more convenient to make use of the two DVD images (requires BitTorrent):

  CentOS-5.5-x86_64-bin-DVD.torrent (360 KB)

If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just three of the many software packages that can be used:

  InfraRecorder
  UltraISO
  Magic ISO Maker

Install CentOS

After downloading and burning the CentOS images (ISO files) to CD/DVD, insert CentOS Disk #1 into the first server (racnode1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node while substituting the node name racnode1 for racnode2 and the different IP addresses where appropriate. Before installing the Linux operating system on both nodes, you should have the two NIC interfaces (cards) installed.

Boot Screen

The first screen is the CentOS boot screen. At the boot: prompt, hit [Enter] to start the installation process.
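Whichever images you download, it is worth verifying them before burning. The following is a sketch; the reference checksums themselves come from the sha1sum.txt file published alongside the images on the CentOS mirror you downloaded from:

    # Compute the checksum of the downloaded image and compare it
    # against the matching line in the mirror's sha1sum.txt
    sha1sum CentOS-5.5-x86_64-bin-1of8.iso

If the printed hash does not match the published value, download the image again before burning it.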

Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to CentOS

At the welcome screen, click [Next] to continue.

Language / Keyboard Selection

The next two screens prompt you for the Language and Keyboard settings. Make the appropriate selection for your configuration and click [Next] to continue.

Detect Previous Installation

If the installer detects a previous version of RHEL / CentOS, it will ask if you would like to "Install CentOS" or "Upgrade an existing Installation". Always select to Install CentOS.

Disk Partitioning Setup

Select "Remove all partitions on selected drives and create default layout" and check the option to "Review and modify partitioning layout". Click [Next] to continue. You will then be prompted with a dialog window asking if you really want to remove all Linux partitions. Click [Yes] to acknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically selected. For most automatic layouts, the installer will choose 100MB for /boot, double the amount of RAM for swap (systems with <= 2,048MB RAM) or an amount equal to RAM (systems with > 2,048MB RAM), with the rest going to the root (/) partition. Starting with RHEL 4, the installer creates the same disk configuration as just noted but creates it using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/sda for my configuration) into two partitions — one for the /boot partition (/dev/sda1) and the remainder of the disk dedicated to a LVM volume group named VolGroup00 (/dev/sda2). The LVM Volume Group (VolGroup00) is then partitioned into two LVM partitions - one for the root file system (/) and another for swap.

The main concern during the partitioning phase is to ensure enough swap space is allocated as required by Oracle (which is a multiple of the available RAM). The following is Oracle's minimum requirement for swap space:

  Available RAM                   Swap Space Required
  Between 1,024MB and 2,048MB     1.5 times the size of RAM
  Between 2,049MB and 8,192MB     Equal to the size of RAM
  More than 8,192MB               0.75 times the size of RAM

For the purpose of this install, I will accept all automatically preferred sizes (including 5,952MB for swap since I have 4GB of RAM installed).

If for any reason the automatic layout does not configure an adequate amount of swap space, you can easily change that from this screen. To increase the size of the swap partition, [Edit] the volume group VolGroup00. This will bring up the "Edit LVM Volume Group: VolGroup00" dialog. First, [Edit] and decrease the size of the root file system (/) by the amount you want to add to the swap partition. For example, to add another 512MB to swap, you would decrease the size of the root file system by 512MB (i.e. 36,032MB - 512MB = 35,520MB). Now add the space you decreased from the root file system (512MB) to the swap partition. When completed, click [OK] on the "Edit LVM Volume Group: VolGroup00" dialog. Once you are satisfied with the disk layout, click [Next] to continue.
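After the install, the resulting swap allocation can be checked against the table above with two standard commands. The figures below are only a sketch for a 4GB node like racnode1; your exact values will differ slightly:

    [root@racnode1 ~]# grep MemTotal /proc/meminfo
    MemTotal:      4041120 kB
    [root@racnode1 ~]# grep SwapTotal /proc/meminfo
    SwapTotal:     6094840 kB

With roughly 4GB of RAM, any SwapTotal at or above the size of RAM satisfies Oracle's "equal to the size of RAM" requirement from the table above.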

Boot Loader Configuration

The installer will use the GRUB boot loader by default. To use the GRUB boot loader, accept all default values and click [Next] to continue.

Network Configuration

I made sure to install both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. The installer should have successfully detected each of the network devices. Since this guide will use the traditional method of assigning static IP addresses for each of the Oracle RAC nodes, there are several changes that need to be made to the network configuration. The settings you make here will, of course, depend on your network configuration. The most important modification required for this guide is to not configure the Oracle RAC nodes with DHCP since we will be assigning static IP addresses. Additionally, you will need to configure the server with a real host name.

First, make sure that each of the network devices are checked to "Active on boot". The installer may choose to not activate eth1 by default. Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 than the ones I have documented in this guide and that is OK. Make certain to put eth1 (the interconnect) on a different subnet than eth0 (the public network):

Oracle RAC Node Network Configuration (racnode1)

  eth0
    Enable IPv4 support               - ON
    Dynamic IP configuration (DHCP)   - OFF (select Manual configuration)
    IPv4 Address                      - 192.168.1.151
    Prefix (Netmask)                  - 255.255.255.0
    Enable IPv6 support               - OFF

  eth1
    Enable IPv4 support               - ON
    Dynamic IP configuration (DHCP)   - OFF (select Manual configuration)
    IPv4 Address                      - 192.168.2.151
    Prefix (Netmask)                  - 255.255.255.0
    Enable IPv6 support               - OFF

Continue by manually setting your hostname. I used racnode1 for the first node and racnode2 for the second. Finish this dialog off by supplying your gateway and DNS servers.
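For reference, the settings entered above end up in the interface configuration files under /etc/sysconfig/network-scripts. A sketch of roughly what ifcfg-eth1 on racnode1 should contain after the install (the HWADDR line will hold your own card's MAC address and is omitted here):

    # cat /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=192.168.2.151
    NETMASK=255.255.255.0
    ONBOOT=yes

Editing this file (followed by a service network restart) is also how an address can be corrected later without re-running the installer.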

Additional DNS configuration information for both of the Oracle RAC nodes will be discussed later in this guide.

Time Zone Selection

Select the appropriate time zone for your environment and click [Next] to continue.

Set Root Password

Select a root password and click [Next] to continue.

Package Installation Defaults

By default, CentOS installs most of the software required for a typical server. There are several other packages (RPMs), however, that are required to successfully install the Oracle software. The installer includes a "Customize software" selection that allows the addition of RPM groupings such as "Development Libraries" or "Legacy Library Support". The addition of such RPM groupings is not an issue. De-selecting any "default RPM" groupings or individual RPMs, however, can result in failed Oracle grid infrastructure and Oracle RAC installation attempts. For the purpose of this article, select the radio button "Customize now" and click [Next] to continue.

This is where you pick the packages to install. Most of the packages required for the Oracle software are grouped into "Package Groups" (i.e. Application -> Editors). Since these nodes will be hosting the Oracle grid infrastructure and Oracle RAC software, verify that at least the following package groups are selected for install. For many of the Linux package groups, not all of the packages associated with that group get selected for installation. (Note the "Optional packages" button after selecting a package group.) So although the package group gets selected for install, some of the packages required by Oracle do not get installed. In fact, there are some packages that are required by Oracle that do not belong to any of the available package groups (i.e. libaio-devel). Not to worry. These packages will need to be manually installed from the CentOS CDs after the operating system install. A complete list of required packages for Oracle grid infrastructure 11g release 2 and Oracle RAC 11g release 2 for Linux will be provided in the next section. For now, install the following package groups:

  Desktop Environments
    GNOME Desktop Environment
  Applications
    Editors
    Graphical Internet
    Text-based Internet
  Development
    Development Libraries
    Development Tools
    Legacy Software Development
  Servers
    Server Configuration Tools
  Base System
    Administration Tools
    Base
    Java
    Legacy Software Support
    System Tools
    X Window System

click [Continue] to acknowledge the warning dialog. Post Installation Wizard Welcome Screen When the system boots into CentOS Linux for the first time. The installer will eject the CD/DVD from the CD-ROM drive. click [Yes] to continue. Date and Time Settings Adjust the date and time settings if necessary and click [Forward] to continue. The post installation wizard allows you to make final O/S configuration settings. I will be creating the "grid" and "oracle" user accounts later in this guide. Sound Card This screen will only appear if the wizard detects a sound card. If you chose not to define any additional operating system user accounts.shtml System Tools X Window System In addition to the above packages. On the sound card screen click [Forward] to continue. click [Yes] to acknowledge a reboot of the system will occur after firstboot (Post Installation Wizard) is completed. About to Install This screen is basically a confirmation screen. You will be prompted with a warning dialog about not setting the firewall. select any additional packages you wish to install for this node keeping in mind to NOT de-select any of the "default" RPM packages. SELinux On the SELinux screen. When this occurs. click [Forward] to continue. Kdump Accept the default setting on the Kdump screen (disabled) and click [Forward] to continue. Create User Create any additional (non-oracle) operating system user accounts if desired and click [Forward] to continue. I will not be creating any additional operating system accounts. Take out the CD/DVD and click [Reboot] to reboot the system. You will be prompted with a warning dialog warning that changing the SELinux setting will require rebooting the system so the entire file system can be relabeled. On the "Welcome screen".DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12. Click [Next] to start the installation. choose the "Disabled" option and click [Forward] to continue. After selecting the packages to install click [Next] to continue. If you are installing CentOS using CDs. You have successfully installed Linux on the first node (racnode1). For the purpose of this article. When this occurs. Congratulations And that's it. you will be asked to switch CDs during the installation process depending on which packages you selected. Firewall On this screen. it will prompt you with another welcome screen for the "Post Installation Wizard". make sure to select the "Disabled" option and click [Forward] to continue. 18 of 136 4/18/2011 10:17 PM .

Additional CDs

On the "Additional CDs" screen, click [Finish] to continue.

Reboot System

Given we changed the SELinux option to "Disabled", we are prompted to reboot the system. Click [OK] to reboot the system for normal use.

Login Screen

After rebooting the machine, you are presented with the login screen. Log in using the "root" user account and the password you provided during the installation.

Perform the same installation on the second node

After completing the Linux installation on the first node, repeat the above steps for the second node (racnode2). When configuring the machine name and networking, be sure to configure the proper values. For my installation, this is what I configured for racnode2.

First, make sure that each of the network devices is checked to "Active on boot". The installer may choose to not activate eth1 by default. Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for eth0 and eth1 than the ones I have documented in this guide and that is OK. Make certain, however, to put eth1 (the interconnect) on a different subnet than eth0 (the public network):

Oracle RAC Node Network Configuration (racnode2)

eth0
    Enable IPv4 support                 ON
    Dynamic IP configuration (DHCP)     OFF - (select Manual configuration)
    IPv4 Address                        192.168.1.152
    Prefix (Netmask)                    255.255.255.0
    Enable IPv6 support                 OFF

eth1
    Enable IPv4 support                 ON
    Dynamic IP configuration (DHCP)     OFF - (select Manual configuration)
    IPv4 Address                        192.168.2.152
    Prefix (Netmask)                    255.255.255.0
    Enable IPv6 support                 OFF

Continue by manually setting your hostname. I used racnode2 for the second node. Finish this dialog off by supplying your gateway and DNS servers.
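If you want to double-check these settings from the command line after the installation completes, the values entered in the installer are persisted to the standard RHEL / CentOS interface configuration files. A quick sketch (paths assume the stock CentOS 5 layout used in this guide):

# Review the persistent settings for each network interface
cat /etc/sysconfig/network-scripts/ifcfg-eth0
cat /etc/sysconfig/network-scripts/ifcfg-eth1

# The hostname and default gateway are recorded here
cat /etc/sysconfig/network

Each ifcfg file should show ONBOOT=yes along with the static IPADDR and NETMASK values configured above.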

Install Required Linux Packages for Oracle RAC

Install the following required Linux packages on both Oracle RAC nodes in the cluster.

After installing the Linux O/S, the next step is to verify and install all packages (RPMs) required by both Oracle Clusterware and Oracle RAC. The Oracle Universal Installer (OUI) performs checks on your machine during installation to verify that it meets the appropriate operating system package requirements. To ensure that these checks complete successfully, verify the software requirements documented in this section before starting the Oracle installs.

Although many of the required packages for Oracle were installed during the Linux installation, several will be missing either because they were considered optional within the package group or simply didn't exist in any package group! The packages listed in this section (or later versions) are required for Oracle grid infrastructure 11g release 2 and Oracle RAC 11g release 2 running on the Red Hat Enterprise Linux 5 or CentOS 5 platform.

32-bit (x86) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

Each of the packages listed above can be found on CD #1, CD #2, CD #3, and CD #4 on the CentOS 5.5 for x86 CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From CentOS 5.5 (x86) - [CD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #2]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh libgomp-4.*
rpm -Uvh pdksh-5.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #3]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #4]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh sysstat-7.*
cd /
eject

--------------------------------------------------------------------------------------

# From CentOS 5.5 (x86) - [DVD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libgomp-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh pdksh-5.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

64-bit (x86_64) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

Each of the packages listed above can be found on CD #1, CD #3, CD #4, and CD #5 on the CentOS 5.5 for x86_64 CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From CentOS 5.5 (x86_64) - [CD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #3]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #4]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh pdksh-5.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #5]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh sysstat-7.*
cd /
eject

--------------------------------------------------------------------------------------

# From CentOS 5.5 (x86_64) - [DVD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh pdksh-5.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject
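Before (or after) running the rpm -Uvh commands above, you can quickly see which required packages are still missing. The following is a simple sketch using the package names from the x86_64 list in this section; rpm -q reports "is not installed" for anything that still needs to be loaded from the CDs, and the query format makes it easy to confirm that both the x86_64 and i386 (32 bit) versions are present where required:

for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
           glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel \
           make pdksh sysstat unixODBC unixODBC-devel
do
    rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" ${pkg}
done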

Install Openfiler

Perform the following installation on the network storage server (openfiler1).

With Linux installed on both Oracle RAC nodes, the next step is to install the Openfiler software to the network storage server (openfiler1). Later in this guide, the network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements.

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface. Openfiler supports CIFS, NFS, HTTP/DAV and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. To learn more about Openfiler, please visit their website at http://www.openfiler.com/.

The rPath Linux operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single volume group that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware and the Oracle RAC database.

Please be aware that any type of hard disk (internal or external) should work for the shared database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

Download Openfiler

Use the links below to download Openfiler NAS/SAN Appliance, version 2.3 (Final Release) for either x86 or x86_64 depending on your hardware architecture. This guide uses x86_64. After downloading Openfiler, you will then need to burn the ISO image to CD.

32-bit (x86) Installations
    openfiler-2.3-x86-disc1.iso (322 MB)

64-bit (x86_64) Installations
    openfiler-2.3-x86_64-disc1.iso (336 MB)

If you are downloading the above ISO file to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just three of the many software packages that can be used: InfraRecorder, UltraISO, and Magic ISO Maker.
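Before burning the image, it is a good idea to verify the download. If a checksum is published alongside the ISO on the Openfiler download site (the .md5 file name below is an assumption for illustration), compare it against your local copy:

# Compute the checksum of the downloaded image
md5sum openfiler-2.3-x86_64-disc1.iso

# Or, if a checksum file was downloaded with the image (hypothetical file name)
md5sum -c openfiler-2.3-x86_64-disc1.iso.md5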

Install Openfiler

This section provides a summary of the screens used to install the Openfiler software. For the purpose of this article, I opted to install Openfiler with all default options. The only manual change required was for configuring the local network settings. For more detailed installation instructions, please visit http://www.openfiler.com/learn/. I would suggest, however, that the instructions I have provided below be used for this Oracle RAC 11g configuration.

Before installing the Openfiler software to the network storage server, you should have both NIC interfaces (cards) installed and any external hard drives connected and turned on (if you will be using external hard drives).

After downloading and burning the Openfiler ISO image file to CD, insert the CD into the network storage server (openfiler1 in this example), power it on, and answer the installation screen prompts as noted below.

Boot Screen

The first screen is the Openfiler boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Openfiler NSA

At the welcome screen, click [Next] to continue.

Keyboard Configuration

The next screen prompts you for the keyboard settings. Make the appropriate selection for your configuration.

Disk Partitioning Setup

The next screen asks whether to perform disk partitioning using "Automatic Partitioning" or "Manual Partitioning with Disk Druid". Although the official Openfiler documentation suggests using Manual Partitioning, I opted to use "Automatic Partitioning" given the simplicity of my example configuration. Select [Automatically partition] and click [Next] to continue.

Automatic Partitioning

If there were a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove all partitions on this system]. For my example configuration, I selected ONLY the 500GB SATA internal hard drive [sda] for the operating system and Openfiler application installation. I de-selected the 73GB SCSI internal hard drive since this disk will be used exclusively later in this guide to create a single "Volume Group" (racdbvg) that will be used for all iSCSI based shared disk storage requirements for Oracle Clusterware and Oracle RAC. I also keep the check-box [Review (and modify if needed) the partitions created] selected. You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] to acknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically chose for the hard disks selected in the previous screen. In almost all cases, the installer will choose 100MB for /boot, an adequate amount of swap, and the rest going to the root (/) partition for that disk (or disks). I am satisfied with the installer's recommended partitioning for /dev/sda.

The installer will also show any other internal hard disks it discovered. For my example configuration, the installer found the 73GB SCSI internal hard drive as /dev/sdb. I will "Delete" any and all partitions on this drive (there was only one, /dev/sdb1). Later in this guide, I will create the required partition for this particular hard disk.

Network Configuration

I made sure to install both NIC interfaces (cards) in the network storage server before starting the Openfiler installation. The installer should have successfully detected each of the network devices. First, make sure that each of the network devices is checked to [Active on boot]. The installer may choose to not activate eth1 by default. Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 and that is OK. You must, however, configure eth1 (the storage network) to be on the same subnet you configured for eth1 on racnode1 and racnode2:

eth0
    Configure using DHCP    OFF
    Activate on boot        ON
    IP Address              192.168.1.195
    Netmask                 255.255.255.0

eth1
    Configure using DHCP    OFF
    Activate on boot        ON
    IP Address              192.168.2.195
    Netmask                 255.255.255.0

Continue by setting your hostname manually. I used a hostname of "openfiler1". Finish this dialog off by supplying your gateway and DNS servers.

Time Zone Selection

The next screen allows you to configure your time zone information. Make the appropriate selection for your location.

Set Root Password

Select a root password and click [Next] to continue.

About to Install

This screen is basically a confirmation screen. Click [Next] to start the installation.

Congratulations

And that's it. You have successfully installed Openfiler on the network storage server. The installer will eject the CD from the CD-ROM drive. Take out the CD and click [Reboot] to reboot the system. Once the install has completed, the server will reboot to make sure all required components, services and drivers are started and recognized. After the reboot, any external hard drives (if connected) will be discovered by the Openfiler server. If everything was successful after the reboot, you should now be presented with a text login screen and the URL to use for administering the Openfiler server.

After installing Openfiler, verify you can log in to the machine using the root user account and the password you supplied during installation. Do not attempt to log in to the console or SSH using the built-in openfiler user account. Attempting to do so will result in the following error message:

openfiler1 login: openfiler
Password: password
This interface has not been implemented yet.

Only attempt to log in to the console or SSH using the root user account.

Network Configuration

Perform the following network configuration tasks on both Oracle RAC nodes in the cluster.

Although we configured several of the network settings during the Linux installation, it is important to not skip this section as it contains critical steps which include configuring DNS and verifying you have the networking hardware and Internet Protocol (IP) addresses required for an Oracle grid infrastructure for a cluster installation.

Network Hardware Requirements

The following is a list of hardware requirements for network configuration:

Each Oracle RAC node must have at least two network adapters or network interface cards (NICs): one for the public network interface and one for the private network interface (the interconnect).

The public interface names associated with the network adapters must be the same on all nodes, and the private interface names must likewise be the same on all nodes. For example, with our two-node cluster, you cannot configure network adapters on racnode1 with eth0 as the public interface, but on racnode2 have eth1 as the public interface. Public interface names must be the same, so you must configure eth0 as public on both nodes. You should configure the private interfaces on the same network adapters as well. If eth1 is the private interface for racnode1, then eth1 must be the private interface for racnode2.

To use multiple NICs for the public network or for the private network, Oracle recommends that you use NIC bonding. You can bond separate interfaces to a common interface to provide redundancy in case of a NIC failure. Use separate bonding for the public and private networks (i.e. bond0 for the public network and bond1 for the private network), because during installation each interface is defined as a public or private interface. NIC bonding is not covered in this article.

For the public network, each network adapter must support TCP/IP.

For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet). UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect; Oracle recommends that you use a dedicated switch. Oracle does not support token-rings or crossover cables for the interconnect.

For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is not connected to every private network interface. You can test if an interconnect interface is reachable using ping.

During installation of Oracle grid infrastructure, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. You must identify each interface as a public interface, a private interface, or not used, and you must use the same private interfaces for both Oracle Clusterware and Oracle RAC. Oracle recommends that you do not create separate interfaces for Oracle Clusterware and Oracle RAC. If you use more than one NIC for the private interconnect, then Oracle recommends that you use NIC bonding. Note that multiple private interfaces provide load balancing but not failover, unless bonded.

Combining the iSCSI storage traffic and cache fusion traffic for Oracle RAC on the same network interface works great for an inexpensive test system (like the one described in this article) but should never be considered for production. For the sake of brevity, and for the purpose of this guide, this article will configure the iSCSI network storage traffic on the same network as the RAC private interconnect (eth1). In a production environment that uses iSCSI for network storage, it is highly recommended to configure a redundant third network interface (eth2, for example) for that storage traffic using a TCP/IP Offload Engine (TOE) card. The basic idea of a TOE is to offload the processing of TCP/IP protocols from the host processor to the hardware on the adapter or in the system. A TOE is often embedded in a network interface card (NIC) or a host bus adapter (HBA) and used to reduce the amount of TCP/IP processing handled by the CPU and server I/O subsystem and improve overall performance.

Oracle RAC Network Configuration

For this guide, I opted not to use Grid Naming Service (GNS) for assigning IP addresses to each Oracle RAC node but instead will manually assign them in DNS and hosts files. I often refer to this traditional method of manually assigning IP addresses as the "DNS method" given the fact that all IP addresses should be resolved using DNS.

When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS before starting the Oracle grid infrastructure installation. This would include the public IP address for the node, the RAC interconnect, the virtual IP address (VIP), and new to 11g release 2, the Single Client Access Name (SCAN) virtual IP. Note that every IP address will be registered in DNS and in the hosts file for each Oracle RAC node with the exception of the SCAN virtual IP, which will only be registered in DNS. Oracle requires you to define the SCAN domain address (racnode-cluster-scan in this example) to resolve on your DNS to one of three possible IP addresses in order to successfully install Oracle grid infrastructure! Defining the SCAN domain address only in the hosts files for each Oracle RAC node, and not in DNS, will cause the "Oracle Cluster Verification Utility" to fail with an [INS-20802] error during the Oracle grid infrastructure install.

Starting with Oracle Clusterware 11g release 2, you no longer need to provide a private name or IP address for the interconnect. IP addresses on the subnet you identify as private are assigned as private IP addresses for cluster member nodes. You do not need to configure these addresses manually in a hosts file. Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example) and to the subnet used for the private subnet. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. For the purpose of this guide, I will continue to include a private name and IP address on each node for the RAC interconnect. It provides self-documentation and a set of end-points on the private network I can use for troubleshooting purposes:

192.168.2.151    racnode1-priv
192.168.2.152    racnode2-priv
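Once both Linux nodes are up and networking is configured, these private end-points can be exercised directly. The sketch below forces the test out through the private interface so that a misconfigured route cannot mask a problem (run it after the /etc/hosts entries described later in this section are in place):

# From racnode1: verify racnode2's interconnect end-point through eth1
ping -c 3 -I eth1 racnode2-priv

# And the reverse test from racnode2
ping -c 3 -I eth1 racnode1-priv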
The following table displays the network configuration that will be used to build the example two-node Oracle RAC described in this guide.

Example Two-Node Oracle RAC Network Configuration

Identity         Name                   Type      IP Address       Resolved By
--------------   --------------------   -------   --------------   ------------------
Node 1 Public    racnode1               Public    192.168.1.151    DNS and hosts file
Node 1 Private   racnode1-priv          Private   192.168.2.151    DNS and hosts file
Node 1 VIP       racnode1-vip           Virtual   192.168.1.251    DNS and hosts file
Node 2 Public    racnode2               Public    192.168.1.152    DNS and hosts file
Node 2 Private   racnode2-priv          Private   192.168.2.152    DNS and hosts file
Node 2 VIP       racnode2-vip           Virtual   192.168.1.252    DNS and hosts file
SCAN VIP 1       racnode-cluster-scan   Virtual   192.168.1.187    DNS
SCAN VIP 2       racnode-cluster-scan   Virtual   192.168.1.188    DNS
SCAN VIP 3       racnode-cluster-scan   Virtual   192.168.1.189    DNS

DNS Configuration

The example Oracle RAC configuration described in this guide will use the traditional method of manually assigning static IP addresses and therefore requires a DNS server. If you do not have access to a DNS server, this section includes detailed instructions for installing a minimal DNS server on the Openfiler network storage server.

Use an Existing DNS

If you already have access to a DNS server, simply add the appropriate A and PTR records for Oracle RAC to your DNS and skip ahead to the next section, "Update /etc/resolv.conf File". Note that in the example below, I am using the domain name idevelopment.info. Please feel free to substitute your own domain name if needed.

; Forward Lookup Zone
racnode1                IN A      192.168.1.151
racnode2                IN A      192.168.1.152
racnode1-priv           IN A      192.168.2.151
racnode2-priv           IN A      192.168.2.152
racnode1-vip            IN A      192.168.1.251
racnode2-vip            IN A      192.168.1.252
openfiler1              IN A      192.168.1.195
openfiler1-priv         IN A      192.168.2.195
racnode-cluster-scan    IN A      192.168.1.187
racnode-cluster-scan    IN A      192.168.1.188
racnode-cluster-scan    IN A      192.168.1.189

; Reverse Lookup Zone
151                     IN PTR    racnode1.idevelopment.info.
152                     IN PTR    racnode2.idevelopment.info.
251                     IN PTR    racnode1-vip.idevelopment.info.
252                     IN PTR    racnode2-vip.idevelopment.info.
187                     IN PTR    racnode-cluster-scan.idevelopment.info.
188                     IN PTR    racnode-cluster-scan.idevelopment.info.
189                     IN PTR    racnode-cluster-scan.idevelopment.info.

Install DNS on Openfiler

Installing DNS on the Openfiler network storage server is a trivial task. To install or update packages on Openfiler, use the command-line tool conary, developed by rPath. To learn more about the different options and parameters that can be used with the conary utility, review the Conary QuickReference guide. Note that to install packages on Openfiler you need access to the Internet!

To install DNS on the Openfiler server, run the following command as the root user account:

[root@openfiler1 ~]# conary update bind:runtime
Including extra troves to resolve dependencies:
    bind:lib=9.4.3_P5-1.1-1 info-named:user=1-1-0.1-1
Applying update job 1 of 2:
    Install info-named(:user)=1-1-0.1-1
Applying update job 2 of 2:
    Update bind(:lib) (9.3.4_P1-0.5-1[ipv6,~!pie,ssl] -> 9.4.3_P5-1.1-1)
    Update bind-utils(:doc :runtime) (9.3.4_P1-0.5-1[ipv6,~!pie,ssl] -> 9.4.3_P5-1.1-1)
    Install bind:runtime=9.4.3_P5-1.1-1

Verify the files installed by the DNS bind package:

[root@openfiler1 ~]# conary q bind --lsl
-rwxr-xr-x    1 root   root      2643 2008-02-22 21:44:05 UTC /etc/init.d/named
-rw-r--r--    1 root   root       163 2004-07-07 19:20:10 UTC /etc/logrotate.d/named
-rw-r-----    1 root   root      1435 2004-06-18 04:39:39 UTC /etc/rndc.conf
-rw-r-----    1 root   named       65 2005-09-24 20:40:23 UTC /etc/rndc.key
-rw-r--r--    1 root   root      1561 2006-07-20 18:40:14 UTC /etc/sysconfig/named
drwxr-xr-x    1 root   named        0 2007-12-16 01:01:35 UTC /srv/named
drwxr-xr-x    1 named  named        0 2007-12-16 01:01:35 UTC /srv/named/data
drwxr-xr-x    1 named  named        0 2007-12-16 01:01:35 UTC /srv/named/slaves
-rwxr-xr-x    1 root   root      2927 2010-03-11 00:14:02 UTC /usr/bin/isc-config.sh
lrwxrwxrwx    1 root   root        16 2009-07-29 17:03:02 UTC /usr/lib/libbind.so.4 -> libbind.so.4.1.2
-rwxr-xr-x    1 root   root    294260 2010-03-11 00:48:52 UTC /usr/lib/libbind.so.4.1.2
lrwxrwxrwx    1 root   root        18 2007-03-09 17:26:40 UTC /usr/lib/libbind9.so.30 -> libbind9.so.30.0.5
-rwxr-xr-x    1 root   root     37404 2010-03-11 00:48:52 UTC /usr/lib/libbind9.so.30.0.5
lrwxrwxrwx    1 root   root        16 2010-03-11 00:14:00 UTC /usr/lib/libdns.so.38 -> libdns.so.38.0.2
-rwxr-xr-x    1 root   root   1421820 2010-03-11 00:48:52 UTC /usr/lib/libdns.so.38.0.2
lrwxrwxrwx    1 root   root        16 2009-07-29 17:02:58 UTC /usr/lib/libisc.so.36 -> libisc.so.36.0.2
-rwxr-xr-x    1 root   root    308260 2010-03-11 00:48:52 UTC /usr/lib/libisc.so.36.0.2
lrwxrwxrwx    1 root   root        18 2007-03-09 17:26:37 UTC /usr/lib/libisccc.so.30 -> libisccc.so.30.0.1
-rwxr-xr-x    1 root   root     28112 2010-03-11 00:48:51 UTC /usr/lib/libisccc.so.30.0.1
lrwxrwxrwx    1 root   root        19 2009-07-29 17:03:00 UTC /usr/lib/libisccfg.so.30 -> libisccfg.so.30.0.1
-rwxr-xr-x    1 root   root     71428 2010-03-11 00:48:52 UTC /usr/lib/libisccfg.so.30.0.1
lrwxrwxrwx    1 root   root        18 2009-07-29 17:03:01 UTC /usr/lib/liblwres.so.30 -> liblwres.so.30.0.2
-rwxr-xr-x    1 root   root     64360 2010-03-11 00:48:51 UTC /usr/lib/liblwres.so.30.0.2
-rwxr-xr-x    1 root   root      3168 2010-03-11 00:48:51 UTC /usr/sbin/dns-keygen
-rwxr-xr-x    1 root   root     21416 2010-03-11 00:48:51 UTC /usr/sbin/dnssec-keygen
-rwxr-xr-x    1 root   root     53412 2010-03-11 00:48:51 UTC /usr/sbin/dnssec-signzone
-rwxr-xr-x    1 root   root    379912 2010-03-12 14:07:50 UTC /usr/sbin/lwresd
-rwxr-xr-x    1 root   root    379912 2010-03-12 14:07:50 UTC /usr/sbin/named
-rwxr-xr-x    1 root   root      7378 2006-10-11 02:33:29 UTC /usr/sbin/named-bootconf
-rwxr-xr-x    1 root   root     20496 2010-03-11 00:48:51 UTC /usr/sbin/named-checkconf
-rwxr-xr-x    1 root   root     19088 2010-03-11 00:48:51 UTC /usr/sbin/named-checkzone
lrwxrwxrwx    1 root   root        15 2007-03-09 17:26:40 UTC /usr/sbin/named-compilezone -> named-checkzone
-rwxr-xr-x    1 root   root     24032 2010-03-11 00:48:51 UTC /usr/sbin/rndc
-rwxr-xr-x    1 root   root     11708 2010-03-11 00:48:51 UTC /usr/sbin/rndc-confgen
drwxr-xr-x    1 named  named        0 2007-12-16 01:01:35 UTC /var/run/named
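With the bind package installed, a quick version check against the named binary confirms the release that conary just laid down (this matches the BIND 9.4.3-P5 version reported later in the startup log):

[root@openfiler1 ~]# named -v
BIND 9.4.3-P5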
The /etc/named.3_P5-1.0 lrwxrwxrwx 1 root root 16 2009-07-29 17:02:58 UTC /usr/lib/libisc.so.so.so.192.so.so.3.168.conf — (DNS configuration file) /srv/named/data/idevelopment.0.1-1) Update bind-utils(:doc :runtime) (9.info.conf The first step will be to create the DNS configuration file "/etc/named.5-1[ipv6.d/named -rw-r----1 root root 1435 2004-06-18 04:39:39 UTC /etc/rndc.0.conf -rw-r----1 root named 65 2005-09-24 20:40:23 UTC /etc/rndc.so.so.ssl] -> 9.2 lrwxrwxrwx 1 root root 18 2007-03-09 17:26:37 UTC /usr/lib/libisccc.1-1) Install bind:runtime=9.d/named -rw-r--r-1 root root 163 2004-07-07 19:20:10 UTC /etc/logrotate.5 lrwxrwxrwx 1 root root 18 2009-07-29 17:03:01 UTC /usr/lib/liblwres.1.

192. you will notice it's zone definition file is "idevelopment. }. This directive specifies where named will look for zone definition files. }. to look like the one described below.info.zone".zone". The fully qualified name for this file is derived by concatenating the directory directive and the "file" specified for that zone.zone". Take note of the three entries used to configure the SCAN name for round-robin resolution to three IP addresses. }. allow-update { none. # ---------------------------------# Forward Zone # ---------------------------------zone "idevelopment.arpa.192. Create and edit the file associated with your forward lookup zone. 32 of 136 4/18/2011 10:17 PM . allow-update { none. I am using my D-Link router which is configured as my gateway to the Internet.shtml The DNS configuration file described below is configured to resolve the names of the servers described in this guide. }. For example.1. }.1. I needed to add DNS Forwarding by defining the forwarders directive.conf with at least the following content: # # # # # +-------------------------------------------------------------------+ | /etc/named.zone". the fully qualified name for the forward lookup zone definition file described below is "/srv/named/data/idevelopment.conf | | | | DNS configuration file for Oracle RAC 11g release 2 example | +-------------------------------------------------------------------+ options { // FORWARDERS: Forward any name this DNS can't resolve to my router. These files will be located in the "/srv/named/data" directory. /srv/named/data/idevelopment. This directive tells the DNS. The same rules apply for the reverse lookup zone which in this example would be "/srv/named/data/1.in-addr.168.168. file "1.192. For the purpose of this example. like those on the Internet.168. forwarders { 192.inaddr.info.zone In the DNS configuration file above.zone". # ---------------------------------# Reverse Zone # ---------------------------------zone "1. directory "/srv/named/data". anything it can't resolve should be passed to the DNS(s) listed. if you skip forward in the DNS configuration file to the "idevelopment. file "idevelopment. In order to make sure that servers on external networks. // DIRECTORY: Directory where named will look for zone files.info" forward lookup zone.168. For example. Create the file /etc/named. and several other miscellaneous nodes. This includes the two Oracle RAC nodes.info.in-addr.info" IN { type master. }. I could just as well have used the DNS entries provided by my ISP.info.arpa. we defined the forward and reverse zone definition files.arpa" IN { type master. (which in my case is "/srv/named /data/idevelopment.DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12. are resolved properly.zone"). The next directive defined in the options section is directory. the Openfiler network storage server (which is now also a DNS server!).info.

; +-------------------------------------------------------------------+
; | /srv/named/data/idevelopment.info.zone                            |
; |                                                                   |
; | Forward zone definition file for idevelopment.info                |
; +-------------------------------------------------------------------+

$ORIGIN idevelopment.info.

$TTL    86400       ; time-to-live - (1 day)

@    IN  SOA     openfiler1.idevelopment.info.  jhunter.idevelopment.info. (
                 201011021   ; serial number - (yyyymmdd+s)
                 7200        ; refresh - (2 hours)
                 300         ; retry - (5 minutes)
                 604800      ; expire - (1 week)
                 60 )        ; minimum - (1 minute)

     IN  NS      openfiler1.idevelopment.info.

; Oracle RAC Nodes
racnode1                IN  A    192.168.1.151
racnode2                IN  A    192.168.1.152
racnode1-priv           IN  A    192.168.2.151
racnode2-priv           IN  A    192.168.2.152
racnode1-vip            IN  A    192.168.1.251
racnode2-vip            IN  A    192.168.1.252

; Network Storage Server
openfiler1              IN  A    192.168.1.195
openfiler1-priv         IN  A    192.168.2.195

; Single Client Access Name (SCAN) virtual IP
racnode-cluster-scan    IN  A    192.168.1.187
racnode-cluster-scan    IN  A    192.168.1.188
racnode-cluster-scan    IN  A    192.168.1.189

; Miscellaneous Nodes
localhost               IN  A    127.0.0.1
router                  IN  A    192.168.1.1
packmule                IN  A    192.168.1.105
domo                    IN  A    192.168.1.121
switch1                 IN  A    192.168.1.122
oemprod                 IN  A    192.168.1.125
accesspoint             IN  A    192.168.1.245

/srv/named/data/1.168.192.in-addr.arpa.zone

Next, we need to create the "/srv/named/data/1.168.192.in-addr.arpa.zone" zone definition file for public network reverse lookups:

; +-------------------------------------------------------------------+
; | /srv/named/data/1.168.192.in-addr.arpa.zone                       |
; |                                                                   |
; | Reverse zone definition file for idevelopment.info                |
; +-------------------------------------------------------------------+

$ORIGIN 1.168.192.in-addr.arpa.

$TTL    86400       ; time-to-live - (1 day)

@    IN  SOA     openfiler1.idevelopment.info.  jhunter.idevelopment.info. (
                 201011021   ; serial number - (yyyymmdd+s)
                 7200        ; refresh - (2 hours)
                 300         ; retry - (5 minutes)
                 604800      ; expire - (1 week)
                 60 )        ; minimum - (1 minute)

     IN  NS      openfiler1.idevelopment.info.

; Oracle RAC Nodes
151    IN  PTR    racnode1.idevelopment.info.
152    IN  PTR    racnode2.idevelopment.info.
251    IN  PTR    racnode1-vip.idevelopment.info.
252    IN  PTR    racnode2-vip.idevelopment.info.

; Network Storage Server
195    IN  PTR    openfiler1.idevelopment.info.

; Single Client Access Name (SCAN) virtual IP
187    IN  PTR    racnode-cluster-scan.idevelopment.info.
188    IN  PTR    racnode-cluster-scan.idevelopment.info.
189    IN  PTR    racnode-cluster-scan.idevelopment.info.

; Miscellaneous Nodes
1      IN  PTR    router.idevelopment.info.
105    IN  PTR    packmule.idevelopment.info.
121    IN  PTR    domo.idevelopment.info.
122    IN  PTR    switch1.idevelopment.info.
125    IN  PTR    oemprod.idevelopment.info.
245    IN  PTR    accesspoint.idevelopment.info.
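Before starting the service, the configuration and both zone definition files can be syntax-checked with the named-checkconf and named-checkzone utilities that were installed with the bind package (they appear in the file listing earlier in this section). A clean run looks like this:

[root@openfiler1 ~]# named-checkconf /etc/named.conf

[root@openfiler1 ~]# named-checkzone idevelopment.info /srv/named/data/idevelopment.info.zone
zone idevelopment.info/IN: loaded serial 201011021
OK

[root@openfiler1 ~]# named-checkzone 1.168.192.in-addr.arpa /srv/named/data/1.168.192.in-addr.arpa.zone
zone 1.168.192.in-addr.arpa/IN: loaded serial 201011021
OK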

Start the DNS Service

When the DNS configuration file and zone definition files are in place, start the DNS server by starting the "named" service:

[root@openfiler1 ~]# service named start
Starting named:                       [  OK  ]

If named finds any problems with the DNS configuration file or zone definition files, the service will fail to start and errors will be displayed on the screen. To troubleshoot problems with starting the named service, check the /var/log/messages file. If named starts successfully, the entries in the /var/log/messages file should resemble the following:

...
Nov  2 21:35:49 openfiler1 named[7995]: starting BIND 9.4.3-P5 -u named
Nov  2 21:35:49 openfiler1 named[7995]: adjusted limit on open files from 1024 to 1048576
Nov  2 21:35:49 openfiler1 named[7995]: found 1 CPU, using 1 worker thread
Nov  2 21:35:49 openfiler1 named[7995]: using up to 4096 sockets
Nov  2 21:35:49 openfiler1 named[7995]: loading configuration from '/etc/named.conf'
Nov  2 21:35:49 openfiler1 named[7995]: using default UDP/IPv4 port range: [1024, 65535]
Nov  2 21:35:49 openfiler1 named[7995]: using default UDP/IPv6 port range: [1024, 65535]
Nov  2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface lo, 127.0.0.1#53
Nov  2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface eth0, 192.168.1.195#53
Nov  2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface eth1, 192.168.2.195#53
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 0.IN-ADDR.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 127.IN-ADDR.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 254.169.IN-ADDR.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 2.0.192.IN-ADDR.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 255.255.255.255.IN-ADDR.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: D.F.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 8.E.F.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: 9.E.F.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: A.E.F.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: automatic empty zone: B.E.F.IP6.ARPA
Nov  2 21:35:49 openfiler1 named[7995]: command channel listening on 127.0.0.1#953
Nov  2 21:35:49 openfiler1 named[7995]: command channel listening on ::1#953
Nov  2 21:35:49 openfiler1 named[7995]: no source of entropy found
Nov  2 21:35:49 openfiler1 named[7995]: zone 1.168.192.in-addr.arpa/IN: loaded serial 201011021
Nov  2 21:35:49 openfiler1 named[7995]: zone idevelopment.info/IN: loaded serial 201011021
Nov  2 21:35:49 openfiler1 named: named startup succeeded
Nov  2 21:35:49 openfiler1 named[7995]: running

Configure DNS to Start Automatically

Now that the named service is running, issue the following commands to make sure this service starts automatically at boot time:

[root@openfiler1 ~]# chkconfig named on

[root@openfiler1 ~]# chkconfig named --list
named           0:off   1:off   2:on    3:on    4:on    5:on    6:off

Update "/etc/resolv.conf" File

With DNS now set up and running, the next step is to configure each server to use it for name resolution. This is accomplished by editing the "/etc/resolv.conf" file on each server, including the two Oracle RAC nodes and the Openfiler network storage server. Make certain the /etc/resolv.conf file contains the following entries, where the IP address of the name server and the domain match those of your DNS server and the domain you have configured:

nameserver 192.168.1.195
search idevelopment.info

The second line allows you to resolve a name on this network without having to specify the fully qualified host name.

Verify that the /etc/resolv.conf file was successfully updated on all servers in our mini-network:

[root@openfiler1 ~]# cat /etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

[root@racnode1 ~]# cat /etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

[root@racnode2 ~]# cat /etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info
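In addition to the nslookup tests shown next, the dig utility (installed as part of the same bind-utils package that provides nslookup) can be used for quick one-line forward and reverse checks against the new name server. A sketch:

[root@racnode1 ~]# dig @192.168.1.195 racnode1.idevelopment.info +short
192.168.1.151

[root@racnode1 ~]# dig @192.168.1.195 -x 192.168.1.151 +short
racnode1.idevelopment.info.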

After modifying the /etc/resolv.conf file on every server in the cluster, verify that DNS is functioning correctly by testing forward and reverse lookups using the nslookup command-line utility. Perform tests similar to the following from each node to all other nodes in your cluster:

[root@racnode1 ~]# nslookup racnode2.idevelopment.info
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode2.idevelopment.info
Address: 192.168.1.152

[root@racnode1 ~]# nslookup racnode2
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode2.idevelopment.info
Address: 192.168.1.152

[root@racnode1 ~]# nslookup 192.168.1.152
Server:         192.168.1.195
Address:        192.168.1.195#53

152.1.168.192.in-addr.arpa      name = racnode2.idevelopment.info.

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187

[root@racnode1 ~]# nslookup 192.168.1.187
Server:         192.168.1.195
Address:        192.168.1.195#53

187.1.168.192.in-addr.arpa      name = racnode-cluster-scan.idevelopment.info.

Configuring Public and Private Network

In our two node example, we need to configure the network on both Oracle RAC nodes for access to the public network as well as their private interconnect. The easiest way to configure network settings in RHEL / CentOS is with the program "Network Configuration". Network Configuration is a GUI application that can be started from the command-line as the root user account as follows:

[root@racnode1 ~]# /usr/bin/system-config-network &

Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file, and verify the DNS configuration. All of these tasks can be completed using the Network Configuration GUI. It should be noted that the /etc/hosts entries are the same for both Oracle RAC nodes and that I removed any entry that has to do with IPv6. For example:

# ::1     localhost6.localdomain6 localhost6

Our example Oracle RAC configuration will use the following network settings:

Oracle RAC Node 1 - (racnode1)

Device    IP Address       Subnet           Gateway        Purpose
eth0      192.168.1.151    255.255.255.0    192.168.1.1    Connects racnode1 to the public network
eth1      192.168.2.151    255.255.255.0                   Connects racnode1 (interconnect) to racnode2 (racnode2-priv)

/etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain   localhost

# Public Network - (eth0)
192.168.1.151    racnode1.idevelopment.info          racnode1
192.168.1.152    racnode2.idevelopment.info          racnode2

# Private Interconnect - (eth1)
192.168.2.151    racnode1-priv.idevelopment.info     racnode1-priv
192.168.2.152    racnode2-priv.idevelopment.info     racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251    racnode1-vip.idevelopment.info      racnode1-vip
192.168.1.252    racnode2-vip.idevelopment.info      racnode2-vip

# Private Storage Network for Openfiler - (eth1)
192.168.1.195    openfiler1.idevelopment.info        openfiler1
192.168.2.195    openfiler1-priv.idevelopment.info   openfiler1-priv

Oracle RAC Node 2 - (racnode2)

Device    IP Address       Subnet           Gateway        Purpose
eth0      192.168.1.152    255.255.255.0    192.168.1.1    Connects racnode2 to the public network
eth1      192.168.2.152    255.255.255.0                   Connects racnode2 (interconnect) to racnode1 (racnode1-priv)

/etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain   localhost

# Public Network - (eth0)
192.168.1.151    racnode1.idevelopment.info          racnode1
192.168.1.152    racnode2.idevelopment.info          racnode2

# Private Interconnect - (eth1)
192.168.2.151    racnode1-priv.idevelopment.info     racnode1-priv
192.168.2.152    racnode2-priv.idevelopment.info     racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251    racnode1-vip.idevelopment.info      racnode1-vip
192.168.1.252    racnode2-vip.idevelopment.info      racnode2-vip

# Private Storage Network for Openfiler - (eth1)
192.168.1.195    openfiler1.idevelopment.info        openfiler1
192.168.2.195    openfiler1-priv.idevelopment.info   openfiler1-priv

Openfiler Network Storage Server - (openfiler1)

Device    IP Address       Subnet           Gateway        Purpose
eth0      192.168.1.195    255.255.255.0    192.168.1.1    Connects openfiler1 to the public network
eth1      192.168.2.195    255.255.255.0                   Connects openfiler1 to the private network

/etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain   localhost
192.168.1.195    openfiler1.idevelopment.info    openfiler1

In the screen shots below, only Oracle RAC Node 1 (racnode1) is shown. Be sure to make all the proper network settings to both Oracle RAC nodes.

Figure 2: Network Configuration Screen, Node 1 (racnode1)

Figure 3: Ethernet Device Screen, eth0 (racnode1)

Figure 4: Ethernet Device Screen, eth1 (racnode1)

Figure 5: Network Configuration Screen, DNS (racnode1)

Figure 6: Network Configuration Screen, /etc/hosts (racnode1)

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from racnode1:

[root@racnode1 ~]# /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:26:9E:02:D3:AC
          inet addr:192.168.1.151  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::226:9eff:fe02:d3ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:236549 errors:0 dropped:0 overruns:0 frame:0
          TX packets:264953 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28686645 (27.3 MiB)  TX bytes:159319080 (151.9 MiB)
          Interrupt:177 Memory:dfef0000-dff00000

eth1      Link encap:Ethernet  HWaddr 00:0E:0C:64:D1:E5
          inet addr:192.168.2.151  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:cff:fe64:d1e5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:120 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24544 (23.9 KiB)  TX bytes:8634 (8.4 KiB)
          Base address:0xddc0 Memory:fe9c0000-fe9e0000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3191 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4296868 (4.0 MiB)  TX bytes:4296868 (4.0 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Verify Network Configuration

As the root user account, verify the network configuration by using the ping command to test the connection from each node in the cluster to all the other nodes. For example, as the root user account, run the following commands on each node:

# ping -c 3 racnode1.idevelopment.info
# ping -c 3 racnode2.idevelopment.info
# ping -c 3 racnode1-priv.idevelopment.info
# ping -c 3 racnode2-priv.idevelopment.info
# ping -c 3 openfiler1.idevelopment.info
# ping -c 3 openfiler1-priv.idevelopment.info
# ping -c 3 racnode1
# ping -c 3 racnode2
# ping -c 3 racnode1-priv
# ping -c 3 racnode2-priv
# ping -c 3 openfiler1
# ping -c 3 openfiler1-priv

You should not get a response from the nodes using the ping command for the virtual IPs (racnode1-vip, racnode2-vip) or the SCAN IP addresses (racnode-cluster-scan) until after Oracle Clusterware is installed and running. The SCAN addresses and virtual IP addresses (VIPs) should not respond to ping commands before installation. If the ping commands for the public addresses fail, resolve the issue before you proceed.

Verify SCAN Configuration

In this article, I will configure SCAN for round-robin resolution to three, manually configured static IP addresses in DNS:

racnode-cluster-scan    IN A    192.168.1.187
racnode-cluster-scan    IN A    192.168.1.188
racnode-cluster-scan    IN A    192.168.1.189

Oracle Corporation strongly recommends configuring three IP addresses considering load balancing and high availability requirements, regardless of the number of servers in the cluster. These virtual IP addresses must all be on the same subnet as the public network in the cluster. The SCAN name must be 15 characters or less in length, not including the domain, and must be resolvable without the domain suffix. In other words, "racnode-cluster-scan" must be resolvable as opposed to only "racnode-cluster-scan.idevelopment.info". The virtual IP addresses for SCAN (and the virtual IP address for the node) should not be manually assigned to a network interface on the cluster, since Oracle Clusterware is responsible for enabling them after the Oracle grid infrastructure installation.

Verify the SCAN configuration in DNS using the nslookup command-line utility. Since our DNS is set up to provide round-robin access to the IP addresses resolved by the SCAN entry, run the nslookup command several times to make certain that the round-robin algorithm is functioning properly. The result should be that each time the nslookup is run, it will return the set of three IP addresses in a different order.
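To save typing, a small shell loop (a sketch) can exercise the SCAN entry several times in a row; with round-robin in effect, the address ordering should rotate between iterations:

for i in 1 2 3
do
    echo "--- attempt ${i} ---"
    # Print only the resolved SCAN addresses (the #53 line is the DNS server itself)
    nslookup racnode-cluster-scan | grep "^Address" | grep -v "#53"
done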

For example:

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node name (racnode1 or racnode2) is not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1    racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1    localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation
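A quick way to audit this on both nodes is to inspect the loopback entry directly; the output should contain only localhost names, never racnode1 or racnode2:

[root@racnode1 ~]# grep "^127\.0\.0\.1" /etc/hosts
127.0.0.1        localhost.localdomain   localhost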

Check and turn off UDP ICMP rejections

During the Linux installation process, I indicated to not configure the firewall option. By default the option to configure a firewall is selected by the installer. This has burned me several times, so I like to do a double-check that the firewall option is not configured and to ensure UDP ICMP filtering is turned off.

If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware process fails, you will have something similar to the following in the <machine_name>_evmocr.log file:

08/29/2005 22:17:19
oac_init:2: Could not connect to server, clsc retcode = 9
08/29/2005 22:17:19
a_init:12!: Client init unsuccessful : [32]
ibctx:1:ERROR: INVALID FORMAT
proprinit:problem reading the bootblock or superbloc 22

When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule, or to simply have the firewall option turned off. The Oracle Clusterware software will then start to operate normally and not crash. The following commands should be executed as the root user account on both Oracle RAC nodes:

1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below), you do not have to proceed with the following steps.

[root@racnode1 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.

[root@racnode2 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.

2. If the firewall option is operating, you will need to first manually disable UDP ICMP rejections:

[root@racnode1 ~]# /etc/rc.d/init.d/iptables stop
Flushing firewall rules:                   [  OK  ]
Setting chains to policy ACCEPT: filter    [  OK  ]
Unloading iptables modules:                [  OK  ]

3. Then, turn UDP ICMP rejections off for all subsequent server reboots (which should always be turned off):

[root@racnode1 ~]# chkconfig iptables off
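To confirm the change will survive future reboots, list the runlevel configuration for the iptables service; every runlevel should report off:

[root@racnode1 ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off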

Cluster Time Synchronization Service

Perform the following Cluster Time Synchronization Service configuration on both Oracle RAC nodes in the cluster.

Oracle Clusterware 11g release 2 and later requires time synchronization across all nodes within a cluster where Oracle RAC is deployed. Oracle provides two options for time synchronization: an operating system configured network time protocol (NTP), or the new Oracle Cluster Time Synchronization Service (CTSS). Oracle Cluster Time Synchronization Service (ctssd) is designed for organizations whose Oracle RAC databases are unable to access NTP services. Configuring NTP is outside the scope of this article, and this guide will therefore rely on the Oracle Cluster Time Synchronization Service as the network time protocol.

Configure Cluster Time Synchronization Service - (CTSS)

If you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then de-configure and de-install the Network Time Protocol (NTP) service. To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences, and remove the ntp.conf file. To complete these steps on Red Hat Enterprise Linux or CentOS, run the following commands as the root user account on both Oracle RAC nodes:

[root@racnode1 ~]# /sbin/service ntpd stop
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file:

[root@racnode1 ~]# rm /var/run/ntpd.pid

This file maintains the pid for the NTP daemon.

When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is automatically installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner (grid):

[grid@racnode1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
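Since OUI decides between active and observer mode based on what it finds, it is worth confirming on both Oracle RAC nodes that nothing NTP-related was left behind before launching the installer. A short verification sketch:

# ntpd should be stopped and disabled at every runlevel
/sbin/service ntpd status
chkconfig --list ntpd

# Neither the configuration file nor the old pid file should exist
ls -l /etc/ntp.conf /var/run/ntpd.pid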

Then, restart the NTP service:

# /sbin/service ntpd restart

On SUSE systems, modify the configuration file /etc/sysconfig/ntp with the following settings:

NTPD_OPTIONS="-x -u ntp"

Restart the daemon using the following command:

# service ntp restart

Configure iSCSI Volumes using Openfiler

Perform the following configuration tasks on the network storage server (openfiler1).

Openfiler administration is performed using the Openfiler Storage Control Center — a browser based tool over an https connection on port 446. For example:

https://openfiler1.idevelopment.info:446/

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

Username: openfiler
Password: password

The first page the administrator sees is the [Status] / [System Overview] screen.

To use Openfiler as an iSCSI storage server, we have to perform six major tasks — set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.

Services

To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Manage Services]:

Figure 7: Enable iSCSI Openfiler Service

To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'.

The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:

[root@openfiler1 ~]# service iscsi-target status
ietd (pid 14243) is running...

Network Access Configuration

The next step is to configure network access in Openfiler to identify both Oracle RAC nodes (racnode1 and racnode2) that will need to access the iSCSI volumes through the storage (private) network. Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volumes required by both Oracle RAC nodes. That will be accomplished later in this section by updating the ACL for each new logical volume.

As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to setup networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add both Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.

When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.

It is important to remember that you will be entering the IP address of the private network (eth1) for each of the RAC nodes in the cluster.

The following image shows the results of adding both Oracle RAC nodes:

Figure 8: Configure Openfiler Network Access for Oracle RAC Nodes

Physical Storage

In this section, we will be creating the three iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple steps that will be performed on the internal 73GB 15K SCSI hard disk connected to the Openfiler server.

Storage devices like internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or ANY other storage can be connected to the Openfiler server and served to the clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.

In our case, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server this drive is seen as /dev/sdb (MAXTOR ATLAS15K2_73SCA). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage Control Center:

Figure 9: Openfiler Physical Storage - Block Device Management

Partitioning the Physical Disk

The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left to their default setting where the only modification would be to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values I specified to create the primary partition on /dev/sdb:

Physical Disk Primary Partition
Mode:               Primary
Partition Type:     Physical volume
Starting Cylinder:  1
Ending Cylinder:    8924

The size now shows 68.36 GB. To accept that we click on the [Create] button. This results in a new partition (/dev/sdb1) on our internal hard disk:

Figure 10: Partition the Physical Volume

Volume Group Management

The next step is to create a Volume Group. We will be creating a single volume group named racdbvg that contains the newly created primary partition.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, enter the name of the new volume group (racdbvg), click on the check-box in front of /dev/sdb1 to select that partition, and finally click on the [Add volume group] button. After that we are presented with the list that now shows our newly created volume group named "racdbvg":

Figure 11: New Volume Group Created

Logical Volumes

We can now create the three logical volumes in the newly created volume group (racdbvg).

From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group (racdbvg) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group - (Create a volume in "racdbvg"). Use this screen to create the following three iSCSI logical volumes. After creating each logical volume, the application will point you to the "Manage Volumes" screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all three iSCSI volumes are created:

iSCSI / Logical Volumes

Volume Name    Volume Description           Required Space (MB)   Filesystem Type
racdb-crs1     racdb - ASM CRS Volume 1      2,208                iSCSI
racdb-data1    racdb - ASM Data Volume 1    33,888                iSCSI
racdb-fra1     racdb - ASM FRA Volume 1     33,888                iSCSI

In effect we have created three iSCSI disks that can now be presented to iSCSI clients (racnode1 and racnode2) on the network.
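For reference, Openfiler is doing standard LVM2 work under the covers here. If you were building the same layout by hand on a generic Linux storage server, the equivalent commands would look roughly like the following (a sketch only; Openfiler performs these steps for you through the GUI, and the names below simply mirror the ones used in this article):

# Mark the partition as an LVM physical volume, build the
# volume group, then carve out the three logical volumes.
pvcreate /dev/sdb1
vgcreate racdbvg /dev/sdb1
lvcreate -L 2208M  -n racdb-crs1  racdbvg
lvcreate -L 33888M -n racdb-data1 racdbvg
lvcreate -L 33888M -n racdb-fra1  racdbvg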

The "Manage Volumes" screen should look as follows:

Figure 12: New Logical (iSCSI) Volumes

iSCSI Targets

At this point, we have three iSCSI logical volumes. Before an iSCSI client can have access to them, however, an iSCSI target will need to be created for each of these three volumes. Each iSCSI logical volume will be mapped to a specific iSCSI target and the appropriate network access permissions to that target will be granted to both Oracle RAC nodes. For the purpose of this article, there will be a one-to-one mapping between an iSCSI logical volume and an iSCSI target.

There are three steps involved in creating and configuring an iSCSI target — create a unique Target IQN (basically, the universal name for the new iSCSI target), map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally, grant both of the Oracle RAC nodes access to the new iSCSI target. Please note that this process will need to be performed for each of the three iSCSI logical volumes created in the previous section.

For the purpose of this article, the following table lists the new iSCSI target names (the Target IQN) and which iSCSI logical volume it will be mapped to:

iSCSI Target / Logical Volume Mappings

Target IQN                              iSCSI Volume Name   Volume Description
iqn.2006-01.com.openfiler:racdb.crs1    racdb-crs1          racdb - ASM CRS Volume 1
iqn.2006-01.com.openfiler:racdb.data1   racdb-data1         racdb - ASM Data Volume 1
iqn.2006-01.com.openfiler:racdb.fra1    racdb-fra1          racdb - ASM FRA Volume 1

We are now ready to create the three new iSCSI targets — one for each of the iSCSI logical volumes. The example below illustrates the three steps required to create a new iSCSI target by creating the Oracle Clusterware / racdb-crs1 target (iqn.2006-01.com.openfiler:racdb.crs1). This three step process will need to be repeated for each of the three new iSCSI targets listed in the table above.

Create New Target IQN

From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab "Target Configuration" is selected. This page allows you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the "Target IQN"). An example Target IQN is "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":

Figure 13: Create New iSCSI Target : Default Target IQN

I prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (racdb-crs1), I will modify the default Target IQN by replacing the string "tsn.ae4683b67fd3" with "racdb.crs1" as shown in Figure 14 below:

Figure 14: Create New iSCSI Target : Replace Default Target IQN

Once you are satisfied with the new Target IQN, click the [Add] button. This will create a new iSCSI target and then bring up a page that allows you to modify a number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.

LUN Mapping

After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volume to it. Under the "Target Configuration" sub-tab, verify the correct iSCSI target is selected in the section "Select iSCSI Target". If not, use the pull-down menu to select the correct iSCSI target and click the [Change] button.

Next, click on the grey sub-tab named "LUN Mapping" (next to the "Target Configuration" sub-tab). Locate the appropriate iSCSI logical volume (/dev/racdbvg/racdb-crs1 in this first example) and click the [Map] button. You do not need to change any settings on this page. Your screen should look similar to Figure 15 after clicking the "Map" button for volume /dev/racdbvg/racdb-crs1:

Figure 15: Create New iSCSI Target : Map LUN

Network ACL

Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. Awhile back, we configured network access in Openfiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private) network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target.

Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the current iSCSI target, change the "Access" for both hosts from 'Deny' to 'Allow' and click the [Update] button:


Figure 16: Create New iSCSI Target : Update Network ACL

Go back to the Create New Target IQN section and perform these same three tasks for the remaining two iSCSI logical volumes while substituting the values found in the "iSCSI Target / Logical Volume Mappings" table (namely, the value in the 'Target IQN' column).

Configure iSCSI Volumes on Oracle RAC Nodes
Configure the iSCSI initiator on both Oracle RAC nodes in the cluster. Creating partitions, however, should only be executed on one of the nodes in the RAC cluster.

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two Linux servers, racnode1 and racnode2, running Red Hat Enterprise Linux 5.5 or CentOS 5.5.

In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. RHEL / CentOS 5.5 includes the Open-iSCSI iSCSI software initiator which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of RHEL / CentOS (4.x) which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/crs1) for each of the iSCSI target names discovered using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Before we can do any of this, however, we must first install the iSCSI initiator software.

This guide makes use of ASMLib 2.0 which is a support library for the Automatic Storage Management (ASM) feature of the Oracle Database. ASMLib will be used to label all iSCSI volumes used in this guide. By default, ASMLib already provides persistent paths and permissions for storage devices used with ASM. This feature eliminates the need for updating udev or devlabel files with storage device paths and permissions. For the purpose of this article and in practice, I still opt to create persistent local SCSI device names for each of the iSCSI target names discovered using udev. This provides a means of self-documentation which helps to quickly identify the name and location of each volume.

Installing the iSCSI (initiator) service

With Red Hat Enterprise Linux 5.5 or CentOS 5.5, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD/DVD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscs

If the iscsi-initiator-utils package is not installed, load CD/DVD #1 into each of the Oracle RAC nodes and perform the following:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/CentOS
[root@racnode1 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Verify the iscsi-initiator-utils package is now installed on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)

[root@racnode2 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)
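If the nodes have access to a configured yum repository, installing from the repository instead of the install media should work just as well (a sketch; the package name is as shipped with CentOS 5):

[root@racnode1 ~]# yum install -y iscsi-initiator-utils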

Configure the iSCSI (initiator) service

After verifying that the iscsi-initiator-utils package is installed, start the iscsid service on both Oracle RAC nodes and enable it to automatically start when the system boots. We will also configure the iscsi service, which logs in to the iSCSI targets needed at system startup, to start automatically.

[root@racnode1 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode1 ~]# chkconfig iscsid on
[root@racnode1 ~]# chkconfig iscsi on

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
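As an aside, each initiator identifies itself to the storage server with its own IQN, which Open-iSCSI stores in /etc/iscsi/initiatorname.iscsi. Viewing it is a quick sanity check that the initiator software is installed and configured (the IQN value below is purely illustrative; yours will differ):

[root@racnode1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:5df61a3732ad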


Manually Log In to iSCSI Targets

At this point, the iSCSI initiator service has been started and each of the Oracle RAC nodes was able to discover the available targets from the Openfiler network storage server. The next step is to manually log in to each of the available iSCSI targets which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-priv) — I believe this is required given the discovery (above) shows the targets using the IP address.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l
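To confirm that all three log ins succeeded, iscsiadm can list the active sessions (a quick check; the session IDs shown in brackets will vary):

[root@racnode1 ~]# iscsiadm -m session
tcp: [1] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
tcp: [2] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
tcp: [3] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1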

Configure Automatic Log In

The next step is to ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted). As with the manual log in process described above, perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
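To verify the setting took effect, the node record can be dumped and filtered for node.startup (a sketch for one target; repeat for the other two):

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 | grep node.startup
node.startup = automatic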

Create Persistent Local SCSI Device Names

In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Although this is not a strict requirement since we will be using ASMLib 2.0 for all volumes, it provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.

By default, when either of the Oracle RAC nodes boot and the iSCSI initiator service is started, it will automatically log in to each of the iSCSI targets configured in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:racdb.crs1 may get mapped to /dev/sdb. I can actually determine the current mappings for all targets by looking at the /dev/disk/by-path directory:

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc

Using the output from the above listing, we can establish the following current mappings:
Current iSCSI Target Name to Local SCSI Device Name Mappings

iSCSI Target Name                       Local SCSI Device Name
iqn.2006-01.com.openfiler:racdb.crs1    /dev/sdb
iqn.2006-01.com.openfiler:racdb.data1   /dev/sdd
iqn.2006-01.com.openfiler:racdb.fra1    /dev/sdc


This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1 gets mapped to the local SCSI device /dev/sdc. It is therefore impractical to rely on using the local SCSI device name given there is no way to predict the iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference (i.e. /dev/iscsi/crs1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.

The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to receive events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

We now need to create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:

[root@racnode1 ~]# mkdir -p /etc/udev/scripts
[root@racnode2 ~]# mkdir -p /etc/udev/scripts

Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

After creating the UNIX SHELL script, change it to executable:

[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
[root@racnode2 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:

[root@racnode1 ~]# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode1 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

[root@racnode2 ~]# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode2 ~]# service iscsi start
iscsid dead but pid file exists
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

Let's see if our hard work paid off:

[root@racnode1 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:32 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:32 part -> ../../sdd

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:32 part -> ../../sde

[root@racnode2 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:36 part -> ../../sdd

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:36 part -> ../../sde

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:36 part -> ../../sdc

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent iSCSI target name to local device name mapping which is described in the following table:

iSCSI Target Name to Local Device Name Mappings

iSCSI Target Name                       Local Device Name
iqn.2006-01.com.openfiler:racdb.crs1    /dev/iscsi/crs1/part
iqn.2006-01.com.openfiler:racdb.data1   /dev/iscsi/data1/part
iqn.2006-01.com.openfiler:racdb.fra1    /dev/iscsi/fra1/part
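If the /dev/iscsi symlinks do not show up, the udev rule and call-out script can be exercised by hand without rebooting. On RHEL / CentOS 5 the udevtest utility replays the configured rules for a given sysfs device (a debugging sketch; substitute whichever sd device your iSCSI LUN currently landed on):

[root@racnode1 ~]# udevtest /block/sdc

The output traces each rule as it is evaluated, including the PROGRAM call to /etc/udev/scripts/iscsidev.sh and the symlink names udev would create, which makes it easy to spot a typo in the rules file or a script that is not executable.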

Create Partitions on iSCSI Volumes

We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume. As mentioned earlier in this article, I will be using Automatic Storage Management (ASM) to store the shared files required for Oracle Clusterware, the physical database files (data/index files, online redo log files, and control files), and the Fast Recovery Area (FRA) for the clustered database.

The Oracle Clusterware shared files (OCR and voting disk) will be stored in an ASM disk group named +CRS which will be configured for external redundancy. The physical database files for the clustered database will be stored in an ASM disk group named +RACDB_DATA which will also be configured for external redundancy. Finally, the Fast Recovery Area (RMAN backups and archived redo log files) will be stored in a third ASM disk group named +FRA which too will be configured for external redundancy.

The following table lists the three ASM disk groups that will be created and which iSCSI volume they will contain:

Oracle Shared Drive Configuration

File Types                  ASM Diskgroup Name   iSCSI Target (short) Name   ASM Redundancy   Size   ASMLib Volume Name
OCR and Voting Disk         +CRS                 crs1                        External         2GB    ORCL:CRSVOL1
Oracle Database Files       +RACDB_DATA          data1                       External         32GB   ORCL:DATAVOL1
Oracle Fast Recovery Area   +FRA                 fra1                        External         32GB   ORCL:FRAVOL1

As shown in the table above, we will need to create a single Linux primary partition on each of the three iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the three iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).

In this example, I will be running the fdisk command from racnode1 to create a single primary partition on each iSCSI target using the local device names created by udev in the previous section:

/dev/iscsi/crs1/part
/dev/iscsi/data1/part
/dev/iscsi/fra1/part

Creating the single partition on each of the iSCSI volumes must only be run from one of the nodes in the Oracle RAC cluster! (i.e. racnode1)

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/crs1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012

Command (m for help): p

Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

              Device Boot      Start      End      Blocks   Id  System
/dev/iscsi/crs1/part1               1     1012     2258753   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/data1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/data1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

               Device Boot      Start      End      Blocks   Id  System
/dev/iscsi/data1/part1                1    33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/fra1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/fra1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

              Device Boot      Start      End      Blocks   Id  System
/dev/iscsi/fra1/part1                1    33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
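Since all three volumes accept the defaults, the interactive dialogue can also be scripted. A minimal sketch, assuming the same udev device names as above (fdisk reads its commands from standard input, and an empty line accepts the default cylinder values):

for disk in crs1 data1 fra1; do
    # n = new partition, p = primary, 1 = partition number,
    # two empty lines = accept default first/last cylinder, w = write
    printf "n\np\n1\n\n\nw\n" | fdisk /dev/iscsi/${disk}/part
done

As with the interactive sessions, this should only ever be run from one node in the cluster (racnode1).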

Verify New Partitions

After creating all required partitions from racnode1, you should now inform the kernel of the partition changes using the following command as the root user account from all remaining nodes in the Oracle RAC cluster (racnode2). Note that the mapping of iSCSI target names discovered from Openfiler and the local SCSI device name will be different on both Oracle RAC nodes. This is not a concern and will not cause any problems since we will not be using the local SCSI device names but rather the local device names created by udev in the previous section.

From racnode2, run the following commands:

[root@racnode2 ~]# partprobe

[root@racnode2 ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       19452   156143767+  8e  Linux LVM

Disk /dev/sdb: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       33888    34701296   83  Linux

Disk /dev/sdc: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       33888    34701296   83  Linux

Disk /dev/sdd: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1012     2258753   83  Linux

As a final step you should run the following command on both Oracle RAC nodes to verify that udev created the new symbolic links for each new partition:

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sde1

[root@racnode2 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sde1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sdc1

The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for ASMLib later in this guide:

/dev/iscsi/crs1/part1
/dev/iscsi/data1/part1
/dev/iscsi/fra1/part1

Create Job Role Separation Operating System Privileges Groups, Users, and Directories

Perform the following user, group, directory configuration, and setting shell limit tasks for the grid and oracle users on both Oracle RAC nodes in the cluster.

This section provides the instructions on how to create the operating system users and groups to install all Oracle software using a Job Role Separation configuration. The commands in this section should be performed on both Oracle RAC nodes as root to create these groups, users, and directories. Note that the group and user IDs must be identical on both Oracle RAC nodes in the cluster. Check to make sure that the group and user IDs you want to use are available on each cluster member node, and confirm that the primary group for each grid infrastructure for a cluster installation owner has the same name and group ID, which for the purpose of this guide is oinstall (GID 1000).

A Job Role Separation privileges configuration of Oracle is a configuration with operating system groups and users that divide administrative access privileges to the Oracle grid infrastructure installation from other administrative privileges users and groups associated with other Oracle installations (e.g. the Oracle database software). Administrative privileges access is granted by membership in separate operating system groups, and installation privileges are granted by using different installation owners for each Oracle installation.

This type of configuration is optional but highly recommended by Oracle for organizations that need to restrict user access to Oracle software by responsibility areas for different administrator users. With this type of configuration, a small organization could simply allocate operating system user privileges so that you can use one administrative user and one group for operating system authentication for all system privileges on the storage and database tiers. For example, you can designate the oracle user to be the sole installation owner for all Oracle software (Grid infrastructure and the Oracle database software), and designate oinstall to be the single group whose members are granted all system privileges for Oracle Clusterware, Automatic Storage Management, and all Oracle Databases on the servers, and all privileges as installation owners. Other organizations, however, have specialized system roles who will be responsible for installing the Oracle software such as system administrators, network administrators, or storage administrators. These different administrative users can configure a system in preparation for an Oracle grid infrastructure for a cluster installation, and complete all configuration tasks that require operating system root privileges. When grid infrastructure installation and configuration is completed successfully, a system administrator should only need to provide configuration information and to grant access to the database administrator to run scripts as root during an Oracle RAC installation.

One OS user will be created to own each Oracle software product — "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, a user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.

The following O/S groups will be created to support job role separation:

Description                                 OS Group Name   OS Users Assigned to this Group   Oracle Privilege
Oracle Inventory and Software Owner         oinstall        grid, oracle
Oracle Automatic Storage Management Group   asmadmin        grid                              SYSASM
ASM Database Administrator Group            asmdba          grid, oracle                      SYSDBA for ASM
ASM Operator Group                          asmoper         grid                              SYSOPER for ASM
Database Administrator                      dba             oracle                            SYSDBA
Database Operator                           oper            oracle                            SYSOPER

Oracle Inventory Group (typically oinstall)

Members of the OINSTALL group are considered the "owners" of the Oracle software and are granted privileges to write to the Oracle central inventory (oraInventory). When you install Oracle software on a Linux system for the first time, OUI creates the /etc/oraInst.loc file. This file identifies the name of the Oracle Inventory group (by default, oinstall), and the path of the Oracle Central Inventory directory. By default, if an oraInventory group does not exist, then the installer lists the primary group of the installation owner for the grid infrastructure for a cluster as the oraInventory group. Ensure that this group is available as a primary group for all planned Oracle software installation owners. For the purpose of this guide, the grid and oracle installation owners must be configured with oinstall as their primary group.

The Oracle Automatic Storage Management Group (typically asmadmin)

This is a required group. Create this group as a separate group if you want to have separate administration privilege groups for Oracle ASM and Oracle Database administrators. In Oracle documentation, the operating system group whose members are granted privileges is called the OSASM group, and in code examples, where there is a group specifically created to grant this privilege, it is referred to as asmadmin. Members of the OSASM group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system authentication. The SYSASM privilege that was introduced in Oracle ASM 11g release 1 (11.1) is now fully separated from the SYSDBA privilege in Oracle ASM 11g release 2 (11.2). SYSASM privileges no longer provide access privileges on an RDBMS instance. The SYSASM privileges permit mounting and dismounting disk groups, and other storage administration tasks. Providing system privileges for the storage tier using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between ASM administration and database administration, and helps to prevent different databases using the same storage from accidentally overwriting each others files.

The ASM Database Administrator group (OSDBA for ASM, typically asmdba)

Members of the ASM Database Administrator group (OSDBA for ASM) are granted a subset of the SYSASM privileges, with read and write access to files managed by Oracle ASM. The grid infrastructure installation owner (grid) and all Oracle Database software owners (oracle) must be a member of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.

Members of the ASM Operator Group (OSOPER for ASM, typically asmoper)

This is an optional group. Create this group if you want a separate group of operating system users to have a limited set of Oracle ASM instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege. To use the ASM Operator group to create an ASM administrator group with fewer privileges than the default asmadmin group, then you must choose the Advanced installation type to install the Grid infrastructure software. In this case, OUI prompts you to specify the name of this group. In this guide, this group is asmoper. If you want to have an OSOPER for ASM group, then the grid infrastructure for a cluster software owner (grid) must be a member of this group.

Database Administrator (OSDBA, typically dba)

Members of the OSDBA group can use SQL to connect to an Oracle instance as SYSDBA using operating system authentication. Members of this group can perform critical database administration tasks, such as creating the database and instance startup and shutdown. The default name for this group is dba. The SYSDBA system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself. The SYSDBA system privilege should not be confused with the database role DBA. The DBA role does not include the SYSDBA or SYSOPER system privileges.

Database Operator (OSOPER, typically oper)

Members of the OSOPER group can use SQL to connect to an Oracle instance as SYSOPER using operating system authentication. Members of this optional group have a limited set of database administrative privileges such as managing and running backups. The default name for this group is oper. The SYSOPER system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself. To use this group, choose the Advanced installation type to install the Oracle database software.

Create Groups and User for Grid Infrastructure

Let's start this section by creating the recommended OS groups and user for Grid Infrastructure on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid

[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

-------------------------------------------------

[root@racnode2 ~]# groupadd -g 1000 oinstall
[root@racnode2 ~]# groupadd -g 1200 asmadmin
[root@racnode2 ~]# groupadd -g 1201 asmdba
[root@racnode2 ~]# groupadd -g 1202 asmoper
[root@racnode2 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid

[root@racnode2 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

Set the password for the grid account on both Oracle RAC nodes:

[root@racnode1 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.
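Because the group and user IDs must be identical on both nodes, a quick cross-node comparison can catch a mistake early. A minimal sketch, assuming root can SSH to racnode2 (a password prompt is fine here, since user equivalence has not been configured yet):

[root@racnode1 ~]# id grid
[root@racnode1 ~]# ssh racnode2 id grid

Both commands should print identical uid, gid, and groups values.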

Create Login Script for the grid User Account

Log in to both Oracle RAC nodes as the grid user account and create the following login script (.bash_profile). When setting the Oracle environment variables for each Oracle RAC node in the login script, make certain to assign each RAC node a unique Oracle SID for ASM:

racnode1: ORACLE_SID=+ASM1
racnode2: ORACLE_SID=+ASM2

[root@racnode1 ~]# su - grid

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User:      grid
# Application:  Oracle Grid Infrastructure
# Version:      Oracle 11g release 2
# ---------------------------------------------------

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID)
# for the Automatic Storage Management (ASM) instance
# running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. +ASM1, +ASM2,...)
# ---------------------------------------------------
ORACLE_SID=+ASM1; export ORACLE_SID

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# installations. The Oracle base directory for the
# grid installation owner is the location where
# diagnostic and administrative logs, and other logs
# associated with Oracle ASM and Oracle Clusterware
# are stored. For grid infrastructure for a cluster
# installations, the Grid home must not be placed
# under one of the Oracle base directories, or in the
# home directory of an installation owner, or under
# Oracle home directories of Oracle Database
# installation owners. During installation, ownership
# of the path to the Grid home is changed to root.
# This change causes permission errors for other
# installations.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Grid Infrastructure software.
# ---------------------------------------------------
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------

SQLPATH=/u01/app/oracle/dba_scripts/common/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
#   NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022

Create Groups and User for Oracle Database Software

Next, create the recommended OS groups and user for the Oracle database software on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1300 dba
[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

-------------------------------------------------

[root@racnode2 ~]# groupadd -g 1300 dba
[root@racnode2 ~]# groupadd -g 1301 oper
[root@racnode2 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle

[root@racnode2 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Set the password for the oracle account:

[root@racnode1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the oracle User Account

Log in to both Oracle RAC nodes as the oracle user account and create the following login script (.bash_profile). When setting the Oracle environment variables for each Oracle RAC node in the login script, make certain to assign each RAC node a unique Oracle SID:

racnode1: ORACLE_SID=racdb1
racnode2: ORACLE_SID=racdb2

[root@racnode1 ~]# su - oracle

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User:      oracle
# Application:  Oracle Database Software Owner
# Version:      Oracle 11g release 2
# ---------------------------------------------------

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID)
# for the Oracle instance running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. racdb1, racdb2,...)
# ---------------------------------------------------

ORACLE_SID=racdb1; export ORACLE_SID

# ---------------------------------------------------
# ORACLE_UNQNAME
# ---------------------------------------------------
# In previous releases of Oracle Database, you were
# required to set environment variables for
# ORACLE_HOME and ORACLE_SID to start, stop, and
# check the status of Enterprise Manager. With
# Oracle Database 11g release 2 (11.2) and later, you
# need to set the environment variables ORACLE_HOME
# and ORACLE_UNQNAME to use Enterprise Manager.
# Set ORACLE_UNQNAME equal to the database unique
# name.
# ---------------------------------------------------
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# database software installations.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Database software.
# ---------------------------------------------------
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus, Forms and Menu. If
# the full path to the file is not specified, or if
# the file is not in the current directory, the
# Oracle application uses ORACLE_PATH to locate the
# file.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql:$ORACLE_HOME/rdbms/admin; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
SQLPATH=/u01/app/oracle/dba_scripts/common/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------

# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
#   NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that

# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022

Verify That the User nobody Exists

Before installing the software, complete the following procedure to verify that the user nobody exists on both Oracle RAC nodes:

1. To determine if the user exists, enter the following command:

   [root@racnode1 ~]# id nobody
   uid=99(nobody) gid=99(nobody) groups=99(nobody)

   [root@racnode2 ~]# id nobody
   uid=99(nobody) gid=99(nobody) groups=99(nobody)

   If this command displays information about the nobody user, then you do not have to create that user.

2. If the user nobody does not exist, then enter the following command to create it:

   [root@racnode1 ~]# /usr/sbin/useradd nobody

   [root@racnode2 ~]# /usr/sbin/useradd nobody
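Before moving on, it is worth a quick sanity check that the login scripts created above behave as expected. The snippet below is a minimal sketch: it starts a fresh login shell for the oracle user on each node and echoes the SID, so you can confirm that each node reports its own unique value (racdb1 on racnode1, racdb2 on racnode2). The single quotes are important so that $ORACLE_SID is expanded by the oracle user's shell, not by root's:

[root@racnode1 ~]# su - oracle -c 'echo $ORACLE_SID'
racdb1

[root@racnode2 ~]# su - oracle -c 'echo $ORACLE_SID'
racdb2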


Create the Oracle Base Directory Path

The final step is to configure an Oracle base path compliant with an Optimal Flexible Architecture (OFA) structure and with correct permissions. This task needs to be performed on both Oracle RAC nodes in the cluster as root.

This guide assumes that the /u01 directory is being created in the root file system. Please note that this is being done for the sake of brevity and is not recommended as a general practice. Normally, the /u01 directory would be provisioned as a separate file system with either hardware or software mirroring configured.

[root@racnode1 ~]# mkdir -p /u01/app/grid
[root@racnode1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode1 ~]# chown -R grid:oinstall /u01
[root@racnode1 ~]# mkdir -p /u01/app/oracle
[root@racnode1 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]# chmod -R 775 /u01

-------------------------------------------------------------

[root@racnode2 ~]# mkdir -p /u01/app/grid
[root@racnode2 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode2 ~]# chown -R grid:oinstall /u01
[root@racnode2 ~]# mkdir -p /u01/app/oracle
[root@racnode2 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode2 ~]# chmod -R 775 /u01
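A quick way to confirm that the ownership and permission changes took effect is to list the new directories. The output below is illustrative (timestamps and link counts will differ); what matters is grid:oinstall on /u01 and /u01/app/grid, oracle:oinstall on /u01/app/oracle, and mode drwxrwxr-x (775) throughout:

[root@racnode1 ~]# ls -ld /u01 /u01/app/grid /u01/app/oracle
drwxrwxr-x 3 grid   oinstall 4096 Nov  7 17:45 /u01
drwxrwxr-x 2 grid   oinstall 4096 Nov  7 17:45 /u01/app/grid
drwxrwxr-x 2 oracle oinstall 4096 Nov  7 17:45 /u01/app/oracle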

At the end of this section, you should have the following on both Oracle RAC nodes:

- An Oracle central inventory group, or oraInventory group (oinstall). Members with the central inventory group as their primary group are granted permissions to write to the oraInventory directory.

- A separate OSASM group (asmadmin), whose members are granted the SYSASM privilege to administer Oracle Clusterware and Oracle ASM.

- A separate OSDBA for ASM group (asmdba), whose members include grid and oracle, and who are granted access to Oracle ASM.

- A separate OSOPER for ASM group (asmoper), whose members include grid, and who are granted limited Oracle ASM administrator privileges, including the permissions to start and stop the Oracle ASM instance.

- An Oracle grid installation for a cluster owner (grid), with the oraInventory group as its primary group, and with the OSASM (asmadmin), OSDBA for ASM (asmdba) and OSOPER for ASM (asmoper) groups as secondary groups.

- A separate OSDBA group (dba), whose members are granted the SYSDBA privilege to administer the Oracle Database.

- A separate OSOPER group (oper), whose members include oracle, and who are granted limited Oracle database administrator privileges.

- An Oracle Database software owner (oracle), with the oraInventory group as its primary group, and with the OSDBA (dba), OSOPER (oper), and OSDBA for ASM (asmdba) groups as secondary groups.

- An OFA-compliant mount point /u01 owned by grid:oinstall before installation.

- An Oracle base for the grid /u01/app/grid owned by grid:oinstall with 775 permissions, changed during the installation process to 755 permissions. The grid installation owner Oracle base directory is the location where Oracle ASM diagnostic and administrative log files are placed.

- A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775 (drwxrwxr-x) permissions. These permissions are required for installation and are changed during the installation process to root:oinstall with 755 permissions (drwxr-xr-x).

- During installation, OUI creates the Oracle Inventory directory in the path /u01/app/oraInventory. This path remains owned by grid:oinstall, to enable other Oracle software owners to write to the central inventory.

- An Oracle base /u01/app/oracle owned by oracle:oinstall with 775 permissions.
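Given the number of groups involved, it is worth re-running id for both software owners at this point and comparing the output against the list above. The check below is a minimal sketch; the grid output assumes the asmoper group was created with GID 1202 earlier in this guide (adjust to the GID you actually assigned):

[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)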


Set Resource Limits for the Oracle Software Installation Users

To improve the performance of the software on Linux systems, you must increase the following resource limits for the Oracle software owner users (grid, oracle):
Shell Limit                                               Item in limits.conf   Hard Limit
------------------------------------------------------    -------------------   ----------
Maximum number of open file descriptors                   nofile                65536
Maximum number of processes available to a single user    nproc                 16384
Maximum size of the stack segment of the process          stack                 10240

To make these changes, run the following as root:

1. On each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the following example shows the software account owners oracle and grid):

[root@racnode1 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

[root@racnode2 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

2. On each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

[root@racnode1 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF

[root@racnode2 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF

3. Depending on your shell environment, make the following changes to the default shell startup file in order to change ulimit settings for all Oracle installation owners (note that these examples show the users oracle and grid): For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following:


[root@racnode1 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

[root@racnode2 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file by running the following:

[root@racnode1 ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF

[root@racnode2 ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF
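To confirm the new limits are actually applied, open a fresh login shell for each software owner and query the hard limits. This is a minimal check assuming the limits.conf entries above are in place; ulimit -Hn and -Hu report the hard limits for open files and processes respectively (if su does not pick up pam_limits on your system, run the same check from a fresh console or SSH login instead):

[root@racnode1 ~]# su - grid -c "ulimit -Hn"
65536
[root@racnode1 ~]# su - grid -c "ulimit -Hu"
16384

Repeat the same check for the oracle user and on racnode2.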

Logging In to a Remote System Using X Terminal
This guide requires access to the console of all machines (Oracle RAC nodes and Openfiler) in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server to its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this approach becomes impractical. A more practical solution is a dedicated station consisting of a single monitor, keyboard, and mouse with direct access to the console of each machine. This solution is made possible using a Keyboard, Video, Mouse switch, better known as a KVM switch.

After installing the Linux operating system, several of the applications needed to install and configure Oracle RAC use a Graphical User Interface (GUI) and require an X11 display server. The most notable of these GUI applications (also known as X applications) is the Oracle Universal Installer (OUI), although others like the Virtual IP Configuration Assistant (VIPCA) also require an X11 display server. Since I created this article on a system that makes use of a KVM switch, I am able to toggle to each node and rely on the native X11 display server for Linux in order to display X applications.


If you are not logged directly on to the graphical console of a node, but rather are using a remote client like SSH, PuTTY, or Telnet to connect to the node, then any X application will require an X11 display server installed on the client. For example, if you are making a remote terminal connection to racnode1 from a Windows workstation, you would need to install an X11 display server on that Windows client (Xming for example).

If you intend to install the Oracle grid infrastructure and Oracle RAC software from a Windows workstation or other system with an X11 display server installed, then perform the following actions:

1. Start the X11 display server software on the client workstation.

2. Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.

3. From the client workstation, log in to the server where you want to install the software as the Oracle grid infrastructure for a cluster software owner (grid) or the Oracle RAC software owner (oracle).

4. As the software owner (grid, oracle), set the DISPLAY environment:

   [root@racnode1 ~]# su - grid

   [grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
   [grid@racnode1 ~]$ export DISPLAY

   [grid@racnode1 ~]$ # TEST X CONFIGURATION BY RUNNING xterm
   [grid@racnode1 ~]$ xterm &

Figure 17: Test X11 Display Server on Windows, Run xterm from Node 1 (racnode1)

Configure the Linux Servers for Oracle

Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

The kernel parameters discussed in this section will need to be set on both Oracle RAC nodes in the cluster every time the machine is booted. This section provides information about setting those OS kernel parameters required for Oracle. Instructions for placing them in a startup script (/etc/sysctl.conf) are given later in this section.

Overview

This section focuses on configuring both Oracle RAC Linux servers, getting each one prepared for the Oracle 11g release 2 grid infrastructure and Oracle RAC 11g release 2 installations on the Red Hat Enterprise Linux 5 or CentOS 5 platform. This includes verifying enough memory and swap space, setting shared memory and semaphores, setting the maximum number of file handles, setting the IP local port range, and finally, how to activate all kernel parameters for the system.

There are several different ways to set these parameters. For the purpose of this article, I will be making all changes permanent through reboots by placing all values in the /etc/sysctl.conf file.

Memory and Swap Space Considerations

The minimum required RAM on RHEL/CentOS is 1.5 GB for grid infrastructure for a cluster, or 2.5 GB for grid infrastructure for a cluster and Oracle RAC. In this guide, each Oracle RAC node will be hosting Oracle grid infrastructure and Oracle RAC and will therefore require at least 2.5 GB in each server. Each of the Oracle RAC nodes used in this example is equipped with 4 GB of physical RAM.

The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to 1.5 times the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 16 GB RAM, use swap space equal to RAM. For systems with more than 16 GB RAM, use 16 GB of RAM for swap space.

To check the amount of memory you have, type:

[root@racnode1 ~]# cat /proc/meminfo | grep MemTotal
MemTotal:       4038512 kB

[root@racnode2 ~]# cat /proc/meminfo | grep MemTotal
MemTotal:       4038512 kB

To check the amount of swap you have allocated, type:

[root@racnode1 ~]# cat /proc/meminfo | grep SwapTotal
SwapTotal:      6094840 kB

[root@racnode2 ~]# cat /proc/meminfo | grep SwapTotal
SwapTotal:      6094840 kB

If you have less than 4 GB of memory (between your RAM and swap), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system.

As root, make a file that will act as additional swap space, let's say about 500 MB:

[root@racnode1 ~]# dd if=/dev/zero of=tempswap bs=1k count=500000

Next, change the file permissions:

[root@racnode1 ~]# chmod 600 tempswap

Finally, format the "partition" as swap and add it to the swap space:

[root@racnode1 ~]# mke2fs tempswap
[root@racnode1 ~]# mkswap tempswap
[root@racnode1 ~]# swapon tempswap
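If you do add a temporary swap file, a quick way to confirm the kernel picked it up is to list the active swap areas. The output below is purely illustrative (the partition device and sizes are assumptions for this example):

[root@racnode1 ~]# /sbin/swapon -s
Filename            Type          Size      Used   Priority
/dev/sda3           partition     6094840   0      -1
/root/tempswap      file          499992    0      -2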

Configure Kernel Parameters

The kernel parameters presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system.

On both Oracle RAC nodes, verify that the kernel parameters described in this section are set to values greater than or equal to the recommended values. Oracle Database 11g release 2 on RHEL/CentOS 5 requires the kernel parameter settings shown below. The values given are minimums, so if your system uses a larger value, do not change it.

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576

RHEL/CentOS 5 already comes configured with default values for the kernel.shmall and kernel.shmmax parameters (for example, kernel.shmmax = 4294967295 on a fresh install). Use the default values if they are the same or larger than the required values; the defaults for these two kernel parameters are adequate for Oracle Database 11g release 2 and therefore do not need to be modified.

This article assumes a fresh new install of RHEL/CentOS 5 and as such, many of the required kernel parameters are already set (see above). This being the case, you can simply copy / paste the following to both Oracle RAC nodes while logged in as root. Also note that when setting the four semaphore values, all four values need to be entered on one line:

[root@racnode1 ~]# cat >> /etc/sysctl.conf <<EOF

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

[root@racnode2 ~]# cat >> /etc/sysctl.conf <<EOF

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

ipv4.rmem_max = 4194304 net.core.ipv4.ip_forward = 0 net.rp_filter = 1 net.shmmax = 68719476736 kernel.tcp_syncookies = 1 kernel.ipv4.tcp_syncookies = 1 kernel.ipv4.ipv4.msgmax = 65536 kernel.hugetlb_shm_group = 0 kernel.ip_forward = 0 net.sysrq = 0 kernel.ipv4.aio-max-nr = 1048576 Verify the new kernel parameter values by running the following on both Oracle RAC nodes in the cluster: [root@racnode1 ~]# /sbin/sysctl -a | grep shm vm.sem = 250 32000 100 128 [root@racnode1 ~]# /sbin/sysctl -a | grep file-max fs.msgmax = 65536 kernel.msgmnb = 65536 kernel.core_uses_pid = 1 net.accept_source_route = 0 kernel. run the following as root on both Oracle RAC nodes in the cluster: [root@racnode1 ~]# sysctl -p net.ipv4.conf.ip_local_port_range = 9000 65500 net.default.core.conf startup file.shmmni = 4096 kernel.sem = 250 32000 100 128 fs.default.rmem_default = 262144 net.file-max = 6815744 84 of 136 4/18/2011 10:17 PM .ipv4.ipv4.shmmni = 4096 kernel.file-max = 6815744 net.shmmni = 4096 kernel. Linux allows modification of these kernel parameters to the current system while it is up and running. To activate the new kernel parameter values for the currently running system.core.wmem_default = 262144 net.core.ip_local_port_range = 9000 65500 net.DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12.sysrq = 0 kernel.wmem_max = 1048576 fs.shmmax = 68719476736 kernel.msgmnb = 65536 kernel.rp_filter = 1 net.conf.wmem_max = 1048576 fs.core.shmmax = 68719476736 [root@racnode1 ~]# /sbin/sysctl -a | grep sem kernel.wmem_default = 262144 net.conf.file-max = 6815744 net.sem = 250 32000 100 128 fs.core.shmall = 4294967296 kernel.shtml The above command persisted the required kernel parameters through reboots by inserting them in the /etc/sysctl.shmall = 4294967296 kernel.ipv4.core.shmall = 4294967296 kernel.accept_source_route = 0 kernel.core_uses_pid = 1 net.default.rmem_max = 4194304 net.conf.core.rmem_default = 262144 net.default. so there's no need to reboot the system after making kernel parameter changes.aio-max-nr = 1048576 [root@racnode2 ~]# sysctl -p net.

Configure RAC Nodes for Remote Access using SSH - (Optional)

Perform the following optional procedures on both Oracle RAC nodes to manually configure passwordless SSH connectivity between the two cluster member nodes as the "grid" and "oracle" user.

One of the best parts about this section of the document is that it is completely optional! That's not to say configuring Secure Shell (SSH) connectivity between the Oracle RAC nodes is not necessary. To the contrary, the Oracle Universal Installer (OUI) uses the secure shell tools ssh and scp during installation to run remote commands on and copy files to the other cluster nodes. During the Oracle software installations, SSH must be configured so that these commands do not prompt for a password. The ability to run SSH commands without being prompted for a password is sometimes referred to as user equivalence.

The reason this section of the document is optional is that the OUI interface in 11g release 2 includes a new feature that can automatically configure SSH during the install phase of the Oracle software for the user account running the installation. The automatic configuration performed by OUI creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure provided by the OUI whenever possible.

In addition to installing the Oracle software, SSH is used after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features that perform configuration operations from local to remote nodes.

Configuring SSH with a passphrase is no longer supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

When SSH is not available, the installer attempts to use the rsh and rcp commands instead of ssh and scp. These services, however, are disabled by default on most Linux systems. The use of RSH will not be discussed in this guide.

Since this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software, passwordless SSH must be configured for both user accounts.

Verify SSH Software is Installed

The supported version of SSH for Linux distributions is OpenSSH. OpenSSH should be included in the Linux distribution minimal installation. To confirm that SSH packages are installed, run the following command on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep ssh
openssh-askpass-4.3p2-41.el5 (x86_64)
openssh-clients-4.3p2-41.el5 (x86_64)
openssh-4.3p2-41.el5 (x86_64)
openssh-server-4.3p2-41.el5 (x86_64)

If you do not see a list of SSH packages, then install those packages for your Linux distribution. For example, load CD #1 into each of the Oracle RAC nodes and perform the following to install the OpenSSH packages:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh openssh-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Why Configure SSH User Equivalence Using the Manual Method Option?

So, if the OUI already includes a feature that automates the SSH configuration between the Oracle RAC nodes, then why provide a section on how to manually configure passwordless SSH connectivity? In fact, for the purpose of this article, I decided to forgo manually configuring SSH connectivity in favor of Oracle's automatic methods included in the installer.

One reason to include this section on manually configuring SSH is to make mention of the fact that you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run. Further documentation on preventing installation errors caused by stty commands can be found later in this section.

Another reason you may decide to manually configure SSH for user equivalence is to have the ability to run the Cluster Verification Utility (CVU) prior to installing the Oracle software. The CVU (runcluvfy.sh) is a valuable tool located in the Oracle Clusterware root directory that not only verifies all prerequisites have been met before software installation, it also has the ability to generate shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. The CVU does, however, have a prerequisite of its own: SSH user equivalency must be configured correctly for the user account running the installation. If you intend to configure SSH connectivity using the OUI, know that the CVU utility will fail before having the opportunity to perform any of its critical checks:

[grid@racnode1 ~]$ /media/cdrom/grid/runcluvfy.sh stage -pre crsinst -fixup -n racnode1,racnode2

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "racnode1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  racnode1                              yes
  racnode2                              yes
Result: Node reachability check passed from node "racnode1"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racnode2                              failed
  racnode1                              failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Please note that it is not required to run the CVU utility before installing the Oracle software. Starting with Oracle 11g release 2, the installer detects when minimum requirements for installation are not completed and performs the same tasks done by the CVU to generate fixup scripts to resolve incomplete system configuration requirements.

Configure SSH Connectivity Manually on All Cluster Nodes

To reiterate, it is not required to manually configure SSH connectivity before running the OUI. The OUI in 11g release 2 provides an interface during the install for the user account running the installation to automatically create passwordless SSH connectivity between all cluster member nodes. This is the approach recommended by Oracle and the method used in this article. The tasks below to manually configure SSH connectivity between all cluster member nodes are included for documentation purposes only. Keep in mind that this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software. If you decide to manually configure SSH connectivity, it should be performed for both user accounts.

The goal in this section is to set up user equivalence for the grid and oracle OS user accounts. User equivalence enables the grid and oracle user accounts to access all other nodes in the cluster (running commands and copying files) without the need for a password. Before Oracle Database 10g, user equivalence had to be configured using remote shell (RSH). Oracle added support in 10g release 1 for using the SSH tool suite for setting up user equivalence.

Checking Existing SSH Configuration on the System

To determine if SSH is installed and running, enter the following command:

[grid@racnode1 ~]$ pgrep sshd
2535
19852

If SSH is running, then the response to this command is a list of process ID number(s). Run this check on both Oracle RAC nodes in the cluster to verify the SSH daemons are installed and running.

You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.

Configuring Passwordless SSH on Cluster Nodes

To configure passwordless SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (grid, oracle), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.

You must configure passwordless SSH separately for each Oracle software installation owner that you intend to use for installation (grid, oracle). Automatic passwordless SSH configuration using the OUI creates RSA encryption keys on all nodes of the cluster. In the example that follows, the Oracle software owner grid will be configured for passwordless SSH.

Create SSH Directory and SSH Keys

Complete the following steps on each Oracle RAC node:

1. Log in to both Oracle RAC nodes as the software owner (in this example, the grid user):

   [root@racnode1 ~]# su - grid

   To ensure that you are logged in as grid and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Verify that the Oracle user group and user and the user terminal window process you are using have group and user IDs that are identical. For example:

   [grid@racnode1 ~]$ id
   uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba)

   [grid@racnode1 ~]$ id grid
   uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba)

2. If necessary, create the .ssh directory in the grid user's home directory and set permissions on it to ensure that only the grid user has read and write permissions:

   [grid@racnode1 ~]$ mkdir ~/.ssh
   [grid@racnode1 ~]$ chmod 700 ~/.ssh

   SSH configuration will fail if the permissions are not set to 700.

3. Enter the following command to generate a DSA key pair (public and private key) for the SSH protocol. At the prompts, accept the default key file location and no passphrase (simply press [Enter] three times!):

   [grid@racnode1 ~]$ /usr/bin/ssh-keygen -t dsa
   Generating public/private dsa key pair.
   Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
   Enter passphrase (empty for no passphrase): [Enter]
   Enter same passphrase again: [Enter]
   Your identification has been saved in /home/grid/.ssh/id_dsa.
   Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
   The key fingerprint is:
   57:21:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:40 grid@racnode1

   SSH with passphrase is not supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

4. This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file. Never distribute the private key to anyone not authorized to perform Oracle software installations.

5. Repeat steps 1 through 4 for all remaining nodes that you intend to make a member of the cluster using the DSA key (racnode2).
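Before collecting the keys, it can help to look at what was just generated. The public key is a single line of text ending with a user@host comment, and it is this line that will be copied into the common authorized_keys file in the next section. The key material below is shortened and purely illustrative:

[grid@racnode1 ~]$ cat ~/.ssh/id_dsa.pub
ssh-dss AAAAB3NzaC1kc3MAAACBAL...<snip>...= grid@racnode1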

Add All Keys to a Common authorized_keys File

Now that both Oracle RAC nodes contain a public and private key for DSA, you will need to create an authorized key file (authorized_keys) on one of the nodes. An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) DSA public key. Once the authorized key file contains all of the public keys for each node, it is then distributed to all of the nodes in the cluster.

The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes. Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file. For the purpose of this example, I am using the primary node in the cluster, racnode1:

1. From racnode1, determine if the authorized key file ~/.ssh/authorized_keys already exists in the .ssh directory of the owner's home directory. In most cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist, create it now:

   [grid@racnode1 ~]$ touch ~/.ssh/authorized_keys
   [grid@racnode1 ~]$ ls -l ~/.ssh
   total 8
   -rw-r--r-- 1 grid oinstall   0 Nov  7 17:25 authorized_keys
   -rw------- 1 grid oinstall 672 Nov  7 16:56 id_dsa
   -rw-r--r-- 1 grid oinstall 603 Nov  7 16:56 id_dsa.pub

   In the .ssh directory, you should see the id_dsa.pub public key that was created and the blank file authorized_keys.

2. From racnode1, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the public key (~/.ssh/id_dsa.pub) from both Oracle RAC nodes in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from racnode1. You will be prompted for the grid OS user account password for both Oracle RAC nodes accessed. The following example is being run from racnode1 and assumes a two-node cluster, with nodes racnode1 and racnode2:

   [grid@racnode1 ~]$ ssh racnode1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
   The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
   RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
   Are you sure you want to continue connecting (yes/no)? yes
   Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
   grid@racnode1's password: xxxxx

   [grid@racnode1 ~]$ ssh racnode2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
   The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
   RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
   Are you sure you want to continue connecting (yes/no)? yes
   Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
   grid@racnode2's password: xxxxx

   The first time you use SSH to connect to a node from a particular system, you will see a message similar to the following:

   The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
   RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
   Are you sure you want to continue connecting (yes/no)? yes

   Enter yes at the prompt to continue. The public hostname will then be added to the known_hosts file in the ~/.ssh directory and you will not see this message again when you connect from this system to the same node.

3. At this point, we have the DSA public key from every node in the cluster contained in the authorized key file (~/.ssh/authorized_keys) on racnode1:

   [grid@racnode1 ~]$ ls -l ~/.ssh

   total 16
   -rw-r--r-- 1 grid oinstall 1206 Nov  7 17:31 authorized_keys
   -rw------- 1 grid oinstall  672 Nov  7 16:56 id_dsa
   -rw-r--r-- 1 grid oinstall  603 Nov  7 16:56 id_dsa.pub
   -rw-r--r-- 1 grid oinstall  808 Nov  7 17:31 known_hosts

   We now need to copy the authorized key file to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is racnode2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:

   [grid@racnode1 ~]$ scp ~/.ssh/authorized_keys racnode2:.ssh/authorized_keys
   grid@racnode2's password: xxxxx
   authorized_keys                  100% 1206     1.2KB/s   00:00

4. Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by logging into the node and running the following:

   [grid@racnode1 ~]$ chmod 600 ~/.ssh/authorized_keys
   [grid@racnode2 ~]$ chmod 600 ~/.ssh/authorized_keys

Enable SSH User Equivalency on Cluster Nodes

After you have copied the authorized_keys file that contains all public keys to each node in the cluster, complete the steps in this section to ensure passwordless SSH connectivity between all cluster member nodes is configured correctly. In this example, the Oracle grid infrastructure software owner, grid, will be used.

When running the test SSH commands in this section, if you see any messages or text apart from the date and host name, then the Oracle installation will fail. If any of the nodes prompt for a password or pass phrase, verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys and that you have created an Oracle software owner with identical group membership and IDs. Make any changes required to ensure that only the date and host name are displayed when you enter these commands. You should ensure that any part of a login script that generates any output, or asks any questions, is modified so it acts only when the shell is an interactive shell.

1. On the system where you want to run OUI from (racnode1), log in as the grid user:

   [root@racnode1 ~]# su - grid

2. If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from the terminal session:

   [grid@racnode1 ~]$ ssh racnode1 "date;hostname"
   Sun Nov 7 18:06:17 EST 2010
   racnode1

   [grid@racnode1 ~]$ ssh racnode2 "date;hostname"
   Sun Nov 7 18:07:55 EST 2010
   racnode2

3. Perform the same actions above from the remaining nodes in the Oracle RAC cluster (racnode2) to ensure they too can access all other nodes without being prompted for a password or pass phrase and get added to the known_hosts file:

   [root@racnode2 ~]# su - grid

   [grid@racnode2 ~]$ ssh racnode1 "date;hostname"
   The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
   RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
   Are you sure you want to continue connecting (yes/no)? yes
   Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
   Sun Nov 7 18:08:46 EST 2010
   racnode1

   [grid@racnode2 ~]$ ssh racnode1 "date;hostname"
   Sun Nov 7 18:08:53 EST 2010
   racnode1

   --------------------------------------------------------------------------

   [grid@racnode2 ~]$ ssh racnode2 "date;hostname"
   The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
   RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
   Are you sure you want to continue connecting (yes/no)? yes
   Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
   Sun Nov 7 18:11:51 EST 2010
   racnode2

   [grid@racnode2 ~]$ ssh racnode2 "date;hostname"
   Sun Nov 7 18:11:54 EST 2010
   racnode2

4. From the terminal session enabled for user equivalence (the node you will be performing the Oracle installations from), set the environment variable DISPLAY to a valid X Windows display:

   Bourne, Korn, and Bash shells:

   [grid@racnode1 ~]$ DISPLAY=<Any X-Windows Host>:0
   [grid@racnode1 ~]$ export DISPLAY

   C shell:

   [grid@racnode1 ~]$ setenv DISPLAY <Any X-Windows Host>:0

   The Oracle Universal Installer is a GUI interface and requires the use of an X Server. After setting the DISPLAY variable to a valid X Windows display, you should perform another test of the current terminal session to ensure that X11 forwarding is not enabled:

   [grid@racnode1 ~]$ ssh racnode1 hostname
   racnode1

   [grid@racnode1 ~]$ ssh racnode2 hostname
   racnode2
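Once both accounts are set up, a compact way to exercise every connection in one pass is a small loop. This is a minimal sketch run as each software owner (grid shown here); it should print exactly one date line and one host name per node, with no passwords, warnings, or other text:

[grid@racnode1 ~]$ for host in racnode1 racnode2; do
>   ssh $host "date;hostname"
> done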

If you are using a remote client to connect to the node performing the installation, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding," then this means that your authorized keys file is configured correctly; however, your SSH configuration has X11 forwarding enabled. For example:

[grid@racnode1 ~]$ export DISPLAY=melody:0
[grid@racnode1 ~]$ ssh racnode2 hostname
Warning: No xauth data; using fake authentication data for X11 forwarding.
racnode2

Note that having X11 Forwarding enabled will cause the Oracle installation to fail. To correct this problem, create a user-level SSH client configuration file for the grid and oracle OS user accounts that disables X11 Forwarding:

1. Using a text editor, edit or create the file ~/.ssh/config

2. Make sure that the ForwardX11 attribute is set to no. For example, insert the following into the ~/.ssh/config file:

   Host *
       ForwardX11 no

Preventing Installation Errors Caused by stty Commands

During an Oracle grid infrastructure or Oracle RAC software installation, OUI uses SSH to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause makefile and other installation errors if they contain stty commands. To avoid this problem, you must modify these files in each Oracle installation owner user home directory to suppress all output on STDERR, as in the following examples:

Bourne, Bash, or Korn shell:

if [ -t 0 ]; then
    stty intr ^C
fi

C shell:

test -t 0
if ($status == 0) then
    stty intr ^C
endif

If there are hidden files that contain stty commands that are loaded by the remote shell, then OUI indicates an error and stops the installation.

Install and Configure ASMLib 2.0

The installation and configuration procedures in this section should be performed on both of the Oracle RAC nodes in the cluster. Creating the ASM disks, however, will only need to be performed on a single node within the cluster (racnode1).

In this section, we will install and configure ASMLib 2.0, which is an optional support library for the Automatic Storage Management (ASM) feature of the Oracle Database. In this article, ASM will be used as the shared file system and volume manager for Oracle Clusterware files (OCR and voting disk), Oracle Database files (data, online redo logs, control files, archived redo logs), and the Fast Recovery Area.

Automatic Storage Management simplifies database administration by eliminating the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. ASM is built into the Oracle kernel and can be used for both single and clustered instances of Oracle. All of the files and directories to be used for Oracle will be contained in a disk group (or, for the purpose of this article, three disk groups). ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns. Starting with Oracle grid infrastructure 11g release 2 (11.2), the Automatic Storage Management and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. The Oracle grid infrastructure software will be owned by the user grid.

So, is ASMLib required for ASM? Not at all. In fact, there are two different methods to configure ASM on Linux:

ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. RAW character devices are not required with this method as ASMLib works with block devices.

ASM with Standard Linux I/O: This method does not make use of ASMLib. Oracle database files are created on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create RAW devices for all disk partitions used by ASM.

In this article, I will be using the "ASM with ASMLib I/O" method. Oracle states in Metalink Note 275315.1 that "ASMLib was provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API". I plan on performing several tests in the future to identify the performance gains in using ASMLib. Those performance metrics and testing details are out of scope of this article and therefore will not be discussed.

Keep in mind that ASMLib is only a support library for the ASM software; ASMLib allows an Oracle Database using ASM more efficient and capable access to the disk groups it is using. The ASM software will be installed as part of Oracle grid infrastructure later in this guide. If you would like to learn more about Oracle ASMLib 2.0, visit http://www.oracle.com/technetwork/topics/linux/asmlib/index101839.html

Download ASMLib 2.0 Packages

We start this section by downloading the latest ASMLib 2.0 libraries and the kernel driver from OTN (Oracle ASMLib Downloads for Red Hat Enterprise Linux Server 5). At the time of this writing, the latest release of the ASMLib kernel driver is 2.0.5-1. We need to download the appropriate version of the ASMLib driver for the Linux kernel, which in my case is kernel 2.6.18-194.el5 running on the x86_64 architecture:

[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

32-bit (x86) Installations

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm

Next, download the ASMLib tools:

oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.3-1.el5.i386.rpm

64-bit (x86_64) Installations

oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm

Next, download the ASMLib tools:

oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm

Install ASMLib 2.0 Packages

The installation of ASMLib 2.0 needs to be performed on both nodes in the Oracle RAC cluster as the root user account:

[root@racnode1 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.3-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID ...
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

[root@racnode2 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.3-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID ...
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

After installing the ASMLib packages, verify from both Oracle RAC nodes that the software is installed:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep orac
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (x86_64)
oracleasm-support-2.1.3-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)

[root@racnode2 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep orac
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (x86_64)
oracleasm-support-2.1.3-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)

Configure ASMLib

Now that you have installed the ASMLib Packages for Linux, you need to configure and load the ASM kernel module. This task needs to be run on both Oracle RAC nodes as the root user account.

The oracleasm command by default is in the path /usr/sbin. The /etc/init.d path, which was used in previous releases, is not deprecated, but the oracleasm binary in that path is now used typically for internal commands.

1. Enter the following command to run the oracleasm initialization script with the configure option:

   [root@racnode1 ~]# /usr/sbin/oracleasm configure -i
   Configuring the Oracle ASM library driver.

   This will configure the on-boot properties of the Oracle ASM library
   driver. The following questions will determine whether the driver is
   loaded on boot and what permissions it will have. The current values
   will be shown in brackets ('[]'). Hitting <ENTER> without typing an
   answer will keep that current value. Ctrl-C will abort.

   Default user to own the driver interface []: grid
   Default group to own the driver interface []: asmadmin
   Start Oracle ASM library driver on boot (y/n) [n]: y
   Scan for Oracle ASM disks on boot (y/n) [y]: y
   Writing Oracle ASM library driver configuration: done

   The script completes the following tasks:

   - Creates the /etc/sysconfig/oracleasm configuration file
   - Creates the /dev/oracleasm mount point
   - Mounts the ASMLib driver file system

   The ASMLib driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.

   If you enter the command oracleasm configure without the -i flag, then you are shown the current configuration:

   [root@racnode1 ~]# /usr/sbin/oracleasm configure
   ORACLEASM_ENABLED=false
   ORACLEASM_UID=
   ORACLEASM_GID=
   ORACLEASM_SCANBOOT=true
   ORACLEASM_SCANORDER=""
   ORACLEASM_SCANEXCLUDE=""

2. Enter the following command to load the oracleasm kernel module:

   [root@racnode1 ~]# /usr/sbin/oracleasm init
   Creating /dev/oracleasm mount point: /dev/oracleasm
   Loading module "oracleasm": oracleasm
   Mounting ASMlib driver filesystem: /dev/oracleasm

3. Repeat this procedure on all nodes in the cluster (racnode2) where you want to install Oracle RAC.
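At this point it can be reassuring to confirm on each node that the kernel module is loaded and the ASMLib file system is mounted. A minimal check; the status sub-command ships with the oracleasm-support tools, and the module size in the lsmod output is illustrative:

[root@racnode1 ~]# lsmod | grep oracleasm
oracleasm              84136  1

[root@racnode1 ~]# /usr/sbin/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes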

Create ASM Disks for Oracle

Creating the ASM disks only needs to be performed from one node in the RAC cluster as the root user account. I will be running these commands on racnode1. On the other Oracle RAC node(s), you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on both Oracle RAC nodes to verify that all ASM disks were created and are available.

In the section "Create Partitions on iSCSI Volumes", we configured (partitioned) three iSCSI volumes to be used by ASM. ASM will be used for storing Oracle Clusterware files, Oracle database files like online redo logs, database files, control files, archived redo log files, and the Fast Recovery Area. Use the local device names that were created by udev when configuring the three ASM volumes.

To create the ASM disks using the iSCSI target names to local device name mappings, type the following:

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk CRSVOL1 /dev/iscsi/crs1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk DATAVOL1 /dev/iscsi/data1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk FRAVOL1 /dev/iscsi/fra1/part1
Writing disk header: done
Instantiating disk: done

To make the volumes available on the other nodes in the cluster (racnode2), enter the following command as root on each node:

[root@racnode2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATAVOL1"
Instantiating disk "CRSVOL1"
Instantiating disk "FRAVOL1"

We can now test that the ASM disks were successfully created by using the following command on both nodes in the RAC cluster as the root user account. This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks:

[root@racnode1 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1

[root@racnode2 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
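Each label can also be checked individually with the querydisk command, which is part of the same oracleasm-support tools installed above and reports whether the named label is a valid ASM disk. A minimal sketch, run as root on either node:

# Validate each of the three ASM disk labels created above.
for disk in CRSVOL1 DATAVOL1 FRAVOL1; do
    /usr/sbin/oracleasm querydisk ${disk}
done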

Download Oracle RAC 11g release 2 Software

The following download procedures only need to be performed on one node in the cluster (racnode1). The next step is to download and extract the required Oracle software packages from the Oracle Technology Network (OTN).

If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account! Oracle offers a development and testing license free of charge. No support, however, is provided and the license does not permit production use. A full description of the license agreement is available on OTN.

32-bit (x86) Installations

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html

64-bit (x86_64) Installations

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html

You will be downloading and extracting the required software from Oracle to only one of the Linux nodes in the cluster, namely racnode1. You will perform all Oracle software installs from this machine. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration using remote access (scp).

Log in to the node that you will be performing all of the Oracle installations from (racnode1) as the appropriate software owner. For example, log in and download the Oracle grid infrastructure software to the directory /home/grid/software/oracle as the grid user. Next, log in and download the Oracle Database and Oracle Examples (optional) software to the /home/oracle/software/oracle directory as the oracle user.

Download and Extract the Oracle Software

Download the following software packages:

    Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux
    Oracle Database 11g Release 2 (11.2.0.1.0) for Linux
    Oracle Database 11g Release 2 Examples (optional)

All downloads are available from the same page.

Extract the Oracle grid infrastructure software as the grid user:

[grid@racnode1 ~]$ mkdir -p /home/grid/software/oracle
[grid@racnode1 ~]$ mv linux.x64_11gR2_grid.zip /home/grid/software/oracle
[grid@racnode1 ~]$ cd /home/grid/software/oracle
[grid@racnode1 oracle]$ unzip linux.x64_11gR2_grid.zip

Extract the Oracle Database and Oracle Examples software as the oracle user:

[oracle@racnode1 ~]$ mkdir -p /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_1of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_2of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_examples.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_1of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_2of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_examples.zip
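If an unzip step fails, or an archive looks suspect after a long download, each zip file can be tested for corruption without extracting anything. A small sketch using unzip's built-in test mode on the file names downloaded above:

# Test the grid archive as the grid user ...
cd /home/grid/software/oracle
unzip -t linux.x64_11gR2_grid.zip > /dev/null && echo "grid archive OK"

# ... and the database/examples archives as the oracle user.
cd /home/oracle/software/oracle
for f in linux.x64_11gR2_database_1of2.zip linux.x64_11gR2_database_2of2.zip linux.x64_11gR2_examples.zip; do
    unzip -t ${f} > /dev/null && echo "${f} OK"
done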

Pre-installation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following checks on both Oracle RAC nodes in the cluster. This section contains any remaining pre-installation tasks for Oracle grid infrastructure that have not already been discussed.

Please note that manually running the Cluster Verification Utility (CVU) before running the Oracle installer is not required. The CVU is run automatically at the end of the Oracle grid infrastructure installation as part of the Configuration Assistants process.

Install the cvuqdisk Package for Linux

Install the operating system package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you will receive the error message "Package cvuqdisk not installed" when the Cluster Verification Utility is run (either manually or at the end of the Oracle grid infrastructure installation). Use the cvuqdisk RPM for your hardware architecture (for example, x86_64 or i386).

The cvuqdisk RPM can be found on the Oracle grid infrastructure installation media in the rpm directory. For the purpose of this article, the Oracle grid infrastructure media was extracted to the /home/grid/software/oracle/grid directory on racnode1 as the grid user.

To install the cvuqdisk RPM, complete the following procedures:

1. Locate the cvuqdisk RPM package, which is in the directory rpm on the installation media from racnode1:

   [racnode1]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

2. Copy the cvuqdisk package from racnode1 to racnode2 as the grid user account (one way to stage the file is shown in the sketch after this list):

   [racnode2]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

3. Log in as root on both Oracle RAC nodes:

   [grid@racnode1 rpm]$ su
   [grid@racnode2 rpm]$ su

4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which for this article is oinstall:

   [root@racnode1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
   [root@racnode2 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

5. In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package on both Oracle RAC nodes:

   [root@racnode1 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
   Preparing packages for installation...
   cvuqdisk-1.0.7-1

   [root@racnode2 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
   Preparing packages for installation...
   cvuqdisk-1.0.7-1

6. Verify the cvuqdisk utility was successfully installed:

   [root@racnode1 rpm]# ls -l /usr/sbin/cvuqdisk
   -rwsr-xr-x 1 root oinstall 9832 May 28 2009 /usr/sbin/cvuqdisk

   [root@racnode2 rpm]# ls -l /usr/sbin/cvuqdisk
   -rwsr-xr-x 1 root oinstall 9832 May 28 2009 /usr/sbin/cvuqdisk
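For step 2 above, the copy can be done in one pass from racnode1. A minimal sketch, assuming SSH connectivity for the grid user between the nodes (configured earlier in this guide):

# Create the target directory on racnode2 if needed, then copy the RPM.
ssh racnode2 "mkdir -p /home/grid/software/oracle/grid/rpm"
scp /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm \
    racnode2:/home/grid/software/oracle/grid/rpm/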

Verify Oracle Clusterware Requirements with CVU - (optional)

As stated earlier in this section, running the Cluster Verification Utility before running the Oracle installer is not required. Starting with Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, it generates a fixup script (runfixup.sh). You can run the fixup script after you click the [Fix and Check Again] button during the Oracle grid infrastructure installation.

You also can have CVU generate fixup scripts before installation. If you decide that you want to run the CVU, please keep in mind that it should be run as the grid user from the node you will be performing the Oracle installation from (racnode1). In addition, SSH connectivity with user equivalence must be configured for the grid user. If you intend to configure SSH connectivity using the OUI, the CVU utility will fail before having the opportunity to perform any of its critical checks and generate the fixup scripts:

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racnode2                              failed
  racnode1                              failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Once all prerequisites for running the CVU utility have been met, you can manually check your cluster configuration before installation and generate a fixup script to make operating system changes before starting the installation:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose

Review the CVU report.
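The CVU report is long, and when rerunning it iteratively it can help to surface only the problems. A small sketch that filters the same invocation used above:

# Show only failed checks (plus one line of context above each).
./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose | grep -i -B1 failed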

The only failure that should be found given the configuration described in this guide is:

Check: Membership of user "grid" in group "dba"
  Node Name         User Exists   Group Exists   User in Group   Comment
  ----------------  ------------  -------------  --------------  ----------------
  racnode2          yes           yes            no              failed
  racnode1          yes           yes            no              failed
Result: Membership check for user "grid" in group "dba" failed

The check fails because this guide creates role-allocated groups and users by using a Job Role Separation configuration, which is not accurately recognized by the CVU. Creating a Job Role Separation configuration was described in the section Create Job Role Separation Operating System Privileges Groups, Users, and Directories. The CVU fails to recognize this type of configuration and assumes the grid user should always be part of the dba group. This failed check can be safely ignored. All other checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

Verify Hardware and Operating System Setup with CVU

The next CVU check to run will verify the hardware and operating system setup. Again, run the following as the grid user account from racnode1 with user equivalence configured:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -post hwos -n racnode1,racnode2 -verbose

Review the CVU report. All checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

Install Oracle Grid Infrastructure for a Cluster

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle grid infrastructure software (Oracle Clusterware and Automatic Storage Management) will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer.

You are now ready to install the "grid" part of the environment: Oracle Clusterware and Automatic Storage Management. Complete the following steps to install Oracle grid infrastructure on your cluster. At any time during installation, if you have a question about what you are being asked to do, click the Help button on the OUI page.

Typical and Advanced Installation

Starting with 11g release 2, Oracle provides two options for installing the Oracle grid infrastructure software:

Typical Installation: a simplified installation with a minimal number of manual configuration choices. This new option provides streamlined cluster installations, especially for those customers who are new to clustering, and defaults as many options as possible to those recommended as best practices.

Advanced Installation: an advanced procedure that requires a higher degree of system knowledge. It enables you to select particular configuration choices, including additional storage and network choices, use of operating system group authentication for role-based administrative privileges, integration with IPMI, and more granularity in specifying Automatic Storage Management roles.

Given the fact that this guide makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles, we will be using the "Advanced Installation" option.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.
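A quick way to prove that X11 forwarding actually works before launching the installer is to run any small X client from the same shell. A sketch, assuming an X server is running on your workstation and that xdpyinfo (from the xorg-x11-utils package on RHEL/CentOS 5) is installed:

# DISPLAY must point at your local workstation's X server.
export DISPLAY=<your local workstation>:0.0
xdpyinfo > /dev/null && echo "X11 display reachable"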

Install Oracle Grid Infrastructure

Perform the following tasks as the grid user to install Oracle grid infrastructure:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY
[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runInstaller

Screen Name: Select Installation Option
Response: Select "Install and Configure Grid Infrastructure for a Cluster".

Screen Name: Select Installation Type
Response: Select "Advanced Installation".

Screen Name: Select Product Languages
Response: Make the appropriate selection(s) for your environment.

Screen Name: Grid Plug and Play Information
Response: Instructions on how to configure Grid Naming Service (GNS) are beyond the scope of this article. Un-check the option to "Configure GNS".

    Cluster Name: racnode-cluster
    SCAN Name:    racnode-cluster-scan
    SCAN Port:    1521

After clicking [Next], the OUI will attempt to validate the SCAN information.

Screen Name: Cluster Node Information
Response: Use this screen to add the node racnode2 to the cluster and to configure SSH connectivity. Click the [Add] button to add "racnode2.idevelopment.info" and its virtual IP address "racnode2-vip.idevelopment.info" according to the table below:

    Public Node Name                 Virtual Host Name
    racnode1.idevelopment.info       racnode1-vip.idevelopment.info
    racnode2.idevelopment.info       racnode2-vip.idevelopment.info

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the grid user and click the [Setup] button. This will start the "SSH Connectivity" configuration process.

After the SSH configuration process successfully completes, acknowledge the dialog box. Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Screen Name: Specify Network Interface Usage
Response: Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:

    Interface Name   Subnet         Interface Type
    eth0             192.168.1.0    Public
    eth1             192.168.2.0    Private

Screen Name: Storage Option Information
Response: Select "Automatic Storage Management (ASM)".

Screen Name: Create ASM Disk Group
Response: Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the values in the table below:

    Disk Group Name: CRS
    Redundancy:      External
    Disk Path:       ORCL:CRSVOL1

Screen Name: Specify ASM Password
Response: For the purpose of this article, I choose to "Use same passwords for these accounts".

Screen Name: Failure Isolation Support
Response: Configuring Intelligent Platform Management Interface (IPMI) is beyond the scope of this article. Select "Do not use Intelligent Platform Management Interface (IPMI)".

Screen Name: Privileged Operating System Groups
Response: This article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles using a Job Role Separation configuration. Make any changes necessary to match the values in the table below:

    OSDBA for ASM:  asmdba
    OSOPER for ASM: asmoper
    OSASM:          asmadmin

Screen Name: Specify Installation Location
Response: Set the "Oracle Base" ($ORACLE_BASE) and "Software Location" ($ORACLE_HOME) for the Oracle grid infrastructure installation:

    Oracle Base:       /u01/app/grid
    Software Location: /u01/app/11.2.0/grid

Screen Name: Create Inventory
Response: Since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI:

    Inventory Directory:     /u01/app/oraInventory
    oraInventory Group Name: oinstall

Screen Name: Prerequisite Checks
Response: The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.

Starting with Oracle Clusterware 11g release 2 (11.2), if any check fails, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Screen Name: Summary
Response: Click [Finish] to start the installation.

Screen Name: Setup
Response: The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Screen Name: Execute Configuration scripts
Response: After the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster, as the root user account.

Run the orainstRoot.sh script on both nodes in the RAC cluster (starting with the node you are performing the install from):

[root@racnode1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@racnode2 ~]# /u01/app/oraInventory/orainstRoot.sh

Within the same new console window on both Oracle RAC nodes in the cluster, stay logged in as the root user account. Run the root.sh script on both nodes in the RAC cluster, one at a time, starting with the node you are performing the install from:

[root@racnode1 ~]# /u01/app/11.2.0/grid/root.sh
[root@racnode2 ~]# /u01/app/11.2.0/grid/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following, which signifies a successful install:

...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

While Oracle Clusterware is up, do not manually remove, or run cron jobs that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files. If you remove these files, then Oracle Clusterware could encounter intermittent hangs and you will encounter the error:

CRS-0184: Cannot communicate with the CRS daemon

Screen Name: Configure Oracle Grid Infrastructure for a Cluster
Response: The installer will run configuration assistants for Oracle Net Services (NETCA), Automatic Storage Management (ASMCA), and Oracle Private Interconnect (VIPCA). The final step performed by OUI is to run the Cluster Verification Utility (CVU).

Screen Name: Finish
Response: At the end of the installation, click the [Close] button to exit the OUI.
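With the installation finished, the state of the cluster stack on every node can be checked from a single session. A minimal sketch, run as the grid user from either node; the Grid home path is the one chosen above:

# Check the CRS, CSS and EVM daemons on all cluster nodes at once.
/u01/app/11.2.0/grid/bin/crsctl check cluster -all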

Post-installation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following postinstallation procedures on both Oracle RAC nodes in the cluster.

Verify Oracle Clusterware Installation

After the installation of Oracle grid infrastructure, you should run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.

Check CRS Status

[grid@racnode1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources

Note: The crs_stat command is deprecated in Oracle Clusterware 11g release 2 (11.2).

[grid@racnode1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
< --- SNIP --- >
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1

(The full listing also includes the per-node ASM, listener, GSD, ONS, eons and VIP application resources for racnode1 and racnode2; the GSD resources are expected to be OFFLINE.)

Check Cluster Nodes

[grid@racnode1 ~]$ olsnodes -n
racnode1        1
racnode2        2

Check Oracle TNS Listener Process on Both Nodes

[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER_SCAN3
LISTENER

[grid@racnode2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER
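The same listener information can be obtained without scraping ps output; srvctl reports the node and SCAN listeners directly. A sketch, run as the grid user with the Grid home environment set:

# Status of the node listeners and the SCAN listeners across the cluster.
srvctl status listener
srvctl status scan_listener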

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.

When we install Oracle Real Application Clusters (the Oracle database software), you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home. To manage Oracle ASM or Oracle Net for 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home).

Check Oracle Cluster Registry (OCR)

[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2332
         Available space (kbytes) :     259788
         ID                       : 1559468462
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

Check Voting Disk

[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name        Disk group
--  -----    -----------------                 ---------        ----------
 1. ONLINE   05592be032644f19bf2b50a929efe843  (ORCL:CRSVOL1)   [CRS]
Located 1 voting disk(s).

Check SCAN Resolution

After installing Oracle grid infrastructure, verify the SCAN virtual IP. As shown in the output below, the SCAN address is resolved to three different IP addresses:

[grid@racnode1 ~]$ dig racnode-cluster-scan.idevelopment.info

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> racnode-cluster-scan.idevelopment.info
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37366
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;racnode-cluster-scan.idevelopment.info.    IN    A

;; ANSWER SECTION:
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.187
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.188
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.189

;; AUTHORITY SECTION:
idevelopment.info.        86400    IN    NS    openfiler1.idevelopment.info.

;; ADDITIONAL SECTION:
openfiler1.idevelopment.info. 86400 IN  A     192.168.1.195

;; Query time: 0 msec
;; SERVER: 192.168.1.195#53(192.168.1.195)
;; WHEN: Mon Nov 8 16:54:02 2010
;; MSG SIZE  rcvd: 145
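Because the SCAN relies on DNS round-robin, repeated lookups should return the same three addresses, handed out in rotating order. A quick sketch using nslookup:

# Each call should list all three SCAN addresses.
for i in 1 2 3; do
    nslookup racnode-cluster-scan.idevelopment.info | grep Address
done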

Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.

Backing up the voting disks in Oracle Clusterware 11g release 2 is no longer required. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added. To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR), please refer to the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).

Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh file copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@racnode1 ~]# cd /u01/app/11.2.0/grid
[root@racnode1 grid]# cp root.sh root.sh.AFTER_INSTALL_NOV-08-2010

[root@racnode2 ~]# cd /u01/app/11.2.0/grid
[root@racnode2 grid]# cp root.sh root.sh.AFTER_INSTALL_NOV-08-2010

Install Cluster Health Management Software - (Optional)

To address troubleshooting issues such as node evictions, Oracle recommends that you install the Instantaneous Problem Detection OS Tool (IPD/OS) if you are using Linux kernel 2.6.9 or higher. The IPD/OS tool is designed to detect and analyze operating system and cluster resource-related degradation and failures. The tool can provide better explanations for many issues that occur in clusters where Oracle Clusterware, Oracle ASM and Oracle RAC are running. It tracks operating system resource consumption at the node, process, and device level continuously, and collects and analyzes cluster-wide data. In real time mode, when thresholds are reached, an alert is shown to the operator. For root cause analysis, historical data can be replayed to understand what was happening at the time of failure.

This article was written using RHEL/CentOS 5.5, which uses the 2.6.18 kernel:

[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

If you are using a Linux kernel earlier than 2.6.9, then you would use OS Watcher and RACDDT instead, which are available through the My Oracle Support website (formerly Metalink).

Instructions for installing and configuring the IPD/OS tool are beyond the scope of this article and will not be discussed. You can download the IPD/OS tool, along with a detailed installation and configuration guide, at the following URL:

http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html
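Before installing the tool on your own cluster, it is worth confirming that every node meets the 2.6.9 kernel minimum mentioned above. A one-line sketch, assuming root SSH access to both nodes:

# Print the running kernel version on each RAC node.
for node in racnode1 racnode2; do echo -n "${node}: "; ssh root@${node} uname -r; done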

Create ASM Disk Groups for Data and Fast Recovery Area

Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (racnode1) to create the additional ASM disk groups which will be used to create the clustered database.

During the installation of Oracle grid infrastructure, we configured one ASM disk group named +CRS, which was used to store the Oracle Clusterware files (OCR and voting disk). In this section, we will create two additional ASM disk groups using the ASM Configuration Assistant (asmca). These new ASM disk groups will be used later in this guide when creating the clustered database.

The first ASM disk group will be named +RACDB_DATA and will be used to store all Oracle physical database files (data, online redo logs, control files, archived redo logs). A second ASM disk group will be created for the Fast Recovery Area named +FRA.

Verify Terminal Shell Environment

Before starting the ASM Configuration Assistant, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@racnode1 ~]$ asmca &

Screen Name: Disk Groups
Response: From the "Disk Groups" tab, click the [Create] button.

Screen Name: Create Disk Group
Response: The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide. If the ASMLib volumes do not show up in the "Select Member Disks" window as eligible (ORCL:DATAVOL1 and ORCL:FRAVOL1), then click on the [Change Disk Discovery Path] button and input "ORCL:*".

When creating the "Data" ASM disk group, use "RACDB_DATA" for the "Disk Group Name". In the "Redundancy" section, choose "External (None)". Finally, check the ASMLib volume "ORCL:DATAVOL1" in the "Select Member Disks" section. After verifying all values in this dialog are correct, click the [OK] button.

Screen Name: Disk Groups
Response: After creating the first ASM disk group, you will be returned to the initial dialog. Click the [Create] button again to create the second ASM disk group.

Screen Name: Create Disk Group
Response: The "Create Disk Group" dialog should now show the final remaining ASMLib volume. When creating the "Fast Recovery Area" disk group, use "FRA" for the "Disk Group Name". In the "Redundancy" section, choose "External (None)". Finally, check the ASMLib volume "ORCL:FRAVOL1" in the "Select Member Disks" section. After verifying all values in this dialog are correct, click the [OK] button.

Screen Name: Disk Groups
Response: Exit the ASM Configuration Assistant by clicking the [Exit] button.
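Once asmca exits, the new disk groups can be confirmed from the command line. A sketch run as the grid user; it assumes the Grid home used in this guide and the local ASM instance name on racnode1 (+ASM1):

# List all mounted ASM disk groups with their total and free space.
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
$ORACLE_HOME/bin/asmcmd lsdg

The CRS, RACDB_DATA and FRA disk groups should all be reported as MOUNTED.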

Install Oracle Database 11g with Oracle Real Application Clusters

Perform the Oracle Database software installation from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH. Now that the grid infrastructure software is functional, you can install the Oracle Database software on the one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all the other nodes in the cluster during the installation process.

For the purpose of this guide, we will forgo the "Create Database" option when installing the Oracle Database software. The clustered database will be created later in this guide using the Database Configuration Assistant (DBCA) after all installs have been completed.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Software

Perform the following tasks as the oracle user to install the Oracle Database software:

[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[oracle@racnode1 ~]$ export DISPLAY
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/database
[oracle@racnode1 database]$ ./runInstaller

Screen Name: Configure Security Updates
Response: For the purpose of this article, un-check the security updates check-box and click the [Next] button to continue. Acknowledge the warning dialog indicating you have not provided an email address by clicking the [Yes] button.

Screen Name: Installation Option
Response: Select "Install database software only".

Screen Name: Grid Options
Response: Select the "Real Application Clusters database installation" radio button (default) and verify that both Oracle RAC nodes are checked in the "Node Name" window. Next, click the [SSH Connectivity] button. Enter the "OS Password" for the oracle user and click the [Setup] button. This will start the "SSH Connectivity" configuration process. After the SSH configuration process successfully completes, acknowledge the dialog box. Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Screen Name: Product Languages
Response: Make the appropriate selection(s) for your environment.

Screen Name: Database Edition
Response: Select "Enterprise Edition".

Screen Name: Installation Location
Response: Specify the Oracle base and Software location (Oracle_home) as follows:

    Oracle Base:       /u01/app/oracle
    Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Screen Name: Operating System Groups
Response: Select the OS groups to be used for the SYSDBA and SYSOPER privileges:

    Database Administrator (OSDBA) Group: dba
    Database Operator (OSOPER) Group:     oper

Screen Name: Prerequisite Checks
Response: The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database software. Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks. If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Screen Name: Summary
Response: Click [Finish] to start the installation.

Screen Name: Install Product
Response: The installer performs the Oracle Database software installation process on both Oracle RAC nodes.

Screen Name: Execute Configuration scripts
Response: After the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on both Oracle RAC nodes. Open a new console window on both Oracle RAC nodes in the cluster, as the root user account, and run the root.sh script on all nodes in the RAC cluster (starting with the node you are performing the install from):

[root@racnode1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
[root@racnode2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Screen Name: Finish
Response: At the end of the installation, click the [Close] button to exit the OUI.
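A fast smoke test that the new database software is present and executable on both nodes is to print the SQL*Plus version from the freshly installed Oracle home. A sketch, assuming SSH access as the oracle user:

# sqlplus -V prints the release banner and exits without connecting.
for node in racnode1 racnode2; do
    echo "== ${node} =="
    ssh oracle@${node} "/u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus -V"
done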

Install Oracle Database 11g Examples (formerly Companion)

Now that the Oracle Database 11g software is installed, you have the option to install the Oracle Database 11g Examples. Like the Oracle Database software install, the Examples software is only installed from one node in your cluster (racnode1) as the oracle user. The Oracle Database Examples software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH. OUI copies the binary files from this node to all the other nodes in the cluster during the installation process.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Examples

Perform the following tasks as the oracle user to install the Oracle Database Examples:

[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/examples
[oracle@racnode1 examples]$ ./runInstaller

Screen Name: Installation Location
Response: Specify the Oracle base and Software location (Oracle_home) as follows:

    Oracle Base:       /u01/app/oracle
    Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Screen Name: Prerequisite Checks
Response: The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database Examples software. Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks. If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Screen Name: Summary
Response: Click [Finish] to start the installation.

Screen Name: Install Product
Response: The installer performs the Oracle Database Examples software installation process on both Oracle RAC nodes.

Screen Name: Finish
Response: At the end of the installation, click the [Close] button to exit the OUI.

Create the Oracle Cluster Database

The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (racnode1). Use the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the DBCA, make certain that the $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.2.0/dbhome_1 environment. Setting environment variables in the login script for the oracle user account was covered in the section "Create Login Script for the oracle User Account".

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running on both Oracle RAC nodes before attempting to start the clustered database creation process:

[oracle@racnode1 ~]$ su - grid -c "crs_stat -t -v"
Password: *********
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....DATA.dg ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
< --- SNIP --- >
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1

Run the same command on racnode2:

[oracle@racnode2 ~]$ su - grid -c "crs_stat -t -v"

The resource list should be identical, with the new FRA and RACDB_DATA disk group resources ONLINE on both nodes (the GSD and oc4j resources are expected to be OFFLINE).

Verify Terminal Shell Environment

Before starting the Database Configuration Assistant (DBCA), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create the Clustered Database

To start the database creation process, run the following as the oracle user:

[oracle@racnode1 ~]$ dbca &

Screen Name: Welcome Screen
Response: Select Oracle Real Application Clusters database.

Screen Name: Operations
Response: Select Create a Database.

Screen Name: Database Templates
Response: Select Custom Database.

Screen Name: Database Identification
Response: Cluster database configuration:

    Configuration Type:   Admin-Managed
    Global Database Name: racdb.idevelopment.info
    SID Prefix:           racdb

Note: I used idevelopment.info for the database domain. You may use any database domain. Keep in mind that this domain does not have to be a valid DNS domain. For Node Selection, click the [Select All] button to select all servers: racnode1 and racnode2.

Screen Name: Management Options
Response: Leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local management.

Screen Name: Database Credentials
Response: I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.

Screen Name: Database File Locations
Response: Specify the storage type and locations for database files:

    Storage Type:      Automatic Storage Management (ASM)
    Storage Locations: Use Oracle-Managed Files
    Database Area:     +RACDB_DATA

Screen Name: Specify ASMSNMP Password
Response: Specify the ASMSNMP password for the ASM instance.

Screen Name: Recovery Configuration
Response: Check the option for Specify Fast Recovery Area. For the Fast Recovery Area, click the [Browse] button and select the disk group name +FRA. My disk group has a size of about 33GB. When defining the Fast Recovery Area size, use the entire volume minus 10% for overhead (33GB minus 10% is roughly 30GB). I used a Fast Recovery Area Size of 30 GB (30413 MB).

Screen Name: Database Content
Response: I left all of the Database Components (and destination tablespaces) set to their default value, although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.

Screen Name: Initialization Parameters
Response: Change any parameters for your environment. I left them all at their default settings.

Screen Name: Database Storage
Response: Change any parameters for your environment. I left them all at their default settings.

Screen Name: Creation Options
Response: Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start. Click OK on the "Summary" screen.

Screen Name: End of Database Creation
Response: At the end of the database creation, exit from the DBCA.

When the DBCA has completed, you will have a fully functional Oracle RAC 11g release 2 cluster running!

Verify Clustered Database is Open

[oracle@racnode1 ~]$ su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"
Password: *********
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.FRA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.RACDB_DATA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.asm
               ONLINE  ONLINE       racnode1                 Started
               ONLINE  ONLINE       racnode2                 Started
ora.gsd
               OFFLINE OFFLINE      racnode1
               OFFLINE OFFLINE      racnode2
ora.net1.network
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.ons
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode1
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       racnode1                 Open
      2        ONLINE  ONLINE       racnode2                 Open
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2
ora.scan1.vip
      1        ONLINE  ONLINE       racnode2
ora.scan2.vip
      1        ONLINE  ONLINE       racnode1
ora.scan3.vip
      1        ONLINE  ONLINE       racnode1

Oracle Enterprise Manager

If you configured Oracle Enterprise Manager (Database Control), it can be used to view the database configuration and current status of the database. The URL for this example is:

https://racnode1.idevelopment.info:1158/em

[oracle@racnode1 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://racnode1.idevelopment.info:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log

Figure 18: Oracle Enterprise Manager - (Database Console)
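The open status of both instances can also be confirmed with a quick query against gv$instance. A minimal sketch, run as the oracle user on either node with the racdb1 (or racdb2) environment set; the backslash keeps the shell from expanding $instance inside the here-document:

sqlplus -S / as sysdba <<EOF
select inst_id, instance_name, status from gv\$instance order by inst_id;
EOF

Both instances should report a status of OPEN.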

Post Database Creation Tasks - (Optional)

This section offers several optional tasks that can be performed on your new Oracle 11g environment in order to enhance availability as well as database management.

Re-compile Invalid Objects

Run the utlrp.sql script to recompile all invalid PL/SQL packages now, instead of when the packages are accessed for the first time. This step is optional but recommended.

[oracle@racnode1 ~]$ sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql
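To confirm the recompilation left nothing behind, the remaining invalid object count can be checked in the same session; a short follow-up query:

SQL> select count(*) from dba_objects where status = 'INVALID';

A count of 0 means every object compiled cleanly.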

Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks and logs all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its group of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redolog before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration: the single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may be different depending on the workload and backup / recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode", which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode; however, most DBAs opt to bypass this option during initial database creation. In cases like this where the database is in no archive log mode, it is a simple task to put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, use the following tasks to put a RAC enabled database into archive log mode. For the purpose of this article, I will use the node racnode1 which runs the racdb1 instance:

1. Log in to one of the nodes (i.e. racnode1) as oracle and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:

   [oracle@racnode1 ~]$ sqlplus / as sysdba

   SQL> alter system set cluster_database=false scope=spfile sid='racdb1';
   System altered.

2. Shutdown all instances accessing the clustered database as the oracle user:

   [oracle@racnode1 ~]$ srvctl stop database -d racdb

3. Using the local instance, MOUNT the database:

   [oracle@racnode1 ~]$ sqlplus / as sysdba

   SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009
   Copyright (c) 1982, 2009, Oracle.  All rights reserved.
   Connected to an idle instance.

   SQL> startup mount
   ORACLE instance started.

   Total System Global Area 1653518336 bytes
   Fixed Size                  2213896 bytes
   Variable Size            1073743864 bytes
   Database Buffers          570425344 bytes
   Redo Buffers                7135232 bytes

4. Enable archiving:

   SQL> alter database archivelog;
   Database altered.

5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:

   SQL> alter system set cluster_database=true scope=spfile sid='racdb1';
   System altered.

6. Shutdown the local instance:

   SQL> shutdown immediate
   ORA-01109: database not open
   Database dismounted.
   ORACLE instance shut down.

7. Bring all instances back up as the oracle account using srvctl:

   [oracle@racnode1 ~]$ srvctl start database -d racdb

8. Log in to the local instance and verify Archive Log Mode is enabled:

   [oracle@racnode1 ~]$ sqlplus / as sysdba

   SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 20:07:48 2010
   Copyright (c) 1982, 2009, Oracle.  All rights reserved.
   Connected to:
   Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
   With the Partitioning, Real Application Clusters, Automatic Storage Management,
   OLAP, Data Mining and Real Application Testing options

   SQL> archive log list
   Database log mode              Archive Mode
   Automatic archival             Enabled
   Archive destination            USE_DB_RECOVERY_FILE_DEST
   Oldest online log sequence     68
   Next log sequence to archive   69
   Current log sequence           69

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!
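Since each instance mounts the same database, the mode can also be double-checked across the cluster with one query against gv$database; a brief sketch:

SQL> select inst_id, log_mode from gv$database;

Both rows should show ARCHIVELOG.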

Download and Install Custom Oracle Database Scripts

DBAs rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database, including space management, performance, backups, security, and session management. The DBA Scripts Archive for Oracle can be downloaded using the following link:

http://www.idevelopment.info/data/Oracle/DBA_scripts/dba_scripts_archive_Oracle.zip

As the oracle user account, download the dba_scripts_archive_Oracle.zip archive to the $ORACLE_BASE directory of each node in the cluster, then unzip the archive file to the $ORACLE_BASE directory. For the purpose of this example, the archive will be copied to /u01/app/oracle. Perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

[oracle@racnode1 ~]$ mv dba_scripts_archive_Oracle.zip /u01/app/oracle
[oracle@racnode1 ~]$ cd /u01/app/oracle
[oracle@racnode1 oracle]$ unzip dba_scripts_archive_Oracle.zip

The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from within SQL*Plus while in any directory. For UNIX, verify the following environment variable is set and included in your login shell script:

ORACLE_PATH=$ORACLE_BASE/dba_scripts/common/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

The ORACLE_PATH environment variable should already be set in the .bash_profile login script that was created in the section Create Login Script for the oracle User Account.

Now that the DBA Scripts Archive for Oracle has been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you should be able to run any of the SQL scripts in $ORACLE_BASE/dba_scripts/common/sql while logged into SQL*Plus. For example, to query tablespace information while logged into the Oracle database as a DBA user:

SQL> @dba_tablespaces

Status  Tablespace Name  TS Type    Ext. Mgt.  Seg. Mgt.
------- ---------------- ---------- ---------- ---------
ONLINE  SYSAUX           PERMANENT  LOCAL      AUTO
ONLINE  UNDOTBS1         UNDO       LOCAL      MANUAL
ONLINE  USERS            PERMANENT  LOCAL      AUTO
ONLINE  SYSTEM           PERMANENT  LOCAL      MANUAL
ONLINE  EXAMPLE          PERMANENT  LOCAL      AUTO
ONLINE  UNDOTBS2         UNDO       LOCAL      MANUAL
ONLINE  TEMP             TEMPORARY  LOCAL      MANUAL

7 rows selected.

(The size and usage columns of the report are not shown here.)

To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script:

SQL> @help.sql

========================================
Automatic Shared Memory Management
========================================
asmm_components.sql

========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql

========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql

Create / Alter Tablespaces

When creating the clustered database, we left all tablespaces set to their default size. If you are using a large drive for the shared storage, you may want to make a sizable testing database.

Below are several optional SQL commands for modifying and creating all tablespaces for the test database. Please keep in mind that the database file names (OMF files) used in this example may differ from what the Oracle Database Configuration Assistant (DBCA) creates for your environment. When working through this section, substitute the data file names that were created in your environment where appropriate.

The following query can be used to determine the file names for your environment:

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files;

TABLESPACE_NAME   FILE_NAME
----------------  --------------------------------------------------
EXAMPLE           +RACDB_DATA/racdb/datafile/example.263.703530435
SYSAUX            +RACDB_DATA/racdb/datafile/sysaux.260.703530411
SYSTEM            +RACDB_DATA/racdb/datafile/system.259.703530397
TEMP              +RACDB_DATA/racdb/tempfile/temp.262.703530429
UNDOTBS1          +RACDB_DATA/racdb/datafile/undotbs1.261.703530423
UNDOTBS2          +RACDB_DATA/racdb/datafile/undotbs2.264.703530441
USERS             +RACDB_DATA/racdb/datafile/users.265.703530447

7 rows selected.
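If you would rather not retype each OMF file name, the resize statements can also be generated straight from the data dictionary. A minimal sketch run as the oracle user; review the generated SQL before executing any of it, and adjust the 1024m target to taste:

[oracle@racnode1 ~]$ sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pages 0 linesize 200
select 'alter database datafile ''' || file_name || ''' resize 1024m;' from dba_data_files;
select 'alter database tempfile ''' || file_name || ''' resize 1024m;' from dba_temp_files;
EOF

The output is a ready-to-edit list of ALTER DATABASE statements, one per database file, built from the same dba_data_files and dba_temp_files views used in the query above.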

[oracle@racnode1 ~]$ sqlplus "/ as sysdba"

SQL> create user scott identified by tiger default tablespace users;

User created.

SQL> grant dba, resource, connect to scott;

Grant succeeded.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/users.265.703530447' resize 1024m;

Database altered.

SQL> alter tablespace users add datafile '+RACDB_DATA' size 1024m autoextend off;

Tablespace altered.

SQL> create tablespace indx datafile '+RACDB_DATA' size 1024m
  2  autoextend on next 100m maxsize unlimited
  3  extent management local autoallocate
  4  segment space management auto;

Tablespace created.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/system.259.703530397' resize 1024m;

Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/sysaux.260.703530411' resize 1024m;

Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs1.261.703530423' resize 1024m;

Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs2.264.703530441' resize 1024m;

Database altered.

SQL> alter database tempfile '+RACDB_DATA/racdb/tempfile/temp.262.703530429' resize 1024m;

Database altered.

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status   Tablespace Name    TS Type      Ext. Mgt.  Seg. Mgt.    Tablespace Size   Used (in bytes)  Pct. Used
-------  -----------------  -----------  ---------  ---------  ----------------  ----------------  ---------
ONLINE   SYSAUX             PERMANENT    LOCAL      AUTO          1,073,741,824       512,098,304         48
ONLINE   UNDOTBS1           UNDO         LOCAL      MANUAL        1,073,741,824        20,059,136          2
ONLINE   USERS              PERMANENT    LOCAL      AUTO          2,147,483,648         2,097,152          0
ONLINE   SYSTEM             PERMANENT    LOCAL      MANUAL        1,073,741,824       703,135,744         65
ONLINE   EXAMPLE            PERMANENT    LOCAL      AUTO            157,286,400        85,131,264         54
ONLINE   INDX               PERMANENT    LOCAL      AUTO          1,073,741,824         1,048,576          0
ONLINE   UNDOTBS2           UNDO         LOCAL      MANUAL        1,073,741,824         1,966,080          0
ONLINE   TEMP               TEMPORARY    LOCAL      MANUAL        1,073,741,824        66,060,288          6
                                                               ----------------  ----------------  ---------
avg                                                                                                      22
sum                                                               8,747,220,992     1,391,596,544

8 rows selected.

Verify Oracle Grid Infrastructure and Database Configuration

The following Oracle Clusterware and Oracle RAC verification checks can be performed on any of the Oracle RAC nodes in the cluster. For the purpose of this article, I will only be performing checks from racnode1 as the oracle OS user.

Most of the checks described in this section use the Server Control Utility (SRVCTL) and can be run as either the oracle or grid OS user. There are five node-level tasks defined for SRVCTL:

Adding and deleting node-level applications
Setting and un-setting the environment for node-level applications
Administering node applications
Administering ASM instances
Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)

Oracle also provides the Oracle Clusterware Control (CRSCTL) utility. CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects. Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:

Starting and stopping Oracle Clusterware resources
Enabling and disabling Oracle Clusterware daemons
Checking the health of the cluster
Managing resources that represent third-party applications
Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity
Debugging Oracle Clusterware components

For the purpose of this article (and this section), we will only make use of the "Checking the health of the cluster" operation, which uses the clusterized (cluster aware) command:

crsctl check cluster

Many subprograms and commands were deprecated in Oracle Clusterware 11g release 2 (11.2):

crs_stat
crs_register
crs_unregister
crs_start
crs_stop
crs_getperm
crs_profile
crs_relocate
crs_setperm
crsctl check crsd
crsctl check cssd
crsctl check evmd
crsctl debug log
crsctl set css votedisk
crsctl start resources
crsctl stop resources
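Most of the deprecated crs_* commands have crsctl replacements. For example, the cluster-wide resource listing formerly produced by crs_stat -t is available through the following invocation as the grid user (output not reproduced here; it lists every registered resource and its state on each node):

[grid@racnode1 ~]$ crsctl status resource -t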
Check the Health of the Cluster - (Clusterized Command)

Run as the grid user.

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All Oracle Instances - (Database Status)

[oracle@racnode1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2

Single Oracle Instance - (Status of Specific Instance)

[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node racnode1

Node Applications - (Status)

[oracle@racnode1 ~]$ srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2
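Output like this is easy to watch from cron. The sketch below (a hypothetical nodeapps_watch.sh, not part of the DBA Scripts Archive) flags anything srvctl reports as not running, excluding GSD since it is disabled out of the box in Oracle RAC 11g release 2:

#!/bin/bash
# nodeapps_watch.sh - warn when any node application is reported as not running.
# GSD lines are excluded because GSD is disabled by default in 11g release 2.
if srvctl status nodeapps | grep -v "GSD" | grep -i "not running"; then
    echo "WARNING: one or more node applications are offline"
    exit 1
fi
echo "All node applications are running"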

Node Applications - (Configuration)

[oracle@racnode1 ~]$ srvctl config nodeapps
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2

List all Configured Databases

[oracle@racnode1 ~]$ srvctl config database
racdb

Database - (Configuration)

[oracle@racnode1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed

ASM - (Status)

[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2

ASM - (Configuration)

[oracle@racnode1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
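The same information can be cross-checked from the ASM command-line utility. For example, as the grid user (assuming the login environment created earlier in this guide, which points at the local ASM instance +ASM1), the following lists the mounted disk groups, CRS, FRA, and RACDB_DATA in this configuration, along with their state, redundancy, and free space:

[grid@racnode1 ~]$ asmcmd lsdg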

TNS listener - (Status)

[oracle@racnode1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2


TNS listener - (Configuration)

[oracle@racnode1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

SCAN - (Status)

[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node racnode1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node racnode1

SCAN - (Configuration)

[oracle@racnode1 ~]$ srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.188
SCAN VIP name: scan2, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.189
SCAN VIP name: scan3, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.187
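With all three SCAN VIPs online, a quick way to see the SCAN doing its job is to connect through it with EZConnect and check which instance answered. A minimal sketch; the SYSTEM password is whatever was chosen during database creation, and the service name below assumes the racdb database and idevelopment.info domain used throughout this guide:

[oracle@racnode1 ~]$ sqlplus system@//racnode-cluster-scan:1521/racdb.idevelopment.info

SQL> select instance_name from v$instance;

Repeated connections should land on different instances as the SCAN listeners balance new sessions across the cluster.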

VIP - (Status of Specific Node)

[oracle@racnode1 ~]$ srvctl status vip -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1

[oracle@racnode1 ~]$ srvctl status vip -n racnode2
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2

VIP - (Configuration of Specific Node)

[oracle@racnode1 ~]$ srvctl config vip -n racnode1
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[oracle@racnode1 ~]$ srvctl config vip -n racnode2
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Configuration for Node Applications - (VIP, GSD, ONS, Listener)


[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

Verifying Clock Synchronization across the Cluster Nodes

[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode1                              passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racnode1                              Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes:
"[racnode1]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
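Beyond the single clocksync component, cluvfy can re-run the entire post-installation stage check for the cluster at any time. One possible invocation as the grid user (the full output is lengthy and not reproduced here):

[grid@racnode1 ~]$ cluvfy stage -post crsinst -n racnode1,racnode2 -verbose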

All running instances in the cluster - (SQL)

SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID    INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- ---------- ---------- --- ------- ------------ --------- ---------
       1          1 racdb1     YES OPEN    ACTIVE       NORMAL    racnode1
       2          2 racdb2     YES OPEN    ACTIVE       NORMAL    racnode2

All database files and the ASM disk group they reside in - (SQL)

select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429

19 rows selected.

ASM Disk Volumes - (SQL)

SELECT path FROM v$asm_disk;

PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1
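Queries like the one above are easy to wrap in a shell script for basic monitoring. The sketch below (a hypothetical rac_open_check.sh, run as oracle on either node) exits non-zero unless both instances report OPEN:

#!/bin/bash
# rac_open_check.sh - verify that both racdb instances are OPEN.
open_count=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pages 0
select count(*) from gv$instance where status = 'OPEN';
EOF
)
open_count=$(echo ${open_count} | tr -d '[:space:]')
if [ "${open_count}" = "2" ]; then
    echo "OK: both instances are open"
else
    echo "PROBLEM: expected 2 open instances, found ${open_count}"
    exit 1
fi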

Starting / Stopping the Cluster

At this point, everything has been installed and configured for Oracle RAC 11g release 2. Oracle grid infrastructure was installed by the grid user while the Oracle RAC software was installed by oracle. We also have a fully functional clustered database running named racdb.

After all of that hard work, you may ask, "OK, so how do I start and stop services?". If you have followed the instructions in this guide, all services (including Oracle Clusterware, ASM, network, SCAN, VIP, the Oracle Database, and so on) should start automatically on each reboot of the Linux nodes.

There are times, however, when you might want to take down the Oracle services on a node for maintenance purposes and restart the Oracle Clusterware stack at a later time. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands necessary to stop and start the Oracle Clusterware stack on a local server (racnode1).

The following stop/start actions need to be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the "crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'racnode1'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.scan2.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'racnode2'
CRS-2677: Stop of 'ora.scan3.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'racnode2'
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded          <- racnode1 VIP moved to racnode2
CRS-2676: Start of 'ora.scan2.vip' on 'racnode2' succeeded             <- SCAN2 VIP moved to racnode2
CRS-2676: Start of 'ora.scan3.vip' on 'racnode2' succeeded             <- SCAN3 VIP moved to racnode2
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'racnode2'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'racnode2' succeeded   <- LISTENER_SCAN2 moved to racnode2
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'racnode2' succeeded   <- LISTENER_SCAN3 moved to racnode2
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

If any resources that Oracle Clusterware manages are still running after you run the "crsctl stop cluster" command, then the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.

Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying -all. The following will bring down the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting the Oracle Clusterware Stack on the Local Server

Use the "crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

Start/Stop All Instances with SRVCTL

Finally, you can start/stop all instances and associated services using the following:

[oracle@racnode1 ~]$ srvctl stop database -d racdb
[oracle@racnode1 ~]$ srvctl start database -d racdb
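Putting these commands together, here is one possible sequence for taking a single node out of service and bringing it back; a sketch only, to be adapted to your own maintenance window:

[oracle@racnode1 ~]$ srvctl stop instance -d racdb -i racdb1
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

# ... perform maintenance on racnode1 ...

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb1
[grid@racnode1 ~]$ crsctl check cluster

Stopping only the local instance first (rather than the whole database) keeps racdb available on racnode2 throughout the maintenance.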

Troubleshooting

This section contains a short list of common errors (and solutions) that can be encountered during the Oracle RAC installation described in this article.

Configuring SCAN without DNS

Defining the SCAN in only the hosts file (/etc/hosts) and not in either Grid Naming Service (GNS) or DNS is an invalid configuration and will cause the Cluster Verification Utility to fail during the Oracle grid infrastructure installation:

Figure 19: Oracle Grid Infrastructure / CVU Error - (Configuring SCAN without DNS)

INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "racnode-cluster-scan"...
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 216.24.138.153) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 192.168.1.187) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racnode-cluster-scan"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error reported by the CVU, it is OK to ignore this check and continue by clicking the [Next] button in OUI and move forward with the Oracle grid infrastructure installation. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.

If, on the other hand, you want the CVU to complete successfully while still only defining the SCAN in the hosts file, simply modify the nslookup utility as root on both Oracle RAC nodes as follows. Although Oracle strongly discourages this practice and highly recommends the use of GNS or DNS resolution, some readers may not have access to a DNS. The instructions below include a workaround (OK, a total hack) to the nslookup binary that allows the Cluster Verification Utility to finish successfully during the Oracle grid infrastructure install. Please note that the workaround documented in this section is only for the sake of brevity and should not be considered for a production implementation.

First, rename the original nslookup binary to nslookup.original on both Oracle RAC nodes:

[root@racnode1 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original
[root@racnode2 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original

Next, create a new shell script on both Oracle RAC nodes named /usr/bin/nslookup as shown below, while replacing 24.154.1.34 with your primary DNS, racnode-cluster-scan with your SCAN host name, and 192.168.1.187 with your SCAN IP address:

#!/bin/bash

HOSTNAME=${1}

if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then
    echo "Server:         24.154.1.34"
    echo "Address:        24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name:   racnode-cluster-scan"
    echo "Address: 192.168.1.187"
else
    /usr/bin/nslookup.original $HOSTNAME
fi

Finally, change the new nslookup shell script to executable:

[root@racnode1 ~]# chmod 755 /usr/bin/nslookup
[root@racnode2 ~]# chmod 755 /usr/bin/nslookup

Remember to perform these actions on both Oracle RAC nodes.
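Before re-running the CVU, it is worth a quick sanity test of the wrapper. The SCAN name should return the canned response coded into the script, while any other host name falls through to nslookup.original:

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         24.154.1.34
Address:        24.154.1.34#53
Non-authoritative answer:
Name:   racnode-cluster-scan
Address: 192.168.1.187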

The new nslookup shell script simply echoes back your SCAN IP address whenever the CVU calls nslookup with your SCAN host name; otherwise, it calls the original nslookup binary.

The CVU will now pass during the Oracle grid infrastructure installation when it attempts to verify your SCAN:

[grid@racnode1 ~]$ cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...
  SCAN VIP name         Node          Running?      ListenerName  Port          Running?
  --------------------  ------------  ------------  ------------  ------------  ------------
  racnode-cluster-scan  racnode1      true          LISTENER      1521          true

Checking name resolution setup for "racnode-cluster-scan"...
  SCAN Name             IP Address                Status                    Comment
  --------------------  ------------------------  ------------------------  ----------
  racnode-cluster-scan  192.168.1.187             passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

===============================================================================

[grid@racnode2 ~]$ cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...
  SCAN VIP name         Node          Running?      ListenerName  Port          Running?
  --------------------  ------------  ------------  ------------  ------------  ------------
  racnode-cluster-scan  racnode1      true          LISTENER      1521          true

Checking name resolution setup for "racnode-cluster-scan"...
  SCAN Name             IP Address                Status                    Comment
  --------------------  ------------------------  ------------------------  ----------
  racnode-cluster-scan  192.168.1.187             passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node name (racnode1 or racnode2) is not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1  racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1  localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation

Openfiler - Logical Volumes Not Active on Boot

One issue that I have run into several times occurs when using a USB drive connected to the Openfiler server. When the Openfiler server is rebooted, the system is able to recognize the USB drive, however, it is not able to load the logical volumes and writes the following message to /var/log/messages (also available through dmesg):

iSCSI Enterprise Target Software - version 0.4.14
iotype_init(91) register fileio
iotype_init(91) register blockio
iotype_init(91) register nullio
open_path(120) Can't open /dev/rac1/crs -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm1 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm2 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm3 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm4 -2
fileio_attach(268) -2

Please note that I am not suggesting that this only occurs with USB drives connected to the Openfiler server. It may occur with other types of drives, however I have only seen it with USB drives!

If you do receive this error, you should first check the status of all logical volumes using the lvscan command from the Openfiler server:

# lvscan
  inactive          '/dev/rac1/crs'  [2.00 GB] inherit
  inactive          '/dev/rac1/asm1' [115.94 GB] inherit
  inactive          '/dev/rac1/asm2' [115.94 GB] inherit
  inactive          '/dev/rac1/asm3' [115.94 GB] inherit
  inactive          '/dev/rac1/asm4' [115.94 GB] inherit

Notice that the status for each of the logical volumes is set to inactive (the status for each logical volume on a working system would be set to ACTIVE).

I currently know of two methods to get Openfiler to automatically load the logical volumes on reboot, both of which are described below.

Method 1

One of the first steps is to shutdown both of the Oracle RAC nodes in the cluster (racnode1 and racnode2). Then, from the Openfiler server, manually set each of the logical volumes to ACTIVE for each consecutive reboot:

# lvchange -a y /dev/rac1/crs
# lvchange -a y /dev/rac1/asm1
# lvchange -a y /dev/rac1/asm2
# lvchange -a y /dev/rac1/asm3
# lvchange -a y /dev/rac1/asm4
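Rather than typing each lvchange by hand after every reboot, the same activation can be scripted on the Openfiler server. A minimal sketch, assuming the rac1 volume group and the logical volume names used in this article:

#!/bin/bash
# activate_rac1_lvs.sh - set every rac1 logical volume to ACTIVE after boot.
for lv in crs asm1 asm2 asm3 asm4; do
    lvchange -a y /dev/rac1/${lv}
done
# show the resulting state for a quick visual check
lvscan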

Another method to set the status to active for all logical volumes is to use the Volume Group change command as follows:

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "rac1" using metadata type lvm2

# vgchange -ay
  5 logical volume(s) in volume group "rac1" now active

After setting each of the logical volumes to active, use the lvscan command again to verify the status:

# lvscan
  ACTIVE            '/dev/rac1/crs'  [2.00 GB] inherit
  ACTIVE            '/dev/rac1/asm1' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm2' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm3' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm4' [115.94 GB] inherit

As a final test, reboot the Openfiler server to ensure each of the logical volumes will be set to ACTIVE after the boot process. After you have verified that each of the logical volumes will be active on boot, check that the iSCSI target service is running:

# service iscsi-target status
ietd (pid 2668) is running...

Finally, restart each of the Oracle RAC nodes in the cluster (racnode1 and racnode2).

Method 2

This method was kindly provided by Martin Jones. His workaround includes amending the /etc/rc.sysinit script to basically wait for the USB disk (/dev/sda in my example) to be detected. After making the changes to the /etc/rc.sysinit script (described below), verify the external drives are powered on and then reboot the Openfiler server. Finally, restart each of the Oracle RAC nodes in the cluster (racnode1 and racnode2).

The following is a small portion of the /etc/rc.sysinit script on the Openfiler server with the changes (the sections marked MJONES - Customisation) proposed by Martin:

..............................................................................

# LVM2 initialization, take 2
if [ -c /dev/mapper/control ]; then
    if [ -x /sbin/multipath.static ]; then
        modprobe dm-multipath >/dev/null 2>&1
        /sbin/multipath.static -v 0
        if [ -x /sbin/kpartx ]; then
            /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
        fi
    fi
fi

if [ -x /sbin/dmraid ]; then
    modprobe dm-mirror > /dev/null 2>&1
    /sbin/dmraid -i -a y
fi

#-----
#----- MJONES - Customisation Start
#-----
# Check if /dev/sda is ready
while [ ! -e /dev/sda ]
do
    echo "Device /dev/sda for first USB Drive is not yet ready."
    echo "Waiting..."
    sleep 5
done
echo "INFO - Device /dev/sda for first USB Drive is ready."
#-----
#----- MJONES - Customisation END
#-----

if [ -x /sbin/lvm.static ]; then
    if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then
        action $"Setting up Logical Volume Management:" /sbin/lvm.static vgscan --mknodes --ignorelockingfailure && /sbin/lvm.static vgchange -a y --ignorelockingfailure
    fi
fi

# Clean up SELinux labels
if [ -n "$SELINUX" ]; then
    for file in /etc/mtab /etc/ld.so.cache ; do
        [ -r $file ] && restorecon $file >/dev/null 2>&1
    done
fi

..............................................................................

Conclusion

Oracle RAC 11g release 2 allows the DBA to configure a clustered database solution with superior fault tolerance and load balancing. However, DBAs who want to become more familiar with the features and benefits of database clustering will find the costs of configuring even a small RAC cluster to be in the range of US$15,000 to US$20,000.

This article has hopefully given you an economical solution to setting up and configuring an inexpensive Oracle RAC 11g release 2 cluster using Red Hat Enterprise Linux (or CentOS) and iSCSI technology. The RAC solution presented in this article can be put together for around US$2,700 and will provide the DBA with a fully functional Oracle RAC 11g release 2 cluster. While the hardware used for this guide is stable enough for educational purposes, it should never be considered for a production environment.

Acknowledgements

An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of the components that make up this configuration, there are several other individuals that deserve credit in making this article a success.

First, I would like to thank Bane Radulovic from the Server BDE Team at Oracle. Bane not only introduced me to Openfiler, but shared with me his experience and knowledge of the product and how to best utilize it for Oracle RAC. His research and hard work made the task of configuring Openfiler seamless. Bane was also involved with hardware recommendations and testing.

A special thanks to K Gopalakrishnan for his assistance in delivering the Oracle RAC 11g Overview section of this article. In this section, much of the content regarding the history of Oracle RAC can be found in his very popular book Oracle Database 10g Real Application Clusters Handbook. This book comes highly recommended for both DBAs and developers

wanting to successfully implement Oracle RAC and fully understand how many of the advanced services like Cache Fusion and Global Resource Directory operate.

Lastly, I would like to express my appreciation to the following vendors for generously supplying the hardware for this article: Avocent Corporation, Seagate, and Intel.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX, Linux, and Windows server environment. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science. He has been a Sr. Database Administrator and Software Engineer for over 17 years and maintains his own website at http://www.iDevelopment.info. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux.

Copyright (c) 1998-2011 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info is the copyright of Jeffrey M. Hunter and is protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Saturday, 26-Feb-2011 12:19:26 EST
Page Count: 11560
