
Step-By-Step Installation of 9i RAC with OCFS on Windows 2000

Doc ID: Note:178882.1
Subject: Step-By-Step Install of RAC with OCFS on Windows 2000
Content Type: TEXT/X-HTML
Type: REFERENCE
Status: PUBLISHED
Creation Date: 05-MAR-2002
Last Revision Date: 02-SEP-2005

Purpose
This document provides step-by-step instructions on how to install and configure an Oracle9i Real
Application Clusters (RAC) database using Oracle Cluster File System (OCFS) on a Windows 2000 cluster. Based on
Service Request volume, this note outlines the installation of 9i Release 2 for a Windows 2000 cluster. The
instructions for NT or 2003 should be similar; however, the navigation within the OS may differ (e.g., Disk
Management is called Disk Administrator in NT). The basic principles, especially the pre-install cluster
configuration, are the same.

Note: If you wish to use Logical Partitions (otherwise known as RAW Partitions) for the datafiles instead of OCFS,
please see Note 236155.1.

Note: OCFS is not supported with Oracle9i Release 1 (9.0.1.0.0). You must use Logical Partitions for the datafiles:
please see Note 236155.1.

OCFS and the Oracle Clusterware are available for download from Metalink under Patch 3973928 WINDOWS CFS
AND CLUSTERWARE PATCH FOR 9.2.0.6. You will need to stage this to a local drive on one of the nodes in the
cluster. Instructions on the installation follow.

Note: Microsoft Cluster Software (MSCS) is not required for RAC databases as the Oracle Clusterware provides the
clustering. However, the Oracle Clusterware can coexist with MSCS as long as the quorum and shared disks are
partitioned and mutually exclusive.

Disclaimer: If there are any errors or issues prior to section 2, please contact your cluster hardware
vendor's support.
The information contained here is as accurate as possible at the time of writing.

1. Configure the Cluster Hardware

1.1 Minimal Hardware List / System Requirements


1.2 Install the Shared Disk Array
1.3 Install Cluster Interconnect and Public Network Hardware
1.4 Check the Temp and Tmp Directories Defined Within Windows
1.5 Check Access to Other Nodes Within Windows
1.6 Perform a Final Clustercheck

2. Install and Configure the Cluster Software with OCFS

2.1 Prepare the OCFS Drive in Windows


2.2 Run the Oracle Cluster Setup Wizard
2.3 Install the 22018 OUI
2.4 Install the OCFS Support Software into the Oracle Home
2.5 Install the 9201 RDBMS Software into the Oracle Home
2.6 Install the 9206 RDBMS Patch
2.7 Patch the Remaining Clusterware
2.8 Fix CM Service Priority

3. Create a RAC Database Using Oracle Database Configuration Assistant

4. Use of SRVCTL for the Administration and Maintenance of a RAC Database

5. References

1. Configure the Cluster Hardware

1.1 Minimal Hardware List / System Requirements

Certified cluster configurations are listed in Note 184875.1 How To Check The Certification Matrix for Real
Application Clusters. Note that there are different configurations for Windows NT and 2000. Please consult
this listing for specific Hardware/Software/Variance information provided by your Cluster vendor. In
general, each node will require the following:

1.1.1. Hardware:

1. External shared hard disks


2. Certified hardware configurations

1.1.2. Software:

1. Oracle Operating System Dependent (OSD) clusterware layer


1.1.3. RAM:

1. 256 MB minimum for each instance running on that node

The above information is contained within the Oracle9i Database Installation Guide for Windows. See the
section "Oracle9i Database System Requirements" for additional information on hardware/system sizing for
other options of the RDBMS.

1.2 Install Shared Disk Array

Follow the procedures provided by your Cluster vendor. Verify that all nodes can view the shared partitions
within the Disk Manager in Windows 2000 and that they are numbered the same. You may have to refresh
the view or restart Disk Manager if it is open on other nodes during reconfiguration.

1.3 Install Cluster Interconnect and Public Network Hardware

Follow the procedures provided by your Cluster vendor. In general, you will set up the following Hostname
and IP information before running the Cluster setup:

1.3.1. Setup the External and Internal Network Interface Cards (NIC):
• Within the Network settings of Windows, create at least two entries for the NICs you have installed.
1. When assigning the Bindings of the NICs within the Windows Networking Properties, ensure that the
Public IP is listed at the top for all settings. The Private NIC(s) should be listed below the public NIC
settings. You can verify this at the command prompt by running the command ipconfig /all to verify that
the public IP address is listed first.
2. It is strongly recommended that a network switch is used for the interconnect between nodes rather than
a crossover cable or a hub. Most cluster hardware vendors will have this as a requirement due to known
NIC problems when there is loss of electrical connectivity. This can cause hanging issues with various
services on node reboot. Please see Note 213416.1 RAC: Troubleshooting Windows NT/2000 Service
Hangs for more information.

1.3.2. Resolution of External and Internal Hostnames:


1. Ensure that the External/Public Hostnames are defined in your Domain Name System (DNS) and
that the correct IP addresses resolve for all nodes in the cluster.
2. Ensure that all External/Public and Internal/Private Hostnames are defined in the HOSTS file on all
nodes of the cluster. This file is located in the WINDOWS_HOME\System32\drivers\etc directory.
For example a two node cluster may look like:
135.1.136.52 racnode1
135.1.136.53 racnode2
10.10.10.11 racnode1.san
10.10.10.12 racnode2.san
Note: Some vendors also require the setup of the LMHOSTS file. Please check your
Vendor specific documentation.
3. Test your cluster configuration by pinging all hostnames from each node and checking for proper name
resolution.
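
As a quick illustration of step 3, using the example HOSTS entries above, you might run the following from a
command prompt on racnode1 (the hostnames are the example values; substitute your own):

ping racnode2
ping racnode1.san
ping racnode2.san

Verify that each name resolves to the IP address defined in the HOSTS file and that replies are received.
Repeat from each node in the cluster.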

1.4 Check the Temp and Tmp Directories Defined within Windows

To install properly across all nodes, the Oracle Universal Installer will need to use the temporary folders
defined within Windows. The TEMP and TMP folders should be the same across all nodes in the cluster. By
default these settings are defined as %USERPROFILE%\Local Settings\Temp and
%USERPROFILE%\Local Settings\Tmp in the Environment Settings of My Computer. It is recommended
to explicitly redefine these as WIN_DRIVE:\temp and WIN_DRIVE:\tmp; for example: C:\temp and
C:\tmp for all nodes.
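
For example, after creating C:\temp and C:\tmp on each node and redefining the variables in the Environment
Settings, you can confirm the values in a new command prompt window:

echo %TEMP%
echo %TMP%

Both should return the redefined locations (C:\temp and C:\tmp in this example) on every node.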

1.5 Check Access to Other Nodes Within Windows

To install and perform administrative tasks, Oracle Corporation recommends using the same local
administrative username and password on every node in a cluster, or a domain username with local
administrative privileges on all nodes. All nodes must be in the same domain.

Ensure that each node has administrative access to all these directories within the Windows environment by
running the following at the command prompt:

NET USE \\host_name\C$

where host_name is the public network name of the other nodes. If you plan to install the
ORACLE_HOME onto a drive other than C, check that administrative share as well.

For example, if your WIN_HOME is on the C drive and you were installing the ORACLE_HOME onto the
E drive of all nodes, you would run the following from a command prompt on node 1 of a four-node cluster:

NET USE \\node2\C$


NET USE \\node3\C$
NET USE \\node4\C$
NET USE \\node2\E$
NET USE \\node3\E$
NET USE \\node4\E$

You would then repeat these commands on all nodes within the cluster. If the following appears for each
command, the privileges are correct:
The command completed successfully.

If you receive errors, resolve these within the Windows environment before proceeding.

1.6 Perform a Final Clustercheck

Note: If you have any issues with Clustercheck, please see Note 186130.1 Clustercheck.exe Fails with
Windows Error 183 .

Within a command prompt window, run the clustercheck.exe program located in the staged directory of
unzipped patch 3973928 (i.e., under the 3973928\Disk1\preinstall_rac\clustercheck directory). This tool will
prompt for the public and private hostnames and have you verify the IP address resolution. If that passes,
then it will perform a check of the health of the shared disk array and other environment variables and
permissions necessary for proper cluster installation and operation. It will create a subdirectory called opsm
in the temporary directory specified by your environment settings (WIN_DRIVE:\temp if you have
changed it as recommended in section 1.4) and a log file called OraInfoCoord.log. This log will contain any errors
encountered in the check. You should see the following at the bottom of the log file and within the command
prompt window when you run the clustercheck.exe program:

ORACLE CLUSTER CHECK WAS SUCCESSFUL

You must correct any errors that occur before proceeding. Please contact your Cluster Hardware Vendor if
you need assistance.

NOTE: If at any time in the installation of the software you do not see all nodes in the cluster within the
Cluster Node Selection screen, there is something wrong with your cluster configuration. You will have to go
back and troubleshoot your cluster install. You can perform clusterware diagnostics by executing the
ORACLE_HOME\bin\lsnodes -v command and analyzing its output. Use Metalink to search for any errors.
Refer to your vendor's clusterware documentation if the output indicates that your clusterware is not
properly installed. Resolve the problem, then rerun the checks.

2. Install and Configure Cluster Software with OCFS


Note: OCFS is not supported with Oracle9i Release 1 (9.0.1.0.0)

This section contains an abbreviated version of instructions from the OCFS and Oracle Clusterware
README. This configuration will install only the Oracle Datafiles on shared OCFS partitions and the
Oracle Home on local NTFS drives of each node. Alternately, you can install both the Oracle Home and the
Oracle Datafiles on OCFS. Please refer to the README documentation for installation instructions.
Currently there is a limitation with the Database Configuration Assistant (DBCA) that allows only one OCFS
drive to be used for all datafiles. Although this note will specify only one OCFS drive for your datafiles, you
may configure as many OCFS drives for the datafiles as needed. The workaround is to change the locations
to other OCFS drives using ALTER DATABASE DATAFILE RENAME commands after the database is
created.
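
As an illustration, the rename workaround might look like the following from SQL*Plus after the database is
created (the users tablespace, drive letters, and paths here are example values, not taken from this note; the
datafile must be copied to the other OCFS drive at the OS level while the tablespace is offline):

ALTER TABLESPACE users OFFLINE;
-- copy O:\oradata\mydb\users01.dbf to P:\oradata\mydb\ at the OS level
ALTER TABLESPACE users RENAME DATAFILE
  'O:\oradata\mydb\users01.dbf' TO 'P:\oradata\mydb\users01.dbf';
ALTER TABLESPACE users ONLINE;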

Due to various issues, it is recommended to apply the latest patches available for all components:

1. The Windows CFS and Clusterware Patch for 9.2.0.6 -- available on Metalink under Patch number
3973928
2. The 2.2.0.18.0 Oracle Universal Installer -- available on Metalink under Patch number 2878462
3. The 9.2.0.6 RDBMS patchset -- available on Metalink under Patch number 3948480

The following instructions will incorporate the application of these patches with the installation for a new
cluster. Please review all README instructions before proceeding. For this set of instructions, you will
stage the software to the hard drive of node 1. For example, the following convention will be used:

• Oracle 9i Release 2 (9.2.0.1) EE >> copied from the 3 RDBMS installation CDs
E:\installs\9201\disk1\
E:\installs\9201\disk2\
E:\installs\9201\disk3\
• Oracle 9i Patch 9.2.0.6 >> downloaded from Metalink Patch number 3948480
E:\installs\9206\disk1\
• Oracle Clusterware patch 9.2.0.6 >> downloaded from Metalink Patch number 3973928
E:\installs\osd9206\

Note: For installations with more than 2 nodes: Due to known OUI issues with the push installation
on a 3-or-more node cluster (Bug 2973000), it is recommended to install the 2.2.0.18 version of the
OUI so that you can perform a cluster installation of the RDBMS software. The alternative is to
perform individual installs on each node, which would put an installation inventory on each node.

If you choose to perform individual installs, you should be aware of the following:

1. The clustersetup would still be run off of only one node, as it does not use the OUI.
2. All instructions below using the OUI would need to be done individually on each node.
3. All future patch installations would also have to be done individually on each node.

Note: Sometimes there are patch issues with some non-Oracle services that may be running on the
cluster nodes. Typically, the Microsoft Distributed Transaction Coordinator (MSDTC) service can
interact with Oracle software during the install. It is recommended that this service be stopped and set to
manual start using services.msc on all nodes. If, after completing the install, the MSDTC service is
required, it can be restarted and set to autostart.
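
For example, the service can be stopped and set to manual start from a command prompt on each node (the sc
utility is included with Windows 2000; services.msc accomplishes the same thing through the GUI):

net stop msdtc
sc config msdtc start= demand

To restore it after the install, reverse the change with sc config msdtc start= auto and net start msdtc.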

2.1. Prepare the OCFS Drives in Windows

Note: The minimum partition size needed is 4.0 GB for the Oracle Datafiles.
Note: Choosing to use a Primary Partition rather than the Extended Partition may cause clustersetup to fail
with the error: "PRKI-2016: unable to find an oracle partition. Please exit the wizard, create the oracle
partition and try again". Reconfigure the drive to be an Extended Partition prior to creating Logical
Drives.

1. Log in to Windows as a member of the Local Administrators group on node 1.
2. Right-click My Computer and choose Manage. In the Computer Management console tree, select
Disk Management.
3. To create the Extended Partition, right-click the unallocated region of a basic disk and choose
Create Partition. (Dynamic disks are not supported.)
4. In the Create Partition wizard, click Next, choose Extended Partition, and click Next.
5. Choose the maximum amount of space available. Choose Next.
6. A summary screen will come up. Choose Finish.
7. To create the Logical Partition, right-click the Extended Partition (it should be green) and
choose Create Logical Drive.
8. Follow the instructions in the wizard, choosing the entire drive for the Logical Drive or the desired
amount of space (4.0 GB at minimum). Choose not to assign any drive letter and no format. Choose
Finish.
9. After creating the "empty drive", check the drive status on all other nodes via Disk
Management. Ensure that the other nodes display the new drive and did not assign a drive letter to the
drive. If a drive letter was assigned, you will have to remove it.
Note: If you are running Windows 2003, you should check whether automount is enabled on all nodes in
the cluster. Please see Note 254611.1 "Shared Partition Errors in RAC Configuration on Windows
2003". You will also have to reboot the remote nodes in order to view any new drives in Disk Management.
10. Repeat for any additional OCFS drives that may be needed.

Note: If the Disk Management window is open during any disk management modifications, you may need to close
and reopen the window, or refresh it, to view any changes you applied.

2.2 Run Oracle Cluster Setup Wizard

For 3-or-more nodes: Since the OUI is not used, you can run this only on node 1 and the software will be
correctly transferred to the other nodes in the cluster.

1. Download Patch number 3973928 Windows CFS and Clusterware Patch for 9.2.0.6.
2. Expand the patch into the staged directory, such as E:\installs\osd9206. This will create another
subdirectory such as E:\installs\osd9206\3973928. This clusterware patch contains a full clustersetup
release.
3. Within a command prompt window, navigate to the
E:\installs\osd9206\3973928\preinstall_rac\clustersetup directory in the OCFS staged directory.
4. Launch the Oracle Cluster Setup Wizard by typing clustersetup at the command line.
5. The Cluster Setup Wizard should launch with a Welcome page. Click Next.
6. The first time the Wizard is run, the only option will be to Create a cluster. Click Next.
7. Choose "Use private network for interconnect" and click Next.
8. The Network Configuration page appears. Enter the cluster name, then enter the public hostnames
for all nodes. The private hostnames will be automatically entered as public_name.san. Accept the
default or change as appropriate for your cluster configuration. Click Next.
9. The Cluster File System Options page appears. Choose CFS for Datafiles only. Click Next.
10. The CFS for Datafiles page appears. Choose a drive letter, and then choose one of the partitions you
prepared earlier, with a minimum size of 4.0 GB. Click Next.
11. The VIA Detection screen appears, stating whether Virtual Interface Architecture (VIA) hardware was
detected. Choose yes or no depending on your configuration. Please contact your cluster hardware
vendor if you are unsure. Click Next.
12. The Install Location screen appears. It will default to the WIN_HOME\system32\osd9i directory.
Accept the default and click Finish.
13. The Cluster Setup window will appear, showing the progress of installing the cluster files,
creating the cluster services on all nodes, and formatting the OCFS drives. If no errors occur, the Oracle
Cluster Setup Wizard application will complete and close automatically.
14. Check the Clusterware setup. You should have an OCFS drive visible from both nodes.
Also, the following 3 services should be running on each of the nodes in the cluster:
1. OracleClusterVolumeService
2. Oracle Object Service
3. OracleCMService9i

Note: If the clustersetup doesn't run properly, check for errors in the log files under
WIN_HOME\system32\osd9i.

If any hardware or OS configuration changes are made during this setup process, or if it is necessary to run
through the clustersetup again, you must remove and reinstall the OCFS software (Deinstallation is not
available at this time). Please see Note 230290.1 WIN RAC: How to Remove a Failed OCFS Install for more
information.

Note: Adding another OCFS drive can be done by following Note 229060.1 How to Add Another OCFS
Drive for RAC on Windows.

2.3 Install the 22018 OUI


1. Download the 2.2.0.18 version of the OUI from Patch number 2878462. Unzip it into a staged directory
such as E:\oui22018.
2. Within a command prompt window, navigate to E:\oui22018\Disk1\install\win32. Run setup.exe and
the OUI Welcome screen appears. Click Next.
3. The Cluster Node Selection screen appears. Highlight all nodes and click Next. For individual
installs: choose the local node only.
4. Ensure the correct source path is being used. In the Destination field, enter the desired Oracle
Home for the database, such as C:\oracle\ora92.
5. The Installation Types screen appears, where you choose to install both the Software Packager and the
OUI 2.2.0.18, or a subset. Choose Minimum installation (2.2.0.18 OUI only) and click Next.
6. The Summary screen appears. Check that all nodes are listed. Click Next and the progress screen
will come up. When the 2.2.0.18 OUI is installed, click Exit.
7. For individual installs: Repeat on all nodes.

2.4 Install the OCFS Support Software into the Oracle Home

1. To install the OCFS binaries, bring up the new OUI program from Start > Programs > Oracle
Installation Products > Universal Installer. Click Next at the Welcome page.
2. The Node Selection screen appears. Highlight all nodes and click Next. For individual installs:
choose only the local node.
3. Browse to change the Source Path so that it points to
E:\installs\osd9206\3973928\Disk1\stage\products.jar. In the File Locations page, enter the Oracle Home
name where you just installed the OUI and click Next.
4. OUI displays a summary page. Click Next to begin the installation and see the progress bar. When
the install is complete, you will have installed the OCFS support files in the ORA_HOME\cfspatch
directory. This OCFS support software is installed only on node 1, not on any other nodes. Click
Exit.
5. For individual installs: Repeat the previous steps on all other nodes in the cluster.

2.5 Install the 9201 RDBMS Software into the Oracle Home

1. Relaunch the OUI and click Next at the Welcome page.
2. The Node Selection screen appears. Highlight all nodes onto which the Oracle RDBMS software will
be installed. For individual installs: select only the local node. Click Next.
3. Browse so that the Source path location for the products.jar file is correct
(E:\installs\9201\disk1\stage\products.jar). In the Destination section, ensure the same location for your
Oracle9i Home as used in previous steps. Click Next and a bar at the top of the window will show the
progress of loading the products list. When it reaches 100%, it will proceed to the next screen.
4. The Available Products screen appears. Select the Oracle9i Database, and then click Next.
5. The Installation Types screen appears. Choose Enterprise Edition. The selection on this screen
refers to the installation operation, not the database configuration. Click Next.
6. The Database Configuration screen appears. Choose Software Only and click Next.
7. If Microsoft Transaction Server is detected, the Oracle Services for Microsoft Transaction
Server window appears. Enter a port number for this service (or leave the default value if unsure) and
click Next.
8. The Summary page appears. Review the information in the Summary page. Double-check the
temporary space available on the drive from which you are installing and then click Install.
Note: The OUI will install the Oracle9i software onto the local node, and then copy this
information to the other selected nodes and make registry changes. This will take some time, an
hour or more depending on your computing and networking environment. During the installation
process, the OUI does not display all the messages indicating components are being installed on
other nodes, so the installation may appear to be hung. In this case, I/O activity may be the only
indication that the process is continuing. If necessary, check each node's activity using Task
Manager. You can also check the progress by periodically reviewing the Properties of the Oracle
Home directory in Windows Explorer to see if the size is growing.
Note: There is a known bug where OUI fails to find crlogdr.exe or other files when installing from
Disk 3. These files are located on Disk 1 under the preinstall_rac subdirectory. See Note 211685.1
RAC WIN: Oracle 9.2 installation halts with error file not found CRLOGDR.EXE for more
information.
9. For individual installs: Repeat the previous steps on all other nodes in the cluster.
Note: When doing a push installation, check the remote nodes' shortcuts by right-clicking the Start
button and choosing Explore All Users. Browse to the newly created Oracle - OraHome folder by clicking
the Programs folder. Check that the shortcuts exist and work. If the folders are empty, you can copy the
shortcuts from another node or from another folder, verifying that the copied shortcuts work.

2.6 Install the 9206 RDBMS Patch

The 9206 patchset uses the 10g version of the OUI installer. Therefore you will install the 10g OUI along
with the 9206 patch.
1. Navigate to E:\installs\9206\disk1 directory and launch the setup executable. Click Next when the
Welcome screen appears.
2. Ensure the correct source path is being used. In the Destination field, enter the desired Oracle
Home for the database, such as C:\oracle\ora92. Click Next.
3. The Cluster Node Selection screen appears. The list of all cluster nodes should appear;
click Next. For individual installs: only the local node will be listed.
4. The Available Products screen appears. Accept the products checked by default (all
products with versions lower than 9.2.0.6) and click Next.
5. A Summary screen appears. Click Install.
6. The progress screen appears. When the progress bar reaches 100%, the OUI will show a screen
stating the patch installation was successful. Click Exit to complete patch install.
7. For individual installs: Repeat the previous steps for all other nodes in the cluster.
8. Reboot all nodes in the cluster before proceeding. Ensure all services start on all nodes.
Note: If you don't get a cluster node selection screen, please see Note 270048.1 Node Selection Screen Does
Not Show The Nodenames Installing 9205 (OUI 10g) for the workaround.
2.7 Patch the Remaining Clusterware

You will copy files from the staged osd9206 directory (E:\installs\osd9206\3973928 in our example) over the
existing files. You may want to rename the originals with a different extension first to keep a backup of each
file being replaced.

1. To patch the GSD, copy the files from E:\installs\osd9206\3973928\srvm\gsd over the following
existing files:

%ORACLE_HOME%\bin\orasrvm.dll
%ORACLE_HOME%\bin\gsd.exe
%ORACLE_HOME%\bin\gsdservice.exe
%ORACLE_HOME%\jlib\srvm.jar

Install the GSD service by running the following via command line on all nodes:

'gsdservice -install'

To change the service startup click: Start > Settings > Control Panel > Administrative Tools > Services.
Select OracleGSDService and select Properties from the Action menu and a tabbed Properties page
appears. Select the Log On tab and select 'Log On As' > 'This Account'. Enter the username and
password for an OS user in the Local Administrators and ORA_DBA groups. Perform this step on each
node. Please see Note 213416.1 for detailed information.

2. To patch the OLM files from E:\installs\osd9206\3973928\Disk1\preinstall_rac\olm, copy these files


into both of the following directories: %ORACLE_HOME%\bin and C:\WINNT\System32\osd9i\olm:

crlogdr.exe
DeleteDisk.exe
ExportSYMLinks.exe
GUIOracleOBJManager.exe
ImportSYMLinks.exe
LetterDelete.exe
LogPartFormat.exe
OracleObjManager.exe
OracleObjService.exe
oraoobjlib.dll
readme.txt

Reinstall the Oracle Object Service by issuing the following via command line on all nodes in the cluster:

OracleOBJService.exe /remove
OracleOBJService.exe /install

Use the Services control panel to start the service, or reboot the nodes.

Note: You may find that the service is marked as Disabled until you reboot the node. If so, you will have
to reboot prior to recreating the service.

Note: The readme of the Clusterware Patch states to replace the assistantsCommon.jar and srvm.jar
from the OTN download with the 9206 Clusterware versions. Due to Bug 4260342, this is not
recommended, and the original files from the OTN download should be used. This is expected to be fixed
in the 9207 clusterware patch (not yet released at the time of writing).

2.8 Fix the CM Service Priority

This is an optional step that can be done now or at any time after the install and configuration is complete.
The CM Service requires a small addition to the registry on all nodes to give the service a higher priority
within the Windows OS. Please see Note 255481.1 Changing the Priority of CMSRVR on Windows for the
procedure. After making this registry change, it is important to restart the CMService on all nodes to enable
this change. Again, this is optional and will not affect the install process if you choose to configure it at a
later date. It is, however, recommended for all production RAC environments and for any that will be heavily
loaded/stressed.

3. Create a RAC Database Using the Oracle Database Configuration Assistant

The Oracle Database Configuration Assistant (DBCA) will create a database for you. Oracle Corporation
recommends that you use the DBCA to create your database because it takes advantage of Oracle9i features
such as the server parameter file and automatic undo management. The DBCA also enables you to define
arbitrary tablespaces as part of the database creation process. So even if you have datafile requirements that
differ from those offered in one of the DBCA templates, use the DBCA. You can also execute user-specified
scripts as part of the database creation process. The DBCA and the Oracle Net Configuration Assistant also
accurately configure your Real Application Clusters environment for various Oracle high availability
features and cluster administration tools.

1. On the OCFS drive that you created for Datafiles, create an oradata directory at the root. For
example: O:\>mkdir oradata. Verify this directory is visible from all nodes.
2. Run the Net Configuration Assistant to ensure there are entries in the listener and tnsnames.ora setup
that will allow DBCA to create the database. Choose a Cluster Configuration and step through the tool.
You will need to configure a listener named 'LISTENERS_SIDprefix' and tnsnames entries for the local
listener named 'LISTENER_SID'. For example, if your SID prefix is MYDB, then the listener should be
LISTENERS_MYDB and the tnsnames entries should be LISTENER_MYDB1 and LISTENER_MYDB2
for a two node RAC.
3. Edit the dbca.bat file as outlined in Note 232239.1 DBCA Tips and Pitfalls in a Windows RAC
Environment, under the section titled "Trace DBCA During Database Creation". This will provide a more
complete error log if problems arise.
4. Open a new command prompt window and change directories to the ORA_HOME\bin directory. Run DBCA
from the command prompt as follows:
dbca -datafileDestination O:\oradata > dbca_trace.txt
This will spool the output to a file called dbca_trace.txt in the directory you are in. You can
change this path or filename as desired.
5. The Welcome page displays with the selection to create a Cluster or Single Instance database.
Choose the Oracle Cluster Database option and click Next.
6. The Operations page is displayed. Choose the option 'Create a Database' and click Next.
7. The Node Selection page appears. Select the nodes that you want to configure as part of the RAC
database and click Next. If the OracleGSDService is not running on any of the selected nodes, the
DBCA displays a dialog explaining how to start it.
8. The Database Templates page is displayed. The templates other than New Database include
preconfigured datafiles for file systems. Choose New Database and then click Next.
9. DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle
System Identifier (SID) Prefix. The Global Database Name is typically of the form name.domain, for
example mydb.us.oracle.com, while the SID prefix along with a number is used to uniquely identify an
instance. For example, SID prefix MYDB would become SIDs MYDB1 and MYDB2 for instances 1 and
2, respectively. Click Next.
10. The Database Options page is displayed. Select the options you wish to configure. The Additional
database Configurations button displays the option to install Java and interMedia database features.
Check all options you wish and then click Next. Note: If you did not choose New Database from the
Database Templates page, you will not see this screen.
11. The Connection Options screen appears. Select either the dedicated server or shared server option
for the default user connection type. Note: If you did not choose New Database from the Database
Templates page, you will not see this screen. Click Next.
12. DBCA now displays the Initialization Parameters page. This page comprises a number of tabs,
which you navigate through by clicking on them. Modify the Memory settings if desired.
1. Change the Archivelog mode as necessary. In general, it is recommended you create your database in
Noarchivelog mode, and then after the database is created, alter the database after performing a
complete backup.
2. DB Sizing will specify your db_block_size, sort_area_size and database character set parameters.
3. Under the File Locations tab, the option Create persistent initialization parameter file is selected by
default. The raw device name for the location of the server parameter file (spfile) must be entered.
4. The button File Location Variables displays variable information.
5. The button All Initialization Parameters... displays the Initialization Parameters dialog box. This box
presents values for all initialization parameters and indicates, through the Included (Y/N) check box,
whether each is to be included in the spfile to be created. Instance-specific parameters have an instance
value in the instance column. Complete the entries in the All Initialization Parameters page and select Close.
Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization
Parameters page are complete and select Next.
13. DBCA now displays the Database Storage window. This page allows you to enter file names for each
tablespace in your database. The file names are displayed in the Datafiles folder, but are entered by
selecting the Tablespaces icon and then selecting the tablespace object from the expanded tree. Any
names displayed here can be changed. These should already be defined using the OCFS oradata drive
created earlier. Complete the database storage information and click Next.
Note: Check the filenames to ensure they are going to the OCFS drive. Check the redo log names to
ensure they indicate the thread number to which they belong (mydb_redo1_1, mydb_redo1_2,
mydb_redo2_1, etc.).
14. The Creation Options page is displayed. Ensure that the 'Create Database' option is checked, and
check the 'Create Template' and 'Save as a Script' boxes if desired. Click Finish.
15. The DBCA Summary window is displayed. Review this information and then click OK. Once the
Summary screen is closed using the OK option, DBCA begins to create the database according to the
values specified.
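Once the creation values are chosen, the instance-specific settings end up in the shared server parameter file. As a rough illustration only (the database name MYDB, instance names, and tablespace names below are hypothetical), a two-instance 9i RAC spfile typically carries entries such as:

```
# Hedged sketch of spfile entries for a hypothetical two-instance database MYDB
*.cluster_database=true
*.cluster_database_instances=2
MYDB1.instance_number=1
MYDB2.instance_number=2
MYDB1.thread=1
MYDB2.thread=2
MYDB1.undo_tablespace='UNDOTBS1'
MYDB2.undo_tablespace='UNDOTBS2'
```

Entries prefixed with * apply to every instance, while entries prefixed with a SID apply to that instance only, which is why instance-specific parameters show an instance value in the All Initialization Parameters dialog.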

Some Notes on DBCA Database Creation:


1. The database creation can take a while, and the progress may seem slow or hung, especially during the
creation of the java server components and at the end when the database service is created on the
remote nodes and the other threads of redo are created. You can check the progress by checking Task
Manager and seeing the CPU activity, or by checking the alert log for redo log switching.
2. During the database creation process, you may see the following error: ORA-29807 specified operator
does not exist. This is a known issue (Bug 2925665). You can click on the "Ignore" button to continue.
Once DBCA has completed database creation, remember to run the 'prvtxml.plb' script from
%ORACLE_HOME%\rdbms\admin independently, as the user SYS. It is also advised to run the
'utlrp.sql' script to ensure that there are no invalid objects in the database.
3. It is not uncommon for the DBCA to hang at 95-99%. This is usually due to a problem with creating and
enabling the second thread of redo and then bringing the database up in cluster mode. Check the alert
logs on both nodes for any errors. If you don't see any errors, open a SQL*Plus session on node 1 and
connect as a sysdba user. Query the v$thread view to see how many threads are open. If there is only
one, check the redo logs (v$log, v$logfile) to see if the second thread of redo logs is physically present. If
not, run the appropriate scripts manually. At the present time, this is the postDBCreation.sql script,
located in the %ORACLE_HOME%\admin\<db_name>\scripts directory. You can also check the progress of
the scripts by reviewing the logs produced in the %ORACLE_HOME%\admin\<db_name>\create directory.
4. If you have issues with any service hangs, please see Note 213416.1 RAC: Troubleshooting Windows
NT/2000 Service Hangs.
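The checks described in note 3 above can be run from a sysdba SQL*Plus session on node 1. A sketch of the queries, using standard dynamic performance views (on a healthy two-node cluster you would expect two threads and redo groups for each):

```sql
-- How many redo threads exist, and which are open
SELECT thread#, status, enabled FROM v$thread;

-- Redo log groups per thread, and their physical members
SELECT group#, thread#, status FROM v$log;
SELECT group#, member FROM v$logfile;

-- Confirm the datafiles landed on the OCFS drive
SELECT name FROM v$datafile;
```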

4. Use of SRVCTL for the Administration and Maintenance of a RAC Database
Once your RAC database is created, you can use the Server Control (SRVCTL) utility to assist in
administration and maintenance tasks. The Global Services Daemon (GSD) receives requests from
SRVCTL to execute administrative jobs, such as startup or shutdown. The task is executed locally on all
nodes, and the results are sent back to SRVCTL. SRVCTL also serves as a single point of control between
the Oracle Intelligent Agent and the nodes in the cluster.

If you have issues with Oracle Agent hangs, please see the following notes:

Note 223554.1 Automatic Startup of the Intelligent Agent Fails in RAC Environment

Note 158295.1 How to Configure EM with 9i Real Application Clusters (RAC)

To see the online command syntax and options for each SRVCTL command, enter:

srvctl command option -h

Where command option is one of the valid options such as start, stop, or status.

The following are some examples of tasks you can perform with this utility. (Please see the corresponding
Administration guide for more complete command details.)

srvctl start -- Use this command to start all instances or a subset of instances in your Real Application
Clusters database.

For example, to start all the instances use the syntax: srvctl start database -d db_name
Or you can start specific instances using the syntax: srvctl start instance -d db_name -i
instance_name

This syntax starts the specific instance that you name. Using srvctl start also starts all listeners
associated with an instance.

srvctl stop -- Use this command to stop all instances or a subset of instances in your Real Application
Clusters database.

For example, to stop all instances use the syntax: srvctl stop database -d db_name
Or you can stop specific instances using: srvctl stop instance -d db_name -i
instance_name

Using srvctl stop also stops all listeners associated with an instance.

srvctl status -- Use the srvctl status command to determine what instances are running.

For example, use the output from the following syntax to identify which instances are running:
srvctl status instance -d db_name -i instance_name
srvctl config -- Use the srvctl config command to identify the existing Real Application Clusters
databases. You can use two syntaxes for srvctl config.

For example, the following syntax lists all the Real Application Clusters databases in your
environment: srvctl config
The following syntax lists the instances for the Real Application Clusters database name that you
provide: srvctl config database -d db_name

The Oracle Enterprise Manager auto-discovery process also uses output from this command to
discover the configurations for databases in your Real Application Clusters.

srvctl getenv -- Use the srvctl getenv command to obtain environment information for either a
specific instance or an entire Real Application Clusters database.

For example, the output from the following syntax displays environment information for the entire
Real Application Clusters database identified by the name you provide: srvctl getenv database
-d db_name
The following syntax displays environment information for a specific instance: srvctl getenv
instance -d db_name -i instance_name
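Putting the commands above together, the following sketch composes (but does not execute) the typical srvctl command lines for a hypothetical two-instance database named mydb; the database and instance names are illustrative only and should be replaced with your own:

```shell
#!/bin/sh
# Sketch: build the srvctl command lines described above.
# "mydb" and "mydb1" are hypothetical names; substitute your own.
DB=mydb

# Start or stop every instance of the database (also starts/stops the
# listeners associated with each instance)
START_DB="srvctl start database -d $DB"
STOP_DB="srvctl stop database -d $DB"

# Operate on, or check, a single instance
START_INST="srvctl start instance -d $DB -i ${DB}1"
STATUS_INST="srvctl status instance -d $DB -i ${DB}1"

# Inspect the cluster configuration and environment
CONFIG="srvctl config database -d $DB"
GETENV="srvctl getenv database -d $DB"

echo "$START_DB"
echo "$STOP_DB"
echo "$START_INST"
echo "$STATUS_INST"
echo "$CONFIG"
echo "$GETENV"
```

Each composed line matches the syntax shown in the examples above, so the script doubles as a quick reference for the option flags (-d for the database name, -i for the instance name).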

5. References
The following are references used from the Oracle online documentation for both Release 1 and Release 2:

1. Oracle9i Database Installation Guide for Windows
2. Oracle9i Real Application Clusters Installation and Configuration
3. Oracle9i Real Application Clusters Concepts
4. Oracle9i Real Application Clusters Administration
5. Oracle9i Real Application Clusters Deployment and Performance
6. Oracle9i Release Notes

In addition, the following references were used:

1. Oracle9i for Windows 2000 Tips and Techniques: Best Practices from Oracle Experts, Oracle Press /
McGraw-Hill/Osborne (ISBN 0-07-219462-6)

Metalink Notes:

1. Note 184875.1 How to Check the Certification Matrix for Real Application Clusters
2. Note 213416.1 RAC: Troubleshooting Windows NT/2000 Service Hangs
3. Note 183408.1 Raw Devices and Cluster File Systems With Real Application Clusters
4. Note 186130.1 Clustercheck.exe fails with Windows error 183
5. Note 254611.1 Shared Partition Errors in RAC Configuration on Windows 2003
6. Note 223554.1 Automatic Startup of the Intelligent Agent Fails in RAC Environment
7. Note 158295.1 How to Configure EM with 9i Real Application Clusters (RAC)
8. Note 211685.1 RAC WIN: Oracle 9.2 installation halts with error file not found CRLOGDR.EXE
9. Note 230290.1 WIN RAC: How to Remove a Failed OCFS Install
10. Note 232239.1 DBCA Tips and Pitfalls in a Windows RAC Environment
11. Note 255481.1 Changing the Priority of CMSRVR on Windows
12. Note 257689.1 'gsdservice -install' Fails to Create the OracleGSDService
13. Note 229060.1 How to Add Another OCFS Drive for RAC on Windows
14. Note 270048.1 Node Selection Screen Does Not Show The Nodenames Installing 9205 (OUI 10g)

How to install Oracle RAC on Windows

Oracle Tips by Burleson Consulting


November 2, 2007

Installing Oracle RAC on Windows

Question: How do I install Oracle RAC on a Windows platform? I'm a beginner; the documentation does not help,
and I want step-by-step directions for installing and configuring RAC on my Windows PC.

Answer: For Windows-based RAC database systems, both Windows 2000 and Windows 2003 are supported.
Windows 2003 refers to the Enterprise, Datacenter, and Standard Editions; Windows 2000 refers to Advanced
Server and Datacenter Server. Note that the RAC clusterware is supplied by Oracle.

Ordinarily, the data files, control files, and redo log files need to reside on unformatted raw devices on a Windows
RAC platform. However, with Oracle9i Release 2 (9.2.0.1.0) there is the option of using the Oracle Cluster File
System for the shared storage volumes of the RAC database. If Oracle9i Release 1 (9.0.1.0.0) is being installed,
logical partitions (otherwise known as RAW partitions) must still be used for the shared disks.

In a Windows environment, the raw devices are more commonly known as logical drives that reside within extended
partitions.

Installing Windows OCFS for RAC

The Oracle Cluster File System (OCFS) for Windows


OCFS is a shared file system specifically designed for Real Application Clusters. It allows multiple nodes to share
Oracle Home and databases on a single SAN volume. All nodes in the cluster have concurrent ownership and access to
the shared disks.
With OCFS on Windows, Oracle supports placing all database files, as well as the Oracle software installation
itself, on the cluster file system. OCFS was designed for use with an Oracle database, not as a new
general-purpose file system. A single shared Oracle Home aids the install process by providing a consistent
image of binaries and metadata across the RAC cluster.

However, according to Oracle Metalink Note 225550.1, the current version of OCFS for RAC does not support
access from mapped drives in Windows environments.

Getting help installing Windows RAC

Installing and configuring Oracle RAC is quite complex, and many shops hire experienced Oracle RAC consultants.

If you want to install Oracle RAC on Windows yourself, in lieu of the Oracle RAC training, I recommend the book
"Oracle 10g RAC and Grid".

For installing RAC on a Windows PC, there is a whole book dedicated to installing and configuring RAC on a PC by
Edward Stoever.

If you are a neophyte in RAC, it's best to get the step-by-step directions from the book "Personal Oracle RAC Clusters:
Create Oracle 10g Grid Computing at Home".
