Windows 2000
Purpose
This document provides step-by-step instructions for installing and configuring an Oracle9i Real
Application Clusters (RAC) database using Oracle Cluster File System (OCFS) on a Windows 2000 cluster. Based on
Service Request volume, this note outlines the installation of 9i Release 2 on a Windows 2000 cluster. The
instructions for NT or 2003 should be similar; however, navigation within the OS may differ (e.g., Disk
Management in Windows 2000 is Disk Administrator in NT). The basic principles, especially the pre-install cluster
configuration, are the same.
Note: If you wish to use Logical Partitions (otherwise known as RAW Partitions) for the datafiles instead of OCFS,
please see Note 236155.1.
Note: OCFS is not supported with Oracle9i Release 1 (9.0.1.0.0). You must use Logical Partitions for the datafiles:
please see Note 236155.1.
OCFS and the Oracle Clusterware are available for download from Metalink under Patch 3973928, WINDOWS CFS
AND CLUSTERWARE PATCH FOR 9.2.0.6. You will need to stage this patch to a local drive on one of the nodes in the
cluster. Installation instructions follow.
Note: Microsoft Cluster Software (MSCS) is not required for RAC databases as the Oracle Clusterware provides the
clustering. However, the Oracle Clusterware can coexist with MSCS as long as the quorum and shared disks are
partitioned and mutually exclusive.
Disclaimer: If there are any errors or issues prior to section 2, please contact your cluster hardware
vendor's support.
The information contained here is as accurate as possible at the time of writing.
Certified cluster configurations are listed in Note 184875.1 How To Check The Certification Matrix for Real
Application Clusters. Note that there are different configurations for Windows NT and 2000. Please consult
this listing for specific Hardware/Software/Variance information provided by your Cluster vendor. In
general, each node will require the following:
1.1.1. Hardware:
1.1.2. Software:
The above information is contained within the Oracle9i Database Installation Guide for Windows. See the
section "Oracle9i Database System Requirements" for additional information on hardware/system sizing for
other options of the RDBMS.
Follow the procedures provided by your Cluster vendor. Verify that all nodes can view the shared partitions
within the Disk Manager in Windows 2000 and that they are numbered the same. You may have to refresh
the view or restart Disk Manager if it is open on other nodes during reconfiguration.
Follow the procedures provided by your Cluster vendor. In general, you will setup the following Hostname
and IP information before running the Cluster setup:
1.3.1. Setup the External and Internal Network Interface Cards (NIC):
• Within the Network settings of Windows, create at least two entries for the NICs you have installed.
1. When assigning the bindings of the NICs within the Windows Networking properties, ensure that the
Public IP is listed at the top for all settings. The Private NIC(s) should be listed below the public NIC
settings. You can confirm this at the command prompt by running ipconfig /all and checking that the
public IP address is listed first.
2. It is strongly recommended that a network switch be used for the interconnect between nodes rather than
a crossover cable or a hub. Most cluster hardware vendors have this as a requirement because of known
NIC problems when electrical connectivity is lost, which can cause various services to hang on node
reboot. Please see Note 213416.1 RAC: Troubleshooting Windows NT/2000 Service Hangs for more
information.
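As a quick check of the binding order described in step 1, you can run ipconfig /all on each node; the adapter carrying the public IP should be listed first. The adapter names and addresses below are hypothetical:

```
C:\> ipconfig /all
Ethernet adapter Public:
        IP Address. . . . . . . . . . . . : 10.1.1.1
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
Ethernet adapter Private:
        IP Address. . . . . . . . . . . . : 192.168.1.1
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
```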
1.4 Check the Temp and Tmp Directories Defined within Windows
To install properly across all nodes, the Oracle Universal Installer will need to use the temporary folders
defined within Windows. The TEMP and TMP folders should be the same across all nodes in the cluster. By
default these settings are defined as %USERPROFILE%\Local Settings\Temp and
%USERPROFILE%\Local Settings\Tmp in the Environment Settings of My Computer. It is recommended
to explicitly redefine these as WIN_DRIVE:\temp and WIN_DRIVE:\tmp; for example: C:\temp and
C:\tmp for all nodes.
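For example, on each node you might create the directories and point the variables at them (the C: drive letter is an assumption; for a permanent change, edit the variables under My Computer > Properties > Advanced > Environment Variables rather than using set, which only affects the current session):

```
C:\> mkdir C:\temp
C:\> mkdir C:\tmp
C:\> set TEMP=C:\temp
C:\> set TMP=C:\tmp
```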
To install and perform administrative tasks, Oracle Corporation recommends using the same local
administrative username and password on every node in a cluster, or a domain username with local
administrative privileges on all nodes. All nodes must be in the same domain.
Ensure that each node has administrative access to all of these directories within the Windows environment by
checking the administrative shares (such as \\host_name\C$, where host_name is the public network name of
each of the other nodes) from a command prompt. If you plan to install the
ORACLE_HOME onto a drive location other than C, check that administrative share as well.
For example, if your WIN_HOME is on the C drive and you were installing the ORACLE_HOME onto the
E drive of all nodes, you would run the following from a command prompt on node 1 of a four-node cluster:
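The commands themselves are not reproduced above; assuming the standard administrative-share check and hypothetical node names node2 through node4, they would look like the following:

```
C:\> net use \\node2\C$
C:\> net use \\node2\E$
C:\> net use \\node3\C$
C:\> net use \\node3\E$
C:\> net use \\node4\C$
C:\> net use \\node4\E$
```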
You would then repeat these commands on all nodes within the cluster. If the following appears for each
command, the privileges are correct:
The command completed successfully.
If you receive errors, resolve these within the Windows environment before proceeding.
Note: If you have any issues with Clustercheck, please see Note 186130.1 Clustercheck.exe Fails with
Windows Error 183.
Within a command prompt window, run the clustercheck.exe program located in the staged directory of
unzipped patch 3973928 (i.e., under the 3973928\Disk1\preinstall_rac\clustercheck directory). This tool will
prompt for the public and private hostnames and have you verify the IP address resolution. If that passes,
then it will perform a check of the health of the shared disk array and other environment variables and
permissions necessary for proper cluster installation and operation. It will create a subdirectory called opsm
in the temporary directory specified by your environment settings (WIN_DRIVE:\temp if you have changed it
as recommended above) and a log file called OraInfoCoord.log. This log will contain any errors
encountered in the check. You should see the following at the bottom of the log file and within the command
prompt window when you run the clustercheck.exe program:
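The expected output is not reproduced above; in the shipped versions of clustercheck the success message is, to the best of our knowledge, of the form:

```
ORACLE CLUSTER CHECK WAS SUCCESSFUL
```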
You must correct any errors that occur before proceeding. Please contact your Cluster Hardware Vendor if
you need assistance.
NOTE: If at any time in the installation of the software you do not see all nodes in the cluster within the
Cluster Node Selection screen, there is something wrong with your cluster configuration. You will have to go
back and troubleshoot your cluster install. You can perform clusterware diagnostics by executing the
ORACLE_HOME\bin\lsnodes -v command and analyzing its output. Use Metalink to search for any errors.
Refer to your vendor's clusterware documentation if the output indicates that your clusterware is not
properly installed. Resolve the problem, then rerun the checks.
This section contains an abbreviated version of instructions from the OCFS and Oracle Clusterware
README. This configuration will install only the Oracle Datafiles on shared OCFS partitions and the
Oracle Home on local NTFS drives of each node. Alternately, you can install both the Oracle Home and the
Oracle Datafiles on OCFS. Please refer to the README documentation for installation instructions.
Currently there is a limitation with the Database Configuration Assistant (DBCA) that allows only one OCFS
drive to be used for all datafiles. Although this note specifies only one OCFS drive for your datafiles, you
may configure as many OCFS drives for the datafiles as needed. The workaround is to move the files to
other OCFS drives using ALTER DATABASE RENAME FILE commands after the database is
created.
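As a sketch of that workaround (all file names, and the O: and P: drive letters, are hypothetical), with the tablespace offline or the database mounted but not open, you would copy the file at the OS level and then rename it inside Oracle:

```sql
-- Hypothetical paths: O: and P: are both OCFS drives.
-- Copy the datafile to the new location at the OS level first, then:
ALTER DATABASE RENAME FILE 'O:\oradata\mydb\users01.dbf'
                        TO 'P:\oradata\mydb\users01.dbf';
```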
Due to various issues, it is recommended to apply the latest patches available for all components:
1. The Windows CFS and Clusterware Patch for 9.2.0.6 -- available in Metalink under Patch number
3973928
2. The 2.2.0.18.0 Oracle Universal Installer -- available in Metalink under Patch number 2878462
3. The 9.2.0.6 RDBMS patchset -- available in Metalink under Patch number 3948480
The following instructions will incorporate the application of these patches with the installation for a new
cluster. Please review all README instructions before proceeding. For this set of instructions, you will
stage the software to the hard drive of node 1. For example, the following convention will be used:
• Oracle 9i Release 2 (9.2.0.1) EE >> copied from the 3 RDBMS installation CDs
E:\installs\9201\disk1\
E:\installs\9201\disk2\
E:\installs\9201\disk3\
• Oracle 9i Patch 9.2.0.6 >> downloaded from Metalink Patch number 3948480
E:\installs\9206\disk1\
• Oracle Clusterware patch 9.2.0.6 >> downloaded from Metalink Patch number 3973928
E:\installs\osd9206\
Note: For installations with more than 2 nodes: Due to known OUI issues with the push installation
on a 3-or-more node cluster (Bug 2973000), it is recommended to install the 2.2.0.18 version of the
OUI so that you can perform a cluster installation of the RDBMS software. The alternative is to
perform individual installs on each node, which would put an installation inventory on each node.
If you choose to perform individual installs, you should be aware of the following:
1. The clustersetup would still be run off of only one node, as it does not use the OUI.
2. All instructions below using the OUI would need to be done individually on each node.
3. All future patch installations would also have to be done individually on each node.
Note: Sometimes there are patch issues with some non-Oracle services that may be running on the
cluster nodes. Typically, the Microsoft Distributed Transaction Coordinator (MSDTC) service can
interact with Oracle software during the install. It is recommended that this service be stopped and set to
manual start using services.msc on all nodes. If the MSDTC service is required after completing the
install, it can be restarted and set to autostart.
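A sketch of how this can be done from a command prompt (the MSDTC service name is standard, but verify it in services.msc on your nodes; sc.exe is native on later Windows versions and available in the Windows 2000 Resource Kit, otherwise use the Services control panel):

```
C:\> net stop msdtc
C:\> sc config msdtc start= demand
```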
Note: The minimum partition size needed is 4.0 GB for the Oracle Datafiles.
Note: Choosing to use a Primary Partition rather than the Extended Partition may cause clustersetup to fail
with the error: "PRKI-2016: unable to find an oracle partition. Please exit the wizard, create the oracle
partition and try again". Reconfigure the drive to be an Extended Partition prior to creating Logical
Drives.
Note: If the Disk Management window is open during any disk management modifications, you need to close
and reopen the window, or refresh the view, to see any changes you applied.
For 3-or-more nodes: Since the OUI is not used, you can run this only on node 1 and the software will be
correctly transferred to the other nodes in the cluster.
1. Download Patch number 3973928 Windows CFS and Clusterware Patch for 9.2.0.6.
2. Expand the patch into the staged directory, such as E:\installs\osd9206. This will create another
subdirectory such as E:\installs\osd9206\3973928. This clusterware patch contains a full clustersetup
release.
3. Within a command prompt window, navigate to the
E:\installs\osd9206\3973928\preinstall_rac\clustersetup directory in the OCFS staged directory.
4. Launch the Oracle Cluster Setup Wizard by typing clustersetup at the command line.
5. The Cluster Wizard program should launch with a Welcome page. Click Next.
6. The first time the Wizard is run, the only option will be to Create a cluster. Click Next.
7. Choose "Use private network for interconnect" and click Next.
8. The Network Configuration page appears. Enter the cluster name. Then enter the public hostnames
for all nodes. The private hostnames will be automatically entered as public_name.san. Accept the
default or change as appropriate for your cluster configuration. Click Next.
9. The Cluster File System Options page appears. Choose CFS for Datafiles only. Click Next.
10. The CFS for Datafiles page appears. Choose a drive letter, and then choose one of the partitions you
prepared earlier with a minimum 4.0 GB in size. Click Next.
11. The VIA Detection screen appears, stating whether Virtual Interface Architecture (VIA) hardware was
detected. Choose yes or no depending on your configuration. Please contact your cluster hardware
vendor if you are unsure. Click Next.
12. The Install Location screen appears. It will default to the WIN_HOME\system32\osd9i directory.
Accept the default and click Finish.
13. The Cluster Setup window will appear. This will show the progress of installing the cluster files,
creating the cluster services on all nodes, and formatting the OCFS drives. If no errors occur, the Oracle
Cluster Setup Wizard application will complete and close automatically.
14. Check the Clusterware setup. You should have an OCFS drive visible from all nodes.
Also, the following 3 services should be running on each of the nodes in the cluster:
1. OracleClusterVolumeService
2. Oracle Object Service
3. OracleCMService9i
Note: If the clustersetup doesn't run properly, check for errors in the log files under
WIN_HOME\system32\osd9i.
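One quick way to verify the three services on each node is to list the running services and filter for Oracle (the display names are assumed to match the list above):

```
C:\> net start | findstr /i "Oracle"
```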
If any hardware or OS configuration changes are made during this setup process, or if it is necessary to run
through the clustersetup again, you must remove and reinstall the OCFS software (Deinstallation is not
available at this time). Please see Note 230290.1 WIN RAC: How to Remove a Failed OCFS Install for more
information.
Note: Adding another OCFS drive can be done by following Note 229060.1 How to Add Another OCFS
Drive for RAC on Windows.
2.4 Install the OCFS Support Software into the Oracle Home
1. To install the OCFS binaries, bring up the new OUI program from Start > Programs > Oracle
Installation Products > Universal Installer. Click Next at the Welcome page.
2. The Node Selection screen appears. Highlight all nodes and click Next. For individual installs:
choose only the local node.
3. Browse to change the Source Path so that it is pointing to
E:\installs\osd9206\3973928\Disk1\stage\products.jar. In the File Locations page, enter the Oracle Home
name where you just installed the OUI and click Next.
4. OUI displays a summary page. Click Next to begin the installation and see the progress bar. When
the install is complete, you will have installed the OCFS support files in the ORA_HOME\cfspatch
directory. This OCFS support software will only be installed on node 1, not on any other nodes. Click
Exit.
5. For individual installs: repeat the previous steps for all other nodes in the cluster.
2.5 Install the 9.2.0.6 Patch Set into the Oracle Home
The 9.2.0.6 patch set uses the 10g version of the OUI installer. Therefore you will install the 10g OUI along
with the 9.2.0.6 patch.
1. Navigate to E:\installs\9206\disk1 directory and launch the setup executable. Click Next when the
Welcome screen appears.
2. Ensure the correct source path is being used. In the Destination field, enter the desired Oracle
Home for the database, such as C:\oracle\ora92. Click Next.
3. The Cluster Node Selection screen appears. The list of all the cluster nodes should appear;
highlight all nodes and click Next. For individual installs: only the local node will be listed.
4. The Available Products screen appears. Accept the products checked by default (all
products with versions lower than 9.2.0.6) and click Next.
5. A Summary screen appears. Click Install.
6. The progress screen appears. When the progress bar reaches 100%, the OUI will show a screen
stating the patch installation was successful. Click Exit to complete patch install.
7. For individual installs: Repeat the previous steps for all other nodes in the cluster.
8. Reboot all nodes in the cluster before proceeding. Ensure all services start on all nodes.
Note: If you don't get a cluster node selection screen, please see Note 270048.1 Node Selection Screen Does
Not Show The Nodenames Installing 9205 (OUI 10g) for the workaround.
2.7 Patch the Remaining Clusterware
You will copy all files from the staged osd9206 directory (E:\installs\osd9206\3973928 in our example). You
may want to rename the extensions of the existing files first (for example, to .bak) so that you keep the original
versions.
1. To patch the GSD, copy these files from E:\installs\osd9206\3973928\srvm\gsd into the following
locations:
%ORACLE_HOME%\bin\orasrvm.dll
%ORACLE_HOME%\bin\gsd.exe
%ORACLE_HOME%\bin\gsdservice.exe
%ORACLE_HOME%\jlib\srvm.jar
Install the GSD service by running the following at the command line on all nodes:
gsdservice -install
To change the service startup click: Start > Settings > Control Panel > Administrative Tools > Services.
Select OracleGSDService and select Properties from the Action menu and a tabbed Properties page
appears. Select the Log On tab and select 'Log On As' > 'This Account'. Enter the username and
password for an OS user in the Local Administrators and ORA_DBA groups. Perform this step on each
node. Please see Note 213416.1 for detailed information.
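If you prefer the command line over the Services GUI, the log-on account can also be set with sc (the account name and password below are hypothetical; sc.exe is native on Windows XP/2003 and available in the Windows 2000 Resource Kit):

```
C:\> sc config OracleGSDService obj= ".\oraadmin" password= oracle_pwd
C:\> sc config OracleGSDService start= auto
```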
2. To patch the Oracle Object Service, copy the following files from the staged clusterware patch into the
matching locations in the Oracle Home (see the patch README for the exact source and destination
directories):
crlogdr.exe
DeleteDisk.exe
ExportSYMLinks.exe
GUIOracleOBJManager.exe
ImportSYMLinks.exe
LetterDelete.exe
LogPartFormat.exe
OracleObjManager.exe
OracleObjService.exe
oraoobjlib.dll
readme.txt
Reinstall the Oracle Object Service by issuing the following via command line on all nodes in the cluster:
OracleOBJService.exe /remove
OracleOBJService.exe /install
Use the Services control panel to start the service, or reboot the nodes.
Note: You may find that the service is marked as Disabled until you reboot the node. If so, you will have
to reboot prior to recreating the service.
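Assuming the display name shown earlier, starting the service from a command prompt would look like:

```
C:\> net start "Oracle Object Service"
```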
Note: The readme of the Clusterware Patch states to replace the assistantsCommon.jar and srvm.jar
from the OTN download with the 9206 Clusterware versions. Due to Bug 4260342, this is not
recommended, and the original files from the OTN download should be used. This is expected to be fixed
in the 9207 clusterware patch (which to date is not released).
This is an optional step that can be done now or at any time after the install and configuration are complete.
The CM Service requires a small addition to the registry on all nodes to give the service a higher priority
within the Windows OS. Please see Note 255481.1 Changing the Priority of CMSRVR on Windows for the
procedure. After making this registry change, it is important to restart the CMService on all nodes to enable
the change. Again, this is optional and will not affect the install process if you choose to configure it at a
later date. It is, however, recommended for all production RAC environments or ones that will be heavily
loaded/stressed.
1. On the OCFS drive that you created for datafiles, create an oradata directory at the root. For
example: O:\>mkdir oradata. Verify this directory is visible from all nodes.
2. Run the Net Configuration Assistant to ensure there are entries in the listener.ora and tnsnames.ora setup
that will allow DBCA to create the database. Choose a Cluster Configuration and step through the tool.
You will need to configure a listener named LISTENERS_SIDprefix and tnsnames entries for the local
listeners named LISTENER_SID. For example, if your SID prefix is MYDB, then the listener should be
LISTENERS_MYDB and the tnsnames entries should be LISTENER_MYDB1 and LISTENER_MYDB2
for a two-node RAC.
3. Edit the dbca.bat file as outlined in Note 232239.1 DBCA Tips and Pitfalls in a Windows RAC
Environment under the section titled "Trace DBCA During Database Creation". This will provide a more
complete error log if problems arise.
4. Open a new MS-DOS window and change directories to the ORA_HOME\bin directory. Run DBCA
from the command prompt as follows:
dbca -datafileDestination O:\oradata > dbca_trace.txt
This will spool the output to a file called dbca_trace.txt in the directory you are in. You can
change this path or filename as desired.
5. The Welcome page displays with the selection to create a Cluster or Single Instance Database.
Choose the Oracle Cluster Database option and select Next.
6. The Operations page is displayed. Choose the option 'Create a Database' and click Next.
7. The Node Selection page appears. Select the nodes that you want to configure as part of the RAC
database and click Next. If the OracleGSDService is not running on any of the selected nodes, the
DBCA displays a dialog explaining how to start it.
8. The Database Templates page is displayed. The templates other than New Database include
preconfigured datafiles for file systems. Choose New Database and then click Next.
9. DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle
System Identifier (SID) Prefix. The Global Database Name is typically of the form name.domain, for
example mydb.us.oracle.com, while the SID prefix along with a number is used to uniquely identify an
instance. For example, SID prefix MYDB would become SIDs MYDB1 and MYDB2 for instances 1 and
2, respectively. Click Next.
10. The Database Options page is displayed. Select the options you wish to configure. The Additional
Database Configurations button displays the option to install the Java and interMedia database features.
Check all options you wish and then choose Next. Note: If you did not choose New Database from the
Database Template page, you will not see this screen.
11. The Connection Options screen appears. Select either the dedicated server or shared server option
for the default user connection type. Note: If you did not choose New Database from the Database
Template page, you will not see this screen. Click Next.
12. DBCA now displays the Initialization Parameters page. This page comprises a number of pages,
which you navigate through by clicking on the tabs. Modify the Memory settings if desired.
1. Change the Archivelog mode as necessary. In general, it is recommended that you create your database in
Noarchivelog mode and then, after the database is created and you have performed a complete backup,
alter it to Archivelog mode.
2. DB Sizing will specify your db_block_size, sort_area_size and database character set parameters.
3. Under the File Locations tab, the option Create persistent initialization parameter file is selected by
default. The raw device name for the location of the server parameter file (spfile) must be entered.
4. The button File Location Variables displays variable information.
5. The button All Initialization Parameters... displays the Initialization Parameters dialog box. This box
presents values for all initialization parameters and indicates, through the Included (Y/N) check box,
whether they are to be included in the spfile to be created. Instance-specific parameters have an instance
value in the instance column. Complete entries in the All Initialization Parameters page and select Close.
Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization
Parameters page are complete and select Next.
13. DBCA now displays the Database Storage window. This page allows you to enter file names for each
tablespace in your database. The file names are displayed in the Datafiles folder, but are entered by
selecting the Tablespaces icon, and then selecting the tablespace object from the expanded tree. Any
names displayed here can be changed. These should already be defined using the OCFS oradata drive
created earlier. Complete the database storage information and click Next.
Note: Check the filenames to ensure they are going to the OCFS drive. Check the redo log names to
ensure they indicate the thread number to which they belong (mydb_redo1_1, mydb_redo1_2,
mydb_redo2_1, etc.).
14. The Creation Options page is displayed. Ensure that the option 'Create Database' is checked and
click Finish. Check the 'Create Template' and 'Save as a Script' boxes if desired.
15. The DBCA Summary window is displayed. Review this information and then click OK. Once the
Summary screen is closed using the OK option, DBCA begins to create the database according to the
values specified.
If you have issues with Oracle Agent hangs, please see the following notes:
Note 223554.1 Automatic Startup of the Intelligent Agent Fails in RAC Environment
To see the online command syntax and options for each SRVCTL command, enter the command followed by
the -h option, where command option is one of the valid options such as start, stop, or status.
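The help invocation itself is not shown above; based on standard srvctl usage it takes the following form (the exact help flag may vary by release):

```
C:\> srvctl command option -h
C:\> srvctl start instance -h
```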
The following are some examples of tasks you can perform with this utility. (Please see the corresponding
Administration guide for more complete command details.)
srvctl start -- Use this command to start all instances or a subset of instances in your Real Application
Clusters database.
For example, to start all the instances use the syntax: srvctl start database -d db_name
Or you can start specific instances using the syntax: srvctl start instance -d db_name -i
instance_name
This syntax starts the specific instance that you name. Using srvctl start also starts all listeners
associated with an instance.
srvctl stop -- Use this command to stop all instances or a subset of instances in your Real Application
Clusters database.
For example, to stop all instances use the syntax: srvctl stop database -d db_name
Or you can stop specific instances using: srvctl stop instance -d db_name -i
instance_name
Using srvctl stop also stops all listeners associated with an instance.
srvctl status -- Use the srvctl status command to determine what instances are running.
For example, use the output from the following syntax to identify which instances are running:
srvctl status instance -d db_name -i instance_name
srvctl config -- Use the srvctl config command to identify the existing Real Application Clusters
databases. You can use two syntaxes for srvctl config.
For example, the following syntax lists all the Real Application Clusters databases in your
environment: srvctl config
The following syntax lists the instances for the Real Application Clusters database name that you
provide: srvctl config database -d db_name
The Oracle Enterprise Manager auto-discovery process also uses output from this command to
discover the configurations for databases in your Real Application Clusters.
srvctl getenv -- Use the srvctl getenv command to obtain environment information for
either a specific instance or for an entire Real Application Clusters database.
For example, the output from the following syntax displays environment information for the entire
Real Application Clusters database identified by the name you provide: srvctl getenv database
-d db_name
The following syntax displays environment information for a specific instance: srvctl getenv
instance -d db_name -i instance_name
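Putting these together, a short example session for a hypothetical two-node database named mydb (instances mydb1 and mydb2) might look like:

```
C:\> srvctl config
C:\> srvctl config database -d mydb
C:\> srvctl status database -d mydb
C:\> srvctl stop instance -d mydb -i mydb2
C:\> srvctl start instance -d mydb -i mydb2
```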
5. References
The following are references used from the Oracle online documentation for both Release 1 and Release 2:
1. Oracle9i for Windows 2000 Tips and Techniques: Best Practices from Oracle Experts Oracle Press -
McGraw-Hill/Osborne (ISBN 0-07-219462-6)
Metalink Notes:
1. Note 184875.1 How to Check the Certification Matrix for Real Application Clusters
2. Note 213416.1 RAC: Troubleshooting Windows NT/2000 Service Hangs
3. Note 183408.1 Raw Devices and Cluster File Systems With Real Application Clusters
4. Note 186130.1 Clustercheck.exe fails with Windows error 183
5. Note 254611.1 Shared Partition Errors in RAC Configuration on Windows 2003
6. Note 223554.1 Automatic Startup of the Intelligent Agent Fails in RAC Environment
7. Note 158295.1 How to Configure EM with 9i Real Application Clusters (RAC)
8. Note 211685.1 RAC WIN: Oracle 9.2 installation halts with error file not found CRLOGDR.EXE
9. Note 230290.1 WIN RAC: How to Remove a Failed OCFS Install
10. Note 232239.1 DBCA Tips and Pitfalls in a Windows RAC Environment
11. Note 255481.1 Changing the Priority of CMSRVR on Windows
12. Note 257689.1 'gsdservice -install' Fails to Create the OracleGSDService
13. Note 229060.1 How to Add Another OCFS Drive for RAC on Windows
14. Note 270048.1 Node Selection Screen Does Not Show The Nodenames Installing 9205 (OUI 10g)
Question: How do I install Oracle RAC on a Windows platform? I'm a beginner, the documentation does not help,
and I want step-by-step directions for installing and configuring RAC on my Windows PC.
Answer: For Windows-based RAC database systems, both Windows 2000 and Windows 2003 are supported.
Windows 2003 refers to the Enterprise, Datacenter, and Standard editions; Windows 2000 refers to Advanced Server
and Datacenter Server. Note that the RAC clusterware is supplied by Oracle.
Ordinarily, the data files, control files, and redo log files need to reside on unformatted raw devices on a Windows
RAC platform. However, with Oracle9i Release 2 (9.2.0.1.0) there is the option of using the cluster file system to
set up the shared storage volumes for the RAC database. If Oracle9i Release 1 (9.0.1.0.0) is being installed, logical
partitions (otherwise known as raw partitions) must still be used for the shared disks.
In a Windows environment, the raw devices are more commonly known as logical drives that reside within extended
partitions.
However, according to Oracle Metalink Note # 225550.1, the current version of OCFS for RAC does not support
access from mapped drives in the Windows environments.
Installing and configuring Oracle RAC is quite complex, and many shops hire experienced Oracle RAC consultants.
If you want to install Oracle RAC on Windows yourself, in lieu of Oracle RAC training, I recommend the book
"Oracle 10g RAC and Grid".
For installing RAC on a Windows PC, there is a whole book dedicated to installing and configuring RAC on a PC by
Edward Stoever.
If you are a neophyte in RAC, it's best to get the step-by-step directions from the book "Personal Oracle RAC Clusters:
Create Oracle 10g Grid Computing at Home".