Table of Contents

Rac11gR2OnWindows
1. Introduction
   1.1. Overview of new concepts in 11gR2 Grid Infrastructure
        1.1.1. SCAN
        1.1.2. GNS
        1.1.3. OCR and Voting on ASM storage
        1.1.4. Intelligent Platform Management Interface (IPMI)
        1.1.5. Time sync
        1.1.6. Clusterware and ASM share the same Oracle Home
   1.2. System Requirements
        1.2.1. Hardware Requirements
        1.2.2. Network Hardware Requirements
        1.2.3. IP Address Requirements
        1.2.4. Installation method
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
        2.1.1. User Account changes specifically for Windows 2008
        2.1.2. Net Use Test
        2.1.3. Remote Registry Connect
   2.2. Networking
        2.2.1. Network Ping Tests
        2.2.2. Network Interface Binding Order (and Protocol Priorities)
        2.2.3. Disable DHCP Media Sense
        2.2.4. Disable SNP Features
   2.3. Stopping Services
   2.4. Synchronizing the Time on ALL Nodes
   2.5. Environment Variables
   2.6. Stage the Oracle Software
   2.7. Cluster Verification Utility (CVU) stage check
3. Prepare the shared storage for Oracle RAC
   3.1. Shared Disk Layout
        3.1.1. Grid Infrastructure Shared Storage
        3.1.2. ASM Shared Storage
   3.2. Enable Automount
   3.3. Clean the Shared Disks
   3.4. Create Logical partitions inside Extended partitions
        3.4.1. View Created partitions
   3.5. List Drive Letters
        3.5.1. Remove Drive Letters
        3.5.2. List volumes on Second node
   3.6. Marking Disk Partitions for use by ASM
   3.7. Verify Grid Infrastructure Installation Readiness
4. Oracle Grid Infrastructure Install
   4.1. Basic Grid Infrastructure Install (without GNS and IPMI)
5. Grid Infrastructure Home Patching
6. RDBMS Software Install
7. RAC Home Patching
8. Run ASMCA to create diskgroups
9. Run DBCA to create the database


Rac11gR2OnWindows

1. Introduction
1.1. Overview of new concepts in 11gR2 Grid Infrastructure
1.1.1. SCAN
The single client access name (SCAN) is the address used by all clients connecting to the cluster. The SCAN is a domain name registered to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS). The SCAN eliminates the need to change client configuration when nodes are added to or removed from the cluster. Clients using the SCAN can also access the cluster using EZCONNECT.

• The SCAN is a domain name that resolves to all the addresses allocated for it. Allocate three addresses to the SCAN. During Oracle Grid Infrastructure installation, a listener is created for each of the SCAN addresses, and Oracle Grid Infrastructure controls which server responds to a SCAN address request. Providing three IP addresses in the DNS for SCAN name mapping ensures high availability.
• The SCAN addresses must be on the same subnet as the VIP addresses for nodes in the cluster.
• The SCAN domain name must be unique within your corporate network.
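As an illustrative aid (not part of the original guide), the "same subnet as the VIPs" rule above can be sanity-checked with a short Python sketch. The addresses and the /24 prefix length below are hypothetical examples:

```python
import ipaddress

def check_scan_addresses(scan_ips, vip_ips, prefix_len=24):
    """Check the SCAN rules described above: three unique addresses,
    all on the same subnet as the node VIPs."""
    if len(set(scan_ips)) != 3:
        return False
    # Derive the VIP subnet from the first VIP and the assumed prefix length.
    vip_net = ipaddress.ip_network(f"{vip_ips[0]}/{prefix_len}", strict=False)
    return all(ipaddress.ip_address(ip) in vip_net
               for ip in scan_ips + vip_ips)

# Hypothetical addresses for a 2-node cluster:
vips = ["192.0.2.102", "192.0.2.103"]
scans = ["192.0.2.110", "192.0.2.111", "192.0.2.112"]
print(check_scan_addresses(scans, vips))  # True
```

A real deployment would resolve the SCAN via DNS and feed the returned addresses into such a check.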

1.1.2. GNS
In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can simplify this setup by using DHCP. To use GNS, DHCP must be configured in the subdomain in which the cluster resides.

1.1.3. OCR and Voting on ASM storage
The ability to use ASM diskgroups for the storage of the Clusterware OCR and Voting disks is a new feature in the Oracle Database 11g Release 2 Grid Infrastructure. If you choose this option and ASM is not yet configured, OUI launches the ASM Configuration Assistant to configure ASM and a diskgroup.

1.1.4. Intelligent Platform Management interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware and firmware that administrators can use to monitor system health and manage the system. With Oracle Database 11g Release 2, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You must have the following hardware and software configured to enable cluster nodes to be managed with IPMI:

• Each cluster member node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5, which supports IPMI over LANs, and configured for remote control.
• Each cluster member node requires an IPMI driver installed on each node.
• The cluster requires a management network for IPMI. This can be a shared network, but Oracle recommends that you configure a dedicated network.
• Each cluster node's Ethernet port used by the BMC must be connected to the IPMI management network.

If you intend to use IPMI, then you must provide an administration account username and password when prompted during installation.

1.1.5. Time sync
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2 time synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services Daemon) or by using the Windows Time Service. If the Windows Time Service is being used, it MUST be configured to prevent the time from being adjusted backwards.

1.1.6. Clusterware and ASM share the same Oracle Home
Oracle Clusterware and ASM now share the same Oracle home, which is therefore called the Grid Infrastructure home. (Prior to 11gR2, ASM could be installed either in a separate home or in the same Oracle home as the RDBMS.)

1.2. System Requirements
1.2.1. Hardware Requirements
• Physical memory: at least 1.5 gigabytes (GB) of RAM
• An amount of swap space equal to the amount of RAM
• Temporary space: at least 1 GB available in the temporary directory
• A processor type (CPU) that is certified with the version of the Oracle software being installed
• A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays correctly
• The same chip architecture on all servers that will be used in the cluster, for example, all 32-bit processors or all 64-bit processors
• Disk space for software installation locations. You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
• Shared disk space
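The numeric minimums above lend themselves to a quick pre-flight check. The following Python sketch is illustrative only; gathering the actual per-node values is left to the administrator:

```python
def check_hardware(ram_gb, swap_gb, tmp_gb, grid_home_gb, db_home_gb):
    """Compare reported resources against the stated 11gR2 minimums.
    Returns a list of failed checks (an empty list means all minimums met)."""
    minimums = {
        "RAM >= 1.5 GB": ram_gb >= 1.5,
        "swap >= RAM": swap_gb >= ram_gb,
        "temp space >= 1 GB": tmp_gb >= 1.0,
        "Grid home >= 4.5 GB free": grid_home_gb >= 4.5,
        "DB home >= 4 GB free": db_home_gb >= 4.0,
    }
    return [name for name, ok in minimums.items() if not ok]

print(check_hardware(ram_gb=4, swap_gb=4, tmp_gb=2,
                     grid_home_gb=10, db_home_gb=10))  # []
```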

An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all the Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount of shared disk space is determined by the size of your database.

1.2.2. Network Hardware Requirements
• Each node has at least two network interface cards (NICs), or network adapters.
• Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter 'PublicLAN', then you must configure 'PublicLAN' as the public interface on all nodes.
• You should configure the same private interface names for all nodes as well. If 'PrivateLAN' is the private interface name for the first node, then 'PrivateLAN' should be the private interface name for your second node.
• For the private network, the end points of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster should be able to connect to every private network interface in the cluster.

• The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters and hyphens. Host names using underscores ("_") are not allowed.
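As a hedged sketch of the naming rule, the regular expression below encodes a simplified RFC 952-style check (alphanumerics and hyphens, starting with a letter, not ending in a hyphen, underscores rejected). It is an illustration, not Oracle's actual validation logic:

```python
import re

# Simplified RFC 952-style host name pattern, per the rule described above.
_HOSTNAME_RE = re.compile(r"^[A-Za-z](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")

def valid_rac_hostname(name):
    """Return True if the name uses only alphanumerics and hyphens."""
    return bool(_HOSTNAME_RE.match(name))

print(valid_rac_hostname("racnode1"))    # True
print(valid_rac_hostname("rac_node1"))   # False (underscore not allowed)
```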

1.2.3. IP Address Requirements
• One public IP address for each node
• One virtual IP address for each node
• Three single client access name (SCAN) addresses for the cluster
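The address arithmetic for an n-node cluster can be summarized in a few lines (an illustrative helper, not part of the Oracle tooling):

```python
def required_ips(node_count):
    """Per the list above: one public and one virtual IP per node,
    plus three SCAN addresses for the cluster."""
    return {"public": node_count, "vip": node_count, "scan": 3,
            "total": 2 * node_count + 3}

print(required_ips(2))  # {'public': 2, 'vip': 2, 'scan': 3, 'total': 7}
```

So the 2-node cluster described in this guide needs 7 addresses in total, not counting the private interconnect addresses.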

1.2.4. Installation method
This document details the steps for installing a 2-node Oracle 11gR2 RAC cluster on Windows:
• The Oracle Grid Infrastructure home binaries are installed on the local disk of each of the RAC nodes.
• The files required by Oracle Clusterware (OCR and Voting disks) are stored in ASM.
• The installation is explained without GNS and IPMI (additional information for installation with GNS and IPMI is noted where relevant).

2. Prepare the cluster nodes for Oracle RAC

2.1. User Accounts
The installation should be performed as the Local Administrator; the Local Administrator username and password MUST be identical on all cluster nodes. If a domain account is used, this domain account must be explicitly defined as a member of the local Administrators group on all cluster nodes. For Windows 2008:

• Open Windows 2008 Server Manager.
• Expand the Configuration category in the console tree.
• Expand the Local Users and Groups category in the console tree.
• Within Groups, open the Administrators group.
• Add the desired user account as a member of the Administrators group.
• Click OK to save the changes.

We must now configure and test the installation user's ability to interact with the other cluster nodes.

2.1.1. User Account changes specifically for Windows 2008:
1. Change the elevation prompt behavior for administrators to "Elevate without prompting" to allow user equivalence to function properly in Windows 2008:
• Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
• From the Local Security Settings console tree, click Local Policies, and then Security Options.
• Scroll down to and double-click User Account Control: Behavior of the elevation prompt for administrators.
• From the drop-down menu, select: "Elevate without prompting (tasks requesting elevation will automatically run as elevated without prompting the administrator)".
• Click OK to confirm the changes.
• Repeat the previous 5 steps on ALL cluster nodes.

2. Ensure that the Administrators group is listed under "Manage auditing and security log":
• Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
• Click on "Local Policies".
• Click on "User Rights Assignment".
• Locate and double-click "Manage auditing and security log" in the listing of User Rights Assignments.
• If the Administrators group is NOT listed in the "Local Security Settings" tab, add the group now.
• Click OK to save the changes (if changes were made).
• Repeat the previous 6 steps on ALL cluster nodes.

3. Disable the Windows Firewall. When installing Oracle Grid Infrastructure and/or Oracle RAC it is required to turn off the Windows Firewall. Follow these steps to turn off the Windows Firewall:
• Click Start, click Run, type "firewall.cpl", and then click OK.
• In the Firewall Control Panel, click "Turn Windows Firewall on or off" (upper left hand corner of the window).
• Choose the "Off" radio button in the "Windows Firewall Settings" window and click OK to save the changes.
• Repeat the previous 3 steps on ALL cluster nodes.

After the installation is successful, you can enable the Windows Firewall for the public connections. However, to ensure correct operation of the Oracle software, you must add certain executables and ports to the Firewall exception list on all the nodes of a cluster. See Section 5.1, "Configure Exceptions for the Windows Firewall", of the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows for details: http://download.oracle.com/docs/cd/E11882_01/install.112/e10817/postinst.htm#CHDJGCEH

NOTE: The Windows Firewall must be disabled on all the nodes in the cluster before performing any cluster-wide configuration changes, such as:
• Adding a node
• Deleting a node
• Upgrading to a patch release
• Applying a one-off patch
If you do not disable the Windows Firewall before performing these actions, then the changes might not be propagated correctly to all the nodes of the cluster.

2.1.2. Net Use Test
The "net use" utility can be used to validate the ability to perform the software copy among the cluster nodes.
• Open a command prompt.
• Execute the following, replacing C$ with the appropriate drive letter if necessary. Repeat the command to ensure access to every node in the cluster from the local node, substituting the appropriate node names:

C:\Users\Administrator>net use \\remote node name\C$
The command completed successfully.

• Repeat the previous 2 steps on ALL cluster nodes.
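To run the net use test against many nodes, the commands can be generated mechanically. This Python sketch only builds the command strings (the node names are placeholders); actually executing them on Windows is left to the administrator:

```python
def net_use_commands(nodes, share="C$"):
    """Build the per-node 'net use' test commands described above.
    Node names are placeholders; substitute your actual host names."""
    return [f"net use \\\\{node}\\{share}" for node in nodes]

for cmd in net_use_commands(["racnode1", "racnode2"]):
    print(cmd)
# net use \\racnode1\C$
# net use \\racnode2\C$
```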

2.1.3. Remote Registry Connect
Validate the ability to connect to the remote nodes' registries as follows:
• Open a command prompt and type "regedit".
• Within the registry editor menu bar, choose File and select "Connect Network Registry".
• In the Select Computer window enter the remote node name, and then click OK.
• Click OK and wait for the remote registry to appear in the tree.
• Repeat the previous 4 steps for ALL cluster nodes.

2.2. Networking
NOTE: This section is intended to be used for installations NOT using GNS.

NOTE: It is a requirement that network interfaces used for the Public and Private Interconnect be consistently named (have the same name) on every node in the cluster. Common practice is to use the names Public and Private for the interfaces. For Windows 2008, perform the following to rename the network interfaces:
• Click Start, click Run, type "ncpa.cpl", and then click OK.
• Determine the intended purpose for each of the interfaces (you may need to view the IP configuration).
• Right-click the interface to be renamed and click "Rename".
• Enter the desired name for the interface.
• Repeat the previous 4 steps on ALL cluster nodes, ensuring that the public and private interfaces have the same name on every node.

It is recommended that redundant NICs are configured using teaming. Active/passive is the preferred teaming method due to its simple configuration.

1. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node; in other words, use the name displayed by the hostname command, for example: racnode1.

2. Determine your cluster name. The cluster name should satisfy the following conditions:
• The cluster name is globally unique throughout your host domain.
• The cluster name is at least 1 character long and less than 15 characters long.
• The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-). In other words, long names should be avoided and special characters are NOT to be used.

3. Determine the virtual host name for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual host name must meet the following requirements:
• The virtual IP address and the network name must not be currently in use.
• The virtual IP address must be on the same subnet as your public IP address.
• The virtual host name for each node should be registered with your DNS.

4. Determine the private hostname for each node in the cluster. A common naming convention for the private hostname is <public hostname>-priv. This private hostname does not need to be resolvable through DNS and should be entered in the hosts file (typically located in c:\windows\system32\drivers\etc).
• The private IP should NOT be accessible to servers not participating in the local cluster.
• The private network should be on standalone dedicated switch(es).
• The private network should NOT be part of a larger overall network topology.
• The private network should be deployed on Gigabit Ethernet or better.

5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN VIPs must be resolvable by DNS and must NOT be in the c:\windows\system32\drivers\etc\hosts file; placing the SCAN in the hosts file would result in only 1 SCAN VIP for the entire cluster. You can use any name for the SCAN, as long as it is unique within your network and conforms to the RFC 952 standard. The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com. The short SCAN for the cluster is docrac-scan. Using the previous example, the clients would use docrac-scan to connect to the cluster. After you have completed the installation process, configure clients to use the SCAN to access the cluster.

6. Configure the c:\windows\system32\drivers\etc\hosts file so that it is similar to the following example. Even if you are using a DNS, Oracle recommends that you list the public IP, VIP and private addresses for each node in the hosts file on each node.

NOTE: The SCAN VIP MUST NOT be in the hosts file.

#PublicLAN - PUBLIC
192.0.2.100    racnode1.example.com    racnode1
192.0.2.101    racnode2.example.com    racnode2
#VIP
192.0.2.102    racnode1-vip.example.com    racnode1-vip
192.0.2.103    racnode2-vip.example.com    racnode2-vip
#PrivateLAN - PRIVATE
172.0.2.100    racnode1-priv
172.0.2.101    racnode2-priv

2.2.1. Network Ping Tests
There are a series of 'ping' tests that should be completed, and then the network adapter binding order should be checked. You should ensure that the public IP addresses resolve correctly and that the private addresses are of the form 'nodename-priv' and resolve on both nodes via the hosts file.

Public ping test:
• Pinging Node1 from Node1 should return Node1's public IP address.
• Pinging Node2 from Node1 should return Node2's public IP address.

If any of the above tests fail, you should fix name/address resolution by updating the DNS or local hosts files on each node before continuing with the installation.
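The hosts-file layout shown in this section can be generated rather than hand-edited, which avoids typos across nodes. This sketch is illustrative; the node names and IP addresses are examples, and the SCAN is intentionally omitted because it must resolve via DNS only:

```python
def hosts_entries(nodes, domain="example.com"):
    """Generate hosts-file lines in the public/VIP/private layout used in
    this guide. nodes maps a host name to its public, vip and private IPs.
    The SCAN is deliberately NOT included (SCAN VIPs resolve via DNS)."""
    lines = ["#PublicLAN - PUBLIC"]
    for name, ips in nodes.items():
        lines.append(f"{ips['public']} {name}.{domain} {name}")
    lines.append("#VIP")
    for name, ips in nodes.items():
        lines.append(f"{ips['vip']} {name}-vip.{domain} {name}-vip")
    lines.append("#PrivateLAN - PRIVATE")
    for name, ips in nodes.items():
        lines.append(f"{ips['private']} {name}-priv")
    return "\n".join(lines)

nodes = {
    "racnode1": {"public": "192.0.2.100", "vip": "192.0.2.102", "private": "172.0.2.100"},
    "racnode2": {"public": "192.0.2.101", "vip": "192.0.2.103", "private": "172.0.2.101"},
}
print(hosts_entries(nodes))
```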

type "ncpa. and then click OK. make sure that the Key is called DisableDHCPMediaSense.cpl". • Click Configure. click TCP/IP Offload. click 'Alt' to enable that menu item). For Windows 2008 we can check the status of DHCP Media Sense with the command: netsh interface ipv4 show global 2. if the "Advanced" is not showing. 2. and then click OK. Stopping Services There can be issues with some (non-Oracle) services.2. Media Sense allows Windows to uncouple an IP address from a card when the link to the local switch is lost. and then click the Advanced tab. • In the menu bar on the top of the window click "Advanced" and choose "Advanced Settings" (For Windows 2008.2. and then click OK. • Under Binding order for increase the priority of IPv4 over IPv6 • Click OK to save the changes • Repeat the previous 5 steps on ALL cluster nodes 2. Disable DHCP Media Sense Media Sense should be disabled.3. These issues are described in detail in Microsoft KB article 948496 and 951037. which may already be running on the cluster nodes. It is recommended that this service is stopped and set to 'manual' start using services. • Under the Adapters and Bindings tab use the up arrow to move the Public interface to the top of the Connections list.2. • Right-click a network adapter object.msc on both nodes. Typically a Microsoft Service: Distributed Transaction Coordinator (MSDTC) can interact with Oracle software during install. click Run.3.2. type "ncpa. click Run. For Windows 2008: Perform the follow tasks to ensure this requirement is met: • Click Start. and then click OK.2. • Repeat steps 2 through 5 for each network adapter object.2. Network Interface Binding Order (and Protocol Priorities) It is required that the Public interface be listed first in the network interface binding order on ALL cluster nodes. and then click Properties. You should disable this activity using the registry editor regedit. 
Navigate to the Key HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and right click to create a new key of type DWORD.cpl". is of type DWORD and has a value of 1. Network Interface Binding Order (and Protocol Priorities) 7 . • In the Property list. Perform the following tasks to take proactive action on these potential issues: • Click Start.4. The same can be accomplished on Windows 2008 by issuing the following commands: netsh int tcp set global chimney=disabled and netsh int tcp set global rss=disabled Validate these changes with the command: netsh interface ipv4 show global 2. Disable SNP Features On Windows 2003 SP2 and later platforms there are several network issues related to SNP features. click Disable in the Value list. click Disable in the Value list. click Receive Side Scaling.2. • In the Property list.

2.4. Synchronizing the Time on ALL Nodes
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2, time synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services Daemon) or by using the Windows Time Service. If the Windows Time Service is being used, it MUST be configured to prevent the time from being adjusted backwards. Perform the following steps to ensure the time is NOT adjusted backwards using the Windows Time Service:
• Open a command prompt and type "regedit".
• Within the registry editor locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config key.
• Set the value for MaxNegPhaseCorrection to 0 and exit the registry editor.
• Open a command prompt and execute the following to put the change into effect:
cmd> W32tm /config /update
• Repeat steps 1 through 4 for ALL cluster nodes.

2.5. Environment Variables
Set the TEMP and TMP environment variables to a common location that exists on ALL nodes in the cluster. During installation the Oracle Universal Installer (OUI) will utilize these directories to store temporary copies of the binaries. If the location is not the same for both variables on ALL cluster nodes, the installation will fail. Keep in mind, this path must be identical for both TMP and TEMP and they must be set to the same location on ALL cluster nodes. Most commonly these parameters are set as follows:
TMP=C:\temp
TEMP=C:\temp
For Windows 2008, to set the TEMP and TMP environment variables:
• Log into the server as the user that will perform the installation.
• Open Computer Properties.
• Click the Advanced system settings link (on the left under tasks).
• Under the Advanced tab, click the Environment Variables button.
• Modify the TEMP and TMP variables under "User variables for Administrator" to the desired setting.
• Click OK to save the changes.
• Repeat steps 1 through 6 for ALL cluster nodes.

2.6. Stage the Oracle Software
It is recommended that you stage the required software onto a local drive on Node 1 of your cluster. Important: ensure that you use only 32-bit versions of the Oracle software on a 32-bit OS and 64-bit versions of the Oracle software on a 64-bit OS.
For the Grid Infrastructure (Clusterware and ASM) software, download: Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Windows.
For the RDBMS software, download from OTN: Oracle Database 11g Release 2 (11.2.0.1.0) for Windows.
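The TEMP/TMP rule in section 2.5 is easy to get wrong across many nodes. The following is a sketch of an automated consistency check; the node-to-environment mapping would be collected by the administrator, and the node names here are hypothetical:

```python
def temp_paths_consistent(node_envs):
    """Check the rule above: TEMP and TMP must be set to the same
    path on each node, and that path must be identical on ALL nodes.
    node_envs maps a node name to {'TEMP': path, 'TMP': path}."""
    paths = set()
    for env in node_envs.values():
        if env.get("TEMP") != env.get("TMP"):
            return False            # TEMP and TMP differ on one node
        paths.add(env.get("TEMP"))
    return len(paths) == 1          # same location on every node

print(temp_paths_consistent({
    "racnode1": {"TEMP": r"C:\temp", "TMP": r"C:\temp"},
    "racnode2": {"TEMP": r"C:\temp", "TMP": r"C:\temp"},
}))  # True
```

A full check would also verify that the directory actually exists on each node.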

2.7. Cluster Verification Utility (CVU) stage check
Now you can run the CVU to check the state of the cluster prior to the install of the Oracle software. Check if there is a newer version of CVU available on OTN than the one that ships on the installation media: http://otn.oracle.com/rac

3. Prepare the shared storage for Oracle RAC
This section describes how to prepare the shared storage for Oracle RAC:
1. Shared Disk Layout
2. Enable Automounting of disks on Windows
3. Clean the Shared Disks
4. Create Logical partitions inside Extended partitions
5. Drive Letters
6. View Disks
7. Marking Disk Partitions for use by ASM
8. Verify Clusterware Installation Readiness

3.1. Shared Disk Layout
It is assumed that the two nodes have local disk primarily for the operating system and the local Oracle Homes, labelled C:. The Oracle Grid Infrastructure software also resides on the local disks on each node. The 2 nodes must also share some central disks. These disks must not have caching enabled at the node level; i.e., if the HBA drivers support caching of reads/writes it should be disabled. If the SAN supports caching that is visible to all nodes then this can be enabled.

3.1.1. Grid Infrastructure Shared Storage
With Oracle 11gR2 it is considered a best practice to store the OCR and Voting Disk within ASM and to maintain the ASM best practice of having no more than 2 diskgroups (Flash Recovery Area and Database Area). This means that the OCR and Voting disk will be stored along with the database related files. If you are utilizing external redundancy for your disk groups, this means you will have 1 Voting Disk and 1 OCR. For those who wish to utilize Oracle-supplied redundancy for the OCR and Voting disks, you could create a separate (3rd) ASM diskgroup having a minimum of 2 fail groups (a total of 3 disks). This configuration will provide 3 multiplexed copies of the Voting Disk and a single OCR which takes on the redundancy of that disk group (mirrored within ASM).

For demonstration purposes within this cookbook, we will be using the more complex of the above configurations by creating a 3rd diskgroup for storage of the OCR and Voting Disks. Our third disk group will be normal redundancy, allowing for 3 Voting Disks and a single OCR which takes on the redundancy of that diskgroup. This diskgroup will also be used to store the ASM SPFILE. The minimum size of each of the 3 disks that make up this diskgroup is 1GB.

Disk Number   Volume     Size(MB)   ASM Label Prefix   Diskgroup   Redundancy
Disk 1        Volume 2   1024       OCR_VOTE           OCR_VOTE    Normal
Disk 2        Volume 3   1024       OCR_VOTE           OCR_VOTE    Normal
Disk 3        Volume 4   1024       OCR_VOTE           OCR_VOTE    Normal
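As a rough capacity illustration for the diskgroup choices above (ignoring ASM metadata overhead, so this is an approximation only), the redundancy level determines how much of the raw space is usable:

```python
def usable_capacity_gb(disk_sizes_gb, redundancy):
    """Approximate usable capacity for an ASM diskgroup: external keeps
    everything, normal mirrors data twice, high mirrors three times.
    Metadata overhead is ignored."""
    divisor = {"external": 1, "normal": 2, "high": 3}[redundancy]
    return sum(disk_sizes_gb) / divisor

# The OCR_VOTE diskgroup described above: three 1 GB disks, normal redundancy.
print(usable_capacity_gb([1, 1, 1], "normal"))  # 1.5
```

This is ample for the OCR, the three Voting Disks and the ASM SPFILE.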

On each node log in as someone with Administrator privileges then Click START->RUN and type diskpart C:\>diskpart Microsoft DiskPart version 5. 3. WARNING this will destroy all of the data on the disk. Enable Automount You must enable automounting of disks for them to be visible to Oracle Grid Infrastructure.1. Cleaning will remove data from any previous failed install.3.1.msc instead of diskpart (as used in the following sections) to create these partitions.2.3790. This may take some time to complete. 4 4 1+0 1+0 100 100 DBDATA DBFLASH DBDATA DBFLASH External External Number of LUNs(Disks) RAID Level Size(GB) ASM Label Prefix Diskgroup Redundancy In this document we will use the diskpart command line tool to manage these LUNs.2.msc cannot be used instead of diskpart to create these partitions. Clean the Shared Disks You may want to clean your shared disks before starting the install. There must be no drive letters assigned to any of the Disks1 – Disk10 on any node. ASM Shared Storage --- 10 .3. For Microsoft Windows 2008. diskmgmt. But see a later Appendix for coping with failed installs. ASM Shared Storage It is recommended that ALL ASM disks within a disk group are of the same size and carry the same performance characteristics. Do not select the disk containing the operating system or you will have to reinstall the OS Cleaning the disk ‘scrubs’ every block on the disk. On computer: WINNODE1 DISKPART>AUTOMOUNT ENABLE Repeat the above command on all nodes in the cluster 3.2. DISKPART> list disk Disk ### Status Size Free Dyn Gpt -------.------. For MIcrosoft Windows 2003 it is possible to use diskmgmt.---------.2. external redundancy should be used for database storage on ASM. Whenever possible Oracle also recommends sticking to the SAME (Stripe And Mirror Everything) methodology by using RAID 1+0. On Node1 from within diskpart you should clean each of the disks. 
Note: you must create logical drives inside extended partitions for the disks to be used by Oracle Grid Infrastructure and Oracle ASM; these are created below, after the disks have been cleaned.

Now you should clean disks 1 – 10 (not Disk 0, as this is the local C: drive):

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> clean all

Repeat the select disk / clean all pair for each of the remaining disks, 3 – 10.

3.4. Create Logical Partitions inside Extended Partitions

Assuming the disks you are going to use are completely empty, you must create an extended partition and then, inside that partition, a logical partition. In the following example dedicated LUNs are used for each device.

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.

DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.

Repeat the select disk / create part ext / create part log sequence for each of the remaining disks, 3 – 10.
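The repetitive clean and partition steps can also be run non-interactively by placing the commands in a script file and passing it to diskpart with the /s switch. A minimal sketch follows; the file name is illustrative, and you should double-check the disk numbers against list disk before running it, since clean all destroys all data on the selected disk.

```
rem prepare_shared_disks.txt -- run as: diskpart /s prepare_shared_disks.txt
rem WARNING: "clean all" destroys all data on the selected disk.
rem Do NOT include disk 0 (the local C: drive).
select disk 1
clean all
create partition extended
create partition logical
select disk 2
clean all
create partition extended
create partition logical
rem ... repeat the four commands above for disks 3 through 10
```

Run the script once on Node1 only; the partitions become visible to the other nodes through the shared storage.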

3.4.1. View Created Partitions

DISKPART> list disk

  Disk ###  Status   Size     Free   Dyn  Gpt
  --------  -------  -------  -----  ---  ---
  Disk 0    Online     34 GB   0 B
  Disk 1    Online   1024 MB   0 MB
  Disk 2    Online   1024 MB   0 MB
  Disk 3    Online   1024 MB   0 MB
  Disk 4    Online   1024 MB   0 MB
  Disk 5    Online   1024 MB   0 MB
  Disk 6    Online    100 GB   0 MB
  Disk 7    Online    100 GB   0 MB
  Disk 8    Online    100 GB   0 MB
  Disk 9    Online    100 GB   0 MB
  Disk 10   Online    100 GB   0 MB

3.5. List Drive Letters

Diskpart should not add drive letters to the partitions on the local node, but the partitions on the other node may have drive letters assigned; you must remove them. On earlier versions of Windows 2003 a reboot of the 'other' node will be required for the new partitions to become visible; Windows 2003 SP2 and Windows 2008 do not suffer from this issue. Using diskpart on Node2:

DISKPART> list volume

  Volume ###  Ltr  Label  Fs    Type       Size     Status   Info
  ----------  ---  -----  ----  ---------  -------  -------  ------
  Volume 0    C           NTFS  Partition    16 GB  Healthy  System
  Volume 1    D           RAW   Partition  1023 MB  Healthy
  Volume 2    E           RAW   Partition  1023 MB  Healthy
  Volume 3    F           RAW   Partition  1023 MB  Healthy
  Volume 4    G           RAW   Partition  1023 MB  Healthy
  Volume 5    H           RAW   Partition  1023 MB  Healthy
  Volume 6    I           RAW   Partition   100 GB  Healthy
  Volume 7    J           RAW   Partition   100 GB  Healthy
  Volume 8    K           RAW   Partition   100 GB  Healthy
  Volume 9    L           RAW   Partition   100 GB  Healthy
  Volume 10   M           RAW   Partition   100 GB  Healthy

Notice that the volumes may be listed in a completely different order compared to the disk list.

3.5.1. Remove Drive Letters

You need to remove the drive letters D E F G H I J K L M, which relate to volumes 1 – 10. Do NOT remove drive letter C, which in this case is the local disk (volume 0 in this example).

DISKPART> select volume 1
Volume 1 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.

Repeat the select volume / remov pair for each of the remaining volumes, 3 – 10.
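As with cleaning and partitioning, the drive-letter removal can be scripted and run in one pass on the second node; the file name below is illustrative, and the volume numbers should be verified with list volume first.

```
rem remove_letters.txt -- run on Node2 as: diskpart /s remove_letters.txt
rem Volume 0 (the local C: drive) must NOT be touched.
select volume 1
remove
select volume 2
remove
rem ... repeat for volumes 3 through 10
```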

3.5.2. List Volumes on Second Node

You should check that none of the RAW partitions have drive letters assigned:

DISKPART> list vol

  Volume ###  Ltr  Label  Fs    Type       Size     Status   Info
  ----------  ---  -----  ----  ---------  -------  -------  ------
  Volume 0    C           NTFS  Partition    16 GB  Healthy  System
  Volume 1                RAW   Partition  1023 MB  Healthy
  Volume 2                RAW   Partition  1023 MB  Healthy
  Volume 3                RAW   Partition  1023 MB  Healthy
  Volume 4                RAW   Partition  1023 MB  Healthy
  Volume 5                RAW   Partition  1023 MB  Healthy
  Volume 6                RAW   Partition   100 GB  Healthy
  Volume 7                RAW   Partition   100 GB  Healthy
  Volume 8                RAW   Partition   100 GB  Healthy
  Volume 9                RAW   Partition   100 GB  Healthy
  Volume 10               RAW   Partition   100 GB  Healthy

You can now exit diskpart on all nodes.

3.6. Marking Disk Partitions for use by ASM

The only partitions that the Oracle Universal Installer acknowledges on Windows systems are logical drives that are created on top of extended partitions and that have been stamped as candidate ASM disks. Therefore, prior to running the OUI, the disks that are to be used by Oracle RAC MUST be stamped using ASM Tool. ASM Tool is available in two flavors: command line (asmtool) and graphical (asmtoolg). Both utilities can be found under the asmtool directory within the Grid Infrastructure installation media. For this installation, asmtoolg will be used to stamp the ASM disks. The following table summarises the disks that will be stamped for ASM usage:

Diskgroup   Number of LUNs (Disks)   RAID Level   Size (GB)   ASM Label Prefix   Redundancy
OCR_VOTE    3                        1+0          1           OCR_VOTE           Normal
DBDATA      4                        1+0          100         DBDATA             External
DBFLASH     4                        1+0          100         DBFLASH            External

Perform this task as follows:

• Within Windows Explorer navigate to the asmtool directory within the Grid Infrastructure installation media and double click the "asmtoolg.exe" executable.
• Within the ASM Tool GUI, select "Add or Change Label" and click "Next".

• On the Select Disks screen, choose the appropriate disks to be assigned a label and enter an ASM Label Prefix to make the disks easily identifiable for their intended purpose. After choosing the intended disks and entering the appropriate ASM Label Prefix, click "Next" to continue.
• Review the summary screen and click "Next".
• On the final screen, click "Finish" to update the ASM disk labels.
• Repeat these steps for all ASM disks that differ in their label prefix, size and/or performance characteristics.
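The command line flavor, asmtool, offers the same stamping functionality and is convenient when many partitions must be labelled. The sketch below is illustrative: the device paths, and the exact labels asmtool generates, should be verified against asmtool's own usage output and a diskpart listing before stamping anything.

```
rem List the partitions (and any existing stamps) that asmtool can see
asmtool -list

rem Stamp a set of logical partitions with a common label prefix;
rem asmtool numbers each stamped device under the given prefix.
asmtool -addprefix ORCLDISKOCRVOTE \Device\Harddisk1\Partition1 \Device\Harddisk2\Partition1 \Device\Harddisk3\Partition1
```

Running asmtool -list again afterwards is a quick way to confirm that the stamps were written as intended.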

3.7. Verify Grid Infrastructure Installation Readiness

Prior to installing Grid Infrastructure it is highly recommended to run the Cluster Verification Utility (CLUVFY) to verify that the cluster nodes have been properly configured for a successful Oracle Grid Infrastructure installation. There are various levels at which CLUVFY can be run; at this stage it should be run in the CRS pre-installation mode. Later in this document CLUVFY will be run in pre dbinst mode to validate the readiness for the RDBMS software installation.

Though CLUVFY is packaged with the Grid Infrastructure installation media, it is recommended to download and run the latest version of CLUVFY, which can be downloaded from: http://otn.oracle.com/rac

Once the latest version of CLUVFY has been downloaded and installed, execute it as follows to perform the Grid Infrastructure pre-installation verification:

• Login to the server on which the installation will be performed as the Local Administrator.
• Open a command prompt and run CLUVFY as follows to perform the Oracle Clusterware pre-installation verification:

cmd> runcluvfy stage -post hwos -n <node_list> [-verbose]
cmd> runcluvfy stage -pre crsinst -n <node_list> [-verbose]

If any errors are encountered, these issues should be investigated and resolved before proceeding with the installation.

4. Oracle Grid Infrastructure Install

4.1. Basic Grid Infrastructure Install (without GNS and IPMI)

• Shutdown all Oracle processes running on all nodes (not necessary if performing the install on new servers).
• Start the Oracle Universal Installer (OUI) by running setup.exe as the Local Administrator user from the Clusterware (db directory if using the DVD media, see step 3) directory on the 11g Release 2 (11.2.0.1) installation media.
• If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the Select a Product to Install screen; choose Oracle Clusterware 11g and click "Next" to continue. If the OTN media is used, the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be the Welcome screen.
• On the Select Installation Option screen, choose "Install and Configure Grid Infrastructure for a Cluster" and click "Next" to continue.
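For a two-node cluster using the node names that appear later in this document (ratwin01 and ratwin02 — illustrative here), the two verification runs would look like this:

```
cmd> runcluvfy stage -post hwos -n ratwin01,ratwin02 -verbose
cmd> runcluvfy stage -pre crsinst -n ratwin01,ratwin02 -verbose
```

The -verbose flag makes CLUVFY print the per-node result of every individual check rather than just an overall pass/fail summary.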

• On the Select Installation Type screen, choose "Advanced Installation" and click "Next" to continue.
• Choose the appropriate languages on the Select Product Languages screen and click "Next" to continue.

• On the Grid Plug and Play Information screen perform the following:
♦ Enter the desired Cluster Name; this name should be unique for the enterprise and CANNOT be changed post installation.
♦ Enter the SCAN Name for the cluster. The SCAN Name must be a DNS entry resolving to 3 IP addresses, and MUST NOT be in the hosts file.
♦ Enter the port number for the SCAN Listener; this port defaults to 1521.
♦ Uncheck the "Configure GNS" checkbox.
♦ Click "Next" to continue.
• Add all of the cluster node hostnames and Virtual IP hostnames on the Cluster Node Information screen. By default the OUI only knows about the local node; additional nodes can be added using the "Add" button.
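Before continuing it is worth confirming that the SCAN name really does resolve to three addresses through DNS and not through the hosts file. The transcript below is a sketch only — the SCAN name, DNS server, and addresses are all illustrative:

```
cmd> nslookup ratwin-scan.example.com
Server:  dns01.example.com
Address:  192.168.1.10

Name:    ratwin-scan.example.com
Addresses:  192.168.1.51, 192.168.1.52, 192.168.1.53
```

If nslookup returns fewer than three addresses, or the name resolves via the hosts file, fix the DNS entry before starting the installer.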

• On the Specify Network Interface Usage screen, make sure that the public and private interfaces are properly specified. Make the appropriate corrections on this screen and click "Next" to continue.

NOTE: The public and private interface names MUST be consistent across the cluster, so if "Public" is public and "Private" is private on one node, the same MUST be true on all of the other nodes in the cluster.

• Choose "Automatic Storage Management (ASM)" on the Storage Option Information screen and click "Next" to create the ASM diskgroup for Grid Infrastructure.

• On the Create ASM Disk Group screen perform the following:
♦ Enter the Disk Group Name that will store the OCR and Voting Disks (e.g. OCR_VOTE).
♦ Choose "Normal" for the Redundancy level.
♦ Select the disks that were previously designated for the OCR and Voting Disks.

NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the Storage Prerequisites section of this document and click the "Stamp Disks" button on this screen.

• Click "Next" to continue.

• Enter the appropriate passwords for the SYS and ASMSNMP users of ASM on the Specify ASM Passwords screen and click "Next" to continue.
• On the Failure Isolation Support screen choose "Do not use Intelligent Platform Management Interface (IPMI)" and click "Next" to continue.
• Specify the location for the Oracle Base (e.g. d:\app\oracle) and the Grid Infrastructure installation (e.g. d:\OraGrid) on the Specify Installation Location screen. Click "Next" to allow the OUI to perform the prerequisite checks on the target nodes.

NOTE: Continuing with failed prerequisites may result in a failed installation; therefore it is recommended that failed prerequisite checks be resolved before continuing.

Basic Grid Infrastructure Install (without GNS and IPMI) 24 .• After the prerequisite checks have successfully completed. review the summary of the pending installation and click â–Finishâ– to install and configure Grid Infrastructure. 4.1.

• On the Finish screen click "Close" to exit the OUI.
• Once the installation has completed, check the status of the CRS resources as follows:

cmd> %GI_HOME%\bin\crsctl stat res -t

NOTE: All resources should report as online with the exception of GSD and OC4J. GSD will only be online if Grid Infrastructure is managing a 9i database, and OC4J is reserved for use in a future release. Though these resources are offline it is NOT supported to remove them.

The output of the above command will be similar to the following:

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------

Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.asm
               ONLINE  ONLINE       ratwin01                 Started
               ONLINE  ONLINE       ratwin02                 Started
ora.eons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.gsd
               OFFLINE OFFLINE      ratwin01
               OFFLINE OFFLINE      ratwin02
ora.net1.network
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.ons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ratwin02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.oc4j
      1        OFFLINE OFFLINE
ora.ratwin01.vip
      1        ONLINE  ONLINE       ratwin01
ora.ratwin02.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan1.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan2.vip
      1        ONLINE  ONLINE       ratwin01
ora.scan3.vip
      1        ONLINE  ONLINE       ratwin01
--------------------------------------------------------------------------------
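In addition to crsctl stat res -t, the overall health of the clusterware stack on every node can be checked in one command. Output similar to the following (node names as used elsewhere in this document) indicates a healthy stack:

```
cmd> %GI_HOME%\bin\crsctl check cluster -all
**************************************************************
ratwin01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
ratwin02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
```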

5. Grid Infrastructure Home Patching

This chapter is a placeholder.

6. RDBMS Software Install

Prior to installing the Database Software (RDBMS) it is highly recommended to run the Cluster Verification Utility (CLUVFY) to verify that Grid Infrastructure has been properly installed and that the cluster nodes have been properly configured to support the Database Software installation. To achieve this, CLUVFY must be run in pre dbinst mode:

cmd> runcluvfy stage -pre dbinst -n <node_list> -verbose

• Start the OUI by running the setup.exe command as the Local Administrator user from the DB directory of the Oracle Database 11g Release 2 (11.2.0.1) installation media to begin the installation process.
• If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the Select a Product to Install screen; choose Oracle Database 11g and click "Next" to continue. If the OTN media is used, the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be the Installation Type screen.
• The first OUI screen will prompt for an email address, to allow you to be notified of security issues and to enable Oracle Configuration Manager. If you do NOT wish to be notified or to use Oracle Configuration Manager, leave the email box blank and uncheck the check box. After entering the desired information click "Next" to continue.
• Choose "Install database software only" on the Select Installation Option screen and click "Next" to continue.

• On the Grid Installation Options screen, choose "Real Application Clusters database installation" and select ALL nodes for the software installation. Click "Next" to continue.
• Choose the appropriate languages on the Select Product Languages screen and click "Next" to continue.

• On the Select Database Edition screen choose "Enterprise Edition" and click "Next" to continue.

NOTE: If there is a need for specific database options to be installed or not installed, these options can be chosen by clicking the "Select Options" button.

• Specify the location for the Oracle Base (e.g. d:\app\oracle) and the database software installation (e.g. d:\app\oracle\product\11.2.0\db_1) on the Specify Installation Location screen. Click "Next" to allow the OUI to perform the prerequisite checks on the target nodes.

NOTE: It is recommended that the same Oracle Base location is used for the database installation as was used for the Grid Infrastructure installation.

RDBMS Software Install 30 . therefore it is recommended that failed prerequisite checks be resolved before continuing. 6.NOTE Continuing with failed prerequisites may result in a failed installation. review the summary of the pending installation and click â–Finishâ– to install the database software. • After the prerequisite checks have successfully completed.

• On the Finish screen click "Close" to exit the OUI.

7. RAC Home Patching

This chapter is a placeholder.

8. Run ASMCA to create diskgroups

Prior to creating a database on the cluster, the ASM diskgroups that will house the database must be created. In an earlier chapter the ASM disks for the database diskgroups were stamped for ASM usage. We will now use the ASM Configuration Assistant to create the diskgroups:

• Run the ASM Configuration Assistant (ASMCA) from the Grid Infrastructure home by executing the following:

cmd> %GI_HOME%\bin\asmca

• After launching ASMCA, click the DiskGroups tab.

NOTE: To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally no more than two diskgroups be maintained for database storage: a Database Area diskgroup and a Flash Recovery Area diskgroup. The Database Area diskgroup will house active database files such as datafiles, control files, online redo logs, and change tracking files (used in incremental backups). The Flash Recovery Area diskgroup is where recovery-related files are created, such as multiplexed copies of the current control file and online redo logs, archived redo logs, backup sets, and flashback log files.

• While on the DiskGroups tab, click the "Create" button to display the Create DiskGroup window.
• Within the Create DiskGroup window perform the following:
♦ Enter the desired DiskGroup Name (e.g. DBDATA or DBFLASH).
♦ Choose External Redundancy (assuming redundancy is provided at the SAN level).
♦ Select the candidate disks to include in the DiskGroup.

NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the 'Rac11gR2WindowsPrepareDisk' chapter of this document and click the "Stamp Disks" button on this screen.
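If you prefer the command line, the same diskgroups can be created from SQL*Plus connected to the ASM instance. This is a sketch under the assumption that the disks were stamped with the DBDATA label prefix; the exact device labels should be confirmed (for example with asmtool -list) before running it:

```sql
-- Connect to the ASM instance with SYSASM privileges, e.g.:
--   sqlplus / as sysasm
-- On Windows, stamped ASM disks are addressed as \\.\ORCLDISK<LABEL>
CREATE DISKGROUP DBDATA EXTERNAL REDUNDANCY
  DISK '\\.\ORCLDISKDBDATA0',
       '\\.\ORCLDISKDBDATA1',
       '\\.\ORCLDISKDBDATA2',
       '\\.\ORCLDISKDBDATA3';
```

External redundancy is chosen here for the same reason as in the GUI steps above: mirroring is assumed to be provided at the SAN level.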

• Repeat the previous two steps to create all necessary diskgroups. To follow this recommendation add an OCR mirror.• ♦ Click â–OKâ– to create the DiskGroup? . Action: 8. Run ASMCA to create diskgroups 33 . Mind that you can only have one OCR in a diskgroup. Note: It is Oracle's Best Practise to have an OCR mirror stored in a second disk group. • Once all the necessary DiskGroups? have been created click â–Exitâ– to exit from ASMCA.

1. Ensure that the Oracle Clusterware stack is running and open a command prompt as administrator.
2. Add the OCR mirror to an Oracle ASM disk group:

D:\OraGrid\BIN>ocrconfig -add +DBDATA

3. Verify the change:

D:\OraGrid\BIN>ocrcheck -config

9. Run DBCA to create the database

To help verify that the system is prepared to successfully create a RAC database, use the following Cluster Verification Utility command syntax:

cmd> runcluvfy stage -pre dbcfg -n all -d c:\app\11.2.0\grid -verbose

Perform the following to create an 11gR2 RAC database on the cluster:

• Run the Database Configuration Assistant (DBCA) from the database home by executing the following:

cmd> %ORACLE_HOME%\bin\dbca

• On the Welcome screen, select "Oracle Real Application Clusters" and click "Next".
• On the Operations screen, select "Create Database" and click "Next".

• The Database Templates screen will now be displayed. Select the "General Purpose or Transaction Processing" database template and click "Next" to continue.
• On the Database Identification screen perform the following:
♦ Select the "Admin Managed" configuration type.
♦ Enter the desired Global Database Name.
♦ Enter the desired SID prefix.
♦ Select ALL the nodes in the cluster.
♦ Click "Next" to continue.

• On the Management Options screen, choose "Configure Enterprise Manager". Once Grid Control has been installed on the system this option may be unselected to allow the database to be managed by Grid Control. Click "Next" to continue.
• Enter the appropriate database credentials for the default user accounts and click "Next" to continue.

• On the Database File Locations screen perform the following:
♦ Choose Automatic Storage Management (ASM).
♦ Select "Use Oracle-Managed Files" and specify the ASM Disk Group that will house the database files (e.g. +DBDATA).
♦ Click "Next" and enter the ASMSNMP password when prompted.
♦ Click "OK" after entering the ASMSNMP password to continue.

• For the Recovery Configuration, select "Specify Flash Recovery Area", enter +DBFLASH for the location and choose an appropriate size. It is also recommended to select "Enable Archiving" at this point. Click "Next" to continue.
• On the Database Content screen you are able to create the sample schemas. Check the check box if the sample schemas are to be loaded into the database and click "Next" to continue.
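Once the database exists, you can confirm that archiving really was enabled by querying v$database from SQL*Plus:

```sql
-- LOG_MODE should report ARCHIVELOG when "Enable Archiving" was selected
SELECT log_mode FROM v$database;
```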

9. review the file layout and click â–Nextâ– to continue. • On the Database Storage screen. NOTE: At this point you may want to increase the number of redo logs per thread to 3 and increase the size of the logs from the default of 50MB. Run DBCA to create the database 39 .• Enter the desired memory configuration and initialization parameters on the Initialization Parameters screen and click â–Nextâ– to continue.

Run DBCA to create the database 40 .• On the Last screen. • Click â–OKâ– on the Summary window to create the database. ensure that â–Create Databaseâ– is checked and click â–Finishâ– to review the summary of the pending database creation. 9.

9. Run DBCA to create the database 41 .