Table of Contents

Rac11gR2OnWindows
1. Introduction
   1.1. Overview of new concepts in 11gR2 Grid Infrastructure
      1.1.1. SCAN
      1.1.2. GNS
      1.1.3. OCR and Voting on ASM storage
      1.1.4. Intelligent Platform Management Interface (IPMI)
      1.1.5. Time sync
      1.1.6. Clusterware and ASM share the same Oracle Home
   1.2. System Requirements
      1.2.1. Hardware Requirements
      1.2.2. Network Hardware Requirements
      1.2.3. IP Address Requirements
      1.2.4. Installation method
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
      2.1.1. User Account changes specifically for Windows 2008
      2.1.2. Net Use Test
      2.1.3. Remote Registry Connect
   2.2. Networking
      2.2.1. Network Ping Tests
      2.2.2. Network Interface Binding Order (and Protocol Priorities)
      2.2.3. Disable DHCP Media Sense
      2.2.4. Disable SNP Features
   2.3. Stopping Services
   2.4. Synchronizing the Time on ALL Nodes
   2.5. Environment Variables
   2.6. Stage the Oracle Software
   2.7. Cluster Verification Utility (CVU) stage check
3. Prepare the shared storage for Oracle RAC
   3.1. Shared Disk Layout
      3.1.1. Grid Infrastructure Shared Storage
      3.1.2. ASM Shared Storage
   3.2. Enable Automount
   3.3. Clean the Shared Disks
   3.4. Create Logical partitions inside Extended partitions
      3.4.1. View Created partitions
   3.5. List Drive Letters
      3.5.1. Remove Drive Letters
      3.5.2. List volumes on Second node
   3.6. Marking Disk Partitions for use by ASM
   3.7. Verify Grid Infrastructure Installation Readiness
4. Oracle Grid Infrastructure Install
   4.1. Basic Grid Infrastructure Install (without GNS and IPMI)
5. Grid Infrastructure Home Patching
6. RDBMS Software Install
7. RAC Home Patching
8. Run ASMCA to create diskgroups
9. Run DBCA to create the database


Rac11gR2OnWindows

1. Introduction
1.1. Overview of new concepts in 11gR2 Grid Infrastructure
1.1.1. SCAN
The single client access name (SCAN) is the address used by all clients connecting to the cluster. The SCAN is a domain name registered to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS). The SCAN eliminates the need to change client configuration when nodes are added to or removed from the cluster. Clients using the SCAN can also access the cluster using EZCONNECT.
• The Single Client Access Name (SCAN) is a domain name that resolves to all the addresses allocated for the SCAN name. Allocate three addresses to the SCAN name. During Oracle Grid Infrastructure installation, listeners are created for each of the SCAN addresses, and Oracle Grid Infrastructure controls which server responds to a SCAN address request. Provide three IP addresses in the DNS to use for SCAN name mapping; this ensures high availability.
• The SCAN addresses need to be on the same subnet as the VIP addresses for nodes in the cluster.
• The SCAN domain name must be unique within your corporate network.
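As a sketch of an EZCONNECT connection through the SCAN: the SCAN name below is the example used later in this guide, while the port and service name (1521, orcl) are illustrative assumptions, not values taken from this document:

```bat
rem Hypothetical EZCONNECT string: //<scan-name>:<listener-port>/<service-name>
C:\> sqlplus system@//docrac-scan.example.com:1521/orcl
```

Because the SCAN resolves to three addresses, the client library can try each SCAN listener in turn without the client needing to know the individual node names.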

1.1.2. GNS
In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can simplify this setup by using DHCP. To use GNS, DHCP must be configured in the subdomain in which the cluster resides.

1.1.3. OCR and Voting on ASM storage
The ability to use ASM diskgroups for the storage of Clusterware OCR and Voting disks is a new feature in the Oracle Database 11g Release 2 Grid Infrastructure. If you choose this option and ASM is not yet configured, OUI launches ASM configuration assistant to configure ASM and a diskgroup.

1.1.4. Intelligent Platform Management interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware and firmware that administrators can use to monitor system health and manage the system. With Oracle Database 11g Release 2, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You must have the following hardware and software configured to enable cluster nodes to be managed with IPMI:
• Each cluster member node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5, which supports IPMI over LANs, and configured for remote control.
• Each cluster member node requires an IPMI driver installed.
• The cluster requires a management network for IPMI. This can be a shared network, but Oracle recommends that you configure a dedicated network.
• Each cluster node's Ethernet port used by the BMC must be connected to the IPMI management network.

If you intend to use IPMI, then you must provide an administration account username and password when prompted during installation.

1.1.5. Time sync
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2, time synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services Daemon) or by the Windows Time Service. If the Windows Time Service is being used, it MUST be configured to prevent the time from being adjusted backwards.
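Section 2.4 below walks through this configuration with regedit; as a sketch, the same change can be made from an elevated command prompt (assuming the Windows Time Service is in use):

```bat
rem Prevent the Windows Time Service from stepping the clock backwards
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0 /f

rem Put the configuration change into effect
w32tm /config /update
```

Run this on every node, since all cluster members must keep time moving forward.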

1.1.6. Clusterware and ASM share the same Oracle Home
The Clusterware and ASM share the same Oracle home, which is therefore called the Grid Infrastructure home. (Prior to 11gR2, ASM could be installed either in a separate home or in the same Oracle home as the RDBMS.)

1.2. System Requirements
1.2.1. Hardware Requirements
• Physical memory (at least 1.5 gigabytes (GB) of RAM)
• An amount of swap space equal to the amount of RAM
• Temporary space (at least 1 GB) available in the TEMP directory
• A processor type (CPU) that is certified with the version of the Oracle software being installed
• A minimum of 1024 x 768 display resolution, so that Oracle Universal Installer (OUI) displays correctly
• All servers that will be used in the cluster must have the same chip architecture, for example, all 32-bit processors or all 64-bit processors
• Disk space for software installation locations. You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
• Shared disk space

An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all the Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount of shared disk space is determined by the size of your database.

1.2.2. Network Hardware Requirements
• Each node has at least two network interface cards (NICs), or network adapters.
• Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter 'PublicLAN', then you must configure 'PublicLAN' as the public interface on all nodes.
• You should configure the same private interface names for all nodes as well. If 'PrivateLAN' is the private interface name for the first node, then 'PrivateLAN' should be the private interface name for your second node.
• For the private network, the end points of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster should be able to connect to every private network interface in the cluster.

• The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters and hyphens. Host names using underscores ("_") are not allowed.

1.2.3. IP Address Requirements
• One public IP address for each node
• One virtual IP address for each node
• Three single client access name (SCAN) addresses for the cluster

1.2.4. Installation method
This document details the steps for installing a 2-node Oracle 11gR2 RAC cluster on Windows:
• The Oracle Grid Infrastructure Home binaries are installed on the local disk of each of the RAC nodes.
• The files required by Oracle Clusterware (OCR and Voting disks) are stored in ASM.
• The installation is explained without GNS and IPMI (additional information for installing with GNS and IPMI is noted where relevant).

2. Prepare the cluster nodes for Oracle RAC

2.1. User Accounts
The installation should be performed as the Local Administrator; the Local Administrator username and password MUST be identical on all cluster nodes. If a domain account is used, this domain account must be explicitly defined as a member of the local Administrators group on all cluster nodes. For Windows 2008:
• Open Windows 2008 Server Manager
• Expand the Configuration category in the console tree
• Expand the Local Users and Groups category in the console tree
• Within Groups, open the Administrators group
• Add the desired user account as a member of the Administrators group
• Click OK to save the changes.
We must now configure and test the installation user's ability to interact with the other cluster nodes.
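The same group membership can also be granted from an elevated command prompt; `MYDOMAIN\oracle` below is a hypothetical account, not one from this guide:

```bat
rem Add the installation account to the local Administrators group
net localgroup Administrators MYDOMAIN\oracle /add

rem Verify the resulting membership
net localgroup Administrators
```

Repeat on every cluster node so the account has identical privileges everywhere.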

2.1.1. User Account changes specifically for Windows 2008:
1. Change the elevation prompt behavior for administrators to "Elevate without prompting" to allow user equivalence to function properly in Windows 2008:
• Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
• From the Local Security Settings console tree, click Local Policies, and then Security Options.
• Scroll down to and double-click User Account Control: Behavior of the elevation prompt for administrators.
• From the drop-down menu, select: "Elevate without prompting (tasks requesting elevation will automatically run as elevated without prompting the administrator)"
• Click OK to confirm the changes.
• Repeat the previous 5 steps on ALL cluster nodes.

2. Ensure that the Administrators group is listed under "Manage auditing and security log":
• Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
• Click on "Local Policies"
• Click on "User Rights Assignment"
• Locate and double click the "Manage auditing and security log" entry in the listing of User Rights Assignments.
• If the Administrators group is NOT listed in the "Local Security Settings" tab, add the group now.
• Click OK to save the changes (if changes were made).
• Repeat the previous 6 steps on ALL cluster nodes.

3. Disable Windows Firewall. When installing Oracle Grid Infrastructure and/or Oracle RAC it is required to turn off the Windows Firewall.
NOTE: The Windows Firewall must be disabled on all the nodes in the cluster before performing any cluster-wide configuration changes, such as:
• Adding a node
• Deleting a node
• Upgrading to a patch release
• Applying a one-off patch
If you do not disable the Windows Firewall before performing these actions, then the changes might not be propagated correctly to all the nodes of the cluster. After the installation is successful, you can enable the Windows Firewall for the public connections. However, to ensure correct operation of the Oracle software, you must add certain executables and ports to the Firewall exception list on all the nodes of the cluster. See Section 5, "Configure Exceptions for the Windows Firewall", of Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows for details: http://download.oracle.com/docs/cd/E11882_01/install.112/e10817/postinst.htm#CHDJGCEH
Follow these steps to turn off the Windows Firewall:
• Click Start, click Run, type "firewall.cpl", and then click OK.
• In the Firewall Control Panel, click "Turn Windows Firewall on or off" (upper left hand corner of the window).
• Choose the "Off" radio button in the "Windows Firewall Settings" window and click OK to save the changes.
• Repeat the previous 3 steps on ALL cluster nodes.

2.1.2. Net Use Test
The "net use" utility can be used to validate the ability to perform the software copy among the cluster nodes:
• Open a command prompt.
• Execute the following (replacing C$ with the appropriate drive letter if necessary); repeat the command to ensure access to every node in the cluster from the local node, replacing <remote node name> with the appropriate nodes in the cluster.

C:\Users\Administrator>net use \\<remote node name>\C$
The command completed successfully.

• Repeat the previous 2 steps on ALL cluster nodes.

2.1.3. Remote Registry Connect
Validate the ability to connect to the remote nodes' registry as follows:
• Open a command prompt and type "regedit".
• Within the registry editor menu bar, choose File and select "Connect Network Registry".
• In the Select Computer window enter the remote node name, and then click OK.
• Click OK and wait for the remote registry to appear in the tree.
• Repeat the previous 4 steps for ALL cluster nodes.

2.2. Networking
NOTE: This section is intended to be used for installations NOT using GNS.
NOTE: It is a requirement that the network interfaces used for the Public and Private Interconnect be consistently named (have the same name) on every node in the cluster. Common practice is to use the names Public and Private for the interfaces. For Windows 2008, perform the following to rename the network interfaces:
• Click Start, click Run, type "ncpa.cpl", and then click OK.
• Determine the intended purpose for each of the interfaces (you may need to view the IP configuration).
• Right click the interface to be renamed and click "rename".
• Enter the desired name for the interface.
• Repeat the previous 4 steps on ALL cluster nodes, ensuring that the public and private interfaces have the same name on every node.

1. Determine the public host name for each node in the cluster. For the public host name, use the name displayed by the hostname command, for example: racnode1.
• It is recommended that NIC teaming is configured. Active/passive is the preferred teaming method due to its simplistic configuration.

2. Determine your cluster name. The cluster name should satisfy the following conditions:
• The cluster name is globally unique throughout your host domain.
• The cluster name is at least 1 character long and less than 15 characters long.
• The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-). In other words, long names should be avoided and special characters are NOT to be used.

3. Determine the virtual hostname for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual hostname must meet the following requirements:
• The virtual IP address and the network name must not be currently in use.
• The virtual IP address must be on the same subnet as your public IP address.
• The virtual host name for each node should be registered with your DNS.

4. Determine the private hostname for each node in the cluster. A common naming convention for the private hostname is <public hostname>-priv. This private hostname does not need to be resolvable through DNS and should be entered in the hosts file (typically located in c:\windows\system32\drivers\etc). The private network should meet the following requirements:
• The private IP should NOT be accessible to servers not participating in the local cluster.
• The private network should be on standalone dedicated switch(es).
• The private network should NOT be part of a larger overall network topology.
• The private network should be deployed on Gigabit Ethernet or better.
• It is recommended that redundant NICs are configured using teaming. Active/passive is the preferred teaming method due to its simplistic configuration.

5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN VIPs must be resolvable by DNS and must NOT be in the c:\windows\system32\drivers\etc\hosts file. You can use any name for the SCAN, as long as it is unique within your network and conforms to the RFC 952 standard. The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com; the short SCAN for the cluster is docrac-scan. Using this example, after you have completed the installation process, you would configure clients to use docrac-scan to access the cluster.

6. Configure the c:\windows\system32\drivers\etc\hosts file so that it is similar to the following example. Even if you are using a DNS, Oracle recommends that you list the public IP, VIP and private addresses for each node in the hosts file on each node.
NOTE: The SCAN VIPs MUST NOT be in the hosts file; putting them there will result in only 1 SCAN VIP for the entire cluster.

#PublicLAN - PUBLIC
192.0.2.100    racnode1.example.com      racnode1
192.0.2.101    racnode2.example.com      racnode2
#VIP
192.0.2.102    racnode1-vip.example.com  racnode1-vip
192.0.2.103    racnode2-vip.example.com  racnode2-vip
#PrivateLAN - PRIVATE
172.0.2.100    racnode1-priv
172.0.2.101    racnode2-priv

2.2.1. Network Ping Tests
There are a series of 'ping' tests that should be completed, and then the network adapter binding order should be checked. You should ensure that the public IP addresses resolve correctly and that the private addresses are of the form 'nodename-priv' and resolve on both nodes via the hosts file.
* Public Ping test: Pinging Node1 from Node1 should return Node1's public IP address; pinging Node2 from Node1 should return Node2's public IP address (repeat the tests from Node2).
If any of the above tests fail you should fix name/address resolution by updating the DNS or local hosts files on each node before continuing with the installation.
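The ping tests above can be scripted from each node; the node names below are this guide's example hosts, so substitute your own (the VIPs are deliberately excluded, since they must not be in use before installation):

```bat
@echo off
rem Ping each public and private name once and report the result
for %%N in (racnode1 racnode2 racnode1-priv racnode2-priv) do (
    ping -n 1 %%N >nul && echo %%N OK || echo %%N FAILED
)
```

Check the resolved address printed by a plain `ping %%N` as well, to confirm each name maps to the IP you expect.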

2.2.2. Network Interface Binding Order (and Protocol Priorities)
It is required that the Public interface be listed first in the network interface binding order on ALL cluster nodes. For Windows 2008, perform the following tasks to ensure this requirement is met:
• Click Start, click Run, type "ncpa.cpl", and then click OK.
• In the menu bar at the top of the window click "Advanced" and choose "Advanced Settings" (for Windows 2008, if the "Advanced" menu is not showing, click 'Alt' to enable that menu item).
• Under the Adapters and Bindings tab use the up arrow to move the Public interface to the top of the Connections list.
• Under Binding order, increase the priority of IPv4 over IPv6.
• Click OK to save the changes.
• Repeat the previous 5 steps on ALL cluster nodes.

2.2.3. Disable DHCP Media Sense
Media Sense should be disabled. Media Sense allows Windows to uncouple an IP address from a card when the link to the local switch is lost. You should disable this activity using the registry editor regedit: navigate to the key HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and right click to create a new value; make sure that it is called DisableDHCPMediaSense, is of type DWORD and has a value of 1. For Windows 2008 we can check the status of DHCP Media Sense with the command: netsh interface ipv4 show global

2.2.4. Disable SNP Features
On Windows 2003 SP2 and later platforms there are several network issues related to SNP features. These issues are described in detail in Microsoft KB articles 948496 and 951037. Perform the following tasks to take proactive action on these potential issues:
• Click Start, click Run, type "ncpa.cpl", and then click OK.
• Right-click a network adapter object, and then click Properties.
• Click Configure, and then click the Advanced tab.
• In the Property list, click Receive Side Scaling, click Disable in the Value list, and then click OK.
• In the Property list, click TCP/IP Offload, click Disable in the Value list, and then click OK.
• Repeat steps 2 through 5 for each network adapter object.
The same can be accomplished on Windows 2008 by issuing the following commands: netsh int tcp set global chimney=disabled and netsh int tcp set global rss=disabled. Validate these changes with the command: netsh int tcp show global

2.3. Stopping Services
There can be issues with some (non-Oracle) services which may already be running on the cluster nodes. Typically a Microsoft service, the Distributed Transaction Coordinator (MSDTC), can interact with Oracle software during the install. It is recommended that this service is stopped and set to 'manual' start using services.msc on both nodes.
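As a sketch, the MSDTC change described above can also be made from an elevated command prompt (this assumes the service still has its default short name, msdtc):

```bat
rem Set the Distributed Transaction Coordinator to manual start, then stop it
sc config msdtc start= demand
net stop msdtc
```

Run the same two commands on both nodes, matching the services.msc procedure.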

2.4. Synchronizing the Time on ALL Nodes
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2, time synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services Daemon) or by the Windows Time Service. If the Windows Time Service is being used, it MUST be configured to prevent the time from being adjusted backwards. Perform the following steps to ensure the time is NOT adjusted backwards using the Windows Time Service:
• Open a command prompt and type "regedit".
• Within the registry editor locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config key.
• Set the value for MaxNegPhaseCorrection to 0 and exit the registry editor.
• Open a command prompt and execute the following to put the change into effect: cmd> W32tm /config /update
• Repeat steps 1 through 4 for ALL cluster nodes.

2.5. Environment Variables
Set the TEMP and TMP environment variables to a common location that exists on ALL nodes in the cluster. During installation the Oracle Universal Installer (OUI) will utilize these directories to store temporary copies of the binaries. Important: this path must be identical for both TMP and TEMP and they must be set to the same location on ALL cluster nodes. If the location is not the same for both variables on ALL cluster nodes the installation will fail. Most commonly these parameters are set as follows:
TMP=C:\temp
TEMP=C:\temp
For Windows 2008, to set the TEMP and TMP environment variables:
• Log into the server as the user that will perform the installation.
• Open Computer Properties.
• Click the Advanced system settings link (on the left under tasks).
• Under the Advanced tab, click the Environment Variables button.
• Modify the TEMP and TMP variables under "User variables for Administrator" to the desired setting.
• Click OK to save the changes.
• Repeat steps 1 through 6 for ALL cluster nodes.

2.6. Stage the Oracle Software
It is recommended that you stage the required software onto a local drive on Node 1 of your cluster. Keep in mind, ensure that you use only 32-bit versions of the Oracle software on a 32-bit OS and 64-bit versions of the Oracle software on a 64-bit OS.
For the Grid Infrastructure (Clusterware and ASM) software, download: Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Windows
For the RDBMS software, download from OTN: Oracle Database 11g Release 2 (11.2.0.1.0) for Windows
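The per-user TEMP/TMP change from section 2.5 can also be made from a command prompt; this sketch uses the common C:\temp location and should be run as the installing user on every node:

```bat
rem Point the installing user's TEMP and TMP at the same path on all nodes
setx TEMP C:\temp
setx TMP C:\temp
```

Note that setx affects new command prompts only, so open a fresh session (or log off and on) before verifying with echo %TEMP%.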

2.7. Cluster Verification Utility (CVU) stage check
Now you can run the CVU to check the state of the cluster prior to the install of the Oracle software. Check if there is a newer version of CVU available on OTN compared to the one that ships on the installation media: http://otn.oracle.com/rac

3. Prepare the shared storage for Oracle RAC
This section describes how to prepare the shared storage for Oracle RAC:
1. Shared Disk Layout
2. Enable Automounting of disks on Windows
3. Clean the Shared Disks
4. Create Logical partitions inside Extended partitions
5. View Disks
6. Drive Letters
7. Marking Disk Partitions for use by ASM
8. Verify Clusterware Installation Readiness

3.1. Shared Disk Layout
It is assumed that the two nodes have local disk primarily for the operating system and the local Oracle Homes, labelled C:. The Oracle Grid Infrastructure software also resides on the local disks on each node. The 2 nodes must also share some central disks. These disks must not have cache enabled at the node level, i.e. if the HBA drivers support caching of reads/writes it should be disabled. If the SAN supports caching that is visible to all nodes then this can be enabled.

3.1.1. Grid Infrastructure Shared Storage
With Oracle 11gR2 it is considered a best practice to store the OCR and Voting Disk within ASM and to maintain the ASM best practice of having no more than 2 diskgroups (Flash Recovery Area and Database Area). This means that the OCR and Voting disk will be stored along with the database related files. If you are utilizing external redundancy for your disk groups this means you will have 1 Voting Disk and 1 OCR. For those who wish to utilize Oracle supplied redundancy for the OCR and Voting disks you could create a separate (3rd) ASM diskgroup having a minimum of 2 fail groups (a total of 3 disks). This configuration will provide 3 multiplexed copies of the Voting Disk and a single OCR which takes on the redundancy of that disk group (mirrored within ASM). The minimum size of the 3 disks that make up this diskgroup is 1GB. For demonstration purposes within this cookbook we will be using the more complex of the above configurations by creating a 3rd diskgroup for storage of the OCR and Voting Disks. This diskgroup will also be used to store the ASM SPFILE. Our third disk group will be normal redundancy allowing for 3 Voting Disks and a single OCR which takes on the redundancy of that diskgroup.

Disk Number  Volume    Size(MB)  ASM Label Prefix  Diskgroup  Redundancy
Disk 1       Volume 2  1024      OCR_VOTE          OCR_VOTE   Normal
Disk 2       Volume 3  1024      OCR_VOTE          OCR_VOTE   Normal
Disk 3       Volume 4  1024      OCR_VOTE          OCR_VOTE   Normal
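For the CVU stage check mentioned in section 2.7, a typical pre-install invocation looks like the following sketch; the node names are this guide's examples, and the exact path to runcluvfy.bat depends on where you staged the Grid Infrastructure software:

```bat
rem Pre-installation check for Grid Infrastructure across both nodes
runcluvfy.bat stage -pre crsinst -n racnode1,racnode2 -verbose
```

Review the verbose output and resolve any FAILED checks before launching OUI.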

3.1.2. ASM Shared Storage
It is recommended that ALL ASM disks within a disk group are of the same size and carry the same performance characteristics. Whenever possible Oracle also recommends sticking to the SAME (Stripe And Mirror Everything) methodology by using RAID 1+0. If SAN level redundancy is available, external redundancy should be used for database storage on ASM.

Number of LUNs(Disks)  RAID Level  Size(GB)  ASM Label Prefix  Diskgroup  Redundancy
4                      1+0         100       DBDATA            DBDATA     External
4                      1+0         100       DBFLASH           DBFLASH    External

In this document we will use the diskpart command line tool to manage these LUNs. You must create logical drives inside of extended partitions for the disks to be used by Oracle Grid Infrastructure and Oracle ASM. For Microsoft Windows 2003 it is possible to use diskmgmt.msc instead of diskpart (as used in the following sections) to create these partitions. For Microsoft Windows 2008, diskmgmt.msc cannot be used instead of diskpart to create these partitions. There must be no drive letters assigned to any of Disk 1 – Disk 10 on any node.

3.2. Enable Automount
You must enable automounting of disks for them to be visible to Oracle Grid Infrastructure. On each node log in as someone with Administrator privileges, then click START->RUN and type diskpart:

C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: WINNODE1

DISKPART> AUTOMOUNT ENABLE

Repeat the above command on all nodes in the cluster.

3.3. Clean the Shared Disks
You may want to clean your shared disks before starting the install; cleaning will remove data from any previous failed install (but see a later Appendix for coping with failed installs). WARNING: this will destroy all of the data on the disk. Do not select the disk containing the operating system or you will have to reinstall the OS. Cleaning the disk 'scrubs' every block on the disk; this may take some time to complete. On Node1 from within diskpart you should clean each of the disks.

DISKPART> list disk

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
  Disk 0    Online        34 GB      0 B
  Disk 1    Online      1024 MB  1024 MB
  Disk 2    Online      1024 MB  1024 MB
  Disk 3    Online      1024 MB  1024 MB
  Disk 4    Online      1024 MB  1024 MB
  Disk 5    Online      1024 MB  1024 MB
  Disk 6    Online       100 GB   100 GB
  Disk 7    Online       100 GB   100 GB
  Disk 8    Online       100 GB   100 GB
  Disk 9    Online       100 GB   100 GB
  Disk 10   Online       100 GB   100 GB

DISKPART> clean all DISKPART> select disk 2 Disk 2 is now the selected disk. DISKPART> clean all 3. DISKPART> clean all DISKPART> select disk 4 Disk 4 is now the selected disk. DISKPART> clean all DISKPART> select disk 5 Disk 5 is now the selected disk. I have dedicated LUNS for each device.3. DISKPART> clean all DISKPART> select disk 3 Disk 3 is now the selected disk. In the following example. DISKPART> create part log DiskPart succeeded in creating the specified partition. Clean the Shared Disks 11 . Create Logical partitions inside Extended partitions Assuming the disks you are going to use are completely empty you must create an extended partition and then inside that partition a logical partition.4. DISKPART> select disk 1 Disk 1 is now the selected disk.Disk 10 Online 100 GB 100 GB Now you should clean disks 1 – 10 (Not disk0 as this the local C: drive) DISKPART>select disk 1 Disk 1 is now the selected disk. DISKPART> create part ext DiskPart succeeded in creating the specified partition. DISKPART> clean all DISKPART> select disk 9 Disk 9 is now the selected disk. DISKPART> clean all DISKPART> select disk 6 Disk 6 is now the selected disk. DISKPART> clean all DISKPART> select disk 8 Disk 8 is now the selected disk. 3. DISKPART> clean all DISKPART> select disk 7 Disk 7 is now the selected disk. for Oracle Grid Infrastructure. DISKPART> clean all DISKPART> select disk 10 Disk 10 is now the selected disk.
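The per-disk diskpart commands above are repetitive and easy to mistype. As an illustrative aid (Python, the file name, and the disk range 1-10 are assumptions from this example layout, not part of the original procedure), the clean session from section 3.3 can be generated as an input file for `diskpart /s`:

```python
def make_clean_script(disk_numbers):
    """Return the text of a diskpart /s input script that selects and
    cleans each given disk. Disk 0 (the local OS disk) must NOT be passed in."""
    lines = []
    for n in disk_numbers:
        lines.append("select disk %d" % n)
        lines.append("clean all")
    return "\n".join(lines) + "\n"

# Disks 1-10 are the shared LUNs in this example layout.
print(make_clean_script(range(1, 11)))
# Save the output as clean_disks.txt and replay it on node 1 with:
#   diskpart /s clean_disks.txt
```

Replaying the generated file runs the whole session unattended, which avoids accidentally selecting disk 0.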

DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.

Repeat the same three commands (select disk N, create part ext, create part log) for each of disks 3 through 9.
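The partitioning session can be generated the same way as the clean script; this sketch emits the long forms of `create part ext` / `create part log` (again, the Python helper is illustrative and not part of the original procedure):

```python
def make_partition_script(disk_numbers):
    """Return a diskpart /s script that creates one extended partition and
    one logical drive on each given disk (no drive letter is assigned)."""
    lines = []
    for n in disk_numbers:
        lines.append("select disk %d" % n)
        lines.append("create partition extended")  # long form of "create part ext"
        lines.append("create partition logical")   # long form of "create part log"
    return "\n".join(lines) + "\n"

print(make_partition_script(range(1, 11)))
# Save as create_parts.txt and run:  diskpart /s create_parts.txt
```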

DISKPART> select disk 10
Disk 10 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.

3.5. View Created partitions

DISKPART> list disk

  Disk ###  Status   Size     Free   Dyn  Gpt
  --------  ------   -------  -----  ---  ---
  Disk 0    Online     34 GB   0 B
  Disk 1    Online   1024 MB   0 MB
  Disk 2    Online   1024 MB   0 MB
  Disk 3    Online   1024 MB   0 MB
  Disk 4    Online   1024 MB   0 MB
  Disk 5    Online   1024 MB   0 MB
  Disk 6    Online    100 GB   0 MB
  Disk 7    Online    100 GB   0 MB
  Disk 8    Online    100 GB   0 MB
  Disk 9    Online    100 GB   0 MB
  Disk 10   Online    100 GB   0 MB

Diskpart should not add drive letters to the partitions on the local node, but the partitions on the other node may have drive letters assigned; you must remove them. On earlier versions of Windows 2003 a reboot of the 'other' node will be required for the new partitions to become visible; Windows 2003 SP2 and Windows 2008 do not suffer from this issue.

3.5.1. Remove Drive Letters

Using diskpart on Node 2:

DISKPART> list volume

  Volume ###  Ltr  Label  Fs    Type       Size     Status   Info
  ----------  ---  -----  ----  ---------  -------  -------  ------
  Volume 0    C           NTFS  Partition    16 GB  Healthy  System
  Volume 1    D           RAW   Partition  1023 MB  Healthy
  Volume 2    E           RAW   Partition  1023 MB  Healthy
  Volume 3    F           RAW   Partition  1023 MB  Healthy
  Volume 4    G           RAW   Partition  1023 MB  Healthy
  Volume 5    H           RAW   Partition  1023 MB  Healthy
  Volume 6    I           RAW   Partition   100 GB  Healthy
  Volume 7    J           RAW   Partition   100 GB  Healthy
  Volume 8    K           RAW   Partition   100 GB  Healthy
  Volume 9    L           RAW   Partition   100 GB  Healthy
  Volume 10   M           RAW   Partition   100 GB  Healthy

Notice that the volumes are listed in a completely different order compared to the disk list. You need to remove drive letters D through M, which relate to volumes 1 through 10. Do NOT remove drive letter C, which in this case is the local disk (volume 0 in this example).

DISKPART> select volume 1
Volume 1 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.

Repeat the select volume / remove pair for each of volumes 3 through 10.
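After removing the letters, the follow-up check (that no RAW partition still holds a drive letter) can be automated by parsing `list volume` output. A sketch that assumes the column layout shown in these listings, with the volume number first and an optional single drive letter after it; the helper itself is illustrative, not part of the original procedure:

```python
import re

def volumes_with_letters(list_volume_output):
    """Return {volume_number: drive_letter} for every volume in
    `diskpart list volume` output that still has a letter assigned."""
    found = {}
    for line in list_volume_output.splitlines():
        m = re.match(r"\s*Volume (\d+)\s+([A-Z])\s", line)
        if m:
            found[int(m.group(1))] = m.group(2)
    return found

sample = """  Volume 0    C        NTFS  Partition    16 GB  Healthy  System
  Volume 1             RAW   Partition  1023 MB  Healthy
  Volume 2    E        RAW   Partition  1023 MB  Healthy
"""
# Volume 0 is the local C: drive; any other hit still needs `remove`.
leftover = {v: l for v, l in volumes_with_letters(sample).items() if v != 0}
print(leftover)
```

An empty result on each node means the shared partitions are ready for stamping.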

3.5.2. List volumes on Second node

You should check that none of the RAW partitions have drive letters assigned:

DISKPART> list vol

  Volume ###  Ltr  Label  Fs    Type       Size     Status   Info
  ----------  ---  -----  ----  ---------  -------  -------  ------
  Volume 0    C           NTFS  Partition    16 GB  Healthy  System
  Volume 1                RAW   Partition  1023 MB  Healthy
  Volume 2                RAW   Partition  1023 MB  Healthy
  Volume 3                RAW   Partition  1023 MB  Healthy
  Volume 4                RAW   Partition  1023 MB  Healthy
  Volume 5                RAW   Partition  1023 MB  Healthy
  Volume 6                RAW   Partition   100 GB  Healthy
  Volume 7                RAW   Partition   100 GB  Healthy
  Volume 8                RAW   Partition   100 GB  Healthy
  Volume 9                RAW   Partition   100 GB  Healthy
  Volume 10               RAW   Partition   100 GB  Healthy

You can now exit diskpart on all nodes.

3.6. Marking Disk Partitions for use by ASM

The only partitions that the Oracle Universal Installer acknowledges on Windows systems are logical drives that are created on top of extended partitions and that have been stamped as candidate ASM disks. Therefore, prior to running the OUI, the disks that are to be used by Oracle RAC MUST be stamped using ASM Tool. ASM Tool is available in two different flavors: command line (asmtool) and graphical (asmtoolg). Both utilities can be found under the asmtool directory within the Grid Infrastructure installation media. For this installation, asmtoolg will be used to stamp the ASM disks.

The following table outlines the summary of the disks that will be stamped for ASM usage:

Number of LUNs (Disks)   RAID Level   Size      ASM Label Prefix   Diskgroup   Redundancy
3                        1+0          1024 MB   OCR_VOTE           OCR_VOTE    Normal
4                        1+0          100 GB    DBDATA             DBDATA      External
4                        1+0          100 GB    DBFLASH            DBFLASH     External

Perform this task as follows:

• Within Windows Explorer navigate to the asmtool directory within the Grid Infrastructure installation media and double-click the "asmtoolg.exe" executable.
• Within the ASM Tool GUI, select "Add or Change Label" and click "Next".

• On the Select Disks screen choose the appropriate disks to be assigned a label and enter an ASM Label Prefix to make the disks easily identifiable for their intended purpose. After choosing the intended disks and entering the appropriate ASM Label Prefix, click "Next" to continue.
• Review the summary screen and click "Next".

• On the final screen, click "Finish" to update the ASM Disk Labels.
• Repeat these steps for all ASM disks that will differ in their label prefix.

3.7. Verify Grid Infrastructure Installation Readiness

Prior to installing Grid Infrastructure it is highly recommended to run the Cluster Verification Utility (CLUVFY) to verify that the cluster nodes have been properly configured for a successful Oracle Grid Infrastructure installation. There are various levels at which CLUVFY can be run; at this stage it should be run in the CRS pre-installation mode. Later in this document CLUVFY will be run in pre dbinst mode to validate readiness for the RDBMS software installation.

Though CLUVFY is packaged with the Grid Infrastructure installation media, it is recommended to download and run the latest version of CLUVFY, which can be downloaded from: http://otn.oracle.com/rac

Once the latest version of CLUVFY has been downloaded and installed, execute it as follows to perform the Grid Infrastructure pre-installation verification:

• Log in to the server on which the installation will be performed as the Local Administrator.
• Open a command prompt and run CLUVFY as follows to perform the Oracle Clusterware pre-installation verification:

cmd> runcluvfy stage -post hwos -n <node_list> [-verbose]
cmd> runcluvfy stage -pre crsinst -n <node_list> [-verbose]

If any errors are encountered, these issues should be investigated and resolved before proceeding with the installation.

4. Oracle Grid Infrastructure Install

4.1. Basic Grid Infrastructure Install (without GNS and IPMI)

• Shut down all Oracle processes running on all nodes (not necessary if performing the install on new servers).
• Start the Oracle Universal Installer (OUI) by running setup.exe as the Local Administrator user from the Clusterware directory (db directory if using the DVD media; see step 3) of the 11g Release 2 (11.2.0.1) installation media.
• If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the Select a Product to Install screen; choose Oracle Clusterware 11g and click Next to continue. If the OTN media is used, the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be the Welcome screen.
• On the Select Installation Option screen, choose "Install and Configure Grid Infrastructure for a Cluster" and click "Next" to continue.
• On the Select Installation Type screen, choose "Advanced Installation" and click "Next" to continue.
• Choose the appropriate language on the Select Product Languages screen and click "Next" to continue.
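The two runcluvfy stage checks above lend themselves to scripting, so the same invocation is used on every run. A small illustrative helper (Python and the node names are hypothetical, not part of the original procedure):

```python
def cluvfy_args(mode, stage, nodes, verbose=True):
    """Assemble the argument list for a runcluvfy stage check, e.g.
    runcluvfy stage -pre crsinst -n node1,node2 -verbose"""
    args = ["runcluvfy", "stage", "-" + mode, stage, "-n", ",".join(nodes)]
    if verbose:
        args.append("-verbose")
    return args

# The two checks used before a Grid Infrastructure install:
for mode, stage in (("post", "hwos"), ("pre", "crsinst")):
    print(" ".join(cluvfy_args(mode, stage, ["winnode1", "winnode2"])))
    # On a real node one would pass this list to subprocess.call() and stop
    # the install if the stage reports errors.
```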

• On the Grid Plug and Play Information screen perform the following:
  ♦ Enter the desired Cluster Name; this name should be unique for the enterprise and CANNOT be changed post-installation.
  ♦ Enter the SCAN Name for the cluster. The SCAN Name must be a DNS entry resolving to 3 IP addresses and MUST NOT be in the hosts file.
  ♦ Enter the port number for the SCAN Listener; this port defaults to 1521.
  ♦ Uncheck the "Configure GNS" checkbox.
  ♦ Click "Next" to continue.
• Add all of the cluster node hostnames and Virtual IP hostnames on the Cluster Node Information screen. By default the OUI only knows about the local node; additional nodes can be added using the "Add" button.

• On the Specify Network Interface Usage screen, make sure that the public and private interfaces are properly specified. Make the appropriate corrections on this screen and click "Next" to continue.

NOTE: The public and private interface names MUST be consistent across the cluster, so if "Public" is public and "Private" is private on this node, the same MUST be true on all of the other nodes in the cluster.

• Choose "Automatic Storage Management (ASM)" on the Storage Option Information screen and click "Next" to create the ASM diskgroup for Grid Infrastructure.

• On the Create ASM Disk Group screen perform the following:
  ♦ Enter the Disk Group Name that will store the OCR and Voting Disks (e.g. OCR_VOTE).
  ♦ Choose "Normal" for the Redundancy level.
  ♦ Select the disks that were previously designated for the OCR and Voting Disks.
  ♦ Click "Next" to continue.

NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the Storage Prerequisites section of this document and click the "Stamp Disks" button on this screen.

• Enter the appropriate passwords for the SYS and ASMSNMP users of ASM on the Specify ASM Passwords screen and click "Next" to continue.
• On the Failure Isolation Support screen choose "Do not use Intelligent Platform Management Interface (IPMI)" and click "Next" to continue.
• Specify the locations for the Oracle Base (e.g. d:\app\oracle) and the Grid Infrastructure installation (e.g. d:\OraGrid) on the Specify Installation Location screen. Click "Next" to allow the OUI to perform the prerequisite checks on the target nodes.

NOTE: Continuing with failed prerequisites may result in a failed installation; it is therefore recommended that failed prerequisite checks be resolved before continuing.

• After the prerequisite checks have successfully completed, review the summary of the pending installation and click "Finish" to install and configure Grid Infrastructure.

• On the Finish screen click "Close" to exit the OUI.
• Once the installation has completed, check the status of the CRS resources as follows:

cmd> %GI_HOME%\bin\crsctl stat res -t

NOTE: All resources should report as online with the exception of GSD and OC4J. GSD will only be online if Grid Infrastructure is managing a 9i database, and OC4J is reserved for use in a future release. Though these resources are offline, it is NOT supported to remove them.

The output of the above command will be similar to the following:

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER        STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.asm
               ONLINE  ONLINE       ratwin01      Started
               ONLINE  ONLINE       ratwin02      Started
ora.eons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.gsd
               OFFLINE OFFLINE      ratwin01
               OFFLINE OFFLINE      ratwin02
ora.net1.network
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.ons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ratwin02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.oc4j
      1        OFFLINE OFFLINE
ora.ratwin01.vip
      1        ONLINE  ONLINE       ratwin01
ora.ratwin02.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan1.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan2.vip
      1        ONLINE  ONLINE       ratwin01
ora.scan3.vip
      1        ONLINE  ONLINE       ratwin01
--------------------------------------------------------------------------------
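With many resources, scanning the table for unexpected OFFLINE entries by eye is easy to get wrong; the GSD/OC4J exception can be applied programmatically instead. An illustrative sketch (not part of the original procedure) that parses the state-per-line layout shown above; the resource and host names are from this example cluster:

```python
EXPECTED_OFFLINE = {"ora.gsd", "ora.oc4j"}  # offline by design in 11.2.0.1

def unexpected_offline(crsctl_output):
    """Scan `crsctl stat res -t` output; return resource names that report
    OFFLINE on any server but are not in the expected-offline set."""
    bad = set()
    current = None
    for line in crsctl_output.splitlines():
        if line.startswith("ora."):
            current = line.strip()          # a resource-name line
        elif current and "OFFLINE" in line and current not in EXPECTED_OFFLINE:
            bad.add(current)                # an unexpected state line
    return sorted(bad)

sample = """ora.asm
               ONLINE  ONLINE       ratwin01      Started
ora.gsd
               OFFLINE OFFLINE      ratwin01
ora.ons
               ONLINE  OFFLINE      ratwin02
"""
print(unexpected_offline(sample))  # ['ora.ons']
```

An empty list indicates the stack matches the expected post-install state.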

5. Grid Infrastructure Home Patching

This chapter is a placeholder.

6. RDBMS Software Install

Prior to installing the Database Software (RDBMS) it is highly recommended to run the Cluster Verification Utility (CLUVFY) to verify that Grid Infrastructure has been properly installed and the cluster nodes have been properly configured to support the Database Software installation. In order to achieve this, CLUVFY must be run in pre dbinst mode:

cmd> runcluvfy stage -pre dbinst -n <node_list> -verbose

• Start the OUI by running the setup.exe command as the Local Administrator user from the DB directory of the Oracle Database 11g Release 2 (11.2.0.1) installation media and click Next to begin the installation process.
• If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the Select a Product to Install screen; choose Oracle Database 11g and click Next to continue. If the OTN media is used, the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be the Installation Type screen.
• The first OUI screen will prompt for an email address to allow you to be notified of security issues and to enable Oracle Configuration Manager. If you do not wish to be notified or do not wish to use Oracle Configuration Manager, leave the email box blank and uncheck the check box. After entering the desired information click "Next" to continue.
• Choose "Install database software only" on the Select Installation Option screen and click "Next" to continue.
• On the Grid Installation Options screen, choose "Real Application Clusters database installation" and select ALL nodes for the software installation. Click "Next" to continue.
• Choose the appropriate language on the Select Product Languages screen and click "Next" to continue.

• On the Select Database Edition screen choose "Enterprise Edition" and click "Next" to continue.

NOTE: If there is a need for specific database options to be installed or not installed, these options can be chosen by clicking the "Select Options" button.

• Specify the locations for the Oracle Base (e.g. d:\app\oracle) and the database software installation (e.g. d:\app\oracle\product\11.2.0\db_1) on the Specify Installation Location screen. Click "Next" to allow the OUI to perform the prerequisite checks on the target nodes.

NOTE: It is recommended that the same Oracle Base location is used for the database installation as was used for the Grid Infrastructure installation.

NOTE: Continuing with failed prerequisites may result in a failed installation; it is therefore recommended that failed prerequisite checks be resolved before continuing.

• After the prerequisite checks have successfully completed, review the summary of the pending installation and click "Finish" to install the database software.

• On the Finish screen click "Close" to exit the OUI.

7. RAC Home Patching

This chapter is a placeholder.

8. Run ASMCA to create diskgroups

Prior to creating a database on the cluster, the ASM diskgroups that will house the database must be created. In an earlier chapter the ASM disks for the database diskgroups were stamped for ASM usage. We will now use the ASM Configuration Assistant to create the diskgroups:

• Run the ASM Configuration Assistant (ASMCA) from the Grid Infrastructure home by executing the following:

cmd> %GI_HOME%\bin\asmca

• After launching ASMCA, click the Disk Groups tab.

NOTE: To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally no more than two diskgroups be maintained for database storage: a Database Area diskgroup and a Flash Recovery Area diskgroup. The Database Area diskgroup houses active database files such as datafiles, control files, online redo logs, and change tracking files (used in incremental backups). The Flash Recovery Area diskgroup houses recovery-related files, such as multiplexed copies of the current control file and online redo logs, archived redo logs, backup sets, and flashback log files.

• While on the Disk Groups tab, click the "Create" button to display the Create Disk Group window.
• Within the Create Disk Group window perform the following:
  ♦ Enter the desired Disk Group Name (e.g. DBDATA or DBFLASH).
  ♦ Choose External Redundancy (assuming redundancy is provided at the SAN level).
  ♦ Select the candidate disks to include in the disk group.

NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the 'Rac11gR2WindowsPrepareDisk' chapter of this document and click the "Stamp Disks" button on this screen.

• Click "OK" to create the disk group.
• Repeat the previous two steps to create all necessary diskgroups.
• Once all the necessary diskgroups have been created, click "Exit" to leave ASMCA.

NOTE: It is an Oracle best practice to store an OCR mirror in a second disk group. To follow this recommendation, add an OCR mirror, minding that you can only have one OCR per diskgroup. Action:

To add an OCR mirror to an Oracle ASM disk group, ensure that the Oracle Clusterware stack is running and run the following commands as Administrator:

D:\OraGrid\BIN> ocrconfig -add +DBDATA
D:\OraGrid\BIN> ocrcheck -config

9. Run DBCA to create the database

To help verify that the system is prepared to successfully create a RAC database, use the following Cluster Verification Utility command syntax:

cmd> runcluvfy stage -pre dbcfg -n all -d c:\app\11.2.0\grid -verbose

Perform the following to create an 11gR2 RAC database on the cluster:

• Run the Database Configuration Assistant (DBCA) from the RDBMS home by executing the following:

cmd> %ORACLE_HOME%\bin\dbca

• On the Welcome screen, select "Oracle Real Application Clusters" and click "Next".
• On the Operation screen, select "Create Database" and click "Next".

• The Database Templates screen will now be displayed. Select the "General Purpose or Transaction Processing" database template and click "Next" to continue.
• On the Database Identification screen perform the following:
  ♦ Select the "Admin-Managed" configuration type.
  ♦ Enter the desired Global Database Name.
  ♦ Enter the desired SID prefix.
  ♦ Select ALL the nodes in the cluster.
  ♦ Click "Next" to continue.

• On the Management Options screen, choose "Configure Enterprise Manager". Once Grid Control has been installed on the system this option may be unselected to allow the database to be managed by Grid Control. Click "Next" to continue.
• Enter the appropriate database credentials for the default user accounts and click "Next" to continue.

• On the Database File Locations screen perform the following:
  ♦ Choose Automatic Storage Management (ASM).
  ♦ Select "Use Oracle-Managed Files" and specify the ASM Disk Group that will house the database files (e.g. +DBDATA).
  ♦ Click "Next" and enter the ASMSNMP password when prompted.
  ♦ Click "OK" after entering the ASMSNMP password to continue.

• For the Recovery Configuration, select "Specify Flash Recovery Area", enter +DBFLASH for the location and choose an appropriate size. It is also recommended to select "Enable Archiving" at this point. Click "Next" to continue.
• On the Database Content screen you can create the sample schemas. Check the check box if the sample schemas are to be loaded into the database and click "Next" to continue.

• Enter the desired memory configuration and initialization parameters on the Initialization Parameters screen and click "Next" to continue.
• On the Database Storage screen, review the file layout and click "Next" to continue.

NOTE: At this point you may want to increase the number of redo logs per thread to 3 and increase the size of the logs from the default of 50MB.

• On the last screen, ensure that "Create Database" is checked and click "Finish" to review the summary of the pending database creation.
• Click "OK" on the Summary window to create the database.
