Table of Contents

Rac11gR2OnWindows
1. Introduction
   1.1. Overview of new concepts in 11gR2 Grid Infrastructure
      1.1.1. SCAN
      1.1.2. GNS
      1.1.3. OCR and Voting on ASM storage
      1.1.4. Intelligent Platform Management Interface (IPMI)
      1.1.5. Time sync
      1.1.6. Clusterware and ASM share the same Oracle Home
   1.2. System Requirements
      1.2.1. Hardware Requirements
      1.2.2. Network Hardware Requirements
      1.2.3. IP Address Requirements
      1.2.4. Installation method
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
      2.1.1. User Account changes specifically for Windows 2008
      2.1.2. Net Use Test
      2.1.3. Remote Registry Connect
   2.2. Networking
      2.2.1. Network Ping Tests
      2.2.2. Network Interface Binding Order (and Protocol Priorities)
      2.2.3. Disable DHCP Media Sense
      2.2.4. Disable SNP Features
   2.3. Stopping Services
   2.4. Synchronizing the Time on ALL Nodes
   2.5. Environment Variables
   2.6. Stage the Oracle Software
   2.7. Cluster Verification Utility (CVU) stage check
3. Prepare the shared storage for Oracle RAC
   3.1. Shared Disk Layout
      3.1.1. Grid Infrastructure Shared Storage
      3.1.2. ASM Shared Storage
   3.2. Enable Automount
   3.3. Clean the Shared Disks
   3.4. Create Logical partitions inside Extended partitions
      3.4.1. View Created partitions
   3.5. List Drive Letters
      3.5.1. Remove Drive Letters
      3.5.2. List volumes on Second node
   3.6. Marking Disk Partitions for use by ASM
   3.7. Verify Grid Infrastructure Installation Readiness
4. Oracle Grid Infrastructure Install
   4.1. Basic Grid Infrastructure Install (without GNS and IPMI)
5. Grid Infrastructure Home Patching
6. RDBMS Software Install
7. RAC Home Patching
8. Run ASMCA to create diskgroups
9. Run DBCA to create the database


Rac11gR2OnWindows

1. Introduction
1.1. Overview of new concepts in 11gR2 Grid Infrastructure
1.1.1. SCAN
The single client access name (SCAN) is the address used by all clients connecting to the cluster. The SCAN is a domain name registered to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS). Using a SCAN eliminates the need to change client configuration when nodes are added to or removed from the cluster. Clients using the SCAN can also access the cluster using EZCONNECT.

• The Single Client Access Name (SCAN) is a domain name that resolves to all the addresses allocated for the SCAN name. Allocate three addresses to the SCAN name. During Oracle Grid Infrastructure installation, a listener is created for each of the SCAN addresses, and Oracle Grid Infrastructure controls which server responds to a SCAN address request. Provide three IP addresses in the DNS to use for SCAN name mapping; this ensures high availability.
• The SCAN addresses must be on the same subnet as the VIP addresses for the nodes in the cluster.
• The SCAN domain name must be unique within your corporate network.
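Before installing, you can confirm that the SCAN resolves to three addresses. A hypothetical lookup is sketched below; the SCAN name `docrac-scan.example.com` and all addresses shown are illustrative placeholders, and the exact output format of nslookup varies by Windows version:

```text
C:\> nslookup docrac-scan.example.com
Server:    dns1.example.com
Address:   192.0.2.10

Name:      docrac-scan.example.com
Addresses: 192.0.2.110, 192.0.2.111, 192.0.2.112
```

If only one address is returned, the SCAN has not been set up for round-robin resolution in DNS.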

1.1.2. GNS
In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can simplify this setup by using DHCP. To use GNS, DHCP must be configured in the subdomain in which the cluster resides.

1.1.3. OCR and Voting on ASM storage
The ability to use ASM diskgroups for the storage of the Clusterware OCR and Voting disks is a new feature in Oracle Database 11g Release 2 Grid Infrastructure. If you choose this option and ASM is not yet configured, OUI launches the ASM Configuration Assistant to configure ASM and create a diskgroup.

1.1.4. Intelligent Platform Management interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware and firmware that administrators can use to monitor system health and manage the system. With Oracle Database 11g Release 2, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You must have the following hardware and software configured to enable cluster nodes to be managed with IPMI:

• Each cluster member node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5, which supports IPMI over LANs, and configured for remote control.
• Each cluster member node requires an IPMI driver installed.
• The cluster requires a management network for IPMI. This can be a shared network, but Oracle recommends that you configure a dedicated network.
• Each cluster node's Ethernet port used by the BMC must be connected to the IPMI management network.

If you intend to use IPMI, then you must provide an administration account username and password when prompted during installation.

1.1.5. Time sync
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2 time synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services Daemon) or by using the Windows Time Service. If the Windows Time Service is being used, it MUST be configured to prevent the time from being adjusted backwards.

1.1.6. Clusterware and ASM share the same Oracle Home
The Clusterware and ASM now share the same Oracle home, which is therefore called the Grid Infrastructure home. (Prior to 11gR2, ASM could be installed either in a separate home or in the same Oracle home as the RDBMS.)

1.2. System Requirements
1.2.1. Hardware Requirements
• Physical memory (at least 1.5 gigabytes (GB) of RAM)
• An amount of swap space equal to the amount of RAM
• Temporary space (at least 1 GB) available in the TEMP directory
• A processor type (CPU) that is certified with the version of the Oracle software being installed
• A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays correctly
• The same chip architecture on all servers that will be used in the cluster, for example all 32-bit processors or all 64-bit processors
• Disk space for the software installation locations. You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
• Shared disk space

An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all the Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount of shared disk space is determined by the size of your database.
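The memory, page-file and local disk space requirements above can be checked from a command prompt on each node. This is an illustrative sketch only; it assumes the Oracle homes will live on the C: drive.

```bat
rem Check physical memory and page-file (virtual memory) sizing on this node
systeminfo | findstr /C:"Total Physical Memory" /C:"Virtual Memory: Max Size"

rem Check free space on the drive that will hold the Grid and RDBMS homes
fsutil volume diskfree C:
```

Run the same checks on every node; the requirements apply to each cluster member, not just the node you install from.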

1.2.2. Network Hardware Requirements
• Each node has at least two network interface cards (NICs), or network adapters.
• Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter 'PublicLAN', then you must configure 'PublicLAN' as the public interface on all nodes.
• You should configure the same private interface names for all nodes as well. If 'PrivateLAN' is the private interface name for the first node, then 'PrivateLAN' should be the private interface name for your second node.
• For the private network, the end points of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster should be able to connect to every private network interface in the cluster.

• The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.

1.2.3. IP Address Requirements
• One public IP address for each node
• One virtual IP address for each node
• Three single client access name (SCAN) addresses for the cluster

1.2.4. Installation method
This document details the steps for installing a 2-node Oracle 11gR2 RAC cluster on Windows:

• The Oracle Grid Infrastructure home binaries are installed on the local disk of each of the RAC nodes.
• The files required by Oracle Clusterware (OCR and Voting disks) are stored in ASM.
• The installation is explained without GNS and IPMI (additional information for installation with GNS and IPMI is provided where relevant).

2. Prepare the cluster nodes for Oracle RAC

2.1. User Accounts
The installation should be performed as the Local Administrator; the Local Administrator username and password MUST be identical on all cluster nodes. If a domain account is used, this domain account must be explicitly defined as a member of the local Administrators group on all cluster nodes. For Windows 2008:

• Open Windows 2008 Server Manager.
• Expand the Configuration category in the console tree.
• Expand the Local Users and Groups category in the console tree.
• Within Groups, open the Administrators group.
• Add the desired user account as a member of the Administrators group.
• Click OK to save the changes.

We must now configure and test the installation user's ability to interact with the other cluster nodes.
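The same group membership check and change can be made from an elevated command prompt. The account name EXAMPLE\oracle below is a placeholder for your actual domain installation account:

```bat
rem List current members of the local Administrators group
net localgroup Administrators

rem Add a domain installation account (EXAMPLE\oracle is a placeholder)
net localgroup Administrators EXAMPLE\oracle /add
```

Repeat on every cluster node so that the membership is identical everywhere.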

2.1.1. User Account changes specifically for Windows 2008:
1. Change the elevation prompt behavior for administrators to "Elevate without prompting" to allow user equivalence to function properly in Windows 2008:

• Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
• From the Local Security Settings console tree, click Local Policies, and then Security Options.
• Scroll down to and double-click "User Account Control: Behavior of the elevation prompt for administrators".
• From the drop-down menu, select: "Elevate without prompting (tasks requesting elevation will automatically run as elevated without prompting the administrator)".
• Click OK to confirm the changes.
• Repeat the previous 5 steps on ALL cluster nodes.

2. Ensure that the Administrators group is listed under "Manage auditing and security log":

• Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
• From the Local Security Settings console tree, click Local Policies, and then User Rights Assignment.
• Locate and double-click "Manage auditing and security log" in the listing of User Rights Assignments.
• If the Administrators group is NOT listed in the "Local Security Settings" tab, add the group now.
• Click OK to save the changes (if changes were made).
• Repeat the previous 5 steps on ALL cluster nodes.

3. Disable the Windows Firewall. When installing Oracle Grid Infrastructure and/or Oracle RAC it is required to turn off the Windows Firewall to ensure correct operation of the Oracle software. After the installation is successful, you can enable the Windows Firewall for the public connections. However, you must then add certain executables and ports to the Firewall exception list on all the nodes of the cluster. See Section 5, "Configure Exceptions for the Windows Firewall" of Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows for details: http://download.oracle.com/docs/cd/E11882_01/install.112/e10817/postinst.htm#CHDJGCEH

NOTE: The Windows Firewall must be disabled on all the nodes in the cluster before performing any cluster-wide configuration changes, such as:
• Adding a node
• Deleting a node
• Upgrading to a patch release
• Applying a one-off patch
If you do not disable the Windows Firewall before performing these actions, then the changes might not be propagated correctly to all the nodes of the cluster.

Follow these steps to turn off the Windows Firewall:
• Click Start, click Run, type "firewall.cpl", and then click OK.
• In the Firewall Control Panel, click "Turn Windows Firewall on or off" (upper left hand corner of the window).
• Choose the "Off" radio button in the "Windows Firewall Settings" window and click OK to save the changes.
• Repeat the previous 3 steps on ALL cluster nodes.

2.1.2. Net Use Test
The "net use" utility can be used to validate the ability to perform the software copy among the cluster nodes.

• Open a command prompt.
• Execute the following (replacing C$ with the appropriate drive letter if necessary). Repeat the command to ensure access to every node in the cluster from the local node, replacing <remote node name> with the appropriate node names:

C:\Users\Administrator>net use \\<remote node name>\C$
The command completed successfully.

• Repeat the previous 2 steps on ALL cluster nodes.

2.1.3. Remote Registry Connect
Validate the ability to connect to the remote nodes' registries as follows:

• Open a command prompt and type "regedit".
• Within the registry editor menu bar, choose File and select "Connect Network Registry".
• In the Select Computer window enter the remote node name, and then click OK.
• Click OK and wait for the remote registry to appear in the tree.
• Repeat the previous 4 steps for ALL cluster nodes.

2.2. Networking
NOTE: This section is intended to be used for installations NOT using GNS.

NOTE: It is a requirement that the network interfaces used for the Public network and the Private interconnect be consistently named (have the same name) on every node in the cluster. Common practice is to use the names Public and Private for the interfaces. It is also recommended that NIC teaming is configured; active/passive is the preferred teaming method due to its simple configuration.

For Windows 2008, perform the following to rename the network interfaces:
• Click Start, click Run, type "ncpa.cpl", and then click OK.
• Determine the intended purpose for each of the interfaces (you may need to view the IP configuration).
• Right-click the interface to be renamed and click "Rename".
• Enter the desired name for the interface.
• Repeat the previous 4 steps on ALL cluster nodes, ensuring that the public and private interfaces have the same name on every node.

1. Determine your cluster name. The cluster name should satisfy the following conditions:
• The cluster name is globally unique throughout your host domain.
• The cluster name is at least 1 character long and less than 15 characters long.
• The cluster name consists of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
In other words, long names should be avoided and special characters are NOT to be used.

2. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node, i.e. the name displayed by the hostname command, for example: racnode1.

3. Determine the virtual hostname for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual hostname must meet the following requirements:
• The virtual IP address and the network name must not be currently in use.
• The virtual IP address must be on the same subnet as your public IP address.

• The virtual host name for each node should be registered with your DNS.

4. Determine the private hostname for each node in the cluster. A common naming convention for the private hostname is <public hostname>-priv. This private hostname does not need to be resolvable through DNS and should be entered in the hosts file (typically located in c:\windows\system32\drivers\etc).
• The private IP should NOT be accessible to servers not participating in the local cluster.
• The private network should be on standalone dedicated switch(es).
• The private network should NOT be part of a larger overall network topology.
• The private network should be deployed on Gigabit Ethernet or better.
• It is recommended that redundant NICs are configured using teaming. Active/passive is the preferred teaming method due to its simple configuration.

5. Configure the c:\windows\system32\drivers\etc\hosts file so that it is similar to the following example. Even if you are using a DNS, Oracle recommends that you list the public IP, VIP and private addresses for each node in the hosts file on each node.

NOTE: The SCAN VIP MUST NOT be in the hosts file. Putting it there would result in only 1 SCAN VIP for the entire cluster.

#PublicLAN
192.0.2.100   racnode1.example.com       racnode1
192.0.2.101   racnode2.example.com       racnode2
#VIP
192.0.2.102   racnode1-vip.example.com   racnode1-vip
192.0.2.103   racnode2-vip.example.com   racnode2-vip
#PrivateLAN
172.0.2.100   racnode1-priv
172.0.2.101   racnode2-priv

6. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN VIPs must be resolvable by DNS and must NOT be in the c:\windows\system32\drivers\etc\hosts file. You can use any name for the SCAN, as long as it is unique within your network and conforms to the RFC 952 standard. The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com; the short SCAN for the cluster is docrac-scan. Using the previous example, the clients would use docrac-scan to connect to the cluster. After you have completed the installation process, configure clients to use the SCAN to access the cluster.

2.2.1. Network Ping Tests
There are a series of ping tests that should be completed. You should ensure that the public IP addresses resolve correctly and that the private addresses are of the form 'nodename-priv' and resolve on both nodes via the hosts file.

• Public ping test: pinging Node1 from Node1 should return Node1's public IP address, and pinging Node2 from Node1 should return Node2's public IP address. Repeat the tests from Node2, and repeat them for the private ('-priv') names.

If any of the above tests fail, you should fix name/address resolution by updating the DNS or local hosts files on each node before continuing with the installation.
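The ping tests above can be batched in a small script run on each node. This is a sketch saved as a .bat file; racnode1/racnode2 and the -priv names are the example hostnames used in this guide, so substitute your own:

```bat
rem Ping every public and private name from this node (batch file sketch)
for %%H in (racnode1 racnode2 racnode1-priv racnode2-priv) do (
    echo Testing %%H
    ping -n 2 %%H
)
```

Check that each name resolves to the expected address in the ping output, not merely that the ping succeeds.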

2.2.2. Network Interface Binding Order (and Protocol Priorities)
It is required that the Public interface be listed first in the network interface binding order on ALL cluster nodes. For Windows 2008, perform the following tasks to ensure this requirement is met:

• Click Start, click Run, type "ncpa.cpl", and then click OK.
• In the menu bar at the top of the window click "Advanced" and choose "Advanced Settings" (for Windows 2008, if the "Advanced" menu is not showing, press 'Alt' to enable that menu item).
• Under the Adapters and Bindings tab, use the up arrow to move the Public interface to the top of the Connections list.
• Under Binding order, increase the priority of IPv4 over IPv6.
• Click OK to save the changes.
• Repeat the previous 5 steps on ALL cluster nodes.

2.2.3. Disable DHCP Media Sense
Media Sense allows Windows to uncouple an IP address from a network card when the link to the local switch is lost. Media Sense should be disabled. You should disable this behavior using the registry editor (regedit) on both nodes. Navigate to the key HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and right-click to create a new value of type DWORD. Make sure that the value is called DisableDHCPMediaSense, is of type DWORD and has a value of 1. For Windows 2008 we can check the status of DHCP Media Sense with the command:

netsh interface ipv4 show global

2.2.4. Disable SNP Features
On Windows 2003 SP2 and later platforms there are several network issues related to SNP features. These issues are described in detail in Microsoft KB articles 948496 and 951037. Perform the following tasks to take proactive action on these potential issues:

• Click Start, click Run, type "ncpa.cpl", and then click OK.
• Right-click a network adapter object, and then click Properties.
• Click Configure, and then click the Advanced tab.
• In the Property list, click Receive Side Scaling, click Disable in the Value list, and then click OK.
• In the Property list, click TCP/IP Offload, click Disable in the Value list, and then click OK.
• Repeat steps 2 through 5 for each network adapter object.

The same can be accomplished on Windows 2008 by issuing the following commands:

netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled

Validate these changes with the command:

netsh int tcp show global

2.3. Stopping Services
There can be issues with some (non-Oracle) services which may already be running on the cluster nodes. Typically, the Microsoft Distributed Transaction Coordinator (MSDTC) service can interact with Oracle software during install. It is recommended that this service is stopped and set to 'manual' start using services.msc on both nodes.
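The Media Sense registry change described in section 2.2.3 above can be scripted instead of made by hand in regedit. A sketch .reg file, using the key path given above:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"DisableDHCPMediaSense"=dword:00000001
```

Save it as, for example, disable-media-sense.reg (the filename is arbitrary) and import it silently on each node with: regedit /s disable-media-sense.reg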

2.4. Synchronizing the Time on ALL Nodes
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2, time synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services Daemon) or by using the Windows Time Service. If the Windows Time Service is being used, it MUST be configured to prevent the time from being adjusted backwards. Perform the following steps to ensure the time is NOT adjusted backwards when using the Windows Time Service:

• Open a command prompt and type "regedit".
• Within the registry editor, locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config key.
• Set the value of MaxNegPhaseCorrection to 0 and exit the registry editor.
• Open a command prompt and execute the following to put the change into effect:
cmd> w32tm /config /update
• Repeat steps 1 through 4 for ALL cluster nodes.

2.5. Environment Variables
Set the TEMP and TMP environment variables to a common location that exists on ALL nodes in the cluster. During installation the Oracle Universal Installer (OUI) will use these directories to store temporary copies of the binaries. Important: this path must be identical for both TMP and TEMP and they must be set to the same location on ALL cluster nodes. If the location is not the same for both variables on ALL cluster nodes, the installation will fail. Most commonly these parameters are set as follows:

TMP=C:\temp
TEMP=C:\temp

For Windows 2008, to set the TEMP and TMP environment variables:
• Log into the server as the user that will perform the installation.
• Open Computer Properties.
• Click the Advanced system settings link (on the left under tasks).
• Under the Advanced tab, click the Environment Variables button.
• Modify the TEMP and TMP variables under "User variables for Administrator" to the desired setting.
• Click OK to save the changes.
• Repeat steps 1 through 6 for ALL cluster nodes.

2.6. Stage the Oracle Software
It is recommended that you stage the required software onto a local drive on Node 1 of your cluster. Keep in mind: ensure that you use only 32-bit versions of the Oracle software on a 32-bit OS and 64-bit versions of the Oracle software on a 64-bit OS.

For the Grid Infrastructure (Clusterware and ASM) software, download: Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Windows
For the RDBMS software, download from OTN: Oracle Database 11g Release 2 (11.2.0.1.0) for Windows
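The TEMP/TMP settings from section 2.5 can also be applied from an elevated command prompt on each node, assuming the common location C:\temp shown above:

```bat
rem Create the common temporary directory and point TEMP/TMP at it
rem (per-user variables; run as the installing user on every node)
mkdir C:\temp
setx TEMP C:\temp
setx TMP C:\temp
```

Note that setx only affects newly started command prompts and processes, so open a fresh session before verifying with: echo %TEMP% %TMP%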

2.7. Cluster Verification Utility (CVU) stage check
Now you can run the CVU to check the state of the cluster prior to the install of the Oracle software. Check if there is a newer version of CVU available on OTN compared to the one that ships on the installation media: http://otn.oracle.com/rac

3. Prepare the shared storage for Oracle RAC
This section describes how to prepare the shared storage for Oracle RAC:
1. Shared Disk Layout
2. Enable Automounting of disks on Windows
3. Clean the Shared Disks
4. Create Logical partitions inside Extended partitions
5. Drive Letters
6. View Disks
7. Marking Disk Partitions for use by ASM
8. Verify Clusterware Installation Readiness

3.1. Shared Disk Layout
It is assumed that the two nodes have local disk primarily for the operating system and the local Oracle homes, labelled C:. The Oracle Grid Infrastructure software also resides on the local disks on each node. The 2 nodes must also share some central disks. These disks must not have caching enabled at the node level, i.e. if the HBA drivers support caching of reads/writes it should be disabled. If the SAN supports caching that is visible to all nodes, then this can be enabled.

3.1.1. Grid Infrastructure Shared Storage
With Oracle 11gR2 it is considered a best practice to store the OCR and Voting Disk within ASM and to maintain the ASM best practice of having no more than 2 diskgroups (Flash Recovery Area and Database Area). This means that the OCR and Voting disk would be stored along with the database related files. If you are utilizing external redundancy for your disk groups, this means you will have 1 Voting Disk and 1 OCR.

For those who wish to utilize Oracle supplied redundancy for the OCR and Voting disks, you could create a separate (3rd) ASM diskgroup having a minimum of 2 fail groups (a total of 3 disks). This configuration will provide 3 multiplexed copies of the Voting Disk and a single OCR which takes on the redundancy of that disk group (mirrored within ASM). The minimum size of the 3 disks that make up this diskgroup is 1 GB. This diskgroup will also be used to store the ASM SPFILE.

For demonstration purposes within this cookbook, we will be using the more complex of the above configurations by creating a 3rd diskgroup for storage of the OCR and Voting Disks. Our third disk group will be normal redundancy, allowing for 3 Voting Disks and a single OCR which takes on the redundancy of that diskgroup.

Disk Number  Volume    Size(MB)  ASM Label Prefix  Diskgroup  Redundancy
Disk 1       Volume 2  1024      OCR_VOTE          OCR_VOTE   Normal
Disk 2       Volume 3  1024      OCR_VOTE          OCR_VOTE   Normal
Disk 3       Volume 4  1024      OCR_VOTE          OCR_VOTE   Normal
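For the CVU stage check mentioned in section 2.7 above, a typical pre-install invocation is sketched below. It assumes you run it from the unzipped Grid Infrastructure staging area on Node 1, and racnode1/racnode2 are the example node names used in this guide:

```bat
rem Pre-install check for a Grid Infrastructure (CRS) installation on both nodes
runcluvfy.bat stage -pre crsinst -n racnode1,racnode2 -verbose
```

Review and resolve any FAILED checks in the verbose output before starting the Grid Infrastructure installer.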

3.1.2. ASM Shared Storage
It is recommended that ALL ASM disks within a disk group are of the same size and carry the same performance characteristics. Whenever possible, Oracle also recommends sticking to the SAME (Stripe And Mirror Everything) methodology by using RAID 1+0. If SAN level redundancy is available, external redundancy should be used for database storage on ASM.

Number of LUNs(Disks)  RAID Level  Size(GB)  ASM Label Prefix  Diskgroup  Redundancy
4                      1+0         100       DBDATA            DBDATA     External
4                      1+0         100       DBFLASH           DBFLASH    External

In this document we will use the diskpart command line tool to manage these LUNs. For Microsoft Windows 2003 it is possible to use diskmgmt.msc instead of diskpart (as used in the following sections) to create these partitions. For Microsoft Windows 2008, diskmgmt.msc cannot be used instead of diskpart to create these partitions. You must create logical drives inside of extended partitions for the disks to be used by Oracle Grid Infrastructure and Oracle ASM. There must be no drive letters assigned to any of Disk 1 – Disk 10 on any node.

3.2. Enable Automount
You must enable automounting of disks for them to be visible to Oracle Grid Infrastructure. On each node, log in as someone with Administrator privileges, then click START->RUN and type diskpart:

C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: WINNODE1

DISKPART> AUTOMOUNT ENABLE

Repeat the above command on all nodes in the cluster.

3.3. Clean the Shared Disks
You may want to clean your shared disks before starting the install. Cleaning will remove data from any previous failed install (see a later Appendix for coping with failed installs). WARNING: this will destroy all of the data on the disk. Do not select the disk containing the operating system or you will have to reinstall the OS. Cleaning the disk 'scrubs' every block on the disk; this may take some time to complete.

DISKPART> list disk

  Disk ###  Status   Size     Free     Dyn  Gpt
  --------  ------   -------  -------  ---  ---
  Disk 0    Online   34 GB    0 B
  Disk 1    Online   1024 MB  1024 MB
  Disk 2    Online   1024 MB  1024 MB
  Disk 3    Online   1024 MB  1024 MB
  Disk 4    Online   1024 MB  1024 MB
  Disk 5    Online   1024 MB  1024 MB
  Disk 6    Online   100 GB   100 GB
  Disk 7    Online   100 GB   100 GB
  Disk 8    Online   100 GB   100 GB
  Disk 9    Online   100 GB   100 GB

Clean the Shared Disks

You may want to clean your shared disks before starting the install. Cleaning will remove data from any previous failed install (but see a later Appendix for coping with failed installs). Cleaning the disk 'scrubs' every block on the disk, so this may take some time to complete. WARNING: this will destroy all of the data on the disk. Do not select the disk containing the operating system or you will have to reinstall the OS.

On Node1, from within diskpart, clean disks 1 – 10 (not Disk 0, as this is the local C: drive):

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> clean all

Repeat the select disk / clean all sequence for disks 3 through 10.

Create Logical partitions inside Extended partitions

Assuming the disks you are going to use are completely empty, you must create an extended partition and then, inside that partition, a logical partition:

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.

DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.

Repeat the select disk / create part ext / create part log sequence for disks 3 through 10.
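The repetitive diskpart sessions in the last two sections can also be driven from script files (diskpart accepts a script with `diskpart /s <file>`). A minimal, hedged sketch that generates both the clean script and the partition-creation script for the disk numbering used in this document; the output file names are arbitrary:

```python
# Sketch: generate diskpart scripts for the shared disks used in this guide.
# Disk 0 (the OS disk) is deliberately refused -- cleaning it would destroy
# the operating system, as the warning above notes.

def diskpart_script(disk_numbers, commands):
    """Emit 'select disk N' followed by the given commands, per disk."""
    lines = []
    for n in disk_numbers:
        if n == 0:
            # Never touch the disk holding the operating system.
            raise ValueError("refusing to script disk 0 (OS disk)")
        lines.append("select disk %d" % n)
        lines.extend(commands)
    return "\n".join(lines) + "\n"

shared_disks = range(1, 11)  # disks 1-10 as listed above
clean = diskpart_script(shared_disks, ["clean all"])
parts = diskpart_script(shared_disks, ["create part ext", "create part log"])

with open("clean_disks.txt", "w") as f:
    f.write(clean)
with open("create_parts.txt", "w") as f:
    f.write(parts)
```

Run the generated files on Node1 from an Administrator command prompt, e.g. `diskpart /s clean_disks.txt`, then `diskpart /s create_parts.txt`.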

View Created partitions

DISKPART> list disk

  Disk ###  Status   Size     Free     Dyn  Gpt
  --------  -------  -------  -------  ---  ---
  Disk 0    Online     34 GB     0 B
  Disk 1    Online   1024 MB    0 MB
  Disk 2    Online   1024 MB    0 MB
  Disk 3    Online   1024 MB    0 MB
  Disk 4    Online   1024 MB    0 MB
  Disk 5    Online   1024 MB    0 MB
  Disk 6    Online    100 GB    0 MB
  Disk 7    Online    100 GB    0 MB
  Disk 8    Online    100 GB    0 MB
  Disk 9    Online    100 GB    0 MB
  Disk 10   Online    100 GB    0 MB

List Drive Letters

Diskpart should not add drive letters to the partitions on the local node, but the partitions on the other node may have drive letters assigned. You must remove them.

Remove Drive Letters

You need to remove the drive letters D E F G H I J K L M; these relate to volumes 1 – 10. Do NOT remove drive letter C, which in this case is the local disk (volume 0 in this example). Using diskpart on Node2:

DISKPART> list volume

  Volume ###  Ltr  Label  Fs    Type       Size     Status   Info
  ----------  ---  -----  ----  ---------  -------  -------  ------
  Volume 0    C           NTFS  Partition    16 GB  Healthy  System
  Volume 1    D           RAW   Partition  1023 MB  Healthy
  Volume 2    E           RAW   Partition  1023 MB  Healthy
  Volume 3    F           RAW   Partition  1023 MB  Healthy
  Volume 4    G           RAW   Partition  1023 MB  Healthy
  Volume 5    H           RAW   Partition  1023 MB  Healthy
  Volume 6    I           RAW   Partition   100 GB  Healthy
  Volume 7    J           RAW   Partition   100 GB  Healthy
  Volume 8    K           RAW   Partition   100 GB  Healthy
  Volume 9    L           RAW   Partition   100 GB  Healthy
  Volume 10   M           RAW   Partition   100 GB  Healthy

Notice that the volumes are listed in a completely different order compared to the disk list. On earlier versions of Windows 2003, a reboot of the 'other' node will be required for the new partitions to become visible.
Windows 2003 SP2 and Windows 2008 do not suffer from this issue.
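A quick way to double-check the list disk output in the View Created partitions step is to parse it and flag any shared disk that still shows free space (meaning the extended/logical partitions did not consume the whole disk). A hedged sketch; the column positions assume the transcript format shown above:

```python
# Sketch: scan a "DISKPART> list disk" capture for shared disks that still
# show free space after partitioning. Disk 0 (the OS disk) is ignored.

def disks_with_free_space(listing):
    """Return disk numbers (excluding 0) whose Free column is not 0."""
    leftovers = []
    for line in listing.splitlines():
        parts = line.split()
        # Data rows look like: Disk 3 Online 1024 MB 0 MB
        if len(parts) >= 7 and parts[0] == "Disk" and parts[1].isdigit():
            number = int(parts[1])
            free_value = parts[5]
            if number != 0 and free_value != "0":
                leftovers.append(number)
    return leftovers

sample = """Disk ###  Status  Size     Free
Disk 0    Online  34 GB    0 B
Disk 1    Online  1024 MB  0 MB
Disk 2    Online  1024 MB  1024 MB
"""
print(disks_with_free_space(sample))  # → [2]
```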

DISKPART> select volume 1
Volume 1 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.

Repeat the select volume / remov sequence for volumes 3 through 10.
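The follow-up check — that no RAW partition still carries a drive letter — can also be done mechanically against a saved list volume capture. A sketch, assuming the column layout shown in these transcripts:

```python
# Sketch: scan a "DISKPART> list volume" capture (taken on the second node)
# for RAW volumes that still have a drive letter assigned; these are the
# letters that must be removed before installing Grid Infrastructure.

def raw_volumes_with_letters(listing):
    """Return (volume_number, letter) pairs for RAW volumes with a letter."""
    flagged = []
    for line in listing.splitlines():
        parts = line.split()
        # Lettered rows look like: Volume 4 G RAW Partition 1023 MB Healthy
        if len(parts) >= 4 and parts[0] == "Volume" and parts[1].isdigit():
            if len(parts[2]) == 1 and parts[2].isalpha() and "RAW" in parts:
                flagged.append((int(parts[1]), parts[2]))
    return flagged

sample = """Volume 0   C   NTFS  Partition  16 GB    Healthy  System
Volume 1       RAW   Partition  1023 MB  Healthy
Volume 2   E   RAW   Partition  1023 MB  Healthy
"""
print(raw_volumes_with_letters(sample))  # → [(2, 'E')]
```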

List volumes on Second node

You should check that none of the RAW partitions have drive letters assigned:

DISKPART> list vol

  Volume ###  Ltr  Label  Fs    Type       Size     Status   Info
  ----------  ---  -----  ----  ---------  -------  -------  ------
  Volume 0    C           NTFS  Partition    16 GB  Healthy  System
  Volume 1                RAW   Partition  1023 MB  Healthy
  Volume 2                RAW   Partition  1023 MB  Healthy
  Volume 3                RAW   Partition  1023 MB  Healthy
  Volume 4                RAW   Partition  1023 MB  Healthy
  Volume 5                RAW   Partition  1023 MB  Healthy
  Volume 6                RAW   Partition   100 GB  Healthy
  Volume 7                RAW   Partition   100 GB  Healthy
  Volume 8                RAW   Partition   100 GB  Healthy
  Volume 9                RAW   Partition   100 GB  Healthy
  Volume 10               RAW   Partition   100 GB  Healthy

You can now exit diskpart on all nodes.

Marking Disk Partitions for use by ASM

The only partitions that the Oracle Universal Installer acknowledges on Windows systems are logical drives that are created on top of extended partitions and that have been stamped as candidate ASM disks. Therefore, prior to running the OUI, the disks that are to be used by Oracle RAC MUST be stamped using ASM Tool. ASM Tool is available in two different flavors: command line (asmtool) and graphical (asmtoolg). Both utilities can be found under the asmtool directory within the Grid Infrastructure installation media. For this installation, asmtoolg will be used to stamp the ASM disks.

The following table outlines the summary of the disks that will be stamped for ASM usage:

Number of LUNs (Disks)   RAID Level   Size      ASM Label Prefix   Diskgroup   Redundancy
3                        1+0          1024 MB   OCR_VOTE           OCR_VOTE    Normal
4                        1+0          100 GB    DBDATA             DBDATA      External
4                        1+0          100 GB    DBFLASH            DBFLASH     External

Perform this task as follows:

• Within Windows Explorer, navigate to the asmtool directory within the Grid Infrastructure installation media and double-click the "asmtoolg.exe" executable.
• Within the ASM Tool GUI, select "Add or Change Label" and click "Next".

• On the Select Disks screen, choose the appropriate disks to be assigned a label and enter an ASM Label Prefix to make the disks easily identifiable for their intended purpose. After choosing the intended disks and entering the appropriate ASM Label Prefix, click "Next" to continue.
• Review the summary screen and click "Next".

• On the final screen, click "Finish" to update the ASM Disk Labels.
• Repeat these steps for all ASM disks that will differ in their label prefix, size and/or performance characteristics.

Verify Grid Infrastructure Installation Readiness

Prior to installing Grid Infrastructure it is highly recommended to run the Cluster Verification Utility (CLUVFY) to verify that the cluster nodes have been properly configured for a successful Oracle Grid Infrastructure installation. There are various levels at which CLUVFY can be run; at this stage it should be run in the CRS pre-installation mode. Later in this document CLUVFY will be run in pre dbinst mode to validate the readiness for the RDBMS software installation.

Though CLUVFY is packaged with the Grid Infrastructure installation media, it is recommended to download and run the latest version of CLUVFY. The latest version of the CLUVFY utility can be downloaded from:

http://otn.oracle.com/rac

Once the latest version of CLUVFY has been downloaded and installed, execute it as follows to perform the Grid Infrastructure pre-installation verification:

• Login to the server on which the installation will be performed as the Local Administrator.
• Open a command prompt and run CLUVFY as follows (where <node_list> is a comma-separated list of the cluster node hostnames):

cmd> runcluvfy stage -post hwos -n <node_list> [-verbose]
cmd> runcluvfy stage -pre crsinst -n <node_list> [-verbose]

If any errors are encountered, these issues should be investigated and resolved before proceeding with the installation.

4. Oracle Grid Infrastructure Install

4.1. Basic Grid Infrastructure Install (without GNS and IPMI)

• Shutdown all Oracle processes running on all nodes (not necessary if performing the install on new servers).
• Start the Oracle Universal Installer (OUI) by running setup.exe as the Local Administrator user from the Clusterware directory (db directory if using the DVD Media, see step 3) of the 11g Release 2 (11.2.0.1) installation media.
• If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the Select a Product to Install screen; choose Oracle Clusterware 11g and click "Next" to continue. If the OTN media is used, the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be the Welcome screen.
• On the Select Installation Option screen, choose "Install and Configure Grid Infrastructure for a Cluster" and click "Next" to continue.

• Choose the appropriate language on the Select Product Languages screen and click "Next" to continue.
• On the Select Installation Type screen, choose "Advanced Installation" and click "Next" to continue.

• On the Grid Plug and Play Information screen perform the following:
♦ Enter the desired Cluster Name; this name should be unique for the enterprise and CANNOT be changed post installation.
♦ Enter the SCAN Name for the cluster. The SCAN Name must be a DNS entry resolving to 3 IP addresses and MUST NOT be in the hosts file.
♦ Enter the port number for the SCAN Listener; this port defaults to 1521.
♦ Uncheck the "Configure GNS" checkbox.
♦ Click "Next" to continue.
• Add all of the cluster node hostnames and Virtual IP hostnames on the Cluster Node Information screen. By default the OUI only knows about the local node; additional nodes can be added using the "Add" button.
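Since the SCAN name must resolve via DNS to three IP addresses, it is worth checking the resolution before launching the OUI. A small sketch; "localhost" below merely stands in for your real SCAN name, and this only verifies the address count, not that the entries live in DNS rather than the hosts file:

```python
# Sketch: resolve a SCAN name and report the distinct IPv4 addresses it maps
# to. For a properly configured 11gR2 SCAN you should see three addresses.
import socket

def scan_addresses(scan_name):
    """Return the distinct IPv4 addresses a name resolves to."""
    infos = socket.getaddrinfo(scan_name, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})

addresses = scan_addresses("localhost")  # substitute your SCAN name here
print(addresses)
```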

• On the Specify Network Interface Usage screen, make sure that the public and private interfaces are properly specified. Make the appropriate corrections on this screen and click "Next" to continue.

NOTE: The public and private interface names MUST be consistent across the cluster, so if "Public" is public and "Private" is private on one node, the same MUST be true on all of the other nodes in the cluster.

• Choose "Automatic Storage Management (ASM)" on the Storage Option Information screen and click "Next" to create the ASM diskgroup for Grid Infrastructure.

• On the Create ASM Disk Group screen perform the following:
♦ Enter the Disk Group Name that will store the OCR and Voting Disks (e.g. OCR_VOTE).
♦ Choose "Normal" for the Redundancy level.
♦ Select the disks that were previously designated for the OCR and Voting Disks.
♦ Click "Next" to continue.

NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the Storage Prerequisites section of this document and click the "Stamp Disks" button on this screen.

Basic Grid Infrastructure Install (without GNS and IPMI) 23 . • Specify the location for Oracle Base (e.• Enter the appropriate passwords for the SYS and ASMSNMP users of ASM on the Specify ASM Passwords screen and click â–Nextâ– to continue.g. d:\app\oracle) and the Grid Infrastructure Installation (e. • On the Failure Isolation Support screen choose â–Do not use Intelligent Platform Management Interface (IPMI)â– and click â–Nextâ– to continue. NOTE Continuing with failed prerequisites may result in a failed installation.g. therefore it is recommended that failed prerequisite checks be resolved before continuing. d:\OraGrid) on the Specify Installation Location screen.1. 4. Click â–Nextâ– to allow the OUI to perform the prerequisite checks on the target nodes.

• After the prerequisite checks have successfully completed, review the summary of the pending installation and click "Finish" to install and configure Grid Infrastructure.

• On the Finish screen click "Close" to exit the OUI.
• Once the installation has completed, check the status of the CRS resources as follows:

cmd> %GI_HOME%\bin\crsctl stat res -t

NOTE: All resources should report as online with the exception of GSD and OC4J. GSD will only be online if Grid Infrastructure is managing a 9i database, and OC4J is reserved for use in a future release. Though these resources are offline it is NOT supported to remove them.

The output of the above command will be similar to the following:

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.asm
               ONLINE  ONLINE       ratwin01                 Started
               ONLINE  ONLINE       ratwin02                 Started
ora.eons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.gsd
               OFFLINE OFFLINE      ratwin01
               OFFLINE OFFLINE      ratwin02
ora.net1.network
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.ons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ratwin02
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ratwin02
ora.oc4j
      1        OFFLINE OFFLINE
ora.ratwin01.vip
      1        ONLINE  ONLINE       ratwin01
ora.ratwin02.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan1.vip
      1        ONLINE  ONLINE       ratwin01
ora.scan2.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan3.vip
      1        ONLINE  ONLINE       ratwin02
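A listing like this can also be checked mechanically. The sketch below scans crsctl stat res -t output and reports resources that are not fully ONLINE, ignoring ora.gsd and ora.oc4j (which are expected to be OFFLINE in 11gR2 and must not be removed); the block-oriented parsing is an assumption based on the sample listing above:

```python
# Sketch: flag CRS resources whose state lines are not ONLINE, skipping the
# two resources that are legitimately OFFLINE on a fresh 11gR2 install.

EXPECTED_OFFLINE = ("ora.gsd", "ora.oc4j")

def not_online(output):
    """Return names of resources with any instance reported OFFLINE."""
    problems = set()
    current = None
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("ora."):
            current = stripped          # a resource-name line starts a block
        elif current and "OFFLINE" in stripped:
            if not current.startswith(EXPECTED_OFFLINE):
                problems.add(current)
    return sorted(problems)

sample = """ora.asm
               ONLINE  ONLINE       ratwin01
ora.gsd
               OFFLINE OFFLINE      ratwin01
ora.ons
               ONLINE  OFFLINE      ratwin02
"""
print(not_online(sample))  # → ['ora.ons']
```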

Grid Infrastructure Home Patching 27 . • The first OUI screen will prompt for an email address to allow you to be notified of security issues and to enable Oracle Configuration Manager.0. • Choose â–Install database software onlyâ– on the Select Installation Option screen and click â–Nextâ– to continue. If you wish NOT to be notified or not to use Oracle Configuration Manager leave the email box blank and uncheck the check box. 5. • If using the DVD installation media (from edelivery. Choose Oracle Database 11g and click next to continue.2.exe command as the Local Administrator user from the DB directory of the Oracle Database 11g Release 2 (11. The following outlines how to achieve this task: cmd> runcluvfy stage -pre dbinst -n -verbose • Start the OUI by running the setup. In order to achieve this CLUVFY must be run in pre dbinst mode. RDBMS Software Install Prior to installing the Database Software (RDBMS) it is highly recommended to run the cluster verification utility (CLUVFY) to verify that Grid Infrastructure has been properly installed and the cluster nodes have been properly configured to support the Database Software installation.com) the first screen to appear will be the Select a Product to Install screen. After entering the desired information click â–Nextâ– to continue.1) installation media and click next to begin the installation process. If the OTN media is used the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be Installation Type screen .5. Grid Infrastructure Home Patching This Chapter is a placeholder 6.oracle.

• On the Grid Installation Options screen, choose "Real Application Clusters database installation" and select ALL nodes for the software installation. Click "Next" to continue.
• Choose the appropriate language on the Select Product Languages screen and click "Next" to continue.

• On the Select Database Edition screen choose "Enterprise Edition" and click "Next" to continue.

NOTE: If there is a need for specific database options to be installed or not installed, these options can be chosen by clicking the "Select Options" button.

• Specify the location for Oracle Base (e.g. d:\app\oracle) and the database software installation (e.g. d:\app\oracle\product\11.2.0\db_1) on the Specify Installation Location screen. Click "Next" to allow the OUI to perform the prerequisite checks on the target nodes.

NOTE: It is recommended that the same Oracle Base location is used for the database installation that was used for the Grid Infrastructure installation.

NOTE: Continuing with failed prerequisites may result in a failed installation; therefore it is recommended that failed prerequisite checks be resolved before continuing.

• After the prerequisite checks have successfully completed, review the summary of the pending installation and click "Finish" to install the database software.

• On the Finish screen click "Close" to exit the OUI.

7. RAC Home Patching

This Chapter is a placeholder.

8. Run ASMCA to create diskgroups

Prior to creating a database on the cluster, the ASM diskgroups that will house the database must be created. In an earlier chapter, the ASM disks for the database diskgroups were stamped for ASM usage. We will now use the ASM Configuration Assistant to create the diskgroups.

Run ASMCA to create diskgroups 32 . The Flash Recovery Area will house recovery-related files are created. • While on the DiskGroups? tab. ♦ Select the candidate disks to include in the DiskGroup? NOTE: If no Candidate disks are available the disks have not yet been stamped for use by ASM. archived redo logs. DBDATA or DBFLASH) ♦ Choose External Redundancy (assuming redundancy is provided at the SAN level). control files. To stamp the disks review the instructions in the 'Rac11gR2WindowsPrepareDisk' chapter of this document and click the â–Stamp Disksâ– button on this screen. The Database Area Diskgroup will house active database files such as datafiles. online redo logs. click the DiskGroups? tab. NOTE: To reduce the complexity of managing ASM and its diskgroups. click the â–Createâ– button to display the Create DiskGroup? window. 8. backup sets. a Database Area DiskGroup? and a Flash Recovery Area DiskGroup? . Oracle recommends that generally no more than two diskgroups be maintained for database storage.g. such as multiplexed copies of the current control file and online redo logs. and change tracking files (used in incremental backups) are stored. and flashback log files • Within the Create DiskGroup? window perform the following: ♦ Enter the desired DiskGroup? Name (e.• Run the ASM Configuration Assistant (ASMCA) from the ASM Home by executing the following: cmd> %GI_HOME%\bin\asmca • After launching ASMCA.

♦ Click "OK" to create the DiskGroup.
• Repeat the previous two steps to create all necessary diskgroups.
• Once all the necessary DiskGroups have been created, click "Exit" to exit from ASMCA.

Note: It is Oracle's best practice to have an OCR mirror stored in a second disk group. To follow this recommendation, add an OCR mirror. Mind that you can only have one OCR in a diskgroup.

Action:
1. To add an OCR mirror to an Oracle ASM disk group, ensure that the Oracle Clusterware stack is running and run the following command as administrator:
2. D:\OraGrid\BIN>ocrconfig -add +DBDATA
3. D:\OraGrid\BIN>ocrcheck -config

9. Run DBCA to create the database

To help to verify that the system is prepared to successfully create a RAC database, use the following Cluster Verification Utility command syntax:

cmd> runcluvfy stage -pre dbcfg -n all -d c:\app\11.2.0\grid -verbose

Perform the following to create an 11gR2 RAC database on the cluster:

• Run the Database Configuration Assistant (DBCA) from the Database home by executing the following:

cmd> %ORACLE_HOME%\bin\dbca

• On the Welcome screen, select "Oracle Real Application Clusters" and click "Next".
• On the Operation screen, select "Create Database" and click "Next".

• The Database Templates screen will now be displayed. Select the "General Purpose or Transaction Processing" database template and click "Next" to continue.
• On the Database Identification screen perform the following:
♦ Select the "Admin Managed" configuration type.
♦ Enter the desired Global Database Name.
♦ Enter the desired SID prefix.
♦ Select ALL the nodes in the cluster.
♦ Click "Next" to continue.

Run DBCA to create the database 36 . • Enter the appropriate database credentials for the default user accounts and click â–Nextâ– to continue.• On the Management Options screen. choose â–Configure Enterprise Managerâ–. Click â–Nextâ– to continue. Once Grid Control has been installed on the system this option may be unselected to allow the database to be managed by Grid Control. 9.

• On the Database File Locations screen perform the following:
♦ Choose Automatic Storage Management (ASM).
♦ Select "Use Oracle-Managed Files" and specify the ASM Disk Group that will house the database files (e.g. +DBDATA).
♦ Click "Next" and enter the ASMSNMP password when prompted.
♦ Click "OK" after entering the ASMSNMP password to continue.

• For the Recovery Configuration, select "Specify Flash Recovery Area", enter +DBFLASH for the location and choose an appropriate size. It is also recommended to select "Enable Archiving" at this point. Click "Next" to continue.
• On the Database Content screen, you are able to create the sample schemas. Check the checkbox if the sample schemas are to be loaded into the database and click "Next" to continue.

• Enter the desired memory configuration and initialization parameters on the Initialization Parameters screen and click "Next" to continue.
• On the Database Storage screen, review the file layout and click "Next" to continue.

NOTE: At this point you may want to increase the number of redo logs per thread to 3 and increase the size of the logs from the default of 50 MB.
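On sizing those logs: a common rule of thumb (an assumption here, not something this document prescribes) is to size each redo log so that log switches occur roughly every 20 minutes at the peak redo generation rate. The arithmetic is simple:

```python
# Sketch: back-of-envelope redo log sizing. The 20-minute switch target is a
# widely used rule of thumb, not a value from this guide; measure your real
# peak redo rate (e.g. from V$SYSSTAT "redo size" deltas) before deciding.

def redo_log_size_mb(peak_redo_mb_per_min, target_switch_minutes=20):
    """Size each redo log so one fills in about target_switch_minutes."""
    return peak_redo_mb_per_min * target_switch_minutes

# At a peak of 10 MB/min of redo, the default 50 MB logs switch every
# 5 minutes; a 200 MB log restores a ~20-minute switch interval.
print(redo_log_size_mb(10))  # → 200
```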

• On the last screen, ensure that "Create Database" is checked and click "Finish" to review the summary of the pending database creation.
• Click "OK" on the Summary window to create the database.

