Rac11gR2OnWindows
1. Introduction
   1.1. Overview of new concepts in 11gR2 Grid Infrastructure
        1.1.1. SCAN
        1.1.2. GNS
        1.1.3. OCR and Voting on ASM storage
        1.1.4. Intelligent Platform Management Interface (IPMI)
        1.1.5. Time sync
        1.1.6. Clusterware and ASM share the same Oracle Home
   1.2. System Requirements
        1.2.1. Hardware Requirements
        1.2.2. Network Hardware Requirements
        1.2.3. IP Address Requirements
        1.2.4. Installation method
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
        2.1.1. User Account changes specifically for Windows 2008
        2.1.2. Net Use Test
        2.1.3. Remote Registry Connect
   2.2. Networking
        2.2.1. Network Ping Tests
        2.2.2. Network Interface Binding Order (and Protocol Priorities)
        2.2.3. Disable DHCP Media Sense
        2.2.4. Disable SNP Features
   2.3. Stopping Services
   2.4. Synchronizing the Time on ALL Nodes
   2.5. Environment Variables
   2.6. Stage the Oracle Software
   2.7. Cluster Verification Utility (CVU) stage check
3. Prepare the shared storage for Oracle RAC
   3.1. Shared Disk Layout
        3.1.1. Grid Infrastructure Shared Storage
        3.1.2. ASM Shared Storage
   3.2. Enable Automount
   3.3. Clean the Shared Disks
   3.4. Create Logical partitions inside Extended partitions
        3.4.1. View Created partitions
   3.5. List Drive Letters
        3.5.1. Remove Drive Letters
        3.5.2. List volumes on Second node
   3.6. Marking Disk Partitions for use by ASM
   3.7. Verify Grid Infrastructure Installation Readiness
4. Oracle Grid Infrastructure Install
   4.1. Basic Grid Infrastructure Install (without GNS and IPMI)
5. Grid Infrastructure Home Patching
6. RDBMS Software Install
7. RAC Home Patching
8. Run ASMCA to create diskgroups
9. Run DBCA to create the database
Rac11gR2OnWindows
1. Introduction
1.1. Overview of new concepts in 11gR2 Grid Infrastructure
1.1.1. SCAN
The Single Client Access Name (SCAN) is the address used by all clients connecting to the cluster. It is a domain name registered to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS), and it resolves to all of the addresses allocated to it. Because clients connect through the SCAN rather than through individual node names, the SCAN eliminates the need to change client configuration when nodes are added to or removed from the cluster. Clients using the SCAN can also access the cluster using EZCONNECT. During Oracle Grid Infrastructure installation, listeners are created for each of the SCAN addresses, and Oracle Grid Infrastructure controls which server responds to a SCAN address request. Provide three IP addresses in the DNS for SCAN name mapping; this ensures high availability. The SCAN addresses must be on the same subnet as the VIP addresses for nodes in the cluster, and the SCAN domain name must be unique within your corporate network.
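Conceptually, the DNS round-robin behind a SCAN rotates the order of the three registered addresses on each query, so connection attempts spread across the SCAN listeners. A minimal sketch of that rotation (the IP addresses are placeholders, not values from this guide):

```python
from itertools import cycle

# Three placeholder SCAN addresses, as would be registered in DNS
scan_ips = ["192.0.2.110", "192.0.2.111", "192.0.2.112"]

def round_robin(ips):
    """Yield the address list rotated by one position per query,
    mimicking DNS round-robin resolution of a SCAN name."""
    n = len(ips)
    for start in cycle(range(n)):
        yield ips[start:] + ips[:start]

resolver = round_robin(scan_ips)
first = next(resolver)   # full list, starting at the first address
second = next(resolver)  # rotated by one, so a different listener is tried first
```

A client that always takes the first address returned will thus land on a different SCAN listener from one lookup to the next.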
1.1.2. GNS
In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can simplify this setup by using DHCP. To use GNS, DHCP must be configured in the subdomain in which the cluster resides.
1.1.4. Intelligent Platform Management Interface (IPMI)

If you intend to use IPMI, then you must provide an administration account username and password when prompted during installation.
An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all the Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount of shared disk space is determined by the size of your database.
The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters and hyphens. Host names containing underscores ("_") are not allowed.
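Candidate host names can be pre-checked against these rules with a small validation helper. The sketch below encodes the constraints stated above (alphanumerics and hyphens only, no underscores, no leading or trailing hyphen); it is an illustration, not an official Oracle check:

```python
import re

# Letters, digits, and interior hyphens only; must start with a letter
# and end with a letter or digit, per the RFC 952 conventions cited above.
_HOSTNAME_RE = re.compile(r"^[A-Za-z](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")

def is_valid_hostname(name: str) -> bool:
    """Return True if 'name' satisfies the host-name rules above."""
    return bool(_HOSTNAME_RE.match(name))

print(is_valid_hostname("racnode1"))   # True
print(is_valid_hostname("rac_node1"))  # False: underscore not allowed
print(is_valid_hostname("racnode1-"))  # False: trailing hyphen
```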
2. Prepare the cluster nodes for Oracle RAC

2.1. User Accounts

2.1.1. User Account changes specifically for Windows 2008

1. Configure the User Account Control elevation behavior:
   - From the Local Security Settings console tree, click Local Policies, and then Security Options.
   - Scroll down to and double-click "User Account Control: Behavior of the elevation prompt for administrators".
   - From the drop-down menu, select "Elevate without prompting (tasks requesting elevation will automatically run as elevated without prompting the administrator)".
   - Click OK to confirm the changes.
   - Repeat the previous steps on ALL cluster nodes.
2. Ensure that the Administrators group is listed under "Manage auditing and security log":
   - Open a command prompt and type "secpol.msc" to launch the Security Policy Console management utility.
   - Click on "Local Policies", then on "User Rights Assignment".
   - Locate and double-click "Manage auditing and security log" in the listing of User Rights Assignments.
   - If the Administrators group is NOT listed in the "Local Security Settings" tab, add the group now.
   - Click OK to save the changes (if changes were made).
   - Repeat the previous steps on ALL cluster nodes.
3. Disable the Windows Firewall. When installing Oracle Grid Infrastructure and/or Oracle RAC it is required to turn off the Windows Firewall. Follow these steps:
   - Click Start, click Run, type "firewall.cpl", and then click OK.
   - In the Firewall Control Panel, click "Turn Windows Firewall on or off" (upper left corner of the window).
   - Choose the "Off" radio button in the "Windows Firewall Settings" window and click OK to save the changes.
   - Repeat the previous steps on ALL cluster nodes.
After the installation is successful, you can enable the Windows Firewall for the public connections. However, to ensure correct operation of the Oracle software, you must add certain executables and ports to the Firewall exception list on all the nodes of a cluster.
See Section 5.1.2, "Configure Exceptions for the Windows Firewall" of the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows for details: http://download.oracle.com/docs/cd/E11882_01/install.112/e10817/postinst.htm#CHDJGCEH

NOTE: The Windows Firewall must be disabled on all the nodes in the cluster before performing any cluster-wide configuration changes, such as:
- Adding a node
- Deleting a node
- Upgrading to a patch release
- Applying a one-off patch
If you do not disable the Windows Firewall before performing these actions, then the changes might not be propagated correctly to all the nodes of the cluster.
2.1.2. Net Use Test

C:\Users\Administrator>net use \\<remote node name>\C$
The command completed successfully.

Repeat the previous 2 steps on ALL cluster nodes.
2.2. Networking
NOTE: This section is intended to be used for installations NOT using GNS.

1. Determine your cluster name. The cluster name should satisfy the following conditions:
   - The cluster name is globally unique throughout your host domain.
   - The cluster name is at least 1 character long and less than 15 characters long.
   - The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
   NOTE: The network interfaces used for the public and private interconnect must be consistently named (have the same name) on every node in the cluster. Common practice is to use the names Public and Private for the interfaces; long names should be avoided and special characters must NOT be used.
2. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node; in other words, use the name displayed by the hostname command, for example: racnode1. It is recommended that NIC teaming is configured; active/passive is the preferred teaming method due to its simple configuration. For Windows 2008, perform the following to rename the network interfaces:
   - Click Start, click Run, type "ncpa.cpl", and then click OK.
   - Determine the intended purpose of each of the interfaces (you may need to view the IP configuration).
   - Right-click the interface to be renamed and click "Rename".
   - Enter the desired name for the interface.
   - Repeat the previous steps on ALL cluster nodes, ensuring that the public and private interfaces have the same name on every node.
3. Determine the virtual hostname for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip.
The virtual hostname must meet the following requirements:
- The virtual IP address and the network name must not be currently in use.
- The virtual IP address must be on the same subnet as your public IP address.
The virtual host name for each node should be registered with your DNS.

4. Determine the private hostname for each node in the cluster. This private hostname does not need to be resolvable through DNS and should be entered in the hosts file (typically located in c:\windows\system32\drivers\etc). A common naming convention for the private hostname is <public hostname>-priv.
   - The private IP should NOT be accessible to servers not participating in the local cluster.
   - The private network should be on standalone dedicated switch(es).
   - The private network should NOT be part of a larger overall network topology.
   - The private network should be deployed on Gigabit Ethernet or better.
   - It is recommended that redundant NICs are configured using teaming; active/passive is the preferred teaming method due to its simple configuration.
5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN VIPs must NOT be in the c:\windows\system32\drivers\etc\hosts file; they must be resolvable by DNS.
6. Even if you are using a DNS, Oracle recommends that you list the public IP, VIP and private addresses for each node in the hosts file on each node. Configure the c:\windows\system32\drivers\etc\hosts file so that it is similar to the following example. NOTE: The SCAN VIP MUST NOT be in the hosts file; placing it there would result in only 1 SCAN VIP for the entire cluster.

#PublicLAN - PUBLIC
192.0.2.100    racnode1.example.com    racnode1
192.0.2.101    racnode2.example.com    racnode2
#VIP
192.0.2.102    racnode1-vip.example.com    racnode1-vip
192.0.2.103    racnode2-vip.example.com    racnode2-vip
#PrivateLAN - PRIVATE
172.0.2.100    racnode1-priv
172.0.2.101    racnode2-priv

After you have completed the installation process, configure clients to use the SCAN to access the cluster. Using the previous example, the clients would use docrac-scan to connect to the cluster.
The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com. The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, as long as it is unique within your network and conforms to the RFC 952 standard.
* Public Ping test:
  - Pinging Node1 from Node1 should return Node1's public IP address.
  - Pinging Node2 from Node1 should return Node2's public IP address.
  Repeat the equivalent tests from Node2, and repeat the tests for the private host names.
If any of the above tests fail you should fix name/address resolution by updating the DNS or local hosts files on each node before continuing with the installation.
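The same resolution checks can be scripted rather than run by hand. The sketch below resolves each name with Python's standard library and compares the result to the address you expect; the name-to-address pairs shown are placeholders to adapt to your environment:

```python
import socket

def check_resolution(expected):
    """Resolve each host name and report whether it matches the
    expected address; returns a dict of name -> bool."""
    results = {}
    for name, expected_ip in expected.items():
        try:
            results[name] = socket.gethostbyname(name) == expected_ip
        except socket.gaierror:
            results[name] = False  # name did not resolve at all
    return results

# Placeholder mapping; replace with your public, VIP, and private entries.
print(check_resolution({"localhost": "127.0.0.1"}))
```

Any False entry indicates a DNS or hosts-file problem to fix before continuing.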
3. Prepare the shared storage for Oracle RAC

3.1. Shared Disk Layout
Number of LUNs (Disks) | RAID Level | Size (GB) | ASM Label Prefix | Diskgroup | Redundancy
In this document we will use the diskpart command line tool to manage these LUNs. You must create logical drives inside extended partitions for the disks to be used by Oracle Grid Infrastructure and Oracle ASM. There must be no drive letters assigned to any of Disks 1 through 10 on any node. For Microsoft Windows 2003 it is possible to use diskmgmt.msc instead of diskpart (as used in the following sections) to create these partitions. For Microsoft Windows 2008, diskmgmt.msc cannot be used instead of diskpart to create these partitions.
  Disk 10    Online        100 GB    100 GB

Now clean disks 1 through 10 (not Disk 0, as this is the local C: drive):

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 3
Disk 3 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 4
Disk 4 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 5
Disk 5 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 6
Disk 6 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 7
Disk 7 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 8
Disk 8 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 9
Disk 9 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 10
Disk 10 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 3
Disk 3 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 4
Disk 4 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 5
Disk 5 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 6
Disk 6 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 7
Disk 7 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 8
Disk 8 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 9
Disk 9 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 10
Disk 10 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
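Because the same select/create sequence repeats for every disk, it can be less error-prone to generate a diskpart script file once and run it with diskpart /s. A sketch of generating such a script (the disk numbers 1 through 10 follow the layout used in this guide):

```python
def diskpart_partition_script(disk_numbers):
    """Build the text of a diskpart script that creates one extended
    partition and one logical drive on each listed disk."""
    lines = []
    for n in disk_numbers:
        lines += [f"select disk {n}",
                  "create partition extended",
                  "create partition logical"]
    return "\n".join(lines) + "\n"

script = diskpart_partition_script(range(1, 11))
# On Windows, write this to a file and run it with: diskpart /s create_parts.txt
with open("create_parts.txt", "w") as f:
    f.write(script)
```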
DISKPART> list volume
  Volume ###  Ltr  Label   Fs     Type       Size     Status   Info
  ----------  ---  ------  -----  ---------  -------  -------  ------
  Volume 0    C            NTFS   Partition  16 GB    Healthy  System
  Volume 1    D            RAW    Partition  1023 MB  Healthy
  Volume 2    E            RAW    Partition  1023 MB  Healthy
  Volume 3    F            RAW    Partition  1023 MB  Healthy
  Volume 4    G            RAW    Partition  1023 MB  Healthy
  Volume 5    H            RAW    Partition  1023 MB  Healthy
  Volume 6    I            RAW    Partition  100 GB   Healthy
  Volume 7    J            RAW    Partition  100 GB   Healthy
  Volume 8    K            RAW    Partition  100 GB   Healthy
  Volume 9    L            RAW    Partition  100 GB   Healthy
  Volume 10   M            RAW    Partition  100 GB   Healthy
Notice that the volumes are listed in a completely different order compared to the disk list.
DISKPART> select volume 1
Volume 1 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 3
Volume 3 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 4
Volume 4 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 5
Volume 5 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 6
Volume 6 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 7
Volume 7 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 8
Volume 8 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 9
Volume 9 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 10
Volume 10 is the selected volume.
DISKPART> remove
DiskPart successfully removed the drive letter or mount point.
DISKPART> list vol
  Volume ###  Ltr  Label   Fs     Type       Size     Status   Info
  ----------  ---  ------  -----  ---------  -------  -------  ------
  Volume 0    C            NTFS   Partition  16 GB    Healthy  System
  Volume 1                 RAW    Partition  1023 MB  Healthy
  Volume 2                 RAW    Partition  1023 MB  Healthy
  Volume 3                 RAW    Partition  1023 MB  Healthy
  Volume 4                 RAW    Partition  1023 MB  Healthy
  Volume 5                 RAW    Partition  1023 MB  Healthy
  Volume 6                 RAW    Partition  100 GB   Healthy
  Volume 7                 RAW    Partition  100 GB   Healthy
  Volume 8                 RAW    Partition  100 GB   Healthy
  Volume 9                 RAW    Partition  100 GB   Healthy
  Volume 10                RAW    Partition  100 GB   Healthy
3.6. Marking Disk Partitions for use by ASM

Perform this task as follows: Within Windows Explorer, navigate to the asmtool directory within the Grid Infrastructure installation media and double-click the asmtoolg.exe executable. Within the asmtoolg GUI, select Add or Change Label and click Next.
On the Select Disks screen choose the appropriate disks to be assigned a label and enter an ASM Label Prefix to make the disks easily identifiable for their intended purpose, size and/or performance characteristics. After choosing the intended disks and entering the appropriate ASM Label Prefix, click Next to continue.
On the final screen, click Finish to update the ASM Disk Labels.
Repeat these steps for all ASM disks that will differ in their label prefix.
3.7. Verify Grid Infrastructure Installation Readiness

Though CLUVFY is packaged with the Grid Infrastructure installation media, it is recommended to download and run the latest version of CLUVFY. The latest version of the CLUVFY utility can be downloaded from: http://otn.oracle.com/rac Once the latest version of CLUVFY has been downloaded and installed, execute it as follows to perform the Grid Infrastructure pre-installation verification: Log in as the Local Administrator to the server on which the installation will be performed, open a command prompt, and run CLUVFY as follows to perform the Oracle Clusterware pre-installation verification:
cmd> runcluvfy stage -post hwos -n <node_list> [-verbose]
cmd> runcluvfy stage -pre crsinst -n <node_list> [-verbose]
If any errors are encountered, these issues should be investigated and resolved before proceeding with the installation.
4. Oracle Grid Infrastructure Install

4.1. Basic Grid Infrastructure Install (without GNS and IPMI)

On the Select Installation Type screen, choose Advanced Installation and click Next to continue.
Choose the appropriate language on the Select Product Languages screen and click Next to continue.
On the Grid Plug and Play Information screen perform the following:
- Enter the desired Cluster Name; this name should be unique for the enterprise and CANNOT be changed post installation.
- Enter the SCAN Name for the cluster. The SCAN Name must be a DNS entry resolving to 3 IP addresses and MUST NOT be in the hosts file.
- Enter the port number for the SCAN Listener; this port defaults to 1521.
- Uncheck the Configure GNS checkbox.
- Click Next to continue.
Add all of the cluster node hostnames and virtual IP hostnames on the Cluster Node Information screen. By default the OUI only knows about the local node; additional nodes can be added using the Add button.
On the Specify Network Interface Usage screen, make sure that the public and private interfaces are properly specified. Make the appropriate corrections on this screen and click Next to continue. NOTE: The public and private interface names MUST be consistent across the cluster; if Public is public and Private is private on one node, the same MUST be true on all of the other nodes in the cluster.
Choose Automatic Storage Management (ASM) on the Storage Option Information screen and click Next to create the ASM diskgroup for Grid Infrastructure.
On the Create ASM Disk Group screen perform the following:
- Enter the Disk Group Name that will store the OCR and Voting Disks (e.g. OCR_VOTE).
- Choose Normal for the Redundancy level.
- Select the disks that were previously designated for the OCR and Voting Disks.
- Click Next to continue.
NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the Storage Prerequisites section of this document and click the Stamp Disks button on this screen.
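When sizing this disk group, remember that ASM redundancy mirrors extents, so usable space is a fraction of raw space. The estimate below is a rough sketch that ignores ASM metadata and free-space headroom:

```python
def usable_gb(disk_sizes_gb, redundancy):
    """Approximate usable capacity of an ASM disk group.
    External redundancy keeps one copy of each extent,
    normal keeps two, and high keeps three."""
    copies = {"external": 1, "normal": 2, "high": 3}[redundancy]
    return sum(disk_sizes_gb) / copies

# Five 1 GB partitions with normal redundancy, as for the OCR_VOTE example
print(usable_gb([1, 1, 1, 1, 1], "normal"))  # 2.5
```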
Enter the appropriate passwords for the SYS and ASMSNMP users of ASM on the Specify ASM Passwords screen and click Next to continue.
On the Failure Isolation Support screen choose Do not use Intelligent Platform Management Interface (IPMI) and click Next to continue.
Specify the location for Oracle Base (e.g. d:\app\oracle) and the Grid Infrastructure installation (e.g. d:\OraGrid) on the Specify Installation Location screen. Click Next to allow the OUI to perform the prerequisite checks on the target nodes. NOTE: Continuing with failed prerequisites may result in a failed installation; therefore it is recommended that failed prerequisite checks be resolved before continuing.
After the prerequisite checks have successfully completed, review the summary of the pending installation and click Finish to install and configure Grid Infrastructure.
Once the installation has completed, check the status of the CRS resources as follows: NOTE: All resources should report as online with the exception of GSD and OC4J. GSD will only be online if Grid Infrastructure is managing a 9i database, and OC4J is reserved for use in a future release. Though these resources are offline, it is NOT supported to remove them.

cmd> %GI_HOME%\bin\crsctl stat res -t

The output of the above command will be similar to the following:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.asm
               ONLINE  ONLINE       ratwin01                 Started
               ONLINE  ONLINE       ratwin02                 Started
ora.eons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.gsd
               OFFLINE OFFLINE      ratwin01
               OFFLINE OFFLINE      ratwin02
ora.net1.network
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
ora.ons
               ONLINE  ONLINE       ratwin01
               ONLINE  ONLINE       ratwin02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ratwin02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ratwin01
ora.oc4j
      1        OFFLINE OFFLINE
ora.ratwin01.vip
      1        ONLINE  ONLINE       ratwin01
ora.ratwin02.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan1.vip
      1        ONLINE  ONLINE       ratwin02
ora.scan2.vip
      1        ONLINE  ONLINE       ratwin01
ora.scan3.vip
      1        ONLINE  ONLINE       ratwin01
6. RDBMS Software Install

Start the OUI by running the setup.exe command as the Local Administrator user from the DB directory of the Oracle Database 11g Release 2 (11.2.0.1) installation media and click Next to begin the installation process. If using the DVD installation media (from edelivery.oracle.com), the first screen to appear will be the Select a Product to Install screen; choose Oracle Database 11g and click Next to continue. If the OTN media is used, the RDBMS/ASM and Clusterware are separate downloads and the first screen to be displayed will be the Installation Type screen. The first OUI screen will prompt for an email address to allow you to be notified of security issues and to enable Oracle Configuration Manager. If you do NOT wish to be notified or to use Oracle Configuration Manager, leave the email box blank and uncheck the check box. After entering the desired information click Next to continue.
Choose Install database software only on the Select Installation Option screen and click Next to continue.
On the Grid Installation Options screen, choose Real Application Clusters database installation and select ALL nodes for the software installation. Click Next to continue.
Choose the appropriate language on the Select Product Languages screen and click Next to continue.
On the Select Database Edition screen choose Enterprise Edition and click Next to continue. NOTE: If there is a need for specific database options to be installed or not installed, these options can be chosen by clicking the Select Options button.
Specify the location for Oracle Base (e.g. d:\app\oracle) and the database software installation (e.g. d:\app\oracle\product\11.2.0\db_1) on the Specify Installation Location screen. Click Next to allow the OUI to perform the prerequisite checks on the target nodes. NOTE: It is recommended that the same Oracle Base location is used for the database installation that was used for the Grid Infrastructure installation.
NOTE: Continuing with failed prerequisites may result in a failed installation; therefore it is recommended that failed prerequisite checks be resolved before continuing.
After the prerequisite checks have successfully completed, review the summary of the pending installation and click Finish to install the database software.
8. Run ASMCA to create diskgroups

Run the ASM Configuration Assistant (ASMCA) from the ASM Home by executing the following:

cmd> %GI_HOME%\bin\asmca

After launching ASMCA, click the Disk Groups tab.
While on the Disk Groups tab, click the Create button to display the Create Disk Group window.

NOTE: To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally no more than two diskgroups be maintained for database storage: a Database Area diskgroup and a Flash Recovery Area diskgroup. The Database Area diskgroup houses active database files such as datafiles, control files, online redo logs, and change tracking files (used in incremental backups). The Flash Recovery Area houses recovery-related files such as multiplexed copies of the current control file and online redo logs, archived redo logs, backup sets, and flashback log files.

Within the Create Disk Group window perform the following:
- Enter the desired Disk Group Name (e.g. DBDATA or DBFLASH).
- Choose External Redundancy (assuming redundancy is provided at the SAN level).
- Select the candidate disks to include in the disk group.
NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp the disks, review the instructions in the 'Rac11gR2WindowsPrepareDisk' chapter of this document and click the Stamp Disks button on this screen.
Repeat the previous two steps to create all necessary diskgroups. Once all the necessary disk groups have been created, click Exit to leave ASMCA. NOTE: It is an Oracle best practice to store an OCR mirror in a second disk group. To follow this recommendation, add an OCR mirror; note that you can only have one OCR per disk group. Action:
To add an OCR mirror to an Oracle ASM disk group, ensure that the Oracle Clusterware stack is running and run the following commands as administrator:

D:\OraGrid\BIN> ocrconfig -add +DBDATA
D:\OraGrid\BIN> ocrcheck -config
9. Run DBCA to create the database

The Database Templates screen will now be displayed. Select the General Purpose or Transaction Processing database template and click Next to continue.
On the Database Identification screen perform the following:
- Select the Admin-Managed configuration type.
- Enter the desired Global Database Name.
- Enter the desired SID prefix.
- Select ALL the nodes in the cluster.
- Click Next to continue.
On the Management Options screen, choose Configure Enterprise Manager. Once Grid Control has been installed on the system, this option may be deselected to allow the database to be managed by Grid Control instead. Click Next to continue.
Enter the appropriate database credentials for the default user accounts and click Next to continue.
On the Database File Locations screen perform the following:
- Choose Automatic Storage Management (ASM).
- Select Use Oracle-Managed Files and specify the ASM disk group that will house the database files (e.g. +DBDATA).
- Click Next and enter the ASMSNMP password when prompted, then click OK to continue.
For the Recovery Configuration, select Specify Flash Recovery Area, enter +DBFLASH for the location and choose an appropriate size. It is also recommended to select Enable Archiving at this point. Click Next to continue.
On the Database Content screen, you can create the sample schemas. Check the check box if the sample schemas are to be loaded into the database and click Next to continue.
Enter the desired memory configuration and initialization parameters on the Initialization Parameters screen and click Next to continue.
On the Database Storage screen, review the file layout and click Next to continue. NOTE: At this point you may want to increase the number of redo logs per thread to 3 and increase the size of the logs from the default of 50MB.
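A common sizing rule of thumb (an assumption of this sketch, not a directive from this guide) is to size online redo logs so that a log switch occurs roughly every 20 minutes at peak redo generation. The arithmetic is simple:

```python
def redo_log_size_mb(peak_redo_mb_per_min, target_switch_minutes=20):
    """Estimate an online redo log size that yields about one log
    switch per target interval at the peak redo generation rate."""
    return peak_redo_mb_per_min * target_switch_minutes

# e.g. 10 MB/min of redo at peak suggests roughly 200 MB logs
print(redo_log_size_mb(10))  # 200
```

Comparing this estimate against the 50 MB default shows why increasing the log size here is usually worthwhile.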
On the last screen, ensure that Create Database is checked and click Finish to review the summary of the pending database creation.