TECHNICAL REPORT

Nimble Storage for Oracle 11gR2 RAC on
Oracle Linux 6.4

TECHNICAL REPORT: NIMBLE STORAGE FOR ORACLE 11GR2 RAC 1

Document Revision

Table 1.

  Date         Revision   Description
  1/11/2013    1.0        Initial Draft
  1/14/2013    1.1        Revised
  2/25/2014    1.2        Revised
  7/31/2014    1.3        Revised iSCSI timeout
  11/17/2014   1.4        Updated iSCSI & Multipath

THIS TECHNICAL TIP IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN
TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS,
WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Nimble Storage: All rights reserved. Reproduction of this material in any manner whatsoever
without the express written permission of Nimble is strictly prohibited.


Table of Contents

Introduction ............................................................... 4
Audience ................................................................... 4
Oracle Linux 6.4 Installation .............................................. 4
Servers Setup for Oracle RAC ............................................... 4
Operating system configuration for Oracle .................................. 5
Oracle Linux Networking Setup .............................................. 6
Oracle Linux iSCSI Setup ................................................... 6
Multipath Setup ............................................................ 7
Nimble Storage Setup ....................................................... 8
Create New Initiator Group ................................................. 8
Create Volumes and Allow Access ........................................... 10
Protection via Volume Collection .......................................... 15
Oracle 11gR2 RAC Setup .................................................... 18
Pre-requisites ............................................................ 18
Oracle ASM Disks Setup .................................................... 18
Install Grid Infrastructure ............................................... 20
Install Oracle Software ................................................... 20

Introduction

The purpose of this technical white paper is to describe the basic setup of Oracle Linux 6.4 running Oracle RAC 11gR2 with Automatic Storage Management (ASM) when deploying on Nimble Storage. Oracle performance tuning is beyond the scope of this paper. Please visit www.oracle.com for the Oracle Performance Tuning Guide for more information on tuning your database.

Audience

This guide is intended for Oracle database solution architects, storage engineers, system administrators and IT managers who analyze, design and maintain a robust database environment on Nimble Storage. It is assumed that the reader has a working knowledge of iSCSI SAN network design and basic Nimble Storage operations. Knowledge of the Oracle Linux operating system, Oracle Clusterware, and Oracle database is also required.

Oracle Linux 6.4 Installation

During the installation of the operating system, we recommend that Logical Volume Manager (LVM) NOT be used for the boot disk. Remove all LVM configurations and use the standard partition method for the boot disk.

• /boot partition
• SWAP partition
• / partition

[root@racnode1 ~]# df
Filesystem     1K-blocks     Used  Available Use% Mounted on
/dev/sda3       32998256 22234944    9087076  71% /
tmpfs            6291456  2607808    3683648  42% /dev/shm
/dev/sda1         126931    50862      69516  43% /boot

After the installation of the operating system, the following daemons need to be disabled on each node:

o Disable selinux
o Disable NetworkManager
o Disable iptables – if iptables is required, make sure it is set up correctly.
o Disable ip6tables

Servers Setup for Oracle RAC

Configuration for a 2-node RAC Oracle database. If not installed already, the following packages are required for Oracle 11gR2 and multipathing:

• compat-db-4.6.21-15.el6.x86_64.rpm
• compat-db42-4.2.52-15.el6.x86_64.rpm
• compat-db43-4.3.29-15.el6.x86_64.rpm
• cvuqdisk-1.0.9-1.rpm
• device-mapper-multipath-0.4.9-64.0.el6.x86_64.rpm
• device-mapper-multipath-libs-0.4.9-64.0.el6.x86_64.rpm

• ksh-20100621-19.el6_3.x86_64.rpm
• libXp-1.0.0-15.1.el6.x86_64.rpm
• libXp-devel-1.0.0-15.1.el6.x86_64.rpm
• libaio-devel-0.3.107-10.el6.x86_64.rpm
• libsysfs-2.1.0-7.el6.x86_64.rpm
• oracleasm-support-2.1.8-1.el6.x86_64.rpm
• sysfsutils-2.1.0-7.el6.x86_64.rpm
• sysstat-9.0.4-20.el6.x86_64.rpm
• unixODBC-2.2.14-12.el6_3.x86_64.rpm
• unixODBC-devel-2.2.14-12.el6_3.x86_64.rpm

Operating system configuration for Oracle

• Add the settings below to the /etc/sysctl.conf file on each node. Run sysctl -p when done.

fs.aio-max-nr = 3145728
# For 11g, recommended value for file-max is 6815744
fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p
# fs.file-max:327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
# For 11g, recommended value for ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p
# net.ipv4.ip_local_port_range:1024 65000
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p
# net.core.wmem_max:262144
# Added min_free_kbytes 50MB to avoid OOM killer on EL4/EL5
vm.min_free_kbytes = 51200
# Nimble Recommended
net.core.wmem_max = 16780000
net.core.rmem_max = 16780000
net.ipv4.tcp_rmem = 10240 87380 16780000
net.ipv4.tcp_wmem = 10240 87380 16780000

• Add the settings below to the /etc/security/limits.conf file on each node.

grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072

grid hard nproc 131072
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 50000000
oracle hard memlock 50000000

• Add the code below to the /etc/profile file on each node.

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

Oracle Linux Networking Setup

• Each node should have four NICs. One is assigned a public IP address, one is assigned a private IP address, and the other two are assigned for the iSCSI networks. After assigning the public and private networks, make sure to generate SSH keys on each node and exchange the keys between the 2 nodes for the root, grid, and oracle users. This is required for RAC installation. After the keys are exchanged, ensure that these users can log in to either node without the need of entering a password. Also ensure the /etc/hosts file contains entries for the Public, Private, and VIP networks of both nodes. SCAN IP addresses are also required in 11gR2 RAC, so make sure they are set up in DNS.

An example of the /etc/hosts file:

# Public
136.18.127.213   racnode1.sedemo.lab        racnode1
136.18.127.214   racnode2.sedemo.lab        racnode2
# Private
10.18.127.73     racnode1-priv.sedemo.lab   racnode1-priv
10.18.127.75     racnode2-priv.sedemo.lab   racnode2-priv
# VIP
136.18.127.215   racnode1-vip.sedemo.lab    racnode1-vip
136.18.127.216   racnode2-vip.sedemo.lab    racnode2-vip

Oracle Linux iSCSI Setup

• Leverage the native iSCSI initiator for Oracle. Ensure the software iSCSI initiator shown below has been installed on each node.

iscsi-initiator-utils-6.2.0.872-41.el6.x86_64

• Configure iSCSI as follows on each node:

• Edit /etc/iscsi/iscsid.conf and modify the following parameters:

o node.session.timeo.replacement_timeout = 120
o node.conn[0].timeo.noop_out_interval = 5
o node.conn[0].timeo.noop_out_timeout = 10
o node.session.nr_sessions = 4
o node.session.cmds_max = 2048
o node.session.queue_depth = 1024

• Make sure the iSCSI NICs (eth1, eth2) have been assigned iSCSI network IP addresses and netmasks. The configuration should look something similar to this for both iSCSI NICs:

DEVICE="eth1"
BOOTPROTO=static
HWADDR="00:50:56:AE:30:71"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="4705a3e9-671a-45a0-8477-8b8b98d56c09"
IPADDR=172.18.128.71
NETMASK=255.255.255.0
MTU=9000

Multipath Setup

• Change directory to /etc and use your favorite editor to create a new file called multipath.conf with the code below:

defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        vendor               "Nimble"
        product              "Server"
        path_selector        "round-robin 0"
        path_checker         tur
        rr_min_io_rq         10
        rr_weight            priorities
        failback             immediate
        path_grouping_policy group_by_serial
        features             "1 queue_if_no_path"
    }
}

• Start multipath by running the following:

o [root@racnode1 ~]# chkconfig multipathd on
o [root@racnode1 ~]# modprobe dm_multipath
o [root@racnode1 ~]# service multipathd start
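A quick way to confirm the iscsid.conf edits landed is to grep the settings back out of the file before starting sessions. This is our own illustrative sketch, not a documented Nimble step; the heredoc below stands in for /etc/iscsi/iscsid.conf on a real node.

```shell
# Sketch: read back the two settings most likely to matter during a
# controller failover test. On a real node, replace the heredoc with
# `grep -E ... /etc/iscsi/iscsid.conf`.
grep -E 'replacement_timeout|nr_sessions' <<'EOF'
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10
node.session.nr_sessions = 4
EOF
```

The same pattern extends to the other four parameters; anything the grep does not echo back was not set.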

Nimble Storage Setup

Creating iSCSI volumes for the Oracle RAC nodes on Nimble Storage is simple.

• First, create a new Initiator Group. This Initiator Group contains the Oracle RAC nodes’ iqn IDs.
• Second, create new volumes for the database and allow access to the Initiator Group.

Create New Initiator Group

To create a new initiator group:

• Open the Nimble Management GUI
• Click on “Manage” and select “Initiator Groups”

• Click on the “New Initiator Group” button and assign a name to the Initiator Group. We recommend using the Oracle cluster name for distinction.
• Click on the “Add Initiator” button. Another window will appear. Enter the Initiator Group Name (rac-cluster in the example below) and the Initiator Name (iqn ID) of both nodes and click “OK”.

You can find the iqn ID of the Oracle RAC nodes by running the following command on the Oracle Linux server:

[root@racnode1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:496c4aead3e1
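When pasting iqn IDs into the “Add Initiator” dialog, only the value after the `=` is wanted. A small sketch (our addition, not part of the original procedure) strips the key name; the heredoc stands in for /etc/iscsi/initiatorname.iscsi.

```shell
# Sketch: print just the iqn ID, ready to paste into the Nimble GUI.
# On a real node: sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi
sed -n 's/^InitiatorName=//p' <<'EOF'
InitiatorName=iqn.1988-12.com.oracle:496c4aead3e1
EOF
```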

Create Volumes and Allow Access

• Click on “Manage” and select “Volumes”
• Click on the “New Volume” button. Enter a name for the volume and a short description.
• Select the “Limit access” radio button, then select the “Limit access to iSCSI initiator group” button and click on the drop-down arrow to select the initiator group for this cluster (rac-cluster from the earlier example). Also select the “Allow multiple initiator access” button, then click “Next”.

• Enter the size for the volume and click “Next”.
• At the Protection screen, select “None” for now and click “Finish”. We will discuss the Protection option later in the paper.

Make sure to create enough volumes for the Oracle rac-cluster. Usually in an Oracle RAC ASM environment, you create one OCR volume for the Clusterware and Voting disk (or three for normal redundancy) and create more volumes for the DATA diskgroup and FRA diskgroup, depending on how you want to architect your Oracle database. For the number of volumes for the DATADG and FRADG, please refer to the Oracle Best Practices Guide.

Once all volumes have been created on the Nimble array, scan for the new volumes on both Oracle RAC nodes and modify the /etc/multipath.conf file on the first Oracle RAC node only, then copy (using scp) the file to the second Oracle RAC node. Here is how you can set up multipath for the Oracle volumes. Please refer to the Oracle Linux documentation for specific parameter settings for other storage devices.

• Scan for disks on the Oracle RAC Linux servers. Note that you need to scan both iSCSI networks; please ensure that you can see the same volume across the two networks. If there is a single discovery IP address, you can use that IP to run the discovery.

[root@racnode1 ~]# iscsiadm -m discovery -t st -p <iSCSI IP #1>
[root@racnode1 ~]# iscsiadm -m discovery -t st -p <iSCSI IP #2>
[root@racnode1 ~]# iscsiadm -m node --login

Note: the command above logs in to all volumes that this Linux server has access to. If you want to log in to a certain volume only, run the discovery as above, then use the command “iscsiadm -m node -T <Target Volume IQN> --login”.

• After the scan completes, you can run the following command to find out which disk serial number belongs to which LUN so you can alias the volumes properly. You can run this command on the first Oracle RAC node only.

[root@racnode1 ~]# for a in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "### $a: `scsi_id -u -g /dev/$a`"; done
### sda:
### sda1:
### sda2:
### sda3:
### sdb: 2bfdb64e06cf5ee696c9ce900abdb3466
### sdd: 2bfdb64e06cf5ee696c9ce900abdb3466
### sdc: 2e4d6149bf39b0c626c9ce900abdb3466
### sde: 2e4d6149bf39b0c626c9ce900abdb3466
### sdg: 2c6c2a2e042cc67d96c9ce900abdb3466
### sdf: 2c6c2a2e042cc67d96c9ce900abdb3466
### sdh: 261dccce8c92d80256c9ce900abdb3466
### sdi: 261dccce8c92d80256c9ce900abdb3466
### sdj: 242d6ac9296637ba66c9ce900abdb3466
### sdk: 242d6ac9296637ba66c9ce900abdb3466

In the output above, sdc and sde are one disk (same serial #), for which I used “racproddata1” as an alias; sdf and sdg are one disk, for which I used “ocr” as an alias; and so forth. You can then modify the multipath.conf on your Oracle Linux servers to use the aliases for the Oracle volumes.

• The new /etc/multipath.conf would look something like this:

defaults {
    user_friendly_names yes
    find_multipaths     yes
}

devices {
    device {
        vendor               "Nimble"
        product              "Server"
        path_selector        "round-robin 0"
        path_checker         tur
        rr_min_io_rq         10
        rr_weight            priorities
        failback             immediate
        features             "1 queue_if_no_path"
        path_grouping_policy group_by_serial
    }
}
blacklist {
    wwid 26353900f02796769
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
multipaths {
    multipath {
        wwid  2c6c2a2e042cc67d96c9ce900abdb3466
        alias ocr
    }
    multipath {
        wwid  2e4d6149bf39b0c626c9ce900abdb3466
        alias racproddata1
    }
    multipath {
        wwid  261dccce8c92d80256c9ce900abdb3466
        alias racproddata2
    }
    multipath {
        wwid  242d6ac9296637ba66c9ce900abdb3466
        alias racprodfra1
    }
    multipath {
        wwid  2bfdb64e06cf5ee696c9ce900abdb3466
        alias racprodfra2
    }
}

• Copy the /etc/multipath.conf file to the second Oracle RAC node and reload the multipathd daemon on both Oracle RAC nodes.

[root@racnode1 ~]# /etc/init.d/multipathd reload
Reloading multipathd: [ OK ]

• Verify multipath on both Oracle RAC nodes.

[root@racnode1 ~]# multipath -ll
racprodfra2 (2bfdb64e06cf5ee696c9ce900abdb3466) dm-0 Nimble,Server
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 23:0:0:0 sdb 8:16 active ready running
  `- 24:0:0:0 sdd 8:48 active ready running
racprodfra1 (242d6ac9296637ba66c9ce900abdb3466) dm-3 Nimble,Server

size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 31:0:0:0 sdj 8:144 active ready running
  `- 32:0:0:0 sdk 8:160 active ready running
racproddata2 (261dccce8c92d80256c9ce900abdb3466) dm-4 Nimble,Server
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 29:0:0:0 sdh 8:112 active ready running
  `- 30:0:0:0 sdi 8:128 active ready running
racproddata1 (2e4d6149bf39b0c626c9ce900abdb3466) dm-1 Nimble,Server
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 25:0:0:0 sdc 8:32 active ready running
  `- 26:0:0:0 sde 8:64 active ready running
ocr (2c6c2a2e042cc67d96c9ce900abdb3466) dm-2 Nimble,Server
size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 27:0:0:0 sdf 8:80 active ready running
  `- 28:0:0:0 sdg 8:96 active ready running

• Change the disk IO scheduler to “noop”

To set it at boot time, add the elevator option at the kernel line in the /etc/grub.conf file:

elevator=noop

To manually set it for all disks:

[root@mktg04 ~]# multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN; do echo noop > /sys/block/${LUN}/queue/scheduler; done

• Change max_sectors_kb to “1024”

To change max_sectors_kb to 1024 for a single volume:

[root@racnode1 ~]# echo 1024 > /sys/block/sd?/queue/max_sectors_kb

To change all volumes:

multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN
do
    echo 1024 > /sys/block/${LUN}/queue/max_sectors_kb
done

Note: To make this change persistent after reboot, add the commands in the /etc/rc.local file.

• Change vm dirty writeback and expire to “100”

To change vm dirty writeback and expire:

[root@racnode1 ~]# echo 100 > /proc/sys/vm/dirty_writeback_centisecs

[root@racnode1 ~]# echo 100 > /proc/sys/vm/dirty_expire_centisecs

Note: To make this change persistent after reboot, add the commands in the /etc/rc.local file.

• Use the “performance” setting for all available CPUs on the host when possible.

To change the CPU governor setting to performance:

[root@racnode1 ~]# echo performance > /sys/devices/system/cpu/<cpu #>/cpufreq/scaling_governor

To change all CPUs:

[root@racnode1 ~]# for a in $(ls -ld /sys/devices/system/cpu/cpu[0-9]* | awk '{print $NF}') ; do echo performance > $a/cpufreq/scaling_governor ; done

Note: To make this change persistent after reboot, add the commands in the /etc/rc.local file.

Protection via Volume Collection

Once you have created all volumes for the Oracle rac-cluster, you can manually add the volumes to a Volume Collection. A Volume Collection allows you to take a snapshot via a schedule or ad hoc using the Nimble Storage management GUI or CLI. In an Oracle environment where multiple volumes have been created for a particular database, you need to take a snapshot of all volumes concurrently, for consistency. The Volume Collection allows you to do just that. After a snapshot has been taken using the Volume Collection feature, the snapshot(s) will be available for recovery based on your retention policy setting. The snapshot(s) can be deleted manually if you wish to do so. Please note that you cannot delete a snapshot if there is a clone volume associated with that snapshot.

This particular feature is very useful in an Oracle environment. It allows you to create a clone of your production database for Test, Development, or Staging environments quickly and easily. Below are the steps to create a Volume Collection and assign the Oracle volumes to it.

• In the Nimble management GUI, click on “Manage”, “Protection”, “Volume Collection”

• Click on the “New Volume Collection” button
• A “Create a volume collection” window appears. Enter a name for this volume collection, select “Blank schedule”, and click “Next”.

• A “Synchronization” window appears. As of this writing, you can select the “None” radio button and click “Next”.
• A “Schedule” window appears. Enter a name and your desired schedule/retention and click “Next”. You must have at least one protection (snapshot) schedule defined.

• A “Volume” window appears. This is where you need to add all of your Oracle volumes to this volume collection. When done, click “Finish”.

Oracle 11gR2 RAC Setup

Now that the operating system and Nimble storage have been set up for the Oracle rac-cluster, we’ll discuss installing Grid Infrastructure and the Oracle software.

Pre-requisites

• Create a local directory on the boot LUN or a separate Nimble LUN for the Grid Infrastructure and Oracle software on both Oracle RAC nodes. The directories must be exactly the same on both nodes.
• Create group ID “oinstall” on both servers
• Create group ID “dba” on both servers
• Create user ID “grid” on both servers
• Create user ID “oracle” on both servers

Oracle ASM Disks Setup

o Run the “/etc/init.d/oracleasm configure” command to configure ASM disks on both nodes.

[root@racnode1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) []: y
Scan for Oracle ASM disks on boot (y/n) []: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

o Modify the /etc/sysconfig/oracleasm file as follows on both nodes.

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=grid

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=oinstall

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"

o Make sure the ASM module is loaded on both nodes. If not, then load it.

[root@oraclelinux1 ~]# lsmod | grep oracleasm
oracleasm              53352  1

o Create ASM disks on the first node only

[root@racnode1 ~]# /etc/init.d/oracleasm createdisk OCR /dev/mapper/ocr
[root@racnode1 ~]# /etc/init.d/oracleasm createdisk RACPRODDATA1 /dev/mapper/racproddata1
[root@racnode1 ~]# /etc/init.d/oracleasm createdisk RACPRODDATA2 /dev/mapper/racproddata2
[root@racnode1 ~]# /etc/init.d/oracleasm createdisk RACPRODFRA1 /dev/mapper/racprodfra1
[root@racnode1 ~]# /etc/init.d/oracleasm createdisk RACPRODFRA2 /dev/mapper/racprodfra2

o Verify ASM disks

[root@racnode1 ~]# /etc/init.d/oracleasm listdisks
OCR
RACPRODDATA1
RACPRODDATA2
RACPRODFRA1
RACPRODFRA2
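Since each createdisk call pairs an uppercase ASM label with its /dev/mapper alias, a dry run that prints the commands first makes the mapping easy to review. This loop is an illustrative sketch of ours, not a step from the original procedure; it only echoes the commands rather than executing them.

```shell
# Dry-run sketch: print the createdisk command for each multipath alias
# so the label-to-device mapping can be reviewed before running the
# commands as root on the first node. The uppercase name is the ASM label.
for d in ocr racproddata1 racproddata2 racprodfra1 racprodfra2; do
    echo "/etc/init.d/oracleasm createdisk $(echo "$d" | tr 'a-z' 'A-Z') /dev/mapper/$d"
done
```

Piping the reviewed output to `sh` (as root) would then create the disks for real.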

o Verify ASM disks on the second node

[root@racnode2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@racnode2 ~]# /etc/init.d/oracleasm listdisks
OCR
RACPRODDATA1
RACPRODDATA2
RACPRODFRA1
RACPRODFRA2

Install Grid Infrastructure

As the grid user, you can now install Grid Infrastructure on the first node only. Follow the Oracle documentation on how to install GI at www.oracle.com. When you get to the part where you have to create the ASM disk group for the Clusterware, change the asm_diskstring to “/dev/oracleasm/disks/*” and select OK. For the OCR diskgroup, you can name the disk group “OCRDG” and select only the OCR disk. In this case, make sure you select “EXTERNAL REDUNDANCY”; if you created 3 volumes, then use “NORMAL REDUNDANCY”. Remember that you need to create one disk group for the Clusterware/Vote disk and one or two disk groups for the Oracle database (i.e. DATADG & FRADG). For the other four disks, create two new disk groups and name them “DATADG” and “FRADG”. Please note that the naming and number of disks specified in this document is an example only. You can have different names and as many disks as you’d like.

Install Oracle Software

After the installation of GI, change user to oracle and install the Oracle software on the first node only. It is much easier to select the option to install the software only first. Again, follow the Oracle documentation on how to install Oracle if you don’t already know how. After the software installation, run “dbca” to create your database.

Nimble Storage, Inc.
211 River Oaks Parkway, San Jose, CA 95134
Tel: 877-364-6253 | www.nimblestorage.com | info@nimblestorage.com

© 2014 Nimble Storage, Inc. Nimble Storage, InfoSight, SmartStack, NimbleConnect, and CASL are trademarks or registered trademarks of Nimble Storage, Inc. All other trademarks are the property of their respective owners. TR-ORG-1114