
Install Grid Infrastructure 11gR2 on OL6 and Oracle VM

Created by : Hans Camu


Date : 3 December 2011
http://camoraict.wordpress.com

This article is one in a series describing how to install Oracle VM Server and several Oracle VM
guests. In this article I will describe how to install Grid Infrastructure 11gR2 (GI) on 2 Oracle Virtual
Machines with Oracle Linux 6 as the operating system.

The steps described in this article are:

- Create 2 virtual machines on the command line using an installation directory and a kickstart file
- Configure the virtual machines to be able to successfully install GI
- Add shared disks for the GI installation using disks local to the OVS or an Openfiler NAS
- Install Grid Infrastructure 11gR2
- Install Oracle 11gR2 RDBMS software
- Create an Oracle 11gR2 RAC database

The installation will take place on virtual machines with 4GB of memory. This guide is for testing
purposes only. It is not supported to run a production environment with a setup like the one described
in this article.

For the installation we will use Oracle Linux 6.1 64-bit with the Oracle Linux Server UEK kernel
(2.6.32-100.34.1.el6uek.x86_64).

Be aware that this is NOT a certified Oracle Linux version and kernel combination at the time this
article is published.

All you read in this article is for testing purposes only, and don't hold it against me if you use it
otherwise.
Contents
1. Configuration
2. Create Virtual Machine
3. Install additional OS packages
4. Create shared ASM disks
4.1. Create shared disks on your Oracle VM Server
4.1.1. Create and attach shared disks
4.1.2. Create partitions on the devices
4.1.3. Grant access rights and define logical names for devices
4.2. Create shared disks using the Openfiler NAS
4.2.1. Add disk to create iSCSI storage
4.2.2. iSCSI target service
4.2.3. Openfiler network access
4.2.4. Openfiler storage configuration
4.2.5. Create Logical Volumes
4.2.6. Create iSCSI target and grant access rights
4.2.7. Make iSCSI target available to clients (RAC nodes)
4.2.8. Configure iSCSI volumes on Oracle RAC nodes
4.2.8.1. Installing the iSCSI (initiator) service
4.2.8.2. Configure the iSCSI (initiator) service
4.2.8.3. Discover available iSCSI targets
4.2.8.4. Manually login to iSCSI targets
4.2.8.5. Configure automatic login to iSCSI targets
4.2.8.6. Create partitions on the iSCSI volumes
4.2.8.7. Create persistent local iSCSI device names
5. Configure NTP
6. Install Grid Infrastructure 11gR2
7. Install Oracle 11gR2 RDBMS software
8. Create ASM diskgroup for database files
9. Create Oracle RAC Database

1. Configuration

In this article I describe the creation of a two node Oracle RAC cluster.
In the example configuration we will use the following settings:

Oracle RAC Node 1 - (parrot)


Device IP Address Subnet Gateway Purpose
eth0 192.168.0.206 255.255.255.0 192.168.0.1 public network
eth1 10.0.0.206 255.255.255.0 private network
/etc/hosts
127.0.0.1 localhost.localdomain localhost
##Private NAS Openfiler (eth1)
10.0.0.210 openfiler-interc.example.com openfiler-interc
## Public (eth0)
192.168.0.206 parrot.example.com parrot
192.168.0.207 pelican.example.com pelican
## Private (eth1)
10.0.0.206 parrot-interc.example.com parrot-interc
10.0.0.207 pelican-interc.example.com pelican-interc
# VIP (eth0:1)
192.168.0.216 parrot-vip.example.com parrot-vip
192.168.0.217 pelican-vip.example.com pelican-vip
## SCAN (eth0:#)
192.168.0.226 gridcl06-scan.example.com gridcl06-scan

Oracle RAC Node 2 - (pelican)


Device IP Address Subnet Gateway Purpose
eth0 192.168.0.207 255.255.255.0 192.168.0.1 public network
eth1 10.0.0.207 255.255.255.0 private network
/etc/hosts
127.0.0.1 localhost.localdomain localhost
##Private NAS Openfiler (eth1)
10.0.0.210 openfiler-interc.example.com openfiler-interc
## Public (eth0)
192.168.0.206 parrot.example.com parrot
192.168.0.207 pelican.example.com pelican
## Private (eth1)
10.0.0.206 parrot-interc.example.com parrot-interc
10.0.0.207 pelican-interc.example.com pelican-interc
# VIP (eth0:1)
192.168.0.216 parrot-vip.example.com parrot-vip
192.168.0.217 pelican-vip.example.com pelican-vip
## SCAN (eth0:#)
192.168.0.226 gridcl06-scan.example.com gridcl06-scan

ASM storage configuration:

iSCSI / Logical Volumes


Volume Name            Volume Description   Required Space (MB)
/dev/asmocrvotedisk1   ASM Clusterware      1,024
/dev/asmdisk1          ASM Disk 1           4,096
/dev/asmdisk2          ASM Disk 2           4,096
/dev/asmdisk3          ASM Disk 3           4,096

2. Create Virtual Machine

To install GI 11gR2 we first must create 2 virtual machines. On these virtual machines I will install
Oracle Linux 6 update 1 64-bit. The DVD ISO can be downloaded from here:
http://edelivery.oracle.com/linux.

There are numerous ways to create an Oracle virtual machine, for example with the command-line
utility virt-install, with Oracle VM Manager, using Enterprise Manager Grid Control, or even manually.
In a previous article I discussed how to create a virtual machine using the command line tool virt-
install. In this article I will discuss creating virtual machines by creating the configuration manually
and then starting the installation from the command line. To make sure both virtual machines are
exactly the same I will use a kickstart file.

What will be present after installing using the kickstart file?

- OL 6.1 64-bit with the selected packages
- a /u01 file system
- an oracle OS user
- a generic oracle profile

In this example I have created an installation directory on my Oracle VM Server:
/mount/OEL6u1_x86_64. This directory holds the OL6.1 64-bit installation files. How to create this
directory is discussed in the article Install Oracle VM Manager 2.2.
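Both the installation tree and the kickstart file are fetched over NFS during the installation, so both directories must be exported by an NFS server that the guests can reach. A minimal sketch of what the exports could look like, assuming the Oracle VM Server (192.168.0.200) itself acts as the NFS server and the kickstart file lives in the /beheer/kickstart directory referenced later in vm.cfg (adjust the paths to your own environment):

[root@oraovs01::/root]# cat >> /etc/exports << EOF
/mount/OEL6u1_x86_64 192.168.0.0/255.255.255.0(ro,no_root_squash)
/beheer/kickstart    192.168.0.0/255.255.255.0(ro,no_root_squash)
EOF
[root@oraovs01::/root]# service nfs restart
[root@oraovs01::/root]# exportfs -v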

I will now start creating the kickstart file:

[root@oraovs01::/software/kickstart]# vi OEL6u1_x86_64_GI.cfg
##START of kickstart file
install
poweroff
text
lang en_US.UTF-8
keyboard us
nfs --server=192.168.0.200 --dir=/mount/OEL6u1_x86_64
network --device eth0 --bootproto static --ip 192.168.0.206 --netmask 255.255.255.0 --gateway 192.168.0.1 --nameserver 192.168.0.1 --hostname parrot.example.com --noipv6
network --device eth1 --bootproto static --ip 10.0.0.206 --netmask 255.255.255.0 --gateway 192.168.0.1 --nameserver 192.168.0.1 --hostname parrot.example.com --noipv6
rootpw --iscrypted $1$wGAh8J7a$s3VZ07TWA8EcAUQG7esZt0
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --disabled
timezone Europe/Amsterdam
bootloader --location=mbr --driveorder=xvda
zerombr yes
clearpart --initlabel

part /boot --fstype ext4 --size=100 --ondisk=xvda


part pv.100000 --size=38912 --grow --ondisk=xvda
part pv.100001 --size=100000 --grow --ondisk=xvdb
volgroup systemvg --pesize=32768 pv.100000
volgroup u01vg --pesize=32768 pv.100001
logvol / --fstype ext4 --name=rootlv --vgname=systemvg --size=4096
logvol swap --fstype swap --name=swaplv --vgname=systemvg --size=8192
logvol /usr --fstype ext4 --name=usrlv --vgname=systemvg --size=8192
logvol /var --fstype ext4 --name=varlv --vgname=systemvg --size=8192
logvol /tmp --fstype ext4 --name=tmplv --vgname=systemvg --size=8192
logvol /home --fstype ext4 --name=homelv --vgname=systemvg --size=2048
logvol /u01 --fstype ext4 --name=u01lv --vgname=u01vg --size=100000

services --disabled sendmail,xfs,bluetooth,cups,ip6tables,iptables,kdump

%packages
@base
@core
fipscheck
squashfs-tools
sgpio
e4fsprogs
audit
sysstat
nfs-utils
xorg-x11-xauth
xorg-x11-apps
compat-libcap1
compat-libstdc++-33
libstdc++-devel
gcc
gcc-c++
ksh
make
glibc-devel
libaio-devel
unixODBC
unixODBC-devel
iscsi-initiator-utils

%post --log=/root/ks-post.log

{
# Delete obsolete groups and users
#/usr/sbin/userdel -r oracle
#/usr/sbin/groupdel dba
#/usr/sbin/groupdel oinstall
#rm -rf /home/oracle

# create group
/usr/sbin/groupadd -g 501 dba
/usr/sbin/groupadd -g 502 asmadmin

#oracle password is oracle


/usr/sbin/useradd -c "npa" -u 500 -p '$1$wGAh8J7a$s3VZ07TWA8EcAUQG7esZt0' -g dba -G
asmadmin oracle

# Setup oracle profile


cat > /etc/profile.d/oracle_profile.sh << EOF
# .bash_profile
#

if [ \$USER = "oracle" ]; then


if [ \$SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

######extend search path


export PATH=\$PATH:\$HOME/bin
export PATH=/usr/java/default/bin:\$PATH
export LD_LIBRARY_PATH=/usr/lib:/lib

####### Set some linux variables


umask 022
trap 2 3

if tty -s
then
set -o vi
export EDITOR=vi
export TERM=vt100
stty erase ^?

[ -s "\$MAIL" ] && echo "\$MAILMSG"


fi

####### Environment variables for Oracle


export ORACLE_BASE=/u01/app/oracle
export ORACLE_TERM=vt100
export NLS_LANG=AMERICAN_AMERICA.UTF8
export NLS_DATE_FORMAT='DD-MM-YYYY:HH24:MI:SS'
export NLS_SORT=Binary
export ORAADMIN=\$ORACLE_BASE/admin
export TNS_ADMIN=\$ORAADMIN/network/admin

###### if interactive session


if tty -s
then
alias l="ls -al"
alias ob="cd "\\\${ORACLE_BASE}""
alias oh="cd "\\\${ORACLE_HOME}""
alias oa="cd "\\\${ORAADMIN}""
alias sid="cat /etc/oratab |grep -v \"#\" |sort"
alias up="ps -ef|grep pm[o]n|awk '{print substr(\\\$NF,10)}'|sort"
alias oracle="sudo su - oracle"
alias root="sudo su -"
alias sqlplus="rlwrap sqlplus"
alias dgmgrl="rlwrap dgmgrl"
alias rman="rlwrap rman"
alias lsnrctl="rlwrap lsnrctl"
alias asmcmd="rlwrap asmcmd"
alias adrci="rlwrap adrci"
alias impdp="rlwrap impdp"
alias expdp="rlwrap expdp"
fi

####### Set unix prompt


USER=\${USER:-\$LOGNAME}

if tty -s
then
export PS1="\${USER}@\`hostname -s\`:\\\${ORACLE_SID}:\\\${PWD}
$ "
fi

if tty -s
then
#create aliases for all ORACLE_SIDs
echo -n aliases:
for LINE in \`cat /etc/oratab | sort | grep -v "^*" | grep -v "^#" | grep -vE "^[ ]*$" | cut -f 1 -d :\`
do
sid=\`echo \$LINE|cut -f 1 -d :\`
alias \${sid}="ORAENV_ASK=NO; ORACLE_SID=\${sid}; . oraenv;unset ORAENV_ASK"
echo -n \${sid}" "
done
echo
fi

####### End .profile

EOF

cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
#kernel.shmall = 2097152
#kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
EOF

/sbin/sysctl -p

# Setup sudo for oracle


cat > /etc/sudoers << EOF
%rootmembers ALL=NOPASSWD: /bin/su -
%oraclemembers,%rootmembers ALL=NOPASSWD: /bin/su - oracle
oracle ALL=(ALL) NOPASSWD: ALL
root ALL=(ALL) ALL
EOF

cat >> /etc/security/limits.conf << EOF


oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft stack 10240
EOF

cat >> /etc/pam.d/login << EOF


session required pam_limits.so
EOF

# Setup hosts file


cat > /etc/hosts << EOF
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
## Private NAS Openfiler (eth1)
10.0.0.210 openfiler-interc.example.com openfiler-interc
## Public (eth0)
192.168.0.206 parrot.example.com parrot
192.168.0.207 pelican.example.com pelican
## Private (eth1)
10.0.0.206 parrot-interc.example.com parrot-interc
10.0.0.207 pelican-interc.example.com pelican-interc
# VIP (eth0:1)
192.168.0.216 parrot-vip.example.com parrot-vip
192.168.0.217 pelican-vip.example.com pelican-vip
## SCAN (eth0:#)
192.168.0.226 gridcl06-scan.example.com gridcl06-scan
EOF

mkdir -p /u01/app/oracle/admin/network/admin
mkdir -p /u01/app/grid/11.2.0.3
mkdir -p /u01/app/oracle/product/11.2.0.3/db_000
chown -R oracle:dba /u01
chmod -R 775 /u01

} 1>/root/ks-post.log 2>&1
##END of kickstart file

In the next few steps I will make some preparations to be able to create the virtual machines.

First I will create the directories to store the files for the virtual machines:

[root@oraovs01 /]# mkdir /OVS/running_pool/parrot

[root@oraovs01 /]# mkdir /OVS/running_pool/pelican

As you can see I will name the virtual machines parrot and pelican.

Now create the files for the virtual machines. These files will be used as the system disk (containing
the OS) and for the /u01 mount point. I choose to create sparse files, which do not immediately occupy
all space; they grow over time until the maximum defined size is reached. For the local file system and
/u01 this is not really a problem and will not lead to any performance problems.
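To see the difference between the apparent size of a sparse file and the space it actually occupies, compare the output of ls and du, for example:

[root@oraovs01 /]# ls -lh /OVS/running_pool/parrot/system.img   # shows the apparent size (40G)
[root@oraovs01 /]# du -h /OVS/running_pool/parrot/system.img    # shows the blocks actually allocated (close to 0 for a freshly created sparse file)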

[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/parrot/system.img bs=1G count=0 seek=40
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/parrot/u01disk01.img bs=1G count=0 seek=100
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/pelican/system.img bs=1G count=0 seek=40
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/pelican/u01disk01.img bs=1G count=0 seek=100

If you want to allocate the space for these files immediately then use the following commands:

[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/parrot/system.img bs=1G count=40
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/parrot/u01disk01.img bs=1G count=100
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/pelican/system.img bs=1G count=40
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/pelican/u01disk01.img bs=1G count=100

To create a virtual machine manually you must define the ramdisk and kernel needed for the initial
boot. Copy the boot ramdisk and kernel to the /boot directory.

[root@oraovs01::/root]# cp /mount/OEL6u1_x86_64/images/pxeboot/vmlinuz /boot/vmlinuz_OEL6u1_x86_64

[root@oraovs01::/root]# cp /mount/OEL6u1_x86_64/images/pxeboot/initrd.img /boot/initrd_OEL6u1_x86_64.img

The last step before we can actually create the virtual machines is to create a configuration file for both
virtual machines.

This is the vm.cfg for virtual machine parrot:

[root@oraovs01::/root]# vi /OVS/running_pool/parrot/vm.cfg
kernel = "/boot/vmlinuz_OEL6u1_x86_64"
ramdisk = "/boot/initrd_OEL6u1_x86_64.img"
extra = "text ks=nfs:192.168.0.200:/beheer/kickstart/OEL6u1_x86_64_GI.cfg"
#bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/parrot/system.img,xvda,w',
'file:/OVS/running_pool/parrot/u01disk01.img,xvdb,w'
]
memory = '4096'
name = 'parrot'
on_crash = 'restart'
on_reboot = 'restart'
vcpus = 1
vif = ['bridge=xenbr0,mac=00:16:3E:00:01:01,type=netfront',
'bridge=xenbr1,mac=00:16:3E:00:01:02,type=netfront',
]
vif_other_config = []

And this is the vm.cfg for virtual machine pelican:

[root@oraovs01::/root]# vi /OVS/running_pool/pelican/vm.cfg
kernel = "/boot/vmlinuz_OEL6u1_x86_64"
ramdisk = "/boot/initrd_OEL6u1_x86_64.img"
extra = "text ks=nfs:192.168.0.200:/beheer/kickstart/OEL6u1_x86_64_GI.cfg"
#bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/pelican/system.img,xvda,w',
'file:/OVS/running_pool/pelican/u01disk01.img,xvdb,w'
]
memory = '4096'
name = 'pelican'
on_crash = 'restart'
on_reboot = 'restart'
vcpus = 1
vif = ['bridge=xenbr0,mac=00:16:3E:00:02:01,type=netfront',
'bridge=xenbr1,mac=00:16:3E:00:02:02,type=netfront',
]
vif_other_config = []

Before you start to create the virtual machine you must stop the iptables service on the Oracle VM
Server first, otherwise the installer will not be able to retrieve the kickstart file over NFS and will
report an error.

If this error occurs, then you must destroy the running virtual machine before you can continue.
Destroying a virtual machine can be done like this:

[root@oraovs01 /]# xm destroy parrot

To stop the iptables service execute the next command on your Oracle VM Server:

[root@oraovs01::/root]# service iptables stop


Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
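If you do not want to repeat this after every reboot of the Oracle VM Server, you can also disable iptables permanently; only do this in a test environment like the one described here:

[root@oraovs01::/root]# chkconfig iptables off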

Now we are ready to create the first virtual machine:

[root@oraovs01::/root]# xm create -c /OVS/running_pool/parrot/vm.cfg

Because of the -c option a console will be opened in which I can perform the actions to create the
virtual machine.
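If you close or lose this console, you can reattach to it at any time with xm console; pressing Ctrl+] detaches from the console again without stopping the guest:

[root@oraovs01::/root]# xm console parrot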

Action:
Select eth0 as the network device to install through.
Click OK.

Action:
All filesystems will be created and configured for you as defined in the kickstart file.

Action:
Based on the specifications in the kickstart file all dependencies for the installation will be checked.

Action:
OL 6.1 64-bit is now being installed. This only takes a few minutes.

Action:
The installation is now finished and the virtual machine is powered off.

Now modify the virtual machine's configuration file. You must comment out the kernel, ramdisk and
extra lines and uncomment the bootloader line:

[root@oraovs01::/root]# vi /OVS/running_pool/parrot/vm.cfg
#kernel = "/boot/vmlinuz_OEL6u1_x86_64"
#ramdisk = "/boot/initrd_OEL6u1_x86_64.img"
#extra = "text ks=nfs:192.168.0.200:/beheer/kickstart/OEL6u1_x86_64_GI.cfg"
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/parrot/system.img,xvda,w',
'file:/OVS/running_pool/parrot/u01disk01.img,xvdb,w'
]
memory = '4096'
name = 'parrot'
on_crash = 'restart'
on_reboot = 'restart'
vcpus = 1
vif = ['bridge=xenbr0,mac=00:16:3E:00:01:01,type=netfront',
'bridge=xenbr1,mac=00:16:3E:00:01:02,type=netfront',
]
vif_other_config = []

You now have one Oracle RAC node for your cluster. Now repeat the steps to create the second virtual
machine, called pelican.

But before you do this, you must modify the two network lines in the kickstart file
"OEL6u1_x86_64_GI.cfg". These lines configure the network devices eth0 and eth1. Change the IP
addresses and hostname to the values of the second node (192.168.0.207, 10.0.0.207 and
pelican.example.com):

network --device eth0 --bootproto static --ip 192.168.0.206 --netmask 255.255.255.0 --gateway 192.168.0.1 --nameserver 192.168.0.1 --hostname parrot.example.com --noipv6
network --device eth1 --bootproto static --ip 10.0.0.206 --netmask 255.255.255.0 --gateway 192.168.0.1 --nameserver 192.168.0.1 --hostname parrot.example.com --noipv6
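A quick way to adjust only these two lines is a sed replacement restricted to the network lines, so the /etc/hosts block in the %post section (which must stay identical on both nodes) is left untouched. This is just a sketch; check the result before starting the installation of pelican:

[root@oraovs01::/software/kickstart]# sed -i '/^network /{s/192\.168\.0\.206/192.168.0.207/; s/10\.0\.0\.206/10.0.0.207/; s/parrot\.example\.com/pelican.example.com/}' OEL6u1_x86_64_GI.cfg
[root@oraovs01::/software/kickstart]# grep ^network OEL6u1_x86_64_GI.cfg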

After the second virtual machine is created you must start both virtual machines again:

[root@oraovs01::/root]# xm create -c /OVS/running_pool/parrot/vm.cfg

Action:
The virtual machine is being started. Just wait for a short time until it's started completely.

Action:
You can now login and check if the installation is performed as expected.

Don't forget to start your second virtual machine!
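Starting the second virtual machine and checking that both guests are running can be done from the Oracle VM Server like this:

[root@oraovs01::/root]# xm create /OVS/running_pool/pelican/vm.cfg
[root@oraovs01::/root]# xm list        # both parrot and pelican should be listed in the running state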

3. Install additional OS packages

In this chapter I will install an additional OS package.

The package to install is the command line wrapper rlwrap. With this tool it is possible to recall
previous commands in command line tools like sqlplus, rman and so on. Download rlwrap from here:
rlwrap.

root@parrot::/root
$ rpm -ivh rlwrap-0.37-1.el6.x86_64.rpm
warning: rlwrap-0.37-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID
0608b895: NOKEY
Preparing... ########################################### [100%]
1:rlwrap ########################################### [100%]

Install the package on both Oracle RAC nodes.
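If you downloaded the rpm on parrot only, you can copy it to the second node and install it there as well, for example:

root@parrot::/root
$ scp rlwrap-0.37-1.el6.x86_64.rpm pelican:/tmp

root@pelican::/root
$ rpm -ivh /tmp/rlwrap-0.37-1.el6.x86_64.rpm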

4. Create shared ASM disks

To be able to install Grid Infrastructure 11gR2 correctly, you must have disks which can be shared
between the nodes in the cluster. In this chapter I will create these shared disks. I will discuss 2
methods of adding shared disks to your Oracle RAC nodes:

1. Create shared disks on your Oracle VM Server (§ 4.1 Create shared disks on your Oracle VM
Server)
2. Create shared disks using the Openfiler NAS (§ 4.2 Create shared disks using the Openfiler NAS)

A new feature in GI 11gR2 is that you can now store the OCR and voting disks in ASM. The files
created in the next steps will be used to store the OCR and voting disk and to create ASM diskgroups
in which to store database files.

4.1. Create shared disks on your Oracle VM Server

If you are creating a setup for testing purposes it is possible to create shared disks on your Oracle VM
Server using the dd command. While installing Oracle VM Server a default directory for this purpose
is created: /OVS/sharedDisk. In this example we will create some shared disks in this directory.

Unlike the previously created files for the virtual machines, it is not recommended to create sparse
files here; instead fully allocate the files for ASM usage. This will definitely improve the performance
of the Oracle RAC nodes.

4.1.1. Create and attach shared disks

[root@oraovs01 /]# mkdir /OVS/sharedDisk/gridcl06


[root@oraovs01 /]# dd if=/dev/zero of=/OVS/sharedDisk/gridcl06/asmocrvotedisk1.img
bs=1M count=1024
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/sharedDisk/gridcl06/asmdisk1.img bs=1M
count=4096
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/sharedDisk/gridcl06/asmdisk2.img bs=1M
count=4096
[root@oraovs01 /]# dd if=/dev/zero of=/OVS/sharedDisk/gridcl06/asmdisk3.img bs=1M
count=4096

These shared disks must be added to the configuration file vm.cfg of both Oracle RAC nodes in order
to be able to use these disks:

[root@oraovs01::/root]# vi /OVS/running_pool/parrot/vm.cfg
disk = ['file:/OVS/running_pool/parrot/system.img,xvda,w',
'file:/OVS/running_pool/parrot/u01disk01.img,xvdb,w',
'file:/OVS/sharedDisk/gridcl06/asmocrvotedisk1.img,xvdc,w!',
'file:/OVS/sharedDisk/gridcl06/asmdisk1.img,xvdd,w!',
'file:/OVS/sharedDisk/gridcl06/asmdisk2.img,xvde,w!',
'file:/OVS/sharedDisk/gridcl06/asmdisk3.img,xvdf,w!',
]

The shared disks can now be attached to the Oracle RAC nodes online. It is not needed to stop the
Oracle RAC nodes first. Adding the shared disks can be accomplished with the xm block-attach
command:

[root@oraovs01::/root]# xm block-attach parrot file:/OVS/sharedDisk/gridcl06/asmocrvotedisk1.img /dev/xvdc w!

[root@oraovs01::/root]# xm block-attach parrot file:/OVS/sharedDisk/gridcl06/asmdisk1.img /dev/xvdd w!

[root@oraovs01::/root]# xm block-attach parrot file:/OVS/sharedDisk/gridcl06/asmdisk2.img /dev/xvde w!

[root@oraovs01::/root]# xm block-attach parrot file:/OVS/sharedDisk/gridcl06/asmdisk3.img /dev/xvdf w!

Repeat this step for all the Oracle RAC nodes that will be part of your GI cluster.

After attaching the shared disks check if the devices are available:

root@parrot::/root
$ ls -l /dev/xvd*
brw-rw---- 1 root disk 202, 0 Nov 26 19:24 /dev/xvda
brw-rw---- 1 root disk 202, 1 Nov 26 19:24 /dev/xvda1
brw-rw---- 1 root disk 202, 2 Nov 26 19:24 /dev/xvda2
brw-rw---- 1 root disk 202, 16 Nov 26 19:24 /dev/xvdb
brw-rw---- 1 root disk 202, 17 Nov 26 19:24 /dev/xvdb1
brw-rw---- 1 root disk 202, 32 Nov 26 19:32 /dev/xvdc
brw-rw---- 1 root disk 202, 48 Nov 26 19:32 /dev/xvdd
brw-rw---- 1 root disk 202, 64 Nov 26 19:32 /dev/xvde
brw-rw---- 1 root disk 202, 80 Nov 26 19:33 /dev/xvdf

The disk xvda is used for the OS and xvdb for the /u01 mount point.
The new disks are now available as the devices /dev/xvdc through /dev/xvdf.

4.1.2. Create partitions on the devices

Before you can use the devices they must be partitioned first.
Partition the devices on only one Oracle RAC node:

root@parrot::/root
$ fdisk /dev/xvdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Repeat this for devices /dev/xvdd, /dev/xvde and /dev/xvdf.
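Instead of walking through the interactive fdisk dialog three more times, you can feed it the same answers (n, p, 1, two defaults, w) non-interactively. A sketch of this shortcut:

root@parrot::/root
$ for DEV in /dev/xvdd /dev/xvde /dev/xvdf; do printf "n\np\n1\n\n\nw\n" | fdisk $DEV; done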

After all disks are partitioned, perform a quick check to see if the partitions are available:

root@parrot::/root
$ ls -l /dev/xvd*
brw-rw---- 1 root disk 202, 0 Nov 26 19:24 /dev/xvda
brw-rw---- 1 root disk 202, 1 Nov 26 19:24 /dev/xvda1
brw-rw---- 1 root disk 202, 2 Nov 26 19:24 /dev/xvda2
brw-rw---- 1 root disk 202, 16 Nov 26 19:24 /dev/xvdb
brw-rw---- 1 root disk 202, 17 Nov 26 19:24 /dev/xvdb1
brw-rw---- 1 root disk 202, 32 Nov 26 19:34 /dev/xvdc
brw-rw---- 1 root disk 202, 33 Nov 26 19:34 /dev/xvdc1
brw-rw---- 1 root disk 202, 48 Nov 26 19:34 /dev/xvdd
brw-rw---- 1 root disk 202, 49 Nov 26 19:34 /dev/xvdd1
brw-rw---- 1 root disk 202, 64 Nov 26 19:34 /dev/xvde
brw-rw---- 1 root disk 202, 65 Nov 26 19:34 /dev/xvde1
brw-rw---- 1 root disk 202, 80 Nov 26 19:34 /dev/xvdf
brw-rw---- 1 root disk 202, 81 Nov 26 19:34 /dev/xvdf1

Now run the partprobe command on the second Oracle RAC node to update the kernel with the
modified partition table.

root@pelican::/root
$ partprobe /dev/xvdc
$ partprobe /dev/xvdd
$ partprobe /dev/xvde
$ partprobe /dev/xvdf

4.1.3. Grant access rights and define logical names for devices

Now that the devices are ready, they must be given the correct ownership and permissions; otherwise
they cannot be used while installing Grid Infrastructure. There are multiple ways to accomplish this,
such as UDEV rules, ASMLib and multipath rules.
In this example we will use UDEV rules to set the ownership and permissions, and also to give the
devices a logical name.

First create the UDEV permissions file for the ASM disk devices:

root@parrot::/root
$ vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="xvdc1", NAME="asmocrvotedisk1p1", OWNER="oracle", GROUP="asmadmin",
MODE="0660"
KERNEL=="xvdd1", NAME="asmdisk1p1", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="xvde1", NAME="asmdisk2p1", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="xvdf1", NAME="asmdisk3p1", OWNER="oracle", GROUP="asmadmin", MODE="0660"

Copy this file to all other Oracle RAC nodes:

root@parrot::/root
$ scp /etc/udev/rules.d/99-oracle-asmdevices.rules pelican:/etc/udev/rules.d/99-
oracle-asmdevices.rules

Now activate the UDEV rules (on both nodes parrot and pelican):

root@parrot::/root
$ udevadm control --reload-rules
$ start_udev
Starting udev: [ OK ]

Check if the permissions are set correctly and if the devices are created:

$ ls -ltr /dev/asm*
brw-rw---- 1 oracle asmadmin 202, 33 Nov 26 19:39 /dev/asmocrvotedisk1p1
brw-rw---- 1 oracle asmadmin 202, 65 Nov 26 19:39 /dev/asmdisk2p1
brw-rw---- 1 oracle asmadmin 202, 49 Nov 26 19:39 /dev/asmdisk1p1
brw-rw---- 1 oracle asmadmin 202, 81 Nov 26 19:39 /dev/asmdisk3p1

If you want to check whether the configuration from the above steps also functions correctly after a
node reboot, this is the time to test it.

4.2. Create shared disks using the Openfiler NAS

Openfiler is a free browser-based network storage management utility that delivers file-based Network
Attached Storage (NAS). Openfiler supports several protocols, but we will only be making use of its
iSCSI capabilities to implement the shared storage components required by Grid Infrastructure 11gR2.
A 100GB internal hard drive will be connected to the Openfiler server. The Openfiler server will be
configured to use this disk for iSCSI based storage, which will be used in our Grid Infrastructure 11gR2
configuration to store the shared files required by Grid Infrastructure 11gR2 as well as all Oracle ASM
disks.

To learn more about Openfiler, just visit their website at http://www.openfiler.com/

In one of my previous articles I explained how-to create a virtual machine running the Openfiler
software. This article can be found here: Install Openfiler 2.9 on Oracle VM.

4.2.1. Add disk to create iSCSI storage

To use Openfiler as an iSCSI storage server, we have to perform three major tasks: set up iSCSI
services, configure network access, and create physical storage. But before we can perform these steps
in Openfiler we must add a disk to the Openfiler virtual machine on which we can create the iSCSI
volumes.

Create the disks like this (a 100G disk will be created):

[root@oraovs01:/]# dd if=/dev/zero of=/OVS/running_pool/openfiler/volume_gridcl06.img bs=1024M count=100

To be able to use this disk, the configuration file vm.cfg of the Openfiler virtual machine must be
modified:

[root@oraovs01::/root]# vi /OVS/running_pool/openfiler/vm.cfg
disk = ['file:/OVS/running_pool/openfiler/System.img,hda,w',
'file:/OVS/running_pool/openfiler/volume_gridcl06.img,sdb,w',
]

The disk can be attached to the virtual machine online. It is not needed to stop the virtual machine
first. This can be accomplished with the xm block-attach command:

[root@oraovs01::/root]# xm block-attach openfiler file:/OVS/running_pool/openfiler/volume_gridcl06.img sdb w

After attaching the disk check if the device is available:

[root@openfiler ~]# ls -l /dev/sd*


brw-rw---- 1 root disk 8, 0 Dec 2 19:51 /dev/sda
brw-rw---- 1 root disk 8, 1 Dec 2 19:51 /dev/sda1
brw-rw---- 1 root disk 8, 2 Dec 2 19:51 /dev/sda2
brw-rw---- 1 root disk 8, 3 Dec 2 19:51 /dev/sda3
brw-rw---- 1 root disk 8, 16 Dec 2 19:51 /dev/sdb

4.2.2. iSCSI target service

To control services, use the Openfiler Storage Control Center and navigate to Services:

Action:
To enable the iSCSI target service, click Enable next to the service. After enabling the service the
Boot Status should change to Enabled. When the service is enabled it must be started by clicking
Start next to the service. After starting the service the Start / Stop status should change to Running.
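The same can be achieved from an SSH session on the Openfiler server, assuming the standard service and chkconfig tools are available on your Openfiler appliance:

[root@openfiler ~]# chkconfig iscsi-target on
[root@openfiler ~]# service iscsi-target start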

4.2.3. Openfiler network access

The next step is to configure network access in Openfiler so both Oracle RAC nodes have permissions
to the iSCSI devices through the storage (private) network.

Navigate to System in the Openfiler Storage Control Center:

Action:
When entering each of the Oracle RAC nodes, note that the Name field is just a logical name used for
reference only. As a convention when entering nodes, I simply use the node name defined for that IP
address. Next, when entering the actual node in the Network/Host field, always use its IP address
even though its host name may already be defined in your /etc/hosts file or DNS.
It is important to remember that you will be entering the IP address of the private network (eth1) for
each of the RAC nodes in the cluster.
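For the configuration used in this article the entries would look something like this (exact field names can differ slightly per Openfiler version); the netmask 255.255.255.255 restricts access to that single host:

Name             Network/Host   Netmask           Type
parrot-interc    10.0.0.206     255.255.255.255   Share
pelican-interc   10.0.0.207     255.255.255.255   Share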

4.2.4. Openfiler storage configuration

Navigate to Volumes => Block Devices in the Openfiler Storage Control Center:

Action:
In this step we will create a single primary partition on the /dev/sdb internal hard drive. This device
was attached to the Openfiler server in a previous step.
Click on the /dev/sdb link.

Action:
By clicking on the /dev/sdb link, we are presented with the options to Create or Reset a partition.
Since we will be creating a single primary partition that spans the entire disk, most of the options can
be left at their default settings. Specify the following values to create the primary partition on /dev/sdb:
Mode: Primary
Partition Type: Physical volume
Starting Cylinder: 1
Ending Cylinder: 13054

Click Create to create the partition.

Action:
The next step is to create a Volume Group. We will be creating a single volume group named grid that
contains the newly created primary partition.

Navigate to Volumes => Volume Groups in the Openfiler Storage Control Center:

We would see the existing volume groups, or none as in our case. Enter the name of the new volume
group (grid), click the checkbox in front of /dev/sdb1 to select that partition, and finally click the
Add volume group button. After that we are presented with a list that now shows our newly created
volume group named grid:

Action:
You now have a volume group in which we will create iSCSI volumes in a later step.

4.2.5. Create Logical Volumes

We can now create the four logical volumes in the newly created volume group (grid).
Navigate to Volumes => Add Volume in the Openfiler Storage Control Center:

Action:
We will see the newly created volume group (grid) along with its block storage statistics. Also
available at the bottom of this screen is the option to create a new volume in the selected volume
group. Use this screen to create the following four logical (block (iSCSI)) volumes.

iSCSI / Logical Volumes

Volume Name                Volume Description   Required Space (MB)   Filesystem Type
gridcl06_asmocrvotedisk1   ASM Clusterware      1,024                 iSCSI
gridcl06_asmdisk1          ASM Disk 1           4,096                 iSCSI
gridcl06_asmdisk2          ASM Disk 2           4,096                 iSCSI
gridcl06_asmdisk3          ASM Disk 3           4,096                 iSCSI

The naming convention used for the volume names is:

<clustername>_<diskpurpose>disk#

Action:
After all four logical volumes are created you will be presented with a screen listing all available
logical volumes in the volume group grid.

4.2.6. Create iSCSI target and Grant access rights

Before an iSCSI client can have access to the newly created iSCSI volumes, iSCSI targets must be
created. These iSCSI targets need to be granted the appropriate permissions. In a previous step, we
configured Openfiler with two hosts (the Oracle RAC nodes) that can be granted access rights
to resources. We now need to grant both Oracle RAC nodes access to each of the newly created
iSCSI volumes.

Navigate to Volumes => iSCSI Targets in the Openfiler Storage Control Center:

Action:
A new iSCSI target gets a default name like iqn.2006-01.com.openfiler:tsn.d4ff3f93ae50.
To be able to tell for which purpose you created an iSCSI target, it is advisable to change the default
name into something like iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1, where
gridcl06_asmocrvotedisk1 is the logical volume name.

Click Add to create the iSCSI Target.

Action:
Accept the default settings for the iSCSI target. Now we must map a logical volume to the iSCSI
target.

Navigate to LUN Mapping in the Openfiler Storage Control Center:

Action:
Select the correct LUN and click Map to map the logical volume with the iSCSI target.

Navigate to Network ACL in the Openfiler Storage Control Center:

Action:
Now that the mapping is in place, we must grant access rights on the iSCSI target. Click Network ACL.
Change both hosts from Deny to Allow and click Update.
Perform these steps for all four logical volumes.

4.2.7. Make iSCSI target available to clients (RAC nodes)

Every time a new logical volume is added, we need to restart the associated service on the Openfiler
server. In our case we created four iSCSI targets, so we have to restart the iSCSI target (iSCSI Target)
service. This will make the new iSCSI targets available to all clients on the network who have
privileges to access them.

To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to Services.
The iSCSI Target service should already be enabled (several steps back). If so, disable the service then
enable it again.
The same task can be achieved through an SSH session on the Openfiler server:

[root@openfiler1 ~]# service iscsi-target restart


Stopping iSCSI target service: ...... [ OK ]
Starting iSCSI target service: [ OK ]

4.2.8. Configure iSCSI volumes on Oracle RAC nodes

In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes.
OL 6.1 includes the Open-iSCSI iSCSI software initiator which can be found in the iscsi-initiator-utils
RPM. All iSCSI management tasks like discovery and logins will use the command-line interface
iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator will be configured to automatically login to the network storage server
(openfiler) and discover the iSCSI volumes created in the previous steps. We will then go through the
steps of creating persistent local SCSI device names (e.g. /dev/asmdisk1) for each of the discovered
iSCSI target names using udev. Having a consistent local SCSI device name, and knowing which
iSCSI target it maps to, is required in order to know which volume (device) is to be used for which
purpose.
In this example only ASM disks are used, but if you also need disks for OCFS, for example, it is very
useful to know which disks you can use for that purpose, so that you do not use an incorrect disk by
accident.

Before we can do any of this, we must first install the iSCSI initiator software!

4.2.8.1. Installing the iSCSI (initiator) Service

With OL 6.1, the Open-iSCSI iSCSI software initiator does not get installed by default. The software
is included in the iscsi-initiator-utils package which can be found on the DVD.
If you have created your Oracle RAC nodes using the provided kickstart file, the iscsi-initiator-utils
package is already installed.

To determine if this package is installed, perform the following on both Oracle RAC nodes:

root@parrot::/root
$ rpm -qa | grep iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.872-21.0.1.el6.x86_64

If the iscsi-initiator-utils package is not installed, make the DVD available to each of the Oracle RAC
nodes and perform the following:

root@parrot::/root
$ cd <OL6.1 DVD>/Packages
$ rpm -Uvh iscsi-initiator-utils-6.2.0.872-21.0.1.el6.x86_64.rpm

4.2.8.2. Configure the iSCSI (initiator) Service

After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the
iscsid service and enable it to automatically start when the system boots.

To start the iscsid service:

root@parrot::/root
$ service iscsid start

To configure the iscsid service to automatically start when the system boots:

root@parrot::/root
$ chkconfig iscsid on
$ chkconfig iscsi on

Repeat these steps on both Oracle RAC Nodes.

4.2.8.3. Discover available iSCSI targets

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all
available targets on the network storage server. This should be performed on both Oracle RAC nodes
to verify the configuration is functioning properly:

root@parrot::/root
$ iscsiadm -m discovery -t sendtargets -p 10.0.0.210
10.0.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk3
192.168.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk3
10.0.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk2
192.168.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk2
10.0.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1
192.168.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1
10.0.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1
192.168.0.210:3260,1 iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1

As you can see, it discovers all iSCSI targets twice! I am still investigating whether this is normal
behavior when having multiple network devices or whether it is a bug.
Nevertheless, we can continue because this will not get in our way.

4.2.8.4. Manually login to iSCSI targets

At this point the iSCSI initiator service has been started and each of the Oracle RAC nodes was able
to discover the available targets from the network storage server. The next step is to manually login to
each of the available targets, which can be done using the iscsiadm command-line interface. This needs
to be run on both Oracle RAC nodes. Note that you have to specify the IP address and not the host
name of the network storage server (openfiler-interc); I believe this is required given that the discovery
(above) shows the targets using the IP address.

root@parrot::/root
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1 -p 10.0.0.210 -l
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1 -p 10.0.0.210 -l
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk2 -p 10.0.0.210 -l
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk3 -p 10.0.0.210 -l

4.2.8.5. Configure Automatic login to iSCSI targets

The next step is to ensure the client will automatically login to each of the targets listed above when
the machine is booted (or when the iSCSI initiator service is started/restarted). As with the manual
login process described above, perform the following on both Oracle RAC nodes:

root@parrot::/root
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1 -p 10.0.0.210 --op update -n node.startup -v automatic
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1 -p 10.0.0.210 --op update -n node.startup -v automatic
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk2 -p 10.0.0.210 --op update -n node.startup -v automatic
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk3 -p 10.0.0.210 --op update -n node.startup -v automatic
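To verify that the node records were updated and that the sessions are (still) logged in, you can query iscsiadm, for example:

root@parrot::/root
$ iscsiadm -m session                  # should list one session per iSCSI target
$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1 -p 10.0.0.210 | grep node.startup
node.startup = automatic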

4.2.8.6. Create partitions on the iSCSI volumes

Before you can use the iSCSI volumes they must be partitioned first.
Partition the volumes on only one Oracle RAC node.

Before you can partition the volumes, we must first determine which volumes to partition.

We can determine the volumes as follows:

root@parrot::/root
$ (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1-lun-0 -> ../../sdb
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk2-lun-0 -> ../../sdc
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk3-lun-0 -> ../../sdd
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1-lun-0 -> ../.
./sda

As we can see, we must partition the volumes /dev/sda through /dev/sdd.

root@parrot::/root
$ fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.

Repeat this for devices /dev/sdb, /dev/sdc and /dev/sdd.

After all disks are partitioned, perform a quick check to see if the partitions are available:

root@parrot::/root
$ ls -la /dev/sd*
brw-rw---- 1 root disk 8, 0 Dec 3 12:12 /dev/sda
brw-rw---- 1 root disk 8, 16 Dec 3 12:12 /dev/sdb
brw-rw---- 1 root disk 8, 32 Dec 3 12:12 /dev/sdc
brw-rw---- 1 root disk 8, 48 Dec 3 12:12 /dev/sdd

Now run the partprobe command on the second Oracle RAC node to update the kernel with the
modified partition table.

root@pelican::/root
$ partprobe /dev/sda
$ partprobe /dev/sdb
$ partprobe /dev/sdc
$ partprobe /dev/sdd

4.2.8.7. Create persistent Local iSCSI device names

There are a few methods to create device persistence across reboots and to make sure that devices
retain their access permissions and ownership (OS chown user:group / chmod).
Some methods are ASMLib, multipath and UDEV.
This article describes the steps that are required to establish device persistency and to set up new
device names with the desired permissions and ownership in OL6 using udev.

Add the following line to /etc/scsi_id.config file on both Oracle RAC nodes. This will make the given
option the default while executing the scsi_id command.

root@pelican::/root
$ vi /etc/scsi_id.config
options=--whitelisted --replace-whitespace

To be able to create persistent device names, we need to determine the World Wide Identifier (WWID)
of each device. To determine the devices for which we need the WWID, execute the following command
(as we did in an earlier step) on only one of the Oracle RAC nodes:

root@parrot::/root
$ (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk1-lun-0 -> ../../sdb
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk2-lun-0 -> ../../sdc
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmdisk3-lun-0 -> ../../sdd
ip-10.0.0.210:3260-iscsi-iqn.2006-01.com.openfiler:tsn.gridcl06_asmocrvotedisk1-lun-0 -> ../.
./sda

Now we can determine the World Wide Identifier of each device using the scsi_id --device=/dev/sd*
command:

root@parrot::/root
$ scsi_id --device=/dev/sda
14f504e46494c455263794f556a672d6b716c742d48745648
root@parrot::/root
$ scsi_id --device=/dev/sdb
14f504e46494c455257323078704e2d674665642d53687663
root@parrot::/root
$ scsi_id --device=/dev/sdc
14f504e46494c4552374d654c63782d4e3772532d496c3645
root@parrot::/root
$ scsi_id --device=/dev/sdd
14f504e46494c45526670374532772d376c42592d34567777

Now we must create a rule to set the permissions and ownership for the device. This rule will also set
a name for the device. For each of the devices we will create one rule. For OL6 create a file
/etc/udev/rules.d/99-oracle-asmdevices.rules containing the rules for each device.

The rule will be as follows:

KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id /dev/$name", RESULT=="UUID", NAME="persistent name", OWNER="oracle", GROUP="asmadmin", MODE="0660"

UUID : the value returned from the previous step.
NAME : points to the new persistent name under which the device will be known.
OWNER : is used to set the OS user.
GROUP : is used to set the OS group.
MODE : is used to set the permissions.

For /dev/sdb1 the rule will be as follows, where NAME is asmdisk1, the owner is the OS oracle user,
the group is the OS asmadmin group and the permissions are 0660.
The UUID is "14f504e46494c455257323078704e2d674665642d53687663", as extracted in the
previous step.

KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id /dev/$name", RESULT=="14f504e46494c455257323078704e2d674665642d53687663", NAME="asmdisk1", OWNER="oracle", GROUP="asmadmin", MODE="0660"

The next steps must be performed on both Oracle RAC nodes:

The content of /etc/udev/rules.d/99-oracle-asmdevices.rules file is as follows.

root@parrot::/root
$ vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id /dev/$name", RESULT=="14f504e46494c455263794f556a672d6b716c742d48745648", NAME="asmocrvotedisk1", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id /dev/$name", RESULT=="14f504e46494c455257323078704e2d674665642d53687663", NAME="asmdisk1", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id /dev/$name", RESULT=="14f504e46494c4552374d654c63782d4e3772532d496c3645", NAME="asmdisk2", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id /dev/$name", RESULT=="14f504e46494c45526670374532772d376c42592d34567777", NAME="asmdisk3", OWNER="oracle", GROUP="asmadmin", MODE="0660"

To activate the udev rules we have to reload the rules and restart udev:

root@parrot::/root
$ udevadm control --reload-rules
$ start_udev
Starting udev: [ OK ]

Check the names, ownerships and permissions of the devices.

root@parrot::/root
$ ls -la /dev/asm*
brw-rw---- 1 oracle asmadmin 8, 17 Dec 3 12:12 /dev/asmdisk1
brw-rw---- 1 oracle asmadmin 8, 33 Dec 3 12:12 /dev/asmdisk2
brw-rw---- 1 oracle asmadmin 8, 49 Dec 3 12:12 /dev/asmdisk3
brw-rw---- 1 oracle asmadmin 8, 1 Dec 3 12:12 /dev/asmocrvotedisk1

If you want to check whether the configuration from the above steps also functions correctly after a
node reboot, this is the time to test it.

5. Configure NTP

For an Oracle cluster to function correctly it is of the utmost importance that some kind of time
synchronization is in place. This is possible with the new CTSS (Cluster Time Synchronization
Service) daemon, but I prefer to configure NTP on the hosts.

The NTP slewing option (-x) must be configured. Also prevent syncing the hardware clock, to avoid
NTP start errors:

root@parrot::/root
$ vi /etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"
SYNC_HWCLOCK=no

root@parrot::/root
$ chmod -x /sbin/hwclock

Now the NTP daemon can be started:

root@parrot::/root
$ service ntpd start
Starting ntpd: [ OK ]

NTP must also be started when the node is rebooted. This can be accomplished with the
chkconfig utility:

root@parrot::/root
$ chkconfig ntpd on

And with the same chkconfig you can check the modifications:

root@parrot::/root
$ chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
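To verify that the node is actually synchronizing with its time sources, query the NTP peers; after a few minutes at least one of the servers should be marked with an asterisk:

root@parrot::/root
$ ntpq -p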

6. Install Grid Infrastructure 11gR2

The configuration of the Oracle RAC cluster we are going to create is described in chapter 1
Configuration. But as a refresher I will repeat this configuration here:

Oracle RAC Node 1 - (parrot)


Device IP Address Subnet Gateway Purpose
eth0 192.168.0.206 255.255.255.0 192.168.0.1 public network
eth1 10.0.0.206 255.255.255.0 private network
/etc/hosts
127.0.0.1 localhost.localdomain localhost
##Private NAS Openfiler (eth1)
10.0.0.210 openfiler-interc.example.com openfiler-interc
## Public (eth0)
192.168.0.206 parrot.example.com parrot
192.168.0.207 pelican.example.com pelican
## Private (eth1)
10.0.0.206 parrot-interc.example.com parrot-interc
10.0.0.207 pelican-interc.example.com pelican-interc
# VIP (eth0:1)
192.168.0.216 parrot-vip.example.com parrot-vip
192.168.0.217 pelican-vip.example.com pelican-vip
## SCAN (eth0:#)
192.168.0.226 gridcl06-scan.example.com gridcl06-scan

Oracle RAC Node 2 - (pelican)


Device IP Address Subnet Gateway Purpose
eth0 192.168.0.207 255.255.255.0 192.168.0.1 public network
eth1 10.0.0.207 255.255.255.0 private network
/etc/hosts
127.0.0.1 localhost.localdomain localhost
##Private NAS Openfiler (eth1)
10.0.0.210 openfiler-interc.example.com openfiler-interc
## Public (eth0)
192.168.0.206 parrot.example.com parrot
192.168.0.207 pelican.example.com pelican
## Private (eth1)
10.0.0.206 parrot-interc.example.com parrot-interc
10.0.0.207 pelican-interc.example.com pelican-interc
# VIP (eth0:1)
192.168.0.216 parrot-vip.example.com parrot-vip
192.168.0.217 pelican-vip.example.com pelican-vip
## SCAN (eth0:#)
192.168.0.226 gridcl06-scan.example.com gridcl06-scan

ASM Storage configuration:

iSCSI / Logical Volumes


Volume Name            Volume Description   Required Space (MB)
/dev/asmocrvotedisk1   ASM Clusterware      1,024
/dev/asmdisk1          ASM Disk 1           4,096
/dev/asmdisk2          ASM Disk 2           4,096
/dev/asmdisk3          ASM Disk 3           4,096

We are now almost ready to start installing Grid Infrastructure.
First we have to download the software. In this article I use version 11.2.0.3, which can only be
downloaded as a patch from My Oracle Support (patch 10404530).

Action:
Click Download.

Action:
For this installation you only need to download:
p10404530_112030_Linux-x86-64_1of7.zip Database binaries part 1
p10404530_112030_Linux-x86-64_2of7.zip Database binaries part 2
p10404530_112030_Linux-x86-64_3of7.zip Grid Infrastructure binaries

Unzip the files in your staging area after downloading the files.
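Assuming /software/Database/11.2.0.3 is used as the staging area, unzipping could look like this; parts 1of7 and 2of7 create the database/ directory and part 3of7 creates the grid/ directory used below:

oracle@parrot::/software/Database/11.2.0.3
$ unzip -q p10404530_112030_Linux-x86-64_1of7.zip
$ unzip -q p10404530_112030_Linux-x86-64_2of7.zip
$ unzip -q p10404530_112030_Linux-x86-64_3of7.zip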

To prepare the OS so that GI can be installed without additional steps during the installation, the
cvuqdisk package must be installed beforehand. This package is available as part of
p10404530_112030_Linux-x86-64_3of7.zip.

root@parrot::/root
$ export CVUQDISK_GRP=dba
root@parrot::/root
$ rpm -ivh /software/Database/11.2.0.3/grid/rpm/cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
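The cvuqdisk package must be present on every node of the cluster, so install it on pelican as well. If your staging area is not already shared with the second node, copy the rpm over first, for example:

root@parrot::/root
$ scp /software/Database/11.2.0.3/grid/rpm/cvuqdisk-1.0.9-1.rpm pelican:/tmp

root@pelican::/root
$ export CVUQDISK_GRP=dba
$ rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm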

Now we can start installing GI 11gR2. Set the DISPLAY environment variable and start runInstaller.

oracle@parrot::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@parrot::/software
$ cd /software/Database/11.2.0.3/grid/
oracle@parrot::/software/Database/11.2.0.3/grid
$ ./runInstaller

Action:
Select Skip software updates.
Click Next.

Action:
Select Install and Configure Oracle Grid Infrastructure for a Cluster.
Click Next.

Action:
Select Advanced Installation.
Click Next.

Action:
Select the language of your choice.
Click Next.

Action:
SCAN is a new 11gR2 feature. SCAN (Single Client Access Name) makes it possible to resolve up to
3 IP addresses with 1 single name. Best practice is to configure your SCAN IP addresses in DNS.

For this article I will use the local /etc/hosts file to resolve the SCAN address. Because of this choice it
is only possible to resolve 1 IP address.
At this point of the installation you don't have to take additional steps to configure the /etc/hosts file
because this was already taken care of while installing the virtual machine; it was one of the steps
defined in the kickstart file.
We do not use the GNS (Grid Naming Service) feature in this article.

Action:
Specify the Cluster Name, SCAN Name and SCAN Port.
Deselect Configure GNS.
Click Next.

Action:
Click Edit.

Action:
Remove domain name from the entries.
Click OK.

Action:
Click Add to add the 2nd Oracle RAC node to the cluster.

Action:
Specify the Hostname and the Virtual IP Name.
Click OK.

Action:
At this point it is possible to let the installer configure the SSH connectivity between the nodes.
Click SSH Connectivity.

Action:
Specify the oracle OS Password.
Click Setup.

Action:
Wait until the SSH Connectivity is setup.

Action:
Click OK.

Action:
Click Next.

Action:
The network interfaces are configured correctly.
Click Next.

Action:
Select Oracle Automatic Storage Management (Oracle ASM).
Click Next.

Action:
At this point no disks are displayed, but they are there!
Click Change Discovery Path.

Action:
Specify the Disk Discovery Path as /dev/asm*.
Click OK.

Action:
Specify the Disk Group Name, the Redundancy as External and select the Candidate Disk used for
the ASM diskgroup to store the OCR and votingdisk. Click Next.

Action:
Specify the passwords for the SYS and ASMSNMP accounts.
Click Next.

Action:
Select Do not use Intelligent Platform Management Interface (IPMI).
Click Next.

Action:
Specify dba as the Oracle ASM Operator (OSOPER for ASM) Group.
Click Next.

Action:
Click Yes.

Action:
Specify the Oracle Base and Software Location for the GI home.
Click Next.

Action:
Specify the Inventory Directory. The value shown is the default.
Click Next.

Action:
Wait until some checks are performed.

The check returns errors. The Device Checks for ASM failure points to Bug 10357213: ASM
DEVICE CHECK FAILS WITH PRVF-5184 DURING GI INSTALL and can be ignored. It has
no impact on the installation.

Action:
Select Ignore All and Click Next.

Action:
Click OK.

Action:
Click Install.

Action:
Wait while GI gets installed.

Action:
Execute the scripts as user root on the local node first and then on the second node.

First node (parrot):

root@parrot::/root
$ /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to dba.


The execution of the script is complete.

root@parrot::/root
$ /u01/app/grid/11.2.0.3/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/grid/11.2.0.3/crs/install/crsconfig_params
Creating trace directory

User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'parrot'
CRS-2676: Start of 'ora.mdnsd' on 'parrot' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'parrot'
CRS-2676: Start of 'ora.gpnpd' on 'parrot' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'parrot'
CRS-2672: Attempting to start 'ora.gipcd' on 'parrot'
CRS-2676: Start of 'ora.cssdmonitor' on 'parrot' succeeded
CRS-2676: Start of 'ora.gipcd' on 'parrot' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'parrot'
CRS-2672: Attempting to start 'ora.diskmon' on 'parrot'
CRS-2676: Start of 'ora.diskmon' on 'parrot' succeeded
CRS-2676: Start of 'ora.cssd' on 'parrot' succeeded

ASM created and started successfully.

Disk Group DGGRID created successfully.

clscfg: -install mode specified


Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 33965780b5a44f06bf42b2dae944780b.
Successfully replaced voting disk group with +DGGRID.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 33965780b5a44f06bf42b2dae944780b (/dev/asmocrvotedisk1) [DGGRID]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'parrot'
CRS-2676: Start of 'ora.asm' on 'parrot' succeeded
CRS-2672: Attempting to start 'ora.DGGRID.dg' on 'parrot'
CRS-2676: Start of 'ora.DGGRID.dg' on 'parrot' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'parrot'
CRS-2676: Start of 'ora.registry.acfs' on 'parrot' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Second node (pelican):

root@pelican::/root
$ /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to dba.


The execution of the script is complete.
root@pelican::/root
$ /u01/app/grid/11.2.0.3/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/grid/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS
daemon on node parrot, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the
cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Now click OK.

Action:
Wait while the last configuration steps are performed.

This error is returned because I didn't set up DNS for the SCAN feature, but added the SCAN name to
the /etc/hosts file instead. For this reason the error can safely be ignored.

INFO: Checking name resolution setup for "gridcl06-scan"...


INFO: ERROR:
INFO: PRVG-1101 : SCAN name "gridcl06-scan" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "gridcl06-scan" (IP address:
192.168.0.226) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name
"gridcl06-scan"
INFO: Verification of SCAN VIP and Listener setup failed
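
A quick way to confirm the situation the warning describes: the SCAN name resolves locally through
/etc/hosts, but not through DNS. For example:

oracle@parrot::/home/oracle
$ getent hosts gridcl06-scan      # resolves via /etc/hosts
oracle@parrot::/home/oracle
$ nslookup gridcl06-scan          # fails, the name is not in DNS (requires bind-utils)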

Action:
Click OK.

Action:
Click Skip.

Action:
Click Next.

Action:
Click Yes.

Action:
Click Close.

Perform a quick check to see if all GI processes are available:

oracle@parrot::/home/oracle
$ . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@parrot::/home/oracle
$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DGGRID.dg
ONLINE ONLINE parrot
ONLINE ONLINE pelican
ora.LISTENER.lsnr
ONLINE ONLINE parrot
ONLINE ONLINE pelican
ora.asm
ONLINE ONLINE parrot Started
ONLINE ONLINE pelican Started
ora.gsd
OFFLINE OFFLINE parrot <====== 1)
OFFLINE OFFLINE pelican <====== 1)
ora.net1.network
ONLINE ONLINE parrot
ONLINE ONLINE pelican
ora.ons
ONLINE ONLINE parrot
ONLINE ONLINE pelican
ora.registry.acfs
ONLINE ONLINE parrot
ONLINE ONLINE pelican
--------------------------------------------------------------------------------

Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE parrot
ora.cvu
1 ONLINE ONLINE parrot
ora.oc4j
1 ONLINE ONLINE parrot
ora.parrot.vip
1 ONLINE ONLINE parrot
ora.pelican.vip
1 ONLINE ONLINE pelican
ora.scan1.vip
1 ONLINE ONLINE parrot

oracle@parrot:+ASM1:/home/oracle
$ crsctl status resource -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE parrot Started
ora.cluster_interconnect.haip
1 ONLINE ONLINE parrot
ora.crf
1 ONLINE ONLINE parrot
ora.crsd
1 ONLINE ONLINE parrot
ora.cssd
1 ONLINE ONLINE parrot
ora.cssdmonitor
1 ONLINE ONLINE parrot
ora.ctssd
1 ONLINE ONLINE parrot OBSERVER
ora.diskmon
1 OFFLINE OFFLINE <====== 2)
ora.drivers.acfs
1 ONLINE ONLINE parrot
ora.evmd
1 ONLINE ONLINE parrot
ora.gipcd
1 ONLINE ONLINE parrot
ora.gpnpd
1 ONLINE ONLINE parrot
ora.mdnsd
1 ONLINE ONLINE parrot

1) The ora.gsd resource is offline by default. This resource only needs to be enabled when you have
Oracle 9i databases running in your environment.
2) The ora.diskmon resource is offline by default when the installation is not on Exadata.

The ora.ctssd resource is running in OBSERVER mode. This is the case when NTP is configured
correctly; the Cluster Time Synchronization Service then only observes the system clocks instead of
actively synchronizing them.
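
As an extra, optional check you can also verify overall cluster health and the CTSS mode from the
command line, for example:

oracle@parrot:+ASM1:/home/oracle
$ crsctl check cluster -all
oracle@parrot:+ASM1:/home/oracle
$ olsnodes -n -s
oracle@parrot:+ASM1:/home/oracle
$ crsctl check ctss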

At this point the base installation of the GI software is completed.

7. Install Oracle 11gR2 RDBMS software

We are now ready to continue with the installation of the Oracle 11gR2 RDBMS software, so we can
create a RAC database in the next step. As with the GI software, I will use Oracle RDBMS version
11.2.0.3. The software needed for this step was already downloaded in a previous step.

Set the DISPLAY parameter and start runInstaller.

oracle@parrot::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@parrot::/software
$ cd /software/Database/11.2.0.3/database
oracle@parrot::/software/Database/11.2.0.3/database
$ ./runInstaller

Action:
Deselect I wish to receive security updates via My Oracle Support.
Click Next.

Action:
Click Yes.

Action:
Select Skip software updates.
Click Next.

Action:
Select Install database software only.
Click Next.

Action:
Make sure all nodes are selected.
Click Next.

Action:
Select the language of your choice.
Click Next.

Action:
Click Select Options.

Action:
Because I want to be able to experiment with all available options, I will install them all.
Click Select All and then click OK.

Action:
Click Next.

Action:
Specify the Oracle Base and the database Software Location.
Click Next.

Action:
Select dba as the Database Operator (OSOPER) Group.
Click Next.

Action:
Wait while the prerequisite checks are performed.

Action:
This error is returned because I didn't set up DNS for the SCAN feature, but added the SCAN name to
the /etc/hosts file instead. For this reason the error can safely be ignored. Click Ignore All.

Action:
Click Next.

Action:
Click Yes.

Action:
Click Install to start the database software installation.

Action:
Wait while the database software is being installed.

Action:
Execute the script as user root on the local node first and then on the second node.

First node (parrot):

root@parrot::/root
$ /u01/app/oracle/product/11.2.0.3/db_000/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.3/db_000

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Second node (pelican):

root@pelican::/root
$ /u01/app/oracle/product/11.2.0.3/db_000/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.3/db_000

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Now click OK.

Action:
Click Close.

At this point the base installation of the database software is completed.
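
As an optional sanity check (not part of the original click-through), you can confirm that the new
database home is registered in the central inventory:

oracle@parrot::/home/oracle
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_000
oracle@parrot::/home/oracle
$ $ORACLE_HOME/OPatch/opatch lsinventory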

8. Create ASM diskgroup for database files

We are almost ready to create a RAC database, but first we have to create an ASM diskgroup to store
the database files. I will use the asmca utility for this purpose.

Set the DISPLAY parameter and environment and start asmca:

oracle@parrot::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@parrot::/home/oracle
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@parrot:+ASM1:/home/oracle
$ asmca

Action:
Click Create.

Action:
Specify the Disk Group Name, Redundancy as External (None) and select the Candidate Disks to
be part of the ASM diskgroup.
Click OK.

Action:
Wait while the ASM diskgroup is being created.

Action:
Click OK.

Repeat the above steps to create a diskgroup called DGFRA with the remaining free ASM disk.
In this diskgroup you can create your flash recovery area when creating the database discussed in the
next chapter.
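
If you prefer the command line over asmca, the same diskgroups can also be created from SQL*Plus in
the ASM instance. A minimal sketch with external redundancy; the disk names /dev/asmdisk1 and
/dev/asmdisk2 are placeholders for whatever device names your udev rules created:

oracle@parrot:+ASM1:/home/oracle
$ sqlplus / as sysasm

SQL> CREATE DISKGROUP DGDATA EXTERNAL REDUNDANCY DISK '/dev/asmdisk1';
SQL> CREATE DISKGROUP DGFRA  EXTERNAL REDUNDANCY DISK '/dev/asmdisk2';

Unlike asmca, SQL*Plus only mounts the new diskgroup on the local node, so you would still have to
mount it on pelican (ALTER DISKGROUP ... MOUNT) before using it there.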

Action:
Click Exit.

Action:
Click Yes.

9. Create Oracle RAC Database

And finally we are able to create an Oracle RAC database. I will give an example of creating a
database using the dbca utility with a number of options. Which options you choose depends on your
needs.

Set the DISPLAY parameter and environment and start dbca:

oracle@parrot::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@parrot::/home/oracle
$ /u01/app/oracle/product/11.2.0.3/db_000/bin/dbca

Action:
Select Oracle Real Application Clusters (RAC) database.
Click Next.

Action:
Select Create a Database.
Click Next.

Action:
Select Custom Database.
Click Next.

Action:
Specify Admin-Managed as Configuration Type.
Specify the Global Database Name and SID Prefix.
Click Select All.
Click Next.

Action:
Deselect Configure Enterprise Manager.
Click Next.

Action:
Specify the Passwords for the SYS and SYSTEM accounts.
Click Next.

Action:
Select Use Oracle-Managed Files and +DGDATA as Database Area.
Click Next.

Action:
Use diskgroup +DGFRA as Fast Recovery Area.
A Fast Recovery Area Size of 4096 MB is enough for now.
Select Enable Archiving and click Next.

Action:
That's OK. We can add an extra disk to the diskgroup when needed.
Click Yes.

Action:
Select all the options you want to install in your database.
Click Next.

Action:
Under Typical, specify the Memory Size (SGA and PGA). A minimum size of 1024 MB is
recommended; this also avoids ORA-04031 errors while creating the database.
Select Use Automatic Memory Management.
Click the Character Sets tab.

Action:
Select Use Unicode (AL32UTF8) as Database Character Set and UTF8 as National Character Set.
Of course you can select any character set from the list that suits your needs.
Click Next.

Action:
Click Next.

Action:
If you are curious about the scripts generated by the dbca utility then select Generate Database
Creation Scripts.
Click Finish to start creating the RAC database.

Action:
Click OK.

Action:
Click OK.

Action:
Wait while the RAC database is being created.

Action:
Click Exit.

Check if all instances are running:

oracle@parrot::/home/oracle
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_000

oracle@parrot::/home/oracle
$ /u01/app/oracle/product/11.2.0.3/db_000/bin/srvctl status database -d ODBA1
Instance ODBA11 is running on node parrot
Instance ODBA12 is running on node pelican

Once the database is created, edit the /etc/oratab file on each node and add an entry for the local
instance.

First node (parrot):

ODBA11:/u01/app/oracle/product/11.2.0.3/db_000:N

Second node (pelican):

ODBA12:/u01/app/oracle/product/11.2.0.3/db_000:N
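
Optionally, you can also check from SQL*Plus that both instances are open. A quick sketch on the first
node (ODBA11 is the instance running on parrot, as shown by srvctl above):

oracle@parrot::/home/oracle
$ export ORACLE_SID=ODBA11
oracle@parrot::/home/oracle
$ $ORACLE_HOME/bin/sqlplus / as sysdba

SQL> select inst_id, instance_name, status from gv$instance;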

Congratulations! You have successfully created an Oracle RAC cluster on Oracle VM using Oracle
Linux 6.
