
Oracle 11g R2

Real Application Clusters


On
Oracle Linux 5 update 7

Hands-on Workshop

January 2012
Authors:
Efrán Sánchez
Platform Technology Manager
Oracle Server Technologies, PTS

Contributors / Reviewers:
André Sousa
Senior Technologist
Oracle Server Technologies, PTS

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
www.oracle.com

Oracle Corporation provides the software that powers the Internet.

Oracle is a registered trademark of Oracle Corporation. Various product and service names referenced
herein may be trademarks of Oracle Corporation. All other product and service names mentioned may be
trademarks of their respective owners.

Copyright 2011 Oracle Corporation. All rights reserved.
Platform Technology Solutions, Latin America

RAC workshop concepts and overview

During this RAC workshop you will set up a RAC cluster using Oracle Clusterware and Oracle Database
11g. The cluster will be set up on Oracle Enterprise Linux.

A cluster comprises multiple interconnected computers or servers that appear as if they are one server
to end users and applications. Oracle Database 11g Real Application Clusters (RAC) enables the
clustering of the Oracle Database. RAC uses the Oracle Clusterware for the infrastructure to bind
multiple servers so that they operate as a single system.

Your first step in the workshop will be configuring the operating system for the Clusterware and RAC
software.

Each server in the cluster will have one public network interface and one private network interface.
The public network interface is the standard network connection, which connects the server to all of
the other computers in your network. The private network interface is a private network connection
shared by only the servers in the cluster. The private network interface is used by the Oracle
Clusterware and RAC software to communicate between the servers in the cluster.

All of the database files in the cluster will be stored on shared storage. The shared storage allows
multiple database instances, running on different servers, to access the same database information.

Your next step in the workshop will be to install the Oracle Clusterware, which binds multiple servers
into a cluster. During the Clusterware install you will specify the location to create two Clusterware
components: a voting disk to record node membership information and the Oracle Cluster Registry
(OCR) to record cluster configuration information. The Clusterware install is performed on one server
and will be automatically installed on the other servers in the cluster.

After the Clusterware is installed, you will install the Oracle Database and RAC software. The installer
will automatically recognize that the Clusterware has already been installed. Like the Clusterware
install, the database and RAC install is performed on one server and the software will be automatically
installed on the other servers in the cluster.

A virtual IP address (VIP) is an alternate public address that client connections use instead of the standard
public IP address. If a node fails, its VIP fails over to another node; the relocated VIP cannot accept new
Oracle connections, so clients that attempt to connect to it receive a rapid connection error
instead of waiting for TCP connect timeout messages.

After the database software is installed you will create a database using the Database Configuration
Assistant.

Some parts of this workshop are based on the article Build Your Own Oracle RAC 11g Cluster on Oracle
Enterprise Linux and iSCSI by Jeffrey Hunter.

1.0.- Oracle Linux 5 Configuration Tasks


This step-by-step guide is targeted at those who are implementing Oracle Database 11g R2 RAC for
applications that need High Availability, Scalability, Performance, Workload Management and a lower
Total Cost of Ownership (TCO).

We hope going through this step by step guide to install Oracle Real Application Clusters (RAC) will be a
great learning experience for those interested in trying out Oracle Grid technology. This guide is an
example of installing a 2-node RAC cluster, but the same procedure applies to a single-instance
database managed by the grid infrastructure services.

Expectations, Roles and Responsibilities

You are expected to have:


a basic understanding of the Unix or Linux operating system and common commands such as cp.
a basic understanding of the RAC architecture and its benefits.

In this guide you perform the roles of both system administrator and DBA. You must have root privileges
to perform some of the steps.

Operating System

This guide assumes the systems are pre-installed with either Oracle Enterprise Linux 5 or Red Hat
Enterprise AS/ES 5. If your system does not have the correct operating system some steps will not
work and the install may not perform as expected.

Storage

Oracle Real Application Clusters requires shared storage for its database files. This guide will use
Oracle Automatic Storage Management (ASM) during the install for the storage management.

Oracle required OS packages


Oracle has released the Unbreakable Enterprise Kernel for x86 32-bit and 64-bit servers, and it is the
default installation option; you can still switch to the Red Hat compatible kernel.

Install required OS packages via local DVD yum repository:

1. Log in as the root user, open a terminal window and review the local-cdrom repository
configuration using cat to display the file content:

[root@db01 ~]# cat /etc/yum.repos.d/cdrom.repo

Review the output:

[ol5_u7_base_cdrom]
name=Oracle Linux $releasever - U7 - $basearch - base cdrom
baseurl=file:///media/cdrom/Server/
gpgcheck=0
enabled=1

[ol5_u7_cluster_cdrom]
name=Oracle Linux $releasever - U7 - $basearch - cluster cdrom
baseurl=file:///media/cdrom/ClusterStorage/
gpgcheck=0
enabled=1

Insert the DVD or configure the ISO file in VirtualBox: right-click the CD-ROM icon on the
VirtualBox status bar, click "Choose a virtual CD/DVD disk file" and select the corresponding
ISO file.

Un-mount the cdrom from the current automount directory and re-mount it in /media/cdrom:

umount /dev/cdrom
mount /dev/cdrom /media/cdrom
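If the mount fails because the /media/cdrom mount point does not exist (this depends on how the OS
image was prepared), create it first and repeat the mount:

mkdir -p /media/cdrom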

Display current yum configuration:

yum repolist

You will get the following output:

Loaded plugins: rhnplugin, security


This system is not registered with ULN.
ULN support will be disabled.
ol5_u7_base_cdrom | 1.1 kB 00:00
ol5_u7_cluster_cdrom | 1.1 kB 00:00
repo id repo name status
ol5_u7_base_cdrom Oracle Linux 5 - U7 - i386 - base cdrom 2,471
ol5_u7_cluster_cdrom Oracle Linux 5 - U7 - i386 - cluster cdrom 16
repolist: 2,487

2. The oracle-validated package verifies and sets system parameters based on configuration
recommendations for Oracle Linux. The files it updates are:

/etc/sysctl.conf
/etc/security/limits.conf
/etc/modprobe.conf
/boot/grub/menu.lst

This package modifies kernel module parameters and re-inserts the modules; it also installs any
packages required by Oracle Database.

yum install oracle-validated

It is recommended that you also install the following packages for compatibility with previous versions:

yum install libXp-devel openmotif22 openmotif

3. Install the Automatic Storage Management (ASM) packages:

yum install oracleasm-support oracleasm-2.6.18-274.el5

4. Clean all cached files from any enabled repository. It's useful to run it from time to time to
make sure there is nothing using unnecessary space in /var/cache/yum.

yum clean all

Eject the CD-ROM and disable the ISO image so that the OS installation does not boot on the next
reboot.

Optionally, you can configure the public yum repository to install new updates in the future; skip this
step for the workshop:

5. Disable the current local-cdrom repository by changing enabled=1 to enabled=0.

6. Download and install the Oracle Linux 5 repo file to your system.
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo

7. Enable the [ol5_u7_base] repositories in the yum configuration file by changing enabled=0
to enabled=1 in those sections.

8. To update your system use the following yum command:

# yum update

1.1.- Kernel Parameters


As root, update the kernel parameters in the /etc/sysctl.conf file on both db1 and db2.

Open a terminal window, su to root and run the following commands; alternatively, use vi to edit
/etc/sysctl.conf directly if you are not comfortable with scripting the update.

Review the modifications made by the oracle-validated package

vi /etc/sysctl.conf
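For reference, the settings written by oracle-validated typically resemble the excerpt below. The exact
values depend on the package version and the amount of memory in the server, so treat this only as an
illustration of what to expect and keep whatever the package configured:

# Illustrative oracle-validated kernel settings (actual values may differ)
fs.file-max = 6815744
fs.aio-max-nr = 1048576
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576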

Optionally, for production systems, you can configure the OS to reboot in case of a kernel panic:

# Enables system reboot in 60 seconds after kernel panic


kernel.panic = 60

Review also the following files modified by the oracle-validated package on each node of the cluster:

/etc/security/limits.conf
/etc/pam.d/login
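As a rough guide, the shell limit entries for the oracle and grid users usually resemble the following
(these are the generic Oracle-documented minimums; oracle-validated may write different, typically
higher, values, so do not overwrite what it configured):

# Illustrative /etc/security/limits.conf entries (actual values may differ)
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

# Illustrative /etc/pam.d/login entry enabling the limits at login
session required pam_limits.so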

Open a terminal window and run the following commands. Alternatively use vi to update the values in
the default profile file.

cat >> /etc/profile <<'EOF'


# Oracle settings for 11g
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
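As an optional sanity check, open a fresh oracle login shell and confirm the limits took effect:

su - oracle -c "ulimit -u; ulimit -n"

The two values printed should match the process and open-file limits set in the profile above.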

Run the following command to make the changes immediately effective instead of rebooting the
machine.

/sbin/sysctl -p

Disable secure Linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set
as follows:

SELINUX=disabled

Disable the firewall if it was not disabled at OS install time:

/etc/rc.d/init.d/iptables stop
chkconfig iptables off

Repeat the same procedure on each database node in the cluster.

1.2.- Check Installed and additional packages


Everything should be ready, but let's check the required packages:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' compat-db compat-gcc-34 \
    compat-gcc-34-c++ compat-libstdc++-296 compat-libstdc++-33

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils control-center \
    gcc gcc-c++ glibc glibc-common glibc-headers glibc-devel libstdc++ \
    libstdc++-devel make sysstat libaio libaio-devel

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' expat fontconfig \
    freetype zlib libXp libXp-devel openmotif22 openmotif elfutils-libelf \
    elfutils-libelf-devel unixODBC unixODBC-devel

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' oracleasm-support \
    oracleasm-2.6.18-274.el5
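If a package from any of the lists above is missing, rpm reports it as "not installed"; a quick way to spot
gaps is to filter for that string, for example:

rpm -q compat-db compat-gcc-34 compat-gcc-34-c++ compat-libstdc++-296 compat-libstdc++-33 | grep "not installed"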

1.3.- Network Configuration


As root, edit the /etc/hosts file on one node and include the host IP addresses, VIP addresses and private
network IP addresses of all nodes in the cluster as follows (erase the current configuration first).

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

# Admin Network
10.0.3.11 db1
10.0.3.12 db2

# Private Network
10.0.4.11 db1-priv
10.0.4.12 db2-priv
10.0.4.21 nas01

# Public network is configured on DNS
10.0.5.254 dns01

After the /etc/hosts file is configured on db1, copy it to the other node(s) (db2) using scp. You
will be prompted for the root password of the remote node(s), for example:

scp /etc/hosts <db2>:/etc/hosts

As root, verify network configuration by pinging db1 from db2 and vice versa. As root, run the
following commands on each node.

ping -c 1 db1
ping -c 1 db2

ping -c 1 db1-priv
ping -c 1 db2-priv

Note that you will not be able to ping the virtual IPs (db1-vip, etc.) until after the clusterware is
installed, up and running.

Check that no gateway is defined for private interconnect.
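One way to check this, assuming eth1 is the private interconnect interface (as configured later during
the grid infrastructure install), is to inspect its interface file and the routing table:

grep -i gateway /etc/sysconfig/network-scripts/ifcfg-eth1
/sbin/route -n

No GATEWAY entry should appear for eth1, and the default route should go through the public interface only.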

If you find any problems, run the network configuration program as the root user:
/usr/bin/system-config-network

Verify MTU size for the private network interface

To set the current MTU size:

ifconfig eth1 mtu 1500

To make this change permanent, add MTU=1500 at the end of the eth1 configuration file:

cat >> /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF


MTU=1500
EOF

Execute the same command on the second node.

Configure DNS name resolution

cat > /etc/resolv.conf <<EOF


search local.com
options timeout:1
nameserver 10.0.5.254
EOF

Execute the following command to test the DNS availability.

nslookup db-cluster-scan

Server: 10.0.5.254
Address: 10.0.5.254#53

Name: db-cluster-scan.local.com
Address: 10.0.5.20

1.4.- Configure Cluster Time Synchronization Service (CTSS) and Hangcheck Timer

If the Network Time Protocol (NTP) service is not available or properly configured, you can use Cluster
Time Synchronization Service to provide synchronization services in the cluster, but first you need to
deconfigure and remove the current NTP configuration.

To deactivate the NTP service, you must stop the existing ntpd service, disable it from the
initialization sequences and remove the ntp.conf file. To complete these steps on Oracle Enterprise
Linux, run the following commands as the root user on both Oracle RAC nodes:

/sbin/service ntpd stop


chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file (it maintains the PID for the NTP daemon):

rm /var/run/ntpd.pid
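Optionally, confirm that ntpd is stopped and will not start again at boot:

/sbin/service ntpd status
chkconfig --list ntpd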

Verify hangcheck-timer (skip this step for a VirtualBox installation)


The hangcheck-timer module monitors the Linux kernel for extended operating
system hangs that could affect the reliability of a RAC node and cause a database
corruption. If a hang occurs, the module restarts the node in seconds.

To see if hangcheck-timer is running, run the following command, on both nodes.

/sbin/lsmod | grep hang

If nothing is returned, run the following to configure it:

echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180"


>> /etc/modprobe.conf

modprobe hangcheck-timer

grep Hangcheck /var/log/messages | tail -2

Run the following command as root to start hangcheck-timer automatically on system startup

echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local



1.5.- Create groups and oracle user


The following OS groups will be used during the installation:

Description                                OS Group Name  OS Users Assigned to this Group  Oracle Privilege  Oracle Group Name
Oracle Inventory and Software Owner        oinstall       grid, oracle                     -                 -
Oracle Automatic Storage Management Group  asmadmin       grid                             SYSASM            OSASM
ASM Database Administrator Group           asmdba         grid, oracle                     SYSDBA for ASM    OSDBA for ASM
ASM Operator Group                         asmoper        grid                             SYSOPER for ASM   OSOPER for ASM
Database Administrator                     dba            oracle                           SYSDBA            OSDBA
Database Operator                          oper           oracle                           SYSOPER           OSOPER

As root on both db1 and db2, create the following groups:

/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/groupadd oper

/usr/sbin/groupadd asmadmin
/usr/sbin/groupadd asmdba
/usr/sbin/groupadd asmoper

The following commands create the grid and oracle users and their home directories, with oinstall as
the default group and the ASM and database groups as secondary groups. The default shell is bash. The
useradd man page provides additional details on the command:

useradd -g oinstall -G asmadmin -m -s /bin/bash -d /home/grid -r grid


usermod -g oinstall -G asmadmin,asmdba,asmoper,dba grid

useradd -g oinstall -G dba -m -s /bin/bash -d /home/oracle -r oracle


usermod -g oinstall -G dba,asmadmin,asmdba -s /bin/bash oracle

Set the password for the oracle and grid accounts; use welcome1.

passwd oracle

Changing password for user oracle.


New UNIX password:<enter password>
retype new UNIX password:<enter password>
passwd: all authentication tokens updated successfully.

passwd grid

Verify that the attributes of the oracle and grid users are identical on both db1 and db2:

id oracle
id grid

The command output should be as follows:

[root@db01 ~]# id oracle

uid=54321(oracle) gid=54321(oinstall)
groups=54321(oinstall),54322(dba),54324(asmadmin),54325(asmdba)

[root@db01 ~]# id grid

uid=102(grid) gid=54321(oinstall)
groups=54321(oinstall),54324(asmadmin),54325(asmdba),54326(asmoper)

Enable xhost permissions in case you want to log in as root and switch to the oracle or grid user:

xhost +

Re-login or switch to the oracle OS user and edit the .bash_profile file with the following:

umask 022
if [ -t 0 ]; then
stty intr ^C
fi

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/rac
#export ORACLE_SID=<your sid>

export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm

PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

Copy the profile to db2

scp /home/oracle/.bash_profile oracle@db2:/home/oracle



Log in or switch to the grid user and edit .bash_profile with the following:
umask 022
if [ -t 0 ]; then
stty intr ^C
fi

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/grid/11.2.0/infra
#export ORACLE_SID=<your sid>

export CV_NODE_ALL=db1,db2
export CVUQDISK_GRP=oinstall

export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm

PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

Copy the profile to db2

scp /home/grid/.bash_profile grid@db2:/home/grid



1.6.- Create install directories, as root user (on each node)


rm -rf /u01/app

mkdir -p /u01/app/oracle/product/11.2.0/rac
chown -R oracle:oinstall /u01/app

mkdir -p /u01/app/grid/11.2.0/infra
chown -R grid:oinstall /u01/app/grid

chmod -R 775 /u01/



1.7.- Configure SSH on all nodes


The Installer uses the ssh and scp commands during installation to run remote commands and copy
files to the other cluster nodes. You must configure ssh so that these commands do not prompt for
a password

OPTIONAL: The Oracle 11gR2 installer can now configure ssh keys across all nodes in the cluster, but if
you want to configure them manually, use the following steps:

Log out and log in as oracle on db1.

NOTE: If you switch to the oracle user, you must use the '-' option, for example: su - oracle, so
that the shell environment is set correctly.

mkdir .ssh

Create RSA and DSA type public and private keys on both nodes.

ssh-keygen -t rsa
ssh db2 /usr/bin/ssh-keygen -t rsa

Accept the default location for the key file


Leave the pass phrase blank.

This command writes the public key to the /home/oracle/.ssh/id_rsa.pub file and the private key
to the /home/oracle/.ssh/id_rsa file.

ssh-keygen -t dsa
ssh db2 /usr/bin/ssh-keygen -t dsa

Accept the default location for the key file


Leave the pass phrase blank.

This command writes the public key to the /home/oracle/.ssh/id_dsa.pub file and the private key
to the /home/oracle/.ssh/id_dsa file.

On node 1:

Concatenate the rsa and dsa public keys of both nodes into one file called authorized_keys with the
following commands; execute them one by one.

ssh db1 cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys


ssh db1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh db2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys


ssh db2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Copy the authorized_keys file from node 1 to node 2.

scp ~/.ssh/authorized_keys db2:~/.ssh/authorized_keys


ssh db1 chmod 600 ~/.ssh/authorized_keys
ssh db2 chmod 600 ~/.ssh/authorized_keys

Check the connections with the following commands on both nodes. Execute each line one at a
time and choose to permanently add each host to the list of known hosts.

ssh db1 date


ssh db2 date
ssh db1-priv date
ssh db2-priv date

Try the next line to see if everything works

ssh db1 date; ssh db2 date; ssh db1-priv date; ssh db2-priv date

Execute the same procedure for user grid



1.8.- iSCSI Configuration


In this section, we will use the sendtargets discovery method. We first need to verify that the iSCSI
software packages are installed on our servers before we can proceed further.

Enabling the Name Service Cache Daemon

To allow Oracle Clusterware to better tolerate network failures with NAS devices or NFS
mounts, enable the Name Service Cache Daemon (nscd).

To change the configuration to ensure that nscd is on for both run level 3 and run level 5, enter
the following command as root:

chkconfig --level 35 nscd on


service nscd start

Configure UDEV Rules

Execute the following to create the udev rules file:


cat >> /etc/udev/rules.d/99-iscsi.rules<<EOF
#iscsi devices
KERNEL=="sd*", BUS=="scsi", PROGRAM="/usr/local/bin/iscsidev %b",SYMLINK+="iscsi/%c{1}.p%n"
EOF

Use vi to create the following script:


[root@db1 ~]# vi /usr/local/bin/iscsidev

#!/bin/sh
BUS=${1}
HOST=${BUS%%:*}
LUN=`echo ${BUS} | cut -d":" -f4`
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session:session*/targetname"
target_name=`cut -d":" -f2 ${file}`
if [ -z "${target_name}" ]; then
    exit 1
fi
echo "${target_name} ${LUN}"

Start iscsi services:

chmod a+x /usr/local/bin/iscsidev


chkconfig iscsid on
service iscsid start
setsebool -P iscsid_disable_trans=1

iscsiadm -m discovery -t sendtargets -p nas01


service iscsi restart

Display running sessions:


iscsiadm -m session

tcp: [1] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk06


tcp: [10] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk08
tcp: [11] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk03
tcp: [12] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk01
tcp: [2] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk04
tcp: [3] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk02
tcp: [4] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk07
tcp: [5] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk09
tcp: [6] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk11
tcp: [7] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk12
tcp: [8] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk10
tcp: [9] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk05

Check that the /dev/iscsi links are created, notice the order assigned.
ls -l /dev/iscsi

Copy the iscsi udev rule and script from db1 to db2:


scp /etc/udev/rules.d/99-iscsi.rules db2:/etc/udev/rules.d
scp /usr/local/bin/iscsidev db2:/usr/local/bin

Execute the configuration on the remaining node(s), also as root:

chkconfig iscsid on
service iscsid start
setsebool -P iscsid_disable_trans=1

iscsiadm -m discovery -t sendtargets -p nas01

service iscsi restart


ls -l /dev/iscsi

Now we are going to partition the disks; we will use the first partition for the grid infrastructure
disk group.

Partition 1: 100 MB
Partition 2: remaining space

First Disk

[root@db01 ~]# fdisk /dev/iscsi/nas01.disk01.p


Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n



Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009): +100M

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (49-1009, default 49): <press enter>
Using default value 49
Last cylinder or +size or +sizeM or +sizeK (472-1009, default 1009): <enter>
Using default value 1009

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or
resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

A method for cloning a partition table in Linux is to use sfdisk; we are going to apply the same
configuration to all disks.

sfdisk -d /dev/iscsi/nas01.disk01.p>disk01part.txt

sfdisk /dev/iscsi/nas01.disk02.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk03.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk04.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk05.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk06.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk07.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk08.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk09.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk10.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk11.p<disk01part.txt
sfdisk /dev/iscsi/nas01.disk12.p<disk01part.txt
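Equivalently, assuming the device naming shown above, a short loop applies the saved layout to the
remaining disks:

for i in 02 03 04 05 06 07 08 09 10 11 12; do
    sfdisk /dev/iscsi/nas01.disk${i}.p < disk01part.txt
done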

Initialize all block devices with the following commands from db1:

dd if=/dev/zero of=/dev/iscsi/nas01.disk01.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk01.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk02.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk02.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk03.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk03.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk04.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk04.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk05.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk05.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk06.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk06.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk07.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk07.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk08.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk08.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk09.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk09.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk10.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk10.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk11.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk11.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk12.p1 bs=1000k count=99


dd if=/dev/zero of=/dev/iscsi/nas01.disk12.p2 bs=1000k count=99

You'll need to propagate changes on node 2 by executing:

For a SAN configuration:

partprobe

For an iscsi configuration (Virtualbox):

service iscsi restart



1.9.- ASMlib configuration


The Oracle ASMLib kernel driver is now included in the Unbreakable Enterprise Kernel. No driver
package needs to be installed when using this kernel. The oracleasm-support and oracleasmlib
packages still need to be installed.

The package oracleasmlib can be downloaded directly from:


http://www-content.oracle.com/technetwork/topics/linux/downloads/index-088143.html

Make sure the two ASM packages are installed on both nodes. Because the ASMLib driver is implemented in
the new Oracle Unbreakable Kernel, we no longer need to install the oracleasmlib package for this workshop:

[root@db01 ~]# rpm -qa | grep oracleasm


oracleasm-2.6.18-274.el5-2.0.5-1.el5
oracleasm-support-2.1.7-1.el5

Run the following command to configure ASM

[root@db1 ~]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid


Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Every disk that ASMLib is going to be accessing needs to be made available. This is accomplished by
creating an ASM disk in db1:

/etc/init.d/oracleasm createdisk NAS01_GRID01 /dev/iscsi/nas01.disk01.p1


/etc/init.d/oracleasm createdisk NAS01_GRID02 /dev/iscsi/nas01.disk02.p1
/etc/init.d/oracleasm createdisk NAS01_GRID03 /dev/iscsi/nas01.disk03.p1
/etc/init.d/oracleasm createdisk NAS01_GRID04 /dev/iscsi/nas01.disk04.p1
/etc/init.d/oracleasm createdisk NAS01_GRID05 /dev/iscsi/nas01.disk05.p1
/etc/init.d/oracleasm createdisk NAS01_GRID06 /dev/iscsi/nas01.disk06.p1
/etc/init.d/oracleasm createdisk NAS01_GRID07 /dev/iscsi/nas01.disk07.p1
/etc/init.d/oracleasm createdisk NAS01_GRID08 /dev/iscsi/nas01.disk08.p1
/etc/init.d/oracleasm createdisk NAS01_GRID09 /dev/iscsi/nas01.disk09.p1
/etc/init.d/oracleasm createdisk NAS01_GRID10 /dev/iscsi/nas01.disk10.p1
/etc/init.d/oracleasm createdisk NAS01_GRID11 /dev/iscsi/nas01.disk11.p1
/etc/init.d/oracleasm createdisk NAS01_GRID12 /dev/iscsi/nas01.disk12.p1

/etc/init.d/oracleasm createdisk NAS01_DATA01 /dev/iscsi/nas01.disk01.p2


/etc/init.d/oracleasm createdisk NAS01_DATA02 /dev/iscsi/nas01.disk02.p2
/etc/init.d/oracleasm createdisk NAS01_DATA03 /dev/iscsi/nas01.disk03.p2
/etc/init.d/oracleasm createdisk NAS01_DATA04 /dev/iscsi/nas01.disk04.p2
/etc/init.d/oracleasm createdisk NAS01_DATA05 /dev/iscsi/nas01.disk05.p2
/etc/init.d/oracleasm createdisk NAS01_DATA06 /dev/iscsi/nas01.disk06.p2
/etc/init.d/oracleasm createdisk NAS01_DATA07 /dev/iscsi/nas01.disk07.p2
/etc/init.d/oracleasm createdisk NAS01_DATA08 /dev/iscsi/nas01.disk08.p2
/etc/init.d/oracleasm createdisk NAS01_DATA09 /dev/iscsi/nas01.disk09.p2
/etc/init.d/oracleasm createdisk NAS01_DATA10 /dev/iscsi/nas01.disk10.p2
/etc/init.d/oracleasm createdisk NAS01_DATA11 /dev/iscsi/nas01.disk11.p2
/etc/init.d/oracleasm createdisk NAS01_DATA12 /dev/iscsi/nas01.disk12.p2

List all the disks:

/etc/init.d/oracleasm listdisks

List all the ASM persistent links:


ls /dev/oracleasm/disks/

We also have to execute the ASMLib configuration in db2.

[root@db2~]# /etc/init.d/oracleasm configure

When a disk is marked with ASMLib, the other nodes have to be refreshed; just run the 'scandisks' option
on db2:

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]

Existing disks can now be listed:

[root@db2 ~]# /etc/init.d/oracleasm listdisks

NAS01_GRID01
NAS01_GRID02
NAS01_GRID03
NAS01_GRID04

Because we are going to use ASMLib support, we no longer need to assign permissions to the block devices
at reboot in the /etc/rc.local file; ASMLib will take care of that.

1.10.- Verify SCAN name with DNS Server


For the purpose of this workshop, we already configured the SCAN address resolution in the VM that
acts as an iSCSI and DNS server on the client access network (10.0.5.254).

The nslookup binary will be executed by the Cluster Verification Utility during the Oracle grid
infrastructure install.

Verify the command output, it should look like the following:

[root@db1 ~]# nslookup db-cluster-scan

Server: 10.0.5.254
Address: 10.0.5.254#53

Name: db-cluster-scan.local.com
Address: 10.0.5.20

Remember to perform these actions on both Oracle RAC nodes.



1.11.- Configure VNC Server (Optional)


First check if the VNC packages are already installed on your system; open a terminal and type:

$ rpm -qa|grep vnc

We need to add at least one VNC user; open the file /etc/sysconfig/vncservers as root and
add the information shown:

VNCSERVERS="1:root"
VNCSERVERARGS[1]="-geometry 1024x768 -depth 16"

To add some security, we need to set a password that must be given before a connection can be
established; open a terminal and type:

$ vncpasswd

To start the server, type the command 'vncserver' followed by the display number of the session you
wish to start, in this case :1 for root (relevant if you have set up more than one entry in the
/etc/sysconfig/vncservers file):

[root@db1 ~] vncserver :1

Now the server is started and a user could connect; however, they will get a plain grey desktop by
default because the connection does not start a new X session. To fix this, we need to edit the
startup script in the .vnc folder in your home directory:

vi ~/.vnc/xstartup
# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc

As the file says, make sure the two lines at the top are uncommented by removing the leading # sign.
Next we need to restart vncserver to pick up the changes we just made; to restart it, kill the
process and start a new one as root:

$ vncserver -kill :1
$ vncserver :1

To start the viewer type:


vncviewer <ip address>:1

2.0 Oracle Software, Pre-Installation Tasks

The Cluster Verification Utility (CVU) automatically checks all nodes that are specified, but first we need
to install the cvuqdisk rpm required by the CVU on both nodes.

su -
export CVUQDISK_GRP=asmadmin
cd /install/11gR2/grid/rpm
rpm -ivh cvuqdisk-*

ssh root@db2 mkdir -p /install/11gR2/grid/rpm


scp cvuqdisk-* root@db2:/install/11gR2/grid/rpm

ssh root@db2
export CVUQDISK_GRP=asmadmin
rpm -ivh /install/11gR2/grid/rpm/cvuqdisk-*

Log in again to the OS desktop graphical user interface as the grid user and execute the following
commands (if you are installing only one node, replace the -n all switch with -n db1):

cd /install/11gR2/grid

Verifying node connectivity (only if you configured the ssh equivalence):


./runcluvfy.sh comp nodecon -n all -verbose

Performing post-checks for hardware and operating system setup:


./runcluvfy.sh stage -post hwos -n all -verbose

Check system requirements for CRS, displaying only the failed checks:


./runcluvfy.sh comp sys -n all -p crs -verbose | grep failed

Check warnings and errors.

Ignore memory and kernel parameter errors; the installer will generate a script for you to run as the
root user to change them to the correct values.

You may need to update some rpms as ROOT user in BOTH NODES

Perform overall pre-checks for cluster services setup:


./runcluvfy.sh stage -pre crsinst -n all

Check the time difference between the nodes; if it is more than one second, update the time manually
as the root user using an NTP server and update the hardware clock:

[root@db1 ~]# /usr/sbin/ntpdate north-america.pool.ntp.org


[root@db1 ~]# hwclock --systohc

If needed, add 1024 MB of extra swap to avoid installer warnings:


[root@db1 ~]# dd if=/dev/zero of=/extraswap bs=1M count=1024
[root@db1 ~]# mkswap /extraswap
[root@db1 ~]# swapon /extraswap
[root@db1 ~]# swapon -s
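Swap enabled this way does not survive a reboot. If you want the extra swap to persist (optional, and
not needed for the workshop), an /etc/fstab entry such as the following could be added:

/extraswap swap swap defaults 0 0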

2.1 Install 11g Grid Infrastructure, formerly 10g Cluster Ready Services (CRS)
The installer needs to be run from one node in the cluster under an X environment. Run the following
steps in VNC (or another X client) on only the first node in the cluster as grid user.

Review .bash_profile configuration: more ~/.bash_profile

Run the Oracle Universal Installer:

/install/11gR2/grid/runInstaller

Respond to each OUI screen as follows:

Select Installation Option:
Select "Install and Configure Grid Infrastructure for a Cluster".

Select Installation Type:
Select "Advanced Installation".

Select Product Languages:
Click Next.

Grid Plug and Play Information:
Cluster Name: db-cluster
SCAN Name: db-cluster-scan
SCAN Port: 1521
Un-check only the option "Configure GNS", Click Next.

Cluster Node Information:
Click the "Add" button to add "db2" and its virtual IP address "db2-vip", Click Next.

Specify Network Interface Usage:
Identify the network interfaces to be used for the "Public" and "Private" networks.
Make any changes necessary to match the values in the table below:

Interface  Subnet    Type
eth1       10.0.4.0  Private
eth2       10.0.5.0  Public
eth0       10.0.3.0  Do Not Use

Storage Option Information:
Select "Automatic Storage Management (ASM)", Click Next.

Create ASM Disk Group:
Change the Discovery Path to /dev/oracleasm/disks/*
Create an ASM disk group that will be used to store the Oracle Clusterware files according to the
following values:

Disk Group Name: GRID
Redundancy: External Redundancy
Disks: NAS01_GRID*

Click Next.
In a production environment it is always recommended to use at least Normal Redundancy.

Specify ASM Password:
For the purpose of this article, choose "Use same passwords for these accounts", Click Next.

Failure Isolation Support:
Select "Do not use Intelligent Platform Management Interface (IPMI)".

Privileged Operating System Groups:
Make any changes necessary to match the values:
OSDBA for ASM: asmdba
OSOPER for ASM: asmoper
OSASM: asmadmin
Click Next.

Specify Installation Location:
Review the default values; they are preloaded from the environment variables we already set for the
OS user. Click Next.

Create Inventory:
Inventory Directory: /u01/app/oraInventory
oraInventory Group Name: oinstall

Prerequisite Checks:
If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by
generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated
during installation; you will be prompted to run the script as root in a separate terminal session.
Ignore the Device Checks for ASM error by selecting the Ignore All checkbox.

Summary:
Click Finish to start the installation.

Setup:
The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Execute Configuration scripts:
Run the orainstRoot.sh script on both nodes in the RAC cluster:

[root@db1 ~]# /u01/app/oraInventory/orainstRoot.sh

[root@db2 ~]# /u01/app/oraInventory/orainstRoot.sh

Within the same new console window on both Oracle RAC nodes in the cluster (starting with the node
you are performing the install from), stay logged in as the root user. Run the root.sh script on both
nodes in the RAC cluster one at a time, starting with the node you are performing the install from:

[root@db1 ~]# /u01/app/grid/11.2.0/infra/root.sh

[root@db2 ~]# /u01/app/grid/11.2.0/infra/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will
receive output similar to the following, which signifies a successful install:
...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Finish:
At the end of the installation, click the [Close] button to exit the OUI.

Install verification

The installed Cluster Verification Utility can be used to verify the CRS installation.

Run the Cluster Verification Utility as the grid user; if running the check against a single node, replace
all with the node name, for example db1.

cluvfy stage -post crsinst -n all

Reboot the server; in the following section we'll execute some commands to make sure all services
started successfully.

Troubleshooting:

If something goes wrong when executing root.sh, you can review the log and repair the error; but
before executing the script again, deconfigure the node and then re-execute the root.sh script.

Don't execute this if you finished the configuration correctly:


<oracle_home>/crs/install/rootcrs.pl -deconfig -force

2.2 Post installation procedures


Verify Oracle Clusterware Installation

After the installation of Oracle grid infrastructure, you should run through several tests to verify the
install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.

Check CRS Status

[grid@db1 ~]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online


CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources

[grid@db1 ~]$ crs_stat -t -v

Name Type R/RA F/FT Target State Host


----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE db1
ora.GRID.dg ora....up.type 0/5 0/ ONLINE ONLINE db1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE db1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE db1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE db1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE db1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE db1
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE db1
ora.db1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.db1.ons application 0/3 0/0 ONLINE ONLINE db1
ora.db1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE db1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE db1
ora.oc4j ora.oc4j.type 0/5 0/0 ONLINE ONLINE db1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE db1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE db1

Check Cluster Nodes

[grid@db1 ~]$ olsnodes -n
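For this two-node cluster the output should list both nodes with their node numbers, similar to:

db1     1
db2     2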

Check Oracle TNS Listener Process on Both Nodes

[grid@db1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

LISTENER_SCAN1
LISTENER

[grid@db2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

LISTENER

Another method is to use the command:

[grid@db1 ~] srvctl status listener

Listener LISTENER is enabled


Listener LISTENER is running on node(s): db1

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command
syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation
is running:

[grid@db1 ~]$ srvctl status asm -a

ASM is running on db1,db2


ASM is enabled.

Check Oracle Cluster Registry (OCR)

[grid@db1 ~]$ ocrcheck

Status of Oracle Cluster Registry is as follows :


Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2224
Available space (kbytes) : 259896
ID : 670206863
Device/File Name : +GRID
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

Check Voting Disk

[grid@db1 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group


-- ----- ----------------- --------- ---------
1. ONLINE 6ba1b1a0d1ed4fcbbfafac71f335bec3 (/dev/oracleasm/disks/NAS01_GRID03) [GRID]
Located 1 voting disk(s).

Note: To manage Oracle ASM or Oracle Net for 11g release 2 (11.2) or later installations, use the srvctl
binary in the Oracle grid infrastructure home for a cluster (Grid home). Once you install Oracle Real
Application Clusters (the Oracle database software), you cannot use the srvctl binary in the database
home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home.

Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after
installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up
and restoring a voting disk using the dd command is not supported and may result in the loss of the voting disk.

Backing up the voting disks in Oracle Clusterware 11g release 2 is no longer required. The voting disk
data is automatically backed up in OCR as part of any configuration change and is automatically
restored to any voting disk added.

To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local
Registry (OLR), please refer to the Oracle Clusterware Administration and Deployment Guide 11g
Release 2 (11.2).

Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you
install other products in the same Oracle home directory, then the installer updates the contents of
the existing root.sh script during the installation. If you require information contained in the original
root.sh script, then you can recover it from the root.sh file copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@db1 ~]# cd /u01/app/grid/11.2.0/infra


[root@db1 grid]# cp root.sh root.sh.db1

[root@db2 ~]# cd /u01/app/grid/11.2.0/infra


[root@db2 grid]# cp root.sh root.sh.db2

In order for JDBC Fast Connection Failover to work, you must start the Global Services Daemon (GSD) for
the first time on each node as the root user; this step is optional.

[root@db1 ~]# /u01/app/grid/11.2.0/infra/bin/gsdctl start

Next time you reboot the server or restart cluster services, GSD services will start automatically.

3.0 ASM Disk Groups Provisioning

Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (db1) to
create the additional ASM disk groups which will be used to create the clustered database.

During the installation of Oracle grid infrastructure, we configured one ASM disk group named +GRID
which was used to store the Oracle clusterware files (OCR and voting disk).

In this section, we will create two additional ASM disk groups using the ASM Configuration Assistant
(asmca). These new ASM disk groups will be used later in this guide when creating the clustered
database.

The first ASM disk group will be named +DATA and will be used to store all Oracle physical database
files (data, online redo logs, control files, archived redo logs).

Normally a second ASM disk group is created for the Fast Recovery Area named +FLASH, but for this lab
we will use only one diskgroup.

Before starting the ASM Configuration Assistant, log in to db1 as the owner of the Oracle Grid
Infrastructure software, which for this article is grid. You can work either from a remote client connected
to the Oracle RAC node performing the installation (SSH or Telnet to db1 from a workstation configured
with an X server) or directly from the console.

Update ASMSNMP password

As grid user, execute the following commands:

[grid@db1 ~]$ export ORACLE_SID=+ASM1


[grid@db1 ~]$ asmcmd

ASMCMD> lspwusr

Username sysdba sysoper sysasm


SYS TRUE TRUE TRUE
ASMSNMP TRUE FALSE FALSE

ASMCMD> orapwusr --modify --password sys


Enter password: manager

ASMCMD> orapwusr --modify --password asmsnmp


Enter password: manager

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@db1 ~]$ asmca &

Respond to each ASMCA screen as follows:

Disk Groups:
From the "Disk Groups" tab, click the "Create" button.

Create Disk Group:
The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide.
When creating the database ASM disk group, use "DATA" for the "Disk Group Name".
In the "Redundancy" section, choose "External Redundancy" (for production, at least Normal Redundancy
is recommended).
Finally, check all the ASMLib volumes remaining in the "Select Member Disks" section; if necessary,
change the Disk Discovery Path to:

/dev/oracleasm/disks/*

After verifying all values in this dialog are correct, click the [OK] button.

Disk Groups:
After creating the first ASM disk group, you will be returned to the initial dialog; if necessary you can
create additional disk groups.
Exit the ASM Configuration Assistant by clicking the [Exit] button.

Congratulations, you have finished the first installation stage. See you tomorrow for the
next lab.
