
Implementing A Two Node RAC Using ASM and SAN (Openfiler)

First server (node1 in RAC) – where instance 1 will be configured

Second server (node2 in RAC) – where instance 2 will be configured

Third server (SAN box) – provides the shared disk for both nodes

We need to configure ASM as follows:

ASM instance 1 – configured on the first node

ASM instance 2 – configured on the second node

Both the Clusterware software and the Database software are 10g Release 2.


Install Linux OS (RHEL 4.3) on all RAC nodes.

Each node should have 2 NIC cards:

NIC 1 - eth0
NIC 2 - eth1


First Node Details are as follows

[root@rac1 ~]# hostname


rac1
[root@rac1 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:22:B0:60:FD:7F
inet addr:192.168.2.10 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::222:b0ff:fe60:fd7f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500
Metric:1
RX packets:2977 errors:0 dropped:0 overruns:0 frame:0
TX packets:961 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:216281 (211.2 KiB) TX bytes:47648 (46.5 KiB)
Interrupt:209 Base address:0xc000

[root@rac1 ~]# ifconfig eth1


eth1 Link encap:Ethernet HWaddr 00:1C:C0:FC:94:20
inet addr:192.168.3.10 Bcast:192.168.3.255 Mask:255.255.255.0
inet6 addr: fe80::21c:c0ff:fefc:9420/64 Scope:Link


UP BROADCAST RUNNING MULTICAST MTU:1500


Metric:1
RX packets:2248 errors:0 dropped:0 overruns:0 frame:0
TX packets:980 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:141909 (138.5 KiB) TX bytes:74272 (72.5 KiB)
Interrupt:177

[root@rac1 ~]#

Second Node details are as follows

[root@rac2 ~]# hostname


rac2
[root@rac2 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:07:E9:B0:07:17
inet addr:192.168.2.20 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::207:e9ff:feb0:717/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500
Metric:1
RX packets:1547 errors:0 dropped:0 overruns:0 frame:0
TX packets:973 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:110323 (107.7 KiB) TX bytes:66980 (65.4 KiB)
Base address:0xd000 Memory:d0440000-d0460000

[root@rac2 ~]# ifconfig eth1


eth1 Link encap:Ethernet HWaddr 00:27:0E:23:46:CA
inet addr:192.168.3.20 Bcast:192.168.3.255 Mask:255.255.255.0
inet6 addr: fe80::227:eff:fe23:46ca/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500
Metric:1
RX packets:3377 errors:0 dropped:0 overruns:0 frame:0
TX packets:1561 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:226171 (220.8 KiB) TX bytes:120494 (117.6 KiB)
Interrupt:177 Base address:0xc000

[root@rac2 ~]#


          First Node                 Second Node
eth0      192.168.2.10  rac1         192.168.2.20  rac2        (public)
eth1      192.168.3.10  rac1-priv    192.168.3.20  rac2-priv   (private)
VIP       192.168.2.11  rac1-vip     192.168.2.21  rac2-vip    (virtual)
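For reference, each address above is made persistent in the corresponding ifcfg
file under /etc/sysconfig/network-scripts. A minimal sketch for eth0 on rac1 is
shown below; the other interfaces and the second node follow the table above.

[root@rac1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.2.10
NETMASK=255.255.255.0
ONBOOT=yes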

To install Clusterware we need to perform pre-installation configuration.

To prepare the pre-installation configuration, log in to the first node (rac1)
as the root user.

Check that all nodes are reachable using the ping command. (The VIP addresses
192.168.2.11 and 192.168.2.21 will not respond yet; they are configured later by
vipca. Only the public and private addresses need to reply at this stage.)

[root@rac1 ~]# ping 192.168.2.11


PING 192.168.2.11 (192.168.2.11) 56(84) bytes of data.
From 192.168.2.10 icmp_seq=1 Destination Host Unreachable

--- 192.168.2.11 ping statistics ---


6 packets transmitted, 0 received, +3 errors, 100% packet loss, time
4999ms
, pipe 4

[root@rac1 ~]# ping 192.168.2.21


PING 192.168.2.21 (192.168.2.21) 56(84) bytes of data.
From 192.168.2.10 icmp_seq=1 Destination Host Unreachable

--- 192.168.2.21 ping statistics ---


5 packets transmitted, 0 received, +3 errors, 100% packet loss, time
4000ms
, pipe 4

[root@rac1 ~]# ping 192.168.2.10


PING 192.168.2.10 (192.168.2.10) 56(84) bytes of data.
64 bytes from 192.168.2.10: icmp_seq=0 ttl=64 time=0.020 ms
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=0.008 ms

--- 192.168.2.10 ping statistics ---


2 packets transmitted, 2 received, 0% packet loss, time 999ms


rtt min/avg/max/mdev = 0.008/0.014/0.020/0.006 ms, pipe 2

[root@rac1 ~]# ping 192.168.2.20


PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=0 ttl=64 time=1.65 ms

--- 192.168.2.20 ping statistics ---


2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.116/0.885/1.654/0.769 ms, pipe 2

[root@rac1 ~]# ping 192.168.3.10


PING 192.168.3.10 (192.168.3.10) 56(84) bytes of data.
64 bytes from 192.168.3.10: icmp_seq=0 ttl=64 time=0.024 ms

--- 192.168.3.10 ping statistics ---


3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.006/0.013/0.024/0.008 ms, pipe 2

[root@rac1 ~]# ping 192.168.3.20


PING 192.168.3.20 (192.168.3.20) 56(84) bytes of data.
64 bytes from 192.168.3.20: icmp_seq=0 ttl=64 time=0.085 ms

--- 192.168.3.20 ping statistics ---


2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.072/0.078/0.085/0.011 ms, pipe 2

Configure the following file, which should contain the IP addresses and alias
names of all the nodes.

[root@rac1 ~]# vi /etc/hosts

# Do not remove the following line, or various programs


# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

192.168.2.10 rac1
192.168.3.10 rac1-priv
192.168.2.11 rac1-vip

192.168.2.20 rac2
192.168.3.20 rac2-priv
192.168.2.21 rac2-vip

Save the above file and quit.

Repeat the above steps on all remaining nodes as the root user.

Log in to the second node (rac2) as the root user and configure the /etc/hosts file
[root@rac2 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

192.168.2.10 rac1
192.168.3.10 rac1-priv
192.168.2.11 rac1-vip

192.168.2.20 rac2
192.168.3.20 rac2-priv
192.168.2.21 rac2-vip

Save the above file and quit.

On the first node (rac1), as the root user, set the following kernel parameters

[root@rac1 ~]# vi /etc/sysctl.conf


net.ipv4.ip_forward = 0

# Controls source route verification


net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing


net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel


kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1


kernel.shmmax = 9999999999
kernel.sem = 256 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262144
net.core.rmem_default = 262144
net.core.wmem_max = 262144
net.core.wmem_default = 262144

Save the above file and quit.


Then execute the following command

[root@rac1 ~]# sysctl -p

Repeat the above step on the second node also as the root user.
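To confirm that the new values are active on each node, a couple of them can be
queried back with sysctl (a quick check; the values should match those set above):

[root@rac1 ~]# sysctl -n kernel.shmmax
9999999999
[root@rac1 ~]# sysctl -n net.core.rmem_max
262144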

Create the following groups and users on both the nodes

[root@rac1 ~]# groupadd -g 601 dba


[root@rac1 ~]# useradd -u 600 -g dba oracle
[root@rac1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac1 ~]#

Repeat on the second node also


[root@rac2 ~]# groupadd -g 601 dba
[root@rac2 ~]# useradd -u 600 -g dba oracle
[root@rac2 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac2 ~]#
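The oracle user and the dba group must have the same UID and GID on both nodes;
a quick check with the id command should show identical values:

[root@rac1 ~]# id oracle
uid=600(oracle) gid=601(dba) groups=601(dba)
[root@rac2 ~]# id oracle
uid=600(oracle) gid=601(dba) groups=601(dba)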


Create the following directory structures on all nodes to install the Clusterware
and Oracle software. Here /cluster is used for the CRS Home and /orasoft for the
Oracle Home.
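If the directories do not exist yet, create them first as root on each node (a
minimal sketch; the chown commands below then hand them over to the oracle user):

[root@rac1 ~]# mkdir -p /cluster /orasoft
[root@rac2 ~]# mkdir -p /cluster /orasoft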

[root@rac1 ~]# ls /cluster


[root@rac1 ~]# ls /orasoft
[root@rac1 ~]# chown -R oracle:dba /cluster
[root@rac1 ~]# chown -R oracle:dba /orasoft
[root@rac1 ~]#

[root@rac2 ~]# ls /cluster


[root@rac2 ~]# ls /orasoft
[root@rac2 ~]# chown -R oracle:dba /orasoft
[root@rac2 ~]# chown -R oracle:dba /cluster
[root@rac2 ~]#

From the first node, check that the root user is able to log in to all the nodes
using the alias names mentioned in the /etc/hosts file.

[root@rac1 ~]# ssh rac1


The authenticity of host 'rac1 (192.168.2.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.2.10' (RSA) to the list of
known hosts.
root@rac1's password:
Last login: Sun Jan 16 09:24:03 2005 from 192.168.2.99
[root@rac1 ~]# exit
logout
Connection to rac1 closed.
[root@rac1 ~]# ssh rac1-priv
The authenticity of host 'rac1-priv (192.168.3.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.3.10' (RSA) to the list of
known hosts.
root@rac1-priv's password:
Last login: Sun Jan 16 09:51:57 2005 from rac1
[root@rac1 ~]# exit
logout
Connection to rac1-priv closed.
[root@rac1 ~]# ssh rac2


The authenticity of host 'rac2 (192.168.2.20)' can't be established.


RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2' (RSA) to the list of known hosts.
root@rac2's password:
Last login: Wed Jan 5 19:00:45 2011 from 192.168.2.99
[root@rac2 ~]# exit
logout
Connection to rac2 closed.
[root@rac1 ~]# ssh rac2-priv
The authenticity of host 'rac2-priv (192.168.3.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,192.168.3.20' (RSA) to the list of
known hosts.
root@rac2-priv's password:
Last login: Wed Jan 5 19:26:36 2011 from rac1
[root@rac2 ~]# exit
logout
Connection to rac2-priv closed.
[root@rac1 ~]#

Similarly, from the second node, check that the root user is able to log in to all
the nodes using the alias names mentioned in the /etc/hosts file.

[root@rac2 ~]# ssh rac1


The authenticity of host 'rac1 (192.168.2.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.2.10' (RSA) to the list of
known hosts.
root@rac1's password:
Last login: Sun Jan 16 09:52:15 2005 from rac1-priv
[root@rac1 ~]# exit
logout
Connection to rac1 closed.
[root@rac2 ~]# ssh rac1-priv
The authenticity of host 'rac1-priv (192.168.3.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.3.10' (RSA) to the list of
known hosts.
root@rac1-priv's password:


Last login: Sun Jan 16 09:54:20 2005 from rac2


[root@rac1 ~]# exit
logout
Connection to rac1-priv closed.
[root@rac2 ~]# ssh rac2
The authenticity of host 'rac2 (192.168.2.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.2.20' (RSA) to the list of
known hosts.
root@rac2's password:
Last login: Wed Jan 5 19:26:48 2011 from rac1-priv
[root@rac2 ~]# exit
logout
Connection to rac2 closed.
[root@rac2 ~]# ssh rac2-priv
The authenticity of host 'rac2-priv (192.168.3.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,192.168.3.20' (RSA) to the list of
known hosts.
root@rac2-priv's password:
Last login: Wed Jan 5 19:29:00 2011 from rac2
[root@rac2 ~]# exit
logout
Connection to rac2-priv closed.
[root@rac2 ~]#

Configure root logins from any node to any other node of the RAC
without passwords.

Generate a public/private key pair on the first node


[root@rac1 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
73:4d:28:d9:b2:dc:25:21:7f:ee:aa:0a:b3:b0:3e:2d root@rac1
[root@rac1 ~]# cd .ssh
[root@rac1 .ssh]# ls


id_dsa id_dsa.pub known_hosts

[root@rac1 .ssh]# scp id_dsa.pub root@rac2:/root/.ssh/authorized_keys


root@rac2's password:
id_dsa.pub 100% 599 0.6KB/s 00:00
[root@rac1 .ssh]#

Generate a public/private key pair on the second node also


[root@rac2 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
50:3d:60:0b:ac:a9:3e:bc:5b:ac:34:58:f6:21:9e:42 root@rac2
[root@rac2 ~]# cd .ssh/
[root@rac2 .ssh]# ls
authorized_keys id_dsa id_dsa.pub known_hosts
[root@rac2 .ssh]# cat id_dsa.pub >> authorized_keys

[root@rac2 .ssh]# scp authorized_keys root@rac1:/root/.ssh


root@rac1's password:
authorized_keys 100% 1198 1.2KB/s 00:00
[root@rac2 .ssh]#

Finally, the authorized_keys file should contain the public keys of all nodes,
and this file should be present on every node under root's .ssh directory.
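A quick way to confirm this on each node is to count the keys in the file; with
two nodes, each authorized_keys file should contain two DSA public keys (DSA
public key lines start with ssh-dss):

[root@rac1 ~]# grep -c ssh-dss /root/.ssh/authorized_keys
2
[root@rac2 ~]# grep -c ssh-dss /root/.ssh/authorized_keys
2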

Now check root logins without passwords from first node (rac1)

[root@rac1 ~]# ssh rac1


Last login: Sun Jan 16 09:54:37 2005 from rac2-priv
[root@rac1 ~]# exit
logout
Connection to rac1 closed.
[root@rac1 ~]# ssh rac1-priv
Last login: Sun Jan 16 10:04:39 2005 from rac1
[root@rac1 ~]# exit
logout
Connection to rac1-priv closed.
[root@rac1 ~]# ssh rac2


Last login: Wed Jan 5 19:29:08 2011 from rac2-priv


[root@rac2 ~]# exit
logout
Connection to rac2 closed.
[root@rac1 ~]# ssh rac2-priv
Last login: Wed Jan 5 19:39:08 2011 from rac1
[root@rac2 ~]# exit
logout
Connection to rac2-priv closed.
[root@rac1 ~]#

Similarly, check root logins from the second node without passwords

[root@rac2 ~]# ssh rac1


Last login: Sun Jan 16 10:04:50 2005 from rac1-priv
[root@rac1 ~]# exit
logout
Connection to rac1 closed.
[root@rac2 ~]# ssh rac1-priv
Last login: Sun Jan 16 10:06:08 2005 from rac2
[root@rac1 ~]# exit
logout
Connection to rac1-priv closed.
[root@rac2 ~]# ssh rac2
Last login: Wed Jan 5 19:39:11 2011 from rac1-priv
[root@rac2 ~]# exit
logout
Connection to rac2 closed.
[root@rac2 ~]# ssh rac2-priv
Last login: Wed Jan 5 19:40:31 2011 from rac2
[root@rac2 ~]# exit
logout
Connection to rac2-priv closed.
[root@rac2 ~]#

Similarly, configure logins for the oracle user from any node to any other node
of the RAC without passwords.

Here oracle is the Oracle DBA OS user.


Login into first node and perform the following operations
[oracle@rac1 ~]$ ssh rac1
The authenticity of host 'rac1 (192.168.2.10)' can't be established.


RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.


Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.2.10' (RSA) to the list of
known hosts.
oracle@rac1's password:
Last login: Sun Jan 16 10:07:43 2005 from 192.168.2.99
[oracle@rac1 ~]$ exit
logout
Connection to rac1 closed.
[oracle@rac1 ~]$ ssh rac1-priv
The authenticity of host 'rac1-priv (192.168.3.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.3.10' (RSA) to the list of
known hosts.
oracle@rac1-priv's password:
Last login: Sun Jan 16 10:07:55 2005 from rac1
[oracle@rac1 ~]$ exit
logout
Connection to rac1-priv closed.
[oracle@rac1 ~]$ ssh rac2
The authenticity of host 'rac2 (192.168.2.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.2.20' (RSA) to the list of
known hosts.
oracle@rac2's password:
[oracle@rac2 ~]$ exit
logout
Connection to rac2 closed.
[oracle@rac1 ~]$ ssh rac2-priv
The authenticity of host 'rac2-priv (192.168.3.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,192.168.3.20' (RSA) to the list of
known hosts.
oracle@rac2-priv's password:
Last login: Wed Jan 5 19:42:25 2011 from rac1
[oracle@rac2 ~]$ exit
logout
Connection to rac2-priv closed.
[oracle@rac1 ~]$


Similarly login into second node as oracle user and perform the following
operations

[oracle@rac2 ~]$ ssh rac1


The authenticity of host 'rac1 (192.168.2.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.2.10' (RSA) to the list of
known hosts.
oracle@rac1's password:
Last login: Sun Jan 16 10:08:04 2005 from rac1-priv
[oracle@rac1 ~]$ exit
logout
Connection to rac1 closed.
[oracle@rac2 ~]$ ssh rac1-priv
The authenticity of host 'rac1-priv (192.168.3.10)' can't be established.
RSA key fingerprint is c9:34:a6:a3:c2:6b:cc:73:bf:ba:a7:78:dc:e6:f7:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.3.10' (RSA) to the list of
known hosts.
oracle@rac1-priv's password:
Last login: Sun Jan 16 10:09:16 2005 from rac2
[oracle@rac1 ~]$ exit
logout
Connection to rac1-priv closed.
[oracle@rac2 ~]$ ssh rac2
The authenticity of host 'rac2 (192.168.2.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.2.20' (RSA) to the list of
known hosts.
oracle@rac2's password:
Last login: Wed Jan 5 19:43:22 2011 from 192.168.2.99
[oracle@rac2 ~]$ exit
logout
Connection to rac2 closed.
[oracle@rac2 ~]$ ssh rac2-priv
The authenticity of host 'rac2-priv (192.168.3.20)' can't be established.
RSA key fingerprint is c0:a0:4b:23:5d:93:16:51:88:bc:c8:bb:46:f9:57:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,192.168.3.20' (RSA) to the list of
known hosts.
oracle@rac2-priv's password:


Last login: Wed Jan 5 19:43:47 2011 from rac2


[oracle@rac2 ~]$ exit
logout
Connection to rac2-priv closed.
[oracle@rac2 ~]$

Now, on the first node, generate a public/private key pair


[oracle@rac1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
1b:6e:d5:5b:7d:6c:e8:67:16:63:de:81:c8:8f:a3:cb oracle@rac1
[oracle@rac1 ~]$ cd .ssh/
[oracle@rac1 .ssh]$ ls
id_dsa id_dsa.pub known_hosts
[oracle@rac1 .ssh]$ scp id_dsa.pub
oracle@rac2:/home/oracle/.ssh/authorized_keys
oracle@rac2's password:
id_dsa.pub 100% 601 0.6KB/s 00:00
[oracle@rac1 .ssh]$

Now, on the second node also, generate a public/private key pair
[oracle@rac2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
73:72:11:ea:ab:2c:3b:16:42:ef:45:f8:ad:55:51:f5 oracle@rac2
[oracle@rac2 ~]$ cd .ssh/
[oracle@rac2 .ssh]$ ls
authorized_keys id_dsa id_dsa.pub known_hosts
[oracle@rac2 .ssh]$ cat id_dsa.pub >> authorized_keys
[oracle@rac2 .ssh]$ scp authorized_keys oracle@rac1:/home/oracle/.ssh
oracle@rac1's password:
authorized_keys 100% 1202 1.2KB/s 00:00
[oracle@rac2 .ssh]$


From all nodes, check as the oracle user that passwordless logins work between
the oracle users of all nodes in the RAC.

[oracle@rac1 ~]$ ssh rac1


Last login: Sun Jan 16 10:09:25 2005 from rac2-priv
[oracle@rac1 ~]$ exit
logout
Connection to rac1 closed.
[oracle@rac1 ~]$ ssh rac1-priv
Last login: Sun Jan 16 10:12:55 2005 from rac1
[oracle@rac1 ~]$ exit
logout
Connection to rac1-priv closed.
[oracle@rac1 ~]$ ssh rac2
Last login: Wed Jan 5 19:43:54 2011 from rac2-priv
[oracle@rac2 ~]$ exit
logout
Connection to rac2 closed.
[oracle@rac1 ~]$ ssh rac2-priv
Last login: Wed Jan 5 19:47:22 2011 from rac1
[oracle@rac2 ~]$ exit
logout
Connection to rac2-priv closed.
[oracle@rac1 ~]$

Repeat the above step from the second node also

[oracle@rac2 ~]$ ssh rac1


Last login: Sun Jan 16 10:13:03 2005 from rac1-priv
[oracle@rac1 ~]$ exit
logout
Connection to rac1 closed.
[oracle@rac2 ~]$ ssh rac1-priv
Last login: Sun Jan 16 10:13:55 2005 from rac2
[oracle@rac1 ~]$ exit
logout
Connection to rac1-priv closed.
[oracle@rac2 ~]$ ssh rac2
Last login: Wed Jan 5 19:47:31 2011 from rac1-priv
[oracle@rac2 ~]$ exit
logout
Connection to rac2 closed.
[oracle@rac2 ~]$ ssh rac2-priv


Last login: Wed Jan 5 19:48:24 2011 from rac2


[oracle@rac2 ~]$ exit
logout
Connection to rac2-priv closed.
[oracle@rac2 ~]$

As the root user, check that the following Linux RPMs are installed on each node
of the RAC, and install them if they are not installed already.
[root@rac1 ~]# rpm -qa | grep compat
compat-db-4.1.25-9
java-1.4.2-gcj-compat-1.4.2.0-27jpp
compat-libcom_err-1.0-5
compat-libstdc++-7.3-2.96.128
compat-libstdc++-33-3.2.3-47.3
compat-libgcc-296-2.96-132.7.2
compat-libstdc++-296-2.96-132.7.2
compat-openldap-2.1.30-4
compat-libstdc++-devel-7.3-2.96.128
compat-gcc-c++-7.3-2.96.128
compat-gcc-7.3-2.96.128
[root@rac1 ~]#

Check for the above packages on the second node also as the root user.

Apart from the above packages, install whatever else is required for Oracle 10g
R2.
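If any of the packages are missing, they can be installed from the RHEL
installation media with rpm; a sketch, assuming the media is mounted at
/media/cdrecorder (the exact RPM file name depends on your media):

[root@rac1 ~]# rpm -ivh /media/cdrecorder/RedHat/RPMS/compat-libstdc++-33-3.2.3-47.3.i386.rpm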

Configure Shared Disk

The shared disk is needed to keep the OCR and the voting disk, and to keep the
database.

To configure the shared disk we can use either of the following technologies:

FC SAN (hardware is specific to the vendor)

iSCSI SAN (this uses the regular LAN)

With both of the above technologies we can create LUNs (Logical Unit Numbers)
and map these LUNs to the cluster nodes.

If you are using iSCSI SAN (Openfiler):


Log in to the first node (rac1) as the root user and update the following file:
open /etc/iscsi.conf, specify the discovery address (the SAN server's IP),
and then restart the iSCSI service.
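On RHEL 4 the iSCSI initiator reads its discovery target from /etc/iscsi.conf.
A minimal sketch of the entry, assuming the Openfiler SAN box is reachable at
192.168.2.50 (substitute the actual IP of your SAN server):

[root@rac1 ~]# grep DiscoveryAddress /etc/iscsi.conf
DiscoveryAddress=192.168.2.50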

[root@rac1 ~]# fdisk -l

Disk /dev/sda: 500.1 GB, 500106780160 bytes


255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 2 11475 92164905 bf Solaris
/dev/sda2 11476 11488 104422+ 83 Linux
/dev/sda3 11489 21687 81923467+ 83 Linux
/dev/sda4 21688 60801 314183205 5 Extended
/dev/sda5 21688 22197 4096543+ 82 Linux swap
/dev/sda6 22198 22210 104391 83 Linux
/dev/sda7 22211 28584 51199123+ 83 Linux
/dev/sda8 28585 28597 104391 83 Linux
/dev/sda9 28598 34971 51199123+ 83 Linux
/dev/sda10 34972 35337 2939863+ 83 Linux
/dev/sda11 35338 35703 2939863+ 83 Linux
/dev/sda12 35704 35716 104391 83 Linux
/dev/sda13 35717 43365 61440561 83 Linux
/dev/sda14 43366 43378 104391 83 Linux
/dev/sda15 43379 56126 102398278+ 83 Linux
[root@rac1 ~]# clear
[root@rac1 ~]# vi /etc/iscsi.conf
[root@rac1 ~]# service iscsi restart
Stopping iscsid: [ OK ]
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root@rac1 ~]# fdisk -l

Disk /dev/sda: 500.1 GB, 500106780160 bytes


255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 2 11475 92164905 bf Solaris


/dev/sda2 11476 11488 104422+ 83 Linux


/dev/sda3 11489 21687 81923467+ 83 Linux
/dev/sda4 21688 60801 314183205 5 Extended
/dev/sda5 21688 22197 4096543+ 82 Linux swap
/dev/sda6 22198 22210 104391 83 Linux
/dev/sda7 22211 28584 51199123+ 83 Linux
/dev/sda8 28585 28597 104391 83 Linux
/dev/sda9 28598 34971 51199123+ 83 Linux
/dev/sda10 34972 35337 2939863+ 83 Linux
/dev/sda11 35338 35703 2939863+ 83 Linux
/dev/sda12 35704 35716 104391 83 Linux
/dev/sda13 35717 43365 61440561 83 Linux
/dev/sda14 43366 43378 104391 83 Linux
/dev/sda15 43379 56126 102398278+ 83 Linux

Disk /dev/sdb: 25.0 GB, 25065160704 bytes


64 heads, 32 sectors/track, 23904 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 249 254960 83 Linux
/dev/sdb2 250 498 254976 83 Linux
/dev/sdb3 499 23904 23967744 5 Extended
/dev/sdb5 499 3360 2930672 83 Linux
/dev/sdb6 3361 6222 2930672 83 Linux
/dev/sdb7 6223 6467 250864 83 Linux
/dev/sdb8 6468 6712 250864 83 Linux
[root@rac1 ~]#

Repeat the above step on the second node also as the root user

The following are the Oracle Clusterware components:

CRS (Cluster Ready Services) (processes)

OCR (Oracle Cluster Registry) (on shared disk)

Voting disk (on shared disk)

The OCR and the voting disk can be files on a cluster file system or raw devices.

In our case we are configuring raw devices to keep the OCR and the voting disk.


Here we create two new partitions, /dev/sdb9 and /dev/sdb10, on the /dev/sdb disk shared from the SAN

[root@rac1 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 23904.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 25.0 GB, 25065160704 bytes


64 heads, 32 sectors/track, 23904 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 249 254960 83 Linux
/dev/sdb2 250 498 254976 83 Linux
/dev/sdb3 499 23904 23967744 5 Extended
/dev/sdb5 499 3360 2930672 83 Linux
/dev/sdb6 3361 6222 2930672 83 Linux
/dev/sdb7 6223 6467 250864 83 Linux
/dev/sdb8 6468 6712 250864 83 Linux

Command (m for help): n


Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (6713-23904, default 6713):
Using default value 6713
Last cylinder or +size or +sizeM or +sizeK (6713-23904, default 23904):
+260M

Command (m for help): p

Disk /dev/sdb: 25.0 GB, 25065160704 bytes


64 heads, 32 sectors/track, 23904 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 249 254960 83 Linux


/dev/sdb2 250 498 254976 83 Linux
/dev/sdb3 499 23904 23967744 5 Extended
/dev/sdb5 499 3360 2930672 83 Linux
/dev/sdb6 3361 6222 2930672 83 Linux
/dev/sdb7 6223 6467 250864 83 Linux
/dev/sdb8 6468 6712 250864 83 Linux
/dev/sdb9 6713 6961 254960 83 Linux

Command (m for help): n


Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (6962-23904, default 6962):
Using default value 6962
Last cylinder or +size or +sizeM or +sizeK (6962-23904, default 23904):
+260M

Command (m for help): p

Disk /dev/sdb: 25.0 GB, 25065160704 bytes


64 heads, 32 sectors/track, 23904 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 249 254960 83 Linux
/dev/sdb2 250 498 254976 83 Linux
/dev/sdb3 499 23904 23967744 5 Extended
/dev/sdb5 499 3360 2930672 83 Linux
/dev/sdb6 3361 6222 2930672 83 Linux
/dev/sdb7 6223 6467 250864 83 Linux
/dev/sdb8 6468 6712 250864 83 Linux
/dev/sdb9 6713 6961 254960 83 Linux
/dev/sdb10 6962 7210 254960 83 Linux

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]# fdisk -l


Disk /dev/sda: 500.1 GB, 500106780160 bytes


255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 2 11475 92164905 bf Solaris
/dev/sda2 11476 11488 104422+ 83 Linux
/dev/sda3 11489 21687 81923467+ 83 Linux
/dev/sda4 21688 60801 314183205 5 Extended
/dev/sda5 21688 22197 4096543+ 82 Linux swap
/dev/sda6 22198 22210 104391 83 Linux
/dev/sda7 22211 28584 51199123+ 83 Linux
/dev/sda8 28585 28597 104391 83 Linux
/dev/sda9 28598 34971 51199123+ 83 Linux
/dev/sda10 34972 35337 2939863+ 83 Linux
/dev/sda11 35338 35703 2939863+ 83 Linux
/dev/sda12 35704 35716 104391 83 Linux
/dev/sda13 35717 43365 61440561 83 Linux
/dev/sda14 43366 43378 104391 83 Linux
/dev/sda15 43379 56126 102398278+ 83 Linux

Disk /dev/sdb: 25.0 GB, 25065160704 bytes


64 heads, 32 sectors/track, 23904 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 249 254960 83 Linux
/dev/sdb2 250 498 254976 83 Linux
/dev/sdb3 499 23904 23967744 5 Extended
/dev/sdb5 499 3360 2930672 83 Linux
/dev/sdb6 3361 6222 2930672 83 Linux
/dev/sdb7 6223 6467 250864 83 Linux
/dev/sdb8 6468 6712 250864 83 Linux
/dev/sdb9 6713 6961 254960 83 Linux
/dev/sdb10 6962 7210 254960 83 Linux
[root@rac1 ~]#

Configure /dev/sdb9 and /dev/sdb10 as raw devices on both the nodes

[root@rac1 ~]# vi /etc/sysconfig/rawdevices


# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.


# raw device bindings


# format: <rawdev> <major> <minor>
# <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
# /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sdb9
/dev/raw/raw2 /dev/sdb10

Save the above file and quit, then restart the rawdevices service

[root@rac1 ~]# service rawdevices restart


Assigning devices:
/dev/raw/raw1 --> /dev/sdb9
/dev/raw/raw1: bound to major 8, minor 25
/dev/raw/raw2 --> /dev/sdb10
/dev/raw/raw2: bound to major 8, minor 26
done
[root@rac1 ~]#

Repeat the above step on the remaining nodes
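To verify the bindings on each node, the raw command can query them back; the
major/minor numbers map to /dev/sdb9 and /dev/sdb10:

[root@rac1 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 25
/dev/raw/raw2:  bound to major 8, minor 26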

Installing Oracle Clusterware

Download and unzip the Clusterware software on the first node.
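A sketch of staging the software, assuming the 10.2.0.1 clusterware archive was
downloaded to /stage (the archive name and staging path are examples; unzipping
it creates the /clusterware directory used by runInstaller below):

[root@rac1 ~]# unzip /stage/10201_clusterware_linux32.zip -d /
[root@rac1 ~]# chown -R oracle:dba /clusterware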


Log in as the Oracle DBA OS user (oracle) and execute runInstaller to install the
Clusterware.

[oracle@rac1 ~]$ /clusterware/runInstaller


Welcome screen, click Next

Give Oracle Inventory Location and Click Next


Give CRS Home Location and Click Next

Click Next


Click Add to add Nodes

Click Ok


After adding all Nodes Click Next

Click Edit and select public Interface


Give OCR location and Click Next

Give Voting Disk Location and Click Next


Click Install


Now execute the scripts below as the root user on each node.

First, as the oracle user, copy the following binary file (from the unzipped
patch 4679769 directory):


[oracle@rac1 ~]$ cp /4679769/clsfmt.bin /cluster/home1/bin/

As root user execute the following scripts


[root@rac1 ~]# /orasoft/oraInventory/orainstRoot.sh
Changing permissions of /orasoft/oraInventory to 770.
Changing groupname of /orasoft/oraInventory to dba.
The execution of the script is complete

[root@rac1 ~]# /cluster/home1/root.sh


WARNING: directory '/cluster' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory


Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/cluster' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.


Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.


node <nodenumber>: <nodename> <private interconnect name>
<hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 ~]#

Now run the above scripts on node2 as the root user.

First, as the oracle user, copy the binary file:

[oracle@rac2 ~]$ cp /4679769/clsfmt.bin /cluster/home1/bin/

[root@rac2 ~]# /orasoft/oraInventory/orainstRoot.sh


Changing permissions of /orasoft/oraInventory to 770.
Changing groupname of /orasoft/oraInventory to dba.
The execution of the script is complete
[root@rac2 ~]# /cluster/home1/root.sh
WARNING: directory '/cluster' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory


Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/cluster' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.


Successfully accumulated necessary OCR keys.


Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name>
<hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.


-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be
used to configure virtual IPs.
[root@rac2 ~]#

Now configure the VIPs as the root user by running vipca from the CRS Home on
both the nodes.
--------------------------------------------------------------

[root@rac1 ~]# /cluster/home1/bin/vipca

Click Next


Select public interface then click next

Give vip alias names


Click Next

Click Finish


Click Ok


Click Exit

Similarly run vipca as root user from second node


Select public interface

Click Next


Click Finish

Then Click Ok and Exit


Now return to the main installation screen (OUI) and click OK.

Then click Exit.

Now we can check the CRS services:


[oracle@rac1 ~]$ /cluster/home1/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac1 ~]$ /cluster/home1/bin/crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[oracle@rac1 ~]$

Installing Oracle RDBMS software on 2 Node RAC


First Node Name -> rac1


Second Node Name -> rac2

Log in to node1 (rac1) as the Oracle DBA OS user and execute runInstaller
from the 10g Release 2 software, which launches the OUI to install Oracle.

[oracle@rac1 ~]$ /media/cdrecorder/database10.2linux/runInstaller


Starting Oracle Universal Installer...

Checking installer requirements...


Checking operating system version: must be redhat-3, SuSE-9, redhat-4,
UnitedLinux-1.0, asianux-1 or asianux-2
Passed

All installer requirements met.


Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-
01-26_11-25-08PM. Please wait ...
[oracle@rac1 ~]$ Oracle Universal Installer, Version 10.2.0.1.0
Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Welcome screen, click Next.

Select Enterprise Edition and click Next


Select Oracle Home, Click Next

Select all Nodes and Click Next


It will run the prerequisite checks; then click Next

Select Install Database Software only option and click Next


Click Install


Now, as the root user, execute the following script on each cluster node.

Log in to node1 (rac1) as the root user and execute the root.sh script from
the Oracle Home.

[root@rac1 ~]# /orasoft/10g/root.sh


Running Oracle10 root.sh script...

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /orasoft/10g

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

[root@rac1 ~]#

Now log in to the second node as the root user and run the root.sh script from
the Oracle Home.

[root@rac1 ~]# ssh rac2


Last login: Fri Jan 28 09:10:20 2011 from rac1
[root@rac2 ~]# /orasoft/10g/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /orasoft/10g

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

[root@rac2 ~]#

Now return to the OUI screen as the oracle user, click OK, and then click Exit.

Now we can query the inventory as follows:

[oracle@rac1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory


Invoking OPatch 10.2.0.1.0

Oracle interim Patch Installer version 10.2.0.1.0


Copyright (c) 2005, Oracle Corporation. All rights reserved..

Oracle Home : /orasoft/10g


Central Inventory : /orasoft/oraInventory


from : /orasoft/10g/oraInst.loc
OPatch version : 10.2.0.1.0
OUI version : 10.2.0.1.0
OUI location : /orasoft/10g/oui
Log file location : /orasoft/10g/cfgtoollogs/opatch/opatch-
2011_Jan_26_23-49-04-IST_Wed.log

Lsinventory Output file location :


/orasoft/10g/cfgtoollogs/opatch/lsinv/lsinventory-2011_Jan_26_23-49-
04-IST_Wed.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 10g 10.2.0.1.0


There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes


Local node = rac1
Remote node = rac2

--------------------------------------------------------------------------------

OPatch succeeded.
[oracle@rac1 ~]$

The above command shows that the Oracle software is installed on both
nodes of the RAC system.

Now configure listeners on each node of the RAC


Login into node1 as oracle user and execute netca

[oracle@rac1 ~]$ netca

Oracle Net Services Configuration:

Select Cluster Configuration, Click Next

Select all Nodes, Click Next


Select Listener Configuration, Click Next

Select Add and Click Next


Select Listener Name, Click Next

Select Required Protocols, Click Next


Give Port Number, Click Next


Click Next and Click Finish


Now we can check the listener status as follows

[oracle@rac1 ~]$ crs_stat -t


Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[oracle@rac1 ~]$

Next we need to configure the ASM instances and disk groups using DBCA.

Log in to the first node as the oracle user, then execute dbca


[oracle@rac1 ~]$ dbca

Welcome screen: select Oracle Real Application Clusters database and click Next

Select Configure Automatic Storage Management, Click Next


Select All Nodes, Click Next

Give Sys user password for ASM, select Pfile location, Click Next


Click Ok

Now ASM instance will be created.


Now create Disk groups, Click New

Give Group Name, Select Redundancy type, Select all Candidates,


Give Failure Group Names then Click OK

The disk group is created and mounted by both ASM instances; click Finish
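Once the disk group is mounted, it can be verified from either ASM instance with
SQL*Plus (a sketch; the ASM instance runs out of the same Oracle Home, and the
disk group name is whatever was chosen above):

[oracle@rac1 ~]$ export ORACLE_HOME=/orasoft/10g
[oracle@rac1 ~]$ export ORACLE_SID=+ASM1
[oracle@rac1 ~]$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;
SQL> select path, name, failgroup from v$asm_disk;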

Now Create RAC Database using DBCA


Login into first Node as Oracle User

[oracle@rac1 ~]$ dbca

Select Oracle Real Application Cluster Database, Click Next

Select Create a Database and Click Next


Select All Nodes and Click Next

Select a Database Template and Click Next


Give Database Name and Click Next

If you want, check the Configure Enterprise Manager option and click Next


Give Passwords for Database Admin Schemas and Click Next

Select Storage Option here it is ASM, then Click Next


Select Disk Group and Click Next

Select Location for the Database Files and Click Next


Select the following options if you need them, or just click Next.

If required, check Sample Schemas and click Next.


Click Next

Select the Memory, Sizing, and Character Set options and click Next


Click Next

Click Next


Click Ok


Click Exit

The database and its instances are now started.
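DBCA normally leaves the new database and both instances running, but they can
also be stopped and started cluster-wide with srvctl (a sketch, assuming the
database name dev used above):

[oracle@rac1 ~]$ srvctl stop database -d dev
[oracle@rac1 ~]$ srvctl start database -d dev
[oracle@rac1 ~]$ srvctl start instance -d dev -i dev1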


To check the current state of all resources configured under Oracle RAC:
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.dev.db application ONLINE ONLINE rac2
ora....v1.inst application ONLINE ONLINE rac1
ora....v2.inst application ONLINE ONLINE rac2
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[oracle@rac1 ~]$

To check the configuration of the database:


[oracle@rac1 ~]$ srvctl config database -d dev


rac1 dev1 /orasoft/10g


rac2 dev2 /orasoft/10g

To check the status of the database instances:


[oracle@rac1 ~]$ srvctl status database -d dev
Instance dev1 is running on node rac1
Instance dev2 is running on node rac2

To check the status of ASM instances.


[oracle@rac1 ~]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 ~]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
[oracle@rac1 ~]$
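As a final check, connect to one of the instances with SQL*Plus and confirm that
both instances are open (a sketch; gv$instance lists every instance in the cluster):

[oracle@rac1 ~]$ export ORACLE_HOME=/orasoft/10g
[oracle@rac1 ~]$ export ORACLE_SID=dev1
[oracle@rac1 ~]$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select inst_id, instance_name, status from gv$instance;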
