
Installing Oracle 11gR2 Single Node RAC (RAC One Node) on VMware

Oracle introduced a new option called RAC One Node with the release of 11gR2. This option is available with the Enterprise Edition only.

Single Node RAC Setup

Software Used

VMware: VMware Server 2.0

Install VMware Server 2.0 on the host operating system, then install the guest operating system inside it.

Overview of the guest OS Layout:

Hostname   OS                             Processors   Memory

testrac1   Red Hat Enterprise Linux 5.5   1            2.5 GB

Overview of the virtual disk layout:

Virtual Disk Name   Size    Disk Mode                 Device Node   Description

Localdisk.vmdk      20 GB   Persistent                SCSI 0:0      Root, /u01, /u02 and swap partitions
ocrvote.vmdk        1 GB    Independent, Persistent   SCSI 1:1      OCR and voting disk
asm1.vmdk           2 GB    Independent, Persistent   SCSI 1:2      Data disk
asm2.vmdk           2 GB    Independent, Persistent   SCSI 1:3      Data disk
asm3.vmdk           2 GB    Independent, Persistent   SCSI 1:4      Flash recovery area disk

Virtual Machine Setup

Click the "Virtual Machine > Create Virtual Machine" menu option, or click the "Create Virtual Machine" link.

Enter the name "TESTRAC1" and click on the "Next" button.

Select the "Linux operating system" option, and set the version to "Red Hat Enterprise Linux 5 (64-bit)",
then click the "Next" button.

Enter the required amount of memory (2520 MB) and the number of CPUs (1), then click the "Next" button.

Click on the "Create a New Virtual Disk" link or click the "Next" button.

Set the disk size to "20 GB" and click the "Next" button.

Click the "Add a Network Adapter" link or click the "Next" button.

Select the "Bridged" option and click the "Next" button.

Click the "Use a Physical Drive" link, or click the "Next" button.

Accept the DVD properties by clicking the "Next" button.

Click the "Don't Add a Floppy Drive" link.

Click the "Add a USB Controller" link, or click the "Next" button.

Click the "Finish" button to create the virtual machine.

Click the "Add Hardware" Link

Click the "Network Adapter" link.

Select the "Bridged" option and click the "Next" button.

Click the "Finish" button.

Install Red Hat Enterprise Linux 5.5 in the virtual machine.

Install VMware Tools

On the web console, highlight the "TESTRAC1" VM, click the "Install VMware Tools" link and click the subsequent "Install" button.

In the TESTRAC1 console, right-click the "VMwareTools*.rpm" file and select the "Open with "Software Installer"" option.

Click the "Apply" button and accept the warning by clicking the subsequent "Install Anyway" button.

Next, run the "vmware-config-tools.pl" script as the root user.

Pick the screen resolution of your choice. Ignore any warnings or errors.

Issue the "vmware-toolbox" command as the root user.

Check the "Time synchronization between the virtual machine and host OS" option and click the "Close" button.

Shut down the TESTRAC1 virtual machine:

# init 0

Create a directory on the host system to hold the shared virtual disks.

On the VMware Infrastructure Web Access console, click the "Add Hardware" link.

Click the "Hard Disk" link, or click the "Next" button.

Click the "Create New Virtual Disk" link, or click the "Next" button.

Set the size to "1 GB" and the location to "[standard] shared/ocrvote.vmdk".

Expand the "Disk Mode" section and check the "Independent" and "Persistent" options.

Expand the "Virtual Device Node" section and set the adapter to "SCSI 1" and the device to "1", then
click the "Next" button.

Repeat the previous hard disk creation steps 3 more times, using the following values.

File Name: [standard] shared/asm1.vmdk


Virtual Device Node: SCSI 1:2
Mode: Independent and Persistent

File Name: [standard] shared/asm2.vmdk


Virtual Device Node: SCSI 1:3
Mode: Independent and Persistent

File Name: [standard] shared/asm3.vmdk


Virtual Device Node: SCSI 1:4
Mode: Independent and Persistent

Once all the disks have been created, open and edit the VMware .vmx file for the virtual machine (for example, testrac1.vmx). Open it in a text editor such as WordPad and append the following entries:

disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
scsi1:1.deviceType = "disk"
scsi1:2.deviceType = "disk"
scsi1:3.deviceType = "disk"
scsi1:4.deviceType = "disk"

Start the TESTRAC1 virtual machine by clicking the "Play" button on the toolbar.

Single Node RAC Installation Overview


Operating System: Red Hat Enterprise Linux 5.5 (2.6.18-194.el5):

Grid Infrastructure Software (Clusterware + ASM 11.2.0.1):


ORACLE_BASE: /u01/app/grid
ORACLE_HOME: /u01/app/11.2.0/grid
Owner: grid (Primary Group: oinstall, Secondary Group: asmadmin, asmdba)
OCR/Voting Disk Storage Type: ASM
Oracle Inventory Location: /u01/app/oraInventory

Oracle Database Software (RAC 11.2.0.1):


ORACLE_BASE: /u02/app/oracle
ORACLE_HOME: /u02/app/oracle/product/11.2.0/dbhome_1
Owner: oracle (Primary Group: oinstall, Secondary Group: asmdba, dba)
Oracle Inventory Location: /u02/app/oraInventory
Database Name: TESTPROD

Configured IPs

Machine    Public IP                             Private IP                      VIP

testrac1   testrac1.localdomain (192.168.2.10)   testrac1-priv (192.168.21.50)   testrac1-vip (192.168.2.12)

Pre Installation Tasks

Log in to Red Hat Enterprise Linux as the root user and check for the following RPM packages:

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc \
  gcc-c++ glibc glibc-common glibc-devel libaio libaio-devel libgcc \
  libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel

Install missing packages.
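As a sketch, the check can be scripted so that any missing packages are listed in one pass; the package list mirrors the rpm command above, and installing via yum assumes a configured repository:

```shell
# List any required RPMs that are not yet installed (run as root).
# Anything reported can then be installed with "yum install -y <pkg>".
missing=""
for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel libaio libaio-devel \
           libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel; do
  if ! rpm -q "$pkg" >/dev/null 2>&1; then
    missing="$missing $pkg"
  fi
done
echo "Missing packages:${missing:- none}"
```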

Edit the /etc/hosts file:

[root@testrac1 ~]# vi /etc/hosts


# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# Public interface
192.168.2.10    testrac1.localdomain      testrac1
# Interconnect (private) interface
192.168.21.50   testrac1-priv.localdomain testrac1-priv
# Virtual IP for Oracle RAC
192.168.2.12    testrac1-vip.localdomain  testrac1-vip
# SCAN IPs for Oracle RAC
192.168.2.20    testrac-scan.localdomain  testrac-scan
192.168.2.21    testrac-scan.localdomain  testrac-scan
192.168.2.22    testrac-scan.localdomain  testrac-scan
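A quick, hypothetical sanity check that the names resolve as intended (run on testrac1 after editing /etc/hosts):

```shell
# Report any of the cluster hostnames that do not resolve.
for h in testrac1 testrac1-priv testrac1-vip testrac-scan; do
  getent hosts "$h" >/dev/null 2>&1 || echo "unresolved: $h"
done
```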

Set kernel parameters

vi /etc/sysctl.conf

Add the following parameters:

kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Run "/sbin/sysctl -p" as root to apply the settings without a reboot.

Stop NTP so the Cluster Time Synchronization Service (CTSS) can run in active mode:

/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.original

Create users and groups for the Oracle Grid Infrastructure and database software:

groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1203 dba

useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid \
  -s /bin/bash -c "Grid Infrastructure Owner" grid

Set a password for grid:

passwd grid

/usr/sbin/useradd -m -u 1101 -g oinstall -G asmadmin,asmdba,dba -d /home/oracle \
  -s /bin/bash -c "Oracle Owner" oracle

Set a password for oracle:

passwd oracle

Create the directories and assign permissions:

mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
chmod -R 775 /u01
mkdir -p /u02/app/oracle/product/11.2.0/dbhome_1
chown -R oracle:oinstall /u02/app/oracle
chmod -R 775 /u02

Add the following lines to the "/etc/security/limits.conf" file

grid soft nproc 2047


grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Add the following line to the "/etc/pam.d/login" file:

session required pam_limits.so

Disable SELinux by editing the "/etc/selinux/config" file:

SELINUX=disabled
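The same change can be scripted; a sketch (run as root):

```shell
# Switch SELinux to disabled in the config file (takes effect after reboot)
# and turn off enforcement for the current session.
if [ -f /etc/selinux/config ]; then
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  setenforce 0 2>/dev/null || true   # ignore the error if already disabled
fi
```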

Add the following lines to the "/etc/profile" file:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

Installing the cvuqdisk Package for Linux

[root@testrac1 rpm]# rpm -Uvh cvuqdisk-1.0.7-1.rpm


Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]

The ASM disks only need to be created once, on the single node (testrac1).

Install ASMLib 2.0 Packages


oracleasm-2.6.18-194.el5-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-194.32.1.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5

Add the following line to the "/etc/udev/rules.d/50-udev.rules" file, below the "BIOS Enhanced Disk Device" section:

KERNEL=="oracleasm", OWNER="grid", GROUP="asmadmin", MODE="660", ACTION=="add|change"

Add the following lines to the /etc/rc.local

chown -R grid:asmadmin /dev/oracleasm

chown -R grid:asmadmin /dev/oracleasm/disks

Log in as the oracle user and add the following lines at the end of the ".bash_profile" file.

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# Profile for oracle

ORACLE_UNQNAME=testprod1;export ORACLE_UNQNAME
ORACLE_SID=testprod1; export ORACLE_SID
ORACLE_BASE=/u02/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u02/app/oracle/common/oracle/sql; export ORACLE_PATH

ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
export TEMP=/tmp
export TMPDIR=/tmp
PS1='[$USER:$HOSTNAME:$ORACLE_SID:\w]$ '
PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin:/usr/ccs/bin:$ORACLE_HOME/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH
umask 022

Log in as the grid user and add the following lines at the end of the ".bash_profile" file.

# .bash_profile

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs


PATH=$PATH:$HOME/bin
export PATH

# Profile for grid

ORACLE_SID=+ASM1; export ORACLE_SID


JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/usr/ccs/bin
PATH=${PATH}:/u01/app/common/oracle/bin:$ORACLE_HOME/OPatch
export PATH
umask 022

DISK PARTITIONING

Use the "fdisk" command to create a single primary partition on each of the disks sdb through sde.

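The interactive fdisk answers can also be fed from a pipe; a sketch for one disk is below. The device names are an assumption and depend on how the virtual disks were attached:

```shell
# Create a single primary partition spanning the whole disk (run as root).
# fdisk answers: n (new), p (primary), 1 (partition number), two empty
# lines (accept the default first/last cylinders), w (write the table).
DISK=/dev/sdb    # repeat with /dev/sdc, /dev/sdd and /dev/sde
if [ -b "$DISK" ]; then
  printf 'n\np\n1\n\n\nw\n' | fdisk "$DISK"
  partprobe "$DISK"   # ask the kernel to re-read the partition table
fi
```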
Configure ASMLib on the testrac1 node:

/usr/sbin/oracleasm configure -i

Load the oracleasm kernel module:

[root@testrac1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

Create ASM Disks for Oracle

Create the ASM disks on the partitioned devices:

[root@testrac1 ~]# /usr/sbin/oracleasm createdisk grid1 /dev/sdb1


Writing disk header: done
Instantiating disk: done
[root@testrac1 ~]# /usr/sbin/oracleasm createdisk data1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@testrac1 ~]# /usr/sbin/oracleasm createdisk data2 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@testrac1 ~]# /usr/sbin/oracleasm createdisk reco1 /dev/sde1
Writing disk header: done
Instantiating disk: done

[root@testrac1 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@testrac1 ~]# /usr/sbin/oracleasm listdisks
DATA1
DATA2
GRID1
RECO1
[root@testrac1 ~]#

INSTALLATION OF GRID

Install Grid Infrastructure as the grid user.

Start the runInstaller from the Grid Infrastructure software location:

[grid@testrac1 grid]$ /u01/softora/grid/runInstaller

Select the "Install and Configure Grid Infrastructure for a Cluster" option, then click the "Next"
button.

Select the "Advanced Installation" option, then click the "Next" button.

Select the required languages and click the "Next" button.

Provide the cluster name and SCAN name, then click the "Next" button.

Click the "SSH Connectivity" button and enter the password for the "grid" user. Click the "Setup" button to configure SSH connectivity, then the "Test" button to test it once the setup is complete.

Check that the public and private networks are specified correctly, then click the "OK" and "Next" buttons.

Select ASM, then on the next screen provide a disk group name and choose the disk that will hold the OCR and voting disk.

In my case, some kernel parameters needed to be changed. If any prerequisite checks fail, fix the issues where possible; otherwise check the "Ignore All" checkbox and click the "Next" button, though correcting them is preferable.

Click the "Finish" button.

When prompted, run the configuration scripts as the root user:

/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh

We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using
DNS.

Provided this is the only error, it is safe to ignore it and continue by clicking the "Next" button.

Click the "Close" button to exit the installer.

Create ASM Disk Groups:

Disk group names: GRID, DATA, RECO
Redundancy level: External

We have created three disk groups:

GRID: stores the OCR and voting disk
DATA: stores the Oracle database data
RECO: stores the flash recovery area and backups

The GRID disk group was created during the Grid Infrastructure installation. Now we need to create the DATA and RECO disk groups using asmca.

Connect as the grid user and invoke the asmca utility under $GRID_HOME/bin to create these disk groups. When the screen appears, click "Create" to create a new disk group.
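Alternatively, the remaining disk groups can be created non-interactively with asmca's silent mode. This is a sketch: the disk strings assume the ASMLib disk names created earlier, and the exact flags should be checked against the 11.2 asmca documentation:

```shell
# Create the DATA and RECO disk groups without the GUI (run as the grid user).
if [ -x "$ORACLE_HOME/bin/asmca" ]; then
  "$ORACLE_HOME/bin/asmca" -silent -createDiskGroup -diskGroupName DATA \
      -disk 'ORCL:DATA1' -disk 'ORCL:DATA2' -redundancy EXTERNAL
  "$ORACLE_HOME/bin/asmca" -silent -createDiskGroup -diskGroupName RECO \
      -disk 'ORCL:RECO1' -redundancy EXTERNAL
fi
```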

Now we have all the required disk groups.

Installation of RDBMS and Configuration Of Database

Start the runInstaller from the 11gR2 Real Application Clusters (RAC) software location as the oracle user:

[oracle@testrac1 ~]$ /u02/softora/database/runInstaller

Select the "Create and configure a database" option and click the "Next" button.

Click the "SSH Connectivity" button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, then the "Test" button to test it once the setup is complete.

Select the "Advanced install" option.

Provide the global database name and SID.

Click the "Next" button and choose the character set. You may choose either Database Control or Grid Control for management. On the next screen, choose ASM as the database storage.

Select the DATA disk group for the database files.

When prompted, run the configuration script as the root user, then click the "OK" button.

Check the Status of the Single Node RAC

First check the Oracle PMON processes; you should see two instances running: ASM and TESTPROD.

ps -ef | grep pmon

Now check the status of the cluster:

# crs_stat -t

Now connect to ASM Instance

The srvctl utility shows the current configuration and status of the database:

$ srvctl config database -d testprod
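For example (the database unique name is assumed to be testprod; the status command shows which instance is running):

```shell
# Inspect the RAC One Node database with srvctl (run as the oracle user).
if command -v srvctl >/dev/null 2>&1; then
  srvctl config database -d testprod   # configuration: home, spfile, disk groups
  srvctl status database -d testprod   # which instance is running, and where
fi
```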

How to stop the single node RAC (database and instance)

---------------------------------------

crsctl stop crs

or

crsctl stop cluster

This command stops the RDBMS instance first, then the ASM instance, and finally the cluster services.

How to start the single node RAC

---------------------------------------

crsctl start crs

or

crsctl start cluster

This command starts the cluster services first, then the ASM instance, and finally the RDBMS instance.
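A gentler shutdown variant, as a sketch, stops the database explicitly before the stack (database unique name assumed to be testprod):

```shell
# Stop the database first (as the oracle user), then the whole stack.
if command -v srvctl >/dev/null 2>&1; then
  srvctl stop database -d testprod
fi
# Then, as root:
# crsctl stop crs
```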
