The remaining steps are similar to the 11gR1 RAC installation on Linux 5 using VMware. If you have any doubts about the installation steps below, please refer to the documentation for installing Oracle 11gR1 RAC on Linux 5 using VMware.
11gR1RACInstallationOnOEL5UsingVMwareServer2
10gR2 RAC Installation in RHEL4 using VMware
vmware_server_installation
If you have any doubts about installing 10g on Linux, please refer to the link below:
10g installation in Linux 5
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
/sbin/sysctl -p
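The parameters above can be appended to /etc/sysctl.conf in one step. A minimal sketch follows; it writes to a local demo file so it is safe to run anywhere, while on the real node the target is /etc/sysctl.conf, followed by /sbin/sysctl -p.

```shell
# Demo target; on the real node use CONF=/etc/sysctl.conf and run as root.
CONF=./sysctl-demo.conf
cat > "$CONF" <<'EOF'
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
EOF
# On the real node, apply the settings with: /sbin/sysctl -p
echo "wrote $(wc -l < "$CONF") parameters to $CONF"
```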
Disable secure linux by editing the /etc/selinux/config file, making sure the
SELINUX flag is set as follows.
SELINUX=disabled
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
mkdir -p /u01/crs/oracle/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
chown -R oracle:oinstall /u01
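The groups above need an "oracle" OS user that owns the installation (the later steps log in as this user). A hedged sketch of the typical creation step; the -m flag and exact group membership are assumptions, so adjust them to your standards:

```shell
# Create the groups if missing, then the oracle software owner (run as root).
for g in oinstall dba oper asmadmin; do
  getent group "$g" >/dev/null || groupadd "$g"
done
getent passwd oracle >/dev/null || useradd -m -g oinstall -G dba,oper,asmadmin oracle
id oracle   # confirm primary group oinstall plus the secondary groups
```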
Edit the /etc/redhat-release file so that it contains the value "redhat-4", e.g.:
[root@rac1 ~]# vi /etc/redhat-release
redhat-4
Log in as the oracle user and add the following lines at the end of
the .bash_profile file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
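The profile typically continues with the ORACLE_* variables. The values below follow the directories created earlier, but the SID and the exact variable list are assumptions for illustration; the sketch writes to a local demo file rather than the real .bash_profile.

```shell
# Demo file; on the node these lines go at the end of ~oracle/.bash_profile.
cat > ./bash_profile.demo <<'EOF'
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID   # assumed SID for node 1
PATH=$ORACLE_HOME/bin:$PATH; export PATH
EOF
```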
The packages listed in this section (or later versions) are required for
Oracle Clusterware 10g Release 2 and Oracle RAC 10g Release 2
running on the Oracle Enterprise Linux 5 platform.
* binutils-2.17.50.0.6-2.el5
* compat-libstdc++-296-2.96-138
* compat-libstdc++-33-3.2.3-61
* elfutils-libelf-0.125-3.el5
* elfutils-libelf-devel-0.125
* gcc-4.1.1-52
* gcc-c++-4.1.1-52
* glibc-2.5-12
* glibc-common-2.5-12
* glibc-devel-2.5-12
* glibc-headers-2.5-12
* libaio-0.3.106
* libaio-devel-0.3.106
* libgcc-4.1.1-52
* libstdc++-4.1.1
* libstdc++-devel-4.1.1-52.el5
* libXp-1.0.0-8
* make-3.81-1.1
* openmotif-2.2.3
* sysstat-7.0.0
* unixODBC-2.2.11
* unixODBC-devel-2.2.11
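A quick way to verify the list is to query each base package name with rpm. This sketch writes any missing packages to a report file (base names only, since installed versions may be newer than those listed):

```shell
# Write any required packages that rpm cannot find to pkg-report.txt.
: > pkg-report.txt
if command -v rpm >/dev/null 2>&1; then
  for pkg in binutils compat-libstdc++-296 compat-libstdc++-33 \
             elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc \
             glibc-common glibc-devel glibc-headers libaio libaio-devel \
             libgcc libstdc++ libstdc++-devel libXp make openmotif \
             sysstat unixODBC unixODBC-devel; do
    rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg" >> pkg-report.txt
  done
else
  echo "rpm not available on this system" >> pkg-report.txt
fi
cat pkg-report.txt
```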
Log in as the root user on the RAC1 virtual machine, then select the "Install
VMware Tools" option, as shown in the picture below.
Once the package is loaded, the CD should unmount automatically. You must
then run the "vmware-config-tools.pl" script as the root user.
# vmware-config-tools.pl
Accept all the default settings and pick the screen resolution of your choice.
Ignore any warnings or errors. The VMware client tools are now installed.
Reboot the server before proceeding. After the reboot, it is possible the monitor
will not be recognised. If this is the case don't panic. Follow the instructions
provided on the screen and reconfigure the monitor setting, which will allow the
XServer to function correctly.
Shut down the RAC1 virtual machine using the following command.
# shutdown -h now
Here are the steps to create an OCR disk of size 1GB. Follow the same steps to
create the voting disk and the ASM disks.
Select the "Add Hardware" button, then select "Hard Disk".
Click the "Browse" button to choose the location for the shared storage disk.
Create a folder named "shared" for the shared storage, select it, and type "ocr"
as the file name. Then click "Finish". You should now see the newly added 1GB
hard disk and its properties.
Repeat the previous hard disk creation steps 4 more times, using the following
values:
• File Name: votingdisk
Virtual Device Node: SCSI 1:1
Mode: Independent and Persistent
• File Name: asm1
Virtual Device Node: SCSI 1:2
Mode: Independent and Persistent
• File Name: asm2
Virtual Device Node: SCSI 1:3
Mode: Independent and Persistent
• File Name: asm3
Virtual Device Node: SCSI 1:4
Mode: Independent and Persistent
At the end of this process, the virtual machine should look something like the
picture below, and the shared-disk entries in the virtual machine's
configuration (.vmx) file should look like this:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "/u01/VM/shared/ocr.vmdk"
scsi1:0.deviceType = "plainDisk"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "/u01/VM/shared/votingdisk.vmdk"
scsi1:1.deviceType = "plainDisk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "/u01/VM/shared/asm1.vmdk"
scsi1:2.deviceType = "plainDisk"
scsi1:2.redo = ""
scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.fileName = "/u01/VM/shared/asm2.vmdk"
scsi1:3.deviceType = "plainDisk"
scsi1:3.redo = ""
scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.fileName = "/u01/VM/shared/asm3.vmdk"
scsi1:4.deviceType = "plainDisk"
scsi1:4.redo = ""
Start the RAC1 virtual machine by clicking the "Power on this virtual machine"
button on the VMware Server Console. When the server has started, log in as
the root user so you can partition the disks. The current disks can be seen by
issuing the following commands.
# cd /dev
# ls sd*
sda sda1 sda2 sdb sdc sdd sde sdf
#
Use the "fdisk" command to partition the disks sdb to sdf. The following output
shows the expected fdisk output for the sdb disk.
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
In each case, the sequence of answers is "n", "p", "1", "Return", "Return", "p"
and "w".
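That answer sequence can be scripted instead of being typed five times. A hedged sketch that generates a partitioning script for review; do not run the generated script unless you are sure sdb through sdf are the empty shared disks.

```shell
# Generate partition-disks.sh, which feeds fdisk the answers
# n, p, 1, Enter, Enter, p, w for each shared disk.
# Review it first, then run it as root on the node.
cat > partition-disks.sh <<'EOF'
#!/bin/sh
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
  printf 'n\np\n1\n\n\np\nw\n' | fdisk "$disk"
done
EOF
chmod +x partition-disks.sh
```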
Once all the disks are partitioned, the results can be seen by repeating the
previous "ls" command.
# cd /dev
# ls sd*
sda sda1 sda2 sdb sdb1 sdc sdc1 sdd sdd1 sde sde1 sdf sdf1
#
Shut down the RAC1 virtual machine using the following command.
# shutdown -h now
Start the RAC1 virtual machine and restart the RAC2 virtual machine. When
both nodes have started, check they can both ping all the public and private IP
addresses using the following commands.
Log in as the oracle user.
ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv
Configure SSH on each node in the cluster. Log in as the "oracle" user and
perform the following tasks on each node.
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to
the ~/.ssh/id_rsa file. Still on RAC1, add the public key to the authorized_keys
file and copy it to RAC2.
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
Next, log in as the "oracle" user on RAC2 and perform the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac1:/home/oracle/.ssh/
The "authorized_keys" file on both servers now contains the public keys
generated on all RAC nodes.
To enable SSH user equivalency on the cluster member nodes issue the
following commands on each node.
ssh rac1 date
ssh rac2 date
ssh rac1.localdomain date
ssh rac2.localdomain date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
You should now be able to SSH and SCP between servers without entering
passwords.
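The four checks above can be wrapped in one loop. The BatchMode option makes ssh fail instead of prompting, so any leftover password prompt shows up as an explicit error. The loop is written to a script here so it can be reviewed and then run as oracle on either node:

```shell
# Generate check-ssh-equiv.sh; run it as the oracle user on each node.
cat > check-ssh-equiv.sh <<'EOF'
#!/bin/sh
for host in rac1 rac2 rac1.localdomain rac2.localdomain; do
  ssh -o BatchMode=yes "$host" date || echo "no passwordless ssh to $host"
done
EOF
chmod +x check-ssh-equiv.sh
```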
Before installing the clusterware, check the prerequisites have been met using
the "runcluvfy.sh" utility in the clusterware root directory.
While running this, some checks will report an unsuccessful result. There is no
need to worry about this for a VMware setup used only for testing and education
purposes; the messages can be safely ignored.
/home/oracle/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
To install 10gR2, you must first install the base release, which is 10.2.0.1. As
this version of the OS is newer than the installer expects, you should use the
following command to invoke the installer:
$ runInstaller -ignoreSysPrereqs   # This bypasses the OS check
note: Edit the /etc/redhat-release file replacing the current release information
(Red Hat Enterprise Linux Server release 5 ) with the following:
redhat-4
./runInstaller -ignoreSysPrereqs
Change into the clusterware directory.
If you did not edit the /etc/redhat-release file, you will get an error here,
because the installer checks for the required rpm packages.
At the end of root.sh on the last node, vipca will fail to run with the following error.
srvctl will also show similar output if the workaround below is not implemented.
edit vipca (in the CRS bin directory on all nodes) to undo the setting of
LD_ASSUME_KERNEL. After the IF statement around line 123 add an unset
command to ensure LD_ASSUME_KERNEL is not set as follows:
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL
Similarly for srvctl (in both the CRS and, when installed, RDBMS and ASM bin
directories on all nodes), unset LD_ASSUME_KERNEL by adding one line. After the
edit, the section around line 168 should look like this:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
After successfully editing the vipca and srvctl files in the bin folder of the
CRS_HOME directory, run ./vipca to configure the virtual IPs manually.
vipca will fail to run with the following error if the VIP IPs are in a
non-routable range [10.x.x.x, 172.(16-31).x.x or 192.168.x.x]:
# vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
If vipca fails on non-routable VIP IP ranges (manually or during root.sh) and
you still have the OUI window open, click OK and it will create the "oifcfg"
information. cluvfy will then fail because vipca has not completed successfully;
skip ahead in this note, run vipca manually, then return to the installer and
cluvfy will succeed. Otherwise, you can configure the interfaces for RAC
manually using the oifcfg command as root, as in the following example (from
any node).
Click the OK button. The installer will show the error; follow the steps below
and do not click the Next button yet.
The goal is to get the output of "oifcfg getif" to include both the public and
cluster_interconnect interfaces. Substitute your own IP addresses and interface
names from your environment. To list the available interfaces and their subnets,
run:
[root@rac1 bin]# ./oifcfg iflist
Then register them, for example (the interface names and subnets shown here are
examples; use the values from your own environment):
[root@rac1 bin]# ./oifcfg setif -global eth0/192.168.2.0:public
[root@rac1 bin]# ./oifcfg setif -global eth1/192.168.0.0:cluster_interconnect
[root@rac1 bin]# ./oifcfg getif
Click Next.
Enter rac1-vip.localdomain.
Click the Retry button.
[root@rac1 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@rac1 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[root@rac1 bin]#
Follow the same steps given in the link below, except for invoking
runInstaller.
Start the RAC1 and RAC2 virtual machines, login to RAC1 as the oracle user and
start the Oracle installer.
To install 10gR2, you must first install the base release, which is 10.2.0.1. As
this version of the OS is newer than the installer expects, you should use the
following command to invoke the installer:
$ runInstaller -ignoreSysPrereqs   # This bypasses the OS check
./runInstaller -ignoreSysPrereqs