
Step by Step RAC 10g R2 Install on Linux RHEL 3, 3-node test and training environment

Alejandro Vargas, Oracle Israel, Principal Support Consultant

1) Request Network configuration from Unix
2) Identify the network cards where these IP's belong
3) Check ping from all nodes to all nodes
4) Create the Oracle User and Group DBA
5) Configure the Kernel Parameters on each node
6) Set Limits for user Oracle
7) Configure the Hangcheck Timer on all nodes
8) Configure SSH
9) Configure User Equivalence
10) Prepare Disks Partitions for ASM and OCFS
11) Configure ASMlib for ASM Management
12) Configure RAW Devices for OCR, Voting Disk and Spfile
13) Create mount points for Oracle Software
14) Configure Cluster Verification Utility
15) Install Oracle Clusterware (Mount the Oracle Sources Directory)
16) Install Oracle ASM Home and Instance
17) Install Oracle Database Home
18) Create the RAC Database
19) Configure XDB for ASM Management

1) Request Network configuration from Unix


The network needs to be configured with 3 IP's on each server: 1) a public IP, registered on DNS; 2) a virtual IP, registered on DNS but NOT defined on the servers, since it will be defined later during the Oracle Clusterware install by 'vipca', the Oracle Virtual IP Configuration Assistant; and 3) a private IP known only to the servers in the RAC configuration, to be used for the interconnect.

The /etc/hosts file should be identical on all nodes in the cluster. This example shows the IP's clearly identified:

10.5.225.24     vmractest1       # Public IP on Node 1
10.5.225.36     vmractest2       # Public IP on Node 2
10.5.225.44     vmractest3       # Public IP on Node 3
10.5.225.7      vmractest1-vip   # Virtual IP on Node 1
10.5.225.8      vmractest2-vip   # Virtual IP on Node 2
10.5.225.9      vmractest3-vip   # Virtual IP on Node 3

100.100.100.101  vmractest1-priv  # Private IP on Node 1
100.100.100.102  vmractest2-priv  # Private IP on Node 2
100.100.100.103  vmractest3-priv  # Private IP on Node 3
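Since the public and virtual names must resolve through DNS while the -priv names are usually local to /etc/hosts, it can save trouble later to verify name resolution on every node before continuing. The loop below is only a suggested check, using the hostnames of this example; adapt it to your own naming:

# Quick sanity check of name resolution (run as root or oracle on each node).
# Public and VIP names should resolve through DNS; the -priv names through /etc/hosts.
for h in vmractest1 vmractest2 vmractest3 \
         vmractest1-vip vmractest2-vip vmractest3-vip \
         vmractest1-priv vmractest2-priv vmractest3-priv
do
    echo "== $h =="
    getent hosts $h || echo "WARNING: $h does not resolve"
done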

2) Identify the network cards where these IP's belong


Use the command ifconfig -a on each node to display the network cards and the IP's assigned to them. In this example, run on node 1, we can see:

eth0 matches the Public IP 10.5.225.24
eth1 matches the Private IP 100.100.100.101

The virtual IP has no match because it is not defined yet.

[root@vmractest1 root]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:50:56:8F:35:B2
          inet addr:10.5.225.24  Bcast:10.5.225.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:234002 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18905 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:41076542 (39.1 Mb)  TX bytes:1534634 (1.4 Mb)
          Interrupt:9 Base address:0x1080

eth1      Link encap:Ethernet  HWaddr 00:50:56:8F:67:09
          inet addr:100.100.100.101  Bcast:100.100.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:59 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4898 (4.7 Kb)  TX bytes:3948 (3.8 Kb)
          Interrupt:10 Base address:0x10c0

3) Check ping from all nodes to all nodes


To ensure that communication can be established, do ping tests using all the IP's. You can generate the following script based on the /etc/hosts information and run it on all hosts:

ping -c 5 vmractest1
ping -c 5 vmractest2
ping -c 5 vmractest3
ping -c 5 vmractest1-priv
ping -c 5 vmractest2-priv
ping -c 5 vmractest3-priv

The script pings each IP 5 times. This is the output from running it on node 1, looking only at the summary result of each set; all transmitted packets were received, 0% loss:

[root@vmractest1 root]# ./chkping | grep "packet loss"
5 packets transmitted, 5 received, 0% packet loss, time 4017ms
5 packets transmitted, 5 received, 0% packet loss, time 4035ms
5 packets transmitted, 5 received, 0% packet loss, time 4009ms
5 packets transmitted, 5 received, 0% packet loss, time 4034ms
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
5 packets transmitted, 5 received, 0% packet loss, time 4024ms
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
5 packets transmitted, 5 received, 0% packet loss, time 4012ms
5 packets transmitted, 5 received, 0% packet loss, time 4018ms

4) Create the Oracle User and Group DBA


Start on node 1:

[root@vmractest1 root]# /usr/sbin/groupadd dba
[root@vmractest1 root]# /usr/sbin/useradd -u 500 -m -G dba oracle -s /bin/tcsh
[root@vmractest1 root]# id oracle
uid=500(oracle) gid=501(oracle) groups=501(oracle),500(dba)

The User ID and Group IDs must be the same on all cluster nodes. Using the information from the id oracle command, create the Oracle groups and user account on the remaining cluster nodes:

/usr/sbin/groupadd -g 500 dba
/usr/sbin/useradd oracle -m -u 500 -g dba -s /bin/tcsh

Set up the password for user oracle:

[root@vmractest3 root]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

Set up the .cshrc file on all nodes for user oracle:

umask 022
unlimit
setenv ORACLE_BASE /oradisk/app01/oracle
setenv ORA_CRS_HOME /oradisk/app01/oracle/product/10gCRS
setenv ASM_HOME /oradisk/app01/oracle/product/10gASM
setenv ORACLE_HOME /oradisk/app01/oracle/product/10gDB
setenv DB_SCRIPTS $ORACLE_BASE/scripts
setenv NLS_DATE_FORMAT 'dd/mm/yyyy hh24:mi:ss'
setenv TEMP /tmp
setenv TMPDIR /tmp
setenv BASE_PATH $ORACLE_BASE/scripts/general:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/root/bin:/oradisk/app01/oracle/scripts:/usr/local/maint/oracle:/crmdb/app01/oracle/product/db_scripts/RAC:/crmdb/app01/oracle/product/db_scripts
setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}
setenv EDITOR vi
setenv ORACLE_TERM xsun5
setenv EPC_DISABLED TRUE
setenv CV_HOME /oradisk/app01/cluvfy
setenv CV_JDKHOME /oradisk/app01/cluvfy/jd14
setenv CV_DESTLOC /oradisk/app01/cluvfy/out
setenv CVUQDISK_GRP dba
set prompt="%/ >"
alias 10db 'setenv ORACLE_HOME /oradisk/app01/oracle/product/10gDB; setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10crs 'setenv ORACLE_HOME /oradisk/app01/oracle/product/10gCRS; setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10asm 'setenv ORACLE_HOME /oradisk/app01/oracle/product/10gASM; setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}'
alias cdo 'cd $ORACLE_HOME\/\!*'
alias dbs 'cd $ORACLE_HOME/dbs\/\!*'
alias cdbo 'cd ~/obackup/db\/\!*'
alias ll 'ls -lrt'
alias ora 'clear ; echo ----------- ; echo ORA Environement Variables: ; echo " "; env | grep ASM ;env | grep ORA | grep -v NO | grep -v NLS | sort | more ; echo -----------; echo ORACLE Databases Running: ; echo " "; ps -efa | grep smon | grep -v grep |more ; echo ----------- ; echo ORACLE Databases registered in Oratab: ; echo " " ; more /etc/oratab | grep -v #; echo ----------- '
alias av 'cd $ORACLE_BASE/scripts/av'
alias sts 'setenv ORACLE_SID $1'
alias tns 'cd $ORACLE_HOME/network/admin; clear; ps -efa | grep tns | grep -v grep; ls -ltr'
alias avd 'setenv DISPLAY 10.13.33.156:0.0'
setenv v_alrt `hostname`
alias cd 'chdir \!*; set prompt="{$LOGNAME} $cwd [$v_alrt] > "'
cd .
alias duk '/usr/xpg4/bin/du -xk |sort -rn|more'
alias disp 'setenv DISPLAY $1'
alias grid 'clear; cat $HOME/.grid'
alias sql "sqlplus '/ as sysdba'"
alias chkocrbk 'clear;echo OCR BACKUPS AVAILABLE:; echo; 10crs; ocrconfig -showbackup; echo ; echo'
alias chkcrs '/home/oracle/chkcrs'
alias mnt 'echo mount stagesrv:/vol/files2/ORA_Ins_Stage /mnt'

5) Configure the Kernel Parameters on each node


As root, run the following script on all nodes to set up the kernel parameters:

cat >> /etc/sysctl.conf << EOF
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_max = 1048536
EOF

Check the new settings using /sbin/sysctl -p

[root@vmractest1 root]# /sbin/sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_max = 1048536

6) Set Limits for user Oracle


cat >> /etc/security/limits.conf << EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login << EOF
session required /lib/security/pam_limits.so
EOF

cat >> /etc/profile << EOF
if [ \$USER = "oracle" ]; then
  if [ \$SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
EOF

cat >> /etc/csh.login << EOF
if ( \$USER == "oracle" ) then
  limit maxproc 16384
  limit descriptors 65536
  umask 022
endif
EOF

7) Configure the Hangcheck Timer on all nodes


All RHEL releases:

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

cat >> /etc/rc.d/rc.local << EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF
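To confirm that the module actually loaded with the intended parameters, a small check such as the following can be run on each node (a suggested verification only, not part of the original procedure):

# Verify that the hangcheck-timer module is loaded and see what it logged
/sbin/lsmod | grep hangcheck
grep -i hangcheck /var/log/messages | tail -2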

8) Configure SSH
During the installation of Oracle RAC 10g Release 2, OUI needs to copy files to and execute programs on the other nodes in the cluster. In order to allow OUI to do that, you must configure SSH to allow user equivalence. Establishing user equivalence with SSH provides a secure means of copying files and executing programs on other nodes in the cluster without requiring password prompts.

If you have Grid Control, a fast way to configure ssh is to use the provided script located under the OMS_HOME, i.e. ?/oms10g/sysman/prov/resources/scripts/sshUserSetup.sh

The first step is to generate public and private keys for SSH. There are two versions of the SSH protocol; version 1 uses RSA and version 2 uses DSA, so we will create both types of keys to ensure that SSH can use either version. The ssh-keygen program will generate public and private keys of either type depending upon the parameters passed to it.

When you run ssh-keygen, you will be prompted for a location to save the keys. Just press Enter when prompted to accept the default. You will then be prompted for a passphrase. Enter a blank password and then enter it again to confirm. When you have completed the steps below, you will have four files in the ~/.ssh directory: id_rsa, id_rsa.pub, id_dsa, and id_dsa.pub. The id_rsa and id_dsa files are your private keys and must not be shared with anyone. The id_rsa.pub and id_dsa.pub files are your public keys and must be copied to each of the other nodes in the cluster.

From each node, logged in as oracle:

mkdir ~/.ssh
chmod 755 ~/.ssh
/usr/bin/ssh-keygen -t rsa

Cut and paste the following line separately:

/usr/bin/ssh-keygen -t dsa

Example on Node 1 (repeat the same on Nodes 2 and 3):

$ mkdir ~/.ssh
$ chmod 755 ~/.ssh

{oracle} /home/oracle [vmractest1] > /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
b6:7e:20:7a:60:60:6c:de:3b:2e:b9:b2:b3:05:1c:f3 oracle@vmractest1

{oracle} /home/oracle [vmractest1] > /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
4c:1d:a5:16:66:d9:0d:c7:e1:5e:c8:fe:a0:61:13:d5 oracle@vmractest1

Now the contents of the public key files id_rsa.pub and id_dsa.pub on each node must be copied to the ~/.ssh/authorized_keys file on every other node. Use ssh to copy the contents of each file to the ~/.ssh/authorized_keys file. Note that the first time you access a remote node with ssh its RSA key will be unknown and you will be prompted to confirm that you wish to connect to the node. SSH will record the RSA key for the remote nodes and will not prompt for this on subsequent connections to that node. On all Nodes, logged in as oracle (copy the local account's keys so that ssh to the local node will work):

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Now copy the keys to the other nodes so that we can ssh to the remote nodes without being prompted for a password.

On Node 1:
chmod 644 ~/.ssh/authorized_keys
ssh oracle@vmractest2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On Node 2:
chmod 644 ~/.ssh/authorized_keys
ssh oracle@vmractest1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On Node 3:
chmod 644 ~/.ssh/authorized_keys
ssh oracle@vmractest1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@vmractest2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Check that you can run ssh without being prompted for a password, from each node to every other node:

On Node 1:
{oracle} /home/oracle [vmractest1] > ssh vmractest2 hostname
vmractest2
{oracle} /home/oracle [vmractest1] > ssh vmractest3 hostname
vmractest3

On Node 2:
{oracle} /home/oracle [vmractest2] > ssh vmractest1 hostname
vmractest1
{oracle} /home/oracle [vmractest2] > ssh vmractest3 hostname
vmractest3

On Node 3:
{oracle} /home/oracle [vmractest3] > ssh vmractest1 hostname
vmractest1
{oracle} /home/oracle [vmractest3] > ssh vmractest2 hostname
vmractest2

9) Configure User Equivalence


On all nodes:

exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Ex:
{oracle} /home/oracle [vmractest1] > exec /usr/bin/ssh-agent $SHELL
{oracle} /home/oracle [vmractest1] > /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

(Note that user equivalence is established for the current session only. If you switch to a different session or log out and back in, you will have to run ssh-agent and ssh-add again to re-establish user equivalence.)
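Before launching the installer it is worth confirming that user equivalence really works in every direction. A small loop like the following, run from each node after ssh-agent and ssh-add, should return the date from every node without any password or passphrase prompt. This is only a suggested check using the node names of this example; because the oracle login shell here is tcsh, the Bourne-style loop is passed to sh:

# Every line must come back with a date and no prompt; a prompt means equivalence is broken.
sh -c 'for n in vmractest1 vmractest2 vmractest3 \
               vmractest1-priv vmractest2-priv vmractest3-priv
do
    ssh -o BatchMode=yes $n date
done'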

10) Prepare Disks Partitions for ASM and OCFS


Prepare the Shared Disks

Both Oracle Clusterware and Oracle RAC require access to disks that are shared by each node in the cluster. The shared disks must be configured using one of the following methods. Note that you cannot use a "standard" filesystem such as ext3 for shared disk volumes, since such file systems are not cluster aware.

For Clusterware:
1. OCFS (Release 2) http://oss.oracle.com/projects/ocfs2
2. raw devices
3. third party cluster filesystem such as GPFS or Veritas

For RAC database storage:
1. OCFS (Release 2) http://oss.oracle.com/projects/ocfs2
2. ASM
3. raw devices

Partition the Disks

In order to use either OCFS or ASM, you must have unused disk partitions available. This section describes how to create the partitions that will be used for OCFS and for ASM.

WARNING: Improperly partitioning a disk is one of the surest and fastest ways to wipe out everything on your hard disk. If you are unsure how to proceed, stop and get help, or you will risk losing data.

Disk partitioning should be done from one node only. When finished partitioning, run the 'partprobe' command as root on each of the remaining cluster nodes in order to assure that the new partitions are configured.

First check which devices are in use by existing file systems. This command returns the devices that are configured and mounted as file systems; the output shows device name, % usage and the directory where they are mounted:

{oracle} > df -k | grep /dev/ | awk '{print $1" "$5" "$6}' | grep -v none
/dev/sda2 31% /
/dev/sda1 10% /boot
/dev/sdb1  2% /vmasmtest
/dev/sdg1  1% /oradisk

In this case devices /dev/sda, /dev/sdb and /dev/sdg are in use by file systems, and you should NOT partition them.

Then check as root the devices that exist in your environment; sdc/d/e/f are all available and not partitioned:

[root@vmractest1 root]# fdisk -l | grep /dev/sd
Disk /dev/sda: 10.7 GB, 10737418240 bytes
/dev/sda1   *        1        13    104391   83  Linux
/dev/sda2           14      1033   8193150   83  Linux
/dev/sda3         1034      1305   2184840   82  Linux swap
Disk /dev/sdb: 170.7 GB, 170724950016 bytes
/dev/sdb1            1     20756 166722538+  83  Linux
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
Disk /dev/sdg: 21.4 GB, 21474836480 bytes
/dev/sdg1            1      2610  20964793+  83  Linux

In our configuration we will use sdc/d/e for ASM, each of them with 100% of its space in a single partition, and sdf for the OCR, Voting Disk and ASM spfile, in 3 partitions of 150MB each.

Example: partitioning /dev/sdf to be used by the OCR, Voting Disk and spfile.

Note: if you are going to use a very large LUN for ASM, you may define an extended partition on the whole disk, and then create equally sized logical partitions for the ASM disks. Remember that ASM works best with a large number of medium sized disks instead of a small number of big disks. All ASM disks within a disk group should have the same size; ASM will permit different sizes within a disk group, but it is not recommended to mix disk sizes and types within an ASM disk group.

[root@vmractest1 ASMlib]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): +150M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (20-1305, default 20):
Using default value 20
Last cylinder or +size or +sizeM or +sizeK (20-1305, default 1305): +150M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (39-1305, default 39):
Using default value 39
Last cylinder or +size or +sizeM or +sizeK (39-1305, default 1305): +150M

Command (m for help): p

Disk /dev/sdf: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdf1             1        19    152586   83  Linux
/dev/sdf2            20        38    152617+  83  Linux
/dev/sdf3            39        57    152617+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@vmractest1 ASMlib]#

Repeat the same procedure with /dev/sdc/d/e to be used by ASM, creating a single partition each time.
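If you prefer not to walk through the interactive prompts three times, the same answers can be fed to fdisk from a here-document. This is only a sketch of that approach: it assumes /dev/sdc, /dev/sdd and /dev/sde are empty and dedicated to ASM, and the interactive session shown above remains the safer option.

# CAUTION: destructive. Creates one primary partition spanning each empty disk.
# The blank lines accept the default first and last cylinders.
for d in /dev/sdc /dev/sdd /dev/sde
do
fdisk $d << EOF
n
p
1


w
EOF
done
# Afterwards, run partprobe as root on the remaining cluster nodes.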

11) Configure ASMlib for ASM Management


Automatic Storage Management (ASM)

ASM provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove "hot spots." It also supports direct and asynchronous I/O.

ASM can be used only for Oracle data files, redo logs, control files, and the flash recovery area. Files in ASM are best managed using the Oracle Managed Files feature, but they can also be managed manually by the DBA. Because the files stored in ASM are not accessible to the operating system, to perform backup and recovery operations on databases that use ASM files you need to use Recovery Manager (RMAN), storage snapshots, or the XDB FTP feature, which makes moving files out of ASM easy.

ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64 MB for most systems.

Installing ASM

On Linux platforms, ASM can use raw devices or devices managed via the ASMLib interface. Oracle recommends ASMLib over raw devices for ease-of-use and performance reasons. ASMLib 2.0 is available for free download from OTN. The following steps configure an ASM instance using ASMLib 2.0 so that a database can be built that uses ASM for disk storage.

Determine Which Version of ASMLib You Need

ASMLib 2.0 is delivered as a set of three Linux packages:

oracleasmlib-2.0 - the ASM libraries
oracleasm-support-2.0 - utilities needed to administer ASMLib
oracleasm - a kernel module for the ASM library

Each Linux distribution has its own set of ASMLib 2.0 packages, and within each distribution, each kernel version has a corresponding oracleasm package. The following paragraphs describe how to determine which set of packages you need.

First, determine which kernel you are using by logging in as root and running the following command:

uname -rm

Ex:
[root@vmractest1 root]# uname -rm
2.4.21-37.EL i686

The example shows that this is a 2.4.21-37 kernel for a box using an Intel i686 CPU. Use this information to find the correct ASMLib packages on OTN:

1. Point your Web browser to http://www.oracle.com/technology/tech/linux/asmlib/index.html
2. Select the link for your version of Linux.
3. Download the oracleasmlib and oracleasm-support packages for your version of Linux.
4. Download the oracleasm package corresponding to your kernel. In the example above, the oracleasm-2.4.21-37.EL-1.0.4-1.i686.rpm package was used.

Next, install the packages by executing the following command as root on all nodes, one after the other:

rpm -Uvh oracleasm-kernel_version-asmlib_version.cpu_type.rpm \
    oracleasmlib-asmlib_version.cpu_type.rpm \
    oracleasm-support-asmlib_version.cpu_type.rpm

Ex:
[root@vmractest1 ASMlib]# ls -ltr
total 116
-rwxrwxrwx 1 3096 513 22160 Apr  5  2006 oracleasm-support-2.0.1-1.i386.rpm
-rwxrwxrwx 1 3096 513 73145 Apr  5  2006 oracleasm-2.4.21-37.EL-1.0.4-1.i686.rpm
-rwxrwxrwx 1 3096 513 13436 Apr  5  2006 oracleasmlib-2.0.1-1.i386.rpm
[root@vmractest1 ASMlib]# rpm -Uvh oracleasm-2.4.21-37.EL-1.0.4-1.i686.rpm \
> oracleasmlib-2.0.1-1.i386.rpm \
> oracleasm-support-2.0.1-1.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.4.21-37.EL ########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

Configuring ASMLib

Before using ASMLib, you must run a configuration script to prepare the driver. Run the following command as root, and answer the prompts as shown in the example below. Run this on each node in the cluster.

[root@vmractest1 ASMlib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration:  [ OK ]
Creating /dev/oracleasm mount point:              [ OK ]
Loading module "oracleasm":                       [ OK ]
Mounting ASMlib driver filesystem:                [ OK ]
Scanning system for ASM disks:                    [ OK ]

Next you tell the ASM driver which disks you want it to use. Oracle recommends that each disk contain a single partition for the entire disk. See Partition the Disks at the beginning of this section for an example of creating disk partitions.

You mark disks for use by ASMLib by running the following command as root from one of the cluster nodes; run it from node 1:

/etc/init.d/oracleasm createdisk DISK_NAME device_name

Tip: Enter the DISK_NAME in UPPERCASE letters.

Ex:
[root@vmractest1 ASMlib]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
[root@vmractest1 ASMlib]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
[root@vmractest1 ASMlib]# /etc/init.d/oracleasm createdisk VOL3 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk: [ OK ]

Verify that ASMLib has marked the disks:

[root@vmractest1 ASMlib]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

On all other cluster nodes, run the following command as root to scan for configured ASMLib disks:

/etc/init.d/oracleasm scandisks

Ex:
[root@vmractest2 ASMlib_install]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@vmractest3 ASMlib_install]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]

And then check that everything is as on the first node:

[root@vmractest2 ASMlib_install]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
[root@vmractest3 ASMlib_install]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

If you need to deinstall, use this sequence: lib, kernel, support.

[root@bbtest1 ~]# rpm -e oracleasmlib-2.0.2-1.x86_64
[root@bbtest1 ~]# rpm -e oracleasm-2.6.9-22.ELsmp-2.0.3-1
[root@bbtest1 ~]# rpm -e oracleasm-support-2.0.3-1
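As a final sanity check on each node (a suggestion, not part of the original text), you can confirm that the ASMLib driver is currently loaded and will come up again at boot:

# Check the current status of the ASMLib driver and its boot configuration
/etc/init.d/oracleasm status
chkconfig --list oracleasm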

12) Configure RAW Devices for OCR, Voting Disk and Spfile
Check the partitions we created on device /dev/sdf for the Clusterware shared files:

[root@vmractest1 root]# fdisk -l /dev/sdf

Disk /dev/sdf: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdf1             1        19    152586   83  Linux
/dev/sdf2            20        38    152617+  83  Linux
/dev/sdf3            39        57    152617+  83  Linux

Check the partitions for the Clusterware shared files on the other nodes. Note that the device names, e.g. "/dev/sdf1", may change from node to node: what is "/dev/sdf1" on node vmractest1 may turn out to be "/dev/sde1" on node vmractest2 or 3, so double check this. To find which device was used for the partitions on the second and third node, you can search for the first partition using the values of start, end, blocks, id and system from the first node, i.e.:

[root@vmractest2 raw]# fdisk -l | grep "1 19 152586 83 Linux"
/dev/sde1             1        19    152586   83  Linux

Here we see that sdf1 was identified as sde1 on the second node. Now we can check that all partitions correspond on vmractest2:

[root@vmractest2 raw]# fdisk -l /dev/sde

Disk /dev/sde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sde1             1        19    152586   83  Linux
/dev/sde2            20        38    152617+  83  Linux
/dev/sde3            39        57    152617+  83  Linux

We can see that sdf on node 1 is identical to sde on node 2. We need to remember this difference when mapping the raw device files to the corresponding devices in the next steps.

Create a script to configure the raw devices with the following lines:

mv /dev/raw/raw1 /dev/raw/votingdisk
mv /dev/raw/raw2 /dev/raw/ocr.dbf
mv /dev/raw/raw3 /dev/raw/spfile+ASM.ora
chmod 660 /dev/raw/{votingdisk,ocr.dbf,spfile+ASM.ora}
chown oracle:dba /dev/raw/{votingdisk,ocr.dbf,spfile+ASM.ora}
echo
echo checking the prepared raw devices:
ls -ltr /dev/raw | grep oracle
echo

Execute the script on each node. It will create the customized raw device names from the default raw devices, change the ownership and permissions, and check that the devices were created and have the correct permissions:

[root@vmractest1 root]# ls -ltr /dev/raw | grep oracle
crw-rw---- 1 oracle dba 162, 1 Jun 24 2004 votingdisk
crw-rw---- 1 oracle dba 162, 3 Jun 24 2004 spfile+ASM.ora
crw-rw---- 1 oracle dba 162, 2 Jun 24 2004 ocr.dbf

Edit the /etc/sysconfig/rawdevices file and add the following lines for the cluster, to map the /dev/raw files to their corresponding devices. Take care to use the device names you checked at the start of step 12:

/dev/raw/votingdisk /dev/sdf1
/dev/raw/ocr.dbf /dev/sdf2
/dev/raw/spfile+ASM.ora /dev/sdf3

This file needs to be created on each node. Start the rawdevices service on all nodes:

[root@vmractest1 RAWS]# service rawdevices start
Assigning devices:
  /dev/raw/votingdisk --> /dev/sdf1
/dev/raw/raw1: bound to major 8, minor 81
  /dev/raw/ocr.dbf --> /dev/sdf2
/dev/raw/raw2: bound to major 8, minor 82
  /dev/raw/spfile+ASM.ora --> /dev/sdf3
/dev/raw/raw3: bound to major 8, minor 83
Done

Raw devices status can be checked with this command:

[root@vmractest2 RAWS]# service rawdevices status
/dev/raw/raw1: bound to major 8, minor 65
/dev/raw/raw2: bound to major 8, minor 66
/dev/raw/raw3: bound to major 8, minor 67
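Before moving on, it can be useful to double-check on every node that each customized raw device name is bound to the intended partition and owned by oracle. The following is only a suggested check; remember that the block device name (sdf here) may differ on the other nodes:

# List all current raw bindings (major/minor of the backing block devices)
raw -qa
# Show the major/minor numbers of the partitions to compare against
ls -l /dev/sdf1 /dev/sdf2 /dev/sdf3
# Confirm ownership and permissions of the customized raw device nodes
ls -l /dev/raw/votingdisk /dev/raw/ocr.dbf /dev/raw/spfile+ASM.ora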

13) Create mount points for Oracle Software


The path to the Oracle software must be the same on all nodes if you are not using a shared Oracle Home. Repeat this step on all nodes:

[root@vmractest1 RAWS]# mkdir -p /oradisk/app01/oracle/product/10gDB
[root@vmractest1 RAWS]# mkdir -p /oradisk/app01/oracle/product/10gASM
[root@vmractest1 RAWS]# mkdir -p /oradisk/app01/oracle/product/10gCRS
[root@vmractest1 RAWS]# chown -R oracle:dba /oradisk
[root@vmractest1 RAWS]# chmod -R 775 /oradisk

Create a .cshrc file to set up the environment and copy it to all nodes:

{oracle} /home/oracle [vmractest2] > cat .cshrc
umask 022
unlimit
setenv ORACLE_BASE /oradisk/app01/oracle
setenv ORA_CRS_HOME /oradisk/app01/oracle/product/10gCRS
setenv ASM_HOME /oradisk/app01/oracle/product/10gASM
setenv ORACLE_HOME /oradisk/app01/oracle/product/10gDB
setenv DB_SCRIPTS $ORACLE_BASE/scripts
setenv NLS_DATE_FORMAT 'dd/mm/yyyy hh24:mi:ss'
setenv TEMP /tmp
setenv TMPDIR /tmp
setenv BASE_PATH $ORACLE_BASE/scripts/general:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/root/bin:/oradisk/app01/oracle/scripts:/usr/local/maint/oracle:/crmdb/app01/oracle/product/db_scripts/RAC:/crmdb/app01/oracle/product/db_scripts
setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}
setenv EDITOR vi
setenv ORACLE_TERM xsun5
setenv EPC_DISABLED TRUE
setenv CV_HOME /oradisk/app01/cluvfy
setenv CV_JDKHOME /oradisk/app01/cluvfy/jd14
setenv CV_DESTLOC /oradisk/app01/cluvfy/out
setenv CVUQDISK_GRP dba
set prompt="%/ >"
alias 10db 'setenv ORACLE_HOME /oradisk/app01/oracle/product/10gDB; setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10crs 'setenv ORACLE_HOME /oradisk/app01/oracle/product/10gCRS; setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10asm 'setenv ORACLE_HOME /oradisk/app01/oracle/product/10gASM; setenv PATH ${ORACLE_HOME}/bin:${BASE_PATH}'
alias cdo 'cd $ORACLE_HOME\/\!*'
alias dbs 'cd $ORACLE_HOME/dbs\/\!*'
alias cdbo 'cd ~/obackup/db\/\!*'
alias ll 'ls -lrt'
alias ora 'clear ; echo ----------- ; echo ORA Environement Variables: ; echo " "; env | grep ASM ;env | grep ORA | grep -v NO | grep -v NLS | sort | more ; echo -----------; echo ORACLE Databases Running: ; echo " "; ps -efa | grep smon | grep -v grep |more ; echo ----------- ; echo ORACLE Databases registered in Oratab: ; echo " " ; more /etc/oratab | grep -v #; echo ----------- '
alias av 'cd $ORACLE_BASE/scripts/av'
alias sts 'setenv ORACLE_SID $1'
alias tns 'cd $ORACLE_HOME/network/admin; clear; ps -efa | grep tns | grep -v grep; ls -ltr'
alias avd 'setenv DISPLAY 10.13.33.156:0.0'
setenv v_alrt `hostname`
alias cd 'chdir \!*; set prompt="{$LOGNAME} $cwd [$v_alrt] > "'
cd .
alias duk '/usr/xpg4/bin/du -xk |sort -rn|more'
alias disp 'setenv DISPLAY $1'
alias grid 'clear; cat $HOME/.grid'
alias sql "sqlplus '/ as sysdba'"
alias chkocrbk 'clear;echo OCR BACKUPS AVAILABLE:; echo; 10crs; ocrconfig -showbackup; echo ; echo'
alias chkcrs '/home/oracle/chkcrs'
alias mnt 'echo mount stagesrv:/vol/files2/ORA_Ins_Stage /mnt'

Source the file to set up the environment, then point to the 10gCRS home:

cd
source .cshrc
10crs

14) Configure Cluster Verification Utility


The Cluster Verification Utility is executed at the end of the Oracle Clusterware install, so it is required to configure it beforehand on all nodes.

The Cluster Verification Utility is included on the 10g Installation DVD or can be downloaded from OTN at http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html

It can also be copied from the install disk mounted on /mnt:
/mnt/OracleInstallDisks/Linux/x86/10.2.0.1_X86_32bit/Cluster_Verification_Utility

Create a directory for it:

mkdir -p /oradisk/app01/cluvfy/out

Copy the install files to /oradisk/app01/cluvfy and unzip them. Change ownership to oracle:dba:

chown -R oracle:dba /oradisk/app01/cluvfy

Install cvuqdisk-1.0.1-1.rpm as root:

groupadd orainst
rpm -i cvuqdisk-1.0.1-1.rpm

Edit /etc/group and add oracle to group orainst.

Add the following settings to the .cshrc:

setenv CV_HOME /oradisk/app01/cluvfy
setenv CV_JDKHOME /oradisk/app01/cluvfy
setenv CV_DESTLOC /oradisk/app01/cluvfy/out
setenv CVUQDISK_GRP dba
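With CVU configured, it can also be run manually as the oracle user before starting the Clusterware installation to catch problems early. The command below is only a suggested example: the clusterware staging directory ships a runcluvfy.sh wrapper, and the node names are the ones used in this exercise.

# Pre-installation check for Clusterware, run from the clusterware directory on the install media
cd /oradisk/Install10gR2/10.2.0.1_X86_32bit/clusterware
./runcluvfy.sh stage -pre crsinst -n vmractest1,vmractest2,vmractest3 -verbose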

15) Install Oracle Clusterware


Mount the Oracle Sources Directory

In this example we are using server stagesrv, which holds all the software required to install RAC, so as a first step we mount it on Linux. To do this, execute as root:

mount stagesrv:/vol/files2/ORA_Ins_Stage /mnt

Go to the 10g R2 install directory, then to the clusterware subdirectory:

{oracle} /oradisk/Install10gR2/10.2.0.1_X86_32bit [vmractest1] > ls
client clusterware companion database doc gateways index index.pdx welcome.html
{oracle} /oradisk/Install10gR2/10.2.0.1_X86_32bit [vmractest1] > cd clusterware/

Set up the DISPLAY environment variable and execute runInstaller:

[vmractest1] > setenv DISPLAY OSN3082:0.0
{oracle} /oradisk/Install10gR2/10.2.0.1_X86_32bit/clusterware [vmractest1] > ./runInstaller

Click on add to complete the cluster configuration

Check that the network cards match our definition. Choose edit and change them to match /etc/hosts.

Now the cards are configured as required:

Run the scripts in the order they appear. Wait until the previous script finishes before running the next.

Note that because the CRS home is within a path owned by oracle, you will get the following warnings, which can be dismissed. It is very important to wait until the run finishes successfully before proceeding to the next step:

[root@vmractest1 10gCRS]# ./root.sh
WARNING: directory '/oradisk/app01/oracle/product' is not owned by root
WARNING: directory '/oradisk/app01/oracle' is not owned by root
WARNING: directory '/oradisk/app01' is not owned by root
WARNING: directory '/oradisk' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oradisk/app01/oracle/product' is not owned by root
WARNING: directory '/oradisk/app01/oracle' is not owned by root
WARNING: directory '/oradisk/app01' is not owned by root
WARNING: directory '/oradisk' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname vmractest1 for node 1.
assigning default hostname vmractest2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: vmractest1 vmractest1-priv vmractest1
node 2: vmractest2 vmractest2-priv vmractest2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 vmractest1
CSS is inactive on these nodes.
 vmractest2
 vmractest3
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Only when you see the message "Run root.sh on remaining nodes to start CRS daemons" proceed to the next nodes, one by one.

On the 3rd node we got an error at the end of the root.sh run, indicating that vipca could not be run. This error message happens because the range of IP's chosen for the virtual IP is not standard. You can run vipca manually to work around this problem.

To check where the vipca executable is located, su to oracle; then execute vipca as root:

[root@vmractest3 10gCRS]# su - oracle
{oracle} /home/oracle [vmractest3] > 10crs
{oracle} /home/oracle [vmractest3] > which vipca
/oradisk/app01/oracle/product/10gCRS/bin/vipca

Last part of the output from running root.sh:

Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 vmractest1
 vmractest2
 vmractest3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid netmask "" entered in an invalid argument.

From the first node execute vipca as root and configure the virtual IP's.
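After vipca completes, the node applications it created can be checked from any node with srvctl. This is only a suggested verification, run with the CRS home environment set by the 10crs alias:

# Verify that VIP, GSD and ONS were created and are running on each node
srvctl status nodeapps -n vmractest1
srvctl status nodeapps -n vmractest2
srvctl status nodeapps -n vmractest3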

In order to get the Cluster Verification Utility to run successfully, it needs the following configuration settings:

A last check is to run crs_stat -t to get the status of the CRS components on each node. A formatted version of crs_stat -t is provided by the following script:

#!/usr/bin/ksh
# chkcrs
# Sample 10g CRS resource status query script
#
# Description:
#   - Returns formatted version of crs_stat -t, in tabular
#     format, with the complete rsc names and filtering keywords
#   - The argument, $RSC_KEY, is optional and if passed to the script, will
#     limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
#   - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
#AWK=/usr/xpg4/bin/awk    # if not available use /usr/bin/awk
AWK=/usr/bin/awk

# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
 'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
  state == 0 {next;}
  $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
  $1~/STATE/ && state == 2 {appstate = $2; state=3;}
  state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
# End chkcrs

{oracle} /home/oracle [vmractest1] > ./chkcrs

HA Resource                                   Target     State
-----------                                   ------     -----
ora.vmractest1.gsd                            ONLINE     ONLINE on vmractest1
ora.vmractest1.ons                            ONLINE     ONLINE on vmractest1
ora.vmractest1.vip                            ONLINE     ONLINE on vmractest1
ora.vmractest2.gsd                            ONLINE     ONLINE on vmractest2
ora.vmractest2.ons                            ONLINE     ONLINE on vmractest2
ora.vmractest2.vip                            ONLINE     ONLINE on vmractest2
ora.vmractest3.gsd                            ONLINE     ONLINE on vmractest3
ora.vmractest3.ons                            ONLINE     ONLINE on vmractest3
ora.vmractest3.vip                            ONLINE     ONLINE on vmractest3

In the output we can see that the Global Services Daemon (GSD), Oracle Notification Services (ONS) and Virtual IP (VIP) processes are running on all nodes.

16) Install Oracle ASM Home and Instance


It is good practice to have separate Oracle Homes for CRS, ASM and RDBMS. The second step is to install the ASM home, as the Oracle Database will be built on top of it.

First screen: choose Home location and name

Choose all nodes in the cluster

Deselect all non-required components

Pre Checks, must succeed 100%

Choose Configure ASM

Summary screen

Install screen

Running chkcrs we will now see that all ASM instances and their listeners were incorporated into the cluster:

{oracle} /home/oracle [vmractest3] > chkcrs

HA Resource                                   Target     State
-----------                                   ------     -----
ora.vmractest1.ASM1.asm                       ONLINE     ONLINE on vmractest1
ora.vmractest1.LISTENER_VMRACTEST1.lsnr       ONLINE     ONLINE on vmractest1
ora.vmractest1.gsd                            ONLINE     ONLINE on vmractest1
ora.vmractest1.ons                            ONLINE     ONLINE on vmractest1
ora.vmractest1.vip                            ONLINE     ONLINE on vmractest1
ora.vmractest2.ASM2.asm                       ONLINE     ONLINE on vmractest2
ora.vmractest2.LISTENER_VMRACTEST2.lsnr       ONLINE     ONLINE on vmractest2
ora.vmractest2.gsd                            ONLINE     ONLINE on vmractest2
ora.vmractest2.ons                            ONLINE     ONLINE on vmractest2
ora.vmractest2.vip                            ONLINE     ONLINE on vmractest2
ora.vmractest3.ASM3.asm                       ONLINE     ONLINE on vmractest3
ora.vmractest3.LISTENER_VMRACTEST3.lsnr       ONLINE     ONLINE on vmractest3
ora.vmractest3.gsd                            ONLINE     ONLINE on vmractest3
ora.vmractest3.ons                            ONLINE     ONLINE on vmractest3
ora.vmractest3.vip                            ONLINE     ONLINE on vmractest3

17) Install Oracle Database Home


Once CRS and ASM are installed we can proceed to install the Oracle Cluster Database. Set the Oracle home to point to the 10gDB Oracle Home.

Initial screen: Custom Installation

Oracle Home pointing to 10gDB

The installer identifies the cluster; choose all nodes

Choose all components to test

All prechecks must succeed

Choose DBA group

Choose create database

Summary screen

Start software install

Net install basic

18) Create the RAC Database


DBCA can be run as part of the Oracle RDBMS installation, or after it. If you run dbca after installing the Oracle software, follow this procedure:

1) Go to $ORACLE_HOME/bin on the first node and execute ./dbca

Start the database install: select RAC database

Select Create Database

Select all nodes in the cluster

Select custom Database

Enter Database Name

Choose "Use Database Control

Choose "Use the Same Password for all Acounts"

Choose "Automatic Storage Management"

Choose both DATADG and ARCHDG to be used

Choose "Use Oracle Managed Files" and set +DATADG for the database area. Use Datadg and Archdg as destination Click on Multiplex redo and ctls for multiplexed redo and ctl files

Specify flash recovery area

Use archdg for recovery area

Choose enable archiving

Choose minimum options

In this example we create 3 services. AT LEAST ONE SERVICE MUST BE CREATED, AND IT NEEDS TO HAVE A NAME THAT IS DIFFERENT FROM THE DATABASE NAME! If you define only one service with the same name as the database, you will not be able to add other services later.

Create 3 database services; the names must be different from the database name! Click on Add:

service dbaone, TAF basic, preferred node 1, available nodes 2 and 3

service dbatwo, TAF basic, preferred nodes 1 and 2, available node 3
service dbathree, TAF basic, preferred node 3, available nodes 1 and 2
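If you later need to add, start or check services outside of DBCA, srvctl can do it from the command line. The following is only a sketch using the database name of this example and a hypothetical fourth service, dbafour; check srvctl add service -h on your system, since option handling varies slightly between 10.2 patch levels:

# Add a service with instance racdba1 preferred and the other instances available
srvctl add service -d racdba -s dbafour -r racdba1 -a racdba2,racdba3
srvctl start service -d racdba -s dbafour
srvctl status service -d racdba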

Finally we check the CRS status:

{oracle} /home/oracle [vmractest1] > chkcrs

HA Resource                                   Target     State
-----------                                   ------     -----
ora.racdba.db                                 ONLINE     ONLINE on vmractest1
ora.racdba.dbaone.cs                          ONLINE     ONLINE on vmractest1
ora.racdba.dbaone.racdba1.srv                 ONLINE     ONLINE on vmractest1
ora.racdba.dbathree.cs                        ONLINE     ONLINE on vmractest3
ora.racdba.dbathree.racdba3.srv               ONLINE     ONLINE on vmractest3
ora.racdba.dbatwo.cs                          ONLINE     ONLINE on vmractest2
ora.racdba.dbatwo.racdba2.srv                 ONLINE     ONLINE on vmractest2
ora.racdba.dbatwo.racdba3.srv                 ONLINE     ONLINE on vmractest3
ora.racdba.racdba1.inst                       ONLINE     ONLINE on vmractest1
ora.racdba.racdba2.inst                       ONLINE     ONLINE on vmractest2
ora.racdba.racdba3.inst                       ONLINE     ONLINE on vmractest3
ora.vmractest1.ASM1.asm                       ONLINE     ONLINE on vmractest1
ora.vmractest1.LISTENER_VMRACTEST1.lsnr       ONLINE     ONLINE on vmractest1
ora.vmractest1.gsd                            ONLINE     ONLINE on vmractest1
ora.vmractest1.ons                            ONLINE     ONLINE on vmractest1
ora.vmractest1.vip                            ONLINE     ONLINE on vmractest1
ora.vmractest2.ASM2.asm                       ONLINE     ONLINE on vmractest2
ora.vmractest2.LISTENER_VMRACTEST2.lsnr       ONLINE     ONLINE on vmractest2
ora.vmractest2.gsd                            ONLINE     ONLINE on vmractest2
ora.vmractest2.ons                            ONLINE     ONLINE on vmractest2
ora.vmractest2.vip                            ONLINE     ONLINE on vmractest2
ora.vmractest3.ASM3.asm                       ONLINE     ONLINE on vmractest3
ora.vmractest3.LISTENER_VMRACTEST3.lsnr       ONLINE     ONLINE on vmractest3
ora.vmractest3.gsd                            ONLINE     ONLINE on vmractest3
ora.vmractest3.ons                            ONLINE     ONLINE on vmractest3
ora.vmractest3.vip                            ONLINE     ONLINE on vmractest3

Check that Enterprise Manager Database Control is working at http://vmractest1:1158/em

ASM main Page:

ASM Administration:

19) Configure XDB for ASM Management


During the install phase we chose to install XML DB, which is the base for XDB. There are a couple of additional steps to configure FTP and GUI browsing through ASM directories:

ASM XDB CONFIGURATION

XDB configuration enables the use of FTP from an ftp session on Unix or through a browser on Windows; files can easily be moved in and out of ASM this way. It also provides an HTTP interface to browse through ASM directories in a graphical environment.

Follow Note: 243554.1 "How to Deinstall and Reinstall XML Database (XDB)" to install XDB.

Configuration steps:

1) As root check that the ftp service is running:

# netstat -a | grep ftp
tcp 0 0 *:ftp *:* LISTEN

If no output is returned, start ftp:

# service vsftpd start
Starting vsftpd for vsftpd: [ OK ]

Also configure ftp to start automatically:

# chkconfig vsftpd on

2) Configure the FTP and HTTP ports of XDB using:

connect / as sysdba
execute dbms_xdb.sethttpport(8080);
execute dbms_xdb.setftpport(2100);
commit;

To check, use:

select dbms_xdb.GETFTPPORT() from dual;
select dbms_xdb.GETHTTPPORT() from dual;

3) Check the dispatchers configuration for XDB; if it is not set, set it up. For a single instance:

ALTER SYSTEM SET dispatchers = "(PROTOCOL=TCP) (SERVICE=<sid>XDB)" SCOPE=BOTH

For RAC instances:

SQL> select instance_name from gv$instance;

INSTANCE_NAME
----------------
racdba1
racdba2
racdba3

ALTER SYSTEM SET racdba1.dispatchers = "(PROTOCOL=TCP) (SERVICE=racdba1XDB)" SCOPE=BOTH
ALTER SYSTEM SET racdba2.dispatchers = "(PROTOCOL=TCP) (SERVICE=racdba2XDB)" SCOPE=BOTH
ALTER SYSTEM SET racdba3.dispatchers = "(PROTOCOL=TCP) (SERVICE=racdba3XDB)" SCOPE=BOTH

If you are not using the default listener, ensure you have set LOCAL_LISTENER in the init.ora/spfile as prescribed for RAC/non-RAC instances, or the end points will not register.

  1* ALTER SYSTEM SET local_listener = "(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest1-vip)(PORT = 1521))" COMMENT='using vip' SCOPE=BOTH SID='*'
SQL> /

System altered.

SQL> show parameters local_listener

NAME             TYPE     VALUE
---------------- -------- ------------------------------
local_listener   string   (ADDRESS = (PROTOCOL = TCP)(HOST = vmractest1-vip)(PORT = 1521))

4) Restart the listener:

lsnrctl stop LISTENER_VMRACTEST1
lsnrctl start LISTENER_VMRACTEST1

5) Check that the following lines are returned when executing lsnrctl status; if they are not, you may need to restart your database:

(DESCRIPTION =(ADDRESS = (PROTOCOL = tcp)(HOST = <host>)(PORT = 2100))(Presentation = FTP)(Session = RAW))
(DESCRIPTION = (ADDRESS = (PROTOCOL = tcp)(HOST = <host>)(PORT = 8080))(Presentation = HTTP)(Session = RAW))

If not, then stop and start the database:

srvctl stop database -d racdba
srvctl start database -d racdba

[vmractest1] > lsnrctl status | grep Session
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vmractest1)(PORT=8080))(Presentation=HTTP)(Session=RAW))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vmractest1)(PORT=2100))(Presentation=FTP)(Session=RAW))

6) FTP session example:

ftp vmractest1 2100
Connected to vmractest1
220- vmractest1 Unauthorised use of this FTP server is prohibited and may be subject to civil and criminal prosecution.
220 vmractest1 FTP Server (Oracle XML DB/Oracle Database) ready.

530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (vmractest1:oracle): system
331 pass required for SYSTEM
Password:
230 SYSTEM logged in
Remote system type is Unix.
ftp> cd sys
250 CWD Command successful
ftp> cd asm
250 CWD Command successful
ftp> ls
227 Entering Passive Mode (10,5,225,24,232,7)
150 ASCII Data Connection
drw-r--r-- 2 SYS oracle        0 APR 11 16:51 ARCHDG
drw-r--r-- 2 SYS oracle        0 APR 11 16:51 DATADG
226 ASCII Transfer Complete
ftp> ls archdg
227 Entering Passive Mode (10,5,225,24,65,58)
150 ASCII Data Connection
drw-r--r-- 2 SYS oracle        0 APR 11 16:52 RACDBA
226 ASCII Transfer Complete
ftp> ls archdg/racdba
227 Entering Passive Mode (10,5,225,24,36,109)
150 ASCII Data Connection
drw-r--r-- 2 SYS oracle        0 APR 11 16:52 CONTROLFILE
drw-r--r-- 2 SYS oracle        0 APR 11 16:52 ONLINELOG
drw-r--r-- 2 SYS oracle        0 APR 11 16:52 PARAMETERFILE
-rw-r--r-- 1 SYS oracle     4608 APR 11 16:52 spfileracdba.ora
226 ASCII Transfer Complete
ftp> ls archdg/racdba/controlfile
227 Entering Passive Mode (10,5,225,24,127,237)
150 ASCII Data Connection
-rw-r--r-- 1 SYS oracle 15286272 APR 11 16:52 Current.256.587471829
226 ASCII Transfer Complete
ftp> ls DATADG/racdba/controlfile
227 Entering Passive Mode (10,5,225,24,252,254)
150 ASCII Data Connection
-rw-r--r-- 1 SYS oracle 15286272 APR 11 16:52 Current.269.587471827
226 ASCII Transfer Complete
ftp> bin
200 Type set to I.
ftp> !pwd
/home/oracle
ftp> cd sys/asm/DATADG/racdba/controlfile
250 CWD Command successful
ftp> ls
227 Entering Passive Mode (10,5,225,24,224,12)
150 ASCII Data Connection
-rw-r--r-- 1 SYS oracle 15286272 APR 11 17:00 Current.269.587471827
226 ASCII Transfer Complete
ftp> bin
200 Type set to I.
ftp> !pwd
/home/oracle
ftp> get Current.256.587471829
local: Current.256.587471829 remote: Current.256.587471829
227 Entering Passive Mode (10,5,225,24,235,211)
150 BIN Data Connection
226 BIN Transfer Complete
15286272 bytes received in 1.4 seconds (1.1e+04 Kbytes/s)
ftp> by
221 QUIT Goodbye.

{oracle} /home/oracle [vmractest1] > ls -l
total 14956
-rwxr-x--- 1 oracle dba     1060 Apr 10 10:56 chkcrs
-rwx------ 1 oracle dba       58 Apr  5 12:04 configssh
-rw-r--r-- 1 oracle dba 15286272 Apr 11 16:58 Current.256.587471829

In this example we copied the control file Current.256.587471829 out of the ASM disk group DATADG.

7) FTP can also be done using the FTP GUI provided by any internet browser. For FTP, type the URL ftp://vmractest1:2100/sys/asm. In order to browse into ASM folders you must log in as system.

Having used the URL ftp://vmractest1:2100/sys/asm we immediately get to ASM diskgroups:

From within DATADG we can see the database folder and, within it, the structural folders, similar to the file systems on traditional storage:

HTTP provides a "browse only" graphical interface to ASM with a similar look to FTP; it is available at this URL: http://vmractest1:8080/sys/asm. Enter the user and password as SYSTEM and <password>.

END OF THE PROCEDURE
