Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 564580.1]

Modified: 24-JAN-2010   Type: HOWTO   Status: PUBLISHED

In this Document
Goal
Solution
Deprecation of Support for Raw Devices
A Bit About Udev and Device Name Persistency
Multipath, Raw and Udev
Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
Assumptions
1. Configure SCSI_ID to Return Unique Device Identifiers
1a. Whitelist SCSI devices
1b. List all SCSI (Clusterware) devices
1c. Obtain Clusterware device unique SCSI identifiers
2. Configure Multipath for Persistent Naming of Clusterware Devices
2a. Configure Multipathing
2b. Verify Multipath Devices
3. Create Raw Devices
4. Test Raw Device Accessibility
5. Script Creation of Raw Bindings and Permissions
6. Test the Raw Device Script
7. Install Oracle 10gR2 Clusterware
References

Linux OS - Version: 5.0 to 5.0
Linux x86
Linux x86-64
Linux Itanium
Linux Kernel - Version: 5.0 to 5.0

Goal

This article is intended for Oracle on Linux Database and System Administrators, particularly those intending to
install (or migrate to) Oracle Real Application Clusters 10g Release 2 (10.2.0) on Red Hat/Oracle Enterprise
Linux 5 (EL5). The article focuses on the configuration of raw devices against multipathed devices on EL5 in
preparation for RAC Clusterware usage, rather than on multipathing itself or on the installation of the Clusterware.

Examples were taken from a working system of the following configuration:

Enterprise Linux 5 (GA) - 2.6.18-8.el5
Oracle Clusterware 10g Release 2 (10.2.0)
Shared storage for Clusterware files served via iSCSI

Note: this document differs from Note 465001.1, which describes configuration of raw devices against singlepathed
devices. This Note describes configuration of raw devices against multipathed devices.

Solution

Deprecation of Support for Raw Devices

In versions prior to EL5, applications such as Oracle could access unstructured data on block devices by
binding to them via character raw devices, such as /dev/raw/raw1, using the raw(8) command. Persistent
device assignments could be configured using the /etc/sysconfig/rawdevices file in conjunction with the
rawdevices service.

Support for raw devices was initially deprecated in the Linux 2.6 kernel (EL5 < U4) in favour of direct I/O
(O_DIRECT) access; however, it was later undeprecated from EL5 U4 (initscripts-8.45.30-2).

For details of the deprecation and undeprecation of support for rawio, refer to Linux kernel/version
documentation, including:

/usr/share/doc/kernel-doc-2.6.18/Documentation/feature-removal-schedule.txt
Red Hat Enterprise Linux 4/5 Release notes

Both the /etc/sysconfig/rawdevices file (EL4) and the /etc/udev/rules.d/60-raw.rules file (EL5)
similarly discuss the deprecation of raw.

OCFS2, Oracle's Cluster Filesystem version 2 (http://oss.oracle.com/projects/ocfs2), is an extent-based,
POSIX-compliant file system that provides for shared, O_DIRECT file access. For certified ports and
distributions, Oracle extends free support of OCFS2 to users with an Oracle database license for use in storing
Oracle datafiles, redologs, archivelogs, control files, voting disk (CRS), cluster registry (OCR), etc., along with
a shared Oracle home.

A Bit About Udev and Device Name Persistency

Unlike devlabel in the 2.4 kernel, udev (the 2.6 kernel device file naming scheme) dynamically creates device
file names at boot time. This, however, gives rise to the possibility that device file names may change - a
device that may once have been named /dev/sdd, say, may be named /dev/sdf after a reboot. Without
specific configuration, if udev is left to dynamically name devices, the potential exists for devices referred to,
or inadvertently accessed by, their arbitrary kernel-assigned names (e.g. Oracle Clusterware files; Cluster
Registry, Voting disks, etc.) to become corrupt.

Multipath, Raw and Udev

The necessity of highly available access to storage is well understood. For singlepath environments, raw
devices can easily be configured via udev rules as described in Note 465001.1. For multipath environments,
however, configuration of raw devices against multipathed devices via udev is more complex. In fact,
significant modification of the default udev rules can introduce supportability issues. Therefore, other means
are recommended to achieve configuration of raw devices against multipathed devices with multipath device
naming persistency.

Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5

The following procedure outlines the steps necessary to configure persistent multipath device naming and the
creation of raw devices (including permissions) in preparation for Oracle 10gR2 (10.2.0) Clusterware devices.
From Oracle 11g Release 1 (11.1.0), Clusterware files may be placed on either block or raw devices located on
shared disk partitions, therefore the following procedure only strictly applies when using Oracle 10gR2 (10.2.0)
and multipathing.

Take this opportunity to consider whether you wish to proceed using 10gR2 or 11gR1 Clusterware to manage
your 10gR2 databases - multipath device configuration for Oracle 11g Clusterware is described in Note 605828.1.
The following procedure may also be used as a basis for configuring raw devices on EL4 (Update 2 or higher).
Unless otherwise stated, all steps should be performed on each cluster node and as a privileged user.

Assumptions

The following procedure assumes the following to have occurred:

Clusterware devices have been created on supported shared storage
Clusterware devices have been appropriately sized according to Oracle 10g Release 2 (10.2.0) RAC documentation
Clusterware devices have been partitioned
All cluster nodes have multipath access to shared devices
Cluster nodes are configured to satisfy Oracle Universal Installer (OUI) requirements (see the example below)
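 
As a quick check of the last assumption, the Cluster Verification Utility shipped on the 10gR2 Clusterware
installation media may be run before installation. The following is a hedged sketch only - node1,node2 are
placeholders for your own hostnames:

$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose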

1. Configure SCSI_ID to Return Unique Device Identifiers

1a. Whitelist SCSI devices

Before being able to configure udev to explicitly name devices, scsi_id(8) should first be configured to
return SCSI device identifiers. Modify the /etc/scsi_id.config file - add or replace the options=-b
parameter/value pair (if it exists) with options=-g, for example:

# grep -v ^# /etc/scsi_id.config
vendor="ATA",options=-p 0x80
options=-g
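 
Once modified, scsi_id(8) should return a non-empty identifier for whitelisted devices. A minimal check,
using /block/sdb purely as an example device from the system shown in the listings below:

# scsi_id -g -u -s /block/sdb
1494554000000000000000000010000005c3900000d000000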

1b. List all SCSI (Clusterware) devices

Clusterware devices must be visible and accessible to all cluster nodes. Typically, cluster node operating
systems need to be updated in order to see newly provisioned (or modified) devices on shared storage, i.e. use
'/sbin/partprobe <device>' or '/sbin/sfdisk -r <device>', etc., or simply reboot. Resolve any issues
preventing cluster nodes from correctly seeing or accessing Clusterware devices before proceeding.

Run the fdisk(8) and/or 'cat /proc/partitions' commands to ensure Clusterware devices are visible, for
example:

# cat /proc/partitions
major minor #blocks name

8 0 6291456 sda
8 1 5735173 sda1
8 2 554242 sda2
8 16 987966 sdb
8 17 987681 sdb1
8 32 987966 sdc
8 33 987681 sdc1
8 48 987966 sdd
8 49 987681 sdd1
8 64 987966 sde
8 65 987681 sde1
8 80 987966 sdf
8 81 987681 sdf1
8 96 987966 sdg
8 97 987681 sdg1
8 112 1004031 sdh
8 113 1003873 sdh1
8 128 1004031 sdi
8 129 1003873 sdi1
8 144 1004031 sdj
8 145 1003873 sdj1
8 160 1004031 sdk
8 161 1003873 sdk1
8 176 1004031 sdl
8 177 1003873 sdl1
8 192 1004031 sdm
8 193 1003873 sdm1

Above, though perhaps not entirely evident, the kernel has assigned two device files per multipathed device,
i.e. devices /dev/sdb and /dev/sdc both refer to the same device/LUN on shared storage, as do /dev/sdd and
/dev/sde, and so on.

Note, at this point, each cluster node may refer to the would-be Clusterware devices by different device file
names - this is expected.

1c. Obtain Clusterware dev ice unique SCSI identifiers

Run the scsi_id(8) command against Clusterware devices from one cluster node to obtain their unique
device identifiers. When running the scsi_id(8) command with the -s argument, the device path and name
passed should be relative to the sysfs directory /sys/, i.e. /block/<device> when referring to
/sys/block/<device>. Record the unique SCSI identifiers of Clusterware devices - these are required later
(Step 2a.), for example:

# for i in `cat /proc/partitions | awk {'print $4'} | grep sd`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
...
### sdb: 1494554000000000000000000010000005c3900000d000000
### sdb1:
### sdc: 1494554000000000000000000010000005c3900000d000000
### sdc1:
### sdd: 149455400000000000000000001000000843900000d000000
### sdd1:
### sde: 149455400000000000000000001000000843900000d000000
### sde1:
### sdf: 149455400000000000000000001000000ae3900000d000000
### sdf1:
### sdg: 149455400000000000000000001000000ae3900000d000000
### sdg1:
### sdh: 149455400000000000000000001000000d03900000d000000
### sdh1:
### sdi: 149455400000000000000000001000000d03900000d000000
### sdi1:
### sdj: 149455400000000000000000001000000e63900000d000000
### sdj1:
### sdk: 149455400000000000000000001000000e63900000d000000
### sdk1:
### sdl: 149455400000000000000000001000000083a00000d000000
### sdl1:
### sdm: 149455400000000000000000001000000083a00000d000000
### sdm1:

From the output above, note that multiple devices share common SCSI identifiers. It should now be evident
that devices such as /dev/sdb and /dev/sdc refer to the same shared storage device (LUN).

Note: Irrespective of which cluster node the scsi_id(8) command is run from, the value returned for a given
device (LUN) should always be the same.

2. Configure Multipath for Persistent Naming of Clusterware Devices

The purpose of this step is to provide persistent, meaningful, user-defined Clusterware multipath device
names. This ensures correct use of the intended Clusterware multipath devices, which could otherwise be
confused if relying solely on the default multipath-assigned names (mpathN/mpathNpN), especially when
many devices are involved.

2a. Configure Multipathing

Configure multipathing by modifying the multipath configuration file /etc/multipath.conf. Comment and
uncomment the various stanzas accordingly to include (whitelist) or exclude (blacklist) specific devices/types
as candidates for multipathing. Specific devices, such as our intended Clusterware devices, should be
explicitly whitelisted as multipathing candidates. This can be accomplished by creating a dedicated multipath
stanza for each device. Ideally, at a minimum, each device stanza should include the device wwid and an
alias, for example:

# cat /etc/multipath.conf
...
multipath {
    wwid  1494554000000000000000000010000005c3900000d000000
    alias voting1
}
...

Following is a sample multipath.conf file. Modify your configuration according to your own environment and
preferences, but be sure to include the Clusterware device-specific multipath stanzas - substitute the wwid
values for your own, i.e. those returned from running Step 1c. above.

# grep -v ^# /etc/multipath.conf
defaults {
    user_friendly_names yes
}
defaults {
    udev_dir                /dev
    polling_interval        10
    selector                "round-robin 0"
    path_grouping_policy    failover
    getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
    prio_callout            /bin/true
    path_checker            readsector0
    rr_min_io               100
    rr_weight               priorities
    failback                immediate
    #no_path_retry          fail
    user_friendly_names     yes
}
devnode_blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^sda"
    devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
    multipath {
        wwid  1494554000000000000000000010000005c3900000d000000
        alias voting1
    }
    multipath {
        wwid  149455400000000000000000001000000843900000d000000
        alias voting2
    }
    multipath {
        wwid  149455400000000000000000001000000ae3900000d000000
        alias voting3
    }
    multipath {
        wwid  149455400000000000000000001000000d03900000d000000
        alias ocr1
    }
    multipath {
        wwid  149455400000000000000000001000000e63900000d000000
        alias ocr2
    }
    multipath {
        wwid  149455400000000000000000001000000083a00000d000000
        alias ocr3
    }
}

In the example above, devices with a specific wwid (per scsi_id(8)) are assigned persistent, user-defined
names (aliases), i.e. voting1, voting2, voting3, ocr1, ocr2 and ocr3.
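 
After editing /etc/multipath.conf, the multipath configuration needs to be (re)loaded. A hedged sketch of how
this might be done on EL5 - exact commands may vary with your release and whether multipathd was already
enabled:

# chkconfig multipathd on
# service multipathd restart
# multipath -v2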

2b. Verify Multipath Devices

Once multipathing has been configured and the multipathd service started, you should now have multipathed
Clusterware devices referable by their user-defined names, for example:

# multipath -ll
ocr3 (149455400000000000000000001000000083a00000d000000) dm-9 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:10 sdl 8:176 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:11 sdm 8:192 [active][ready]
ocr2 (149455400000000000000000001000000e63900000d000000) dm-3 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:8 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:9 sdk 8:160 [active][ready]
ocr1 (149455400000000000000000001000000d03900000d000000) dm-6 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:6 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:7 sdi 8:128 [active][ready]
voting3 (149455400000000000000000001000000ae3900000d000000) dm-2 IET,VIRTUAL-DISK
[size=965M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:4 sdf 8:80 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:5 sdg 8:96 [active][ready]
voting2 (149455400000000000000000001000000843900000d000000) dm-1 IET,VIRTUAL-DISK
[size=965M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:2 sdd 8:48 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:3 sde 8:64 [active][ready]
voting1 (1494554000000000000000000010000005c3900000d000000) dm-0 IET,VIRTUAL-DISK
[size=965M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:0 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:1 sdc 8:32 [active][ready]

In fact, various device names are created and used to refer to multipathed devices, i.e.:

# dmsetup ls | sort
ocr1 (253, 6)
ocr1p1 (253, 11)
ocr2 (253, 3)
ocr2p1 (253, 8)
ocr3 (253, 9)
ocr3p1 (253, 10)
voting1 (253, 0)
voting1p1 (253, 5)
voting2 (253, 1)
voting2p1 (253, 4)
voting3 (253, 2)
voting3p1 (253, 7)

# ll /dev/disk/by-id/
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000083a00000d000000 -> ../../sdm
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000083a00000d000000-part1 -> ../../sdm1
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-1494554000000000000000000010000005c3900000d000000 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-1494554000000000000000000010000005c3900000d000000-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000843900000d000000 -> ../../sde
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000843900000d000000-part1 -> ../../sde1
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000ae3900000d000000 -> ../../sdg
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000ae3900000d000000-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000d03900000d000000 -> ../../sdi
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000d03900000d000000-part1 -> ../../sdi1
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000e63900000d000000 -> ../../sdk
lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000e63900000d000000-part1 -> ../../sdk1

# ls -l /dev/dm-*
brw-rw---- 1 root root 253, 0 Apr 23 11:15 /dev/dm-0
brw-rw---- 1 root root 253, 1 Apr 23 11:15 /dev/dm-1
brw-rw---- 1 root root 253, 10 Apr 23 11:15 /dev/dm-10
brw-rw---- 1 root root 253, 11 Apr 23 11:15 /dev/dm-11
brw-rw---- 1 root root 253, 2 Apr 23 11:15 /dev/dm-2
brw-rw---- 1 root root 253, 3 Apr 23 11:15 /dev/dm-3
brw-rw---- 1 root root 253, 4 Apr 23 11:15 /dev/dm-4
brw-rw---- 1 root root 253, 5 Apr 23 11:15 /dev/dm-5
brw-rw---- 1 root root 253, 6 Apr 23 11:15 /dev/dm-6
brw-rw---- 1 root root 253, 7 Apr 23 11:15 /dev/dm-7
brw-rw---- 1 root root 253, 8 Apr 23 11:15 /dev/dm-8
brw-rw---- 1 root root 253, 9 Apr 23 11:15 /dev/dm-9

# ll /dev/mpath/
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr1 -> ../dm-6
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr1p1 -> ../dm-11
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr2 -> ../dm-3
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr2p1 -> ../dm-8
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr3 -> ../dm-9
lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr3p1 -> ../dm-10
lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting1 -> ../dm-0
lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting1p1 -> ../dm-5
lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting2 -> ../dm-1
lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting2p1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting3 -> ../dm-2
lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting3p1 -> ../dm-7

# ll /dev/mapper/
brw-rw---- 1 root disk 253, 6 Apr 23 11:15 ocr1
brw-rw---- 1 root disk 253, 11 Apr 23 11:15 ocr1p1
brw-rw---- 1 root disk 253, 3 Apr 23 11:15 ocr2
brw-rw---- 1 root disk 253, 8 Apr 23 11:15 ocr2p1
brw-rw---- 1 root disk 253, 9 Apr 23 11:15 ocr3
brw-rw---- 1 root disk 253, 10 Apr 23 11:15 ocr3p1
brw-rw---- 1 root disk 253, 0 Apr 23 11:15 voting1
brw-rw---- 1 root disk 253, 5 Apr 23 11:15 voting1p1
brw-rw---- 1 root disk 253, 1 Apr 23 11:15 voting2
brw-rw---- 1 root disk 253, 4 Apr 23 11:15 voting2p1
brw-rw---- 1 root disk 253, 2 Apr 23 11:15 voting3
brw-rw---- 1 root disk 253, 7 Apr 23 11:15 voting3p1

The /dev/dm-N devices are used internally by device-mapper-multipath and are not persistent across reboots,
so they should not be used. The /dev/mpath/ devices are created so that multipath devices are visible together
in one directory; however, they may not be available during the early stages of boot, so, again, they should not
be used. The /dev/mapper/ devices, however, are persistent and are created sufficiently early during boot - use
only these devices to access and interact with multipathed devices.
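 
Before proceeding, it may be worth quickly confirming that all of the expected /dev/mapper/ partition aliases
exist on every node - a minimal sketch, assuming the aliases used in this example:

# for p in ocr1p1 ocr2p1 ocr3p1 voting1p1 voting2p1 voting3p1; do [ -b /dev/mapper/$p ] && echo "OK: $p" || echo "MISSING: $p"; done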

3. Create Raw Devices

During the installation of Oracle Clusterware 10g Release 2 (10.2.0), the Oracle Universal Installer (OUI) is
unable to verify the sharedness of block devices and therefore requires raw devices (whether bound to
singlepath or multipath devices) to be specified for OCR and voting disks. As mentioned earlier, this is no
longer the case from Oracle 11g R1 (11.1.0), which can use multipathed block devices directly.

Manually create raw devices bound against multipathed device partitions (/dev/mapper/*pN). Disregard
device permissions for now - these will be addressed later. For example:

# raw /dev/raw/raw1 /dev/mapper/ocr1p1
/dev/raw/raw1: bound to major 253, minor 11
# raw /dev/raw/raw2 /dev/mapper/ocr2p1
/dev/raw/raw2: bound to major 253, minor 8
# raw /dev/raw/raw3 /dev/mapper/ocr3p1
/dev/raw/raw3: bound to major 253, minor 10
# raw /dev/raw/raw4 /dev/mapper/voting1p1
/dev/raw/raw4: bound to major 253, minor 5
# raw /dev/raw/raw5 /dev/mapper/voting2p1
/dev/raw/raw5: bound to major 253, minor 4
# raw /dev/raw/raw6 /dev/mapper/voting3p1
/dev/raw/raw6: bound to major 253, minor 7

# raw -qa
/dev/raw/raw1: bound to major 253, minor 11
/dev/raw/raw2: bound to major 253, minor 8
/dev/raw/raw3: bound to major 253, minor 10
/dev/raw/raw4: bound to major 253, minor 5
/dev/raw/raw5: bound to major 253, minor 4
/dev/raw/raw6: bound to major 253, minor 7

# ls -l /dev/raw/
crw------- 1 root root 162, 1 Apr 23 11:52 raw1
crw------- 1 root root 162, 2 Apr 23 11:52 raw2
crw------- 1 root root 162, 3 Apr 23 11:52 raw3
crw------- 1 root root 162, 4 Apr 23 11:52 raw4
crw------- 1 root root 162, 5 Apr 23 11:52 raw5
crw------- 1 root root 162, 6 Apr 23 11:52 raw6

At this point, you should have raw devices bound to multipathed device partitions using user-defined names.

4. Test Raw Dev ice Accessibility

Test read/write accessibility to and from raw devices from and between cluster nodes, for example:

# dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=100
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.762352 seconds, 134 kB/s

# su - oracle
$ dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=100
dd: opening `/dev/raw/raw1': Permission denied

# dd if=/dev/zero of=/dev/mapper/ocr1p1 bs=1024 count=100
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.0468961 seconds, 2.2 MB/s

...
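 
To further confirm that all nodes see the same underlying storage through their raw bindings, a simple
read-back comparison can be used - a hedged sketch, using voting3/raw6 purely as an example (its contents
will in any case be reinitialised by the Clusterware installation). On the first node:

# dd if=/dev/urandom of=/dev/raw/raw6 bs=1024 count=16
# dd if=/dev/raw/raw6 bs=1024 count=16 2>/dev/null | md5sum

Then, on each remaining node, the same checksum command should return an identical sum:

# dd if=/dev/raw/raw6 bs=1024 count=16 2>/dev/null | md5sum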

Once testing is complete, unbind all raw devices, for example:

# raw /dev/raw/raw1 0 0
/dev/raw/raw1: bound to major 0, minor 0
# raw /dev/raw/raw2 0 0
/dev/raw/raw2: bound to major 0, minor 0
# raw /dev/raw/raw3 0 0
/dev/raw/raw3: bound to major 0, minor 0
# raw /dev/raw/raw4 0 0
/dev/raw/raw4: bound to major 0, minor 0
# raw /dev/raw/raw5 0 0
/dev/raw/raw5: bound to major 0, minor 0
# raw /dev/raw/raw6 0 0
/dev/raw/raw6: bound to major 0, minor 0

5. Script Creation of Raw Bindings and Permissions

Once raw devices have been created and their accessibility and usability established, configure the raw device
bindings and permissions. Factoring in the undeprecation of raw devices from EL5 Update 4 (initscripts-
8.45.30-2), configure raw devices according to your version as follows.

For >= EL5U4, configure raw devices via /etc/sysconfig/rawdevices in conjunction with the rawdevices service.
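 
The bindings in /etc/sysconfig/rawdevices follow the standard "<rawdev> <blockdev>" (or "<rawdev> <major>
<minor>") format. A hedged example using the same aliases as above - note that ownership/permissions of the
resulting /dev/raw/rawN devices still need to be set separately (e.g. from /etc/rc.local, as in the script below):

# cat /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/mapper/ocr1p1
/dev/raw/raw2 /dev/mapper/ocr2p1
/dev/raw/raw3 /dev/mapper/ocr3p1
/dev/raw/raw4 /dev/mapper/voting1p1
/dev/raw/raw5 /dev/mapper/voting2p1
/dev/raw/raw6 /dev/mapper/voting3p1

# service rawdevices restart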

For < EL5U4, use a custom or existing script such as /etc/rc.local to configure raw devices, for example:

# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

#####
# Oracle Cluster Registry (OCR) devices
#####
chown root:oinstall /dev/mapper/ocr*
chmod 660 /dev/mapper/ocr*
raw /dev/raw/raw1 /dev/mapper/ocr1p1
raw /dev/raw/raw2 /dev/mapper/ocr2p1
raw /dev/raw/raw3 /dev/mapper/ocr3p1
sleep 2
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chown root:oinstall /dev/raw/raw3
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
chmod 660 /dev/raw/raw3
#####
# Oracle Cluster Voting disks
#####
chown oracle:oinstall /dev/mapper/voting*
chmod 660 /dev/mapper/voting*
raw /dev/raw/raw4 /dev/mapper/voting1p1
raw /dev/raw/raw5 /dev/mapper/voting2p1
raw /dev/raw/raw6 /dev/mapper/voting3p1
sleep 2
chown oracle:oinstall /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw5
chown oracle:oinstall /dev/raw/raw6
chmod 660 /dev/raw/raw4
chmod 660 /dev/raw/raw5
chmod 660 /dev/raw/raw6

Note: depending on the type and speed of the underlying storage, a sleep(1) of one or two seconds may be
necessary between raw device creation and ownership/permission setting.
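 
Rather than relying on a fixed sleep, the script could instead wait (with a bounded number of retries) for the
raw device nodes to appear before changing their ownership - a minimal sketch, assuming the six bindings
used above:

# Wait up to ~10 seconds for each bound raw device node to appear.
for n in 1 2 3 4 5 6; do
    i=0
    while [ ! -c /dev/raw/raw$n ] && [ $i -lt 10 ]; do
        sleep 1
        i=`expr $i + 1`
    done
done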

6. Test the Raw Dev ice Script

Restart the rawdevices service and/or execute the /etc/rc.local script to test the proper creation and
permission setting of both the raw and multipath devices. Additionally, reboot the server(s) to further verify
proper boot-time creation of devices, for example:

# /etc/rc.local
/dev/raw/raw1: bound to major 253, minor 11
/dev/raw/raw2: bound to major 253, minor 8
/dev/raw/raw3: bound to major 253, minor 10
/dev/raw/raw4: bound to major 253, minor 5
/dev/raw/raw5: bound to major 253, minor 4
/dev/raw/raw6: bound to major 253, minor 7

# ll /dev/mapper/
brw-rw---- 1 root disk 253, 6 Apr 23 11:15 ocr1
brw-rw---- 1 root disk 253, 11 Apr 23 11:15 ocr1p1
brw-rw---- 1 root disk 253, 3 Apr 23 11:15 ocr2
brw-rw---- 1 root disk 253, 8 Apr 23 11:15 ocr2p1
brw-rw---- 1 root disk 253, 9 Apr 23 11:15 ocr3
brw-rw---- 1 root disk 253, 10 Apr 23 11:15 ocr3p1
brw-rw---- 1 root disk 253, 0 Apr 23 11:15 voting1
brw-rw---- 1 root disk 253, 5 Apr 23 11:15 voting1p1
brw-rw---- 1 root disk 253, 1 Apr 23 11:15 voting2
brw-rw---- 1 root disk 253, 4 Apr 23 11:15 voting2p1
brw-rw---- 1 root disk 253, 2 Apr 23 11:15 voting3
brw-rw---- 1 root disk 253, 7 Apr 23 11:15 voting3p1

# ls -l /dev/raw/
crw-rw---- 1 root oinstall 162, 1 Apr 23 11:57 raw1
crw-rw---- 1 root oinstall 162, 2 Apr 23 11:57 raw2
crw-rw---- 1 root oinstall 162, 3 Apr 23 11:57 raw3
crw-rw---- 1 oracle oinstall 162, 4 Apr 23 11:57 raw4
crw-rw---- 1 oracle oinstall 162, 5 Apr 23 11:57 raw5
crw-rw---- 1 oracle oinstall 162, 6 Apr 23 11:57 raw6

7. Install Oracle 10gR2 Clusterware

Proceed to install Oracle Clusterware 10g Release 2 (10.2.0), being sure to specify the appropriate raw devices
(/dev/raw/rawN) for the OCR and voting disks. OCR devices are initialised (formatted) as part of running the
root.sh script. Before running root.sh, be aware that several known issues exist that will cause the
Clusterware installation to fail, namely:

Bug.4679769 FAILED TO FORMAT OCR DISK USING CLSFMT
Note.414163.1 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA Failures)

Due to Bug.4679769, initialisation of multipathed OCR devices will fail. Therefore, before running root.sh,
download and apply the patch for Bug.4679769. If root.sh was already run without first having applied the
patch for Bug.4679769, remove (null) the failed, partially initialised OCR structures from all OCR devices,
for example:

# dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=25
25+0 records in
25+0 records out

# dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=25
25+0 records in
25+0 records out

Before re-running root.sh, review Note.414163.1 to proactively address several known (vipca) issues that would
otherwise need to be resolved separately later. With the above complete, running (or re-running) root.sh
should result in proper initialisation of the multipathed OCR/voting devices and successful completion of the
Oracle Clusterware installation, i.e.:

[oracle@oel5a crs]$ sudo ./root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
assigning default hostname oel5a for node 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oel5a oel5a-int oel5a
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Now formatting voting device: /dev/raw/raw6
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oel5a
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
...

Upon completion of the Oracle 10gR2 Clusterware installation, the Clusterware should be up and running,
making use of raw devices bound to multipathed devices, i.e.:

[oracle@oel5a crs]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 1164
Available space (kbytes) : 260980
ID : 1749049955
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded

Cluster registry integrity check succeeded

[root@oel5a /]# crsctl query css votedisk
0. 0 /dev/raw/raw4
1. 0 /dev/raw/raw5
2. 0 /dev/raw/raw6

located 3 votedisk(s).

[root@oel5a /]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Refer to Note.394848.1 for any issues, such as -16 EBUSY [Device or resource busy], arising from the continued
use of raw devices being bound to multipathed devices.

The requirement to use raw devices for the OCR and voting devices applies solely to the initial installation of
Oracle 10gR2 Clusterware. Once the installation is complete, the OCR and voting devices can be switched to
use multipath devices directly - refer to Note.401132.1 for further details.

References

NOTE:394848.1 - Install Of CRS Gets "Specified Partition May Not Have Correct Permission"
NOTE:401132.1 - How to install Oracle Clusterware with shared storage on block devices
NOTE:414163.1 - 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures)
NOTE:465001.1 - Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
NOTE:605828.1 - Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0) on RHEL5/OEL5
http://download-west.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#BABBHECD
http://download.oracle.com/docs/cd/B28359_01/install.111/b28263/storage.htm#CDEBFDEH

Related

Products

Unbreakable Linux and Virtualization > Unbreakable Linux > Operating System > Linux OS

Keywords

OUI; STORAGE; RESOURCE BUSY; RAW DEVICE; CLUSTERWARE; OCR
