
2.  Configure LUNs for ASM:

(System Administrator's Task)

2a. Verify Multipath Devices:

Once multipathing has been configured and the multipathd service started, the
multipathed devices should now be available.

For detailed multipathing commands, please refer to How To Setup ASM & ASMLIB On
Native Linux Multipath Mapper disks? (Doc ID 602952.1).
Update the kernel partition table with the new partition as follows (if Real Application
Clusters (RAC), do this on each node):
# /sbin/partprobe

Then verify that all multipaths are active by executing:


# multipath -ll

360060e80045b2b0000005b2b000006c4 dm-1 HITACHI,OPEN-V*20
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:17 sdab 65:176 [active][ready]
 \_ 1:0:0:17 sdm 8:192 [active][ready]
360060e80045b2b0000005b2b000006d8 dm-2 HITACHI,OPEN-V*20
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:18 sdac 65:192 [active][ready]
 \_ 1:0:0:18 sdn 8:208 [active][ready]
360060e80045b2b0000005b2b00001679 dm-14 HITACHI,OPEN-V
[size=50G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:9 sdk 8:160 [active][ready]
 \_ 3:0:0:9 sdz 65:144 [active][ready]
360060e80045b2b0000005b2b0000312e dm-9 HITACHI,OPEN-V
[size=257M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:4 sdf 8:80 [active][ready]
 \_ 3:0:0:4 sdu 65:64 [active][ready]
360060e80045b2b0000005b2b00001007 dm-3 HITACHI,OPEN-V*5
[size=250G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:19 sdad 65:208 [active][ready]
 \_ 1:0:0:19 sdo 8:224 [active][ready]
360060e80045b2b0000005b2b0000163c dm-20 HITACHI,OPEN-V*11   ---> multipathed
[size=550G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:7 sdh 8:112 [active][ready]   ---> Required to be [active][ready]
 \_ 3:0:0:7 sdi 8:128 [active][ready]   ---> Required to be [active][ready]

Note:  DM-Multipath provides a way of organizing the I/O paths logically by creating a
single multipath device on top of the underlying devices.  Each device that comes from
the Hitachi storage subsystem (e.g. WWID 360060e80045b2b0000005b2b0000163c) has
two underlying physical devices. These two physical devices have to be partitioned (e.g.
sdh1 and sdi1), and the WWID device has to be partitioned as well (e.g.
360060e80045b2b0000005b2b0000163cp1). The two physical devices within a group are
also required to be the same size, both non-partitioned and partitioned, and across
nodes in Real Application Clusters (RAC).
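
Since each WWID must map to two healthy underlying paths, it can help to list each
WWID together with its path devices programmatically. Below is a minimal sketch that
parses a sample of the "multipath -ll" output shown above; the sample string is an
assumption standing in for the live command, which you would pipe in instead:

```shell
# Illustrative sketch: list each WWID with its underlying path devices by
# parsing `multipath -ll` output.  The sample string below stands in for
# the live command output on a real system.
multipath_sample='360060e80045b2b0000005b2b0000163c dm-20 HITACHI,OPEN-V*11
[size=550G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:7 sdh 8:112 [active][ready]
 \_ 3:0:0:7 sdi 8:128 [active][ready]'

printf '%s\n' "$multipath_sample" | awk '
/^3/                  { wwid = $1 }       # WWID header line
/\[active\]\[ready\]/ { print wwid, $3 }  # healthy path line -> sd device
'
```

Each printed pair names the WWID and one of its physical path devices, which is a
quick way to see that every WWID has its two [active][ready] paths.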

In fact, various device names are created and used to refer to multipathed
devices, for example:
# dmsetup ls | sort
360060e80045b2b0000005b2b000006b0 (253, 0)
360060e80045b2b0000005b2b000006b0p1 (253, 15)
360060e80045b2b0000005b2b000006c4 (253, 1)
360060e80045b2b0000005b2b000006c4p1 (253, 17)
360060e80045b2b0000005b2b000006d8 (253, 2)
360060e80045b2b0000005b2b000006d8p1 (253, 26)
360060e80045b2b0000005b2b0000163c (253,11)
360060e80045b2b0000005b2b0000163cp1 (253,20)

# ll /dev/mpath/
lrwxrwxrwx 1 root root 7 Jun 27 07:17
360060e80045b2b0000005b2b000006b0 -> ../dm-0
lrwxrwxrwx 1 root root 8 Jun 27 07:17
360060e80045b2b0000005b2b000006b0p1 -> ../dm-15
lrwxrwxrwx 1 root root 7 Jun 27 07:17
360060e80045b2b0000005b2b000006c4 -> ../dm-1
lrwxrwxrwx 1 root root 8 Jun 27 07:17
360060e80045b2b0000005b2b000006c4p1 -> ../dm-17
lrwxrwxrwx 1 root root 7 Jun 27 07:17
360060e80045b2b0000005b2b000006d8 -> ../dm-2
lrwxrwxrwx 1 root root 8 Jun 27 07:17
360060e80045b2b0000005b2b000006d8p1 -> ../dm-26
lrwxrwxrwx 1 root root 7 Jun 27 07:17
360060e80045b2b0000005b2b0000163c -> ../dm-11
lrwxrwxrwx 1 root root 8 Jun 27 07:17
360060e80045b2b0000005b2b0000163cp1 -> ../dm-20

# ll /dev/mapper/
brw-rw---- 1 root disk 253, 0 Jun 27 07:17
360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17
360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17
360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17
360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17
360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17
360060e80045b2b0000005b2b0000163cp1

# ls -lR /dev|more
/dev:
drwxr-xr-x 3 root root 60 Jun 27 07:17 bus
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrom-hda -> hda
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom-sr0 -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw-hda -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter-hda -> hda
crw------- 1 root root 5, 1 Jun 27 07:18 console
lrwxrwxrwx 1 root root 11 Jun 27 07:17 core -> /proc/kcore
drwxr-xr-x 10 root root 200 Jun 27 07:17 cpu
drwxr-xr-x 6 root root 120 Jun 27 07:17 disk
brw-rw---- 1 root root 253, 0 Jun 27 07:17 dm-0
brw-rw---- 1 root root 253, 1 Jun 27 07:17 dm-1
brw-rw---- 1 root root 253, 10 Jun 27 07:17 dm-10
brw-rw---- 1 root root 253, 11 Jun 27 07:17 dm-11
brw-rw---- 1 root root 253, 12 Jun 27 07:17 dm-12
brw-rw---- 1 root root 253, 13 Jun 27 07:17 dm-13
brw-rw---- 1 root root 253, 14 Jun 27 07:17 dm-14
brw-rw---- 1 root root 253, 15 Jun 27 07:17 dm-15
brw-rw---- 1 root root 253, 16 Jun 27 07:17 dm-16
brw-rw---- 1 root root 253, 17 Jun 27 07:17 dm-17
brw-rw---- 1 root root 253, 18 Jun 27 07:17 dm-18
brw-rw---- 1 root root 253, 19 Jun 27 07:17 dm-19
brw-rw---- 1 root root 253, 2 Jun 27 07:17 dm-2
brw-rw---- 1 root root 253, 20 Jun 27 07:17 dm-20
brw-rw---- 1 root root 253, 21 Jun 27 07:17 dm-21
brw-rw---- 1 root root 253, 22 Jun 27 07:17 dm-22
brw-rw---- 1 root root 253, 23 Jun 27 07:17 dm-23
brw-rw---- 1 root root 253, 24 Jun 27 07:17 dm-24
brw-rw---- 1 root root 253, 25 Jun 27 07:17 dm-25
brw-rw---- 1 root root 253, 26 Jun 27 07:17 dm-26
...
/dev/disk/by-label:
lrwxrwxrwx 1 root root 10 Jun 27 07:17 1 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jun 27 07:17 boot1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 optapporacle1 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 27 07:17 SWAP-sda3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 27 07:17 tmp1 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jun 27 07:17 var1 -> ../../sda6

3.  Automatic Storage Management Library (ASMLIB) setup:

Note:  For improved performance and easier administration, Oracle recommends that
the Automatic Storage Management Library (ASMLIB) driver be used instead of raw
devices to configure Automatic Storage Management disks.
Important:  The ASMLIB driver version has to match the kernel version. Download the
matching ASMLIB driver version from Oracle's website for ASMLIB drivers:
http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html
Important: ASMLib is supported only for Oracle Linux 6, not for Red Hat Enterprise
Linux 6 (RHEL6).

Important: For RHEL6, Oracle will only provide ASMLib software and updates when
configured with a kernel distributed by Oracle. Oracle will not provide ASMLib
packages for kernels distributed by Red Hat as part of RHEL6.
ASMLib updates will be delivered via the Unbreakable Linux Network (ULN), which is
available to customers with Oracle Linux support. ULN works with both Oracle Linux
and Red Hat Linux installations, but ASMLib usage will require replacing any Red Hat
kernel with a kernel provided by Oracle. ULN is a comprehensive resource for Oracle
Linux support subscribers, and offers access to Linux software patches, updates, and
fixes.
The documents below provide the location for the ASMLIB RPMS for Red Hat Enterprise
Linux 6 (RHEL) and the current ASMLIB support guidelines & details on RHEL6:

        'oracleasmlib' and 'oracleasm-support' RPMs for Red Hat Enterprise Linux 6
(RHEL) Location.
        Note: 1089399.1 Oracle ASMLib Software Update Policy for Red Hat Enterprise
Linux Supported by Red Hat.

Note: For Red Hat Enterprise Linux 6 (beginning with 6.4) the kernel driver package
'kmod-oracleasm' is available directly from Red Hat, and can be installed from the
"RHEL Server Supplementary (v. 6 64-bit x86_64)" channel on Red Hat Network
(RHN).  Updates to this module will be provided by Red Hat.  Please check with your
Red Hat advisor for questions concerning support of the kernel driver package.  The
'oracleasmlib' and 'oracleasm-support' packages are maintained by Oracle; they are
required in order to use kmod-oracleasm.  
These are available for download from the Oracle Technology Network.

A related thread is also available in our ASM Community:

ASMLIB Support On Red Hat 6 (ASMLIB RPMS Location For Red Hat Enterprise Linux 6
(RHEL)).

For more information, you may reference the Oracle Documents:
Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red
Hat (Doc ID 1089399.1)
Oracle Linux 6 Release Notes (Doc ID 1292376.1)
For more information on RHEL 7, you may reference Oracle Documents:
Oracle Database Online Documentation 12c Release 1 (12.1) / Installing and
Upgrading Grid Infrastructure Installation Guide:
8.1 Supported Oracle Linux 7 and Red Hat Linux 7 Distributions for x86-64    

3a. Verify that ASMLIB has not been installed already before installing (If Real
Application Clusters (RAC), run this command on each node):

For example (as root):

# rpm -qa | grep oracleasm

i. Output if installed:

oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5 -----> optional


oracleasm-2.6.18-164.el5debug-2.0.5-1.el5 -----> optional
oracleasm-2.6.18-164.el5-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-164.el5xen-2.0.5-1.el5 -----> optional

ii. Output if not installed:

package not installed

3b. Install (if not installed).  The installed packages MUST match the kernel version.


(System Administrator's Task)

i. Verify the kernel version:

# uname -r
2.6.18-164.el5PAE

ii. Install the correct packages for the kernel version:

# rpm -i oracleasm-support-2.1.3-1.el5.i386.rpm \
oracleasmlib-2.0.4-1.el5.i386.rpm \
oracleasm-2.6.18-164.el5-2.0.5-1.el5.i386.rpm
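
Because the kernel-module package name embeds the kernel version, the match can be
checked with a simple string comparison. This is a hedged sketch using the sample
values shown above; on a live system you would set the variables from uname -r and
rpm -qa instead:

```shell
# Sketch: verify the oracleasm kernel-module package matches the running
# kernel.  Sample values mirror the outputs above; on a live system use
# something like:  kernel=$(uname -r)  and the rpm -qa output.
kernel='2.6.18-164.el5PAE'
pkg='oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5'

case "$pkg" in
  oracleasm-"$kernel"-*) result='oracleasm package matches running kernel' ;;
  *)                     result="MISMATCH: need oracleasm package for $kernel" ;;
esac
echo "$result"
```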
Failed example:

# /etc/init.d/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no

If the status check failed ("no" displayed), then configure the Oracle ASM library
driver, for example:


# /etc/init.d/oracleasm configure (If RAC, run this command on all
nodes)

Default user to own the driver interface []: grid        -----> input
Grid Infrastructure/ASM user name
Default group to own the driver interface []: asmadmin   -----> input
Grid Infrastructure/ASM group name
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Note: This will configure the on-boot properties of the Oracle ASM library driver. The
following questions will determine whether the driver is loaded on boot and what
permissions it will have. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

3c. Check status again:

# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

4.  Create ASM diskgroups:

Important:  The /dev/dm-n devices are internal to device-mapper-multipath and are
non-persistent, so they should not be used. The /dev/mpath/ devices are created so
that multipath devices are visible together; however, they may not be available during
the early stages of the boot process, so they should not typically be used either.
The /dev/mapper/ devices are persistent and created early during boot. These are the
only device names that should be used to access multipathed devices.
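
The naming guidance above can be summarized as a small path check. The helper below
is purely illustrative (it only inspects the path string and does not touch the
system); it is not a command from this note:

```shell
# Illustrative helper: classify a device path per the naming guidance above.
check_path() {
  case "$1" in
    /dev/mapper/*) echo "$1: OK (persistent, created early in boot)" ;;
    /dev/mpath/*)  echo "$1: avoid (may be absent early in boot)" ;;
    /dev/dm-*)     echo "$1: avoid (non-persistent internal name)" ;;
    *)             echo "$1: not a multipath device name" ;;
  esac
}

check_path /dev/mapper/360060e80045b2b0000005b2b0000163cp1
check_path /dev/dm-20
```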

Note:  Use /dev/mapper/[WWID]p1 as the device name for createdisk.  If Real
Application Clusters (RAC), do this only on the first node.  All commands are done as
root.
4a. Check prior to createdisk command:

# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" does not exist or is not instantiated

# /etc/init.d/oracleasm querydisk
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is not marked
as an ASM disk

4b. After the check, run the createdisk command:

# /etc/init.d/oracleasm createdisk DAT \
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Marking disk "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" as an ASM
disk: [ OK ]

Note:  If using multiple devices/disks within an ASM diskgroup, a good practice is to
append numbers to the ASM alias name when running /etc/init.d/oracleasm createdisk
to create the members of the diskgroup, for example:
# /etc/init.d/oracleasm createdisk DAT01
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
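
For a diskgroup with several members, the numbered createdisk commands can be
generated in a loop. This is a hedged sketch using WWIDs taken from the examples in
this note; the loop only prints the commands so they can be reviewed before being run
as root:

```shell
# Sketch: generate numbered createdisk commands for diskgroup members.
# The WWIDs are from the examples in this note; substitute your own.
gen_createdisk() {
  i=1
  for wwid in 360060e80045b2b0000005b2b0000163c \
              360060e80045b2b0000005b2b0000155a; do
    # Print, do not execute: review first, then run as root.
    printf '/etc/init.d/oracleasm createdisk DAT%02d /dev/mapper/%sp1\n' "$i" "$wwid"
    i=$((i + 1))
  done
}
gen_createdisk
```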

5.  To make the disk available, enter the following commands:

Note:  If Real Application Clusters (RAC), run the following two commands, in the
order of first node, second node, etc.   All commands done as root.

        

5a.  Scan ASM disks:

# /etc/init.d/oracleasm scandisks

Scanning system for ASM disks: [ OK ]

5b.  List ASM disks:


 

# /etc/init.d/oracleasm listdisks
DAT

 6.  Check the ASM diskgroups:

For example (as root) (If Real Application Clusters (RAC), do on each node.):
# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" is a valid ASM disk on device [253, 20]

or

# /etc/init.d/oracleasm querydisk -d DAT
Disk "DAT" is a valid ASM disk on device [253, 20]

# /etc/init.d/oracleasm querydisk
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is marked as
an ASM disk

Note:  The numbers [253, 20] are the major and minor device numbers, which
correspond to the entries in the file /proc/partitions.  The multipathed device can be
validated by cross-referencing these numbers with /proc/partitions and the output of
"multipath -ll" to ensure that the major and minor numbers match.

# cat /proc/partitions
major minor #blocks name
.
.
.
253 20 524281275 dm-20
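
This cross-reference can also be scripted. Below is a minimal sketch that extracts the
[major, minor] pair from the querydisk message and matches it against a
/proc/partitions line; the sample strings are assumptions that stand in for the live
outputs shown above:

```shell
# Cross-check sketch: pull [major, minor] from the querydisk message and
# match it against a /proc/partitions entry.  Sample strings mirror the
# outputs above; substitute the real outputs on a live system.
querydisk_out='Disk "DAT" is a valid ASM disk on device [253, 20]'
partitions='major minor #blocks name
253 20 524281275 dm-20'

majmin=$(printf '%s\n' "$querydisk_out" | sed 's/.*\[\([0-9]*\), \([0-9]*\)\].*/\1 \2/')
printf '%s\n' "$partitions" | awk -v mm="$majmin" '($1 " " $2) == mm { print "matches " $4 }'
```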

7.  Ensure that the allocated devices can be seen in /dev/mpath:

           For example (as root) (If Real Application Clusters (RAC), do on each node.):
# cd /dev/mpath

# ls -l
total 0
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000163c
-> ../dm-11
lrwxrwxrwx 1 root root 8 May 8 10:32
360060e80045b2b0000005b2b0000163cp1 -> ../dm-20
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000155a
-> ../dm-14
lrwxrwxrwx 1 root root 8 May 8 10:32
360060e80045b2b0000005b2b0000155ap1 -> ../dm-22
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b00001584
-> ../dm-16
lrwxrwxrwx 1 root root 8 May 8 10:32
360060e80045b2b0000005b2b00001584p1 -> ../dm-23
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003130
-> ../dm-1
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003131
-> ../dm-2
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003132
-> ../dm-3
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003133
-> ../dm-4
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003134
-> ../dm-5

8.  Ensure that the devices can be seen in /dev/mapper:

           For example (as root) (If Real Application Clusters (RAC), do on each node.):

# ls -l /dev/mapper
total 0
brw-rw---- 1 root disk 253, 0 Jun 27 07:17
360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17
360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17
360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17
360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 2 Jun 27 07:17
360060e80045b2b0000005b2b000006d8
brw-rw---- 1 root disk 253, 26 Jun 27 07:17
360060e80045b2b0000005b2b000006d8p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17
360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17
360060e80045b2b0000005b2b0000163cp1
brw-rw---- 1 root disk 253, 12 Jun 27 07:17
360060e80045b2b0000005b2b0000155a
brw-rw---- 1 root disk 253, 29 Jun 27 07:17
360060e80045b2b0000005b2b0000155ap1
brw-rw---- 1 root disk 253, 14 Jun 27 07:17
360060e80045b2b0000005b2b00001679
brw-rw---- 1 root disk 253, 25 Jun 27 07:17
360060e80045b2b0000005b2b00001679p1
brw-rw---- 1 root disk 253, 13 Jun 27 07:17
360060e80045b2b0000005b2b00001584
brw-rw---- 1 root disk 253, 24 Jun 27 07:17
360060e80045b2b0000005b2b00001584p1
brw-rw---- 1 root disk 253, 3 Jun 27 07:17
360060e80045b2b0000005b2b00001007
brw-rw---- 1 root disk 253, 19 Jun 27 07:17
360060e80045b2b0000005b2b00001007p1
brw-rw---- 1 root disk 253, 4 Jun 27 07:17
360060e80045b2b0000005b2b0000189c
brw-rw---- 1 root disk 253, 18 Jun 27 07:17
360060e80045b2b0000005b2b0000189cp1

9.  Check the device type:

           For example (as root) (If Real Application Clusters (RAC), do on each node.):

# /sbin/blkid | grep oracleasm


/dev/dm-20: LABEL="DAT" TYPE="oracleasm"   ---> multipathed
/dev/dm-22: LABEL="ARC" TYPE="oracleasm"
/dev/dm-23: LABEL="FRA" TYPE="oracleasm"
/dev/sdh1: LABEL="DAT" TYPE="oracleasm"    ---> physical
/dev/sdx1: LABEL="ARC" TYPE="oracleasm"
/dev/sdj1: LABEL="FRA" TYPE="oracleasm"
/dev/sdi1: LABEL="DAT" TYPE="oracleasm"    ---> physical
/dev/sdy1: LABEL="ARC" TYPE="oracleasm"
/dev/sdz1: LABEL="FRA" TYPE="oracleasm"

10.  Set up the ASM parameter (ORACLEASM_SCANORDER) in the ASMLIB
configuration file, /etc/sysconfig/oracleasm, to force ASMLIB to bind to the
multipath devices.

Note:  If Real Application Clusters (RAC), do commands on each node.  All commands
done as root.

10a.  Check the file, /etc/sysconfig/oracleasm:

# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Jun 13 09:58 /etc/sysconfig/oracleasm ->
oracleasm-_dev_oracleasm
 

10b. Make a backup of the original file, /etc/sysconfig/oracleasm-_dev_oracleasm

# cp /etc/sysconfig/oracleasm-_dev_oracleasm /etc/sysconfig/oracleasm-
_dev_oracleasm.orig

10c.  Modify the ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE
parameters in /etc/sysconfig/oracleasm:

# vi /etc/sysconfig/oracleasm-_dev_oracleasm

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning

ORACLEASM_SCANORDER="mpath dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan

ORACLEASM_SCANEXCLUDE="sd"

Note: Another valid value for ORACLEASM_SCANORDER is:

ORACLEASM_SCANORDER="dm"

10d. Save the file.

10e. Restart oracleasm:

# service oracleasm restart

or

# /etc/init.d/oracleasm restart

10f. Check the multipath device against the /proc/partitions file:

# cat /proc/partitions
major minor #blocks name
.
.
.
253 20 524281275 dm-20

10g. Check the multipath device against the directory, /dev/oracleasm/disks:

# ls -ltr /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 20 Oct 4 13:37 DAT

10h. Check the oracleasm disks again:

# /etc/init.d/oracleasm listdisks
DAT

Note:  ASMLIB first scans all disks listed in the /proc/partitions file. Within the
multipath directory, /dev/mpath, the alias names and WWIDs are links to the
multipathed dm-[n] names.  With ORACLEASM_SCANEXCLUDE="sd", ASMLIB does not
scan any disks whose names start with "sd", i.e. all of the underlying SCSI disks.
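
A quick way to confirm that the scan settings from step 10 are in place is to pull the
two parameters back out of the configuration. Sketch against a sample of the file's
contents; on a live system you would read /etc/sysconfig/oracleasm instead:

```shell
# Sanity-check sketch: show the ASMLIB scan parameters.  The sample string
# mirrors the file edited in step 10c; on a live system use something like:
#   grep '^ORACLEASM_SCAN' /etc/sysconfig/oracleasm
config='ORACLEASM_SCANORDER="mpath dm"
ORACLEASM_SCANEXCLUDE="sd"'

printf '%s\n' "$config" | grep '^ORACLEASM_SCAN'
```

Both lines should appear, with SCANORDER listing the multipath name patterns first
and SCANEXCLUDE filtering out the underlying "sd" devices.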

Additional Resources

Community Discussions: Storage Management MOS Community


Still have questions? Use the above community to search for similar discussions or start
a new discussion on this subject.
