
IBM Americas Advanced Technical Support

Oracle 11gR2 ASM on AIX


Configuration & Implementation Considerations

R. Ballough
ballough@us.ibm.com
IBM Advanced Technical Support
October 1, 2013

© 2013, IBM Advanced Technical Support Techdocs Version 12/18/2013


http://w3.ibm.com/support/Techdocs

Introduction
Automatic Storage Management (ASM) Overview
ASM Management Overview
ASM Management with AIX
ASM Disk Header Integrity
Recovering from ASM disk header corruption
Storage Requirements and Dependencies
IBM storage products
EMC Storage products
Hitachi Storage products
Violin Memory
ASM and Cluster Software
Adding or Removing LUNs
Adding a LUN
Removing a LUN
Preferred Mirror Read
Administration Implications of ASM Implementation
Storage Management Considerations
Disk Group Considerations
Tuning Options and Recommendations Specific to ASM Implementations
AIX parameters
Database Parameters
Parameters to be included in the spfile of the ASM instances
ASM disk group parameters
Reminders
Trademarks
References
Acknowledgements


Introduction

Prior to the release of Oracle Database Server 10g, database administrators had to choose
between raw logical volumes and filesystems to house their data files. The decision to
implement one or the other was always a compromise: raw logical volumes offer the
best performance, but filesystems are easier to administer.

Filesystems offer many features for the DBA. Since the unit of storage
allocation is a filesystem, and the filesystem is owned by the Oracle user, the DBA can
allocate additional data files as needed. Oracle’s autoextend feature can also be used to
increase the space allocation for data files when necessary, and free space can be easily
seen at the system level to be used with capacity planning tools. When it comes to
backups, the data copy can also take place at the system level, backing up only used data.

When raw logical volumes are used, a logical volume must be allocated for each database
file. Adding or resizing a logical volume must be performed by the systems
administrator. For backups, either RMAN must be used, or the entire raw logical
volume must be written to the backup media, including unused sectors.

Although the management issues are more complex, raw logical volumes traditionally
offered the best performance, though filesystem improvements to bypass the filesystem
buffer cache such as AIX’s Concurrent I/O (CIO) can offer response time near that of
raw in some configurations.

In Oracle 10g, a new choice for data file storage was introduced, called Automatic
Storage Management (ASM). Built on raw device files, ASM offers the performance
of raw logical volumes, but offers the configuration flexibility of filesystems to the DBA.

As ASM has largely replaced raw logical volume configurations, Oracle has desupported
raw device storage in 12c.1

The purpose of this paper is to provide information about the requirements and
considerations for ASM implementation in both single instance and Real Application
Clusters (RAC) environments using the AIX operating system.

1 MOS Note 754305.1, Oracle Database Upgrade Guide 12c Release 1

Automatic Storage Management (ASM) Overview

With AIX, each LUN has a raw device file in the /dev directory such as /dev/rhdisk0. In
a filesystem or raw logical volume environment, a volume group is created using one or
more LUNs, and the volume group is subdivided into logical volumes, each of which has
its own raw device file, such as /dev/rlv0. For an ASM environment, this raw device file
for either a LUN or a raw logical volume is assigned to the oracle user.2 An ASM
instance manages these device files; in a Grid Infrastructure environment, one ASM
instance will be created per cluster node.

The raw device files given to ASM are organized into ASM disk groups. In a typical
configuration, two disk groups are created, one for data and one for a recovery area, but if
LUNs are of different sizes or performance characteristics, more disk groups should be
configured, with only similar LUNs (similar in terms of size, performance, and RAID
characteristics) comprising each disk group. For each ASM disk group, a level of
redundancy is defined, which may be normal (mirrored), high (3 mirrors), or external (no
mirroring). The majority of AIX customers use disk subsystems which provide RAID
protection and therefore use 'external' redundancy with ASM.

When a file is created within ASM, it is automatically striped across all storage allocated
to the disk groups according to the stripe specification in v$asm_template. The stripe
width specified may be “FINE”, which is a 128k stripe width, or “COARSE”, indicating
the stripe width of the ASM disk group (default is 1M), which is called the allocation
unit size, or AU_SIZE. In 11g, the AU_SIZE may be set per disk group to values of
1M, 2M, 4M, 8M, 16M, 32M, or 64M.

In Oracle 10g, the extent size, which is the smallest addressable unit of data (a contiguous
allocation of data blocks), is identical to the AU_SIZE. In 11g, the extent size has been
decoupled from the AU_SIZE to support very large database environments. Because
databases using ASM use memory from the shared pool to store the extent map for each
file, a VLDB environment may require a large shared pool to address the data in a 10g
environment.
To address this issue in 11g, the first 20,000 extents for a datafile are sized at AU_SIZE,
the next 20,000 extents are sized at 8* AU_SIZE, and extents 40,000 and beyond are
sized at 64 * AU_SIZE. To minimize shared pool usage in an environment where
datafiles are expected to be large, AU_SIZE should be increased beyond the default
value.
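
As a rough worked example based on the rules above: with the default 1M AU_SIZE, a
datafile's first 20,000 extents map about 20 GB of data, the next 20,000 extents (at 8 MB
each) map about another 160 GB, and all further growth is mapped in 64 MB extents, so
the extent map held in the shared pool grows far more slowly than the file itself.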

In 11gR2, ASM is now installed as part of the Oracle Grid Infrastructure software stack,
the same software used for RAC installations; however, it is not necessary to install or

2 ASM also supports NFS files as ASM disks, and for that configuration, the NFS filesystem is used as the ASM device rather than a raw device file. However, this configuration is seen infrequently, so it is out of scope for this paper.

license Oracle RAC simply to use ASM. The ASM version installed may be higher
than the Oracle database accessing it for storage; the disk group parameter
compatible.rdbms sets the minimum database release that can access that disk group.
When this parameter is set lower than the ASM version installed, any features introduced
after the version specified by compatible.rdbms cannot be used.

When ASM is started, it scans all devices which match the parameter asm_diskstring. If
the /dev/rhdisk device entry has the appropriate permissions (ownership of oracle:dba3
and permission mode 660), the ASM header (stored on disk) is read and diskgroups can
be created, or existing disk groups can be mounted. ASM then registers the diskgroups
with the Cluster Synchronization Services Daemon (CSS), which is in charge of
monitoring ASM, its diskgroups and disks, and keeping disk group metadata in sync.

When a database opens a file managed by ASM, the database instance queries the ASM
instance for the map of extents for that file, and stores the extent map in the shared pool.
Once that extent map is provided to the database instance, the database instance performs
I/O to the device file locations indicated by the extent map – that is, the ASM instance
provides the database with pointers to the data, and is not actually used to perform I/O.
In this way, the instance does not need to continually query the ASM instance for storage
location data.

New in 11gR2 is a logical volume and filesystem layer in ASM, called ASM Dynamic
Volume Manager (ADVM) and ASM Cluster File System (ACFS), respectively. This
new layer may be used for non-database files such as application files, exports, or even
Oracle database binaries. ADVM and ACFS are not intended for Oracle data files or
Oracle Clusterware files. The parameter compatible.rdbms must be at least 11.2.0.2 in
order to use ADVM and ACFS.

3 Or the primary user and group of the Grid Infrastructure software, if a different user id is configured for this role.

ASM Management Overview

Options for managing ASM include using Oracle Enterprise Manager (OEM), the sqlplus
command line ("sqlplus / as sysasm"), the ASMCMD utility, and, new in 11gR2, the
ASM Configuration Assistant (ASMCA). ASMCA can be used to create diskgroups,
add or remove disks from disk groups, mount or dismount diskgroups, or set up ADVM
and ACFS. ASMCA also has an ‘Advanced Options’ section which allows AU_SIZE
and compatibility parameters to be configured when a disk group is created.

Versions prior to 11gR2 can use DBCA for ASM configuration, but ASM diskgroups
must be created manually in order to alter the AU_SIZE. Manual creation, as well as
other management tasks, is typically performed by logging in to the ASM instance as
SYSASM.

$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm
SQL> create diskgroup bigau external redundancy disk '/dev/rhdisk6'
2 attribute 'au_size'= '16M', 'compatible.asm' = '11.2';

Datafiles can be created normally on the newly created ASM diskgroup, using the '+' sign
and diskgroup name rather than the usual file and path name.

export ORACLE_SID=client_sid
SQL> create tablespace bigautbs datafile '+BIGAU' size 64M;
Tablespace created.

The AU_SIZE can be verified from the v$asm_diskgroup view within the ASM instance,
for example:

SQL> select name, allocation_unit_size from v$asm_diskgroup;

NAME ALLOCATION_UNIT_SIZE
------------------------------ --------------------
DATA 1048576
BIGAU 16777216

Other useful views in ASM include v$asm_disk, v$asm_client, v$asm_file, v$asm_alias,
v$asm_operation, and v$asm_template.

To create a data file in an ASM environment, a tablespace is simply created within the
database instance, and no supporting filesystem or raw logical volume creation is
necessary.

SQL> create tablespace index01 datafile ‘+DATA’ size 1024M;

Since data file information such as location and free space are not visible from the
operating system command line when using ASM, Oracle has provided a utility called
ASMCMD to provide a UNIX-like command line environment for querying information
about ASM storage. The utility was first released in 10.2.0.1, but it has not been until
the 11gR2 release that ASMCMD has evolved to include most commands needed to
manage ASM storage. Within ASMCMD, the systems administrator will find familiar
navigation commands like cd, cp, du, find, ls, mkdir, pwd, and rm.

Instead of checking the diskgroup AU_SIZE attribute as described earlier, ASMCMD
could be used:

$ export ORACLE_SID=+ASM
$ asmcmd
ASMCMD> lsattr -l -G data

Name Value
access_control.enabled FALSE
access_control.umask 066
au_size 8388608
cell.smart_scan_capable FALSE
compatible.asm 11.2.0.0.0
compatible.rdbms 11.1.0.0.0
disk_repair_time 3.6h
sector_size 512

The ‘lsdg’ command lists all disk groups and their attributes, including total and free disk
space:
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 8388608 15360 15224
0 15224 0 N DATA/
MOUNTED EXTERN N 512 4096 1048576 10240 8672
0 8672 0 N RECO/

The ‘lsdsk’ command breaks this information down by LUN:

ASMCMD> lsdsk -p
Group_Num Disk_Num Incarn Mount_Stat Header_Stat Mode_Stat
State Path
2 0 4265480687 CACHED MEMBER ONLINE
NORMAL /dev/rhdiskASM2
2 1 4265480686 CACHED MEMBER ONLINE
NORMAL /dev/rhdiskASM3
1 0 4265480698 CACHED MEMBER ONLINE
NORMAL /dev/rhdiskASM4
1 1 4265480699 CACHED MEMBER ONLINE
NORMAL /dev/rhdiskASM5

The iostat command includes performance statistics similar to the operating system iostat
information:

ASMCMD> iostat -t
Group_Name Dsk_Name Reads Writes Read_Time Write_Time
DATA DATA_0000 8650752 45522944 .044531 6.432436
DATA DATA_0001 208896 16384 .014472 .001202
RECO RECO_0000 918290432 27210886656 5.861197 1117.007562
RECO RECO_0001 920691200 22919404032 5.49306 804.800452

Information about individual datafiles can be seen, just like at the operating system level,
with the ‘ls’ command.
ASMCMD> ls -s
Block_Size Blocks Bytes Space Name
8192 1048577 8589942784 8592031744 CLASS.261.781456745

8192 99841 817897472 819986432 SYSAUX.266.779650785
8192 93441 765468672 767557632 SYSTEM.260.779650785
8192 87681 718282752 720371712 UNDOTBS1.264.779650785
8192 641 5251072 6291456 USERS.263.779650785

Databases using ASM can be seen with the ASMCMD command ‘lsct’

ASMCMD> lsct
DB_Name Status Software_Version Compatible_version Instance_Name
Disk_Group
DB06ASM CONNECTED 11.2.0.2.0 11.2.0.0.0 DB06ASM
DATA
DB06ASM CONNECTED 11.2.0.2.0 11.2.0.0.0 DB06ASM
RECO

Additional commands available with ASMCMD can be seen by issuing the 'help'
command, and further information about specific commands can be shown with 'help
<command>'

ASMCMD> help
ASMCMD> help chdg

If a drive is added to or removed from an ASM diskgroup, ASM will automatically
redistribute file extents evenly across the disks in the diskgroup. When the change
occurs, the rebalance process (RBAL) detects the change, and messages one or more
ARBx processes to perform the actual rebalance. The number of ARBx processes is
determined from the parameter ASM_POWER_LIMIT. By default, this parameter is
set to 1, making rebalancing a low impact operation. For some environments, even one
rebalancing process may not be desirable behavior during peak workloads. If
ASM_POWER_LIMIT is set to 0, no rebalancing will occur when a disk is added. If
this setting is chosen, care must be taken to manually increase ASM_POWER_LIMIT
when rebalancing is deemed acceptable.
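
For example, to keep rebalancing from starting automatically, the parameter can be set to
zero in the ASM instance, and a higher power can then be supplied explicitly when a disk
is added (the disk name here is illustrative):

SQL> alter system set asm_power_limit=0;
SQL> alter diskgroup data add disk '/dev/rhdiskASM8' rebalance power 6;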

Although ASM_POWER_LIMIT cannot be changed using ASMCMD, rebalancing
operations may be initiated manually with a specified power limit:

ASMCMD> rebal --power 4 DATA
Rebal on progress.
ASMCMD>

Any rebalance activity can be seen with the ‘lsop’ command:

ASMCMD> lsop
Group_Name Dsk_Num State Power
ASMCMD>

If an ACFS filesystem needs to be created, the easiest method is using the ASMCA
utility.
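
If a command-line approach is preferred, a minimal sketch of the same task would be to
create an ADVM volume, format it as ACFS, and mount it. The disk group name, volume
name, and mount point below are illustrative, and the exact AIX mkfs/mount syntax
should be verified against the Oracle ACFS documentation:

ASMCMD> volcreate -G DATA -s 2G test        (creates a volume device such as /dev/asm/test-114)
# mkfs -V acfs /dev/asm/test-114            (format the ADVM volume as ACFS)
# mkdir /test
# mount -v acfs /dev/asm/test-114 /test     (mount it like any other filesystem)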

After creation is complete, the new filesystem is visible at the operating system level as
any other filesystem:

grid@lp11 /home/grid$ df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 458752 152076 67% 13231 6% /
/dev/hd2 2686976 142996 95% 57713 9% /usr
/dev/asm/test-114 2097152 1986300 6% 221704 6% /test


ASM Management with AIX

ASM Disk Header Integrity

The AIX Physical Volume Identifier, or PVID, is an ID number used to track physical
volumes. It is physically written on the LUN and registered in the AIX Object Data
Manager (ODM). Its purpose is to preserve hdisk numbering across reboots and storage
reconfiguration. A PVID is required to assign a disk to an AIX volume group, whether
with the chdev, mkvg, or extendvg command.

PVIDs can be seen with the ‘lspv’ command:

grid@lp07 /home/grid$ lspv
hdisk0 00f623c45e60af9e rootvg active
hdisk1 00f623c4cc6f323c orabinvg active
hdiskASM2 none None

ASM disks do not need a PVID, as ASM maintains its own identification of LUNs based
on data stored in the ASM disk header. Unfortunately, the ASM header and the AIX
PVID are both stored in the same disk region, so assigning a PVID to an ASM disk will
cause ASM header corruption (see MOS Note 353761.1).

To avoid this conflict, PVIDs should not be assigned to ASM disks unless an AIX LVM
structure is used underneath ASM.

At the operating system level, ASM disks can be identified using the ‘lquerypv’
command, which must be executed as root. In the following example, it can be seen that
the disk is an ‘ORCLDISK’, the diskgroup name is ‘DATA’, and the ASM disk name is
‘DATA_0000’.

root@lp06 / # lquerypv -h /dev/rhdiskASM4
00000000 00820101 00000000 80000000 0EAE8A6C |...............l|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 0B200000 00000103 44415441 5F303030 |. ......DATA_000|
00000050 30000000 00000000 00000000 00000000 |0...............|
00000060 00000000 00000000 44415441 00000000 |........DATA....|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 44415441 5F303030 |........DATA_000|
00000090 30000000 00000000 00000000 00000000 |0...............|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 01F70A8D 8408D800 |................|


000000D0 01F710AB C3AC9000 02001000 00100000 |................|
000000E0 0001BC80 00001E00 00000002 00000001 |................|
000000F0 00000002 00000002 00000000 00000000 |................|

Several enhancements have been made in AIX to improve the safety of ASM-managed
storage. In AIX 5.3 TL7 and above, the AIX commands 'mkvg' and 'extendvg' check for
the presence of an ASM header, and will not overwrite the ASM information if header
data is discovered:

# mkvg -y testvg hdiskASM3
0516-1339 mkvg: Physical volume contains some 3rd party volume group.
0516-1397 mkvg: The physical volume hdiskASM3, will not be added to
the volume group.
0516-862 mkvg: Unable to create volume group.

AIX 6.1 TL6 added two important enhancements, the ability to rename hdisks, and the
ability to lock them.

The 'rendev' command allows a disk to be dynamically renamed, so an hdisk can be
labeled with a tag indicating ASM usage. For example, the following command renames
hdisk3 to hdiskASM3:

# rendev -l hdisk3 -n hdiskASM3

It is strongly recommended to keep the initial 'hdisk' in the name, as this prevents issues
with other tools or utilities expecting AIX LUNs to have the standard naming convention.

The 'lkdev' command locks device attributes so that they cannot be changed with the
'chdev' command, which is the command that can explicitly set or clear a PVID.

# lkdev -l hdiskASM3 -a
hdiskASM3 locked
# chdev -l hdiskASM3 -a pv=clear
chdev: 0514-558 Cannot perform the requested function because hdiskASM3
is currently locked.

Recovering from ASM disk header corruption

Starting with version 11.1.0.7, Oracle has included a backup copy of the ASM disk
header in the second allocation unit of an ASM disk. This backup can be used by the
Kernel Files Metadata Editor (KFED) utility to repair corrupt ASM metadata. MOS Note
1485597.1 explains how to read ASM disk headers and how to see the help screen;
however, the KFED utility is not well documented by Oracle. Numerous blogs do offer
additional tips on utilizing both the 'kfed read' and the 'kfed repair' options.


The basic approach to kfed repair is to first use kfed read to verify that the second
allocation unit contains a valid backup of the ASM header; if it does, kfed repair can
then be used to restore the primary ASM header.
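
As an illustration only (verify the procedure against MOS Note 1485597.1 before using
it; the block location of the backup header shown here assumes the default 1M allocation
unit and 4K metadata block size):

$ kfed read /dev/rhdiskASM4 | grep kfbh.type                   (primary header in allocation unit 0)
$ kfed read /dev/rhdiskASM4 aun=1 blkn=254 | grep kfbh.type    (backup copy in the second allocation unit)
$ kfed repair /dev/rhdiskASM4                                  (rewrites the primary header from the backup)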

For versions prior to 11.1.0.7, or for disks which do not have a valid ASM header in the
second allocation unit, Metalink note 750016.1 explains an alternative header repair
process of dropping individual hdisks from ASM and re-adding them into the disk group.

Both techniques have been used successfully on AIX to repair ASM disk header issues.

Storage Requirements and Dependencies

There are several storage requirements which must be met in order to utilize ASM for
Oracle data storage. If the installation is for a RAC environment, the device must
obviously support physical connectivity to multiple systems, such as dual-initiator
SCSI, fibre channel, or network attached storage (this paper will only address fibre channel
attachment). For all implementations, the logical device to be used by ASM must
support ownership by the ‘oracle’ user rather than the default of ‘root’. If multipathing
software is used, or if multiple hosts are used in a RAC environment, the device must
also support the capability to remove any type of locking mechanism that would prevent
true sharing by multiple hosts.

Starting with Oracle 10.2.0.3, use of storage configured through the Virtual I/O (VIO)
Server is certified with Oracle RAC and ASM. Reserve locking must be turned off at
the VIO Server level.
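
A hedged sketch of the VIO Server side of this (the hdisk number is illustrative; the
command is run from the padmin restricted shell before the device is mapped to the
client partitions):

$ chdev -dev hdisk3 -attr reserve_policy=no_reserve
hdisk3 changed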

Typically in a fibre channel environment, multipathing software is used to facilitate use
of multiple fibre channel paths to disk, either for redundancy or bandwidth. The disk
multipathing software interoperability with ASM is the responsibility of the multipathing
software vendor, rather than Oracle. The vendor should be asked whether they have
tested their multipathing software with ASM.

Below is a list of high end storage subsystems known to have been implemented with
ASM on AIX. The information provided is not intended to be a comprehensive list of all
storage products which may be used with ASM on AIX, nor is it a substitute for vendor
install guides and recommendations, but should instead be used as a starting point for
obtaining information to architect an appropriate storage solution for an ASM
environment.

IBM storage products:

DS8000, DS6800, SVC and V7000 storage models have been successfully implemented
using AIX MPIO for multipathing. Using SDDPCM is highly recommended. SDD
multipathing software does not allow non-root ownership of devices, and therefore
cannot be used with ASM.
• In order to turn off device locking, all disk devices used for ASM, OCR, or voting
must set reserve_policy=no_reserve
# chdev -l hdisk# -a reserve_policy=no_reserve
• Verify that this is set correctly on all devices:
# lsattr -El hdisk# | grep reserve
reserve_policy no_reserve Reserve Policy True

XIV has also been successfully implemented with ASM; the IBM XIV Host Attachment
Kit for AIX should be installed5 for correct device recognition and multipathing. MPIO
is used for multipathing with ASM and XIV, and device locking is handled the same way
as the DS8000 configuration.

IBM FlashSystem 7xx and 8xx products also support ASM. The APAR for "MPIO
Support for FC attached IBM FlashSystem Storage" should be installed. For AIX 7.1, the
APAR number is IV38226, and for AIX 6.1 implementations, the APAR number is
IV38191. The reserve_policy for FlashSystem LUNs should also be set to no_reserve.

Older DS4000 storage models use RDAC for path failover capability, and this is also
known to work with ASM, but the parameter in this case to turn off device locking is
reserve_lock=no
• # chdev -l hdisk# -a reserve_lock=no
• Verify that this is set correctly on all devices:
# lsattr -El hdisk# | grep reserve
reserve_lock no Reserve device on open True

EMC Storage products:


EMC storage subsystems using PowerPath have been successfully deployed with AIX
and ASM since PowerPath version 4.3; MPIO with EMC's ODM has been successfully
deployed as well. Consult EMC’s Simple Support Matrix documents, available at
https://elabnavigator.emc.com, for information on supported PowerPath versions for a
given AIX level, storage subsystem, and VIO Server if desired. Installation instructions
can be found in the EMC Networked Storage Topology Guide.

Hitachi Storage products:


Hitachi storage has been implemented successfully with ASM on AIX using either
MPIO or HDLM. With MPIO or HDLM, reserve_policy=no_reserve must be set on all
devices (see instructions for IBM storage products above).

5 The XIV host attachment kit is available at http://www.ibm.com/support/fixcentral

Additional installation instructions can be found in the document Hitachi Dynamic Link
Manager Software User Guide for AIX.

Violin Memory
For Violin Memory attachment requirements, refer to the technical report Connecting
Violin to AIX and PowerVM. Violin provides a path control module (PCM) for use
with MPIO.

ASM and Cluster Software

With Oracle 10g RAC, Oracle introduced its own clusterware, Cluster-Ready Services
(CRS), which replaces the heartbeat and cluster membership functions normally provided
by PowerHA. PowerHA software is not necessary in a 10g or 11g RAC implementation
except when traditional raw logical volumes are used, which require HACMP (PowerHA)
to manage the concurrent LVM environment. If a clustered filesystem is preferred to store
Oracle datafiles, GPFS or Veritas software must be purchased. In an ASM environment,
no additional software is needed, which can result in cost savings over filesystem or raw
environments.

For single-instance environments requiring additional availability, an active/passive
configuration may be set up using ASM. There are two basic methods for configuring
active/passive ASM.

One method involves one or more ASM disk groups which are shared across multiple
nodes of an active/passive cluster. For this configuration, the disk groups are mounted
on all nodes simultaneously and ASM appears as a RAC cluster, as in the figure below.
No reconfiguration is necessary in the event of a node failure.

Figure 1: Clustered Grid Infrastructure


An alternative configuration involves installing Grid Infrastructure for a single node on
both nodes of the PowerHA cluster, zoning the disks to both nodes, but creating a
separate ASM disk group for each cluster node, as the disk group cannot be shared
among the nodes (the CSS heartbeat mechanism also locks the disk for exclusive access
by one node only).

Figure 2: Single Instance Grid Infrastructure

While either method is a valid, supported configuration6, there are pros and cons to each
approach. The Clustered Grid Infrastructure method does not prevent data files from
multiple nodes being placed on a common disk group, which can limit maintenance
capabilities where different databases with different SLAs are included in the cluster.
The Clustered Grid Infrastructure method also opens the door for Oracle RAC database
configurations to be installed on top of the shared diskgroup configuration, which may
increase the requirement for purchasing RAC licenses from Oracle.
On the other hand, the Clustered Grid Infrastructure method provides the easiest method
for RAC failover.

The Single-Instance Grid Infrastructure method addresses the potential issues with the
Clustered Grid Infrastructure configuration, as it does not allow a disk group to be
shared across nodes. However, during failover events it does require an 'alter
diskgroup mount' statement to mount the disk group from the source node onto the target
node of the cluster.
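
As a minimal sketch of that failover step (the disk group name is illustrative), the resource
scripts on the target node would mount the disk group in its ASM instance before starting
the database:

$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm
SQL> alter diskgroup data mount;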

6 Reference MOS Notes 1296124.1, 1319050.1

Also for this configuration, consideration should be given to implementing ASM on top
of non-concurrent LVM volumes, and including importvg in the failover sequence. This
eliminates the possibility of mistakenly accessing LUNs from multiple nodes (though no
such issues are known at this time).

Adding or Removing LUNs

Adding a LUN

If a LUN needs to be added to an existing ASM diskgroup, the LUN should have size
and performance characteristics similar to the existing disks in that diskgroup. As with
creating a new disk group, the ownership of the /dev/rhdisk device must be oracle:dba
(or the appropriate owner and group for the Grid Infrastructure software), and the
permission mode must be set to 660.

It is also recommended to rename the hdisk device descriptively, and lock the device
attributes prior to adding the LUN to ASM.

# rendev -l hdisk7 -n hdiskASM7
# lkdev -l hdiskASM7 -a

grid@lp11$ chown grid:oinstall /dev/rhdiskASM7
grid@lp11$ chmod 660 /dev/rhdiskASM7

Using the ASMCA utility is the easiest way to add the LUN, but the LUNs can also be
added from the SQLPlus command line using the alter diskgroup command:

SQL> alter diskgroup data add disk '/dev/rhdiskASM7';

It is also possible to use the ASMCMD command line utility to add a LUN, but an XML
configuration file must be created to specify the changes. More detail on this method
can be seen from the ASMCMD command line by typing in the following command:
ASMCMD> help chdg

Rebalancing occurs automatically provided the ASM_POWER_LIMIT parameter is set
to 1 or higher.

Removing a LUN

In theory, the process of removing a LUN from ASM is as simple as using ASMCA or,
from the SQLPlus command line, running the command:


SQL> select name, path from v$asm_disk;


NAME PATH
-----------------------------------------------------------------------
RECO_0001 /dev/rhdiskASM3
DATA_0000 /dev/rhdiskASM4
DATA_0001 /dev/rhdiskASM5
DATA_0002 /dev/rhdiskASM2

SQL> alter diskgroup data drop disk DATA_0002;

Check to see that the rebalance operation to move data off of the disk has completed:

ASMCMD> lsop
Group_Name Dsk_Num State Power
DATA REBAL RUN 1
ASMCMD> lsop
Group_Name Dsk_Num State Power

In practice, however, depending on the Oracle version, the ASM device file may remain
open. In that case, LUNs can be reassigned to other disk groups, but can't be dropped at
the OS level without closing the open files (generally by shutting down the database
and/or the grid infrastructure clusterware). The asmcmd command 'lsod' shows the
processes holding the device open:

ASMCMD> lsod
Instance Process OSPID Path
1 oracle@lp06 (DBW0) 8126714 /dev/rhdiskASM2
1 oracle@lp06 (DBW0) 8126714 /dev/rhdiskASM3
1 oracle@lp06 (GMON) 8192118 /dev/rhdiskASM2
1 oracle@lp06 (GMON) 8192118 /dev/rhdiskASM3
1 oracle@lp06 (LGWR) 7930100 /dev/rhdiskASM2
1 oracle@lp06 (LGWR) 7930100 /dev/rhdiskASM3
1 oracle@lp06 (RBAL) 8061054 /dev/rhdiskASM2
1 oracle@lp06 (RBAL) 8061054 /dev/rhdiskASM2
1 oracle@lp06 (RBAL) 8061054 /dev/rhdiskASM3
1 oracle@lp06 (RBAL) 8061054 /dev/rhdiskASM3
1 oracle@lp06 (TNS V1-V3) 12517500 /dev/rhdiskASM2
1 oracle@lp06 (TNS V1-V3) 7471334 /dev/rhdiskASM2
1 oracle@lp06 (TNS V1-V3) 12517500 /dev/rhdiskASM3
1 oracle@lp06 (TNS V1-V3) 7405796 /dev/rhdiskASM3
1 oracle@lp06 (TNS V1-V3) 7471334 /dev/rhdiskASM3

Even though the disk has clearly been removed from the ASM disk group:

SQL> select name, path, header_status from v$asm_disk where path like
'%ASM2';

NAME PATH HEADER_STATUS
-----------------------------------------------------------
/dev/rhdiskASM2 FORMER

root@lp06 /u01 # rmdev -dl hdiskASM2
Method error (/usr/lib/methods/ucfgdevice):
0514-062 Cannot perform the requested function because the
specified device is busy.

Preferred Mirror Read

As mentioned in the ASM Overview section of this paper, ASM supports three levels of
mirroring: external (no mirroring), normal (mirrored), and high (3 mirrors). Although
most AIX customers with disk storage subsystems mirror at the disk storage subsystem
layer (and therefore choose external redundancy), there are a few situations in which
using ASM mirroring should be considered.

First, when the disk storage subsystem does not offer RAID capabilities, ASM mirroring
should be used.

Second, in an extended-distance cluster implementation, multiple storage subsystems are
used in the cluster, but each storage subsystem is located near its host, with a distance in
between. In this configuration, each host should read from its nearest data copy.

Third, where two storage subsystems are used and one has substantially better
performance characteristics, while the lower performing subsystem is used to lower cost
or to provide advanced functions like copy services. This configuration is being seen
with increasing frequency with SSD or flash systems such as the IBM FlashSystem.

For the last two configurations, the instance should be configured to use a preferred read
failure group, so hosts read from the local or higher performance copy. Preferred Read
Failure groups are available in Oracle 11gR1 and beyond.

To configure a preferred read failure group, the diskgroup is created with normal or high
redundancy, and the asm_preferred_read_failure_groups parameter specifies the disk
group and preferred read failure group, for example:

SQL> alter system set asm_preferred_read_failure_groups='data.sitea','reco.sitea';

In this example, 'data' and 'reco' are the ASM diskgroups, and 'sitea' is the name of the
local failure group.
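
As an illustrative sketch (the disk names and the failure group names 'sitea' and 'siteb' are
hypothetical), such a disk group would be created with normal redundancy and one
failure group per site:

SQL> create diskgroup data normal redundancy
  2  failgroup sitea disk '/dev/rhdiskASM4', '/dev/rhdiskASM5'
  3  failgroup siteb disk '/dev/rhdiskASM6', '/dev/rhdiskASM7';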


Administration Implications of ASM Implementation

Storage Management Considerations

As ASM is built on raw devices, it is essential that systems administration processes and
procedures be able to work with a raw environment. This includes using RMAN for
backups, and the ability to change any scripts or processes involving use of filesystem
commands (such as utilization scripts using the ‘df’ command). Any procedures for
moving data between various hosts will need to include rebuilding the ASM environment
on the target host, and any file transfer will no longer be possible at the OS level
(for example, with a 'cp', 'mv' or 'rcp' command). Disk-based copy technologies such
as PPRC or SRDF can still be used with ASM, but the secondary copy must be connected
to a second node to be used; it cannot be used concurrently with the primary copy on the
primary server.

Moving files between locations, such as from one ASM diskgroup to another (required if
the level of redundancy needs to be changed, for example) requires using the
DBMS_FILE_TRANSFER utility or RMAN restore. In addition to copying ASM files
to other ASM files, DBMS_FILE_TRANSFER can also copy ASM files to OS files (and
vice versa) to migrate data to or from ASM.
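
As a hedged illustration of the RMAN approach (the datafile number and target disk
group are hypothetical), a single datafile can be moved to another disk group while the
rest of the database stays online:

RMAN> backup as copy datafile 4 format '+DATA2';
RMAN> sql 'alter database datafile 4 offline';
RMAN> switch datafile 4 to copy;
RMAN> recover datafile 4;
RMAN> sql 'alter database datafile 4 online';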

Any time a device is deleted using 'rmdev -dl <hdisk#>' and redetected using cfgmgr,
the device ownership will revert to the default, resulting in disks which will not be
usable by ASM. Raw device file ownership in /dev should be noted in case these
permissions need to be set again in the future.

Disk Group Considerations

Number of Disk Groups

Most Oracle documents refer to just two ASM disk groups, one for data and one for
recovery files like redo logs, archive logs, etc. Depending on the implementation, this
may be insufficient.

When Oracle grid infrastructure 11gR2 is used, by default the key clusterware files - the
Oracle Cluster Registry (OCR), Voting Disk, and spfile for the ASM instance - are all
stored in the +DATA diskgroup. Unfortunately, this means the +DATA diskgroup
cannot be offlined without shutting down the grid infrastructure stack on that node.


SAP installation guides recommend creating a separate disk group, +OCR, to store these
cluster specific files.7 The recommendation seems prudent for all RAC environments.

LUN sizes or types should also be consistent across the disk group, so a separate disk
group should be created for LUNs of a different type, for example +SSD for a disk group
comprised entirely of SSD disks.

LUN size

In 11gR2, LUN sizes can be as large as 2TB; however, 11.2.0.1 bugs caused LUN sizes
over 1.1TB to be recognized incorrectly. Although this did not preclude using LUNs up
to 2TB, many of the automatic tools were unusable and any operation referencing LUN
size (such as adding a disk to a disk group) had to be done manually. For this reason, it is
recommended to keep LUN sizes in 11.2.0.1 under 1.1TB for ease of use.

In general, a few larger LUNs are preferable to a larger number of smaller LUNs;
however, carrying this concept to extremes is not recommended (e.g., a single 2TB LUN
to support a 1TB database).

7 "SAP Databases on Oracle Automatic Storage Management 11g Release 2", www.oracle.com/us/solutions/sap/asm-configguidelines-304656.pdf


Tuning Options and Recommendations Specific to ASM Implementations

The best practices for AIX system setup in an ASM environment are virtually the same as when
using raw logical volumes. The settings and parameters discussed below are relevant
specifically to ASM environments.

AIX parameters

Asynchronous I/O

AIX can use two types of asynchronous I/O, kernelized and threaded. All filesystem
implementations use threaded asynchronous I/O, and require the configuration of the
threaded asynchronous I/O subsystem through ‘smit aio’, typically increasing the number of I/O
servers (maximum # of servers) and the size of the request queue (maximum # of requests).
ASM, however, like raw, uses the kernelized asynchronous I/O subsystem, which does not
require configuration, although in order to install the Oracle software, the ‘STATE to be configured
at system restart’ for asynchronous I/O threads must be set to ‘available’ from the ‘smit aio’ menu.

For Oracle to take advantage of asynchronous I/O, the spfile must include the parameters
disk_asynch_io=TRUE and filesystemio_options=ASYNCH.
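
A minimal sketch of setting these (both are static parameters, so they take effect at the
next instance restart):

SQL> alter system set disk_asynch_io=true scope=spfile;
SQL> alter system set filesystemio_options=asynch scope=spfile;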

Disk and Fibre Channel Maximum Transfer Sizes

The maximum transfer size set at the hdisk and fibre channel device level should be set equal to
the maximum IO size issued by the Oracle database (db_block_size *
db_multiblock_read_count). Typically this is a 1M IO size.

# chdev -l hdiskASM7 -a max_transfer=0x100000
# chdev -l fcs0 -a max_xfer_size=0x100000

Database Parameters

SGA and PGA size

Because ASM does not utilize the filesystem buffer cache, it may be necessary to increase the SGA
and/or PGA size following a move from a cached filesystem implementation to ASM in order to
match the performance of the filesystem environment. Utilizing the filesystem buffer cache to store
database blocks is generally less efficient than utilizing the SGA to do so; however, when the buffer
cache is used, it does increase the memory available for caching data blocks. That memory should
be reassigned to the database instance by increasing any of the following parameters as
appropriate:

db_cache_size - size of area specifically allocated for caching database blocks


sga_target - aggregate size allocated for SGA memory regions including db_cache_size,
shared_pool_size, log_buffer, etc. (Oracle 10g+)
sga_max_size - the maximum size of the SGA over the life of the instance until restarted
pga_aggregate_target - the target aggregate PGA memory available to all server processes
attached to the instance
memory_target - aggregate size allocated for SGA and PGA combined (11g+, not
recommended for general use)
memory_max_target - the maximum size of the SGA and PGA combined over the life of the
instance until restarted (11g+, not recommended for general use).

db_file_multiblock_read_count

When filesystems are used, filesystem read ahead mechanisms may proactively read additional
data blocks when a sequential access pattern is detected. On AIX, the J2_maxPageReadAhead
parameter dictates the maximum read ahead initiated by the filesystem. By default, this
parameter is 128 4K blocks, or 512K.
Without this filesystem read ahead, the largest physical I/O which will be issued by an
Oracle database is the product of the two parameters db_block_size *
db_file_multiblock_read_count. This makes the db_file_multiblock_read_count parameter critical
for issuing large read requests for sequential I/O in an ASM environment.
A typical value for an 8K block size database would be db_file_multiblock_read_count=128
(a 1M read). However, workloads should be monitored if this value is changed, as it is considered
by Oracle's Cost-Based Optimizer (CBO) and may result in an increased amount of multiblock I/O
(full table scans).
Some DBAs, rather than setting db_file_multiblock_read_count manually, will use the value
computed by running dbms_stats.gather_system_stats. That computed value should be checked in
the sys.aux_stats$ table and overridden by setting db_file_multiblock_read_count explicitly if it is
not reasonable.
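
As a quick check (the MBRC row is only populated after workload system statistics have
been gathered), the value computed by gather_system_stats can be queried as follows:

SQL> select pname, pval1 from sys.aux_stats$ where pname = 'MBRC';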

Oracle recommended database parameters to support ASM

The following parameters are recommended for databases using ASM in the document Oracle
Automatic Storage Management Administrator's Guide 11g Release 2

1. Increase Processes by 16
2. Increase Large_Pool by 600k
3. Increase Shared Pool by (1M per 100GB of usable space) + 2M (assumes external
redundancy is chosen)
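
As a rough worked example using these rules, a database with 2 TB (roughly 2000 GB) of
usable external-redundancy ASM storage would add 16 to processes, roughly 600K to the
large pool, and roughly (2000/100)*1M + 2M = 22M to the shared pool.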

Parameters to be included in the spfile of the ASM instances:

1. ASM_POWER_LIMIT=1
The Best Practices guide suggests this value to make ASM rebalance operations a low
priority; however, this can potentially cause rebalance operations to occur during peak
volume times. This parameter can also be set to 0 to prevent accidental rebalance
operations from occurring during peak loads, which may be preferable for many
environments. This parameter can be raised specifically when rebalancing is desired.

2. Processes=50 + 50*n, where “n” is the number of databases which will use ASM.

ASM disk group parameters

Sector Size
Starting with 11.2.0.3, a SECTOR_SIZE attribute can be set at the disk group level.
This size can be 512 bytes (default) or 4K, and can only be set when the disk group is
created. Solid state disk solutions may benefit from using the 4K sector size. Note
that ACFS does not support 4K sector drives.

Allocation Unit Size

Generally, smaller allocation unit sizes are used for OLTP workloads, with larger au_size
values set for sequential IO environments like data warehouse, backups, etc. A general
Oracle best practice is to use a 4M size for data disk groups.8

Compatibility

The compatible.rdbms parameter should be set to the highest version that still supports
all databases using the ASM instance, in order to make the most recent feature set
available.
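
A minimal sketch of checking and raising the attribute for an existing disk group (the
value shown is illustrative; compatibility attributes can be raised but never lowered):

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;
SQL> alter diskgroup data set attribute 'compatible.rdbms' = '11.2';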

8 Oracle Database Storage Administrator's Guide 11g Release 2 (11.2), http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm#CHDHIDEH


Reminders

Copyright 2013 IBM Corporation. All Rights Reserved.


Neither this documentation nor any part of it may be copied or reproduced in any form or by any
means or translated into another language, without the prior consent of the IBM Corporation.
The information in this paper is provided by IBM on an "AS IS" basis. IBM makes no warranties
or representations with respect to the content hereof and specifically disclaim any implied
warranties of merchantability or fitness for any particular purpose. IBM assumes no
responsibility for any errors that may appear in this document. The information contained in this
document is subject to change without any notice. IBM reserves the right to make any such
changes without obligation to notify any person of such revision or changes. IBM makes no
commitment to keep the information contained herein up to date.
Version 1.0, published December 18, 2013

Trademarks

IBM, AIX, and pSeries are trademarks or registered trademarks of the International Business
Machines Corporation.
Oracle, Oracle9i, Oracle10g, Oracle 11g, Oracle 12c are trademarks or registered trademarks of Oracle
Corporation.
EMC, PowerPath are trademarks or registered trademarks of EMC.
HDS, HDLM are trademarks or registered trademarks of Hitachi.
UNIX is a registered trademark in the United States and other countries exclusively through
X/Open Company Limited.
All other products or company names are used for identification purposes only, and may be
trademarks of their respective owners.

References

Oracle Automatic Storage Management Administrator's Guide 11g Release 2 (11.2)
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm

Oracle Database Release Notes 11g Release 2 (11.2) for AIX-Based Systems
http://docs.oracle.com/cd/E11882_01/relnotes.112/e23560/toc.htm

Grid Infrastructure Installation Guide 11g Release 2 (11.2) for IBM AIX on POWER Systems
http://docs.oracle.com/cd/E11882_01/install.112/e24614.pdf

EMC Networked Storage Topology Guide
https://elabnavigator.emc.com

Hitachi Dynamic Link Manager Software User Guide for AIX
http://www.hds.com/assets/pdf/hitachi-dynamic-link-manager-software-user-guide-for-aix-r.pdf

Connecting Violin to AIX and PowerVM
http://www.violin-memory.com/wp-content/uploads/host-attach-guide-ibm-aix.pdf?d=1

SAP Databases on Oracle Automatic Storage Management 11g Release 2
www.oracle.com/us/solutions/sap/asm-configguidelines-304656.pdf

Acknowledgements

Thanks to Damir Rubic for reviewing this paper.
