Dell Compellent Storage Center – Oracle Extended Distance Clusters

Dell | Compellent Technical Best Practices



Document revision
Date      Revision  Comments
3/9/2012  A         Draft

THIS TECHNICAL TIP IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL
ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR
IMPLIED WARRANTIES OF ANY KIND.

© 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without
the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and
trade names may be used in this document to refer to either the entities claiming the marks and names
or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than
its own.

Contents

Document revision
General Syntax
Conventions
Customer Support
Introduction
Scope
Audience
Tested Configuration
Cluster Overview
Oracle Clusterware and RAC Overview
Oracle RAC Extended Distance Cluster
Oracle Automatic Storage Management (ASM)
Oracle Managed Files (OMF)
ASM, RDBMS, and Cluster Compatibility
ASM and Initialization Parameters
ASM OCR and Voting Disks
ASM Diskgroups and Failgroups
ASM Disk and Failure Group Creation
ASM Preferred Read Failure Groups
ASM Disk Group Compatibility
Linux Device Mapper - Multipath
Storage Center Setup and Configuration
Putting it all Together
Conclusion

General Syntax

Table 1. Conventions
Item Convention
Menu items, dialog box titles, field names, keys Bold
Mouse click required Click
User Input Monospace Font
User typing required Type:
System response to commands Blue
Output omitted for brevity <…snipped…>
Website addresses http://www.dell.com
Email addresses name@dell.com

Conventions

Notes are used to convey special information or instructions.

Timesavers are tips specifically designed to save time or reduce the number of steps.

Caution indicates the potential for risk including system or data damage.

Warning indicates that failure to follow directions could result in bodily harm.

Customer Support

Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365
days a year. For additional support, email Compellent at support@compellent.com. Compellent
responds to emails during normal business hours.

Introduction
Disasters occur more frequently than one might think in computer environments. Whether they are
natural disasters such as hurricanes, earthquakes, tornadoes, fires, and floods, or man-made disasters,
protecting critical business data is one of the most important responsibilities a CIO has to consider.
Not only is protecting the data important, but the time it takes to recover the data also plays an
important role in business recovery. Without quick and efficient recovery of business data, revenue
is lost, putting the business in jeopardy.

Every business has data that must be protected, whether it resides in a database or in flat files.
This white paper describes general best practices for configuring Oracle Extended Distance Clusters
using 11gR2 Grid and 11gR2 RDBMS on Linux OEL5U5 (with Oracle kernel fixes and multipath) and Dell
Compellent Storage Center. The test environment used to demonstrate the content of this paper is
shown and described below.

Scope

Oracle Grid, RAC, ASM, and database architecture, installation, configuration, management, and
performance tuning are beyond the scope of this paper. Please visit www.oracle.com for more
in-depth information on these Oracle topics.

For comprehensive information regarding best practices with Oracle and Linux on Compellent Storage
Center, see documents “Oracle Best Practices on Compellent Storage Center” and “Dell Compellent
Linux Best Practices”.

Audience

This paper is intended for database administrators, system administrators, and storage administrators
who need to understand general best practices for Oracle Extended Distance Clusters using Dell
Compellent Storage Center. Readers should be familiar with Compellent Storage Center and have prior
experience in configuring and operating the following:

 Oracle RDBMS EE 11gR2
 Oracle Clusterware and Real Application Clusters (RAC) 11gR2
 Oracle Automatic Storage Management (ASM) 11gR2
 SAN technologies
 OEL5U5 x86-64 with Oracle kernel fixes (2.6.18-194.0.0.0.3.el5)
 Multipath (Linux mpath) with user-friendly names

Tested Configuration
This section lists the software installed and some basic configuration, as well as a diagram depicting
the architecture that was used to test Oracle Extended Distance Clusters on Compellent Storage Center.

Linux: OEL5U5 x86-64 with Oracle kernel fixes (2.6.18-194.0.0.0.3.el5), installed with multipath
(Linux mpath). Device Mapper used user-friendly names (see section Linux Device Mapper – Multipath).

Packages installed:

libaio-devel-0.3.106-5.i386.rpm
libaio-devel-0.3.106-5.x86_64.rpm
sysstat-7.0.2-3.el5.x86_64.rpm
numactl-devel-0.9.8-11.el5.x86_64.rpm
kernel-headers-2.6.18-194.0.0.0.3.el5.x86_64.rpm
kernel-devel-2.6.18-194.0.0.0.3.el5.x86_64.rpm
kernel-2.6.18-194.0.0.0.3.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasm-2.6.18-194.0.0.0.3.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
cvuqdisk-1.0.9-1.rpm

Oracle versions: 11.2.0.3 Grid, 11.2.0.3 RDBMS EE, 11.2.0.3 ASM

Oracle cluster configuration: Two node RAC cluster

ASM Configuration:

3 ASM diskgroups with normal redundancy: DATADG, FRADG, OCRDG

Two failgroups for each DATADG and FRADG

Three failgroups for OCRDG

Diskgroup compatibility set to 11.2.0.0.0 for both RDBMS and ASM on all diskgroups.

Network setup:

Table 2. Network Setup

VLAN ID or Separate Switches   Description             CRS Setting
1 or switch A                  Client Network - SCAN   Public
2 or switch B                  RAC Interconnect        Private

Storage Centers (SC):

SC11: DATADG (failgroup 1), FRADG (failgroup 1), OCRDG (failgroup 1)

SC14: DATADG (failgroup 2), FRADG (failgroup 2), OCRDG (failgroup 2)

SC6: OCRDG (failgroup 3), Oracle and Grid Homes for node 1 and node 2
Oracle and Grid Homes existed on a local ext3 fs mounted on /u01 on each node.
SC10: boot volumes for node 1 and node 2.

Table 3. Oracle Extended Distance Cluster

[Diagram] Node 1 (Site A) and Node 2 (Site B) are connected through public and private (interconnect)
IP networks and through Fibre Channel to three Storage Centers: SC11 at Site A, SC14 at Site B, and
SC6 at Site C. SC11 holds DATADG failgroup FG1 (DATA1SC11, DATA2SC11), FRADG failgroup FG1
(FRA1SC11, FRA2SC11), and OCR/voting disk OCRSC11. SC14 holds DATADG failgroup FG2 (DATA1SC14,
DATA2SC14), FRADG failgroup FG2 (FRA1SC14, FRA2SC14), and OCR/voting disk OCRSC14. SC6 holds the
third OCR/voting disk, OCRSC6. Primary and secondary extent copies are mirrored between Site A and
Site B.

Cluster Overview
A cluster consists of multiple interconnected servers that appear as one to applications and users.
In the context of business continuance, a highly available cluster whose nodes are geographically
dispersed is broadly referred to as a geocluster. When the nodes are not all local, different terms
are used to describe the cluster depending upon the distance spanned between sites:

Campus cluster Nodes located in facilities on a campus

Metro cluster Nodes located within a metro area from a few kilometers to 50 km

Regional cluster Nodes spanning hundreds of kilometers

Continental cluster Nodes spanning thousands of kilometers

In Oracle architectures, Campus and Metro clusters are candidates for deployment as an
Extended Distance Cluster, otherwise known as a Stretch Cluster.

Oracle Clusterware and RAC Overview


Oracle Clusterware is a high-availability infrastructure software solution that is integrated with
Oracle Real Application Clusters (RAC) and allows an Oracle database to be clustered. Oracle RAC
allows a database to run an application across the cluster, providing higher levels of application
availability. If any server in the cluster fails, the application continues to function normally.
RAC also provides scalability within the environment: if additional servers are needed to help
distribute the workload in the RAC cluster, they can simply be added without generating an outage
to the users.

RAC also plays an instrumental role in deploying Oracle's grid computing architecture. With multiple
instances connected to a single database, the single point of failure at the node level is eliminated.

Oracle RAC Extended Distance Cluster


If latency issues are realized within a cluster, a specific RAC configuration called an Extended
Distance Cluster can aid in reducing the latency. For more information regarding this inherent
latency issue, see the section ‘ASM Preferred Read Failure Groups’.

Because Extended Distance Clusters can help in reducing the latency, there is a growing
interest in implementing solutions around it. But it should be noted that it’s not a panacea
for all situations. One needs to consider the impact of distance, complexity, and cost of
implementation before choosing this architecture. According to Oracle Corporation,
“distances of 50 km or less are recommended. Testing has shown the distance (greatest cable
stretch) between Oracle RAC cluster nodes generally affects the configuration, as follows:

■ Distances less than 10 km can be deployed using normal network cables.


■ Distances equal to or more than 10 km require DWDM links.” i

For Regional and Continental Clusters, there’s not yet enough empirical data to indicate the effect of
Extended Distance Clusters in these environments. Additional testing is required to identify what
workloads can be supported and the effect the distance has on performance.ii

Even though Extended Distance Clusters provide greater high availability than standard RAC by
maintaining a copy of the database at each site, they may not meet the disaster recovery and business
continuance requirements of your organization. It is imperative that a full analysis be performed to
determine whether Extended Distance Clusters provide adequate protection from a disaster that affects
multiple sites. If it is determined that protection against corruption and regional disasters is
necessary, Oracle recommends a Maximum Availability Architecture (MAA), which includes the
integration of Oracle Data Guard with Oracle RAC.iii

Oracle Extended Distance Clusters require enough disk at each site so that each site can contain a
copy of the database. This database copy is maintained in one of two ways: 1) host-based mirroring
using a logical volume manager (e.g., ASM), or 2) array-based mirroring.

Oracle Automatic Storage Management (ASM)


Oracle ASM is a volume manager and a file system for Oracle databases. It supports both single-
instance databases and RAC configurations. ASM is integrated into Oracle Grid Infrastructure,
providing one software bundle for Oracle Clusterware and Oracle Database, and is Oracle's
recommended storage management solution for RAC configurations.

ASM uses disk groups to store data files. Disk groups and data files are similar to volume groups and
logical volumes, respectively, in other LVMs. The content of ASM files is evenly distributed across
disk extents to eliminate hot spots and to provide uniform performance across the disks. The
distribution occurs across the storage pool based on capacity, not the number of spindles.

ASM provides two options for server-based mirroring, normal and high redundancy, and an option for
array-based mirroring, external redundancy. Normal and high redundancy instruct ASM to provide two-
and three-way mirroring, respectively. External redundancy performs no ASM mirroring and should be
used when the storage array's RAID performs the mirroring.

When ASM is used in a multipath configuration, ASM must be configured to discover only the
multipath device; otherwise ASM will generate an error when it discovers multiple device paths. You
can ensure ASM discovers the multipath devices by setting ASM's initialization (init.ora) parameter
ASM_DISKSTRING to the name of the pseudo device (e.g., ASM_DISKSTRING=/dev/oracleasm/disks/*).
Also, when ASMLIB is used with ASM on Linux, multipath disks can be discovered by configuring ASMLIB
to scan device mapper devices first (ORACLEASM_SCANORDER="dm") and to exclude the single-path
disks (ORACLEASM_SCANEXCLUDE="sd"). This configuration is defined in /etc/sysconfig/oracleasm:

ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
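Once ASMLIB is configured, disk discovery can be verified from the ASM instance. The following query is a sketch of one way to confirm that only the expected pseudo devices were discovered; it uses the standard V$ASM_DISK view:

```sql
-- List the devices ASM discovered and their status.
-- ASMLIB disks that are not yet in a disk group show PROVISIONED;
-- disks already in a disk group show MEMBER.
select path, header_status, state
from   v$asm_disk
order  by path;
```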

Oracle Managed Files (OMF)


OMF is an Oracle feature that simplifies database file management by automatically creating and
naming files in designated locations. OMF also relinquishes the space when it removes files or
tablespaces. It is recommended to use OMF.
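With ASM, OMF is enabled by pointing the file destination parameters at the disk groups. As a sketch, assuming the disk group names used in this paper (DATADG and FRADG) and an illustrative recovery area size, the database initialization parameters might be set as follows:

```sql
-- Direct new datafiles to the DATA disk group ...
alter system set db_create_file_dest = '+DATADG' scope=both sid='*';

-- ... and the fast recovery area to the FRA disk group.
-- The size limit must be set before the destination; 100G is only an example.
alter system set db_recovery_file_dest_size = 100G scope=both sid='*';
alter system set db_recovery_file_dest = '+FRADG' scope=both sid='*';
```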

ASM, RDBMS, and Cluster Compatibility


Both forward and backward compatibility between ASM in Oracle 11g and Oracle RDBMS 10g and 11g
are supported, but in general the database instance supports the ASM functionality of the earliest
release in use. Compatibility between ASM and Oracle Clusterware also exists, but the Oracle
Clusterware release must be at or greater than ASM's release.

When the RDBMS and ASM are not at the same release, additional disk group compatibility
considerations need to be reviewed. Please see Oracle's documentation for more information on
compatibility.

ASM and Initialization Parameters


To initially define the initialization parameters of ASM, Oracle recommends that ASM is installed
and configured using Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA).
Once ASM has been installed, final configuration can be performed; in most cases the default
settings are sufficient. The only required parameter for an ASM instance is INSTANCE_TYPE, and it is
set by DBCA. ASM-specific initialization parameters are prefixed with ASM_ and cannot be used in a
database instance, but some database initialization parameters can be used with ASM.

Another parameter that is set by default is automatic memory management's MEMORY_TARGET
parameter, and Oracle strongly recommends that it is set for ASM. In most installations, the default
value of 256 MB should be sufficient. You can increase MEMORY_TARGET up to MEMORY_MAX_TARGET,
but disabling it is not recommended. When using automatic memory management on Linux, /dev/shm
must be available and properly sized; if it is not, AMM will not work.

Other useful ASM parameters are listed below:

INSTANCE_TYPE

This parameter must be set to ASM, and is the only required ASM parameter.

*.instance_type='asm'

ASM_POWER_LIMIT

This parameter controls the resources used to rebalance a disk group. The default value is 1,
with an available range of 0 to 11 inclusive. A value of 0 disables rebalancing, and the highest
value of 11 instructs ASM to spend as many available resources as possible on the rebalance, which
could result in higher I/O overhead.

*.asm_power_limit=1

ASM_DISKGROUPS

This parameter specifies the list of names of disk groups that should be mounted at startup.
The default value is NULL. The parameter is dynamic: as disk groups are added, dropped, mounted,
or dismounted, ASM automatically adds or removes the disk group names from the parameter.
When using a PFILE, you must edit the pfile to add or remove the disk groups.

+ASM1.asm_diskgroups='DG1','FRA1'
+ASM2.asm_diskgroups='DG1','FRA1'

ASM_DISKSTRING

This parameter specifies a list of strings, called discovery strings, which control or limit the
sets of disks that an ASM instance discovers. The strings support pattern matching and
wildcard characters (*, ?). The supported format for discovery strings is dependent upon the
ASM library and operating system in use.

*.asm_diskstring='/dev/oracleasm/disks/*'

ASM_PREFERRED_READ_FAILURE_GROUPS

This parameter specifies a list of strings that define the failure groups that should be
preferentially read by a given instance. The parameter needs to be set in Extended Distance
Clusters, and its value differs on each node of the cluster.

+ASM1.asm_preferred_read_failure_groups='DG1.FG1','FRA1.FG1'
+ASM2.asm_preferred_read_failure_groups='DG1.FG2','FRA1.FG2'

The above definition instructs instance ASM1 to read files in diskgroups DG1 and FRA1 from
failgroup FG1, and instance ASM2 to read files in diskgroups DG1 and FRA1 from failgroup FG2.

ASM OCR and Voting Disks


Oracle Clusterware requires an odd number of OCR and voting disks. Each disk must reside in its own
ASM diskgroup and must have a redundant failgroup. The number of diskgroups and failgroups, and the
placement of the OCR and voting disks, will be determined by your business continuance
requirements.
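In the tested configuration, the OCR/voting disk group was created by the Grid installer. An equivalent statement, sketched here using the disk names from the cluster diagram, would resemble the following; normal redundancy with three failgroups places one voting file in each failgroup:

```sql
-- Sketch only: the Grid installer normally creates this disk group.
create diskgroup ocrdg normal redundancy
  failgroup fg1 disk '/dev/oracleasm/disks/OCRSC11'   -- Site A (SC11)
  failgroup fg2 disk '/dev/oracleasm/disks/OCRSC14'   -- Site B (SC14)
  failgroup fg3 disk '/dev/oracleasm/disks/OCRSC6';   -- Site C (SC6)
```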

ASM Diskgroups and Failgroups


Generally you will need only two disk groups: one to hold database-related files (DATA), and the
other to hold the fast recovery area and multiplexed copies of controlfiles and redo logs (FRA).
A minimum of four dedicated LUNs per disk group is recommended by Oracle, but this could change
depending upon your configuration and business continuance requirements.

It is recommended that LUNs within a disk group be defined with the same performance
characteristics (e.g., FC 15K rpm or 10K rpm) and size. The performance of the group will be defined
by its slowest-performing LUN member. When using LUNs of different performance capabilities and
sizes, it is best practice to group them according to similar characteristics. When creating volumes
for your databases, it is recommended that you initially create the volumes larger than needed to
allow for future database growth. Since Compellent Storage Center has the ability to dynamically
provision storage, disk space is not consumed until data is actually written. This way you can
create your tablespaces with the AUTOEXTEND parameter without worrying about running out of disk
space in that volume.
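As an illustration, a tablespace on a thin-provisioned Compellent volume can be created with AUTOEXTEND so that it consumes space only as data is written. The tablespace name and sizes below are hypothetical; the '+DATADG' disk group is from the tested configuration:

```sql
-- Illustrative tablespace: starts at 1G and grows in 256M increments.
create tablespace app_data
  datafile '+DATADG' size 1g
  autoextend on next 256m maxsize unlimited;
```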

As stated earlier, ASM distributes data across the storage pool based on LUN capacity, not the
number of spindles. Therefore, if LUNs are not sized equally, performance will be sub-optimal
because the larger LUNs will receive more I/O requests than the smaller LUNs.

Disk groups and their level of redundancy are created with the SQL command CREATE DISKGROUP.
Redundancy within the diskgroup defines how many disk failures can be tolerated before the disk
group is dismounted. There are several options for mirroring, which are defined with the
appropriate command options:

NORMAL REDUNDANCY    Provides two-way mirroring. Requires at least two failure
                     groups. The loss of one failure group is tolerated.

HIGH REDUNDANCY      Provides three-way mirroring. Requires at least three failure
                     groups. The loss of two failure groups is tolerated.

EXTERNAL REDUNDANCY  No mirroring performed by ASM. Disks are protected using
                     RAID. Failure groups are not used.

Generally speaking, ASM mirroring (NORMAL or HIGH REDUNDANCY) is more flexible than traditional
RAID mirroring (EXTERNAL REDUNDANCY), as files in a NORMAL or HIGH REDUNDANCY configuration can
reside in the same disk group with some files mirrored while others are not. ASM mirroring should
be used when hardware RAID is absent, or when mirroring is needed across storage systems, as is
the case with Extended Distance Clusters. External redundancy should be used when possible to
avoid the overhead of ASM mirroring. However, the mirroring option chosen should be based upon
your business continuance requirements.

Failure groups are a subset of disks within a disk group and are used to mirror copies of extents.
Each extent copy is placed on a disk in a different failure group, and each disk in a disk group can
belong to only one failure group. When normal redundancy files are configured and ASM allocates an
extent, ASM allocates two copies of the extent: a primary and a secondary copy. The secondary extent
copy is allocated on a disk that resides in a different failure group than the failure group holding
the primary extent. If all the disks in a failure group fail, no data loss is realized because a
copy of the extents in those files exists in a different failure group. Failure groups should be
created with equal size to avoid uneven distribution of data and space imbalance.
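Failure group placement and the space that remains usable after a failure can be checked from the ASM instance. The queries below are a sketch using the standard V$ASM_DISK and V$ASM_DISKGROUP views:

```sql
-- Disks grouped by failure group within each disk group.
select dg.name as diskgroup, d.failgroup, d.path
from   v$asm_disk d
join   v$asm_diskgroup dg on d.group_number = dg.group_number
order  by dg.name, d.failgroup;

-- Redundancy type and space still usable after losing one failure group.
select name, type, total_mb, free_mb, usable_file_mb
from   v$asm_diskgroup;
```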

Oracle recommends offloading ASM mirroring, which runs on the database server, to the storage array
RAID controller. With a few exceptions, Oracle considers ASM mirroring the alternative to third-party
LVMs. Oracle lists the following as general considerations for selecting normal or high redundancy
over external redundancy:

 The storage system does not have a RAID controller

 Mirroring across storage arrays

 Extended cluster configurations

ASM Disk and Failure Group Creation


When creating disk groups for Extended Distance Clusters with normal redundancy, make sure that all
ASM disks local to each site reside in their own failgroup. Using the cluster diagram, the diskgroups
were created on node 1 with the following:

create diskgroup datadg normal redundancy
  failgroup fg1 disk '/dev/oracleasm/disks/DATA1SC11'   -- disks local to node 1
              , '/dev/oracleasm/disks/DATA2SC11'
  failgroup fg2 disk '/dev/oracleasm/disks/DATA1SC14'   -- disks local to node 2
              , '/dev/oracleasm/disks/DATA2SC14';

create diskgroup fradg normal redundancy
  failgroup fg1 disk '/dev/oracleasm/disks/FRA1SC11'    -- disks local to node 1
              , '/dev/oracleasm/disks/FRA2SC11'
  failgroup fg2 disk '/dev/oracleasm/disks/FRA1SC14'    -- disks local to node 2
              , '/dev/oracleasm/disks/FRA2SC14';

On node 2, diskgroups DATADG and FRADG need to be mounted after they are created on node 1.
Diskgroup OCRDG does not need to be mounted, as it was mounted by the Grid installer.

ALTER DISKGROUP DATADG MOUNT;
ALTER DISKGROUP FRADG MOUNT;

ASM Preferred Read Failure Groups


Prior to 11gR1, there was always an I/O latency issue in a RAC environment because Oracle always
read the primary copy of a mirrored extent regardless of which node initiated the read. Extended
clusters did not fare as well as local clusters, because needless network traffic was generated by
sending primary extents to the requesting node.

In geoclusters, it is more efficient for the requesting node to read the extent that is closest or
local, even if that extent is a secondary extent. 11gR1 introduced the ability to allow a node in a
cluster to read from local failure groups rather than remote ones. However, one must configure this
capability by defining an extended cluster with ASM preferred read failure groups.

Each ASM instance running on each node of the cluster must have its own preferred read
configuration, defined with the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter. The
following instance-specific parameters were defined during our testing. They define preferred read
disks in failure groups in both the DATADG and FRADG diskgroups. The failure groups must only
contain disks that are local or nearest to the instance to realize improved latency. With normal
redundancy, there should be only one failure group at each site of the extended cluster.

Instance +ASM1:

alter system set


asm_preferred_read_failure_groups='DATADG.FG1','FRADG.FG1'
scope=both
sid='+ASM1';

Instance +ASM2:

alter system set


asm_preferred_read_failure_groups='DATADG.FG2','FRADG.FG2'
scope=both
sid='+ASM2';
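Whether the setting took effect can be verified on each ASM instance; in 11g, V$ASM_DISK exposes a PREFERRED_READ flag ('Y' or 'N') per disk. A sketch of such a check:

```sql
-- Run on each ASM instance; disks in the local failgroup should show 'Y'.
select dg.name as diskgroup, d.failgroup, d.path, d.preferred_read
from   v$asm_disk d
join   v$asm_diskgroup dg on d.group_number = dg.group_number
order  by dg.name, d.failgroup;
```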

ASM Disk Group Compatibility


Disk group compatibility settings between ASM and the RDBMS must be set appropriately to take
advantage of 11g features, e.g., to simplify the management of ASM disk and failure groups after the
loss and recovery of a SAN, and to format the correct data structures for ASM metadata. The
ATTRIBUTE clause of the ALTER DISKGROUP command sets the compatibility and defines the
minimum Oracle version that a cluster can use for ASM and the RDBMS. The recommended best
practice is to set attributes compatible.asm and compatible.rdbms to 11.2.0.0.0.

DATADG:

alter diskgroup datadg set attribute 'compatible.asm' = '11.2.0.0.0';


alter diskgroup datadg set attribute 'compatible.rdbms' = '11.2.0.0.0';

FRADG:

alter diskgroup fradg set attribute 'compatible.asm' = '11.2.0.0.0';


alter diskgroup fradg set attribute 'compatible.rdbms' = '11.2.0.0.0';

OCRDG:

alter diskgroup ocrdg set attribute 'compatible.asm' = '11.2.0.0.0';
alter diskgroup ocrdg set attribute 'compatible.rdbms' = '11.2.0.0.0';
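The attribute settings can be confirmed for all disk groups at once; in V$ASM_DISKGROUP, the COMPATIBILITY and DATABASE_COMPATIBILITY columns report compatible.asm and compatible.rdbms respectively:

```sql
select name, compatibility, database_compatibility
from   v$asm_diskgroup;
```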



Linux Device Mapper - Multipath


Special consideration needs to be given to Device Mapper so that when a SAN failure occurs,
Device Mapper takes immediate action to fail over to an available path. If this consideration is
ignored, RAC could hang when a SAN outage occurs. The following devices stanza in multipath.conf
was used to accomplish this failover in our tests.

devices {
device {
vendor "COMPELNT"
product "Compellent Vol"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_selector "round-robin 0"
path_checker tur
features "0"
hardware_handler "0"
failback immediate
rr_weight uniform
no_path_retry fail
rr_min_io 1000
}
}

All multipath devices on both nodes used user-friendly names and were identified by their WWID.
Here’s an example of the multipath devices defined on one of the nodes in the test environment:

multipaths {
multipath {
wwid "36000d3100003d00000000000000022f8"
alias orahome
}
multipath {
wwid "36000d3100003d00000000000000022f9"
alias ocrsc6
}
multipath {
wwid "36000d3100000690000000000000010d8"
alias ocrsc11
}
multipath {
wwid "36000d3100002bf000000000000000266"
alias ocrsc14
}
multipath {
wwid "36000d3100000690000000000000010d9"
alias data1sc11
}
multipath {
wwid "36000d3100000690000000000000010da"
alias data2sc11
}
multipath {
wwid "36000d3100000690000000000000010db"
alias fra1sc11
}
multipath {
wwid "36000d3100000690000000000000010dc"
alias fra2sc11
}
multipath {
wwid "36000d3100002bf000000000000000262"
alias data1sc14
}
multipath {
wwid "36000d3100002bf000000000000000263"
alias data2sc14
}
multipath {
wwid "36000d3100002bf000000000000000264"
alias fra1sc14
}
multipath {
wwid "36000d3100002bf000000000000000265"
alias fra2sc14
}
}
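For the aliases above to be honored, user-friendly names must also be enabled in multipath.conf. A minimal defaults stanza, sketched here as commonly used on EL5, would be:

```
defaults {
    user_friendly_names yes
}
```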

Storage Center Setup and Configuration

When adding the servers for the Extended Distance Cluster in Storage Center, it is recommended that
the servers are defined as part of a cluster server group. Server clusters reduce the amount of work
necessary to maintain the mapping between volumes and the servers in the server cluster. When a
volume is mapped to a server cluster, Storage Center automatically maps the volume to all the
servers within the cluster server group. Within the server cluster, a volume can be mapped to the
same LUN on all servers, making it easier to identify the same volume on every server in the
cluster.

Putting it all Together


When considering the implementation of Oracle Extended Distance Clusters with Storage Center, the
following points need to be considered:

 Perform a full analysis of the business continuance requirements to determine if extended distance
clusters provide the necessary high availability requirements.

 Before installing the necessary software for your Oracle Extended Distance Cluster, take the
following into consideration when deciding what should be installed and configured:

 Linux Operating System version

 Oracle Clusterware, RAC, ASM, RDBMS versions. It’s recommended to use 11.2.0.3 for all
Oracle components to simplify the management of the cluster.

 Required RPMs (kernel, asm, cvu, miscellaneous)

 Number of nodes in the Extended Distance Cluster

 Oracle disk group compatibility. If using 11.2.0.3 for RDBMS, ASM, Grid, then set
compatibility to 11.2.0.0.0 for both RDBMS and ASM when defining ASM disk groups.

 When creating volumes for your Oracle database server, you do not have to use any software
striping at the operating system level. The data in a Compellent volume is automatically striped
depending on which RAID level you have selected for the volume. However, if Oracle ASM is not
configured, you should create multiple LUNs for larger database workloads and use an operating
system striping mechanism (e.g., LVM, VxVM) to get better performance from multiple disk queues
at the OS level.

• When provisioning storage for your Oracle Extended Distance cluster, consider the following before deciding how to set up and configure an Extended Distance cluster on Compellent Storage Center:

o Number of Storage Centers:

Storage Center A: supports the OCR disk and the database image for RAC node A
Storage Center B: supports the OCR disk and the database image for RAC node B
Storage Center C: supports an OCR disk. An NFS file can be used as the third OCR file; see the Oracle documentation for more information.
Storage Center D: supports the boot volume and the Grid and Oracle homes

o Use the same disk size and performance characteristics (e.g., 15K rpm or 10K rpm FC drives) for the LUNs within a disk group.

o Size of the Oracle database. Each node in the cluster must have enough disk space to support a local copy of the database.
o ASM redundancy level: Normal or High

o Number of OCR disk groups. There must be an odd number, with a minimum of three.
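On a running 11.2 cluster, the resulting OCR and voting file configuration can be verified with the standard Oracle Clusterware utilities. An illustrative sketch (run from the Grid Infrastructure home, as root or the Grid user as appropriate; output will vary by environment):

```
# Show the OCR locations and check their integrity
ocrcheck

# List the voting files and the disks backing them
crsctl query css votedisk
```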

o Number of disk groups. Use at least two, one each for DATA and FRA.

o Number of failure groups for each disk group:

Normal redundancy: use at least two failure groups for each of the DATA and FRA disk groups.
High redundancy: use at least three failure groups for each of the DATA and FRA disk groups.

o Number of LUNs per disk group and failure group. Use a minimum of four dedicated LUNs per disk group.

o Number and placement of OCR disks. Each resides in its own disk group and must have its own failure group.

o File system type, placement, and size for the Oracle homes. It is recommended to use non-shared homes to support Oracle rolling upgrades.

o Database usage (heavy, medium, light)

o ARCHIVELOG mode or NOARCHIVELOG mode

• For a more complex database, you might have to create multiple volumes dedicated to your data files, for example when you separate your data tablespaces from your index tablespaces.
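As an illustrative sketch of that separation (the disk group names DATADG and INDEXDG are hypothetical, with each backed by its own set of Storage Center volumes), the tablespaces can be placed on dedicated disk groups:

```sql
-- Data tablespace on the disk group backed by the data volumes
create tablespace app_data  datafile '+DATADG'  size 10g;

-- Index tablespace on a separate disk group backed by its own volumes
create tablespace app_index datafile '+INDEXDG' size 10g;
```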

• Configure ASM to discover only multipath devices by setting the ASM initialization pfile parameter:
ASM_DISKSTRING=/dev/oracleasm/disks/*

• When using ASMLIB, configure oracleasm to scan device-mapper (dm) devices and exclude sd devices. Set the following in /etc/sysconfig/oracleasm:

ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"

• The default values of the ASM initialization parameters are generally sufficient. Oracle strongly recommends that MEMORY_TARGET remain set to at least 256 MB. Set the ASM instance's ASM_POWER_LIMIT to the appropriate level, as dictated by available system resources and business requirements. ASM_DISKSTRING should also be set to the location where the ASM disk devices reside.
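A minimal sketch of setting these parameters, assuming the ASM instances run from an spfile (the values shown are illustrative only and should be sized to your environment):

```sql
-- Rebalance power: higher values rebalance faster but consume more I/O
alter system set asm_power_limit = 4 scope=both sid='*';

-- Restrict discovery to the ASMLIB-managed multipath devices
alter system set asm_diskstring = '/dev/oracleasm/disks/*' scope=spfile sid='*';
```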

• Define the necessary preferred read failure groups:

+ASM1.asm_preferred_read_failure_groups='DATADG.FG1','FRADG.FG1'
+ASM2.asm_preferred_read_failure_groups='DATADG.FG2','FRADG.FG2'
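Equivalently, when the ASM instances run from an spfile, the preferred read failure groups can be set with ALTER SYSTEM (the DATADG/FRADG disk group and FG1/FG2 failure group names correspond to the disk group creation examples in this section):

```sql
alter system set asm_preferred_read_failure_groups = 'DATADG.FG1','FRADG.FG1'
  scope=both sid='+ASM1';
alter system set asm_preferred_read_failure_groups = 'DATADG.FG2','FRADG.FG2'
  scope=both sid='+ASM2';
```

This directs each ASM instance to satisfy reads from its local Storage Center first, avoiding unnecessary reads across the inter-site link.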

• Define the necessary disk groups and failure groups:

create diskgroup datadg normal redundancy
  failgroup fg1 disk
    '/dev/oracleasm/disks/DATA1SC11',
    '/dev/oracleasm/disks/DATA2SC11'   -- local disks for node 1
  failgroup fg2 disk
    '/dev/oracleasm/disks/DATA1SC14',
    '/dev/oracleasm/disks/DATA2SC14';  -- local disks for node 2

create diskgroup fradg normal redundancy
  failgroup fg1 disk
    '/dev/oracleasm/disks/FRA1SC11',
    '/dev/oracleasm/disks/FRA2SC11'    -- local disks for node 1
  failgroup fg2 disk
    '/dev/oracleasm/disks/FRA1SC14',
    '/dev/oracleasm/disks/FRA2SC14';   -- local disks for node 2
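After the disk groups are created, the disk-to-failure-group assignments can be checked from either ASM instance:

```sql
-- List each disk with its disk group and failure group membership
select dg.name as diskgroup, d.failgroup, d.path
from   v$asm_diskgroup dg
join   v$asm_disk d on d.group_number = dg.group_number
order  by dg.name, d.failgroup;
```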

• Define the necessary device stanza in multipath.conf so that multipath performs immediate failover in the event of a Storage Center failover.
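A sketch of such a stanza follows, assuming the defaults shipped with your device-mapper-multipath version do not already cover Compellent devices; the vendor/product strings and attribute values shown should be validated against your distribution's multipath documentation before use:

```
devices {
    device {
        vendor                "COMPELNT"
        product               "Compellent Vol"
        path_grouping_policy  multibus
        path_checker          tur
        failback              immediate
        no_path_retry         fail      # fail I/O immediately rather than queueing
    }
}
```

Failing I/O immediately (rather than queueing) lets ASM detect the path loss at once and redirect reads and writes to the surviving failure group.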

• When creating volumes for your databases, it is recommended that you initially create the volumes (data files, archived redo logs, flash recovery area) larger than needed, to allow for future database growth. Because Compellent Storage Center dynamically provisions storage, disk space is not consumed until data is actually written. This way you can create your tablespaces with the AUTOEXTEND parameter and not worry about running out of disk space in the volume.
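A minimal sketch of such a tablespace (the disk group name and sizes are illustrative):

```sql
-- Starts small and grows within the thinly provisioned volume as needed
create tablespace app_ts datafile '+DATADG' size 10g
  autoextend on next 1g maxsize 32767m;
```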

Conclusion
When Compellent SAN technologies are combined with Oracle 11gR2 Grid Infrastructure and the 11gR2 RDBMS, RAC stretch clusters become a very attractive storage solution for Oracle 11g. Running the Oracle RDBMS (single instance or RAC) with Compellent Storage Center provides availability, scalability, manageability, and performance for your database applications. Compellent also simplifies storage management and provides a robust and reliable business continuity solution for your business continuance needs. Configuring the storage solution is easily accomplished with just a few mouse clicks.

References
i. Oracle Corporation, Oracle Database High Availability Best Practices, 11g Release 1 (11.1), B28282-02 (December 2009), 2-34.
http://docs.oracle.com/cd/B28359_01/server.111/b28282.pdf

ii. Oracle Corporation, Oracle Database High Availability Best Practices, 11g Release 1 (11.1), B28282-02 (December 2009), 2-34.
http://docs.oracle.com/cd/B28359_01/server.111/b28282.pdf

iii. Oracle Corporation, Oracle Database High Availability Overview, 11g Release 2 (11.2), E17157-04 (July 2010).
http://docs.oracle.com/cd/E18283_01/server.112/e17157.pdf

Oracle Corporation, Oracle Maximum Availability Architecture (MAA) (March 15, 2012).
http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm

Oracle Corporation, Oracle Database Storage Administrator's Guide, 11g Release 2 (11.2).
http://docs.oracle.com/cd/E16338_01/server.112/e10500.pdf
