Document Revision
Date        Revision    Comments
3/9/2012    A           Draft
THIS TECHNICAL TIP IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL
ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR
IMPLIED WARRANTIES OF ANY KIND.
© 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without
the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and
trade names may be used in this document to refer to either the entities claiming the marks and names
or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than
its own.
Dell Compellent Storage Center – Oracle Extended Distance Clusters
General Syntax
Table 1. Conventions
Item Convention
Menu items, dialog box titles, field names, keys Bold
Mouse click required Click
User Input Monospace Font
User typing required Type:
System response to commands Blue
Output omitted for brevity <…snipped…>
Website addresses http://www.dell.com
Email addresses name@dell.com
Conventions
Timesavers are tips specifically designed to save time or reduce the number of steps.
Caution indicates the potential for risk including system or data damage.
Warning indicates that failure to follow directions could result in bodily harm.
Customer Support
Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365
days a year. For additional support, email Compellent at support@compellent.com. Compellent
responds to emails during normal business hours.
Introduction
Disasters occur in computer environments more frequently than one might think. Whether they are
natural disasters such as hurricanes, earthquakes, tornadoes, fires, and floods, or man-made disasters,
protecting critical business data is the most important aspect of a computer environment a CIO has to
consider. Not only is protecting the data important, but the time it takes to recover the data also
plays an important role in business recovery. Without quick and efficient recovery of the business
data, revenue is lost, putting the business in jeopardy.
Every business has some type of data, whether it resides in a database or in flat files, and that data
needs to be protected. This white paper describes general best practices for configuring Oracle Extended
Distance Clusters using 11gR2 Grid and 11gR2 RDBMS on Linux OEL5U5 (with Oracle kernel fixes and
multipath) and Dell Compellent Storage Center. The test environment used to demonstrate the
content of this paper is shown and described below.
Scope
Oracle Grid, RAC, ASM, and database architecture, installation, configuration, and management,
including performance tuning, are beyond the scope of this paper. Please visit www.oracle.com for
more in-depth information on these Oracle topics.
For comprehensive information regarding best practices with Oracle and Linux on Compellent Storage
Center, see documents “Oracle Best Practices on Compellent Storage Center” and “Dell Compellent
Linux Best Practices”.
Audience
This paper is intended for database administrators, system administrators, and storage administrators
who need to understand general best practices for Oracle Extended Distance Clusters using Dell
Compellent Storage Center. Readers should be familiar with Compellent Storage Center and have prior
experience in configuring and operating the following:
Tested Configuration
This section lists the software installed and some basic configuration, and includes a diagram depicting
the architecture used to test Oracle Extended Distance Clusters on Compellent's Storage Center.
libaio-devel-0.3.106-5.i386.rpm
libaio-devel-0.3.106-5.x86_64.rpm
sysstat-7.0.2-3.el5.x86_64.rpm
numactl-devel-0.9.8-11.el5.x86_64.rpm
kernel-headers-2.6.18-194.0.0.0.3.el5.x86_64.rpm
kernel-devel-2.6.18-194.0.0.0.3.el5.x86_64.rpm
kernel-2.6.18-194.0.0.0.3.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasm-2.6.18-194.0.0.0.3.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
cvuqdisk-1.0.9-1.rpm
ASM Configuration:
Diskgroup compatibility set to 11.2.0.0.0 for both RDBMS and ASM on all diskgroups.
Network setup:
SC6: OCRDG (failgroup 3), Oracle and Grid Homes for node 1 and node 2
Oracle and Grid Homes existed on a local ext3 fs mounted on /u01 on each node.
[Figure: Test architecture. Node 1 (Site A) and Node 2 (Site B) are connected by public and private
(interconnect) IP networks, and by Fibre Channel switches to three Compellent Storage Centers:
SC11, SC6, and SC14, with SC6 at Site C. Diskgroup DATADG spans failgroup FG1 (disks DATA1SC11
and DATA2SC11 on SC11) and failgroup FG2 (disks DATA1SC14 and DATA2SC14 on SC14). Diskgroup
FRADG spans failgroup FG1 (disks FRA1SC11 and FRA2SC11 on SC11) and failgroup FG2 (disks
FRA1SC14 and FRA2SC14 on SC14).]
Cluster Overview
A cluster consists of multiple servers that are interconnected and appear as one to applications and
users. In the context of business continuance, a highly available cluster whose nodes are spread across
separate locations is broadly referred to as a geocluster. When the nodes are not all local, different
terms are used to describe the cluster, depending upon the spanning distance between sites:
Metro cluster: nodes located within a metropolitan area, from a few kilometers up to 50 km
In Oracle architectures, Campus and Metro clusters are candidates for deployment as an
Extended Distance Cluster, otherwise known as a Stretch Cluster.
RAC also plays an instrumental role in deploying Oracle's grid computing architecture. With multiple
instances connected to a single database, the single point of failure at the node level is eliminated.
Because Extended Distance Clusters can help in reducing latency, there is growing
interest in implementing solutions around them. But it should be noted that they are not a panacea
for all situations. One needs to consider the impact of distance, complexity, and cost of
implementation before choosing this architecture. According to Oracle Corporation,
distances of 50 km or less are recommended. Testing has shown that the distance (greatest cable
stretch) between Oracle RAC cluster nodes generally affects the configuration as follows:
For Regional and Continental Clusters, there’s not yet enough empirical data to indicate the effect of
Extended Distance Clusters in these environments. Additional testing is required to identify what
workloads can be supported and the effect the distance has on performance.ii
Even though Extended Distance Clusters provide greater high availability than standard RAC by
maintaining a copy of the database at each site, they may not satisfy the disaster recovery and business
continuance requirements of your organization. It is imperative that a full analysis be performed to
determine whether Extended Distance Clusters provide adequate protection from a disaster that affects
multiple sites. If it is determined that protection against corruption and regional disasters is
necessary, Oracle recommends a Maximum Availability Architecture (MAA), which includes the
integration of Oracle Data Guard with Oracle RAC.iii
Oracle Extended Distance Clusters require enough storage at each site so that each site can hold a
copy of the database. This database copy is maintained in one of two ways: 1) host-based mirroring
using a logical volume manager (e.g., ASM), or 2) array-based mirroring.
ASM uses disk groups to store data files. Disk groups and data files are similar to volume groups and
logical volumes, respectively, in other LVMs. The content of ASM files is evenly distributed with
respect to disk extents to eliminate hot spots and to provide uniform performance across the disks.
The distribution occurs across the storage pool based on capacity, not the number of spindles.
ASM provides two options for server-based mirroring, normal and high redundancy, and one option for
array-based mirroring, external redundancy. Normal and high redundancy instruct ASM to provide
two- and three-way mirroring, respectively. External redundancy performs no ASM mirroring and
should be used when the array's RAID performs the mirroring.
When ASM is used in a multipath configuration, ASM must be configured to discover only the
multipath device; otherwise ASM will generate an error when it discovers multiple device paths. You
can ensure ASM discovers the multipath devices by setting ASM's initialization (init.ora) parameter
ASM_DISKSTRING to the name of the pseudo device (e.g., ASM_DISKSTRING=/dev/oracleasm/disks/*).
Also, when ASMLIB is used with ASM on Linux, multipath disks can be discovered by configuring
oracleasm to scan device mapper devices first (ORACLEASM_SCANORDER="dm") and to exclude the
single-path disks (ORACLEASM_SCANEXCLUDE="sd"). This configuration is defined in /etc/sysconfig/oracleasm:
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
When the RDBMS and ASM are not at the same release, additional disk group compatibility
considerations need to be reviewed. Please see Oracle's documentation for more information on
compatibility.
INSTANCE_TYPE
This parameter must be set to ASM, and is the only required ASM parameter.
*.instance_type='asm'
ASM_POWER_LIMIT
This parameter controls how many resources are used to rebalance a disk group. The default value is 1,
with an available range of 0 to 11 inclusive. A value of 0 disables rebalancing, and the highest
value of 11 instructs ASM to spend as many available resources as possible on the rebalance, which
can result in higher I/O overhead.
*.asm_power_limit=1
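For illustration, the rebalance power can also be changed for a single operation with SQL from the ASM instance; the disk group name DATADG below comes from the test configuration, the power level of 6 is an arbitrary example, and progress can be monitored in the V$ASM_OPERATION view:

```sql
-- Sketch: run a one-off rebalance at a higher power level than the default
ALTER DISKGROUP datadg REBALANCE POWER 6;

-- Monitor rebalance progress from the ASM instance
SELECT group_number, operation, state, power, est_minutes
  FROM v$asm_operation;
```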
ASM_DISKGROUPS
This parameter specifies the list of disk group names that should be mounted at startup.
The default value is NULL. When an SPFILE is used, the parameter is maintained dynamically: as
disk groups are added, dropped, mounted, or dismounted, ASM automatically adds or drops the disk
group names from the parameter. When using a PFILE, you must edit the pfile to add or remove the
disk groups.
+ASM1.asm_diskgroups='DG1','FRA1'
+ASM2.asm_diskgroups='DG1','FRA1'
ASM_DISKSTRING
This parameter specifies a list of strings, called discovery strings, which control or limit the
sets of disks that an ASM instance discovers. The strings support pattern matching and
wildcard characters (*, ?). The supported format for discovery strings is dependent upon the
ASM library and operating system in use.
*.asm_diskstring='/dev/oracleasm/disks'
ASM_PREFERRED_READ_FAILURE_GROUPS
This parameter specifies a list of strings that define the failure groups that should be
preferentially read by a given instance. The parameter needs to be set in Extended Distance
Clusters, and its value differs on each node of the cluster.
+ASM1.asm_preferred_read_failure_groups='DG1.FG1','FRA1.FG1'
+ASM2.asm_preferred_read_failure_groups='DG1.FG2','FRA1.FG2'
The above definition instructs instance ASM1 to read files in diskgroups DG1 and FRA1 from
failgroup FG1, and instance ASM2 to read files in diskgroups DG1 and FRA1 from failgroup FG2.
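To confirm which disks an instance actually treats as preferred reads, the PREFERRED_READ column of the V$ASM_DISK view can be queried from each ASM instance; the following query is a sketch:

```sql
-- Run on each ASM instance; PREFERRED_READ = 'Y' marks disks read preferentially
SELECT name, failgroup, preferred_read
  FROM v$asm_disk
 WHERE group_number > 0;
```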
It is recommended that LUNs within a disk group be defined with the same size and performance
characteristics (e.g., FC 15K rpm or 10K rpm). The performance of the group will be limited by its
slowest-performing LUN member. When using LUNs of different performance capabilities and sizes,
it is best practice to group them according to similar characteristics. When creating volumes
for your databases, it is recommended that you initially create the volumes larger than needed for
future database growth. Since Compellent Storage Center has the ability to dynamically provision
storage, disk space is not taken up until actual data has been written to it. This way you can create
your tablespaces with the AUTOEXTEND parameter so you don’t have to worry about running out of
disk space in that volume.
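As a sketch of this approach, a tablespace with illustrative names and sizes might be created as follows; the datafile starts small and autoextends on demand into the thin-provisioned volume backing the DATADG disk group:

```sql
-- Sketch: hypothetical tablespace APP_DATA; initial size and growth
-- increment are examples only
CREATE TABLESPACE app_data
  DATAFILE '+DATADG' SIZE 1G
  AUTOEXTEND ON NEXT 256M MAXSIZE UNLIMITED;
```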
As stated earlier, ASM distributes data across the storage pool based on LUN capacity, not the
number of spindles. Therefore, if LUNs are not sized equally, performance will be sub-optimal
because the larger LUNs will receive more I/O requests than the smaller LUNs.
Disk groups and their level of redundancy are created with the SQL command CREATE DISKGROUP.
Redundancy within the disk group defines how many disk failures can be tolerated before disks are
dismounted. There are several options for mirroring, which are defined with the appropriate
command options:
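For illustration, a normal-redundancy disk group with one failgroup per site could be created as follows; the disk group, failgroup, and disk names mirror the DATADG layout of the test configuration, and the device paths assume ASMLIB-managed disks under /dev/oracleasm/disks:

```sql
-- Sketch: normal redundancy with one failgroup per Storage Center,
-- so each extent is mirrored across sites
CREATE DISKGROUP datadg NORMAL REDUNDANCY
  FAILGROUP fg1 DISK
    '/dev/oracleasm/disks/DATA1SC11',
    '/dev/oracleasm/disks/DATA2SC11'
  FAILGROUP fg2 DISK
    '/dev/oracleasm/disks/DATA1SC14',
    '/dev/oracleasm/disks/DATA2SC14'
  ATTRIBUTE 'compatible.asm'   = '11.2.0.0.0',
            'compatible.rdbms' = '11.2.0.0.0';
```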
Generally speaking, ASM mirroring (NORMAL or HIGH redundancy) is more flexible than traditional
RAID mirroring (EXTERNAL redundancy) because redundancy is applied per file: in a NORMAL or HIGH
redundancy configuration, files in the same disk group can have some files mirrored while others are
not. ASM mirroring should be used when hardware RAID is absent, or when mirroring is needed across
storage systems, as is the case with Extended Distance Clusters. External redundancy should be used
when possible to avoid the overhead of ASM mirroring. However, the mirroring option chosen should
be based upon your business continuance requirements.
Failure groups are subsets of disks within a disk group and are used to mirror copies of extents.
Each extent copy is placed on a disk in a different failure group, and each disk in a disk group can
belong to only one failure group. When normal redundancy files are configured and ASM allocates an
extent, ASM allocates two copies of the extent: a primary and a secondary copy. The secondary extent
copy is allocated on a disk that resides in a different failure group than the failure group holding
the primary extent. If all the disks in a failure group fail, no data loss is realized because a copy
of the extents in those files exists in a different failure group. Failure groups should be created
with equal size to avoid uneven distribution of data and space imbalance.
Oracle recommends offloading ASM mirroring, which runs on the database server, to the storage array
RAID controller. With a few exceptions, Oracle considers ASM mirroring the alternative to third-party
LVMs. Oracle lists general considerations for selecting normal or high redundancy in its documentation.
On node 2, diskgroups DATADG and FRADG need to be mounted after they are created on node 1.
Diskgroup OCRDG does not need to be mounted because it was mounted by the Grid installer.
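The mount on node 2 is performed from its local ASM instance; a sketch:

```sql
-- On the +ASM2 instance (node 2), after the groups are created on node 1
ALTER DISKGROUP datadg MOUNT;
ALTER DISKGROUP fradg MOUNT;
```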
In geoclusters, it is more efficient for the requesting node to read the extent that is closest
or local, even if that extent is a secondary extent. 11gR1 introduced the ability for a node in a
cluster to read from local failure groups rather than remote ones. However, this capability must be
configured by defining an extended cluster with preferred ASM read groups.
Each ASM instance running on each node of the cluster must have its own preferred read
configuration, defined with the ASM_PREFERRED_READ_FAILURE_GROUPS initialization
parameter. The following instance-specific parameters were defined during our testing. They define
preferred read disks in failure groups in both the DATADG and FRADG diskgroups. The failure groups
must contain only disks that are local or nearest to the instance to realize improved latency. With
normal redundancy, there should be only one failure group on each node of the extended cluster.
Instance +ASM1:
Instance +ASM2:
DATADG:
FRADG:
OCRDG:
devices {
device {
vendor "COMPELNT"
product "Compellent Vol"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_selector "round-robin 0"
path_checker tur
features "0"
hardware_handler "0"
failback immediate
rr_weight uniform
no_path_retry fail
rr_min_io 1000
}
}
All multipath devices on both nodes used user-friendly names and were identified by their WWID.
Here’s an example of the multipath devices defined on one of the nodes in the test environment:
multipaths {
multipath {
wwid "36000d3100003d00000000000000022f8"
alias orahome
}
multipath {
wwid "36000d3100003d00000000000000022f9"
alias ocrsc6
}
multipath {
wwid "36000d3100000690000000000000010d8"
alias ocrsc11
}
multipath {
wwid "36000d3100002bf000000000000000266"
alias ocrsc14
}
multipath {
wwid "36000d3100000690000000000000010d9"
alias data1sc11
}
multipath {
wwid "36000d3100000690000000000000010da"
alias data2sc11
}
multipath {
wwid "36000d3100000690000000000000010db"
alias fra1sc11
}
multipath {
wwid "36000d3100000690000000000000010dc"
alias fra2sc11
}
multipath {
wwid "36000d3100002bf000000000000000262"
alias data1sc14
}
multipath {
wwid "36000d3100002bf000000000000000263"
alias data2sc14
}
multipath {
wwid "36000d3100002bf000000000000000264"
alias fra1sc14
}
multipath {
wwid "36000d3100002bf000000000000000265"
alias fra2sc14
}
}
When adding the servers for the Extended Distance Cluster in Storage Center, it is recommended
that the servers be defined as part of a cluster server group. Server clusters reduce the amount
of work necessary to maintain the mappings between volumes and the servers in the server cluster.
When a volume is mapped to a server cluster, Storage Center automatically maps the volume to all
the servers within the cluster server group. Within the server cluster, a volume can be mapped to
the same LUN on all servers, making it easier to identify the same volume on every server in the
cluster.
Perform a full analysis of the business continuance requirements to determine if extended distance
clusters provide the necessary high availability requirements.
Before installing the necessary software for your Oracle Extended Distance cluster, you need to
take into consideration the following before deciding what should be installed and configured:
Oracle Clusterware, RAC, ASM, RDBMS versions. It’s recommended to use 11.2.0.3 for all
Oracle components to simplify the management of the cluster.
Oracle disk group compatibility. If using 11.2.0.3 for RDBMS, ASM, Grid, then set
compatibility to 11.2.0.0.0 for both RDBMS and ASM when defining ASM disk groups.
When creating volumes for your Oracle database server, you do not have to use any software
striping at the operating system level: data in a Compellent volume is automatically striped
according to the RAID level selected for the volume. However, if Oracle ASM is not configured,
you should create multiple LUNs for larger database workloads and use an operating-system striping
mechanism (e.g., LVM, VxVM) to get better performance from multiple disk queues at the OS level.
When provisioning storage for your Oracle Extended Distance cluster, you need to take into
consideration the following before deciding how to setup and configure an Extended Distance
cluster on Compellent Storage Center:
Storage Center A: Supports OCR disk and database image for RAC node A
Storage Center B: Supports OCR disk and database image for RAC Node B
Storage Center C: Supports OCR disk. An NFS file can be used as the third OCR file. See Oracle
documentation for more information.
Storage Center D: Supports Boot volume and Grid and Oracle homes
Use the same disk size and performance characteristics (ex. FC 15K rpm or 10K rpm) for the
LUNs within a disk group.
Size of the Oracle database. Each node in the cluster must have enough disk to support a
local copy of the database.
Number of OCR disk groups. Must have an odd number, minimum of three
Number of disk groups. Use at least two. One each for groups DATA, and FRA
Normal Redundancy: use at least two failgroups for each disk group DATA and FRA.
High Redundancy: use at least three failgroups for each disk group DATA and FRA.
Number of files per disk and failure groups. Use a minimum of 4 dedicated LUNs per disk
group.
Number and placement of OCR disks. Each resides in its own diskgroup and must have
its own failgroup.
File system type, placement, and size for Oracle Home. Non-shareable homes are recommended
to support Oracle rolling upgrades.
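As one possible illustration of the OCR/voting layout, a single normal-redundancy disk group with three failgroups, one per Storage Center, could be sketched as follows. The device names assume ASMLIB disks behind the ocrsc11, ocrsc14, and ocrsc6 multipath aliases; in the tested configuration, the Grid installer created this group:

```sql
-- Sketch: three failgroups so the voting files survive the loss of any
-- one site; SC6 (Site C) holds the third, quorum copy
CREATE DISKGROUP ocrdg NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disks/OCRSC11'
  FAILGROUP fg2 DISK '/dev/oracleasm/disks/OCRSC14'
  FAILGROUP fg3 DISK '/dev/oracleasm/disks/OCRSC6'
  ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
```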
For a more complex database, you might have to create multiple volumes dedicated to your data
files, assuming you separate your data tablespaces from your index tablespaces.
Configure ASM to discover only multipath devices by setting ASM initialization pfile parameter:
ASM_DISKSTRING=/dev/oracleasm/disks/*
When using ASMLIB, configure oracleasm to scan device mapper devices and exclude sd devices.
Set the following in /etc/sysconfig/oracleasm:
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
All ASM initialization parameters should be sufficient at their default values. Oracle strongly
recommends that MEMORY_TARGET remain set to at least 256 MB. Set ASM_POWER_LIMIT to the
appropriate level, which will be dictated by available system resources and business requirements.
ASM_DISKSTRING should also be set to the location where the ASM disk devices reside.
+ASM1.asm_preferred_read_failure_groups='DG1.FG1','FRA1.FG1'
+ASM2.asm_preferred_read_failure_groups='DG1.FG2','FRA1.FG2'
Define the necessary stanza in multipath.conf so multipath performs immediate failover in the
event of a Storage Center failover (see the devices stanza shown earlier).
When creating volumes for your databases, it is recommended that you initially create the volumes
(datafiles, archived redo logs, flash recovery area) larger than needed for future database growth.
Since the Compellent Storage Center has the ability to dynamically provision storage, disk space is
not taken up until actual data has been written to it. This way you can create your tablespaces
with the AUTOEXTEND parameter so you don’t have to worry about running out of disk space in
that volume.
Conclusion
When Compellent SAN technologies are combined with 11gR2 Oracle Grid and 11gR2 RDBMS, RAC
Stretch Clusters become a very attractive storage solution for Oracle 11g. Running Oracle RDBMS
(single instance or RAC) with Compellent Storage Center provides availability, scalability,
manageability, and performance for your database applications. Compellent also simplifies the
management of the storage and provides a robust and reliable business continuity solution for your
business continuance needs. Configuring the storage solution is easily accomplished with just a few
mouse clicks.
References
i. Oracle Corporation, Oracle Database High Availability Best Practices, 11g Release 1 (11.1),
B28282-02 (December 2009), 2-34. http://docs.oracle.com/cd/B28359_01/server.111/b28282.pdf
ii. Oracle Corporation, Oracle Database High Availability Best Practices, 11g Release 1 (11.1),
B28282-02 (December 2009), 2-34. http://docs.oracle.com/cd/B28359_01/server.111/b28282.pdf
iii. Oracle Corporation, Oracle Database High Availability Overview, 11g Release 2 (11.2),
E17157-04 (July 2010). http://docs.oracle.com/cd/E18283_01/server.112/e17157.pdf
Oracle Corporation, Oracle Database Storage Administrator's Guide, 11g Release 2 (11.2).
http://docs.oracle.com/cd/E16338_01/server.112/e10500.pdf