Best Practices for Oracle Database 10g Automatic Storage Management on Dell/EMC Storage

A highly available and scalable storage system forms the heart of data centers running Oracle® databases. This article reflects the cumulative storage design and tuning recommendations from Dell and Oracle teams on Oracle Database 10g solutions using Dell™ PowerEdge™ servers, Dell/EMC Fibre Channel storage, and Oracle Automatic Storage Management on the Red Hat® Linux® operating system.

Reprinted from Dell Power Solutions, October 2004. Copyright © 2004 Dell Inc. All rights reserved.
Several architectural enhancements have been introduced in Oracle Database 10g that can benefit administrators tasked with deploying and managing an Oracle database solution. The enhanced techniques and methods supported by Oracle Database 10g either automate or greatly simplify the process of configuring, monitoring, and managing an Oracle database, and can assist organizations with implementing the degree of availability and performance that best meets their service-level objectives. Key techniques include:

• Automatic Storage Management (ASM): For optimal layout of database files
• Automatic Workload Repository (AWR): For gathering performance data (see the snapshot example after this list)
• Automatic Database Diagnostics Monitor (ADDM): For analyzing performance issues
• Automatic Workload Management (AWM): For managing and controlling processing resources required for applications
• Virtualization and provisioning: For efficient, on-demand usage of processing resources
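As a brief illustration of the workload-analysis features named above (the connection method and privileges are assumptions about the environment, not part of the original article), an AWR snapshot can be taken on demand from SQL*Plus; ADDM then analyzes the interval between two such snapshots:

# Take an AWR snapshot on demand; run again later to bound an
# interval that ADDM can analyze (for example, via @?/rdbms/admin/addmrpt.sql)
sqlplus / as sysdba <<'EOF'
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
EXIT;
EOF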

ASM addresses one of the major challenges faced by database administrators: the storage management process, which involves creating and tuning a storage layout for database files, identifying hot spots, monitoring I/O distribution among the physical disks, and monitoring overall storage capacity (see Figure 1). As the database grows in size, this often-daunting storage management process needs to be repeated. Moreover, many of the associated administrative tasks require taking the database offline, which can reduce availability to a level that may not be acceptable. ASM can help to improve the storage management process and to reduce total cost of ownership (TCO) by allowing storage to grow as needed without requiring up-front investments that account for future growth.
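To make the idea concrete, here is a minimal sketch, assuming two example raw device paths and an ASM instance named +ASM on the node; external redundancy is chosen because the Dell/EMC array already provides RAID protection:

# Create an ASM disk group from two LUNs (device paths are examples)
export ORACLE_SID=+ASM
sqlplus / as sysdba <<'EOF'
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2';
EXIT;
EOF

A database instance can then place files in the group simply by naming it, for example CREATE TABLESPACE users2 DATAFILE '+data' SIZE 1G, and ASM balances I/O across the underlying disks.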

Before ASM, raw devices and the Oracle Cluster File System (OCFS) were the available storage management options for an Oracle database running on the Linux operating system. However, these methods are subject to the following limitations:

• Data file management on raw devices can be difficult with respect to name space mapping and backup schemes.
• Storage must be allocated for both raw devices and OCFS with future growth in mind, resulting in significant initial costs.
• A storage solution is not scalable if storage allocation is based only on short-term requirements. Expanding the storage for a data file requires creating a larger data file, exporting the data from the existing data file, and importing the data into the larger data file. During these operations, the database is not available (see the sketch after this list).

• Both raw devices and OCFS abstract the organization (RAID level) of the physical disk group. Consequently, an application such as a database server cannot take advantage of physical properties (host bus adapters [HBAs], SCSI adapters, bus speeds, disk spindles, and so forth) to balance I/O or redistribute data as the data file grows.

This article provides an overview of Oracle Real Application Clusters (RAC) 10g running over a storage area network (SAN) that comprises Dell servers and Dell/EMC storage. It also describes how Dell/EMC storage systems and ASM can help improve database performance and availability.

Implementing Oracle RAC 10g on Dell clusters

Dell and Oracle have developed a RAC configuration that is based on a SAN comprising up to eight Dell PowerEdge server nodes, a Dell/EMC CX series Fibre Channel storage enclosure, a Fibre Channel network, a private network, and a public network (see Figure 2). The storage enclosure can be connected to the nodes directly or through Fibre Channel switches. The PowerEdge server nodes run the Red Hat Enterprise Linux AS 3 operating system with Update 2 or higher and Oracle Database 10g Enterprise Edition database software.1

An Oracle RAC 10g cluster requires a private network and a public network. The private network uses two Gigabit Ethernet2 network interface cards (NICs) that are bonded together using the Red Hat Enterprise Linux network bonding feature (see the sidebar “Configuring the private network” for more information). Primary communication among the cluster nodes takes place over the private network. Clients and other application servers access the database over the public network.

Dell/EMC storage systems

The modular Dell/EMC CX series of Fibre Channel storage systems incorporates RAID, multipath I/O, and on-demand storage expansion capabilities to help provide cost-effective, continuous availability for critical business environments through either SAN or direct attach configurations (see Figure 3 for representative entry-level, midrange, and enterprise-class models). A SAN architecture allows administrators to increase storage incrementally and add servers as business demands grow. Key features of Dell/EMC CX series storage systems include the following:

• High availability: Modular, redundant hardware architecture, combined with switches, creates multiple paths to the storage to help provide business continuance.
• Scalability and capacity: Up to 35 TB of storage can be supported using Fibre Channel 2 (FC2) drives. Disks and enclosures can be added as needed, resulting in efficient utilization of storage.
• Manageability: EMC® Navisphere® and VisualSAN® software help provide simple, powerful storage management capabilities, including nondisruptive software upgrades.
Figure 1. Managing storage for a database (a recurring cycle: create and tune the storage layout, identify hot spots, and monitor disk I/O distribution and storage capacity)
1 To review the latest Dell and Oracle-supported configurations on Dell PowerEdge servers, visit http://www.dell.com/oracle.
2 This term does not connote an actual operating speed of 1 Gbps. For high-speed transmission, connection to a Gigabit Ethernet server and network infrastructure is required.
Figure 2. Architecture of Oracle RAC 10g cluster with Dell/EMC Fibre Channel storage (Dell PowerEdge servers attached to a public network, to each other over a private network of bonded NICs, and to the Dell/EMC Fibre Channel storage over a Fibre Channel network)
RAID technology. The Dell/EMC CX series storage systems use RAID technology, which groups separate inexpensive disks into one logical unit number (LUN) to help improve reliability, performance, or both. This approach spreads data across all disks, which are partitioned into units called stripe elements. Depending on the RAID level, the storage-system hardware can read from and write to multiple disks simultaneously and independently. Because this approach enables several read/write heads to work on the same task at once, RAID can enhance performance. The amount of data read from or written to each disk at a time is the stripe element size. Figure 4 shows an example six-disk RAID-10 configuration in which each primary disk is first striped and then mirrored to another disk.
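The striping arithmetic can be sketched directly. This small shell fragment (a sketch, assuming the six-disk RAID-10 layout of Figure 4: three primary disks and a 64 KB stripe element) maps a logical byte offset to the primary disk and stripe row that hold it:

# Locate a logical byte offset in the Figure 4 RAID-10 layout
ELEMENT=$((64 * 1024))   # stripe element size: 64 KB
DISKS=3                  # primary disks in the stripe set
offset=$((300 * 1024))   # example: logical offset at 300 KB
element=$((offset / ELEMENT))    # sequential stripe element index
disk=$((element % DISKS + 1))    # primary disk holding it (1-based)
row=$((element / DISKS))         # stripe row on that disk
echo "offset ${offset} -> disk ${disk}, stripe row ${row}"

Every write to a primary disk is also sent to its mirror, which is how the configuration in Figure 4 tolerates a single-disk failure without losing data.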

Configuring the private network

To set up network bonding for Broadcom or Intel® NICs and to configure the private network, perform the following steps on each cluster node:
1. Log in as root.
2. Add the following line to the /etc/modules.conf file:
alias bond0 bonding
3. For high availability, edit the /etc/modules.conf file and set the option for link monitoring. The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance:

options bonding miimon=100
4. In the /etc/sysconfig/network-scripts/ directory, edit the ifcfg-bondn configuration file for bond number n. For example, the configuration file ifcfg-bond0 for the first bond (bond0) would appear as follows:
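The original listing is not preserved in this copy; the following is a representative RHEL bonding configuration, with placeholder address values to substitute:

# ifcfg-bond0: master device for the bonded private network
DEVICE=bond0
# Placeholder address and netmask for the private interconnect
IPADDR=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no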


5. To use the virtual bonding device (such as bond0), all members of bond0 must be configured so that MASTER=bond0 and SLAVE=yes. For each member of the specified bonding device, edit its respective configuration file ifcfg-ethn in /etc/sysconfig/network-scripts/ as follows:
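Again a representative sketch rather than the original listing (eth1 is an assumed member NIC; repeat for each member device):

# ifcfg-eth1: member (slave) of the bond0 device
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no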


6. Enter the following command to restart the network with the new bonded configuration:

service network restart
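As a quick sanity check (not part of the original steps), the Linux bonding driver exposes bond and slave status through /proc:

# Confirm that bond0 is up and both member NICs are active
cat /proc/net/bonding/bond0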
Figure 3. Specifications for Dell/EMC storage system enclosures

Entry-level array
• Maximum disks: 60, in 1 disk processor enclosure (DPE) and 3 disk array enclosures (DAEs)
• Storage capacity with FC2 drives: 8.8 TB
• Bandwidth: 680 MB/sec; 50,000 I/Os per second (IOPS)
• RAID levels: 0, 1, 3, 5, and 10
• 2 GB cache
• 2 Gbps DPE2
• Multipath I/O

Midrange array
• Maximum disks: 120, in 1 DPE and 7 DAEs
• Storage capacity with FC2 drives: 17.5 TB
• Bandwidth: 780 MB/sec; 120,000 IOPS
• RAID levels: 0, 1, 3, 5, and 10
• 4 GB cache
• 2 Gbps DPE2
• Multipath I/O

Enterprise-class array
• Maximum disks: 240, in 16 DAEs
• Storage capacity with FC2 drives: 35 TB
• Bandwidth: 1,520 MB/sec; 200,000 IOPS
• RAID levels: 0, 1, 3, 5, and 10
• 8 GB cache
• Storage processor enclosure
• Multipath I/O
Figure 4. RAID-10 with six disks (data is striped in 64 KB stripe elements across disks 1, 2, and 3, and each primary disk is mirrored to a corresponding mirror disk)
