
Best practices for Oracle 11g on ProLiant with EVA8100 using multiple databases

Contents

Overview  2
Solution configuration  3
   EVA configuration  4
Testing  7
   OLTP results  8
   DSS results  10
Best practices  14
   Storage administrators  14
   Server administrators  14
   Database administrators  15
Conclusion  16
We value your feedback  16
Appendix A Bill of materials  17
For more information  18
   HP solutions sites  18
   HP technical references  18
   HP product sites  18
   Oracle  18
   Quest software  19

Overview
In 2007, Oracle formally announced Oracle 11g, the newest version of its flagship enterprise RDBMS product. Steady adoption of Oracle 11g is expected over the next 6 to 12 months, based on historical patterns, IDC research, and customer data gathered through HP Customer Focused Testing (CFT) Oracle technical forums. The HP CFT team previously delivered a well-received project that focused on what was then a new feature of Oracle 10g known as Automatic Storage Management (ASM). For more information, see http://h71028.www7.hp.com/ERC/downloads/4AA1-4941ENW.pdf?jumpid=reg_R1002_USEN.

The purpose of ASM is to do the following:
• Provide a comprehensive volume management environment delivered entirely from within Oracle.
• Provide a unified look and feel across all operating systems.
• Eliminate the need for operating system-specific volume managers.

One of the key features of ASM is its ability to stripe data across LUNs, which is similar to functionality that is core to the HP StorageWorks Enterprise Virtual Array (EVA). Since Oracle ASM presumes that the storage is JBOD rather than a high-function array, compelling technical questions arise about using ASM with the EVA. These questions were satisfactorily addressed in the previous project.

This project is a refresh of that initial project, prompted by the overwhelmingly positive reaction to the first EVA/ASM project, the evolution of Oracle and HP technology, and valuable customer suggestions. Based on previous lessons learned and feedback obtained, and to extend the findings, two databases are deployed (instead of one), and key findings from the previous project are used as the baseline in this project.

This project focuses on the following:
• A typical midsize Oracle 11g database deployment for 2,000 users and 3 TB of data using HP ProLiant servers and an HP EVA8100
• Servers deployed in a Real Application Clusters (RAC) environment running 64-bit Linux and Oracle's ASM
• Data spread across two databases

The expected results are to identify the following:
• Best practices for deploying two Oracle 11g databases, with emphasis on how to configure the EVA storage
• Best practices and recommendations for the configuration of HP ProLiant servers, 64-bit Linux, and the Oracle 11g database
• Best practices that accelerate time to deployment and optimize performance, while reducing risks and minimizing total costs

Solution conguration
This project is designed to help customers determine which EVA configurations are the most beneficial in their environment when running multiple databases sharing storage on an EVA8100. Our configuration consists of two 2-node Oracle 11g RAC databases connected to an EVA8100 with dual fabrics (see Figure 1).

Figure 1. High-level configuration

Each node of the two 2-node Oracle 11g RAC databases (with ASM) runs on an HP ProLiant DL585 G1 server. Each server is configured with the following:
• Four single-core 2.6 GHz AMD Opteron 800 series CPUs with 32 GB of RAM
• Two QLogic 4 Gb dual-channel host bus adapters (HBAs)
• Red Hat Enterprise Linux 4, Update 4
• Multipathing provided by the embedded HP HBA drivers for Linux; specifically, option 3 (dynamic): least service time
• The latest ProLiant Support Pack

For specific versions and part numbers, see Appendix A Bill of materials.

Each database tested is 1.5 TB and uses an 8 KB block size. Using a benchmarking tool, we simulate 2,000 concurrent users (1,000 from each database) in dedicated mode with archiving for the online transaction processing (OLTP) benchmark. In the data warehouse/decision support system (DSS) benchmark, we run eight concurrent users with user-defined, query-intensive statements.

EVA configuration
Three EVA configurations are tested. Configuration 1, shown in Figure 2, is a variation of the best practice from this project's predecessor. In Configuration 1, each database's main/online files are placed in the first disk group, which uses more physical drives for improved performance of the online files. Each LUN is Vraid 1. Backup files are located in the second disk group on Vraid 5 LUNs. The second disk group is accessed far less frequently and, therefore, has fewer spindles.

Figure 2. Configuration 1

Some customers might prefer Configuration 1 because it separates the different workload types. For example, the OLTP workload of small random read/write accesses occurs in the first disk group, while the large sequential write workload (archive logs) occurs in the second (backup) disk group. A benefit of Configuration 1 is that customers might save money on the second disk group by using lower-performance drives, since the second disk group is infrequently accessed if the redo logs are sized in accordance with Oracle best practices.
Note: If you configure the Flashback Area, Oracle places an implicit mirror copy of the online redo logs into the Flashback Area. This can affect performance, especially in Configuration 1. The simplest solution is to remove the mirrored online redo logs from the Flashback Area.
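As one way to implement that fix, the sketch below is a hedged example only; the disk group names +DATA_DG1 and +BACKUP_DG1 and the log member path are assumptions, not this project's actual names. It directs future online redo log creation to the data disk group only and then drops the mirrored member already placed in the recovery area:

-- Hypothetical names; adjust to your environment.
-- Create new online redo log members only in the data disk group.
ALTER SYSTEM SET db_create_online_log_dest_1 = '+DATA_DG1' SCOPE=BOTH;

-- List log members that were mirrored into the recovery (backup) disk group.
SELECT group#, member FROM v$logfile WHERE member LIKE '+BACKUP_DG1/%';

-- For each group reported (once that group is no longer current),
-- drop the mirrored member; the path below is illustrative only.
ALTER DATABASE DROP LOGFILE MEMBER '+BACKUP_DG1/db1/onlinelog/group_1.257.654321001';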

In Configuration 2, shown in Figure 3, we implemented a popular cross-configuration. Each disk group contains one database's online files and the backup files of the other database. For example, the first disk group contains the online files of Database #1 and the backup files for Database #2. The second disk group contains the online files of Database #2 and the backup files for Database #1. Each disk group is divided into a near-equal number of drives. Since an EVA best practice is to use drives in multiples of eight in a disk group, we placed 88 drives in one disk group and 80 drives in the other. This allows us to use all 168 drives in our array. We did not put 84 drives in both disk groups, as 84 is not a multiple of eight. As before, each LUN for online data is Vraid 1 and each LUN for backup files is Vraid 5.

Figure 3. Configuration 2

Configuration 2 has mixed workloads in each disk group and mixed Vraid levels. All drives in Configuration 2 must be high-performance drives (preferably 15K spindles), since neither disk group is dedicated to backup files. From a cost perspective, Configuration 2 can be more expensive. Customers need to weigh the pros and cons of this solution for their environment.

Configuration 3, shown in Figure 4, is similar in concept to Configuration 1 in that the backup files are stored in a separate disk group from the online files. Both databases share a backup disk group of 24 drives, and each database has an isolated disk group of 72 drives to store its online files. Again, each LUN for online data is Vraid 1, and each LUN for backup files is Vraid 5.

Figure 4. Configuration 3

Configuration 3 is tested to see if database performance improves when each database's online files are isolated from one another. We question whether two disk groups of 72 drives will provide sufficient performance to each database, or whether a large, shared disk group of 136 drives (Configuration 1) is a more suitable solution when both databases share the storage resources.

Testing
This project is a follow-up to the original project, Oracle 10g Best Practices on EVA8000. In the original project, HP looked at various configurations of the EVA8000 and corresponding mappings of the ASM disk groups to the EVA storage. This was done for a single database. In this project, we look at the array configuration for multiple databases. We address the following questions:
• What is the best way to configure the EVA8100 with multiple Oracle databases running an OLTP benchmark?
• What is the best way to configure the EVA8100 with multiple Oracle databases running a DSS benchmark?
• What is the best way to configure the EVA8100 with multiple Oracle databases running a mixed workload benchmark?

We test the configurations with OLTP and DSS in separate cycles, which leads to the results in this paper. Customers running mixed workloads can draw conclusions from our independent workload testing.

When defining storage best practices, it is important to place enough workload on the storage array. This can sometimes be counterintuitive, since proper sizing of the database system global area (SGA) can result in a decrease in storage throughput as more work occurs in the server's memory. As a compromise between testing realistic customer environments and generating workloads that help define storage best practices, we deliberately choose to minimize the SGA to increase the storage workload. However, customers must understand the relationship between SGA sizing and storage usage. Customers who want to free EVA resources for other applications should look closely at SGA sizing based on their servers' memory capacities and on recommendations from Oracle's Automatic Database Diagnostic Monitor. Table 1 shows the effect of SGA tuning on database performance and storage usage. (This data comes from our preliminary testing during database tuning; therefore, these values may differ from the final results shown in this paper.)
Table 1. SGA sizing

Memory size (MB)   Transactions per second   Array host requests per second
4,500              9,751                     18,850
5,000              10,498                    16,389
5,500              11,255                    14,676
6,000              12,030                    11,858
6,500              13,220                    7,382

Best practice By increasing the SGA from 4,500 MB to 6,500 MB, database TPS increased by 36% while the host requests on the EVA decreased by 61%.

There comes a point at which oversizing the SGA provides no further benefit. Although not all of the results are shown here, we also saw this in our preliminary testing. For example, with an SGA of 10,000 MB, we see only a slight improvement in TPS (13,547), while array host requests increased to 7,827. Customers should try various SGA tuning tactics in their test environments to find the optimal balance between performance and storage usage.
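For reference, the following is a minimal sketch of the type of adjustment used during this tuning pass. It assumes Automatic Shared Memory Management with a server parameter file; the 6500M value is simply the Table 1 setting, not a universal recommendation:

-- Apply the SGA target from Table 1 on all RAC instances.
-- (Requires sga_max_size to be at least this large; otherwise use
-- SCOPE=SPFILE and restart the instances.)
ALTER SYSTEM SET sga_target = 6500M SCOPE=BOTH SID='*';

-- Review Oracle's own projection of how further SGA changes would
-- affect database time before settling on a final value.
SELECT sga_size, sga_size_factor, estd_db_time_factor
  FROM v$sga_target_advice
 ORDER BY sga_size;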

OLTP results
OLTP is a benchmark of small random read/write operations, traditionally in a 60/40 or similar read/write ratio. Our testing used 2,000 concurrently connected users in dedicated mode with an extremely low think time of 150 ms to place a heavy load on the database and underlying storage array. At first glance, our OLTP results are not as exciting as we would prefer (see Figure 5). Configuration 1 is the top performer at 10,998 transactions per second. Configuration 2 and Configuration 3 are 2% to 4% slower, which is typically not significant.
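As a rough consistency check (our own closed-loop model, assuming each simulated user issues one transaction per think-time cycle; the benchmark tool does not report this), Little's law relates the user count N, throughput X, and think time Z to the implied average response time R:

\[ R = \frac{N}{X} - Z = \frac{2000}{10{,}998} - 0.150 \approx 0.032\ \mathrm{s} \]

An implied response time of roughly 30 ms indicates the system is keeping pace with the offered load, which is consistent with the low storage latencies and the CPU-bound behavior described below.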

Figure 5. OLTP results

Best practice HP improved the database performance of the OLTP workload by 2% to 4%, based on the storage array configuration.

The database servers are monitored with HP OpenView Performance Manager (OVPM). The OLTP benchmark is server intensive, and the server is usually the primary source of bottlenecks for this workload, so it should be monitored closely. Figure 6 shows a global view of one RAC node from one of the databases; all nodes are monitored with similar results. Figure 6 shows that CPU usage is quite high at 85%. In addition, the run queue length is very high at above 40, which represents an average of ten waiting processes per CPU. Preferably, this value is two or less per CPU. These results are indicative of a CPU bottleneck for this workload. Memory usage is acceptable at 22%, and no swapping is occurring.

Figure 6. OVPM custom graph for OLTP testing

While the server proves to be a bottleneck for the workload, the array is still under heavy load, given the number of IOPS possible based on spindle count, spindle speed, RAID type, and workload characteristics for a given disk group. Our 95th percentile metric approaches 17,000 IOPS (see Figure 7). This graph represents the workload for Configuration 1, with 136 spindles in the first (main) disk group. Yet our read and write latencies (not shown) remain at a manageable 2 ms to 10 ms. These values are generally acceptable; typically, storage is not considered problematic until latencies reach 20 ms or higher.
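A back-of-the-envelope estimate suggests this is near the disk group's practical ceiling. Assuming roughly 170 random IOPS per 15K spindle (our assumption; the real figure varies with seek profile and array cache behavior), a Vraid 1 write penalty of 2, and the 60/40 read/write mix:

\[ \mathrm{IOPS_{host}} \approx \frac{n \times \mathrm{IOPS_{disk}}}{r + w\,p} = \frac{136 \times 170}{0.6 + 0.4 \times 2} \approx 16{,}500 \]

which is in the same range as the ~17,000 host requests per second observed at the 95th percentile.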

Figure 7. EVAPerf total host requests per second

DSS results
A DSS workload represents typical large sequential read operations and usually takes a fairly long time to execute. Traditionally, these queries are run by a handful of users (we tested eight), compared to the 2,000 users in an OLTP workload. One variation of the DSS workload is a timed test, in which we execute a static set of queries and look for the test run that finishes fastest. In our testing, shown in Figure 8, Configuration 2 finishes in 56.67 minutes. Configuration 1 and Configuration 3 complete in 60 minutes or longer, making them 6% to 9% slower.


Figure 8. DSS results

Best practice HP improved the database performance of the DSS workload by 6% to 9%, based on the storage array configuration.

A DSS benchmark tends to be a storage-intensive operation, which makes for greater distinctions between storage best practices. From the server's perspective, shown in Figure 9, we see that CPU usage on the database server is under 20%, and the run queue length is barely above 0 for all four CPUs. Memory usage is flat at 10% with no swapping. The server is not under a heavy load with this workload and is not a bottleneck for performance.


Figure 9. OVPM custom graph for DSS testing

The throughput on the array, however, is under a significant load, approaching 1 GB/s. Figure 10 displays the MB/s on the EVA8100 as captured by EVAPerf, the EVA performance tool. Figure 10 shows the Configuration 2 results. During the first 30 minutes of the test, which represents the heaviest workload, the average read miss latency hovers around 15 ms.


Figure 10. EVAPerf megabytes per second


Best practices
Storage configuration affects the performance of the database and should be managed accordingly. In this project, we show three common configurations for customers running multiple Oracle databases on the EVA8100. Our testing provides the following answers to the questions presented in Testing:

What is the best way to configure the EVA8100 with multiple Oracle databases running an OLTP benchmark?
Configuration 1 provides the best performance for an OLTP workload, although the differences are slight. An advantage of Configuration 1 is the potential cost benefit of using lower-performance drives in the second (backup) disk group. In addition, if the workload stresses the storage array even harder, we expect to see even greater distinctions between the three configurations, with Configuration 1 remaining the best.

What is the best way to configure the EVA8100 with multiple Oracle databases running a DSS benchmark?
Configuration 2 shows better throughput for DSS workloads. Configuration 2 has the disadvantage of being the more expensive solution, since all drives in both disk groups must be high performing. Furthermore, if the customer decides to use disk-based backups, such as snapclones, the backup process may impact the workload of one of the databases, depending on the amount of data being backed up, since there is no designated backup disk group.

What is the best way to configure the EVA8100 with multiple Oracle databases running a mixed workload benchmark?
In a mixed workload environment, either Configuration 1 or Configuration 2 is an acceptable solution. Customers may be tempted to use Configuration 2, since its DSS advantage (6%) is greater than the OLTP advantage of Configuration 1 (2%). This decision depends on the mixed workload of the customer's environment. We prefer Configuration 1, since it offers the customer more choices in configuring the second (backup) disk group.

Storage administrators
Best practices for storage administrators when configuring their environment:
• Create two EVA disk groups with multiple LUNs for the database. HP recommends Configuration 1 for most workloads.
• Use Vraid 1 for the online data (data files, online redo logs) and Vraid 5 for the backup data. However, this is not a requirement, and available storage space may dictate the Vraid level.
• Balance the databases across EVA controllers so that Database 1 is on Controller A and Database 2 is on Controller B, assuming the workload on the databases is roughly even. Alternatively, the LUNs of each database can be configured to alternate controllers (A, B, A, B).

Server administrators
Server administrators should look at multipathing and host access:
• Use multiple HBAs for path availability and optimal performance. In a 4 Gb environment, HP recommends a minimum of two ports for OLTP and four ports for DSS; mixed workloads should err on the side of the larger port count (see the estimate following this list).
• Use multipathing software, such as Microsoft Multipath I/O (MPIO), Secure Path, or Device Mapper for Linux, to balance the workload across paths. Typically, least service time or a similar multipathing algorithm provides better performance than basic sequential distribution.
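The four-port guidance for DSS follows from simple bandwidth arithmetic. Assuming roughly 400 MB/s of usable bandwidth per 4 Gb Fibre Channel port (a planning assumption, not a measured figure), the roughly 1 GB/s of DSS throughput shown in Figure 10 requires at least three ports, and a fourth adds headroom and path redundancy:

\[ n_{\mathrm{ports}} \ge \left\lceil \frac{1000\ \mathrm{MB/s}}{400\ \mathrm{MB/s\ per\ port}} \right\rceil = 3 \]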

Database administrators
Best practices for database administrators include proper ASM configuration:
• Create at least two ASM disk groups using external redundancy (see the sketch following this list). EVA Vraid 1 redundancy, with its virtualized striping, supports Oracle's Stripe and Mirror Everything best practice.
• Present multiple LUNs, as needed, to each of the ASM disk groups. Having a minimum of four LUNs for each ASM disk group to stripe across is an ASM best practice.
• Place online redo logs and database files in the first ASM disk group, and place backup files/archive logs in the second ASM disk group.
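The sketch below illustrates that layout; it is an example only, and the disk group names, device paths, and recovery area size are assumptions rather than this project's actual values. The CREATE DISKGROUP statements run in the ASM instance and the ALTER SYSTEM statements in the database instance:

-- Two ASM disk groups with external redundancy (the EVA's Vraid provides
-- the mirroring), each striped across four presented LUNs.
CREATE DISKGROUP DATA_DG1 EXTERNAL REDUNDANCY
  DISK '/dev/mpath/eva_lun01', '/dev/mpath/eva_lun02',
       '/dev/mpath/eva_lun03', '/dev/mpath/eva_lun04';

CREATE DISKGROUP BACKUP_DG1 EXTERNAL REDUNDANCY
  DISK '/dev/mpath/eva_lun05', '/dev/mpath/eva_lun06',
       '/dev/mpath/eva_lun07', '/dev/mpath/eva_lun08';

-- Data files and online redo logs go to the first disk group;
-- archive logs and backups go to the second.
ALTER SYSTEM SET db_create_file_dest        = '+DATA_DG1'   SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size = 500G          SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest      = '+BACKUP_DG1' SCOPE=BOTH;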


Conclusion
There has been significant interest in applying the lessons learned and feedback obtained from previous projects using a single database to the current project, which uses multiple databases. With this project, we develop best practices for running multiple Oracle 11g databases on ProLiant servers in a RAC environment with an EVA8100 storage array. Key areas we address are:
• How to effectively configure, deploy, and operate a midsize Oracle 11g environment using HP ProLiant servers with an EVA8100 storage array for 2,000 users with 3 TB of stored data spread across two databases.
• The recommended servers, their configuration, tuning guidelines, and the techniques for deploying the software components that run on each server.
• The required EVA configuration, and best practices for determining the optimal number, size, and layout of the EVA disk groups and the ASM disk groups for multiple databases.

Key takeaways include:
• By increasing the SGA from 4,500 MB to 6,500 MB, database TPS increased by 36% while host requests on the EVA decreased by 61%. Customers should try various SGA tuning tactics in their test environments to find the optimal balance between performance and storage usage.
• If the Flashback Area is configured, Oracle places an implicit mirror copy of the online redo logs into it. This can affect performance, especially if the backup disk group is configured with fewer drives or slower spindles. The simplest solution is to remove the mirrored online redo logs from the Flashback Area.
• During testing, multiple configurations are used, and different configurations yield varying performance improvements depending on the type of workload being performed. It is important to determine the proper configuration to use, given today's workload and where the expected workload growth will come from, especially in mixed workload environments.
• HP improved the database performance of the OLTP workload by 2% to 4%, based on the storage array configuration.
• HP improved the database performance of the DSS workload by 6% to 9%, based on the storage array configuration.

We value your feedback


In order to develop technical materials that address your information needs, we need your feedback. We appreciate your time and value your opinion. The following link takes you to a short survey regarding the quality of this paper: http://hpwebgen.com/Questions.aspx?id=12046&pass=41514


Appendix A Bill of materials


Qty   Part number   Description

Oracle RAC database servers
4     390524-405    HP ProLiant DL585 G1
8     AE369A        HP StorageWorks FC1243 Dual Channel 4Gb PCI-X 2.0 HBA

HP OpenView Performance Manager server
1     336549-002    HP ProLiant DL320 G2

HP Rapid Deployment Pack server
1     336549-002    HP ProLiant DL320 G2

HP Command View EVA management server
1     336549-002    HP ProLiant DL360 G3

Benchmark Factory servers
2     391112-001    HP ProLiant DL385 G1

HP storage array
1     AG702A        HP StorageWorks Enterprise Virtual Array 8100 (2C12D)
168   364621-B22    HP StorageWorks 146 GB 15K FC HDD (BF14658244)

Infrastructure
2     A7394A        HP StorageWorks 4/32 SAN Switch
1     J4879A        HP ProCurve Switch 2724 (RAC A private inter-node)
1     J4903A        HP ProCurve Switch 2824 (RAC B private inter-node)


For more information


HP solutions sites
HP Customer Focused Testing http://www.hp.com/go/hpcft HP & Oracle alliance http://h71028.www7.hp.com/enterprise/cache/4281-0-0-0-121.html Network Storage Services http://h20219.www2.hp.com/services/cache/10825-0-0-225-121.aspx

HP technical references
Best Practices for Oracle 10g with Automatic Storage Management and HP StorageWorks Enterprise Virtual Array white paper http://h71028.www7.hp.com/enterprise/cache/429462-0-0-225-121.html HP StorageWorks Enterprise Virtual Array Configuration Best Practices white paper ftp://ftp.compaq.com/pub/products/storageworks/whitepapers/5982-9140EN.pdf

HP product sites
HP StorageWorks Enterprise Virtual Arrays http://h18006.www1.hp.com/products/storageworks/eva/index.html HP ProLiant DL Servers http://h10010.www1.hp.com/wwpc/pscmisc/vac/us/en/ss/proliant/proliant-dl.html B-Series SAN Switches http://h18006.www1.hp.com/storage/networking/b_switches/san/index.html Multi-Path Options for HP Arrays http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html Fibre Channel Host Bus Adapters http://h18006.www1.hp.com/storage/saninfrastructure/hba.html HP OpenView Performance Manager & Agent http://h20229.www2.hp.com/products/ovperf/index.html

Oracle
Oracle Database Installation Guide 11g Release 1 (11.1) for Linux http://download.oracle.com/docs/cd/B28359_01/install.111/b32002.pdf Oracle Real Application Clusters Installation Guide 11g Release 1 (11.1) for Linux and UNIX http://download.oracle.com/docs/cd/B28359_01/install.111/b28264.pdf


Oracle Clusterware Installation Guide 11g Release 1 (11.1) for Linux http://download.oracle.com/docs/cd/B28359_01/install.111/b28263.pdf

Quest software
Benchmark Factory for Databases (Database Performance and Scalability Testing) http://www.quest.com/benchmark_factory/

© 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Linux is a U.S. registered trademark of Linus Torvalds. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. 4AA1-6194ENW, June 2008

