
www.veritest.com • info@veritest.com

November 2006

Network Appliance FAS3070 and EMC CLARiiON CX3-80: Comparison of Performance and Usability
Test report prepared under contract from Network Appliance

Executive summary
Network Appliance™ commissioned VeriTest, a service of Lionbridge Technologies Inc., to compare the performance and usability of the NetApp® FAS3070 and the EMC® CLARiiON® CX3-80. Our performance tests measured performance over a Fibre Channel (FC) SAN using an OLTP workload: we first evaluated maximum aggregate system performance, and then the performance that could be delivered for a single workload. Our usability tests measured the elapsed time required to perform common administrative tasks associated with storage provisioning, data backup/restoration, and cloning. All testing was performed in FC-SAN configurations for dual-controller systems, using best practices published by each vendor. Please refer to the Testing Methodology section of this report for complete details on how we conducted both the usability and performance testing on the FAS3070 and CX3-80, and to the Test Results section for full details of the results of the testing.

Performance Tests – Summary Results

Key findings
• During performance tests using 200 disk drives, we found that the FAS3070 configured with dual-parity RAID-DP™ delivered 10 percent higher aggregate performance and 8 percent lower average latency compared to the CX3-80 configured with RAID 5.
• Using a provisioned 400GB LUN and the FCP protocol, the FAS3070 configured with RAID-DP generated 6.8 times the performance and 85 percent lower average latency compared to the CX3-80 using RAID 1/0. When the same test was conducted after provisioning the 400GB LUN on a MetaLUN, the FAS3070 generated 1.95 times the performance and 49 percent lower average latency compared to the CX3-80.
• Using the snapshot capability of the CX3-80 during performance testing resulted in a sustained 50 percent drop in overall performance. We found no sustained degradation in overall performance from using the Snapshot™ technology available on the FAS3070.
• Provisioning our enterprise-class OLTP database on the CX3-80 required 39 percent more physical disks than on the FAS3070. NetApp FlexVol™ technology allowed us to provision the database using a total of 56 physical disks, compared to 78 disks required to provision the same database on the CX3-80.
• In our test configurations, a series of typical provisioning and administrative tasks required significantly less time on the FAS3070 than on the CX3-80. For example, creating a clone of a 400GB LUN required over 27 minutes on the CX3-80 compared to only 7 seconds on the FAS3070.

Our first performance test was designed to measure the maximum aggregate performance available for a consolidated storage environment with OLTP applications including Microsoft® Exchange, Oracle® databases, and SQL Server databases. We configured both the FAS3070 and CX3-80 with 200 15K RPM disks to measure the performance over FC-SAN. For all of our performance tests, we used IOMeter to generate an OLTP workload consisting of 60% random read and 40% random write operations, all using an 8KB request size over FCP. For this test, the FAS3070 was configured with dual-parity RAID-DP (NetApp's implementation of RAID 6), and the CX3-80 was configured with single-parity RAID 5. We used four dual-processor host systems running Windows® Server 2003 Enterprise Edition to generate the load across both storage processors using all 8 FC ports available on both the FAS3070 and CX3-80. In this test, we found the FAS3070 delivered approximately 10 percent higher performance (31,109 IOPS vs. 28,352 IOPS) and 8 percent lower average latency (66 ms vs. 72 ms) compared to the CX3-80.

Figure 1 provides the results of another series of tests that measured the performance that can be achieved with a single application by simulating the load of an OLTP database against one of the available storage processors. On both the FAS3070 and CX3-80, we used a 400GB OLTP database LUN representing the Oracle OLTP production database created during the enterprise-class database provisioning exercise (8 disks and RAID 1/0). We subjected the LUN to the OLTP workload described above using a single Windows host system and one FCP connection. In this test, we found that the FAS3070 generated 6.8 times the performance (14,114 IOPS vs. 2,069 IOPS) and 85 percent lower average latency (18 ms vs. 124 ms) compared to the CX3-80.
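To make the workload concrete, the access pattern can be sketched in a few lines of Python. This is an illustrative assumption of how a 60/40 random 8KB mix is generated; the actual tests used IOMeter, not this code:

```python
import random

BLOCK = 8 * 1024          # 8KB request size used throughout the tests
LUN_BYTES = 400 * 2**30   # 400GB LUN used in the single-workload tests
READ_PCT = 0.60           # 60% random reads / 40% random writes

def next_io(rng=random):
    """Return one (op, offset, size) request in the 60/40 random mix."""
    op = "read" if rng.random() < READ_PCT else "write"
    # Pick a random block-aligned offset anywhere in the LUN
    offset = rng.randrange(LUN_BYTES // BLOCK) * BLOCK
    return op, offset, BLOCK
```

Because every request lands on a random block-aligned offset, this kind of workload defeats read-ahead caching and stresses the disks directly, which is why spindle count matters so much in the results below.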
[Figure 1 chart: IOPS and latency (ms) for the FAS3070 as provisioned vs. the CX3-80 using a striped MetaLUN, a concatenated MetaLUN, and an 8 disk RAID 1/0 group]

Figure 1: Test Results for Provisioned Performance Using Provisioned 400GB OLTP LUN

On the CX3-80, we redeployed the above 400GB LUN on a series of MetaLUNs. A MetaLUN is a group of identical smaller LUNs bound together to provide storage spanning a larger number of physical drives. EMC recommends using MetaLUNs to support applications that generate large numbers of IOPS under OLTP workloads. The goal of this test was to measure the performance improvement afforded by a MetaLUN when subjected to an OLTP workload. For this test on the CX3-80, we created a total of four identical 8-disk RAID 1/0 groups and used them to create a single MetaLUN containing a total of 32 physical disks. We created MetaLUNs using both the stripe and concatenation methods, and then ran the identical OLTP workload used to generate the results described above. As Figure 1 above shows, adding 24 more disk drives in the MetaLUN significantly improved the
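The difference between the two MetaLUN construction methods can be illustrated with a small address-mapping sketch. The stripe element size below is an illustrative assumption, not the value used on the CX3-80:

```python
STRIPE_ELEMENT = 64 * 1024     # hypothetical stripe element size (64KB)
COMPONENT_BYTES = 100 * 2**30  # each component LUN is 100GB
NUM_COMPONENTS = 4             # four 8-disk RAID 1/0 groups

def striped_component(offset):
    """Striped MetaLUN: stripe elements rotate across all components,
    so even a narrow address range touches all underlying disks."""
    return (offset // STRIPE_ELEMENT) % NUM_COMPONENTS

def concatenated_component(offset):
    """Concatenated MetaLUN: each component LUN is filled in turn; a
    random workload over the full range still reaches every group."""
    return offset // COMPONENT_BYTES
```

Under a fully random workload spanning the whole 400GB address range, both mappings spread I/O across all 32 disks, which is consistent with the nearly identical striped and concatenated results reported below.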

overall performance on the CX3-80. In these cases, we found the FAS3070 delivered approximately 1.95 times the performance and 49 percent lower average latency (18 ms vs. 35 ms) compared to the CX3-80 when using either a concatenated or striped MetaLUN.

For the final performance tests, we measured the impact on overall performance when using the snapshot capabilities of both the FAS3070 and CX3-80. The first test evaluated the performance impact of generating a single snapshot, and the second test evaluated the impact of a series of snapshots. Both tests were run over a 70 minute period. For both of these tests, we used IOMeter to generate the same OLTP workload as discussed above for the test cases using the single 400GB OLTP database LUN. We set the run time in the IOMeter test script to 70 minutes and let the test run continuously. For both the FAS3070 and CX3-80, we allowed the IOMeter test script to run for 10 minutes and then created a single snapshot copy of the provisioned 400GB OLTP production database LUN. During the second test, we created a series of 30 snapshots of the OLTP database LUN at two minute intervals on the FAS3070, and a series of 8 snapshots of the OLTP database on the CX3-80 at 5 minute intervals. During these tests, we found that the CX3-80 supported a maximum of 8 simultaneous snapshot copies.

Figure 2 shows the results of the snapshot performance testing when taking a single snapshot. These results compare the performance recorded while creating a snapshot to the overall performance when no snapshots are taken. A value of 100 indicates that there was no difference between the performance recorded when taking a snapshot and when not conducting the snapshot process. Data points less than 100 indicate the percentage of performance degradation between the baseline configuration, where no snapshots were performed, and the configuration where we conducted snapshots.
[Figure 2 chart: Snapshot Performance When Taking a Single Snapshot; performance relative to the no-snapshot baseline vs. elapsed time (minutes), FAS3070 with 1 snapshot and CX3-80 with 1 snapshot]

Figure 2: Results for Snapshot Performance Tests Using Provisioned 400GB OLTP LUN

In our test configurations, we found that creating a single snapshot copy had no sustained impact on the overall performance of the FAS3070 over the course of the test. On the CX3-80, creating a single snapshot 10 minutes into the test period caused the overall performance level to drop approximately 50 percent, improving only marginally during the remainder of the test period. Similar results were observed for the multi-snapshot tests; details are provided in the Test Results section of this document.


Usability Tests – Summary Results

The first usability test case consisted of designing and provisioning a sample corporate enterprise-class database configuration consisting of multiple applications and databases, with the goal of determining the number of physical disks required to provide storage for the RAID groups and LUNs comprising that configuration. The database configuration consisted of 20 LUNs comprising just over 3.4TB of physical storage. Using NetApp FlexVol technology, we provisioned the database on the FAS3070 using a total of 56 physical disks. Using a combination of RAID 1/0 and RAID 5, provisioning the same enterprise-class database on the CX3-80 required a total of 78 physical disks, or 39 percent more disks than on the FAS3070.

After creating the database LUNs for the enterprise-class database, we used them to conduct a series of tests to measure the number of steps and the time required to complete common provisioning tasks on both the FAS3070 and CX3-80. These tasks included, but were not limited to, creating and expanding LUNs, creating LUN clones, and creating and restoring snapshots. For each test case, we measured the number of steps and the elapsed time required to complete the specific task. Figure 3 shows the elapsed time required to complete each provisioning task, as well as the difference in time required between the FAS3070 and CX3-80. In general, we found that the number of steps required to perform the tests was comparable between the FAS3070 and the CX3-80. However, we found that performing these tasks on the FAS3070 required significantly less time than on the CX3-80. Please refer to Appendix A of this report for complete details of the steps required to complete each task in Figure 3.
Usability/Provisioning Test Case | FAS3070 Elapsed Time (hr:min:sec) | CX3-80 Elapsed Time (hr:min:sec) | Difference
Measure Time Required to Create RAID Groups, Volumes and LUNs (including transitioning time) | 0:15:35 | 0:38:15 | 2.5X
Measure Time Required to Create MetaLUNs on the CX3-80 | N/A | 0:07:00 (striped MetaLUN) / 0:06:50 (concatenated MetaLUN) | N/A
Measure Time Required to Extend the Size of the Oracle OLTP Database | 0:00:21 | 3:14:12 (striped) / 0:00:24 (concatenated) | >500X (striped) / 1.1X (concatenated)
Measure Time Required to Create Snapshot Copies (per snapshot) | 0:00:05 | 0:00:07 | 1.4X
Measure Time Required to Restore Snapshot Copies (per snapshot) | 0:00:18 | 0:01:30 | 5X
Measure Time Required to Clone LUNs | 0:00:07 | 0:27:13 | 233X

Figure 3. Usability and Provisioning Test Results Summary
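The "Difference" column follows directly from the elapsed times. As a quick check (times taken from Figure 3):

```python
def to_seconds(hms):
    """Convert an h:mm:ss elapsed-time string to whole seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

# LUN cloning: 0:27:13 on the CX3-80 vs. 0:00:07 on the FAS3070
clone_ratio = to_seconds("0:27:13") / to_seconds("0:00:07")
print(round(clone_ratio))  # 233

# Creating RAID groups, volumes and LUNs: 0:38:15 vs. 0:15:35
create_ratio = to_seconds("0:38:15") / to_seconds("0:15:35")
print(round(create_ratio, 1))  # 2.5
```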


Detailed Test Results
This section provides all the results for the usability and provisioning tests as well as the performance tests we conducted using the FAS3070 and the CX3-80. Please refer to the Testing Methodology section of this report for complete details on how we conducted the tests.

Performance Testing Results Using a Single OLTP LUN
Results of Performance Test Cases #1-2: Performance Using the OLTP Production Database LUN
This section provides the results of the performance testing conducted against the OLTP database LUN as provisioned during the ACME Company database provisioning exercise described in this test report. For these tests, we used the following configurations:

• NetApp FAS3070 using Fibre Channel host attach with Fibre Channel disk drives
• EMC CX3-80 using Fibre Channel host attach, Fibre Channel disk drives, and the 400GB OLTP database LUN provisioned on:
  o An 8 disk RAID 1/0 configuration with Fibre Channel disk drives
  o A striped MetaLUN containing a total of 32 disk drives using the original 8 disk RAID 1/0 OLTP LUN as the base
  o A concatenated MetaLUN containing a total of 32 disk drives using the original 8 disk RAID 1/0 OLTP LUN as the base

Figure 4 below shows the results of the performance testing, in IOPS and latency in milliseconds, for both the FAS3070 and the CX3-80 when using the provisioned 400GB OLTP database LUN. For this test, we used an OLTP workload consisting of 60% random read and 40% random write operations, all using an 8KB request size.
[Figure 4 chart: IOPS and latency (ms) for the FAS3070 as provisioned vs. the CX3-80 using a striped MetaLUN, a concatenated MetaLUN, and an 8 disk RAID 1/0 group]

Figure 4. Average IOPS for Performance Testing Using the Oracle OLTP Database LUN

Using the LUN representing the Oracle OLTP production database as initially provisioned (8 disks, RAID 1/0), we found that the FAS3070 using the Fibre Channel protocol generated 6.8 times the performance (14,114 IOPS vs. 2,069 IOPS) and 85 percent lower average latency (18 ms vs. 124 ms) compared to the CX3-80 using the Fibre Channel protocol. This result can be most directly explained by the fact that the FlexVol technology used on the FAS3070 provided a total of 28 physical disks to share the workload, compared to the original 8 disk RAID 1/0 configuration used on the CX3-80.

EMC best practices recommend using MetaLUNs to improve performance for random workloads that generate high levels of random read and write traffic using small request sizes, which is essentially the same type of workload we used during these tests. As a result, we deployed the LUN representing the Oracle OLTP production database on a series of MetaLUNs created using a total of 32 physical drives and repeated the test above to gauge the performance improvement afforded by a MetaLUN. For these tests, we did not expand the initial OLTP database LUN used for the initial tests. Instead, we created a total of four identical RAID 1/0 groups, each using 8 disks and containing a single 100GB LUN. We then combined those four RAID groups into a MetaLUN using either the stripe or the concatenation method, and allowed IOMeter to create a single 400GB data file directly on the MetaLUN for use during the testing.

As expected, adding more disk drives in a MetaLUN significantly improved the overall performance on the CX3-80 compared to the original Oracle OLTP production LUN configured on an 8 disk RAID 1/0 group. We found the overall performance was virtually identical whether the MetaLUN was created using the stripe method or the concatenation method (7,218 IOPS vs. 7,229 IOPS, respectively).
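The gap is consistent with simple per-disk arithmetic on the figures above; a back-of-the-envelope check, ignoring cache effects:

```python
# Report figures for the single 400GB LUN test
fas_iops, fas_disks = 14114, 28   # FAS3070: FlexVol aggregate spread over 28 disks
cx_iops, cx_disks = 2069, 8       # CX3-80: original 8-disk RAID 1/0 group

print(round(fas_iops / cx_iops, 1))   # 6.8x aggregate performance
print(round(fas_iops / fas_disks))    # ~504 IOPS per disk on the FAS3070
print(round(cx_iops / cx_disks))      # ~259 IOPS per disk on the CX3-80
print(round(100 * (124 - 18) / 124))  # 85 percent lower average latency
```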
When compared to the results of the MetaLUN performance testing, we found that the performance generated on the FAS3070 was approximately 1.95 times the performance and 49 percent lower average latency (18 ms vs. 35 ms) compared to the CX3-80 when using either a concatenated or striped MetaLUN.

Results of Performance Test Case #3: Measure Relative Performance Impact When Taking Snapshot Copies of the Provisioned OLTP Production LUN
This section provides the details of the snapshot performance testing conducted on both the FAS3070 and the CX3-80. We measured the relative performance impact of creating a series of snapshot copies of the 400GB LUN representing the OLTP production database created during the provisioning tests. For this test, we used an IOMeter test script that generated an OLTP workload containing a mixture of 60% random reads and 40% random writes using an 8KB request size. The test script ran for a total of 70 minutes and included a 120 second ramp up phase. Please refer to the Test Methodology section of this report for complete details on how we conducted these tests. We tested the following configurations:

• For both the FAS3070 and CX3-80, allow the script to run the full 70 minutes without taking any snapshots. These results are used as the baseline from which to compare the results generated while taking snapshots.
• For the FAS3070, allow the IOMeter script to run for 10 minutes, generate a single snapshot copy, and allow the IOMeter test to run to completion.
• For the CX3-80, allow the IOMeter script to run for 10 minutes, generate a single snapshot copy, and allow the IOMeter test to run to completion.
• For the FAS3070, allow the IOMeter script to run for 10 minutes and then begin generating a series of 30 snapshot copies at 2 minute intervals during the course of the 70 minute test run.
• For the CX3-80, allow the IOMeter script to run for 10 minutes and then begin generating a series of 8 snapshot copies at 5 minute intervals. The CX3-80 allows a maximum of 8 snapshots for a given LUN. After the final snapshot is created, allow the test to run to completion with no additional snapshot copies created.


The multiple-snapshot tests described above were designed to measure the impact on performance when taking a large number of snapshots over a relatively short amount of time, which is likely a worst case scenario. The single-snapshot tests measured the impact on performance when users create snapshots less frequently. During the 70 minute duration of the test, we also ran the Performance Monitor application from Microsoft on the host system running Windows Server 2003 to measure the read and write activity on the logical volumes being accessed on both the FAS3070 and the CX3-80 during the testing. We configured Performance Monitor to capture the following logical disk counters at 10 second intervals during the testing:

• Number of disk reads per second
• Number of disk writes per second
• Average read latency in seconds per operation
• Average write latency in seconds per operation

By monitoring these counters while taking the snapshot copies, we were able to determine the impact of the snapshot process on overall performance over the entire 70 minute test. This was not possible using IOMeter alone, as it reports only a single average IOPS metric calculated over the entire test run time.

To compute the results presented for this test, we recorded the total IOPS values generated at each 10 second interval using Performance Monitor over the course of a run with no snapshot process, and used these values as our baseline. This baseline represented the overall performance of the test configuration when not performing the snapshot process. We then recorded the total IOPS values at each 10 second interval over the course of a run in which the snapshot process was performed. At each data point, we computed the difference in the number of IOPS between the baseline configuration and the configuration where we conducted the snapshot process, expressed as a percentage of the baseline value, to see how performance was impacted over the course of the testing as a result of the snapshot process.

Figure 5 below shows the results of the tests when taking just a single snapshot over the course of the 70 minute test period. The chart compares the performance generated during the test when performing the snapshot process to the performance generated when not performing the snapshot process. A value of 100 indicates that there was no difference between the performance recorded when taking a snapshot and when not conducting the snapshot process.
Data points less than 100 indicate the percentage of performance degradation between the baseline configuration, where no snapshots were performed, and the configuration where we conducted snapshots. In our test configurations, we found that creating a single snapshot copy on the FAS3070 had no sustained impact on overall performance over the course of the test. We observed a brief period, coinciding with the creation of the snapshot, where FAS3070 performance dropped to approximately 90 percent of that generated without the snapshot process. However, the results show that the overall performance of the FAS3070 recovered to the levels generated using the baseline configuration and remained there for the remainder of the test. On the CX3-80, creating a single snapshot 10 minutes into the test period caused the overall performance level to drop to approximately 50 percent of the baseline performance generated with no snapshots taken. Additionally, the results show that the post-snapshot performance did not recover to the levels observed before the snapshot was taken.
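The per-interval comparison described above reduces to a simple calculation. The sample values below are hypothetical, not measured data:

```python
def relative_to_baseline(baseline, with_snapshots):
    """Express each 10-second IOPS sample from the snapshot run as a
    percentage of the baseline (no-snapshot) sample at the same interval."""
    return [100.0 * s / b for b, s in zip(baseline, with_snapshots)]

# Hypothetical samples: performance halves after a snapshot is taken
baseline = [30000, 30000, 30000, 30000]
snapshot_run = [30000, 29800, 15000, 16500]
print(relative_to_baseline(baseline, snapshot_run))
```

Plotting this series over the full 70 minute run is exactly what Figures 5 and 6 show: a value of 100 means no degradation, and a sustained value near 50 means the snapshot process cut throughput roughly in half.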


[Figure 5 chart: Snapshot Performance When Taking a Single Snapshot; performance relative to the no-snapshot baseline vs. elapsed time (minutes), FAS3070 with 1 snapshot and CX3-80 with 1 snapshot]

Figure 5. Snapshot Performance Results Using a Single Snapshot

Figure 6 below compares the performance of both the FAS3070 and CX3-80 when each is subjected to a series of snapshots over the course of the 70 minute test duration. For the FAS3070, we created a series of 30 snapshots at 2 minute intervals after letting the IOMeter script run for a 10 minute ramp up period. When testing the FAS3070, we observed brief periods, coinciding with the creation of the snapshots, where performance dropped to approximately 80 percent of that generated without the snapshot process. However, the results show that after each of the 30 snapshots, the overall performance of the FAS3070 recovered to the levels generated using the baseline configuration.

Because the CX3-80 has a limit of 8 active snapshots, we created a series of 8 snapshots at 5 minute intervals on the CX3-80 over the course of the 70 minute test duration. As was the case when creating a single snapshot, we observed a drop in performance of approximately 50 percent after the first snapshot. During the remainder of the testing, we conducted an additional seven snapshot copies at 5 minute intervals and observed no additional significant drops in the performance of the CX3-80. Unlike the FAS3070, however, the overall performance recorded on the CX3-80 between the snapshot copies did not recover to pre-snapshot levels. After conducting the last of the eight snapshot copies, we allowed the test to continue against the CX3-80 for the remaining 20 minutes of the test. During this time, the overall performance of the CX3-80 did not return to the level of performance observed before we began the snapshot process.
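The two snapshot schedules can be written out explicitly (minutes into the 70 minute run; the exact firing times are an assumption based on the intervals described above):

```python
# FAS3070: 30 snapshots at 2 minute intervals after a 10 minute ramp
fas_schedule = list(range(10, 70, 2))
# CX3-80: 5 minute intervals, capped at the array's limit of 8 active snapshots
cx_schedule = list(range(10, 70, 5))[:8]

print(len(fas_schedule), fas_schedule[0], fas_schedule[-1])  # 30 10 68
print(len(cx_schedule), cx_schedule[0], cx_schedule[-1])     # 8 10 45
```

The FAS3070 schedule keeps snapshots firing essentially until the end of the run, while the CX3-80 hits its 8-snapshot cap well before the 70 minutes elapse.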


[Figure 6 chart: Snapshot Performance When Taking Multiple Snapshots; performance relative to the no-snapshot baseline vs. elapsed time (minutes), FAS3070 with 30 snapshots and CX3-80 with 8 snapshots]

Figure 6. Snapshot Performance Results Using Multiple Snapshot Copies

Performance Test Result Using 200 Disk Drives
Performance Test Cases #3-5: Measure Performance of the FAS3070 and CX3-80 Using 200 Drives
This section provides the results for the performance testing on both the FAS3070 and CX3-80 using 200 drives with an OLTP workload containing a mixture of 60% random reads and 40% random writes using an 8KB request size over the FCP protocol. We chose 200 disk drives so that we could test both the FAS3070 and CX3-80 with the same number of disk drives, LUNs, and data set. For the FAS3070 we tested only RAID-DP configurations using FCP host connections, while we tested the CX3-80 using both RAID 5 and RAID 1/0 with the FCP protocol.

Figure 7 below shows the performance test results in IOPS and latency in milliseconds for each test configuration. In this configuration, we found that the FAS3070 configured with RAID-DP generated approximately 10 percent better performance measured in IOPS (31,109 IOPS vs. 28,352 IOPS) and 8 percent lower average latency (66 ms vs. 72 ms) compared to the CX3-80 configured with RAID 5.

The CX3-80 system was also tested with storage in a mirrored RAID 1/0 configuration. We recognized that comparing RAID-DP or RAID 5 to RAID 1/0 is a comparison of different storage deployments with very different cost and efficiency characteristics. For example, replicating the RAID 5 test configuration using RAID 1/0 would have required significantly more physical disk drives on the CX3-80. As the CX3-80 configuration used for these tests contained 210 physical drives, we chose to use the same set of 200 disk drives configured for the RAID 5 test and simply doubled the number of LUNs per RAID 1/0 group compared to RAID 5. We found that the CX3-80 configured with RAID 1/0 generated approximately 7 percent better performance measured in IOPS (33,221 IOPS vs. 31,109 IOPS) and 8 percent lower average latency (61 ms vs. 66 ms) compared to the FAS3070 configured with RAID-DP.


We conducted these tests using 2Gb/s FCP connections between the host systems and the storage processors. Due to the nature of the tests, the throughput obtained was not sufficient to saturate the eight 2Gb/s connections used for these tests. As a result, using 4Gb/s FCP connections would not have improved performance in these tests for either the FAS3070 or CX3-80. During the testing with the CX3-80 we worked with EMC technical support and performance engineers to ensure the results we generated were optimal for the specific CX3-80 configuration tested. As a result, we set the following options on the CX3-80:

• Cache levels were left at their default values
• Block Size was increased to 16K from the default of 8K
• Cache Write Aside was set to 511 from the default of 2048

We made no changes to the default FAS3070 configuration for these performance tests.
[Figure 7 chart: IOPS and latency (ms) for the CX3-80 with RAID 5, the FAS3070 with RAID-DP, and the CX3-80 with RAID 1/0, using 200 disk drives]

Figure 7. Average IOPS Results for Performance Testing Using 200 Disk Drives

Usability/Provisioning Test Results
For these tests, we compared the features and functionality that users of these technologies encounter in day-to-day operations to solve typical problems related to storage provisioning, data backup and restoration, and cloning. We looked at the ease with which these features are used from an administrative perspective. Additionally, we measured the time required to perform a set of typical administrative tasks using the features provided by both the FAS3070 and CX3-80 products. For all test cases, we conducted a "dry run" of the setup and configuration of the specific task on both the FAS3070 and CX3-80 systems to allow our engineers to familiarize themselves with the setup procedure and to allow time for consulting the proper documentation before timing the actual operations. We used publicly available best practices documentation from both NetApp and EMC to create a plan to provision both the FAS3070 and the CX3-80 for use in a corporate environment consisting of multiple


applications and multiple databases. Specifically, we used the following documentation from both NetApp and EMC:

• NetApp: Block Management with Data ONTAP™ 7G: FlexVol™, FlexClone™, and Space Guarantees (http://www.netapp.com/tech_library/3348.html)
• NetApp: Thin Provisioning in a NetApp SAN Environment (http://www.netapp.com/library/tr/3483.pdf)
• EMC: EMC CLARiiON Best Practices for Fibre Channel Storage – CLARiiON Release 22 Firmware Update

Results of Provisioning Test Case #1: Storage Provisioning of ACME Database Environment
Figure 8 below summarizes the results for this database provisioning test. This test reflects a representative storage system on which several applications are consolidated. For purposes of this testing, the simulated company "ACME" has 3000 employees running Windows PCs, one large Exchange database serving its email needs, two internal departmental SQL Server databases for Payroll and HR, and one large Oracle OLTP database. ACME needs two additional copies of each of its SQL Server and Oracle databases, one for the Quality Assurance (QA) teams and one for the development teams. ACME initially will access all of its storage over a Fibre Channel SAN.

We found that deploying the 3.4TB of database and log LUNs required a total of 56 physical disk drives on the FAS3070 compared to 78 physical disk drives on the CX3-80. The CX3-80 therefore required just over 39% more disk drives for a best-practices configuration matching the enterprise workload used for these tests. To provision ACME's Oracle OLTP production database to effectively handle peak loads of 5000 IOPS, the recommended EMC solution was to create a MetaLUN using a minimum of 28 disks. Provisioning the Oracle OLTP production database as a MetaLUN on the CX3-80 required a total of 98 physical disks to provision all database and log files, 75% more disks than were required using the FAS3070.

Measure | FAS3070 | CX3-80 | Percentage Increase
Total space required for LUNs | 3.44 TB | 3.44 TB | N/A
Total disk drives required, base test configuration | 56 | 78 | 39%
Total disk drives required, enabling high-performance Oracle DB (5,000 IOPS) | 56 | 98 | 75%

Figure 8. FAS3070 vs. CX3-80 ACME Company Database Provisioning Space Summary

NetApp's FlexVol technology allowed us to provision the LUNs representing the Exchange and Oracle OLTP production database and log files on separate storage controllers using a RAID-DP configuration, while allowing the other non-critical development and QA databases to share the same disks. This allowed the Exchange and Oracle OLTP databases to each use a total of 28 physical disks, ensuring an adequate number of disks to support OLTP environments with high levels of random read and write traffic. The higher number of disks required for deployment on the CX3-80 can be attributed to EMC recommendations that RAID 1/0 configurations be used to support random OLTP workloads where the percentage of write operations is higher than 30 percent. Figure 9 below shows the specifics of how the various database and log files were laid out on the FAS3070.
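The percentage increases in Figure 8 are straightforward to verify from the disk counts:

```python
fas_disks = 56         # FAS3070, both configurations
cx_disks = 78          # CX3-80, base test configuration
cx_metalun_disks = 98  # CX3-80 with the Oracle MetaLUN for 5,000 IOPS

print(round(100 * (cx_disks - fas_disks) / fas_disks))         # 39
print(round(100 * (cx_metalun_disks - fas_disks) / fas_disks)) # 75
```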


Application | Storage Controller | RAID Level | LUN Size (GB)
Exchange DB | 1 | 6 | 400
Exchange Logs | 2 | 6 | 100
Payroll Prod DB | 2 | 6 | 200
Payroll Prod Logs | 1 | 6 | 40
Payroll QA DB | 2 | 6 | 200
Payroll QA Logs | 1 | 6 | 40
Payroll Devel DB | 1 | 6 | 200
Payroll Devel Logs | 2 | 6 | 40
HR Prod DB | 1 | 6 | 200
HR Prod Logs | 2 | 6 | 40
HR QA DB | 2 | 6 | 200
HR QA Logs | 1 | 6 | 40
HR Devel DB | 1 | 6 | 200
HR Devel Logs | 2 | 6 | 40
Oracle Prod DB | 2 | 6 | 400
Oracle Prod Logs | 1 | 6 | 100
Oracle QA DB | 1 | 6 | 400
Oracle QA Logs | 2 | 6 | 100
Oracle Devel DB | 2 | 6 | 400
Oracle Devel Logs | 1 | 6 | 100
Totals | | | 3,440

Figure 9. FAS3070 Database Provisioning Details

As noted earlier, when conducting this exercise on the CX3-80, we utilized available EMC documentation and best practices guides to ensure an optimal layout. These documents contained detailed worksheets to help with planning, provisioning and sizing a deployment similar to the one we completed for this testing. As figure 10 shows, we found it required a total of 78 physical disk drives to deploy the ACME Company database environment on the CX3-80 compared to the 56 physical disk drives required for the same deployment performed on the FAS3070. Based on EMC best practices, we used the following rationale when provisioning the ACME storage on the CX3-80:

• We used RAID 1/0 for the Oracle OLTP production database because the expected database load contained more than 30 percent random writes using a small request size. We provisioned the logs on the other SP for performance.
• We used RAID 1/0 for the Exchange DB because it is a large email database.
• Because they were used in production, we utilized RAID 1/0 for both the production Payroll and HR databases.
• Because they were less critical and not production oriented, we utilized RAID 5 for the Payroll Development, Payroll QA, HR Development and HR QA databases and logs. We provisioned the database and log LUNs in the same RAID groups because of the non-production nature of the database and log files.
• Even though the Oracle QA and Oracle Development databases and logs are not used in production, we felt they were important enough to warrant provisioning using RAID 1/0.


Application                         SP   LUN Size (GB)   RAID Group   RAID Level   BUS    Physical Disks   Approx. Raw Space (GB)
Exchange DB                         B    400             0            1/0          B1E3   8                1,168
Exchange Logs                       A    100             1            1/0          B1E2   2                292
Payroll Prod DB                     A    200             2            1/0          B2E0   4                584
Payroll Prod Log                    A    40              3            1/0          B2E1   2                292
Payroll QA DB                       B    200             4            5            B2E2   5                730
Payroll QA Log                      B    40              4            5            B2E2   (shared)         (shared)
Payroll Development DB              A    200             5            5            B2E2   5                730
Payroll Development Log             B    40              5            5            B2E2   (shared)         (shared)
HR Prod DB                          B    200             6            1/0          B3E0   4                584
HR Prod Logs                        B    40              7            1/0          B3E1   2                292
HR QA DB                            A    200             8            5            B3E2   5                730
HR QA Logs                          A    40              8            5            B3E2   (shared)         (shared)
HR Development DB                   B    200             9            5            B3E2   5                730
HR Development Logs                 A    40              9            5            B3E2   (shared)         (shared)
Oracle Prod DB                      A    400             10           1/0          B0E3   8                1,168
Oracle Prod Logs                    B    100             11           1/0          B0E2   2                292
Oracle QA DB                        B    400             12           1/0          B0E1   8                1,168
Oracle QA Logs                      B    100             13           1/0          B0E1   2                292
Oracle Development DB               A    400             14           1/0          B0E0   8                1,168
Oracle Development Logs             A    100             15           1/0          B0E0   2                292
Snapshot Space for Oracle Prod DB   A    400             16           5                   3                438
Snapshot Space for Exchange DB      B    400             17           5                   3                438
Totals                                   3,440                                            78               11,388

Note: non-production log LUNs share a RAID group with their database LUN; their disks and raw space are counted once, on the database row.

Figure 10: CX3-80 Database Provisioning Details The tables above provide only the minimal set of disks on which to provision the LUN representing the Oracle OLTP production database. One of the criteria for this test was to provide an option to allow the Oracle OLTP production database to effectively handle peak loads of up to 5,000 IOPS consisting of a mixture of small, random read and write operations in the ratio of 60 percent read and 40 percent write operations. EMC best practices recommend the usage of MetaLUNs containing larger numbers of physical disks to support these higher levels of IOPS served up by today’s high performance OLTP databases. MetaLUNs combine multiple LUNs configured on smaller RAID groups into a storage unit that combines the individual disks of each of the smaller RAID groups into a LUN capable of handling larger numbers of random IOPS. When sizing the number of physical disks required to support a specific level of IOPS, EMC recommends associating 180 IOPS per each physical 15K RPM disk drive that will be contained in the MetaLUN. In this case, that equates to a minimum of 28 physical drives to handle the 5,000 IOPS expected by the ACME Oracle OLTP production database. Figure 11 below shows the difference in the total number of required disks had the Oracle OLTP production database been deployed as a MetaLUN capable of handling 5000 random IOPS. In the case of the CX3-80, creating a MetaLUN to handle 5,000 random IOPS would require a total of 98 disks or another 20 physical disks over and above the 78 disks initially required to provision the ACME database and log files.
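The 28-drive minimum above follows directly from EMC's 180-IOPS-per-15K-drive guideline. A minimal sketch of that arithmetic (the function name is ours; the 180 IOPS figure comes from the best practices cited in this section):

```python
import math

# Sketch of the sizing rule described above: EMC best practices associate
# roughly 180 IOPS with each 15K RPM drive in a MetaLUN.
def disks_for_iops(target_iops: int, iops_per_disk: int = 180) -> int:
    """Minimum number of 15K RPM drives needed to sustain target_iops."""
    return math.ceil(target_iops / iops_per_disk)

# ACME's Oracle OLTP production database must sustain 5,000 random IOPS.
print(disks_for_iops(5000))  # 28
```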


Application                         SP   LUN Size (GB)   RAID Group   RAID Level   BUS    Physical Disks   Approx. Raw Space (GB)
Exchange DB                         B    400             0            1/0          B1E3   8                1,168
Exchange Logs                       A    100             1            1/0          B1E2   2                292
Payroll Prod DB                     A    200             2            1/0          B2E0   4                584
Payroll Prod Log                    A    40              3            1/0          B2E1   2                292
Payroll QA DB                       B    200             4            5            B2E2   5                730
Payroll QA Log                      B    40              4            5            B2E2   (shared)         (shared)
Payroll Development DB              A    200             5            5            B2E2   5                730
Payroll Development Log             B    40              5            5            B2E2   (shared)         (shared)
HR Prod DB                          B    200             6            1/0          B3E0   4                584
HR Prod Logs                        B    40              7            1/0          B3E1   2                292
HR QA DB                            A    200             8            5            B3E2   5                730
HR QA Logs                          A    40              8            5            B3E2   (shared)         (shared)
HR Development DB                   B    200             9            5            B3E2   5                730
HR Development Logs                 A    40              9            5            B3E2   (shared)         (shared)
Oracle Prod DB as MetaLUN           A    400             10           1/0          B0E3   28               4,088
Oracle Prod Logs                    B    100             11           1/0          B0E2   2                292
Oracle QA DB                        B    400             12           1/0          B0E1   8                1,168
Oracle QA Logs                      B    100             13           1/0          B0E1   2                292
Oracle Development DB               A    400             14           1/0          B0E0   8                1,168
Oracle Development Logs             A    100             15           1/0          B0E0   2                292
Snapshot Space for Oracle Prod DB   A    400             16           5                   3                438
Snapshot Space for Exchange DB      B    400             17           5                   3                438
Totals                                   3,440                                            98               14,308

Note: non-production log LUNs share a RAID group with their database LUN; their disks and raw space are counted once, on the database row.

Figure 11. CX3-80 Provisioning Details Using a MetaLUN for the Oracle OLTP Production Database

As initially provisioned, the FAS3070 storage controller to which we assigned the ACME OLTP production database contained a total of 28 physical disks. Because of NetApp’s FlexVol technology, individual volumes and LUNs can effectively share a set of physical disks, allowing each to derive performance benefits from the larger number of available disk drives. Using the same level of 180 IOPS per physical disk recommended by EMC best practices, we found it was not necessary to provide additional disk drives on the FAS3070 in order to support a level of 5,000 random IOPS with the Oracle OLTP production database.

Results of Provisioning Test Cases #2 - #6: Ease of Use/Storage Provisioning
These test cases build on the results of the ACME Database provisioning exercise detailed in the previous section and include test cases related to typical storage provisioning tasks like creating LUN clones and snapshots as well as extending the size of existing LUNS. Please refer to the Test Methodology section of this report for details on how we conducted these tests. Figure 12 below provides a summary of the results of the usability and provisioning testing. The table shows the elapsed time required to complete a specific task using both the FAS3070 and the CX3-80. For complete details of the test results for each of the test cases described above, including tester comments and feedback, please refer to Appendix A of this report.


Usability/Provisioning Test Case                         FAS3070 Elapsed Time   CX3-80 Elapsed Time       Difference
                                                         (hr:min:sec)           (hr:min:sec)
Create RAID groups, volumes and LUNs
  (including transitioning time)                         0:15:35                0:38:15                   2.5X
Create MetaLUNs on the CX3-80                            N/A                    0:07:00 (striped)         N/A
                                                                                0:06:50 (concatenated)
Extend the size of the Oracle OLTP database              0:00:21                3:14:12 (striped)         >500X (striped)
                                                                                0:00:24 (concatenated)    1.1X (concatenated)
Create snapshot copies (per snapshot)                    0:00:05                0:00:07                   1.4X
Restore snapshot copies (per snapshot)                   0:00:18                0:01:30                   5X
Clone LUNs                                               0:00:07                0:27:13                   233X

Figure 12. Usability and Provisioning Test Results Summary

Summary observations from the usability and provisioning test cases are as follows:

• It required less than half the time to create the RAID groups and LUNs to deploy the ACME Corp database structure on the FAS3070 compared to the CX3-80.
• To ensure consistent performance, EMC best practices recommend using the stripe method when expanding the size of a LUN. We found that expanding the size of the 400GB LUN representing the Oracle OLTP production database required over 3 hours when done using the stripe expansion method, compared to less than 30 seconds to expand the same LUN using the concatenate expansion method.
• Using either of the LUN expansion methods available on the CX3-80 does not immediately make the additional storage available to the Windows host system. The additional storage initially shows up on the Windows host as an unformatted area. Making this additional storage available to the Windows host system required that we manually use either a volume manager or a third-party tool like “diskpart” to add the new storage to the existing volume. Using the NetApp SnapDrive® 4.1 tool under Windows to expand the size of the LUN required less than 30 seconds and immediately made the additional storage available to the Windows host, requiring no further action on the part of the Windows administrator.
• We found that creating a clone of the 400GB LUN representing the Oracle OLTP production database required only 7 seconds on the FAS3070 compared to over 27 minutes on the CX3-80.
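The "Difference" multipliers in figure 12 are simply ratios of the recorded elapsed times. A small sketch (ours, not from the report) of that derivation:

```python
# Sketch: derive the speedup multipliers in figure 12 from elapsed times.
def to_seconds(hms: str) -> int:
    """Convert an 'h:mm:ss' elapsed time to seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def speedup(fas_time: str, cx_time: str) -> float:
    """Ratio of CX3-80 elapsed time to FAS3070 elapsed time."""
    return to_seconds(cx_time) / to_seconds(fas_time)

# Cloning the 400GB LUN: 0:27:13 on the CX3-80 vs 0:00:07 on the FAS3070.
print(round(speedup("0:00:07", "0:27:13")))  # 233
```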

NetApp FlexShare Validation Testing Results
In this study we also tested the capabilities of NetApp FlexShare, which provides quality of service (QoS) capabilities for the FAS3070. As we completed the testing required for this report, EMC announced a new product called the Navisphere Quality of Service Manager. According to EMC press releases, this product provides a similar feature set when compared to the NetApp FlexShare feature. The Navisphere Quality of Service Manager was not available in time for use in these tests. The remainder of this section provides a simple validation of using FlexShare in a mixed application environment where e-mail and database applications are running on a single NetApp storage system. During this test, we evaluated the following:

• General performance of the e-mail and database applications running simultaneously on the NetApp storage system with the FlexShare feature disabled
• The impact on performance of both the e-mail and database applications after FlexShare is enabled to give the database application higher priority compared to the e-mail application
• General performance of the e-mail and database applications running simultaneously on the NetApp storage system after disabling FlexShare to give the database and e-mail applications equal priority to the resources on the NetApp storage system

To conduct the validation, we used one of the FAS3070 storage systems and configured it as follows. These steps set the priority of the database volume higher than the priority of the email volume when both are accessing the resources on the FAS3070 storage system.

• Created a single aggregate using RAID-DP and a total of 28 disk drives
• In the aggregate, created two FlexVols, each 1TB in size, named “email” and “database”
• In each of the FlexVols, created a single 400GB LUN
• Created an IOMeter test script to populate each of the 400GB LUNs with a single 400GB data file
• Enabled the FlexShare feature on the FAS3070 by issuing the command “priority on” at the CLI
• At the CLI, issued the command “priority set volume database level=VeryHigh”
• At the CLI, issued the command “priority set volume email level=VeryLow”
• Disabled the FlexShare feature on the FAS3070

After executing the steps above, the priorities for each volume are set but FlexShare is disabled, resulting in traffic to both the database and email volumes having equal priority to the resources on the FAS3070 storage system. We then used the same IOMeter test script used for the performance test cases using a single 400GB LUN defined previously in this report to place an identical load on the database and email volumes. We increased the runtime for the IOMeter script to 15 minutes with a 120 second ramp up. We then executed the following steps:

• Started the IOMeter test script.
• After running for 5 minutes, issued the “priority on” command at the FAS3070 CLI to enable FlexShare with the priorities configured in the steps above.
• Allowed the test to continue for another 5 minutes, then issued the “priority off” command at the FAS3070 CLI to disable FlexShare so that both the database and email volumes again had the same priorities when accessing resources on the FAS3070.
• Allowed the test to run to completion.

The chart in figure 13 below provides the details of how using FlexShare to change the priorities of different workloads can impact the performance of each workload. Initially, both the email and database applications are generating roughly the same number of IOPS while FlexShare is disabled. This is expected as the load is the same to each LUN and both have equal access to the resources of the FAS3070. Once FlexShare is enabled, the database traffic has a significantly higher priority compared to the email traffic. The result is that the performance of the database application increases substantially at the expense of the performance of the email application. Once FlexShare is disabled again, the performance of each of the respective applications drops back to roughly the same levels observed before FlexShare was enabled.


FlexShare Validation Test Results

[Chart: IOPS (y-axis, 0 to 10,000) versus elapsed time in seconds (x-axis) for the Exchange volume and the OLTP database volume]

Figure 13. Results of FlexShare Validation Testing

Testing methodology
Network Appliance commissioned VeriTest, a service of Lionbridge Technologies Inc., to compare the usability of a variety of features and functionality that users of both the NetApp FAS3070 and the EMC CX3-80 encounter in day-to-day operations to solve typical problems related to storage provisioning. Additionally, we compared the performance of the NetApp FAS3070 mid-range storage server and the EMC CX3-80 storage server. The performance tests focused on the OLTP workloads that database applications typically encounter. Like many industry-standard OLTP benchmarks, including the majority of the top 10 TPC-C price/performance results, the loads used during the performance tests used an 8K request size with a mix of random read and write operations for both the FAS3070 and the CX3-80.

Usability and Database Provisioning Tests
For these tests, we compared the features and functionality that users of both the FAS3070 and the CX3-80 encounter in day-to-day operations to solve typical problems related to storage provisioning, data backup and restoration, cloning, etc. We looked at the ease with which these features are utilized from an administrative perspective, as well as the general usefulness of the related documentation, both online and hard copy. We used a stopwatch to measure the time required to perform a set of typical administrative tasks using the features provided by both the FAS3070 and CX3-80 products. For each test case defined below, we conducted a “dry run” of the setup and configuration of the specific task on both the Network Appliance FAS3070 and EMC CX3-80 devices to allow VeriTest engineers to familiarize themselves with the setup procedure and to allow time for consulting the proper documentation. The results below were obtained after the devices in question were reset to an ‘unused’ condition. Refer to Appendix A for a detailed description of the steps performed for each provisioning test.


Provisioning Test Case #1: Ease of Use/Storage Provisioning

For this test we used publicly available best practices documentation from both NetApp and EMC to create a plan to provision both the FAS3070 and the CX3-80 for use in a corporate environment consisting of multiple departments and multiple databases. Specifically, we used the following documentation from both NetApp and EMC:

• NetApp: Block Management with Data ONTAP™ 7G: FlexVol™, FlexClone™, and Space Guarantees (http://www.netapp.com/tech_library/3348.html)
• NetApp: Thin Provisioning in a NetApp SAN Environment (http://www.netapp.com/library/tr/3483.pdf)
• EMC: EMC CLARiiON Best Practices for Fibre Channel Storage – CLARiiON Release 22 Firmware Update

For purposes of this testing, the simulated company is ACME, Inc. ACME has 3,000 employees running Windows PCs, one large Exchange database serving its email needs, two internal departmental SQL Server databases for Payroll and HR, and one large Oracle OLTP database. ACME needs two additional copies of each of its SQL Server and Oracle databases, one for the Quality Assurance (QA) teams and one for the development teams. ACME initially will access all of its storage over a Fibre Channel storage network. ACME’s initial storage needs are specified in figure 14 below.

Database Function    Number, Type and Size of Database and Log Files
Microsoft Exchange   One 400GB database LUN and one 100GB log file LUN
SQL Server Payroll   Three 200GB database LUNs – 1 for production, 1 for development, 1 for quality assurance – and three 40GB LUNs for the database log files
SQL Server HR        Three 200GB database LUNs – 1 for production, 1 for development, 1 for quality assurance – and three 40GB LUNs for the database log files
Oracle OLTP          Three 400GB database LUNs – 1 for production, 1 for development, 1 for quality assurance – and three 100GB LUNs for the database logs

Figure 14. Acme Database and Log File LUN Descriptions

Additionally, we used the following guidelines when provisioning the database LUNs on both the FAS3070 and CX3-80:

1. For the Oracle OLTP production and Microsoft Exchange databases, assume that the workload associated with the database is an OLTP mixture of small, random read and write operations in the ratio of 60 percent read and 40 percent write.
2. Database and log files should reside on separate volumes to facilitate backup procedures.
3. The Oracle production database, Exchange database and their associated logs are performance sensitive. The remaining non-production database and log files are not performance sensitive.
4. Enough space needs to be allocated to store a full snapshot of the Oracle OLTP production and Exchange database LUNs.
5. Provide an option to allow the Oracle OLTP production database to effectively handle peak loads of up to 5,000 IOPS consisting of a mixture of small, random read and write operations in the ratio of 60 percent read and 40 percent write.

Output from this test consists of a table for each storage solution under test specifying the following items relating to how the database and log volumes described above are provisioned:

• Total number of RAID groups, including the RAID type for each (i.e. RAID-DP, RAID 1/0)
• Total number of LUNs
• Total number of physical disk drives required

• Approximate amount of raw disk space required for deployment
• Actual usable disk space
• The storage processor/controller on which the specific database or log file was deployed
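The relationship between the raw and usable disk space reported for each configuration depends on the RAID scheme. A minimal sketch of that relationship, under common assumptions (RAID 1/0 mirrors half the disks, RAID 5 gives up one disk's worth of capacity to parity, RAID-DP gives up two); the function name, and the omission of filesystem overhead and drive right-sizing, are ours:

```python
# Sketch (not from the report): nominal usable capacity of a RAID group,
# ignoring filesystem overhead and right-sizing, for 146GB drives.
def usable_gb(raid_level: str, disks: int, disk_gb: int = 146) -> int:
    if raid_level == "RAID 1/0":
        return (disks // 2) * disk_gb   # half the disks hold mirror copies
    if raid_level == "RAID 5":
        return (disks - 1) * disk_gb    # one disk's worth of parity
    if raid_level == "RAID-DP":
        return (disks - 2) * disk_gb    # two parity disks (dual parity)
    raise ValueError(f"unknown RAID level: {raid_level}")

# An 8-disk RAID 1/0 group of 146GB drives, as used for the Exchange DB LUN.
print(usable_gb("RAID 1/0", 8))  # 584
```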

Provisioning Test Case #2: Measure the Time Required to Create RAID Groups, Volumes and LUNs

For this test we measured and recorded the total amount of time required to create the RAID groups, volumes and LUNs necessary to deploy the database configuration for ACME as documented in storage provisioning test case #1 above for both the FAS3070 and the CX3-80. For the CX3-80 we used EMC’s Navisphere product, and on the FAS3070 we used SnapDrive 4.1 from NetApp to manage the LUN creation process on the Windows host systems.

Provisioning Test Case #3: Measure Steps and Time Required to Create a MetaLUN on the CX3-80

For this test, we measured the time and recorded the number of steps to create a MetaLUN on the CX3-80 for the purpose of deploying the 400GB Oracle OLTP production database LUN created during the storage provisioning test described above. A MetaLUN is a group of identical smaller LUNs bound together in order to provide storage consisting of a larger number of physical drives. EMC best practices recommend using MetaLUNs to provide higher performance for applications that generate large amounts of random read and write traffic using small request sizes, like the Oracle OLTP database created for ACME Corp in Provisioning Test Case #2 defined above. Additionally, MetaLUNs are recommended for use if LUN expansion is a requirement. For this test, we created the identical 400GB LUN representing the Oracle OLTP production database defined in Provisioning Test Case #1 above using an 8 disk RAID 1/0 configuration. We used this LUN as the source LUN for the creation of a MetaLUN. We created three additional LUNs that were identical to the original LUN (i.e. 8 disks and RAID 1/0) so that the number of disks in the MetaLUN (32) was similar to the number of disks in the 28 disk aggregates created on the FAS3070 during provisioning test case #2 defined above. Finally, we used the four identical LUNs to create a MetaLUN using both the stripe and concatenation methods.
There is no corresponding test case for the FAS3070. NetApp’s FlexVol technology allows the Oracle OLTP database LUN to be provisioned such that its performance benefits from being able to access all of the disks in the containing aggregate. As a result there is no need to add more disk spindles to improve performance.

Provisioning Test Case #4: Extending the Size of the Oracle OLTP Database

In this test, we measured the amount of time and recorded the number of steps required to add 400GB of additional space to the Oracle OLTP database LUN. For the CX3-80 we used EMC’s Navisphere product, and on the FAS3070 we used SnapDrive 4.1 from NetApp to manage the process of extending the size of the Oracle OLTP database LUN. Additionally, we looked at any steps that were required to allow the extra space to be utilized on the Windows host system after the LUN expansion. The CX3-80 provides multiple methods for expanding the size of a LUN, namely stripe expansion, concatenate expansion and hybrid expansion. The stripe expansion method actually re-stripes the existing data on the LUN across all of the drives now participating in the expanded LUN. Concatenate expansion adds the additional space to the end of the existing LUN and does not take the time to re-stripe the existing data over the new disk drives. Hybrid expansion combines the striping and concatenation methods. For these tests, we focused only on the striped and concatenation methods of LUN size expansion. For this test, we used IOMeter to initialize the 400GB LUN with a single 400GB data file before beginning the expansion process.
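On the Windows host, claiming the space added by a CX3-80 LUN expansion typically involves a diskpart session along these lines. This sketch is ours, not from the report; the volume number is hypothetical.

```
diskpart
list volume        rem identify the volume backed by the expanded LUN
select volume 2    rem volume number is hypothetical
extend             rem grow the NTFS volume into the new unallocated space
exit
```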


Provisioning Test Case #5: Measure Time Required to Restore Snapshot Copies

For this test, we measured the amount of time and recorded the number of steps required to create and restore a total of 30 different snapshot copies of the LUN representing the Oracle OLTP database described in Provisioning Test Case #2 above on both the FAS3070 and CX3-80 configurations. A snapshot is a point-in-time copy of a LUN that does not change over time even as the LUN from which it was created changes. On the FAS3070 we were able to perform the 30 snapshot copies. The CX3-80 only supports eight simultaneously active snapshots, so we performed the test using the maximum number of snapshots for the CX3-80. For the FAS3070, we used SnapDrive 4.1 under Windows Server 2003. For the CX3-80, we used EMC’s SnapView utility under Windows Server 2003.

Provisioning Test Case #6: Measure Time Required to Clone LUNs

For this test, we measured the amount of time and recorded the number of steps required to clone the 400GB LUN representing the Oracle OLTP database described in Provisioning Test Case #1 above on both the FAS3070 and the CX3-80. For the FAS3070 we used the LUN clone commands accessed through the FAS3070 command line interface. For the CX3-80 we used the SnapView clone command.

Performance Testing
During the performance tests, we conducted tests using two distinct configurations. One set of tests measured the performance using the LUN representing the Oracle OLTP production database (both RAID 1/0 and MetaLUNs) created during the provisioning tests defined previously in this document. These “provisioned performance” test cases used a single host system to generate an OLTP workload against the specific LUN under test. These provisioned performance tests used the same SAN configuration for all test cases. This configuration is defined in detail in Appendix B of this report. Short descriptions of the provisioned performance tests are listed below. Specifics of how we conducted these tests are presented in the sections that follow.

• Measure the performance of both the FAS3070 and CX3-80 using the single 400GB OLTP database LUN defined in Provisioning Test Case #1 and created during the provisioning tests
• Measure the performance of both the FAS3070 and CX3-80 using the single 400GB OLTP database LUN defined in Provisioning Test Case #1 while taking a series of snapshots over a 70 minute period
• Measure the performance of both the FAS3070 and CX3-80 using the single 400GB OLTP database LUN defined in Provisioning Test Case #1 after taking a single snapshot over a 70 minute period
• Measure the performance of the CX3-80 using the single 400GB OLTP database LUN defined in Provisioning Test Case #1 when configured as a MetaLUN created using the striped method
• Measure the performance of the CX3-80 using the single 400GB OLTP database LUN defined in Provisioning Test Case #1 when configured as a MetaLUN created using the concatenation method

In addition to the provisioned performance tests using the LUN representing the Oracle OLTP production database, we conducted a series of additional performance tests on both the FAS3070 and CX3-80 using 200 disk drives on each platform. For these tests, we configured a total of 4 host systems each with dual 2Gb FC ports and generated a significantly heavier load against both the FAS3070 and CX3-80 compared to the provisioned performance tests described above. The SAN configuration used for these tests is defined in detail in Appendix C of this report. Short descriptions of these additional performance tests are listed below. Specifics of how we conducted these tests are presented in the sections that follow. • Measure the performance of the FAS3070 configured with 200 physical disk drives each of which was 144GB in size and 15,000 RPM. We tested the FAS3070 using RAID-DP with FCP host connections.

• Measure the performance of the CX3-80 configured with 200 physical disk drives each of which was 146GB in size and 15,000 RPM. For this test, we configured the CX3-80 using FCP and RAID 5
• Measure the performance of the CX3-80 configured with 200 physical disk drives each of which was 146GB in size and 15,000 RPM. For this test, we configured the CX3-80 using FCP and RAID 1/0

To generate the load for the performance testing, we used the industry-standard, open-source load generator IOMeter, available from SourceForge at http://sourceforge.net/projects/iometer/. For the CX3-80 Provisioned Performance and Full Scale Performance test cases, each Windows Server 2003 host participating in the test case was configured with a 64K alignment using diskpart, as well as formatted as NTFS with a 64K block size.
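The 64K alignment and NTFS formatting described above would be applied with a diskpart session along these lines. This sketch is ours, not from the report; the disk number and drive letter are hypothetical.

```
diskpart
select disk 1                        rem hypothetical disk number for the test LUN
create partition primary align=64    rem align the partition on a 64KB boundary
assign letter=E                      rem hypothetical drive letter
format fs=ntfs unit=64K quick        rem NTFS with a 64KB allocation unit
exit
```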

Performance Test Cases Using a Single OLTP LUN
Performance Test Case #1: Performance Using Oracle OLTP Production LUN
We designed this test case to measure the performance of both the FAS3070 and the CX3-80 by testing each configuration using the single 400GB LUN representing the Oracle OLTP Production LUN created as part of the provisioning tests described above. Please refer to figures 9 and 10 in this report for specifics of how the Oracle OLTP production database LUN is provisioned on the FAS3070 and the CX3-80, respectively. For this test, we employed an IOMeter test script that generated a workload considered comparable to a database application running OLTP (Online Transaction Processing) workloads. This load consisted of a mixture of 60% random reads and 40% random writes using an 8KB request size. We configured the IOMeter test script to use a ramp up of 120 seconds and a run time of 120 seconds. We ran this test script twice for both the FAS3070 and the CX3-80 and averaged the results of the two tests to generate the results presented in this report. During this testing, we used the following configurations:

• NetApp FAS3070 using Fibre Channel host attach with Fibre Channel disk drives
• EMC CX3-80 using Fibre Channel host attach with Fibre Channel disk drives

We configured each of the host HBAs used during the testing to use a queue depth of 256. The specific IOMeter test parameters used for this test are shown in figure 15 below:
Test Type                         60% random reads and 40% random writes using an 8KB request size and 8KB IO alignment
# of IOMeter Workers              1
# of Outstanding IOs Per Worker   256
Maximum File Size                 400GB
Ramp Up Time (seconds)            120
Run Time (seconds)                120

Figure 15. IOMeter Test Parameters For Provisioning Performance Tests for FAS3070 and CX3-80

Performance Test Case #2: Performance Using Oracle OLTP Production LUN Deployed on a MetaLUN
We designed this test case to measure the performance of the CX3-80 after deploying the single 400GB LUN representing the Oracle OLTP Production LUN created as part of the provisioning tests described above on a MetaLUN. EMC best practices recommend using MetaLUNs as a way to improve overall performance when processing workloads consisting of large numbers of random IO using small request sizes like the OLTP load that ACME’s Oracle OLTP production database will encounter. For this test, we used both the striped and concatenated MetaLUNs created during provisioning test case #2 described above. For details on how we created the MetaLUNs used for this test, please refer to the section entitled “Provisioning Test Case #3”. We used an IOMeter test script that generated a workload considered comparable to a database application running OLTP workloads. This load consisted of a mixture of 60% random reads and 40% random writes

using an 8KB request size. We configured the IOMeter test script to use a ramp up of 120 seconds and a run time of 120 seconds. We ran this test script twice for both the FAS3070 and the CX3-80 and averaged the results of the two tests to generate the results presented in this report. We configured each of the host HBAs used during the testing to use a queue depth of 256. The specific IOMeter test parameters used for this test are shown in figure 16 below:
Test Type                         60% random reads and 40% random writes using an 8KB request size and 8KB IO alignment
# of IOMeter Workers              1
# of Outstanding IOs Per Worker   256
Maximum File Size                 400GB
Ramp Up Time (seconds)            120
Run Time (seconds)                120

Figure 16. IOMeter Test Parameters For Provisioned Performance Test Using MetaLUNs

Performance Test Case #3: Measure Performance Impact When Taking Snapshot Copies of the Oracle OLTP Production LUN
For this test, we measured the performance impact on both the FAS3070 and CX3-80 of creating a single snapshot as well as a series of 30 snapshot copies of the 400GB LUN representing the Oracle OLTP production database described above while the LUN was under a constant load of read and write traffic. To generate the load for the test, we used an IOMeter test script that generated a mixture of 60% random reads and 40% random writes using an 8KB request size. We set the run time in the IOMeter test script to 70 minutes and let the test run continuously while taking the series of snapshot copies over the course of the test. During the testing we found that the CX3-80 had a maximum limit of 8 active snapshots. As a result we performed these tests on the CX3-80 using a total of 8 snapshots. We encountered no such limit on the FAS3070 and were able to conduct the test using the original 30 snapshots. We configured each of the host HBAs used during the testing to use a queue depth of 256. The specific IOMeter test parameters used for this test are shown in figure 17 below:
Test Type: 60% random reads and 40% random writes using an 8KB request size and 8KB IO alignment
# of IOMeter Workers: 1
# of Outstanding IOs Per Worker: 256
Maximum File Size: 400GB
Ramp Up Time (seconds): 120
Run Time (seconds): 4200

Figure 17. IOMeter Test Parameters For Snapshot Performance Tests for FAS3070 and CX3-80

We tested the following configurations:

• For the FAS3070, allow the IOMeter script to run for 10 minutes and then begin generating a series of 30 snapshot copies at 2-minute intervals during the course of the 70-minute test run.
• For the CX3-80, allow the IOMeter script to run for 10 minutes and then begin generating a series of 8 snapshot copies at 5-minute intervals. After the final snapshot is created, allow the test to run to completion with no additional snapshot copies created.
• For the FAS3070, allow the IOMeter script to run for 10 minutes, generate a single snapshot copy, and allow the IOMeter test to run to completion.
• For the CX3-80, allow the IOMeter script to run for 10 minutes, generate a single snapshot copy, and allow the IOMeter test to run to completion.
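As a cross-check on these schedules, the snapshot timings can be sketched as simple arithmetic. The minute values below are derived from the intervals stated above; they are an illustration, not output from the test harness.

```python
# Snapshot schedule arithmetic for the 70-minute runs described above.
RUN_MINUTES = 70

# FAS3070: 30 snapshots starting at minute 10, one every 2 minutes
fas_snap_minutes = [10 + 2 * i for i in range(30)]

# CX3-80: 8 snapshots starting at minute 10, one every 5 minutes
cx_snap_minutes = [10 + 5 * i for i in range(8)]

# Both schedules finish inside the 70-minute IOMeter run
print(fas_snap_minutes[-1], cx_snap_minutes[-1])  # 68 45
```

The last FAS3070 snapshot lands at minute 68 and the last CX3-80 snapshot at minute 45, so both series complete with the load still running.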

In addition to executing the tests described above, we also ran the same 70-minute IOMeter test script but did not perform the snapshot process during the test period. We used these baseline performance results to help assess the performance impact of the snapshot process on each system. For both the FAS3070 and CX3-80, we rebooted the host system as well as the storage processors and restored the IOMeter data files to their initialized condition between each of the 70-minute test runs.


For the FAS3070, we used the snapshot feature to create a snapshot of the aggregate containing the 400GB LUN used for the testing after IOMeter had initialized the 400GB data file accessed during the testing. After each test, we restored the snapshot so the IOMeter data files were consistent run to run. During the test, we found that using the snapshot facility on the CX3-80 resulted in a significant performance penalty if even a single snapshot was active for the 400GB LUN used during the testing. As a result, we restored the IOMeter data files for the CX3-80 between test runs by having IOMeter create a new data file for each test iteration.

During the 70-minute duration of the tests, we ran the Performance Monitor application from Microsoft on the host system running Windows Server 2003 to measure the read and write activity on the logical volumes being accessed on both the FAS3070 and the CX3-80 during the testing. We configured Performance Monitor to capture information related to the following logical disk counters at 10-second intervals during the testing:

• Number of disk reads per second
• Number of disk writes per second
• Average read latency in seconds per operation
• Average write latency in seconds per operation

By monitoring these counters while taking the snapshot copies, we were able to determine the impact on overall performance as a result of the snapshot process. This was not possible using IOMeter alone, as it reports only a single average IOPS metric calculated over the entire test run time.

To compute the results presented for this test, we recorded the total IOPS values generated at each of the 10-second intervals using Performance Monitor over the course of the test when not performing the snapshot process and used these values as our baseline. We then recorded the total IOPS values generated at each of the 10-second intervals using Performance Monitor over the course of the test when performing the snapshot process. This data represented the overall performance of the test configuration when the snapshot process was performed. At each data point, we computed the difference in the number of IOPS between the baseline configuration, where no snapshot process was performed, and the configuration where we conducted the snapshot process. We calculated the difference as a percentage of the baseline value to see how performance was impacted over the course of the testing as a result of the snapshot process. For example, if a specific data point had a value of 5,000 IOPS when not using a snapshot and 4,950 IOPS at the same point in time when performing the snapshot process, we calculated that the performance when conducting the snapshot process at that specific point in time was 99 percent of the baseline performance as follows:

100 – ( ( ( 5000 – 4950 ) / 5000 ) * 100 ) = 99

For this test, a value of 100 indicates that there was no difference between the performance recorded when taking a snapshot and when not conducting the snapshot process. Data points less than 100 indicate performance degradation relative to the baseline configuration in which no snapshot process was performed; the lower the value, the greater the performance impact of the snapshot process.
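The percent-of-baseline calculation described above can be expressed as a short sketch (the function name and variable names are ours, not part of the test tooling):

```python
# Percent-of-baseline calculation used for the snapshot impact results.
def percent_of_baseline(baseline_iops, snapshot_iops):
    """For each paired 10-second sample, express the snapshot-run IOPS
    as a percentage of the baseline IOPS at the same point in time."""
    return [100 - ((b - s) / b) * 100
            for b, s in zip(baseline_iops, snapshot_iops)]

# Worked example from the text: 5,000 IOPS baseline vs 4,950 with snapshots
print(percent_of_baseline([5000], [4950]))  # [99.0]
```

In practice each list would hold one sample per 10-second Performance Monitor interval over the 70-minute run.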

Performance Tests Cases Using 200 Drives
We designed these test cases to measure the performance of both the FAS3070 and the CX3-80 by testing each configuration with a total of 200 physical drives configured into a total of 40 LUNs, each with a size of 80GB.


To generate the load for these tests, we employed an IOMeter test script that generated a workload considered comparable to a database application running OLTP (Online Transaction Processing) workloads. This load consisted of a mixture of 60% random reads and 40% random writes using an 8KB request size. During the performance testing, we used the following configurations:

• NetApp FAS3070 using the Fibre Channel protocol and RAID-DP
• EMC CX3-80 using Fibre Channel and RAID 1/0
• EMC CX3-80 using Fibre Channel and RAID 5

For complete details of the systems used in these tests, including driver related information, please refer to Appendix D of this report.
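For context, the raw-capacity overhead of the three layouts over 200 drives can be sketched as follows. This is a rough illustration using raw drive sizes only; the RAID-DP group size of 18 data + 2 parity is an assumption on our part and not a detail taken from the test configuration.

```python
# Raw usable capacity (GB) for each 200-drive layout, ignoring formatting,
# right-sizing, and spare overheads -- illustrative arithmetic only.

# CX3-80 RAID 5: 40 groups of (4 data + 1 parity) on 146GB drives
raid5_usable = 40 * 4 * 146

# CX3-80 RAID 1/0: 20 groups of 10 drives; half of each group holds mirrors
raid10_usable = 20 * 5 * 146

# FAS3070 RAID-DP: 2 aggregates of 100 drives on 144GB disks; assuming
# hypothetical groups of 20 (18 data + 2 parity) per aggregate
raiddp_usable = 2 * 5 * 18 * 144

print(raid5_usable, raid10_usable, raiddp_usable)  # 23360 14600 25920
```

The point of the sketch is the relative overhead: mirroring halves raw capacity, while the parity schemes give up only one or two drives per group.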

Performance Test Case #4: Measure Performance of CX3-80 Using 200 Drives and RAID 5
The goal of this test is to measure the performance of the CX3-80 using an OLTP workload when using a total of 200 physical disk drives in a RAID 5 configuration. This load consisted of a mixture of 60% random reads and 40% random writes using an 8KB request size. We configured the IOMeter test script to use a ramp up of 30 seconds and a run time of 120 seconds. We ran this test script twice on the CX3-80 and averaged the results of the two tests to generate the results presented in this report.

We configured the CX3-80 from scratch and provisioned the system such that a series of 40 LUNs, each with a size of 80GB, were created and spread evenly across a total of 200 disk drives using RAID 5 with the following layout. Each of the 200 disk drives had a capacity of 146GB and a speed of 15,000 RPM.

• On Storage Processor A, we created a total of 20 RAID 5 groups, each using 5 physical drives in a (4+1) configuration
• On Storage Processor B, we created a total of 20 RAID 5 groups, each using 5 physical drives in a (4+1) configuration
• On Storage Processor A, we created and bound 1 LUN to each of the 20 RAID groups. We assigned each of the 20 LUNs a size of 80GB.
• On Storage Processor B, we created and bound 1 LUN to each of the 20 RAID groups. We assigned each of the 20 LUNs a size of 80GB.
• On Storage Processor A, we created a storage group and added each of the 20 LUNs to the storage group.
• On Storage Processor B, we created a storage group and added each of the 20 LUNs to the storage group.

We then configured the four Fujitsu-Siemens host systems used in the test so that each host system had access to a total of 10 of the LUNs created on the CX3-80. On each host system, the first FC initiator port accessed 5 LUNs on Storage Processor A and the second FC initiator port accessed 5 LUNs on Storage Processor B.

For the IOMeter test script, we configured a total of 10 workers for each of the four IOMeter manager systems and assigned one of the 40 LUNs mapped on the CX3-80 to each of the workers as a disk target. The result was a 1 to 1 mapping between the 40 IOMeter workers and the 40 mapped LUNs across the Fujitsu-Siemens host systems, for a total of 40 IOMeter workers targeting 40 individual LUNs spread evenly across the two CX3-80 storage processors utilizing a total of 200 physical disk drives. For the test, we configured IOMeter to sequentially populate each of the 40 LUNs with an 80GB test file. This ensured that enough data was being accessed so that it could not be cached by the CX3-80.

Figure 19 below shows the main IOMeter test parameters used for this test. We configured each of the 40 IOMeter workers to use a total of 51 outstanding IOs and configured each of the eight host HBAs used during the testing to use a queue depth of 256.
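The worker-to-LUN fan-out described above can be sketched as follows; the host, worker, and LUN names are illustrative placeholders, not the names used in the lab.

```python
# Sketch of the 1:1 worker-to-LUN assignment across four host systems.
hosts = [f"host{i}" for i in range(1, 5)]   # four Fujitsu-Siemens managers
luns = [f"lun{i:02d}" for i in range(40)]   # 40 x 80GB LUNs

assignment = {}
for h, host in enumerate(hosts):
    host_luns = luns[h * 10:(h + 1) * 10]   # 10 LUNs visible per host
    for w, lun in enumerate(host_luns):
        # first 5 LUNs reached via the port zoned to SP A, last 5 via SP B
        sp = "SPA" if w < 5 else "SPB"
        assignment[(host, f"worker{w}")] = (lun, sp)

print(len(assignment))  # 40 workers, each targeting a distinct LUN
```

Each of the 40 (host, worker) pairs targets exactly one LUN, and the load splits evenly between the two storage processors.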


Test Type 60% random reads and 40% random writes using an 8KB request size and 8 KB I/O alignment

# of IOMeter Workers 40

# of Outstanding IOs Per Worker 51

Maximum File Size 80GB

Ramp Up Time (seconds) 30

Run Time (seconds) 120

Figure 19. CX3-80 IOMeter Test Parameters using 200 Drives and RAID 5

Performance Test Case #5: Measure Performance of CX3-80 Using 200 Drives and RAID 1/0
The goal of this test is to measure the performance of the CX3-80 using an OLTP workload when using a total of 200 physical disk drives in a RAID 1/0 configuration. This load consisted of a mixture of 60% random reads and 40% random writes using an 8KB request size. We configured the IOMeter test script to use a ramp up of 30 seconds and a run time of 120 seconds. We ran this test script twice on the CX3-80 and averaged the results of the two tests to generate the results presented in this report.

We recognized that comparing RAID-DP or RAID 5 to RAID 1/0 was a comparison of different storage deployments with very different cost and efficiency characteristics. For example, to replicate the RAID 5 test configuration using RAID 1/0 would have required significantly more physical disk drives on the CX3-80. As the CX3-80 configuration used for these tests contained 210 physical drives, we chose to use the same set of 200 disk drives configured for the RAID 5 test, and simply doubled the number of LUNs per RAID 1/0 group compared to RAID 5.

We configured the CX3-80 from scratch and provisioned the system such that a series of 40 LUNs, each with a size of 80GB, were created and spread evenly across a total of 200 disk drives using RAID 1/0 with the following layout. Each of the 200 disk drives had a capacity of 146GB and a speed of 15,000 RPM.

• On Storage Processor A, we created a total of 10 RAID 1/0 groups, each using 10 physical drives
• On Storage Processor B, we created a total of 10 RAID 1/0 groups, each using 10 physical drives
• On Storage Processor A, we created and bound 2 LUNs to each of the 10 RAID groups. We assigned each of the 20 LUNs a size of 80GB.
• On Storage Processor B, we created and bound 2 LUNs to each of the 10 RAID groups. We assigned each of the 20 LUNs a size of 80GB.
• On Storage Processor A, we created a storage group and added each of the 20 LUNs to the storage group.
• On Storage Processor B, we created a storage group and added each of the 20 LUNs to the storage group.

We then configured the four Fujitsu-Siemens host systems used in the test so that each host system had access to a total of 10 of the LUNs created on the CX3-80. On each host system, the first FC initiator port accessed 5 LUNs on Storage Processor A and the second FC initiator port accessed 5 LUNs on Storage Processor B.

For the IOMeter test script, we configured a total of 10 workers for each of the four IOMeter manager systems and assigned one of the 40 LUNs mapped on the CX3-80 to each of the workers as a disk target. The result was a 1 to 1 mapping between the 40 IOMeter workers and the 40 mapped LUNs across the Fujitsu-Siemens host systems, for a total of 40 IOMeter workers targeting 40 individual LUNs spread evenly across the two CX3-80 storage processors utilizing a total of 200 physical disk drives. For the test, we configured IOMeter to sequentially populate each of the 40 LUNs with an 80GB test file. This ensured that enough data was being accessed so that it could not be cached by the CX3-80.

Figure 18 below shows the main IOMeter test parameters used for this test. We configured each of the 40 IOMeter workers to use a total of 51 outstanding IOs and configured each of the eight host HBAs used during the testing to use a queue depth of 256.


Test Type 60% random reads and 40% random writes using an 8KB request size and 8 KB I/O alignment

# of IOMeter Workers 40

# of Outstanding IOs Per Worker 51

Maximum File Size 80GB

Ramp Up Time (seconds) 30

Run Time (seconds) 120

Figure 18. CX3-80 IOMeter Test Parameters using 200 Drives and RAID 1/0

Performance Test Case #6: Measure Performance of FAS3070 Using 200 Drives and RAID-DP
The goal of this test is to measure the performance of the FAS3070 using an OLTP workload when configured with 200 physical disk drives in a RAID-DP configuration using Fibre Channel host connections. This load consisted of a mixture of 60% random reads and 40% random writes using an 8KB request size. We configured the IOMeter test script to use a ramp up of 30 seconds and a run time of 120 seconds. We ran this test script twice on the FAS3070 and averaged the results of the two tests to generate the results presented in this report.

For this test, we configured the FAS3070 from scratch and provisioned the system such that a series of 40 LUNs were created and spread evenly across a total of 200 disk drives using RAID-DP with the following layout. Each of the 200 disk drives had a capacity of 144GB and a speed of 15,000 RPM.

• On storage controller A, we created a single aggregate and flexible volume containing 100 physical drives.
• On storage controller B, we created a single aggregate and flexible volume containing 100 physical drives.
• On storage controller A, we created a total of 20 distinct LUNs, each with a size of 80GB.
• On storage controller B, we created a total of 20 distinct LUNs, each with a size of 80GB.

We then configured the four Fujitsu-Siemens host systems used in the test so that each host system had access to a total of 10 of the LUNs created on the FAS3070. On each host system, the first FC initiator port accessed 5 LUNs on storage controller A and the second FC initiator port accessed 5 LUNs on storage controller B.

For the IOMeter test script, we configured a total of 10 workers for each of the four IOMeter manager systems and assigned one of the 40 LUNs mapped on the FAS3070 to each of the workers as a disk target. The result was a 1 to 1 mapping between the 40 IOMeter workers and the 40 mapped LUNs across the Fujitsu-Siemens host systems, for a total of 40 IOMeter workers targeting 40 individual LUNs spread evenly across the two FAS3070 storage controllers utilizing a total of 200 physical disk drives. For the test, we configured IOMeter to sequentially populate each of the 40 LUNs with an 80GB test file. This ensured that enough data was being accessed so that it could not be cached by the FAS3070.

Figure 20 below shows the main IOMeter test parameters used for this test. We configured each of the 40 IOMeter workers to use a total of 51 outstanding IOs and configured each of the eight host HBAs used during the testing to use a queue depth of 256.
Test Type: 60% random reads and 40% random writes using an 8KB request size and 8KB I/O alignment
# of IOMeter Workers: 40
# of Outstanding IOs Per Worker: 51
Maximum File Size: 80GB
Ramp Up Time (seconds): 30
Run Time (seconds): 120

Figure 20. FAS3070 IOMeter Test Parameters using 200 Drives and RAID-DP


Appendix A. Usability and Provisioning Test Result Details
This section provides the details of the provisioning test cases including the specific steps required to complete each of the test cases along with tester comments and feedback logged during the provisioning test process. Please refer to the Testing Methodology section of this report for complete details on how we conducted each of these tests.

Provisioning Test Case #2: Measure the Time Required to Create RAID Groups, Volumes and LUNs
For this test we measured and recorded the total amount of time required to create the RAID groups, volumes and LUNs necessary to deploy the database configuration for ACME, as documented in storage provisioning test case #1 above, for both the FAS3070 and the CX3-80.

Execution Steps and Elapsed Time for Network Appliance FAS3070

Creating all of the necessary aggregates, volumes and LUNs for deploying the ACME database scenario consumed 15.5 minutes of time and required that we complete the following steps:

1. Using the command line, create a volume aggregate on each of the FAS3070 storage controllers containing a total of 28 physical disk drives
2. Using the command line, create a flexible volume under each of the two volume aggregates containing the LUNs for database and log files
3. Using SnapDrive 4.1, create each of the 20 LUNs required for the database and log files

We found the SnapDrive plug-in for Windows to be extremely useful when interacting with the FAS3070 by providing a GUI-based interface for creating and managing LUNs in our Windows environment. The interface aids in management and adds a nice ease-of-use touch to the device. The FAS3070 required the creation of a CIFS share on the storage controller in order to use the SnapDrive interface. Because this is different from what most customers would be accustomed to with Fibre Channel disk arrays, it could be considered confusing to the user.

Execution Steps and Elapsed Time for EMC CX3-80

Creating all of the necessary RAID groups and LUNs for deploying the ACME database scenario on the CX3-80 consumed 38.25 minutes of time and required that we complete the following steps:

1. Using EMC's Navisphere application, create the RAID groups to contain the database and log file LUNs as defined in the ACME storage provisioning exercise
2. Using EMC's Navisphere application, create the LUNs for each of the databases and logs
3. Using EMC's Navisphere application, create appropriate storage groups and add individual LUNs to a specific storage group

For this exercise, we used the EMC Navisphere Java applet to administer and monitor the physical layout of the array while implementing the ACME database layout. We found the Navisphere interface to be easy to learn and utilize. We found that the EMC Configuration Planning Guides and best practice documents provide very detailed and easy-to-understand guidance on how to configure the CX3-80.
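As an illustration, the FAS3070 command-line portion of this procedure follows the general shape of the Data ONTAP 7G session below. The aggregate, volume, and LUN names and sizes are our own placeholders, not the values used in the test, and the final line shows the CLI equivalent of the LUN creation we actually performed through SnapDrive.

```
fas3070a> aggr create aggr1 28
fas3070a> vol create dbvol aggr1 900g
fas3070a> lun create -s 100g -t windows /vol/dbvol/db01.lun
```

The same sequence would be repeated on the second storage controller for its aggregate, volume, and LUNs.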


Provisioning Test Case #3: Measure Time Required to Create MetaLUNs on the CX3-80
For this test, we measured the time and recorded the number of steps to create a MetaLUN on the CX3-80 for the purpose of deploying the 400GB Oracle OLTP Production database LUN created during the storage provisioning test described above. A MetaLUN is a group of identical smaller LUNs bound together in order to provide storage consisting of a larger number of physical drives. EMC best practices recommend using MetaLUNs to provide higher performance for applications that generate large amounts of random read and write traffic using small request sizes, like the Oracle OLTP database created for ACME Corp in Provisioning Test Case #2 defined above. Additionally, MetaLUNs are recommended for use if LUN expansion is a requirement.

For this test, we created the identical 400GB LUN representing the Oracle OLTP production database defined in Provisioning Test Case #1 above using an 8-disk RAID 1/0 configuration. We used this LUN as the source LUN for the creation of a MetaLUN. We created three additional LUNs that were identical to the original LUN (i.e., 8 disks and RAID 1/0). Finally, we used the four identical LUNs to create a MetaLUN using both the stripe and concatenation methods.

Execution Steps and Elapsed Time for EMC CX3-80

To complete this test on the CX3-80 required 7 minutes of time, including disk transition time, and required that we complete the steps below. The amount of time required to complete the test did not change significantly based on the type of expansion (stripe or concatenate) used to create the MetaLUN.

1. Create 4 RAID 1/0 groups with 8 disks each
2. On each of the 4 RAID groups, create and bind a single 100GB LUN
3. Select one of the 4 LUNs and select the expand option
4. Select either the stripe or concatenate expansion method
5. Select the three remaining 100GB LUNs for inclusion in the MetaLUN and click finish
6. Assign the new MetaLUN to a storage group
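The component arithmetic behind this MetaLUN is straightforward and can be sketched as follows (variable names are ours):

```python
# Four identical 100GB component LUNs, each bound on its own 8-disk
# RAID 1/0 group, combine into the 400GB MetaLUN described above.
component_luns = 4
component_size_gb = 100
disks_per_group = 8

metalun_size_gb = component_luns * component_size_gb
total_disks = component_luns * disks_per_group

print(metalun_size_gb, total_disks)  # 400 32
```

Whether the components are striped or concatenated changes how data is laid out across the 32 spindles, but not the resulting 400GB capacity.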

Provisioning Test Case #4: Extending the Size of the Oracle OLTP Database
In this test, we measured the amount of time and recorded the number of steps required to add 400GB of additional space to the 400GB Oracle OLTP database LUN.

Execution Steps and Elapsed Time for Network Appliance FAS3070

Increasing the size of the Oracle OLTP database LUN required 21 seconds and required that we complete the following steps. For this test, we used the SnapDrive 4.1 application from a Windows server:

1. Open Microsoft MMC
2. Expand the "Storage" menu in the left pane
3. Expand the "SnapDrive" menu in the left pane
4. Left click on the "Disks" item under "SnapDrive"
5. Select the virtual disk representing the Oracle OLTP production database for expansion
6. Select the Expand Disk option
7. Select "No" for the option to limit the maximum disk size to save at least one snapshot
8. Enter 400GB in the Expand by Size option
9. Press OK

Using the NetApp SnapDrive 4.1 tool under Windows to expand the size of the LUN required less than 30 seconds and immediately made the additional storage available to the Windows host, requiring no further action on the part of the Windows administrator.

Execution Steps and Elapsed Time for EMC CX3-80

The CX3-80 provides two primary methods for expanding the size of an existing LUN: stripe expansion and concatenate expansion. The stripe expansion method re-stripes the existing data on the LUN across all of the drives participating in the expanded LUN. Concatenate expansion adds the additional space to the end of the existing LUN and does not re-stripe the existing data over the new disk drives. Because these are two vastly different approaches, we investigated both for this test.

Increasing the size of the Oracle OLTP database LUN required 24 seconds using the concatenate expansion process and 194 minutes when using stripe expansion, and required that we complete the following steps on the CX3-80:

1. Create a new RAID group of the same RAID type and size as the LUN representing the Oracle OLTP production database
2. Bind a new LUN to this RAID group and allow the transitioning phase to complete
3. Right click on the LUN to be expanded and select the Expand option
4. Using the LUN expansion wizard, identify the newly created LUN for use in the expansion process
5. Select either the stripe or concatenate method to expand the LUN

We found the process of expanding the size of the LUNs on the CX3-80 to be very fast and easy to use. However, when compared to the FAS3070, the LUN expansion process required significantly more time on the CX3-80 when selecting the stripe expansion method. Additionally, we found that using either of the LUN expansion methods available on the CX3-80 does not immediately make the additional storage available to the Windows host system. The additional storage initially shows up on the Windows host as an unformatted area. To make this additional storage available to the Windows host system required that we use either a volume manager or a tool such as "diskpart" to add the new storage to the existing volume.
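For reference, bringing the newly exposed space into an existing volume with diskpart looks roughly like the session below; the volume number is a placeholder for whichever volume maps to the expanded LUN on a given host.

```
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend
```

With no arguments, `extend` grows the selected volume into the contiguous unallocated space that follows it, which matches the concatenated expansion case described above.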

Provisioning Test Case #5: Measure Time Required to Restore Snapshot Copies
For this test, we measured the amount of time and recorded the number of steps required to create and restore a total of 30 different snapshot copies of the LUN representing the Oracle OLTP database on both the FAS3070 and CX3-80 devices. A snapshot is a point-in-time copy of a LUN that does not change over time even as the LUN from which it was created changes. For the FAS3070, we used SnapDrive under Windows Server 2003. For the CX3-80, we used the EMC SnapView utility under Windows Server 2003. During the course of the testing, we discovered that the CX3-80 has a hard limitation of eight (8) snapshot copies per LUN. As a result, we only created and restored a total of eight snapshot copies on the CX3-80.

Execution Steps and Elapsed Time for Network Appliance FAS3070

To complete this test on the FAS3070 consumed 5 seconds of time to create each of the 30 snapshot copies and 18 seconds of time to restore each of the 30 snapshot copies. To create the 30 snapshots, we completed the following steps:

1. Open the "Computer Management" window
2. Expand the "SnapDrive" item
3. Expand the "Disks" item
4. Select the virtual disk of which to create a snapshot
5. Select "Snapshots"
6. Right click "Snapshots" and select "Create Snapshot"
7. Provide the name of the new snapshot
8. Click OK

9. Repeat steps 5 through 8 above 29 more times to create all 30 snapshots

To restore the 30 snapshot copies required that we execute the following steps:

1. Select a snapshot created in the above process
2. Right click on the target snapshot and select "Restore disk from snapshot"
3. Click "Yes"
4. Repeat the above steps to restore the remaining 29 snapshots

We found the snapshot function very easy to perform. The visual interface in SnapDrive makes the snapshot process understandable and manageable. In addition, we found the documentation on disk space requirements related to the snapshot process clear and easy to understand.

Execution Steps and Elapsed Time for EMC CX3-80

To complete this test on the CX3-80 consumed 7 seconds of time to create each of the 8 snapshot copies and 90 seconds of time to restore each of the 8 snapshot copies. To create the 8 snapshots, we completed the following steps:

1. Create a 2+1 RAID 5 group for the Reserved LUN Pool
2. Create and bind 2 @ 120GB LUNs to the RAID group
3. Right click on Reserved LUN Pool and select configure
4. Select the 2 LUNs created above and assign one to SP A and one to SP B
5. Right click on the SnapView icon and select start SnapView session for the source LUN
6. Repeat the step above 7 more times using the same source LUN
7. Right click on the source LUN
8. Select SnapView and create the initial snapshot
9. Repeat step 8 above 7 more times to create the remaining snapshots
10. Right click on the first snapshot under snapshot name, select activate and choose the first session
11. Assign the snapshot to a storage group (secondary server) different than the storage group containing the source LUN

To restore the 8 snapshot copies, we executed the following steps:

1. Right click a snapshot created in the above process and select "Fracture"
2. Right click on the snapshot session and select "Start Rollback"
3. Label the rollback, select the priority level, and click OK
4. Repeat the steps above for the remaining 7 snapshot copies to be restored

We found that the visual interface in Navisphere provides an easy to follow method of creating and restoring snapshot copies. Additionally, we found the documentation is clear on how to perform the operations.

Provisioning Test Case #6: Measure Time Required to Clone LUNs
For this test, we measured the amount of time and recorded the number of steps required to clone the LUN representing the Oracle OLTP database on both the FAS3070 and the CX3-80. For the FAS3070 we used the LUN clone commands from the Data ONTAP CLI. For the CX3-80 we used the SnapView Clone command. We found that the CX3-80 documentation provides a clear set of steps related to the cloning process. However, compared to the FAS3070, we found the cloning process on the CX3-80 required significantly more time and steps to complete.


Execution Steps and Elapsed Time for Network Appliance FAS3070

To complete this test on the FAS3070 consumed 7 seconds of time and required that we complete the following steps:

1. Open a command line interface to the FAS3070 storage servers using a Telnet session
2. Issue the command "snap create vol1 vol1_oltp_snap"
3. Issue the command "lun clone create /vol/vol1/oltp.lun_clone -b /vol/vol1/oltp.lun vol1_oltp_snap"

For this test, we used the LUN clone feature on the specific OLTP database LUNs. A significant benefit of cloning is that the cloned database uses the same blocks on disk as the original database, so only the changed blocks require additional space. The commands to clone the LUNs complete almost instantaneously. We liked the fact that the documentation to support this process is located on board the storage server.

Execution Steps and Elapsed Time for EMC CX3-80

To complete this test on the CX3-80 consumed 27 minutes of time and required that we complete the following steps:

1. Create a RAID group (1 disk)
2. For the entire array, bind 2 LUNs of 128MB, one for each SP
3. Allocate the two 128MB LUNs as private LUNs as follows:
   a. Right click on the CX, choose SnapView and select clone features properties
   b. Select the 128MB LUNs and hit OK
4. Create a clone group as follows:
   a. Right click on the CX, choose SnapView and select create clone group
   b. Enter a name, select the source LUN and hit OK
5. Create a RAID group for the clone LUNs
6. Bind a LUN identical to the source LUN for use as the cloned LUN
7. Add the clone as follows:
   a. Choose SnapView
   b. Right click on the clone group and choose the add clone option
   c. Select the clone LUN created above, select high sync rate and click apply


Appendix B: Network Diagram for Provisioned Performance Tests
We used the network configuration described below for all of the provisioned performance test cases. These include the following:

• Performance Test Case #1: Performance Using the OLTP Production Database LUN
• Performance Test Case #2: Performance Using Oracle OLTP Production LUN Deployed on a MetaLUN
• Performance Test Case #3: Measure Relative Performance Impact When Taking Successive Snapshot Copies of the Provisioned OLTP Production LUN

Figure 21: NetApp FAS3070 Fibre channel connection diagram for provisioned performance tests


Figure 22: EMC CX3-80 Fibre channel connection diagram for provisioned performance tests


Appendix C: Network Diagram for Performance Tests Using 200 Disks
We used the network configuration described below for all of the performance test cases using 200 drives. These include the following:

• The FAS3070 configuration populated with 200 physical disk drives, each of which was 144GB in size and 15,000 RPM. We tested the FAS3070 using RAID-DP with FCP host connections. RAID-DP (Double Parity) uses two parity disks per RAID group to decrease the likelihood that a double disk failure will cause data loss.
• The CX3-80 configuration populated with 200 physical disk drives, each of which was 146GB in size and 15,000 RPM. We tested the CX3-80 using only FCP with RAID 5 and RAID 1/0.

Figure 23. CX3-80 Performance Configuration Using 200 Disk Drives


Figure 24. FAS3070 Performance Configuration Using 200 Disk Drives


Appendix D: System disclosures
This appendix provides specific details for the FAS3070, CX3-80 and the server systems that were used during the tests.

We received the EMC CX3-80 array as a new purchase directly through an authorized EMC sales channel. This array came in a default factory state from EMC and was initially configured by EMC field personnel. The FAS3070 system used in these tests was shipped from a Network Appliance testing lab in Research Triangle Park, North Carolina. Because the FAS3070 did not come directly from the factory, we felt it was necessary to initialize all of the disks in order to set the FAS3070 in a “direct from factory” state and facilitate the direct comparisons to the CX3-80 that are the basis of much of this testing. As a result, we did not include the time to initialize the disks on the FAS3070 in any of the timed tests in this report.

Network Appliance FAS3070
Storage Processor Unit – 1 w/2 SPU: Network Appliance FAS3070 Storage Servers
Cache size: 16GB
Disk Arrays – 2: Disk Array Enclosures
Disk Drives: 14 per DAE – 144GB 15K RPM Fibre Channel
Base software version: Network Appliance Data ONTAP Release 7.2.1X8
SnapDrive Version: 4.1
Host Attach Kit Version: NTAP Windows FCP Host Attach Kit 3.0

Figure 25: Network Appliance FAS3070 Disclosure Information

EMC CX3-80 CLARiiON Storage Server
Storage Processor Unit – 1 w/2 SPU: EMC CX3-80 Storage Processor Enclosure
Array Cache size: 16GB
Disk Arrays – 2: EMC DAE3P-OS Disk Array Enclosure and Supplement Power Supplies
Disk Drives: 14 per DAE3P – 146GB 15K RPM Fibre Channel
EMC PowerPath: 4.5.0.132
EMC Navisphere: 6.22.0.4.67 Generation 220
EMC Navisphere Manager: Generation 140

Figure 26: EMC CX3-80 Storage Server Disclosure Information

Fujitsu-Siemens Primergy RX300S2 – IOMeter Manager
Processor / Speed / # of CPUs: Dual Intel 3.6GHz Xeon
System RAM / Type / # of Slots: 4GB
Network Adapter: 2 – Broadcom Gigabit Ethernet Controller
OS for IOMeter tests: Microsoft Windows 2003 Enterprise Edition SP1
Installed HBAs: 2 x QLogic 2340 version 9.0.1.12 (SCSI miniport), Driver Date: 10/10/2004

Figure 27: Fujitsu-Siemens Primergy RX300S2 Servers Used for IOMeter Manager Systems

Networking Equipment
Fibre Channel Switch: Brocade SilkWorm 3800, Firmware v3.1.3a
Network Switch (for iSCSI Tests): 3COM Gig-E Managed Switch

Figure 28: Networking Equipment Used


Appendix E. SAN Zoning Configurations for Performance Testing
Fabric Zone Configuration for the CX3-80
All test cases (Provisioning, Full Scale, etc.) were configured with the following zone layout for each of the four Fujitsu-Siemens hosts:
• Host1, QLA2340-1 to SPA(fe2) Port2 & SPB(fe2) Port2
• Host1, QLA2340-1 to SPA(fe2) Port2 & SPB(fe2) Port3
• Host2, QLA2340-1 to SPA(fe2) Port2 & SPB(fe2) Port2
• Host2, QLA2340-2 to SPA(fe3) Port3 & SPB(fe3) Port3
• Host3, QLA2340-1 to SPA(fe0) Port0 & SPB(fe0) Port0
• Host3, QLA2340-2 to SPA(fe1) Port1 & SPB(fe1) Port1
• Host4, QLA2340-1 to SPA(fe0) Port0 & SPB(fe0) Port0
• Host4, QLA2340-2 to SPA(fe1) Port1 & SPB(fe1) Port1

Fabric Zone Configuration for the FAS3070
• Host1, QLA2340-1 to SPA(e0a)
• Host1, QLA2340-2 to SPB(e0a)
• Host2, QLA2340-1 to SPA(e0b)
• Host2, QLA2340-2 to SPB(e0b)
• Host3, QLA2340-1 to SPA(e0c)
• Host3, QLA2340-2 to SPB(e0c)
• Host4, QLA2340-1 to SPA(e0d)
• Host4, QLA2340-2 to SPB(e0d)



VeriTest (www.veritest.com), the testing division of Lionbridge Technologies, Inc., provides outsourced testing solutions that maximize revenue and reduce costs for our clients. For companies who use high-tech products as well as those who produce them, smoothly functioning technology is essential to business success. VeriTest helps our clients identify and correct technology problems in their products and in their line of business applications by providing the widest range of testing services available. VeriTest created the suite of industry-standard benchmark software that includes WebBench, NetBench, Winstone, and WinBench. We've distributed over 20 million copies of these tools, which are in use at every one of the 2001 Fortune 100 companies. Our Internet BenchMark service provides the definitive ratings for Internet Service Providers in the US, Canada, and the UK. Under our former names of ZD Labs and eTesting Labs, and as part of VeriTest since July of 2002, we have delivered rigorous, objective, independent testing and analysis for over a decade. With the most knowledgeable staff in the business, testing facilities around the world, and almost 1,600 dedicated network PCs, VeriTest offers our clients the expertise and equipment necessary to meet all their testing needs. For more information email us at info@veritest.com or call us at 919-380-2800.

Disclaimer of Warranties; Limitation of Liability:
VERITEST HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING; HOWEVER, VERITEST SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND ANALYSIS, THEIR ACCURACY, COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE. ALL PERSONS OR ENTITIES RELYING ON THE RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT VERITEST, ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR RESULT. IN NO EVENT SHALL VERITEST BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL VERITEST'S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH VERITEST'S TESTING. CUSTOMER'S SOLE AND EXCLUSIVE REMEDIES ARE AS SET FORTH HEREIN.
