Executive Overview
Introduction
Database market share for Enterprise Applications
Oracle-SAP Technology Relationship
Oracle Exadata Database Machine for SAP Customers
  Extreme Scalability
Oracle Advanced Compression
  OLTP Table Compression
    Minimal Performance Overhead
  SecureFiles Compression
  How to achieve a compressed Oracle database for SAP applications?
  Recovery Manager (RMAN) Compression
  Data Pump Compression
  Compression for Network Traffic
Oracle Cloud File System
  Moving SAP databases to Oracle 11gR2 on ASM
Real Application Testing
  Database Replay
    Faster deployment
  SQL Performance Analyzer
  Real Application Testing to validate Advanced Compression for SAP
  Conclusion
Online Patching
Direct NFS
SecureFiles Performance
Deferred Segment Creation
Enhanced ADD COLUMN Functionality (Dictionary-Only Add Column)
Table Partitioning
SAP Standard Applications Benchmarks
Real Application Clusters for SAP (RAC for SAP)
  High Availability for SAP Resources (through SAPCTL)
Data Guard for SAP
Patching of Oracle Databases and Real Application Clusters
Oracle Advanced Security
  Tablespace Encryption
  RMAN Backup Encryption (Oracle Secure Backup)
  Data Guard Secure Transmission of Redo Data
  Secure Database exports with Encryption
  SecureFiles Encryption
  Database Vault
More new 11g features
  Data Guard Improvements
    Fast-Start Failover for Maximum Performance Mode in a Data Guard Configuration
    User Configurable Conditions to Initiate Fast-Start Failover in a Data Guard Configuration
    Data Guard Integration, Simplification, and Performance
    Support Up to 30 Standby Databases
  Integration, Simplification, and Performance of Availability Features
    Automatic Reporting of Corrupt Blocks
    Automatic Block Repair
    Block Media Recovery Performance Improvements
    Parallel Backup and Restore for Very Large Files
    Enhanced Tablespace Point-In-Time Recovery (TSPITR)
    Online Application Maintenance and Upgrade
    Invisible Indexes
    Online Index Creation and Rebuild Enhancements
  RMAN Integration, Simplification, and Performance
    Archive Log Management Improvements
    Fast Incremental Backups on Physical Standby Database
  Server Manageability
    Global Oracle RAC ASH Report + ADDM Backwards Compatibility
    ADDM for Oracle Real Application Clusters
Oracle Linux for SAP
Oracle Expertise in the SAP environment
  The Solution Center SAP Support and Service offers SAP customers the following services:
Conclusion
Appendix
Executive Overview
Since 1988, Oracle has been the database of choice as the development platform for SAP applications. In November 1999, Oracle and SAP signed a contract to ensure future cooperation and maintain Oracle's position as a tier-one database platform for SAP. SAP R/3 was originally developed on the Oracle database, and the companies have a long-standing technology relationship that has carried over to subsequent SAP products such as SAP NetWeaver Business Warehouse (SAP NetWeaver BW). Oracle's assistance with the incorporation of new database features, performance testing, bug fixing, and customer problem escalations has been invaluable to SAP and to the large number of SAP customers running on the Oracle database. SAP customers running the Oracle database have always benefited from the close cooperation between the Oracle and SAP development teams, which has resulted in the highest levels of one-stop service and Oracle database optimizations for SAP applications. The Oracle Database has an established history as the industry leader for relational databases. Today, many successful businesses use the Oracle Database to power their mission-critical applications. By deploying Oracle Database 11g Release 2 within their IT architecture, SAP customers can leverage the power of the world's leading database to reduce their server and storage costs, eliminate idle redundancy, and improve quality of service. Since June 10th, 2011, SAP customers can use the Oracle Exadata Database Machine for their SAP applications. The Oracle Exadata Database Machine is an easy-to-deploy solution for hosting the Oracle Database that delivers the highest level of database performance available.
Introduction
Oracle and SAP continue to satisfy their tens of thousands of mutual SAP-on-Oracle database customers. The joint effort has always been characterized by a constant desire to provide mutual customers with efficient service and support solutions for their SAP application needs, in order to bring additional benefit to their businesses and to offer optimum protection of their investments. The Oracle database is always optimized for SAP applications, and each new database release provides many new features, as just done with the Oracle Exadata Database Machine, that help customers cope with constant challenges such as reducing storage costs, improving performance, and minimizing downtime. This paper describes the most important features and technologies supported by SAP and shows the main differentiators between the Oracle Database and DB2 and SQL Server. Many features, such as Oracle Real Application Clusters (RAC), Data Guard, Table Partitioning, and AWR, have been available in earlier Oracle database versions (9i and 10g), have been enhanced in the current Oracle database version 11g Release 2, and can be used with the Oracle Exadata Database Machine. Some major new features, like Advanced Compression, Oracle ASM, Real Application Testing, and Online Patching, are available immediately with Oracle Database 11g Release 2 for SAP.
Database market share for Enterprise Applications
More than two-thirds of all mid-size to large enterprise SAP customers in every industry entrust their application deployments to Oracle databases, and companies are running SAP applications with Oracle databases on all major operating systems. Note that the larger the system (i.e., more users and more data), the higher the requirements with regard to storage savings, performance, security, and high availability. Very large systems are almost exclusively based on Oracle.
Oracle dominates the SAP database market share across operating system platforms, including the various flavors of Unix and Linux as well as Windows. Oracle's market position has real advantages for customers considering database choices for their SAP system. A large installed base indicates that Oracle is able to meet the database needs of SAP customers across many industries and geographies. It also means that a large group of customers have tested the SAP-Oracle combination in situations that no QA group at SAP could ever recreate. Both Oracle and SAP have learned from this experience in the field, and both products have been enhanced as a result. Customers now choosing Oracle for SAP get the accumulated benefits of years of product testing in the real world. The impressively large customer base translates into several advantages:
- Proven technology
- Widest choice of solutions and systems
- Highest consulting expertise on the market
- Best cooperation with hardware and tool vendors
- Largest labor pool of people with combined Oracle and SAP skills
Oracle-SAP Technology Relationship
The Oracle development team working at SAP headquarters in Walldorf, Germany assists SAP in:
- Performance testing of each release with the Oracle database, to ensure there is no degradation of response time, throughput, and scalability between SAP versions
- Fixing database bugs found during SAP functional testing, and including SAP enhancement requests in the database product roadmap
- Incorporating new Oracle features in SAP releases
- Optimizing each new release of the DBMS and new versions of SAP applications
- Responding to escalated customer problems, when related to database issues
Oracle Exadata Database Machine for SAP Customers
The Oracle Exadata Database Machine is a cloud in a box, composed of database servers, Oracle Exadata Storage Servers, an InfiniBand fabric for storage networking, and all the other components required to host an Oracle Database. It delivers outstanding I/O and SQL processing performance for online transaction processing (e.g. SAP ERP 6.0), business warehousing (e.g. SAP BW 7.x), and the consolidation of mixed workloads. Extreme performance is delivered for all types of database applications by leveraging a massively parallel grid architecture using Real Application Clusters and Exadata storage. The Database Machine with Exadata storage delivers breakthrough performance with linear I/O scalability, is simple to use and manage, and delivers mission-critical availability and reliability. The Exadata Storage Server is an integral component of the Exadata Database Machine, and several of its features deliver extreme performance. Exadata storage provides database-aware storage services, such as the ability to offload database processing from the database server to storage, and does so transparently to SQL processing and database applications. Hence, just the data requested by the application is returned, rather than all the data in the queried tables. Exadata Smart Flash Cache dramatically accelerates Oracle Database processing by speeding up I/O operations; the Flash provides intelligent caching of database objects to avoid physical I/O operations. The Oracle Database on the Database Machine is the first Flash-enabled database. Exadata storage also provides an advanced compression technology, Exadata Hybrid Columnar Compression, that typically provides 10x and higher levels of data compression (limited use with SAP). Exadata compression boosts the effective data transfer by an order of magnitude. The Oracle Exadata Database Machine is the world's most secure database machine.
Building on the superior security capabilities of the Oracle Database, the Exadata storage provides the ability to query fully encrypted databases with near zero overhead at hundreds of gigabytes per second. The combination of these, and many other, features of the product are the basis of the outstanding performance of the Exadata Database Machine.
Extreme Scalability
The Exadata Database Machine X2-8 is a full-rack system with 2 database servers (Oracle Linux 5 or Solaris 11 Express) and 14 Exadata Storage Servers. Each database server comes with 64 Intel CPU cores (8 eight-core Intel Xeon X7560 processors) and 1 TB of memory. It is available with either 600 GB High Performance SAS disks or 2 TB High Capacity SAS disks. While an Exadata Database Machine X2-8 rack is an extremely powerful system, a building-block approach allows the Exadata Database Machine X2-8 to scale to almost any size. Exadata Database Machine X2-8 racks can be connected using the integrated InfiniBand fabric. As new racks of Exadata Database Machines are incrementally added to a system, the storage capacity and performance of the system grow. A system composed of two Exadata Database Machine X2-8 racks is simply twice as powerful as a single-rack system, providing double the I/O throughput and double the storage capacity. It can be run in single-system-image mode or logically partitioned for consolidation of multiple databases. Scaling out is easy with the Exadata Database Machine: Oracle Real Application Clusters (RAC) can dynamically add more processing power, and Automatic Storage Management (ASM) can dynamically rebalance the data across Exadata Storage Servers to fully utilize all the hardware in each configuration. Three versions of the Exadata Database Machine X2-2 are available. From the Full Rack system with 8 database servers (Oracle Linux 5 or Solaris 11 Express) and 14 Exadata Storage Servers down to the Quarter Rack system with 2 database servers and 3 Exadata Storage Servers, there is a configuration that fits any application. One version can be upgraded online to another, ensuring a smooth upgrade path as processing requirements grow. All three versions are available with either 600 GB High Performance SAS disks or 2 TB High Capacity SAS disks. Information about the prerequisites, minimum requirements, and all the necessary steps to set up an SAP system on Oracle Exadata can be found in SAP Note 1590515 and in the Best Practices Guide "Using SAP NetWeaver with the Oracle Exadata Database Machine" (http://www.oracle.com/us/products/database/sap-exadata-wp-409603.pdf).
OLTP Table Compression
Oracle's OLTP Table Compression works by eliminating duplicate values within a database block, even across multiple columns. A compressed block contains a structure called a symbol table that maintains compression metadata. When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table. Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table. Through this innovative design, compressed data is self-contained within the database block, as the metadata used to translate compressed data into its original state is stored in the block. Compared with competing compression algorithms that maintain a global database symbol table, Oracle's unique approach offers significant performance benefits by not introducing additional I/O when accessing compressed data. In general, customers can expect to reduce their storage space consumption by a factor of 2 to 3 by using the OLTP Table Compression feature; that is, the amount of space consumed by uncompressed data will be two to three times larger than that of the compressed data. The benefits of OLTP Table Compression go beyond on-disk storage savings. One significant advantage is Oracle's ability to read compressed blocks directly, without having to uncompress the block first. Therefore, there is no measurable performance degradation for accessing compressed data. In fact, in many cases performance may improve due to the reduction in I/O, since Oracle will have to access fewer blocks. Further, the buffer cache becomes more efficient by storing more data without having to add memory. The results achieved using OLTP compression at real-world SAP BW customers are depicted in figure 2, which shows space savings of up to 86% at the table level.
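As a rough illustration of the per-block symbol table described above, consider the following sketch. This is purely conceptual (the function names and data layout are invented for the example, and Oracle's real on-disk block format is far more sophisticated), but it shows why a block compressed this way is self-contained and can be decoded without any additional I/O:

```python
def compress_block(rows):
    """Replace duplicate values with indexes into a per-block symbol table."""
    symbols = []     # the per-block symbol table: one copy of each value
    index = {}       # value -> position in the symbol table
    compressed = []
    for row in rows:
        out = []
        for value in row:
            if value not in index:
                index[value] = len(symbols)
                symbols.append(value)
            out.append(index[value])  # short reference instead of the value
        compressed.append(out)
    return symbols, compressed

def decompress_block(symbols, compressed):
    """Rebuild the rows using only data stored inside the block itself."""
    return [[symbols[ref] for ref in row] for row in compressed]

rows = [["DE", "OPEN", 10], ["DE", "OPEN", 20], ["US", "OPEN", 10]]
symbols, packed = compress_block(rows)
assert decompress_block(symbols, packed) == rows
print(symbols)   # duplicate values appear only once in the symbol table
```

Note how the repeated values ("DE", "OPEN", 10) are each stored once, and every row keeps only small references into the table.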
Minimal Performance Overhead
As stated above, OLTP Table Compression has no adverse impact on read operations. Additional work is performed while writing data, making it impossible to eliminate performance overhead for write operations entirely. However, Oracle has put a significant amount of work into minimizing this overhead for OLTP Table Compression. Oracle compresses blocks in batch mode rather than compressing data every time a write operation takes place. A newly initialized block remains uncompressed until the data in the block reaches an internally controlled threshold. When a transaction causes the data in the block to reach this threshold, all contents of the block are compressed. Subsequently, as more data is added to the block and the threshold is again reached, the entire block is recompressed to achieve the highest level of compression. This process repeats until Oracle determines that the block can no longer benefit from further compression. Only transactions that trigger the compression of a block experience the slight compression overhead; the majority of OLTP transactions on compressed blocks therefore have exactly the same performance as they would on uncompressed blocks.
SecureFiles Compression
SecureFiles is a new feature in Oracle Database 11g that introduces a completely reengineered large object (LOB) data type to dramatically improve performance, manageability, and ease of application development.
SecureFile data is compressed using industry-standard compression algorithms. Compression not only results in significant storage savings but also improves performance by reducing I/O, buffer cache requirements, redo generation, and encryption overhead. If compression does not yield any savings, or if the data is already compressed, SecureFiles will automatically turn off compression for such columns. Compression is performed on the server side and allows random reads and writes to SecureFile data. SecureFiles compression provides significant storage savings for unstructured data depending on the degree of compression: LOW, MEDIUM (the default), and HIGH, which represent a tradeoff between storage savings and latency. SecureFiles compression handles in-line and out-of-line LOB data, which are becoming more and more important in SAP applications and are widely used in SAP products such as SAP CRM, SAP XI, SAP NetWeaver Portal, and even SAP ERP. Almost all non-cluster tables in SAP ERP use out-of-line LOBs, which are unique to the Oracle database. Together, OLTP compression and SecureFiles compression enable Oracle to compress every type of data related to SAP applications: tables, indexes, and unstructured data. Using all 11g space optimizations, the database size can be reduced by up to a factor of 3.
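The "turn compression off when it yields no savings" behaviour can be mimicked in a few lines. The storage format and function names below are invented for the illustration; SecureFiles' real implementation is of course entirely different. The idea is simply to compare sizes and store the original bytes whenever compression does not help:

```python
import os
import zlib

def store_lob(data: bytes):
    """Store a LOB, compressing only when compression actually saves space."""
    candidate = zlib.compress(data, 6)
    if len(candidate) < len(data):
        return ("compressed", candidate)
    return ("plain", data)   # no savings (e.g. already-compressed data): skip it

def read_lob(stored):
    kind, payload = stored
    return zlib.decompress(payload) if kind == "compressed" else payload

text = b"<item>repetitive xml</item>" * 500   # compresses very well
noise = os.urandom(1000)                       # incompressible: stored as-is
assert store_lob(text)[0] == "compressed"
assert store_lob(noise)[0] == "plain"
assert read_lob(store_lob(text)) == text
```

Reads never need to know in advance whether a given LOB was stored compressed; the per-LOB flag decides.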
Figure 4: Space savings achieved with 11.2 Compression and other space optimizations
How to achieve a compressed Oracle database for SAP applications?
All existing SAP systems with SAP Kernel 6.40 EX2 and higher running on Oracle Database 9.2.0.8 or 10.2.0.4 require an upgrade to 11.2.0.2 (documentation at http://service.sap.com/instguides -> Database Upgrades -> Oracle) and the latest available SAP Bundle Patch (http://service.sap.com/oracle-download). After the upgrade, the database can be compressed through a reorganization using SAP BRSPACE Release 7.20 or higher. The new BRSPACE version has been enriched with new options (see SAP Note 1431296, "LOB conversion and table compression with BRSPACE 7.20") that allow the conversion of LONG and LOB segments into SecureFiles, OLTP and SecureFiles compression, and even combinations with index compression. BRSPACE is intelligent enough to know which tables and indexes need to be compressed and are worth compressing. BRSPACE will skip all other tables where compression does not make sense, e.g. SAP cluster tables and SAP pool tables (please check SAP Note 1431296 for the complete list). This means the easiest way to compress all relevant tables within a tablespace is to compress the whole tablespace and let BRSPACE do the rest (compress the suitable tables and skip the non-suitable ones).
All new SAP system installations based on Oracle Database 11.2 can activate Advanced Compression during the installation. Index compression can be done either in combination with OLTP table compression or in a separate step.
Data Pump Compression
In the following compression example from the Oracle sample database, the OE and SH schemas were exported while simultaneously compressing all data and metadata. The dump file size was reduced by 74.67%. Three versions of the gzip (GNU zip) utility and one UNIX compress utility were used to compress the 6.0 MB dump file set; the reduction in dump file size was comparable to Data Pump compression. Note that the reduction in dump file size will vary based on data types and other factors. Full Data Pump functionality is available when using a compressed file: any command that works on a regular file also works on a compressed file. Users have the following options to determine which parts of a dump file set should be compressed:
- ALL enables compression for the entire export operation.
- DATA_ONLY results in all data being written to the dump file in compressed format.
- METADATA_ONLY results in all metadata being written to the dump file in compressed format. This is the default.
- NONE disables compression for the entire export operation.
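The kind of measurement reported above can be reproduced in spirit with a general-purpose compressor. The sample payload below is invented for the demonstration; a real test would of course measure an actual Data Pump dump file set. It simply shows how a compression ratio of the reported magnitude is computed:

```python
import gzip

# Invented stand-in for export data: highly repetitive, like typical table rows.
sample = b"CUSTOMER,ORDER,STATUS\n" + b"ACME Corp,42,OPEN\n" * 50_000

compressed = gzip.compress(sample, compresslevel=6)
ratio = 100 * (1 - len(compressed) / len(sample))   # percent size reduction
print(f"dump file size reduced by {ratio:.2f}%")
assert ratio > 70   # repetitive export data typically shrinks dramatically
```

Real-world ratios depend heavily on the data types involved, which is exactly the caveat the text above makes.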
Oracle Cloud File System
The Oracle Cloud File System, which complements Automatic Storage Management for Oracle Databases, is now certified by SAP. This certification was completed as of Oracle version 11.2.0.2 for Oracle Single Instance and Oracle Real Application Clusters (Oracle RAC). The Oracle Cloud File System includes Automatic Storage Management (ASM), the Automatic Storage Management Cluster File System (ACFS), and the Automatic Storage Management Dynamic Volume Manager (ADVM). These three components create an integrated foundation for database and general-purpose files, as well as an infrastructure for cluster and single-node server configurations. The Oracle Grid Infrastructure simplifies and streamlines the management of volumes, file systems, and cluster configurations, eliminating the need for multiple third-party software layers and the complexity and cost they bring. For SAP customers using RAC, complexity and costs can be reduced dramatically by using SAPCTL (SAP Control) based on Oracle Clusterware, making SAP services highly available and eliminating the need for third-party failover software. Starting with 11gR2 (11.2.0.2), the Oracle Cloud File System is the preferred storage platform for SAP systems running on Oracle Real Application Clusters (RAC), as well as for SAP systems running on a single Oracle Database. SAP customers are now able to use the Oracle Cloud File System to manage ALL data: Oracle Database files, Oracle Clusterware files, and non-structured general-purpose data such as Oracle and SAP kernel binaries, external files, and text files.
Moving SAP databases to Oracle 11gR2 on ASM
Moving an existing SAP database to Oracle 11gR2 on ASM involves the following steps:
- ASM disk group configuration
- Grid Infrastructure installation
- Oracle Database 11.2.0.2 installation
- Configuration of the source and target system
- Online database migration through RMAN, using duplication from the active database, if the OS platform does not change or the source and target system have the same endianness
- Offline database migration through RMAN, using transportable tablespaces, export/import, or Triple O based on Oracle GoldenGate, if the platform changes (this can also be used for homogeneous migrations)
For more information about the migration process, please check the white papers mentioned above.
Database Replay
Database Replay provides DBAs and system administrators with the ability to faithfully, accurately, and realistically rerun actual production workloads, including online user and batch workloads, in test environments. By capturing the full database workload from production systems, including all concurrency, dependencies, and timing, Database Replay enables you to realistically test system changes by essentially recreating production workloads on the test system, something that a set of scripts can never duplicate. With Database Replay, DBAs and system administrators can test:
- Database upgrades, patches, parameter and schema changes, etc.
- Configuration changes, such as conversion from a single instance to RAC, ASM, etc.
- Storage, network, and interconnect changes
- Operating system and hardware migrations, patches, upgrades, and parameter changes
Faster deployment
Another major advantage of Database Replay is that it does not require the DBA to spend months acquiring functional knowledge of the application and developing test scripts. With a few point-and-clicks, DBAs have a full production workload available at their fingertips to test and roll out any change. This cuts testing cycles down from many months to days or weeks and brings significant cost savings to businesses as a result. Database Replay consists of three main steps:
- Capture the workload in production, including critical concurrency
- Replay the workload in test with production timing
- Analyze and fix issues before production
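The capture-and-replay idea behind these steps can be sketched with a toy harness. Real Database Replay operates at the database call level with full concurrency; the function names and timing model below are merely illustrative of the concept of recording calls with their original spacing and re-issuing them later:

```python
import time

def capture(workload):
    """Run a workload (a list of callables) and record (offset, call) pairs."""
    start = time.monotonic()
    log = []
    for call in workload:
        log.append((time.monotonic() - start, call))  # remember when it ran
        call()
    return log

def replay(log):
    """Re-issue the captured calls, preserving their original relative timing."""
    start = time.monotonic()
    results = []
    for offset, call in log:
        delay = offset - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)   # honour production timing
        results.append(call())
    return results

log = capture([lambda: 1 + 1, lambda: 2 * 2])
assert replay(log) == [2, 4]   # same calls, same order, same pacing
```

The key property mirrored here is that replay needs no hand-written scripts: whatever was captured is what gets re-executed.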
SQL Performance Analyzer
SQL Performance Analyzer can predict and analyze the impact on SQL response time of changes such as:
- Database upgrades, patches, and initialization parameter changes
- Configuration changes to the operating system, hardware, or database
- Schema changes, such as adding new indexes, partitioning, or materialized views
- Gathering optimizer statistics
- SQL tuning actions, for example creating SQL profiles
- Capture the SQL workload in production, including statistics and bind variables
- Re-execute the SQL queries in the test environment
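The analysis step, comparing per-statement response times before and after a change and flagging regressions, can be sketched as follows. Statement identifiers, timings, and the regression threshold are all invented for the example:

```python
def find_regressions(before: dict, after: dict, tolerance: float = 0.10):
    """Return statements whose elapsed time regressed by more than tolerance."""
    regressed = {}
    for sql_id, old in before.items():
        new = after.get(sql_id, old)   # unchanged if not re-measured
        if new > old * (1 + tolerance):
            regressed[sql_id] = (old, new)
    return regressed

# Hypothetical elapsed times (seconds) captured before and after a change.
before = {"sql_1": 0.020, "sql_2": 0.500, "sql_3": 1.200}
after  = {"sql_1": 0.021, "sql_2": 0.900, "sql_3": 1.100}
print(find_regressions(before, after))   # only sql_2 regressed beyond 10%
```

In the real tool this comparison is driven by full execution statistics and plans, not just elapsed time, but the before/after structure of the analysis is the same.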
Real Application Testing to validate Advanced Compression for SAP
- No significant overhead was observed during the capture process
- Advanced Compression reduced the database size by 50%
- Redo generation increased by ~25%
- Physical reads were reduced by 60%
- CPU usage stayed flat
Conclusion
Real Application Testing proved to be a vital tool for validating the upgrade from Oracle Database 10g Release 2 to 11g Release 2. The ability to test with production workloads and SQL statements is essential for testing business-critical applications like SAP, and this feature will significantly mitigate upgrade and change risk when used by experienced DBAs.
Online Patching
A regular RDBMS patch comprises one or more object files and/or libraries. Installing a regular patch requires shutting down the RDBMS instance, relinking the oracle binary, and restarting the instance; uninstalling a regular patch requires the same steps. With Oracle Database 11g, it is possible to install single or bundle patches completely online, without requiring the database instance to be shut down and without requiring RAC or Data Guard configurations. With online patching, which is integrated with OPatch, each process associated with the instance checks for patched code at a safe execution point and then copies the code into its process space.
An online patch is a special kind of patch that can be applied to a live, running RDBMS instance. An online patch contains a single shared library; installing it does not require shutting down the instance or relinking the oracle binary. An online patch can be installed or uninstalled using OPatch (which uses oradebug commands to install and uninstall the patch). Online patches are currently only supported for the RDBMS, i.e. the oracle binary. How does online patching differ from traditional diagnostic patching?
- Online patches are applied to and removed from a running instance, whereas traditional patches require the instance to be shut down.
- Online patches use the oradebug interface to install and enable the patches, whereas traditional diagnostic patches are linked into the "oracle" binary.
- Online patches do not require the "oracle" binary to be relinked, whereas traditional diagnostic patches do.
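The general idea of swapping in corrected code while a process keeps running can be illustrated with a toy dispatch table. Oracle's actual mechanism (a patched shared library activated via oradebug at a safe execution point) is entirely different under the hood, so treat this purely as a conceptual sketch with invented function names:

```python
def discount(price):           # routine shipped with the release (wrong rate)
    return price * 0.9

def discount_patched(price):   # corrected routine delivered as a "patch"
    return price * 0.8

# Indirection layer: every request looks up the *current* code, which is the
# safe point at which a patch can be swapped in without stopping the process.
dispatch = {"discount": discount}

def handle_request(price):
    return dispatch["discount"](price)

assert handle_request(100) == 90.0   # old behaviour, process running
dispatch["discount"] = discount_patched   # "online patch" applied in place
assert handle_request(100) == 80.0   # new behaviour, no restart needed
```

The essential property mirrored here is that callers never stop: only calls made after the swap see the patched code.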
Direct NFS
Standard NFS client software, provided by the operating system, is not optimized for Oracle Database file I/O access patterns. With Oracle Database 11g Release 2, you can configure Oracle Database to access NAS devices directly using the Oracle Direct NFS Client, rather than the operating system kernel NFS client. Oracle Database accesses files stored on the NFS server directly through the integrated Direct NFS Client, eliminating the overhead imposed by the operating system kernel NFS. These files remain accessible via the operating system kernel NFS client, allowing seamless administration. Direct NFS Client includes two fundamental I/O optimizations to increase throughput and overall performance. First, Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks. This decreases memory consumption by eliminating scenarios where Oracle data is cached both in the SGA and in the operating system cache, and it eliminates the kernel-mode CPU cost of copying data from the operating system cache into the SGA. Second, Direct NFS Client performs asynchronous I/O, which allows processing to continue while an I/O request is submitted and processed. SAP customers can benefit from Direct NFS in the following ways:
- Up to 50% more database throughput in NAS environments with multiple NICs
- Up to 20% CPU savings on the database server
- Works for Single Instance and Real Application Clusters (RAC)
- Works on UNIX/Linux and Windows platforms
Failure of NICs will not impact access to data as long as a single NIC survives. Up to four network cards can be used between the database server and the NAS.
- Faster, easier, and more available than any OS- or NAS-based bonding or trunking solution
- Direct NFS with NAS may provide higher throughput than traditional, more complex SAN solutions
SecureFiles Performance
SecureFiles offer the best solution for storing file content, such as images, audio, video, PDFs, and spreadsheets. Traditionally, relational data is stored in a database, while unstructured content (both semi-structured and unstructured) is stored as files in file systems. SecureFiles represents a major paradigm shift in the choice of file storage. SecureFiles is specifically engineered to deliver high performance for file data, comparable to that of traditional file systems, while retaining the advantages of the Oracle Database. SecureFiles offers the best database and file system architecture attributes for storing unstructured content. SAP customers benefit from SecureFiles because of:
! Significantly faster access times compared to LOBs in SAP environments
! Increased transaction throughput on SAP cluster tables, especially with RAC
! Prerequisite for compression of SAP tables containing LOBs (e.g. cluster tables)
Overall transaction throughput increases when LOB data is stored in SecureFiles (see figure 5). LOB data stored in SecureFiles delivers equal or better performance compared with LOB data stored in LONG or BasicFiles (the LOB implementation prior to 11g). SecureFiles dramatically improve the scalability of SAP applications running against Oracle Database 11g RAC, but Oracle Database 11g Single Instance also benefits substantially from SecureFiles. Therefore, the clear recommendation is to migrate all existing LONG and BasicFile LOB data to SecureFiles.
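In plain Oracle SQL, a BasicFile LOB can be rebuilt as a SecureFile with a table move; the schema, table, column, and index names below are illustrative (in SAP systems such migrations are normally driven by SAP-provided tools such as BR*Tools rather than by manual DDL):

```sql
-- Illustrative: rebuild the LOB column as a SecureFile
ALTER TABLE sapsr3.docs_tab
  MOVE LOB (doc_blob) STORE AS SECUREFILE;

-- Indexes on the moved table become unusable and must be rebuilt
ALTER INDEX sapsr3.docs_tab_ix REBUILD;
```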
! Empty database objects will not consume any disk space
! Very important for SAP environments, as 60-70% of all tables, LOBs, indexes, and partitions in an SAP installation are empty
! Makes database installation for SAP a lot faster, because creation of empty tables, LOBs, and indexes is dramatically faster
! Oracle Data Dictionary space queries run substantially faster
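The behavior can be sketched in plain SQL (the table name is illustrative); with deferred segment creation, no segment appears in the dictionary until the first row is inserted:

```sql
-- SEGMENT CREATION DEFERRED is the default for new tables in 11gR2
CREATE TABLE empty_sap_tab (
  mandt VARCHAR2(3),
  val   NUMBER
) SEGMENT CREATION DEFERRED;

-- No segment, and therefore no disk space, has been allocated yet:
SELECT COUNT(*) FROM user_segments
 WHERE segment_name = 'EMPTY_SAP_TAB';   -- 0 until the first INSERT
```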
In Oracle 11g, the database can optimize the resource usage and storage requirements for this operation: default values are maintained in the data dictionary for columns specified as NOT NULL. Adding a new column with a DEFAULT value and a NOT NULL constraint no longer requires the default value to be stored in all existing records. This not only enables a schema modification in sub-seconds, independent of the existing data volume, it also consumes no space. Especially for large tables, adding a column results in reduced execution time and space savings. Because adding columns is very common in SAP BW applications and SAP upgrades, the enhanced ADD COLUMN functionality leads to:
! Factor 10-20 performance improvement for SAP BW during the add-column process
! Savings of large amounts of disk space
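A minimal sketch of the optimized operation (the table and column names are illustrative):

```sql
-- In 11g this is a metadata-only change: the default value is kept in
-- the data dictionary instead of being written into every existing row
ALTER TABLE large_bw_tab
  ADD (load_flag VARCHAR2(1) DEFAULT 'X' NOT NULL);
```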
Table Partitioning
Table partitioning has been supported since SAP Release 4.6C (using many of the available Oracle partitioning types) and SAP BW 2.0, where many SAP InfoCube tables are partitioned by default. As of 11g Release 2 and SAP BASIS Release 700 (Support Package 22), composite partitioning (or subpartitioning) and interval partitioning are also supported by SAP:
With composite partitioning, a scheme introduced in Oracle8i Database, you can create subpartitions from partitions, allowing further granularity of the table. In that release, however, range-partitioned tables could be subpartitioned only by hash. In Oracle9i, composite partitioning was expanded to include range-list subpartitioning. In Oracle Database 11g, you are no longer limited to range-hash and range-list composite partitioning; you can create composite partitions in virtually any combination. Customers can create the following types of composite partitions in Oracle 11g: range-range, range-hash, range-list, list-range, list-hash, and list-list. Interval partitioning, new in 11g, is an extension of range partitioning that instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions. You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point. The lower boundary of every interval partition is the non-inclusive upper boundary of the previous range or interval partition. For example, if you create an interval-partitioned table with monthly intervals and the transition point at January 1, 2007, then the lower boundary for the January 2007 interval is January 1, 2007. The lower boundary for the July 2007 interval is July 1, 2007, regardless of whether the June 2007 partition was already created.
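The monthly-interval example from the text can be written as follows (the table and partition names are illustrative):

```sql
CREATE TABLE sales_doc (
  doc_id   NUMBER,
  doc_date DATE
)
PARTITION BY RANGE (doc_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( -- transition point: everything below January 1, 2007 is a range partition
  PARTITION p_pre_2007 VALUES LESS THAN (DATE '2007-01-01')
);
-- Inserting a row with doc_date in, say, July 2007 automatically creates
-- the July 2007 interval partition; no DBA intervention is needed
```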
The "SAP Partition Engine" provides a tool for SAP/Oracle systems that you can use to partition large application tables to optimize archiving. The Partition Engine offers a predefined set of approximately 30 application tables that can be partitioned based on time-based criteria:
! Existing non-partitioned tables are converted through an ABAP/SAP BR*Tools task
! Partition maintenance is fully automated through the internal SAP SM37 job and requires no DBA intervention
For additional information regarding prerequisites and usage, check SAP Note 1333328.
Additionally, the one-node and two-node results delivered more than 67 percent higher performance per core and 3.3 times more performance per processor, respectively, than the highest IBM DB2 results on the SAP BI-D Standard Application Benchmark. The data mart scenario is one use of the business intelligence capabilities of the SAP NetWeaver technology platform. The data mart contains a static snapshot of a huge amount of operational data, and multiple users run queries on this data in 10 InfoCubes that contain 2.5 billion (2,500,000,000) records. The key figure is the number of query navigation steps per hour against an enormous amount of data. Oracle has extended this series of benchmarks to three-node (SAP certification number 2009044) and then four-node (SAP certification number 2009045) RAC clusters to prove that scalability stays at the same high level whenever the node count, and thus the resources, are doubled.
In November 2007, Oracle announced a world-record result on the SAP Sales and Distribution-Parallel (SD-Parallel) Standard Application Benchmark running on the SAP ERP 6.0 application with 37,040 SD users (Certification Number 2008013). Based on these results, an IBM paper states: "This document will demonstrate that the SAP software suite works and scales very well utilizing multiple server nodes in an Oracle RAC cluster." A head-to-head comparison found Oracle on top with 3,600 more SAP SD users than Microsoft SQL Server 2005 on identical Fujitsu hardware (Oracle/Linux certification number 2006071 and SQL Server/Windows certification number 2006068). Another benchmark comparison on identical HP hardware showed that Oracle Database could serve 34% more SD users than SQL Server (Oracle/Linux certification number 2008064 and SQL Server/Windows certification number 2008026). This comparison shows that Oracle 10g outperformed SQL Server 2008. "Oracle continues to prove that it is far ahead of the competition when it comes to meeting the high-performance, data-intensive computing demands of our customers," said Juan Loaiza, senior vice president, Systems Technology, Oracle. "This new world-record result and superior scalability proof points clearly distinguish Oracle Database and Real Application Clusters in demanding enterprise application environments." A production SAP system sees fluctuating user loads, contention on frequently used tables, a mix of reads and writes, and occasional large batch jobs. The database platform for the system needs to be able to scale easily with this mixed workload without requiring frequent and extensive DBA intervention. SAP standard benchmark results as well as customer experience show that the Oracle RDBMS distinguishes itself through optimal usage of available system resources. SAP certified a series of benchmarks that demonstrate the impressive scalability of Oracle Real Application Clusters (RAC): the throughput increased by a factor of 1.9 whenever the number of nodes was doubled. This scalability was proven by Oracle in two of the best-known SAP Standard Application Benchmarks: SAP BI-D (figure 5) and SAP SD (figure 6).
unavailability, such as system failures, data failures, disasters, human errors, system maintenance operations, and data maintenance operations. The cornerstone of Oracle's high availability solutions that protects from system failures is Oracle Real Application Clusters (RAC). Oracle RAC is a cluster database with a shared-cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide a highly scalable and available database solution for SAP applications. RAC supports the transparent deployment of a single database across a cluster of active servers, providing fault tolerance against hardware failures or planned outages. RAC supports mainstream business applications of all kinds; these include popular packaged products such as SAP, as well as custom applications. RAC provides very high availability for these applications by removing the single point of failure of a single server. In a RAC configuration, all nodes are active and serve production workload. If a node in the cluster fails, the Oracle Database continues running on the remaining nodes. Individual nodes can also be shut down for maintenance while application users continue to work. A RAC configuration can be built from standardized, commodity-priced processing, storage, and network components. RAC also enables a flexible way to scale applications, using a simple scale-out model. When more processing power is needed by a particular application service, another server can be added easily and dynamically, without taking any of the active users offline. Based on customer configurations, SAP dialog instances and connected users can be routed to dedicated nodes in the RAC cluster. Unlike a failover cluster, where every SAP instance is connected to a single database instance, in an Oracle RAC cluster one or more SAP instances can be connected to a dedicated Oracle RAC instance among the available instances.
If one RAC node crashes, the users connected to the other nodes are not affected, since they are connected to a different database instance. The SAP dialog instances that were connected to the crashed database instance (node 1) are automatically reconnected to a surviving database instance (node 2) within seconds. If more than one SAP instance was connected to the crashed database instance, the SAP instances concerned can be reconnected either to a single available RAC instance or to different RAC instances in order to split the workload.
standby system, if the primary fails, in order to maintain high availability for mission-critical applications without downtime. Data Guard standby databases can be located at remote disaster recovery sites thousands of miles away from the production data center, or they may be located in the same city, the same campus, or even the same building. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing downtime and preventing data loss. Oracle Data Guard 11g Release 2 redefines what users should expect from such solutions. Data Guard is included with Oracle Database Enterprise Edition and provides the management, monitoring, and automation software to create and maintain one or more synchronized standby databases that protect data from failures, disasters, errors, and corruptions. It can address both high availability and disaster recovery requirements and is the ideal complement to Oracle Real Application Clusters. Data Guard functionality for SAP customers:
Snapshot Standby: Snapshot Standby enables a physical standby database to be opened read-write for testing or any activity that requires a read-write replica of production data. A Snapshot Standby continues to receive, but not apply, updates generated by the primary. These updates are applied to the standby database automatically when the Snapshot Standby is converted back to a physical standby database. Primary data is protected at all times. A physical standby database, because it is an exact replica of the primary database, can also be used to offload the overhead of performing backups from the primary database.

Automatic Gap Resolution: In cases where the primary and standby databases become disconnected (network failures or standby server failures), and depending upon the protection mode used, the primary database will continue to process transactions and accumulate a backlog of redo that cannot be shipped to the standby until a new network connection can be established. While in this state, Data Guard continually monitors standby database status, detects when the connection is re-established, and automatically resynchronizes the standby database with the primary (step four in Figure 3). No administrative intervention is required as long as the archive logs required to resynchronize the standby database are available on disk at the primary database. In the case of an extended outage where it is not practical to retain the required archive logs, a physical standby can be resynchronized using an RMAN fast incremental backup of the primary database.

Oracle Data Validation: One of the significant advantages of Data Guard is its ability to use Oracle processes to validate redo before it is applied to the standby database. Data Guard is a loosely coupled architecture where standby databases are kept synchronized by applying redo blocks, completely detached from possible data file corruptions that can occur at the primary database.
Redo is also shipped directly from memory (the system global area), and thus is completely detached from I/O corruptions on the primary. Corruption-detection checks occur at a number of key interfaces during redo transport and apply.
Managing a Data Guard Configuration: Primary and standby databases and their various interactions may be managed using SQL*Plus. Data Guard also offers a distributed management framework called the Data Guard Broker, which automates and centralizes the creation, maintenance, and monitoring of a Data Guard configuration. Administrators may interact with the Broker using either Enterprise Manager Grid Control or the Broker's command-line interface (DGMGRL).

Role Management Services: Data Guard Role Management Services quickly transition a designated standby database to the primary role. A switchover is a planned operation used to reduce downtime during planned maintenance, such as operating system or hardware upgrades. Regardless of the transport service (SYNC or ASYNC) or protection mode utilized, a switchover is always a zero-data-loss operation. A failover brings a standby database online as the new primary database during an unplanned outage of the primary database. A failover operation does not require the standby database to be restarted in order to assume the primary role. Also, as long as the database files on the original primary database are intact and the database can be mounted, the original primary can be reinstated and resynchronized as a standby database for the new primary using Flashback Database; it does not have to be restored from a backup.
Fast-Start Failover: Fast-Start Failover allows Data Guard to automatically fail over to a previously chosen standby database without requiring manual intervention to invoke the failover. A Data Guard Observer process continuously monitors the status of a Fast-Start Failover configuration. If both the Observer and the standby database lose connectivity to the primary database, the Observer attempts to reconnect to the primary database for a configurable amount of time before initiating a fast-start failover. Fast-start failover is designed to ensure that, of the three fast-start failover members (the primary, the standby, and the Observer), at least two agree to major state transitions, preventing split-brain scenarios from occurring. Once the failed primary is repaired and mounted, it must establish a connection with the Observer process before it can open. When it does, it is informed that a failover has already occurred, and the original primary is automatically reinstated as a standby of the new primary database. The simple yet elegant architecture of fast-start failover makes it excellent for use when both high availability and data protection are required.

Automating Client Failover: The ability to quickly perform a database failover is only the first requirement for high availability. SAP applications must also be able to quickly drop their connections from a failed primary database and quickly reconnect to the new primary database. Effective SAP failover in a Data Guard context has three components:
! Fast start of database services on the new primary database
! Fast notification of clients and fast reconnection to the new primary database
In previous Oracle releases, one or more user-written database triggers were required to automate client failover, depending upon the configuration. Data Guard 11g Release 2 simplifies configuration significantly by eliminating the need for user-written triggers to automate client failover. Role transitions managed by the Data Guard Broker can automatically fail over the database, start the appropriate services on the new primary database, disconnect clients from the failed database, and redirect them to the new primary database; no manual intervention is required.
Easy Conversion of a Physical Standby Database to a Reporting Database: A physical standby database can be opened read/write for reporting purposes and then flashed back to a point in the past to be easily converted back to a physical standby database. At this point, Data Guard automatically synchronizes the standby database with the primary database. This allows the physical standby database to be utilized for read/write reporting activities for SAP applications, e.g. NetWeaver BI.
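In 11g, this open-read/write-then-flash-back cycle is automated by the snapshot standby conversion commands; a sketch (additional shutdown/mount steps apply in practice, and the Data Guard Broker can perform the same conversion):

```sql
-- On the physical standby: open it read/write for reporting or testing
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- ... run read/write reporting workload ...

-- Discard the local changes and resynchronize with the primary
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```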
Many Oracle Data Guard 11g Release 2 capabilities are enhancements of features originally introduced with Oracle 10g, such as improved redo transmission, easy conversion of a physical standby database to a reporting database, and Real-Time Apply.
Prior to 11g Release 2, SAP customers could already implement security features such as column encryption through Transparent Data Encryption (TDE) and client/server network encryption to secure the data transfer between SAP instances and the database server. Unlike most database encryption solutions, TDE is completely transparent to existing applications, with no triggers, views, or other application changes required. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks.
Tablespace Encryption
Starting with Oracle Database 11g it is possible to encrypt entire tablespaces. This makes it much easier to ensure that all relevant data is encrypted because everything stored in the tablespace gets encrypted automatically. Tablespace encryption means entire application tables can be transparently encrypted. Data blocks will be transparently decrypted as they are accessed by the database.
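A sketch in plain SQL (the tablespace and file names are illustrative; an open Oracle Wallet holding a TDE master key is a prerequisite):

```sql
CREATE TABLESPACE secure_sap_ts
  DATAFILE '/oracle/C11/sapdata1/secure_ts_01.dbf' SIZE 1G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
-- Everything stored in this tablespace is encrypted on disk automatically
```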
Oracle database to tape through integration with Recovery Manager (RMAN), supporting versions Oracle9i to Oracle Database 11g.
! Optimized Oracle database backups to tape, achieving 10-25% faster backups than comparable media management utilities with up to 30% less CPU utilization
! Backup encryption using the AES128, AES192, or AES256 encryption algorithms
! File system data protection of local and distributed servers
! Policy-based tape backup management
The following is a summary of the steps needed for each database in the Data Guard configuration:
! Create a password file for each database in the Data Guard configuration
! Set the initialization parameter on each instance
After you have performed these steps to set up security on every database in the Data Guard configuration, Data Guard transmits redo data only after the appropriate authentication checks using SYS credentials are successful. This authentication can be performed even if Oracle Advanced Security is not installed and provides some level of security when shipping redo data.
! Protected by the Transparent Data Encryption master encryption key
! Protected by a passphrase
! Protected by both a passphrase and the Oracle Transparent Data Encryption master encryption key
When using Oracle Transparent Data Encryption, Oracle Data Pump uses the Transparent Data Encryption master encryption key from either the Oracle Wallet or a Hardware Security Module (HSM). When using a passphrase, Oracle Data Pump uses the passphrase supplied on the command line as the key for the encryption algorithm. This is beneficial if the export file is to be imported into another database where the matching master encryption key is not available, but the temporary passphrase can be shared with the receiving site. When using both a passphrase and the TDE master encryption key, the export file can be decrypted transparently if the TDE master encryption key is available, or by providing the passphrase. This is convenient when export files are to be imported back into the source database and also shipped to other locations where the matching TDE master encryption key is not available, but the temporary passphrase can be shared with the receiving site. Oracle Data Pump supports the AES encryption algorithm with key sizes ranging from 128 to 256 bits.
Oracle Data Pump command line parameters can be used to specify the granularity of data encryption in the export file. For example, Data Pump can be instructed to encrypt all information or only those columns currently encrypted using Oracle Transparent Data Encryption.
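As an illustration, the dual mode described above could be requested through a Data Pump parameter file such as the following (the directory object, dump file name, and passphrase are placeholders):

```
# expdp parameter file -- illustrative values
DIRECTORY=dp_dir
DUMPFILE=sap_export.dmp
ENCRYPTION=ALL
ENCRYPTION_MODE=DUAL
ENCRYPTION_PASSWORD=temporary_share_secret
ENCRYPTION_ALGORITHM=AES256
```

With ENCRYPTION_MODE=DUAL, the dump file can later be imported either transparently (where the matching TDE master key is available) or by supplying the passphrase.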
SecureFiles Encryption
In 11g, Oracle has extended the encryption capability to SecureFiles and uses the Transparent Data Encryption (TDE) syntax. The database supports automatic key management for all SecureFile columns within a table and transparently encrypts/decrypts data, backups and redo log files. Applications require no changes and can take advantage of 11g SecureFiles using TDE semantics. SecureFiles supports the following encryption algorithms:
! 3DES168: Triple Data Encryption Standard with a 168-bit key size
! AES128: Advanced Encryption Standard with a 128-bit key size
! AES192: Advanced Encryption Standard with a 192-bit key size (default)
! AES256: Advanced Encryption Standard with a 256-bit key size
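A minimal sketch of the TDE syntax for SecureFiles (the table and column names are illustrative):

```sql
CREATE TABLE doc_store (
  id  NUMBER PRIMARY KEY,
  doc BLOB
)
LOB (doc) STORE AS SECUREFILE (
  ENCRYPT USING 'AES192'   -- the default algorithm listed above
);
```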
Database Vault
Outsourcing, application consolidation, and increasing concerns over insider threats have resulted in an almost mandatory requirement for strong controls on access to sensitive application data. In addition, regulations such as Sarbanes-Oxley (SOX), the Payment Card Industry (PCI) standards, and the Health Insurance Portability and Accountability Act (HIPAA) require strong internal controls to protect sensitive information such as financial, healthcare, and credit card records. Oracle Database Vault enforces real-time preventive controls and separation of duty in the Oracle Database to secure SAP application data. Oracle Database Vault protection for SAP enables SAP customers to prevent access to application data by privileged database users, enforce separation of duty, and provide stronger access control with multi-factor authorization (Oracle Database Vault is currently in controlled availability, as described in SAP Note 1355140). Database Vault enforces security controls even when a database user bypasses the application and connects directly to the database. Database Vault certification with SAP applications benefits customers by:
! Preventing privileged user access to application data using protection realms for the SAP ABAP stack and the SAP Java stack
! Enforcing separation of duty in the Oracle Database while allowing SAP administrators to perform their duties and protecting their SAP administration roles
! Providing SAP-specific Database Vault protection policies for SAP BR*Tools
! Implementing all Database Vault protections transparently and without any change to the SAP application code
Preventing Privileged User Access: Database administrators hold highly trusted positions within the enterprise. With Database Vault realms, enterprises increase security by preventing access to application data even if the request comes from privileged users. This is especially important when a privileged account is compromised or accessed outside normal business hours or from an untrusted IP address. The regular tools used by administrators to help manage and tune the Oracle database continue to work as before, but they can no longer be used to access SAP application data.

Enforcing Separation of Duty: Database Vault helps administrators manage operations more securely by providing fine-grained controls on database operations such as creating accounts and granting privileges. For more information and a white paper, see: http://www.oracle.com/newsletters/sap/products/dbvault.html
This feature enables fast-start failover to be used in a Data Guard configuration that is set up in the maximum performance protection mode. Since there is some possibility of data loss when a Data Guard failover occurs in maximum performance mode, administrators can now choose not to do a fast-start failover if the redo loss exposure exceeds a certain amount. This enhancement allows a larger number of disaster recovery configurations to take advantage of Data Guard's automatic failover feature.
User Configurable Conditions to Initiate Fast-Start Failover in a Data Guard Configuration
For lights-out administration, you can enable fast-start failover to allow the broker to determine whether a failover is necessary and to initiate a failover to a pre-specified target standby database, with either no data loss or a configurable amount of data loss. In addition, you can specify under which conditions or errors you want a failover to be initiated. Oracle also provides the DBMS_DG PL/SQL package to allow an application to request a fast-start failover. This feature enables the administrator to choose and configure a list of conditions that, if they occur, will initiate fast-start failover, and it increases the flexibility and manageability of customers' disaster recovery configurations.
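In DGMGRL, configuring these options might look like the following sketch (the property value and condition name are illustrative choices, not from this paper):

```
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Corrupted Controlfile";
```

An application can additionally request a failover itself by calling DBMS_DG.INITIATE_FS_FAILOVER from PL/SQL.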
Data Guard Integration, Simplification, and Performance
The new features in the following sections simplify the configuration and use of Oracle Data Guard. For example, some features provide a smaller set of integrated parameters, a unified SQL/Broker syntax, and better integration with other high availability features such as RMAN and Oracle RAC. Other features enhance the performance of key Oracle Data Guard capabilities such as redo transport, gap resolution, and switchover/failover times.
Enhanced Data Guard Broker Based Management Framework: The enhancements for this release include:
! Data Guard Broker improved logging and tracing
! Oracle Managed Files (OMF) support for Data Guard Broker configuration files
! Data Guard Broker integration with database startup
! Data Guard Broker support for advanced redo transport settings
! Data Guard Broker support of prepared switchovers for Logical Standby
These enhancements make it possible to use Data Guard Broker in a wider variety of disaster recovery configurations.
Support Up to 30 Standby Databases
The number of standby databases that a primary database can support is increased from 9 to 30 in this release. The capability to create 30 standby databases can be used to offload large reporting and testing workloads from a production database.
During instance recovery, if corrupt blocks are encountered, the DBA_CORRUPTION_LIST is automatically populated. Block validation occurs at every level of backup, media recovery, and instance recovery.
Automatic Block Repair
Automatic block repair allows corrupt blocks on the primary database or physical standby database to be automatically repaired, as soon as they are detected, by transferring good blocks from the other destination. In addition, RECOVER BLOCK is enhanced to restore blocks from a physical standby database. The physical standby database must be in real-time query mode.
This feature reduces the time during which production data cannot be accessed due to block corruption by automatically repairing corruptions as soon as they are detected, in real time, using good blocks from a physical standby database. It reduces block recovery time by using up-to-date good blocks from a real-time, synchronized physical standby database, as opposed to disk or tape backups or flashback logs.
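In RMAN, the repair can also be invoked explicitly; a sketch (the file and block numbers are illustrative):

```
RMAN> RECOVER DATAFILE 7 BLOCK 42;     -- repair one corrupt block
RMAN> RECOVER CORRUPTION LIST;         -- repair all blocks recorded in
                                       -- V$DATABASE_BLOCK_CORRUPTION
```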
Block Media Recovery Performance Improvements
In prior releases, block media recovery needed to restore original block images from disk or tape backups before applying the needed archived logs. In this release, if flashback logging is enabled and contains older, uncorrupted versions of the corrupt blocks in question, these versions are used, speeding up the recovery operation. The benefit is a reduction in the time block media recovery takes, by restoring block images from flashback logs instead of from disk or tape backups.
Parallel Backup and Restore for Very Large Files
Backups of large data files now use multiple parallel server processes to efficiently distribute the workload for each file. This is especially useful for very large files, and it improves backup performance by parallelizing the workload for each file.
Enhanced Tablespace Point-In-Time Recovery (TSPITR)
You now have the ability to recover a dropped tablespace. TSPITR can be repeated multiple times for the same tablespace. Previously, once a tablespace had been recovered to an earlier point-in-time, it could not be recovered to another earlier point-in-time.
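A sketch of the RMAN invocation (the tablespace name, timestamp, and auxiliary destination are illustrative placeholders):

```
RMAN> RECOVER TABLESPACE psapsr3usr
        UNTIL TIME "TO_DATE('2011-06-01 12:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/oracle/aux';
```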
Beginning with Release 11g, you can create invisible indexes. An invisible index is an index that is ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or system level. Making an index invisible is an alternative to making it unusable or dropping it. Using invisible indexes, you can do the following:
Use temporary index structures for certain operations or modules of an application without affecting the overall application.
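A sketch of that workflow (the index and table names are illustrative):

```sql
-- Create the index without exposing it to the optimizer
CREATE INDEX tmp_hire_ix ON emp (hire_date) INVISIBLE;

-- Only this session sees the index while it is being tested
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Once validated, publish it to all sessions
ALTER INDEX tmp_hire_ix VISIBLE;
```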
In highly concurrent environments, the requirement of acquiring a DML-blocking lock at the beginning and end of an online index creation and rebuild could lead to spikes of waiting DML operations and, therefore, a short drop and spike of system usage. While this is not an overall problem for the database, this anomaly in system usage could trigger operating system alarm levels. This feature eliminates the need for DML-blocking locks when creating or rebuilding an online index. Online index creation and rebuild prior to this release required a DML-blocking lock at the beginning and end of the rebuild for a short period of time. This meant that there would be two points at which DML activity came to a halt. This DML-blocking lock is no longer required, making these online index operations fully transparent.
! Ensure that archive logs are deleted only when not needed by required components (for example, Data Guard, Streams, and Flashback)
! In a Data Guard environment, allow all standby destinations where logs are applied (instead of just mandatory destinations) to be considered before marking archive logs for deletion; this configuration is specified using CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY
! Allow an optional archive log destination to be utilized in the event that the flash recovery area is inaccessible during backup; archive logs in this optional destination can be deleted using BACKUP DELETE INPUT or DELETE ARCHIVELOG
This feature simplifies archived log management when the logs are used by multiple components. It also increases availability when backing up archived logs: if an archived log in the flash recovery area is missing or inaccessible, the backup fails over to an optional archived log destination and continues backing up the remaining archived logs.
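In RMAN, the configuration described above might be set up as follows (the CONFIGURE command is quoted from the text; the backup and delete commands are illustrative):

```sql
-- Delete archived logs only after they have been applied on all
-- standby destinations, not just mandatory ones.
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

-- Back up archived logs and remove the input copies; the deletion
-- policy above still protects logs that standbys have not applied.
BACKUP ARCHIVELOG ALL DELETE INPUT;

-- Or delete eligible archived logs without backing them up:
-- DELETE ARCHIVELOG ALL;
```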
You can enable block change tracking on a physical standby database. During incremental backups, RMAN uses the change tracking file to quickly identify the blocks changed since the last incremental backup and to read and write only those blocks. This enables faster incremental backups on a physical standby database than in previous releases.
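A sketch of enabling change tracking on the standby and then taking a fast incremental backup (the file path is an illustrative placeholder):

```sql
-- On the physical standby database:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/oracle/SID/oradata/change_tracking.f';

-- Subsequent incremental backups consult the tracking file and
-- touch only changed blocks:
-- RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```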
Server Manageability
Global Oracle RAC ASH Report + ADDM Backwards Compatibility
The Active Session History (ASH) report now includes cluster-wide information, greatly enhancing its utility in identifying and troubleshooting performance issues that span the nodes of a cluster database. Automatic Database Diagnostic Monitor (ADDM) has been enhanced to be backward compatible, allowing it to analyze archived data, or data preserved through database upgrades, so that customers can make performance comparisons over a longer time frame.
ADDM for Oracle Real Application Clusters
ADDM has been enhanced to provide comprehensive cluster-wide performance diagnostic and tuning advice. A special mode of ADDM analyzes an Oracle RAC database and reports on issues that are affecting the entire cluster as well as those that are affecting individual instances. This feature is particularly helpful in tuning global resources such as I/O and interconnect traffic and makes the tuning of Oracle RAC databases easier and more precise.
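This database-wide analysis mode can be invoked through the DBMS_ADDM package; a sketch follows (the task name and AWR snapshot IDs are illustrative):

```sql
-- Analyze the entire Oracle RAC database between two AWR snapshots.
VAR task_name VARCHAR2(100);
BEGIN
  :task_name := 'rac_addm_sketch';
  DBMS_ADDM.ANALYZE_DB(:task_name, 100, 101);  -- begin/end snapshot IDs
END;
/
-- Retrieve the cluster-wide findings:
SELECT DBMS_ADDM.GET_REPORT(:task_name) FROM DUAL;
```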
For more information, see SAP Notes 1565179 and 1567511. For a list of supported applications see SAP Note 1398634.
Oracle Expertise in the SAP Environment
The Solution Center SAP Support and Service offers SAP customers the following services:
- Advanced Customer Services (ACS)
- Performance analysis and tuning
- Development of concepts for backup/restore/recovery, high availability, and administration
- Security concepts
- Optimization of ABAP/4 programs (performance improvement)
- Migration services for customers who want to use Oracle as the database for SAP applications (from Informix, MaxDB, DB2, or SQL Server to Oracle)
- Migration services from Oracle to Oracle (e.g. Tru64 to HP-UX)
- Integration products and services
Conclusion
Oracle has a large and growing share of the database market used to deploy SAP. This is not by chance: both companies invest in making Oracle technology work well for SAP, and Oracle has a long track record of delivering the de facto standard database for enterprise applications. SAP customers continue to choose Oracle because of the scalability, high availability, manageability, and security benefits they obtain.
Appendix
Certification Number 2009037: The SAP BI-D Standard Application Benchmark performed on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data: Throughput/hour (query navigation steps): 609,349. CPU utilization of servers: 96% (Node 1 active: 96%. Node 2 active: 95%). Operating system, all servers: SuSE Linux Enterprise Server 10. RDBMS: Oracle 10g Real Application Clusters (RAC). Technology platform release: SAP NetWeaver 7.0. Configuration: 2 servers (2 active nodes): Fujitsu Primergy RX300-S5, 2 processors / 8 cores / 16 threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB main memory.

Certification Number 2009036: The SAP BI-D Standard Application Benchmark performed on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data: Throughput/hour (query navigation steps): 320,363. CPU utilization of servers: 99% (one node active: 99%). Operating system, all servers: SuSE Linux Enterprise Server 10. RDBMS: Oracle 10g Real Application Clusters (RAC). Technology platform release: SAP NetWeaver 7.0. Configuration: 1 server (1 active node): Fujitsu Primergy RX300-S5, 2 processors / 8 cores / 16 threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB main memory.

Certification Number 2008063: The SAP BI-D Standard Application Benchmark performed on October 17, 2008 by IBM in Rochester, MN, USA was certified on October 31, 2008 with the following data: Throughput/hour: 182,112 query navigation steps. CPU utilization of central system: 94%. Operating system, central server: i 6.1. RDBMS: DB2 for i 6.1. Platform release: SAP NetWeaver 7.0 (2004s). Configuration: Central server: IBM Power System 570, 4 processors / 8 cores / 16 threads, POWER6, 5 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor, 128 GB main memory.

Certification Number 2009044: The SAP BI-D Standard Application Benchmark performed on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data: Throughput/hour (query navigation steps): 900,309. CPU utilization of servers: 93% (Node 1 active: 94%. Node 2 active: 93%. Node 3 active: 93%). Operating system, all servers: SuSE Linux Enterprise Server 10. RDBMS: Oracle 10g Real Application Clusters (RAC). Technology platform release: SAP NetWeaver 7.0. Configuration: 3 servers (3 active nodes): Fujitsu Primergy RX300-S5, 2 processors / 8 cores / 16 threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB main memory.

Certification Number 2009045: The SAP BI-D Standard Application Benchmark performed on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data: Throughput/hour (query navigation steps): 1,165,742. CPU utilization of servers: 88% (Node 1 active: 89%. Node 2 active: 88%. Node 3 active: 88%. Node 4 active: 88%). Operating system, all servers: SuSE Linux Enterprise Server 10. RDBMS: Oracle 10g Real Application Clusters (RAC). Technology platform release: SAP NetWeaver 7.0. Configuration: 4 servers (4 active nodes): Fujitsu Primergy RX300-S5, 2 processors / 8 cores / 16 threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB main memory.

Certification Number 2006071: The SAP SD standard mySAP ERP 2004 application benchmark performed on August 19, 2006 by Fujitsu in Paderborn, Germany was certified on August 31, 2006 with the following data: 12,500 SD (Sales and Distribution) Benchmark users, 1.83 seconds average dialog response time, 1,268,000 fully processed order line items per hour, 3,804,000 dialog steps per hour, 63,400 SAPS, 0.014 seconds/0.046 seconds average database
request time (dia/upd), 85 percent CPU utilization of central server. Configuration of the central server was as follows: Fujitsu PRIMEQUEST 580, 32 processors / 64 cores / 128 threads, Dual-Core Intel Itanium 2 9050, 1.6 GHz, 32 KB(I) + 32 KB(D) L1 cache, 2 MB(I) + 512 KB(D) L2 cache, 24 MB L3 cache, 512 GB main memory. The server was running the SuSE Linux Enterprise 9 operating system, Oracle Database 10g, and SAP ECC 5.0.

Certification Number 2006068: The SAP SD standard mySAP ERP 2004 application benchmark performed on August 5, 2006 by Fujitsu in Paderborn, Germany was certified on August 31, 2006 with the following data: 8,900 SD (Sales and Distribution) Benchmark users, 1.95 seconds average dialog response time, 893,670 fully processed order line items per hour, 2,681,000 dialog steps per hour, 44,680 SAPS, 0.043 seconds/0.042 seconds average database request time (dia/upd), 90 percent CPU utilization of central server. Configuration of the central server was as follows: Fujitsu PRIMEQUEST 580, 32 processors / 64 cores / 128 threads, Dual-Core Intel Itanium 2 9050, 1.6 GHz, 32 KB(I) + 32 KB(D) L1 cache, 2 MB(I) + 512 KB(D) L2 cache, 24 MB L3 cache, 512 GB main memory. The server was running Windows Server 2003 Datacenter Edition, SQL Server 2005 database, and SAP ECC 5.0.

Certification Number 2008064: The SAP SD standard SAP ERP 6.0 (2005) application benchmark performed on November 05, 2008 by HP in Marlboro, MA, USA was certified on November 12, 2008 with the following data: 7,010 SD (Sales and Distribution) Benchmark users, 1.88 seconds average dialog response time, 708,000 fully processed order line items per hour, 2,124,000 dialog steps per hour, 35,400 SAPS, 0.016 seconds/0.022 seconds average database request time (dia/upd), 90 percent CPU utilization of central server. Configuration of the central server was as follows: HP ProLiant DL785 G5, 8 processors / 32 cores / 32 threads, Quad-Core AMD Opteron Processor 8384, 2.7 GHz, 128 KB L1 cache and 512 KB L2 cache per core, 6 MB L3 cache per processor, 128 GB main memory. The server was running the SuSE Linux Enterprise Server 10 operating system, Oracle Database 10g, and SAP ECC 6.0.

Certification Number 2008026: The SAP SD standard SAP ERP 6.0 (2005) application benchmark performed on April 22, 2008 by HP in Houston, TX, USA was certified on May 5, 2008 with the following data: 5,230 SD (Sales and Distribution) Benchmark users, 1.99 seconds average dialog response time, 523,670 fully processed order line items per hour, 1,571,000 dialog steps per hour, 26,180 SAPS, 0.030 seconds/0.028 seconds average database request time (dia/upd), 92 percent CPU utilization of central server. Configuration of the central server was as follows: HP ProLiant DL785, 8 processors / 32 cores / 32 threads, Quad-Core AMD Opteron processor 8360 SE, 2.5 GHz, 128 KB L1 cache and 512 KB L2 cache per core, 2 MB L3 cache per processor, 128 GB main memory. The server was running Windows Server 2003 Enterprise Edition, SQL Server 2008 database, and SAP ECC 6.0.

Certification Number 2008013: The SAP SD Parallel Standard Application benchmark performed on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008 with the following data: 37,040 SAP SD-Parallel Benchmark users, 1.86 seconds average dialog response time, 3,749,000 fully processed order line items per hour, 11,247,000 dialog steps per hour, 187,450 SAPS. Server configuration: 5X IBM System p 570, 8 processors / 16 cores / 32 threads, POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor, 128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters, and SAP ERP 6.0.

Certification Number 2008012: The SAP SD Parallel Standard Application benchmark performed on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008 with the following data: 30,016 SAP SD-Parallel Benchmark users, 1.86 seconds average dialog response time, 3,036,000 fully processed order line items per hour, 9,108,000 dialog steps per hour, 151,800 SAPS. Server configuration: 4X IBM System p 570, 8 processors / 16 cores / 32 threads, POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor,
128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters, and SAP ERP 6.0.

Certification Number 2008011: The SAP SD Parallel Standard Application benchmark performed on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008 with the following data: 22,416 SAP SD-Parallel Benchmark users, 1.94 seconds average dialog response time, 2,252,330 fully processed order line items per hour, 6,757,000 dialog steps per hour, 112,620 SAPS. Server configuration: 3X IBM System p 570, 8 processors / 16 cores / 32 threads, POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor, 128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters, and SAP ERP 6.0.

Certification Number 2008010: The SAP SD Parallel Standard Application benchmark performed on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008 with the following data: 15,520 SAP SD-Parallel Benchmark users, 1.94 seconds average dialog response time, 1,559,330 fully processed order line items per hour, 4,678,000 dialog steps per hour, 77,970 SAPS. Server configuration: 2X IBM System p 570, 8 processors / 16 cores / 32 threads, POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor, 128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters, and SAP ERP 6.0.
Oracle Database: The Database of Choice for Deploying SAP Solutions
June 2011
Author: Abdelrhani Boukachabine
Copyright 2011, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle Corporation World Headquarters 500 Oracle Parkway Redwood Shores, CA 94065 U.S.A. Worldwide Inquiries: Phone: +1.650.506.7000 Fax: +1.650.506.7200 oracle.com/sap Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd.