
SAP NetWeaver

SAP® ENTERPRISE PORTAL
Scalability Study

ABOUT SAP® ENTERPRISE PORTAL
SAP® Enterprise Portal is a key component of the SAP NetWeaver™ platform. SAP Enterprise Portal is the industry’s most comprehensive portal solution, providing a complete portal infrastructure along with bundled knowledge management and collaboration capabilities. It provides people-centric integration of all types of enterprise information, including SAP and third-party applications, structured and unstructured data, and Web content. It is based on open standards such as Web services and supports both Java 2 Platform, Enterprise Edition (J2EE) and Microsoft .NET technology. Business content delivered with SAP Enterprise Portal speeds portal implementation and reduces the cost of integrating existing IT systems. SAP Enterprise Portal provides employees, supply chain partners, customers, and other user communities with immediate, secure, and role-based access to key information and applications across the extended enterprise. Because information and applications are unified through the portal, users can identify and address business issues faster, more effectively, and at lower cost – creating measurable benefits and strategic advantages.

ABOUT THIS STUDY
A major strength of SAP Enterprise Portal lies in its ability to handle large numbers of users in enterprise-scale implementations. This study provides guidelines and practical examples of portal performance that can assist you when evaluating or sizing portal implementation projects. The paper, which should be read in conjunction with the SAP Enterprise Portal sizing document, demonstrates the linear scalability afforded by SAP Enterprise Portal. Portal performance is measured on a given hardware platform, assuming typical content and user behavior, and the results provide information on CPU usage, server response times, and the maximum number of supported concurrent users at 65% total CPU usage.


TEST CONFIGURATION
Hardware Configuration

System: HP ProLiant BL20p G2 blade server with 4.0 GB of RAM; eight blades, each with one 2.8-GHz Intel® Xeon™ processor
Network: 100-Mbit LAN
Comments: No firewalls or reverse proxies are used. The portal is accessed via HTTP. No load balancer is used. Load is generated by Mercury Interactive LoadRunner clients and addressed directly to each blade unit.

Software Configuration

Operating system: Microsoft Windows 2000 SP4
Database: Microsoft SQL Server 2000 SP3 (8.00.934)
Portal: SAP Enterprise Portal 6.0 SP2, patch 4, hotfix 6
Application server: SAP Web Application Server 6.20, PL24; one dispatcher and two server nodes per blade

TEST SCENARIOS
The test scenarios reflect portal implementations typically encountered at customer sites.

Portal Content Types

Two usage types are examined:

1. Portal with URLs linking to external content
This is a typical scenario for customers who have information already in place and want to establish the portal as a single point of entry to this information. Thus, the portal provides roles and navigation to pages with information views that are URLs to back-end servers, where the back-end servers generate the content to be displayed in the portal. Five pages with an average of four information views each are used – that is, the information views are simply four IFRAMEs with URLs to back-end systems.

2. Portal with application
This scenario simulates the use of information views with a heavy processing load on the portal server – for example, information views that manipulate and validate application information, or information views that have a complex user interface. In this scenario, heavy processing and HTML output generation is performed within the views on the portal server. Again, five pages are defined with varying numbers of information views, averaging four views per page, but this time with a simulated CPU load of 50 ms and an average 5-KB response size per information view and request. This simulates a heavy application load on the portal, and because the application load runs on the same server as the portal itself, we expect less throughput. A minimal sketch of such a simulated heavy-load view is shown after this list.
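The following is a minimal, self-contained sketch of the load profile assumed for the second usage type: a view that consumes roughly 50 ms of CPU and emits about 5 KB of HTML per request. It is not the actual test component and uses no SAP portal APIs; the class name and constants are illustrative only.

// Hypothetical stand-in for the "portal with application" information views of
// scenario 2: each view burns roughly 50 ms of CPU and returns about 5 KB of HTML.
// This is a sketch of the simulated load profile, not the component used in the test.
public class SimulatedHeavyView {

    private static final long CPU_BUDGET_MS = 50;            // simulated processing cost per view
    private static final int RESPONSE_SIZE_BYTES = 5 * 1024; // ~5 KB of generated HTML

    /** Burns CPU for roughly the configured budget and returns ~5 KB of HTML. */
    public String render() {
        long start = System.nanoTime();
        long seed = 0;
        // Busy-loop instead of sleeping so that real CPU time is consumed,
        // matching the "50-ms CPU load per view" assumption of the test.
        while ((System.nanoTime() - start) / 1_000_000 < CPU_BUDGET_MS) {
            seed = seed * 31 + 17; // trivial work to keep the CPU occupied
        }

        StringBuilder html = new StringBuilder(RESPONSE_SIZE_BYTES + 64);
        html.append("<div class=\"iview\">");
        while (html.length() < RESPONSE_SIZE_BYTES) {
            html.append("<p>simulated application output ").append(seed).append("</p>");
        }
        html.append("</div>");
        return html.toString();
    }

    public static void main(String[] args) {
        String out = new SimulatedHeavyView().render();
        System.out.println("Generated " + out.length() + " bytes of HTML");
    }
}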
Navigation Elements


The test is based on users having between four and seven roles assigned, with the role structure being designed such that there are 110 top-level navigation entries. This corresponds to a medium-large set of roles for users.


TEST SEQUENCE
The test sequence is a simulation of typical portal user behavior: each user logs in and then clicks through portal pages with a think time (that is, a pause) of 30 seconds between page requests. There is no "logout" in this test sequence; the simulated users keep requesting portal pages. Five different pages are requested, with an average of four information views per page. The number of users is increased until a CPU load of 65% is reached; this is reflected in the number of "concurrent users" reported below.

The test sequence per user in detail is as follows:
Step 1: Log-in
Step 2: Navigation through the portal. The pages used in the test are reached by clicking on level 0, followed by level 1 (both top-level navigation), and then selecting three pages at level 2 (detailed navigation).
Step 2 (the page-click sequence) is repeated per user until the end of the test. An illustrative sketch of this per-user sequence is shown below.
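The sketch below illustrates the per-user load profile described above (log-in, then repeated page requests with a 30-second think time). The actual load was generated with Mercury Interactive LoadRunner; the portal URL, page parameters, and the placeholder login step shown here are hypothetical.

import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch of the per-user test sequence: log-in, then repeated page
// clicks with a 30-second think time, until the end of the test.
public class SimulatedPortalUser {

    private static final long THINK_TIME_MS = 30_000;  // 30-second think time between page requests
    private static final String PORTAL = "http://portal.example.com/irj/portal"; // placeholder URL
    private static final String[] PAGES = {             // five pages, as in the test
        "page1", "page2", "page3", "page4", "page5"
    };

    public static void main(String[] args) throws Exception {
        login();                                         // Step 1: log-in
        while (true) {                                   // Step 2 repeats until the end of the test
            for (String page : PAGES) {
                requestPage(page);                       // request the next portal page
                Thread.sleep(THINK_TIME_MS);             // pause before the next click
            }
        }
    }

    private static void login() throws Exception {
        requestPage("");                                 // placeholder: a real script authenticates here
    }

    private static void requestPage(String page) throws Exception {
        URL url = new URL(PORTAL + (page.isEmpty() ? "" : "?page=" + page));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        System.out.println(page + " -> HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}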

TEST RESULTS
Scenario 1 – Portal with URLs Linking to External Content (Light Load)
Scalability

The HP server supports 3,392 concurrent users at a maximum CPU load of 65%, with the pages defined as above and a think time of 30 seconds. This corresponds to a throughput of about 110 pages per second (3,392 users, each requesting a page roughly every 30 seconds, yields about 113 page requests per second). Scalability is linear with respect to CPUs – that is, throughput increases linearly with the number of CPUs (not shown in the graph below). The graph below shows CPU load as a function of the number of users, which also increases roughly linearly (CPU measured on one node).

Figure 2: Linear scalability – Portal with URLs linking to external content (CPU load in percent versus number of users, 0 to 4,000)

Figure 1: Portal navigation levels and entries (levels 0, 1, and 2) as simulated in the test


Response Times

Server response times average less than 0.4 seconds and remain flat across the entire test. Once network latency, back-end response times, and client rendering times are added, total response times as perceived by the user are typically between 1 and 3 seconds. Wide-area network (WAN) scenarios add lower bandwidth and higher latency. The actual numbers thus depend on all of these factors.

Figure 3: Average server response time (server response time in seconds versus elapsed test time, mm:ss)

Scenario 2 – Portal with Application (Heavy Load)
Scalability

In this scenario, an average of four information views per page request, each with 50-ms CPU consumption and a 5-KB response size, must additionally be serviced by the portal server itself. The HP server supports 480 concurrent users at 30 seconds of think time and a maximum CPU load of 65%. This corresponds to 16 pages per second because of the high CPU load per view. Scalability is linear with respect to CPUs and CPU load per user (the same as in scenario 1).

Figure 4: Linear scalability – Portal with application (CPU load in percent versus number of users, 0 to 600)

Response Times

Server response times average less than 1.5 seconds and remain virtually flat across the entire test. The same notes from scenario 1 about server versus total user response times apply here as well.

Figure 5: Average server response time (server response time in seconds versus elapsed test time, mm:ss)


INTERPRETATION OF THE RESULTS AND COMMENTS
Linear Scalability

The key takeaway from this study is that SAP Enterprise Portal demonstrates near-linear scalability (with respect to CPUs and blades) while offering excellent response times, and is thus well suited for the most demanding enterprise implementations.
How Other Think Times Would Influence the Results

This study assumes a user think time (the time between a page being returned and the user's next page request) of 30 seconds. This may seem long, but it must be considered an average, for example over a whole day. Even when a typical user clicks in the portal every 2 to 5 seconds to access a piece of information or an application, there will often be a long pause at the portal level afterward, especially when the user continues to work inside a back-end application that was accessed through the portal. The 30-second think time corresponds to the "high load (activity)" user type in the standard SAP Enterprise Portal sizing document. Below is an extract from the sizing document that defines the standard usage patterns assumed for sizing.

User Activity    Think Time (seconds)    Average Pages per Hour
Low              600                     6
Medium           180                     20
High             30                      120

We assume a distribution of user activity patterns (low, medium, and high levels of activity) over the course of a day: a large number of low-intensity users, a smaller number of medium-intensity users, and a still smaller number of high-intensity users – specifically, 60% low, 34% medium, and 6% high. This results in an overall average think time of 140 seconds.

The distribution of think times in a specific customer scenario is determined by the application and usage patterns. The core measure of this study can therefore also be read as "pages per second" per scenario, from which the number of concurrent users follows for any average think time. Assuming there is enough memory to maintain the user sessions, you can estimate the number of concurrent users by multiplying the pages per second reported here by the average think time in seconds; a back-of-the-envelope sketch of this calculation follows below.
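The sketch below is a minimal illustration of this calculation, assuming the CPU-bound scaling described above holds and that memory is sufficient for the additional sessions. The throughput figures are taken from the two test scenarios; the class and method names are illustrative only.

// Back-of-the-envelope sizing sketch based on the relationship used in this study:
// concurrent users ~= page throughput (pages per second) x average think time (seconds).
public class PortalSizingSketch {

    /** Concurrent users supportable at a given page throughput and think time. */
    static long concurrentUsers(double pagesPerSecond, double thinkTimeSeconds) {
        return Math.round(pagesPerSecond * thinkTimeSeconds);
    }

    public static void main(String[] args) {
        // Scenario 1 (light load): ~113 pages/s at 65% CPU, 30 s think time -> ~3,390 users
        System.out.println("Scenario 1, 30 s think time:  " + concurrentUsers(113, 30));

        // Scenario 2 (heavy load): 16 pages/s at 65% CPU, 30 s think time -> 480 users
        System.out.println("Scenario 2, 30 s think time:  " + concurrentUsers(16, 30));

        // With the 140-second average think time from the sizing document, the same
        // throughput supports proportionally more users, provided there is enough
        // memory to hold the additional sessions.
        System.out.println("Scenario 1, 140 s think time: " + concurrentUsers(113, 140));
        System.out.println("Scenario 2, 140 s think time: " + concurrentUsers(16, 140));
    }
}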

Portal Content

As shown by the tests above, the lighter the content, the better the performance. Most portal implementations will have a mix of light and heavy content; the two tests above should therefore be viewed as the extreme cases between which most portals' performance will fall (assuming the hardware above).
Back-End System Performance and Network Components

This study measures pure portal behavior in a 100-Mbit LAN. Access times to back-end or knowledge management systems are not in the scope of this paper and must be taken into account separately. In the same way, adding network components such as firewalls, load balancers, reverse proxies, and so on will add latency – that is, user response time – to the request. However, adding components will not affect throughput – that is, the number of concurrent users that can be served with a given think time.
Relationship Between Concurrent and Named Users

The study measures concurrent users. Named users are those users who have access to the portal (that is, they have a user ID); they are the potential users. Of course, not all named users are active in the portal at the same time. Typical customer projects assume a named-to-concurrent ratio of 10:1, but this number will vary from project to project.



SUMMARY
When relating the findings in this paper to specific customer portal implementation projects, you must take the following into account:
1. A typical portal implementation will invariably involve a mixture of content types: the lighter the content, the better the performance (as demonstrated in this study).
2. A portal implementation will involve varying user behavior patterns. While the tests documented in this paper were performed with 30 seconds of think time, experience shows that the average portal user think time is frequently closer to 140 seconds (see the section titled "How Other Think Times Would Influence the Results"). In this case, the same portal infrastructure would support at least three times the number of concurrent portal users reported in the test (assuming sufficient memory).


www.sap.com/contactsap www.hp.com/go/sap/enterpriseportal

© 2004 by SAP AG. All rights reserved. SAP, R/3, mySAP, mySAP.com, xApps, xApp, SAP NetWeaver, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary. Printed on environmentally friendly paper.

50 071 053 (04/10)


These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies (“SAP Group”) for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.