
White Paper

Performance and Scalability of BMC Remedy IT Service Management 7.6,
BMC Service Request Management 7.6, and BMC Atrium 7.6 on Red Hat Linux®

Benchmarking conducted at Dell Inc. laboratories, Texas

March 2010

www.bmc.com
Contacting BMC Software
You can access the BMC Software website at http://www.bmc.com. From this website, you can obtain
information about the company, its products, corporate offices, special events, and career opportunities.
United States and Canada
Address: BMC SOFTWARE INC, 2101 CITYWEST BLVD, HOUSTON TX 77042-2827 USA
Telephone: 713 918 8800 or 800 841 2031
Fax: 713 918 8000
Outside United States and Canada
Telephone (01) 713 918 8800 Fax (01) 713 918 8000

If you have comments or suggestions about this documentation, contact Information Design and Development by
email at doc_feedback@bmc.com.

© Copyright 2010 BMC Software, Inc.


BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S.
Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service
marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered
trademarks are the property of their respective owners.
Linux is the registered trademark of Linus Torvalds.
Oracle is a registered trademark of Oracle Corporation.
Java is a trademark or registered trademark of Sun Microsystems, Inc., in the U.S. and other countries.
The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates,
or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for
the product and to the proprietary and restricted rights notices included in the product documentation.
Restricted Rights Legend
U.S. Government Restricted Rights to Computer Software. UNPUBLISHED -- RIGHTS RESERVED UNDER THE COPYRIGHT
LAWS OF THE UNITED STATES. Use, duplication, or disclosure of any data and computer software by the U.S. Government is
subject to restrictions, as applicable, set forth in FAR Section 52.227-14, DFARS 252.227-7013, DFARS 252.227-7014, DFARS
252.227-7015, and DFARS 252.227-7025, as amended from time to time. Contractor/Manufacturer is BMC Software, Inc., 2101
CityWest Blvd., Houston, TX 77042-2827, USA. Any contract notices should be sent to this address.
Customer Support
You can obtain technical support by using the Support page on the BMC Software website or by contacting Customer Support by
telephone or email. To expedite your inquiry, please see “Before Contacting BMC Software.”

Support website
You can obtain technical support from BMC Software 24 hours a day, 7 days a week at http://www.bmc.com/support. From
this website, you can:
• Read overviews about support services and programs that BMC Software offers.
• Find the most current information about BMC Software products.
• Search a database for problems similar to yours and possible solutions.
• Order or download product documentation.
• Report a problem or ask a question.
• Subscribe to receive email notices when new product versions are released.
• Find worldwide BMC Software support center locations and contact information, including email addresses, fax numbers,
and telephone numbers.

Support by telephone or email


In the United States and Canada, if you need technical support and do not have access to the Web, call 800 537 1813 or send an
email message to customer_support@bmc.com. (In the Subject line, enter SupID:<yourSupportContractID>, such
as SupID:12345.) Outside the United States and Canada, contact your local support center for assistance.

Before contacting BMC Software


Have the following information available so that Customer Support can begin working on your issue immediately:
• Product information
o Product name
o Product version (release number)
o License number and password (trial or permanent)
• Operating system and environment information
o Machine type
o Operating system type, version, and service pack
o System hardware configuration
o Serial numbers
o Related software (database, application, and communication) including type, version, and service pack or
maintenance level
• Sequence of events leading to the problem
• Commands and options that you used
• Messages received (and the time and date that you received them)
o Product error messages
o Messages from the operating system, such as file system full
o Messages from related software
License key and password information
If you have a question about your license key or password, contact Customer Support through one of the following methods:
• E-mail customer_support@bmc.com. (In the Subject line, enter SupID:<yourSupportContractID>, such as
SupID:12345.)
• In the United States and Canada, call 800 537 1813. Outside the United States and Canada, contact your local support
center for assistance.
• Submit a new issue at http://www.bmc.com/support.
Contents

Executive Summary
Environment
Scenarios
    BMC Remedy ITSM and BMC Remedy Service Request Management workload stand-alone test cases
    BMC Atrium CMDB and BMC Atrium Integration Engine stand-alone test cases
    BMC Remedy ITSM and BMC Atrium CMDB mixed load
Results
    BMC Remedy ITSM and BMC Service Request Management stand-alone workloads
    BMC Remedy Atrium CMDB and BMC Atrium Integration Engine stand-alone scenarios
    BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB continuous mode test execution simultaneously - mixed mode runs
End-user response times for popular use cases across a WAN
Performance tuning and recommendations
    Mid tier settings
    AR server settings
    DB server settings
    Create CI batch job settings
    NE batch job settings
    RE batch job settings
    BMC Atrium Integration Engine settings
    Hardware and network
Appendix A
    BMC Remedy ITSM foundation data and application data
    Data Setup
Appendix B
    BMC Remedy ITSM user scenarios and associated actions
    SRM Test Scripts - User Scenarios
Appendix C
    Dell EqualLogic PS6000XV disk array setup
Appendix D
    Product version numbers
Appendix E
    Dell Product Details
White paper

Performance and Scalability of BMC Remedy IT Service Management 7.6, BMC Service Request Management 7.6, and BMC Atrium 7.6 on Red Hat Linux®—Benchmarking conducted at Dell Inc. laboratories, Texas

Executive Summary
As businesses continue to grow with BMC Remedy IT Service Management solution
implementations, it becomes increasingly critical to provide information about the type of
performance that can be expected for a particular solution installation. BMC Software
and Dell conducted a series of performance-driven tests to help customers understand
workload performance with BMC Remedy IT Service Management (ITSM), BMC
Service Request Management, and BMC Atrium Configuration Management Database
(CMDB) applications on Dell PowerEdge servers. They also conducted a series of high-volume benchmark tests to demonstrate scalability and performance. These tests were conducted at Dell's labs in Austin, Texas, in November 2009.

This white paper provides quantitative results of the tests and guidelines for achieving the
best possible performance and scalability with BMC Remedy ITSM, BMC Service
Request Management, and BMC Atrium CMDB solutions on Dell PowerEdge Servers in
a Linux® environment.

Using Linux-based Dell PowerEdge systems and BMC software, organizations gain a
standards-based solution that can lower total cost of ownership (TCO) through better
performance and lower energy costs, helping to deliver more value throughout the
company. By focusing on energy efficiency, reliability, and value throughout the
design and development process, Dell PowerEdge servers offer outstanding performance
and reliability without locking customers into a proprietary solution.

Realistic user scenarios and workloads derived from customer cases were used for the
performance and scalability tests. This paper provides hardware sizing guidelines and
overall system configuration recommendations. These in turn assist customers in
achieving a smooth production rollout and long-term sustainable operation and scalability
with the BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium
CMDB product suites, coupled with the performance and reliability of Dell PowerEdge
servers.

Environment
This section describes benchmark architecture and provides background on how BMC
Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB
applications were deployed in a Linux and Oracle® environment.

For this benchmark, a dedicated integration server was used, as specified in the reference
architecture document, to obtain optimum performance for BMC Atrium applications
such as the BMC Atrium Integration Engine and the CMDB normalization and
reconciliation jobs.

Table 1 summarizes the initial setup for conducting the benchmark tests.
Table 1. Benchmark environment at Dell labs

Tier                           Servers  Specification per server                 Comments
BMC Remedy Mid Tier            4        Dell PowerEdge R610, 2x Intel Xeon       Simulated load balancing using
                                        X5570 Quad Core 2.93 GHz, 24 GB RAM      Borland Silk Performer 2009
BMC Remedy Action Request      4        Dell PowerEdge R610, 2x Intel Xeon       Servers configured in an AR System
(AR) System server                      X5570 Quad Core 2.93 GHz, 24 GB RAM      server group; each AR System
                                                                                 instance serves as authentication
                                                                                 server for each mid tier server
Database server                1        Dell PowerEdge R710, 2x Intel Xeon       Dell EqualLogic PS6000XV storage
                                        X5570 Quad Core 2.93 GHz, 144 GB RAM     attached using RAID 10
Windows client computer(s)     2        Dell PowerEdge 1950, 2x Dual Intel       Silk Performer Agents
                                        Xeon E5405, 2.0 GHz, 16 GB RAM
SilkPerformer 2009 Controller  1        Dell Latitude E6400, 2x 2.6 GHz          Silk Performer Controller


Figure 1: Benchmark Architecture

The architecture diagram shows the following components:

• Silk Performer Controller: Dell Latitude E6400 workstation; CPU: Intel Core Duo T5990, 2.6 GHz; RAM: 3.48 GB
• Silk Performer Agents: Dell PowerEdge 1950 servers; CPU: 2x Dual Intel Xeon E5405, 2.0 GHz; RAM: 16 GB
• Mid Tier servers (BMC Remedy Mid Tier): Dell PowerEdge R610; CPU: 2x Intel Xeon 5570 Quad Core 2.63 GHz; RAM: 24 GB
• AR Server group and integration server, hosting BMC Atrium CMDB, BMC Atrium Product Catalog, BMC Remedy Approval Server, BMC Remedy Assignment Engine, BMC Remedy Asset Management, BMC Remedy Change Management, BMC Remedy Service Desk, BMC Service Level Management, BMC Atrium Integration Engine, and BMC Remedy Service Request Management. AR Servers: 2x Dell PowerEdge R610 and 2x Dell PowerEdge R710; CPU: 2x Intel Xeon 5570 Quad Core 2.63/2.93 GHz; RAM: 24 GB. Integration server: Dell PowerEdge R710; CPU: 2x Intel Xeon 5570 Quad Core 2.93 GHz; RAM: 24 GB
• Database server: Dell PowerEdge R710; CPU: 2x Intel Xeon 5570 Quad Core 2.93 GHz; RAM: 144 GB; DB: Oracle 11g databases (Database 1: CD source DB; Database 2: AR/CMDB)
• Disk subsystem: Dell EqualLogic PS6000XV [SAS], 16x 450 GB, up to 7.2 TB

All five BMC Remedy Action Request System (AR System) servers were configured in a
server group. Four BMC Remedy Mid Tier (Mid Tier server) instances were hosted on
separate physical servers, and were configured with four out of five AR System server
instances participating in a server group configuration. Each of those AR System servers
was configured to be an authentication server for the respective mid-tier servers. The
AR System instance installed in the server dedicated as integration server had all
applications—that is, BMC Remedy ITSM 7.6, BMC Service Level Management 7.5,
and BMC Service Request Management 7.6 were installed in addition to the core
functionalities of BMC Atrium 7.6. In addition to hosting AR System, the integration
server also hosted the source database for the BMC Atrium Integration Engine source

Environment 3
White paper

data. The database hosted on the integration server had Oracle 11g installed and shared
the disk storage subsystem with the main database.

Scenarios
The following test cases were used for characterizing the performance and scalability of
the BMC Remedy ITSM 7.6, BMC Remedy Service Request Management 7.6, and BMC
Atrium CMDB 7.6 solutions on Linux and Oracle environments:

• BMC Remedy ITSM and BMC Service Request Management workload stand-alone

• BMC Atrium CMDB and BMC Atrium Integration Engine stand-alone

• BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB
continuous mode jobs in a mixed workload

The activities of the BMC Remedy ITSM, BMC Remedy Service Request Management,
and BMC Atrium CMDB test scenarios are described in the following sections. Appendix
A provides detailed information about how the test environment was prepopulated.

BMC Remedy ITSM and BMC Remedy Service Request Management workload stand-alone test cases
All BMC Remedy ITSM and BMC Remedy Service Request Management load tests
were characterized by keeping the core user scenarios constant while varying the
following parameters over several runs:

• Number of users

• Transaction pacing assigned for each test scenario

• Number of AR System and BMC Remedy Mid Tier servers

Nominal workload
While it is not practical to conduct tests with all combinations of these variables, BMC
defined a nominal workload to represent a typical workload for many large BMC
Remedy ITSM and BMC Service Request Management customers. The nominal
workload can be used as a baseline for benchmarking performance and scalability of the
BMC Remedy ITSM and BMC Service Request Management solutions consistently over
time.

The nominal workload is defined by the distributions of concurrent users and transaction
rates among the test scenarios under consideration. The workload was designed as a
queuing model, and load tests were executed for over an hour to simulate a real-life
customer environment.

The workload was split between the BMC Remedy ITSM and BMC Service Request
Management applications at 30% and 70% of the total workload, respectively, to simulate
customer environments, and was distributed evenly across four mid tier servers and four
AR System servers configured in a server group, as displayed in Table 2.
Table 2. Nominal workload for the BMC Remedy ITSM solution test

BMC Remedy ITSM test scenario              Percentage of   Transaction rate
                                           virtual users   (per hour per user)
Search Incident By ID                      6%              3
Search Incident By Customer                6%              3
Create Incident No CI Redisplay Current    2%              4
Create Incident With CI No Action          3%              4
Create Incident With CI Redisplay Current  3%              4
Modify Incident to Resolve                 7%              6
Create Change with Task                    1%              2
Search Change by ID                        2%              3

Table 3. Nominal workload for the BMC Service Request Management solution test

BMC Remedy Service Request Management test scenario        Percentage of   Transaction rate
                                                           virtual users   (per hour per user)
Add Activity Log                                           2%              6
View Service Request                                       2%              6
View Services in Category                                  5%              7
Browse Sub Category                                        5%              7
Create Service Request w/ 6 questions mapped to 2 fields   8%              6
Create Service Request w/ 6 questions no mapping           8%              6
View Quick Picks                                           20%             7
Search by Keyword                                          20%             6

The projected numbers of incident tickets, task entries, and service requests created and
modified in the system with 1,200 users after an hour of simulation under the nominal
workload are summarized in Table 4.


Table 4. Projected data after 1 hour of simulation of nominal workload for 1,200 users

Entry type                 Projected numbers for nominal workload
Incidents Created          384
Incidents Modified         504
Changes Created            24
Service Requests Created   1,152
Add Activity Log Entry     144
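
These projections follow directly from the workload definitions in Tables 2 and 3. For example, the three incident-creation scenarios together account for 8% of the 1,200 virtual users (96 users), each pacing 4 transactions per hour, giving 96 x 4 = 384 incidents created; likewise, the two service request creation scenarios account for 16% of the users (192 users) at 6 transactions per hour, giving 192 x 6 = 1,152 service requests.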

Scalability tests
Although the nominal workload is a good approximation of what most customers would
have in production, scalability tests were conducted to demonstrate how the ITSM and
SRM applications would respond under varying workloads.

In order to simulate a high load environment, the Silk Performer scenario scripts were
configured with a higher transaction pacing (5 times in this case) while keeping the
number of virtual users constant.

For each scenario under consideration, both the transaction pacing and number of virtual
users were varied independently relative to the nominal, thereby simulating very large
BMC Remedy ITSM and BMC Service Request Management environments successfully.
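
For example, 1,200 virtual users running at 5 times the nominal transaction pacing generate the same hourly transaction volume as 6,000 users at nominal pacing, which is the basis for the equivalent workload figures listed in Table 5.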

Tests to assess scalability of BMC Remedy ITSM and BMC Service Request
Management applications on Linux consist of the following parameters:
Table 5. Scalability tests performed for BMC Remedy ITSM and BMC Service Request Management workload

Number of users   Transaction workload   Equivalent workload
1,600             Nominal                1,600 Users
2,000             Nominal                2,000 Users
2,500             Nominal                2,500 Users
1,200             5 x Nominal            6,000 Users
1,600             5 x Nominal            9,000 Users
2,000             5 x Nominal            10,000 Users
2,500             5 x Nominal            12,500 Users


Additional scalability tests with reduced hardware and increased workload
Additional tests were conducted by reducing servers at the web tier and AR System
server tier to assess scalability and performance of BMC Remedy ITSM and BMC
Service Request Management applications. AR System servers were still participating in
the server group configuration, and mid-tier servers were configured to send HTTP
requests to only two AR System servers, with equal distribution of user transactions
across those two AR System server nodes.

Tests that exercised increased workload mix were conducted as a part of this exercise to
validate the reduced hardware configuration, as described in Table 6.
Table 6. Tests performed for BMC Remedy ITSM and BMC Service Request Management for reduced hardware and increased workload

Number of users   Transaction workload   Equivalent workload
1,200             5 x Nominal            6,000 Users
1,600             5 x Nominal            9,000 Users
2,000             5 x Nominal            10,000 Users
2,500             5 x Nominal            12,500 Users

BMC Atrium CMDB and BMC Atrium Integration Engine stand-alone test cases
The BMC Atrium 7.6 performance and scalability test consisted of BMC Atrium CMDB
batch jobs and BMC Atrium Integration Engine test scenarios.

CMDB batch jobs

Test cases conducted to assess the performance and scalability of the BMC Atrium
CMDB core batch functionalities are as follows:

• Create CI (Configuration Item) Regular

• Create CI Bulk

• NE Batch mode

• RE Batch mode


Create CI (Regular) and Create CI (Bulk) in batch mode

Two types of create CI APIs were tested in batch mode. The first one is referred to as
“Regular Create,” where the instances were created one at a time. The second test is
referred to as “Bulk Create CI,” where 50 instances were created using a single bulk API
call in Atrium.

A multi-threaded Java™ application developed internally by BMC was used to generate
the load. An instance of this application was executed locally on the AR System server
that was used as the integration server.

The data model that was created by this tool is described in Table 7. The computer
system is the root of the tree, and all other CIs that were connected to the computer
system via a relationship are also shown.
Table 7. Class relationship distribution for Create CI batch tests

Class                   Relationship                    Number of CIs
BMC_ComputerSystem BMC_Dependency 1
BMC_Product BMC_HostedSystemComponents 30
BMC_Patch BMC_Component 8
BMC_Activity BMC_Dependency 1
BMC_Monitor BMC_HostedSystemComponents 1
BMC_IPEndpoint BMC_HostedSystemComponents 1
BMC_OperatingSystem BMC_HostedSystemComponents 2
BMC_Person BMC_Dependency 1
BMC_Printer BMC_Dependency 1
BMC_Processor BMC_HostedSystemComponents 1
BMC_DiskDrive BMC_HostedSystemComponents 1
BMC_Card BMC_HostedSystemComponents 1
BMC_BIOSElement BMC_HostedSystemComponents 1
BMC_NetworkPort BMC_HostedSystemComponents 1
A total of 51 class instances and 50 relationship instances were created per iteration. Each
instance generator ran 30 threads, so each generator created a total of 30 x (51 + 50) =
3,030 instances per iteration.

Each class instance was populated with over 15 attributes. These include Name, Serial
Number, Short Description, Owner Name, Owner Contact, Dataset Id, Reconciliation
Identity, Category, Type, Item, Model, Manufacturer Name, Description, Version
Number, and several other class-specific attributes.

Tests conducted to measure CI creation performance were as follows:

• Create CI Regular for 1M and 2M CI

• Create CI Bulk for 1M and 2M CI


NE batch mode
The batch normalization mode was used to normalize a large number of instances that
were generated by the initial bulk load of configuration items. This was done to show a
typical BMC Remedy large customer environment going live in production.

The data model used for the Normalization tests is summarized in Table 8.


Table 8. Class distribution used for the NE batch job

Class name              # of instances   Level   Relationship
BMC_ComputerSystem 1 1 BMC_Dependency
BMC_Product 30 2 BMC_HostedSystemComponents
BMC_Patch 8 2 BMC_HostedSystemComponents
BMC_Activity 1 2 BMC_Dependency
BMC_Monitor 1 2 BMC_HostedSystemComponents
BMC_IPEndpoint 1 2 BMC_HostedSystemComponents
BMC_OperatingSystem 2 2 BMC_HostedSystemComponents
BMC_Person 1 2 BMC_Dependency
BMC_Processor 1 2 BMC_HostedSystemComponents
BMC_DiskDrive 1 2 BMC_HostedSystemComponents
BMC_Card 1 2 BMC_HostedSystemComponents
BMC_BIOSElement 1 2 BMC_HostedSystemComponents
BMC_NetworkPort 2 2 BMC_HostedSystemComponents
Each product had the following attributes populated: Name, Serial Number, Short
Description, Owner Name, Owner Contact, Dataset Id, Reconciliation Identity,
Company, Category, Type, Item, Model, Manufacturer Name, Description, Version
Number, Patch Number, and Token Id.

The same Java tool developed by BMC for the Create CI batch jobs was used to create
the test datasets for the NE batch jobs. To quantify the performance and scalability of the
Normalization Engine, the following tests were conducted, varying the dataset volume
for each iteration:

• Normalize 500K CI

• Normalize 1M CI

• Normalize 2M CI

RE batch mode

A Standard Reconciliation was set up for this test case, which consisted of the two most
common reconciliation activities:

• Identifying class instances that are the same entity in two or more datasets

• Merging class instances from one dataset (such as Discovery) to another dataset (by
default, the production BMC.ASSET dataset)


All the identification and merge settings use standard rules. These standard rules work
with all classes in the Common Data Model (CDM) and BMC extensions. They identify
each class, using attributes that typically have unique values, and they merge based on
rules of precedence set for BMC datasets.

The Standard Reconciliation job was configured in a Non-Continuous mode.

The same Java tool developed by BMC used for Create CI batch jobs was used to create
test datasets for RE batch jobs. To quantify performance and scalability of the
Reconciliation Engine, the following tests were conducted, varying the dataset volume
for each iteration:

• RE Identify and Merge 500K CI

• RE Identify and Merge 1M CI

• RE Identify and Merge 2M CI

All CIs were created with ‘Reconciliation Identity’ set to 0, indicating that these newly
created CIs had not yet been identified. Distribution (Data Model) of the CIs across
classes used to create data for RE batch jobs is summarized in Table 9.
Table 9. Class distribution used for the RE batch job

Class name              # of instances   Level   Relationship
BMC_ComputerSystem      1                1       N/A
BMC_Product             30               2       BMC_HostedSystemComponents
BMC_Patch               10               2       BMC_Components
BMC_Activity            1                2       BMC_Dependency
BMC_Monitor             2                2       BMC_HostedSystemComponents
BMC_IPEndpoint          1                2       BMC_HostedAccessComponents
BMC_OperatingSystem     3                2       BMC_HostedSystemComponents
BMC_Person              1                2       BMC_Dependency
BMC_Processor           2                2       BMC_HostedSystemComponents
BMC_DiskDrive           2                2       BMC_HostedSystemComponents
BMC_Card                2                2       BMC_HostedSystemComponents
BMC_BIOSElement         2                2       BMC_HostedSystemComponents
BMC_NetworkPort         2                2       BMC_HostedSystemComponents


BMC Atrium Integration Engine tests


All BMC Atrium Integration Engine test cases were characterized with an external
Configuration Discovery (CD) data source consisting of approximately 8.5 million CIs
hosted on a separate database server using Oracle 11g. Out of the 8.5 million CI records,
approximately 4.7 million records were categorized as class records and the remaining
records were categorized as relationship records.

For performance and scalability, a total of six BMC Atrium Integration Engine instances
were installed for running tests involving parallel exchanges.

Data Exchange objects were used to transfer data from the external CD source and
synchronized into the BMC Atrium CMDB database. Twelve primary classes were
identified for the data transfer and each of these classes was again configured with
relationship classes. Data exchanges were configured for each of the primary classes as
well as for the relationship classes.

The table below summarizes all the classes that were identified for the BMC Atrium
Integration Engine data transfer process along with respective data distribution details in
the CD data source.
Table 10. Primary Classes data distribution

Class Database Records


BMC_Product 3,088,587
BMC_BIOSElement 20,421
BMC_ComputerSystem 20,674
BMC_Monitor 22,430
BMC_IPEndpoint 30,109
BMC_Person 20,734
BMC_NetworkPort 216,889
BMC_OperatingSystem 20,600
BMC_Patch 1,033,247
BMC_Printer 129,957
BMC_Processor 23,032
BMC_CDROMDrive 80,978
The total number of exchanges configured was 24, out of which 12 exchanges were
mapped directly to the 12 primary classes identified and 12 additional exchanges were
added for each relationship class.

The performance and scalability tests on BMC Atrium Integration Engine 7.6 were
focused on the following types of test scenarios:

• Class exchanges run sequentially

• Relationship exchanges run sequentially

• Class exchanges run in parallel

• Relationship exchanges run in parallel


BMC Remedy ITSM and BMC Atrium CMDB mixed load

All test cases for a BMC Remedy ITSM, BMC Service Request Management, and
BMC Atrium CMDB mixed environment were characterized by keeping the
BMC Remedy ITSM and BMC Service Request Management test scenarios the same as
those of the BMC Remedy ITSM and BMC Service Request Management stand-alone
environment for 1,200 users and 2,500 users with the nominal workload.

In BMC Atrium CMDB 7.5, BMC introduced a mode for the Normalization and
Reconciliation processes called "Continuous Mode," which provides near real-time
reconciliation of configuration items. In this mode, NE and RE runs continuously
normalize and reconcile CIs in small batches, based on either a time interval or a record
count configuration setting.

To simulate a typical "day in the life" scenario for BMC Remedy CMDB customers, BMC
Atrium CMDB or BMC Atrium Core continuous mode jobs were run simultaneously
while the BMC Remedy ITSM or BMC Service Request Management load test was
running.

In the mixed mode, the BMC Atrium/CMDB test for creating CIs was executed on the
AR server hosted in the dedicated integration server. This was done consciously to
simulate a typical large customer base where both BMC Remedy ITSM and
CMDB/Atrium Core components are deployed in production and hosted on separate
nodes for better scalability and reliability. All five AR System servers were participating
in a server group environment where the integration server acted as primary server for all
CMDB related activities.

Both the BMC Remedy ITSM or BMC Service Request Management and the BMC
Atrium CMDB or BMC Atrium Core load tests were conducted simultaneously to
simulate the mixed-mode environment. The BMC Remedy ITSM and BMC Service
Request Management test ran for an hour to ensure that the BMC Atrium CMDB and
BMC Atrium Core tests finished while the BMC Remedy ITSM or BMC Service Request
Management load test was still ongoing.


Variations for the mixed mode test cases are summarized in Table 11.
Table 11. Mixed mode test cases for BMC Remedy ITSM, BMC Atrium CMDB, and BMC Atrium Core

BMC Atrium CMDB and BMC          CMDB       BMC Remedy ITSM or BMC          Workload
Atrium Core test case            workload   Service Request Management
                                            test case
Create CI, NE, and RE            50K CI     2,500 user BMC Remedy ITSM      ITSM/SRM Heavy,
configured in Continuous Mode               or BMC Service Request          CMDB Normal
                                            Management
Create CI, NE, and RE            100K CI    1,200 user BMC Remedy ITSM      ITSM/SRM Normal,
configured in Continuous Mode               or BMC Service Request          CMDB Heavy
                                            Management

Results
This section describes the quantitative test results on the performance and scalability of
the BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB
solutions, starting with the BMC Remedy ITSM and BMC Service Request Management
stand-alone test case.

BMC Remedy ITSM and BMC Service Request Management stand-alone workloads
All BMC Remedy ITSM and BMC Service Request Management load tests were
conducted using Borland Silk Performer, which is an industry standard load testing tool.
Silk Performer generated an equal load against each BMC Remedy Mid Tier server in the
environment to effectively simulate the job of a load balancer. All four mid tier servers
were configured with a set of four AR Server instances participating in a server group
configuration. Each AR Server was configured to be an authentication server for the
respective mid-tier servers.

The benchmarking response times do not include the more variable client-side
components of response times that a typical end user observes.


To quantify the end-to-end user response times, results of manual stop-watch timings for
a few use cases across a WAN environment are presented in a later section.

The average response time for each user scenario represents the response times averaged
over all actions listed in Appendix B for each user scenario with a test period of one hour.

In all tests conducted, response times were acceptable, even with 5 times the nominal
workload transaction rate. Resource utilization for all mid-tier, AR, and DB servers
stayed well within acceptable limits.

The following charts show how BMC Remedy ITSM and BMC Service Request
Management responded to 1,200, 1,600, 2,000, and 2,500 concurrent users under the
nominal workload and 5 times the nominal workload.

Chart 1. Response times comparison for nominal workload with varying users for BMC Remedy ITSM use cases

Response times for ITSM scenarios, nominal workload (seconds):

Scenario                            1,200 users  1,600 users  2,000 users  2,500 users
Open IM Console                     0.17         0.34         0.36         0.41
Search Incident By ID               0.06         0.20         0.21         0.20
Search Incident By Username         0.09         0.16         0.16         0.16
Create Incident (No CI/Redisplay)   0.34         0.87         0.85         0.85
Create Incident (CI/Redisplay)      0.73         0.94         0.94         0.95
Create Incident (CI/No Action)      0.11         0.74         0.73         0.73
Update Incident to Resolve          0.16         0.21         0.22         0.23
Open CM Console                     0.45         0.56         0.56         0.73
Save/Initiate Change                0.75         0.88         0.97         0.98
Search Change By ID                 0.16         0.21         0.21         0.23


Chart 2. Response time comparison for nominal workload with varying users for BMC Service Request Management use cases

Response times for SRM scenarios, nominal workload (seconds):

Scenario                    1,200 users  1,600 users  2,000 users  2,500 users
Add Activity Log            0.14         0.24         0.24         0.24
View SR                     0.14         0.21         0.21         0.28
View Services in Category   0.19         0.22         0.23         0.24
Open Request Console        0.34         0.59         0.62         0.66
Create SR w/ mapping        0.78         0.96         0.99         1.02
Create SR w/o mapping       0.56         0.72         0.71         0.72
View Quick Picks            0.28         0.34         0.34         0.35
Search by Keyword           0.19         0.26         0.27         0.28

Chart 3. Response time comparison for 5x nominal workload with varying users for BMC Remedy ITSM use cases

Response times for ITSM scenarios, 5x nominal workload (seconds):

Scenario                            1,200 users  1,600 users  2,000 users  2,500 users
Open IM Console                     0.35         0.35         0.37         0.39
Search Incident By ID               0.19         0.20         0.20         0.21
Search Incident By Username         0.15         0.15         0.16         0.16
Create Incident (No CI/Redisplay)   0.93         0.93         0.97         1.06
Create Incident (CI/Redisplay)      0.99         1.02         1.04         1.21
Create Incident (CI/No Action)      0.79         0.80         0.85         0.94
Update Incident to Resolve          0.23         0.24         0.26         0.29
Open CM Console                     0.55         0.55         0.56         0.60
Save/Initiate Change                1.04         1.05         1.11         1.29
Search Change By ID                 0.19         0.22         0.23         0.23


Chart 4. Response time comparison for 5x nominal workload with varying users for BMC Service Request Management use cases

Response times for SRM scenarios, 5x nominal workload (seconds):

Scenario                    1,200 users  1,600 users  2,000 users  2,500 users
Add Activity Log            0.23         0.25         0.26         0.30
View SR                     0.19         0.20         0.20         0.23
View Services in Category   0.26         0.28         0.31         0.37
Open Request Console        0.61         0.66         0.71         0.83
Create SR w/ mapping        1.15         1.24         1.33         1.52
Create SR w/o mapping       0.81         0.84         0.90         0.98
View Quick Picks            0.39         0.43         0.48         0.62
Search by Keyword           0.30         0.33         0.36         0.43

Tables 12 and 13 summarize the transaction data created or modified and the total number
of searches executed in the system for all relevant scalability test runs simulated for an
hour under varying workloads.
Table 12. Transaction data created/modified/searched for all nominal workload runs

Entry type                 1,200 user   1,600 user   2,000 user   2,500 user
                           workload     workload     workload     workload
Incidents created          321          510          616          794
Incidents modified         481          651          830          1,031
Changes created            24           31           38           46
Service Requests created   1,123        1,485        1,844        2,307
Add Activity Log entry     139          185          232          290

Table 13. Transaction data created/modified/searched for all 5x nominal workload runs

Entry type                 6,000 user   9,000 user   10,000 user   12,500 user
                           workload     workload     workload      workload
Incidents created          1,923        2,340        3,102         3,915
Incidents modified         2,498        3,335        4,172         5,207
Changes created            119          155          198           245
Service Requests created   5,561        7,389        9,177         11,539
Add Activity Log entry     715          948          1,182         1,466

System resource utilization with BMC Remedy ITSM and BMC Service Request Management stand-alone scalability test runs

This section presents the system resource utilization for the scalability test runs,
supporting the high scalability of the BMC Remedy ITSM and BMC Service Request
Management applications under the varying workloads.

The following charts compare the scalability test runs for nominal workloads versus five
times nominal workloads.

Chart 5. CPU utilization comparison for mid tier server, AR System server, and DB server tiers for all nominal runs and 5x nominal runs

CPU utilization (%) on mid tier, AR System server, and DB tiers, nominal and 5x nominal workload:

Users   Nominal MT   5xNominal MT   Nominal AR   5xNominal AR   Nominal DB   5xNominal DB
1200    1.29         2.26           3.27         11.06          1.7          5.5
1600    1.38         4.96           4.51         13.18          1.7          7.4
2000    1.65         8.64           8.25         16.19          2            10.8
2500    2.26         11.14          8.65         18.61          2.6          12


Chart 6. Memory utilization comparison for mid tier server, AR System server, and DB server tiers for all nominal runs and 5x nominal runs

Memory utilization (%) on mid tier, AR System server, and DB tiers, nominal and 5x nominal workload:

Users   Nominal MT   5xNominal MT   Nominal AR   5xNominal AR   Nominal DB   5xNominal DB
1200    6.08         11.75          8.75         9.16           7.31         7.4
1600    6.5          11.76          8.91         9.33           7.35         7.42
2000    8.5          11.76          9.16         9.36           7.31         7.6
2500    8.79         11.89          9.16         9.36           7.35         7.4

Additional BMC Remedy ITSM and BMC Service Request Management scalability test runs with reduced hardware

The BMC Remedy ITSM and BMC Service Request Management applications scaled
well even with fewer servers in the server group configuration at the AR tier. Results
given in this section are only for the very high load test scenarios, because the intent was
only to prove that deployment of these applications on a reduced number of servers
would still provide reasonable response-time metrics for Remedy customers with similar
installations.

Observations indicated that two servers each in the mid tier and AR server tier would
easily scale up to a 10,000 user equivalent workload mix without any issues. Workloads
beyond 10,000 users in the reduced environment could not be pursued due to constraints
in the project timeline.


The following charts characterize how BMC Remedy ITSM and BMC Service Request
Management responded to 1,200, 1,600, and 2,000 users under 5 times the nominal
workload, utilizing only two servers each at the web tier and application tier:

Chart 7. Response times for reduced hardware and increased workload (5x nominal) for BMC Remedy ITSM use cases with varying user workloads

Response times (seconds):

Scenario                            1,200 users  1,600 users  2,000 users
Open IM Console                     0.37         0.39         0.52
Search Incident By ID               0.18         0.20         0.26
Search Incident By Username         0.16         0.17         0.22
Create Incident (No CI/Redisplay)   0.96         0.99         1.58
Create Incident (CI/Redisplay)      1.05         1.13         1.72
Create Incident (CI/No Action)      0.81         0.89         1.49
Modify Incident to Resolve          0.26         0.27         0.44
Open CM Console                     0.55         0.63         0.69
Save/Initiate Change                1.03         1.17         2.20
Search Change By ID                 0.20         0.21         0.29

Chart 8. Response times for reduced hardware and increased workload (5x nominal) for BMC Service Request Management use cases with varying user workloads

Response times (seconds):

Scenario                    1,200 users  1,600 users  2,000 users
Add Activity Log            0.26         0.29         0.48
View SR                     0.19         0.21         0.30
View Services in Category   0.34         0.45         0.89
Open Request Console        0.68         0.77         1.54
Create SR w/ mapping        1.26         1.38         2.47
Create SR w/o mapping       0.87         0.94         1.65
View Quick Picks            0.47         0.61         1.18
Search by Keyword           0.38         0.49         0.90


Response times for each of the BMC Remedy ITSM and BMC Service Request
Management use cases under consideration were compared between reduced and
regular hardware used in the benchmarking environment and are shown in the
following charts:

Chart 9. Response times comparison for BMC Remedy ITSM scenarios for 1,200 users and 5x nominal workload between regular versus reduced hardware

Response times (seconds):

Scenario                            1200 Reduced HW   1200 Regular HW
Open IM Console                     0.37              0.35
Search Incident By ID               0.18              0.19
Search Incident By Username         0.16              0.15
Create Incident (No CI/Redisplay)   0.96              0.93
Create Incident (CI/Redisplay)      1.05              0.99
Create Incident (CI/No Action)      0.81              0.79
Modify Incident to Resolve          0.26              0.23
Open CM Console                     0.55              0.55
Save/Initiate Change                1.03              1.04
Search Change By ID                 0.20              0.19

Chart 10. Response times comparison for BMC Remedy ITSM scenarios for 1,600 users and 5x nominal workload between regular versus reduced hardware

Response times (seconds):

Scenario                            1600 Reduced HW   1600 Regular HW
Open IM Console                     0.39              0.35
Search Incident By ID               0.20              0.20
Search Incident By Username         0.17              0.15
Create Incident (No CI/Redisplay)   0.99              0.93
Create Incident (CI/Redisplay)      1.13              1.02
Create Incident (CI/No Action)      0.89              0.80
Modify Incident to Resolve          0.27              0.24
Open CM Console                     0.63              0.55
Save/Initiate Change                1.17              1.05
Search Change By ID                 0.21              0.22


Chart 11. Response times comparison for BMC Remedy ITSM scenarios for 2,000 users and 5x nominal workload between regular versus reduced hardware

Response times (seconds):

Scenario                            2000 Reduced HW   2000 Regular HW
Open IM Console                     0.52              0.37
Search Incident By ID               0.26              0.20
Search Incident By Username         0.22              0.16
Create Incident (No CI/Redisplay)   1.58              0.97
Create Incident (CI/Redisplay)      1.72              1.04
Create Incident (CI/No Action)      1.49              0.85
Modify Incident to Resolve          0.44              0.26
Open CM Console                     0.69              0.56
Save/Initiate Change                2.20              1.11
Search Change By ID                 0.29              0.23

Chart 12. Response times comparison for BMC Service Request Management scenarios for 1,200 users and 5x nominal workload between regular versus reduced hardware

Response times (seconds):

Scenario                    1200 Reduced HW   1200 Regular HW
Add Activity Log            0.26              0.23
View SR                     0.19              0.19
View Services in Category   0.34              0.26
Open Request Console        0.68              0.61
Create SR w/ mapping        1.26              1.15
Create SR w/o mapping       0.87              0.81
View Quick Picks            0.47              0.39
Search by Keyword           0.38              0.30


Chart 13. Response times comparison for BMC Service Request Management scenarios for 1,600 users and 5x nominal workload between regular versus reduced hardware

Response times (seconds):

Scenario                    1600 Reduced HW   1600 Regular HW
Add Activity Log            0.29              0.25
View SR                     0.21              0.20
View Services in Category   0.45              0.28
Open Request Console        0.77              0.66
Create SR w/ mapping        1.38              1.24
Create SR w/o mapping       0.94              0.84
View Quick Picks            0.61              0.43
Search by Keyword           0.49              0.33

Chart 14. Response times comparison for BMC Service Request Management scenarios for 2,000 users and 5x nominal workload between regular versus reduced hardware

Response times (seconds):

Scenario                    2000 Reduced HW   2000 Regular HW
Add Activity Log            0.48              0.26
View SR                     0.30              0.20
View Services in Category   0.89              0.31
Open Request Console        1.54              0.71
Create SR w/ mapping        2.47              1.33
Create SR w/o mapping       1.65              0.90
View Quick Picks            1.18              0.48
Search by Keyword           0.90              0.36

System resource utilization comparison between reduced hardware and normal hardware in an increased workload scenario

This section compares the system resource utilization of the scalability test runs for
increased workload scenarios between normal and reduced hardware.

Observations indicated that the percentage increase in resource utilization when the
benchmark architecture was reduced to 2 AR System server and 2 Mid Tier server nodes
was in line with expectations. No adverse deviation in either memory or CPU utilization
was observed in any tier. The variations in resource utilization in each tier are
summarized as follows:

• DB Tier: Both CPU and memory utilization stayed about the same in both environments, with CPU variations ranging 2-3% and memory variations below 1%.

• AR Tier: The reduced hardware environment experienced almost 2 times the CPU load and 7-14% higher memory utilization compared to the regular environment comprising 4 AR and 4 MT servers.

• MT Tier: The reduced hardware environment experienced 2-3 times the CPU load and approximately 30% higher memory utilization compared to the regular environment comprising 4 AR and 4 MT servers.

Chart 15. CPU utilization comparison for mid tier server, AR System server, and DB server between regular hardware and reduced hardware

CPU utilization (%), regular versus reduced hardware environment:

Users   MT Regular HW   MT Reduced HW   AR Regular HW   AR Reduced HW   DB Regular HW   DB Reduced HW
1200    2.26            6.78            11.06           21.35           5.5             5.7
1600    4.96            8.57            13.18           23.54           7.4             7.7
2000    8.64            11.58           16.19           27.73           10.8            11.1


Chart 16. Memory utilization comparison for mid tier server, AR System server, and DB server between regular hardware and reduced hardware

Memory utilization (%), regular versus reduced hardware environment:

Users   MT Regular HW   MT Reduced HW   AR Regular HW   AR Reduced HW   DB Regular HW   DB Reduced HW
1200    11.75           15.25           8.91            10.18           7.4             7.46
1600    11.78           15.7            9.33            10              7.42            7.45
2000    11.78           16.3            9.36            10.38           7.67            7.64

BMC Remedy Atrium CMDB and BMC Atrium Integration Engine stand-alone scenarios

CMDB batch jobs

Test results for all CMDB batch jobs are detailed in their respective results sections below.

Create CI batch Job

Two types of Create CI tests were executed:

Regular Create CI
In this test, each instance was created individually by using the following API:

CMDBInstance.create(ARServerUser context);


Each thread would create all instances and relationships as defined in the data model in
each iteration of the CI generator API.
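
A minimal Java sketch of this pattern is shown below. Only the CMDBInstance.create(ARServerUser) call is taken from the description above; the package imports, instance construction, and thread wiring are illustrative assumptions rather than the actual BMC test harness.

import com.bmc.arsys.api.ARServerUser;
import com.bmc.cmdb.api.CMDBInstance;

// Sketch of the multi-threaded "Regular Create" generator: each of the 30
// worker threads creates the full data model (51 class instances plus 50
// relationship instances), one server round trip per instance.
public class RegularCreateSketch implements Runnable {
    private final ARServerUser context; // authenticated AR System session

    public RegularCreateSketch(ARServerUser context) {
        this.context = context;
    }

    @Override
    public void run() {
        try {
            for (int i = 0; i < 101; i++) {           // 51 CIs + 50 relationships
                CMDBInstance ci = new CMDBInstance(); // hypothetical constructor
                // ...populate class name, dataset ID, and the 15+ attributes here...
                ci.create(context);                   // one API call per instance
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Launch the 30 generator threads against an already authenticated session.
    public static void launch(ARServerUser context) {
        for (int t = 0; t < 30; t++) {
            new Thread(new RegularCreateSketch(context)).start();
        }
    }
}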

Bulk Create CI

In this test, all the instances in the data model were created at once using the following
new API:
CMDBUtil.CMDBCreateMultipleInstances(ARServerUser context,
java.lang.String datasetId, CMDBInstance[] instanceList);

This functionality was introduced in 7.5 and significantly increases throughput by taking
advantage of the bulk insert functionality of the database. CMDB Instance objects are
created on the client side and placed in the instance list array. Once all objects have been
created, the API call is invoked to create these objects in the database. In this test

scenario, all the class instances are created in one call, and all the relationships in another.
So, 50 CIs are created for each bulk call.
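
A corresponding sketch of the bulk pattern follows; the CMDBCreateMultipleInstances signature is quoted from the text above, while the CMDBUtil package location and the instance construction are assumptions.

import com.bmc.arsys.api.ARServerUser;
import com.bmc.cmdb.api.CMDBInstance;
import com.bmc.cmdb.api.CMDBUtil;

// Sketch of the "Bulk Create" pattern: build the whole data model on the
// client first, then push it to the server in two bulk calls (one for the
// class instances, one for the relationships).
public class BulkCreateSketch {
    public static void createOneIteration(ARServerUser context, String datasetId)
            throws Exception {
        CMDBInstance[] classInstances = new CMDBInstance[51];
        CMDBInstance[] relationships = new CMDBInstance[50];
        // ...populate both arrays on the client side; no server calls happen yet...

        // One bulk call per array lets the database perform a bulk insert:
        CMDBUtil.CMDBCreateMultipleInstances(context, datasetId, classInstances);
        CMDBUtil.CMDBCreateMultipleInstances(context, datasetId, relationships);
    }
}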

Test results for the CMDB batch job create CI functionality for both regular and bulk
mode are displayed in Chart 17.

Chart 17. Throughput for Create CI batch jobs for varying dataset volume

Throughput (total CIs / total time, CIs/sec), Create CI Regular versus Bulk:

Test                  1M CIs   2M CIs
Create CI - Regular   542      523
Create CI - Bulk      573      565

System resource utilizations during the Create CI batch jobs for different tiers are shown
in Chart 18.

Chart 18. System resource utilization for Create CI batch jobs for varying dataset volume

Resource utilization (%) for the AR and DB servers, Create CI Regular versus Bulk API:

Test                  AR CPU%    AR CPU%    AR Mem%    AR Mem%    DB CPU%    DB CPU%    DB Mem%    DB Mem%
                      (1M)       (2M)       (1M)       (2M)       (1M)       (2M)       (1M)       (2M)
Create CI - Regular   15.45      31.27      8.20       8.87       3.60       5.00       7.13       7.16
Create CI - Bulk      21.76      37.24      8.75       9.12       7.70       10.50      7.19       7.19


NE batch job

For all NE batch jobs, the throughput was measured in terms of CIs/Sec, and was
calculated based on the following formula:

Normalization Throughput = Total Number of CIs / (Completion Time for Normalization Activity in Seconds)

Test results in terms of total throughput derived from the above formula are
shown in the chart below for different CI volumes under consideration.
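
As a worked example using the reported figures: normalizing 1M CIs at the observed 254.4 CIs/sec implies a completion time of roughly 1,000,000 / 254.4 ≈ 3,930 seconds, or about 65 minutes.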

Chart 19. Throughput and timing for NE batch jobs for varying dataset volume

NE batch job throughput for varying dataset volumes:

Dataset volume   Throughput (CIs/sec)
500K             288.9
1M               254.4
2M               199.7

System resource utilization during the NE batch jobs for the different tiers is shown in
Chart 20.

Chart 20. System resource utilization for NE batch jobs for varying dataset volume

Resource utilization (%):

Dataset volume   AR CPU%   AR Mem%   DB CPU%   DB Mem%
500K             6.07      21.65     2.4       7.94
1M               7.66      21.75     3.5       7.95
2M               8.47      21.75     3.8       7.96


RE batch job

For all RE batch jobs, the throughput was measured in terms of CIs/sec for each activity—that is, separately for the Identification and the Merge activity of a given job—and the average throughput of the complete reconciliation activity was calculated as:

Reconciliation Throughput = Total Number of CIs / (Completion Time for Identification Activity + Completion Time for Merge Activity)
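
As a worked check against Chart 21: for the 1M CI run, identification at 295 CIs/sec takes about 3,390 seconds and merge at 76 CIs/sec about 13,160 seconds, so 1,000,000 / (3,390 + 13,160) ≈ 60 CIs/sec, matching the reported average.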

Test results for RE batch jobs for both Identification and Merge activities together are
shown in Chart 21.

Chart 21. Throughput for RE (Identification and Merge activity) batch jobs for varying dataset volume

Throughput (CIs/sec):

Dataset volume   Identification   Merge   Average (ID + Merge)
500K CI          315              92      71
1M CI            295              76      60
2M CI            290              65      54

System resource utilization during the RE batch jobs for the Identification and Merge
activities is shown in Charts 22 and 23.


Chart 22. System resource utilization for RE Identification batch jobs for varying dataset volume

Resource utilization (%):

Dataset volume   AR CPU%   AR Mem%   DB CPU%   DB Mem%
500K             18        13.85     5         7.18
1M               20.5      14.96     6.9       7.08
2M               23.6      15.32     7.8       7.18

Chart 23. System resource utilization for RE Merge batch jobs for varying dataset volume

Resource utilization (%):

Dataset volume   AR CPU%   AR Mem%   DB CPU%   DB Mem%
500K             6         14.35     8.7       7.18
1M               6.2       15.37     8.9       8.9
2M               6.4       15.7      9.3       9.3

Average system utilization for all RE Batch jobs for both activities (Identification and
Merge) together is shown in Chart 24.


Chart 24. System resource utilization for RE batch jobs for varying dataset volume

Average resource utilization (%) for both activities together:

Dataset volume   AR CPU%   AR Mem%   DB CPU%   DB Mem%
500K             12        14.1      6.85      7.18
1M               13.35     15.16     7.9       8.07
2M               15        15.51     8.55      8.03

BMC Atrium Integration Engine tests

Sequential runs for both the class exchanges and the relationship exchanges were
conducted multiple times.

Class Exchanges run sequentially

In this scenario, all twelve Class Exchanges were configured on a single BMC Atrium
Integration Engine instance and exchanges were run sequentially. The throughput of
individual exchanges was measured and average throughput was calculated by using the
following formula:

Average Throughput = SUM (Records transferred for each Class Exchange) divided by
SUM (Completion time for each Class Exchange)

Throughput observed using the formula: 417 Records/Sec
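
As a rough derived figure (the elapsed time is computed, not reported): transferring the approximately 4.7 million class records in the CD source sequentially at 417 records/sec would take about 4,700,000 / 417 ≈ 11,300 seconds, or a little over three hours.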

Relationship Exchanges run sequentially

In this scenario, all twelve Relationship Exchanges were configured on a single BMC
Atrium Integration Engine instance and exchanges were run in sequence. The throughput
of individual Exchange runs was measured, and the total throughput was calculated by
using following formula:

Average Throughput = SUM (Records transferred for each Relationship Exchange)


divided by SUM (Completion time for each Relationship Exchange)

Throughput observed using the formula: 204 Records/Sec


Class Exchanges run in parallel

In this scenario, the BMC_PRODUCT Class Exchange was split into two exchanges and
run in parallel. The throughput of the individual Exchange run was measured and total
throughput was calculated as the total number of records transferred (for both exchanges)
divided by time required for the longest running Exchange.

Average Throughput = SUM (Records transferred for each Class Exchange) / Response
time for the longest running Class Exchange

Table 14 summarizes the class exchange distribution across BMC Atrium Integration Engine instances in two sets.
Table 14. Class Exchange distribution across instances for parallel run

Class         Exchange name                Total records   Instance     Throughput (CIs/sec)
BMC_Product   Pull_ARS_INV_APPLICATION_1   1,269,118       Instance 1   311
BMC_Product   Pull_ARS_INV_APPLICATION_2   963,866         Instance 2   309
Total                                      2,229,109                    547

Average throughput observed: 547 Records/Sec
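
As a derived check (the elapsed times are computed from the per-instance figures, not reported directly): instance 1 transfers 1,269,118 records at 311 records/sec, about 4,080 seconds, making it the longest-running exchange; dividing the combined record count by that time yields roughly the reported 547 records/sec.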

Test results for all BMC Atrium Integration Engine batch jobs together are shown
in Chart 25.

Chart 25. Throughput for BMC Atrium Integration Engine serial and parallel batch jobs

Throughput (records/sec):

Test                            Throughput
Class Exchange, serial          417
Class Exchange, parallel        547
Relationship Exchange, serial   204

Resource utilization for all BMC Atrium Integration Engine batch jobs are characterized
in Chart 26.


Chart 26. System resource utilization for BMC Atrium Integration Engine batch jobs

Resource utilization (%):

Test                            AR CPU%   AR Mem%   DB CPU%   DB Mem%
Class Exchange, serial          21.7      18.34     6.9       7.32
Class Exchange, parallel        30        19.9      6.7       7.31
Relationship Exchange, serial   20        15.32     5         7.5

BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB continuous mode test execution simultaneously - mixed mode runs
All mixed mode runs resulted in minimal impact on the BMC Remedy ITSM and BMC
Service Request Management load tests while BMC Atrium CMDB and BMC Atrium
Core continuous mode tests were simultaneously creating, normalizing, and reconciling
CIs in the system.

Total time taken by the BMC Atrium CMDB and BMC Atrium Core continuous mode
tests from start to end for all mixed mode scenarios is shown in Chart 27.

Chart 27. Total time taken by BMC Atrium CMDB and BMC Atrium Core continuous mode for all mixed mode runs

Total time taken by CMDB continuous mode jobs for varying ITSM/SRM workload:

Scenario                      Total time (minutes)
1,200 user ITSM and 100K CI   37
2,500 user ITSM and 50K CI    15


In general, the impact on the BMC Remedy ITSM or BMC Service Request Management
response times was in line with the increased resource utilization on the AR System
servers and the DB server due to simultaneous execution of BMC Atrium CMDB or
BMC Atrium Core continuous mode jobs.

The following charts help assess the impact of simultaneous CMDB Atrium continuous
mode jobs for a 100K CI BMC Remedy ITSM and BMC Service Request Management
load test workload for 1,200 users:

Chart 28. Response time comparison between BMC Remedy ITSM stand-alone versus mixed runs for 1,200 users nominal workload

Response times (seconds):

Scenario                            1200 Users Standalone   1200 Users Mixed Mode
Open IM Console                     0.17                    0.498
Search Incident By ID               0.06                    0.252
Search Incident By Username         0.09                    0.291
Create Incident (No CI/Redisplay)   0.34                    1.106
Create Incident (CI/Redisplay)      0.73                    1.211
Create Incident (CI/No Action)      0.11                    0.995
Update Incident to Resolve          0.16                    0.32
Open CM Console                     0.45                    1.261
Save/Initiate Change                0.75                    1.458
Search Change By ID                 0.16                    0.298

Chart 29. Response time comparison between BMC Service Request


Management stand-alone vs. Mixed runs for 1200 users nominal workload

[Chart data] SRM response times for 1,200 users, standalone vs. mixed mode, in seconds:

    Transaction                 Standalone   Mixed mode
    View SR                     0.14         0.319
    Add Activity Log            0.14         0.438
    View Services in Category   0.19         0.25
    Open Request Console        0.34         1.013
    Create SR w/ mapping        0.78         1.579
    Create SR w/o mapping       0.56         1.087
    View Quick Picks            0.28         0.443
    Search by Keyword           0.19         0.317


The charts below compare resource utilization during the mixed mode runs against the
normal standalone BMC Remedy ITSM and BMC Service Request Management runs:

Chart 30. CPU load variation between BMC Remedy ITSM or BMC Service
Request Management stand-alone versus Mixed runs for 1200 users workload

[Chart data] CPU load comparison, ITSM/SRM standalone vs. mixed mode (CPU %):

                                  MT CPU%   AR CPU%   DB CPU%
    1200 User ITSM Mixed Mode     1.38      8.32      3.7
    1200 User ITSM Standalone     1.29      3.27      1.7

Chart 31. Memory usage variation between BMC Remedy ITSM or BMC Service
Request Management stand-alone versus Mixed runs for 1200 users workload

[Chart data] Memory usage comparison, ITSM/SRM standalone vs. mixed mode (Mem %):

                                  MT Mem%   AR Mem%   DB Mem%
    1200 User ITSM Mixed Mode     7.62      11.25     12
    1200 User ITSM Standalone     6.08      8.75      7.31

All transaction data created and/or modified in the system for the mixed mode test runs is
summarized in Table 15.


Table 15. BMC Remedy ITSM or BMC Service Request Management transaction
data created during mixed mode runs for varying test scenarios

Entry type                 2,500 users, 50K CI   1,200 users, 100K CI
Incidents Created          809                   313
Incidents Modified         1,026                 482
Changes Created            48                    22
Service Requests Created   2,301                 1,106
Activity Log Entries Added 291                   132

End-user response times for popular use cases across a WAN
To help establish a performance baseline for the BMC Remedy applications deployed in
the Dell labs environment and accessed by users across a wide area network (WAN), a
few BMC Remedy ITSM and BMC Service Request Management use cases were tested
manually, recording response times with the stop-watch technique.

The response times, collected using Internet Explorer 7, are summarized in Table 16.
Table 16. Manual end user response times over WAN connection

BMC Remedy ITSM and BMC Service Request   Average response time
Management use case                       (seconds)
Open IM Console                           3.2
Open New Incident Form                    4.0
Submit Incident ("No Action")             2.0
Submit Incident ("Redisplay Current")     5.1
Open Change Management Console            4.1
Submit Change ("No Action")               3.2
Open Console (Request Entry)              5.0
Search by a Query String                  2.0
Create Submit Request (with Mapping)      3.3


Network latency varied between 65 and 70 ms while recording the values using a VPN
connection over the Internet.
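The stop-watch measurement can also be automated. The sketch below times a single HTTP GET against the mid tier over the same VPN link; the URL is a hypothetical placeholder, and the benchmark itself recorded these values manually in Internet Explorer 7 rather than with a script:

    # Time one request against the mid tier; a rough analog of the
    # stop-watch measurement over the WAN/VPN link.
    import time
    import urllib.request

    URL = "http://midtier.example.com/arsys/home"  # hypothetical mid tier URL

    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()  # include full page download in the measurement
    print(f"Response time: {time.perf_counter() - start:.2f} s")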

Performance tuning and recommendations


This section discusses the tuning parameters that were set in each tier of the
benchmarking environment to achieve this highly scalable architecture.

Mid tier settings


Recommended configuration settings for all mid tier server instances are summarized in
Table 17.
Table 17. Mid tier server settings

Parameter                          Setting
JVM Min Heap                       2 GB
JVM Max Heap                       2 GB
Max Pool Size Per Server           625
Definition Change Check Interval   1 day
Tomcat max threads                 1,000
Tomcat accept count                300

The Max Pool Size Per Server parameter can be set to 400 and still achieve comparable
response times. Because the servers used in the benchmarking environment had 24 GB of
RAM available, Max Pool Size Per Server was set to 625 to accommodate the 2,500
virtual user workload without queuing latency; with the four mid tier instances listed
in Table 24 sharing the load evenly, that works out to exactly 2,500 / 4 = 625
connections per server. Setting this parameter higher did not yield better response
times.
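As a quick sanity check on that sizing, assuming the 2,500 virtual users spread evenly across the four mid tier instances from Table 24:

    import math

    virtual_users = 2500
    mid_tier_instances = 4  # 4 x Dell PowerEdge R610 (Table 24)

    # Connections each mid tier must pool if the load balances evenly.
    pool_size_per_server = math.ceil(virtual_users / mid_tier_instances)
    print(pool_size_per_server)  # 625, matching Table 17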

AR server settings
Configuration settings recommended for all AR server instances are summarized in Table
18.
Table 18. AR server settings

Parameter              Settings for BMC Remedy       Settings for BMC Atrium
                       ITSM and BMC Service          Integration Engine, Create CI,
                       Request Management            NE & RE tests on the
                       load tests                    integration server
Fast min/max threads   30/30                         30/30
List min/max threads   20/20                         30/30
Next ID Block size     100                           100
CAI Plugin Threads     2/2                           N/A
Server group           True                          True (mixed mode runs)
                                                     False (standalone batch jobs)

DB server settings
Configuration settings recommended for the Oracle database server are summarized in
the table below:
Table 19. DB server settings

Parameter              Setting
Cursor Sharing         Similar
memory_target          15G
memory_max_target      15G
Disk Async IO          TRUE
filesystemio_options   SETALL

• The Automatic Memory Management (AMM) feature of Oracle 11g was enabled to
manage the SGA and PGA memory structures by setting the MEMORY_TARGET and
MEMORY_MAX_TARGET parameters.

• The Oracle ASM feature was used to manage the disk I/O subsystem.

• The LOB storage option was changed to IN_ROW for the following SRM application-
related tables: CFG:Broadcast and SRD:ServiceRequestDefinition.
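A minimal cx_Oracle sketch of applying these settings follows, assuming SYSDBA access. The AR System table and LOB column that back CFG:Broadcast vary by installation, so the identifiers in the last statement are placeholders, not the actual schema objects:

    import cx_Oracle

    conn = cx_Oracle.connect("sys", "password", "dbhost/ORCL",
                             mode=cx_Oracle.SYSDBA)
    cur = conn.cursor()

    # Table 19 settings; the memory parameters take effect on instance restart.
    cur.execute("ALTER SYSTEM SET cursor_sharing='SIMILAR' SCOPE=BOTH")
    cur.execute("ALTER SYSTEM SET memory_target=15G SCOPE=SPFILE")
    cur.execute("ALTER SYSTEM SET memory_max_target=15G SCOPE=SPFILE")
    cur.execute("ALTER SYSTEM SET filesystemio_options='SETALL' SCOPE=SPFILE")

    # Move a LOB column to in-row storage (placeholder table/column names).
    cur.execute("ALTER TABLE ARADMIN.T_CFG_BROADCAST MOVE "
                "LOB(C_MESSAGE) STORE AS (ENABLE STORAGE IN ROW)")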

Best practice guidelines published by Dell were referenced to configure the database
server and the I/O subsystem. White papers published by Dell can be accessed from
http://www.dell.com/oracle. Configurations related to Dell and Oracle can be found at
this URL by clicking on the “Dell Validated Components” link. Related white papers can
be accessed from the “Whitepapers” link.


Create CI batch job settings


Configuration settings recommended for the Java tool used for the Create CI batch job
are summarized in Table 20.
Table 20. Recommended settings for the Create CI batch jobs

Parameter                     1M CI         2M CI         50K CI mixed   100K CI mixed
                              stand-alone   stand-alone   continuous     continuous
                              run           run           mode run       mode run
Number of threads             30            30            5              5
Number of CIs created per
thread per iteration          51            51            51             51
Number of relationships per
thread per iteration          50            50            50             50
Number of iterations          334           668           102            204
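Read together, the Table 20 parameters describe a simple threaded driver loop. The hypothetical Python sketch below shows that shape; the create functions are stand-ins for the Java tool's CMDB API calls, and the exact accounting of iterations (per thread versus total) is not spelled out in the paper:

    from concurrent.futures import ThreadPoolExecutor

    NUM_THREADS = 30        # 1M CI stand-alone run (Table 20)
    CIS_PER_ITERATION = 51
    RELS_PER_ITERATION = 50
    NUM_ITERATIONS = 334

    def create_cis(count: int) -> None:
        pass  # stub: call the CMDB create-CI API here

    def create_relationships(count: int) -> None:
        pass  # stub: call the CMDB create-relationship API here

    def run_iteration(_: int) -> None:
        # Each iteration creates a block of CIs plus their relationships.
        create_cis(CIS_PER_ITERATION)
        create_relationships(RELS_PER_ITERATION)

    with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
        list(pool.map(run_iteration, range(NUM_ITERATIONS)))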

NE batch job settings


Configuration recommendations specific to the BMC Atrium CMDB Normalization
batch jobs are as follows:
Table 21. Recommended settings for CMDB Normalization batch job

Parameter Value
Chunk Size 50
Number of Threads (Batch) 15
Continuous Job No

• To maximize throughput for NE batch jobs, BMC recommends caching the
BMC:BaseElement table in the database before starting an NE batch job (see the
sketch after this list). Because the batch jobs are intended mostly for first-time
production roll-out, caching the BMC:BaseElement table in memory shortens the
total normalization timeline. After the NE batch job completes, the table can be
altered back to the "nocache" option if the same AR System server will be used
for other applications.

• It is recommended to compute statistics for the ARADMIN schema before starting an
NE batch job, to ensure optimal performance of the DB.
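A sketch of those two preparation steps with cx_Oracle; the table name is a placeholder, since the physical table backing the BMC:BaseElement form must be looked up in the local AR System schema:

    import cx_Oracle

    conn = cx_Oracle.connect("aradmin", "password", "dbhost/ORCL")
    cur = conn.cursor()

    # Cache the table backing BMC:BaseElement and gather schema statistics
    # before launching the NE batch job.
    cur.execute("ALTER TABLE ARADMIN.T_BASE_ELEMENT CACHE")  # placeholder name
    cur.execute("BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ARADMIN'); END;")

    # ... run the NE batch job, then revert if the server hosts other apps:
    cur.execute("ALTER TABLE ARADMIN.T_BASE_ELEMENT NOCACHE")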


RE batch job settings


Configuration recommendations specific to the BMC Atrium CMDB Reconciliation
batch jobs are given in Table 22.
Table 22. Recommended settings for CMDB Reconciliation batch job

Parameter Value
Definition Check Interval (Seconds) 300
Polling Interval (Seconds) 300
Continuous Job No
Qualification Set None

It is recommended to compute statistics for the ARADMIN schema before starting an RE
Identification and Merge batch job, to ensure optimal performance of the DB.

BMC Atrium Integration Engine settings


Configuration recommendations for BMC Atrium Integration Engine server instances are
summarized in Table 23.
Table 23. Recommended settings for BMC Atrium Integration Engine jobs

Parameter                          Serial         Parallel
Number of instances installed      6              6
Number of instances active         1              2
Number of AIE threads / exchange   49             49
Record Limit                       0 (No Limit)   0 (No Limit)
Update Existing Record
(Data in Target)                   No             No
Create Record (Data in Target)     Always         Always


Hardware and network


At Dell labs, each system was connected to a 1 Gbps switched network. Detailed
hardware specifications for each server and the storage subsystem used for the
benchmark are summarized in Table 24.
Table 24. Hardware Configurations

Integration Server – Dell PowerEdge R710
  System information: (2) x Intel Xeon X5570 quad-core Nehalem, 2.93 GHz; 24 GB RAM
  Comments: Dell EqualLogic PS6000XV storage attached using RAID 10 to host the CDS
  database using Oracle 11g.

Database Server – Dell PowerEdge R710
  System information: (2) x Intel Xeon X5570 quad-core Nehalem, 2.93 GHz; 144 GB RAM
  Comments: Dell EqualLogic PS6000XV storage attached using RAID 10 to host the
  CMDB & ITSM database using Oracle 11g.

AR System servers and mid tier servers – Dell PowerEdge R610 and Dell PowerEdge R710
  System information: 24 GB RAM each. Mid tier: 4 x Dell PowerEdge R610.
  AR tier: 2 x Dell PowerEdge R610 and 2 x Dell PowerEdge R710.
  PowerEdge R610: (2) x Intel Xeon X5570 quad-core Nehalem, 2.93 GHz.
  PowerEdge R710: (2) x Intel Xeon X5550 quad-core Nehalem, 2.66 GHz.

Storage – Dell EqualLogic
  System information: PS6000XV (SAS), 16 x 450 GB, 7.2 TB
  Comments: Two R710 servers hosted Oracle databases (the Integration server hosting
  the CDS DB for AIE tests, and the DB server hosting the CMDB), connecting to one
  PS6000XV array through two PowerConnect 6224 switches. Both Dell PowerConnect 6224
  switches and the PS6000XV disk array were configured according to Dell Oracle
  solutions best practice recommendations to ensure optimal iSCSI network performance
  for Oracle databases. Detailed storage configuration is provided in Appendix C.

Silk Performer Agents – 2 x Dell PowerEdge 1950
  System information: 2 x dual Intel Xeon E5405, 2.00 GHz; 16 GB RAM

Silk Performer Controller and client box used for manual response times over WAN –
Dell Latitude E6400
  System information: Intel Core 2 Duo T9550, 2.66 GHz; 3.48 GB RAM

Appendix A

BMC Remedy ITSM foundation data and application data


The following tables summarize the foundation data and application data inserted
into the BMC Remedy AR System database prior to starting the tests.
Table 25. BMC Remedy ITSM foundation data

Type                                   Foundation
Companies (multi-tenancy)              696
Sites                                  1,396
People                                 35,889
People Organizations/Dept              404
People Application Permission Groups   153,190
Support Organizations                  23
Support Groups                         205
Support Group Functional Role          39,010
Assignments                            2,791

Table 26. BMC Remedy ITSM application data

Type                             Volume
Incident                         581,750
Change                           13,000
Service Target                   1,625
CI                               2,000,000
People to CI Association         23,000
Contracts to CI Association      3,000
Incidents with CI Associations   52,000
Service CI                       2,000

Table 27. BMC Remedy SRM foundation data

Type                      Foundation
AOT                       50
PDT                       200
SRD                       1,000
Navigational Categories   612
Service Requests          77,500
Work Order                77,500
Entitlement Rules         400

Data Setup
Application object template (AOT)
• 10 Incident
• 15 Change
• 25 Work Order
• All global
• No template
Navigational Category
• 10 Tier 1 values
• 10 Tier 2 values
• 5 Tier 3 values
Process definition template (PDT)
• At least 1 AOT associated to a PDT


• 50 PDTs have parallel association


• All global
Service request definition (SRD)
• Have levels defined
• Have keyword defined
• 1 SRD has 6 questions mapped to 2 incident fields
• 100 SRDs have a price over $100
• All global
Entitlement Qualification Rules
• 200 PED groups. 20 users from each of the 199 companies per group
• 200 PED company by each of the 199 companies
• 1 SRDED based on price qualification of price > $100
• 10 SRDED based on each of the ten levels
• 200 SRDED based on a specific SRD
• 10 SRDED based on the ten Tier 1 categories. The Tier2 and Tier3 values
are static
Entitlement Rules
• 200 company to 100 SRDED based on specific SRD
• 50 PED groups to SRDED levels (5 groups per level)
• 50 PED groups to SRDED category (5 groups per category)
• 100 PED groups to SRDED price qualification

Appendix B

BMC Remedy ITSM user scenarios and associated actions


This appendix describes the associated actions for each of the BMC Remedy ITSM user
scenarios.
Table 28. Use Case 1: Search Incident by ID

Transaction name: Search Incident By ID

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support user, log in to the home page.           No        No
                 Click the Incident Management Console link.           Yes       Yes
                 Click the Search Incident link.                       Yes       Yes
                 Enter an ID in the Incident ID+ field.                Yes       Yes
                 Click Search.                                         Yes       Yes
                 Close the Incident form.                              Yes       No
                 Log out.                                              No        No
End              Close any open files.                                 No        No
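For readers who want to see how such a table maps onto a load script, a loose Python rendering of Use Case 1 follows. The URLs, field names, and login flow are hypothetical (the benchmark used Silk Performer scripts), but the structure shows how only steps flagged Timed? contribute to reported response times, while Repeat? steps loop inside one logged-in session:

    import time
    import requests

    BASE = "http://midtier.example.com/arsys"  # hypothetical mid tier URL

    def timed(name, action):
        """Run one scripted step and report its response time."""
        start = time.perf_counter()
        action()
        print(f"{name}: {time.perf_counter() - start:.3f} s")

    session = requests.Session()
    session.post(f"{BASE}/login", data={"user": "support1"})  # not timed

    for incident_id in ["INC000000000101", "INC000000000102"]:  # repeated steps
        timed("Open IM Console", lambda: session.get(f"{BASE}/im/console"))
        timed("Search Incident", lambda: session.get(f"{BASE}/im/search"))
        timed("Search by ID", lambda: session.post(
            f"{BASE}/im/search", data={"Incident ID+": incident_id}))
        session.get(f"{BASE}/im/close")  # close form: repeated, not timed

    session.get(f"{BASE}/logout")  # not timed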

Table 29. Use Case 2: Search Incident by Customer Name

Transaction name: Search Incident By Customer Name

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support user, log in to the home page.           No        No
                 Click the Incident Management Console link.           Yes       Yes
                 Click the Search Incident link.                       Yes       Yes
                 Enter the customer name in the Customer field and     Yes       Yes
                 press the Enter key.
                 Click Search.                                         Yes       Yes
                 Close the Incident form.                              Yes       No
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 30. Use Case 3: Create Incidents with Service CI Related with Redisplay
Current after submit action.

Transaction name: Create Incidents With CI Related

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support user, log in to the home page.           No        No
                 Click the Incident Management Console link.           Yes       Yes
                 Click the New Incident link.                          Yes       Yes
                 Enter the first name of a customer in the Customer    Yes       Yes
                 field and press the Enter key.
                 Enter a summary and notes.                            Yes       No
                 Open the Service menu and make a selection.           Yes       Yes
                 Open the Impact menu and make a selection.            Yes       No
                 Open the Urgency menu and make a selection.           Yes       Yes
                 Click Submit.                                         Yes       Yes
                 Close the Incident window. If it redisplays in a      Yes       No
                 new window, close the new window and the previous
                 window.
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 31. Use Case 4: Create Incidents with No CI Related with Redisplay
Current after submit action.

Transaction name: Create Incidents Without CI Related

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support user, log in to the home page.           No        No
                 Click the Incident Management Console link.           Yes       Yes
                 Click the New Incident link.                          Yes       Yes
                 Enter the first name of a customer in the Customer    Yes       Yes
                 field and press the Enter key.
                 Enter a summary and notes.                            Yes       No
                 Open the Impact menu and make a selection.            Yes       No
                 Open the Urgency menu and make a selection.           Yes       Yes
                 Click Submit.                                         Yes       Yes
                 Close the Incident window. If it redisplays in a      Yes       No
                 new window, close the new window and the previous
                 window.
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 32. Use Case 5: Update Incident to Resolved Status

Transaction name: Update Incident Resolved Status

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support user, log in to the home page.           No        No
                 Click the Incident Management Console link.           Yes       Yes
                 Go to the Defined Searches -> All Open -> All         Yes       Yes
                 Priorities link to get work assigned to the
                 logged-on user.
                 Select an entry from the incident table and click     Yes       Yes
                 the View button.
                 The Incident form opens the entry. Change the         Yes       Yes
                 Status field to In Progress and save the form.
                 Change the Status field to Resolved. Open the         Yes       No
                 Status Reason menu and select No Further Action
                 Required. Enter some text in the Resolution field.
                 Click Save.                                           Yes       Yes
                 Close the Incident form.                              Yes       No
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 33. Use Case 6: Create Change with Task

Transaction name: Create Change with Task

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support change user, log in to the home page.    No        No
                 Click the Change Management Console link.             Yes       Yes
                 Click the New Change link.                            Yes       Yes
                 Enter Summary text.                                   Yes       No
                 Go to the Dates tab.                                  Yes       No
                 Open the Scheduled Start Date calendar and select     Yes       No
                 a future date. (In the script, generate a date
                 1-30 days from the current date.)
                 Open the Scheduled End Date calendar and select a     Yes       No
                 future date that is 1 day later than the scheduled
                 start date.
                 Go to the Task tab. The Request Type menu should      Yes       Yes
                 have Ad Hoc selected. Click the Relate button.
                 In the Create Task dialog box, enter name,            Yes       No
                 summary, and notes values.
                 Click Save in the Create Task dialog box.             Yes       Yes
                 In the Process Flow Status Initiate, choose Next      Yes       Yes
                 Stage.
                 The Change Initialization dialog opens. Click         Yes       Yes
                 Save.
                 Log out.                                              No        No
End              Close any open files.                                 No        No

Table 34. Use Case 7: Search Change by ID

Transaction name: Search Change By ID

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As a support change user, log in to the home page.    No        No
                 Click the Change Management Console link.             -         -
                 Open the Change form in Search mode.                  Yes       Yes
                 Randomly enter a Change entry ID and click Search.    Yes       Yes
                 Close the Change form.                                Yes       No
                 Log out.                                              No        No
End              Close any open files.                                 No        No

SRM Test Scripts - User Scenarios

Table 35. Use Case 1: Add Activity Log Entry to Existing Service Request

Transaction name: Add Activity Log

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 Click the Submitted Requests link.                    Yes       Yes
                 Select an entry from the results.                     Yes       Yes
                 Click Add in the Activity Log section.                Yes       Yes
                 Type some summary and notes. Save the new activity    Yes       Yes
                 log entry.
                 Log out.                                              No        No
End              Close any open files.                                 No        No

Table 36. Use Case 2: View Services in a Category

Transaction name: View Services in Category

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 Click any of the available category links.            Yes       Yes
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 37. Use Case 3: Browse a Sub Service Category

Transaction name: Browse Sub Category

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 Click the Browse Sub-Categories link under any of     Yes       Yes
                 the available categories.
                 Log out.                                              No        No
End              Close any open files.                                 No        No

Table 38. Use Case 4: Create Service Request with 6 Questions and 2 Field
Mappings

Transaction name: Create Service Request with 6 Questions and 2 Field Mappings

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 In the search bar, enter "simulation" and click       Yes       No
                 the magnifying glass icon.
                 Click Request Now for the SRD with mapping.           Yes       Yes
                 Click Submit to create the SR.                        Yes       Yes
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 39. Use Case 5: Create Service Request with 6 Questions and No Mapping

Transaction name: Create Service Request with 6 Questions and No Mapping

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 In the search bar, enter "simulation" and click       Yes       No
                 the magnifying glass icon.
                 Click Request Now for the SRD with no mapping.        Yes       Yes
                 Click Submit to create the SR.                        Yes       Yes
                 Log out.                                              No        No
End              Close any open files.                                 No        No

Table 40. Use Case 6: View Request Entry Quick Pick Link

Transaction name: View Quick Pick

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 Click the Quick Pick link.                            Yes       Yes
                 Log out.                                              No        No
End              Close any open files.                                 No        No


Table 41. Use Case 7: Search Request Entry by Keyword

Transaction name: Search Request Entry By Keyword

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 Randomly enter a keyword of "keyword#", where # is    Yes       Yes
                 a value from 1 to 10.
                 Log out.                                              No        No
End              Close any open files.                                 No        No

Table 42. Use Case 8: View an Existing Service Request

Transaction name: View Service Request

Phase            Transaction step                                      Repeat?   Timed?
Initialization   Configure load generator settings.                    No        No
Transaction      As an end user, log in to the home page.              No        No
                 Click the Request Entry link.                         Yes       Yes
                 Click the Submitted Requests link.                    Yes       Yes
                 Randomly select an entry from the results.            Yes       Yes
                 Click Request Details.                                Yes       Yes
                 Close the dialog window.                              Yes       No
                 Log out.                                              No        No
End              Close any open files.                                 No        No

Appendix C

Dell EqualLogic PS6000XV disk array setup


The EqualLogic disk array contains 16 disks, two of which are designated as hot spares.
The Oracle ASM disk group distribution and RAID details are provided below:

Table 43. Storage array configuration details

Oracle ASM     Dell EqualLogic          Linux partitions        Linux file format
Disk Group     Storage Volume
DATABASEDG     2 RAID 10 volumes of     1 partition on each     Block device
               150 GB each (300 GB      volume presented to
               total)                   the OS
FLASHBACKDG    1 RAID 10 volume of      1 partition on each     Block device
               100 GB                   volume presented to
                                        the OS

Appendix D

Product version numbers


Version numbers for various components of BMC Remedy ITSM, BMC Atrium CMDB,
and BMC Atrium Core products are listed in Table 44.
Table 44. Product Version Numbers

Application type        Version
BMC Remedy AR System    7.5 Patch 3, configured in server group
BMC Remedy Mid Tier     7.5 Patch 3 / Java 1.6.0_13 / Tomcat 5.5.25
BMC Atrium CMDB         7.6 GA
BMC Remedy ITSM         7.6 GA
BMC Remedy SRM          7.6 GA
BMC Remedy SLM          7.5 GA
Linux version           RHEL 2.6.18-128.el5 x86_64 GNU/Linux
AR System database      Oracle 11g
Silk Performer client   Windows Server 2003 64-bit

Appendix E

Dell Product Details

Dell PowerEdge servers


Dell 11th Generation PowerEdge servers are at the core of Dell’s enterprise strategy. The
portfolio centers on reliability and commonality as well as a strong focus on better power
and energy efficiency, virtualization optimization, and simplified systems management.

PowerEdge R710
The Dell PowerEdge R710 is designed to be the cornerstone of today's competitive
enterprise. Engineered in response to input from IT professionals, it is the next-generation
2U rack server created to efficiently address a wide range of key business applications.
The successor to the PowerEdge 2950 III, the R710 runs the Intel Xeon 5500/5600 Series
Processors and helps lower the total cost of ownership with enhanced virtualization
capabilities, improved energy efficiency, and innovative system management tools.

PowerEdge R610
Inspired by customer feedback, the Intel-based Dell PowerEdge R610 server is
engineered to simplify data center operations, improve energy efficiency, and lower total
cost of ownership. System commonality, purposeful design, and service options combine
to deliver a 1U rack server solution that can help better manage the enterprise.
