March 2010
www.bmc.com
Contacting BMC Software
You can access the BMC Software website at http://www.bmc.com. From this website, you can obtain
information about the company, its products, corporate offices, special events, and career opportunities.
United States and Canada
Address: BMC SOFTWARE INC, 2101 CITYWEST BLVD, HOUSTON TX 77042-2827 USA
Telephone: 713 918 8800 or 800 841 2031
Fax: 713 918 8000
Outside United States and Canada
Telephone: (01) 713 918 8800
Fax: (01) 713 918 8000
If you have comments or suggestions about this documentation, contact Information Design and Development by
email at doc_feedback@bmc.com.
Support website
You can obtain technical support from BMC Software 24 hours a day, 7 days a week at http://www.bmc.com/support. From
this website, you can:
• Read overviews about support services and programs that BMC Software offers.
• Find the most current information about BMC Software products.
• Search a database for problems similar to yours and possible solutions.
• Order or download product documentation.
• Report a problem or ask a question.
• Subscribe to receive email notices when new product versions are released.
• Find worldwide BMC Software support center locations and contact information, including email addresses, fax numbers,
and telephone numbers.
Executive Summary
As businesses continue to grow with BMC Remedy IT Service Management solution
implementations, it becomes increasingly critical to provide information about the type of
performance that can be expected for a particular solution installation. BMC Software
and Dell conducted a series of performance-driven tests to help customers understand
workload performance with BMC Remedy IT Service Management (ITSM), BMC
Service Request Management, and BMC Atrium Configuration Management Database
(CMDB) applications on Dell PowerEdge servers. The teams also conducted a series of
high-volume benchmark tests to demonstrate scalability and performance. These tests
were conducted at Dell's labs in Austin, Texas, in November 2009.
This white paper provides quantitative results of the tests and guidelines for achieving the
best possible performance and scalability with BMC Remedy ITSM, BMC Service
Request Management, and BMC Atrium CMDB solutions on Dell PowerEdge Servers in
a Linux® environment.
Using Linux-based Dell PowerEdge systems and BMC software, organizations gain a
standards-based solution that can lower total cost of ownership (TCO) through better
performance and lower energy costs, delivering more value throughout the company.
By focusing on energy efficiency, reliability, and value throughout the design and
development process, Dell PowerEdge servers offer outstanding performance and
reliability without locking customers into a proprietary solution.
Realistic user scenarios and workloads derived from customer cases were used for the
performance and scalability tests. This paper provides hardware sizing guidelines and
White paper
Environment
This section describes benchmark architecture and provides background on how BMC
Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB
applications were deployed in a Linux and Oracle® environment.
For this benchmark, a dedicated integration server was used, as specified in the reference
architecture document, to obtain optimum performance for BMC Atrium applications
such as the BMC Atrium Integration Engine and the CMDB normalization and
reconciliation jobs.
Table 1 summarizes the initial setup for conducting the benchmark tests.
Table 1. Benchmark environment at Dell labs
Silk Performer Controller
Workstation: Dell Latitude E6400
CPU: Intel Core Duo T5990, 2.6 GHz
RAM: 3.48 GB
All five BMC Remedy Action Request System (AR System) servers were configured in a
server group. Four BMC Remedy Mid Tier (Mid Tier server) instances were hosted on
separate physical servers and were paired with four of the five AR System server
instances participating in the server group; each of those AR System servers was
configured as the authentication server for its respective mid tier server. The AR System
instance installed on the server dedicated as the integration server had all applications
installed (BMC Remedy ITSM 7.6, BMC Service Level Management 7.5, and BMC
Service Request Management 7.6) in addition to the core functionality of BMC
Atrium 7.6. Besides hosting AR System, the integration server also hosted the source
database for the BMC Atrium Integration Engine source data. This Oracle 11g database
shared the disk storage subsystem with the main database.
Scenarios
The following test cases were used for characterizing the performance and scalability of
the BMC Remedy ITSM 7.6, BMC Remedy Service Request Management 7.6, and BMC
Atrium CMDB 7.6 solutions on Linux and Oracle environments:
• BMC Remedy ITSM and BMC Service Request Management workload stand-alone
• BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB
continuous mode jobs in a mixed workload
The activities of the BMC Remedy ITSM, BMC Remedy Service Request Management,
and BMC Atrium CMDB test scenarios are described in the following sections. Appendix
A provides detailed information about how the test environment was prepopulated.
• Number of users
Nominal workload
While it is not practical to conduct tests with all combinations of the variables, BMC
defined a nominal workload to represent a typical workload for many large BMC
Remedy ITSM and BMC Service Request Management customers. The nominal
workload can be used as a baseline for benchmarking the performance and scalability
of the BMC Remedy ITSM and BMC Service Request Management solutions
consistently over time.
queuing model, and load tests were executed for over an hour to simulate a real-life
customer environment.
The workload was split between the BMC Remedy ITSM and BMC Service Request
Management applications at 30% and 70% of the total, respectively, to simulate
customer environments, and was distributed evenly across the four mid tier servers and
four AR System servers configured in a server group, as displayed in Table 2.
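As a concrete illustration, the per-server user counts for the 1,200-user nominal run follow directly from the 30/70 split and the four-way server group. A minimal sketch of the arithmetic (the class and method names are illustrative, not part of the benchmark tooling):

```java
// Sketch: distribute the nominal workload 30% ITSM / 70% SRM,
// spread evenly across the four mid tier / AR System server pairs.
class WorkloadSplit {
    static final double ITSM_SHARE = 0.30; // share of users on ITSM scenarios
    static final int SERVER_PAIRS = 4;     // mid tier + AR System pairs

    // Users driving BMC Remedy ITSM scenarios against one AR System server
    static int itsmUsersPerServer(int totalUsers) {
        return (int) Math.round(totalUsers * ITSM_SHARE) / SERVER_PAIRS;
    }

    // Users driving BMC Service Request Management scenarios against one server
    static int srmUsersPerServer(int totalUsers) {
        return (int) Math.round(totalUsers * (1.0 - ITSM_SHARE)) / SERVER_PAIRS;
    }

    public static void main(String[] args) {
        System.out.printf("ITSM per server: %d, SRM per server: %d%n",
                itsmUsersPerServer(1200), srmUsersPerServer(1200)); // 90, 210
    }
}
```

For 1,200 users, each of the four servers therefore carries 90 ITSM users and 210 Service Request Management users.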
Table 2. Nominal workload for the BMC Remedy ITSM solution test
Table 3. Nominal workload for the BMC Service Request Management solution
test
The projected incident tickets, task entries, and service requests created and modified in
the system with 1,200 users after an hour of simulation under the nominal workload are
summarized in Table 4.
Table 4. Projected data after 1 hour of simulation of nominal workload for 1200
users
Scalability tests
Although the nominal workload is a good approximation of what most customers would
have in production, scalability tests were conducted to demonstrate how the ITSM and
SRM applications would respond under varying workloads.
In order to simulate a high load environment, the Silk Performer scenario scripts were
configured with a higher transaction pacing (5 times in this case) while keeping the
number of virtual users constant.
For each scenario under consideration, the transaction pacing and the number of virtual
users were varied independently relative to the nominal workload, thereby simulating
very large BMC Remedy ITSM and BMC Service Request Management environments.
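Under the assumption that offered load scales linearly with pacing, the equivalent-user workloads quoted later in the results (for example, a 10,000-user workload) can be read as virtual users multiplied by the pacing factor. A minimal sketch (the class and method names are illustrative):

```java
class EquivalentLoad {
    // With per-user transaction pacing raised by pacingFactor and the
    // virtual-user count held constant, the offered load approximates that
    // of (users * pacingFactor) users at nominal pacing.
    static int equivalentUsers(int virtualUsers, int pacingFactor) {
        return virtualUsers * pacingFactor;
    }

    public static void main(String[] args) {
        // 2,000 virtual users at 5x nominal pacing ~ a 10,000-user workload
        System.out.println(equivalentUsers(2000, 5)); // 10000
    }
}
```

The same reading makes the 1,200-user run at 5x pacing correspond to the 6,000-user workload column in the transaction tables.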
Tests to assess the scalability of the BMC Remedy ITSM and BMC Service Request
Management applications on Linux consisted of the following parameters:
Table 5. Scalability tests performed for BMC Remedy ITSM and BMC Service
Request Management workload
Tests that exercised increased workload mix were conducted as a part of this exercise to
validate the reduced hardware configuration, as described in Table 6.
Table 6. Tests performed for BMC Remedy ITSM and BMC Service Request
Management for reduced hardware and increased workload
• Create CI Bulk
• NE Batch mode
• RE Batch mode
Two types of create CI APIs were tested in batch mode. The first one is referred to as
“Regular Create,” where the instances were created one at a time. The second test is
referred to as “Bulk Create CI,” where 50 instances were created using a single bulk API
call in Atrium.
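The throughput difference between the two modes comes largely from round trips: Regular Create issues one API call per CI, while Bulk Create issues one call per batch of 50. A minimal sketch of the call-count arithmetic (class and method names are illustrative, not BMC APIs):

```java
class CreateCiCalls {
    static final int BULK_BATCH_SIZE = 50; // instances per bulk API call in the test

    static long regularCreateCalls(long instances) {
        return instances; // one round trip per CI
    }

    static long bulkCreateCalls(long instances) {
        // one round trip per batch, rounding up for a final partial batch
        return (instances + BULK_BATCH_SIZE - 1) / BULK_BATCH_SIZE;
    }

    public static void main(String[] args) {
        // For the 1M-CI dataset volume used in the batch tests:
        System.out.println(regularCreateCalls(1_000_000)); // 1000000
        System.out.println(bulkCreateCalls(1_000_000));    // 20000
    }
}
```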
The data model that was created by this tool is described in Table 7. The computer
system is the root of the tree, and all other CIs that were connected to the computer
system via a relationship are also shown.
Table 7. Class relationship distribution for Create CI batch tests
Class                 Relationship                   Number of CIs
BMC_ComputerSystem    BMC_Dependency                 1
BMC_Product           BMC_HostedSystemComponents     30
BMC_Patch             BMC_Component                  8
BMC_Activity          BMC_Dependency                 1
BMC_Monitor           BMC_HostedSystemComponents     1
BMC_IPEndpoint        BMC_HostedSystemComponents     1
BMC_OperatingSystem   BMC_HostedSystemComponents     2
BMC_Person            BMC_Dependency                 1
BMC_Printer           BMC_Dependency                 1
BMC_Processor         BMC_HostedSystemComponents     1
BMC_DiskDrive         BMC_HostedSystemComponents     1
BMC_Card              BMC_HostedSystemComponents     1
BMC_BIOSElement       BMC_HostedSystemComponents     1
BMC_NetworkPort       BMC_HostedSystemComponents     1
A total of 51 class instances and 50 relationship instances were used per iteration. Each
instance generator had 30 threads, so each generator created a total of
(51 + 50) x 30 = 3,030 instances per iteration.
Each class instance was populated with over 15 attributes. These include Name, Serial
Number, Short Description, Owner Name, Owner Contact, Dataset Id, Reconciliation
Identity, Category, Type, Item, Model, Manufacturer Name, Description, Version
Number, and several other class-specific attributes.
NE batch mode
The batch normalization mode was used to normalize the large number of instances
generated by the initial bulk load of configuration items. This models a typical large
BMC Remedy customer environment going live in production.
Table 8. Class distribution used for NE batch jobs

ClassName             # of instances   Level   Relationship
BMC_ComputerSystem    1                1       BMC_Dependency
BMC_Product           30               2       BMC_HostedSystemComponents
BMC_Patch             8                2       BMC_HostedSystemComponents
BMC_Activity          1                2       BMC_Dependency
BMC_Monitor           1                2       BMC_HostedSystemComponents
BMC_IPEndpoint        1                2       BMC_HostedSystemComponents
BMC_OperatingSystem   2                2       BMC_HostedSystemComponents
BMC_Person            1                2       BMC_Dependency
BMC_Processor         1                2       BMC_HostedSystemComponents
BMC_DiskDrive         1                2       BMC_HostedSystemComponents
BMC_Card              1                2       BMC_HostedSystemComponents
BMC_BIOSElement       1                2       BMC_HostedSystemComponents
BMC_NetworkPort       2                2       BMC_HostedSystemComponents
Each product had the following attributes populated: Name, Serial Number, Short
Description, Owner Name, Owner Contact, Dataset Id, Reconciliation Identity,
Company, Category, Type, Item, Model, Manufacturer Name, Description, Version
Number, Patch Number, and Token Id.
The same BMC-developed Java tool used for the Create CI batch jobs was used to create
test datasets for the NE batch jobs. To quantify the performance and scalability of the
Normalization Engine, the following tests were conducted, varying the dataset volume
for each iteration:
• Normalize 500K CI
• Normalize 1M CI
• Normalize 2M CI
RE batch mode
A Standard Reconciliation was set up for this test case, which consisted of the two most
common reconciliation activities:
• Identifying class instances that are the same entity in two or more datasets
• Merging class instances from one dataset (such as Discovery) to another dataset (by
default, the production BMC.ASSET dataset)
All the identification and merge settings use standard rules. These standard rules work
with all classes in the Common Data Model (CDM) and BMC extensions. They identify
each class, using attributes that typically have unique values, and they merge based on
rules of precedence set for BMC datasets.
The same BMC-developed Java tool used for the Create CI batch jobs was used to create
test datasets for the RE batch jobs. To quantify the performance and scalability of the
Reconciliation Engine, the following tests were conducted, varying the dataset volume
for each iteration:
• Reconcile 500K CI
• Reconcile 1M CI
• Reconcile 2M CI
All CIs were created with ‘Reconciliation Identity’ set to 0, indicating that these newly
created CIs had not yet been identified. The distribution (data model) of the CIs across
classes used to create data for the RE batch jobs is summarized in Table 9.
Table 9. Class distribution used for RE batch jobs

ClassName             # of instances   Level   Relationship
BMC_Product           30               2       BMC_HostedSystemComponents
BMC_Patch             10               2       BMC_Components
BMC_Activity          1                2       BMC_Dependency
BMC_Monitor           2                2       BMC_HostedSystemComponents
BMC_IPEndpoint        1                2       BMC_HostedAccessComponents
BMC_OperatingSystem   3                2       BMC_HostedSystemComponents
BMC_Person            1                2       BMC_Dependency
BMC_Processor         2                2       BMC_HostedSystemComponents
BMC_DiskDrive         2                2       BMC_HostedSystemComponents
BMC_Card              2                2       BMC_HostedSystemComponents
BMC_BIOSElement       2                2       BMC_HostedSystemComponents
BMC_NetworkPort       2                2       BMC_HostedSystemComponents
For performance and scalability, a total of six BMC Atrium Integration Engine instances
were installed for running tests involving parallel exchanges.
Data Exchange objects were used to transfer data from the external CD data source and
synchronize it into the BMC Atrium CMDB database. Twelve primary classes were
identified for the data transfer, and each of these classes was also configured with
relationship classes. Data exchanges were configured for each of the primary classes as
well as for the relationship classes.
The table below summarizes all the classes that were identified for the BMC Atrium
Integration Engine data transfer process along with respective data distribution details in
the CD data source.
Table 10. Primary Classes data distribution
The performance and scalability tests on BMC Atrium Integration Engine 7.6 were
focused on the following types of test scenarios:
In BMC Atrium CMDB 7.5, BMC introduced a mode for the Normalization and
Reconciliation processes called “Continuous Mode,” which provides near real-time
normalization and reconciliation of configuration items. In this mode, the RE and NE
continuously normalize and reconcile CIs in small batches, triggered by either a time
interval or a record-count configuration setting.
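The dual trigger described above (process a small batch when either a record-count threshold is reached or a time interval elapses) can be sketched generically. This is an illustration of the configuration semantics, not BMC's implementation; all names are hypothetical:

```java
// Generic sketch of a continuous-mode trigger: fire a small
// normalization/reconciliation batch when either the pending-record
// threshold is reached or the configured interval has elapsed.
class ContinuousModeTrigger {
    final int recordThreshold;
    final long intervalMillis;
    int pending = 0;
    long lastRun = System.currentTimeMillis();

    ContinuousModeTrigger(int recordThreshold, long intervalMillis) {
        this.recordThreshold = recordThreshold;
        this.intervalMillis = intervalMillis;
    }

    // Called once for each newly created or updated CI; returns true
    // when a batch should run now.
    synchronized boolean onRecord(long now) {
        pending++;
        if (pending >= recordThreshold || now - lastRun >= intervalMillis) {
            pending = 0;
            lastRun = now;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ContinuousModeTrigger t = new ContinuousModeTrigger(3, 60_000L);
        long now = System.currentTimeMillis();
        System.out.println(t.onRecord(now)); // false: 1 pending
        System.out.println(t.onRecord(now)); // false: 2 pending
        System.out.println(t.onRecord(now)); // true: threshold of 3 reached
    }
}
```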
To simulate a typical “day in the life” scenario for BMC Remedy CMDB customers,
BMC Atrium CMDB or BMC Atrium Core continuous mode jobs were run
simultaneously while the BMC Remedy ITSM or BMC Service Request Management
load test was running.
In the mixed mode, the BMC Atrium CMDB test for creating CIs was executed on the
AR System server hosted on the dedicated integration server. This was done deliberately
to simulate a typical large customer environment where both BMC Remedy ITSM and
BMC Atrium CMDB components are deployed in production and hosted on separate
nodes for better scalability and reliability. All five AR System servers participated in a
server group environment in which the integration server acted as the primary server for
all CMDB-related activities.
The BMC Remedy ITSM or BMC Service Request Management load tests and the BMC
Atrium CMDB or BMC Atrium Core load tests were conducted simultaneously to
simulate the mixed-mode environment. The BMC Remedy ITSM or BMC Service
Request Management test was run for an hour to ensure that the BMC Atrium CMDB
and BMC Atrium Core tests finished while the load test was still ongoing.
Variations for mixed mode test cases are summarized in Table 11.
Table 11. Mixed mode test cases for BMC Remedy ITSM, BMC Atrium CMDB,
and BMC Atrium Core
Results
This section describes the quantitative test results on the performance and scalability of
the BMC Remedy ITSM, BMC Service Request Management, and BMC Atrium CMDB
solutions, starting with the BMC Remedy ITSM and BMC Service Request Management
stand-alone test case.
The benchmarking response times do not include the more variable client-side
components of response times that a typical end user observes.
To quantify end-to-end user response times, results of manual stop-watch timings for a
few use cases across a WAN environment are presented in a later section.
The average response time for each user scenario represents the response times averaged
over all actions listed in Appendix B for each user scenario with a test period of one hour.
In all tests conducted, response times were acceptable, even with 5 times the nominal
workload transaction rate. Resource utilization for all mid-tier, AR, and DB servers
stayed well within acceptable limits.
The following charts show how BMC Remedy ITSM and BMC Service Request
Management respond to 1,200, 1,600, 2,000, and 2,500 concurrent users under the
nominal workload and under 5 times the nominal workload.
Chart 1. Response times comparison for nominal workload with varying users
for BMC Remedy ITSM use cases
Times in seconds:

Use case                            1200    1600    2000    2500
Open IM Console                     0.17    0.34    0.36    0.41
Search Incident By ID               0.06    0.20    0.21    0.20
Search Incident By Username         0.09    0.16    0.16    0.16
Create Incident (No CI/Redisplay)   0.34    0.87    0.85    0.85
Create Incident (CI/Redisplay)      0.73    0.94    0.94    0.95
Create Incident (CI/No Action)      0.11    0.74    0.73    0.73
Update Incident to Resolve          0.16    0.21    0.22    0.23
Open CM Console                     0.45    0.56    0.56    0.73
Save/Initiate Change                0.75    0.88    0.97    0.98
Search Change By ID                 0.16    0.21    0.21    0.23
Chart 2. Response time comparison for nominal workload with varying users for
BMC Service Request Management use cases
Times in seconds:

Use case                    1200    1600    2000    2500
Add Activity Log            0.14    0.24    0.24    0.24
View Services in Category   0.14    0.21    0.21    0.28
Open Request Console        0.19    0.22    0.23    0.24
Create SR w/ mapping        0.34    0.59    0.62    0.66
Create SR w/o mapping       0.78    0.96    0.99    1.02
View SR                     0.56    0.72    0.71    0.72
View Quick Picks            0.28    0.34    0.34    0.35
Search by Keyword           0.19    0.26    0.27    0.28
Chart 3. Response time comparison for 5xnominal workload with varying users
for BMC Remedy ITSM use cases
Times in seconds:

Use case                            1200    1600    2000    2500
Open IM Console                     0.35    0.35    0.37    0.39
Search Incident By ID               0.19    0.20    0.20    0.21
Search Incident By Username         0.15    0.15    0.16    0.16
Create Incident (No CI/Redisplay)   0.93    0.93    0.97    1.06
Create Incident (CI/Redisplay)      0.99    1.02    1.04    1.21
Create Incident (CI/No Action)      0.79    0.80    0.85    0.94
Update Incident to Resolve          0.23    0.24    0.26    0.29
Open CM Console                     0.55    0.55    0.56    0.60
Save/Initiate Change                1.04    1.05    1.11    1.29
Search Change By ID                 0.19    0.22    0.23    0.23
Chart 4. Response time comparison for 5xnominal workload with varying users
for BMC Service Request Management use cases
Times in seconds:

Use case                    1200    1600    2000    2500
Add Activity Log            0.23    0.25    0.26    0.30
View Services in Category   0.19    0.20    0.20    0.23
Open Request Console        0.26    0.28    0.31    0.37
Create SR w/ mapping        0.61    0.66    0.71    0.83
Create SR w/o mapping       1.15    1.24    1.33    1.52
View SR                     0.81    0.84    0.90    0.98
View Quick Picks            0.39    0.43    0.48    0.62
Search by Keyword           0.30    0.33    0.36    0.43
Tables 12 and 13 summarize the transaction data created or modified and the total
number of searches executed in the system during one-hour simulations of all the
relevant scalability test runs under varying workloads.
Table 12. Transaction data created/modified/searched for all nominal workload
runs

Entry type                 6,000 user   9,000 user   10,000 user   12,500 user
                           workload     workload     workload      workload
Incidents created          1,923        2,340        3,102         3,915
Incidents modified         2,498        3,335        4,172         5,207
Changes created            119          155          198           245
Service Requests created   5,561        7,389        9,177         11,539
Add Activity Log entries   715          948          1,182         1,466
The following charts compare the scalability test runs for nominal workloads versus five
times nominal workloads.
Chart 5. CPU utilization comparison for mid tier server, AR System server, and
DB server tiers for all nominal runs and 5xnominal runs
[Bar chart: CPU utilization (%) for Nominal MT, 5xNominal MT, Nominal AR,
5xNominal AR, Nominal DB, and 5xNominal DB runs.]
Chart 6. Memory utilization comparison for mid tier server, AR System server,
and DB server tiers for all Nominal runs and 5xnominal runs
[Bar chart: memory utilization (%) for Nominal MT, 5xNominal MT, Nominal AR,
5xNominal AR, Nominal DB, and 5xNominal DB runs.]
Observations indicated that two servers each in the mid tier and AR server tiers would
easily scale up to a 10,000-user equivalent workload mix without any issues. Going
beyond a 10,000-user workload in the reduced environment was not pursued due to
constraints in the project timeline.
The following charts characterize how BMC Remedy ITSM and BMC Service
Request Management respond to 1,200, 1,600, and 2,000 users under 5 times the
nominal workload, using only two servers each at the web tier and the application
tier, respectively:
Chart 7. Response times for BMC Remedy ITSM use cases with reduced hardware
and 5xnominal workload

Times in seconds:

Use case                            1200    1600    2000
Open IM Console                     0.37    0.39    0.52
Search Incident By ID               0.18    0.20    0.26
Search Incident By Username         0.16    0.17    0.22
Create Incident (No CI/Redisplay)   0.96    0.99    1.58
Create Incident (CI/Redisplay)      1.05    1.13    1.72
Create Incident (CI/No Action)      0.81    0.89    1.49
Modify Incident to Resolve          0.26    0.27    0.44
Open CM Console                     0.55    0.63    0.69
Save/Initiate Change                1.03    1.17    2.20
Search Change By ID                 0.20    0.21    0.29
Chart 8. Response times for BMC Service Request Management use cases with
reduced hardware and 5xnominal workload

Times in seconds:

Use case                    1200    1600    2000
Add Activity Log            0.26    0.29    0.48
View Services in Category   0.19    0.21    0.30
Open Request Console        0.34    0.45    0.89
Create SR w/ mapping        0.68    0.77    1.54
Create SR w/o mapping       1.26    1.38    2.47
View SR                     0.87    0.94    1.65
View Quick Picks            0.47    0.61    1.18
Search by Keyword           0.38    0.49    0.90
Response times for each of the BMC Remedy ITSM and BMC Service Request
Management use cases under consideration were compared between reduced and
regular hardware used in the benchmarking environment and are shown in the
following charts:
Chart 9. Response times comparison for BMC Remedy ITSM scenarios for 1,200
users and 5xnominal workload between regular versus reduced hardware
Times in seconds:

Use case                            1200 Reduced HW   1200 Regular HW
Open IM Console                     0.37              0.35
Search Incident By ID               0.18              0.19
Search Incident By Username         0.16              0.15
Create Incident (No CI/Redisplay)   0.96              0.93
Create Incident (CI/Redisplay)      1.05              0.99
Create Incident (CI/No Action)      0.81              0.79
Modify Incident to Resolve          0.26              0.23
Open CM Console                     0.55              0.55
Save/Initiate Change                1.03              1.04
Search Change By ID                 0.20              0.19
Chart 10. Response times comparison for BMC Remedy ITSM scenarios for
1,600 users and 5xnominal workload between regular versus reduced hardware
Times in seconds:

Use case                            1600 Reduced HW   1600 Regular HW
Open IM Console                     0.39              0.35
Search Incident By ID               0.20              0.20
Search Incident By Username         0.17              0.15
Create Incident (No CI/Redisplay)   0.99              0.93
Create Incident (CI/Redisplay)      1.13              1.02
Create Incident (CI/No Action)      0.89              0.80
Modify Incident to Resolve          0.27              0.24
Open CM Console                     0.63              0.55
Save/Initiate Change                1.17              1.05
Search Change By ID                 0.21              0.22
Chart 11. Response times comparison for BMC Remedy ITSM scenarios for 2000
users and 5xnominal workload between regular versus reduced hardware
Times in seconds:

Use case                            2000 Reduced HW   2000 Regular HW
Open IM Console                     0.52              0.37
Search Incident By ID               0.26              0.20
Search Incident By Username         0.22              0.16
Create Incident (No CI/Redisplay)   1.58              0.97
Create Incident (CI/Redisplay)      1.72              1.04
Create Incident (CI/No Action)      1.49              0.85
Modify Incident to Resolve          0.44              0.26
Open CM Console                     0.69              0.56
Save/Initiate Change                2.20              1.11
Search Change By ID                 0.29              0.23
Chart 12. Response times comparison for BMC Service Request Management
scenarios for 1,200 users and 5xnominal workload between regular versus
reduced hardware
Times in seconds:

Use case                    1200 Reduced HW   1200 Regular HW
Add Activity Log            0.26              0.23
View Services in Category   0.19              0.19
Open Request Console        0.34              0.26
Create SR w/ mapping        0.68              0.61
Create SR w/o mapping       1.26              1.15
View SR                     0.87              0.81
View Quick Picks            0.47              0.39
Search by Keyword           0.38              0.30
Chart 13. Response times comparison for BMC Service Request Management
scenarios for 1,600 users and 5xnominal workload between regular versus
reduced hardware
Times in seconds:

Use case                    1600 Reduced HW   1600 Regular HW
Add Activity Log            0.29              0.25
View Services in Category   0.21              0.20
Open Request Console        0.45              0.28
Create SR w/ mapping        0.77              0.66
Create SR w/o mapping       1.38              1.24
View SR                     0.94              0.84
View Quick Picks            0.61              0.43
Search by Keyword           0.49              0.33
Chart 14. Response times comparison for BMC Service Request Management
scenarios for 2,000 users and 5xnominal workload between regular versus
reduced hardware
Times in seconds:

Use case                    2000 Reduced HW   2000 Regular HW
Add Activity Log            0.48              0.26
View Services in Category   0.30              0.20
Open Request Console        0.89              0.31
Create SR w/ mapping        1.54              0.71
Create SR w/o mapping       2.47              1.33
View SR                     1.65              0.90
View Quick Picks            1.18              0.48
Search by Keyword           0.90              0.36
Observations indicated that the percentage increase in resource utilization when the
benchmark architecture was reduced to two AR System server and two Mid Tier server
nodes was in line with expectations. No adverse deviation in either memory or CPU
utilization was observed in any tier. The variations in resource utilization in each tier
are summarized as follows:
• DB tier: Both CPU and memory utilization stayed about the same for both
environments, with CPU variations ranging from 2% to 3% and memory variations
staying below 1%.
Chart 15. CPU utilization comparison for mid tier server, AR System server, and
DB server between regular hardware and reduced hardware
[Bar chart: CPU utilization (%) for MT Regular HW, MT Reduced HW, AR Regular HW,
AR Reduced HW, DB Regular HW, and DB Reduced HW.]
Chart 16. Memory utilization comparison for mid tier server, AR System server,
and DB server between regular hardware and reduced hardware
[Bar chart: memory utilization (%) for MT Regular HW, MT Reduced HW, AR Regular
HW, AR Reduced HW, DB Regular HW, and DB Reduced HW.]
Regular Create CI
In this test, each instance was created individually, with one create-instance API call
issued per CI.
Bulk Create CI
In this test, all the instances in the data model were created at once using the following
new API:
CMDBUtil.CMDBCreateMultipleInstances(ARServerUser context,
java.lang.String datasetId, CMDBInstance[] instanceList);
This functionality was introduced in 7.5 and significantly increases throughput by taking
advantage of the bulk insert functionality of the database. CMDB Instance objects are
created on the client side and placed in the instance list array. Once all objects have been
created, the API call is invoked to create these objects in the database. In this test
scenario, all the class instances are created in one call, and all the relationships in another.
So, 50 CIs are created for each bulk call.
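The client-side pattern is straightforward: accumulate the objects locally, then hand the whole list to a single call, once for the class instances and once for the relationships. Since the BMC Java API is not reproduced here, the sketch below models the batching pattern with a generic consumer standing in for the bulk API call; all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the bulk-create pattern: build instances client-side, then
// submit everything buffered in one round trip, as the benchmark did
// with the bulk create API.
class BulkSubmitter<T> {
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> bulkCall; // stands in for the bulk API call
    int callsMade = 0;

    BulkSubmitter(Consumer<List<T>> bulkCall) { this.bulkCall = bulkCall; }

    void add(T instance) { buffer.add(instance); }

    // One round trip for everything buffered so far.
    void flush() {
        if (!buffer.isEmpty()) {
            bulkCall.accept(new ArrayList<>(buffer));
            callsMade++;
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        BulkSubmitter<String> cis = new BulkSubmitter<>(batch ->
                System.out.println("bulk call with " + batch.size() + " instances"));
        for (int i = 0; i < 50; i++) cis.add("CI-" + i); // 50 CIs per iteration
        cis.flush(); // a single call, as in the Bulk Create CI test
    }
}
```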
Test results for the CMDB batch job create CI functionality for both regular and bulk
mode are displayed in Chart 17.
Chart 17. Throughput for Create CI batch jobs for varying dataset volume
[Bar chart: Create CI throughput in CIs/sec (approximately 480-580) for regular and
bulk mode at the 1M and 2M CI dataset volumes.]
System resource utilizations during the Create CI batch jobs for different tiers are shown
in Chart 18.
Chart 18. System resource utilization for Create CI batch jobs for varying
dataset volume
                      1M AR   2M AR   1M AR   2M AR   1M DB   2M DB   1M DB   2M DB
                      CPU%    CPU%    Mem%    Mem%    CPU%    CPU%    Mem%    Mem%
Create CI - Regular   15.45   31.27   8.20    8.87    3.60    5.00    7.13    7.16
Create CI - Bulk      21.76   37.24   8.75    9.12    7.70    10.50   7.19    7.19
NE batch job
For all NE batch jobs, the throughput was measured in CIs/sec and was calculated with
the following formula:
Throughput (CIs/sec) = Total number of CIs normalized / Elapsed time of the batch job in seconds
Test results in terms of total throughput derived from this formula are shown in the
chart below for the different CI volumes under consideration.
Chart 19. Throughput and timing for NE batch jobs for varying dataset volume
NE Batch Jobs Throughput & Timings for Varying Dataset Volume
[Bar chart: throughput in CIs/sec (up to about 400) for the 500K, 1M, and 2M dataset
volumes.]
System resource utilization during NE batch jobs for different tiers is shown in Chart 20.
Chart 20. System resource utilization for NE batch jobs for varying dataset
volume
[Bar chart: AR CPU%, AR Mem%, DB CPU%, and DB Mem% during NE batch jobs for
each dataset volume.]
RE batch job
For all RE batch jobs, the throughput was measured in CIs/sec for each activity (that is,
Identification and Merge) separately for a given job, and the average throughput of the
complete reconciliation activity was calculated as:
Average throughput (CIs/sec) = Total CIs / (Identification time + Merge time)
Test results for RE batch jobs for both Identification and Merge activities together are
shown in Chart 21.
Chart 21. Throughput for RE (Identification & Merge activity) batch jobs for
varying dataset volume
Throughput in CIs/sec:

           Identification   Merge   Average (ID+Merge)
500K CI    315              92      71
1M CI      295              76      60
2M CI      290              65      54
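The Average (ID+Merge) column follows from the per-activity rates: the same CIs pass through Identification and then Merge, so the combined rate is the harmonic combination of the two. A sketch of the arithmetic (class and method names are illustrative):

```java
class ReThroughput {
    // N CIs take N/idRate seconds to identify and N/mergeRate seconds to
    // merge, so the combined rate is N / (N/idRate + N/mergeRate), which
    // simplifies to the expression below.
    static double combined(double idRate, double mergeRate) {
        return 1.0 / (1.0 / idRate + 1.0 / mergeRate);
    }

    public static void main(String[] args) {
        // 500K-CI run from Chart 21: 315 CIs/sec identify, 92 CIs/sec merge
        System.out.printf("%.1f CIs/sec%n", combined(315, 92)); // 71.2 CIs/sec
    }
}
```

The result matches the 71 CIs/sec reported for the 500K-CI run, and the same calculation reproduces the 1M and 2M rows to within rounding.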
System resource utilization during RE batch jobs for each of the Identification and
Merge activities for different tiers is shown in Charts 22 and 23.
Chart 22. System resource utilization for RE-Identification batch jobs for varying
dataset volume
[Bar chart: AR CPU%, AR Mem%, DB CPU%, and DB Mem% during RE Identification
batch jobs for each dataset volume.]
Chart 23. System resource utilization for RE-Merge batch jobs for varying
dataset volume
[Bar chart: AR CPU%, AR Mem%, DB CPU%, and DB Mem% during RE Merge batch
jobs for each dataset volume.]
Average system utilization for all RE Batch jobs for both activities (Identification and
Merge) together is shown in Chart 24.
Chart 24. System Resource Utilization for RE Batch jobs for varying dataset
volume
[Bar chart: average AR CPU%, AR Mem%, DB CPU%, and DB Mem% across complete
RE batch jobs for each dataset volume.]
In this scenario, all twelve Class Exchanges were configured on a single BMC Atrium
Integration Engine instance, and the exchanges were run sequentially. The throughput of
the individual exchanges was measured, and the average throughput was calculated by
using the following formula:
Average Throughput = SUM (Records transferred for each Class Exchange) /
SUM (Completion time for each Class Exchange)
In this scenario, all twelve Relationship Exchanges were configured on a single BMC
Atrium Integration Engine instance, and the exchanges were run in sequence. The
throughput of the individual Exchange runs was measured, and the total throughput was
calculated with the same formula: total records transferred divided by the sum of the
completion times.
In this scenario, the BMC_PRODUCT Class Exchange was split into two exchanges and
run in parallel. The throughput of the individual Exchange run was measured and total
throughput was calculated as the total number of records transferred (for both exchanges)
divided by time required for the longest running Exchange.
Average Throughput = SUM (Records transferred for each Class Exchange) / Response
time for the longest running Class Exchange
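Both formulas are simple aggregates over the per-exchange measurements: serial runs divide total records by total elapsed time, while parallel runs divide total records by the time of the slowest exchange. A sketch of the arithmetic (class, method names, and the sample record counts and times are illustrative, not measured values):

```java
class ExchangeThroughput {
    // Serial: exchanges run back to back, so elapsed time is the sum.
    static double serial(long[] records, double[] seconds) {
        long totalRecords = 0;
        double totalTime = 0;
        for (long r : records) totalRecords += r;
        for (double s : seconds) totalTime += s;
        return totalRecords / totalTime;
    }

    // Parallel: exchanges overlap, so elapsed time is the longest one.
    static double parallel(long[] records, double[] seconds) {
        long totalRecords = 0;
        double maxTime = 0;
        for (long r : records) totalRecords += r;
        for (double s : seconds) maxTime = Math.max(maxTime, s);
        return totalRecords / maxTime;
    }

    public static void main(String[] args) {
        long[] recs = {120_000, 80_000}; // hypothetical per-exchange record counts
        double[] secs = {300.0, 100.0};  // hypothetical completion times in seconds
        System.out.println(serial(recs, secs));   // 200000 / 400 = 500.0
        System.out.println(parallel(recs, secs)); // 200000 / 300 ~ 666.7
    }
}
```

Splitting a large exchange, as was done with BMC_PRODUCT, pays off exactly when the resulting pieces finish in less time than the original single exchange, since only the longest piece counts toward elapsed time.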
Table 14 summarizes the class exchange distribution across BMC Atrium Integration
Engine instances in two sets.
Table 14. Class Exchange Distribution across Instances for parallel run
Test results for all BMC Atrium Integration Engine batch jobs together are shown
in Chart 25.
Chart 25. Throughput for BMC Atrium Integration Engine serial and parallel
batch jobs
[Bar chart: throughput in CIs/sec (up to about 600) for Class Exchange serial, Class
Exchange parallel, and Relationship Exchange serial runs.]
Resource utilization for all BMC Atrium Integration Engine batch jobs are characterized
in Chart 26.
Chart 26. System resource utilization for BMC Atrium Integration Engine batch
jobs
[Bar chart: AR CPU%, AR Mem%, DB CPU%, and DB Mem% during BMC Atrium
Integration Engine batch jobs.]
Total time taken by the BMC Atrium CMDB and BMC Atrium Core continuous mode
tests from start to end for all mixed mode scenarios is shown in Chart 27.
Chart 27. Total time taken by BMC Atrium CMDB and BMC Atrium Core
Continuous mode for all mixed mode runs
[Bar chart: time in minutes (up to about 40) for the 1200 User ITSM & 100K CI and
2500 User ITSM & 50K CI mixed mode runs.]
In general, the impact on the BMC Remedy ITSM or BMC Service Request Management
response times was in line with the increased resource utilization on the AR System
servers and the DB server due to simultaneous execution of BMC Atrium CMDB or
BMC Atrium Core continuous mode jobs.
The following charts help assess the impact of simultaneous BMC Atrium CMDB continuous
mode jobs on a 100K CI BMC Remedy ITSM and BMC Service Request Management
load test workload for 1,200 users:
Chart 28. Response time comparison between BMC Remedy ITSM stand-alone
versus Mixed runs for 1200 users nominal workload (times in seconds)

Use case                              1200 Users Standalone   1200 Users Mixed Mode
Open IM Console                       0.17                    0.498
Search Incident By Username           0.06                    0.252
Search Incident By ID                 0.09                    0.291
Create Incident (No CI / Redisplay)   0.34                    1.106
Create Incident (CI / Redisplay)      0.73                    1.211
Open CM Console                       0.11                    0.995
Create Change (No Action)             0.16                    0.32
Save Change (Initiate)                0.45                    1.261
Update Incident to Resolve            0.75                    1.458
Search Change By ID                   0.16                    0.298
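The mixed-mode overhead implied by the Chart 28 numbers can be summarized as a per-use-case slowdown factor. The snippet below is a derived calculation on the response-time values above, not a figure reported by the benchmark itself:

```python
# Response times in seconds from the Chart 28 data rows (1200-user workload).
standalone = [0.17, 0.06, 0.09, 0.34, 0.73, 0.11, 0.16, 0.45, 0.75, 0.16]
mixed = [0.498, 0.252, 0.291, 1.106, 1.211, 0.995, 0.32, 1.261, 1.458, 0.298]

# Slowdown factor per use case: mixed-mode time / stand-alone time.
factors = [m / s for s, m in zip(standalone, mixed)]
print([round(f, 1) for f in factors])
```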
Chart 29. Response time comparison between BMC Service Request Management
stand-alone versus Mixed runs for 1200 users nominal workload (times in seconds)

Use case                    1200 Users Standalone   1200 Users Mixed Mode
View SR                     0.14                    0.319
Add Activity Log            0.14                    0.438
Create SR w/ mapping        0.19                    0.25
Create SR w/o mapping       0.34                    1.013
View Services in Category   0.78                    1.579
Open Request Console        0.56                    1.087
View Quick Picks            0.28                    0.443
Search by Keyword           0.19                    0.317
Impact on resource utilization during the mixed mode runs is compared in the charts
below against the normal stand-alone runs of BMC Remedy ITSM or BMC Service
Request Management:
Chart 30. CPU load variation between BMC Remedy ITSM or BMC Service
Request Management stand-alone versus Mixed runs for 1200 users workload
[Chart data: CPU%, scale 0 to 10, for MT CPU%, AR CPU%, and DB CPU%]
Chart 31. Memory usage variation between BMC Remedy ITSM or BMC Service
Request Management stand-alone versus Mixed runs for 1200 users workload
[Chart data: Mem%, scale 0 to 14, for MT Mem%, AR Mem%, and DB Mem%]
All transaction data created and/or modified in the system for the mixed mode test runs is
summarized in Table 15.
Table 15. BMC Remedy ITSM or BMC Service Request Management transaction
data created during mixed mode runs for varying test scenarios
The response times collected using Internet Explorer version 7 are summarized in Table
16.
Table 16. Manual end user response times over WAN connection
Network latency varied between 65 and 70 ms while recording the values using a VPN
connection over the Internet.
Mid tier settings

Table 17. Mid tier settings

Parameter                          Settings
JVM Min Heap                       2 GB
JVM Max Heap                       2 GB
Max Pool Size Per Server           625
Definition Change Check Interval   1 day
Tomcat max threads                 1,000
Tomcat accept count                300
The Max Pool Size Per Server parameter can be set to 400 to achieve comparable
response times. Since the servers used in the benchmarking environment had 24 GB of
RAM available, the Max Pool Size Per Server was set to 625 to accommodate a 2500
virtual user workload without any latency. Setting this parameter higher did not yield any
better response times.
AR server settings
Configuration settings recommended for all AR server instances are summarized in Table
18.
Table 18. AR server settings
DB server settings
Configuration settings recommended for the Oracle database server are summarized in
the table below:
Table 19. DB server settings
Parameter              Settings
Cursor Sharing         Similar
memory_target          15G
memory_max_target      15G
Disk Async IO          TRUE
filesystemio_options   SETALL
• The Automatic Memory Management (AMM) feature of Oracle 11g was enabled to
manage the SGA and PGA memory structures by setting the MEMORY_TARGET
parameters.
• The Oracle ASM feature was used to manage the disk I/O subsystem.
• The LOB storage option was changed to IN_ROW for the following SRM
application-related tables: CFG:Broadcast and SRD:ServiceRequestDefinition.
Best practice guidelines published by Dell were referenced to configure the database
server and the I/O subsystem. White papers published by Dell can be accessed from
http://www.dell.com/oracle. Configurations related to Dell and Oracle can be found at
this URL by clicking on the “Dell Validated Components” link. Related white papers can
be accessed from the “Whitepapers” link.
Number of Relationships per thread per iteration: 50, 50, 50, 50
Number of iterations: 334, 668, 102, 204

Parameter                   Value
Chunk Size                  50
Number of Threads (Batch)   15
Continuous Job              No
• To maximize throughput for NE batch jobs, BMC recommends caching the
BMC:BaseElement table in the database before starting an NE batch job. Because the
batch jobs are intended mostly for first-time production roll-out, caching the
BMC:BaseElement table in memory speeds up the total normalization timeline. After
the NE batch job completes, the table can be altered back to the "nocache" option if
the same AR System server is intended to be used for other applications.
Parameter                             Value
Definition Check Interval (Seconds)   300
Polling Interval (Seconds)            300
Continuous Job                        No
Qualification Set                     None
Appendix A
Type                             Volume
Incident                         581,750
Change                           13,000
Service Target                   1,625
CI                               2,000,000
People to CI Association         23,000
Contracts to CI Association      3,000
Incidents with CI Associations   52,000
Service CI                       2,000

Foundation data:

Type                      Volume
AOT                       50
PDT                       200
SRD                       1,000
Navigational Categories   612
Service Requests          77,500
Work Order                77,500
Entitlement Rules         400
Data Setup
Application object template (AOT)
• 10 Incident
• 15 Change
• 25 Work Order
• All global
• No template
Navigational Category
• 10 Tier 1 values
• 10 Tier 2 values
• 5 Tier 3 values
Process definition template (PDT)
• At least 1 AOT associated with a PDT
Appendix B
Table 30. Use Case 3: Create Incidents with Service CI Related with Redisplay
Current after submit action.
Table 31. Use Case 4: Create Incidents with No CI Related with Redisplay
Current after submit action.
Table 35. Use Case 1: Add Activity Log Entry to Existing Service Request
Table 38. Use Case 4: Create Service Request with 6 Questions and 2 Field
Mappings
Table 39. Use Case 5: Create Service Request with 6 Questions and No Mapping
Table 40. Use Case 6: View Request Entry Quick Pick Link
Appendix C
Oracle ASM Disk Group   Dell EqualLogic Storage Volume                    Linux partitions                             Linux file format
DATABASEDG              2 RAID 10 volumes of 150 GB each (300 GB total)   1 partition on each volume presented to OS   Block device
FLASHBACKDG             1 RAID 10 volume of 100 GB                        1 partition on each volume presented to OS   Block device
Appendix D
BMC Remedy Mid Tier 7.5 Patch 3 / Java 1.6.0_13 / Tomcat 5.5.25
Appendix E
PowerEdge R710
The Dell PowerEdge R710 is designed to be the cornerstone of today's competitive
enterprise. Engineered in response to input from IT professionals, it is the next-generation
2U rack server created to efficiently address a wide range of key business applications.
The successor to the PowerEdge 2950 III, the R710 runs the Intel Xeon 5500/5600 Series
Processors and helps lower the total cost of ownership with enhanced virtualization
capabilities, improved energy efficiency, and innovative system management tools.
PowerEdge R610
Inspired by customer feedback, the Intel-based Dell PowerEdge R610 server is
engineered to simplify data center operations, improve energy efficiency, and lower total
cost of ownership. System commonality, purposeful design, and service options combine
to deliver a 1U rack server solution that can help better manage the enterprise.