Siebel Systems, Inc.

Siebel Application
Response Monitoring
Summary
Overview of Siebel Application Response Monitoring
Architecture of Siebel Application Response Monitoring 7.5.3
    Overview
    SARM Areas of Instrumentation
    Request Execution Path Example
Enabling SARM
    Enabling SARM on the Web Server
    Enabling SARM on the Siebel Server
SARM Analyzer Tool
    SARM Analyzer Syntax
    Performance Area Aggregation Analysis
        Example Output
    Call Map Generation
        Example Output
    User Session Trace
        Example Output
    SARM Binary File to CSV Conversion
Best Practices
Summary
This Technical Note discusses the architecture and usage of Siebel Application Response
Measurement (SARM), a service introduced as part of Siebel 7.5.3. SARM is
modeled after ARM, the industry standard for application response time measurement.
SARM can be used in both pre- and post-production systems to identify and troubleshoot
response time problems.
This Technical Note provides basic examples of SARM usage and describes the currently
supported methods of utilizing SARM.
[Architecture diagram: a request flows from the Web Server (SWSE) through the Name
Server and Resonate to the Siebel Server, where the instrumented areas are the Server
Thread, SWE, Workflow, Scripting Engine, and the connection to the Siebel application
database. SarmAnalyzer.exe processes the SARM data written by the instrumented areas.]
Area (Number)              Sub-Areas
SWSE (1)                   Login; SWE Request; Session Manager
Server Thread (SMI) (3)    Request Handling
DB Connector (4)           Execute; Query; Prepare Statement; Write Record; Fetch Next Record
Scripting Engine (5)       VB Script Execute; VB Script Compilation; eScript Execution; eScript Compilation
Workflow (7)               Invoke Method; Process Resume; Process Init; Step Execution
SWE (8)                    Process SWE Command; Build View

Table 1
Area: SarmIO
SarmIO measures the time it takes to write the SARM data from memory to disk.
Area: Siebel Web Server Extension (SWSE)
SWSE measures the time duration between entry to the SWSE and messages being sent to the
Siebel server. Time spent in the SWSE includes the Siebel Gateway and Resonate time.
Sub-area: Login
Time spent to request a user login.
Sub-area: SWE Request
Time for the SWSE to handle a request.
Sub-area: Session Manager
Time for the Session Manager to handle a request.
Area: Server Thread (SMI)
Server Thread is the area in the Siebel architecture that handles all Siebel server requests. This is
the entry point of a request from the web server to the Siebel server. The time indicates the
duration it takes the Siebel server to handle a request.
Sub-area: Request Handling
Time to handle a request on the Siebel server side
2003 Siebel Systems, Inc
Area: SWE
The Siebel Web Engine (SWE) executes within the context of the Siebel Object Manager.
Therefore, any time spent in the SWE is a subset of the total Siebel Object Manager time.
Sub-area: Process SWE Command
Time it takes to process a request submitted to SWE
Sub-area: Build View
The SWE assembles the Siebel View (web page). The Object Manager then
sends it to the Siebel Web Server Extension running on the Web Server, which
passes the web page on to the client. This metric reflects the time it takes to
assemble/build the view.
1. The first response time instrumentation point once the request is submitted from the
browser to the web server is the SWSE.
2. The SWSE submits the request to the SWE, which is the second instrumentation point
for the request.
3. The request invokes a script and a workflow process, and it accesses the Siebel database to
retrieve the account information. Based on the path of the request, SARM captures
response time information for the Scripting Engine, DB Connector, and Workflow process.
4. The total response time for the Siebel Server to handle the request is captured by
the Server Thread.
Enabling SARM
SARM is controlled through Siebel server parameters and environment variables. To enable
SARM, three parameters need to be enabled for both the web server via environment variables
and the Siebel Server via the Server Manager. When a component starts up in the Siebel
Enterprise, it will check for the status of the SARM parameters.
There are three parameters that control the behavior of SARM:
SARMEnabled: indicates whether SARM is enabled or disabled for a Siebel Server
Component. It is a boolean value (true | false). The default value is false. This
parameter can be set at the Siebel Server or Siebel Server Component level.
SARMMaxMemory: SARM 7.5.3 uses a shared memory segment to store the data
collected from the Siebel Server Components. Once the in-memory data size reaches a
threshold defined by the parameter SARMMaxMemory, SARM will write the data to a file
on the local disk subsystem. The default value is 500000 (about 0.5 MB) and is specified in
bytes.
SARMMaxFileSize: Specifies how large a file gets before SARM will start a new file.
SARM will continue to append file segments to the current file until the specified size is
reached. When the file limit is reached, SARM will start a new file. The default value is
20000000 (about 20 MB) and is specified in bytes.
SARM is disabled by default. To enable SARM, set the SARMEnabled parameter to true. This
can be done on the Web Server, the Siebel Server, or both. When enabling SARM it is important
to also consider the appropriate settings for the SARMMaxMemory and SARMMaxFileSize
parameters since these will determine how soon SARM flushes its data to disk, and how large the
SARM files will be. Recommendations for setting these parameters are discussed later in this
section.
SIEBEL_SARMEnabled = true
SIEBEL_SarmMaxMemory = 20000
SIEBEL_SarmMaxFileSize = 400000

4. Note that the equal signs indicate the values that the environment variables should be set
to. The optimal values vary by specific deployment and may be different than depicted here.
On Windows, the machine needs to be restarted so that the settings can take effect.
Once the values have been modified the web server needs to be re-started for the changes to
take effect. To disable SARM once it has been enabled on the web server, simply set the
SIEBEL_SARMEnabled variable to false and restart the web server process/service.
Note: If the web server and the Siebel Server are running on the same machine, the maximum
memory and file size settings on the web server will override the values of the Siebel Server. However, in
most testing and production environments, the web server and Siebel Servers are running on
different machines, so the likelihood of this scenario occurring is minimal.
To enable SARM on the Siebel Server using the Siebel Server Manager Graphical User Interface:
1. Go to Site Map > Server Administration > Servers > Server Parameters
2. In the Server Parameters List Applet, query for SARM*.
3. Update the values of SARM Data File Size Limit, SARM Enabled, and SARM Memory Size
Limit accordingly.
4. Stop and re-start the Siebel server for the new values to take effect.
Independent of the mode in which the SARM Analyzer is used, to get help type:
On a Windows environment, enter:
SARMAnalyzer -help
On a Unix environment, enter:
sarmanalyzer -help
Usage 1
Executing the SARM Analyzer executable aggregates the contents of a SARM binary file by area
of instrumentation. The output file will always be sarm.xml and it will be placed in the location
from where the command is executed.
Command Syntax:
On Windows: SARMAnalyzer -f <sarm file>
On Unix:
sarmanalyzer -f <sarm file>
Output File Name: sarm.xml
Example on Windows: SARMAnalyzer -f S01_P20862_N0002.sarm
Usage 2
Executing the SARM Analyzer executable aggregates the contents of a SARM 7.5.3 binary file by
area of instrumentation; in this case, by piping the output to a file, the output file will have the
name specified and will be stored in the location specified in the command.
Command Syntax:
On Windows: SARMAnalyzer -f <sarm file> > <some_file_name.xml>
On Unix:
sarmanalyzer -f <sarm file> > <some_file_name.xml>
Output File Name: some_file_name.xml
Example: SARMAnalyzer -f S01_P20862_N0002.sarm > %HOME%/P20862_N0002_area_agr.xml
Description:
SARM Analyzer provides the capability to perform grouping against the performance data
captured in SARM files. A single XML file, sarm.xml, is generated upon the successful execution
of the SARM Analyzer. The sarm.xml output file is stored in the current working directory. When
running the SARM Analyzer, the full path of the SARM binary files has to be specified if the files
are not in the same directory as the SARM Analyzer executable.
SARM data is grouped based on the areas of instrumentation, that is: web server (SWSE), Server
Thread, Siebel Web Engine, Workflow, Scripting Engine, and Database Connector.
Example output schema:
<Group> *
<Name>
<ResponseTime>
<Total>
<Average>
<NonRecursiveCalls>
<RecursiveCalls>
<Max>
<Min>
<ExecutionTime>
<Parents>
<Children>
Group: refers to each area that is instrumented by SARM. Performance data is captured
for the webserver (SWSE), Server threads, Database Connector, Scripting Engine,
Workflow and Siebel Web Engine.
Response Time: also called inclusive time in most commercial profiling tools. It is the
time spent for a request between entering and exiting an instrumentation area.
a. Total: Total time spent on a request between entering and exiting an
instrumentation area.
b. Average: average response time for a request. This is calculated by dividing
the total time (Total) by the number of requests (NonRecursiveCalls).
c. NonRecursiveCalls: the number of root-level (non-recursive) calls made into
the instrumentation area.
d. RecursiveCalls: One of the key features of the tool is the capability to handle
recursion. For example, if a workflow step calls an Object Manager function,
which also invokes another workflow step, then there is recursion in workflow.
Considering the number of times the workflow layer is called, there are two
relevant metrics: RecursiveCalls and NonRecursiveCalls. In this case,
RecursiveCalls is 1 and NonRecursiveCalls is also 1. When calculating the
response time, only the root-level call is accounted for. When calculating
execution time, both calls are accounted for.
e. Max: maximum time for a request between entering and exiting an
instrumentation area.
f. Min: minimum time for a request between entering and exiting an instrumentation
area.
Execution Time: It is often called exclusive time in most commercial profiling tools. It
is the total time spent in a particular instrumentation area only, not including the time
spent in the descendant layers.
a. Total: total time spent executing within the instrumentation area itself.
b. Average: average execution time within the instrumentation area per request.
c. Max: maximum execution time for a request within the instrumentation area.
d. Min: minimum execution time for a request within the instrumentation area.
Parent: parent of the group. This information helps identify the caller of a group and the
total time and number of calls the group contributed to its parent's response time.
a. Name: name of the parent group. A group is an area of instrumentation.
b. Total Contributing Time: total time a group contributed to the parent's total
time. For example, if the SWSE calls the Object Manager (OM) and OM spends a
total of 10 seconds, then the Total Contributing Time is 10. If the Scripting Engine
also calls OM, and OM spends 40 seconds when called by the Scripting Engine,
then the Contributing Time Percentage of the SWSE to OM is 20%. This is
calculated as (Total Contributing Time / Total OM Time) * 100%, or ((10/50) * 100%)
= 20%. The Contributing Time of Scripting to OM in this case would be 80%, or
((40/50) * 100%) = 80%.
c. Contributing Time Percentage: the percentage of the parent's total time that
the group contributed, as illustrated above.
Children: Children refer to the areas called by a parent group. A user can drill into a
group's children information to determine the response time breakdown within each
child. By drilling down into the children's information, the user can find potential
performance bottlenecks.
a. Name: Name of the child group.
b. TotalContributedTime: total time a child group contributed to the parent's total
response time. The sum of all children's contributed times (response time) added
to the area's execution time should equal the total response time for the area.
c. Calls: the number of calls the parent made into the child group.
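The Parent/Children accounting above can be sketched numerically. The following is an illustrative Python fragment using the hypothetical SWSE/OM numbers from the Total Contributing Time example (10 seconds contributed via the SWSE, 40 via the Scripting Engine):

```python
# Contribution percentages, using the hypothetical SWSE/OM numbers from
# the text: OM time invoked from the SWSE is 10 s, OM time invoked from
# the Scripting Engine is 40 s, so total OM time is 50 s.
contributed = {"SWSE": 10.0, "Scripting Engine": 40.0}  # seconds
total_om_time = sum(contributed.values())

for caller, seconds in contributed.items():
    pct = seconds / total_om_time * 100
    print(f"{caller}: {pct:.0f}%")  # SWSE: 20%, Scripting Engine: 80%
```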
Example output
The data is displayed in nanoseconds as the sample was taken from a Unix machine.
- <xml>
- <Group>
<Name>SMI</Name>
- <ResponseTime>
<Total>325577844947</Total>
<Average>1839422852</Average>
<NonRecursiveCalls>177</NonRecursiveCalls>
<RecursiveCalls>0</RecursiveCalls>
<Max>133062957179</Max>
<Min>3293465</Min>
</ResponseTime>
+ <ExecutionTime>
<Parents />
+ <Children>
</Group>
- <Group>
<Name>Database</Name>
- <ResponseTime>
<Total>28846037763</Total>
<Average>2804943</Average>
<NonRecursiveCalls>10284</NonRecursiveCalls>
<RecursiveCalls>106</RecursiveCalls>
<Max>3623108101</Max>
<Min>47397</Min>
</ResponseTime>
+ <ExecutionTime>
+ <Parents>
+ <Children>
</Group>
- <Group>
<Name>SarmIO</Name>
- <ResponseTime>
<Total>756465475</Total>
<Average>6200536</Average>
<NonRecursiveCalls>122</NonRecursiveCalls>
<RecursiveCalls>0</RecursiveCalls>
<Max>181488478</Max>
<Min>730255</Min>
</ResponseTime>
+ <ExecutionTime>
+ <Parents>
<Children />
</Group>
- <Group>
<Name>SWE</Name>
- <ResponseTime>
<Total>167202095979</Total>
<Average>966486103</Average>
<NonRecursiveCalls>173</NonRecursiveCalls>
<RecursiveCalls>16</RecursiveCalls>
<Max>51087996109</Max>
<Min>141423</Min>
</ResponseTime>
+ <ExecutionTime>
+ <Parents>
+ <Children>
</Group>
- <Group>
<Name>Scripting Engine</Name>
- <ResponseTime>
<Total>42078467851</Total>
<Average>825067997</Average>
<NonRecursiveCalls>51</NonRecursiveCalls>
<RecursiveCalls>0</RecursiveCalls>
Figure 1
1. First, view all the groups and find out which one has the highest ResponseTime. By
looking at Figure 1 (XML file), note that the Server Thread is the entry point to the Siebel
server (it doesn't have a parent group) and it took about 326 seconds.
a. Server Thread = 326 seconds (325,577,844,947 nanoseconds)
b. Database = 29 seconds (28,846,037,763)
c. SWE = 167 seconds (167,202,095,979); Scripting Engine = 42 seconds
(42,078,467,851); SarmIO = 0.8 seconds (756,465,475)
2. When compared to the rest of the groups, the request spent most of its time in the
Server Thread area; therefore the Server Thread information requires further analysis.
Also note the following information:
a. On average, it took 1.8 seconds for a Server Thread request to be processed.
b. The maximum time it took to process a Server Thread request was 133 seconds.
c. The minimum time it took to process a Server Thread request was .003 seconds.
It is suspicious that a given Server Thread request took much longer than the
average (133 vs. 1.8 seconds).
3. Next, look at the children group of the Server Thread and find out which child took the
longest to process (TotalContributedTime):
- <xml>
- <Group>
<Name>SMI</Name>
+ <ResponseTime>
+ <ExecutionTime>
<Parents />
- <Children>
- <ChildGroup>
<Name>Database</Name>
<TotalContributedTime>10052385093</TotalContributedTime>
<Calls>7378</Calls>
<Average>1362481</Average>
<PercentageTime>5.65</PercentageTime>
<PercentageCalls>96.62</PercentageCalls>
Figure 6
a. From Figure 6, SWE's contribution time was the highest, with 167 seconds vs. 10
seconds for Database and .6 seconds for SarmIO.
b. Note that of the total number of calls made to the children groups, only 2.27% of
the calls were made to SWE: (SWE Calls / (Database Calls + SarmIO Calls +
SWE Calls)) * 100%, or (173 / (7378 + 85 + 173)) * 100%.
However, even though only 2.27% of the calls were made to SWE, those calls
accounted for 93.96% of the response time within the children's group:
(SWE TotalContributedTime / (Database TotalContributedTime + SarmIO
TotalContributedTime + SWE TotalContributedTime)) * 100%
or (167202095979 / (10052385093 + 695242267 + 167202095979)) * 100%
These findings further indicate that there are very few calls within the SWE child
group (173), but the percentage of time spent on those SWE calls was very high
(93.96%). Therefore additional analysis should be done on the SWE group to
isolate the performance problem.
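The two ratios in step 3 can be reproduced directly from the sample figures quoted above for the Server Thread's children (a quick Python check):

```python
# Recreates the child-group ratios from the Server Thread (SMI) analysis,
# using the call counts and contributed times from the sample SARM output.
calls = {"Database": 7378, "SarmIO": 85, "SWE": 173}
time_ns = {"Database": 10_052_385_093, "SarmIO": 695_242_267, "SWE": 167_202_095_979}

pct_calls = calls["SWE"] / sum(calls.values()) * 100   # share of calls to SWE
pct_time = time_ns["SWE"] / sum(time_ns.values()) * 100  # share of time in SWE
print(round(pct_calls, 2), round(pct_time, 2))  # 2.27 93.96
```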
4. Look at the SWE group in more detail and specifically expand the children's groups.
- <xml>
+ <Group>
+ <Group>
+ <Group>
- <Group>
<Name>SWE</Name>
- <ResponseTime>
<Total>167202095979</Total>
<Average>966486103</Average>
<NonRecursiveCalls>173</NonRecursiveCalls>
<RecursiveCalls>16</RecursiveCalls>
<Max>51087996109</Max>
<Min>141423</Min>
</ResponseTime>
- <ExecutionTime>
<Total>173968409607</Total>
<Calls>189</Calls>
Figure 7
a. Total response time is calculated by adding the parent's own execution time to
the sum of the children's contributed time. In this case, SWE's execution time is:
SWE's ExecutionTime = SWE ResponseTime - (Database ContributedTime +
SarmIO ContributedTime + SWE ContributedTime +
Scripting Engine ContributedTime + Workflow ContributedTime)
Note: In the example above (Figure 7) and in the current 7.5.3 version of SARM, the
parent's execution time is not displayed correctly. In order to calculate the parent's
execution time, subtract the children's total contributed time from the parent's total
response time.
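The workaround in the note can be sketched as follows, using the child contribution figures from the sample output (Python; values in nanoseconds):

```python
# Workaround from the note above: derive the parent's execution time by
# subtracting the children's total contributed time from the parent's
# total response time (values in ns, taken from the sample SARM output).
swe_response_ns = 167_202_095_979
children_contributed_ns = {
    "Database": 18_540_207_197,
    "SarmIO": 55_622_335,
    "SWE": 34_393_218_236,
    "Scripting Engine": 42_078_467_851,
    "Workflow": 375_294_777,
}
swe_execution_ns = swe_response_ns - sum(children_contributed_ns.values())
print(swe_execution_ns)  # 71759285583
```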
b. Identifying the percentage of time a child group contributed to the parent's total
response time helps to illustrate which child contributed the most to the parent's
total response time. This information helps in identifying whether additional
investigation needs to be done on a particular child.
Child Area          Total Contributed Time   % of SWE Total Response Time (167,202,095,979)
Database            18,540,207,197           11.09%  ((18,540,207,197 / 167,202,095,979) * 100%)
SarmIO              55,622,335                0.03%  ((55,622,335 / 167,202,095,979) * 100%)
SWE                 34,393,218,236           20.57%  ((34,393,218,236 / 167,202,095,979) * 100%)
Scripting Engine    42,078,467,851           25.17%  ((42,078,467,851 / 167,202,095,979) * 100%)
Workflow            375,294,777               0.22%  ((375,294,777 / 167,202,095,979) * 100%)

% of SWE Response Time spent on its children:
                    95,442,810,396           57.08%  (11.09% + 0.03% + 20.57% + 25.17% + 0.22%)
SWE's Execution Time:
                    71,759,285,583           42.92%  ((71,759,285,583 / 167,202,095,979) * 100%)

Table 2
a. From Figure 7 and Table 2, the total contributed time for each of the children's
areas within the SWE group is:
Database = 19 seconds (18,540,207,197) (see underlined numbers)
SWE Total Response Time = 167,202,095,979
Figure 8
a. Notice that the maximum response time is around 40 seconds whereas the
minimum response time is 0.00085 seconds, and the average is 0.825 seconds.
The maximum and minimum response times are very far apart, and probably the
maximum response time value is making the average number look much higher.
If the maximum response time is subtracted from the total and the average
calculated, a more realistic average time is derived.
(Total Response Time - Max) / (NonRecursiveCalls - 1) = Adjusted Average
(42,078,467,851 - 40,459,460,508) / 50 = 32,380,146 (0.032 seconds)
b. By calculating this new average, it is apparent that on an average, the execution
of the scripts was efficient. It should be noted that these numbers should be
compared to a base line and are always relative.
c. It is also noticeable that one of the scripts took a very long time. By looking at
the file, the name of the script is not known, but more investigation can occur by
looking at the CSV file (see the CSV conversion section for additional explanation).
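The outlier-adjusted average from step (a) can be recomputed as a quick check (Python; values in nanoseconds from the Scripting Engine sample):

```python
# Excluding the single 40-second outlier to get a more representative
# average script-execution response time (values in ns, from the sample).
total_ns = 42_078_467_851
max_ns = 40_459_460_508
non_recursive_calls = 51

adjusted_avg_ns = (total_ns - max_ns) // (non_recursive_calls - 1)
print(adjusted_avg_ns)  # 32380146 ns, i.e. about 0.032 seconds
```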
Node: Each instance of an instrumented area is a node. Each node can have zero to
many nodes as its descendants.
ParentID: A unique number representing the caller of an instrumentation point within the
same request. The caller is another instrumented area.
RootID: A unique number assigned to a request submitted from the SWSE to the Siebel
Server. RootID is also known as Request ID.
Area: Instrumentation element within the Siebel architecture. The seven elements that
have been instrumented to collect response time information are: SarmIO, SWSE, Server
Thread, SWE, Workflow, Scripting Engine and Database Connector.
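A minimal sketch of how these fields fit together: each row below is a node, ParentID links it to its caller, and every node in a request shares the RootID of that request. The rows and areas are illustrative only, modeled on the call-map output rather than read from a real SARM file.

```python
# Each row is a node: one instance of an instrumented area. "parent"
# holds the ParentID (the caller's node ID) and "root" holds the RootID
# (the originating SWSE request). Illustrative rows, not real SARM data.
rows = [
    {"id": 10400, "parent": None, "root": 10400, "area": "Server Thread"},
    {"id": 10401, "parent": 10400, "root": 10400, "area": "SWE"},
    {"id": 10403, "parent": 10401, "root": 10400, "area": "Scripting Engine"},
]

by_id = {r["id"]: r for r in rows}
children = {}
for r in rows:
    children.setdefault(r["parent"], []).append(r["id"])

lines = []
def walk(node_id, depth=0):
    # Depth-first walk renders the call map with one indent per level.
    lines.append("  " * depth + by_id[node_id]["area"])
    for child in children.get(node_id, []):
        walk(child, depth + 1)

for root_id in children.get(None, []):
    walk(root_id)
print("\n".join(lines))
```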
Example output
The data is displayed in nanoseconds as the sample was taken from a Unix machine.
Description
SARM Analyzer expects one SARM binary file from the SWSE on the Web Server, and one
SARM binary file from the Siebel Server.
The XML output file contains detailed information on each of the SWSE requests that the user
has made. If the user has logged into the system multiple times, then the output will show that
there are multiple sessions. The SWSE requests are grouped into specific login sessions, and
sorted by the time the requests were made.
Example Output:
<Session>
<SessionID>
<LoginName>
<SWERequests>
<SWERequest>*
<ReqID>
<ClickID>
Server ID = !1
Process ID = 2b40 Represents the Operating System Process ID 11072
Task ID = 182b Represents the Siebel Task ID 6187
ClickID: Used to associate multiple requests to a single user action, such as 'click on a
button'.
RequestBody: Total time to handle a request from the web server to the Siebel server.
Detail timing is grouped by web server, infrastructure, Siebel server and database time.
TotalServerTime: total time on the server (includes web server, Siebel server, and
network time).
WebServerTime: total time spent on the web server for a given request.
InfraTime: Infrastructure Time. Total time spent between the web server and the Siebel
server. This may also include some Siebel infrastructure time routing the request to the
handling Siebel server task
ReqTimeStamp: Request Time Stamp. This is the time when the request was made.
DatabaseTime: Total database Time. Database time includes the time spent on the
network when communicating to the database.
SiebsrvrDetail: Total time by instrumentation area within the Siebel server. The areas
of instrumentation within the Siebel server are: server thread, SWE, workflow, scripting
engine and database connector.
Example output
The data is displayed in nanoseconds as the sample originates from a Unix machine.
Figure 10
1. By looking at Figure 10, notice that there are multiple sessions for the same user.
2. To run session tracing, both the web server file and the Siebel Server file are used.
However, there may be multiple SARM files on the server (one set of SARM files per
process). To identify which SARM file to use to correlate SWE requests with server
requests to construct a user's session trace, follow the instructions below:
a. Concatenate the web server SARM files.
In a Windows environment, use any utility to concatenate the files.
In a Unix environment, use the following command:
cat <list of files> >> <filename.sarm>
Example
cat S01_P11072_N0001.sarm >> sum.sarm
cat S01_P11072_N0003.sarm >> sum.sarm
cat S01_P11072_N0004.sarm >> sum.sarm
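On either platform, the same concatenation can be done with a short Python sketch (the file-name pattern below follows the example above; adjust it to your environment):

```python
# Cross-platform alternative to the `cat` commands above: append every
# SARM segment matching a pattern, in name order, to one output file.
import glob

def concatenate_sarm(pattern, out_path):
    """Concatenate all files matching `pattern` into `out_path`."""
    with open(out_path, "wb") as out:
        for name in sorted(glob.glob(pattern)):
            with open(name, "rb") as segment:
                out.write(segment.read())

# Example, using the hypothetical file names from the text:
# concatenate_sarm("S01_P11072_N*.sarm", "sum.sarm")
```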
Table 3
c. Look for all the records that have the property value of the username, in our case
sadmin. Right below the sadmin property name is the session ID of the user in
hexadecimal. This value needs to be converted to decimal.
!1.2b40.182b
Server ID = !1
Process ID = 2b40
Task ID = 182b
d. Take the second piece of information, such as !1.2b40.182b, and convert it to decimal
(11072). The Process ID 11072 will correspond to the Siebel Server Process ID
(SARM file). Here it shows the OM Process ID for the user that is having problems.
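The hexadecimal-to-decimal conversion in steps c and d is simple to script (Python, using the example session ID !1.2b40.182b from the text):

```python
# Split the example session ID and convert the hex pieces to decimal.
session_id = "!1.2b40.182b"
server_id, process_hex, task_hex = session_id.split(".")

process_id = int(process_hex, 16)  # operating system process ID
task_id = int(task_hex, 16)        # Siebel task ID
print(process_id, task_id)  # 11072 6187
```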
e. Run the user session trace SARMAnalyzer w <websrvr sarm file> -s <siebsrvr
sarm file> -u <username> to trace all the requests the user has made and identify
potential performance problems using the process id identified in the previous step.
f. Open the output file from the User Trace (sadmin.xml). By finding the time the user
was logged in when the problem occurred, the appropriate session ID to investigate can
be obtained. Usually the user will not have logged in too many times (as in the case of
Figure 11
h. Use a process similar to the aggregation analysis to identify which area in the Siebel
architecture had a high response time. Additional analysis using the CSV file would be
needed to identify which specific sub-areas are the problem areas.
Table 4
a. From Table 4, the following information should be highlighted:
SERVER THREAD (area =3) took 133 seconds. Note that the ID and RootID
have the same value of 1. A RootID of 1 represents the starting point of
all requests. The first SERVER THREAD operation having a high response
time does not appear too suspicious since it could be attributed to the system
warming up and caching information for subsequent requests.
Server Thread (SMI), SWE, Script Engine, SarmIO, SWE Plug-in took nearly
40 seconds. All of these areas have the same RootID of 10400, which
Table 5
a. From Table 5, notice that line 9857 has an ID and RootID of 10400, which
means that this is the starting point of the request. In this case, Server Thread
(area =3) was the first area of instrumentation for this request.
b. Notice the values of the ParentID and ID columns and walk through the call
hierarchy:
1. 10400 calls 10401
2. 10401 calls 10403
3. 10403 calls 10404
4. 10404 calls 10498
c. Now examine the values of the Area and SubArea columns to find out which Siebel
infrastructure areas were called:
1. 10400 calls 10401 Server Thread:Request Handling (Area 3, SubArea 1)
called
SWE:Process SWE Command (Area 8, SubArea 1)
2. 10401 calls 10403 SWE:Process SWE Command (Area 8, SubArea 1)
called
Scripting Engine:eScript Execution (Area 5, SubArea 3)
3. 10403 calls 10404 Scripting Engine:eScript Execution (Area 5, SubArea 3)
called
Workflow:Invoke Method (Area 7, SubArea 1)
4. 10404 calls 10498 Workflow:Invoke Method (Area 7, SubArea 1) called
Workflow:Process Init (Area 7, SubArea 2)
d. In addition, the objects that were called can be identified by looking at the UserString
column (M). For example, ID 10403 calls the BusComp_PreSetFieldValue
eScript code and ID 10498 calls the ABC SARM Test workflow.
e. Notice that the Duration for each of the operations is over 40 seconds. This is
because the parent operation includes the time of its children. Therefore each
operation in the call chain reports the time of the operations it invoked.
Because operation 10498 (line 9940) takes 40.136 seconds, it is evident that the
problem resides somewhere in the ABC SARM Test Workflow, which requires
further analysis.
Table 6
c.
d. By looking at lines 9943, 9958, 9961 and 9964, it is evident that there were
10-second delays in each Wait step and that the same step was called four times.
This is the root cause of the performance problem.
Note: Improperly placed wait times are a common cause of workflow performance
problems.
By using the data provided from the aggregation analysis in conjunction with data
from the CSV file, a user can view response time information for individual elements
within a request and immediately understand and isolate the location of the problem.
Best Practices
Below is a set of recommendations for using SARM.
When carrying out performance analysis, a baseline should be used. Siebel recommends that
administrators take a snapshot of their system at a given point in time. This snapshot will be
used to compare system behavior and identify performance problems. Administrators can also
write additional tools to filter, aggregate, and compute the data, to help diagnose any potential
problems.
The performance aggregation data is useful for diagnosing performance data at a given point in
time. The administrator can look at the data by group and diagnose the area that is resulting in
poor performance. After having a high level view of the performance data, more detailed
information can be extrapolated by running the SARMAnalyzer CSV option and looking at the
details of each area to identify the root of the problem. The performance aggregation data is also
useful for doing trend analysis over a period of time.
As discussed, at most there will be four SARM files per process. The correct SARM file will
depend on what the administrator is looking for. If the administrator wants to see the
performance aggregation data for a given point in time, then SARM Analyzer should be run
against a single SARM file (Note that if the maximum memory size (buffer size) is too small, the
time stamp of the SARM files may be very close in time). If on the other hand the administrator
wants to see the data for multiple requests for a given process, then it is recommended to
concatenate the SARM files into a single file and run SARMAnalyzer against that single file.
There may be situations where SARM shows that most of the time spent in a request takes place
within SWE. Although SARM does not provide further detail on SWE, the administrator should
check the complexity of the Siebel Views in the application. One way to identify complex or non-performant Siebel Views is to perform the following steps:
1. Identify which views appear to be slow, based on user feedback or performance and
scalability testing
2. Define a usage scenario that involves calling the slow view(s)
3. Enable SARM
4. Modify the configuration of the slow view(s)
5. Run SARM Analyzer to get the output described in this section
6. If the SWE time increases or decreases, the effect of the configuration change on
performance can be seen directly.
7. If the feedback came from a user, run the Session Trace described in this paper for
more precise validation.
It is recommended to monitor performance activity during the testing and user acceptance stage
to detect incipient problems before they have an adverse effect. In some cases, when SARM is
used to monitor performance continuously, the user may detect a situation that requires additional
data collection or explicit recreation of the problem to collect additional analysis data. Further
analysis of the SARM data can be done by writing additional post-processing tools.
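As one example of such a post-processing tool, the sketch below scans a CSV export from SARM Analyzer and flags operations whose duration exceeds a threshold. The column names (ID, Area, Duration) are assumptions based on the walk-through earlier in this note; match them to your actual CSV header.

```python
# Simple post-processing sketch: flag rows in a SARM CSV export whose
# Duration (in nanoseconds) exceeds a threshold. Column names are
# assumed from the walk-through in this note, not guaranteed.
import csv

def slow_rows(csv_path, threshold_ns):
    """Yield (ID, Area, Duration) for every row slower than threshold_ns."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            duration = int(row["Duration"])
            if duration > threshold_ns:
                yield row["ID"], row["Area"], duration

# Example: list every operation slower than 10 seconds.
# for op_id, area, duration in slow_rows("sarm_export.csv", 10_000_000_000):
#     print(op_id, area, duration)
```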
Summary
SARM provides performance metrics such as cumulative and average response times, enabling
the user to diagnose response time information in the Siebel architecture. SARM captures
response time information based on different areas in the Siebel architecture, showing the actual