Professional Documents
Culture Documents
Version History
Date        Version   Author   Change Description
01/07/2011  1.0       AS/PS    Original
01/10/2011  1.1       AS/PS    After review with AT&T
01/22/2011  1.2       AS/PS    Populate test results
Document References
Description                                                                    Version
Hardware_Estimations_for_ATT_-_LTE_v1_15_PCRF_site_PS_model.doc                1.15
Cisco HA Architecture for ESC 0.16.doc                                         0.16
Cisco Policy Manager (PCRF) Business Integration Analysis (BIA)                0.8
pLTE SBP Dual APN TOL 010411.xlsx - Policy Performance_EDP4 tab                010411
01-04_01.03.13.PM__3G_Subs_PCRF_500k_Perf3_1800TPS__RID-33.xls (Spirent sample report)
Contents
INTRODUCTION ..... 5
    Purpose ..... 5
    Goals ..... 5
    Stakeholders ..... 5
    Scope ..... 5
    Terminology ..... 6
    Roles and Responsibilities ..... 8
PROJECT SUMMARY ..... 9
    Description ..... 9
    Test Environment ..... 10
    Hardware and Software ..... 11
        PCRF and DB Hosts ..... 11
        SAN ..... 11
        Software ..... 12
    Testing Approach ..... 13
        Description ..... 13
        Measurement ..... 13
        Logging ..... 13
        Statistics ..... 13
    Test Execution ..... 14
        Performance Tests ..... 14
        Capacity Tests ..... 15
    Test Scenarios ..... 16
        Performance Test 1 - Gx TPS on PHONE APN ..... 16
INTRODUCTION
Purpose
Performance and capacity testing of the PCRF solution in a production-like setup, where the PCRF servers are connected to real network elements on all external interfaces.
Goals
The goal of this testing exercise is to meet the performance targets set out in the contract. Performance targets fall into three categories:
- Transactions per second
- Latency
- Number of concurrent sessions
Performance target details for each testing scenario are provided in the Test Scenarios section.
Stakeholders
Cisco AS
Openet Services
AT&T CTO Group
Scope
Identify whether the performance criteria are met. Collect all standard performance test metrics, including:
- Throughput in Transactions Per Second (TPS on Gx, Sy, and Sp)
- Latency (Gx, SPR Lookup, and LDAP interface average latencies), plus Sy latency from the NetScout raw data, if available
- Machine statistics (I/O statistics, memory usage, CPU usage, etc.)
- Number of concurrent subscriber sessions
Collection, evaluation, and definition of performance-related test objectives and data for nodes other than the PCRF application servers and PCRF database servers are out of scope.
Terminology
AAA: Authentication, Authorization, and Accounting
APN: Access Point Name
ATTM: AT&T Mobility
AVP: Attribute Value Pair
BM: Balance Manager, the network element that stores subscriber volume usage counters
CCA: Credit Control Answer (PCRF -> PCEF), with types corresponding to the request type: CCA-I, CCA-U, CCA-T
CCR: Credit Control Request (PCEF -> PCRF), of three types: initial (CCR-I), update (CCR-U), and terminate (CCR-T)
CLI: Command-Line Interface
CSG: Cisco Content Services Gateway
CSG2: Second-Generation Cisco Content Services Gateway
Diameter: A networking protocol for AAA; a successor to RADIUS
DPE: Dynamic Provisioning Environment
FFA: First Field Application (formerly FOA, First Office Application)
IMS: IP Multimedia Subsystem (used at ATTM for VSC)
JRD: Joint Requirements Document
LDAP: Lightweight Directory Access Protocol
MAG: Mobile Application Gateway platform by Openwave
MIND: Master Integrated Network Directory, an Openwave LDAP server that stores subscriber information at AT&T Mobility
MRC: Monthly Recurring Charge, postpaid subscribers that had been migrated to the Balance Manager for monthly volume cap tracking
Netcool: SNMP management system used by ATTM for SNMP traps
NGG: Next Generation Gateway platform by Ericsson
OCS: Online Charging Service
OCG: Openet Charging Gateway
OFCS: Offline Charging Service
PCC: Policy and Charging Control
PCEF: Policy and Charging Enforcement Function (used interchangeably with eGGSN for the purposes of this document)
PCRF: Policy and Charging Rules Function (used interchangeably with Cisco Policy Manager for the purposes of this document)
P-CSCF: Proxy Call Session Control Function (part of IMS)
PDP: Packet Data Protocol
PO: Postpaid, a billing plan that uses offline charging
PR: Prepaid, a billing plan that uses online, real-time charging
QoS: Quality of Service
RAA: Re-Authentication Answer (PCEF -> PCRF, as ACK only)
RADIUS: Remote Authentication Dial-In User Service (an AAA protocol)
RAR: Re-Authentication Request (PCRF -> PCEF)
SBP: Session Based Pricing
SBP pLTE: SBP pre-LTE migration of existing 3G SBP services
SBP ST: SBP Speed-Tiers, the next phase of the SBP/PCRF functionality
SL: Smart Limits, a billing plan that uses online charging systems for subscriber usage limits enforcement, but is actually charged offline
SMPP: Short Message Peer-to-Peer protocol
SMSC: Short Message Service Center
SNMP: Simple Network Management Protocol
SOAP: Simple Object Access Protocol
SPR: Subscriber Profile Repository
XML: eXtensible Markup Language
VCS: Veritas Cluster Server by Symantec
VSC: Video Share Calling
Roles and Responsibilities

Gagan Kumar
Tom Nguyen
Jiming Shen
Mo Miri
Arghya Mukherjee
Landon Holy
PROJECT SUMMARY
Description
The Cisco Policy Manager will interface with:
- the SPR system, over the Sp interface, to perform subscriber profile retrieval
- the PCEF system, over the Gx interface, to perform policy enforcement
- the counter store, over the Diameter Sy interface
[Figure: Cisco Policy Manager interfaces - labels: Provisioning, Policies, PAS, VPN, Sub Info, PCRF, Session & Sub Info, PCEF]
Performance testing of the Policy Manager is focused on the performance of the Diameter protocol messages (TPS and latency) over the Gx Interface.
Test Environment
[Figure: test environment topology]
Traffic will be driven by 10 Spirent servers toward 3 Calico zones, each containing 3 P-GW and 3 CSG2 active blades (9 of each in total). 3G messaging will use the Gn interface between Spirent and the P-GW; LTE messaging will use the S11 interface between Spirent and the S-GW. The 3 service zones, with a total of 9 CSG2 peers, will evenly distribute requests across 5 PCRF application servers. Each of the 5 PCRF application servers will connect to a load balancer for MIND queries, with 2 active MIND servers behind the load balancer. The 5 PCRF application servers will connect to 4 Sy Balance Manager peers to send AAR/STR requests and to 4 Sy Balance Manager peers from which to receive Sy RARs; all Balance Manager peer instances run on 2 physical servers. Only control-plane traffic is generated by the Spirent; CCR-Us may be generated by simulating PLMN or RAT change events. CCR-U TPS is difficult to apply consistently across all tests, so the expected CCR-U TPS will be evenly distributed between CCR-I and CCR-T. For example, a 1800/400/1800 (CCR-I/CCR-U/CCR-T) traffic flow becomes 2000/2000 (CCR-I/CCR-T).
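The CCR-U redistribution described above amounts to folding half the CCR-U rate into each of the CCR-I and CCR-T rates, preserving the total offered load. A minimal sketch (the function name is illustrative, not part of the test tooling):

```python
def fold_ccr_u(ccr_i_tps, ccr_u_tps, ccr_t_tps):
    """Distribute the expected CCR-U rate evenly between CCR-I and CCR-T.

    The total offered TPS is preserved; only the message mix changes,
    which is easier to drive consistently from the Spirent.
    """
    half = ccr_u_tps / 2
    return ccr_i_tps + half, ccr_t_tps + half

# The example from the text: 1800 / 400 / 1800 becomes 2000 / 2000.
print(fold_ccr_u(1800, 400, 1800))  # (2000.0, 2000.0)
```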
SAN
Software
All hosts run Solaris 10 10/09. Database servers run Oracle Database Enterprise Edition in a RAC configuration with Oracle Grid, all version 11.2.0.1. PCRF application servers run the Oracle 11.2.0.1 client and Java JDK 6 Update 20 (both 32- and 64-bit). PCRF application servers have the following Cisco Policy Manager software and patches installed:

Timestamp, Action, Version, Status
Thu Jan 06 12:36:39 EST 2011, EndInstall, FW_6.1B2734, SUCCESS
Thu Jan 06 12:36:51 EST 2011, EndPatch, FW_6.1.0.6B1, SUCCESS
Thu Jan 06 12:36:58 EST 2011, EndPatch, FW_6.1.0.9B1, SUCCESS
Thu Jan 06 12:37:04 EST 2011, EndPatch, FW_6.1.0.14B1, SUCCESS
Thu Jan 06 12:37:10 EST 2011, EndPatch, FW_6.1.0.35B1, SUCCESS
Thu Jan 06 12:37:16 EST 2011, EndPatch, FW_6.1.0.45B1, SUCCESS
Thu Jan 06 12:46:07 EST 2011, EndInstall, PM_3.0.2B79, SUCCESS
Thu Jan 06 12:46:20 EST 2011, EndPatch, FW_6.1.0.25B2, SUCCESS
Thu Jan 06 12:46:27 EST 2011, EndPatch, FW_6.1.0.30B4, SUCCESS
Thu Jan 06 12:46:39 EST 2011, EndPatch, PM_3.0.2.2B1, SUCCESS
Thu Jan 06 12:46:45 EST 2011, EndPatch, PM_3.0.2.3B1, SUCCESS
Thu Jan 06 12:49:21 EST 2011, EndInstall, PRDE_3.0.2, SUCCESS
Thu Jan 06 12:51:02 EST 2011, EndUpgrade, PRDE_3.0.2.1, SUCCESS
Thu Jan 06 12:51:09 EST 2011, EndPatch, PRDE_3.0.2.2B10, SUCCESS
Thu Jan 06 12:52:43 EST 2011, EndUpgrade, PRDE_3.0.2.3, SUCCESS
Measurement
Spirent measurements are provided in Appendix A Statistics; a sample XLS report file is also referenced in Document References. Round-trip latencies (Gx, Sp, Sy) are measured by analyzing binary network traces taken on the respective interfaces by NetScout. Product latencies are measured from collected product statistics. All latency figures are averages unless otherwise stated. Database measurements are obtained via AWR snapshots. Transaction Audit Logging (TAL) disk space consumption is recorded after each run, and the TAL partition is then cleaned.
Logging
The Unified Logs will be written to the Logging Database. Log levels will be set to ERROR, FATAL and WARN for the test run. The same logging configuration is expected to be used in production.
Statistics
Please see Appendix A Available Product and Custom Statistics. The AT&T requirement for statistics periodicity in production is 900 seconds. In order to collect a sufficient number of samples, the statistics period will be reconfigured to 60 seconds for this performance exercise and then reverted to 900 seconds upon completion.
Performance Tests

The following Sy TPS stages apply to all Sy performance tests. It is expected that the Gx TPS performance scenarios will generate sufficient Sy TPS, so separate test runs will not be required to reach the Sy TPS targets:
- Generate 250 TPS (125 TPS AAR, 125 TPS STR) for 15 minutes, collect data, then stop.
- Generate 500 TPS (250 TPS AAR, 250 TPS STR) for 15 minutes, collect data, then stop.
- Generate 750 TPS (375 TPS AAR, 375 TPS STR) for 15 minutes, collect data, then stop.
- Generate 1000 TPS (500 TPS AAR, 500 TPS STR) for 15 minutes, collect data, then stop.
After the component has been configured, Spirent Landslide will generate GTP-C messages on either the Gn or the S11 interface, depending on the particular test scenario. The following high-level procedure will be used for each test run:
1. Verify the test setup at low TPS
2. Turn down logging levels to ERROR/FATAL
3. Turn off session validation
4. Start the data collection script
5. Run traffic for 15 minutes for TPS tests and up to 30 minutes for capacity tests
6. Stop traffic
7. Collect results using the script kicked off in step 4: it collects the AWR report, product statistics for the test run, machine statistics output, and TAL disk utilization, and adds everything to a tar archive
8. Record the test run start/stop times and manually collect NetScout network traces for that period
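As a rough illustration of step 7, a collection script might bundle the per-run artifacts like this (a sketch only; the function name and file names are hypothetical, not the actual script):

```python
import tarfile
from pathlib import Path

def archive_run_results(run_name, artifact_paths, out_dir="."):
    """Bundle per-run artifacts (AWR report, product statistics, machine
    statistics output, TAL disk utilization) into one tar archive."""
    tar_path = Path(out_dir) / f"{run_name}.tar"
    with tarfile.open(tar_path, "w") as tar:
        for p in map(Path, artifact_paths):
            if p.exists():            # skip artifacts a run did not produce
                tar.add(p, arcname=p.name)
    return tar_path
```

A run would then be archived with, for example, `archive_run_results("perf1_gx_1000", ["awr.html", "iostat.out"])`.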
Capacity Tests
Subscriber sessions will be gradually created over a period of time, and measurement snapshots taken when the concurrent session targets for a particular stage have been reached. TPS numbers will be collected and reported, but the objective of this test is to reach the target numbers for concurrent subscriber sessions.
- Generate 1 Million concurrent sessions, verify the system is stable, then continue
- Generate 2 Million concurrent sessions, verify the system is stable, then continue
- Generate 3 Million concurrent sessions, verify the system is stable, then continue
- Generate 4 Million concurrent sessions, verify the system is stable, then continue
- Generate 5 Million concurrent sessions, collect performance data, then stop
The TPS used to reach 5 million concurrent users will be the highest possible rate for the lowest performing component, which is currently estimated at 1000 Gx CCR-I TPS. At this rate the test will take about 1 hour and 30 minutes to reach 5 million, and an equal amount of time to gracefully terminate all sessions.
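The quoted ramp time follows directly from the session target and the CCR-I rate (illustrative arithmetic only):

```python
# Time to build 5 million concurrent sessions at the estimated
# sustainable rate of the slowest component (1000 Gx CCR-I TPS).
target_sessions = 5_000_000
ccr_i_tps = 1000

ramp_seconds = target_sessions / ccr_i_tps   # 5000 s
print(ramp_seconds / 60)                     # ~83 minutes, i.e. roughly 1.5 hours
                                             # up, and the same again for a
                                             # graceful teardown of all sessions
```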
Test Scenarios
Performance Test 1- Gx TPS on PHONE APN
Objective: To reach target Gx TPS in 4 stages, and measure the Cisco Policy Manager and Oracle database performance with high TPS, without MIND and without Balance Manager Components: PCEF and PCRF only.
[Figure: test signal path - Spirent, S/PGW, CSG, CPM, Oracle, MIND, BalMgr]
Scenario Name: Gx TPS on PHONE APN
Interfaces: Gx
Objectives: Reach target Gx TPS in 4 stages
Policy Configuration: None, only Billing-Plan-Name returned
Threshold Type: TPS Threshold
Message Flow: CCR-I, CCR-T
TPS: Gx: 1000, Gx: 2000, Gx: 3000, Gx: 4000 (Gx: 8000 included in test, but beyond solution requirements)
APN: phone
Counters: none
[Table 1 fragment - Gx TPS on PHONE APN results: n/a, 38 ms, 44 ms, 38 ms, 44 ms, 67 ms; column headers not recovered]
For each test, system utilization is averaged over 30-second intervals; the Highest Average is the single highest 30-second average seen on any single system. For every 15-minute test described in this document, there are 330 sampled averages of each measurement across the platform, and the Highest Average is the worst of those 330 samples.
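The Highest Average metric can be sketched as follows (illustrative code, not the actual collection tooling; the sample counts are assumptions based on the text):

```python
def highest_average(samples, block_size=30):
    """Largest mean over consecutive 30-sample (30-second) blocks of
    1-second utilization samples from a single system."""
    blocks = [samples[i:i + block_size] for i in range(0, len(samples), block_size)]
    return max(sum(b) / len(b) for b in blocks if b)

# 15 minutes of 1-second CPU samples gives 30 block averages per system;
# summed across the platform's systems this yields the 330 samples above.
cpu = [10] * 870 + [90] * 30     # one hot 30-second block at the end
print(highest_average(cpu))      # 90.0
```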
Performance Test 2 - Gx TPS on BROADBAND APN, POST subscribers

Objective: To reach target Gx TPS in 4 stages and measure the Cisco Policy Manager and Oracle database performance at high TPS, with MIND but without Balance Manager. Components: PCEF, PCRF, and MIND.
[Figure: test signal path - Spirent, S/PGW, CSG, CPM, Oracle, MIND, BalMgr]
Scenario Name: Gx TPS on BROADBAND APN, POST subscribers
Interfaces: Gx, Sp
Objectives: Reach target Gx TPS in 4 stages
Policy Configuration: None, only Billing-Plan-Name returned
Threshold Type: TPS Threshold
Message Flow: CCR-I, LDAP search, CCR-T
TPS: Gx: 1000, Gx: 2000, Gx: 3000, Gx: 4000
APN: broadband
Counters: none
Decision Tables (Type / Size):
- Command Level: 15 Rows x 3 Input Cols
- APN Mapping: 10 Rows x 1 Input Col
- SPR Mapping: 12 Rows x 4 Input Cols
Investigation revealed that 9000 binds were being sent by the PCRF, one for each LDAP timeout, and this does not scale when the F5 or MIND has an issue. An issue has been opened with the development team. Both the F5 and MIND were reported to have several thousand open connections to the PCRFs; by design there should be only 1000. These excessive connections caused MIND and F5 outages. The LDAP search latency statistic is based on successful searches and does not reveal the timeouts. It was also discovered that the F5 in front of the MIND servers was adding 100 ms to each LDAP search. An issue has been opened with the F5 team.
Performance Test 3 - Gx and Sy TPS on BROADBAND APN, SBP subscribers (3G)

Objective: To reach target Gx TPS in 4 stages and measure the Cisco Policy Manager and Oracle database performance at high TPS, with MIND and with Balance Manager, using the 3G SBP policy. Components: PCEF, PCRF, MIND, and Balance Manager.
[Figure: test signal path - Spirent, S/PGW, CSG, CPM, Oracle, MIND, BalMgr]
Scenario Name: Gx and Sy TPS on BROADBAND APN, SBP subscribers (3G)
Interfaces: Gx, Sp, Sy
Objectives: Reach target Gx and Sy TPS in 4 stages
Policy Configuration: 2 Service Groups, 4 Services
Threshold Type: TPS Threshold
Message Flow: CCR-I, LDAP Search, AAR, Gx RAR, CCR-T, STR
TPS: Gx: 250 / Sy: 250; Gx: 500 / Sy: 500; Gx: 750 / Sy: 750; Gx: 1000 / Sy: 1000
APN: broadband
Counters: 2
Decision Tables (Type / Size):
- Command Level: 15 Rows x 3 Input Cols
- Service Status: 4 Rows x 3 Input Cols
- APN Mapping: 10 Rows x 1 Input Col
- SPR Mapping: 12 Rows x 4 Input Cols
In tests 500 and 750, the F5 team found a parameter that removes the 100 ms latency during LDAP searches. The LDAP search latency through the primary MIND VIP improved significantly, going from 107 ms to 5 ms, and this is reflected in the session connect times. In test 1000 we discovered that the secondary path to the MIND VIP, through the F5 OAM load balancer, also needed to be modified to remove the 100 ms latency; this is reflected in the higher session connect times. To accommodate the Sy signaling, the Spirent test case was modified to wait 10 seconds rather than 1 second before sending a disconnect. This increased the number of concurrent sessions for any 1-second measurement period.
Performance Test 4 - Gx and Sy TPS on BROADBAND APN, MRC subscribers (LTE)

Objective: To reach target Gx TPS in 4 stages and measure the Cisco Policy Manager and Oracle database performance at high TPS (1000 Gx and Sy), with MIND and with Balance Manager, using the 4G SBP policy. Components: PCEF, PCRF, MIND, and Balance Manager.
[Figure: test signal path - Spirent, S/PGW, CSG, CPM, Oracle, MIND, BalMgr]
Scenario Name: Gx and Sy TPS on BROADBAND APN, MRC subscribers (LTE)
Interfaces: Gx, Sp, Sy
Objectives: Reach target Gx and Sy TPS in 4 stages
Policy Configuration: 2 Service Groups, 4 Services, 18 QoS categories
Threshold Type: TPS Threshold
Message Flow: CCR-I, LDAP Search, AAR, Gx RAR, CCR-T, STR
TPS: Gx: 1000 / Sy: 1000
APN: broadband
Counters: 2
Decision Tables (Type / Size):
- Command Level: 15 Rows x 3 Input Cols
- Service Status: 4 Rows x 3 Input Cols
- APN Mapping: 10 Rows x 1 Input Col
- SPR Mapping: 12 Rows x 4 Input Cols
On 4_1000, the F5 OAM load balancer did not have the delayed-ACK timer disabled, and all LDAP searches were delayed by 100 ms. This increased the session connect time significantly.
Capacity Testing Scenarios

Capacity Test 1 - 5 Million Concurrent 3G Phone APN Users
Objective: To reach target concurrent sessions in 5 stages with PHONE APN (3G subscribers) and collect PCRF utilization statistics. Components: PCEF and PCRF and MIND and Balance Manager
[Figure: test signal path - Spirent, S/PGW, CSG, CPM, Oracle, MIND, BalMgr]
Scenario Name: Concurrent sessions on PHONE APN
Interfaces: Gx
Objectives: Reach target concurrent sessions in 5 stages
Policy Configuration: None, only Billing-Plan-Name returned
Threshold Type: Concurrent sessions
Message Flow: CCR-I, CCR-T
Capacity levels: 5 Million
APN: phone
Counters: 2
Decision Tables (Type / Size):
- Command Level: 15 Rows x 3 Input Cols
- APN Mapping: 10 Rows x 1 Input Col
- SPR Mapping: 12 Rows x 4 Input Cols
The test took about 2 hours to run, and a few short periods of network issues were seen, one of which created more than 40k session errors. Troubleshooting was not performed, and these 30 seconds were excluded from the Spirent connect time average. SGSN, CSG, and RADIUS retransmit timers need to be modified so as not to create a retransmission storm when latency is encountered. Cisco has an action item to review these settings in the lab.
Load Profile of Oracle Session Server #1 with 5 million 3G Phone APN Subscribers
                      Per Second    Per Transaction    Per Exec    Per Call
DB Time(s):                  0.6                0.0        0.00        0.00
DB CPU(s):                   0.4                0.0        0.00        0.00
Redo size:             491,817.8            4,222.9
Logical reads:           4,998.4               42.9
Block changes:           2,362.6               20.3
Physical reads:              0.5                0.0
Physical writes:           943.6                8.1
User calls:                234.2                2.0
Parses:                     17.8                0.2
Hard parses:                 0.1                0.0
W/A MB processed:            0.1                0.0
Logons:                      0.1                0.0
Executes:                  134.8                1.2
Rollbacks:                   4.2                0.0
Transactions:              116.5
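As a consistency check on the load profile, the Per Transaction column should equal the Per Second column divided by the transaction rate (illustrative arithmetic using the reported redo figures):

```python
# AWR load profile cross-check: Per Transaction ~= Per Second / Transactions.
redo_per_second = 491_817.8
transactions_per_second = 116.5

redo_per_txn = redo_per_second / transactions_per_second
print(round(redo_per_txn, 1))   # ~4221.6, matching the reported 4,222.9
                                # to within AWR rounding
```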
Capacity Test 2 - 5 Million Concurrent 3G and LTE Subscribers

[Figure: test signal path - MIND, BalMgr (remaining figure content not recovered)]
Scenario Name: Concurrent sessions on BROADBAND APN, MRC subscribers (LTE)
Interfaces: Gx, Sp, Sy
Objectives: Reach target concurrent sessions in 5 stages
Policy Configuration: 2 Service Groups, 4 Services, 18 QoS categories
Threshold Type: Concurrent sessions
Diameter Interactions: CCR-I, LDAP Search, AAR, Gx RAR, CCR-T, STR
Capacity levels: 1 Million; 2 Million; 3 Million; 4 Million; 5 Million
APN: broadband
Counters: 2
Decision Tables (Type / Size):
- Command Level: 15 Rows x 3 Input Cols
- Service Status: 4 Rows x 3 Input Cols
- APN Mapping: 10 Rows x 1 Input Col
- SPR Mapping: 12 Rows x 4 Input Cols
Capacity stages (subs attached, growing): 1,000,000; 2,000,000; 3,000,000; 4,000,000; 5,000,000
Measured at 5,000,000 sessions: Front End Highest Avg CPU 35 %, Front End Highest Avg Memory 10.5 GB, LDAP Search Latency (success) 55 ms, Sy Latency AAR/AAA 100 ms, Session Connect Time 82 ms
To achieve a 5 million subscriber mix of 3G and LTE, 3 million 3G subscribers were used with 2 million 4G subscribers. To minimize the impact on other testing teams and to avoid overloading any network component during the test, an activation rate of no more than 2000 sessions per second was used.
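The attach-phase duration implied by the activation cap is straightforward (illustrative arithmetic only):

```python
subs_3g = 3_000_000
subs_lte = 2_000_000
max_activation_rate = 2000          # sessions per second, per the text

attach_seconds = (subs_3g + subs_lte) / max_activation_rate
print(attach_seconds)               # 2500.0 s, i.e. about 42 minutes to
                                    # attach all 5 million subscribers
```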
Load Profile for Oracle Session Server #1 during the 5 Million LTE subscriber tests.
                      Per Second    Per Transaction    Per Exec    Per Call
DB Time(s):                  2.3                0.0        0.00        0.00
DB CPU(s):                   1.9                0.0        0.00        0.00
Redo size:           2,440,966.5            4,920.0
Logical reads:          18,649.5               37.6
Block changes:          12,177.4               24.5
Physical reads:             13.8                0.0
Physical writes:         1,051.0                2.1
User calls:              2,784.5                5.6
Parses:                     10.2                0.0
Hard parses:                 0.0                0.0
W/A MB processed:            0.3                0.0
Logons:                      0.1                0.0
Executes:                2,091.1                4.2
Rollbacks:                   0.0                0.0
Transactions:              496.1
Table 2 - Gx TPS with Broadband APN

TPS - Gx (subs attached per 1 second) | Front End Highest Avg CPU % | Front End Highest Avg Memory | LDAP Search Latency (success) | Sy Latency AAR/AAA | Session Connect Time
 500 | 16 % | 8.9 GB | 121 ms | n/a | 173 ms
1000 | 29 % | 9.3 GB | 119 ms | n/a | 172 ms
1500 | 45 % | 9.5 GB | 121 ms | n/a | 179 ms
2000 | 99 % | 9.5 GB | 122 ms | n/a | 186 ms
Table 3 - Gx and Sy TPS on Broadband APN (3G)

TPS - Gx (subs attached per 1 second) | Front End Highest Avg CPU % | Front End Highest Avg Memory | LDAP Search Latency (success) | Sy Latency AAR/AAA | Session Connect Time
1200 |  6 % | 9.2 GB | 103 ms | 96 ms | 174 ms
2500 |  7 % | 9.3 GB |   6 ms | 93 ms |  78 ms
3690 | 11 % | 9.4 GB |   6 ms | 77 ms |  71 ms
5000 | 19 % | 9.4 GB |  66 ms | 89 ms | 133 ms
Table 4 - Gx and Sy TPS on Broadband APN (4G)

Test | TPS - Gx | Subs attached per 1 second | Front End Highest Avg CPU % | Front End Highest Avg Memory Used | LDAP Search Latency (success) | Sy Latency AAR/AAA | Session Connect Time (Spirent)
4_1000_1 | 1000 | 5000 | 20 % | 9.5 GB | 51 ms | 90 ms | 132 ms
[Table 5 fragment - Test: 5_2000_2, 4000, n/a; remaining columns not recovered]
Table 6 - Capacity performance of 5 Million 3G and LTE subscribers

Test | TPS - Gx | Subs attached per 1 second (grows) | Front End Highest Avg CPU % | Front End Highest Avg Memory Used | LDAP Search Latency (success) | Sy Latency AAR/AAA | Session Connect Time (Spirent)
6_1000_1 | 2000 | 5,000,000 | 35 % | 10.5 GB | 55 ms | 100 ms | 88 ms
One 3G Phone APN user creates 1 Gx transaction during session initiation and 1 Gx transaction during session termination (and no Sy and no Sp traffic). One LTE SBP subscriber creates 2 Gx and 1 Sy transactions during session initiation, and 1 Gx and 1 Sy during session termination.
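Under the per-subscriber message counts above, the aggregate signaling rates for a given attach/detach rate can be sketched as follows (a sketch only; it assumes a steady state where attach and detach rates are equal):

```python
def signaling_tps(attach_rate_3g, attach_rate_lte):
    """Gx and Sy TPS generated by given attach rates (sessions/second),
    assuming an equal detach rate: a 3G Phone APN user costs 1 Gx at
    setup + 1 Gx at teardown; an LTE SBP subscriber costs 2 Gx + 1 Sy
    at setup and 1 Gx + 1 Sy at teardown."""
    gx = attach_rate_3g * (1 + 1) + attach_rate_lte * (2 + 1)
    sy = attach_rate_lte * (1 + 1)
    return gx, sy

print(signaling_tps(1000, 500))  # (3500, 1000)
```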
Requirement: 1800 CCR-I + 400 CCR-U + 1800 CCR-T, or 2000 CCR-I + 2000 CCR-T -> Result: 4000 Gx.CCR-I + 4000 Gx.CCR-T
Requirement: Sy TPS across 5 PCRF nodes, 500 AAR + 500 STR -> Result: 500 Sy.AAR + 500 Sy.STR
Requirement: PCRF processing latency with no external interface latency, 50 ms -> Result: 44 ms @ 4000 TPS
Capacity Targets
5,000,000 concurrent subscriber sessions across all PCRF nodes.
Deliverables
Reporting
A progress report will be generated nightly and distributed to stakeholders and project management. A tar file will be generated for each test run and stored in TBD. Each tar file will contain the following:
- Machine statistics collected during the test run (iostat, prstat, top, etc.)
- Product statistics containing a subset of the available product statistics
- Raw statistics containing a full database dump of all statistics collected during the test run
- Oracle AWR (Automatic Workload Repository) report
- TAL mount point utilization
- Text file containing information about the test run
Results will be collected per test run. Runs will be mapped to the specific tests as outlined in QC, and QC will be updated daily. The specific tests contained within QC will be replaced by the tests outlined in this document. The overall test report will be written and stored in the following location: TBD
Testing Tools
In NDC-1, performance is measured using the following tools:
Performance Statistics
Openet uses scripts to capture the collected Performance Statistics. Sample scripts will be made available to this project but will take some time to adapt to the particulars of the NDC-1 environment. The scripts take an initial snapshot of the environment and capture the ending statistics, forming a snapshot of the statistics over the performance run.
Appendix A Statistics
Gx Statistics - PCRF
Latency_CCR_INITIAL_REQUEST: The latency of CCR-I messages
Latency_CCR_TERMINATION_REQUEST: The latency of CCR-T messages
Latency_CCR_UPDATE_REQUEST: The latency of CCR-U messages
Count_CCR_INITIAL_REQUEST: Total CCR-I count
Count_Failed_CCR_INITIAL_REQUEST: Failed CCR-I count
Count_CCR_UPDATE_REQUEST: Total CCR-U count
Count_Failed_CCR_UPDATE_REQUEST: Failed CCR-U count
Count_CCR_TERMINATION_REQUEST: Total CCR-T count
Count_Failed_CCR_TERMINATION_REQUEST: Failed CCR-T count
Sy Statistics - PCRF
Latency_AAR: May be only applicable to synchronous EBMI
Latency_RAA: Is this for Sy?
Latency_Sy_AA-Answer: Sy AA-Answer latency
Latency_Sy_Re-Auth-Request: Sy RAR latency
Latency_Sy_Session-Termination-Answer: Sy Session-Termination-Answer latency
Count_AAR: Total AAR count
Count_Failed_AAR: Failed AAR count
Sp Statistics - PCRF
Latency_SPR_Lookup: Total latency of the SPR Lookup from the PRDE perspective
Latency_SPR_Roaming_Classification_Lookup: SPR API latency of Roaming Classification
Latency_SPR_LDAP_Search: SPR API latency of the LDAP Search procedure
Latency_SPR_LDAP_Search_Plugin: SPR API latency of the LDAP Search operation at the plugin level
Count_SPR_Lookup: Total SPR Lookup count from the PRDE perspective
Count_SPR_Roaming_Classification_Lookup: Total SPR API Roaming Classification count
Count_Failed_SPR_Roaming_Classification_Lookup: Failed SPR API Roaming Classification count
Count_SPR_LDAP_Search: Total SPR API LDAP Search count
Count_Failed_SPR_LDAP_Search: Failed SPR API LDAP Search count
Spirent
Attempted Connect Rate (Sessions/Second)
Attempted Disconnect Rate (Sessions/Second)
Actual Connect Rate (Sessions/Second)
Actual Disconnect Rate (Sessions/Second)
Sessions Established
Session Errors
Attempted Session Connects
Attempted Session Disconnects
Actual Session Connects
Actual Session Disconnects
Average Session Connect Time
Average Session Disconnect Time
Minimum Connect Time
Maximum Connect Time
User Authentication Failure