
Cisco BroadWorks

System Engineering Guide


Document Version 51
Copyright Notice

Copyright © 2022 Cisco Systems, Inc. All rights reserved.


Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its
affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL:
https://www.cisco.com/c/en/us/about/legal/trademarks.html. Third-party trademarks
mentioned are the property of their respective owners. The use of the word partner does
not imply a partnership relationship between Cisco and any other company. (1721R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not
intended to be actual addresses and phone numbers. Any examples, command display
output, network topology diagrams, and other figures included in the document are shown
for illustrative purposes only. Any use of actual IP addresses or phone numbers in
illustrative content is unintentional and coincidental.

Document Revision History

Version Reason for Change Date

1 Updated document for rebranding. March 2, 2006

1 Changed name of document from System Guide to System Engineering Guide for Release 14.0. June 2, 2006

1 Added Release 14.0 updates. July 7, 2006

1 Edited document. August 20, 2006

2 Added information for fax weightings and guidelines on the November 22, 2006
maximum number of directory numbers (DNs).

2 Edited document. December 11, 2006

3 Made document improvements for System Monitoring project. February 1, 2007

3 Edited changes. March 5, 2007

4 Made minor changes to call weightings and added an explanation March 26, 2007
of business trunking.

4 Added section 6.2.1 IPSec Tunnels for EV 46217. March 29, 2007

4 Edited changes. April 3, 2007

5 Updated system architecture section and added reference to the April 30, 2007
Network Server Memory Estimator tool.

5 Fixed example Network Server capacity calculation. Added April 25, 2007
information on assumptions for real-time accounting configuration.
Clarified 10,000 Conferencing Server administrator limit.

5 Edited changes. May 1, 2007

6 Updated for replication bandwidth. May 31, 2007

6 Edited changes and published document. June 21, 2007

7 Added information on T2000 hardware. July 17, 2007

7 Edited changes and published document. July 25, 2007

8 Clarified Application Server and Network Server database sizing August 30, 2007
rules.

8 Aligned capacity with the Cisco BroadWorks Recommended Hardware Guide, fixed Web Server example, and updated Application Server residential cache computations. October 4, 2007

8 Edited changes and published document. October 12, 2007

9 Added Simple Network Management Protocol (SNMP) get/sec guidelines. November 6, 2007

9 Added information on BEA integration. Added content on high November 15, 2007
performance IBM systems. Removed section on base system
capacities.

9 Added information on unified message storage engineering rules. December 5, 2007

9 Edited changes and published document. December 7, 2007

10 Removed restriction on number of enterprises for Release 14.sp4. February 14, 2008
Added additional service weightings to table in Appendix B.


10 Edited changes and published document. February 18, 2008

11 Revised document for EV 58416. February 25, 2008
Added statement indicating that information applies to earlier releases unless otherwise noted.
Updated enterprise database size per user to 40 KB to match Capacity Planner.
Added Xtended Services Platform.
Updated call weightings for voice mail deposit.

11 Edited changes and published document. April 14, 2008

12 Updated Application Server Growth Estimate procedure to use the licensing number of users instead of the SNMP number. April 23, 2008

12 Changed maximum number of Conferencing Servers in a cluster. April 29, 2008

12 Edited changes and published document. April 30, 2008

13 Added Device Management/Profile Server impacts. Added performance guidelines for new Sun and IBM systems to section 9 Server Performance Guidelines. June 9, 2008

13 Edited changes and published document. June 18, 2008

14 Removed Cisco BroadWorks release references. Added numbers for large Media Servers. July 8, 2008

14 Edited changes and published document. July 21, 2008

15 Deleted BEA section and added Sun x64 series. October 20, 2008

15 Edited changes and published document. October 20, 2008

16 Fixed error in trunk group user estimate. November 1, 2008

16 Changed Network Server DB PERM sizing to maximum 48% of available memory. December 1, 2008

16 Updated description of overload controls. December 8, 2008

16 Described Capacity Planner changes to support input of multiple- March 24, 2009
packaged configurations.
Removed references to the Network Server Memory Estimator
since this has been incorporated into the Capacity Planner.
Updated Java heap utilization examples.

16 Added Media Server jitter indicators. April 10, 2009

16 Edited changes and published document. April 21, 2009

17 Updated section 7.1.3 Generic Port Calculation Rules for EV 89418. May 15, 2009

17 Cleaned up database “temp” size calculation and removed Thread June 1, 2009
Activity as a platform KPI.

17 Edited changes and published document. June 3, 2009

18 Updated replication bandwidth requirements according to EV 90986. June 22, 2009

18 Edited changes and published document. June 25, 2009


19 Added section to explain how to use the System Capacity Planner August 21, 2009
to input live data.
Updated growth estimate to clarify that busy CPU should only
include system and user activity.
Added UC-Connect specifications.

19 Edited changes and published document. August 28, 2009

20 Described enhancements to System Capacity Planner Live Data October 20, 2009
Input mode.
Described impact of Simple Object Access Protocol (SOAP)
transactions on the Web Server, according to EV 98133.
Added provisioning guidance for Communication Barring Fixed
feature.

20 Reworked Hardware Capacities section to be CPU Unit-driven and increased Network Server memory maximum to 48 GB and Application Server memory maximum to 64 GB. December 2, 2009

20 Edited changes and published document. December 8, 2009

21 Made small correction, changing CPUs to CPU Units and December 18, 2009
published document.

22 Updated section 4.2.1 Scaling Characteristics for EV 106924. February 17, 2010

22 Updated section 8 Server Provisioning Guidelines. February 18, 2010

22 Updated Media Server engineering rules in section 7.1 Media February 24, 2010
Server Engineering Rules.

22 Edited changes and published document. March 17, 2010

23 Synchronized document with System Capacity Planner, version March 29, 2010
3.3. Added formulas for Video-enabled Media Server. Renamed
Web Server.

24 Edited changes and published document. April 19, 2010

25 Removed reference to 5,000-user limit in the Application Server May 19, 2010
scaling constraints section and cleaned up OCP/ODP provisioning
guidelines.

25 Updated call weightings for call center calls. May 19, 2010

25 Edited changes and published document. May 28, 2010

26 Fixed errors in growth estimate for EV 114031. June 29, 2010

26 Edited changes and published document. July 26, 2010

27 Renamed “Number of Simultaneous Ringing Personal entries” to “Number of Simultaneous Ringing Personal criteria entries” in section 8.1 Application Server Provisioning Guidelines for EV 108732. November 10, 2010

27 Updated company address and references to Xchange. February 9, 2011

27 Added Database Server (DBS) dimensioning information. April 19, 2011

27 Edited changes and published document. May 3, 2011

28 Updated provisioning guidelines for EV 140694. Fixed simultaneous call calculation. May 6, 2011

28 Added CTI-Connect Xtended Services Platform (Xsp) deployment May 23, 2011
model.


28 Fixed garbage collection (GC) heap collection example error in section 10.2.1.1 Execution Server Post Full Collection Heap Size for EV 143495. June 9, 2011

28 Edited changes and published document. June 22, 2011

29 Updated Media Server port weightings. Added Open Client December 12, 2011
Interface (OCI) over SOAP rating.

29 Edited changes and published document. January 3, 2012

30 Changed Growth Estimate procedure to identify number of users April 30, 2012
on an Application Server.

30 Corrected Access Mediation Server (AMS) example calculation. July 28, 2012
Added section to document restriction on compressed codec
usage for conferencing on UltraSPARC Media Server for EV
154886.
Removed Conferencing Server.
Added Service Control Function (SCF) Server.

30 Edited changes and published document. August 10, 2012

31 Corrected formula for number of simultaneous connections per Xtended Services Platform in section 4.7 Xtended Services Platform. October 1, 2012
Added guidance on number of agents per route point in section 8.1 Application Server Provisioning Guidelines.
Updated section 7.5.1 Message Storage Requirements for mailbox size calculations for EV 140398.

31 Edited changes and published document. October 12, 2012

32 Added section 7.6 Service Control Function Network December 10, 2012
Requirements.

32 Edited changes and published document. December 11, 2012

33 Added information on the Virtualized System Capacity Planner December 10, 2013
worksheet.
Added Communicator Xtended Services Platform information.
Added details on Device Management system engineering.
Removed CDS information.
Added information on provisioning guidelines for the Communication Barring (Hierarchical) feature.
Increased supported group size.
Added information on Session Access Control.
Added EVRC-A codec and described maximum generic and
maximum RTP port requirements.
Added video conferencing.
Updated performance guidelines to support additional CPU
configurations.

33 Edited changes and published document. January 6, 2014


34 Added Messaging Server (UMS), Sharing Server (USS), and April 11, 2014
WebRTC Server (WRS).
Updated Media Server calculations to match capacity planner.
Added Adaptive Multi-Rate Wideband (AMR-WB).
Added guidelines for disposition/unavailable codes for EV 191987.
Updated voice mail storage requirements for EV 213785.

34 Edited changes and published document. April 14, 2014

35 Added clarification to description of Media Server port weightings. Added information for XS mode. June 6, 2014

35 Edited changes and published document. June 27, 2014

36 Added Network Function Manager (NFM). December 17, 2014
Removed references to the Solaris operating system and UltraSPARC hardware.
Updated Network Server database size constraints for EV 235455.
Added Enhanced Variable Rate Codec – Narrowband Wideband (EVRC-NW) codec weightings.

36 Edited changes and published document. December 18, 2014

37 Added rebranded server icons. March 30, 2015

37 Added Video Server (UVS). Removed references to hardware vendors. April 3, 2015

37 Edited changes and published document. April 10, 2015

38 Added Growth Estimate for additional server types. March 11, 2016
Added restriction on enterprise trunk number ranges.
Corrected Application Server bandwidth formula.

38 Edited changes and published document. March 21, 2016

39 Added Notification Push Server (NPS). Removed UC-Connect. June 6, 2016

39 Edited changes and published document. June 10, 2016

40 Added information on monitoring database checkpoint activity. February 17, 2017
Re-formatted and added information to the Capacity table in section 5 Hardware Capacities.

40 Edited changes and published document. February 22, 2017

41 Updated checkpoint calculation. May 19, 2017
Updated Messaging Server (UMS) information.
Added Network Database Server (NDS).

41 Edited changes and published document. May 25, 2017

42 Updated Server Performance Guidelines to remove reference to disk service time. June 29, 2018
Removed references to MGCP and EMS.
Updated maximum DB size for trunking deployments.
Updated UMS description to remove references to the Release 22.0 farm model.

42 Edited changes and published document. August 9, 2018


43 Removed obsolete sections. November 13, 2018
Removed references to Maintenance Guide.
Updated SCF specifications.
Updated checkpoint example.
Updated Messaging Server (UMS) section to indicate that the Application Server (AS) can use multiple UMS.
Updated Xtended Services Platform/Profile Server growth estimate to account for Tomcat thread usage.
Updated Unified Messaging storage server requirements.
Updated MS CLI context in section 7.1.7 Playout and Recording Memory Usage Rules.

43 Edited changes and published document. December 18, 2018

44 Added Device Activation Server. April 5, 2019

44 Completed rebranding for Cisco. April 18, 2019

44 Edited changes and published document. April 18, 2019

45 Removed references to Assistant-Enterprise as the service is no longer supported. April 23, 2019

45 Edited changes and published document. April 26, 2019

46 Added Webex Teams Xtended Services Platform. May 29, 2020
Added information about Call Settings Webview application.
Removed system domain limit (ap362553).
Corrected checkpoint load example.

46 Edited changes and published document. June 5, 2020

47 Added Application Delivery Platform (ADP) server. August 25, 2020

47 Edited changes and completed latest rebranding for Cisco. September 21, 2020

47 Accepted changes and published document. September 21, 2020

48 Renamed Webex. December 21, 2020

48 Edited changes and published document. January 5, 2021

49 Removed incorrect PM from XSP growth estimate. November 5, 2022
Added clarification on communication barring digit pattern soft limit.

50 Adjusted number of entities calculation on the Network Server to add new systemNbLinePorts PM and remove obsolete systemNbExts PM. November 15, 2022

51 Edited and published document. November 16, 2022

Table of Contents

1 Introduction....................................................................................................................................... 13
2 Definitions ......................................................................................................................................... 14
2.1 Average Call Hold Time ............................................................................................................ 14
2.2 Erlang ......................................................................................................................................... 14
2.3 Busy Hour Call Attempts/Calls per Second ............................................................................. 14
2.4 Simultaneous Calls .................................................................................................................... 14
2.5 Call Model .................................................................................................................................. 15
2.6 Call Weighting ............................................................................................................................ 15
3 Server Capacity Disclaimer ............................................................................................................ 16
4 System Architecture and Scalability ............................................................................................ 17
4.1 Network Server .......................................................................................................................... 17
4.1.1 Centralized Location Repository ...................................................................................... 17
4.2 Centralized Routing Engine ...................................................................................................... 17
4.2.1 Scaling Characteristics ..................................................................................................... 17
4.2.2 Scaling Constraints ........................................................................................................... 18
4.3 Application Server...................................................................................................................... 20
4.3.1 Scaling Characteristics ..................................................................................................... 20
4.3.2 Scaling Constraints ........................................................................................................... 21
4.4 Execution Server (XS Mode) .................................................................................................... 22
4.5 Media Server.............................................................................................................................. 22
4.6 Application Delivery Platform .................................................................................................... 23
4.7 Xtended Services Platform ....................................................................................................... 23
4.7.1 Xtended Services Platform Web Container Dimensioning ............................................ 23
4.7.2 Xtended Services Platform Transaction Ratings ............................................................ 23
4.7.3 Xtended Services Platform Deployment Configurations ................................................ 24
4.8 Cisco BroadWorks Device Management Profile Server ......................................................... 26
4.8.1 Profile Server Deployment Configurations ...................................................................... 26
4.9 Database Server........................................................................................................................ 27
4.9.1 Database Server Dimensioning ....................................................................................... 28
4.9.2 Database Server Disk Requirement ................................................................................ 28
4.10 Access Mediation Server .......................................................................................................... 28
4.11 Service Control Function ........................................................................................................... 29
4.12 Messaging Server...................................................................................................................... 29
4.13 Sharing Server ........................................................................................................................... 29
4.14 WebRTC Server ........................................................................................................................ 29
4.15 Network Function Manager....................................................................................................... 29
4.16 Network Database Server ......................................................................................................... 29
4.17 Video Server .............................................................................................................................. 30
5 Hardware Capacities ....................................................................................................................... 31

6 System Capacity Planning ............................................................................................................. 33
6.1 Use System Capacity Planner .................................................................................................. 33
6.1.1 Use Single-Packaged Configuration ............................................................................... 33
6.1.2 Use Multiple-Packaged Configurations ........................................................................... 34
6.1.3 Use Manual Configuration ............................................................................................... 35
6.1.4 Bare Metal/Virtualized Configurations ............................................................................. 35
6.1.5 Use Live Data Input .......................................................................................................... 36
7 System Engineering Rules ............................................................................................................. 37
7.1 Media Server Engineering Rules .............................................................................................. 37
7.1.1 Media Server Resources.................................................................................................. 37
7.1.2 Media Server Processor Allocation Rules ....................................................................... 37
7.1.3 Generic Port Calculation Rules ........................................................................................ 38
7.1.4 Audio Port Calculation Rules ........................................................................................... 38
7.1.5 Video Calculation Rules ................................................................................................... 39
7.1.6 View Media Server Port Assignments ............................................................................. 41
7.1.7 Playout and Recording Memory Usage Rules................................................................ 41
7.2 Video Server Engineering Rules .............................................................................................. 42
7.3 SNMP Guidelines ...................................................................................................................... 43
7.4 Replication Bandwidth Requirements ...................................................................................... 43
7.4.1 Application Server Replication Bandwidth Requirements .............................................. 43
7.4.2 Network Server Replication Bandwidth Requirements .................................................. 44
7.5 Cisco BroadWorks Unified Messaging Storage Server Requirements .................................. 45
7.5.1 Message Storage Requirements ..................................................................................... 45
7.5.2 Busy Hour Messaging Throughput .................................................................................. 46
7.6 Service Control Function Network Requirements.................................................................... 48
8 Server Provisioning Guidelines .................................................................................................... 49
8.1 Application Server Provisioning Guidelines ............................................................................. 49
8.2 Network Server Provisioning Guidelines .................................................................................. 50
8.3 Messaging Server Provisioning Guidelines ............................................................................. 51
9 Server Performance Guidelines .................................................................................................... 52
10 Server Growth Estimation Procedures ........................................................................................ 53
10.1 General Procedure .................................................................................................................... 53
10.1.1 Gather and Analyze Data ................................................................................................. 53
10.1.2 Revisit Growth Estimate ................................................................................................... 54
10.1.3 Server Health .................................................................................................................... 54
10.2 Application Server Growth Estimate Procedure ...................................................................... 54
10.2.1 Collect Information ............................................................................................................ 54
10.2.2 Calculate Maximum Users per Resource Indicator ........................................................ 57
10.2.3 Analyze Data..................................................................................................................... 58
10.3 Network Server Growth Estimate Procedure ........................................................................... 59
10.3.1 Collect Information ............................................................................................................ 59

10.3.2 Calculate Max Identities Based per Resource Indicator................................................. 59
10.3.3 Analyze Data..................................................................................................................... 60
10.4 Media Server Growth Estimate Procedure .............................................................................. 61
10.4.1 Collect Information ............................................................................................................ 61
10.4.2 Analyze Data..................................................................................................................... 61
10.5 Web Container (Xtended Services Platform/Application Delivery Platform/Profile Server)
General Growth Estimate Procedure ....................................................................................... 62
10.5.1 Collect Information ............................................................................................................ 62
10.5.2 Analyze Web Container Data .......................................................................................... 64
10.6 Device Management Growth Estimation Procedure ............................................................... 64
10.6.1 Number of Devices ........................................................................................................... 64
10.6.2 IOPS (File Repository Only)............................................................................................. 64
10.6.3 Bandwidth (Device Management Access Only) ............................................................. 65
10.6.4 Calculate Maximum Devices per Resource Indicator (Device Management) .............. 65
11 Database Server Disk Estimation Tool ........................................................................................ 66
12 XS Mode System Engineering (Sh Interface Traffic) ................................................................. 67
12.1 Subscriber Profile Sizing ........................................................................................................... 67
12.2 Execution Server Traffic Model................................................................................................. 67
12.2.1 Patterns ............................................................................................................................. 67
12.2.2 Example ............................................................................................................................ 68
12.3 Profile Server Traffic Model....................................................................................................... 69
12.3.1 Patterns ............................................................................................................................. 69
12.3.2 Example ............................................................................................................................ 69
13 Appendix A: Server Overload Controls ....................................................................................... 70
13.1 Application Server...................................................................................................................... 70
13.2 Network Server .......................................................................................................................... 71
13.2.1 Network Server Licensing ................................................................................................ 71
13.3 Media Server.............................................................................................................................. 72
13.4 Xtended Services Platform ....................................................................................................... 72
Acronyms and Abbreviations ............................................................................................................... 73
References ............................................................................................................................................... 76

Table of Figures

Figure 1 Execution Server Sh Transaction Rate Example, Full Business User .................................... 68
Figure 2 Execution Server Sh Data Volume Example, Full Business User ........................................... 68

1 Introduction

This document is intended as a guide to engineer a Cisco BroadWorks system. The scalability aspects of the architecture are described along with details on how to plan for
expected system capacity. This information is intended to be used during system planning.
This document applies to all supported Cisco BroadWorks releases. Any release-specific
information is noted where applicable.
Information on monitoring system performance is also presented in this document. This is
a crucial part of the ongoing system planning. Current system performance is the most
accurate way to predict future system scalability.

2 Definitions

2.1 Average Call Hold Time


Average call hold time (ACH) defines how long a call is connected (call duration), on
average.

2.2 Erlang
Call load is measured in Erlang units. Erlang units represent traffic intensity or load as
traffic volume per time unit. An Erlang can be defined as one telephone line carrying traffic
continuously for one hour.
For example, if a system receives 100 calls per hour with each call requiring three minutes (0.05 hour) of service, then the traffic volume in an eight-hour period is 100 * 0.05 * 8 = 40 Call Hours (Ch). One Erlang equals one Ch/hour, so the traffic load is 40/8 = 5 E.
Peak or busy hour is the busiest one-hour (60 minutes) period of the day. This is typically
the load for which resources are calculated since it represents the “worst case” scenario.
Average call hold time defines how long a call is connected (call duration), on average.
Following are metrics typically used in the industry:
 Residential: 0.1 Erlang with three-minute average call hold times
 Enterprise: 0.2 Erlang with three-minute average call hold times

2.3 Busy Hour Call Attempts/Calls per Second


Peak or busy hour is the busiest one-hour period of the day. This is typically the load for
which resources are calculated since it represents the “worst case” scenario. The number
of calls during this hour represents the number of Busy Hour Call Attempts (BHCA). Calls
per second (CPS) are equal to the BHCA converted to seconds.
To calculate BHCA the following formula is used:
BHCA = # of subscribers * ([Erlang per subscriber * 60 minutes] / call hold time per
subscriber in minutes)
To calculate CPS, the following formula is used:
CPS = BHCA / 3600
For example, 1000 residential subscribers with an Erlang of 0.1 would have the BHCA computed as follows: BHCA = 1000 * ([0.1 * 60]/3) = 2000 BHCA.
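As a quick check of this arithmetic, the following minimal Python sketch computes BHCA and CPS using the formulas above; the function names are illustrative only and are not part of any Cisco tool.

```python
def bhca(subscribers: int, erlang_per_sub: float, avg_call_hold_min: float) -> float:
    """BHCA = subscribers * (Erlang per subscriber * 60 minutes / call hold time in minutes)."""
    return subscribers * (erlang_per_sub * 60.0) / avg_call_hold_min


def cps(busy_hour_call_attempts: float) -> float:
    """CPS = BHCA / 3600."""
    return busy_hour_call_attempts / 3600.0


# Residential example from the text: 1000 subscribers at 0.1 Erlang, 3-minute hold time.
attempts = bhca(1000, 0.1, 3.0)
print(attempts, round(cps(attempts), 2))  # 2000.0 BHCA, ~0.56 CPS
```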

2.4 Simultaneous Calls


The number of simultaneous calls can be computed from the number of subscribers and
the Erlang per subscriber as follows.
Number of simultaneous calls = Number of subscribers * Erlang per
subscriber

For Business Trunking applications, the number of simultaneous calls is used for licensing
instead of the number of users. The number of simultaneous calls is equivalent to the
overall system trunk group call capacity. However, for capacity planning purposes, the
effective number of users must be estimated to determine the database size and expected
number of web and provisioning transactions. The effective number of users is determined
from the number of simultaneous calls and the Erlang per user.

CISCO BROADWORKS SYSTEM ENGINEERING GUIDE PAGE 14 OF 76


Effective Number of Users = Number of simultaneous calls / Erlang per
user
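The same relationships can be expressed as a short sketch; the user count, licensed simultaneous call count, and the 0.2 Erlang per user value below are assumptions chosen for illustration (0.2 Erlang is the enterprise figure from section 2.2).

```python
def simultaneous_calls(subscribers: int, erlang_per_sub: float) -> float:
    # Number of simultaneous calls = Number of subscribers * Erlang per subscriber
    return subscribers * erlang_per_sub


def effective_users(sim_calls: float, erlang_per_user: float) -> float:
    # Effective Number of Users = Number of simultaneous calls / Erlang per user
    return sim_calls / erlang_per_user


# Hypothetical Business Trunking deployment licensed for 2,000 simultaneous calls,
# assuming 0.2 Erlang per user (the enterprise figure from section 2.2).
print(simultaneous_calls(10_000, 0.2))  # 2000.0 simultaneous calls for 10,000 users
print(effective_users(2_000, 0.2))      # 10000.0 effective users for capacity planning
```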

2.5 Call Model


The call model defines the calling profile. Elements of the call model include:
 Percentage of internal versus external calls – Internal calls are defined as calls
between Cisco BroadWorks users within the same group.
 Incoming external versus outgoing external calls – An incoming external call is
defined as a call originating from another network element and terminating to a Cisco
BroadWorks user on the Application Server. An outgoing external call is a call
originating from a Cisco BroadWorks user and terminating to another network
element.
 Call disposition – Answered, Not Answered, Busy, or Error.
 Service distribution – This is the expected penetration of the various Cisco
BroadWorks services, each of which has a varying effect on system capacity.

2.6 Call Weighting


Each service in the call model is assigned a call weighting to estimate the impacts. The
weighting is an impact relative to a basic call. A basic call is defined as a simple answered
call. The call weighting is based primarily on impacts to messaging throughput. For
example, the Simultaneous Ringing service results in an extra terminating call leg for each
provisioned number. If one number is configured, the effective call weight is 1.5.
Registrations are also weighted as a “call”. A registration with authentication is assumed to
have a call weighting of one-half an answered call.
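The sketch below shows how call weightings combine into a weighted busy-hour load; the traffic mix is hypothetical, while the 1.0, 1.5, and 0.5 weights come from the examples above.

```python
# Weighted busy-hour load: each call type's attempts multiplied by its call weighting.
# Only the 1.0 (basic answered call), 1.5 (Simultaneous Ringing with one number), and
# 0.5 (registration with authentication) weights come from the text; the mix is hypothetical.
traffic = [
    ("basic answered call", 1500, 1.0),
    ("simultaneous ringing, one number", 300, 1.5),
    ("registration with authentication", 2000, 0.5),
]

weighted_bhca = sum(attempts * weight for _, attempts, weight in traffic)
print(weighted_bhca)  # 2950.0 weighted BHCA
```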

3 Server Capacity Disclaimer

The server capacities shown in this document are an indication only of the expected performance of a standard system using a typical system configuration and feature mix.
Variations in hardware platforms, operating system updates, selected customer feature
sets, and customer mix affect the maximum number of users actually supported by the
system.
Cisco makes every effort to provide useful data in the sections that follow, but the
information should be viewed as guidelines rather than absolute performance figures.

4 System Architecture and Scalability

The following sections briefly describe the architecture and associated scalability aspects
of each of the Cisco BroadWorks nodes.

4.1 Network Server


The Network Server cluster is the key element in Cisco BroadWorks system scaling. A
Network Server cluster is defined as a number of Network Servers using the same
synchronized data store. In general, a Cisco BroadWorks deployment has one Network
Server cluster. The Network Server cluster provides two main functions:
 Centralized Location Repository
 Centralized Routing Engine
It is possible to segregate the Network Server functionality such that different Application
Servers are “homed” on different Network Server clusters. In this case, each Network
Server cluster would be considered a different Cisco BroadWorks system and the general
scaling rules outlined in this document would apply to each system independently.

4.1.1 Centralized Location Repository


The Network Server cluster knows where all login user IDs, directory numbers (DNs), and
Session Initiation Protocol (SIP) Uniform Resource Locators (URLs) are hosted.
The Network Server acts as a SIP-based redirection server for ingress INVITEs returning
a SIP 302 Moved Temporarily response with an Application Server cluster as contact. The
Network Server knows where all device address of record (AOR)/line ports are hosted
(Release 14.sp2) and can act as SIP-based redirection server for ingress REGISTERs
returning a SIP 302 Moved Temporarily response with a serving Application Server cluster
as contact.
The Network Server acts as a Hypertext Transfer Protocol (HTTP)-based user location
server for web and client login returning the user’s serving primary and secondary
Application Server. This location application programming interface (API) can be used by
a third-party portal.

4.2 Centralized Routing Engine


The Network Server provides policy-driven originator-based/terminator-based routing
capabilities. It provides:
 Centralized advanced routing capabilities, for example, Equal Access Support,
Physical Location Routing, and so on.
 SIP-based Media Server Selection.
For more information on Network Server capabilities, see the Cisco BroadWorks Network
Server Product Description [5].

4.2.1 Scaling Characteristics


Network Servers are deployed in clusters. To increase capacity, additional servers are
added to the cluster.
Network Servers are deployed in an N+1 manner, in which the cluster is over-provisioned
with an additional server. This is done to provide redundancy. If a server should fail, there
would be enough capacity to accommodate busy hour traffic.

A maximum of twenty servers can be in a cluster. Nineteen servers are used for capacity
planning, with one additional redundant server.
On systems with a large volume of provisioning transactions, a dedicated Network Server
can be deployed as a provisioning-only server.
The Network Server uses the Oracle TimesTen database, with a memory resident data
store. TimesTen replication is used to keep the data stores across the cluster in synchronization.
Network Server transactions consist primarily of network-bound SIP redirection traffic,
Media Server Selection requests, and User Locator requests. The Network Server can
also be optionally configured to handle access-side call and registration traffic.
There are a number of ways that network elements such as Application Servers, public
switched telephone network (PSTN) interconnects, and session border controllers (SBCs)
can select Network Servers. Some common Network Server selection algorithms are as
follows.
 Predefined Network Server route list with round-robin algorithm route selection
 Segmentation of Network Server based on access originated traffic versus network
originated traffic
 Segmentation of Network Server based on per node fixed Network Server list (for
example, Application Server cluster 1 uses Network Server 1/Network Server 2 while
Application Server cluster 2 uses Network Server 2/Network Server 3)

4.2.2 Scaling Constraints


The number of subscribers supported by a given Network Server cluster is constrained by
the following items:
 TimesTen Database Size
 Transactions per Second (TPS)
 Execution Server Heap Size
The above constraints are directly related to platform resource usage, that is, CPU and
memory. The type of platform, the number of servers required, and the memory per server are dictated by the system requirements. For example, to support ten million subscribers, the
requirement of the Network Server database might be 6 GB, which would mean 12 GB
minimum RAM for each server. Assuming a typical residential deployment with two calls
per user per hour, the cluster would need to support twenty million calls per hour or 5,556
TPS. If the selected hardware was rated at 600 TPS, then ten servers would be required,
with the eleventh server available to handle load during a node failure.
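The server-count arithmetic in this example can be sketched as follows; it is a minimal illustration of the N+1 rule, using the 600 TPS rating and two calls per user per hour assumed above.

```python
import math


def network_servers_required(subscribers: int, calls_per_user_per_hour: float,
                             tps_rating_per_server: float) -> int:
    """Return the total number of Network Servers, including one redundant (N+1) server."""
    busy_hour_calls = subscribers * calls_per_user_per_hour
    required_tps = busy_hour_calls / 3600.0
    active_servers = math.ceil(required_tps / tps_rating_per_server)
    return active_servers + 1  # one extra server for redundancy


# Example from the text: 10 million subscribers, 2 calls per user per hour, 600 TPS servers.
print(network_servers_required(10_000_000, 2, 600))  # 11 (10 active + 1 redundant)
```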

4.2.2.1 Database Size Constraints


The TimesTen database is replicated across all members of the cluster. The database size is directly related to available system memory/RAM; with the Network Server validated with up to 48 GB of physical memory (see section 4.2.2.3), this corresponds to a maximum database size of 24 GB. The database allocated permanent size can be changed at any
time and should not be set to greater than 50% of available physical memory. Database
temporary size, unless otherwise indicated by Cisco, should be set to the script default
value associated with the selected permanent size.
The Network Server database footprint is driven by the following key elements:
 Deployment model (enterprise/service provider/group mix)
− Residential Model – 1 group per user/household model

− Business/enterprise model – many users per group
 Subscriber attributes – A subscriber’s Network Server profile is made up of a number of data attributes:
− Directory number(s)
− Web/client login user ID
− SIP URL aliases
− Extension (if user does not have a directory number)
 Routing attributes:
− Network elements (NEs)
− Routing profiles, routing policies, route list entries
− NPA-NXX Active Code List (NNACL) files, Local Calling Area (LCA) entries, Local
Number Portability (LNP) entries
The main driver to the database size as a system scales is the subscriber information. In
general, a subscriber has at least two attributes: a directory number and a web/client login
user ID.

4.2.2.2 Transactions per Second (TPS) Constraints


Each Network Server cluster member has a rate TPS guideline that is based on hardware
type. A Network Server transaction can be one of the following equally weighted types:
 SIP INVITE redirection
 SIP REGISTER redirection
 User location lookup
A server’s rated TPS is based on CPU resource usage and is derived from performance
validation testing data. TPS per-platform rating ranges from 300 to 4500 TPS. For specific
Network Server TPS ratings per hardware (HW) type, see section 5 Hardware Capacities.
The total Network Server cluster TPS is the sum of the TPS ratings across the individual cluster members, minus one server (for redundancy). The Network Server cluster is currently rated at up to 19 + 1 elements, which means that the maximum total system TPS is currently rated at 85,500 TPS (19 servers * 4500 TPS for a 24 CPU Unit Intel 5600 Xeon-based server).
The Cisco BroadWorks System Capacity Planner [2] identifies how many Network Server
elements of a certain hardware type are required to meet the estimated traffic model.
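Conversely, the rated capacity of a deployed N+1 cluster can be computed by summing the per-server ratings and discounting one server, as in this small sketch (the 4500 TPS figure is the rating quoted above).

```python
def cluster_rated_tps(servers_in_cluster: int, tps_per_server: float) -> float:
    # Cluster TPS is the sum across all members, minus one server kept for redundancy.
    return (servers_in_cluster - 1) * tps_per_server


print(cluster_rated_tps(20, 4500))  # 85500.0 TPS for a fully built 19 + 1 cluster
```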

4.2.2.3 Execution Server Heap Size Constraints
The Execution Server (XS) is the process responsible for call processing. The memory
heap size used by the Execution Server process is directly related to available system
memory/RAM. Cisco performance benchmarking has validated the Network Server with
up to 48 GB of physical memory. This corresponds to an Execution Server heap size of
7.2 GB (15% of available memory). The Network Server Execution Server heap size
footprint is driven by the following key elements:
 Data caching
− Routing information
 Transactions per second
− SIP-based redirection and user location lookups
− Each active session uses up heap space
Unlike the database size constraint, which is “cluster-wide”, the Execution Server heap is
“server-specific”. This means that different servers in the cluster can have heaps at
different sizes if they are not handling the same traffic loads. In general, the Network
Server Execution Server heap size is rarely the bottleneck for growth for the Network
Server cluster.

4.3 Application Server


The Application Server is the service delivery platform that provides the line-side soft
switch capabilities. A subscriber belongs to a specific Application Server cluster and as
such, the subscriber’s profile resides in the database that is unique to each cluster.
Cisco BroadWorks provides two different deployment models for its Telephony Application
Server (TAS) product targeted for IMS deployments: the Application Server (AS) TAS and
the Execution Server (XS) TAS. A system with an Application Server TAS is referred to as
AS mode. A system with an Execution Server TAS is referred to as XS mode.
This section describes AS mode. For information on XS mode, see section 4.4 Execution
Server (XS Mode).

4.3.1 Scaling Characteristics


The Cisco BroadWorks Application Server is deployed in a primary/secondary server
cluster scheme. Multiple Application Server clusters are deployed within the same
Network Server cluster to increase capacity. Each Application Server cluster is managed
independently of the other. The number of Application Server clusters per system is
limited by the capacity of the Network Server cluster. Adding additional Application Server
clusters drives additional requirements on other network components.
The secondary server operates in a hot-standby mode, with all subscriber data kept
current on both servers of the cluster. Failure of the primary node causes rollover of user
control to the secondary server.
Subscriber data is stored in an Oracle TimesTen database that is memory resident. A
memory database provides for optimum performance and throughput. Replication of
subscriber data is accomplished using TimesTen replication.
Provisioning of service providers, enterprises, groups, users, and services is done via the
Open Client Interface-Provisioning (OCI-P). This provisioning traffic is generated via the
web interface, local command line interface (CLI), and by Cisco BroadWorks/third-party
clients (and proxied by the Open Client Server [OCS] and Xsi-Actions web application).

The system can be (optionally) configured to allow for preferred provisioning on the
secondary server. This can more evenly distribute processing load during steady state
operation, but is not a mechanism to provide increased capacity, as a single server must
be able to service both call and provisioning load during software upgrades and failure
scenarios.

4.3.2 Scaling Constraints


An Application Server’s capacity is constrained by the following:
 Processing Power:
− Calls per second (CPS)
− Open Client Interface (OCI) provisioning and call processing transactions per second
 Disk:
− IOPS
− Throughput
 Memory:
− Database size
− Execution Server Heap Size
The Cisco BroadWorks System Capacity Planner [2] takes the previously listed constraints into account when providing an estimate of subscribers per Application Server cluster.

4.3.2.1 Execution Server Heap Size Constraints


The Execution Server (XS) is the process responsible for call processing. The memory
heap size used by the Execution Server process is directly related to available system
memory/RAM. Cisco performance benchmarking has validated the Application Server
with up to 128 GB of physical memory. This corresponds to an Execution Server heap
size of 32 GB (25% of available memory). The Application Server Execution Server heap
size footprint is driven by the following key elements:
 Data caching
− Group information and subscriber information
 Transactions per second
− Call processing and non-call processing traffic
− Each active call uses up heap space

4.3.2.2 Calls per Second and Provisioning TPS Constraints


The Application Server is rated at a maximum Calls Per Second (CPS) rate and a maximum Provisioning Transactions Per Second (PTPS) rate. The quoted CPS rates are for basic answered calls with a call weighting of one. The quoted PTPS rate assumes 100% “get” transactions and is a rough estimate since provisioning transactions have highly variable performance. A modify transaction is approximately five times more costly than a “get”, and an “add” or “delete” is ten times more costly.

A server’s rated CPS and PTPS are based on CPU resource usage and are derived from
performance validation testing data. For specific Application Server CPS/PTPS ratings per
hardware type, see section 5 Hardware Capacities.
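Because the PTPS rating assumes 100% “get” transactions, a mixed provisioning load is best converted to “get” equivalents before comparing it against the rating. The sketch below illustrates that conversion; the transaction mix is hypothetical, and the 5x and 10x weights are the approximations given above.

```python
# Convert a mixed provisioning load into "get"-equivalent PTPS.
# Weights from the text: a modify costs roughly 5 gets, an add or delete roughly 10.
GET_EQUIVALENT = {"get": 1, "modify": 5, "add": 10, "delete": 10}


def get_equivalent_ptps(transactions_per_second: dict) -> float:
    return sum(rate * GET_EQUIVALENT[kind]
               for kind, rate in transactions_per_second.items())


# Hypothetical mix: 40 gets/s, 5 modifies/s, 1 add/s, 1 delete/s.
print(get_equivalent_ptps({"get": 40, "modify": 5, "add": 1, "delete": 1}))  # 85
```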

4.3.2.3 Database Size Constraints


The TimesTen database is replicated across the two Application Server cluster members.
The database size is directly related to available system memory/RAM. The database
allocated permanent size can be changed at any time and should not be set to greater
than 38% of available physical memory. The maximum database size is 24 GB for non-
trunking solutions. For low-feature/trunking solutions a database size of 36 GB is
supported (the calling rate is limited to 400 CPS). Database temporary size, unless
otherwise indicated by Cisco, should be set to the script default value associated with the
selected permanent size.
The Application Server database footprint is driven by the call model: number of users per
group, groups per service provider/enterprise, and so on, along with the number of
services assigned and configured.

4.4 Execution Server (XS Mode)


The Execution Server Telephony Application Server provides a different scalability model
than the Application Server Telephony Application Server. Execution Server Telephony
Application Server nodes are deployed in a farm configuration with N+1 redundancy. The
Execution Server Telephony Application Server provides call processing and service logic.
An individual node is dimensioned similarly to the Application Server Telephony
Server with the following exceptions:
 The Execution Server Telephony Application Server has no database; therefore, the
database usage is not part of the dimensioning. Subscriber profile information is
retrieved from the Home Subscriber Server (HSS) and/or Database Server (DBS).
 The Execution Server Telephony Application Server heap is configured to consume
more of the system RAM. Its size is 66 percent of the system memory, with a
maximum total size of 32 GB.
 Provisioning is handled by the Profile Server – Provisioning Application.

4.5 Media Server


Media Servers are also deployed in clusters. To increase capacity, additional servers are
added to the cluster. There can be up to 2,000 Media Servers per Network Server cluster.
Each Media Server operates independently.
The Media Server is dimensioned in terms of ports. A port can provide interactive voice
response (IVR) or conferencing functions. Ports for different codecs, fax, and video are
weighted in proportion to the resources used relative to the G.711 codec.
A given system’s Media Server port requirements are derived from the call model. The
number of users per Media Server port is estimated based on the average amount of time
each user requires a Media Server resource/port.
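As a rough illustration of this idea only (the authoritative port calculation rules are in section 7.1 Media Server Engineering Rules), the sketch below converts an assumed per-user busy-hour media usage into an estimate of simultaneous ports; all input values are assumptions.

```python
def estimated_media_server_ports(users: int, media_seconds_per_user_busy_hour: float) -> float:
    """Rough estimate of simultaneous Media Server ports as media Erlangs.

    Each user's assumed busy-hour media time (IVR prompts, voice mail,
    conferencing) is converted to an Erlang value; the total approximates
    the number of simultaneously occupied ports before codec, fax, and
    video weightings are applied.
    """
    erlang_per_user = media_seconds_per_user_busy_hour / 3600.0
    return users * erlang_per_user


# Assumption: 50,000 users, each using a media resource for 18 seconds in the busy hour.
print(estimated_media_server_ports(50_000, 18))  # 250.0 ports (unweighted)
```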
A Media Server can be classified as a video transcoding/conferencing-enabled Media
Server. This type of Media Server has increased hardware requirements. Video
transcoding/conferencing-enabled Media Servers can be deployed alongside non-video-
enabled Media Servers. Video transcoding/conferencing-enabled Media Servers are
divided into CPUs dedicated for video transcoding/conferencing and CPUs dedicated for
IVR and conferencing.

4.6 Application Delivery Platform
The Application Delivery Platform is a general-purpose application-hosting server. It is
meant as a controlled platform on which applications that integrate with other Cisco
BroadWorks services/servers can run. It replaces the concept of the Xtended Services
Platform and Profile Server nodes in AS mode.
The Application Delivery Platform can be deployed in different models depending on the
services offered. The modes and deployment configurations are identical to the Xtended
Services Platform and Profile Server configurations described below.

4.7 Xtended Services Platform


The Xtended Services Platform (Xsp) provides a generic container platform for running a
customer Demilitarized Zone (DMZ) application. The Xtended Services Platform comes
with a number of preinstalled applications that can be deployed and share the same
server resources. There are two types of applications:
 Web applications: Hypertext Transfer Protocol/Hypertext Transfer Protocol Secure
Sockets (HTTP/HTTPS)-based applications that run in a Tomcat Web Container. All
web applications run in the same Web Container and share its resources.
 Cisco BroadWorks applications: Java- or C++-based applications providing some
specific function. Each Cisco BroadWorks application is independent and has its own
resource and dimensioning requirements.
Xtended Services Platforms are deployed in a cluster, with any Xtended Services Platform able to handle requests destined for any of the deployed applications. Generically, the
Xtended Services Platform uses the Network Server Location API to determine the
Application Server (AS) cluster that should be the recipient of a request.
There can be up to 1,000 Xtended Services Platform nodes per Network Server cluster.

4.7.1 Xtended Services Platform Web Container Dimensioning


Xtended Services Platform web applications all share the same Web Container.
The Cisco BroadWorks System Capacity Planner [2] uses the Web Container
dimensioning information previously mentioned when modeling Xtended Services
Platform capacity based on a web application’s general footprint.

4.7.2 Xtended Services Platform Transaction Ratings


In general, the Xtended Services Platform takes requests for a variety of different
applications from the outside world and forwards them on to the appropriate Cisco
BroadWorks Server within the core. The Xtended Services Platform’s workload is rated in
unweighted requests per second (RPS), which is a function of the available CPU. For
example, a 12 CPU Unit Xtended Services Platform would be rated at 1800 RPS. Each
request processed by the Xtended Services Platform, regardless of the application or
underlying protocol, has an RPS weighting. For example:
 Network Server Location API lookup weight: 1 RPS
 OCI CAP or OCI-P request weight: 1 RPS
 Xtended Services Interface action or event weight: 1 RPS
 OCI over SOAP request weight: 3 RPS
The Cisco BroadWorks System Capacity Planner [2] uses a transaction weighting model
and an applications traffic footprint when modeling Xtended Services Platform capacity.
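The sketch below shows how these weightings combine into an Xtended Services Platform load figure; the request mix is hypothetical, while the per-request weights and the 1800 RPS rating for a 12 CPU Unit server are taken from the text above.

```python
# RPS weights from the text; the request rates below are a hypothetical busy-hour mix.
RPS_WEIGHT = {
    "ns_location_lookup": 1,   # Network Server Location API lookup
    "oci_request": 1,          # OCI CAP or OCI-P request
    "xsi_action_or_event": 1,  # Xtended Services Interface action or event
    "oci_over_soap": 3,        # OCI over SOAP request
}


def weighted_rps(requests_per_second: dict) -> float:
    return sum(rate * RPS_WEIGHT[kind] for kind, rate in requests_per_second.items())


load = weighted_rps({"ns_location_lookup": 200, "oci_request": 300,
                     "xsi_action_or_event": 500, "oci_over_soap": 100})
print(load, load <= 1800)  # 1300 weighted RPS, within the 1800 RPS 12 CPU Unit rating
```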

The RPS is only one metric of Xtended Services Platform (Xsp) scalability. The
performance of the backend Cisco BroadWorks server is also a consideration when
examining Xtended Services Platform throughput.

4.7.3 Xtended Services Platform Deployment Configurations


As a generic container platform, in theory, the Xtended Services Platform could support
deployment of all applications on the same Xtended Services Platform. However, in
practice, a combined deployment model can be problematic for a few reasons.
 Memory allocation: Each Cisco BroadWorks application has its own default memory
allocation footprint, as does the Web Container application (in which the web
applications run). Certain combinations of deployed applications are in excess of the
available memory on the server. Although the Xtended Services Platform
automatically adjusts each application’s memory allocation to fit into the available
memory, the Xtended Services Platform server would no longer be running at the
recommended memory level for each application and would not follow the generic
capacity models captured in the Cisco BroadWorks System Capacity Planner [2].
 Application cross-impacts: Applications have different behavior and traffic
footprints. For example, with the Device Management application, it would be
considered “normal” for a mass reboot of phones to occur and the Xtended Services
Platform would not be able to handle all the requests simultaneously. It would be
forced to refuse requests when the Xtended Services Platform is at capacity, in which
case the phone would try another Xtended Services Platform or it would try again
later. If that same Xtended Services Platform were supporting Call Center clients,
these clients would not be able to access the Xtended Services Platform during the
mass device reboot, thus denying service to the client, which would be an
unacceptable grade of service for that specific application.
As a best practice, Cisco recommends grouping different applications together into
different Xtended Services Platform deployment configurations based on application
functions. The following lists the generic Xtended Services Platform deployment
configurations:
 General User Service and Call Control/Presence
 Business Clients
 Device Management Access
 CTI-Connect
 Webex
 UC-One Communicator
 Notification Push Server (NPS)

4.7.3.1 General User Service and Call Control/Presence Deployment Configuration


The function of the configuration is to support Xtended Services Platform applications that
provide general users with access to service control and call control/presence applications
such as the web portal, Cisco BroadWorks clients, and/or third-party clients. This
deployment configuration would generally support the following applications:
 CommPilot Web Portal (optional)
 Xsi-Actions
 Xsi-Events
 OCI over SOAP (optional)

 Meet-Me Moderator (optional)
The General User Service and Call Control Xtended Services Platforms would also be the
home for all low-penetration applications (for example, Business Client and Device
Management applications in low-volume deployments).

4.7.3.2 Xtended Services Platform: Business Client Services


This deployment configuration provides for the segregation of Cisco BroadWorks
Receptionist and Call Center clients to dedicated Xtended Services Platforms to support a
high grade of service for those applications over and above the general user applications.
A deployment planning to scale to greater than 2,000 business clients should consider this
deployment model. This deployment configuration would generally support the following
applications:
 Cisco BroadWorks Call Center
 Cisco BroadWorks Call Center Reporting
 Cisco BroadWorks Receptionist
 Xsi-Actions
 Xsi-Events

4.7.3.3 Xtended Services Platform: Device Management Access


This deployment configuration provides for the segregation of Cisco BroadWorks Device
Management applications to dedicated nodes to protect other applications. A deployment
planning to scale to greater than 10,000 phones should consider this deployment model.
This deployment configuration would generally support the following applications:
 Cisco BroadWorks Device Management
 Cisco BroadWorks Device Management TFTP (optional)

4.7.3.4 Xtended Services Platform: CTI-Connect


The CTI-Connect deployment configuration interconnects the Cisco BroadWorks
Application Server to a Genesys platform to support route point functionality. This
deployment configuration would generally support the following applications:
 Xsi-Actions
 Xsi-Events

4.7.3.5 Xtended Services Platform: Webex


This deployment configuration provides for the segregation of Webex applications to
dedicated Xtended Services Platforms to protect other Xtended Services Platform
applications. This deployment configuration generally supports the following applications:
 Xsi-Actions
 Xsi-Events
 Authentication Service
 Call Settings Webview
 Device Management
 Computer Telephony Integration (CTI) for Xsi-Events



4.7.3.6 Xtended Services Platform: UC-One Communicator
This deployment configuration provides for the segregation of UC-One Communicator
applications to dedicated Xtended Services Platforms to protect other Xtended Services
Platform applications. This deployment configuration generally supports the following
applications:
 Xsi-Actions
 Xsi-Events
 Call Settings Webview

4.7.3.7 Xtended Services Platform: Notification Push Server


This deployment configuration provides a web application that is used by the Cisco
BroadWorks Messaging Server (UMS) to push notifications to UC-One applications,
targeting mobile and tablet devices running the iOS and Android operating systems.

4.8 Cisco BroadWorks Device Management Profile Server


The Profile Server provides a generic container platform for running network core
applications. The Profile Server comes with a number of preinstalled applications that can
be deployed and share the same server resources. There are two types of applications:
 Web applications: HTTP/HTTPS-based applications that run in a Web Container. All
web applications run in the same Web Container and share its resources.
 Cisco BroadWorks applications: Java or C++ based applications providing some
specific functions. Each Cisco BroadWorks application is independent and has its own
resource and dimensioning requirements.
Profile Servers are deployed in a cluster. File replication is used between Profile Servers
to keep files synchronized within the cluster. To increase capacity, additional nodes are
added to the cluster. There can be up to ten Profile Server nodes per Profile Server
cluster.

4.8.1 Profile Server Deployment Configurations


As a generic container platform, in theory, the Profile Server could support the deployment
of all applications on the same Profile Server. However, in practice, a combined
deployment model can be problematic for a few reasons:
 Memory allocation: Each Cisco BroadWorks application has its own default memory
allocation footprint, as does the Web Container application (in which the web
applications run). Certain combinations of deployed applications are in excess of the
available memory on the server. Although the Profile Server automatically adjusts
each application’s memory allocation to fit into the available memory, that Profile
server would no longer be running at the recommended memory level for each
application and would not follow the generic capacity models captured in the Cisco
BroadWorks System Capacity Planner [2].
 Application cross-impacts: Applications have different behavior and traffic footprints
and as such, can impact each other.
As a best practice, Cisco recommends grouping different applications together into
different Profile Server deployment configurations based on application functions. The
following lists the generic Profile Server deployment configurations:
 Device Management File Repository
 Enhanced Call Center Reporting

 Provisioning Application (XS mode)

4.8.1.1 Profile Server: Device Management Repository


This deployment configuration provides for the segregation of Cisco BroadWorks Device
Management onto dedicated Profile Servers. It provides a
central repository for the storage of device files. It offers an HTTP and Web-based
Distributed Authoring and Versioning (WebDAV) interface for file operations, and it
supports authentication and file replication to other Profile Servers. A deployment planning
to scale to greater than 10,000 phones should consider this deployment model. This
deployment configuration would generally support the Cisco BroadWorks File Repository
application.

4.8.1.2 Profile Server: Enhanced Call Center Reporting


This deployment configuration provides for the segregation of Enhanced Call Center
Reporting applications to dedicated Profile Servers. A deployment planning to scale to
greater than 2,000 Call Center clients should consider this deployment model. This
deployment configuration would generally support the following applications:
 Cisco BroadWorks Call Center Reporting
 Cisco BroadWorks Call Center Reporting Repository
 Cisco BroadWorks Call Center Reporting Database Management
The Profile Server workload, when running the Cisco BroadWorks Enhanced Call Center
Reporting applications, is rated in simultaneous report requests per second, which is a
function of the available CPU. For example, a 12 CPU Unit Profile Server with 12 GB of
RAM would be rated at 750 RPS. The Cisco BroadWorks System Capacity Planner [2]
uses this report request rating and average report generation figures when modeling
Profile Server capacity.

4.8.1.3 Profile Server: Provisioning Application (XS Mode)


This deployment configuration supports the provisioning application in XS mode. The
Provisioning Server in this mode connects to the Database Server. For guaranteed
performance, this node cannot be combined with other Profile Server applications. The
Cisco BroadWorks System Capacity Planner [2] uses the expected provisioning
transactions per second (administrative and user) to model the Profile Server capacity.

4.9 Database Server


The Database Server provides a generic Oracle 11g Release 2 (11gR2) database platform
that can be used to support one or more schemas. A Database Server configuration
consists of one or more server platforms that run the Oracle instance, plus storage (internal
disks or an external Storage Area Network (SAN)) to support the database files.
The number of Database Server instances required for a given solution can vary from one
to four platforms depending on the deployment model. A Database Server can be
deployed in one of four deployment models:
 Single Instance – A non-redundant single Database Server (DBS) with internal or
external storage.
 Cluster Redundancy – Oracle Real Application Cluster (RAC) providing two Database
Servers working in a load-balanced cluster using a shared storage SAN.
 Standby Redundancy – Oracle Data Guard providing two independent, synchronized
DBSs, each with its own storage. One DBS is active, while the other is on standby.

 Cluster and Standby Redundancy – This includes both the Oracle RAC and the
Oracle Data Guard.

4.9.1 Database Server Dimensioning


Database Server dimensioning and capacity are different for each schema/application. In
general, a database model is required for proper dimensioning regardless of the
application. This model requires the following:
 Database Write Model: Number of input/output operations per second (IOPS) and
throughput requirements to support busy hour traffic load under different input
parameters
 Database Read Model: Number of IOPS and throughput requirements to support
busy hour database “reads” under different input parameters
 Database Storage Model: Database sizing requirements driven by the number of
tables/rows and data retention time frames
Once the database model is defined and validated, the Database Server hardware footprint
(CPU, memory) and the storage requirements (IOPS and throughput) can be derived for the
given application.

4.9.2 Database Server Disk Requirement


The Database Server can use either internal or external storage for the database. The
Cisco BroadWorks System Capacity Planner [2] provides a view of the required IOPS, the
data throughput, and the storage space that the storage system needs to support based
on the expected scaling of the solution. The storage system needs to be engineered to
meet those requirements. As a general rule of thumb, a 10K RPM disk can support 150
IOPS, while a 15K RPM disk can support 180 IOPS. The DBS provides a tool to estimate
disk capabilities post-schema deployment. For more information, see section 10.6 Device
Management Growth Estimation Procedure.

4.9.2.1 External Array Minimum Requirements


Cisco has validated the Database Server with external SAN storage. The minimum SAN
requirements are:
 Interface: 4 Gigabits per second (Gbps) Fiber Channel/SAS
 Controller caching: 1 GB

4.10 Access Mediation Server


The Access Mediation Server enables support for the Skinny Call Control Protocol
(SCCP) on Cisco BroadWorks. The Access Mediation Server provides conversion
between SCCP and SIP, thereby allowing the Cisco BroadWorks Application Server to
control and route calls to and from SCCP devices connected to the Access Mediation
Server.
The Access Mediation Server is deployed in an ACTIVE-ACTIVE configuration. This is the
standard deployment in which two Access Mediation Servers are collocated and are
installed on the same subnet (site redundancy). This allows for seamless Real-Time
Transport Protocol (RTP) failover. The Access Mediation Server also supports a
geographically redundant configuration without site redundancy/RTP failover. Both
mechanisms can be used together; however, this requires twice the number of servers.
The Access Mediation Server is dimensioned in terms of media ports as well as resources
for the expected load of phone registrations and calls.



4.11 Service Control Function
The Service Control Function (SCF) provides inter-networking capabilities, allowing the
Cisco BroadWorks Application Server to deliver mobile calls to the Voice over Internet
Protocol (VoIP) domain. The SCF communicates with the mobile operator’s network over
standard protocols, such as CAMEL, GSM MAP, ANSI-41, and WIN. It resides in the
mobile operator’s network or in a hosting center.
The Service Control Function Server is deployed in a farm configuration with up to 3
nodes and is dimensioned in terms of overall transactions per second.

4.12 Messaging Server


The Messaging Server (UMS) is an Extensible Messaging and Presence Protocol (XMPP)
server that can be deployed in the Cisco BroadWorks environment. It provides one-on-one
messaging, group chat, vCard support, shared contacts, roster list presence, and
federation with other XMPP servers.
An Application Server cluster can utilize multiple Messaging Server pairs. A Messaging
Server pair can be shared by more than one Application Server cluster. Prior to Release
22.0, the TimesTen DSN size is limited to 24 GB.
In Release 22.0, the MariaDB database can be installed on a Network Database Server
(NDS). The NDS cluster cannot be shared across Messaging Server (UMS) pairs.
The Messaging Server is dimensioned by the number of simultaneously connected
UC-One clients. It is primarily limited by memory.

4.13 Sharing Server


The Sharing Server (USS) provides desktop sharing capability to the UC-One client
Release 20.0+. It is deployed in a farm configuration with N+1 redundancy. A single farm
can be used by the entire system.
The Sharing Server is dimensioned in terms of the number of desktop share sessions.
The Sharing Server is primarily limited by memory.

4.14 WebRTC Server


The Cisco BroadWorks WebRTC Server (WRS) is a dual-homed server that terminates
signaling and media from WebRTC-enabled browsers on the public side and provides
interworking of calls originated by WebRTC-enabled browsers to trusted SIP elements on
the private side. It is deployed in a farm configuration with N+1 redundancy. A single farm
can be used by the entire system.
A given WebRTC node supports a fixed number of audio and video transcoding sessions.

4.15 Network Function Manager


The Network Function Manager (NFM) provides centralized licensing starting in Release
21.0. It is deployed in a farm configuration, with a minimum of three servers required for
high availability support.

4.16 Network Database Server


The Network Database Server (NDS) provides a centralized database for use with the
Messaging Server (UMS) or Enhanced Call Logs (ECL) application. It is deployed in a
cluster of three servers for high availability support. The Messaging Server (UMS) and
ECL cannot share a Network Database Server cluster.



4.17 Video Server
The Video Server (UVS) provides My Room collaboration capability to the UC-One client.
It is deployed in a farm configuration with N+1 redundancy. A single farm can be used by
the entire system.
The Video Server is dimensioned in terms of the number of required video and audio
streams.



5 Hardware Capacities

For the descriptions of the supported hardware configurations for each Cisco BroadWorks
server, see the Cisco BroadWorks Platform Dimensioning Guide [1].
The following tables show base single server capacity based on CPU for the various
supported CPU Unit footprints, where a CPU Unit is equivalent to a CPU core or thread.
For example, a quad core E5405 CPU would be equivalent to 4 CPU Units, while a quad
core L5520 CPU with hyper-threading per core would be equivalent to 8 CPU Units.
These CPU-based numbers can be used for manual capacity planning as described in
section 6 System Capacity Planning. Numbers vary based on service usage and call
model. For more information, see the Cisco BroadWorks System Capacity Planner [2].
Intel Xeon-based Server CPU-based Capacities

Server Type | Capacity
Application Server (AS) | 18 CPS per CPU; 18 PTPS per CPU
Network Server (NS) | 188 TPS per CPU
Media Server (MS) | See section 7.1.3 Generic Port Calculation Rules.
Access Mediation Server (AMS) | 325 ports per CPU
Service Control Function (SCF) | 900 TPS
Messaging Server (UMS) | 50,000 clients
Sharing Server (USS) | 2,000 active participants
Video Server (UVS) | See section 7.2 Video Server Engineering Rules.
WebRTC Server (WRS) | 200 audio; 100 video (non-transcoding); 20 video (transcoding)
Xtended Services Platform (Xsp)/Application Delivery Platform (ADP) – Notification Push Server (NPS) | 1,500 iOS + 2,000 Android (may further be limited by connections/latency)
Xtended Services Platform (Xsp)/Application Delivery Platform (ADP) – UC-One | 25,000 connected clients (Release 21.sp9 or later); 5,000 connected clients (pre-Release 21.sp9)
Xtended Services Platform (Xsp)/Application Delivery Platform (ADP) – Webex | 25,000 connected clients (Release 21.sp9 or later); 5,000 connected clients (pre-Release 21.sp9)
Xtended Services Platform (Xsp)/Application Delivery Platform (ADP) – Xtended Services Interface (Xsi)/CTI | 150 TPS per CPU (maximum of 2,400)

NOTE 1: When a hyper-threading capable CPU is used with hyper-threading disabled, the
capacity can be multiplied by a factor of 1.5.



NOTE 2: When a Media Server, Video Server, or WebRTC Server is deployed on a box with
hyper-threading enabled, the number of CPUs used for capacity calculations is divided by two.
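As a minimal illustration of how the per-CPU ratings and notes above can be applied in a manual estimate, the following Python sketch multiplies a rating from the table by the CPU Unit count and applies the NOTE 1 adjustment. It is not a Cisco tool; the function name and dictionary are assumptions.

# Illustrative only: a few per-CPU-Unit ratings copied from the table above.
RATING_PER_CPU_UNIT = {"AS_CPS": 18, "AS_PTPS": 18, "NS_TPS": 188,
                       "AMS_ports": 325, "Xsi_TPS": 150}

def single_server_capacity(metric, cpu_units, ht_capable=True, ht_enabled=True):
    """Rough single-server capacity; applies NOTE 1 (x1.5 when HT-capable but disabled)."""
    cap = RATING_PER_CPU_UNIT[metric] * cpu_units
    if ht_capable and not ht_enabled:
        cap *= 1.5
    if metric == "Xsi_TPS":
        cap = min(cap, 2400)          # table maximum for Xsi/CTI
    return cap

# Example: an 8 CPU Unit Application Server (quad core with hyper-threading enabled)
print(single_server_capacity("AS_CPS", 8))   # -> 144 CPS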



6 System Capacity Planning

This section describes how to estimate system capacity and determine the number of
nodes of each type required to handle the expected configuration.
Section 6.1 Use System Capacity Planner describes the usage of the Cisco BroadWorks
System Capacity Planner [2] tool.

NOTE: Capacity planning provides an estimate of the required number of nodes and their
capacity during system planning. It is not intended as a substitute for continuous live system
monitoring based on defined system indicators as described in section 9 Server Performance
Guidelines. In addition, future growth expectations should periodically be re-evaluated based on
current system information, as described in section 10 Server Growth Estimation Procedures.

6.1 Use System Capacity Planner


A system’s capacity can be estimated using the Cisco BroadWorks System Capacity
Planner [2]. The first worksheet of the capacity planner describes the call model.
The call model can be input to the Cisco BroadWorks System Capacity Planner using
manual inputs or using one or more packaged configurations.
The Cisco BroadWorks System Capacity Planner can be used for bare metal hardware or
a virtualized system (as described in section 6.1.4 Bare Metal/Virtualized Configurations).
The Cisco BroadWorks System Capacity Planner can also be used to input key
performance indicators from a live system and perform additional capacity planning. This
is described in section 6.1.5 Use Live Data Input.

6.1.1 Use Single-Packaged Configuration


The Cisco BroadWorks System Capacity Planner supports the following packaged
configurations:
 Standard Residential/SOHO User Package with Unified Messaging
 Premium Residential/SOHO User Package with Unified Messaging
 Business Line
 Business Trunk
 Standard Enterprise User Package with Unified Messaging
 Premium Enterprise User Package with Unified Messaging
The configuration can be selected from the drop-down menu. After the configuration is
selected, the number of users, user Erlangs, and hold time can be entered. For the
Business Trunk package, the number of simultaneous calls is entered instead of the
number of users. The Apply packaged configuration(s) button is used to apply the selected
call model for the configuration.

6.1.1.1 Example 1: Standard Residential System


1) Select the Standard Residential/SOHO User Package with Unified Messaging
package from the drop-down menu for Packaged Configuration Selection 1.
2) Enter the number of users and adjust the User Erlang/Call Hold time as necessary.
3) Click the Apply Packaged Configuration(s) button.

4) Select the hardware server types for each of the Cisco BroadWorks nodes (that is, the
Application Server, Network Server, and so on).
The System Capacity Planner computes the server requirements.

6.1.1.2 Example 2: Business Trunking System


1) Select the Business Trunking package from the drop-down menu for Packaged Configuration
Selection 1.
2) Enter the number of simultaneous calls and adjust the User Erlang/Call Hold time as
necessary. The number of simultaneous calls is equivalent to the number of trunking
lines.
3) Click the Apply Packaged Configuration(s) button.
4) Select the hardware server types for each of the Cisco BroadWorks nodes (that is,
Application Server, Network Server, and so on).
The System Capacity Planner computes the server requirements.

6.1.1.3 Example 3: Standard Residential System with Additional Simultaneous Ringing Service Usage


1) Repeat the steps in section 6.1.1.1 Example 1: Standard Residential System.
2) Display the manual configuration settings by clicking on the Toggle View button in the
Manual Configuration area.
3) Update the penetration percentage for the Simultaneous Ringing service.
The System Capacity Planner computes the server requirements.

NOTE: If the Apply Packaged Configurations button is selected again, it overwrites the manual
settings from the previous step.

6.1.2 Use Multiple-Packaged Configurations


Multiple-packaged configurations can be displayed and configured using the More
Configurations button. A mix of up to five configurations is supported. The System
Capacity Planner automatically weights the profiles to determine the overall processing
and memory requirements. Note that it is assumed that the users with the various profiles
are evenly distributed across the Application Server clusters.

6.1.2.1 Example 1: System with Mixed Standard Residential and Business Trunking
1) Select the Standard Residential/SOHO User Package with Unified Messaging
package from the drop-down menu for Packaged Configuration Selection 1.
2) Enter the number of residential users.
3) Click the More Configurations button.
4) Select the Business Trunking package from the drop-down menu for Packaged
Configuration Selection 2.
5) Enter the number of simultaneous trunking calls.
6) Click the Apply Packaged Configurations button.
7) Select the hardware server types for each of the Cisco BroadWorks nodes (that is,
Application Server, Network Server, and so on).

The System Capacity Planner computes the server requirements.

6.1.2.2 Example 2: System with Mixed Standard Residential and Business Trunking but with
Additional Simultaneous Ringing Service Usage
1) Repeat the steps from the previous example.
2) Display the manual configuration settings by clicking the Toggle View button in the
Manual Configuration area.
3) Update the penetration percentage for the Simultaneous Ringing service. Note that
you must manually compute the weighting for this service. This weighting is for the
whole system and overrides the weighting computed for the mix of the packaged
configurations.
4) Select the hardware server types for each of the Cisco BroadWorks nodes (that is,
Application Server, Network Server, and so on).
The System Capacity Planner computes the server requirements.

NOTE: If the Apply Packaged Configurations button is selected again, it overwrites the manual
settings from the previous step.

6.1.3 Use Manual Configuration


Alternate call models can also be input manually to the Cisco BroadWorks System
Capacity Planner [2]. The call model can initially be derived from a packaged configuration
and then altered by toggling the view of the Manual Configuration area. Note that the
manual configuration applies to the entire system and not to a particular profile.

6.1.4 Bare Metal/Virtualized Configurations


Once the call model is entered, the next worksheets are used to select the hardware
configuration. The planner then outputs the number of servers required to support the
load.
The “Capacity Planner” worksheet is used for bare metal hardware. On this worksheet, the
hardware type and configuration (CPU units and memory) are selected from drop-down
lists, which correspond to the resources defined in the Cisco BroadWorks Platform
Dimensioning Guide [1].
The “Virtualized Planner” worksheet is used for a virtualized system. On this worksheet,
the hardware is selected automatically and can be overridden by user inputs. In automatic
mode, the hardware is selected as a minimum configuration if it can support the load.
Larger or smaller servers can be selected using drop-downs for CPU Units and Memory.
The Virtualized Planner also computes the total disk space requirements (storage,
throughput, and IOPS) as well as the required network bandwidth for each node. Finally,
on the bottom line, the Virtualized Planner reports the total resource requirements for the
system. For the minimum and maximum specifications for each virtualized Cisco
BroadWorks node, see the Cisco BroadWorks Platform Dimensioning Guide [1]. For
virtualization configuration and supported virtual machines, see the Cisco BroadWorks
Virtualization Configuration Guide [3].
The Cisco BroadWorks System Capacity Planner [2] also supports a mixed configuration
with bare metal Media Server, Video Server, and/or WebRTC Server but all other server
types virtualized. This is configured on the “Call Model” worksheet.



6.1.5 Use Live Data Input
The Cisco BroadWorks System Capacity Planner [2] can be used to take key
performance indicators from a live deployed system and perform additional capacity
planning such as selecting new hardware or changing the call model.
The capacity planning of changes to the call model from live data depends on accurately
capturing the current call model.
The following data should be collected and used as input to the planner:
 Current Application Server hardware (CPU and Memory).
 Execution Server heap size: Collected as described in section 10.2.1.1 Execution
Server Post Full Collection Heap Size.
 Database permanent size: Collected as described in section 10.2.1.2 Database
Permanent Size.
 Busy CPU percentage: Collected during the busy hour as described in section
10.2.1.4 Worst Case Busy Hour CPU.
 Number of users: Collected as described in section 10.2.1.6 Provisioned Users.
The Do Estimate button performs the growth estimate. The Clear button removes all live
data inputs and changes the planner to its original state.



7 System Engineering Rules

This section outlines system engineering rules for Cisco BroadWorks servers.

7.1 Media Server Engineering Rules

7.1.1 Media Server Resources


The following Media Server resources control capacity and scaling:
 Processor allocation – This is the number of available processes for each service.
This is based on the number of CPU cores and can be configured via the CLI.
 RTP ports – This represents the maximum number of simultaneous RTP streams that
the Media Server can support. This value has a maximum of “4000” and can be
lowered via CLI configuration. The maximum is “1000” for virtualized configurations.
 Audio ports – This is a dynamically calculated value that is based on platform
capabilities that define the upper boundary of generic audio ports that a Media Server
can support. It represents the CPU limit based on profiling. The maximum is “9500” for
Release 20.0 and later. The maximum is “4000” prior to Release 20.0.
 Video ports – This is a dynamically calculated value based on platform capabilities. It
defines the upper boundary of generic video ports that a Media Server can support.
Media Server resources define the limits that apply to the following services: Interactive
Voice Response (Audio/Video), Conferencing (Audio/Video), Fax Messaging, Stream
Mixer (used for call recording), and Repeaters (used for Lawful Intercept and In-Call
Service Activation). Resource allocations can be tuned to favor one service over another.
For example, a Media Server can be tuned to be IVR-centric or conferencing-centric by
optimizing the capacity numbers for that service.
The main controls that affect resource allocation are as follows:
 Number of audio processes
 Number of video processes (default = 1 if more than 8 CPUs; otherwise, 0)
 IVR resource ratio (default = 75)
 Video transcoding resource ratio (default = 100)

7.1.2 Media Server Processor Allocation Rules


The Media Server allocates CPUs to specific processes. The number of processes is a
major factor in port allocation. CPUs are reserved for the Operating System, Repeater,
Stream Mixer, and Control Channel Framework (CFW) processes. By default, this
reserves 4 CPUs. The remaining CPUs are divided between Audio and Video processes.
The number of video processes versus audio processes on a given Media Server is
configured via the numVideoProcesses and numAudioProcesses parameters on the
Media Server CLI. By default, both numVideoProcesses and numAudioProcesses
parameters are set to “automatic”, which means that only one video process is enabled at
start-up. Setting either of the two process parameters results in a change in the other
based on the remaining available CPUs (even though the CLI still references that
parameter as “automatic”). In general, only one should be set and the other should be
automatically calculated.
 If numAudioProcesses is set to a value greater than “3”, then numVideoProcesses =
number of CPUs – reserved CPUs – numAudioProcesses.

 If numAudioProcesses is set to a value less than or equal to “3”, then
numVideoProcesses = number of CPUs – reserved CPUs – 2.
 If numVideoProcesses is set, then numAudioProcesses = number of CPUs – 1 –
numVideoProcesses.
Video processes are further divided into processors dedicated to either transcoding or
conferencing. The CLI parameter, videoTranscodingResourcesRatio, sets the percentage
of video processors dedicated to transcoding (the remaining processors are dedicated to
video conferencing). The videoTranscodingResourcesRatio defaults to “100”.
For a video conferencing-only Media Server, the videoTranscodingResourcesRatio should
be set to “0”.
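The following Python sketch illustrates the allocation rules described above. It is not Media Server code; the function name is an assumption, and the rounding used when splitting video processes between transcoding and conferencing is assumed for illustration.

RESERVED_CPUS = 4   # OS, Repeater, Stream Mixer, and CFW processes (default reservation)

def allocate_processes(num_cpus, num_audio=None, num_video=None,
                       video_transcoding_ratio=100):
    """Set only one of num_audio or num_video; the other is derived per the rules above."""
    if num_audio is not None:
        if num_audio > 3:
            num_video = num_cpus - RESERVED_CPUS - num_audio
        else:
            num_video = num_cpus - RESERVED_CPUS - 2
    elif num_video is not None:
        num_audio = num_cpus - 1 - num_video
    transcoding = round(num_video * video_transcoding_ratio / 100)   # rounding assumed
    conferencing = num_video - transcoding
    return {"audio": num_audio, "video_transcoding": transcoding,
            "video_conferencing": conferencing}

# Example: 16 CPUs, 8 audio processes, 25% of video processes dedicated to transcoding
print(allocate_processes(16, num_audio=8, video_transcoding_ratio=25))
# -> {'audio': 8, 'video_transcoding': 1, 'video_conferencing': 3}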

7.1.3 Generic Port Calculation Rules


Media Servers are limited to a number of generic ports that are derived using the following
algorithm:
 If the Media Server hardware platform is listed in the hardware_cap.csv file located in
/usr/local/broadworks/bw_base/conf, the number of Media Server ports for this server
is equal to the lesser of the hardware_cap.csv file value or the numPorts= value listed
in the license.txt file read at server start-up.
 If the Media Server hardware platform is not listed in the hardware_cap.csv file
located in /usr/local/broadworks/bw_base/conf, the number of Media Server ports for
this server is equal to the lesser of the numPorts= value listed in the license.txt file or the
port number calculated from the following CPU and memory-based formulas:
The CPU-derived port number is based on the following formula:
CpuMaxPorts = #_of_CPU X CPU_Frequency/3
… where the “#_of_CPU” and “CPU_Frequency” are those provided by cat
/proc/cpuinfo. “CPU_Frequency” is expressed in MHz (for example, 2 GHz would be
2000 MHz).
A final memory-based port calculation is done to ensure that the server can support
the number of ports calculated based on CPU. The memory-based calculation uses
the lesser of the total physical memory on the server (AvailMem) or shared memory
(ShmMem) (both expressed in MB) and takes into account IVRResourcesRatio that
can be configured under the CLI MS_CLI/System level.
− If AvailMem < ShmMem, MemMaxPorts = (AvailMem – 1500)/(2 *
IVRResourcesRatio)
− If AvailMem > ShmMem, MemMaxPorts = (ShmMem * .75)/(2 *
IVRResourcesRatio)
… where ShmMem can be obtained using the df –k command and looking at the
/dev/shm partition size. By default, this is set to 50% of the RAM on a box with 6 GB
or less and 66% on a box with more than 6 GB.
The final maximum generic port number (maxGenericPorts) is the lesser of licensed
ports, CpuMaxPorts, and MemMaxPorts.
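The following Python sketch restates the algorithm above for illustration only. The function name and the example hardware values are assumptions, and IVRResourcesRatio is treated here as a fraction (for example, 0.75 rather than 75), which is an assumption made for the illustration.

def max_generic_ports(num_cpu, cpu_freq_mhz, avail_mem_mb, shm_mem_mb,
                      licensed_ports, ivr_resources_ratio, hardware_cap_value=None):
    """Sketch of the generic port algorithm; memory values are in MB."""
    if hardware_cap_value is not None:
        # Platform listed in hardware_cap.csv: lesser of the file value and the license
        return min(hardware_cap_value, licensed_ports)
    cpu_max_ports = num_cpu * cpu_freq_mhz / 3
    if avail_mem_mb < shm_mem_mb:
        mem_max_ports = (avail_mem_mb - 1500) / (2 * ivr_resources_ratio)
    else:
        mem_max_ports = (shm_mem_mb * 0.75) / (2 * ivr_resources_ratio)
    return int(min(licensed_ports, cpu_max_ports, mem_max_ports))

# Hypothetical server: 12 CPUs at 2.26 GHz, 24 GB RAM, /dev/shm at ~16 GB, 9500 licensed ports
print(max_generic_ports(12, 2260, 24576, 16220,
                        licensed_ports=9500, ivr_resources_ratio=0.75))   # -> 8110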

7.1.4 Audio Port Calculation Rules


The number of audio ports on a server is determined by multiplying the number of generic
ports by the ratio of audio processes to the total number of CPUs.

For example, if the maximum number of generic ports is 9000, and there are three audio
processes and 12 CPUs, then the server is allowed 2250 audio ports.
The maximum generic audio port number is also bound by a maximum port value of
“4000” prior to Release 20.0 and “9500” in Release 20.0 and later. Generic audio ports are
divided between IVR and conferencing as governed by the ivrResourceRatio.

7.1.4.1 Media Server Audio Port Weightings


Media Server port usage is weighted. A port can provide IVR or conferencing functions.
Ports for different codecs, fax, and video are weighted in proportion to the resources used
relative to the G.711 codec. The following table outlines the port weightings for different
codecs.
Codec Weighting

G.711 IVR 1

G.711 Conferencing 1

G.722 IVR 1.5

G.722 Conferencing 6

G.726 IVR 1.25

G.726 Conferencing 2

G.729 IVR 1.7

G.729 Conferencing 8

AMR-NB IVR 2.5

AMR-NB Conferencing 11

AMR-WB IVR 4

AMR-WB Conferencing 23

Video IVR 2

FAX 2

EVRC-A IVR 3

EVRC-A Conferencing 14

EVRC-NW IVR 12

EVRC-NW Conferencing 56

The number of RTP sessions is limited to 4000 on a bare metal system and 1000 on a
virtualized system.
A 4000 port Media Server could support 4000 G.711 ports or 3200 (4000/1.25)
G.726 ports.
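As an illustration of how these weightings translate into capacity, the following Python sketch reproduces the audio port ratio and the weighted-port examples above. The function names are assumptions, not BroadWorks interfaces.

# Weights copied from the table above (a subset, for illustration).
PORT_WEIGHTS = {"G.711 IVR": 1, "G.711 Conferencing": 1, "G.722 IVR": 1.5,
                "G.726 IVR": 1.25, "G.729 IVR": 1.7, "AMR-NB IVR": 2.5,
                "Video IVR": 2, "FAX": 2}

def audio_ports(generic_ports, audio_processes, total_cpus):
    """Audio ports = generic ports x (audio processes / total CPUs)."""
    return generic_ports * audio_processes / total_cpus

def sessions_supported(available_ports, codec):
    """Sessions of a given codec that fit into the available weighted ports."""
    return int(available_ports / PORT_WEIGHTS[codec])

print(audio_ports(9000, 3, 12))                # -> 2250.0 (example above)
print(sessions_supported(4000, "G.726 IVR"))   # -> 3200 (4000 / 1.25)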

7.1.5 Video Calculation Rules


Video resources are split between video transcoding and video conferencing.



In Release 20.0 and later, the number of processes available for video transcoding is
affected by the parameter videoTranscodingResourcesRatio. This is the percentage of
video processes that are available for transcoding. For example, if numVideoProcesses is
“12” and videoTranscodingResourcesRatio is “25”, then three processes are available for
video transcoding. The remaining processes are available for video conferencing.
Prior to Release 20.0, all video processes are available for video transcoding.

7.1.5.1 Video Transcoding Rules


The Media Server implements video transcoding in a set of processes separate from the
media streaming processes that are used for audio/video IVR and Conferencing. When a
media streaming process needs a video file transcoded, it hands that file off to a
transcoding process that transcodes the file, which is then played back by the media
streaming process.
The number of simultaneous transcodings is equal to the number of transcoding
processes configured on the server. Prior to Release 20.0, this is equal to the number of
video processes that are computed as described above. In Release 20.0 and later, this is
a function of the videoTranscodingResourcesRatio parameter.
For example, if numVideoProcesses is “12” and videoTranscodingResourcesRatio is “25”,
then three processes are available for video transcoding. The remaining processes are
available for video conferencing.
Although the number of simultaneous transcodings is limited to the number of video
processes, the actual transcoding time is a fraction of the file’s playback time. For example, a
30-second H.264 CIF file is transcoded in approximately two seconds. The transcoding
time to playback time ratio for different target codec values is provided in the following
table.
Environment Transcode Time to Playback Time Ratio

H.264 CIF 1/14

H.264 QCIF 1/23

H.263 CIF 1/41

H.263 QCIF 1/55

Given that transcoding is a fraction of playback time, a given transcoding process is only
involved in a fraction of the actual call hold time and can thus successively deal with many
calls within an average hold time. For example, if the average call hold time for a Media
Server call requiring transcoding is 30 seconds with a 20-second playback file requiring
H264 CIF transcoding, the transcoding time for the 30-second call would be (20/14) 1.4
seconds. Therefore, in that 30-second call hold time window, this transcoding process
could theoretically handle 21 other transcodings given an even distribution of calls.
In addition, the Media Server caches transcoded files. Therefore, a file that was previously
transcoded does not require re-transcoding. This reduces the number of actual
transcodings required on the system as greetings and prompts caching hits a steady state
over a period of time.
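For illustration only, the following Python sketch reproduces the transcoding throughput reasoning above; the function name is an assumption.

# Transcode-time-to-playback-time ratios copied from the table above.
TRANSCODE_RATIO = {"H.264 CIF": 1 / 14, "H.264 QCIF": 1 / 23,
                   "H.263 CIF": 1 / 41, "H.263 QCIF": 1 / 55}

def transcodings_per_hold_time(playback_sec, hold_time_sec, codec):
    """Approximate transcodings one process can perform within one call hold time."""
    transcode_time = playback_sec * TRANSCODE_RATIO[codec]
    return transcode_time, int(hold_time_sec / transcode_time)

# A 20-second H.264 CIF greeting within a 30-second average call hold time
print(transcodings_per_hold_time(20, 30, "H.264 CIF"))   # -> (~1.4 s, ~21 per hold time)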
The Cisco BroadWorks System Capacity Planner [2] can be used to help calculate the
number of Media Servers and video processes per Media Server that would be required
based on the following:
a) target transcoding environment

b) overall video penetration
c) video-enabled service penetration rate, and
d) average file size

7.1.5.2 Video IVR Rules


Video IVR consumes two audio IVR ports.

7.1.5.3 Video Conferencing Rules


The number of video conferencing ports is calculated from the resources remaining on the
server after allocating resources for audio IVR/conferencing and video transcoding.
The number of processes available for video conferencing is affected by the
videoTranscodingResourcesRatio parameter. For example, if numVideoProcesses is “12”
and videoTranscodingResourcesRatio is “25”, then nine processes are available for video
conferencing.
The number of video conferencing ports is calculated from the overall port number
calculated above and is equal to the number of ports multiplied by the ratio of video
conferencing processes to the total number of processes. For example, if 9 out of 16
CPUs are used for video conferencing and there are 9000 generic ports, then 5062 video
conferencing ports are available.
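The following Python sketch illustrates the calculation just described; the function name is an assumption.

def video_conferencing_ports(generic_ports, conferencing_processes, total_cpus):
    """Ports = generic ports x (video conferencing processes / total CPUs)."""
    return int(generic_ports * conferencing_processes / total_cpus)

# 9 of 16 CPUs used for video conferencing with 9000 generic ports (example above)
print(video_conferencing_ports(9000, 9, 16))   # -> 5062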

7.1.5.4 Media Server Video Conferencing Port Weightings


The following table outlines the port weightings for different video codecs.
Video conferencing port weights are measured separately for conferencing services and
monitoring services at the following steps in resolution.
Resolution Conferencing Services Monitoring Services

CIF 70 195

4 CIF 185 430

720p HD 330 850

7.1.6 View Media Server Port Assignments


The number of ports assigned at Media Server start-up can be viewed in the msfeXX.log
file created at start-up, located in /var/broadworks/logs/mediaserver0.

7.1.7 Playout and Recording Memory Usage Rules


The Media Server requires memory for media playing and recording. The IVR memory
maximum is statically defined under MS_CLI/Applications/MediaStreaming/Service/IVR>
parameter memorySize. When memorySize is set to “automatic”, the Media Server
attempts to calculate IVR memory based on the available shared memory on the server.
Memory usage depends on the codec sampling rate used in the call.
 G.711 sampling rate – 8 KBps
 G.726 sampling rate – 4 KBps
 G.729 sampling rate – 1 KBps
 AMR sampling rate – 1.5 KBps
 Video sampling rate – 80 KBps



The following describes how the Media Server memory behaves for audio and video
playout, as well as voice messaging recording:
Playout Memory Requirements: IVR memory usage for IVR playout actions such as
announcements or treatments, Music On Hold, or IVR prompts is as follows:
Memory_Usage = length_of_file_in_sec x (Codec_Sampling_Rate)
This works out to approximately 500 KB per minute for G.711 audio and 5 MB per minute
for video. IVR memory is released as soon as the file playout ends.
Voice Mail Recording: IVR memory usage for voice mail recording is as follows:
memory_usage = (length_of_greeting_in_sec x Codec_Sampling_Rate) +
(min_duration_of_recording_in_sec x (2*Codec_Sampling_Rate))
… where min_duration_of_recording_in_sec is 60 seconds
The Media Server preallocates 60 seconds’ worth of memory for each recording. Once used
up, the Media Server allocates memory in 30-second increments until the recording ends
or maximum recording length (configured on the Application Server) is reached. In
general, 60 seconds is a good value to use in calculations for general recording memory
usage.
On greeting playout, memory is released when the greeting playout ends. For memory
used by recording, memory is released when the recording sessions end (around “send
mail” time).
A good rule of thumb for a G.711 voice mail recording is 1 MB per recording.
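For illustration only, the following Python sketch applies the playout and recording memory formulas above (rates in KB per second of media, results in KB); the function names are assumptions.

SAMPLING_RATE_KBPS = {"G.711": 8, "G.726": 4, "G.729": 1, "AMR": 1.5, "video": 80}

def playout_memory_kb(length_sec, codec):
    """Memory_Usage = length_of_file_in_sec x Codec_Sampling_Rate."""
    return length_sec * SAMPLING_RATE_KBPS[codec]

def recording_memory_kb(greeting_sec, codec, min_recording_sec=60):
    """memory_usage = greeting x rate + min_duration_of_recording x (2 x rate)."""
    rate = SAMPLING_RATE_KBPS[codec]
    return greeting_sec * rate + min_recording_sec * 2 * rate

print(playout_memory_kb(60, "G.711"))      # -> 480 KB (about 500 KB per minute)
print(recording_memory_kb(20, "G.711"))    # -> 1120 KB (about 1 MB per recording)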

7.2 Video Server Engineering Rules


Video Servers are limited to a number of generic ports that are derived using the following
algorithm:
MaxPorts = #_of_CPU X CPU_Frequency/3
… where the “#_of_CPU” and “CPU_Frequency” are those provided by cat
/proc/cpuinfo. “CPU_Frequency” is expressed in MHz (for example, 3 GHz would be
3000 MHz).
The Video Server, like the Media Server, also uses parameters for the number of video
processes versus audio processes. This is by default set to “Automatic”, which means that
the numVideoProcesses is set to the number of CPUs minus 3 and the
numAudioProcesses is set to “2”.
The Video Server uses port weightings similar to the Media Server to determine the
number of supported RTP streams.
Resolution Weight

CIF or smaller 120

4CIF or smaller 345

720p HD or smaller 520



Audio codecs vary in computational complexity and are thus weighted differently. There is
a separate set of weights for audio mixing versus IVR. The System Capacity Planner does
not currently take IVR usage into account because it is assumed to be minimal compared
to the mixing usage.
Codec Mixing Weight IVR Weight

u-law, A-law 1 1

G.722 9.3 1.8

G.729 8 1.7
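For illustration only, the following Python sketch combines the Video Server port formula with the resolution weightings above. The function names and the 16-CPU/3-GHz example values are assumptions.

VIDEO_STREAM_WEIGHT = {"CIF": 120, "4CIF": 345, "720p HD": 520}

def video_server_max_ports(num_cpu, cpu_freq_mhz):
    """MaxPorts = #_of_CPU x CPU_Frequency / 3 (CPU_Frequency in MHz)."""
    return num_cpu * cpu_freq_mhz / 3

def supported_video_streams(max_ports, resolution):
    """RTP video streams of a given resolution that fit in the port budget."""
    return int(max_ports / VIDEO_STREAM_WEIGHT[resolution])

ports = video_server_max_ports(16, 3000)                  # 16 CPUs at 3 GHz -> 16000 ports
print(ports, supported_video_streams(ports, "720p HD"))   # -> 16000.0 30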

7.3 SNMP Guidelines


All Cisco BroadWorks servers contain an SNMP agent that generates traps and can be
used for polling performance measurements. This section outlines SNMP agent-specific
guidelines.
For SNMP trap generation, each Cisco BroadWorks server should have alarm throttling
enabled under the CLI level CLI/Monitoring/Alarm/Threshold/Default set to “5” traps per
second.
For SNMP polling, the recommended maximum “gets” per second depends on the server
platform.
Platform Maximum SNMP GETS Per Second

Intel Xeon 15 per CPU unit

7.4 Replication Bandwidth Requirements


Both the Application Server and Network Server use database and file replication across
potentially geographically distributed servers. The replication bandwidth requirements are
detailed in this section.

7.4.1 Application Server Replication Bandwidth Requirements


The Application Server replicates the following dynamic data: Basic call logs, registrations,
subscriptions, and media files. In addition, provisioned subscriber data (for example,
users, groups, user service configuration) is also replicated.
A general guideline for replication bandwidth is approximately 1,500 Kbps per Application
Server cluster.
For a detailed calculation specific to a given deployment, the amount of bandwidth can be
computed based on traffic volume.
Approximately 1400 bytes of bandwidth is required to replicate a single call log or
registration. The bandwidth to replicate media files depends on the frequency at which
these files are uploaded and their size.
These are the busy hour assumptions for the following example:
 33 calls per second (CPS)
 50% external incoming calls
 50% external outgoing calls
 50% of users with Basic Call Log feature assigned
 15 registrations per second (RPS)

 Media files:
− Number of files uploaded in busy hour = 1000
− Average media file size = 160 KB (20 seconds u-Law)
 Provisioning
− Number of provisioning modification transactions per second = 6.6
− Number of bytes per transaction = 1,000
Value | Formula | Example

Steady State Replication Bandwidth | 1 KB per second | 1 KB per second

Call Replication Bandwidth | (Percent incoming network calls + Percent outgoing network calls + 2 * Percent internal calls) * CPS * Basic call log feature penetration * 1400 | = (0.5 + 0.5 + (2 * 0)) * (33 * 0.5 * 1400) = 23.1 KB per second

Registration Replication Bandwidth | RPS * 1400 | = 15 * 1400 = 21 KB per second

File Replication Bandwidth | (Number of files uploaded in busy hour / 3600) * Average file size | = (1000 / 3600) * 160 = 44 KB per second

Provisioning Replication Bandwidth | Provisioning modification TPS * 1 KB | = 6.6 * 1 KB = 6.6 KB per second

Total Replication Bandwidth | Sum of above values | = 1 + 23.1 + 21 + 44 + 6.6 ≈ 96 KB per second

In the previous example, the Application Server busy hour peer-to-peer bandwidth
requirements would be approximately 96 KBps or (96 KBps * 8 bits) ≈ 768 Kbps.
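For illustration only, the following Python sketch reproduces the calculation above (KB is treated as 1000 bytes, as in the worked example); the function name is an assumption.

def as_replication_kbps(cps, pct_in, pct_out, pct_internal, call_log_penetration,
                        reg_per_sec, files_per_busy_hour, avg_file_kb, prov_tps,
                        steady_state_kbps=1):
    """Busy-hour Application Server replication bandwidth in KB per second."""
    call_kbps = (pct_in + pct_out + 2 * pct_internal) * cps * call_log_penetration * 1.4
    reg_kbps = reg_per_sec * 1.4
    file_kbps = files_per_busy_hour / 3600 * avg_file_kb
    prov_kbps = prov_tps * 1.0
    total = steady_state_kbps + call_kbps + reg_kbps + file_kbps + prov_kbps
    return total, total * 8        # KB per second and Kbps

# Busy-hour assumptions from the example above
print(as_replication_kbps(33, 0.5, 0.5, 0, 0.5, 15, 1000, 160, 6.6))
# -> roughly (96 KB per second, 770 Kbps)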

7.4.2 Network Server Replication Bandwidth Requirements


The Network Server has less demanding bandwidth requirements. The data that is
replicated is user hosting information (that is, whether the user is hosted on the primary or
secondary server). This hosting information only changes when a user migrates from one
Application Server peer to the other (for example, when there is a primary Application
Server failure). In addition, this migration replication data is “bursty” in nature. When a
primary Application Server fails, each user’s device migrates to the secondary server on
its next call; the Network Server therefore receives migration reports from the secondary
server at the same rate as user calls migrate to the secondary server, and replicates them
at that same rate. (Note that the Network Server does have throttling that limits user
migration replication events to 100 per second.) This migration update ends when all
users have migrated to the secondary server. For example, if a primary Application Server
with 50 K users fails at busy hour with 33 CPS, it would take ~(50 K/33) or ~1,500 seconds
until all users are migrated to the secondary server. The Network Server replication traffic
would increase during those 1,500 seconds and return to normal afterwards. Replication of
this information is approximately 560 bytes per migration.
The following example calculates the bandwidth requirements when a single primary
Application Server fails at busy hour. The assumptions for the following example are:
 An Application Server with 100 K users experiences a primary server failure at busy
hour when traffic rates are at 55 CPS.
 There are four Network Server nodes in the Network Server cluster.

Value | Formula | Example

Steady State Replication Bandwidth | 1 KB per second | 1 KB per second

User Hosting Replication Bandwidth | Migrations per second per cluster (up to a maximum of 100) * Number of simultaneously failed Application Server clusters * 560 | = 55 * 1 * 560 = 31 KB per second

Replication Bandwidth between the source Network Server and any one peer | Sum of above | = 31 KB + 1 KB = 32 KB per second

TOTAL Replication Bandwidth required from the source Network Server to ALL peers | (Number of Network Server nodes – 1) * Replication bandwidth to any one peer | = 32 KB * 3 = 96 KB per second

In this example, the steady state busy hour replication bandwidth between a Network
Server and its peer is ~1 KBps or 8 Kbps. In the event of a single Application Server
cluster failure resulting in a mass migration of users at busy hour, the replication bandwidth
requirement between peers would increase to 32 KBps or 256 Kbps, but only for the time
required to migrate all users (for example, ~100 K/55 CPS ≈ 1,800 seconds).
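For illustration only, the following Python sketch reproduces the failover estimate above (KB is treated as 1000 bytes); the function name is an assumption.

def ns_failover_replication(users, busy_hour_cps, ns_nodes, failed_clusters=1,
                            steady_state_kbps=1, bytes_per_migration=560):
    """Replication bandwidth (KB/s) to one peer and to all peers, plus migration window."""
    migrations_per_sec = min(busy_hour_cps, 100)     # Network Server throttles at 100/s
    to_one_peer = steady_state_kbps + (migrations_per_sec * failed_clusters
                                       * bytes_per_migration) / 1000
    to_all_peers = to_one_peer * (ns_nodes - 1)
    migration_window_sec = users / busy_hour_cps
    return to_one_peer, to_all_peers, migration_window_sec

# 100 K users failing over at 55 CPS with four Network Server nodes (example above)
print(ns_failover_replication(100_000, 55, 4))
# -> roughly (32 KB/s to one peer, 95 KB/s to all peers, ~1,818 seconds)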

7.5 Cisco BroadWorks Unified Messaging Storage Server Requirements


The Cisco BroadWorks Unified Messaging service allows for voice, video, and/or fax
messages to be deposited in a user’s account. A user can then retrieve these messages
via the Cisco BroadWorks voice portal and/or have them delivered as e-mail attachments.
The Unified Messaging service uses standard Simple Mail Transfer Protocol (SMTP) to
deposit the message to the store and either POP3 or Internet Message Access Protocol
(IMAP) for mail retrieval. As such, Cisco BroadWorks supports any standard mail server
as the storage repository for user messages.
This section outlines the engineering requirements for the Unified Messaging storage
server. It explains how to calculate the total storage and overall bandwidth required for the
entire Cisco BroadWorks system. The output from the rules can then be used to derive the
per-storage server requirements, which would depend on the Unified Messaging storage
architecture (that is, a single centralized clustered mail server or a distributed N Application
Servers to M mail servers mapping).

7.5.1 Message Storage Requirements


Cisco BroadWorks messages have the following per-message footprint:
 Voice messages (Voice_Size)
− 330 KB/minute for G729/G711 attached in dvi-adpcm WAV file format
− 660 KB/minute for G729/G711 attached in u-law WAV file format
− 2,600 KB/minute for G722
 Video messages (Video_Size) – 2.2 MB/minute
 Fax messages (Fax_Size) – 100 KB/minute



Note that the WAV file format used for email attachments in the case of low bitrate codecs
is configured on the Media Server with the sendmail8kHzWavFileDefaultFormat
parameter. To simplify the following calculations, we assume that fax messaging (if
present) is folded into standard voice messaging.

7.5.1.1 User Account Size Quota


A user’s mail server account size quota setting must be set to support the Application
Server defined “Full Mail Limit”, which defines the maximum number of storage minutes
per user. This parameter can be set at a system level via parameter
AS_CLI/Service/VoiceMsg> maxMailboxLength, and can be overwritten at the enterprise,
group, and user levels.
A user’s account quota can be derived from the per-message footprint and the
maxMailboxLength setting. It is best practice to add one minute to the maxMailboxLength
to ensure that no message is unexpectedly lost due to a quota limit being hit. The per-user
account quota setting can be calculated as follows (10-minute maxMailboxLength and u-
Law codec with dvi-adpcm email attachments are used in the example):
Voice messaging-enabled user:
voice_quota = (maxMailboxLength +1) * Voice_Size = (10 +1) * 330 KB/min = 3.6 MB
Video messaging-enabled user:
video_quota = (maxMailboxLength +1) * Video_Size = (10+1) * 2.2 MB/min = 24 MB

7.5.1.2 Total Storage Requirements


The total message storage required to support the entire system depends on the
acceptable oversubscription factor. Even though each user has an account quota set (for
example, 3.6 MB), it is unlikely that all users would fill the quota. Therefore,
dimensioning the server to support maximum quota for all users would result in unused
storage. In general, one would oversubscribe the storage to reduce unused storage. The
oversubscription factor would depend on the carrier’s requirement, but an oversubscription
factor of 50% would be conservative for a voice messaging environment.
The total storage required would be calculated as follows:
 no_users – Number of users on the system
 um_voice_penetration – Percentage of users with Unified Messaging voice
 um_video_penetration – Percentage of users with Unified Messaging video
 oversub_factor – Oversubscription percentage (for example, assuming that each user
would only use half of his or her quota, the oversub_factor would be 50%).
total_storage = (no_users * oversub_factor) * [(um_voice_penetration * voice_quota) +
(um_video_penetration * video_quota)]
For example, a 50 K residential system with 100% voice messaging penetration, per user
voice quota of 3.6 MB and an acceptable oversubscription factor of 50% would require the
following storage:
total_storage = (50 K * 0.5) * [(1 *3.6 MB) + (0 *24 MB)] = 90 GB
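For illustration only, the following Python sketch applies the quota and total storage formulas above (sizes in MB); the function names are assumptions.

VOICE_MB_PER_MIN = 0.33   # G.729/G.711 attached in dvi-adpcm WAV format
VIDEO_MB_PER_MIN = 2.2

def user_quota_mb(max_mailbox_min, mb_per_min):
    """quota = (maxMailboxLength + 1) x per-minute message size."""
    return (max_mailbox_min + 1) * mb_per_min

def total_storage_mb(users, oversub_factor, voice_penetration, video_penetration,
                     max_mailbox_min=10):
    voice_quota = user_quota_mb(max_mailbox_min, VOICE_MB_PER_MIN)
    video_quota = user_quota_mb(max_mailbox_min, VIDEO_MB_PER_MIN)
    return users * oversub_factor * (voice_penetration * voice_quota
                                     + video_penetration * video_quota)

# 50 K residential users, 100% voice messaging, 50% oversubscription (example above)
print(total_storage_mb(50_000, 0.5, 1.0, 0.0))   # -> 90750.0 MB (about 90 GB)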

7.5.2 Busy Hour Messaging Throughput


The overall bandwidth throughput requirements to/from message storage depend on a
number of variables:
 AS_CPS – Estimated busy hour calls per second.

 total_AS – Total number of Application Server clusters in the system.
 %_incoming_calls – Percentage of incoming calls (default of 50%).
 %_Unanswered_incoming_calls – Percentage of incoming calls that are unanswered
(default of 45%).
 %_Busy_incoming_calls – Percentage of incoming calls that are busy (default of 5%).
 %_Diverted_call_deposited – Percentage of unanswered/busy incoming calls that
actually result in a message deposit (default 30%).
 %_retrieval_to_deposit_ratio – Ratio of how many message retrievals there are per-
message deposit (default 50%).
 Avg_msg_length – Average length of the deposited message (default 30 seconds).
With the variables in the previous list, the number of overall messages in/out can be
calculated as follows:
msg_in/sec = (AS_CPS * total_AS) * (%_incoming_calls *
(%_Unanswered_incoming_calls + %_Busy_incoming_calls)) *
%_Diverted_call_deposited
msg_out/sec = (AS_CPS * total_AS) * (%_incoming_calls *
(%_Unanswered_incoming_calls + %_Busy_incoming_calls)) *
(%_Diverted_call_deposited * %_retrieval_to_deposit_ratio)
For example, with the following variables:
 AS_CPS – 72
 total_AS – 5
 %_incoming_calls – 50%
 %_Unanswered_incoming_calls – 45%
 %_Busy_incoming_calls – 5%
 %_Diverted_call_deposited – 30%
 %_retrieval_to_deposit_ratio – 50%
 Avg_msg_length – 30 seconds (.5 min)
The busy hour messages rates would be:
 msg_in/sec = (72 *5) *(0.5 *(0.45 +0.05)) *(0.3) = 27 messages in/sec
 msg_out/sec = (72 *5) *(0.5 *(0.45 +0.05)) *(0.3 *0.5) = 14 messages out/sec
The overall traffic throughput can be calculated per-message footprint as follows:
 bytes_in/sec = msg_in/sec * [(um_voice_penetration * (330 KB * Avg_msg_length)) +
(um_video_penetration * (2.2 MB * Avg_msg_length))]
 bytes_out/sec = msg_out/sec * [(um_voice_penetration * (330 KB * Avg_msg_length))
+ (um_video_penetration * (2.2 MB * Avg_msg_length))]
For the previous examples, the overall traffic throughput attributable to message storage
would be:
 bytes_in/sec = 27 * [(1 *(330 KB * 0.5)) + (0 *(2.2 MB * 0.5))]= 4.4 MBps
 bytes_out/sec = 14 * [(1 *(330 KB *0.5)) + (0 *(2.2 MB * 0.5))]= 2.2 MBps
It is important to note that the bytes_in/sec flows between the storage and all Media
Servers, while the bytes_out/sec flows between the storage and all Application Servers.
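For illustration only, the following Python sketch reproduces the busy hour message rate and throughput calculations above (message sizes in KB, throughput in KB per second); the function name is an assumption.

def messaging_throughput(as_cps, total_as, pct_incoming, pct_unanswered, pct_busy,
                         pct_deposited, retrieval_ratio, avg_msg_min,
                         voice_pen=1.0, video_pen=0.0):
    """Return (msg_in/sec, msg_out/sec, bytes_in KB/s, bytes_out KB/s)."""
    diverted = (as_cps * total_as) * pct_incoming * (pct_unanswered + pct_busy)
    msg_in = diverted * pct_deposited
    msg_out = diverted * pct_deposited * retrieval_ratio
    kb_per_msg = voice_pen * 330 * avg_msg_min + video_pen * 2200 * avg_msg_min
    return msg_in, msg_out, msg_in * kb_per_msg, msg_out * kb_per_msg

# Example values from the text: 72 CPS, 5 clusters, 30-second (0.5 min) average message
print(messaging_throughput(72, 5, 0.5, 0.45, 0.05, 0.3, 0.5, 0.5))
# -> (27.0, 13.5, 4455.0, 2227.5), i.e. ~4.4 MB/s in and ~2.2 MB/s out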



7.6 Service Control Function Network Requirements
The Service Control Function Server peers can be deployed in a geo-redundant fashion
and require a round-trip delay latency of <150 msec and packet loss of <1% between
peers.



8 Server Provisioning Guidelines

This section outlines Application Server and Network Server provisioning guidelines.

8.1 Application Server Provisioning Guidelines


To ensure proper scaling of an Application Server, the following provisioning guidelines
should be taken into consideration.
Items | Recommended Guideline
(Where no guideline is shown for an item, the guideline shown for the preceding item applies.)

System Level Items
 Total number of DNs | Up to three times the maximum number of subscribers
 Number of service providers/enterprises | Up to the maximum number of subscribers (Note that prior to Release 14.sp4, the maximum number of enterprises was rated at <1000.)
 Number of communication barring (fixed) profiles/network class of service | <1000
 Number of communication barring criteria
 Number of system-level OCP digit maps
 Number of criteria per communication barring profile (NOTE: digit patterns for a communication barring call type have a much larger soft limit of 10K) | <100

Service Provider/Enterprise Items
 Number of groups | Up to the maximum number of subscribers
 Number of devices | <1000
 Number of administrators
 Number of departments
 Number of service packs
 Enterprise trunks
 Session Admission Control groups
 Number of communication barring (hierarchical) profiles
 Device profiles per Session Admission Control group
 Number of disposition/unavailable codes
 Number of enterprise trunk number ranges

Group Level Items
 Number of users | <30000
 Number of group administrators | <1000
 Number of departments
 Number of holiday and time schedules
 Auto Attendants
 Call Pickup groups
 Hunt groups
 Conference Bridges
 Call Capacity groups
 Call Centers
 Instant Group Call groups
 Series Completion groups
 Trunk groups
 Session Admission Control groups
 Authorization codes
 Users per service group (Call Pickup, Hunt Group, Call Center, Series Completion)
 Users per Account Authorization Code restricted/non-restricted list
 Device profiles per Session Admission Control group
 Number of disposition/unavailable codes
 Receptionist/Attendant Console – Number of monitored users | <500
 Digit strings for outgoing calling plan | <100
 Number of emergency zones
 Group Paging targets
 Common phone list entries

User Level Items
 Number of available phone numbers for assignment | <1000
 Number of departments to select from
 Number of available devices
 Number of time schedules
 Number of conferences
 Number of recordings
 Number of call centers for which a user is an agent/supervisor
 Number of agents per route point
 Number of number ranges/prefixes per route list
 Number of Call Notify entries | <100
 Number of Call Forwarding Selective entries
 Number of Selective Acceptance entries
 Number of Selective Rejection entries
 Number of Custom Ringback user entries
 Number of Priority Alert entries
 Number of monitored users in Phone Status Monitoring
 Number of aliases for messaging
 Number of Simultaneous Ringing Personal criteria entries
 Number of phone numbers in a distribution list
 Number of personal phone list entries
 Active locations for Simultaneous Ring, Cisco BroadWorks Anywhere, Executive Assistant | 25

8.2 Network Server Provisioning Guidelines


To ensure proper scaling of a Network Server, the following provisioning guidelines should
be taken into consideration.
Items – Recommended Guideline

System Level Items
 Total number of DN/URLs/Login aliases – <30 million
 Total number of enterprises – <10 million
 Total number of groups – <10 million
 Total number of network elements (NEs) – <32 K

Policy Related Items
 Number of VoiceVPN dial plan entries per instance – <32 K
 Number of policy route list entries per instance – <32 K
 Number of PreCallTyping dial plan entries per instance – <32 K



8.3 Messaging Server Provisioning Guidelines
To ensure proper scaling of a Messaging Server (UMS), the following provisioning
guidelines should be taken into consideration:
 Maximum number of active chat rooms per node in a UMS cluster: 20,000
 Maximum number of configured domains in a UMS cluster: 10,000



9 Server Performance Guidelines

See the Cisco BroadWorks System Monitoring Quick Reference Guide [4].



10 Server Growth Estimation Procedures

The Cisco BroadWorks System Capacity Planner [2] packaged and/or manual
configuration inputs provide an excellent mechanism for initial system planning when no
Cisco BroadWorks infrastructure exists yet. Once production nodes are deployed and in
service, live nodes can be used to more accurately estimate the capacity of the same
hardware footprint and usage.
For cluster-type nodes like the Application Server, new cluster pairs must be added when
a cluster is full or close to full.
On farm-type nodes like the Network, Media, Profile, Xtended Services Platform, and
Application Delivery Platform servers, additional nodes can be added to support additional
growth. For farm-type nodes, at least one node is eliminated from the pool of available
resources to account for a single node failure/redundancy. When performing the growth
estimate procedure, any large difference in utilization between nodes should be
investigated since this means the load is not evenly distributed.
The growth estimate procedure can also be used to estimate the capacity of new
hardware with additional memory and/or CPUs.
Note that this procedure assumes no headroom for traffic growth or bursts. It is
based on current traffic levels and system usage. A general best practice would be to
cap systems at 85 to 90 percent usage to account for bursts and future feature usage.

10.1 General Procedure

10.1.1 Gather and Analyze Data


The growth estimate procedure is possible since a server’s key resource usage generally
increases in a linear fashion as the number of users increases (as long as the per-user
service footprint and traffic pattern remain relatively constant). The key resource
indicators that tend to be growth bottlenecks are:
 Java process heap usage
 CPU usage
 Database size and checkpoint time percentage (for servers with databases)
The indicators vary by server and are detailed in the following sections.
A server can continue to scale as long as its indicators are within acceptable ranges. For
provisioning guidelines, see section 8 Server Provisioning Guidelines. To estimate a
server’s growth potential, we first collect information on the previously mentioned key
resource indicators and the number of current users. Then we can calculate the per-user
contribution to these key indicators and plot out the linear growth of each indicator versus
users to see when each one hits a bottleneck condition (for example, reaching the
acceptable limit for that resource). Since heap size and database size are a function of
server physical memory, these bottlenecks can sometimes be addressed easily by adding
more physical memory to the server (up to the supported maximum).
To use this procedure, you need a certain level of growth to have already occurred on the
cluster so that the key resource indicators can start tracking. In general, on a small
configuration system, you could start using this procedure after at least 500 users have
been provisioned, while on a medium to large configuration, these indicators generally
start tracking in a linear fashion after 4,000 to 5,000 users have been provisioned.



10.1.2 Revisit Growth Estimate
Once a maximum growth estimate is obtained, this procedure should be rerun periodically
as the server continues to scale (approximately after every addition of 5,000 users) to
ensure that the user versus key resource indicator relationship is tracking in a linear
fashion. Initially, one would
expect a linear tracking with a margin of error no worse than +/-10%. As the server growth
reaches the higher end of the range (for example, > 50% of the original expected users),
the tracking margin of error should decrease to be within +/-5%. Any larger margin
between the original growth estimate and recalculation may be the result of a footprint
change in per-user traffic or service assignment. Key performance indicators (for example,
call attempts, calls per second, SIP messages per second, and OCI messages per
second) as defined in the Cisco BroadWorks System Monitoring Quick Reference Guide
should be collected prior to the calculation of any growth estimate to ensure that the per-
user footprint has not changed.

10.1.3 Server Health


It is critical that as your server grows, you continually monitor the health of the server to
ensure that all key resource indicators and performance indicators are within acceptable
ranges. For a view on what key indicators must be monitored, see the Cisco BroadWorks
System Monitoring Quick Reference Guide [4].

10.2 Application Server Growth Estimate Procedure


This section outlines the Application Server growth estimation procedure. This procedure
is based on two key assumptions:
 Group/user service footprint remains constant – Assigning a new service may
change the resource footprint of the server (for example, adding Cisco UC-One
desktop client support to all users).
 User traffic pattern does not change – Per-user Erlang contribution remains at the
same level as the system grows.
A change in either of these two items would require a recalculation of the growth estimate.
As described in section 6.1.5 Use Live Data Input, the Cisco BroadWorks System
Capacity Planner [2] live input area can also be used to automate the growth estimate
procedure.

10.2.1 Collect Information


The following information should be collected from the primary Application Server since it
should be handling all traffic.

10.2.1.1 Execution Server Post Full Collection Heap Size


From the /var/broadworks/logs/appserver/XSOutputXX.log files, the garbage collection
(GC) line AFTER the lines containing “CMS-concurrent-reset:” indicates the heap size
after a full garbage collection.
Example
483642.181: [CMS-concurrent-reset: 0.242/0.242 secs]
483644.184: [GC 483644.184: [ParNew: 49024K->0K(49088K), 0.1162924 secs]
842711K->804212K(4194240K), 0.1168831 secs]

“804212K” is the heap size after a full garbage collection and “4194240K” is the maximum
heap size. After this full collection, 804212/4194240 or approximately 19% of the 4 GB heap is in use.



To obtain the post-collection heap size for the purposes of this procedure, a number of
“busy-day” (days that have the highest traffic)
/var/broadworks/logs/appserver/XSOutputXX.log files should be analyzed. The full
post-collection heap sizes can be extracted using a simple awk statement such as the
following.
cat /var/broadworks/logs/appserver/XSOutput24.log |awk '$2=="[CMS-
concurrent-reset:"{getline;print}' >> post-collection_heap.out

You can now obtain the maximum post-collection heap size from the
post-collection_heap.out file. For example, if post-collection_heap.out contains the following:
231662.434: [GC 231662.434: [ParNew: 24448K->0K(24512K), 0.0856301 secs]
811655K->791912K(1572800K), 0.0858999 secs]
231773.201: [GC 231773.201: [ParNew: 24448K->0K(24512K), 0.0795525 secs]
809310K->789781K(1572800K), 0.0798135 secs]
231883.193: [GC 231883.194: [ParNew: 24448K->0K(24512K), 0.0920691 secs]
821889K->803170K(1572800K), 0.0923137 secs]
231993.503: [GC 231993.503: [ParNew: 24448K->0K(24512K), 0.0683859 secs]
812462K->793207K(1572800K), 0.0686410 secs]
232104.661: [GC 232104.661: [ParNew: 24448K->0K(24512K), 0.0772532 secs]
800871K->781200K(1572800K), 0.0774966 secs]
232221.538: [GC 232221.538: [ParNew: 24448K->0K(24512K), 0.0713419 secs]
806474K->786846K(1572800K), 0.0715919 secs]
232335.614: [GC 232335.614: [ParNew: 24448K->0K(24512K), 0.0853518 secs]
803134K->783621K(1572800K), 0.0856315 secs]
232458.578: [GC 232458.579: [ParNew: 24448K->0K(24512K), 0.0922371 secs]
794432K->774487K(1572800K), 0.0924747 secs]
232579.809: [GC 232579.809: [ParNew: 24448K->0K(24512K), 0.0705055 secs]
789489K->769703K(1572800K), 0.0707344 secs]

… then 803170 is the maximum post-collection heap size and 1572800 is the configured
maximum heap size.
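For larger log sets, the same extraction can be scripted. The following Python sketch is an assumption-level example (not part of the product) that scans a post-collection_heap.out file in the format shown above and reports the highest post-collection heap size along with the configured maximum.
# Minimal sketch, assuming the GC line format shown in the example above.
import re

PATTERN = re.compile(r'(\d+)K->(\d+)K\((\d+)K\)')

max_after, max_heap = 0, 0
with open("post-collection_heap.out") as f:
    for line in f:
        matches = PATTERN.findall(line)
        if not matches:
            continue
        # The last match on a GC line is the overall heap figure
        # (before->after(configured max)); earlier matches are the young gen.
        _, after, configured = matches[-1]
        max_after = max(max_after, int(after))
        max_heap = max(max_heap, int(configured))

print(f"maximum post-collection heap size: {max_after}K of {max_heap}K configured")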

10.2.1.2 Database Permanent Size


Database permanent size in use can be obtained using the following command sequence:
bwadmin@as1$ ttIsql

Command> dssize;

PERM_ALLOCATED_SIZE: 1310720
PERM_IN_USE_SIZE: 929406
PERM_IN_USE_HIGH_WATER: 936963
TEMP_ALLOCATED_SIZE: 348160
TEMP_IN_USE_SIZE: 4597
TEMP_IN_USE_HIGH_WATER: 71409

The PERM_IN_USE_SIZE parameter shown in the previous command sequence


indicates that 929 MB is in use out of 1.31 GB (PERM_ALLOCATED_SIZE) available or
71% (929/1310) of the available database.

10.2.1.3 Database Checkpoint Time


Database checkpoint is the process of consolidating database changes into the main data
store. Its frequency depends on the amount of database writes and the speed of the IO
subsystem. The checkpoint load must be less than 95.



The following commands can be used to collect the data. These commands should be
executed during the busy hour.
ttIsql -e "set vertical 1;call ttCkptHistory();exit;" | grep TIME | tail
-14
STARTTIME: 2017-05-10 17:54:41.669666
ENDTIME: 2017-05-10 17:57:50.911947
STARTTIME: 2017-05-10 17:49:17.356048
ENDTIME: 2017-05-10 17:52:45.544575
STARTTIME: 2017-05-10 17:44:13.228411
ENDTIME: 2017-05-10 17:47:41.254315
STARTTIME: 2017-05-10 17:38:44.844989
ENDTIME: 2017-05-10 17:42:14.103958
STARTTIME: 2017-05-10 17:33:45.266165
ENDTIME: 2017-05-10 17:37:23.757761
STARTTIME: 2017-05-10 17:28:01.245359
ENDTIME: 2017-05-10 17:31:40.128678
STARTTIME: 2017-05-10 17:22:30.117068
ENDTIME: 2017-05-10 17:25:58.114880

The average checkpoint duration (time between STARTTIME AND ENDTIME) and the
average time between checkpoints (time between ENDTIME and next STARTTIME)
should be computed. The checkpoint load is 100 times the average checkpoint duration
divided by the sum of the average checkpoint duration and average time between
checkpoints.
In the previous example, the average checkpoint duration is 208 seconds and the average
time between checkpoints is 110 seconds.
The average checkpoint load is 65.
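The following Python sketch computes the average checkpoint duration, the average time between checkpoints, and the resulting checkpoint load as described above. It assumes the ttCkptHistory output has been captured to a file; the file name ckpt_history.txt is hypothetical.
# Minimal sketch, assuming STARTTIME/ENDTIME pairs as in the output above.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

def checkpoint_load(lines):
    starts, ends = [], []
    for line in lines:
        if line.startswith("STARTTIME:"):
            starts.append(datetime.strptime(line.split(": ", 1)[1].strip(), FMT))
        elif line.startswith("ENDTIME:"):
            ends.append(datetime.strptime(line.split(": ", 1)[1].strip(), FMT))
    pairs = sorted(zip(starts, ends))          # oldest checkpoint first
    durations = [(e - s).total_seconds() for s, e in pairs]
    gaps = [(pairs[i + 1][0] - pairs[i][1]).total_seconds()
            for i in range(len(pairs) - 1)]    # end of one to start of next
    avg_dur = sum(durations) / len(durations)
    avg_gap = sum(gaps) / len(gaps)
    return 100 * avg_dur / (avg_dur + avg_gap)

with open("ckpt_history.txt") as f:            # hypothetical capture of the output
    print(f"checkpoint load: {checkpoint_load(f.readlines()):.0f}")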

10.2.1.4 Worst Case Busy Hour CPU


Worst Case Busy Hour CPU is obtained by analyzing SAR output for a number of “busy-
days”. One needs to look for the highest sum that is obtained by adding the following
numbers together: user CPU usage, system CPU usage, and, if this information is
available, “nice” CPU usage.
Note that any anomalous reading (for example, a one-time, one-day spike) should be
discarded. Wait for IO (wio) CPU should not be included. The high reading should occur
during high traffic hours. If a peak reading corresponds to a time at which an item such as
a system backup is running, it should be disregarded. If the following represents the CPU
usage peak that generally reoccurs over multiple days:
06:30:00 18 6 7 69
06:35:00 21 7 3 69
06:40:00 19 6 3 73
06:45:00 18 5 3 74
06:50:00 18 5 7 70
06:55:00 20 6 3 71

… then the Worst Case Busy Hour CPU is 28%.
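The following Python sketch illustrates this scan. It assumes the SAR columns are ordered as in the snippet above (time, %user, %system, %iowait, %idle); real SAR output may include additional columns such as %nice, which should also be added to the busy total when present.
# Minimal sketch: report the highest user+system reading (Worst Case Busy
# Hour CPU). Wait-for-IO is excluded, per the guideline above.
def worst_case_busy_cpu(lines):
    worst = 0.0
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue
        try:
            user, system = float(fields[1]), float(fields[2])
        except ValueError:
            continue                     # skip header lines
        worst = max(worst, user + system)
    return worst

sample = """06:30:00 18 6 7 69
06:35:00 21 7 3 69
06:40:00 19 6 3 73
06:45:00 18 5 3 74
06:50:00 18 5 7 70
06:55:00 20 6 3 71"""
print(f"Worst Case Busy Hour CPU: {worst_case_busy_cpu(sample.splitlines()):.0f}%")  # 28%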

10.2.1.5 Provisioning Thread Utilization


Provisioning thread utilization can be obtained by polling the busy thread count every 5
minutes during the busy hour and comparing it against the pool size.
In the module provisioningServer/psConcurrentModule/psConcurrentStats/, use the row
for bwPSMonitoringExecutorName “BCCT Worker” and compute:
bwPSMonitoringExecutorAvgActiveThreads /
bwPSMonitoringExecutorMaxPoolSize
Example:
6 BCCT Worker 12 12 6

Thread utilization is 6 / 12 = 50%.

10.2.1.6 Provisioned Users


The number of provisioned users is available via the following SNMP gauge.
executionServer/systemModule/systemStats/bwNumberOfNonVirtualUsers 35092

10.2.2 Calculate Maximum Users per Resource Indicator


Once the data has been collected, we can estimate the maximum number of users that
can be supported by each of the key resource indicators, and then we can choose the
lowest of the values. This is done by calculating the per-user contribution to each of the
measured resource indicators, and then extrapolating the number of users that can be
supported until that indicator reaches the acceptable limit, where the acceptable limit for
each indicator is as follows:
 Worst Case Busy Hour CPU idle ≥ 40%
 Database Permanent Size ≤ 90% of MAX_ALLOWED_ALLOC_PERM
 Post Full Collection Heap Size ≤ 60% of MAX_ALLOWED_HEAP
 Checkpoint Time Percentage < 95% of elapsed time
 Provisioning Thread Utilization < 85%
… where:
MAX_ALLOWED_ALLOC_PERM = 38% of MAX_SERVER_MEM
MAX_ALLOWED_HEAP = 25% of MAX_SERVER_MEM
The following example demonstrates the growth estimation procedure.
Given the following data:
Number of Users 35092
Worst Case Busy Hour CPU = 28% busy
Database Perm Size = 929406 (out of an allocated size of 1310720)
Post Full Collection Heap Size = 803170 (out of an allocated size of
1572800)
Checkpoint load percentage = 57
Provisioning thread utilization = 50

1) First, calculate the per-user resource contribution (rounding down the users to
35,000).
Busy_CPU_Per_User = 28/35000 = 0.0008% per user
Post_Collection_Heap_Per_User = 803170/35000 = 22.9K per user
DB_Perm_Per_User = 929406/35000 = 26.5K per user
Checkpoint_Per_User = 57/35000 = 0.00163 per user



2) Then extrapolate the maximum number of users for each resource indicator.
CPU
Acceptable CPU is < 60% busy.
max_num_users_based_on_cpu = 60/Busy_CPU_Per_User= 60/0.0008 = 75000
users

Heap
The current configured allocated heap is 1572800. 60% of this allocated heap value would
be 1572800 X 0.6 = 943680.
max_num_users_based_on_current_heap =
943680K/Post_Collection_Heap_Per_User = 943680K/22.9K = 41208 users

This server currently has 6 GB of memory. This server platform can support a memory
upgrade to 16 GB, so the maximum allocated heap could be increased to 4 GB
(4194240). Sixty percent (60%) of the allocated heap would then be 4194240 x 0.6 =
2516544, which would result in the following.
max_num_users_based_on_max_heap = 2516544K/Post_Collection_Heap_Per_User
= 2516544K/22.9 = 109892 users

Database Size
The current configured allocated PERM_ALLOCATED_SIZE is 1310720K. Ninety percent
(90%) of this is 1310720 X 0.9 = 1179648.
max_num_users_based_on_current_db = 1179648/DB_Perm_Per_User =
1179648K/26.5K = 44515 users

This server currently has 6 GB of memory. If the server platform can support a memory
upgrade to 16 GB, then the maximum PERM_ALLOCATED_SIZE could be increased to
6 GB (6291360). Ninety percent (90%) of the PERM_ALLOCATED_SIZE would then be
6291360 X 0.9 = 5662224, which would result in the following.
max_num_users_based_on_max_db = 5662224/DB_Perm_Per_User =
5662224K/26.5K = 213668 users

Database Checkpoint
The checkpoint load is currently 57 and can be 95.
max_num_users_based_on_checkpoint = 95 / Checkpoint_Per_User = 58333
users

Provisioning Thread Utilization


The provisioning thread utilization is 50% and can be 85%.
max_num_users_based_on_provisioning = 85 / 50 * 35092 = 59656 users

10.2.3 Analyze Data


The existing platform has the following per-resource indicator maximum user estimates.
max_num_users_based_on_cpu = 75000 users
max_num_users_based_on_current_heap = 41208 users
max_num_users_based_on_current_db = 44515 users
max_num_users_based_on_checkpoint = 58333 users
max_num_users_based_on_provisioning = 59656 users



The bottleneck is the heap size since it is the lowest of the user values and restricts the
server growth to an estimate of approximately 41,000 users. Since this server can be
expanded in terms of memory, increasing the server to 16 GB changes the bottleneck and
the maximum estimated number of users.
max_num_users_based_on_cpu = 75000 users
max_num_users_based_on_max_heap = 109892 users
max_num_users_based_on_max_db = 213668 users
max_num_users_based_on_checkpoint = 58333 users
max_num_users_based_on_provisioning = 59656 users

With 16 GB, the bottleneck is now CPU and restricts the growth to an estimate of 75,000
users.
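The following Python sketch reproduces the extrapolation in the worked example above, using the example inputs and the acceptable limits from section 10.2.2. Because it keeps the unrounded per-user values, the results differ slightly from the hand-rounded figures in the text, but the bottleneck conclusion is the same.
# Minimal sketch of the per-indicator extrapolation for the current hardware.
users = 35000                              # rounded-down provisioned users
busy_cpu = 28.0                            # worst case busy hour CPU (%)
heap_used, heap_alloc = 803170, 1572800    # KB
db_used, db_alloc = 929406, 1310720        # KB
ckpt_load = 57.0
prov_util = 50.0                           # provisioning thread utilization (%)

per_user = {
    "cpu":  busy_cpu / users,
    "heap": heap_used / users,
    "db":   db_used / users,
    "ckpt": ckpt_load / users,
}

estimates = {
    "cpu":          60 / per_user["cpu"],            # acceptable CPU < 60% busy
    "heap":         heap_alloc * 0.6 / per_user["heap"],
    "db":           db_alloc * 0.9 / per_user["db"],
    "checkpoint":   95 / per_user["ckpt"],
    "provisioning": 85 / prov_util * users,
}

bottleneck = min(estimates, key=estimates.get)
for name, value in estimates.items():
    print(f"max_num_users_based_on_{name}: {value:,.0f}")
print(f"bottleneck: {bottleneck}")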
When the database size and/or checkpoint load is a limiter, an attempt can be made to de-
fragment the database. See the section “Compact the Database” in the Cisco BroadWorks
Maintenance Guide.

10.3 Network Server Growth Estimate Procedure

10.3.1 Collect Information


The following information should be collected from all Network Server nodes. Since one
Network Server node typically services all provisioning traffic, that node may have higher
CPU usage.

10.3.1.1 Number of Identities


The Network Server stores identities (DNs, Aliases, and Line Ports). The total number of
identities handled by the Network Server is the sum of the three. To determine the number
of identities per user, the total number of users on each Application Server cluster would
need to be measured as described in section 10.2.1.6 Provisioned Users.
The total number of identities is the sum of the following SNMP gauges.
nsProvisioningServer/psSystem/systemNbDNs
nsProvisioningServer/psSystem/systemNbURLs
nsProvisioningServer/psSystem/systemNbLinePorts

10.3.1.2 Heap Size


The Execution Server (XS) heap size should be gathered from all Network Server nodes
using the same procedure detailed in section 10.2.1.1 Execution Server Post Full
Collection Heap Size.

10.3.1.3 CPU
The busy CPU high water mark should be gathered from each Network Server node
according to the procedure in section 10.2.1.4 Worst Case Busy Hour CPU.

10.3.1.4 Database Size


The database permanent size in use should be obtained according to the procedure in
section 10.2.1.2 Database Permanent Size.

10.3.2 Calculate Maximum Identities per Resource Indicator


The per-identity heap usage can be calculated as follows.



Total Heap in Use = Sum of heap in use across all nodes
Heap per Identity = Total Heap in Use / Num Identities

The per-identity CPU usage can be calculated as follows.


Total CPU in Use = Sum of CPU Utilization across all nodes
CPU per Identity = Total CPU in Use / Num Identities

10.3.3 Analyze Data

10.3.3.1 Calculate Maximum System Size


The maximum number of identities supported in the Network Server database can be
calculated as follows. This represents the total Network Server cluster limiter.
DB_Perm_Per_Identity = Database Permanent Size in Use / Number of
Identities
Max_Identities_(Database) = Server RAM * 0.5 * 0.9 / DB_Perm_Per_Identity

More RAM can be added to each node (up to the supported maximum) to support a larger
system.

10.3.3.2 Maximum Identities


The maximum number of identities supported based on the current number of nodes (and
accounting for N+1 redundancy) can be calculated based on heap and CPU
requirements.
Heap
Available Heap per Node = XS Heap Size * 0.6
Total Available Heap = Sum of Available Heap per Node for N -1 nodes
Max Identities (Heap) = Total Available Heap / Heap per Identity

CPU
Total Available CPU = 60 * (N – 1)
Max Identities (CPU) = Total Available CPU / CPU per Identity

The maximum number of identities supported for a given system is the lowest of the three
Max Identities calculations.
Note that if the system is deployed in a redundancy scheme other than N+1, then the
calculation needs to account for the failure of those nodes. For example, if the redundancy
scheme is 2 * N, then the calculation should only assume available resources from N
nodes.
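The following Python sketch ties the database, heap, and CPU limits together for an N+1 Network Server farm. All input values are hypothetical placeholders for measurements gathered as described above; they are not recommended figures.
# Minimal sketch of the Network Server maximum identity calculations (N+1 farm).
num_identities = 2_000_000                  # hypothetical measured identity count
server_ram_kb = 64 * 1024 * 1024            # per-node RAM (KB), hypothetical
db_perm_in_use_kb = 12_000_000              # database permanent size in use (KB)
heap_in_use_per_node_kb = [3_000_000, 2_900_000, 3_100_000]   # per node
cpu_busy_per_node = [35.0, 33.0, 37.0]      # worst case busy hour CPU per node (%)
xs_heap_size_kb = 8_000_000                 # configured XS heap per node (KB)

db_per_identity = db_perm_in_use_kb / num_identities
heap_per_identity = sum(heap_in_use_per_node_kb) / num_identities
cpu_per_identity = sum(cpu_busy_per_node) / num_identities

n = len(cpu_busy_per_node)                  # nodes in the farm; N-1 usable (N+1 scheme)
max_db = server_ram_kb * 0.5 * 0.9 / db_per_identity
max_heap = (xs_heap_size_kb * 0.6) * (n - 1) / heap_per_identity
max_cpu = 60 * (n - 1) / cpu_per_identity

print(f"Max identities (database): {max_db:,.0f}")
print(f"Max identities (heap):     {max_heap:,.0f}")
print(f"Max identities (CPU):      {max_cpu:,.0f}")
print(f"System limit:              {min(max_db, max_heap, max_cpu):,.0f}")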

10.3.3.3 Support Additional Growth


Based on the capacity limiter (Database, Heap, and CPU), additional resources can be
added to support additional growth. Note that current limitations with respect to maximum
number of nodes and maximum database/heap size must be met.
Capacity Limiter – Growth Options
Database Size – Add additional memory to all Network Server nodes and increase the database size.
Heap – Add additional nodes or additional memory to all Network Server nodes.
CPU – Add additional CPU resources or additional nodes.

10.4 Media Server Growth Estimate Procedure

10.4.1 Collect Information


The following information should be collected from all Media Server nodes. When Media
Servers are dedicated to IVR and conferencing, they should be analyzed as separate
groups.

10.4.1.1 Port Usage


The following SNMP gauges should be sampled during the busy hour.
mediaServer/mcp/msPortsInUse
mediaServer/mcp/msVideoConferencingPortsInUse
mediaServer/rtp/msRtpSessionsInUse

10.4.1.2 Available Resources


A gauge of available resources is available via SNMP.
mediaServer/system/msMaxCapacityInPorts
mediaServer/system/msVideoConferencingMaxCapacityInPorts

The maximum number of RTP sessions (MAX_RTP) is 1000 on a virtualized server and
4000 on a bare metal server.
The total system resources are the sum of these three individual metrics
(msMaxCapacityInPorts, msVideoConferencingMaxCapacityInPorts, and the maximum
RTP sessions) across all Media Server nodes, minus one node to account for
redundancy. If nodes have different capacities, then eliminate the highest capacity node
from the overall count.

10.4.2 Analyze Data


Media server utilization is the largest of the following three ratios:
1) Total msPortsInUse / Total msMaxCapacityInPorts
2) Total msVideoConferencingPortsInUse /
Total msVideoConferencingMaxCapacityInPorts
3) Total msRtpSessionsInUse / Total MAX_RTP
Capacity headroom can also be calculated from the utilization. A system with 50K users
that is 40% utilized can support another 75K users (at 100% utilization and no headroom).
A better calculation would allow for 10% headroom, which would be an additional 62.5K
users.
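The following Python sketch illustrates the utilization and headroom arithmetic; the port totals are hypothetical values chosen to give 40% utilization so that the result matches the 50K-user example above.
# Minimal sketch: utilization is the largest of the three ratios, and headroom
# is derived from it. Inputs are hypothetical totals gathered across the nodes
# that remain after removing one for redundancy.
ports_in_use, ports_capacity = 800, 2000
video_in_use, video_capacity = 100, 400
rtp_in_use, rtp_capacity = 1200, 3000

utilization = max(ports_in_use / ports_capacity,
                  video_in_use / video_capacity,
                  rtp_in_use / rtp_capacity)          # 0.40 in this example

current_users = 50_000
max_users_no_headroom = current_users / utilization            # 125,000
max_users_10pct_headroom = current_users / utilization * 0.9   # 112,500

print(f"utilization: {utilization:.0%}")
print(f"additional users (no headroom):  {max_users_no_headroom - current_users:,.0f}")    # 75,000
print(f"additional users (10% headroom): {max_users_10pct_headroom - current_users:,.0f}") # 62,500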



10.5 Web Container (Xtended Services Platform/Application Delivery
Platform/Profile Server) General Growth Estimate Procedure
The following procedure can be used to estimate growth for a cluster of nodes hosting
applications running in the Cisco BroadWorks Web Container. This procedure can be
used for Xtended Services Platform (Xsp), Application Delivery Platform (ADP), and
Profile Server (PS) nodes. There are a few additional items that need to be collected for
Device Management applications; see section 10.6 Device Management Growth
Estimation Procedure. Also note that this procedure is high-level and does not account for
all variations of web applications.

10.5.1 Collect Information


The following information should be collected from all nodes. When nodes are specially
deployed for different applications (for example, Device Management, Clients, File
Repository), they should be analyzed independently of each other and as a group. For
example, if there are four nodes and two are used for device management access and
two for clients, the two for device management should be analyzed together and the two
for clients separately.

10.5.1.1 Web Container Thread Usage/Connections

10.5.1.1.1 Connections
The following SNMP gauge should be sampled during the busy hour.
bwWebContainer/protocols/http/serverResources/workers/
bwHttpWorkerThreadsBusyMax

The total system usage is the sum of this value across all nodes.

10.5.1.1.2 Executor Pool Usage


The Tomcat process has numerous thread pools for processing. Each thread pool has a
configured maximum and active gauge that can be sampled during the busy hour. All
thread pools with usage should be analyzed (for example, httpnio, ocic.com.broadsoft.cti-
events).
bwWebContainer/executors/bwExecutorTable/bwExecutorThreadsBusy

The total system usage is the sum of this value across all nodes.

10.5.1.2 Web Container Heap Usage


The Web Container (Tomcat) heap usage can be obtained via SNMP or from the
/var/broadworks/logs/tomcat/TomcatOutputXX.log files.
The following gauge can be sampled during the busy hour.
bwWebContainer/processMetrics/memory/heap/
bwProcessMetricsHeapLastPostCollectionSize

When using the TomcatOutput files to collect information, the procedure depends on
whether Tomcat is configured to use the Concurrent Mark Sweep collector (like the
AS/XS) or the throughput collector (like the AS/PS). This can be determined by
looking at the profile in the CLI context System/ProfileTuning/GeneralSettings. By default,
the throughput collector is used. For example, the “xsi” profile uses the Concurrent Mark
Sweep collector.
System/ProfileTuning/GeneralSettings> describe xsi



This is a description of the xsi profile. The configuration is not
changed by this command.

The following values are not exposed through the CLI


Modifies, /Applications/WebContainer/Tomcat GC: Parallel to Concurrent

When the Concurrent collector is used, the heap usage is obtained as described in section
10.2.1.1 Execution Server Post Full Collection Heap Size.
When the Concurrent collector is not used, obtain the information as follows.
cat /var/broadworks/logs/tomcat/TomcatOutput17.log |grep "Full GC" >>
post-collection_heap.out

You can now obtain the maximum post-collection heap size from the post-
collection_heap.out file. For example, if post-collection_heap.out contains the following:
393240.711: [Full GC (Ergonomics) [PSYoungGen: 2017K->0K(68608K)]
[ParOldGen: 1508556K->1433298K(1485824K)] 1510574K->1433298K(1554432K),
[Metaspace: 210158K->210158K(1255424K)], 2.3703940 secs] [Times:
user=4.29 sys=0.00, real=2.38 secs]
393987.634: [Full GC (Ergonomics) [PSYoungGen: 1867K->0K(61440K)]
[ParOldGen: 1484170K->1428214K(1478656K)] 1486038K->1428214K(1540096K),
[Metaspace: 210286K->210286K(1255424K)], 2.3891670 secs] [Times:
user=4.31 sys=0.01, real=2.39 secs]

… then 1433298K is the maximum post-collection heap size.


Since the Tomcat heap may not be statically sized, the maximum size can be obtained via
the CLI.
System/Resources/Memory/Containers> get
Container Min Max Amount Percentage Actual
===========================================================
platform 512 M 512 M 512 M 8.35 % 512 M
tomcat 256 M 6134 M 4294 M 70 % 4294 M

The maximum heap size is 4294 M.


The total system usage is the sum of post collection size across all nodes.

10.5.1.3 Worst Case Busy Hour CPU


This is identical to the procedure for the Application Server described in section 10.2.1.4
Worst Case Busy Hour CPU.

10.5.1.4 Available Resources


For calculating available resources for N nodes, one is removed from the calculation to
account for redundancy. For example, if there are 4 nodes then use the capacity of 3.
A gauge of available resources is available via SNMP.
bwWebContainer/protocols/http/serverResources/workers/bwHttpWorkerThreads
Usable
bwWebContainer/executors/bwExecutorTable/bwExecutorMaxPoolSize
bwWebContainer/processMetrics/memory/heap/bwProcessMetricsHeapMaxSize

The maximum number of web container threads is the sum of the available threads on all
nodes minus one node, to account for redundancy. If nodes have different capacities, then
eliminate the largest node from the overall count.



The total available memory is the sum of heap available on N-1 nodes times 0.6.
Note that if the system is deployed in a redundancy scheme other than N+1, then the
calculation needs to account for the failure of those nodes. For example, if the redundancy
scheme is 2 * N, then the calculation should only assume available resources from N
nodes.

10.5.2 Analyze Web Container Data


Web Container utilization is the greatest of the following ratios:
1) Total Busy HTTP Worker Threads / (0.85 * Total Number of HTTP Worker Threads)
2) Total Busy Threads / (0.85 * Total Number of Available Threads)
3) Total Heap Usage / Total Available Heap
4) Total CPU Usage / Total Available CPU
Server growth can be calculated based on the utilization. Additional nodes and/or
resources can be added to support additional growth.
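The following Python sketch shows the utilization calculation as the greatest of the four ratios; all totals are hypothetical sums across the nodes counted after removing one for redundancy.
# Minimal sketch of the Web Container utilization ratios above.
busy_http, total_http = 300, 800                # HTTP worker threads
busy_exec, total_exec = 150, 400                # executor pool threads
heap_used, heap_avail = 2_400_000, 5_000_000    # KB: post-collection vs. available heap
cpu_used, cpu_avail = 90, 180                   # percentage points summed across nodes

utilization = max(busy_http / (0.85 * total_http),
                  busy_exec / (0.85 * total_exec),
                  heap_used / heap_avail,
                  cpu_used / cpu_avail)
print(f"Web Container utilization: {utilization:.0%}")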

10.6 Device Management Growth Estimation Procedure


This procedure can be used to predict growth for the Xtended Services Platform/Profile
Server/Application Delivery Platform nodes that are used solely for device management.

10.6.1 Number of Devices


The number of devices on each Application Server can be obtained using the
Performance Measurement table bwDeviceTypeSystemTable. This number needs to be
obtained on each Application Server cluster that a given Xtended Services Platform
services.
executionServer/systemModule/systemStats/bwDeviceTypeSystemTable

This table shows the number of provisioned devices by type at a system level. The
number of devices using Device Management can be calculated from the sum of the total
devices, eliminating device types that are not using Device Management.
The number of devices utilizing a particular Xtended Services Platform / Profile Server can
be determined by dividing the total number of devices in the system by the number of
Xtended Services Platform/ Profile Servers that are actively serving traffic.

NOTE: It is assumed that traffic is distributed in a round-robin fashion. If this is not the case, a
different weighting must be used.

10.6.2 IOPS (File Repository Only)


Worst Case Busy Hour IOPS is obtained by analyzing SAR output for the disk or disks
that are hosting the file repository. Note that any anomalous reading (for example, a
one-time, one-day spike) should be discarded. The IOPS is obtained from the tps field in
the output of “sar -d”.



10.6.3 Bandwidth (Device Management Access Only)
Network bandwidth can be obtained by analyzing SAR output for the network interfaces to
the devices and to the file repository. The usage can be obtained from the received and
transmitted KB/s from the output of “sar -n ALL”.

10.6.4 Calculate Maximum Devices per Resource Indicator (Device Management)


The network bandwidth and IOPS calculations need to be incorporated into the general
procedure as described in section 10.5.2 Analyze Web Container Data. The usable
network bandwidth and IOPS need to be determined from the available hardware resources.



11 Database Server Disk Estimation Tool

The Database Server provides an estimator tool that gives a view of disk array
characteristics such as total IOPS and data throughput. The tool is wrapped in the dbsctl
script and should be run after installation and schema deployment. The tool requests two
inputs:
 Total number of physical disks in the DATA ASM disk group.
This should be equal to half of the total number of disks.
 Acceptable latency (10 milliseconds [msec] is a good number to use).
Usage: dbsctl [-vxfha] <command> [OPTIONS]
Supported commands:
disk
calibrate the disk subsystem.
[latency] Specify maximum latency (in ms)
[disks] Specify number of physical disks in DATA disk
group.
bwadmin@lin180-3550M3$ dbsctl disk calibrate 10 8
!WARNING! Are you sure you want to run I/O calibration on DATA disk
group?
Do you want to continue (y/n) [n]?y
Running calibration process... [DONE]
Number of physical disks : 8
Maximum IOPS : 1047
Maximum MBPS : 120
Maximum MBPS (large request): 36
Target latency : 10

The input/output (I/O) calibrated results are only for the DATA ASM disk group. The FRA
ASM disk group should be sized the same as DATA ASM disk group, so that the total
disk capability is twice the disk-calibrated numbers provided.



12 XS Mode System Engineering (Sh Interface Traffic)

This section provides information about planned Sh traffic, which can be used for capacity
planning purposes.

DISCLAIMER: This section only provides estimations. The real-life behavior highly depends on
numerous configuration parameters, enterprise/service provider organization, and service
penetration.

12.1 Subscriber Profile Sizing


The subscriber profile size has a direct impact on the Sh traffic. The BW-BaseUserInfo
document has a fairly fixed size, while the BW-FullUserInfo can vary slightly depending on
some collections (identities, shared schedules, and so on). The BW-Services and MMTel-
Services data varies based on the service model (number of assigned services). Some
services have a variable length configuration (for example, Speed Dial 100) that affects
the size of subscriber profile.
The following table provides an estimated size for some service models, illustrating lower
bound and upper bound for service penetration.
Service Model        BW-BaseUserInfo   BW-FullUserInfo   BW-Services   MMTel-Services   Total
                     size (bytes)      (bytes)           (bytes)       (bytes)          (bytes)
Basic User           500               4000              500           4000             9000
Full Business User   500               10000             20000         4000             34500

12.2 Execution Server Traffic Model

12.2.1 Patterns
Cold Start – Upon Execution Server start-up, the profile cache is empty, but the Network
Server is typically not proxying any SIP traffic (for example, the Network Server would
have migrated the subscribers to other Execution Servers in the cluster), and thus would
not generate Sh traffic.
Subscriber Migration – When the Network Server is performing enterprise or service
provider migration, this results in SIP traffic on the Execution Server for users whose
profile is not cached. This can result in a burst of profile fetch operations that gradually
fade as the cache fills up. The HSS also starts to send profile modification notifications for
the profiles just cached. Subscriber migration can take place on an already operating
Execution Server (which can be in steady state) or on a freshly started Execution Server.
The System Engineering traffic model usually sets the SIP REGISTER interval to “60”
minutes, which means that an Sh traffic burst caused by subscriber migration should last for
one hour (that is, it is only remotely influenced by the time at which the subscriber migration
was performed, such as whether it occurred at the busy hour or during the night). Subscriber
migration is typically caused by an Execution Server node that has failed or through a
manual subscriber-host calculation performed by the Network Server.



Steady State – The Execution Server cache is full and the Network Server is not
performing any subscriber migration. The Sh traffic is characterized by notifications from
the HSS due to provisioning activities and by notification subscription (and optionally
profile) refreshes from the Execution Server, which take place on a configurable interval. It
is expected that provisioning activities are higher during the day and that notification
subscription/profile refreshes converge to a uniform rate throughout the day.

12.2.2 Example
The following figures illustrate the Sh traffic on a given Execution Server that is initially in
steady state. The Execution Server is hosting 300K full business users and is performing
notification subscriptions and profile refreshes as well as processing profile modification
notifications from the HSS, with higher traffic during daytime.
On Day 2, at 10:00 A.M., 200K subscribers are migrated by the Network Server after a
failure of an Execution Server in the farm. This causes a peak of profile fetches due to
cache misses for newly migrated subscribers. The following smaller peaks are profile
refreshes for the newly hosted subscribers, which occur between 70 percent and 90 percent
of the subscription’s expiration, eventually converging to a constant rate. Note that when
the Execution Server performs profile fetches triggered from SIP traffic (and cache miss),
there are two Subscribe-Notification-Requests (SNRs) sent. The first fetches the user’s
main identity and the second downloads the remainder of the profile. When performing a
profile refresh, the Subscriber Agent already knows the main identity; thus, only one SNR
is sent per profile refresh.
Figure 1 Execution Server Sh Transaction Rate Example, Full Business User (SNR txns/sec and PNR txns/sec plotted over Day 1 through Day 4)

In this example, the 200K subscriber migration causes a peak of ~120 SNRs/sec in the
hour following the migration due to registrations and call setup.
Figure 2 Execution Server Sh Data Volume Example, Full Business User (outgoing and incoming traffic volume in kbytes/s plotted over Day 1 through Day 4)



The subscriber migration results in a peak of ~2500 KBps generated by the HSS toward
the Execution Server.
The above-mentioned numbers are based on the following key configuration parameters:
 reloadDataOnExpiration set to “true”
 expiryTimeDelay set to “24 hours”
 dynamicNotifEffSupportDiscovery set to “true”; HSS supports Notif-Eff
 supportsSendDataIndication set to “true”
 registration interval: 60 minutes

12.3 Profile Server Traffic Model

12.3.1 Patterns
Initial build-out – This is the initial subscriber profile load for newly installed systems. This
is typically performed by the OSS, which logs in with administrator accounts.
Steady State – This is a mix of administrator traffic (user create/delete) and customer
service and self-care traffic (mostly service configuration modification). Provisioning clients
usually create logical sessions, and all traffic for a given session goes through the same
Profile Server. The next session created by a given user is likely on another Profile
Server in the farm, thus generating a cache miss on about every session login. Moreover,
the profile cache size is intended to be set to prevent subscriber profile fetches for
provisioning commands that occur within a small (few minutes) interval. Therefore, a profile
fetch can even take place within a session, which, for self-care usage, is typically set to
“30” minutes. With this pattern, the cache miss rate is high (compared to an Execution Server
in steady state) but is not impacted much by a Profile Server failure.
System engineering guides usually allow for 80 percent reads (which may or may not
trigger profile fetch on the HSS) and 20 percent writes (which trigger profile updates on the
HSS for one or more transparent data and resulting notifications). Reads usually involve a
full profile fetch, while writes cause a Profile Update Request of a single Repository-Data
(most likely BW-Services or MMTel-Services). Even if the Profile Server is registering for
modification notifications on the HSS, the rate of Push-Notification-Requests (PNRs) is
expected to be negligible as the Profile Server is usually the initiator of profile modifications
(the HSS is not notifying the entity having done the profile update).

12.3.2 Example
This example is for a Profile Server in steady state during business activity. It assumes
that the cache hit is on average 66 percent on reads. Note that careful tuning of the profile
cache may yield better results. For a system of 5 million users, the intention is that a single
Profile Server should be able to handle the entire load of 750 PTPS. This represents 600
reads and 150 writes per second and it translates to 200 profile fetches per second (66
percent cache hit) and 150 profile updates per second.
Thus, for a full business user, this represents 200 SNRs/sec and 150 Profile Update
Requests per second (PURs/sec). The corresponding traffic volume is ~3000
KBps on the outgoing side (from the Profile Server to the HSS, assuming an update to the
BW-Services document) and ~7500 KBps on the incoming side.
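The following Python sketch illustrates the steady-state arithmetic above. The document sizes come from the subscriber profile sizing table in section 12.1; because the exact figures depend on document sizes and Diameter overhead, the output lands in the same range as, rather than exactly on, the ~3000/~7500 KBps quoted above.
# Minimal sketch of the steady-state Profile Server traffic arithmetic.
ptps = 750                          # provisioning transactions per second
read_ratio, write_ratio = 0.80, 0.20
cache_hit = 0.66                    # assumed read cache hit rate
full_profile_bytes = 34_500         # full business user profile (section 12.1)
bw_services_bytes = 20_000          # BW-Services document (section 12.1)

reads = ptps * read_ratio           # 600 reads/sec
writes = ptps * write_ratio         # 150 writes/sec
snr_per_sec = reads * (1 - cache_hit)   # ~200 profile fetches/sec
pur_per_sec = writes                    # ~150 profile updates/sec

incoming_kbps = snr_per_sec * full_profile_bytes / 1000   # HSS -> Profile Server
outgoing_kbps = pur_per_sec * bw_services_bytes / 1000    # Profile Server -> HSS
print(f"SNR/sec: {snr_per_sec:.0f}, PUR/sec: {pur_per_sec:.0f}")
print(f"incoming ~{incoming_kbps:,.0f} KBps, outgoing ~{outgoing_kbps:,.0f} KBps")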



13 Appendix A: Server Overload Controls

This section describes the overload protections for each Cisco BroadWorks node type.

13.1 Application Server


The Application Server must be able to manage an overload condition without affecting
calls in progress. An additional objective is to provide greater than 90% call completion
rate at 150 percent of rated system capacity. To meet this requirement, the Application
Server samples input queue delay. If the queue delay becomes too long, overload
conditions are triggered. For SIP messaging, separate queues are used for call traffic (for
example, INVITE) versus non-call traffic (for example, REGISTER and OPTIONS). For
each of the following traffic types, the Application Server maintains two overload zones
(yellow and red):
 SIP call traffic
 SIP non-call traffic
Each zone has a separate configurable queue-delay threshold that causes the Application
Server to transition to the next zone when exceeded.
In the yellow zone, the Application Server continues to process existing calls in progress. If
new call originations are detected, half of them are processed. The other half of call
originations are denied. In the red zone, the Application Server continues to process
existing calls in progress. If new call originations are detected, they are all denied. The
Application Server remains in the corresponding overload zone until the queue delay
returns to an acceptable threshold value.
An alarm is generated on entry into each zone. If the overload condition subsides, then the
alarm is cleared upon exiting the overload zone.
The Application Server uses an algorithm to prevent hysteresis loops. This is done to
prevent rapid fluctuations between zones. The system remains in a particular zone for a
predefined amount of time before transitioning to another zone. This keeps the system
from experiencing a “ping-pong” effect at the border of two zones.
When the Application Server denies new call originations, it can be configured to send an
“error” response (503 for SIP with or without a Retry-After header), “drop” the request, or
“redirect” the request. In all cases, the end goal is to cause the endpoint to roll over to the
secondary Application Server and complete. The recommended setting is to send an
“error” response.
It is important to note that in an overload zone (yellow or red), the Application Server can
be configured to treat most new “emergency calls” as if they were existing calls in
progress.
The Application Server also implements extreme overload control protections. These
protections limit queue length and time in queue, and they force the server into overload
during low memory conditions. Different maximum time in queue (also referred to as the
maximum packet age) values are used during overload and non-overload conditions.
The maximum queue time and memory values are configured via the CLI. The default
maximum queue length is computed based on the server memory but can be overridden
by a start-up parameter. By default, the maximum queue length value is set to “2500”
messages per 256 MB of heap. Typically, the heap size is calculated to be a quarter of the
total amount of memory on the system. For example, a box with 2 GB of RAM would have
a heap size of 512 MB and the default value for the number of messages in the queues
would be “5000”.
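The default sizing rule above can be expressed as a simple calculation. The following Python sketch is illustrative only and does not replace the CLI-configured values.
# Minimal sketch of the default maximum queue length rule: 2500 messages per
# 256 MB of heap, with the heap sized to roughly a quarter of system RAM.
def default_max_queue_length(ram_mb: int) -> int:
    heap_mb = ram_mb / 4                   # heap is ~1/4 of total memory
    return int(2500 * heap_mb / 256)       # 2500 messages per 256 MB of heap

print(default_max_queue_length(2048))      # 2 GB RAM -> 512 MB heap -> 5000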



13.2 Network Server
As with the Application Server, the Network Server supports input queue delay-driven
overload controls. If the queue delay becomes too long, overload conditions are triggered.
For SIP messaging, separate queues are used for call traffic (for example, INVITE) versus
non-call traffic (for example, REGISTER and SUBSCRIBE redirection). For each of the
following traffic types, the Network Server maintains two overload zones (yellow and
red):
 SIP call traffic
 SIP non-call traffic
Each zone has a separate configurable queue-delay threshold that causes the Network
Server to transition to the next zone when exceeded.
In the yellow zone, the Network Server denies 50% of incoming traffic. In the red zone, the
Network Server denies 100% of incoming traffic. The Network Server remains in the
corresponding overload zone until the queue delay returns to an acceptable threshold
value.
An alarm is generated on entry into each zone. If the overload condition subsides, then the
alarm is cleared upon exiting the overload zone.
The Network Server uses an algorithm to prevent hysteresis loops. This is done to prevent
rapid fluctuations between zones. The system remains in a particular zone for a
predefined amount of time before transitioning to another zone. This keeps the system
from experiencing a “ping-pong” effect at the border of two zones.
When the Network Server denies new call originations, it can be configured to send an
“error” response (503 for SIP with or without a Retry-After header) or “drop” the request.
The recommended setting is to send an “error” response.
It is important to note that in an overload zone (yellow or red), the Network Server can be
configured to accept all “emergency calls”.
The Network Server also implements extreme overload control protections. These
protections limit queue length and time in queue. They also force the server into overloads
during low memory conditions. Different maximum time in queue (also referred to as the
maximum packet age) values are used during overload and non-overload conditions.
The maximum queue time and memory values are configured via the CLI. The default
maximum queue length is computed based on the server memory; however, it can be
overridden by a start-up parameter. By default, the maximum queue length value is set to
“2500” messages per 256 MB of heap. Typically, the heap size is calculated to be a
quarter of the total amount of memory on the system. For example, a box with 2 GB of
RAM would have a heap size of 512 MB and the default value for the number of
messages in the queues would be “5000”.

13.2.1 Network Server Licensing


Along with delay-driven overload controls, each Network Server is licensed for a fixed
amount of BHCA. The Network Server monitors current BHCA. Once the licensed BHCA
capacity is met, the Network Server rejects or ignores new transactions until the load
stabilizes below its peak licensed BHCA capacity.



Requests should be distributed evenly across the individual Network Servers in the
cluster. By denying new transactions, incoming INVITEs from Application Servers,
softswitches, and proxies are redirected to the next Network Server in the cluster. Network
Servers are stateless, so any Network Server generates the same response for a given
request. This overload control mechanism allows the load to be distributed throughout the
cluster until the overload condition subsides.
A Network Server generates an alarm once the peak BHCA capacity has been exceeded.
The alarm is cleared once the BHCA returns to normal.

13.3 Media Server


A Media Server is guaranteed to provide an acceptable quality of service while operating
under the rated load. If a request is received while a Media Server is at capacity, the
request is denied. By rejecting traffic when at the rated capacity, a Media Server maintains
an acceptable quality of service by ensuring a predictable processor load.

13.4 Xtended Services Platform


The performance characteristics of the Xtended Services Platform are based on the
incoming requests supported. The Xtended Services Platform supports a configurable
number of maximum requests, with separate settings for the total number of requests and
the maximum number of requests per user.



Acronyms and Abbreviations

ACH Average Call Hold


ADP Application Delivery Platform
AMR-WB Adaptive Multi-Rate Wideband
AMS Access Mediation Server
API Application Programming Interface
AS Application Server
BHCA Busy Hour Call Attempts
BL Business Line
BT Business Trunk
CAP CAMEL Application Part
CFW Control Channel Framework
CIF Common Intermediate Format
CLI Command Line Interface
CPS Calls Per Second
CPU Central Processing Unit
DBS Database Server
DMZ Demilitarized Zone
DN Directory Number
ECL Enhanced Call Logs
ESP Encapsulating Security Payload
EVRC-NW Enhanced Variable Rate Codec – Narrowband Wideband
GC Garbage Collection
HSS Home Subscriber Server
IMAP Internet Message Access Protocol
IOPS Input/Output Operations Per Second
IVR Interactive Voice Response
LCA Local Calling Area
LNP Local Number Portability
MAP Mobile Application Part
MDCX Modify Connection
MPS Messages Per Second
MWI Message Waiting Indicator
NDS Network Database Server
NFM Network Function Manager



NNACL NPA-NXX Active Code List
NPS Notification Push Server
OCI Open Client Interface
OCP Outgoing Calling Plan
OCS Open Client Server
OSS Operations Support System
PE Premium Enterprise
PNR Push-Notifications-Request
POP Post Office Protocol (for example POP3)
PR Premium Residential
PSTN Public Switched Telephone Network
PTPS Provisioning Transactions Per Second
PUR Profile Update Request
RAC Real Application Cluster
RPS Registrations Per Second
RPS Requests Per Second
RQNT Notification Request
RTP Real-Time Transport Protocol
SAN Storage Area Network
SAR System Activity Reporting
SBC Session Border Control
SCCP Skinny Call Control Protocol
SCF Service Control Function
SE Standard Enterprise
SIP Session Initiation Protocol
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SNR Subscribe-Notification-Request
SOAP Simple Object Access Protocol
SOHO Small-Office/Home-Office
SR Standard Residential
TAS Telephony Application Server
TCP Transmission Control Protocol
TFTP Trivial File Transfer Protocol
TPS Transactions Per Second
UC Unified Communications



UMS Messaging Server
URL Uniform Resource Locator
USS Sharing Server
UVS Video Server
VoIP Voice over Internet Protocol
WebDAV Web-based Distributed Authoring and Versioning
WebRTC Web Real-Time Communication
WRS WebRTC Server
XMPP Extensible Messaging and Presence Protocol
XS Execution Server
Xsi Xtended Services Interface



References

[1] Cisco Systems, Inc. 2021. Cisco BroadWorks Platform Dimensioning Guide. Available
here.
[2] Cisco Systems, Inc. 2021. Cisco BroadWorks System Capacity Planner. Available
here.
[3] Cisco Systems, Inc. 2022. Cisco BroadWorks Virtualization Configuration Guide.
Available here.
[4] Cisco Systems, Inc. 2020. Cisco BroadWorks System Monitoring Quick Reference
Guide. Available here.
[5] Cisco Systems, Inc. 2022. Cisco BroadWorks Network Server Product Description.
Available here.
