
Evolving the MSC server architecture


Ericsson MSC Server Blade Cluster
The MSC-S Blade Cluster, the future-proof server part of Ericsson’s
Mobile Softswitch solution, provides very high capacity, effortless
scalability, and outstanding system availability. It also means lower
OPEX per subscriber, and sets the stage for business-efficient network
solutions.

Petri Maekiniemi
Jan Scheurich

The Mobile Switching Center Server (MSC-S) is a key part of Ericsson's Mobile Softswitch solution, controlling all circuit-switched call services, the user plane and media gateways. And now, with the MSC-S Blade Cluster, Ericsson has taken its Mobile Softswitch solution one step further, substantially increasing node availability and server capacity. The MSC-S Blade Cluster also dramatically simplifies the network, creating an infrastructure which is always available and easy to manage, and which can be adjusted to handle increases in traffic and changing needs.

Continued strong growth in circuit-switched voice traffic necessitates fast and smooth increases in network capacity. The typical industry approach to increasing network capacity is to introduce high-capacity servers – preferably scalable ones. The historical solution to increasing the capacity of MSC servers has been to introduce a more powerful central processor. Ericsson's MSC-S Blade Cluster concept, by contrast, is an evolution of the MSC server architecture, in which server functionality is implemented on several generic processor blades that work together in a group or cluster.

The very-high-capacity nodes that these clusters form require exceptional resilience at both the node and the network level, a consideration that Ericsson has addressed in its new blade cluster concept.

Key benefits
The unique scalability of the MSC-S Blade Cluster gives network operators great flexibility when building their mobile softswitch networks and expanding network capacity as traffic grows, all without complicating network topology by adding more and more MSC servers.

The MSC-S Blade Cluster fulfills the extreme in-service performance requirements put on large nodes. First, it is designed to ensure zero downtime – planned or unplanned. Second, it can be integrated into established and proven network-resilience concepts such as MSC in Pool.

Because MSC-S Blade Cluster operation and maintenance (O&M) does not depend on the number of blades, the scalability feature significantly reduces operating expenses per subscriber as node capacity expands.

Compared with traditional, non-scalable MSC servers, the MSC-S Blade Cluster has the potential to reduce power consumption by up to 60 percent and the physical footprint by up to 90 percent, thanks especially to its optimized redundancy concepts and advanced components. Many of its generic components are used in other applications, such as IMS.

The MSC-S Blade Cluster also supports advanced, business-efficient network solutions, such as MSC-S nodes that cover multiple countries or are part of a shared mobile-softswitch core network (Table 1).

Table 1  Key benefits of the MSC-S Blade Cluster.
Very high capacity: ten-fold increase over traditional MSC servers.
Easy scalability: expansion in steps of 500,000 subscribers through the addition of new blades; no network impact when adding blades.
Outstanding system availability: zero downtime at the node level; upgrade without impact on traffic; network-level resilience through MSC in Pool.
Reduced OPEX: fewer nodes or sites needed in the network; up to 90 percent smaller footprint; as much as 60 percent lower power consumption.
Business-efficient network solutions: multi-country operators; shared networks.
Future-proof hardware: blades can be used in other applications, such as IMS.

BOX A  Terms and abbreviations
APG43  Adjunct Processor Group version 43
ATM    Asynchronous transfer mode
E-GEM  Enhanced GEM
GEM    Generic equipment magazine
GEP    Generic processor board
IMS    IP Multimedia Subsystem
IO     Input/output
IP     Internet protocol
M-MGw  Media gateway for mobile networks
MSC    Mobile switching center
MSC-S  MSC server
O&M    Operation and maintenance
OPEX   Operating expense
OSS    Operations support system
SIS    Site infrastructure support system
SPX    Signaling proxy
SS7    Signaling system no. 7
TDM    Time-division multiplexing

Key components
The key components of the MSC-S Blade Cluster (Figure 1) are the MSC-S blades, a signaling proxy (SPX), an IP load balancer, an I/O system and a site infrastructure support system (SIS).

The MSC-S blades are advanced generic processor boards, grouped into clusters and jointly running the MSC server application that controls circuit-switched calls and the mobile media gateways.


Figure 1  Main functional components of the MSC-S Blade Cluster: a cluster of MSC-S blades served by a 1+1-redundant signaling proxy and IP load balancer, with SIS and I/O systems, connected to the RAN, the core network (CN) and the OSS.

The signaling proxy (SPX) serves as the network interface for SS7 signaling traffic over TDM, ATM and IP. It distributes external SS7 signaling traffic to the MSC-S blades. Two SPXs give 1+1 redundancy. Each SPX resides on a double-sided APZ processor.

The IP load balancer serves as the network interface for non-SS7-based IP-signaling traffic, such as the session initiation protocol (SIP). It distributes external IP signaling to the MSC-S blades. Two IP load-balancer boards give 1+1 redundancy.

The I/O system handles the transfer of data – for example, charging data, hot billing data, input via the man-machine interface, and statistics – to and from the MSC-S blades and SPXs. The MSC-S Blade Cluster I/O system uses the APG43 (Adjunct Processor Group version 43) and is 1+1 redundant. It is connected to the operations support system (OSS). Each I/O system resides on a double-sided processor.

The site infrastructure support system (SIS), which is also connected to the operations support system, provides the I/O system for Ericsson's infrastructure components.
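To make the 1+1 principle concrete, here is a minimal sketch of an active/standby pair such as the two SPXs or the two IP load-balancer boards. The class names (Unit, RedundantPair) are invented for illustration and do not correspond to any Ericsson software interface.

```python
class Unit:
    """One side of a 1+1 redundant component (e.g. an SPX board)."""
    def __init__(self, name: str):
        self.name = name
        self.available = True

class RedundantPair:
    """Minimal 1+1 redundancy model: one active unit, one hot standby."""
    def __init__(self, a: Unit, b: Unit):
        self.active, self.standby = a, b

    def handle(self, message: str) -> str:
        # If the active side is down (planned or unplanned), the mate
        # takes over and service availability is unaffected.
        if not self.active.available:
            self.active, self.standby = self.standby, self.active
        assert self.active.available, "both sides down: beyond the 1+1 limit"
        return f"{self.active.name} forwards {message!r}"

spx = RedundantPair(Unit("SPX-A"), Unit("SPX-B"))
print(spx.handle("SS7 MSU #1"))   # SPX-A forwards the message
spx.active.available = False      # simulate planned SPX maintenance
print(spx.handle("SS7 MSU #2"))   # SPX-B has taken over
```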
Key characteristics
Compared with traditional MSC-S nodes, the MSC-S Blade Cluster offers breakthrough advances in terms of redundancy and scalability.

Redundancy
The MSC-S Blade Cluster features tailored redundancy schemes in different domains. The network signaling interfaces (C7 and non-C7) and the I/O systems work in a 1+1 redundancy configuration. If one of the components is unavailable, the other takes over, ensuring that service availability is not affected, regardless of whether component downtime is planned (for instance, for an upgrade or maintenance actions) or unplanned (hardware failure).

The redundancy scheme in the domain of the MSC-S blades is n+1. Although the individual blades do not feature hardware redundancy, the cluster of blades is fully redundant. All blades are equal, meaning that every blade can assume every role in the system. Furthermore, no subscriber record, mobile media gateway, or neighboring node is bound to any given blade. Therefore, the MSC-S Blade Cluster continues to offer full service availability when any of its blades is unavailable. Subscriber records are always maintained on two blades to ensure that subscriber data cannot be lost.

In the unlikely event of simultaneous failure of multiple blades, the cluster will remain fully operational, experiencing only a loss of capacity (roughly proportional to the number of failed blades) and a minor loss of subscriber records.

With the exception of very short interruptions that have no effect on in-service performance, blade failures have no effect on connectivity to other nodes. Nor do blade failures affect the availability for traffic of the user-plane resources controlled by the MSC-S Blade Cluster.
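A minimal sketch of the two-copy principle for subscriber records, assuming an invented Cluster class: each record is written to two blades chosen from the current membership, and after a blade failure the surviving copies are re-replicated so that the two-copy invariant is restored. The real replication mechanism is internal to the MSC-S middleware; this only illustrates the idea.

```python
import hashlib

class Cluster:
    """Toy model of n+1 blade redundancy: every subscriber record
    is held by two blades, so a single blade failure loses no data."""
    def __init__(self, blades):
        self.blades = list(blades)
        self.store = {b: {} for b in self.blades}   # per-blade record store

    def _homes(self, imsi: str):
        # Deterministically pick two distinct blades for a subscriber.
        i = int(hashlib.sha256(imsi.encode()).hexdigest(), 16)
        return (self.blades[i % len(self.blades)],
                self.blades[(i + 1) % len(self.blades)])

    def write(self, imsi: str, record: dict):
        for blade in self._homes(imsi):
            self.store[blade][imsi] = dict(record)

    def fail_blade(self, blade: str):
        self.store.pop(blade)
        self.blades.remove(blade)
        # Gather the surviving copies and re-place every record so the
        # two-copy invariant holds for the new, smaller cluster.
        survivors = {}
        for records in self.store.values():
            survivors.update(records)
        self.store = {b: {} for b in self.blades}
        for imsi, record in survivors.items():
            self.write(imsi, record)

cluster = Cluster(["blade1", "blade2", "blade3", "blade4"])
cluster.write("262011234567890", {"vlr_state": "attached"})
cluster.fail_blade("blade2")   # the record survives: a second copy exists
```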


To achieve n+1 redundancy, the cluster of MSC-S blades employs a set of advanced distribution algorithms. Fault-tolerant middleware ensures that the blades share a consistent view of the cluster configuration at all times. The MSC-S application uses stateless distribution algorithms that rely on this cluster view. The middleware also provides a safe group-communication mechanism for the blades.
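The article does not disclose the actual distribution algorithms. As an assumed stand-in, rendezvous (highest-random-weight) hashing shows how a stateless algorithm driven purely by the shared cluster view behaves: every blade, and the SPX, computes the same owner for a given traffic key without exchanging any per-call state.

```python
import hashlib

def owner_blade(key: str, cluster_view: list) -> str:
    """Stateless distribution: any node that knows the shared cluster
    view computes the same owning blade for a given traffic key."""
    def weight(blade: str) -> int:
        return int(hashlib.sha256(f"{blade}/{key}".encode()).hexdigest(), 16)
    return max(cluster_view, key=weight)

view = ["blade1", "blade2", "blade3"]
print(owner_blade("262011234567890", view))           # same answer on every node
# When a blade is added, only the middleware-maintained view changes;
# most keys keep their previous owner, which limits data movement.
print(owner_blade("262011234567890", view + ["blade4"]))
```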
Static configuration data is replicated on every blade. This way, each blade that is to execute a requested service has access to the requisite data. Dynamic data, such as subscriber records or the state of external traffic devices, is replicated on two or more blades.

Interworking between the two redundancy domains in the MSC-S Blade Cluster is handled in an innovative manner. Components of the 1+1 redundancy domain do not require detailed information about the distribution of tasks and roles among the MSC-S blades. The SPX and IP load balancer, for example, can base their forwarding decisions on stateless algorithms. The blades, on the other hand, use enhanced, industry-standard redundancy mechanisms when they interact with the 1+1 redundancy domain – for instance, when selecting the outgoing path.

The combination of cluster middleware, data replication and stateless distribution algorithms provides a distributed system architecture that is highly redundant and robust.

One particular benefit of n+1 redundancy is the potential to isolate an MSC-S blade from traffic to allow maintenance activities – for example, to update or upgrade software without disturbing cluster operation. This means zero planned cluster downtime.
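Seen from the blade side, the interworking described above can be sketched as a conventional prefer-the-healthy-unit rule for selecting the outgoing path toward the 1+1 domain; no blade needs to track the pair's internal role assignment. The names below (SignalingEndpoint, select_outgoing_path) are hypothetical; only the division of responsibilities follows the article.

```python
from dataclasses import dataclass

@dataclass
class SignalingEndpoint:
    name: str
    healthy: bool

def select_outgoing_path(pair: list) -> SignalingEndpoint:
    """A blade's view of a 1+1 pair: pick the first healthy unit.
    No knowledge of the pair's internal role assignment is needed."""
    for endpoint in pair:
        if endpoint.healthy:
            return endpoint
    raise RuntimeError("both 1+1 units unavailable")

spx_pair = [SignalingEndpoint("SPX-A", True), SignalingEndpoint("SPX-B", True)]
print(select_outgoing_path(spx_pair).name)   # SPX-A
spx_pair[0].healthy = False                  # e.g. planned SPX maintenance
print(select_outgoing_path(spx_pair).name)   # SPX-B
```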
Scalability
The MSC-S Blade Cluster was designed with scalability in mind: to increase system capacity, one needs only to add MSC-S blades to the cluster. The shared cluster components, such as the I/O system, the SPX and the IP load balancer, have been designed and dimensioned to support a wide range of cluster capacities, from very small to very large.

The individual MSC-S blades are not visible to neighboring network nodes, such as the BSC, RNC, M-MGw, HLR, SCP, P-CSCF and so on. This first enabler is essential for smooth scalability: blades can be added immediately without affecting the configuration of cooperating nodes. Other parts of the network might also have to be expanded to make full use of increased blade-cluster capacity. When this is the case, these steps can be decoupled and taken independently.

The second enabler is the ability of the MSC-S Blade Cluster to dynamically adapt its internal distribution to a new blade configuration without manual intervention. As a consequence, the capacity-expansion procedure is almost fully automatic – only a few manual steps are needed to add a blade to the running system.

When a generic processor board is inserted and registered with the cluster middleware, the new blade is loaded with a 1-to-1 copy of the application software and a configuration of the active blades. The blade then joins the cluster and is prepared for manual test traffic. For the time being, it remains isolated from regular traffic. The blades automatically update their internal distribution tables to the new cluster configuration and replicate all necessary dynamic data, such as subscriber records, on the added blade. These activities run in the background and have no effect on cluster capacity or availability.

After a few minutes, when the internal preparations are complete and test results are satisfactory, the blade can be activated for traffic. From this point on, it handles its share of the cluster load and becomes an integral part of the cluster redundancy scheme.
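The expansion procedure above lends itself to a small state-machine summary. The states and the split between manual and automatic steps follow the article's description; the enum and function names are invented for this sketch.

```python
from enum import Enum, auto

class BladeState(Enum):
    INSERTED = auto()   # manual: board inserted, registered with middleware
    LOADING = auto()    # automatic: 1-to-1 copy of software and configuration
    ISOLATED = auto()   # automatic: cluster member, open for test traffic only
    ACTIVE = auto()     # manual activation: carries its share of regular traffic

def add_blade() -> BladeState:
    """Capacity expansion as described in the article: only the first
    and last steps are manual; everything in between is automatic."""
    state = BladeState.INSERTED
    state = BladeState.LOADING
    state = BladeState.ISOLATED
    # In the background, with no effect on cluster capacity or availability:
    #  - distribution tables adapt to the new cluster configuration
    #  - dynamic data (e.g. subscriber records) is replicated to the blade
    state = BladeState.ACTIVE
    return state

print(add_blade())   # BladeState.ACTIVE
```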


MSC-S Blade Cluster hardware

Building practice
The MSC-S Blade Cluster is housed in an Enhanced Generic Equipment Magazine (E-GEM). Compared with the GEM, the E-GEM provides even more power per subrack and better cooling capabilities, which translates into a smaller footprint.

Generic processor board
The Generic Processor Board (GEP) used for the MSC-S blades is equipped with an x86 64-bit architecture processor. There are several variants of the equipped GEP board, all manufactured from the same printed circuit board. In addition, the GEP is used in a variety of configurations for several other components in the MSC-S Blade Cluster, namely the APG43, the SPX and the SIS, as well as in other application systems.

Infrastructure components
The infrastructure components provide layer-2 and layer-3 infrastructure for the blades, incorporating routers, the SIS and switching components. They also provide the main on-site layer-2 protocol infrastructure for the MSC-S Blade Cluster. Ethernet is used on the backplane for signaling traffic.

Hardware layout
The MSC-S Blade Cluster consists of one or two cabinets (Figure 2). One cabinet houses the mandatory subracks for the MSC-S blades with infrastructure components, the APG43 and the SPX. The other cabinet can be used to house a subrack for expanding the MSC-S blades and two subracks for TDM or ATM signaling interfaces.

Figure 2  MSC-S Blade Cluster cabinets: Cabinet 1 and Cabinet 2, with fan units and subracks for the SPX, IO, TDM devices, ATM devices, IS infrastructure and the MSC-S blades (one cabinet optional).
Power consumption
Low power consumption is achieved by using advanced low-power processors in the E-GEMs and GEP boards. The high subscriber capacity of the blade-cluster node means very low power consumption per subscriber.

Conclusion
The MSC-S Blade Cluster makes Ericsson's Mobile Softswitch solution even better – one that is easy to scale both in terms of capacity and functionality. It offers downtime-free MSC-S upgrades and updates, and outstanding node and network availability. What is more, the hardware can be reused in future node and network migrations.

The architecture, which is based on a cluster of blades, is aligned with Ericsson's Integrated Site concept and other components, such as the APG43 and SPX. O&M is supported by the OSS.

The MSC-S Blade Cluster distributes subscriber traffic between the available blades. One can add, isolate or remove blades without disturbing traffic. The system redistributes the subscribers and replicates subscriber data when the number of MSC-S blades changes. Cluster reconfiguration is an automatic procedure; moreover, the procedure is invisible to entities outside the node.

Petri Maekiniemi
is master strategic product manager for the MSC-S Blade Cluster. He joined Ericsson in Finland in 1985 but has worked at Ericsson Eurolab Aachen in Germany since 1991. Over the years he has served as system manager and, later, as product manager in several areas connected to the MSC, including subscriber services, charging and MSC pool. Petri holds a degree in telecommunications engineering from the Helsinki Institute of Technology, Finland.

Jan Scheurich
is a principal systems designer. He joined Ericsson Eurolab Aachen in 1993 and has since been employed as software designer, system manager, system tester and troubleshooter for various Ericsson products, ranging from B-ISDN over classical MSC, SGSN and MMS solutions to MSS. He currently serves as system architect for the MSC-S Blade Cluster. Jan holds a degree in physics from Kiel University, Germany.

Acknowledgements
The authors thank Joe Wilke, whose comments and contributions have helped shape this article. Joe is a manager in R&D and has driven several technology-shift endeavors, including the MSC-S Blade Cluster.

