Disclaimer
No part of this document may be reproduced in any form without the written
permission of the copyright owner.
The contents of this document are subject to revision without notice due to
continued progress in methodology, design, and manufacturing. Ericsson shall
have no liability for any error or damage of any kind resulting from the use
of this document.
Contents
1 Introduction 1
1.1 Purpose 2
1.2 Scope 3
1.3 Typographic Conventions 3
6 Upgrade 93
Glossary 115
1 Introduction
For an overview of COMInf and the environment it is used in, see Figure 1.
For a more detailed description of COMInf refer to the COMInf Description,
Reference [3].
[Figure 1 — Network Overview: O&M operator clients and an external NMS system connect to the COMInf server nodes (including the OSS Master Server) and the COMInf O&M network infrastructure, which in turn connects to the Access/Core networks and their network elements.]
Communication with the network elements in the network is performed using the
Internet Protocol (IP) and its companion protocols, and/or the X.25 protocol.
In the COMInf network, IP traffic is carried over Ethernet. In the Access and
Core networks, IP traffic may be carried over Asynchronous Transfer Mode
(ATM) and/or Ethernet.
For the GSM RAN and Core Network, the OSS Master Server sometimes
communicates with network elements using the X.25 protocol. The X.25 protocol
is used for communicating with AXE based nodes using the IOG11 or IOG20
frontend processors.
1.1 Purpose
The purpose of this document is to aid network engineers in designing their COMInf
network. The document describes the components that comprise a COMInf
network, and how to configure and connect them. The COMInf Installation
Plan, Reference [5], describes the order in which COMInf network components
need to be installed.
1.2 Scope
This document describes how to integrate COMInf and connect it to the
following systems:
• Core Network
• Service Network
This document covers only O&M of the RANs, the Core Network, the Service
Network, and the network deployment for both Blade and Non-Blade OSS
installations. A Non-Blade deployment has rack-mounted servers with a
SPARC OSS Master Server and a mixture of SPARC and x86 for all other
server types. A Blade deployment has x86 Blade servers only, with no
rack-based servers apart from the Solaris Management Workstation (MWS). In an
x86 Blade deployment the MWS must be an x86 rack-mounted server. O&M of
the COMInf network and payload traffic are outside the scope of this document.
This integration is at the network level (the IP and X.25 layers) and does not relate
to the application layer.
The COMInf network comprises the COMInf IP Network (see Section 3 on page
7) and the COMInf X.25 Network (see Section 4 on page 85).
Note: X.25 traffic can be carried over IP, in which case, the COMInf IP
network and the COMInf X.25 Network are connected together.
All servers requiring backup use a second LAN interface that is dedicated to
backup. All LAN interfaces used for backup are connected to a dedicated backup
VLAN together with the Backup Server (BS) itself. The firewall is not connected to
the backup VLAN, so the backup traffic cannot be routed to any other VLAN.
All machines are equipped with a dedicated LAN interface for remote
management. All LAN interfaces for remote management are connected to a
dedicated remote management VLAN and the MWS. This makes it possible
to attach to the Integrated Lights Out Manager (ILOM) console and perform
remote maintenance from the MWS. The firewall is not connected to the remote
management VLAN, so the remote management traffic cannot be routed to
any other VLAN.
Note that the MWS is a normal installation of Solaris. The other extra VLAN is
used for backup. The backup VLAN is not shown in the figure but is connected to
every node via a second LAN interface. Most COMInf nodes therefore use three
Ethernet ports: one for normal traffic, one for backup, and one for service.
Traffic within a VLAN is handled solely by the switch.
• The O&M X.25 Router, converting between X.25 over Ethernet (also called
XOE) and X.25 over TCP/IP (also called XOT).
• OSS Client hardware where users can access the OSS-RC applications
from their Solaris WS or Windows PC via the ICA protocol.
The following sections give a short description of the different VLANs and the
servers and appliances attached to those VLANs.
In the x86 Blade deployments, O&M Services Primary and Secondary, UAS,
OMSAS, and NEDSS are installed by Solaris Jumpstart from the MWS (using
the dedicated DHCP service on the MWS).
None of these servers belong to COMInf itself, but they are attached to COMInf. The
ENIQ product collects performance management data for several OSS-RC
systems so this VLAN is shared among several OSS-RC systems (in contrast
to the other VLANs that only belong to one single OSS-RC system).
• UNIX Application Server (UAS). This machine (or machines) hosts native
Solaris applications, such as most OSS-RC applications.
• OCS Access Portal (OAP). This machine (or machines) hosts services from
Citrix, such as Web Interface and ICA over SSL, which is also known as
Citrix Secure Gateway. OSS users access this machine through a standard
web browser to run OSS applications.
• Unidirectional SMRS.
• LDAP Server.
In an x86 Blade deployment all servers within the OSS LAN must be connected
to the backup VLAN.
The backup VLAN is mandatory for x86 Blade deployments but not for
SPARC deployments. It is recommended that RFC 1918 compliant private
addressing is used for this VLAN. This allows communication from other
domains using SSH and HTTPS. The storage VLAN is both switched and routed:
• VLAN OSS Storage connects to servers within the OSS service domain.
• VLAN OSS Storage is used for OSS support and infrastructure services and
UAS server in an OSS-RC deployments. The UAS requires a dedicated
network interface on this VLAN.
• VLAN OSS Storage for ENIQ connectivity. This VLAN is used only for
communication between ENIQ and the storage devices.
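For reference, whether an address falls inside the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) can be checked with a short script. This is an illustrative sketch only, not part of the product:

```shell
# Sketch: succeed if the given IPv4 address is in an RFC 1918 private range
# (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16).
is_rfc1918() {
    case "$1" in
        10.*)                                   return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
        192.168.*)                              return 0 ;;
        *)                                      return 1 ;;
    esac
}
is_rfc1918 192.168.10.5 && echo "private"   # prints "private"
is_rfc1918 8.8.8.8      || echo "public"    # prints "public"
```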
The upgrade VLANs are deployed in the x86 Blade environment and are used
during upgrade testing. The upgrade VLANs should have the same characteristics
as the standard VLANs for OSS Services, OCS Services, and INF Services, but
should have different VLAN IDs so that packets with the same IP address but
with different VLAN tags can be routed correctly in the switch. This means
that the switch has to provide Layer 3 capable switching in order to perform
pre-cutover testing.
[Figure: COMInf_HA_A — Master Server 1 and Master Server 2 connect through the network switch to the VLANs OSS Services and OSS Backup Services, and via the firewall/router and O&M router to the managed networks.]
3.2.1 HA-CS
This section describes how the High Availability Cluster Solution (HA-CS)
version of the OSS Master Server is connected to the COMInf network. The
HA-CS Master Server consists of two Master Servers working in a High
Availability configuration. Both are connected to the COMInf network in the
same way as a single Master Server is. In addition, they are interconnected
with a number of directly connected crossed Ethernet cables. The HA-CS
solution is described in High Availability Cluster Solution OSS-RC Function
Description, Reference [8]. The diagram in Figure 3 shows how the HA-CS
Master Servers are connected to COMInf.
Note: During jumpstart of the OSS RC Application Servers towards the MWS,
the firewall must be transparent in both directions OCS-OSS and
OSS-OCS. This means that the settings according to Table 1 shall be
applied after the jumpstart is completed.
3.2.2 HA-RS
This section describes how the High Availability Replication Solution (HA-RS)
version of the OSS Master Server is connected to COMInf. The HA-RS Master
Server consists of two Master Servers, Primary and Secondary, located at two
geographically separated sites. Only one server (the active server) is running
OSS-RC application software at any point in time; this server replicates data
at volume level to the secondary server. See Function Description for High
Availability Replication Solution (HA-RS) for OSS-RC, Reference [9], for more
details. Each site requires its own installation of the COMInf network, and both
sites are connected to the COMInf network in the same way as a single Master
Server. NE traffic has to be switched from the primary to the secondary site at failover.
• VLAN OSS Services contains the OSS-RC Master Server (MS), Event
Based Application Server (EBAS), Service Network Master Server (SNMS)
and the Management Work Station (MWS).
• VLAN OCS Access Services contains the OCS Access Portal in case ICA
over SSL is used or Citrix Web Interface (optional).
• VLAN Infra contains the COMInf Infrastructure servers (O&M INF), O&M
Services server and the Network Element Support Server (NESS).
The backup VLAN is connected to the Backup Server for the OSS nodes it
backs up. This VLAN is only used for backup traffic. It is not routed to other
VLANs or nodes.
The traffic VLANs are connected to the Firewall Router using either a physical
network cable for each VLAN, or using VLAN trunking (802.1q). With VLAN
trunking, several or all VLANs are connected to the Firewall Router using one
single physical network cable. When VLAN trunking is used, the bandwidth is
shared among the participating VLANs. As a general rule, use all available
physical ports on the Firewall Router before employing VLAN trunking. This
eases troubleshooting when looking at the LAN traffic for a specific port.
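The two alternatives can be sketched in Cisco-style switch configuration terms; the interface names and VLAN IDs below are hypothetical, and the exact syntax depends on the switch vendor:

```
! One physical port per VLAN (preferred while ports are available):
interface GigabitEthernet0/1
 description OSS Services VLAN to Firewall Router
 switchport mode access
 switchport access vlan 10

! 802.1q trunk carrying several VLANs over one cable:
interface GigabitEthernet0/2
 description Trunk to Firewall Router
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```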
The Private and Heartbeat networks are connected through the switch, but only as pass-through.
In a Blade deployment, firewall rules should not be applied before the SFS/NAS
installation is done. For further information on configuring SFS/NAS, refer to
Installation and Commissioning Guide for Symantec FileStore for OSS-RC
EMC Installation Documentation, Reference [10].
Currently there are four basic security domains. They are independent of the
network type managed by the OSS and of any additional customer specific
products:
The following domains are optional. For example, the Core Network Security
Domain is only required if the OSS manages a core network, the NMS Security
Domain is only required if an NMS is used, and so on. The optional domains
are:
The OSS Services Security Domain contains the nodes that connect to the
VLAN OSS Services. The OSS Master Server, CM (Cluster Mate), and EBAS
(Event Based Application Server) are located in this domain. A separate MWS
makes it possible to install the servers via a GUI, as several servers lack a
graphical display card and have to be accessed via the LAN interface.
The MWS also has a second Ethernet interface connected to the remote
management port on all COMInf machines.
The OCS Services Security Domain consists of the nodes connected to the
OCS Services VLAN. The Application Server or Servers are located here. Also
the OCS Infrastructure Server (OIS) is located here.
The SMRS Slave Security Domain consists of the BI-SMRS slave machine.
The OCS Access Security Domain consists of the OCS Access Portal Server
node. This node is necessary if the ICA over SSL or Citrix Web Interface
functions are needed.
The WCDMA RAN Security Domain consists of nodes in the WCDMA RAN,
that is, RBS, RNC, RANAG and Site LAN nodes.
The LTE RAN Security Domain consists of nodes in the LTE RAN.
The GSM RAN Security Domain consists of the nodes in the GSM RAN. These
nodes are the various IP interfaces of the BSCs and also the SMPC.
The Core Network Security Domain consists of the nodes in the Core Network.
The Service Network Security Domain consists of the nodes in the Service
Network.
The OCS Security Domain consists of clients of the OCS Servers. This domain
is special in that it may be part of the customer's corporate network. In this
case, the OSS Clients domain will be part of a much larger network. This has to
be considered in the security design of your corporate network. For example,
routing must work between the OSS Client Security Domain and your corporate
network. There are security aspects to cater for as well. You may want to
connect COMInf to your corporate network through its Firewall Router, in order
to protect your corporate network from attacks originating in COMInf. Also, you
may want to implement a policy that allows only a limited part of your corporate
network to access COMInf.
The information in the connectivity matrix is not based on a specific firewall from
a specific vendor, but the following should be noted:
• NAT (Network Address Translation) is only used between the OSS Client
Security Domain and the OCS Services/Access Security Domain. Between
all other domains NAT must not be used, because CORBA is used between
all other domains and CORBA does not work when NAT is enabled.
• The NFS service makes use of Sun Remote Procedure Call (RPC) services.
The RPC service itself listens on the well-known port 111 but dynamically
assigns and opens other TCP/UDP ports for other services, identified by
Sun program numbers. The NFS service is composed of nfsprog/100003
and mountd/100005. Without a firewall that is RPC aware, you need to
query the NFS server with the Solaris command rpcinfo -p, check the
actual allocated port numbers, and open the corresponding ports in the
firewall. With an RPC-aware firewall, the firewall itself opens the NFS
ports dynamically.
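As an illustration of the manual procedure, the allocated ports can be picked out of the rpcinfo -p output with awk. The sample output below is invented for the sketch, so the actual ports on a given server will differ:

```shell
# Parse (sample) 'rpcinfo -p' output and list the ports that must be opened
# in the firewall for NFS (program 100003) and mountd (program 100005).
# On a real server, run instead: rpcinfo -p <nfs-server>
cat > /tmp/rpcinfo_sample.txt <<'EOF'
   program vers proto   port  service
    100000    4   tcp    111  rpcbind
    100003    3   tcp   2049  nfs
    100005    1   udp  32781  mountd
    100005    1   tcp  32778  mountd
EOF
awk '$1 == 100003 || $1 == 100005 { print $1, $3, $4 }' /tmp/rpcinfo_sample.txt
# Expected output:
#   100003 tcp 2049
#   100005 udp 32781
#   100005 tcp 32778
```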
ICA over SSL/TLS introduces a new firewall security domain called OCS Access
containing an OAP (OCS Access Portal) machine. The OAP runs Citrix Web
Interface secured by Citrix Secure Gateway (CSG). This arrangement requires
only a single secure port (443) to be opened between the user's OC machine
and the CSG. The OC web browser uses HTTPS and the Citrix ICA client uses
ICA over SSL/TLS to reach both UNIX and Windows Application servers.
The following figures show three configurations that Ericsson supports when
connecting the clients in the customers' VLAN:
The Enhanced deployment uses the OAP with both the Web Interface and
the Citrix Secure Gateway. The ICA user uses HTTPS to attach to the Web
Interface with all Citrix published applications from OIS/WAS, WAS and UAS.
The Citrix ICA client attaches indirectly via CSG to the UAS or OIS/WAS or
WAS by using ICA over SSL.
• Table 14 needs to be changed. Port 80 (ICA browsing) and port 1494 (Citrix
ICA protocol) should be closed.
• A new table with connections from OSS Client domain to OCS Access
domain needs to be defined. See Table 30.
• A new table with connections from WCDMA RAN Security domain to OCS
Access domain needs to be defined. See Table 31.
• A new table with connections from OCS Access domain to OCS Service
domain needs to be defined. See Table 13.
The connectivity matrix, see Table 1, can be used as an aid for configuring the
Firewall Router. An X denotes traffic within a domain, which is not covered. "No"
means that no communication is allowed. "All" means that communication is
allowed on any port, without restriction.
The following rows are an excerpt of the connectivity matrix (each row gives the originating domain followed by its cells in the column order of Table 1; a table number in a cell points to the table detailing that domain pair):

From Security Administration Domain: Table 5, No, Table 21, X, No, No, No, No, No
From M@H RAN: No, No, No, No, No, No, No, No, No
From Security Administration Domain: No, No, No, No, No, No, No, No
From Service Network: No, No, X, No, No, No, No, No
From PM Services: No, No, No, No, X, No, No, No
From LTE RAN: No, No, No, No, No, No, No, No
From M@H Services: No, No, No, No, No, No, No, Table 57
From M@H RAN: No, No, No, No, No, No, Table 58, No
During jumpstart of all servers from the MWS the firewall must be transparent
in both directions between the OCS Services VLAN and the OSS Services
VLAN. This means that the settings in the following table shall be applied after
the jumpstart is completed.
(1) This port shall be open only in the Enhanced deployment; otherwise it is closed.
(1) Only from Network Elements in LTE RAN, not from Site LANs
Ephemeral ports, also called dynamic ports, are ports handed out by a computer's
operating system to an application, client, or server that needs a port but does
not care which port number it uses. An example is a callback port, where the
dynamic port number is sent to the peer over a previously established
connection. The ephemeral port range differs between machine types and
configurations.
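On Solaris, the ephemeral (anonymous) port range can be read with the ndd command. The sketch below shows the commands and checks whether a given destination port falls inside a sample range; the range values shown are typical Solaris defaults, not guaranteed for any particular machine:

```shell
# On a Solaris machine, the ephemeral TCP port range is read with (root required):
#   ndd /dev/tcp tcp_smallest_anon_port
#   ndd /dev/tcp tcp_largest_anon_port
# Sketch: check whether a given port falls inside a sample range.
smallest=32768   # typical Solaris default (illustrative)
largest=65535    # typical Solaris default (illustrative)
port=50600
if [ "$port" -ge "$smallest" ] && [ "$port" -le "$largest" ]; then
    echo "port $port is in the ephemeral range"
else
    echo "port $port is outside the ephemeral range"
fi
```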
3.4.2.1 OSS
The table below specifies the ephemeral port range for the different OSS
machines.
• DHCP Relay. The Solaris Application Servers are located in the OCS
Services Security domain and are installed using the Sun jump-start
procedure. This procedure requires the Solaris servers to use DHCP when
booting during installation. As a result, the Firewall Router needs to be
configured as a DHCP Relay, relaying DHCP messages from the OCS
Services Security domain. The DHCP server (for Solaris jump-starting)
is the Management Work Station (MWS) in the OSS Services Security
domain. The firewall configuration matrix also allows DHCP from the OCS
Services to the OSS Services Security domains. NEDSS, OMSAS and
O&M Services are installed on x86 hardware using the Sun jump-start
procedure. This procedure requires that DHCP relay be temporarily
allowed between OSS Security domain and SMRS Slave Security domain
(for NEDSS) or Security Administration domain (for OMSAS) or O&M
Infrastructure Security domain, as MWS has to be used as DHCP server
for this type of installation.
• TCP Connection Timeout. Set the TCP idle timeout parameter in the
firewall to 65 minutes.
Some of the TCP connections passing through the firewall are long-lived
and may carry no traffic for long periods. As a result, the firewall may
mistakenly consider these connections abandoned (for example, CORBA
connections from the Master Server to the Network Elements). The OSS
is configured with a maximum idle time of 60 minutes for TCP connections,
and the firewall must not drop connections that have been idle for less than that.
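The DHCP relay and TCP timeout settings above can be sketched in vendor configuration terms. The fragments below are illustrative only: the interface name, the assumed MWS address 10.1.0.10, and the ASA-style timeout syntax are assumptions, and the real syntax varies by firewall vendor:

```
! Hypothetical Cisco-style DHCP relay: DHCP broadcasts received on the
! OCS Services interface are forwarded to the MWS in OSS Services
interface Vlan20
 description OCS Services
 ip helper-address 10.1.0.10   ! assumed MWS address

! Hypothetical ASA-style idle timeout: idle TCP connections are kept for
! 65 minutes (hh:mm:ss), longer than the 60-minute OSS idle time
timeout conn 1:05:00
```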
The O&M Router is a general device for connecting managed networks to the
OSS. The router model Ericsson recommends can be equipped with a rich
selection of link layer interfaces. For GSM RAN and the Core Network, the link
layer used is not defined. WCDMA RANs, on the other hand, have well-defined
transport networks: a WCDMA RAN is connected to the OSS using an ATM or
Ethernet network.
Note: These access restrictions cannot stop an attack mounted from an RBS
or RNC site LAN and targeting the local RBS or RNC.
The Network Switch, Firewall Router and O&M Router are not redundant.
Traditionally, the communication between the OSS and the AXE Network
Elements is realized using X.25 instead of IP. In particular, the OSS Master
Server has an X.25 interface and communicates with the IOG11 or IOG20
frontend processors of the AXE nodes (AXE nodes are part of the GSM RAN
and Core Network).
Figure 9 shows two ways in which the OSS Master Server connects to X.25
devices:
[Figure: COMInf_X25_Overview_B — the Master Server connects either directly to an X.25 O&M Router, or through the network switch, firewall/router, and O&M router of the network infrastructure.]
Both the native X.25 network transport and the X.25 over TCP/IP transport
methods are described in Section 5.4 on page 89.
This chapter describes how the COMInf network connects to the managed
networks. The four types of managed network (WCDMA RAN, GSM RAN,
Core Network, Service Network) connect using a variety of technologies.
Some managed networks, such as the WCDMA or Service Networks, employ
well-defined connection methods. Others, such as the GSM RAN or Core
Network, offer more flexibility in terms of connection methods.
[Figure: Connecting_to_WRAN_A — the OSS and Services nodes connect over IP over Ethernet to the network switch and firewall/router; the O&M router then carries IP over ATM to the WCDMA Radio Access Network.]
Despite belonging to COMInf, the O&M Router performs certain tasks that are
only relevant for the WCDMA RAN and not for COMInf. For example, the router
performs WCDMA RAN routing (when IP routing protocols are used in the
WCDMA RAN.) Routing between the O&M Router and the RNCs is done using
the Open Shortest Path First (OSPF) routing protocol.
[Figure: Connecting_to_GRAN_B — the OSS and Services nodes connect over IP over Ethernet through the network switch, firewall/router, and O&M router, which carries IP over a WAN transport network to the GSM RAN sites.]
There are two types of IP nodes in GSM RAN. BSCs are AXE nodes and can
have several IP interfaces each. Most notably, the APG40 frontend processors
are connected to the OSS using IP.
The other type of node is the SMPC. It is based on Solaris and connects
to the OSS using IP.
[Figure: X25_Connecting_to_GRAN_B — the Master Server in OSS Services connects over X.25 over LAPB through an X.25 network to the GSM RAN sites.]
The OSS Master Server is equipped with an X.25 interface card. This interface
is connected to an X.25 network, to which the AXE IOGs are connected.
[Figure: XOT_Connecting_to_GRAN_B — the Master Server in the VLAN OSS Services connects via the network switch, firewall/router, and O&M router using X.25 over TCP (XOT), which is carried through the transport network to the GSM RAN sites, where a BSC with IOG terminates the X.25 connection.]
The OSS Master Server is connected to an X.25 O&M Router, either through
a LAPB link or using X.25 over Ethernet. The X.25 O&M Router converts the
data stream to X.25 over TCP (XOT), that is, X.25 is tunneled over TCP. The
X.25 data is carried through the Firewall Router to the IP transport network.
The traffic goes through the transport network until it enters a GSM RAN site,
where it contacts a site X.25 O&M Router. Here the X.25 traffic exits the tunnel
and is carried over a LAPB link to the AXE's IOG.
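On a Cisco-style X.25 O&M Router, the XOT tunneling described above might be sketched as follows; the X.121 address pattern and the site router IP address are hypothetical:

```
! Hypothetical XOT configuration: X.25 calls matching the X.121 address
! pattern are tunneled over TCP to the site X.25 O&M Router
x25 routing
x25 route ^1234 xot 10.20.0.5   ! assumed X.121 prefix and site router IP
```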
[Figure: Connecting_to_CN_B — the OSS and Services nodes connect over IP over Ethernet through the network switch, firewall/router, and O&M router, which carries IP over a WAN transport network to the Core Network sites.]
The figure only shows connecting to IP nodes. Some Core Network nodes are
AXE based and may need an X.25 connection. In this case, use the same
method as when connecting to X.25 GSM RAN, see Section 5.4 on page 89.
6 Upgrade
The table below shows the ports that have been added or deleted between
deliveries. This is helpful when configuring the firewall after the system has
been upgraded.
Table 73 OSSRC10.2 GA

Table  From Domain              To Domain                Service/Protocol  Dst Port              Added/Removed/Changed
4      OCS Services             OSS Services             UDP               163-168, 50600-50719  Added
5      Infrastructure Security  OSS Services             UDP               163-168, 50600-50719  Added
6      Security Administration  OSS Services             UDP               162                   Removed
12     OSS Services             OCS Services             UDP               50720-50739           Added
18     OSS Services             Infrastructure Security  UDP               50720-50739           Added
42     OSS Services             PM Services              UDP               50720-50739           Added
Glossary
The OSS Glossary is included in Reference [1].
Reference List
[10] Installation and Commissioning Guide for Symantec FileStore for OSS-RC EMC Installation Documentation