You are on page 1of 132

Tellabs ® 8000 Network Manager R17A

System Description

70168_04
12.02.2010
Document Information

Revision History

Document No. Date Description of Changes


70168_04 12.02.2010 Tellabs 7305 support added.
Tellabs 7100 optical/OTN performance history collection and
reporting support added.
70168_03 27.11.2009 Tellabs 7300 VLAN VPN support added.
Packet Counters – VLAN Queue Statistics chapter added in
performance statistics.
7100N OTS FOADM support added and existing Tellabs 7100 data
clarified.
Tellabs 8110 network terminating unit data edited.
70168_02 30.09.2009 The description of the MTOSI service updated.
Information on RADIUS authentication added.
Tellabs 8110 network terminating unit data edited.
Tellabs 7100 packet subsystem data edited.

© 2010 Tellabs. All rights reserved.

This Tellabs manual is owned by Tellabs or its licensors and protected by U.S. and international copyright laws, conventions and
treaties. Your right to use this manual is subject to limitations and restrictions imposed by applicable licenses and copyright laws.
Unauthorized reproduction, modification, distribution, display or other use of this manual may result in criminal and civil penalties.
The following trademarks and service marks are owned by Tellabs Operations, Inc. or its affiliates in the United States and/or
other countries: TELLABS ®, TELLABS ® logo, TELLABS and T symbol ®, and T symbol ®.

Any other company or product names may be trademarks of their respective companies.

The specifications and information regarding the products in this manual are subject to change without notice. All statements,
information, and recommendations in this manual are believed to be accurate but are presented without warranty of any kind,
express or implied. Users must take full responsibility for their application of any products.

Adobe ® Reader ® are registered trademarks of Adobe Systems Incorporated in the United States and/or other countries.

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

2
Document Information

Terms and Abbreviations

Term Explanation
AAL ATM Adaptation Layer
ACL Access Control List
ADM Add/Drop Multiplexer
APS Automatic Protection Switching
ARP Address Resolution Protocol
ATM Asynchronous Transfer Mode
BFD Bidirectional Forwarding Detection
BGP-4 Border Gateway Protocol version 4
BMP Broadband Management Protocol. A communication protocol which is used between
the Tellabs 8600 network elements and Tellabs 8000 network manager.
BSC Base Station Controller
CE Customer Edge
CESoPSN Circuit Emulation Service over Packet Switched Network
CLI Command Line Interface
CMIP Common Management Information Protocol
CORBA Common Object Request Broker Architecture
CSPF Constrained Shortest Path First
CWDM Coarse Wavelength Division Multiplexing
DHCP Dynamic Host Configuration Protocol
DLCI Data Link Connection Identifier
DMA Deferred Maintenance Alarm
DSL Digital Subscriber Line
DWDM Dense Wavelength Division Multiplexing
ELP Ethernet Link Protection
E-LSP EXP-inferred LSP (EXP bits in the label indicate the required per hop behavior)
EML Element Management Layer
EMS Element Management System
FMS Fault Management System
FR Frame Relay
GPRS General Packet Radio Service
GUI Graphical User Interface
HDLC High-Level Data Link Control
IFM Interface Module

70168_04 Tellabs ® 8000 Network Manager R17A


© 2010 Tellabs. System Description

3
Document Information

IP Internet Protocol
IP VPN BGP/MPLS VPN
LAN Local Area Network
LDP Label Distribution Protocol
L-LSP Label-inferred LSP (PHB treatment is inferred from the label)
LSP Label Switched Path
MAC Media Access Control
MEI Maintenance Event Information
MIB Management Information Base
MPLS Multiprotocol Label Switching
MSP Multiple Section Protection
MSPP Multi-service Provisioning Platform
MTOSI Multi-Technology Operations System Interface
NBI Northbound Interface
NE Network Element
NID Network Interface Device
NML Network Management Layer
Node In Tellabs 8000 network manager refers to a network element.
NTU Network Terminating Unit
OAM Operations and Maintenance
OLA Optical Line Amplifier
OS Operations System
OSPF-TE Open Shortest Path First - Traffic Engineering
OSR Object Server
OSS Operations Support System
OTS Optical Transport System
PDH Plesiochronous Digital Hierarchy
PE Provider Edge
PMA Prompt Maintenance Alarm
PPP Point-to-Point Protocol
PS Private Subnetwork
PVC Permanent Virtual Circuit
PW Pseudowire
PWE3 Pseudowire Emulation Edge to Edge.
QoS Quality of Service
RADIUS Remote Authentication Dial-In User Service
RNC Radio Network Controller

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

4
Document Information

RNO Real Network Operator


ROADM Reconfigurable Optical Add/Drop Multiplexer
RSVP-TE Resource Reservation Protocol - Traffic Engineering
SAToP Structure-Agnostic Time Division Multiplexing over Packet
SBOADM Single Bay Optical Add/Drop Multiplexer
SDH Synchronous Digital Hierarchy
SGSN Serving GPRS Support Node
SML Service Management Layer
SNC Subnetwork Connection
SNMP Simple Network Management Protocol
TCP Transmission Control Protocol
TDM Time Division Multiplexing
UDP User Datagram Protocol
UMTS Universal Mobile Telecommunications System
Unit In Tellabs 8000 network manager refers to a line or control card.
VCCV Virtual Circuit Connection Verification
VLAN Virtual LAN
VPN Virtual Private Network
VPWS Virtual Private Wire Service
VRF VPN Routing and Forwarding
WDM Wavelength Division Multiplexing
WiMAX Worldwide Interoperability for Microwave Access

70168_04 Tellabs ® 8000 Network Manager R17A


© 2010 Tellabs. System Description

5
Tellabs ® 8000 Network Manager R17A 70168_04
System Description © 2010 Tellabs.

6
Table of Contents

Table of Contents

About the Manual .............................................................................................................. 10

Objectives....................................................................................................................................................................... 10
Audience......................................................................................................................................................................... 10
Related Documentation .................................................................................................................................................. 10
Document Conventions ...................................................................................................................................................11
Discontinued Products.....................................................................................................................................................11
Documentation Feedback................................................................................................................................................11

1 Introduction ................................................................................................................. 12

1.1 Supported Network Elements.............................................................................................................................. 13


1.1.1 Tellabs 8600 Network Elements.......................................................................................................... 13
1.1.2 Tellabs 8100 Network Elements.......................................................................................................... 18
1.1.3 Tellabs 6300 Network Elements.......................................................................................................... 23
1.1.4 Tellabs 7100 Optical Transport System............................................................................................... 27
1.1.5 Tellabs 7300 Metro Ethernet Switching Series ................................................................................... 28
1.1.6 Tellabs 8800 Multiservice Routers ...................................................................................................... 29
1.2 Main Applications ............................................................................................................................................... 29
1.2.1 TDM Circuits....................................................................................................................................... 31
1.2.2 TDM Pseudowire................................................................................................................................. 32
1.2.3 ATM Pseudowire ................................................................................................................................. 32
1.2.4 Ethernet Pseudowire ............................................................................................................................ 33
1.2.5 Frame Relay DLCI Pseudowire........................................................................................................... 34
1.2.6 HDLC Pseudowire............................................................................................................................... 35
1.2.7 VLAN VPN ......................................................................................................................................... 36
1.2.8 Dense Wavelength Division Multiplexing........................................................................................... 37
1.2.9 IP VPN ................................................................................................................................................ 38
1.2.10 Internet Access .................................................................................................................................... 39

2 Tellabs 8000 Manager Components and Architecture............................................. 40

2.1 Overall Architecture ............................................................................................................................................ 40


2.1.1 Typical Configuration .......................................................................................................................... 40
2.1.2 Minimum Configuration ...................................................................................................................... 41
2.1.3 N-tier Architecture............................................................................................................................... 41
2.1.4 Object Server ....................................................................................................................................... 42
2.2 Workstation.......................................................................................................................................................... 42
2.2.1 Standard Workstation........................................................................................................................... 43
2.2.2 Power Workstation............................................................................................................................... 44
2.2.3 Satellite Workstation............................................................................................................................ 44

70168_04 Tellabs ® 8000 Network Manager R17A


© 2010 Tellabs. System Description

7
Table of Contents

2.2.4 Private Subnetwork Workstation ......................................................................................................... 44


2.3 Database Server ................................................................................................................................................... 44
2.3.1 The Role of the Database in Tellabs 8000 Manager............................................................................ 45
2.4 Communication Server ........................................................................................................................................ 45
2.4.1 General................................................................................................................................................. 45
2.4.2 Managing the Communication Servers ............................................................................................... 46
2.4.3 Fault Management in Communication Servers ................................................................................... 47
2.4.4 Backup Monitoring in Communication Servers.................................................................................. 47
2.4.5 Communication with Network Elements ............................................................................................ 47
2.4.6 Element Adapters ................................................................................................................................ 48
2.5 Management Server............................................................................................................................................. 49
2.5.1 Service Processes................................................................................................................................. 50
2.5.2 Using Multiple Service Instances ........................................................................................................ 51
2.5.3 Satellite Service ................................................................................................................................... 51
2.5.4 Fault Service ........................................................................................................................................ 53
2.5.5 Northbound Interface........................................................................................................................... 53
2.5.6 MTOSI Service.................................................................................................................................... 53
2.6 Recovery Server .................................................................................................................................................. 54
2.7 Route Master........................................................................................................................................................ 54
2.7.1 Online Core Network Monitoring (OCNM)........................................................................................ 55

3 Tellabs 8000 Manager Application Packages ........................................................... 57

3.1 General................................................................................................................................................................. 57
3.1.1 Other Software Packages..................................................................................................................... 58
3.2 Basic Package ...................................................................................................................................................... 58
3.2.1 Network Editor .................................................................................................................................... 58
3.2.2 Node Manager ..................................................................................................................................... 64
3.2.3 Customer Administration .................................................................................................................... 67
3.2.4 Fault Management System .................................................................................................................. 68
3.2.5 Trouble Ticket...................................................................................................................................... 74
3.2.6 Accounting Management..................................................................................................................... 75
3.2.7 Security Management .......................................................................................................................... 75
3.2.8 Automatic Maintenance Procedures.................................................................................................... 76
3.3 Provisioning Packages......................................................................................................................................... 76
3.3.1 Router .................................................................................................................................................. 77
3.3.2 VPN Provisioning................................................................................................................................ 79
3.3.3 Tunnel Engineering ............................................................................................................................. 85
3.3.4 VLAN Manager................................................................................................................................... 86
3.4 Testing Package ................................................................................................................................................... 87
3.4.1 Overview ............................................................................................................................................. 87
3.4.2 Packet Loop Test ................................................................................................................................. 87
3.4.3 Circuit Loop Test ................................................................................................................................. 99
3.5 Service Fault Monitoring Package .................................................................................................................... 104
3.5.1 Service Fault Monitoring Windows................................................................................................... 104
3.5.2 Service Management ......................................................................................................................... 104
3.6 Performance Management Package................................................................................................................... 106
3.6.1 Overview ........................................................................................................................................... 106
3.6.2 Performance Statistics ....................................................................................................................... 106
3.6.3 History Data Collection and Ageing ..................................................................................................110
3.6.4 Performance Reporting.......................................................................................................................111

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

8
Table of Contents

3.6.5 Interface Utilization Threshold Monitoring .......................................................................................112


3.6.6 Trend Analysis....................................................................................................................................112
3.6.7 Performance Overview Reporting Tool for Tellabs 8600, Tellabs 8800, Tellabs 7100 and Tellabs 6300
Network Elements ..............................................................................................................................113
3.7 Recovery Package...............................................................................................................................................114
3.7.1 Recovery Management System ..........................................................................................................114
3.8 Service Viewing package....................................................................................................................................115
3.8.1 Service and Circuit Component View ................................................................................................115
3.9 Planning Package................................................................................................................................................118
3.9.1 Fault Simulator ...................................................................................................................................118
3.9.2 Network Capacity Calculator .............................................................................................................119
3.10 8100 Service Computer ..................................................................................................................................... 121
3.11 Private Subnetwork............................................................................................................................................ 121
3.11.1 Private Subnetwork Trunk Accounting ............................................................................................. 121
3.11.2 Management Server Running Private Subnetwork Service .............................................................. 121
3.11.3 Private Subnetwork Faults................................................................................................................. 122
3.12 Partitioned Package ........................................................................................................................................... 123
3.12.1 Hierarchy Levels for Partitioning ...................................................................................................... 123
3.12.2 Backbone Overview of Network ....................................................................................................... 124
3.12.3 Backbone Network View................................................................................................................... 126
3.12.4 Regional Network View .................................................................................................................... 127
3.12.5 Fault Management in Partitioned Network ...................................................................................... 128
3.13 Unit Software Management Package ................................................................................................................ 129
3.14 Macro Package .................................................................................................................................................. 131
3.14.1 Macro Manager.................................................................................................................................. 131
3.14.2 Application Development Toolkit ..................................................................................................... 131
3.15 Web Reporter ..................................................................................................................................................... 132
3.16 Off-Line Database Kit for Reporting................................................................................................................. 132

70168_04 Tellabs ® 8000 Network Manager R17A


© 2010 Tellabs. System Description

9
About the Manual

About the Manual

Objectives

Tellabs ® 8000 Network Manager System Description provides an overview of Tellabs ® 8000
network manager. It contains descriptions of Tellabs 8000 manager features, components and
architecture, as well as all major software tools.

Audience

This manual is designed for all those who wish to familiarize themselves with the functionality and
architecture of Tellabs 8000 manager. It is assumed that you have a basic understanding of network
management systems and their functionality.

Related Documentation

Tellabs ® 8000 Network Manager R17A Describes all the needed third party software
Third Party Hardware and Software Requirements components and versions in the Tellabs 8000
(70002_XX) network manager system (operating systems,
service packs, database server versions, client
versions, web browsers). Describes also
the recommended computer platforms, LAN
configurations and minimum performance
requirements for computers.
Tellabs ® 8000 Network Manager R17A Provides instructions on how to install and
Software Installation Manual (70170_XX) configure Tellabs 8000 network manager in
different Tellabs 8000 network manager computer
platforms (workstation, servers).
Tellabs ® 8600 Managed Edge System Describes the network element features of Tellabs
Tellabs ® 8606 Ethernet Aggregator FP3.60 User’s 8606 Ethernet aggregator and how to configure it
Guide (40020_XX) using the web configurator and CLI commands.
Tellabs ® 8600 Managed Edge System Describes network element features.
Tellabs ® 8605 Access Switch FP1.3 Reference
Manual (40058_XX)
Tellabs ® 8600 Managed Edge System Describes network element features.
Tellabs ® 8607 Access Switch FP1.0A Reference
Manual (40045_XX)
Tellabs ® 8600 Managed Edge System Describes network element features.
Tellabs ® 8620 Access Switch FP2.11 Reference
Manual (40059_XX)

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

10
About the Manual

Tellabs ® 8600 Managed Edge System Describes network element features.


Tellabs ® 8630 Access Switch FP2.11 Reference
Manual (40060_XX)
Tellabs ® 8600 Managed Edge System Describes network element features.
Tellabs ® 8660 Edge Switch FP2.11 Reference
Manual (40061_XX)

Document Conventions

This is a note symbol. It emphasizes or supplements information in the document.

This is a caution symbol. It indicates that damage to equipment is possible if the instructions
are not followed.

This is a warning symbol. It indicates that bodily injury is possible if the instructions are not
followed.

Discontinued Products

Tellabs ® 8100 Managed Access System Discontinued Products list can be found in Tellabs Portal,
www.portal.tellabs.com by navigating to Product Documentation > Managed Access > Tellabs
8100 Managed Access System (Includes all 81XX products) > Technical Documentation >
Network Element Manuals > 8100 List of Discontinued Products.

Documentation Feedback

Please contact us to suggest improvements or to report errors in our documentation:

Email: fi-documentation@tellabs.com

Fax: +358.9.4131.2430

70168_04 Tellabs ® 8000 Network Manager R17A


© 2010 Tellabs. System Description

11
1 Introduction

1 Introduction
Tellabs ® 8000 network manager is an integrated network and service management system for
managing Tellabs ® 8600 managed edge system, Tellabs ® 8100 managed access system, Tellabs ®
6300 managed transport system, Tellabs ® 7100 optical transport system, Tellabs ® 7300 metro
Ethernet switching series and Tellabs ® 8800 multiservice router series product families. It offers a
toolbox for performing all necessary functions to create, test and monitor services provisioned on
these product platforms. The Tellabs 8000 manager is a multi-technology platform supporting TDM
(PDH, SDH and SONET), ATM, Ethernet, Layer 2 VPN, IP VPN (RFC4364) and Internet access
services provisioned on top of the supported product families.

Tellabs 8000 manager is integrated in its entirety. The tools are integrated together, they share and
utilize the same data and they communicate with each other. The efficient data flow between the
tools is gained by the use of the same relational database for all the Tellabs 8000 manager tools.
The database maintains an up-to-date status of the network, elements and services and this status is
shared in a real-time manner to all the tools and users. In a multi-user environment, this ensures that
all those who operate with the network are aware of events or actions taken by other users in the
network. In addition to maintaining the up-to-date status, Tellabs 8000 network manager provides
possibility for pre-planning of the network and services in advance.

The supported product families have been readily integrated within Tellabs 8000 manager and
adapters for each product family are offered as an integrated part of the management system.
The network management system is thus ready to command and communicate with the network
elements without any additional modeling or adapter work. All communication to the network and
elements are designed to go through the network management system to ensure that the consistency
throughout the Tellabs 8000 manager is maintained at all times.

With no single point of failure, Tellabs 8000 manager is built to be a reliable and fault tolerant
system that performs well in a diverse environment and under variable circumstances. In Tellabs
8000 manager, the needs of a growing network and operations have been considered by ensuring the
system scalability according to the size of the network and the number of services and users.

Tellabs 8000 manager is designed to take full control over its own domain while keeping in mind the
requirements of operational support systems (OSS) and other domains. Northbound interfaces can
be used to adapt and integrate the system into other service provider’s systems. Via the northbound
interfaces, the OSS has access to the network infrastructure (e.g. elements and network topology),
the services (provisioning, monitoring and testing) and other OSS support data.

Tellabs 8000 manager brings a consistent yet adaptable way of managing not only the elements but
also the whole network and the services end-to-end within its domain.

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

12
1 Introduction

1.1 Supported Network Elements

1.1.1 Tellabs 8600 Network Elements

The Tellabs 8600 managed edge system product family is designed for providing managed 2G/3G
mobile transport, Ethernet, broadband service aggregation and IP VPN services. It has specifically
been designed to extend MPLS based services from the core network into access networks. Essential
technologies here include the following:

• MPLS for tunneling the customer traffic through the operator network, both in core network and
in access networks.
• RSVP for managing traffic engineered MPLS LSPs with bandwidth reservations, and optionally
operator-defined routes.
• DiffServ for managing different service classes with different QoS characteristics.
• Layer 3 (IP) VPNs based on BGP and MPLS for implementing many-to-many layer 3 connec-
tivity between customer sites through the operator network.
• Layer 2 VPNs based on pseudowires for implementing point-to-point layer 2 (TDM, ATM, Eth-
ernet, Frame Relay and HDLC) connectivity between sites through the operator network.
• BGP for distributing customer routes through the operator network between the service end-
points.
• OSPF and IS-IS for routing IP traffic inside a single core/access network.
• Ethernet, ATM, Frame Relay and TDM as interfacing and access aggregation technology, as
an alternative to MPLS in the access network.
• Traffic Engineering (TE) extensions for RSVP and OSPF protocols to reserve capacity for LSPs
aware of DiffServ based service classes and to select the used path based on available capacity
for the intended service class(es).

Tellabs ® 8660 Edge Switch

Tellabs 8660 edge switch is a modular, scalable router positioned to RNC and large hub sites in
mobile networks. In fixed networks it can act either as a PE router on the edge of an MPLS core
network or as an access aggregator in an MPLS-based access network. In addition to this, it can also
be used as a P router in an MPLS core network or as a general-purpose IP router in places where
good scalability and redundancy is required.

The rack of Tellabs 8660 edge switch has 14 slots, of which slots number 1 and 14 are reserved
for two redundant power/control cards. The remaining 12 slots are available for line cards. Each
line card can be equipped with two separate interface modules. For more information on Tellabs
8660 edge switch, refer to the document Tellabs ® 8600 Managed Edge System Tellabs ® 8660 Edge
Switch Reference Manual.

70168_04 Tellabs ® 8000 Network Manager R17A


© 2010 Tellabs. System Description

13
1 Introduction

Fig. 1 Tellabs 8660 Edge Switch Subrack V3.0

Tellabs ® 8630 Access Switch

Tellabs 8630 access switch is a modular, compact router positioned at hub sites in mobile networks.
Tellabs 8630 access switch has 6 slots with two slots reserved for redundant power/control cards.
The remaining 4 slots are available for line cards. Each line card can be equipped with two separate
interface modules. For more information on Tellabs 8630 access switch, refer to the document
Tellabs ® 8600 Managed Edge System Tellabs ® 8630 Access Switch Reference Manual.

Fig. 2 Tellabs 8630 Access Switch


Tellabs ® 8620 Access Switch

Tellabs 8620 access switch is a modular integrated access device. It can be used in the customer
premises of high-end business customers or as a high-capacity aggregation element. Tellabs 8620
access switch offers carrier class operation, cost efficiency and QoS capabilities in a small tabletop
or 19" rack-mountable design. The same interface modules (IFMs) can be used in the Tellabs 8620
access switch, Tellabs 8630 access switch and Tellabs 8660 edge switch, except for the 1xSTM-16
POS module, which can only be used in the Tellabs 8630 access switch and Tellabs 8660 edge switch. For
more information on Tellabs 8620 access switch, refer to the document Tellabs ® 8600 Managed
Edge System Tellabs ® 8620 Access Switch Reference Manual.

Fig. 3 Tellabs 8620 Access Switch AC

Fig. 4 Tellabs 8620 Access Switch DC


Tellabs ® 8607 Access Switch

Tellabs 8607 access switch is a compact and highly modular access element with multiprotocol
support. Integration of Ethernet, DSL and pseudowire technology makes it attractive for 2G, 3G and
WiMAX cell sites and network evolution. Because of its small size and flexibility, Tellabs 8607
access switch is optimal for service providers’ access networks at small traffic aggregation points
or cell sites. The high level of interface and power feed modularity enables meeting various site
requirements and optimizing the inventory with the same product. Due to the integrated DSL
functionality, which is one of the transport options, there is no need for an external NTU to provide
the DSL part. Versatile service capabilities, including support for TDM, ATM, HDLC and Ethernet
based connections as well as IP routing, enable the migration of 2G TDM and 3G ATM, Ethernet
or IP based networks into a single network infrastructure. Tellabs 8607 access switch has packet
based forwarding with QoS support enabling network optimization for voice and data services. For
more information on Tellabs 8607 access switch, refer to the document Tellabs ® 8600 Managed
Edge System Tellabs ® 8607 Access Switch Reference Manual.

Fig. 5 Tellabs 8607 Access Switch

Tellabs ® 8606 Ethernet Aggregator

The Tellabs 8606 Ethernet aggregator is a cost-efficient Ethernet switch for aggregating multiple
customers in dense locations to the Tellabs 8660, 8630 or 8620 access switches. The switch has
24 10/100 Mbps Ethernet ports and four Gigabit ports. For more information on the Tellabs 8606
Ethernet aggregator, refer to the document Tellabs ® 8600 Managed Edge System Tellabs ® 8606
Ethernet Aggregator User’s Guide.


Tellabs ® 8605 Access Switch

The Tellabs 8605 access switch is a compact and fully managed mobile cell site node with
multiprotocol support. It is designed for cost-efficient delivery of 2G and 3G voice and data
services over a diverse set of access and uplink interfaces. The switch has 16 x E1/T1 multiservice
interfaces, 2 fixed 10/100 Mbps Ethernet ports as well as either 2 optical Ethernet uplink ports with
SFPs or 2 x 10/100/1000 Mbps electrical Ethernet ports. It supports all relevant protocols for
cost-efficient transport of GSM, CDMA, CDMA-2000, EV-DO and WCDMA R99/R5 traffic. For
more information on the Tellabs 8605 access switch, refer to the document Tellabs ® 8600 Managed
Edge System Tellabs ® 8605 Access Switch Reference Manual.

Fig. 6 Tellabs 8605 Access Switch AC

Fig. 7 Tellabs 8605 Access Switch DC


1.1.2 Tellabs 8100 Network Elements

The Tellabs 8100 product family is a network element family for managed access and transport
solutions. It consists of a number of nodes and NTUs designed to support business service and
mobile access and transport applications. The Tellabs 8100 nodes can provide the functionality of a
digital cross-connect and multiplexer network element.

Tellabs ® 8110 network terminating units can be used to connect customer premises to an operator’s
POP using an existing copper loop. The Tellabs 8110 NTUs can be used in a stand-alone
environment as well as connected to Tellabs 8100 nodes, offering fully managed customer
premises equipment.

Tellabs 8100 node types are selected based on the access interface requirements. Tellabs ® 8120 mini
and Tellabs ® 8140 midi nodes are typically used at customer premises, while Tellabs ® 8150 basic,
Tellabs ® 8160 A111 accelerator, Tellabs ® 8184 access switch, Tellabs ® 8188 access switch and
Tellabs ® 8170 cluster nodes are typically used when bigger cross-connection devices are required
or the Tellabs 8100 nodes need to interoperate with an SDH network.

Tellabs ® 8110 Network Terminating Unit CTE-R

Tellabs 8110 network terminating unit CTE-R has a maximum data rate of 4544 kbps over a 2-pair
copper line. Tellabs 8110 CTE-R can act as an IP router, bridge or combined IP router and bridge
(BRouter).

Tellabs ® 8110 Network Terminating Unit CTE-S

Tellabs 8110 network terminating unit CTE-S is available with integrated X.21, V.35, V.36 or G.703
interfaces. Tellabs 8110 CTE-S has a maximum data rate of 4544 kbps over a 2-pair copper line.

Tellabs ® 8110 Network Terminating Unit CTE-R/208

Tellabs 8110 network terminating unit CTE-R/208 has a maximum data rate of 128 kbps. It
has a 10/100-BaseT Ethernet access interface. The CTE-R/208 can act as an IP router, bridge
or combined IP router and bridge (BRouter).

Tellabs ® 8110 Network Terminating Unit CTE-S/208

Tellabs 8110 network terminating unit CTE-S/208 is available with integrated X.21, V.35, V.36 or
G.703 interfaces and it is targeted to lower data rate applications. Tellabs 8110 CTE-S/208 has
a maximum data rate of 128 kbps.

Tellabs ® 8110 G.SHDSL Network Terminating Unit CTE2-R

Tellabs 8110 G.SHDSL network terminating unit CTE2-R has a maximum data rate of 12,224 kbps
over a 4-pair copper line. It is designed for high-speed corporate LAN interconnect applications,
where service reliability and service differentiation are needed. It has a 10/100-BaseT Ethernet access
interface. The CTE2-R can act as an IP router, bridge or combined IP router and bridge (BRouter).


Tellabs ® 8110 G.SHDSL Network Terminating Unit CTE2-S

Tellabs 8110 G.SHDSL network terminating unit CTE2-S is primarily designed for leased line data
networks with fast and easy installation. With the G.703 interface it is also suitable for connecting,
for example, mobile base stations or PBXs to the transmission network. Tellabs 8110 CTE2-S is
available with integrated X.21, V.35 or G.703 interfaces and the DSL transmission is based on
ITU-T G.SHDSL technology. Tellabs 8110 CTE2-S has a maximum data rate of 8,128 kbps and it
has AC and DC power feed options.

Tellabs ® 8110 Network Terminating Unit CTU-R

Tellabs 8110 network terminating unit CTU-R is a high-speed NTU designed for managed data
access with rapid deployment. It is a network terminating unit using extended ETSI HDSL based
technology. Tellabs 8110 CTU-R has a maximum data rate of 4,544 kbps over 2 pairs with integrated
IP routing. With a direct 10/100-Base-T Ethernet interface it is able to offer data access services
without the need for a customer premises wide area network routing infrastructure. Typically such
services are dedicated to corporate Intranet and Internet access.

Tellabs ® 8110 Network Terminating Unit CTU-S

Tellabs 8110 network terminating unit CTU-S is primarily designed for managed serial data
access with fast and easy installation. With the G.703 interface it is also suitable for connecting,
for example, mobile base stations or PBXs to the transmission network. Tellabs 8110 CTU-S
is available with integrated X.21, V.35 or G.703 interfaces and the DSL transmission is based
on ETSI HDSL technology. Tellabs 8110 CTU-S has a maximum data rate of 4,544 kbps and it
has AC, DC and remote power feed options.

Tellabs ® 8110 Network Terminating Unit CTU-512

Tellabs 8110 network terminating unit CTU-512 uses extended ETSI HDSL based technology
and it is targeted to lower data rate applications. It has a maximum data rate of 512 kbps and
it is available with V.35 and X.21 interfaces.

Tellabs ® 8110 Network Terminating Unit OTU-2M and Tellabs ® 8110 Network Terminating Unit
OTU-RP

Tellabs 8110 network terminating units OTU-2M and OTU-RP are primarily designed for
connecting mobile network micro base stations to the rest of the mobile transmission network. They
can also be used in leased line networks to connect, for example, PBXs to the Tellabs 8100 nodes.

Tellabs ® 8110 Network Terminating Unit STE-10M

Tellabs 8110 network terminating unit STE-10M has a maximum data rate of 12,160 kbps over a
4-pair copper line. It is designed for high-speed corporate LAN interconnect applications, where
service reliability and service differentiation are needed. It has a 10/100-BaseT Ethernet access
interface. Tellabs 8110 STE-10M can act as an IP router, bridge or combined IP router and bridge
(BRouter).


Tellabs 8120 Mini Node

Tellabs 8120 mini node is a small cross-connect device operating as either part of the Tellabs
8100 network or as a stand-alone cross-connect device managed locally. It can integrate multiple
services and transmit them over a single access line.

Tellabs 8130 Micro Node

Tellabs 8130 micro node is a compact, cost-effective node especially targeted for mobile networks.
It is ideally suited to be located at base station sites where space is usually limited. It is often used in
a reliable ring structure to connect base stations to the backbone network. To guarantee the best
service availability at base station level, 1+1 protection can be implemented.

Tellabs 8140 Midi Node

Tellabs 8140 midi node is a flexible access node for customer premises. The Tellabs 8140 midi
subrack consists of eight slots. The heart of the Tellabs 8140 midi node is the one-slot wide
multi-functional unit XCG, which combines the functions of a cross-connect unit and control unit
and offers, in addition, four 2 Mbps G.703 interfaces.

Tellabs 8150 Basic Node

Tellabs 8150 basic node can be used as a component in the local exchange or backbone layer of the
network, as well as in customer premises, depending on the needed services and applications. A
Tellabs 8150 basic node consists of a single or double 19-inch subrack with a total cross-connect
capacity of 64 Mbps. Some common units are found in every Tellabs 8150 basic node. These are
the power supply, control and cross-connect units. Any of the free slots can be filled with a variety
of base units and interface modules.

Tellabs 8160 A111 Accelerator Node

Tellabs 8160 A111 accelerator node can be used for service grooming and consolidation at a POP
site or at several POP sites along a ring. It can also be used for high-capacity service delivery to
customer premises. The Tellabs 8160 A111 accelerator node integrates SDH ADM and FlexMux
functionalities in one double 19-inch subrack. It has a 4/1 matrix for adding/dropping traffic from
the aggregate interfaces and a 1/0 matrix to cross-connect the added/dropped traffic from the 4/1
matrix and the traffic coming from the tributary interfaces. The add/drop capacity is 256 Mbps
and MSP 1+1 and sub-network connection protection options are available. Interface unit
protection as well as trunk and circuit level protections are also supported.

Tellabs 8170 Cluster Node

Tellabs 8170 cluster node is a 1/0 digital cross-connect that terminates up to 512 Mbps of
non-blocking cross-connect capacity. The node can provide a large-scale interconnection point in
the network. Depending on the needed services and applications, Tellabs 8170 cluster node consists
of a master subrack and from one to eight slave subracks. It grooms and fills traffic to make the
most efficient use of E1, and higher-order, transport facilities. The Tellabs 8170 cluster node can be
equipped with STM-1 and synchronous 34M interfaces to implement cost-effective SDH transport
network connectivity.


Tellabs 8184 Access Switch and Tellabs 8188 Access Switch

Tellabs 8184 access switch and Tellabs 8188 access switch nodes replace the Tellabs 8160 A111
accelerator node and offer much higher capacity. Both nodes have the same cross-connection
capacity, but they use different sizes of subrack. Tellabs 8188 access switch has a double 19-inch
subrack and more slots for X-bus interface units than Tellabs 8184 access switch, which uses a
single 19-inch subrack.

The applications of these nodes are the same as with Tellabs 8160 A111 accelerator and Tellabs
8170 cluster nodes. Tellabs 8188 access switch is used as an access node for grooming nx64 kbps -
nx2 Mbps services into STM-N/VC-4/VC-12/P12s, and Tellabs 8184 access switch is used as a 1/0
server node beside Tellabs 6300 nodes for grooming small nx64 kbps signals into VC-12.

Both nodes use the GMX2 SDH interface and cross-connect unit. The other units (SCU-H, PFU,
X-bus interface unit) are the same as in Tellabs 8160 A111 accelerator node. The GMX2 unit
supports STM-16, STM-4 and STM-1 interfaces, and SFP interface modules are utilized.

4/4 cross-connection capacity is 72 x AU-4.

• 1-2 x STM-16
• 4-8 x STM-4 or STM-1 interfaces
• 8 x VC-4 terminations into 4/1 cross-connection

4/1 cross-connection capacity is 16 x TUG ports.

• 8 x VC-4
• 504 x VC-12 towards 1/0 cross-connection

1/0 cross-connection capacity is max. 760 x 2 Mbps ports (1.5 Gbps).

• 504 x VC-12
• 4 x X-bus (Tellabs 8184 access switch) = 128 x 2M ports
• 8 x X-bus (Tellabs 8188 access switch) = 256 x 2M ports
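
As a quick sanity check on the figures above, the port counts add up to the stated maximum, and the aggregate rate follows from them (illustrative arithmetic only, assuming each "2 Mbps port" is an E1 at 2.048 Mbps):

```python
# Sanity check of the 1/0 stage figures quoted above (Tellabs 8188 case).
# Assumption: each "2 Mbps port" is an E1 at 2.048 Mbps.
E1_RATE_MBPS = 2.048

vc12_ports = 504        # VC-12 terminations from the 4/1 stage
xbus_ports = 8 * 32     # 8 x X-bus = 256 x 2M ports

total_ports = vc12_ports + xbus_ports
assert total_ports == 760   # matches "max. 760 x 2 Mbps ports"

capacity_gbps = total_ports * E1_RATE_MBPS / 1000
print(f"{capacity_gbps:.2f} Gbps")  # 1.56 Gbps, i.e. roughly 1.5 Gbps
```
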

The supported protections are:

• cross-connection core (GMX2) protection,
• STM-N MSP 1+1,
• 4/4 SNC/I,N,
• 4/1 SNC/I,N and
• 1/0 circuit and trunk recovery with Tellabs 8000 manager.


Fig. 8 Tellabs 8184 Access Switch

Fig. 9 Tellabs 8188 Access Switch


1.1.3 Tellabs 6300 Network Elements

The Tellabs 6300 managed transport system is a family of next-generation SDH network elements,
which offer multiple high-speed services and enable scalable utilization of the available bandwidth.
The nodes are supported much like the Tellabs 8100 managed access system nodes, providing
tools for optimized use of the network elements in Tellabs 8100 system networks. However, the
NM6300ct/NM2100ct Craft Terminal is used instead of Node Manager for configuring the nodes.
The following Tellabs 6300 system network elements are integrated into Tellabs 8000 manager:
Tellabs ® 6310 edge, Tellabs ® 6320 edge, Tellabs ® 6325 edge, Tellabs ® 6335 switch, Tellabs ® 6340
switch, Tellabs ® 6345 switch and Tellabs ® 6350 switch.

Tellabs 6310 Edge

Tellabs 6310 edge node is a complete add/drop or terminal multiplexer housed in a single, highly
scalable system. It is well-suited for access network applications and meets the requirements for
customer premises equipment. The node is available in two versions. The compact 1U-high version
provides you with a complete SDH node with 21 E1 lines. The 2U-high flexible version gives an
extra tributary service in addition to the 21 E1 lines.

Features:

• Compact add-drop or terminal multiplexer (4/1)
• Convenient for outdoor cabinets
• Customer premises equipment; mobile network HUB sites
• 2/34/45/140 Mbps, STM-1, STM-4, Fast Ethernet and Gigabit Ethernet interfaces
• SNC/I and SNC/N protection
• VLAN, MAC and MPLS switching.

Tellabs 6320 Edge

Tellabs 6320 edge node has been designed to enhance the functionality of the access and regional
part of the SDH network. The node offers a complete, compact multi-service add/drop multiplexer
and terminal multiplexer with an STM-1 or STM-4 line interface. It supports a full range of PDH
interfaces, STM-1 optical or electrical interfaces as well as Fast Ethernet and Gigabit Ethernet.

Features:

• Super-compact single-board ADM/TM
• Upgradeable modular system
• 2/34/45/140 Mbps, STM-1, STM-4, Fast Ethernet and Gigabit Ethernet interfaces
• SNC/I and SNC/N protection
• VLAN, MAC and MPLS switching.


Tellabs 6325 Edge

Tellabs 6325 edge node is a compact multi-service provisioning platform supporting SDH, PDH
and data services. High reliability and redundancy enable the node to be used not only in access
networks, but also in core networks. At only 1 RU (44 mm) in height, Tellabs 6325 edge node is a
complete, full-scale SDH transport node. It offers speeds of up to 2.5 Gbps (STM-16) and enables a
wide mix of services from traditional SDH and PDH to colored WDM and IP interfaces. The
compact design makes the node an ideal choice for customer-located equipment.

Features:

• Compact multi-service provisioning platform
• 2 Mbps, 34 Mbps, 45 Mbps, STM-1, STM-4, STM-16, CWDM, DWDM, Fast Ethernet and
Gigabit Ethernet interfaces
• 80 ports 4/4 cross-connect
• 8 ports 4/3/1 switch matrix
• SNC/I and SNC/N protection
• Redundant cross-connection matrices and power supplies
• VLAN, MAC and MPLS switching.
• CWDM and DWDM multiplexing and optical add/drop.

Fig. 10 Tellabs 6325 Edge Node

Tellabs 6335 Switch

Tellabs 6335 switch node is a compact, full-featured multi-service provisioning platform (MSPP).
Tellabs 6335 switch can handle SDH, PDH as well as data services in a compact and cost-effective
design. It offers reliability with full redundancy on the central switch matrix, the PDH interfaces,
synchronization functionality and power supplies. Tellabs 6335 switch node can also be equipped as
a CWDM or DWDM multiplexer with the support of transponders, muxponders and repeaters.

Tellabs 6335 switch node is designed to fit anywhere in the SDH transport network. The
cost-efficient and compact design makes it a logical choice for the access network, while the
redundant cross-connection matrices are well suited to core networks. Cross-connection redundancy
makes Tellabs 6335 switch node reliable as a HUB node handling high traffic load. The combination
of E1s, STM-1s and Ethernet interfaces in one node makes Tellabs 6335 switch node an important
building block in both GSM (2G) and UMTS (3G) mobile networks.

Features:


• Compact multi-service provisioning platform
• 2 Mbps, 34 Mbps, 45 Mbps, STM-1, STM-4, STM-16, CWDM, DWDM, Fast Ethernet and
Gigabit Ethernet interfaces
• Redundant cross-connecting matrices, interface protection and redundant power supplies
• MAC, VLAN and MPLS Ethernet switching
• Support for differentiated service types enables customized service level agreements for individ-
ual customers
• WDM multiplex and transponder modules enable a traditional SDH node to be combined with a
WDM node in the same subrack
• Pluggable transceivers allow Tellabs 6335 switch node to support a wide range of interfaces from
traditional SDH over Ethernet to advanced WDM transceivers
• High reliability with fully redundant matrix protection and E1 equipment protection
• Choice of 2RU and 7RU high subrack

Fig. 11 Tellabs 6335 Switch Node

Tellabs 6340 Switch

Tellabs 6340 switch node is a third-generation multi-service provisioning platform designed for the
transport and delivery of converged services (voice, high-speed data and video-on-demand). It is
an SDH add/drop multiplexer or digital cross-connect with SDH 4/4 and 4/1 connectivity, and it
supports a wide range of physical interface options varying from 2Mbps E1 to STM-16. Its main
strength is in regional and metropolitan networks where you can use the cross-connect functionality
for grooming and consolidation of traffic from various sources.

Features:

• Add-drop multiplexer (16/4/1) and cross-connect in one element
• 132 or 48 ports 4/4 cross-connect (depending on version)
• 32 or 16 ports 4/3/1 cross-connect (depending on version)
• 2/34/45 Mbps, STM-1, STM-4, STM-16, Fast Ethernet and Gigabit Ethernet interfaces
• Colored STM-16 interfaces for 32 channel DWDM integration
• Network protection: SNC/I, SNC/N, MSP 1+1 and MS-SPRing
• Full equipment redundancy
• VLAN, MAC and MPLS switching.


Tellabs 6345 Switch

Tellabs 6345 switch node is a multi-service provisioning platform, which has been specially
designed to enable a wide range of data, voice, and leased line applications. The unique combination
of high capacity and small footprint makes the unit ideal for compact ADM-64 or STM-16
ring-inter-working applications, since multiple add/drop multiplexers can be configured within a
single sub-rack, thus avoiding external cabling and ensuring high uptime.

Features:

• Add-drop multiplexer and cross-connect in one element
• 128 ports 4/3/1 switch matrix
• 384 ports 4/4 cross-connect
• Colored STM-16 and STM-64 interfaces for 32 channel DWDM integration
• STM-1, STM-4, STM-16 and STM-64 interfaces
• Fast Ethernet, Gigabit Ethernet and 10 Gigabit Ethernet interfaces
• Equipment protection of switch core, synchronization and power
• Network protection: SNC/I, SNC/N, MSP 1+1 and MS-SPRing
• VLAN, MAC and MPLS switching.

Tellabs 6350 Switch

Tellabs 6350 switch node is a third-generation high-capacity cross-connect suited for various data,
voice and leased line applications. This high-density system grooms and consolidates traffic in the
metro and regional networks. Tellabs 6350 switch has a modular structure that enables you to
configure add/drop multiplexers and cross-connects from the same hardware platform.

Features:

• Add-drop multiplexer and cross-connect in one element
• 768 ports 4/4 cross-connect
• 128 ports 4/3/1 switch matrix
• Colored STM-16 interfaces for 32 channel DWDM integration
• STM-1, STM-4, STM-16 and STM-64 interfaces
• Fast Ethernet, Gigabit Ethernet and 10 Gigabit Ethernet interfaces
• Equipment protection of switch core, synchronization and power
• Network protection: SNC/I, SNC/N, MSP 1+1 and MS-SPRing
• VLAN, MAC and MPLS switching.


1.1.4 Tellabs 7100 Optical Transport System

The Tellabs 7100 optical transport system (OTS) is an advanced services transport system that
supports multiple transport protocols on any port. Tellabs 7100 OTS combines the most advanced
optical networking and services layer technologies on one seamless platform. Designed to
support current add/drop multiplexer (ADM) and wavelength division multiplexing (WDM) ring
capabilities, the Tellabs 7100 OTS ensures a smooth migration to future packet-based services over
mesh networks. Multi-degree reconfigurable optical add/drop multiplexer (ROADM) architecture
gives service providers the ability to provision lightpaths (up to 88 protected wavelengths) where
needed to deliver services and easily respond to unexpected demand in the network.

Hardware elements of the Tellabs 7100 OTS provide support for higher bandwidth applications
and enhanced access. Hardware modules include 2.5G, 10G, and 40G transponders, reconfigurable
multiplexers, multiplexers to support spur applications, packet multiplexers, colorless core
multiplexers for high-density metro applications, optical protection modules, SONET/SDH/Packet
module for digital cross-connects, intermediate- and long-reach amplifiers, system processors, and
data processors. Multirate transponders interface transmission equipment at client rates between 100
Mbps and 10.7 Gbps, including STM-1/OC-3, STM-4/OC-12, STM-16/OC-48, STM-64/OC-192,
STM-256/OC-768, Gigabit Ethernet (GE), 10 GE and fibre channel.

Tellabs ® 7100 Optical Transport System

Tellabs 7100 OTS is a dense wavelength division multiplexing (DWDM) system scaling up to
eight degrees. A fully populated network element configuration is an 88-channel, multi-shelf
system, in a point-to-point, ring or mesh network. Each channel can be provisioned to carry from
100 Mbps to 40 Gbps signal capacity, or 100 Mbps to 2.488 Gbps broadband signals. Channels
are provisioned either to be added/dropped or optically passed through a network element, or for
unidirectional broadcast (including drop-and-continue application) through the RCMM express
ports. STS or VC facilities carrying GE signals can be grouped for concatenated transport between
SMTM-U/SSM-D/SSM-X units.
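
For a rough sense of scale, the quoted figures imply the following per-direction upper bound (illustrative arithmetic only, not a system specification):

```python
# Illustrative upper bound implied by the figures above:
# 88 channels, each provisionable up to 40 Gbps.
channels = 88
max_channel_rate_gbps = 40

aggregate_tbps = channels * max_channel_rate_gbps / 1000
print(aggregate_tbps)  # 3.52
```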

Tellabs ® 7100N Optical Transport System

Tellabs 7100 Nano provides a compact version of Tellabs 7100 OTS that can add or drop eight
protected wavelengths and pass through 44 simultaneous channels. This product provides a
2-degree DWDM system (populated in 1-degree increments) comprised of up to four 30-AMP
shelves and a modified subset of Tellabs 7100 OTS transponder modules. It can be deployed in
three distinct configurations: as a one-shelf optical line amplifier (OLA), as a 2-degree single-bay
optical add/drop multiplexer (SBOADM) or as a 2-degree fixed optical add/drop multiplexer
(FOADM). The SBOADM and FOADM are both equipped with one main shelf and up to three
port shelves in 19- or 23-inch racks.

Tellabs 7100 Packet Subsystem

One or more packet subsystems are supported in a Tellabs 7100 NE based on the Tellabs 7100
architecture. Each packet subsystem consists of a group of packet modules (SMTM-P and/or
TGIM-P), one or two switch fabric modules (SPFAB: SONET/SDH/Packet Fabric), and one or two
corresponding control modules (PSCM), so that the traffic can be switched or forwarded within
the switch domain of the packet subsystem. The PSCM is the DPM for Tellabs 7100 OTS and the SPM-N
for Tellabs 7100 Nano.


Packet-based transponder modules, SMTM-P and TGIM-P, manage the ingress and egress ports for
the traffic flow. The ingress and egress ports are connected to the SPFAB switch fabric.

The 10G Subrate Multiplexer Transponder Module – Packet (SMTM-P) supports packet subsystem
functions. Each SMTM-P provides one 10 Gb line-side facility interface and ten port-side facility
interfaces that support signal rates of 10/100/1000 Mbps. Each of the ten port-side facility interfaces
can be configured to support either electrical or optical SFPs. The port-side optical interfaces
support 100BaseFX and GbE.

The 10G Transponder - Packet Interface (TGIM-P) module supports packet subsystem functions
as well. It has one 10 Gb line-side facility interface and one 10 GbE port-side interface (IEEE
802.3ae compliant). The TGIM-P is a one-slot wide module with packet switching capability and
one laser that is tunable across 44 wavelengths.

Traffic is organized using a proprietary logical entity called a virtual switch, which contains up to
4094 VLANs (of the 4096 possible VLAN IDs, 2 are reserved for special use). Each packet
subsystem may contain up to 32 virtual switches, which multiplies the VLAN address space by
that factor per packet-enabled shelf.
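
The address-space arithmetic above can be spelled out (illustrative only):

```python
# Illustrative arithmetic for the virtual switch VLAN space described above.
VLAN_ID_SPACE = 4096       # 12-bit 802.1Q VLAN ID field
RESERVED_VLANS = 2         # reserved for special use
MAX_VIRTUAL_SWITCHES = 32  # per packet subsystem

vlans_per_virtual_switch = VLAN_ID_SPACE - RESERVED_VLANS   # 4094

# Each virtual switch carries its own independent VLAN space, so the
# usable VLAN count per packet subsystem becomes:
total_vlans = MAX_VIRTUAL_SWITCHES * vlans_per_virtual_switch
print(total_vlans)  # 131008
```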

Supported features include MAC bridging, VLAN bridging (802.1Q), provider bridging (802.1ad),
an 802.3 compliant MAC client interface, packet security, packet fault management, packet
performance monitoring, multicasting, synchronized Ethernet and line-side resilient packet ring
protection. Tellabs 8000 manager initially supports service provisioning based on provider bridging
and LAG-based protection for the packet subsystem of the Tellabs 7100 OTS NEs.

1.1.5 Tellabs 7300 Metro Ethernet Switching Series

The Tellabs 7300 metro Ethernet series is a product family offering a comprehensive end-to-end
portfolio of Ethernet switching products. The Tellabs 7300 series can be combined with the Tellabs 7100
optical transport system to build highly scalable connection-oriented Ethernet and carrier Ethernet
networks and, when used with the Tellabs 8800 multiservice router series, extends Ethernet services
from IP/MPLS core networks.

Tellabs ® 7305 Ethernet Demarcation Device

Tellabs 7305 device is a flexible Network Interface Device (NID) that can operate as a Transport
NID or Service NID to provide service demarcation. It includes standards-based performance and
OAM features for transporting and delivering Carrier Ethernet services. It has 2 optical 10/100/1000
SFP-based Ethernet interfaces and 1 electrical 10/100/1000 UTP-based interface (3-port option).

Tellabs ® 7325 Ethernet Edge Switch

Tellabs 7325 switch is a carrier-class layer 2 switch in one fixed rack unit. It has 24 optical
10/100/1000 SFP-based Ethernet interfaces.

Tellabs ® 7345 Ethernet Aggregation Switch

Tellabs 7345 switch is a carrier-class layer 2 switch supporting delivery of business Ethernet,
broadband data, Internet access services and metro Ethernet networking. The network element
provides up to 40 Gbps throughput in two rack units. It has 24/48 optical 10/100/1000 SFP-based
and at most 2/4 optical 10 Gbps XFP-based Ethernet interfaces.


1.1.6 Tellabs 8800 Multiservice Routers

The Tellabs 8800 multiservice router series is a family of high-performance next-generation multiservice
edge routers. The Tellabs 8800 multiservice routers support any-to-any layer 2 and layer 3 network
and/or service interworking. They provide service providers a graceful migration to a converged
MPLS-enabled IP network. The Tellabs 8800 multiservice routers combine Quality of Service and
security with powerful MPLS traffic engineering capabilities.

The Tellabs 8800 multiservice routers provide comprehensive standards-based signaling and routing
support including BGP, MP-BGP, OSPF, IS-IS and PIM-SM as well as LDP and RSVP-TE for
MPLS signaling and traffic engineering.

Tellabs ® 8860 Multiservice Router

Tellabs 8860 multiservice router is a multiservice switch with 320 Gbps full-duplex switching
positioned at the edge of the core and in the core network for multiservice and mobile applications.
The network element has three switch and control cards (SCC), such that one of the switch cards may
back up the other two switch cards. Each network element has 16 slots for line cards (ULC). Each
ULC may be equipped with 4 physical line modules (PLM) with a variety of different module types.

Tellabs ® 8840 Multiservice Router

Tellabs 8840 multiservice router provides 240 Gbps of full-duplex switching capacity. It has
three SCCs and 12 slots for ULCs with 4 modules in each line card. The same ULCs and PLMs may
be used in the Tellabs 8840 multiservice router as in the Tellabs 8860 multiservice router.

Tellabs ® 8830 Multiservice Router

The Tellabs 8830 multiservice router is a compact network element in the Tellabs 8800 multiservice
router series. It can be equipped with up to 16 PLMs and provides the same fully-redundant
platform for carrier-class reliability as the Tellabs 8860 and Tellabs 8840 multiservice routers. It can
be equipped with the same module options as the other Tellabs 8800 multiservice routers.

1.2 Main Applications

The main applications of the Tellabs 8000 network management system are:

• Mobile transport in 2G and 3G RAN
• Managed voice and data leased line business services
• Managed LAN interconnection services
• Managed IP VPNs
• Broadband service aggregation

1 The discontinued Tellabs 8820 multiservice router (FP5.2) is supported as well.

These applications are supported by a suite of connectivity services, which offer scalability,
security, manageability and QoS comparable to or exceeding the levels of current telecom network
infrastructures. The Tellabs 8000 system offers the following connectivity services:

• PDH and SDH circuit management for the 1/0, 4/1 and 4/4 layers. TDM circuits can be used to
provide leased line services and mobile transport for 2G and 3G RAN networks. TDM circuits
can also be used for LAN interconnection services.
• TDM tunneling over MPLS using TDM pseudowires (PWs). TDM pseudowires provide point-
to-point connectivity through the network for tunneling TDM bit streams over an MPLS infra-
structure. The PWE3 tunneling method is used. TDM pseudowires can be used to transport 2G
traffic in mobile radio access networks between the base station and the base station controller
over an MPLS-based access network. TDM pseudowires can also be used to connect mobile
switching centers together over an MPLS backbone. Native nx64 kbps and 1.5/2 Mbps PDH as
well as nx64 kbps and 1.5/2 Mbps PDH bit streams over SDH/SONET are supported.
• ATM tunneling over MPLS using ATM pseudowires. ATM pseudowires provide point-to-point
connectivity through the network for tunneling ATM circuits over an MPLS or IP infrastructure.
The PWE3 (pseudowire emulation edge-to-edge) tunneling method is used to tunnel the ATM
traffic. This point-to-point service is sometimes also referred to as a virtual private wire service
(VPWS). This is the main connectivity service used in the mobile transport application for 3G
traffic in R99 and R4 networks for transporting traffic between the base stations (nodes B) and
the radio network controllers (RNCs). ATM tunneling can also be used in the broadband service
aggregation application for tunneling traffic from ATM DSLAMs over an MPLS infrastructure.
• Ethernet tunneling over MPLS using Ethernet PWs providing point-to-point connectivity through
the network. The PWE3 tunneling method is used. This point-to-point service is also sometimes
referred to as virtual private wire service (VPWS). Ethernet pseudowires are used both in the
managed Ethernet services application and broadband service aggregation application.
• Frame Relay and HDLC tunneling over MPLS using Frame Relay DLCI and HDLC pseudowires.
Frame Relay/HDLC pseudowires provide point-to-point connectivity through the network for
tunneling Frame Relay PVCs or other HDLC based services over an MPLS infrastructure. The
PWE3 tunneling method is used. Frame Relay/HDLC pseudowires can be used e.g. for transporting
GPRS or CDMA2000 1xRTT traffic.
• Ethernet switching using VLAN VPNs. For many-to-many Ethernet connectivity, VLAN VPNs
can be used and configured on Tellabs 8100, Tellabs 7100 and Tellabs 6300 network elements.
Ethernet switching is done in Ethernet switching capable units and the Ethernet traffic is transported
over shared TDM pipes in the Tellabs 6300 and Tellabs 8100 based networks and over optical channels
in the Tellabs 7100 based network. Due to the statistical nature of the Ethernet traffic, the operator
may choose to oversubscribe the network.
• DWDM optical wavelength (channel) management. Optical channels can be used to carry purely
optical signals, SDH circuits and Ethernet services in a single optical channel. The DWDM
system can be deployed either in a ring or mesh topology.
• IP VPNs providing many-to-many connectivity through the network. This connectivity service
is used mainly in the managed IP VPN application.
• Basic IP routing for broadband service aggregation applications.
The different connectivity services are described in more detail below.

1.2.1 TDM Circuits

TDM circuits are used for providing leased line services and transport in 2G and 3G radio access
networks. Both PDH and SDH are widely used in these applications as the operator usually
has an existing PDH/SDH infrastructure that can be utilized for these applications. The Tellabs
8100 network elements have been optimized to provide nx64 kbps and E1 PDH circuits as well
as connectivity to SDH networks through mapping of PDH circuits to nxVC-12 and VC-4 SDH
containers. The Tellabs 6300 network elements have been designed for providing cost-efficient
nxVC-12, nxVC-3 and nxVC-4 SDH connections.

Fig. 12 TDM Circuits

1.2.2 TDM Pseudowire

TDM pseudowires are used in the mobile application for transporting TDM traffic over an MPLS
infrastructure. The TDM pseudowires are based on either the Structure-Agnostic Time Division
Multiplexing (SAToP) or the Structure-aware TDM Circuit Emulation Service over Packet Switched
Network (CESoPSN) specifications. The TDM payload is encapsulated in the ingress node in an
MPLS packet and tunneled over an MPLS infrastructure to the egress node. At the egress node the
TDM traffic is decapsulated from the MPLS packet and the original TDM bit stream is reconstituted.
In SAToP pseudowires, the bit stream is encapsulated in MPLS without considering any possible
structure that may be imposed on the bit stream. In CESoPSN pseudowires, nx64 kbps channels are
extracted from the bit stream and encapsulated in an MPLS packet. CESoPSN pseudowires allow
traffic grooming and save bandwidth when only a few of the nx64 kbps channels in an interface
carry user traffic. SAToP pseudowires minimize delay and are the most bandwidth efficient when
the entire E1/T1 port needs to be transported over the MPLS network.
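The bandwidth trade-off between SAToP and CESoPSN described above can be sketched numerically. This is only an illustrative payload calculation; real pseudowire bandwidth also depends on the packetization interval and MPLS/PSN header overhead, which are deployment-specific and not modeled here.

```python
# Illustrative payload-bandwidth comparison of SAToP vs CESoPSN for an E1.
# Overheads (RTP/control word/MPLS headers) are deliberately ignored.

E1_RATE_KBPS = 2048    # full E1 bit stream, as carried by SAToP
TIMESLOT_KBPS = 64     # one nx64 kbps channel, as carried by CESoPSN

def satop_payload_kbps() -> int:
    """SAToP transports the whole E1 bit stream, structure-agnostic."""
    return E1_RATE_KBPS

def cesopsn_payload_kbps(active_timeslots: int) -> int:
    """CESoPSN transports only the selected nx64 kbps channels."""
    if not 1 <= active_timeslots <= 31:
        raise ValueError("an E1 carries at most 31 usable 64 kbps timeslots")
    return active_timeslots * TIMESLOT_KBPS

# With only 4 active channels, CESoPSN needs 256 kbps of payload
# bandwidth instead of the full 2048 kbps E1 that SAToP would carry.
print(satop_payload_kbps())     # 2048
print(cesopsn_payload_kbps(4))  # 256
```

This illustrates the grooming benefit: the fewer timeslots carry user traffic, the more CESoPSN saves relative to transporting the whole port.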

Fig. 13 TDM Pseudowire

1.2.3 ATM Pseudowire

ATM pseudowires are used in the mobile application for transporting the ATM traffic over an MPLS
infrastructure. The ATM traffic is encapsulated in the hub site node in an MPLS packet according
to the PWE3 architecture. The MPLS packet is then tunneled over an MPLS infrastructure which
may be, for instance, an SDH/SONET or Gigabit Ethernet ring. At the RNC site the ATM traffic is
decapsulated from the MPLS packet and aggregated onto an unchannelized STM-1/OC-3c interface
towards the RNC. Either an ATM VPC or an ATM VCC may be tunneled in an ATM pseudowire.
Three mapping modes are supported for ATM pseudowires: N-to-one cell mode, one-to-one cell
mode and AAL5 SDU mode.
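The cell concatenation used by the cell-mode mappings above can be sketched as follows. This is a toy byte-level illustration of packing several 53-byte ATM cells into one pseudowire payload to reduce per-packet overhead; the function name and data model are illustrative, not a Tellabs API.

```python
# Toy sketch: concatenate ATM cells, N at a time, into pseudowire payloads.
ATM_CELL_BYTES = 53

def pack_cells(cells, n_per_packet):
    """Group ATM cells into payloads of up to n_per_packet cells each."""
    for cell in cells:
        if len(cell) != ATM_CELL_BYTES:
            raise ValueError("ATM cells are always 53 bytes")
    return [b"".join(cells[i:i + n_per_packet])
            for i in range(0, len(cells), n_per_packet)]

cells = [bytes([i]) * ATM_CELL_BYTES for i in range(6)]
payloads = pack_cells(cells, n_per_packet=3)
print(len(payloads), len(payloads[0]))  # 2 159
```

Packing more cells per packet improves bandwidth efficiency at the cost of added packetization delay, which is why the number of cells per packet is typically configurable.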

If no MPLS infrastructure is available, pseudowires can also be transported over an IP infrastructure
using IP tunnels. A typical application for MPLS over IP is the transport of Node B traffic over
DSL lines to the RNC.

Fig. 14 ATM Pseudowire

1.2.4 Ethernet Pseudowire

Ethernet pseudowires are most suitable for connections that need network layer transparency
for protocols other than IP as well. The Ethernet pseudowires are based on the PWE3 architecture. The
Ethernet pseudowires are point-to-point connections, much like Frame Relay virtual circuits or
TDM leased lines. Connections are implemented in the network in such a way that the traffic is
mapped to MPLS LSPs. If VLANs are used in the network, it is possible to map traffic from each
VLAN into a separate connection. This provides full VLAN transparency. It is also possible to
terminate Ethernet traffic from Frame Relay circuits and transport it over an Ethernet pseudowire.
Service VLANs can be used to aggregate traffic from multiple Ethernet interfaces to a single
aggregated interface by adding a VLAN tag to the Ethernet frames.

A service class specifying the QoS treatment the traffic gets in the MPLS network can be defined for
each connection separately.

Fig. 15 Ethernet Pseudowire

1.2.5 Frame Relay DLCI Pseudowire

Frame Relay DLCI pseudowires are used for transporting Frame Relay permanent virtual circuits
(PVC) over an MPLS network. The Frame Relay DLCI pseudowires are based on the PWE3
architecture. The Frame Relay DLCI pseudowires are point-to-point connections, which map
Frame Relay PVCs to MPLS LSPs. Each PVC, identified by a DLCI value, is mapped to a separate
pseudowire (so-called one-to-one mode). In mobile networks, Frame Relay DLCI pseudowires can
be used for transporting GPRS traffic between Base Station Controller (BSC) and Serving GPRS
Support Node (SGSN).
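The one-to-one mapping described above can be sketched as a simple table keyed by port and DLCI. The class and method names here are illustrative only and do not reflect any Tellabs API.

```python
# Minimal sketch of "one-to-one mode": each Frame Relay PVC, identified
# by its DLCI on an ingress port, is bound to its own pseudowire.

class DlciPwMap:
    def __init__(self):
        self._map = {}  # (port, dlci) -> pseudowire id

    def bind(self, port: str, dlci: int, pw_id: int) -> None:
        key = (port, dlci)
        if key in self._map:
            raise ValueError(f"DLCI {dlci} on {port} is already bound")
        self._map[key] = pw_id

    def lookup(self, port: str, dlci: int) -> int:
        return self._map[(port, dlci)]

m = DlciPwMap()
m.bind("fr0/1", 100, 5001)  # PVC with DLCI 100 -> its own pseudowire
m.bind("fr0/1", 200, 5002)  # a second PVC gets a separate pseudowire
print(m.lookup("fr0/1", 100))  # 5001
```

This contrasts with the port-mode HDLC pseudowires of the next section, where all frames of a port share one pseudowire regardless of any addressing inside the frames.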

Fig. 16 Frame Relay DLCI Pseudowire

1.2.6 HDLC Pseudowire

HDLC pseudowires are used for transporting HDLC based protocols, such as Point-to-Point
Protocol (PPP), Frame Relay and Cisco HDLC over an MPLS network. The HDLC pseudowires are
based on the PWE3 architecture. The HDLC pseudowires are point-to-point connections, which
map HDLC frames to MPLS LSPs. HDLC pseudowires implement a so-called port mode service,
since all frames from one port are mapped to the same pseudowire (as opposed to e.g. Frame Relay
DLCI pseudowires, which map frames to pseudowires based on the DLCI values). In mobile
networks, HDLC pseudowires can be used for transporting CDMA2000 1xRTT traffic.

Fig. 17 HDLC Pseudowire

1.2.7 VLAN VPN

VLAN VPNs can be used to provide managed LAN interconnection services. A VLAN tag is added
to the Ethernet frame at the ingress point to identify the customer, and the VLAN tag is then removed
at the egress point before the frame is forwarded at the other endpoint of the VLAN VPN. The frames
are switched based on the VLAN tag and the Ethernet MAC address. A VLAN VPN may have
several endpoints in contrast to Ethernet pseudowires that only support point-to-point Ethernet
connections. TDM circuits are provisioned between the Ethernet switching points and overbooking
of these transport trunks can be exercised due to the statistical nature of the Ethernet traffic.
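The tag push/pop operation described above can be illustrated at the byte level: an 802.1Q tag (TPID 0x8100 plus the tag control information) is inserted after the destination and source MAC addresses at ingress and stripped at egress. Real switches do this in hardware; this is only a sketch of the frame format.

```python
# Toy 802.1Q tag push/pop on raw Ethernet frame bytes.
import struct

TPID = 0x8100  # 802.1Q EtherType

def push_vlan(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12 bytes of destination+source MAC."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]

def pop_vlan(frame: bytes) -> tuple[int, bytes]:
    """Remove the 802.1Q tag and return (vlan_id, untagged frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID, "frame is not 802.1Q tagged"
    return tci & 0x0FFF, frame[:12] + frame[16:]

# 12 zero bytes stand in for the MAC addresses, then EtherType + payload.
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = push_vlan(untagged, vlan_id=42)
vid, restored = pop_vlan(tagged)
print(vid, restored == untagged)  # 42 True
```

The same mechanism underlies the service VLANs mentioned in the Ethernet pseudowire section: a second (outer) tag can be pushed to aggregate traffic from several interfaces.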

Ethernet switching units are available in the Tellabs 8100, Tellabs 7100, Tellabs 7300 and Tellabs
6300 network elements.

Fig. 18 Tellabs 6300/Tellabs 8100 Ethernet Packet Network

Fig. 19 Tellabs 7100 Ethernet Packet Network

1.2.8 Dense Wavelength Division Multiplexing

Dense wavelength division multiplexing (DWDM) is an optical technology used for increasing the
capacity and flexibility of the optical infrastructure. In a DWDM system, multiple optical signals
are transmitted over multiple optical wavelengths (channels) on a single optical fiber. The optical
layer interfaces the digital layer at the optical termination equipment, which can be considered a
part of both the electrical and the optical layer. The optical layer provides multiplexing schemes
and management that are not present on the digital layer of the network. This optical infrastructure
supports e.g. multiple data rates (for example, 622 Mbps, 2.5 Gbps, 10 Gbps) and SDH signal rates.

DWDM units are available for the Tellabs 6300 and Tellabs 7100 network elements.

Fig. 20 Ethernet Services and SDH Circuits over a DWDM Network using Tellabs 6300, Tellabs
8600 and Tellabs 7100 NEs

1.2.9 IP VPN

The BGP/MPLS VPN method is the predominant and most scalable way to build MPLS-based
connectivity services. The service is based on RFC4364, which defines the MPLS/BGP-4-based
VPN implementation. Adding a new site to the VPN is easy and automated by the use of BGP
routing advertisements. All the traffic traversing through the VPN can be classified, and traffic
quality can be handled properly through the network. The VPN allows any traffic, including voice,
to be transported. IP traffic from Ethernet frames or ATM/Frame Relay circuits may be terminated
and routed over the VPN.

Fig. 21 Standard RFC4364 Based BGP/MPLS VPN

1.2.10 Internet Access

The Tellabs 8600 system nodes can be used for services in which the customer traffic flows through
the nodes as basic IP-routed packets, without MPLS encapsulation or separate routing tables.
Residential Internet access is a typical service like this. The operator usually provides dynamic IP
addresses for residential users through the DHCP protocol. The Tellabs 8600 nodes support the
DHCP relay agent functionality, which can be used here. The traffic from/to different residential
customers typically enters/leaves the Tellabs 8600 nodes in separate customer-specific VLANs.
Usually there is a shaping/policing function configured in each VLAN to adjust the traffic rate
according to what the customer has paid for.
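The per-VLAN shaping/policing function mentioned above is commonly built on a token bucket. The sketch below is a minimal software token-bucket policer with illustrative parameters; real nodes implement this in hardware with vendor-specific configuration.

```python
# Minimal token-bucket policer: packets conform while tokens remain;
# tokens refill continuously at the contracted rate up to the burst size.

class TokenBucketPolicer:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0  # refill rate in bytes/second
        self.burst = burst_bytes    # bucket depth (burst allowance)
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        """Return True if the packet conforms, False if it is dropped."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# An 8 Mbps contract with a 10 kB burst allowance:
p = TokenBucketPolicer(rate_bps=8_000_000, burst_bytes=10_000)
print(p.allow(1500, now=0.0))   # True  (within the burst)
print(p.allow(20000, now=0.0))  # False (exceeds remaining tokens)
```

Whether excess traffic is dropped (policing) or queued (shaping) is a separate design choice; the conformance test itself is the same.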

2 Tellabs 8000 Manager Components and Architecture

2.1 Overall Architecture

2.1.1 Typical Configuration

The Tellabs 8000 manager is usually installed in a separate Local Area Network in the operator's
Network Operating Centre, here referred to as the management LAN. A typical Tellabs 8000
manager configuration is shown in the figure below.

Fig. 22 Tellabs 8000 Manager Configuration

A more detailed description of each of these components is given in the following
sections. The management LAN can be physically distributed between different locations. For
security reasons it should, however, be kept separate from the managed Wide Area Network. Only
the Communication Servers and Route Masters should have access to the WAN.

2.1.2 Minimum Configuration

Not all of the management system components need to be physically located in separate computers
when the size of the managed network and the number of management system users are small. In
networks of up to 50 nodes and one concurrent operator, Single Computer Configuration (SCC)
can be used. SCC combines all of the management system functions (excluding Route Master)
into one computer.

2.1.3 N-tier Architecture

In order to increase system flexibility and scalability, an N-tier architectural style has been applied in
the Tellabs 8000 manager software. N-tier architecture, also often referred to as 3-tier architecture,
essentially means that at least the GUI client, application logic (business logic) and data storage are
separated from each other in a way that allows, but does not force, them to be run in
separate computers. The advantages of this approach include better scalability (more servers can
be added to handle large amounts of data) and the relative ease of making changes to one
part of the system without affecting the others.
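The tier separation can be sketched compactly: the GUI client, the business logic and the data storage are distinct layers that may, but need not, run on separate computers. All class and data names below are illustrative, not part of the Tellabs 8000 manager.

```python
# Three-tier sketch: each tier talks only to the tier directly below it.

class DataStorage:                       # data tier
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows[key]

class BusinessLogic:                     # application-logic tier
    def __init__(self, storage: DataStorage):
        self.storage = storage
    def rename_node(self, node_id, name):
        if not name:
            raise ValueError("node name must not be empty")  # logic check
        self.storage.save(node_id, name)

class GuiClient:                         # presentation tier
    def __init__(self, logic: BusinessLogic):
        self.logic = logic
    def on_rename_clicked(self, node_id, name):
        self.logic.rename_node(node_id, name)

gui = GuiClient(BusinessLogic(DataStorage()))
gui.on_rename_clicked(17, "HELSINKI-1")
print(gui.logic.storage.load(17))  # HELSINKI-1
```

Because each tier depends only on the interface of the one below, a tier can be moved to another computer or scaled out without changing the others, which is the advantage the text describes.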

The figure below shows the tiers of the system and their dependencies concerning some of the
NMS tools in a simplified way.

Fig. 23 N-tier Architecture in Tellabs 8000 Manager

2.1.4 Object Server

Object Server (OSR) isolates the Tellabs 8000 manager business logic and GUI tools from the
database and network elements. It offers them a set of functions through which they can change the
database contents and select data from the database and network elements.

The Data Cacher process speeds up the data selection made by the Object Server. It keeps the
user-defined part of the database contents in the local RAM.

The advantages of the Object Server in a multi-user environment include:

• Logical checks that prevent database inconsistency in a multi-user environment can be collected in
OSR.
• Object Server collects the latest database changes in a specific database table (osrchan), where
some additional information on the changes is also stored.
• Object Server can inform the NMS tools of changes in preselected data, on the condition that
all tools that change the corresponding data also use Object Server. This feature makes it
possible to provide a practically real-time view of the network data to all Tellabs 8000 manager
users.
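The change-notification mechanism above can be sketched as a publish/subscribe pattern: writers go through the Object Server, which records each change (as in the osrchan table) and notifies tools that subscribed to that data. The class, key and table names below are illustrative only.

```python
# Simplified sketch of Object Server change notification.

class ObjectServer:
    def __init__(self):
        self.db = {}            # the shared configuration data
        self.changelog = []     # stands in for the osrchan change table
        self.subscribers = {}   # key -> list of callbacks

    def subscribe(self, key, callback):
        self.subscribers.setdefault(key, []).append(callback)

    def update(self, key, value):
        self.db[key] = value
        self.changelog.append((key, value))       # record the change
        for cb in self.subscribers.get(key, []):  # push the notification
            cb(key, value)

seen = []
osr = ObjectServer()
osr.subscribe("node/17/name", lambda k, v: seen.append(v))
osr.update("node/17/name", "HELSINKI-1")
print(seen)  # ['HELSINKI-1']
```

As the text notes, the scheme only works if every writer goes through the Object Server; a tool that writes to the database directly would bypass both the change log and the notifications.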

2.2 Workstation

Workstations are the computers on which operators use the Tellabs 8000 manager via a
graphical user interface. Please see the document Tellabs ® 8000 Network Manager Third Party
Hardware and Software Requirements for detailed requirements concerning workstation hardware
and operating system.

The graphical user interface (GUI) is the interactive tool of network management. It is based
on hierarchical windows for all tools. The windows represent network elements and objects
graphically. In addition, in some windows network elements and objects are displayed in tree
and list view format, which is easy to navigate.

All Tellabs 8000 manager tools are identified by the title bar text. The title bar also identifies the
target object in the network. For example, the Node Manager main window shows the id and name
of the target node in its title bar. The menu bar, where present, displays the sets of actions available
in each tool. The last of these sets is always the Help option.

There are four different types of workstations available: Standard Workstation, Power Workstation,
Satellite Workstation and Private Subnetwork Workstation. See below for more information about
each workstation type.

Normal workstations can be configured to prefer a certain non-unique service instance by name, and
to have backup servers if the preferred one cannot be found. In a large installation, the workstations
should be configured so that each group of workstations uses its own set of preferred non-unique
services, and uses the preferred sets of other workstation groups as backup only. Power workstations
have their own private copy of most of the non-unique services, and cannot use backup instances.

The first window the user sees after launching Tellabs 8000 manager in a workstation is the Toolbox,
shown below. All available management tools can be launched from there.

Fig. 24 Tellabs 8000 Network Manager Toolbox

2.2.1 Standard Workstation

The standard workstation is a configuration where the object server is located on the same
workstation as the GUI clients. The workstation communicates directly with the database and
handles change notifications locally in the workstation. The workstation does not contain any server
functionality and therefore the CORBA services needed by the MPLS network management tools
(VPN Provisioning, 8600 Node Manager, Packet Loop Testing) are run remotely on management
servers.

2.2.2 Power Workstation

The power workstation is a configuration where, in addition to the object servers, the workstation
contains some of the CORBA services for the MPLS network management tools. The power
workstation contains a private management server with CORBA services for the VPN
Provisioning, Tellabs 8600 Node Manager and Tellabs 8600 Performance Management tools. This
architecture provides scalability when the workstation is powerful enough to run a local management
server, as it offloads work from a central management server. The power workstation also
eliminates the waits and delays associated with accessing services over the local LAN.

The power workstation configuration is only recommended for a limited number of workstations
used to build and configure Tellabs 8600 network elements and to provision the connections
between those elements. A standard or satellite workstation is well suited for monitoring and
performing occasional configuration tasks, whereas the power workstation noticeably improves
the response time of the configuration tools.

2.2.3 Satellite Workstation

The satellite workstation does not contain any business logic. The business logic runs in a
management server (Satellite service) and all database communication and change notification
handling is done in the management server. Such a management server was previously known as
a Gateway Server. In the Tellabs 8000 manager, the gateway server has been integrated into the
management server architecture as a Satellite service.

The satellite workstation is a "thin-client" workstation configuration that is recommended when the
workstation computer is not directly connected to the main management LAN or there is a need
for a very large number of workstations.

2.2.4 Private Subnetwork Workstation

The VPN concept from the earlier Tellabs 8100 manager has been renamed to Private Subnetwork
in the Tellabs 8000 manager to avoid confusion with other types of VPNs (IP VPNs, VLAN VPNs
etc.). Thus, the Private Subnetwork Workstation is similar to the Tellabs 8100 manager VPN
workstation. Private Subnetworks can be used to dedicate a logical subnetwork of the operator
network for use by an operator customer. Through the Private Subnetwork Workstation, the
customer may manage the logical network dedicated to the customer.

2.3 Database Server

Database Server contains a Sybase relational database where all Tellabs 8000 manager tools store
their data. Please see the document Tellabs ® 8000 Network Manager Third Party Hardware and
Software Requirements for detailed requirements concerning database server hardware.

2.3.1 The Role of the Database in Tellabs 8000 Manager

All inventory and configuration data of the Tellabs 8600 and Tellabs 8100 system network elements
are stored in the Tellabs 8000 manager database. This kind of complete documentation about the
network is created automatically when different Tellabs 8000 manager tools, all of which use the
database, are used for building, configuring and monitoring the network. A single database shared
between different tools also greatly reduces problems caused by data inconsistency.

Examples of data stored in the database are:

• line and control cards
• interface modules
• interface configuration: layer 1, 2 and 3 parameters, VLANs etc.
• protocol (BGP, OSPF, DHCP, ARP...) configuration
• equipment fault information (resulting from fault polling)
• performance data collected from the network elements.
If an active network element with an effective configuration is suddenly replaced
with an identically furnished node (similar cards and modules in the same positions) with a blank
configuration, the configuration of the previous node can be restored to the new one from the Tellabs
8000 manager database, either by restoring the configuration from a recent node configuration
backup, or by using the Tellabs 8000 manager tools to manually rewrite the configuration
to the node. After this, everything should proceed as it did before the node was replaced.
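The node-replacement scenario above can be sketched as follows: the replacement node's blank configuration is rewritten from the configuration stored in the manager database. The data model is purely illustrative.

```python
# Sketch: rewrite a blank replacement node from the stored configuration.

def restore_node_config(db, node_id, node):
    """Push every stored configuration item back to a blank node."""
    for key, value in db[node_id].items():
        node[key] = value  # stands in for writing to the network element
    return node

# Illustrative stored configuration for node 17:
db = {17: {"card/1": "ELC1", "if/1/0": {"vlan": 42, "mtu": 1522}}}
blank_node = {}
restored = restore_node_config(db, 17, blank_node)
print(restored == db[17])  # True
```

The precondition stated in the text matters here: the replacement must be identically furnished (same cards and modules in the same positions), otherwise the stored configuration items would not map onto the new hardware.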

The Tellabs 8000 manager database can be used for pre-planning networks, configurations and services
before they are actually implemented. It is possible to plan the network topology with network
elements and trunks that do not exist in the real world yet. It is also possible to create the detailed
configuration and parametrization of non-existing nodes in the database and use these nodes in
pre-planned services. After the pre-planned nodes and trunks are in place, it is a straightforward task
to bring the pre-planned parameters from database to the nodes and to implement the pre-planned
services utilizing them.

Note, however, that not all the information produced by the control plane of the Tellabs 8600
system nodes is stored in the database. Dynamic information, such as the routes in the default routing
tables and VRFs learned through the BGP/OSPF routing protocols, is not stored in the database.

2.4 Communication Server

2.4.1 General

A Communication Server is required for managing the nodes integrated into the Tellabs 8000
manager. The Communication Server sends cross-connection and other configuration commands to
the nodes, polls them for faults and performance data, maintains real time clock settings etc.

A domain concept is used to refer to a system or network which internally uses the same
management protocols and paradigms. A network consisting of Tellabs 8600 system nodes is an
example of such a domain. Nodes in this kind of a domain are accessed via an adapter or adapters
in a Communication Server. The adapters perform the conversion of information between Tellabs
8000 manager and the domain, i.e. the Tellabs 8600 domain in this case. The adapter is plugged
into the southbound interface of Tellabs 8000 manager and it communicates with the nodes either
directly or by using a vendor specific mediator, proxy or element management system as a gateway.
In the case of Tellabs 8600 system nodes, the adapter communicates directly with the nodes using a
protocol called BMP, as well as SNMP.

There can be several Communication Servers per domain, each running its own set of adapter
processes. Communication Servers are associated with areas, so that each Communication Server
handles the nodes of those areas that have been assigned to it. This allows the Tellabs 8000 manager
to manage very large networks.

For each area, it is also possible to define a backup Communication Server, which assumes the
responsibilities of the primary server if the primary server fails. Communication Servers thus
provide a fully redundant system in node communication.

Please see the document Tellabs ® 8000 Network Manager Third Party Hardware and Software
Requirements for detailed requirements concerning communication server hardware.

A separate license is needed for each of the adapter types deployed in the Communication
Server. In addition, separate licenses are also needed for each network element deployed
in the network.

2.4.2 Managing the Communication Servers

The supervisor process controls the server functionality. It can be monitored and controlled through
a graphical user interface, the Server Monitor tool. The Server Monitor tool shows all the services
and processes that are installed in the server, and their current status. Through the Server Monitor
tool the user may start and stop individual services and restart/shut down the entire server.

The operator can configure Communication Servers by using the Communication Server
Parameters dialog in Network Editor. This dialog allows the setting of the name of the server and
the computer LAN name, its IP address and the backup level, as well as the adapters used by the
server. The Communication Server uses a communication adapter for each network element type or
protocol it supports. Some Communication Server parameters are defined in the configuration file.

The Communication Server can use the Tellabs 8000 Manager Trace Utility to log its run time
prompts, warning messages and errors. The Trace Utility is a system through which different
Tellabs 8000 manager tools generate log messages that are of help especially in the diagnosis
of problem situations.

2.4.3 Fault Management in Communication Servers

Fault management in Tellabs 8000 manager Communication Servers is divided into several
dedicated processes. The polling controller creates poll jobs specified by polling policies. The
generic poller process uses adapters to poll faults from network elements. The faults are filtered
and reclassified, and then stored in the database. Polling policies can be edited in the Polling
Policies dialog in Network Editor.
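The fault-management pipeline above (poll jobs, filtering, reclassification, storage) can be sketched in a few lines. All function names, fault codes and data structures here are illustrative stand-ins, not the actual Tellabs 8000 processes.

```python
# Sketch of the fault pipeline: poll -> filter -> reclassify -> store.

def poll_faults(node):
    """Stand-in for the adapter poll; returns the node's raw faults."""
    return node.get("faults", [])

def filter_and_reclassify(faults, severity_map, suppress):
    out = []
    for fault in faults:
        if fault["code"] in suppress:  # filtering step
            continue
        # reclassification step: assign a severity to the fault
        out.append(dict(fault, severity=severity_map.get(fault["code"], "minor")))
    return out

database = []  # stands in for the fault tables in the database
node = {"id": 7, "faults": [{"code": "LOS"}, {"code": "AIS"}]}
kept = filter_and_reclassify(poll_faults(node),
                             severity_map={"LOS": "critical"},
                             suppress={"AIS"})
database.extend(kept)
print(database)  # [{'code': 'LOS', 'severity': 'critical'}]
```

In the real system the polling policies decide which nodes are polled and how often; the sketch only shows what happens to one poll result.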

2.4.4 Backup Monitoring in Communication Servers

The Communication Server can back up the functions of other Communication Servers in a
multi-server environment. The backup level can be configured in Network Editor. Possible values
for the backup level are Not Used, User Def and Auto.

The backup functionality is implemented by backup processes that run in each Communication
Server of the system. They poll each other to detect the loss of another server. If a server is lost, areas
are moved to running servers. This happens automatically if the backup level is defined as Auto for
the servers in the Communication Server Parameters dialog. It is also possible to set the backup
level to User Def and define the preferred Backup Communication Server in the Area Parameters
dialog in Network Editor. Only one backup server can be defined for each area. On the other
hand, if both the primary and the backup server of an area fail, the communication responsibility
for the area is moved to one of the running servers, as long as at least one running server exists.
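The fail-over behaviour described above can be sketched as a simple selection rule: use the primary server if it is running, otherwise the preferred backup, otherwise any running server. Server names and the tie-break rule (alphabetical) are illustrative assumptions.

```python
# Sketch: pick the Communication Server that should handle an area now.

def assign_area(primary, backup, running_servers):
    """Return the server responsible for an area given the running set."""
    if primary in running_servers:
        return primary
    if backup is not None and backup in running_servers:
        return backup
    # both primary and backup lost: fall back to any running server
    return next(iter(sorted(running_servers)), None)

running = {"cs-1", "cs-3"}
print(assign_area("cs-1", "cs-2", running))  # cs-1 (primary up)
print(assign_area("cs-2", "cs-3", running))  # cs-3 (backup takes over)
print(assign_area("cs-2", "cs-4", running))  # cs-1 (any running server)
```

With the backup level set to Auto, the real system performs this reassignment automatically when the backup processes detect a lost server.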

2.4.5 Communication with Network Elements

There are two major ways of arranging the management communication:

• Outband management: There is a separate network for management communication, completely
separate from the network that the Tellabs 8000 manager installation in question is managing.
The external management communication network must provide IP connectivity between
the Communication Servers and all managed network elements. The advantage of using outband
management is that the management connectivity is not dependent on the state of the managed
network.
• Inband management: Communication between Communication Servers and managed nodes
goes through the managed network itself. The advantage here is that maintaining a separate
network for management communication is not needed. The disadvantage is that management
communication to the nodes is dependent on the state of the managed network.

When using inband management, care must be taken that configuration changes made in a
network element do not cut the management connectivity to the node itself, or to any other
nodes to which the management communication goes through the changed node.

2.4.6 Element Adapters

Tellabs 8600 Adapter

Communication Servers communicate with Tellabs 8600 system network elements using BMP as
the communication protocol. This is an efficient, object-oriented management protocol, which is
specifically designed for managing data network elements. BMP uses UDP as a transport protocol,
although in the latest NE feature packs, TCP is also supported as a transport protocol. BMP is
expressive enough to perform complex configurations efficiently, yet remains relatively lightweight
and simple compared to other powerful management protocols
such as CMIP. The Tellabs 8600 Adapter is used for communicating with the Tellabs 8600 network
elements using the BMP protocol. Third party management tools can read information from the
Tellabs 8600 network elements using the SNMP protocol and the Tellabs 8000 manager also uses
the SNMP protocol to collect performance statistics from the Tellabs 8600 network elements.

Tellabs 8100 Adapter

The Tellabs 8100 Adapter is needed to communicate with Tellabs 8100 system nodes.
Communication to the nodes is based on a management protocol called the DXX protocol. The
Tellabs 8100 Adapter must be run in a dedicated communication server, this server is also referred
to as a DXX Server. No other adapters may be run in a communication server running with the
Tellabs 8100 Adapter.

Tellabs 6300 Adapter

The Tellabs 6300 Adapter is needed for communicating with Tellabs 6300 system nodes. The
communication to the network elements is done using a protocol called LMIP (Local Management
Information Protocol), which is based on the CMIP protocol (Common Management Information
Protocol). For some parts of the communication, SNMP (Simple Network Management Protocol)
is used as well. LMIP is transported over the OSI TP4 protocol or, with the newest NE feature
packs, over TCP/IP, while SNMP is transported over UDP and TCP. In addition, each
Communication Server also runs a so-called Gateway Process, which acts as a communication
gateway between the Tellabs 8000 manager workstations and the control network of the Tellabs
6300 system nodes. The Gateway Processes are used by the Tellabs 6300 Craft Terminal tools
when they are used for node management purposes on the workstations.

Tellabs 7100 Adapter

The Tellabs 7100 Adapter is needed for communicating with Tellabs 7100 Optical Transport
System network elements (Tellabs 7100 OTS and Tellabs 7100 Nano). Communication with
Tellabs 7100 elements uses TL1 (Transaction Language 1), with SNMP used for the packet
subsystem. TL1, developed by Telcordia Technologies on top of TCP, is a cross-vendor,
cross-technology man-machine language. It is widely used to manage optical (SDH/SONET) and
broadband access infrastructure especially in North America.
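Because TL1 is a line-oriented man-machine language, a management exchange can be sketched as plain text over a TCP session. The helper below is an illustrative sketch only: the actual command set, target identifiers (TIDs) and parameters are defined by the Tellabs 7100 TL1 documentation, and the node name used here is hypothetical.

```python
def build_tl1_command(verb: str, tid: str = "", aid: str = "",
                      ctag: str = "1", params: str = "") -> str:
    """Compose a TL1 command line: VERB:TID:AID:CTAG::PARAMS; (';' terminates)."""
    return f"{verb}:{tid}:{aid}:{ctag}::{params};"

def tl1_completed(response: str) -> bool:
    """True if the TL1 response text carries a COMPLD completion code."""
    return any(line.split()[-1] == "COMPLD"
               for line in response.splitlines()
               if line.strip().startswith("M "))

# Hypothetical example: retrieve all current alarms from a node named NE-1.
cmd = build_tl1_command("RTRV-ALM-ALL", tid="NE-1", ctag="42")
```

In a real session the command string would be written to the node's TL1 TCP port and the response read back until the terminating semicolon.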

The launching of Tellabs 7191 Craft Station for Tellabs 7100 network elements utilizes the Gateway
Process in Communication Server. This enables Tellabs 8000 manager workstations and Tellabs
7100 network elements to be located in separate LAN segments.


Tellabs 7300 Adapter

The Tellabs 7300 Adapter is needed for communicating with Tellabs 7300 system network elements
(Tellabs 7325 switch and Tellabs 7345 switch). The communication to the network elements is done
using SNMP. The SNMP protocol is mainly used for collection of faults from the network elements.

The launching of Tellabs 7191 Craft Station for Tellabs 7300 network elements utilizes the Gateway
Process in Communication Server. This enables Tellabs 8000 manager workstations and Tellabs
7300 network elements to be located in separate LAN segments.

Tellabs 8800 Adapter

The Tellabs 8800 Adapter is needed for communicating with Tellabs 8800 multiservice routers.
The communication to the network elements is done using SNMP. The SNMP protocol is used both
for configuration of the Tellabs 8800 multiservice routers as well as for collection of the faults
from the network elements.

Generic SNMP Adapter

The Generic SNMP adapter is used for communicating with third party nodes supporting the SNMP
protocol and standard SNMP MIBs.

2.5 Management Server

Management Servers run most of the Tellabs 8000 manager business logic. This is divided into
service processes. The placement of different services in the server computers is flexible: most
of the services can be run in the same server or alternatively different services can be located in
separate servers. It is also possible to have multiple servers running a single service in an installation
where the number of workstations is very large. The clients can be configured to use an alternative
Management Server if the preferred server is unavailable. The setup application offers a number of
pre-configured sets of services for ease of installation and configuration.

The currently installed services and their status can be seen in the Tellabs 8000 Server Command
Center.


Fig. 25 Server Command Center

2.5.1 Service Processes

Some services must have only one copy running at a time, while there can be several copies of others
running simultaneously, if the service names and workstations have been configured appropriately.

The unique services with only one possible instance are described in the table below.

Service Description
OCNM Front End Front end for Online Core Network Monitoring (OCNM),
running in the Route Master. Stores the detected bandwidth
reservations and IP/MPLS core topology in the database
Packet Loop Test Service Allows scheduled and immediate testing operations

Running more than one instance of the above services in the system will lead to error
situations.

The rest of the services also depend on each other. As a general rule, there should be at least one copy
of each service installed and running in the management system for the system to be fully functional.

These services are described in the table below.


Service Description
EMS Service Takes care of Tellabs 8600 network element
level configuration, including QoS and protocol
authentication settings.
VPN Provisioning Service Serves the VPN Provisioning and Tunnel
Engineering tools.
Fault Service Distributes fault information to clients.
Performance Management Service Provides packet performance information to the
PMS GUI.
North-Bound Interface Service Provides northbound interface for 3rd party
software integration purposes.
Satellite Service Serves Satellite Workstations. See below for
more information about the satellite service.
Web Reporter Service Web server that provides information about the
network and services in the Tellabs 8000 manager
to web clients.
Private Subnetwork Service Serves Private Subnetwork Workstations.
Service Management Service Provides information to the Service Management
GUI.
Macro Manager Service Serves macro applications used for configuring
Tellabs 8600 elements.
Automatic Maintenance Procedures Service Takes care of performing the scheduled automatic
maintenance procedures.
MTOSI Service Provides MTOSI 2.0 interface for MTOSI web
clients.

You can relocate services between Management Servers after installation by rerunning the
Setup Wizard and selecting the Re-configure Old Installation option.

For more information on locating, balancing and backing up the services, refer to Tellabs ® 8000
Network Manager Software Installation Manual.

2.5.2 Using Multiple Service Instances

If a Management Server fails, the same services can be activated in other Management Servers
by updating their configuration files (or by reconfiguration) or clients can be preconfigured to
automatically revert to use another management server if the preferred server is unavailable.
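The client-side failover described above can be sketched as a simple preference list. The probe function is an assumption standing in for whatever reachability check the client actually performs; the server names are hypothetical.

```python
from typing import Callable, Sequence

def select_server(preferred: str, alternates: Sequence[str],
                  is_available: Callable[[str], bool]) -> str:
    """Return the preferred Management Server if it responds, otherwise
    the first responding alternate; fail if none is reachable."""
    for server in (preferred, *alternates):
        if is_available(server):
            return server
    raise ConnectionError("no Management Server available")
```

The same preference order is consulted on every reconnect, so clients drift back to the preferred server once it is reachable again.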

2.5.3 Satellite Service

The Satellite service serves an array of Satellite Workstations (SATWS). The Satellite service was
previously known as the Gateway server. The Satellite service provides Tellabs 8000 manager
services for remote workstations that are running with lightened Tellabs 8000 manager software
(see the figure below).


Fig. 26 Management Server Running Satellite Service in Tellabs 8000 Manager

The term lightened workstation refers to the lightened processing load of the Tellabs 8000 manager
workstation; part of Tellabs 8000 manager processing is shifted to the Satellite service in the
Management Server. The processing load of the Satellite Workstations is reduced because the
clients do not run Object Server (OSR).

Satellite Workstations do not have a direct connection to the database; their database access is
multiplexed through the Satellite service. Satellite Workstations are connected to the Management
Server running the Satellite service over a routed LAN. Failures of individual remote clients and
their database connections are tolerated better because the Tellabs 8000 manager tools of the
Satellite Workstations cannot lock the database tables directly. The Management Server serializes
transactions to the Database Server.

Satellite Workstations can perform the same functions that are provided for the Standard
Workstation. The operability of the Satellite Workstations is not restricted or limited. Satellite
Workstations are like ordinary Tellabs 8000 manager workstations, and they can use the services of
the Database Server, Management Servers, Communication Servers and Recovery Server through
the Satellite service.


2.5.4 Fault Service

Fault mapping usually requires a lot of CPU power and memory. To save processing time in normal
workstations, fault mapping can be moved to a dedicated and powerful Fault Server computer.

The Fault Server continuously polls the database for changed faults. When the Fault Server detects
changed faults, it analyzes the fault data and informs Fault Management System of the change in
fault condition. The Fault Server maps the raw fault data both to the basic network elements, like
nodes and trunks, and to the higher level objects, like circuits, customers and service categories.
Fault Management System listens to the notifications from the Fault Server and invokes fault queries
for different kinds of objects under fault monitoring.
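The change-detection step of this polling cycle can be sketched as a comparison of two snapshots. The flat fault-id to severity mapping is an assumption for illustration, not the actual database schema.

```python
def changed_faults(previous: dict, current: dict) -> dict:
    """Faults that are new, or whose severity changed, since the last poll.

    Both arguments map a fault id to its severity; the result contains
    only the entries the Fault Server would need to re-map and notify."""
    return {fault_id: severity
            for fault_id, severity in current.items()
            if previous.get(fault_id) != severity}
```

Cleared faults (present only in the previous snapshot) would be detected symmetrically before the notifications are sent out.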

The Fault Server can be run in two modes: server mode and workstation mode. In server mode the
Fault Server provides services for several workstations in the system. There can be several Fault
Servers in the same system. In server mode the Fault Server provides better usability, especially for
the users of Circuit Fault Monitoring.

In workstation mode the Fault Server is run locally on the workstation when FMS is open. The Fault
Server is started in workstation mode if the workstation is configured to use the workstation mode
instead of the remote Fault Server. The Fault Server is installed automatically on all workstations
during the typical workstation setup.

The server mode provides several benefits over the workstation mode. For example, fault data is
always processed on the Fault Server, and particularly, circuit fault mapping is always performed.
Some tasks belong naturally to the server, such as circuit fault updates to the database. The Fault
Server also reduces the overall processing load both in the workstation and database.

For more information on changing the mode of the Fault Server, refer to the FMS Fault Server
online help.

2.5.5 Northbound Interface

In most real operator networks it will be necessary to integrate Tellabs 8000 manager as a subsystem
under the operator’s OSS system. The level of this integration as well as the exact OSS components
to integrate with will vary considerably from one operator to another. To address these needs,
Tellabs 8000 manager provides a framework which supports implementing a number of northbound
interfaces for different purposes, e.g. fault management and VPN provisioning. The OSS integration service
can be purchased from Tellabs Oy.

2.5.6 MTOSI Service

TM Forum’s Multi-Technology Operations System Interface (MTOSI) effort has the goal of
defining a unified open interface to be used between Operations Systems (OSs), where an OS is
any management system that exhibits Element Management Layer (EML), Network Management
Layer (NML) and/or Service Management Layer (SML) functionality as defined in the ITU-T
TMN model. Tellabs 8000 manager provides a northbound MTOSI interface using traditional
web-services technologies (WSDL/XSD and SOAP/HTTP) to implement a combination of
standard, pre-standard and proprietary operations. The interface is based on the MTOSI Release 2.0
specifications with some proprietary extensions for MPLS pseudowire management as well as test
and diagnostics functions.
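Since the interface is plain SOAP over HTTP, a request can be sketched as an XML envelope posted to the service endpoint. The builder below is generic SOAP 1.1 only; the actual operation payloads, namespaces and endpoint addresses come from the MTOSI 2.0 WSDL/XSD files and are not reproduced here, and the payload element shown is hypothetical.

```python
def soap_envelope(body_xml: str) -> str:
    """Wrap an operation payload in a minimal SOAP 1.1 envelope."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope '
        'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soapenv:Body>' + body_xml + '</soapenv:Body>'
        '</soapenv:Envelope>'
    )

# Hypothetical payload element; the real one is defined by the MTOSI schemas.
request = soap_envelope("<getInventory/>")
```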


The following MTOSI operations are supported:

• SAToP TDM pseudowire provisioning
• RSVP-TE tunnel provisioning
• VLAN configuration
• VCCV LSP ping for TDM pseudowires
• Port status check for physical ports and VC-12s
• Port loopback for physical ports
• IP static route configuration
• LDP targeted peer and neighbor authentication configuration
The supported elements include Tellabs 8605 access switch, Tellabs 8620 access switch, Tellabs
8630 access switch and Tellabs 8660 edge switch.

The MTOSI service can be configured as part of the NBI Server configuration.

2.6 Recovery Server

Recovery Server provides facilities to restore user communication links if trunk or node faults occur
in Tellabs 8100 system network. The Recovery Server is used for circuit and trunk protection.

Recovery Server is optional and runs on a dedicated recovery computer. If there is only one DXX
Server in the system, both Recovery and DXX Servers can run in the same computer.

It is possible to install two Recovery Servers in the Tellabs 8100 network to increase reliability. In
case of an unrecoverable failure of the active Recovery Server (e.g. LAN failure, physical damage
causing PC hardware failure, electricity break) the backup Recovery Server will take over the
recovery support for the Tellabs 8100 network. In case of a hardware or software update, the backup
can also replace the primary Recovery Server during the update operation.

Recovery Management System uses a single centralized Recovery Server that provides the recovery
functionality. However, it is possible to switch servers at any time. Server switching operations can
be initiated from Recovery Control Program if you have the required privileges.

2.7 Route Master

Route Masters are server computers that run the Online Core Network Monitoring functionality.
There can be up to two Route Masters in each network when this functionality is desired.


Fig. 27 Route Masters in IP/MPLS Network with Tellabs 8600 Based Edge and Access

Please see the document Tellabs ® 8000 Network Manager Third Party Hardware and Software
Requirements for detailed requirements concerning Route Master hardware and operating system.

2.7.1 Online Core Network Monitoring (OCNM)

The OCNM functionality listens to OSPF-TE protocol in IP/MPLS core and access networks. It
serves two main purposes:

1. It runs continuous, almost real time autodiscovery for trunks and network elements in the
IP/MPLS core network. With the data captured in this way, Tellabs 8000 manager can provide
tools for manual routing of traffic engineered tunnels through the core network, even if the
core network nodes are not managed through it and there is no direct communication between
Tellabs 8000 manager and these nodes.
2. It captures the trunk bandwidth allocation information from both core and access networks. In
this way Tellabs 8000 manager stays aware of the free capacity available on each trunk for each
service class in almost real time. This information can be browsed by the user and it is utilized
by the Tunnel Engineering tool for checking the availability of trunks for manual routes.


Since there may be multiple OSPF areas in the network, the Route Master has a socket connection
to a Tellabs 8600 node in each OSPF area and queries link state information for the area from that
node. For this mechanism to work, the OCNM port needs to be activated in at least one node in each
area. This activation is done through a tool provided by Node Manager. The link state information
is queried from Route Master by a business logic component running in Management Server. This
business logic component then updates the captured data in the Tellabs 8000 manager database.
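The trunk availability check that the Tunnel Engineering tool performs on this captured data can be sketched as a filter over per-class free bandwidth. The data shape used here is an assumption for illustration, not the actual database layout.

```python
def available_trunks(trunks: dict, service_class: str, demand: float) -> list:
    """Trunk ids whose free bandwidth in the given service class covers
    the demanded bandwidth.

    `trunks` maps trunk id -> {service class: free bandwidth}, as kept
    up to date from the OSPF-TE link state information."""
    return [trunk_id for trunk_id, free in trunks.items()
            if free.get(service_class, 0.0) >= demand]
```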


3 Tellabs 8000 Manager Application Packages

3.1 General

Tellabs 8000 manager is divided into the following separately licensed optional application
packages:

• Basic Package - Contains the functionality to build and configure the network, document the
network inventory, monitor network faults, administer users and customers and get on-line help.
• Provisioning Packages - Include tools for service provisioning and tunnel engineering as well as
online core network monitoring functionality. With the provisioning packages, the user may con-
figure TDM based services (circuits and pseudowires), ATM based services, Ethernet point-to-
point and multipoint services, Frame Relay based services, HDLC based services and IP VPNs.
• Testing Package - Enables testing the provisioned services with the Circuit and Packet Loop Test
tools. With these tools the user may detect and localize problems in circuits, pseudowires and IP
VPNs in the network. The Packet Loop Test tool can also be used to measure QoS levels in the
MPLS network.
• Service Fault Monitoring - Contains tools that enable management and real-time monitoring of
services in the network. The tools support both PDH and SDH circuits as well as TDM, ATM
and Ethernet pseudowires and IP VPNs.
• Performance Management - The Performance Management tool is targeted for network perfor-
mance monitoring and troubleshooting. The performance monitoring includes systematic col-
lection, storage and reporting of the traffic flowing into and out of trunks and interfaces in the
network.
• Unit SW Management - The unit software management package reports the installed software
and hardware versions in the nodes and downloads new software to a selected set of nodes.
• Web Reporter - The Web Reporter tool provides network statistics through a web interface in
HTML reports. The reports include information on faults, configuration and capacity in easy-to-
read format.
• Macro Package - The Macro Package includes the Macro Manager tool and Application De-
velopment Toolkit that enables the user to develop customized tools and functions to speed up
network management tasks.
• Planning Package - The Planning Package contains tools similar to the ones used in real network
operations and tools dedicated to network design purposes. The tools can be used to optimize the
network when planning a roll-out of a network or when making changes to an existing network.
The network management tools in these packages can all be accessed from the Tellabs 8000
Network Manager Toolbox.

The licensing of each application package consists of a fixed starting fee, which depends on the
number of users (up to 5, up to 10 or an unlimited number of users), and an element fee for each
managed element.


3.1.1 Other Software Packages

In addition to the packages above, there are a few software packages that do not contain specific
Manager tools, but extend the Manager with additional features. The following packages are
available:

• Recovery Management - Restores services when faults occur in the network. Both trunk and
circuit level recovery is included. The Recovery Management supports PDH 1/0 circuits and
trunks for Tellabs 6300 and Tellabs 8100 network elements.
• Service Viewing Package - With the Service Viewing Package users can filter the network view
to cover only specific circuits or regions, or circuits of a specific customer.
• Private Subnetwork - The Private Subnetwork Package allows the operator to dedicate a sub-
network from the physical network to a selected customer or group. The Private Subnetwork
operators see their network as an independent network and can manage their part of the network
independently.
• Partitioning Package - The Partitioning Package allows large networks to be managed and or-
ganized regionally, while still providing an option to the user to also see an overview of the entire
network, if needed.
• Off-Line Database Kit for Reporting - The Off-Line Database Kit can be used to replicate
data in the on-line database to an off-line database, where data can be analyzed with heavy data
processing functions without disturbing the normal use of the Tellabs 8000 manager. It also
allows fault information to be stored over considerably longer periods than can be done with the
on-line database.

3.2 Basic Package

3.2.1 Network Editor

Network Editing Tools are used for building, viewing and editing the model of the Tellabs 8000
system network stored in the database. Network Editing Tools comprise Network Editor as a
top-level window from which you can launch the other tools, including Node Editor and ID Interval
Editor. The objects stored in the database represent the management view of the hardware and
software elements and the interconnections between them. Network Editor displays many of these
objects and connections graphically. It has many text-based dialogs for viewing and changing the
parameters of the database objects. In a few cases Network Editor communicates with the hardware
elements. A special feature of Network Editor is the Toolbox, which floats on the Network Editor
screen. It is a collection of panels that allow new objects (nodes, trunks, locations and networks) to
be created, existing objects to be selected and moved, and independent tool sessions to be launched.


Fig. 28 Network Editor Window

Representation of Network

Network Editing Tools represent the network in the Navigator Tree, List and Network View
windows, and switches in one or more Node windows.

The Network View window represents a graphical view of the whole network, or the view can be
zoomed and scrolled to show a part of the network. In this window the nodes, trunks, locations and
networks are represented by small graphic representations, symbols. Each object refers to a specific
item of hardware, a logical object created for the purpose of managing the hardware, or external
equipment connected to it. For example, each switch is represented by a graphic resembling the
front view of the switch subrack. A location is a symbol representing a group of nodes and trunks
that interconnect the network elements, usually all located in close proximity. A location can be
opened to view the subnetwork it represents. Locations can be nested, i.e., a location can contain
another location.

The Navigator Tree View window represents a navigational view of the whole network. Objects
are grouped according to their type and position in the topological hierarchy. The number of visible
objects in the window can be limited by using subdivision groups and filters. The List View window
gives a more detailed report view of object properties.


Two other types of Network window can be opened. The secondary network window displays an
overview of the network. A location window can be opened for any location object to display the
network of network elements in that location.

Trunk Symbols

Connections between network elements are represented by a connecting line, referred to as a bundle.
On this line, there are small symbols displaying the trunk type. More than one trunk can be placed
on a connecting line to represent multiple trunks between the same pair of end network
elements.

View Options

The types of objects displayed and how they are displayed can be controlled by a number of view
options.

The network view options include the following:

• Customer view
• Trunk view
• Network element view
• Selected area
• Unbound network elements
• Object labels
The network element level views include the following:

• Subrack or shelf view
• Module view
• Interface view
• Network element tree view
A Node window can be opened for any switch by launching a Node Editor session. The Node
window displays an overview of the subracks of a switch and a picture of a subrack with its cards
and interface modules.

State Logic

The main purpose of the Tellabs 8000 manager State Logic is to allow the user to pre-design the
Tellabs 8000 system networks in the database even if no hardware exists. State changes are initiated
by the user. The state of the object implies certain assumptions about its configuration and behavior
when managed.

The states have the following meaning for the objects in a network element:

Network Element States

planned (1) Object is assumed not to exist physically. As a general rule, configuring the
object will only affect the database representation.
installed (2) Object also exists physically. Configuration operations will affect both the
database and the network element.
in use (3) Object is used in production. From the configuration point of view this is the
same as installed, but end-to-end management and network level management
software considers this object to be suitable for production use.

Usually, planned is seen as the lowest and in use as the highest state. When the state is raised from
planned to installed or in use, the parameters are copied from the database to the hardware.
Parameters that belong to the whole network element, instead of a certain interface or other
physical object, are associated with the state of the control card.
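The state logic above can be sketched as an ordered enumeration (the numeric codes follow the table). The helper names are illustrative, not Tellabs 8000 manager APIs.

```python
from enum import IntEnum

class NeState(IntEnum):
    PLANNED = 1
    INSTALLED = 2
    IN_USE = 3

def affects_hardware(state: NeState) -> bool:
    """Configuration writes reach the element itself only from installed up;
    in the planned state they affect the database representation only."""
    return state >= NeState.INSTALLED

def copy_parameters_on_transition(old: NeState, new: NeState) -> bool:
    """Parameters are copied from the database to the hardware when the
    state is raised from planned to installed or in use."""
    return old == NeState.PLANNED and new >= NeState.INSTALLED
```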

Node Editor

Node Editor is a tool for defining the provisioning of network elements and defining and viewing
interface uses and the bindings of interfaces to trunks and nodes.

There are two ways to add new cards and modules into the network element. This can be done
through a pop-up menu or with the help of toolboxes of available card and module types. A Settings
dialog allows the state of objects to be changed. Various Node Manager windows can also be
opened from Node Editor.

Node Editor provides two views for displaying network element furnishing: graphical view and
tree view. The graphical view shows subracks/shelves, cards and interfaces of the node as a picture.
The tree view shows the same information in a tree structure, but shows also binding information for
the interfaces of the node. In addition to the two switch views, there is a network view showing
the trunks and other nodes attached to the node being viewed.

A pop-up menu is available both for the graphical view and the tree view.


Fig. 29 Node Editor Window

ID Interval Editor

ID Interval Editor is a tool for defining the range of valid IDs for each class of object that Network
Editor and other tools can add to the database, e.g. node IDs and trunk IDs.

QoS Settings Tool

The QoS Settings tool is used for setting up various Quality of Service related parameters in all
interfaces of the network. The parameters can either be set for one interface at a time or for a
point-to-point trunk, in which case identical parameters are set to both the end interfaces of the
trunk. This tool can be opened from Node Manager main window and Node Editor for an interface
and from Network Editor for a trunk.

The tool provides support for templates that make it possible to define a QoS setup suitable for
certain kinds of interfaces/trunks in a network, to name this template and to apply it for any number
of interfaces or trunks. Applying an existing template to each interface or trunk is done by opening
the QoS Settings tool for the interface or trunk, selecting the template from a drop-down list and
clicking a single button in the dialog. The templates are stored in the Tellabs 8000 manager database
and there can be any number of them.
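Applying a stored template can be sketched as copying its parameter set onto each selected interface. The dictionaries below are an illustrative stand-in for the database records, and the template and parameter names are hypothetical.

```python
def apply_qos_template(templates: dict, name: str, interfaces: dict) -> None:
    """Apply the named QoS template to every interface in `interfaces`.

    `templates` maps template name -> parameter dict; `interfaces` maps
    interface id -> its parameter dict, which is updated in place."""
    parameters = templates[name]
    for settings in interfaces.values():
        settings.update(parameters)
```

For a point-to-point trunk the same copy would simply be performed on both end interfaces.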


The QoS parameters supported by the tool include:

• Bandwidth Reservation: parameters related to RSVP capacity reservations for traffic engineered
LSPs
• Policer: service class-specific policers for traffic entering the node through the interface in ques-
tion
• Shaper: service class-specific shapers for traffic leaving the node through the interface
• Queue: queuing parameters for the different service classes for traffic leaving the node through
the interface
• Mappings between E-LSP exp bits/Ethernet priority bits and the service classes used by the op-
erator. These affect both ingress and egress traffic.

Fig. 30 Queuing Parameters


Fig. 31 Policer Parameters

Synchronization Monitoring Tool

Synchronization Monitoring Tool is used to monitor the synchronization clock flow throughout the
Tellabs 8100 system network. It can also monitor Tellabs 6300 system nodes that are managed with
Tellabs 8000 manager. This tool can be used, for example, to check that the synchronization network
is configured as planned and to locate faults (e.g. loops) in the synchronization network.

There are two possible methods of monitoring synchronization clock signal distribution in the
network. The first is Default Synchronization Path, and the second is Current Synchronization Path.
The default path shows where the synchronization clock signal is expected to come from, and
the current path shows where the synchronization clock signal is actually coming from. Tracing
can be done to one node at a time.

3.2.2 Node Manager

General

Node Manager provides facilities to manage the network elements: switches, cards and interfaces.
The element management facilities include parameter setting, fault monitoring, consistency
checking and templates. Node Manager can be opened from Network Editor.

Node Manager is a collection of tools that are used for setting parameters for network elements
and monitoring the existing parameter values.

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

64
3 Tellabs 8000 Manager Application Packages

Node Manager shows the actual topology of the network element and which cards are active and
in use. With Node Manager it is also possible to add/remove cards and modules and change their
states (planned, installed, in use).

Fig. 32 Node Manager Main Window

Preplanning

Preplanning makes it possible to create elements and set parameters for them in the planned state
independently of the state of the physical hardware. Later, when the state of the elements is raised
from planned to installed or in use, the parameters are copied from the database to real hardware
elements. In normal use, when hardware communication exists and the element is in the in use state, Node Manager shows the actual layout of the network elements and lets the user set parameter values directly in the hardware.


Templates

Templates are used to store a named set of interface parameters in the database. The use of templates helps the user set interface parameters faster and more accurately. The system administrator can, for example, create an interface parameter template which other operators can make use of later.
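The template idea can be sketched as a named parameter set copied onto interface records; the template and parameter names here are hypothetical examples, not the actual Tellabs parameter model:

```python
# Sketch of interface parameter templates: a named parameter set stored once
# and applied to many interfaces. Template and parameter names are hypothetical.

templates = {
    "e1-customer-default": {"line_code": "HDB3", "crc4": True, "impedance_ohm": 120},
}

def apply_template(interface, template_name):
    """Copy the named template's parameters onto an interface record."""
    interface.update(templates[template_name])
    return interface

iface = apply_template({"name": "E1-3/2"}, "e1-customer-default")
```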

Node Manager Tools

There are several different tools in Node Manager. These include, among others, the following:

• Consistency Checker for finding and resolving inconsistencies between database and node. In the
Consistency Checker dialog it is possible to select for which topology elements the consistency
check is done.
• Node Autodiscovery for reading a node topology hierarchy to the database. Autodiscovery can be
initiated when a new network element has been installed or as an incremental discovery operation
when line cards and modules have been added to the network element. The network element must
be added to the database before initiating the autodiscovery operation.
• IP Routing Table for defining routing table.
• MPLS for defining MPLS behavior.
• IP Host for defining protocol parameters.
• BGP-4 Router for defining BGP parameters and creating neighborhoods.
• QoS Settings for defining Quality of Service parameters for the node.
• ACL for defining Access Control Lists.
• OSPF Router Process for defining OSPF router behavior.
• OSPF Area for defining OSPF area parameters.
• IS-IS Router Process for defining IS-IS router behavior.
• Route Map for defining Route Maps.
• Several dialogs for defining parameters for different types of interfaces including ETSI and ANSI
versions.
• Configuration of protection including MSP 1+1, APS 1+1, ELP and BFD.
• Active Faults for viewing currently active faults.
• Ping/Traceroute for testing connections and for finding problems in the network.
• CLI Show for sending show commands to the network element and for viewing the results.
• ESW Management for managing the embedded software of the network element (for more infor-
mation, refer to 3.13 Unit Software Management Package).
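As an illustration of the Consistency Checker's job, here is a minimal sketch that diffs database-side parameters against values read from the node; the parameter names are hypothetical:

```python
# Sketch of a consistency check: diff parameters stored in the management
# database against values read from the node. Parameter names are illustrative.

def consistency_check(db_params, node_params):
    """Return {parameter: (db_value, node_value)} for every mismatch."""
    diffs = {}
    for key in set(db_params) | set(node_params):
        if db_params.get(key) != node_params.get(key):
            diffs[key] = (db_params.get(key), node_params.get(key))
    return diffs

db = {"mtu": 1500, "admin_state": "up", "description": "to-PE1"}
node = {"mtu": 9000, "admin_state": "up"}
report = consistency_check(db, node)
```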


Fig. 33 Access Control List Dialog

3.2.3 Customer Administration

Customer Administration provides tools to update the names and addresses of the operator’s
customers and their different sites in the Tellabs 8000 manager database. Once the customer and site
data has been inserted, it is possible to associate interfaces and entire network elements to different
sites. This kind of association means that the interface in question is connected to the customer site
or that the node in question is located at the customer site. The VPN Provisioning tool uses customers and sites to group the interfaces available as VPN endpoints in its user interface.


Fig. 34 Customer Administration Dialog

3.2.4 Fault Management System

The Fault Management System (FMS) is used for monitoring and reporting the fault status in the Tellabs
8000 system network. The basic function of Fault Management System is fault monitoring, which
includes the continuous supervision of network elements. The Communication Servers and DXX Servers collect the fault data from the network elements (in the text below, Communication Server refers to both DXX Servers and Communication Servers). Each server is responsible
for gathering fault events from one or more network areas (a collection of network elements). The
detected fault events are filtered, if necessary, and transferred to the system database. The network
fault status stored in the database is, in turn, continuously monitored by Fault Servers in the system.

This arrangement offers a real-time fault view of the monitored network to the network operator.
Detected fault events are attached to proper object symbols (trunks, user access points, network
element kernel parts, etc.) in the Fault Management windows. The appearance of these symbols
directly indicates the severity of faults in the corresponding network elements.


Fig. 35 Main Window of Fault Management System

Fault Monitoring Principles

Fault monitoring is automatically started when the state of the network element is changed to in use.
Fault monitoring is based on network fault polling. ’To poll’ means to perform a status query from
the network element. Whenever a change in the network element fault status is detected, detailed
fault information is provided in the faulty network element itself. This information is read by the
Communication Server and transferred to the database. Communication Server obtains the fault
data through an adapter. Depending on the case, the adapter can interface with another management
system or contact the network element directly. Note that only a single poll has to be performed for
a network element because all the new faults and cleared faults are polled at the same time. By this
arrangement a single Communication Server can take care of the fault polling of a relatively large
area. The Communication Server polling parameters, like fault and network element ping polling
period, are configured in the Polling Policies dialog in the Network Editor.

Fault notifications, initiated by network elements, are used to speed up the detection of new or changed faults. When the first change in a node's fault situation occurs after the node was last polled, a fault notification is sent to Tellabs 8000 manager. After receiving the notification, the manager polls the sending node immediately – or almost immediately, if notifications arrive from several nodes within a short period of time – even if it is not yet that node's turn in the normal polling cycle.
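The polling-plus-notification principle described above can be sketched as a queue where a notification promotes a node ahead of the normal round-robin cycle; the node names and the scheduler itself are illustrative, not the real server logic:

```python
# Sketch: nodes are polled on a fixed round-robin cycle, but a fault
# notification promotes the sending node to the front of the queue.

from collections import deque

class FaultPoller:
    def __init__(self, nodes):
        self.queue = deque(nodes)      # normal round-robin polling order
        self.urgent = deque()          # nodes that sent a fault notification

    def notify(self, node):
        if node not in self.urgent:
            self.urgent.append(node)   # poll this node out of turn

    def next_to_poll(self):
        if self.urgent:
            return self.urgent.popleft()
        node = self.queue.popleft()
        self.queue.append(node)        # back to the end of the normal cycle
        return node

poller = FaultPoller(["node-A", "node-B", "node-C"])
poller.notify("node-C")
# node-C is polled first, then the normal cycle resumes at node-A.
```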


In addition to network fault status polling and notifications, the consistency between the system database (network element and card configurations) and the actual values stored in hardware is also checked periodically. Any inconsistencies detected during the check are reported via the fault management user interface.

Faults reported by a card are ignored if the installation state of the faulty interface, or of the card itself (for card-level faults), is below the configured threshold value. The threshold can be set to installed or in use. In this way, faults generated during equipment installation can be omitted.

A card is under fault monitoring as long as it exists in Tellabs 8000 manager. All possible active
faults and the card fault history are deleted when the operator deletes a card.

Fault filtering allows the operator to modify the rules that include certain types of faults in the database and exclude others from it. Faults already in the database can be viewed in the Fault Management System windows.

Fault Management Windows

The fault management user interface contains five basic fault monitoring windows: the Network, Network Element, Bundle, User Access and Unit windows. These windows follow the fault
condition in the supervised network in real time, which means that whenever the windows are open
at the workstation, new faults are indicated as soon as the Communication Server has detected them.
The faults are attached to proper fault management object symbols in these windows. The Fault
Management System objects in the windows are the following:

• Network
• Location
• Network element
• Subrack
• Bundle (trunk group between two network elements)
• User access group (user access interfaces of a single network element)
• Trunk
• Card
• Card block (card physical block objects)
• Card logical block (groups of physical block objects)
• Interface (interface of a card)
• Components

The appearance of the object symbol in the window (color coding, blinking status) directly reflects
the fault state of the object. The object is said to be in the normal state when it does not contain
any fault events at all. For example, a user access group object of a network element is in the
normal state when all the user access interfaces in the network element are operating normally.
The object is alerting if it contains at least one unacknowledged fault event. Note that normally
the object is alerting as long as the fault remains unacknowledged, regardless of the fault state in
the network (active or inactive). After the operator has acknowledged the fault, the object state is
changed to faulty. When the fault is fixed, the object state is changed to normal again. If an object
contains several faults, the layout of the object symbol in the window is determined according to
the most serious fault event.


In the Fault Management System there are two different card block types: the physical block object and the logical block. A physical block object contains physical fault block and block class numbers and represents the real hardware objects where faults originate. A logical fault block is used to group several physical fault block objects into a certain logical group.

In addition to basic fault monitoring windows, there are Service Fault Monitoring related windows:
Service Fault Monitoring (SFM) window and Service Fault window. Service Fault Monitoring
enables the real time supervision of all services, i.e. IP VPNs or pseudowires that are assigned to a
certain Service Category. A Service Category is a collection of pre-selected services. It is possible
to configure Service Fault Monitoring so that only the selected service categories are displayed.
In this mode, symbols are displayed in the order they were selected. In addition to the predefined categories, an operator can dynamically select customers to be monitored. The fault status of categories and customers is displayed in the Service Fault Monitoring window.
The All Faulty Services symbol can be added to the lower right-hand corner of the window. This
symbol represents the total fault status of all services. By default, the selected service categories and customers are saved in the configuration file, so the previous view is automatically restored the next time the window is opened. One workstation or Fault Server can be selected to save and update all faulty services in the database.

The Service Fault window lists all faulty services of the corresponding category or customer together with their fault status details. The appearance of Category and Customer symbols follows the same conventions as other objects in the FMS windows (color coding and blinking status directly reflect the fault state of the object). From the Service Fault window an operator can open a Service
Fault Report window concerning a specific service and its faults.

The Critical Element Fault Monitoring window enables the real time supervision of faults related
to a particular trunk or node type. The window shows the selected trunk and node types with the
type label texts below the symbols. In partitioned networks, the fault statuses of the trunk and node types are shown for the selected regions. It is also possible to view fault statuses summed over all network faults regardless of region. For more information on
partitioning, refer to 3.12 Partitioned Package.

The Fault Management System view can also be reduced to show only faulty network objects. This
option is very useful in large networks when only a small number of objects are faulty.

A warning beep facility can be activated at the workstation to give an audible warning when the
fault status in the network has triggered the preset severity level.

In addition to the fault monitoring facilities, the Fault Management System user interface also offers
other, supporting tools. The fault view mode (maintenance status/service status/mask status)
for different types (network element/trunk/view/alarm/fault) can be defined. Fault reports with
several search conditions can be retrieved from the database and printed, if necessary. Old events
can be deleted from the fault history when no more needed. The Trouble Ticket facility can be
used to record information on network service problems. Any inconsistencies detected between
the system database and the actual network hardware can be studied in more detail. Tools for
defining the fault filtering characteristics also exist.


Fig. 36 Fault Report Window

Fault Acknowledgment

Each fault event must be acknowledged, either by the operator or automatically by the system. When the fault is acknowledged, the alert indication (i.e. blinking) stops. Alerting thus remains active for an unacknowledged fault event even if the fault itself has disappeared. Only after the acknowledgement is performed and the fault is fixed in the network does the corresponding object return to the normal state, allowing the fault event to be transferred to the fault history.
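The lifecycle described above reduces to two flags per fault event, acknowledged and active; a minimal sketch:

```python
# Sketch of the acknowledgment lifecycle: two flags per fault event decide the
# object indication. This mirrors the rules in the text, not the real FMS code.

def event_state(acknowledged, active):
    if not acknowledged:
        return "alerting"   # blinks even if the fault has already cleared
    if active:
        return "faulty"     # acknowledged but still present: static indication
    return "normal"         # acknowledged and fixed: may go to fault history

assert event_state(acknowledged=False, active=False) == "alerting"
assert event_state(acknowledged=True, active=True) == "faulty"
assert event_state(acknowledged=True, active=False) == "normal"
```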

Fault Filtering

The purpose of fault filtering is to make it possible to fine-tune fault management to meet different needs. Faults that are filtered out (masked) are not stored in the database; all other faults are stored. Faults in the database can later be dynamically filtered for viewing in the Fault Management user interface.
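A minimal sketch of the storage-side filtering: masked faults never reach the database, everything else is stored. The rule representation below is hypothetical:

```python
# Sketch of storage-side fault filtering: faults matching a mask rule are
# dropped before the database; all others are stored. Rule form is hypothetical.

def store_fault(fault, mask_rules, database):
    """Return True if the fault was stored, False if it was masked out."""
    if any(rule(fault) for rule in mask_rules):
        return False
    database.append(fault)
    return True

db = []
mask_rules = [lambda f: f["severity"] == "Warning"]   # e.g. mask all warnings
store_fault({"id": 1, "severity": "Warning"}, mask_rules, db)   # masked
store_fault({"id": 2, "severity": "Major"}, mask_rules, db)     # stored
```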

Special Concepts Used in FMS

Fault Status Indication

A fault event reported by a network element contains the fields listed below.

The severity status has four categories:

• Critical
• Major
• Minor
• Warning


The maintenance status has three categories:

• Prompt Maintenance Alarm (PMA)
• Deferred Maintenance Alarm (DMA)
• Maintenance Event Information (MEI)
The maintenance status is converted from severity values in the following way:

• Critical mapped to Prompt Maintenance Alarm (PMA)
• Major mapped to Deferred Maintenance Alarm (DMA)
• Minor mapped to Maintenance Event Information (MEI)
• Warning mapped to Maintenance Event Information (MEI)
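The conversion above is a direct mapping; as a lookup table:

```python
# The severity-to-maintenance-status conversion above as a lookup table.

MAINTENANCE_STATUS = {
    "Critical": "PMA",   # Prompt Maintenance Alarm
    "Major":    "DMA",   # Deferred Maintenance Alarm
    "Minor":    "MEI",   # Maintenance Event Information
    "Warning":  "MEI",
}
```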
The service status of a fault event has two categories:

• Service alarm (S)
• Non-service alarm (non-S)
The fault status categories are used in the color coding of the FMS user interface to indicate the
severity of fault events.

The coding of different FMS object states in the Fault Management windows is by default as follows:

normal object: static color code
- CYAN for non-faulty objects

alerting object: blinking color code
- RED for PMA/S/Critical faults
- YELLOW for DMA/Major faults
- GREEN for MEI/non-S/Minor/Warning faults

faulty object: static color code
- RED for PMA/S/Critical faults
- YELLOW for DMA/Major faults
- GREEN for MEI/non-S/Minor/Warning faults

It is possible to define a different color coding for a workstation with the Configure Fault Colors
dialog. Note, however, that the Tellabs 8000 manager documentation in general refers to standard
coding (red, yellow, and green) when fault colors are concerned.

If an object contains several faults, the object symbol status and color are determined in accordance
with the most serious fault event. The order, from the most serious fault to the least serious one, is
as follows:

• Alerting PMA/S/Critical
• Alerting DMA/Major
• Alerting MEI/non-S/Minor/Warning
• Static PMA/S/Critical
• Static DMA/Major
• Static MEI/non-S/Minor/Warning
• Normal
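The ordering above can be expressed as a priority list; an object's displayed state is determined by its most serious fault. A sketch using the maintenance-status abbreviations:

```python
# The ordering above as a priority list: an object's symbol follows its most
# serious fault, with alerting levels outranking static ones. A sketch.

PRIORITY = [
    ("alerting", "PMA"), ("alerting", "DMA"), ("alerting", "MEI"),
    ("static", "PMA"), ("static", "DMA"), ("static", "MEI"),
]

def object_state(faults):
    """faults: list of (ack_state, maintenance_status); [] means normal."""
    if not faults:
        return "normal"
    return min(faults, key=PRIORITY.index)   # lowest index = most serious

# An alerting MEI fault outranks a static PMA fault, as in the list above.
state = object_state([("static", "PMA"), ("alerting", "MEI")])
```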


Note that you can select the layout mode to be based either on the fault maintenance status (PMA,
DMA, MEI) or the service status (S, non-S). It is also possible to separate acknowledgement
information from priority calculation in the Set Fault Status View dialog. Thus the order, from the
most serious fault to the least serious one, is as follows:

• PMA/S/Critical
• DMA/Major
• MEI/non-S/Minor/Warning
• Normal

3.2.5 Trouble Ticket

The objective of the Trouble Ticket facility is to record, manage and control the service problems
associated with networks, customers and elements. A trouble ticket is a form filled in partly by
the system and partly by the operators. Tools for creating, viewing, editing and deleting these
tickets are supplied.

The first category of tickets is opened for general problems that do not involve a customer or a network element. A customer ticket is attached to a specific customer in the network; typically, it is created in response to a customer complaint. A network ticket is assigned to a specific network object, either a trunk or a card.

Fig. 37 Trouble Ticket Dialog


3.2.6 Accounting Management

Accounting Management is supported for Tellabs 8100 network elements. Circuit connection
time reports are based on the circuit accounting data that the Tellabs 8000 manager components
(Router, Circuit Loop Tests, and Recovery) produce during the circuit’s lifetime. The accounting
data contains information on all activation, deactivation, testing and recovery actions on the circuit.

3.2.7 Security Management

Operator Access Control

Before any network management operation can be performed, an operator must first log in by entering a valid username/password pair.
Several username/password combinations can be created by an operator for different purposes. If
wished, the system can force the operators to change their passwords regularly. The system can also
be set to limit the number of failed login attempts. Each login and logout operation is recorded in an
audit log where they can be viewed by an operator with security privileges.

Privileges

A set of privileges is attached to each operator when the username of an operator is created. A
privilege is a permission to use a network management application or part of the application. In most
applications, privileges are divided into separate read and write privileges, allowing an operator
to use an application either in a read-only mode (no permission to alter any data) or in a modify
all mode (altering data in the database is allowed). In the case of some applications, operators can
be given even more detailed privileges, i.e. an operator can use an application to perform certain
operations while some other operations are prohibited.

In a partitioned network, it is possible to give operators permission to manage only part of the network (one or several regions). Even inside a region, operator privileges can be restricted to only the backbone or access level of the network. If desired, other parts of the network can be hidden
altogether from a specified operator. For more information on partitioning, refer to 3.12 Partitioned
Package.

Profiles

Privileges to use different network management applications are grouped to form an operator
profile. A profile can then be attached to any network operator or group of operators. By giving
the same profile to a group of operators it is easy to add or remove privileges to/from several
operators at the same time.

LDAP and RADIUS Authentication

It is also possible to authenticate users against a central LDAP or RADIUS server. This is beneficial when a user needs access to several tools: the passwords are managed centrally and need to be changed only once in the LDAP or RADIUS server, instead of in each of the tools the user has access to. In addition, a
Tellabs 8000 manager user profile is assigned to the users in the LDAP or RADIUS server. The user
profile defines the user authorization for each of the Tellabs 8000 manager applications.


3.2.8 Automatic Maintenance Procedures

The Automatic Maintenance Procedures (AMP) tool is used for automating the maintenance
procedures of the Tellabs 8000 manager database. The database requires regular maintenance. The
tool has a graphical user interface that can be used for performing and scheduling maintenance
procedures.

Fig. 38 Automatic Maintenance Procedures Dialog

3.3 Provisioning Packages

The TDM, ATM, Ethernet, FR/HDLC, IP VPN and Optical Provisioning Packages allow the operator
to choose what kind of services to configure in the network. There are three tools for provisioning
the services available in the Tellabs 8000 manager: Router, VPN Provisioning and VLAN Manager.
The Router tool is used to provision TDM and optical circuits, the VPN Provisioning tool is used to
configure Ethernet, ATM, TDM, Frame Relay and HDLC pseudowires, VLAN VPNs and IP VPNs,
and the VLAN Manager tool is used to configure VLAN VPNs in the Tellabs 8100 and Tellabs 6300
networks. Each provisioning package contains the following functionality:

• TDM Provisioning - The TDM Provisioning Package allows the user to configure PDH and SDH
circuits with the Router tool and TDM pseudowires with the VPN Provisioning tool.
• ATM Provisioning - The ATM Provisioning Package allows the user to configure ATM pseu-
dowires with the VPN Provisioning tool.


• Ethernet Provisioning - The Ethernet Provisioning Package allows the user to configure Ethernet
pseudowires and Tellabs 7100/Tellabs 7300 based VLAN VPNs with the VPN Provisioning tool.
It also allows the user to provision VLAN VPNs in the Tellabs 8100 and Tellabs 6300 networks
with the VLAN Manager tool.
• IP VPN Provisioning - The IP VPN Provisioning Package allows the user to provision IP VPNs
with the VPN Provisioning tool.
• FR/HDLC Provisioning - The FR/HDLC Provisioning Package allows the user to configure
Frame Relay and HDLC pseudowires with the VPN Provisioning tool.
• Optical Provisioning - The Optical Provisioning Package allows the user to establish optical
(DWDM) connections in a network where Tellabs 7100 system is deployed.
In addition, there is a Tunnel Engineering tool for manually provisioning traffic-engineered LSPs in
MPLS networks. The Tunnel Engineering tool is automatically included in each of the Provisioning
Packages, except for the Optical Provisioning Package.

Below, a more detailed description about each of the provisioning tools - Router, VPN Provisioning,
VLAN Manager and Tunnel Engineering - is given.

3.3.1 Router

General

Router is a flexible Tellabs 8000 manager tool where you can set up and manage circuits through the
Tellabs 8000 system network. Router shows an overview of the network with the Navigator Tree,
List and Network View windows, views for different network objects (nodes, trunks, NTUs and
locations) and displays connection routes. You can zoom into nodes and interfaces to view and set
the communication parameters of the interfaces.

Connections are referred to as circuits. Circuits can be routed through Tellabs 8100, Tellabs 6300
and Tellabs 7100 network elements both automatically and manually. It is also possible to terminate
SDH and PDH circuits in the Tellabs 8600 and Tellabs 8800 network elements. The circuits can be
completely or partially connected and disconnected. It is also possible to create planned circuits
without a physical network. In this way the connections can be built in advance using planned
interfaces and trunks. The circuit only has to be connected to be operational when the physical
network is available.

Currently SDH, PDH, ATM, Frame Relay and optical circuits are supported. Circuits are also used
to route virtual VC-4/2/12, VLAN, 2M, 1/1, ATM, Frame Relay and STM-64 trunks through the
high-order network.

By enabling the Multi-Layer Routing feature, you can semi-automatically create VC-12 trunks over
the SDH 4/1 layer with Trunk Wizard when routing PDH 1/0 circuits.

With 2M Cascaded circuits, you can use the 4/1, 1/1 and 1/0 layers in the same circuit without
having to create virtual trunks when switching between layers.

If you have Tellabs 6300 ETEX units in your network, you can use their VC groups in Diverse SDH circuits. With a diverse circuit you can route each child VC interface of a VC group separately or together with any selection of the other child VC interfaces of the group. You can also add VC child interfaces to the VC group or remove them from it later and route/unroute them without disturbing the traffic. This can be used with LCAS for network protection and dynamic capacity changes.


Circuit Components

A circuit is identified by a unique ID and name. The necessary circuit parameters, such as capacity,
customer and signaling, must be specified at the initial state when creating the circuit. The capacity
can be in the range of 8 kbps – 10 Gbps. The basic circuit consists of two endpoints (NTUs
or interfaces) and a route.

A route is created by allocating, manually or automatically, capacity from the trunks. Manual
routing is done by simply selecting (double-clicking on) the trunks with enough capacity (colored
green/dark green). The automatic routing uses a shortest path algorithm to find the optimal route by
calculating a set of weighted criteria (length, cost, delay and occupation).
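The automatic routing described above can be sketched as Dijkstra's algorithm over trunks whose edge weight combines the four criteria; the graph, trunk attributes and criterion weights below are illustrative:

```python
# Sketch of automatic routing: Dijkstra's shortest path where each trunk's
# weight is a weighted sum of length, cost, delay and occupation.

import heapq

CRITERIA = ("length", "cost", "delay", "occupation")

def trunk_weight(attrs, weights):
    return sum(weights[c] * attrs[c] for c in CRITERIA)

def route(graph, src, dst, weights):
    """graph: {node: [(neighbor, trunk_attrs), ...]}; returns (weight, path)."""
    heap = [(0.0, src, [src])]
    visited = set()
    while heap:
        total, node, path = heapq.heappop(heap)
        if node == dst:
            return total, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, attrs in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(heap, (total + trunk_weight(attrs, weights), nbr, path + [nbr]))
    return None

weights = {"length": 1.0, "cost": 0.5, "delay": 2.0, "occupation": 1.0}
graph = {
    "A": [("B", {"length": 10, "cost": 1, "delay": 1, "occupation": 0.2}),
          ("C", {"length": 30, "cost": 1, "delay": 1, "occupation": 0.1})],
    "B": [("C", {"length": 5, "cost": 1, "delay": 1, "occupation": 0.1})],
}
```

Adjusting the weight values biases the optimal route toward cost, length, delay or occupation, as the routing templates do.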

Reserve backup routes for PDH 1/0 circuits can be pre-routed by Router or dynamically routed by
Recovery Management. They are used by Recovery Management to back up the primary route
when necessary.

In a partitioned network, the network is divided into regions on two levels, backbone and access.
Backbone level trunks are high-capacity trunks carrying only inter-regional data. Access level
trunks are mainly used for carrying data within the region. To keep this structural two-level network
well differentiated, you need to control the way you route. In short, regional routes use access
level trunks, and inter-regional routes mainly use backbone level trunks. For more information on
partitioning, refer to 3.12 Partitioned Package.

A complex circuit with several endpoints consists of several subcircuits to help manage the circuit.
In this way the circuit topology can be changed even while the original circuit is still active. The
subcircuits can, naturally, be separately connected and disconnected.

Circuit Topologies

The types of circuits supported are point-to-point (pp), point-to-multipoint (pmp) and broadcast (bc).

The point-to-point type of circuit, which is the most commonly used, is bi-directional and consists
of two end interfaces and a route, see Fig. 6. Interfaces for test purposes can also be added, but
they are used only in the Circuit Loop Test tool. When transporting a PCM signal, it is possible
to compress the signal to an ADPCM signal in order to save trunk capacity in the network. The
compression can be from 64 kbps to 32, 24, 16 or 8 kbps.

A special Circuit Swap feature can be used for the swap type of point-to-point circuits. The idea is to allow replacing one circuit endpoint with another when, for some reason, the primary endpoint is damaged. For convenience, several swap circuits can use the same swap point, so that the endpoints of several circuits can be replaced in a single step.

The broadcast type of circuit is uni-directional and consists of one master and any number of slave
interfaces, branch nodes, subcircuits and routes.

The point-to-multipoint type is similar to the broadcast type of circuit, but the data flow is
bi-directional. Additionally, branch nodes have special Point-to-Multipoint (PMP) Servers that are
used to control data flow in the slave-master direction.


Network Optimization Tools

With the Reroute Circuits tool, you can select a set of PDH or SDH circuits, choose alternative endpoints and automatically route alternative paths with the desired routing templates. The connections can then be swapped and the unused routes deleted. This is useful when new nodes or trunks have been installed or when the network topology is changing due to growth.

For the PDH, SDH and ATM circuits you can also route, unroute, connect and disconnect the
original paths.

The routing template basically controls what trunks to use or not to use. Weights can also be set
to optimize the route for cost, length, delay, occupation or a combination of these. The routing
templates are stored in the database, and special privileges are required to edit them. It is also
possible to keep a local routing template that is not stored in the database.

3.3.2 VPN Provisioning

The VPN Provisioning package provides tools for creating, modifying and deleting different kinds
of MPLS VPN services, in other words, pseudowires and BGP/MPLS IP VPNs. In addition,
provider bridging based VLAN VPN services for the Tellabs 7100 packet subsystems and Tellabs
7300 network elements are supported. These services are collectively referred to as VPN services in
the text below. The VPN Provisioning tool provides a high-level user interface for establishing any
type of these VPN services. Interfaces in which VPN endpoints are created can be chosen from a
tree view, where interfaces or entire nodes bound to customer sites are displayed below customer
and site information.

The main window of the VPN provisioning tool is shown in the figure below.


Fig. 39 Main Window of VPN Provisioning Tool

The VPN Provisioning tool supports seven different kinds of VPNs: Ethernet Pseudowire Mesh,
ATM Pseudowire Mesh, TDM Pseudowire Mesh, FR Pseudowire Mesh, HDLC Pseudowire Mesh,
BGP/MPLS VPN and Default Routing Access.

A Pseudowire Mesh VPN has a point-to-point topology with two endpoints connected with a
pseudowire. You may create several point-to-point pseudowires under the same VPN, therefore
creating a pseudowire mesh. For the Ethernet pseudowires, the two endpoints will either be in
Ethernet ports or VLAN/S-VLAN sub-interfaces. For ATM pseudowires, the endpoints will either
be ATM VP or VC sub-interfaces. For TDM pseudowires, the endpoints will be either E1/T1
interfaces or nx64 kbps interfaces configured on top of the E1/T1 interfaces. The E1/T1 interfaces may be native E1/T1 interfaces or E1/T1 streams aggregated onto SDH/SONET channelized interfaces. For Frame Relay pseudowires, the endpoints will be Frame Relay PVCs. For HDLC
pseudowires, the endpoints will be HDLC interfaces. Also TDM, ATM, Ethernet, Frame Relay and
HDLC cross-connections (locally connected pseudowires) are supported.

For pseudowires, two different topologies are supported: single-segment and multi-segment. These
differ in how the pseudowire is carried in the packet-based transport network. A single-segment
pseudowire is carried end-to-end by one packet network tunnel, whereas a multi-segment
pseudowire consists of many segments, each carried by a separate tunnel, which are switched in
intermediate nodes. Multi-segment pseudowires provide a more flexible and scalable mechanism for
managing pseudowires, supporting e.g. aggregation of multiple pseudowires into the same tunnel in
hub points and interconnectivity between different packet network domains.
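The single-segment versus multi-segment distinction can be modelled as a list of stitched segments, one per packet network tunnel. A minimal sketch follows; the node and tunnel names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    """One pseudowire segment, carried end-to-end by a single tunnel."""
    ingress: str
    egress: str
    tunnel: str

def validate_pseudowire(segments):
    """Consecutive segments must be stitched at switching nodes:
    each segment must start where the previous one ended."""
    for prev, nxt in zip(segments, segments[1:]):
        if prev.egress != nxt.ingress:
            raise ValueError(f"segments not stitched at {prev.egress!r}/{nxt.ingress!r}")
    return True

def switching_nodes(segments):
    """Intermediate nodes where the pseudowire is switched between tunnels."""
    return [s.ingress for s in segments[1:]]

# A single-segment PW has no switching nodes; a multi-segment PW has one per stitch.
single = [Segment("PE1", "PE2", "tunnel-A")]
multi = [Segment("PE1", "P-hub", "tunnel-A"), Segment("P-hub", "PE2", "tunnel-B")]
```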


Fig. 40 Multi-segment Pseudowire

BGP/MPLS VPNs may have multiple endpoints. Three different topology types are supported for
these kinds of VPNs:

1. Full mesh: Traffic may be sent from any endpoint to any other endpoint through the VPN.
2. Central services hub-and-spoke: Traffic may be sent from a single hub endpoint to any of the
spoke endpoints or from any of the spoke endpoints to the hub endpoint. However, the spoke
endpoints may not communicate with each other through the VPN.
3. Full-featured hub-and-spoke: Also referred to as firewall hub-and-spoke after one typical
application. In this VPN topology the spoke endpoints can communicate with each other
through a customer router that lies behind the hub endpoint. In this case the hub actually has
two endpoints: one for incoming traffic from the spokes towards the hub and another for
outgoing traffic from the hub towards the spokes.

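BGP/MPLS VPN topologies of this kind are conventionally realized with BGP route targets: one endpoint learns another endpoint's routes, and can therefore send traffic to it, when it imports a route target that the other endpoint exports. A sketch of that rule, with made-up route-target values:

```python
def can_communicate(a, b):
    """Endpoint a can send traffic to endpoint b if some route target that
    a imports is exported by b (i.e. a learns b's routes)."""
    return bool(set(a["import"]) & set(b["export"]))

# Full mesh: every endpoint imports and exports the same route target.
mesh = [{"import": {"RT:mesh"}, "export": {"RT:mesh"}} for _ in range(3)]

# Central-services hub-and-spoke: spokes export RT:spoke and import RT:hub;
# the hub does the opposite, so spokes reach the hub but not each other.
hub = {"import": {"RT:spoke"}, "export": {"RT:hub"}}
spoke1 = {"import": {"RT:hub"}, "export": {"RT:spoke"}}
spoke2 = {"import": {"RT:hub"}, "export": {"RT:spoke"}}
```
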
In the Default Routing Access type of service, the interfaces associated with the service do
not belong to any VPN. The IP traffic originating from the interfaces will be routed using the
default routing table, thus interfaces belonging to different Default Routing Access instances may
actually communicate with each other. The Default Routing Access instances can be thought of
as administrative instances for the service, used for documenting the interfaces where broadband
services have been configured.

In addition to these topologies available for customer VPNs carrying payload data, VPN
provisioning also supports a single management VPN (one per network) enabling central
management of the non-VPN-aware CE routers on the customer premises. The topology of the
management VPN is like that of the central services hub-and-spoke VPN, except that the
management VPN may have more than one hub site.

The workflow for establishing IP VPN services is basically the same as for pseudowires. However,
for the IP VPNs there are some additional steps related to classifying and policing the traffic and
configuring routing options towards the customer networks.


Fig. 41 Endpoint Dialog

The Endpoint dialog is used for defining the properties that determine how different end-customer
sites are connected to a BGP/MPLS VPN.


Fig. 42 Service Classification Templates Dialog

The Service Classification Templates dialog is used for defining how different kinds of customer
traffic are directed to different service classes when entering the VPN from a customer site.

Pseudo Wire Wizard

The Pseudo Wire Wizard is a convenient tool for creating a large number of pseudowires in the
network. The wizard allows the user to select a group of source and destination interfaces from the
network and then creates the needed pseudowires between the selected interfaces. If required,
the wizard can connect the pseudowires during the same operation.


Fig. 43 Pseudo Wire Wizard Dialog

Discovering Pseudowires

It is possible to discover pseudowires that have been configured in the Tellabs 8600 network
elements directly through the command line interface. The discovery functionality has been
implemented for users who want to roll out the network quickly with pre-generated CLI scripts.
After the roll-out phase, the discovery functionality can be used to upload the pseudowires to the
Tellabs 8000 manager so that they can be managed centrally from the NMS. The discovery
functionality supports ATM and Ethernet pseudowires.

Reparenting Pseudowires

VPN Provisioning also contains a pseudowire editor that is convenient for reparenting
pseudowires. The user can select a group of interfaces used by pseudowires and select Move
Endpoints from the pop-up menu, then select the destination interfaces to which the pseudowires
should be moved and click Start. All pseudowires in the pseudowire editor are then moved to the
new destination interfaces. The pseudowire editor can also be used to edit basic parameters of
the pseudowires.


Fig. 44 Pseudo Wire Editor Dialog

3.3.3 Tunnel Engineering

The Tunnel Engineering tool can be used for creating traffic engineered LSPs either when
provisioning an MPLS VPN or as an entirely separate action. It can also be used for setting explicit
routes for traffic engineered LSPs and for viewing routes of existing LSPs in the network.

The Tunnel Engineering tool can also be used for configuring IP tunnels. IP tunnels are used for
transporting pseudowires over an IP infrastructure when MPLS is not available.

The main window of the Tunnel Engineering tool is shown in the figure below.


Fig. 45 Main Window of Tunnel Engineering Tool

With this tool the user may either bind all hops on the path of an LSP or bind only some of them
and leave the decision about the rest to the node. The tool can also be used for viewing the actual
path of RSVP based LSPs routed entirely by the nodes. The routing of these LSPs is based on
the CSPF (Constrained Shortest Path First) algorithm running in the ingress node of the LSP. Note,
however, that the path of LDP based LSPs cannot be viewed. Tellabs 8000 manager supports Traffic
Engineering using E-LSPs and L-LSPs. The essential difference between these is that a single
E-LSP can carry traffic belonging to multiple service classes, whereas a single L-LSP only carries
traffic belonging to a single service class.
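CSPF itself is shortest-path routing over a topology pruned by the tunnel's constraints. The following is a minimal sketch with a bandwidth constraint only; the link data is invented, and a real CSPF implementation also handles administrative groups, shared-risk link groups and other constraints:

```python
import heapq

def cspf(links, src, dst, required_bw):
    """Constrained shortest path: prune links with insufficient unreserved
    bandwidth, then run Dijkstra on the remaining topology.
    links: {(a, b): (cost, available_bw)}; links are unidirectional here."""
    adj = {}
    for (a, b), (cost, bw) in links.items():
        if bw >= required_bw:          # constraint: enough bandwidth left
            adj.setdefault(a, []).append((b, cost))
    dist = {src: 0}
    queue = [(0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path                 # first pop of dst is the shortest path
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nxt not in dist or nd < dist[nxt]:
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None                         # no path satisfies the constraint
```

With this sketch, a tunnel that needs more bandwidth than the direct link offers is routed around it, which is exactly the behavior an ingress node's CSPF produces.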

The Tunnel Engineering tool also supports the protection of RSVP based LSPs. Two protection
mechanisms are supported: RSVP-TE 1:1 path protection and LSP 1+1 protection.

3.3.4 VLAN Manager

VLAN Manager is an optional tool for Tellabs 8000 manager. It is used for graphical and user
friendly provisioning of VLAN connections, i.e. VLAN VPNs.


VLAN Manager is a part of Tellabs Ethernet switched solution (ESS) concept. The concept makes it
possible to use an existing Tellabs 8100/6300 network as a core network for layer 2 Ethernet services.

VLAN Manager is used only for VLAN VPN provisioning. Other management tasks, such as
creating layer 1 connections, are performed with other Tellabs 8000 manager applications,
which enables seamless integration with TDM provisioning and eliminates the need for redundant
configuration work.

VLAN trunks, i.e. layer 1 connections between Ethernet/VLAN switching network elements,
are created in the same way as the other trunk types in Tellabs 8000 manager by using Network
Editor. VLAN Manager treats VLAN trunks as TDM capacity reserved for its own use and uses the
VLAN trunk information stored in the Tellabs 8000 manager database to calculate optimal routes
for VLAN VPNs.

The VLAN Domain Configuration tool has also been integrated into Network Editor. The tool
automates the configuration of VLAN domain wide (e.g. QoS) parameters into the Ethernet/VLAN
switching network elements. VLAN domains are separate Ethernet/VLAN network instances, i.e.
sets of Ethernet/VLAN switching elements and VLAN trunks that connect them.

3.4 Testing Package

3.4.1 Overview

The Testing Package contains the Packet Loop Test and Circuit Loop Test tools for troubleshooting
circuits, pseudowires and IP VPNs. The Circuit Loop Test tool can be used for TDM-based services,
whereas the Packet Loop Test tool has been designed for MPLS-based services.

3.4.2 Packet Loop Test

Overview

Packet Loop Testing is a testing tool for detecting and localizing problems in the network and for
verifying VPN services created by the Tellabs 8000 manager VPN Provisioning tool. It contains a
collection of test packages consisting of different test items. The network can be tested after setting
up a new node or fixing a trunk fault, or before creating a new service. With the IP VPN and PW
tests the user can ensure that the IP VPN and PW are working correctly and fulfill the configured
QoS requirements before taking the service into use.

The tests can be executed manually or they can be scheduled for automatic execution in the
background. The tests running in the background could be used to give the operator a daily view of
the network and the services.

The results of the tests can be stored in the database for further analysis.

Test Packages

Packet Loop Testing contains the following test packages and test items:

Management Communication Test Package contains functionality for testing the communication
between Tellabs 8000 manager and nodes. This package includes the following tests:


• Node Management Connectivity Test pings nodes to check the IP-level connectivity between
the Communication Server and the nodes.
• Node Management Traceroute Test detects the route between the Communication Server and
the nodes. It can be useful when investigating management connectivity problems between the
Tellabs 8000 manager and a node.
• Trunk Management Connectivity Test tests the IP-level connectivity between the Communica-
tion Server and one or both end nodes of a specific trunk in the network.
• Trunk Management Traceroute Test detects the route between the Communication Server and
the end nodes of a specific trunk in the network. This test item is not yet supported.
• IP VPN Management Connectivity Test pings all endpoint nodes of the IP VPN to check the
IP-level connectivity between the Communication Server and the nodes. Note that the loopback
interface of the node is tested instead of the real IP VPN endpoint interface.
• IP VPN Management Traceroute Test detects the routes to all endpoint nodes of the IP VPN
from the Communication Server. This can be useful for investigating management connectivity
problems. This test item is not yet supported.
• PW Management Connectivity Test pings all endpoint nodes of the pseudowire to check the
IP-level connectivity between the Communication Server and the nodes. Note that the loopback
interface of the node is tested instead of the real PW endpoint interface.
• PW Management Traceroute Test detects the routes to all endpoint nodes of the pseudowire
from the Communication Server. This can be useful for investigating management connectivity
problems. This test item is not yet supported.
• TE Tunnel Management Connectivity Test pings the endpoint nodes of the tunnel to check the
IP-level connectivity between the Communication Server and the nodes. Note that the loopback
interface of the node is tested instead of the real tunnel endpoint interface.
• TE Tunnel Management Traceroute Test detects the routes to the endpoint nodes of the tunnel
from the Communication Server. This can be useful for investigating management connectivity
problems. This test item is not yet supported.


Fig. 46 Trunk Management Connectivity Test

Node Network Test Package contains functionality for testing basic router network communication
between nodes in the network. These tests can be run unidirectionally or bidirectionally.


• Node Connectivity Test pings nodes to check the IP-level connectivity between two nodes.
• Node Traceroute Test traces the path between two nodes. It can be useful when investigating
node connectivity problems in the network.
• The Node Quality Test Sub-package can be used for testing the quality of the connectivity be-
tween two nodes.
• Node Throughput Test is an intrusive test for determining the maximum bandwidth avail-
able between two nodes without having the traffic discarded. This test should be used with
care as it will affect live traffic in the network and is usually only used in the initial deploy-
ment phases of the network. This test is supported in the Tellabs 8600 node hardware and
therefore throughput between two nodes can be tested up to the line-speed of the interfaces
in the node. Packet distribution, send interval and QoS distribution can conveniently be
selected using predefined templates. It is also possible for the user to define his own distri-
bution values.
• Node Round-trip Delay Test is used for testing the round-trip delay of packets sent between
two nodes. This test is supported in the Tellabs 8600 node hardware and therefore micro-
second accuracy can be obtained for the test results.
• Node One-way Delay Test can be used for testing the delay of packets sent between two
nodes. It needs accurate time synchronization between the nodes e.g. by using NTP. This
test item is not yet supported.
• Node One-way Delay Variation Test can be used for testing the one-way delay variation
between two nodes. This test is supported in the Tellabs 8600 node hardware and therefore
micro-second accuracy can be obtained for the test results.
• Node One-way Packet Loss Test can be used for testing the packet loss for packets sent
between two nodes. Packet distribution, send interval and QoS distribution can be defined
using predefined templates, or the user can define them manually.

Fig. 47 Node One-way Delay Variation Test
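The delay metrics above reduce to simple arithmetic over packet timestamps. The sketch below shows a round-trip summary and a simple inter-packet delay-variation measure (the microsecond values in the usage note are illustrative). It also shows why delay variation, unlike one-way delay, needs no clock synchronization: a constant offset between the two nodes' clocks cancels out of consecutive differences.

```python
def round_trip_stats(rtts_us):
    """Summarize a series of measured round-trip delays (microseconds)."""
    return {"min": min(rtts_us), "max": max(rtts_us),
            "avg": sum(rtts_us) / len(rtts_us)}

def delay_variation(one_way_delays_us):
    """One-way delay variation as the differences between consecutive packet
    delays (a simple IPDV measure). A constant clock offset between sender
    and receiver appears in every delay sample and cancels out here."""
    return [b - a for a, b in zip(one_way_delays_us, one_way_delays_us[1:])]
```

For example, measured one-way delays of 1000, 1020 and 990 microseconds yield delay variations of +20 and -30 microseconds regardless of any fixed clock offset between the nodes.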


Trunk Test Package contains functionality for testing communication between trunk end nodes in a
basic router network. These tests can be run unidirectionally or bidirectionally.

• Trunk Connectivity Test pings trunk end nodes to check the IP-level connectivity between those
two end nodes.
• Trunk Traceroute Test traces the path between the end nodes of a trunk. It can be useful when
investigating connectivity problems between the end nodes. This test item is not yet supported.
• The Trunk Quality Test Sub-package can be used for testing the quality of the connectivity
between the end nodes of a trunk. This test sub-package is not yet supported.
• Trunk Throughput Test is an intrusive test for determining the maximum bandwidth avail-
able between the end nodes of a trunk without having the traffic discarded. This test should
be used with care as it will affect live traffic in the network and is usually only used in the
initial deployment phases of the network. This test is supported in the Tellabs 8600 node
hardware and therefore throughput between two nodes can be tested up to the line-speed of
the interfaces in the node. Packet distribution, send interval and QoS distribution can con-
veniently be selected using predefined templates. It is also possible for the user to define his
own distribution values.
• Trunk Round-trip Delay Test is used for testing the round-trip delay of packets sent be-
tween the end nodes of a trunk. This test is supported in the Tellabs 8600 node hardware
and therefore micro-second accuracy can be obtained for the test results.
• Trunk One-way Delay Test can be used for testing the delay of packets sent between the
end nodes of a trunk. This test item is not yet supported.
• Trunk One-way Delay Variation Test can be used for testing the one-way delay variation
between the end nodes of a trunk. This test is supported in the Tellabs 8600 node hardware
and therefore micro-second accuracy can be obtained for the test results.
• Trunk One-way Packet Loss Test can be used for testing the packet loss for packets sent
between the end nodes of a trunk. Packet distribution, send interval and QoS distribution
can be defined using predefined templates, or the user can define them manually.
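A throughput test of this kind is essentially a search for the highest offered rate that the path carries without loss. The following is a hedged sketch using binary search; `send` stands in for one intrusive test iteration, and the actual node-hardware tests also vary packet size and QoS distribution:

```python
def max_lossless_rate(send, low=0.0, high=1000.0, tolerance=1.0):
    """Binary-search the highest offered rate (Mbit/s) that passes without
    discarded traffic. `send(rate)` returns True when no loss was observed
    at that rate; the search strategy here is illustrative only."""
    best = low
    while high - low > tolerance:
        rate = (low + high) / 2
        if send(rate):
            best, low = rate, rate      # lossless: try a higher rate
        else:
            high = rate                 # loss observed: back off
    return best
```

The result converges to within the chosen tolerance of the path capacity, which is why such a test is intrusive: each probe deliberately pushes the path to (and past) its limit.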

IP VPN Test Package contains functionality for executing tests on individual VPNs. The IP VPN
tests combine different test items so that the operator needs to execute only a few tests manually.


• IP VPN Connectivity Test gives the operator the ability to test the connectivity automatically
between all endpoint interfaces of an IP VPN.
• IP VPN Traceroute Test allows the operator to discover the routes between all endpoint inter-
faces of an IP VPN.
• IP VPN Quality Test Sub-Package enables the operator to test the quality of transmission be-
tween each site in an IP VPN. These tests are supported in the Tellabs 8600 node hardware;
therefore, both endpoint nodes must be Tellabs 8600 nodes.
• IP VPN Throughput Test is an intrusive test for determining the maximum bandwidth that
a certain IP VPN has between the endpoints in the IP VPN. This test will affect the IP VPN
traffic and can also affect other traffic in the network, so it should be used carefully. This
test is typically used when provisioning a new IP VPN to make sure that the IP VPN can
carry the traffic provisioned for the user without discarding traffic. This test is supported in
the node hardware, which means that traffic can be generated up to the interface line-speeds.
Packet distribution, send interval and QoS distribution can conveniently be selected using
predefined templates. It is also possible for the user to define his own distribution values.
• IP VPN Round-trip Delay Test is used to measure the round-trip delay of packets being
sent between the endpoints in the IP VPN. This test is supported in the node hardware, and
therefore micro-second accuracy can be obtained.
• IP VPN One-way Delay Test is used to measure the one-way delay between the endpoints
in an IP VPN. This test item is not yet supported.
• IP VPN One-way Delay Variation Test is used to test the one-way delay variation of test
packets sent between the endpoints in an IP VPN. This test is supported in the node hard-
ware, and therefore micro-second accuracy can be obtained.
• IP VPN One-way Packet Loss Test can be used for testing the packet loss for packets sent
between endpoints in an IP VPN. Packet distribution, send interval and QoS distribution can
be defined using predefined templates, or the user can define them manually.

The following figure shows an example of the IP VPN Connectivity Test window:

Fig. 48 IP VPN Connectivity Test


The following figure shows an example of the IP VPN Round-trip Delay Test window:

Fig. 49 IP VPN Round-Trip Delay Test

PW Test Package contains functionality for testing communication between pseudowire end nodes
in a basic router network. These tests can be run unidirectionally.


• PW Connectivity Test can test the connectivity inband and end-to-end for a pseudowire having
the VCCV control channel enabled using the VCCV ping for all PW types except for the Ethernet
PW. This is a powerful test for verifying the consistency between the control plane and forwarding
plane for the tested PW. In addition, the ATM ping is supported for ATM based PWs. It is also
possible to select an ICMP ping based test between the end nodes of the PW e.g. when VCCV
is not enabled for a PW or for an Ethernet PW.
• PW Traceroute Test traces the path inband and end-to-end for a pseudowire having the VCCV
control channel enabled using the VCCV traceroute for all PW types except the Ethernet PW.
This is a powerful test for verifying the consistency between the control plane and forwarding
plane for multi-segment PWs. It is also possible to select an ICMP traceroute based test between
the end nodes of the PW e.g. when VCCV is not enabled for a PW or for an Ethernet PW.
• The PW End Node Quality Test Sub-package can be used for testing the quality of the connec-
tivity between the end nodes of a pseudowire.
• PW End Node Throughput Test is an intrusive test for determining the maximum band-
width available between the end nodes of a pseudowire without having the traffic discarded.
This test should be used with care as it will affect live traffic in the network and is usu-
ally only used in the initial deployment phases of the network. This test is supported in the
Tellabs 8600 node hardware and therefore throughput between two nodes can be tested up
to the line-speed of the interfaces in the node. Packet distribution, send interval and QoS
distribution can conveniently be selected using predefined templates. It is also possible for
the user to define his own distribution values.
• PW End Node Round-trip Delay Test is used for testing the round-trip delay of packets
sent between the end nodes of a pseudowire. This test is supported in the Tellabs 8600 node
hardware and therefore micro-second accuracy can be obtained for the test results. This test
item is not yet supported.
• PW End Node One-way Delay Test can be used for testing the delay of packets sent be-
tween the end nodes of a pseudowire. This test item is not yet supported.
• PW End Node One-way Delay Variation Test can be used for testing the one-way delay
variation between the end nodes of a pseudowire. This test is supported in the Tellabs 8600
node hardware and therefore micro-second accuracy can be obtained for the test results.
• PW End Node One-way Packet Loss Test can be used for testing the packet loss for packets
sent between the end nodes of a pseudowire. Packet distribution, send interval and QoS
distribution can be defined using predefined templates, or the user can define them manually.
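The method-selection rules for the PW Connectivity Test described above can be written down directly. The sketch below captures them; the method and pseudowire type names are illustrative, not Tellabs 8000 identifiers:

```python
def select_connectivity_test(pw_type, vccv_enabled):
    """Choose connectivity test methods for a pseudowire, following the
    rules above: VCCV ping for VCCV-enabled non-Ethernet PWs, ATM ping
    additionally available for ATM PWs, and ICMP ping between the end
    nodes always available as a fallback."""
    methods = []
    if vccv_enabled and pw_type != "ethernet":
        methods.append("vccv-ping")     # inband, end-to-end
    if pw_type == "atm":
        methods.append("atm-ping")      # ATM-specific alternative
    methods.append("icmp-ping")         # fallback between end nodes
    return methods
```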


Fig. 50 PW End Node One-way Delay Variation Test

TE Tunnel Test Package contains functionality for testing TE tunnel communication between the
tunnel ends in the network. These tests can be run only unidirectionally.


• TE Tunnel Connectivity Test can test the connectivity inband and end-to-end for an RSVP TE
tunnel using the LSP ping. This is a powerful test for verifying the consistency between the
control plane and forwarding plane for the tested tunnel. It is also possible to select an ICMP
ping based test between the end nodes of the tunnel e.g. when troubleshooting situations where
an LSP ping based test gives a connectivity error as a result.
• TE Tunnel Traceroute Test traces the path inband and end-to-end for an RSVP TE tunnel using
the LSP traceroute. This is a powerful test for verifying the consistency between the control
plane and forwarding plane for the tested tunnel. It is also possible to select an ICMP traceroute
based test between the end nodes of the tunnel e.g. when troubleshooting situations where an
LSP traceroute based test fails to trace the complete path.
• The TE Tunnel Quality Test Sub-package can be used for testing the quality between the end
nodes of a tunnel.
• TE Tunnel Throughput Test is an intrusive test for determining the maximum bandwidth
available between the end nodes of a tunnel without having the traffic discarded. This test
should be used with care as it will affect live traffic in the network and is usually only used in
the initial deployment phases of the network. This test is supported in the Tellabs 8600 node
hardware and therefore throughput between two nodes can be tested up to the line-speed
of the interfaces in the node. Packet distribution, send interval and QoS distribution can
conveniently be selected using predefined templates. It is also possible for the user to define
his own distribution values.
• TE Tunnel Round-trip Delay Test is used for testing the round-trip delay of packets sent
between the end nodes of a tunnel. This test is supported in the Tellabs 8600 node hardware
and therefore micro-second accuracy can be obtained for the test results. This test item is
not yet supported.
• TE Tunnel One-way Delay Test can be used for testing the delay of packets sent between
the end nodes of a tunnel. This test item is not yet supported.
• TE Tunnel One-way Delay Variation Test can be used for testing the one-way delay vari-
ation between the end nodes of a tunnel. This test is supported in the Tellabs 8600 node
hardware and therefore micro-second accuracy can be obtained for the test results.
• TE Tunnel One-way Packet Loss Test can be used for testing the packet loss for packets
sent between the end nodes of a tunnel. Packet distribution, send interval and QoS distribu-
tion can be defined using predefined templates, or the user can define them manually.


Fig. 51 TE Tunnel Throughput Test

Background Test Manager

The Background Test Manager contains functionality for defining and scheduling tests that are
executed in the background. When run by the Tellabs 8600 system nodes, these are essentially
performance tests, that is, they produce performance data to be used by, e.g., the Integrated
Performance Management tools. Note that not all tests can be scheduled to run in the background.
A test can be scheduled to run once, or repeatedly on a minute, hourly, daily, weekly or yearly basis
within a specified time interval.
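The scheduling options map naturally onto a next-run computation. The sketch below is a minimal model of the idea; the frequency names and window handling are assumptions for illustration, not the actual Tellabs 8000 scheduler:

```python
from datetime import datetime, timedelta

# Hypothetical recurrence periods for the schedule frequencies listed above.
PERIODS = {
    "once": None,
    "minute": timedelta(minutes=1),
    "hourly": timedelta(hours=1),
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
    "yearly": timedelta(days=365),  # simplification: ignores leap years
}

def next_run(last_run, frequency, window_end=None):
    """Return the next execution time, or None when the schedule is finished
    (one-shot schedules, or the next run falls outside the time interval)."""
    period = PERIODS[frequency]
    if period is None:
        return None
    nxt = last_run + period
    if window_end is not None and nxt > window_end:
        return None
    return nxt
```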

The basic operations for managing background tests are the following:

• Adding a new test schedule
• Viewing the status of background tests
• Suspending a running background test
• Resuming a suspended background test
• Deleting an existing background test schedule
• Modifying an existing background test configuration
• Viewing the results of the background tests

The following figure shows an example of the GUI of the Background Test Manager window.


Fig. 52 Background Test Manager Window

The following figure shows an example of the result window of a TE Tunnel Management
Connectivity Test run by the Background Test Manager.

Fig. 53 Test Result Details Window


SLA Monitoring

The Background Test Manager can be used for SLA monitoring of services configured on the
Tellabs 8600 system nodes. The node, IP VPN and PW throughput, one-way delay variation,
round-trip delay and packet loss tests can be scheduled to run in the background, and a threshold
can be assigned to each test. If the threshold is exceeded, the Background Test Manager inserts a
fault into the database, notifying the user of an SLA violation.
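The threshold check described above amounts to comparing each stored test result against its limit and emitting a fault record on violation. A sketch with invented metric names:

```python
def check_sla(results, thresholds):
    """Compare background test results against per-metric thresholds and
    return fault records for any violations. Metric names and the record
    layout are illustrative, not the Tellabs 8000 database schema."""
    faults = []
    for metric, value in results.items():
        limit = thresholds.get(metric)
        if limit is not None and value > limit:
            faults.append({"metric": metric, "value": value,
                           "threshold": limit, "fault": "SLA violation"})
    return faults
```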

3.4.3 Circuit Loop Test

The Circuit Loop Test is a testing tool for detecting and localizing problems in circuits created
by Tellabs 8000 manager. It can help to qualify circuits or transmission media by giving detailed
performance results. Several features allow the user to test different sections of a circuit. Different
test patterns and loops can be used. The control signal operation can be tested separately, and a
break in a circuit can be easily detected.

The Circuit Loop Test facility supports three categories of circuits: PDH 1/0 circuits, SDH paths
and ATM virtual paths. For the PDH 1/0 circuits the Circuit Loop Test window allows activating
test pattern generation and reception in order to retrieve detailed results including error rate, failure
and performance statistics. For the ATM virtual paths there are two tests: a loopback test for data
transmission testing and a continuity check test for permanent continuity monitoring. In the
Circuit Loop Test window it is also possible to define the ATM virtual path segment endpoints
required for loopback and continuity check. The SDH paths have their own test window, which
allows access to several Tellabs 8000 manager tools and can also activate a cross-connection loop
on the path.

General Test Properties for PDH 1/0 Circuits

For the PDH 1/0 circuits, the Circuit Loop Test tool supports testing of point-to-point, compressed
point-to-point, swap point-to-point, broadcast and point-to-multipoint circuit types. All types,
except for compressed point-to-point circuits, can be tested at the endpoint level. Compressed
point-to-point circuits must be tested at the subcircuit level because they are not bit-transparent
from end to end.

All different types of point-to-point circuits can be tested either uni-directionally or bi-directionally.
A test is defined by selecting a subset of available resources between the endpoints. If a swap
point-to-point circuit is tested, one of the possible endpoint pairs must be chosen first.

When testing a broadcast circuit, the master endpoint and at least one of the slave endpoints must
be selected. It is normally possible to select any number of slave endpoints for testing, from one up
to all. However, a lack of common test resources can sometimes be a restriction.

Point-to-multipoint circuits are tested uni-directionally in the same way as the broadcast circuits.
The point-to-multipoint circuits can also be tested bi-directionally. In that case the master endpoint
and only one of the slave endpoints can be selected for testing.

A so-called intermediate node can be selected along the route of a circuit. The intermediate node
provides additional resources for testing.

It is also possible to test the predefined backup routes of point-to-point circuits. At the beginning,
the primary route is disconnected and the predefined backup route is connected. After that the
circuit can be tested in a normal way.


Test endpoints are additional interfaces that can be used for monitoring the data flow through the
actual circuit. This is a useful way to perform long-term monitoring tests without disturbing the
circuit. This feature is available for point-to-point circuits only.

Available Resources

There are three different resource types for testing purposes. Test resources are objects that can
transmit and receive data and control signals. Loops are used for looping the received signal
back to the original source. Redirections are used for directing the signal to a test endpoint for
monitoring purposes.

• Test resources are used to generate appropriate test data patterns or control signals. A circuit-
specific test resource is located, for example, in an interface reserved by the circuit. The node
common test resource can also be used in access nodes or intermediate nodes. If the intermediate
node is a branch node in a point-to-multipoint circuit, there are two additional test resources in
the PMP Server units (VCM or GCH-A). The node common test resource is located in a control
unit (e.g. SCU and SCU-H). In addition, there is a test resource in every VCM and GCH-A
interface. There are test resources also in NTUs (for further information, refer to the appropriate
NTU manual or booklet).
• Loops are used for looping the incoming signal back to the original source. There are three
different places where a loop can be activated or created:

X-Connection: The signal is looped back using a cross-connection command at the given port
of an endpoint interface or trunk.
Loop in Interface: A loop that is activated by a management command in the given endpoint
interface. This kind of loop can be activated in the CAE, CCO, CCS, GCH-A, GMH, IUM,
ISD-LT, ISD-NT and VCM interfaces.
Loop in NTU: A loop that is activated by a management command in the given Network
Terminating Unit (NTU). There are NTU loops in all NTU types.

• Redirections are located in a special kind of intermediate node called a Test Branch Node. The
test branch node is the place where the incoming signal can be directed to a test endpoint. There
are two types of redirections. Listening to an endpoint means that you direct the signal to the
test endpoint by making a uni-directional connection from the endpoint to the test endpoint; the
normal operation of the circuit is not disturbed in any way. The other type replaces one of the
endpoints with a test endpoint: the original connection is removed and a bi-directional connection
is made from the other endpoint to the test endpoint. Only one redirection can be created per
endpoint, but redirections to both endpoints can be active at the same time.

Test Types

Testing of circuits can be separated into two categories. An internal test is performed by using one
or more test resources for transmit/receive purposes. In an external test neither a test pattern nor a
control signal is generated by Circuit Loop Test. External equipment is used for performing the test.


• Internal Test includes two different sets of tests. The first one is often referred to as a Control
test which is performed on the circuit control signals (105-109, 108-107, 105-106) by driving the
circuitry directly. The second one is simply called a Test. This means that the data path is tested
using different test patterns.
• External Test can be activated by making a loop-back or by using redirections. The external
equipment can, for example, transmit or receive data like a test resource. Monitoring the incom-
ing or passing signal is another example.
Both asynchronous and synchronous circuits can be tested.

Test Results

The Circuit Loop Test facilities allow the user to activate test pattern generation and reception of
PDH 1/0 circuits in order to retrieve detailed results, including error rate, failure and performance
statistics. In the case of ATM virtual paths, the test results consist of the count of test cells looped
back or the presence of the loss of continuity fault in Fault Management, depending on the ATM test
type. Circuit Loop Test collects no results for SDH cross-connection loops. The current test status is
displayed while a test is active on a workstation. In addition, all types of active tests (PDH 1/0,
ATM, and SDH) can be saved on the test catalogue in the database. The workstation can even be
switched off while the testing remains active in the Tellabs 8100 system network. In these cases the
tests are called background tests. The background tests can be accessed again or terminated at any
time by starting Circuit Loop Test for them again.

The test results are presented as bit error and G.821 counters, signal interruption counters, cell
counters or faults, depending on the test. When external test equipment is used, no test results are
available.
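As a rough illustration of how the G.821 counters mentioned above are derived, the per-second classification can be sketched as follows. This is a simplified model, not the Tellabs implementation; the SES threshold assumes a 64 kbit/s channel, where a bit error ratio above 1e-3 corresponds to more than 64 bit errors in one second.

```python
# Simplified G.821-style classification of one-second bit error counts.
# Assumption: 64 kbit/s channel, so BER > 1e-3 means > 64 bit errors
# in a second (a severely errored second, SES).
def classify_g821(errors_per_second, bit_rate=64_000, ses_ber=1e-3):
    ses_threshold = bit_rate * ses_ber  # errors that make a second "severely errored"
    es = sum(1 for e in errors_per_second if e > 0)               # errored seconds
    ses = sum(1 for e in errors_per_second if e > ses_threshold)  # severely errored seconds
    return {"ES": es, "SES": ses, "total": len(errors_per_second)}

# Example: ten measured seconds of bit error counts
counts = [0, 0, 3, 0, 120, 0, 0, 1, 0, 0]
print(classify_g821(counts))  # {'ES': 3, 'SES': 1, 'total': 10}
```

The real counters also involve unavailable-time rules, which this sketch omits.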

Characteristics of SDH Path Loop Test

The SDH Testing tool has two modes, intrusive and non-intrusive. In the non-intrusive mode one
can view faults, performance and trail trace from accessible VC objects. The intrusive mode adds
the capability to set cross-connection loops, add Supervisory Unequipped VC objects and access
the tool for SNC switch manipulation.

The window shows the whole SDH circuit at the same time. All nodes, trunks, internal connections
and VC objects are drawn. Endpoint information is shown for circuit endpoints. Accessible VC
objects related to the circuit are drawn as gray triangles. Typically, there are terminating VC objects
at circuit endpoints and possibly monitoring VC objects at SNC/N branch points.

Non-Intrusive Testing

Non-intrusive testing does not disturb circuit traffic, unless one changes trail trace. The tools include:

• Trail Trace tool starts the Trail Trace Tool. It can be started for a single VC object or a concate-
nated group of VC objects.
• Performance tool starts the Performance Tool. It can be started for a VC object. In the case of a
concatenated circuit, the VC object is selected from the Concatenated Components dialog.
• Faults tool starts the Fault View for the selected VC object. Use the Concatenated Components
dialog to select the component from a concatenated group.


• Active monitoring tool enables periodic polling of a VC object. The object is drawn in red if
there are faults or in dark blue if the performance has degraded. Black means that there has been
a communication error when the object was polled, and white means that polling has not been
performed for the object. A normal, well functioning object is shown in light blue. Only one
component of a group can be polled. The component can be selected from the concatenation
group.
• Node Manager tool starts the Node Manager tool for the selected node.
To use a tool, select it from the toolbox, then click on the VC object you wish to access.

The Concatenated Components dialog can be used to specify a component of a concatenated
group. Some of the tools require the selection of a single component. Cross-connection loops and
SNC switch always affect the whole group. Trail Trace can affect the whole group or a single
component. Other tools affect only a single component.

Intrusive Testing

Loops and SNC switch operations can disturb traffic. If the circuit is a virtual trunk, a warning is
given. To appreciate the level of danger involved, consider a virtual trunk which carries hundreds of
circuits. If the virtual trunk is cut by a loop, all circuits will also be cut. If automatic recovery is
active, all of those PDH 1/0 circuits would be candidates for recovery, and that would severely tax
the servers and control channels of the network.

Intrusive testing differs from non-intrusive testing by allowing more tools and options. When the
selection tool is active and there is no active loop in the circuit, all locations where a loop can be
placed are shown by a green rectangle with an arrow inside. An active loop is shown in a yellow
rectangle, and all other possible loop locations are hidden while a loop is active. To activate the
selected loop, click the Activate button. To reset an active loop, click the Reset button. To clear the
selection, click the Clear button. When a loop is activated, no faults are masked in the network.

If one exits from the tool while the loop is active, the loop will remain active in the network. The
test can be opened again for the same circuit, and the loop will be shown as active. It can be reset
normally then.

Compared with the non-intrusive case, the toolbox contains two additional tools, SNC Switch
and Supervisory Unequipped:

• SNC Switch tool starts the SNC Switch Control Tool for the SNC branches. The tool is especially
useful in combination with the loop tool, as it allows the SNC/N protection to be bypassed so that
the looped signal can be forced at the desired position. Normally a working SNC/N protection
would recover from the loop by using the non-looped portion of the circuit, if possible.
• Supervisory Unequipped. Node Manager can be used to create special VC objects called Su-
pervisory VC objects, which fill the payload with zeros. The signal sent from Supervisory VC
objects can be equipped with a trail trace, which distinguishes the supervisory signal from an
unequipped fault. The trail trace must be set separately.
This tool can be used to determine the points in the circuit where the Supervisory VC objects
will be connected. The actual connections are made when the test is activated, and removed
when the user resets the test, i.e. the connections are handled in the same manner as the
cross-connection loops.
There are no separate test results beyond those which can be seen from the different tools. The
test has no set duration (cross-connection loops have no time out possibility, and there is no fault
masking involved), but the database test catalog has a nominal 24-hour duration for the test. The
Test Reminder is not used by the SDH test tool.


The combination of trail trace, loop, supervisory unequipped testing and SNC switch can be used to
verify that the SNC protection works, that the signal reaches all parts of the circuit, and that the
correct faults are generated at the endpoints under different conditions.

General Test Properties for ATM Virtual Paths

The Circuit Loop Test tool supports testing of point-to-point ATM virtual paths. As they can be
seen as circuits, the term 'circuit' is used for them as well.

The testing of ATM virtual paths is quite different from the testing of PDH circuits. All virtual path
tests are non-intrusive, since the normal transmission can continue in spite of the test activities.
The tests can have a slight effect on the overall performance though. The tests are based on the
ATM Operations and Maintenance (OAM) cells.

For the ATM Virtual Paths there are two tests: loop back test for data transmission testing and
continuity check test for permanent continuity monitoring. With the Circuit Loop Test window it
is also possible to define ATM virtual path segment endpoints.

The continuity check is a so-called permanent test, which means that it has no predefined test
duration. The test remains active until it is deactivated.

Testing Level

An ATM virtual path can be seen in different ways from the management point of view. You can
consider it as one entity from one virtual path connection endpoint to the other. In that case an
end-to-end level is in question. On the other hand, a virtual path connection can be divided into
smaller, independently managed portions, called segments. When a test is run in such a portion, it is
run at the segment level. Segment endpoints can be defined with the Circuit Loop Test facility.

The loop back test can be run either at the segment or end-to-end level. The end-to-end level
can be used if the other end of the test path is in external equipment. The continuity check test
is always run at the segment level.

Available Resources

The testing resources for the ATM virtual path tests are so-called Link Termination Points (LTP).
There are two link termination points in one ATM access unit, one in the access interface side and
the other in the trunk interface side. The link termination points can also be defined as segment
endpoints.

For an ATM virtual path test, one link termination point is defined as a source and the other as a
sink. A source inserts test cells into the data stream; a sink receives those cells. Sometimes, as in a
loop back test, the source and the sink are the same link termination point.

A loop is a link termination point on the virtual path connection, where the test cells are looped
back to the original source. With a loop back test, this looping point can be a segment endpoint
or a virtual path connection endpoint. In the first case the testing is done at the segment level
and in the latter case the level is end-to-end.


Test Results

The test result for ATM virtual paths is the number of cells sent and received in the loop back test.
For the continuity check, the result can be viewed in Fault Management as a VPL (Virtual Path
Link) down alarm if LOC (loss of continuity) is detected.
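The loop back result amounts to comparing the sent and received test-cell counters; a trivial illustration of that comparison (hypothetical function and field names, not the actual tool's interface):

```python
def loopback_result(cells_sent, cells_received):
    """Summarize an OAM loop back test from the two cell counters."""
    lost = cells_sent - cells_received
    return {
        "sent": cells_sent,
        "received": cells_received,
        "lost": lost,
        "loss_ratio": lost / cells_sent if cells_sent else 0.0,
    }

# 1000 test cells inserted, 997 looped back to the source
print(loopback_result(1000, 997))
```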

3.5 Service Fault Monitoring Package

The Service Fault Monitoring Package allows the operator to monitor both TDM circuits and MPLS
based services in the network. The operator may react to fault situations in the network based on the
priority of the services being affected. It also includes a service management tool that can be used to
bundle a number of TDM circuits, MPLS VPNs and VLAN VPNs into a logically managed service.

3.5.1 Service Fault Monitoring Windows

There are two Service Fault Monitoring related windows: Service Fault Monitoring (SFM)
window and Service Fault window.

Service Fault Monitoring enables the real-time supervision of all services, i.e. pseudowires, TDM
circuits, VLAN VPNs and IP VPNs that are assigned to a certain Service Category. A Service
Category is a collection of pre-selected services. It is possible to configure Service Fault Monitoring
so that only the selected service categories are displayed. In this mode, symbols are displayed in the
order they were selected. In addition to the predefined categories, an operator can dynamically select
Customers to be taken under monitoring. The fault status of categories and customers is displayed
in the Service Fault Monitoring window. The All Faulty Services symbol can be added to the
lower right-hand corner of the window. This symbol represents the total fault status of all services.
By default, the selected service categories and customers are saved in the configuration file and
thus remembered, so the previous view is automatically restored the next time. One
workstation or Fault Server can be selected to save and update all faulty services in the database.

All faulty services, and the fault status details of the corresponding category or customer, are listed
in the Service Fault window. The appearance of Category and Customer symbols follows the same
conventions as other objects in the FMS windows (color coding and blinking status directly
reflect the fault state of the object). From the Service Fault window an operator can open a Service
Fault Report window concerning a specific service and its faults.

3.5.2 Service Management

The Service Management tool allows the operator to manage a number of TDM circuits, VLAN
VPNs and MPLS VPNs as one logical service. The connections can be grouped together to form a
service object which can then be monitored as one entity in Service Fault Monitoring. It is also
possible to associate trunks and nodes with a service object.

In the Service Management window the operator can see all the components associated with the
service and launch associated tools with each of the components such as the Router, Circuit Loop
Test, VPN Provisioning, Packet Loop Test, etc. If fault monitoring has been enabled, the service is
monitored as one entity in Service Fault Monitoring and the user may also see the fault status of the
service in the Service Management window.


There is no restriction on how you group the connections in the network into services. It is also
possible to associate the same connection with several services. There are several options for
grouping the connections. The operator may group the circuits and VPNs such that all connections
associated with a certain base station are grouped together. Another option is to group all leased line
trunks together and monitor them as one entity. If the operator uses Ethernet switches managed by
the Tellabs 8000 manager to form a management DCN, the Ethernet switches can be grouped to
form one DCN service, and so on.
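Conceptually, a service object is just a named bundle of heterogeneous components; the grouping described above can be modelled as follows (an illustrative data model only, not the manager's schema):

```python
# Minimal model of a service object that bundles TDM circuits, VPNs,
# trunks and nodes. Kinds and identifiers are illustrative.
class Service:
    def __init__(self, name):
        self.name = name
        self.components = []  # (kind, identifier) pairs

    def add(self, kind, identifier):
        self.components.append((kind, identifier))

    def of_kind(self, kind):
        return [ident for k, ident in self.components if k == kind]

# The same connection may belong to several services.
dcn = Service("DCN service")
dcn.add("ethernet_switch", "sw-1")
dcn.add("ethernet_switch", "sw-2")

base_station = Service("Base station BTS-7")
base_station.add("tdm_circuit", "ckt-100")
base_station.add("mpls_vpn", "vpn-42")
base_station.add("tdm_circuit", "ckt-101")
print(base_station.of_kind("tdm_circuit"))  # ['ckt-100', 'ckt-101']
```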

Fig. 54 Service Management Window


3.6 Performance Management Package

3.6.1 Overview

The Performance Management Package monitors and reports the performance data for the Tellabs
8000 managed network. It supports TDM G.826 statistics for the Tellabs 8100 and Tellabs 6300
systems, packet counter statistics for the Tellabs 8100 system, and packet and cell counter statistics
(PWE, ATM port, ATM VP/VC, ATM IMA group, Ethernet, TE tunnel, PPP, ML-PPP group, FR
port, FR DLCI, cHDLC/HDLC, Abis compression group) for the Tellabs 8600 system and for
the ETEX VCG Ethernet and VLAN interfaces of the Tellabs 6300 system nodes. TDM ETSI
G.826 and ANSI GR-253/GR-820 statistics are supported for the Tellabs 8600 system nodes. The
Tellabs 8800 system node support includes SDH port level, IP interface, PWE and RSVP-TE tunnel
statistics. The Tellabs 7100 system node support includes Optical/OTN history counters.

The background collection continuously collects data from the interfaces in the network and stores
the history data in the database. The history data can then be viewed and analyzed later. In case
of real-time performance monitoring, the performance indicators are continuously read from the
network elements and drawn in a moving graph for the Tellabs 8600 system, Tellabs 8800 system,
ETEX VCG Ethernet and VLAN interfaces of the Tellabs 6300 system nodes.

For more information on the supported counters and supported history periods related to Tellabs
8600, Tellabs 8800, Tellabs 7100 and Tellabs 6300 system nodes, refer to the Tellabs 8000 manager
online help.

3.6.2 Performance Statistics

The different types of supported performance statistics are listed below. For more information on
the available counters, refer to Tellabs 8000 manager online help.

G.821/G.826 TDM (ETSI) Performance Statistics for Tellabs 8100/Tellabs 6300 Network Elements

G.821/G.826 history statistics are available for Tellabs 8100 and Tellabs 6300 network elements for
trunks, circuits and end interfaces.

For trunks, the performance history data at both ends of the trunk may be reported either separately or
by combining the trunk endpoints together. Both 24-hour and 15-minute performance statistics exist.
Both lower and higher order SDH statistics as well as PDH statistics are supported. G.821/G.826
history statistics are also available for 1+1 protected trunks and SNC protected virtual paths.

For circuits, the G.821/G.826 performance history data can be either monitored end-to-end or
summed over trunk sections when end-to-end monitoring is not available. Summing gives the worst
case estimates of the performance statistics for the circuit. For summed performance statistics,
only 24-hour statistics are available. Both 24-hour and 15-minute performance histories exist for
end-to-end monitored circuits.

For end interfaces, G.821/G.826 statistics of MUAP interfaces and GMH/SUAP interfaces (with Rx
Monitoring in use) can be monitored when CRC Monitoring is used. Both 24-hour and 15-minute
statistics exist.


G.826 (ETSI) and GR-253/GR-820 TDM (ANSI) Performance Statistics for Tellabs 8600 and Tellabs
8800 Network Elements

G.826 and GR-253/GR-820 performance statistics are available for the interfaces of the Tellabs
8600 network elements. G.826 specifies PM parameters for ETSI STM-n and E1. The Telcordia
specifications GR-253 and GR-820 define PM parameters for SONET and DS1 respectively.
Tellabs 8600 network elements support a subset of these for channelized SONET and T1 interfaces
(chSTM-1/OC-3, 24 x E1/T1 IFMs and Tellabs 8605 access switch T1 interfaces). The network
element stores 32 x 15 minute periods (current + 31 x history) and 2 x 24 hour periods (current + 1 x
history). ANSI performance counters are available for Section, Line, STS path and VT1.5 path for
SONET, and DS1 line and DS1 path for DS1. ETSI performance counters are available for MS, RS,
VC-4 path and VC-12 path for SDH STM-n, and E1 line and P12S path for E1.
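The element-side storage described above (a current period plus a fixed history depth) behaves like a small ring buffer; a sketch of that behaviour, using the stated 32 x 15-minute depth (illustrative code, not the element firmware):

```python
from collections import deque

class PmBins:
    """Current PM period plus fixed-depth history, e.g. 32 x 15-minute
    bins (current + 31 history) as on the Tellabs 8600 elements."""
    def __init__(self, depth):
        self.history = deque(maxlen=depth - 1)  # completed periods only
        self.current = {}                       # counters of the open period

    def count(self, name, n=1):
        self.current[name] = self.current.get(name, 0) + n

    def roll(self):
        """Close the current period; the oldest history bin falls out."""
        self.history.appendleft(self.current)
        self.current = {}

bins = PmBins(depth=32)
for period in range(40):      # more rollovers than the depth
    bins.count("ES", period % 3)
    bins.roll()
print(len(bins.history))      # 31 -- only the most recent history is kept
```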

For Tellabs 8800 system network elements, ANSI/ETSI Section/RS performance 15-minute history
reporting is supported.

The Performance Management application is used for reporting performance history data from the
database, whereas the 8600 Node Manager application shows current and history counters from the
network element. Performance statistics is collected by Communication Server in the Tellabs 8000
manager database for 15-minute and 24-hour history periods. The polling policy in Network Editor
defines whether statistics is collected periodically or once a day at a specified time.

Optical/OTN Performance Statistics for Tellabs 7100 Network Elements

Performance Management supports statistics collection and reporting of Optical/OTN layer
15-minute and 24-hour PM history for the Tellabs 7100 network elements. The network element has
the recent 7 x 24-hour and 32 x 15-minute history periods stored and history data is transferred from
the network element using TL1 protocol. PM counters are supported for different facility types;
OMS, OCH, OSC, OCH-L, OCH-P, OTU1, OTU2 and OTU3. Performance statistics is collected by
Communication Server in the Tellabs 8000 manager database for 15-minute and 24-hour history
periods. The polling policy in Network Editor defines whether statistics is collected periodically or
once a day at a specified time.

Abis Group Performance Statistics

Abis group performance data is available for an Abis group of the Tellabs 8600 network elements.
The Performance Management tool supports reporting 15-minute and 24-hour history periods.
Performance statistics is collected by Communication Server in the Tellabs 8000 manager database
for 15-minute and 24-hour history periods. The polling policy in Network Editor defines whether
statistics is collected periodically or once a day at a specified time.

ATM Statistics – ATM port, IMA Group, VP and VC

ATM statistics are available for Tellabs 8600 network elements. ATM statistics are continuously
polled from the network element running counters, and the polling interval can be configured in the
Communication Server by the user. ATM statistics are reported for ATM ports, IMA groups and
ATM virtual paths and virtual circuits.
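Statistics read from running counters, as described above, are typically turned into per-interval values by differencing successive polls. A sketch of that differencing, including a guard against counter wrap (the 32-bit width is an assumption for illustration):

```python
def counter_delta(prev, curr, width=32):
    """Per-interval value from two polls of a free-running counter,
    tolerating at most one wrap between polls."""
    modulus = 1 << width
    return (curr - prev) % modulus

# Normal case, and a 32-bit counter that wrapped between polls:
print(counter_delta(100, 250))        # 150
print(counter_delta(0xFFFFFFF0, 16))  # 32
```

If the counter can wrap more than once within a polling interval, the interval must be shortened or a wider counter used; the delta alone cannot detect multiple wraps.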


Packet Statistics – PWE TDM Counters (SAToP, CESoPSN)

PWE TDM statistics are available for Tellabs 8600 network elements, for SAToP and CESoPSN
pseudowires. History data (15-minute and 24-hour periods) and real-time graphs can be used for
reporting.

For Tellabs 8800 system network elements, PWE TDM SAToP performance real-time and 15-minute
history reporting is supported.

Packet Statistics – PWE Counters

PWE statistics are available for Tellabs 8600 network elements and for ATM (VP/VC/AAL5), FR
DLCI and HDLC pseudowires. History data and real-time graphs can be used for reporting.

For Tellabs 8800 system network elements, PWE performance real-time and 15-minute history
reporting is supported.

Packet Statistics – Ethernet and STM-n POS PPP

Packet statistics are available from the Tellabs 8600 network elements, for Ethernet and STM-n POS
PPP trunks or interfaces. The data is continuously polled from the running counters of the Tellabs
8600 network elements. The polling interval can be configured by the user.

Packet Statistics – PPP and ML-PPP Group

Packet statistics are available from the Tellabs 8600 network elements for PPP and ML-PPP
interfaces. The data is continuously polled from the running counters of the Tellabs 8600 network
elements. The polling interval can be configured by the user.

Packet Counters – FR, FR DLCI and cHDLC/HDLC

Packet statistics are available from the Tellabs 8600 network elements for FR, FR DLCI and cHDLC
interfaces. The data is continuously polled from the running counters of the Tellabs 8600 network
elements. The polling interval can be configured by the user.

Packet Counters – TE Tunnel Statistics

TE tunnel statistics are available from the Tellabs 8600 network elements. Tunnel in counters are
continuously polled from the running counters of the network elements. The polling interval can
be configured by the user.

For Tellabs 8800 system network elements, TE tunnel performance real-time and 15-minute history
reporting is supported.

Packet Counters – VLAN Queue Statistics

VLAN Queue counters are available for the DiffServ classes CS7, EF, AF1 and BE. Dropped and
passed octets/packets are supported for the following Tellabs 8600 network elements: Tellabs 8605
access switch (FP1.2) and Tellabs 8607 access switch (FP1.0A or higher). Performance Management
has real-time counter support.


Tellabs 6300 ETEX VCG Ethernet and VLAN Interface Statistics

Packet statistics are available from the Tellabs 6300 network elements for the ETEX VCG Ethernet
and VLAN interfaces. From Tellabs 6300 network elements 15-minute history periods are collected
in the Tellabs 8000 manager database, and the running counters of the Tellabs 6300 network elements
are periodically polled for real-time reporting. The polling interval can be configured by the user.

Packet Counters – IP Statistics for Tellabs 8800 Network Elements

For Tellabs 8800 system network elements, performance real-time and 15-minute history reporting
of IP interfaces is supported.

IP Statistics for Tellabs 8100 Network Elements

LAN/WAN interface statistics and the usage percentage of Buffer and CPU of an NTU are available
for Tellabs 8100 NTUs with routing/bridging functionality. The statistics are reported in 15-minute
intervals of current day or previous day(s). The reports are generated by reading statistics from
the hardware or database.

Node 1/0 Cross-Connection 24-Hour History Reporting

The cross-connection capacity reporting application is used for reporting used port capacity and
total port capacity history of subracks in 1/0 nodes of the Tellabs 8100 system. History 24-hour data
is reported for 1/0 nodes. Search criteria have been implemented to help find the most interesting or
heavily loaded nodes, subracks or Tellabs 8160 A111 shelves in the Tellabs 8100 system network.

• Used port capacity means the port capacity used or reserved by circuits routed through or
terminating at the X-bus (in a node or subrack). All circuits whose state is planned or higher are
included in the calculation. The value is displayed in units of 8 kbps or 1 kbps.
• Total port capacity means the total sum of the port capacity of unit interfaces on the X-bus (in a
node or subrack). All interfaces whose state is planned or higher are included in the calculation.
The value is displayed in units of 8 kbps or 1 kbps.
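The two quantities above combine into a simple utilization figure, which is the kind of value the search criteria rank nodes by; a sketch (hypothetical function, units in kbps):

```python
def xbus_utilization(used_kbps, total_kbps):
    """Used vs. total port capacity of an X-bus, as a percentage."""
    if total_kbps == 0:
        return 0.0
    return 100.0 * used_kbps / total_kbps

# e.g. 1536 kbps reserved by circuits out of 2048 kbps of unit interfaces
print(round(xbus_utilization(1536, 2048), 1))  # 75.0
```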

Node 1/0 X-bus Monitoring Tool (1/0 Nodes of Tellabs 8100 System)

Cross-Connection Monitoring Tool (CMT) is a tool for reporting 1/0 cross-connection bus allocation
statistics of X-bus. CMT reads the cross-connection bus allocation status directly from the network
element and the results are displayed in the Cross-Connection Monitoring Tool dialog. The tool
supports 1/0 Tellabs 8100 nodes.

Miscellaneous Statistics

The Circuit Recovery Down-Time Report contains the down time caused by circuit recovery
operations. A reduced set of error counters (TT, AT, UAT) is reported.

The NTU line breaks count is reported for circuits and tells how many seconds the NTU subscriber
line has been unavailable.


Trunk capacity 24-hour history data for primary trunks contains trunk used capacity and trunk
total capacity. Search criteria have been implemented to help find the most interesting or heavily
loaded trunks in the Tellabs 8100 system network.

Control channel path 24-hour history data for control channel paths and the alternative paths are
reported at the node level. For example, the statistics of a path ending at a Tellabs 8170 cluster node
are composed of sums of subrack statistics. Errors and time-outs found during fixed one-day periods
are reported. The user may search for poorly functioning control channels on the basis of errors and
time-out criteria. The control paths of different areas may be reported separately.

Trunk 24-hour G.82x history data for primary trunks contains G.82x performance statistics
for TDM trunks. The data is calculated once a day by SLA Server service in Communication
Server and stored in the database. The data may then be exported to third party tools through the
northbound interface.

Circuit 24-hour G.82x history data for circuits contains G.82x performance statistics for TDM
circuits. The data is calculated once a day by SLA Server service in Communication Server and
stored in the database. The data may then be exported to third party tools through the northbound
interface.

3.6.3 History Data Collection and Ageing

Communication Servers take care of the data collection from Tellabs 8600, Tellabs 8800, Tellabs
7100 and Tellabs 6300 network elements. The DXX Servers take care of the data collection from
Tellabs 8100 network elements. The data collection runs continuously without operator intervention
after the server programs have been started.

Statistics are polled from the network elements continuously in the background according to a
polling policy configured for the Communication Servers and stored in the Tellabs 8000 manager
database. The polling policy controls how data is polled from the network element. For Tellabs
8600 network elements, the polling policy contains information on how often the network elements
should be polled for new packet statistics for periodically polled MIBs (MIB2, ATM2 MIB, MPLS
TE, FR DTE). The recommended polling period is 15 minutes, but shorter polling periods can also
be defined, if required. A polling policy for 15-minute and 24-hour PM records can be periodic or
daily polling. Polling of the ETEX VCG Ethernet interface is done according to the polling policy
defined in the Tellabs 6300 Adapter, and the recommended period is 4 hours. The polling policy in
the Tellabs 8800 Adapter defines the polling periods for Tellabs 8800 network elements. The polling
policy in Tellabs 7100 Adapter defines the polling periods for the Tellabs 7100 network elements.
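A polling policy of the kind described (periodic with a configurable interval, or once a day at a set time) reduces to a simple next-poll computation; a sketch with assumed policy shapes (the actual configuration format is not specified here):

```python
from datetime import datetime, timedelta

def next_poll(policy, last_poll):
    """Next polling time under a periodic or daily policy.
    policy: {'mode': 'periodic', 'minutes': 15}
         or {'mode': 'daily', 'at_hour': 2}   (illustrative shapes)"""
    if policy["mode"] == "periodic":
        return last_poll + timedelta(minutes=policy["minutes"])
    # Daily: next occurrence of the configured hour after the last poll.
    nxt = last_poll.replace(hour=policy["at_hour"], minute=0,
                            second=0, microsecond=0)
    if nxt <= last_poll:
        nxt += timedelta(days=1)
    return nxt

t = datetime(2010, 2, 12, 10, 30)
print(next_poll({"mode": "periodic", "minutes": 15}, t))  # 2010-02-12 10:45:00
print(next_poll({"mode": "daily", "at_hour": 2}, t))      # 2010-02-13 02:00:00
```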

Since the performance history can grow very large in large networks, old
performance data must be continuously removed from the database. The user may define how
long the performance data should be stored in the database by configuring the ageing settings.
The Communication and DXX Servers will then automatically remove the old performance data
according to the ageing settings. The Performance Configuration tool can be used for configuring
the ageing settings, and it also shows the current database size of performance history tables.
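The ageing behaviour amounts to deleting history records older than the configured retention; in pseudocode terms (an in-memory stand-in for the database tables, with hypothetical names):

```python
from datetime import datetime, timedelta

def age_out(records, retention_days, now=None):
    """Drop performance history records older than the retention setting.
    records: list of (timestamp, data) tuples -- an illustrative stand-in
    for the performance history tables."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [(ts, d) for ts, d in records if ts >= cutoff]

now = datetime(2010, 2, 12)
rows = [(datetime(2010, 1, 1), "old"), (datetime(2010, 2, 10), "recent")]
print(age_out(rows, retention_days=30, now=now))  # keeps only the recent row
```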

Trend Calculation

Trend calculation is performed daily by Communication Server. The user needs to enable the trend
calculation, and configure the time of calculation. The trend calculation uses daily interval history
data as the input for the calculation. The daily interval history data contains history records collected
according to a polling policy (performance poll period) that has been configured by the user in
Network Editor. The results of the trend data calculation for each period are stored into the Tellabs
8000 manager database, and the Performance Management Overview tool can be used for reporting.

Tellabs ® 8000 Network Manager R17A 70168_04


System Description © 2010 Tellabs.

110
3 Tellabs 8000 Manager Application Packages

3.6.4 Performance Reporting

Performance reports are typically available both in tabular and graphical forms. The user can
configure the reports in different ways. It is, for example, possible to report the totals of G.821/G.826
performance counters during different time periods in calendar time (weeks, months). It is also
possible to select multiple circuits and trunks to have network level down-time reports. Printouts,
however, are only in tabular form.

Packet Counter Performance Reporting

A graphical tool is available in the Tellabs 8000 manager for viewing packet and cell based
performance statistics in Tellabs 8600, Tellabs 8800 and Tellabs 6300 network elements. The tool
can be used both for history performance reporting as well as real-time performance reporting.

Performance interval history can be viewed in the graphical Performance Management tool. The
user may define the reporting period and the performance indicators to be displayed in the window.
There are a number of performance indicators to choose from, including utilization, sent/received
octets/packets, errored packets and discarded packets for the interface or trunk interface.

It is also possible to view the collection history for the Communication Servers. The performance
indicators for the collection history are: network polling time, average response time, maximum
response time, minimum response time, successful polls and failed polls.

The interval history data can then be exported to third party tools for further analysis through the
northbound interface of Tellabs 8000 manager.

Web Reporter can be used to report performance interval history data.

Real-time performance monitoring polls the data from the network elements in real-time
according to a configured time interval, which can vary from a few seconds to several minutes.
The polled data is not stored in the database, but only shown in the graphical window, which is
updated after each data poll. The real-time performance monitoring tool is useful in troubleshooting
situations in which a real-time picture of the current interface or trunk utilization is required to
check, for instance, how many packets are transferred, how many packets have errors or how
many packets are dropped. If the data needs to be analyzed with some external tool, it can also be
exported to a text file. With the Data logging feature it is possible to log polled counter values into a
file in the Tellabs 8000 manager workstation.

The performance data shown in the Performance Monitoring tool, both in history and real-time
modes, can be exported to a text file. The data is stored in the file in comma-separated
value (CSV) format. The data in CSV format can also be imported from a text file and displayed
in the Performance Monitoring tool.
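As an illustration of the CSV round trip, the sketch below exports counter samples to comma-separated text and reads them back. The column names are assumptions made for the example; the actual column layout used by the Performance Monitoring tool may differ.

```python
import csv
import io

# Hypothetical column layout, assumed for this example only.
FIELDS = ["timestamp", "in_octets", "out_octets", "errored_packets"]

def export_csv(samples):
    """Write counter samples to CSV text, one row per polling interval."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(samples)
    return buf.getvalue()

def import_csv(text):
    """Read CSV text back into a list of row dictionaries (string values)."""
    return list(csv.DictReader(io.StringIO(text)))

samples = [
    {"timestamp": "2010-02-12T10:00", "in_octets": 1200, "out_octets": 800, "errored_packets": 0},
    {"timestamp": "2010-02-12T10:15", "in_octets": 1350, "out_octets": 910, "errored_packets": 2},
]
round_trip = import_csv(export_csv(samples))
```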

Performance Configuration is used for configuring the automatic removal of obsolete history data
from the database, and for enabling utilization fault monitoring and interval history calculation.


3.6.5 Interface Utilization Threshold Monitoring

The Performance Management Package contains a very useful feature to monitor the utilization of
the interfaces (or an associated trunk interface) supported by 8000 Performance Management. The
collected packet performance data is analyzed and the utilization of the interface is calculated. If the
utilization exceeds a user-defined threshold, then a fault is created and the user is alerted through the
fault monitoring tools. This allows the user to monitor the utilization of the network and to add
more capacity before congestion situations become critical.
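The underlying calculation can be sketched as follows: utilization over one polling interval is derived from two successive octet counter readings and compared against the configured threshold. The function names and the omission of counter wrap-around handling are simplifications for illustration.

```python
def utilization_percent(octets_prev, octets_now, interval_s, link_bps):
    """Average utilization over one polling interval, in percent.

    Derived from two successive octet counter readings; counter
    wrap-around handling is omitted for brevity.
    """
    bits = (octets_now - octets_prev) * 8
    return 100.0 * bits / (interval_s * link_bps)

def exceeds_threshold(octets_prev, octets_now, interval_s, link_bps, threshold_pct):
    """Return (utilization, alert); `alert` mimics raising a utilization fault."""
    util = utilization_percent(octets_prev, octets_now, interval_s, link_bps)
    return util, util > threshold_pct

# 900 MB in a 15-minute (900 s) interval on a 10 Mbit/s link -> 80 % utilization
util, alert = exceeds_threshold(0, 900_000_000, 900, 10_000_000, threshold_pct=70.0)
```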

3.6.6 Trend Analysis

The performance trend analysis looks at the utilization history data of performance statistics and
calculates trends based on this data. This feature helps the user to anticipate when a particular
interface or link will become full. The trend data calculation uses the linear least squares method for fitting the
measurement data into a straight line. Performance Management extrapolates the straight line to the
future to find out when the utilization exceeds the user-configured threshold. The trend data can be
calculated over different periods, e.g. one day, one week, four weeks or a custom period.
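The fit-and-extrapolate idea can be illustrated with a short sketch. The sample layout and function names are assumptions for the example; this is not the actual Tellabs 8000 implementation.

```python
def linear_trend(samples):
    """Least squares fit util = a + b * t for (t_days, util_percent) samples."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * u for t, u in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def days_until_congestion(samples, threshold_pct):
    """Extrapolate the fitted line to the day utilization crosses the threshold."""
    a, b = linear_trend(samples)
    if b <= 0:
        return None  # utilization is not growing; no congestion date
    return (threshold_pct - a) / b

# Utilization growing ~1 % per day from 40 %: the 90 % threshold is reached on day 50.
samples = [(0, 40.0), (7, 47.0), (14, 54.0), (28, 68.0)]
eta_days = days_until_congestion(samples, threshold_pct=90.0)
```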

The Performance Overview tool reports the trend data and shows the following trend and
performance data:

• Trend Delta in period [and 24h]; the utilization change during a trend calculation period [and
during a day] to find out the fastest growing interfaces.
• Congestion date In/Out; the estimated date of congestion calculated for the user-defined conges-
tion threshold. The congestion threshold can be changed. Sorting by date gives a report showing
which interfaces will congest first.
• Average Utilization In and Utilization Out during a period.
• Total amount of traffic In and Out during a period.

The utilization trend calculation is supported for interfaces for which utilization In or Out is available,
e.g. Ethernet, POS-n PPP, TE tunnel, ATM port, ATM IMA group, ATM VP/VC, FR port, FR
DLCI, HDLC/cHDLC, PPP, ML-PPP group and Tellabs 6300 ETEX VCG Eth per port/VLAN.
The trend calculation is also supported for those interfaces of Tellabs 8800 network elements which
are supported by Performance Management.


Fig. 55 Performance Management Overview Reporting Tool with Trend Data

3.6.7 Performance Overview Reporting Tool for Tellabs 8600, Tellabs 8800,
Tellabs 7100 and Tellabs 6300 Network Elements

The Performance Overview report is used for network level trend data analysis and reporting,
utilization reporting, and TDM and Abis group history reporting for the Tellabs 8600, Tellabs
8800 and Tellabs 6300 network elements. Optical/OTN reporting is supported for the Tellabs
7100 network elements. The reporting application has a wizard making it easy to generate a
report. Sorting and filtering are supported, and the report can be saved to a file for further analysis
of the history data. The Performance Reporting tool can be launched for more detailed reporting of
performance history of an interface or specified layer.

The Overview report shows interface utilization, making it possible to list the congested interfaces or trunks
in the network. Utilization is available for the trunk interfaces (Ethernet, POS-n PPP, and TE tunnel)
and interfaces (ATM port, ATM IMA group, ATM VP/VC, FR port, FR DLCI, HDLC/cHDLC,
PPP, and ML-PPP) supported by Performance Management, Tellabs 6300 ETEX VCG Ethernet and
VLAN interfaces and Tellabs 8800 interfaces supported by Performance Management.

TDM statistics reports are used for finding TDM interfaces with performance problems in
Tellabs 8600 and Tellabs 8800 network nodes. The TDM interfaces can be sorted so that the worst
interface (lowest Available Time) is at the top of the list.


For more information on the trend data reporting, refer to 3.6.6 Trend Analysis.

3.7 Recovery Package

3.7.1 Recovery Management System

Recovery Management (RMS) is a centralized restoration system for the Tellabs 8100 system
network. It is used for restoring user communication links if trunk or node faults occur in the
network. Restoring the user communication links can be done at the trunk level (trunk recovery) and
circuit level (circuit recovery). The RMS may keep a log of the incoming network faults and the
RMS actions on them. The operator may exclude some trunks or circuits from recovery actions or
may disable trunk recovery or circuit recovery by setting RMS restrictions. The restrictions can be
set for all trunks or all circuits, or they can be set explicitly for a given trunk or circuit.

Trunk Recovery

Trunk recovery restores all connections in a specific trunk when there is a trunk failure. This is done
by reserving some of the trunks in the network for backup purposes.

If a trunk fault occurs in the network, RMS tries to remedy this fault by replacing the faulty trunk
with a backup trunk between the same two nodes. If such a trunk exists, the information flow
from the primary trunk will be switched to the backup trunk. This operation is called swapping.
If the primary trunk gets repaired, the unswap command switches back the information flow from
the backup trunk to the primary one.

If a backup trunk between the two nodes cannot be found, RMS tries to back up the faulty primary
trunk by a trunk route, which consists of two or more trunks between the end nodes of the primary
trunk. This means that the information flow of the primary trunk will be redirected through other
nodes.

Since a trunk backup procedure is carried out much faster than circuit recovery in a network,
RMS tries to restore network connections first by trunk recovery. If this fails, circuit recovery is
carried out. This behavior can be altered by the user by setting RMS restrictions.

Priority bumping at the trunk level provides low-cost recovery services. With priority bumping, the
backup trunks can be used by circuits that carry low traffic class data. If a primary trunk fails, it
is recovered by rejecting the low class traffic of a backup trunk, and no recovery action is started
for the bumped-out low traffic class circuits.

Recovery is not supported in the SDH network, but SDH trunk faults are administered by RMS
and RCPR.

Circuit Recovery

Circuit recovery is used to restore individual circuits in the network and allows more granular
control over the circuits that are restored in failure conditions. Thus the capacity reserved for
backup purposes in the network can be minimized. Also, if trunk recovery fails, circuit recovery
can be used to restore the circuits in the failing trunk. The circuit types supported by circuit
recovery are: PDH 1/0 point-to-point (pp), point-to-multipoint (pmp), broadcast (bc), compressed,
swap and virtual grooming circuits.


Circuit recovery is done either by explicitly defining a backup route for the circuit or by having
the RMS calculate a spontaneous backup route for the circuit.

In addition, the capacity of primary trunks is not always fully allocated to user circuits. Normally,
all primary trunks contain free timeslots that may be used for route allocation if a circuit breaks
down. The timeslot capacity of a primary trunk is divided into three capacity pools: a primary
pool, backup normal pool and backup low pool. Primary routes and predefined backup routes are
allocated from the primary pool. The spontaneous route capacity allocation depends on the circuit
priority and on the allocation mechanism. Circuits have four possible priorities: undefined, high,
medium and low. The allocation mechanism can be specified by the user. A spontaneous route is not
created for the circuit whose priority is undefined.
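One way to picture the pool-based allocation is sketched below. The mapping from circuit priority to allowed pools is an assumption made for the example; the documented RMS allocation mechanism is user-specified and may differ.

```python
# Illustrative mapping from circuit priority to the capacity pools a
# spontaneous backup route may draw from. This mapping is an assumption
# for the example, not the documented RMS rule set.
POOLS_BY_PRIORITY = {
    "high": ["primary", "backup_normal", "backup_low"],
    "medium": ["backup_normal", "backup_low"],
    "low": ["backup_low"],
    "undefined": [],  # no spontaneous route is created
}

def allocate_timeslots(priority, needed, free_slots):
    """Reserve `needed` timeslots from the pools allowed for `priority`.

    `free_slots` maps pool name -> free timeslot count; returns the
    per-pool allocation, or None if the allowed pools cannot cover it.
    """
    allocation = {}
    remaining = needed
    for pool in POOLS_BY_PRIORITY[priority]:
        take = min(remaining, free_slots.get(pool, 0))
        if take:
            allocation[pool] = take
            remaining -= take
        if remaining == 0:
            return allocation
    return None

free = {"primary": 2, "backup_normal": 3, "backup_low": 5}
alloc = allocate_timeslots("medium", needed=4, free_slots=free)
# a medium priority circuit draws from the backup pools only
```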

If a trunk fault cannot be recovered by either trunk or circuit recovery, RMS waits for a while and
retries circuit recovery for the broken circuits until recovery succeeds. The delay and behavior
between the periodic retries can be configured in order to allow a scalable solution for networks
with a large number of circuits.
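The periodic retry behaviour can be sketched as a loop with a configurable delay between rounds. The function and parameter names are illustrative; RMS itself keeps retrying until recovery succeeds, whereas this sketch bounds the rounds so it always terminates.

```python
def retry_circuit_recovery(broken, try_recover, max_rounds, delay_s, sleep=lambda s: None):
    """Retry circuit recovery for broken circuits, pausing between rounds.

    `try_recover(circuit)` returns True on success. `max_rounds` bounds
    this sketch; `sleep` is injectable to avoid real waiting in tests.
    """
    pending = list(broken)
    for _ in range(max_rounds):
        pending = [c for c in pending if not try_recover(c)]
        if not pending:
            return []
        sleep(delay_s)  # configurable delay between periodic retries
    return pending  # circuits still broken when the rounds ran out

# Example: circuit "c2" only recovers on the second retry round.
attempts = {}
def try_recover(circuit):
    attempts[circuit] = attempts.get(circuit, 0) + 1
    return circuit != "c2" or attempts[circuit] >= 2

still_broken = retry_circuit_recovery(["c1", "c2"], try_recover, max_rounds=5, delay_s=60)
```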

Recovery Control Program

The Recovery Control Program (RCPR) serves as the user interface to RMS. Using RCPR, the
system operator can control all the RMS actions performed in the Tellabs 8100 system network:
for example, controlling the logging of RMS actions, changing RMS settings, setting timeslot
allocation rules, refreshing the fault state of a trunk and so on.

Server Redundancy

Recovery Management System uses a single centralized Recovery Server that provides the recovery
functionality. However, if increased reliability is needed, it is possible to install two Recovery
Servers. In case of an unrecoverable failure of the active Recovery Server, the other (standby)
Recovery Server takes over the recovery support. This feature can also be used when updating
hardware or software components in a Recovery Server.

It is also possible to switch servers at any time. Server switching operations can be initiated from
Recovery Control Program if you have the required privileges. The Recovery Server state can also
be viewed from the Recovery Control Program.

It is also possible to have the active server poll the standby server periodically. If there is no standby
server or it is not responding, a notification message appears in the Recovery Control Program.

3.8 Service Viewing Package

The Service Viewing package can be used to restrict the network views of the Tellabs 8000 manager
tools. This makes the management of the network easier.

3.8.1 Service and Circuit Component View

It is possible to select views that are based on circuits and customers. In these three views, Circuit
Components View, Customer Components View and VLAN Components View, only those nodes,
trunks etc. that are part of a selected circuit, belong to a given customer, or are in a particular
VLAN domain are shown on screen.


Circuit Components View

Fig. 56 Circuit Components View

The Circuit Components View shows the components, i.e. nodes and trunks, which are used for a
selected circuit in selected regions. Also, components of several circuits can be selected in the view.
Target circuits are defined in a dedicated dialog, depending on their attached regions and the nature
of the circuit traffic (region-internal, region-terminating or transit traffic).

Note that not all the trunks and nodes of circuits are necessarily shown in the view. The circuit
components in those regions which are not selected into the view are dropped from the display.

Using the Circuit Components View requires a license for component views. The view can be
activated/deactivated from the Backbone Overview mode or from the Regional Network View.


Customer Components View

Fig. 57 Customer Components View

The Customer Components View shows the components (nodes and trunks) which are used for
circuits of a selected customer or several customers in selected regions. In principle, this view is the
same as the Circuit Components View; instead of defining a set of circuits the operator selects a
customer whose circuits should be included. Also, the nodes which do not yet have circuits for the
customer but which are dedicated to the customer are included in the view.

Using the Customer Components View requires a license for component views. The view can be
activated/deactivated from the Backbone Overview mode or from the Regional Network View.


VLAN Components View

Fig. 58 VLAN Components View

The VLAN components view shows the components (nodes, NTUs and trunks) which are used for
the selected VLAN domains.

Using the VLAN components view requires a license for component views. The view can be
activated/deactivated from the All Network mode or from the Regional Network view.

3.9 Planning Package

The Planning Package is supported for Tellabs 8100/6300 network elements only. It contains two
planning tools: Fault Simulator and Network Capacity Calculator. In addition, the user
may also plan the network in a simulation database with the normal Tellabs 8000 manager tools
without having the actual hardware in place.

3.9.1 Fault Simulator

If a fault occurs in the Tellabs 8100 system network, Recovery Management (RMS) will try to
remedy it by using the backup trunks or the free time slots in the primary trunks. The network
operator can also issue recovery commands through the Recovery Control Program (RCPR), which
is an operator interface to RMS.


No matter how the recovery works, the bottom line issue is whether there are sufficient backup
trunks in the network or primary trunks which contain enough free timeslots.

Therefore the question of how to arrange primary and backup trunks sufficiently and economically
is an important issue to consider when a network is planned and when the performance of an
existing network is examined. Is it possible to test network recovery without collecting real-time
data?

Fault Simulator (FSMR) is designed to serve this purpose. It simulates the faults which may occur
in the real network and allows you to observe how RMS reacts to faults. This tool is therefore very
useful in finding potential shortages in recovery designs of the planned or existing network, such as
a lack of redundancy or inefficient parameter setups. It is possible to run Fault Simulator only in a
simulation database. This guarantees that Fault Simulator never disturbs normal network operation.

Fault Simulator generates two types of faults, i.e. trunk faults and node faults. It saves the faults in
the database and sends them immediately to RMS through a named pipe.

Fault Simulator is easy to use. Faults are introduced and removed by double-clicking on a node or a
trunk with the mouse or via trunk and node lists in the corresponding dialog boxes. In addition,
command scripts for fault generation can be recorded by Macro Manager (see chapter 3.14.1 Macro
Manager).

3.9.2 Network Capacity Calculator

In the Tellabs 8100 system network, digital signals are carried by trunks and circuit cross-connections
are accomplished by nodes. The maximum amount of information flow through a trunk is called
total trunk capacity, and the maximum cross-connect capability is called total node capacity.
When user circuits are connected in the network, they take both trunk capacity and node capacity.
What has been taken into use is called used capacity, while what has not been taken into use is
called free capacity.

Reporting

Network Capacity Calculator (NWCC) reports the network capacity, including total, free and used
capacity of trunks and nodes when circuit connections are made according to the customer’s
demand. It is capable of calculating traffic between different nodes or locations and traffic internal
to, ending to or going through a location. It can be used when planning a network or evaluating an
existing one. This tool helps the network operator in discovering shortages or redundancy of capacity
in the planned or existing network. Fig. 59 shows a snapshot of NWCC in use.


Fig. 59 Snapshot of Network Capacity Calculator (NWCC)

Planning

It is possible to use NWCC in network planning. New nodes, trunks and circuit connections can be
created. First, it provides, as does Network Editor, a set of basic objects such as nodes and trunks to
create a network. In addition to the nodes and trunks referring to a specific item of Tellabs 8100
system network hardware, NWCC introduces ideal nodes and ideal trunks into the network. An
ideal node has unlimited cross-connect capacity, and an ideal trunk has infinite transport capacity.
It is also possible to preset a real maximum capacity for a node type in the initialization file, and
NWCC can use the preset value in calculations instead of infinite values. Given the endpoints and
the capacity requirement, it builds up circuit connections, as does Router, automatically or manually.
In the automatic routing mode, each trunk is given a cost and the program finds the shortest path
between the endpoints by minimizing the sum of the trunk costs. The user can select one of the
various cost types available before starting automatic routing.
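Minimizing the sum of trunk costs is a classic shortest-path problem. The sketch below uses Dijkstra's algorithm over an assumed trunk list; it illustrates the principle only and is not the NWCC implementation.

```python
import heapq

def cheapest_route(trunks, src, dst):
    """Find the route minimizing the sum of trunk costs (Dijkstra).

    `trunks` is a list of (node_a, node_b, cost) tuples; trunks are
    assumed bidirectional here for illustration.
    """
    graph = {}
    for a, b, cost in trunks:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None  # no route between the endpoints

trunks = [("A", "B", 3), ("B", "C", 4), ("A", "C", 10), ("C", "D", 1), ("B", "D", 6)]
best = cheapest_route(trunks, "A", "D")
```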

When used for studying the existing network, NWCC first reads the network data from the
database. It displays the network, calculates the capacity, shows the statistics, and prints the results.
The user can edit the network by adding or removing objects and make new circuit connections.
However, the changes made by using NWCC do not affect the data in the database. NWCC saves
the data in a file instead of the database.


3.10 8100 Service Computer

The Service Computer (SC) is supported for Tellabs 8100 nodes. It is used to monitor and control
the operation of a single node. Node Manager is the only Tellabs 8000 manager user interface
component which can be used in the Service Computer; it provides some facilities of Fault
Management and Performance Management. The Service Computer has no database support.

3.11 Private Subnetwork

The Private Subnetwork feature was previously known as VPN in the Tellabs 8100 manager. This
feature has been renamed to avoid confusion with IP VPNs, VLAN VPNs etc. in the Tellabs
8000 manager. The Private Subnetwork feature is supported for Tellabs 8100 and Tellabs 6300
network elements. It can be used to dedicate portions of the network to a specific user group; this
logical subnetwork can then be managed by the selected user group as a subnetwork separate from
the operator’s real network.

3.11.1 Private Subnetwork Trunk Accounting

Private Subnetwork Trunk Accounting displays a report of the trunks allocated for Private
Subnetwork use. The report start and end dates can be defined for required periods of time, and
the reports are then printed.

3.11.2 Management Server Running Private Subnetwork Service

A Management Server with Private Subnetwork service is a gateway between the Private
Subnetwork workstation and all the other components. It executes all commands on behalf of
the Private Subnetwork workstations and notifies the changes in the private subnetworks to the
corresponding Private Subnetwork workstations. The Private Subnetwork Server filters commands
which are not allowed for the Private Subnetwork user and network elements which do not belong to
a Private Subnetwork. Several Private Subnetwork Servers may be used to improve the performance
when servicing a large number of Private Subnetwork workstations. The Private Subnetwork
workstation has no direct access to the Real Network Operator (RNO) central database or hardware,
the access is only through the Private Subnetwork Server.

The PS Server consists of two service layers, Private Subnetwork Service Layer (VSL) and Object
Server Layer (OSL). VSL calls OSR-API functions to gain access to OSL. VSL contains the login
procedures for Private Subnetwork workstations and keeps track of which PS workstations and
Private Subnetwork operators are logged into the Private Subnetwork Server.

VSL also has a Private Subnetwork Change Event Handler process which processes change events
received from OSL. VSL checks which Private Subnetworks are changed by the events and sends
change notifications and change data to the relevant Private Subnetwork workstations.


The main task of VSL is to receive network management commands from Private Subnetwork
workstations, then change the commands to restrict the command context only to objects in the
current PS, and pass the commands on to OSL. OSL processes these commands and then sends
answers back to VSL, which in turn passes filtered answers on to Private Subnetwork workstations.
VSL also checks the Private Subnetwork capabilities to carry out different network management
tasks. If capabilities are insufficient to execute the requested service, Private Subnetwork Server
does not pass the command to OSL; instead, it sends an error code to the Private Subnetwork
support layer of the Private Subnetwork workstation.

OSL consists of the same dedicated servers as the Standard Workstations Object Server:

• Network object server
• Node object server
• Fault object server
• Circuit object server
• Test object server
• Performance object server
• Route object server
The network object server provides the management functions to manage network level components
(nodes, trunks, NTUs, customers, operators).

The node object server provides the functions to manage node level objects. Node level tools
use node object server functions to retrieve node data from the database and to gain access to the
actual hardware components in the network.

The fault object server is used to retrieve fault management data from the database. Fault data
is written to the database by the DXX Server.

The circuit object server is used to manage circuit object data (to read, add, update or delete circuit
data).

The test object server is used by the Circuit Loop Test tool to carry out circuit loop test tasks and to
retrieve test data from the database.

The performance object server is used by the Performance Management tool to collect and view
performance data.

Route object server provides functions to carry out routing tasks.

3.11.3 Private Subnetwork Faults

Fault Management System is considered to be a tool mainly for the RNO operators. Fault
acknowledgement, configuration (filtering), history deletion, trouble ticket facility and the
invocation of the Consistency Check dialog are not allowed in a Private Subnetwork workstation.
The corresponding menu options/buttons in the FMS windows are unavailable for a Private
Subnetwork user. However, the fault status of the Private Subnetwork network is also shown at
the Private Subnetwork workstation. All the basic FMS fault monitoring windows are available
as far as the dedicated or shared network objects are concerned. The Private Subnetwork operator
can also retrieve fault reports for those objects.


3.12 Partitioned Package

The purpose of network partitioning is to allow large networks to be managed by Tellabs 8000
manager regionally and still provide a centralized view of the entire network. Large networks
consist of smaller regional subnetworks and a backbone network connecting the subnetworks
together. Network division into regions is performed on a geographical basis. The main interest
of regional operators is to manage their part of the network with less attention being paid to other
parts. More or less, these operators want to view their regions as separate networks, which can be
managed efficiently no matter how large and complex the overall network structure is. The regions
are also protected against unauthorized usage by other operators. Special access rights are needed in
order to operate in several regions e.g. when building up the inter-region connections.

Partitioning improves the structural organization of the network and hence the management
procedures. This facilitates the network building and helps the operators to understand the overall
structure. Partitioning also guides the operators to build up the network in a more comprehensible
way; nationwide meshed networks can be avoided.

Users can focus on particular parts of the network; they can create different network views by
selecting the regions and hierarchy levels they wish to display on the screen. The views can be
selected in the Backbone Network Overview (see chapter 3.12.2 Backbone Overview of Network)
of the Network Editor, Router, Fault Management and Performance Management tools. The
selected view also restricts the sphere of operation of these network level tools (refer to the chapters
describing these tools).

In addition to views based on geographical region and hierarchy levels of the network elements, it is
possible to select views that are based on customers and circuits. In these three views, Customer
Components View, VLAN Components View and Circuit Components View, only those nodes,
trunks etc. that belong to a given customer, are part of a selected circuit, or are in a particular
VLAN domain are shown on screen.

The relationships between the different choices of viewing the partitioned network are described
in the following chapters.

3.12.1 Hierarchy Levels for Partitioning

A large flat network is divided into regions on a geographical basis. Each node is attached to
a single region. In addition to this, different hierarchy levels are assigned to network trunks and
nodes. The hierarchy levels of nodes and trunks can be used in dividing the network into the access
and backbone parts (access L1, backbone/access L2 and backbone L3).

A single geographical region thus contains both the access and backbone level components. A
region can also contain only backbone level components. This model supports the assumption that a
regional operator is often responsible for installing, monitoring and maintaining all the network
elements in a region, including the backbone network components.

The hierarchy levels are originally based on three categories of traffic:

• region internal traffic
• traffic terminating at a region
• transit traffic


These traffic categories are used in defining the hierarchy levels for network trunks. The level of a
trunk thus indicates the nature of the traffic it carries or is intended to carry. The nodes are
also assigned similar hierarchy levels. These, in turn, are determined according to the hierarchy
levels of the connected trunks.

Router and Recovery Management obey the hierarchy levels when routing circuits or recovering
circuits or trunks if the Partitioning Traffic Rules are enabled. In addition, these tools obey the
regional boundaries via the so-called Partitioning Network Rules. In Recovery Management the
operator can also disable the Partitioning Network Rules for a specific trunk or circuit.

Fig. 60 shows an example of a partitioned network. Note that the backbone network is formed by
trunks at the level L3 and nodes belonging to the categories L2 or L3. The access network consists
of trunks and nodes of the categories L1 and L2. The figure is purely logical and does not present
any geographical relationships between the regions.

Fig. 60 Network Partitioning Hierarchy Levels

3.12.2 Backbone Overview of Network

Selections of view preferences can be made in the Backbone Overview mode. The operator can
select (mark) one or more regions. For each region the operator can select to view either backbone
components, access level components or both. This selection of the level of interest can be different
for each region.


Fig. 61 Backbone Network Overview

From this Backbone Network Overview mode the operator can open either a Regional Network View
(chapter 3.12.4 Regional Network View), a Backbone Network View (chapter 3.12.3 Backbone
Network View) or an All Network View. The region and level of interest selections only affect
the Regional Network View and must be made before activating it. The
Backbone Network View displays the backbone level components
independently from the selection of the level of interest. The All Network View shows all the nodes
and trunks as if the network was not partitioned.

The operator can restrict the Regional Network View to consist of only such network components
which belong to a given customer, or are part of a selected circuit, by activating the Customer
Components View (chapter Customer Components View) or Circuit Components View (chapter
Circuit Components View) or VLAN Components View (chapter VLAN Components View).
These component views can also be activated/deactivated in the Regional Network View. A separate
license is needed to activate these component views.

An operator with permissions limited to a single region can bypass the Backbone Overview mode
and go directly to the Regional Network View when opening a network level tool (see chapter
3.12.4 Regional Network View).


3.12.3 Backbone Network View

Fig. 62 Backbone Network View

The Backbone Network View shows the backbone level (L3) trunks and the corresponding end
nodes, i.e. the backbone network. If the operator has no backbone level access to a region, the
corresponding trunks and nodes are dropped from the view. Real management operations can be
performed in this view, e.g. the operator can zoom into the node with Node Manager, or activate
fault monitoring for the components shown.


3.12.4 Regional Network View

Fig. 63 Regional Network View

The Regional Network View shows the access level and/or backbone level nodes and trunks of
selected regions. A different level of interest can be applied to each region; the selection of the
regions to be viewed is performed in the Backbone Overview mode.

Each node must have a region and a hierarchy level in order to be shown in this view. The nodes and
trunks which are created in a VPN workstation are not shown in this view before an RNO has added
the required information to them. RNO can access such nodes and trunks in the non-partitioned All
Network View, which can be activated from the Backbone Network Overview mode.

A trunk which leaves a region for another region not shown in the view is represented by an arrow
symbol. Fig. 63 contains three such symbols for the backbone level (L3) trunk bundles.

Activating/deactivating the Customer Components View (chapter Customer Components View),
Circuit Components View (chapter Circuit Components View) or VLAN Components View (chapter
VLAN Components View) is also possible in this Regional Network View.


3.12.5 Fault Management in Partitioned Network

In a partitioned network, different views are offered for operators in the network level tools to assist
in focusing only on a relevant set of network elements at a time. In Fault Management System only
the faults originating from the components currently shown in the view are reported. Even if a
node is visible in the view, a fault originating from an interface or a unit that is of no interest
for the current view (backbone/access aspects) is filtered out. The rules for the faults in
different views are explained in the following chapters.

Backbone Overview

No faults are reported in this view.

Backbone Network View

The following faults are included:

• All faults from visible L3 trunks and L3 nodes.
• All faults from SCU, SXU etc. common units of visible nodes.

Regional Network View

The visible faults within each region depend on the level of interest defined (access/backbone/all)
for a region:

1. Level of interest is backbone for region BB1. The following faults are visible:
• Faults originated from L3 nodes in region BB1.
• Faults originated from common units (SCU, SXU etc.) of L2 nodes in region BB1.
• Faults originated from L3 trunk interfaces from L2 nodes in region BB1.
2. Level of interest is access for region ACC1. The following faults are visible:
• Faults originated from L1 nodes in region ACC1.
• Faults originated from L2 nodes, with the exception of L3 trunk faults.
3. Level of interest is all for region ALL1. The following faults are visible:
• All faults originated from all nodes in region ALL1.
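The three per-region rules above can be condensed into a single visibility filter. The Python sketch below is illustrative only; the node levels, unit names and the L3-trunk flag are simplified stand-ins for the Fault Management System's real data model.

```python
COMMON_UNITS = {"SCU", "SXU"}  # common units named in the text

def fault_visible(level_of_interest, node_level, unit, is_l3_trunk_fault):
    """Apply the fault-visibility rules for a single region (sketch)."""
    if level_of_interest == "all":
        return True  # rule 3: every fault from every node
    if level_of_interest == "backbone":
        if node_level == 3:
            return True  # rule 1: all faults from L3 nodes
        # From L2 nodes, only common units and L3 trunk interfaces are shown.
        return node_level == 2 and (unit in COMMON_UNITS or is_l3_trunk_fault)
    if level_of_interest == "access":
        if node_level == 1:
            return True  # rule 2: all faults from L1 nodes
        # From L2 nodes, everything except L3 trunk faults is shown.
        return node_level == 2 and not is_l3_trunk_fault
    return False

print(fault_visible("backbone", 2, "SXU", False))  # True
print(fault_visible("access", 2, "IFC", True))     # False
```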

Customer Components Network View

• Only those components and faults that are affected by the selected customers are displayed.
• Fault history reports cannot be retrieved in this view.
• Network change messages for the component view are not visible, but the main view will process
them.


Circuit Components Network View

• Only those components and faults that are affected by the selected circuits are displayed.
• Fault history reports cannot be retrieved in this view.
• Network change messages for component view are not visible, but the main view will process
them.

L3 Trunk Monitoring

A trunk which leaves a region to another region not shown in the view is represented by an exit
arrow symbol. In Fault Management System these bundles are also under fault monitoring. Two
different operational modes can be defined for the monitoring:

• full L3 trunk monitoring
• partial L3 trunk monitoring

Full monitoring includes faults from both end interfaces of a trunk. Partial monitoring ignores
L3 trunk faults if the faulty interface is not within the selected regions.
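The two modes reduce to a simple predicate; the region names and function below are invented for illustration and are not part of the product's interface.

```python
def trunk_fault_reported(mode, faulty_end_region, selected_regions):
    """Full mode reports faults from both trunk ends; partial mode reports a
    fault only if the faulty interface lies in a selected region (sketch)."""
    if mode == "full":
        return True
    if mode == "partial":
        return faulty_end_region in selected_regions
    raise ValueError("unknown monitoring mode: " + mode)

selected = {"BB1", "ACC1"}
print(trunk_fault_reported("full", "BB2", selected))     # True
print(trunk_fault_reported("partial", "BB2", selected))  # False
```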

3.13 Unit Software Management Package

The Unit Software Management tools are used for managing the element software (ESW) files of
Tellabs 8100 and Tellabs 8600 nodes. Two different tools are used: the Unit Software Management
tool for Tellabs 8100 nodes and the 8600 ESW Management tool for Tellabs 8600 nodes.

For Tellabs 8100 system nodes, the Unit Software Management tool provides features to get
reports on software and hardware versions of units of a selected type in a selected set of nodes. The
user can automatically download new software into several units selected anywhere in the Tellabs
8100 system network. Downloading new software into different units can be done in one session,
and the events of download progress can be saved into a file for further review. The unit and file
information in the main window and in the download dialog can be saved in a file or printed on a
printer. The software version, serial number of units and the date of downloading are also updated
into the database.

To prevent errors in unit software download, the tool also checks for inconsistent software: the
name of the program is compared to the unit hardware type, and the software version numbers
are checked.

The time required for downloading can be reduced by opening several download sessions
simultaneously. You can start several download tools for different areas of the network in one or
more computers. In one computer, the recommended maximum number of tools is four, which then
also requires four unique file names for download events to be saved. Unit Software Compatibility
Checker checks the compatibility between the GMU base unit software and other unit software
in a node.


For Tellabs 8600 nodes, Tellabs 8000 manager provides enhanced features for updating the
element software in the network elements. The 8600 Element Software Management tool is used
for managing ESW files in the database, scanning ESW information from the network elements and
creating and performing download plans. Storing the ESW files in the database instead of a file
server makes it easier to maintain the software files used in the network elements. ESW inventory
reports can be created by scanning the information from the network elements. Reports can be saved
as export files which can be imported and processed by other tools and systems.

Updating the software in the network elements is based on download plans. Multiple plans can be
stored in the system. When the download plan is ready to be performed, the plan is activated and the
corresponding download task can be monitored in the tool. It is possible to schedule the download
task to be run later at the time defined in the plan. In order to speed up the download process,
the individual download operations are performed in parallel. Download plans that have been
performed, updated with operation-specific status information during the download process, are
stored in the database as a record of the software updates in the network elements.

A download plan comprises generic download properties and a list of download operations. In
download properties, it is possible to define that the new software is transferred and activated in the
network elements fully automatically, or that the files are transferred automatically, but the user can
activate them manually with other Tellabs 8000 manager tools. Each download operation defines
the target node and the software package or software file to be downloaded. The software package
includes software images for all unit types in the network element. Alternatively, the download
operation may define a unit and the corresponding software file. New download plans can be created
from scratch or by using an existing plan as a template.
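The plan structure described above can be modeled roughly as follows; all field names are illustrative assumptions, not the tool's real schema. Each operation targets either a whole package (all unit types) or a single unit and file, and the plan-level flag decides between fully automatic activation and transfer-only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DownloadOperation:
    target_node: str
    package: Optional[str] = None   # software package for all unit types, or
    unit: Optional[str] = None      # a single unit plus its software file
    sw_file: Optional[str] = None
    status: str = "pending"         # updated while the plan is performed

@dataclass
class DownloadPlan:
    name: str
    auto_activate: bool             # transfer and activate automatically, or
                                    # transfer only and activate manually
    scheduled_time: Optional[str] = None
    operations: List[DownloadOperation] = field(default_factory=list)

plan = DownloadPlan("R17A rollout", auto_activate=False)
plan.operations.append(DownloadOperation("node-1", package="esw-8600-r17a"))
print(plan.operations[0].status)  # pending
```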

Fig. 64 8600 Element Software Management Dialog


3.14 Macro Package

3.14.1 Macro Manager

Macro Manager is an additional tool for managing the network by special macro commands. The
aim is to speed up the network management processes in the case of a large network and a huge
number of network objects (nodes, trunks, circuits, etc.). The interactive graphical interface is
replaced by macro commands. Additionally, you can create your own Graphical User Interface
for your macros using the Application Development Toolkit, see chapter 3.14.2 Application
Development Toolkit. You can reuse the sample macros delivered with the program or record
your own macros. The macro language supports high-level constructs such as variables,
conditional statements, loops and submacro calls. It can also handle database change messages and
supports file operations, threading, scheduling, polling and printing. These features allow you to write
complex programs in Macro language to automate sequences of Tellabs 8000 manager operations.
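The Macro language syntax itself is proprietary and not reproduced here; the following Python sketch only mirrors the kind of control flow the text describes (a loop over network objects, a conditional, and a submacro-style call), with every name invented for illustration.

```python
def configure_interface(node, ifc):
    """Stands in for a submacro that configures one interface."""
    return f"{node}/{ifc} configured"

def run_bulk_macro(nodes, ifcs_per_node):
    """Loop over network objects, skipping one interface conditionally."""
    log = []
    for node in nodes:               # loop over a large set of nodes
        for ifc in range(ifcs_per_node):
            if ifc == 0:
                continue             # conditional: skip a reserved port
            log.append(configure_interface(node, ifc))
    return log

result = run_bulk_macro(["nodeA", "nodeB"], 3)
print(len(result))  # 4 operations performed
```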

Node Manager macros are used for setting different kinds of hardware parameters. Most of them
also update the database. Network Editor macros are used to build and edit the network in the
database. Router macros are used for managing circuits. Fault Simulator macros are used for adding
and deleting node and trunk faults in the database. Note that simulator macros can only be run in
a simulation environment. Topology macros are used for node inventory and trunk provisioning.
Service provisioning macros are used for IP VPN and pseudowire provisioning. Discovery macros
are used for discovering nodes, IP VPNs and pseudowires.

Add-on macros are macros that can be launched directly from the Tellabs 8000 GUI. An add-on
macro can be associated with specific NE objects (for example, interfaces of a specific type) and
then launched for those objects via pop-up menus in Node Manager.

Refer to the Macro Manager online help for details.

3.14.2 Application Development Toolkit

The Tellabs Application Development Toolkit is a platform for the graphical user interface (GUI)
and network level programming in the Tellabs 8100 system. It is a high-level programming
language with a class library that gives you a network level object model of the Tellabs 8100 system
network. It includes the Tellabs 8100 system network, visual (GUI) and parameter objects. With
Application Development Toolkit you can create your own windows and dialogs to view and
configure parameters of the Tellabs 8100 system network. With Dialog Editor you can design your
own dialogs. Application Development Toolkit supports standard GUI application features, window
and dialog controls like menus, buttons, edit fields and list views with columns.

Application Development Toolkit requires a separate license. It uses the services provided by Macro
Manager and thus requires that the Macro Manager license is active in Tellabs 8000 manager.

The functionality provided by Application Development Toolkit and macros provided by Macro
Manager work seamlessly together. The window macro applications developed with Application
Development Toolkit can be run separately from Macro Manager.


3.15 Web Reporter

Web Reporter Server runs on the Microsoft Windows Server operating system with Microsoft
Internet Information Server installed. The Web Reporter tool is used to access information in the
Tellabs 8000 system network through the intranet or the Internet.

Web Reporter enables fast distribution of information without the need to consult the personnel
operating the network. For example, customer service staff and sales personnel can view circuit
capacity, availability, free node interfaces or faults when a customer calls. Field service personnel
can see where a fault is located and plan repairs accordingly; they can also receive node information
when installing new hardware and services. Management can monitor the quality and observe
the size and contents of the network. Also the network operators benefit from the clear and
comprehensive reports. End customers may be interested in monitoring the availability of their
circuits.

The system contains a number of predefined reports that are available to the user after a successful
login. The output format is HTML, ASCII or Excel.

3.16 Off-Line Database Kit for Reporting

The off-line database kit is meant for reporting and statistical calculation purposes. It is a separate
database and software system which helps reduce the load on the online database system.
The off-line kit provides a software platform for moving specific NMS data from the online Tellabs
8000 manager database to the off-line database(s). Thus heavy data processing is transferred from
the online database server to the off-line database server.

The off-line kit software takes care of data replication from the online database to the off-line
database(s). Data can be processed further in the off-line database server by business logic of
different tools. These tools can be Tellabs 8000 manager tools or third party tools.

Fault Management/archiving will use the history database system by replicating Fault Management
tables from the online database to the off-line database.
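Conceptually, the replication step copies selected tables one way, from the online database to the off-line one, so that heavy reporting queries never touch the online server. The sketch below models this with in-memory dicts for illustration; the real off-line kit naturally replicates at the database level.

```python
def replicate(online_tables, offline_tables, table_names):
    """One-way copy of the named tables to the off-line side (sketch)."""
    for name in table_names:
        # Copy rows so off-line processing cannot disturb the online data.
        offline_tables[name] = list(online_tables.get(name, []))
    return offline_tables

online = {"fault_history": [{"id": 1, "sev": "major"}]}
offline = replicate(online, {}, ["fault_history"])
print(offline["fault_history"][0]["sev"])  # major
```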

