Release 1.2
Revision A
UTStarcom Inc.
www.utstarcom.com
Copyright
© 2004 UTStarcom Inc. All rights reserved.
This Manual is the property of UTStarcom Inc. and is confidential. No part of this Manual may be reproduced for any purposes or
transmitted in any form to any third party without the express written consent of UTStarcom.
UTStarcom makes no warranties or representations, expressed or implied, of any kind relative to the information or any portion
thereof contained in this Manual or its adaptation or use, and assumes no responsibility or liability of any kind, including, but not
limited to, indirect, special, consequential or incidental damages, (1) for any errors or inaccuracies contained in the information or (2)
arising from the adaptation or use of the information or any portion thereof including any application of software referenced or utilized
in the Manual. The information in this Manual is subject to change without notice.
Trademarks
UTStarcom® is a trademark of UTStarcom Inc.
GoAhead is a trademark of GoAhead Software, Inc.
All other trademarks in this Manual are the property of their respective owners.
DOC Class A
This digital apparatus does not exceed the Class A limits for radio noise emissions from digital apparatus as set out in the
interference-causing equipment standard titled “Digital Apparatus,” ICES-003 of the Department of Communications.
Cet appareil numérique respecte les limites de bruits radioélectriques applicables aux appareils numériques de Classe A prescrites
dans la norme sur le matériel brouilleur: "Appareils Numériques," NMB-003 édictée par le Ministère des Communications.
Warning
This is a class A product. In a domestic environment this product may cause radio interference in which case the user may be
required to take adequate measures.
FDA
This product complies with the DHHS Rules 21 CFR Subchapter J, Section 1040.10, Applicable at date of manufacture.
Contents
Chapter 1 - Introduction
Digital Optical Network Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
UTStarcom TN780 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
UTStarcom Optical Line Amplifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
IQ Networking Operating System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
MPower Network Management Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
UTStarcom MPower Graphical Node Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
UTStarcom MPower Element Management System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Release 1.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Appendix C - Acronyms
Figure 3-18 Hardware Physical Configuration of a 400Gbps Digital Add/Drop Node . . . . . . . . . . . . . . . . . . .3-46
Figure 3-19 Hardware Logical Configuration of a 400Gbps Digital Add/Drop Node . . . . . . . . . . . . . . . . . . . .3-47
Figure 3-20 Hardware Physical Configuration of a 200Gbps Digital Add/Drop Node . . . . . . . . . . . . . . . . . . .3-48
Figure 3-21 Hardware Physical Configuration of a 200Gbps Digital Repeater Node . . . . . . . . . . . . . . . . . . .3-49
Figure 3-22 Hardware Logical Configuration of a 200Gbps Digital Repeater Node . . . . . . . . . . . . . . . . . . . .3-50
Figure 3-23 Hardware Physical Configuration of an Optical Line Amplifier Node . . . . . . . . . . . . . . . . . . . . . .3-51
Figure 4-1 Alarm reporting behavior during ARC period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-9
Figure 4-2 Loopbacks supported by the TN780 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-12
Figure 4-3 PRBS Tests Supported by the TN780 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-13
Figure 4-4 Trace Messaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-14
Figure 4-5 Managed Object Entities and Hierarchy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-16
Figure 4-6 Express Cross-connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-24
Figure 4-7 Add/Drop Cross-connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-25
Figure 4-8 Hairpin Cross-connects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-26
Figure 4-9 TribY-cable Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-30
Figure 4-10 Physical Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-48
Figure 4-11 Single Network with Topology Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-49
Figure 4-12 Service Provisioning Topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-49
Figure 4-13 Illustration of Using Node Inclusion Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-50
Figure 4-14 Redundant DCN Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-54
Figure 4-15 DCN Link Failure Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-55
Figure 4-16 MCM/OMM Failure Recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-56
Figure 4-17 Management Application Proxy Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-57
Figure 4-18 Using Static Routing to Reach External Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-59
Figure 4-19 NTP Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-60
Figure 5-1 Digital Optical Network and UTStarcom MPower Management Solution . . . . . . . . . . . . . . . . . . .5-1
Figure 5-2 MPower GNM Main View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-5
Figure 5-3 Multi-window display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-6
Figure 5-4 MCM Redundancy Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-7
Figure 5-5 10G Clear Channel Service Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-8
Figure 5-6 Protection Group Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-9
Figure 5-7 NCT ports on MPower GNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-10
Figure 5-8 MPower EMS Administrative Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-16
Figure 5-9 Network Information File Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-19
Figure 5-10 Add Administrative Domain Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-20
Figure 5-11 Network Topology Map View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-23
Figure 5-12 Junction Site Topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-24
Figure 5-13 Circuit Layout Record. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-29
Figure 5-14 Cross-Connect Circuit Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-30
Figure 5-15 MPower EMS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-35
Objective
This guide provides an introduction and reference to the Digital Optical Networking Systems, which include the
UTStarcom® TN780 (referred to as the TN780) and the UTStarcom Optical Line Amplifier (referred to as
the Optical Line Amplifier), used to build a Digital Optical Network®. This guide also covers the UTStarcom IQ
Network Operating System (referred to as IQ), which operates the TN780 and Optical Line Amplifier network
elements, and the UTStarcom MPower Management Suite (referred to as MPower), which is provided to manage
UTStarcom products.
Audience
The primary audience for this guide includes network planners, network operations personnel and system
administrators who are responsible for deploying and administering the Digital Optical Network. This guide
assumes that the reader is familiar with the following topics and products:
Basic internetworking terminology and concepts
Document Organization
The following table lists the chapters covered in this manual and describes each one.
Related Documents
The reader can refer to the following documents to further understand the content of this manual.
UTStarcom TN780 Hardware Description (Order Number TN780-HWG-1.2-A): Provides the detailed description of the Digital Optical Network hardware modules. This manual covers the functional diagrams, status indicators and technical specifications for each module.

UTStarcom TN780 Maintenance and Troubleshooting Guide (Order Number DNT-MTG-1.2-A): Provides the routine maintenance and alarm troubleshooting procedures for UTStarcom TN780 and Optical Line Amplifier network elements. This guide includes the routine hardware and software maintenance procedures and various troubleshooting tools. A comprehensive list of alarms and events, and alarm clearing procedures, are also included.

UTStarcom TL1 User Guide (Order Number TN780-TL1G-1.2-A): Describes the TL1 interface supported by UTStarcom TN780 and Optical Line Amplifier network elements. This guide includes the description of the supported TL1 commands and the procedures for the commonly performed OAM&P functions.

UTStarcom MPower GNM User Guide (Order Number MP-GNUG-1.2-A): Describes the UTStarcom MPower GNM interface used to manage UTStarcom TN780 and Optical Line Amplifier network elements. This guide also includes the procedures for the commonly performed OAM&P functions.

UTStarcom MPower EMS User Guide (Order Number MP-EUG-1.2-A): Describes the UTStarcom MPower EMS interface used to manage the Digital Optical Network comprised of TN780 and Optical Line Amplifier network elements. This guide also includes the procedures for the commonly performed OAM&P functions.

UTStarcom MPower EMS Administrator Guide (Order Number MP-EAG-1.2-A): Describes the MPower EMS server installation, administration, security management and routine maintenance procedures.
Conventions
The following table lists the document conventions used within this manual.
italic font: Book or manual titles and important information. Example: Refer to the UTStarcom TL1 User Guide.

Note: Means the reader must make note of the information. Example: Note: The external timing synchronization is not supported in Release 1.2.
Technical Assistance
Customer support for UTStarcom products is available 24 hours a day, seven days a week. For information
or assistance with any UTStarcom product, please contact a UTStarcom Customer Service and Technical
Support resource using any of the methods listed below.
UTStarcom China
Telephone: 86-10-85205588
Fax: 86-10-85205599
UTStarcom USA
Telephone: 1-510-864-8800
Fax: 1-510-864-8802
UTStarcom corporate website: www.utstarcom.com
Introduction
This chapter provides an introduction to Digital Optical Network, UTStarcom Digital Optical Networking
Systems, MPower Network Management, and Release 1.2 features in the following sections:
“Digital Optical Network Overview” on page 1-2
“IQ Networking Operating System Overview” on page 1-5
“MPower Network Management Overview” on page 1-7
“Release 1.2 Features” on page 1-10
[Figure: Digital Optical Network overview; UTStarcom TN780 nodes interconnected by Digital Links, with Client connections at each node]
UTStarcom TN780
The UTStarcom TN780, referred to as the TN780, provides digital bandwidth management within a Digital
Optical Network. The TN780 provides direct access to client data at 10Gbps and 2.5Gbps wavelength
granularity at a site, allowing flexible selection of whether to multiplex, add/drop, amplify, groom, or
wavelength-interchange individual channels. The TN780 can be equipped in a variety of network
configurations using a common set of circuit packs. Refer to “TN780 Configurations” on page 2-2 for a
detailed description of the various configurations supported by the TN780. The detailed description of the
TN780 hardware is provided in CHAPTER 3.
Redundant management plane communication paths utilizing Gateway Network Element and Man-
agement Proxy services.
Telcordia compliant TL1 for OSS integration.
Open integration interfaces including TL1, XML, and flat files.
Refer to CHAPTER 4 for a detailed description of the features.
The MPower management architecture employs a network-is-master model, allowing the network itself to
asynchronously inform and update all registered management clients, mitigating any synchronization or
accuracy issues. The network state and status are automatically discovered and reported to the
management client. This network-is-master model enables each network element to be managed by
multiple management applications, allowing for full management redundancy while letting each
management application maintain synchrony with what is occurring within its purview.
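To make the registration and asynchronous-update flow concrete, here is a minimal publish/subscribe sketch of the network-is-master model. All class and method names are invented for illustration; they are not part of the MPower API.

```python
# Illustrative sketch of the network-is-master model: the network element
# (the master) pushes state changes to every registered management client,
# so no client has to poll or reconcile its own copy of the truth.
class NetworkElement:
    def __init__(self, name):
        self.name = name
        self.state = {}      # discovered state, e.g. alarms, inventory
        self.clients = []    # registered management applications

    def register(self, client):
        self.clients.append(client)
        client.sync(self.name, dict(self.state))   # initial full synchronization

    def update(self, key, value):
        self.state[key] = value
        for client in self.clients:                # asynchronous notification
            client.notify(self.name, key, value)   # every client stays in sync

class ManagementClient:
    def __init__(self, label):
        self.label = label

    def sync(self, ne, state):
        print(f"{self.label}: full sync from {ne}: {state}")

    def notify(self, ne, key, value):
        print(f"{self.label}: {ne} reports {key} = {value}")

# Two independent managers (e.g. a GNM and an EMS) manage the same
# network element; both receive the same updates.
ne = NetworkElement("TN780-1")
ne.register(ManagementClient("GNM"))
ne.register(ManagementClient("EMS"))
ne.update("alarm", "LOS on OCG 1")
```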
In the current release, MPower includes the following applications:
UTStarcom MPower Graphical Node Manager
UTStarcom MPower Element Management System
Figure 1-3 Digital Optical Network and UTStarcom MPower Management Solution
Network Topologies: Point-to-point, linear ADM, Hub and Spoke, and Ring topologies.
Multi-junction System Application: Allows engineers to deploy interconnected rings that will simplify network designs and provide flexible networking implementations.
UTStarcom TN780 Network Element: Digital Optical Networking System which provides digital add/drop and bandwidth management capabilities.
Multi-Chassis configuration: Enables users to scale system capacity of deployed network equipment in new or existing systems, allowing for multi-chassis/multi-BMM and expanded DLM configurations.
UTStarcom Optical Line Amplifier Network Element: Optical line amplifier provided to extend the optical reach between the TN780s.
6x24dB Optical Reach: Within a digital link between adjacent TN780s, up to six 24dB optical spans and up to five Optical Line Amplifiers are supported.
DTC (Digital Transport Chassis): Supports Digital Optical Node functions; 400Gbps per fiber pair.
MCM-A (Management Control Module): Performs management and control functions for the TN780 network element.
MCM-B: Performs management and control functions for the TN780 network element. Provides enhanced CPU frequency, FLASH memory for persistent storage, and physical memory (SDRAM).
MCM redundancy: Allows for one MCM-B to be active and the other MCM-B to be standby. The active MCM-B terminates the management interfaces to the system and provides all of the control and monitoring functions for the system. The standby MCM-B maintains synchronization with its active partner so that it is capable of becoming active at any time, but is not actively involved in system control or monitoring.
BMM-C-4-A (Band Mux Module): Performs optical multiplexing and demultiplexing of four Optical Carrier Groups (OCGs). Each OCG contains ten 10Gbps DWDM channels. Three types of BMM-C-4-As are provided with various combinations of fixed gain, variable gain and mid-stage access for dispersion compensation fiber.
BMM-C-4-B: Performs optical multiplexing and demultiplexing of four Optical Carrier Groups (OCGs). Each OCG contains ten 10Gbps DWDM channels. Contains a new EDFA. Three types of BMM-C-4-Bs are provided with various combinations of fixed gain, variable gain and mid-stage access for dispersion compensation fiber.
BMM-C-8-A: Performs optical multiplexing and demultiplexing of eight Optical Carrier Groups (OCGs). Each OCG contains ten 10Gbps DWDM channels. Three types of BMM-C-8-As are provided with various combinations of fixed gain, variable gain and mid-stage access for dispersion compensation fiber.
DLM (Digital Line Module): Performs add/drop or switching of ten 10Gbps optical channels. Performs Forward Error Correction (FEC) encoding/decoding on each channel. There are 8 types of DLMs, one for each OCG. Each DLM can house up to five TAM-2-10G, TAM-4-2.5G and TAM-4-1G modules.
TAM-2-10G (Tributary Adapter Module): Houses two 10G Tributary Optical Modules (TOMs) and adapts client signals for transport over the Digital Optical Network. Up to two TOM-10G-SR-1 and/or TOM-10G-IR2 modules are supported within each TAM-2-10G.
TAM-4-2.5G (Tributary Adapter Module): Houses four 2.5G Tributary Optical Modules and adapts client signals for transport over the Digital Optical Network. Up to four TOM-2.5G-SR-1 and/or TOM-2.5G-IR1 modules are supported within each TAM-4-2.5G.
TAM-4-1G (Tributary Adapter Module): Houses four 1GbE Tributary Optical Modules and adapts client signals for transport over the Digital Optical Network. Up to four TOM-1G-LX modules are supported within each TAM-4-1G.
TOM-10G-SR1 (Tributary Optical Module): Pluggable XFP optical module supporting a client interface operating at 1550nm; 10km reach; LC connector; SONET OC-192, SDH STM-64, 10GbE LAN Phy, 10G Clear Channel and 10GbE WAN Phy client signals.
TOM-10G-IR2 (Tributary Optical Module): Pluggable XFP optical module supporting a client interface operating at 1550nm; 40km reach; LC connector; SONET OC-192, SDH STM-64, 10GbE LAN Phy, 10G Clear Channel and 10GbE WAN Phy client signals.
TOM-2.5G-SR1 (Tributary Optical Module): Pluggable SFP optical module supporting a client interface operating at 1310nm; 2km reach; SONET OC-48 and SDH STM-16 client signals.
TOM-2.5G-IR1 (Tributary Optical Module): Pluggable SFP optical module supporting a client interface operating at 1310nm; 15km reach; SONET OC-48 and SDH STM-16 client signals.
TOM-1G-LX (Tributary Optical Module): Pluggable SFP optical module supporting a client interface operating at 1310nm; 5km reach; 1G Ethernet client signals.
OTC (Optical Transport Chassis): Supports the Optical Line Amplification function.
OAM (Optical Amplifier Module): Performs uni-directional optical amplification. Up to two OAMs can be housed in one OTC. Three types of OAMs are provided with various combinations of fixed gain, variable gain and mid-stage access for dispersion compensation fiber.
OMM (Optical Management Module): Performs management and control functions for the Optical Line Amplifier network element.
Office alarms: Supports 20 external alarm inputs and 20 control outputs.
Datawire: Two 10Mbps Ethernet AUX ports to carry customer management data.
Management interfaces: Craft serial DCE (DB-9 female/RS232 interface) and craft Ethernet (10Mbps RJ45 interface) on the MCM/OMM, and two 10/100Mbps DCN ports on the I/O panel of the DTC and OTC.
OSC (Optical Supervisory Channel): 100Mbps Optical Supervisory Channel for inter-node communication.
10G Clear Channel Service: Provides services and technologies transported at the 10G SONET/SDH line rate in unframed payloads.
Laser safety (ALS): Automatic Laser Shutdown (ALS) during fiber cut.
Automatic channel turn-up: Automatically adjusts the power of the amplifiers across the entire link while turning up new channels or deleting existing channels.
In-service upgrade to Add/Drop: Digital Repeater sites can be upgraded to an Add/Drop configuration in-service by populating the tributary modules.
Eighty channel scalability: The limited availability of eighty channel BMMs will allow deployment of equipment that will support eighty channels in the future.
Automatic end-to-end circuit provisioning: The OSPF routing and GMPLS signaling protocols are implemented to support network topology discovery and end-to-end service provisioning and management.
Y-cable Protection: Enables 1+1 protection of diverse Sub Network Connection (SNC) paths through the Digital Optical Network for sub-50ms switching. Y-cable protection increases the overall reliability and service up-time of the optical path.
Enhanced digital transport path grooming: Enhanced inter-DLM cross-connecting allows more flexible and efficient use of bandwidth at add/drop and multi-junction sites.
Export all alarms and events: A feature provided in MPower EMS and MPower GNM that gives the user the ability to export all alarms and events.
Circuit Tracing: An EMS feature that gives the user the ability to trace a circuit by displaying intermediate points in the circuit.
Equipment auto-configuration and pre-configuration: With auto-configuration, the software can automatically detect and configure the hardware. With pre-configuration, users can pre-configure the hardware before it is installed.
Software upgrade protection: Allows the system to gracefully “fall back” or “downgrade” to a prior release in the rare event that a failure is experienced during the upgrade process.
Remote Hardware FPGA Upgrade: The TN780 hardware modules that support remote upgrade include all types of TAMs, DLMs and BMMs. The ability to remotely upgrade hardware using a controlled process is integrated in Release 1.2.
Network Information File Editor: An EMS feature that allows for the addition of administrative domains and node information updates while the EMS core server is running.
Optical PM, Digital PM, SONET/SDH PM: Optical PM data collection is supported on the Optical Line Amplifier and the TN780 network elements. Digital PM data collection is supported on the TN780 at the Terminal, Add/Drop and Digital Repeater sites. SONET/SDH PM data collection is supported in the TN780 network element for the tributary interfaces at the Terminal and Add/Drop sites. Both current and historical PM counters are supported. The counters can be reset.
PM data upload: Automatic and periodic transfer of PM data in Comma Separated Value (CSV) format, enabling customers to integrate with their management applications.
Gateway Network Element (GNE) and MAP (Management Application Proxy) functions: Minimizes the number of external DCN IP addresses and provides proxy services for management traffic to manage network elements that do not have direct DCN connectivity. Also supports redundant management access to all network elements and automatic recovery from a single failure in the communications path.
Non-Modal Multi-Window display: Facilitates the ability to launch numerous windows within the GUI, creating ease of provisioning, alarm correlation, and troubleshooting.
MPower Graphical Node Manager (GNM) GUI: Supports a web based Graphical User Interface (GUI) to manage a network element. The MPower GNM GUI resides on the network element and has the same look and feel as the MPower EMS. MPower GNM supports log-in to remote network elements utilizing the OSC.
- Event/Alarm management
- Topology navigation
- Inventory management
- Export inventory information in TSV and CSV format
- Automatic end-to-end circuit provisioning
- Manual cross-connect provisioning
- Historical and real-time performance monitoring
- Network element security management
- Software download
- Configuration database backup/restore
MPower Element Management System: Provides full fault management, configuration management, service provisioning, performance management, and security management (FCPS) support of TN780 and Optical Line Amplifier network elements, and network-level end-to-end control and monitoring.
- Network/network element level event/alarm management
- Network/network element level topology management
- Network/network element level inventory management
- Network element PM archiving and scheduling
- Network element PM report generation
- Network element and MPower EMS security management
- Network element software download
- Network element configuration database backup/restore
MPower SNMP Trap agent:
- SNMPv2C agent with dynamic trap registration
- Automated generation of current standing alarms upon registration
- Architected for future robust trap implementation
TL1 Interface: The Telcordia standards compliant TL1 interface provides full FCPS support of TN780 and Optical Line Amplifier network elements.
Network Applications
This chapter describes the configurations and network topologies supported by the TN780 in the following
sections:
“TN780 Configurations” on page 2-2
“Network Topologies” on page 2-4
TN780 Configurations
The flexibility of the TN780 eliminates the need for distinct node types, in contrast to traditional
Wavelength Division Multiplexing (WDM) networks that contain distinct node types, each performing a
specialized function such as terminal, add/drop or amplification. The TN780 provides all of these
functions using a common set of circuit packs by allowing the terminal, add/drop or amplification function
to be selected on a per-channel (10Gbps and 2.5Gbps) basis. The TN780 eliminates the “node-type”
concept and introduces dynamically re-configurable 0-100% digital add/drop, terminal and amplification
functions in a single network element. In addition, the TN780 provides digital performance monitoring on a
per-channel basis at each digital site for fault isolation and troubleshooting.
[Figure: Client connections, optical Spans and a Digital Link within a TN780 network]
Note: A terminal or a digital repeater site can be upgraded in-service to a re-configurable add/
drop site by populating additional circuit packs. No network engineering is required to
enable add/drop capacity at any digital site. The re-configurable add/drop capability does
require a software license.
Network Topologies
The TN780 and Optical Line Amplifier network elements can be arranged to support a broad range of
network topologies uniquely meeting the implementation needs of metro to long-haul applications. The
flexibility of the TN780 and Optical Line Amplifier network elements supports a great number of possible
network topologies the most typical of which are highlighted in the following sections.
Point-to-point Network
In its simplest form, an un-protected point-to-point network consists of two TN780 network elements, each
configured as a Digital Terminal (referred as DT) node, connecting two sites in the network (See Figure 2-
2 on page 2-4.).
[Figure 2-2: un-protected point-to-point network; two Digital Terminal nodes with To/From Customer connections]
Depending on the distance of the route, the fiber loss, and the potential for customer access at intermediate
sites along the route, an optimal selection of Optical Line Amplifiers (referred to as OAs) and Digital
Repeaters (referred to as DRs) can be included in the route (see Figure 2-3 on page 2-4). The Digital
Repeater node can be upgraded in-service to a Digital Add/Drop (referred to as DA) node by adding
additional circuit packs, transforming the point-to-point network into a linear add/drop network.
Note: Figure 2-3 on page 2-4 is for the purpose of feature illustration only. The actual number of
Optical Line Amplifier and Digital Repeater sites required between Digital Terminal sites is
dependent upon several factors, including fiber type and physical distance.
Note: In Release 1.2, within a digital link between adjacent TN780s, up to six 24dB optical spans
and five Optical Line Amplifiers are supported.
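As a rough worked example of this rule: six spans at the maximum 24dB loss each amount to 144dB of total span loss across one digital link, with an Optical Line Amplifier at each of the five intermediate amplifier sites. The sketch below checks a candidate link against these Release 1.2 limits; the per-span loss values are hypothetical, and the one-OLA-per-intermediate-site assumption is made for illustration.

```python
# Check a candidate digital link against the Release 1.2 limits:
# at most six optical spans, each at most 24dB, and at most five
# Optical Line Amplifiers between adjacent TN780s.
MAX_SPANS, MAX_SPAN_LOSS_DB, MAX_OLAS = 6, 24.0, 5

def link_ok(span_losses_db):
    olas_needed = len(span_losses_db) - 1   # one OLA between consecutive spans (assumed)
    return (len(span_losses_db) <= MAX_SPANS
            and all(loss <= MAX_SPAN_LOSS_DB for loss in span_losses_db)
            and olas_needed <= MAX_OLAS)

print(link_ok([22.0, 24.0, 18.5]))   # True: 3 spans, 2 OLAs
print(link_ok([24.0] * 7))           # False: too many spans
```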
[Figure 2-3: point-to-point network with Optical Line Amplifier and Digital Repeater sites between the Digital Terminal nodes; To/From Customer connections at each terminal]

[Figure: linear add/drop and hub and spoke networks; To/From Customer connections at each site]

Hub and Spoke Network
A linear add/drop network can be upgraded in-service to a hub and spoke network configuration by adding
a spoke-route at a Digital Junction site. Additionally, more spoke-routes can be added in-service to an
existing Digital Junction site. Also, a spoke route can be extended in an in-service manner with the addition
of Digital Add/Drop nodes.
Ring Network
A ring network is a special case of a linear add/drop network in which the two Digital Terminal nodes are
replaced by a single Digital Add/Drop node. A digital optical ring network therefore consists of TN780s
configured to perform the add/drop function and interconnected in a ring topology (see Figure 2-6 on
page 2-7). As with all other network configurations, a linear add/drop network is in-service upgradeable to
a ring network. The UTStarcom digital optical ring network eliminates the distance limitations on ring
circumference, allowing the digital optical ring to be deployed in both metro and core network applications.
[Figure 2-6: ring network; Digital Add/Drop nodes interconnected in a ring with To/From Customer connections at each node]
As described in CHAPTER 1, UTStarcom offers Digital Optical Networking Systems which help carriers build Digital
Optical Networks. The TN780 is the first Digital Optical Networking System offered by UTStarcom. The following
section provides a brief overview of the hardware modules that make up the TN780.
“TN780 Hardware Overview” on page 3-2
UTStarcom also offers Optical Line Amplifiers optimized to extend the optical reach between two TN780s.
The following section provides a brief overview of the hardware modules that make up the Optical Line Amplifier.
“Optical Line Amplifier Hardware Overview” on page 3-13
The TN780 and Optical Line Amplifier network elements provide similar system interfaces, data plane and
control plane functions as described in the following sections. The difference in the functionality of the
TN780 and Optical Line Amplifier network elements is called out as needed.
“System Interfaces” on page 3-17
“System Data Plane Functions” on page 3-21
“System Control Plane Functions” on page 3-35
“System Management Plane Functions” on page 3-40
As described in CHAPTER 2, the TN780 supports multiple configurations. The following sections provide signal
flow within the TN780 for each supported configuration.
“Digital Terminal Site Operation” on page 3-41
“Digital Add/Drop Site Operation” on page 3-44
“Digital Repeater Site Operation” on page 3-49
The following section provides signal flow within an Optical Line Amplifier.
“Optical Line Amplifier Site Operation” on page 3-51
DTC Overview
The DTC is comprised of a chassis and field replaceable circuit packs. The DTC consists of several common
equipment components. “DTC Hardware Equipment” on page 3-2 gives a list of the DTC components and
field replaceable circuit packs. A front view of the DTC with the DTC components and circuit packs is
shown in Figure 3-1 on page 3-4.
[Figure 3-1: front view of the DTC showing PEM A and PEM B, Fan Tray A and Fan Tray B, the TAP and its cable management, the MCM and MCM blank, BMMs, DLMs and DLM blanks, TAMs and TAM blanks, fiber bend radius control, fiber guide, air filter, and air inlet plenum]
DTC
The DTC houses the common equipment required for operations and circuit packs that transport and
terminate optical signals. The DTC can function as a Main Chassis to control and manage all chassis
within a TN780 network element and can also function as an Expansion Chassis in a multi-chassis
configuration. The DTC is designed to support the multi-chassis configuration. Each DTC supports
400Gbps bidirectional capacity.
The DTC includes the following common equipment that provides power, performs system supervision,
and enables system-level communication:
Rack Mounting Ears (see “Rack Mounting Ears” on page 3-5)
Two Power Entry Modules (see “Power Entry Module” on page 3-5)
One I/O Panel (see “I/O Panel” on page 3-5)
One Timing and Alarm Panel (see “Timing and Alarm Panel (TAP)” on page 3-6)
Two Fan Trays (see “Fan Tray” on page 3-6)
One Air Filter (see “Air Filter” on page 3-6)
One Card cage (see “Card Cage” on page 3-6)
I/O Panel
The I/O Panel houses the management and operations interfaces as enumerated below:
Two 10/100Mb auto-negotiating Data Communication Network (DCN) RJ-45 interfaces
Two 10Mb Administrative Inter-LAN RJ-45 interfaces to support Datawire application labeled AUX
One Craft RS232 Modem port
Chassis level alarm LEDs (Power, Critical, Major, Minor)
Bay level alarm LEDs (Critical, Major, Minor)
Four inter-chassis interconnect RJ-45 interfaces referred to as Nodal Controller and Timing (NCT),
for multi-chassis configuration
One Lamp Test button
One Alarm CutOff (ACO) button
One ACO LED
Fan Tray
Each DTC accommodates two fan trays, one at the top of the chassis and the other at the bottom. Each
fan tray contains three individually controlled cooling fans. The two fan trays work concurrently to push/pull
air through the system with air flow entering from the bottom front and sides, and exiting from the rear top
and sides.
Air Filter
Each DTC accommodates one replaceable air filter located below the bottom fan tray to filter out particles
at the air intake of the DTC.
Card Cage
Each DTC has a card cage into which field replaceable circuit packs are installed. Each DTC card cage
can accommodate:
Two MCMs (MCM-A and/or MCM-B) in slots 7A and 7B (this circuit pack is half the height of the service shelf)
Two BMMs (BMM-4-CX-A, BMM-4-CX-B, or BMM-8-CX-A) in slots 1 and 2
Future support for up to two OrderWire Modules (OWMs), each pluggable into any BMM
Four DLMs in slots 3, 4, 5 and/or 6
Up to five TAMs (TAM-2-10G, TAM-4-2.5G, and/or TAM-4-1G) plugged into each DLM
Each BMM-4-CX-A:
Provides optical access points for power monitors or optical spectrum analyzers, including two (2) receive access points and one (1) transmit access point
Provides sub-slot access for the OWM supported in a future release
Accommodates mid-stage access to Dispersion Compensation Fiber (DCF)
There are three different BMM-4-CX-A types providing different EDFA gain and with/without mid-stage
DCF access.
Each BMM-8-CX-A:
Provides a C/L-band splitter to support an in-service expansion of the system to enable optical transmission in the L-band
Provides optical access points for power monitors or optical spectrum analyzers, including two (2) receive access points and one (1) transmit access point
Provides sub-slot access for the OWM supported in a future release
Accommodates mid-stage access to Dispersion Compensation Fiber (DCF)
There are three different BMM-8-CX-A types providing different EDFA gain and with/without mid-stage
DCF access.
Note: In R1.2 the support for the BMM-8 is on a limited availability basis. Please contact your
UTStarcom sales account team for more information.
DMC Overview
This section provides an overview of the DMC. For the detailed description and technical specifications
refer to the UTStarcom TN780 Hardware Description manual.
The DMC is a passive chassis and does not require management. Depending on the span characteristics,
the DMC is optionally included in TN780 and Optical Line Amplifier network elements to provide dispersion
compensation.
The DMC is comprised of a chassis and Dispersion Compensation Modules (DCMs).
The DMC is a 1RU chassis. As with the DTC, the DMC can be mounted in a 23” rack (flush-mount and 1”,
2”, 5” and 6” forward-mount) and 600mmx600mm ETSI rack (flush-mount). Each DMC can accommodate
two half-width DCMs (see Figure 3-2 on page 3-12) or one full width DCM (see Figure 3-3 on page 3-12).
Multiple DCMs are available providing 100ps/nm to 1800ps/nm in 100ps/nm increments.
OTC Overview
The OTC is comprised of a chassis and field replaceable circuit packs. Table 3-2 on page 3-13 gives a list of
OTC components and field replaceable circuit packs. A front view of the OTC with the OTC components
and circuit packs is shown in Figure 3-4 on page 3-14.
[Figure 3-4: front view of the OTC showing PEM A and PEM B, the fiber guide, and the OMMs]
OTC
The OTC houses the common equipment required for operations and circuit packs that amplify optical
signals. Each OTC supports bidirectional optical amplification function. The OTC includes the following
common equipment that provides power, performs system supervision, and enables system-level
communication:
Rack Mounting Ears (see “Rack Mounting Ears” on page 3-14)
Two Power Entry Modules (see “Power Entry Module” on page 3-15)
One IO/Alarm Panel (see “IO/Alarm Panel” on page 3-15)
Two Fan Trays (see “Fan Tray” on page 3-15)
One Air Filter (see “Air Filter” on page 3-15)
One Card cage (see “Card Cage” on page 3-15)
IO/Alarm Panel
The IO/Alarm Panel houses the management and operations interfaces as described below:
Two 10/100Mb auto-negotiating Data Communication Network (DCN) RJ-45 interfaces
Two 10Mb Administrative Inter-LAN RJ-45 interfaces to support datawire application
One Craft RS232 Modem port
Chassis level alarm LEDs (Critical, Major, Minor, Power)
Four inter-chassis interconnect RJ-45 interfaces referred to as Nodal Control and Timing, for multi-
chassis configuration
One Lamp Test button
One ACO button
One ACO LED
The IO/Alarm Panel also houses telemetry alarm contacts. It provides 19 user customizable alarm input
contact sets and 10 user customizable alarm contact outputs.
Fan Tray
Each OTC accommodates two fan trays, one on the left side of the chassis and the other on the right side
of the chassis. Each fan tray contains one cooling fan. The two fan trays work concurrently to push/pull air
through the system with air flow entering from the front right and exiting on the left side.
Air Filter
Each OTC accommodates one replaceable air filter located on the right side of the chassis to filter out
particles at the air intake of the OTC.
Card Cage
Each OTC has a card cage into which field replaceable circuit packs are installed. Each OTC card cage
can accommodate:
System Interfaces
The TN780 and Optical Line Amplifier network elements provide several external interfaces as described
in the following sections:
“Operations Interfaces” on page 3-17
“Transport Interfaces” on page 3-18
“Input/Output Alarm Contacts” on page 3-19
“Datawire” on page 3-20
Operations Interfaces
The operations interfaces provide the management and administration of the network element. The TN780
and Optical Line Amplifier network elements provide two kinds of interfaces described below.
Management Interfaces
The network elements provide multiple craft interfaces for local user access to network management and
Operations, Administration, Maintenance and Provisioning (OAM&P) functions and also DCN interfaces
for remote access. Following is a list of external interfaces that can be used to facilitate the connection of
management devices to the TN780 and Optical Line Amplifier network elements.
Craft Serial DCE - This is a DB-9 female/RS-232 DCE interface used to connect a dumb terminal.
This serial port supports TL1 only (not EMS or Craft GUI). Maintenance personnel can use this inter-
face for managing the local network element or any subtending network elements utilizing this net-
work element as a Gateway. The craft serial interface is located on the MCM/OMM.
Craft Ethernet - This is a 10Mbps Ethernet RJ45 interface. This interface can be used to access the
network element through the TL1 Interface or MPower GNM. Maintenance personnel can use this
interface for managing the local network element or any subtending network elements utilizing this
network element as a Gateway. The craft Ethernet interface is located on the MCM/OMM.
DCN - This is an auto-negotiating 10/100Mbps Ethernet RJ45 interface. There are two DCN inter-
faces per network element supporting redundant inter-connectivity to the DCN. OSS personnel can
use this interface to manage the network element remotely. OSS personnel can use any of
UTStarcom Network Management Software applications, such as MPower EMS, MPower GNM or
systems TL1 interface, to manage the local network element or any subtending network elements
utilizing this network element as a Gateway. DCN interfaces are located on the IO Panel of the
TN780 and IO/Alarm Panel of the Optical Line Amplifier.
Craft Serial DTE - This is a DB-9 Male/RS-232 DTE interface used to connect an external modem or
a dumb terminal. This interface is located on the IO Panel of the TN780 and IO/Alarm Panel of the
Optical Line Amplifier.
Refer to UTStarcom TL1 User Guide, UTStarcom MPower GNM User Guide, and UTStarcom MPower
EMS User Guide for more details on how to use these interfaces to access the corresponding network
management applications.
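As a rough illustration of remote TL1 access through one of these interfaces, the sketch below opens a raw TCP session and issues standard TL1-style commands. The IP address, port number, and exact command syntax are assumptions made for the example; the supported command set and session parameters are defined in the UTStarcom TL1 User Guide.

```python
# Minimal TL1-over-TCP sketch. The address, port 3082, and command
# strings below are illustrative assumptions; consult the UTStarcom
# TL1 User Guide for the actual supported syntax.
import socket

def tl1_command(sock, command):
    sock.sendall(command.encode("ascii"))
    return sock.recv(65536).decode("ascii", errors="replace")

with socket.create_connection(("10.0.0.1", 3082), timeout=10) as sock:
    # Generic TL1 command format: VERB:<tid>:<aid>:<ctag>::<params>;
    print(tl1_command(sock, "ACT-USER:TN780-1:admin:100::secret;"))
    print(tl1_command(sock, "RTRV-ALM-ALL:TN780-1::101;"))
    print(tl1_command(sock, "CANC-USER:TN780-1:admin:102;"))
```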
Transport Interfaces
The transport interfaces carry the user data. Two types of transport interfaces are provided as described
below.
Client/Trib Interfaces
The client/trib interfaces are the ingress/egress points of the customer signals into/out of the TN780. These
signals can be added/removed at a terminal site, or an Add/Drop site. The following client/trib signals are
supported:
SONET OC-192 with full SONET overhead transparency
SONET OC-48 with full SONET overhead transparency
SDH STM-64 with full SDH overhead transparency
SDH STM-16 with full SDH overhead transparency
10G clear channel
10GbE LAN Phy
10GbE WAN Phy
1GbE
Line Interface
The line side optical interface carries the aggregate signal coming into/out of the TN780 and Optical Line
Amplifier network elements. The line side signal has the following characteristics:
40x10G channels with integrated OC-3c OSC
Enhanced FEC for 1E-15 end-to-end BER
Digital section layer & digital path level OAM (PM, tracing, alarms)
Traffic-agnostic transport for any 10Gbps/2.5Gbps/1Gbps signals
The line side interface supports multiple fiber types, such as SMF, TW-RS, and E-LEAF.
For more details on the optical characteristics of the line interfaces, refer to UTStarcom TN780 Hardware
Description manual.
Office Alarms
The TN780 and Optical Line Amplifier network elements provide seven office dry alarm contact sets to
connect to the Central Office alarm grid. Following are the office alarms provided:
Critical Audible
Critical Visual
Major Audible
Major Visual
Minor Audible
Minor Visual
Power failure
Each set consists of normally-closed (NC), normally-open (NO) and common contacts. When two or more
chassis are installed in a single bay, the alarm outputs may be ORed by wiring the associated outputs in
parallel (normally-open) or in series (normally-closed), as preferred by the customer.
Note: The ACO function is local to the chassis. It does not affect the audible alarm state in other
chassis.
Parallel Telemetry
The DTC provides sixteen user-customizable environmental alarm input contact sets, and the OTC
provides nineteen user-customizable alarm input contact sets, through opto-isolators. Each alarm input
contact set consists of a signal and a return contact. Users can customize these alarm inputs; when an
input is activated, it results in the generation of a customized alarm. The status of all alarms is accessible
through the management applications.
The DTC and OTC provide ten user-customizable parallel telemetry output contact sets using latching,
form-c relays. The control relays are latching, meaning they maintain their relay position (open or closed)
even during a power failure. Each output contact set consists of normally-closed, normally-open and
common contacts. The alarm outputs are controlled by the MCM/OMM.
Datawire
The TN780 and Optical Line Amplifier network elements provide two physical 10Mbps Ethernet RJ45
interfaces to support redundant access to the 10Mbps Datawire channel over the OSC. The Datawire
channel is used for interconnecting customer’s LAN segments at various sites along a route. For example,
the Datawire channel can be used for applications such as backhauling customer’s network management
traffic from the remote sites to a gateway network element site, or for serving as a network management
access port for field personnel to gain management access to a remote network element.
The configured IP addresses and subnets of the Datawire LAN ports are advertised by the GMPLS routing
protocol (see “IQ GMPLS Control Plane Overview” on page 4-47); therefore, the subnets become
reachable from other Datawire ports.
Digital Transport
The DTC and corresponding circuit packs provide the digital transport capability. Figure 3-5 on page 3-22
illustrates the interconnection between the circuit packs and major components along the data path. The
sections that follow describe the data plane functions.
Note: Figure 3-5 on page 3-22 is for feature illustration only. The interconnectivity between the
circuit packs can vary based on the network element configuration and customer
application.
[Figure 3-5: digital transport data path; TOMs plug into TAMs within each DLM, each DLM performs MAP/FEC processing, and the DLMs connect through the BMMs, DCMs and OSC to the west and east line fibers]
Tributary Adaptation
As shown in Figure 3-5 on page 3-22 the DTC data plane performs tributary adaptation function where any
variety of 10Gbps, 2.5Gbps and 1Gbps client signal is adapted to an ITU-compliant optical signal for
transmitting on the line fiber. The tributary adaptation includes conversion of client’s optical signals into
digital signals (performed in the TOM), encapsulation of 10Gbps, 2.5Gbps or 1Gbps payload into a Digital
Transport Frame, referred to as the DTF, (performed in the TAM and DLM) and conversion of the digital
signals into the ITU-compliant optical signals. The DTF architecture (refer to “Digital Transport Frame” on
page 3-23) is designed to support transport for any variety of 10Gbps, 2.5Gbps and 1Gbps client signals
through the network, irrespective of the actual payload format.
In Release 1.2, the TN780 supports the following client interfaces:
SONET OC-192 with full SONET overhead transparency
[Figure: DTF structure; a DTF Line (DTL) carries multiple DTF Paths (DTPs)]
Note: Release 1.2 supports only DTP2 (10Gbps) signals; however, the Digital Line Module is
designed to multiplex DTP1 (2.5Gbps) signals. No hardware replacement will be needed in
future releases to transport 2.5Gbps client signals.
The DTF overhead contains the characteristic information for the DTF Section, DTF Line, and DTF Path
layers of the network. The DTF overhead is segmented into seven groups: Digital Transport Frame
Alignment Overhead, DTS Overhead, FEC Overhead, DTL Overhead, DTPk Overhead, DTEk Overhead
and DTEk Payload.
Bandwidth Grooming
The TN780 system data plane supports 10Gbps and 2.5Gbps grooming and switching between DLMs
utilizing a cross-point switch integrated within the DLM and backplane connector. (See Figure 3-5 on
page 3-22.) As described in the earlier sections, each DLM can terminate 100Gbps line-side capacity and
each DTC can accommodate 4 DLMs in slots 3, 4, 5, and 6.
In Release 1.2, grooming of up to 100Gbps capacity, referred to as the X-OCG grooming, is supported
between adjacent DLM slots (between slots 3 & 4, and slots 5 & 6). (See Figure 3-8 on page 3-27.)
[Figure 3-8: X-OCG grooming example; four DLMs (DLM-1/3/5/7-C1-A) populated with TAM-2-10Gs, with the odd-slot DLMs connected to the west BMM-4-C1-A and the even-slot DLMs connected to the east BMM-4-C1-A, and 60Gbps groomed between adjacent DLM slots]
Note: Figure 3-8 on page 3-27 illustrates an example where the DLMs in the odd numbered slots
are connected to one BMM in the west direction, and the DLMs in the even numbered slots
are connected to the other BMM in the east direction. In this example configuration each
DTC can support up to 400Gbps grooming capacity.
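Since Release 1.2 restricts X-OCG grooming to adjacent DLM slot pairs, a provisioning front end could validate a requested inter-DLM cross-connect with a check like the following. This validator is a hypothetical helper written for illustration, not part of any UTStarcom software.

```python
# Release 1.2 constraint: X-OCG grooming is supported only between
# adjacent DLM slot pairs (slots 3 & 4, and slots 5 & 6).
GROOMING_PAIRS = {frozenset({3, 4}), frozenset({5, 6})}

def can_groom(slot_a: int, slot_b: int) -> bool:
    """Return True if an inter-DLM cross-connect between the two
    DLM slots is allowed under the Release 1.2 X-OCG rule."""
    return frozenset({slot_a, slot_b}) in GROOMING_PAIRS

assert can_groom(3, 4)      # adjacent pair: allowed
assert not can_groom(4, 5)  # non-adjacent DLM slots: not supported in R1.2
```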
Reconfigurable Add/Drop
The TN780 system data plane implements fully flexible 0% to 100% add/drop capabilities on a per-
channel basis (10Gbps, and 2.5Gbps). The channels can be configured to pass-through or add/drop. A
pass-through channel can be re-configured to an add/drop channel by:
Populating the client side circuit packs (TAM and TOM)
Provisioning end-to-end circuit through management applications
There are no restrictions as to how many channels or which channels are added/dropped at any given site.
Whenever an add/drop channel is added or deleted, no network engineering is required. Furthermore, the
add/drop channels are transparent to the client signal format and can carry many client signals.
Digital Regeneration
The TN780 system data plane implements fully flexible 0% to 100% digital amplification capabilities on a
per-channel basis (10Gbps, and 2.5Gpbs). It has the capability to digitally amplify the channels at 10Gbps,
and 2.5Gbps. There are no restrictions as to how many channels or which channels are digitally amplified
at any given site. Whenever a digital amp channel is added or deleted, no network engineering is required.
Furthermore, the digital amp channels are transparent to the client signal format and can carry many client
signals.
Digital Conditioning
The TN780 system data plane includes Forward Error Correction (FEC) encoder/decoder for every
channel on the line side at every digital add/drop, digital terminal and digital repeater node to improve the
overall BER.
UTStarcom implements an enhanced FEC algorithm which has a higher coding gain than the standard
G.709 RS(255,239) algorithm. The Enhanced FEC algorithm provides a coding gain of 8.7 dB at 10Gbps
at BER of 1e-15 with the same 7% overhead ratio as the standard G.709 FEC algorithm.
The Enhanced FEC function is implemented on the DLM.
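For reference, coding gain has its conventional meaning here: the reduction in required signal quality to reach the target BER once FEC is applied. The expression below is the standard definition evaluated at the stated operating point, not a formula taken from this manual:

$$G_{\text{coding}}\Big|_{\text{BER}=10^{-15}} = \left(\frac{E_b}{N_0}\right)_{\text{uncoded}} - \left(\frac{E_b}{N_0}\right)_{\text{coded}} = 8.7\ \text{dB}$$

(both terms in dB, at the 10Gbps line rate).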
DTF Section PM
The DTF Section layer includes a BIP-8 counter on each 10Gbps digital channel of a digital link, and it can
be monitored at each digital site.
DTF Line PM
The DTF Line layer defines BIP-8 statistics across multiple consecutive digital links along a route, as
defined by the customer.
This counter is not supported in Release 1.2.
DTF Path PM
The DTF Path layer includes a BIP-8 counter for both 2.5Gbps and 10Gbps client signals, and is
associated with the end-to-end path of the signal. The path performance monitoring data is available at the
DTP end points and also available at the intermediate digital sites where the DTF is regenerated,
analogous to SONET/SDH intermediate path performance monitoring.
FEC PM
As described in “Digital Conditioning” on page 3-28 FEC encoding and decoding is performed on every
digital channel. The FEC statistics are collected at every digital site on every channel, including:
Uncorrected bit error rate
Corrected bit error rate
Corrected number of zeros
Corrected number of ones
Uncorrected number of codewords
Total number of codewords
Raw total bit errors before applying FEC
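The pre-FEC and post-FEC error ratios follow directly from these counters. Below is a minimal sketch, assuming 255-byte codewords as in the standard G.709 RS(255,239) framing (the enhanced algorithm keeps the same 7% overhead, per the description above); the field names and example values are invented for illustration.

```python
# Derive error ratios from FEC PM counters such as those listed above.
# Counter names are illustrative; the 255-byte (2040-bit) codeword size
# is assumed from standard G.709 RS(255,239) framing.
CODEWORD_BITS = 255 * 8

def fec_ber(raw_bit_errors: int, corrected_bits: int, total_codewords: int):
    total_bits = total_codewords * CODEWORD_BITS
    pre_fec_ber = raw_bit_errors / total_bits          # before correction
    residual_errors = raw_bit_errors - corrected_bits  # errors FEC could not fix
    post_fec_ber = residual_errors / total_bits
    return pre_fec_ber, post_fec_ber

# Example: roughly one 15-minute bin on a 10Gbps channel
pre, post = fec_ber(raw_bit_errors=1_200_000,
                    corrected_bits=1_200_000,   # all errors corrected
                    total_codewords=4_000_000_000)
print(f"pre-FEC BER {pre:.2e}, post-FEC BER {post:.2e}")
```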
The DTF defines several maintenance signals which are transmitted in-band to the upstream and
downstream network elements using the overhead bytes. These include:
DTF BDI-L and DTF BDI-P are Backward Defect Indication signals sent upstream as an indica-
tion that a downstream defect has been detected
DTF AIS-L and AIS-P are Alarm Indication Signals sent downstream as an indication that an
upstream defect has been detected
DTF OCI-L and DTF OCI-P are Open Connection Indication (OCI) signals sent downstream as
an indication that the signal is not connected to a source in the upstream
DTF LCK-L and DTF LCK-P are Locked signals sent downstream as an indication that the con-
nection is locked in the upstream node
Signal Degrade (SD) signal is sent downstream indicating the BER of the received signal is
above the set signal degrade threshold
Signal Fail (SF) signal is sent downstream indicating the BER of the received signal is above the
set signal fail threshold
Trace message (TTI) at DTF Section layer providing continuity check along a digital link between
consecutive Digital Optical Nodes
Trace message (TTI) at DTF Path layer providing end-to-end continuity check between the two end-
points within the Digital Transport Network
Optical Transport
The TN780 and Optical Line Amplifier network elements include the optical transport functions which are
described below.
[Figure: optical layering; the Optical Transport Section (OTS) carries Optical Mux Section (band) signals (OMSb), each OMSb carries Optical Mux Section (OCG) signals (OMSo), and each OMSo carries Optical Channels (OCh)]
At the lowest layer, an Optical Channel (OCh) is a 10Gbps channel within the C-band channel plan. The
next layer is the Optical Multiplex Section (OMS) layer. UTStarcom defines two-stage multiplexing,
resulting in two OMS layers (OMSo and OMSb). The OMSo is a 100Gbps signal, an aggregate of ten
Optical Channels (OChs), and is referred to as the Optical Carrier Group (OCG). The OMSb is a
400Gbps signal, an aggregate of four OCGs, with support for 800Gbps or 8 OCGs in the future. The OMSb
signals are commonly referred to as the C-band and L-band. Release 1.2 supports only the C-band
channel plan, with future support for the L-band channel plan. The Optical Transport Section (OTS) is an
aggregate of the OMSb (C-band), the OMSb (L-band in a future release) and the OSC channel, providing
1.2Tbps capacity per fiber in the future.
Thus, an OTS signal may contain 0 to 80 C-band channels (1530.334nm to 1563.455nm), 0 to 80 L-band
channels in the future, plus the OSC channel at 1510nm, outside of both bands. Each OCh may be arbitrarily
added and dropped multiple times across a route. However, the individual channels are not managed;
instead, the OCGs are managed. The OCG, not the channel, is the basic unit of optical granularity; all the
OChs in an active OCG are optically present on the fiber (barring single-channel failures).
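The capacity figures quoted throughout this chapter follow from simple multiplication over this hierarchy, as the short worked example below (using only the channel counts stated in this section) confirms.

```python
# Capacity arithmetic for the optical layering described above.
OCH_GBPS = 10            # one Optical Channel (OCh)
CHANNELS_PER_OCG = 10    # ten OChs per Optical Carrier Group
OCGS_PER_BAND = 4        # Release 1.2 C-band; 8 OCGs planned for the future

ocg_gbps = CHANNELS_PER_OCG * OCH_GBPS   # 100Gbps per OCG (OMSo)
band_gbps = OCGS_PER_BAND * ocg_gbps     # 400Gbps C-band (OMSb)
future_band_gbps = 8 * ocg_gbps          # 800Gbps with 8 OCGs per band

print(ocg_gbps, band_gbps, future_band_gbps)   # 100 400 800
```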
[Figure: multiplexing hierarchy; ten 10Gbps OChs form a 100Gbps OCG, four OCGs form the 400Gbps C-band (a 400Gbps L-band to follow in the future), and the C-band, L-band and 1510nm OSC together form the OTS (400Gbps in Release 1, up to 1.2Tbps in the future)]
The TN780 network element implements OTS, Band, OCG and OCh layers, while the Optical Line
Amplifier implements only the OTS and Band layers.
The multiplexing of ten optical channels into one Optical Carrier Group (OCG) is performed in the DLM.
The multiplexing of multiple OCGs and the OSC is performed in the BMM. (See Figure 3-5 on page 3-22.)
The OAM supports multiplexing/demultiplexing of C-band and L-band signals.
Optical Amplification
The amplification function amplifies an aggregate signal along each span of the link. The UTStarcom
Optical Line Amplifier provides optical amplification and is used to extend the reach between the TN780
network elements.
Optical Conditioning
Optical conditioning fine-tunes the optical signals along the link. The BMM includes a mid-stage access to
the Dispersion Compensation Modules (DCMs) providing post-compensation for the dropped channels at
the Digital Terminal and Digital Add/Drop site. The OAM includes a mid-stage access to the DCMs
providing compensation for the amplified line signal.
Note: For the Multi-Chassis configuration, the MCM-B must be used due to the enhanced CPU
frequency, persistence storage, and physical memory (SDRAM).
[Figure: MCM/OMM communication paths; redundant CPU and OSC functions connect over the backplane to the Craft, DCN and inter-chassis NCT ports on the I/O panel]
Note: In a Multi-Chassis configuration the DCN ports on the Main chassis are active. The DCN
ports on the Expansion shelf are disabled.
[Figure: multi-chassis configuration; the Master Control Chassis and Expansion Chassis 1 and 2 are DTC chassis interconnected through the NCT1 and NCT2 ports on their IO Panels, with CPU and switch/router functions on the MCM-A and MCM-B in each chassis]
Orderwire Traffic - voice communication traffic between customer sites through the orderwire interfaces, which will be supported in a future release
The physical OSC interfaces are located on the BMM and OAM. The packets received on the OSC are switched to the MCM/OMM for processing. Thus, although the OSC is terminated on the BMM/OAM, the packets are processed in the MCM/OMM.
Note: In a Multi-Chassis configuration the DCN ports on the Main chassis are active. The DCN
ports on the Expansion shelf are disabled.
Each DTC must have the following minimum hardware to provide the Digital Terminal function (see Figure 3-14 on page 3-41):
One DTC
One MCM
One BMM
One DLM
One TAM
One TOM
[Figure 3-14: Physical configuration of a minimum Digital Terminal DTC — one MCM, one BMM-4-C1-A, and one DLM with one TAM-2-10G and one TOM, with blank filler panels in the unused slots; an optical fiber connection runs between the DLM line port and the corresponding OCG port on the BMM (Line West/East)]
Note: Figure 3-14 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-C-A.
A fully loaded DTC can terminate up to 400Gbps of traffic. A fully loaded DTC includes the following
hardware (see Figure 3-15 on page 3-42):
One DTC
One MCM
One BMM
Four DLMs
Twenty TAMs
Up to forty 10G TOMs, up to eighty 2.5G TOMs, or up to eighty 1G TOMs (or any combination of 10G, 2.5G, and 1G TOMs)
Figure 3-15 on page 3-42 illustrates an example of optical fiber connections between the modules. The line
side port on the DLM is connected to the corresponding OCG port on the BMM. For example, the line port
on DLM-1-C1 is connected to the OCG 1 port on the BMM. Note that actual connections depend on the
installed configuration.
[Figure 3-15: Physical configuration of a fully loaded DTC — one MCM, one BMM-4-C1-A, and four DLMs (DLM-1-C1-A, DLM-3-C1-A, DLM-5-C1-A, and DLM-7-C1-A), each fully populated with TAM-2-10G modules; each DLM line port connects to its corresponding OCG port (OCG 1, 3, 5, 7) on the BMM, with optical fiber connections between the circuit packs]
Note: Figure 3-15 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-16 on page 3-43 illustrates the logical configuration of a fully loaded DTC at a Digital Terminal
site.
[Figure 3-16: Logical configuration of a fully loaded Digital Terminal DTC — client TOMs feed TAMs, which feed the DLMs (slots 4, 5, and 6 shown carrying OCG 3, 5, and 7); the DLMs connect to the BMM, which places the OCGs, the OSC, and the optional mid-stage DCM onto the East line]
[Figure 3-17: Physical configuration of a DTC with two DLMs — one MCM, one BMM-4-C1-A (Line West), and DLM-1-C1-A and DLM-3-C1-A populated with TAM-2-10G modules; the DLM line ports connect to the OCG 1 and OCG 3 ports on the BMM, with optical fiber connections between the circuit packs]
Note: Figure 3-17 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Two DTCs are required to add/drop 400Gbps in each direction as shown in Figure 3-18 on page 3-46.
The following hardware is required to add/drop 400Gbps in each direction:
Two DTCs
Two MCM-Bs (One MCM for each DTC)
Two BMMs
Eight DLMs
Forty TAMs
Eighty 10G TOMs
Figure 3-18 on page 3-46 also illustrates the optical fiber interconnection between the modules. As shown,
two BMMs and four DLMs are located in the Main chassis. The remaining DLMs are located in the
Expansion chassis. The DLMs in the Expansion chassis are connected to the BMM in the Main chassis.
[Figure 3-18: Physical configuration of a 400Gbps Digital Add/Drop node — two DTCs, each with an MCM and a BMM-4-C1-A; DLM-5-C1-A and DLM-7-C1-A pairs in one chassis and DLM-1-C1-A and DLM-3-C1-A pairs in the other connect to the OCG 1 through OCG 7 ports on the BMMs (Line West and Line East), with optical fiber connections between the circuit packs]
Note: Figure 3-18 shows DTCs deployed with a BMM-4-CX-A. The DTCs can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-19 on page 3-47 illustrates the logical configuration of a network element providing 400Gbps add/drop capacity.
[Figure 3-19: Logical configuration of a 400Gbps Digital Add/Drop node — client TOMs and TAMs feed DLM pairs (slots 3/4 and 5/6) that pass OCG 1 and OCG 3 traffic between the two line directions]
Each DTC can support 200Gbps add/drop traffic in each direction. Figure 3-20 on page 3-48 illustrates the
physical configuration of a network element providing 200Gbps add/drop capacity.
[Figure 3-20: Physical configuration of a 200Gbps Digital Add/Drop node — a single DTC with an MCM, two BMM-4-C1-A modules (Line West and Line East), and DLM-1-C1-A and DLM-3-C1-A pairs connected to the OCG 1 and OCG 3 ports, with optical fiber connections between the circuit packs]
Note: Figure 3-20 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-21 on page 3-49 illustrates the logical configuration of a network element providing 200Gbps add/
drop capacity.
[Figure 3-21: Configuration of a network element providing 200Gbps add/drop capacity — an MCM, BMM-4-C1-A modules for Line West and Line East, and DLM-1-C1-A and DLM-3-C1-A populated with TAM-2-10G modules, with optical fiber connections between the circuit packs]
Note: Figure 3-21 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-22 on page 3-50 illustrates an example configuration of a Digital Repeater. As shown, the digitally
repeated traffic is switched between the adjacent DLMs.
[Figure 3-22: Digital Repeater configuration — the West and East BMMs (each with an OSC and an optional DCM) connect OCG 1 and OCG 3 to adjacent DLM pairs (slots 5/6 and 3/4); the digitally repeated traffic is switched between the adjacent DLMs over 100Gbps backplane connections]
[Figure 3-23: Optical Line Amplifier configuration — two OAM-C1-A modules (each with OSC, L-band, DCM, and OSA monitor ports) and OMM 1A/1B modules; an OSC optical patch cord connects the OSC OUT port on one OAM to the OSC IN port on the other]
Figure 3-23 on page 3-51 also indicates the required optical fiber connections. As shown, to provide line
amplification for signals going from West to East, the Line IN port on a given OAM is connected to the
incoming fiber from one direction (e.g. West) while the Line OUT port on the same OAM is connected to
the outgoing fiber in the opposite direction (e.g. East). As a result, the receiver on the OAM receives from
one direction and the transmitter on the same OAM transmits towards the opposite direction. However, an
OAM provides the option to ensure that the OSC Transmitter and OSC Receiver for a given direction are
located in the same OAM so that when an OAM fails, it impacts the OSC in one direction only and the
node will still be accessible. This is done by passing the OSC transmit signals between the OAMs using a
front-panel duplex optical patch cord. The OSC OUT port on one OAM is connected to the OSC IN port on
the other OAM as shown in Figure 3-23 on page 3-51.
UTStarcom IQ Network Operating System, referred to as IQ, is intelligent software that runs on all UTStarcom network elements, providing significant usability and operational benefits for Digital Optical Network solutions. This chapter describes the major functions provided by IQ.
IQ provides robust and reliable Operations, Administration, Maintenance, and Provisioning (OAM&P) functions based on a number of industry standards. The OAM&P functions provided by IQ are described in the following sections:
“Fault Management” on page 4-2
“Equipment Management and Configuration” on page 4-15
“Service Provisioning” on page 4-23
“Performance Monitoring and Management” on page 4-31
“Security and Access Management” on page 4-35
“Software Configuration Management” on page 4-41
The OAM&P functions are accessible to both human and machine clients through a variety of
management interfaces and applications, referred to as management applications in the rest of this
chapter.
In addition to OAM&P functions, IQ provides intelligent control plane and management plane functions as
described in the following sections:
“IQ GMPLS Control Plane Overview” on page 4-47
“IQ Management Plane Overview” on page 4-53
Fault Management
IQ provides extensive fault monitoring and management capabilities that are modeled after Telcordia and ITU standards. All these capabilities are agnostic to the client signal type and provide the ability to identify, correlate, and correct faults based on actual digital performance indicators, leading to quicker problem resolution. Additionally, IQ communicates all state and status information of the network element automatically and asynchronously to the other network elements within the Digital Optical Network and to all the registered management applications, thus maintaining synchrony within the network.
IQ provides the following fault management capabilities to help users in managing and maintaining the
network element.
Alarm surveillance functions that detect and report degraded conditions in the network element, including:
Detection of defects in the TN780 and Optical Line Amplifier network elements and the incoming signals (See "Defect Detection" on page 4-2).
Declaration of defects as failures (See "Failure Declaration" on page 4-3).
Reporting failures as alarms to the management applications (See "Alarm Reporting" on page 4-3).
Masking low priority alarms in the presence of high priority alarms (See "Alarm Masking" on page 4-6).
Reporting alarms through local alarm indicators (See "Local Alarm Summary Indicators" on page 4-6).
Configuring alarm reporting (See "Alarm Configuration" on page 4-7).
Isolating network faults utilizing the Automatic Laser Shutdown feature (See "Network Fault Isolation" on page 4-10).
Wrap-around historical event logging that tracks all changes that occur within the network element (See "Event Log" on page 4-10).
In-service and out-of-service maintenance and troubleshooting tools (See "Maintenance and Troubleshooting Tools" on page 4-11).
Alarm Surveillance
Defect Detection
IQ detects and terminates all hardware and software defects within the system. A defect is defined to be a
limited interruption in the ability of an item to perform a required function. The detected defects are
analyzed and localized to the specific network site, network element, facility (or incoming signal) and circuit
pack. On detecting certain defects, for example defects in the incoming signal, IQ transmits maintenance
signals to the upstream and downstream network elements indicating successful localization of the defect.
On termination of defects, IQ stops transmitting maintenance signals. See “Network Fault Isolation” on
page 4-10 for more details.
The detection of facility defects, such as LOL, AIS, FDI, etc., and transmission of maintenance signals to
the upstream and downstream network elements is in compliance with Telcordia and ITU specifications.
Failure Declaration
As specified in the GR-253 specification, the defects associated with facilities/incoming signals are soaked for a pre-defined period before they are declared as failures. This prevents spurious failures from being reported. When a defect is detected on a facility, it is soaked for 2.5 seconds before the corresponding failure is declared. Similarly, when a facility defect terminates, it is soaked for 10 seconds before the corresponding failure is terminated. This eliminates premature termination of the failure.
The defects associated with hardware equipment are not soaked. The failure condition is declared as soon as the defect is detected and, similarly, the failure condition is cleared as soon as the defect is terminated.
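The soaking behavior can be pictured with a minimal sketch such as the one below (a hypothetical helper, not the IQ implementation; only the 2.5-second declare and 10-second clear intervals are taken from the text above):

    import time

    DECLARE_SOAK_S = 2.5   # facility defect must persist this long before a failure is declared
    CLEAR_SOAK_S = 10.0    # defect must stay clear this long before the failure is terminated

    class FacilitySoaker:
        def __init__(self):
            self.failed = False    # True while the failure is declared
            self._last = None      # defect state at the previous update
            self._since = 0.0      # time of the last defect state change

        def update(self, defect_present, now=None):
            now = time.monotonic() if now is None else now
            if defect_present != self._last:
                self._last = defect_present
                self._since = now                 # defect state changed; restart the soak
            elapsed = now - self._since
            if defect_present and not self.failed and elapsed >= DECLARE_SOAK_S:
                self.failed = True                # declare the failure (raise alarm)
            elif not defect_present and self.failed and elapsed >= CLEAR_SOAK_S:
                self.failed = False               # terminate the failure (clear alarm)
            return self.failed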
Alarm Reporting
IQ reports the hardware and software failures as alarms. Detection of a failure condition results in an alarm
being raised which is asynchronously reported to all the registered management applications. The
termination of a failure results in clearing the corresponding alarm, which is again reported asynchronously
to all the registered management applications. IQ stores the alarm conditions locally and they are
retrievable by the management applications. Thus, at any given time users see only the current standing
alarm conditions.
Alarm generation is also dependent on the administrative state (see "Administrative State" on page 4-20) of the managed object instance, the presence of other failure conditions, and the user configuration, as described below:
Administrative State—Alarms are generated when the administrative state of a managed object instance and its ancestor objects is unlocked. When the administrative state of an object or any of its ancestor objects is locked or in maintenance, the alarms are not generated (except for the Loopback related alarms).
Alarm Hierarchy—An alarm is generated only if no higher priority alarms exist for the managed object instance. Thus, only the alarms corresponding to the root cause of the fault condition are reported. This capability prevents excessive alarms from being reported for a single fault condition. (See "Alarm Masking" on page 4-6).
User Configuration—IQ provides users the ability to selectively inhibit alarm reporting utilizing the alarm reporting control feature. (See "Alarm Reporting Control" on page 4-7).
IQ reports each alarm with sufficient information, as described below, so that the user can take appropriate corrective actions to clear the alarm. For a detailed description of all the parameters of an alarm reported to the management applications, refer to the corresponding user guides.
Alarm Category—this information isolates the alarm to a functional area (See "Alarm Category" on page 4-5 for the list of supported alarm types).
Alarm Severity—this information indicates the level of degradation that the alarm causes to the service (See "Alarm Severity" on page 4-5 for the list of supported severities).
This information is reported as the NTFCNCDE parameter in the TL1 notifications.
Probable Cause—this information describes the probable cause of the alarm. This is a short description of the cause of the alarm. A more detailed description is provided as the Probable Cause Description.
TL1 Condition Type—this field is analogous to the Probable Cause except that the condition type string is in accordance with GR-833-CORE. It is reported as the CONDTYPE parameter in the TL1 notifications.
Probable Cause Description—this information provides the detailed description of the alarm and isolates the alarm to a specific area. It is an elaboration of the Probable Cause, a string that provides more information on the cause of the alarm condition.
This information is reported as the CONDDESCR parameter in the TL1 notifications.
Service Affecting—this information indicates whether the given alarm condition interrupts the data
plane services through the system or network. The two possibilities are: SA for service affecting and
NSA for non-service affecting. An alarm is reported as service-affecting if the alarm condition affects
a hardware or software entity in the data plane, and the affected hardware or software entity is
administratively enabled.
This information is reported as SRVEFF parameter in the TL1 notifications.
Source Object—this information identifies the managed object instance on which the failure is
detected.
This information is reported as AID in the TL1 notifications.
Location—this information identifies the location of the managed object as near end or far end, when
applicable.
This information is reported as LOCN parameter in the TL1 notifications.
Direction—this information indicates whether the alarm has occurred in the receive direction or in the
transmit direction, when applicable.
This information is reported as DIRN parameters in the TL1 notifications.
Time & Date of occurrence—this information provides the time at which the alarm was detected. It is derived from the system time. IQ provides users the ability to manually configure the system time or to enable the Network Time Protocol (see "Time-of-Day Synchronization" on page 4-59) so that an accurate and synchronized time is reported for all alarms. This allows root cause analysis of failures across network elements and networks.
This information is reported as the OCRDAT parameter in the TL1 notifications.
Type—As described in "PM Thresholding" on page 4-33, IQ supports performance monitoring and thresholds, enabling early detection of degradation in system and network performance. The threshold crossing conditions are handled utilizing the same mechanism as alarms. The type field indicates whether the reported condition is an alarm or a threshold crossing condition.
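For illustration only, a hypothetical TL1 autonomous alarm report carrying these parameters might look like the following (the AID, condition values, and header fields are invented; the exact syntax is defined in GR-833-CORE and the TL1 user guide):

       NODE-01 04-07-21 14:03:55
    ** 0102 REPT ALM EQPT
       "DLM-1-4:MJ,EQPTFAIL,SA,07-21,14-03-55,NEND,RCV:\"Equipment failure\""
    ;

Here MJ is the notification code (NTFCNCDE), EQPTFAIL the condition type (CONDTYPE), SA the service-affecting indication (SRVEFF), 07-21 and 14-03-55 the occurrence date and time (OCRDAT), NEND the location (LOCN), RCV the direction (DIRN), and the quoted text the condition description (CONDDESCR).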
IQ records all the current alarms with alarm details, as described above, in an alarm table. The alarms are
persisted in the MCM/OMM across reboots. After a system reboot or MCM/OMM reboot, the alarms in the
persistent storage are validated to remove any cleared alarms and raise only the current outstanding
alarms.
Refer to the UTStarcom TN780 Maintenance and Troubleshooting Guide for the detailed description of all
the alarms generated by IQ and the corresponding clearing procedures.
Alarm Category
IQ categorizes the alarms into the following types:
Facility Alarm—alarms of this type are associated with the line and tributary facilities, and the incoming signal. For example: LOL, LOS, AIS, and FDI.
Equipment Alarm—alarms of this type are associated with hardware errors. For example: Equipment Failure, and Equipment Unreachable.
Communications Alarm—alarms of this type are associated with faults which impact the communication between the modules within the network element and between network elements. For example: No Communication with OSC Neighbor, and LOL on OSC.
Software Processing Alarm—alarms of this type are associated with software processing errors. For example: Software Upgrade has Failed, and Persistence space less than 2%-critical.
Environmental Alarm—alarms of this type are caused by a change in the state of the environmental alarm input contact.
Alarm Severity
Each alarm generated by IQ has one of four severity levels set by default. These levels are:
Critical—the Critical severity level indicates that a service affecting condition has occurred and an
immediate corrective action is required. This severity is reported, for example, when a managed
object instance becomes totally out-of-service and its capability must be restored.
Major—the Major severity level indicates that a service affecting condition has developed and an
urgent corrective action is required. This severity is reported, for example, when there is a severe
degradation in the capability of the managed object instance and its full capability must be restored.
Minor—the Minor severity level indicates the existence of a non-service affecting fault condition and
that corrective action should be taken in order to prevent a more serious (for example, service
affecting) fault. Such a severity is reported, for example, when the detected alarm condition is not
currently degrading the capacity of the managed object instance.
Warning—the Warning severity level indicates the detection of a potential or impending service affecting fault, before any significant effects have been felt. Action should be taken to further diagnose (if necessary) and correct the problem in order to prevent it from becoming a more serious service affecting fault.
The alarm severity is referred to as the notification code in GR-833-CORE and it is reported as such in the
TL1 notifications.
The user can customize the severity associated with an alarm through the management applications. (See
“Alarm Severity Assignment Profile” on page 4-9.)
Alarm Masking
IQ masks (i.e., not autonomously report) a failure that is the result of the same root-cause problem or
maintenance signal as another higher-priority failure reported simultaneously by that network element per
the containment hierarchy, similar to those defined for SONET/SDH protocols. This prevents logs and
management applications from being flooded with redundant information. For example, a circuit pack
failure may cause a LOL alarm. Since the underlying fault is the circuit pack failure, suppressing LOL alarm
prevents redundant information being reported.
The masked condition is neither reported to the management applications nor recorded in the alarm table.
However, the masked condition does not have any effect on changes to the operational state of the
managed object instance on which the condition exists.
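The containment-based masking rule can be sketched as follows (a minimal illustration with hypothetical object names and an assumed priority ranking, not the actual IQ containment hierarchy):

    # A condition is masked if a higher-priority condition exists on the same
    # object or on one of its ancestors in the containment hierarchy.
    PRIORITY = {"EQPT_FAIL": 3, "LOL": 2, "OPT_DEGRADE": 1}   # assumed ranking

    def visible_alarms(conditions, parent):
        """conditions: {object_id: condition}; parent: {object_id: ancestor_id or None}."""
        visible = []
        for obj, cond in conditions.items():
            node, masked = obj, False
            while node is not None:
                other = conditions.get(node)
                if other and PRIORITY[other] > PRIORITY[cond]:
                    masked = True          # a higher-priority root cause exists
                    break
                node = parent.get(node)
            if not masked:
                visible.append((obj, cond))
        return visible

    # A circuit pack failure masks the LOL alarm it causes on its port:
    conds = {"DLM-1-4": "EQPT_FAIL", "DLM-1-4-PORT-1": "LOL"}
    tree = {"DLM-1-4": None, "DLM-1-4-PORT-1": "DLM-1-4"}
    print(visible_alarms(conds, tree))     # [('DLM-1-4', 'EQPT_FAIL')]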
Note: The TN780 supports bay-level alarm indicators. The Optical Line Amplifier does not support bay-level alarm indicators. The bay-level indicators provided by the PDU are recommended to be used whenever a PDU is present in a bay.
Chassis Level Visual Alarm Indicators—These indicators provide a summary of the outstanding alarm conditions of the chassis. A chassis level visual alarm indicator is lit if there is at least one corresponding outstanding alarm condition within the chassis. The following chassis level LED indicators are provided:
Critical LED to indicate the presence of critical alarm within the chassis.
Major LED to indicate the presence of major alarm within the chassis.
Minor LED to indicate the presence of minor alarm within the chassis.
Power LED to indicate the status of power input to the chassis.
Chassis Level Office Alarm Indicators—As described in “Office Alarms” on page 3-19, the TN780
and Optical Line Amplifier network elements provide alarm output contacts to support chassis level
visual and audio indication of critical, major and minor alarms. As described in “Alarm Cutoff (ACO)”
on page 3-19, ACO buttons and ACO LEDs are also supported.
Card Level Visual Indicators—All circuit packs include LEDs to indicate the card status. In general,
all circuit packs provide the following LEDs.
Power (PWR) LED to indicate the status of the power input to the circuit pack.
Active (ACT) LED to indicate administrative state and service state of the circuit pack.
Fault (FLT) LED to indicate the presence of the critical, major or minor alarm.
Port Level Indicators—These indicators are provided for each tributary port and line port. In general,
the port level LEDs include:
Active (ACT) LED to indicate the administrative state and service state of the port.
LOS LED to indicate the incoming signal status.
Note: By default, all critical, major, and minor alarms affect the corresponding chassis LED status. However, through the management applications, users can configure the facility alarms not to affect the chassis LEDs. The equipment alarms always affect the chassis LEDs.
Alarm Configuration
Users can customize the alarms reported by IQ through the management applications and interfaces.
The ARC (Alarm Reporting Control) is provisionable through the management applications. Users can enable ARC on a per managed object instance basis. When ARC is applied to a managed object instance, it is also propagated to all the contained and supported managed objects. For example, when alarm reporting is inhibited for the chassis object instance, alarm reporting is inhibited for all the circuit pack object instances within that chassis. See "Managed Object Entities" on page 4-15 for the description of the managed object entities and the relationships between them.
Alarms are inhibited for the duration that ARC is enabled. On disabling ARC, the standing alarm conditions that were inhibited due to ARC are reported to the management applications. The scenarios shown in Figure 4-1 on page 4-9 capture how alarm reporting is handled in each situation.
As shown in Scenario #1, if there are any outstanding alarms prior to enabling the alarm inhibition, those alarms remain outstanding until the alarm condition is cleared. If the alarm condition is cleared during the ARC period, an alarm cleared event is reported to the management applications.
As shown in Scenario #2, if there are any outstanding alarms prior to enabling the alarm inhibition, those alarms remain outstanding until the alarm condition is cleared. If the alarm condition is cleared after the ARC period, an alarm cleared event is reported to the management applications.
As shown in Scenario #3, if an alarm condition is raised and cleared during ARC, the corresponding events are not reported to the management applications. However, those events are logged in the event log and are retrievable through the management applications.
As shown in Scenario #4, if an alarm condition is raised during the ARC period, the corresponding alarm raised event is not reported to the management applications. If the ARC period ends prior to the alarm clearance, the alarm raised event is reported to the management applications with the actual time stamp at which the alarm was generated. Thus, at the end of the ARC period, the management applications and the network element are in sync with regard to the standing alarm conditions.
[Figure 4-1: Alarm reporting behavior during the ARC period for Scenarios #1 through #4. Legend: alarms raised or cleared during the ARC period are logged, but NOT reported to the management applications]
Note: The severity of environmental alarms is assigned by the user when they are provisioned. The ASAP feature cannot be used to modify the provisioned severity of environmental alarms.
Event Log
IQ provides a wrap-around historical event log that tracks all changes that occur within the system. The events are recorded locally in the network element and are retrievable through the management applications. The event log enables users and management applications to retrieve all events (including alarms) that occurred during a communication failure between the management applications and the network element, thereby maintaining data synchrony between the network element and the management applications.
IQ records the following types of events in the event log:
Alarm related events which include alarm raise and clear events.
PM data thresholding related events which include threshold crossing raise and clear events.
Threshold crossing alerts as described in “PM Thresholding” on page 4-33.
Managed object creation and deletion events triggered by the user actions.
Security administration related events triggered by the user actions.
Network administration events triggered by user actions such as software upgrade, software downgrade, and database restore.
Audit events triggered by the user actions to change attribute value(s) of a managed object.
State change events indicating the state changes of a managed object triggered by the user action
and/or changes in the operation capability of the managed object.
The event logs are stored in the persistent storage on the network element, and therefore, the event logs
will be available after restarts and reboots. Note that the attribute value change events are not stored in the
persistent storage. IQ stores up to 1000 attribute value change events which are not persisted and 3000
remaining events which are persisted over reboots. Users can export the event log information in TSV
format using management applications.
The following important information is stored for each event log record:
Managed object instance that generated the event.
The time at which IQ generated the event.
Event type indicating the event category, including:
Update Event, which includes managed object create and delete events.
Report Event, which includes security administration related events, network administration related events, audit events, and threshold crossing events (TCE).
Condition, which includes alarm raise and clear events, non-alarmed conditions, and threshold crossing condition events.
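For illustration only, an exported event log excerpt in TSV (tab-separated) format might look like the following; the column names and values below are hypothetical:

    object       time                 event_type    description
    DLM-1-4      2004-07-21 14:03:55  Condition     Alarm raised: Equipment failure
    OCH-1-4-1    2004-07-21 14:05:02  Report Event  Threshold crossing event (TCE)
    CHASSIS-1    2004-07-21 14:06:10  Update Event  Managed object created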
Refer to the UTStarcom TN780 Maintenance and Troubleshooting Guide for a list of the events logged in the event log on the TN780 and Optical Line Amplifier network elements.
Loopbacks
Loopbacks are used to test newly created circuits before running live traffic or to logically locate the source
of a network failure. Loopbacks provide a mechanism where the signal under test (either the user signal or
the test pattern signal such as PRBS) is looped back at some location on the network element in order to
test the integrity and validity of the signal being looped back. Since loopbacks affect the normal data traffic
flow, they must be invoked only when the associated facility is in administrative maintenance state.
IQ provides access to the loopback capabilities in a TN780 network element. These loopbacks are agnostic to the client payload type. The following loopbacks are supported to test each section of the network, as shown in Figure 4-2 on page 4-12, and the various hardware components along the data path (see "DTC Digital and Optical Transport Architecture" on page 3-22). The loopbacks can be enabled or disabled remotely through the management applications.
Client Trib Facility Loopback—is performed on the TAM. The tributary port Rx is looped back to the Tx on the TAM. This loopback test verifies the operation of the tributary side optics in the TOM and TAM.
DTF Path Terminal Loopback—is performed on the DLM. In this case the cross-point switch on the DLM loops back the received client signal towards the TAM. This loopback verifies the operation of the tributary side optics as well as the adaptation of client signals into digital signals performed in the TOM and TAM, and the cross-point switch on the DLM.
DTF Path Facility Loopback—is performed on the DLM. In this case the cross-point switch on the DLM loops back the received line side signal towards the line. This loopback verifies the line side connectivity and the DTF encapsulation performed in the DLM.
Client Trib Terminal Loopback—is performed on the TAM. In this case the digital signal received from the line is looped back to the line transmit side in the TAM. This loopback verifies the line side optics on the DLM, the DTF and FEC mapper/demapper in the DLM, and the cross-point switch.
[Figure 4-2: Loopback locations along the data path — Client Trib Facility Loopback, DTF Path Terminal Loopback, DTF Path Facility Loopback, and Client Trib Terminal Loopback]
PRBS Test
The Pseudo Random Bit Sequence (PRBS) is a test pattern that is used to diagnose and isolate trouble spots in the network without requiring a valid data signal or customer traffic. This type of test signal is used during system turn-up or in the absence of a valid data signal from the customer equipment. The test is primarily aimed at detecting and sectionalizing bit errors in the data path. Since the PRBS test affects the normal data traffic flow, it must be invoked only when the associated facility is in the administrative maintenance state.
IQ provides access to the PRBS generation and monitoring capabilities supported by the TN780 network
element. The TN780 supports PRBS generation and monitoring for testing circuit quality at both the DTF
Section and DTF Path layers as described below. The PRBS test can be enabled or disabled remotely
through the management applications.
DTF Section-level PRBS Test—here the PRBS signal is generated by the near end DLM and is monitored by the adjacent TN780 network elements. This test verifies the quality of the digital link between two adjacent TN780 network elements.
DTF Path-level PRBS Test—here the PRBS signal is generated by the near end TAM and is monitored at the far end TAM where the digital path is terminated. This test verifies the quality of the end-to-end digital path.
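For reference, a PRBS pattern is produced by a linear-feedback shift register. The sketch below generates the common PRBS7 pattern (polynomial x^7 + x^6 + 1); it is illustrative only, since this document does not specify which polynomial the TN780 test pattern uses:

    def prbs7(nbits, seed=0x7F):
        """Generate nbits of a PRBS7 pattern from a 7-bit LFSR (nonzero seed)."""
        state = seed & 0x7F
        out = []
        for _ in range(nbits):
            fb = ((state >> 6) ^ (state >> 5)) & 1   # XOR of register stages 7 and 6
            out.append(fb)
            state = ((state << 1) | fb) & 0x7F       # shift left, feed back into stage 1
        return out

    # A maximal-length PRBS7 pattern repeats every 2**7 - 1 = 127 bits:
    assert prbs7(254) == prbs7(127) * 2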
[Figure 4-3: PRBS tests at the DTF Section and DTF Path layers — G marks the PRBS Generator and M marks the PRBS Monitor between the client endpoints]
Note: The PRBS tests can be coupled with loopback tests so that pre-testing of the quality of the digital link or end-to-end digital path can be performed without the need for an external PRBS test set. While this is not meant as a replacement for customer-premise to customer-premise circuit quality testing, it does provide an early indicator of whether or not the transport portion of the full circuit is providing a clean signal.
Hairpin Circuits
A hairpin circuit refers to a special circuit where the source and destination tributary ports are located on
the same network element and the same DLM. In other words, the client signal received by the DLM on
one tributary port is looped back to another tributary port on the same DLM, without going through the line.
The source and destination tributary ports could be on the same TAM or a different TAM, but they must be
on the same DLM.
Trace Messaging
IQ provides access to the trace messaging feature supported by the TN780 network element. The TN780
supports the following trace messaging functions:
Trace messaging at the DTF Section and DTF Path (see Figure 4-4 on page 4-14). The DTF Section trace messaging is utilized to detect any mis-connections between the TN780 network elements within a Digital Link, and the DTF Path trace messaging is utilized to detect any mis-connections in the circuit path along the Digital Optical Network. The DTF trace messaging is agnostic to the client signal type.
[Figure 4-4: Trace messaging — J0 trace at the client interfaces, and trace messages at the DTF Section and DTF Path layers between the client endpoints]
Trace messaging at the SONET/SDH J0 on the tributary ports. The TN780 provides the capability to passively monitor the J0 messages received from the client equipment. This capability enables the detection of mis-connections between the client equipment and the TN780. The TN780 can monitor 1-, 16-, and 64-byte J0 trace messages.
[Figure: Managed object hierarchy — the network element contains chassis; each chassis contains circuit packs (TAM, TOM, and others); the physical ports include the DCF span, OCGs (C-band and L-band), and tributary ports]
All circuit packs in the TN780 and Optical Line Amplifier network elements (see "Circuit Pack Discovery" on page 4-17)
The termination points, including physical ports and logical termination points, in a TN780 and Optical Line Amplifier network element
The Digital Optical Network topology, including the Physical Topology and the Service Provisioning topology (see "Network Topology" on page 4-48)
The optical data plane connectivity, which includes the connectivity between the DLM and BMM in a TN780 network element (see "Optical Data Plane Autodiscovery" on page 4-17)
IQ maintains the inventory of all the automatically discovered resources, as described above, and also the user-provisioned services, which include:
Cross-connects provisioned using Manual Cross-connect Provisioning mode
Circuits provisioned using Dynamically Signaled SNC Provisioning mode
Cross-connects that are automatically created while creating circuits utilizing Dynamically Signaled
SNC Provisioning mode
Protection groups that have been provisioned
Refer to “Service Provisioning” on page 4-23 for more details.
IQ and the UTStarcom TN780 network element support the auto-discovery of connectivity between the DLM and the BMM. The auto-discovery eliminates mis-connections between the DLM and BMM, including:
Connecting a DLM to a wrong OCG on the BMM-4. For example, connecting a DLM with OCG3 output to an OCG5 port on the BMM-4.
Connecting a DLM to a BMM in conflict with the pre-provisioned association of the BMM and DLM. For example, the OCG3 port on the BMM is pre-provisioned to be associated with the DLM in slot 4, but the user incorrectly connects the fiber to the DLM in slot 3 (though it is OCG3).
Note: If auto-provisioning is enabled, then the BMM and DLM check only for OCG compatibility.
On detecting the mis-connection, alarms are reported so that the user can correct the connectivity. Also,
the DLM is prohibited from transmitting optical signals towards the BMM to prevent the mis-connection
from interfering with the other operational DLMs. In addition, the operational state of the DLM is changed to
disabled.
The optical data plane auto-discovery involves control message exchanges between the active MCM in the Main chassis and the BMMs and DLMs, in addition to the control message exchange between the DLM and BMM over the optical data path. The optical data plane auto-discovery requires the control plane to be available. The following are some limitations imposed by the protocol that prevent it from correctly detecting BMM and DLM mis-connections:
When the auto-discovery is in progress, there is a 5 second window during which the BMM will not discover any re-cabling performed by the user. Therefore, the user should not perform re-cabling while the auto-discovery is in progress. (The events during which the BMM and DLM automatically initiate the optical data plane auto-discovery are listed below.)
If users inadvertently connect an incorrect high power signal to the OCG port on the BMM (for example, connecting the line port output to the OCG input port on the BMM), it could impact traffic on the other operational OCG ports on the BMM.
The auto-discovery procedure requires that the connectivity between the BMM and DLM be bi-directional; in other words, the transmit and receive pair of a given OCG port on the BMM must be connected to the transmit and receive pair of the same line port of the DLM. If this is not true, then it will impact the active traffic.
The BMM may not detect the mis-connection if the fiber is re-cabled under the following conditions
during which the control messages pertaining to the auto-discovery could be lost:
BMM is rebooted
BMM is down
BMM is unplugged
DLM is down
No active MCM in the Main chassis
No active MCM in the Expansion chassis if the BMM and DLM are not in the same chassis
NCT cable is unplugged (inter-chassis connectivity) if BMM and DLM are not in the same chassis
The user must refrain from re-cabling during the above conditions.
In general, the user must perform re-cabling only when the MCM, BMM and DLM are completely
operational. This will ensure that the optical data plane auto-discovery can positively identify all mis-
connections.
The operational state of the DLM is enabled if the auto-discovery is successful; otherwise, it is disabled.
State Modeling
IQ implements a state model that meets the various needs of all the supported management applications and interfaces, and communicates the comprehensive state of the equipment and termination points. The IQ state model complies with TMF814, and with GR-1093 to meet the TL1 management interfaces.
IQ defines a standard state model for all the managed entities which includes equipment as well as
termination points as described in “Managed Object Entities” on page 4-15. IQ defines the following states:
Administrative State—represents the user's operation on a managed object entity (See "Administrative State" on page 4-20).
Operational State—represents the ability of the managed object entity to provide service (See
“Operational State” on page 4-21).
Service State—represents the current state of the managed object entity which is derived from the
administrative state and operational state (See “Service State” on page 4-21).
Administrative State
The administrative state allows the user to permit or prohibit the managed object entity from providing service. The administrative state of the managed object entity can be modified only by the user through the management applications. A change in the administrative state of a managed object entity results in an operational state change of the contained and supported managed objects; however, the administrative state of the contained and supported managed objects is not changed.
IQ defines three administrative states as given below.
Locked State—the managed object entity is prohibited from providing services to its users. Service affecting provisioning, such as modifying attributes or deleting the object, and diagnostics, such as loopbacks, are allowed. Users can change the administrative state of a managed object entity to the locked state from either the unlocked state or the maintenance state through the management applications. This action results in the following behavior:
The managed object does not provide services to users.
All outstanding alarms on the managed object are cleared. No new alarms are reported on this object.
The operational state and service state of this managed object are not changed. They are determined autonomously by the fault conditions that might arise, or by administrative state changes of its ancestors.
The operational state and service state of all the contained and supported managed objects are modified; the operational state is changed to disabled and the service state is changed to the OOS (out-of-service) state.
The redundant equipment, if there is one (for example, MCM-B), becomes active.
Unlocked State—the managed object entity in unlocked state is allowed to provide services. Using
management applications, users can change the state of a managed object entity to unlocked state
from either locked state or maintenance state. This action results in the following behavior:
If there are any outstanding alarms on the managed object they are reported.
The managed object entity is available to provide services (provided its operational state is
enabled). However, if there is a corresponding redundant managed object entity which is active,
this managed object entity will be in stand-by mode (e.g. MCM-B).
Maintenance State—the managed object entity in this state enables maintenance operations, such as Trace Messaging and PRBS testing, to be performed. Users can change the state of a managed object entity to the maintenance state from either the locked state or the unlocked state through the management applications. This action results in the following behavior:
Users can perform service-impacting maintenance operations, such as loopback tests and PRBS tests, without having any alarms reported.
All outstanding alarms are cleared on that managed object entity and all new alarm reporting and alarm logging are suppressed until the managed object entity is administratively unlocked again.
The operational state and service state of this managed object entity are not changed.
The operational state and service state of all the contained and supported managed objects are modified; the operational state is changed to disabled and the service state is changed to the OOS-MT (out-of-service, maintenance) state.
Note: When the Admin state of a module is set to Locked or Maintenance, that state is reflected
in the Equipment Tree and the Equipment View of MPower GNM.
Operational State
The operational state indicates the operational capability of a managed object entity to provide its services.
It is determined by the state of the hardware and software; it is not configurable by the user. Two
operational states are defined:
Enabled—The managed object entity is able to provide service. This typically indicates that the corresponding hardware is installed and functional.
Disabled—The managed object entity cannot provide some or all services. This typically indicates that the corresponding hardware has detected some faults or is not installed. For example, when a provisioned circuit pack is removed, the operational state of the corresponding managed object entity becomes disabled.
Service State
The service state represents the current state of the managed object entity, which is dependent on the operational state and the administrative state. The service state is not maintained by IQ; it is derived by the MPower GNM and MPower EMS management applications based on the operational and administrative states of an object and its ancestors. The following states are defined:
In-service (IS)—indicates that the managed object entity is functional and providing services. Its
operational state is enabled and its administrative state is unlocked.
Out-of-service (OOS)—indicates that the managed object entity is not providing normal end-user services, either because its operational state is disabled, the administrative state of its ancestor object is locked, or the operational state of its ancestor object is disabled.
Out-of-service Maintenance (OOS-MT)—indicates that the managed object entity is not providing
normal end-user services, but it can be used for maintenance test purposes. Its operational state is
enabled and its administrative state is maintenance.
Out-of-service Maintenance, Locked (OOS-MT, Locked)—indicates that the managed object entity
is not providing normal end-user services, but it can be used for maintenance test purposes. Its
operational state is enabled and its administrative state is locked.
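The derivation performed by the management applications can be summarized in a short sketch (a hypothetical helper reflecting only the rules listed above):

    def service_state(admin, oper, ancestor_ok=True):
        """admin: 'unlocked' | 'locked' | 'maintenance'; oper: 'enabled' | 'disabled'.
        ancestor_ok is False if an ancestor is locked or operationally disabled."""
        if oper == "disabled" or not ancestor_ok:
            return "OOS"                    # not providing normal end-user services
        if admin == "maintenance":
            return "OOS-MT"                 # usable for maintenance tests
        if admin == "locked":
            return "OOS-MT, Locked"
        return "IS"                         # enabled and unlocked

    print(service_state("unlocked", "enabled"))      # IS
    print(service_state("unlocked", "disabled"))     # OOS
    print(service_state("maintenance", "enabled"))   # OOS-MT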
Service Provisioning
IQ provides service provisioning capabilities, which include establishing data path connectivity between endpoints for the delivery of end-to-end capacity. The services are originated and terminated in a TN780 network element. The services are provisioned at 2.5G and 10G granularity and are full-duplex, bidirectional services. IQ defines the following types of endpoints:
DTF Path Endpoints—are the line-side endpoints which are DTF (refer to “Digital Transport” on
page 3-21 for the description of DTF) encapsulated 10G or 2.5G channels. The line-side endpoints
are sourced and terminated in a DLM. As described in “Digital Line Module (DLM)” on page 3-9,
each DLM supports one OCG which includes ten 10G optical channels.
Trib-side Endpoints—are client payload specific and can be any of the payload type described in
“Client/Trib Interfaces” on page 3-18.
IQ automatically creates the endpoints on configuring the circuit packs as described in “Circuit Pack
Configuration” on page 4-19.
IQ supports two service provisioning modes, as well as protection group provisioning, to meet diverse customer needs, as described in:
"Manual Cross-connect Provisioning" on page 4-23
"Dynamically Signaled SNC Provisioning" on page 4-26
"Protection Group Provisioning" on page 4-28
As with equipment configuration, services can also be pre-provisioned as described in "Service Pre-provisioning" on page 4-30.
[Figure: Cross-connects in a Digital Add/Drop configuration — DLMs in slots 3 through 6, each with MAP/FEC functions, connect through the west and east BMMs (each carrying the OSC) to the west and east line fibers]
[Figure: Cross-connect example — a client signal passes through the TOM and TAM into the DLM (in slot 4) MAP/FEC function toward the west line fiber]
Hairpin Cross-connect—used to cross-connect two tributary ports within a TN780 network element. In Release 1.2, hairpinning is supported within a DLM between two tributary ports, in the same or different TAMs (see Figure 4-8 on page 4-26). Such hairpin cross-connects do not use the line-side optical channel resource.
The hairpin cross-connects are used in Metro applications for connecting two buildings within a short reach without laying new fibers.
[Figure 4-8: Hairpin cross-connect within a DLM (in slot 4) — a client signal received on one tributary port is looped back to another tributary port on the same DLM, without going through the line]
Automatic re-establishment of an SNC after network problems are corrected (note that SNCs are not automatically released on detecting network problems; the SNC must be released by the user at the source node where the SNC was originated).
User configured circuit identifiers for easy correlation of alarms and performance monitoring information to the end-to-end circuit, aiding in service level monitoring.
Circuit tracking, by storing and making available to the management applications the hop-by-hop circuit route along with the source endpoint of the SNC.
Refer to “IQ GMPLS Control Plane Overview” on page 4-47 for a detailed description of the GMPLS
functions.
Service Pre-provisioning
IQ supports pre-provisioning of circuits, enabling users to set up both manual cross-connects and SNCs in
the absence of DLMs and TAMs. Pre-provisioning of data plane connections keeps the resources in a
pending state until the DLM and/or TAM is inserted. IQ internally tracks resource utilization to ensure that
resources are not overbooked. The pre-provisioning of circuits requires that the supporting circuit packs
first be pre-configured.
Note: If a failure occurs on the protect while there is a lockout of working, traffic cannot switch to
the working until the lockout is cleared. This can result in a loss of traffic.
Lockout of protect
A user-initiated switch that, when invoked, causes the traffic that was on the protect line to be switched to the working line.
Traffic cannot be moved back to the protect line until the lockout of protect has been cleared.
Note: If a failure occurs on the working while there is a lockout of protect, traffic cannot switch to
the protect until the lockout is cleared. This can result in a loss of traffic.
Note: If a higher priority switch is in effect, then the manual switch command will be denied.
Note: The invoking of a lockout of working, a lockout of protect, or an automatic switch will over-
ride a manual switch.
Note: All switches (lockout of working, lockout of protect, manual, and automatic) will result in a
<50ms interruption in customer traffic.
Note: All switches are non-revertive. In a non-revertive switch, once the traffic is switched from working to protect, the traffic will stay on protect until there is a failure (resulting in an automatic switch) or a user initiated switch is invoked (manual switch, lockout of working, or lockout of protect).
[Figure: Y-Cable protection — the working line and the protect line join through a Y-cable]
PM Data Collection
IQ collects digital PM data and optical PM data.
IQ utilizes gauges to collect optical PM data. The gauge attribute type, as defined in the ITU X.721 specification, indicates the current value of the PM parameter and is of type float. The gauge value may increase or decrease by an arbitrary amount, and it does not wrap around. It is a read-only attribute.
Counters are utilized to collect the digital PM data. The counter value is a non-negative integer. The value of the counter is reset to zero at the beginning of the PM period and counts upward in increments of 1. The counter size is selected in such a way that the counter does not roll over within the collection period.
PM Thresholding
PM thresholding provides early detection of faults before significant effects are felt by the end users. Degradation of service can be detected by monitoring error rates. Threshold mechanisms on counters and gauges allow the detection of such trends and provide a warning to users when the error rate becomes high.
IQ supports thresholding for both optical PM gauges and digital PM counters. During the PM period, if the
current value of a performance monitoring parameter reaches or exceeds corresponding configured
threshold value, threshold crossing notifications are sent to the management applications.
Optical PM Thresholding—IQ performs thresholding on some optical PM parameters by utilizing high and low threshold values. Note that the thresholds are configurable for some PM parameters; for others, the system utilizes pre-defined threshold values. An alarm is reported when the measured value of an optical PM parameter is outside of its threshold values. The alarms are automatically cleared by IQ when the recorded value of the optical PM parameter is within the acceptable range.
Digital PM Thresholding—IQ performs thresholding on some digital PM data utilizing high threshold values which are user-configurable. A Threshold Crossing Alert (TCA) is reported when a PM counter, within a collection period, exceeds the corresponding threshold value. When a threshold is crossed, IQ continues to count the errors during that accumulation period. As with PM counters, TCAs are transient in nature and are reported as events which are logged in the event log buffers as described in “Event Log” on page 4-10. The TCAs do not have corresponding clearing events since the PM counter is reset at the beginning of each period.
Note that PM thresholding is supported for some PM parameters, but not for all.
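The two thresholding behaviors differ in their alarm semantics, which the following hypothetical sketch illustrates; the reporting function is a stand-in for the notification path, not a real IQ API:

```python
# Hypothetical sketch of the two thresholding behaviors described above.

def report(kind, msg):
    print(kind + ":", msg)  # stand-in for alarm/event reporting

def check_optical_gauge(value, low, high, alarm_active):
    """Optical PM: alarm while the value is outside [low, high]; the
    alarm clears automatically once the value is back in range."""
    out_of_range = value < low or value > high
    if out_of_range and not alarm_active:
        report("ALARM", "optical PM parameter out of range")
    elif not out_of_range and alarm_active:
        report("CLEAR", "optical PM parameter back in range")
    return out_of_range  # the new alarm state

def check_digital_counter(count, threshold, tca_sent):
    """Digital PM: a one-shot TCA when the counter reaches the threshold
    in a collection period; no clearing event, because the counter is
    reset at the start of each period."""
    if count >= threshold and not tca_sent:
        report("EVENT", "TCA: PM counter threshold crossed")
        tca_sent = True  # counting continues for the rest of the period
    return tca_sent
```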
PM Data Transfer
IQ stores all PM data in flat files in CSV format. Users (customers) can use these flat files to integrate PM data analysis into their management applications or simply view the PM data through spreadsheet applications.
Users can schedule the TOD (time of day) at which the network element automatically transfers the PM
data to the user specified FTP server. Users can configure primary and secondary FTP server addresses.
If the data transfer to the primary FTP server fails, the PM data is transferred to the secondary FTP server.
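A minimal sketch of the transfer-with-failover behavior is shown below; host names, credentials, and paths are illustrative assumptions, not part of the product:

```python
# Hypothetical sketch of the scheduled PM file transfer with failover.
import ftplib
import os

def upload_pm_file(local_path, primary_host, secondary_host, user, password):
    """Try the primary FTP server first; if that transfer fails, fall
    back to the secondary FTP server."""
    for host in (primary_host, secondary_host):
        try:
            with ftplib.FTP(host, user, password) as ftp:
                with open(local_path, "rb") as f:
                    ftp.storbinary("STOR " + os.path.basename(local_path), f)
            return host  # transfer succeeded on this server
        except ftplib.all_errors:
            continue     # transfer failed; try the next server
    raise RuntimeError("PM data transfer failed on both FTP servers")
```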
PM Data Configuration
IQ allows users to customize the PM data collection. Users can configure the PM data collection through
management applications. IQ supports the following configuration options:
Reset the current 15-minute and 24-hour counters at any time per managed object instance.
Change the default threshold values according to the customer’s error monitoring needs.
Enable or disable the PM threshold crossing alarm and TCA reporting per attribute per managed
object instance.
Configure the frequency of PM flat file uploading to the FTP servers as configured.
Configure periodic uploading of PM data to the client machine.
User Identification
Each network element user is assigned a unique user ID. The user ID is case-sensitive and contains 4 to
10 alphanumeric characters. The user specifies this ID (referred to as user login ID) to log into the network
element.
By default, IQ creates three user accounts with the following user login IDs:
secadmin with security administrator privilege enabled. The default password is Infinera1 and the
user is required to change the password at first login. This user login ID is used for initial login to the
network element.
netadmin with network administrator privilege enabled. The default password is Infinera1 and the
user is required to change the password at first login. Additionally, this account is disabled by
default. It must be enabled by the user with security administrator privilege through the TL1 Interface
or MPower GNM. This account is used to turn-up the network element.
emsadmin with all privileges enabled. The default password is Infinera1. This account is disabled by default. It must be enabled by the user with security administrator privilege through the TL1 Interface or MPower GNM. MPower EMS Server communicates with the network element using this account (referred to as the MPower EMS account) when it is started, without requiring additional configuration. Users can create additional MPower EMS accounts which MPower EMS Server can use to connect to the network element. These accounts must have the EMS access capability enabled during creation.
A single user can open multiple sessions. IQ maintains a list of all current active sessions.
Note: IQ supports a maximum of 30 active user sessions at any given time. All login attempts
beyond 30 sessions will be denied and a warning message is displayed.
Authentication
IQ supports standards-based authentication features. These features ensure that only authorized users log
into the network element through management interfaces.
Each time the user logs in, the user must enter a user ID and password. For the initial login, the user
specifies the default password set by the security administrator. The user must then create a new
password based on the following requirements.
The password must contain
Six to ten alphanumeric characters
At least one alphabetic and one numeric or one special character
The password may contain these special characters: @ # $ % ^ ( ) _ + | ~ { } [ ] ? -
The password must not contain:
The associated user ID
Blank spaces
The passwords are case-sensitive and must be entered exactly as specified.
The password is stored in the network element database in a one-way encrypted form.
The password rotation is implemented to prevent users from re-using the same password. The users are
forced to use passwords different from the previously used passwords. The number of history passwords
stored is configurable.
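The password rules above can be expressed as a short validation sketch. This is an illustrative approximation, not the actual IQ validation code:

```python
# Hypothetical sketch of the password rules, including the rotation
# check against the stored history.
SPECIAL_CHARS = set("@#$%^()_+|~{}[]?-")

def validate_password(password, user_id, history):
    if not (6 <= len(password) <= 10):
        return False   # six to ten characters
    if not any(c.isalpha() for c in password):
        return False   # at least one alphabetic character
    if not any(c.isdigit() or c in SPECIAL_CHARS for c in password):
        return False   # and one numeric or one special character
    if " " in password:
        return False   # no blank spaces
    if user_id in password:
        return False   # must not contain the user ID (case-sensitive)
    if password in history:
        return False   # rotation: must differ from previous passwords
    return True
```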
Access Control
In addition to user login ID validation and password authentication, IQ supports access control features to
ensure that the session requester is trusted, such as:
Detection of unsuccessful user logins: if the number of unsuccessful login attempts exceeds the configured limit, the session is terminated and a security event is logged in the security audit log.
A user session is automatically terminated when the cable connecting the user computer and the network element is physically removed. The user must follow the regular login procedure after the cable is reconnected.
The activity of each user session is monitored. If, for a configurable period of time, no data is exchanged between the user and the network element, the user session times out and is automatically terminated.
Authorization
Multiple access privileges are defined to restrict user access to resources. Each access privilege allows a
specific set of actions to be performed. Assign one or more access privileges to each user account. For the
description of the actions allowed for each access privilege, see Table 4-1 on page 4-37. For the
description of the managed entities, see “Managed Object Entities” on page 4-15.
There are six levels of access privileges:
Monitoring Access (MA)—allows the user to monitor the network element; cannot modify anything
on the network element (read-only privilege). The Monitoring Access is provided to all users by
default.
Security Administrator (SA)—allows the user to perform network element server security manage-
ment and administration related tasks.
Network Administrator (NA)—allows the user to monitor the network element, manage equipment, turn up the network element, provision services, and administer various network-related functions, such as auto-discovery and topology.
Network Engineer (NE)—allows the user to monitor the network element and manage equipment.
Provisioning (PR)—allows the user to monitor the network element, configure facility endpoints, and
provision services.
Turn-up and Test (TT)—allows the user to monitor, turn-up, and troubleshoot the network element
and fix network problems.
Managed Object Entity              Operation                  SA   NA   NE   PR   TT   MA
Equipment Management
Chassis                            Create, delete and update  No   Yes  Yes  No   No   No
DLM                                Create, delete and update  No   Yes  Yes  No   No   No
TAM                                Create, delete and update  No   Yes  Yes  No   No   No
BMM                                Create, delete and update  No   Yes  Yes  No   No   No
Alarm input and output contacts    Update                     No   Yes  Yes  No   No   No
TOM                                Create, delete and update  No   Yes  Yes  No   No   No
OAM                                Create, delete and update  No   Yes  Yes  No   No   No
OMM                                Create, delete and update  No   Yes  Yes  No   No   No
Users (with any access privilege) can view the audit logs through the management applications.
Security Administration
IQ defines a set of security administration functions and parameters that are used to implement site-
specific policies. Security administration can be performed only by users with security administrator
privilege. The supported features include:
View all users currently logged on
Disable and enable a user account (this operation is allowed only when the user is not logged on)
Modify user account parameters, including access privilege and password expiry time
Delete a user account and its attributes, including password
Reset any user password to system default password
Monitor security audit logs to detect unauthorized access
Monitor the security alarms and events raised by the network element and take appropriate actions
Configure system-wide security administration parameters:
Default password
Inactivity time-out period
Maximum number of invalid login attempts allowed
Number of history passwords
Advisory warning message displayed to the user after successful login to the network element
Software Download
IQ, which operates both the TN780 and the Optical Line Amplifier network elements, is packaged into a single software image. The software image includes the software components required for all the circuit packs in the TN780 and Optical Line Amplifier network elements.
Users can remotely download the software image from a customer specified FTP server to the MCM of the
TN780 and OMM of the Optical Line Amplifier network element. Once users download the software image
to the MCM/OMM and initiate the software upgrade procedure, the software is automatically distributed to
the remaining circuit packs within the chassis.
The network element can store up to three versions of the software image at the same time.
Software Upgrade
The network elements support in-service software upgrade. The software upgrade procedure lets users
activate a different software version from the one currently active. The following software upgrade
operations are supported:
Install Software—this operation lets users activate the new software image version with an empty
database. The software image may be older or newer than the active version.
Upgrade Software—this operation lets users activate the new software image version with the previ-
ously active database. The previously active database version must be compatible with the new
software image version.
Activate Software and Database—this operation lets users activate a new software image version
and a new database version. The database version must be compatible with the software image ver-
sion. Before upgrading the software, the new database image must be downloaded to the network
element.
Restart Software—this operation lets users activate the current software image with an empty data-
base.
In-Service Rollback—allows the system to gracefully “fall back” or “downgrade” to a prior release in the rare event that a failure is experienced during the upgrade process.
In general, upgrading the software does not affect existing service. However, if the new software image
version includes a different Firmware/FPGA version than the one currently active, it could impact existing
services. If this occurs, a warning message is displayed.
Users must upgrade the software on a node-by-node basis. Therefore, at any given time, the network
elements within a network may be running at least two software image versions. These different images
must be compatible. In the presence of multiple software versions, the network provides functions that are
common to all the network elements.
The software upgrade procedure:
1. Verifies that the software and database versions are compatible. If they are not compatible, the
upgrade procedure is not allowed.
2. Validates the uncompressed software image. If the software image is invalid, the upgrade proce-
dure is not allowed.
3. Decompresses the software image. If there is not enough memory on the network element to store
the decompressed image, the upgrade procedure is aborted and software image reverts to the pre-
viously active software image version.
4. Reboots the network element so that the new software image becomes active. If the reboot fails,
the upgrade procedure is aborted and software image reverts to the previously active software
image version.
5. When the new software image is activated, the software upgrade procedure updates the format of
the Event Log and Alarm table alarms, if necessary.
Note: When the software is upgraded, the PM historical data is not converted to the new format (if there is a change in the format) and it is not persisted. Therefore, before you upgrade the software, you must upload and save the PM data to your local servers.
In general, if the upgrade procedure is aborted, the software reverts to the previously active version. The
procedure reports events and alarms indicating the cause of the failure.
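The ordered checks above can be sketched as follows. This is a hypothetical model in which a plain dict stands in for the downloaded software image; none of the names are real IQ APIs:

```python
# Hypothetical sketch of the upgrade procedure's ordered checks.

class UpgradeAborted(Exception):
    """Any failed step aborts the procedure; the previously active
    software image version remains in use."""

def upgrade_software(image):
    # Step 1: software and database versions must be compatible
    if image["db_version"] > image["sw_version"]:
        raise UpgradeAborted("incompatible software and database versions")
    # Step 2: validate the software image
    if not image["checksum_ok"]:
        raise UpgradeAborted("invalid software image")
    # Step 3: decompress; requires enough free memory
    if image["uncompressed_size"] > image["free_memory"]:
        raise UpgradeAborted("not enough memory to decompress image")
    # Step 4: reboot so the new image becomes active
    if not image["reboot_ok"]:
        raise UpgradeAborted("reboot failed")
    # Step 5: update Event Log and Alarm table formats if necessary
    return "new software image active"

# Example run with a well-formed image description:
print(upgrade_software({"db_version": 1, "sw_version": 2, "checksum_ok": True,
                        "uncompressed_size": 10, "free_memory": 64,
                        "reboot_ok": True}))
```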
The software upgrade is also supported when there is only one MCM or OMM in the Node Controller chassis. During the upgrade process, communication with the clients and with other network elements within the network is interrupted.
UTStarcom has implemented FPGAs within many of the TN780 hardware modules to take advantage of the field-updatable features of the FPGA. These FPGAs support many different features and functions within the hardware and can be remotely upgraded in the field to add features or correct design inefficiencies without requiring replacement, repair, and return of the hardware modules.
Upgrade of the FPGAs is performed by updating the FPGA “image”, which is a list of programmable instructions that tell the FPGA how it should operate and what features it should provide. New FPGA images may (or may not) be provided within a new software release, and any enhancements to FPGA images will be identified in the Software Release Notes describing the functional change to the hardware that the FPGA image provides.
Note: Although the hardware upgrade can be performed from a remote location, the hardware
module will require a cold reboot.
Note: The FPGA image download may be service impacting to the targeted module.
Database: Download/Backup/Restoration/Rebranding
To ensure that the correct database is activated on a network element, the database image includes this
information:
The database version. This is used to check its compatibility with the software image version. The database image version must be older than or equal to the software image version.
The backplane ID of the network element on which the database was created.
The following database operations are supported:
“Database Download” on page 4-43
“Database Backup” on page 4-43
“Database Restoration” on page 4-44
Database Download
Users can download the previously backed up database file to the network element from a specified FTP
server. Up to three database versions can be stored on the network element at a time. The downloaded
database file does not change the current active database. It is simply stored in the persistent memory of
the network element.
Database Backup
There are two database backup modes:
Manual Database Backup—Users can manually back up the current database image to the specified FTP server at any time.
Scheduled Database Backup—Users can schedule the database to be backed up automatically, at
either daily or weekly intervals. Users can also specify a primary and secondary FTP server to store
the backup. By default, the database is backed up to the primary server; however, if that server is not
available, the database is backed up to the secondary server.
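A minimal sketch of the scheduling and failover logic described above, with illustrative stand-ins for the server names and the upload step:

```python
# Hypothetical sketch of the two backup modes.
from datetime import timedelta

def backup_database(primary, secondary, upload):
    """Back up to the primary FTP server; if it is not available, back
    up to the secondary server instead. `upload` is a stand-in for the
    actual FTP transfer."""
    for server in (primary, secondary):
        if upload(server):
            return server
    raise RuntimeError("database backup failed on both servers")

def next_scheduled_backup(last_backup, interval):
    """Scheduled mode: interval is 'daily' or 'weekly'."""
    step = timedelta(days=1) if interval == "daily" else timedelta(weeks=1)
    return last_backup + step
```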
The backed up database file contains:
Database file, which includes configuration information stored in the persistent memory on the net-
work element.
Alarm table stored in the persistent memory of the network element.
Event Log stored in the persistent memory of the network element.
Database Restoration
Users can perform the restore operation to activate a new database image file with the current active
software image version. The new database image file must be compatible with both the software image
version and the network element. The restore operation restarts the network element and activates the new database image. Users can restore the database at system reboot time or at any time during normal operation.
If the restore operation fails, the software rolls back to the previously active database image and an alarm
is raised indicating the failure of the restore operation. When the database is successfully restored, the
alarm is cleared. Users can manually restore the database.
Depending on the differences between the two databases, the database restore operation could affect
service. The database restoration procedure:
Restores the configuration data as per the restored database. The configuration data in the restored
database may differ from the current hardware configuration. In such scenarios, in general, the con-
figuration data takes precedence over the hardware.
Restores the alarms in the Alarm table by verifying the current alarm condition status. For example,
if there is an alarm entry in the restored Alarm table but the condition is cleared, that alarm is cleared
from the current Alarm table. On the other hand, if the alarm condition still exists, the corresponding
alarm entry is stored in the current Alarm table with the original timestamp.
The database image can be restored at system reboot time or at any time during normal operation.
Following is the description of some scenarios where the configuration data in the restored database
differs from the current hardware configuration and how they are handled:
Scenario 1: The restored database contains a managed equipment entity but there is no corre-
sponding hardware present in the chassis. In this scenario, the corresponding equipment is consid-
ered to be pre-configured (refer to “Circuit Pack Pre-configuration” on page 4-19).
connects (see “Dynamically Signaled SNC Provisioning” on page 4-26) along the SNC path. However, it takes approximately 45 minutes to release the signaled cross-connects. Note that the SNC configuration information is stored on the source node only. The intermediate nodes contain only the signaled cross-connects.
For example, consider an SNC that spans three nodes, Node A, Node B, and Node C, where Node A is the source node. Consider the following sequence of operations:
Backup the database on Node A
Create an SNC from Node A to Node C passing through Node B, which results in corresponding signaled cross-connects being created on Node B and Node C
Restore the database on Node A
In this case, the restored database on Node A does not contain the SNC configuration information. However, Node B and Node C have signaled cross-connects, which are released after 45 minutes to match the restored database on Node A.
Consider the following sequence of operations for the same network configuration as in the previous example:
Backup the database on Node B
Create an SNC from Node A to Node C passing through Node B, which results in corresponding signaled cross-connects being created on Node B and Node C
Restore the database on Node B, which results in the signaled cross-connect corresponding to the SNC created after the database backup being deleted
In this scenario, since Node A contains the SNC configuration, the corresponding deleted signaled cross-connect on Node B is recreated. However, it may take up to 15 minutes for the SNC to come back up.
Database rebranding
The database from one network element can be restored into another network element by re-branding. When an MCM is inserted into a chassis, there are two options: if the MCM was not commissioned previously, the MCM boots normally; if the MCM was commissioned previously but used in another network element, the MCM must be re-branded. For more information on re-branding, refer to the UTStarcom TN780 Turn-up and Test Guide.
Network Topology
IQ utilizes OSPF-TE to discover Digital Optical Network topology. It models the Digital Optical Network
topology by defining the following elements:
A routing node, which corresponds to a network element within the Digital Optical Network.
A control link, which corresponds to the OSC control channel between adjacent routing nodes or network elements. There is one control link for each fiber, so in the case of multi-chassis, multi-fiber sites, there will be multiple control links between adjacent network elements.
A GMPLS link, which corresponds to transport capacity between adjacent TN780s. There is one GMPLS link for each fiber, so in the case of multi-chassis, multi-fiber sites, there will be multiple GMPLS links between adjacent network elements. Each GMPLS link supports up to 400Gbps transport capacity, which maps to four OCGs or four Traffic Engineering (TE) links.
Within the Digital Optical Network, a routing node corresponds to a network element which could be a
TN780 or an Optical Line Amplifier, a control link corresponds to OSC communication between the
adjacent network elements (TN780 or Optical Line Amplifier) and GMPLS link corresponds to the digital
link between adjacent TN780 network elements.
IQ defines two topology maps:
Physical Network Topology—The physical network topology is defined by the topology of the OSC,
which provides the communication path for the routing and signaling protocols between network ele-
ments. The physical network topology mirrors the physical fiber connectivity between the network
elements, and thus the topology elements include all network elements, TN780 and Optical Line
Amplifier, and control links, which correspond to the fibers connecting the network elements. (See
Figure 4-10 on page 4-48.)
However, independent of the physical fiber connectivity, customers can create topology partitions where each partition represents a contiguous routing and signaling domain. The topology partitions are created by disabling the OSPF interface. In Figure 4-11 on page 4-49, Domain 1 and Domain 2 are two topology partitions created by disabling GMPLS between network element C and network element D. Note that in Release 1.2, SNCs spanning two topology partitions are not supported; the partitions are operated as two separate networks.
[Figure 4-11: Two topology partitions, Domain 1 and Domain 2, created by disabling GMPLS between Node C and Node D (Nodes A through G shown)]
Service Provisioning Topology—the service provisioning topology is a higher-layer logical topology providing users with a view of topological nodes where services can be terminated, groomed or amplified, and the associated digital links between them. In a Digital Optical Network, the service provisioning topology consists of TN780 network elements and digital links between them. Thus, in a service provisioning topology, all Optical Line Amplifiers are eliminated. Figure 4-12 on page 4-49 illustrates the service provisioning topology of the physical topology shown in Figure 4-10 on page 4-48.
[Figure 4-12: Service provisioning topology, showing TN780 nodes connected by TE Links]
Users can view the physical network topology, referred to as physical view, and service provisioning
topology, referred to as provisioning view, through the management applications.
Thus, the physical topology represents the topology of the control plane traffic (e.g. OSPF-TE messages)
and management plane traffic (messages exchanged between the network element and the management
application, such as MPower EMS), whereas the service provisioning topology represents the data plane
traffic (client traffic).
Traffic Engineering
IQ supports several traffic engineering parameters at both the link level and the node level. This rich set of traffic engineering parameters enables users to create networks that are utilized most efficiently.
The node and equipment level traffic engineering parameters include:
Node Inclusion List—specifies an ordered list of nodes an SNC must pass through. The inclusion list is ordered and must flow from source to destination. This capability is used to constrain an SNC to traverse certain network elements in a particular order. For example, in the network shown in the figure on page 4-51, the constraint to include Node D can be used to mandate a route with A as the source, B as one of the intermediate nodes, and F as the destination. This allows the traffic to be dropped at site D in the future. The inclusion list is configurable through the management applications.
[Figure: Example network with Domain 1 and Domain 2 (Nodes A through G); GMPLS is disabled between Node C and Node D]
Node Exclusion List—specifies a list of nodes an SNC must not pass through. For example, the exclusion list can be used to avoid congested nodes. The exclusion list is not ordered and it is configurable through the management applications.
Use installed equipment only—IQ enables equipment pre-provisioning, where equipment is pre-provisioned but not installed. This constraint forces an SNC to pass through installed equipment only. Users can specify this through the management applications.
Disable OCG Port—As described in “Optical Transport Layers” on page 3-30, the TN780 employs
two-stage optical multiplexing where the transport capacity is added to the GMPLS link in increments
of 100Gbps by adding OCGs (DLMs). Using this constraint, users can disable the use of an OCG to set up dynamically signaled SNC circuits. However, the OCG can still be used to set up manual cross-connects. For example, users may want to set aside some bandwidth for manual cross-connect provisioning. This constraint is configurable through the management applications.
Switching Capacity—This parameter considers the switching/grooming capacity of the TN780. As
described in “Bandwidth Grooming” on page 3-26, in Release 1.2, the switching and grooming
between non-adjacent DLMs is supported.
The GMPLS link level traffic engineering parameters include:
Link Cost—the cost of the GMPLS link can be provisioned through the management applications. The route with the least total cost is selected. Users can use this to control how the traffic is routed.
Link Inclusion List—specifies an ordered list of control links an SNC must pass through. This is similar to the node inclusion list described earlier.
Link Exclusion List—specifies a list of control links an SNC must not pass through. This is similar to the node exclusion list described earlier.
Link Capacity—the link capacity is another parameter that is considered during route computation.
IQ maintains the following information based on the hardware state and user configuration informa-
tion, which is retrievable through the management applications:
Maximum capacity of the link based on the installed hardware
Usable capacity of the link based on the hardware and software state
Available capacity of the link for the new service requests
Additionally, users can provision the admin weight or cost for the control link. The control link cost denotes
the desirability of the link to route control traffic and management traffic. The lower (numerically) the cost,
the more desirable the link is.
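A minimal sketch of how these constraints might combine during route selection is shown below; the data structures and function names are hypothetical, not the actual IQ route computation:

```python
# Hypothetical sketch: filter candidate routes by the node inclusion
# (ordered) and exclusion (unordered) lists, then pick the lowest cost.

def ordered_subsequence(route, inclusion):
    """True if the inclusion-list nodes appear in `route` in order."""
    it = iter(route)
    return all(node in it for node in inclusion)

def select_route(candidates, include=(), exclude=(), costs=None):
    costs = costs or {}
    viable = [r for r in candidates
              if ordered_subsequence(r, include)      # ordered list
              and not any(n in r for n in exclude)]   # unordered list
    # the route with the least total link cost wins (default cost 1)
    return min(viable,
               key=lambda r: sum(costs.get((a, b), 1)
                                 for a, b in zip(r, r[1:])),
               default=None)

# Example: constrain the route from A to F to pass through D
routes = [["A", "B", "D", "F"], ["A", "C", "F"]]
print(select_route(routes, include=["D"]))  # -> ['A', 'B', 'D', 'F']
```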
All the traffic engineering parameters described above are exchanged between the network elements as part of the topology database updates.
SNCs that are already established are neither deleted nor rerouted. When the fault condition is cleared, the SNCs resume their operation.
Faults, such as fiber cuts, that result in topology partitioning: such fault conditions result in topology database updates. However, SNCs that span partitioned topologies will not provide service. The SNC becomes operational after the fault condition is cleared.
[Figure: DCN connectivity. An FTP Server and MPower EMS reach Nodes A through E through Routers A through D and a switch/hub; each node has DCN-A and DCN-B connections, and the DTN Main Chassis houses an active MCM and a standby MCM]
[Figure: The same DCN configuration with a failed link between the switch/hub and a DCN router]
Note: Link failures between the switch/hub and the DCN routers are not detected by the network element, nor will any redundant path be provided by the network element. It is assumed that the customer will deploy routers that provide the necessary redundancy to take care of such failures.
[Figure: The same DCN configuration with a failed MCM; the standby MCM of the Main Chassis becomes active]
GNE. The GNE provides management proxy service to any management traffic received via its
DCN, OSC or Craft interfaces.
Subtending Network Element (SNE)—this is a network element that does not have physical connec-
tivity to the DCN and is not directly IP addressable from the DCN. The SNE is capable of providing
management proxy support to any management traffic received through its Craft and OSC inter-
faces.
Direct Network Element (DNE)—this is a network element that has physical connectivity to the DCN and is directly IP addressable from the DCN. The difference between a GNE and a DNE is that the DNE does not provide any proxy management services; the MAP function is disabled by the user.
[Figure: GNE, SNE, and DNE connectivity to the DCN and MPower EMS, distinguishing physical connectivity from DCN IP connectivity]
The MAP provides proxy services to the following protocols and enables various accessibility options as described below (a connection sketch follows the list):
HTTP Protocol—The MAP service on the GNE and SNE network elements relays the HTTP proto-
col messages by listening to a dedicated HTTP Proxy port 10080. This capability enables the
MPower EMS and MPower GNM applications to access all network elements within the purview of
the GNE through the DCN ports. Also, it enables the MPower GNM to access all network elements
within the purview of a network element through the Craft Ethernet and Craft Serial interfaces.
XML/TCP Protocol—The MAP service on the GNE and SNE network elements relays the XML/TCP
protocol messages by listening to a dedicated XML/TCP Proxy port 15073. This capability enables
the MPower EMS and MPower GNM applications to access all network elements within the purview
of the GNE through the DCN ports. Also, it enables the MPower GNM to access all network elements within the purview of a network element through the Craft Ethernet and Craft Serial interfaces.
Telnet Protocol—The MAP service on the GNE and SNE relays the Telnet protocol messages by
listening to a dedicated Telnet Proxy port 10023. This capability enables the Telnet sessions to be
launched from the MPower EMS and MPower GNM applications to access all network elements
within the purview of the GNE through the DCN ports. Similarly, it enables the Telnet session to be
launched from the MPower GNM to access all network elements within the purview of a network ele-
ment through the Craft Ethernet and Craft Serial interfaces.
FTP Protocol—The MAP service on the GNE and SNE relays the FTP protocol messages by listening to a dedicated FTP Proxy port 10021. This capability enables communication between the FTP client on the SNE and the EMS or an external FTP server through the GNE. The FTP client is used to upload performance monitoring data, download software, and so on.
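The proxy ports listed above can be collected into a small lookup, as in this hypothetical sketch; the GNE address is an illustrative assumption:

```python
# Hypothetical sketch of reaching network elements through the GNE's
# MAP proxy ports.
import socket

MAP_PROXY_PORTS = {
    "http":    10080,  # HTTP proxy (MPower EMS / MPower GNM)
    "xml_tcp": 15073,  # XML/TCP proxy
    "telnet":  10023,  # Telnet proxy
    "ftp":     10021,  # FTP proxy (PM upload, software download)
}

def open_proxy_connection(gne_address, protocol):
    """Connect to the GNE's dedicated proxy port; the MAP service relays
    the session toward the target network element."""
    return socket.create_connection((gne_address, MAP_PROXY_PORTS[protocol]),
                                    timeout=10)

# e.g. sock = open_proxy_connection("10.10.1.1", "telnet")
```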
Configuration Settings
IQ provides several configuration options so that the customers can design their DCN and management
communication access to meet their needs. Following are the various configuration options provided:
MAP Enabled—users must set this option to enable MAP services on a network element.
Primary GNE IP Address—the Primary GNE IP Address is configured on SNEs that do not have a
DCN IP address assigned. The Primary GNE IP Address is the Router ID (also known as the
GMPLS Node ID) of the GNE in the same domain as this SNE. If more than one GNE exists in the
same domain, it is recommended that the closest GNE, in terms of hops, from this SNE be selected as the primary GNE. The primary GNE's main function is to upload the historical Performance Monitoring data.
Secondary GNE IP Address—as with Primary GNE IP Address parameter, the Secondary GNE IP
Address is configured on SNEs. The Secondary GNE IP Address is the Router ID (also known as
the GMPLS Node ID) of the GNE within the same domain as this SNE. The SNE accesses the Sec-
ondary GNE if the Primary GNE is not available. It is recommended to choose as the Secondary GNE the GNE which (see the sketch after this list):
Is the next closest network element in terms of number of hops from the SNE
Provides a completely separate path to the management station from the SNE. In other words, the inability to reach the Primary GNE should never mean that the Secondary GNE is also unreachable, and vice-versa.
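The selection recommendation can be sketched as follows; the hop counts and path sets are illustrative inputs, not real IQ data structures:

```python
# Hypothetical sketch of the recommended GNE selection. `gnes` maps a
# GNE Router ID to (hop_count, set_of_path_nodes).

def pick_primary_and_secondary(gnes):
    ordered = sorted(gnes, key=lambda g: gnes[g][0])  # nearest first
    primary = ordered[0]                              # closest in hops
    # Prefer a secondary whose path to the management station shares
    # no nodes with the primary's path (completely separate path).
    for candidate in ordered[1:]:
        if gnes[candidate][1].isdisjoint(gnes[primary][1]):
            return primary, candidate
    return primary, ordered[1] if len(ordered) > 1 else None
```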
Static Routing
IQ provides the static routing capability. One application of static routes is to enable the network elements
to reach external networks that are not part of the DCN network. As shown in Figure 4-18 on page 4-59,
the NTP Server may be located in external networks, outside of the DCN network. In this scenario, users
can configure the static routes to external networks.
[Figure 4-18: Static route to an external NTP Server. The NTP Server (30.30.1.2) sits in the customer network behind Router A; the network element (DCN IP address 10.10.1.1) is configured with a static route: destination IP address 30.30.1.2, subnet mask 255.255.255.255, gateway 10.10.1.254, cost 1]
Time-of-Day Synchronization
IQ provides accurate and synchronized timestamps on events and alarms, ensuring proper ordering of alarms and events at both the network element and network levels. The synchronized timestamp eases network-level debugging and eliminates the inaccuracies caused by manual configuration of the system time on each network element. Additionally, the timestamp complies with the UTC format defined in ISO 8601 and includes granularity down to seconds.
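For illustration, an ISO 8601 UTC timestamp with one-second granularity can be produced as follows (a minimal sketch, not the network element's implementation):

```python
# Sketch of an ISO 8601 UTC timestamp with one-second granularity.
from datetime import datetime, timezone

def event_timestamp():
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# e.g. '2004-06-01T14:23:07Z'
```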
IQ supports the Time-of-Day Synchronization by implementing NTP Client which ensures that IQ’s system
time is synchronized with the specified NTP Server operating in the customer network and also
synchronized to Coordinated Universal Time (UTC). IQ also implements an NTP Server, so that one
network element may act as an NTP Server to the other network elements that do not have access to the
external NTP Server. As shown in Figure 4-19 on page 4-60, typically a GNE (GNE-A node) is configured
to synchronize to an external NTP Server in the customer network and the SNEs (SNE-A, SNE-B, and
SNE-C nodes) are configured to synchronize to the GNE.
[Figure 4-19: NTP synchronization. The GNE synchronizes to an external NTP Server in the customer network through the DCN, and the SNEs synchronize to the GNE]
The TN780 and Optical Line Amplifier network elements also provide a local clock with an accuracy of 23ppm, or about a minute per month. If the GNE (with NTP enabled) fails to access the external NTP Server, IQ NTP (Client and Server) uses the local clock as a time reference. When the connectivity to the external NTP Server is restored, the IQ NTP Client and Server on the GNE re-synchronize with the external NTP Server, and the new synchronized time is propagated to all the network elements within the routing domain.
Following are some recommendations for configuring the NTP Server within a Digital Optical Network:
Configure one external NTP Server with Stratum Level 4 or higher for each routing domain of a Dig-
ital Optical Network.
Configure the GNE and SNE network elements to point to the external NTP Server. If required, configure static routes on the GNE and SNE network elements to reach the external NTP Server through the DCN port.
Configure the SNEs to point to the GNE as the NTP Server.
UTStarcom offers MPower Network Management Suite, referred to as MPower, a scalable, robust, carrier
class management software suite which simplifies Digital Optical Network operation and OSS integration.
As described in “MPower Network Management Overview” on page 1-7, MPower currently includes two
management software applications: MPower GNM and MPower EMS.
Figure 5-1 Digital Optical Network and UTStarcom MPower Management Solution
This chapter provides a brief description of the supported features in the following sections:
“MPower Graphical Node Manager” on page 5-3
“MPower EMS” on page 5-15
Refer to UTStarcom MPower GNM User Guide, UTStarcom MPower EMS Administrator Guide and
UTStarcom MPower EMS User Guide for a detailed description of the graphical user interface and detailed
procedures to manage Digital Optical Network.
The following sections provide highlights of the features supported by the MPower GNM. For a detailed
description of how to use MPower GNM to manage the network element, refer to UTStarcom MPower
GNM User Guide.
“Graphical User Interface” on page 5-4
“Inventory Manager” on page 5-10
“Network Topology Display” on page 5-11
“Software Configuration Management” on page 5-11
“Service Provisioning” on page 5-13
“Performance Management” on page 5-14
“Security Management” on page 5-14
[Figure: MPower GNM main window, with the Main Menu, Equipment Tree, Equipment View, Workspace Area, Alarm Manager, and Status Bar]
The topology view of all the network elements that are in the same network neighborhood as the tar-
get network element the MPower GNM is logged into.
Support for MCM redundancy. The redundancy state is displayed on the card (act and stby). New pop-up menu items, Switchover and Make Standby, have been added. The quick view browser indicates the redundancy state of the selected MCM.
[Figure: MCM quick view panel showing the redundancy state, with the Switchover and Make Standby pop-up menu items]
Protection Group Manager window. Allows for the creation and deletion of protection groups. The protection manager features right-click accessible menu options for individual protection groups.
Support for 80 channel BMM. When selecting the BMM properties for an 80 channel BMM, the BMM OCG Port field will number 1 through 8.
Support for Nodal Control and Timing (NCT) ports used in a multi-chassis configuration.
[Figure: NCT ports shown in the Equipment View and in the Equipment Tree]
Inventory Manager
The MPower GNM includes Inventory Manager applications through which users can monitor and also
manage various resources in the network element. The following inventory applications are provided:
Equipment Manager—to view and manage the equipment inventory including chassis and circuit
packs.
Facility Manager—to view and manage the inventory of termination points including physical ports
and logical ports.
Cross-connect Manager—to view and manage manual static cross-connects and signaled cross-
connects which are described in “Service Provisioning” on page 4-23.
Circuit Manager—to view and manage dynamically signaled SNCs described in “Dynamically Sig-
naled SNC Provisioning” on page 4-26.
Protection Group Manager—to view and manage the protection groups described in “Protection Group Provisioning” on page 4-28.
The inventory information is displayed as a table from which users can perform context-sensitive launching
of other applications.
Upgrade to a new software image which will use the currently active database.
Restart the currently running Software Image with a new empty database.
Activate new software image and new database in one click.
Back up the database locally on the network element.
In-service software rollback
Displays whether a particular software image upgrade/downgrade is service-affecting or non-service-affecting. In general, the software upgrade/downgrade is non-service-affecting.
Fault Management
The MPower GNM supports Alarm Manager application to manage and view the alarms reported
asynchronously by the network element, and Event Manager to monitor the Event Log maintained by the
IQ, as described in “Event Log” on page 4-10. The MPower GNM also provides user interfaces and user
access to all the fault management functions provided by the IQ as described in “Fault Management” on
page 4-2. Additionally, the MPower GNM supports the following features:
Alarm Manager application to view and manage all outstanding alarms along with alarm details: probable cause, severity, source, time of occurrence, etc.
Provides the ability to export:
All alarms to file
All events to file
Current view of alarms to file
Current view of events to file
Event log application to view the events logged by the network elements.
Real-time updates to the current alarms and events in the alarm manager and event log, respec-
tively.
Context sensitive alarm summary display based on selected managed object entity. For example,
users can right-click on the chassis and circuit packs, and select the 'Show Alarms' and 'Show
Events' menu options. The Alarm Manager and Event Manager tables are updated to show the
alarms or events, respectively, for the selected Equipment.
Color coded alarm display based on the alarm severity.
Several pre-defined display filters so that users can monitor a specific category of alarms.
Ability to acknowledge alarms. Alarms that have been acknowledged will have a check mark in the
Ack field of the alarm.
Ability to navigate from an active alarm display in the alarm manager window to the source of the
alarm.
Users can export the alarms and events in TSV file format.
Service Provisioning
The MPower GNM provides user interfaces to provision and manage services supported by the IQ as
described in “Service Provisioning” on page 4-23. It includes Cross-connect Manager to provision and
manage manual cross-connects, Circuit Manager to provision and manage Dynamically Signaled SNCs,
and the Protection Group Manager to provision and manage protection groups. The following functions are
supported to simplify the service provisioning and management procedures:
For the Cross-connect Manager:
The Cross-connect Manager can be launched from the Equipment Manager and the top level
menu bar.
The available end points are displayed, allowing users to select end-points in order to create
cross-connects.
Users can view PMs for a selected cross-connect.
Users can assign a circuit ID to each cross-connect for end-to-end management. The circuit ID
is a logical name given to the cross-connect.
For the Circuit Manager
The Circuit Manager can be launched from the Equipment Manager and the top level menu bar.
The available termination points are displayed, allowing users to select end-points in order to
create circuits.
Users can select to use pre-provisioned capacity, a feature that allows a circuit to be provisioned with a minimum of pre-provisioned equipment.
Users can select to use the local DLM route only, a feature that, when enabled, allows route computation to utilize equipped and unequipped DWDM capacity.
Note: Users can export the cross-connect, circuit and protection groups inventory in TSV file for-
mat.
Performance Management
The MPower GNM provides a user interface to support performance management functions supported by
IQ as described in “Performance Monitoring and Management” on page 4-31. In addition:
Users can reset PM counters locally and view the delta between the current value and last reset
value.
The PM data is automatically refreshed at configured intervals.
Users can monitor the PM data from the Circuit Manager.
Both real-time and historical PM data are displayed to the user.
Security Management
The MPower GNM provides a user interface to perform user access and security management procedures
supported by the IQ as described in “Security and Access Management” on page 4-35.
MPower EMS
The MPower EMS is robust, real-time management software used to administer and manage Digital Optical Networks. MPower EMS provides end-users in the NOC with integrated network-level and network-element-level functions including fault and performance management, circuit provisioning, configuration, topology and inventory management, testing and maintenance functions, and security management. The MPower EMS provides the following functions:
Ability to manage the network independent of physical network deployment.
Automated network topology discovery and drill-down topology displays with integrated real-time
alarm status updates (see “Release Compatibility” on page 5-16).
Enhanced network-level OAM&P functions (see “Network-level OAM&P Functions” on page 5-26).
MPower server security and access management based on Telcordia GR-815-CORE standard (see
“MPower EMS Security and Access Management” on page 5-31).
Scalable and reliable software architecture (see “MPower EMS Architecture” on page 5-34).
The MPower server is certified to be deployed on a Sun Microsystems Solaris server platform, and the MPower client is certified to run on Microsoft Windows and Sun Microsystems Solaris platforms (see “MPower EMS Platform Requirements” on page 5-36).
Administrative Domains
The administrative domain enables a group of network elements to be managed as a single network entity
independent of the underlying GMPLS routing domain (see “Network Topology” on page 4-48 for details).
For instance, in Figure 5-8 on page 5-16, at the network-element level, two separate networks are defined
(GMPLS Routing Domain 1 and GMPLS Routing Domain 2). At the management level, three administrative domains (EastRoute Domain, NorthRoute Domain, and WholeNet Domain) are defined. Each
administrative domain includes a subset of network elements from the GMPLS Routing Domain 1 and
GMPLS Routing Domain 2 networks. Thus, the scope of the administrative domain is separated from the
scope of the GMPLS routing domain. For example, one can define the administrative domains along the
organizational boundaries, functional boundaries or geographic boundaries. In Figure 5-8 on page 5-16,
the administrative domains are defined along the geographic boundaries.
Each user can be assigned to manage one or more administrative domains.
A given network element can be included in one or more administrative domains. For example, in Figure 5-8 on page 5-16, Node 15 is included in EastRoute Domain, NorthRoute Domain and WholeNet Domain.
The MPower EMS provides a user interface to create, modify and delete administrative domains (see
“Network Element Information File Editor” on page 5-18).
[Figure 5-8: Administrative domain views in MPower EMS. GMPLS Routing Domain 1 (Node 10 through Node 15) and GMPLS Routing Domain 2 (Node 20 through Node 27) are overlaid with three administrative domains: EastRoute Domain, NorthRoute Domain, and WholeNet Domain]
Release Compatibility
MPower EMS manages UTStarcom Digital Optical Networking systems, which include UTStarcom TN780
and UTStarcom Optical Line Amplifier. UTStarcom Digital Optical Networking systems are supported by
the IQ Network Operating System (IQ NOS) software. Table 5-1 on page 5-17 specifies the compatibility
between the IQ NOS and MPower EMS version.
Java Web Start manages the compatibility between MPower server and MPower client software versions.
Note: If the file is edited offline, then the EMS server must be cold started.
Consider an example network shown in Figure 5-8 on page 5-16. As shown, two networks (GMPLS
Routing Domain 1 and GMPLS Routing Domain 2) are deployed and three administrative domains
(EastRoute Domain, NorthRoute Domain and WholeNet Domain) are defined in the MPower EMS.
The user must configure the EastRoute domain by specifying the DCN IP address of all the nodes in that
domain (Node 14, Node 15, Node 26 and Node 27) since only a partial GMPLS routing domain is included
in the administrative domain.
The NorthRoute Domain can be defined by specifying the DCN IP address of Node 10, Node 20 and Node
27. In addition, the auto-discovery option can be enabled on Node 10 so that the remaining nodes in the
corresponding GMPLS Routing domain are automatically discovered and included in the administrative
domain.
The WholeNet Domain can be defined by specifying the DCN IP address of Node 10 and Node 20 with the
auto-discovery option enabled so that all nodes in the corresponding GMPLS Routing domains are
automatically discovered and included in the administrative domain.
Note: Only a user with security administrator privilege can open the seed file editor from the menu
item.
Note: A default MPower server specific user account (with user-ID emsadmin and password
Infinera1) is created in the network element. However, by default, the account is disabled.
The user may enable this pre-defined account or create a new MPower server specific
account using the management interfaces, such as MPower GNM or TL1.
The MPower server must be provided with a list of MPower server accounts created on the network
elements to which it must establish connectivity. The MPower server provides a user interface so that an
EMS User with Security Administrator privilege can configure this list of user-ID and password, referred to
as the discovery key ring.
The MPower server walks through the user-IDs configured in the discovery key ring to establish connectivity with the network elements. If none of the user-IDs (and passwords) configured in the key ring are accepted by the network element, the network element is marked as unreachable and the MPower server retries continuously to establish connectivity until it is successful. The user must either fix the key ring configuration or fix the MPower server account in the unreachable network element.
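The key ring walk can be sketched as follows; `try_login` is a stand-in for the actual login attempt against a network element, not a real MPower API:

```python
# Hypothetical sketch of the discovery key ring walk.

def connect_with_key_ring(element, key_ring, try_login):
    """key_ring is an ordered list of (user_id, password) pairs."""
    for user_id, password in key_ring:
        if try_login(element, user_id, password):
            return user_id                 # connectivity established
    # No entry accepted: mark unreachable; the server keeps retrying
    # until the key ring or the account on the element is fixed.
    print(element, "marked unreachable; retrying")
    return None
```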
Topology Discovery
The MPower server initiates topology discovery when it detects events and alarms that cause changes to
the network topology. Following are some examples of events and alarms that trigger the topology
discovery:
Addition or deletion of a control link or GMPLS link
Addition or deletion of network elements
Alarms (raise and clear) reported on the control link or GMPLS link (e.g. alarms reported due to fiber
cut)
Loss of connectivity between the MPower server and the network element
During topology re-discovery, the MPower server discovers all the network elements specified in the configuration database and also the dynamically discovered network elements stored in the persistent database. If any of the network elements or links are not available, it dynamically updates the network view displayed to the user, along with a color coding providing a visual indication of the network problems. When the network problem is corrected, it performs network re-discovery to discover the changes in the network and displays the updated network view to the user.
Provides a visual display of the current alarm summary at the entire network-level (refers to the
entire UTStarcom network managed by a given MPower server), administrative domain level or net-
work element level.
Provides historical (up to 90 days by default) listing of alarms and events which can be optionally
exported in TSV file format. Users can configure the event log size before starting the MPower
server.
Ability to define custom filters which helps in analyzing the historical data and therefore, quick prob-
lem resolution.
Integrated search and sorting.
Provides the ability to export:
All alarms to file
All events to file
Current view of alarms to file
Current view of events to file
Circuit Layout
MPower EMS provides a circuit layout record for every end-to-end circuit. The circuit layout feature allows
the user to view every object that comprises a circuit. MPower EMS supports the creation of signalled and
manual cross-connect based circuits, allowing the circuit layout record to be launched with a circuit or a
cross-connect as the context.
The circuit layout record displays the state and alarm conditions of all the objects comprising the circuit, drastically improving troubleshooting and fault isolation.
For a given end-to-end circuit the order of object display is from trib-port to trib-port. The following
intermediate points will also be displayed:
Trib DTF Path
Cross-Connect
Line DTF Path
DLM Channel
DLM OCG Port
BMM OCG Port
BMM OTS Port (egress)
BMM OTS Port (ingress)
BMM OCG Port
DLM OCG Port
DLM Channel
Line DTF Path
Cross-Connect
Trib DTF Path
Performance Management
The MPower server supports all the network element-level functions described in “Performance
Management” on page 5-14. Following are the additional network-level functions supported:
Provides historical (up to 90 days by default) archiving of all historical 15min and 24hr PM data for
each network element which can be optionally exported in CSV file format.
Provides End-to-end Circuit PM view for viewing intermediate PM across a whole circuit.
Includes a network performance reporting tool for parsing all historical PM data in the database for
generating web-based reports, including:
List of all SONET/SDH circuits based on the pre- and post- FEC BER from highest to lowest.
List of all SONET client circuits sorted based on the ES-S (errored seconds section) from highest
to lowest. Only the ES encountered within the digital optical network is considered.
List of all SDH client circuits sorted based on RS-ES (regenerator section errored seconds) from
highest to lowest. Only the ES encountered within the digital optical network is considered.
List of all SONET client circuits sorted based on SEFS-S (severely errored frame seconds section) from highest to lowest. Only the SEFS-S encountered within the digital optical network is considered.
List of all SONET client circuits sorted based on RS-LOSS (regenerator section LOSS) from highest to lowest. Only the LOSS encountered within the digital optical network is considered.
Ability to generate customized PM reports for each termination point.
User Identification
Each MPower user is assigned a unique MPower user ID. The MPower user ID is case-sensitive and
contains 6 to 10 alphanumeric characters. The user specifies this ID to log into MPower server.
Note that the MPower user ID is not passed to the target network element (the network element managed by the user using MPower EMS). MPower server uses the network element user ID (see “Dynamic Seed File Editor” on page 5-21) to log into the target network element.
MPower is equipped with a user account that allows for an initial login. The user ID is admin, the
password is infinera1, and the account has the security administrator privilege enabled.
This default account differs from the typical user account in that:
It cannot be disabled or deleted
The Security Administrator privilege cannot be removed
Password expiration cannot be set (it is set to 0 by default which means, it never expires)
A user may open multiple active sessions. MPower server maintains a list of all current active users, but
not active sessions.
Authentication
MPower server supports standards-based authentication features. These features ensure that only the
authorized users can log into the MPower server through the MPower client interface.
Each time the MPower user logs in, the user must enter a user ID and password. For the initial login, the
user specifies the default password. The user must then change the password based on the following
requirements.
The password must contain
Six to ten alphanumeric characters
At least one alphabetic and one numeric or one special character
The password may contain these special characters: ! @ # $ % ^ ( ) _ + | ~ { } [ ] ? -
The password must not contain:
The associated user ID
Blank spaces
The passwords are case-sensitive and must be entered exactly as specified.
The password is stored in the MPower server database in a one-way encrypted form.
Password aging is enabled by default. When the password expires, the user must create a new one. The security administrator can configure the password aging interval (the length of time the password is valid). Password aging can also be disabled by setting the aging interval to 0.
Access Control
In addition to user-ID validation and password authentication, MPower server supports access control
features to ensure that the session requester is trusted.
The activity of each user session is monitored. If, for a configurable period of time, no data is exchanged between the user (MPower client) and MPower server, the user session is declared inactive. The MPower server defines two system-wide inactivity timeout intervals:
Lockout Interval—When the user session is inactive for this interval, the user is locked out. To reac-
tivate the session, the user must re-enter the password.
Idle Interval—When the user session is inactive for this interval, the session is terminated. The user
must launch a new session.
User session activity monitoring is disabled by default. A user with security administrator privileges can
enable monitoring and also configure the lockout period and the idle period based on the needs of the
particular site.
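The two intervals can be sketched as a simple state check; the interval values are illustrative assumptions, since both are site-configurable:

```python
# Hypothetical sketch of the two inactivity timeouts.
import time

LOCKOUT_INTERVAL = 15 * 60   # seconds of inactivity before lockout
IDLE_INTERVAL = 60 * 60      # seconds of inactivity before termination

def session_state(last_activity, now=None):
    idle = (now if now is not None else time.time()) - last_activity
    if idle >= IDLE_INTERVAL:
        return "terminated"  # user must launch a new session
    if idle >= LOCKOUT_INTERVAL:
        return "locked out"  # user must re-enter the password
    return "active"
```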
Authorization
Multiple access privileges are defined to restrict user access to resources. The access privileges defined
in MPower server are in synchronization with the access privileges defined in Digital Optical Networking
systems. Each MPower EMS access privilege is directly mapped to the access privilege defined at the
network element level. In other words, a MPower User with a given access privilege can perform the
actions allowed for that privilege on the target network element.
As described earlier, there are six levels of access privileges. The following description lists the actions allowed for each access privilege within MPower server.
Monitoring Access (MA)—provides read-only access to various MPower EMS logs and inventory
screens.
Security Administrator (SA)—allows the user to perform MPower server security management and
administration related tasks, to shut down MPower server, and to configure the Discovery Key Ring
(see “Dynamic Seed File Editor” on page 5-21).
Network Administrator (NA)—there are no MPower EMS specific tasks defined for this privilege.
Network Engineer (NE)—there are no MPower EMS specific tasks defined for this privilege.
Provisioning (PR)—there are no MPower EMS specific tasks defined for this privilege.
Turn-up and Test (TT)—there are no MPower EMS specific tasks defined for this privilege.
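The one-to-one privilege mapping described above can be sketched as follows. This is hypothetical
Python; the enum mirrors the six documented privilege levels, but the class and function names are
illustrative only:

    from enum import Enum

    class Privilege(Enum):
        # The six documented access privilege levels.
        MA = "Monitoring Access"
        SA = "Security Administrator"
        NA = "Network Administrator"
        NE = "Network Engineer"
        PR = "Provisioning"
        TT = "Turn-up and Test"

    def ne_privilege_for(ems_privilege: Privilege) -> Privilege:
        """Each MPower EMS privilege maps directly to the same NE privilege."""
        return ems_privilege  # identity mapping, per the text above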
Security Administration
MPower server defines a set of security administration functions and parameters that are used to
implement site-specific policies. Security administration can be performed only by an MPower user with
the security administrator privilege. The supported features include:
View all users currently logged on
Disable and enable an MPower user account
Note: Disabling an MPower user account automatically terminates all active sessions corresponding
to this account.
Modify user account parameters, including access privilege, password expiry time, and administrative
domains
Delete an MPower user account and all its attributes, including its password
Reset any user password to the MPower server default password
Monitor security audit logs to detect unauthorized access to MPower server
Monitor the security alarms and events raised by MPower server and take appropriate actions
Configure the security administration parameters applicable to all MPower users
Default password
Inactivity time-out intervals
Advisory warning message displayed to the user after successful login to the network element
[Figure: MPower EMS server architecture, showing the MPower client (MPower UI), the MPower Frontend
server, the MPower Core server, the MPower PM server, and the XML Mediator server (XML/FTP
interface), all sharing the Oracle database server (MPower database), with the customer DCN connecting
MPower EMS to the Infinera Digital Optical Network.]
MPower Frontend Server—The MPower Frontend server processes requests from the MPower
clients and interacts directly with the Oracle database server for all read operations. However, if a
user request requires a write operation to the database, the Frontend server passes the request to
the MPower Core server. Thus, database read-only operations are processed separately from
database read/write operations (see the sketch following the note below).
MPower Core Server—The MPower Core server manages and processes the information from the
network elements and performs all the management tasks. It interacts with the Oracle database
server in order to manage the information. The MPower Core server is architected to support
multiple MPower Frontend servers, each running on a separate hardware platform. This allows
multiple MPower Frontend servers to be deployed depending on the number of MPower EMS clients
deployed.
MPower PM Server—The MPower PM server collects, processes, and manages the performance
monitoring data from the network elements. It provides a variety of pre-defined reports so that
network problems can be quickly isolated. User-customizable reports are also supported.
Note: In Release 1.2, by default, MPower Frontend server, MPower Core server, and MPower PM
server are automatically installed on the same hardware platform. The Oracle database
server must also be installed on the same hardware platform as the MPower server. Users
must launch the MPower Core server, which also includes MPower Frontend server. Users
can optionally launch MPower PM server.
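The read/write split between the Frontend and Core servers amounts to simple request routing. A
hypothetical Python sketch (names are illustrative, not MPower interfaces):

    from typing import Literal

    def route_request(operation: Literal["read", "write"]) -> str:
        """Reads go directly to Oracle; writes are forwarded to the Core server."""
        if operation == "read":
            return "Oracle database server"   # direct read path from the Frontend
        return "MPower Core server"           # write path, forwarded by the Frontend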
or a Sun Fire V880 server (for large networks), configured as shown in Table 5-2 on page 5-37. Table 5-2
also shows the maximum number of network elements and MPower clients supported in each
configuration for optimal performance.
(Table 5-2 column headings: Number of Network Elements | Number of MPower Clients | Sun Server
Platform | Processors | RAM (GB) | Hard Disk (GB))
Perceived Severity
The current severity of the alarm.
Asserted Severity
The severity of the alarm when it was originally asserted. For example, if an alarm is raised
as “CR” and a clear is subsequently raised for it, the perceived severity of the current alarm is
“Clear” and the asserted severity is “CR”.
Timestamp
There are two time attributes:
neTime - the network element time at which the trap (alarm) is generated
emsTime - the EMS time at which the trap is generated
EMS Notification ID
A unique ID assigned to each alarm in the EMS. It is sent as a trap attribute (the EMS
notification ID).
Event/Trap description
The description of the alarm.
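Taken together, these attributes form a simple record. The following Python sketch is illustrative only;
the field names are assumptions and do not correspond to actual MIB object names:

    from dataclasses import dataclass

    @dataclass
    class AlarmTrap:
        perceived_severity: str   # current severity, e.g. "Clear"
        asserted_severity: str    # severity when originally asserted, e.g. "CR"
        ne_time: str              # network element time when the trap was generated
        ems_time: str             # EMS time when the trap was generated
        ems_notification_id: int  # unique per-alarm ID assigned by the EMS
        description: str          # textual description of the alarm

    # Example: a cleared critical alarm retains its asserted severity.
    cleared = AlarmTrap("Clear", "CR", "2004-01-01T00:00:00Z",
                        "2004-01-01T00:00:02Z", 1001, "Loss of signal cleared")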
Note: Prior to installing the MPower SNMP Manager plugin, MPower EMS must be installed on your
machine.
The EMS provides a single registration point for non-robust SNMP trap notifications and supports
northbound SNMP v2c trap notification. Only alarms reported by network elements are forwarded as
SNMP traps.
SNMP Manager
Software that resides on the machine managing the devices. It is the console through which an
administrator performs management-related functions.
SNMP Agent
Software that resides on the device to be managed; in this case, it resides on the EMS server.
The device can be a bridge, router, hub, or, as in this case, a network element.
Object
The objects in MIBs are identified by object identifiers.
Configurable Parameters
The parameters below can be configured through the InfineraSnmp.conf file, which is located in the
EMS_INSTALL_DIR/conf/Infinerasnmp directory. Changes to these parameters take effect only after the
EMS server is restarted (cold or warm).
EMSName defaults to “MPower” and is propagated in all the traps generated by the system; it can be
changed through this configuration file (a hypothetical example of reading the file follows the list below).
The other configurable parameters are:
IsCorrelationIDSupportNeeded
IsEmsTimeWhenReceived
GenerateOutstandingAlarm
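The format of InfineraSnmp.conf is not reproduced in this manual. Assuming a conventional key=value
layout, a minimal Python sketch for reading the parameters named above might look like this (the
parsing logic and file format are assumptions, not documented behavior):

    def read_snmp_conf(path: str) -> dict:
        """Parse key=value pairs, ignoring blank lines and # comments (assumed format)."""
        params = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                key, _, value = line.partition("=")
                params[key.strip()] = value.strip()
        return params

    # Hypothetical usage:
    # conf = read_snmp_conf("EMS_INSTALL_DIR/conf/Infinerasnmp/InfineraSnmp.conf")
    # ems_name = conf.get("EMSName", "MPower")  # defaults to "MPower" per the text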
SNMP MIBs
MIB rules define object IDs and give them valid names. Typically, objects that can be managed through
SNMP are defined in MIBs, which are ASCII text files in a structured format.
INFINERA-REG-MIB
INFINERA-TRAP-MIB
TN780 PM Data
UTStarcom TN780 and Optical Line Amplifier network elements collect extensive PM data, including
Optical performance monitoring (PM) data within the optical domain (see “Optical PM Parameters
and Thresholds” on page A-2)
Client signal agnostic DTF PM data at every TN780 network element (see “DTF PM Parameters and
Thresholds” on page A-10)
FEC PM data enabling BER calculation (see “FEC PM Parameters and Thresholds” on page A-15)
Native client signal PM data at the tributary ports (see “Client Signal PM Parameters and
Thresholds” on page A-16)
Optical supervisory channel performance monitoring data (see “OSC PM Parameters”)
[Figure: Measured and derived optical PM data points on the BMM, showing the OCG IN/OUT ports, SC
Tx/Rx paths, MUX and EDFA stages, C-Band normalized OPT and OPR monitor points, Rx EDFA LBC,
measured DCM loss, and the OSA monitor ports (DCM optional).]
Table A-1 on page A-3 captures the optical PM parameters supported at each layer. Historical data is
maintained for some PM parameters; for the rest, only real-time data is maintained.
Table A-1 Optical PM Parameters Supported by the BMM, OAM and DLM
(Columns: PM parameter as displayed in GNM/EMS | PM parameter in the file exported to the FTP server
| Description | Unit | Real-time data | Current & historical (15-min & 24-hour) data)
OCG Total Optical Power Transmitted Min | BMMOcgOptMin | Total OCG optical power leaving the BMM towards its attached DLM; one attribute for each OCG | dBm | Yes | Yes
OCG Total Optical Power Transmitted Avg | BMMOcgOptAvg | (as above) | dBm | Yes | Yes
OCG Total Optical Power Transmitted Max | BMMOcgOptMax | (as above) | dBm | Yes | Yes
OCG Total Optical Power Received Min | BMMOcgOprMin | Total OCG optical power arriving at the BMM from the local DLM; one attribute for each OCG | dBm | Yes | Yes
OCG Total Optical Power Received Avg | BMMOcgOprAvg | (as above) | dBm | Yes | Yes
OCG Total Optical Power Transmitted | - | Total OCG optical power transmitted by the DLM to the BMM | dBm | Yes | Yes
OCG Total Optical Power Received | - | Total OCG optical power received by the DLM from the BMM (reading inaccuracy of +2.5 dB/-1.0 dB) | dBm | Yes | Yes
OCh Optical Power Received Min | ChanOchOprMin | Average optical channel power received by the DLM; one measurement for each optical channel | dBm | Yes | Yes
OCh Optical Power Received Avg | ChanOchOprAve | (as above) | dBm | Yes | Yes
OCh Optical Power Transmitted Min | ChanOchOptMin | Average optical channel power transmitted by the DLM; one measurement for each of the ten optical channels within an OCG | dBm | Yes | Yes
OCh Optical Power Transmitted Avg | ChanOchOptAve | (as above) | dBm | Yes | Yes
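The Min/Avg/Max pattern in Table A-1 reflects the usual interval-binning approach: each raw power
sample updates a running minimum, maximum, and average for the current 15-minute and 24-hour
intervals. A hypothetical Python sketch (names are illustrative):

    class PmBin:
        """Running min/avg/max for one PM interval (15-minute or 24-hour)."""

        def __init__(self) -> None:
            self.minimum = float("inf")
            self.maximum = float("-inf")
            self.total = 0.0
            self.count = 0

        def add_sample(self, value_dbm: float) -> None:
            """Fold one real-time power reading into the current interval."""
            self.minimum = min(self.minimum, value_dbm)
            self.maximum = max(self.maximum, value_dbm)
            self.total += value_dbm
            self.count += 1

        @property
        def average(self) -> float:
            return self.total / self.count if self.count else float("nan")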
Thresholding is supported for some of the optical PM parameters. Table A-2 on page A-9 lists those PM
parameters, the corresponding thresholds, and the alarms reported when thresholds are exceeded.
(Table A-2 column headings: PM Parameter | PM Parameter as displayed in file exported to FTP server |
Ranges | Alarms)
[Figure: PM data collection points in the DLM, showing the TOM, SerDes, DTF Mapper, system clock
reference, client clock generation, and DLM midplane connector. PM data collected by the Mapper at the
DTF Section level: DTF CV-S, DTF ES-S, DTF SES-S. PM data collected by the Mapper at the DTF Path
level: DTF CV-P, DTF ES-P, DTF SES-P, DTF UAS-P. FEC PM data: FEC Uncorrected BER, FEC
Corrected BER, FEC Corrected Bits, FEC Uncorrectable Codewords, FEC Total Codewords.]
Table A-3 on page A-11 captures the PM parameters and corresponding thresholds defined for the DTF
Section and DTF Path layers.
(Columns: PM Parameter | Description | Real-time data | 15-min and 24-hr data | TCA reporting
supported? | Default threshold, 15-min | Default threshold, 24-hour)
DTF CV-S | Count of BIP errors detected at the DTF Section layer (that is, using the B1 byte in the incoming signal); up to 8 BIP errors can be detected per frame, with each error incrementing the DTF CV-S current register | Yes | Yes | Yes | 1500 | 15000
DTF ES-S | Count of the seconds during which (at any point during the second) at least one DTF Section layer BIP error was detected or an LOF or LOL defect was present | Yes | Yes | Yes | 120 | 1200
DTF SES-S | Count of the seconds during which K (=10000) or more DTF Section layer BIP errors were detected or an LOF or LOL defect was present | Yes | Yes | Yes | 3 | 7
DTF CV-P | Count of BIP errors detected at the DTF Path layer; up to 8 path BIP errors can be detected per frame, with each error incrementing the DTF CV-P current register | Yes | Yes | Yes | 1500 | 15000
DTF ES-P | Count of the seconds during which (at any point during the second) at least one DTF Path layer BIP error was detected or an AIS-P, TIM-P, OCI-P, or BDI-P defect was present | Yes | Yes | Yes | 120 | 1200
DTF SES-P | Count of the seconds during which K (=2,400, as specified in the GR-253-CORE Issue 3 specification) or more DTF Path layer BIP errors were detected or an AIS-P, TIM-P, OCI-P, or BDI-P defect was present | Yes | Yes | Yes | 3 | 7
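The ES and SES definitions above share one pattern: each second is examined once and counted
against both registers. A minimal Python sketch of that classification, using the DTF Section values
(K and the defect condition are taken from the table; the function name is illustrative):

    def classify_second(bip_errors: int, defect_present: bool, k: int = 10000):
        """Classify one second against the DTF Section ES/SES definitions."""
        es = bip_errors >= 1 or defect_present    # errored second
        ses = bip_errors >= k or defect_present   # severely errored second
        return es, ses

    # 12,000 BIP errors in one second counts as both ES and SES.
    assert classify_second(12000, False) == (True, True)
    # A single BIP error counts as ES only.
    assert classify_second(1, False) == (True, False)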
(FEC PM table column headings: FEC PM Parameter | FEC PM Parameter as displayed in the file
exported to FTP server | Description | Real-time data | 15-min and 24-hr data | Threshold Supported)
Thresholding is supported only for the pre-FEC BER. If the BER before error correction is equal to or
greater than the user-configured value over the interval associated with the configured value, a ‘Pre-FEC
BER-based Signal Degrade’ alarm is reported. The alarm is cleared when the pre-FEC BER falls below
the threshold.
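The raise/clear behavior amounts to a simple comparator against the configured value. A hypothetical
Python sketch (the default threshold shown is illustrative, not a documented default):

    class PreFecBerMonitor:
        """Sketch of raise/clear logic for the Pre-FEC BER-based Signal Degrade alarm."""

        def __init__(self, threshold: float = 1e-5):  # illustrative threshold
            self.threshold = threshold
            self.alarm_active = False

        def update(self, pre_fec_ber: float) -> bool:
            """Feed one pre-FEC BER measurement; return the resulting alarm state."""
            # At or above the configured value: report; below it: clear.
            self.alarm_active = pre_fec_ber >= self.threshold
            return self.alarm_active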
[Figure: PM data collection points in the TAM-2-10G, showing the TOMs, SerDes, DTF Mapper, client
clock generation, system clock reference, and DLM midplane connector. SONET client signal PM data:
Rx CV-S, Rx ES-S, Rx SES-S, Rx SEFS-S; Tx CV-S, Tx ES-S, Tx SES-S, Tx SEFS-S. SDH client signal
PM data: Rx RS-BE, Rx RS-ES, Rx RS-SES, Rx RS-OFS, Rx RS-LOSS; Tx RS-BE, Tx RS-ES,
Tx RS-SES, Tx RS-OFS, Tx RS-LOSS.]
(Columns: PM Parameter | PM parameter as displayed in file exported to FTP server | Description |
Real-time data | 15-min and 24-hr data | Default threshold, 15-min | Default threshold, 24-hour)
SONET Section Rx Parameters Collected in the TAM for SONET OC-192/OC-48 Trib Interfaces
Rx CV-S | RxCV | Count of BIP errors detected at the Section layer in the incoming client’s SONET signal; up to eight Section BIP errors can be detected per STS-N frame, with each error incrementing the Sonet-Rx-CV-S current second register | Yes | Yes | 1500 | 15000
Rx ES-S | RxES | Count of the seconds during which (at any point during the second) at least one Section layer BIP error was detected or an LOS or SEF defect was present | Yes | Yes | 120 | 1200
Rx SES-S | RxSES | Count of the seconds during which K (=10000) or more Section layer BIP errors were detected or an LOS or SEF defect was present | Yes | Yes | 3 | 7
Rx SEFS-S | RxSEFS | Count of the seconds during which (at any point during the second) an SEF defect was present | Yes | Yes | 3 | 7
SONET Section Tx Parameters Collected in the TAM for SONET OC-192/OC-48 Trib Interfaces
Tx CV-S | TxCV | Count of BIP errors detected at the Section layer in the SONET signal received from the line/system side and to be transmitted to the receiving client; up to eight Section BIP errors can be detected per STS-N frame, with each error incrementing the Sonet-Tx-CV-S current second register | Yes | Yes | 1500 | 15000
Tx ES-S | TxES | Count of the seconds during which (at any point during the second) at least one SONET Tx BIP error was detected or an LOS or SEF defect was present | Yes | Yes | 120 | 1200
Tx SES-S | TxSES | Count of the seconds during which K (=10000) or more SONET Tx BIP errors were detected or an LOS or SEF defect was present | Yes | Yes | 3 | 7
Tx SEFS-S | TxSEFS | Count of the seconds during which (at any point during the second) an SEF defect was present | Yes | Yes | 3 | 7
SDH Regenerator Section Rx Parameters Collected in the TAM for SDH STM-64/STM-16 Trib Interfaces
Rx RS-BE | RxBE | Count of the errors within a block in the incoming client’s SDH signal | Yes | Yes | 1500 | 15000
Rx RS-ES | RxES | Count of the seconds during which (at any point during the second) at least one RS block error was detected or an LOS or SEF defect was present | Yes | Yes | 120 | 1200
Rx RS-SES | RxSES | Count of the seconds during which 30% or more RS block errors were detected or an LOS or SEF defect was present | Yes | Yes | 3 | 7
Rx RS-OFS | RxOFS | - | Yes | Yes | 3 | 7
Rx RS-LOSS | RxLOSS | - | Yes | Yes | 3 | 7
SDH Regenerator Section Tx Parameters Collected in the TAM for SDH STM-64/STM-16 Trib Interfaces
Tx RS-BE | TxBE | Count of the errors within a block in the SDH signal received from the network and to be transmitted to the receiving client | Yes | Yes | 1500 | 15000
Tx RS-ES | TxES | Count of the seconds during which (at any point during the second) at least one Tx RS block error was detected or an LOS or SEF defect was present | Yes | Yes | 120 | 1200
Tx RS-SES | TxSES | Count of the seconds during which 30% or more Tx RS block errors were detected or an LOS or SEF defect was present | Yes | Yes | 3 | 7
Tx RS-OFS | TxOFS | - | Yes | Yes | 3 | 7
Tx RS-LOSS | - | - | Yes | Yes | 3 | 7
OSC PM Parameters
UTStarcom TN780 and Optical Line Amplifier network elements support the OSC, a dedicated 1510 nm
optical channel that carries management traffic between adjacent network elements. The OSC is
terminated on the BMM on the TN780 and on the OAM on the Optical Line Amplifier.
(Columns: PM Parameter | PM parameter as displayed in file exported to FTP server | Description | Unit |
Real-time data | Current & historical (15-min & 24-hr) data)
Optical Power Transmitted Max | OscOptMax | - | - | - | -
Optical Power Received Max | OscOprMax | - | - | - | -
OCG  Channel  Wavelength (nm)  Frequency (THz)
1 1 1563.455 191.75
1 2 1561.826 191.95
1 3 1560.200 192.15
1 4 1558.578 192.35
1 5 1556.959 192.55
1 6 1555.343 192.75
1 7 1553.731 192.95
1 8 1552.122 193.15
1 9 1550.517 193.35
1 10 1548.915 193.55
2 1 1563.047 191.80
2 2 1561.419 192.00
2 3 1559.794 192.20
2 4 1558.173 192.40
2 5 1556.555 192.60
2 6 1554.940 192.80
2 7 1553.329 193.00
2 8 1551.721 193.20
2 9 1550.116 193.40
2 10 1548.515 193.60
3 1 1562.640 191.85
3 2 1561.013 192.05
3 3 1559.389 192.25
3 4 1557.768 192.45
3 5 1556.151 192.65
3 6 1554.537 192.85
3 7 1552.926 193.05
3 8 1551.319 193.25
3 9 1549.715 193.45
3 10 1548.115 193.65
4 1 1562.233 191.90
4 2 1560.606 192.10
4 3 1558.983 192.30
4 4 1557.363 192.50
4 5 1555.747 192.70
4 6 1554.134 192.90
4 7 1552.524 193.10
4 8 1550.918 193.30
4 9 1549.315 193.50
4 10 1547.715 193.70
5 1 1545.720 193.95
5 2 1544.128 194.15
5 3 1542.539 194.35
5 4 1540.953 194.55
5 5 1539.371 194.75
5 6 1537.792 194.95
5 7 1536.216 195.15
5 8 1534.643 195.35
5 9 1533.073 195.55
5 10 1531.507 195.75
6 1 1545.322 194.00
6 2 1543.730 194.20
6 3 1542.142 194.40
6 4 1540.557 194.60
6 5 1538.976 194.80
6 6 1537.397 195.00
6 7 1535.822 195.20
6 8 1534.250 195.40
6 9 1532.681 195.60
6 10 1531.116 195.80
7 1 1544.924 194.05
7 2 1543.333 194.25
7 3 1541.746 194.45
7 4 1540.162 194.65
7 5 1538.581 194.85
7 6 1537.003 195.05
7 7 1535.429 195.25
7 8 1533.858 195.45
7 9 1532.290 195.65
7 10 1530.725 195.85
8 1 1544.526 194.10
8 2 1542.936 194.30
8 3 1541.349 194.50
8 4 1539.766 194.70
8 5 1538.186 194.90
8 6 1536.609 195.10
8 7 1535.036 195.30
8 8 1533.465 195.50
8 9 1531.898 195.70
8 10 1530.334 195.90
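Each wavelength/frequency pair above satisfies lambda = c/f to within rounding. A quick Python check
for the first entry (illustrative only):

    C_VACUUM = 299_792_458.0  # speed of light in m/s

    def wavelength_nm(freq_thz: float) -> float:
        """Convert an optical frequency in THz to a vacuum wavelength in nm."""
        return C_VACUUM / (freq_thz * 1e12) * 1e9

    # OCG 1, channel 1: 191.75 THz corresponds to 1563.455 nm, matching the table.
    print(round(wavelength_nm(191.75), 3))  # -> 1563.455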
Acronyms
Table C-1 List of Acronyms
Abbreviation Description
A
ACLI application command line interface
ACO alarm cutoff
ACT active
AD add/drop
ADM add/drop multiplexer
ADPCM adaptive differential pulse code modulation
AGC automatic gain control
AID access identifier
AINS administrative in-service
AIS alarm indication signal
ALS automatic laser shutdown
AMP amplifier
ANSI American National Standards Institute
AO autonomous output
APD avalanche photo diode
API application programming interface
APS automatic protection switching
ARC alarm reporting control
ARP address resolution protocol
B
BDFB battery distribution fuse bay
BDI backward defect indication
BEI backward error indication
BER bit error rate
BERT bit error rate testing
BGA ball grid array
BIP-8 bit interleaved parity
BITS building-integrated timing supply
BLSR bi-directional line switched ring
BMM-C Band Mux Module - C band
BNC Bayonet Neill-Concelman; British Naval Connector
BOL beginning of life
BOM bill of material
BOOTP bootstrap protocol
bps bits per second
BPV bipolar violations
C
C Celsius
CCITT Consultative Committee on International Telegraph and Telephone
CCLI commissioning command line interface
CDE chromatic dispersion equalizer
CDR clock and data recovery
CDRH Center for Devices and Radiological Health
D
DA digital amplifier
dB decibel
DB database
DCC data communications channel
DCE data communications equipment
DCF dispersion compensation fiber
DCM dispersion compensation module
DCN data communication network
DEMUX de-multiplexing
E
EDFA erbium doped fiber amplifier
EEPROM electrically-erasable programmable read only memory
EMC electromagnetic compatibility
EMI electro-magnetic interference
EMS element management system
EOL end-of-life
ESD electrostatic discharge; electrostatic-sensitive device
ES-L line-errored seconds
ES-P path-errored seconds
ES-S section-errored seconds
ETS IEEE European Test Symposium
ETSI European Telecommunications Standards Institute
F
F Fahrenheit
FA frame alignment
G
GbE gigabit ethernet
Gbps gigabits per second
GCC general communication channel
GFP generic framing procedure
GHz gigahertz
GMPLS generalized multi protocol label switching
GNE gateway network element
GNM graphical node manager
GUI graphical user interface
H/I
HTML hypertext markup language
HTTP hypertext transfer protocol
IAP input, output and alarm panel
ID identification
IDF invalid data flag
IEC International Electrotechnical Commission
I/O Input/Output
IOP input output panel
IP Internet protocol
IQ see IQ NOS
IQ NOS UTStarcom IQ network operating system
IR intermediate reach
IS in-service
ITU-T International Telecommunication Union - Telecommunication Standardization Sector
J/K/L
JDK Java Development Kit
JRE Java Runtime Environment
LAN local area network
LBC laser bias current
LC fiber optic cable connector type
LCK locked
LED light-emitting diode
Linear ADM linear add/drop multiplexer
LOF loss of frame
LOL loss of light
LOP loss of pointer
LOS loss of signal
LR long reach
LSB least significant bit
LTE line-terminating equipment
LVDS low voltage differential signaling
M
MA monitoring access
MAC media access control
MB megabyte
Mb/s megabits per second
MCM management and control module
MEMS micro electro mechanical systems
MFAS multi frame alignment signal
MIB management information base
MMF multimode fiber
MS multiplex section
MSA multi source agreement
MSB most significant bit
MSOH multiplex section overhead
MTBF mean time between failure
MTU maximum transmission unit
MX multiplex, multiplexer, multiplexing
N
NA network administrator
NAND a type of flash memory
NC normally closed; node controller
NCC node controller chassis
NCT nodal control and timing
NDSF non-dispersion-shifted fiber
NE network engineer
NEBS network equipment building system
NECG net electrical coding gain
NFPA National Fire Protection Association
NJO negative justification opportunity
nm nanometer
NML network management layer
NMS network management system
NNI network-to-network interface
NO normally open
NSA non-service affecting
NTP network time protocol
NVRAM nonvolatile random access memory
O
OAM optical amplification module
OAM&P operation, administration, maintenance and provisioning
OC-12 optical carrier signal at 622.08 Mb/s
OC-192 optical carrier signal at 9.95328 Gb/s
OC-3 optical carrier signal at 155.52 Mb/s
P/Q
PC personal computer
PCPM per channel power monitoring
R
RAM random access memory
RDI remote defect indication
REI-L remote error indication-line
REI-P remote error indication-path
RFI remote failure indication
ROM read-only memory
S
SA service affecting; security administrator
SAPI source access point identifier
SC square shaped fiber optic cable connector
SD signal degrade
SDH synchronous digital hierarchy
SDRAM synchronized dynamic random access memory
SEF severely errored frame
SEFS severely errored frame second
SELV safety extra low voltage
SERDES serializer and deserializer
SES severely errored seconds
SF signal fail
SFP small form-factor pluggable
SID source identifier; system identifier
SMF single-mode fiber
SML service management layer
SNC sub network connection
SNE subtending network element
SNMP simple network management protocol
SNR signal-to-noise ratio
SOH section overhead
SOL start of life
SONET synchronous optical network
SPE synchronous payload envelope
SQ signal quality
SR short reach
SSL secure sockets layer
STE section terminating equipment
STM synchronous transport module
STM-1 SDH signal at 155.52 Mb/s
STM-16 SDH signal at 2.48832 Gb/s
STM-4 SDH signal at 622.08 Mb/s
STM-64 SDH signal at 9.95328 Gb/s
STM-n synchronous transport module of level n (for example, STM-64, STM-16)
STS synchronous transport signal
STS-n synchronous transport signal of level n (for example, STS-12, STS-48)
SW software
T/U/V
TAM tributary adapter module
TAP timing and alarm panel
TCA Threshold Crossing Alert
TCP transmission control protocol
TE traffic engineering
TEC thermo-electric cooler
TERM terminal
TFTP trivial file transfer protocol
TID target identifier
TIM trace identifier mismatch
TL1 transaction language 1
TMN telecommunications management network
TOM tributary optical module
TP termination point
TR transceiver
TT turn-up and test
TTI trail trace identifier
Tx Transmitter; Transmit
UA unavailable seconds
UART universal asynchronous receiver transmitter
UAS unavailable seconds
W/X/Y/Z
WAN wide area network
WDM wavelength division multiplexing
XC cross-connect
XFP name of a small form factor 10 Gbps optical transceiver
XML extensible markup language
MISC
1R re-amplification
2R re-amplification, re-shape
3R re-amplification, re-shape, re-time
4R re-amplification, re-shape, re-time, re-code