
GOVERNMENT OF INDIA

MINISTRY OF RAILWAYS

Report on
IP-MPLS Routers and Suggestions for
Improvement / Changes based
on field experiences.

Sh. C. K. Prasad, PCSO/CR (Convener)


Sh. Dinesh Verma, ED/Tele/RDSO
Sh. Rakesh Gupta, CCE/WR
Sh. J. P. Shivaji, CCE/SWR
Sh. J. P. Meena, CCE/NWR

Signature Sheet

(Digitally signed by all committee members; date of signing: 2024.09.27, +05'30')

Sh. Rakesh Gupta          Sh. J. P. Shivaji          Sh. J. P. Meena
CCE/WR                    CCE/SWR                    CCE/NWR

Sh. Dinesh Verma          Sh. C. K. Prasad
ED/Telecom/RDSO           PCSO/CR

INDEX

S. No.  Description                                                                             Page No.

1.   Abbreviations                                                                              5 - 6
2.   Background                                                                                 7
3.   Terms of Reference                                                                         7
4.   Executive Summary                                                                          8 - 9
5.   TOR Item No. 1 – Planning for CORE network and its connectivity with zones and other
     networks like CRIS etc and the configuration of Routers in the CORE and at Edge Network   10 - 43
6.   TOR Item No. 2 – Specification of IP-MPLS routers and amendment/changes required in
     TAN version 2.0                                                                            44 - 46
7.   TOR Item No. 3 – Fixing performance criterion for OEMs of IP-MPLS routers                 47 - 49
8.   TOR Item No. 4 – Formation of standard migration document for network services on
     IP-MPLS network                                                                            50 - 65
9.   TOR Item No. 5 – Standardization of station LAN infrastructure for using common LAN
     for all network services at stations                                                       66 - 70
10.  TOR Item No. 6 – Management of IP-MPLS and LTE network, establishing all India Network
     Operation Centre (NOC) along with its staffing and operations                              71 - 84
11.  Annexure 1: Railway Board Letter No. 2024/Tele/9(3)/1 (3460739), dated 08.04.2024
     regarding "Nomination of Committee for IP-MPLS routers and suggestion for
     improvement/changes based on field experiences"                                            85
12.  Annexure 2: Uniform IP Addressing scheme for "Implementation of IP-MPLS Technology
     for Unified Communication Backbone on Indian Railway"                                      86 - 90
13.  Annexure 3: Specifications for Core/Aggregate layer Routers                                91 - 94
14.  Annexure 4: Proposed Specifications of Network Management System & Automation              95 - 99
15.  Annexure 5: Detailed migration plan by SWR including IP addressing planning                100 - 124
16.  Annexure 6: Software-Defined Networking (SDN) in IP/MPLS Networks                          125 - 131

LIST OF FIGURES

S. No.  Description                                                                    Page No.

1.   Figure 1 – ISIS Topology for SWR                                                  15
2.   Figure 2 – BGP RR Logical Topology                                                18
3.   Figure 3 – Unified Segment Routing Transport                                      20
4.   Figure 4 – Transport over MPLS Topology                                           23
5.   Figure 5 – High Level Architecture for Data Centre Network and MPLS               29
6.   Figure 6 – Example Map of Zonal connectivity                                      35
7.   Figure 7 – Proposed diagram of SWR IPMPLS network                                 36
8.   Figure 8 – Proposed MPLS network diagram of NWR                                   38
9.   Figure 9 – Protection Path for Rewari-Ringas (RE-RGS)                             39
10.  Figure 10 – Protection Path for Ajmer-Manwar Junction (AII-MJ)                    40
11.  Figure 11 – Proposed architecture of IPMPLS network for IR                        41
12.  Figure 12 – A typical schematic diagram for implementation of IP-MPLS network     45
13.  Figure 13 – Typical example for VPNs                                              53
14.  Figure 14 – Typical logical Service Mapping Architecture                          53
15.  Figure 15 – Carrier: RailTel & Customer Carrier: Indian Railways                  54
16.  Figure 16 – Proposed Migration Scheme                                             61
17.  Figure 17 – Normal Way Side Stations without any LH                               67
18.  Figure 18 – Junction stations with LH other than CORE network                     68
19.  Figure 19 – Junction Stations (including Gateway stn) in CORE network             69
20.  Figure 20 – Maintenance Support System at Zonal Railway NOC and Central NOC       74

1. Abbreviations:

SN Short Form Full Form


1. 802.1Q IEEE specification for adding virtual local area network (VLAN)
tags to an Ethernet frame.
2. 802.3ad link Process that enables grouping of Ethernet interfaces at the
aggregation physical layer to form a single link layer interface, also known as
a link aggregation group (LAG) or LAG bundle
3. 802.3ah IEEE specification defining Ethernet between the subscriber and
the immediate service provider. Also known as Ethernet in the
first or last mile.
4. LE Logical Aggregate Ethernet
5. ARP Address Resolution Protocol.
6. AS Autonomous System
7. BA classifier Behaviour Aggregate classifier.
8. BFD Bidirectional Forwarding Detection.
9. BGP Border Gateway Protocol
10. BUM Broadcast, Unknown unicast, and Multicast traffic.
11. CEF Cisco Express Forwarding
12. CEM Circuit Emulation
13. CLI Command-Line Interface
14. CoS Class of Service.
15. DF Designated Forwarder.
16. EBGP External BGP
17. FEC Forwarding Equivalence Class
18. FIB Forwarding Information Base
19. Gbps Gigabits per second
20. GRES Graceful Routing Engine Switchover.
21. HA High Availability
22. HLD High Level Design
23. IBGP Internal BGP
24. IEEE Institute of Electrical and Electronics Engineers
25. IGP Interior Gateway Protocol
26. IP Internet Protocol
27. ISIS Intermediate System to Intermediate System
28. LACP Link Aggregation Control Protocol.
29. LAG Link Aggregation Group.
30. LAN Local Area Network
31. L-FIB Label Forwarding Information Base

32. LFM Link Fault Management.
33. LLD Low Level Design
34. LSA Link-State Advertisement
35. LSI Label-Switched Interface
36. LSP Label-Switched Path (MPLS)
37. LSR Label-Switched Router
38. MF classifier Method for classifying traffic flows.
39. MPLS Multiprotocol Label Switching
40. NLRI Network Layer Reachability Information.
41. NSF Nonstop forwarding
42. NTP Network Time Protocol.
43. OAM Operation, Administration, and Maintenance.
44. P2P Point-to-point
45. PDU Protocol Data Unit
46. PE Provider Edge
47. PFE Packet Forwarding Engine.
48. pps packets per second
49. QoS Quality of Service.
50. RE Routing Engine
51. RIB Routing Information Base, also known as routing table
52. RR Route Reflector
53. SAFI Subsequent Address Family Identifier
54. SPF Shortest Path First
55. SR Segment Routing
56. SSO Stateful Switch over
57. VLAN Virtual LAN
58. VPLS Virtual Private LAN Service
59. VPN Virtual Private Network
60. VRF A routing instance of type Virtual Routing and Forwarding

2. Background:

In the 41st TCSC meeting, various issues related to the IP-MPLS backbone network for Indian
Railways were taken up as agenda items. As per the recommendations of the 41st TCSC, Railway
Board nominated a committee for “IP-MPLS routers and suggestion for
improvement/changes based on field experiences”, vide letter no. 2024/Tele/9(3)/1
(3460739), dated 08.04.2024 (attached as Annexure-1).

The committee consists of the following officials:

1. PCSO/CR -At Present (Convener) - Sh. C. K. Prasad


2. ED/Tele/RDSO - Sh. Dinesh Verma
3. CCE/WR - Sh. Rakesh Gupta
4. CCE/SWR - Sh. J. P. Shivaji
5. CCE/NWR - Sh. J. P. Meena

3. Terms of Reference:

The Terms of Reference (TOR) for the committee as given by Railway Board are as under:

1. Planning for CORE network and its connectivity with zones and other networks like
CRIS etc and the configuration of Routers in the CORE and at Edge Network

2. Specification of IP-MPLS routers and amendment/changes required in TAN version


2.0

3. Fixing performance criterion for OEMs of IP/MPLS routers

4. Formation of standard migration document for network services on IP-MPLS network

5. Standardization of station LAN infrastructure for using common LAN for all network
services at stations

6. Management of IP-MPLS and LTE network, establishing all India Network Operation
Centre (NOC) along with its staffing and operations.

The committee examined the existing network, its pros and cons, capacity constraints and,
in particular, its obsolescence, and studied the role that recent IP-MPLS technology
enhancements, which support the evolution from circuit-switched transport to packet-switched
networks, will play in IR's next-generation access and aggregation network infrastructure.

4. Executive Summary:

The role of communications in the growth and progress of organizations can never be
overemphasized, as communication is crucial for day-to-day operations and working.
Railways has, for its part, made large investments and created a dedicated
communication system and manpower to maintain the telecommunication
infrastructure. Because the communication requirements are diverse, the communication
technologies adopted are equally diverse, ranging from the initial rudimentary
magneto-phone to state-of-the-art OFC. As Railway operations and working are
geographically spread all over the country, communication needs to cover the entire
country, and that brings with it many challenges.

Railway communication was predominantly voice based and hence a major part of the
network was designed and implemented accordingly. Railways is adopting information
technology and computerization on a large scale for its internal use and also for meeting
passenger expectations. A number of mission-critical applications are being developed.
This necessitates transformation of the existing network to handle data and video with
the same standards of reliability and availability as are being delivered for voice
requirements. The transformation of the network will, first and foremost, involve identifying
appropriate technologies that will meet future needs while also ensuring compatibility with
existing equipment that still has residual life. It must also be ensured that assets
which are due for replacement, and the latest technology being adopted for
replacement, seamlessly integrate with this network and with the service requirements of all
users and applications. An important criterion is integration with the existing organizational
structure and processes and ease of adoption by the existing maintenance organization.

With mission-critical applications in use, both for train operation and for the working of
various departments, one important issue that did not affect circuit-switched networks but
shall determine the availability of future communication is network security and threat
management. This means that the network, while being secure itself, shall integrate
into a comprehensive network security and threat management system. There is now a
necessity for the network to distinguish communication traffic in terms of importance,
assign priorities and limit access while discerning users, passengers and intruders. The
backbone of the communication system shall be the network, both long haul and short
haul. It will have to transport services ranging from high-bandwidth, latency-sensitive
real-time applications, such as telepresence, CCTV feeds, Wi-Fi backhaul and LTE-R
backhaul, to low-bandwidth applications, such as remote monitoring of S&T assets,
control communication, e-tender, e-office and e-mail. Depending on the workflow and
processes of various departments, the network shall provide seamless connectivity
between various groups of users and servers/services offered from geographically diverse
locations with the desired service level.

The world is witnessing an explosion of information driven by the Internet and smart
devices such as IoT, the cause of which is the proliferation of networks that handle voice,
video and data seamlessly. While voice communications, using circuit-switched networks,
started the communication revolution, data communications, using packet-switched networks
to interconnect computers and a myriad of devices, propelled its exponential growth.
The need for ubiquitous connectivity has compelled the convergence of these two types of
networks and the evolution of the necessary standards. IP-based networks have emerged as
the default standard for adoption. Large-scale adoption of this standard has resulted in
equipment being available commercially off the shelf, driving down costs to
organizations.

Railway communication is constantly evolving to meet the Organization’s requirements.
Internet and intranet connectivity is being extended to users across the length and breadth
of the organization. Over-aged exchanges are progressively being replaced by IP
exchanges. Specifications/TANs have been issued for adoption of IP-based train traffic
control communication. All these are reflective of the fact that Railways is also embracing
IP-based technologies.

Three major technologies, each having its own strengths, are evolving in communication,
viz. Carrier Ethernet, MPLS-TP and IP-MPLS. Considering the various factors elucidated
above in the context of Railways' future communication needs, the choice of technology
is IP-MPLS. Hence, the adoption of IP-MPLS technology as the backbone for
Indian Railways is recommended. The detailed requirements of Indian Railways that
necessitate the adoption of IP-MPLS as the unified communication backbone are covered
in this report.

5. TOR Item No. 1:

Planning for CORE network and its connectivity with zones and other networks
like CRIS etc and the configuration of Routers in the CORE and at Edge Network

In the realm of Railway networks, operators are grappling with the imperative to deliver
advanced services that can swiftly adapt to evolving passenger demands and
requirements. Emerging trends such as the integration of high-speed Rail services,
increasing passenger volumes, adoption of smart railway technologies, and the need
for seamless connectivity require unprecedented flexibility, scalability, and efficiency
from the network infrastructure. Moreover, there is a growing pressure to optimize
operational costs amidst rising demands for enhanced services and customer
expectations.

Railway Access and Aggregation solutions have progressed from traditional systems
to integrated IP-MPLS architectures to address these challenges. The IP-MPLS Unified
Communication Backbone will be used to carry vital signalling applications such as
Electronic Interlocking (E.I.), Datalogger, BPAC, UFSBI etc. and telecom applications
such as LTE, voice, data & video, SCADA, TPC/Remote Control, VoIP-based TCCS,
VoIP-based exchanges, VSS/CCTV, Wi-Fi, Integrated Passenger Amenities,
UTS/PRS/FOIS, Loco Pilot/Asst. Loco Pilot & Guard communication with the Station
Master through radio communication etc., as well as real-time applications such as
TCAS/Kavach of Safety Integrity Level 4 (SIL-4) standards, Centralized Traffic Control (CTC)
and other such vital futuristic safety applications.

This unified approach offers a consolidated network infrastructure with standardized


operational processes. It provides significant benefits such as improved network
convergence, scalability to handle growing traffic volumes, enhanced reliability, and
efficient data forwarding mechanisms. However, managing such a complex
architecture remains a significant challenge, especially in large-scale railway
networks, due to the multitude of distributed protocols involved, which increases
operational intricacies.

Converged SDN Transport design introduces a forward-looking approach by


transitioning traditional railway network designs towards an SDN-enabled,
programmable infrastructure capable of supporting diverse railway services
(passenger operations, freight logistics, real-time telemetry, etc.) with simplicity, full
programmability, and seamless integration with cloud services. This evolution
promises to uphold stringent service level agreements (SLAs) while ensuring
operational efficiency and adaptability in a dynamic railway environment. For more
detailed description on Software-Defined Networking (SDN) in IP/MPLS Networks
refer to Annexure-6.

Communication in Railways

An understanding of the existing communication system of Indian Railways shall serve


to define the broad parameters and requirements that are the basis for the planning of
CORE network and its connectivity with zones and other networks like CRIS etc. The
different communication circuits that are in use can be classified as:
A. Operational
B. Administrative and
C. Data.

5.1 Operational Communication

These are circuits provided for the safe and punctual operation of trains. Hence, they
cover train traffic management, crew management and various aspects related to train
operations at stations.

5.1.1 Section Control

Section Control is a communication system that is used for effecting precedence and
crossing in train operations. It is a unique conference call where the section controller
is always off-hook. Any of the stations in the section can lift the phone handset and
get connected to the conference call. Further, only the section controller is provided
with the facility to call a station. This communication is geographically spread over all
the stations located in that section. The circuit originates at the Divisional HQ.

5.1.2 TPC/TLC/EC

Traction Power Control (TPC) is used for operation and maintenance of traction power
distribution and OHE, while Traction Loco Control (TLC) is used for loco interception,
power change and crew management. Emergency Communication (EC) is the communication
system used in the event of any emergency; it provides a means for authorized
personnel to talk to a person in the divisional control from anywhere in the block
section of the controlled section. This is done using a special phone called a Portable
Control Phone (PCP), available only with authorised railway personnel such as guards,
drivers etc. Hence this communication is of vital importance in the event of
accidents/unusual occurrences.
This communication is geographically spread over all the stations located in the
division. The circuit originates at the Divisional HQ.

5.1.3 Block Communication

The absolute block system ensures spatial segregation of trains in a section of track.
This is one of the most important circuits used in conjunction with block instruments
for movement of trains between adjacent block stations. These are point to point
dedicated voice circuits and shall in the foreseeable future continue unchanged. The
block communication is an exclusive point to point communication that connects two
adjacent stations.

5.1.4 LC Gate communication

The level-crossing gates are provided with a telephone so that the gateman can talk
to the station masters. The station master exchanges information with regards to train
movements in the block section. This is required for coordination for gate control. The
gate telephone works on a nominated quad of the 6-Quad cable laid between stations.
There is a point-to-multipoint circuit that connects the station to the gated LCs that are
within the control of the station. The geographical spread is typically within a block section
on either side of a station.

5.1.5 BPAC

Communication circuits are also required for the purpose of Block Proving by Axle
Counters (BPAC). This system is used to eliminate the requirement of manual check
for the last vehicle clearing the block section. It is one of the most important circuits
that enhances the line capacity and is very vital for the train working. The BPAC circuits
are working on the 6-Quad and are point to point circuits connecting adjacent BPAC
equipment and are spread over a block section.

5.1.6 Data Logger

Data loggers are used for logging of relay states in the stations. The data-logger
communication system is used to gather the datalogger data at a central location for
storage and analysis. The conventional network as per the extant scheme of RDSO is a
non-IP network, implemented as a daisy chain connecting the dataloggers provided at
each station and the Divisional HQ. This arrangement loads each datalogger to act as a
transport node for data on the network in addition to its primary functions of event
logging, alarm generation etc. The geographical spread is usually over the Division.

5.1.7 Driver/Guard to Station

All drivers and guards are provided with VHF handheld sets and stations are provided with
VHF sets, enabling communication among them. These sets work on a
nominated frequency. In the event of a disruption to wireline communications at a station,
it is used as a standby.

5.2 Administrative Communication

5.2.1 Exchanges (Existing & Future Inter-connectivity)

Railways has a Railway telephony network that spans across all the zones and
divisions. It also covers most of the important sub divisional locations. Telephone
exchanges are provided at each of these locations. An STD network has also been
established interconnecting all the zonal and divisional exchanges using IP Based
Next Generation Network (NGN). Many of these exchanges are already VoIP
exchanges and many are being replaced by the VoIP exchanges. Remote subscribers
of these exchanges at various stations are connected using Primary Drop-Insert (PD)
Muxes which in turn are networked through SDH equipment.

5.3 Data Communication

5.3.1 UTN

Unified Ticketing Network (UTN) is the ticketing network utilised to issue tickets to
Railway passengers both reserved and unreserved. It spans most of the stations on
IR today over a separate IP network. The WAN interfaces of the UTN use E1 circuits
built through Railways' SDH equipment provided at the stations as well as hired E1
channels from BSNL/MTNL.

5.3.2 FOIS Network

This is also a separate IP network which is used for operational purposes. Although this
network was initially created for the Freight Operations Information System, today it also
provides connectivity to users of other management applications, viz. Parcel
Management System, Control Office Application, Crew Management System etc.
This network connects important freight/commercial stations to the central servers
located at various locations, including Zonal/Divisional HQs and CRIS at Delhi. The
WAN interfaces of this network also use E1 circuits built through the SDH equipment
provided at the stations as well as hired E1 channels from BSNL/MTNL.

5.3.3 RC/SCADA

The SCADA/RC circuits are dropped for operation of traction power
distribution and control. The SCADA control is located in the divisional office in RE
territories. These circuits are used for operation of the switchgear available at traction
sub-stations/sectioning posts/sub-sectioning posts, ATs etc. These are low-bandwidth
real-time applications that are critical for the operation of traction switchgear and hence
critical for train operation in RE areas.
The SCADA system is located in the divisional HQ. Connectivity of the existing
network is provided through VF channels built over PD Muxes at the nearest station,
which in turn use E1 circuits built over the SDH network, and the last mile is through 6-
Quad/PIJF cables. Of late this network is being upgraded to an IP-based network on a
scheme similar to the UTN/FOIS networks.

5.3.4 Video Surveillance

Nowadays, video surveillance at stations has become very important due to
security and safety considerations. The requirements of the surveillance network are very
different from those of other networks. It is a high-speed, high-bandwidth network, and a lot of
data travels from the station cameras to the storage and monitoring locations. With
many stations being provided with surveillance systems, the Railways' backbone
network capacity will have to be enhanced multi-fold to cater for these requirements.

5.3.5 Railnet

Railnet is the enterprise wide area network of Indian Railways. It is a general-purpose
network serving the administrative and management requirements of Railways as
well as providing connectivity to the Internet. It is available to all officers and supervisors in
the field and has become a vital component of day-to-day working.
It is engineered as an L3-VPN using the MPLS architecture of RCIL. The network
spans all the Zonal Railway HQs, Divisional Railway HQs, production units,
workshops, training institutes, RDSO, Railway Board, stores depots, important stations
etc. Further, it will have to extend its reach to almost all stations in the near future.
Hence, the technology for the future should be chosen so as to satisfy this need.

5.3.6 Railway Display Network

Railway Display Network is another network now being worked upon by RCIL. This
network is supposed to provide live content for displays on the platform. These
contents shall be train information, public warnings / announcements, advertisements

etc. As live content shall be beamed from a central location, it will also demand a lot
of bandwidth.

5.3.7 Wi-Fi Service

Railways have started providing free Wi-Fi services to its passengers at the stations.
A separate TCP/IP Wi-Fi network is available at the stations to cater for this
requirement. In the foreseeable future this service shall have to be extended into a
separate wireless communication network for the staff of the various departments
working at a Railway station for operation and maintenance, replacing the VHF
handheld sets being used presently along with all their associated limitations. These
Wi-Fi enabled handsets can also run apps.

5.3.8 Application Developed by CRIS, RCIL and various users

CRIS and RCIL are developing a number of mission-critical applications for the
Railways. This will result in the computerization of all departments, necessitating
connectivity to all offices of the Railways, with all staff being users of the
network.

5.3.9 LTE-R

LTE for Railways has been sanctioned for providing communication from running
trains. KAVACH data will also use LTE-R in future. The IP-MPLS network will also work
as the backbone network for carrying LTE data.

5.4 From the above it can be concluded that the services needed cover voice, video and
data, spanning the spectrum from bandwidth-intensive, latency-sensitive real-time
applications to low-bandwidth, latency-tolerant applications. Also, while voice is a
significant part of Railway communications as on date, it is obvious that, in keeping
with the worldwide trend, Railway communication is also set to experience exponential
growth of data traffic.

From the above applications it must be noted that a wayside station needs connectivity
to adjacent stations, LC gates, IBS, sub-divisional locations, the Divisional HQ, the Zonal HQ,
regional locations (PRS/UTS) and CRIS/RCIL data centres. It can now be appreciated
that at any station, across the length and breadth of the country, a diversity of
services is to be extended to geographically dispersed locations. This gives an insight
into the routing complexity and connectivity requirements.

5.5 Backhaul Network Design

This design outlines a high-level architecture for a nationwide backhaul IP-MPLS


network supporting LTE services, including mission-critical applications and
broadband services for railways, over a converged IP-MPLS network with Quality of
Service (QoS) and Precision Time Protocol (PTP) support. The Unified MPLS
architecture provides a single converged network infrastructure with a common
operational model. It has great advantages in terms of network convergence, high
scalability, high availability, and optimized forwarding.

Planning & engineering of the IP-MPLS network for Indian Railways involves
considering various factors. Important factors for MPLS Control and Management
Plane High Level Design are detailed below:

5.5.1 IGP design

(i) ISIS

IS-IS is a link-state protocol that uses a least-cost algorithm to calculate the best
path for each network destination.
It uses link-state information to make routing decisions based on a shortest-path-
first (SPF) algorithm.

Integrated IS-IS provides routing capabilities to multiple address families and


topologies, and requires that all routers run a single routing algorithm. Link-state
PDUs sent by routers running Integrated IS-IS include all destinations running
either IPv4, IPv6 or CLNP Network Layer protocols.

In IS-IS, a single routing domain can be divided into smaller groups referred to as
areas. Routing between areas is organized hierarchically, allowing a domain to be
divided administratively into smaller areas. IS-IS accomplishes this organization by
configuring Level 1 and Level 2 ISs. Level 1 systems route within an area, and Level
2 ISs route between areas and toward other ASs. A Level 1 and Level 2 system
routes within an area on one interface and between areas on another.

All the LSR and Zone boundary routers will be part of ISIS Level 2, and all routers
between LSRs will be part of ISIS Level 1.

Figure 1 - ISIS Topology for SWR

(ii) ISIS Metrics

All IS-IS interfaces have a cost, which is a routing metric that is used in the IS-IS
link-state calculation. Routes with lower total path metrics are preferred over those
with higher path metrics. By default, IS-IS has a metric of 10.
Normally, IS-IS metrics can have values up to 63, and by default the total path
metric is limited to 1023. This metric range is insufficient for large networks and
provides too little granularity for traffic engineering, especially with high-bandwidth
links. So, in this network, the wide-metrics-only configuration will be applied in the ISIS
protocol. With wide metrics, the range is from 1 to 16777214.
The cost of a route is described by a single dimensionless metric that is
determined using the following formula: cost = reference-bandwidth/bandwidth.
For example, if you set the reference bandwidth to 1 Gbps (that is, reference-
bandwidth is set to 1,000,000,000), a 100- Mbps interface has a routing metric of
10.

The recommendation is to set the reference-bandwidth to 1000g.

With this configuration, interfaces of 10 Gbps will have a metric of 100, and
interfaces of 1 Gbps will have a metric of 1000.
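As a worked illustration of the reference-bandwidth rule above, the interface metrics can be reproduced with the small sketch below (Python is used purely for illustration; the router derives these values itself once wide metrics and the 1000g reference bandwidth are configured):

```python
# Sketch: IS-IS interface metric derived from the reference bandwidth.
# Assumes the 1000 Gbps reference bandwidth recommended in this section.

REFERENCE_BW_BPS = 1000 * 10**9          # 1000g reference bandwidth
WIDE_METRIC_MAX = 16_777_214             # upper bound with wide metrics

def isis_metric(link_bw_bps: int) -> int:
    """cost = reference-bandwidth / bandwidth, floored to at least 1."""
    metric = REFERENCE_BW_BPS // link_bw_bps
    return max(1, min(metric, WIDE_METRIC_MAX))

for bw, name in [(400 * 10**9, "400G"), (100 * 10**9, "100G"),
                 (10 * 10**9, "10G"), (10**9, "1G")]:
    print(f"{name}: metric {isis_metric(bw)}")
# 10G -> 100, 1G -> 1000, matching the values quoted above.
```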

(iii) Loop Free Alternate

In IS-IS network, a loop free alternate (LFA) is a directly connected neighbour that
provides pre computed backup paths to destinations reachable through the
protected link on the point of local repair (PLR). The primary goal of the remote
LFA is to increase the backup coverage for the IS-IS network and provide
protection within the rings.

To calculate remote LFA backup path, the IS-IS protocol determines the remote
LFA node in the following manner:

1. Calculates the reverse shortest path first from the adjacent router across the
protected link of a PLR. The reverse shortest path first uses incoming link metric
instead of outgoing link metric to reach a neighbouring node. The result is a set
of links and nodes, which is the shortest path from each leaf node to the root
node.
2. Calculates the shortest path first (SPF) on the remaining adjacent routers to find
the list of nodes that can be reached without traversing the link being protected.
The result is another set of links and nodes on the shortest path from the root
node to all leaf nodes.
3. Determines the common nodes from the above results. These nodes are the
remote LFAs.
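To make the "common nodes" idea concrete, the following is a minimal sketch of PQ-node selection on a toy ring topology. It is a simplified illustration in the spirit of remote LFA (RFC 7490), not a reproduction of any router implementation; the topology and metrics are invented for the example.

```python
# Sketch: remote-LFA (PQ-node) selection on a toy ring topology.
import heapq

def spf(graph, src):
    """Plain Dijkstra shortest-path-first from src."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy ring; S protects the link S-E (all metrics 10, chosen arbitrarily).
graph = {
    "S": {"E": 10, "A": 10},
    "E": {"S": 10, "C": 10},
    "A": {"S": 10, "B": 10},
    "B": {"A": 10, "C": 10},
    "C": {"B": 10, "E": 10},
}
S, E = "S", "E"
d_from = {n: spf(graph, n) for n in graph}   # SPF from every node

# P-space: nodes whose shortest path from S does not use the protected link.
p_space = {n for n in graph if n not in (S, E)
           and d_from[S][n] < d_from[S][E] + d_from[E][n]}
# Q-space: nodes whose shortest path to E does not use the protected link.
q_space = {n for n in graph if n not in (S, E)
           and d_from[n][E] < d_from[n][S] + d_from[S][E]}

print("P-space:", p_space, "Q-space:", q_space)
print("Remote LFA (PQ) nodes:", p_space & q_space)   # -> {'B'}
```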

(iv) ISIS Policies

With ‘wide-metrics’ enabled in ISIS, all L1 routes are automatically exported into the L2
database. Loopback addresses of other section routers will be redistributed into the
ISIS L1 database via an ISIS export policy on the ISIS L1/L2 routers, i.e. the LSR routers.

(v) ISIS Recommendations

• LSR Jn routers and Zone border routers will be in ISIS Level 2, whereas all other
routers (LERs Type-I & II) will be in ISIS Level 1.
• To keep routing information in the IGP to the minimum needed, all interfaces are
configured as passive by default, with backbone interfaces exempted so that they form
neighbourship.
• Include only basic routing reachability information in the IGP (loopbacks and
interface routes). /31s are usual for p2p links.
• Tuning a maximum IGP prefix-export-limit for external routes can be considered a
best practice.
• Manually configure router-ids for better troubleshooting.
• Assign ISIS Area addresses consistently across the Zonal Railway network.
• Extended/wide metrics are considered a best practice. Ensure the reference
bandwidth is set to 1000g.
• Configure Ethernet media interfaces as point-to-point where possible to avoid
unnecessary IGP resource utilization and to achieve quick neighbour convergence.
• Enable MD5 authentication for L1 and L2 point-to-point links for IS-IS.
• For IS-IS, increase the LSP lifetime to the maximum limit of 65535 to avoid
unnecessary CPU consumption.
• Enable BFD on IGP adjacencies in the backbone. The recommended value is 30 ms as
tx_interval and 4 as multiplier. This is a proposed value and is subject to change as per
requirement or field behaviour during testing.
• Use remote LFA for IS-IS to establish a precomputed backup path for quick
failover when the main path fails.
• The recommended SPF algorithm parameters:
• delay – 200 ms.
• rapid-runs – 5.
• hold-down – 10000.

5.5.2 BGP design

The Border Gateway Protocol (BGP) is an exterior gateway protocol used


primarily to establish point-to-point connections and transmit data between peer
ASs. BGP must explicitly advertise the routes between its peers. The route
advertisements determine prefix reachability and the way packets are routed
between BGP neighbors.

Multiprotocol BGP (MP-BGP) is an extension to BGP that enables BGP to carry


routing information for multiple network layers and address families.

Figure 2 - BGP RR Logical Topology

(i) BGP Route-Reflector

Internal BGP (IBGP) is generally used to peer with other neighbours in the
same AS; EBGP is used when the node peers with a neighbour outside its AS. To avoid
routing loops, IBGP does not advertise routes learned from an internal BGP peer
to other internal BGP peers. For this reason, BGP cannot propagate routes
throughout an AS by passing them from one router to another. Instead, BGP
requires either that all internal peers be fully meshed, so that any route advertised by one
router is advertised to all peers within the AS, or that scaling mechanisms such as route
reflectors or confederations be used.
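The reason a route reflector removes the need for a full IBGP mesh can be illustrated with a minimal behavioural sketch of the reflection rules (simplified; peer names are hypothetical and no vendor implementation is implied):

```python
# Sketch: IBGP route-reflector propagation rules (simplified).
# A route reflector re-advertises routes between its clients, so clients
# do not need a full IBGP mesh among themselves.

def reflect(route_source: str, peers: dict) -> list:
    """Return the IBGP peers a route reflector re-advertises a route to.

    peers maps peer-name -> 'client' or 'non-client'.
    Simplified rules: routes from a client go to all other peers;
    routes from a non-client go to clients only.
    """
    src_role = peers[route_source]
    targets = []
    for peer, role in peers.items():
        if peer == route_source:
            continue
        if src_role == "client" or role == "client":
            targets.append(peer)
    return targets

# Hypothetical zonal cluster: two LER clients behind one inline RR,
# plus a non-client session towards the zonal/backbone RRs.
peers = {"LER-1": "client", "LER-2": "client", "ZONAL-RR": "non-client"}
print(reflect("LER-1", peers))     # ['LER-2', 'ZONAL-RR']
print(reflect("ZONAL-RR", peers))  # ['LER-1', 'LER-2'] (clients only)
```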

▪ The MPLS network is planned to be designed with a hierarchical RR topology.

▪ In the above topology the backbone has 2 clusters with 4 routers in total, forming a
hierarchical BGP RR architecture.
▪ SBC and UBL have more than 2 directions, so RR clusters are planned at these
locations, with redundant LSR routers planned there as well.
▪ UBL LSR1 & SBC LSR1 will use the zone cluster-id.
▪ Each Inline-RR (Jn LSR) forms an RR cluster with the LERs in each direction.
▪ All Inline-RRs will form neighbourship with the Zonal RRs.
▪ All LER routers in a section will choose the nearest RR (LSR) cluster
to receive BGP routes.
▪ The local-address is the local loopback address. The neighbour addresses are
the remote loopback addresses. All internal BGP sessions must be configured
between primary loopback interface addresses.
▪ BFD will be used to protect BGP neighbours to detect failures faster & improve
convergence.

(ii) BGP Network Reachability Information

Multiprotocol BGP (MP-BGP) is an extension to BGP that enables BGP to carry


routing information for multiple network layers and address families. MP-BGP
carries the different address families through network layer reachability
information (NLRI) exchanged between the BGP peers.

The NLRI for IPv4 Unicast, IPv6 Unicast, VPNv4, VPNv6, IPv6 labelled unicast,
l2vpn evpn signalling will be enabled in the network based on current requirement.

▪ The VPNv4 is being used for different IPv4 L3VPNs.


▪ The VPNv6 is being used for different IPv6 L3VPNs
▪ l2vpn evpn signalling for EVPN-VPWS (P2P & P2MP L2 VPNs)

(iii) Recommendations for BGP

The following guidelines must be considered for BGP architecture:

▪ Segregate types of BGP peers & address-families in different BGP groups for
better performance and manageability.
▪ Enable precision-timers. The default hold-time is 90 seconds, meaning that the
default frequency for keepalive messages is 30 seconds. More frequent
keepalive messages and shorter hold times might be desirable in large- scale
deployments with many active sessions.
▪ RT Constrained Route Distribution is a feature that can be used by service
providers in Multiprotocol Label Switching (MPLS) Layer 3 VPNs to reduce the
number of unnecessary routing updates that route reflectors (RRs) send to
Provider Edge (PE) routers. The reduction in routing updates saves resources
by allowing RRs, Autonomous System Boundary Routers (ASBRs), and PEs
to have fewer routes to carry.
▪ Enable BGP router and route authentication across all peers.
▪ Every BGP session must be protected via multi-hop BFD sessions.
▪ Ensure unique route-distinguisher is configured for each VPN. The
recommended format for the route- distinguisher configuration is
#loopback:#value. The Route-Distinguisher has local significance, unlike the
Route-Target.
▪ A log message is generated whenever a BGP peer makes a state transition.

5.5.3 MPLS Design

Multi-Protocol Label Switching (MPLS) provides the mechanism for the


engineering of network traffic patterns that is independent of routing tables. MPLS
assigns short labels to network packets that describe how to forward them through
the network. MPLS is independent of any routing protocols. MPLS is particularly
useful in situations where:

▪ Multiple types of traffic share a data connection, with some traffic requiring
priority over others.
▪ Uptime is important; key locations have multiple connections so that alternative
paths always exist.
▪ Network congestion occurs sometimes on some connections.
▪ New sites will need to be connected to many different locations, while being
entirely invisible to many other sites on the network.
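The label-based forwarding described above can be pictured with a minimal label-FIB sketch showing the push/swap/pop operations along one LSP (router names and label values are illustrative only):

```python
# Sketch: MPLS label operations along a simple LSP (push / swap / pop).
# Labels and router names are illustrative only.

LFIB = {
    # (router, in-label): (operation, out-label, next-hop)
    ("LER-A", None): ("push", 1001, "LSR-1"),   # ingress imposes the label
    ("LSR-1", 1001): ("swap", 1002, "LSR-2"),
    ("LSR-2", 1002): ("pop",  None, "LER-B"),   # penultimate hop pop
}

def forward(router, label):
    op, out_label, next_hop = LFIB[(router, label)]
    print(f"{router}: {op:<4} in={label} out={out_label} -> {next_hop}")
    return out_label, next_hop

label, hop = forward("LER-A", None)   # ingress LER classifies into a FEC
label, hop = forward(hop, label)      # transit LSR swaps the label
label, hop = forward(hop, label)      # PHP pops before the egress LER
```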

(i) Segment Routing

Segment Routing is the latest advanced MPLS technology, which is in the process
of replacing the traditional LDP and RSVP-TE protocols by bringing label
distribution and traffic engineering under one umbrella, achieved only
via link-state IGP/BGP protocols.

Segment routing is a method to forward packets on the network based on the


source routing paradigm. The source chooses a path and encodes it in the packet
header as an ordered list of segments. Segments are an identifier for any type of
instruction. For example, topology segments identify the next hop toward a
destination. Each segment is identified by the segment ID (SID) which consists of
a flat unsigned 20-bit integer.
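As a small illustration of how a prefix-SID index maps into the 20-bit MPLS label space, the sketch below assumes an illustrative SRGB; the actual SRGB base and range are a network design choice to be fixed during implementation.

```python
# Sketch: Segment Routing prefix-SID to MPLS label mapping.
# The SRGB (Segment Routing Global Block) values are assumptions for
# illustration; the deployed base and range are design decisions.

SRGB_BASE = 16000            # assumed SRGB start
SRGB_RANGE = 8000            # assumed SRGB size
LABEL_MAX = (1 << 20) - 1    # the MPLS label field is 20 bits wide

def prefix_sid_label(sid_index: int) -> int:
    """Label advertised for a prefix SID = SRGB base + SID index."""
    if not 0 <= sid_index < SRGB_RANGE:
        raise ValueError("SID index outside the configured SRGB")
    label = SRGB_BASE + sid_index
    assert label <= LABEL_MAX
    return label

# e.g. giving each LSR/LER a node-SID index derived from its router number:
print(prefix_sid_label(101))   # -> 16101
print(prefix_sid_label(205))   # -> 16205
```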

(ii) SR-MPLS Design summary

▪ Topology independent loop-free alternate (TI-LFA) is a feature that protects


links and nodes.

▪ Fast Reroute for SR-TE traffic-engineered paths is configured as a means to
switch traffic, in failover scenarios, from the primary path to backup paths
within as close to 50 msec as feasible. The fast reroute feature is configured
under the IGP ISIS protocol. The convergence time depends on the method by
which link failure detection happens. In the case of a fibre cut the detection
is immediate and the possibility of getting sub-50 msec convergence is high.
However, if link failure detection has to be done by BFD with an interval of
15 msec (multiplier x3), the convergence time is mostly more than 50 msec.

▪ The Segment Routing Microloop Avoidance feature detects whether microloops
can possibly follow a topology change. If a node computes that a microloop
can occur on the new topology, the node creates a loop-free SR-TE policy path
to the destination with the use of a list of segments. After the RIB update delay
timer expires, the SR-TE policy is replaced with regular forwarding paths.
There is a default timer for the RIB update delay, which is taken care of by TI-LFA.
Unified Segment routing transport
• Unified Segment routing (SR/MPLS) based transport network

• ACI (VXLAN) to SR/MPLS handoff to use single data plane protocol on DC-PE and Transport Network

Figure 3 - Unified Segment Routing Transport

5.5.4 L3VPN Design

Virtual private networks (VPNs) are private networks that use a transport network
to connect two or more remote sites. Instead of dedicated connections between
networks, VPNs use virtual connections that are routed (tunnelled) through the transport
network.
The primary goal of an MPLS VPN is to provide connectivity between tenant CEs
that are attached to different PEs. L3VPN services are used in the network to
provide Layer 3 connectivity between remote sites and will constitute the full
forwarding path that all Layer 3 traffic should take. The default behaviour of PE
routers in a VPN is to advertise all VPN routes to peers configured with address-
family vpnv4, and it is up to the receiving PE to decide which to keep and which to
ignore based on the policies of its local VPNs.

There are two main components of a L3VPN:


▪ Route Distinguisher
▪ Route-Target.

A Route Distinguisher is a 64-bit value prepended to an IP address. This unique tag
helps identify the different customers' routes as packets flow across the same
service provider tunnel.

A route distinguisher is a locally unique number that identifies all route information
for a particular VPN. A Route Target (RT) is also a 64-bit value used to identify the
final egress PE device for customer routes in a particular VRF to enable complex
sharing of routes. The route target defines which route is part of a VPN. A unique
route target helps distinguish between different VPN services on the same router.

The concept of a VRF (virtual routing and forwarding table) distinguishes the
routes for different customers, as well as customer routes from provider routes on
the PE device. A separate VRF table is created for each VPN that has a
connection to a CE router. The VRF table is populated with routes received from
directly connected CE sites associated with the VRF instance, and with routes
received from other PE routers in the same VPN.

A single “per-vpn” label is advertised for the VRF as opposed to one label per prefix.
This makes troubleshooting easier.
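The interplay of RD and RT described above can be summarised in a short sketch: the RD keeps overlapping customer prefixes unique, while the RT decides which VRF imports a route. All values below are illustrative; the actual RD format should follow the #loopback:#value recommendation given earlier in this section.

```python
# Sketch: how RDs keep overlapping customer prefixes distinct and how RTs
# control which VRF imports a route. Values are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vpnv4Route:
    rd: str                    # route distinguisher, e.g. "<loopback>:<value>"
    prefix: str                # customer IPv4 prefix
    route_targets: frozenset   # export RT communities attached by the PE

# Two VRFs on different PEs using the same private prefix:
advertised = [
    Vpnv4Route("10.0.0.1:100", "192.168.1.0/24", frozenset({"target:64512:100"})),
    Vpnv4Route("10.0.0.2:200", "192.168.1.0/24", frozenset({"target:64512:200"})),
]

def vrf_import(routes, import_rts):
    """A receiving PE keeps only routes whose RTs match the VRF import policy."""
    return [r for r in routes if r.route_targets & import_rts]

# A hypothetical VRF importing only target:64512:100:
for r in vrf_import(advertised, {"target:64512:100"}):
    print(r.rd, r.prefix)   # the RD keeps it unique despite the address overlap
```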

(i) 6PE - IPv6 Transport in a MPLS Core

The primary goal of an MPLS VPN is to provide connectivity between tenant CEs
that are attached to different PEs. The VPN concept is global, whereas a VRF is
a local instance at a specific PE

An IPv6 Provider Edge router (6VPE) tunnels IPv6 traffic over an IPv4 tunnel
using MPLS as the transport medium. The main advantage of the 6VPE approach
over using dual-stack tunnels is that it avoids the provisioning overhead of
configuring IPv6 addresses on all core links and nodes.

The 6PE/6VPE routers exchange the IPv6 reachability information transparently


over the IPv4 MPLS core using MP- BGP

The hosts, CEs, and PEs, are dual-stack devices that support both IPv4 and IPv6.
The PE assigns MPLS labels to IPv6 prefixes, and this information is sent via iBGP
to the RRs.

(ii) L3VPN Design Summary

▪ Per-table label allocation shall be enabled


▪ Import RT community will be applied when needed.
▪ 6VPE for IPv6 transport over MPLS network.

5.5.5 L2VPN Design

EVPN-VPWS: EVPN is a next-generation solution that provides Ethernet


p2p/multipoint services over MPLS networks. EVPN operates in contrast to the
existing Virtual Private LAN Service (VPLS) by enabling BGP control-plane-
based MAC learning in the core: in EVPN, PEs that participate in the EVPN
instances learn user MAC routes in the control plane with the use of the MP-BGP
protocol.

EVPN brings a number of benefits as mentioned:


▪ Per-flow redundancy and load balancing.
▪ Simplified Provisioning and Operation.
▪ Optimal forwarding.
▪ Fast Convergence.
▪ MAC address scalability.
▪ Multivendor solutions under IETF standardization.

The MAC addresses learned on one device need to be learned or distributed on


the other devices in a VLAN. EVPN Software MAC Learning feature enables the
distribution of the MAC addresses learned on one device to the other devices
connected to a network. The MAC addresses are learned from the remote devices
with the use of BGP.

These EVPN modes are supported:


Single homing - This enables you to connect a user edge (CE) device to one
Provider Edge (PE) device. In this mode the ESI value is null for each PE-CE link.
Multihoming - This enables you to connect a user edge (CE) device to two or
more Provider Edge (PE) devices to provide redundant connectivity. No inter-
chassis link is required. The redundant PE device ensures that there is no traffic
disruption when there is a network failure.

5.5.6 Circuit Emulation (CEM)

Circuit Emulation (CEM) is a technology that provides a protocol-independent


transport over IP/MPLS networks. It enables proprietary or legacy applications to
be carried transparently to the destination, similar to a leased line.

CEM provides a bridge between a Time-Division Multiplexing (TDM) network


and Multiprotocol Label Switching (MPLS) network. The chassis encapsulates
the TDM data in the MPLS packets and sends the data over a CEM pseudowire
to the remote Provider Edge (PE) chassis. As a result, CEM functions as a
physical communication link across the packet network.

The chassis supports the pseudowire type that utilizes CEM transport: Structure-
Agnostic TDM over Packet (SAToP). L2VPN over IP/MPLS is also supported on
the interface modules

Pseudo wires manage encapsulation, timing, order, and other operations in


order to make it transparent to users. The pseudowire tunnel acts as an
unshared link or circuit of the emulated service. CEM is a way to carry TDM
circuits over packet switched network. CEM embeds the TDM circuits into
packets, encapsulates them into an appropriate header, and then sends that
through Packet Switched Network. The receiver side of CEM restores the TDM
circuits from packets.

Figure 4 - Transport over MPLS Topology
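A simplified view of the SAToP idea, i.e. slicing a continuous TDM byte stream into fixed-size pseudowire payloads and restoring it at the far end, is sketched below. The payload size is an assumption for illustration; real deployments tune payload size, headers and jitter-buffer behaviour on the CEM interface modules.

```python
# Sketch: structure-agnostic packetization of an E1 byte stream (SAToP idea).
# Payload size is an assumption for illustration only.

E1_RATE_BYTES_PER_SEC = 2_048_000 // 8    # E1 is 2.048 Mbit/s
PAYLOAD_BYTES = 256                       # assumed bytes of TDM data per packet

def packetize(tdm_stream: bytes):
    """Cut the continuous TDM stream into fixed-size pseudowire payloads."""
    for seq, offset in enumerate(range(0, len(tdm_stream), PAYLOAD_BYTES)):
        yield {"sequence": seq, "payload": tdm_stream[offset:offset + PAYLOAD_BYTES]}

def depacketize(packets):
    """Receiver reorders by sequence number and restores the TDM stream."""
    ordered = sorted(packets, key=lambda p: p["sequence"])
    return b"".join(p["payload"] for p in ordered)

one_ms_of_e1 = bytes(E1_RATE_BYTES_PER_SEC // 1000)     # 256 bytes per millisecond
packets = list(packetize(one_ms_of_e1 * 10))            # 10 ms of traffic
assert depacketize(packets) == one_ms_of_e1 * 10
print(len(packets), "packets,", PAYLOAD_BYTES, "bytes of TDM payload each")
```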

5.5.7 VRF Routing solution

As explained in the previous chapter, the concept of a VRF (virtual routing and
forwarding table) distinguishes the routes for different customers, as well as
customer routes from provider routes on the PE device.

Routing policies allow controlling the routing information between the routing
protocols and the routing tables and between the routing tables and the forwarding
table. Route-maps/prefix-lists allow you to control which routes the routing
protocols store in and retrieve from the routing table.

(i) Inbound and outbound routing policies

(a) BGP Communities

A community is a route attribute used by BGP to administratively group routes with


similar properties.

One main role of the community attribute is to be an administrative tag value used
to associate routes together. Generally, these routes share some common
properties, but that is not required. Communities are a flexible tool within BGP to
manipulate and control routing exchange. The communities
will be used in routing policies to match the corresponding group of routes and to
apply a route condition.

(b) VRF Routing Summary:

• The CE advertises all its routes to the PE, independent of what protocol is used
between them.
• The VRF import policy allows routes matching the VRF-target.
• Communities will be applied for the prefixes from different sites in different
functions.

5.5.8 Interfaces

(i) Link Aggregation Group

IEEE 802.3ad link aggregation enables Ethernet interfaces to form a single link-
layer interface. The link aggregation group (LAG) balances the traffic across the
member links within an Ethernet bundle and effectively increases the uplink
bandwidth. Another advantage of LAG is that it increases availability, because
it is composed of multiple member links: if one member link fails, the LAG
continues to carry traffic over the remaining member links.

(ii) LACP

LACP is part of the IEEE specification 802.3ad that allows you to bundle several
physical ports to form a single logical channel. When you change the number of
active bundled ports on a port channel, traffic patterns will reflect the rebalanced
state of the port channel.

LACP is one method of bundling several physical interfaces to form one logical
interface. An aggregated Ethernet interface can be configured with or without
LACP enabled.

LACP was designed to achieve the following:

• Automatic addition and deletion of individual links to the bundle without user
intervention
• Link monitoring to check whether both ends of the bundle are connected to the
correct group

Also enabling LACP helps detect hard and soft failures of the link, and
misconfiguration of the member links.
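Traffic distribution across LAG member links is typically hash based per flow, so that packets of a single flow are not reordered. The sketch below only illustrates the principle; the actual hash inputs and algorithm are platform specific, and the interface names are hypothetical.

```python
# Sketch: per-flow hashing over LAG member links (principle only).
import hashlib

MEMBER_LINKS = ["xe-0/0/1", "xe-0/0/2", "xe-0/0/3"]   # hypothetical member links

def pick_member(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the flow 5-tuple to choose a member link deterministically."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}-{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return MEMBER_LINKS[digest % len(MEMBER_LINKS)]

# The same 5-tuple always maps to the same link (no reordering within a flow);
# different flows spread across the bundle.
print(pick_member("10.1.1.1", "10.2.2.2", 49152, 443, "tcp"))
print(pick_member("10.1.1.1", "10.2.2.2", 49153, 443, "tcp"))
```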

(iii) Link Layer Discovery Protocol

The Link Layer Discovery Protocol (LLDP) is an industry-standard protocol to allow


networked devices to advertise capabilities, identity, and other information onto a
LAN. LLDP allows network devices that operate at the lower layers of a protocol
stack (such as Layer 2 bridges and switches) to learn some of the capabilities and
characteristics of LAN devices available to higher layer protocols, such as IP
addresses. The information gathered through LLDP operation is stored in a network
device and can be queried via SNMP. Topology information can also be gathered from
this database. Some of the information that can be gathered by LLDP (only minimal
information is mandatory) is:

• System name and description
• Port name and description
• VLAN name and identifier
• IP network management address
• Capabilities of the device (for example, switch, router, or server)
• MAC address and physical layer information
• Power information
• Link aggregation information
• LLDP is to be enabled on all LSR and LER backbone interfaces.

(iv) Damping Physical Interface Transitions

When the link between a router interface and the transport devices is not stable,
this can lead to periodic flapping. For shorter physical interface transitions, you
configure interface damping with the hold-time statement on the interface. The hold
timer enables interface damping by not advertising interface transitions until the
hold timer duration has passed. When a hold-down timer is configured and the
interface goes from up to down, the down hold-time timer is triggered. Every
interface transition that occurs during the hold-time is ignored. The actual value
could be adjusted based on field results & based on transmission protection
available.

Physical interface damping limits the advertisement of the up and down transitions
(flapping) on an interface. An unstable link between a router interface and the
transport devices can lead to periodic flapping. Longer flaps occur with a period of
about five seconds or more, with an up-and-down duration of one second. For these
longer periodic interface flaps, you configure interface damping with the damping
statement on the interface. This damping method uses an exponential back-off
algorithm to suppress interface up and down event reporting to the upper-level
protocols. Every time an interface goes down, a penalty is added to the interface
penalty counter. If at some point the accumulated penalty exceeds the suppress
threshold, the interface is placed in the suppress state, and further interface up and
down state transitions are not reported to the upper-level protocols. The penalty
added on every interface flap is 1000.
The default values are:

• half-life-period: 5 sec
• max-suppress: 20 sec
• reuse-threshold: 1000
• suppress-threshold: 2000
These timers are to be tuned to requirements during the testing phase.
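The exponential back-off behaviour can be simulated to see when a flapping interface is suppressed and later becomes reusable. The sketch below uses the default values quoted above; it is an illustration only, and field tuning may change the behaviour.

```python
# Sketch: interface damping with exponential decay of the flap penalty,
# using the default values quoted above.
import math

HALF_LIFE, PENALTY_PER_FLAP = 5.0, 1000.0
SUPPRESS, REUSE, MAX_SUPPRESS = 2000.0, 1000.0, 20.0
# Capping the penalty keeps the suppression time within max-suppress.
PENALTY_CEILING = REUSE * 2 ** (MAX_SUPPRESS / HALF_LIFE)

def simulate(flap_times, horizon=40):
    penalty, suppressed, prev_t = 0.0, False, 0.0
    for t in [x / 10 for x in range(0, int(horizon * 10) + 1)]:
        penalty *= math.exp(-(t - prev_t) * math.log(2) / HALF_LIFE)  # decay
        prev_t = t
        if t in flap_times:
            penalty = min(penalty + PENALTY_PER_FLAP, PENALTY_CEILING)
        if not suppressed and penalty >= SUPPRESS:
            suppressed = True
            print(f"t={t:5.1f}s penalty={penalty:7.1f} -> interface suppressed")
        elif suppressed and penalty <= REUSE:
            suppressed = False
            print(f"t={t:5.1f}s penalty={penalty:7.1f} -> interface reusable")

simulate(flap_times={1.0, 3.0, 5.0})   # three quick flaps exceed the threshold
```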

(v) MTU

MTU is the largest transmissible unit size of a datagram on a link. If a router


needs to send a packet that is larger than the outgoing interface MTU, this packet
will need to be fragmented. Fragmentation is best avoided in a well-planned
network, as it has performance implications. The MTU must match on both ends
of the link.

• PHY MTU = IP MTU + header

• MPLS MTU = IP MTU – (<number of labels> x 4 bytes)

IP MTU is the maximum size of the IP packet (with IP header).


An MTU value of 9000 should be configured on the backbone interfaces. For other
interfaces it is necessary to make sure that both ends are configured with the same MTU
value. The best practice is to leave the access interfaces at the default MTU of 1500.
This can be changed when any application demands a larger MTU size.
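A small worked example of budgeting for label overhead is given below. The label-stack depth of 4 is an assumption for illustration (for example transport, service and protection labels); the point is simply that each label consumes 4 bytes of the interface MTU.

```python
# Sketch: accounting for MPLS label overhead when budgeting MTUs.
# The label-stack depth used here is an assumption for illustration.

LABEL_BYTES = 4

def max_ip_payload(interface_mtu: int, label_stack_depth: int) -> int:
    """Largest IP packet that still fits once the label stack is imposed."""
    return interface_mtu - label_stack_depth * LABEL_BYTES

backbone_mtu = 9000          # recommended backbone interface MTU above
print(max_ip_payload(backbone_mtu, 4))   # 8984-byte IP packets still fit
print(max_ip_payload(1500, 4))           # only 1484 bytes on a default MTU
# With a 9000-byte backbone MTU there is ample headroom for 1500-byte access
# traffic plus labels, so fragmentation in the core is avoided.
```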

5.5.9 Class of Service and Quality of Service

This section covers design principles and high-level design details for
implementing class of service mechanisms and methods to provide proper
treatment to the various traffic types traversing through networks devices.
In order to ensure proper end-to-end treatment through the MPLS network, we
must map our QoS into the available classes.

QoS refers to the ability of a network to provide improved service to selected


network traffic over various underlying technologies.

QoS features provide improved and more predictable network service by


implementing the following services:

• Supporting guaranteed bandwidth


• Improving loss characteristics
• Avoiding and managing network congestion
• Shaping network traffic
• Setting traffic priorities across the network

(i) Class of service design

When a network experiences congestion and delay, some packets must be dropped.
Class of service (CoS) enables traffic to be divided into classes and offered various
levels of throughput and packet loss when congestion occurs. This allows packet
loss to happen according to rules that are configured.

For interfaces that carry IPv4/IPv6 and MPLS traffic, CoS provides multiple
classes of service for different applications. On the router, multiple classes can be
configured for transmitting packets, defining which packets are placed into
each output queue, scheduling the transmission service level for each queue, and
managing congestion using a random early detection (RED) algorithm.

The network shall provide dedicated transport for video, voice and data services.
These services/applications place different requirements upon the network in terms
of the behaviour of the packets as they traverse the network from source to
destination. These behaviours are often expressed in terms of three basic
attributes: loss, latency and jitter. The behaviours are independent of any particular
application. Therefore, traffic from apparently unrelated applications may require
the same behaviour and should, therefore, be treated identically.

Proposed categorization of services and traffic flow

• Internet Default
• Default
• Gold
• Silver
• Bronze
• Management
• Platinum
• Diamond

Traffic Policing: Traffic policing allows you to control the maximum rate of traffic
sent or received on an interface, and to partition a network into multiple priority
levels or class of service (CoS).

Congestion Avoidance: Congestion avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network bottlenecks. Congestion avoidance is achieved through packet dropping. Among the more commonly used congestion avoidance mechanisms is Random Early Detection (RED), which is optimum for high-speed transit networks. QoS includes an implementation of RED that, when configured, controls when the router drops packets. If Weighted Random Early Detection (WRED) is not configured, the router uses the cruder default packet drop mechanism called tail drop.
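
The drop decision RED makes can be summarized by the following sketch (simplified Python for illustration; the thresholds, averaging weight and maximum drop probability are assumed example values, not recommendations):

    import random

    MIN_TH, MAX_TH, MAX_P = 20, 60, 0.10   # queue thresholds (packets), max drop probability
    WEIGHT = 0.002                          # EWMA weight for the average queue size

    avg_queue = 0.0

    def red_should_drop(current_queue_len: int) -> bool:
        """Update the average queue size and decide whether to drop the arriving packet."""
        global avg_queue
        avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
        if avg_queue < MIN_TH:
            return False                               # no congestion: never drop
        if avg_queue >= MAX_TH:
            return True                                # heavy congestion: always drop
        drop_prob = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < drop_prob             # early, probabilistic drop

WRED extends this behaviour by keeping separate thresholds per drop precedence, which is how the class and DSCP plan in the table below can be translated into differentiated drop behaviour during congestion.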

(ii) Summary of COS settings

Any CoS implementation must work consistently end to end through the network.
CoS features interoperate with other vendors’ CoS implementations because they
are based on IETF Differentiated Services (DiffServ) standards. The purpose of
a CoS configuration is to define the forwarding treatment of a packet at each router
(hop) along a forwarding path toward a destination.

The following table presents the CoS plan for the Zonal Railway services. The Tx bandwidth values recommended are based on best practices; Railways are to decide the actual values based on their requirements. These values can be tuned during the implementation and testing phase.

QoS Mapping Table

Class Name         Priority   Tx Bandwidth (%)   DSCP/EXP Mapping   Example Service
Diamond            high       10                 EF/7               Pseudo wire (E1), TCCS, VOIP, Safety Critical
Platinum           low        8                  AF42/6             Mission Critical / Multicast / LTE traffic
Gold               low        10                 AF41/2             L3-VPN / L2-VPN (UTS, FOIS, SCADA etc.)
Silver             low        40                 AF31/3             Services Type 1 (Video) VSS
Bronze             low        10                 AF21/4             Services Type 2 (RailNet)
Management         high       2                  NC/5               Network Management
Default            low        10                 AF12/1             Wi-Fi / Public hotspot
Internet Default   low        remainder          0/0                All best effort or low priority traffic

Table: Proposed CoS Summary

5.6 Unified SR/MPLS transport across Data Centres

Service Providers (SPs) use SR/MPLS encapsulation in their transport networks. A data centre to transport handoff with SR/MPLS allows an SP to use a single SR/MPLS data-plane protocol on its transport devices. Application Centric Infrastructure (ACI), using a leaf-and-spine architecture, uses VXLAN within the fabric to solve data centre challenges such as workload mobility and integration with different types of virtual environments to support automation and visibility. With the SR/MPLS handoff from ACI, a single BGP EVPN session can exchange the information of all the prefixes in all the VRFs, instead of one routing protocol session and sub-interface per VRF. This architecture reduces operational complexity and results in better scale and automation for both the transport and the data centre. The figure below shows SR/MPLS handoff across Multi-Site, Multi-Pod, and remote leaf at Divisions/Edge locations.

ACI provides consistent policy, automation, telemetry, intelligent service chaining, and ease of operations to geographically distributed central, regional, and edge telecom data centres. The advantages include automation and consistent policy across the data centre and transport domains. Telecom Service Providers are using Segment Routing (SR) Multi-Protocol Label Switching (MPLS) in the transport domain; accordingly, a high-level architecture for the Data Centre, as shown in the figure below, is proposed.

[Figure: SR/MPLS handoff between Divisional/Edge DCs, Zonal Headquarters DC(s) and the central sites at Secunderabad and Delhi, with the EVPN/VXLAN fabrics interconnected over the IP-MPLS backbone]
Figure 5 - High Level Architecture for Data Centre Network and MPLS

5.7 Proposed IP-MPLS Network Architecture for Indian Railways:

The IP-MPLS network is a multi-layer, centrally managed IP backbone network designed to provide reliable routes to cover all possible destinations. It shall primarily consist of MPLS-enabled Provider and Provider Edge routers interconnected in such a way as to ensure no single point of failure. It will facilitate the convergence of voice, data and video networks into a single unified packet-based multi-service network capable of meeting all current requirements.
The network architecture is a collection of logical and physical functions distributed in different levels of hierarchies.

Key Considerations:

● Scalability: The network should be designed with scalability in mind to accommodate future growth in traffic volume and user demand. This includes the ability to add additional capacity and functionality easily.

● Security: Robust security measures are essential to protect sensitive data transmitted over the network. This includes encryption, access control, and intrusion detection/prevention systems.

● Redundancy: Network redundancy is critical for ensuring high availability and mission-critical service continuity. This can be achieved through route diversity, backup links, and geographically dispersed network elements.

● QoS: QoS mechanisms should be implemented within the MPLS network to prioritize
mission-critical traffic and ensure consistent performance for different service types.
This could involve techniques like traffic shaping, differentiated services (DiffServ),
and resource reservation protocol (RSVP).

● PTP Support: Precision Time Protocol (PTP) ensures precise time synchronization across the network, which is crucial for applications like location-based services and financial transactions. This can be achieved through dedicated PTP servers strategically placed within the network.

Traffic Classes are used internally for determining fabric priority and as the match
condition for egress queuing. QoS Groups are used internally as the match criteria for
egress CoS header re-marking. IPP/DSCP marking and re-marking of ingress MPLS
traffic is done using ingress QoS policies. MPLS EXP for imposed labels can be done
on ingress or egress, but if you wish to rewrite both the IPP/DSCP and set an explicit
EXP for imposed labels, the MPLS EXP must be set on egress.

The priority-level command used in an egress QoS policy specifies the egress transmit
priority of the traffic vs. other priority traffic. Priority levels can be configured as 1-7
with 1 being the highest priority. Priority level 0 is reserved for best-effort traffic.

5.8 Routed Optical Networking

Routed Optical Networking simplifies the network design by collapsing complex technologies and network layers into a single, cost-efficient and easy-to-manage network infrastructure.

Routed Optical Networking tackles the challenges of building and managing networks
by simplifying both the infrastructure and operations.

There are two main concepts driving changes in the Routed Optical Networking
architecture:

• Direct integration of digital coherent WDM interfaces in the router

• Photonic infrastructure simplification by reducing/eliminating reliance on ROADMs

The direct integration of digital coherent WDM interfaces in the router eliminates the
traditional manually intensive service hand-off across the demarcation between the
optical transport and packet domains.

The result is a single network infrastructure that can be planned, designed, implemented, and operated as a single entity. Network automation is a key element to plan, optimize, manage, and maintain all the network functions to enable a true SDN and drive network intelligence on an end-to-end basis. The path computation, orchestration, and management toolkits that form the automation ecosystem are open, programmable, modular, operationally ready, and consistent with existing practices.

The automation architecture includes unified capacity planning, path optimization, and
element management for both IP and optical layers.

IP/optical convergence is achieved through the integration of optical transponder functions: the new pluggable coherent optics incorporate the coherent DSP on the optical pluggable module instead of on the host line card.

A key pillar of the Routed Optical Network is the integration of the coherent pluggable
modules. A key enabler for the scale is the 400GbE line rate. 400GE coherent optics
leverage a multi-vendor Quad Small Form-Factor Pluggable (QSFP) with standardized
specifications, which allows interoperability for easier adoption and scale. In order to
ensure full flexibility, these line cards will support both coherent and grey optical
interfaces (or a combination thereof) in the form of QSFP56-DD form-factor
pluggables. The use of QSFP56-DD for both coherent and grey optical interfaces can
be leveraged to limit any trade-off in terms of port densities and IP fabric capacity
commonly associated with coherent optics on routing platforms (previously referred to
as IPoDWDM). This means that a dedicated line card to host coherent DWDM optical interfaces will no longer be required.

A universal line card, which can be flexibly deployed to support the termination of coherent DWDM or grey optical line interfaces, can be realized. The QSFP form factor has been widely leveraged in the industry, and many OEMs are contributors to promoting the QSFP-DD Multi-Source Agreement (MSA) through the Optical Internetworking Forum (OIF) as well as other standards organizations.

Evolution of Digital Coherent Optics: In the IP aggregation application space, IP/optical integration can already be served by leveraging IP aggregation devices which feature Modular Port Adapters (MPA) and Digital Coherent Optical (DCO) pluggable modules. The integration of DCO modules allows the direct termination of coherent DWDM interfaces into the IP aggregation devices without incurring the cost and complexity of optical transponder line cards.

5.8.1 DWDM

Most modern Telecom networks start at the physical DWDM fiber optic layer. Above
the physical fiber is technology to allow multiple photonic wavelengths to traverse a
single fiber and be switched at junction points.

5.8.2 Ethernet/IP

In all high-bandwidth networks, there is an Ethernet layer over which IP services traverse, since almost all data traffic today is IP. Ethernet and IP are used due to their support for statistical multiplexing, topology flexibility, and widespread interoperability between different vendors based on well-defined standards.

5.8.3 Enabling Technologies

(i) Pluggable Digital Coherent Optics


Simple networks are easier to build and easier to operate. As networks scale to handle
traffic growth, the level of network complexity must decline or at least remain flat.

Transponders or muxponders have typically been used to aggregate multiple 10G or 100G signals into a single wavelength. However, with reach limitations overcome and wavelengths operating at 400G, the transponder becomes a 1:1 input-to-output stage in the network, adding no benefit.

The Routed Optical Networking architecture removes this limitation and brings this efficiency to networks of all sizes, owing to advancements in coherent pluggable technology.

(ii) QSFP-DD, 400ZR, and OpenZR+ Standards

The drive to improve network efficiency has led to shifting coherent DWDM functions into router pluggables. Technology advancements have shrunk the Digital Coherent Optics (DCO) components into the standard QSFP-DD form factor, meaning no specialized hardware is needed and the highest-capacity routers available today can be used. ZR/OpenZR+ QSFP-DD optics can be used in the same ports as the highest-speed 400G non-DCO transceivers.

The routed optical networking solution works by converging IP and private-line services (TDM, OTN, Fibre Channel, and Ethernet over SONET) onto a single Internet Protocol/Multiprotocol Label Switching (IP/MPLS) network where all switching is done at Layer 3, leveraging standardized protocols such as Circuit-Style Segment Routing (CS-SR). This simplified architecture enables open data models and standard APIs, allowing a provider to focus on automation initiatives for a simpler network topology. The benefits of the routed optical networking architecture are simplified planning, design, activation, troubleshooting, and management.

Reducing the number of devices in the network enhances resiliency and availability, while also optimizing the use of wavelengths and the embedded fibre capacity.

5.8.4 Typical Link Budget of an OFC Connection with SFPs up to 80 km

The link budget of an Optical Fiber Cable (OFC) connection is the total amount of
optical power loss a system can tolerate while still maintaining a reliable data
transmission. It's calculated by taking into account the power output of the transmitter,
the sensitivity of the receiver, and the losses incurred along the optical path, such as
fiber attenuation, connector losses, and splice losses.

For SFPs used for up to 80 km, such as 10GBASE-ZR or DWDM SFP+, the link budget
calculation can be broken down as follows:

Components of the Link Budget:

(i) Transmitter Power (Tx Power):

o The optical power output from the SFP module.


o Typical range for 10GBASE-ZR and DWDM SFP+ is around +3 dBm to +5 dBm.

(ii) Receiver Sensitivity:

o The minimum optical power required at the receiver to ensure proper signal reception.
o Typical range for 10GBASE-ZR and DWDM SFP+ is −23 dBm to −27 dBm.

(iii) Fiber Attenuation:

o The amount of signal loss due to the fiber itself. For Single-Mode Fiber (SMF), the
typical attenuation is around 0.25 dB/km at 1550 nm.
o For 80 km: Fiber Loss = 80 km×0.25 dB/km = 20 dB

(iv) Connector and Splice Losses:

o Connectors typically add 0.5 dB per connector, and each splice adds about 0.1-0.2 dB.
o For a typical system with 2 connectors and 26-27 splices over an 80 km length of fibre (considering a 3 km drum length), the loss is around 1 dB + 27 x 0.1 dB = 3.7 dB.

(v) System Margin:

o To ensure stable performance, a safety margin (typically 3-4 dB) is factored in to account for unexpected losses and aging of components.

Link Budget Calculation: Let's break this down for a typical 80 km 10GBASE-ZR or
DWDM SFP+ link:

a) Tx Power: +3 dBm (typical).
b) Receiver Sensitivity: −24 dBm (typical).
c) The basic link budget (without safety margin) is:
   Link Budget = Tx Power − Receiver Sensitivity
   Link Budget = (+3 dBm) − (−24 dBm) = 27 dB
d) Fiber Loss: 80 km × 0.25 dB/km = 20 dB.
e) Connector and Splice Losses: 3.7 dB (as calculated above).
f) System Margin: Assume 2 dB.
   Total Losses = Fiber Loss + Connector/Splice Losses + System Margin
                = 20 dB + 3.7 dB + 2 dB = 25.7 dB

The loss budget required for this 80 km connection is approximately 25.7 dB. Given that the available budget is 27 dB (for a typical 10GBASE-ZR or DWDM SFP+), the link is feasible, but it is very near the limit. Factors such as splicing may need to be optimized, or additional components such as optical amplifiers used, to ensure stability in real-world conditions. If the SFP has slightly better transmitter power or receiver sensitivity, additional margin would be available.
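
The same arithmetic can be captured in a small helper, shown below as an illustrative Python sketch (the default loss figures are the assumed values used in this section, not vendor specifications):

    def link_budget_check(tx_power_dbm, rx_sensitivity_dbm, length_km,
                          fibre_loss_db_per_km=0.25,
                          connector_splice_loss_db=3.7,
                          system_margin_db=2.0):
        """Compare the available optical budget with the estimated path losses."""
        available = tx_power_dbm - rx_sensitivity_dbm              # e.g. +3 - (-24) = 27 dB
        losses = (length_km * fibre_loss_db_per_km                 # fibre attenuation
                  + connector_splice_loss_db                       # connectors and splices
                  + system_margin_db)                              # safety margin
        return available, losses, available >= losses

    # 80 km 10GBASE-ZR / DWDM SFP+ example from the text:
    print(link_budget_check(3.0, -24.0, 80))
    # (27.0, 25.7, True) -> feasible, with only about 1.3 dB to spare

Such a check is useful when surveying sections where the junction-to-junction distance approaches the SFP reach limit.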

An example map of zonal connectivity is given below. The interconnectivity of the CORE/Aggregate network between Zonal Railways is shown to illustrate the topology of the network.

Each Zonal HQ is connected to other Railways via 3-4 routes to provide path redundancy and to route the traffic flow through different routes in case of congestion on any one route.

(i) Connectivity of WCR:

The Zonal HQ of WCR is at Jabalpur, which is also one of the three divisions of the WCR zone. The common core/aggregate layer router provided at Jabalpur shall be of the higher specification applicable to a Zonal router. The other two divisions of WCR, i.e. Bhopal and Kota, will also be connected to this router.

On the backbone side, the connectivity of WCR HQ with other zones will be through the divisions on different routes. For example, WCR to NR will be connected through:

a) Bhopal – Kota – Mathura – Delhi
b) Bhopal – Jhansi – Agra – Delhi
c) Prayagraj – Kanpur – Delhi

Figure 6 - Example Map of Zonal connectivity

Legend: Interconnectivity of CORE network of Indian Railways
Figure 7 - Proposed diagram of SWR IPMPLS network:

Salient features:

1. LER available at all stations.
2. LSR available at all major, junction and terminal stations, at a distance of 55 to 60 km in a normal contiguous section.
3. L3 switches available at all stations.
4. The entire IP-MPLS network is protected. Within SWR, protection is through internal rings of SWR's own network; outside SWR, protection is through RCIL's MPLS network from neighbouring Railway zones. Linear sections are protected through BSNL bandwidth.

SWR Protection Details:

S.No.  Division  Station  Protection ring from (Internal)  Protection ring from (Out of Zone)
1      MYS       MYS      HAS, SBC                         MAQ (SR)
2      MYS       HAS      MYS, ASK, SBC                    MAQ (SR)
3      MYS       PADIL    HAS, MAO (SWR)                   KGQ (SR)
4      MYS       JRU      UBL, SBC, BAY                    -
5      UBL       UBL      DVG, LD, GDG                     GTL (SCR), MRJ (CR), MAO (KR), HG (CR)
6      SBC       SBC      HAS, MYS, BWT                    SA (SR), JTJ (SR), DMM (SCR)
7      UBL       VSG      Linear section (protected through BSNL bandwidth)
8      UBL       DED      Linear section (protected through BSNL bandwidth)
9      UBL       RNJP     Linear section (protected through BSNL bandwidth)
10     UBL       SMLI     Linear section (protected through BSNL bandwidth)
11     MYS       CMNR     Linear section (protected through BSNL bandwidth)
12     MYS       TLGP     Linear section (protected through BSNL bandwidth)
13     MYS       CMGR     Linear section (protected through BSNL bandwidth)
14     SBC       MKM      Linear section (protected through BSNL bandwidth)
Figure 8 - Proposed MPLS Network Diagram of NWR

1. NWR has 4 Divisions viz. Ajmer, Bikaner, Jaipur and Jodhpur with Zonal HQ at Jaipur.

2. In NWR, the junction-to-junction distance goes up to 252 km (MVJ-HMT section).

3. LSRs/repeaters are used in sections where the distance between two junctions is greater than 60 km. 64 such LSRs/repeaters are placed at various wayside stations between junctions.

4. NWR has boundaries with NR, NCR, WR & WCR at 12 junction stations. Protection paths through other Railways are also shown via these stations.

5. Connectivity of NWR with NR, WR, NCR and WCR Zonal HQs is also shown via
100/400G links on Core routers.

6. Division to Division connectivity is shown via different 40/100G links on core routers.

Following two sample cases have been taken for working of protection paths in case
of failure:

Figure 9 - Protection Path for Rewari-Ringas (RE-RGS)

Case 1: Rewari-Ringas (RE-RGS). In case of failure of the main path, eight other possible paths are available via different routes, of which four paths are through other Railways and four paths are through NWR itself.

Page 39 of 131
Figure 10 - Protection Path for Ajmer-Marwar Junction (AII-MJ)

Case 2: A similar case is taken for the Ajmer-Marwar Junction (AII-MJ) section. In case of failure of the main path, two protection paths are available, both of which are within NWR.

5.9 Recommendations of Committee

After detailed discussion regarding the architecture of the IP-MPLS backbone network for Indian Railways, the committee recommends the following:

5.9.1 The IP-MPLS backbone network of IR shall be a three-layered architecture. The different layers of the proposed network hierarchy are as follows:

(i) Core/Aggregate Layer: A high-capacity, redundant core/aggregate network will form the backbone of the Railway network. This will consist of geographically dispersed data centres at Zonal & Divisional HQs interconnected with high-bandwidth fibre optic links in a mesh topology.

(a) High-end routers are required to aggregate data from various divisions and provide high-speed connections to other zones. Bandwidth of 100G, upgradable up to 400G, is proposed for connecting the CORE/Aggregate routers. Connectivity between CORE/Aggregate routers and Divisional Aggregate routers is to be provided with 40G bandwidth, upgradable up to 100G.

(b) Proposed Specification for these routers is mentioned in Annexure-3.

(c) Separate pair of fiber in mesh topology to connect the high-end routers for
CORE/Aggregate network is proposed.

Figure 11 - Proposed architecture of IPMPLS network for IR

(ii) Pre-Aggregate Layer: The Pre-Aggregate layer between Junction/Major stations in a
Railway IP-MPLS system serves as a critical intermediate layer that collects and
organizes data from access network, preparing it for efficient transmission to the
higher-level core/aggregate layer at the Divisional level.

(a) LSRs (Label Switch Routers) are provided at every junction/major station, separated by approximately 60 to 70 km, to collect data from different stations and prepare it for further aggregation. Bandwidth of 2x10G or higher is proposed for connecting these Pre-Aggregate routers.

(b) Specification for these routers shall be as mentioned in RDSO TAN Version 2.0.

(c) Separate pair of fiber in ring topology to connect the LSR routers at junction/major
stations for Pre-Aggregate network is proposed.

(iii) Access Layer: The Access layer in a Railway IP-MPLS system is the foundational
layer that interfaces directly with various railway endpoints, such as stations, train
control systems, and other local operational devices. Positioned below the Pre-
Aggregate layer, the Access layer collects data from these endpoints and sends it to
the Pre-Aggregate layer at junction/major station or regional hubs.

(a) An LER (Label Edge Router) is provided at each station, along with L3/L2 switches that connect local equipment (such as signalling systems, control systems, PRS/UTS, SCADA, VSS, etc.) to the network. Bandwidth of 10G is proposed for connecting these Access routers.

(b) Specification for these routers shall be as mentioned in RDSO TAN Version 2.0.

(c) Separate pair of fiber in ring topology to connect the Access Routers at all stations
for this layer is proposed.

(iv) Cell Site Routers: Base stations (eNBs) of the LTE network will be deployed throughout the country to provide LTE coverage. These eNBs are proposed to be connected to the backhaul network using a 1G link between the cell site router and the station LER, through separate fibre, at the time of implementation of LTE.

5.9.2 For specifications of Pre-aggregate routers (LSR) at Junction station level and access
routers (LER) at stations RDSO TAN Ver 2.0 shall be followed. Configuration of LSR
and LER shall be decided by the Zonal Railways as per requirement.

5.9.3 As per the proposed architecture, a separate pair of fibres is required for each of the three layers; however, at present only two fibres are available. To start implementation of the work, the access network and pre-aggregate network shall be taken up in the first phase, with backup connectivity taken from RCIL as is presently being done for the SDH network.

5.9.4 The Core/Aggregate network layer shall be planned in the next phase, when separate fibres are available as per the new policy of 2x96 fibres.

5.9.5 The converged optical network for Core/Aggregate layer should be designed and
implemented as a unified project. This project must seamlessly integrate with the Pre-
aggregate networks being deployed by each of the Zonal Railways, ensuring a
cohesive network infrastructure with unified Classes of Service (CoS) and a common
management layer. Preferably, the complete work of core/aggregate network shall be
done by any one Zonal Railway to have uniformity in higher-level network.

5.9.6 Uniform IP addressing scheme issued for the Railways IPMPLS backbone network
shall be followed by all the Zonal Railways while implementing the network. Copy
attached as Annexure-2.

5.9.7 Service provisioning and management of this network is very complex, and the committee recommends having a proper NOC setup ready for management of this network, along with the implementation of the network in every Division and Zone, as per the detailed recommendations under item No. 6 of this report.

6. TOR Item No. 2:

Specification of IP-MPLS routers & amendment/changes required in TAN ver. 2.0

6.1 Background

Railway Board nominated a committee on the Future of Telecommunications Backbone for Indian Railways. The nominated committee submitted its report in September 2019 on Carrier Ethernet Technology for a Unified Communications Backbone on Indian Railways. Based on the committee's report, Railway Board, vide Telecom Circular 4/2020 dated 25.02.2020, advised RDSO to issue a scheme for functional requirements of the system, including interfaces and NMS, to use IP-MPLS technology for the Unified Communications Backbone on Indian Railways.

Based on the above, Technical Advisory Note (TAN) Version 1.0 was issued on 16.12.2020 based on comments of Zonal Railways and various stakeholders. Railway Board, vide letter dated 20.10.2022, advised RDSO to issue PoC (Proof of Concept) guidelines for implementing the IP-MPLS network by Zonal Railways with reference to the TAN issued by RDSO.

Accordingly, PoC guidelines and Interoperability/Integration Testing with the RailTel network, based on the existing TAN, were issued to Zonal Railways in consultation with RailTel and Railway Board on 28.11.2022 and 14.12.2022.

Guidelines for POC including Integration with RailTel mainly include:

a) Functional testing of various service requirements of Railways. A typical scheme for a wayside station connecting different services and connectivity with other stations was also included.
b) Functional & technical requirements of LER & LSR.
c) Performance.
d) Interoperability/Integration with RailTel for different OEMs' equipment.

Further revision of the TAN was taken up based on feedback/suggestions received from Zonal Railways and various stakeholders, including OEMs, and directives from Railway Board. TAN Version 2.0 for "Implementation of IP-MPLS Technology for Unified Communication Backbone on Indian Railway", including PoC guidelines, was issued w.e.f. 29.03.2023 following the RDSO ISO procedure.

6.2 Discussion on TAN Version 2.0:

Following major changes / additions were made in TAN Version 2.0:

a) A typical schematic diagram for implementation of the IP-MPLS network, including details of various services to be connected at a wayside station, was included, as given below:

The typical scheme shows connectivity between wayside stations on a 10G link. The data from wayside stations will be aggregated at junction stations, which will have Label Switch Routers (LSR), in addition to the LER of that station, for aggregating the data and sending it to divisional-level routers through the IP-MPLS network. In addition to Railways' own connectivity between these aggregation routers (LSR), connectivity with the RailTel IP-MPLS network is also proposed for availability of the network in case of failure of Railways' network.

Figure 12 - A typical schematic diagram for implementation of IP-MPLS network

b) Provision of redundancy of different cards, including the control plane, in the LER.
c) Distribution of different interface configurations (i.e. 10G, 1G etc.) over a minimum of two cards for redundancy.
d) Legacy interfaces like E1 and STM made optional (to be decided by Zonal Railways as per requirement).
e) Clause for router interoperability between different OEMs and integration with the RailTel IP-MPLS network included through PoC.
f) Minimum non-blocking throughput for LER enhanced to 60 Gbps from 30 Gbps and for LSR enhanced to 200 Gbps from 80 Gbps.
g) Provision of MTCTE certification and training included.

6.3 The issue of the Specification/TAN Version 2.0 for IP-MPLS and suggestions for improvement/changes based on field experiences was also taken up as an agenda item in the 41st TCSC meeting held on 29th & 30th Jan 2024.

All the Zonal Railways have given comments / suggestions on TAN Version 2.0 for IP-
MPLS routers. The comments / suggestions of Railways and remarks on the same are
as follows:

a) Issues related to higher port capability, a larger number of ports, and retention of legacy ports in the LER/LSR were raised.

Remarks: All the services running on stations have been catered in the typical scheme
given in the TAN and accordingly type of interface port is decided. Capacity of interface
port for LER as well as LSR is based on bandwidth requirement for the services /
connectivity. Moreover, the requirements given in TAN are minimum and these can be
changed based on the site requirements by Zonal Railways.

As far as legacy ports are concerned, these ports are required till the migration of all
services on ethernet, however, the decision has been left with Zonal Railways
regarding number of legacy ports required.

b) Introduction of DWDM / SDWAN for scalability.

Remarks: DWDM/SD-WAN is not included in the TAN as a permanent feature, as these are required only in some specific cases for higher-level routers and are being proposed in the Core and Aggregation networks at Zonal and Divisional level.

c) TAN is biased towards chassis-based IP routers.

Remarks: A chassis-based router has been preferred for greater flexibility, as various types of interfaces, including legacy interfaces, are required for the migration from the existing SDH backbone to the IP-MPLS backbone over Indian Railways. Apart from being helpful in migration, it is also better for providing redundancy and for replacement of faulty cards instead of the complete router. In future, once the IP-MPLS network is stabilized, the next replacement, whenever due, can be planned as per Railways' requirement.

d) Connection to RCIL at all junction stations with LSR and Long-haul bandwidth through
RCIL should be increased to 25G.

Remarks: 3 to 4 connections to RCIL from a division, as proposed in the TAN, are sufficient at present; however, long-haul bandwidth from RCIL may be increased by making necessary changes in the LSR configuration as per site requirement.

6.4 Recommendations of Committee:

6.4.1 The present TAN specifies the details of the wayside station router (LER) of the access network and the junction station aggregation router (LSR) of the pre-aggregate network only. The configuration of LSR and LER shall be decided by the Zonal Railways as per local requirement. These two routers, i.e. LER & LSR, are edge routers; the configuration of core routers to be used at Divisional as well as Zonal level has been discussed in the committee's report against TOR item No. 1.

6.4.2 Specifications of routers for MPLS-based transport networks have been issued by TEC in March 2022 vide TEC 48050:2022 and are being used extensively by the Telecom Service Providers. Railway-specific requirements have been covered in TAN 2.0 issued by RDSO. There is no further requirement for developing an RDSO specification for this item or changing TAN 2.0 at present.

7. TOR Item No. 3:

Fixing performance criterion for OEMs of IP-MPLS routers.

The issue of fixing performance criteria for OEMs of IP-MPLS routers, for tenders being invited by Zonal Railways, was taken up as an agenda item in the 41st TCSC meeting held on 29th & 30th Jan 2024. Later on, the item was given to the committee as TOR item No. 3 vide RB's letter dated 08.04.2024.

The committee has deliberated on this item, including the discussions held in the 41st TCSC with the PCSTEs of all ZRs. The issues discussed during the 41st TCSC and remarks on these issues are given below:

7.1 Discussion on issues raised in 41st TCSC meeting for fixing performance
criterion for OEMs of IP-MPLS routers:

For fixing performance criteria for OEMs of IP-MPLS routers, the following major issues were discussed based on comments from Zonal Railways:

a) Inclusion of PoC.

Remarks: PoC has already been included in TAN Version 2.0. PoC is required for this item as per the PoC guidelines issued in TAN Version 2.0, to check the suitability of the equipment being offered against Railway requirements. In addition, guidelines for PoC of new-technology telecom items have recently been issued by RDSO for implementation.

b) TEC / MTCTE certification.

Remarks: MTCTE certification from TEC has been included in the TAN Version 2.0
for IP-MPLS routers.

c) Only RDSO approved vendors to participate & RDSO approved vendor list for IP-
MPLS.

Remarks: Being a COTS item, vendor approval is not done by RDSO; however, guidelines for PoC of new telecom items have been issued on 26.07.2024. As per the new guidelines, the details of OEMs and their equipment are published on the RDSO website after successful completion of PoC, to facilitate the Zonal Railways.

d) For quantity of routers installed and working satisfactorily following suggestions were
received:

• OEMs should have supplied a minimum of 30% of the tender value to any Govt. organization.
• 20% of the quantity to be procured should be installed in Govt./PSUs/Telcos and working satisfactorily for a period of at least 3 years, including successful upgradation during the 3-year period.
• 15% of the tender quantities of LERs and LSRs installed and working satisfactorily.
• The OEM-offered router models, for at least 10% of the scheduled quantity of the tender document, should have been working satisfactorily in a single work in any Government/PSU/Telecom Service Provider network in India.

Remarks:

• The item was discussed in detail by the committee, and it was found that if this criterion is kept very stringent, it will be difficult to find suitable vendors.
• The performance criteria should not be very stringent; at the same time, they should not be so lenient as to treat the routers as simple end equipment.
• The performance criteria should provide a level playing field for the OEMs.

e) For maintenance and technical support after the installation and commissioning of the
equipment, following suggestions were given by Zonal Railways:

• Maintenance, including upgradation/transfer of circuits, for a period of 5 years beyond the contract.
• MAF/MoU for after-sales support up to codal life.
• Guarantee to provide technical, service and maintenance support to the Contractor/Railway that may be required during installation, commissioning and the working period of the equipment, till the OEM announces End of Life (EOL) of the product, and/or support with spares till 8 years from the date of supply.

Remarks: The IP-MPLS network will be used as the backbone of the IR network, so maintenance and technical support after the installation and commissioning of the equipment become very important. Reliability shall be ensured and long-term support from the OEM is required.

7.2 Recommendations of Committee:

After detailed discussion and deliberations, the committee suggests performance criteria in two categories. First-category items are required to be included; second-category items are desirable and their inclusion can be decided by the competent authority in the zones.

7.2.1 Items required to be included as performance criteria:

(i) For IP-MPLS Router, OEM must be in the list of Trusted Telecom suppliers on the
Trusted Telecom portal of the Government of India. Proof of which needs to be
enclosed along with MAF.

(ii) OEM should have TEC/ MTCTE certification for IP-MPLS Routers as per TEC GR.

(iii) All routers should be IPv6-ready from day one. All the hardware and software required for the same should be provided with the system.

(iv) OEMs should have 24*7 Technical Assistance Centre (TAC) support in India. Relevant
document in support of the same shall be submitted.

(v) OEM should have proven facilities for Engineering, Manufacture, assembly,
integration, testing and basic facilities with respect to Space, Engineering, Personnel,
Test equipment, Manufacture, Training, Logistic Supports for at least past three years
from where the proposed equipment shall be supplied. In case OEM is located outside
India, it should have engineering, testing, training, and service centre facilities in India.

The OEM shall provide support during the operation of the equipment/routers till the codal life of the equipment, and support with spares till 8 years from the date of supply/installation of the routers.

(vi) Clause for Bidder to provide AMC with BG to be included. Life cycle cost analysis to
be submitted by the bidder. Warranty plus AMC shall cover the complete codal life of
equipment i.e. 08 years.

(vii) Tender specific authorization certificate (MAF) from the OEM is required for the bidder
to participate in the bid.

7.2.2 Items desirable for inclusion as performance criteria (to be decided by the competent authority in the Zones):

(i) The equipment/IP-MPLS routers offered shall have a proven and satisfactory working record in Mainline Railways / Metro Rail networks / Government (Centre or State) / PSUs / Defence / Telecom Service Providers / ISPs / Bharatnet / Public Listed Companies etc., for a minimum of one year for 25% of the quantity mentioned in the bid document, as on the date of opening of the tender, in India or outside India other than the country of origin. Relevant documents (such as satisfactory performance certificates for the offered equipment from the client/customer) in support of the same shall be submitted for approval of the Engineer.

(ii) OEMs should have spares depots in India; the location and address of at least 10 such depots across India need to be submitted.

(iii) The IP-MPLS router OEM must conduct pilot site implementation for a minimum of 5% of sites. HLD, LLD, design documents and AT documents must be signed off by the OEM and the customer, along with the bidder, for final acceptance. In no case is the bidder allowed to sign off on behalf of the OEM.

(iv) OEMs should have valid ISO 9000 & ISO 14000 certification.

(v) The router and the router OS should be tested and certified for EAL2/NDPP under the Common Criteria programme for security-related functions, or under the Indian Common Criteria Certification Scheme (IC3S) by STQC, MeitY, Government of India.

Note: The bidder shall submit all relevant documents, certifications and undertaking
from the OEM related to above performance criteria.

8.0 TOR Item No. 4:

Formation of Standard Migration Document for Network Services on IP-MPLS.

The existing network carries crucial communication circuits that cover train operations
and Railway working. Hence, it is essential that a detailed migration plan is prepared
and meticulously executed.

This item outlines the migration plan for transitioning from the existing Synchronous
Digital Hierarchy (SDH) network to an Internet Protocol Multiprotocol Label Switching
(IP/MPLS) network for Indian Railways. The migration aims to enhance the operational
efficiency, scalability, reliability, and flexibility of network services such as Passenger
Reservation System (PRS), Unreserved Ticketing System (UTS), Freight Operations
Information System (FOIS), CCTV, Signalling Circuits, Data logger, Train Control
Communication, Supervisory Control and Data Acquisition (SCADA), and other
essential communication services like Railnet, Station Wi-Fi.

8.1 Objective: To form a standard migration document in order to:

● Upgrade the existing SDH-based legacy network to IP/MPLS to accommodate growing traffic demands.
● Improve quality of service (QoS), flexibility, and scalability of Railway communication systems.
● Ensure high availability, security, and efficient resource utilization.
● Support layer-2 (L2) and layer-3 (L3) VPN services for different Railway applications.

8.2 Network Architecture Overview

8.2.1 Existing SDH Network: The current SDH-based network of Indian Railways consists of point-to-point connections with static bandwidth allocation, which limits flexibility and scalability. SDH circuits are provisioned for various services like PRS/UTS, FOIS, CCTV, signalling circuits, and train communication. However, it lacks the advanced traffic management capabilities and dynamic routing of modern IP/MPLS networks.

8.2.2 IP/MPLS Network Overview: IP/MPLS (Internet Protocol/Multiprotocol Label Switching) combines the flexibility of IP routing with the efficiency of MPLS. MPLS provides traffic engineering capabilities that enhance the quality of service (QoS) and ensure reliable and scalable services over the same infrastructure.

8.3 Key Factors for Migration from SDH to IP/MPLS Network

A smooth migration from the existing SDH-based network to an IP/MPLS network requires careful planning, coordination, and execution to ensure minimal service disruption and full integration of legacy systems with modern IP/MPLS technology. The key factors to ensure a smooth migration are as follows:

8.3.1 Comprehensive Network Assessment

● Inventory Existing Services: Conduct a detailed survey of all services running on the SDH network, including UTS, PRS, FOIS, CCTV, signalling circuits, data logger, train control communication, SCADA, Railnet, and station Wi-Fi, and prepare an inventory.
● Bandwidth Requirements: Assess the bandwidth requirements for each service to ensure adequate capacity is available in the IP/MPLS network.

● Criticality of Services: Prioritize the migration of critical services like UTS, PRS,
FOIS, and Signalling, ensuring redundancy and failover mechanisms are in place
before moving them to the new network.

8.3.2 Phased and Gradual Migration

● Phased Migration Plan: Adopt a phased migration approach to ensure that only a
small portion of services are transferred at a time, minimising disruptions. Migrate non-
critical services first (e.g., CCTV, Station Wi-Fi), followed by more critical services
(e.g., UTS, PRS, FOIS, Signalling).
● Dual-Stack Operation: Maintain both SDH and IP/MPLS networks simultaneously
(dual-stack operation) during the transition period. This ensures that services can be
switched back to SDH in case of issues during migration.

8.3.3 Ensuring Redundancy and High Availability

● Dual Paths and Redundancy: Set up redundant paths for critical services like UTS,
PRS, FOIS, signalling circuits, and SCADA to ensure they remain operational even in
case of a failure.
● Load Balancing: Use load-balancing mechanisms for distributing traffic evenly across
redundant links, ensuring that network performance remains optimal even during high-
traffic periods.

8.3.4 Quality of Service (QoS) Configuration

● Traffic Prioritization: Implement QoS policies to prioritize time-sensitive services like signalling, train control, SCADA, and data logger over less critical services like CCTV or public Wi-Fi.
● Bandwidth Allocation: Ensure that critical services like UTS, PRS, and FOIS receive
sufficient bandwidth even during network congestion, preventing service degradation.

8.3.5 Security Considerations

● Segregation using VRF and VLANs: Maintain strict segregation between services
using VRF and VLANs to prevent cross-traffic between critical and non-critical
services.
● Encryption: Implement encryption (IPsec) for services that require secure
communication, such as UTS, PRS, and FOIS.
● Network Monitoring: Deploy advanced monitoring tools to track network
performance, detect anomalies, and secure the network against potential threats.

8.3.6 Coordination with Stakeholders (CRIS and RailTel)

● Coordination with CRIS: Since CRIS (Centre for Railway Information Systems)
manages services like UTS, PRS, and FOIS, it’s critical to closely coordinate with
CRIS during the migration. CRIS should be involved in the testing and validation
process for services before and after migration.
● Coordination with RailTel: RailTel, which provides the communication backbone at
present, should be closely involved in planning the migration, ensuring that the
underlying MPLS infrastructure is ready to handle critical services with the required
QoS, security, and redundancy.

8.3.7 Dual-Stack Operation

● During migration, both SDH and IP/MPLS will coexist. Dual-stack architecture will
ensure that services continue to run smoothly on SDH while being gradually shifted to
IP/MPLS.

8.4 Technical Planning for Migration

8.4.1 Traffic Segregation using VPN, VLAN, and VRF

● VLAN (Virtual Local Area Networks): Use VLANs to separate traffic at the layer-2
level for different services. This ensures that various service types (e.g., PRS, UTS,
FOIS, CCTV, Datalogger) can operate on the same physical infrastructure without
interference.
● VRF (Virtual Routing and Forwarding): Use VRF for logical segregation of routing
instances, ensuring that traffic for different services is routed independently,
enhancing security and isolation between services.
● VPN (Virtual Private Networks): Use L2VPN and L3VPN services for isolating critical
applications like UTS, PRS, FOIS, and signalling. This allows controlled and secure
access to the MPLS network for different services.

8.4.2 Virtual Private Networks (VPNs)

VPNs are private networks that use a transport network to connect two or more remote sites. Instead of dedicated connections between networks, VPNs use virtual connections that are routed (tunnelled) over the shared infrastructure. The IP/MPLS network supports L2 and L3 VPNs:

● Layer 2 VPN (L2VPN): For services requiring low latency and a fixed path, such as
signalling circuits and train control communication.
● Layer 3 VPN (L3VPN): L3VPN services are used in the network to provide layer-3 connectivity between remote sites and constitute the full forwarding path that all layer-3 traffic should take. They are used for higher-level services like PRS, UTS, FOIS, and CCTV, where dynamic routing and flexibility are crucial.

8.4.3 Circuit Emulation (CEM)

CEM is a technology that provides protocol-independent transport over IP/MPLS networks. It enables proprietary or legacy applications to be carried transparently to the destination, similar to a leased line.

CEM provides a bridge between a Time-Division Multiplexing (TDM) network and a Multiprotocol Label Switching (MPLS) network. The chassis encapsulates the TDM data in MPLS packets and sends the data over a CEM pseudo wire to the remote Provider Edge (PE) chassis. As a result, CEM functions as a physical communication link across the packet network. Typical point-to-point pseudo wires proposed are:

1) Data Logger – point-to-point pseudo wire
2) SCADA – point-to-point pseudo wire
3) Analog TCC – point-to-point pseudo wire

Figure 13 - Typical example for VPNs

Figure 14 - Typical logical Service Mapping Architecture

8.4.4 MPLS VPN CSC with BGP

Carrier Supporting Carrier (CSC) is an arrangement where one service provider allows another service provider to use a segment of its backbone network. The service provider that provides the segment of the backbone network is called the backbone carrier, and the service provider that uses it is called the customer carrier. For example: Backbone Carrier: RailTel; Customer Carrier: Indian Railways.

Figure 15 - Backbone Carrier: RailTel & Customer Carrier: Indian Railways

● The integration of the IP/MPLS network of the division will be done using MPLS VPN
CSC.
● Each Division will have its own MPLS domain with unique BGP AS numbers.
● The IP/MPLS network of the division will be interconnected with RCIL IP-MPLS PoP
at two or more Jn locations.
● BGP-LU sessions will be required at junction location (LSR) between Division and
RCIL for exchanging labelled infrastructure routes among divisions.
● The division will be able to create, extend and delete services on their own without any
intervention from RCIL with this integration scheme.

8.5 Uniform IP Addressing Scheme in Network Migration

A uniform IP addressing scheme is the foundation of a successful migration of railway services like UTS/PRS, CCTV, SCADA and others to an IP/MPLS network. It streamlines network management, enhances security, simplifies routing, and ensures scalability and redundancy. For large, critical infrastructures such as Railways, a standardised IP address plan is essential for maintaining operational efficiency and minimising service disruptions during and after migration.

When migrating from an SDH-based network to an IP/MPLS network in a large-scale environment like Railways, a uniform IP addressing scheme is crucial. The key reasons why a uniform IP addressing scheme is important during migration are given below.

8.5.1 Simplified Network Management

● Consistency across the Network: A uniform IP addressing scheme provides a consistent structure across the entire network, making it easier for administrators to manage and troubleshoot. It helps avoid conflicts and confusion when managing a large number of devices and services.
● Easier Configuration and Maintenance: With a standardised IP scheme, configuring routers, switches, and other network devices becomes more straightforward. Changes or updates can be made more quickly without the risk of introducing errors due to inconsistent IP addressing.
● Automated Processes: Tools like DHCP and IP Address Management (IPAM) solutions work more efficiently with a well-planned IP scheme, automating tasks like IP allocation and reducing manual errors.

8.5.2 Efficient Routing and Reduced Complexity

● Simplified Routing: A uniform IP addressing plan allows for more efficient routing,
reducing the size and complexity of routing tables. Consistent address blocks can be
aggregated or summarized (route summarization), which minimizes the number of
routes that need to be propagated between routers.
● Improved Performance: Fewer routing entries and simpler routing paths can improve
the overall performance of the network by reducing the processing overhead on
routers.
● Seamless VPN Integration: For MPLS VPNs (L2VPN/L3VPN), a consistent
addressing scheme ensures that virtual private networks (VPNs) are easily maintained
and routed across the IP/MPLS backbone, simplifying traffic flow between sites.

8.5.3 Improved Network Scalability

● Easier Expansion: A uniform IP addressing scheme makes it easier to scale the network as new stations, services, or devices are added. Administrators can reserve IP blocks for future expansion without disrupting the existing IP structure.
● Hierarchical Design: By implementing a hierarchical IP addressing plan (e.g., assigning different subnets for regions, stations, and services, as illustrated in the sketch after this list), the network can grow without creating conflicts or overlapping IP ranges, ensuring smooth scaling.
● Efficient Use of Resources: Consistent IP allocation avoids the risk of IP exhaustion in certain parts of the network while others have surplus capacity.
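
A minimal sketch of such a hierarchical carve-up is shown below (illustrative Python; the 10.80.0.0/12 block, the prefix lengths and the service list are assumed examples only, the actual scheme being the one issued as Annexure-2):

    import ipaddress

    ZONE_BLOCK = ipaddress.ip_network("10.80.0.0/12")      # assumed block for one zone

    # One /16 per division within the zone:
    divisions = list(ZONE_BLOCK.subnets(new_prefix=16))

    # Within a division, one /24 per station; within a station, one /28 per service:
    station_nets = list(divisions[0].subnets(new_prefix=24))
    services = ["UTS/PRS", "FOIS", "CCTV", "SCADA", "TCCS", "Railnet", "Wi-Fi"]

    def station_plan(station_net):
        """Map each service at a station to its own /28 for VRF/VLAN segregation."""
        return dict(zip(services, station_net.subnets(new_prefix=28)))

    print(station_plan(station_nets[0]))
    # -> one /28 per service, e.g. UTS/PRS gets 10.80.0.0/28, FOIS gets 10.80.0.16/28, ...

Because every division, station and service falls on a predictable boundary, routes can be summarized at each level and firewall/ACL rules can be written against whole ranges rather than individual hosts.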

8.5.4 Enhanced Security

● Segmentation and Isolation: A uniform IP addressing scheme helps enforce network segmentation and isolation using mechanisms such as VRF (Virtual Routing and Forwarding) or VLANs. It allows clear separation of different services (e.g., CCTV, signalling, SCADA, Wi-Fi), ensuring security by isolating sensitive data.
● Simplified Firewall Rules: A structured IP scheme makes it easier to define and
enforce security policies at firewalls and other security appliances. Administrators can
apply access control lists (ACLs) more effectively by defining rules based on IP ranges
for different departments or services.

8.5.5 Consistency in Service and Device Allocation

● Categorization of Services: A well-designed IP scheme can allocate specific address ranges to different services (e.g., CCTV, UTS/PRS, TCCS), making it easier to manage and monitor them.
● Device Addressing Standardization: Devices such as routers, switches, SCADA
systems, CCTV cameras, and Wi-Fi access points can have a systematic addressing
structure, making it easier to identify and locate them in the network.

8.5.6 Improved Interoperability between Systems

● Consistency across Multiple Services: Railways run multiple critical systems, such
as ticketing (UTS/PRS), freight (FOIS), train control (TCCS), and signalling. A uniform
IP addressing scheme ensures these systems can communicate effectively over the
IP/MPLS network, reducing the risk of IP conflicts or misrouting.
● Seamless Integration of New Technologies: As new services or technologies are
introduced, such as IoT devices for smart railway stations, a consistent addressing
scheme ensures they can be smoothly integrated into the existing network without
requiring major reconfigurations.

8.5.7 Minimization of Downtime during Migration

● Reduced Configuration Errors: A uniform IP addressing plan minimizes the risk of errors during migration, which is crucial for critical railway services like UTS/PRS, SCADA, and TCCS. Incorrect IP allocation can lead to service disruptions, affecting train operations or passenger services.
● Smoother Transition: When migrating to an IP/MPLS network, a well-organized IP
scheme allows for smooth transitions between legacy systems and the new
infrastructure. It ensures that both old and new systems can coexist and communicate
effectively during the migration phase.

8.5.8 Facilitating Redundancy and Failover

● Consistent Addressing for Redundancy: A uniform IP addressing scheme aids in setting up redundant paths and failover mechanisms. For example, dual MPLS paths for high-priority services like TCCS or SCADA can be managed easily if the addressing is consistent across the network.
● Faster Recovery: In case of network failures, standardized IP addressing simplifies
troubleshooting and recovery, allowing network engineers to quickly pinpoint and
resolve issues.

8.6 Service-wise Migration Scheme – Some examples

8.6.1 UTS/PRS (Passenger Reservation System and Unreserved Ticketing System)

Current Setup: UTS and PRS are critical services managed by CRIS. These services
are accessed by railway stations across the country, and any downtime can lead to
disruptions.

Migration Strategy:

● VLANs: Use a dedicated VLAN for UTS/PRS to segregate traffic at layer-2, isolating
it from other services like CCTV, station Wi-Fi, and Signalling.

● VRF: Deploy VRF instances specifically for UTS and PRS, ensuring that routing is
isolated from other services. This prevents traffic mix-ups and enhances security.
● L3VPN: Use MPLS L3VPN to securely route UTS/PRS traffic across the IP/MPLS
network. This allows dynamic routing and ensures that the services are scalable and
reliable.
● QoS: Apply strict QoS policies to prioritize UTS/PRS traffic, ensuring that they have
guaranteed bandwidth and low latency, especially during high-demand periods like
festival seasons.
● Redundancy: Set up redundant MPLS paths to ensure continuous availability of UTS
and PRS even during network failures.
● Testing and Validation: After migration, thorough testing with CRIS should be done
to ensure that ticketing operations (UTS/PRS) remain functional and stable under real-
time load conditions.

8.6.2 FOIS (Freight Operations Information System)

Current Setup: FOIS manages freight train operations, making it a crucial system for
railway operations. Presently, it is managed by CRIS.

Migration Strategy:

● VLANs: Assign a specific VLAN for FOIS to separate its traffic from UTS, PRS, and
other services at the layer-2 level.
● VRF: Deploy a dedicated VRF instance for FOIS traffic to isolate its routing from other
services.
● L3VPN: Use MPLS L3VPN for FOIS, enabling dynamic routing and ensuring
scalability as freight operations grow. L3VPN also supports policy-based routing,
ensuring optimal network performance for freight operations.
● QoS: Prioritize FOIS traffic with QoS to ensure seamless operation, even during
congestion.
● Redundancy: Dual-homed links and redundant MPLS paths will ensure that FOIS
services remain operational even during failures.
● Testing and Validation: Extensive testing in coordination with CRIS and RailTel to
validate that FOIS continues to operate smoothly and reliably post-migration.

8.6.3 Signalling Circuits

Current Setup: Signalling circuits are point-to-point circuits on the SDH network,
requiring low-latency and highly reliable connections.

Migration Strategy:

● VLANs: Assign a dedicated VLAN to signalling circuits to isolate traffic at layer-2 and
ensure minimal interference with other services.
● VRF: Deploy a dedicated VRF instance for signalling circuits to prevent any routing
conflicts with other services.
● L2VPN: Use MPLS L2VPN for signalling circuits, ensuring that low-latency, point-to-
point communication is maintained. L2VPN mimics the circuit-switched nature of SDH,
making it ideal for signalling circuits.
● QoS: Configure QoS to prioritize signalling traffic, ensuring that it always has the
necessary bandwidth and low latency.
● Redundancy: Redundant MPLS paths with fast failover mechanisms (e.g., MPLS-TE
Fast Reroute) will ensure that signalling circuits remain operational in case of failures.

● Testing and Validation: Test signalling circuit performance after migration, focusing
on latency, jitter, and failover capabilities.

8.6.4 Data logger

Current Setup: Data loggers collect and transmit real-time data from field devices such
as signalling equipment.

Migration Strategy:

● VLANs: Assign a VLAN to Data logger systems to separate real-time data traffic from
other services.
● VRF: Use VRF to isolate Data logger traffic at the routing layer, ensuring that real-time
data follows predetermined routes without interference.
● L2VPN: Implement MPLS L2VPN for Data logger to ensure low-latency
communication with field devices, similar to the SDH setup.
● QoS: Prioritize Data logger traffic for low-latency delivery, ensuring that critical
monitoring data reaches its destination in real time.
● Redundancy: Implement redundant MPLS paths for Data logger traffic to ensure
continuous operation in case of failure.
● Testing and Validation: Validate real-time data collection and transmission under
various traffic loads and failover scenarios.

8.6.5 Railnet

Current Setup: Railnet provides internet and intranet services to railway employees
and offices.

Migration Strategy:

● VLANs: Use VLANs to segregate Railnet traffic for internal communications (e.g.,
administrative tasks, internal email, etc.).
● VRF: Deploy VRFs for different internal services within Railnet, such as separating
administrative access from general internet usage.
● L3VPN: Use MPLS L3VPN to provide secure, scalable routing for Railnet traffic across
different zones and regions.
● QoS: Apply QoS to prioritize internal administrative traffic over general Internet usage.
● Redundancy: Set up redundant MPLS paths to ensure continuous availability of
Railnet services.
● Testing and Validation: Test Railnet performance after migration to ensure that
internal applications and Internet access remain functional and secure.

8.6.6 Station Wi-Fi

Current Setup: Station Wi-Fi services are currently operated independently or over
SDH and managed by RCIL.

Migration Strategy:

● VLANs: Create separate VLANs for public Wi-Fi (passengers) and staff Wi-Fi to
ensure isolation at the layer-2 level.
● VRF: Use VRF instances to segregate public Wi-Fi traffic from internal staff Wi-Fi,
ensuring security and preventing unauthorized access to internal resources.

Page 58 of 131
● L3VPN: Use L3VPN for staff Wi-Fi to connect securely to Railnet, while public Wi-Fi
traffic is routed to the Internet.
● QoS: Prioritize staff Wi-Fi traffic over public Wi-Fi to ensure sufficient bandwidth for
railway operations.
● Redundancy: Redundant MPLS paths should be configured to ensure high availability
for staff Wi-Fi.
● Testing and Validation: Verify that both public and staff Wi-Fi services function
properly and securely after migration.

8.6.7 CCTV (Closed-Circuit Television) Surveillance System

Current Setup: CCTV systems typically use point-to-point SDH circuits to transmit
video feeds from stations to a central monitoring system.

Migration to IP/MPLS:

● VLAN: Implement a dedicated VLAN for CCTV traffic at the Layer 2 level to segregate
it from other services like Wi-Fi or UTS/PRS. This ensures security and prevents
bandwidth contention with other services.
● L2VPN: MPLS L2VPN is ideal for CCTV as it allows a point-to-point connection that
emulates the current SDH setup while leveraging IP/MPLS. L2VPN keeps the CCTV
traffic within a virtual private connection, maintaining service isolation.
● VRF: For stations with more complex routing requirements or larger surveillance
networks, VRF instances can be deployed to isolate CCTV routing from other services.
● QoS: Apply QoS policies to ensure that the video feeds have guaranteed bandwidth
and low latency, especially important for live monitoring.
● Redundancy: Dual MPLS paths should be implemented to ensure continuous
surveillance in case of a network failure.
● Testing: Conduct end-to-end testing to ensure video feeds are transmitted without
degradation.

8.6.8 TCCS (Train Control Communication System)

Current Setup: TCCS circuits are used for real-time communication between the Train
Controllers and field Station Masters, typically on SDH.

Migration to IP/MPLS:

● VLAN: Create a VLAN for TCCS to segregate real-time train control traffic from other
services. This will ensure that the critical TCCS data has an isolated path and is not
affected by other station activities.
● L2VPN: MPLS L2VPN can be used to maintain low-latency, point-to-point connections
that emulate the current SDH circuits. This ensures that the real-time nature of train
control communications is preserved.
● VRF: Use VRF instances for TCCS to isolate its routing from other services at the
layer-3 level. This is particularly useful in larger networks where routing segregation is
needed.
● QoS: Implement strict QoS policies to prioritize TCCS traffic, ensuring uninterrupted, low-latency, and responsive communication in all situations.
● Redundancy: Deploy dual MPLS paths with fast failover mechanisms (e.g., MPLS Fast
Reroute) to ensure continuous operation in case of a failure.
● Testing: Perform real-time failover and latency testing to ensure uninterrupted
communication during migration.
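Bringing the per-service strategies of section 8.6 together, the sketch below records one possible mapping of each service to a VLAN ID, DSCP marking and traffic class, and prints it for review. All VLAN IDs and DSCP values shown are assumptions for illustration; the actual values must follow the approved zonal VLAN and QoS plan.

# Illustrative sketch: one possible service-to-VLAN/DSCP mapping for QoS planning.
# VLAN IDs and DSCP values are assumptions, not a prescribed standard.

SERVICE_PLAN = {
    # service:        (vlan, dscp, traffic class)
    "Signalling":      (110, "EF",   "strict-priority"),
    "TCCS":            (111, "EF",   "strict-priority"),
    "Data logger":     (112, "AF41", "high"),
    "SCADA":           (113, "AF41", "high"),
    "UTS/PRS":         (120, "AF31", "medium"),
    "FOIS":            (121, "AF31", "medium"),
    "CCTV":            (130, "AF21", "video"),
    "Railnet":         (140, "AF11", "default"),
    "Station Wi-Fi":   (150, "BE",   "best-effort"),
}

if __name__ == "__main__":
    print(f"{'Service':<14}{'VLAN':<6}{'DSCP':<6}{'Class'}")
    for service, (vlan, dscp, cls) in SERVICE_PLAN.items():
        print(f"{service:<14}{vlan:<6}{dscp:<6}{cls}")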

Page 59 of 131
8.7 Migration plan of SWR: SWR has prepared a detailed migration plan for transferring
circuits to their IPMPLS network. This also includes IP addressing plan for different
divisions and services as per uniform IP addressing plan issued by RDSO. Detailed
migration plan prepared by SWR including IP addressing planning is enclosed as
Annexure-5 for ready reference.

8.8 Recommendations of Committee:

After detailed discussion, the committee recommends the following procedure to be followed during the migration of existing services to the IP-MPLS network:

8.8.1 Assessment of current network is the first activity for starting the migration. A section
wise map of available SDH equipment is to be prepared specifically covering the
availability of Ethernet ports and the circuits that are dropped at each of the stations in
the sections. Typical items to be covered during assessment of network are given in
Annexure-I.

8.8.2 Creating an extra Divisional unit for implementation and OAM of the network will
ensure the effective implementation and migration by suitable redeployment of existing
staff. The staff should be trained in IP-MPLS and IP/WAN/LAN/Network security
technologies.

8.8.3 A detailed implementation scheme should be worked out identifying the sections where existing SDH with Ethernet interfaces can be utilized and where SDH equipment is to be replaced by L2 switches and MPLS equipment; a station-wise circuit migration plan for implementation shall be detailed and followed. The “Migration Plan” of the Division shall be approved by the Competent Authority from the Zonal HQ before its actual implementation to ensure a seamless switch-over from the existing SDH network to the latest IP-MPLS technology.

8.8.4 The IP-MPLS equipment shall be standardized for different categories of stations
inclusive of the interfaces needed and the IP numbering scheme.

8.8.5 Sectional control of unimportant branch lines should be migrated first to understand the effect of IP-MPLS/Ethernet on working. Once stable, the experience can be used to migrate the remaining section controls successfully.

8.8.6 A uniform IP addressing scheme is the foundation of the successful migration of railway services like UTS/PRS, CCTV, SCADA, and others to an IP/MPLS network. It
streamlines network management, enhances security, simplifies routing, and ensures
scalability and redundancy. For large, critical infrastructures such as Railways, a
standardized IP address plan is essential for maintaining operational efficiency and
minimizing service disruptions during and after migration.
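A minimal sketch of how such a plan can be derived programmatically, using Python's standard ipaddress module to carve per-division and per-service subnets out of a parent block. The parent prefix, division codes and subnet sizes are assumptions for illustration; the authoritative allocations are those of the RDSO uniform IP addressing plan.

# Illustrative sketch: carve per-division / per-service subnets from a parent block.
# Parent prefix and subnet sizes are assumptions; follow the RDSO plan in practice.
import ipaddress

PARENT = ipaddress.ip_network("10.32.0.0/16")      # hypothetical zonal block
DIVISIONS = ["DIV-A", "DIV-B", "DIV-C", "DIV-D"]   # hypothetical division codes
SERVICES = ["signalling", "uts_prs", "cctv", "railnet"]

def build_plan(parent, divisions, services):
    """Give each division a /20 and each service inside it a /24."""
    plan = {}
    div_blocks = parent.subnets(new_prefix=20)
    for div in divisions:
        div_net = next(div_blocks)
        svc_blocks = div_net.subnets(new_prefix=24)
        plan[div] = {svc: next(svc_blocks) for svc in services}
    return plan

if __name__ == "__main__":
    for div, svcs in build_plan(PARENT, DIVISIONS, SERVICES).items():
        for svc, net in svcs.items():
            print(f"{div:<8}{svc:<12}{net}")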

8.8.7 Traffic Segregation using VPN, VLAN, and VRF - Maintain strict segregation between
services using VRF and VLANs to prevent cross-traffic between critical and non-critical
services.

8.8.8 Implement QoS policies to prioritize time-sensitive services like signalling, train control,
SCADA, and Data logger over less critical services like CCTV or public Wi-Fi.

Page 60 of 131
8.8.9 Coordination with CRIS and RCIL for services managed by them. Since CRIS (Centre for Railway Information Systems) manages services like UTS, PRS, and FOIS, it is critical to coordinate closely with CRIS during the migration. CRIS should be involved in the
testing and validation process for services before and after migration. Similarly,
RailTel, which provides the communication backbone, should be closely involved in
planning the migration, ensuring that the underlying MPLS infrastructure is ready to
handle critical services with the required QoS, security, and redundancy.

8.8.10 Detailed service-wise pre- and post-commissioning checklists shall be an integral part of the migration plan. These lists shall be followed while migrating services to the IP-MPLS network. Typical checklists for L3 and L2 VPNs are at Annexure-II.

8.8.11 Adequate training programmes should be conducted for officers and staff before migration and immediately after migration.

8.8.12 During migration, both SDH and IP/MPLS will coexist. Dual-stack architecture will
ensure that services continue to run smoothly on SDH while being gradually shifted to
IP/MPLS.

Figure 16 - Proposed Migration Scheme.

Page 61 of 131
Annexure-I

Assessment of current network

Before doing the migration, assessment of current network shall be done and following
parameters shall be recorded.

a) Type of physical medium (Copper/Fiber)


● Copper - E1/RJ-45
● Fiber - Single mode or multimode
● 1G or 10G interface
● Type of SFP (LX/LR/ER/Bi-Di)

b) Server/Service location and communication requirements with clients.

c) Type of transport: point-to-point, point-to-multipoint or full-mesh.


● For point-to-point circuits - E1 links
● Any delay sensitive applications - Ethernet links
● For point-to-multipoint/full-mesh - Ethernet links.
● MTU size required for Ethernet links.
● Duplex mode (half/full).

d) Type of traffic - Unicast or Multicast or both

e) For L2 point-to-point/point-to-multipoint
● Existing VLANs details.
● Whether any Q-in-Q tunnelling is configured.

f) For L3 point-to-point/point-to-multipoint.
● Whether any CE routers or firewalls are present.
● WAN and LAN IP addresses (location wise)
● Current NNI location
● Routing - Static routing or Dynamic routing
● Protocol, metric and metric type used.
● Existing routing table size (no of routes)
● Intranet only or Intranet with Internet enabled.

g) Migration Plan and Validation:


● Plan required BOQ/Material.
● Discuss with all concerned stakeholders involved for other end changes.
● Prepare configuration templates and documentations.
● Assess the migration time.
● “Migration Plan” of the Division shall be approved by the Competent Authority from the Zonal HQ before its actual implementation.

h) Provision the circuit and test:


● Network testing as per RFC 2544 parameters (throughput, latency, packet loss) using iperf; iperf servers should be hosted at the division/HQ location.
● After testing compare the throughput, latency, and packet loss results with
requirements of application.
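A hedged sketch of how the above throughput check could be scripted: it runs the iperf3 client in JSON mode against a server hosted at the division/HQ and compares the measured rate with the application requirement. The server address and the 100 Mbit/s requirement are placeholders, and iperf3 must already be running in server mode (iperf3 -s) at the far end.

# Illustrative sketch: run an iperf3 throughput test and compare with a requirement.
# Server address and required rate are placeholders; iperf3 -s must run at the far end.
import json
import subprocess

def measure_mbps(server: str, seconds: int = 10) -> float:
    """Run iperf3 in JSON mode and return the received throughput in Mbit/s."""
    out = subprocess.run(["iperf3", "-c", server, "-t", str(seconds), "-J"],
                         capture_output=True, text=True, check=True).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    required_mbps = 100.0                   # placeholder application requirement
    measured = measure_mbps("10.0.0.10")    # placeholder iperf3 server at division/HQ
    verdict = "PASS" if measured >= required_mbps else "FAIL"
    print(f"Measured {measured:.1f} Mbit/s against {required_mbps} Mbit/s required: {verdict}")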

Page 62 of 131
i) Create a change request and inform the concerned team:
● Plan change request along with rollback approach.

j) Strictly execute the change during change window.


● Pre-checklist for backup and capturing details. If pre-checks fail, cancel the change, inform stakeholders and document the reason.
● Post-migration checklist for confirming whether the services are working.
● Send notification to the concerned team after change execution.

k) Post migration monitoring:


● Closely monitor the migrated services for 24 hours.
● Hand over to the operations team.
● Add the circuit to the NMS for monitoring.

Page 63 of 131
Annexure-II

Pre-checklist and post-checklist

For a smooth migration to the IP-MPLS network, a pre-checklist and post-checklist shall be prepared. This will also help in verifying the proper migration of the different services.

A. L2 VPN: Pre-Checklist

Requirements:

▪ Confirm the Layer 2 protocol to be used (e.g., Ethernet type or WAN type).
▪ Confirm whether a Transparent L2 VPN or a VLAN-based L2 VPN is to be configured.

Connectivity and Physical Layer:

▪ Check for any physical layer issues such as faulty cables, interfaces or SFP modules between the router and the end device.
▪ Ensure MTU settings match between the PE router and the end device.

Backup Configurations:

▪ Backup current configurations on all routers.


▪ Ensure there is a recovery plan in case of failure.

● Post-Checklist

Configuration Verification:

▪ Verify that all configurations have been applied correctly against the template.
▪ Use the following show commands to check L2 VPN status on routers:

(i) L2vpn connection status


(ii) mpls lsp for LDP signalled L2VPNs

Connectivity Testing:

▪ Verify end-to-end (ping response of end systems)


▪ Bandwidth testing.

B. L3 VPN: Pre-Checklist

Physical Layer and IP Routing:

▪ Check for any physical layer issues such as faulty cables, interfaces or SFP modules between the router and the end device.
▪ LLDP/CDP neighbour details
▪ MAC address learning on existing port.
▪ Ensure MTU settings match between the PE router and the end device.
▪ Routing protocol and adjacency status of routing protocol
▪ Routing table

Page 64 of 131
Backup Configurations:

▪ Backup current configurations on all routers.


▪ Ensure there is a recovery plan in case of failure.

● Post-Checklist

Physical Layer and IP Routing:

▪ Verify that all configurations have been applied correctly against the template.
▪ Check for any physical layer issues such as faulty cables, interfaces or SFP modules between the router and the end device.
▪ LLDP/CDP neighbour details
▪ MAC address learning on existing port.
▪ Ensure MTU settings match between the PE router and the end device.
▪ Adjacency status of routing protocol
▪ Routing table

Connectivity Testing:

▪ Verify end-to-end (ping response of PE to PE WAN and end systems/ networks)


▪ Bandwidth testing.
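A small sketch automating the end-to-end reachability portion of the post-checklist: it pings each PE WAN address and end system in a list and prints a pass/fail summary. The addresses are placeholders to be replaced with the circuit-specific checklist entries.

# Illustrative sketch: post-migration reachability check over a list of addresses.
# Addresses are placeholders; replace with the circuit-specific checklist entries.
import subprocess

ENDPOINTS = {
    "PE-to-PE WAN": "10.255.0.2",
    "Far-end LAN gateway": "10.64.10.1",
    "End system (server)": "10.64.10.50",
}

def reachable(ip: str) -> bool:
    """Return True if the host answers at least one of three pings."""
    res = subprocess.run(["ping", "-c", "3", "-W", "2", ip],
                         capture_output=True, text=True)
    return res.returncode == 0

if __name__ == "__main__":
    for name, ip in ENDPOINTS.items():
        status = "OK" if reachable(ip) else "UNREACHABLE"
        print(f"{name:<22}{ip:<16}{status}")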

Page 65 of 131
9.0 TOR Item No. 5:

Standardization of Station LAN infrastructure for using common LAN for all
network services at stations

Standardizing the LAN infrastructure at stations to support all network services involves
designing a common LAN architecture capable of accommodating diverse network
services, including IP-MPLS, VoIP based Control Communications such as Section
Control/TPC, Wi-Fi, VSS, Railnet, FOIS, UTS/PRS, SCADA etc.

Services available at any station can be divided in two groups based on their
requirement:

9.1 Critical Services: These services include services related to Train Operation,
Passenger Safety & Financial Transactions/Passengers Reservations over Indian
Railways and are as follows-

(i) Signalling circuits i.e. UFSBI/BPAC, AXLE Counter, EI etc


(ii) Data Logger
(iii) Control Communication /VoIP based Train Control Communication System-
TPC, EC etc.
(iv) SCADA (Remote Control)
(v) UTS/PRS/FOIS
(vi) LTE
(vii) Video Surveillance System-VSS etc.

9.2 Non-Critical Services: Remaining services other than critical are Non-Critical Services
and are as follows-

(i) Railnet
(ii) Railway Telephone
(iii) Station Wi-Fi
(iv) RDN
(v) VC etc.

While deriving the services from the LSR/LER/Layer-3 switch/Layer-2 switch, cascading of switches should be avoided to the extent possible. Cascading of switches results in data congestion in the network due to broadcast traffic.

9.3 In view of above, three LAN structures are being proposed:

A. For Way side stations


B. For Junction stations
C. For HQ/Divisional Office not situated at Station Building

Page 66 of 131
A. Scheme for IP-MPLS network at Way side stations:

Proposed standard LAN infrastructure for way side station is shown below:

Figure 17 - Normal Way Side Stations without any LH

• Maximum number of Railway stations is covered under this category.

• Every station is provided with an LER, which is further connected with the LERs of both adjacent stations through 10G links.

• Services shown in yellow are considered as critical services and in white are considered
as non-critical services.

• IP services (Layer-3) viz. LTE, FOIS, UTS/PRS, VC and any other Layer-3 services may
be directly taken from 1G port of LER or through L2 switch.

• Similarly, existing services working on E1, like signalling circuits, Data Logger, Control Phone etc., can be taken from the E1 interface card. The required number of E1 ports may be decided accordingly.

• Other services can be derived from Layer-2 switch as shown in the diagram.

• Once the E1 services are migrated to Ethernet, E1 card will no longer be required and
the same slot of LER may be used for 1G cards.

Page 67 of 131
B. Scheme for IP-MPLS network at Junction/Major Stations:-

Proposed standard LAN infrastructure for Jn/Major station is shown below:

Figure 18 - Junction stations with LH other than CORE network

All Major/Junction Railway stations are covered under this category.

• Every station is provided with one LSR for the pre-aggregation layer and one LER for the access layer. Connectivity with RCIL is also proposed at some of these locations for backup till separate fibre is available.

• The LSR is further connected with all adjacent section LSRs through 10G/2x10G links on separate fibre. It is also connected with the LERs of all adjacent stations.

• Services shown in yellow are considered as critical services and in white are considered
as non-critical services.

• Critical services viz. FOIS, UTS/PRS, VC, SCADA etc and any other Layer-3 services
may be taken through L3 switch and LTE may be directly taken from 1G port of LER of
the station.

• Similarly, existing services working on E1, like signalling circuits, Data Logger, Control Phone etc., can be taken from the E1 interface card. The required number of E1 ports may be decided accordingly.

• Other services can be derived from L-2 switch as shown in the diagram.

• Once the E1 services are migrated to Ethernet, E1 card will no longer be required and
the same slot of LER may be used for 1G cards.

Page 68 of 131
C. Scheme for IP-MPLS network at Divisional/Zonal HQ stations:

Proposed standard LAN infrastructure for Junction stations with Divisional/Zonal connectivity is shown below:

Figure 19 - Junction Stations (including Gateway stn) in CORE network

• These are Divisional and Zonal HQ locations.

• Core/Aggregate router is placed at these locations in addition to normal LER and LSR.

• This Core/Aggregate router is used to provide Zone to Zone 100/400G connectivity in addition to Division to Division and Division to Zonal HQ 40/100G connectivity.

• The rest remains the same as the scheme shown for Major/Junction stations.

Page 69 of 131
9.4 Recommendations of Committee:

After detailed discussion, the committee recommends the following standard LAN infrastructure for stations:

9.4.1 Schemes for Wayside stations, Major/Junction stations and Divisional HQ stations have been discussed; however, a separate pair of fibres is required for each layer. The Core/Aggregate layer at the Zonal/Divisional level shall be planned separately once the additional fibres are available.

9.4.2 At every station one LER shall be provided for connecting various services available
at the station.

9.4.3 Signalling and other services working through E1 shall be connected directly to the E1
interface card of the LER.

9.4.4 Services provided by CRIS i.e. UTS, PRS and FOIS through their router/switch shall
be directly connected to the 1G port of LER.

9.4.5 LTE routers provided in the section shall be directly connected to the LER through 1G
port.

9.4.6 The CCTV network is being provided at the stations by RCIL. This service shall be connected to the LER through a switch. The switch provided by RCIL for the purpose can be used, or it can be connected to the Railway switch being provided.

9.4.7 Other services like train control communication, SCADA, Railnet, Station Wi-Fi etc. can be connected to an L2/L3 switch provided at the station. The switch shall be connected to the LER through a 1G port.

9.4.8 The number of switches required at Major/Junction stations shall be decided by an assessment of the existing services available at the station.

9.4.9 At the layer-2 level, separate VLANs shall be created for services like CCTV, VoIP-based TCCS, FOIS, RailNet, Station Wi-Fi etc., as illustrated in the sketch below. This ensures security and prevents bandwidth contention between services.
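A minimal sketch of such a station VLAN plan, emitting a vendor-neutral configuration for the station L2 switch from a service-to-VLAN table. The VLAN IDs, port names and the trunk towards the LER are assumptions for illustration; the actual values must follow the zonal VLAN plan and the station's port allocation.

# Illustrative sketch: emit a vendor-neutral VLAN configuration for the station L2 switch.
# VLAN IDs and port assignments are assumptions for illustration only.

STATION_VLANS = {
    "CCTV":           (130, ["port1", "port2"]),
    "VoIP-TCCS":      (111, ["port3"]),
    "FOIS":           (121, ["port4"]),
    "Railnet":        (140, ["port5", "port6"]),
    "Station Wi-Fi":  (150, ["port7"]),
}

def vlan_config(vlans: dict) -> str:
    lines = []
    for name, (vid, ports) in vlans.items():
        lines += [f"vlan {vid}", f" name {name.replace(' ', '_')}"]
        for port in ports:
            lines += [f"interface {port}",
                      " switchport mode access",
                      f" switchport access vlan {vid}"]
    # uplink to the LER carries all service VLANs as a tagged trunk
    tagged = ",".join(str(v[0]) for v in vlans.values())
    lines += ["interface uplink-to-LER",
              " switchport mode trunk",
              f" switchport trunk allowed vlan {tagged}"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(vlan_config(STATION_VLANS))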

Page 70 of 131
10.0 TOR Item No. 6:

Management of IP-MPLS and LTE network, establishing all India Network Operation Centre (NOC) along with its staffing and operations.

Effective management of Indian Railways communication networks—IP-MPLS, LTE, Optical Fiber Cable (OFC), CCTV, VoIP and other critical infrastructure—requires a
unified strategy to ensure operational efficiency, safety, and service continuity. These
networks form the backbone of the Railways' digital communication, enabling signalling,
train control, security, and passenger services.

This item focuses on the management of IP-MPLS, LTE & other networks to ensure
robust connectivity and communication across the all India Railway system.

10.1 Important networks of IR with key management strategies are outlined below:

(i) IP/MPLS Network: The IP/MPLS (Multiprotocol Label Switching) network is the
backbone of Indian Railways' communication infrastructure, used for carrying signaling,
control systems, VoIP, and data traffic. Key Management Strategies for IPMPLS
network are:

● Traffic Engineering - Use traffic engineering to manage and prioritize critical traffic, such as signaling and control data, over less-critical services like passenger Wi-Fi. Implement Quality of Service (QoS) rules to prioritize train operations.
● Redundancy & Failover - Configure the network to automatically reroute traffic in case
of failures (e.g., link failure or router malfunction), ensuring high availability and
reliability.
● Capacity Planning - Regular monitoring of bandwidth usage and traffic patterns allows
for capacity planning and scaling of resources, ensuring network readiness for future
expansion or increased data load.
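The capacity-planning arithmetic mentioned above reduces to comparing the change in interface octet counters over a polling interval with the link capacity; the sketch below shows the calculation, with placeholder counter readings and an assumed 70% planning threshold.

# Illustrative sketch: compute link utilization from two interface octet readings.
# The counter values and the 70% planning threshold are placeholder assumptions.

def utilization_pct(octets_t0: int, octets_t1: int, interval_s: float,
                    link_capacity_bps: float) -> float:
    """Percent utilization over one polling interval (ignores counter wrap)."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_capacity_bps)

if __name__ == "__main__":
    util = utilization_pct(octets_t0=1_250_000_000,   # placeholder NMS reading at t0
                           octets_t1=2_000_000_000,   # placeholder NMS reading at t1
                           interval_s=60,
                           link_capacity_bps=1e9)     # 1 Gbit/s link
    flag = "plan capacity upgrade" if util > 70 else "within planning threshold"
    print(f"Utilization {util:.1f}% - {flag}")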

(ii) LTE Network: The LTE (Long Term Evolution) network provides high-speed wireless
communication for Railway operations, including real-time communication between
trains and control centers, as well as supporting CCTV and passenger services. Key
Management Strategies for network are:

● Mission-Critical Communication: Ensure that LTE is configured for reliable communication, especially for real-time data exchange between trains and the control center, as well as emergency voice communications.
● Bandwidth Management: Monitor LTE network bandwidth, particularly for high-data
applications like CCTV video streaming and internet services for passengers. Use
bandwidth reservation techniques to prioritize mission-critical traffic over non-critical
services.
● Network Coverage & Optimization: Ensure consistent LTE coverage along rail routes,
minimizing coverage gaps and optimizing signal strength for continuous
communication.
● Security & Encryption: Implement encryption and secure access for LTE to protect
sensitive operational data, such as train control communications.

Page 71 of 131
(iii) OFC (Optical Fiber Cable) Network: The OFC network of IR provides high-speed,
long-distance data transmission, serving as the backbone connecting different Railway
stations, control centers, and other infrastructure. Key Management Strategies for
network are:

● Fiber Health Monitoring: Implement systems to monitor the physical health of the fiber,
detecting signal degradations or cuts early. Use automated alerts to trigger maintenance
requests.
● Redundancy: Design the OFC network with redundant fiber paths, ensuring that any
fiber cut automatically triggers a reroute to maintain communication.
● Preventive Maintenance: Regularly schedule inspections and maintenance of fiber
infrastructure to prevent outages and maintain long-term reliability.

(iv) CCTV Network: CCTV systems are crucial for security monitoring at stations, on trains,
and in control rooms, providing live video feeds to ensure passenger safety and asset
protection. Key Management Strategies for network are:

● Real-Time Monitoring: Ensure real-time video surveillance feeds are available at the
CCC, enabling immediate response to security incidents.
● Video Analytics: Use AI-based video analytics to automatically detect suspicious
activities, overcrowding, or safety hazards, and trigger alerts.
● Storage & Bandwidth Optimization: Implement storage management for archiving video
footage, while optimizing network bandwidth to prevent CCTV data from overwhelming
the network. Use edge computing to process video locally, reducing the need to send
large video streams over the network.
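The bandwidth and storage planning mentioned above is simple arithmetic; the sketch below estimates station uplink bandwidth and a 30-day archive size, assuming hypothetical figures of 40 cameras at 2 Mbit/s per stream.

# Illustrative sketch: estimate CCTV uplink bandwidth and archive storage for a station.
# Camera count, per-stream bitrate and retention period are hypothetical assumptions.

def cctv_sizing(cameras: int, mbps_per_stream: float, retention_days: int):
    uplink_mbps = cameras * mbps_per_stream
    bytes_per_day = cameras * mbps_per_stream * 1e6 / 8 * 86_400
    storage_tb = bytes_per_day * retention_days / 1e12
    return uplink_mbps, storage_tb

if __name__ == "__main__":
    uplink, storage = cctv_sizing(cameras=40, mbps_per_stream=2.0, retention_days=30)
    print(f"Uplink needed: {uplink:.0f} Mbit/s; 30-day archive: {storage:.1f} TB")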

(v) PBX & VoIP Systems: Indian Railways operates telephone exchanges and
communication systems for voice services. The VoIP network is used for internal
communication between Railway staff and management, providing a reliable voice
communication system over the MPLS and LTE networks. Key Management Strategies
for network are:

● Voice Traffic Prioritization: Use QoS settings in the MPLS network to ensure VoIP traffic
receives priority, guaranteeing clear, uninterrupted voice communication even during
high network loads.
● Monitoring and Troubleshooting: Monitoring call routing, trunk usage, and exchange
system uptime. Continuously monitor call quality metrics (such as latency, jitter, and
packet loss) to maintain high-quality voice communication. Diagnose and resolve issues
promptly.
● Emergency Communication: Ensure the VoIP network is available for emergency
communication between train crews and control centers, with redundancy built into the
network to ensure uptime.

(vi) Passenger Information Systems (PIS): Digital Displays and Announcements: PIS at
stations and on trains rely on networked systems for providing real-time information.
Key Management Strategies for network are:

● Monitor connectivity to displays and announcement systems.


● Ensure accurate, real-time passenger information, such as train schedules and safety
updates.
● Detect faults and alert local teams for quick repair of PIS components.

Page 72 of 131
10.2 Unified Network Management System (NMS)

A Unified Network Management System (NMS) is essential for centralizing the monitoring and management of all these networks (IP/MPLS, LTE, OFC, CCTV, VoIP).
This NMS should be capable of handling:

(i) Real-Time Monitoring: Provide real-time visibility into the performance and status of
each network component, including alarms for faults or degradation.
(ii) Fault Detection & Resolution: Detect and resolve network faults across multiple systems
from a single interface, ensuring fast resolution times. Incident response teams in the
NOC quickly troubleshoot issues, escalate them to field teams for on-site repairs if
needed, and coordinate between divisions and zones for multi-regional issues.
(iii) Redundancy and Failover: Networks incorporate redundancy, allowing automatic
failover in the event of a fault. For example, an MPLS router failure would trigger traffic
rerouting, ensuring uninterrupted service.
(iv) Performance Monitoring: Track performance metrics for all networks (bandwidth,
latency, uptime, etc.) and optimize resource allocation to meet current and future
demands.
(v) Multi-Vendor Support: Support multiple hardware and software vendors, enabling
seamless integration across different technologies used in Indian Railways'
communication infrastructure.
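As a toy illustration of the real-time monitoring function, the sketch below polls a few representative devices from each network and raises a console alert when one stops responding. The device list is entirely hypothetical, and a production NMS would use SNMP or streaming telemetry with proper alerting rather than plain pings.

# Illustrative sketch: minimal reachability poller across several network types.
# Device addresses are hypothetical; a real NMS would use SNMP/streaming telemetry.
import subprocess
import time

DEVICES = {
    "IP/MPLS core router": "10.255.1.1",
    "LTE eNodeB gateway":  "10.200.5.1",
    "CCTV NVR":            "10.130.2.10",
    "VoIP exchange":       "10.140.3.5",
}

def is_up(ip: str) -> bool:
    return subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                          capture_output=True).returncode == 0

def poll_once() -> None:
    for name, ip in DEVICES.items():
        if not is_up(ip):
            print(f"ALARM: {name} ({ip}) not responding")

if __name__ == "__main__":
    for _ in range(3):          # three polling cycles for demonstration
        poll_once()
        time.sleep(30)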

10.3 Network Operations Center (NOC)

The NOC is the heart of the network management strategy. This NOC is responsible for
monitoring, managing, and maintaining all Railway communication networks—
IP/MPLS, LTE, OFC, CCTV, and VoIP—from a single location. A three-tiered structure for the NOC of IR is proposed; it includes:

● Central NOC: Manages the overall architecture and coordinates with Zonal and
Divisional NOCs. Handles inter-zonal connectivity issues and major incidents.
● Zonal NOCs: Oversee network performance across zones, handle escalations from
Divisional NOCs, and optimize zonal traffic.
● Divisional NOCs: Handle day-to-day local network management, fault resolution, and
incident response within the division.

10.3.1 Key Responsibilities of these Centralized NOCs:

a) Unified Monitoring: All networks (IP/MPLS, LTE, OFC, CCTV, VoIP) are monitored via
a single platform using a Network Management System (NMS), providing real-time
visibility into each system's health, performance, and security.
Benefits of a Unified NOC are:
(i) Operational Efficiency: Integrating multiple systems into one NOC allows for streamlined
operations, reducing the need for multiple monitoring centers and ensuring that all
systems can be managed from one location.
(ii) Cost Savings: Centralizing network operations into a single NOC avoids duplication of
infrastructure, reduces staffing costs, and minimizes the need for separate management
systems for each network type.
(iii) Enhanced Incident Response: A unified NOC improves coordination during incidents,
as operators can see the relationships between different systems. For example, a fiber
cut impacting both CCTV and telecommunication can be resolved faster through an
integrated workflow.

Page 73 of 131
(iv) Improved Security and Monitoring: By centralizing monitoring of CCTV and
telecommunication systems, the NOC enhances the ability to identify threats and
security breaches in real time, leading to a safer railway environment.
(v) Scalability for Future Needs: A centralized NOC allows for easier scaling as new
technologies (such as IoT sensors or 5G networks) are introduced to the railway’s
infrastructure.

b) 24/7 Fault Management: Detect and troubleshoot network issues, ensuring minimum
downtime and efficient response to incidents.

c) Performance Optimization: Analyze key performance metrics to ensure the best network performance across all systems.

d) Security Management: Security is paramount across all networks, given the critical
nature of railway operations. Each network must be managed under strict cyber security
protocols to protect against potential breaches or failures. Monitor threats and ensure
compliance with cyber security standards across networks. Key Security Measures are:

(i) Firewalls and Intrusion Detection Systems (IDS): Implement firewalls and IDS across
all network layers (IP/MPLS, LTE, OFC) to detect and mitigate unauthorized access or
cyber-attacks.
(ii) Encryption: Encrypt data transmission, particularly for mission-critical communications
such as train signaling, control data, and CCTV footage.
(iii) Access Control: Use strict access control mechanisms, ensuring that only authorized
personnel can access network resources, CCTV feeds, or VoIP systems.
(iv) Regular Security Audits: Conduct periodic security audits and vulnerability assessments
across all systems to ensure compliance with railway safety and cyber security
standards.

Figure 20 - Maintenance Support System at Zonal Railway NOC and Central NOC

Page 74 of 131
10.3.2 Training and Development

a) Technical Training: Regular technical training on IP/MPLS, LTE networks, network management tools, and security protocols.

b) Incident Response: Training on how to handle incidents, prioritize alarms, and resolve
network outages quickly.

c) Security Protocols: Comprehensive security training, including identifying and mitigating cyber threats and following incident response best practices.

10.3.3 Operational Policies and Procedures

a) Standard Operating Procedures (SOPs): Comprehensive SOPs for incident response, escalation, maintenance activities, and performance management must be developed and followed.

b) Incident Reporting and Communication: A clear communication protocol for reporting incidents, escalations, and coordination with field teams and regional Railway Zones.

c) 24/7 Shifts: Staffing must follow a shift pattern to ensure continuous coverage, with
rotational shifts to avoid fatigue and ensure efficient monitoring.

d) Compliance with Railway and Government Regulations: The NOC must ensure
compliance with Indian Railways operational policies, cyber security guidelines, and
other government regulations.

10.4 Responsibilities of Zonal and Divisional Network Operations Centers (NOCs)

The Indian Railways' vast and complex communication network, which includes both
IP/MPLS and LTE infrastructure, requires efficient management across various levels.
Apart from central NOC, establishing both Zonal and Divisional NOCs ensures
effective coordination, monitoring, and maintenance of the network, with clear
delineation of responsibilities to prevent overlaps and streamline operations.

a) Zonal NOC Responsibilities

The Zonal NOCs are higher-level operational centres responsible for overseeing and
coordinating the network's performance across their respective zones. Each zone
comprises several Divisions, and the Zonal NOC plays a critical role in supervising the
Divisional NOCs while ensuring alignment with broader network policies and goals.

(i) Overall Network Oversight: Zonal NOCs provide centralized control and monitoring of
the entire network within their zones, covering multiple divisions.
(ii) Coordination with Central NOC: Act as the intermediary between the central (All-India)
NOC and divisional NOCs. They receive directives from the central NOC and ensure
that these are implemented at the divisional level.
(iii) Strategic Planning and Optimization: Zonal NOCs are responsible for long-term network
planning, optimization, and capacity management. This includes traffic engineering for
the IP/MPLS backbone and ensuring LTE performance across divisions.
(iv) Major Incident Management: Handle high-priority incidents, such as large-scale network
outages, inter-divisional connectivity failures, or cyber-attacks. They coordinate with

Page 75 of 131
divisional NOCs for incident response and work closely with the central NOC for critical
escalations.
(v) Performance Monitoring & Reporting: Zonal NOCs generate performance reports
covering the entire zone and submit them to the central NOC. They also track and
analyze network KPIs across divisions, including bandwidth usage, latency, and uptime.
(vi) Security Management: Zonal NOCs oversee security implementations, including
encryption and intrusion detection systems, ensuring all divisions adhere to security
standards. They also conduct security audits and manage regional cyber threats.
(vii) Inter-Zonal Coordination: Facilitate communication and coordination between different
zones, especially for network traffic that crosses zonal boundaries, ensuring seamless
inter-zonal operations.
(viii) Policy Implementation: Ensure that Indian Railways' network policies and procedures,
issued by the central NOC, are uniformly implemented across all divisional NOCs.

b) Divisional NOC Responsibilities

The Divisional NOCs are responsible for localized, day-to-day network management
and operations within their specific divisions. They ensure smooth functioning of the
network infrastructure at the ground level and handle routine tasks like fault detection,
troubleshooting, and local optimization.

(i) Day-to-Day Network Monitoring: Divisional NOCs perform real-time monitoring of network elements such as routers, switches, and LTE base stations within their division. They identify and resolve localized issues before escalating them to the zonal level.
(ii) Incident Response & Resolution: Handle first-line troubleshooting for all network-related
incidents in their division. For significant issues, the divisional NOC escalates incidents
to the zonal NOC, particularly if multiple divisions or zonal infrastructure is impacted.
(iii) Field Coordination: Divisional NOCs coordinate with on-ground maintenance and
technical staff for hardware replacements, repairs, and software updates. They also
handle local configurations and adjustments to ensure network efficiency.
(iv) Local Performance Management: Responsible for tracking and optimizing local network
performance, including managing bandwidth distribution, monitoring latency, and
ensuring network uptime for mission-critical services.
(v) Security Implementation: Divisional NOCs apply security policies as directed by the
Zonal NOC, managing local firewalls, encryption protocols, and access controls. They
monitor the division for security breaches and report to the Zonal NOC for critical
incidents.
(vi) Minor Fault Repairs: Local hardware failures or software bugs are primarily managed
by the divisional NOC. They engage with field teams for physical repairs and system
reboots, ensuring minimal downtime for passengers and operational systems.
(vii) Reporting & Communication: Regularly report network health, incident logs, and
maintenance activities to the Zonal NOC. They also act as the first point of contact for
any network-related issues affecting their division.
(viii) Local Compliance & Documentation: Ensure that divisional operations comply with the
policies set by the Zonal and Central NOCs. Maintain detailed logs of network issues,
repairs, and performance data for future audits.

c) Coordination Between Zonal and Divisional NOCs

Efficient network management requires seamless coordination between Zonal and Divisional NOCs. Below are key coordination mechanisms:

Page 76 of 131
(i) Escalation Process: Local issues that cannot be resolved at the divisional level are
escalated to the Zonal NOC. If the Zonal NOC cannot resolve a problem, it is further
escalated to the Central NOC. This tiered approach ensures efficient troubleshooting
and minimizes downtime.
(ii) Incident Sharing: In the event of a significant network incident affecting multiple
divisions, Zonal NOCs will take charge of coordinating with multiple Divisional NOCs to
ensure a coordinated response.
(iii) Data & Performance Reports: Divisional NOCs are responsible for collecting local
network performance data, which they share with the Zonal NOC. This allows for higher-
level analysis, optimization, and planning at the zonal level.
(iv) Security Compliance & Audits: Divisional NOCs implement the security protocols while
Zonal NOCs oversee compliance and conduct regular audits to ensure security
standards are met across all divisions.

10.5 Staffing to Manage Multiple Network Management Systems (NMS) from One NOC

When a single Network Operations Center (NOC) is tasked with handling multiple
Network Management Systems (NMS), such as those for IP/MPLS, LTE, Optical Fiber
Cable (OFC) networks, CCTV, telecommunication exchanges, and other networks,
careful planning is needed to ensure efficient management and monitoring. This
requires specialised staff and cross-functional teams with clear responsibilities across
different systems.

10.5.1 Key Considerations for Staffing Planning:

a) System-Specific Expertise: Each NMS requires staff with specialised knowledge to manage specific networks like OFC, CCTV, and IP/MPLS. Cross-training can ensure flexibility, but primary roles should be assigned based on expertise.

b) 24/7 Operations: Since the NOC operates 24/7, staffing should be planned in shifts to
ensure continuous coverage, with rotating teams for different NMS.

c) Tiered Support Levels: Establish a tiered support structure to handle issues based on
complexity. This ensures that basic issues are resolved at the lower levels (L1), while
more complex issues escalate to specialised engineers (L2, L3).

d) Automation and Efficiency: Use automation tools where possible to reduce manual
intervention. This may reduce staffing requirements, especially for routine monitoring
tasks.

10.5.2 Proposed Staffing Structure:

a) NOC Manager (1 per NOC): 1 manager, typically operating in regular business hours
with escalations available 24/7.

● Responsibilities: Oversee the entire NOC, manage staff across different shifts,
coordinate operations for all NMS, and ensure seamless integration of multiple systems.

b) Network Engineers:

● Network engineers will be the core of the NOC team, managing different networks such
as IP/MPLS, LTE, OFC, and other critical systems. These engineers will be divided by

Page 77 of 131
their areas of expertise. At least two levels of Engineers will be available with clear
responsibilities-L1 & L2.

c) IP/MPLS & LTE Engineers: 3-4 engineers per shift to cover IPMPLS & LTE networks.
Total Staffing: 10-12 engineers

● Responsibilities: Monitor and manage the IP/MPLS backbone and LTE network,
troubleshoot routing and switching issues, configure QoS, and optimize bandwidth
usage.

d) OFC Engineers: 1-2 engineers per shift to handle OFC-specific tasks. Total Staffing: 5-6 engineers.

● Responsibilities: Monitor fiber cable health, detect and troubleshoot fiber cuts or signal
degradation, and coordinate repairs.

e) CCTV System Engineers: 1-2 engineers per shift to monitor the health and
performance of the CCTV network. Total Staffing: 5-6 engineers.

● Responsibilities: Ensure all cameras are operational, monitor video feeds, manage
storage systems (NVR/DVR), and perform video analytics.

f) Telecommunication Exchange Engineers: 1-2 engineers per shift for telecommunication systems. Total Staffing: 4-5 engineers.

● Responsibilities: Monitor PBX systems, handle VoIP traffic, and ensure reliable communication for rail staff and emergency services.

g) Security Engineers: 1 security engineer per shift to cover overall security monitoring.
Total Staffing: 4-5 security engineers.

● Responsibilities: Manage network security across all systems (IP/MPLS, LTE, OFC,
CCTV, and telecommunication), ensure compliance with security policies, monitor for
intrusions, and perform regular audits.

h) Performance Analysts: 1 performance analyst per shift. Total Staffing: 3-4 analysts.

● Responsibilities: Monitor network performance metrics, ensure optimal bandwidth usage, and generate regular performance reports for all systems (IP/MPLS, LTE, CCTV, etc.).

i) Incident Response Team: 2-3 team members per shift to handle immediate
troubleshooting across systems. Total Staffing: 6-8 incident response personnel.

● Responsibilities: Respond to alarms and incidents across any of the NMS, troubleshoot issues, escalate complex problems to senior engineers, and coordinate field repairs.

j) System Administrators: 1 system administrator per shift to maintain the NMS platforms. Total Staffing: 3-4 system admins.

Page 78 of 131
● Responsibilities: Manage servers and software that host the NMS platforms, ensure
backups, and handle system-level issues across different network management
systems.

k) Helpdesk & Support Staff: 2-3 support staff per shift. Total Staffing: 6-9 helpdesk
staff.

● Responsibilities: First-line support for minor issues, logging incidents, and escalating
unresolved problems to network engineers. Handle basic troubleshooting for all
systems.

l) Shift Structure:

● Three Shift Model: To ensure 24/7 coverage, the NOC should be divided into three
shifts (morning, evening, night). A balanced staff distribution ensures that all NMS are
covered round-the-clock.
● Rotational Staffing: Engineers and support staff should rotate between shifts to
maintain staff flexibility and avoid fatigue.

10.5.3 Summary of Staffing in NOC:

Role                       Staffing (Per Shift)    Total Staffing
NOC Manager                N/A                     1
Helpdesk & Support Staff   2-3 per shift           6-9
IP/MPLS & LTE Engineers    3-4 per shift           10-12
OFC Engineers              1-2 per shift           5-6
CCTV Engineers             1-2 per shift           5-6
Exchange Engineers         1-2 per shift           4-5
Security Engineers         1 per shift             4-5
Performance Analysts       1 per shift             3-4
Incident Response Team     2-3 per shift           6-8
System Administrators      1 per shift             3-4

10.5.4 Benefits of Centralized Staffing for Multiple NMS:

● Efficient Resource Use: Staffing can be optimized by training engineers to manage multiple systems, reducing redundancy and allowing more effective resource allocation.
● Improved Coordination: By centralizing the NMS under one NOC, incident response
times are reduced, and issues spanning multiple networks can be handled more
efficiently.
● Cost Savings: Centralizing staffing for multiple systems into a single NOC reduces the
need for separate teams and facilities, cutting operational costs.
● Scalability: The centralized NOC staffing model is scalable, meaning additional staff
can be added as new networks or technologies are integrated.

Page 79 of 131
10.5.5 Overall Staffing Summary:

NOC Level        Total Staff (approx.)   Key Roles
Central NOC      50-60                   NOC Manager, Network Engineers, Security Engineers, Performance Analysts, Incident Response Team, System Admins, Helpdesk
Zonal NOC        40-50 per Zone          Zonal NOC Manager, Network Engineers, Security Engineers, Performance Analysts, Incident Response, System Admins, Helpdesk
Divisional NOC   25-30 per Division      Divisional NOC Manager, Network Engineers, Incident Response Engineers, System Admins, Helpdesk

This staffing structure ensures comprehensive network management across all levels,
with specialized teams to monitor, optimize, and respond to incidents for the efficient
functioning of Indian Railways' communication systems.

10.6 Infrastructure Requirements for setting up NOC:

The NOC for managing the different networks of IR is critical. Key aspects to be taken into consideration while deciding its infrastructure are detailed below:

a) Location:

(i) A central, strategically located facility, preferably near a major city with easy access to
transportation and emergency services.
(ii) A backup Disaster Recovery (DR) site in a geographically distant location to maintain
continuity in case of NOC failure.

b) Data Center Facilities:

(i) High-speed connectivity, redundancy in power supply (including UPS systems and
backup generators), and cooling systems.
(ii) Secured server rooms for hosting management software, storage systems and network
devices.
(iii) A high-bandwidth internet connection for real-time data and analytics monitoring.

c) Hardware:

(i) High-performance servers for running NMS software, storage devices for logs and
backups, and large display screens for monitoring live network status.
(ii) Routers, switches, and firewalls to manage the IP/MPLS and LTE backbone network.

d) Software:

(i) Network Management Software (NMS) for end-to-end network monitoring and control,
integrated with automation tools for configuration, fault detection, and diagnostics.

Page 80 of 131
(ii) Security Information and Event Management (SIEM) software for real-time threat
monitoring and incident response.
(iii) Performance Management Systems to track network KPIs, including latency, bandwidth
utilization, and QoS.
(iv) Collaboration Tools for efficient communication among teams, including helpdesk
software for ticket management and resolution.

e) Redundancy and Failover:

(i) Dual power supplies, dual internet service providers (ISPs), and dual access to all
critical systems.
(ii) Backup network infrastructure to minimize downtime and enable failover in the event of
hardware failure.

f) The proposed specifications of Network Management System & Automation are attached as Annexure-4.

10.6.1 Size and Layout of the NOC

The size of the NOC depends on factors such as the scale of operations, the number of
network elements monitored, and the staff needed to operate it. Here’s a breakdown of
the components that will determine the size:

a) NOC Size Based on Staff and Equipment

(i) Workstations for NOC Engineers:

Each NOC engineer needs a dedicated workstation equipped with monitors (multi-screen setups for monitoring). Typically, 5 to 15 engineers would be required per shift, depending on the size of the Railway Division. Staff includes network engineers, system administrators, security specialists and incident managers.

(ii) Network Monitoring Screens:

Large video walls or display units that present real-time status updates on network
performance, CCTV footage, LTE signal coverage, etc. Typically, a video wall occupies
a significant portion of the NOC, requiring 30-50 square meters depending on the size.

(iii) Supervisor and Manager Workstations:

Dedicated desks for NOC managers and supervisors who oversee operations. These
desks should have an unobstructed view of the main video wall.

(iv) Estimated Space Requirement:

For a medium-sized Railway division, a NOC should be designed to accommodate 20-30 workstations, covering an area of approximately 100 to 200 square meters.

Page 81 of 131
b) Space for Server Room

Dedicated Server Room for housing critical IT infrastructure like servers, storage,
routers, switches, and backup systems. Redundant Power Supply, including UPS and
backup generators. Cooling Systems for equipment protection.

Space requirement: 50 to 100 square meters depending on the number of systems.

c) Meeting Room

A small conference room within or adjacent to the NOC for coordination meetings,
incident debriefs, and high-level strategy sessions. Size: 20 to 30 square meters.

d) Break Rooms and Rest Areas

NOC operates 24x7, so staff will need break areas with amenities. Size: 20 to 30 square
meters. The NOC for a railway division managing a unified NMS for IP/MPLS, LTE,
OFC, CCTV, and exchange systems must be designed with adequate space,
equipment, and facilities to ensure 24x7 monitoring, fault management, and security.

The NOC should have the following key areas:

(i) Main Control Room with video walls and workstations for NOC engineers.
(ii) Server Room for housing critical IT infrastructure.
(iii) Break Rooms and Meeting Areas for staff comfort and coordination.

For a medium-sized railway division, the NOC should occupy an estimated space of
200 to 300 square meters, with room for future expansion and scalability as technology
evolves.

10.7 Recommendation of Committee:

After detailed discussion the committee recommends the following:

10.7.1 For management of the IP-MPLS and LTE networks, an all-India Network Operation Centre as well as Zonal and Divisional NOCs shall be created. Apart from these two networks, other networks like OFC, CCTV, Railnet, PBX, VoIP etc. operational in the Divisions shall also be managed through these NOCs.

10.7.2 The all-India or Central NOC shall be located strategically in a DC-DR configuration. One location can be New Delhi and the other can be Secunderabad.

10.7.3 The Zonal NOC can be co-located with the Divisional NOC; however, the roles of the Zonal NOC and the Divisional NOC shall be handled by different sets of people.

10.7.4 Every NOC shall be headed by NOC manager of appropriate level. In centralised NOC,
the NOC manager shall be of SA Grade with supporting team and in Zonal NOC the
manager shall be of JA Grade with supporting team. However, in Division the manager
can be of Senior Scale with supporting team.

Page 82 of 131
10.7.5 NOC has to be manned 24x7 by deploying required staff in each shift. Recommended
manning of staff is as given below:

Role                       Staffing (Per Shift)    Total Staffing
NOC Manager                N/A                     1
Helpdesk & Support Staff   2-3 per shift           6-9
IP/MPLS & LTE Engineers    3-4 per shift           10-12
OFC Engineers              1-2 per shift           5-6
CCTV Engineers             1-2 per shift           5-6
Exchange Engineers         1-2 per shift           4-5
Security Engineers         1 per shift             4-5
Performance Analysts       1 per shift             3-4
Incident Response Team     2-3 per shift           6-8
System Administrators      1 per shift             3-4

Based on the responsibilities of the different levels of NOCs, the required manpower shall be deployed.

10.7.6 Suggested manpower for the different NOCs is given below :-

NOC Level        Total Staff (approx.)   Key Roles
Central NOC      50-60                   NOC Manager, Network Engineers, Security Engineers, Performance Analysts, Incident Response Team, System Admins, Helpdesk
Zonal NOC        40-50 per Zone          Zonal NOC Manager, Network Engineers, Security Engineers, Performance Analysts, Incident Response, System Admins, Helpdesk
Divisional NOC   25-30 per Division      Divisional NOC Manager, Network Engineers, Incident Response Engineers, System Admins, Helpdesk

10.7.7 Wherever the Zonal NOC and Divisional NOC are co-located staffing can be planned
depending upon the common activities which can be handled by the same set of staff.

Page 83 of 131
10.7.8 Specialized network engineers for different networks, i.e. IP-MPLS, LTE etc., shall be planned in two levels. Normally, L2 engineers shall deal with and resolve the issue; if it cannot be resolved by them, it shall be escalated to Level-1 engineers.

10.7.9 Outsourcing of different system experts is also recommended for most of the activities
of the NOC.

10.7.10 Clear responsibilities of the different levels of NOCs shall be defined to avoid any duplication of work.

10.7.11 Preferably separate NOC building of suitable size shall be planned in every Division.
Size of these divisional NOCs can be approximately in the range of 200-400 sqm based
on size of the Division and available networks.

Page 84 of 131
Annexure-1

Page 85 of 131
Annexure-2

Page 86 of 131
Page 87 of 131
Page 88 of 131
Page 89 of 131
Page 90 of 131
Annexure-3

Specifications for Core/Aggregate layer Routers

General Specifications:

1. The Mission Critical IP-MPLS Network shall be based on highly resilient, multiservice
technology to provide traffic engineered service assurance and bandwidth guaranteed
behaviour for mission critical, delay sensitive and bandwidth intensive services &
applications.
2. The network design should cater for capability to engineer traffic links between nodes
with user defined bandwidth guarantee and QoS profile. Allocation of user configurable
queues should be supported for differentiated treatment to traffic for speed and
reliability. It should be possible to re-route traffic from failed routes to protected routes
with no impact on active sessions.
3. The network should be implemented with standard based protocols as defined by IETF,
IEEE, ITU-T, etc.
4. The router/series should be compliant/certified for IEEE 1613, IEEE 1613.1, IEC 61000-
6-5, IEC 61850-3, IEC/AS 60870.2.1, EN 50121-4 standards or equivalent standards.
5. All the network equipment should be IPv4 and IPv6 fully capable and should fully
support IPv4 and IPv6.
6. Router should have IPv4 Routing, Border Gateway Protocol, Intermediate System- to-
Intermediate System [IS-IS], and Open Shortest Path First [OSPF]), Virtual Router
Redundancy Protocol (VRRP) OR EQUIVALENT, IPv6 Routing, and BGP Prefix
Independent Convergence, Segment Routing.
7. High Availability features like node protection, path protection, link protection as per
media availability.
8. Should be able to support multiple VPNs for different services with traffic engineering defined.
9. All the routers, Routing EMS/SDN controller shall be of the same make (OEM) for
seamless integration and interworking. NMS/ EMS must be capable to
support/integrate/manage multi OEM devices (Routers and Switches).
10. All licenses for stated functionalities and features must be built in the provided IP-MPLS
solution from Day-1.
11. All the interfaces as requested must be equipped with perpetual licenses without any
year-based capping on interface usage from Day-1. None of interface must stop working
after end of warranty and support.
12. The same Routing EMS system must be capable to be upgraded to SDN controller by
additional licenses/plugins. In case separate solution for SDN is required, must be
considered in the solution from Day-1 without any additional cost to the employer.
13. It may be noted that in the specification wherever support for a feature has been asked
for, it will mean that the feature should be available without requirement of any other
hardware/software/licenses. Thus, all hardware/software/licenses required for enabling
the support/feature shall be included in the offer.
14. The ‘slot’ for router means a main slot or full slot on the router chassis. Only such a slot
shall be counted towards determining the number of free slots. Any sub slot or daughter
slot or a half slot shall not be considered as a slot.
15. The bidder must supply same make & model of controller cards, chassis hardware,
interface modules, for all locations for common sparing to the buyer. Different types of
controller cards, chassis, modular cards, etc. are not allowed to be quoted in the solution
16. The IPMPLS Router OEM /OEM certified Trainer must provide OEM training to buyer
at OEM premises or any other as per training schedule mentioned in the BoQ.

Page 91 of 131
17. Router should be chassis based & should have modular architecture for scalability.
Chassis should be 19” rack mountable type.
18. Should have power and fan redundancy and hot swappable.
19. All interface modules, line cards should be hot swappable for high availability.
20. All interfaces on the routers shall provide wire-rate throughput.
21. All line-card slots should be universal. All the line-cards should be capable to be
configured on all given line-card slots without any restriction.
22. The modular operating system shall run all critical functions like various routing protocol,
forwarding plane and management functions in separate memory protected modules.
Failure of one module shall not impact operations of rest of the OS.
23. Shall support On-line insertion and removal for cards, fast reboot for minimum network
downtime, VRRP or equivalent.
24. Shall support link aggregation using LACP as per IEEE 802.3ad and MC-LAG or EVPN
Multihoming.
25. Shall support MPLS Provider/Provider Edge functionality. MPLS VPN, MPLS mVPN
(Multicast VPN), AS VPN, DiffServ Tunnel Modes, MPLS TE (Fast re-route), DiffServ-
Aware TE, Inter-AS VPN, Resource Reservation Protocol (RSVP), VPLS, VPWS,
Ethernet over MPLS, EVPN, Segment routing and Segment routing Traffic engineering.
26. The router should support Netconf, YANG and other modern system management
protocols.
27. The routers shall support both L2 and L3 services on all interfaces.
28. The router should support BGP link-state (BGP-LS).
29. The Router should support various software models/sensors for capturing different
health parameters from the devices.
30. The router shall have the ability to interact with open standard based tools.
31. Shall support: Traffic Classification using various parameters like source physical
interfaces, source/destination IP subnet, protocol types (IP/TCP/UDP),
source/destination ports, IP Precedence, 802.1p, MPLS EXP, DSCP.
32. Shall support Strict Priority Queuing or Low Latency Queuing to support real time
application like Voice and Video with minimum delay and jitter.
33. Congestion Management: Priority queuing, Class based weighted fair queuing.
34. Traffic Conditioning: Committed Access Rate/Rate limiting.
35. Platform must support hierarchical shaping, scheduling, and policing for the control
upstream and downstream traffic.
36. Router should have a minimum of 3 levels of scheduling for HQoS and per-VLAN QoS. Shall support at least 8 hardware queues for each GE interface on the router.
37. Support Access Control List to filter traffic based on Source & Destination IP Subnet,
Source & Destination Port, Protocol Type (IP, UDP, TCP, ICMP etc) and Port Range
etc.
38. Support per-user Authentication, Authorization and Accounting through RADIUS or
TACACS.
39. Multiple privilege level authentications for console and telnet access through Local
database or through an external AAA Server.
40. Support for monitoring of Traffic flows for Network planning and Security purposes.
41. Display of input and output error statistics on all interfaces.
42. Display of Input and Output data rate statistics on all interfaces.
43. Router shall support System & Event logging functions as well as forwarding of these
logs onto a separate Server for log management.
44. Router shall have Debugging features to display and analyse various types of packets.
45. Shall support out-of-band management through Console / external modem for remote management.
46. Event and System logging: Event and system history logging functions shall be available. The Router shall generate system alarms on events. Facility shall be available to put selective logging of events onto separate hardware where analysis of the logs shall be possible.
47. After fulfilling Day One interface requirements, the router must have minimum of 2
interface slots vacant for future expansion.
48. Shall support online insertion and removal (OIR) that is non-disruptive in nature. Online insertion and removal of one line card shall not lead to ANY packet loss for traffic flowing through other line cards, for both unicast and multicast traffic.
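Illustrative sketch for item 26 above (NETCONF/YANG support): the following is a minimal, non-binding Python example using the open-source ncclient library to retrieve the running configuration from a NETCONF-enabled router. The host address and credentials are placeholders only and do not refer to any actual Railway device or OEM implementation.

from ncclient import manager

# Placeholder device details -- illustrative only, not an actual Railway router
ROUTER = {
    "host": "192.0.2.1",       # documentation address (RFC 5737)
    "port": 830,               # default NETCONF-over-SSH port
    "username": "admin",
    "password": "admin",
    "hostkey_verify": False,
}

with manager.connect(**ROUTER) as m:
    # Retrieve the running configuration as XML (modelled in YANG on the device)
    running = m.get_config(source="running")
    print(running.data_xml[:500])   # print the first part of the configuration

The same session object can also be used for edit-config and other NETCONF operations; the exact YANG models available depend on the OEM.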

Proposed Specifications of CORE Router:

1. Should support redundant controller cards (1+1) and redundant fabric cards (N+1) for
high availability.
2. The router should have capability of minimum 2 Million IPv4, 500K IPv6 routes.
3. The router should support minimum 128K MAC address.
4. Router should support 6k multicast routes.
5. Router should support minimum 8K MPLS PWE3.
6. Router should support min 8K VPLS.
7. Router should support 2K MPLS L3 VPN.
8. The router should support 100K labels and 10 label stacks.
9. Should support 64 ECMP (equal cost multipath).
10. Ability to configure hierarchical queues in hardware for IP QoS at the egress to the edge.
Minimum 128K queues per system.
11. Router shall support minimum non-blocking capacity of 6 Tbps full-duplex or higher at
full services scales.
12. The router must support 1GE, 10GE, 40GE, 100GE, 400GE interface pluggable up to
80Km distances.
13. The router must support multi-rate interfaces: 1/10GE, 10GE/25GE, 40GE/100GE,
100GE/400GE.
14. The router must support 100GE, 200GE, 400GE interfaces with coherent optics for
longer distances over dark fiber.
15. Router should support 400 Gbps full-duplex per slot capacity.
16. The router must have capability to support following minimum interfaces:
• 8 x 100/400GE,
• 8 x 40/100GE,
• 16 x 1/10GE
17. The Router should be supplied with following interfaces on Day-One:
18. 8 x 100GE interfaces distributed across minimum two line cards, with 100G LR optics.
19. 8 x 40GE interfaces distributed across minimum two line cards, with 40GE LR Optics.
20. 16 x 10GE interfaces, equipped with 10GE LR Optics.
21. Operating temperature: +5°C to +40°C guaranteed.
22. Humidity: 5% to 85% Non-Condensing.
23. Super core routers and Divisional routers should have common and interchangeable cards.

Proposed Specifications of DIVISION Router:

1 Should support redundant controller cards (1+1) and redundant fabric cards (N+1) for
high availability.
2 The router must be equipped with fan filter to avoid accumulation of dust on main board.
3 The router must support on-board GNSS receiver.
4 The router should support minimum 128K MAC address.
5 The router should have capability of minimum 240K IPv4, 120K IPv6 routes (FIB).
6 Router should support 4K MPLS PWE3.
7 Router should support 4K VPLS.
8 Router should support 1K MPLS L3 VPN.
9 The router should support 12K labels and 10 label stack depth.
10 Should support minimum 16 ECMP (equal cost multipath).
11 Ability to configure hierarchical queues in hardware for IP QoS at the egress to the edge.
Minimum 20k egress/VoQ egress hardware queues.
12 The router must support 1GE, 10GE, 40GE, 100GE interface pluggable up to 80Km
distances.
13 Router should support minimum 200 Gbps full-duplex per slot capacity.
14 Router shall support minimum non-blocking throughput capacity of 1200 Gbps full-
duplex or higher at full services scale.
15 The router must support multi-rate interfaces: 1/10GE, 10/25GE, and 40/100GE.
16 The router must support line-rate 10GE, 100GE interfaces with both grey and coloured
pluggable.
17 The router must support 100GE interfaces with coherent optics for longer distances over
dark fiber.
18 The router must have capability to support following minimum interfaces:
• 10 x 40/100GE,
• 20 x 1/10GE
19 The Router should be supplied with following interfaces on Day-One:
20 8 x 40GE interfaces distributed across minimum two (2) line cards, with 40GE LR optics.
21 16 x 10GE distributed across minimum two (2) interface slots.
22 Operating temperature: +5°C to +40°C guaranteed.
23 Humidity: 5% to 85% Non-Condensing.
24 Super core routers and Divisional routers should have common and interchangeable cards.

Note:- In case any technical specification coincides with RDSO TAN v2.0, the tender clause will supersede.
Annexure-4

Proposed Specifications of Network Management System & Automation:

1. Network Provisioning Platform should be based on open, secure, and scalable software
for optimizing network infrastructure and operations.
2. Network Provisioning Platform Should support APIs for customization and integration.
3. Network Provisioning Platform should support RADIUS/TACACS based access control
4. The proposed Network Provisioning Platform shall be deployed with High Availability
(HA) and Geo-redundancy. The bidder shall describe in detail the network architecture
for Proposed Network Provisioning Platform to ensure continuous operations and
provisioning at both sites
5. The Network Provisioning Platform shall support capability for monitoring and
configuration provisioning of IP-MPLS network through centralized NOC
6. The Network Provisioning Platform shall support single management system for IP-
MPLS network to provide ease of operation.
7. The Network Provisioning Platform should support CORBA/JAVA/XML/REST-
API/Kafka interfaces to facilitate integration for end-to-end processes such as flow
through service provisioning and service assurance
8. The proposed Network Provisioning Platform shall support North Bound Interface (NBI)
REST-API protocol.
9. The proposed Network Provisioning Platform shall have the capability to expose the following information via NBI (an illustrative NBI query sketch is given at the end of this Annexure):
a. Network and service topology and details (i.e., bandwidth, customer info, service info)
b. Telemetry data (i.e., latency, utilization)
c. Network slice and service lifecycle API (CRUD: create, read, update and delete).
10. The proposed Network Provisioning Platform shall support South Bound Interface (SBI) protocols, minimum as the following: SNMPv1, v2 & v3, NETCONF (RFC 6241)/YANG (RFC 6020 & RFC 7950), BGP-LS (RFC 7752), PCEP (RFC 5440), Telemetry (please specify the telemetry protocol supported).
11. The proposed Network Provisioning Platform shall collect telemetry information from the
managed devices;
a. Bandwidth utilization for Links and Ports
b. Latency of Path and Link
12. Network Provisioning Platform shall support client–server based architecture. Client
being GUI/web browser based access with secure interface to the server.
13. The role of the Network Provisioning Platform is to control and manage all aspects of the domain such as Fault, Configuration, Auditing, Performance, and Security (FCAPS) to ensure maximum usage of the devices' resources.
14. The Network Provisioning Platform should be supplied with all applicable perpetual feature licenses from day one. No feature should have year-based capping for usage.
15. Network Provisioning Platform should allow the user to zoom down to the port level of
any given card /equipment.
16. The Network Provisioning Platform and the network elements shall provide Operation,
Administration, Maintenance & Provisioning (OAM&P) functions in accordance with the
Telecommunications Management Network (TMN) concept described in ITU-T
Recommendations Y.1714 (01/2009)/Y.1711(02/2004) or equivalent standards /
recommendations.
17. The Network Provisioning Platform shall provide a proactive and efficient monitoring
that helps to detect and avoid potential network backbone problems.
18. The Network Provisioning Platform should be flexible and modular. It should be able to scale from a single-machine configuration to a powerful and open client/server architecture.
19. Network Provisioning Platform should support administrative operations to be
performed repeatedly such as: NE configuration backup, software image download,
operator login/logout attempts, etc.
20. Network Provisioning Platform must support the below network element software management: -
a. Loading of new software images.
b. Management of multiple versions of software.
c. Installation of software updates.
d. Software download status reporting.
e. Administrator authorization for the loading of software.
f. Coordination of the software download to multiple end elements based on a single software source.
g. Version control for all network elements.
21. The Network Provisioning Platform GUI should allow authorised personnel to create
and activate end-to-end services.
22. The Network Provisioning Platform should be able to provision, configure and manage
network for DWDM and IP-MPLS
23. Network Provisioning Platform should allow service and equipment provisioning
24. The proposed Network Provisioning Platform shall have the capability to automatically
retrieve network information and create topology upon network initiation
25. The proposed Network Provisioning Platform shall have the capability to automatically
display the network topology, including physical and logical links between network
elements.
26. The proposed Network Provisioning Platform shall have the capability to automatically
update the topology information upon network or service changes.
27. The Network Provisioning Platform should support health monitoring of all modules and
indicate health of the system and connectivity.
28. The Management System shall support the provisioning of :-
a. All NE parameters.
b. Threshold Crossing Alert(TCA) Alarm Severity
29. Alarms should be categorised into different categories e.g. Emergency/Critical,
Flash/Major, Immediate/Minor, Priority/Warning, Deferred/Informative depending upon
the severity of the alarm
30. Network Provisioning Platform should be able to display the Network Elements and the
links in different colours depending upon their status for healthy, degraded and critical
alarm conditions.
31. Dashboard should indicate the number of active alarms with filtering options based on
the period, duration, severity, event type and location.
32. The Network Provisioning Platform system should support integration with emailing
system for informing network admin user(s)
33. All failure and restoration events should be time-stamped
34. The GUI shall provide the ability to create, delete and modify topology views of the
network.
35. The solution must support Service Level Agreements & Lifecycle Management, including Version Control, Status Control, Effectivity and Audit Trail, to ensure accountability for the project.
36. The solution must have the ability to define and calculate key performance indicators from an End-to-End Business Service delivery perspective related to the Project under discussion.
37. The solution should support requirements of the auditors requiring technical audit of the
whole system
38. Solution should support effective root cause analysis, support capabilities for
investigating the root causes of failed service levels and must make it possible to find
the underlying events that cause the service level contract to fail.
39. The solution should provide historical and concurrent service level reports for the project
in order to ensure accountability of the network performance
40. Automatic report creation, execution and scheduling; must support a variety of export formats including Microsoft Word, Adobe PDF, HTML, etc.
41. The solution must support Templates for report generation, Report Filtering and
Consolidation and Context sensitive Drill-down on specific report data to drive
standardization and governance of the project
42. The solution must support security for drill-down capabilities in dashboard reports, ensuring visibility for relevant personnel only.
43. Support real-time reports as well as historical analysis reports (like Trend, TopN,
Capacity planning reports etc.)
44. The proposed Network Provisioning Platform shall automate the provisioning of service
and path for IP network services based on following:
a. Latency
b. Bandwidth
c. Shortest Path
d. User Defined (Custom)
45. The proposed Network Provisioning Platform shall have the capabilities to create and
update LSP in real-time.
46. The proposed Network Provisioning Platform shall calculate and configure the LSP path in the IP network.
47. The proposed Network Provisioning Platform shall support the discovery, control, and creation of main and protection LSPs.
48. The proposed Network Provisioning Platform shall automatically provision the LSPs for both directions.
49. The proposed Network Provisioning Platform shall provide the capability to provision
full-meshed LSP for services running on more than two nodes.
50. The proposed Network Provisioning Platform shall be able to provision the service
based on the following policies;
a. Shared Risk Link Group (SRLG)
b. Diverse path
51. The policies shall be capable of being enforced via strict and preferred modes for diverse path provisioning.
52. The proposed Network Provisioning Platform shall have the capabilities to provision and
manage LSPs using PCEP and Netconf.
53. The proposed Network Provisioning Platform shall automate the provisioning for all IP
network order types i.e., new install, modify, terminate.
54. The bidder to describe in detail the capability of IP Service Provisioning.
55. The proposed Network Provisioning Platform shall support the provisioning of IP services, i.e., L2VPN, L3VPN and internet services.
56. The proposed Network Provisioning Platform shall analyse and optimize the network
path using:
a. RSVP-TE
b. Segment Routing SR-TE

57. The proposed Network Provisioning Platform shall obtain link/path topology and
utilization from IP/MPLS network and provide global network view.
58. The proposed Network Provisioning Platform shall store the topology and utilization.
59. The proposed Network Provisioning Platform shall perform path computation and
distribute the traffic between multiple paths for optimization or to avoid congestion.
60. The proposed Network Provisioning Platform shall support the following LSP types;
a. PCE-initiated LSP - Creation and manage
b. PCC-delegated LSP - Discovery and manage
c. PCC-initiated LSP – Discovery

61. The proposed Network Provisioning Platform shall have the capability to route the LSPs to alternative paths around affected nodes and links that are under a maintenance event.
62. The bidder shall describe other IP-MPLS TE Optimization features supported.
63. The proposed Network Provisioning Platform shall be accessible via a secured Web-
based interface.
64. The proposed Network Provisioning Platform shall provide visualization of the managed
network elements (i.e., bayface layout).
65. The proposer shall ensure the communication between Network Provisioning Platform
and managed device shall be secured.
66. The proposed Network Provisioning Platform shall provide real-time views of available resources.
67. The proposed Network Provisioning Platform shall display information of the following
managed devices:
a. Operating system versions
b. IP addresses
c. License
d. Connection status
e. Physical and logical interfaces

68. The configuration management shall have the following functionalities:


a. Setting up all NEs
b. Adding, editing and removing all managed NEs from service
c. Uploading software or firmware version to selected NEs when scheduled
d. Collecting all managed NEs status on regular and on demand basis
e. Security Configuration
f. Configuration of the features and functionalities specified in each IP Network Domain component.

69. The proposed Network Provisioning Platform shall provide a feature to validate the delta
configuration before deploying the configuration changes to the device.
70. The proposed Network Provisioning Platform shall push configuration, firmware and
software updates to devices.
71. The proposed Network Provisioning Platform shall backup and restore devices
configuration files for the managed devices.
72. The proposed Network Provisioning Platform shall backup and restore its own
configuration files.
73. The proposed Network Provisioning Platform shall have the capability to remotely upgrade managed NEs to a new software release.
74. The proposed Network Provisioning Platform shall indicate the following status of
upgrading activities:
a. Mismatch of software version for a particular NE.
b. Progress indicator (percentage)
c. Checklist of items that indicate the upgrading is successful

75. In the event of a software upgrade failure, the proposed Network Provisioning Platform shall allow automatic fallback to the previous running software version.
76. The proposed Network Provisioning Platform shall re-synchronise its configuration with
the current state of managed NEs once the connection is re-established.
77. Any configuration changes made by Network Provisioning Platform shall be
synchronized in real-time to the managed NE automatically.
78. The proposed Network Provisioning Platform shall allow for configuration rollback to
previous versions of the configuration or return to the last saved configuration.
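As an illustration of the REST-based North Bound Interface described in items 8, 9 and 11 above, the sketch below queries topology and telemetry data from the Network Provisioning Platform. The endpoint paths, token header and JSON structure are assumptions made only for illustration; the actual NBI schema will be as offered by the selected platform.

import requests

BASE_URL = "https://nms.example.invalid/api/v1"   # hypothetical NBI base URL
HEADERS = {"Authorization": "Bearer <token>"}      # hypothetical token-based auth

def get_topology():
    # Hypothetical endpoint exposing network and service topology via the NBI
    resp = requests.get(f"{BASE_URL}/topology", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def get_link_telemetry(link_id):
    # Hypothetical endpoint exposing per-link telemetry (latency, utilisation)
    resp = requests.get(f"{BASE_URL}/links/{link_id}/telemetry",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    topo = get_topology()
    print("Nodes discovered:", len(topo.get("nodes", [])))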
Annexure-5
Annexure-6
Software-Defined Networking (SDN) in IP/MPLS Networks:

Software-Defined Networking (SDN) is a network architecture approach that allows for
the centralized control and management of network resources through software-based
controllers, rather than relying on the traditional hardware-based, distributed network
control protocols. When applied in IP/MPLS (Internet Protocol/Multiprotocol Label
Switching) networks, SDN enhances flexibility, programmability, and automation of
network operations. IP/MPLS networks have been the backbone of carrier-grade
networks, offering scalability, reliability, and the ability to transport various types of
services (voice, data, video) over a unified network.

By integrating SDN with IP/MPLS, service providers can improve operational efficiency,
reduce costs, and enable faster service innovation and provisioning. SDN decouples the
control plane from the data plane in the MPLS network, centralizing network intelligence
and policy-based traffic management.

SDN Components in an IP/MPLS Network

An SDN-enabled IP/MPLS network consists of the following key components:

1. SDN Controller: This is the brain of the SDN architecture. The SDN controller centralizes
network control, making routing decisions and managing traffic flows based on the
network’s global state. In an IP/MPLS environment, it can control MPLS label distribution
and IP routing dynamically.
2. Data Plane: This consists of the forwarding devices, such as MPLS routers and switches,
that carry out the actual data forwarding based on the rules set by the controller. The
routers in MPLS networks use Label-Switching Routers (LSRs) and Label Edge Routers
(LERs) to handle traffic flows.
3. Southbound APIs: Protocols such as OpenFlow or NETCONF enable communication
between the SDN controller and the data plane devices. These APIs are responsible for
programming the forwarding tables in MPLS routers based on controller decisions.
4. Northbound APIs: These are interfaces between the SDN controller and applications or
orchestration platforms. They provide policy-based controls, traffic engineering, network
analytics, and service provisioning.
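A conceptual sketch (plain Python, with no specific controller product assumed) showing how these components relate: applications issue intents through the northbound API, the controller computes decisions from its global view, and forwarding rules are pushed to the data plane through a southbound driver such as NETCONF or OpenFlow. Class and method names are illustrative only.

class SdnController:
    """Toy controller skeleton illustrating the northbound/southbound split."""

    def __init__(self, southbound_driver):
        self.topology = {}                   # global network view (nodes, links)
        self.southbound = southbound_driver  # e.g. a NETCONF or OpenFlow driver

    # --- Northbound API: called by applications / orchestration platforms ---
    def request_path(self, src, dst, policy):
        path = self.compute_path(src, dst, policy)   # decision from global view
        self.program_path(path)
        return path

    # --- Control logic ---
    def compute_path(self, src, dst, policy):
        # Placeholder: a real controller would run CSPF / SR policy computation here
        return [src, dst]

    # --- Southbound: push forwarding state to LSRs / LERs ---
    def program_path(self, path):
        for node in path:
            self.southbound.push_rules(node, {"path": path})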

Functionalities of SDN in IP/MPLS Networks

1. Centralized Traffic Engineering and Control:

• In traditional MPLS networks, traffic engineering (TE) relies on distributed protocols like
RSVP-TE or Segment Routing for setting up Label-Switched Paths (LSPs). With SDN,
the controller has a global view of the network, enabling more efficient and dynamic traffic
engineering decisions.
• The SDN controller can optimize the placement of LSPs across the MPLS network based
on real-time traffic demand, congestion status, and Quality of Service (QoS)
requirements.
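A minimal sketch of the global-view path computation described above, using the open-source networkx library: links are weighted by measured delay, links without sufficient free bandwidth are pruned, and the controller selects the least-delay path for the LSP. The topology, station codes and values are illustrative only.

import networkx as nx

# Illustrative topology: (node_a, node_b, delay in ms, available bandwidth in Gbps)
links = [
    ("DLI", "BPL", 12, 40), ("BPL", "CSMT", 10, 10),
    ("DLI", "PRYJ", 8, 100), ("PRYJ", "CSMT", 15, 100),
]

def compute_lsp(links, src, dst, required_bw):
    g = nx.Graph()
    for a, b, delay, bw in links:
        if bw >= required_bw:          # prune links that cannot carry the demand
            g.add_edge(a, b, delay=delay)
    # Least-delay path across the remaining topology (global view)
    return nx.shortest_path(g, src, dst, weight="delay")

print(compute_lsp(links, "DLI", "CSMT", required_bw=40))
# -> ['DLI', 'PRYJ', 'CSMT'] since the BPL-CSMT link lacks the required bandwidth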

2. Dynamic MPLS Path Provisioning:

• SDN facilitates the dynamic establishment, modification, and teardown of LSPs based
on traffic conditions, user demands, and application requirements. The controller can
reroute MPLS traffic on-demand without relying on traditional protocol convergence
times, reducing latency and packet loss during network changes.

3. Network Slicing:

• SDN enables network slicing in MPLS networks, which allows the partitioning of the
MPLS infrastructure into multiple virtual networks, each with its own distinct traffic
policies and SLAs (Service Level Agreements). Each slice can be configured
dynamically by the SDN controller to accommodate different service requirements
(e.g., for IoT, 5G, or enterprise services).

4. Service Function Chaining (SFC):

• In IP/MPLS networks, SDN can be used to implement service function chaining, where
specific traffic flows are steered through a sequence of network services (such as
firewalls, load balancers, or DPI systems) before reaching their destination. The SDN
controller ensures that traffic follows the desired path while MPLS labels are used for
efficient forwarding.

5. Simplified Network Management and Automation:

• One of the core functionalities of SDN is the automation of network operations. In an
MPLS environment, SDN can automate tasks such as MPLS tunnel setup, label
distribution, bandwidth allocation, and monitoring. This automation reduces
operational complexity and the risk of human error, especially in large-scale networks.
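A minimal sketch of such configuration automation, under the assumptions that tunnel parameters are kept as structured data and that the device CLI follows a generic MPLS-TE style; the open-source Jinja2 templating library renders per-tunnel configuration which the controller would then push southbound. The template syntax shown is vendor-neutral and illustrative, not a specific OEM's CLI.

from jinja2 import Template

# Generic, vendor-neutral template -- actual CLI syntax depends on the OEM
TUNNEL_TEMPLATE = Template("""\
interface Tunnel{{ t.id }}
 description {{ t.name }}
 tunnel destination {{ t.dst }}
 tunnel mpls traffic-eng bandwidth {{ t.bw_kbps }}
""")

tunnels = [
    {"id": 101, "name": "CORE-DIV-1", "dst": "192.0.2.11", "bw_kbps": 500000},
    {"id": 102, "name": "CORE-DIV-2", "dst": "192.0.2.12", "bw_kbps": 250000},
]

for t in tunnels:
    print(TUNNEL_TEMPLATE.render(t=t))   # one configuration snippet per tunnel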

6. Seamless Multi-Layer Integration:

• SDN in IP/MPLS networks allows for better integration between different network
layers (IP, MPLS, Optical). The SDN controller can optimize resource allocation and
traffic management across these layers, ensuring efficient use of network resources
and improved QoS for end-to-end services.

7. Enhanced Security and Policy Enforcement:

• SDN allows for the centralized enforcement of security policies in MPLS networks. The
controller can dynamically apply security rules and manage firewalls, intrusion
detection systems, or access control mechanisms, based on traffic patterns. This
centralized policy management is more efficient than relying on distributed security
mechanisms.

8. Real-Time Network Analytics and Monitoring:

• SDN controllers can collect and analyze real-time data on traffic flows, network
congestion, and device performance in the IP/MPLS network. Based on this data, the
controller can make informed decisions on traffic routing and resource allocation,
improving overall network performance and user experience.
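A minimal sketch of periodic performance collection, assuming SNMP-enabled devices and using the open-source pysnmp library (classic hlapi) to read the 64-bit inbound octet counter of one interface; a streaming-telemetry collector would typically replace such polling in a production deployment. The address and community string are placeholders.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def read_if_in_octets(host, community, if_index):
    # Single synchronous GET of IF-MIB::ifHCInOctets for the given interface index
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),          # SNMPv2c; SNMPv3 preferred in practice
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index)),
    ))
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return int(var_binds[0][1])

# Placeholder address (RFC 5737 documentation range), illustrative only
print(read_if_in_octets("192.0.2.1", "public", if_index=1))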

9. Quality of Service (QoS) and SLA Management:

• SDN controllers can enforce QoS policies by dynamically adjusting MPLS label-
switched paths based on service-level agreements. The controller ensures that critical
services (e.g., voice, video) receive priority over less sensitive traffic, guaranteeing the
required bandwidth, low latency, and jitter.

Benefits of SDN in IP/MPLS Networks

Integrating SDN into IP/MPLS networks brings significant operational improvements,
including centralized traffic engineering, dynamic MPLS path management, enhanced
security, and simplified automation. This convergence empowers service providers to
offer more flexible, efficient, and scalable services while reducing costs and improving
network agility. SDN, therefore, represents a transformative approach to enhancing
traditional IP/MPLS networks and making them ready for future demands such as 5G,
IoT, and next-generation cloud services.

Architecture for SDN Controller of IP-MPLS.

1. The SDN controller should include a unified cloud architecture that integrates
management, control, and analysis. The SDN controller should have capability to
manage 2000 Nodes of IP-MPLS.
2. The SDN controller must support a unified portal to access all SDN components,
including device management, service provisioning, network optimization, and network
monitoring. The SDN controller must support multivendor devices through standard
protocols. License/software required for fault & performance management for
multivendor devices should be included in offer.
3. The SDN controller must provide a unified user authentication mechanism. Multiple
logins are not required when all SDN functional components are used.
4. All SDN components should support a unified user authorization mechanism. The SDN
system should be a complete system with uniform authentication to avoid jumping out
of different applications and ensure security.
5. The SDN controller should provide unified service monitoring capabilities in one service
window, including service DASHBOARD, service-related alarms, historical performance
curves, and OAM tools.
6. The SDN controller must provide unified deployment and installation tools and deploy
all components at a time.
7. The SDN controller must support RADIUS/LDAP authentication.
8. The SDN controller must support standard southbound protocols, including:
(a) Netconf
(b) PCEP
(c) BGP-LS
(d) BGP-SR
(e) OSPF
(f) ISIS
(g) SNMP
(h) Telemetry
(i) CLI
(j) SFTP-Client
(k) Telnet/SSH
9. The SDN controller must provide unified northbound interfaces to integrate with the
upper-layer OSS management system, orchestrator, and super controller. Multiple
components do not need to provide multiple northbound interfaces for interconnection.
10. The SDN controller must be deployed on VMs/dockers/Physical servers and should be
scalable to support the IP/MPLS devices/Nes.
11. The SDN controller must support the local active/standby protection mechanism.
12. The SDN controller must support the remote disaster recovery capability. The active
and standby controllers must support the database synchronization mechanism.
13. The SDN controller must have the full lifecycle management capability of the network,
including the following functions:
(a) Zero Touch device deployment
(b) Automatic service provisioning
(c) MPLS network optimization and optimal path computation
(d) Network traffic monitoring and analysis
(e) Network SLA and E2E service performance analysis
(f) IP network optimization
(g) Network assurance
(h) Configuration Management
(i) Fault management
(j) Network discovery and basic configuration management
14. The SDN controller should support automatic discovery of network devices and
connection relationships through protocols such as LLDP, and generate network
topology of the entire network.
15. The SDN controller should automatically discover the logical topology of the network
through protocols such as BGP-LS.
16. The SDN controller should support real-time network topology change update.
17. The SDN controller should support the display of L3 topology by region.
18. The SDN controller should support GIS map-based network topology display.
19. The SDN controller should support automatic combination of topology and inventory
information and display topology information on the WebUI.
20. The SDN controller inventory can be queried based on multiple criteria and reports can
be exported.
21. The SDN controller must be capable of hierarchical path view and visualization from
services to tunnels to links.
22. The SDN controller must support the graphical display of the entire network topology
and support subnet topology division and visualization.
23. The SDN controller should support the zero-touch function: automatically discover new routers on the network and automatically generate and deploy basic configurations (LSR ID, IP address, IGP, protocol, etc.) for new routers, so as to automatically add them and build the physical topology.
24. MPLS/SRv6 Path Computation and Network Optimization.
25. The SDN controller should be able to manage RSVP-TE LSPs, SR-TE LSPs, and SRv6
policies at the same time.
26. The SDN controller supports the use of link SRLG attributes, link affinity attributes, hop
limit and real measured link delays as constraints for optimal LSP calculation.
27. The SDN controller supports SR policy-based path computation based on packet loss
rate constraints/ BW congestion/Latency.
28. The SDN controller supports route computation based on real-time bandwidth and
reserved bandwidth.
29. The SDN controller supports the use of link/LSP bandwidths collected in real time, link
bandwidth and bandwidth priority as constraints for LSP calculation.
30. The SDN controller supports forcing specified LSPs to pass through specified nodes or links. The SDN controller can also force a specified LSP to bypass a specified node or link.
31. The SDN controller supports the establishment of different tunnels. The paths do not
traverse the same node, link, and SRLG.
32. The SDN controller supports the establishment of a pair of LSPs with disjoint paths.
(that is, the primary and secondary paths do not traverse the same node, link, or SRLG).
33. The SDN controller supports the establishment of a pair of bidirectional LSPs that use
the same path to ensure symmetric routes for upstream and downstream traffic.
34. The SDN controller supports path computation for inter-AS SR-TE, SR policy, and SRv6
policies.
35. The SDN controller can lock the preferred path to ensure that high-value services are
always on the optimal path.
36. The SDN controller supports LSP selection for multiple services based on SLAs.
37. The SDN controller provides a graphical interface to implement traffic policy-based
access to different tunnels.
38. The SDN controller supports batch tunnel import.
39. The SDN controller can deliver LSP paths for SR policies using BGP.
40. The SDN controller supports UCMP/ECMP traffic load balancing based on SR policies
and the primary and secondary LSPs of SR policies.
41. SRv6 policies support load balancing among multiple lists and intelligent generation and
adjustment of UCMP/ECMP weights.
42. SR Policy supports load balancing among multiple lists/links. When bandwidth
increases or original links are congested, a maximum of eight paths can be
automatically split.
43. The SDN controller supports inter-domain SR policy paths based on BSID stitching,
which reduces the depth of E2E path stacking.
44. The SDN controller supports slice-based latency topology by fulfilling the functionality of the IETF Network Slice Controller as described in YANG Data Model for the IETF Network Slice Service (draft-ietf-teas-ietf-network-slice-nbi-yang).
45. The SDN controller automatically restores the tunnel to the original path after the
bandwidth and delay of the original path are restored.
46. The SDN controller supports global path computation after network-wide LSPs are managed. After bandwidth constraints are modified, paths can be automatically recalculated.
47. The SDN controller supports manual global LSP re-optimization & manual local traffic
optimization on the GUI. The SDN controller can re-optimize specified tunnels on the
GUI.
48. The SDN controller supports switchover between automatic and manual optimization.
49. The SDN controller must support global re-optimization based on the network status
and performance data collected in real time.
50. When a node or link fails, the SDN controller supports automatic LSP re-optimization.
51. When the real-time bandwidth of a link exceeds the threshold, the SDN controller
supports automatic global LSP re-optimization.
52. When the real-time tunnel bandwidth change rate exceeds the threshold, the SDN
controller supports automatic LSP re-optimization.
53. The SDN controller can view the link, tunnel information, and optimization logs that
trigger optimization. You can also view details about historical optimization.
54. The SDN controller supports link+node hybrid optimization to ensure bandwidth.
55. The SDN controller supports path computation and optimization for PCEP-hosted primary and secondary TE LSPs.
56. The SDN controller supports re-optimization based on the packet loss rate and delay.
57. The SDN controller supports route computation and optimization for SRv6/SR-MPLS
Policy tunnels.
58. The SDN controller can set optimization policies, such as link bandwidth thresholds and
maximum latency.
59. The SDN controller supports route calculation and optimization based on the GIS map.
60. The SDN controller supports network path adjustment and optimization driven by
service delay and quality deterioration.
61. The SDN controller supports tunnel-level and global optimization policies.
62. The SDN controller supports path adjustment and optimization based on the Latency
deterioration of links.
63. The SDN controller supports setting and cancelling maintenance windows. For
scheduled maintenance jobs, a dialog box is provided to ask the user whether to start
or end maintenance.
64. The SDN controller supports maintenance windows - when a link/node needs to be
upgraded or maintained, the controller has the ability to re-optimize the global LSP and
bypass the link/node to be maintained.

65. Automatic service provisioning and management


66. The SDN controller supports E2E configuration of the following service types:
(a) RSVP-TE/SR-TE
(b) SRv6 Policy
(c) L3VPN
(d) EVPN
(e) VLL
(f) VPLS
67. The SDN controller supports model-driven VPN service deployment to define the
mapping from network services to device configurations.
68. The SDN controller supports the creation of QoS profiles, batch deployment of QoS
profiles on devices or interfaces, and quick deployment of QoS profiles and VPN
services.
69. The SDN controller supports template-based, Web-based & RESTCONF API based
tunnel creation and tunnel bandwidth, delay, and hot-standby configuration.
70. The SDN controller must support the creation of TE policies optimized by the user
specified intent including minimization of IGP/TE/Delay metrics that meets service SLA
requirements during service creation.
71. The SDN controller can automatically create a tunnel that meets service SLA
(bandwidth, such as the shortest path) requirements during service creation.
72. The SDN controller supports multiple path selection policies, including the shortest path,
load balancing, and minimum latency.
73. The SDN controller supports E2E service trail preview based on different route selection
policies. Users can manually select required trails.
74. The SDN controller supports service resource pool management, including
RD/RT/BD/EVI/EVPL/ServiceID/VNI resource pool management.
75. The SDN controller supports monitoring and visualization of service SLA performance, including delay, packet loss, jitter, and bandwidth, by using TWAMP and Y.1731, up to 2000 sessions.
76. The SDN controller supports service connectivity detection, including ping/tracert
diagnosis for L3VPN (including L3EVPN), EVPN, VLL, PWE3, and VPLS services and
tunnels
77. The SDN controller should provide a GUI to support batch configuration of BGP peers
78. The SDN controller must support batch configuration of RSVP-TE/SR-TE
79. The SDN controller supports 360-degree service monitoring on a unified interface,
including the service layer topology, centralized status display of service-related
resources, and automatic alarm and service association.
80. The SDN controller supports E2E slice deployment/capacity expansion and slice
monitoring topologies.

81. Network monitoring and performance analysis


82. The SDN controller must support real-time collection of network performance
information through SNMP.
83. The SDN controller must support real-time collection of network performance
information through Telemetry.
84. The SDN controller must support dynamic collection of link and tunnel data delay
information.
85. The SDN controller should support statistical report and visual display of network
performance, including packet loss, delay, jitter, and bandwidth utilization.
86. The SDN controller must support statistics reports and visualized display of QoS
queue/VPN service performance and quality, including packet loss, delay, and jitter.
87. The SDN controller should support the generation of alerts when performance exceeds
threshold limits.
88. The SDN controller Should support BW utilization & QoS monitoring.
89. The SDN controller should provide network performance analysis reports based on multiple resources (such as ring network, NE, board, port, link, and service).
90. The SDN controller should support 15-minute, hourly, daily, weekly, monthly, and yearly reports.
91. The SDN controller shall support BW Utilization monitoring for TE tunnels and provide
TE tunnel BW Utilization reporting: tunnel rates (minimum, maximum, and average)
92. The SDN controller should provide IP link traffic reporting: rate and bandwidth utilization.
93. The SDN controller should support the detection of service flow quality one by one to
demarcate faults.
94. The SDN controller should support the playback of historical SR tunnels, compare
historical KPIs, and quickly locate faults.
95. The SDN controller stores minute-level performance data for a maximum of 30 days
96. The SDN controller stores day-level performance data for a maximum of one year.
97. The SDN controller supports service quality detection (packet loss, jitter, and delay) to reconstruct the E2E path of service quality and diagnose the root cause.
98. The SDN controller supports L3VPN/L3EVPN, L2EVPN VPWS service quality
detection.
99. The SDN controller must support flow-based quality monitoring (packet loss, jitter, and
delay).
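As a simplified illustration of the flow-quality measurements referred to in items 75 and 97-99 above (actual deployments would use standards-based TWAMP/Y.1731 probes on the routers themselves), the sketch below measures round-trip delay, jitter and loss towards a UDP responder. The responder address and port are placeholders, and the method is a plain UDP echo, not TWAMP.

import socket, time, statistics

def probe(host="192.0.2.1", port=5000, count=20, timeout=1.0):
    """Send UDP probes and report loss, average RTT and jitter (illustrative only)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    for seq in range(count):
        start = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), (host, port))
        try:
            sock.recvfrom(64)                      # expect the responder to echo back
            rtts.append((time.monotonic() - start) * 1000.0)
        except socket.timeout:
            pass                                   # counted as a lost probe
    loss_pct = 100.0 * (count - len(rtts)) / count
    avg = statistics.mean(rtts) if rtts else None
    jitter = statistics.pstdev(rtts) if len(rtts) > 1 else None
    return {"loss_pct": loss_pct, "avg_rtt_ms": avg, "jitter_ms": jitter}

print(probe())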
-X-X-X-