2024-09-27 - Final Draft Report - Signed
MINISTRY OF RAILWAYS
Report on
IP-MPLS Routers and Suggestions for
Improvement / Changes based
on field experiences.
Signature Sheet
Digitally signed by RAKESH JAI VERMA, Date: 2024.09.27 +05'30'
Digitally signed by KISHORE PRASAD, Date: 2024.09.27 +05'30'

Sh. Dinesh Verma, ED/Telecom/RDSO
Sh. C. K. Prasad, PCSO/CR
INDEX
1. Abbreviations 5-6
2. Background 7
3. Terms of Reference 7
5. TOR Item No. 1 – Planning for CORE network and its connectivity with zones and other networks like CRIS etc. and the configuration of Routers in the CORE and at Edge Network 10-43
LIST OF FIGURES
S. No. Description Page No.
1. Figure 1 – ISIS Topology for SWR 15
5. Figure 5 – High Level Architecture for Data Centre Network and MPLS 29
1. Abbreviations:
SN Short Form Full Form
32. LFM Link Fault Management.
33. LLD Low Level Design
34. LSA Link-State Advertisement
35. LSI Label-Switched Interface
36. LSP Label-Switched Path (MPLS)
37. LSR Label-Switched Router
38. MF classifier Multifield classifier (a method for classifying traffic flows).
39. MPLS Multiprotocol Label Switching
40. NLRI Network Layer Reachability Information.
41. NSF Nonstop forwarding
42. NTP Network Time Protocol.
43. OAM Operation, Administration, and Maintenance.
44. P2P Point-to-point
45. PDU Protocol Data Unit
46. PE Provider Edge
47. PFE Packet Forwarding Engine.
48. pps packets per second
49. QoS Quality of Service.
50. RE Routing Engine
51. RIB Routing Information Base, also known as routing table
52. RR Route Reflector
53. SAFI Subsequent Address Family Identifier
54. SPF Shortest Path First
55. SR Segment Routing
56. SSO Stateful Switchover
57. VLAN Virtual LAN
58. VPLS Virtual Private LAN Service
59. VPN Virtual Private Network
60. VRF A routing instance of type Virtual Routing and Forwarding
2. Background:
In the 41st TCSC meeting, various issues related to the IP-MPLS backbone network for Indian Railways were taken up as agenda items. As per the recommendations of the 41st TCSC, Railway Board nominated a committee for “IP-MPLS routers and suggestions for improvement/changes based on field experiences”, vide letter no. 2024/Tele/9(3)/1 (3460739), dated 08.04.2024 (attached as Annexure-1).
3. Terms of Reference:
The Terms of Reference (TOR) for the committee as given by Railway Board are as under:
1. Planning for CORE network and its connectivity with zones and other networks like CRIS etc. and the configuration of Routers in the CORE and at Edge Network
5. Standardization of station LAN infrastructure for using common LAN for all network
services at stations
6. Management of IP-MPLS and LTE network, establishing all India Network Operation
Centre (NOC) along with its staffing and operations.
The committee examined the existing network, its pros and cons, its capacity constraints and, in particular, its obsolescence, and studied the role that recent IP-MPLS technology enhancements, supporting the evolution of circuit-switched transport to packet-switched networks, will play in IR's next-generation access and aggregation network infrastructure.
4. Executive Summary:
Railway communication was predominantly voice based, and hence a major part of the network was designed and implemented accordingly. Railways is adopting information technology and computerization on a large scale for its internal use and also for meeting passenger expectations. A number of mission-critical applications are being developed. This necessitates transformation of the existing network to deal with data and video with the same standards of reliability and availability as are being delivered for voice requirements. The transformation of the network will, first and foremost, involve identifying appropriate technologies that will meet future needs while also ensuring compatibility with existing equipment that has residual life. It must also be ensured that assets which are due for replacement, and the latest technology being adopted for their replacement, seamlessly integrate with this network and the service requirements of all users and applications. An important criterion is integration with the existing organizational structure and processes, and ease of adoption by the existing maintenance organization.
With mission-critical applications in use, both for train operation and for the working of various departments, one important issue that did not affect circuit-switched networks but shall determine the availability of future communication is network security and threat management. This means that the network, while being secure itself, shall integrate into a comprehensive network security and threat management system. There is now a necessity for the network to distinguish communication traffic in terms of importance, assign priorities and limit access while discerning users, passengers and intruders. The backbone of the communication system shall be the network, both long haul and short haul. It will have to transport various services, ranging from high-bandwidth latency-sensitive real-time applications, such as telepresence, CCTV feeds, Wi-Fi backhaul and LTE-R backhaul, to low-bandwidth applications, such as remote monitoring of S&T assets, control communication, e-tender, e-office and e-mail. Depending on the workflow and processes of various departments, the network shall provide seamless connectivity between various groups of users and servers/services offered from geographically diverse locations with the desired service level.
The world is witnessing an explosion of information driven by the Internet and smart devices such as IoT, made possible by the propagation of networks that handle voice, video and data seamlessly. While voice communication, using circuit-switched networks, started the communication revolution, data communication, using packet-switched networks for interconnecting computers and a myriad of devices, propelled its exponential growth. The need for ubiquitous connectivity has compelled the convergence of these two types of networks and the evolution of the necessary standards. IP-based networking has emerged as the de facto standard for adoption. Large-scale adoption of this standard has resulted in equipment being available commercially off the shelf, driving down cost to organizations.
Railway communication is constantly evolving to meet the Organization’s requirements. Internet and intranet connectivity is being extended to users across the length and breadth of the organization. Over-aged exchanges are progressively being replaced by IP exchanges. Specifications/TANs have been issued for the adoption of IP-based train traffic control communication. All these are reflective of the fact that Railways is also embracing IP-based technologies.
Three major technologies are evolving in communication, each having its own strengths, viz. Carrier Ethernet, MPLS-TP and IP-MPLS. Considering the various factors elucidated above in the context of the Railways' future communication needs, the choice of technology is IP-MPLS. Hence, the adoption of IP-MPLS technology as the backbone for Indian Railways is recommended. The detailed requirements of Indian Railways that necessitate the adoption of IP-MPLS as the unified communication backbone are covered in this report.
5. TOR Item No. 1:
Planning for CORE network and its connectivity with zones and other networks like CRIS etc. and the configuration of Routers in the CORE and at Edge Network
In the realm of Railway networks, operators are grappling with the imperative to deliver
advanced services that can swiftly adapt to evolving passenger demands and
requirements. Emerging trends such as the integration of high-speed Rail services,
increasing passenger volumes, adoption of smart railway technologies, and the need
for seamless connectivity require unprecedented flexibility, scalability, and efficiency
from the network infrastructure. Moreover, there is a growing pressure to optimize
operational costs amidst rising demands for enhanced services and customer
expectations.
Railway Access and Aggregation solutions have progressed from traditional systems to integrated IP-MPLS architectures to address these challenges. The IP-MPLS Unified Communication Backbone will be used for vital signalling applications such as Electronic Interlocking (E.I.), Datalogger, BPAC, UFSBI etc.; for telecom applications such as LTE, voice, data & video, SCADA, TPC/Remote Control, VoIP-based TCCS, VoIP-based exchanges, VSS/CCTV, Wi-Fi, Integrated Passenger Amenities, UTS/PRS/FOIS, and Loco Pilot/Asst. Loco Pilot & Guard communication with the Station Master through radio communication; and for real-time applications such as TCAS/Kavach of Safety Integrity Level 4 (SIL-4) standards, Centralized Traffic Control (CTC) and other such vital futuristic safety applications.
Communication in Railways
5.1 Operational Communication
These are circuits provided for the safe and punctual operation of trains. Hence, they
cover train traffic management, crew management and various aspects related to train
operations at stations.
Section Control is a communication system that is used for effecting precedence and
crossing in train operations. It is a unique conference call where the section controller
is always off-hook. Any of the stations in the section can lift the phone handset and
get connected to the conference call. Further, only the section controller is provided
with the facility to call a station. This communication is geographically spread over all
the stations located in that section. The circuit originates at the Divisional HQ.
5.1.2 TPC/TLC/EC
Traction Power Control (TPC) is used for the operation and maintenance of traction power distribution and OHE, while Traction Loco Control (TLC) is used for loco interception, power change and crew management. Emergency Communication (EC) is the communication system used in the event of any emergency; it provides a means for authorized personnel to talk to a person in the divisional control from anywhere in the block section of the controlled section. This is done using a special phone called the Portable Control Phone (PCP), available only with authorised railway personnel like guards, drivers etc. Hence this communication is of vital importance in the event of accidents/unusual occurrences.
This communication is geographically spread over all the stations located in the
division. The circuit originates at the Divisional HQ.
The absolute block system ensures spatial segregation of trains in a section of track. This is one of the most important circuits, used in conjunction with block instruments for the movement of trains between adjacent block stations. These are point-to-point dedicated voice circuits and shall, in the foreseeable future, continue unchanged. The block communication is an exclusive point-to-point communication that connects two adjacent stations.
The level-crossing gates are provided with a telephone so that the gateman can talk to the station master. The station master exchanges information with regard to train movements in the block section. This is required for coordination of gate control. The gate telephone works on a nominated quad of the 6-Quad cable laid between stations. There is a point-to-multipoint circuit that connects the station to the gated LCs that are within the control of the station. The geographical spread is typically within a block section on either side of a station.
5.1.5 BPAC
Communication circuits are also required for the purpose of Block Proving by Axle Counters (BPAC). This system is used to eliminate the requirement of a manual check for the last vehicle clearing the block section. It is one of the most important circuits; it enhances line capacity and is very vital for train working. The BPAC circuits work on the 6-Quad cable and are point-to-point circuits connecting adjacent BPAC equipment, spread over a block section.
Data loggers are used for logging relay states at the stations. The datalogger communication system is used to gather datalogger data at a central location for storage and analysis. The conventional network, as per the extant RDSO scheme, is a non-IP network arranged as a daisy chain connecting the dataloggers provided at each station and the Divisional HQ. This arrangement requires each datalogger to act as a transport node for data on the network in addition to its primary functions of event logging, alarm generation etc. The geographical spread is usually over the Division.
All drivers and guards are provided with VHF handheld sets, and stations are provided with VHF sets, enabling communication among them. These sets work on a nominated frequency. In the event of a disruption to wireline communications at a station, this is used as a standby.
Railways has a telephony network that spans all the zones and divisions. It also covers most of the important sub-divisional locations. Telephone exchanges are provided at each of these locations. An STD network has also been established interconnecting all the zonal and divisional exchanges using an IP-based Next Generation Network (NGN). Many of these exchanges are already VoIP exchanges, and many others are being replaced by VoIP exchanges. Remote subscribers of these exchanges at various stations are connected using Primary Drop-Insert (PD) Muxes, which in turn are networked through SDH equipment.
5.3.1 UTN
Unified Ticketing Network (UTN) is the ticketing network utilised to issue tickets, both reserved and unreserved, to Railway passengers. It spans most of the stations on IR today over a separate IP network. The WAN interfaces of the UTN use E1 circuits built through Railways' SDH equipment provided at the stations, as well as hired E1 channels from BSNL/MTNL.
5.3.2 FOIS Network
This is also a separate IP network used for operational purposes. Although initially created for the Freight Operation Information System, today this network also provides connectivity to users for other management applications, viz. Parcel Management System, Control Office Application, Crew Management System etc.
This network connects important freight/commercial stations to central servers located at various locations, including Zonal/Divisional HQs and CRIS at Delhi. The WAN interfaces of this network also use E1 circuits built through the SDH equipment provided at the stations, as well as hired E1 channels from BSNL/MTNL.
5.3.3 RC/SCADA
The SCADA/RC circuits are dropped for the operation of traction power distribution and control. The SCADA control is located in the divisional office in RE territories. These circuits are used for the operation of the switchgear available at traction substations, sectioning posts, sub-sectioning posts, ATs etc. These are low-bandwidth real-time applications that are critical for the operation of traction switchgear and hence critical for train operation in RE areas.
The SCADA system is located at the Divisional HQ. Connectivity on the existing network is provided through VF channels built over PD Muxes at the nearest station, which in turn use E1 circuits built over the SDH network; the last mile is through 6-Quad/PIJF cables. Of late, this network is being upgraded to an IP-based network on a scheme similar to the UTN/FOIS networks.
Nowadays, video surveillance at stations has become very important due to security and safety considerations. The requirements of the surveillance network are very different from those of other networks. It is a high-speed, high-bandwidth network, and a lot of data travels from the station cameras to the storage and monitoring locations. With many stations being provided with surveillance systems, the Railways' backbone network capacity will have to be enhanced multi-fold to cater for these requirements.
5.3.5 Railnet
Railnet is the enterprise wide area network of Indian Railways. It is a general-purpose network to serve the administrative and management requirements of Railways as well as to connect to the Internet. It is available to all the officers and supervisors in the field and has become a vital component of day-to-day working.
It is engineered as an L3VPN using the MPLS architecture of RCIL. The network spans all the Zonal Railway HQs, Divisional Railway HQs, production units, workshops, training institutes, RDSO, Railway Board, stores depots, important stations etc. Further, it will have to increase its reach to almost all the stations in the near future. Hence, the technology for the future should be chosen so as to satisfy this need.
Railway Display Network is another network now being worked upon by RCIL. This network is supposed to provide live content for displays on the platforms. The content shall include train information, public warnings/announcements, advertisements etc. As live content shall be beamed from a central location, it will also demand a lot of bandwidth.
Railways has started providing free Wi-Fi services to its passengers at stations. A separate TCP/IP Wi-Fi network is available at the stations to cater for this requirement. In the foreseeable future, this service shall have to be extended to a separate wireless communication network for the staff of the various departments working at a railway station for operation and maintenance, replacing the VHF handheld sets being used presently along with all their associated limitations. These Wi-Fi enabled handsets can also run apps.
CRIS and RCIL are developing a number of mission-critical applications for the Railways. This will result in the computerization of all departments, necessitating connectivity to all the offices of the Railways, with all the staff being users of the network.
5.3.9 LTE-R
LTE for Railways has been sanctioned for providing communication from running trains. KAVACH data will also use LTE-R in future. The IP-MPLS network will also work as the backbone network carrying LTE data.
5.4 From the above it can be concluded that the services needed cover voice, video and data, spanning the spectrum from bandwidth-intensive latency-sensitive real-time applications to low-bandwidth latency-tolerant applications. Also, while voice is a significant part of Railway communications as on date, it is obvious that, in keeping with the worldwide trend, Railway communication is also set to experience exponential growth of data traffic.
From the above applications it must be noted that a wayside station needs connectivity to adjacent stations, LC gates, IBS, sub-divisional locations, Divisional HQ, Zonal HQ, regional locations (PRS/UTS), and CRIS/RCIL data centres. It can now be appreciated that at any station, across the length and breadth of our country, a diversity of services is to be extended to geographically dispersed locations. This gives an insight into the routing complexity and connectivity requirements.
Planning & engineering of the IP-MPLS network for Indian Railways involves
considering various factors. Important factors for MPLS Control and Management
Plane High Level Design are detailed below:
5.5.1 IGP design
(i) ISIS
IS-IS is a link-state protocol that uses a least-cost algorithm to calculate the best
path for each network destination.
It uses link-state information to make routing decisions based on a shortest-path-
first (SPF) algorithm.
In IS-IS, a single routing domain can be divided into smaller groups referred to as
areas. Routing between areas is organized hierarchically, allowing a domain to be
divided administratively into smaller areas. IS-IS accomplishes this organization by
configuring Level 1 and Level 2 ISs. Level 1 systems route within an area, and Level
2 ISs route between areas and toward other ASs. A Level 1 and Level 2 system
routes within an area on one interface and between areas on another.
All the LSRs and zone boundary routers will be part of IS-IS Level 2, and all routers between LSRs will be part of IS-IS Level 1.
(ii) ISIS Metrics
All IS-IS interfaces have a cost, which is a routing metric used in the IS-IS link-state calculation. Routes with lower total path metrics are preferred over those with higher path metrics. By default, an IS-IS interface has a metric of 10.
Normally, IS-IS metrics can have values up to 63, and by default the total path metric is limited to 1023. This metric range is insufficient for large networks and provides too little granularity for traffic engineering, especially with high-bandwidth links. So, in this network, the wide-metrics-only configuration will be applied to the IS-IS protocol. With wide metrics, the metric range is from 1 to 16,777,214.
The cost of a route is described by a single dimensionless metric that is determined using the following formula: cost = reference-bandwidth / bandwidth. For example, if the reference bandwidth is set to 1 Gbps (that is, reference-bandwidth is set to 1,000,000,000), a 100 Mbps interface has a routing metric of 10. With the reference bandwidth set to 1000 Gbps, as recommended for this network, 10 Gbps interfaces will have a metric of 100 and 1 Gbps interfaces a metric of 1000.
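The metric arithmetic above is simple enough to verify directly; a minimal Python sketch, assuming the 1000 Gbps reference bandwidth recommended later in this section:

REFERENCE_BW = 1000e9  # 1000 Gbps reference bandwidth, per the ISIS recommendations

def isis_metric(link_bw_bps: float) -> int:
    # Wide metrics allow values well beyond the narrow 1-63 range.
    return max(1, round(REFERENCE_BW / link_bw_bps))

for name, bw in [("100G", 100e9), ("10G", 10e9), ("1G", 1e9)]:
    print(f"{name}: metric {isis_metric(bw)}")
# 100G: metric 10, 10G: metric 100, 1G: metric 1000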
In an IS-IS network, a loop-free alternate (LFA) is a directly connected neighbour that provides pre-computed backup paths to destinations reachable through the protected link on the point of local repair (PLR). The primary goal of the remote LFA is to increase the backup coverage for the IS-IS network and provide protection within the rings.
To calculate the remote LFA backup path, the IS-IS protocol determines the remote LFA node in the following manner (a simplified sketch follows these steps):
1. Calculates the reverse shortest path first from the adjacent router across the
protected link of a PLR. The reverse shortest path first uses incoming link metric
instead of outgoing link metric to reach a neighbouring node. The result is a set
of links and nodes, which is the shortest path from each leaf node to the root
node.
2. Calculates the shortest path first (SPF) on the remaining adjacent routers to find
the list of nodes that can be reached without traversing the link being protected.
The result is another set of links and nodes on the shortest path from the root
node to all leaf nodes.
3. Determines the common nodes from the above results. These nodes are the
remote LFAs.
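The following simplified Python sketch illustrates the three steps above on a hypothetical five-node ring; the topology, node names and metrics are assumptions, and real implementations apply additional metric constraints when selecting the remote LFA:

import heapq

def spf(graph, src, skip_edge=None, reverse=False):
    # Dijkstra shortest-path distances from src.
    # skip_edge: the protected link, excluded in both directions.
    # reverse: use incoming link metrics (the reverse SPF of step 1).
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u].items():
            if skip_edge and {u, v} == set(skip_edge):
                continue
            w = graph[v][u] if reverse else cost
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical five-node ring; the PLR protects its link to N1.
ring = {
    "PLR": {"N1": 10, "N4": 10},
    "N1": {"PLR": 10, "N2": 10},
    "N2": {"N1": 10, "N3": 10},
    "N3": {"N2": 10, "N4": 10},
    "N4": {"N3": 10, "PLR": 10},
}
protected = ("PLR", "N1")

# Step 1: reverse SPF from the neighbour across the protected link
# (nodes from which N1 is reachable without the protected link).
q_space = spf(ring, "N1", skip_edge=protected, reverse=True)
# Step 2: SPF from the PLR over its remaining adjacencies
# (nodes the PLR can reach without traversing the protected link).
p_space = spf(ring, "PLR", skip_edge=protected)
# Step 3: the common nodes are the remote LFA candidates.
print(sorted((set(p_space) & set(q_space)) - set(protected)))  # ['N2', 'N3', 'N4']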
With ‘wide-metrics’ enabled in IS-IS, all L1 routes are automatically exported into the L2 database. Loopback addresses of other section routers will be redistributed into the IS-IS L1 database via an IS-IS export policy on the IS-IS L1/L2 routers, i.e. the LSR routers.
(v) ISIS Recommendations
• LSR junction routers and zone border routers will be in IS-IS Level 2, whereas all other routers (LERs Type I & II) will be in IS-IS Level 1.
• To keep routing information in the IGP to the minimum needed, all interfaces are configured as passive by default, with backbone interfaces exempted so that they can form neighbourships.
• Include only basic routing reachability information in the IGP (loopbacks and interface routes). /31s are usual for p2p links.
• Tuning a maximum IGP prefix-export-limit for external routes can be considered a best practice.
• Manually configure router-ids for better troubleshooting.
• Plan IS-IS area address assignment across the Zonal Railway network.
• Extended/wide metrics are considered a best practice. Ensure the reference bandwidth is set to 1000 Gbps.
• Configure Ethernet media interfaces as point-to-point where possible, to avoid unnecessary IGP resource utilization and to speed up neighbour convergence.
• Enable MD5 authentication on L1 and L2 point-to-point links for IS-IS.
• In the IS-IS case, increase the LSP lifetime to the maximum limit of 65,535 seconds to avoid unnecessary CPU consumption.
• Enable BFD on IGP adjacencies in the backbone. The recommended values are a tx_interval of 30 ms and a multiplier of 4 (see the sketch after this list). These are proposed values, subject to change as per requirements or field behaviour during testing.
• Use remote LFA for IS-IS to establish a precomputed backup path for quick failover when the main path fails.
• The recommended SPF algorithm parameters:
  • delay – 200 ms
  • rapid-runs – 5
  • hold-down – 10000
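As a quick illustration of what the proposed BFD values imply, a small Python calculation using the figures from the list above:

tx_interval_ms = 30  # proposed BFD transmit interval
multiplier = 4       # consecutive missed hellos before the session is declared down
print(tx_interval_ms * multiplier)  # 120 ms worst-case failure detection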
Figure 2 - BGP RR Logical Topology
Internal BGP (IBGP) is generally used to peer with neighbours in the same AS; EBGP is used when a node peers with a neighbour outside its AS. To avoid routing loops, IBGP does not advertise routes learned from an internal BGP peer to other internal BGP peers. For this reason, BGP cannot propagate routes throughout an AS by passing them from one router to another. Instead, BGP requires that all internal peers either be fully meshed, so that any route advertised by one router is advertised to all peers within the AS, or use scaling mechanisms such as route reflectors or confederations.
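The scaling argument for route reflectors can be made concrete with a short Python count; the 50-router figure is purely hypothetical:

def full_mesh_sessions(n: int) -> int:
    # Every router peers with every other router.
    return n * (n - 1) // 2

def rr_sessions(n_clients: int, n_rrs: int = 2) -> int:
    # Each client peers with every RR; the RRs are meshed among themselves.
    return n_clients * n_rrs + full_mesh_sessions(n_rrs)

n = 50  # hypothetical number of IBGP speakers in one zone
print(full_mesh_sessions(n))   # 1225 sessions for a full mesh
print(rr_sessions(n - 2, 2))   # 97 sessions with two route reflectors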
(ii) BGP Network Reachability Information
The NLRI for IPv4 unicast, IPv6 unicast, VPNv4, VPNv6, IPv6 labelled unicast and L2VPN/EVPN signalling will be enabled in the network based on current requirements.
▪ Segregate types of BGP peers & address families into different BGP groups for better performance and manageability.
▪ Enable precision-timers. The default hold time is 90 seconds, meaning that the default frequency for keepalive messages is 30 seconds. More frequent keepalive messages and shorter hold times might be desirable in large-scale deployments with many active sessions (see the sketch after this list).
▪ RT Constrained Route Distribution is a feature that can be used by service providers in Multiprotocol Label Switching (MPLS) Layer 3 VPNs to reduce the number of unnecessary routing updates that route reflectors (RRs) send to Provider Edge (PE) routers. The reduction in routing updates saves resources by allowing RRs, Autonomous System Boundary Routers (ASBRs), and PEs to carry fewer routes.
▪ Enable BGP router and route authentication across all peers.
▪ Every BGP session must be protected via a multi-hop BFD session.
▪ Ensure a unique route-distinguisher is configured for each VPN. The recommended format for the route-distinguisher configuration is #loopback:#value. The route-distinguisher has local significance, unlike the route-target.
▪ A log message is generated whenever a BGP peer makes a state transition.
▪ Multiple types of traffic share a data connection, with some traffic requiring priority over others.
▪ Uptime is important; key locations have multiple connections so that alternative paths always exist.
▪ Network congestion occurs sometimes on some connections.
▪ New sites will need to be connected to many different locations, while being entirely invisible to many other sites on the network.
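A trivial Python note on the timer relationship mentioned in the precision-timers point above; keepalives conventionally run at one third of the hold time:

def keepalive_s(hold_time_s: int) -> int:
    return hold_time_s // 3

print(keepalive_s(90))  # 30 s, the default pairing cited above
print(keepalive_s(30))  # 10 s, a tighter pairing for large-scale deployments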
Segment Routing is the latest advanced MPLS technology; it is in the process of replacing the traditional LDP and RSVP-TE protocols by bringing label distribution and traffic engineering under one umbrella, achieved only via link-state IGP/BGP protocols.
• ACI (VXLAN) to SR/MPLS handoff, to use a single data-plane protocol on the DC-PE and the transport network.
Figure 3 - Unified Data Centre Routing Transport
5.5.4 L3VPN Design
Virtual private networks (VPNs) are private networks that use a transport network to connect two or more remote sites. Instead of dedicated connections between networks, VPNs use virtual connections that are routed (tunnelled) through the transport network.
The primary goal of an MPLS VPN is to provide connectivity between tenant CEs that are attached to different PEs. L3VPN services are used in the network to provide Layer 3 connectivity between remote sites and will constitute the full forwarding path that all Layer 3 traffic should take. The default behaviour of PE routers in a VPN is to advertise all VPN routes to peers configured with the address-family vpnv4; it is up to the receiving PE to decide which routes to keep and which to ignore based on the policies of its local VPNs.
A route distinguisher is a locally unique number that identifies all route information for a particular VPN. A route target (RT) is also a 64-bit value; it identifies the final egress PE device for customer routes in a particular VRF and enables complex sharing of routes. The route target defines which routes are part of a VPN. A unique route target helps distinguish between different VPN services on the same router.
The concept of a VRF (virtual routing and forwarding table) distinguishes the routes of different customers, as well as customer routes from provider routes, on the PE device. A separate VRF table is created for each VPN that has a connection to a CE router. The VRF table is populated with routes received from directly connected CE sites associated with the VRF instance, and with routes received from other PE routers in the same VPN.
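A hedged, minimal Python model of this import behaviour; the VRF name, AS number and prefixes are hypothetical, and the RD strings follow the #loopback:#value format recommended earlier:

from dataclasses import dataclass, field

@dataclass
class VpnRoute:
    rd: str                 # route distinguisher in #loopback:#value format
    prefix: str
    route_targets: set = field(default_factory=set)

@dataclass
class Vrf:
    name: str
    import_rt: str
    table: list = field(default_factory=list)

    def maybe_import(self, route: "VpnRoute") -> None:
        # The receiving PE keeps only routes carrying a matching route-target.
        if self.import_rt in route.route_targets:
            self.table.append(route)

vrf = Vrf(name="SCADA", import_rt="target:64512:100")   # hypothetical values
advertised = [
    VpnRoute("10.0.0.1:100", "172.16.1.0/24", {"target:64512:100"}),
    VpnRoute("10.0.0.1:200", "172.16.2.0/24", {"target:64512:200"}),  # another VPN
]
for r in advertised:
    vrf.maybe_import(r)
print([r.prefix for r in vrf.table])  # only 172.16.1.0/24 is imported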
A single per-VPN label is advertised for the VRF, as opposed to one label per prefix. This makes troubleshooting easier.
The VPN concept is global, whereas a VRF is a local instance at a specific PE.
An IPv6 Provider Edge router (6VPE) tunnels IPv6 traffic over an IPv4 tunnel using MPLS as the transport medium. The main advantage of the 6VPE approach over using dual-stack tunnels is that it avoids the provisioning overhead of configuring IPv6 addresses on all core links and nodes.
The hosts, CEs, and PEs, are dual-stack devices that support both IPv4 and IPv6.
The PE assigns MPLS labels to IPv6 prefixes, and this information is sent via iBGP
to the RRs.
The chassis supports the pseudowire type that utilizes CEM transport: Structure-Agnostic TDM over Packet (SAToP). L2VPN over IP/MPLS is also supported on the interface modules.
As explained in the previous chapter, the concept of a VRF (virtual routing and
forwarding table) distinguishes the routes for different customers, as well as
customer routes from provider routes on the PE device.
Routing policies allow control of the routing information exchanged between the routing protocols and the routing tables, and between the routing tables and the forwarding table. Route-maps/prefix-lists allow you to control which routes the routing protocols store in, and retrieve from, the routing table.
One main role of the community attribute is to serve as an administrative tag value used to associate routes together. Generally, these routes share some common properties, though that is not required. Communities are a flexible tool within BGP to manipulate and control routing exchange. Communities will be used in the routing policies to match the corresponding group of routes and apply a route condition.
(b) VRF Routing Summary:
• The CE advertises all its routes to the PE, independent of what protocol is used
between them.
• The VRF import policy allows routes matching the VRF-target.
• Communities will be applied for the prefixes from different sites in different
functions.
5.5.8 Interfaces
(i) LAG
IEEE 802.3ad link aggregation enables Ethernet interfaces to form a single link-layer interface. The link aggregation group (LAG) balances traffic across the member links within an Ethernet bundle, effectively increasing the uplink bandwidth. Another advantage of a LAG is increased availability: because it is composed of multiple member links, if one member link fails, the LAG continues to carry traffic over the remaining member links.
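How a LAG spreads flows can be illustrated with a simple hash sketch in Python; the member-link names are hypothetical, and real devices use vendor-specific hash fields:

import zlib

MEMBER_LINKS = ["xe-0/0/0", "xe-0/0/1", "xe-0/0/2", "xe-0/0/3"]  # hypothetical bundle

def pick_member(src_ip, dst_ip, proto, src_port, dst_port):
    # Hash the flow's 5-tuple so all packets of one flow stay on one link
    # while different flows spread across the members.
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return MEMBER_LINKS[zlib.crc32(key) % len(MEMBER_LINKS)]

print(pick_member("10.1.1.1", "10.2.2.2", 6, 49152, 179))
# If a member fails, flows re-hash across the remaining links.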
(ii) LACP
LACP is part of the IEEE 802.3ad specification and allows you to bundle several physical ports to form a single logical channel. When you change the number of active bundled ports on a port channel, traffic patterns will reflect the rebalanced state of the port channel.
LACP is one method of bundling several physical interfaces to form one logical interface. An aggregated Ethernet interface can be configured with or without LACP enabled. LACP provides:
• Automatic addition and deletion of individual links to the bundle without user
intervention
• Link monitoring to check whether both ends of the bundle are connected to the
correct group
Enabling LACP also helps detect hard and soft failures of the link, as well as misconfiguration of the member links.
(iii) LLDP
LLDP advertises device information to directly connected neighbours, such as:
• System name and description
• Port name and description
• VLAN name and identifier
• IP network management address
• Capabilities of the device (for example, switch, router, or server)
• MAC address and physical layer information
• Power information
• Link aggregation information
Enable LLDP on all LSR and LER backbone interfaces.
(iv) Interface damping
When the link between a router interface and the transport devices is not stable, it can lead to periodic flapping. For shorter physical interface transitions, interface damping is configured with the hold-time statement on the interface. The hold timer enables interface damping by not advertising interface transitions until the hold-time duration has passed. When a hold-down timer is configured and the interface goes from up to down, the down hold-time timer is triggered, and every interface transition that occurs during the hold time is ignored. The actual value can be adjusted based on field results and the transmission protection available.
Physical interface damping limits the advertisement of up and down transitions (flapping) on an interface. An unstable link between a router interface and the transport devices can lead to periodic flapping. Longer flaps occur with a period of about five seconds or more, with an up-and-down duration of one second. For these longer periodic interface flaps, interface damping is configured with the damping statement on the interface. This damping method uses an exponential back-off algorithm to suppress interface up and down event reporting to the upper-level protocols. Every time an interface goes down, a penalty of 1000 is added to the interface penalty counter. If at some point the accumulated penalty exceeds the suppress threshold, the interface is placed in the suppressed state, and further interface up and down transitions are not reported to the upper-level protocols until the penalty decays below the reuse threshold.
The default values are:
• half-life-period: 5 sec
• max-suppress: 20 sec
• reuse-threshold: 1000
• suppress-threshold: 2000
These timers will be tuned to requirements during the testing phase.
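A hedged Python simulation of this exponential back-off, using the default values listed above; the flap timing is illustrative only:

import math

HALF_LIFE_S = 5.0
PENALTY_PER_FLAP = 1000
SUPPRESS_THRESHOLD = 2000
REUSE_THRESHOLD = 1000

def decay(penalty: float, elapsed_s: float) -> float:
    # The penalty halves every HALF_LIFE_S seconds.
    return penalty * math.exp(-math.log(2) * elapsed_s / HALF_LIFE_S)

penalty, suppressed, last_t = 0.0, False, 0.0
for t, event in [(0.0, "flap"), (1.0, "flap"), (2.0, "flap"), (12.0, "check")]:
    penalty = decay(penalty, t - last_t)
    last_t = t
    if event == "flap":
        penalty += PENALTY_PER_FLAP
    if not suppressed and penalty > SUPPRESS_THRESHOLD:
        suppressed = True    # transitions no longer reported upstream
    elif suppressed and penalty < REUSE_THRESHOLD:
        suppressed = False   # state reporting resumes
    print(f"t={t:5.1f}s penalty={penalty:7.1f} suppressed={suppressed}")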
(v) MTU
Each MPLS label adds 4 bytes to the packet, so the MPLS MTU must accommodate the label stack:
• MPLS MTU = IP MTU + (<number of labels> x 4 bytes)
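A small worked example in Python of the rule above, assuming a 1500-byte IP MTU and a three-label stack (for example transport, service and backup labels):

LABEL_BYTES = 4  # each MPLS shim header is 4 bytes

def required_mpls_mtu(ip_mtu: int, label_count: int) -> int:
    return ip_mtu + label_count * LABEL_BYTES

print(required_mpls_mtu(1500, 3))  # 1512 bytes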
This section covers design principles and high-level design details for implementing class-of-service mechanisms and methods to provide proper treatment to the various traffic types traversing network devices. In order to ensure proper end-to-end treatment through the MPLS network, the QoS markings must be mapped into the available classes.
When a network experiences congestion and delay, some packets must be dropped. Class of service (CoS) enables traffic to be divided into classes and offers various levels of throughput and packet loss when congestion occurs. This allows packet loss to happen according to configured rules.
For interfaces that carry IPv4/IPv6 and MPLS traffic, CoS provides multiple classes of service for different applications. The router can be configured with multiple classes for transmitting packets, defining which packets are placed into each output queue, scheduling the transmission service level for each queue, and managing congestion using a random early detection (RED) algorithm.
The network shall provide dedicated transport for video, voice, and data services. These services/applications place different requirements upon the network in terms of the behaviour of the packets as they traverse the network from source to destination. These behaviours are often expressed in terms of three basic attributes: loss, latency and jitter, and the behaviours are independent of any particular application. Therefore, traffic from apparently unrelated applications may require the same behaviour and should, therefore, be treated identically.
The following forwarding classes are proposed:
• Internet Default
• Default
• Gold
• Silver
• Bronze
• Management
• Platinum
• Diamond
Traffic Policing: Traffic policing allows you to control the maximum rate of traffic
sent or received on an interface, and to partition a network into multiple priority
levels or class of service (CoS).
Any CoS implementation must work consistently end to end through the network.
CoS features interoperate with other vendors’ CoS implementations because they
are based on IETF Differentiated Services (DiffServ) standards. The purpose of
a CoS configuration is to define the forwarding treatment of a packet at each router
(hop) along a forwarding path toward a destination.
The following table presents the COS for the Zonal Railway services. The values
recommended as TX bandwidth are based on best practices, Railways to decide
on the actual values based on their requirements. These values can be tuned
during the implementation and testing phase.
Service | Class | Priority | TX Bandwidth (%) | DSCP/EXP
Services Type 1 (Video) VSS | Silver | low | 40 | AF31/3
Services Type 2 (RailNet) | Bronze | low | 10 | AF21/4
Network Management | Management | high | 2 | NC/5
Wi-Fi/Public hotspot | Default | low | 10 | AF12/1
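The same plan can be expressed as a small Python mapping for illustration; the values mirror the table and remain recommendations to be tuned during implementation and testing:

COS_PLAN = {
    "VSS (Video)":          {"class": "Silver",     "priority": "low",  "tx_pct": 40, "marking": "AF31/3"},
    "RailNet":              {"class": "Bronze",     "priority": "low",  "tx_pct": 10, "marking": "AF21/4"},
    "Network Management":   {"class": "Management", "priority": "high", "tx_pct": 2,  "marking": "NC/5"},
    "Wi-Fi/Public hotspot": {"class": "Default",    "priority": "low",  "tx_pct": 10, "marking": "AF12/1"},
}

def classify(service: str) -> dict:
    # Unknown services fall back to best-effort treatment.
    return COS_PLAN.get(service, {"class": "Default", "priority": "low"})

print(classify("VSS (Video)")["marking"])  # AF31/3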
ACI provides consistent policy, automation, telemetry, intelligent service chaining, and ease of operations to geographically distributed central, regional, and edge telecom data centres. The advantages include automation and consistent policy across the data-centre and transport domains. Telecom service providers are using Segment Routing (SR) Multi-Protocol Label Switching (MPLS) in the transport domain; accordingly, a high-level architecture for the data centre, as per the figure below, is proposed.
Figure 5 - High Level Architecture for Data Centre Network and MPLS
Key Considerations:
● Security: Robust security measures are essential to protect sensitive data transmitted
over the network. This includes encryption, access control, and intrusion
detection/prevention systems.
● QoS: QoS mechanisms should be implemented within the MPLS network to prioritize
mission-critical traffic and ensure consistent performance for different service types.
This could involve techniques like traffic shaping, differentiated services (DiffServ),
and resource reservation protocol (RSVP).
● PTP Support: PTP ensures precise time synchronization across the network, crucial
for applications like location-based services and financial transactions. This can be
achieved through dedicated PTP servers strategically placed within the network.
Traffic Classes are used internally for determining fabric priority and as the match
condition for egress queuing. QoS Groups are used internally as the match criteria for
egress CoS header re-marking. IPP/DSCP marking and re-marking of ingress MPLS
traffic is done using ingress QoS policies. MPLS EXP for imposed labels can be done
on ingress or egress, but if you wish to rewrite both the IPP/DSCP and set an explicit
EXP for imposed labels, the MPLS EXP must be set on egress.
The priority-level command used in an egress QoS policy specifies the egress transmit
priority of the traffic vs. other priority traffic. Priority levels can be configured as 1-7
with 1 being the highest priority. Priority level 0 is reserved for best-effort traffic.
Routed Optical Networking simplifies the network design and collapses complex technologies and network layers into a single cost-efficient and easy-to-manage network infrastructure.
Routed Optical Networking tackles the challenges of building and managing networks
by simplifying both the infrastructure and operations.
There are two main concepts driving changes in the Routed Optical Networking
architecture:
The direct integration of digital coherent WDM interfaces in the router eliminates the
traditional manually intensive service hand-off across the demarcation between the
optical transport and packet domains.
The automation architecture includes unified capacity planning, path optimization, and
element management for both IP and optical layers.
A key pillar of the Routed Optical Network is the integration of the coherent pluggable
modules. A key enabler for the scale is the 400GbE line rate. 400GE coherent optics
leverage a multi-vendor Quad Small Form-Factor Pluggable (QSFP) with standardized
specifications, which allows interoperability for easier adoption and scale. In order to
ensure full flexibility, these line cards will support both coherent and grey optical
interfaces (or a combination thereof) in the form of QSFP56-DD form-factor
pluggables. The use of QSFP56-DD for both coherent and grey optical interfaces can
be leveraged to limit any trade-off in terms of port densities and IP fabric capacity
commonly associated with coherent optics on routing platforms (previously referred to
as IPoDWDM). This means that a specific line card required to host coherent DWDM
optical interfaces will no longer be required.
A universal line card, which can be flexibly deployed to support the termination of coherent DWDM or grey optical line interfaces, can be realized. The QSFP form factor has been widely leveraged in the industry, and many OEMs are contributors in promoting the QSFP-DD Multiple Supplier Agreement (MSA) through the Optical Internetworking Forum (OIF) as well as other standards organizations.
Evolution of Digital Coherent Optics
In the IP aggregation application space, IP/optical integration can already be served by leveraging IP aggregation devices which feature Modular Port Adapters (MPA) and Digital Coherent Optical (DCO) pluggable modules. The integration of DCO modules allows the direct termination of coherent DWDM interfaces into the IP aggregation devices without incurring the cost and complexity of optical transponder line cards.
5.8.1 DWDM
Most modern Telecom networks start at the physical DWDM fiber optic layer. Above
the physical fiber is technology to allow multiple photonic wavelengths to traverse a
single fiber and be switched at junction points.
5.8.2 Ethernet/IP
The Routed Optical Networking architecture removes this limitation and brings efficiency to networks of all sizes, owing to advancements in coherent pluggable technology.
(ii) QSFP-DD, 400ZR, and OpenZR+ Standards
The drive to improve network efficiency has led to shifting coherent DWDM functions to router pluggables. Technology advancements have shrunk the Digital Coherent Optics (DCO) components into the standard QSFP-DD form factor, meaning no specialized hardware is needed and the highest-capacity routers available today can be used. ZR/OpenZR+ QSFP-DD optics can be used in the same ports as the highest-speed 400G non-DCO transceivers.
Reducing devices in the network enhances resiliency and availability, while also optimizing wavelength use and the embedded fiber capacity.
5.8.4 Typical Link Budget of an OFC Connection with SFPs up to 80 km
The link budget of an Optical Fiber Cable (OFC) connection is the total amount of optical power loss a system can tolerate while still maintaining reliable data transmission. It is calculated by taking into account the power output of the transmitter, the sensitivity of the receiver, and the losses incurred along the optical path, such as fiber attenuation, connector losses, and splice losses.
For SFPs used for up to 80 km, such as 10GBASE-ZR or DWDM SFP+, the link budget calculation can be broken down as follows:
o Receiver sensitivity: the minimum optical power required at the receiver to ensure proper signal reception. The typical range for 10GBASE-ZR and DWDM SFP+ is −23 dBm to −27 dBm.
o Fiber attenuation: the amount of signal loss due to the fiber itself. For Single-Mode Fiber (SMF), the typical attenuation is around 0.25 dB/km at 1550 nm. For 80 km: Fiber Loss = 80 km × 0.25 dB/km = 20 dB.
o Connector and splice losses: connectors typically add 0.5 dB per connector, and each splice adds about 0.1-0.2 dB. For a typical system with 2 connectors and 26-27 splices over an 80 km fibre length (considering a 3 km drum length), the loss is around 2 × 0.5 dB + 27 × 0.1 dB = 3.7 dB.
Link Budget Calculation: Let's break this down for a typical 80 km 10GBASE-ZR or
DWDM SFP+ link:
The link budget required for an 80 km connection is approximately 25.7 dB. Given that the available budget is 27 dB (for a typical 10GBASE-ZR or DWDM SFP+), this link is feasible, but it is very near the limit. Factors such as splicing need to be optimized, or additional components such as optical amplifiers used, to ensure stability in real-world conditions. If the SFP has slightly better transmitter power or receiver sensitivity, there would be additional margin.
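A hedged Python version of this budget check; the 2 dB design margin and the 0 dBm transmit power are assumptions used to reconcile the itemized losses with the figures quoted above:

FIBER_LOSS_DB_PER_KM = 0.25   # SMF at 1550 nm
CONNECTOR_LOSS_DB = 0.5
SPLICE_LOSS_DB = 0.1

def required_budget_db(km, connectors=2, drum_km=3.0, margin_db=2.0):
    splices = round(km / drum_km)              # about 27 splices for 80 km
    return (km * FIBER_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB
            + margin_db)

available_db = 0 - (-27)   # assumed 0 dBm Tx power, -27 dBm Rx sensitivity
needed_db = required_budget_db(80)
print(f"required {needed_db:.1f} dB vs available {available_db} dB")  # 25.7 vs 27 dB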
Each Zonal HQ is connected to other Railways via 3-4 routes, to provide path redundancy and to route the traffic flow through different routes in case of congestion on any one route.
The Zonal HQ of WCR is at Jabalpur. Jabalpur is also one of the three divisions of the WCR zone. The common core/aggregate-layer router provided at Jabalpur shall be of a higher specification, as for a Zonal router. The other two divisions of WCR, i.e. Bhopal and Kota, will also be connected to this router.
On the backbone side, the connectivity of WCR HQ with other zones will be through the divisions on different routes. For example, WCR to NR will be connected through
Figure 6 - Example Map of Zonal connectivity
Legend: Interconnectivity of CORE network of Indian Railways
Figure 7 - Proposed diagram of SWR IPMPLS network:
Salient features:
5. UBL: DVG, LD, GDG; inter-zonal links to GTL (SCR), MRJ (CR), MAO (KR)
6. SBC: HAS, MYS, BWT; inter-zonal links to HG (CR), SA (SR), JTJ (SR), DMM (SCR)
7. UBL: VSG
8. UBL: DED
9. UBL: RNJP
10. UBL: SMLI
Linear sections (protected through BSNL bandwidth):
11. MYS: CMNR
12. MYS: TLGP
13. MYS: CMGR
14. SBC: MKM
Figure 8 - Proposed MPLS Network Diagram of NWR
1. NWR has 4 Divisions, viz. Ajmer, Bikaner, Jaipur and Jodhpur, with the Zonal HQ at Jaipur.
3. LSRs/Repeaters are used in sections where the distance between two junctions is greater than 60 km; 64 such LSRs/repeaters are placed at various wayside stations between junctions.
4. NWR has boundaries with NR, NCR, WR & WCR at 12 junction stations. Protection paths through other railways are also shown via these stations.
5. Connectivity of NWR with the NR, WR, NCR and WCR Zonal HQs is also shown via 100/400G links on core routers.
6. Division-to-Division connectivity is shown via different 40/100G links on core routers.
The following two sample cases have been taken for the working of protection paths in case of failure:
Figure 10 - Protection Path for Ajmer-Marwar Junction (AII-MJ)
Case-2: A similar case is taken for the Ajmer-Marwar Junction (AII-MJ) section. In case of failure of the main path, two protection paths are available. Both protection paths are within NWR.
After detailed discussions regarding the architecture of the IP-MPLS backbone network for Indian Railways, the following is recommended by the committee:
5.9.1 The IP-MPLS backbone network of IR shall be a three-layered architecture. The different layers of the proposed network hierarchy are as follows:
(i) Core/Aggregate Layer:
(a) High-end routers are required to aggregate data from various divisions and provide high-speed connections to other zones. Bandwidth of 100G, upgradable up to 400G, is proposed for connecting the CORE/Aggregate routers. Connectivity between CORE/Aggregate routers and the Divisional Aggregate router is to be provided with 40G bandwidth, upgradable up to 100G.
(c) A separate pair of fibres in mesh topology is proposed to connect the high-end routers for the CORE/Aggregate network.
Figure 11 - Proposed architecture of IPMPLS network for IR
(ii) Pre-Aggregate Layer: The Pre-Aggregate layer between Junction/Major stations in a
Railway IP-MPLS system serves as a critical intermediate layer that collects and
organizes data from access network, preparing it for efficient transmission to the
higher-level core/aggregate layer at the Divisional level.
(a) LSRs (Label Switch Routers) are provided at every junction/major station, separated by approximately 60 to 70 km, to collect data from different stations and prepare it for further aggregation. Bandwidth of 2x10G or higher is proposed for connecting these Pre-Aggregate routers.
(b) Specification for these routers shall be as mentioned in RDSO TAN Version 2.0.
(c) Separate pair of fiber in ring topology to connect the LSR routers at junction/major
stations for Pre-Aggregate network is proposed.
(iii) Access Layer: The Access layer in a Railway IP-MPLS system is the foundational
layer that interfaces directly with various railway endpoints, such as stations, train
control systems, and other local operational devices. Positioned below the Pre-
Aggregate layer, the Access layer collects data from these endpoints and sends it to
the Pre-Aggregate layer at junction/major station or regional hubs.
(a) An LER (Label Edge Router) is provided at each station, along with L3/L2 switches that connect local equipment (like signalling and control systems, PRS/UTS, SCADA, VSS, etc.) to the network. Bandwidth of 10G is proposed for connecting these Access routers.
(b) Specification for these routers shall be as mentioned in RDSO TAN Version 2.0.
(c) Separate pair of fiber in ring topology to connect the Access Routers at all stations
for this layer is proposed.
(iv) Cell Site Routers: Base stations (eNBs) of the LTE network will be deployed throughout the country to provide LTE coverage. These eNBs are proposed to be connected to the backhaul network using a 1G link between the cell site router and the station LER, through separate fibre, at the time of implementation of LTE.
5.9.2 For the specifications of Pre-Aggregate routers (LSR) at junction-station level and Access routers (LER) at stations, RDSO TAN Ver 2.0 shall be followed. The configuration of LSRs and LERs shall be decided by the Zonal Railways as per requirement.
5.9.3 As per the proposed architecture, a separate pair of fibres is required for the three different layers; however, at present only two fibres are available. To start implementation of the work, the access network and pre-aggregate network shall be taken up in the first phase, with backup connectivity taken from RCIL as is being done presently for the SDH network.
5.9.4 The Core/Aggregate network layer shall be planned in the next phase, when separate fibres are available as per the new policy of 2x96 fibres.
5.9.5 The converged optical network for the Core/Aggregate layer should be designed and implemented as a unified project. This project must seamlessly integrate with the Pre-Aggregate networks being deployed by each of the Zonal Railways, ensuring a cohesive network infrastructure with unified Classes of Service (CoS) and a common management layer. Preferably, the complete work of the core/aggregate network shall be done by any one Zonal Railway to ensure uniformity in the higher-level network.
5.9.6 The uniform IP addressing scheme issued for the Railways' IP-MPLS backbone network shall be followed by all the Zonal Railways while implementing the network. A copy is attached as Annexure-2.
5.9.7 Service provisioning and management of this network is very complex, and the committee recommends having a proper NOC setup ready for the management of this network, along with the implementation of the network in every Division and Zone, as per the detailed recommendations against TOR item No. 6 of this report.
6. TOR Item No. 2:
6.1 Background
Based on the above, Technical Advisory Note (TAN) Version 1.0 was issued on 16.12.2020, based on comments of Zonal Railways and various stakeholders. Railway Board, vide letter dated 20.10.2022, advised RDSO to issue PoC (Proof of Concept) guidelines for implementing the IP-MPLS network by Zonal Railways with reference to the TAN issued by RDSO.
Further revision of the TAN was taken up based on feedback / suggestions received
from Zonal Railways, various stakeholders including OEMs and directives from
Railway Board. TAN Version 2.0 for “Implementation of IP-MPLS Technology for
Unified Communication Backbone on Indian Railway” including PoC guidelines was
issued w.e.f. 29.03.2023 following the RDSO ISO procedure.
The typical scheme shows connectivity between wayside stations on a 10G link. The data from wayside stations will be aggregated at junction stations; these junction stations will have Label Switch Routers (LSRs), in addition to the LER of that station, for aggregating the data and sending it to divisional-level routers through the IP-MPLS network. In addition to Railway's own connectivity between these aggregation routers (LSRs), connectivity with the RailTel IP-MPLS network is also proposed, for availability of the network in case of failure of Railway's own network.
Figure 12 - A typical schematic diagram for implementation of IP-MPLS network
6.3 The issue of discussion on Specification/TAN Version 2.0 for IP-MPLS and suggestions for improvement/changes based on field experiences was also taken up as an agenda item in the 41st TCSC meeting held on 29th & 30th Jan 2024.
All the Zonal Railways have given comments/suggestions on TAN Version 2.0 for IP-MPLS routers. The comments/suggestions of Railways, and remarks on the same, are as follows:
a) Issues related to higher port capability, a greater number of ports, and retention of legacy ports in the LER/LSR were raised.
Remarks: All the services running at stations have been catered for in the typical scheme given in the TAN, and accordingly the type of interface port has been decided. The capacity of the interface ports for the LER as well as the LSR is based on the bandwidth requirement for the services/connectivity. Moreover, the requirements given in the TAN are the minimum, and these can be changed by the Zonal Railways based on site requirements.
As far as legacy ports are concerned, these ports are required till the migration of all services onto Ethernet; however, the decision regarding the number of legacy ports required has been left with the Zonal Railways.
Remarks: A chassis-based router has been preferred for more flexibility, as various types of interfaces, including legacy interfaces, are required for the migration from the existing SDH backbone to the IP-MPLS backbone over Indian Railways. Apart from being helpful in migration, it is also better for providing redundancy and for the replacement of faulty cards instead of the complete router. In future, once the IP-MPLS network is stabilized, the next replacement, whenever due, can be planned as per Railways' requirements.
d) Connection to RCIL at all junction stations with LSR and Long-haul bandwidth through
RCIL should be increased to 25G.
6.4.1 The present TAN specifies the details of the wayside station router (LER) of the access network and the junction station aggregation router (LSR) of the pre-aggregate network only. The configuration of LSRs and LERs shall be decided by the Zonal Railways as per local requirements. These two routers, i.e. LER & LSR, are edge routers; the configuration of the core routers to be used at Divisional as well as Zonal level has been discussed in the committee's report against TOR item no. 1.
6.4.2 Specifications of routers for MPLS-based transport networks have been issued by TEC in March 2022 vide TEC 48050:2022 and are being used extensively by the Telecom Service Providers. To meet Railway-specific requirements, details have been covered in the TAN 2.0 issued by RDSO. There is no further requirement for developing an RDSO specification for this item or changing the TAN 2.0 at present.
7. TOR Item No. 3:
The issue of fixing performance criteria for OEMs of IP-MPLS routers, for tenders being invited by Zonal Railways, was taken up as an agenda item in the 41st TCSC meeting held on 29th & 30th Jan 2024. Later, the item was given to the committee as TOR item no. 3 vide RB's letter dated 08.04.2024.
The committee deliberated on this item, including the discussions held in the 41st TCSC with the PCSTEs of all ZRs. The issues discussed during the 41st TCSC, and remarks on these issues, are given below:
7.1 Discussion on issues raised in the 41st TCSC meeting for fixing performance criteria for OEMs of IP-MPLS routers:
For fixing performance criteria for OEMs of IP-MPLS routers, the following major issues were discussed, based on comments from Zonal Railways:
a) Inclusion of PoC.
Remarks: PoC has already been included in TAN Version 2.0. PoC is required to be done for this item as per the PoC guidelines issued in TAN Version 2.0, to check the suitability of the equipment being offered for Railway requirements. In addition, guidelines for PoC of new-technology telecom items have recently been issued by RDSO for implementation.
Remarks: MTCTE certification from TEC has been included in the TAN Version 2.0
for IP-MPLS routers.
c) Only RDSO approved vendors to participate & RDSO approved vendor list for IP-
MPLS.
Remarks: Being a COTS item, vendor approval is not done by RDSO; however, guidelines for PoC of new telecom items were issued on 26.07.2024. As per the new guidelines, the details of OEMs and their equipment are published on the RDSO website after successful completion of PoC, to facilitate the Zonal Railways.
d) For the quantity of routers installed and working satisfactorily, the following suggestions were received:
• OEMs should have supplied a minimum of 30% of the tender value to any Govt. organization.
• 20% of the quantity to be procured should be installed in Govt./PSUs/Telcos and working satisfactorily for a period of at least 3 years, including successful upgradation during the 3-year period.
• 15% of the tender quantities of LERs and LSRs installed and working satisfactorily.
• The offered router models, covering at least 10% of the scheduled quantity of the tender document, should have been working satisfactorily in a single work in any Government/PSU/Telecom Service Provider network in India.
Remarks:
• The item was discussed in detail by the committee, and it was found that if the criteria are kept very stringent, it is difficult to find suitable vendors.
• The performance criteria should not be very stringent; at the same time, they should not be so lenient as to treat the router as simple end equipment.
• The performance criteria should provide a level playing field for the OEMs.
e) For maintenance and technical support after the installation and commissioning of the equipment, the following suggestions were given by Zonal Railways:
(i) For IP-MPLS routers, the OEM must be in the list of Trusted Telecom suppliers on the Trusted Telecom portal of the Government of India. Proof of this needs to be enclosed along with the MAF.
(ii) OEM should have TEC/ MTCTE certification for IP-MPLS Routers as per TEC GR.
(iii) All routers should be IPv6-ready from day one. All hardware and software for the same should be provided with the system.
(iv) OEMs should have 24*7 Technical Assistance Centre (TAC) support in India. Relevant
document in support of the same shall be submitted.
(v) The OEM should have proven facilities for engineering, manufacture, assembly, integration and testing, and basic facilities with respect to space, personnel, test equipment, training and logistics support for at least the past three years at the location from where the proposed equipment shall be supplied. In case the OEM is located outside India, it should have engineering, testing, training, and service centre facilities in India.
The OEM shall provide support during the operation of the equipment/routers till the codal life of the equipment, and support with spares till 08 years from the date of supply/installation of the router.
(vi) A clause for the bidder to provide AMC with BG is to be included. A life cycle cost analysis is to be submitted by the bidder. Warranty plus AMC shall cover the complete codal life of the equipment, i.e. 08 years.
(vii) A tender-specific authorization certificate (MAF) from the OEM is required for the bidder to participate in the bid.
(i) The equipment/IP-MPLS routers offered shall have a proven and satisfactory working record in Mainline Railways/Metro Rail networks/Government (Centre or State)/PSUs/Defence/Telecom Service Providers/ISPs/Bharatnet/Public Listed Companies etc. for a minimum of one year for 25% of the quantity mentioned in the bid document, as on the date of tender opening, in India or outside India other than the country of origin. Relevant documents (like satisfactory performance certificates for the offered equipment from the client/customer) in support of the same shall be submitted for approval of the Engineer.
(ii) OEMs should have their spares depots in India; the locations and addresses of at least 10 such depots across India need to be submitted.
(iii) The IP-MPLS router OEM must conduct pilot implementation for a minimum of 5% of sites. HLD, LLD, design documents and AT documents must be signed off by the OEM and the customer, along with the bidder, for final acceptance. In no case is the bidder allowed to sign off on behalf of the OEM.
(iv) OEMs should have valid ISO 9000 & ISO 14000 certifications.
(v) The router/router OS should be tested and certified for EAL2/NDPP under the Common Criteria programme for security-related functions, or under the Indian Common Criteria Certification Scheme (IC3S) by STQC, MeitY, Government of India.
Note: The bidder shall submit all relevant documents, certifications and undertakings from the OEM related to the above performance criteria.
8.0 TOR Item No. 4:
The existing network carries crucial communication circuits that cover train operations
and Railway working. Hence, it is essential that a detailed migration plan is prepared
and meticulously executed.
This item outlines the migration plan for transitioning from the existing Synchronous
Digital Hierarchy (SDH) network to an Internet Protocol Multiprotocol Label Switching
(IP/MPLS) network for Indian Railways. The migration aims to enhance the operational
efficiency, scalability, reliability, and flexibility of network services such as Passenger
Reservation System (PRS), Unreserved Ticketing System (UTS), Freight Operations
Information System (FOIS), CCTV, Signalling Circuits, Data logger, Train Control
Communication, Supervisory Control and Data Acquisition (SCADA), and other essential communication services like Railnet and Station Wi-Fi.
8.2.1 Existing SDH Network: The current SDH-based network of Indian Railways consists of point-to-point connections with static bandwidth allocation, which limits flexibility and scalability. SDH circuits are provisioned for various services like PRS/UTS, FOIS, CCTV, Signalling circuits, and Train Communication. However, it lacks the advanced traffic management capabilities and dynamic routing of modern IP/MPLS networks.
● Criticality of Services: Prioritize the migration of critical services like UTS, PRS,
FOIS, and Signalling, ensuring redundancy and failover mechanisms are in place
before moving them to the new network.
● Phased Migration Plan: Adopt a phased migration approach to ensure that only a
small portion of services are transferred at a time, minimising disruptions. Migrate non-
critical services first (e.g., CCTV, Station Wi-Fi), followed by more critical services
(e.g., UTS, PRS, FOIS, Signalling).
● Dual-Stack Operation: Maintain both SDH and IP/MPLS networks simultaneously
(dual-stack operation) during the transition period. This ensures that services can be
switched back to SDH in case of issues during migration.
● Dual Paths and Redundancy: Set up redundant paths for critical services like UTS,
PRS, FOIS, signalling circuits, and SCADA to ensure they remain operational even in
case of a failure.
● Load Balancing: Use load-balancing mechanisms for distributing traffic evenly across
redundant links, ensuring that network performance remains optimal even during high-
traffic periods.
● Segregation using VRF and VLANs: Maintain strict segregation between services using VRFs and VLANs to prevent cross-traffic between critical and non-critical services (an illustrative configuration is given after this list).
● Encryption: Implement encryption (IPsec) for services that require secure
communication, such as UTS, PRS, and FOIS.
● Network Monitoring: Deploy advanced monitoring tools to track network
performance, detect anomalies, and secure the network against potential threats.
● Coordination with CRIS: Since CRIS (Centre for Railway Information Systems) manages services like UTS, PRS, and FOIS, it is critical to coordinate closely with CRIS during the migration. CRIS should be involved in the testing and validation process for services before and after migration.
● Coordination with RailTel: RailTel, which provides the communication backbone at
present, should be closely involved in planning the migration, ensuring that the
underlying MPLS infrastructure is ready to handle critical services with the required
QoS, security, and redundancy.
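An illustrative Cisco IOS-style sketch of this VRF/VLAN segregation is given below; the VRF names, VLAN IDs and addresses are assumptions for illustration only, not values prescribed by this report:

    ! one VRF per service class keeps routing tables separate
    ip vrf SIGNALLING
     rd 64512:10
    !
    ip vrf STATION-WIFI
     rd 64512:20
    !
    ! one 802.1Q sub-interface per VLAN, each bound to its own VRF
    interface GigabitEthernet0/0/1.10
     encapsulation dot1Q 10
     ip vrf forwarding SIGNALLING
     ip address 10.10.10.1 255.255.255.0
    !
    interface GigabitEthernet0/0/1.20
     encapsulation dot1Q 20
     ip vrf forwarding STATION-WIFI
     ip address 10.20.20.1 255.255.255.0

With such an arrangement, routes and traffic of a critical service never mix with those of a non-critical service, even though both share one physical port.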
8.3.7 Dual-Stack Operation
● During migration, both SDH and IP/MPLS will coexist. Dual-stack architecture will
ensure that services continue to run smoothly on SDH while being gradually shifted to
IP/MPLS.
● VLAN (Virtual Local Area Networks): Use VLANs to separate traffic at the layer-2
level for different services. This ensures that various service types (e.g., PRS, UTS,
FOIS, CCTV, Datalogger) can operate on the same physical infrastructure without
interference.
● VRF (Virtual Routing and Forwarding): Use VRF for logical segregation of routing
instances, ensuring that traffic for different services is routed independently,
enhancing security and isolation between services.
● VPN (Virtual Private Networks): Use L2VPN and L3VPN services for isolating critical
applications like UTS, PRS, FOIS, and signalling. This allows controlled and secure
access to the MPLS network for different services.
VPNs are private networks that use a shared transport network to connect two or more remote sites. Instead of dedicated connections between networks, VPNs use virtual connections that are routed (tunnelled) over the transport network. The IP/MPLS network supports L2 and L3 VPNs, as illustrated in the sketch after the list below:
● Layer 2 VPN (L2VPN): For services requiring low latency and a fixed path, such as signalling circuits and train control communication.
● Layer 3 VPN (L3VPN): L3VPN services provide Layer 3 connectivity between remote sites and constitute the forwarding path that all Layer 3 traffic takes. They suit higher-level services like PRS, UTS, FOIS, and CCTV, where dynamic routing and flexibility are crucial.
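A minimal Cisco IOS-style sketch of the two VPN types is given below; the peer address, VC ID, AS number and VRF name are assumptions for illustration only:

    ! L2VPN: point-to-point pseudowire (emulates a fixed SDH-like path)
    interface GigabitEthernet0/0/2
     xconnect 10.255.0.2 100 encapsulation mpls
    !
    ! L3VPN: routed service carried in a VRF and advertised via MP-BGP
    ip vrf PRS
     rd 64512:100
     route-target export 64512:100
     route-target import 64512:100
    !
    router bgp 64512
     neighbor 10.255.0.2 remote-as 64512
     address-family vpnv4
      neighbor 10.255.0.2 activate
      neighbor 10.255.0.2 send-community extended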
Figure 13 - Typical example for VPNs
8.4.4 MPLS VPN CSC with BGP
Carrier Supporting Carrier (CSC) is an arrangement in which one service provider allows another service provider to use a segment of its backbone network. The service provider that provides the segment of the backbone network is called the backbone carrier, and the service provider that uses the segment is called the customer carrier. For example, backbone carrier: RailTel; customer carrier: Indian Railways.
● The integration of the IP/MPLS networks of the divisions will be done using MPLS VPN CSC.
● Each Division will have its own MPLS domain with a unique BGP AS number.
● The IP/MPLS network of the division will be interconnected with the RCIL IP-MPLS PoP at two or more junction locations.
● BGP-LU sessions will be required at the junction locations (LSR) between the Division and RCIL for exchanging labelled infrastructure routes among divisions (a minimal sketch is given after this list).
● With this integration scheme, the division will be able to create, extend and delete services on its own, without any intervention from RCIL.
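The following Cisco IOS-style sketch illustrates such a BGP-LU (labelled unicast) session from a division LSR towards the RCIL PoP; the AS numbers and addresses shown are assumptions for illustration, not values prescribed by this report:

    router bgp 64601
     ! eBGP session to the RCIL (backbone carrier) PoP router
     neighbor 100.64.1.2 remote-as 64999
     address-family ipv4
      neighbor 100.64.1.2 activate
      ! exchange MPLS labels along with the IPv4 routes (BGP-LU, RFC 3107)
      neighbor 100.64.1.2 send-label
      ! advertise the division's infrastructure (loopback) range
      network 10.255.1.0 mask 255.255.255.0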
8.5.1 Simplified Network Management
● Simplified Routing: A uniform IP addressing plan allows more efficient routing, reducing the size and complexity of routing tables. Consistent address blocks can be aggregated or summarized (route summarization), which minimizes the number of routes that need to be propagated between routers (an example is given after this list).
● Improved Performance: Fewer routing entries and simpler routing paths can improve
the overall performance of the network by reducing the processing overhead on
routers.
● Seamless VPN Integration: For MPLS VPNs (L2VPN/L3VPN), a consistent
addressing scheme ensures that virtual private networks (VPNs) are easily maintained
and routed across the IP/MPLS backbone, simplifying traffic flow between sites.
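As a hedged example of such summarization, assuming a division were allotted the contiguous block 10.80.0.0/16 (an assumed prefix, not from this report), a Cisco IOS-style border router could advertise a single summary instead of hundreds of station routes:

    ! OSPF: summarise the division block at the area boundary (ABR)
    router ospf 1
     area 10 range 10.80.0.0 255.255.0.0
    !
    ! BGP: advertise one aggregate towards the backbone
    router bgp 64512
     address-family ipv4
      aggregate-address 10.80.0.0 255.255.0.0 summary-only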
8.5.5 Consistency in Service and Device Allocation
● Consistency across Multiple Services: Railways run multiple critical systems, such
as ticketing (UTS/PRS), freight (FOIS), train control (TCCS), and signalling. A uniform
IP addressing scheme ensures these systems can communicate effectively over the
IP/MPLS network, reducing the risk of IP conflicts or misrouting.
● Seamless Integration of New Technologies: As new services or technologies are
introduced, such as IoT devices for smart railway stations, a consistent addressing
scheme ensures they can be smoothly integrated into the existing network without
requiring major reconfigurations.
8.6.1 UTS and PRS
Current Setup: UTS and PRS are critical services managed by CRIS. These services are accessed by railway stations across the country, and any downtime can lead to disruptions.
Migration Strategy:
● VLANs: Use a dedicated VLAN for UTS/PRS to segregate traffic at layer-2, isolating
it from other services like CCTV, station Wi-Fi, and Signalling.
● VRF: Deploy VRF instances specifically for UTS and PRS, ensuring that routing is
isolated from other services. This prevents traffic mix-ups and enhances security.
● L3VPN: Use MPLS L3VPN to securely route UTS/PRS traffic across the IP/MPLS
network. This allows dynamic routing and ensures that the services are scalable and
reliable.
● QoS: Apply strict QoS policies to prioritize UTS/PRS traffic, ensuring guaranteed bandwidth and low latency, especially during high-demand periods like festival seasons (an illustrative policy is given after this list).
● Redundancy: Set up redundant MPLS paths to ensure continuous availability of UTS
and PRS even during network failures.
● Testing and Validation: After migration, thorough testing with CRIS should be done
to ensure that ticketing operations (UTS/PRS) remain functional and stable under real-
time load conditions.
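A minimal Cisco IOS-style QoS sketch for this strategy is given below; the class name, DSCP value and bandwidth percentage are assumptions for illustration only:

    ! classify UTS/PRS traffic (assumed to be marked AF31 at the edge)
    class-map match-any UTS-PRS
     match dscp af31
    !
    ! give the class a low-latency priority queue with a 20% guarantee
    policy-map STATION-WAN-OUT
     class UTS-PRS
      priority percent 20
     class class-default
      fair-queue
    !
    interface GigabitEthernet0/0/0
     service-policy output STATION-WAN-OUT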
8.6.2 FOIS
Current Setup: FOIS manages freight train operations, making it a crucial system for railway operations. Presently, it is managed by CRIS.
Migration Strategy:
● VLANs: Assign a specific VLAN for FOIS to separate its traffic from UTS, PRS, and
other services at the layer-2 level.
● VRF: Deploy a dedicated VRF instance for FOIS traffic to isolate its routing from other
services.
● L3VPN: Use MPLS L3VPN for FOIS, enabling dynamic routing and ensuring
scalability as freight operations grow. L3VPN also supports policy-based routing,
ensuring optimal network performance for freight operations.
● QoS: Prioritize FOIS traffic with QoS to ensure seamless operation, even during
congestion.
● Redundancy: Dual-homed links and redundant MPLS paths will ensure that FOIS
services remain operational even during failures.
● Testing and Validation: Extensive testing shall be carried out in coordination with CRIS and RailTel to validate that FOIS continues to operate smoothly and reliably post-migration.
8.6.3 Signalling Circuits
Current Setup: Signalling circuits are point-to-point circuits on the SDH network, requiring low-latency and highly reliable connections.
Migration Strategy:
● VLANs: Assign a dedicated VLAN to signalling circuits to isolate traffic at layer-2 and
ensure minimal interference with other services.
● VRF: Deploy a dedicated VRF instance for signalling circuits to prevent any routing
conflicts with other services.
● L2VPN: Use MPLS L2VPN for signalling circuits, ensuring that low-latency, point-to-
point communication is maintained. L2VPN mimics the circuit-switched nature of SDH,
making it ideal for signalling circuits.
● QoS: Configure QoS to prioritize signalling traffic, ensuring that it always has the
necessary bandwidth and low latency.
● Redundancy: Redundant MPLS paths with fast failover mechanisms (e.g., MPLS-TE Fast Reroute) will ensure that signalling circuits remain operational in case of failures (a minimal sketch is given after this list).
● Testing and Validation: Test signalling circuit performance after migration, focusing
on latency, jitter, and failover capabilities.
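A minimal Cisco IOS-style sketch of an FRR-protected MPLS-TE tunnel that could carry such a signalling pseudowire is shown below; the tunnel number, destination and bandwidth are assumptions for illustration:

    mpls traffic-eng tunnels
    !
    interface Tunnel100
     ip unnumbered Loopback0
     tunnel mode mpls traffic-eng
     ! head-end towards the remote PE carrying the signalling circuit
     tunnel destination 10.255.0.2
     tunnel mpls traffic-eng priority 1 1
     tunnel mpls traffic-eng bandwidth 10000
     tunnel mpls traffic-eng path-option 10 dynamic
     ! request link/node protection so failover happens in tens of milliseconds
     tunnel mpls traffic-eng fast-reroute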
8.6.4 Data Logger
Current Setup: Data loggers collect and transmit real-time data from field devices such as signalling equipment.
Migration Strategy:
● VLANs: Assign a VLAN to Data logger systems to separate real-time data traffic from
other services.
● VRF: Use VRF to isolate Data logger traffic at the routing layer, ensuring that real-time
data follows predetermined routes without interference.
● L2VPN: Implement MPLS L2VPN for Data logger to ensure low-latency
communication with field devices, similar to the SDH setup.
● QoS: Prioritize Data logger traffic for low-latency delivery, ensuring that critical
monitoring data reaches its destination in real time.
● Redundancy: Implement redundant MPLS paths for Data logger traffic to ensure
continuous operation in case of failure.
● Testing and Validation: Validate real-time data collection and transmission under
various traffic loads and failover scenarios.
8.6.5 Railnet
Current Setup: Railnet provides internet and intranet services to railway employees
and offices.
Migration Strategy:
● VLANs: Use VLANs to segregate Railnet traffic for internal communications (e.g.,
administrative tasks, internal email, etc.).
● VRF: Deploy VRFs for different internal services within Railnet, such as separating
administrative access from general internet usage.
● L3VPN: Use MPLS L3VPN to provide secure, scalable routing for Railnet traffic across
different zones and regions.
● QoS: Apply QoS to prioritize internal administrative traffic over general Internet usage.
● Redundancy: Set up redundant MPLS paths to ensure continuous availability of
Railnet services.
● Testing and Validation: Test Railnet performance after migration to ensure that
internal applications and Internet access remain functional and secure.
8.6.6 Station Wi-Fi
Current Setup: Station Wi-Fi services are currently operated independently or over SDH and are managed by RCIL.
Migration Strategy:
● VLANs: Create separate VLANs for public Wi-Fi (passengers) and staff Wi-Fi to
ensure isolation at the layer-2 level.
● VRF: Use VRF instances to segregate public Wi-Fi traffic from internal staff Wi-Fi,
ensuring security and preventing unauthorized access to internal resources.
● L3VPN: Use L3VPN for staff Wi-Fi to connect securely to Railnet, while public Wi-Fi
traffic is routed to the Internet.
● QoS: Prioritize staff Wi-Fi traffic over public Wi-Fi to ensure sufficient bandwidth for
railway operations.
● Redundancy: Redundant MPLS paths should be configured to ensure high availability
for staff Wi-Fi.
● Testing and Validation: Verify that both public and staff Wi-Fi services function
properly and securely after migration.
8.6.7 CCTV
Current Setup: CCTV systems typically use point-to-point SDH circuits to transmit video feeds from stations to a central monitoring system.
Migration to IP/MPLS:
● VLAN: Implement a dedicated VLAN for CCTV traffic at the Layer 2 level to segregate
it from other services like Wi-Fi or UTS/PRS. This ensures security and prevents
bandwidth contention with other services.
● L2VPN: MPLS L2VPN is ideal for CCTV as it allows a point-to-point connection that
emulates the current SDH setup while leveraging IP/MPLS. L2VPN keeps the CCTV
traffic within a virtual private connection, maintaining service isolation.
● VRF: For stations with more complex routing requirements or larger surveillance
networks, VRF instances can be deployed to isolate CCTV routing from other services.
● QoS: Apply QoS policies to ensure that the video feeds have guaranteed bandwidth
and low latency, especially important for live monitoring.
● Redundancy: Dual MPLS paths should be implemented to ensure continuous
surveillance in case of a network failure.
● Testing: Conduct end-to-end testing to ensure video feeds are transmitted without
degradation.
8.6.8 Train Control Communication System (TCCS)
Current Setup: TCCS circuits are used for real-time communication between Train Controllers and field Station Masters, typically on SDH.
Migration to IP/MPLS:
● VLAN: Create a VLAN for TCCS to segregate real-time train control traffic from other
services. This will ensure that the critical TCCS data has an isolated path and is not
affected by other station activities.
● L2VPN: MPLS L2VPN can be used to maintain low-latency, point-to-point connections
that emulate the current SDH circuits. This ensures that the real-time nature of train
control communications is preserved.
● VRF: Use VRF instances for TCCS to isolate its routing from other services at the
layer-3 level. This is particularly useful in larger networks where routing segregation is
needed.
● QoS: Implement strict QoS policies to prioritize TCCS traffic, ensuring uninterrupted, low-latency and responsive communication in all situations.
● Redundancy: Deploy dual MPLS paths with fast failover mechanisms (e.g., MPLS Fast
Reroute) to ensure continuous operation in case of a failure.
● Testing: Perform real-time failover and latency testing to ensure uninterrupted
communication during migration.
8.7 Migration plan of SWR: SWR has prepared a detailed migration plan for transferring circuits to its IP-MPLS network. This also includes an IP addressing plan for different divisions and services as per the uniform IP addressing plan issued by RDSO. The detailed migration plan prepared by SWR, including the IP addressing plan, is enclosed as Annexure-5 for ready reference.
8.8.1 Assessment of the current network is the first activity for starting the migration. A section-wise map of available SDH equipment is to be prepared, specifically covering the availability of Ethernet ports and the circuits that are dropped at each of the stations in the section. Typical items to be covered during assessment of the network are given in Annexure-I.
8.8.2 Creating an extra Divisional unit for implementation and OAM of the network will ensure effective implementation and migration through suitable redeployment of existing staff. The staff should be trained in IP-MPLS and IP/WAN/LAN/network security technologies.
8.8.3 A detailed implementation scheme should be worked out, identifying the sections where existing SDH with Ethernet interfaces can be utilized and the sections where SDH equipment is to be replaced by L2 switches and MPLS equipment; a station-wise circuit migration plan shall be detailed and followed. The "Migration Plan" of the Division shall be approved by the Competent Authority of the Zonal HQ before actual implementation, to ensure a seamless switchover from the existing SDH network to the latest IP-MPLS technology.
8.8.4 The IP-MPLS equipment shall be standardized for different categories of stations
inclusive of the interfaces needed and the IP numbering scheme.
8.8.5 Sectional control of less important branch lines should be migrated first, to understand the effect of IP-MPLS/Ethernet on the working. Once stable, the experience gained can be used to migrate the remaining section controls.
8.8.7 Traffic Segregation using VPN, VLAN, and VRF - Maintain strict segregation between
services using VRF and VLANs to prevent cross-traffic between critical and non-critical
services.
8.8.8 Implement QoS policies to prioritize time-sensitive services like signalling, train control,
SCADA, and Data logger over less critical services like CCTV or public Wi-Fi.
8.8.9 Coordination with CRIS and RCIL is required for the services managed by them. Since CRIS (Centre for Railway Information Systems) manages services like UTS, PRS, and FOIS, it is critical to coordinate closely with CRIS during the migration; CRIS should be involved in the testing and validation process for services before and after migration. Similarly, RailTel, which provides the communication backbone, should be closely involved in planning the migration, ensuring that the underlying MPLS infrastructure is ready to handle critical services with the required QoS, security, and redundancy.
8.8.10 A detailed service-wise pre- and post-commissioning checklist shall be an integral part of the migration plan. These checklists shall be followed while migrating services to the IP-MPLS network. Typical checklists for L2 and L3 VPNs are at Annexure-II.
8.8.11 An adequate training programme should be conducted for officers and staff before migration and immediately after migration.
8.8.12 During migration, both SDH and IP/MPLS will coexist. Dual-stack architecture will
ensure that services continue to run smoothly on SDH while being gradually shifted to
IP/MPLS.
Annexure-I
Before doing the migration, assessment of the current network shall be done and the following parameters shall be recorded:
e) For L2 point-to-point/point-to-multipoint circuits:
● Existing VLAN details.
● Whether any Q-in-Q tunnelling is configured.
f) For L3 point-to-point/point-to-multipoint circuits:
● Whether any CE routers or firewalls are present.
● WAN and LAN IP addresses (location-wise).
● Current NNI location.
● Routing: static or dynamic.
● Protocol, metric and metric type used.
● Existing routing table size (number of routes).
● Intranet only, or Intranet with Internet enabled.
i) Create a change request and inform the concerned team:
● Plan the change request along with a rollback approach.
Annexure-II
A. L2 VPN: Pre-Checklist
Requirements:
▪ Confirm the Layer 2 protocol to be used (e.g., Ethernet type or WAN type).
▪ Confirm whether a transparent L2 VPN or a VLAN-based L2 VPN is to be configured.
▪ Check for any physical layer issues such as faulty cables, interfaces or SFP modules between the router and the end device.
▪ Ensure MTU settings match between the PE router and the end device.
Backup Configurations:
● Post-Checklist
Configuration Verification:
▪ Verify that all configurations have been correctly applied as per the template.
▪ Use the following show commands to check L2 VPN status on the routers:
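For example (illustrative Cisco IOS-style commands; other OEMs provide equivalent commands):

    show mpls l2transport vc
    show mpls l2transport vc detail
    show xconnect all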
Connectivity Testing:
B. L3 VPN: Pre-Checklist
▪ Check for any physical layer issues such as faulty cables, interfaces or SFP modules between the router and the end device.
▪ LLDP/CDP neighbour details.
▪ MAC address learning on the existing port.
▪ Ensure MTU settings match between the PE router and the end device.
▪ Routing protocol and adjacency status of the routing protocol.
▪ Routing table.
Backup Configurations:
● Post-Checklist
▪ Verify that all configurations have been correctly applied as per the template.
▪ Check for any physical layer issues such as faulty cables, interfaces or SFP modules between the router and the end device.
▪ LLDP/CDP neighbour details.
▪ MAC address learning on the existing port.
▪ Ensure MTU settings match between the PE router and the end device.
▪ Adjacency status of the routing protocol.
▪ Routing table.
Connectivity Testing:
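For example, VRF-aware reachability tests of the following kind may be used (illustrative Cisco IOS-style commands; the VRF name and address are assumptions):

    show ip route vrf PRS
    ping vrf PRS 10.1.1.1 source Loopback0
    traceroute vrf PRS 10.1.1.1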
9.0 TOR Item No. 5:
Standardization of Station LAN infrastructure for using common LAN for all
network services at stations
Standardizing the LAN infrastructure at stations to support all network services involves designing a common LAN architecture capable of accommodating diverse network services, including IP-MPLS, VoIP-based control communications such as Section Control/TPC, Wi-Fi, VSS, Railnet, FOIS, UTS/PRS, SCADA etc.
Services available at any station can be divided into two groups based on their requirements:
9.1 Critical Services: These include services related to train operation, passenger safety and financial transactions/passenger reservations over Indian Railways, such as signalling circuits, train control communication, UTS/PRS, FOIS and SCADA.
9.2 Non-Critical Services: The remaining services are non-critical and are as follows:
(i) Railnet
(ii) Railway Telephone
(iii) Station Wi-Fi
(iv) RDN
(v) VC etc.
While deriving the services from the LSR/LER/Layer-3 switch/Layer-2 switch, cascading of switches should be avoided to the extent possible. Cascading of switches results in data congestion in the network due to broadcast traffic.
A. Scheme for IP-MPLS network at wayside stations:
The proposed standard LAN infrastructure for a wayside station is shown below:
• Every station is provided with an LER, which is connected to the LERs of both adjacent stations through 10G links.
• Services shown in yellow are considered critical services, and those in white are considered non-critical services.
• IP services (Layer-3) viz. LTE, FOIS, UTS/PRS, VC and any other Layer-3 services may be taken directly from a 1G port of the LER or through an L2 switch.
• Similarly, existing services working on E1, like signalling circuits, Data logger, Control phone etc., can be taken from the E1 interface card. The required number of E1 ports may be decided accordingly.
• Other services can be derived from the Layer-2 switch as shown in the diagram.
• Once the E1 services are migrated to Ethernet, the E1 card will no longer be required and the same slot of the LER may be used for 1G cards.
B. Scheme for IP-MPLS network at Junction/Major stations:
• Every station is provided with one LSR for the pre-aggregate layer and one LER for the access layer. Connectivity with RCIL is also proposed at some of these locations for backup till separate fibre is available.
• The LSR is further connected to all adjacent-section LSRs through 10G/2x10G links on separate fibre. It is also connected to the LERs of all adjacent stations.
• Services shown in yellow are considered critical services, and those in white are considered non-critical services.
• Critical services viz. FOIS, UTS/PRS, VC, SCADA etc. and any other Layer-3 services may be taken through the L3 switch, and LTE may be taken directly from a 1G port of the LER of the station.
• Similarly, existing services working on E1, like signalling circuits, Data logger, Control phone etc., can be taken from the E1 interface card. The required number of E1 ports may be decided accordingly.
• Other services can be derived from the L2 switch as shown in the diagram.
• Once the E1 services are migrated to Ethernet, the E1 card will no longer be required and the same slot of the LER may be used for 1G cards.
C. Scheme for IP-MPLS network at Divisional/Zonal HQ stations:
• A Core/Aggregate router is placed at these locations in addition to the normal LER and LSR.
9.4 Recommendations of Committee:
After detailed discussions, the committee recommends the following standard LAN infrastructure for stations:
9.4.1 Schemes for wayside stations, Major/Junction stations and Divisional HQ stations have been discussed; however, a separate pair of fibres is required for each layer. The Core/Aggregate layer at Zonal/Divisional level shall be planned separately once the additional fibres are available.
9.4.2 At every station one LER shall be provided for connecting various services available
at the station.
9.4.3 Signalling and other services working through E1 shall be connected directly to the E1
interface card of the LER.
9.4.4 Services provided by CRIS i.e. UTS, PRS and FOIS through their router/switch shall
be directly connected to the 1G port of LER.
9.4.5 LTE routers provided in the section shall be directly connected to the LER through 1G
port.
9.4.6 A CCTV network is being provided at the stations by RCIL. This service shall be connected to the LER through a switch. The switch provided by RCIL for the purpose can be used, or it can be connected to the Railway-provided switch.
9.4.7 Other services like train control communication, SCADA, Railnet, Station Wi-Fi etc. can be connected to an L2/L3 switch provided at the station. The switch shall be connected to the LER through a 1G port.
9.4.9 At Layer-2, separate VLANs shall be created for services like CCTV, VoIP-based TCCS, FOIS, Railnet, Station Wi-Fi etc. This ensures security and prevents bandwidth contention between services (an illustrative switch configuration is given below).
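A minimal Cisco IOS-style switch sketch for this recommendation is given below; the VLAN IDs, names and uplink port are assumptions for illustration only:

    vlan 110
     name CCTV
    vlan 120
     name TCCS-VOIP
    vlan 130
     name STATION-WIFI
    !
    ! single 1G uplink to the station LER carrying all service VLANs
    interface GigabitEthernet0/24
     description Uplink to station LER
     switchport mode trunk
     switchport trunk allowed vlan 110,120,130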
10.0 TOR Item No. 6:
This item focuses on the management of IP-MPLS, LTE and other networks to ensure robust connectivity and communication across the all-India Railway system.
10.1 Important networks of IR with key management strategies are outlined below:
(i) IP/MPLS Network: The IP/MPLS (Multiprotocol Label Switching) network is the backbone of Indian Railways' communication infrastructure, used for carrying signaling, control systems, VoIP, and data traffic. Key Management Strategies for the IP-MPLS network are:
● Traffic Engineering: Used to manage and prioritize critical traffic, such as signaling and control data, over less-critical services like passenger Wi-Fi. Implement Quality of Service (QoS) rules to prioritize train operations.
● Redundancy & Failover - Configure the network to automatically reroute traffic in case
of failures (e.g., link failure or router malfunction), ensuring high availability and
reliability.
● Capacity Planning - Regular monitoring of bandwidth usage and traffic patterns allows
for capacity planning and scaling of resources, ensuring network readiness for future
expansion or increased data load.
(ii) LTE Network: The LTE (Long Term Evolution) network provides high-speed wireless
communication for Railway operations, including real-time communication between
trains and control centers, as well as supporting CCTV and passenger services. Key Management Strategies for the network are:
(iii) OFC (Optical Fiber Cable) Network: The OFC network of IR provides high-speed,
long-distance data transmission, serving as the backbone connecting different Railway
stations, control centers, and other infrastructure. Key Management Strategies for the network are:
● Fiber Health Monitoring: Implement systems to monitor the physical health of the fiber,
detecting signal degradations or cuts early. Use automated alerts to trigger maintenance
requests.
● Redundancy: Design the OFC network with redundant fiber paths, ensuring that any
fiber cut automatically triggers a reroute to maintain communication.
● Preventive Maintenance: Regularly schedule inspections and maintenance of fiber
infrastructure to prevent outages and maintain long-term reliability.
(iv) CCTV Network: CCTV systems are crucial for security monitoring at stations, on trains,
and in control rooms, providing live video feeds to ensure passenger safety and asset
protection. Key Management Strategies for the network are:
● Real-Time Monitoring: Ensure real-time video surveillance feeds are available at the
CCC, enabling immediate response to security incidents.
● Video Analytics: Use AI-based video analytics to automatically detect suspicious
activities, overcrowding, or safety hazards, and trigger alerts.
● Storage & Bandwidth Optimization: Implement storage management for archiving video
footage, while optimizing network bandwidth to prevent CCTV data from overwhelming
the network. Use edge computing to process video locally, reducing the need to send
large video streams over the network.
(v) PBX & VoIP Systems: Indian Railways operates telephone exchanges and
communication systems for voice services. The VoIP network is used for internal
communication between Railway staff and management, providing a reliable voice
communication system over the MPLS and LTE networks. Key Management Strategies for the network are:
● Voice Traffic Prioritization: Use QoS settings in the MPLS network to ensure VoIP traffic
receives priority, guaranteeing clear, uninterrupted voice communication even during
high network loads.
● Monitoring and Troubleshooting: Monitoring call routing, trunk usage, and exchange
system uptime. Continuously monitor call quality metrics (such as latency, jitter, and
packet loss) to maintain high-quality voice communication. Diagnose and resolve issues
promptly.
● Emergency Communication: Ensure the VoIP network is available for emergency
communication between train crews and control centers, with redundancy built into the
network to ensure uptime.
(vi) Passenger Information Systems (PIS): Digital Displays and Announcements: PIS at
stations and on trains rely on networked systems for providing real-time information. Key Management Strategies for the network are:
10.2 Unified Network Management System (NMS)
(i) Real-Time Monitoring: Provide real-time visibility into the performance and status of
each network component, including alarms for faults or degradation.
(ii) Fault Detection & Resolution: Detect and resolve network faults across multiple systems
from a single interface, ensuring fast resolution times. Incident response teams in the
NOC quickly troubleshoot issues, escalate them to field teams for on-site repairs if
needed, and coordinate between divisions and zones for multi-regional issues.
(iii) Redundancy and Failover: Networks incorporate redundancy, allowing automatic
failover in the event of a fault. For example, an MPLS router failure would trigger traffic
rerouting, ensuring uninterrupted service.
(iv) Performance Monitoring: Track performance metrics for all networks (bandwidth,
latency, uptime, etc.) and optimize resource allocation to meet current and future
demands.
(v) Multi-Vendor Support: Support multiple hardware and software vendors, enabling
seamless integration across different technologies used in Indian Railways'
communication infrastructure.
The NOC is the heart of the network management strategy. The NOC is responsible for monitoring, managing, and maintaining all Railway communication networks (IP/MPLS, LTE, OFC, CCTV, and VoIP) from a single location. A three-tiered structure for the NOC of IR is proposed; it includes:
● Central NOC: Manages the overall architecture and coordinates with Zonal and
Divisional NOCs. Handles inter-zonal connectivity issues and major incidents.
● Zonal NOCs: Oversee network performance across zones, handle escalations from
Divisional NOCs, and optimize zonal traffic.
● Divisional NOCs: Handle day-to-day local network management, fault resolution, and
incident response within the division.
a) Unified Monitoring: All networks (IP/MPLS, LTE, OFC, CCTV, VoIP) are monitored via
a single platform using a Network Management System (NMS), providing real-time
visibility into each system's health, performance, and security.
Benefits of a Unified NOC are:
(i) Operational Efficiency: Integrating multiple systems into one NOC allows for streamlined
operations, reducing the need for multiple monitoring centers and ensuring that all
systems can be managed from one location.
(ii) Cost Savings: Centralizing network operations into a single NOC avoids duplication of
infrastructure, reduces staffing costs, and minimizes the need for separate management
systems for each network type.
(iii) Enhanced Incident Response: A unified NOC improves coordination during incidents,
as operators can see the relationships between different systems. For example, a fiber
cut impacting both CCTV and telecommunication can be resolved faster through an
integrated workflow.
(iv) Improved Security and Monitoring: By centralizing monitoring of CCTV and
telecommunication systems, the NOC enhances the ability to identify threats and
security breaches in real time, leading to a safer railway environment.
(v) Scalability for Future Needs: A centralized NOC allows for easier scaling as new
technologies (such as IoT sensors or 5G networks) are introduced to the railway’s
infrastructure.
b) 24/7 Fault Management: Detect and troubleshoot network issues, ensuring minimum
downtime and efficient response to incidents.
d) Security Management: Security is paramount across all networks, given the critical
nature of railway operations. Each network must be managed under strict cyber security
protocols to protect against potential breaches or failures. Monitor threats and ensure
compliance with cyber security standards across networks. Key Security Measures are:
(i) Firewalls and Intrusion Detection Systems (IDS): Implement firewalls and IDS across
all network layers (IP/MPLS, LTE, OFC) to detect and mitigate unauthorized access or
cyber-attacks.
(ii) Encryption: Encrypt data transmission, particularly for mission-critical communications
such as train signaling, control data, and CCTV footage.
(iii) Access Control: Use strict access control mechanisms, ensuring that only authorized
personnel can access network resources, CCTV feeds, or VoIP systems.
(iv) Regular Security Audits: Conduct periodic security audits and vulnerability assessments
across all systems to ensure compliance with railway safety and cyber security
standards.
Figure 20 - Maintenance Support System at Zonal Railway NOC and Central NOC
10.3.2 Training and Development
b) Incident Response: Training on how to handle incidents, prioritize alarms, and resolve
network outages quickly.
c) 24/7 Shifts: Staffing must follow a shift pattern to ensure continuous coverage, with
rotational shifts to avoid fatigue and ensure efficient monitoring.
d) Compliance with Railway and Government Regulations: The NOC must ensure
compliance with Indian Railways operational policies, cyber security guidelines, and
other government regulations.
The Indian Railways' vast and complex communication network, which includes both
IP/MPLS and LTE infrastructure, requires efficient management across various levels.
Apart from central NOC, establishing both Zonal and Divisional NOCs ensures
effective coordination, monitoring, and maintenance of the network, with clear
delineation of responsibilities to prevent overlaps and streamline operations.
The Zonal NOCs are higher-level operational centres responsible for overseeing and
coordinating the network's performance across their respective zones. Each zone
comprises several Divisions, and the Zonal NOC plays a critical role in supervising the
Divisional NOCs while ensuring alignment with broader network policies and goals.
(i) Overall Network Oversight: Zonal NOCs provide centralized control and monitoring of
the entire network within their zones, covering multiple divisions.
(ii) Coordination with Central NOC: Act as the intermediary between the central (All-India)
NOC and divisional NOCs. They receive directives from the central NOC and ensure
that these are implemented at the divisional level.
(iii) Strategic Planning and Optimization: Zonal NOCs are responsible for long-term network
planning, optimization, and capacity management. This includes traffic engineering for
the IP/MPLS backbone and ensuring LTE performance across divisions.
(iv) Major Incident Management: Handle high-priority incidents, such as large-scale network
outages, inter-divisional connectivity failures, or cyber-attacks. They coordinate with
divisional NOCs for incident response and work closely with the central NOC for critical
escalations.
(v) Performance Monitoring & Reporting: Zonal NOCs generate performance reports
covering the entire zone and submit them to the central NOC. They also track and
analyze network KPIs across divisions, including bandwidth usage, latency, and uptime.
(vi) Security Management: Zonal NOCs oversee security implementations, including
encryption and intrusion detection systems, ensuring all divisions adhere to security
standards. They also conduct security audits and manage regional cyber threats.
(vii) Inter-Zonal Coordination: Facilitate communication and coordination between different
zones, especially for network traffic that crosses zonal boundaries, ensuring seamless
inter-zonal operations.
(viii) Policy Implementation: Ensure that Indian Railways' network policies and procedures,
issued by the central NOC, are uniformly implemented across all divisional NOCs.
The Divisional NOCs are responsible for localized, day-to-day network management
and operations within their specific divisions. They ensure smooth functioning of the
network infrastructure at the ground level and handle routine tasks like fault detection,
troubleshooting, and local optimization.
(i) Escalation Process: Local issues that cannot be resolved at the divisional level are
escalated to the Zonal NOC. If the Zonal NOC cannot resolve a problem, it is further
escalated to the Central NOC. This tiered approach ensures efficient troubleshooting
and minimizes downtime.
(ii) Incident Sharing: In the event of a significant network incident affecting multiple
divisions, Zonal NOCs will take charge of coordinating with multiple Divisional NOCs to
ensure a coordinated response.
(iii) Data & Performance Reports: Divisional NOCs are responsible for collecting local
network performance data, which they share with the Zonal NOC. This allows for higher-
level analysis, optimization, and planning at the zonal level.
(iv) Security Compliance & Audits: Divisional NOCs implement the security protocols while
Zonal NOCs oversee compliance and conduct regular audits to ensure security
standards are met across all divisions.
10.5 Staffing to Manage Multiple Network Management Systems (NMS) from One NOC
When a single Network Operations Center (NOC) is tasked with handling multiple
Network Management Systems (NMS), such as those for IP/MPLS, LTE, Optical Fiber
Cable (OFC) networks, CCTV, telecommunication exchanges, and other networks,
careful planning is needed to ensure efficient management and monitoring. This
requires specialised staff and cross-functional teams with clear responsibilities across
different systems.
b) 24/7 Operations: Since the NOC operates 24/7, staffing should be planned in shifts to
ensure continuous coverage, with rotating teams for different NMS.
c) Tiered Support Levels: Establish a tiered support structure to handle issues based on
complexity. This ensures that basic issues are resolved at the lower levels (L1), while
more complex issues escalate to specialised engineers (L2, L3).
d) Automation and Efficiency: Use automation tools where possible to reduce manual
intervention. This may reduce staffing requirements, especially for routine monitoring
tasks.
a) NOC Manager (1 per NOC): 1 manager, typically operating in regular business hours
with escalations available 24/7.
● Responsibilities: Oversee the entire NOC, manage staff across different shifts,
coordinate operations for all NMS, and ensure seamless integration of multiple systems.
b) Network Engineers:
● Network engineers will be the core of the NOC team, managing different networks such as IP/MPLS, LTE, OFC, and other critical systems. These engineers will be divided by their areas of expertise. At least two levels of engineers will be available with clear responsibilities: L1 & L2.
c) IP/MPLS & LTE Engineers: 3-4 engineers per shift to cover the IP-MPLS & LTE networks. Total Staffing: 10-12 engineers.
● Responsibilities: Monitor and manage the IP/MPLS backbone and LTE network,
troubleshoot routing and switching issues, configure QoS, and optimize bandwidth
usage.
d) OFC Engineers:
● Responsibilities: Monitor fiber cable health, detect and troubleshoot fiber cuts or signal degradation, and coordinate repairs.
e) CCTV System Engineers: 1-2 engineers per shift to monitor the health and
performance of the CCTV network. Total Staffing: 5-6 engineers.
● Responsibilities: Ensure all cameras are operational, monitor video feeds, manage
storage systems (NVR/DVR), and perform video analytics.
g) Security Engineers: 1 security engineer per shift to cover overall security monitoring.
Total Staffing: 4-5 security engineers.
● Responsibilities: Manage network security across all systems (IP/MPLS, LTE, OFC,
CCTV, and telecommunication), ensure compliance with security policies, monitor for
intrusions, and perform regular audits.
h) Performance Analysts: 1 performance analyst per shift. Total Staffing: 3-4 analysts.
i) Incident Response Team: 2-3 team members per shift to handle immediate
troubleshooting across systems. Total Staffing: 6-8 incident response personnel.
j) System Administrators:
● Responsibilities: Manage servers and software that host the NMS platforms, ensure backups, and handle system-level issues across different network management systems.
k) Helpdesk & Support Staff: 2-3 support staff per shift. Total Staffing: 6-9 helpdesk
staff.
● Responsibilities: First-line support for minor issues, logging incidents, and escalating
unresolved problems to network engineers. Handle basic troubleshooting for all
systems.
l) Shift Structure:
● Three Shift Model: To ensure 24/7 coverage, the NOC should be divided into three
shifts (morning, evening, night). A balanced staff distribution ensures that all NMS are
covered round-the-clock.
● Rotational Staffing: Engineers and support staff should rotate between shifts to
maintain staff flexibility and avoid fatigue.
10.5.5 Overall Staffing Summary:
NOC Level        Total Staff (approx.)   Key Roles
Central NOC      50-60                   NOC Manager, Network Engineers, Security Engineers, Performance Analysts, Incident Response Team, System Admins, Helpdesk
Zonal NOC        40-50 per Zone          Zonal NOC Manager, Network Engineers, Security Engineers, Performance Analysts, Incident Response, System Admins, Helpdesk
Divisional NOC   25-30 per Division      Divisional NOC Manager, Network Engineers, Incident Response Engineers, System Admins, Helpdesk
This staffing structure ensures comprehensive network management across all levels,
with specialized teams to monitor, optimize, and respond to incidents for the efficient
functioning of Indian Railways' communication systems.
The NOC for managing the different networks of IR is very important and crucial. The key aspects taken into consideration while planning it are detailed below:
a) Location:
(i) A central, strategically located facility, preferably near a major city with easy access to
transportation and emergency services.
(ii) A backup Disaster Recovery (DR) site in a geographically distant location to maintain
continuity in case of NOC failure.
b) Infrastructure:
(i) High-speed connectivity, redundancy in power supply (including UPS systems and backup generators), and cooling systems.
(ii) Secured server rooms for hosting management software, storage systems and network devices.
(iii) A high-bandwidth internet connection for real-time data and analytics monitoring.
c) Hardware:
(i) High-performance servers for running NMS software, storage devices for logs and
backups, and large display screens for monitoring live network status.
(ii) Routers, switches, and firewalls to manage the IP/MPLS and LTE backbone network.
d) Software:
(i) Network Management Software (NMS) for end-to-end network monitoring and control,
integrated with automation tools for configuration, fault detection, and diagnostics.
(ii) Security Information and Event Management (SIEM) software for real-time threat
monitoring and incident response.
(iii) Performance Management Systems to track network KPIs, including latency, bandwidth
utilization, and QoS.
(iv) Collaboration Tools for efficient communication among teams, including helpdesk
software for ticket management and resolution.
e) Redundancy:
(i) Dual power supplies, dual internet service providers (ISPs), and dual access to all critical systems.
(ii) Backup network infrastructure to minimize downtime and enable failover in the event of hardware failure.
The size of the NOC depends on factors such as the scale of operations, the number of network elements monitored, and the staff needed to operate it. A breakdown of the components that determine the size is given below:
a) Main Control Room:
Each NOC engineer needs a dedicated workstation equipped with monitors (multi-screen setups for monitoring). Typically, 5 to 15 engineers are required per shift, depending on the size of the Railway Division. Staff includes network engineers, system administrators, security specialists and incident managers.
Large video walls or display units present real-time status updates on network performance, CCTV footage, LTE signal coverage, etc. Typically, a video wall occupies a significant portion of the NOC, requiring 30-50 square meters depending on its size.
Dedicated desks are provided for NOC managers and supervisors who oversee operations. These desks should have an unobstructed view of the main video wall.
b) Space for Server Room
Dedicated Server Room for housing critical IT infrastructure like servers, storage,
routers, switches, and backup systems. Redundant Power Supply, including UPS and
backup generators. Cooling Systems for equipment protection.
c) Meeting Room
A small conference room within or adjacent to the NOC for coordination meetings,
incident debriefs, and high-level strategy sessions. Size: 20 to 30 square meters.
d) Break Room:
The NOC operates 24x7, so staff will need break areas with amenities. Size: 20 to 30 square meters.
The NOC for a railway division managing a unified NMS for IP/MPLS, LTE, OFC, CCTV, and exchange systems must be designed with adequate space, equipment, and facilities to ensure 24x7 monitoring, fault management, and security. It broadly includes:
(i) Main Control Room with video walls and workstations for NOC engineers.
(ii) Server Room for housing critical IT infrastructure.
(iii) Break Rooms and Meeting Areas for staff comfort and coordination.
For a medium-sized railway division, the NOC should occupy an estimated space of
200 to 300 square meters, with room for future expansion and scalability as technology
evolves.
10.7.1 For management of the IP-MPLS and LTE networks, an all-India Network Operation Centre as well as Zonal and Divisional NOCs shall be created. Apart from managing these two networks, other networks like OFC, CCTV, Railnet, PBX, VoIP etc. operational in the Divisions shall also be managed through these NOCs.
10.7.2 The all-India or Central NOC shall be located strategically in a DC-DR configuration. One location can be New Delhi and the other can be Secunderabad.
10.7.3 The Zonal NOC can be co-located with the Divisional NOC; however, the roles of the Zonal NOC and the Divisional NOC shall be handled by different sets of people.
10.7.4 Every NOC shall be headed by a NOC manager of appropriate level. In the centralised NOC, the NOC manager shall be of SA Grade with a supporting team; in the Zonal NOC, the manager shall be of JA Grade with a supporting team; in the Division, the manager can be of Senior Scale with a supporting team.
Page 82 of 131
10.7.5 The NOC has to be manned 24x7 by deploying the required staff in each shift. The recommended manning of staff is as given in the staffing summary table under para 10.5.5.
10.7.7 Wherever the Zonal NOC and Divisional NOC are co-located, staffing can be planned depending upon the common activities which can be handled by the same set of staff.
10.7.8 Specialized network engineers for the different networks, i.e. IP-MPLS, LTE etc., shall be planned in two levels. Normally, L1 engineers shall deal with an issue and resolve it; if it cannot be resolved by them, it shall be escalated to Level-2 engineers.
10.7.9 Outsourcing of different system experts is also recommended for most of the activities
of the NOC.
10.7.10 Clear responsibilities of the different levels of NOCs shall be defined to avoid any duplication of work.
10.7.11 Preferably, a separate NOC building of suitable size shall be planned in every Division. The size of these divisional NOCs can be approximately in the range of 200-400 sqm, based on the size of the Division and the available networks.
Annexure-1
Annexure-2
Annexure-3
General Specifications:
1. The Mission Critical IP-MPLS Network shall be based on highly resilient, multiservice
technology to provide traffic engineered service assurance and bandwidth guaranteed
behaviour for mission critical, delay sensitive and bandwidth intensive services &
applications.
2. The network design should provide the capability to engineer traffic links between nodes with user-defined bandwidth guarantees and QoS profiles. Allocation of user-configurable queues should be supported for differentiated treatment of traffic in terms of speed and reliability. It should be possible to re-route traffic from failed routes to protected routes with no impact on active sessions.
3. The network should be implemented with standards-based protocols as defined by IETF, IEEE, ITU-T, etc.
4. The router/series should be compliant/certified for IEEE 1613, IEEE 1613.1, IEC 61000-
6-5, IEC 61850-3, IEC/AS 60870.2.1, EN 50121-4 standards or equivalent standards.
5. All the network equipment should be IPv4 and IPv6 fully capable and should fully
support IPv4 and IPv6.
6. The router should support IPv4 routing, Border Gateway Protocol (BGP), Intermediate System-to-Intermediate System (IS-IS), Open Shortest Path First (OSPF), Virtual Router Redundancy Protocol (VRRP) or equivalent, IPv6 routing, BGP Prefix Independent Convergence, and Segment Routing.
7. High-availability features like node protection, path protection and link protection shall be supported as per media availability.
8. Should be able to support multiple VPNs for different services, with traffic engineering defined.
9. All the routers and the routing EMS/SDN controller shall be of the same make (OEM) for seamless integration and interworking. The NMS/EMS must be capable of supporting/integrating/managing multi-OEM devices (routers and switches).
10. All licenses for the stated functionalities and features must be built into the provided IP-MPLS solution from Day 1.
11. All the requested interfaces must be equipped with perpetual licenses, without any year-based capping on interface usage, from Day 1. No interface shall stop working after the end of warranty and support.
12. The same routing EMS system must be capable of being upgraded to an SDN controller through additional licenses/plugins. In case a separate solution is required for SDN, it must be included in the solution from Day 1 without any additional cost to the employer.
13. It may be noted that in the specification wherever support for a feature has been asked
for, it will mean that the feature should be available without requirement of any other
hardware/software/licenses. Thus, all hardware/software/licenses required for enabling
the support/feature shall be included in the offer.
14. The ‘slot’ for router means a main slot or full slot on the router chassis. Only such a slot
shall be counted towards determining the number of free slots. Any sub slot or daughter
slot or a half slot shall not be considered as a slot.
15. The bidder must supply the same make & model of controller cards, chassis hardware
and interface modules for all locations, for common sparing by the buyer. Different types
of controller cards, chassis, modular cards, etc. are not allowed to be quoted in the
solution.
16. The IP-MPLS Router OEM / OEM-certified trainer must provide OEM training to the
buyer at the OEM premises or any other location, as per the training schedule mentioned
in the BoQ.
17. Router should be chassis based & should have modular architecture for scalability.
Chassis should be 19” rack mountable type.
18. Should have redundant and hot-swappable power supplies and fans.
19. All interface modules, line cards should be hot swappable for high availability.
20. All interfaces on the routers shall provide wire-rate throughput.
21. All line-card slots should be universal. All the line-cards should be capable of being
configured in any of the given line-card slots without restriction.
22. The modular operating system shall run all critical functions, such as the various routing
protocols, the forwarding plane and the management functions, in separate memory-
protected modules. Failure of one module shall not impact operation of the rest of the OS.
23. Shall support on-line insertion and removal of cards, fast reboot for minimum network
downtime, and VRRP or equivalent.
24. Shall support link aggregation using LACP as per IEEE 802.3ad and MC-LAG or EVPN
Multihoming.
25. Shall support MPLS Provider/Provider Edge functionality. MPLS VPN, MPLS mVPN
(Multicast VPN), AS VPN, DiffServ Tunnel Modes, MPLS TE (Fast re-route), DiffServ-
Aware TE, Inter-AS VPN, Resource Reservation Protocol (RSVP), VPLS, VPWS,
Ethernet over MPLS, EVPN, Segment routing and Segment routing Traffic engineering.
26. The router should support NETCONF, YANG and other modern system-management
protocols (an illustrative sketch follows at the end of this list).
27. The routers shall support both L2 and L3 services on all interfaces.
28. The router should support BGP link-state (BGP-LS).
29. The Router should support various software models/sensors for capturing different
health parameters from the devices.
30. The router shall have the ability to interact with open standard based tools.
31. Shall support traffic classification using various parameters like source physical
interface, source/destination IP subnet, protocol type (IP/TCP/UDP),
source/destination ports, IP Precedence, 802.1p, MPLS EXP and DSCP.
32. Shall support Strict Priority Queuing or Low Latency Queuing to support real-time
applications like voice and video with minimum delay and jitter.
33. Congestion Management: Priority queuing, Class based weighted fair queuing.
34. Traffic Conditioning: Committed Access Rate/Rate limiting.
35. Platform must support hierarchical shaping, scheduling and policing for control of
upstream and downstream traffic.
36. Router should have a minimum of 3 levels of scheduling for H-QoS, with per-VLAN QoS.
Shall support at least 8 hardware queues for each GE interface on the router.
37. Support Access Control Lists to filter traffic based on source & destination IP subnet,
source & destination port, protocol type (IP, UDP, TCP, ICMP etc.) and port range.
38. Support per-user Authentication, Authorization and Accounting through RADIUS or
TACACS.
39. Shall support multiple privilege-level authentication for console and telnet access,
through a local database or through an external AAA server.
40. Support for monitoring of Traffic flows for Network planning and Security purposes.
41. Display of input and output error statistics on all interfaces.
42. Display of Input and Output data rate statistics on all interfaces.
43. Router shall support System & Event logging functions as well as forwarding of these
logs onto a separate Server for log management.
44. Router shall have Debugging features to display and analyse various types of packets.
45. Should support out-of-band management through console / external modem for remote
management.
46. Event and system logging: Event and system history logging functions shall be
available. The router shall generate system alarms on events. A facility shall be provided
to direct selective logging of events onto separate hardware where analysis of the logs
can be carried out.
47. After fulfilling Day One interface requirements, the router must have minimum of 2
interface slots vacant for future expansion.
48. Shall support online insertion and removal (OIR) that is non-disruptive in nature. Online
insertion and removal of one line card shall not lead to ANY packet loss for traffic flowing
through other line cards, for both unicast and multicast traffic.
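
Item 26 above requires NETCONF/YANG support on the routers. As an illustration of what
that enables, the following minimal sketch (in Python, using the open-source ncclient
library) retrieves the running configuration from a NETCONF-capable router; the host
address and credentials are placeholders, not values from this report.

    # Illustrative sketch only: fetch the running configuration from a router
    # that exposes NETCONF over SSH (cf. item 26). Host, port and credentials
    # are placeholders.
    from ncclient import manager  # pip install ncclient

    with manager.connect(
            host="198.51.100.1",       # hypothetical management address
            port=830,                  # standard NETCONF-over-SSH port (RFC 6242)
            username="noc-operator",   # placeholder credentials
            password="changeme",
            hostkey_verify=False) as m:
        # <get-config> on the running datastore; the reply is XML structured
        # according to the router's YANG data models.
        reply = m.get_config(source="running")
        print(str(reply)[:500])        # show the beginning of the XML reply

The same session object can push configuration via <edit-config>, which is the basis for
the model-driven management contemplated in items 26 and 29.
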
Super Core Router:
1. Should support redundant controller cards (1+1) and redundant fabric cards (N+1) for
high availability.
2. The router should have capability of minimum 2 Million IPv4, 500K IPv6 routes.
3. The router should support minimum 128K MAC addresses.
4. Router should support 6k multicast routes.
5. Router should support minimum 8K MPLS PWE3.
6. Router should support min 8K VPLS.
7. Router should support 2K MPLS L3 VPN.
8. The router should support 100K labels and 10 label stacks.
9. Should support 64 ECMP (equal cost multipath).
10. Ability to configure hierarchical queues in hardware for IP QoS at the egress to the edge.
Minimum 128K queues per system.
11. Router shall support minimum non-blocking capacity of 6 Tbps full-duplex or higher at
full services scales.
12. The router must support 1GE, 10GE, 40GE, 100GE, 400GE interface pluggable up to
80Km distances.
13. The router must support multi-rate interfaces: 1/10GE, 10GE/25GE, 40GE/100GE,
100GE/400GE.
14. The router must support 100GE, 200GE, 400GE interfaces with coherent optics for
longer distances over dark fiber.
15. Router should support 400 Gbps full-duplex per slot capacity.
16. The router must have capability to support following minimum interfaces:
• 8 x 100/400GE,
• 8 x 40/100GE,
• 16 x 1/10GE
17. The Router should be supplied with the following interfaces on Day-One:
18. 8 x 100GE interfaces distributed across a minimum of two line cards, with 100G LR optics.
19. 8 x 40GE interfaces distributed across a minimum of two line cards, with 40GE LR optics.
20. 16 x 10GE interfaces, equipped with 10GE LR optics.
21. Operating temperature: +5°C to +40°C guaranteed.
22. Humidity: 5% to 85% Non-Condensing.
23. Super core routers and Divisional routers should have common and interchangeable
cards.
Divisional Router:
1 Should support redundant controller cards (1+1) and redundant fabric cards (N+1) for
high availability.
2 The router must be equipped with a fan filter to avoid accumulation of dust on the main board.
3 The router must support on-board GNSS receiver.
4 The router should support minimum 128K MAC addresses.
5 The router should have capability of minimum 240K IPv4, 120K IPv6 routes (FIB).
6 Router should support 4K MPLS PWE3.
7 Router should support 4K VPLS.
8 Router should support 1K MPLS L3 VPN.
9 The router should support 12K labels and 10 label stack depth.
10 Should support minimum 16 ECMP (equal cost multipath).
11 Ability to configure hierarchical queues in hardware for IP QoS at the egress to the edge.
Minimum 20k egress/VoQ egress hardware queues.
12 The router must support 1GE, 10GE, 40GE, 100GE interface pluggable up to 80Km
distances.
13 Router should support minimum 200 Gbps full-duplex per slot capacity.
14 Router shall support minimum non-blocking throughput capacity of 1200 Gbps full-
duplex or higher at full services scale.
15 The router must support multi-rate interfaces: 1/10GE, 10/25GE, and 40/100GE.
16 The router must support line-rate 10GE, 100GE interfaces with both grey and coloured
pluggable.
17 The router must support 100GE interfaces with coherent optics for longer distances over
dark fiber.
18 The router must have capability to support following minimum interfaces:
• 10 x 40/100GE,
• 20 x 1/10GE
19 The Router should be supplied with the following interfaces on Day-One:
20 8 x 40GE interfaces distributed across a minimum of 2 line cards, with 40G LR
optics.
21 16 x 10GE interfaces distributed across a minimum of two (2) interface slots.
22 Operating temperature: +5°C to +40°C guaranteed.
23 Humidity: 5% to 85% Non-Condensing.
24 Super core routers and Divisional routers should have common and interchangeable
cards.
Note: In case any technical specifications coincide with RDSO TAN v2.0, the tender
clause will supersede.
Annexure-4
1. Network Provisioning Platform should be based on open, secure, and scalable software
for optimizing network infrastructure and operations.
2. The Network Provisioning Platform should support APIs for customization and integration.
3. The Network Provisioning Platform should support RADIUS/TACACS-based access control.
4. The proposed Network Provisioning Platform shall be deployed with High Availability
(HA) and geo-redundancy. The bidder shall describe in detail the network architecture
for the proposed Network Provisioning Platform to ensure continuous operations and
provisioning at both sites.
5. The Network Provisioning Platform shall support capability for monitoring and
configuration provisioning of the IP-MPLS network through a centralized NOC.
6. The Network Provisioning Platform shall support a single management system for the
IP-MPLS network to provide ease of operation.
7. The Network Provisioning Platform should support CORBA/JAVA/XML/REST-
API/Kafka interfaces to facilitate integration for end-to-end processes such as flow-
through service provisioning and service assurance.
8. The proposed Network Provisioning Platform shall support North Bound Interface (NBI)
REST-API protocol.
9. The proposed Network Provisioning Platform shall have the capability to expose the
following information via the NBI:
a. Network and service topology and details (e.g., bandwidth, customer info, service info)
b. Telemetry data (e.g., latency, utilization)
c. Network slice and service lifecycle API (CRUD: create, read, update and delete).
10. The proposed Network Provisioning Platform shall support, at minimum, the following
South Bound Interface (SBI) protocols: SNMPv1, v2 & v3; NETCONF (RFC 6241)/YANG
(RFC 6020 & RFC 7950); BGP-LS (RFC 7752); PCEP (RFC 5440); and telemetry (please
specify the telemetry protocol supported).
11. The proposed Network Provisioning Platform shall collect telemetry information from the
managed devices:
a. Bandwidth utilization for links and ports
b. Latency of paths and links
12. The Network Provisioning Platform shall support a client–server based architecture, the
client being GUI/web-browser based access with a secure interface to the server.
13. The role of the Network Provisioning Platform is to control and manage all aspects of
the domain, such as Fault, Configuration, Accounting, Performance and Security (FCAPS),
to ensure maximum usage of the devices' resources.
14. The Network Provisioning Platform should be supplied with all applicable perpetual
feature licenses from day one. No feature should have year-based capping on usage.
15. Network Provisioning Platform should allow the user to zoom down to the port level of
any given card /equipment.
16. The Network Provisioning Platform and the network elements shall provide Operation,
Administration, Maintenance & Provisioning (OAM&P) functions in accordance with the
Telecommunications Management Network (TMN) concept described in ITU-T
Recommendations Y.1714 (01/2009)/Y.1711(02/2004) or equivalent standards /
recommendations.
17. The Network Provisioning Platform shall provide proactive and efficient monitoring
that helps to detect and avoid potential network backbone problems.
18. The Network Provisioning Platform should be flexible and modular. It should be able to
scale from a single-machine configuration to a powerful and open client/server
architecture.
19. The Network Provisioning Platform should support administrative operations that are
performed repeatedly, such as NE configuration backup, software image download,
operator login/logout attempts, etc.
20. The Network Provisioning Platform must support the below network element software
management:
a. Loading of new software images.
b. Management of multiple versions of software.
c. Installation of software updates.
d. Software download status reporting.
e. Administrator authorization for the loading of software.
f. Coordination of the software download to multiple end elements based on a single
software source.
g. Version control for all network elements.
21. The Network Provisioning Platform GUI should allow authorised personnel to create
and activate end-to-end services.
22. The Network Provisioning Platform should be able to provision, configure and manage
the network for DWDM and IP-MPLS.
23. The Network Provisioning Platform should allow service and equipment provisioning.
24. The proposed Network Provisioning Platform shall have the capability to automatically
retrieve network information and create the topology upon network initiation.
25. The proposed Network Provisioning Platform shall have the capability to automatically
display the network topology, including physical and logical links between network
elements.
26. The proposed Network Provisioning Platform shall have the capability to automatically
update the topology information upon network or service changes.
27. The Network Provisioning Platform should support health monitoring of all modules and
indicate health of the system and connectivity.
28. The Management System shall support the provisioning of:
a. All NE parameters.
b. Threshold Crossing Alert (TCA) alarm severity.
29. Alarms should be categorised into different categories, e.g. Emergency/Critical,
Flash/Major, Immediate/Minor, Priority/Warning, Deferred/Informative, depending upon
the severity of the alarm.
30. Network Provisioning Platform should be able to display the Network Elements and the
links in different colours depending upon their status for healthy, degraded and critical
alarm conditions.
31. Dashboard should indicate the number of active alarms with filtering options based on
the period, duration, severity, event type and location.
32. The Network Provisioning Platform should support integration with an email system
for informing network admin user(s).
33. All failure and restoration events should be time-stamped.
34. The GUI shall provide the ability to create, delete and modify topology views of the
network.
35. The solution must support Service Level Agreements & lifecycle management, including
version control, status control, effectivity and audit trail, to ensure accountability for
the project.
36. The solution must have the ability to define and calculate key performance indicators
from an end-to-end business service delivery perspective related to the project under
discussion.
37. The solution should support the requirements of auditors carrying out a technical audit
of the whole system.
38. The solution should support effective root cause analysis, with capabilities for
investigating the root causes of failed service levels, and must make it possible to find
the underlying events that cause a service-level contract to fail.
39. The solution should provide historical and concurrent service level reports for the project
in order to ensure accountability of the network performance.
40. Automatic report creation, execution and scheduling; must support a variety of export
formats including Microsoft Word, Adobe PDF, HTML, etc.
41. The solution must support templates for report generation, report filtering and
consolidation, and context-sensitive drill-down on specific report data, to drive
standardization and governance of the project.
42. The solution must support security for drill-down capabilities in dashboard reports,
ensuring visibility for relevant personnel only.
43. Support real-time reports as well as historical analysis reports (like trend, top-N,
capacity planning reports, etc.).
44. The proposed Network Provisioning Platform shall automate the provisioning of services
and paths for IP network services based on the following (a minimal illustration of such
path computation is sketched at the end of this annexure):
a. Latency
b. Bandwidth
c. Shortest path
d. User defined (custom)
45. The proposed Network Provisioning Platform shall have the capability to create and
update LSPs in real time.
46. The proposed Network Provisioning Platform shall calculate and configure the LSP
path in the IP network.
47. The proposed Network Provisioning Platform shall support the discovery, control and
creation of main and protection LSPs.
48. The proposed Network Provisioning Platform shall automatically provision the LSPs for
both directions.
49. The proposed Network Provisioning Platform shall provide the capability to provision
full-meshed LSPs for services running on more than two nodes.
50. The proposed Network Provisioning Platform shall be able to provision the service
based on the following policies:
a. Shared Risk Link Group (SRLG)
b. Diverse path
51. The policies shall be capable of being enforced via strict and preferred modes for
diverse-path provisioning.
52. The proposed Network Provisioning Platform shall have the capability to provision and
manage LSPs using PCEP and NETCONF.
53. The proposed Network Provisioning Platform shall automate the provisioning for all IP
network order types, i.e., new install, modify and terminate.
54. The bidder shall describe in detail the capability of IP service provisioning.
55. The proposed Network Provisioning Platform shall support the provisioning of IP
services, i.e., L2VPN, L3VPN and internet services.
56. The proposed Network Provisioning Platform shall analyse and optimize the network
path using:
a. RSVP-TE
b. Segment Routing SR-TE
57. The proposed Network Provisioning Platform shall obtain link/path topology and
utilization from IP/MPLS network and provide global network view.
58. The proposed Network Provisioning Platform shall store the topology and utilization.
59. The proposed Network Provisioning Platform shall perform path computation and
distribute the traffic between multiple paths for optimization or to avoid congestion.
60. The proposed Network Provisioning Platform shall support the following LSP types:
a. PCE-initiated LSP - creation and management
b. PCC-delegated LSP - discovery and management
c. PCC-initiated LSP - discovery
61. The proposed Network Provisioning Platform shall have the capability to route LSPs
onto alternative paths around affected nodes and links that are under a maintenance event.
62. The bidder shall describe other IP-MPLS TE Optimization features supported.
63. The proposed Network Provisioning Platform shall be accessible via a secured Web-
based interface.
64. The proposed Network Provisioning Platform shall provide visualization of the managed
network elements (i.e., bayface layout).
65. The proposer shall ensure that the communication between the Network Provisioning
Platform and the managed devices is secured.
66. The proposed Network Provisioning Platform shall provide real-time views of available
resources.
67. The proposed Network Provisioning Platform shall display information of the following
managed devices:
a. Operating system versions
b. IP addresses
c. License
d. Connection status
e. Physical and logical interfaces
69. The proposed Network Provisioning Platform shall provide a feature to validate the delta
configuration before deploying the configuration changes to the device (see the sketch
at the end of this annexure).
70. The proposed Network Provisioning Platform shall push configuration, firmware and
software updates to devices.
71. The proposed Network Provisioning Platform shall back up and restore configuration
files for the managed devices.
72. The proposed Network Provisioning Platform shall back up and restore its own
configuration files.
73. The proposed Network Provisioning Platform shall have the capability to remotely
upgrade managed NEs to a new software release.
74. The proposed Network Provisioning Platform shall indicate the following status of
upgrading activities:
a. Mismatch of software version for a particular NE.
b. Progress indicator (percentage)
c. Checklist of items that indicate the upgrading is successful
75. In the event of a software upgrade failure, the proposed Network Provisioning Platform
shall allow automatic fallback to the previous running software version.
76. The proposed Network Provisioning Platform shall re-synchronise its configuration with
the current state of managed NEs once the connection is re-established.
77. Any configuration changes made by Network Provisioning Platform shall be
synchronized in real-time to the managed NE automatically.
78. The proposed Network Provisioning Platform shall allow for configuration rollback to
previous versions of the configuration or return to the last saved configuration.
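
Items 44-51 require the platform to compute latency/bandwidth-constrained paths and
SRLG-diverse protection paths, and item 69 requires a delta-configuration check before
deployment. The two minimal sketches below (in Python) illustrate the underlying ideas
only; the topology, metrics, SRLG tags and configuration snippets are invented for the
examples and are not drawn from any IR network data.

    # Sketch 1: constrained and SRLG-diverse path computation (cf. items 44-51),
    # using the open-source networkx library. All data below is invented.
    import networkx as nx  # pip install networkx

    g = nx.Graph()
    # (node, node, latency in ms, available bandwidth in Gbps, SRLG tag)
    links = [("A", "B", 9, 100, "srlg-1"), ("B", "C", 8, 100, "srlg-2"),
             ("A", "D", 7, 100, "srlg-3"), ("D", "C", 9, 40, "srlg-3"),
             ("A", "C", 22, 10, "srlg-4")]
    for u, v, lat, bw, srlg in links:
        g.add_edge(u, v, latency=lat, bw=bw, srlg=srlg)

    def constrained_path(graph, src, dst, min_bw):
        """Latency-optimal path over links that meet a bandwidth floor."""
        ok = nx.subgraph_view(graph, filter_edge=lambda a, b: graph[a][b]["bw"] >= min_bw)
        return nx.shortest_path(ok, src, dst, weight="latency")

    main = constrained_path(g, "A", "C", min_bw=50)            # -> A-B-C
    used = {g[a][b]["srlg"] for a, b in zip(main, main[1:])}   # SRLGs of main path
    # Protection path avoiding the main path's SRLGs (diverse-path policy).
    diverse = nx.subgraph_view(g, filter_edge=lambda a, b: g[a][b]["srlg"] not in used)
    backup = nx.shortest_path(diverse, "A", "C", weight="latency")  # -> A-D-C

    # Sketch 2: previewing the configuration delta before deployment (cf. item 69),
    # using Python's standard difflib. The snippets are invented.
    import difflib

    running = ["interface ge-0/0/1", " description uplink", " mtu 9100"]
    candidate = ["interface ge-0/0/1", " description uplink", " mtu 9192"]
    # A unified diff shows exactly what would change before anything is pushed.
    for line in difflib.unified_diff(running, candidate,
                                     fromfile="running", tofile="candidate", lineterm=""):
        print(line)

In a real deployment the candidate configuration would come from the platform's service
models and the running configuration from the device (e.g., via NETCONF), with the diff
presented to the operator for validation.
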
Annexure-5
By integrating SDN with IP/MPLS, service providers can improve operational efficiency,
reduce costs, and enable faster service innovation and provisioning. SDN decouples the
control plane from the data plane in the MPLS network, centralizing network intelligence
and policy-based traffic management.
1. SDN Controller: This is the brain of the SDN architecture. The SDN controller centralizes
network control, making routing decisions and managing traffic flows based on the
network’s global state. In an IP/MPLS environment, it can control MPLS label distribution
and IP routing dynamically.
2. Data Plane: This consists of the forwarding devices, such as MPLS routers and switches,
that carry out the actual data forwarding based on the rules set by the controller. MPLS
networks use Label-Switching Routers (LSRs) and Label Edge Routers (LERs) to handle
traffic flows.
3. Southbound APIs: Protocols such as OpenFlow or NETCONF enable communication
between the SDN controller and the data plane devices. These APIs are responsible for
programming the forwarding tables in MPLS routers based on controller decisions.
4. Northbound APIs: These are interfaces between the SDN controller and applications or
orchestration platforms. They provide policy-based controls, traffic engineering, network
analytics and service provisioning (a hypothetical sketch of such an interaction follows
this list).
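
As a concrete (but entirely hypothetical) illustration of the northbound interaction
described in item 4, and of the REST-API NBI required of the Network Provisioning
Platform in Annexure-4 (items 8-9), the sketch below shows how an orchestration
application might call a controller's REST API in Python. The endpoint URL, token and
JSON schema are invented; the actual NBI is whatever the supplied platform exposes.

    # Illustrative sketch only: an application calling a controller's
    # northbound REST API. URL, token and payload schema are hypothetical.
    import requests  # pip install requests

    BASE = "https://controller.example.invalid/api/v1"  # placeholder endpoint
    HDRS = {"Authorization": "Bearer <token>"}           # placeholder credential

    # Read topology and telemetry exposed over the NBI.
    topology = requests.get(f"{BASE}/topology", headers=HDRS, timeout=10).json()

    # Service-lifecycle "create": request an LSP with TE constraints.
    lsp = {"name": "video-transport", "src": "node-A", "dst": "node-C",
           "constraints": {"min_bw_gbps": 10, "max_latency_ms": 30}}
    resp = requests.post(f"{BASE}/lsps", json=lsp, headers=HDRS, timeout=10)
    resp.raise_for_status()
    print("created LSP:", resp.json().get("id"))

The controller translates such requests into southbound operations (e.g., NETCONF or
PCEP) on the data-plane devices, as outlined in item 3 above.
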
1. Traffic Engineering:
• In traditional MPLS networks, traffic engineering (TE) relies on distributed protocols like
RSVP-TE or Segment Routing for setting up Label-Switched Paths (LSPs). With SDN,
the controller has a global view of the network, enabling more efficient and dynamic traffic
engineering decisions.
• The SDN controller can optimize the placement of LSPs across the MPLS network based
on real-time traffic demand, congestion status, and Quality of Service (QoS)
requirements.
2. Dynamic LSP Management:
• SDN facilitates the dynamic establishment, modification, and teardown of LSPs based
on traffic conditions, user demands, and application requirements.
3. Network Slicing:
• SDN enables network slicing in MPLS networks, which allows the partitioning of the
MPLS infrastructure into multiple virtual networks, each with its own distinct traffic
policies and SLAs (Service Level Agreements). Each slice can be configured
dynamically by the SDN controller to accommodate different service requirements
(e.g., for IoT, 5G, or enterprise services).
• In IP/MPLS networks, SDN can be used to implement service function chaining, where
specific traffic flows are steered through a sequence of network services (such as
firewalls, load balancers, or DPI systems) before reaching their destination. The SDN
controller ensures that traffic follows the desired path while MPLS labels are used for
efficient forwarding.
• SDN in IP/MPLS networks allows for better integration between different network
layers (IP, MPLS, Optical). The SDN controller can optimize resource allocation and
traffic management across these layers, ensuring efficient use of network resources
and improved QoS for end-to-end services.
• SDN allows for the centralized enforcement of security policies in MPLS networks. The
controller can dynamically apply security rules and manage firewalls, intrusion
detection systems, or access control mechanisms, based on traffic patterns. This
centralized policy management is more efficient than relying on distributed security
mechanisms.
• SDN controllers can collect and analyze real-time data on traffic flows, network
congestion, and device performance in the IP/MPLS network. Based on this data, the
controller can make informed decisions on traffic routing and resource allocation,
improving overall network performance and user experience.
• SDN controllers can enforce QoS policies by dynamically adjusting MPLS label-
switched paths based on service-level agreements.
SDN Controller Specifications:
1. The SDN controller should include a unified cloud architecture that integrates
management, control and analysis. The SDN controller should have the capability to
manage 2000 nodes of IP-MPLS.
2. The SDN controller must support a unified portal to access all SDN components,
including device management, service provisioning, network optimization, and network
monitoring. The SDN controller must support multivendor devices through standard
protocols. License/software required for fault & performance management for
multivendor devices should be included in offer.
3. The SDN controller must provide a unified user authentication mechanism; multiple
logins shall not be required when all SDN functional components are used.
4. All SDN components should support a unified user authorization mechanism. The SDN
system should be a complete system with uniform authentication, to avoid switching
between different applications and to ensure security.
5. The SDN controller should provide unified service monitoring capabilities in one service
window, including a service dashboard, service-related alarms, historical performance
curves, and OAM tools.
6. The SDN controller must provide unified deployment and installation tools and shall
deploy all components in a single operation.
7. The SDN controller must support RADIUS/LDAP authentication.
8. The SDN controller must support standard southbound protocols, including:
(a) NETCONF
(b) PCEP
(c) BGP-LS
(d) BGP-SR
(e) OSPF
(f) ISIS