
Is your backbone network ready for FRMCS?

White paper

The Future Railway Mobile Communication System (FRMCS) is the future wireless communication system for digital rail. In FRMCS, the transport network domain is foundational, as it links the RAN and core domains together. While today's railway backbone communication networks can support a wide range of rail applications, they fall short on multiple fronts for FRMCS transport. This paper identifies the gaps and articulates how the backbone network can evolve to meet the new requirements.
Contents
The advent of FRMCS
Is your backbone network ready for 5G transport?
5G transport challenge
Dynamic 5G transport slicing
Cross-layer IP/optical network management
Time synchronization
Cloud interconnect
Evolve the backbone for FRMCS
Adopting network automation for 5G transport slicing
Ushering in unified cross-layer management for multi-layer 5G transport network
Automated multi-layer topology detection
Harnessing IEEE 1588v2 in the backbone to distribute time synchronization for 5G RAN
Cloud interworking for cloud-native 5G core
Conclusion
Abbreviations

The advent of FRMCS¹
The future of digital railways hinges on fully harnessing advanced communications systems to increase
safety, reduce costs, improve the journey experience and achieve climate neutrality. Led by the European
Railway Agency (ERA) and UIC, the Future Railway Mobile Communication System (FRMCS) is the emerging
global standard for railway mobile communications to unleash new opportunities and smart applications
that can be classified into three categories:
• Safety-critical applications essential for train movements and safety or a legal obligation, such as
emergency communications, shunting, presence, trackside maintenance, Automatic Train Operation
(ATO), Automatic Train Control (ATC) and Automatic Train Protection (ATP)
• Operation-critical applications that help to improve the performance of the railway operation, such as
train departure procedures and telemetry
• Business-essential applications that support the general business operation such as wireless internet
and ticketing support.
FRMCS is architected to achieve maximum flexibility by separating the railway functions from the network
and radio bearer. In other words, FRMCS can use standard mobile radio technologies such as LTE, 5G or
even satellites. However, for interoperability and economy of scale, it is also important to harmonize radio
technologies across countries. The latest thinking of ERA and UIC shows a clear preference for 5G, a choice
supported by many major European railway operators.
5G is a global radio standard defined by 3GPP. A high-level 5G network architecture comprises three
domains (Figure 1):
1. The radio access network (RAN) connecting to wireless users and devices
2. The 5G core receiving wireless data from the RAN and forwarding to applications in data centers,
the cloud or the internet
3. The transport network connecting the RAN and the core.
While 5G is a wireless technology, it requires the transport network to backhaul data from the RAN to the
core. Hence the transport network is considered to be the foundation of 5G. The rest of this paper will
focus on 5G transport: the backbone network.

Figure 1. A high-level 5G architecture: the gNB (5G base station) in the RAN connects through the transport network to the 5G core.

¹ For more information on FRMCS, please read the Nokia 5G for Railways white paper.

Is your backbone network ready for 5G transport?
Deploying a dedicated 5G transport network is not feasible for many operators. A sound strategy would
be to utilize the railway backbone communication network (abbreviated as backbone network hereafter).
Railway operators have been modernizing their backbone networks to reliably support many critical and
essential rail applications (Figure 2).

Figure 2. Common digital applications used by railway operators today, all carried over a converged backbone communication network: safety-critical (signaling, interlocking, CCTV, critical voice), operation-critical (LMR/DMR/UHF, SCADA, access control) and service-essential (PIS/PID, infotainment, Wi-Fi, ticketing).

The backbone network is a secure multi-layer network, combining IP/MPLS for multiservice delivery with packet optical and packet microwave for transport. The backbone is built on four mission-critical network essentials: multiservice capability, strong resiliency, deterministic quality of service (QoS) and robust cybersecurity:
• Multiservice capability — The backbone network needs to support a mix of new IP-based applications such as ATO and CCTV, as well as TDM-based applications like legacy SCADA systems and emergency voice.
• Strong resiliency — Network outages can have grave safety consequences. The backbone network
can leverage the full set of IP/MPLS multi-layer/multi-fault redundancy protection for high network
availability.
• Deterministic QoS — The backbone needs strong and flexible QoS capabilities to constantly meet
QoS needs for each application, including TDM-based applications that have stringent delay and jitter
requirements.
• Robust cybersecurity — The wide adoption of IP-based rail applications significantly expands the cyberattack surface, making cybersecurity a top concern. The backbone needs to serve as an effective first line of cyber defense, safeguarding railway infrastructure.
To extend connections throughout the expansive railway infrastructure, the backbone links with various access domains, from copper cable and optical fiber to different generations of wireless technology. In the new FRMCS era, the backbone will connect to the 5G domain (Figure 3). To do so, the backbone needs to meet the new 5G transport challenges explained below.

Figure 3. The converged IP/MPLS backbone for 5G transport blueprint: on-board and trackside applications (ATO, interlocking, CCTV, handhelds, IoT sensors) reach the converged multi-layer backbone of IP/MPLS routers over packet microwave and packet optical through access networks such as FRMCS/5G, GSM-R/LTE/LMR/DMR, satellite, Wi-Fi/LPWA and fiber; the backbone connects them to the operations center hosting railway applications, dispatchers, the data center, the radio core and IoT/IT applications.

5G transport challenge
Dynamic 5G transport slicing
5G brings an innovative network slicing capability that enables a shared wireless network to offer multiple services while consistently meeting stringent QoS requirements, such as bandwidth and deterministic delay, for each rail application. A 5G network slice is a virtual network partition that contains dedicated resources to support a specified set of services, applications or users with different QoS or security levels (Figure 4). For example, ATO requires stringent latency for train control while CCTV is very delay tolerant, so operators can provision a slice for each application with dedicated network resources. With this service-centric paradigm, 5G enables operators to harness the full power of digital applications not just on board trains, but also for devices along the tracks, in rail stations and in switchyards, at speed and scale.
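As a purely illustrative sketch (the Python representation, field names and the SliceDescriptor class are our own shorthand, not an FRMCS or 3GPP data model), a per-application slice can be thought of as a small record pairing an application with its dedicated bandwidth, delay budget and QoS level:

    from dataclasses import dataclass

    # Hypothetical per-application slice descriptor; the values mirror the examples
    # in Figures 4 and 7, but the structure itself is an assumption of this sketch.
    @dataclass
    class SliceDescriptor:
        application: str      # e.g. "ATO" or "CCTV"
        bandwidth_mbps: int   # dedicated bandwidth for the slice
        max_delay_ms: int     # latency budget the slice must respect
        qos_class: str        # relative priority, e.g. "gold" or "silver"

    slices = [
        SliceDescriptor("ATO", bandwidth_mbps=5, max_delay_ms=50, qos_class="gold"),
        SliceDescriptor("CCTV", bandwidth_mbps=20, max_delay_ms=100, qos_class="silver"),
    ]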

Figure 4. 5G brings network slicing dedicated to different applications: where a pre-5G network carries all traffic together, a 5G network can provide separate slices, for example ATO (5 Mb/s), CCTV (20 Mb/s), IoT (1 Mb/s) and passenger Wi-Fi (50 Mb/s).

Creating a 5G slice requires orchestrating the provisioning of a slice in each of the three domains (RAN, core and transport) and seamlessly interconnecting them. This is an immense task if done manually. It is therefore necessary to automate orchestration across all three domains to efficiently deliver, optimize and assure 5G services. This can be accomplished with an end-to-end network slice orchestrator (abbreviated as the orchestrator hereafter) that orchestrates the controller in each domain (Figure 5). Furthermore, as the 5G core is cloud native, the transport network also needs to interconnect gracefully with the cloud.
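The following is a minimal conceptual sketch of this orchestration, assuming hypothetical controller objects with a create_slice method; real RAN, transport and core controllers expose standardized or vendor-specific interfaces instead:

    # Conceptual sketch of end-to-end slice orchestration (see Figure 5).
    class DomainController:
        def __init__(self, domain):
            self.domain = domain

        def create_slice(self, name, requirements):
            # In a real deployment this would program RAN, transport or core resources.
            print(f"[{self.domain}] slice '{name}' created with {requirements}")
            return {"domain": self.domain, "slice": name}

    def orchestrate_e2e_slice(name, requirements, controllers):
        """Create one slice segment per domain, then stitch them end to end."""
        segments = [c.create_slice(name, requirements) for c in controllers]
        return {"name": name, "segments": segments}

    controllers = [DomainController(d) for d in ("RAN", "Transport", "Core")]
    ato_slice = orchestrate_e2e_slice("ATO", {"bandwidth_mbps": 5, "max_delay_ms": 50}, controllers)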

Figure 5. End-to-end 5G network slicing: an end-to-end network slice orchestrator coordinates the RAN slice controller, the transport slice controller and the core slice controller so that 5G RAN slices, transport slices across the backbone network and 5G core slices in the central cloud are stitched into end-to-end slices for ATO, CCTV, IoT analytics and passenger Wi-Fi.

Cross-layer IP/optical network management


Because abundant 5G capacity naturally triggers a flood of new bandwidth-intensive applications, the backbone needs to run on an extensive optical transport layer that leverages DWDM technology for a significant bandwidth boost (Figure 6). The traditional approach is to manage such a multi-layer network as separate layers. However, with each layer treated as a separate domain, any network activity requires intensive manual coordination, which takes time and hampers operational speed.

Figure 6. The railway IP/optical multi-layer backbone blueprint: stations and shelters hosting FRMCS, interlocking, operational communications and CCTV connect over a multi-layer IP/optical backbone with multi-100G DWDM transport to the control center, which hosts the management systems and application controllers.

Time synchronization
Railway operators are no strangers to synchronization. They have been distributing frequency
synchronization for radio networks such as GSM-R and LMR. However, 5G requires time-of-day
synchronization for TDD base stations and advanced cellular capabilities such as coordinated multiple
point transmission and reception (CoMP).

Cloud interconnect
With a cloud-native architecture, 5G core can fully harness the power of cloud to maximize its flexibilities
and capabilities. The 5G core slice can be hosted in a data center or at the edge cloud to be closer to the
source of information to boost application performance. This requires the backbone network to seamlessly
interwork with the data center switch fabric and edge cloud.
It is evident from the discussion above that in addition to the four mission-critical network essentials
(multiservice, resiliency, QoS and cybersecurity), the backbone network needs to further evolve to tackle
backhaul of 5G data.

Evolve the backbone for FRMCS


Adopting network automation for 5G transport slicing
Transport slicing is the foundation of the end-to-end 5G slice. The backbone network manager needs to take on the role of transport slice controller as part of the end-to-end orchestration. Utilizing software-defined networking (SDN) technology and network automation, the backbone becomes programmable through the northbound API of the network manager. Through this API, the orchestrator can order a transport slice in the backbone. The orchestrator provides the backbone manager with the service requirements (also known as the service intent), including RAN and core endpoint locations as well as transport QoS requirements. The backbone manager uses its intent-based network automation capability to build a dedicated transport slice, which is essentially a backbone VPN, to connect the endpoints with the right QoS settings. Since this process requires neither manual configuration nor intervention, it significantly accelerates provisioning and eliminates human errors.
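The sketch below illustrates the idea of ordering a transport slice through a northbound API. The URL, payload fields and plain REST/JSON encoding are assumptions made for illustration; an actual backbone manager exposes its own, typically model-driven, interface:

    import json
    import urllib.request

    # Hypothetical service intent: RAN/core endpoints plus transport QoS requirements.
    slice_intent = {
        "name": "ato-transport-slice",
        "endpoints": ["ran-site-042", "core-dc-01"],
        "bandwidth_mbps": 5,
        "max_delay_ms": 50,
        "qos_class": "gold",
    }

    def order_transport_slice(manager_url, intent):
        """POST the service intent; the manager translates it into a backbone VPN."""
        req = urllib.request.Request(
            f"{manager_url}/transport-slices",
            data=json.dumps(intent).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # Example call (commented out because it needs a reachable manager):
    # order_transport_slice("https://backbone-manager.example", slice_intent)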
For example, suppose a rail operator needs to create two slices: one for ATO, which needs only moderate bandwidth but is sensitive to delay, and another for CCTV, which is bandwidth intensive with relaxed delay constraints. The service intent for the ATO transport slice is to minimize delay, while for CCTV it is to optimize for bandwidth.
Based on this intent information, the backbone manager allocates the necessary network resources to build two separate slices (green and red) in the backbone (Figure 7). For the ATO transport slice, because the intent is to minimize delay, the manager finds a path with the fewest routers; in this case, the path uses a 10GE link going through the DWDM optical core before reaching the far-end router. For the CCTV slice, it finds a bandwidth-rich path over 100GE links.
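To make the idea concrete, here is a deliberately simplified path-selection sketch. The four-router topology, its delay and capacity figures and the use of plain Dijkstra are all invented for illustration; a real controller performs full traffic-engineered path computation:

    import heapq

    # (neighbor, delay_ms, capacity_gbps) per link; the values are invented.
    topology = {
        "R1": [("R2", 2, 10), ("R3", 3, 100)],
        "R2": [("R4", 2, 10)],
        "R3": [("R4", 3, 100)],
        "R4": [],
    }

    def best_path(src, dst, weight):
        """Dijkstra with a pluggable link-weight function."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, delay, cap in topology[node]:
                heapq.heappush(queue, (cost + weight(delay, cap), nbr, path + [nbr]))
        return None

    # ATO intent: minimize delay. CCTV intent: prefer high-capacity links.
    ato_path = best_path("R1", "R4", weight=lambda delay, cap: delay)        # R1-R2-R4 (10GE)
    cctv_path = best_path("R1", "R4", weight=lambda delay, cap: 1000 / cap)  # R1-R3-R4 (100GE)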

Figure 7. Intent-based automated transport slice provisioning: the end-to-end network slice orchestrator passes each intent to the transport slice controller. The ATO transport slice intent specifies a bandwidth of 5 Mb/s, a delay budget of 50 ms and QoS class Gold; the CCTV slice intent specifies a bandwidth of 20 Mb/s, a delay budget of 100 ms and QoS class Silver. The controller builds the ATO slice over a low-delay 10GE path and the CCTV slice over a high-capacity 100GE path across the WAN toward the ATO and CCTV core slices in the central cloud.

Because the backbone network is dynamic and the amount of data it carries constantly changes, affecting network performance, the transport slice needs to be responsive to consistently meet its intent. The backbone manager performs the following:
1. Continually collects telemetry data for network statistics and OAM measurements such as delay
2. Proactively monitors the delay performance
3. Re-optimizes the path if the performance falls short of the intent
4. Reroutes the data onto the new path (Figure 8).
This closed-loop mechanism of service assurance and optimization is critical to ensuring consistently high application performance, particularly for delay-sensitive applications such as ATO.
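A minimal sketch of this closed loop follows; the telemetry source, violation threshold and reroute action are placeholders standing in for the backbone manager's real telemetry collection, path computation and traffic steering:

    import random
    import time

    DELAY_BUDGET_MS = 50   # from the ATO transport slice intent

    def measure_slice_delay():
        # Placeholder: in practice this comes from streaming telemetry / OAM probes.
        return random.uniform(30.0, 70.0)

    def reoptimize_and_reroute(slice_name):
        print(f"{slice_name}: delay intent violated; computing a better path and rerouting")

    def assurance_loop(slice_name, cycles=5, interval_s=1.0):
        for _ in range(cycles):
            delay = measure_slice_delay()            # 1. collect telemetry
            if delay > DELAY_BUDGET_MS:              # 2. monitor against the intent
                reoptimize_and_reroute(slice_name)   # 3.-4. re-optimize and move traffic
            time.sleep(interval_s)

    assurance_loop("ato-transport-slice")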

Figure 8. Automated assurance and optimization for a transport slice: (1) the transport slice controller collects network delay telemetry, (2) detects a delay intent violation, (3) re-optimizes the path to remedy the violation, and (4) the data are sent on the new path toward the ATO core in the central cloud.

Ushering in unified cross-layer management for multi-layer 5G transport network
Managing a multi-layer IP/optical backbone network (Figure 9) with disjoint network managers and ad hoc tools is a daunting challenge, since information about each layer is stored in its own management silo with no coordination. A unified cross-layer manager has full access to each layer's information. Moreover, with network automation capabilities, a unified network manager can exercise cross-domain control with cross-layer coordination, removing these barriers and significantly improving network operation efficiency and service reliability.²

Figure 9. A multi-layer IP/optical backbone architecture: an IP/MPLS layer running over an optical layer.

Automated multi-layer topology detection


With the present mode of operation using disjoint network managers, operators only have an isolated view
of the IP/MPLS and optical topologies (Figure 10). As a result, operators need a lot of manual effort to
identify and visualize the end-to-end paths between two peering routers that traverse the optical layer.

Figure 10. Disjoint managers provide isolated IP and optical topology views: the IP/MPLS network manager sees only the IP/MPLS topology between Router 1 and Router 2, while the optical network manager sees only the optical transport topology across Optical Switches 1 to 4.

² To learn about the full coordination capability, read the solution sheet IP Optical Multi-layer and Cross-Domain Coordination.

Network automation with a cross-layer network manager provides an intuitive visualization of the multi-layer topology by leveraging techniques such as LLDP snooping. It can also compare traffic counts to gather cross-layer connection information and build out a full multi-layer topology (Figure 11). This provides a fully correlated visualization of both the IP and optical topologies.
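The sketch below shows the correlation idea on toy data. The neighbor and cross-connect tables are invented; in practice they would be populated from LLDP snooping on the routers and from the optical layer's service inventory:

    # Which optical switch/client port each router port faces (from LLDP snooping).
    router_lldp_neighbors = {
        ("Router1", "port-1/1"): ("OpticalSwitch1", "client-3"),
        ("Router2", "port-1/1"): ("OpticalSwitch3", "client-5"),
    }

    # Which optical path carries each client port (from the optical manager's inventory).
    optical_paths = {
        ("OpticalSwitch1", "client-3"): ["OpticalSwitch1", "OpticalSwitch3"],
    }

    def build_multilayer_links():
        """Correlate each IP link endpoint with the optical path that carries it."""
        links = []
        for (router, port), (osw, client) in router_lldp_neighbors.items():
            links.append({
                "ip_endpoint": f"{router}:{port}",
                "optical_entry": f"{osw}:{client}",
                "optical_path": optical_paths.get((osw, client)),
            })
        return links

    for link in build_multilayer_links():
        print(link)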

Figure 11. A unified network manager auto-discovers the multi-layer topology, correlating Router 1 and Router 2 in the IP layer with Optical Switches 1 to 4 in the optical layer in a single view.

Analysis of multi-layer path diversity


Equipped with the multi-layer topology information, a unified manager can analyze IP/MPLS path diversity and identify shared failure risks along the underlying optical paths. This is pivotal to reliable service delivery. In Figure 11, Router 1 appears to have diverse routes to reach Router 2 through the red and green paths. However, the two paths are not actually diverse, as they share the same fiber trunk between Optical Switches 1 and 3. Should a fiber cut occur there, both paths would be affected and Router 1 would lose connectivity to Router 2 despite the dual-homing network architecture. With IP/MPLS and optical topology information kept in separate silos under a disjoint management paradigm (Figure 10), the operator is hard pressed to evaluate the shared risk of what appear to be diverse paths. Network automation can perform path diversity analysis to detect the shared risk. The cross-layer manager can then recalculate the redundant green path and reroute it through Optical Switch 4 for true path diversity, thereby improving service reliability (Figure 12).
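A small sketch of the shared-risk check follows; the fiber trunk names are invented, and a real manager would derive them from the discovered multi-layer topology rather than from hard-coded sets:

    # Optical trunk sections each IP path rides on (names invented for illustration).
    red_path_fibers = {"trunk-OS1-OS3"}
    green_path_fibers = {"trunk-OS2-OS1", "trunk-OS1-OS3"}

    def shared_risks(path_a, path_b):
        """Return the optical resources both paths depend on (empty set means diverse)."""
        return path_a & path_b

    risks = shared_risks(red_path_fibers, green_path_fibers)
    if risks:
        print("Paths are NOT diverse; shared optical resources:", risks)
        # A cross-layer manager would now recompute the green path, for example via
        # Optical Switch 4, so that the two paths no longer share any fiber.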

Figure 12. A unified manager brings multi-layer path diversity: the green path is rerouted through Optical Switch 4 so that the red and green paths between Router 1 and Router 2 are truly diverse.

Harnessing IEEE 1588v2 in the backbone to distribute time synchronization for 5G RAN
The straightforward solution for time synchronization is to deploy a GNSS receiver throughout the rail
infrastructure. However, this is not always feasible. With the advent of IEEE1588v2 technology, the
backbone network can distribute both time and frequency synchronization everywhere.
1588v2 was originally designed for critical industrial automation. A 1588v2 synchronization scheme is based on a hierarchical topology of 1588v2 clocks (Figure 13) in which synchronization is distributed downstream along the hierarchy. 1588v2 defines the following clock types (a toy model of this hierarchy follows the list):
1. Grand master clock (GMC) – the primary clock reference with a high-precision time source, typically a GPS signal or atomic clock; it acts as a master clock to the clocks below it in the hierarchy
2. Boundary clock (BC) – a device that acts as a slave clock to the upstream master clock and as a master to downstream slave clocks
3. Transparent clock (TC) – a device that forwards all 1588v2 messages it receives downstream, using hardware-based capabilities to modify the timestamp information in the messages to account for the delay the device introduces; it has no master/slave peering relationship with other 1588v2 clocks
4. Slave clock – receives PTP messages from its associated master clock to recover frequency, phase and time information.
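To make the hierarchy concrete, here is a toy model of the clock roles and their master/slave relationships. The class and device names are our own, and the sketch deliberately omits the best master clock algorithm, PTP message exchange and timestamp corrections that real 1588v2 implementations perform:

    class Clock:
        def __init__(self, name, role):
            self.name, self.role, self.downstream = name, role, []

        def attach(self, clock):
            # The upstream clock acts as master toward the attached downstream clock.
            self.downstream.append(clock)
            return clock

        def print_hierarchy(self, indent=0):
            print(" " * indent + f"{self.name} ({self.role})")
            for c in self.downstream:
                c.print_hierarchy(indent + 2)

    # Mirrors Figure 14: GMC at the headend, boundary clocks toward the track, gNB as slave.
    gmc = Clock("Headend router", "grand master clock")
    bc = gmc.attach(Clock("Backbone router", "boundary clock"))
    bc.attach(Clock("Trackside switch", "boundary clock")).attach(Clock("gNB", "slave clock"))
    gmc.print_hierarchy()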

Figure 13. A hierarchical 1588v2 synchronization network topology: a grand master clock at the top, a boundary clock below it, and slave and transparent clocks further downstream.

Railways have been distributing frequency synchronization in their legacy TDM-based backbone for a
long time. A major application that needs frequency synchronization is GSM-R base stations along the
track. GSM-R base stations need a frequency reference to drive RF carriers. Traditionally, frequency
synchronization is distributed using synchronous physical links such as T1/E1, DS3 and OC-3/STM-1.
With an IP/MPLS-based backbone network, synchronous Ethernet has become the prevalent means to
distribute frequency synchronization.
With FRMCS/5G, time synchronization is needed, so rail operators need to turn to IEEE 1588v2. Backbone routers now need to support advanced 1588v2 capabilities such as GMC and BC to distribute the accurate timing information required by 5G base stations. In addition, as 5G base stations (gNBs in 5G terminology) will be deployed along the tracks, trackside switches will also need to support BC capability (Figure 14).

Figure 14. Railway 1588v2 blueprint for time and frequency synchronization: a headend router at the OCC acting as the GMC (fed by a GNSS signal) distributes timing through backbone routers and trackside switches acting as boundary clocks to the gNBs at train stations.

Cloud interworking for cloud-native 5G core


Using an IP/MPLS data center gateway, the IP/MPLS backbone can interwork flexibly with the data center fabric (Figure 15). Harnessing Ethernet virtual private network (EVPN) services, the backbone can interwork with the data center fabric at the Ethernet and IP layers over a BGP control plane, enabling the network agility needed to support dynamic cloud computing. IP/MPLS also provides a graceful evolution path to segment routing, ready for full adoption of a dynamic, elastic, cloud-based digital rail environment in the future.
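The following conceptual sketch models the gateway learning reachability from the fabric through EVPN routes. The Python classes and simplified route contents are assumptions for illustration; in practice these are BGP EVPN routes (such as MAC/IP advertisement and IP prefix routes) exchanged between the data center gateway and the fabric:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EvpnRoute:
        route_type: int        # 2 = MAC/IP advertisement, 5 = IP prefix (simplified)
        evi: str               # EVPN instance tying a backbone VPN to a fabric segment
        mac: Optional[str] = None
        prefix: Optional[str] = None

    class DataCenterGateway:
        """Toy model: install reachability learned from the fabric so backbone traffic
        for a 5G core slice can be forwarded into the data center."""
        def __init__(self):
            self.forwarding = {}

        def receive(self, route):
            key = route.mac or route.prefix
            self.forwarding[key] = route.evi

    gw = DataCenterGateway()
    gw.receive(EvpnRoute(2, "evi-cctv-core", mac="aa:bb:cc:00:00:01"))
    gw.receive(EvpnRoute(5, "evi-cctv-core", prefix="10.20.0.0/24"))
    print(gw.forwarding)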

Figure 15. Cloud interworking with the cloud-native 5G core: GSM-R BTSs, interlocking, CCTV and 5G gNBs (carrying the 5G ATO and CCTV slices) connect over the railway IP/MPLS backbone; a data center gateway interworks with the data center switch fabric hosting the 5G core for the CCTV slice, an edge cloud hosts the 5G core for the ATO slice, and the control center hosts GSM-R backhaul, management systems and controllers.

Conclusion
Railway operators are embracing a broadband train-to-ground (T2G) future with FRMCS/5G to wirelessly connect railway personnel, rolling stock, machines, applications and services everywhere. The converged IP/MPLS backbone network is the foundation of 5G deployment. It needs to evolve to acquire new capabilities, including network automation, multi-layer management, time synchronization distribution and cloud interworking, to meet the new 5G backhaul requirements.
Nokia offers a broad product portfolio that spans IP/MPLS, SDN, packet optical, microwave, FRMCS/5G,
GSM-R, LTE, security, IoT and analytics. Complemented by a full suite of professional services (network
audit, design and engineering practices), Nokia has the unique capability and flexibility to help railway
operators implement their network transformation to thrive in the digital future.
To learn more about Nokia solutions for railways and the Nokia IP/MPLS portfolio, visit our Railways
web page and IP/MPLS portfolio web page.

Abbreviations
ATO     automatic train operation
BC      boundary clock
CCTV    closed circuit television
CoMP    coordinated multiple point transmission and reception
DMR     digital mobile radio
DWDM    dense wavelength division multiplexing
ERA     European Railway Agency
FRMCS   Future Railway Mobile Communication System
GMC     grand master clock
GPS     global positioning system
GSM-R   Global System for Mobile Communications – Railway
IoT     Internet of Things
LMR     land mobile radio
LPWA    low power wide area
LTE     long term evolution
MPLS    multiprotocol label switching
OAM     operations, administration and maintenance
PID     passenger information display
PIS     passenger information system
QoS     quality of service
RF      radio frequency
SCADA   supervisory control and data acquisition
SDN     software-defined networking
TC      transparent clock
TDD     time division duplexing
TDM     time division multiplexing
UIC     International Union of Railways (Union Internationale des Chemins de fer)
UPF     user plane function
VPN     virtual private network

About Nokia
At Nokia, we create technology that helps the world act together.

As a B2B technology innovation leader, we are pioneering the future where networks meet cloud to realize the full potential of digital in every industry.

Through networks that sense, think and act, we work with our customers and partners to create the digital services and applications of the future.

Nokia is a registered trademark of Nokia Corporation. Other product and company names mentioned herein may be trademarks or trade names of their respective owners.

© 2023 Nokia

Nokia OYJ
Karakaari 7
02610 Espoo
Finland
Tel. +358 (0) 10 44 88 000

Document code: CID210545 (February)
