
Optical Transport Network

Related terms:

Optical Network, Multiplexing, Dense Wavelength Division Multiplexing, Full Duplex, Optical Networking


Routing in Optical Networks, Multilayer Networks, and Overlay Networks
Deep Medhi, Karthik Ramasamy, in Network Routing (Second Edition), 2018

24.1.2 OTN
The Optical Transport Network (OTN) system is structured as a layered network
consisting of several sublayers [404]. It is a modern standard aimed at replacing
SONET/SDH. Each sublayer is responsible for specific services and is activated at
its termination points. The Optical Data Unit (ODU) sublayer currently defines five
bit-rate client signals, i.e., 1.25, 2.5, 10, 40, and 100 Gbps, referred to as ODUk
(k = 0, 1, 2, 3, 4), respectively (see Table 24.2).

Table 24.2. OTN Signals, Data Rates, and Multiplexing.

| Signal | Bit Rate in Gbps (approx.) | Remark |
| --- | --- | --- |
| ODU0 | 1.25 | For transporting a 1000BASE-X signal (such as Gigabit Ethernet) |
| ODU1 | 2.5 | For transporting two ODU0 signals or an STS-48/STM-16 signal |
| ODU2 | 10 | For transporting eight ODU0s or four ODU1s |
| ODU2e | 10 | For transporting 10 Gigabit Ethernet |
| ODU3 | 40 | For transporting 32 ODU0s, or 16 ODU1s, or four ODU2s, or an STS-768/STM-256 signal, or 40 Gigabit Ethernet |
| ODU4 | 100 | For transporting 80 ODU0s, or 40 ODU1s, or ten ODU2s, or two ODU3s, or 100 Gigabit Ethernet |

OTN also defines the ODUk time division multiplexing sublayer. It supports the
multiplexing and transporting of several lower bit-rate signals into a higher bit-rate
signal and maintains an end-to-end trail for each lower bit-rate signal. The
multiplexing of ODUk signals is easy to visualize from the bit rates shown in
Table 24.2. There are two additional specifications: ODU2e and ODUflex. For
capacity planning models, ODU2e can be treated as ODU2 and need not be considered
separately. ODUflex can take any rate above ODU1 and, for modeling purposes, can be
treated as a real variable with a lower bound of 1 Gbps.
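
The multiplexing ratios in Table 24.2 translate directly into a simple capacity-feasibility check. The following Python sketch is illustrative only: the slot counts follow the table (with ODU0 as the 1.25-Gbps unit), and the function name is our own, not part of any standard.

```python
# Tributary capacity of each ODUk container, expressed in 1.25-Gbps
# ODU0-equivalent slots, following the ratios in Table 24.2.
ODU_SLOTS = {
    "ODU0": 1,   # 1.25 Gbps
    "ODU1": 2,   # 2.5 Gbps, carries two ODU0s
    "ODU2": 8,   # 10 Gbps, carries eight ODU0s or four ODU1s
    "ODU3": 32,  # 40 Gbps, carries 32 ODU0s, 16 ODU1s, or four ODU2s
    "ODU4": 80,  # 100 Gbps, carries 80 ODU0s, 40 ODU1s, ten ODU2s, or two ODU3s
}

def fits(container: str, clients: list[str]) -> bool:
    """Check whether a set of client ODUk signals fits into one container."""
    needed = sum(ODU_SLOTS[c] for c in clients)
    return needed <= ODU_SLOTS[container]

# Four ODU2s fill an ODU3 exactly; adding an ODU1 overflows it.
assert fits("ODU3", ["ODU2"] * 4)
assert not fits("ODU3", ["ODU2"] * 4 + ["ODU1"])
```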


SDN in Other Environments


Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition),
2017

9.6 Optical Networks


An optical transport network (OTN) is an interconnection of optical switches and
optical fiber links. The optical switches are layer-one devices; they transmit bits
using various encoding and multiplexing techniques. The fact that such optical
networks transmit data over a lightwave-based channel, as opposed to treating each
packet as an individually routable entity, lends itself naturally to the SDN concept
of a flow. In the past, data traffic was transported over optical fiber using
protocols such as Synchronous Optical Networking (SONET) and Synchronous Digital
Hierarchy (SDH). More recently, however, OTN has become a replacement for those
technologies. Some companies involved with both OTN and SDN are Ciena, Cyan (now
acquired by Ciena), and Infinera. Some vendors are creating optical devices tailored
for use in data centers. Calient, for example, uses optical technology for fast
links between racks of servers.

9.6.1 SDN Applied to Optical Networks


In multiple-use networks, certain traffic flows often arise that make intense use
of network bandwidth, sometimes to the point of starving other traffic flows.
These are often called elephant flows due to their size. Such flows are
characterized by relatively long duration yet a discrete beginning and end.
They may occur due to bulk data transfer, such as backups that happen between the
same two endpoints at regular intervals. These characteristics can make it possible
to predict or schedule these flows. Once an elephant flow is detected, the goal is
to re-route that traffic onto some type of equipment, such as an all-optical
network, which is provisioned specifically for large data offloads like this.
OTNs are tailor-made for these huge volumes of packets traveling from one endpoint
to another. Packet switches' ability to route such elephant flows at packet-level
granularity is of no benefit, yet the burden an elephant flow places on the
packet-switching network's links is intense. Combining a packet-switching network
with an OTN into the kind of hybrid network shown in Fig. 9.10 provides an
effective mechanism for handling elephant flows.

Fig. 9.10. Optical offload application overview.

In Fig. 9.10 we depict normal endpoints (A1, A2) connected through Top-of-Rack (ToR)
switches ToR-1 and ToR-2, communicating through the normal path, which traverses
the packet-based network fabric. The elephant devices (B1, B2) are transferring
huge amounts of data from one to the other; hence, they have been shunted over
to the optical circuit switch, thus protecting the bulk of the users from such a
large consumer of bandwidth.

The mechanism for this shunting or offload entails the following steps (a sketch of
the flow programming in steps 3 and 4 follows the list):

1. The elephant flow is detected between endpoints in the network. Note that,
depending on the flow, detecting the presence of an elephant flow is itself a
difficult problem. Simply observing a sudden surge in the data flow between
two endpoints in no way serves to predict the longevity of that flow. If the flow
is going to end in 500 ms, then it is not an elephant flow and we would
not want to incur any additional overhead to set up special processing for it.
This is not trivial to know or predict. Normally, some additional contextual
information is required to know that an elephant flow has begun. An obvious
example is the case of a regularly scheduled backup that occurs across the
network. This topic is beyond the scope of this work, and we direct the
interested reader to [11,12].

2. The information regarding the endpoints' attaching network devices is noted,
including the uplinks (U1, U2) which pass traffic from the endpoints up into
the overloaded network core.

3. The SDN controller programs flows in ToR-1 and ToR-2 to forward traffic to and
from the endpoints (B1, B2) out an appropriate offload port (O1, O2), rather
than the normal port (U1, U2). Those offload ports are connected to the optical
offload fabric.

4. The SDN controller programs flows on the SDN-enabled optical offload fabric
to patch traffic between B1 and B2 on the two offload links (O1, O2). At this
point, the re-routing is complete and the offload path has been established
between the two endpoints through the OTN.

5. The elephant flow eventually returns to normal and the offload path is removed
from both connecting network devices and from the optical offload device.
Subsequent packets from B1 to B2 traverse the packet-based network fabric.
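
To make steps 3 and 4 concrete, here is a hypothetical sketch of the flow rules an SDN controller might install. The MAC addresses, port numbers, and rule format are illustrative and do not correspond to any particular controller's API.

```python
# Hypothetical sketch of steps 3 and 4: redirect an elephant flow from the
# uplink ports (U1, U2) to the offload ports (O1, O2). The rule format and
# port numbers are illustrative, not a specific product's interface.

def offload_rules(src_mac: str, dst_mac: str, offload_port: int) -> list[dict]:
    """OpenFlow-style rules steering B1<->B2 traffic out the offload port."""
    return [
        {"match": {"eth_src": src_mac, "eth_dst": dst_mac},
         "actions": [{"output": offload_port}]},
        {"match": {"eth_src": dst_mac, "eth_dst": src_mac},
         "actions": [{"output": offload_port}]},
    ]

B1, B2 = "00:00:00:00:00:b1", "00:00:00:00:00:b2"
O1, O2 = 3, 3  # offload port number on each ToR switch (illustrative)

# Step 3: program the attaching ToR switches.
rules = {"ToR-1": offload_rules(B1, B2, O1),
         "ToR-2": offload_rules(B2, B1, O2)}

# Step 4: patch the two offload links together on the optical fabric.
rules["optical-fabric"] = [
    {"match": {"in_port": 1}, "actions": [{"output": 2}]},
    {"match": {"in_port": 2}, "actions": [{"output": 1}]},
]

for switch, flow_mods in rules.items():
    print(switch, flow_mods)  # in practice: pushed via the controller's API
```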

We discuss details of programming an offload application in Section 12.10.

9.6.2 Example: Fujitsu’s Use of SDN in Optical Networks


Fujitsu was an early adopter of SDN, with a focus on optical networks. One
of its first forays into optical SDN was to leverage the technology to accelerate
network storage access [13]. The SDN controller observes the storage access (storage
flow) in the network over Fibre Channel over Ethernet (FCoE) and performs flow
manipulation. Fujitsu separated the storage flow detection and storage flow
manipulation from the functions needed for FCoE data relays. It then created a
converged fabric switch with a software interface to the centralized controller.
Fujitsu reported that this SDN implementation resulted in a twofold increase in
throughput.

Other Fujitsu optical SDN efforts are targeted toward OTN. Fujitsu, a founding
partner of the Open Network Operating System (ONOS) community, recently
demonstrated a use case of packet-over-optical transport [14]. The ONOS Cardinal
release was used to demonstrate this packet-over-optical use case, which is central
to the application of SDN to OTN. With Cardinal, Fujitsu was able to leverage
new southbound plugins to develop Transaction Language 1 (TL1) interfaces from
the ONOS controller to the FLASHWAVE 9500 Packet Optical Networking Platform.
These interfaces allowed the ONOS controller to provide Dense Wavelength Division
Multiplexing (DWDM) services such as on-demand bandwidth, bandwidth calendaring,
and multilayer optimization.
Through these SDN efforts, Fujitsu has expanded its Virtuora SDN/NFV platform [15].
This platform has been built on the OpenDaylight (ODL) controller, but Fujitsu has
purposefully ensured that it is portable to other controllers; for instance, Fujitsu
notes that the Virtuora platform is easily portable to ONOS. The Virtuora NC 3.0
SDN framework was recently launched, and it is based on ODL [16]. This framework
has southbound interfaces that support TL1 and NETCONF. Based on Fujitsu's optical
work with ONOS, the TL1 interface can support the DWDM services previously
mentioned.

Discussion Question

If you were asked to map the Fujitsu FLASHWAVE 9500 onto the hypothetical
example in Fig. 9.10, which functional box would it be?


Emerging Trends in Packet Optical Convergence
Vinod Joseph, Srinivas Mulugu, in Network Convergence, 2014

The Optical Transport Network (OTN)


The aim of the optical transport network (OTN) is to combine the benefits of
SONET/SDH technology with the bandwidth expandability of DWDM. In short,
OTNs apply the operations, administration, maintenance, and provisioning
(OAM&P) functionality of SONET/SDH to DWDM optical networks. The OTN is
specified in ITU-T G.709, Network Node Interface for the Optical Transport Network
(OTN).

This recommendation, sometimes referred to as digital wrapper (DW), takes
single-wavelength SONET/SDH technology a step further, enabling transparent,
wavelength-manageable multi-wavelength networks. Forward error correction (FEC)
adds a further feature to the OTN by offering network operators the potential
to reduce the number of regenerators used, leading to reduced network costs.

ITU-T G.709 Standards for OTNs


The ITU-T G.709 standard, Network Node Interface for the Optical Transport
Network (OTN), defines the OTN IrDI (inter-domain interface) in the following ways:

• Functionality of the overhead in preparing the multi-wavelength optical network

• Optical Transport Unit framing structure

• Bit rates and formats permitted for mapping of the clients

Two types of interface are described in the ITU-T G.872 recommendation,
Architecture of the Optical Transport Networks, the locations of which are
illustrated in Figure 6.10.

Figure 6.10.

Inter-Domain Interfaces (IrDI)


Inter-domain interfaces define:

• the location between the networks of two operators;

• the location between the sub-networks of two vendors in the same operator
domain; and
• the location within the sub-network of one vendor.

Intra-Domain Interfaces (IaDI)


Intra-domain interfaces define the location of an individual manufacturer's
sub-network between the equipment.

As with SONET/SDH, the OTN has a layered structure.

The basic OTN layers are visible in the OTN transport structure and consist of
the optical channel (OCh), the optical multiplex section (OMS), and the optical
transmission section (OTS), as shown in Figure 6.11.
Figure 6.11.

The aim of the OTN is to enable the multi-service transport of packet-based data
and legacy traffic, while DW technology accommodates non-intrusive management
and monitoring of each optical channel assigned to a particular wavelength. The
“wrapped” overhead (OH) would therefore make it possible to manage and control
client signal information. Figure 6.12 illustrates how the OTN’s management capa-
bilities are achieved with the addition of OH at several positions during the transport
of the client signal.

Figure 6.12.

The steps are as follows (a schematic sketch follows the list):

• OH is added to the client signal to form the optical channel payload unit (OPU).

• OH is then added to the OPU, thus forming the optical channel data unit (ODU).

• Additional OH plus FEC are added to form the optical channel transport unit (OTU).

• Adding further OH creates an optical channel (OCh), which is carried by one color.

• Additional OH may be added to the OCh to enable the management of multiple
colors in the OTN.

• The OMS and the OTS are then constructed.
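
The nesting order of these steps can be sketched schematically. The following Python fragment is purely illustrative: real G.709 frames have fixed sizes and byte-exact OH layouts, and the wrap helper is our own shorthand, not part of the standard.

```python
# Schematic nesting of the G.709 encapsulation steps described above.
# Each step just prepends a labeled overhead field to show the ordering.

def wrap(name: str, payload: str, trailer: str = "") -> str:
    return f"[{name}-OH]{payload}{trailer}"

client = "<client signal: SONET/SDH, GFP, IP, GbE, ...>"
opu = wrap("OPU", client)        # OH added to the client signal
odu = wrap("ODU", opu)           # OH added to the OPU
otu = wrap("OTU", odu, "[FEC]")  # OH plus FEC added to the ODU
och = wrap("OCh", otu)           # carried by one wavelength (color)

print(och)
# [OCh-OH][OTU-OH][ODU-OH][OPU-OH]<client signal: ...>[FEC]
```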

The result is an optical channel comprising an OH section, a client signal, and an
FEC segment, as shown in Figure 6.13.

Figure 6.13.

The OCh OH, which offers the OTN management functionality, contains four
substructures:

• Optical channel payload unit (OPU)

• Optical channel data unit (ODU)

• Optical channel transport unit (OTU)

• Frame alignment signal (FAS)

The client signal or actual payload to be transported could be of any existing
protocol, such as SONET/SDH, GFP, IP, or GbE.

• The optical channel payload unit (OPU): OH is added to the OPU payload and
is used to support the various client signals. It regulates the mapping of the
many client signals and provides information on the type of signal transported.
ITU-T G.709 currently supports asynchronous as well as synchronous
mappings of client signals into the payload.

• The optical channel data unit (ODU): OH allows the user to support tandem
connection monitoring (TCM), path monitoring (PM), and automatic protection
switching (APS). End-to-end path supervision and client adaptation via the OPU
(as previously described) are also made possible.

• The optical channel transport unit (OTU): The OTU is used in the OTN to support
transport via one or more optical channel connections. It also specifies both
frame alignment and FEC.

• Forward error correction (FEC): In conjunction with the OCh OH of the digital
wrapper "envelope," additional bandwidth, in this case FEC, is added. The
implemented FEC algorithm enables the detection and correction of errors in
an optical link.

The FEC implementation defined in the G.709 recommendation uses the so-called
Reed-Solomon code RS(255,239). An OTU row is split into 16 sub-rows, each
consisting of 255 bytes. The sub-rows are formed byte-interleaved, meaning that
the first sub-row consists of both the first OH byte and the first payload byte. The
first FEC byte is inserted into byte 240 of the first sub-row, as shown in Figure 6.14.
This is true for all 16 sub-rows.

Figure 6.14.

Of these 255 bytes, 239 are used to calculate the FEC parity check, the result of which
is transmitted in bytes 240 to 255 of the same sub-row. See Figure 6.15.
Figure 6.15.

The Reed-Solomon code detects up to 16 errored bytes or corrects up to 8 errored
bytes in a sub-row. The FEC RS(255,239) is specified for the fully standardized IrDI
interface. Other OTUkV interfaces (e.g., IaDI), which are only functionally
standardized, may use other FEC codes.
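
A minimal sketch of the sub-row interleaving described above, assuming one OTU row of 16 × 239 OH and payload bytes; the rs_parity function is a placeholder for a real RS(255,239) encoder, not an implementation of it.

```python
# Byte-interleave one OTU row into 16 sub-rows and append RS(255,239)
# parity, mirroring the structure in Figures 6.14 and 6.15.

def rs_parity(data: bytes) -> bytes:
    assert len(data) == 239
    return bytes(16)  # placeholder: a real encoder returns 16 parity bytes

def encode_row(row: bytes) -> list[bytes]:
    assert len(row) == 16 * 239              # OH + payload bytes of one OTU row
    subrows = []
    for i in range(16):
        data = row[i::16]                    # sub-row i: every 16th byte
        subrows.append(data + rs_parity(data))  # bytes 240..255 carry parity
    return subrows

subrows = encode_row(bytes(16 * 239))
assert all(len(s) == 255 for s in subrows)   # 239 data + 16 parity bytes each
```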

FEC in Optical Networks


FEC enables the detection and correction of bit errors caused by physical
impairments in the transmission medium. These impairments are categorized into
linear (attenuation, noise, and dispersion) and nonlinear (four-wave mixing,
self-phase modulation, cross-phase modulation) effects.

When FEC is used in a network link, the network operator can accept a lower-quality
signal on the link, because the resulting errors can be corrected.

Advantages of FEC
The quality of a fiber optic link is determined by a variety of parameters. The span
of a link is typically determined by the optical power budget, which is the difference
in power between what the optical transmitter can produce and what the optical
receiver can detect. Within the link, the attenuation from every kilometer of fiber
and every connector or coupler adds together to consume this budget. To create
links with large spans, system designers can choose to use amplifiers or repeaters. A
system with a larger optical power budget preserves the link’s bit error rate (BER) and
has less need for amplifiers and repeaters. Designers typically achieve this increased
optical power budget by using higher-quality, higher-cost optical components.

FEC effectively adds a significant gain to the link's optical power budget while
maintaining the required BER.
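
The effect of coding gain on the power budget can be illustrated with simple arithmetic. All figures below are example values chosen for illustration, not specifications of any particular system.

```python
# Illustrative link-budget arithmetic. All numbers are example values.

tx_power_dbm = 0.0          # transmitter launch power
rx_sensitivity_dbm = -24.0  # receiver sensitivity at the target BER
budget_db = tx_power_dbm - rx_sensitivity_dbm  # 24 dB optical power budget

fiber_loss_db_per_km = 0.25
connector_loss_db = 2 * 0.5     # one connector at each end
fec_coding_gain_db = 6.0        # example value in the RS(255,239) range

span_without_fec = (budget_db - connector_loss_db) / fiber_loss_db_per_km
span_with_fec = (budget_db - connector_loss_db
                 + fec_coding_gain_db) / fiber_loss_db_per_km

print(f"max span without FEC: {span_without_fec:.0f} km")  # 92 km
print(f"max span with FEC:    {span_with_fec:.0f} km")     # 116 km
```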

Because FEC systems must detect an error before correcting it, using FEC across a
link lets the designer measure performance and allows early identification of link
degradation.

FEC provides the following benefits for fiber optic communications links:

• Improves performance of an existing link between two points

• Increases the maximum span of the link in systems without repeaters

• Increases the distance between repeaters in optically amplified systems, or
relaxes the specifications of the optical components or fiber

• Improves the overall quality of the link by early diagnosis of degradation and
link problems

Once the optical channel is formed, additional non-associated OH is added to
individual OCh wavelengths, which then form the optical multiplexing sections
(OMS) and the optical transmission sections (OTS), as shown in Figure 6.16.

Figure 6.16.

In the optical multiplex section (OMS) layer, both the OMS payload and
non-associated overhead (OMS-OH) are transported. The OMS payload consists of
multiplexed OChs. The OMS-OH, although undefined at this point, is intended to
support connection monitoring and assist service providers in troubleshooting and
fault isolation in the OTN.

The optical transmission section (OTS) layer transports the OTS payload as well as
the OTS overhead (OTS-OH). Similar to the OMS, the OTS transports the optically
multiplexed sections described above. The OTS OH, although not fully defined, is
used for maintenance and operational functions. The OTS layer allows the network
operator to perform monitoring and maintenance tasks between the NEs (network
elements), which include OADMs, multiplexers, demultiplexers, and optical switches.


Software-Defined Networking and OpenFlow
Saurav Das, ... Rob Sherwood, in Handbook of Fiber Optic Data Communication
(Fourth Edition), 2013

17.6 SDN in optical networks


Motivation: In WANs today, packet-switched IP networks and circuit-switched optical
transport networks are separate networks, typically planned, designed, and operated
by separate divisions, even within the same organization. Such a structure has many
shortcomings. First and foremost, it serves to increase the TCO for a carrier:
operating two networks separately results in functionality and resource duplication
across layers, increased management overhead, and time/labor-intensive manual
coordination between different teams, all of which contribute toward higher CapEx
and OpEx.

Second, such a structure means that IP networks today are based entirely on packet
switching. This in turn results in a dependence on expensive, power-hungry, and
sometimes fragile backbone routers, together with massively over-provisioned links,
neither of which appears scalable in the long run. The Internet core today simply
cannot benefit from more scalable optical switches, nor can it take advantage of
dynamic circuit switching.

Finally, lack of interaction with the IP network means that the transport network has
no visibility into IP traffic patterns and application requirements. Without interaction
with a higher layer, there is often no need to support dynamic services, and therefore
little use for an automated control plane. As a result, the transport network provider
today is essentially a seller of dumb pipes that remain largely static and under the
provider’s manual control, where bringing up a new circuit to support a service can
take weeks or months.

Why OpenFlow/SDN? The single biggest reason is that it converges the operation of
different switching technologies under one common operating system. It provides
a way to reap the benefits of both optical circuit switching and packet switching, by
allowing a network operator to decide the correct mix of technologies for the services
they provide. If both types of switches are controlled and used the same way, then
it gives the operator maximum flexibility to design their own network.

As optical circuits (TDM, WDM) can readily be defined as flows, a common-flow
abstraction fits well with both packet and circuit switches, provides a common
paradigm for control using a common-map abstraction, and makes it easy to control,
jointly optimize, and insert new functionality into the network [15]. It also paves
the path for developing common management tools, planning and designing network
upgrades by a single team, and reducing functionality and resource duplication
across layers.
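
The common-flow abstraction can be sketched as follows: a packet flow and a circuit flow share the same match-action rule shape and differ only in what is matched. The field names below are illustrative and are not the actual OpenFlow circuit-extension syntax.

```python
# Sketch of the common-flow abstraction: packet and circuit flows as the
# same match -> action structure, differing only in the match fields
# (header values vs. wavelength/timeslot). Field names are illustrative.

packet_flow = {
    "match": {"ip_dst": "10.0.0.0/24", "tcp_dst": 80},
    "actions": [{"output": "port-3"}],
}

wdm_circuit_flow = {
    "match": {"in_port": "line-1", "lambda": "ch-32"},  # a wavelength is the flow
    "actions": [{"output": "line-2"}],
}

tdm_circuit_flow = {
    "match": {"in_port": "line-1", "timeslots": ["odu2:1"]},
    "actions": [{"output": "line-3"}],
}

for flow in (packet_flow, wdm_circuit_flow, tdm_circuit_flow):
    print(flow)  # one controller, one rule shape, both switching technologies
```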

Similarly, there are a number of large enterprises that own and operate their own
private networks. These networks include packet switching equipment as well as
optical switching equipment and fiber. Such enterprises can also benefit from
SDN-based common control over all of their equipment. For example, they could
use packet switching at the edge, interfacing with their branch offices and data
centers. And they could use dynamic circuit switching in the core, to perform the
functions that circuits perform exceedingly well in the core—functions like recovery,
bandwidth-on-demand, and providing guarantees. Finally, they could manage their
entire infrastructure via SDN, to intelligently map packets into optical circuits that
have different characteristics, to meet end-application requirements.

Finally, common control over packet and optical switching opens up a lot of
interesting applications in networking domains that are not traditionally considered
for optical switching, e.g., within a data center [16].

Current and future work: Over the last few years, a number of researchers have worked
closely with optical switch vendors to add experimental extensions to OpenFlow,
implement those extensions in vendor hardware, and demonstrate their capabilities
and benefits. One early collaboration demonstrated a converged OpenFlow-enabled
packet optical network, where circuit flow properties (guaranteed bandwidth, low
latency, low jitter, bandwidth-on-demand, fast recovery) provided differential
treatment to dynamically aggregated packet flows for voice, video, and web traffic
[17,18]. Since then, several demonstrations have repeatedly shown the benefits of
such ideas [19,20].

The Open Networking Foundation (ONF) [21] is the industry standards organization
responsible for the standardization of OpenFlow and other related protocols that
belong in SDN. At the time of this writing, the New Transport Discussion Group in
the ONF is actively debating where and how OpenFlow and SDN concepts can be
applied to transport networks.


Client Layers of the Optical Layer


Rajiv Ramaswami, ... Galen H. Sasaki, in Optical Networks (Third Edition), 2010

The predominant client layers in backbone networks today are SONET/SDH, Ethernet,
and the Optical Transport Network (OTN). These protocols correspond to the physical
layer in the OSI hierarchy (see Figure 1.6). SONET/SDH, as part of the first
generation of optical networks, was the earliest to be deployed in backbone
networks and has been very successful over the years. It is particularly adept at
supporting constant bit rate (CBR) connections, and it multiplexes these connections
into higher-speed optical connections by using time division multiplexing. Originally
designed for low-speed voice and CBR connections of up to 51 Mb/s, it now supports
data-network packet traffic with link transmission rates in the tens of gigabits
per second. An important feature of SONET/SDH is that it provides carrier-grade
service with high availability.



Network Survivability
Rajiv Ramaswami, ... Galen H. Sasaki, in Optical Networks (Third Edition), 2010

Survivability can be addressed within many layers in the network. Protection can be
performed at the physical layer, or layer 1, which includes the SONET/SDH, Optical
Transport Network (OTN), and optical layers. Protection can also be performed at
the link layer, or layer 2, which includes MPLS, Ethernet, and Resilient Packet Ring.
Finally, protection can also be performed at the network layer, or layer 3, such as
the IP layer. There are several reasons why this is the case. For instance, each
layer can protect against certain types of failures but probably cannot protect
against all types of failures effectively. In this chapter, we will focus primarily
on layer 1 restoration, but will also briefly discuss the protection techniques
applicable to layers 2 and 3.


Control and Management


Rajiv Ramaswami, ... Galen H. Sasaki, in Optical Networks (Third Edition), 2010

8.2 Optical Layer Services and Interfacing


The optical layer provides lightpaths to other layers such as the SONET/SDH,
IP/MPLS, and Ethernet layers, as well as the electronic layer of the Optical
Transport Network (OTN), which includes the optical channel transport unit (OTU)
and optical channel data unit (ODU) sublayers (see Section 6.2). In this context,
the optical layer can be viewed as a server layer, and the higher layer that makes
use of the services provided by the optical layer as the client layer. From this
perspective, we need to specify clearly the service interface between the optical
layer and its client layers. The key attributes of such a managed lightpath service
are the following:

▪ Lightpaths need to be set up and taken down as required by the client layer
and as required for network maintenance.
▪ Lightpath bandwidths need to be negotiated between the client layer and
the optical layer. Typically, the client layer specifies the amount of bandwidth
needed on the lightpath.
▪ An adaptation function may be required at the input and output of the optical
network to convert client signals to signals that are compatible with the optical
layer. This function is typically provided by transponders, as we discussed in
Section 7.1. The specific range of signal types, including bit rates and protocols
supported, needs to be established between the client and the optical layer.
▪ Lightpaths need to provide a guaranteed level of performance, typically specified
by the bit error rate (typical requirements are 10⁻¹² or less). Adequate
performance management needs to be in place inside the network to ensure
this.
▪ Multiple levels of protection may need to be supported, as we will see in
Chapter 9, for example, protected, unprotected, and protected on a best-effort
basis, in addition to being able to carry low-priority data on the protection
bandwidth in the network. In addition, restoration time requirements may also
vary by application.
▪ Lightpaths may be unidirectional or bidirectional. Almost all lightpaths today
are bidirectional. However, if more bandwidth is desired in one direction
compared to the other, it may be desirable to support unidirectional lightpaths.
▪ A multicasting, or a drop-and-continue, function may need to be supported.
Multicasting is useful to support distribution of video or conferencing infor-
mation. In a drop-and-continue situation, a signal passing through a node is
dropped locally, but a copy of it is also transmitted downstream to the next
node. As we will see in Chapter 9, the drop-and-continue function is particu-
larly useful for network survivability when multiple rings are interconnected.
▪ Jitter requirements exist, particularly for SONET/SDH connections. In order
to meet these requirements, 3R regeneration may be needed in the network.
Using 2R regeneration in the network increases the jitter, which may not
be acceptable for some signals. We discussed 3R and 2R in the context of
transparency in Section 1.5.
▪ There may be requirements on the maximum delay for some types of traffic.
This may place restrictions on the maximum allowed propagation delay (or
equivalent link length) on links. This will need to be accounted for while
designing the lightpaths.
▪ Extensive fault management needs to be supported so that root-cause alarms
can be reported and adequate isolation of faults can be performed in the
network. This is important because a single failure can trigger multiple alarms.
The root-cause alarm reports the actual failure, and we need to suppress the
remaining alarms. Not only are they undesirable from a management perspective,
but they may also result in multiple entities in the network reacting
to a single failure, which cannot be allowed. We will look at examples of this
later.

Enabling the delivery of these services requires a control and management interface
between the optical layer and the client layer. This interface allows the client to
specify the set of lightpaths that are to be set up or taken down and set the service
parameters associated with those lightpaths. The interface also enables the optical
layer to provide performance and fault management information to the client layer.
This interface can take one of two forms. The simpler interface, used today, is
through the management system: a separate management system communicates
with the optical layer EMS, and the EMS in turn manages the optical layer.
The present method of operation works fine as long as lightpaths are set up fairly
infrequently and remain nailed down for long periods of time. It is quite possible
that, in the future, lightpaths will be provisioned and taken down more dynamically
in large networks. In such a scenario, it would make sense to specify a signaling
interface
between the optical layer and the client layer. For instance, an IP router could signal
to an associated optical crossconnect to set up and take down lightpaths and specify
their levels of protection through such an interface. Different philosophies exist as
to whether or not such an interface is desirable. Some carriers are of the opinion
that they should decouple optical layer management from its client layers and plan
and operate the optical network separately. This approach makes sense if the optical
layer is to serve multiple types of client layers and allows them to decouple its
management from a specific client layer. Others would like tight coupling between
the client and optical layers. This makes sense if the optical layer primarily serves a
single client layer, and also if there is a need to set up and take down connections
rapidly as we discussed above. We will discuss this issue further in Section 8.6.


Submarine line terminal


Arnaud Leroy, Omar Ait Sab, in Undersea Fiber Communication Systems (Second
Edition), 2016

Performance requirements in submarine systems


The performance requirements for submarine transmission systems were first deduced
from the ITU-T G.826 and ITU-T G.828 recommendations for synchronous digital paths
[49,50]. A newer recommendation, ITU-T G.8201, was issued to define error
performance objectives within optical transport network (OTN) paths [51]. The
quality of the transmission is assessed through the measurement of two ratios: the
background block error ratio (BBER) and the severely errored second ratio (SESR).
An errored block (EB) is a block in which one or more bits are in error. A severely
errored second (SES) is a 1-s period that contains more than 15% errored blocks. A
background block error (BBE) is an errored block not occurring as part of an SES.
According to the ITU-T G.8201 recommendation, the number of blocks per second for a
100-Gbps bit rate is 856,388. The performance requirements are defined for a
27,500-km digital section called the hypothetical reference path (HRP), as follows:

SESR < 2 × 10⁻³

BBER < 2.5 × 10⁻⁶

For a link length below 27,500 km, the calculation must be done as follows:

• The submarine section should comply with 5% of the previous figures.

• A distance-based allocation of 0.2% per 100 km is added. The distance-based
allocation is the product of the air-route distance and a routing factor, which is
specified as follows: if the air-route distance is <1000 km, the routing factor is
1.5; if the air-route distance is ≥1000 km and <1200 km, the calculated
distance-based route is 1500 km; if the air route is ≥1200 km, the routing factor
is 1.25.

For example, for a 2000-km link, the total percentage is 5% + 5 × 1% = 10% and the
following performances have to be met:

SESR < 2 × 10⁻⁴

BBER < 2.5 × 10⁻⁷

For an 8000-km link, the total percentage is 5% + 20 × 1% = 25% and the following
performances have to be met:

SESR < 5 × 10⁻⁴

BBER < 6.25 × 10⁻⁷

In addition, submarine system operators usually request a performance 10 times
better than the ITU-T G.8201 recommendation. Therefore, for a 5000-km link, the
performance requirements are typically:

SESR < 3.5 × 10⁻⁴

BBER < 4.38 × 10⁻⁷

The BBER can be translated into a BER requirement through the following
relationship: in the worst case of error distribution, each errored block includes
only one errored bit. Therefore, for a 100-Gbps transmission system, a BBER lower
than 4.38 × 10⁻⁷ is ensured if the BER is below
4.38 × 10⁻⁷ × 856,388/10¹¹ = 3.75 × 10⁻¹². Typically, submarine transmission
systems are designed to meet a BER lower than 10⁻¹³. The ITU-T G.8201
recommendation neither assumes nor requires that FEC be used. However, when FEC is
implemented, all the performance parameters are defined after FEC error correction.
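
The allocation rules above can be expressed as a short calculation. The following sketch simply encodes the percentages and routing factors stated in this section; it is illustrative, not part of the recommendation.

```python
# Performance objectives for a submarine section, following the allocation
# rules stated above: G.8201 HRP objectives scaled by a 5% base allocation
# plus 0.2% per 100 km of routed distance.

HRP_SESR, HRP_BBER = 2e-3, 2.5e-6   # 27,500-km hypothetical reference path

def routed_km(air_route_km: float) -> float:
    if air_route_km < 1000:
        return air_route_km * 1.5
    if air_route_km < 1200:
        return 1500.0
    return air_route_km * 1.25

def objectives(air_route_km: float) -> tuple[float, float]:
    allocation = 0.05 + 0.002 * routed_km(air_route_km) / 100
    return HRP_SESR * allocation, HRP_BBER * allocation

sesr, bber = objectives(2000)   # 5% + 5% = 10% of the HRP objectives
assert abs(sesr - 2e-4) < 1e-9 and abs(bber - 2.5e-7) < 1e-12

# Worst-case translation of BBER to BER at 100 Gbps (one errored bit per
# errored block, 856,388 blocks per second):
bber_5000 = objectives(5000)[1]          # ~4.38e-7 for a 5000-km link
ber = bber_5000 * 856388 / 1e11
print(f"BER requirement: {ber:.2e}")     # ~3.75e-12
```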


Lossless Ethernet for the Data Center


Casimer DeCusatis, in Handbook of Fiber Optic Data Communication (Fourth
Edition), 2013

9.3.2 40 and 100 Gigabit Ethernet


In late November 2006, an IEEE high-speed study group agreed to target 100 Gbps
Ethernet as the next version of the technology. The requirements included 100GbE
optical fiber Ethernet standards for at least 100 m on OM3 multimode fiber, 10 m
over copper links, and 10–40 km on single-mode fiber, all with full-duplex operation
using current frame format and size standards. Another objective of this work was to
meet the requirements of an optical transport network (OTN). The study group also
adopted support for a 40 Gbit/s data rate at the MAC layer, meeting the same
conditions as 100 Gbit/s Ethernet with the exception of not supporting 10–40 km
distances. The proposed 40 Gbit/s standard also allows operation over up to 1 m of
backplane. The nomenclature for these options is shown in the following table:

| Physical Layer | 40 Gigabit Ethernet | 100 Gigabit Ethernet |
| --- | --- | --- |
| Backplane | 40GBASE-KR4 | |
| Copper cable | 40GBASE-CR4 | 100GBASE-CR10 |
| 100 m over OM3 MMF | 40GBASE-SR4 | 100GBASE-SR10 |
| 125 m over OM4 MMF | | |
| 10 km over SMF | 40GBASE-LR4 | 100GBASE-LR4 |
| 40 km over SMF | | 100GBASE-ER4 |
| Serial SMF over 2 km | 40GBASE-FR | |

Backplane distances are achieved using four lanes of 10GBASE-KR, forming a PHY for
40GBASE-KR4. Copper cable distances are achieved using either 4 or 10 differential
lanes with SFF-8642 and SFF-8436 connectors. The objective to support 100 m of
laser-optimized multimode fiber (OM3) was met by using a parallel ribbon cable
with 850 nm VCSEL sources (40GBASE-SR4 and 100GBASE-SR10). The 10 and
40 km 100G objectives were addressed with four wavelengths (around 1310 nm) of
25G optics (100GBASE-LR4 and 100GBASE-ER4), and the 10 km 40G objective was
addressed with four wavelengths (around 1310 nm) of 10G optics (40GBASE-LR4).

Another IEEE standard defining a 40 Gbit/s serial single-mode optical fiber standard
(40GBASE-FR) was approved in March 2011. It uses 1550 nm optics, has a reach
of 2 km, and is capable of receiving both 1550 nm and 1310 nm wavelengths of light.
The capability to receive 1310 nm light allows it to interoperate with a
longer-reach 1310 nm PHY, should one ever be developed. 1550 nm was chosen as the
wavelength for 802.3bg transmission to make it compatible with existing test
equipment and infrastructure.
Line rates of 40–100 Gbit/s may be implemented using multiple parallel optical links
operating at lower speeds. Optical transceiver form factors are not standardized by a
formal industry group but are defined by multisource agreements (MSAs) between
component manufacturers. For example, the C form-factor pluggable (CFP) MSA
specifies components for 100G links (the "C" represents the Roman numeral 100)
at distances of slightly over 100 m (see http://www.webcitation.org/5k781ouJn). As
another example, the CXP MSA specifies a 12-channel parallel optical link operating
at 10 Gbit/s/channel with 64/66B encoding, suitable for 100 Gbit/s applications.
The 12 links may also operate independently as 10 Gbit/s Ethernet or be combined
as three 40 Gbit/s Ethernet channels. Both electrical and optical form factors
are specified (see http://portal.fciconnect.com/portal/page/portal/fciconnect/highspeediocxpcablessystem)
with the optical version using a multi-fiber push-on (MPO) multifiber connector;
this form factor is also used by the InfiniBand quad data rate (QDR) standard (see
http://members.infinibandta.org/kwspub/specs/register/publicspec/CXP_Spec.Release.pdf).
Finally, the quad small form-factor pluggable (QSFP, or with later enhancements
QSFP+) MSA (see ftp://ftp.seagate.com/sff/SFF-8436.PDF) provides a 4-channel
interface, also using the optical MPO connector, suitable for 40 Gbit/s Ethernet as
well as other standards, such as serial attached SCSI (SAS) and 20G/40G InfiniBand.


NYSE Euronext Data Center Consolidation Project
Tomas Lora, Jeff Welch, in Handbook of Fiber Optic Data Communication (Fourth
Edition), 2013

Design
The new data center was designed to be a Tier 3 data center facility, as defined in the
TIA-942 standard, with two main distribution area (MDA) rooms, each serving two
data halls with completely independent physical plant capabilities. The migration
network was designed physically as two overlapping rings, one ring included the
two NYC-based data centers (Data Centers 1 and 2) and passed through both MDA
rooms in the new data center, and the other ring included the NJ-based data center
(Data Center 3). The optical network was built on the Ciena CN4200 WDM platform
using the ITU-T G.709 Optical Transport Network (OTN) protocol as the transport
mechanism. This allowed for the transport and aggregation of multiple data rates
and services, including both Fibre Channel (FC) and Internet Protocol traffic, over a
single wavelength. The design team planned to repurpose the equipment elsewhere
in the enterprise network once the migration was complete (Figure 1).

Figure 1. Migration network fiber design.

The design for the new data center already included two dedicated fiber links,
originating at existing points of presence (PoPs) 1 and 2 and laid out to be
completely independent of one another, complying with the minimum 20 m separation
requirement of the TIA-942 standard. The DWDM equipment operating these links
was capable of supporting 40 wavelengths. Our production network into the data
center was based on early adoption of 100 Gbps wavelength technology. Using
100 Gbps muxponders, we were able to conserve optical spectrum and thus allow
the migration network designers to reuse this fiber and to reserve 30 wavelengths
for use by the migration network. These 30 wavelengths would later be used to
support long-term data center capacity needs. Using reconfigurable optical add/drop
multiplexers (ROADMs), based on wavelength selectable switch (WSS) technology,
the design team created a 3-degree network allowing for the add/drop of the DC3
ring onto the DC1, DC2 ring. This design allowed the coexistence of the production
network for the new data center along with the two rings for the migration network
(Figure 2).
Figure 2. Production network and migration network. DC, Data Center.

Each ring was overlaid with both an FC storage network and an MPLS Ethernet
network to provide application connectivity. To support the storage team, the
optical network offered wavelengths supporting 12×2 Gbps FC circuits, supported by
Brocade 5100 FC switches. Two of the three legacy data centers, Data Centers 1 and
2, were used to back one another up, and so were routed with wavelengths from
Data Center 1 terminating in the new data center in MDA Room 1 and Data Center
2 wavelengths terminating in MDA Room 2. Both data halls were available from
each MDA room, and the storage network used all available paths to maximize both
throughput for data migration and network resilience. The wavelengths from Data
Center 3 were also configured to terminate in MDA Room 1. These wavelengths
were configured with both a primary and a protect path, with 50 ms recovery upon
failure detection, to allow ongoing data migration activities to continue in the event
of interruption of the physical network.

The MPLS network was designed to provide 3×10 Gbps circuits from each data center
to each MDA room and data hall. It comprised Cisco 6500 switches in the
legacy data centers and Juniper EX8200 switches in the new data center. The IP
design team used Ethernet link aggregation protocols to create a single 30 Gbps
trunk link to each data hall, and traffic from each data center was forwarded via
one link or the other depending on the IP routing cost. As low-latency transport
was a requirement for several applications, the wavelengths were not protected;
instead, the MPLS label-switched paths (LSPs) were configured with presignaled
backup paths. This allowed the IP/MPLS network operators to guarantee a
deterministic failover for each failure scenario. Across these circuits, the team
created four RFC 2547bis BGP L3VPNs, one for each of the primary security zones.
Using the L3VPNs allowed the IP/MPLS network team to conserve IP routing equipment,
as they were able to use a single, logically segmented IP router and to share the
DWDM circuit bandwidth across all security domains, rather than providing separate
network infrastructure and circuits for each security domain. While the new
data center was not itself divided into MPLS VPNs to maintain security domains,
using the VPNs on the migration network delivered data directly to the appropriate
domains. This allowed automated routing policies to be provisioned with minimal
filtering, freeing the IP/MPLS network team from becoming a bottleneck as IP
services were stretched across the WAN during the migration phase.

Finally, as all data centers were already connected to one another via another
MPLS network for normal business traffic, clear rules needed to be laid down and
maintained to avoid routing loops, the primary one being that only temporary
traffic would be allowed on the temporary DWDM network, facilitating its ultimate
decommissioning.

