
ON TOPIC:

Data Center Interconnect Trends

2 The New Datacom Imperative: Next-Generation Optical Ethernet and Multi-Fiber Connector Inspection
7 The Birth of Cognitive Networks for Data Center Interconnect
10 Cabling Advances for Data Center Interconnect
15 ODTN Paves the Way toward Disaggregated Optical Networks
Reprinted with revisions to format from LIGHTWAVE. Copyright 2019 by Endeavor Business Media.
The New Datacom Imperative: Next-Generation Optical Ethernet and Multi-Fiber Connector Inspection
MAURY WOOD, AFL

While structured cabling using multi-fiber connectors such as MPO/MTP® has been in use in enterprise data centers for many years, the prevalence of this connector type continues to steeply increase. This is due to the confluence of commercial dynamics (including the apparently insatiable consumer demand for broadband data services) and technical dynamics (including the need for parallel full-duplex lanes of transceiver optics for performance purposes). Concurrently, the relentless drive for high optical network operating efficiency, and minimal lost productivity, is leading to an expanding desire, particularly by hyperscale network operators, for 100% microscopic inspection of their infrastructure connectivity.

AFL estimates there are more than 10 million MPO/MTP connectors in use across the world today, with more than 1 million forecasted to be fielded in 2019. An MPO connector market compound annual growth rate (CAGR) of at least 10% is expected to sustain for the next five years.

As all communications engineers and technicians are aware, nearly all link failures occur at points of connection, and very rarely across unbroken network spans. At the same time, the signal modulation technology that underlies the new short-reach 200G/400G optical Ethernet standard reduces the available link budget in high-bandwidth datacom applications – making the operational need for immaculate connector endfaces all the more imperative.

2 LIGHTWAVE ON TOPIC: Data Center Interconnect Trends


Next-Generation Optical Ethernet Transport Standards (200G/400G) in the Data Center
Data centers typically rapidly adopt the latest networking technologies as their operators seek a competitive performance edge. The new IEEE 802.3bs 200G/400G Ethernet standard specifies the use of PAM4 modulation, a departure from the older non-return-to-zero (NRZ) modulation method. PAM4 provides higher spectral efficiency, but because it encodes two bits (four states) into the same carrier signal dynamic range as NRZ (which encodes one bit or two states), PAM4 requires about 9.6 dB more link optical signal-to-noise ratio than NRZ to maintain the same symbol error rate statistics (see Figure 1).

Figure 1. Example PAM4 eye diagram showing the four encoding states. (Photo courtesy of Keysight Technologies)

While this physical layer transmission technology change may initially seem unrelated to optical connector cleanliness, there is a distinct and important link. In 10G, 40G, and 100G systems using NRZ modulation, a contaminated connector endface may cause optical losses that can be largely ignored on the short-reach cabling common in data centers. However, in 200G and 400G systems using PAM4 modulation, the same level of endface contamination will erode a greatly reduced link budget margin, driving optical network technologists to demand pristine connector endfaces. Best-practice connector cleaning and inspection procedures will become essential to maintain the highest levels of performance and reliability.

A simple real-world analogy might paint a picture here. To a family car driving along at 30 miles per hour, road debris is a mere annoyance. To a sports car racing along at 120 miles per hour, the same road debris is a big risk that may even cause a fatal crash.

The light-carrying core of a single-mode MPO fiber has a diameter of 9 microns, for an endface surface area (πr²) of about 64 square microns. A 2-micron-diameter speck of dust has a surface area of about 3 square microns, or about 5% of the endface surface area. A 5% reduction in laser power is about -0.2 dB. In an environment in which link budgets are narrowing and transmit laser power is a significant contributor to overall data center power consumption (there are tens of thousands of semiconductor lasers in a typical modern data center), it is quite easy to understand the opex-driven desire of network technologists for completely clean transport optics.

Moving into 2020, hyperscale data centers are expected to become even larger, leading to structured MPO cabling that is naturally longer in some spans. Unlike the signal losses (attenuation) due to cable-reach physics (about 0.4 dB per kilometer at 1550 nm on single-mode fiber), connector contamination losses can be



identified using proper microscopic inspection techniques and fully mitigated using proper endface cleaning methods.

Data Center Cable Infrastructure – An Increasingly Valuable Asset
It is possible to quickly model the asset value of a 400G link for a hypothetical broadband internet service provider. The major players in this global market are chasing the goal of providing the majority of their residential and business subscribers with 1-Gbps downstream service by 2020. Today in the United States, a typical consumer pays about $100 per month or $1200 per year for fiber-to-the-home internet service. With no statistical oversubscription, a 400G link serves 400 customers, and thus places the asset value of each 400G MPO terminated cable (eight fibers at 100 Gbps per fiber full duplex) at $480,000 per year. With a conservative 2:1 oversubscription ratio, this rises to nearly $1 million per year. These rough economics underscore the importance of proper maintenance to avoid network downtime, including multi-fiber connector inspection and cleaning as needed.

In 2002, NTT-AT published a finding that up to 80% of failures in optical networks are caused by contaminated connector endfaces (https://sticklers.microcare.com/resources/faqs/why-clean-fiber-optic-connectors/). And in 2016, the Ponemon Institute/Emerson Network Power reported that the average cost of a data center outage is about $740,000 (https://www.emerson.com/en-us/news/corporate/network-power-study). These numbers provide strong quantitative motivation for data center operators to routinely inspect and clean their multi-fiber connectors.

The Rise of Fast MPO Inspection Tools
Endface inspection of multi-fiber connectors at turnup and during normal maintenance operations can easily identify connector contamination. But until recently, the standard method of MPO connector inspection involved the use of awkward and expensive mechanical scanning stages attached to the front-end snout of an inspection microscope probe. This labor-intensive method yields good results, but might take up to a minute to collect IEC 61300-3-35 auto-analysis pass/fail results for each fiber. The high opex associated with mechanically scanning tens of thousands of MPO connectors in hyperscale data centers has until recently made the goal of 100% endface inspection unrealistic.

Figure 2. A fast MPO inspection microscope showing two rows of 12 fibers with pass/fail results. (Photo courtesy of AFL)

Serendipitously, the availability of high-resolution image sensors, microcontrollers, flash memories, and field programmable gate array (FPGA) semiconductors, all cost-driven by the high-volume mobile device market, has enabled the development of wide field of view inspection probes that slash multi-fiber auto-analysis connector inspection time by an order of magnitude (see Figure 2). With fast MPO inspection tools costing $5,000 or so, the required capital investment is not challenging in the context



of billion-dollar data center build-outs. The economics of multi-fiber connector inspection have changed dramatically and favorably over the past 12 months.

Hyperscale and other scale-out optical network operators must now drive their operations to 100% “inspection before connection,” particularly given the increased sensitivity to endface contamination in 200G/400G transmission systems, the high cost of data center service interruptions, and the increasing enterprise asset value of multi-fiber cabling. Forward-looking internet content providers are now insisting that their infrastructure equipment suppliers conduct 100% pluggable transceiver connector inspection as well, to avoid initial network turnup problems.

Adding to the appeal of fast multi-fiber inspection tools is the trend toward cloud-based workflow management tools that make integrated Tier 1 (loss test)/Tier 2 (OTDR test) plus inspection reports a breeze. Multi-fiber connector inspection reporting and cloud-based workflow management platforms are a natural fit moving into 2020 and beyond.

Conclusion
Just as rising consumer and industrial demand for internet cloud services and changes to physical layer transport technology have made 100% optical connector inspection an operational imperative, fast MPO inspection tools have appeared on the market to meet this critical need. Both the capex and the opex economics associated with fast multi-fiber endface inspection are now favorable and very compelling.

Maury Wood is senior product line manager at AFL’s Test & Inspection Division, where he is responsible for inspection products. Prior to joining AFL three years ago, Maury was employed at Broadcom, NXP, and Analog Devices in senior technical marketing roles. He recently wrote a six-part blog series on 100G single-lambda technology (https://www.aflglobal.com/AFL-Blog/May-2018/The-Path-to-100G-Single-Lambda-in-the-Data-Center.aspx).
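The key numbers in this article — the roughly 9.6 dB PAM4 penalty, the roughly -0.2 dB dust-speck loss, and the 400G asset value — can be sanity-checked with a few lines of Python. This is a rough sketch: the dust model simply treats the speck as an opaque disk blocking part of the core, and the exact PAM4 result, 20·log10(3), comes out slightly below the article's rounded figure.

```python
import math

def pam4_snr_penalty_db():
    """PAM4 packs four levels into the same amplitude range as NRZ's two,
    so each eye is one-third the height; penalty = 20*log10(3)."""
    return 20 * math.log10(3)

def dust_loss_db(core_diameter_um, dust_diameter_um):
    """Loss from an opaque dust speck blocking part of the fiber core
    (simple area-blocking approximation)."""
    blocked = (dust_diameter_um / core_diameter_um) ** 2  # area ratio
    return 10 * math.log10(1 - blocked)

def asset_value_per_year(link_gbps, service_gbps, revenue_per_sub, oversub=1.0):
    """Annual revenue carried by one link under the article's rough model."""
    return (link_gbps / service_gbps) * oversub * revenue_per_sub

print(round(pam4_snr_penalty_db(), 2))           # ≈ 9.54 dB (article rounds to 9.6)
print(round(dust_loss_db(9, 2), 2))              # ≈ -0.22 dB
print(asset_value_per_year(400, 1, 1200))        # 480000.0
print(asset_value_per_year(400, 1, 1200, 2.0))   # 960000.0 — "nearly $1 million"
```

The 2:1 oversubscription case doubles the subscriber count rather than the rate per subscriber, matching the article's framing.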



The Birth of Cognitive
Networks for Data
Center Interconnect
WALID WAKIM, CISCO

Customers today expect better performance on a wide range of applications like streaming television, enhanced 4K video, and interactive gaming, all of which are increasing traffic at a steady 26% CAGR (Cisco VNI). This traffic is running over data center interconnect networks that have grown into a more than $1 billion market according to Ovum. To manage this growth, service providers as well as content providers must increase their operational efficiencies to gain more from their existing infrastructure.

Some of this has been solved at the optical layer, where the network is more reconfigurable and adjustable than ever before. By leveraging flexible symbol rates, flexible modulation, and flexible grid options, significant improvements have been made to balance capacity versus distance. Large amounts of data are collected now, with knobs that help enable a more dynamic optical layer. The question is, how will gathering and analyzing all this data help drive down the total cost of the network?

To enable such cost reduction requires a shift in focus to the software advances in fields such as analytics and machine learning. The increasing use of software to control, analyze, and manage the network has led to the application of what is today called Cognitive Networks (CN).

Cognitive networking is an application of artificial intelligence (AI), where the platform collects, learns, plans, and then acts on the network. To do this, we leverage machine learning, which is a subset of AI — machines learn the behavior of the network by analyzing the data using mathematical and statistical tools. A machine learning tool collects the data and delivers it to a rule and/or decision-making software block that ultimately sends the proper corrective action(s) to adjust the network. Previously, offline tools and processing handled only a minute part of this process, but today with software-defined networks, machines can learn the network and associate rules, policies, and actions to correct performance issues or outages/events, and in fact, even anticipate a network event.

Ready or not?
But are we really ready for AI and machines to respond and reconfigure the network or to make these types of changes in real time? It is unlikely that a service or content provider today would allow these actions without some human interaction or intervention. However, it is a great target to work towards, so let’s look at some realistic approaches that can be done today:

– Today we use static offline design tools. If we had dynamic learning tools that could feed activity back to the operator, we could predict the proper amplifier settings or the modulation technique to minimize network margins and provide optimized data rates across the data center interconnect metro, regional, or long-haul network. Since machine learning is continually updating and adapting, the margins in the network can be pushed tighter, thus lowering the overall network cost.

– Network failures and restoration become more predictable with cognitive networks. The machine learning algorithms may suggest shifting traffic if a link is degrading, but shifting the traffic when utilization is lowest, so that impact to the network is minimized.

These are valuable use cases that improve operational efficiencies. Today, there is a growing amount of network sensor data available. The issues are how to capture the data, how accurate and relevant the collected data is, and how do we ensure proper correlation of the correct parameters? The market still needs common collection methods, and they have to work across multi-vendor environments. For the machine to learn, the information collected and shared must be in a similar format. This means we need to use common data models that would make the network vendor-agnostic. There is a lot of standardization progress taking place in forums such as OpenROADM, OpenConfig, Telecom Infra Project (TIP), etc. We see service and content providers pushing harder and in many cases driving common model creation.

What are the next steps? Identifying what anomalies should be captured, what data is relevant, and what data needs to be discarded is a big step. Then more work needs to be done on the algorithms. It’s unlikely a single algorithm will solve these issues; it will likely be a combination of algorithms working together to deliver a solution. Leveraging cognitive networks, we can solve real customer problems and bring real value by lowering the total cost of ownership, delivering services faster and more reliably — all on an optimized network.

Walid Wakim is a Distinguished Engineer at Cisco.
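The collect/learn/decide/act loop described in this article can be illustrated with a toy example. This is purely illustrative — the class, the moving-average "learning," and the thresholds are hypothetical, not any vendor's product or API — but it shows the shape of the loop: telemetry comes in, a baseline is learned, and a corrective action is suggested when a link degrades.

```python
from collections import deque

# Illustrative collect -> learn -> decide -> act loop. A rolling baseline
# of pre-FEC BER readings flags a degrading link and suggests an action.
class LinkMonitor:
    def __init__(self, window=8, degrade_factor=10.0):
        self.history = deque(maxlen=window)   # learned baseline window
        self.degrade_factor = degrade_factor  # how far above baseline is "degraded"

    def observe(self, pre_fec_ber):
        """Collect one telemetry sample and decide on an action."""
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(pre_fec_ber)
        if baseline is not None and pre_fec_ber > self.degrade_factor * baseline:
            return "shift-traffic"   # act: suggest rerouting before the link fails
        return "no-op"

monitor = LinkMonitor()
readings = [1e-6, 1.2e-6, 0.9e-6, 1.1e-6, 5e-5]  # last sample spikes
actions = [monitor.observe(r) for r in readings]
print(actions[-1])  # "shift-traffic"
```

A production system would of course use richer models and keep a human in the loop, as the article notes, but the data flow — sensor data in, learned rule, corrective action out — is the same.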



Cabling Advances for Data
Center Interconnect
DAVID HESSONG, CORNING, INC.
The data center interconnect (DCI) application was a hot topic at the recent Optical Fiber Communications conference in San Diego. Having emerged as an important and fast-growing segment in the network landscape, the space has been the focus of several exciting advances in fiber-optic cabling. This article will explore some of the reasons for the growth of this segment and focus on several of the new cabling technologies aimed at making this space more installer friendly.

Best Practices for Designing and Deploying Extreme-Density Data Center Interconnects
A quick internet search of large hyperscale or multi-tenant data center spending announcements returns several expansion plans, totaling into the billions of dollars. What does this kind of investment get you? Often, that answer is a data center campus consisting of several data halls in separate buildings that are often bigger than a football field and that typically have over 100 Tbps of data flowing between them (Figure 1).



Figure 1. Sample data center campus layout.

Without diving too far into the details about why these data centers are growing so large, we can simplify the explanation to two trends. The first is the exponential east-west traffic growth machine-to-machine communication has created. The second trend is related to the adoption of flatter network architectures, such as spine-and-leaf or Clos networks. The goal is to have one large network fabric on the campus — which drives the need for 100 Tbps of data or greater flowing between the buildings.

As you can imagine, building on this scale introduces several unique challenges across the network, from power and cooling down to the connectivity used to network the equipment together. On this last point, multiple approaches have been evaluated to deliver transmission rates at 100 Tbps (and eventually higher), but the prevalent model is to transmit at lower rates over many single-mode fibers. It is important to note that these lengths are often 2-3 km or shorter. Modeling shows that lower data rates over more fibers will remain the most cost-effective approach for at least the next few years. This cost modeling shows why the industry is investing so much money developing high-fiber-count cables and associated hardware.

Now that we understand the need for high-fiber-count cables, we can turn our attention to the alternatives on the market for data center interconnect. The industry has agreed that ribbon cables are the only feasible solution for this application space. Traditional loose tube cables and single-fiber splicing would take much too long to install and result in splice hardware too large to be practical. For example, a 3456-fiber cable using a loose tube design would require more than 200 hours to terminate assuming 4 minutes per splice. If you use a ribbon cable configuration, splicing time drops


to less than 40 hours. In addition to these time
savings, ribbon splice enclosures typically have
four to five times the splice capacity in the same
hardware footprint compared to single-fiber
splice density.
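The fiber-count and splice-time arithmetic above can be reproduced in a few lines. This is a rough sketch: the two-fibers-per-duplex-link assumption ignores redundancy and growth, and the 8-minutes-per-ribbon-splice figure is an assumption chosen to land near the article's "less than 40 hours," not a published number.

```python
def fibers_required(campus_tbps, gbps_per_fiber, fibers_per_link=2):
    """Fibers needed for a campus interconnect, assuming one transmit
    and one receive fiber per duplex link (no redundancy or growth)."""
    links = campus_tbps * 1000 / gbps_per_fiber
    return int(links * fibers_per_link)

def termination_hours(fiber_count, minutes_per_splice, fibers_per_splice=1):
    """Total splice time for one cable end, ignoring setup and prep."""
    return fiber_count / fibers_per_splice * minutes_per_splice / 60

print(fibers_required(100, 100))       # 2000 fibers just to carry 100 Tbps
print(termination_hours(3456, 4))      # 230.4 h: single-fiber splicing at 4 min each
print(termination_hours(3456, 8, 12))  # 38.4 h: 12-fiber ribbons (assumed 8 min/splice)
```

Even this bare-minimum sizing shows why thousands of fibers per route, and ribbon rather than single-fiber splicing, are the starting point for these links.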

Once the industry determined that ribbon cables were the best option, it was quick to realize that traditional ribbon cable designs were not able to achieve the required fiber density in existing conduit. The industry therefore set out to essentially double the fiber density in traditional ribbon cables.

Two design approaches emerged. The first approach uses standard matrix ribbon with more closely packable subunits, and the other approach uses standard cable designs with a central or slotted core design with loosely bonded net design ribbons that can fold on each other (see Figure 2).

Figure 2. Different ribbon cable designs for extreme-density applications.

Now that we understand these new ribbon cable designs, we must also explore the challenges of terminating them. Since the cables carry an outside plant (OSP) flame rating, they are required to transition to an inside plant (ISP) rated cable within 50 feet of entrance into the building per the National Electric Code (NEC). This is typically done by splicing pre-terminated MTP®/MPO or LC ribbon pigtails (cable with connectors preinstalled on one end) or stubbed hardware (hardware preloaded with pigtailed cable) in an extreme-density splice cabinet. This application is where end-users are no longer just evaluating the OSP cable design, but instead a full tip-to-tip solution for these costly and labor-intensive link deployments (Figure 3).

Figure 3. Transition splice cabinet from extreme-density OSP cable to pre-terminated ISP cable.

Several areas must be evaluated when deciding on the best tip-to-tip approach. Time studies have shown that the most time-consuming process is ribbon identification and furcation to prepare the ribbons to route into a splice tray. “Furcation” refers to the process of removing the cable jacket to protect the ribbons with tubing or mesh as they route inside the hardware to a splice tray. This step becomes more time consuming as the fiber count of the cable increases.

Often, hundreds of feet of mesh or tubing are required to install and splice a single 3456-fiber link. This same time-consuming process also must be completed for the ISP cables, whether they are pigtailed cables or in the form of stubbed hardware. Cables on the market today vary greatly in furcation time. Some incorporate routable fiber subunits on both the OSP and ISP cables (Figure 4), which require no furcation to bring the fibers to the splice tray, while others require multiple furcation kits to protect and route the ribbons (Figure 5). Cables with routable



subunits are typically installed in purpose-built splice cabinets optimized with splice trays to match the fiber count of the routable subunit.

Figure 4. Cable with routable subunits to eliminate furcation process.

Another time-consuming task is ribbon identification and correct ordering to ensure the correct splicing. Ribbons need to be clearly labeled so they can be sorted after the cable jacket is removed, as a 3456-fiber cable contains 288 twelve-fiber ribbons. Standard matrix ribbons can be ink jet printed with identifying print statements, while many net designs rely on dashes of varying lengths and numbers to help identify ribbons. This step is critical because of the magnitude of fibers that must be identified and routed. This ribbon labeling also becomes critical in terms of network repair, when cables get damaged or cut after initial installation.

Figure 5. Sample furcation kit for extreme-density cable.

Forward-Looking Trends
Cable with 3456 fibers looks to be just a starting point, as cables with more than 5000 fibers have been discussed. Since conduit size is not getting bigger, the other emerging trend is to use fibers where the coating size has been reduced from the industry-standard 250 microns to 200 microns. Fiber core and cladding sizes remain unchanged, therefore not affecting optical performance. This reduction in fiber coating size can allow hundreds or thousands of additional fibers in the same size conduits as before.

The other trend will be the rising customer demand for tip-to-tip solutions. Sticking thousands of fibers in a cable solved the problem of conduit density but created many challenges in terms of risk and network deployment speed. Innovative solutions that help eliminate these risks and improve deployment velocity will continue to mature and evolve.

Demands for extreme-density cabling seem to be accelerating. Machine learning, 5G, and bigger data center campuses are all trending in a way that drives demand for these DCI links. These deployments will continue to challenge the industry to develop tip-to-tip solutions that can scale effectively to enable maximum duct utilization while not becoming increasingly cumbersome to deploy.

David Hessong is a manager of global data center market development at Corning Inc.
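The conduit-capacity gain from the smaller coating can be estimated with a simple cross-sectional area ratio. This is a rough packing model — real cable designs will not track it exactly — but it shows where the "hundreds or thousands of additional fibers" come from.

```python
def packing_gain(old_coating_um=250.0, new_coating_um=200.0):
    """Approximate increase in fibers per duct when the coating diameter
    shrinks, assuming fiber count scales with coated-fiber cross-section."""
    return (old_coating_um / new_coating_um) ** 2

print(packing_gain())  # 1.5625 → roughly 56% more fibers in the same conduit
```

Applied to a 3456-fiber cable, that same area budget would hold on the order of 5400 of the 200-micron fibers, consistent with the 5000-plus-fiber cables under discussion.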





ODTN Paves the Way
toward Disaggregated
Optical Networks
ANDREA CAMPANELLA,
OPEN NETWORKING FOUNDATION

A lack of interoperability between optical components has been a problem plaguing the telecom industry for more than 30 years, driving up costs and complexity for telecom carriers. The Open Networking Foundation (ONF), along with its operator leadership and vendor partners, is continuously seeking new opportunities to drive innovation in computer networks through open standards, open source software, and collaboration. In particular, with interest from NTT Communications and Telefónica, ONF is actively pursuing optical disaggregation for data center interconnect (DCI) deployments. The service providers’ and members’ requirements, a use case overview, and implementation details such as application programmable interfaces (APIs) are described in ONF’s Open Disaggregated Transport Network (ODTN) Reference Design.

ONF and partner service providers’ view on disaggregation in optical networks
With very low margins on the networks themselves and the bulk of revenue transitioning to services, service providers are looking for more cost-effective ways to build their networks. As a result, the drive for optical disaggregation in DCI now comes in many forms, not only to drive capex and opex reduction but also, and maybe more importantly, in rapid cycles for introduction of innovations and new technologies, such as the ONF’s CORD technology that supports edge cloud applications.

Disaggregation of the different optical components and of hardware from software enables a better life-cycle cost approach (LCCA) since optical components can be interchanged at the end of their lifespan without the need to completely modify the network architecture. At the same time, suppliers can



more narrowly focus on a specialty component (e.g., a transponder) without having to build a complete solution themselves. Optical network disaggregation shifts a deployment from a vertically integrated, single-vendor system managed by proprietary software to a disaggregated, multivendor environment (Figure 1).

Figure 1. Disaggregation enables an open, multivendor network environment.

How SDN with open APIs makes disaggregation possible
The shift to disaggregated network components creates the challenge of restoring cohesive control and management in a multi-vendor environment. The shift is enabled first by exposing visibility and control of the optical domain to the upper levels of control. ONF and its service provider and vendor partners seek an improved disaggregation solution using open APIs and an open source controller while at the same time offering the cohesive control that operators desire.

In the disaggregated scenario, ONF is building the open source controller that understands the semantics of both the open and standard APIs and the open common data models. By exposing these open APIs and models, devices from any possible vendor become “plug and play” within the system, opening the door for best-of-breed product selection and a more optimal LCCA approach.

To build such a disaggregated network, ONF, in collaboration with NTT Communications, Telefónica, Telecom Italia, and many vendors and supply chain partners, launched ODTN.
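The "plug and play" idea can be made concrete with a hypothetical sketch. This is not ONOS code and the class and field names are invented; it only illustrates the pattern: per-vendor drivers translate each device's native schema into one common data model, so the controller never needs vendor-specific logic above the driver layer.

```python
# Hypothetical driver-registry sketch: the controller speaks one common
# model; per-vendor drivers translate each device's native interface.
class CommonPortModel(dict):
    """Vendor-neutral view of a device port (the 'common data model')."""

class VendorADriver:
    def get_ports(self):
        # Pretend this came from the device's native management session.
        native = [{"ifName": "xe-0/0/1", "speedGbps": 400}]
        return [CommonPortModel(name=p["ifName"], gbps=p["speedGbps"])
                for p in native]

class VendorBDriver:
    def get_ports(self):
        # A different native schema, mapped onto the same common model.
        native = [{"port_id": "1/1/c1", "rate": "400G"}]
        return [CommonPortModel(name=p["port_id"],
                                gbps=int(p["rate"].rstrip("G")))
                for p in native]

registry = {"vendor-a": VendorADriver(), "vendor-b": VendorBDriver()}

def discover(vendor):
    """Any registered vendor becomes 'plug and play' behind the common model."""
    return registry[vendor].get_ports()

ports = discover("vendor-a") + discover("vendor-b")
print(all(p["gbps"] == 400 for p in ports))  # one model, two vendors
```

Adding a vendor means registering one more driver; nothing above the registry changes, which is the essence of the best-of-breed selection the article describes.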



ODTN solution overview
The first step for optical disaggregation in a DCI point-to-point deployment is called a “partially disaggregated” optical network. The transponders are disaggregated from the line system that aggregates and controls other optical equipment, such as amplifiers, ROADMs, etc.

An optical domain controller is introduced to manage the end-to-end optical connection by managing the two transponders and the line system controller through APIs. In this partially disaggregated scenario the optical domain controller sees the open line system (OLS) as a “big switch” with many input and output ports to which the transponders are connected.

ONF and its collaborators are building ODTN using the Open Network Operating System (ONOS) as the open source controller, the OpenConfig models as the transponder’s API, and the Transport API (TAPI) to manage the line system, which becomes an OLS. ONOS also offers the means to program the whole optical domain as one entity by exposing the same TAPI models northbound for an OSS/BSS to use (Figure 2).

Figure 2. ODTN disaggregates the transponders and creates an open line system (OLS) that can be managed within an SDN environment.

ONOS initiates communication channels with the different elements, using NETCONF or gNMI with the transponders and REST/RESTCONF with the OLS controller. Once device discovery is complete, ONOS requests device information, such as ports, through the OpenConfig models for the transponders and TAPI for the OLS. Upon receiving device-specific information, ONOS builds a graph model of the topology and exposes it through TAPI to the northbound interface.

To turn up a connection, ONOS receives a request from the upper layers to provision connectivity from a client port of a transponder to a client port of another, book-ended transponder. Upon receiving the request, ONOS stores it internally and then starts the negotiation for obtaining the end-to-end path. First it asks the OLS, through TAPI, for connectivity between the ports facing the two transponders. The OLS does internal path computation, wavelength assignment, and power computation, based on the power



capability of the receiving transponders. Once software and leverage integrated pluggable
the OLS returns a viable wavelength for the optical modules. Thanks to its partnership with
path, ONOS computes the required output the Telecom Infra Project (TIP) and Edgecore,
power and provisions the wavelength and ODTN is also working actively to integrate with
the power value to the transponders through the CASSINI optical white-box packet-optical
Openconfig. ONOS treats the connectivity transponder.
request always as
bidirectional to enable
traffic to flow in both
directions.

The configuration
is stored in ONOS
and replicated
across its controller
instances. Such
replication among a
configurable number
of ONOS instances
enables the network
to both scale up as
needed and handle
controller failures
since ONOS instances
can distribute loads
and back each other
Figure 3. ONF is working with TIP to enable ODTN via white box hardware.
up for carrier-grade
resiliency. ONOS also
is capable of handling data plane failures by Cassini introduces, though TIP’s Transponder
recomputing paths across the network when Abstraction Interface (TAI), the possibility of
such a failure occurs. having plug-and-play transceivers on the line
side by providing a common abstraction to the
operating system running on the packet-optical
One step further: optical transponder system. This type of abstraction
disaggregated white boxes enables a white box with merchant silicon
The ODTN deployment and workflow can to be built to provide Layer 2/3 functionality
work with various transponders and open line (Figure 3). ONF is building on this white box in
systems from many different vendors as long collaboration by also collaborating with many
as they support the OpenConfig and TAPI transceiver vendors.
models. Internally these devices run proprietary
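The value of a TAI-style abstraction can be illustrated with a short sketch. This is not the real TAI, which is a C API defined by TIP; the class and method names below are invented for illustration. It shows only the pattern: the network operating system programs one common interface, and each vendor's line-side module plugs in behind it.

```python
# Illustrative sketch of a TAI-like transceiver abstraction.
# All names here are invented; the real TAI is a C library.

from abc import ABC, abstractmethod


class LineSideModule(ABC):
    """Common contract the NOS programs against, regardless of vendor."""

    @abstractmethod
    def set_frequency(self, thz: float) -> None: ...

    @abstractmethod
    def set_output_power(self, dbm: float) -> None: ...


class VendorAModule(LineSideModule):
    """One vendor's plug-in; another vendor would differ only internally."""

    def __init__(self):
        self.state = {}

    def set_frequency(self, thz):
        self.state["frequency_thz"] = thz  # a vendor SDK call would go here

    def set_output_power(self, dbm):
        self.state["tx_power_dbm"] = dbm


def configure(module: LineSideModule, thz: float, dbm: float):
    """NOS-side code: plug-and-play because it sees only the abstraction."""
    module.set_frequency(thz)
    module.set_output_power(dbm)


mod = VendorAModule()
configure(mod, 193.1, -2.0)
```

Swapping in a different vendor's module requires no change to `configure` — which is the plug-and-play property the article attributes to TAI.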



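As a rough illustration of the provisioning workflow described earlier — ONOS asking the OLS for a viable wavelength between the transponder-facing ports, deriving the output power, and provisioning both transponders — here is a minimal Python sketch. The class names, channel grid, and link-budget arithmetic are all invented for clarity; they do not reflect ONOS internals or real TAPI/OpenConfig calls.

```python
# Invented sketch of the ODTN provisioning flow; numbers are illustrative.

from dataclasses import dataclass


@dataclass
class OpticalLineSystem:
    """Stands in for the OLS reached via TAPI; it assigns the wavelength."""
    free_channels_thz: tuple = (193.1, 193.2, 193.3)  # hypothetical grid

    def request_connectivity(self, port_a: str, port_b: str) -> float:
        # Internal path computation and wavelength assignment are opaque
        # to the controller; here we simply hand back the first free channel.
        return self.free_channels_thz[0]


class Transponder:
    def __init__(self, name: str):
        self.name = name
        self.provisioned = {}  # what the "OpenConfig" push set on the device

    def provision(self, frequency_thz: float, tx_power_dbm: float):
        self.provisioned = {"frequency_thz": frequency_thz,
                            "tx_power_dbm": tx_power_dbm}


def provision_bidirectional(ols, tp_a, tp_b,
                            target_rx_dbm=-12.0, span_loss_db=10.0):
    """Controller-style flow: ask the OLS for a wavelength, derive the
    output power, then configure BOTH transponders."""
    wavelength = ols.request_connectivity(tp_a.name, tp_b.name)
    tx_power = target_rx_dbm + span_loss_db  # toy link-budget arithmetic
    for tp in (tp_a, tp_b):                  # request is always bidirectional
        tp.provision(wavelength, tx_power)
    return wavelength, tx_power


ols = OpticalLineSystem()
a, b = Transponder("tp-a"), Transponder("tp-b")
wl, pwr = provision_bidirectional(ols, a, b)
```

Note the loop over both transponders: mirroring the article, the connectivity request is treated as bidirectional so traffic can flow in both directions.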
Current status and development
ONF is currently developing a disaggregated DCI solution in collaboration with our partners and collaborators. The development builds on top of what was described above to include even more configurable parameters, such as modulation, and to gather more data and alarms for a better view of the state of the optical system.

ONF, in parallel with the partially disaggregated scenario based on transponders and OLSs, is also working on a ROADM-based solution. Here, the ODTN system will have the same role of provisioning the optical domain, but on a mesh or ring ROADM topology. ONOS drivers compliant with OpenROADM 2.2 are being used to program wavelength, power, and port state on OpenROADM 2.2-compliant boxes. Thanks to the many layers of abstraction, ONOS provides the same TAPI interface northbound and internally reuses all the constructs of the OLS-based deployment, with changes happening only at the device driver layer.

Operator Trials
ODTN is currently being trialed and tested by different service providers and vendors. Telefónica deployed the system in its labs leveraging Nokia transponders and an ADVA OLS. End-to-end testing was performed with good results, showing the feasibility of the approach and the quality of the implementation. NTT Communications tested ODTN with Infinera hardware in its labs in Japan, again with good results that demonstrated system capability with different hardware. Telecom Italia has also tested ODTN in its lab with a network composed of Coriant (now Infinera) transponders and different ROADMs, showing how ODTN can flexibly adapt to different networks. Sterlite Technology is also deploying ODTN at its two testing sites in India, where system integration and production hardening of the platform are currently underway.

ONF and TIP have also shown the capability of ODTN with the white box CASSINI device in multiple demos and are on track to deploy in TIP community labs around the world, such as in London and Menlo Park, to give the community a testbed to continuously develop and test the platform with new hardware and software enhancements. The integration of ODTN has also been shown with other vendors, such as NEC and Fujitsu transponders and Lumentum ROADMs.

These trials with different operators show the value of ODTN as a complete solution for controlling disaggregated optical networks, reaping the benefits of disaggregation without having to compromise on the abstractions to the upper layer or introduce complexity in the deployment.

Thanks to the great progress we've been seeing, ONF expects growing engagement and field trials of ODTN in the upcoming months and next year.

Andrea Campanella is ODTN project leader within the Open Networking Foundation.



Founded in 1984, AFL is an international manufacturer providing end-to-end
solutions to the energy, service provider, enterprise and industrial markets as
well as several emerging markets. The company’s products are in use in over
130 countries and include fiber optic cable and hardware, transmission and
substation accessories, outside plant equipment, connectivity, test and inspection
equipment, fusion splicers and training. AFL also offers a wide variety of services
supporting data center, enterprise, wireless and outside plant applications.

Headquartered in Spartanburg, SC, AFL has operations in the U.S., Mexico, Canada, Europe, Asia and Australia, and is a wholly-owned subsidiary of Fujikura Ltd. of Japan. For more information, visit www.AFLglobal.com.

www.AFLglobal.com/true

learn.AFLglobal.com

learn.aflglobal.com/data-center

learn.aflglobal.com/fttx



Cisco is leading the disruption in the industry with its technology innovations in systems, silicon, software, and security, and its unrivalled expertise in mass-scale networking, automation, optics, cable access, video, and mobility.
Combining these capabilities with Cisco’s portfolio of go-to-market security,
collaboration, IoT, and professional services, we enable service providers and
media and web companies to reduce cost and complexity, help secure their
networks, and grow revenue.

Data Center Interconnect Solutions

DCI Solution Brief

NCS 1004 Data Sheet

Cisco.com Optical site



Corning (www.corning.com) is one of the world’s leading innovators in materials
science, with a more than 165-year track record of life-changing inventions. Corning
applies its unparalleled expertise in glass science, ceramic science, and optical
physics along with its deep manufacturing and engineering capabilities to develop
category-defining products that transform industries and enhance people’s lives.
Our Optical Communications division (www.corning.com/opcomm) delivers connectivity to every edge of the network, from optical fiber, cable, hardware and equipment to fully optimized solutions for high-speed communication networks.

Learn more about Data Center Interconnect Solutions

Learn more about RocketRibbon™ Extreme Density Cables

Download the Bill-of-Materials Tool for Local Area Network and Data Centers

