1. Introduction
2.2 Scale Expansion of Data Centers
3. What is 400ZR?
3.1 Background
3.2 Main Advantages
3.3 Specifications
4. Test Solutions for 400ZR Installation
5. Using Test Instruments
6. Conclusions
1. Introduction
The amount of data processed in data centers is increasing dramatically, driving demand for higher speeds and higher
capacities. Data transmission speeds have already entered the 400 Gbps era, and new data centers are being constructed. At the
same time, operators of networks between data centers are switching to 400GBASE-ZR (400ZR) compatible equipment and
installing new equipment to reduce operating costs and expand communications capacity.
This document describes market trends surrounding 400ZR, the main technical specifications, and test solutions required for
deployment and quality maintenance.
[Figure: Growth in the volume of data created worldwide, 2010 to 2025. Source: Data Age 2025, sponsored by Seagate with data from IDC Global DataSphere, Nov 2018]
2.2 Scale Expansion of Data Centers
One approach to processing high-speed, large-capacity data is building stand-alone hyperscale data centers with massive
computing capacity. One definition of a hyperscale data center is a facility of 1,000 m² or more housing 5,000 servers or more.
Securing a large site and sufficient electric power are some of the challenges when constructing a hyperscale data center.
Another approach is to distribute smaller data centers at different locations, such as in metropolitan areas, nationwide, and
worldwide, interconnected by optical fiber to share and back up data with each other at high data transmission rates of 100 Gbps or
more. Transmission speeds have been increasing in recent years, moving to 400 Gbps or faster. Latency increases as the distance
between data centers grows, making network stability and reliability an issue. Consequently, as one example, the Optical
Internetworking Forum (OIF) 400ZR standard specifies a maximum distance of 80 km between data centers.
Distributed data centers also have the advantage of mitigating the risk of complete failures and system outages in emergencies,
such as natural disasters. As a result, to achieve scale expansion and risk mitigation, data center operators are promoting the
distribution of data centers.
[Figure: WDM network equipment between data centers — switch/router, WDM system with ROADM and transponder]
The cost of WDM network equipment has been decreasing annually, and transponders account for most of the cost. If transponders can
be replaced by switches/routers and optical transceivers, data center operators can reduce equipment and operating costs
significantly. Replacement also eliminates the power consumed by transponders. Since the replacement can be achieved by
adopting 400ZR compatible equipment, more data center operators are introducing 400ZR technology into their networks and are
taking responsibility for maintaining and managing network equipment.
Cost savings are not the only benefit of introducing 400ZR. Some data center operators are already switching from carrier-provided
dark fibers to their own dark fibers. Owning and managing dark fibers in-house increases the degree of freedom in terms of
maintainability, customizability, scalability, etc., and has the advantage of reducing data delays and improving the response to network
problems. However, some dark fibers have never been tested at transmission speeds of 100 Gbps or more, so the required
performance must be confirmed before introducing 400ZR.
While the introduction of 400ZR brings many benefits to data center operators, the operators themselves must guarantee the
quality of the communications in their own networks, making maintenance and management an issue.
3. What is 400ZR?
3.1 Background
400ZR or 400GBASE-ZR is an Ethernet-based optical networking standard. It is called 400ZR by the OIF, and 400GBASE-ZR by the
Institute of Electrical and Electronics Engineers (IEEE). The OIF started working in 2016 on an Implementation Agreement (IA) for
400ZR coherent optical interfaces to achieve multi-vendor interoperability, reduce operating costs, and increase transmission
capacity for optical networks between data centers over short distances. This OIF IA was published in 2020.
3.2 Main Advantages
Adopting 400ZR optical transceivers eliminates the need for expensive network equipment (transponders, ROADMs, etc.). In addition,
400ZR transceivers have a small form factor and can be connected directly to a switch/router, simplifying the overall network system.
These features reduce space requirements, power consumption, and installation/operation costs.
Previously, network equipment was supplied by a single vendor as a turnkey solution, offering the advantage of high component
reliability but limiting supply-chain flexibility. In contrast, configuring 400ZR optical transceivers with switches/routers supports
multi-vendor interoperability, giving businesses greater freedom of choice in terms of equipment performance, reliability, cost, and
other important criteria. It also allows businesses to select alternative equipment in the event of supply difficulties, effectively
securing the supply chain.
3.3 Specifications
The OIF 400ZR IA specifies the following for the digital coherent 400ZR interface:
• Client signals limited to 400 GbE
• 16-level quadrature amplitude modulation (DP-16QAM) technology with two polarizations
• 60 Gbaud modulation
• Signal frequency (wavelength) of 191.3 THz to 196.1 THz (1529 nm to 1567 nm) using C-band
• Concatenated FEC
• Point-to-point connection between data centers
• 80 km max between data centers (without optical amplifiers)
• Pluggable transceivers mounted on switch/router ports
• Multi-vendor interoperability
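As a rough sanity check on the figures above, the DP-16QAM line rate can be derived from the symbol rate. This is illustrative arithmetic, not text from the IA; the IA specifies a symbol rate slightly below the round 60 Gbaud quoted here.

```python
# Rough line-rate arithmetic for the 400ZR DP-16QAM interface.
# 60 Gbaud is the round number quoted above; the IA's exact
# symbol rate is slightly lower (~59.84 Gbaud).
symbol_rate_gbaud = 60   # symbol rate in Gbaud
bits_per_symbol = 4      # 16QAM encodes 4 bits per symbol
polarizations = 2        # dual polarization (DP)

raw_line_rate_gbps = symbol_rate_gbaud * bits_per_symbol * polarizations
print(raw_line_rate_gbps)  # 480 Gbps raw, carrying the 400 GbE payload
                           # plus concatenated-FEC and framing overhead
```

The margin between the 480 Gbps raw rate and the 400 GbE client rate is what carries the concatenated FEC and framing overhead.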
400ZR optical transceivers in a network configuration have transponder functions, enabling direct connection to switch/router ports.
Although the 400ZR IA does not specify the transceiver geometry, most designs use the QSFP-DD or OSFP form factor.
(Source) QSFP-DD MSA "QSFP-DD: Enabling 15 Watt Cooling Solutions White Paper"
Data centers are connected point-to-point by a single optical fiber. The simplest configuration (Figure 5 (a)) transfers data signals by
modulating one optical wavelength; adding more optical wavelengths increases the data transmission capacity. This type of network
includes a passive optical Mux/Demux such as an optical fiber coupler (Figure 5 (b)). With 400ZR, the maximum distance
between data centers is up to 80 km without optical amplifiers and up to 120 km with optical amplifiers.
[Figure 5: 400ZR network — (a) single-wavelength point-to-point link; (b) multi-wavelength link via a passive optical Mux/Demux]
4. Test Solutions for 400ZR Installation
400ZR uses 400 Gbps Ethernet-based signals. Such high-speed networks require testing to confirm performance, maintain high-
quality communication, and avoid failures. Since 400ZR optical transceivers can switch output optical wavelengths, it is important to
use an optical spectrum analyzer to confirm that the expected optical wavelength is output before connecting transceivers to the
network and performing the Ethernet frame test. Test solutions for each 400ZR implementation are explained below.
Scenario 1: Optical fiber test
Optical fibers between data centers may be either leased from a telecommunications carrier or installed by the data center operator.
In both cases, any large transmission or connection loss in the optical fiber causes a drop in the signal optical power, preventing
signal reception if the optical power falls below the specification. As a result, the network cannot maintain high-speed 400 Gbps
transmissions. To prevent this problem, the optical fiber should be tested using an optical time domain reflectometer (OTDR) before
network commissioning to ensure that the transmission and connection losses are within the permissible range. Testing with an
OTDR also supports short-haul optical connections within data centers.
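A loss-budget check of the kind an OTDR verifies can be sketched as follows. All figures here are illustrative assumptions (typical G.652 fiber attenuation, example splice and connector losses, an assumed budget), not values from the 400ZR IA.

```python
# Illustrative end-to-end link loss check against a permissible budget.
# All numbers are assumptions for the sketch, not 400ZR IA values.
fiber_km = 80
fiber_loss_db_per_km = 0.2        # typical G.652 attenuation near 1550 nm
splice_losses_db = [0.05, 0.05]   # example splice events seen by the OTDR
connector_losses_db = [0.3, 0.3]  # patch-panel connectors at each end
loss_budget_db = 24               # assumed maximum permissible loss

total_loss_db = (fiber_km * fiber_loss_db_per_km
                 + sum(splice_losses_db)
                 + sum(connector_losses_db))
print(total_loss_db <= loss_budget_db)  # ~16.7 dB total, within budget
```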
Scenario 2: Stand-alone evaluation of the 400ZR optical transceiver
Before connecting a new 400ZR transceiver to a switch/router port or replacing a failed unit, the performance of the optical
transceiver itself must first be checked by performing a stand-alone evaluation. This prevents network problems caused by abnormal
optical power or wavelength output from the optical transceiver. In the stand-alone evaluation, the optical transceiver is connected
to a 400G Ethernet tester, which sets the transceiver output optical wavelength. An optical spectrum analyzer then measures and
confirms that the output power, wavelength, and signal bandwidth meet the required network specifications.
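The pass/fail judgment applied to the OSA measurements can be sketched as below. The target channel, tolerance, and power window are hypothetical values chosen for illustration; the actual limits come from the network specification.

```python
# Sketch of the pass/fail check applied to OSA measurements during the
# stand-alone evaluation. Limits are illustrative assumptions only.
measured = {"wavelength_nm": 1547.72, "power_dbm": -8.5}

target_wavelength_nm = 1547.72    # channel set via the Ethernet tester
wavelength_tol_nm = 0.1           # assumed wavelength tolerance
power_min_dbm, power_max_dbm = -10.0, -6.0  # assumed output power window

wavelength_ok = abs(measured["wavelength_nm"]
                    - target_wavelength_nm) <= wavelength_tol_nm
power_ok = power_min_dbm <= measured["power_dbm"] <= power_max_dbm

print(wavelength_ok and power_ok)  # True
```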
Scenario 3: Ethernet frame test between data centers
After passing the stand-alone evaluation, the 400 Gbps Ethernet frame test is performed on the data center interconnect (DCI).
The operator needs to measure the bit error rate (BER), the number of sent and received frames, FEC rate, etc., and confirm the
network transmission performance and margin using a 400G tester at each data center (Fig. 8).
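The BER figure reported by the tester is simply errored bits over total bits. The sketch below uses assumed counts and an assumed pre-FEC threshold for the margin check; consult the OIF 400ZR IA for the actual concatenated-FEC limit.

```python
# BER calculation of the kind reported by a 400G Ethernet tester.
# Counts and threshold are assumed figures for the sketch.
errored_bits = 1_200_000
total_bits = 4 * 10**14        # bits sent during the test interval

pre_fec_ber = errored_bits / total_bits
pre_fec_threshold = 1.25e-2    # assumed correctable pre-FEC BER limit

print(pre_fec_ber < pre_fec_threshold)  # well below threshold: ample margin
```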
[Figure 8: 400GBASE-ZR (400ZR) Ethernet frame test between data centers using 400G testers]
After passing the 400 Gbps Ethernet frame test, the tested 400ZR transceiver is removed from the 400G tester and connected to the
network switch/router. Finally, an optical spectrum analyzer verifies that the specified optical wavelength, bandwidth, and power are
output from the 400ZR transceiver.
5. Using Test Instruments
Solving the following issues when verifying 400ZR optical networks improves job efficiency and cuts costs:
• Training workers on unfamiliar measuring equipment
• Referencing operating procedures in manuals
• Understanding the various network standards and verifying their contents
• Reducing test operation errors
Automated testing using scenarios is one solution for cutting training time, operational errors, and the time needed to understand
verification content. As shown in Figure 10, test scenarios for the MT1040A automate tests and prevent operation errors, reducing the
measurement work burden and improving work efficiency.
[Figure 10: MT1040A automated test scenario — graphic guidance is shown when switching the cable from the 400 GbE port to the OSA, and Pass/Fail results are displayed automatically]
6. Conclusions
Although introducing 400ZR technology cuts costs for data center operators, the operators themselves become responsible for
guaranteeing the quality of communications between data centers, which previously was guaranteed by telecommunications carriers.
Therefore, to deploy and maintain high-quality networks, operators must choose effective test solutions carefully.
Specifications are subject to change without notice.