
A Survey on Optical Technologies and their Impact on Future

Datacenter Networks
Neelakandan Manihatty Bojan, Jingyun Zhang
Abstract—Future datacenter requirements demand the deployment of optics within the datacenter. A better understanding of optical technologies will enable better design choices. The aim of this paper is to enlighten the reader on the developments in optics and its technology (for short-range communication, i.e., <5 km) and how they can be exploited to meet the demands of future datacenter networks (DCN). We start with a general overview of the current DCN architecture, its bottlenecks (at chip, board and rack level) and the motivation to move towards optics. We then discuss the recent developments (up to January 2015) in optical components and their technologies, comparing the various design choices (through parameters) and their applicability in datacenter-scale networks. In the later sections of the paper we discuss how the developments in optical components and their technology are changing the design space of the optical subsystems used in DCN. Finally, we give some insights on how the above developments can be exploited to increase the throughput and lower the latency and power consumption in datacenters (addressing the compute, interconnect and storage domains).



It is no secret that most data centers are seeing explosive growth in traffic as more people are using more applications that are performing more complex activities. But the changes facing the data center extend well beyond mere scale. To meet growing application workload requirements, newer applications are employing scale-out architectures. The result is a growing set of applications that operate in a distributed compute environment. For these multi-tier applications, data exchange between components is frequently more intense than the interaction between the application and the end user. This phenomenon is the driving force behind the rise of east-west traffic in modern data centers [1].



In today's data centers there are millions of machines that need to be interconnected. There are many different ways to interconnect machines, but different interconnection topologies have different performance characteristics. The following are some of the important metrics that need to be considered while building an interconnection network:

- Bisection bandwidth
- Network diameter
- Path diversity
- Ideal throughput
- Average distance and latency


Based on the above performance metrics, we can use either a direct topology or an indirect topology. Direct topologies provide better performance and latency than indirect topologies, but they have scalability issues. As data centers follow the scale-out [3] architecture, scalability is one of their prime requirements. Hence indirect topologies are the most prevalent in datacenters. Amongst the indirect topologies, the fat-tree is the most preferred in data centers.
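The fat-tree's appeal can be seen from the standard k-ary fat-tree construction built from identical k-port switches (textbook relations, not taken from this paper's own numbers): it scales to k^3/4 hosts with full bisection bandwidth using only commodity switches. A minimal sketch:

```python
def fat_tree_metrics(k: int) -> dict:
    """Size of a standard k-ary fat-tree built from identical k-port switches."""
    assert k % 2 == 0, "k must be even"
    hosts = k ** 3 // 4              # servers supported
    edge = agg = k * k // 2          # k pods, each with k/2 edge and k/2 aggregation switches
    core = (k // 2) ** 2             # core switches
    bisection_links = k ** 3 // 4    # full bisection: one link per host
    return {"hosts": hosts, "switches": edge + agg + core,
            "bisection_links": bisection_links}

# A fat-tree of 48-port switches supports 27,648 hosts with 2,880 switches:
print(fat_tree_metrics(48))
```

The key design point is that every switch is identical, so the network scales out with cheap commodity parts rather than scaling up with larger, more expensive switches.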

Cisco's Global Cloud Index [2] projects that cloud traffic will represent more than three-fourths of global datacenter traffic by 2018. The report forecasts that by 2018:

- Approximately 17 percent of data center traffic will be fueled by end users (north-south traffic) accessing services.
- 8.5 percent of data center traffic will be generated between data centers (data replication and software/system updates).
- The remaining 74.5 percent of data center traffic will stay within the data center (east-west traffic) and will be largely generated by storage, production and development data in a virtualized environment.

To understand how the aggressive bandwidth demands

are going to impact datacenters, we need to first have an
understanding of current data center architectures and why they
are built that way.

Fig. 1. Network architecture of a datacenter

Figure 1 represents the network architecture of a datacenter that has a fat-tree topology. Datacenters comprise a group of pods [4]. Pods are practical units of machines comprising a large number of servers, an interconnection network and a cooling solution. Each pod has a number of racks, and each rack has a Top Of Rack (TOR) switch which connects to all the servers in that rack. All the TORs are interconnected with each other through multiple layers of switching to provide the interconnection network between different servers. These additional layers of switching (the interconnection network) include aggregation and core switches that are predominantly electronic switches.

Fig. 2. Diagram showing the electrical bottlenecks at board level

Fig. 3. Block diagram of a current datacenter network
Some of the fundamental problems in current interconnect networks are:

- Inability of the electronic switches to cope with future bandwidth requirements.
- Electronic switches are very expensive.
- Switches are one of the important hurdles in achieving an energy-proportional data center. They consume a lot of power because they are optimized for higher link utilization [5] and not energy efficiency.
- Network latency has also been traded for higher bandwidth and network utilization [6]. As the datacenter scales, we have to add additional switches and switching layers to the interconnect infrastructure. This significantly increases the latency experienced by the end hosts.

Adding to the above issues are the unpredictable traffic profiles of data centers. Datacenters have more varied traffic patterns than supercomputers, whose traffic has very high locality. There has been a lot of investigation into the type of traffic in a data center [7], [8], [9]. Trace analysis indicates that data center traffic exhibits ON/OFF behavior and is heavy-tailed [10].
New applications of the future will push datacenters to their limits. This raises the following questions:

- Are the current datacenter designs and network architectures future proof?
- Are there other techniques and technologies that can be used to solve the issues in the data center? If so, what will be their implications on data center architectures?

Future data centers need to sustain the increasing bandwidth requirements, reduce power consumption and decrease overall latency. These requirements call for a critical evaluation of the limits of the current data center infrastructure, and for exploring new technologies and architectures to meet future data center requirements.

Current Datacenter networks based on electronic technologies



Every four years, the performance requirement grows by ten times. The gap between processor and IO performance is also widening (citation needed). This results in increasing bandwidth density requirements across the network. Electronics-based interconnect is a mature and dominant technology with widespread applications. In spite of being a mature technology, it has some bottlenecks, as shown in Figure 2. Below we discuss the reasons for these limitations.

- Chip IO bandwidth limitation:
Processors have been increasing their density at the rate of Moore's law, but the I/O pins are not increasing at the same rate. This means that the required bandwidth density per port increases with every processor generation. There is a fundamental limit on how fast electrical IO can be driven, and this imposes a bandwidth bottleneck at the chip IO.

- Growing gap between processor and memory speeds:
The continuously growing gap between CPU and memory speeds is an important drawback in overall computer performance.

- Increasing electrical link loss at higher frequencies:
In electronics, the link itself carries the signal: we charge and discharge the link to transmit data, so when we want to change the frequency of the data, the link needs to change accordingly. As we go to higher frequencies, signal integrity degrades due to Electro-Magnetic Interference (EMI), resulting in shorter reach for transmitted signals. For longer distances we can regenerate the electrical signal through regenerators and equalizers, but this is not viable as it increases power consumption and occupies more space on the PCB. There are some exotic materials that can increase the reach (by a small margin), but they are costly and not future proof.

Figure 3 shows the block diagram of a current datacenter network (DCN), built mostly on electronic technologies. Current connectivity options based on copper and multimode fibers at the edges may not be capable of withstanding the growing bandwidth and latency requirements, for the following reasons:

- Data rate vs distance tradeoff for copper:
Electrical links have a Bandwidth-Length (BL) product of around 100 MHz·km. This means that when we aim for higher data rates (by increasing frequencies) we will be limited in distance due to the increasing loss of electrical interconnects. Electrical links have a loss of around 1 dB/m, so electrical interconnects require repeaters to maintain signal quality. Multimode fibers can help in increasing the bandwidth, but they too are inherently limited to shorter distances and lower bandwidth compared to single-mode fibers.
- Increasing the number of channels:
When we increase the number of channels in electronics, electrical signals travel adjacent to each other, resulting in higher Electromagnetic Interference (EMI). Higher EMI degrades signal integrity. Unlike Single Mode Fibers (SMF), multimode fibers cannot exploit wavelength division multiplexing (WDM) technology, due to modal dispersion effects that can result in signal interference. Electronics-based interconnects also consume a lot of power and occupy a larger footprint, resulting in poor energy savings. This is one of the fundamental limits to realizing energy-proportional data centers [11].
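The Bandwidth-Length tradeoff above can be made concrete with a back-of-the-envelope sketch (using the 100 MHz·km figure quoted above; treating the BL product as a hard budget is a simplification):

```python
BL_PRODUCT_MHZ_KM = 100.0  # electrical link BL product, as quoted in the text

def max_reach_m(bandwidth_mhz: float) -> float:
    """Reach (in metres) at which the BL product is exhausted."""
    return BL_PRODUCT_MHZ_KM / bandwidth_mhz * 1000.0

# Raising the signalling bandwidth tenfold cuts the usable reach tenfold:
for bw in (100.0, 1_000.0, 10_000.0):   # MHz
    print(f"{bw:>8.0f} MHz -> {max_reach_m(bw):7.1f} m")
```

At 10 GHz the budget allows only about 10 m of copper, which is why repeaters or a move to optics become necessary at rack scale and beyond.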

Table I gives the comparison between the electronics and

optical technology for different parameters.


TABLE I. Comparison of electronic and optical technologies (parameters: bandwidth density, power efficiency, space footprint, robustness to damage, maturity of the technology)



Research efforts in optics and photonics for long-range transmission have propelled great advancement towards high-bandwidth, low-latency and power-efficient optical networks. These features are of great interest in the datacenter environment, making optical networks an interesting candidate for deployment in datacenters and computer systems [12]. This has also propelled many research activities exploiting optics for datacom applications. As a result of all these efforts, the last few years have seen the gradual proliferation of optical technologies from long-distance communication networks down to the shortest communication networks. By bringing optics closer to the processor, we are able to overcome the IO bottleneck and also avoid the loss incurred by electrical transmission.
Despite the advantages provided by optics, there are also some challenges that need to be addressed. Some of them are mentioned below.

- Need for massive efforts towards integration and assembly of discrete optical components.
- Reliability and resilience of optics-based solutions.
- Effort towards acceptance of optics as a credible solution by computer architects.
- Difficulty in building an optical transistor [13].
- Absence of optical memories, which forces us to explore novel scheduling techniques for optical packet switched networks [14].

Thanks to the growing research interest in optics and collaborative research efforts, many of the above challenges are being solved.


In this section we provide a brief description of the development of optical components and their technologies, and how they perform against current technologies across various design parameters.
A. Optical Source
Lasers are used as the light source in optical systems. Lasing (the process of generating light) involves continuous reflection of light between mirrors (one of them partially reflecting) in an optical cavity. The optical cavity contains a gain medium that amplifies light of a specific wavelength through stimulated emission, thereby generating a coherent beam of light.
Developments in the field of laser science over the past few decades have resulted in more compact and efficient optical sources. Vertical-cavity surface-emitting lasers (VCSELs) and semiconductor lasers based on heterogeneous integration of III-V materials (acting as gain medium) on silicon (providing the optical cavity) are the two most promising optical sources for optical communication. The growing requirement for high-capacity short-reach data-communication links has spurred a lot of research activity towards exploiting VCSELs and semiconductor lasers for short-range optical communication.
The speed of a semiconductor laser can, in principle, be increased by scaling down its volume so that a higher internal photon density (and thus a higher resonance frequency) is achieved [15]. Following the above, VCSELs operating at up to 40 Gb/s have been achieved at different wavelengths (980 nm [16] and 1.1 µm [17]), but they have a very high current density, and a laser's increased current density reduces its reliability [18]. This resulted in more focused research efforts towards the 850 nm wavelength range, which has a lower current density. The authors in [19] reported a directly modulated 850 nm VCSEL-based optical link operating error free (BER < 1E-12) at 64 Gb/s over 57 m of OM4 multimode fiber. At a lower bit rate (60 Gb/s), the error-free distance increased to 107 m.
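To put a BER of 1E-12 at 64 Gb/s in perspective, a short expected-value sketch (simple arithmetic, not from the cited paper):

```python
def mean_time_between_errors_s(bit_rate_bps: float, ber: float) -> float:
    """Expected seconds between bit errors at a given line rate and BER."""
    return 1.0 / (bit_rate_bps * ber)

# At 64 Gb/s with BER = 1e-12, a bit error occurs on average every ~15.6 s,
# i.e. a few thousand errors per day if left uncorrected.
t = mean_time_between_errors_s(64e9, 1e-12)
print(f"{t:.3f} s between errors")
```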
VCSELs and silicon-photonics-based optical sources are strong competitors for use in datacenters. There are certain properties that are desirable in VCSELs (like opening up ultra-high-capacity networking [20]) and certain others in silicon photonics. The authors in [21] compare the performance achievable with the best VCSEL sources available today with that provided by means of Silicon Photonics (SiP) technology, in order to achieve 50 Gb/s and beyond transmission per channel, also taking the energy-saving point of view into account.

TABLE II. Ways to increase channel bandwidth, their advantages and challenges
- Advanced multi-level modulation: increases the per-channel capacity; no need for specialized optical components, only more specialized electronics (faster time to market). Challenges: complex modulation formats result in receiver complexity and higher power consumption, need linear electronic components, must be made cost- and power-effective, and impose a tighter loss budget.
- Wavelength Division Multiplexing (WDM): increases parallelization for higher bandwidth; MUX/DEMUX are passive, scalable components. Challenges: the number of channels is limited (~160), and WDM lasers are not a commodity yet (might not be cost efficient).
- Space Division Multiplexing (SDM): increases bandwidth by having multiple adjacent cores; multicore fibers are cheaper than fiber bundles; no special active components are necessary. Challenges: still in research; multi-core fibers and couplers need to become more affordable.
- Mode Division Multiplexing (MDM): multiplexes data in different modes. Challenge: only possible in multimode fibers and limited to short distances.

Table III shows a comparison of various optical sources (considered for short-range optical communication in data-center-scale networks) against standard parameters.

- Hybrid III-V/Silicon evanescent lasers:
There is no oxide layer between the III-V material and the silicon. The gain of the hybrid modes can be engineered by changing the width of the active materials. The authors in [22] have created Fabry-Perot, mode-locked, DFB and racetrack lasers using this approach.

- Hybrid III-V/SOI lasers based on adiabatic evanescent coupling:
The authors in [23] demonstrate SOI lasers based on adiabatic evanescent coupling, with good laser emission and maximum power inside the silicon waveguide material. As the refractive indices match, there is good optical coupling. The InP layer has a higher refractive index and uses tapers to confine the modes; in the silicon layer the reverse is done.

- III-V nano/micro lasers evanescently coupled to SOI:
In [24] the authors demonstrate a microdisk laser coupled to SOI through evanescent coupling. These have a very low footprint and low power consumption. By varying the opto-geometric parameters of photonic crystals (low Q/V) they observe better intensity due to light-matter interaction, analogous to atomic lattices.

Silicon photonics with an external laser can be used for long distances; these sources are more appropriate for chip-to-chip interconnects as they need to be CMOS compatible. The main goal of using III-V on SOI lasers is to make cheaper and more power-efficient lasers at 1.3 and 1.55 µm, following the three approaches described above.

B. Modulator
A modulator is a device used to change the properties of the carrier signal that is used for data transmission. An optical modulator is a device used for manipulating the light beam from the source. Based on the property of the light that is being modified, we can have amplitude, phase or polarization modulators. In general, there are two types of modulation: direct modulation and external modulation.

- Direct modulation:
Direct modulation involves changing the intensity of the light beam by modulating the current that drives the light source. This technique is limited by the chirping effect when applying and removing current to laser diodes with narrow linewidth. It is ideal when data rates are in the low gigabit range (<3 GHz) and transmission distances are less than 100 km.

- External modulation:
Here, modulation is performed outside the laser cavity. A separate device called a light modulator is used to modulate the intensity or phase of the light source. The light source is turned on continuously, and the light modulator acts like a shutter controlled by the information being transmitted. This approach is suited for high-bandwidth applications, but at the cost of expensive and complex circuitry to handle the high-frequency RF modulation signal.
External modulation uses a Mach-Zehnder interferometer (MZI) waveguide structure fabricated on lithium niobate. Lithium niobate is used because it has low optical loss and a high electro-optic coefficient; the electro-optic effect is the change of a material's refractive index in response to an applied electric field. The waveguide region is slightly doped with impurities to increase the refractive index for guiding the light. Light entering the input of the modulator is split between the two paths. One path (the bottom one) is unmodulated, while the upper path has two electrodes across it. When a voltage is applied, due to the electro-optic effect, the upper path experiences a higher delay (resulting in a phase change) compared to the lower path. Hence, depending on the voltage across the electrodes, the light will have high output (due to constructive interference when no voltage is applied) or low output (due to destructive interference when voltage is applied). Phase modulation can be performed using a very similar effect with Polarization Maintaining Fiber (PMF).
The electro-absorption modulator (EAM) is a device used to control the intensity of the laser beam through an electric voltage. EAMs exploit the Franz-Keldysh effect, wherein the applied electric field shifts the absorption spectrum, effectively changing the bandgap energy. EAMs can operate at higher speeds (tens of GHz) and use lower voltages compared to electro-optic modulators. A higher extinction ratio can be obtained by exploiting the Quantum Confined Stark Effect in quantum wells instead of using the waveguide structure with electrodes. EAMs can be easily integrated with photonic integrated circuits.
An acousto-optic modulator (AOM) is a device which can control the power, frequency or spatial direction of a laser beam with an electrical drive signal. It is based on the acousto-optic effect, i.e., the modification of the refractive index by the oscillating mechanical pressure of a sound wave.
Multi-Level Modulation (MLM) allows the bandwidth to be increased without increasing the number of optical components, but it trades off against the OSNR at the receiver. This has sparked discussion about Forward Error Correction (FEC) in datacom. MLM has the potential to reduce the number of components, the cost and the footprint of optical components. Many advanced modulation formats, like PAM-4, DP-PAM-4, NRZ and ENRZ, are exploited to meet the demands of high-speed transmission [30].
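The MLM tradeoff can be illustrated with generic signalling arithmetic (not specific to any cited system): PAM-4 carries two bits per symbol, halving the required symbol rate for a given bit rate, but its three-times-smaller eye opening costs roughly 20·log10(3) ≈ 9.5 dB of SNR versus two-level NRZ.

```python
import math

def mlm_tradeoff(bit_rate_gbps: float, levels: int):
    """Return (symbol rate in GBd, eye-opening penalty in dB vs 2-level NRZ)."""
    bits_per_symbol = math.log2(levels)
    symbol_rate = bit_rate_gbps / bits_per_symbol
    penalty_db = 20 * math.log10(levels - 1)  # eye is (levels-1) times smaller
    return symbol_rate, penalty_db

# 56 Gb/s as PAM-4 needs only 28 GBd of electrical bandwidth, but pays ~9.5 dB:
print(mlm_tradeoff(56.0, 4))
```

This SNR penalty is exactly why MLM revives interest in FEC: the coding gain of FEC buys back part of the lost margin.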
C. Receiver
Mature receiver technologies are mainly based on III-V materials due to their high sensitivity within the optical spectral window of interest. Available candidate materials are silicon (Si), germanium (Ge), indium (In), gallium (Ga), arsenide (As), phosphide (P) and aluminum (Al). The use of silicon is of great interest for photonics/electronics integration; however, it suffers from silicon's indirect energy gap [31]. To overcome the indirect energy gap, compound materials such as InGaAsP and InGaAs are introduced. Silicon photodiodes still remain within sight: SiGe, with its thermally sensitive indirect band gap, is currently reported as one of the solutions for Si-based photodiodes [32][33][34][35], with a very high responsivity of 0.84 A/W at 1550 nm reported. Recently, graphene research has made progress in merging with photonics technology: the fabricated photoresponse does not degrade for optical intensity modulations up to 40 GHz, and further analysis suggests that the intrinsic bandwidth may exceed 500 GHz [36]. A graphene/Si-heterostructure waveguide photodetector has also been proposed [37], providing high responsivity even up to 2.75 µm. A summary of some typical material-based photodiodes is listed in Table IV [38][39][40], in which silicon, germanium,

InGaAsP and InGaAs have already been commercialized or manufactured, while the other two remain experimental.

TABLE III. Comparison of optical sources considered for short-range, data-center-scale communication
- VCSEL [25]: direct modulation; 40 Gbps (SM) / 56 Gbps (MM) per channel; 850 nm; typical distance up to 100 m; threshold current 0.99 mA; bias current 12 mA; output power 7 mW; reduced cost (VCSEL arrays can be built); operation up to 40G at temperatures up to 85 °C.
- First hybrid laser [26]: 1560 nm; threshold current 65 mA; output power 1.8 mW.
- SiP [27]: electro-absorption modulation; 25-30 Gbps per channel; 1320-1337 nm; up to 2 km; threshold current tbd (lowered Ith [28]); bias current 20-40 mA; side-mode suppression ratio >40 dB; output power 30 mW [29]; cost expected to become lower as the technology matures; better wavelength stability with temperature [29].
The packaging of commercial receivers offers both P-I-N photodiode and Avalanche Photodiode (APD) options according to user requirements. SiGe- and Si/graphene-related research reports APDs as experimental test prototypes, since these materials do not offer good quantum efficiency. The material-induced trade-off is difficult to evaluate, since the receiver must be packaged in a certain structure to be functional. The photodiode structure usually involves one of two major options: the P-I-N photodiode or the APD. A PIN structure contains an intrinsic region between p- and n-doped material. An APD usually operates with a high reverse voltage in order to produce a strong electric field, which means secondary carriers are generated during operation, efficiently amplifying the photocurrent. In some designs, a heterostructure is implemented to achieve better O-E efficiency.
Commercially available PIN photodiodes are generally made of silicon, hence the operation window is limited. As one solution, other materials such as InGaAs are used, though they are still considerably expensive. APDs generally have better noise performance but narrower bandwidth compared to PIN photodiodes; their long-wavelength solutions also depend on InGaAs, which again suffers from high cost.
Booming materials-physics research as well as photonics engineering offers great opportunities in optical telecommunication, from both the data center and on-chip perspectives. Receiver development is moving towards high bandwidth in order to meet the high-speed communication systems of the future; meanwhile, silicon compatibility is also a trend of photodiode research.
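The reported 0.84 A/W responsivity can be related to quantum efficiency with the textbook photodiode relation η = R·h·c/(q·λ) (standard detector theory, not taken from the cited papers):

```python
# Physical constants
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # electron charge, C

def quantum_efficiency(responsivity_a_per_w: float, wavelength_m: float) -> float:
    """External quantum efficiency from responsivity: eta = R * h * c / (q * lambda)."""
    return responsivity_a_per_w * H * C / (Q * wavelength_m)

# 0.84 A/W at 1550 nm corresponds to roughly 67% quantum efficiency:
eta = quantum_efficiency(0.84, 1550e-9)
print(f"eta = {eta:.2%}")
```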
D. Amplification
D. Amplification
Research on SOAs dates back to the 1960s [41]; it then received wider research interest in the 1980s, largely resulting from their potential as in-line amplifiers. However, with the invention of the erbium-doped fiber amplifier (EDFA), SOA development suffered a dip. In the last decade, SOAs have been considered again owing to their fast gain dynamics, which has enabled SOAs to be used for high-speed all-optical switching. SOAs can be categorized by the dimensionality of the electronic system of their active region. The currently available architectures are:

- Bulk SOAs, in which the carrier motion is not restricted; known as a three-dimensional structure.
- Quantum Well (QW) or Multi-Quantum-Well (MQW) SOAs, where the carrier motion is limited to a layer

of a certain thickness; in other words, it is restricted to two dimensions.

TABLE IV. Typical photodiodes compared by operation window (nm), bandwidth (GHz), noise performance and quantum efficiency

Fig. 4. Density of States vs energy for SOAs [42]

- Quantum Wire or Quantum Dash (QDash) SOAs, in which the carrier motion is restricted to one dimension (a wire).
- Quantum Dot (QD) SOAs, in which the carrier is fully restricted (zero dimensions).

The Density of States (DoS) versus energy is illustrated in Figure 4. Bulk SOAs [43] as well as quantum-device-based SOAs [44][45] have been realised. The reduced dimensionality contributes to better SOA performance; for example, a QD-based SOA is reported to have an ultra-fast gain response, a large gain bandwidth, low temperature dependence of the gain, and low chirp even in the saturation regime [46][47]. In this work, however, the physics of SOAs will be based on bulk SOAs, moving on to MQW SOAs as the elements for architecture evaluation.
E. Multiplexer and Demultiplexer
A multiplexer is a passive device that combines the different wavelengths at its input ports into a common output port; the demultiplexer does the reverse. Photonics provides the opportunity to multiplex in time, wavelength, mode and space. Multiplexers have been extensively used to increase the bandwidth of a transmitter by adding multiple channels operating at different wavelengths. Multiplexers and demultiplexers can be combined to form wavelength cross-connects (WXCs).
Mux/demux devices assembled through micro-optics or hybrid PLC technology are somewhat complex, so there has been growing interest in simpler and more efficient structures (with flatter response and lower temperature dependence), such as Arrayed Waveguide Grating (AWG) [48], silicon photonics [49], ring resonator [50] and Echelle grating [51] based solutions for multiplexing and demultiplexing.
Table II gives the various ways to increase the bandwidth of a channel, with their advantages and challenges. The different types of multiplexing are described below; a suitable multiplexing technology can be chosen based on the application requirements and available technology.

- Wavelength Division Multiplexing (WDM): WDM is the most widely used multiplexing technique to increase data rates. Multiple wavelengths (each modulated at the same or different data rates) are multiplexed together and sent to the destination, where the receiver subsystem demultiplexes them and retrieves the data. There are two types of WDM: CWDM (Coarse WDM) and DWDM (Dense WDM); Table V gives the differences between them [52]. The authors in [53] discuss some of the opportunities and challenges in using WDM to increase optical interconnection bandwidth.



TABLE V. CWDM vs DWDM [52]
- Active wavelengths per fiber: fewer than 8 (CWDM) vs more than 8 (DWDM)
- Channels defined by: wavelengths (CWDM) vs frequencies (DWDM)
- Typical use: short-range communications (CWDM) vs long-haul transmissions (DWDM)
- Spectrum: wide-range frequencies, wavelengths spread far apart (CWDM) vs narrow frequencies, tightly packed wavelengths (DWDM)
- Wavelength drift: possible (CWDM) vs precision lasers needed to avoid drifting (DWDM)
- Spectrum slicing: big chunks (CWDM) vs small pieces (DWDM)
- Amplification: light signal is not amplified (CWDM) vs signal amplification may be used (DWDM)

- Space Division Multiplexing (SDM): SDM using multi-core fibers (MCFs) has been recognized as a crucial technology for extending the physical limit of the transmission capacity of optical fibers. Increasing attention has been paid to SDM and MCFs as the data rate of conventional fiber transmission approaches the estimated limit of around 100 Tb/s [54], [55], and is unlikely to accommodate the expected rise in fiber-optic network traffic demand in the coming decades. Recent research is looking at how to exploit MCFs for higher bandwidth [56], [57].
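The appeal of WDM parallelism can be seen with simple channel-budget arithmetic (the band edges, grid spacing and per-channel rate below are illustrative values, not taken from this paper):

```python
C_LIGHT = 2.99792458e8  # speed of light, m/s

def wdm_channels(lo_nm: float, hi_nm: float, spacing_ghz: float) -> int:
    """Channel count that fits a wavelength band on a fixed frequency grid."""
    f_hi = C_LIGHT / (lo_nm * 1e-9)   # shorter wavelength -> higher frequency
    f_lo = C_LIGHT / (hi_nm * 1e-9)
    return int((f_hi - f_lo) / (spacing_ghz * 1e9))

# Example: a 1530-1565 nm band on a 50 GHz grid, at 25 Gb/s per channel
n = wdm_channels(1530.0, 1565.0, 50.0)
print(n, "channels ->", n * 25, "Gb/s aggregate on one fiber")
```

Because the mux/demux stages are passive, this aggregate capacity comes without adding active components in the fiber path, which is exactly the scalability advantage listed for WDM in Table II.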

F. Optical Switch
Network interconnects are usually categorized as broadcast networks, point-to-point networks and switched networks. Considering the random data-distribution nature of the datacenter [8] [9], the switched network is considered one of the candidate topology elements due to its flexibility. To merge with long-haul optics, traditional switching applies Opto-Electro-Opto (O/E/O) switches that convert the optical signal to electrical, so that signal processing can be conducted in the mature electrical domain. However, the increasing bandwidth demand has driven interest towards transparent optical (O/O/O) switches, which have low latency and wide bandwidth. Optical switches are compared in terms of [58]:
- Switching speed:
One of the most interesting parameters, indicating how fast the switch can switch from one port to another. Switch fabrics with fast reconfiguration times are of particular interest in photonic interconnects.
- Insertion loss or gain:
The insertion loss is the signal power loss due to the switch; it varies depending on the switching technology applied. Some switches, such as those based on Semiconductor Optical Amplifiers (SOAs), amplify the signal to compensate the loss and can even produce gain. The insertion loss is preferred to be small but, from a system-design perspective, it is preferable for it to be similar at each level of the network hierarchy.
- Crosstalk:
In some switching devices, non-switched inputs also contribute to the output signal of switched inputs, which leads to crosstalk noise. Crosstalk is defined as the ratio of the power of a specific output from the desired input to the power from all other inputs. It is desired to be as small as possible.
- Extinction ratio:
The ratio of the power of a specific output signal when it is enabled to the power of that output signal when it is disabled. It should be as large as possible. Optical devices usually offer a very good extinction ratio, i.e., if you switch off the light, it goes dark.
- Polarization dependent loss (PDL):
If the switch does not offer equal losses for different input polarization states, it introduces polarization dependent loss, which is desired to be low.
- Scalability:
A very application-related property that refers to the capability of building the switch with large port counts. It is an interesting parameter to exploit, since novel optical switches usually do not have enough scalability to replace conventional electrical switches.
- Input power dynamic range (IPDR):
The input power range over which a device offers error-free switching [59]. The error-free standard varies with system requirements; a BER target of 1e-9 is commonly used.
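The loss, crosstalk and extinction-ratio figures of merit above are all power ratios expressed in decibels; a small sketch (generic dB definitions with illustrative power values, not measurements from any cited switch):

```python
import math

def db(ratio: float) -> float:
    """Power ratio expressed in decibels."""
    return 10.0 * math.log10(ratio)

# Illustrative measurements for one switch port (milliwatts)
p_in, p_out = 1.0, 0.5            # power into and out of the enabled path
p_on, p_off = 0.5, 0.0005         # output with the port enabled vs disabled
p_desired, p_leak = 0.5, 0.001    # desired-input power vs leakage from all others

insertion_loss = db(p_in / p_out)      # ~3 dB, smaller is better
extinction_ratio = db(p_on / p_off)    # 30 dB, larger is better
crosstalk = db(p_leak / p_desired)     # ~-27 dB, more negative is better
print(insertion_loss, extinction_ratio, crosstalk)
```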
Typical available optical switches are discussed below.
- Micro-Electro-Mechanical System (MEMS) switches:
Switching is achieved by adjusting micro-scale mirrors mechanically through electrostatic, magnetic, piezoelectric or thermal-expansion methods. This leads to slow switching times, typically milliseconds to microseconds. MEMS switches introduce low insertion loss, crosstalk and PDL, and their scalability is very high (>1000 ports) with low power consumption. However, reliability is reduced due to the moving parts [62].
Electro-Optic Switches:
These change the waveguide refractive index by applying an electric field. Electro-optic switches offer low power consumption and high switching speed (nanosecond regime), but their insertion loss is high and the demonstrated devices are physically large.
Thermo-Optic Switches:
These achieve switching by changing the waveguide refractive index through thermal effects. Thermo-optic switches suffer from high power consumption and limited scalability, but low crosstalk and PDL have been demonstrated [63].
Acousto-Optic Switches:
Acousto-optic switches exploit the change in refractive index induced by acoustic waves. They offer moderate switching times (hundreds of nanoseconds) and small size, but the demonstrated devices show high insertion loss and crosstalk [64].
Liquid Crystal Optical Switches:
Liquid crystal switches rely on the birefringence and anisotropic properties of liquid crystal materials. Their switching time is long, while insertion loss and crosstalk are moderate. The main advantage is low power consumption [65].
Semiconductor Optical Amplifiers (SOA):
SOA-based devices switch by injecting electrical current into the SOA active region, which changes the refractive index and optical gain of the medium. SOAs provide fast switching with low crosstalk and PDL. Moreover, they produce gain rather than insertion loss, although this gain adds noise [66].
Meanwhile, some novel research has been done recently; one interesting direction is the silicon-on-insulator (SOI) optical switch. Using this technology, the propagation of light can be controlled by light. Initial efforts involved changing the refractive index of the medium through either external optical pumping or thermal effects: the pump light absorbed by the material changes the carrier concentration and hence the refractive index.
Switching speed is one of the most important factors for a switch, and it is highly dependent on the carrier recovery time (typically nanoseconds). When holes are drilled, surface inconsistencies can cause carrier recombination at the edges, which needs to be avoided. Some research groups have worked on clearing the carriers along the edges and observed carrier recovery times of about 30 picoseconds, with switching powers on the order of femtojoules.
The ultrafast switching group at LPM, Nancy reduced the switching time further by using surface recombination of charges on quantum wells. In a pump-probe experiment they observed switching times on the order of 15 picoseconds and switching energies on the order of 10 femtojoules, and then moved on to a system-level experiment.
G. Photonic Network on chip
Photonic networks on chip [67] are possible only through 3D integration technology. This has already been done for electronics; the photonic layer now needs to be squeezed in. Fundamentally, it involves stacking different layers (chips) one over the other, thereby providing additional functionality and reducing energy

Fig. 6. a) Off-resonance modulator: a ring resonator in the off-resonance state, which lets a wavelength pass straight through the crossing. b) On-resonance modulator: a ring resonator in the on-resonance state, which couples a wavelength into the ring waveguide. c) Injector: a resonant wavelength in the bottom linear waveguide is coupled into the ring and then injected into the top waveguide. d) Detector: silicon-germanium coupled to the ring detects a resonant wavelength coupled into the ring waveguide.

Fig. 5. a) Physical structure of a micro-ring resonator. b) Transmission spectrum of a micro-ring resonator [68].

requirements [cite university of colarado]. In such a photonic network on chip, different layers communicate with each other through through-silicon vias (TSVs) [cite Mirage project]. The photonic components (transmitter and receiver) are flip-chip bonded to a photonic layer (that acts as a waveguide). Through an interposer layer, all the active electronic components can be bonded to the bottom side; these electronic components communicate with the optical components through the TSVs. There is also an optical grating coupler, used for coupling light from the transmitter into the photonic layer and out to the

Fig. 7. (a) Schematic of the overall MOTOR chip. (b) Expanded view of a single-input wavelength converter showing several key device elements [69].

Tunable Optical Router) is reported to operate at 40 Gb/s per port with BERs below 1e-9 and a power penalty as low as 4.5 dB.

1) Photonic nano technology: As in semiconductor manufacturing, nanotechnology is also being merged into photonic devices, which points to the possibility of hybrid electronic/photonic physical-layer design. Compared to conventional electronic semiconductors, photonic devices have the potential to overcome problems caused by thermal generation, limited bandwidth, electromagnetic interference, interconnect and quantum effects. Researchers have shown great interest in this area, and nano-photonic devices such as waveguides and modulators have been developed. The photonic micro-ring resonator is a nano-scale ring structure: a linear waveguide3 is coupled to a ring waveguide, and light of a certain wavelength is coupled into the ring when passing the crossing junction. The transmission spectrum, shown in Figure 5, depends on the applied bias field; the electric field can therefore be used to shift the spectrum and hence switch a given wavelength. The ring can be applied as a modulator, injector, detector and so on.
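The ring's behaviour follows from its round-trip phase: resonance occurs when m·λ = n_eff·L, and adjacent resonances are separated by a free spectral range FSR ≈ λ²/(n_g·L). A small numerical sketch with illustrative silicon-ring values (all numbers are assumptions, not taken from [68]):

```python
import math

# Illustrative silicon micro-ring values (assumed, not taken from [68]).
n_eff = 2.4       # effective index of the ring waveguide mode
n_g = 4.2         # group index, which sets the free spectral range
radius_um = 5.0   # ring radius in micrometres

L = 2 * math.pi * radius_um   # round-trip length of the ring (um)
target_um = 1.55              # operating band (um)

# Resonance condition m * lambda = n_eff * L: pick the mode order nearest
# to the target wavelength, then solve for the exact resonance.
m = round(n_eff * L / target_um)
lambda_res = n_eff * L / m

# Free spectral range between adjacent resonances: FSR = lambda^2 / (n_g * L)
fsr_nm = lambda_res ** 2 / (n_g * L) * 1e3

print(f"mode order m   : {m}")
print(f"resonance (um) : {lambda_res:.4f}")
print(f"FSR (nm)       : {fsr_nm:.2f}")
```

For this 5 µm ring the model gives a resonance near 1.539 µm with an FSR of roughly 18 nm, which is why small rings are attractive for WDM switching: shifting the spectrum by a fraction of the FSR selects or rejects one wavelength.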

2) Photonic Architecture on-chip: On-chip interconnect is one of the factors that determine the performance of a many-core system. Conventional electrical interconnect has limitations due to the nature of electrons. The idea of optical interconnect was proposed in 1984 [70], although at that time on-chip optical interconnect was not yet practical. The development of nanotechnology and photonics has enabled the fabrication of photonic modulators and small-bending-radius waveguides of micron size or smaller, capable of fitting on a chip. On-chip photonic projects are booming. One interesting piece of research is by Vantrease et al., whose team proposed the Corona architecture (IV-G2), a 256-core architecture with hybrid electrical and optical interconnect [71]; Corona is projected to accelerate memory-intensive workloads by 2 to 6 times. Meanwhile, the Dragonfly architecture was introduced, focusing on a trade-off between reduced cost and considerable performance [72]. Further research at Northwestern University produced Firefly [73] and FlexiShare [74]. Photonics is shaping up to be an interesting direction for future NoCs.

Beyond switches, optical router applications have also been reported: [69] describes MOTOR (an InP-based 8x8 Monolithic

H. Memories

3 The coupled wavelength is related to the refractive-index difference between the linear and ring waveguides and to the ring radius.

Memories are necessary but remain very challenging and distant. There is a memory wall: the speed of CPUs is increasing at roughly 3

that differentiate optical waveguide media. Telecommunication, long-haul and ultra-long-haul networks have depended heavily on SMF for most of their data and voice transmission because of its high bandwidth. MMF was primarily used where bandwidth could be traded off for lower cost. Datacenter networks have a different set of requirements that can exploit the benefits of both SMF and MMF. Choosing the right waveguide medium requires a careful understanding of the performance requirements of the deployed infrastructure, the upfront capital investment and long-term scaling needs. Exploring novel optical waveguides is currently a hot research area. Polymer-based waveguides have received significant interest among research groups trying to overcome the limits of electronic PCBs. The authors of [75] present an interesting approach that uses a silicon-photonics-based optical link instead of VCSEL-driven multimode fibres.
Industry has shown a lot of interest in Active Optical Cables (AOC) for short-range, high-data-rate transmission. The cable is called active because all the opto-electronic components required for transmission and reception are assembled inside the connector packages. 10G AOCs are currently the most common, but there are research efforts towards a Terabit/second AOC that exploits WDM, PAM4 modulation and spatial multiplexing over multi-core fibres.
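The Terabit figure comes from multiplying the three parallelism dimensions together. A back-of-the-envelope sketch, where every lane count and rate is an illustrative assumption rather than a product specification:

```python
# Back-of-the-envelope aggregate rate for a Terabit-class AOC. All lane
# counts and rates below are illustrative assumptions, not a product spec.
cores = 4            # spatial lanes via a multi-core fibre
wavelengths = 8      # WDM channels per core
baud_gbd = 28        # symbol rate per channel (GBd)
bits_per_symbol = 2  # PAM4 carries 2 bits per symbol

per_lane_gbps = baud_gbd * bits_per_symbol           # 56 Gb/s per channel
aggregate_gbps = cores * wavelengths * per_lane_gbps

print(f"per-channel rate: {per_lane_gbps} Gb/s")
print(f"aggregate       : {aggregate_gbps / 1000:.3f} Tb/s")  # 1.792 Tb/s
```

With these assumed figures, 4 cores x 8 wavelengths x 56 Gb/s already approaches 1.8 Tb/s, which is why combining the three techniques is the favoured route to a Terabit AOC.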
Fig. 8. A Corona-structure-based 64-core topology.

times that of the speed of development of memory circuits.

One approach is to develop an optical memory on the system-on-chip that communicates with the rest of the circuit over WDM. A single optical memory cell (flip-flop) has been demonstrated using two coupled MZIs integrated on a single chip, and optical RAM architectures have been proposed and supported through simulation.
III-V on SOI Memory:
Building a memory with photons is a far-reaching goal. It requires bistable systems, in which a single input power admits two possible output powers; in optics this means a resonator combined with a nonlinear material. There are two types of nonlinearity: a change in refractive index, or a change in gain or absorption. Bistable operation of microdisk lasers has been shown (cite Ghent): initially the modes in the microdisk laser travel in both directions, but applying a pulse changes the material properties and forces unidirectional operation. The switching time is on the order of a few picoseconds and the energy on the order of femtojoules. Another option is the photonic crystal laser, based on injection locking: there is a signal laser and an injection laser, and when the intensity of the injection laser is increased it increases stimulated emission, thereby depleting the signal. To recover from this condition, the power of the injection laser must be lowered.

AOC cables have electrical interfaces, which increases their robustness when handled in the field, and they can be bought to match the deployment's requirements. AOCs come with different types of receptacles; those most commonly used in datacenters are SFP, QSFP (4 Tx and Rx), CXP (12 Tx and Rx) and CDFP (16 Tx and Rx, targeted at the 400G standard). A CDFP-based AOC using 850 nm VCSELs has been demonstrated by TE Connectivity, and a complementary solution based on Si photonics at 1550 nm was demonstrated by Molex (previously Luxtera).
Challenges in AOC:
- Electrical signal integrity at higher rates: this requires more equalization, more components, more power and more cost. Industry is therefore looking at alternative approaches such as Mid-Board Optics (MBO), but these raise serviceability concerns (what if a module fails?). Examples include the SNAP-12 module, the Avago MicroPOD and the Holey Optochip.
- Front panel density: up to 11 CDFP modules, handling up to 4.4 Tb/s, which is about 20 percent more than QSFP.


Photonics can overcome the drawbacks of electronics in interconnection networks [76] if the following challenges are met:
- Develop the necessary active and passive functionalities
- Achieve low power consumption (femtojoules) at high data rates
- Small footprint for high density
- Integration with silicon photonics and CMOS compatibility

I. Optical waveguide medium

An optical waveguide is a structure used to guide the light generated by an optical source using the phenomenon of total internal reflection. Based on the number of modes that can propagate through the optical fibre (the waveguide medium), it is either a Single Mode Fiber (SMF) or a Multimode Fiber (MMF). Table VI presents the various parameters


| Parameter               | Polymer waveguide (optical PCB) | MMF                                                               | SMF                                                                   |
| Driving light source    | —                               | VCSELs (850 nm)                                                   | FP laser, SiP laser (1310 nm)                                         |
| Distance (for 100 GbE)  | Very short distance             | —                                                                 | Up to 1.3 km                                                          |
| Spectral efficiency     | —                               | —                                                                 | —                                                                     |
| Bandwidth support       | Up to 30 Gb/s                   | Up to 50 Gb/s                                                     | > 50 Gb/s                                                             |
| Assembly tolerance      | —                               | —                                                                 | —                                                                     |
| Application areas       | Optical PCB                     | Short-distance DCI                                                | Long-distance DCI                                                     |
| Pros and cons           | —                               | Low cost (trades bandwidth for cost); speed and bandwidth constraints | Unlimited bandwidth enables better network design due to reduced constraints |

to grow III-V on silicon (at high temperature). This opens up new avenues of research such as:

Fig. 9. Process flow for heterogeneous integration of III-V semiconductors on silicon. SOI: silicon-on-insulator. InP: indium phosphide [78].

Along with the above requirements, good materials and better fabrication technology will enable the production of optical components at low cost, competitive with electronics. The main options currently are:

- Growing nanostructures on silicon.
- Using metamorphic oxide structures on silicon.
- Wafer bonding (polymer-based materials acting as glue) of III-V on silicon (cite LETI).

Plasmonics is also of interest for datacom, as it reduces the size of optochips: plasmonics can confine light to much smaller dimensions than conventional optics, which is diffraction limited. Initial results were obtained in the EU-Platone project, which combined plasmonics with Si photonics. The observations were:
- appropriate for interfacing photonics and electronics
- allows thermo-optically induced switching
- low switching power consumption
- but high propagation losses

A. Si based
Silicon photonics [77] is the ideal candidate as it is CMOS compatible, but there is still no Si-based laser.
This is a very interesting fabrication platform that encompasses two complementary technologies.



Packaging is a core technology for leveraging the functions and performance of micro- and nano-structures in a system: to bring them to application, the gap between component and environment has to be bridged by providing reliable interfaces.

Silicon photonics is compatible with microelectronics. Very dense circuitry is possible with SOI (as the oxide layer helps create a high index contrast). SOI is mainly beneficial for building passive devices such as AWG-based filters, mux/demux and low-loss waveguides, and it is also used to make modulators.

This is the most important factor for reducing the cost of optical solutions. Assembly and packaging should support high density, cost effectiveness, low power, and thermal and mechanical stability.

Compound semiconductor materials such as InP and GaAs are used to make lasers because of their high quantum efficiency and high non-linearity compared to other materials. InP can be used to make good optical sources (in the 1.3 to 1.55 micron range) for silicon photonics, based on quantum wells.

The design of photonic integrated circuits is going to be complex, and this requires design tools similar to those of its electronics counterpart. With such tools the design process can be made simpler and more effective from the design stage to tape-out.

Current techniques to integrate these two technologies involve flip-chip bonding of the InP-based components on SOI, but this becomes cumbersome when many components need to be added. Since the lattice structure of III-V materials is very different from that of silicon, it is very difficult


There has also been the rise of companies such as Luceda Photonics [79] that provide a complete design framework (component design and simulation, circuit definition and layout, through to tape-out and testing) for photonic integrated circuits. Other players include Phoenix Software [80] and Bright Photonics [81].



The data center hosts a lot of technical infrastructure, ranging from HVAC devices (providing power and cooling solutions) to network devices. In this section we provide some insights on how the optical technologies described in the previous section fit into the datacenter environment. In order to provide a better understanding, we divide the datacenter infrastructure into three aspects:

- Compute network: primarily the individual servers and their clusters (i.e., racks) that do most of the computation work in a data center, together with their cabling infrastructure.
- Interconnection network: the interconnect infrastructure, mainly provided by the switches (TOR, aggregation and core switches) and their cabling.
- Storage network: a dedicated block-level data storage network provided by enhanced storage devices, disk arrays and their associated cabling.


Fig. 11. Diagram showing the penetration of optics closer to the ASIC.

For each of the above networks we analyse how optical technologies will enable high bandwidth, low latency and energy efficiency.

the layout of the optical PCB to connect the maximum number of optical interposers (compute cores) to the optical router chip (also called a photonic network-on-chip). Figure 10 shows a speculative board-level diagram of an optics-based PCB.
One of the most important considerations for achieving high-bandwidth, low-loss transmission is the placement of the transceiver. Figure 11 shows the gradual penetration of optics closer to the ASIC (processor or switch). Initially the transceivers sat at the board edges; then came efforts to move them onto the board (termed mid-board optics), and companies have commercial products in this area [84]. Mid-board optics still requires optical cabling between chips, and for a more scalable and efficient solution massive optical cabling must be avoided. This motivates building hybrid electrical-optical PCBs. There have also been approaches to integrate optical transceivers on the silicon chip [85]. Initially, polymer waveguides are being used to realize the optical PCB. This is a very challenging task, as it requires understanding how the following factors impact the optical PCB:
- channel interference and dispersion compensation
- impact of material, fabrication and packaging of photonic components
- employment of high-bandwidth on-chip routing (WDM or advanced modulation formats)


A. High bandwidth
There are fundamental limitations on the bandwidth scaling of electronic systems due to the chip-I/O bottleneck (pin counts in processor packages scale poorly relative to Moore's law), electronic PCB limitations at high frequencies, and rack-level bottlenecks (power and form-factor requirements resulting in lower port densities). Optics has explored new avenues to overcome chip-level, board-level and rack-level bottlenecks. Some of them are discussed below.

Optical interposers:
Chip-level bandwidth limitations can be overcome using optical interposers. An optical interposer is the convergence of photonic and electronic systems through silicon photonic technology. Interposers have primarily been used to achieve high-bandwidth inter-chip communication, but they can also provide high-bandwidth communication between the interposer and on-board memory. The authors of [82] demonstrated a high-density (3.5 Tb/s/cm2) inter-chip optical interconnect based on silicon photonics; the transmission density of this system was later raised to 6.6 Tb/s/cm2 using better optical components and a 1x4 optical splitter [83].

Optics for on-board chip-to-chip communication:
Connecting all the components calls for an optical interconnection network on the board, which has motivated work towards an all-optical PCB; projects such as Phoxtrot and Firefly are working on this. Physical-layer constraints of the optical components will shape the interconnect topology on the board. The main aim will be to optimize
Fig. 10. Diagram showing an optical PCB at board level.

Optical backplane for board-to-board communication:
Optical technologies have received much interest in recent years for board-level interconnects. Polymer multi-mode waveguides in particular constitute a promising technology for high-capacity optical backplanes, as they can be cost-effectively integrated onto conventional printed circuit boards (PCBs). The authors of

trends continue, there will be a need to reconsider the system design of current-generation NICs and servers to match the speed and latency of the transceivers. There are many challenges in meeting these aggressive bandwidth requirements with current electronic systems [87]. There are promising solutions, such as silicon interposers on optical PCBs, but they need considerably more time and effort to reach the economies of scale of commodity products. Optics can significantly lower latency in the hardware, but this alone is not sufficient: significant work is needed in software (the protocols, operating systems and applications running on them) to achieve significant gains in overall latency. Article [6] gives some detail on how to achieve low latency from an operating-systems perspective.
Fig. 12. Future datacenter networks based on optical technologies.

[86], a four-channel optical backplane demonstrator is designed around a regenerative optical architecture and PCB-integrated polymer waveguides, and fabricated with low-cost, commercially available electronic and photonic components. Light can be coupled from board to board through vertical coupling structures. Figure 12 shows the block diagram of a datacenter network (DCN) that exploits optical technologies to provide higher bandwidth, lower latency and better energy efficiency.

Optics-based rack-to-rack communication:
Datacenters that were using MMF-based interconnects are slowly migrating towards SMF- and AOC-based solutions to achieve better bandwidth scaling and cost efficiency. Developments in VCSEL and silicon photonics technologies have enabled AOCs to provide high-bandwidth rack-to-rack communication over short-reach interconnects. Progress in high-speed electronics and packaging plays a vital role in achieving the performance, power, cost and reliability required of datacenter interconnects. Most of these activities are propelled by companies such as TE Connectivity and Mellanox.




Network switch: 10-30 µs / 100-300 µs
Network interface card: 2.5-32 µs / 10-128 µs
OS network stack: 15 µs / 60 µs
Speed of light (in fiber): 0.6-1.2 µs

Some techniques that can be used to achieve lower latency:
- Keeping the application in the cache
- Having fast hardware (CPU, NICs)
- Having a small number of instructions and executing them fast using dedicated cores
- Kernel bypass (avoiding interrupts, as they are expensive)
- Caching, which is also important for throughput (fetching from memory is costly)
- Using DDIO for higher throughput
- Deciding which transport protocol and serialization to use: TCP, UDP, capnproto, JSON, BSON
- If TCP, changing the default window size (to greater than 64 KB)
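The buffer-size tuning mentioned in the last item can be exercised through the standard socket API. A minimal sketch, where the 4 MB figure is an illustrative choice (the kernel may clamp, or on Linux double, the requested value):

```python
import socket

# Minimal sketch: enlarge a TCP socket's buffers beyond the common 64 KB
# default. The 4 MB figure is an illustrative choice; the kernel may
# clamp (or, on Linux, double) the requested value.
BUF_BYTES = 4 * 1024 * 1024

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)

# Read back what the kernel actually granted.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer granted: {granted} bytes")
s.close()
```

Note that system-wide limits (e.g. `net.core.rmem_max` on Linux) cap what an application can request, so the granted size should always be read back rather than assumed.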

B. Low latency

C. Energy efficiency

The largest contribution to latency in the compute network is end-host latency. In this section we look at the constituents of end-host latency, the impact of optics on it, and how it can be reduced.

At higher data rates, electrical interconnect systems consume a lot of power in their equalization circuits (used to overcome signal losses). With increasing bandwidth requirements, longer distances and multiple processors per board, optical interconnect appears to be a promising solution. The authors of [88] provide a detailed comparison of electrical and optical I/O for chip-to-chip interconnect. They also note that optical interconnects will initially be introduced as package-to-package optical I/O using multi-chip-module (MCM) packaging, but will eventually move towards monolithic integration of optical components to achieve energy consumption of less than 1 pJ/bit for Tb/s interconnects.
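The 1 pJ/bit target is easy to put in perspective: energy per bit is simply link power divided by data rate, so 1 pJ/bit at 1 Tb/s corresponds to a 1 W link budget. A quick check:

```python
# Energy per bit is link power divided by data rate; 1 pJ/bit at 1 Tb/s
# therefore corresponds to a 1 W link budget.
def energy_per_bit_pj(power_w, rate_bps):
    """Return energy per bit in picojoules."""
    return power_w / rate_bps * 1e12

print(energy_per_bit_pj(1.0, 1e12))  # 1 W at 1 Tb/s -> 1.0 pJ/bit
print(energy_per_bit_pj(0.5, 1e12))  # 0.5 W at 1 Tb/s -> 0.5 pJ/bit
```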

In our context, end-host latency is the time from the arrival of data (as an optical signal) at the receiver interface (in a NIC) until it has been received and processed by the application in the host operating system. The end-host latency is therefore a combination of the delays of the transceiver circuitry, NIC processing, buffering, the operating system (including the protocol stacks) and the applications running on it.
Advancements in optics have led to better and faster receivers with high-bandwidth reception capabilities, and hence to low-latency, high-bandwidth optical transceivers. If the above

Reducing the energy consumption per compute unit:
Reducing energy consumption at the granularity of the compute unit requires energy-efficient components, processing technology and careful integration. By exploiting the passive nature of the optical components and technologies described above, a significant reduction in datacenter power consumption can be achieved.

Intelligent and dynamic resource provisioning:
Various studies show that datacenters rarely operate at full utilization [3]. Energy is wasted when a large number of tiny jobs (not machine-specific updates) run across multiple servers even though all of them could be executed by a single server. This motivates techniques for aggregating many small jobs from individual machines onto a single machine. Virtualization increases server utilization by deploying multiple virtual machines (each able to run multiple tasks). Hence, through intelligent and dynamic resource provisioning we can improve overall efficiency and reduce datacenter power consumption.

A. High bandwidth
Increasing bandwidth in point-to-point networks involves improving the fibre infrastructure or using advanced multi-level modulation schemes. Interconnection networks, however, do not have dedicated point-to-point links between every pair of nodes; instead, nodes are connected through switches (for scalability and cost reasons). Two factors must therefore be considered when increasing the bandwidth of an interconnection network: first, scaling up the bandwidth of the cabling infrastructure to push higher data rates, and second, faster processing of the data at the switches.

Scaling up the bandwidth of the cabling infrastructure:
The servers at the edge of the datacenter are connected to the top-of-rack switch through 1G links, but with growing traffic demands most datacenter operators are replacing these with 10G links to the TOR switch. Increasing bandwidth at the edge requires sufficient bandwidth overprovisioning at the aggregation and core levels. For increasing the bandwidth of the cabling infrastructure within the interconnection network, the following trends are observed:
- Increased deployment of SMF in DCNs
- Adoption of multi-level modulation
- Exploiting WDM technology
- Usage of active optical cables

Faster processing at the switches:
Switches need to process data fast enough to keep up with the increasing rates delivered by the servers over the cabling infrastructure. Techniques to achieve this include high-speed switching architectures, scalable and programmable switching fabrics [89], high-radix switches [90], hybrid switching techniques [4][91], novel optical switching subsystems [92], novel scheduling mechanisms [93], and efforts towards all-optical switching [94].
Most solutions to date use Optical Packet Switching (OPS) to deal with short flows and Optical Circuit Switching (OCS) to handle long flows. Implementation challenges include rapidly determining the flow type and fast scheduling.
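The flow-type decision can be sketched as a simple byte-count classifier: a flow is treated as a "mouse" (packet-switched) until its cumulative size crosses a threshold, at which point it is promoted to a circuit. The threshold and flow records below are illustrative assumptions, not values taken from the cited works:

```python
# Hypothetical sketch of the OPS/OCS dispatch decision. The 10 MB
# elephant-flow cutoff and the flow IDs are illustrative assumptions.
ELEPHANT_BYTES = 10 * 1024 * 1024

flow_bytes = {}

def dispatch(flow_id, pkt_len):
    """Return 'OCS' once a flow's cumulative size crosses the threshold."""
    flow_bytes[flow_id] = flow_bytes.get(flow_id, 0) + pkt_len
    return "OCS" if flow_bytes[flow_id] >= ELEPHANT_BYTES else "OPS"

# A short flow stays on the packet-switched path.
print(dispatch(("10.0.0.1", "10.0.0.2", 80), 1500))                # OPS
# A bulk transfer is promoted to the circuit-switched path.
print(dispatch(("10.0.0.3", "10.0.0.4", 9000), 20 * 1024 * 1024))  # OCS
```

The hard part in practice is exactly what the text notes: making this decision quickly enough, since by the time a flow is recognized as an elephant the circuit set-up time must still be amortizable over its remaining bytes.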

Optical transceivers consume a large proportion of the power in the compute network infrastructure, for two main reasons:
- High-speed transceiver electronics are MCML based, which consumes more power than its CMOS counterpart
- Transceivers are designed to be always on, to maintain continuous synchronization, resulting in excessive power wastage
In this context, Ya et al. propose a solution: a physical-layer burst-mode protocol combined with clock gating at the MAC layer. By exploiting clock gating, the power-hungry MCML components can be disabled during idle periods.


The interconnection network acts like the central nervous system of the human body: it provides the vital links connecting the computing nodes of one rack with the others, and connecting the compute infrastructure to the storage network. Its main components are the cabling infrastructure (a mixture of copper cables, MMF and SMF) and the switching infrastructure (TOR, aggregation and core switches).
The ever-increasing demand for high-performance interconnects in datacenters is putting a lot of strain on current electrical interconnects. Higher throughput can be achieved by increasing the data rate per channel and by increasing the number of channels. Computing applications require high performance, i.e., the ability to carry a large amount of data with low penalty, and for the reasons described in the introduction, electrical links cannot meet these requirements. This has led to the consideration of other candidates, such as optical links, which are seen as a potential replacement for their electrical counterparts.

B. Low latency
Latency and jitter depend on the network topology and traffic conditions. Understanding the latency profile of a DC network is very challenging, partly due to the variety of applications running in the datacenter, but knowledge of network latency is very useful for latency optimization and latency engineering. This section discusses the components that constitute latency in the interconnection network and how the deployment of optics affects them.
Latency is introduced in the network by buffering, propagation, switching, queuing and processing delays. The sources of latency in interconnection networks are:
- Network interface delays: these mainly comprise serialization delays such as signal modulation and data

scale to higher bit rates. The authors of [97] demonstrate an Arrayed Waveguide Grating Router (AWGR) switching fabric that overcomes this limitation of electrical switches by exploiting wavelength parallelism, allowing optical wavelengths to cross over and propagate in parallel.

Fig. 13. Latency split-up across two nodes in a datacenter.

framing (packetization). The interface delay depends on the size of the transmitted packets and varies with link bandwidth: it equals the size of the data unit divided by the bandwidth. There has been research into physical-layer hardware and circuit architectures (such as SerDes) for 100 Gb/s systems [95]. Lowering network interface delays at higher bit rates is very challenging because of the demands placed on this physical-layer hardware and circuitry.
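The rule stated above, serialization delay equals data-unit size divided by bandwidth, can be made concrete with a standard 1500-byte frame:

```python
# Serialization delay = data-unit size / link bandwidth, as stated above.
def serialization_delay_us(frame_bytes, link_gbps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e9) * 1e6

# A 1500-byte frame: 12 us at 1 Gb/s, 1.2 us at 10 Gb/s, 0.12 us at 100 Gb/s.
for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gb/s: {serialization_delay_us(1500, gbps):.3f} us")
```

This shows why raising link rates directly attacks one latency component: each 10x step in bandwidth cuts the per-frame serialization delay by 10x.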

- Signal propagation delays: the time taken for signals to travel through the physical cables. Even though optical fibres enable high-bandwidth transmission, they compromise on latency: propagation in silica-based glass fibre is about 31 percent slower than in vacuum. Researchers in [96] experimentally demonstrated fibre-based wavelength-division-multiplexed data transmission at close to (99.7 percent of) the speed of light in vacuum. These results are a promising step towards low-latency optical communication (at the speed of light) for future datacenters.
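The 31 percent figure follows directly from the group index of silica. A quick sketch, assuming a typical group index of about 1.468 (an illustrative value, not one quoted in [96]):

```python
# Propagation delay through fibre vs vacuum. The silica group index of
# about 1.468 is an assumed typical value, not a figure from [96].
C = 299_792_458.0  # speed of light in vacuum, m/s

def prop_delay_us(length_m, n=1.468):
    """One-way propagation delay in microseconds for a medium of index n."""
    return length_m * n / C * 1e6

print(f"100 m fibre : {prop_delay_us(100):.3f} us")
print(f"1 km fibre  : {prop_delay_us(1000):.3f} us")
print(f"1 km vacuum : {prop_delay_us(1000, n=1.0):.3f} us")
```

One kilometre of fibre costs roughly 4.9 µs versus about 3.3 µs in vacuum, which is the source of the roughly 31 percent penalty mentioned above.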

- Router and switching delays: the time taken to switch packets from ingress ports to egress ports, including serialization at the interfaces, look-up and forwarding. This delay depends strongly on the switching architecture and its implementation. Two popular types of electrical switches are:
Store and forward: these switches wait for the entire packet to arrive before forwarding it towards the next node. They have higher latency but support larger port counts.
Cut-through: these switches do not wait for the complete packet; they start forwarding bits as soon as the destination is known. They have low latency but a lower port count.
Due to implementation complexity, current electrical switches use complicated input queuing structures (such as Virtual Output Queues (VOQ)) and multistage arbitration mechanisms. Such architectures cannot

•	Queuing delays: Queuing delay occurs in electrical switches and routers when packets from different ingress ports head to the same egress port concurrently. Since only one packet can be transmitted at a time from the egress port, the other packets must be queued for sequential transmission. This resource contention is called head-of-line blocking, and the resulting queuing can lead to substantial latency. The absence of optical memories forces us to explore novel scheduling techniques for optical packet-switched networks [14]. Hybrid switching [4][91] is another alternative, but it uses electrical switches (for packet switching) alongside optical switches (for circuit switching).
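As a concrete illustration of the cost of head-of-line blocking, the toy model below simulates an input-queued switch under uniform random traffic, once with plain FIFO input queues and once with VOQ-style scheduling. The port count, traffic model and greedy matching discipline are our own illustrative assumptions, not taken from [14] or [91].

```python
import random

def simulate(n_ports, n_slots, use_voq, seed=1):
    """Toy slotted-time model of an input-queued switch under uniform
    random traffic (one new packet per input per slot).  With plain
    FIFO queues only the head packet may contend for an output, so a
    blocked head stalls everything behind it (head-of-line blocking).
    With VOQ semantics, any queued packet whose output is free may go.
    Returns the average per-port throughput in packets per slot."""
    rng = random.Random(seed)
    queues = [[] for _ in range(n_ports)]   # each entry: destination port
    delivered = 0
    for _ in range(n_slots):
        for q in queues:                    # arrivals: offered load = 1.0
            q.append(rng.randrange(n_ports))
        busy = set()                        # outputs already matched this slot
        for q in queues:
            if use_voq:
                # pick the oldest packet headed to a still-free output
                for k, dst in enumerate(q):
                    if dst not in busy:
                        busy.add(dst)
                        q.pop(k)
                        delivered += 1
                        break
            elif q and q[0] not in busy:    # FIFO: only the head may go
                busy.add(q[0])
                q.pop(0)
                delivered += 1
    return delivered / (n_ports * n_slots)

print("FIFO :", round(simulate(8, 4000, use_voq=False), 2))  # well below 1.0
print("VOQ  :", round(simulate(8, 4000, use_voq=True), 2))   # close to 1.0
```

This reproduces the well-known effect that FIFO input queuing saturates at roughly 60 percent throughput for moderate port counts, while virtual output queues approach full throughput.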

For an intra-data-center environment, assuming short-distance optical cables are used, the most dominant contributors to network latency are the router and switching delays, the queuing delays and the serialization delays.
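To put the dominant terms above into rough numbers, the sketch below computes serialization and propagation delays from the formulas in this section, plus the per-hop difference between store-and-forward and cut-through switching. The link rate, distances, hop count and header size are assumed example values, not figures from the text.

```python
C_VACUUM = 3.0e8            # speed of light in vacuum, m/s
FIBER_FRACTION = 0.69       # standard fibre: ~31% slower than vacuum
HOLLOW_FRACTION = 0.997     # near-c fibre of [96]: ~99.7% of c

def serialization_delay(data_bytes, link_bps):
    """Interface delay: size of the data unit divided by the bandwidth."""
    return data_bytes * 8 / link_bps

def propagation_delay(distance_m, speed_fraction):
    return distance_m / (C_VACUUM * speed_fraction)

packet, header = 1500, 64   # bytes: full Ethernet frame / switch header
link, dist, hops = 10e9, 100.0, 3

ser = serialization_delay(packet, link)
prop_std = propagation_delay(dist, FIBER_FRACTION)
prop_hollow = propagation_delay(dist, HOLLOW_FRACTION)

print(f"serialization of one packet:    {ser * 1e9:.0f} ns")   # 1200 ns
print(f"propagation, standard fibre:    {prop_std * 1e9:.0f} ns")
print(f"propagation, near-c fibre [96]: {prop_hollow * 1e9:.0f} ns")

# Store-and-forward pays the full serialization at every hop; cut-through
# forwards once the header is read, so intermediate hops cost far less.
sf = hops * ser
ct = ser + (hops - 1) * serialization_delay(header, link)
print(f"store-and-forward, {hops} hops: {sf * 1e6:.2f} us")
print(f"cut-through, {hops} hops:       {ct * 1e6:.2f} us")
```

Even at these short distances, serialization dominates propagation at 10 Gbps, which is why switch architecture (cut-through versus store-and-forward) matters so much inside the data center.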
C. Energy efficiency

Fig. 14. Power consumption split-up across two nodes in a data center



The storage network is an important part of the data center infrastructure, as it hosts most of the user data. With the growing user base of the internet, the access rates, bandwidth and utilization of these storage networks are increasing.
A. High bandwidth
The following are the biggest drivers for cheap, high-bandwidth optical interconnects in storage networks:

•	To handle the increasing data rates, electronic interconnects need copper traces to be packed more densely and in greater numbers. The maximum bandwidth that can be achieved by densely packing copper traces is limited by electromagnetic interference between adjacent traces, resistive loss (due to the skin effect) and the dielectric loss of the medium. This in turn increases the cost and lowers the scalability of storage networks based on electronic interconnects.

•	The data center is also migrating from a modular, over-provisioned system to newer architectures based on the disaggregation of datacenter resources. This requires a dense network interconnecting the different disaggregated resources.

There is also growing interest and research activity in the development of optics-enabled data storage systems using embedded optical interconnects for data centers [98]. Figure 15 shows the block diagram of a futuristic optical storage unit that uses optical backplanes (active and passive), optical mid-board transceivers and optical edge transceivers for high-bandwidth communication, together with passive copper interconnects for low-speed communication at lower power consumption. The authors of [99] developed and successfully demonstrated an active pluggable optical PCB connector solution, which allows peripheral devices to be plugged into and unplugged from an electro-optical midplane with embedded multimode polymer waveguides. Industry expects that embedded photonic interconnects will be adopted in data centers by 2016.

Fig. 15. Optical storage unit (source: Xyratex [98])

Beyond the interconnect itself, an optically enabled data center storage system requires fundamental changes to the storage protocols (optical SAS) along with the replacement of copper by optical transceivers and fibers. One of the biggest protocol challenges will be the transition from Out-Of-Band (OOB) signalling to optical OOB signalling in SAS. An optically interconnected storage system has been developed and is being validated in a commercial environment; the performance metric of the data storage system is the number of error-free reads and writes as the block size is varied. Optics makes a difference when the expander/controller-to-disk distance grows (to room scale) and the data rates are high (6 Gbps), which copper cannot handle.

The optics-based storage unit can have the following advantages:
•	longer transmission reach
•	higher density
•	higher link bandwidth (WDM, QAM)
•	advanced passive and active structures (SoC, MZI, etc.)
•	no RFI/EMI from waveguides
•	lower power consumption
•	small form factor (better footprint, reduced PCB material)
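The skin-effect limit on copper mentioned above can be made concrete: skin depth shrinks as 1/sqrt(f), so the usable conductor cross-section, and with it the trace loss, worsens with signalling rate. A minimal sketch using standard copper constants and an assumed 100 µm flat trace:

```python
import math

RHO_CU = 1.68e-8            # copper resistivity, ohm*m (room temperature)
MU_0 = 4e-7 * math.pi       # permeability of free space, H/m

def skin_depth(freq_hz):
    """delta = sqrt(rho / (pi * f * mu)): depth to which AC current
    penetrates a copper conductor."""
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU_0))

def trace_resistance_per_m(width_m, freq_hz):
    """Rough AC resistance of a wide, flat trace once current is
    confined to a single skin depth (ignores proximity effects)."""
    return RHO_CU / (width_m * skin_depth(freq_hz))

# Skin depth (and so conductor utilization) collapses as rates rise:
for f in (1e9, 10e9, 25e9):
    print(f"{f / 1e9:>4.0f} GHz: skin depth {skin_depth(f) * 1e6:5.2f} um, "
          f"R {trace_resistance_per_m(100e-6, f):6.1f} ohm/m")
```

At a few GHz the current already flows in a layer of roughly a micrometre, so widening or thickening the trace buys little, which is one reason dense copper backplanes stop scaling while optical waveguides do not.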

B. Low latency
Optics provides certain latency advantages, but these can be greatly improved in storage networks with the use of the right storage medium. Flash-based memory is one of the promising solutions that can provide the lowest latencies when integrated with optics. IDC studies [100][101] provide detailed insights into the rapidly growing market for enterprise storage systems that leverage flash storage media.
All-flash data centers can provide many advantages [102]:
•	low latency (and hence faster I/O and response times)
•	no need for a traditional layer of over-provisioning
•	lower power consumption compared to SSD-based solutions
•	smaller floor space compared to SSD-based solutions
•	new sources of revenue
•	elimination of slow I/O workarounds
•	improved application response times
•	simplified operations
•	lower capital costs
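A rough latency budget shows why flash and optics go together: with millisecond-class disks the network is a rounding error, while with flash-class media the interconnect becomes a visible fraction of end-to-end latency. All numbers below are order-of-magnitude assumptions for illustration, not measurements from [100]-[102].

```python
# Illustrative end-to-end read-latency budget for a networked storage
# access (all constants are typical order-of-magnitude assumptions).
NET_RTT = 10e-6           # intra-datacenter network round trip, s
HDD_ACCESS = 5e-3         # disk seek + rotation, s
FLASH_ACCESS = 100e-6     # NAND flash read, s

for name, media in (("HDD", HDD_ACCESS), ("flash", FLASH_ACCESS)):
    total = NET_RTT + media
    share = NET_RTT / total * 100
    print(f"{name:5s}: total {total * 1e6:8.0f} us, network share {share:4.1f}%")
# With HDD the network is a negligible ~0.2% of the total; with flash it
# becomes a significant fraction, so low-latency optical interconnects
# start to matter.
```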

C. Energy efficiency
Replacing the active edge-board transceivers with passive components reduces the power density of the storage unit.



References
[1] Plexxi, Key considerations for selecting data center fiber cabling.
[2] Cisco, Cisco Global Cloud Index: Forecast and Methodology, 2013–2018, White Paper, 2014.
[3] A. Vahdat, M. Al-Fares, N. Farrington, R. Mysore, G. Porter, and S. Radhakrishnan, Scale-out networking in the data center, IEEE Micro, vol. 30, no. 4, pp. 29–41, July 2010.
N. Farrington, G. Porter, S. Radhakrishnan, H. H. Bazzaz,

V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat, Helios: A
hybrid electrical/optical switch architecture for modular data centers,
in Proceedings of the ACM SIGCOMM 2010 Conference, ser.
SIGCOMM 10. New York, NY, USA: ACM, 2010, pp. 339350.
[Online]. Available:
D. Lee, Network evolution from a web company point of view, in
Summer Topical Meeting, 2009. LEOSST 09. IEEE/LEOS, 2009, pp.
S. M. Rumble, D. Ongaro, R. Stutsman, M. Rosenblum, and
J. K. Ousterhout, Its time for low latency, in Proceedings
of the 13th USENIX Conference on Hot Topics in Operating
Systems, ser. HotOS13. Berkeley, CA, USA: USENIX Association,
2011, pp. 1111. [Online]. Available:
S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken, The
nature of data center traffic: Measurements & analysis, in Proceedings
of the 9th ACM SIGCOMM Conference on Internet Measurement
Conference, ser. IMC 09. New York, NY, USA: ACM, 2009, pp. 202
208. [Online]. Available:
T. Benson, A. Anand, A. Akella, and M. Zhang, Understanding
data center traffic characteristics, SIGCOMM Comput. Commun.
Rev., vol. 40, no. 1, pp. 9299, Jan. 2010. [Online]. Available:
T. Benson, A. Akella, and D. A. Maltz, Network traffic characteristics
of data centers in the wild, in Proceedings of the 10th ACM
SIGCOMM Conference on Internet Measurement, ser. IMC 10.
New York, NY, USA: ACM, 2010, pp. 267280. [Online]. Available:
J. Dean and L. A. Barroso, The tail at scale, Commun.
ACM, vol. 56, no. 2, pp. 7480, Feb. 2013. [Online]. Available:
D. Abts, M. R. Marty, P. M. Wells, P. Klausler, and H. Liu, Energy
proportional datacenter networks, in Proceedings of the 37th Annual
International Symposium on Computer Architecture, ser. ISCA 10.
New York, NY, USA: ACM, 2010, pp. 338347. [Online]. Available:
A. Krishnamoorthy, R. Ho, X. Zheng, H. Schwetman, J. Lexau,
P. Koka, G. Li, I. Shubin, and J. Cunningham, Computer systems
based on silicon photonic interconnects, Proceedings of the IEEE,
vol. 97, no. 7, pp. 13371361, July 2009.
D. Monroe, Still seeking the optical transistor, Commun. ACM,
vol. 57, no. 10, pp. 1315, Sep. 2014. [Online]. Available:
M. Glick, Optical interconnects in next generation data centers: An
end to end view, in High Performance Interconnects, 2008. HOTI 08.
16th IEEE Symposium on, 2008, pp. 178181.
J. Gustavsson, Å. Haglund, P. Westbergh, K. Szczerba, B. Kögel, and A. Larsson, High-speed lasers for future optical interconnects, SPIE Newsroom, 2010,
Y.-C. Chang, C. Wang, and L. Coldren, High-efficiency, high-speed VCSELs with 35 Gbit/s error-free operation, Electronics Letters, vol. 43, no. 19, pp. 1022–1023, September 2007.
T. Anan, N. Suzuki, K. Yashiki, K. Fukatsu, H. Hatakeyama, T. Akagawa, K. Tokutome, and M. Tsuji, High-speed 1.1-µm-range InGaAs VCSELs, in Optical Fiber Communication/National Fiber Optic Engineers Conference (OFC/NFOEC 2008), Feb 2008, pp. 1–3.
B. Hawkins, I. Hawthorne, R.A., J. Guenter, J. Tatum, and J. Biard,
Reliability of various size oxide aperture vcsels, in Electronic
Components and Technology Conference, 2002. Proceedings. 52nd,
2002, pp. 540550.
D. Kuchta, A. Rylyakov, C. Schow, J. Proesel, C. Baks, P. Westbergh,
J. Gustavsson, and A. Larsson, 64gb/s transmission over 57m mmf
using an nrz modulated 850nm vcsel, in Optical Fiber Communications Conference and Exhibition (OFC), 2014, March 2014, pp. 13.
F. Koyama, Recent advances of vcsel photonics, Lightwave Technology, Journal of, vol. 24, no. 12, pp. 45024513, Dec 2006.
A. Boletti, P. Boffi, P. Martelli, M. Ferrario, and M. Martinelli,
Performance analysis of communication links based on vcsel
and silicon photonics technology for high-capacity data-intensive

scenario, Opt. Express, vol. 23, no. 2, pp. 18061815, Jan 2015.
[Online]. Available:
H. Park, A. Fang, S. Kodama, and J. Bowers, Hybrid silicon
evanescent laser fabricated with a silicon waveguide and iii-v offset
quantum wells, Opt. Express, vol. 13, no. 23, pp. 94609464, Nov
2005. [Online]. Available:
M. Lamponi, S. Keyvaninia, C. Jany, F. Poingt, F. Lelarge, G. de Valicourt, G. Roelkens, D. Van Thourhout, S. Messaoudene, J.-M. Fedeli,
and G.-H. Duan, Low-threshold heterogeneously integrated inp/soi
lasers with a double adiabatic taper coupler, Photonics Technology
Letters, IEEE, vol. 24, no. 1, pp. 7678, Jan 2012.
J. V. Campenhout, P. R. Romeo, P. Regreny, C. Seassal, D. V.
Thourhout, S. Verstuyft, L. D. Cioccio, J.-M. Fedeli, C. Lagahe,
and R. Baets, Electrically pumped inp-based microdisk lasers
integrated with a nanophotonic silicon-on-insulator waveguide
circuit, Opt. Express, vol. 15, no. 11, pp. 67446749, May 2007.
[Online]. Available:
D. Kuchta, A. Rylyakov, C. Schow, J. Proesel, F. Doany, C. W. Baks,
B. Hamel-Bissell, C. Kocot, L. Graham, R. Johnson, G. Landry,
E. Shaw, A. MacInnes, and J. Tatum, A 56.1gb/s nrz modulated
850nm vcsel-based optical link, in Optical Fiber Communication
Conference/National Fiber Optic Engineers Conference 2013. Optical
Society of America, 2013, p. OW1B.5. [Online]. Available:
A. W. Fang, H. Park, O. Cohen, R. Jones, M. J. Paniccia, and
J. E. Bowers, Electrically pumped hybrid algainas-silicon evanescent
laser, Opt. Express, vol. 14, no. 20, pp. 92039210, Oct 2006.
[Online]. Available:
G. Fish and D. Sparacin, Enabling flexible datacenter interconnect
networks with wdm silicon photonics, in Custom Integrated Circuits
Conference (CICC), 2014 IEEE Proceedings of the, Sept 2014, pp.
C. Zhang, S. Srinivasan, Y. Tang, M. J. R. Heck, M. L.
Davenport, and J. E. Bowers, Low threshold and high speed short
cavity distributed feedback hybrid silicon lasers, Opt. Express,
vol. 22, no. 9, pp. 10 20210 209, May 2014. [Online]. Available:
B. Koch, E. Norberg, B. Kim, J. Hutchinson, J.-H. Shin, G. Fish, and
A. Fang, Integrated silicon photonic laser sources for telecom and
datacom, in Optical Fiber Communication Conference and Exposition
and the National Fiber Optic Engineers Conference (OFC/NFOEC),
2013, March 2013, pp. 13.
M. Rowe, The next generation's modulation: PAM-4, NRZ, or ENRZ?
W. Bludau, A. Onton, and W. Heinke, Temperature dependence of
the band gap of silicon, Journal of Applied Physics, vol. 45, no. 4,
pp. 18461848, 1974.
F. Y. Huang and K. L. Wang, Normalincidence epitaxial sigec
photodetector near 1.3 m wavelength grown on si substrate,
Applied Physics Letters, vol. 69, no. 16, pp. 23302332, 1996.
[Online]. Available:
G. Masini, L. Colace, G. Assanto, H.-C. Luan, K. Wada, and L. Kimerling, High responsitivity near infrared ge photodetectors integrated on
si, Electronics Letters, vol. 35, no. 17, pp. 14671468, 1999.
G. Masini, L. Colace, G. Assanto, H. Luan, and L. Kimerling, Germanium on silicon pin photodiodes for the near infrared, Electronics
Letters, vol. 36, no. 25, pp. 20952096, 2000.
P. Chaisakul, D. Marris-Morini, G. Isella, D. Chrastina, X. Le Roux,
S. Edmond, E. Cassan, J.-R. Coudevylle, and L. Vivien, Ge/sige
multiple quantum well photodiode with 30 ghz bandwidth, Applied
Physics Letters, vol. 98, no. 13, pp. 131 112131 1123, Mar 2011.
F. Xia, T. Mueller, Y.-m. Lin, A. Valdes-Garcia, and P. Avouris,
Ultrafast graphene photodetector, Nature nanotechnology, vol. 4,
no. 12, pp. 839843, 2009.
X. Wang, Z. Cheng, K. Xu, H. K. Tsang, and J.-B. Xu, Highresponsivity graphene/silicon-heterostructure waveguide photodetectors, Nature Photonics, 2013.
L. Components, Germanium Detectors and Position Sensors, ISO
O. Optoelectronics, High Speed Silicon Photodiodes.
M. Geis, S. Spector, M. Grein, R. Schulein, J. Yoon, D. Lennon,
S. Deneault, F. Gan, F. Kaertner, and T. Lyszczarz, Cmos-compatible
all-si high-speed waveguide photodiodes with high responsivity in
near-infrared communication band, Photonics Technology Letters,
IEEE, vol. 19, no. 3, pp. 152154, Feb 2007.
M. J. Connelly, Semiconductor optical amplifiers. Springer, 2002.
R. Bonk, Linear and Nonlinear Semiconductor Optical Amplifier for
Next-Generation Optical Networks, ser. Karlsruhe Series in Photonics
and Communications / Karlsruhe Institute of Technology.
Scientific Publishing, April 2013, vol. 8.
B. Mersali, G. Gelly, A. Accard, J.-L. Lafragette, P. Doussiere,
M. Lambert, and B. Fernier, 1.55 mu m high-gain polarisationinsensitive semiconductor travelling wave amplifier with low driving
current, Electronics Letters, vol. 26, no. 2, pp. 124125, Jan 1990.
K. Magari, M. Okamoto, H. Yasaka, K. Sato, Y. Noguchi, and
O. Mikami, Polarization insensitive traveling wave type amplifier
using strained multiple quantum well structure, Photonics Technology
Letters, IEEE, vol. 2, no. 8, pp. 556558, Aug 1990.
T. Akiyama, M. Sugawara, and Y. Arakawa, Quantum-dot semiconductor optical amplifiers, Proceedings of the IEEE, vol. 95, no. 9, pp.
17571766, Sept 2007.
H. Wang, E. T. Aw, M. Xia, M. G. Thompson, R. V. Penty, and I. H.
White, Temperature independent optical amplification in uncooled
quantum dot optical amplifiers, in Optical Fiber Communication
Conference. Optical Society of America, 2008, p. OTuC2.
R. Brenot, M. Manzanedo, J.-G. Provost, O. Legouezigou, F. Pommereau, F. Poingt, L. Legouezigou, E. Derouin, O. Drisse, B. Rousseau
et al., Chirp reduction in quantum dot-like semiconductor optical
amplifiers, ECOC 2007, 2007.
S. Pathak, M. Vanslembrouck, P. Dumon, D. Van Thourhout, and
W. Bogaerts, Optimized silicon awg with flattened spectral response
using an mmi aperture, Lightwave Technology, Journal of, vol. 31,
no. 1, pp. 8793, Jan 2013.
A. Liu, L. Liao, Y. Chetrit, J. Basak, H. Nguyen, D. Rubin, and
M. Paniccia, Wavelength division multiplexing based photonic integrated circuits on silicon-on-insulator platform, Selected Topics in
Quantum Electronics, IEEE Journal of, vol. 16, no. 1, pp. 2332, Jan
S. Park, K.-J. Kim, I.-G. Kim, and G. Kim, Si micro-ring mux/demux
wdm filters, Opt. Express, vol. 19, no. 14, pp. 13 53113 539, Jul
2011. [Online]. Available:
S. Janz, A. Balakrishnan, S. Charbonneau, P. Cheben, M. Cloutier,
A. Delage, K. Dossou, L. Erickson, M. Gao, P. Krug, B. Lamontagne,
M. Packirisamy, M. Pearson, and D.-X. Xu, Planar waveguide echelle
gratings in silica-on-silicon, Photonics Technology Letters, IEEE,
vol. 16, no. 2, pp. 503505, Feb 2004.
Ciena, Dwdm vs cwdm, 2010,
H. Liu, C. Lam, and C. Johnson, Scaling optical interconnects
in datacenter networks opportunities and challenges for wdm, in
High Performance Interconnects (HOTI), 2010 IEEE 18th Annual
Symposium on, Aug 2010, pp. 113116.
D. Qian, M.-F. Huang, E. Ip, Y.-K. Huang, Y. Shao, J. Hu, and
T. Wang, High capacity/spectral efficiency 101.7-tb/s wdm transmission using pdm-128qam-ofdm over 165-km ssmf within c- and
l-bands, Lightwave Technology, Journal of, vol. 30, no. 10, pp. 1540
1548, May 2012.
A. Sano, T. Kobayashi, S. Yamanaka, A. Matsuura, H. Kawakami,
Y. Miyamoto, K. Ishihara, and H. Masuda, 102.3-tb/s (224 x 548gb/s) c- and extended l-band all-raman transmission over 240 km using
pdm-64qam single carrier fdm with digital pilot tone, in Optical Fiber
Communication Conference and Exposition (OFC/NFOEC), 2012 and
the National Fiber Optic Engineers Conference, March 2012, pp. 13.
J. Sakaguchi, B. Puttnam, W. Klaus, Y. Awaji, N. Wada, A. Kanno,

T. Kawanishi, K. Imamura, H. Inaba, K. Mukasa, R. Sugizaki,
T. Kobayashi, and M. Watanabe, 305 tb/s space division multiplexed
transmission using homogeneous 19-core fiber, Lightwave Technology, Journal of, vol. 31, no. 4, pp. 554562, Feb 2013.
R.-J. Essiambre, R. Ryf, N. Fontaine, and S. Randel, Breakthroughs
in photonics 2012: Space-division multiplexing in multimode and
multicore fibers for high-capacity optical communication, Photonics
Journal, IEEE, vol. 5, no. 2, pp. 0 701 3070 701 307, April 2013.
R. Ramaswami, K. Sivarajan, and G. Sasaki, Optical networks: a
practical perspective. Morgan Kaufmann, 2009.
R. Bonk, T. Vallaitis, J. Guetlein, C. Meuer, H. Schmeckebier, D. Bimberg, C. Koos, W. Freude, and J. Leuthold, The input power dynamic
range of a semiconductor optical amplifier and its relevance for access
network applications, Photonics Journal, IEEE, vol. 3, no. 6, pp.
10391053, 2011.
A. Dugan, L. Lightworks, and J. Chiao, The optical switching spectrum: A primer on wavelength switching technologies, Telecommun.
Mag, no. 5, 2001.
T. E. Stern and K. Bala, Multiwavelength optical networks, AddisonWesley, EUA, 1999.
P. B. Chu, C. Lee, and S. Park, Mems: the path to large optical
crossconnects, Communications Magazine, IEEE, vol. 40, no. 3, pp.
8087, 2002.
K. Sakuma, H. Ogawa, D. Fujita, and H. Hosoya, Polymer ybranching thermo-optic switch for optical fiber communication systems, in The 8th Microoptics Conf.(MOC01), Osaka, Japan, 2001.
J. Sapriel, D. Charissoux, V. Voloshinov, and V. Molchanov, Tunable
acoustooptic filters and equalizers for wdm applications, Journal of
lightwave technology, vol. 20, no. 5, p. 864, 2002.
G. I. Papadimitriou, C. Papazoglou, A. S. Pomportsis et al., Optical
switching: switch fabrics, techniques, and architectures, Journal of
lightwave technology, vol. 21, no. 2, p. 384, 2003.
S. Bregni, G. Guerra, and A. Pattavina, State of the art of optical switching technology for all-optical networks, Communications
World, 2001.
M. Petracca, B. Lee, K. Bergman, and L. Carloni, Photonic nocs:
System-level design exploration, Micro, IEEE, vol. PP, no. 99, pp.
11, 2009.
Q. Xu and M. Lipson, All-optical logic based on silicon micro-ring
resonators, Optics Express, vol. 15, no. 3, pp. 924929, 2007.
S. C. Nicholes, M. L. Masanovic, B. Jevremovic, E. Lively, L. A.
Coldren, and D. J. Blumenthal, An 8 8 inp monolithic tunable optical
router (motor) packet forwarding chip, Lightwave Technology, Journal
of, vol. 28, no. 4, pp. 641650, 2010.
A. Husain, Optical interconnect of digital integrated circuits and
systems, in 1984 Los Angeles Techincal Symposium. International
Society for Optics and Photonics, 1984, pp. 1020.
D. Vantrease, R. Schreiber, M. Monchiero, M. McLaren, N. Jouppi,
M. Fiorentino, A. Davis, N. Binkert, R. Beausoleil, and J. Ahn,
Corona: System implications of emerging nanophotonic technology,
in Computer Architecture, 2008. ISCA 08. 35th International Symposium on, June 2008, pp. 153164.
J. Kim, W. Dally, S. Scott, and D. Abts, Technology-driven, highlyscalable dragonfly topology, in Computer Architecture, 2008. ISCA
08. 35th International Symposium on, June 2008, pp. 7788.
Y. Pan, P. Kumar, J. Kim, G. Memik, Y. Zhang, and A. Choudhary,
Firefly: illuminating future network-on-chip with nanophotonics,
ACM SIGARCH Computer Architecture News, vol. 37, no. 3, pp. 429
440, 2009.
Y. Pan, J. Kim, and G. Memik, Flexishare: Channel sharing for an
energy-efficient nanophotonic crossbar, in High Performance Computer Architecture (HPCA), 2010 IEEE 16th International Symposium
on, Jan 2010, pp. 112.
N. Dupuis, B. Lee, J. Proesel, A. Rylyakov, R. Rimolo-Donadio,
C. Baks, C. Schow, A. Ramaswamy, J. Roth, R. Guzzon, B. Koch,
D. Sparacin, and G. Fish, 30gbps optical link utilizing heterogeneously integrated iii-v/si photonics and cmos circuits, in Optical Fiber Communications Conference and Exhibition (OFC), 2014,
March 2014, pp. 13.
D. Miller, Device requirements for optical interconnects to silicon

chips, Proceedings of the IEEE, vol. 97, no. 7, pp. 11661185, July
R. Soref, Silicon photonics technology: past, present, and future, pp.
1928, 2005. [Online]. Available:
G. H. Duan, C. Jany, A. L. Liepvre, A. A. P. Kaspar, A. Shen,
P. Charbonnier, F. Mallecot, F. Lelarge, J.-L. Gentner, S. Olivier,
S. Malhouitre, and C. Kopp, Hybrid wavelength-tunable iii-v lasers
on silicon, spie newsroom, 2014,
The ipkiss design framework, 2014, http://www.lucedaphotonics.
Optodesigner 5, 2014,
Photonic ic design, 2014,
Y. Urino, T. Shimizu, M. Okano, N. Hatori, M. Ishizaka, T. Yamamoto,
T. Baba, T. Akagawa, S. Akiyama, T. Usuki, D. Okamoto,
M. Miura, M. Noguchi, J. Fujikata, D. Shimura, H. Okayama,
T. Tsuchizawa, T. Watanabe, K. Yamada, S. Itabashi, E. Saito,
T. Nakamura, and Y. Arakawa, First demonstration of high density
optical interconnects integrated with lasers, optical modulators,
and photodetectors on single silicon substrate, Opt. Express,
vol. 19, no. 26, pp. B159B165, Dec 2011. [Online]. Available:
Y. Urino, Y. Noguchi, M. Noguchi, M. Imai, M. Yamagishi,
S. Saitou, N. Hirayama, M. Takahashi, H. Takahashi, E. Saito,
M. Okano, T. Shimizu, N. Hatori, M. Ishizaka, T. Yamamoto,
T. Baba, T. Akagawa, S. Akiyama, T. Usuki, D. Okamoto,
M. Miura, J. Fujikata, D. Shimura, H. Okayama, H. Yaegashi,
T. Tsuchizawa, K. Yamada, M. Mori, T. Horikawa, T. Nakamura,
and Y. Arakawa, Demonstration of 12.5-gbps optical interconnects
integrated with lasers, optical splitters, optical modulators and
photodetectors on a single silicon substrate, Opt. Express, vol. 20,
no. 26, pp. B256B263, Dec 2012. [Online]. Available: http:
R. Hult, Mid-board optical transceivers light up, 2014, http://www.
C. EOS, icphotonics, 2014,
N. Bamiedakis, A. Hashim, R. Penty, and I. White, A 40 gb/s optical
bus for optical backplane interconnections, Lightwave Technology,
Journal of, vol. 32, no. 8, pp. 15261537, April 2014.
P. Zabinski, B. Gilbert, and E. Daniel, Coming challenges with
terabit-per-second data communication, Circuits and Systems Magazine, IEEE, vol. 13, no. 3, pp. 1020, thirdquarter 2013.
I. Young, E. Mohammed, J. Liao, A. Kern, S. Palermo, B. Block,
M. Reshotko, and P. Chang, Optical technology for energy efficient i/o in high performance computing, Communications Magazine,
IEEE, vol. 48, no. 10, pp. 184191, October 2010.
Z. Zhu, S. Zhong, L. Chen, and K. Chen, Fully programmable
and scalable optical switching fabric for petabyte data center, Opt.
Express, vol. 23, no. 3, pp. 35633580, Feb 2015. [Online]. Available:
N. Binkert, A. Davis, N. Jouppi, M. McLaren, N. Muralimanohar,
R. Schreiber, and J. H. Ahn, The role of optics in future high radix
switch design, in Computer Architecture (ISCA), 2011 38th Annual
International Symposium on, June 2011, pp. 437447.
G. Wang, D. G. Andersen, M. Kaminsky, K. Papagiannaki,
T. E. Ng, M. Kozuch, and M. Ryan, c-through: part-time
optics in data centers, SIGCOMM Comput. Commun. Rev.,
vol. 41, no. 4, pp. , Aug. 2010. [Online]. Available: http:
Y. J. Liu, P. X. Gao, B. Wong, and S. Keshav, Quartz: A
new design element for low-latency dcns, in Proceedings of the
2014 ACM Conference on SIGCOMM, ser. SIGCOMM 14. New
York, NY, USA: ACM, 2014, pp. 283294. [Online]. Available:
S. Liu, Q. Cheng, M. R. Madarbux, A. Wonfor, R. V. Penty,
I. H. White, and P. M. Watts, Low latency optical switch for
high performance computing with minimized processor energy load
(invited), J. Opt. Commun. Netw., vol. 7, no. 3, pp. A498A510,
Mar 2015. [Online]. Available:

Q. Cheng, A. Wonfor, J. Wei, R. V. Penty, and I. White, Modular
hybrid dilated mach-zehnder switch with integrated soas for large
port count switches, in Optical Fiber Communication Conference.
Optical Society of America, 2014, p. W4C.6. [Online]. Available:
C. Hermsmeyer, H. Song, R. Schlenk, R. Gemelli, and S. Bunse,
Towards 100g packet processing: Challenges and technologies, Bell
Labs Technical Journal, vol. 14, no. 2, pp. 5779, 2009. [Online].
F. Poletti, N. Wheeler, M. Petrovich, N. Baddela, E. N. Fokoua,
J. Hayes, D. Gray, Z. Li, R. Slavk, and D. Richardson, Towards highcapacity fibre-optic communications at the speed of light in vacuum,
Nature Photonics, vol. 7, no. 4, pp. 279284, 2013.
Y. Yin, R. Proietti, X. Ye, C. Nitta, V. Akella, and S. Yoo, Lions: An
awgr-based low-latency optical switch for high-performance computing and data centers, Selected Topics in Quantum Electronics, IEEE
Journal of, vol. 19, no. 2, pp. 3 600 4093 600 409, March 2013.
R. Pitwon, Migration of embedded optical interconnect into data centre systems, 2013,
R. Pitwon, K. Wang, J. Graham-Jones, I. Papakonstantinou, H. Baghsiahi, B. Offrein, R. Dangel, D. Milward, and D. Selviah, Firstlight:
Pluggable optical interconnect technologies for polymeric electrooptical printed circuit boards in data centers, Lightwave Technology,
Journal of, vol. 30, no. 21, pp. 33163329, Nov 2012.
E. Burgener, I. Feng, J. Janukowicz, E. Sheppard et al., All-flash array 2014–2018 forecast and 1H14 vendor shares, IDC.
R. Villars and E. Burgener, Building data centers for today's data-driven economy: The role of flash, 2014,
Violin, Why implement an all flash data center? 2014, http://www.