
Table of Contents

Abstract
1. INTRODUCTION
2. BACKGROUND
3. THE END OF ELECTRON-BASED COMPUTING
4. TYPES OF OPTICAL COMPUTING
   4.1 Electro-Optical Hybrid Computers
   4.2 Pure Optical Computers
5. SOME KEY OPTICAL COMPONENTS FOR COMPUTING
   5.1 Optical Transistors
   5.2 Photonic Crystals
      5.2.1 Light localization
      5.2.2 Logic gates using photonic crystals
6. CURRENT WORK AND IMPROVEMENTS
   6.1 Lenslet's EnLight Optical Processor
   6.2 HP Lab's Experimental Optical Processor
   6.3 Optalysys optical processing powers convolutional neural network
7. ADVANTAGES OF OPTICAL COMPUTING
8. DISADVANTAGES OF OPTICAL COMPUTING
9. THE FUTURE
10. CONCLUSIONS
11. REFERENCES
Abstract

With the growth of computing technology, the need for high-performance computers (HPC) has
significantly increased. Optics has been used in computing for a number of years, but the main
emphasis has been, and continues to be, on linking portions of computers, on communications, or
more intrinsically on devices that have some optical application or component.

Optical computing was a hot research area in the 1980s, but the work tapered off due to materials
limitations that prevented optochips from getting small enough and cheap enough to be more
than laboratory curiosities. Now optical computers are back, with advances in self-assembled
conducting organic polymers that promise super-tiny all-optical chips.

Optical computing technology is, in general, developing in two directions. One approach is to
build computers that have the same architecture as present-day computers but use optics, i.e.,
electro-optical hybrids. Another approach is to generate a completely new kind of computer that
can perform all functional operations in optical mode. In recent years, a number of devices that
can ultimately lead us to real optical computers have already been manufactured. These include
optical logic gates, optical switches, optical interconnections and optical memory.

Optical computing describes a new technological approach for constructing computer
processors and other components. Instead of the current approach of electrically transmitting
data along tiny wires etched into silicon, optical computing employs a technology called silicon
photonics that uses laser light instead. This use of optical lasers overcomes the constraints
associated with heat dissipation in today's components and allows much more information to be
stored and transmitted in the same amount of space.

1. INTRODUCTION
Optical or photonic computing uses photons produced by lasers or diodes for computation. In
recent years, a number of devices that can ultimately lead us to real optical computers have
already been manufactured. These include optical logic gates, optical switches, optical
interconnections, and optical memory. High-density physical integration optical switching
devices have been achieved. Optical devices can have switching speeds of the order of 10⁻¹⁵
seconds with power requirements as low as 10⁻⁶ watts. Two types of optical processors have
been under active development, namely numeric and nonnumeric. Numeric processors include
logic multiplication unit, optical arithmetic unit and optical correlator. Non-numeric processors
for optical text processing and optical knowledge based processing have shown good results.
Among the most crucial performance-limiting factors of today's very large-scale integrated
circuits are the limited pin number and the low bandwidth of interconnections, rather than the
chip's processing power. (For example, the Japanese Earth Simulator, a computer system
developed by NEC, uses a processor IC with 5,000 pins.) Much more performance will be
achieved if chip-to-chip data transfer can be realized by fast links leaving the circuit directly out
from the chip's surface instead of via the chip's edge. Arrays of optical modulators or vertical
surface emitting lasers, that can be flip-chip bonded onto complementary metal oxide
semiconductor circuits, promise to offer a solution for the bottleneck in chip-to-chip
communications.
Optical interconnections for VLSI systems also offer massive parallelism and three-dimensional
interconnection capabilities. During the last decade, significant advances have been achieved in
the field of optoelectronics VLSI (OE-VLSI) and microelectronics. Arrays of OE-VLSI circuits
linked together by 3-D optical interconnections based on free space optics offer a kind of natural
pipeline. It is possible now to design circuitry within the OE-VLSI chip as an array processing
structure. This will help to combine the two most popular parallel processing techniques:
pipelining and array processing. This is of great interest for the design of innovative 3-D
arithmetic units that can serve as the core for future 3-D processors. It is envisaged that such
architectures offer potential for a significant increase of computing performance, which would
not be possible by using only all-electronic technology. It may also be considered as a road map
for a future massively parallel optoelectronic supercomputer system.
An interesting suggestion is that such systems can consist of multiple clusters, which can be
directly mounted with a silicon spacer on a glass substrate. Inside the glass substrate, the optical
interconnections are to run along zigzag paths. Additionally, diffractive optical elements are
etched in a glass substrate's surface to realize the optical interconnections, which link the OE-
VLSI circuit.
Optical computing technology is, in general, developing in two directions. One approach is to
build computers that have the same architecture as present day computers but using optics.
Another approach is to generate a completely new kind of computer, which can perform all
functional operations in an optical mode. The research carried out to date suggests that all optical
computers are far from reality and a hybrid system could be tested for typical
mathematical/functional operations. Performance benchmark reports in this direction could push
optoelectronic-computing systems out of laboratories and encourage more research towards cost
effective user friendliness.

Another interesting area is holo-computing. Consider two apparently unrelated facts. First, the
trend in all areas of high-end computing is towards greater parallelism. Second, because of its
quantum nature, a photon has no path or position until it is detected. These facts get connected to
the domain of holography. Holographic techniques will be central to photonic computing,
provided substantial research continues towards material properties. Using the properties of
holography and creating holograms with computers provide many computer-oriented
applications. These include computer memories, pattern recognition, data encryption, optical
contouring, CAD, medical imaging, etc. Optical memories are expected to be an integral part of
high-performance computing. Where research will take optical memories remains to be seen.
One of the emerging trends in the area of optical computing is the development of photonic
devices. Optical interconnects promise to eliminate I/O bottlenecks in VHSIC chips, boards, and
backplanes, increasing the processing power of high-speed computing systems. The future lies in
the development of massively parallel photonic switches relying on optically processed signal
routing, monolithic integration of optoelectronic elements, and an architecture optimized to
exploit the speed and parallelism of optical interconnections. Such developments will pave way
for high speed routing in broadband networks as well as optically interconnected multiprocessors
in future.
We are in an era of daily explosions in the development of optics and optical components for
computing and other applications. Photonics is booming in industry and universities worldwide.
Data traffic is growing worldwide at a rate of 100% per year, while in the US data traffic is
expected to increase 300% annually. The requirement for high-data-rate transfer equipment is
also expected to continue increasing. Electronic switching limits network speeds to about 50
Gigabits per second. Terabit speeds are needed to accommodate the growth rate of the Internet
and the increasing demand for bandwidth-intensive data streams.

2. BACKGROUND
Optical computing was a hot research area in the 1980s. But the work tapered off because of
materials limitations that seemed to prevent opto-chips from getting small enough and cheap
enough to be more than laboratory curiosities. Now, optical computers are back with advances in
self-assembled conducting organic polymers that promise super-tiny all-optical chips. Advances
in optical storage devices have generated the promise of efficient, compact and large-scale
storage devices. Another advantage of optical methods over electronic ones for computing is
that parallel data processing can frequently be done much more easily and less expensively in
optics than in electronics. Parallelism, the capability to execute more than one operation
simultaneously, is now common in electronic computer architectures. But most electronic
computers still execute instructions sequentially; parallelism with electronics remains sparsely
used. Its first widespread appearance was in Cray supercomputers in the early 1980s, when two
processors were used in conjunction with one shared memory. Today, large supercomputers may
utilize thousands of processors but communication overhead frequently results in reduced
overall efficiency. On the other hand, for some applications in input-output (I/O), such as image
processing, by using a simple optical design an array of pixels can be transferred simultaneously
in parallel from one point to another.
Optical technology promises massive upgrades in the efficiency and speed of computers, as well
as significant shrinkage in their size and cost. An optical desktop computer could be capable of
processing data up to 100,000 times faster than current models because multiple operations can
be performed simultaneously. Other advantages of optics include low manufacturing costs,
immunity to electromagnetic interference, a tolerance for low loss transmissions, freedom from
short electrical circuits and the capability to supply large bandwidth and propagate signals within
the same or adjacent fibers without interference. One oversimplified example may help to
appreciate the difference between optical and electronic parallelism. Consider an imaging system
with 1000 × 1000 independent points per mm² in the object plane that are connected optically
by a lens to a corresponding number of points per mm² in the image plane; the lens effectively
performs an FFT of the image plane in real time. For this to be accomplished electrically, a
million operations are required.
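The lens example above can be contrasted with the electronic route directly. A minimal sketch in Python (NumPy assumed; the random array is only a stand-in for the object-plane samples) counts the operations a digital 2-D FFT of the same 1000 × 1000 field requires:

```python
import numpy as np

# A lens transforms the whole 1000 x 1000 field at once, in real time;
# a digital processor must compute a 2-D FFT point by point.
n = 1000
field = np.random.rand(n, n)      # stand-in for the object-plane intensities

spectrum = np.fft.fft2(field)     # what the lens performs optically

# Rough operation count for the electronic route: O(N^2 log2 N) complex
# operations for an N x N 2-D FFT -- on the order of 10^7 operations here.
ops = n * n * np.log2(n) * 2
print(f"approx. electronic operations: {ops:.2e}")
```

The count depends on the FFT variant assumed, but any electronic scheme scales with the number of points, while the optical transform does not.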
Parallelism, when associated with fast switching speeds, would result in staggering
computational speeds. Assume, for example, there are only 100 million gates on a chip, much
less than what was mentioned earlier (optical integration is still in its infancy compared to
electronics). Further, conservatively assume that each gate operates with a switching time of
only 1 nanosecond (organic optical switches can switch at sub-picosecond rates compared to
maximum picosecond switching times for electronic switching). Such a system could perform
more than 10¹⁷ bit operations per second. Compare this to the gigabit (10⁹) or terabit (10¹²) per
second rates which electronics are either currently limited to, or hoping to achieve. In other
words, a computation that might require one hundred thousand hours (more than 11 years) of a
conventional computer time could require less than one hour by an optical one.
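The arithmetic behind this estimate is simple enough to check; a short sketch using the same assumed figures (10⁸ gates, 1 ns switching time):

```python
gates = 100_000_000          # assumed 10^8 gates on an optical chip
switch_time_s = 1e-9         # conservative 1 ns switching time per gate

# All gates switching in parallel once per switching interval.
ops_per_second = gates / switch_time_s
print(f"{ops_per_second:.0e} bit operations per second")

# Against an electronic machine limited to ~10^12 bit operations per
# second, that is a factor of ~10^5 -- matching the eleven-years-versus-
# one-hour comparison in the text.
speedup = ops_per_second / 1e12
print(f"speedup over a terabit machine: {speedup:.0e}x")
```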
But building an optical computer will not be easy. A major challenge is finding materials that
can be mass produced yet consume little power; for this reason, optical computers may not hit
the consumer market for 10 to 15 years. Another of the typical problems optical computers have
faced is that the digital optical devices have practical limits of eight to eleven bits of accuracy in
basic operations due to, e.g., intensity fluctuations. Recent research has shown ways around this
difficulty. Thus, for example, digital partitioning algorithms, that can break matrix-vector
products into lower-accuracy sub-products, working in tandem with error-correction codes, can
substantially improve the accuracy of optical computing operations. Nevertheless, many
problems in developing appropriate materials and devices must be overcome before digital
optical computers will be in widespread commercial use. In the near term, at least, optical
computers will most likely be hybrid optical/electronic systems that use electronic circuits to
pre-process input data for computation and to post-process output data for error correction before
outputting the results. The promise of all-optical computing remains highly attractive, however,
and the goal of developing optical computers continues to be a worthy one. Nevertheless, many
scientists feel that an all-optical computer will not be the computer of the future; instead
optoelectronic computers will rule where the advantages of both electronics and optics will be
used. Optical computing can also be linked intrinsically to quantum computing. Each photon is a
quantum of a wave function describing the whole function. It is now possible to control atoms
by trapping single photons in small, superconducting cavities. So photon quantum computing
could become a future possibility.

3. THE END OF ELECTRON-BASED COMPUTING


In 1954 the first silicon transistor was comparable in size to a US postage stamp. In 1965
Gordon Moore, co-founder of Intel, predicted that the number of transistors fitting onto a chip
would double every two years. This prediction was later called Moore's Law and was very
accurate up through the beginning of the 21st century. While the predicted exponential growth
hasn't stopped

completely, it has slowed down. Today's silicon transistors are rapidly approaching ~10 nm
across, near the theoretical minimum size for electron-based computing. This limitation is due
to the quantum nature of electrons: below roughly 5 nm, an electron can simply tunnel through
the transistor as if it weren't there. This minimum size imposes a maximum number of
transistors and a maximum speed of computation. The limitation can be mitigated by parallel
processing, i.e., multiple processors running in tandem, but this in turn generates more heat and
is difficult to program for. Current chips are made with a two-dimensional layout; they could be
made three-dimensional to increase the possible number of transistors, but this worsens cooling
problems (cooling is related to surface area, and a cube has much less of it than a flat chip of
similar volume), power consumption, and the same tunneling issues as before.

Another fundamental limitation of electron-based computing is the electrons themselves. In
addition to quantum tunneling, they travel relatively slowly. A material's drift velocity is the
speed at which electrons move through it. Silicon's drift velocity depends on temperature,
doping, and applied field, but tops out in the range of 10⁷ cm/s for undoped silicon. This figure
is slightly misleading, in that one does not need the originally sent electron to arrive at the
destination, but rather the electron that it smashed into and catapulted forward, many times over.
This propagation wave travels at approximately the speed of light. The major slowdown of
electronic signalling comes from the necessity of charging and discharging the transistors and
their interconnections; the minimum time for this action is in the realm of nanoseconds.
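The gap between electron transit and signal propagation can be put in numbers; a small sketch using the rough figures quoted above (illustrative orders of magnitude, not measured values):

```python
# Electron drift vs. electromagnetic signal propagation over a 1 cm trace.
drift_velocity_cm_s = 1e7      # upper-range drift velocity for silicon
signal_velocity_cm_s = 3e10    # propagation wave, roughly the speed of light
length_cm = 1.0

t_drift = length_cm / drift_velocity_cm_s     # time for an electron itself
t_signal = length_cm / signal_velocity_cm_s   # time for the signal front

print(f"electron transit: {t_drift * 1e9:.1f} ns")     # ~100 ns
print(f"signal transit:   {t_signal * 1e12:.1f} ps")   # ~33 ps

# Neither of these is the real bottleneck: the nanosecond-scale charging
# and discharging of transistors and wires bounds electronic switching.
```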
4. TYPES OF OPTICAL COMPUTING
There are two types of optical computers:
- Electro-optical hybrid computers
- Pure optical computers

4.1 Electro-Optical Hybrid Computers
Electro-optical hybrid computer chips will integrate high speed photonic components with low
cost electrical components onto a single chip. Scientists and researchers have begun to turn to
this for creating computer components as a means of overcoming the high cost of photonic
computer chip creation. Computers which integrate both photonic and electronic components
will contain an electronic processor connected to a variety of electronic and photonic
components. In order to accomplish this, components will need to be able to convert electrons to
photons. Intel has devised a variety of ways of doing this. One way they have come up with is by
changing the construction of a standard silicon chip by infusing a small amount of cerium or
erbium into a layer of silicon dioxide laced with silicon nanocrystals. In doing so Intel has
managed to, in essence, turn an entire silicon chip into a light emitting diode (LED) which can
glow in response to a small voltage. The chip could flicker at the same rate as the electron flow
going through it. The light given off by this could then be fed into a series of etching disks
(microscopic disks of silicon dioxide perched on silicon pillars). When the light enters a disk it
circles the edge of the disk several millions of times, building in intensity until it is finally
emitted as laser light which could then be fed into an optical component. Another more
simplistic approach also devised by Intel is a modulator which has the ability to modulate pulses
given off by a laser beam so they are in time with the electrons flowing through the modulator.
The modulator first breaks up the laser light into two waves each paired with a capacitor. When
the capacitors – charged with the electron flow – become fully charged they unleash static
electricity which interacts with the two separate light waves. When the two altered halves of the
original light wave rejoin, the troughs and crests of the waves interfere with each other, causing
the light wave to pulse in the same pattern as the electron flow. Photons would then travel to
any required photonic chips, which would be small due to the high price of III-V semiconductors
or to the electrical CPU where they could be detected by photon detectors and processed as data.
Optical-electrical hybrid systems do, however, face one problem. Optical III-V microchips begin
to break down at 80 degrees Celsius, the temperature electrical processors typically reach when
operating. This would mean that hybrid systems would need to have III-V chips far enough away
on the mother boards so as to not overheat, which could detract from the potential performance
of the system.
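The modulator described above, which splits laser light into two waves and phase-shifts one of them before recombination, is essentially an interferometer. A minimal, idealized sketch of the interference arithmetic (equal, lossless arms assumed):

```python
import numpy as np

def mach_zehnder_output(phase_shift_rad: float) -> float:
    """Relative output intensity when two equal half-waves recombine
    after one arm is phase-shifted (idealized, lossless model)."""
    # Fields of the two arms: 0.5*exp(i*0) and 0.5*exp(i*phi).
    field = 0.5 + 0.5 * np.exp(1j * phase_shift_rad)
    return float(abs(field) ** 2)

# No charge on the capacitors -> no phase shift -> constructive
# interference: the light passes (pulse "on").
print(mach_zehnder_output(0.0))
# A full pi shift induced by the electron flow -> destructive
# interference: the light is suppressed (pulse "off").
print(mach_zehnder_output(np.pi))
```

Driving the phase between 0 and π with the electron flow is what imprints the electrical pattern onto the light.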
4.2 Pure Optical Computers
This type of optical computer uses multiple frequencies to send the information through the
computer as light waves. Unlike the electro-optical model, there is no use of electricity; it
strictly uses optics to transmit data. Therefore, there is no need to convert information between
electronic and optical form, which increases the speed of processing.
5. SOME KEY OPTICAL COMPONENTS FOR COMPUTING
The major breakthroughs on optical computing have been centred on the development of micro-
optic devices for data input. Conventional lasers are known as 'edge emitters' because their laser
light comes out from the edges. Also, their laser cavities run horizontally along their length. A
vertical cavity surface emitting laser (VCSEL - pronounced 'vixel'), however, gives out laser
light from its surface and has a laser cavity that is vertical; hence the name. VCSEL is a
semiconductor vertical cavity surface emitting micro-laser diode that emits light in a cylindrical

beam vertically from the surface of a fabricated wafer, and offers significant advantages when
compared to the edge-emitting lasers currently used in the majority of fiber optic
communications devices. They emit at 850 nm and have rather low thresholds (typically a few
mA). They are very fast, can couple milliwatts of power into a 50-micron-core fiber, and are
extremely radiation hard. VCSELS can be tested at the wafer level (as opposed to edge emitting
lasers which have to be cut and cleaved before they can be tested) and hence are relatively
cheap. In fact, VCSELs can be fabricated efficiently on a 3-inch diameter wafer. The principles
involved in the operation of a VCSEL are very similar to those of regular lasers. There are two
special semiconductor materials sandwiching an active layer where all the action takes place.
But rather than reflective ends, in a VCSEL there are several layers of partially reflective mirrors
above and below the active layer. Layers of semiconductor with differing compositions create
these mirrors, and each mirror reflects a narrow range of wavelengths back into the cavity in
order to cause light emission at just one wavelength.

Spatial light modulators (SLMs) play an important role in several technical areas where the
control of light on a pixel-by-pixel basis is a key element, such as optical processing, for
inputting information on light beams, and displays. For display purposes the desire is to have as
many pixels as possible in as small and cheap a device as possible. For such purposes designing
silicon chips for use as spatial light modulators has been effective. The basic idea is to have a set
of memory cells laid out on a regular grid. These cells are electrically connected to metal
mirrors, such that the voltage on the mirror depends on the value stored in the memory cell. A
layer of optically active liquid crystal is sandwiched between this array of mirrors and a piece of
glass with a conductive coating. The voltage between individual mirrors and the front electrode
affects the optical activity of the liquid crystal in that neighbourhood. Hence by being able to
individually program the memory locations one can set up a pattern of optical activity in the
liquid crystal layer. Figure 2(a) shows a reflective 256x256 pixel device based on SRAM
technology. Several technologies have contributed to the development of SLMs. These include
micro-electro-mechanical devices, such as acousto-optic modulators (AOMs), and pixelated

electrooptical devices, such as liquid-crystal modulators (LCMs). Figure 2(b) shows a simple
AOM operation in deflecting light beam direction. Encompassed within these categories are
amplitude only, phase-only, or amplitude phase modulators.
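The memory-cell-to-mirror scheme described for the SRAM-based SLM can be sketched as a toy model; the linear voltage-to-phase mapping and the 8-bit cell values are assumptions for illustration:

```python
import numpy as np

# Toy model of a reflective SLM: an N x N grid of memory cells, each
# driving the voltage (and hence the liquid-crystal phase) of one mirror.
N = 256
memory = np.random.randint(0, 256, size=(N, N))   # assumed 8-bit cells

# Assumed linear response: cell values 0..255 map to phases 0..2*pi.
phase = memory / 255.0 * 2 * np.pi

# A uniform incoming beam leaves with a spatially programmed phase
# pattern -- one independently controllable sample per pixel.
incoming = np.ones((N, N), dtype=complex)
outgoing = incoming * np.exp(1j * phase)

print(outgoing.shape)    # one modulated sample per mirror pixel
```

Reprogramming the memory array reprograms the optical pattern, which is the point of driving the modulator from silicon.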

Broadly speaking, an optical computer is a computer in which light is used somewhere. This can
mean fiber-optic connections between electronic components, free-space connections, or a design
in which light functions as a mechanism for storage of data, logic or arithmetic. Instead of
electrons in silicon integrated circuits, digital optical computers will be based on photons.
Smart pixels, the union of optics and electronics, both expands the capabilities of electronic
systems and enables optical systems with high levels of electronic signal processing. Thus, smart
pixel systems add value to electronics through optical input/output and interconnection, and
value is added to optical systems through electronic enhancements which include gain, feedback
control, and image processing and compression. Smart pixel technology is a relatively new
approach to integrating electronic circuitry and optoelectronic devices in a common framework.
The purpose is to leverage the advantages of each individual technology and provide improved
performance for specific applications. Here, the electronic circuitry provides complex
functionality and programmability while the optoelectronic devices provide high-speed
switching and compatibility with existing optical media. Arrays of these smart pixels leverage
the parallelism of optics for interconnections as well as computation. A smart pixel device, a
light emitting diode (LED) under the control of a field-effect transistor (FET), can now be made
entirely out of organic materials on the same substrate for the first time. In general, the benefit of
organic over conventional semiconductor electronics is that they should (when mass-production

techniques take over) lead to cheaper, lighter, circuitry that can be printed rather than etched.
Scientists at Bell Labs have made 300-micron-wide pixels using polymer FETs and LEDs made
from a sandwich of organic materials, one of which allows electrons to flow, another which acts
as a highway for holes (the absence of electrons); light is produced when electrons and holes meet.
The pixels are quite potent, with a brightness of about 2,300 candela/m², compared to a figure of
100 for present flat-panel displays. A Cambridge University group has also made an all-organic
device, not as bright as the Bell Labs version, but easier to make on a large scale.

5.1 Optical Transistors

An optical transistor, also known as an optical switch, is a device that switches or amplifies
optical signals. Light incident on an optical transistor's input changes the intensity of light
emitted from the transistor’s output. Output power is supplied by an additional optical source.
Since the input signal intensity may be weaker than that of the source, an optical transistor
amplifies the optical signal. The device is the optical analog of the electronic transistor that
forms the basis of modern electronic devices. Optical transistors provide a means to control light
using only light and have applications in optical computing and fiber-optic communication
networks. Such technology has the potential to exceed the speed of electronics, while consuming
less power.

Since photons inherently do not interact with each other, an optical transistor must employ an
operating medium to mediate interactions. This is done without converting optical to electronic
signals as an intermediate step. Implementations using a variety of operating mediums have been
proposed and experimentally demonstrated. However, their ability to compete with modern
electronics is currently limited.

5.2 Photonic Crystals

A photonic crystal is a periodic optical nanostructure that affects the motion of photons in much
the same way that ionic lattices affect electrons in solids. Photonic crystals occur in nature in the
form of structural coloration and animal reflectors, and, in different forms, promise to be useful
in a range of applications.

When it comes to computing, one of the primary bottlenecks in increasing data transfer rates is
the conversion time required from optics to electronics when the signal reaches a semiconductor
microprocessor. One approach to addressing this problem would be converting systems to ones
that are based on optics, with optical microprocessors in place of semiconductor
microprocessors. Achieving this feat would lead to an era of all-optical information processing.

Optics, however, is not the sole technology eyed for the future of computing. Myriad
technologies, known as quantum computing technologies, are also under consideration. Even
though they lack some of the advantages of photonics, they have immense potential to overtake
conventional systems.

Information processing in photonics is currently handled by artificial structures known as


photonic crystals [1]. These dielectric structures can be made from materials such as silicon,
compound semiconductors or polymers. The functionality of these structures depends on the
periodic variation in refractive index, which results in a photonic band gap in the structure over
a particular frequency range.

5.2.1 Light localization

These structures, which reflect light in the forbidden frequency range, prohibit the light's
passage through them. Alternatively, if there is a defect in the periodic structure, such as a line
defect that forms a waveguide or a point defect that creates a cavity, the forbidden frequencies
can be confined within the structure with little propagation loss [2]. This phenomenon, known as
light localization, was first predicted in 1987 and has come to fruition with photonic crystals [3]. A
cavity created in a photonic crystal due to a point defect is illustrated in Figure 1.

Cavities in photonic crystals can be used for creating optical logic gates, the building blocks of
computing. All arithmetic and logic operations are carried out in a computer making use of logic
gates — a modern PC relies on them to perform tasks based on the program stored in its
memory.

Once the technology for implementing logic gates is in place, it is a fairly straightforward
process to realize an optical microprocessor. Among the various logic gates, NAND and
NOR are said to be universal gates, since they can be used for performing all other logical
operations. In other words, a combination of NOT along with AND or OR can be used for
realizing all logical operations. Here, the implementation of NOT and OR gates using photonic
crystals is covered.

In the near future, a proper combination of these gates can be used for designing an optical
microprocessor.
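The universality claim can be checked exhaustively for the basic gates; a short sketch building NOT, AND and OR out of NAND alone:

```python
# All basic logic gates can be composed from NAND alone; verify the
# standard compositions over every input combination.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NOT a  =  a NAND a

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # AND    =  NOT(NAND)

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # OR     =  (NOT a) NAND (NOT b)

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == 1 - a
print("NAND compositions reproduce NOT, AND and OR")
```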

Logic gates are computational elements that provide an output state based on their inputs. In
semiconductor chips, logic gates are constructed with circuits that consist of transistors and
diodes. The transistors have an inherent logical functionality; in other words, the logic gates
provide an output that could be either 1 or 0 depending on their functionality. The input to the
logic gates is usually a low or high voltage. In this way, the device can be used for generating
binary digits. Logical switching can be carried out by transistors based on the input current or
voltage values, which is the cornerstone of today's semiconductor industry.

In optics, switching can be realized using optical cavities. An optical cavity supports only certain
wavelengths depending on the length and refractive index of the cavity. More specifically, the
length of the cavity should be an integral multiple of half the wavelength. This ensures that
nodes are formed at both the ends of the cavity. Once the length of the cavity is fixed, its transfer
properties depend on refractive index only. However, when the intensity of the light is
sufficiently high, the refractive index depends on the intensity of the light, too. This phenomenon,
known as the optical Kerr effect, can be harnessed for achieving optical switching with an
optical cavity [4]. In this technique, a laser pulse is used to change the refractive index of the
cavity medium, which, in turn, enables the system to switch to another logical state (Figure 2).

For realizing logic gates based on this technique, two types of cavity configurations can be used.
The first configuration involves a cavity length set to have resonance wavelength below the
wavelength of input light (Figure 2a). In the other configuration, the cavity length is set to have
resonance wavelength at the wavelength of input light (Figure 2b). In the first case, the change in
refractive index caused by the laser pulse tunes the input wavelength to match the resonance
wavelength of the cavity. In the second case, the laser pulse causes the resonance wavelength to

differ from the input wavelength, prohibiting the passage of light through the cavity. Both of
these switching approaches can be put to good use for creating optical logic gates.
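The resonance condition (cavity length an integral multiple of half the wavelength in the medium) and its Kerr-induced shift can be put in numbers; in this sketch the index values, cavity length, mode number and pulse intensity are illustrative, not taken from any particular device:

```python
# Kerr switching sketch: the cavity resonates when L = m * lambda / (2*n),
# i.e. its resonance wavelength is lambda_res = 2 * n * L / m.
n0 = 3.4        # illustrative linear refractive index (silicon-like)
n2 = 1e-17      # illustrative Kerr coefficient, m^2/W
L = 2.5e-6      # cavity length, m
m = 11          # longitudinal mode number

def resonance_wavelength(intensity_w_m2: float) -> float:
    n = n0 + n2 * intensity_w_m2   # optical Kerr effect: n depends on I
    return 2 * n * L / m

low = resonance_wavelength(0.0)    # cavity at rest
high = resonance_wavelength(1e16)  # during a strong switching pulse
print(f"resonance without pulse: {low * 1e9:.2f} nm")
print(f"resonance with pulse:    {high * 1e9:.2f} nm")

# The pulse shifts the resonance, tuning the cavity into (or out of)
# alignment with the input wavelength -- the switching action of Figure 2.
```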

5.2.2 Logic gates using photonic crystals

The implementation of a NOT gate using a photonic crystal is considered first. For this purpose,
the optical cavity considered here has a resonance wavelength the same as that of the incident
light. This cavity is represented by S in Figure 3. The optical cavity serves as the coupling point
between input and output waveguides. A bias light sent through the input waveguide is coupled
with the optical cavity and crosses over to reach the output waveguide, as the resonance
wavelength of the cavity is the same as that of the input light. On the other hand, when a laser
pulse is sent along with the bias light, because of the high intensity of the laser, the refractive
index of the cavity medium is altered, prohibiting the coupling of light into the output
waveguide. This scheme is useful for realizing a NOT gate. Here, the presence of a laser pulse
denotes a logical one state at the input waveguide. By changing the refractive index of the
cavity, this laser pulse detunes the cavity away from the resonance condition, and hence the
output turns to logical zero. In the absence of a laser pulse, which denotes a logical zero state at
the input, the coupling is effective and the light reaches the output waveguide, representing a
logical one state. This circuit thus satisfies the NOT logical operation and can be used to realize
a NOT gate (Figure 3).
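The behaviour just described can be captured in a few lines of Boolean shorthand. This is a truth-level sketch of the gate, not a physical simulation:

```python
def photonic_not(pulse_present):
    """NOT gate from the cavity scheme: the bias light couples through cavity S
    (and reaches the output) only when no laser pulse detunes the resonance."""
    cavity_on_resonance = not pulse_present  # the pulse shifts the resonance away
    return cavity_on_resonance               # output light present iff on resonance

assert photonic_not(True) is False   # pulse present (input 1) -> no output light (0)
assert photonic_not(False) is True   # no pulse (input 0) -> output light (1)
```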

The operation of an OR gate using photonic crystals is considered next. The circuit for realizing
this gate consists of three optical cavities: Two of them are coupled to input waveguides; the
third one acts as the coupling point between input and output waveguides and is also coupled to
both of these cavities. The cavities, L, coupled to input waveguides have resonance wavelength
lower than that of the bias light (Figure 4b). Conversely, the third cavity, S, coupled to output
waveguide has resonance wavelength set to be the same as that of the bias light. The bias light
from the input waveguides cannot traverse the cavity, since its wavelength does not match the
resonance wavelength of the cavity. This constraint can be overcome by sending a high-intensity
laser pulse, which, by changing the refractive index of the cavity medium, brings the cavity into
resonance with the wavelength of the bias light. In this way, the high-intensity laser pulse opens
the cavity for the bias light.

The presence of a laser pulse at the input waveguide denotes a logical one state, whereas its
absence denotes a logical zero. Once the bias light reaches either of the two intermediate
cavities, it is coupled to the third cavity, which has a resonance wavelength the same as that of
the bias light. Through this coupling, the bias light subsequently reaches the output waveguide.

The presence of light at the output waveguide denotes a logical one state, whereas its absence
denotes a logical zero state. In this way, the circuit with three optical cavities can effectively
execute OR logical operations for realizing an OR gate (Figure 4).
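In the same Boolean shorthand, the three-cavity circuit behaves as follows; again this is a truth-level sketch of the gate, not a physical model:

```python
def photonic_or(pulse_a, pulse_b):
    """OR gate from the three-cavity scheme: a pulse at either input opens its
    cavity L for the bias light, which then couples through the shared cavity S
    to the output waveguide."""
    bias_through_a = pulse_a  # cavity L_a passes the bias light only when pulsed
    bias_through_b = pulse_b  # likewise for cavity L_b
    return bias_through_a or bias_through_b  # cavity S accepts light from either branch

for a in (False, True):
    for b in (False, True):
        assert photonic_or(a, b) == (a or b)
```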

It is possible to realize the universal NOR gate by combining the NOT and OR gates. Further, an
AND gate can be derived from a suitable combination of NOR gates, as practiced in digital
circuitry [5]. This makes it possible to realize an optical microprocessor using photonic crystals.
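The standard construction referred to here can be spelled out: NOR is NOT applied to OR, and by De Morgan's laws an AND gate follows from three NOR gates:

```python
def NOT(a): return not a
def OR(a, b): return a or b
def NOR(a, b): return NOT(OR(a, b))  # NOR = NOT composed with OR

def AND(a, b):
    """AND built purely from NOR gates: a AND b = NOR(NOT a, NOT b),
    and NOT x = NOR(x, x), so three NOR gates suffice."""
    return NOR(NOR(a, a), NOR(b, b))

for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
```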

The advancement of photonic crystals in optical computing would give an impetus for further
venturing into the arena of quantum computing. The availability of high-quality-factor optical
cavities and coherent light sources such as lasers makes photonics an appealing platform for this
paradigm shift in computation.

6. CURRENT WORK AND IMPROVEMENTS

6.1 Lenslet’s EnLight Optical Processor

Lenslet’s EnLight Optical Processor combines optics, silicon, communication interfaces, and
development tools in a standard electronic board format. EnLight 64 delivers 240 giga MAC
operations per second and is said to be the first programmable optical processor. EnLight 256,
under development, reportedly boosts standard DSP performance by three orders of magnitude,
i.e., roughly 1,000x faster operation. Israel-based Lenslet Ltd. says the EnLight 256 carries a
specification of up to 8 tera (10^12) multiply-accumulate (MAC) operations per second, or
8,000 GMAC/s.

In addition, EnLight 256 is a general-purpose, fixed-point optical DSP (ODSP) with an
embedded optical core.

It consists of three elements: a vector-matrix multiplier (VMM) that performs the ultra-fast
vector-matrix operations; a vector processor unit (VPU) that handles 128 giga operations per
second; and a standard DSP (TI’s TMS320C64xx) for control and scalar processing.
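The vector-matrix multiply at the heart of such a core is the familiar operation sketched below; for an N-element vector and an N x N matrix it costs N^2 MAC operations, which is exactly what an optical VMM evaluates in parallel. This is a plain-Python sketch of the mathematics, not Lenslet's API:

```python
def vmm(vector, matrix):
    """Vector-matrix multiply: out[j] = sum_i vector[i] * matrix[i][j].
    For an N-element vector and an N x N matrix this performs N * N
    multiply-accumulate (MAC) operations."""
    cols = len(matrix[0])
    out = [0] * cols
    for j in range(cols):
        acc = 0
        for i, v in enumerate(vector):
            acc += v * matrix[i][j]  # one MAC operation
        out[j] = acc
    return out

# Small example: [1, 2] times [[3, 4], [5, 6]] gives [1*3 + 2*5, 1*4 + 2*6]
assert vmm([1, 2], [[3, 4], [5, 6]]) == [13, 16]
```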

Also, software for EnLight is developed using three main tools: a Matlab APL bit-exact
simulator, the APL Studio bit-exact and cycle-exact simulator, and the APL Studio Emulator.
These tools ensure a smooth development path, from the floating-point algorithm through to
running and debugging code.

Lenslet adds that EnLight targets computationally intense applications, such as video
compression, video encoders, security (baggage scanning and multi-sensor threat analysis), and
defense and communication systems. It can be applied either as a system-embedded accelerator
or a standalone processor. Potential benefits of the optical processor include enhanced
communications in noisy channels, multi-channel interference cancellation, and replacement of
existing multi-DSP boards.

 First programmable optical processor.
 Combines optics, silicon, communications, and tools in a standard board format.
 Specified at up to 8 tera (10^12) multiply-accumulate (MAC) operations per second.
 Software tools allow a smooth development path.

6.2 HP Lab's Experimental Optical Processor

Hewlett Packard Labs has developed an optical processor that could tackle a class of
computational problems not easily solved by conventional digital chips. The processor, which
was developed in conjunction with the US Defense Advanced Research Projects Agency’s
Mesodynamic Architectures program, houses 1,052 optical components laid down on a silicon-
based substrate.

According to a report in IEEE Spectrum, the chip design implements an Ising machine, which
can take on the computational behaviour of magnetic material. In a real magnet, atoms can
“spin” up or down to represent different states. In the optical approach, magnetic spin is
simulated using a combination of light beams, waveguides, interferometers, and heater wires –
all integrated on-chip. The spins in HPE’s optical implementation are represented by “two
phases of light that are 180 degrees out of phase of each other.”

An Ising machine is good at solving combinatorial optimization problems, like the traveling
salesman problem, which can be computed by tuning the optical elements until an optimal low-
energy configuration is reached. Such computation is highly efficient in terms of both speed and
power usage. With the winding down of Moore’s Law for conventional digital designs, such
Ising machines promise a new avenue for building more powerful computers. The HP Labs team
is already looking at future designs to scale up the number of spins.
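The optimization such a machine performs can be stated compactly: find the spin configuration that minimizes the Ising energy. The toy sketch below shows the energy function and a naive greedy minimizer; it is a software stand-in for illustration, not a model of HPE's hardware:

```python
def ising_energy(spins, J):
    """Ising energy E = -sum over i<j of J[i][j] * s_i * s_j (no external field)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def greedy_minimize(spins, J, sweeps=50):
    """Naive stand-in for the machine: flip any spin that lowers the energy."""
    spins = list(spins)
    for _ in range(sweeps):
        for i in range(len(spins)):
            flipped = spins[:i] + [-spins[i]] + spins[i + 1:]
            if ising_energy(flipped, J) < ising_energy(spins, J):
                spins = flipped
    return spins

# Two ferromagnetically coupled spins (J > 0) settle into alignment:
J = [[0, 1], [1, 0]]
print(greedy_minimize([1, -1], J))
```

Real combinatorial problems are encoded in the couplings J; the machine's job is the minimization itself, which dedicated hardware, optical or otherwise, can perform far faster than a loop like this.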

For the past two years, Stanford University has been working on its own implementation of an
Ising machine, using a combination of electronic and optical components. The latest
implementation can “solve 100-variable problems with any arbitrary set of connections between
variables.” Researchers report it has been tested on thousands of scenarios. In collaboration with
the Stanford team, Nippon Telegraph and Telephone in Japan has created its own
implementation of the design.

6.3 Optalysys optical processing powers convolutional neural network

Optalysys, the UK-based developer of an optical computing platform using low-power laser
light to carry out certain processor-intensive mathematical functions, has announced the first
implementation of a convolutional neural network (CNN) using its technology.

CNNs, a focus of interest for the kind of machine learning methods referred to as deep learning,
are expected to be crucial elements in applications such as the control of autonomous vehicles
and medical image analysis.

"Optalysys has for the first time applied optical processing to the computationally demanding
area of CNNs with initial accuracy rates of over 70 percent," said Optalysys CEO Nick New.
"Through our scalable optical approach, we are developing models that will offer whole new
levels of capability, not only cloud-based but also opening up the potential of CNNs to mobile
systems."

The company's technology, developed from its origins at University of Cambridge spin-out
Cambridge Correlators, uses optical components to perform a mathematical function on an input
light signal. This approach can allow Fourier transforms (FTs) to be calculated more rapidly
than conventional electronic computation can achieve, with the inherent potential to carry out
multiple computations in parallel and to scale up the capacity of the system.

The Optalysys architecture employs spatial light modulators (SLMs) to manipulate the optical
signal, and the company has been able to exploit the high-performance components now being
developed by the displays industry for high-resolution micro-displays. SLMs designed for those
applications offered the company a solution to the challenge of inputting data into its platform
and increasing the power of the system, although SLMs developed specifically for optical
computation may well become a feature of the system in the future.

"Our collaboration with the Genome Analysis Centre, now the Earlham Institute, to develop a
new Genetic Search System (GENESYS) recently concluded and surpassed expectations, and
has led to a co-processor product that is now in beta test with selected genetics institutes and
universities," Nick New told Optics.org.

"In parallel to that, we have also been developing the technology to suit other applications. There
are several areas where the same computational functions are clearly a good fit, and others where
we can adapt the workings of the optics to perform tasks that you would not normally associate
with FT operations. We have applied it to weather forecasting, where spherical harmonic
functions are involved in mapping the atmosphere's fluid dynamics; and to the solving of partial
differential equations. But opportunities in deep learning and AI may well be greater than
anywhere else."

7. ADVANTAGES OF OPTICAL COMPUTING

Higher performance: Although optical computers are still in their early stages and cannot yet
be compared to conventional computers, it is still safe to say that they have a higher processing
speed. There are two reasons for this. First, as discussed previously, metallic wires reduce the
transmission speed. Second, optical signals travel at the speed of light, and nothing is faster
than light.

Less power consumption: Modern-day computers consume a lot of energy: reportedly over 80
watts in the idle state, around 120 watts during normal use, and some 250 watts in performance
mode, and much of this energy is not used efficiently.

Less heat is released: As previously mentioned, optical computers operate using lasers, which,
depending on the application, radiate little heat. Moreover, unlike conventional computers,
optical computers do not require processor ventilation; they could therefore be built smaller,
with no free space needed for airflow. As a consequence, the probability of a fire occurring due
to overheating is significantly reduced.

Less noise: Conventional computers often produce a lot of noise because of their rotating fans.
Since optical computers do not need cooling fans, the noise factor is also reduced significantly.

More flexibility in layout: Conventional computers are usually built in the form of a
rectangular box (or in the form of a laptop). The reason for this is the speed of the electric
connections. "Using optical components, the distance of communication does not matter. Once
the signal is in an optical fiber, it does not matter whether the signal runs 1 meter or 1000
meters. Because of the low damping, long-range communication is possible. Still, the data rate
is very high and there is no crosstalk.
So optical computer technology has the potential to change the shape and layout of computers
fundamentally. The components of one computer can be spread across a car, a building or even
a city with almost no loss in performance. Consequently, the server/client and the peer-to-peer
architectures could be advanced. Many clients, terminals or even single components can be
connected optically, and consequently allow higher ranges."

Reduced loss in communication: Nowadays, communication is often carried over electric wires
or radio frequencies, which limits the range of the communication process. Optical computers
transmit data over optical fibers, which have a higher bandwidth and therefore deliver higher
performance.

8. DISADVANTAGES OF OPTICAL COMPUTING

Although there are many positive aspects about optical computers, there are also some
disadvantages.

Expensive Components: Parts for conventional computers are produced in plants dedicated
solely to manufacturing them, so the price is low, mainly due to mass production. For optical
components, on the other hand, there are no manufacturers that specialize in their production,
and as a result the price is high.

Components are not the "right size": In contrast with conventional computer parts, optical
components are still considerably larger. Researchers have not yet been able to create optical
components small enough to assemble a motherboard.

Manufacturing Problems: For the computer to work properly, the miniaturized components
need to be manufactured with extreme precision. As mentioned above, this has not yet been
achieved. Even the slightest deviation can cause the light beams (lasers) to be diverted, resulting
in serious problems. The production process is therefore quite costly.

Incompatibility: Conventional computers are assembled according to the von Neumann
architecture, and application software and operating systems are built around it. In contrast,
optical computers are put together according to a different architecture because of the system's
parallelism. As a result, operating systems such as Microsoft Windows may not function
properly, or may not function at all.

9. THE FUTURE

So far so good, but there is a caveat: even though optics are superior to electronics for
communication, they are not very suitable for actually carrying out calculations, at least as long
as we think in binary, in ones and zeros.

Here the human brain may hold a solution. We do not think in a binary way. Our brain is not
digital, but analogue, and it makes calculations all the time.

Computer engineers are now realising the potential of such analogue, or brain-like, computing,
and have created the new field of neuromorphic computing, in which they try to mimic how the
human brain works using electronic chips. And it turns out that optics are an excellent choice
for this new brain-like way of computing.

The same kind of technology used by MIT and our team, at Aarhus University, to create optical
communications between and on silicon chips, can also be used to make such neuromorphic
optical chips.

In fact, it has already been shown that such chips can do some basic speech recognition. And
two start-ups in the US, Lightelligence and Lightmatter, have now taken up the challenge to
realise such optical chips for artificial intelligence.

Optical chips are still some way behind electronic chips, but we're already seeing the results and
this research could lead to a complete revolution in computer power. Maybe five years from
now we will see the first optical co-processors in supercomputers. These will be used for very
specific tasks, such as the discovery of new pharmaceutical drugs.

But who knows what will follow after that? In ten years these chips might be used to detect and
recognise objects in self-driving cars and autonomous drones. And when you are talking to
Apple's Siri or Amazon's Echo, by then you might actually be speaking to an optical computer.

While the 20th century was the age of the electron, the 21st century is the age of the photon – of
light. And the future shines bright.

10. CONCLUSIONS

The history of optical computing reveals an extraordinary scientific adventure. It started with
the processing power of coherent light and particularly its Fourier-transform capability. The
history shows that considerable efforts were dedicated to the construction of optical processors
that could process large amounts of data in a short time. Today, optics is very successful in
information systems such as communications and memories, compared to its relative failure in
computing.
All the research results in optical computing contribute strongly to the development of new
research topics such as biophotonics, nanophotonics, optofluidics, and femtosecond nonlinear
optics. But the dream of an all-optical computer overtaking the digital computer never became
reality, and optical correlators for pattern recognition have almost disappeared. The speed of the
optical processor was always limited by the speed of the input and output devices. Digital
computers have progressed very rapidly, Moore's law is still valid, multi-core processors are
ever more powerful, and it is clear that digital computers are easier to use and offer more
flexibility. Digital computers have progressed faster than optical processors. Optical computing
is mostly analogue, whereas electronic computing is digital. Due to the lack of appropriate
optical components, digital optical computers were not able to compete with electronic ones.
The solution is to combine optics and electronics, and to use optics only where it can bring
something that electronics cannot. The potential of optics for parallel real-time processing
remains, and the future will tell whether optical computing will come back, for example through
the use of nanotechnologies.

11. REFERENCES

[1] K. Preston, Coherent Optical Computers, McGraw-Hill, New York, NY, USA, 1972.

[2] S. H. Lee, Optical Information Processing Fundamentals, Springer, Berlin, Germany, 1981.

[3] H. H. Arsenault, T. Szoplik, and B. Macukow, Optical Processing and Computing,
Academic Press, San Diego, Calif, USA, 1989.

[4] J. Shamir, Optical Systems and Processes, SPIE Press, Bellingham, Wash, USA, 1999.

[5] H. J. Caulfield, “Perspectives in optical computing,” Computer, vol. 31, no. 2, pp. 22–25,
1998.

[6] https://ieeexplore.ieee.org/document/5218639

[7] https://ieeexplore.ieee.org/document/1672821
