
A New Focus for Light
Augmented Reality
Digital Imaging, Reimagined
Invisible Revolution
Nanocharging Solar
Nanohealing
Neuron Control
Peering into Video's Future
Personalized Medical Monitors
Single-Cell Analysis
A Smarter Web
193nm immersion lithography: Status and challenges
Ultrafast laser fabricates high-index-contrast structures in glass
New approach allows integration of biosensors into electronics
Shaping gold nanorods for plasmonic applications
Waveguiding by subwavelength arrays of active quantum dots
Liquid-crystal-based devices manipulate terahertz-frequency radiation
New molecules may improve the performance of optical devices
Optical wavefront measurement using a novel phase-shifting point-diffraction interferometer
A Mathematical Solution for Another Dimension
A Single-Photon Server with Just One Atom
Double-negative metamaterial edges towards the visible
From Diatoms to Gas Sensors
Gamma-Ray Burst Challenges Theory
Magnifying superlenses show more detail than before
Physicists shine a light, produce startling liquid jet
Flexible Batteries That Never Need to Be Recharged
The LXI Standard: Past, Present and Future
A New Focus for Light

Researchers trying to make high-capacity DVDs, as well as more-powerful computer chips and
higher-resolution optical microscopes, have for years run up against the "diffraction limit." The
laws of physics dictate that the lenses used to direct light beams cannot focus them onto a spot
whose diameter is less than half the light's wavelength. Physicists have been able to get around the
diffraction limit in the lab--but the systems they've devised have been too fragile and complicated
for practical use. Now Harvard University electrical engineers led by Kenneth Crozier and
Federico Capasso have discovered a simple process that could bring the benefits of tightly focused
light beams to commercial applications. By adding nanoscale "optical antennas" to a commercially
available laser, Crozier and Capasso have focused infrared light onto a spot just 40 nanometers
wide--one-twentieth the light's wavelength. Such optical antennas could one day make possible
DVD-like discs that store 3.6 terabytes of data--the equivalent of more than 750 of today's 4.7-
gigabyte recordable DVDs.
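
As a quick sanity check on those figures, here is a short Python sketch; the 800-nanometer wavelength is our own inference from the statement that 40 nanometers is one-twentieth of the wavelength, not a number the researchers supplied.

    # Rough check of the figures quoted above (assumed wavelength: 800 nm infrared,
    # inferred from the claim that a 40 nm spot is one-twentieth of the wavelength).
    wavelength_nm = 800.0

    diffraction_limited_spot_nm = wavelength_nm / 2      # classical limit: about half the wavelength
    antenna_spot_nm = wavelength_nm / 20                 # spot reported for the optical antenna

    dvd_capacity_bytes = 4.7e9                           # today's recordable DVD
    projected_capacity_bytes = 3.6e12                    # projected antenna-based disc

    print(diffraction_limited_spot_nm)                   # 400.0 nm
    print(antenna_spot_nm)                               # 40.0 nm
    print(projected_capacity_bytes / dvd_capacity_bytes) # ~766, i.e. "more than 750" DVDs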
Crozier and Capasso build their device by first depositing an insulating layer onto the light-
emitting edge of the laser. Then they add a layer of gold. They carve away most of the gold,
leaving two rectangles of only 130 by 50 nanometers, with a 30-nanometer gap between them.
These form an antenna. When light from the laser strikes the rectangles, the antenna has what
Capasso calls a "lightning-rod effect": an intense electrical field forms in the gap, concentrating
the laser's light onto a spot the same width as the gap.
"The antenna doesn't impose design constraints on the laser," Capasso says, because it can be
added to off-the-shelf semiconductor lasers, commonly used in CD drives. The team has already
demonstrated the antennas with several types of lasers, each producing a different wavelength of
light. The researchers have discussed the technology with storage-device companies Seagate and
Hitachi Global Storage Technologies.
Another application could be in photolithography, says Gordon Kino, professor emeritus of
electrical engineering at Stanford University. This is the method typically used to make silicon
chips, but the lasers that carve out ever-smaller features on silicon are also constrained by the
diffraction limit. Electron-beam lithography, the technique that currently allows for the smallest
chip features, requires a large machine that costs millions of dollars and is too slow to be used in
mass production. "This is a hell of a lot simpler," says Kino of Crozier and Capasso's technique,
which relies on a laser that costs about $50.
But before the antennas can be used for lithography, the engineers will need to make them even
smaller: the size of the antennas must be tailored to the wavelength of the light they focus.
Crozier and Capasso's experiments have used infrared lasers, and photolithography relies on
shorter-wavelength ultraviolet light. In order to inscribe circuitry on microchips, the researchers
must create antennas just 50 nanometers long.
Capasso and Crozier's optical antennas could have far-reaching and unpredictable implications,
from superdense optical storage to superhigh-resolution optical microscopes. Enabling engineers
to simply and cheaply break the diffraction limit has made the many applications that rely on light
shine that much brighter.
http://www.techreview.com/Infotech/18295/

Augmented Reality

Markus Kähäri wants to superimpose digital information on the real world.


Finding your way around a new city can be exasperating: juggling maps and guidebooks, trying to
figure out where you are on roads with no street signs, talking with locals who give directions by
referring to unfamiliar landmarks. If you're driving, a car with a GPS navigation system can make
things easier, but it still won't help you decide, say, which restaurant suits both your palate and
your budget. Engineers at the Nokia Research Center in Helsinki, Finland, hope that a project
called Mobile Augmented Reality Applications will help you get where you're going--and decide
what to do once you're there.
Last October, a team led by Markus Kähäri unveiled a prototype of the system at the International
Symposium on Mixed and Augmented Reality. The team added a GPS sensor, a compass, and
accelerometers to a Nokia smart phone. Using data from these sensors, the phone can calculate the
location of just about any object its camera is aimed at. Each time the phone changes location, it
retrieves the names and geographical coördinates of nearby landmarks from an external database.
The user can then download additional information about a chosen location from the Web--say, the
names of businesses in the Empire State Building, the cost of visiting the building's observatories,
or hours and menus for its five eateries.
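
To make the geometry concrete, here is a minimal Python sketch, written for this article rather than taken from Nokia's prototype, of how a GPS fix and a compass heading could decide whether a hypothetical landmark sits in the camera's field of view.

    import math

    # Minimal sketch (not Nokia's code) of how GPS + compass readings could decide
    # whether a landmark falls inside the phone camera's field of view.
    # The coordinates, heading, and landmark list below are hypothetical.

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def in_view(phone_lat, phone_lon, compass_deg, landmark, fov_deg=50):
        """True if the landmark's bearing lies within half the field of view of the heading."""
        b = bearing_deg(phone_lat, phone_lon, landmark["lat"], landmark["lon"])
        diff = (b - compass_deg + 180) % 360 - 180   # signed angular difference
        return abs(diff) <= fov_deg / 2

    landmarks = [{"name": "Empire State Building", "lat": 40.7484, "lon": -73.9857}]
    print([lm["name"] for lm in landmarks if in_view(40.7527, -73.9772, 225, lm)])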
The Nokia project builds on more than a decade of academic research into mobile augmented
reality. Steven Feiner, the director of Columbia University's Computer Graphics and User
Interfaces Laboratory, undertook some of the earliest research in the field and finds the Nokia
project heartening. "The big missing link when I started was a small computer," he says. "Those
small computers are now cell phones."
Despite the availability and fairly low cost of the sensors the Nokia team used, some engineers
believe that they introduce too much complexity for a commercial application. "In my opinion,
this is very exotic hardware to provide," says Valentin Lefevre, chief technology officer and
cofounder of Total Immersion, an augmented-reality company in Suresnes, France. "That's why
we think picture analysis is the solution." Relying on software alone, Total Immersion's system
begins with a single still image of whatever object the camera is aimed at, plus a rough digital
model of that object; image-recognition algorithms then determine what data should be superimposed
on the image. The company is already marketing a mobile version of its system to cell-
phone operators in Asia and Europe and expects the system's first applications to be in gaming and
advertising.
Nokia researchers have begun working on real-time image-recognition algorithms as well; they
hope the algorithms will eliminate the need for location sensors and improve their system's
accuracy and reliability. "Methods that don't rely on those components can be more robust," says
Kari Pulli, a research fellow at the Nokia Research Center in Palo Alto, CA.
All parties agree, though, that mobile augmented reality is nearly ready for the market. "For
mobile-phone applications, the technology is here," says Feiner. One challenge is convincing
carriers such as Sprint or Verizon that customers would pay for augmented-reality services. "If
some big operator in the U.S. would launch this, it could fly today," Pulli says.
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging&id=18291

Digital Imaging, Reimagined

Richard Baraniuk and Kevin Kelly believe compressive sensing could help devices such as
cameras and medical scanners capture images more efficiently.
Richard Baraniuk and Kevin Kelly have a new vision for digital imaging: they believe an overhaul
of both hardware and software could make cameras smaller and faster and let them take incredibly
high-resolution pictures.
Today's digital cameras closely mimic film cameras, which makes them grossly inefficient. When
a standard four-megapixel digital camera snaps a shot, each of its four million image sensors
characterizes the light striking it with a single number; together, the numbers describe a picture.
Then the camera's onboard computer compresses the picture, throwing out most of those numbers.
This process needlessly chews through the camera's battery.
Baraniuk and Kelly, both professors of electrical and computer engineering at Rice University,
have developed a camera that doesn't need to compress images. Instead, it uses a single image
sensor to collect just enough information to let a novel algorithm reconstruct a high-resolution
image.
At the heart of this camera is a new technique called compressive sensing. A camera using the
technique needs only a small percentage of the data that today's digital cameras must collect in
order to build a comparable picture. Baraniuk and Kelly's algorithm turns visual data into a
handful of numbers that it randomly inserts into a giant grid. There are just enough numbers to
enable the algorithm to fill in the blanks, as we do when we solve a Sudoku puzzle. When the
computer solves this puzzle, it has effectively re-created the complete picture from incomplete
information.
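
For readers who want to see the idea in code, the following toy Python/NumPy sketch is our own illustration of the principle rather than Baraniuk and Kelly's algorithm: a handful of random measurements of a sparse signal are enough to recover it exactly, here using orthogonal matching pursuit.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                     # signal length, number of measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = Phi @ x                                      # the m "incomplete" measurements

    # Orthogonal matching pursuit: greedily pick the columns of Phi that best
    # explain the remaining residual, then solve a small least-squares problem.
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print(np.allclose(x, x_hat, atol=1e-8))   # typically True: recovery from 64 of 256 samples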
Compressive sensing began as a mathematical theory whose first proofs were published in 2004;
the Rice group has produced an advanced demonstration in a relatively short time, says Dave
Brady of Duke University. "They've really pushed the applications of the theory," he says.
Kelly suspects that we could see the first practical applications of compressive sensing within two
years, in MRI systems that capture images up to 10 times as quickly as today's scanners do. In five
to ten years, he says, the technology could find its way into consumer products, allowing tiny
mobile-phone cameras to produce high-quality, poster-size images. As our world becomes
increasingly digital, compressive sensing is set to improve virtually any imaging system,
providing an efficient and elegant way to get the picture.
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging&id=18293

Invisible Revolution

Artificially structured metamaterials could transform telecommunications, data storage, and even
solar energy, says David R. Smith.
The announcement last November of an "invisibility shield," created by David R. Smith of Duke
University and colleagues, inevitably set the media buzzing with talk of H. G. Wells's invisible
man and Star Trek's Romulans. Using rings of printed circuit boards, the researchers managed to
divert microwaves around a kind of "hole in space"; even when a metal cylinder was placed at the
center of the hole, the microwaves behaved as though nothing were there.
It was arguably the most dramatic demonstration so far of what can be achieved with
metamaterials, composites made up of precisely arranged patterns of two or more distinct
materials. These structures can manipulate electromagnetic radiation, including light, in ways not
readily observed in nature. For example, photonic crystals--arrays of identical microscopic blocks
separated by voids--can reflect or even inhibit the propagation of certain wavelengths of light;
assemblies of small wire circuits, like those Smith used in his invisibility shield, can bend light in
strange ways.
But can we really use such materials to make objects seem to vanish? Philip Ball spoke with
Smith, who explains why metamaterials are literally changing the way we view the world.
Technology Review: How do metamaterials let you make things invisible?
David R. Smith: It's a somewhat complicated procedure but can be very simple to visualize.
Picture a fabric formed from interwoven threads, in which light is constrained to travel along the
threads. Well, if you now take a pin and push it through the fabric, the threads are distorted,
making a hole in the fabric. Light, forced to follow the threads, is routed around the hole. John
Pendry at Imperial College in London calculated what would be required of a metamaterial that
would accomplish exactly this. The waves are transmitted around the hole and combined on the
other side. So you can put an object in the hole, and the waves won't "see" it--it's as if they'd
crossed a region of empty space.
TR: And then you made it?
DRS: Yes--once we had the prescription, we set about using the techniques we'd developed over
the past few years to make the material. We did the experiment at microwave frequencies because
the techniques are very well established there and we knew we would be able to produce a
demonstration quickly. We printed millimeter-scale metal wires and split rings, shaped like the
letter C, onto fiberglass circuit boards. The shield consisted of about 10 concentric cylinders made
up of these split-ring building blocks, each with a slightly different pattern.
TR: So an object inside the shield is actually invisible?
DRS: More or less, but when we talk about invisibility in these structures, it's not about making
things vanish before our eyes--at least, not yet. We can hide them from microwaves, but the shield
is plain enough to see. This isn't like stealth shielding on military aircraft, where you just try to
eliminate reflection--the microwaves seem literally to pass through the object inside the shield. If
this could work with visible light, then you really would see the object vanish.
TR: Could you hide a large object, like an airplane, from radar by covering its surface with the
right metamaterial?
DRS: I'm not sure we can do that. If you look at stealth technology today, it's generally interested
in hiding objects from detection over a large radar bandwidth. But the invisibility bandwidth is
inherently limited in our approach. The same is true for hiding objects from all wavelengths of
visible light--that would certainly be a stretch.
TR: How else might we use metamaterials?
DRS: Well, this is really an entirely new approach to optics. There's a huge amount of freedom for
design, and as is usual with new technology, the best uses probably haven't been thought of yet.
One of the most provocative and controversial predictions came from John Pendry, who predicted
that a material with a negative refractive index could focus light more finely than any
conventional lens material. The refractive index measures how much light bends when it passes
through a material--that's what makes a pole dipped in water look as though it bends. A negative
refractive index means the material bends light the "wrong" way. So far, we and others have been
working not with visible light but with microwaves, which are also electromagnetic radiation, but
with a longer wavelength. This means the components of the metamaterial must be
correspondingly bigger, and so they're much easier to make. Pendry's suggestion was confirmed in
2005 by a group from the University of California, Berkeley, who made a negative-refractive-
index metamaterial for microwaves.
Making a negative-index material that works for visible light is more difficult, because the
building blocks have to be much smaller--no bigger than 10 to 20 nanometers. That's now very
possible to achieve, however, and several groups are working on it. If it can be done, these
metamaterials could be used to increase the amount of information stored on CDs and DVDs or to
speed up transmission and reduce power consumption in fiber-optic telecommunications.
We can also concentrate electromagnetic fields--the exact opposite of what the cloak does--which
might be valuable in energy-harvesting applications. With a suitable metamaterial, we could
concentrate light coming from any direction--you wouldn't need direct sunlight. Right now we're
trying to design structures like this. If we could achieve that for visible light, it could make solar
power more efficient.
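
To make the negative-refraction point concrete, here is a small worked example in Python using Snell's law; the indices and angle are illustrative numbers of our own choosing, not values from Smith's experiments.

    import math

    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    # Illustrative numbers only; the interview does not give specific indices.
    def refraction_angle_deg(n1, n2, theta1_deg):
        return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

    print(refraction_angle_deg(1.0, 1.5, 30))    # ordinary glass:   about  19.5 degrees
    print(refraction_angle_deg(1.0, -1.5, 30))   # negative index:   about -19.5 degrees,
                                                 # i.e. the ray bends to the "wrong" side of the normal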
http://www.technologyreview.com/Nanotech/18292/page2/

Nanocharging Solar

Arthur Nozik believes quantum-dot solar power could boost output in cheap photovoltaics
No renewable power source has as much theoretical potential as solar energy. But the promise of
cheap and abundant solar power remains unmet, largely because today's solar cells are so costly to
make.
Photovoltaic cells use semiconductors to convert light energy into electrical current. The
workhorse photovoltaic material, silicon, performs this conversion fairly efficiently, but silicon
cells are relatively expensive to manufacture. Some other semiconductors, which can be deposited
as thin films, have reached market, but although they're cheaper, their efficiency doesn't compare
to that of silicon. A new solution may be in the offing: some chemists think that quantum dots--
tiny crystals of semiconductors just a few nanometers wide--could at last make solar power cost-
competitive with electricity from fossil fuels.
By dint of their size, quantum dots have unique abilities to interact with light. In silicon, one
photon of light frees one electron from its atomic orbit. In the late 1990s, Arthur Nozik, a senior
research fellow at the National Renewable Energy Laboratory in Golden, CO, postulated that
quantum dots of certain semiconductor materials could release two or more electrons when struck
by high-energy photons, such as those found toward the blue and ultraviolet end of the spectrum.
In 2004, Victor Klimov of Los Alamos National Laboratory in New Mexico provided the first
experimental proof that Nozik was right; last year he showed that quantum dots of lead selenide
could produce up to seven electrons per photon when exposed to high-energy ultraviolet light.
Nozik's team soon demonstrated the effect in dots made of other semiconductors, such as lead
sulfide and lead telluride.
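
The arithmetic behind carrier multiplication comes down to energy bookkeeping, sketched below in Python; the 0.6-electron-volt band gap is an assumed illustrative value, not a figure reported by Nozik or Klimov, and real yields fall below this ceiling.

    import math

    # Energy-conservation ceiling on carrier multiplication: one photon can at most
    # yield floor(E_photon / E_gap) electron-hole pairs. Illustrative numbers only;
    # the 0.6 eV gap for a quantum dot is an assumption, not a figure from the article.
    def photon_energy_ev(wavelength_nm):
        return 1239.84 / wavelength_nm          # E = hc / lambda, in electron-volts

    gap_ev = 0.6                                # assumed quantum-dot band gap
    for wavelength_nm in (800, 400, 250):       # infrared, violet, deep ultraviolet
        e = photon_energy_ev(wavelength_nm)
        print(wavelength_nm, "nm ->", round(e, 2), "eV, at most", math.floor(e / gap_ev), "electrons")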
These experiments have not yet produced a material suitable for commercialization, but they do
suggest that quantum dots could someday increase the efficiency of converting sunlight into
electricity. And since quantum dots can be made using simple chemical reactions, they could also
make solar cells far less expensive. Researchers in Nozik's lab, whose results have not been
published, recently demonstrated the extra-electron effect in quantum dots made of silicon; these
dots would be far less costly to incorporate into solar cells than the large crystalline sheets of
silicon used today.
To date, the extra-electron effect has been seen only in isolated quantum dots; it was not evident in
the first prototype photovoltaic devices to use the dots. The trouble is that in a working solar cell,
electrons must travel out of the semiconductor and into an external electrical circuit. Some of the
electrons freed in any photovoltaic cell are inevitably "lost," recaptured by positive "holes" in the
semiconductor. In quantum dots, this recapture happens far faster than it does in larger pieces of a
semiconductor; many of the freed electrons are immediately swallowed up.
The Nozik team's best quantum-dot solar cells have managed only about 2 percent efficiency, far
less than is needed for a practical device. However, the group hopes to boost the efficiency by
modifying the surfaces of the quantum dots or improving electron transport between dots.
The project is a gamble, and Nozik readily admits that it might not pay off. Still, the enormous
potential of the nanocrystals keeps him going. Nozik calculates that a photovoltaic device based
on quantum dots could have a maximum efficiency of 42 percent, far better than silicon's
maximum efficiency of 31 percent. The quantum dots themselves would be cheap to manufacture,
and they could do their work in combination with materials like conducting polymers that could
also be produced inexpensively. A working quantum dot-polymer cell could eventually place solar
electricity on a nearly even economic footing with electricity from coal. "If you could [do this],
you would be in Stockholm--it would be revolutionary," says Nozik.
A commercial quantum-dot solar cell is many years away, assuming it's even possible. But if it is,
it could help put our fossil-fuel days behind us.
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging&id=18285

Nanohealing

Tiny fibers will save lives by stopping bleeding and aiding recovery from brain injury, says
Rutledge Ellis-Behnke.
In the break room near his lab in MIT's brand-new neuroscience building, research scientist
Rutledge Ellis-Behnke provides impromptu narration for a video of himself performing surgery. In
the video, Ellis-Behnke makes a deep cut in the liver of a rat, intentionally slicing through a main
artery. As the liver pulses from the pressure of the rat's beating heart, blood spills from the wound.
Then Ellis-Behnke covers the wound with a clear liquid, and the bleeding stops almost at once.
Untreated, the wound would have proved fatal, but the rat lived on.
The liquid Ellis-Behnke used is a novel material made of nanoscale protein fragments, or peptides.
Its ability to stop bleeding almost instantly could be invaluable in surgery, at accident sites, or on
the battlefield. Under conditions like those inside the body, the peptides self-assemble into a
fibrous mesh that to the naked eye appears to be a transparent gel. Even more remarkably, the
material creates an environment that may accelerate healing of damaged brain and spinal tissue.
Ellis-Behnke stumbled on the material's capacity to stanch bleeding by chance, during
experiments designed to help restore vision to brain-damaged hamsters. And his discovery was
itself made possible by earlier serendipitous events. In the early 1990s, Shuguang Zhang, now a
biomedical engineer at MIT, was working in the lab of MIT biologist Alexander Rich. Zhang had
been studying a repeating DNA sequence that coded for a peptide. He and a colleague
inadvertently found that under certain conditions, copies of the peptide would combine into fibers.
Zhang and his colleagues began to reëngineer the peptides to exhibit specific responses to electric
charges and water. They ended up with a 16-amino-acid peptide that looks like a comb, with
water-loving teeth projecting from a water-repelling spine. In a salty, aqueous environment--such
as that inside the body--the spines spontaneously cluster together to avoid the water, forming long,
thin fibers that self-assemble into curved ribbons. The process transforms a liquid peptide solution
into a clear gel.
Originally, Ellis-Behnke intended to use the material to promote the healing of brain and spinal-
cord injuries. In young animals, neurons are surrounded by materials that help them grow; Ellis-
Behnke thought that the peptide gel could create a similar environment and prevent the formation
of scar tissue, which obstructs the regrowth of severed neurons. "It's like if you're walking through
a field of wheat, you can walk easily because the wheat moves out of the way," he says. "If you're
walking through a briar patch, you get stuck." In the hamster experiments, the researchers found
that the gel allowed neurons in a vision-related tract of the brain to grow across a lesion and
reëstablish connections with neurons on the other side, restoring the hamster's sight.
It was during these experiments that Ellis-Behnke discovered the gel's ability to stanch bleeding.
Incisions had been made in the hamsters' brains, but when the researchers applied the new
material, all residual bleeding suddenly stopped. At first, Ellis-Behnke says, "we thought that we'd
actually killed the animals. But the heart was still going." Indeed, the rodents survived for months,
apparently free of negative side effects.
The material has several advantages over current methods for stopping bleeding. It's faster and
easier than cauterization and does not damage tissue. It could protect wounds from the air and
supply amino-acid building blocks to growing cells, thereby accelerating healing. Also, within a
few weeks the body completely breaks the peptides down, so they need not be removed from the
wound, unlike some other blood-stanching agents. The synthetic material also has a long shelf life,
which could make it particularly useful in first-aid kits.
The material's first application will probably come in the operating room. Not only would it stop
the bleeding caused by surgical incisions, but it could also form a protective layer over wounds.
And since the new material is transparent, surgeons should be able to apply a layer of it and then
operate through it. "When you perform surgery, you are constantly suctioning and cleaning the site
to be able to see it," says Ram Chuttani, a gastroenterologist and professor at Harvard Medical
School. "But if you can seal it, you can continue to perform the surgery with much clearer vision."
The hope is that surgeons will be able to operate faster, thus reducing complications. The material
may also make it possible to perform more procedures in a minimally invasive way by allowing a
surgeon to quickly stop bleeding at the end of an endoscope.
Chuttani, who was not involved with the research, cautions that the work is still "very
preliminary," with no tests yet on large animals or humans. But if such tests go well, Ellis-Behnke
estimates, the material could be approved for use in humans in three to five years. "I don't know
what the impact is going to be," he says. "But if we can stop bleeding, we can save a lot of
people." Ellis-Behnke and his colleagues are also continuing to explore the material's nerve
regeneration capabilities. They're looking for ways to increase the rate of neuronal growth so that
doctors can treat larger brain injuries, such as those that can result from stroke. But such a
treatment will take at least five to ten years to reach humans, Ellis-Behnke says.
Even without regenerating nerves, the material could save countless lives in surgery or at accident
sites. And already, the material's performance is encouraging research by demonstrating how
engineering nanostructures to self-assemble in the body could profoundly improve medicine.
http://www.technologyreview.com/Nanotech/18290/page2/

Neuron Control

Karl Deisseroth's genetically engineered "light switch," which lets scientists turn selected parts of
the brain on and off, may help improve treatments for depression and other disorders.
In his psychiatry practice at the Stanford Medical Center, Karl Deisseroth sometimes treats
patients who are so severely depressed that they can't walk, talk, or eat. Intensive treatments, such
as electroconvulsive therapy, can literally save such patients' lives, but often at the cost of memory
loss, headaches, and other serious side effects. Deisseroth, who is both a physician and a
bioengineer, thinks he has a better way: an elegant new method for controlling neural cells with
flashes of light. The technology could one day lead to precisely targeted treatments for psychiatric
and neurological disorders; that precision could mean greater effectiveness and fewer side effects.
While scientists know something about the chemical imbalances underlying depression, it's still
unclear exactly which cells, or networks of cells, are responsible for it. In order to identify the
circuits involved in such diseases, scientists must be able to turn neurons on and off. Standard
methods, such as electrodes that activate neurons with jolts of electricity, are not precise enough
for this task, so Deisseroth, postdoc Ed Boyden (now an assistant professor at MIT; see
"Engineering the Brain"), and graduate student Feng Zhang developed a neural controller that can
activate specific sets of neurons.
They adapted a protein from a green alga to act as an "on switch" that neurons can be genetically
engineered to produce (see "Artificially Firing Neurons," TR35, September/October 2006). When
the neuron is exposed to light, the protein triggers electrical activity within the cell that spreads to
the next neuron in the circuit. Researchers can thus use light to activate certain neurons and look
for specific responses--a twitch of a muscle, increased energy, or a wave of activity in a different
part of the brain.
Deisseroth is using this genetic light switch to study the biological basis of depression. Working
with a group of rats that show symptoms similar to those seen in depressed humans, researchers in
his lab have inserted the switch into neurons in different brain areas implicated in depression.
They then use an optical fiber to shine light onto those cells, looking for activity patterns that
alleviate the symptoms. Deisseroth says the findings should help scientists develop better
antidepressants: if they know exactly which cells to target, they can look for molecules or delivery
systems that affect only those cells. "Prozac goes to all the circuits in the brain, rather than just the
relevant ones," he says. "That's part of the reason it has so many side effects."
In the last year, Deisseroth has sent his switch to more than 100 research labs. "Folks are applying
it to all kinds of animals, including mice, worms, flies, and zebrafish," he says. Scientists are using
this and similar switches to study everything from movement to addiction to appetite. "These
technologies allow us to advance from observation to active intervention and control," says Gero
Miesenböck, a neuroscientist at Yale University. By evoking sensations or movements directly, he
says, "you can forge a much stronger connection between mental activity and behavior."
Deisseroth hopes his technology will one day become not just a research tool but a treatment in
itself, used alongside therapies that electrically stimulate large areas of the brain to treat
depression or Parkinson's disease. By activating only specific neurons, a specially engineered light
switch could limit those therapies' side effects. Of course, the researchers will need to solve some
problems first: they'll need to find safe gene-therapy methods for delivering the switch to the
target cells, as well as a way to shine light deep into the brain. "It's a long way off," says
Deisseroth. "But the obstacles aren't insurmountable." In the meantime, neuroscientists have the
use of a powerful new tool in their quest to uncover the secrets of the brain.
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging&id=18289

Peering into Video's Future

The Internet is about to drown in digital video. Hui Zhang thinks peer-to-peer networks could
come to the rescue.
Ted Stevens, the 83-year-old senior senator from Alaska, was widely ridiculed last year for a
speech in which he described the Internet as "a series of tubes." Yet clumsy as his metaphor may
have been, Stevens was struggling to make a reasonable point: the tubes can get clogged. And that
may happen sooner than expected, thanks to the exploding popularity of digital video.
TV shows, YouTube clips, animations, and other video applications already account for more than
60 percent of Internet traffic, says CacheLogic, a Cambridge, England, company that sells media
delivery systems to content owners and Internet service providers (ISPs). "I imagine that within
two years it will be 98 percent," adds Hui Zhang, a computer scientist at Carnegie Mellon
University. And that will mean slower downloads for everyone.
Zhang believes help could come from an unexpected quarter: peer-to-peer (P2P) file distribution
technology. Of course, there's no better playground for piracy, and millions have used P2P
networks such as Gnutella, Kazaa, and BitTorrent to help themselves to copyrighted content. But
Zhang thinks this black-sheep technology can be reformed and put to work helping legitimate
content owners and Internet-backbone operators deliver more video without overloading the
network.
For Zhang and other P2P proponents, it's all a question of architecture. Conventionally, video and
other Web content gets to consumers along paths that resemble trees, with the content owners'
central servers as the trunks, multiple "content distribution servers" as the branches, and
consumers' PCs as the leaves. Tree architectures work well enough, but they have three key
weaknesses: If one branch is cut off, all its leaves go with it. Data flows in only one direction, so
the leaves'--the PCs'--capacity to upload data goes untapped. And perhaps most important, adding
new PCs to the network merely increases its congestion--and the demands placed on the servers.
In P2P networks, by contrast, there are no central servers: each user's PC exchanges data with
many others in an ever-shifting mesh. This means that servers and their overtaxed network
connections bear less of a burden; data is instead provided by peers, saving bandwidth in the
Internet's core. If one user leaves the mesh, others can easily fill the gap. And adding users
actually increases a P2P network's power.
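
The bandwidth argument can be reduced to simple bookkeeping. The toy Python model below is our own idealization, not drawn from Zhang's work, but it shows why adding peers helps a mesh while it only burdens a server.

    # Idealized bandwidth bookkeeping (our own toy model, not from the article):
    # in a pure tree/server architecture the origin uploads one full stream per viewer,
    # while in a P2P mesh each viewer's upload capacity offsets part of that load.
    def origin_upload_mbps(viewers, stream_mbps, peer_upload_mbps=0.0):
        demand = viewers * stream_mbps
        contributed = viewers * peer_upload_mbps
        return max(demand - contributed, stream_mbps)   # the origin must still seed the stream

    print(origin_upload_mbps(10_000, 1.5))                        # server-only: 15,000 Mbps
    print(origin_upload_mbps(10_000, 1.5, peer_upload_mbps=1.0))  # mesh-assisted: 5,000 Mbps
    print(origin_upload_mbps(10_000, 1.5, peer_upload_mbps=1.5))  # self-sustaining: 1.5 Mbps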
There are just two big snags keeping content distributors and their ISPs from warming to mesh
architectures. First, to balance the load on individual PCs, the most advanced P2P networks, such
as BitTorrent, break big files into blocks, which are scattered across many machines. To
reassemble those blocks, a computer on the network must use precious bandwidth to broadcast
"metadata" describing which blocks it needs and which it already has.
Second, ISPs are loath to carry P2P traffic, because it's a big money-loser. For conventional one-
way transfers, ISPs can charge content owners such as Google or NBC.com according to the
amount of bandwidth they consume. But P2P traffic is generated by subscribers themselves, who
usually pay a flat monthly fee regardless of how much data they download or upload.
Zhang and others believe they're close to solving both problems. At Cornell University, computer
scientist Paul Francis is testing a P2P system called Chunkyspread that combines the best features
of trees and meshes. Members' PCs are arranged in a classic tree, but they can also connect to one
another, reducing the burden on the branches.
Just as important, Chunkyspread reassembles files in "slices" rather than blocks. A slice consists of
the nth bit of every block--for example, the fifth bit in every block of 20 bits. Alice's PC might
obtain a commitment from Bob's PC to send bit five from every block it possesses, from Carol's
PC to send bit six, and so forth. Once these commitments are made, no more metadata need
change hands, saving bandwidth. In simulations, Francis says, Chunkyspread far outperforms
simple tree-based multicast methods.
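
Here is a toy Python illustration, our own rather than Chunkyspread's code, of the slicing idea: once each peer commits to a slice number, the receiver can rebuild every block without further per-block metadata.

    # Toy illustration of "slices": slice n holds the nth bit of every block.
    BLOCK_BITS = 20

    def make_slice(blocks, n):
        """Bit n (counting from the most significant bit) of every block."""
        return [(b >> (BLOCK_BITS - 1 - n)) & 1 for b in blocks]

    def reassemble(slices):
        """Rebuild the original blocks from slices 0..BLOCK_BITS-1."""
        blocks = []
        for j in range(len(slices[0])):
            word = 0
            for n in range(BLOCK_BITS):
                word = (word << 1) | slices[n][j]
            blocks.append(word)
        return blocks

    blocks = [0b10101010101010101010, 0b11110000111100001111, 0b00000000001111111111]
    slices = [make_slice(blocks, n) for n in range(BLOCK_BITS)]   # one slice per peer
    print(reassemble(slices) == blocks)                           # True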
Zhang thinks new technology can also make carrying P2P traffic more palatable for ISPs. Right
now, operators have little idea what kind of data flows through their networks. At his Pittsburgh-
based stealth startup, Rinera Networks, Zhang is developing software that will identify P2P data,
let ISPs decide how much of it they're willing to carry, at what volume and price, and then deliver
it as reliably as server-based content distribution systems do--all while tracking everything for
accounting purposes. "We want to build an ecosystem such that service providers will actually
benefit from P2P traffic," Zhang explains. Heavy P2P users might end up paying extra fees--but in
the end, content owners and consumers won't gripe, he argues, since better accounting should
make the Internet function more effectively for everyone.
If this smells like a violation of the Internet's tradition of network neutrality--
the principle that ISPs should treat all bits equally, regardless of their origin--then it's because the
tradition needs to be updated for an era of very large file transfers, Zhang believes. "It's all about
volume," he says. "Of course, we don't want the service providers to dictate what they will carry
on their infrastructure. On the other hand, if P2P users benefit from transmitting and receiving
more bits, the guys who are actually transporting those bits should be able to share in that."
Networking and hardware companies have their eyes on technologies emerging from places like
Rinera and Francis's Cornell lab, even as they build devices designed to help consumers download
video and other files over P2P networks. Manufacturers Asus, Planex, and QNAP, for example, are
working with BitTorrent to embed the company's P2P software in their home routers, media
servers, and storage devices. With luck, Senator Stevens's tubes may stay unblocked a little longer.
http://www.technologyreview.com/Infotech/18284/page2/

Personalized Medical Monitors

John Guttag says using computers to automate some diagnostics could make medicine more
personal.
In late spring 2000, John Guttag came home from surgery. It had been a simple procedure to repair
a torn ligament in his knee, and he had no plans to revisit the hospital anytime soon. But that same
day his son, then a junior in high school, complained of chest pains. Guttag's wife promptly got
back in the car and returned to the hospital, where their son was diagnosed with a collapsed lung
and immediately admitted. Over the next year, Guttag and his wife spent weeks at a time in and
out of the hospital with their son, who underwent multiple surgeries and treatments for a series of
recurrences.
During that time, Guttag witnessed what became a familiar scenario. "The doctors would come in,
take a stethoscope, listen to his lungs, and make a pronouncement like 'He's 10 percent better than
yesterday,' and I wanted to say, 'I don't believe that,'" he says. "You can't possibly sit there and
listen with your ears and tell me you can hear a 10 percent difference. Surely there's a way to do
this more precisely."
It was an observation that any concerned parent might make, but for Guttag, who was then head of
MIT's Department of Electrical Engineering and Computer Science, it was a personal challenge.
"Health care just seemed like an area that was tremendously in need of our expertise," he says.
The ripest challenge, Guttag says, is analyzing the huge amounts of data generated by medical
tests. Today's physicians are bombarded with physiological information--temperature and blood
pressure readings, MRI scans, electrocardiogram (EKG) readouts, and x-rays, to name a few.
Wading through a single patient's record to determine signs of, say, a heart attack or stroke can be
difficult and time consuming. Guttag believes computers can help doctors efficiently interpret
these ever-growing masses of data. By quickly perceiving patterns that might otherwise be buried,
he says, software may provide the key to more precise and personalized medicine. "People aren't
good at spotting trends unless they're very obvious," says Guttag. "It dawned on me that doctors
were doing things that a computer could do better."
For instance, making sense of the body's electrical signals seemed, to Guttag, to be a natural fit for
computer science. Some of his earlier work on computer networks caught the attention of
physicians at Children's Hospital Boston. The doctors and the engineer set out to improve the
detection of epileptic seizures; ultimately, Guttag and graduate student Ali Shoeb designed
personalized seizure detectors. In 2004, the team examined recordings of the brain waves of more
than 30 children with epilepsy, before, during, and after seizures. They used the data to train a
"classification algorithm" to distinguish between seizure and nonseizure waveforms. With the help
of the algorithm, the researchers identified seizure patterns specific to each patient.
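
A minimal sketch of what such a patient-specific classifier might look like appears below, written in Python with scikit-learn on synthetic data; the feature choice and numbers are our own simplifications, not the Guttag-Shoeb system.

    import numpy as np
    from sklearn.svm import SVC

    # Sketch of a patient-specific "classification algorithm" in the spirit described
    # above, trained on synthetic data; the real system used recorded EEG, and the
    # per-window feature vectors here are our own simplification.
    rng = np.random.default_rng(1)
    n_windows, n_features = 400, 24                  # e.g. 2-second windows, 24 features each

    nonseizure = rng.normal(0.0, 1.0, (n_windows, n_features))
    seizure = rng.normal(1.5, 1.0, (n_windows // 10, n_features))   # rarer, shifted pattern

    X = np.vstack([nonseizure, seizure])
    y = np.concatenate([np.zeros(len(nonseizure)), np.ones(len(seizure))])

    clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
    new_window = rng.normal(1.5, 1.0, (1, n_features))
    print(clf.predict(new_window))                   # [1.] -> flag a likely seizure onset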
The team is now working on a way to make that type of information useful to people with
epilepsy. Today, many patients can control their seizures with an implant that stimulates the vagus
nerve. The implant typically works in one of two ways: either it turns on every few minutes,
regardless of a patient's brain activity, or patients sweep a magnet over it, activating it when they
sense a seizure coming on. Both methods have their drawbacks, so Guttag is designing a
noninvasive, software-driven sensor programmed to measure the wearer's brain waves and
determine what patterns--specific to him or her--signify the onset of a seizure. Once those patterns
are detected, a device can automatically activate an implant, stopping the seizure in its tracks.
Guttag plans to test the sensor, essentially a bathing cap of electrodes that fits over the scalp, on a
handful of patients at Beth Israel Deaconess Medical Center this spring. Down the line, such a
sensor could also help people without implants, simply warning them to sit down, pull over, or get
to a safe place before a seizure begins. "Just a warning could be enormously life changing," says
Guttag. "It's all the collateral damage that people really fear."
Now he's turned his attention to patterns of the heart. Like brain activity, cardiac activity is governed
by electrical signals, so moving into cardiology is a natural transition for Guttag.
He began by looking for areas where large-scale cardiac-data analysis was needed. Today, many
patients who have suffered heart attacks go home with Holter monitors that record heart activity.
After a day or so, a cardiologist reviews the monitor's readings for worrisome signs. But it can be
easy to miss an abnormal pattern in thousands of minutes of dense waveforms.
That's where Guttag hopes computers can step in. Working with Collin Stultz, a cardiologist and
assistant professor of electrical engineering and computer science at MIT, and graduate student
Zeeshan Syed, Guttag is devising algorithms to analyze EKG readings for statistically meaningful
patterns. In the coming months, the team will compare EKG records from hundreds of heart attack
patients, some of whose attacks were fatal. The immediate goal is to pick out key similarities and
differences between those who survived and those who didn't. There are known "danger patterns"
that physicians can spot on an EKG readout, but the Guttag group is leaving it up to the computer
to find significant patterns, rather than telling it what to look for. If the computer's search isn't
influenced by existing medical knowledge, Guttag reasons, it may uncover unexpected
relationships.
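
In the same spirit, the short Python sketch below shows one unsupervised approach, clustering synthetic "beats" without labels; it is an illustration of the idea, not the group's actual method, which compares records across hundreds of patients.

    import numpy as np
    from sklearn.cluster import KMeans

    # Sketch of letting the computer group EKG beats without being told what to look
    # for; the data is synthetic and the setup is our own simplification.
    rng = np.random.default_rng(2)
    normal_beats = rng.normal(0.0, 0.3, (500, 40))      # 40 samples per beat, common shape
    odd_beats = rng.normal(1.0, 0.3, (25, 40))          # a rarer, different morphology
    beats = np.vstack([normal_beats, odd_beats])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(beats)
    sizes = np.bincount(labels)
    print(sizes)                      # one large cluster and one small one
    print(np.argmin(sizes))           # the minority cluster is the pattern worth a closer look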
Joseph Kannry, director of the Center for Medical Informatics at the Mount Sinai School of
Medicine, calls Guttag's work a solid step toward developing more accurate automated medical
readings. "It's promising. The challenge is going to be in convincing a clinician to use it," says
Kannry.
Still, Guttag feels he is well on his way toward integrating computing into medical diagnostics.
"People have very different reactions when you tell them computers are going to make decisions
for you," he says. "But we've gotten to the point where computers fly our airplanes for us, so
there's every reason to be optimistic."
http://www.technologyreview.com/Biotech/18294/page2/

Single-Cell Analysis

Norman Dovichi believes that detecting minute differences between individual cells could
improve medical tests and treatments.
We all know that focusing on the characteristics of a group can obscure the differences between
the individuals in it. Yet when it comes to biological cells, scientists typically derive information
about their behavior, status, and health from the collective activity of thousands or millions of
them. A more precise understanding of differences between individual cells could lead to better
treatments for cancer and diabetes, just for starters.
The past few decades have seen the advent of methods that allow astonishingly detailed views of
single cells--each of which can produce thousands of different proteins, lipids, hormones, and
metabolites. But most of those methods have a stark limitation: they rely on "affinity reagents,"
such as antibodies that attach to specific proteins. As a result, researchers can use them to study
only what's known to exist. "The unexpected is invisible," says Norman Dovichi, an analytical
chemist at the University of Washington, Seattle. And most every cell is stuffed with mysterious
components. So Dovichi has helped pioneer ultrasensitive techniques to isolate cells and reveal
molecules inside them that no one even knew were there.
Dovichi's lab--one of a rapidly growing number of groups that focus on single cells--has had
particular success at identifying differences in the amounts of dozens of distinct proteins produced
by individual cancer cells. "Ten years ago, I would have thought it would have been almost
impossible to do that," says Robert Kennedy, an analytical chemist at the University of Michigan-
Ann Arbor, who analyzes insulin secretion from single cells to uncover the causes of the most
common type of diabetes.
And Dovichi has a provocative hypothesis: he thinks that as a cancer progresses, cells of the same
type diverge more and more widely in their protein content. If this proves true, then vast
dissimilarities between cells would indicate a disease that is more likely to spread. Dovichi is
working with clinicians to develop better prognostics for esophageal and breast cancer based on
this idea. Ultimately, such tests could let doctors quickly decide on proper treatment, a key to
defeating many cancers.
A yellow, diamond-shaped sign in Dovichi's office warns that a "laser jock" is present. Dovichi
helped develop the laser-based DNA sequencers that became the foundation of the Human
Genome Project, and his new analyzers rely on much of the same technology to probe single cells
for components that are much harder to detect than DNA: proteins, lipids, and carbohydrates.
For proteins, the machines mix reagents with a single cell inside an ultrathin capillary tube. A
chemical reaction causes lysine, an amino acid recurring frequently in proteins, to fluoresce. The
proteins, prodded by an electric charge, migrate out of the tube at different rates, depending on
their size. Finally, a laser detector records the intensity of the fluorescence. This leads to a graphic
that displays the various amounts of the different-sized proteins inside the cell.
Although the technique reveals differences between cells, it does not identify the specific proteins.
Still, the analyzer has an unprecedented sensitivity and makes visible potentially critical
differences. "For our cancer prognosis projects, we don't need to know the identity of the
components," Dovichi says.
Dovichi is both excited about the possibilities of single-cell biology and sober about its
limitations. Right now, he says, analyses take too much time and effort. "This is way early-stage,"
says Dovichi. "But hopefully, in 10, 20, or 30 years, people will look back and say those were
interesting baby steps."
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging&id=18296

A Smarter Web

New technologies will make online search more intelligent--and may even lead to a "Web 3.0."

Last year, Eric Miller, an MIT-affiliated computer scientist, stood on a beach in southern France,
watching the sun set, studying a document he'd printed earlier that afternoon. A March rain had
begun to fall, and the ink was beginning to smear.
Five years before, he'd agreed to lead a diverse group of researchers working on a project called
the Semantic Web, which seeks to give computers the ability--the seeming intelligence--to
understand content on the World Wide Web. At the time, he'd made a list of goals, a copy of which
he now held in his hand. If he'd achieved those goals, his part of the job was done.
Taking stock on the beach, he crossed off items one by one. The Semantic Web initiative's basic
standards were in place; big companies were involved; startups were merging or being purchased;
analysts and national and international newspapers, not just technical publications, were writing
about the project. Only a single item remained: taking the technology mainstream. Maybe it was
time to make this happen himself, he thought. Time to move into the business world at last.
"For the Semantic Web, it was no longer a matter of if but of when," Miller says. "I felt I could be
more useful by helping people get on with it."
Now, six months after the launch of his own Zepheira, a consulting company that helps businesses
link fragmented data sources into easily searched wholes, Miller's beachside decision seems
increasingly prescient. The Semantic Web community's grandest visions, of data-surfing computer
servants that automatically reason their way through problems, have yet to be fulfilled. But the
basic technologies that Miller shepherded through research labs and standards committees are
joining the everyday Web. They can be found everywhere--on entertainment and travel sites, in
business and scientific databases--and are forming the core of what some promoters call a nascent
"Web 3.0."
Already, these techniques are helping developers stitch together complex applications or bring
once-inaccessible data sources online. Semantic Web tools now in use improve and automate
database searches, helping people choose vacation destinations or sort through complicated
financial data more efficiently. It may be years before the Web is populated by truly intelligent
software agents automatically doing our bidding, but their precursors are helping people find
better answers to questions today.
The "3.0" claim is ambitious, casting these new tools as successors to several earlier--but still
viable--generations of Net technology. Web 1.0 refers to the first generation of the commercial
Internet, dominated by content that was only marginally interactive. Web 2.0, characterized by
features such as tagging, social networks, and user-created taxonomies of content called
"folksonomies," added a new layer of interactivity, represented by sites such as Flickr, Del.icio.us,
and Wikipedia.
Analysts, researchers, and pundits have subsequently argued over what, if anything, would
deserve to be called "3.0." Definitions have ranged from widespread mobile broadband access to a
Web full of on-demand software services. A much-read article in the New York Times last
November clarified the debate, however. In it, John Markoff defined Web 3.0 as a set of
technologies that offer efficient new ways to help computers organize and draw conclusions from
online data, and that definition has since dominated discussions at conferences, on blogs, and
among entrepreneurs.
The 3.0 moniker has its critics. Miller himself, like many in his research community, frowns at the
idea of applying old-fashioned software release numbers to a Web that evolves continually and on
many fronts. Yet even skeptics acknowledge the advent of something qualitatively different. Early
versions of technologies that meet Markoff's definition are being built into the new online TV
service Joost. They've been used to organize Yahoo's food section and make it more searchable.
They're part of Oracle's latest, most powerful database suite, and Hewlett-Packard has produced
open-source tools for creating Semantic Web applications. Massive scientific databases, such as
the Creative Commons-affiliated Neurocommons, are being constructed around the new ideas,
while entrepreneurs are readying a variety of tools for release this year.
The next wave of technologies might ultimately blend pared-down Semantic Web tools with Web
2.0's capacity for dynamic user-generated connections. It may include a dash of data mining, with
computers automatically extracting patterns from the Net's hubbub of conversation. The
technology will probably take years to fulfill its promise, but it will almost certainly make the Web
easier to use.
"There is a clear understanding that there have to be better ways to connect the mass of data online
and interrogate it," says Daniel Waterhouse, a partner at the venture capital firm 3i. Waterhouse
calls himself skeptical of the "Web 3.0" hyperbole but has invested in at least one Semantic Web-
based business, the U.K. company Garlik. "We're just at the start," he says. "What we can do with
search today is very primitive."
Melvil Dewey and the Vision of a New Web
For more than a decade, Miller has been at the center of this slow-cresting technological wave.
Other names have been more prominent--Web creator Tim Berners-Lee is the Semantic Web's
most visible proselytizer, for example. But Miller's own experiences trace the technology's history,
from academic halls through standards bodies and, finally, into the private sector.
In the often scruffy Web world, the 39-year-old Miller has been a clean-cut exception, an articulate
and persuasive technological evangelist who looks less like a programmer than a confident young
diplomat. He's spent most of his professional life in Dublin, OH, far from Silicon Valley and from
MIT, where he continues to serve as a research scientist. But it's no accident that Zepheira is based
in this Columbus suburb, or that Miller himself has stayed put. Dublin is a hub of digital library
science, and as the Semantic Web project has attempted to give order to the vast amounts of
information online, it has naturally tapped the expertise of library researchers here.
Miller joined this community as a computer engineering student at Ohio State University, near the
headquarters of a group called the Online Computer Library Center (OCLC). His initial attraction
was simple: OCLC had the largest collection of computers in the vicinity of Ohio State. But it also
oversees the venerable Dewey Decimal System, and its members are the modern-day inheritors of
Melvil Dewey's obsession with organizing and accessing information.
Dewey was no technologist, but the libraries of his time were as poorly organized as today's Web.
Books were often placed in simple alphabetical order, or even lined up by size. Libraries
commonly numbered shelves and assigned books to them heedless of subject matter. As a 21-year-
old librarian's assistant, Dewey found this system appalling: order, he believed, made for smoother
access to information.

Dewey envisioned all human knowledge as falling along a spectrum whose order could be
represented numerically. Even if arbitrary, his system gave context to library searches; when
seeking a book on Greek history, for example, a researcher could be assured that other relevant
texts would be nearby. A book's location on the shelves, relative to nearby books, itself aided
scholars in their search for information.
As the Web gained ground in the early 1990s, it naturally drew the attention of Miller and the
other latter-day Deweys at OCLC. Young as it was, the Web was already outgrowing attempts to
categorize its contents. Portals like Yahoo forsook topic directories in favor of increasingly
powerful search tools, but even these routinely produced irrelevant results.
Nor was it just librarians who worried about this disorder. Companies like Netscape and Microsoft
wanted to lead their customers to websites more efficiently. Berners-Lee himself, in his original
Web outlines, had described a way to add contextual information to hyperlinks, to offer computers
clues about what would be on the other end.
This idea had been dropped in favor of the simple, one-size-fits-all hyperlink. But Berners-Lee
didn't give it up altogether, and the idea of connecting data with links that meant something
retained its appeal.
On the Road to Semantics
By the mid-1990s, the computing community as a whole was falling in love with the idea of
metadata, a way of providing Web pages with computer-readable instructions or labels that would
be invisible to human readers.
To use an old metaphor, imagine the Web as a highway system, with hyperlinks as connecting
roads. The early Web offered road signs readable by humans but meaningless to computers. A
human might understand that "FatFelines.com" referred to cats, or that a link led to a veterinarian's
office, but computers, search engines, and software could not.
Metadata promised to add the missing signage. XML--the code underlying today's complicated
websites, which describes how to find and display content--emerged as one powerful variety. But
even XML can't serve as an ordering principle for the entire Web; it was designed to let Web
developers label data with their own custom "tags"--as if different cities posted signs in related but
mutually incomprehensible dialects.
In early 1996, researchers at the MIT-based World Wide Web Consortium (W3C) asked Miller,
then an Ohio State graduate student and OCLC researcher, for his opinion on a different type of
metadata proposal. The U.S. Congress was looking for ways to keep children from being exposed
to sexually explicit material online, and Web researchers had responded with a system of
computer-readable labels identifying such content. The labels could be applied either by Web
publishers or by ratings boards. Software could then use these labels to filter out objectionable
content, if desired.
Miller, among others, saw larger possibilities. Why, he asked, limit the descriptive information
associated with Web pages to their suitability for minors? If Web content was going to be labeled,
why not use the same infrastructure to classify other information, like the price, subject, or title of
a book for sale online? That kind of general-purpose metadata--which, unlike XML, would be
consistent across sites--would be a boon to people, or computers, looking for things on the Web.
This idea resonated with other Web researchers, and in the late 1990s it began to bear fruit. Its first
major result was the Resource Description Framework (RDF), a new system for locating and
describing information whose specifications were published as a complete W3C recommendation
in 1999. But over time, proponents of the idea became more ambitious and began looking to the
artificial-intelligence community for ways to help computers independently understand and
navigate through this web of metadata.
Since 1998, researchers at W3C, led by Berners-Lee, had been discussing the idea of a "semantic"
Web, which not only would provide a way to classify individual bits of online data such as
pictures, text, or database entries but would define relationships between classification categories
as well. Dictionaries and thesauruses called "ontologies" would translate between different ways
of describing the same types of data, such as "post code" and "zip code." All this would help
computers start to interpret Web content more efficiently.
In this vision, the Web would take on aspects of a database, or a web of databases. Databases are
good at providing simple answers to queries because their software understands the context of
each entry. "One Main Street" is understood as an address, not just random text. Defining the
context of online data just as clearly--labeling a cat as an animal, and a veterinarian as an animal
doctor, for example--could result in a Web that computers could browse and understand much as
humans do, researchers hoped.
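To make the idea concrete, here is one way such labels can look in practice -- a minimal sketch using RDF through the open-source Python library rdflib, with a vocabulary and URIs invented purely for illustration:
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# A made-up vocabulary; every URI below is invented purely for illustration.
EX = Namespace("http://example.org/vocab/")

g = Graph()

# "A cat is an animal, and a veterinarian is an animal doctor."
g.add((EX.Cat, RDFS.subClassOf, EX.Animal))
g.add((EX.Veterinarian, RDFS.subClassOf, EX.Doctor))
g.add((EX.Veterinarian, EX.treats, EX.Animal))

# A specific page about a specific cat, labeled with machine-readable context.
g.add((EX.whiskers, RDF.type, EX.Cat))
g.add((EX.whiskers, RDFS.label, Literal("Whiskers")))
g.add((EX.whiskers, EX.seenBy, EX.drJones))
g.add((EX.drJones, RDF.type, EX.Veterinarian))

# Because the context is explicit, software can ask a database-style question:
# which animal doctors appear in the data, and which patients have they seen?
query = """
PREFIX ex: <http://example.org/vocab/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?doctor ?patient WHERE {
    ?doctorType rdfs:subClassOf ex:Doctor .
    ?doctor a ?doctorType .
    ?patient ex:seenBy ?doctor .
}
"""
for doctor, patient in g.query(query):
    print(doctor, "has seen", patient)
Because the relationships are declared explicitly, the query returns the veterinarian and the cat without any keyword matching -- the kind of database-style answer the researchers describe.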
To go back to the Web-as-highway metaphor, this might be analogous to creating detailed road
signs that cars themselves could understand and upon which they could act. The signs might point
out routes, describe road and traffic conditions, and offer detailed information about destinations.
A car able to understand the signs could navigate efficiently to its destination, with minimal
intervention by the driver.
In articles and talks, Berners-Lee and others began describing a future in which software agents
would similarly skip across this "web of data," understand Web pages' metadata content, and
complete tasks that take humans hours today. Say you'd had some lingering back pain: a program
might determine a specialist's availability, check an insurance site's database for in-plan status,
consult your calendar, and schedule an appointment. Another program might look up restaurant
reviews, check a map database, cross-reference open table times with your calendar, and make a
dinner reservation.
At the beginning of 2001, the effort to realize this vision became official. The W3C tapped Miller
to head up a new Semantic Web initiative, unveiled at a conference early that year in Hong Kong.
Miller couldn't be there in person; his wife was in labor with their first child, back in Dublin.
Miller saw it as a double birthday.
Standards and Critics
The next years weren't easy. Miller quickly had to become researcher, diplomat, and evangelist.
The effort to build the Semantic Web has been well publicized, and Berners-Lee's name in
particular has lent its success an air of near-inevitability. But its visibility has also made it the
target of frequent, and often harsh, criticism.
Some argue that it's unrealistic to expect busy people and businesses to create enough metadata to
make the Semantic Web work. The simple tagging used in Web 2.0 applications lets users
spontaneously invent their own descriptions, which may or may not relate to anything else.
Semantic Web systems require a more complicated infrastructure, in which developers order terms
according to their conceptual relationships to one another and--like Dewey with his books--fit data
into the resulting schema. Creating and maintaining these schemas, or even adapting preëxisting
ones, is no trivial task. Coding a database or website with metadata in the language of a schema
can itself be painstaking work. But the solution to this problem may simply be better tools for
creating metadata, like the blog and social-networking sites that have made building personal
websites easy. "A lot of Semantic Web researchers have realized this disconnect and are investing
in more human interfaces," says David Huynh, an MIT student who has helped create several such
tools.
Other critics have questioned whether the ontologies designed to translate between different data
descriptions can realistically help computers understand the intricacies of even basic human
concepts. Equating "post code" and "zip code" is easy enough, the critics say. But what happens
when a computer stumbles on a word like "marriage," with its competing connotations of
monogamy, polygamy, same-sex relationships, and civil unions? A system of interlocking
computer definitions could not reliably capture the conflicting meanings of many such common
words, the argument goes.
"People forget there are humans under the hood and try to treat the Web like a database instead of
a social construct," says Clay Shirky, an Internet consultant and adjunct professor of interactive
telecommunications at New York University.
It hasn't helped that until very recently, much of the work on the Semantic Web has been hidden
inside big companies or research institutions, with few applications emerging. But that paucity of
products has masked a growing amount of experimentation. Miller's W3C working group, which
included researchers and technologists from across academia and industry, was responsible for
setting the core standards, a process completed in early 2004. Like HP, other companies have also
created software development tools based on these standards, while a growing number of
independent researchers have applied them to complicated data sets.
Life scientists with vast stores of biological data have been especially interested. In a recent trial
project at Massachusetts General Hospital and Harvard University, conducted in collaboration
with Miller when he was still at the W3C, clinical data was encoded using Semantic Web
techniques so that researchers could share it and search it more easily. The Neurocommons project
is taking the same approach with genetic and biotech research papers. Funded by the scientific-
data management company Teranode, the Neurocommons is again working closely with W3C, as
well as with MIT's Computer Science and Artificial Intelligence Laboratory.
Government agencies have conducted similar trials, with the U.S. Defense Advanced Research
Projects Agency (DARPA) investing heavily in its own research and prototype projects based on
the Semantic Web standards. The agency's former Information Exploitation Office program
manager Mark Greaves, who oversaw much of its Semantic Web work, remains an enthusiastic
backer.
"What we're trying to do with the Semantic Web is build a digital Aristotle," says Greaves, now
senior research program manager at Paul Allen's investment company, Vulcan, which is
sponsoring a large-scale artificial-intelligence venture called Project Halo that will use Semantic
Web data-representation techniques. "We want to take the Web and make it more like a database,
make it a system that can answer questions, not just get a pile of documents that might hold an
answer."
Into the Real World
If Miller's sunset epiphany showed him the path forward, the community he represented was
following similar routes. All around him, ideas that germinated for years in labs and research
papers are beginning to take root in the marketplace.
But they're also being savagely pruned. Businesses, even Miller's Zepheira, are adopting the
simplest Semantic Web tools while putting aside the more ambitious ones. Entrepreneurs are
blending Web 2.0 features with Semantic Web data-handling techniques. Indeed, if there is to be a
Web 3.0, it is likely to include only a portion of the Semantic Web community's work, along with
a healthy smattering of other technologies. "The thing being called Web 3.0 is an important subset
of the Semantic Web vision," says Jim Hendler, professor of computer science at Rensselaer
Polytechnic Institute, who was one of the initiative's pioneer theorists. "It's really a realization that
a little bit of Semantic Web stuff with what's called Web 2.0 is a tremendously powerful
technology."
Much of that technology is still invisible to consumers, as big companies internally apply the
Semantic Web's efficient ways of organizing data. Miller's Zepheira, at least today, is focused on
helping them with that job. Zepheira's pitch to companies is fairly simple, perhaps looking once
again to Dewey's disorganized libraries. Businesses are awash in inaccessible data on intranets, in
unconnected databases, even on employees' hard drives. For each of its clients, Zepheira aims to
bring all that data into the light, code it using Semantic Web techniques, and connect it so that it
becomes useful across the organization. In one case, that might mean linking Excel documents to
payroll or customer databases; in another, connecting customer accounts to personalized
information feeds. These disparate data sources would be tied together with RDF and other
Semantic Web mechanisms that help computerized search tools find and filter information more
efficiently.
One of the company's early clients is Citigroup. The banking giant's global head of capital markets
and banking technology, Chris Augustin, is heading an initiative to use semantic technologies to
organize and correlate information from diverse financial-data feeds. The goal is to help identify
capital-market investment opportunities. "We are interested in providing our customers and traders
with the latest information in the most relevant and timely manner to help them make the best
decisions quickly," says Rachel Yager, the program director overseeing the effort.
Others are beginning to apply semantic techniques to consumer-focused businesses, varying
widely in how deeply they draw from the Semantic Web's well.
The Los Altos, CA-based website RealTravel, created by chief executive Ken Leeder, AdForce
founder Michael Tanne, and Semantic Web researcher Tom Gruber, offers an early example of
what it will look like to mix Web 2.0 features like tagging and blogging with a semantic data-
organization system. The U.K.-based Garlik, headed by former top executives of the British online
bank Egg, uses an RDF-based database as part of a privacy service that keeps customers apprised
of how much of their personal information is appearing online. "We think Garlik's technology
gives them a really interesting technology advantage, but this is at a very early stage," says 3i's
Waterhouse, whose venture firm helped fund Garlik. "Semantic technology is going to be a slow
burn."
San Francisco-based Radar Networks, created by EarthWeb cofounder Nova Spivack and funded
in part by Allen's Vulcan Capital, plans eventually to release a full development platform for
commercial Semantic Web applications, and will begin to release collaboration and information-
sharing tools based on the techniques this year. Spivack himself has been part of the Semantic
Web community for years, most recently working with DARPA and SRI International on a long-
term project called CALO (Cognitive Agent that Learns and Organizes), which aims to help
military analysts filter and analyze new data.
Radar Networks' tools will be based on familiar ideas such as sharing bookmarks, notes, and
documents, but Spivack says that ordering and linking this data within the basic Semantic Web
framework will help teams analyze their work more efficiently. He predicts that the mainstream
Web will spend years assimilating these basic organization processes, using RDF and related tools,
while the Semantic Web's more ambitious artificial-intelligence applications wait in the wings.
"First comes what I call the World Wide Database, making data accessible through queries, with
no AI involved," Spivack says. "Step two is the intelligent Web, enabling software to process
information more intelligently. That's what we're working on."
One of the highest-profile deployments of Semantic Web technology is courtesy of Joost, the
closely watched Internet television startup formed by the creators of Skype and Kazaa. The
company has moved extraordinarily quickly from last year's original conception, through software
development and Byzantine negotiations with video content owners, into beta-testing of its
customizable peer-to-peer TV software.
That would have been impossible if not for the Semantic Web's RDF techniques, which Joost chief
technology officer Dirk-Willem van Gulik calls "XML on steroids." RDF allowed developers to
write software without worrying about widely varying content-use restrictions or national
regulations, all of which could be accommodated afterwards using RDF's Semantic Web linkages.
Joost's RDF infrastructure also means that users will have wide-ranging control over the service,
van Gulik adds. People will be able to program their own virtual TV networks--if an advertiser
wants its own "channel," say, or an environmental group wants to bring topical content to its
members--by using the powerful search and filtering capacity inherent in the semantic ordering of
data.
But van Gulik's admiration goes only so far. While he believes that the simpler elements of the
Semantic Web will be essential to a huge range of online businesses, the rest he can do without.
"RDF [and the other rudimentary semantic technologies] solve meaningful problems, and it costs
less than any other approach would," he says. "The entire remainder"--the more ambitious work
with ontologies and artificial intelligence--"is completely academic."
A Hybrid 3.0
Even as Semantic Web tools begin to reach the market, so do similar techniques developed outside
Miller's community. There are many ways, the market seems to be saying, to make the Web give
ever better answers.
Semantic Web technologies add order to data from the outset, putting up the road signs that let
computers understand what they're reading. But many researchers note that much of the Web lacks
such signs and probably always will. Computer scientists call this data "unstructured."
Much research has focused on helping computers extract answers from this unstructured data, and
the results may ultimately complement Semantic Web techniques. Data-mining companies have
long worked with intelligence agencies to find patterns in chaotic streams of information and are
now turning to commercial applications. IBM already offers a service that combs blogs, message
boards, and newsgroups for discussions of clients' products and draws conclusions about trends,
without the help of metadata's signposts.
"We don't expect everyone to go through the massive effort of using Semantic Web tools," says
Maria Azua, vice president of technology and innovation at IBM. "If you have time and effort to
do it, do it. But we can't wait for everyone to do it, or we'll never have this additional
information."
An intriguing, if stealthy, company called Metaweb Technologies, spun out of Applied Minds by
parallel-computing pioneer Danny Hillis, is promising to "extract ordered knowledge out of the
information chaos that is the current Internet," according to its website. Hillis has previously
written about a "Knowledge Web" with data-organization characteristics similar to those that
Berners-Lee champions, but he has not yet said whether Metaweb will be based on Semantic Web
standards. The company has been funded by Benchmark Capital, Millennium Technology
Ventures, and eBay founder Pierre Omidyar, among others.
"We've built up a set of powerful tools and utilities and initiatives in the Web-based community,
and to leverage and harness them, an infrastructure is desperately needed," says Millennium
managing partner Dan Burstein. "The Web needs extreme computer science to support these
applications."
Alternatively, the socially networked, tag-rich services of Flickr, Last.fm, Del.icio.us, and the like
are already imposing a grassroots order on collections of photos, music databases, and Web pages.
Allowing Web users to draw their own connections, creating, sharing, and modifying their own
systems of organization, provides data with structure that is usefully modeled on the way people
think, advocates say.
"The world is not like a set of shelves, nor is it like a database," says NYU's Shirky. "We see this
over and over with tags, where we have an actual picture of the human brain classifying
information."
No one knows what organizational technique will ultimately prevail. But what's increasingly clear
is that different kinds of order, and a variety of ways to unearth data and reuse it in new
applications, are coming to the Web. There will be no Dewey here, no one system that arranges all
the world's digital data in a single framework.
Even in his role as digital librarian, as custodian of the Semantic Web's development, Miller thinks
this variety is good. It's been one of the goals from the beginning, he says. If there is indeed a Web
3.0, or even just a 2.1, it will be a hybrid, spun from a number of technological threads, all helping
to make data more accessible and more useful.
"It's exciting to see Web 2.0 and social software come on line, but I find it even more exciting
when that data can be shared," Miller says. "This notion of trying to recombine the data together,
and driving new kinds of data, is really at the heart of what we've been focusing on."
John Borland is the coauthor of Dungeons and Dreamers: The Rise of Computer Game Culture
from Geek to Chic. He lives in Berlin.
http://www.technologyreview.com/Infotech/18306/page1/

193nm immersion lithography: Status and challenges

Yayi Wei and David Back


The first of a series on this important technology -- an overview of 193nm immersion lithography
basics.
This article is a comprehensive review of 193nm immersion lithography. It focuses on the materials
and processes rather than on the optical system of the tool. Some of the results are from the
authors' previous publications. This is the first of five parts; each part is relatively self-contained.

The next installment will be available in mid-April on SPIE Newsroom.

History and status
193nm immersion lithography (193i) has been accepted by IC manufacturers as a manufacturing
patterning solution at least down to the 45nm half-pitch node. Immersion lithography is an
enhancement technique that replaces the usual air gap between the final lens element and the
photoresist surface with a liquid whose refractive index is greater than one. The shorter wavelength
in the liquid allows smaller features to be imaged; water is currently used as the immersion liquid.
Fig.1(a) shows a sketched diagram of the final lens and wafer.

The immersion technique was first introduced by Carl Zeiss in the 1880s to increase the resolving
power of the optical microscope. Introduction of the immersion technique into modern lithography
was suggested in the 1980s. It attracted the IC industry's attention in 2002 when 157nm
lithography was delayed by several technical problems. Since then, the development of 193i has
been remarkably fast. At an immersion workshop organized by Sematech in July 2003, scanner
suppliers presented their development plans for 193i scanners. Sematech and its member
companies re-oriented their development resources from 157nm to 193i. In August 2004, the first
193i full-field scanner (ASML AT1150i, 0.75NA, α-tool) was delivered to Albany Nanotech and
used for early immersion process learning. According to a sales report, more than 30 193i scanners
had been sold to IC manufacturers worldwide through December 2006.

Figure 1. (a) A sketched diagram of 193i exposure head. Water fills the gap between the final lens
and the wafer. The water injection and confinement system is not included in the diagram. (b)
Optical paths of two-beam interference for both "dry" and 193i exposures.
Advantages of 193i
The introduction of water into the gap between the final lens and wafer changes the optical paths
of exposure light. Fig.1(b) shows two-beam interference exposures in "dry" and "wet" situations
for comparison. The exposure beams pass through the air or water gap and are focused on the
wafer surfaces. Refractions occur at the interface of lens/air for the "dry" exposure or lens/water
for the immersion exposure, as sketched in Fig.1(b). According to Snell's law, the refraction angles obey

n_lens × sinθ1 = n_air × sinθ2 = n_water × sinθ3,     (1)

where θ1 is the incident angle at the lens interface, and θ2 and θ3 are the refraction angles
corresponding to "dry" and immersion exposure, respectively. It is obvious from Eq.(1) that the
introduction of water alone does not change the NA. However, the water does reduce the refraction
angle, i.e., θ2 > θ3, increasing the depth-of-focus (DOF) of the exposure. In the high-NA situation
the paraxial approximation cannot be applied, and the Rayleigh DOF has to be modified to include
the contribution of high-incident-angle beams,1,2

DOF = k2 × λ / [4 × n × sin²(θ/2)],     (2)

where k2 is the process factor, n is the refractive index of the medium between the lens and the
wafer (n = 1 for "dry" exposure, n = n_water for immersion), and θ is the corresponding refraction
angle (θ2 or θ3). The contributions of the illumination settings, mask, and process to the DOF can
be included in the factor k2.

Exposing a pattern with a pitch p, the incident angles can be calculated from sinθ2 = λ/p for the
"dry" exposure and n_water × sinθ3 = λ/p for the immersion exposure. The DOF improvement of
immersion vs. "dry" exposure can then be obtained as

DOF_immersion / DOF_dry = sin²(θ2/2) / [n_water × sin²(θ3/2)].     (3)

The improvement in DOF is at least the refractive index of the fluid, and gets larger for smaller
pitches.2 For example, 90nm dense lines were printed by a "dry" and an immersion tool with the
same mask and illumination settings (0.75NA). About 2× DOF was obtained with the immersion
exposure.3
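As a rough numerical illustration of Eq.(3), the short Python sketch below evaluates the DOF gain of water immersion for a few pitches; the pitch values are arbitrary and are chosen only so that the corresponding "dry" exposure is physically possible.
import math

LAMBDA_NM = 193.0    # ArF exposure wavelength (nm)
N_WATER = 1.44       # refractive index of water at 193nm

def dof(theta, n, k2=1.0):
    # Nonparaxial Rayleigh DOF, k2 * lambda / (4 * n * sin^2(theta/2)), as in Eq.(2)
    return k2 * LAMBDA_NM / (4.0 * n * math.sin(theta / 2.0) ** 2)

for pitch_nm in (400.0, 300.0, 260.0):                       # illustrative pitches (nm)
    theta_dry = math.asin(LAMBDA_NM / pitch_nm)              # sin(theta2) = lambda / p
    theta_wet = math.asin(LAMBDA_NM / (N_WATER * pitch_nm))  # n_water * sin(theta3) = lambda / p
    gain = dof(theta_wet, N_WATER) / dof(theta_dry, 1.0)
    print(f"pitch {pitch_nm:.0f}nm: DOF(wet)/DOF(dry) = {gain:.2f}")
# The ratio always exceeds n_water = 1.44 and grows as the pitch shrinks.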

Another advantage of the immersion technique is that it enables lens designs with numerical
apertures greater than 1.0 -- hyper-NA 193nm immersion lithography. The NA of a "dry" exposure
system cannot go beyond 1.0; otherwise, the exposure light is totally reflected back at the lens/air
interface -- total internal reflection. With water immersion, a maximum NA approaching
n_water = 1.44 is possible. Currently, 193i full-field exposure systems with NAs of 1.07, 1.2, and
1.3 are available on the market, and 1.35NA capability is under design. It appears that hyper-NA
193i will provide lithography solutions down to the 45nm half-pitch.

193i process and challenges


Apart from these gains -- DOF enhancement and hyper-NA lens design -- 193i brings various
process challenges. The first challenge is leaching: the photo-acid generator (PAG), quencher, and
other small molecular components in the resist can leach into the water. This leaching not only
degrades the resist performance but also contaminates the water. The contaminated water can
further contaminate the lens and wafer stage of the scanner. Water can also permeate into the resist
film, causing resist swelling and changing its photochemical properties. Prior to the availability of
low-leaching resists, a topcoat is an effective way to block both the leaching and the water
uptake.

The topcoat is spin-coated on the resist and is transparent at the 193nm wavelength. It serves as a
barrier layer, enabling regular 193nm resists -- "dry" resists -- to be used in the wet process. The
topcoat layer is removed after exposure and post-exposure bake (PEB) and before pattern
development. According to their solubility in regular aqueous TMAH developer, there are two
types of topcoat -- developer-insoluble and developer-soluble. Developer-insoluble topcoat can
only be removed with a specific topcoat solvent -- an additional step as well as a process-tracking
module is needed to remove the topcoat, as sketched in Fig. 2(a). Therefore, developer-insoluble
topcoat is not favorable for production, and it was quickly followed by developer-soluble topcoats.
Developer-soluble topcoat can be dissolved by regular aqueous
TMAH developer. Therefore, the topcoat removal step can be done in the develop module and
integrated into the development step, as sketched in Fig. 2(b). However, compatibility of the resist
and topcoat has to be considered, and the process parameters have to be aligned to obtain the best
lithography process window.

Resist processes without top protection coatings are the preferred solution for introducing 193i
lithography into mass production. These simplified processes, as sketched in Fig.2(c), do not need
separate coating and baking steps for the topcoat material, and thereby reduce cost of ownership
and offer fewer sources of defects. Conventional 193nm resists that are optimized for "dry"
exposure are not suitable for immersion lithography without topcoats, because resist components
show at least some solubility in water (before and/or after exposure) and leach into it, thereby
deteriorating lithographic performance. Without a topcoat as a barrier layer, the selection of
components for single-layer 193i resists is challenging, since minimized leaching and superior
lithographic performance have to be achieved simultaneously. Material innovation is the key for
non-topcoat processes to supersede topcoat processes.

Figure 2. Process flow comparison of resist stack with solvent-soluble topcoat (a), developer-
soluble topcoat (b), and without topcoat (c).
Immersion-related defects are another challenge to the 193i process. The water between the front
lens and wafer forms a meniscus that moves with the exposure head across the wafer. Various
physical and chemical interactions between the water and resist stack occur, leading to water
immersion-related defects: Bubbles in the water can distort the exposure image, water droplets left
on the wafer surface may deteriorate the local resist performance, and water can transport particles
to the wafer surface and deposit them there. With an un-optimized 193i process, typically 4-20%
more defects may be added to a wafer than with "dry" 193nm lithography. The yield limitations
caused by immersion defects must be solved before 193i lithography can be brought into
high-volume production.

With hyper-NA 193i exposure, the maximum incident angle of the exposure light on the resist
stack is high. For example, at 1.3NA the maximum incident angle is sin⁻¹(1.3/1.44) = 64.5°, while
at 0.75NA it is only about 31.4°. The high incident angle causes contrast loss in the transverse
magnetic (TM) imaging component.

Fig.3 shows a diagram of the two-beam interference. The electric field of each incident beam can
be divided into two components: one in the plane of incidence (E_TM) and one perpendicular to
the plane of incidence (E_TE), where TM and TE denote transverse magnetic and transverse
electric fields, respectively, so that E1 = E1,TE + E1,TM and E2 = E2,TE + E2,TM. After the light
is focused on the wafer surface, its total intensity is

I = |E1,TE + E2,TE|² + |E1,TM + E2,TM|².

The TE components (E1,TE and E2,TE) are parallel to each other, and their superposition has no
dependence on the incident angle. However, the TM components (E1,TM and E2,TM) form an
angle of 180° − 2θ3, and their superposition depends on the incident angle through a factor of
cos(2θ3). At hyper-NA exposure, cos(2θ3) is much smaller than one and destroys the TM contrast.
Therefore, illumination with only the TE component -- i.e., TE-polarized illumination -- has been
suggested for hyper-NA exposure. The high incident angle also leads to high reflection at the
interfaces of the resist stack and makes reflectivity control very difficult, especially when patterns
with different pitches are exposed and the incident angles of the exposure light span a broad range.
Reducing reflections over such a range of incident angles is extremely challenging for a
single-layer BARC. Different BARC strategies have been proposed for hyper-NA imaging, for
example the thick organic BARC approach and the double-layer BARC approach.4

Figure 3. Two-beam interference. The electric field of the incident beams can be divided into two
components -- TE and TM. The TM components cannot form enough contrast at hyper-NA.
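To put numbers on the TM contrast loss, the following illustrative Python sketch evaluates the maximum refraction angle in water and the cos(2θ3) factor for the NA values mentioned above:
import math

N_WATER = 1.44   # refractive index of water at 193nm

# Maximum refraction angle in water and the TM interference factor cos(2*theta3)
# for the immersion NAs mentioned in the text (NA = n_water * sin(theta3)).
for na in (0.75, 1.07, 1.2, 1.3, 1.35):
    theta3 = math.asin(na / N_WATER)
    print(f"NA = {na:.2f}: theta3 = {math.degrees(theta3):4.1f} deg, "
          f"cos(2*theta3) = {math.cos(2.0 * theta3):+.2f}")
# At 0.75NA the factor is still about +0.46; for hyper-NA systems it falls
# through zero and turns negative, so the TM component no longer contributes
# useful image contrast -- hence the preference for TE-polarized illumination.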
Extendibility of 193i
Fig.4 shows a lens/immersion fluid/resist/BARC stack, which mimics the immersion exposure
head. According to Snell's law,

n_lens × sinθ1 = n_fluid × sinθ3 = n_resist × sinθ4.

As labeled in Fig.4, θ1, θ3, and θ4 are the incident angles of the exposure beam at the interfaces of
lens and fluid, fluid and resist, and resist and BARC (the topcoat is not included in the discussion),
respectively. Apparently, the maximum effective NA is equal to min[n_lens, n_fluid, n_resist], i.e.,
the further increase of NA is limited by the refractive indices of the lens, fluid, and resist.
Therefore, high-refractive-index materials are the key to further increasing the NA.

Figure 4. Optical path of the exposure beam in a lens/immersion fluid/resist/BARC system
Encouraged by the great success of water immersion, various high-refractive-index (RI)
materials, including high-RI immersion fluids, high-RI lens materials, and high-RI resists, are
being developed. A high-RI fluid of n=1.65 is now available and ready to be tested by scanner
suppliers.5 Various high-RI lens materials have been screened. Initial results have demonstrated
that LuAG has an RI of ~2.2 and is a strong candidate for making a high-RI lens.6 Introduction
of sulfur into photoresist polymers can significantly increase their refractive index.7 Second-
generation 193i, with an immersion fluid of n=1.65 and a lens of n=2.0, is expected to achieve an
NA of 1.55, and third-generation 193i will eventually push the NA to 1.65. Based on this projected
progress, 193i+ is becoming a strong competitor to EUV lithography for the 32nm half-pitch
node.8
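A back-of-the-envelope comparison of the NA ceilings is sketched below in Python; the fluid indices and target NAs are taken from the text, while the lens and resist indices (roughly 1.56 for fused silica at 193nm and about 1.7 for a typical resist) are assumed ballpark values used only for illustration.
# NA ceiling NA_max = min(n_lens, n_fluid, n_resist) for two material sets.
generations = {
    "water-based 193i": {"lens": 1.56, "fluid": 1.44, "resist": 1.7},
    "2nd-generation 193i (n=1.65 fluid, n=2.0 lens)": {"lens": 2.00, "fluid": 1.65, "resist": 1.7},
}
for name, indices in generations.items():
    print(f"{name}: NA ceiling = {min(indices.values()):.2f}")
# Water caps the first generation at 1.44 (hence the 1.35NA tools); the n=1.65
# fluid caps the second generation at 1.65, leaving room for the projected 1.55NA.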

References:
1. B. J. Lin, The k3 coefficient in nonparaxial λ/NA scaling equations for resolution, depth of
focus, and immersion lithography, JM3 1, No. 1 pp. 7-12, 2002. doi:10.1117/1.1445798
2. C. A. Mack and J. D. Byers, Exploring the capabilities of immersion lithography through
simulation, Proc. SPIE 5377, p. 428-441, 2004. doi:10.1117/12.537704
3. Yayi Wei, K. Petrillo, S. Brandl, F. Goodwin, P. Benson, R. Housley, and U. Okoroanyanwu,
Selection and evaluation of developer-soluble topcoat for 193nm immersion lithography, Proc.
SPIE 6153, 615306, 2006. doi:10.1117/12.655725
4. M. Colburn, S. Halle, M. Burkhardt, S. Burns, N. Seong, D. Goldfarb, and DE Team, Process
challenges for extension of H2O immersion lithography to hyper-NA, Sematech Litho Forum,
2006.
5. Y. Wang, T. Miyamatsu, T. Furukawa, K. Yamada, T. Tominaga, Y. Makita, H. Nakagawa, A.
Nakamura, M. Shima, S. Kusumoto, T. Shimokawa, and K. Hieda, High-Refractive-Index Fluids
for the Next Generation ArF Immersion Lithography, Proc. SPIE 6153, 61530A, 2006.
doi:10.1117/12.656022
6. J. H. Burnett, High-Index Materials for 193 nm Immersion Lithography, Presentation at
International Symposium on Immersion Lithography, Kyoto, 2006.
7. I. Blakey, W. Conley, G. A. George, D. J. T. Hill, H. Liu, F. Rasoul, and A. K. Whittaker,
Synthesis of high refractive index sulfur-containing polymers for 193nm immersion lithography: a
progress report, Proc. SPIE 6153, 61530H, 2006. doi:10.1117/12.659757
8. M. van den Brink, The only cost-effective extendable lithography option: EUV, Presentation at
EUV Symposium, Barcelona, 2006.
http://newsroom.spie.org/x6002.xml?highlight=x509

Ultrafast laser fabricates high-index-contrast structures in glass

Kiyotaka Miura
Adding metallic aluminum to silicate glass allows femtosecond-laser irradiation to create
silicon precipitates within the glass, suggesting new fabrication methods for high-index-contrast
integrated optical devices.
Compact integrated optical devices require tight confinement of the optical
field, which in turn demands a high refractive-index difference between the
material of the waveguide and the surrounding matrix (see Figure 1).
Currently, most planar-waveguide devices in optical networks use a low
refractive-index difference between the core and the cladding layers, whose
indices are approximately 1.50 and 1.45, respectively.

Figure 1. To shrink integrated optical devices, they must be built from materials having very
different refractive indices. In the past, refractive indices of core and cladding layers were
approximately 1.50 and 1.45, respectively. Our present design uses silicon and conventional oxide
glass, with indices of about 3.5 and 1.5, respectively. In the future, we want 3D integration of
different components in the glass.
Femtosecond-laser irradiation is a well-known method of producing three-dimensional patterns of
structural and refractive-index change in various types of glass.1–9 Promising applications,
including three-dimensional optical waveguides and photonic crystals,6,10,11 have been
demonstrated. However, because the increased index results from local densification of the glass,
the maximum laser-induced refractive-index difference is limited to about 10⁻².
Achieving a high refractive-index difference is possible if silicon can be deposited within the
glass, because the respective refractive indices of Si and conventional oxide glass are
approximately 3.4 and 1.5, respectively. As a photonic medium, Si offers unique advantages for
photonic integrated circuits. It is transparent in the range of optical telecommunications
wavelengths (1.3 and 1.55µm) and its high refractive index enables sub-micrometer structures for
photonic devices.12
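The following illustrative Python sketch puts the index-contrast numbers quoted above side by side; expressing the contrast as Δn/n_core is simply one convenient normalization.
# Refractive-index contrast available to a waveguide core in the three cases
# discussed above. The index values are those quoted in the text.
cases = {
    "doped-silica core/cladding (1.50 / 1.45)":      (1.50, 1.45),
    "laser densification of glass (delta-n ~ 0.01)": (1.51, 1.50),
    "Si precipitate in oxide glass (3.4 / 1.5)":     (3.40, 1.50),
}
for name, (n_core, n_clad) in cases.items():
    delta_n = n_core - n_clad
    print(f"{name}: delta-n = {delta_n:.2f}, contrast = {100.0 * delta_n / n_core:.1f}%")
# Si in glass gives roughly two orders of magnitude more index contrast than
# laser-induced densification alone, which is what allows much tighter confinement.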
Our previous research suggested the possibility that Si clusters or particles could be extracted from
silicate glasses with the femtosecond laser. The focused laser causes a complex sequence of
phenomena, including formation of oxygen-deficiency centers in silica or silicate glasses,
reduction or oxidation of specific ions, reduction of metallic or semiconductor ions by capture of
electrons from non-bridging oxygen atoms, aggregation of metallic atoms into nanoparticles after
heat treatment, and diffusion of ions over distances of micrometers.
However, very little has been published on creating Si inside a bulk glass. Presumably this is
because general oxide glass cannot trap the O ions that are generated by breaking Si-O bonds,
which prevents the growth of Si in an enclosed space inside the glass. To circumvent this problem,
we have attempted Si precipitation in silicate glass by adding metallic aluminum to the starting
material and using femtosecond-laser irradiation. The Al ions should act as O-trapping centers,
because metallic Al
acts as a reducing agent via the thermite reaction. Therefore, we assumed that silicon precipitation
results when the laser energy dissociates silicon and oxygen and the dissociated oxygen ions react
with the aluminum during heat treatment, as illustrated in Figure 2.

Figure 2. A femtosecond laser can form Si precipitates if Al is present to react with the oxygen
that the laser liberates.
This strategy successfully created Si precipitates (see Figure 3). The inset is a scanning electron
microscope (SEM) image of a Si particle, and the graph plots the results of elemental analysis by
energy-dispersive x-ray spectroscopy (EDS) on the polished glass surface at the depth of the focal
point.

Figure 3. Femtosecond-laser irradiation of silicate glass formed a micron-sized particle (shown


in the scanning electron micrograph in the inset). This particle is rich in Si, as shown in the
energy-dispersive x-ray spectrum on the left and the elemental maps on the right.
Why does Si precipitation occur when the glass is irradiated with a femtosecond pulse, but not
with longer pulses? To address this question, we estimated the density, pressure, and temperature
caused by laser irradiation. The temperature at the center of the beam is higher than 3000K,
corresponding to a pressure increase of 1GPa. In addition, the pressure wave propagates outward
with a constant velocity of 6.2µm/ns, which agrees with the longitudinal sound velocity in
conventional silicate glass at room temperature. High local temperatures and pressures and the
generation of shock waves appear to be very important for forming the Si-rich structures that are
needed for the growth of Si particles.
Focused irradiation with femtosecond lasers is very useful for forming Si structures inside glass.
Although much still remains to be done to clarify the precipitation mechanisms and to control the
shape and size of Si structures that are formed, our findings open up new fabrication options for Si
integrated devices and Si photonics.

References:
1. K. Miura, J. Qiu, H. Inouye, T. Mitsuyu, K. Hirao, Appl. Phys. Lett. 71, pp. 3329, 1997.
2. J. Qiu, K. Miura, K. Hirao, Jpn. J. Appl. Phys. 37, pp. 2263, 1998.
3. K. Hirao, K. Miura, Jpn. J. Appl. Phys. 37, pp. 49, 1998.
4. K. Miura, H. Inouye, J. Qiu, K. Hirao, Nuclear Instruments and Methods in Physics Research
B141, pp. 726, 1998.
5. K. Miura, J. Qie, T. Mitsuyu, K. Hirao, Three-dimensional microscopic modifications in glasses
by a femtosecond laser, Proc. SPIE 3618, pp. 141, 1999. doi:10.1117/12.352678.
6. K. Hirao, J. Qiu, K. Miura, T. Nakaya, J. Si, Femtosecond Laser Induced Phenomena in Active
Ion-doped Glasses and Their Applications, Trans. MRS-J 29, no. 1, pp. 3, 2004.
7. K. Miura, J. Qie, T. Mitsuyu, K. Hirao, Space-selective growth of frequency-conversion crystals
in glasses with ultrashort infrared laser pulses, Opt. Lett. 25, pp. 408, 2000.
8. K. Miura, J. Qiu, S. Fujiwara, S. Sakaguchi, K. Hirao, Appl. Phys. Lett. 80, pp. 2263, 2002.
9. Y. Shimotsuma, G . P. Kazansky, J. Qiu, K. Hirao, Phys. Rev. Lett. 91, pp. 247405, 2003.
10. S. Hiramatsu, T. Q. Mikawa Ibaragi, K. Miura, K. Hirao, IEEE Photonics Technol. Lett. 16,
pp. 207, 2004.
11. J. Qiu, Y. Shimotsuma, K. Miura, P. G. Kazanski, K. Hirao, Mechanisms and applications of
femtosecond laser induced nanostructures, Proc. SPIE 5713, pp. 137, 2005. doi:
10.1117/12.598090.
12. Lionel C. Kimerling, Photons to the rescue: Microelectronics becomes microphotonics,
Electrochem. Soc. Interface 9, no. 2, pp. 28-31, 2000.
DOI: 10.1117/2.1200702.0660
http://newsroom.spie.org/x5995.xml?highlight=x525

New approach allows integration of biosensors into electronics

Joe Shapter and Jamie Quinton


Direct attachment of carbon nanotubes makes silicon substrates conduct even in the presence of a
normally insulating oxide layer.
Biosensors are powerful analytical instruments due to their selectivity, sensitivity, and range of
possible applications.1 But integration of the sensing element into the electronic circuitry used to
read the sensor output would be a huge step forward. The power of this type of integration has
been obvious in the development of micro electro-mechanical systems (MEMS), as both the
moving and electronic parts of a device could be packaged on one chip.2
Of course, this type of integration will require the construction of biosensors on silicon substrates.
Often the oxide layer on silicon makes this difficult as it is fairly unreactive as well as insulating.
Our recent work has overcome these obstacles by attaching single-walled (carbon) nanotubes
(SWNT) directly to silicon substrates.3 These nanotubes provide a conduit to access the electronic
states of the semiconducting substrate, making electron transport into the silicon possible. These
currents could be read directly in an integrated package with both a sensing element and electronic
circuitry.
Figure 1 presents a schematic of the approach used to attach the nanotubes. In short, the native
oxide of p-type silicon (100) wafers is hydroxylated. The OH groups produced react with
functionalized single-walled carbon nanotubes via a condensation reaction using a standard
coupling agent.3 We have confirmed the attachment of the SWNTs to silicon using a variety of
surface and spectroscopic techniques.

Figure 1. Schematic representation of the preparation of SWNTs directly attached to silicon.


This chemical approach has two distinct advantages over the various growth processes often
used.4,5 First, the nanotubes are chemically bound to the surface, whereas grown tubes can have
adhesion issues.6 Second, the tops of our nanotubes have chemical functionalities that can react
further to attach species appropriate to applications such as biosensing. Grown tubes often have
end caps, and further modification is difficult without destroying the prepared substrate.6 In
addition, the attachment procedure is quite simple compared to some other approaches, and,
perhaps most importantly, yields only vertically aligned nanotubes. As such, these structures are
available for further functionalization leading to a variety of possible applications.
For example, the nanotubes could be used to construct sensors that require the measurement of
current. We have probed the ability of the nanotube substrates to conduct electrons. Figure 2
shows cyclic voltammograms (CVs) obtained with SWNTs directly attached to the silicon
electrode as well as some control experiments. The nanotube surface produces distinct redox
waves with anodic and cathodic peak positions at 474mV and 596mV, respectively (see Figure
2a). These are very similar to those observed on a gold substrate, as shown in the inset. No signal
is observed with only the oxide layer present. This proves that the nanotubes have in essence
‘electronically’ punched through the insulating oxide layer such that the electronic states of the
nanotubes overlap with those of the semiconducting substrate, allowing conduction.
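A simple reading of the quoted peak positions is sketched below in Python; the comparison with the roughly 59mV separation expected for an ideal reversible one-electron couple at room temperature is a standard rule of thumb added here for context, not a claim taken from the article.
# Back-of-the-envelope reading of the cyclic voltammetry peaks quoted above.
E_PEAK_A_MV = 474.0    # first reported peak position (mV)
E_PEAK_B_MV = 596.0    # second reported peak position (mV)

midpoint = (E_PEAK_A_MV + E_PEAK_B_MV) / 2.0   # approximate formal potential
separation = abs(E_PEAK_B_MV - E_PEAK_A_MV)    # peak-to-peak separation

print(f"formal potential ~ {midpoint:.0f}mV, peak separation = {separation:.0f}mV")
# An ideal, fully reversible one-electron couple would show a separation of
# roughly 59mV at room temperature; the larger value here points to slower
# (quasi-reversible) electron transfer -- an inference from the quoted numbers,
# not a claim made in the article.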

Figure 2. Cyclic voltammograms (CVs) using (a) single-walled (carbon) nanotubes (SWNTs)
directly attached to a silicon working electrode in 0.1 mmol L⁻¹ ferrocene dissolved in 0.1 mol L⁻¹
tetrabutylammonium perchlorate (TBAP)/CH3CN solution; (b) SWNTs directly attached to a
silicon working electrode in 0.1 mol L⁻¹ TBAP/CH3CN blank solution; and (c) a hydroxyl-
terminated silicon working electrode in 0.1 mmol L⁻¹ ferrocene dissolved in 0.1 mol L⁻¹
TBAP/CH3CN solution. The scan rate is 100mV s⁻¹. The inset shows the CV on a gold electrode
with the same ferrocene solution under the same conditions.
Our work successfully demonstrates a new, direct-chemical anchoring approach to the fabrication
of vertically-aligned shortened carbon nanotube architectures on a silicon (100) substrate. The
new interface demonstrates excellent conductivity to the substrate, and as such this approach has
numerous potential applications. The attachment of SWNTs directly to a silicon surface provides a
simple new avenue for the fabrication and development of silicon-based electrochemical and bio-
electrochemical sensors, solar cells, and nanoelectronic devices using further surface modification.

References:
1. G. S. Wilson, R. Gifford, Biosensors for real-time in vivo measurements, Biosensors and
Bioelectronics 20, no. 12, pp. 2388-2403, 2005. doi:10.1016/j.bios.2004.12.003.
2. T.-R. Hui, MEMS and Microsystems: Design and Manufacture, McGraw-Hill, Berkshire, UK,
2001.
3. J. Yu, J. G. Shapter, J. S. Quinton, M. R. Johnston, D. A. Beattie, Direct attachment of well-
aligned single-walled carbon nanotube architectures to silicon (100) surfaces: a simple approach
for device assembly, PCCP 9, no. 4, pp. 510-520, 2007. doi:10.1039/b615096a.
4. S. S. Fan, M. G. Chapline, N. R. Franklin, T. W. Tombler, A. M. Cassell, H. J. Dai, Self-
Oriented Regular Arrays of Carbon Nanotubes and Their Field Emission Properties, Science 283,
pp. 512-514, 1999.
5. Z. F. Ren, Z. P. Huang, J. W. Xu, J. H. Wang, P. Bush, M. P. Siegal, P. N. Provencio, Synthesis
of Large Arrays of Well-Aligned Carbon Nanotubes on Glass, Science 282, pp. 1105-1107, 1998.
6. L. B. Zhu, Y. Y. Sun, D. W. Hess, C. P. Wong, Well-Aligned Open-Ended Carbon Nanotube
Architectures: An Approach for Device Assembly, Nano Lett. 6, pp. 243-247, 2006.
doi:10.1021/nl052183z.

DOI: 10.1117/2.1200702.0667
http://newsroom.spie.org/x6024.xml?highlight=x523

Shaping gold nanorods for plasmonic applications

Carsten Reinhardt, Wayne Dickson, and Robert Pollard


The restructuring of metallic nanorod arrays using femtosecond laser writing may lead to novel
plasmonic devices based on localized assemblies of interacting metallic nanorods.
Nanostructuring is a key element for the development of optics, optoelectronics, and photonics
capable of operating on subwavelength scales. However, the realization of highly integrated
optical devices and sensors with nanolocal electromagnetic field control requires structural
elements comparable to and smaller than a wavelength of light. One widely investigated approach
is to use appropriately designed metallic and metallodielectric nanostructures to manipulate light
in the form of various surface plasmon excitations, such as surface plasmon polaritons and
localized surface plasmons.1 These electromagnetic surface modes are associated with the
coherent excitation of the free-electron density charge waves at the interface of a metal and a
dielectric material. Various plasmonic circuit elements based on nanostructured metal films, such
as waveguides, mirrors, lenses, resonators, and plasmonic crystals, have been designed and tested.
Plasmonic nanostructures can provide both passive and active all-optical elements, the latter
capable of operating at low-control light intensities due to the electromagnetic field enhancement
resulting from plasmonic excitations.2
Alternative plasmonics can be developed based on discrete sets of interacting metallic particles.
We have recently developed an approach to inexpensively fabricate large arrays (1cm², with up to
1 billion nanorods) of aligned nanoscale metallic rods attached to a substrate.3 Typically, the
nanorods have a diameter of some 20nm, with length controllable between 50 and 450nm. Such
dimensions are required to tailor the linear and nonlinear behavior of plasmonic excitations. However, to
develop their functionalities and integrate them in complex plasmonic circuits for sensing and
telecommunication applications, the nanorod assemblies must be configured in various shapes.
An attractive solution is to use femtosecond laser-based direct writing technology to restructure
free-standing nanorods and fabricate nanorod assemblies in the required shapes. This approach is
uniquely versatile because of its ability not only to ablate nanorods from the arrays but also to
melt adjacent ones, thus yielding arrays with continuous metallic nanostructures. It also offers the
possibility to create polymer structures with embedded nanorods.4 The application of femtosecond
laser technology (see Figure 1) enables the precise adjustment of the laser power delivered to the
target material while allowing high-resolution restructuring of the nanorod arrays. By scanning the
laser beam over the sample surface, structures of different geometries can be written. Plasmonic
elements including simple lines, splitters, and double splitters have been fabricated in this manner.
Examination of the specific elements of these structures reveals that nanorod ablation can be
successfully achieved with high precision. At optimal laser parameters, the ablation process can
create a sharp boundary with single-nanorod precision (see Figure 2). However, the accuracy
strongly depends on the laser power and on the properties of the nanorod array, which may cause
melting to occur at the boundary.

Figure 1. Schematic setup for restructuring gold nanowires by femtosecond laser irradiation.

Figure 2. SEM images of (a) 2×2 splitter created on gold nanorods by two-photon polymerization
of a negative photoresist using femtosecond laser radiation and (b) the boundary of an ablated
nanorod structure (nanorod length: 250nm, diameter: 25nm, spacing 40nm). The femtosecond
pulse energy was 80nJ.
The possibility to locally restructure metallic nanorod arrays using femtosecond laser writing may
lead to the development of novel plasmonic devices based on localized assemblies of interacting
metallic nanorods, refractive index sensors, extreme nanoscale waveguides, and
splitters/combiners for optical chips. Combined with nonlinear polymers, these structures provide
an opportunity to develop integratable nonlinear optical nanodevices.

References:
1. A. V. Zayats, I. I. Smolyaninov, A. A. Maradudin, Nano-optics of surface plasmon polaritons,
Phys. Rep. 408, no. 3–4, pp. 131-314, 2005. doi: 10.1016/j.physrep.2004.11.001.
2. G. A. Wurtz, R. Pollard, A. V. Zayats, Optical bistability in nonlinear surface-plasmon
polaritonic crystals, Phys. Rev. Lett. 97, no. 5, pp. 057402, 2006.
doi:10.1103/PhysRevLett.97.057402.
3. R. Atkinson, W. R. Hendren, G. A. Wurtz, W. Dickson, P. Evans, A. V. Zayats, R. J. Pollard,
Anisotropic optical properties of arrays of gold nanorods embedded in alumina, Phys. Rev. B
73, no. 23, pp. 235402, 2006. doi:10.1103/PhysRevB.73.235402.
4. C. Reinhardt, S. Passinger, B. N. Chichkov, W. Dickson, G. A. Wurtz, P. Evans, R. Pollard, A.
V. Zayats, Restructuring and modification of metallic nanorod arrays using femtosecond laser
direct writing, Appl. Phys. Lett. 89, no. 23, pp. 231117, 2006.
DOI: 10.1117/2.1200702.0633
http://newsroom.spie.org/x5814.xml?highlight=x523

Waveguiding by subwavelength arrays of active quantum dots

Lih Y. Lin and Chia-Jean Wang


Quantum dots self-assembled into an array enable light to be transmitted in submicrometer widths
over straight and 90°-bend waveguides.
Guiding light on a nanometer scale without excessive loss is one of the major challenges for ultra-
high-density photonic integrated circuits. Due to the diffraction limit, the width of a passive
waveguide must be on the order of the transmitted wavelength or larger to achieve confinement.
This restriction necessitates micrometer-wide structures for most optical components.
Within the last decade, a number of researchers have made gold- and silver-nanoparticle1 and gold
layered-slot2 waveguides, which rely on the plasmonic behavior and negative dielectric property
of metals to transfer an optical signal. Photonic-crystal-based devices, 3 whose loss is less than that
of plasmonics, may also confine light within a space of several hundred nanometers in a matrix of
holes that provides a photonic band gap. However, loss is not actively mediated in any of these
cases, and confinement of the field to the actual device footprint and possible integrated-
fabrication methods are not optimal.
Our approach to reducing loss and shrinking the structure is to form a waveguide from gain-
enabled quantum dots (QDs) that are deposited via self-assembly in a subwavelength template. 4
The operation, depicted in Figure 1(a), uses pump light to excite electrons from the valence to the
conduction band in the QDs. A light signal introduced at one edge via a fiber probe propagates
along the device to promote recombination of electron-hole pairs, resulting in stimulated emission.
The photons cascade one after the other and travel downstream, leading to increased throughput.
Transmission is tempered only by the coupling efficiency between adjacent QDs, and is detected
at the output edge by another fiber probe.

Figure 1. (a) Schematic diagram of nanophotonic waveguide in operation; (b) Fluorescence


image of 500nm-wide waveguides. Scale bar in white is 1µm.5
We have used two different self-assembly chemistries to fabricate such nanophotonic QD
waveguides.5 The first exploits complementary strands of DNA as programmable elements to
deposit QDs in areas defined by the DNA sequence. The second method is a two-layer self-
assembly process that enables rapid prototyping and leads to higher QD packing density on the
substrate. We check assembly of the device with fluorescence and atomic force microscopy. The
fluorescence micrograph in Figure 1(b) shows 500nm waveguides, both isolated and in pairs with
200 and 500nm spacing.
We tested the device, in the configuration described above, with both pump and signal lasers. To
determine the net contribution of the waveguide, we ramped the pump light to vary the absorption
or gain of the QDs while the signal light was toggled on and off. Subsequently, the sample was
moved out from underneath the test probes, and we ran control tests on the bare substrate. After
performing data analysis, we found that the output power from the waveguide, relative to the
substrate transmission, increased with pump power. The curves in Figure 2 demonstrate the
expected trend for both straight and 90°-bend waveguides with improved throughput due to the
QDs.

Figure 2. The measured relative power out of a 500nm-wide waveguide rises with increasing
pump light for both (a) a 10µm-long straight waveguide and (b) a 20µm waveguide with a
90° bend. The inset shows a fluorescence image.5
To further qualify our waveguide for integration with existing optoelectronics, we are measuring
transmission through structures of varying length to find loss as a function of distance. We will
also investigate crosstalk between adjacent devices. Comparing the results with conventional
dielectric waveguides will test whether the near-field energy transfer between quantum dots in our
device reduces the crosstalk to neighboring waveguides and increases the overall direct
transmission. The characterization of the device will demonstrate its viability as a cornerstone for
nanoscale photonic integrated circuits.

References:
1. S. A. Maier, M. D. Friedman, P. E. Barclay, O. Painter, Experimental demonstration of fiber-
accessible metal nanoparticle plasmon waveguides for planar energy guiding and sensing, Appl.
Phys. Lett. 86, pp. 071103, 2004. doi:10.1063/1.1862340.
2. L. Chen, J. Shakya, M. Lipson, Subwavelength confinement in an integrated metal slot
waveguide on silicon, Opt. Lett. 31, pp. 2133-2135, 2006.
3. W. Bogaerts, R. Baets, P. Dumon, V. Wiaux, S. Beckx, D. Taillaert, B. Luyssaert, J. Van
Campenhout, P. Bienstman, D. Van Thourhout, Nanophotonic waveguides in silicon-on-insulator
fabricated with CMOS technology, J. Lightwave Technol. 23, no. 1, pp. 401-412, 2005.
doi:10.1109/JLT.2004.834471.
4. C.-J. Wang, L. Y. Lin, B. A. Parviz, Modeling and simulation for a nano-photonic quantum dot
waveguide fabricated by DNA-directed self-assembly, J. Select. Topics Quantum Electron. 11, no.
2, pp. 500-509, 2005. doi:10.1109/JSTQE.2005.845616.
5. C.-J. Wang, L. Huang, B. A. Parviz, L. Y. Lin, Sub-diffraction photon guidance by quantum dot
cascades, Nano Lett. 6, no. 11, pp. 2549-2553, 2006. doi:10.1021/nl061958g.
DOI: 10.1117/2.1200702.0588
http://newsroom.spie.org/x5700.xml?highlight=x523

Liquid-crystal-based devices manipulate terahertz-frequency radiation

Ci-Ling Pan and Ru-Pin Pan


Birefringence and transparency of selected liquid crystals at terahertz (THz) frequencies promise
added functionalities for liquid-crystal-based THz photonic elements such as phase shifters and
filters.
The birefringence (double refraction of light into polarized ordinary and extraordinary rays) of
liquid crystals (LC) is well known and used extensively to manipulate optical radiation in visible
and near-IR light. Recently, we showed that several LCs are relatively transparent (extinction
coefficient of 2cm-1) and exhibit substantial birefringence magnitude, Δn=0.1, in the terahertz
(THz)—or sub-millimeter wavelength—region. Thus, it should be feasible to produce new THz
photonic elements with LC-enabled functionalities such as phase shifters, modulators, attenuators,
and polarizers.
To illustrate, we present the principle and performance of an LC-based Lyot filter. It has two phase
retarder elements, A and B, separated by a linear polarizer (see Figure 1). Each retarder element
consists of a fixed retarder (FR) and a tunable retarder (TR). The FR consists of a pair of
permanent magnets sandwiching a homogeneously-aligned LC cell (i.e., the LC molecules align
parallel to the substrate). The homogeneous cells in FRA and FRB supply fixed phase retardations,
ΓA and ΓB, for THz waves. The tunable retarders, TRA and TRB—see Figure 1(b)—are
homeotropically-aligned LC cells (i.e., LC molecules align perpendicular to the substrate) at the
center of a rotatable magnet. TRA and TRB are used to achieve the desired variable phase
retardation, ΔΓA and ΔΓB.

Figure 1. Schematic diagram of a liquid-crystal-based tunable terahertz (THz) Lyot filter. LC:
liquid crystal. P: polarizer. N: north pole. S: south pole.
Because of the birefringence of LC, the THz waves that pass through each element separate into
extraordinary and ordinary rays (e-ray and o-ray) with corresponding time delays between the e-
ray and o-ray, ΔtA and ΔtB, respectively. This Lyot filter is designed such that ΔtB=2ΔtA. The first
maxima of the transmittance of the Lyot filter, T(f), occur when ΔtA·f=1, where f is the frequency of
the THz wave. The homogeneous LC layers in FRA and FRB are 4.5 and 9mm thick, respectively.
The LC layers in the homeotropic cells for TRA and TRB are 2 and 4mm thick, respectively. The
retardation of THz waves transmitted through the homeotropic cells is tuned by rotating the
magnets. The retardation provided by the homeotropic cells, TRA and TRB, is zero when the LC
molecules are parallel to the propagation direction of THz waves and increases with reorientation
of the LC molecules as the magnets rotate.1,2

Figure 2. An example of the transmitted spectrum of the broadband THz pulse through the LC
THz Lyot filter, obtained by taking the fast Fourier transform of the time-domain transmitted THz
signal, which is shown in the inset.
An example of the transmitted THz spectrum through the filter, normalized to the maximum of
transmittance, is shown in Figure 2. The transmitted peak frequency and the bandwidth of the
filter are 0.465THz and ~0.10THz, respectively. The corresponding THz temporal profile with
total retardance, ΔΓ=0, is shown in the inset of Figure 2. Note that the four peaks have peak-to-
peak separations of ΔtA. This is explained as follows: The THz wave is separated into an o-ray and
an e-ray after passing the first element. After passing through the second element, these two waves
are further separated into o-o-, e-o-, o-e-, and e-e-rays. The transmission spectrum manifests the
interference among these four components of the THz signal.
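
The four-ray picture is easy to check numerically. The sketch below (equal amplitudes and a single
Gaussian pulse as stand-ins for the real THz waveform) models the o-o, e-o, o-e, and e-e components
as copies of the pulse delayed by 0, ΔtA, 2ΔtA, and 3ΔtA, and obtains the transmitted spectrum from
the FFT of their sum; the delay and pulse width below are illustrative assumptions, chosen so that
1/ΔtA is close to the 0.465THz peak quoted above.

import numpy as np

dt_A = 2.15e-12                              # assumed stage-A e/o delay (s); 1/dt_A ~ 0.465THz
t = np.arange(-20e-12, 60e-12, 1e-14)        # time axis, 10fs steps
pulse = lambda t0: np.exp(-((t - t0) / 0.3e-12) ** 2)   # toy THz pulse envelope

# o-o, e-o, o-e, e-e components: delays 0, dt_A, 2*dt_A, 3*dt_A with equal weights.
signal_out = sum(pulse(n * dt_A) for n in range(4))

freq = np.fft.rfftfreq(t.size, d=1e-14)
spec_out = np.abs(np.fft.rfft(signal_out)) ** 2
spec_in = np.abs(np.fft.rfft(pulse(0.0))) ** 2

# Normalized transmittance: four-pulse spectrum divided by the single-pulse spectrum.
band = (freq > 0.05e12) & (freq < 0.7e12)
T = spec_out[band] / spec_in[band]
f_peak = freq[band][np.argmax(T)]
print(f"first transmission maximum near {f_peak / 1e12:.3f} THz "
      f"(1/dt_A = {1 / dt_A / 1e12:.3f} THz)")
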
The filter is tuned by rotating the magnets in TRA and TRB synchronously to maintain ΔtB=2ΔtA.
The temporal THz profile is split from one peak into four peaks after passing through the two
elements of the filter. The equal time difference, ΔtA, between each peak can be observed from
measured data. The frequency-dependent transmitted spectrum comes from the interference of
these four parts of the THz signal and is obtained by applying a fast Fourier transform (FFT) to the
temporal signals. The peak transmission frequency of the filter decreases with increasing ΔtA. The
tuning range of the filter is from 0.388 to 0.564THz, or a fractional tuning range of ~40%. The
bandwidth of the present device is 0.1THz. Adding elements can narrow the bandwidth even
more. Extending the LC-based Lyot filter to the 10–30THz range or mid-IR is straightforward,
with shorter wavelengths providing the additional benefit of a larger tunable range.
We have demonstrated the design of an LC-based, tunable Lyot filter for the THz frequency region;
the same approach promises other LC-enabled THz photonic devices such as phase shifters and modulators.

References:
1. Ci-Ling Pan, Ru-Pin Pan, Recent progress in liquid crystal THz optics, Proc. of SPIE 6135, pp.
D1-13, 2006. Invited paper.
2. Chao-Yuan Chen, Cho-Fan Hsieh, Yea-Feng Lin, Ci-Ling Pan, Ru-Pin Pan, A Liquid-Crystal-
Based Terahertz Tunable Lyot Filter, Appl. Phys. Lett. 88, pp. 101107, 2006.
doi:10.1063/1.2181271.
DOI: 10.1117/2.1200702.0676
http://newsroom.spie.org/x5999.xml?highlight=x521

New molecules may improve the performance of optical devices

Mark Kuzyk
A newly synthesized molecule that interacts with light 50% more efficiently than all previous
molecules has possible applications ranging from digital data storage to fiber optics to cancer
treatments.
The key element of any application that uses light, such as a DVD player or fiber optics, is the
interaction of light with a material. We rely on such applications as we seek an ever faster Internet
with larger data capacity, higher-density electronic storage media, and more effective cancer
therapies. For example, an optical switch needs to be fast and consume little power, while an
effective photodynamic cancer therapy requires specialized molecules that stick only to cancer
cells and generate heat when exposed to laser light, zapping the disease without damaging healthy
organs. Thus, increasing the efficiency of the interaction of light with the molecular building
blocks of a material is important for a wide variety of applications. For over three decades, the
efficiency of the best molecules remained a factor of 30 short of the fundamental limits.1
In the past, a common approach to increasing the efficiency of interactions between light and
matter was to make larger molecules and to pack more of them into the material. However, with
this brute force approach, the efficiency never increased nearly as much as possible, so molecules
never lived up to their full potential. The first step for breaking through this glass ceiling was
taken by Zhou, Watkins, and myself when we used computer modeling to design the ultimate
molecule.2 We entered an arbitrary molecular shape into the computer, which calculated the
switching efficiency per unit of molecular size. It then varied the shape while noting how the
efficiency changed, in effect ‘tuning in’ to the ideal shape, as one would tune in to a radio station
while listening for the best signal. The result was somewhat surprising, since it implied that the
motions of the electrons in the molecule needed to be somewhat impeded. Shortly after the design
criteria for the ideal molecule were established, I was part of an international collaboration with
Pérez and Clays that reported on a new molecule with the required ‘speed bump’ to trip up the
electrons. This new molecule broke through the long-standing glass ceiling with an efficiency that
is 50% larger than the previously best molecules.3 Using the same paradigm, additional
improvements—up to a factor of 20—are possible.
The strength of interaction between a molecule and light is quantified by the hyperpolarizability,
which characterizes the probability that the molecule will mediate the merger of two photons into
one. The second hyperpolarizability characterizes the merger of three photons, while higher-order
terms represent the merger of even more photons. Applications such as electro-optic switching,
frequency doubling, and non-contact chip testing rely on the hyperpolarizability, while all-optical
switching, three-dimensional photolithography, and photodynamic cancer therapies rely on the
second hyperpolarizability. Our computer simulations focused on optimizing the intrinsic
hyperpolarizability, which divides out the effect of molecular size. The same technique can be
applied to higher-order polarizabilities.
Figure 1 shows the initial smooth potential energy function—V(x)=tanh(x)—and the evolution of
the potential energy function as it tunes in to the optimized response. After about 7000 iterations,
the potential energy function develops dramatic wiggles that act to localize the various excited
state wave functions. At this point, the intrinsic hyperpolarizability is more than 0.7, less than 30%
away from the fundamental limit I calculated.4 Based on the observation of these wiggles, we
suggested that a molecule with modulated conjugation should lead to the required bumpy
potential. The inset to Figure 1 shows the proposed type of structure, which has alternating single
and double bonds with nitrogens, carbons, and various ring structures. This variation in
composition was proposed as a way to create the required bumps.
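
A rough feel for why the wiggles matter can be had from a toy numerical experiment (a sketch, not
the published calculation): discretize the Schrödinger equation for the smooth potential
V(x)=tanh(x), add a sinusoidal modulation to mimic the bumps, and compare how localized the lowest
excited-state wavefunctions become, for instance via their inverse participation ratio. All
parameters below are arbitrary illustrative choices.

import numpy as np

def eigenstates(modulation=0.0, n_points=600, half_width=10.0, n_states=4):
    """Eigenstates of a 1D particle (hbar = m = 1) in V(x) = tanh(x) plus an
    optional sinusoidal 'bump' term, on a hard-walled finite-difference grid."""
    x = np.linspace(-half_width, half_width, n_points)
    dx = x[1] - x[0]
    V = np.tanh(x) + modulation * np.sin(4.0 * x)
    # Finite-difference Hamiltonian: -(1/2) d^2/dx^2 + V(x)
    H = (np.diag(1.0 / dx**2 + V)
         - np.diag(np.full(n_points - 1, 0.5 / dx**2), 1)
         - np.diag(np.full(n_points - 1, 0.5 / dx**2), -1))
    energies, psi = np.linalg.eigh(H)
    return x, dx, energies[:n_states], psi[:, :n_states]

def ipr(psi, dx):
    """Inverse participation ratio of each column: larger means more localized."""
    p = np.abs(psi) ** 2
    p /= p.sum(axis=0) * dx
    return (p ** 2).sum(axis=0) * dx

for m in (0.0, 0.5, 1.0):
    x, dx, energies, psi = eigenstates(modulation=m)
    print(f"modulation = {m:.1f}   IPR of lowest states:", np.round(ipr(psi, dx), 3))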

Figure 1. The evolution of the potential energy function as it approaches the optimal response.
The inset shows an example of modulated conjugation.
Using the concept of modulation of conjugation, we reported on a series of molecules that showed
both smooth conjugation and modulated conjugations.3 Figure 2 shows a plot of the intrinsic
hyperpolarizabilities of these molecules. In particular, molecule 4 has two identical rings, while
molecule 7 has two rings of different compositions. These two different types of rings provide
bumps of different sizes, which leads to modulation. The intrinsic hyperpolarizability of molecule
7 is 50% greater than the glass ceiling (i.e., the apparent limit characterizing the behavior of all
previous molecules). Since this new record-breaking molecule is still about a factor of 20 below
the fundamental limit, there is room for improvement.

Figure 2. Measurement of the intrinsic hyperpolarizability of the new molecules is shown. The
insets show the molecular structures of molecules 4 and 7.
The series of seven molecules, including number 7, was supplied by chemist Yuxia Zhao at the
Chinese Academy of Sciences and measured in Belgium. The molecules currently have only
lengthy, multisyllabic chemical names, and for that reason are not listed here.
To take full advantage of the concept of conjugation modulation, new molecules with many
wiggles (i.e., speed bumps) will need to be synthesized and tested. If the theoretically possible
improvement in intrinsic hyperpolarizability by a factor of 20 is realized, then green laser pointers
(which use frequency doublers) can be made brighter, and optical switches can operate at lower
voltage and power, thus making them more attractive in telecommunications systems. Subtler
electric field variations can be detected with electro-optic sampling when testing new chip
designs. By offering more efficient interactions between light and matter, the new supermolecules
will enable these and other enhancements to current technologies ranging from data transmission
to cancer treatments.
I thank the National Science Foundation (ECS-0354736) and Wright Patterson Air Force Base for
generously supporting this work.

References:
1. Kuzyk quantum gap, http://en.wikipedia.org/wiki/Kuzyk_quantum_gap
2. J. Zhou, M. Kuzyk, D. S. Watkins, Pushing the hyperpolarizability to the limit, Opt. Lett. 31,
pp. 2891, 2006.
3. J. Pérez Moreno, Y. Zhao, K. Clays, M. G. Kuzyk, Modulated conjugation as a means for
attaining a record high intrinsic hyperpolarizability, Opt. Lett. 32, pp. 59, 2007.
4. M. G. Kuzyk, Physical limits on electronic nonlinear molecular susceptibilities, Phys. Rev.
Lett. 85, pp. 1218, 2000.
DOI: 10.1117/2.1200702.0637
http://newsroom.spie.org/x5985.xml?highlight=x521

Optical wavefront measurement using a novel phase-shifting point-diffraction interferometer

Pietro Ferraro, Melania Paturzo and Simonetta Grilli


A simple point-diffraction interferometer with phase-shifting capability has been designed using a
pinhole fabricated from a lithium niobate crystal.
Optical wavefront testing is important in many fields, from astronomy to any application with
optical testing requirements. A variety of techniques are currently applied in areas as diverse as
optical component testing and wavefront sensing, all of which require the qualitative or quantitative
analysis of optical phase disturbances. Since these cannot be observed directly, the desired
information must be extracted indirectly, for example through the generation of fringe patterns in an
interferometer.

The use of a point-diffraction interferometer (PDI) has several advantages when compared with
other methods, the most important being its common path design (see Figure 1). In a PDI, an
interferogram can be produced with only a single laser path rather than with the two paths required
by Mach–Zehnder or Michelson interferometers. This is especially important in the measurement
of large objects such as wind tunnel flows, in which the optical paths are very long and air
turbulence must be minimized along the paths. The simple common path design requires relatively
few optical elements, thereby reducing the cost, size, and weight of the instrument while also
simplifying alignment. The PDI has been used to test a variety of optical elements, and its simple
alignment makes it useful for optical testing in the IR, UV, and X-ray spectral regions even in
astronomy applications.

Figure 1. Scheme of a point-diffraction interferometer: optical interference occurs between the two
wavefronts.
As with any interferometric technique, PDI interferograms must be interpreted to extract object
wavefront information. The most accurate and effective way to measure both the magnitude and
the sign of wavefront aberrations is to use phase-shifting (PS) interferometry. However, this
advanced technique could not be used with the PDI because its common path design made it
difficult to shift the phase of one beam relative to the other.
Different schemes were proposed to integrate PS into PDI configurations,1–3 but the first to phase-
shift the PDI was Kwon, who fabricated a pinhole in a sinusoidal transmission grating to produce
phase-shifted interferograms.4 Other designs used liquid crystals as a variable retarder.5 One
interesting PDI configuration based on liquid crystals was invented by Mercer and Creath under a
NASA contract. Instead of a pinhole, they used a microsphere inside a liquid crystal layer to
generate a spherical wavefront, while the aberrated wavefront was shifted electro-optically by the
liquid crystal.6–8
We have designed a new PDI configuration based on a pinhole filter fabricated from a lithium
niobate (LN) crystal,9 an optical material widely used in optics and optoelectronics. LN is
transparent in a very wide spectral range (400–5500nm), hence its usefulness for numerous
applications. Moreover, it enables a very high frequency of phase-shift modulation.
A thin aluminum layer with a circular opening is fabricated on the surface of the crystal by
conventional photolithography with subsequent aluminum deposition and liftoff (see Figure 2,
inset). A uniform planar aluminum layer is deposited on the opposite face. The aluminum acts as
electrode on both faces and as pinhole filter on the exit face of the crystal. The applied voltage
causes a uniform phase shift over the aberrated wavefront while leaving unaffected the diffracted
reference beam passing through the pinhole.

The optical configuration used for the experimental test is shown in Figure 2. A He-Ne laser beam
at 632.8nm is expanded and then focused onto the PDI-LN. One lens of the optical setup is
intentionally widely tilted to introduce severe off-axis aberrations in the tested wavefront.

Figure 2. Scheme of the optical setup of our interferometer. Li: Lenses. PDI-LN: Point-diffraction
interferometer. CCD: Camera. Insets: (a) Schematic drawing of the pinhole fabrication process;
(b) optical microscope image of the resist dot and (c) of the subsequent aluminum opening
structure obtained on the LN crystal surface.
During the application of the external voltage, the un-diffracted wavefront experiences a change of
optical path length, due to the electro-optic effect. We apply a linear voltage ramp across the
sample from −0.6 to 0.3 kV with continuous capture of the fringe pattern (10 frames per second)
using a CCD. The images are digitized and stored in a PC. This allows us to select one or more
sequences of phase-shifted images with unknown but constant PS step. The selected images are
those acquired at specific voltages differing by a constant voltage step. They can then be
processed using the Carré algorithm10 to retrieve the aberrated wavefront.
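
The Carré step itself is straightforward to reproduce. The sketch below applies one common form of
the Carré formula to four synthetic fringe frames with an equal but unspecified phase step; sign and
quadrant conventions vary between implementations, and the synthetic tilt-plus-defocus phase here is
just a stand-in for the captured CCD frames.

import numpy as np

def carre_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames separated by an equal, unknown phase step
    (one common form of the Carre formula)."""
    num = (3.0 * (i2 - i3) - (i1 - i4)) * ((i1 - i4) + (i2 - i3))
    den = (i2 + i3) - (i1 + i4)
    # The sign of (i2 - i3) carries the sign of sin(phase).
    return np.arctan2(np.sign(i2 - i3) * np.sqrt(np.clip(num, 0.0, None)), den)

# Synthetic fringe frames with phase step alpha between successive frames.
y, x = np.mgrid[0:256, 0:256]
true_phase = 0.05 * x + 0.02 * y + 2e-4 * (x - 128) ** 2   # toy tilt-plus-defocus aberration
alpha = 1.2                                                # unknown-but-constant step (rad)
frames = [1.0 + 0.8 * np.cos(true_phase + k * alpha) for k in (-1.5, -0.5, 0.5, 1.5)]

wrapped = carre_phase(*frames)
print("retrieved wrapped phase range:", wrapped.min(), "to", wrapped.max())
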
Figure 3 shows the plot (in radians) of the retrieved phase of the aberrated wavefront, and the inset
shows one of the four phase-shifted images of the fringe pattern.

Figure 3. Phase map of the wavefront calculated with a Carré phase shift algorithm. Inset: One
of the acquired fringe patterns.
In conclusion, we have described a new phase-shifting point-diffraction interferometer design
based on a LN crystal pinhole. We have demonstrated the proof of principle of the device, testing
it on a wavefront emerging from a tilted spherical lens. Our PDI design has several important
advantages over other PDI configurations. The optical setup is very simple and highly stable against
environmental noise, eliminating the need for an especially quiet laboratory environment.
http://newsroom.spie.org/x5912.xml?highlight=x521

A Mathematical Solution for Another Dimension

Ever since 1887, when Norwegian mathematician Sophus Lie discovered the mathematical group
called E8, researchers have been trying to understand the extraordinarily complex object described
by a numerical matrix of more than 400,000 rows and columns.

The E8 root system consists of 240 vectors in an 8-dimensional space. Those vectors are the vertices (corners)
of an 8-dimensional object called the Gosset polytope 421.
In the 1960s, Peter McMullen drew by hand a 2-dimensional
representation of the Gosset polytope 421. This image was
computer generated by John Stembridge, based on
McMullen's drawing.
Credit: American Institute of Mathematics

Now, an international team of experts using powerful computers and programming techniques has
mapped E8--a feat numerically akin to the mapping of the human genome--allowing for
breakthroughs in a wide range of problems in geometry, number theory and the physics of string
theory.
'Although mapping the human genome was of fundamental importance in biology, it doesn't
instantly give you a miracle drug or a cure for cancer,' said mathematician Jeffrey Adams, project
leader and mathematics professor at the University of Maryland. 'This research is similar: it is
critical basic research, but its implications may not become known for many years.'
Team member David Vogan, a professor of mathematics at the Massachusetts Institute of
Technology (MIT), presented the findings today at MIT.
The effort to map E8 is part of a larger project to map out all of the Lie groups--mathematical
descriptions of symmetry for continuous objects like cones, spheres and their higher-dimensional
counterparts. Many of the groups are well understood; E8 is the most complex.
The project is funded by the National Science Foundation (NSF) through the American Institute of
Mathematics.
It is fairly easy to understand the symmetry of a square, for example. The group has only two
components, the mirror images across the diagonals and the mirror images that result when the
square is cut in half midway through any of its sides. The symmetries form a group with only
those 2 degrees of freedom, or dimensions, as members.
A continuous symmetrical object like a sphere is 2-dimensional on its surface, for it takes only two
coordinates (latitude and longitude on the Earth) to define a location. But in space, it can be
rotated about three axes (an x-axis, y-axis and z-axis), so the symmetry group has three
dimensions.
In that context, E8 strains the imagination. The symmetries represent a 57-dimensional solid (it
would take 57 coordinates to define a location), and the group of symmetries has a whopping 248
dimensions.
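
Small pieces of this structure can be reproduced directly. The 240 root vectors mentioned in the
image caption above have an explicit description: the 112 vectors with two entries equal to ±1 and
the rest 0, plus the 128 vectors with every entry equal to ±1/2 and an even number of minus signs;
together with the 8 Cartan directions they account for E8's 248 dimensions. The short script below
simply enumerates and checks them.

from itertools import combinations, product

roots = []

# Type 1: +/-1 in two coordinates, zeros elsewhere (112 vectors).
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))

# Type 2: all coordinates +/-1/2 with an even number of minus signs (128 vectors).
for signs in product((0.5, -0.5), repeat=8):
    if sum(s < 0 for s in signs) % 2 == 0:
        roots.append(signs)

print("number of E8 roots:", len(roots))        # 240
print("all have squared length 2:",
      all(abs(sum(c * c for c in v) - 2) < 1e-12 for v in roots))
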
Because of its size and complexity, the E8 calculation ultimately took about 77 hours on the
supercomputer Sage and created a file 60 gigabytes in size. For comparison, the human genome is
less than a gigabyte in size. In fact, if written out on paper in a small font, the E8 answer would
cover an area the size of Manhattan.
While even consumer hard drives can store that much data, the computer had to have continuous
access to tens of gigabytes of data in its random access memory (the RAM in a personal
computer), something far beyond that of home computers and unavailable in any computer until
recently.
The computation was sophisticated and demanded experts with a range of experiences who could
develop both new mathematical techniques and new programming methods. Yet despite numerous
computer crashes, caused by both hardware and software problems, at 9 a.m. on Jan. 8, 2007, the
calculation of E8 was complete.
http://www.physlink.com/News/070327AnotherDimensionE8.cfm

A Single-Photon Server with Just One Atom

Every time you switch on a light bulb, 10 to the power of 15 (a million times a billion) visible
photons, the elementary particles of light, illuminate the room every second. If that is too
many for you, light a candle. If that is still too many, and, say, you just want one and not more than
one photon every time you press the button, you will have to work a little harder.
physicists in the group of Professor Gerhard Rempe at the Max Planck Institute of Quantum
Optics in Garching near Munich, Germany, have now built a single-photon server based on a
single trapped neutral atom. The high quality of the single photons and their ready availability are
important for future quantum information processing experiments with single photons. In the
relatively new field of quantum information processing the goal is to make use of quantum
mechanics to compute certain tasks much more efficiently than with a classical computer. (Nature
Physics online, March 11th, 2007)

A single atom trapped in a cavity generates a single photon after being triggered by a laser pulse. After the source is
characterised, the subsequent photons can be distributed to a
user.
Image: Max Planck Institute of Quantum Optics

A single atom, by its nature, can only emit one photon at a time. A single photon can be generated
at will by applying a laser pulse to a trapped atom. By putting a single atom between two highly
reflective mirrors, a so-called cavity, all of these photons are sent in the same direction. Compared
with other methods of single-photon generation the photons are of a very high quality, i.e. their
energy varies very little, and the properties of the photons can be controlled. They can for instance
be made indistinguishable, a property necessary for quantum computation. On the other hand, up
to now, it was not possible to trap a neutral atom in a cavity and at the same time generate single
photons for a sufficiently long time to make practical usage of the photons.
In 2005 the team around Prof. Rempe was able to increase the trapping times of single atoms in a
cavity significantly by using three-dimensional cavity cooling. In the present article they report that
they have been able to combine this cavity cooling with the generation of single photons in such a way
that a single atom can generate up to 300,000 photons. In their current system
the time the atom is available is much longer than the time needed to cool and trap the atom.
Because the system can therefore run with a large duty cycle, distribution of the photons to a user
has become possible: The system operates as a single-photon server.
The experiment uses a magneto-optical trap to prepare ultracold Rubidium atoms inside a vacuum
chamber. These atoms are then trapped inside the cavity in the dipole potential of a focused laser
beam. By applying a sequence of laser pulses from the side, a stream of single photons is emitted
from the cavity. Between each emission of a single photon the atom is cooled, preventing it from
leaving the trap. To show that not more than one photon was produced per pulse, the photon
stream was directed onto a beam splitter, which directed 50% of the photons to a detector, and the
other 50% to a second detector. A single photon will be detected either by detector 1 or by detector
2. If detections of both detectors coincide, more than one photon must have been present in the
pulse. It is thus the absence of these coincidences that proves that one and not more than one
photon is produced at the same time, which is demonstrated convincingly in the work presented.
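
The logic of that coincidence test is easy to mimic numerically. The toy Monte Carlo below (an
illustration, not the group's analysis) sends pulses containing either exactly one photon or a
Poisson-distributed number with the same mean onto a 50/50 beam splitter and counts how often both
detectors fire within the same pulse; ideal lossless detectors are assumed.

import numpy as np

rng = np.random.default_rng(0)

def coincidence_fraction(photons_per_pulse):
    """Fraction of pulses in which both detectors behind a 50/50 beam splitter
    register at least one photon (ideal, lossless detectors)."""
    coincidences = 0
    for n in photons_per_pulse:
        hits = rng.integers(0, 2, size=n)       # 0 -> detector 1, 1 -> detector 2
        if n > 0 and hits.min() == 0 and hits.max() == 1:
            coincidences += 1
    return coincidences / len(photons_per_pulse)

n_pulses, mean = 100_000, 1.0
single = np.ones(n_pulses, dtype=int)           # ideal single-photon source
poisson = rng.poisson(mean, n_pulses)           # e.g. strongly attenuated laser pulses

print("single-photon source :", coincidence_fraction(single))
print("Poissonian source    :", coincidence_fraction(poisson))
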
With the progress reported now, quantum information processing with photons has come one step
closer. With the single-photon server operating, Gerhard Rempe and his team are now ready to
take on the next challenges such as deterministic atom-photon and atom-atom entanglement
experiments.

Double-negative metamaterial edges towards the visible

Researchers in the US have created a metamaterial that has a negative magnetic permeability and
negative electric permittivity for infrared light with a wavelength of 813
nm. This is claimed to be the shortest wavelength yet for such a metamaterial and lies just
outside of the visible spectrum at 380-780 nm. The previous record had been about 1500 nm,
and the result is an important step towards the creation of double-negative negative-index
metamaterials (DN-NIMs) that operate in the visible range.

Naturally occurring materials have a positive index of refraction, whereas a negative-index
metamaterial (NIM) has a structure that is engineered artificially to have a negative index of
refraction. NIMs have a number of desirable properties that do not exist in normal materials
including the ability to focus light to a point smaller than its wavelength in a so-called
“superlens”, which could allow optical microscopes to view much smaller objects than possible today.
Image: Fishnet metamaterial

A negative index of refraction can occur when only the permittivity of the material is negative and
the permeability, although positive, is different from that in free space. However, the effect is
much more pronounced (and more technologically useful) in DN-NIMs, in which both
permittivity and permeability are negative.

While NIM metamaterials have been developed with negative permittivities for visible light,
negative permeability is much more difficult to achieve because the magnetic interaction between
light and a metamaterial is more than 100 times weaker than the electrical interaction.
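
The asymmetry shows up even in the simplest description of a metal. The sketch below evaluates a
lossy Drude model for the permittivity of silver using rough, textbook-style parameters (the plasma
frequency, damping rate, and background constant are approximate literature values, not numbers from
this work); the real part comes out strongly negative across the red and near-infrared, whereas
nothing in such a model gives a negative permeability, which is why that response has to be
engineered into the structure.

import numpy as np

# Approximate Drude parameters for silver (rough literature values, assumed here).
eps_inf = 5.0          # background permittivity
hbar_wp = 9.0          # plasma frequency, eV
hbar_gamma = 0.02      # damping rate, eV

def eps_drude(wavelength_nm):
    """Drude permittivity eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w)."""
    hbar_w = 1239.84 / wavelength_nm            # photon energy in eV
    return eps_inf - hbar_wp**2 / (hbar_w**2 + 1j * hbar_gamma * hbar_w)

for lam in (500, 700, 813, 900, 1500):
    eps = eps_drude(lam)
    print(f"{lam:5d} nm : eps = {eps.real:8.1f} + {eps.imag:.2f}i")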

Speaking at the recent March Meeting of the American Physical Society in Denver, Vladimir
Shalaev of Purdue University, Indiana, unveiled a new DN-NIM that is tantalizingly close to the
visible range. The metamaterial is a thin sheet comprising two layers of silver separated by
alumina. The structure is perforated with a regular array of rectangular-shaped holes to create a
“fishnet” pattern. The holes are about 120 nm across and are separated by about 300 nm. The
magnetic permeability of the material was found to be negative for light with wavelengths
between about 799 and 818 nm, while the permittivity is negative from about 700 to beyond 900
nm.

Shalaev told Physics Web that the fishnet structure could be adapted to create a DN-NIM for
visible light – something that he and his colleagues are working on right now. However, he
cautioned that fishnet NIMs display negative permeability over a relatively narrow band of
wavelengths and therefore it is unlikely that a single structure could be used to create a DN-NIM
that works throughout the visible spectrum. Also, some of the light passing through fishnet NIMs
is absorbed, which means that it could not be used as a superlens. However, he believes that such
metamaterials could be used to achieve sub-wavelength imaging using other schemes including a
“hyperlens”, which aims to convert near-field evanescent waves into waves that can be focussed to
create a far-field image.
http://physicsweb.org/articles/news/11/3/13/1

From Diatoms to Gas Sensors

The three-dimensional shells of tiny ocean creatures could provide the foundation for novel
electronic devices, including gas sensors able to detect pollution faster and more efficiently than
conventional devices.

Image shows a sensor created from a microporous silicon structure converted from the shell
(frustule) of a single diatom.

Credit: GeorgiaTech

Using a chemical process that converts the shells’ original silica (silicon dioxide, SiO2) into the
semiconductor material silicon, researchers have created a new class of gas sensors based on the
unique and intricate three-dimensional (3-D) shells produced by microscopic creatures known as
diatoms. The converted shells, which retain the 3-D shape and nanoscale detail of the originals,
could also be useful as battery electrodes, chemical purifiers – and in other applications requiring
complex shapes that nature can produce better than humans.
“When we conducted measurements for the detection of nitric oxide, a common pollutant, our
single diatom-derived silicon sensor possessed a combination of speed, sensitivity, and low
voltage operation that exceeded conventional sensors,” said Kenneth H. Sandhage, a professor in
the School of Materials Science and Engineering at the Georgia Institute of Technology. “The
unique diatom-derived shape, high surface area and nanoporous, nanocrystalline silicon material
all contributed towards such attractive gas sensing characteristics.”
The unique devices, part of a broader long-term research program by Sandhage and his research
team, were described in the March 8 issue of the journal Nature. The research was sponsored by
the U.S. Air Force Office of Scientific Research and the U.S. Office of Naval Research.
Scientists estimate that roughly 100,000 species of diatoms exist in nature, and each forms a
microshell with a unique and often complex 3-D shape that includes cylinders, wheels, fans,
donuts, circles and stars. Sandhage and his research team have worked for several years to take
advantage of those complex shapes by converting the original silica into materials that are more
useful.
Ultimately, they would like to conduct such conversion reactions on genetically-modified diatoms
that generate microshells with tailored shapes. However, to precisely alter and control the
structures produced, further research is needed to learn how to manipulate the genome of the
diatom. Since scientists already know how to culture diatoms in large volumes, harnessing the
diatom genetic code could allow mass-production of complex and tailored microscopic structures.
Sandhage’s colleagues, Prof. Nils Kröger (School of Chemistry and Biochemistry at Georgia
Tech) and Dr. Mark Hildebrand (Scripps Institution of Oceanography) are currently conducting
research that could ultimately allow for genetic engineering of diatom microshell shapes.
“Diatoms are fabulous for making very precise shapes, and making the same shape over and over
again by a reproduction process that, under the proper growth conditions, yields microshells at a
geometrically-increasing rate,” Sandhage noted. “Diatoms can produce three-dimensional
structures that are not easy to produce using conventional silicon-based processes. The potential
here is for making enormous numbers of complicated 3-D shapes and tailoring the shapes
genetically, followed by chemical modification as we have conducted to convert the shells into
functional materials such as silicon.”
Silicon is normally produced from silica at temperatures well above the silicon melting point
(1,414 degrees Celsius), so that solid silicon replicas cannot be directly produced from silica
structures with such conventional processing. So the Georgia Tech researchers used a reaction
based on magnesium gas that converted the silica of the shells into a composite containing silicon
(Si) and magnesium oxide (MgO). The conversion took place at only 650 degrees Celsius, which
allowed preservation of the complex channels and hollow cylindrical shape of the diatom.
The magnesium oxide, which makes up about two-thirds of the composite, was then dissolved out
by a hydrochloric acid solution, which left a highly porous silicon structure that retained the
original shape. The structure was then treated with hydrofluoric acid (HF) to remove traces of
silica created by reaction with the water in the hydrochloric acid solution.
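
In balanced form (standard stoichiometry, not quoted from the paper), the conversion and the
subsequent etch can be written as 2Mg(g) + SiO2(s) → 2MgO(s) + Si(s), followed by
MgO(s) + 2HCl(aq) → MgCl2(aq) + H2O(l) to dissolve away the oxide.
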
The researchers then connected individual diatom-derived silicon structures to electrodes, applied
current and used them to detect nitric oxide. The highly porous silicon shells, which are about 10
micrometers in length, could also be used to immobilize enzymes for purifying drugs in high-
performance liquid chromatography (HPLC) and as improved electrodes in lithium-ion batteries.
“Silicon can form compounds that have a high lithium content,” Sandhage said. “Because diatom-
derived silicon structures have a high surface area and are thin walled and highly porous, the rate
at which you can get lithium ions into and out of such silicon structures can be high. For a given
battery size, you could store more power, use it more rapidly or recharge the battery faster by
using such structures as electrodes.”
In testing, the researchers showed that the silicon they produced was photoluminescent – meaning
it glows when illuminated by certain wavelengths of light. That shows the fabrication process
produced a nanoporous, nanocrystalline structure – and may have interesting photonic applications
in addition to the electronic ones.
Though Sandhage and his collaborators have demonstrated the potential of their technique,
significant challenges must be overcome before they can produce useful sensors, battery
electrodes and other structures. The sensors will have to be packaged into useful devices, for
example, connected into arrays of devices able to detect different gases and scaled up for volume
manufacture.
The Aulacoseira diatoms used in the research reported in Nature were millions of years old,
obtained from samples mined and distributed as diatomaceous earth. To provide samples with
other geometries, Sandhage’s group has set up a cell culturing lab, with the assistance of Georgia
Tech colleagues Nils Kröger and Nicole Poulson, to grow the brownish-colored diatoms.
Sandhage, who is a ceramist by training, would now like to work directly with electronics
engineers and others who have specific interests in silicon-based devices.
“We can target diatoms of a certain shape, generate the right chemistry, and then work with
applications engineers to get these unique structures into practice,” he said. “We are now at the
point where we have a good idea of the chemical palette that is accessible with the conversion
approaches we have taken. The next step is really to start making packaged devices.”

Gamma-Ray Burst Challenges Theory

In a series of landmark observations gathered over a period of four months, NASA's Swift satellite
has challenged some of astronomers' fundamental ideas about gamma-ray bursts (GRBs), which
are among the most extreme events in our universe. GRBs are the explosive deaths of very
massive stars, some of which eject jets that can release in a matter of seconds the same amount of
energy that the sun will radiate over its 10-billion-year lifetime.
Deep at the heart of this event, the core has shrunk into a
fantastically dense magnetar, a neutron star possessing a
magnetic field trillions or even quadrillions of times stronger
than Earth's. The magnetism is what powers the long glow
of X-rays seen by Earthbound scientists.
Images: Aurore Simonnet SSU NASA E/PO

The core of a massive star in a distant galaxy collapses, ending its life, though there is little effect
visible at the surface. Deep inside, twin beams of matter and energy begin to blast their way
outward.
When GRB jets slam into nearby interstellar gas, the resulting collision generates an intense
afterglow that can radiate brightly in X-rays and other wavelengths for several weeks. Swift,
however, has monitored a GRB whose afterglow remained visible for more than 125 days in the
satellite's X-ray Telescope (XRT).
Swift's Burst Alert Telescope (BAT) detected the GRB in the constellation Pictor on July 29, 2006.
The XRT picked up GRB 060729 (named for its date of detection) 124 seconds after BAT's
detection. Normally, the XRT monitors an afterglow for a week or two until it fades to near
invisibility. But for the July 29 burst, the afterglow started off so bright and faded so slowly that
the XRT could regularly monitor it for months, and the instrument was still able to detect it in late
November. The burst's distance from Earth (it was much closer than many GRBs) was also a
factor in XRT's ability to monitor the afterglow for such an extended period.
The slow fading of the X-ray afterglow has several important ramifications for our understanding
of GRBs. 'It requires a larger energy injection than what we normally see in bursts, and may
require continuous energy input from the central engine,' says astronomer Dirk Grupe of Penn
State University, University Park, Penn., and lead author of an international team that reports these
results in an upcoming issue of the Astrophysical Journal.
One possibility is that the GRB's central engine was a magnetar — a neutron star with an ultra-
powerful magnetic field. The magnetar's magnetic field acts like a brake, forcing the star's rotation
rate to drop rapidly. The energy of this spin-down can be converted into magnetic energy
that is continuously injected into the initial blast wave that triggered the GRB. Calculations by
paper coauthor Xiang-Yu Wang of Penn State show that this energy could power the observed X-
ray afterglow and keep it shining for months.
A burst observed on January 10, 2007, also suggests that magnetars power some GRBs. GRB
070110's X-ray afterglow remained nearly constant in brightness for 5 hours, then faded rapidly
more than tenfold. In another paper submitted to the Astrophysical Journal, an international group
led by Eleonora Troja of the INAF—IASF of Palermo, Italy, proposes that a magnetar best
explains these observations.
'People have thought for a long time that GRBs are black holes being born, but scientists are now
thinking of other possibilities,' says Swift principal investigator Neil Gehrels of NASA's Goddard
Space Flight Center in Greenbelt, Md., a co-author on both studies.
Another surprising result from GRB 060729 is that the X-ray afterglow displayed no sharp
decrease in brightness over the 125-day period that it was detected by the XRT. Using widely
accepted theory, Grupe and his colleagues conclude that the angle of the GRB's jet must have been
at least 28 degrees wide. In contrast, most GRB jets are thought to have very narrow opening
angles of only about 5 degrees. 'The much wider opening angle seen in GRB 060729 suggests a
much larger energy release than we typically see in GRBs,' says Grupe.
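
The connection between opening angle and energy follows from the standard beaming correction: the
true energy is roughly the isotropic-equivalent energy multiplied by (1 - cos θ), where θ is the jet
half-opening angle. The snippet below compares that factor for the two angles quoted above, treating
both as half-opening angles (an assumption on our part).

import math

def beaming_fraction(half_angle_deg):
    """Fraction of the sky covered by a two-sided jet with the given half-opening angle."""
    return 1.0 - math.cos(math.radians(half_angle_deg))

narrow, wide = 5.0, 28.0
ratio = beaming_fraction(wide) / beaming_fraction(narrow)
print(f"energy correction is ~{ratio:.0f}x larger for a {wide:.0f}-degree jet "
      f"than for a {narrow:.0f}-degree jet")
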
http://www.physlink.com/News/070311GRBs.cfm

Magnifying superlenses show more detail than before

Two teams of physicists from the US have independently created the first truly magnifying
"superlenses" using metamaterials with a negative index of refraction. Unlike conventional
lenses, superlenses can provide images of almost limitless resolution, and could one day
enable the optical imaging of proteins, viruses and DNA.

No matter how smooth and polished a conventional lens is, there will always be a finite amount of
detail it can reproduce. This is because light tends to diffract, and so prevents resolution of
features much smaller than its wavelength – what physicists refer to as the "diffraction limit".
However, this limit can be surpassed if one can find a way of collecting the idle "evanescent"
waves that exist close to the surface of an object. These waves can resolve surface features much
smaller than normal propagating waves, but decay too quickly for conventional lenses to capture.
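
The rapid decay is easy to quantify: a field component carrying a transverse spatial frequency k_x
larger than k0 = 2π/λ has an imaginary out-of-plane wavevector, so its amplitude falls off as
exp(-z·sqrt(k_x² - k0²)). The sketch below estimates the 1/e decay length for a few feature sizes,
taking k_x ≈ 2π/d for a feature of size d and assuming green illumination; that mapping from feature
size to spatial frequency is only a rough rule of thumb.

import numpy as np

wavelength = 500e-9                 # assumed illumination wavelength (m)
k0 = 2 * np.pi / wavelength

for d in (400e-9, 200e-9, 100e-9, 50e-9):
    kx = 2 * np.pi / d              # spatial frequency carried by a feature of size d
    if kx <= k0:
        print(f"d = {d * 1e9:5.0f} nm : propagating (no evanescent decay)")
        continue
    decay_length = 1.0 / np.sqrt(kx**2 - k0**2)   # 1/e amplitude decay length
    print(f"d = {d * 1e9:5.0f} nm : decay length = {decay_length * 1e9:5.1f} nm")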

In 2000, John Pendry of Imperial College in London predicted that the decay of evanescent waves
can be offset by amplifying them in a material with a negative refractive index – in other words,
one that bends incoming light in the opposite direction to an ordinary material. In theory, such a
negative-index "superlens" could take evanescent waves from a surface, carry them, and convert
them into propagating waves that travel far enough to be captured by a conventional microscope
after they leave. Since Pendry's prediction, several superlenses have been built that have
successfully transmitted evanescent waves. However, none has been able to make the crucial
conversion to propagating waves – leaving the evanescent waves with the same fast decay rate as
when they started.

Now, two groups have managed to create superlenses that can convert evanescent waves into
propagating waves. At the University of Maryland, a team led by Igor Smolyanivov has created a flat
superlens consisting of concentric polymer rings deposited onto a thin film of gold (Science 315 1699).
Meanwhile, a team led by Xiang Zhang at the University of California in Berkeley has opted for a 3D
stack of curved silver and aluminium-oxide layers on a quartz substrate (Science 315 1686).
Image: Zhang's hyperlens

Both of these designs are classified as "metamaterials" – artificial nanostructures made by physicists
because substances with a negative index in the optical range do not occur naturally.
Superlenses have been made from metamaterials before, but the cylindrical geometry of these new
designs enables evanescent waves emitted from illuminated objects to be guided outward. Because
momentum must be conserved, this separation forces the "tangential" or side-to-side momentum
of the waves to be compressed, resulting in a magnified image beyond the diffraction limit – one
that a conventional microscope can register.

Smolyanivov used his flat superlens to image rows of polymer dots deposited near the inner ring with
a resolution of 70 nm – seven times better than the diffraction limit of the illuminating laser. Zhang,
however, took his 3D superlens (or "hyperlens" as he prefers because of the hyperbolic shape) one
step farther and imaged the word "ON" inscribed on the surface, albeit with a slightly lower
resolution of 130 nm.
Image: Smolyanivov's superlens

Although both of these superlenses imaged objects that were "built-in" to the material, in practice
one would keep an object separate but still close enough so that the evanescent waves can be
captured. Even so, Smolyanivov told Physics Web that widespread applications may be some way
off. This is because a side effect of the increased resolution is the vastly reduced depth of field,
meaning focusing must be much more accurate. "The real challenge will be to locate a sample", he
said. "You won't be able to see it if it's out of focus."

http://physicsweb.org/articles/news/11/3/17/1

Physicists shine a light, produce startling liquid jet

It is possible to manipulate small quantities of liquid using only the force of light, report
University of Chicago and French scientists in the March 30 issue of Physical Review Letters.
“In previous work, people figured out that you can move individual particles with lasers,” said
Robert Schroll, graduate student in physics at the University of Chicago and lead author of the
PRL article. Now it appears that lasers can also be used to generate bulk flow in fluids. “As far as
we know, we’re the first to show this particular effect,” Schroll said.
Schroll and Wendy Zhang, Assistant Professor in Physics at the University of Chicago, carried out
the project with RŽgis Wunenburger, Alexis Casner and Jean-Pierre Delville of the University of
Bordeaux I. The technique might offer a new way to control the flow of fluids through extremely
narrow channels for biomedical and biotechnological applications.
In their experiment, the Bordeaux scientists shined a laser beam through a soapy liquid. The laser
produced a long jet of liquid that broke up into droplets after traversing a surprisingly long
distance.
“I thought this was just so weird because I know when liquid is supposed to break up, and this one
isn’t doing it,” Zhang said.
Physicists know that lasers can set liquid in motion through heating effects, but heat was not a
factor in this case. The liquid used in the Bordeaux experiment is a type that absorbs very little
light. Heating the liquid would require more light absorption. In this case, the Chicago team’s
theoretical calculations matched the Bordeaux team’s experimental results: the mild force of the
light itself drives the liquid motion.
“Light is actually pushing onto us slightly. This effect is called radiation pressure,” Zhang said.
This gentle pressure generated by photons—particles of light—ordinarily goes unnoticed. But the
liquid used in the Bordeaux experiment has such an incredibly weak surface that even light can
deform it.
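
A back-of-the-envelope comparison shows why the combination works; all of the numbers below are
assumed, order-of-magnitude values rather than the experiment's actual parameters. The radiation
pressure of a tightly focused continuous-wave beam is tiny, but so is the Laplace (restoring)
pressure of a near-critical interface whose tension is roughly a million times weaker than that of
ordinary water.

import math

c = 3.0e8                      # speed of light, m/s

P_laser = 0.5                  # assumed beam power, W
waist = 5e-6                   # assumed focused beam radius, m
intensity = P_laser / (math.pi * waist**2)
radiation_pressure = intensity / c          # order of magnitude, ~I/c

sigma_water = 7e-2             # ordinary water surface tension, N/m
sigma_weak = 1e-7              # assumed ultralow near-critical interfacial tension, N/m
radius = 5e-6                  # assumed radius of curvature of the deformation, m

print(f"radiation pressure        ~ {radiation_pressure:.1f} Pa")
print(f"Laplace pressure (water)  ~ {2 * sigma_water / radius:.0f} Pa")
print(f"Laplace pressure (weak)   ~ {2 * sigma_weak / radius:.3f} Pa")
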
The experimental liquid was a mixture of water and oil. “It’s basically soap,” Zhang said. But
Delville and his associates have precisely mixed the liquid to display different characteristics
under certain conditions.
“A lot of shampoos and conditioners are designed to do that,” Zhang said. Shampoo poured out of
a bottle exists in one state. Add water and it turns into another state. Delville’s liquid behaves
similarly, except that he has rigged it to change its properties at 35 degrees Celsius (95 degrees
Fahrenheit). Below 35 degrees Celsius, the liquid takes one form. Above that temperature, it
separates into two distinct forms of differing density.
Physicists refer to this as a “phase change.” Many phase changes, like changing boiling water into
steam, are familiar in everyday life. The phase change that the Bordeaux group engineered in its
laboratory is more exotic. As the soapy liquid approached the critical temperature, it took on a
pearly appearance. This color change signaled the intense reflection, or scattering, of photons.
“The photon will scatter off some part of the fluid, but moves away with the same energy that it
came in with,” Schroll explained. “This scattering effect is what’s responsible for the flow that we
see. Because the photon doesn’t lose energy it doesn’t transfer any energy into the fluid itself, so it
doesn’t cause any heating.”
Delville first observed this effect after completing a previous experiment involving the behavior of
the same fluid under a less intense laser beam. He turned up the laser power to see what it could
do, much the same way a motorist might test the performance of a powerful car on a deserted
road.
“He turned up the power and then saw this amazing thing,” Zhang said. “Because he has a lot of
experience with optics, he realized that what he saw was strange.”
Further research may determine whether light-driven flow could provide a new twist to
microfluidics, the science of controlling fluid flow through channels thinner than a human hair. In
microfluidics, researchers bring together tiny streams of droplets or liquids to produce chemical
reactions. Laser light can do that, too, Zhang said, “but it does all that completely differently from
conventional microfluidics.”
In conventional microfluidics, scientists etch channels in computer chips and connect them to
syringe pumps. It’s a relatively easy process, Zhang said, but a laser-driven microfluidics system
might allow researchers to make more rapid adjustments.
“Here I’ve created a channel, but I didn’t have to make anything. I just shined a light,” Zhang said.
http://www.physlink.com/News/070327LiquidLaserJet.cfm

Flexible Batteries That Never Need to Be Recharged

European researchers have built prototypes that combine plastic solar cells with ultrathin, flexible
batteries. But don't throw away your battery recharger just yet.
Mobile phones, remote controls, and other gadgets are generally convenient--that is, until their
batteries go dead. For many consumers, having to routinely recharge or replace batteries remains
the weakest link in portable electronics. To solve the problem, a group of European researchers
say they've found a way to combine a thin-film organic solar cell with a new type of polymer
battery, giving it the capability of recharging itself when exposed to natural or indoor light.
It's not only ultraslim, but also flexible enough to integrate with a wide range of low-wattage
electronic devices, including flat but bendable objects like a smart card and, potentially, mobile
phones with curves. The results of the research, part of the three-year, five-country European
Polymer Solar Battery project, were recently published online in the journal Solar Energy.
"It's the first time that a device combining energy creation and storage shows [such] tremendous
properties," says Gilles Dennler, a coauthor of the paper and a researcher at solar startup Konarka
Technologies, based in Lowell, MA. Prior to joining Konarka, Dennler was a professor at the Linz
Institute for Organic Solar Cells at Johannes Kepler University, in Austria. "The potential for this
type of product is large, given [that] there is a growing demand for portable self-rechargeable
power supplies."
Prototypes of the solar battery weigh as little as two grams and are less than one millimeter thick.
"The device is meant to ensure that the battery is always charged with optimum voltage,
independently of the light intensity seen by the solar cell," according to the paper. Dennler says
that a single cell delivers about 0.6 volts. By shaping a module with strips connected in series,
"one can add on voltages to fit the requirements of the device."
The organic solar cell used in the prototype is the same technology being developed by Konarka.
(See "Solar-Cell Rollout.") It's based on a mix of electrically conducting polymers and fullerenes.
The cells can be cut or produced in special shapes and can be printed on a roll-to-roll machine at
low temperature, offering the potential of low-cost, high-volume production.
To preserve the life of the cells, which are vulnerable to photodegradation after only a few hours
of air exposure, the researchers encapsulated them inside a flexible gas barrier. This extended their
life for about 3,000 hours. Project coordinator Denis Fichou, head of the Laboratory of Organic
Nanostructures and Semiconductors, near Paris, says that the second important achievement of the
European project was the incorporation into the device of an extremely thin and highly flexible
lithium-polymer battery developed by German company VARTA-Microbattery, a partner in the
research consortium. VARTA's batteries can be as thin as 0.1 millimeter and recharged more than
1,000 times, and they have a relatively high energy density. Already on the market, the battery is
being used in Apple's new iPod nano.
Dennler says that the maturity of the battery and the imminent commercial release of Konarka-
style organic solar cells mean that the kind of solar-battery device designed in the project could be
available as early as next year, although achieving higher performance would be an ongoing
pursuit.

The paper's coauthor Toby Meyer, cofounder of Swiss-based Solaronix, says that the prototypes
worked well enough under low-light conditions, such as indoor window light, to be considered as
a power source for some mobile phones. Artificial light, on the other hand, may impose
limitations. "Office light is probably too weak to generate enough power for the given solar-cell
surface available on the phone," he says.
Watches, toys, RFID tags, smart cards, remote controls, and a variety of sensors are among the
more likely applications, although the opportunity in the area of digital cameras, PDAs, and
mobile phones will likely continue to drive research. "The feasibility of a polymer solar battery
has been proven," the paper concludes.
Rights to the technology are held by Konarka, though the solar company says it has no plans itself
to commercialize the battery.
http://www.techreview.com/Energy/18482/

The LXI Standard: Past, Present and Future

The idea for an LXI standard was first formulated in 2004. Since then, what progress has been
made? What is the role of the LXI Consortium? How does LXI fit with competing standards? Are
LXI-compliant products being widely produced and adopted? And what does the future hold? This
Special Report addresses these questions.
Test and measurement (T&M) is one of the most dynamic sectors of our industry as rapidly
developing technologies require complementary, parallel T&M development to support and
augment their implementation. Technological advances are moving apace, particularly in the
communications sector as manufacturers of handsets, wireless systems, etc. develop products to
satisfy the seemingly insatiable demand for the latest innovations. Test and measurement
manufacturers have a significant role to play in developing the associated standards, test
procedures and protocols for prototyping right through to full production of the end product.
To provide insight into just what this role entails, this Special Report focuses on the development
of the LXI standard and resultant initiatives and products. First, key figures from the LXI
Consortium provide background information, starting from the identification of the need for a new
standard, then chart its development and proffer future goals and objectives. Second, to give a
‘coal-face’ perspective, representatives from individual companies involved in the development of
the standard and compliant products answer questions on their involvement in the development of
the LXI standard, its adoption, the availability of compliant products, international reach and
future developments.
Development of the LXI Standard
The LXI standard is the result of several change vectors prevalent at the beginning of this
millennium. The Ethernet had become more ubiquitous and a number of test and measurement
companies began experimenting with this interface. No one, though, had solidified this into a
vision or a set of products that challenged the current dominant interconnect for instruments—
GPIB—although it was clear that a successor for GPIB was desired for many applications. Many
interfaces had been proposed and introduced on instruments. Firewire, or IEEE-1394, and USB
were leading contenders at one time, and a few products were introduced with these interfaces.
However, it was generally recognized that Ethernet was the obvious choice, but there was no agent

55
for change.
That was until 2004 when Agilent Technologies and VXI Technology banded together and
proposed forming an open consortium for standardizing Local Area Network (LAN)-based
instrumentation systems. After publicly announcing the initiative at AUTOTESTCON 2004 in
September of that year, the first meeting was held in Salt Lake City, UT, in November 2004. Over
50 people encompassing vendors, systems integrators and end users attended the open meeting
where a very rough draft of the initial specification was circulated, scrutinised and discussed.
Some excellent work had been done in formulating the outlines of the specification and defining
particular parameters, but it was clear that there was still a lot of work to be done to complete the
specification.
However, from this first meeting, a few things were very clear. First, there was tremendous
interest and enthusiasm in this new proposed technology from the test and measurement industry.
Second, many companies in attendance had thought about connecting their instruments to
computers via a LAN interface and many had done some preliminary investigations into how to
develop such an interface. Third, there was virtually unanimous agreement that an industry
standard for LAN interfacing of instrumentation was needed and viable.
From this positive standpoint, what followed was an extremely rapid formation and development
of the LXI standard from draft to first release, very strong growth in the membership of the LXI
Consortium and the rapid introduction of new products that were conformant to the standard. To
put the pace and extent of these developments into perspective, consider the initial
accomplishments.
Within the first two months the LXI Consortium had been joined by five additional Sponsor
members, bringing the number of Directors to seven. They had elected officers, hired a
management firm (Bode Enterprises), set up a Technical Committee, formed six Technical
Subcommittees to work on various aspects of the specification, hired a firm to develop the
LXIstandard.org web site and began holding weekly technical subcommittee meetings. Within 18
months of the concept being first introduced the membership had grown to over 40. The first
release of the LXI standard was at AUTOTESTCON 2005, less than nine months after the first
meeting of the LXI Technical Committees in January of 2005. By AUTOTESTCON 2006, member
instrument vendors had developed more than 150 products, tested them for compliance and offered
them for sale. This momentum has continued. To explain some of the intricacies of the LXI
standard’s progress, its place within the larger LAN/instrumentation picture and its future
development, consider some pertinent questions.
First, “Isn’t LAN already a standard?” Of course, LAN is a well-established standard, and the
Consortium follows it completely. This is especially important since the underlying protocols
(TCP/IP) are very widespread and universally recognized. In addition, LAN data transfer speed is
flexible and has been designed to grow with technology. While initial LXI implementations are
only required to work with 10 Mbit/s networks, most are being designed for the 100 Mbit/s
networks currently being deployed. 1 Gbit/s networks are already available, and the LAN and LXI
standards will be able to adopt these faster implementations seamlessly as they develop. The
answer to the question, “What else needs to be added for
instrumentation systems?” is quite a bit, actually. Particularly significant are the ‘extensions for
instrumentation’ that LXI has standardized.
The first extension to look at is Discovery. When a new printer (or other standard peripheral, such
as a scanner or hard drive) is connected to a PC, the operating system notifies the user of the new
device, identifies its type and suggests the right driver. The user may still have to locate the correct
driver, download and install it, but MS Windows makes this fairly straightforward. There are
millions of
these similar devices and Microsoft is very interested in making this interconnect easy for its
millions of consumers. The test and measurement industry is much smaller and has hundreds of
instrument categories, in each of which the number of installations is probably only in the
thousands. This makes each instrument category three or four orders of magnitude less important
to Microsoft (or any other OS developer). No matter how much we would like connecting a new
instrument to a PC to be as easy as hooking up a new printer, Microsoft is not going to do this for
us. Therefore, we need agreement on some software that will help in this process. The industry
already has such a standard, called VXI-11, which was developed by the VXIbus Consortium, but
works for other interconnects too. It is a little cumbersome but it is well proven and currently
available. The computer industry is working on additional standards in this area, and the LXI
Consortium plans to follow this development and may adopt a more elegant way of discovery in
the future, but for now, it has adopted the VXI-11 method and made it a requirement for LXI-
compliant devices.
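To make the VXI-11 arrangement concrete, the short sketch below shows how a system integrator
might open an already-discovered LAN instrument and read its identification string using the
widely used PyVISA library. The library choice, the VISA backend and the instrument address are
assumptions made purely for illustration; the sketch does not implement VXI-11 broadcast
discovery itself, only the subsequent connection that a discovered instrument makes possible.

# Minimal sketch (assumptions: PyVISA plus a VISA backend such as pyvisa-py
# installed, and an LXI instrument at the hypothetical address 192.168.1.42).
import pyvisa

rm = pyvisa.ResourceManager()

# A "TCPIP...::inst0::INSTR" resource string tells VISA to talk to the
# instrument over the VXI-11 protocol that LXI currently requires.
inst = rm.open_resource("TCPIP0::192.168.1.42::inst0::INSTR")

# "*IDN?" is the standard IEEE-488.2 identification query; an LXI instrument
# is expected to reply with manufacturer, model, serial number and firmware.
print(inst.query("*IDN?"))

inst.close()
rm.close()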
With regards to the Web Interface extension, the natural way to configure any computer-controlled
device when it is first connected is through a computer screen interface, in this instance a web
interface. There are thousands of ways to design a web interface, but upon
studying the problem, the LXI design engineers all agreed on a specific set of functions.
For instance, the LXI Web Welcome Page requires the following information: Instrument Model,
Manufacturer, Serial Number, Short Description, LXI Class (A, B or C), LXI specification version
(initially Rev. 1.0), Host Name, MAC address, TCP/IP address, Firmware or software revision and
IEEE-1588 current time (optional for Class C devices). LAN Configuration Web Pages and SYNC
Configuration Web Pages have similar lists of required information. Standardization of this
information goes a long way to ensuring that the system integrator will have the information
needed to quickly implement a system and this will help ensure rapid implementation of LXI
systems.
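As an informal illustration of how much integration guesswork this removes, the sketch below
simply gathers the required welcome-page items into a single record. The field names and sample
values are paraphrased from the list above for illustration only; they are not taken from any
official LXI schema.

# Sketch only: the required LXI Welcome Page items captured as a simple
# Python record. Field names are illustrative, not the official LXI schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LxiWelcomeInfo:
    model: str
    manufacturer: str
    serial_number: str
    description: str
    lxi_class: str                       # "A", "B" or "C"
    lxi_spec_version: str                # initially "1.0"
    host_name: str
    mac_address: str
    ip_address: str
    firmware_revision: str
    ieee1588_time: Optional[str] = None  # optional for Class C devices

# Hypothetical values, purely for illustration.
info = LxiWelcomeInfo(
    model="DMM-1234", manufacturer="ExampleCo", serial_number="SN0001",
    description="6.5-digit digital multimeter", lxi_class="C",
    lxi_spec_version="1.0", host_name="dmm-1234.local",
    mac_address="00:11:22:33:44:55", ip_address="192.168.1.42",
    firmware_revision="2.01")
print(info)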
With regards to Software Control of the Instruments – IVI Drivers, the industry has been working for many
years to standardize software solutions to help test engineers control their instruments. This started
with the Standard Commands for Programmable Instruments (SCPI), which was first introduced
in 1990. That effort gave way to VXIplug&play drivers, introduced in 1995, which used SCPI
commands as their default command set. As the demand for an even more robust standard became
evident and the requirement for interchangeable instruments and drivers rose in importance, the
Interchangeable Virtual Instruments (IVI) Foundation was formed by the same manufacturers who had
worked on SCPI and VXIplug&play standards.
Both of these former organizations were absorbed by the IVI Foundation and the resulting IVI
Driver standards are well recognized as ‘the’ software driver standards supplied by the leading test
instrument vendors. LXI requires that an IVI-compliant driver be supplied with every LXI
instrument.
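The point of an interchangeable driver is that the test program codes against an instrument-class
interface rather than against a particular box. The sketch below illustrates that idea in outline only,
using PyVISA and a generic SCPI query; the class and method names are invented for the example
and are not the actual IVI-defined APIs.

# Sketch of the interchangeability idea (not the real IVI driver API):
# the test sequence depends only on an abstract DMM interface, and a
# concrete driver maps it onto a particular instrument's command set.
from abc import ABC, abstractmethod
import pyvisa

class DmmDriver(ABC):
    """Minimal stand-in for an instrument-class (IviDmm-like) interface."""
    @abstractmethod
    def measure_dc_volts(self) -> float: ...

class ScpiDmm(DmmDriver):
    """Driver for a SCPI-speaking DMM reached over the LAN."""
    def __init__(self, resource: str):
        self._inst = pyvisa.ResourceManager().open_resource(resource)

    def measure_dc_volts(self) -> float:
        # MEAS:VOLT:DC? is the generic SCPI query for a one-shot DC reading.
        return float(self._inst.query("MEAS:VOLT:DC?"))

def run_test(dmm: DmmDriver) -> None:
    # Only the abstract interface is used here, so the instrument (and its
    # driver) can be swapped without touching the test code.
    print("DC reading:", dmm.measure_dc_volts(), "V")

# Hypothetical address; uncomment with a real instrument attached.
# run_test(ScpiDmm("TCPIP0::192.168.1.42::inst0::INSTR"))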
With regards to Hardware Triggering, any LXI device can supply trigger signals or receive triggers
in a wide variety of ways. Most current programmable GPIB instruments can receive or provide
triggers via external BNC connectors, or via software over GPIB. This is also perfectly acceptable
for any LXI device, if it provides acceptable trigger precision. However, triggering precision has
undergone considerable advances since GPIB was invented some 35 years ago.
Therefore, both VXI and PXI, with their controlled impedance backplanes and known distance
between modules, have been able to offer the industry much tighter triggering. For LXI,
considerable effort has been invested in working with commercial manufacturers of connectors,
cabling and internal circuits to develop hardware trigger systems that can match or exceed anything
available, without the constraints of a fixed backplane; this capability is available on Class A LXI
instruments.
Finally, there is Software Triggering, Time Stamping and IEEE-1588 Capability – The Precision
Time Protocol. Perhaps the most exciting and interesting new capability introduced to the industry
by LXI is the adoption of the IEEE-1588 Precision Time Protocol. This new standard has already
been adopted by other industries, but is just now being introduced into the test industry. Briefly
described, it allows all instruments on the same network to automatically look for the most
accurate clock available to them on their subnet, synchronize to it and then provide either
time-of-day time stamps or synchronization signals to all instruments with exceptional accuracy. It
also provides peer-to-peer communications between instruments (relieving traffic congestion and
loading of the control computer). We are still learning about this capability as new
implementations appear and, as time goes by, this may be the most important aspect of LXI.
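To give a flavour of how IEEE-1588 achieves this, the sketch below shows the basic offset and
path-delay arithmetic a slave clock performs after the Sync/Delay_Req message exchange with its
master. The timestamps t1 to t4 follow the protocol's usual definitions, but the numerical values
are invented purely for illustration.

# t1 = master's send time for Sync, t2 = slave's receive time,
# t3 = slave's send time for Delay_Req, t4 = master's receive time.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (slave clock offset from master, mean one-way path delay) in seconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Illustrative numbers: the slave runs about 25 microseconds ahead of the
# master and the network adds about 5 microseconds of delay each way.
offset, delay = ptp_offset_and_delay(
    t1=100.000000, t2=100.000030, t3=100.000050, t4=100.000030)
print(f"offset = {offset * 1e6:.1f} us, path delay = {delay * 1e6:.1f} us")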
Another pertinent question is: “By adding additional requirements, do you break LAN
compatibility?” The answer is a resounding no. All LAN requirements are intact, and we expect to
be able to follow future LAN developments with complete transparency.
Also frequently asked is, “Do I have to wait for enough LXI instruments to make a complete
system, or can I mix VXI, PXI or GPIB instruments in the same system with LXI instruments?”
Again, the answer is easy. LXI instruments are expected to be used in test systems with legacy
instrumentation that already exists. This includes GPIB instruments, clusters of VXI and/or PXI
instruments, and perhaps other interfaces. It is extremely important that LXI instruments integrate
easily with other interface technologies, and the LXI Consortium is working on specifying bridges
and enhancing our own specifications to make this transition easy and transparent. While testing is
still ongoing, we have not yet found a combination that could not be accomplished, usually in a
very straightforward manner.
Perhaps the most important question though remains, “Is LXI a viable replacement for GPIB, and
if so, will it be successful?” There is no doubt that the jury is still out, but some pretty strong
indicators are already visible. First, there is the impressive list of enhancements to LAN listed
above. Then there are the advantages that LAN brings to instrument connectivity when compared
to GPIB even without any of the LXI enhancements. These include low-cost cabling with no need
for a GPIB interface card; no distance restriction, whereas GPIB is limited to 20 m; no restriction
on the number of instruments, while GPIB is limited to 14 instruments per interface; and higher
LAN speed for large data transfers, which will increase further as network technology progresses.
Also, the cost to implement basic LXI requirements (Class C) for instrument manufacturers is low,
as LAN technology is mass produced and very inexpensive. This may eventually drive down the
cost of instruments, although for some time vendors are expected to provide LAN interfaces in
addition to GPIB and other current interfaces. What cannot be ignored is the overwhelming
acceptance of the concept by the test and measurement vendor community, as evidenced by the
growth of the LXI Consortium membership, the intense activity for specification development and
immediate product introductions.

That said, it is also true that the T&M user community is very conservative and adopts new
technology slowly. Most test engineers subscribe to the old adage, “If it ain’t broke, don’t fix it,”
as do we. The intriguing point of the argument then, is that GPIB is not broken. It works just fine.
Sure, it may be a little slow for some applications, but it works for most. It may be a little more
expensive, and the thick cable is a pain, but users are used to it, and it is not that burdensome.
Summary
After tracing the developments and presenting the evidence, what can we conclude? Undoubtedly,
users will vote with their money, but the momentum behind LXI is fairly overwhelming. Some
unforeseen glitches could still be encountered, but significant implementation experience is already
being gained, both by vendors developing compliant instruments and by the first systems
integrators building LXI-based systems.
So is GPIB dead? Not by a long shot. GPIB instruments will be around for a very long time…
probably another 35 years or so. However, as more LXI instruments are introduced, and as more
systems are implemented using LXI, we believe the inherent advantages of LAN and the
enhancements offered by the LXI extensions will become increasingly apparent. As more IEEE-
1588 implementations appear, they will foster new capabilities not possible with GPIB
instruments, opening up still more applications.
And if more powerful tools for systems integration through the use of the internet are provided,
then LXI will, over time, become the de facto instrument standard that GPIB has been for the last
35 years. Of course, this will not happen overnight. It may take five or possibly 10 years, but it
will happen and hopefully the preceding explanations have proffered the reasons why.
A Commercial Viewpoint
That is the history, background, development and prospects for the LXI standard, but what are the
practicalities of promoting, developing and selling LXI-compliant products? To get another
perspective, this time from the manufacturer’s point of view, Microwave Journal asked
representatives from leading T&M manufacturers, who are also key players in the LXI
Consortium, questions designed to give a commercial insight into the development of the LXI
standard.
Conclusion
The development of the LXI standard has come a long way in a relatively short period of time.
Instigated by the emergence of LAN and driven by the LXI Consortium and its partners, the
standard has gained considerable momentum. The Consortium’s members have a commercial interest in making it a
success and are striving towards that goal. That said, there is still some work to be done.
Technologically, compatibility and interoperability issues need to be addressed, especially if the
standard is to be widely accepted by a traditionally conservative T&M user community. Efforts to
publicise and extend its global reach are key too. The LXI standard has progressed rapidly and has
now reached a critical stage of its development; how it advances over the next few years will be
telling.
http://www.mwjournal.com/Journal/article.asp?HH_ID=AR_3949
