
That’s not to imply a dearth of progress. At companies and universities around the world, engineers and computer scientists are devising ways to make robots more perceptive and dexterous.

The robotics industry worldwide keeps innovating, combining artificial
intelligence with vision and other sensory technologies, according
to Analytics Insight magazine. The magazine noted that newer iterations of
robots are easier to set up and program than their predecessors. Some
notable developments in 2021 include high-tech ocean robots that explore
the world underneath the waves; a robot named Saul that shoots UV rays at
the Ebola virus to destroy it; and an AI-controlled therapeutic robot that
helps caregivers and patients communicate more efficiently, which reduces
stress.

Robots are becoming more human-like in cognitive ability and, in some cases, appearance.
In warehouses and factories, at fast food joints and clothing retailers,
they’re already working alongside humans. This one, in Germany, can pick
like a champ. They’re even starting to perform functions that have
typically been the domain of humans, such as making coffee, caring for the
elderly and, crucially, ferrying toilet paper. One Redwood City, California-
based startup just got $32 million in Series A funding to further develop
its robot waiters. And here’s a neat new schlepper-bot named Gita. They’re
even proliferating down on the farm. But no matter which sector they
serve, robots are far less advanced than many thought they’d be by now.
IS ROBOTICS GOOD FOR THE FUTURE?
It depends on whom you ask. Robots will free human workers for more complex tasks, but
also eliminate jobs for an estimated 120 million workers globally, including 11.5 million in the
United States alone. Commercial drones, another kind of robot, might well be put to use for
medical transport and package delivery, but a sky filled with drones raises safety and privacy
concerns. Humanoid robots will see use in entertainment (think “Westworld”) and
communication, but will not rise to the level of human physical ability, since nothing (as yet)
replicates human muscle. The best vision for a world that combines robots and humans is multiplicity,
where the two work in tandem.

Decades ago, Hannaford said, everyone was focused on energy and on
extrapolating humans’ use of it. “[They thought], ‘A jet can fly to Europe,
so in 2020 we’ll be able to go to Mars in a passenger vehicle.’”

What they missed, he went on, is that “energy didn’t scale.” The cost of
increasingly powerful computing kept falling as Moore’s Law, a theory (now
widely considered defunct) that the number of performance-boosting
transistors on a computer microchip doubles every two years, held decade
after decade; the cost per unit of energy, by contrast, never dropped by 50
percent every 18 months.
But other factors continue to have a significant impact on computing and,
consequently, robotics. Computing power per watt of electric power, for
instance, is growing dramatically. In everyday terms, that means your
smartphone can do more with the same battery life. It also means quicker
advances in artificial intelligence — things like computer vision and
natural language processing that help robots “see” and learn. Writing
more efficient software is another way to enhance robotic
performance. In a couple of decades, perhaps, robots might do most of our
coding.
