
Today, machines with artificial intelligence (AI) are becoming more
prevalent in society. Across many fields, AI has taken over numerous
tasks that humans once performed. Because the benchmark is human
intelligence, artificial intelligence is continually measured against what
humans can do. However, the technology has not yet matched the breadth
of human wisdom, and it does not appear likely to reach that milestone any
time soon. To replace human beings at most jobs, machines need to exhibit
what we intuitively call “common sense”. Common sense is basic knowledge
about how the human world works. It is not rule-based. It is not entirely
logical. It is a set of heuristics that almost all human beings quickly acquire.
If computers could be granted a generous measure of common sense, many
believe they could make better employees than human beings. Whatever one
might think about the economics, there is an interesting open question: can
machines achieve “common sense” in the near future?
Notably, before his passing in 2018, Microsoft co-founder Paul Allen
dedicated an admirable amount of time and resources to a challenge that
seems to come up again and again: the fundamental lack of common sense
in AI technologies. Mr. Allen, whose Allen Institute for Artificial
Intelligence (AI2) launched Mosaic to continue addressing this problem,
framed it like this:

“Early in AI research, there was a great deal of focus on common sense,
but that work stalled. AI still lacks what most 10-year-olds possess:
ordinary common sense. We want to jump start that research to achieve
major breakthroughs in the field.”

Allen’s analogy highlights a big problem with the current state of deep
learning technologies. As intelligent as our AI products often are, they
still can’t answer extremely simple questions that we might ask a co-worker
or a spouse. For example, “If I paint this wall red, will it still be red
tomorrow?” To illustrate how far we have to go to solve this problem,
Oren Etzioni, CEO of AI2, cites the example that “…when AlphaGo beat
the number one Go player in the world in 2016, the program did not know
that Go is a board game.” That is a rather important detail, and until it is
solved, AI’s potential for success will be limited to narrow applications.

HISTORY OF AI
Intelligent robots and artificial beings first appeared in the myths of Greek
antiquity. Aristotle's development of the syllogism and its use of
deductive reasoning was a key moment in mankind's quest to understand its
own intelligence. While the roots are long and deep, the history of artificial
intelligence as we think of it today spans less than a century. The following is
a quick look at some of the most important events in AI.
1943
 Warren McCulloch and Walter Pitts publish "A Logical
Calculus of the Ideas Immanent in Nervous Activity." The
paper proposes the first mathematical model for building a
neural network.
1949
 In his book The Organization of Behavior: A
Neuropsychological Theory, Donald Hebb proposes the
theory that neural pathways are created from experiences
and that connections between neurons become stronger
the more frequently they're used. Hebbian learning
continues to be an important model in AI.
1950
 Alan Turing publishes "Computing Machinery and
Intelligence," proposing what is now known as the Turing
Test, a method for determining whether a machine is intelligent.
 Harvard undergraduates Marvin Minsky and Dean
Edmonds build SNARC, the first neural network
computer.
 Claude Shannon publishes the paper "Programming a
Computer for Playing Chess."
 Isaac Asimov publishes the "Three Laws of Robotics."  
1952
 Arthur Samuel develops a self-learning program to play
checkers. 
1954
 The Georgetown-IBM machine translation experiment
automatically translates 60 carefully selected Russian
sentences into English. 
1956
 The phrase artificial intelligence is coined at the
"Dartmouth Summer Research Project on Artificial
Intelligence." Led by John McCarthy, the conference,
which defined the scope and goals of AI, is widely
considered to be the birth of artificial intelligence as we
know it today. 
 Allen Newell and Herbert Simon demonstrate Logic
Theorist (LT), the first reasoning program. 
1958
 John McCarthy develops the AI programming language
Lisp and publishes the paper "Programs with Common
Sense." The paper proposed the hypothetical Advice
Taker, a complete AI system with the ability to learn from
experience as effectively as humans do.  
1959
 Allen Newell, Herbert Simon and J.C. Shaw develop the
General Problem Solver (GPS), a program designed to
imitate human problem-solving. 
 Herbert Gelernter develops the Geometry Theorem Prover
program.
 Arthur Samuel coins the term machine learning while at
IBM.
 John McCarthy and Marvin Minsky found the MIT
Artificial Intelligence Project.
1963
 John McCarthy starts the AI Lab at Stanford.
1966
 The Automatic Language Processing Advisory
Committee (ALPAC) report by the U.S. government
details the lack of progress in machine translation
research, a major Cold War initiative with the promise of
automatic and instantaneous translation of Russian. The
ALPAC report leads to the cancellation of all
government-funded MT projects. 
1969
 The first successful expert systems, DENDRAL, a program
that identifies chemical structures from mass spectrometry
data, and MYCIN, designed to diagnose blood infections,
are created at Stanford.
1972
 The logic programming language PROLOG is created.
1973
 The "Lighthill Report," detailing the disappointments in
AI research, is released by the British government and
leads to severe cuts in funding for artificial intelligence
projects. 
1974-1980
 Frustration with the progress of AI development leads to
major DARPA cutbacks in academic grants. Combined
with the earlier ALPAC report and the previous year's
"Lighthill Report," artificial intelligence funding dries up
and research stalls. This period is known as the "First AI
Winter." 
1980
 Digital Equipment Corporation develops R1 (also known
as XCON), the first successful commercial expert system.
Designed to configure orders for new computer systems,
R1 kicks off an investment boom in expert systems that
will last for much of the decade, effectively ending the
first "AI Winter."
1982
 Japan's Ministry of International Trade and Industry
launches the ambitious Fifth Generation Computer
Systems project. The goal of FGCS is to develop
supercomputer-like performance and a platform for AI
development.
1983
 In response to Japan's FGCS, the U.S. government
launches the Strategic Computing Initiative to provide
DARPA funded research in advanced computing and
artificial intelligence. 
1985
 Companies are spending more than a billion dollars a year
on expert systems and an entire industry known as the
Lisp machine market springs up to support them.
Companies like Symbolics and Lisp Machines Inc. build
specialized computers to run the AI programming
language Lisp.
1987-1993
 As computing technology improved, cheaper alternatives
emerged and the Lisp machine market collapsed in 1987,
ushering in the "Second AI Winter." During this period,
expert systems proved too expensive to maintain and
update, eventually falling out of favor.
 Japan terminates the FGCS project in 1992, citing failure
in meeting the ambitious goals outlined a decade earlier.
 DARPA ends the Strategic Computing Initiative in 1993
after spending nearly $1 billion and falling far short of
expectations. 
1991
 U.S. forces deploy DART, an automated logistics
planning and scheduling tool, during the Gulf War.
1997
 IBM's Deep Blue beats world chess champion Garry
Kasparov.
2005
 STANLEY, a self-driving car, wins the DARPA Grand
Challenge.
 The U.S. military begins investing in autonomous robots
like Boston Dynamics' "Big Dog" and iRobot's
"PackBot."
2008
 Google makes breakthroughs in speech recognition and
introduces the feature in its iPhone app. 
2011
 IBM's Watson trounces the competition on Jeopardy!.  
2012
 Andrew Ng, founder of the Google Brain Deep Learning
project, feeds 10 million YouTube videos as a training set
to a neural network using deep learning algorithms. The
neural network learns to recognize a cat without being told
what a cat is, ushering in a breakthrough era for neural
networks and deep learning funding.
2014
 Google makes the first self-driving car to pass a state
driving test.
2016
 Google DeepMind's AlphaGo defeats world champion Go
player Lee Sedol. The complexity of the ancient Chinese
game was seen as a major hurdle to clear in AI.

THE FUTURE OF ARTIFICIAL INTELLIGENCE
7 ways AI can change the world for better ... or worse
“[AI] is going to change the world more than anything in the history of mankind.
More than electricity.”— AI oracle and venture capitalist Dr. Kai-Fu Lee, 2018

In a nondescript building close to downtown Chicago, Marc Gyongyosi and the small but
growing crew of IFM/Onetrack.AI have one rule that rules them all: think simple. The
words are written in simple font on a simple sheet of paper that’s stuck to a rear upstairs
wall of their industrial two-story workspace. What they’re doing here with artificial
intelligence, however, isn’t simple at all.
Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of
drones from his college days suspended overhead, Gyongyosi punches some keys on a
laptop to pull up grainy video footage of a forklift driver operating his vehicle in a
warehouse. It was captured from overhead courtesy of a Onetrack.AI “forklift vision
system.”

Employing machine learning and computer vision for detection and classification of
various “safety events,” the shoebox-sized device doesn’t see all, but it sees plenty. Like
which way the driver is looking as he operates the vehicle, how fast he’s driving, where
he’s driving, locations of the people around him and how other forklift operators are
maneuvering their vehicles. IFM’s software automatically detects safety violations (for
example, cell phone use) and notifies warehouse managers so they can take immediate
action. The main goals are to prevent accidents and increase efficiency. The mere
knowledge that one of IFM’s devices is watching, Gyongyosi claims, has had “a huge
effect.”
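To make that pipeline concrete, here is a minimal sketch, in Python, of what a frame-by-frame detection-and-alert loop might look like. The event labels, the confidence threshold, and the `classify_frame` and `notify_manager` functions are hypothetical placeholders for illustration only; they are not drawn from IFM/Onetrack.AI's actual software.

```python
# Hypothetical sketch of a "safety event" detection-and-alert loop.
# All names, labels and thresholds below are illustrative assumptions.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Detection:
    label: str         # e.g. "driver_on_phone", "pedestrian_too_close"
    confidence: float  # model score in [0, 1]


# Events a warehouse manager would want flagged (assumed set).
SAFETY_VIOLATIONS = {"driver_on_phone", "pedestrian_too_close", "speeding"}
ALERT_THRESHOLD = 0.8


def classify_frame(frame) -> List[Detection]:
    """Stand-in for a trained computer-vision model run on one overhead camera frame."""
    raise NotImplementedError("replace with a real detection/classification model")


def notify_manager(event: Detection) -> None:
    """Route an alert (dashboard, email, etc.) so managers can act immediately."""
    print(f"ALERT: {event.label} (confidence {event.confidence:.2f})")


def monitor(frames: Iterable) -> None:
    """Flag high-confidence safety violations as camera frames stream in."""
    for frame in frames:
        for detection in classify_frame(frame):
            if (detection.label in SAFETY_VIOLATIONS
                    and detection.confidence >= ALERT_THRESHOLD):
                notify_manager(detection)
```

In a design like this the heavy lifting sits inside the vision model; the surrounding loop only filters high-confidence violations and routes them to a manager, which is why the same camera hardware can keep gaining capabilities as the software improves.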

“If you think about a camera, it really is the richest sensor available to us today at a very
interesting price point,” he says. “Because of smartphones, camera and image sensors
have become incredibly inexpensive, yet we capture a lot of information. From an image,
we might be able to infer 25 signals today, but six months from now we’ll be able to infer
100 or 150 signals from that same image. The only difference is the software that’s
looking at the image. And that’s why this is so compelling, because we can offer a very
important core feature set today, but then over time all our systems are learning from
each other. Every customer is able to benefit from every other customer that we bring on
board because our systems start to see and learn more processes and detect more things
that are important and relevant.”

THE EVOLUTION OF AI

IFM is just one of countless AI innovators in a field that’s hotter than ever and getting
more so all the time. Here’s a good indicator: Of the 9,100 patents received by IBM
inventors in 2018, 1,600 (or nearly 18 percent) were AI-related. Here’s another: Tesla
founder and tech titan Elon Musk recently donated $10 million to fund ongoing research
at the non-profit research company OpenAI — a mere drop in the proverbial bucket if
his $1 billion co-pledge in 2015 is any indication. And in 2017, Russian president
Vladimir Putin told school children that “Whoever becomes the leader in this sphere [AI]
will become the ruler of the world.” He then tossed his head back and laughed
maniacally.
OK, that last thing is false. This, however, is not: After more than seven decades marked
by hoopla and sporadic dormancy during a multi-wave evolutionary period that began
with so-called “knowledge engineering,” progressed to model- and algorithm-based
machine learning and is increasingly focused on perception, reasoning and generalization,
AI has re-taken center stage as never before. And it won’t cede the spotlight anytime
soon.
 

[Photo: Google CEO Sundar Pichai on stage in 2018. Google is working on an AI assistant that can place human-like calls to make appointments. | Photo credit: Alive Coverage]

THE FUTURE IS NOW: AI'S IMPACT IS EVERYWHERE

There’s virtually no major industry modern AI — more specifically, “narrow AI,” which
performs objective functions using data-trained models and often falls into the categories
of deep learning or machine learning — hasn’t already affected. That’s especially true in
the past few years, as data collection and analysis have ramped up considerably thanks to
robust IoT connectivity, the proliferation of connected devices and ever-speedier
computer processing.

Some sectors are at the start of their AI journey, others are veteran travelers. Both have a
long way to go. Regardless, the impact artificial intelligence is having on our present-day
lives is hard to ignore:

 Transportation: Although it could take a decade or more to perfect them, autonomous cars
will one day ferry us from place to place.
 Manufacturing: AI-powered robots work alongside humans to perform a limited range of
tasks like assembly and stacking, and predictive analysis sensors keep equipment running
smoothly.
 Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more quickly
and accurately diagnosed, drug discovery is sped up and streamlined, virtual nursing
assistants monitor patients and big data analysis helps to create a more personalized
patient experience.
 Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.
 Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses
Cyborg technology to help make quick sense of complex financial reports. The Associated
Press employs the natural language abilities of Automated Insights to produce 3,700 earnings
report stories per year — nearly four times more than in the recent past.
 Customer Service: Last but hardly least, Google is working on an AI assistant that can place
human-like calls to make appointments at, say, your neighborhood hair salon. In addition to
words, the system understands context and nuance.
But those advances (and numerous others, including this crop of new ones) are only the
beginning; there’s much more to come — more than anyone, even the most prescient
prognosticators, can fathom.
“I think anybody making assumptions about the capabilities of intelligent software
capping out at some point are mistaken,” says David Vandegrift, CTO and co-founder of
the customer relationship management firm 4Degrees.
With companies collectively spending nearly $20 billion a year on AI products and
services, tech giants like Google, Apple, Microsoft and Amazon spending
billions to create those products and services, universities making AI a more prominent
part of their respective curricula (MIT alone is dropping $1 billion on a new college
devoted solely to computing, with an AI focus), and the U.S. Department of Defense
upping its AI game, big things are bound to happen. Some of those developments are
well on their way to being fully realized; some are merely theoretical and might remain
so. All are disruptive, for better and potentially worse, and there’s no downturn in sight.

“Lots of industries go through this pattern of winter, winter, and then an eternal spring,”
former Google Brain leader and Baidu chief scientist Andrew Ng told ZDNet late last
year. “We may be in the eternal spring of AI.”
