
Will this be the year that computers finally start thinking?
Edsger Dijkstra, a renowned Dutch computer scientist, once said, "The question of whether machines can
think is about as relevant as the question of whether submarines can swim." And Drew
McDermott (another computer scientist) said when discussing the chess-playing computer Deep
Blue, "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly
because it doesn't flap its wings."

So which is it? Can a computer think? Before we address that question, let us talk briefly about
Artificial Intelligence.

Ways to think about AI

Pretty much everyone has heard about AI. Within the technology sector, startups are forming
every day and big tech platform companies are rebuilding themselves around it. Every company
has some AI project underway. We know this is the next big thing. Even politicians have weighed
in on this topic. In my previous life as a consultant, most of my conversations around AI went like
this:

- Data is the new oil (the first time I heard that, I was blown away)
- Uberize your company (I have heard various other terms as well - Googlize, Applize, Amazonify, and sometimes simply "Facebook")
- AI will do everything; there will be no need for humans
- And sometimes simply the term "AI", which is usually supposed to be a conversation stopper

I think these are fundamentally silly ways to talk about advances in this field. A much better way
to address this is by treating it as automation.

Why Automation?

I personally prefer to use the term Machine Learning (ML), but will continue to use AI and ML
interchangeably. Automation gives us a well-grounded way to think about AI / ML: it is a change in what we can do with computers. Eventually everything will have ML embedded somewhere and no one will care. We are nowhere close to creating a Skynet or a Matrix (if you haven't watched the Terminator and Matrix franchises, I don't know you. I am joking. Please stop reading, watch those movies, and then come back here. I will be waiting, I promise). IBM does tout its Watson (more on that a little later), but Watson's accomplishments are ...ahem... elementary. An ML system needs plenty of data to be useful - for example, a health provider might feed a system thousands of images to improve its ability to spot malignant tumors, while another system might "read" thousands of letters to improve handwriting recognition. Both use data, but they are fundamentally set up to do different things.
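
To make that concrete, here is a minimal sketch - my own illustration, not any vendor's product - of how the same learning algorithm, trained on different data, ends up doing a completely different job. It assumes Python with scikit-learn installed; the built-in digits dataset stands in for handwriting, and a synthetic labeled dataset stands in for tumor images.

```python
# A minimal sketch: the same algorithm, two different datasets, two different jobs.
# Assumes scikit-learn is installed. The digits dataset stands in for
# "handwriting"; a synthetic dataset stands in for labeled tumor images.
from sklearn.datasets import load_digits, make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Handwriting" model: learns to recognize digits from labeled images.
digits = load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(digits.data, digits.target, random_state=0)
handwriting_model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("handwriting accuracy:", handwriting_model.score(X_te, y_te))

# "Tumor screening" model: the very same algorithm, but fed a different
# (here synthetic) dataset labeled benign / malignant, so it learns a
# completely different task.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tumor_model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("tumor-screening accuracy:", tumor_model.score(X_te, y_te))
```

Neither model is any smarter than the other; each simply reflects the data it was given, which is the whole point about treating this as automation.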

What about Watson?

Apparently, internal IBM documents show that its Watson supercomputer often spat out
erroneous cancer treatment advice and that company medical specialists and customers identified
"multiple examples of unsafe and incorrect treatment recommendations" as IBM was promoting
the product to hospitals and physicians around the world. You can follow the story here. In my
opinion, the capabilities of Watson were significantly oversold and hence the outcomes are not a
surprise. Anyone who has worked on software projects is familiar with the story.

Watson is a victim of our expectations (and the marketing hype). The article linked above states
"Watson is still struggling with the basic step of learning about different forms of cancer," which
should surprise no one. Cancer AI isn't like self-driving cars — where at some point the systems
may be good enough that the AI won't need further training, because the system will know
everything it needs to know. In medicine, and particularly in oncology, we do not know — and do
not expect to ever know — everything we need to know. Like they say, it's a journey, not a
destination.

So how should I think about AI / ML?

At some point, we need to tone down our expectations. Instead of expecting to see a humanoid robot doing everything for us, or a sentient computer system controlling our thoughts, we should expect to see the evolution of tools that solve specific problems. We all have robots in our homes - a washing machine, a dishwasher, a robotic vacuum cleaner. But these are all built for a specific purpose. Similarly, ML will help us solve classes of problems that computers could not usefully address before, but each of those problems will require a different implementation, a different set of data, and a different route to market. Each of them is a piece of automation. Each of them is a robotic vacuum cleaner.

Bottom line

One of the challenges in talking about machine learning is to find the middle ground between a
mechanistic explanation of the mathematics on one hand and fantasies about general AI on the
other. It is easy to say "you can ask new questions". But in reality, what are those questions? I see two sets of applications.

- ML can deliver better results for questions that we are already asking about data we already have. For example, we can identify the angry caller in real time, flag anomalous claims data, or read audio files and images to build a better profile of our members (a minimal sketch of this kind of application follows this list)
- ML can make sense of noisy sensor data at the edge without having to send it to a central place for analyzing, combining, and processing. We could see a tiny coin-battery-powered device that detects fall risks for the elderly and alerts a caregiver. I could dream up hundreds of similar devices, but what excites me most are the endless possibilities and new applications that I cannot even imagine today
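
As one hedged illustration of the first kind of application - flagging anomalous claims in data we already have - the sketch below runs scikit-learn's IsolationForest over made-up claim records. The fields (claim amount, number of line items) and the contamination setting are my own assumptions for the example, not a real claims system.

```python
# Illustrative only: flag unusual claims with an off-the-shelf anomaly detector.
# The claim records below are synthetic; a real system would use actual
# claims data and carefully chosen features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Ordinary claims: amount and number of line items cluster around typical values.
normal_claims = np.column_stack([
    rng.normal(500, 150, size=1000),   # claim amount in dollars
    rng.normal(3, 1, size=1000),       # number of line items
])
# A few unusual claims mixed in.
odd_claims = np.array([[9000.0, 40.0], [7500.0, 1.0], [12000.0, 25.0]])
claims = np.vstack([normal_claims, odd_claims])

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 marks a suspected anomaly
print("flagged claims:\n", claims[flags == -1])
```

The same pattern - learn what "normal" looks like, then flag what deviates - carries over to call-center audio or edge sensor streams, though each problem needs its own data, its own features, and its own tuning.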

In a way, this is what automation does. We got fast cars, we did not get artificial horses. We got
chess software that chess players use to train and get better, not artificial chess players. We got
turn-by-turn directions, not artificial navigators. So the question of whether machines can think is irrelevant. ML will help us find patterns that we as human beings cannot recognize, and eventually our needs will be met by solving the obvious optimization and analysis problems. ML will help us find that irate customer, and maybe even suggest a course of action to calm them down based on historical data. It will help detect symptoms that predict a fall and allow a caregiver to intervene. So let's not continue down this path of referring to these problem-solving, pattern-recognizing machines as "artificial intelligence." We are just building tools to solve real-world problems, like we have always done.
