Professional Documents
Culture Documents
“Like it or not, artificial intelligence is here to stay. This book is timely and provides readers with
the knowledge needed to understand the concepts behind AI—and also with the critical tools
needed to use it successfully in their own companies.”
—Dr. Jürgen Meffert, McKinsey & Company, Inc., Germany
“The business world is going through an unprecedented level of change that is both fast and
fundamental. Change always brings confusion. To this brave new AI-driven world, Anastassia
Lauterbach and Andrea Bonime-Blanc bring much-needed clarity and structure with a laser-like
precision. It is this precision that strategists need to design the next generation of successful
processes and business models.”
—Clara Durodié, Founder and CEO, Cognitive Finance Group, London
“For anyone wishing to embark on their own deep learning, this book offers a sober and
comprehensive look at the frontier of AI, and a roadmap for navigating its impact. Recommended
reading whether you’re into supervised, or unsupervised, learning.”
—Scott Hartley, New York City-based venture capitalist and author of The Fuzzy and the
Techie: Why the Liberal Arts Will Rule the Digital World
“The printing press, the Internet, and now AI. This book is a remarkable organization and analysis
of this vast, dynamic, and absolutely critical issue, replete with keen observations and practical
advice spanning basic research, board strategy, and cyber security. Organizations that hope to thrive
even in the relatively near future will find this book to be an invaluable tool as they plan and
implement their strategy.”
—Larry Clinton, President and CEO,
Internet Security Alliance, Washington, DC
“This work is an extremely impressive compendium of the history of artificial intelligence, the
nuance of various methods, and the implications of evolutions in the technology to various
applications in modern business. The breadth of research and depth of understanding conveyed are
valuable to students, practitioners, and consumers alike.”
—Anthony J. Scriffignano, PhD, SVP/Chief Data Scientist,
Dun & Bradstreet, New Jersey
“Anastassia Lauterbach and Andrea Bonime-Blanc have masterfully captured the opportunities
presented by AI in the business world. More importantly they have provided a unique glimpse into
the future by identifying governance and compliance considerations that boards and executive
management will need to tackle in a responsible manner. This rare insight can and should help
shape the strategic direction of AI implementation and is a must-read for companies planning to
leverage AI.”
“Everyone has the right to be scared, or even critical, of artificial intelligence. But this book allows
the reader to get these attitudes under control. Its authors take you on a journey through the AI
world. They show how AI is already a part of our lives and will be even more so in the future. The
book presents practical recommendations for how to prepare yourself and your business for this
technological revolution. It leaves the reader convinced that AI will change the way we do business
and will become an integral part of our future. Therefore the question is not if, but how we can
prepare ourselves and our businesses to be integrated players in this new world. To get started the
authors offer us a roadmap to become AI natives.”
—Annette Heuser, Executive Director, Beisheim Stiftung, Germany
“This book is simply excellent! I really enjoyed it. If you are a beginner, this book takes you
through the basics of machine learning and artificial intelligence and does a great job de-mystifying
much of the jargon that is thrown around all too casually. If you are not a novice, you will still get a
great deal out of this book as the authors clearly articulate the social, ethical, and governance issues
that we are all going to be faced with. I highly recommend this book—it is one of the must-read
books of our time.”
—Anand Chandrasekher, Senior Vice President, Qualcomm Inc., and
President, Qualcomm Datacenter Technologies, Inc., Germany
“AI has now reached a level of maturity to impact each of our businesses and personal lives.
Understanding AI is now a business imperative for every executive and director. Read this book to
separate all the hype from facts and learn practical ways to improve your business performance.
Board members need to understand the content of this book to adequately execute their governance
and fiduciary responsibilities.”
—Martin M. Coyne II, board director and senior advisor to CEOs, USA
“Artificial intelligence is poised to completely upend the way we do virtually everything. The
consequences—including unintended consequences—are massive and the stakes for getting it right
are high. Understanding this phenomenon is critical for every business leader. Lauterbach and
Bonime-Blanc have done the seemingly impossible—made a complex and wide-ranging subject
digestible, understandable, and actionable. I’ve already applied the learnings from this book to my
work and personal life. This is a must-read for every board member and leader across institutions
and industries. I can’t recommend it highly enough.”
—Erin Essenmacher, Chief Programming Officer, National
Association of Corporate Directors, Washington, DC
“AI impacts are expanding at an increasing pace into all facets of business and society. The authors
have put together rigorous research, insightful points of view, and a prescriptive structure for
leaders to create their own AI path. This ambitious work gives the reader a broad overview of AI
which would be a time-consuming and costly endeavor to do on one’s own.”
—Tom Easthope, Director, ERM, Microsoft, Seattle
“This has become my favorite go-to book for artificial intelligence that I have started to share with
my business network. Being an area of hyper growth and fluidity, it is extremely difficult to provide
clarity, focus, and organization to interested readers. Anastassia Lauterbach and Andrea Bonime-
Blanc have just done that by making this book not only relevant to first-time AI readers but also a
great source of reference for everyday needs.”
—Andreas Roell, Managing Partner, Analytics Ventures, Germany
“Lauterbach and Bonime-Blanc’s book The Artificial Intelligence Imperative is a must read for
management and boards of all companies. For corporate directors who may not be savvy in this
space, it’s a good tutorial but more importantly, I like that it outlines the questions directors should
be asking and provides a successful governance approach to AI. You want to be an effective board?
. . . Then understand how artificial intelligence can keep companies competitive.”
—T.K. Kerstetter
Host of Inside America’s Boardrooms, CEO of Boardroom
Resources, and co-founder/editor-at-large of Corporate Board Member
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, except for the inclusion of brief quotations in a review, without prior permission in
writing from the publisher.
22 21 20 19 18 1 2 3 4 5
Praeger
An Imprint of ABC-CLIO, LLC
ABC-CLIO, LLC
130 Cremona Drive, P.O. Box 1911
Santa Barbara, California 93116-1911
www.abc-clio.com
Foreword
by Ian Bremmer
Acknowledgments
Introduction
Notes
Index
Geopolitical Implications
As a political scientist, my primary interest is in how AI affects
geopolitics—how it shifts the balance of power between different
countries, cultures, sectors, and interest groups. Over the past two years, as
the private sector has pushed ahead with AI innovations, governments
around the world have begun to pay greater attention to the field,
particularly in the U.S., the European Union, and China. Lauterbach and
Ian Bremmer
President and Founder, Eurasia Group
Editor-at-Large, Time Magazine
Author of Us vs. Them: The Failure of Globalism
From Anastassia
Abraham Lincoln is famously credited with saying: “You cannot
escape the responsibility of tomorrow by evading it today.” The
companies, researchers, and investors involved in the development of
smart machines should never forget about the billions of people new
technologies will impact in the very near future. Millions of educated
readers should start asking questions about AI in their children’s schools,
as well as in their workplaces, universities, and meetings with elected
representatives.
AI apocalypse scenarios like Bostrom’s AI being a “paper-clip
maximizer” or Musk’s AI implementing “strawberry fields forever”
represent colorful warnings. In November 2017, I listened to Stephen
Hawking address the Web Summit in Portugal. While
acknowledging that AI has tremendous positive potential, Hawking said
that “we cannot know if we will be infinitely helped by AI, or ignored by it
and side-lined, or conceivably destroyed by it.” He advocated developing
best practices and rules to regulate AI and robotics. These words echoed
something I had heard a month earlier from my good friend Trent
McConaghy: “Lack of governance does not mean ‘no governance.’ It
means ‘bad governance.’” If this book offers practical advice on how to
think about AI in your business, policy development, or research; how to
bridge technology with corporate and societal governance; and how to
think about ethics and consequences, I will consider ten months of writing
as a very good investment. Rather than throwing your hands up in despair,
or thinking that someone with greater authority or deeper pockets might
take care of all AI-related design and implementation problems, we hope
that our educated readers will dedicate some time to addressing these
important and impactful technology issues. Beneficial AI will always be a
multifactor and multidisciplinary field, ranging from technical to ethical,
regulatory, administrative, educational, and social.
From Andrea
Working on this book has been a journey of discovery and learning,
and I am very grateful to my co-author (and lead author of this book),
Anastassia, for her vast subject matter expertise, incredible work ethic, and
passion for this subject.
This book is the culmination of a story that began for me in 1999 when
I was general counsel of an international electric power company, and
then-CEO Michael J. Thomson asked me to figure out the impact of Y2K
on our global electric operations. He pushed me to think practically about
the interconnection between governance and technology and the
importance of a multidisciplinary approach to new technology risks and
opportunities.
Ten years later, I was the head of risk, compliance, audit, and corporate
responsibility at a midsize global technology company when another tech
challenge fell into my lap. Though I had spent many years developing
governance, risk, and ethics solutions to InfoSec challenges, I had never
supervised that function. And then I was suddenly asked to directly
manage InfoSec. This proved immensely educational, frequently painful,
and a mind-blowing experience that pushed me once again to try to think
ahead of the curve.
In October 2016, the world expected Hillary Clinton to win the U.S.
presidential election in November, but MogAI, an artificial intelligence
(AI) system developed by Sanjiv Rai, founder of the Indian healthcare
startup Genic.ai, suggested otherwise. MogAI had predicted the results of
the last three presidential elections, as well as those of the Democratic and
Republican primaries. Its technology utilized information from public
platforms like Facebook, Twitter, and Google to model voting behavior,
and it favored Trump based on data on online user engagement with his
candidacy.1
In December 2016, Alexa—Amazon’s smart speaker, launched broadly in
June 2015—was being used in 4 percent of U.S. households. After a very
short engagement period with consumers, Alexa had received 250,000
marriage proposals.2 Alexa was even wanted by police as a “witness” to a
murder in Arkansas. The victim was found floating in the hot tub of the
prime suspect, James Bates. Amazon claimed that records from the device
should receive special protection under the First Amendment since what
owners say and what Alexa answers is a form of free expression.3
When the automobile was first introduced, its market-growth
predictions were based on the number of professional chauffeurs. Many
believed that cars would never become a consumer product. The noise of
their engines was frightening to farm animals and carriage horses. Are we
trapped in a similar bias when we think about AI? We live in a mixed
world of science, technology, marketing, traditional pre-Internet behaviors,
and ambitions inspired by advances in a number of disciplines, from
Book Structure
This book will provide the AI beginner as well as knowledgeable
readers with a broad cross-section of the most important developments,
issues, and information needed to understand AI in both a business and
a socio-political context. We focus predominantly on ML, which
represents just a small part of the AI field, and aim to provide boards,
executives, and students with a series of tools and perspectives to address
AI in their own work and in business in general. To achieve this, we have
organized the book into six parts:
1. The Past, Present, and Future of AI
2. Enablers and Preconditions to Achieve AI
3. The Race for AI Dominance—From Full-Stack Companies to “AI-as-a-Service”
4. AI in Different Sectors—From Cars to Military Robots
5. The Socio-Politics of AI—Critical Issues
6. The Governance of AI—A Practical Roadmap for Business
Below is a sampling of the types of issues that fall under the three ESG
buckets we will be considering:
Table I.A Introduction Sample of Environmental, Social and Governance (ESG) Issues
Finally, apologies to anyone who does not find their favorite startup,
application, or other AI reference mentioned in our pages. Likewise, we
Final Words
This is a book written by business, governance, and ethics practitioners
for practitioners. We hope that the book helps our readers make sense of
the multiple factors surrounding AI today. If we had to sum up this book in
a single phrase, it would be: AI is complicated. If we had a single call to
action, it would be: Become educated to actively participate in the AI
discussion. Making AI safe and beneficial—what we call the “AI
Imperative”—requires collective leadership and forward-looking
governance. Though they are very important, small organizations like the
Future of Life Institute or OpenAI will not be able to solve all the
problems or answer all the questions. The fact that you are reading these
lines means that you are already better positioned to make a difference.
We urge you to become a leader.
A Brief History of AI
May 1997 was a landmark date in AI, as IBM’s Deep Blue defeated the reigning World Chess
champion Garry Kasparov. Since 2009, programs capable of grandmaster-level play have run on
smartphones.
In 2004, the U.S. government’s Defense Advanced Research Projects Agency (DARPA) organized
the first contest for what we today call self-driving vehicles. The contest involved an emerging
technology called LIDAR (Light Detection and Ranging), used mainly for military mapping
and targeting, and focused on sensing the environment and responding to it. Additional contests
happened in 2005 and 2007. Sebastian Thrun, leader of the Stanford AI Lab, joined Google after
2007 to work on autonomous driving. LIDAR technology became prominent in May 2017, when
Alphabet sued Uber over the use of allegedly stolen technology to develop its autonomous-driving
capabilities. This dispute originated when Uber acquired Otto, a self-driving truck firm co-founded
by Anthony Levandowski, who had worked at Alphabet for ten years and who allegedly took
Google proprietary information when he left that firm.
In February 2011, IBM’s Watson defeated two Jeopardy champions. Immediately after collecting
the US$1 million prize, IBM set off to apply Watson to real-world scenarios. At the 2011 Jeopardy
event, Watson used a database of 200 million facts and figures. By 2015, any consumer could
buy a hard drive capable of holding that much information for US$120.
In 2012, an ML algorithm built by engineers without any medical knowledge helped a team of four
expert pathologists to more accurately diagnose breast cancer with the input of thousands of
screening images.
AI Social-Science Concepts
Verticals Companies

Company / total funding as of end 2016; Investors; Technology; Patents

Vicarious Systems (USA), US$68 million
Investors: Marc Benioff, Elon Musk, Mark Zuckerberg, Sam Altman, Jeff Bezos, Samsung Ventures, Wipro Ventures, ABB Technology Ventures, Data Collective, Felicis Ventures, Formation 8, Good Ventures, Khosla Ventures
Technology: Cortical networks
Patents: Applied for approx. 6 patents since 2013, on topics such as object recognition using recursive cortical networks

Kindred AI, US$13 million
Investors: 11.2 Capital, AME Cloud Ventures, Bloomberg Beta, Bold Capital Partners, Data Collective, Eclipse Ventures, First Round Capital, Google Ventures, Innovation Endeavors
Technology: Robotics
Patents: Patent activity started 2016; two patents titled “facilitating device control” and “methods of self-preservation of robotic apparatus”

Numenta, US$23.6 million
Investors: Ed Colligan, SkyMoon Ventures
Technology: Reverse engineering of neocortex
Patents: Patent activity since 2005; over 40 patents

Nnaisence, seed funding (Switzerland)
Investors: Alma Mundi Ventures
Technology: DL team (incl. Jürgen Schmidhuber) aiming to build large-scale neural network solutions for AGI and intelligent automation
Patents: No U.S. patents
The IQ of Computers
In 1950, Alan Turing published a paper in the journal Mind entitled
“Computing Machinery and Intelligence,” widely regarded as the first
introduction of the concept of machine intelligence. He described machines able to pass what
has become known as the Turing Test, defining “machine IQ” in a sort of
game between a human and a machine that is capable of communicating.
John McCarthy first used the term “AI” in the 1955 proposal he co-authored
with Marvin Minsky, Nathaniel Rochester, and Claude Shannon for the 1956
Dartmouth workshop.21 In the original Turing Test, an interrogator asks
two participants a series of increasingly difficult questions aimed at
determining which of them is the human and which is the computer. If, at
the end of the test, the interrogator cannot distinguish between these two,
then the computer has passed the test.
The Turing Test is just one of several methods to evaluate machine
intelligence. Hector Levesque has also proposed the Winograd Schema
Challenge, which aims to evaluate whether a system can apply general
knowledge of the world, coupled with common sense, to tasks such as
language understanding.
Gary Marcus has proposed a different approach, called the Marcus
Test, with a similar theoretical thrust. His take on intelligence is that it
requires the ability to take in information and synthesize it, and then use
the resulting knowledge. He envisions a test in which systems watch
television and then answer questions based on what they saw and
understood. As with the Winograd Challenge, the Marcus Test assumes
that common-sense reasoning is a core requirement of intelligence and has
to be part of any evaluation.
Table 3.1
Cognitive Computing
IBM and several other companies are using the term “cognitive
computing” to refer to an approach of curating massive amounts of
information that can be ingested into a “cognitive stack,” while at the same
time creating connections within this information so a particular problem
can be discovered by the user, or a particular new and unanticipated
question can be explored. Watson, an example of cognitive computing, is
not a single tool, but a mix of models and APIs.
Critics of this approach, however, believe that ingesting a ton of data
and then serving it up through a “search” and “retrieve” function doesn’t
constitute or create domain knowledge. In most cases, Watson can’t
answer the “why” question, i.e., explain why the outcome or a decision is
materially significant. Thus, while Watson adds value from a data-
structuring standpoint, it really doesn’t add anything from a new
technology and interface perspective.2
Bayesian Models
Some researchers also believe that the small-data powers of Gaussian processes (GPs) can play a
vital role in the push toward autonomous AI. Vishal Chatrath, CEO of the
AI startup Prowler, who has worked with Ghahramani, believes that
building autonomous agents requires rapid adaptability to the environment.
Besides, GPs are easy to interpret. By contrast, the results of neural
networks and DL models are often perceived as coming out of a black box.
Current regulatory frameworks, demanding transparency on how
technologies work, might imply a broader adoption of Bayesian or GP
frameworks instead of DL.
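To make the interpretability point concrete, here is a minimal, self-contained sketch (not from the book) of a GP posterior at one test point, using a squared-exponential kernel and just two training observations so the 2x2 kernel matrix can be inverted by hand. All names and values are illustrative:

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel: similarity decays with distance.
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def gp_posterior(xs, ys, x_star, noise=1e-6):
    """Posterior mean and variance of a GP at x_star, given exactly two
    observations (xs, ys), so the kernel matrix is 2x2 and invertible
    by the closed-form formula."""
    a, b = xs
    K = [[rbf(a, a) + noise, rbf(a, b)],
         [rbf(b, a), rbf(b, b) + noise]]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    K_inv = [[K[1][1] / det, -K[0][1] / det],
             [-K[1][0] / det, K[0][0] / det]]
    k_star = [rbf(x_star, a), rbf(x_star, b)]
    # alpha = K^{-1} y; posterior mean = k_star . alpha
    alpha = [K_inv[0][0] * ys[0] + K_inv[0][1] * ys[1],
             K_inv[1][0] * ys[0] + K_inv[1][1] * ys[1]]
    mean = k_star[0] * alpha[0] + k_star[1] * alpha[1]
    # v = K^{-1} k_star; posterior variance = k(x*,x*) - k_star . v
    v = [K_inv[0][0] * k_star[0] + K_inv[0][1] * k_star[1],
         K_inv[1][0] * k_star[0] + K_inv[1][1] * k_star[1]]
    var = rbf(x_star, x_star) - (k_star[0] * v[0] + k_star[1] * v[1])
    return mean, var

mean, var = gp_posterior([0.0, 2.0], [0.0, 1.0], 1.0)
```

Unlike a black-box network, every prediction here decomposes into kernel similarities to the training points, and the variance states how far the query sits from observed data: near a training point it is close to zero; far from all data it approaches the prior variance.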
DL started with the work of Warren McCulloch and Walter Pitts. In 1943, they published “A
Logical Calculus of the Ideas Immanent in Nervous Activity,” in which they outlined the first
computational model of a neural network. This paper served as the blueprint for the first ANNs.
In 1949, Donald Hebb published “The Organization of Behavior,” which argued that the
connections between neurons strengthened with use. This concept proved fundamental to
understanding human learning and how to train ANNs.
In 1954, Belmont Farley and Wesley Clark, using the research done by McCulloch and Pitts, ran the
first computer simulations of an artificial neural network. These networks of up to 128 neurons
were trained to recognize simple patterns.
In the summer of 1956, computer scientists met “to act on the conjecture that every aspect of
learning or any other feature of intelligence can in principle be so precisely described that a
machine can be made to simulate it.” This event, known as the Dartmouth Conference, is
considered the birthplace of AI.
In 1975, Ukrainian Soviet scientist Alexey G. Ivakhnenko was able to train an eight-layer neural
network with limited computation power.
In 1997, Schmidhuber and one of his students, Sepp Hochreiter, wrote a paper that proposed a
method for how artificial neural networks—computer systems that mimic the human brain—could
be boosted with a memory function, by adding loops that interpreted patterns of words or images in
light of previously obtained information. They called it long short-term memory (LSTM). For
instance, while a plain recurrent neural network (RNN) might recognize verbs or adjectives in a
sentence and whether they are used correctly, an LSTM might be able to remember the plot of a book.
In 2006, Geoffrey Hinton introduced an algorithm that could fine-tune the learning procedure used
to train ANNs with multiple hidden layers. Hinton and his team reportedly coined the term “deep
learning” to rebrand ANNs. Today, Hinton believes that unsupervised learning, not
backpropagation, is the best way to implement DL.
DL finally took off with the ImageNet dataset (started in 2006, with first results obtained in
2009), which Fei-Fei Li, a professor at Stanford and currently also a scientist at Google, built with
her students. It consists of one million images in 1,000 classes. The size of the dataset made training
of DL models possible.
In 2015, Google announced that it had managed to improve the error rate of its voice recognition
software by almost 50 percent using LSTM. This is the type of system that powers Amazon’s
Alexa. Apple announced last year that it is using LSTM to improve the iPhone.
In March 2016, Google’s AlphaGo computer system sealed a 4-1 victory over South Korean Go
grandmaster Lee Sedol using DL. AlphaGo also scored a victory against Chinese Grandmaster Ke
Starting in May 2018, the European Union will implement the General
Data Protection Regulation (GDPR), which may require businesses to
explain decisions made by algorithms. Models determining mortgages, credit,
investment, or any other financial tool may become subject to such
regulation. Recommendation engines programmed automatically might hit
regulatory limits as well—even the engineers building these algorithms
cannot fully understand how they function.20
Companies and researchers are working to overcome this problem.
MIT Technology Review reports that the neural network architecture
developed by Nvidia’s researchers is designed to stress the parts of a video
image that influence the behavior of a car’s deep neural network most
strongly. Interestingly, such networks are focused mostly on the edges of
roads, lane markings, and parked cars—just the sort of things that a human
driver would pay attention to. According to Urs Muller, Nvidia’s chief
Supervised
Supervised learning algorithms (e.g., classification) make predictions
based on a set of examples. With this method there is an input variable that
consists of labeled training data and a desired output variable. The
algorithm is used to analyze the training data to learn the function that
maps the input to the output.
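As a concrete, hypothetical illustration of learning the function that maps labeled inputs to outputs, the toy nearest-centroid classifier below is one of the simplest supervised schemes; the data, labels, and function names are all invented:

```python
# Toy supervised learning: learn a mapping from labeled examples with a
# nearest-centroid rule, then predict labels for unseen inputs.

def train(examples):
    """examples: list of (feature_vector, label) pairs. Returns one
    centroid per label -- the 'function' learned from training data."""
    sums, counts = {}, {}
    for x, label in examples:
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, x):
    # Assign the label whose centroid lies closest to x.
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

labeled = [([1.0, 1.0], "spam"), ([1.2, 0.9], "spam"),
           ([-1.0, -1.0], "ham"), ([-0.8, -1.1], "ham")]
model = train(labeled)
print(predict(model, [0.9, 1.1]))   # a new point near the "spam" cluster
```

The labeled pairs play the role of the "input variable" and "desired output variable" in the text; `predict` is the learned mapping applied to unseen data.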
Semi-Supervised
The major challenge with supervised learning is that labeling (naming,
defining) data is expensive and time consuming. If labels are limited, a
company can use unlabeled examples to enhance supervised learning.
Because a computer is not fully supervised in solving such a task, we talk
about semi-supervised learning, in which unlabeled examples are injected
with a small amount of labeled data, which can lead to better learning
accuracy.
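A minimal self-training sketch, assuming a 1-nearest-neighbour base learner and invented 1-D data, shows the mechanics: a scarce labeled set pseudo-labels a cheap unlabeled pool, and later predictions draw on the enlarged set:

```python
# Self-training sketch of semi-supervised learning. All data and names
# are made up for illustration.

def nearest_label(data, x):
    # 1-nearest-neighbour on 1-D points: data is (value, label) pairs.
    return min(data, key=lambda p: abs(p[0] - x))[1]

labeled = [(0.0, "low"), (10.0, "high")]          # expensive, scarce
unlabeled = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]        # cheap, plentiful

# Step 1: pseudo-label the unlabeled pool using the scarce labeled data.
pseudo = [(x, nearest_label(labeled, x)) for x in unlabeled]

# Step 2: train on the combined set of real and pseudo-labeled examples.
combined = labeled + pseudo

# A query at 4.0 is now decided by nearby pseudo-labeled points.
print(nearest_label(combined, 4.0))
```

The point is the workflow, not the particular learner: only two examples had to be labeled by hand, yet the final model reasons over eight.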
Unsupervised or Predictive
When performing unsupervised (or predictive) learning, the computer
is presented with completely unlabeled data. It is asked to discover the
intrinsic patterns that underlie the data, e.g., their clustering structure,
sparse tree, graph, or the like. In unsupervised ML, there is no
predetermined correct answer that the algorithm needs to learn to predict.
Reinforcement
In reinforcement learning (RL), the behavior of an agent is optimized
based on feedback from the environment.
Reinforcement learning is one of the areas that is particularly
noteworthy in terms of its potential impact on the future of digital products
and services and the democratization of AI.22 This framework shifts ML’s
focus from pattern recognition to experience-driven sequential
decision-making. RL learns by trial and error, inspired by the way
humans learn new tasks. In a typical RL setup, an agent is tasked with
observing its current state in a digital environment and with taking actions
that maximize the accrual of a long-term reward for which it has been
incentivized. The agent receives feedback from the environment as a result
of each action so that it knows whether the action promoted or hindered its
progress.
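The loop just described (observe a state, take an action, receive feedback, update) can be sketched as tabular Q-learning on a made-up five-state chain; this is a toy illustration, not DeepMind's system, and all hyperparameters are arbitrary:

```python
# Tabular Q-learning on a chain of 5 states; reward waits at state 4.
import random

N = 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1          # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N) for a in (1, -1)}

def step(s, a):
    """Environment: move left/right along the chain; reward at the end."""
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0)

def greedy(s):
    return max((1, -1), key=lambda a: Q[(s, a)])

random.seed(0)
for _ in range(200):                        # episodes of trial and error
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.choice((1, -1)) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        # feedback from the environment updates the value estimate
        best_next = max(Q[(s2, 1)], Q[(s2, -1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

policy = [greedy(s) for s in range(N - 1)]
print(policy)   # learned direction from each non-terminal state
```

After a few hundred episodes the agent has learned, purely from reward feedback, that moving right (+1) from every state maximizes its long-term reward, with no labeled training data involved.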
Google’s DeepMind popularized this approach with Atari games and
Go. An important advantage of using RL agents in environments that can
be simulated (e.g., video games) is that training data can be generated in
droves and at very low cost. This is in marked contrast to supervised DL
tasks, which often require training data that are expensive and difficult to
procure in the real world. AlphaGo Zero (AGZ), DeepMind’s newest
system, relies on self-play reinforcement learning, starting from
random play without any supervision or use of human input. It uses a
single neural network, rather than separate policy and value networks.
Within three hours, AGZ played as well as a human beginner. Within a
couple of days of self-play, AGZ became the world’s best Go player.
Federated Learning
Internet giants like Netflix and startups like x.ai both benefit from
network effects—the more data they receive, the better their algorithms
work and the more effectively they attract new customers. In a B2B
context, it is more difficult to unleash data network effects because
enterprises are protective of their data for both regulatory and competitive
reasons. Some interesting solutions are appearing, however. Google
AI Technologies: From
Computer Vision to Drones
Computer Vision
Computer vision has been the sub-area of AI most transformed by the
rise of DL. Superior progress has been achieved because of large-scale
computing, especially on GPUs; the availability of vast datasets, including
via the Internet (e.g., classification on ImageNet);1 and optimized neural
network algorithms. As a result, some computers are now capable of
performing some vision tasks better than people.
Although machine vision has already been great at collecting and
tagging data, e.g., recognizing faces, the development of prediction,
assessment, and analytics to detect temporal aspects is still in its infancy.
Taking an unstructured, real-time data stream to understand more about
the signal itself and the content will be a necessity when humans apply
machine vision to robots that are interacting with objects, for example.
This will be crucial for manufacturing applications and for service robots
like those that will take care of the elderly, robot cooks, etc.
Industries and Applications

Health Care: Summarizing doctors’ notes for billing; moving differently formatted medical records across medical and administration providers
Law: Research for legal documents
Financial Services: Gathering insights based on sentiments in world news or social media
• Natural language processing (NLP) is what happens when computers read language. NLP
processes turn text into structured data.
• Natural language generation (NLG) is what happens when computers write language. NLG
processes turn structured data into text.7
• Natural language understanding (NLU) is what happens when computers understand language.
This implies sensing the semantics of language, including sarcasm, grammatical errors made
by mistake or by choice, and differences in accentuation and voice modulation that occur
through pronouncing whole sentences and emphasizing one word over another. This is the
most difficult part of language processing, and it requires a great deal of R&D.
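As a toy illustration of NLP "turning text into structured data" (far simpler than real NLP systems, which go well beyond pattern matching), the sketch below pulls typed fields out of an invented doctor's note; the field names and patterns are purely illustrative:

```python
# Extract a structured record from free text with simple patterns.
import re

def extract(note):
    """Turn a free-text doctor's note into a structured record (dict)."""
    record = {}
    m = re.search(r"\b(\d{1,3})-year-old\b", note)
    if m:
        record["age"] = int(m.group(1))
    m = re.search(r"\bBP (\d{2,3})/(\d{2,3})\b", note)
    if m:
        record["bp_systolic"] = int(m.group(1))
        record["bp_diastolic"] = int(m.group(2))
    m = re.search(r"\bdiagnosis:\s*([A-Za-z ]+)", note, re.IGNORECASE)
    if m:
        record["diagnosis"] = m.group(1).strip().lower()
    return record

note = "54-year-old patient, BP 140/90. Diagnosis: hypertension"
print(extract(note))
```

The input/output shape is the point: unstructured prose goes in, and a machine-readable record comes out that billing or records systems can act on, which is what the health-care row in the table above describes.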
The global NLP market was valued at US$277.2 million in 2015, and is
expected to reach US$2.1 billion by 2024, according to a study by
Tractica, a market intelligence company. The e-commerce sector was the
second-largest end-user segment, accounting for 12 percent of the market
in 2014.8
Rob May of Talla, which builds intelligent conversational enterprise
applications, believes that in five years the natural language stack could
have the following components:9
• Voice input at the top. Voice to text is already relatively well solved for most common cases.
• Next is natural language processing (NLP), which consists of parsing text (from voice or from
direct input) and pulling out the key components. At this point NLP is good, moving toward
great, with parsing at near-human-level success rates.
• Finally, further down, is an understanding of inference and reasoning in language. AI with
bodies (e.g., robots) may be required to help ground language in some other format, or perhaps
new visualization techniques might help, but these are all still very nascent. Rob May expects
that the intersection of DL and visual models could become a very helpful way to solve this
problem.
Company Specifics
Robotics
Robotics is a branch of technology dealing with robots. Robots are
programmable machines used to carry out a series of actions autonomously
or semi-autonomously. These actions often require extreme precision.
These robots may take on work that humans find tedious or hazardous.
The autonomous robotics approach suggests that machines could be
controlled by AI software. These robots should be able to function in
uncertain and constantly changing environments, and therefore be powered
by models with superior learning and adapting capabilities: “Scientific
American reported that by 2050, a single robot would be able to perform
100 trillion instructions per second. It took 50 years for the world to install
the first million industrial robots. The next million will take only eight,
according to Macquarie.”16 Manufacturing robots capable of performing
repetitive tasks are about one-tenth the price they were ten years ago.
Today, China is installing more industrial robots than any other country in
the world.17 Gill Pratt, former head of DARPA’s Robotics Challenge,
argued in 2016 that robotics is facing a “Cambrian explosion”—a period
of rapid diversification: “Although a single robot cannot yet match the
learning ability of a toddler, Pratt has pointed out that robots have one
huge advantage: humans can communicate with each other at only 10 bits
per second—whereas robots can communicate through the Internet at
speeds 100 million times faster. This could result in multitudes of robots
building on each other’s learning experiences at lightning speed.”18
Drones
Chris Anderson, CEO of drone maker 3D Robotics, calls personal
drones the “peace dividend of the smartphone wars, which is to say that
the components in a smartphone—the sensors, the GPS, the camera, the
ARM core processors, the wireless, the memory, the battery—all that stuff,
which is being driven by the incredible economies of scale and innovation
machines at Apple, Google, and others, is available for a few dollars. They
were essentially ‘unobtainium’ 10 years ago. This is stuff that used to be
military industrial technology; you can buy it at RadioShack now.”20
Kespry, a commercial drone company, is implementing Nvidia’s Jetson
TX1 AI technology to offer the construction industry a way to keep track
of their expensive equipment and allow them to remotely manage multiple
AI and Blockchain,
IoT, Cybersecurity, and
Quantum Computing
Key Issues
• There are two major trends no business leader can afford to miss in the world of technology
today. Both trends have their roots in data. The first is the subject of this book—that is, AI.
• The second is the emergence of advanced cryptographic tools and distributed ledgers, also
known as blockchain or distributed ledger technology. Popularized among the general public by
the cryptocurrency bitcoin, blockchains are currently being hailed by some as the future
facilitators of radically different societal systems of trust, identity, and economic exchange.
• Blockchain and AI go hand in hand. Both technologies are deeply horizontal. While ML helps
us to find opportunity and improve decision-making, smart contracts and blockchains can
automate verification of the transactional parts of the process. In essence, ML solves the data
efficiency, discovery, and interpretation problem. Blockchain improves the overall architecture
problem in data verification and organization, exchange and storage, and finally, consumer-
centric monetization. Blockchain has the power to reshape the business of current Internet
giants, empower consumers, and enable strong traditional brands to reshape interactions with
enterprises and consumers. Both technologies reshuffle the concept of identity resolution,
which is a strong driver in monetizing relationships with customers in B2C and B2B.
Blockchain has a great chance to deliver global distributed infrastructure for datasets, which
can be utilized to build narrow AI applications.
• Two other critical technologies—the Internet of Things (IoT) and cybersecurity—cannot be
solved without the involvement of AI and blockchain.
• In the past, we needed humans to identify and mitigate security gaps in companies’
infrastructure and datasets. But increasingly, security researchers are building automated
systems that can work alongside these human agents.
• As more and more IoT devices and services move into our daily lives, we require this kind of
automated and secure deployment.
• The growth of data is driving traditional computing to the edge of its possible performance.
Once quantum computing and quantum algorithms arrive at scale, a period of transition in how
Blockchain
A blockchain is a cryptographically secure, decentralized, distributed
database of information. A blockchain includes an immutable record of
events that can be made auditable by every participant in the network.
Public blockchains are theoretically available for anyone to participate in
(given a certain level of IT literacy) and are not administered or controlled
by any central party, such as a government or commercial organization
like Google or Facebook. They can be used, among other things, for the
transfer of economic value between network participants, as in the case of
the world’s first cryptocurrency, bitcoin. ML technologies are used in
architecting blockchain, providing permissions, securing data and
operations, and organizing token distribution among participants.
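The “immutable record of events that can be made auditable by every participant” can be sketched in a few lines: each block commits to the hash of its predecessor, so changing any earlier entry invalidates everything after it. This is a minimal illustration, not a production design; the field names are made up.

```python
# Minimal hash-chain sketch of a blockchain's tamper-evident structure.
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """Create a block that commits to its own data and its predecessor."""
    body = {"data": data, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Any participant can audit the chain by recomputing every hash."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("Alice pays Bob 5", genesis["hash"])]
print(verify(chain))           # True
chain[0]["data"] = "tampered"  # altering an early record...
print(verify(chain))           # False: the recomputed hash no longer matches
```

Real blockchains add consensus, signatures, and networking on top, but the auditability described above rests on exactly this chaining of hashes.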
Blockchains can be seen as large opaque data vaults, with federated
permission-giving architectures providing some of the most advanced data
encryption and security to individuals who would otherwise have their
personal information exposed to a variety of parties that they don’t (or
shouldn’t) trust within cyberspace. In fact, a public blockchain of personal
information could be designed as a decentralized bank, with billions of
personal vaults containing the keys to who each of us is and the manner in
which we can be engaged by organizations and individuals alike on fair
terms. Some of today’s prototype blockchain-based identity systems will
most probably serve as the future gateways into the digital world.
The first principle of digital engagement would be to provide users
with the infrastructure to secure their personal information with a set of
private keys, as well as a platform with which to share this information
with third parties if and when they desire (and perhaps with an explicit
economic value and rules attached to the usage of these data through
vehicles such as smart contracts). This kind of setup would create a fairer
and more dynamic equilibrium between those who produce information in
cyberspace and those who exploit it.
Currently, companies hold data in centralized silos. They control and
generate insights from these data, even though it is far from clear that the
information legally belongs to them, given the many grey areas in
data ownership. For example, Facebook holds the
personal data of over 1 billion users.
Cybersecurity
Cybercriminals are innovative, but most frequently they get into
computer systems because of common programming errors in software
Quantum Computing
The future of AI may be shaped by very different technologies, with
three possible directions, including high performance computing (HPC),
neuromorphic computing (NC), and quantum computing (QC). HPC is the
major focus of what is happening today in the semiconductor industry—
something discussed later in this book. We will touch on NC as well, as it
has already demonstrated improvements over today’s deep learning neural
networks. We believe, however, that QC excels at all types of problem
What Is a “Qubit”?
A “bit” in classical computing can have one of two states: 0 or 1. A bit
can be represented by a transistor switch set to “off” and “on,” or
abstractly by an arrow pointing up or down.
A qubit, the quantum version of a bit, has many more possible states. If
we have a sphere, the North Pole would be equivalent to 1, and the South
Pole to 0. All other locations on the sphere would be quantum
superpositions of 0 and 1. In this way, a qubit “contains an infinite amount
of information. Its coordinates on the sphere encode an infinite sequence
of digits. The information in a qubit, however, has to be extracted by
measurement. When the qubit is measured, quantum mechanics requires
that the result is always an ordinary bit (a 0 or a 1). The probability of each
outcome depends on the qubit’s ‘latitude.’”5 What will this mean for
computing? Let’s dig a little deeper.
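The measurement rule can be made concrete with a short simulation, following the text's convention that the North Pole of the sphere is 1 and the South Pole is 0; the function name and angles are illustrative.

```python
# Sketch of qubit measurement: the "latitude" (polar angle theta from the
# North Pole) fixes the odds of reading 1; the azimuthal angle carries
# phase and does not affect them.
import math
import random

def measure(theta: float, rng: random.Random) -> int:
    """Collapse a qubit at polar angle theta to an ordinary bit (0 or 1)."""
    p_one = math.cos(theta / 2) ** 2   # probability of reading 1
    return 1 if rng.random() < p_one else 0

rng = random.Random(42)
print(measure(0.0, rng))       # North Pole: always 1
print(measure(math.pi, rng))   # South Pole: always 0
# On the equator, outcomes are roughly 50/50:
samples = [measure(math.pi / 2, rng) for _ in range(10_000)]
print(sum(samples) / len(samples))
```

The infinite information in the qubit's coordinates is inaccessible: every measurement yields a single classical bit, with probabilities set by the latitude.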
Data
Our interviews with companies reveal much confusion and fear around
reforming the IT infrastructure to allow for more advanced data analyses.
We also see a serious absence of internal skills in change management and
data science. Many executives single out deep cultural hurdles to changing
existing IT infrastructure. “You will not get fired for doing nothing, but
you will get into trouble for failures and budgeting issues,” as one CIO of
a major telecommunication company told us.
In sum, any traditional business has the potential to become data driven
and thereby improve its overall competitiveness. Leadership, IT literacy in
top management teams, and competence in digital transformation will
separate successful companies from stagnating ones.
Economies of Scale
Training ML models exhibits economies of scale. One must
have enough prepared and cleaned data to train a model successfully, and
competitive advantage tracks the size of a dataset and its
readiness for computation.
In 2009, three computer scientists from Google wrote a paper entitled
“The Unreasonable Effectiveness of Data.” They demonstrated that even
when using a messy dataset with a trillion items, ML performs much more
effectively in tasks (e.g., machine translation) than ML operating on a
clean dataset with a mere million items. Since then, it has been common
understanding that the size of available datasets represents a competitive
advantage for businesses, especially when they have a refined architecture
that enables them to make the most of it.
The quality of datasets also matters if the best results are to be
achieved. Quality means that the data should reflect the real world. The
best datasets take a lot of manual effort, as the ImageNet story illustrates.
Originally announced in 2009 at a Miami Beach conference center, the
dataset quickly evolved into an annual competition to screen which
algorithms could identify objects in data with the lowest error rate. Fei-Fei
Li, who started working on it in 2006, led the project.
success of ImageNet was a solution by Amazon called Mechanical Turk.
This is a crowdsourcing Internet marketplace enabling individuals and
businesses to coordinate the use of human intelligence to perform tasks
that computers haven’t been able to do yet. After the team discovered
Mechanical Turk, the dataset took two and a half years to complete. It
consisted of 3.4 million labeled images, separated into 5,247 categories.
Today the dataset is 13 million images strong, precise, and used by many
companies.
Google, Microsoft, and the Canadian Institute for Advanced Research
have introduced several additional high-profile datasets since 2010.
Startups have begun assembling proprietary datasets. For example,
TwentyBN, a ML company focused on video, collected videos of people
Economies of Scope
Economies of scope can arise when several types of datasets, e.g.,
text data and images, become available. Training on multimodal datasets
can make a model more valuable. Types of data (scope) will be more
powerful than the amount of data in any given set (scale).
The competitive advantage will be with companies that are capable of
generating and/or acquiring the new datasets needed to train systems.
Barriers to training will be inversely proportional to barriers to implement
AI.6
Table 6.1 Questions Business Development Teams Should Ask Their Data Scientists
1. Do you have training data? If not, how do you plan to get it?
2. Do you apply one big dataset, or several smaller sets with different data to start training?
3. Do you have an evaluation procedure built into your application development process to assess
what works best?
Regulatory Considerations
Although data scientists can gain great insights from large datasets,
many such efforts are compromised from the start. Privacy concerns make
it difficult for researchers and practitioners to access the data they need to
work with.
Systems with a voice interface, for example, are most effective when
personalized and connected to sources of personal data such as calendars
or e-mails, which raises privacy and security challenges. To add to
this challenge, many voice-enabled devices are always in a listening mode.
MIT researchers Kalyan Veeramachaneni, Neha Patki, and Roy Wedge
have presented a good solution to overcome privacy constraints. They
suggested using synthetic instead of authentic data, which can be used to
develop and test data-science algorithms and models without raising
privacy concerns: “Once we model an entire database, we can sample and
recreate a synthetic version of the data that very much looks like the
original database, statistically speaking. If the original database has some
missing values and some noise in it, we also embed that noise in the
synthetic version. . . . In a way, we are using machine learning to enable
machine learning.”20 This innovation can be easily scaled and used in any
industry or for educational purposes.
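The approach can be sketched in miniature: fit simple per-column statistics (here just a Gaussian mean and standard deviation plus a missing-value rate, a deliberate oversimplification of the MIT team's full database modeling), then sample a look-alike table that contains no real records. Column names and distributions are illustrative assumptions.

```python
# Toy synthetic-data generator: learn per-column statistics, then sample
# a statistically similar table, embedding the original missingness too.
import random
import statistics

def fit(rows):
    """Learn mean/stdev and the rate of missing (None) values per column."""
    model = {}
    for col in rows[0]:
        values = [r[col] for r in rows if r[col] is not None]
        model[col] = {
            "mean": statistics.mean(values),
            "stdev": statistics.stdev(values),
            "p_missing": 1 - len(values) / len(rows),
        }
    return model

def sample(model, n, rng):
    """Draw n synthetic rows from the fitted per-column model."""
    out = []
    for _ in range(n):
        row = {}
        for col, m in model.items():
            if rng.random() < m["p_missing"]:
                row[col] = None
            else:
                row[col] = rng.gauss(m["mean"], m["stdev"])
        out.append(row)
    return out

rng = random.Random(7)
real = [{"age": rng.gauss(40, 10), "income": rng.gauss(50_000, 8_000)}
        for _ in range(1_000)]
synthetic = sample(fit(real), 1_000, rng)
# The synthetic table mirrors the real one statistically but shares no rows.
print(round(statistics.mean(r["age"] for r in synthetic), 1))
```

A full system would model correlations between columns as well; the point here is only that algorithms can be developed and tested against the synthetic table without touching the original records.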
Algorithms have impacts beyond products and services, or single
companies. According to research published in Virtual Competition
(2016), a book by Ariel Ezrachi of the University of Oxford and Maurice
Stucke of the University of Tennessee,21 companies with very large data
and algorithmic capabilities can manipulate markets by letting their
Data Governance
The best data-driven businesses provide wide access to data and have
data governance based on opening up access and allowing as many people
as possible to find valuable insights. One might call such a framework data
democratization or self-service analytics. Investing in and developing
robust datasets allows for fewer conflicts within a business. Understanding
what kinds of conflicts can happen triggers the building of practices to
address them directly. Conflicts typically represent different views on how to
measure or interpret the data, what kind of algorithms to apply, and at
what point in time a company requires outside expertise. It is to the benefit
of any organization to uncover these differing perspectives and to find
constructive ways to coordinate them.
It is critical to find a common data management language for everyone
to use and to support cross-organization collaboration on analyses. Our
interviews in several sectors—telecommunication, automotive, and
consumer goods—reveal that traditional businesses are trying to lock up
analytical skills within financial departments to exercise end-to-end
control over any and every type of analysis. Unfortunately, such practices
are completely counterproductive and the worst fit for a data-driven
company. New strong, visionary leadership is needed in the traditional
Table 6.5 Summary of Organizational and Governance Best Practices for Data
Management
• Netflix captures roughly 500 billion events per day, which translates to roughly 1.3 petabytes
(PB) per day. At peak hours, they’ll record 8 million events per second. Netflix employs over
100 data engineers. Their architecture uses components such as Apache Kafka (an open-source
stream processing platform developed by LinkedIn in 2011); Elastic Search (scalable real-time
search technology supporting so-called multi-tenancy, or multiple groups of users sharing
common access to software); AWS S3 (Simple Storage Service); Spark; Hadoop; and EMR
(Amazon Elastic MapReduce).
• With over 1 billion active users, Facebook stores over 300 petabytes of data. In order to do
interactive querying at scale, Facebook engineering invented Presto, a custom-distributed SQL
query engine optimized for ad hoc analysis. Over one thousand engineers use it.
• Airbnb supports over 100 million users and over 2 million listings and employs over 30 data
engineers, investing over US$5 million in headcount alone.
• Twitter handles 5 billion sessions per day in real time and uses Amazon ELB (Elastic Load
Balancing), Kafka, Storm, Hadoop, and Cassandra storage. It invented Heron, a real-time,
distributed, and fault-tolerant stream-processing engine that has powered Twitter’s entire
real-time analysis since 2014. Twitter open-sourced Heron in 2016.
• eBay wants to integrate ML into each and every piece of data in the eBay product infrastructure.
They are working on building infrastructure that they can use to promote self-service ML,
reusable ML, and extensible ML.
Data Access
Augmenting a company’s existing data warehouse with a data lake is a path most recommended by
data scientists exploring AI models.
A data warehouse (DW) is a central repository for all the data collected in an organization’s
business systems, e.g., ERP. A DW can process only structured data; DL applications and analytics
over unstructured information are not feasible. Most employees across an organization cannot
access the DW to run their own analyses, and an analysis cannot reach the original raw data,
although certain queries can be answered quickly. Business intelligence tools are built on top of the
DW to provide dashboards and other insights from the resident data.
A data lake, in contrast to a DW, holds a vast amount of raw data in its native format. The data
schema is applied when the data are read, rather than when they are loaded or written
(“schema-on-read”). Data are quickly available because they do not have to be curated before use.
This technology has existed since 2010 and is maturing very quickly. Data lakes allow self-service
that is not possible with a DW.
Cloudera’s data lake technology has generated a lot of use cases for customer care and marketing
applications (e.g., personalization of a customer instead of working with broad customer segments),
IoT (e.g., telematic devices used for car insurance), or fraud detection and cybersecurity in credit
card operations.
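The warehouse/lake contrast boils down to when the schema is applied, which a small sketch makes concrete; the schema, field names, and records below are hypothetical.

```python
# Schema-on-write (warehouse) vs. schema-on-read (lake), in miniature.
import json

WAREHOUSE_SCHEMA = {"user_id": int, "amount": float}

def load_into_warehouse(record: dict) -> dict:
    """Schema-on-write: types are enforced at load time; misfits fail here."""
    return {col: cast(record[col]) for col, cast in WAREHOUSE_SCHEMA.items()}

def query_lake(raw_lines, wanted_field):
    """Schema-on-read: store raw lines as-is, interpret only at query time."""
    for line in raw_lines:
        record = json.loads(line)   # the schema is applied here, on read
        if wanted_field in record:
            yield record[wanted_field]

lake = ['{"user_id": 1, "amount": "9.5", "note": "free-form"}',
        '{"device": "telematics", "speed_kmh": 87}']
# The lake happily holds both shapes; the reader decides what to extract.
print(list(query_lake(lake, "amount")))
```

The warehouse path rejects or coerces data up front, which keeps queries fast but excludes unstructured input; the lake path accepts everything and defers interpretation, which is what enables the self-service analytics described above.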
Hadoop: Google has given the world many tools to handle big data, as the company’s technology
evolved while dealing with massive amounts of real-time information. Hadoop is an example: it
grew out of Google’s File System paper, published in 2003.
Hadoop is an open-source software framework used for distributed storage and processing of
big datasets using the MapReduce programming model. Its highly parallel storage layer offers a lot
of resilience, as it is designed with a fundamental assumption that hardware failures are common
occurrences and should be automatically handled by the framework. Hadoop runs on commodity
hardware. Unfortunately, Hadoop is not very user friendly and it is hard to find and hire people with
adequate Hadoop skills. This explains why more than 50 percent of businesses do not have plans to
invest in it, according to Gartner.
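The MapReduce model Hadoop implements can be illustrated with the classic word-count example: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. This is a single-process sketch of the idea, not Hadoop itself.

```python
# Word count in the MapReduce style: map -> shuffle -> reduce.
from collections import defaultdict

def map_step(document: str):
    """Map: emit a (word, 1) pair for every word in the document."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_step(groups):
    """Reduce: aggregate each key's values (here, by summing counts)."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["Hadoop runs on commodity hardware",
        "Hadoop handles hardware failures"]
pairs = [p for d in docs for p in map_step(d)]
counts = reduce_step(shuffle(pairs))
print(counts["hadoop"], counts["hardware"])   # 2 2
```

In Hadoop, the map and reduce functions run in parallel across many machines and the shuffle moves data over the network, with the framework restarting tasks when commodity hardware fails.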
Spark: Spark is a cluster-computing framework originally developed at the University of California,
Berkeley, that addresses speed in the handling of data. Its goal is to do queries very fast, thus
providing companies with a competitive advantage and allowing new operating models and use
cases to develop, as it does real-time data processing. Spark has deep ties to ML data structures and
data frames, and supports integration with ML libraries.
Hive: Hive is a DW infrastructure developed at Facebook and built on top of Hadoop to provide
data aggregation, query, and analysis. It is highly suitable for ad hoc analyses, is very fault resistant,
and is especially strong in creating data pipelines for cloud workloads.
Presto: An open-source distributed SQL query engine for running interactive queries against data
sources of all sizes developed at Facebook.
Impala: An open-source, massive parallel-processing SQL query engine developed by Cloudera to
analyze data stored in clusters running Hadoop. Impala is similar to Presto.
Algorithms
Cyberattacks on ML Models
An important research field now includes putting together and creating
a powerful, adaptive model of a cyberattacker. Experts are working on
so-called adversarial models, which attempt to break other models to show
where they are weak, or where the data assumptions might be wrong.
Adversarial AI as quality control and security might one day become a
precondition for release of any AI product or service. An example would
include an attack to thwart a self-driving car’s AI system, causing it to
mistake a sign, speed limit, or child for something else. In December 2017,
researchers fooled a Google AI into mistaking a helicopter for a rifle.10
Adversarial examples are inputs to ML models that a cyberattacker has
intentionally designed to cause the model to make mistakes. They
represent a new problem in AI (and overall) safety that should be
addressed as soon as possible. Fixing them is very difficult and requires
research and funding. OpenAI is one of the organizations investing in this
research to comply with its overall goal of delivering safe and beneficial
AI.
One of the ways to combat such attacks is by training models not to be
tricked. This is not easy, especially if an attacker has a good strategy for
guessing where defense weaknesses are, and trains his or her model
against potential defense techniques. Defense strategies in this context
can’t be perfect. They might block one kind of attack, but be open to
another vulnerability.11
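How a tiny, targeted perturbation can flip a model's output is easiest to see with a linear classifier: nudging each feature by a small epsilon against the sign of its weight (the intuition behind the fast gradient sign method) changes the prediction while barely changing the input. The weights and input below are invented for illustration.

```python
# Adversarial perturbation of a toy linear classifier.

def predict(weights, bias, x):
    """Linear classifier: 1 if the weighted score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial(weights, x, epsilon):
    """Push each feature by epsilon against its weight's sign to lower the score."""
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights, bias = [0.8, -0.5, 0.3], -0.1
x = [0.5, 0.2, 0.4]                    # classified as 1 (score = 0.32)
x_adv = adversarial(weights, x, 0.3)   # small change to every feature...
assert predict(weights, bias, x) == 1
assert predict(weights, bias, x_adv) == 0   # ...and the prediction flips
```

Deep networks are attacked the same way, except the perturbation direction comes from the gradient of the loss rather than from fixed weights, which is why defenses that merely block one perturbation strategy remain open to others.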
Hardware
Semiconductors
The Development of Silicon: Underlying Drivers and
Their Consequences
In the last three years, AI-enabling infrastructure has become more
easily available, and this trend is expected to accelerate even further. Most
AI researchers and practitioners refer to curve-fitting economic “laws”
while predicting the pace of technology development and adoption.
Moore’s Law, named after Gordon Moore, a founder of Intel, is one of the
best examples. Gordon Moore published a four-page article in the trade
journal Electronics in 1965.1 This article not only articulated the beginning
of a trend, it also described what had to happen in the silicon industry over
the next 50 years. Rodney Brooks dedicates a brilliant piece, “The End of
Moore’s Law,” to explaining how this major discovery shaped competition
in silicon and produced several megatrends.2
According to Moore’s Law, the number of components on an
integrated circuit doubles roughly every one and a half years. The original
formulation counted components, not transistors. Generally, there are many
more components than transistors, though the ratio has dropped over time
as different types of transistors have been adopted.
With time, we have seen variations on Moore’s Law. The most popular
versions involve one of the following:
• two times as many transistors;
• two times as much switching speed in these transistors (so a computer could run twice as fast);
• twice the amount of memory on a single chip; and
• two times as much secondary memory in a computer (first on mechanically spinning disks, but
more recently in solid-state flash memory).
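The compounding behind these doublings is easy to check: a doubling every 18 months works out to 30 doublings over 45 years, roughly a billionfold gain.

```python
# Compound growth under Moore's Law-style doubling.

def doublings(years: float, period_years: float = 1.5) -> float:
    """How many doubling periods fit into the given span of years."""
    return years / period_years

def growth_factor(years: float, period_years: float = 1.5) -> float:
    """Total multiplicative gain after that many doublings."""
    return 2 ** doublings(years, period_years)

print(doublings(45))        # 30 doublings
print(growth_factor(45))    # 2**30 = 1,073,741,824: about a billionfold
```

The same arithmetic applies to each variation listed above (transistor count, switching speed, memory density), which is why a short doubling period compounds into such dramatic gains over decades.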
Openness
Software frameworks and hardware reference designs are not the only
factors influencing the pace of AI. High-tech giants are releasing large
and well-labeled datasets necessary to train neural networks, such as the
YouTube Video Dataset, with 500,000 hours of video-level labels, and
Yahoo’s 13.5TB of data, which includes 110 billion events describing
anonymized user-news item interactions from 20 million users on various
Yahoo properties.
Similar developments happened prior to AI. Facebook entered the
Internet world after Google and paid attention to open development of
their cloud architecture to benefit from crowd-sourced intelligence without
damaging their core strategic advantage in social networks and monetizing
of social data in the attention economy. Google publicly discussed
DeepMind’s algorithmic advances in Go, while some of those algorithms were
applied to optimizing energy consumption in Google data centers. For
companies such as Google or Amazon, the software and datasets they
open-sourced are a complement to their cloud-computing infrastructure
products, with Google offering a convenient way to run their customers’
systems with TensorFlow in the Google cloud, and Amazon similarly
expanding AWS to make it simple to run DSSTNE. Openness in AI
development is absolutely crucial to preventing the monopolization of AI
research and development within a few commercial companies. It is an
important step in establishing safe and beneficial AI.
AI Crossplay—Academia and
the Private Sector
When Andrew Ng joined Google from Stanford in 2011, he was one of the
first AI experts from academia to take an active role in industry. Ng
moved again in 2014 to become chief scientist at Baidu, building the
company’s research lab in Silicon Valley. He left Baidu in early 2017; he
now teaches DL on Coursera, the online education company he cofounded,
and has raised an AI fund. Google hired Geoffrey Hinton, a deep-learning pioneer
at the University of Toronto, in 2013. DeepMind, which was acquired by
Google, has close links to Oxford, where Alphabet is currently funding
over 250 research projects and dozens of PhD fellowships.1 In 2015, Uber
hired almost 40 of 150 researchers at the U.S. National Robotics
Engineering Center based at Carnegie Mellon University in Pittsburgh,
Pennsylvania, mainly those working on self-driving cars. Uber then
donated US$5.5 million to support student and faculty fellowships at the
Center.
These stories are not unique. Fifteen years ago, academia was able to
retain the brightest minds, especially those who would have otherwise
gone to an investment bank. Today technology giants such as Microsoft,
Google, Facebook, and Amazon provide an intellectually stimulating
environment, superior R&D labs, and a competitive salary. Moreover,
scientists can continue doing their research, occasionally coaching product
teams to improve existing products with new insights. In the 1950s, a
similar trend of job migration was observed in the semiconductor sector.
Carlos Guestrin works at the University of Washington and Apple. Russ
Salakhutdinov, Hinton’s protégé and head of AI at Apple, still spends time
at Carnegie Mellon. Hinton works for both Google and the University of
Toronto. As The Washington Post has reported, “Building ties to academic
superstars not only helps to improve products but also becomes a key
recruiting tool,” said Richard Zemel, director of the Vector Institute for AI
and a professor specializing in ML at the University of Toronto.2
Yoshua Bengio, who has advised IBM and is currently supporting
Microsoft’s AI efforts, believes the concentration of wealth, power, and
capability in top Internet brands is “dangerous for democracy,” and even
that these companies should be broken up. His reasoning is simple. AI
technology naturally lends itself to a winner-take-all scenario: “The
country and company that dominates the technology will gain more power
with time. More data and a larger customer base give you an advantage
that is hard to dislodge. Scientists want to go to the best places. The
company with the best research labs will attract the best talent. It becomes
a concentration of wealth and power.”3
North America is currently leading in AI research and applications,
although China is a strong challenger and is committed to becoming the
global AI leader by 2030. Thriving AI centers exist in Europe, with the
UK, Germany, France, and Switzerland among the most DL-savvy
geographical hubs.
It is important to mention the Università della Svizzera italiana (the
University of Lugano) in Switzerland, whose labs have made research
breakthroughs in many areas, including DL and robotics. Jürgen
Schmidhuber, one of the fathers of DL, works there. He is especially
renowned for his invention and subsequent development of long short-term
memory (LSTM), a recurrent network algorithm that helps machines
learn many things that feed-forward networks cannot. A demonstration
of LSTM can be found on everyone’s mobile phone. Schmidhuber’s PhD
student Shane Legg later became a co-founder of DeepMind, which was
sold to Google in 2014.
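A minimal sketch of the LSTM idea: a cell state is carried across time steps, and learned gates decide what to forget, write, and expose, which is what lets the network retain information over gaps that feed-forward networks cannot bridge. The scalar weights below are hand-picked for illustration, not learned.

```python
# One scalar LSTM cell, stepped through time with hand-set gate weights.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One time step of a scalar LSTM cell (weights supplied in dict W)."""
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])    # forget gate
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])    # input gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate value
    c = f * c_prev + i * g                                   # new cell state
    h = o * math.tanh(c)                                     # new hidden state
    return h, c

# Gates biased to "remember": write on a strong input, then hold the state.
W = dict(wf=0, uf=0, bf=10, wi=12, ui=0, bi=-6,
         wo=0, uo=0, bo=10, wg=4, ug=0, bg=0)
h, c = 0.0, 0.0
for x in [1.0] + [0.0] * 50:       # one signal, then a long quiet gap
    h, c = lstm_step(x, h, c, W)
print(round(c, 3))                 # cell state stays near 1: not forgotten
```

With the forget gate held open, the cell state decays only negligibly over fifty empty steps; it is this gated, additive carry of state that lets LSTMs bridge long time lags.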
Gary Marcus encourages us to think big for open AI research. We
would love to see European and U.S. governments launch something
equivalent to CERN or the Manhattan Project for AI, with thousands of
scientists sharing their research.4 Such an enormous undertaking would
have huge benefits for all of humanity. In contrast to Alphabet and other
AI giants, even the largest open AI efforts, such as OpenAI, sponsored
partly by Elon Musk, have only about 50 staff members.
When we talked to non-executive directors of nine European and seven
top U.S. companies, not a single one of them was aware of their
companies’ commitment to support AI research. It is critical that a broad-
based international, perhaps transatlantic, public/private collaboration
attract sponsorship and funding from Fortune 2000 companies, as such an
approach would greatly improve fundamental research, which is
profoundly needed to secure next-generation AI applications.
CHAPTER ELEVEN
The Geopolitics of
AI—China vs. USA
Iflytek
Iflytek is focused on speech recognition and understanding natural
language. The company has won international competitions both in speech
synthesis and in translating Chinese- and English-language texts.
Iflytek is said to have a close relationship with the Chinese government
for the development of surveillance technology. “Our goal is to send the
machine to attend the college entrance examination, and to be admitted by
key national universities in the near future,” said Qingfeng Liu, Iflytek’s
CEO.7
Alibaba
Alibaba is capable of becoming one of the world’s most powerful
ecosystems, serving 2 billion people and enabling 10 million small
businesses with AI-enabled cloud computing and other services. Its
founder Jack Ma has been cultivating the image of a rebel fighting the
system against state-owned enterprises. He is, however, backed by Beijing,
whether in expanding China’s footprint in Africa, exploring the ocean
frontier in Southeast Asia, revitalizing the once-famous Silk Road, or
supporting China’s talks on global trade at Davos.
Alibaba’s AI priorities are currently in the cloud, with a platform
called DT PAI that allows developers and companies using Alibaba’s e-
commerce sites to analyze massive amounts of data in order to predict user
behavior as well as industry trends. The company chose Singapore as the
site of its new data center, as well as its international cloud division
headquarters.8 In 2017, it started offering cloud services in Europe without
any stated profitability targets.
Alibaba has been publishing impressive research in AI. For example,
they published a frequently quoted paper describing a DL system that
learned to execute a number of strategies employed by high-level StarCraft
players without it having been given any specific instructions on how best
to manage combat.9
In March 2017, Alibaba launched ET Medical Brain, a suite of AI
solutions designed to ease the workload of medical personnel. The suite
uses computers to act as virtual assistants for patients and in medical
imaging, drug development, and hospital management. Another healthcare
project between Alibaba Cloud and Wuhan Landing Medical High-Tech
Co. leverages AI and visual computation technologies to detect early stage
cervical cancer using cell cytology.10
Baidu
Qi Lu, formerly one of the leaders of Microsoft’s AI efforts, became
the COO of Baidu, where he will lead the company’s efforts to become a
global leader in AI. Baidu is working on driverless cars. It has turned an
app that started as a visual dictionary (take a picture of an object and the
app will identify it) into a site that uses facial recognition to find missing
people. Baidu’s speech recognition software is considered top of its class,
and it masters the difficult task of hearing tonal differences in Chinese
dialects. In 2017, Baidu opened a joint company-government lab partly run
by academics who once worked on research into Chinese military robots
(the Tsinghua Mobile Robot). In summer 2017, Baidu had 60 different
types of AI services in its suite called Baidu Brain.11
Baidu has acquired Raven Tech, a Y Combinator startup developing a
voice-recognition assistant that has largely focused on the technology
underpinning smart-home devices; Raven also launched a mobile voice
assistant app named Flow. Baidu will likely search for ways to integrate
Raven Tech’s products into its own digital assistant service, Dumi. Baidu
also acquired the U.S. computer vision company xPerception, which
makes vision perception software and hardware with applications in
robotics and virtual reality (VR).12 In September 2016, the company
launched a US$200 million VC fund dedicated to AI and augmented
reality (AR).13 In 2017, Fast Company named Baidu among the 10 most
innovative companies in AI and ML for accelerating mobile search with
AI.14 It has a platform called DuerOS for natural language-based,
conversational computing. In China, DuerOS has already accumulated
more conversation-based skills than Google Now or Apple's Siri. The
platform is in over 100 brands of private home appliances like
refrigerators, air conditioners, TVs, speakers, and the like. The architecture
of the platform is very similar to what Amazon is doing with
Echo/Alexa.15
Baidu has also created a platform for self-driving vehicles called
Apollo. The company is trying to replicate the strategy Google pursued with
its Android operating system for smartphones a decade ago. The platform has
already signed up over 50 partners, including Intel, Microsoft, and Ford.16 In
September 2017, Baidu announced that it will invest US$1.5 billion in
start-up autonomous driving companies. In 2016, Baidu Cloud announced
the ABC-Stack, a hybrid cloud platform that helps Baidu customers
integrate and deploy ML. It includes over 60 AI capabilities, which are
available as a set of open APIs and SaaS products.17
Tencent
In 2016, Tencent, developer of the mobile ecosystem WeChat, which
has 889 million active users in China, created an AI research laboratory in
Seattle and started to invest in U.S.-based AI companies. Additionally, the
company announced a significant new AI hire. Yu Dong, a prominent
expert on speech recognition and DL, is now the deputy director of the lab
in Seattle. He was previously a principal researcher at Microsoft, where he
worked on applying DL to voice recognition, an approach that has
produced dramatic advances in accuracy over the past few years. In
Europe, Tencent owns an AI lab in Barcelona.18
Huawei
In 2016, Huawei announced it would invest US$1 million in new AI
research in partnership with the University of California, Berkeley. In
2017, Huawei launched Kirin 970, a SoC (system on a chip) powered by
HiAI, a new computing architecture for AI acceleration. It includes an NPU
(neural network processing unit), making it up to 25 times faster and 50
times more energy efficient than traditional CPUs.19
Full-Stack Companies—The
Battle of the Giants
Alphabet/Google
AI Approach at Google
Google’s 2016 annual report highlights ML and AI on its first page. There
is no reference to AI in the 2015, 2014, or 2013 annual reports. Google
not only sees AI as a crucial component of its strategy; it is also developing
full-stack AI capabilities. It uses its data (or rather the data of its over 1
billion monthly active users) to train its proprietary algorithms deployed in
its own cloud, which runs on its own chipset. By operating a very broad
range of services, Google can access data in various formats, be they text,
image, video, maps, voice, or webpages. It has a powerful platform for ML
and invests in development of TPUs, its silicon chip.
Alphabet is seen as the tech industry’s top destination for AI engineers
and scientists. The company is aggressively trying to corral as much AI
talent as possible. Most of its acquisitions have been of firms specializing
in speech and image recognition, such as API.ai, Moodstocks, Dark Blue
Labs, and Vision Factory. These technologies are vital for hot new
services including self-driving cars and virtual assistants.1 Google also
appears to be trying to attract as many female AI researchers as possible, as
it understands how diversity contributes to better AI products.
In May 2017, Google launched a new venture capital program focused
on AI led by Anna Patterson, a longtime Google VP of engineering
focused on AI.2 The program is called Gradient Ventures. In July 2017,
the company announced Launchpad, a studio program to provide resources
to AI startups. Roy Geva Glasberg leads this effort. The Launchpad Studio
supports companies with specialized datasets, simulation tools, and
prototyping assistance. Launchpad operates in 40 countries and involves
Alphabet luminaries Peter Norvig, Dan Ariely, Yossi Matias, and Chris
DiBona.4
Company, Acquisition
Year Specifics
DNN Research, 2013 DL and neural network to upgrade Google’s image search
DeepMind, 2014 DeepMind beat human champions in “Go”; DL leader of the
industry
JetPac, 2014 Social travel application for iPad aggregating photos from
Facebook and automatically locating where they’re from
Emu, 2014 Messaging product for iPhones using ML and natural
language processing to understand messages, add relevant
information to manage your scheduling and reservations
Vision Factory, 2014 Object and text recognition based on DL
Dark Blue Labs, 2014 Learning deep structured and unstructured representations of
data to make intelligent products, including natural language
understanding
Granata Decision Systems, 2015 Software platform providing real-time optimization and
scenario analysis capabilities for large-scale, data-driven
marketing problems and group/organizational decision-
making
Timeful, 2015 Stealth company developing an app using AI, big data,
behavioral science, and product design to reinvent the way
people manage time
Moodstocks, 2016 Visual Search
API.AI, 2016 Bot Platform
AIMatter, 2017 Maker of the Fabby computer vision app, processing images
like humans do while using both a neural network–based AI
platform and SDK to detect and process images quickly on
mobile devices
Halli Labs, 2017 DL and ML system development
Kaggle, 2017 Operator of data science and ML competitions, enabling data
scientists to conduct ML contests, host datasets, and write
and share code
In December 2017, Google announced a new AI research lab in China
under the leadership of Fei-Fei Li, former director of the Stanford AI Lab,
and Jia Li, who was hired from Snap.5
Google’s AI Technologies
Google’s TensorFlow, an open-source platform for ML, provides
anyone with a computer and Internet connection access to one of the most
powerful ML platforms ever created. TensorFlow is even available on the
Raspberry Pi, which is a small single-board computer priced at US$30 and
developed in the UK to promote the teaching of computer science. Thus
the barriers to entry for coding AI products are quite low for anyone willing
to try it. TensorFlow is also used in the curriculum of Stanford
computer science professor Christopher Manning.
The TensorFlow system uses data flow graphs. In this system,
multidimensional arrays of data are passed along from one mathematical
computation to the next. These arrays are called tensors. The computations
are called nodes, and the way the data change from node to node expresses
the relationships of the data to the overall system. These tensors flow
through the graph of nodes, hence the name TensorFlow.
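The idea of tensors flowing through a graph of operation nodes can be sketched in plain Python. This is a toy illustration of the dataflow concept only, not TensorFlow's actual API: each node holds an operation and its upstream inputs, and evaluating the output node pulls data through the graph.

```python
# Toy dataflow graph: nodes are operations, edges carry tensors
# (represented here simply as Python lists).
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to incoming tensors
        self.inputs = inputs  # upstream nodes or constant tensors

    def evaluate(self):
        # Resolve each input: either evaluate the upstream node
        # or use the constant tensor directly.
        values = [i.evaluate() if isinstance(i, Node) else i
                  for i in self.inputs]
        return self.op(*values)

# Build a tiny graph: elementwise (a + b), then elementwise square.
add = Node(lambda x, y: [xi + yi for xi, yi in zip(x, y)],
           [1, 2, 3], [4, 5, 6])
square = Node(lambda x: [xi * xi for xi in x], add)

print(square.evaluate())  # [25, 49, 81]
```

Real TensorFlow works on the same principle at much larger scale, with tensors as n-dimensional arrays and nodes that can run on CPUs, GPUs, or TPUs.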
Google open-sourced its own image-captioning model around
TensorFlow that can both identify objects and classify actions with an
accuracy of over 90 percent. This activity has helped the TensorFlow
framework gain prominence and become very popular with ML
developers. According to calculations by François Chollet, now a Google
engineer and the creator of Keras, another popular DL framework,
TensorFlow was the fastest-growing DL framework as of September 2016,
with Keras in second place. TensorFlow is free, but it connects easily with
Google’s servers to provide greater data storage and computing power.12
In 2016, Google disclosed that it was using an in-house-developed chip
customized for AI called a tensor processing unit, or TPU. The TPU
provides high performance in DL inference while consuming only a
fraction of the power of other chipsets. In May 2017, Google announced
its second-generation TPU with a higher-precision floating point number
format, allowing it to perform DL training as well as inference. Google
says companies that use its cloud services will receive the benefits of
TPU’s power and energy efficiency.
AI Ethics at Alphabet
According to The Guardian, Google wanted to set up an ethics and
safety board as part of the DeepMind acquisition to ensure that its AI
technology is not abused. Since then, executives at DeepMind have
confirmed the existence of this board, though its work, composition, and
agenda remain unknown.
A second board was created in January 2016 to supervise DeepMind
activities in health care. This board meets four times a year and issues an
annual statement outlining its findings. It includes the editor of The Lancet,
Richard Horton; the NHS kidney expert, Prof. Donald O’Donaghue; and the
chair of Tech City UK, Eileen Burbidge.20
In late 2017, DeepMind created the Ethics & Society (DMES) research
group to investigate AI bias, the alignment of technology with ethical
values, lethal autonomous weapons as one of its key areas of study, and
the economic impact of automation. This new group came
along after the launch of the Partnership on AI, a consortium of technology
companies, activist organizations, and academia focused on best practices
for AI safety. Amazon, Microsoft, and IBM are members of the
Partnership.21
Facebook
Facebook’s Approach to AI
In Mark Zuckerberg’s manifesto about building communities, the
Facebook founder uses the terms “artificial intelligence” or “AI” seven
times, all in the context of how ML and other techniques will support
keeping communities safe and well informed. In this context, Facebook is
focused on optimizing user experience on the Facebook platform with AI
capabilities. The company launched Facebook at Work as a separate
version of the social network in 2016. We believe that, as a part of
enterprise experience, Facebook will offer bot technologies in a manner
similar to how it has done in its consumer business.
Facebook is a full-stack AI company, as it controls productivity and
experience on its own infrastructure, with its own platform, and provides
AI tools to developers to grow the ecosystem and ultimately benefit from
scale. Facebook runs the computing infrastructure necessary to serve 1.5
billion daily active users. The company invested in open source and open
standards within its Open Compute Project to optimize hardware. It has
already generated more than US$2 billion in savings from this investment.
The company announced next-generation GPU-based systems for training
neural networks, called “Big Sur.” Big Sur is Facebook’s Open Rack–
compatible hardware for large-scale AI computing, designed
with Quanta, a Taiwanese manufacturer, and Nvidia, a chipmaker that
specializes in GPUs. Facebook is apparently also collaborating with
Qualcomm to introduce AI software knowledge into its next generation of
chips.
Open-sourcing Facebook’s AI hardware means that DL has graduated
from the Facebook AI Research (FAIR) lab into Facebook’s mainstream
production systems. FAIR has published a number of good open-source
implementations in Torch, an AI framework used in DL and ML.
Facebook’s company-wide internal platform for ML is called
FBLearner Flow. We understand that this platform will not be open-
sourced to the public, as its value comes from Facebook data. It combines
several ML models to process several billion data points from the activity
of users, and forms predictions about a multitude of behaviors, including
individual users’ interests. FBLearner Flow’s algorithms then choose what
content appears in each user’s news feed and what advertisements a user
sees. ML is therefore augmenting the capabilities of Facebook’s engineers;
1,100 engineers are using FBLearner Flow, and not all of them are ML
experts.
Company, Acquisition
Year Specifics
Face.com, 2012 Face recognition software, offering a platform for developers and
publishers to automatically detect and recognize faces in photos using
free REST API
JibbiGo, 2013 Speech translation app, based on speech recognition and machine
translation
Wit.ai, 2015 API for Siri-like voice interfaces to enable developers to add a voice
interface to their device or app in a few minutes
Masquerade Mobile platform using facial recognition to allow users to add various
Technologies, 2016 filters in pictures or real-time videos and share them on social networks
Zurich Eye, 2016 Computer vision, enabling machines/robots to independently navigate
in any space, including indoors, urban areas, etc.
Ozlo, 2017 AI-powered assistant to build compelling experiences with Messenger
Facebook AI Research
Facebook’s AI groups engage in collaboration to quickly release new
features and products.23 They have two major divisions focused on AI.
Joaquin Quiñonero Candela has headed the new AML team since October
2015. He maintains a close relationship with FAIR (Facebook AI
Research), another AI branch based in New York City, Paris, and Menlo
Park that is headed by Yann LeCun of New York University’s Courant
Institute of Mathematical Sciences. Quiñonero Candela joined Facebook
from Microsoft and applied his experience with the Microsoft organization
in setting up ways to move good ideas quickly between product, R&D, and
other corporate functions.24 Such collaboration was successfully
implemented for a prototype that allows visually impaired users to place
a finger over an image and have their phones read aloud a description of
what’s happening onscreen.
It has recently become standard practice to train a system to identify
objects in a scene or reach a general conclusion, for example, in what
environment a picture was taken. FAIR’s researchers have found ways to
train neural nets to identify virtually every interesting object in an image
and then figure out from their positions and relationships to other objects
what the photo is about—for example, analyzing people’s positions to
understand what they are actually doing in a photo.
Quiñonero Candela breaks AI applications into “four areas: vision,
language, speech, and camera effects.” All of those, he says, will lead to a
“content understanding engine.”25 The company is building generalized
systems where work on one project can accrue to the benefit of other
teams working on related projects. Knowledge and algorithms are getting
transferred from one area to another, improving how quickly Facebook
ships products.
The FAIR team is supporting Facebook’s attempts to combat the fake
news problem. It has already produced a model called World2Vec. This
technology adds memory capability to neural nets and enables tagging
every piece of content with information, such as its origin and who has
shared it. With those data points, Facebook can analyze the sharing
patterns that characterize fake news. At the time of this writing, Facebook
finds itself at the vortex of the U.S. special prosecutor’s investigation into
alleged collusion between the Trump campaign and/or administration and
the Russian government. It remains to be seen how far and how public
their anti–fake news activities will become.26
In 2017, Facebook launched an open-source project to improve natural
speech recognition. The company hopes to get researchers from around the
world to share what they learn from their individual experiments with
language recognition and conversation technologies along with the data
they use.
In September 2017, Facebook announced an AI lab in Montreal with a
US$5.7 million investment. Joelle Pineau of the McGill School of
Computer Science will head the effort.
Facebook’s AI Applications
Facebook is convinced that building decent consumer products now
requires the predictive capabilities of AI. AI is now embedded into
Instagram and Messenger.
Facebook has built its neural net to work on smartphones in spite of the
fact that it does not control the hardware like Apple, which has
implemented similar technology. In the short term, this enables quicker
responses in understanding text and interpreting languages. In the longer
term, however, having such control could enable real-time analysis of what
a person sees and says.
Facebook uses ML to translate 2 billion news feed items per day and
has ceased its use of Microsoft’s Bing Translate in favor of its own
technology. Facebook also applies computer vision models to satellite
images to create population density maps to decide where it needs to
deliver broadband. According to Fortune, thanks to AI, Facebook’s video-
captioning efforts have increased engagement by 15 percent and boosted
viewing time by 40 percent.27 This, of course, increases Facebook’s
negotiating power with advertisers.
The launch of the Facebook photo-sharing service Moments in June
2015 showed how far Facebook research in AI has influenced its products.
This service relies on image recognition to let users create private photo
albums with select groups of friends. At the launch, the company said that
its technology was capable of recognizing human faces with 98 percent
accuracy. Regulations on privacy and facial recognition technology
prevented the launch of this feature in Europe.
Its acquisition of Oculus gave Facebook additional technology
capabilities in VR and AR. In 2017, Facebook launched a groundbreaking
platform for AR powered by AI. The platform virtually transforms the
camera on a smartphone into an engine to build digital effects that a user
can layer on top of what he or she sees through the camera. Facebook
allows outside companies and other developers to contribute to the
technology. The platform applies to still images, videos, and even live
videos shot with a phone. It makes it possible to “pin” digital objects to
specific locations and situations in the real world, for example by adding a
cup of coffee on a picture of a kitchen table, or a swimming shark in a
bowl of cereal. Facebook is expanding into new areas: inspired by games
like Pokémon Go, for example, this technology changes the way people
interact with the world around them. In addition to the Camera Effects
Platform, Facebook announced Caffe2, an open-source DL platform able to
capture, examine, and process pixels in real time on a smartphone.
The Facebook AR application depends on deep neural networks that
run on a phone—a capability for which Apple is famous. The neural
network is used to track people’s movements, so that digital effects
connect to the real world. According to Mike Schroepfer, Facebook’s
CTO, the company is exploring ways of adding effects based not only on
what people are doing, but also on what they are saying. Through these
efforts, Facebook is building a pipeline of core technologies that will
eventually enable all of these common AR effects.28 Cooperation with
chipset makers of wireless devices such as Qualcomm is essential to
ensure the success of such a complex venture. AI-enabled AR will not
happen just on a phone. It will also go into AR glasses, as explained by
Facebook Oculus Chief Scientist Michael Abrash.29 In 2016, Facebook
opened its wit.ai platform to allow developers to build powerful AI for use
as bots.
Apple
Apple’s Approach to AI
Apple has never been open about its technology, and started to talk
about AI only in 2016. However, if you use an iPhone, you are using
Apple’s AI. For example, your phone identifies a caller who isn’t in your
contact list, but who e-mailed you recently, or you get a reminder of an
appointment that you did not put into your calendar. These product
enhancements were supported by DL and neural nets, which Apple has
been developing for quite some time. In 2014, the company moved Siri
voice recognition to a neural-net–based system. Now Siri relies on DNN,
CNN, and LSTM, among others. Apple has never spoken publicly about
this.
Apple has always been a full-stack company, controlling experience on
devices throughout its App Store and developing its own component
capabilities to be independent from chipset providers like Qualcomm.
Indeed, Apple is developing its own chip for AI. This silicon will not just
power existing product lines; Apple needs an AI chip for two of the areas
that it is betting its future on: AR and self-driving cars.30
Apple is ranked second in the race to acquire AI startups, according to
the CB Insights database. The company’s activity on the M&A front
suggests it is trying to catch up to its rivals in AI. Apple has had a harder
time hiring AI talent because of its unwillingness, until recently, to allow
engineers to publish research papers and participate publicly in open-
source projects. Apple has stepped up its focus on ML to close the gap. In
the hopes of attracting researchers, it has undertaken a concerted PR
campaign in the tech press to talk up its gains in AI.
Apple bought Turi, a Seattle company, for US$200 million. Turi built
an ML toolkit that has been compared to Google’s TensorFlow. In 2017,
Apple acquired Lattice Data for US$200 million. Lattice has developed an
AI inference system that turns unstructured data from video, audio, and
other sources into structured data. Mike Cafarella, co-creator of Hadoop,
was Lattice’s co-founder.
Many of the recent Apple technology developments are strongly
connected to the 2016 hiring of Carnegie Mellon professor Ruslan
Salakhutdinov, a big supporter of unsupervised learning and DL.
Among other things, he appears to be behind Apple’s silicon chip efforts.
Apple’s AI Applications
Apple added DL developer tools to iOS in 2016. The new APIs are
allowing developers to build apps of their own where DL processing
would take place on the phone. That speeds up response time, as requests
do not have to go to the cloud and back, and protects user data by keeping
everything in the phone.32 Apple’s competitors have similar capabilities,
but none of the other companies can do these things while protecting
privacy. Many researchers and practitioners believe that Apple is
constrained by its lack of a search engine and its insistence on protecting
user information. But Apple seems to have figured out how to bypass both
hurdles. In 2017, Apple released a paper on how it implements differential
privacy.33 It involves a mathematically solid definition of privacy, which
allows ML to mask user-identifiable features. While it has trade-offs, we
believe other organizations that take privacy seriously should consider this
approach.
Apple uses DL to detect fraud in the Apple store, to extend battery life
between charges on all its devices, and to help the company identify the
most useful feedback from the thousands of reports from its beta testers. It
recognizes faces and locations in photos. The article “The iBrain is
Here” explains how Apple implements AI.34 Apple executives believe that
it is possible to get all the data required for robust ML without keeping
profiles of users in the cloud or even storing data on their behavior to train
neural nets. This is possible by taking advantage of Apple’s unique control
of both software and hardware, including silicon. The most personal
information stays inside the device. The computing happens right there on
the phone. A neural network–trained system “watches” while a user types,
thereby detecting key items and events, e.g., contacts, flight information,
and appointments. The data itself stays on the phone. Even in backups that
are stored on the company’s cloud, the information is filtered in such a
way that the backup alone can’t “disclose” anything specific on the user.
In 2017, Apple introduced an ML framework API for developers
called Core ML to make AI on mobile devices, including watches, as fast
and powerful as possible. According to Apple, image recognition on the
iPhone will be six times faster than on Google’s Pixel. Core ML will
support all sorts of neural networks (deep, recurrent, and convolutional), as
well as linear models and tree ensembles. Data won’t leave mobile devices
to respect privacy.35 Apple anonymizes data, tagging them with random
identifiers not associated with Apple IDs. Beginning with iOS 10, Apple
has implemented differential privacy, which crowd-sources information in
a way that doesn’t identify users at all. Traditionally, every word and
character that is typed is sent to servers, and then interesting things are
spotted. Apple does not do it this way, as they apply end-to-end encryption
on a massive scale.
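Apple's actual mechanism is more sophisticated, but the core idea of differential privacy can be illustrated with the classic randomized-response scheme: each individual report is deliberately noisy, so no single report reveals a user's true value, yet the population-level rate can still be estimated by inverting the noise. This is a minimal sketch, not Apple's algorithm:

```python
import random

# Randomized response: with probability 1/2 a user reports the truth,
# otherwise a uniformly random bit. Any single report is deniable.
def randomize(truth, rng):
    if rng.random() < 0.5:
        return truth               # report honestly...
    return rng.random() < 0.5      # ...or report a random bit

rng = random.Random(42)
true_rate = 0.25  # fraction of users for whom the sensitive fact is true
reports = [randomize(rng.random() < true_rate, rng)
           for _ in range(100_000)]

# P(report = 1) = 0.5 * true_rate + 0.25, so invert to recover the rate.
observed = sum(reports) / len(reports)
estimate = (observed - 0.25) / 0.5
```

With 100,000 noisy reports, the estimate lands close to the true 25 percent rate even though each individual answer is untrustworthy on its own.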
Amazon
Amazon’s Approach to AI
According to one July 2017 report, “In its 16 years as a public
company, Amazon has received unique permission from Wall Street to
concentrate on expanding its infrastructure, increasing revenue at the
expense of profit. Stockholders have pushed Amazon shares up to a record
level, even though the company makes only pocket change. Profits were
always promised tomorrow.”36 Amazon has in fact started posting profits
due to the success of Amazon Web Services (AWS). Amazon is pursuing
AI to expand the reach of AWS, embedding more developers and
enterprises into its ecosystem and offering new product categories like
Echo/Alexa.
In almost all “main categories, Amazon’s position as a platform works
as a data feedback loop: Amazon owns the richest dataset on how
consumers consume, how sellers sell, and how developers develop. This
allows Amazon to optimize its user experience in retail, logistics,
developer environment, and now in voice AI, all of which make Amazon’s
offerings even richer.”37 In April 2017, Jeff Bezos, Amazon’s CEO, wrote
extensively about AI and ML in his letter to shareholders. Voice, virtual
assistants, and natural language processing remain his focus areas.
Amazon is also building an AI-as-a-service offering on AWS and a developer community.
This is how Amazon implements a strategy to become a software platform
leader.
According to Bezos, “ML and AI is a horizontal enabling layer. It will
empower and improve every business, every government organization,
every philanthropy—basically there’s no institution in the world that
cannot be improved with ML. . . . Alexa and Echo . . . [and] Prime Air
delivery drones use a tremendous amount of ML, machine vision, systems,
natural language understanding and a bunch of other techniques. . . . [A]
lot of the value that we are getting from ML is actually happening beneath
the surface.”38
Swami Sivasubramanian, Amazon’s AI vice president, believes that AI
makes “it easier to do things that used to take considerable time like
product fulfillment, logistics, personalization, language understanding, and
computer vision to big forward-looking ideas like self-driving cars. At
AWS, the combination of the algorithms, access to cheap ways to store
information, process and query data (to train these algorithms), and access
to specialized computer infrastructure (GPU and custom ASICs)”39 all
accelerate AI. AWS is investing in all layers of the stack, from core DL
frameworks (such as Apache MXNet, Caffe2, and TensorFlow) to ML
platforms and ML applications (e.g., Amazon Lex, Amazon Polly, and
Amazon Rekognition). Amazon works with Nvidia and Intel to optimize
these DL frameworks to hardware on GPUs and CPUs.
Amazon’s support for MXNet has advantages over TensorFlow, Torch,
and other frameworks for ML and DL. It is compact and has cross-
platform portability, both of which Werner Vogels, Amazon CTO, praised:
“The core library (with all dependencies) fits into a single C++ source file
and can be compiled for both Android and iOS.” Developers can also use a
wide variety of languages with the framework, such as Python, C++, R,
Scala, Julia, Matlab, and JavaScript. The framework is highly scalable.
It is safe to assume that Amazon’s long-term plans for MXNet include
monetizing it by offering it as a cloud service. This does not have to be
through Amazon’s existing ML service; it could come from an officially
supported machine image like the existing DL API that Amazon already
sells. The former would be suitable for those who want an easily
consumed product; the latter, for those who want total hands-on control.
Amazon also wants to become a major sponsor of MXNet’s
development. Currently, the framework enjoys support from leading AI
companies and researchers such as Nvidia, Microsoft, Baidu, and Carnegie
Mellon. Vogels has stated that Amazon will “contribute code and improved
documentation as well as invest in the ecosystem around MXNet,” and
“partner with other organizations to further advance MXNet.”
This plan includes another possibility: creating custom hardware
specifically designed to run MXNet at scale to provide a service not found
anywhere else. In theory this could be done without making significant
changes to MXNet, although Amazon could build an in-house version with
enhancements specifically coupled to its own hardware. It is not as if the
publicly available open-source version would magically lose its value if
this happened. But cloud vendors recognize the importance of being able
to provide an at-scale option that is normally out of a regular user’s
practical reach.40 Given the scale of AI efforts at Amazon, it’s interesting
that the company so far has not made that many ML/DL-related
acquisitions.
AWS’s acquisition of the San Diego–based cybersecurity startup
harvest.ai is a very interesting move. Harvest.ai’s product MACIE
Analytics monitors in real time how an enterprise’s intellectual property is
being accessed. Assessments of who is looking at data, who is moving
and/or copying documents, along with the location of these events,
supports identification of suspicious behaviors. This enables prevention of
potential data breaches before they take place. AWS already offers
embedded security features for cloud customers, developed either
internally or with help of partners. In time, Amazon ML can become
Amazon’s next pillar of business, along with Prime and AWS.
Company, Acquisition
Year Specifics
Orbeus, 2015 Computer vision engine that enables machines to perform face
detection and recognition, logo and product recognition, optical
character recognition, and scene understanding
AngelAI, 2016 Formerly GoButler, a company that helps businesses automate
conversational commerce using natural language processing. Its domain-
specific NLP solution is built on an extensive dataset of real
conversations
Harvest.ai, 2017 Cybersecurity company using ML to analyze user behavior around a
company’s key intellectual property to try to identify and stop targeted
attacks before valuable customer data can be swiped
Body Labs, 2017 AI-powered 3D human modeling technology, allowing understanding
of both 3D human motion and shape from any input
Amazon’s AI Applications
Amazon is using ML to optimize personalized recommendations and
improve inventory management. Since 2016, Amazon has been known for
its voice-enabled device. Amazon is one of the most impressive innovators
in the world because it has the willingness and the skills to pursue
multiple bets. The company unsuccessfully tried to enter the smartphone
business with its Fire model, and in October 2014 it took a US$170 million
write-off on this business. In November of that year, the company
launched the Amazon Echo, which the company had already started
working on in 2011. Amazon Echo created its own market of voice-based
assistants for home, car, and later B2C use cases. The home was one of the
few places in the world where phones were not necessarily the most
practical devices. An ecosystem needed to be assembled: more “smart”
products—light bulbs, thermostats, and power switches—were coming
onto the market.42 Amazon is building a new ecosystem around Echo and
Alexa (Echo’s voice assistant), creating a framework of skills that allows
smart devices to connect to Alexa. The payoff was already obvious at the
2016 Consumer Electronics Show (CES): Alexa support was everywhere.
LG, GE, Ford, and many more companies announced gadgets, home
appliances, and even cars that could connect to Alexa.
Alexa’s business model is smart: Amazon can subsidize the device with other
lines of business. Customers will buy multiple Echos. Alexa makes it easy
to order products, supporting Amazon’s strategic goal of becoming the
logistics provider for everything and everyone.
The number of “skills” that Alexa possesses—tasks that it can perform,
such as setting a thermostat or summoning an Uber—had grown from 135
in January 2016 to 15,000 by the summer of 2017. As a comparison, in the
summer of 2017, Google’s Assistant had 378 skills and Microsoft’s
Cortana had 65.
Flash briefings make up 20 percent of U.S. skills for Alexa, as these
are the easiest to develop and implement. Examples include The Wall
Street Journal, Digiday, NPR, and The Washington Post.43
In September 2016, Amazon announced the annual Alexa Prize, worth
US$2.5 million, for which “university students will try to create bots that
can intelligently chat about topical matters for 20 minutes.”44 There is also
the Alexa Fund, Amazon’s captive venture arm, which has further
supported the developer and hardware ecosystem around Alexa. The
platform offers software development kits (SDKs). Alexa Fund
investments are focused on adding new interfaces, e.g., gesture controls by
Thalmic Labs, as well as new form factors and hardware like the robotics
developed by Embodied. In January 2017, the Alexa Accelerator was
announced in partnership with Techstars, one of the most active IoT
accelerators worldwide.
Amazon has already worked with the pharmaceutical company Merck
& Co. to make an Alexa application for diabetes patients to check glucose
levels. Amazon invested in the genomic startup GRAIL, which
demonstrates the company’s interests in health care.45 The wide adoption
of Alexa in health care requires compliance with data privacy rules. So far,
however, Alexa has not yet complied with HIPAA (Health Insurance
Portability and Accountability Act).46 Amazon also plans to use Alexa in
call centers.
Amazon wants to serve big and small developers through AWS. These
customers want ML without the upfront costs. Amazon unveiled offerings
that will work like an API and allow any developer to access Lex (the NLP
inside Alexa), Amazon Polly (speech synthesis), and Amazon Rekognition
(image analysis). In addition, Amazon is focused on drones involving
vision for obstacle avoidance. It seems Amazon wants to offer more pre-
built algorithms.47 At the launch of that platform, AWS CEO Andy Jassy
said, “We do a lot of AI in our company. We have thousands of people
dedicated to AI in our business.”48 Amazon CTO Werner Vogels has
described security as one of the integrated elements of every service the
company builds. For example, in late 2016, the company launched
Amazon Inspector, a service delivering vulnerability reports on security
and compliance based on a customer’s AWS activities.
Amazon also invests in and partners with companies focused on
enterprise and cloud services, e.g., the Middle Eastern e-commerce site
Souq.com; the India-based Housejoy, which helps Amazon expand its reach
in the region; Do.com, meeting-productivity software that could roll into
Chime, AWS’s video-conferencing suite for business; and Twilio, whose
communications APIs serve enterprises on AWS.
We believe Amazon might move into self-driving technologies as well.
Amazon has announced Prime Air, a program in which unmanned aerial
vehicles will deliver products in less than 30 minutes. Unmanned logistics
and delivery are new business areas with an uncertain timeline. We
understand that Amazon is thinking of adding drones to its pool of delivery
vehicles. Right now there are demos, but no actual products. Amazon
patent activities suggest the “company is trying to create a flying
warehouse that would dispatch package-laden drones to the ground. Called
an ‘Airborne Fulfillment Center’ (AFC), the patent describes it as ‘an
airship that remains at high altitude.’ In another patent, Amazon talks
about a drone mesh network that alerts all other drones about their
surroundings.”49 Other patents contain details on how Amazon’s
fulfillment centers would use robotics to assemble orders by tossing items
through the air.50
As the company is changing the way products and services are
delivered to customers, its efforts will potentially influence the
development of autonomous vehicles as well. According to Nvidia CEO
Jen-Hsun Huang, the “Amazon effect” will turn transportation on its head
with autonomous technology playing a big role, especially for point-to-
point movement of products and people.51
Microsoft
Microsoft’s Approach to AI
Like Google, Facebook, and Amazon, Microsoft is another leading
company in AI. Yoshua Bengio, one of the fathers of DL, agreed to be a
strategic advisor to Microsoft in January 2017. This gave the company a
direct line to one of AI’s top resources for ideas, talent, and strategic
direction. Bengio works with Harry Shum, who is in charge of Microsoft
AI research. The research group is positioned horizontally across product
categories such as Windows, Office, and Azure. This development is
promising because it should accelerate product development and get AI’s
benefits to customers faster.52 Microsoft has a dedicated AI business unit
with over 8,000 computer scientists and engineers focused on embedding
AI into the company’s products.
Microsoft’s approach is very engineering oriented. The company has
fantastic talent and is credited with inventing residual networks. Its
frameworks include Decision Forests. Models such as Random Forests and
Gradient Boosted Decision Trees imply a “tree of logic rules that slice up
the domain recursively to build a classifier. This approach has been
effective in many Kaggle competitions. Microsoft has an approach that
melds the tree-based models with DL. . . . Their Cognitive toolkit . . . is a
high-quality piece of engineering. It is likely one of the better frameworks
out there with respect to learning, using distributed computers.”53
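The “tree of logic rules that slice up the domain recursively” can be illustrated with a minimal sketch. The toy dataset, depth limit, and plain Gini-impurity split criterion below are illustrative assumptions; this is not Microsoft’s Decision Forests implementation:

```python
# Minimal decision-tree sketch: logic rules that slice the domain
# recursively to build a classifier (illustrative only).

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(points, labels):
    """Find the (feature, threshold) rule minimizing weighted impurity."""
    best, best_score = None, float("inf")
    for feat in range(len(points[0])):
        for x, _ in zip(points, labels):
            thr = x[feat]
            left = [y for p, y in zip(points, labels) if p[feat] <= thr]
            right = [y for p, y in zip(points, labels) if p[feat] > thr]
            score = gini(left) * len(left) + gini(right) * len(right)
            if left and right and score < best_score:
                best_score, best = score, (feat, thr)
    return best

def build_tree(points, labels, depth=2):
    """Recursively slice the domain; leaves predict the majority class."""
    if depth == 0 or gini(labels) == 0.0 or best_split(points, labels) is None:
        return max(set(labels), key=labels.count)  # leaf node
    feat, thr = best_split(points, labels)
    li = [i for i, p in enumerate(points) if p[feat] <= thr]
    ri = [i for i, p in enumerate(points) if p[feat] > thr]
    return (feat, thr,
            build_tree([points[i] for i in li], [labels[i] for i in li], depth - 1),
            build_tree([points[i] for i in ri], [labels[i] for i in ri], depth - 1))

def predict(tree, x):
    while isinstance(tree, tuple):
        feat, thr, left, right = tree
        tree = left if x[feat] <= thr else right
    return tree

# Toy 2-D data with three classes separable by axis-aligned rules
X = [(1, 1), (2, 1), (8, 2), (9, 3), (1, 9), (2, 8)]
y = ["a", "a", "b", "b", "c", "c"]
tree = build_tree(X, y, depth=2)
print([predict(tree, p) for p in X])
```

Random Forests train many such trees on random subsets of data and features, and Gradient Boosted Decision Trees fit each new tree to the errors of the previous ones; the recursive-slicing core is the same.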
Microsoft is one of the biggest acquirers of AI startups.
Microsoft’s AI Applications
Microsoft is very advanced in natural language recognition and
processing. Cortana is a less popular, more functional version of Siri, with
less visibility than Amazon’s Alexa. Because Cortana comes with every
installation of Windows 10, it has 145 million monthly active users.
Unlike Alexa, it responds not just to voice but to text as well. As Cortana
is linked to Bing, Microsoft’s search engine, which has 30 percent of the
U.S. market, it learns from data signals from many different devices.
Microsoft, a dominant player in business productivity, is extending its
relationships with companies like Nissan and Volkswagen into new
product categories, e.g., embedding ML into cars. Microsoft also designs
bots to automate one-off tasks a person used to do herself, like making a
dinner reservation or completing a bank transaction. Microsoft launched
the Microsoft Bot Framework in 2016 to enable bots for a variety of
environments, from Skype to Alexa to Facebook Messenger. A suite of
plug-and-play tools like image recognition and ML allows developers to
leverage Microsoft’s research on AI. According to Satya Nadella,
Microsoft’s CEO, bots are the new apps. Consumer-focused examples
include:
• the language-learning bot built at Microsoft’s Skype Bot Lab in Palo Alto;
• the social bot Zo, available on the web;
• Xiaoice (“Little Ice”), Zo’s more experienced Chinese counterpart, with 40 million users;
• the teen-focused social messaging app Kik; and
• Rinna, for Japanese speakers.
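A bot that automates a one-off task such as booking a dinner reservation can be sketched, in heavily simplified form, as keyword-based intent matching. The intents, keywords, and responses below are hypothetical; the real Bot Framework adds channel connectors, dialog management, and ML-based language understanding:

```python
# Minimal rule-based bot sketch: match an utterance to an intent by
# keyword overlap, then return a canned response (illustrative only).

INTENTS = {
    "reserve_table": {"reserve", "reservation", "table", "dinner"},
    "bank_transfer": {"transfer", "send", "pay", "payment"},
}

RESPONSES = {
    "reserve_table": "For how many people, and at what time?",
    "bank_transfer": "Which account should I transfer from?",
    None: "Sorry, I did not understand. Can you rephrase?",
}

def classify(utterance):
    """Pick the intent whose keyword set overlaps the utterance most."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

def reply(utterance):
    return RESPONSES[classify(utterance)]

print(reply("Book a dinner reservation for tonight"))
# -> "For how many people, and at what time?"
```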
Bots are being integrated into Bing searches of locations and events. All of
Microsoft’s bots are built on top of the Bing knowledge platform. A big
part of training the bots is teaching them what concepts are interrelated.
Enterprise bots include a Calendar.help bot to set up meetings and
coordinate schedules, and Who.bot, which identifies people within
Microsoft’s workforce with desired skills. These developments may also
be extended into the LinkedIn database.55
In the automotive sector, Microsoft is teaming up with Baidu to power
self-driving cars. The company will provide its Azure cloud services to
companies using Baidu’s open-source Apollo self-driving platform outside
China. With Apollo, Baidu aims to follow the path Google has already
taken with Android on smartphones. Baidu has already
signed on about 50 partners willing to build and improve the platform,
including Chinese OEMs (e.g., Great Wall Motors) and the U.S.
companies Intel and Ford.56
Microsoft Azure Cloud offers ML tools for developers covering big data,
GPUs, data wrangling, and container-based model deployment.
Environmental
• Take the lead and implement “ethics and safety” policies and programs
Social
• Develop guardrails for specific, sensitive AI applications (e.g., health care)
• Special social responsibility to protect against harmful socio-political impacts (fake news, propaganda)
• The special privacy responsibilities of voice-enabled AI products (mobile/home)
Governance
• Implement AI ethics advisory boards at leading AI companies to oversee ethical design issues
• Anonymization of personal data as structured and unstructured data grow exponentially
• Development of AI to combat fraud, corruption, and similar crimes
CHAPTER THIRTEEN
Proprietary Data
Merlon Intelligence gathers training data from compliance analyst interactions with a financial
crimes investigation dashboard. Gathering the data requires a full-stack product, in which the
interface is designed and instrumented to gather data that feeds into the models. The setup is
learning to rank: the models learn to rank alerts by risk. Banks face many operational risks in
deploying new financial-crimes compliance software, so the market is hard to penetrate. The harder
it is to gather proprietary data, and the more the data are interlinked with a company’s go-to-market
and product development strategy, the more stable the business.
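Learning to rank for risk can be sketched with a pairwise approach: analyst feedback says which of two alerts is riskier, and a linear scorer is nudged until it agrees. The features, weights, and data below are hypothetical; Merlon’s actual models are not public:

```python
# Pairwise learning-to-rank sketch for risk scoring (illustrative).
# Each alert has hypothetical features; analyst feedback supplies
# (riskier, less_risky) pairs, and a linear scorer is updated
# perceptron-style whenever it ranks a pair wrongly.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(pairs, n_features, epochs=50, lr=0.1):
    """pairs: list of (riskier, less_risky) feature vectors."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for hi, lo in pairs:
            if score(w, hi) <= score(w, lo):  # wrong order or tie
                w = [wi + lr * (a - b) for wi, a, b in zip(w, hi, lo)]
    return w

# Hypothetical features: (num_sanctions_hits, normalized_txn_amount)
pairs = [
    ((3, 0.9), (0, 0.2)),
    ((2, 0.5), (1, 0.1)),
    ((4, 0.8), (2, 0.3)),
]
w = train(pairs, n_features=2)

alerts = [(0, 0.1), (3, 0.7), (1, 0.4)]
ranked = sorted(alerts, key=lambda x: score(w, x), reverse=True)
print(ranked)  # riskiest alert first
```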
Full-Stack Industrial Solution
Blue River builds agriculture equipment that reduces chemicals and saves costs. They “personalize”
treatment of each individual plant, applying herbicides only to the weeds and not to the crop or soil.
They use computer vision to identify each individual plant, ML to decide how to treat each plant,
and robotics to take precise corresponding action for each plant.
Cybersecurity Integrated in Product Design and Implementation Processes
Pypestream offers messaging and bots across all segments of the customer value chain in financial
services, automotive, and retail. Product design and implementation/customization are linked with
cybersecurity best practices, and reviewed on a weekly basis from a security perspective.
Team with the Right Balance of Skills
The Zymergen leadership team has capabilities that are important for industrial biology. CEO
Joshua Hoffman is a business executive, CSO Zach Serber is a scientist, and CTO Aaron Kimball is
a data expert. The balance of skills makes the business stable.
Domain — Companies
Productivity: Clara, X.ai, Branasof, Zomm.ai, Julie Desk
Customer Care and Lifecycle Management: Action IQ, Preact, Pypestream, Interactions, Zendesk
HR and Talent: Unitive, Entelo, Wade&Wendy, Textio, SpringRole, HiQ
Consumer Marketing: Appio, Lexalytics, AirPR, Invoa
Finance and Operations: AppZen, Sapho, Cognitive Scale, WorkFusion
Industrial and Manufacturing: SkyChain, Predix, Pensiamo, Fusion APS
Security and Risk: Vectra, SignalSense, Zimperium, Graphistry, Darktrace, Anodot, Sift Science
CHAPTER FOURTEEN
Large enterprise software vendors like IBM or SAP are retooling their
businesses around the belief that AI will make their products and systems
smarter, more resilient, and more profitable. Each company takes a slightly
different approach. Success depends largely on data, capabilities to
transform legacy platforms and systems, access to AI talent, and a nuanced
balance between marketing and ability to execute. In most cases,
traditional enterprise vendors spend more to deliver on ML and DL than
do domain- and product-focused players, who develop their applications
from scratch.
In this chapter, we do not cover Oracle in detail, though it is clearly
reviewing its strategy and may announce more AI-related updates in 2018.
In September 2017, Oracle unveiled a fully autonomous version of its
database in the cloud, making clear that it wants to compete with
AWS.1
These are some of the critical questions that the likes of SAP,
Salesforce, and IBM are asking:
• Should we engage in more AI acquisitions?
• Should we develop an internal platform?
• How do we balance external partnerships and acquisitions with internal capabilities and
roadmaps?
• How far do we go in introducing AI into our business?
All three companies struggle with similar challenges, though they look
very different from the outside, and though IBM is in a somewhat more
vulnerable position as doubts were raised in 2015 and 2016 about the
company’s ability to deliver on previous promises. We examine each of
them in this chapter.
Enterprise startups with generic ML/DL offerings are in a difficult
position, as they compete against traditional vendors and product and
domain specialists. If they execute well, they will still have great potential
to help traditional companies in augmenting and automating business
processes and tasks, introducing ML into customer care or HR activities,
supporting data preparation, and delivering on training as a service.
SAP
SAP has been a gatekeeper for enterprise data since its inception.
Today SAP delivers APIs with ML through its enterprise cloud, e.g.,
Concur for travel bookings and SuccessFactors for HR offerings.
According to ComputerWorld UK, Concur processes “[US$50] billion of
travel transactions every year. Success Factors is installed on 245,000
systems.”2
SAP first created simple invoice- and CV-matching applications in
which computers read and match documents, freeing people from these
tasks; the CV system is called SAP Resume Matching. ML also powers
SAP Clea and the SAP Cash application, which automates the process of
capturing and matching payments.
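The invoice-matching idea can be sketched with toy rules: match an incoming payment to an open invoice when the invoice number appears in the payment memo and the amounts agree. The data and rules below are hypothetical; the real product learns matching criteria from historical accountant decisions:

```python
# Toy invoice-matching sketch (illustrative, not SAP's algorithm).

invoices = [
    {"id": "INV-1001", "amount": 250.00},
    {"id": "INV-1002", "amount": 99.90},
]
payments = [
    {"memo": "payment for inv-1002 thanks", "amount": 99.90},
    {"memo": "re invoice 1001", "amount": 250.00},
]

def match_payment(payment, invoices):
    """Return the id of the invoice this payment settles, if any."""
    memo = payment["memo"].lower()
    for inv in invoices:
        number = inv["id"].split("-")[1]
        # Rule: invoice number appears in the memo AND amounts agree.
        if number in memo and abs(inv["amount"] - payment["amount"]) < 0.01:
            return inv["id"]
    return None

print([match_payment(p, invoices) for p in payments])
```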
The SAP digital innovation system is called Leonardo, after Leonardo
da Vinci. It comprises applications and microservices around IoT, ML,
blockchain, and big data. The scope is deep, as these technologies are
widely interdependent, and all rely on data. SAP has a number of
accelerator programs to implement Leonardo across industries and across
business functions. The implementation is customized and supported by
consultants from SAP Digital Business Services. They help SAP
customers to clarify innovation priorities and design prototypes. According
to ComputerWoche.de, prototypes can be achieved within eight weeks of
an initial workshop.3
SAP HANA and Google Cloud have announced a partnership, and
developers can now use the SAP HANA express edition to build custom
enterprise applications on the Google Cloud platform with the flexibility of
on-demand deployment. SAP Cloud will soon be available on Google
Cloud. The collaboration involves building ML features into
conversational apps that guide users through workflows and transactions.
There will also be integrations with Google productivity applications
such as Gmail, Calendar, and Sheets. Thus, SAP customers will receive
more choice and better capabilities to link e-mail conversations; search
for duplicate contacts; and create new leads, tasks, and visits directly
from Gmail.4
Salesforce
Salesforce offers cloud tools for managing sales leads and
communication with customers. The company’s efforts in AI became
visible in 2016, when it announced two major acquisitions: MetaMind,
backed by Khosla Ventures, and the open-source ML server PredictionIO.5
Salesforce also acquired MinHash, a big-data and ML startup; the smart
calendar startup Tempo AI; and a customer relationship platform, RelateIQ
—all of which together form the backbone of SalesforceIQ.6
PredictionIO, originally called TappingStone, was founded in 2012 in
London and offered “ML as a service” before pivoting to an open-source
model with the Apache Software Foundation (ASF), sponsored by Apache
Incubator. Together with ASF, the company developed traction with 8,000
developers and 400 applications. MetaMind’s recursive neural networks
power Salesforce’s sentiment analysis and image classification. Image
classification is a drag-and-drop tool: customers upload an image and
the system identifies its subjects. The tool can also perform
sentiment analysis on text such as tweets, successfully labeling the
language as positive, negative, or neutral. Richard Socher, chief scientist at
Salesforce and teacher of an AI course at Stanford, is a well-regarded ML
researcher who came from MetaMind. He is enthusiastic about
Salesforce’s huge amount of data. Combined with advances in ML
algorithms, the company will be able to make progress in sales CRM.
MetaMind technology has now been fully integrated into Salesforce
platforms.
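The sentiment-labeling behavior described above can be illustrated with a crude lexicon count. The word lists are invented for illustration; MetaMind’s production models are trained neural networks, so treat this only as a sketch of the input/output behavior:

```python
# Minimal lexicon-based sentiment sketch: label text as positive,
# negative, or neutral by counting opinion words (illustrative only).

POSITIVE = {"great", "love", "excellent", "happy", "good", "awesome"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor", "broken"}

def sentiment(text):
    # Strip punctuation, lowercase, then count opinion words.
    words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(sentiment("Love the new release, it is excellent!"))  # positive
print(sentiment("Terrible support, the app is broken."))    # negative
print(sentiment("The package arrived on Tuesday."))         # neutral
```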
Salesforce’s Einstein, introduced in late 2016, is the core AI
technology powering the Salesforce CRM platform by using data mining
and ML algorithms.7 Salesforce has 150,000 customers, most of which
have customized Einstein for their own needs. Salesforce keeps each
customer’s data separately. When a customer adds a custom data field,
Salesforce doesn’t even know the nature of the information, according to
Wired reporting. Within Einstein there is a technology called Optimus
Prime, which automates the creation of ML models for each Salesforce
customer so that data scientists’ model-training tasks become easier.8
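Automating model creation per customer can be sketched as a tiny selection loop: fit several candidate models on a customer’s data and keep the one with the best held-out accuracy. The candidate models and data below are hypothetical; Salesforce has not published how Optimus Prime works internally:

```python
# Toy "automated model creation" sketch: fit candidates, keep the best
# by validation accuracy (illustrative only).

def majority_model(train):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

def threshold_model(train):
    """Pick the threshold on a single feature with best train accuracy."""
    best_thr, best_acc = None, -1.0
    for x, _ in train:
        acc = sum((xi > x) == yi for xi, yi in train) / len(train)
        if acc > best_acc:
            best_thr, best_acc = x, acc
    return lambda x: x > best_thr

def auto_fit(train, valid):
    """Return the candidate with the highest validation accuracy."""
    candidates = [majority_model(train), threshold_model(train)]
    def acc(m):
        return sum(m(x) == y for x, y in valid) / len(valid)
    return max(candidates, key=acc)

train = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]
valid = [(0.15, False), (0.85, True)]
model = auto_fit(train, valid)
print([model(x) for x, _ in valid])
```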
Currently, Einstein aims to proactively spot trends across sales,
services, and marketing systems. The system is aiming at forecasting
behavior that could spot sales prospects and opportunities, or identify a
crisis situation in advance. The system has interesting image recognition
analysis tools that, when properly trained, can count objects and even
recognize features like color and size.9 The program studies the history of
the data and figures out for itself which factors best predict the future. It
keeps adjusting the model based on new information over time, and the
more data, the subtler the answers.10 “If we were to make the 150,000
companies that use Salesforce 1 percent more efficient through ML, you
would literally see that in the GDP of the United States,” Socher says.11
An algorithm for summarizing documents produces surprisingly
coherent and accurate snippets of text from longer pieces. Its performance
is still not as good as that of a human. Still, it paves the way toward
condensing text to eventually become increasingly automated.12
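Extractive summarization, a simpler cousin of the approach described above, can be sketched by scoring sentences on document word frequency and keeping the top one. The stopword list and sample document are invented for illustration; Salesforce’s published summarizer is an abstractive neural model, not this:

```python
# Minimal extractive summarization sketch (illustrative only).

def summarize(text, n_sentences=1):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Word frequencies over the whole document (crude stopword list).
    stop = {"the", "a", "of", "and", "to", "in", "is", "it"}
    freq = {}
    for s in sentences:
        for w in s.lower().split():
            if w not in stop:
                freq[w] = freq.get(w, 0) + 1
    def score(s):
        words = s.lower().split()
        return sum(freq.get(w, 0) for w in words) / len(words)
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve original order of the chosen sentences.
    return ". ".join(s for s in sentences if s in top) + "."

doc = ("The quarterly report shows revenue grew ten percent. "
       "Revenue growth was driven by cloud subscriptions. "
       "The weather in the city was pleasant.")
print(summarize(doc))
```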
Summarizing text fully would require AGI or ASI, as it implies
common-sense knowledge and a mastery of language. Shortcomings might
be resolved over time through the injection of new technologies from third
parties. In 2017, Salesforce launched a US$50 million AI fund.
IBM
Watson and Reputation Risk
Watson was mentioned over 200 times in earnings calls starting in
2013. Over the last five years, IBM issued more than 200 press releases
with Watson in the headline. Over this period, IBM invested US$15 billion
in cognitive assets, not including US$5 billion spent on data acquisitions
such as Truven Health Analytics and the Weather Channel.
Watson became famous in 2011 when it won the game show Jeopardy!
against the show’s two highest-rated players. The Watson business unit
was created in 2014, and now has 7,000 employees.
In 2016 and 2017, IBM suffered a major reputational setback, as its
marketing machine brought Watson into too many verticals too soon. In
July 2017, investment bank Jefferies suggested that IBM would not be able
to return value to shareholders because it cannot compete properly with
other technology giants investing in AI. The company was “outgunned in
the war for AI talent,” a problem that will only worsen.13
We interviewed 12 executives working with Watson. In every single
case, feedback on Watson’s performance was cautious and reserved. In all
cases, IBM customers had to invest vast human and financial resources to
get their models up and running. Everyone agreed that Watson’s technical
capabilities were nowhere near those of Google’s DeepMind or of niche-
focused startups. “IBM is excellent at using their sales and marketing infrastructure
to convince people who have asymmetrically less knowledge to pay for
something,” says Chamath Palihapitiya, founder and CEO of Social
Capital.14
In order to manage reputational risk, IBM had no other way but to
partner with serious AI research brands. In 2017, IBM teamed up with
MIT for a ten-year fundamental AI research project, creating the US$250
million MIT-IBM Watson AI Lab. The focus is on advancing hardware,
software, and algorithms related to DL and other areas; increasing AI’s
impact on industries such as health care and cybersecurity; and exploring
the economic and ethical implications of AI for society.15
Watson comprises dozens of separate cognitive and ML components,
including image recognition, a language classifier, and text-to-speech
translation. The current version does not seem to perform the higher-level
cognitive reasoning that surprised the world during the Jeopardy!
contest. IBM legacy technologies are complemented on Watson by
capabilities to use any language (including Scala, Java, and Python); any
popular ML framework (including Apache SparkML, TensorFlow, and
H2O); and any transactional data type, according to the company. IBM,
however, seems not to be on the cutting edge of DL research and practice.
While their TrueNorth neuromorphic computing program seems to be
promising, it too currently does not have practical substance.16
Watson has ingested an extraordinary amount of data. Serving up
information through a search-and-retrieve function cannot be considered
domain knowledge, however. Only the autonomous application of that
information could be characterized as autonomously created knowledge.
It would be one thing if Watson were able to distinguish information
from autonomously created knowledge and thereby provide added value
and quality in decision-making. However, in many cases (including
finance, automotive, and health) Watson is unable to answer “why”; in
other words, it is unable to explain why the decision is a materially
significant one.
It may be that Watson does not have the treasure of metadata that
Salesforce does, for example. This prevents IBM from automating more
aggressively. Throwing armies of experts at a customer’s problem is not a
scalable or sustainable approach.
According to Shaunak Khire, cofounder of Stealth, the Watson model
is built on the concept of adding the value from data, not adding value
from the interface and technology. The whole point of AI is that it learns,
reasons, and provides outputs autonomously. Watson does not appear to be
close to achieving this goal because it deploys the AI piece only in the
context of NLP and not during the decision-making process. As one writer
noted, “There is a video commercial by Watson with Alpha Modus, where
Watson says it can help investors predict markets close before they close.
What does this even mean? How does one even come to define quality in
such scenario? . . . I suppose one could say if it improves trading on
investment performance by x%—then you at least have some empirical
data point. I find this strange and a marketing gimmick at best without any
underlying datapoints. ‘ROI of 900%,’ but what is the underlying
metric?”17
IBM has begun to deploy ML tools on its Z series mainframes so they
can be run on the premises instead of requiring that data be re-hosted in a
cloud like IBM’s Bluemix. This means that companies are able to build
and deploy these applications on their own systems and in their private
cloud. This removes the cost, latency, and risk of moving data off the
premises.18 IBM Z can encrypt up to 13 gigabytes of data per second per
chip, with roughly 24 chips per mainframe depending on the configuration,
thanks to proprietary on-chip processing hardware. This allows for a high
level of security. Data will be kept “encrypted at all times unless it is being
actively processed, and even then it is only briefly decrypted during such
computations.” The system “cuts down on the number of administrators
who can access raw readable data. This means that hackers have fewer
targets” to gain system access.19
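Assuming the quoted figures (13 GB/s per chip, roughly 24 chips per mainframe), the aggregate encryption throughput works out as a simple back-of-envelope calculation; since the chip count varies with configuration, treat the result as an upper bound:

```python
# Back-of-envelope aggregate encryption throughput for the figures
# quoted above (illustrative arithmetic, not an IBM specification).

gb_per_sec_per_chip = 13
chips_per_mainframe = 24  # "roughly", per the text
aggregate = gb_per_sec_per_chip * chips_per_mainframe
print(f"~{aggregate} GB/s per fully configured mainframe")  # ~312 GB/s
```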
IBM is trying to position Watson as a platform others can use to build
applications. Nearly 80,000 developers have worked with it. IBM is also
looking to establish strategic partnerships to boost its cognitive
capabilities. In March 2017, the company announced a strategic deal with
Salesforce to jointly provide ML and data analytics services, which
support enterprises in making faster and smarter decisions. IBM’s Watson
and Salesforce’s Einstein will be integrated for e-commerce, sales, and
marketing.
Watson’s Applications
IBM pitches Watson’s diverse range of capabilities to industries like
health care, retail, finance, tax, and architectural drawing, where it claims
to have acquired deep vertical knowledge. Watson grows when industries
it serves grow—it extends the personality of the company that’s using
Watson, so it operates sort of like a white label. When IBM talks about
Watson, it references augmented intelligence more than AI.
IBM has more than 500 industry partners across industry verticals.
Many IBM customers use a Watson-powered virtual assistant to handle
customer support. Its bots have deep domain knowledge about specific
areas. Watson wants to expand into further fields of business, e.g., into
media to scan for potential fake news, according to David Kenny, general
manager of Watson.22 At the same time, Watson’s prices are dropping as
customers can receive similar service and capabilities from competitors,
sometimes for free. As previously discussed, AI- and ML-related APIs are
currently available from Microsoft, Google, and Amazon. Specialized
vendors like Nuance are offering APIs as well. In this context, IBM had to
drop the price of Watson Conversation over 70 percent, from US$0.0089 to
US$0.0025 per API query per month, in October 2016.23
Watson has interesting data assets in the health care and meteorology
industries. In the case of meteorological data, Watson’s capabilities from
its Weather Channel acquisition are not completely unique. CEO Rometty
is also publicly talking about Watson’s automotive capabilities due to its
collaboration with GM OnStar, which will theoretically put Watson into
millions of cars.24 According to our sources at GM, the collaboration was
difficult and GM invested a lot of unplanned money and engineering
resources to make the solution work.
Enterprise Startups
There are several interesting newcomers in the B2B market, suggesting
that they can help companies to introduce ML/DL into their product lines.
Some of these started in mobile cybersecurity, IoT, and silicon, and
acquired DL teams to complement their offerings. Others secured VC
funding to go after traditional enterprises. Time will tell how successful
these newcomers become. Generally, we are more confident about
domain-focused players, which possess the proprietary data and training
expertise to solve problems with focus. We believe that enterprise
newcomers might bring value when a company has several ML
implementations under way that require orchestration and integration
with legacy systems.
Texas-based Mobiliya Inc. has roots in wireless, cybersecurity, and
silicon, and is a traditional engineering outsourcing partner to companies
such as Microsoft, Google, HP, Samsung, and Nvidia. In 2017, the
company built a DL team to offer customized solutions to traditional
business customers.
Toronto-based DeapLearning works with enterprises to accelerate their
ML deployment. The company worked with Scotiabank to develop a system
that analyzes credit card customers’ behavior to improve collections. The
company also partnered with insurance software developer
Symbility, which is working to introduce ML into its tech stack.26
CHAPTER FIFTEEN
Introducing AI into a
Traditional Business
AI Misconceptions
Many traditional companies seem to have a number of misconceptions
around AI and its implications. Some of these are due to lack of
knowledge about ML technologies, especially in procurement and other
business functions, which then results in poor planning and execution of
pilots. Others have to do with a lack of support at the highest levels of
the company—the C-suite and the board of directors, which often lack
individuals who understand data, data governance, and IT-infrastructure
readiness.
Many companies would benefit from greater knowledge of and insight into
these topics within their business development teams, which often act as
mediators between technology, product, procurement, legal, and sales and
marketing. These teams require top talent with strategic and technical
savvy.
Corporations’ inability to work transversally across typical
organizational silos is a common and huge bottleneck. Layered on top of
these issues is the fact that some cognitive and ML businesses contribute
to the confusion with their sometimes pushy marketing, often promising
more than they can deliver.
TechEmergence asked 30 AI researchers, “What do you believe to be
the biggest misconception that executives and businesspeople have in
applying ML to business opportunities?”4 Below are some of the results of
this study, complemented by further research from TechEmergence, and
our own findings from extensive interviews with industry leaders. We note
that while they used the term “AI” they really meant ML.
Corporate Acceptance
The “bandwagon effect,” according to Investopedia, is a
“psychological phenomenon in which people do something primarily
because other people are doing it, regardless of their own beliefs, which
they may ignore or override.” AI, according to our interviewed experts,
has created a bandwagon effect in the digital space.
At the same time, and according to some of the executives we
interviewed, there is a myriad of “hoops to jump through” regarding
corporate acceptance for ML and DL solutions. If the executive leadership
team is not ready to redesign business models and end-to-end processes
across the whole organization, a company may never benefit from the full
potential of AI. Scaling ML and embedding it into daily work across
functions and teams should be considered a natural next step in the
evolution of work and business—as natural as forecasting revenue
and profit targets.
AI is poised to transform the technology buying process, creating a
more complex buying process and sales environment. Acceptance of AI is
seriously jeopardized when executives fail to explain its benefits to
employees. Instead of replacing people, AI will augment their jobs and
create new ones. Repetitive tasks will be eliminated and new tasks will
arise that require good human judgment and domain expertise. For
example, fraud detection applications will reduce the time people spend
looking for anomalies yet increase their ability to decide what to do about
deviations.
Likewise, an ML application will allow financial analysts to spend less
time extracting data on financial performance. It will, however, only be
useful if an employee is skilled enough to consider the implications of that
performance. Augmented with ML applications, customer service
employees don’t need to spend a lot of time with routine problems.10 They
can use the time instead to better understand customer requests and needs
and design a better customer experience.
Companies that view AI purely as a cost-cutting opportunity are likely
to deploy ML in all the wrong places, and in a compromised way. These
companies will automate the status quo instead of imagining a better
world. They will cut jobs instead of upgrading roles.11 We wonder how
many traditional businesses will raise human resources to the strategic
prominence required to rethink jobs, processes, training, and education,
and the overall organizational talent roadmap that is so critical and
necessary to realizing the full potential of AI.
AI in Different Sectors—
From Cars to Military
Robots
We are still far from solving AGI, but there is much promise and many
developments in the field of narrow AI, from predicting a company’s
earnings to identifying a tumor in a CT scan. In Part 4, we provide an
overview of some of the key industry- and sector-specific AI technologies
being implemented from the automotive to military sectors.
CHAPTER SIXTEEN
General Trends
Several trends account for the realization of self-driving:1
• Technology: Computer vision has finally become good enough to distinguish objects on the
road and build 3D maps, supported by processors powerful enough to operate within a car.
Vision and radar technology now enable pre-collision systems that allow cars to brake
autonomously when a collision risk is detected.
• Ride-sharing: Self-driving technology is still expensive due to its lack of scale. The progress of
ride-sharing companies like Uber or Lyft can speed up if the capital cost is amortized over
many drivers. On-demand transportation has “emerged as another pivotal application of
sensing, connectivity, and AI with algorithms for matching drivers to passengers by location
and suitability (reputation modeling).”2 Partnerships between ride-sharing and autonomous
driving companies (e.g., Lyft and DriveAI) will speed up adoption.
• Electric vehicles: Virtually every fully autonomous car today is planned on an electric
vehicle platform. The reason is simple: operating costs are dramatically lower, and
municipalities support electric vehicles.
• “Free entrepreneur cash”: Internet pioneers and entrepreneurs have created a cash-surplus
environment that has been deployed in new business fields, like autonomous driving.
• Ecosystem partnership growth: Chipset manufacturers, automakers, suppliers, data providers,
fleet managers, and platform developers have been testing simultaneously competitive and
cooperative models in this new ecosystem, a key to successful automotive competition.
The Enablers
Data
Traditional auto companies believe there are five components to
building self-driving cars:
1. large-scale manufacturing capabilities;
2. a virtual driver platform that routes a travel plan;
3. autonomous technology that allows the car to drive;
4. a team to safely redesign cars to accommodate self-driving technology; and
5. a way to manage vehicle servicing.
Interestingly enough, these companies don’t talk about the need for data to
understand traffic, routes, and human behavior.4 Yet data are critical to
everything involving driving, from traffic patterns and changing lanes to
stopping for a police officer.
An average car moves only 10 percent of the time. The rest of the time,
it is parked somewhere. Traditional car companies currently don’t know
enough about their customers. The power of Uber, Lyft, and similar
platforms is in their data and their understanding of the movement and
patterns of cars and people that are derived from these data.
According to Forbes, data generated by a vehicle can be used to
optimize the product and ensure better safety, design, and training
programs and to make software updates, etc. This approach is commonly
known as cognitive predictive maintenance, and companies such as
DataRPM have emerged to create AI-driven platforms that make it
possible. “Cognitive predictive maintenance provides exactly what
manufacturers are looking for: actionable insights,” said Sundeep
Sanghavi, DataRPM co-founder and CEO. “By harnessing the powerhouse
of information generated by IoT, manufacturers can develop deep insights
into results that were never possible earlier with human intervention.”5
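The predictive-maintenance idea can be illustrated with a minimal sketch (the sensor values, window size, and thresholds below are hypothetical, not DataRPM's actual platform): flag readings that deviate sharply from a component's recent baseline, so the part can be inspected before it fails.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, z_threshold=3.0):
    """Flag sensor readings that deviate strongly from the trailing baseline.

    A reading is anomalous if it lies more than `z_threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical vibration readings from a vehicle component; the spike
# at index 12 is the kind of deviation that warrants early inspection.
vibration = [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 0.9, 1.1, 1.0, 1.0, 1.1, 0.9, 5.0, 1.0]
print(flag_anomalies(vibration))  # [12]
```

A production system would learn such thresholds from fleet-wide IoT data rather than fixing them by hand, but the principle is the same: turn a stream of raw readings into an actionable alert.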
AI-enabled features in vehicles tied to biometrics can also help with
security, by enabling a car to drive only when it recognizes a certain voice.
Eye tracking is another possibility for AI to help with monitoring driving
and sense whether a driver might be tired or distracted.
No one likes traffic or unexpected road delays, so having a car that can
instantly provide new routes to avoid this irritation makes AI even more
attractive. Autonomous driving must adjust to traffic and the social norms
of any location. In this context, the question is whether the same self-
driving vehicle can perform at its best in San Francisco, Naples, or
Mumbai. Safe and reliable over-the-air software updates that deliver
local adjustments may be one way to optimize driving in multiple locations.
An understanding of human social norms is critical to building self-
driving technology. For example, if an autonomous car navigates at a low
speed in a residential area, it needs to understand when to stop and when it
is safe to continue moving. Vehicles circulating in crowded locations have
to react to unwritten norms of human behavior. This requires an
understanding of how public resources are shared (e.g., sidewalks); rules on where
to make a turn; and signals people give each other to coordinate
movements, for example in crossing a road.
Stanford researchers have created Jackrabbot, a prototype of just such
a self-navigating machine. It is equipped with sensors to screen its
surroundings. Researchers hope that Jackrabbot will collect and analyze
enough data to follow human etiquette. These insights will be used in
designing next-generation robots.
Today, this machine is expensive to implement at scale. However,
within the next five years “social robots like this could be as cheap as
$500, making it possible for companies to release them to the mass
market.”6 Autonomous vehicles and the connected transportation
infrastructure will of course also give hackers a new venue for
exploiting vulnerabilities. Lastly, there is new opportunity: new data
services are emerging around connected vehicles. The Berlin-based
company AVA, for example, is focused on location safety data.
Electrification
The price of lithium batteries has dropped dramatically over the past
decades. Batteries that cost US$150,000 five years ago now cost
US$6,000, with further reductions in sight. Companies that have invested
in electric car development are also well positioned to compete in self-
driving. The U.S., China, Japan, South Korea, and Germany are leading
this effort.
If cars eventually have intelligent charging and vehicle to grid (V2G)
and vehicle to home (V2H) capabilities, cars themselves could be used as
energy resources. The grid could tap into this energy when it is needed.
Electricity would be drawn from stationary vehicles. The combined
model that Elon Musk is developing, including solar/renewables, batteries,
and Tesla, makes a lot of sense in the longer term.
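As a rough illustration of the V2G idea, a dispatch rule might decide, per parked and grid-connected vehicle, whether to charge, discharge, or idle. The prices, thresholds, and function below are invented for this sketch, not any utility's actual logic.

```python
def v2g_action(grid_price, battery_level, reserve=0.4, price_high=0.30, price_low=0.10):
    """Decide whether a parked, grid-connected car should charge or discharge.

    All thresholds are illustrative. The car sells energy back to the grid
    when prices are high and it holds more charge than its driving reserve,
    and charges when prices are low and the battery has room.
    """
    if grid_price >= price_high and battery_level > reserve:
        return "discharge"   # feed stored energy back to the grid (V2G)
    if grid_price <= price_low and battery_level < 1.0:
        return "charge"      # soak up cheap (e.g., off-peak renewable) power
    return "idle"

print(v2g_action(grid_price=0.35, battery_level=0.8))  # discharge
print(v2g_action(grid_price=0.08, battery_level=0.5))  # charge
```

Aggregated across millions of parked cars, even a simple rule like this turns a fleet into a distributed energy resource the grid can tap on demand.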
Supplier Ecosystem
PwC predicts that electronics will account for 50 percent of automobile
manufacturing costs by 2030, up from one-third today.8 Traditional parts
manufacturers will face competition from high-tech players, like chipset
manufacturers. As cars get more connected, traditional after-sales services
will need to change their business models. For example, Zubie, a startup,
“offers real-time diagnostics to owners of connected cars, allowing people
to know what’s wrong with their engine before they bring their car in for
inspection.”9
Some large conglomerates are entering the automotive market. For
example, Samsung is planning to become a serious player in three
categories: car electrification, infotainment, and autonomous driving. The
company already supplies electric car batteries to Tesla and other
automakers. On the infotainment side, Samsung acquired Harman, which
has 15 percent of the market. The company is working on the advanced
driver assistance system (ADAS) in partnership with other companies. The
system will have technology to remotely update software made by
Harman. In autonomous driving, Samsung is investing in lidar companies
such as Quanergy and TetraVue.10 In September 2017, Samsung
committed US$300 million to a captive investment fund to develop a self-
driving car.11
Mobileye, an Intel company, has about 80 percent of the market for
camera-based driver assistance systems, which rely on Mobileye
microchips and its image recognition algorithms. Mobileye currently
partners with BMW and Delphi Automotive in developing autonomous
driving technology. Other traditional suppliers with camera-based, semi-
autonomous systems include Bosch, Autoliv, and Continental.
Nvidia is also focused on cars. In summer 2017, the company revealed a
partnership with Toyota. The plan calls for supplying the auto giant with
Nvidia’s Drive PX computing system and working with Toyota engineers
to create autonomous driving software using Nvidia’s AI platform. That
follows other alliances Nvidia has forged with Volvo, Tesla, Audi,
Mercedes-Benz, Germany’s ZF, and Bosch. Given the vast scale of
Toyota, which annually sells more than 10 million cars worldwide, a
successful partnership could make it Nvidia’s most important yet. Nvidia is
also funding a self-driving company, TuSimple, in China.12
The Impact
Safety
The most obvious impact of autonomous vehicles is expected to be
accident avoidance. Nearly 37,000 U.S. citizens die each year in car
accidents, and nearly 1.3 million die globally. For every death in the U.S.,
there are over 100 people treated in emergency rooms. In 2012, the costs
of these treatments were estimated at US$33 billion annually. Autonomous
driving will reduce healthcare costs, and traffic congestion due to
accidents will drop significantly.13
There are already a number of AI safety features linked to automatic
braking, alert systems, and collision avoidance systems. Connectivity
systems installed in vehicles enable auto insurance companies to determine
premiums. In time, when car sharing becomes more widespread, such
pricing models can be leveraged on an individual vehicle basis and
mirrored against a fleet.
Mobility-as-a-Service
Mobility-as-a-Service (MaaS) companies like Lyft, Uber, Didi, Grab,
and Ola share a market valued at roughly US$109 billion today. ARK
thinks that when accounting for the potential cash flow from autonomous
taxi services, the current market should be valued at between US$600
billion and US$3 trillion, depending on an investor’s time horizon. Of
course, companies such as Didi and Uber may not accrue the benefits of
this opportunity. Alphabet has announced plans to partner with original
equipment manufacturers (OEMs); both nuTonomy and Delphi have
launched pilots in Singapore; and several OEMs, notably Tesla, VW,
BMW, and Toyota, have detailed plans for both autonomous vehicles and
shared autonomous services.16
China will become one of the largest markets for autonomous
mobility-as-a-service, reaching US$2.5 trillion by 2030. Contenders to
benefit from this market are:
• Baidu (partnered with BAIC Motor and Nvidia);
• Chongqing Changan, the first OEM in China to complete a 1,200-mile autonomous road trip;
• Volvo/Geely, which launched a China autonomous test driving pilot in 2017; and
• GM, through its joint venture with SAIC Motor and investments in Lyft, together with
ridesharing service Didi.
Urban Planning
Over the past decade, major cities have begun to invest in making
infrastructure smart, while adding sensing capabilities for vehicle and
pedestrian traffic. In 2013, New York started testing a mesh network of
microwave sensors, cameras, and pass readers to profile traffic. Cities
already use ML to build bus and subway schedules, supervise traffic
conditions for dynamic adjustment of speed limits, and implement smart
pricing in public parking spots. However, there are as yet no standards that
are broadly applicable.18
In his article “A New Approach to Designing Smart Cities,” David
Galbraith notes that today’s planners create scenarios for how cities can
emerge organically rather than drawing blueprints that specify exactly how
this should happen. Connected technologies can change this. By merging
social science with architecture, planning becomes something entirely
new. This is about social science becoming more like economics,
generating insights and applying findings to serve urban needs.19
There are about 42 billion square feet in the U.S. dedicated to parking
and an estimated 2 billion parking spaces. There is also metered parking
on most city streets.20 Such space will be re-purposed, and even home
garages might disappear, if citizens move from owning a car to using fleet,
car-share, and other mobility services.
Alphabet plans to build a city from the ground up. Dan Doctoroff, the
head of Sidewalk Labs, an Alphabet subsidiary, is in charge of the project.
The city should emerge in an area of “sufficient size and scale that it can
be a laboratory for innovation on an integrated basis.” This plan is focused
not just on self-driving technology; new approaches to data policies will be
tested as well.21
Interior Redesign
Since humans will no longer need to focus on driving, their traveling
time can be used for other purposes. This shift requires a complete
rethinking of car interiors. BMW addressed this issue at its TED event at
the International Automotive Show in Frankfurt in 2017. Mobile hotel
rooms, entertainment centers, and even hospitals are possible. Companies like
Panasonic are one step ahead, working on an “autonomous cabin” concept.
They believe that autonomous vehicles will have applications such as
augmented reality and work and entertainment screens, making the interior
of a car more like a living room. Large networks such as Facebook will
focus on how to monetize such interiors.22
Employment
Autonomous vehicles will undoubtedly bring changes to the job
market. In the U.S. in 2015, about 3.8 million people drove taxis, trucks,
ambulances, and other vehicles as their first source of income, and 11.7
million workers drove as part of their job. In all, these numbers represent
approximately 11.3 percent of total U.S. employment. According to The
Wall Street Journal, self-driving cars could transform jobs held by one in
nine U.S. workers.23 Among additional impacts will be the following:
• Police: 42 percent of all police encounters are traffic related, according to the U.S. Bureau of
Justice Statistics (2011). And US$50 million in revenue is generated from parking meters in
the city of San Francisco alone. These funds will disappear with autonomous vehicles.24
• Truck drivers: Truck drivers will be among the first to lose their jobs to automation. Walmart
may be the first company to deploy this technological change, adopting it en masse, according
to Kristin Sharp, the executive director of the New America Foundation and Bloomberg’s Shift
Commission on the Future of Work, Workers and Technology. The Teamsters labor union,
representing almost 100,000 truckers, has pushed the importance of keeping human drivers for
safety reasons.25 There are 3.5 million truck drivers in the U.S. today.
• Employees in traditional auto companies: As the transportation industry becomes more data
driven and digital, its human resources profile will also shift. The current work of most full-
time employees will be automated, with potentially tremendous negative social impacts.
However, new skills that are scarce today—such as data analysis, software development, and
drone programming—will be in high demand, thus underlining the great importance of skills
retraining and restructuring in the labor force.
• Thomas Frey, executive director of the DaVinci Institute think tank, has said that driverless
cars could eliminate jobs in up to 128 industries in the next two decades, including in
agriculture, construction, and public service.26
• As people buy fewer cars, insurance and financing jobs could see a reduction in employment
too. Repair shops could also be affected on a large scale.
The Timeline
IHS Automotive has estimated that unit shipments of AI-based systems
in new vehicles will be 122 million by 2025, as compared to 7 million
today.27 Self-driving cars are being introduced in two ways today. There is
the evolutionary path, in which cars implement self-driving features slowly
but surely. For example, Tesla has already adopted an autopilot feature.
Companies like Peloton and NXP are “platooning” trucks—tying them
together through a vehicle-to-vehicle communication system. A “pilot”
truck drives while those behind can sit and relax.
But there is a revolutionary path as well, like the one Google has been
working on since 2009. Google expects its test vehicles to become
mainstreamed because they will be able to drive in most places. Google is
aiming at embedding insurance into the price of a vehicle starting in 2018.
Ford is testing self-driving cars as well. We believe that both the
evolutionary and revolutionary paths will eventually converge.
Today, it takes approximately 15 years for an auto fleet to turn over.
Experts agree that around 2030, OEMs will only produce fully
autonomous vehicles. By then, 25 percent of all cars will be self-driving.28
Until about 2045, there will be a mix of manual, semi-autonomous, and
fully self-driving cars.
Regulators, insurance companies, traditional car manufacturers,
parking lot owners, and urban planners will need to be fully prepared long
before 2030. Safety standards, accident liability, traffic sign planning, and
infrastructure issues need to be clarified by then, along with
communication connectivity standards.
Traditional car company executives should beware of the risks related
to this new wave of innovation and digitization. Systems must be secured
from cyberattacks, and AI systems should be adequately tested. Such
leaders need to rethink their business models, as their customers will be
better served if deep analytics are part of their offerings. Linking
transportation and logistics with a customer’s value chain, and seamless
product delivery—using data, analytics, and possibly blockchains—may
become the basis for new revenue models and business synergies. For an
industry currently valued at around US$1 trillion, the opportunities for
growth in the era of cognitive technologies are endless. But there will also
be fierce competition from every side.
Tesla
Tesla has 1.3 billion miles of data from its autopilot-equipped vehicles
operating under diverse road and weather conditions around the world.
With over 200,000 Tesla cars in operation in 2017 and sales and
production ramping up, the mileage will grow quickly. Tesla pushes
software updates over the air, and new features often emerge from the
company’s treasure trove of data.
It was originally planned that Tesla cars would feed their autopilot
algorithms with data generated by radar and ultrasonic technologies.
Practitioners believe that a combination of radar and camera can
outperform lidar, especially in adverse weather conditions such as rain
(with headlights reflecting off wet surfaces), snow, and fog.
Tesla creates detailed 3D maps of the locations its vehicles travel. As more vehicles
with Autopilot 2.0 and the “Vision” system are purchased, better mapping
with increasingly finer resolution is possible.
In June 2017, Elon Musk hired Andrej Karpathy as AI director at
Tesla. Karpathy came from OpenAI, where he was a renowned researcher and
expert on reinforcement learning and computer vision; he studied at
Stanford with Fei-Fei Li and interned with DeepMind. In his blog about reinforcement learning,
Karpathy talks about RL in the context of an autopilot, and “notes that
while reinforcement learning does not generally scale well to situations
where experimentation is costly, new approaches, combined with lots of
real-world data (of the kind Tesla is collecting) may help.”31 Tesla is also
working with AMD to develop its own silicon. This demonstrates that the
company is getting closer to becoming a full-stack AI competitor, with
decreased reliance on other companies. Jim Keller is in charge of the
project. He previously worked at Apple, where he designed the A4 and A5
iPhone chips. Currently, Tesla uses Nvidia GPUs as part of the autopilot
self-driving hardware.32 In after-sales, Elon Musk announced that Tesla in
Japan had started to sell vehicles for a single price, including insurance
and maintenance.
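Karpathy's caveat about reinforcement learning's appetite for experimentation can be seen even in a toy example. The sketch below is a generic Q-learning loop on an invented five-state corridor, not Tesla's software: the agent needs hundreds of trial episodes to learn a trivially simple policy, and on real roads each trial would be costly, which is why large volumes of logged real-world data matter.

```python
import random

# Toy corridor: states 0..4 in a line; reaching state 4 yields reward 1.
# Actions are step left (-1) or step right (+1), clipped at the edges.
random.seed(0)
N_STATES, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                     # hundreds of trial episodes...
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.choice((-1, 1)) if random.random() < epsilon \
            else max((-1, 1), key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2

# ...to learn the obvious policy: move right from every state.
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

Every episode here is free; an autonomous car cannot "reset" a real intersection 500 times, so approaches that learn from recorded fleet data are far more practical.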
Figure 16.1 ESG Reputation Risk Example: Tesla (Source: RepRisk® – www.reprisk.com.)
Waymo (Alphabet/Google)
Waymo, Alphabet’s self-driving subsidiary, doesn’t manufacture cars.
Instead, it aims to reach consumers and businesses through partnerships
with car OEMs. In 2015, Alphabet hired John Krafcik, a former Hyundai
North America executive, as Waymo’s first CEO. In September 2017,
Krafcik gave several interviews to charm OEMs into partnering on
autonomous ride-hailing services. Time will tell how successful this charm
offensive was. Automotive OEMs are used to acquiring companies, not
partnering with them.
A 2017 report from the California DMV showed that Alphabet’s
Waymo was leading the industry in fully autonomous miles driven on
public roads. Waymo’s technology has arguably become the safest in the field. This
progress is due to Waymo’s ability to harvest data in almost real time.34
The Waymo cars depend on a lidar system, and in 2017, Waymo and
Uber became engaged in litigation when Waymo accused Uber of a
“calculated theft” of its trade secrets, alleging that former Google
employees brought the proprietary lidar technology to Uber.
According to Business Insider, Waymo is now looking to gather data
on human interaction with self-driving cars. To achieve this, the company
created a program in Phoenix, Arizona. Waymo outfitted 100 Fiat Chrysler
Pacificas with its technology, with plans to make 500 more this year.
Phoenix residents will be able to use the service for free, and each car will
have an engineer aboard in case the car requires human intervention. The
program’s goal is to gather data on human reactions to self-driving cars,
from motion sickness to route optimization.35 In addition, Waymo recently
announced a partnership with ride-sharing company Lyft. The
partnership’s goal is to bring self-driving car technology into the
mainstream through joint projects and product development efforts. Some
months later, the parent company Alphabet started talks about investing
US$1 billion in Lyft. In 2013, Google Ventures invested US$258 million
in Uber.
It appears that part of Waymo’s plan is to offer vehicles with integrated
insurance, following the Tesla business model.
Uber
Uber is a fierce competitor in self-driving technology. Its strategic
edge is in data involving types of vehicles; routes; number of driving
hours; distribution of traffic throughout a day, month, and year; and
passengers’ and drivers’ profiles. Through these data, Uber is able to
understand the most common patterns needed to develop autonomous
technology. A late entrant into self-driving technology, Uber uses
M&A to bridge the competence gap.
In 2016, Uber acquired Otto, Anthony Levandowski’s self-driving
truck outfit, for US$700 million. Levandowski has played a central role as
a pioneer of self-driving technology in the U.S. In February 2017, Waymo
sued Uber for conspiring with its former executive Levandowski to steal
Waymo IP in an effort to benefit Uber’s autonomous driving program. In
February 2018, the parties abruptly settled the lawsuit with a $245 million
payment by Uber to Waymo.
In spite of the notorious lawsuit, Uber is still hiring actively for its
Advanced Technologies Group, or ATG, which is focused on autonomous
vehicles. The unit comprises 6 percent of Uber’s total job listings,
according to CB Insights.36 Even before the Levandowski scandal, Uber
hired a number of researchers from Carnegie Mellon and launched an AI
lab in Canada headed by University of Toronto AI researcher Raquel
Urtasun.
Environmental
• Society needs convincing that autonomous cars will make “ethical” decisions under duress
• Proliferation of electric cars = proliferation of lithium batteries, raising environmental concerns
• Improved traffic patterns will lead to improved driving efficiencies and lower emissions
Social
• Autonomous vehicles must incorporate human social norms; AI decision-making needs to improve on the human alternative
• Autonomous vehicles must adapt to cultural and regulatory norms in every location
• Autonomous vehicles will contribute to a substantial decline in driver employment
• New jobs will emerge for AI trainers to provide human context for autonomous-driving algorithms
Governance
• Data privacy issues proliferate
AI in Health Care
Patient Consultation
Companies working on AI patient consultation applications are not
aiming to replace doctors with AI. Instead, they are partnering with
practicing physicians. Current regulatory regimes around the world will
prevent AI from making official diagnoses. But the systems are designed
to guide patients and suggest whether the symptoms warrant a trip to the
doctor or hospital, and will help doctors get additional information for
their diagnoses. Dr. Clare Aitchison said in an interview with MIT
Technology Review, “While it’s true that computer recall is always going
to be better than that of even the best doctor, what computers can’t do is
communicate with people. . . . People describe symptoms in very different
ways depending on their personalities.”16 These kinds of applications are
likely to become increasingly accepted in everyday life.
Below are several companies that are developing AI-enabled programs
in this sector:
• NextIT’s product, a digital health coach, is similar to a service like Alexa. The assistant can
remind a patient to take medicine and ask about symptoms, and can convey that information to
the doctor.
• California-based Sense.ly developed Molly, a virtual nurse, to help patients with chronic
conditions between doctor visits. Sense.ly claims this technology gives doctors 20 percent of
their time back.
• New York–based AiCure wants to ensure that patients are taking their medications. Their
application is supported by The National Institutes of Health. It uses a smartphone’s webcam
and AI with facial recognition capabilities to autonomously confirm that patients are sticking
to their prescriptions.
• Sentrian analyzes biosensor data and sends individualized patient alerts to doctors.
• UK-based subscription company Babylon provides ML consultation based on personal
medical history and common medical knowledge: “Consumers report the symptoms to the app,
which checks them against a database of diseases using speech recognition. After taking into
account the patient’s history and circumstances, Babylon offers an appropriate course of
action.”17
• Your.MD claims it has already built the largest medical map linking probabilities between
symptoms and conditions. Its chatbot uses ML algorithms and natural language processing to
understand and engage its users. The application comes pre-installed on all Samsung Galaxy
phones.18
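A symptom-to-condition probability map of the kind Babylon and Your.MD describe can be sketched as a naive Bayesian scorer. All conditions, symptoms, and numbers below are invented for illustration and have no clinical validity:

```python
# Hypothetical prior and likelihood tables -- invented numbers, not medical data.
priors = {"cold": 0.6, "flu": 0.3, "strep": 0.1}
likelihoods = {   # P(symptom | condition)
    "cold":  {"cough": 0.7, "fever": 0.2, "sore_throat": 0.4},
    "flu":   {"cough": 0.6, "fever": 0.9, "sore_throat": 0.3},
    "strep": {"cough": 0.1, "fever": 0.6, "sore_throat": 0.9},
}

def rank_conditions(symptoms):
    """Rank conditions by unnormalized posterior: prior * product of likelihoods."""
    scores = {}
    for condition, prior in priors.items():
        score = prior
        for s in symptoms:
            score *= likelihoods[condition].get(s, 0.01)  # small default for unseen symptoms
        scores[condition] = score
    return sorted(scores, key=scores.get, reverse=True)

print(rank_conditions(["fever", "cough"]))  # flu ranks first for this input
```

Real systems layer patient history, speech recognition, and far larger condition graphs on top of this idea, but the core ranking step is a probability computation of exactly this shape.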
Healthcare Robotics
In 2000, Intuitive Surgical20 launched a technology called da Vinci to
support minimally invasive heart bypass surgery. This system today is in
its fourth generation, providing 3D visualization and wristed instruments
in an ergonomic platform. As a writer for IEEE Spectrum notes, “It is
considered the standard of care in multiple laparoscopic procedures, and
used in nearly three quarters of a million procedures a year, providing not
only a physical platform, but also a new data platform for studying the
process of surgery.”21 The success of the platform triggered competition in
robotic surgery, as the Alphabet spin-off Verb, in collaboration with
J&J/Ethicon, proves.
According to Engadget, researchers from Harvard University and
Boston Children’s Hospital have developed a “soft robot” to improve the
survival rate of heart attack patients. This device is wrapped externally
around the heart instead of being inserted into heart valves to assist
cardiovascular function. It has been successfully tested on pigs but, of
course, successful animal tests do not guarantee the device will be used or
used successfully on humans. However, this type of “mechanotherapy”
holds plenty of therapeutic promise, and not just for the heart.22
Hospital Operations
Intelligent automation in hospital operations would seem to be a no-
brainer at first sight. But in fact, it has not been successful. HelpMate
developed a robot for hospital deliveries, such as medical records and
food, but without success. Another example is Aethon,23 which introduced
TUG Robots for basic deliveries. Only a few medical centers have
invested in this technology. This caution does not mirror what is
happening in other service industries such as hotels and warehouses, where
robots have already demonstrated that they can increase the efficiency and
effectiveness of operations (e.g., Amazon Robotics, formerly Kiva). There
may be a different explanation for the lack of adoption of these AI and
robotic solutions, having more to do with labor protections for unionized
nurses and staff.
IBM
IBM has two healthcare-related products still in research and
development. Privacy concerns in this field are significant, and IBM has not
yet figured out how to address them with Watson. In countries with less sophisticated
privacy rules, Watson is being used more successfully. Hospitals in China,
India, South Korea, and Slovakia use Watson for Oncology to obtain a
second opinion on treatment options virtually. In the United States, this
practice has been implemented at the Jupiter Medical Center in Florida.
According to Gizmodo, “Using technology to help doctors treat cancer is a
noble effort [and] AI can have a tremendous impact on health care . . .
[b]ut the technology doesn’t seem to be advanced enough to have a
transformational impact just yet. And Watson technology isn’t especially
unique. But IBM’s buzzy marketing is trying to make us believe it is leaps
and bounds ahead of anything else.”24 Former employees who worked as
design researchers at Watson for Oncology felt uncomfortable with how
commercials portrayed the platform, believing that confusing patients
about the platform’s capabilities might be unethical.
The second IBM healthcare platform is Watson Genomics, which
processes massive amounts of genomic sequencing data from patients’
tumors. According to researchers, it is still unknown how good Watson’s
results are. To increase confidence in Watson, IBM would need access to top
research institutions to recruit thousands of patients, generate real test
results, and publish outcomes in respected industry publications, as medical
research and practice rely on them. Experts would also need to
study how physicians’ treatments changed after receiving and
implementing Watson’s recommendations, including comparing basic
statistics, e.g., patient survival rates.
In 2016, The New York Times noted that “[a]t the University of Texas
MD Anderson Cancer Center in Houston, Watson technology was one
ingredient in an automated expert adviser for cancer care.”25 The
partnership, however, is falling apart. According to Forbes, MD Anderson
is actively requesting bids from other contractors to replace IBM. It has
already started working with some of IBM’s smaller competitors, such as
Cognitive Scale. Auditors at the University of Texas say the project cost
MD Anderson over US$62 million (US$21.2 million of which was paid to
PwC, which was hired to create a business plan around the product) but
did not meet its goals.
IBM has not yet published any scientific papers showing how Watson
affects doctors and patients. Watson for Oncology was in development for
six years and is still considered to be in its infancy even by IBM insiders.
The company’s sales representatives talk about “new approaches” to
cancer care, but specialists believe the system does not create new
knowledge.26
Google
Google’s DeepMind division, DeepMind Health, works with multiple
healthcare organizations, including the UK National Health Service and
the Royal Free Hospital London. DeepMind built Streams in collaboration
with the hospital to support doctors and nurses in diagnosing acute kidney
injury.
Microsoft
In 2017, Microsoft launched Healthcare NExT to “combine work from
existing industry players and Microsoft’s Research & AI units to help
doctors reduce data entry tasks, triage sick patients more efficiently and
ease outpatient care.”27 Their ultimate aim is to replace manual data entry
by doctors.
Microsoft’s HealthVault Insights project works with fitness bands,
Bluetooth scales, and other connected devices to make sure patients stick
to their care plans when they leave the hospital or their doctor’s office.
In September 2017, Microsoft launched a new healthcare division at
their Cambridge research facility to focus on personal health information
systems, health monitoring plans, diseases such as diabetes, and AI to
target interventions. Ian Buchan, formerly a clinical professor in public
health informatics at the University of Manchester, is heading this unit.28
Chatbots
In previous chapters, we discussed how natural language processing
and sentiment analysis form the basis for virtual assistants. There are several
companies developing this form of AI product in the insurance field,
including:
• Cognicor, which offers an intelligent customer care service assistant that can be addressed in a
human-like conversational interface. It answers questions, resolves complaints, and suggests
tailored products;
• Conversica, a virtual sales assistant that leverages AI to automate the lead conversation;
• Babylon, which provides virtual consultation to offer affordable health care; and
• Telematics, the long-distance sharing of information from vehicles and other devices, which is
expected to have a significant impact on the insurance industry. Octo Telematics, a company in
this sector, provides telematics for auto insurance. Carriers are already offering black-box
tariffs, giving discounts based on the frequency and times of driving as well as other customized factors.
Financial Trading
Traditional hedge funds such as Bridgewater Associates, Point72, and
Renaissance Technologies have been optimizing their IT for a long time.
Some companies are now using ML not just to optimize operations, but to
generate investment ideas. Still, few financial companies are betting on AI
at the core of their trading. There is little performance data available. For
this reason, many funds are expressing caution with respect to new AI and
blockchain trading companies.2
Sentient Technologies is backed by investors including Li Ka-shing,
Hong Kong’s richest man, and the Tata Group, India’s biggest
conglomerate; together, its backers have given the company US$143
million. The company is preparing to raise capital again in order to
diversify its product portfolio.
In trading, Sentient’s network of computers algorithmically creates what are
essentially trillions of virtual traders that it calls “genes.” These genes are
tested by giving them hypothetical sums of money to trade in simulated
situations created from historical data. The genes that are unsuccessful die
off, while those that make money are spliced together with others to create
the next generation. Thanks to increases in computing power, Sentient can
squeeze 1,800 simulated trading days into a few minutes. Evolving an
acceptable trading gene takes a few days, after which it is used in live
trading. Employees set goals, such as the returns to achieve, risk level, and
time horizon, and then
let the machines go to work. The AI system evolves autonomously as it
gains more experience. Sentient typically owns a wide-ranging batch of
U.S. stocks, trading hundreds of times per day and holding positions for
days or weeks.3
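The evolutionary loop described above can be sketched in a few lines. This is a minimal illustration, not Sentient’s actual system: the gene encoding (two threshold parameters), the synthetic price series, and the population sizes are all invented for the example.

```python
import random

random.seed(42)

def make_gene():
    # A "gene" here is a toy strategy: buy when the price dips below a
    # fraction of its recent average, sell when it rises above another.
    return {"buy_below": random.uniform(0.90, 1.0),
            "sell_above": random.uniform(1.0, 1.10)}

def fitness(gene, prices, cash=1000.0):
    # Trade a hypothetical sum against historical (here: synthetic) prices.
    shares = 0.0
    for i in range(5, len(prices)):
        avg = sum(prices[i - 5:i]) / 5
        if shares == 0 and prices[i] < gene["buy_below"] * avg:
            shares, cash = cash / prices[i], 0.0
        elif shares > 0 and prices[i] > gene["sell_above"] * avg:
            cash, shares = shares * prices[i], 0.0
    return cash + shares * prices[-1]   # final portfolio value

def splice(a, b):
    # Crossover: combine parameters from two successful genes.
    return {k: random.choice([a[k], b[k]]) for k in a}

# Simulated market built from a random walk (stands in for historical data).
prices = [100.0]
for _ in range(500):
    prices.append(max(1.0, prices[-1] + random.gauss(0, 1)))

population = [make_gene() for _ in range(50)]
for generation in range(20):
    scored = sorted(population, key=lambda g: fitness(g, prices), reverse=True)
    survivors = scored[:10]                          # unsuccessful genes die off
    population = survivors + [splice(random.choice(survivors),
                                     random.choice(survivors))
                              for _ in range(40)]    # spliced next generation

best = max(population, key=lambda g: fitness(g, prices))
print(round(fitness(best, prices), 2))
```

In a production system of this kind, the simulated market would be built from real historical data and the evaluation distributed across many machines; the selection-and-splicing structure, however, is the core of the approach the text describes.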
The company is considering spinning off the fund business into a separate
entity.
Numerai, founded by Richard Craib in San Francisco, builds financial
trading models by assembling the best algorithms from a community of
data scientists. The company encrypts its trading data before sharing it
with the data scientists to prevent the scientists from repeating the fund
trade strategies themselves. At the same time, Numerai organizes this
information in a way that allows the data scientists to build models leading
to even better trades. The crowd-sourced approach seems to be working
despite the obvious incentive problem. Participants in a trading market are
usually adversaries. Numerai proposes a novel solution where traders back
confidence in their algorithms using stakes in a new cryptocurrency.
Numerai has distributed Numeraire—1,000,000 tokens in all—to 12,000
participating scientists. High-performing algorithms lead to payouts, while
poor performers lose their stake. The system encourages data scientists to
build models that work on live trades, not just test data.
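The staking incentive can be illustrated with a toy settlement rule. The function, token amounts, and performance threshold below are invented for the example; they are not Numerai’s actual parameters or payout formula.

```python
def settle_stakes(submissions, threshold=0.0):
    """Toy settlement rule: a data scientist stakes tokens on a model.
    Stakes on models that perform on live trades earn a payout, while
    stakes on underperformers are destroyed ("burned")."""
    results = {}
    for name, (stake, live_score) in submissions.items():
        if live_score > threshold:
            results[name] = stake * (1 + live_score)  # payout grows with performance
        else:
            results[name] = 0.0                        # stake is lost
    return results

# Hypothetical participants: (tokens staked, score on live trading data).
submissions = {
    "alice": (100.0, 0.03),   # model works on live data -> rewarded
    "bob":   (250.0, -0.02),  # model only worked on test data -> stake burned
}
print(settle_stakes(submissions))
```

The design choice is the point: because payouts depend on live results rather than backtests, a participant maximizes expected return only by staking on models they genuinely believe generalize, which is how the scheme aligns otherwise adversarial traders.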
The model so far aligns incentives for co-operation on the platform.
The system has attracted top supporters like Howard Morgan, a founder of
Renaissance Technologies, the highly successful hedge fund that
pioneered an earlier iteration of tech-powered trading. It is still unclear
whether the approach will work in the long run.4 Wall Street has never
implemented a system in which everyone could win; competition is the
typical model. Numerai built its new token on top of Ethereum, a vast
online ledger: a blockchain on which anyone can build a bitcoin-like
token driven by a self-operating software program.5
AI in Natural Resources and Utilities
Value-added data are delivered more frequently than basic data, are
requested and provided on an ad hoc basis, and are more granular than
basic data. Examples of value-added data include system data, such as
forecasted load data, voltage profiles, and power quality data, and customer
data, such as meter data or customer data aggregated by ZIP code.
Value-added data are gaining prominence also because of the shift
from cloud to edge computing in areas such as the connected home, smart
city, and electric or connected cars. In combination with traditional
distributed energy resource (DER) opportunities, asset ownership use
cases include device monitoring and control at the meter, demand
response, DER dispatch, and settlement and interfacing with on-premise
devices (e.g., building management systems).
According to the Indigo Advisory Group, 93 percent of energy and
utility companies have increased their number of IoT projects. Google’s
US$3.2 billion acquisition of Nest in 2014 was less about hardware sales
and more about data. The Green Button Initiative, the energy data
standardization effort that was officially launched in the United States in
January 2012, has enabled the launch of 235 applications using data from
over 50 utilities and 60 million homes and businesses. Similarly, in April
2016, the DOE launched Orange Button, providing US$4 million for projects
to improve access to solar data, aiming to increase solar market transparency
and fair pricing by establishing data standards for the industry.
Renewables Management
AI use cases in this area are focused on enhancing short-term
renewable forecasting and improving equipment maintenance, wind and
solar efficiency, and storage analysis. AI is being deployed in various
pilots focused on wind turbine operation data and solar panel sensor data
that gauge sunlight intensity. This is then combined with atmospheric
observations obtained by radar, satellites, and ground-weather stations.
AI is also being applied to energy storage and estimates of the useful
life of a battery pack or unit by applying prognostic algorithms. In
Germany, an ML program named EWeLiNE has operated as an early-warning
system for grid operators to assist them in calculating renewable
energy output over a 48-hour period.
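A short-term renewable forecast of this kind can be sketched as a simple regression mapping weather observations to power output. The data points, the single wind-speed feature, and the cubic transform below are invented for illustration; they stand in for the radar, satellite, and ground-station inputs that systems like EWeLiNE actually combine.

```python
# Fit output = a * wind_speed**3 + b by ordinary least squares (one feature),
# then forecast future output from forecast wind speeds.
def fit(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return a, mean_y - a * mean_x

# Historical observations (hypothetical): wind speed in m/s -> turbine MW.
speeds = [4.0, 6.0, 8.0, 10.0, 12.0]
power  = [0.5, 1.7, 4.1, 8.0, 13.8]
xs = [s ** 3 for s in speeds]          # turbine power scales roughly with v^3
a, b = fit(xs, power)

forecast_speeds = [7.0, 9.0, 5.0]      # stand-ins for a 48-hour wind forecast
forecast_mw = [round(a * s ** 3 + b, 2) for s in forecast_speeds]
print(forecast_mw)
```

Real systems replace this single-feature regression with models trained on many weather variables at once, but the pipeline shape is the same: fit on historical output, then feed in forecast weather to predict output over the coming hours.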
In Japan, GE is using AI to enhance wind turbine efficiencies, raising
power output by 5 percent and lowering maintenance cost by 20 percent. A
Japanese construction company has developed a smart energy system
powered by AI to manage its solar power plant. The system is called
AHSES (Adjusting to Human Smart Energy System) and is used to
anticipate, manage, and display energy needs from the solar power plant. It
supports smart pricing.4
Demand Management
Several companies are working on AI-backed demand management
solutions that focus on the demand response of different devices running in
parallel. Similarly, a series of AI platforms are under development
focusing on energy performance in buildings with solutions that gauge,
learn, and anticipate user behavior to optimize energy consumption. AI
and game theory use cases are also being applied to reward/penalty
mechanisms to ensure that enough customers in a DER pool participate
and are responsive when necessary. We have already cited the use of DL
by Google’s DeepMind to save 40 percent on power consumed for data-
center cooling purposes, leading to an overall reduction of 15 percent in
power consumption costs. The company is negotiating with the UK’s
National Grid to use AI to help balance energy supply and demand in
Britain.5
Infrastructure Management
Power companies have an opportunity in deploying AI and digital asset
management, in which ML algorithms collate, compare, analyze, and
highlight risks and opportunities. In these cases, AI methods are used to
model possible scenarios and to advise on actions and impacts. AI is also
being deployed for the operation and maintenance of generation sources
such as gas turbines to minimize emission of nitrogen oxides. Siemens, for
example, is using a neural model that alters the distribution of fuel in a
turbine’s burners to increase efficiency. Siemens is also deploying
cognitive technology by Watson to deliver predictive, prescriptive, and
cognitive analytics within the industrial cloud platform MindSphere.
Dan Walker, BP’s Group Technology leader, has said that BP will use
ML to “combine datasets (flow rates, pressures, equipment vibration) with
data from the natural environment (seismic information, ocean wave
height) to transform the way they run and optimize their drilling
operations.” In Chicago and New York, BP has begun testing AI
technology with new “personality pumps” that aim to “provide a more
interactive experience at the pumping station.”6 Customers can interact
with a bot called Miles that offers music, video e-cards, and other
interactive options to share with friends.
Geo-Location Discovery
Fossil energy companies use AI to improve location deployment. For
example, Texas-based Pioneer Natural Resources uses AI to ensure
accurate and optimal drilling locales. Nervana Systems is
commercializing an algorithm that can detect sub-surface faults near
possible oil deposits, a task normally completed by human geologists.7
Table 19.1 AI/ESG Considerations: In the Utility and Natural Resources Sectors

Environmental
• Innovations using AI (data-as-a-service, demand management, renewables) will improve environmental compliance and performance
• AI-enabled failure forecasting saves lives and property

Social
• Communities will experience improved environmental and service outcomes

Governance
• Utilities/energy companies deploying AI are more desirable to stakeholders from an ESG governance and sustainability standpoint
CHAPTER TWENTY
AI in Software Development
In 2015, researchers at MIT wrote code that automatically fixed
software bugs by replacing faulty lines of code with working lines from
other programs. Recently, several groups have made even more progress
on getting learning software to make learning software. These include
researchers at OpenAI, a non-profit research institute co-founded by Elon
Musk; the University of California, Berkeley; and DeepMind.
Automated ML is the trend to watch in AI. As it expands beyond DL
to other types of AI, these systems will mix and match AI approaches that
should lead to new applications and services. This trend will accelerate AI
development even further, which, while exciting, is not without challenge and risk. We
are living at a time when legal frameworks around AI are still in their
infancy, and humans are struggling to fully understand AI and its
outcomes. Moreover, there is inadequate architecture to protect AI models
from adversarial attacks.
AI in Scientific Publishing
Scientific journals publish large volumes of data across a broad
diversity of topics. A publisher such as Elsevier, for example, manages
thousands of publications and thousands of authors, and every article must
be peer-reviewed before publication. A contest called ScienceIE asked teams
to create programs that could extract the basic facts and findings from
sentences in scientific papers and compare them to the basic facts from
sentences in other papers. Isabelle Augenstein, a post-doctoral AI
researcher at University College London who works with Elsevier, oversaw
this challenge. Her efforts were focused on automatically suggesting the
right reviewers for each manuscript.5
Several other major publishers implement software to support peer
review. However, it is important not to be overly optimistic about AI in
publishing. AI researchers warn about limitations in contemporary text
recognition and language understanding technologies: “Linguists call
anything written by humans, for humans, natural language. Computer
scientists call natural language a hot mess.”6
Is AI Creative?
The question of creativity and AI has been looming large over AI
researchers and practitioners for a while. While some believe that AI is
already “creative,” creating a new kind of aesthetic, others stress that
computers are still derivative rather than truly creative. These discussions
are just emerging, but there are already events dedicated to AI creativity,
among them the International Conference on Computational Creativity that
took place in Atlanta in June 2017.
1. AI as a Writer
A novel called The Day a Computer Writes a Novel almost won a
competition in Japan. Hitoshi Matsubara and his team at Future University
Hakodate in Japan selected words and sentences and set parameters for
construction before they “asked” an ML system to “write” the novel. For
the past few years, the Hoshi Shinichi Literary Award has technically
been open to non-human contenders; 2016 was the first time the award
committee received applications from AI programs. Of 1,450 submissions, 11
were written at least partially by non-humans, though these entries had their
challenges. As Satoshi Hase, a Japanese science fiction novelist, stated, “I
was surprised at the work because it was a well-structured novel. But there
are still some problems (to overcome) to win the prize, such as character
descriptions.”8
2. AI and Music
Google’s project Magenta is looking into ML in music and whether or
not there are ways to create a better collaboration between humans and
machines on that front.9 Below we explore additional uses of AI in music.
AI in Government and the Military
Government AI Policies
In the United States
In Chapter 11, we explored the current Chinese government focus on
leading in AI to secure Chinese global leadership in the field by 2030, and
how the U.S. and China are in a neck-and-neck competition for such
leadership. For some time, the U.S., especially under President Obama,
was clearly focused on the importance of maintaining AI leadership. It is
unclear at the time of this writing whether the U.S. will be able to maintain
this leadership under the current administration led by President Trump.
It is critical that the U.S. remain focused on how to better understand
the impact of AI on employment, on the transformation of traditional
businesses into digitally savvy ones, and on the necessity of adapting the
educational system so that young people develop the skills that are
necessary to adapt, learn, and compete for the rest of their lives.
We are concerned that if investments in fundamental research slow
down or stop altogether, the U.S. will suffer a serious and damaging loss
of leadership in this absolutely key and pervasive technology. Current
research can fuel the development of AI-related technologies and
applications for another 10 to 15 years. Once this period
is over, there will be a know-how gap that may never be closed, as the
example of the Russian space industry amply illustrates.
Human-machine collaboration was a major theme in the Obama
administration’s reports “Preparing for the Future of Artificial
Intelligence” and “National Artificial Intelligence Research and
Development Strategic Plan.” The consensus in the U.S. in 2016 was that
there should not be a heavy push into regulating AI too broadly, though its
use in the automotive, aviation, and finance industries should be held to
certain standards.
These two Obama-era reports offered three key guiding principles:
• AI needs to augment rather than replace humanity;
• AI needs to be ethical; and
• there must be equality of opportunity for everyone to access and develop AI.
The reports suggested that AI also has a pivotal role in cybersecurity and
can be used to detect and counter cyberattacks as they target U.S. citizens
and/or infrastructure. The reports also recommended the deployment of
algorithmic surveillance of individuals and crowds while acknowledging
that more study is needed on the matter,1 especially given current attempts
to implement “predictive policing,” some of which may be racially
biased.2
In Canada
The Canadian government is funding the Pan-Canadian AI Strategy for
research and talent that will cement Canada’s position as a world leader in
AI. This US$125 million strategy will attract and retain top academic
talent in Canada, increase the number of post-graduate trainees and
researchers studying AI, and promote collaboration between Canada’s
main centers of expertise in Montreal, Toronto-Waterloo, and Edmonton.
The program will be administered through CIFAR, the Canadian Institute
for Advanced Research.
Canada cemented its global lead in AI in large part due to the early
support by CIFAR of a group of international researchers, led by Geoff
Hinton of the University of Toronto, starting over a decade ago. Notable
Canadian researchers work for U.S. full-stack AI companies, with Hinton,
among others, supporting Google and Yoshua Bengio supporting
Microsoft. CIFAR’s program in Learning in Machines &amp; Brains is now
co-directed by Yoshua Bengio and Yann LeCun (a professor at New York
University and director of AI research at Facebook).5
In Continental Europe
In early 2016, the European Commission (EC) launched a “Digitizing
European Industry” strategy, and identified robotics and AI as critical
technologies in which to invest. Shortly thereafter, SPARC, a public-
private partnership for robotics in Europe, was launched. With €700
million in public funding and robust private participation bringing the
total to €2.8 billion, SPARC is now one of the largest civilian research
programs in this area in the world.
The EC also recognizes the potential liabilities that come with broad
adoption of AI and robotics—for example, see their publication
“Communication on Building a European Data Economy.” One of their
ideas under development includes the establishment of cross-border
corridors to test new technologies such as automated driving.
From a regulatory perspective, the EC is currently evaluating existing
legislation, such as the Defective Products Liability Directive and the
Machinery Directive, to understand its relevance to the changing
technological landscape. Also, in late 2016, the EC started the “Digital
Jobs and Skills Coalition,” focused on understanding the impact of the
digital age on employment.6
There have also been reports that France and Germany are interested in
working together to speed up AI investments in addition to the work that is
already being driven by the EC.
AI in the Military
The U.S. military has been funding, testing, and deploying various
types of ML and robotics for a long time. In 2001, Congress mandated that
one-third of ground combat vehicles become unmanned by 2015, a
goal that was not fully met.
AI technology could eventually change the balance of international
power by making it easier for smaller nations to go after the richer G7
countries. On the positive side, Estonia successfully implemented its
e-Residency program, which enables any individual to establish and run a
business within the EU, even without being a citizen of Estonia. This
program is a set of technologies utilizing ML and blockchain. On a more
negative note, ML and facial recognition capabilities can be misused.
Recent research demonstrates how voice can be synthesized, or how a
video can be changed, such that a person like President Obama “says”
things that he has never actually said.
At the request of IARPA, the research agency of the Office of the
Director of National Intelligence, Harvard’s Belfer Center for Science and
International Affairs issued a report on the effect of AI on national
security.10 One of the report’s conclusions is that “the impact of
technologies such as autonomous robots on war and international relations
could rival that of nuclear weapons.” The report recommends that the U.S.
include AI in military action planning and in possible future international
treaty negotiations. Indeed, autonomous and semi-autonomous weapons
are already in use without an overarching agreement on how to control
their implementation and diffusion. Autonomous weapons pose several
problems that nuclear and biological weapons do not. The absence of an
international treaty is just one of them. A more fundamental problem is that
autonomous weapons are relatively easy to develop and deliver. The hardware
is getting cheaper, with drones already in use by insurgent forces at a price
point of just a few hundred dollars. Software know-how is available.
Humanity might not have much time to negotiate proper and enforceable
regulations.
The report is also concerned with the attack and defense capabilities of
ML, as automation in probing and targeting enemy networks or crafting
fake information can reduce efforts and costs associated with such
operations. The use of cyberattacks has become a major concern, but the
misuse of technology by bad actors is not limited to these possibilities.
Commoditized drone delivery and autonomous vehicles could become
powerful weapons for terrorists and criminals. ISIS has
already started using consumer drones to drop explosives in combat.
Today, the CIA has 137 pilot projects directly related to AI. These
experiments include automatic tagging of objects in video (so analysts can
pay attention to what’s important) and better predictions of future events
based on big data and correlational evidence.11 Quite often, research on AI
from intelligence agencies and the military finds its way into civilian
applications. Let’s not forget that the famous touchscreen of a smartphone
was first developed for the military in the 1960s.
Environmental
• AI military robotics race; social fear that out-of-control military robots make their own decisions or are manipulated by bad actors (terrorists, rogue regimes)

Social
• Deployment of government algorithmic surveillance of individuals and crowds; potential for abuse/violations
• Beneficial AI use in public-sector work (e.g., health care)

Governance
• Obama administration’s three guiding principles of AI governance: ethical, equality of opportunity, augment not replace humans
• AI’s pivotal role in cybersecurity
• An AI arms race that may be equivalent to the nuclear race
• AI and public security: a double-edged sword
• In an AI-evolved world, asymmetric international power relations may arise
Military drones have been in use since the late 1960s. Since 2002, the
U.S. has been using Predator drones equipped with Hellfire missiles. This
model was replaced by the MQ-9 Reaper, which can fly faster and longer and
can carry a larger weapons payload. General Atomics is the contractor.
Surveillance drones are getting smaller, and some are deployed in
swarms that automatically communicate with each other and make
decisions. Perdix (named after a character in Greek mythology), for
example, is an experimental drone-swarm project carried out by the
Strategic Capabilities Office of the U.S. Department of Defense; it was
pioneered by MIT students and modified for military use around 2013.
Some drones are the
size of a hummingbird and can provide information directly to humans. An
example of this kind of system is Black Hornet, developed by Norwegian
Prox Dynamics, later acquired by FLIR.
More than 3,000 researchers, scientists, and executives from
companies including Microsoft and Google signed a 2015 letter to the
Obama administration asking for a ban on autonomous weapons. In 2012,
the U.S. Department of Defense set a temporary policy requiring a human
to be involved in decisions to use lethal force; it was updated to be
permanent in May 2017.
PART 5
The Socio-Politics of AI
—Critical Issues
Fake News
Fake news, and the way it spreads virally via social and other media, is
emerging as one of the key threats of our time. Its effects range from
the manipulation of stock markets and the purchase of potentially
harmful products to the manipulation of elections, undermining a core
pillar of democracy.
In April 2017, Facebook acknowledged for the first time that
“malicious actors” used their platform during the 2016 presidential
election. The company’s security division produced a paper dividing fake
news production sources into the following four categories (the first three
are quoted directly from the paper):
• Information Operations: Actions taken by governments or organized non-state actors to distort
domestic or foreign political sentiment
• False News: Articles that purport to be factual, but which contain intentional misstatements of
fact with the intention to arouse passions, attract viewership, or deceive
• False Amplifiers: Coordinated activity by inauthentic accounts with the intent of manipulating
political discussion, e.g., by discouraging specific parties from participating in discussion7
• Disinformation: Inaccurate and/or manipulated content that is spread intentionally
Innovation Incentives
To provide for a more robust social and corporate governance of new
technology, government and business institutions could create more
concrete forms of competition to provide additional incentives for
innovation, help increase public visibility of the socio-political impact of
new technologies, and generate momentum for a more participatory digital
society. For example, city hackathons focusing on quality-of-life
improvements via technology should become a regular practice, sponsored
by a mix of research institutions, corporations, startups, municipalities, and
individual contributors. This could include local open-data access to
ensure a better collaboration between different stakeholders.
The Internet has already affected the 20 percent of the economy that is
primarily digital, like entertainment, communications, and finance. We are
now moving rapidly into the era of the Internet of everything and AI where
because of the cloud, new connectivity standards, powerful
semiconductors, and advanced algorithms, all industries are becoming
digitized, even the most traditional, such as public services, energy, health
care, and manufacturing.
On the one hand, there’s an enormous potential for innovation. But on
the other, these emerging technologies raise concerns, confusion, and all
kinds of uncertainty around security, data ownership and protection, and
the new competitive landscape, which might completely disrupt
established profit pools.
Investors and boards have traditionally been single-mindedly obsessed
with quarterly results. In spite of this continuing pressure, their
focus is now also shifting to other serious strategic topics, such as the need
to transform their industry from within, to adapt to new business models,
and to protect customer relationships, especially from Internet disruptors.
Business sustainability is no longer about complementing the corporate
roadmap with the skills of a few acquired startups, or assessing
environmental and labor policies at suppliers, or complying with European
data-protection regulations. It is about the profound impact emerging
technologies are having on profits, succession planning, workforce
training, and the possible automation of so many processes.
Visionary boards adapt their agendas and spend more time on technology,
IT transformation, data security, and digital customer interfaces. There are
still challenges, since governance frameworks and board refreshment do
not change at the same pace as the business environment. Among the most
important challenges to transformation are the following:
• Current board and committee structures have been shaped by regulatory demands. They are
implemented to ensure compliance. This results in practices based on past insights; linked to
established regulations; and not focused on the needs, challenges, and creative possibilities of
the present and future.2
• Regulatory frameworks have always trailed technological change, leaving many grey areas
around labor relationships, product liability, data protection, customer-centricity, and
workplace safety. These grey areas become even more urgent to address with the uncertainty
brought about by the rise of AI and related technologies.
• According to McKinsey, as of 2016, only 5 percent of corporate boards in North America had
a technology committee.3 Our interviews with two institutional shareholders and three large
headhunters showed that there are no company boards with a technology committee in
Switzerland, France, or Germany. In the UK, some boards have established advisory councils
to work on topics such as cybersecurity, big data, e-commerce, social media, and IoT. Only 13
percent of U.S. Fortune 100 boards can be called truly digital in the sense that they have a
resident director with digital expertise, and this includes the boards of “technology first”
companies like Amazon and Microsoft. In other words, even these technologically advanced
companies have room for improvement when it comes to addressing the digital challenge.
• We believe that the pressure for the appointment of digitally literate board members will
increasingly come from institutional shareholders.
• Companies and their boards are still lacking an understanding of the value of their data and the
eventual transformational influence of this data. The only way data make it onto the board
agenda is when a negative event like a cybersecurity breach occurs. Data, and its status and
governance in a given company, should instead be a positive and standing topic on all board
agendas and a cornerstone of competitive analysis.
• Discussions about AI are still mainly taking place at technology companies and research
centers. Google and Microsoft have created ethics advisory boards focused on ensuring the
design of safe and beneficial AI. Several high-tech giants support initiatives such as OpenAI.
There is a lack of involvement by traditional industries in the discussion on AI, which is both
regrettable and a sign of complacency, given the unrelenting pace of change and the fact
that they will be disrupted by these technologies sooner or later.
• Successful boards and c-suites will also integrate their consideration of AI into their enterprise
risk management—however, those without a disciplined and forward-looking approach to risk
will be in a poor position to assess the risk and opportunity presented by AI.
• Last but not least, businesses will start to design their products and services with AI in mind.
Boards rarely discuss questions of digital design or the use of real-time data to create feedback loops
between products, customer care, and development functions. We believe this will change
within the next five years.
Five of the top ten Fortune 500 companies have declared their strategy to
be “AI first,” and they have been aggressively investing in AI startups and
R&D, attracting top talent from academia, and changing the way in which
research and development collaborate with product developers. If they
want to survive in the short term and thrive in the long term, traditional
businesses must study the strategies and operational practices of these five
leading companies: Alphabet, Apple, Microsoft, Facebook, and Amazon.
Every traditional and nontraditional company today must understand that it
is also a technology enterprise. The question of what strategic leadership
means in the era of nascent AI has never been more important. Visionary
leaders understand that the future is more important than the past, as they
can still do something about the future today. Knowledge about AI is
crucial, as this time companies and societies cannot rely on simply
learning from past mistakes. It is much better to prepare, do AI safety
research, enable ethical design guardrails, and get partnerships right the
first time around, as there may not be more than a couple of chances to
correct the course.
Discussing AI in the Boardroom
AI governance, just like the risk and opportunity governance of any key
strategic issue, requires synchronicity between the board’s strategic
oversight function, the executive team’s responsibility for developing and
implementing strategy, and the implementation of such strategy by key
cross-disciplinary and multifunctional teams of experts.1 This is what we
refer to as the AI governance, risk, and ethics triangle, which is depicted in
Figure 26.1 below.
Figure 26.1 The AI Governance Triangle (© Andrea Bonime-Blanc 2018. All Rights Reserved.)
Our key findings relevant to the work of corporate boards are the
following:
• AI is not going away. Like mobile and cloud, it will become pervasive and omnipresent,
powering applications, services, and operations across industrial verticals and business
functions.
• Traditional businesses should not introduce AI for the sake of AI or because it seems like a
“cool” thing to do. Technology is a tool to power a business, creating operational savings and
business opportunities. A great place to start talking about AI in any industry or business
function is the formulation of a data value strategy that complements a company’s
competitive strategy.
• Data is one of the most important assets of any business. AI only works well when a company
can clearly (1) state a problem that it wants to solve with data; (2) identify a part of its
operations that it wants to automate for better efficiency and quality; or (3) specify insights it
currently does not have, or struggles to obtain in a cost-efficient way.
• There are, of course, different AI schools of thought and methodologies. The choice of
technology frameworks a business needs depends heavily on the particulars of that business—
its sector, footprint, strategy, regulatory framework, operational practices, and areas it has
targeted to invest in.
• A company’s and its board’s lack of understanding of the differences between cognitive
computing, ML, DL, and the IT architectures that enable the collection and preparation of data
stands in the way of the successful development and implementation of a data value strategy
that includes the introduction of AI into the business.
• Confusion around the AI ecosystem impacts decision-making on why one vendor might be a
better fit than another, what talent or technology should be acquired, and what constitutes a
realistic assessment of the scope of work, the cost of delivery, and the effort that will be
required from the organization to transform existing practices.
• It is increasingly obvious that certain mission-critical business and compliance problems
cannot and will not be properly solved without AI; cybersecurity is a notable example.
• The regulatory environment around AI is in flux. Technology once again is outpacing and
outflanking legal and regulatory frameworks, creating confusion as well as opportunity.
• The technology industry would greatly benefit from including traditional businesses in its
discussions of AI design frameworks, ethics, and compliance. At the same time, traditional
businesses should view AI as an integral part of their sustainability, ethics, and strategy
frameworks.
• Companies must find talent that understands technology but also has a keen ability to work
cross-functionally. Business development executives likewise can rise to new prominence in
companies that embrace emerging technologies such as AI and blockchain.
1. What unique data do we currently hold on our customers, suppliers, and partners? What is their
strategic value? Which company among our current suppliers and competitors would like to
have our data and why?
2. Do we use and should we use real-time (streaming) data to achieve competitive advantage in
strategic positioning, operations, and regulatory reporting?
3. What data does our company use to define a wide variety of activities including strategic
planning, pricing strategies, customer care practices, supply chain management, cybersecurity,
key risks, M&A track record, and business development?
4. What data would the company like to use or have to optimize strategic planning, business
development, and operations?
5. What data assets are most vulnerable from a cybersecurity point of view? What systems
incorporate these data? Where are the cyber-vulnerable “crown jewels”?
6. What regulations are in place in our businesses concerning data collection, insight generation,
pricing based on behavioral insights, and compliance with cybersecurity laws?
7. Who is using data in our company to make decisions?
8. Who is working with data to support decision-making, insight generation, etc.?
9. What should be changed to enable more functions, business teams, and employees to apply
data-driven frameworks for better decision-making and insight generation?
10. If we apply a price tag to our data assets, or part of our data, what would it be?
1. Has the company’s risk-management function interfaced with other relevant functions—like
technology and strategy—and run scenarios to understand the impact of not engaging AI in the
business, as well as scenarios in which AI is incorporated into products and services?
2. What risk-mitigation practices do we want to apply to ensure that our transformational path is
reliable and realistic?
3. Has the company engaged in competitive landscape risk and opportunity benchmarking?
4. Has the legal department, together with risk and technology, assessed the regulatory landscape
nationally and internationally, including its costs and benefits?
Introduction
1. Andrew Buncombe, “AI System That Correctly Predicted Last 3 U.S. Elections Says Donald
Trump Will Win,” The Independent, October 28, 2016.
2. Kate Devlin, “Living with Robots,” The Exponential View Podcast, February 25, 2017.
3. Jeff John Roberts, “Amazon Argues Free Speech in Alexa Murder Case,” Fortune, February
23, 2017.
4. “AI-Driven Facial Recognition Is Coming and Brings Big Ethics and Privacy Concerns,” CB
Insights, September 13, 2017.
5. Pumulo Sikaneta, “AI Won’t Go Anywhere Unless It Has Empathy,” VentureBeat, September
18, 2017.
6. “The 2016 AI Recap: Startups See Record High in Deals and Funding,” CB Insights, January
19, 2017.
7. Erin Griffith, “Google Is on the Prowl for Cloud and AI Deals in 2017,” Fortune, February 8,
2017.
8. Erin Griffith, “It’s Time to Take AI Seriously,” Fortune, February 17, 2017.
9. Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” revised version of a paper
published in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI, Vol.
2, ed. 1, Institute of Advanced Studies in Systems Research and Cybernetics, 2003, 12–17.
10. Rory Cellan-Jones, “Stephen Hawking Warns AI Could End Mankind,” bbc.com,
December 2, 2014.
11. Kevin Rawlinson, “Microsoft’s Bill Gates Insists AI Is a Threat,” bbc.com, January 29,
2015.
12. Matt McFarland, “Elon Musk: ‘With AI We Are Summoning the Demon,’” The Washington
Post, October 24, 2014.
13. Danny Hillis, “Back to the Future,” TED, 1994, posted 2012.
14. Tim Adams, “AI: ‘We’re Like Children Playing with a Bomb,’” The Guardian, June 12,
2016.
15. Bostrom, “Ethical Issues in Advanced AI.”
16. Ibid.
17. “Five Distractions in Thinking about AI,” Quamproxime.com, March 25, 2017.
18. Daniel Boffey, “Robots Could Destabilise World Through War and Unemployment, Says
UN,” The Guardian, September 27, 2017.
19. We define “ESG” as those environmental, social, and governance issues that (1) are or
should be part of the portfolio of any organization (whether private, public, non-profit, or
academic); (2) should be considered in relationships with key stakeholders (employees, customers,
regulators, community); and (3) may have negative (risk) or positive (opportunity) financial or
reputational impacts on the organization.
20. Andrea Bonime-Blanc, The Reputation Risk Handbook: Surviving and Thriving in the Age
of Hyper-Transparency, Greenleaf/Routledge, 2014.
Part I: The Past, Present, and Future of AI
1. Fei Fei Li, Frank Chen, and Sonal Chokshi, “When Humanity Meets AI,” a16z Podcast, June
29, 2016.
2. Zachary C. Lipton, “The AI Misinformation Epidemic,” approximatelycorrect.com, March
28, 2017.
3. Ibid.
4. “Five Distractions in Thinking About AI,” Blog.fastforwardlabs.com, March 22, 2017.
Chapter 6: Data
1. Nick Harrison and Deborah O’Neill, “If Your Company Isn’t Good at Analytics, It’s Not
Ready for AI,” Harvard Business Review, June 7, 2017.
2. Forbes Corporate Communications Staff, “Majority of Companies Lack Tools and
Investment Necessary for Analytics Usage in Business,” Forbes, June 7, 2017; “Analytics
Accelerates into the Mainstream: Dun & Bradstreet/Forbes Insights 2017 Enterprise Analytics
Study,” Foreword by Nipa Basu, Chief Analytics Officer, Dun & Bradstreet.
3. Tye Rattenbury, Joe Hellerstein, Jeffrey Heer, Sean Kandel, and Connor Carreras, Principles
of Data Wrangling: Practical Techniques for Data Preparation, O’Reilly Media, 2017, 19.
4. Teresa Escrig, “Is AI a Real Existential Threat?,” teresaescrig.com, June 9, 2015.
5. Interviews; Adam Gibson, “AI & Robot Show. Episode 5. Adam Gibson (Skymind),” The
Architect Show Podcast, July 14, 2017.
6. Inside AI, “Scope vs. Scale: The Next Wave of AI Strategy,” Inside AI (newsletter), May 21,
2017.
7. Big Data Analytics, “Top 5 Big Data Trends That Will Shape AI in 2017,”
bigdataanalyticsnews.com, January 31, 2017.
8. Luke de Oliveira, “Fueling the Gold Rush: The Greatest Public Datasets for AI,” Medium,
February 11, 2017.
9. Ibid.
10. Nathan Benaich, “Six Areas of AI and ML to Watch Closely,” Medium, January 16, 2017.
11. “Snorkel: A System for Fast Training Data Creation,” with frameworks, tutorials,
references, and contributors, at hazyresearch.github.io.
12. “Creating Large Training Datasets Quickly with Alex Ratner,” O’Reilly Data Show,
O’Reilly Media Podcast, June 8, 2017.
13. Jana Eggers, “AI Building Blocks: The Eggs, the Chicken, and the Bacon,” oreilly.com,
January 26, 2017.
14. “Unstructured Data,” Wikipedia, en.wikipedia.org.
15. Alice Zheng, “We Make the Software, You Make the Robots,” an interview with Andreas
Mueller, radar.oreilly.com.
16. Carlos E. Perez, “Machine Teaching: The Sexiest Job of the Future,” Medium, July 29,
2017.
17. Ibid.
18. Matthew Hutson, “The Future of AI Depends on a Huge Workforce of Human Teachers,”
Bloomberg Businessweek, September 7, 2017.
19. Andrej Karpathy, “Software 2.0,” Medium, November 11, 2017.
20. Stefanie Koperniak, “Artificial Data Give the Same Results as Real Data—Without
Compromising Privacy,” news.mit.edu, March 3, 2017.
21. Ariel Ezrachi and Maurice E. Stucke, Virtual Competition: The Promise and Perils of the
Algorithm-Driven Economy, Harvard University Press, 2016.
22. David Weinberger, “Alien Knowledge: When Machines Justify Knowledge,”
backchannel.com, April 18, 2017.
23. Danah Boyd, “Your Data Is Being Manipulated,” Medium, October 4, 2017.
24. Kelsey Campbell-Dollaghan, “The Art of Manipulating Algorithms,” Fast Company,
January 3, 2017.
25. Y Combinator, “At the Intersection of AI, Governments, and Google,” Y Combinator
Podcast, June 16, 2017.
26. For more information, see Leah Wong, “Experts Weigh in on Fairness and Performance
Trade-Offs in Machine Learning,” The Regulatory Review, October 4, 2017.
27. Sam Harris, “What Is Technology Doing to Us?,” Waking Up Podcast, April 14, 2017.
28. Eric Newcomer, “Uber Starts Charging What It Thinks You’re Willing to Pay,” Bloomberg,
May 19, 2017.
29. Kate Brodock, “Why We Desperately Need Women to Design AI,” Medium, August 6,
2017.
30. Carlos E. Perez, “Why Women Should Lead Our A.I. Future,” Medium, December 4, 2017.
31. Michelle Wetzler, “Architecture of Giants: Data Stacks at Facebook, Netflix, Airbnb, and
Pinterest,” blog.keen.io, April 4, 2017.
32. Ibid.
33. Table 6.7 draws on the following resources: (1) Ashish Thusoo and Joydeep Sen Sarma,
Creating a Data-Driven Enterprise with DataOps: Insights from Facebook, Uber, LinkedIn,
Twitter, and eBay, O’Reilly, April 2017; (2) “How Data Lakes Support ML in Industry—with
Cloudera’s Amr Awadallah,” Artificial Intelligence in Industry podcast with Dan Faggella,
February 26, 2017, www.cloudera.com; (3) http://www.gartner.com/newsroom/id/3051717.
Chapter 7: Algorithms
1. Domingos, The Master Algorithm, pos. 211.
2. “Algorithm,” Wikipedia, en.wikipedia.org.
3. Ibid.
4. Andrew Tutt, “An FDA for Algorithms,” Administrative Law Review, Vol. 67, 2016, posted
March 15, 2016.
5. Hui Li provides a good introduction to the most basic algorithms in “Which ML Algorithm
Should I Use?” In addition, SAS Visual Data Mining and ML gives beginners a good start for
learning ML quickly and applying it to different problems.
6. Nathaniel Payne, “What Is Tuning in ML,” stackoverflow.com, April 7, 2014.
7. Dan Faggella, “Tuning ML Algorithms with Scott Clark,” AI in Industry Podcast, February
12, 2017.
8. Domingos, The Master Algorithm, pos. 1274, 1287, 1301.
9. Ibid., pos. 1327.
10. Louise Matsakis, “Researchers Fooled a Google AI into Thinking a Rifle Was a Helicopter,”
Wired, December 20, 2017.
11. See the ML security blog cleverhans.io and Ian Goodfellow, Nicolas Papernot, Sandy Huang,
Yan Duan, Pieter Abbeel, and Jack Clark, openai.com, February 16, 2017.
12. Jamie Condliffe, “AI Shouldn’t Believe Everything It Hears,” MIT Technology Review, July
28, 2017.
13. “Anti AI AI—Wearable AI,” rnd.dt.com.au, May 19, 2017.
Chapter 8: Hardware
1. Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics, Vol.
32, No. 8, April 19, 1965.
2. Rodney Brooks, “The End of Moore’s Law,” rodneybrooks.com, February 4, 2017.
3. “The Rise of AI Is Creating a New Variety in the Chip Market, and Trouble for Intel. The
Success of Nvidia and Its New Computing Chip Signals Rapid Change in IT Architecture,” The
Economist, February 25, 2017.
4. Darrell Etherington, “Intel Capital Has Invested over $1 Billion in Companies Focused on
AI,” Techcrunch, September 18, 2017.
5. Katyanna Quach, “Your 90-Second Guide to New Stuff Nvidia Teased Today: Volta V100
Chips, a GPU Cloud, and More,” theregister.co.uk, May 10, 2017.
6. Benaich, “Six Areas of AI and ML to Watch Closely.”
7. Carlos E. Perez, “Google’s AI Processor’s (TPU) Heart Throbbing Inspiration,” Medium,
April 5, 2017.
8. Ari Levy, “Several Google Engineers Have Left One of Its Most Secretive AI Projects to
Form a Stealth Startup,” cnbc.com, April 20, 2017.
9. Karl Freund, “An ML Landscape: Where AMD, Intel, NVIDIA, Qualcomm and Xilinx AI
Engines Live,” Forbes, March 3, 2017.
10. Cade Metz, “The Race to Build an AI Chip for Everything Just Got Real,” Wired, April 24,
2017.
11. James Morra, “Graphcore Prepares ML Silicon for this Year,” Electronic Design, July 26,
2017.
12. Mark Gurman, “Apple Is Working on a Dedicated Chip to Power AI on Devices,”
Bloomberg Technology, May 26, 2017.
13. Greg Diamos, “We Need Next Generation Algorithms to Harness the Power of Today’s AI
Chips,” Forbes, June 21, 2017.
14. Shanahan, The Technological Singularity, pos. 457.
15. William Vorhies, “The Three Way Race to the Future of AI: Quantum vs. Neuromorphic vs.
High Performance Computing,” datasciencecentral.com, November 14, 2017.
16. Tom Simonite, “Intel’s New Chip Design Takes Pointers from Your Brain,” Wired,
September 25, 2017.
17. Tom Simonite, “Google’s New Chip Is a Stepping Stone to Quantum Computing
Supremacy,” MIT Technology Review, April 21, 2017.
18. R. Colin Johnson, “DARPA Funds Development of New Type of Processor,” eetimes.com,
June 9, 2017.
19. Nova Spivack, “Why Cognition-as-a-Service Is the Next Operating System Battlefield,”
Bottlenose, gigaom.com, December 7, 2013.
20. For example, see Network Management and AI Lab research of Carleton University,
Canada, and Fernando Koch, Carlos Becker Westphall, Marcos Dias de Assuncao, and Edison
Xavier, “Distributed AI for Network Management Systems.”
21. Mike Barlow, “What About the Shannon Limit?” in “Practical AI in the Cloud,” O’Reilly,
2017, 12.
Chapter 9: Openness
1. John Mannes, “Facebook and Microsoft Collaborate to Simplify Conversions from PyTorch
to Caffe2,” TechCrunch, September 7, 2017.
2. Tom Krazit, “Amazon and Microsoft Unveil ‘Gluon’ Neural Network Technology, Teaming
up on Machine Learning,” geekwire.com, October 12, 2017.
Daimler, 165
Dark Blue Labs, Google, 107, 108t
Dark data, 65, 119t
Darktrace, 132t
DARMS (Dynamic Aviation Risk Management Solution), 204
DARPA (Defense Advanced Research Projects Agency), 5t, 31, 42, 52, 89, 206
DARPA Robotics Challenge, 42
Dartmouth Conference (1955), 5, 18, 28t
Darwin, Charles, 13–14, 16
Data analytics, 69, 138, 144, 164, 179, 181
Data Artisans, 90
Data collection: best practices, 74t; bias in datasets and algorithms, 70; boardroom decisions, 236;
Internet of things, 51; preparing for modeling and processing, 64–66. See also Data governance;
Data privacy
Data control, 209; fake news, 213–214; surveillance and influence, 211–213
Data custodians, 224
Data Exchange Singapore (DEX), 51
Data governance, 73–74, 74t, 77t, 151t; AI misconceptions, 144; boardroom discussions about, 238;
data custodians and, 224; ethical design and, 226t; in healthcare sector, 177t; in public security,
204
Data graphs, data flow graphs, 110–111
Data metrics, 225–226, 226t
Data preparation, 57, 64–66, 96, 134, 146
Data privacy, 34–35, 35t, 45t, 127t, 128t; Amazon and, 124; Apple and, 96, 119–120; automotive
sector, 168t; best practices, 74t; bias in datasets and algorithms, 70; blockchain, 144; differential
privacy, 96, 119–120; ethical design, 226t; Facebook and, 116; financial sector, 181, 185t;
Google and, 96, 111; healthcare sector, 175, 177t; IBM Watson, 175; Internet of things and, 51,
54t; life-long learning, 220; public security sector, 203–204; regulatory concerns, 61, 68–69,
77t; surveillance and influence, 212, 214t
Data processing, 51, 62, 75, 77, 77t, 87, 90
Data Sift, 66t
Data & Society, datasociety.net, 224
Data vaults, 48, 176
Datalogue, 66t
Data-preparation-as-a-service, 64–66, 96
DataRPM, 156–157
Data-training-as-a-service, 65, 67, 80, 81, 96, 97, 134, 219
DaVinci Institute, 163
Davos, xxii, 103
Decentralized AI actors, 23, 31, 48, 50, 219
Decision Forest, 125
Decision theory, 23
Deep Blue, 5, 6, 17
Deep Genomics, 174
Deep learning (DL), 2, 28–31, 52, 126t
Deep Neural Network(s) (DNNs), 26, 27, 31, 38, 117, 194–195
Deep Scalable Sparse Tensor Network Engine (DSSTNE), 93
DeepCoder system, Microsoft and University of Cambridge, 194
DeepDream, Google, 197
DeepMind, 108t, 109–110; academia and the private sector, 99, 100; data training, 65; deep
learning, overview of, 29; demand management, 190; ethics and safety board, 113; openness,
93, 94; reinforcement learning, 34; software development, 193
DeepMind Ethics & Society (DMES), 113
DeepMind Health, 176
Defcon Security Conference, 52
Defense Advanced Research Projects Agency (DARPA), 5t, 31, 42, 52, 89, 206
Delphi Automotive, 159, 160
Democratization of AI, 33, 49, 57, 73
Dennett, Daniel, 12
Department of Defense, US, 207. See also Military sector
Deutsch, David, 11, 14
DEX (Data Exchange Singapore), 51
Didi, 160–161
Diffbot, 66t
Differential privacy, Apple, 96, 119–120
Digital agenda, 224
Digital Jobs and Skills Coalition, 202
Digital Reasoning Systems, 172
Digitizing European Industry, 201–202
Dimension reduction, 33
Disinformation, fake news, false news, 213–214
Distributed energy resource (DER), 188, 189
Distributed infrastructure of data sets, 47, 50–51
Distributed ledgers, blockchain, 47, 184
Distributed System Implementation Plan (DSIP), 188
Diversity, 58, 72, 107, 149, 151t, 195
DMES (DeepMind Ethics & Society), 113
DNN Research, Google, 108t
DNNs (Deep Neural Network(s)), 26, 27, 31, 38, 117, 194–195
Do.com, 124
Doctoroff, Dan, 162
Domingos, Pedro, 24–25, 27, 79, 111
Dong, Yu, 104
Dorsey, Jack, 195
Dreamquark, 181
DriveAI, 155
DrivePX2, 87
Driverless cars. See Autonomous vehicles (cars)
Droid, Motorola, 51
Drones, 43–44, 45t; Amazon and, 121, 124–125; chipset solutions, 87; data collection, 65;
employment opportunities, 163; military uses, 205, 207; public sector uses, 203–204; software
for, 10t
DSIP (Distributed System Implementation Plan), 188
DT, the Australian research agency, 82
DuerOS, Baidu, 104
Dumi, 104
Dun & Bradstreet, 60
Durov, Pavel, 212
D-Wave, 54
Dynamic Aviation Risk Management Solution (DARMS), 204
Gamalon, 28
Game theory, 23–24, 190
Games: data training with, 65; DeepMind and, 29, 34, 109; demand management solutions, 190;
domain-focused ANI companies, 10t; Facebook and, 117; IBM Watson, 136; Turing Test, 18–
19
GANS (generative adversarial network(s)), 35, 63, 197
Gates, Bill, xxiii
Gates, Melinda, 72
Gaussian process (GP), 27–28
GDPR (General Data Protection Regulation, EU), 31, 35t, 50, 51
GE (General Electric), 96, 123, 131, 189
Geely, 161
Gehlhaar, Jeff, 87–88
Genee, Microsoft, 126t
General Data Protection Regulation, EU (GDPR), 31, 35t, 50, 51
General Electric (GE), 96, 123, 131, 189
General Motors (GM), 139, 161, 164–165
General-purpose AI, DL, and ML, 67, 84, 86, 96, 131
Generative adversarial network(s) (GANs), 35, 63, 197
Geo-location discovery, 190–191
Geometric Intelligence, 12, 27
Georgia Tech, 38, 89
Ghahramani, Zoubin, 27–28
Glasberg, Roy Geva, 109
Gliimpse, Apple, 119t
Gluon, 94
GM (General Motors), 139, 161, 164–165
Go, 6, 11, 23, 29, 34, 94, 108t
Goertzel, Ben (Annual AGI conference), 13
Google: academia and the private sector, 99, 100; AI applications, 111–112; AI approach, 107–109;
AI ethics, 112, 231; AI research organizations, 109–110; AI technologies, 110–111;
AI-as-a-service model, 90–91; AlphaGo, 11–12; automotive sector, 163–164, 166–167;
bias in datasets and algorithms, 69–73; chatbots, 41t, 42; chipset solutions, 85, 86, 87; CIFAR
program, 201; cloud services, 40; cyberattacks, 81; data, economies of scale, 62–63; data
infrastructure, 75, 76t; deep learning projects, 29; DeepMind, reinforcement learning, 34;
drones, 43; emotion measurement technology, 44; federated learning, 35; full-stack AI
companies, 96; Gaussian process, 27; healthcare sector, 170, 176; Internet of things, 51;
machine teaching, 67; military sector, 207; music, Magenta, 197; natural resources and utility
sector, 188–189, 190; openness, 93, 94; open-source tools, 57; past, present and future of AI, 1–
2, 5–6; quantum computing, 54–55; SAP HANA partnership, 134; startups, 11t; surveillance
and influence, 212; visual arts, DeepDream and Sketch RNN, 197
Google Assistant, 123
Google Brain, 27, 72, 110
Google Cloud, 134
Google Glass, 44
Google Home, 111
Google News, 70
Google.ai, 110
Gordon, Vitaly, 143
Governance: AI giants, 128t; AI misconceptions, 144; AI technologies, 45t; automotive sector, 165,
168, 168t; blockchain, 49, 54t; boardroom discussions, 233–240; current issues in, 7t;
cybersecurity, 54t; data collection and sharing, 57, 58, 60–61, 72–74, 77t, 238; ethical design
and social governance, 209–210, 223–226, 226t; financial sector, 185t; future of employment,
217, 221t; future trends, xxvii, 1; geopolitics of AI, 105t; healthcare sector, 177t; Internet of
Things, 54t; key performance indicators, 238–239; machine learning, 35t; media and arts sector,
197t; military sector, 204, 206t; natural resources and utilities sector, 191t; openness, 94t;
quantum computing, 54t; reputation risk, 141t; social data, 214; social science issues, 19t;
strategic risk management, 237–238; successful business transformation and, 229–232, 232t;
surveillance and influence, 211; technology environment, 237; for traditional companies, 151t
GP (Gaussian process), 27–28
GPUs (graphical processing units): AI-as-a-service, 90; algorithms, 88; Amazon and, 121; chipset
solutions, 86, 87; computer vision, 37–38; Facebook, Big Sur, 113–114; future trends, 89;
Microsoft, Azure Cloud, 127; Moore’s Law, 85; neural networks, 26; Tesla and, 165–166
Gradient Ventures, Google, 107, 109
GRAIL, 124
Granata Decision Systems, Google, 108t
Graph analytic processor(s), 89
Graphcore, 67, 88
Graphical processing units. See GPUs (graphical processing units)
Graphistry, 132t
Great Wall Motors, 127
Green Button Initiative, 189
Greene, Diane, xxi
Grid search, 80
GridSense, Alpiq, 188
Groq, 67
Guestrin, Carlos, 31, 99, 119t
H2O, 137
H2O.ai, 172
Hadoop, 75–77, 96, 118
Hall, Wendy, 200–201
Halli Labs, Google, 108t
HANA, SAP, 134
Harari, Yuval Noah, 14
Hardware: Amazon, 121, 122, 124; Apple, 120; automotive sector, 158, 161, 166; Baidu, 104;
chipset solutions, 85–89; cloud technologies, 89–91; cloud-as-a-service (CaaS), 112; Hadoop,
77; IBM Watson, 137, 138; Life 1.0, 16; machine teaching-as-a-job, 67; military sector, 205;
natural resources and utility sector, 189; new algorithms, 88; Open Computer Project, 113–114;
quantum computing, 54; semiconductors, 83–85; smart networks, 91
Harris, Sam, 14
Harris, Tristan, 71
Harrison, Don, xxii
Hart, Peter, 25
Harvard, Belfer Center for Science and International Affairs, 205
Harvest.ai, Amazon, 122–123
Hawking, Stephen, xv, xxiii
Health Insurance Portability and Accountability Act (HIPAA), 124
Hebb, Donald, 28
Heckerman, David, xxiv
HelpMate robot, 174–175
Heuerman, David, 25
Heuristic, 2, 23
HiBot USA, 191
Hierarchical Identify Verify Exploit (HIVE) processor, 74, 77, 89
High-altitude Internet balloons, 27
High-performance computing (HPC), 52
Hillis, Danny, xxiv
Hinton, Geoff, 24, 26, 29, 34, 99, 201
HIPAA (Health Insurance Portability and Accountability Act), 124
HiQ, 10t, 131t
HIVE (Hierarchical Identify Verify Exploit) processor, 74, 77, 89
Hochreiter, Sepp, 29
Hodgkin, Alan, 25
Hofstadter, Douglas, 25
Holland, John, 25
HomeKit, Apple, 51
Homo sentients vs. Homo sapiens, 14, 19t
HoneyComb, 96
Hoshi Shinichi Literary Award, 196
Hougeland, John
HPC (high-performance computing), 52
Huang, Jen-Hsun, 125
Huawei, 105
Huggable robot, MIT, 43t
Huxley, Andrew, 25
Hwang, Tim, 70
IBM: AI acquisitions, 138t; AI-as-a-service model, 90, 130; cognitive computing, 24; Deep Blue,
17; financial sector, 180, 181; healthcare sector, 175–176; history of AI, 5–6; legacy services,
96, 137; ML focus, 133–134; natural language processing (NLP), 40; neuromorphic processors,
89; quantum computing, 40–41; Watson and reputation risk, 136–139; Watson applications, 139
IBM Global Entrepreneurship, 40
Identity resolution, 47
Identity verification, Know Your Customer (KYC), 184
IEEE, 174
IEEE Conference on Data Mining, 25
Iflytek, 102
ImageNet, 29, 37, 62, 65t
Impact of AI: algorithms, impact of, 69, 79–80; autonomous vehicles, impact of, 159–163;
boardroom decisions and, 229–230, 232, 232t, 234, 238; in Continental Europe, 202; Deep
Learning (DL), impact of, 31; diversity issues, 72–73; economic impact, 113, 137, 151t, 162–
163, 199; environmental issues, 7t, 35t, 45t, 77t; ethical issues, 137, 223–226, 226t; in financial
sector, 181–182; governance issues, 35t; in government and military sector, 201, 202, 205; in
healthcare sector, 175; on organizational design, 67–68; past, present and future of AI, 1–3;
reinforcement learning, impact of, 33; social issues, 7t, 12, 13, 45t, 105t, 127t, 151t; on
surveillance and influence, 212; in United Kingdom, 201; in utility sector, 190. See also
Employment and AI, future of work
Inception, convolution neural network, 109–110
ING, 160
Init.ai, Apple, 119t
Innovation: in automotive sector, 164; China’s investment in, 101; data collection and capture, 66;
data governance and, 73; drones, 43–44; employment and, 209, 215–221; in financial sector,
180, 181; in hardware, 85; incentives for, 225; machine learning, 35t; machine-teaching-as-a-
job, 68; pitching AI to a business, 149; regulations, impact on, 210; SAP Leonardo, 134; self-
programming software, 194; user interface and, 147; in utility and natural resource sector, 191t
Intel: Amazon partnership, 121; automotive sector, 127, 159; Baidu partnership, 104; fragmentation
of chipset solutions, 86–87; Loihi, 89; machine teaching-as-a-job, 67; Mobileye, 159; multicore
designs, 85
Intelligence: artificial general intelligence (AGI), 9–12, 13; artificial narrow intelligence (ANI), 9,
10t; artificial super intelligence (ASI), 12–13; of computers, 18–19; machines vs. people, 15–17
Intelligence, human, 2, 13, 15–17, 29, 62, 109
Intelligent machines, smart machines, 15–19, 220
Intelligent Medical Imaging, Inc., 173
Intelligent processing unit (IPU), 88
Interface: Amazon AI applications, 124; application process interfaces (APIs), 40, 112; boardroom
decisions, 230, 232t, 237–238; chatbots, 182; Deep Learning applications, 30t; domain-focused
companies, 129; ease of use, 147; General Data Protection Regulation (GDPR), 50; Internet of
things, 51; machine teaching-as-a-job, 67; natural resource and utility sector, 187; regulatory
considerations, 68; unstructured data, handling of, 66; voice interfaces, 114t, 118t
Interior redesign, automotive, 162
International Conference on Computational Creativity, 196
Internet of Things (IoT), 3, 48, 51
Intuition, artificial, 34, 72
Intuition Machine, 25, 29
Intuitive Surgical, 174
Invoa, 132t
IoT (Internet of Things), 3, 48, 51
IP, 6t, 11t, 50, 79, 89, 206
iPhone, 29, 41t, 51, 117, 120, 165
IPU (intelligent processing unit), 88
IQ of computers, machines, 2, 12, 18–19
Ivakhnenko, Alexey G., 28t
Jaakkola, Tommi, 31
Jackrabbot, robot, Stanford, 157
Jaitly, Navdeep, 24
Japan’s National Institute of Informatics, 19
Jassy, Andy, 124
Jeopardy!, 6t, 136, 137
JetPac, Google, 108t
Jetson TX1, 43, 87
JibbiGo, Facebook, 114t
J&J, 174
Jobs. See Employment and AI, future of work
Jordan, Michael, 25
Judicata, 96
Julie Desk, 131t
Labeling, tagging: Cortica, 38; data preparation, 64–65; dataset openness, 94; DL models, 12;
economies of scale, 63; machine teaching-as-a-job, 67–68; neural networks, 26; pitching AI to a
business, 149; Salesforce, 135; scene labeling, 158; semi-supervised learning algorithms, 33;
supervised learning algorithms, 32; unsupervised or predictive learning, 33
Lattice Data, Apple, 118, 119t
Launchpad, Google, 109
Law Train, 204
Lawrence Berkeley National Lab, 160
LeCun, Yann, 10, 21, 24, 34, 63, 87, 115, 201, 214
Lee, Kai-Fu, 102
Legacy IT, systems, 96, 133, 137, 141, 202
Lemonade, 180
Leonardo, SAP, 134
Levandowski, Anthony, 5t, 167
Levesque, Hector, 18
Levy-Rosenthal, Patrick, 45
Lex, 40, 121, 124
Lexalytics, 132t
Li, Fei-Fei, 72, 109
Li, Jia, 109
Lidar/LIDAR (Light/Laser Detection and Ranging), 5t, 158, 159, 165, 167
Life 3.0, 13, 16–17, 19t
Life-long education, 209, 220–221
Lisp machine, 84
Liu, Qingfeng, 102
Lloyd, Seth, 54
Location safety, 157
Loihi, 89
Long short-term memory (LSTM), 29, 100, 117
Long-range anti-ship missile (LRASM), 102
Lovelace Test, 18
LRASM (long-range anti-ship missile), 102
LSTM (long short-term memory), 29, 100, 117
Luckerson, Victor, 111
Luminata, 86
Lumos computer vision platform, 38
Lyft, 155, 156, 160, 161, 167, 216
Safety of AI: artificial narrow intelligence (ANI), 9; automotive sector, 158, 159–160, 163; bias in
datasets and algorithms, 69–70; boardroom decisions, 231–232, 240; China’s efforts in, 105;
cybersecurity, 52, 81; DeepMind Ethics & Society (DMES), 113; governance considerations,
19t, 94t, 127t, 128t; healthcare sector, 177t; openness, 94t; Partnership on AI, 113
Sagar, Mark, 196
SAIC Motor, 161
Salakhutdinov, Ruslan, 30, 99, 118
Salesforce: benefits of AI, 143; domain-focused companies, 96, 131, 133, 135–136; IBM Watson
and, 137, 138
Samsung, 11t, 51, 141, 159, 173
Samuel, Arthur, 5
Sandia National Laboratories, 54
Sanghavi, Sundeep, 157
SAP, 130, 133, 134
Sapho, 132t
Sapolsky, Robert, 15
Schmidhuber, Jürgen, 11t, 29, 100
Scholes, Robert, 71
Schroepfer, Mike, 117
Searle, John, 15–16
Security. See Cybersecurity
Sedol, Lee, 29
Self-play, learning, 7, 34
Self-programming AI, 193–194
Selfridge, Oliver, 5
Semantic(s), 6, 16, 39
Semiconductors. See Silicon, chip(set), semiconductors
Semi-supervised learning, 32t, 33
Sense.ly, Molly Virtual Assistant (VA), 173
SenseTime Group, 105
Sentient Technologies, 183
SGT STAR Virtual Assistant (VA), 203
Shakey Robot, 7
Shannon, Claude, 18
Shannon limit, 91
Shift Technology, 180
Shum, Harry, 125
Shutterstock, 38
Sidewalk Labs, Google, 162
Sift Science, 132t
Signal Sense, 132t
Silicon, chip(set), semiconductors: Apple and, 117–118, 120; development of silicon, 83–84;
enterprise startups, 141; fragmentation of chipset solutions, 84–88; Google and, 107; machine
teaching-as-a-job, 67; new algorithms, 88; quantum computing, 54; Tesla and, 165; trends, 58,
83–85
Simon, Herbert, 5
Simulation: AI building concepts, 23–24; artificial general intelligence (AGI), 9–12, 10t, 11t; DL
applications, 30t; history of, 28t; Launchpad Studio, 109; law enforcement training, 204;
openness, 98; quantum computing and, 54
Singapore, 51, 103, 160, 211–212, 221
Singularity, xxiii
Siri, 12, 41–42, 41t, 44, 117, 119t
Sivasubramanian, Swami, 121
Sketch RNN, Google, 197
Skillset, talent: Apple and, 118; automotive sector, 164; boardroom decisions, 231, 234, 235, 238,
239; in Canada, 201; in China, 101; DL research, 31; domain specialists, 131t; Google and, 107;
IBM and, 136; long-term investment in, 57, 95, 133, 148; Microsoft and, 125; openness and
talent development, 93; recruitment and development of, 96, 97, 100, 144, 231; shortages of, 62,
91, 145; strategic planning around, 148, 151t, 221; in United Kingdom, 201
Sky Futures, 44
Skype, 126, 127
Smart City Challenge, US Department of Transportation, 161
Smart machines, intelligent machines, 15–17, 220
Smart networks, 57, 91
Smartphones: Amazon and, 123; Apple and, 118t; drone technology and, 43; Facebook and, 116–
117; Federated Learning, 109; healthcare sector, 173; history of AI, 5t; military sector, 205, 206
Snorkel, 65
Social Capital, 136
Social governance, 209–210, 223–226
Social media: big data, 61–62; boardroom decisions, 230; fake news, 69–70; financial and
insurance sector, 181; governance considerations, 197t; healthcare sector, 171; IBM and, 138t;
Microsoft and, 126t; NLP applications, 39t; reputation risk, 142; Twitter, 194–195
Social safety net policies, 221t
Soft robot(s), 174
Softbank, xvi
Software 2.0, Andrej Karpathy, 68
Software Development Kit(s) (SDKs), 90, 108t, 124
Solomonoff, Ray, 5
Somorjai, John, xxii
Soul Machines, 196
Souq.com, 124
SPARC, public-private partnership for robotics in Europe, 201
Spark, 57, 75t, 77, 96, 168
Sparta Science, 174
Speech APIs, 42
Speech recognition: Apple neural engine, 88; ASIC, tensor processing units (TPU), 87; Baidu, 103;
Facebook, 114t, 116; healthcare sector, 173; Iflytek, 102; IQ of computers, 18–19; neural
networks, 26; open datasets, 65t; Tencent, 104
Speech synthesis, 30t, 102, 124
Spiking NNs technology, 89
Spiking of neurons, 25
Spotify, 2
SpringRole, 131
SRI International, 7
Stanford CoreNLP Suite, 40
Stanford Media Lab, xxv
Statistical approaches for AI, 23
Stealth, 108t, 137
Stickiness of products, 71
Streams, DeepMind, 176
Strong AI, 2
Structured data, 39, 76, 118
Stucke, Maurice, 69
SunSpec Energy Storage Model Specification, 187
Super intelligence, Super General Intelligence (SI, SGI), 2, 9, 12–13, 16, 23t
Supervised learning, 32–33, 32t, 34
Supply chain, 66t, 232t, 236
Surveillance, 102, 200, 203, 206t, 207, 211–213, 214t
Sustainability, business, 210, 220, 229, 234
Swarm AI, 31, 207
SwiftKey, Microsoft, 126t
Switzerland, 11t, 100, 217, 230
Symbolic systems, 7, 23
Symbolists, 24
Synapses, 25–26
Syntax, 16
Systems on a chip (SoC), 86, 105
Uber: academia and private sector, 99; automotive sector, 155, 156, 164; fairness and algorithm
bias, 70–71, 72; Gaussian processes, 27; history of AI, 5t; lidar system, Waymo and, 167;
mobility-as-a-service, 160; overview of, 167–168; reputation risk, 140t; traffic and energy
consumption, 160
Uber Freight, 168
UK National Grid, 187, 190
Understand.ai, 67
United Nations, xxv
Unitive, 131t
unity3d.com, 65
Universal basic income (UBI), 218
Università della Svizzera Italiana, 100
University College London, 195, 214
University of California, Berkeley, 77
University of Cambridge, 27, 118t, 176, 193
University of Edinburgh, 27
University of Lugano, 100
University of Pittsburgh Medical Center, 170
University of Texas, 16, 176
University of Tokyo, 19
University of Toronto, 99–100, 167, 201
University of Washington, 31, 99
University of Wyoming, 31
unrealengine.com, 65
Unstructured data, 32t, 66, 118, 128t, 171, 181
Unsupervised learning, 12, 26, 29, 32t, 33, 34, 118
Urban planning, 155, 161–162, 164
Urtasun, Raquel, 167
US Department of Defense, 207. See also Military sector
US National Robotics Engineering Center, Carnegie Mellon, 99
US Patent and Trademark Office, 79
Use cases: AI-as-a-service, 96; Amazon and, 123; boardroom decisions, 239; chatbots in public
sector, 202–203; chipset solutions, 86; cybersecurity, 81–82; data lakes, 76t; data preparation,
64, 66t; data-as-a-service, 188; DL applications, 30t; emotion measurement technology, 44;
history of AI, 6t; natural resource and utility sector, 188, 189–190; privacy concerns, 111;
quantum computing, 54; silo strategy and, 148; Spark, 77; transactional use cases, 41t
User interface, ease of use, 147
User-centric governance, 49
VKontakte, 212
Valuation of startups, companies, xxii, 60
Value-added data, DSIP, 188
Vapnik, Vladimir, 25
Vector Institute for AI, 100
Vectra, 132t
Veeramachaneni, Kalyan, 68
Venture capital (VC), xxii, 6t, 60, 95, 101, 107, 149, 216
Vicarious, 6t, 11t
Viégas, Fernanda, 110
Vigoda, Ben, 28
Virtual assistant (VA): Alibaba, 103; Amazon, 121; financial and insurance sector, 182; Google,
107; government and military sector, 203; IBM, 138t, 139. See also Chatbots
Virtual Reality (VR), 104, 221
Vision Factory, 107, 108t
Visual search, 38, 108t
Visual Turing Test, 18
Vocaliq, Apple, 118t
VoCo, 40–41
Vogels, Werner, 121, 124
Voice: algorithms and, 79, 88; Amazon, 120–121, 123; Apple, 117, 118t; automotive sector, 157;
Baidu, 104; bots and chatbots, 41–42; cybersecurity, 82; emotion measurement technology, 44;
Facebook, 114t; Google, 29t, 107, 109, 111; healthcare sector, 171; intelligent machines, 17;
interface considerations, 147; Internet of things, 51; Microsoft, 126, 127t; military sector, 204;
natural language processing, 38–41; Nuance, 131; regulatory considerations, 68; Tencent, 104
Volkswagen, 126, 140t
Volume, variety, velocity – three Vs of data, 61–62, 85, 146, 171
von Neumann, 84
von Neumann architecture, 84
von Neumann processor, 84
VoxForge, open data set(s), 65t
Wade&Wendy, 131t
Warren Center for Network and Data Science, 70
Watson: applications, 139; boardroom decisions, 235; cautions about, 96; cloud services, 40, 90;
cognitive computing, 24; financial and insurance sector, 181; healthcare sector, 175–176;
history of AI, 6t; natural resource and utility sector, 190; reputation risk, 136–139; use cases,
148
Watson for Oncology, 175–176
Watson Genomics, 175–176
Watson Message Insights, 90
Watson Message Sentiment, 90
Wattenberg, Martin, 110
WaveNet, 109
Way of the Future, religion, xxiii
Waymo, 166–167
Weather Channel, IBM, 136, 139
Weave or Thread protocols, 51
WeChat, 104
Westworld, HBO science-fiction thriller, 14
Whetlab, 27
White House Office of Science and Technology Policy, 102
Wiki Text, open data set(s), 65t
Williams, Chris, 27
Winograd Schema Challenge, 18
Wise.io, 96
Wolfram, Stephen, 90
Wolfram language, 90
Work Fusion, 66t, 132t
Workplace impacts. See Employment and AI, future of work
Wuhan Landing Medical High-Tech Co., 103
x86, 85
XAI, Explainable AI, 31
x.ai, Amy assistant, xvi, 35, 131t
Xi Jinping, 102
Xiao Ice, 127
Xilinx, 87
xPerception, 104
X-ray, 172