


Issue #01

[Chart: Intelligentsia Index, Return Since Inception]

In this Issue:

Why AI and Machine Learning Are So Important: Now that machines can learn on their own from the data we provide them, we are getting more addicted to what they provide us. You might be surprised just how much more time we spend with our smart devices in 2016 than in 2015. We make the case that this is down to the effectiveness of machine learning algorithms at providing personalized experiences.

AI and Machine Learning Case Studies: If you're not doing marketing or your legal work with machine learning, then you're doing it wrong. We evaluate more than 70 case studies from the start-up world to find the best results.

A Conversation with Jenna Niven: An insider's view into the effect of AI on advertising and marketing.

AI is the New Electricity: Andrew Ng was right: AI is the new electricity. Helen offers up 10 things to know from her 20 years in the utility industry.

The Intelligentsia Index: Introducing our 21-stock AI and machine learning index.

Patent Watch: Summaries of applications that caught our attention.

August 26, 2016

Why AI and Machine Learning Are So Important

By Dave Edwards

Machines have changed the world. Machines that provide information and entertainment surround us. Our computers, phones, TVs, game consoles, home security systems, and thermostats can be connected to the Internet to entertain us, educate us, and enrich our lives. As data speeds have increased and chip sizes have decreased, these machines have done more for us, and we have become more connected to, indeed dependent on, them.
Now, though, something different is happening. It's difficult to pinpoint the exact moment of change, but a change has occurred. The amount of data stored in the cloud, and the computing power able to process that data, has passed an inflection point. Machines are no longer limited to looking up data for us (what's the weather like, who was the 35th President of the United States); they can now learn from the data itself.
We routinely use the results of machine learning algorithms multiple times a
day. Whether we talk to Siri or click on a recommendation from Amazon or
become aware of how our newsfeed on Facebook is personalized, our
everyday consumer experiences are being transformed.
- Natural language understanding is giving us personal assistants that truly understand our needs.
- Faster and more accurate analysis and prediction is making us more confident in the professional advice we receive.
- Smarter devices are making our homes more comfortable and secure. And more entertaining.
- Better choices are making us more efficient. Whether it's in transportation, commerce, social, finance, entertainment or education, we have more control over how we use our time.
Adding intelligence to machines, commonly called artificial intelligence (AI), has been around for more than 50 years. AI practitioners have made major breakthroughs over the years that have led to the plethora of intelligent machines that surround us. But a human programmed most of that intelligence into a machine. They were static systems, unable to learn and update on their own.
The breakthrough disrupting the technology world today is machine learning. Machine learning allows computers to look at data, understand it, and create their own rules. AI researchers have been working on various techniques for a long time, but the big change has been in computation power and data quantity. Machine learning is out of the lab and in the world.
The Machine Learning Cycle

Understanding how machines learn is fascinating. Helen's upcoming book, How Machines Learn: An Illustrated Guide to Machine Learning, demystifies the math and makes it easy to understand the mechanics of machine learning. But for now, let's focus on the effect of machine learning. At a high level, machine learning allows computers to improve their own performance. Their learning processes mimic the way humans learn from experience. Computers use algorithms to understand data and create models that can be used to make predictions. Predictions are the foundation of personalization: what you might buy, read, watch, eat.
[Figure: The Machine Learning Cycle. Source: Intelligentsia Research]
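The idea above, that computers "understand data and create models that can be used to make predictions," can be made concrete with a toy example. This is the simplest possible model, a least-squares line fit; the data points are invented for illustration and are not from any real service:

```python
# Toy illustration: a machine "learns" a model (a slope and an intercept)
# from example data, then uses it to predict an unseen case.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b, the simplest possible 'model'."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Invented historical data: hours spent on a service vs. revenue per user.
hours = [1.0, 2.0, 3.0, 4.0]
revenue = [0.16, 0.32, 0.48, 0.64]   # $0.16 per hour, echoing the article

a, b = fit_line(hours, revenue)
prediction = a * 5.0 + b             # predict revenue for 5 hours of use
```

Real systems fit far richer models over far more data, but the shape is the same: parameters learned from past examples, then reused to predict new ones.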

The machine learning cycle is new. The machine automatically discovers meaning in the data and gathers more data from external sources, combining it all in an intelligent service that is delivered through a human-computer interface, which then, in turn, generates more data for it to learn from again. For instance, a fitness wearable could know from my historical data that I am more likely to go for a mountain bike ride or a long walk than go to the gym. It could combine that knowledge with an understanding of whether I'm close to a trail, my schedule availability and what the weather outlook is, then present me with a fitness target for the day. At the end of the day, what I actually did is captured as fresh data. Was its prediction correct, or does the new data change its model?
Machines learn from data by understanding connections and patterns in the data. Much of the value in machine learning is discovery, where the computer uncovers connections and patterns that humans can't see. And the more data the computer can access, the better the predictions are. So the more data a company has, the better its services can be. And the faster the machine can access and process the data, the faster the company's services will be.
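The predict-observe-relearn cycle described above can be sketched in a few lines. This is a deliberately minimal simulation; the exercise scenario and the TRUE_PREFERENCE parameter are invented for illustration:

```python
import random

random.seed(0)

TRUE_PREFERENCE = 0.7   # hypothetical: the user exercises on 70% of days

def predict(history):
    """Model = fraction of past days the user exercised (0.5 if no data)."""
    return sum(history) / len(history) if history else 0.5

history = []
errors = []
for day in range(500):
    guess = predict(history)                               # 1. predict from data
    actual = 1 if random.random() < TRUE_PREFERENCE else 0 # 2. observe the day
    errors.append(abs(guess - TRUE_PREFERENCE))
    history.append(actual)                                 # 3. fresh data feeds back in

early_error = sum(errors[:50]) / 50    # error while data is scarce
late_error = sum(errors[-50:]) / 50    # error after 450+ observations
```

With more accumulated data, `late_error` comes out well below `early_error`: exactly the "more data, better predictions" dynamic the text describes.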


Why This Matters

Machines are learning and providing more personalized services all around us. Google's searches are better targeted after several years of investing in machine learning. Facebook's timeline and Amazon's product recommendations are better targeted at each individual. Better targeting and more personalization make the services more addictive. More addiction means more revenue.
And the results are starting to show.

Over the past year, Americans' consumption of media and entertainment has increased by 11%. That's an increase of 1 hour per day, from 9 hours and 39 minutes per day to 10 hours and 39 minutes per day. That's a big increase, especially when you consider that there was no increase from 2014 to 2015. Something big has happened in the last year.

Breaking down the data makes it clear. None of this happened on traditional TV and radio, or even time-shifted TV like Netflix and Hulu. Instead, all of the increase in time spent came from using smart machines: tablets (up 63%), smartphones (up 60%), multimedia devices like AppleTV and Roku (up 44%), and Internet usage on a PC (up 21%). On a pure minute basis, the biggest increase came in smartphone usage, increasing from 1 hour per day to 1 hour and 40 minutes per day. This is almost 40 more minutes per day of machine learning-driven media.

What is an extra hour per day worth? It depends on the media you're consuming, but I like using Facebook as a benchmark. Currently, Facebook makes $0.16 per hour spent by an American on its site. This means that Facebook would generate an additional $15.5B in revenue if this is how all Americans spent their additional hour per day. That's an astounding increase, since Facebook currently generates around $10B / year in the US and around $25B / year worldwide.

If building addictive machine learning-driven services can create such a big revenue opportunity, the question is:

How Much More Time Can We Spend with Media?

I'm fascinated by this question. If we're already spending 10.5 hours per day with media, how much more can we spend? And how can we be spending that much time with media anyway?
The answer is multi-tasking. We are taking multi-tasking to levels that would have been thought impossible a few decades ago. In fact, this is a measurable effect. Our day is now 32 hours long. I came to this answer after combining data from 10 sources across 70 different activities that we spend time on every day. (You can check out the data for yourself; it's included in the back of this issue.)

American adults engage in 23 hours of activities in the 15 hours we are awake each day. Most of the time is spent as you'd expect: an hour of eating & drinking, an hour of driving, two hours of socializing, three and a half hours of working (accounting for all the non-workers), and twenty minutes of exercise. But when you add in the 11 hours of digital time, it's visible how much multi-tasking we're doing. While we exercise, we're listening to music. While we drive, we talk on the phone. While we watch TV, we check Facebook.
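Both back-of-envelope figures above, the $15.5B revenue estimate and the 32-hour day, can be reproduced with simple arithmetic. The US adult population figure below is an assumption chosen to make the revenue math explicit, not a number from the article:

```python
# 1) The 32-hour day: 23 hours of measured activities squeezed into
#    15 waking hours implies heavy multi-tasking; add the ~9 hours of
#    sleep and the "day" totals 32 hours of activity.
activity_hours = 23
sleep_hours = 24 - 15                        # awake 15 hours -> asleep 9
day_length = activity_hours + sleep_hours    # 32 "hours" per day

# 2) The $15.5B estimate: Facebook earns ~$0.16 per American per hour;
#    one extra hour per day, across an assumed ~265 million US adults,
#    for a full year:
revenue_per_hour = 0.16
adults = 265e6                               # assumed population figure
extra_revenue = revenue_per_hour * adults * 365   # ~$15.5B per year
```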

Some of this is intuitive. What's not intuitive is how rapidly and how completely our machine time is no longer a one-way, passive activity. Machine learning creates feedback, a two-way interaction of which we are barely aware but which is changing our use of time.

More than 43% of our awake hours are spent on some activity that involves a machine. And those machines are getting smarter, which allows the machine to do more for us and allows us to do something else at the same time, frequently spending more time with machines. So the machine time results in us spending more time contributing to the algorithms which, in turn, get smarter about us.

It's unclear how much more time we can spend with machines, but I think there's still a long way to go. For instance, we currently spend an hour driving per day on average. That time will be freed up for other activities once car companies start shipping self-driving cars in the next 3-5 years. In a survey for our Intelligent Car Consumer Report, consumers told us using technology is one of the most popular ways they will spend their time while the car is driving. It will take a long time for the car fleet to convert to self-driving cars, but there is an extra hour per day available, or maybe more if we are multi-tasking while the car is driving. And what about as wearables get adopted and there is another interface to use? How much more time would I spend with my mobile or wearable if I could converse rather than type, chat rather than click?

Wall Street is concerned with a slowdown in Facebook growth. This analysis shows that there isn't an issue with time. The question is whether Facebook (and other media companies) can keep increasing our addiction and extending the time we spend with them. Time is money, which means companies are racing to build their machine learning expertise to get more of our time and money.

What are the Best Companies Doing?

Stockpiling data. Everywhere you turn, a company is asking permission for your data. Well, those who are upfront about it. Most of the time you simply don't know that your data is being collected. Every click is counted, correlated with everything else known about us, and categorized with others like us. This is what makes our digital lives so rich. The more that Google knows about me, the more likely it is to present search results that interest me. That's good for me, and for Google, since I'm more likely to click on a link.
Corralling PhDs. There is a hiring frenzy like we've never seen in Silicon Valley. The big tech companies (Alphabet, Amazon, Apple, Facebook, Microsoft, Salesforce, etc.) are acquiring small machine learning companies left and right for their people (aka acqui-hiring). An AI-focused venture capitalist recently told us that the last dozen acqui-hires were valued at $2.4M per person. That's a lot of money to pay to be able to employ someone. We also heard a story recently about Google making their highest offer ever to a recent PhD graduate, only to have Microsoft double the offer. Google matched it, but the PhD went on to found a company instead.
Why are PhDs so valuable? Machine learning is data science. Science is the operative and important word. This is a world of iterative experimentation and validation. Deciding what algorithms to use and how to train them well requires years of experience. Most of that experience comes from academia, making professors and PhD students the new rock stars. Go to any machine learning conference and you'll see that the grey-haired professors get all the attention. Quite a change for Silicon Valley, enamored with the hack mentality for so long.
It's all about learning.
Machine learning is about scale and scope. It's the amount of data that makes machine learning necessary. With machine learning, it's possible to efficiently access vast new data sets, discover new knowledge, and create new inventions. Now that computers can "see" images and understand natural language, we can make use of vast amounts of social data in ways that have never before been possible, laying the foundation for new consumer experiences in virtual and augmented reality, for example.
This is not in any way limited to social internet data. Every process, every industry, every consumer experience has the potential to benefit from this technology. In many cases, these algorithms outperform humans at complex cognitive tasks. For example, one computer vision company has an algorithm that can analyze almost 34 million radiological scans in 8 days, a task that would take 1 human 1,282 years. In a competition with radiologists, the human rate of false negative results was 7%, the algorithm's 0%; the human false positive rate was 66%, the algorithm's 47%. There's simply too much to gain from machine learning not to adopt it.
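The throughput comparison above implies a speedup we can compute directly from the quoted figures:

```python
# Sanity-checking the radiology comparison with the numbers quoted above.
scans = 34_000_000
algo_days = 8
human_years = 1_282

algo_rate = scans / algo_days              # scans per day, algorithm
human_rate = scans / (human_years * 365)   # scans per day, one human (~73)
speedup = algo_rate / human_rate           # ~58,000x faster

# Error rates quoted in the text:
human_false_negative, algo_false_negative = 0.07, 0.00
human_false_positive, algo_false_positive = 0.66, 0.47
```

The algorithm is not just faster; on both quoted error rates it also makes fewer mistakes than the human baseline.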
But the question remains: who will benefit? Which countries, communities, companies, and individuals will have access to these new AI technologies? Will these technologies be deployed democratically, or will there be an unfairness that creates a new era of haves and have-nots?
The people and machines making and using these
new AI machines constitute an intelligentsia of
sorts. Those in the intelligentsia will have great
opportunities to benefit economically,
intellectually, and socially. A deeper and wider
digital divide may separate others.
Intelligentsia Research is focused on the key
machine learning businesses and technologies, the
growth of which will diffuse AI through the
economy and our daily lives. We believe AI will have
the biggest impact yet on our modern human experience: bigger than the industrial revolution,
the information technology revolution, the Internet
revolution, or the mobile revolution. This change is
underway and it affects every government,
institution, company, organization, and individual.
Our research aims to demystify the impact of this
new phase of intelligence, revealing where
machine intelligence creates shifts in wealth,
opportunity, and personal experience. We will
bring you analysis of the key companies, opinions
from key individuals, and data about the
technologies to help you profit from AI at work and
at home. We hope you will engage and challenge
us. Join our community debate on Facebook:

AI and Machine Learning Case Studies

A key question for CxOs: What machine learning applications should I consider adopting today? To help answer this question, we studied more than 70 case studies from the start-up world where retailers, professional firms, financial institutions, hospitals, medical practices, research labs and production businesses have used machine learning applications. We then ranked the cases on three criteria: impact (size of the bubbles), data leverage, and ease of solution. More details about the ranking criteria are at the end of this section.
We found that the top-ranked use cases were in marketing / sales / advertising and in legal. There are three reasons for this result:
- Language AI has evolved to the point that there are now domain-specific machine learning-based products that are providing significant ROI, performance and efficiency gains
- All of these areas have the potential to tap into large data sets that provide high leverage to any individual customer
- These areas employ high value / cost employees, so adding leverage is especially valuable

What we found

algorithmic approach was applied in a highly

complex industrial process to solve a multidimensional problem where it was simply not
possible for a human to discover the critical
AI is hot in marketing, as new things become
New social data, sentiment analysis techniques, and
AI web analytics are taking the spray and pray
guesswork out of marketing campaigns. With the
ability to adjust a campaign in real-time, marketing
is in transition. Most marketing-focused AI is fairly
standardized in terms of metrics and deployment
complexity. Of course, marketers write many case
studies so theres clearly some selection bias here.
Relationship management is being automated with
natural language AI
Natural language AI is impacting many touchpoints
in customer and employee relationship
management. As the AI gets more and more
accurate, its clear that these processes will become
increasingly AI-first.
AI is doing the heavy lifting in legal, regulatory and
financial functions.

As expected, there was a wide spread in the results

and these only reflect a small sample of commercial
machine learning use cases. But we found some
interesting general themes.

Increasingly, humans are being freed up to take on

more complex tasks with automation of big data and
text analysis well established in many firms.

Usability of technology and applicability to multiple

data sets or use cases drives impact

No one wants to talk about how good (or not) their

fraud detection is

In general, in order to get a high impact machine

learning result, the solution needs to be easy to
deploy, easy for humans to act on and have high
applicability in many different areas (i.e. uses data
sources that may apply to many different cases).
However, there are some interesting exceptions,
namely, where highly specialized approaches can
solve previously intractable problems. The best
example we found of this was where an evolutionary

Fraud detection improvement is one of the key AI

use cases. But good data is hard to find.


Medical is different, but you knew that

We did not find many medical use cases. This isn't surprising, as medical analyses that work are to be found in the medical literature rather than start-up case studies. Having said that, we did identify a couple of exciting results: one using deep learning for image classification and the other using evolutionary algorithms for model discovery. The standout with these case studies was that the AI had changed a standard medical practice. In one, the AI had enabled a non-surgical diagnosis method, and in the other, computer imaging now provides diagnosis more reliably to more patients by avoiding the need for complex diagnostic access.

[Chart: AI Case Studies by Use Case. Bubble chart, x-axis: Data Leverage, y-axis: Ease of Solution. Categories: campaign management, fraud detection, marketing analytics, online marketing, process improvement, product management / design, relationship management. Source: Intelligentsia Research]

[Chart: AI Case Studies by Machine Learning Category. Bubble chart, x-axis: Data Leverage, y-axis: Ease of Solution. Categories: ensemble / multiple, Language AI, Topological Deep Learning, Vision AI. Source: Intelligentsia Research]


Advertising, Marketing & Sales

[Chart: bubble chart, x-axis: Data Leverage, y-axis: Ease of Solution. Categories: ensemble / multiple, Language AI, Topological Deep Learning, Vision AI. Source: Intelligentsia Research. Includes: marketing analytics, online marketing, relationship management, campaign management]

[Chart: bubble chart, x-axis: Data Leverage, y-axis: Ease of Solution. Categories: ensemble / multiple, Language AI, Topological Deep Learning. Source: Intelligentsia Research. Includes: product management and design, process improvement, legal / financial / risk]

How we did the ranking

The case studies were sourced from companies that we are researching. These are all private companies that are developing machine learning-based technologies or applications. In many instances, we spoke with technical people involved in the implementation. In these cases, companies shared more information about the case study than was available publicly, which enabled a deeper exploration of the benefits of machine learning.

Impact: the impact of the result, ranked based on:
- The scope of the result: business-wide or limited, or beyond the immediate application
- The improvement from an initial baseline: what's possible with new data, what's possible over and above the traditional approach (this could be time-, cost- or quality-centered)
- Whether the AI enabled a new process or a new knowledge level
- Whether the AI became embedded as "the way things are done now" versus a nice-to-have

Ease of Solution: the ease and completeness of the solution, from implementation approach to the way the resulting information is presented:
- Does not require an expert to set up and run
- Does not require an expert to interpret results into actions or insights
- Delivered via API or application with ready-made, configurable visualizations and reporting
- High level of transparency in either the way the AI works or by nature of the outcome

Data Leverage: the ability to access a common or shared data set:
- The degree of applicability and generalizability across other, similar business processes
- Low requirement for large proprietary data for effectiveness
- The ease of applicability of the case study to other customers due to the universal nature of the solution
- Whether the resulting change to a business process has diffused outside of where it was first implemented
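The three ranking axes can be sketched as a simple scoring scheme. This is an illustrative reconstruction, not Intelligentsia's actual methodology; the sub-criterion marks and the equal weighting are invented:

```python
# Illustrative sketch of the three-axis ranking: each case study gets a
# 0-1 mark per sub-criterion, and each axis score is a simple average.

def axis_score(marks):
    """Average a list of 0-1 sub-criterion marks into one axis score."""
    return sum(marks) / len(marks)

case_study = {
    # Impact: scope, improvement over baseline, new process, embeddedness
    "impact": axis_score([0.9, 0.8, 1.0, 0.7]),
    # Ease of Solution: no expert setup, no expert interpretation,
    # API/app delivery, transparency
    "ease_of_solution": axis_score([0.6, 0.8, 1.0, 0.5]),
    # Data Leverage: generalizability, low proprietary-data need,
    # cross-customer applicability, diffusion
    "data_leverage": axis_score([0.9, 0.7, 0.8, 0.6]),
}

# Plotted as in the charts: x = data leverage, y = ease of solution,
# bubble size = impact.
x, y = case_study["data_leverage"], case_study["ease_of_solution"]
bubble = case_study["impact"]
```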

A Conversation with Jenna Niven

Advertising and marketing is one of the hottest areas for AI innovation and ranked high in our use case study. To get an insider perspective, I spoke with Jenna Niven, associate creative director at R/GA in New York. She has a focus on technology and has delivered award-winning campaigns, developed solutions and created memorable brand touchpoints for clients including IBM, Netsuite, Microsoft, J.P. Morgan and Cisco. Jenna is one of a few industry executives with early and practical experience applying AI and machine learning technologies in advertising and marketing. I spoke with her about how AI is changing this industry.
Helen: What impact do you see AI making today on
marketing and advertising?
Jenna: Advertising and marketing has changed significantly from the days of purely running a campaign over print. We have not only a multitude of channels we can communicate through but also the ability to develop products and systems to meet the exact needs of a particular audience. For example, when we look at improving the effectiveness of a customer support tool, we can now create new ways to communicate with the audience, maybe a chatbot, and use sentiment analysis to gain insight into a person's temperament at that exact moment. We also have tools for analyzing a whole different set of data, both structured and unstructured, for example call center recordings. This enables us to better understand a customer's needs and be able to personalize their experience more effectively in real time. So advertising is moving from traditional campaign work to a service that helps access a particular audience in more effective and diverse ways, perhaps with new products, perhaps with new business models. The main impact of AI will be enabling this.
Helen: So being able to create new products and
solutions at scale and in real time?
Jenna: Yes. For example, a website is now a product in itself, not just a static place used for developing brand awareness. We could create a tool for a website that changes a customer's experience based on previous visits. Advertising and product development are merging, becoming a more dynamic service, not just for brand awareness but creating new systems that support brand awareness and customer experience.
Helen: Speaking of website tools, is the chatbot
hype ahead of the reality?
Jenna: Definitely. The problem is that clients view chatbots quite negatively. This is usually because of a bad experience in the past or some of the recent bad press. There have been a lot of developments in natural language processing and in machine learning, and chatbots are now a lot more sophisticated. Some clients are taking baby steps with chatbots and using them to learn. The benefit of a chatbot is multifaceted; it's not only about being able to have a conversation with a customer, but the ability to parse information from the customer and keep a history, to analyze sentiment and to make its solutions smarter and more accurate over time. I think the more important issue generally is determining what specific AI capability is right for the client. It's not always going to be a chatbot; there are a lot of other AI solutions out there, and we start with the client's needs.

Helen: Do you see chatbots being able to play a role in generating brand awareness and publicizing and executing marketing campaigns? Can a chatbot be the voice of a company and go out and bring in a customer, or is it still really just about customer service?
Jenna: Definitely, a chatbot can be part of a campaign and act as a product to support the brand. But this raises important questions like: how do we teach a chatbot to be the voice of the brand, and how do we get it to talk in the right tone of voice and use the right language? I haven't seen anyone do this well yet, but it's an exciting area of development.
Helen: How would you describe talking to clients
about AI and machine learning? What general level
of awareness do you find?
Jenna: It depends on the industry. Financial sector clients tend to have a relatively high level of awareness of AI but no experience of practical applications beyond analysis. They haven't been exposed to ideas for new products and services that are AI-based. Consumer sector clients tend to be more aware because they've used AI before, say on their website as a recommendation engine. But in general we see most clients being cautious about building new products based on AI because there isn't enough out there already. This is very much an early adopters' market (and advantage!). We will see more adoption as more solutions using machine learning are trained and deployed. There's not a lot out there that will blow you away yet. It's still early days.
Helen: Do you think AI will change what is
measured in advertising and marketing?
Jenna: Yes, and that's driven by the ability of AI to handle huge quantities of data. We now have so many more data sources; we can do collective analysis, we can drill down in ways that haven't been possible, and we can do all of this in real time. This is a massive opportunity for AI. It's a huge shift to go from running a campaign, collecting information and analyzing it days or weeks later to being able to analyze in real time and get insights about how to augment the campaign, such as changing the messaging, the channel or the time of day. The level of granularity we can now go to during a campaign is a very big shift. For example, take football, where you have a high-volume audience. Now it's possible to see significant change in real time. For these large audiences in a specific time frame, it could now even be possible to go down to a specific moment in the game and target a campaign around the sentiment during that actual play.
Helen: Do you encounter resistance to the AI
opportunity because of this level of change? Are
some clients better positioned than others to move
to that level of sophistication more quickly?
Jenna: If you take the traditional process (running a campaign, waiting for it to finish, then the manual process of assessing the effectiveness of the campaign, which even then doesn't do a very good job because it doesn't take into account all sorts of other factors), you'd expect people to embrace the new AI solutions. But, yes, there are areas of resistance, based on people not being sure if it will make their jobs more complicated or more difficult to manage. Most commonly, people aren't able to visualize the workflow process that's generated by AI. Again, I think it will take a few early adopters to figure it out, explain it to others and be able to demonstrate how it makes their jobs easier as well as their campaigns more effective. It's a natural fear-based reaction, and I think this is a conversation that has to be had over time with people who have experienced how AI enhances their role rather than making it redundant.
Helen: Advertising prides itself on being a creative industry. How is the creative process being changed?

Jenna: The creative concept is always going to be a human activity. I think the backend of the process is what AI best drives: the analysis, the new data sources, the sentiment, what the audience likes, the strategy to support the creative concept, the whole creative process. Everything, including the ideation, the use of AI, experience design, prototyping and testing, will change. The people who will succeed are the ones who understand the fundamental concepts of the AI capabilities. But machines won't come up with a creative concept anytime soon!
Helen: What happens when a machine generates an outcome based on a black-box algorithm but a creative director disagrees with what the machine recommends?

Jenna: Yes, that's interesting! Who are you going to believe: the experienced ad person who's been in the industry for 20 years, or the machine that's been around a couple of months but is now analyzing the audience in real time with a lot of sophistication? It's a bit of a leap of faith. We will have to think in more nuanced ways and think about the context of the campaign. For example, take a brand awareness campaign, which tends to be more long tail, where we don't see the effect for a while, versus a product campaign that's measured with a conversion rate, where we can see the uptick straight away. I think some campaigns will be more suited than others to the sorts of approaches that AI offers. I can see a bell curve approach where AI can effectively target a specific group, say early adopters, and then build up data on the influence of that group over time. These types of campaigns have been a bit of a gamble in the past, but I think we can now use AI to make a better case for that approach.
Helen: What have you found on the vendor side?
Jenna: We've had the most success with start-ups, and I'd say that's mostly because start-ups are more likely to be open to collaboration with a third party. Our focus is keeping the lines of communication open with everyone. Small start-up vendors are very open to being part of the overall solution that we want to offer. We've found that's been well received by clients too. And we've had the best success with companies that have a highly specialized offering that addresses a specific need in the market. We've worked with one company that has a very specific product around contextual content; it's very granular and very effective, and it's clear they will be successful.
Helen: There's a lot of focus on bias and fairness in AI. What are your thoughts on how this applies in advertising?

Jenna: I view this as a risk mitigation issue. The advertising industry needs to think carefully, because personalization can be counter-productive and end up too narrow. If the goal is to personalize, how do we make sure we don't alienate a person or constrain what's offered to them? On a broader note, I believe AI needs to be developed by a diverse group. I like the analogy of AI as a baby; it needs to grow up in an environment where it's exposed to a whole range of views, ethics and morals as the way to keep bias at bay. At an advertising level, we need to make sure we are actively aware of stereotypes, or if we use AI, we need to make a point of checking that we aren't reinforcing a stereotype. I think this risk is real and it's inevitable, but I don't think anyone has really tackled managing it at the level of workflow yet. We don't want to miss the edge cases.
Edited for clarity and brevity.


AI: The New Electricity

By Helen Edwards

At a recent event in San Francisco, Andrew Ng, the Chief Scientist of Baidu, suggested, "AI is the new electricity." He later ran a Twitter poll where the results were resoundingly positive for AI being as positive for humanity as electricity has been. It's a good way to conceptualize the uptake of AI as a diffuse, enabling infrastructure (broadband and mobile uptake have been used similarly). The analogy works best to describe a transformative technology which, when mature, has a small unit cost compared to the value it creates. In other words, a high utility value. In the case of electricity, it was the ability to do work in untold different ways: heat, light, and kinetic energy. In the case of AI, it's the ability to turn information into insights and actions we can use automatically, without human translation or effort.
But there is more to the electricity / AI analogy than
a conceptual metaphor.
I have spent 20 years deploying intelligence in
electricity systems, integrating customer
experience with technology and implementing
systems designed to augment human intelligence.
At this juncture, where AI is moving beyond expert systems and Big Data to machine learning and merged human/machine intelligence, there are many parallels, lessons, and insights from electricity. Here are ten from my experience that are relevant to machine learning and the new AI.
#1 It's what you don't know about a customer that can matter most
Smart meters pushed utility companies' customer information into the modern era. Sophisticated utilities now have very accurate customer information. Best-in-class utilities can precisely forecast a customer's bill and how their demand will vary over a day, a week, and a year. In terms of the accuracy of a prediction made from big data on an individual customer's behavior, a modern utility company only needs a weather forecast.


But that doesn't mean they know their customers. Utility companies are notoriously, and often, caught out by the data they miss: the individual stories. The models can now, technically, define a customer down to what appliances they run and when. But the model can never pull together the backstory of a customer's life. The single mom who stops bathing her kids every night to save money on hot water. The seniors who mistakenly believe their power is cheaper at night, so turn off heating during the day, raising their risk of respiratory disease. I've sat on hundreds of calls in utility call centers and have never heard the same story twice; every one is a single-origin narrative. Machines don't have narratives; customers do.
When it comes to persons rather than personalization, we need smart AI to help guide us to everyone's unique narrative.
#2 Too much technology push can strand early adopters, frustrate advocates, and crater hype cycles
Online half-hour data, instant alerts, downloadable use data, configurable appliances, home area networks. A decade ago, anyone who criticized large-scale, early, technologically risky investment in the Home Area Network was an outlier. It was unfashionable to question whether this was really what customers wanted in a smart home. The prevailing view was that a public hungry to eat their energy data would tolerate technology imperfections. Electricity might be an important gateway to the home. An entire ecosystem of regulation, technology, and startups was set up to make this work. But a lot of it was based on assumptions that became entrenched myths about what consumers would actually value.
The reality was far more mundane. While customers were initially excited by new information, the reality of electricity use for most people is that it's boring. The decay curve for engagement in home energy information is rapid.

It's not the information that matters; it's the ability to have the technology and the intelligence retreat behind the scenes. Today the smart home as originally envisaged is still not ready. It's fiddly, unreliable, and requires either an expert or a lot of patience to implement. Price isn't even a good indicator of quality. In this age of intelligent everything, there still is not a seamless, reliable, simple, well-designed single solution for security, lighting, heating and cooling, and entertainment connectivity in the home. I still have four remote controls with way too many buttons.
The technology frontier is clearer now. The home will be a voice experience. All the problematic networking technology and hardware fragmentation that has held UI design hostage is about to be rendered obsolete, driven into the background where it should have been all along.
#3 High cost needs the luxury of time and luxury
Elon Musk and Tesla have defined electric vehicles. His unapologetic focus on the elite car experience, with price and performance to match, has enabled Tesla to beat enormous odds so far. But it's taken more money and time than even he imagined. Electric vehicles are making steady progress. Perfection took time, and it took a market-segment strategy with trend, rather than price, sensitivity to make important technology breakthroughs. Trickle-down, or even horizontal transfer, has been orders of magnitude more difficult than the optimists hoped for. Hybrid technologies still claim a lot of the territory that the mid-level EV folks hoped to capture.
Today, there's a similar situation in virtual reality. There's no doubt that virtual reality is cool. A really high-end VR experience is an extraordinary, mind-altering experience. Even Google Cardboard with an iPhone is pretty neat. There's a lot of hype right now about VR moving from gaming to everything because presence can now be achieved. But it's wise to be skeptical. As with the smart home, decades of history in the technology have given rise to a set of assumptions, and perhaps myths, about how people will want to experience the technology beyond marketing demonstrations and legacy applications such as training. Critically, we simply don't know how much and for how long consumers will want to engage with a product that involves so much sensory manipulation. As with the smart home, it's a highly complex and dynamic ecosystem with key aspects of usability still presenting technical hurdles. As with electric vehicles, there are multiple players but only a few have positioned as premium. For all the PR, it will take longer, and require excited, loyal customers who are happy to wait in anticipation, before VR moves beyond perfectly expensive gaming transformations and super-specialized education use-cases, such as pediatric cardiac surgery, into the world of everything.
Perhaps by the time VR is both good enough and affordable enough, advanced augmented reality (AR) may well be superior in many applications where VR is currently forecast for widespread and insatiable growth. It is possible people don't want trippy VR experiences for video conferencing, or to feel motion sick when considering a virtual home purchase, or to experience self-conscious remorse at making a dick of themselves in front of a group of people who couldn't see what they saw. It is possible that VR experiences are limited in ways as yet unanticipated. There isn't even yet a good description of what VR isn't. We won't really know until it happens.
#4 No one likes a black box, especially when it's run by a monopoly
Hackable homes, health risks from new technology, personal data shared with other companies, algorithms that spit out graphs of one home compared with a neighbor's, staff who can't explain the rationale behind it all. These spelled disaster for the utility industry in the early days of the smart meter. People didn't want technology and graphs; they wanted choice and control. When it's perceived that choice and control are taken away while more decisions are embedded in software, there's a backlash. One frustrating aspect is that once trust is threatened, it becomes hard to introduce new products, even good ones.

The electricity industry was an early pioneer in dynamic pricing. Initially invented by theoretical economists with the overarching goal of reducing engineering gold-plating and over-investment in under-utilized peak assets, it has been fine-tuned and implemented as an economic price signal to stimulate new investment or curtail demand. This academically pure approach was an extremely challenging idea to sell to a captive audience of electricity customers who were already resistant to a monopoly they didn't trust. The perception quickly took hold that the new pricing was punitive and a moneymaker for the utility. The reality was that most customers would save money and, over time, costs for everyone would go down. But it was an impossible target. Millions of dollars have been spent trying to convince the public of the benefits of variable pricing. Truly innovative retailers have created great new products around dynamic pricing, but the dominant pricing paradigm to keep customers happy remains a simple, flat, unchanging price.
Uber, by comparison, successfully implemented dynamic pricing with the simple statement that "surge pricing allows us to get more drivers on the road at busy times." Choice and control remain in the hands of the customer by virtue of the fact that Uber isn't the customer's only viable option. Google and Facebook stand out at the moment as akin to monopolies. Wise use of this power now would be a good strategy. Newly deregulated electricity retailers would agree: the best time to build customer loyalty is when you're a monopoly.
#5 Doing stuff with data is messy. And often political.
How many engineers does it take to name an asset? Turns out that it can take six months and the same number of engineers to battle out whether a tower is a pylon or a pylon is a tower, adding time and cost to an already strained data-cleaning project.
There's now a large body of evidence to suggest that Big Data projects, especially those involving porting of legacy data, are failing to meet their target ROI.
The act of naming something establishes a right of possession that can assume proportions beyond a simple technical decision. I've heard specialist health data scientists remark with frustration that every physician in their hospital uses a slightly different term for heart attack and that they are unwilling to have their preferences subverted by those of another respected colleague. If medical fiefdoms prove to be anything like those of utility engineers, we may be waiting a while for the kind of electronic health records we will need in order to meet the high expectations for healthcare-centered AI.
A precursor for useful AI is AI that deals efficiently with data. It's probably a good thing to get people out of it, so they can get on with something more interesting.
#6 Effects of automation can be counter-intuitive
Early advances in AI projects in electricity were in power system control. The "laptop on the beach" was a vision to build a control system that would enable the power system to be run from a laptop on a beach, preferably on a tropical island in the middle of the Pacific. The idea was to rely on advanced telecommunications infrastructure to monitor a fully automated, AI-smart control system while being on permanent vacation a long way away from the power system itself.
It was a wonderful idea, but a complete fantasy. Even before all the technical issues were scoped, it was quickly obvious that autonomous power systems would be politically unacceptable. In much the same way as we are years from removing the pilot in commercial aviation, it matters not that the majority of failures (outside of natural events) are caused by human error; we all want the human there. One prediction made by an aviation AI researcher was that by 2020 the cockpit will only contain a pilot and a dog: "The dog is there to bite the pilot in case he tries to touch anything." We want the pilot there for the rational reason that a person could well perform at something the machine cannot, because trained experts will still have the edge in extreme edge cases, but we also want them there for accountability: someone to blame when things go wrong, because blaming a machine is entirely unsatisfactory.
The desire to blame and our tendency to trust human intuition will keep machine dominance over experts in check for a while yet. In fact, this is an example of the opposite of technological unemployment. As more AI went into the power system, control systems got bigger and supremely technical, and it became clear more human expertise was needed, not less, and what those experts needed were better interfaces: visualization of events and states in particular. The power system controller workforce has gone through a revolution. Where previously controllers were brought in from years of experience in field operations, a modern power system controller is now likely to be a college graduate in math or computer science. They may also have previously trained as an air traffic controller or similar. They perform more simulations and planning studies, which use more advanced tools. The result is the system can now do a lot more to meet the demands of more complex scenarios, which include intermittent generation sources such as solar and wind, as well as more flexible demand as the system responds to changing conditions.
The real smart grid is as much about ongoing upskilling of humans as it is about intelligent systems. This is a significant contributing factor to the increased resilience of the grid in the face of increasing complexity and under-investment in the hard assets of power transmission. A similar polarization may happen as AI diffuses into other expert professions, enabling the experts to handle more complexity and build a new level of robustness on existing systems. It's not all doom and gloom for employment.
#7 AI loves probability, people prefer certainty
Y2K was a fun time in IT management, especially in critical infrastructure. Money wasn't a limiting function when it came to preparing for the ball to drop. It was ideal timing to do some interesting applied R&D on the biggest data we had available: system control and data acquisition, as well as other power system data sources. This was heavy operational data, sometimes down to the millisecond, about the performance of the power system in real time and in real circumstances. To this "big data" (before the term existed), we applied leading-edge intelligence tools and industrial-strength stats in a way no one had done before. And it was gold. Spectacular correlations, predictions we didn't know we could make. But we lacked the ability to change our response, because machine learning is probability and control theory is measure, compare, compute, correct.
As a simplification, machine learning uses calculus and statistical tools and looks to predict the average case. Control theory, on the other hand, looks to build a physical model of reality and alerts us to the worst case. While it was good to have more information on the average case, or even a probability for the worst case, the systems used to control the response are built on real-world models and, therefore, respond based on real-world capability. The control system couldn't take a probability as an input and solve for that as an output. A lot more data was required than we had the resources to acquire in order to test and prove enough examples to be able to transition from known control states to prediction.
We are not in a dissimilar state with self-drive vehicles. While millions of miles have been driven, we may need hundreds of millions or even billions before we have the ability to move from human control to the probabilistic models of self-drive vehicles.
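The mismatch between the two paradigms can be sketched in a few lines. The code below is a hypothetical illustration, not an actual power-system algorithm: the loop implements the deterministic "measure, compare, compute, correct" cycle, while a learned model would only hand back an average-case prediction that such a loop has no input for.

```python
def control_step(setpoint, measured, gain=0.5):
    """One pass of a simple proportional controller:
    measure, compare, compute, correct."""
    error = setpoint - measured   # compare the measurement to the target
    correction = gain * error     # compute a proportional correction
    return measured + correction  # correct the system state

# Deterministic control: repeated passes converge on the setpoint.
level = 10.0
for _ in range(20):
    level = control_step(setpoint=50.0, measured=level)

# A machine learning model, by contrast, would return something like
# {"predicted_load": 48.2, "p_overload": 0.03} -- a probability that
# this control loop has no slot to accept as an input.
```

The controller's output is a concrete actuation; the model's output is a distribution, which is exactly the translation gap described above.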
We also learnt that people don't know what to do with a probability; a probability of any given event simply wasn't very useful. Controllers had a short list of levers they could pull, mostly in a binary way, to respond to any particular situation. The flexibility of the actions simply couldn't match the diversity now present in the problems.
We fear spectacular, unlikely events more than events we consider more ordinary, and we fear more the things we think we can't control. Self-driving cars are expected to be safer, but we will need to perceive self-driving cars to be much, much safer than our own driving before we hand over control. Human error as a factor in aviation accidents is estimated at 80%, yet we still have pilots in control. There are no dogs yet.
#8 It takes a lot to go against intuition and AI should be persuasive
We prefer avoiding loss to acquiring gain. In fact, losses are almost twice as powerful. And many decisions we make are governed by heuristics, rules of thumb, some of which may be tainted with strong emotional contexts or prior stereotypes. When AI was trialed in predictive maintenance, most assets (except the ones at extremely high levels of use) were predicted to require significantly less time out of service for maintenance and longer periods between services. These savings (both in direct cost and in loss of service) were significant, sometimes up to 30% with the most frequently serviced units. However, changing the policy wasn't easy.
Engineers are risk averse. An engineer has a high hurdle before they will go out on a limb and embrace a new policy. Even with high-quality data and analytics, this change took years (prototype, pilot, review, non-critical) before moving finally to critical pieces of equipment. It speaks to the asymmetry of risk that many professionals in critical industries face. One failure can negate an entire career of success. We need AI that can help with these transitions, baby steps as required.
#9 Monopolies are regulated and regulators can be quite creative
Regulation started for electric utilities in 1935 with the passing of the Public Utility Holding Company Act, 56 years after the invention of the electric light bulb. Around the world, regulation is a significant component of life in a utility. A regulator's decisions can change the business rapidly and in fundamental ways. Regulators have split competitive activities off from the natural monopoly business of pipes and wires, have broken companies apart to reduce either size or scope, and have forced structures such as requiring innovation in energy efficiency and decoupling profit from electricity revenue. In extreme cases, energy companies have been nationalized, in whole or in part. Electricity can also be a way for a state to stimulate its economy by offering attractive rates or other incentives for businesses to establish themselves in a particular region.
In electricity the drivers are simple to understand and transparent: geographic natural-monopoly price control and socio-political factors, such as fairness, anti-discrimination and support for low-income consumers, raising energy-efficiency standards, giving customers choice through competitive markets, and system-wide cost control. The electricity industry is probably the only industry that, worldwide, has fairness at the heart of its regulatory framework.
Consider this: a new breed of monopoly is being created by data. Andrew Ng proposed that data is now the only defensible barrier to entry in internet businesses where everything is based on the value of the network. Data is incredibly difficult to replicate, and many products are launched solely to acquire data. In game theory, something that competitive electricity markets regularly obsess over, the rational strategy is to game the system up to the point of regulatory intervention. The large US technology companies have hit this threshold in many markets: Facebook in India, Apple in China, Google in Europe.
Regulatory thresholds are difficult to predict and costly when crossed. Once regulators taste initial success, they tend to move in closer, and the power struggle begins. In the case of the large internet companies that now dominate search and media, regulators must feel a thrill at the theoretical possibility of creeping regulation. It's not difficult to foresee scenarios such as splitting data out from infrastructure or forcing companies to allow customers to choose configurable or competitive algorithms. These aren't technically infeasible, especially if one imagines open or competitive markets in intelligent systems. One of the key enablers of competitive electricity markets was the development of the information systems that allowed the value of a delivered electron to be separated from the assets that got it there.
#10 Once the economy is dependent, it's a matter for national security
Whether it's cyber-security risk, natural disaster recovery, or grid operational security standards, government regulation and oversight is a fact of life. Government regulators are highly technical and in the weeds when it comes to regulating grid security, because security policy cannot be separated from cost and risk analysis. In other words, from the algorithms.
As machine learning algorithms confer increasing economic and social value, and as robots become increasingly sophisticated, strong, networked, and coexistent with humans, security regulation will follow. We cannot risk the development of machines that can be commandeered by a malevolent agent once we are dependent on truly useful AI. It's not at all clear how this regulatory framework would be instituted. The Apple-FBI battle distracts us into thinking this is customer agency versus big government. It's not. At its logical conclusion it's an existential threat. Secure and safe AI is a national security priority.
Electricity is a mature industry; demand and supply forecasts have reasonable certainty, any debate on a solar "death spiral" notwithstanding. The value of the electricity system is in its connectedness. Once in place, it's extremely hard to displace this value and extremely hard to disconnect from it. Every intelligent innovation in the network has made the core commodity of electricity more valuable, not less. Governing the system is a delicate balance of physics, economics, and politics. Advanced AI and machine learning software diffusing through our everyday lives will be as significant as electricity, perhaps more so.
The electricity industry is sincere in honoring its history: the inventors and engineers who had a vision of a better society. AI definitely shares this legacy, but it's not enough to rely on the goodwill of the founders. AI will need its own versions of fairness, safety, and security governance. One prediction that's easy to make from the electricity parallel: engineers, enjoy your time in the sun. The theoretical economists, regulators, and social advocates will be here soon enough.

Intelligentsia Index
As a horizontal technology, artificial intelligence and machine learning cut across many industries and, similar to the Internet and cleantech, are more of an investment theme than an industry in and of themselves. To address this theme, we have created the Intelligentsia Index, an equal-weight index that tracks companies which we think stand to benefit from advancements in artificial intelligence.


Company selection is based on our qualitative analysis, including three key criteria: 1) percentage of revenue from artificial intelligence products, 2) penetration of artificial intelligence as a core technology, and 3) percentage of growth dependent on artificial intelligence. The industries represented include automotive, consulting services, consumer products, industrial products, Internet services, semiconductors, and software.
We will rebalance the Index on a quarterly basis and may add or remove companies at those times based on our research. All changes to the Index will be announced in The Intelligentsia Report.
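The mechanics of an equal-weight index can be sketched briefly. The function below is an illustrative simplification, not our actual calculation methodology, and the two-stock price series is hypothetical:

```python
def equal_weight_index(prices, rebalance_every=63):
    """Compute an equal-weight index level (base 100) from a price matrix.

    prices[t][i] is the price of constituent i at period t;
    rebalance_every is the number of periods between rebalances
    (roughly 63 trading days per quarter).
    """
    n = len(prices[0])
    value = 100.0
    # Equal dollars in each constituent at inception.
    shares = [value / n / p for p in prices[0]]
    for t in range(1, len(prices)):
        value = sum(s * p for s, p in zip(shares, prices[t]))
        if t % rebalance_every == 0:
            # Reset to equal weights at current prices.
            shares = [value / n / p for p in prices[t]]
    return value

# Hypothetical two-stock example: one stock doubles, the other is flat.
level = equal_weight_index([[10.0, 40.0], [20.0, 40.0]])  # -> 150.0
```

Between rebalances the index drifts with its winners; the quarterly reset is what restores the equal weighting.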

Currently, the Intelligentsia Index includes:

Apple (AAPL) is in the interesting position of having the most experience with a voice assistant but also being widely criticized for being behind in the AI race. Critics claim that Apple is behind Google and Facebook because it doesn't collect as much user data, and that it is too slow in releasing new AI products. We take a different point of view: that Apple's commitment to user privacy will be a long-term differentiator and that the company's principle of not releasing half-baked products is why it is the most valuable company in the world. If the consumer-market AI goal is to make a truly intelligent (and usable) assistant, I'll bet that Apple can pull that off better than any of its current competitors.

Accenture (ACN) is well positioned to take advantage of the fact that adopting artificial intelligence is a human endeavor. Understanding the technology's potential, planning for change management, and executing new technology plans all require expertise. Accenture has stepped up as a leader in large-scale AI consulting, including a focused R&D effort through the Accenture Technology Lab and research relationships with MIT and University College Dublin.
Alphabet (GOOG) says the company's future is being "AI first." In some ways, Google was the first AI company, since AI has been core to the company's search business from the beginning. Today, Google is easily one of the leaders in cutting-edge machine learning and deep learning, using the technology to dominate the online advertising business and streamline its own operations. Google has designed its own chips to optimize its machine learning algorithms, and it offers machine learning technology as part of its cloud business through TensorFlow and other services and APIs. On the moonshot side of Alphabet's business, the company has invested heavily in its self-driving car program and is clearly one of the most advanced players in the market. Alphabet has been an active acquirer and should be assumed to be one of the key M&A exit targets for any start-up.
Amazon (AMZN) has used machine learning to dominate commerce. The company has invested heavily in its product recommendation and targeting technology, making it a formidable player across almost any product segment. Amazon offers machine learning services as part of its market-leading Amazon Web Services offering and provides data-cleaning services through its Mechanical Turk business. Amazon is also investing heavily in operations-focused AI such as warehouse automation and delivery systems (note, Amazon has 14 self-driving patents). And Amazon is embedding AI inside its Alexa-based products to allow consumers to converse with their Amazon devices.

Baidu (BIDU) has one of the largest AI research efforts, including a Big Data Lab, the Institute of Deep Learning, and the Silicon Valley AI Lab. The company is researching image recognition, speech recognition, natural language processing, robotics, and big data to serve its 667M customers. A particular focus for Baidu is speech recognition, and the company's Deep Speech technology has been shown to be 3x faster than people typing on a smartphone.
Facebook's (FB) investment in machine learning has allowed it to be the dominant player in social media, and increasingly all of media. The company's sophisticated personalization engines have created one of the most addictive consumer products ever. And the addiction to Facebook isn't waning, as the company continues to add users across its product portfolio and increase the time spent with its services. Facebook is investing heavily in virtual reality through its Oculus subsidiary, which requires quite a lot of AI under the hood, and could involve more consumer-viewable AI content if Oculus shifts into augmented reality as expected.


General Electric (GE) is positioned as the world's premier digital industrial company, which sets it apart from most other companies focused on machine learning. GE has tightly focused its machine learning agenda on connecting with the physical (industrial) world through sensing devices (the Internet of Things) and prediction software (the recently released Predix). GE leverages machine learning for inspection automation, data imputation, and anomaly detection to more efficiently manage large-scale assets.
IBM (IBM) cannot be mentioned these days without a reference to Watson, the company's wide-ranging AI platform. While many will point out that customers need to engage IBM's consulting business to realize the promise of Watson, there is no doubt that IBM is taking a leadership position in enterprise AI, especially in financial services and healthcare. IBM is an active acquirer and an aggressive hirer of AI talent.
Intel (INTC) is working on being the leading chip company inside AI. In some ways, the company's CPU products have been lagging behind Nvidia's GPUs in the parallel processing that cutting-edge machine learning techniques like neural networks require. The company's recent acquisition of Nervana Systems could help it catch up and even leapfrog, as Nervana's unreleased deep learning chip was very well thought of in the community. If nothing else, Intel has acquired a strong team who should be helpful as Intel thinks through the best strategy for embedding AI processes in chips across its product lines.
Microsoft (MSFT) is a strong contender in machine learning, in the cloud through Azure and on the desktop through various applications and Cortana, its AI assistant. The company has one of the larger research efforts in AI, dating back more than 20 years, that has resulted in machine learning technologies including Hotmail spam filtering, Bing search and maps, Skype translation, and soon HoloLens augmented reality. The company has a long list of machine learning initiatives in applications and services, computing devices, and cloud services. And the company is extending its machine learning offerings to its developer community through both cloud applications and an upcoming developer platform called Open Mind Studio that supports Microsoft and non-Microsoft machine learning libraries (an important step for making machine learning more accessible to a broader community).
Mobileye (MBLY) is currently the dominant player in driver assistance (sensors, cameras, etc.), providing technology to 90% of the world's automakers. The company is also working with several automakers to bring self-driving cars to market and says that five automakers, including BMW, General Motors, and Volkswagen, will use its highway-only self-driving systems for 2018 models and all-condition self-driving systems for 2019 models. While Mobileye's various partnerships should be expected to come and go, the company is well positioned to be a leader in one of the most important physical embodiments of AI.
Nuance (NUAN) is a major player in natural
language understanding, a key technology as we
move more into voice-controlled intelligent
systems. While there are many big and small
players pursuing speech technologies, Nuance has
a history and breadth that sets it apart.
Nvidia (NVDA) has expanded its focus from
accelerating graphics processes on its graphics
processing unit (GPU) chips to accelerating deep
learning processes. The company has applied the
highly parallel structure of its GPUs to neural
networks and claims that training deep neural
networks on its GPUs is 12x faster than on CPUs.
It's unclear how many machine learning processes can be accelerated through hardware, but it is clear that acceleration is very valuable. And Nvidia is one of only a few companies in the game.
Qualcomm (QCOM) is focused on delivering machine learning hardware acceleration in mobile devices. The company claims its Snapdragon 820 cores can accelerate on-device execution of convolutional and recurrent neural networks. Qualcomm is focused on applications such as scene detection, text recognition, object tracking and avoidance, gesturing, face recognition, and natural language processing in smartphones, security cameras, automobiles, and drones. Qualcomm is well positioned for algorithms which can run on the device without going to the cloud.
Salesforce (CRM) seems determined to extend its leadership in customer relationship management by investing in machine learning solutions to help its customers market and sell more effectively. The company has acquired several technologies, including a smart calendar app, a deep learning personalization service, and a machine-learning-as-a-service platform.
SAP (SAP) systems house a vast quantity of
corporate data. And data is what drives insights and
outcomes through machine learning. SAP is well
positioned to provide greater intelligence to its
customers and the company is investing heavily in
machine learning across its product suite.
Sony (SNE) may seem a speculative choice in the
Index as it is emerging from a restructuring and
rebooting of its core consumer business. But the
company's $1B investment in AI and robotics can't
be ignored. The company isn't limiting itself to the
consumer market despite previous investments in
its Aibo robot. Today, Sony is looking to automate
factories, warehouses, and companies as well.
Symantec (SYMC) has used AI and machine
learning in its security software solutions for some
time. Recognizing and filtering spam is one of the
most common use case examples of machine
learning because it is so prevalent. Symantec
continues to expand its machine learning
capabilities and has recently started using deep
learning as a defense against cyberattacks.
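Since spam filtering is the canonical machine learning example the section mentions, here is a toy naive Bayes classifier, the simplest of the techniques such software builds on. The corpus and function names are invented for illustration; this is a sketch of the idea, not Symantec's implementation:

```python
import math
from collections import Counter

# Toy naive Bayes spam filter. A real filter trains on millions of
# labeled messages; this hand-made corpus just shows the mechanics.
spam = ["win free money now", "free prize click now"]
ham = ["meeting moved to monday", "lunch on monday with the team"]

def train(docs):
    """Count word frequencies across a set of labeled documents."""
    words = Counter(w for d in docs for w in d.split())
    return words, sum(words.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total):
    """Log-likelihood of the message under one class's word model."""
    # Laplace smoothing so unseen words don't zero out the product.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    # Equal class priors, since the toy corpus is balanced.
    s = log_prob(msg, spam_counts, spam_total)
    h = log_prob(msg, ham_counts, ham_total)
    return "spam" if s > h else "ham"

print(classify("free money"))      # spam
print(classify("monday meeting"))  # ham
```

The same count-and-compare logic, with far richer features, is what lets a filter learn from the mail users mark as junk rather than relying on hand-written rules.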

been rocky, the company has a significant lead in

on-the-road experience with intelligent cars. Tesla
has been aggressive at building out and testing new
technology and the company may include some
form of self-driving technology in its upcoming
Model 3.
Toyota (TM) has committed $1B to its Toyota
Research Institute for research and product
development in automobiles, robotics, and
machine learning. Since late 2015, Toyota has
announced three TRI offices located next to three
university partners: MIT, Stanford, and University
of Michigan. The SAIL-Toyota Center for AI
Research, led by Fei-Fei-Li, has garnered well
deserved press and praise for bringing together an
impressive research team from fields including
machine learning, robotics, human-computer
interactions, and natural language processing.
Interestingly, Toyota has more self-driving patents
than any other company more than twice as many
as any other company. Even though the number of
patents doesnt necessarily indicate success, it
does indicate a serious focus.
Twitter (TWTR) has made machine learning a key
initiative, organized a machine learning team called
Twitter Cortex, and has acquired a handful of
companies with deep learning-based visual
processing and predictive advertising technologies.
The companys CEO Jack Dorsey says, Machine
learning is increasingly at the core of everything we
build at Twitter. Its powering much of the work
were doing to make it easier to create, share, and
discover the very best content so that every time
you open Twitter youre immersed in the most
relevant news, stories, and events for you. Itll be
interesting to see if its machine learning can do
something about the bot plague.

Tesla (TSLA) has made a huge bet on self-driving

car functionality. And although the recent news has



Index Performance
We launched the Intelligentsia Index on June 28,
2016. To date, the Index is up 13%. Clearly the Index
has benefitted from the Brexit rebound. Notably,
the Index has also outperformed the S&P and
NASDAQ indexes, which are up 7% and 11% in the
same period.

Patent Watch
As part of our effort to understand the future of AI
and machine learning technology, we keep an eye
on patent applications. Our view is that patent
applications indicate technologies that companies
think are valuable, even if they don't always make it
to market. And we think patent applications serve
as an interesting view of the breadth of machine
learning use cases.
Here are summaries of a few AI and machine
learning-based patent applications that caught our
attention recently:


Digital Reasoning: Systems and methods for
neural language modeling including word-level and
character-level representations, and word
morphology and shape. (20160247061)
Facebook (FB): A method of operating a
camera-enabled device to learn user preferences of
how to process digital images. The computing
device can apply machine learning on multiple user
image selections to determine visual effect
preferences.
Facebook (FB): Technology for creating and
tuning classifiers for language dialects and for
generating dialect-specific language modules.
Google (GOOG): Techniques for using image
metadata and feature analysis to evaluate the
shareability of a photograph associated with a
particular user. (20160239724)
IBM: Reconfigurable and customizable
general-purpose circuits for neural networks that
comprise an electronic synapse array including
multiple synapses interconnecting a plurality of
digital electric neurons. (20160247063)
IBM: Method for performing a constraint-based
optimization and a genetic algorithm on a set of
variables to generate a list of travel options
designed to optimize travel expenses.
Iteris (ITI): Multiple applications for frameworks
for diagnosing and predicting a suitability of soil
conditions to various agricultural operations by
analyzing one or more factors relevant to field
trafficability, workability, and suitability for
agricultural operations due to the effects of
freezing and thawing cycles. (20160247075,
20160247076, 20160247079)
National ICT Australia: Determining a health
condition of a structure, such as a bridge, based on
vibration data measured from the bridge using a
support vector machine classifier. (20160238438)
PlaceIQ: A process of discovering psychographic
segments of consumers with unsupervised machine
learning to infer consumer shopping behavior
affinities using geolocation analytics platforms.
Prophecy Sensorlytics: Applications for fault
detection in rotor driven equipment and predictive
and preventative maintenance of vacuum pumps.
(20160245686, 20160245279)

By Dave Edwards
Analyzing how we spend our time every day isn't a
trivial task. At the base level, the US Bureau of
Labor Statistics publishes the results of an annual
survey that describes how Americans spend their
time across about 70 different categories including
things like working, caring for household adults,
grocery shopping, lawn and garden care, and
relaxing and thinking. The glaring omission from
the survey is computer and mobile usage.
Normalizing for TV viewing (which is included in the
USBLS survey), there is a missing 7-8 hours per day
of media time.


To create a more complete view of how we spend
our time, I combine data from several sources
including the US Bureau of Labor Statistics, Common
Sense Media, eMarketer, the Federal Reserve Bank,
Nielsen, Pearson, the Pew Research Center, Project
Tomorrow, and our own proprietary data gathered
through consumer surveys and other sources.
Together these data sources provide a more
complete view of how Americans across multiple
generations spend their time each day. It's
important to note that a lot of data cleaning went
into this analysis. The data sources don't categorize
generations or activities in the same way, which
makes combining them challenging. All that said, I
think this is as accurate as the analysis can be and,
most importantly, accurate enough to be useful. I
subscribe to George E. P. Box's oft-quoted
statement: "Essentially, all models are wrong, but
some are useful."
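That cleaning step, mapping each source's categories onto one shared set before summing hours, can be sketched as follows. All category names and hour figures here are invented for illustration; they are not the actual survey data:

```python
# Hypothetical sketch of the data-cleaning step: each source labels
# activities differently, so map every label onto a shared category
# before summing hours. Names and numbers are invented.

SHARED = {
    "watching tv": "video",
    "streaming video": "video",
    "social networking": "social",
    "texting": "social",
}

def combine(*sources):
    """Sum hours per shared category across differently labeled sources."""
    totals = {}
    for source in sources:
        for label, hours in source.items():
            category = SHARED.get(label, "other")
            totals[category] = totals.get(category, 0) + hours
    return totals

bls = {"watching tv": 2.8, "relaxing and thinking": 0.3}
nielsen = {"streaming video": 1.5, "social networking": 0.9}
totals = combine(bls, nielsen)
print(totals["video"])  # BLS TV hours plus Nielsen streaming hours
```

The hard part in practice is building the mapping itself, since survey categories rarely line up one-to-one, which is why the result is only "accurate enough to be useful."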
I break down the time spent across four generations
defined as Gen Z (under 18), Millennial (18-34), Gen
X (35-50), and Boomers (50+). I'm not taking a
stand in the debates over how to define these
generations but am using them as useful tools to
distinguish how people of different ages
spend their time. It's also useful, when we start
forecasting the future, to understand how each
generation will contribute to the population as a
whole over time.
When incorporating the media usage data, I have
taken the unusual approach to group the individual
activities by activity type not by hardware or
platform type as is traditional in market research. I
have chosen this structure because I think it is more
important to understand how much time
consumers spend on a particular activity than how
or where that activity is taking place. For instance, I
want to understand how much time a consumer
spends watching video no matter whether the
consumer is watching on a phone, a tablet, a
computer, or a TV. In this case, the key is
segmenting video content based on the potential
for machine learning to affect the viewed content.
For instance, live TV has no opportunity for
machine learning impact (at least at the moment
the consumer is watching) while online / mobile /
time-shifted video (aka Netflix and Hulu) has


The Intelligentsia Report is published 21 times per year by Intelligentsia Research, a subsidiary of Koru Ventures, LLC. The report
covers the artificial intelligence and machine learning industries, the interfaces that connect humans and machines, and the policy
and social issues they raise. Subscriptions cost $210 per year.
Co-Editor: Dave Edwards
Co-Editor: Helen Edwards
Copyright 2016, Koru Ventures, LLC. All rights reserved. No material in this publication may be reproduced without written
permission; however, we gladly arrange for reprints, bulk orders, or site licenses.