
REINALDO NORMAND

INNOVATION²

THE 15 DISRUPTIVE TECH TRENDS DEFINING OUR FUTURE

ABOUT THIS BOOK

Innovation² is available as a free download on www.innovation2.co or as an iBook for the iPad or Mac.
Join our community on Facebook.
For a list of all images used in this book and their respective licenses, please follow this link.
Some images fall under "fair use" and are copyrighted by their respective owners.
© 2015 by Reinaldo Normand. All rights reserved.
Innovation² - Version 1.00 - 04/14/2015

ABOUT THE AUTHOR

Reinaldo Normand is an accomplished entrepreneur with


extensive experience in founding and running
bootstrapped and venture-backed technology startups,
as well as in doing business abroad. He has been
featured in media outlets such as NBC, Business Week,
Venture Beat, Monocle, Le Monde and others.

Reinaldo has lived in São Paulo, Mexico City, Shanghai,


San Diego and currently resides in San Francisco,
California. In his free time, he loves to mentor young
entrepreneurs around the world.

Reinaldo has been invited to speak about technology


and entrepreneurship at events around the San
Francisco Bay Area and at universities such as the
University of California, San Diego.
His book about Silicon Valley and its culture reached
hundreds of thousands of readers in LATAM. He holds a
BSc in Computer Information Systems, an MBA and
speaks five languages.


OPENING REMARKS

Since the invention of the first integrated circuit in 1960, our technological prowess has evolved so dramatically that we barely comprehend what happened. Microprocessors are probably our most important invention ever.
They became such an integral part of modern life that we
cannot imagine a world without them. Think about the
countless devices and services central to our society
that owe their existence to microprocessors.
Our technology is changing so fast and becoming so complex that many of us have not been able to follow the latest trends. We feel alienated and sometimes cannot grasp what is going on. That is why most people, including you and me, initially reject the technologies and companies that end up revolutionizing entire industries.

Google was once considered a fad that could never


become a real business. The startup that made just $220,000 in revenue in 1999 amassed $66 billion in 2014. Google's services have become an indisputable part of our digital existence. The company is now entering markets as diverse as robotics and biotechnology.
In 2000, Apple was just a niche player in the computer
industry but in 2014 it became the most profitable
company ever. Apple revolutionized the computer, music
and telecommunication industries in the last 30 years
and is now offering wearables and soon a cable TV
alternative. The company is rumored to be eyeing
vehicles as well.
In 2004, no one thought Tesla Motors had a chance in
the auto industry. In 2015, the company's Model S is the

indisputable leader in electric vehicles, overall car safety,


autonomous driving, and performance. The company
expects to sell half a million vehicles by 2020.
In 2008, Mark Zuckerberg was considered unprepared and unfit to be a CEO. Many analysts
dismissed his ability to transform his social network into
a real company. Facebook is now worth $230+ billion
and is investing in drones, artificial intelligence and
virtual reality.
In the last five years, we have witnessed the rise of
disruptive business models and products from Airbnb,
Instagram, WhatsApp, Uber, Theranos, OculusVR,
AngelList, Palantir, Kickstarter and Instacart, just to name
a few. They are now part of our lives and are changing
the world in ways we never imagined.
In 2015, the same pattern of denial and underestimation
of new paradigms continues to occur over and over. A
huge knowledge gap exists now between users and
creators of technologies, which is not healthy for our
society.
I wrote this book to narrow this gap and explain in simple language what the next tech trends are and how they will affect your life and your business. I show why the incredible advances of the last 50 years pale in comparison to what is coming.
The technologies and trends listed in Innovation² were carefully researched and the information contained in this book is based on facts retrieved from publicly available sources. When I speculate, as in the last chapter, I take care to distinguish speculation from fact.
Throughout the next pages, you'll notice lots of hyperlinks. Take time to tap them and follow the sites,
articles or videos that validate or enrich the subject
you're reading about. The extra content makes this book
more interesting, credible, and pleasant to read. It will
help you realize how deep the rabbit hole is and how far
our technology has come.
My goal with Innovation2 is to involve ordinary people
and leaders in a discussion about what we want as a
society driven by technology. A serious debate about the
implications of exponential technologies should be part
of our basic education and our politics. This is a unique
moment in history and I strongly believe we need to act
now if we want to control our own fate as a civilization.
Welcome and prepare to have your mind blown!


INTRO TO EXPONENTIAL TECHNOLOGIES

Magnification of an Intel Itanium processor 9500 series containing 3.1 billion transistors.

Intel

Nature took almost 3.8 billion years to evolve primitive microbes into humans through the process of natural selection. To understand evolution, we must think in much larger units of time. It is a tough effort because our brains cannot intuitively comprehend extremely large numbers, as our lifespan is limited to a mere hundred years.

When dealing with the phenomenon of exponential technologies, we might face similar difficulties. The multiples can be so astonishingly high that our natural instinct is to deny or dismiss them. But if we really want to understand the modern world, we must grasp what exponential technologies are and how they work.

Exponential technologies are technologies that demonstrate continued accelerating growth of capabilities in speed, efficiency, cost-effectiveness or power, driven both by advances in the individual technologies themselves and by their interplay and synergies.

The invention of the first microchip in 1960 marked ground zero for all exponential technologies. Without the chip, the commonplace conveniences of modern life (smartphones, video games, the Internet, displays, sensors, modern vehicles, even something as simple as a handheld calculator) would be the stuff of science fiction.

In 1965, Gordon Moore, founder of Intel, predicted the processing power of chips would double every two years, while the cost would be cut in half. The observation, known as Moore's Law, helped us understand how microchips would evolve and give birth to exponential technologies that would ultimately transform our society.

To comprehend what chips can do, we must learn the differences between two simple concepts in mathematics: linear and exponential growth.

In this chapter, I intend to show how our assumptions about technology are generally mistaken and why these conclusions impact our understanding of the future.

The methodology I employ in the following examples is based on the Intel line of processors. The benchmark used is the MIPS*, a measure of the task performance speed** of a given CPU***.

* 1 MIPS = one million instructions per second.
** Effective MIPS performance is highly dependent on the programming language used.
*** MIPS is not used anymore for CPU benchmarking but it serves well for our educational purposes.

Fig. 1 - Simulation of microchip speeds over 20 years using the linear growth method.

Example Number One: Linear Growth - 20 years

In 1971, Intel launched its first CPU, the 4004, with a task performance speed of 0.092 MIPS. Imagine that, every year, the manufacturer would release a new updated version of the processor. In this example, I project the performance growth of this line of microchips over twenty years applying the linear growth method.

Linear growth is synonymous with constant growth. If microchip speeds were to evolve linearly, one needs to add a constant amount of progress for each subsequent period (a year). It is a simple addition operation.

What I did in the chart (fig. 1) was to simply add 0.2 MIPS each following year. The same amount was kept constant until the end of the series.

So, going back to our example, in 1972, the chip would have a performance of 0.292 MIPS, or three times faster than the model of the previous year. In 1976, it would run at 1.092 MIPS and in 1991, 4.092 MIPS.

The conclusions in interpreting this chart are obvious. The manufacturer would have multiplied the performance of this line of processors by 44 times in twenty years.

Ask someone outside tech what they think about these numbers. You would be surprised by how many people believe the numbers are actually real. Humans seem to intuitively validate linear growth in order to make sense of analogies.

Needless to say, the conclusion about linear growth couldn't be more wrong, as microchips don't evolve at this slow pace. They evolve exponentially and, as you can infer from the following examples, the differences can be quite astonishing.

Fig. 2 - Simulation of microchip speeds over 20 years using the exponential growth method.

Example Number Two: Exponential Growth - 20 years

In this example we apply Moore's Law to the same Intel processor from 1971. Observe how doubling the performance every two years makes a difference in the long run.

By 1976, a similar model of the same line of chips would run at a meager 0.52 MIPS. In 1991, though, the newer processor would run at an astonishing 94.5 MIPS. This is more than 1,000 times faster than in 1971.

Fig. 3 - Simulation of microchip speeds over 20 years comparing exponential and linear growth methods.

This chart compares linear versus exponential growth using the same scale. Exponential growth tricks our minds, as it does not look powerful enough in the short term, so we underestimate the long-term impact of technological progress.

Note that linear growth performance is faster than exponential growth until 1980. In 1991, though, exponential growth takes over and makes the same processor 23 times faster than the one in example one (using linear growth).

Fig. 4 - Simulation of microchip speeds over 50 years comparing exponential and linear growth methods.

Example Number Three: Linear versus Exponential - 50 years

A timeline of 50 years illustrates the huge differences in scale that separate the two methods. In 2014, an equivalent Intel microprocessor accelerated by linear growth would reach a performance of only 8.7 MIPS, or almost 100 times faster than the original. The same processor using exponential growth would reach a performance of 274,000 MIPS, roughly 3,000,000 times faster than its 1971 counterpart!

You might think the numbers presented are random but they're actually very close to the real thing. In 2014, an Intel Core i7-5960X boasted a performance of about 300,000 MIPS.
What is interesting to note on this graph is that we can
safely predict that a next-generation Intel processor, in
2021, will boast a performance of more than 3,000,000
MIPS, or 10 times faster than the fastest Intel processor
of 2014.
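The two growth rules behind these examples are simple enough to sketch in a few lines of Python. This is my own reconstruction of the book's simulation, not Intel data; the doubling rule reproduces the chapter's figures to within rounding:

```python
# Linear vs. exponential growth of a hypothetical CPU line,
# starting from the Intel 4004's 0.092 MIPS in 1971.

def linear_mips(year, start=0.092, step=0.2, base_year=1971):
    """Example One: add a constant 0.2 MIPS per year."""
    return start + step * (year - base_year)

def exponential_mips(year, start=0.092, base_year=1971, doubling=2):
    """Examples Two and Three: double the performance every two years."""
    return start * 2 ** ((year - base_year) / doubling)

for year in (1972, 1976, 1991, 2014):
    print(year, round(linear_mips(year), 3), round(exponential_mips(year), 1))
# By 1991: linear reaches 4.092 MIPS, exponential roughly 94 MIPS.
# By 2014: linear reaches about 8.7 MIPS, exponential over 270,000 MIPS.
```

The divergence is the whole point of the chapter: the two curves are nearly indistinguishable for the first decade, then the exponential one runs away by a factor of tens of thousands.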
Moore's Law is still a reliable guide to what the future holds. Actually, due to the advent of GPUs, which now come bundled with every processor, we need to adjust Moore's Law to a doubling every 18 months.
That is how futurists such as Ray Kurzweil can predict that a $1,000 PC will have the same power as the human brain by the mid-2020s (I personally believe it will happen in the early 2030s). Or that, by the 2050s, the PC will have more power than all human brains on Earth combined.
The consequences of this raw power at our disposal are enormous. Microchips are now driving most of our technologies exponentially and directly influencing fields such as artificial intelligence, robotics, biotech and nanotech.
To think that an iPhone 6 is hundreds of thousands of


times more powerful than the AGC computer used in
Apollo 11 in 1969 is mind-boggling.
Exponential technologies don't drive CPU performance alone. Digital storage is another example worth mentioning to illustrate what kind of changes can happen in technology over a long period of time.
In 1960, one gigabyte of storage cost $10,000,000. In
1981, around $300,000. In 1990, it dropped to $10,000
and in 2004, to just $1. In 2015, just 55 years later, the
cost of one gigabyte is merely 2 cents.
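Those prices imply a Moore's-Law-style decline of their own. A quick back-of-the-envelope calculation, using only the 1960 and 2015 figures quoted above, shows the cost per gigabyte halved roughly every two years:

```python
import math

cost_1960 = 10_000_000.0   # dollars per gigabyte in 1960 (figure quoted above)
cost_2015 = 0.02           # two cents per gigabyte in 2015
years = 2015 - 1960

# How many times the price halved over those 55 years,
# and the average time each halving took.
halvings = math.log2(cost_1960 / cost_2015)
print(round(halvings, 1))          # ~28.9 halvings
print(round(years / halvings, 2))  # ~1.9 years per halving
```

In other words, storage prices fell by a factor of half a billion, halving about every two years, which is the same cadence Moore observed for transistor counts.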
In fact, anyone can set up a free Dropbox or Google
Drive account that stores up to 15 gigabytes. In the near
future, some technologies will be as free and abundant
as the air we breathe due to exponential growth.
Remind yourself to always think exponentially when
trying to predict what impact each technology shown in
this book might have in your life or your business.


1
CLOUD
COMPUTING

VT100, a very popular computer


terminal introduced by DEC in
1978.
Jason Scott

The fundamental concept of cloud


computing originated in the 1950s,
when large corporations and research
institutes allowed employees to access
mainframes from multiple terminals
and tap into their computational power.
Those terminals had a screen and a
keyboard for inputting data and

depended on the host computer for


most of their processing capabilities.
In the late 1970s, IBM and DEC introduced a myriad of terminals that enjoyed enormous success in the corporate market. Terminals democratized the use of computers at


large companies and boosted the productivity of


employees.
Even after the invention of the IBM PC in 1981, these "dumb" terminals continued to thrive due to low costs and robust processing power (when connected to mainframes).
Terminals made "cloud computing" possible in corporate networks, and they remained popular until the late 1980s, when PCs became powerful enough to challenge mainframes for most tasks.
During the 1990s, spreadsheets, word processors, and
desktop publishing software relied on a PC for
processing and storing data locally. PCs became cheap
and mainstream.
In the early 2000s, with the popularization of the Internet,
the paradigm of local computing began to shift again
and a new concept of cloud computing was born:
instead of using a mainframe inside a corporate network
to store, manage and process data, you could count on
a network of remote servers hosted on the Internet.
Modern cloud computing

In the early 2000s, powerful computers that could store


and process data were still inaccessible for most
startups. If you wanted to operate a photo-sharing site,
for instance, you would need to buy servers and either
locate them at your office or hire the services of a colocation center.
Opting to have servers at your office would require a
dedicated Internet connection. At the time, renting one
could cost several thousand dollars a month, depending
on the speed. Additionally, you would need to hire an
infrastructure expert to monitor your services 24/7 and
perhaps security guards to physically protect the
servers.
The colocation option was not good or cheap either. On the one hand, startups would get better infrastructure and security, but on the other hand it also required 24/7 maintenance and rushing to install and set up new servers. Data centers were usually located near medium-sized or large cities, so if you lived far from them you were out of luck.
Operating a startup in 2005 was expensive, slow, and
painful as entrepreneurs wasted a lot of time and energy
buying computers, setting up servers and searching for
problems on the hardware side. Scaling a startup was

an order of magnitude harder and if you were not funded


by venture capitalists you would probably not have the
working capital to grow.
Innovation, in a way, was restricted to large companies such as Microsoft or Apple, venture-backed startups such as YouTube, or students who could tap into their universities' networks and resources (such as Mark Zuckerberg).
Cloud computing existed as a concept for large
companies willing to buy hundreds of computers to do
their heavy lifting. But it wasn't within the grasp of most
startups. In 2006, though, something extraordinary
happened.
The e-commerce giant Amazon launched a new service called AWS that proposed to turn computing power, storage, and bandwidth into a utility. The more services you used, the more you'd pay. If you didn't use any, you wouldn't pay.
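The utility model is simple to state as arithmetic: the bill is metered usage multiplied by a rate, with no fixed cost at all. A minimal sketch with made-up rates (illustrative placeholders, not Amazon's actual prices) shows the idea:

```python
def monthly_bill(instance_hours, gb_stored, gb_transferred,
                 hourly=0.10, per_gb_stored=0.03, per_gb_out=0.09):
    """Pay-per-use pricing: no usage means no bill. Rates are illustrative."""
    return (instance_hours * hourly
            + gb_stored * per_gb_stored
            + gb_transferred * per_gb_out)

print(monthly_bill(0, 0, 0))                     # used nothing, pay nothing: 0.0
print(round(monthly_bill(720, 100, 50), 2))      # one small server all month: 79.5
```

Contrast that with the pre-2006 world, where the same capacity meant buying servers, renting a dedicated line, and paying staff before serving a single user.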
The launch of AWS gave birth single-handedly to
modern cloud computing. It was a revolution. For the
first time in history, anyone could launch a startup or a
digital product in a matter of hours.

If you needed to scale, you would do it just by pressing a button and bringing more servers online instantaneously. Amazon replaced hardware with a software stack that could be remotely controlled by anyone with minimal knowledge of network and server architectures. No more visits to data centers or server purchases were necessary.
Cloud computing allowed thousands of startups to
flourish worldwide and unleashed the creativity of
millions of entrepreneurs. Without it, Instagram,
WhatsApp, Playstation Now, Siri, Netflix, and other
popular services would not exist. The vast majority of
sites, apps, and games you use today are powered by a
cloud provider.
Cloud computing is currently becoming so
commoditized that companies such as Amazon, Google,
and Microsoft are offering free tiers of service to users
worldwide. For the first time in history, storage and
computing power are being given away free.
Corporations had no choice but to move their information from local servers to the cloud. In the US, even the government uses the cloud to power many of its services, as it is cheaper, safer, and more practical than storing data locally.

In a few years, cloud computing services might even become 100% free if they are sponsored by advertisers.
If Internet connections improve fast enough, we'll see
even cheaper computing devices like the Chrome line of
notebooks. Maybe a one-dollar smartphone or tablet is
not that far away.
The next time someone talks about the cloud, try to
visualize a huge datacenter with thousands of
computers working 24/7 to deliver the information you
want almost instantaneously.
Many of the technologies you'll see in the next chapters
became accessible to the mainstream population
because of the availability of cheap and reliable cloud
computing.
We must thank Jeff Bezos for that.


One of the many cloud computing datacenters


operated by Google.
Google

2
THE MOBILE
REVOLUTION

Steve Jobs during the launch of


the iPhone 4 in 2010
Matthew Yohe

When Steve Jobs announced the first


iPhone in 2007 he predicted the
smartphone would change the world.
He was right.
With the advent of the iPhone, for the
first time in history, consumers would
have in their pockets a powerful
computer able to access the Internet

with an interface easier to use than any


PC or competitor phone.
The first-generation model was slow (no 3G) and expensive ($600), and in its first year of sales Apple sold 1.4 million iPhones. For a moment, it looked like the sophisticated and expensive pocket computer would end up only in the hands of the rich and the fanboys. In 2008, Apple launched the iPhone 3G and sold almost 12 million units.
By the end of 2008, alarmed by Apple's success, Google entered the market with its free Android OS and allowed manufacturers around the world to launch their own smartphones. Google's move made smartphones cheaper and more accessible and kickstarted a gold rush into the next phase of the computing revolution. With the launch of the iPad by Apple in 2010, the era of mobile computing officially began, and since then the world has never been the same.

A Xiaomi Mi 4, one of the most popular smartphones around, powered by Android.
Xiaomi

Google ended up dominating the category pioneered by Apple, with a market share of over 75%, and Android gave rise to startups like Xiaomi, a Chinese smartphone maker that was valued at $45 billion at the end of 2014.

Xiaomi, founded in 2010, produces high-end yet affordable Android smartphones. It is now the market leader in China.
Apple, contrary to doomsday predictions, became the
largest company in the world solely based on the
success of its mobile devices. In 2014, Apple sold 170
million iPhones and more than 63 million iPads.
If the iPhone were a company, it would generate more money than McDonald's, Coca-Cola and Starbucks combined. More than one billion iOS devices have been sold since the iPhone's unveiling and Steve Jobs' prophetic words.
But the truth is that even Jobs, were he alive, wouldn't believe the scale of the revolution he started. The numbers of the mobile industry are staggering.
In 2014 alone, more than 1.2 billion smartphones were sold. From 2009 to 2013, the mobile industry invested $1.8 trillion in improving wireless infrastructure around the world. Download speeds have increased by a factor of 12,000 and data rates have dropped to a few cents per megabyte.
In countries like the United States, the smartphone
became the platform of choice to access the Internet,

share photos, communicate, and watch videos. Average


users check their phones 150 times a day. Tech
companies and startups are now mobile first and make
most of their revenues from mobile platforms.
Entire categories were built around the smartphone, such as car-hailing apps. That category alone sparked an industry worth more than $50 billion. Messengers are now the main form of communication for billions of people. WhatsApp, founded in 2009, handles 10 billion more messages a day than the global SMS text-messaging system.
Smartphones are entering healthcare and fitness
territories with cheaper and smarter sensors and will
trigger profound changes in many traditional industries,
such as medical devices and research.
And the revolution knows no boundaries. Mobile
payment is the fastest growing payment method in the
US. Mobile games make more money than console
games. The iPhone is the most popular camera on the
Internet.
Entire industries were decimated by smartphones.
Among the items a smartphone replaced are


All the items listed in this 1991 Radio Shack newspaper advertisement were replaced by a smartphone.

conventional cameras, MP3 players, GPS navigators, dictionaries, books, handheld video games, and PCs.
But most importantly, smartphones are having a significant impact on the lives of people in emerging countries. Android phones can now be purchased for $35, so they are becoming the first Internet-connected device for billions of people.
In Africa, Vodafone and Safaricom's M-Pesa mobile payments system, launched in 2007 in Kenya, now handles a sizable portion of the country's gross domestic product.
It is predicted that, by 2020, something like 80% of adults will own a smartphone connected to the Internet, including a billion in Africa alone. New devices are promised to cost only $20 in a couple of years.
Mobile devices are revolutionizing commerce, agriculture, advertising, telecommunications, and many other industries, creating millions of jobs and injecting trillions of dollars into the world's economy.
The smartphone's exponential growth is happening under our noses, but there are still many people outside tech who don't realize what a big deal this is. These devices didn't exist eight years ago, and now they're becoming the remote control of our lives.
In the next decade, smartphones will diagnose diseases, serve as our banks, and translate different languages in real time.
Mobile is the one technology that will finally connect the rest of our world into a unique digital ecosystem.

3
THE SHARING
ECONOMY

The sharing economy, or collaborative consumption, is a socio-economic system built around the sharing of human and physical resources through the Internet. It creates micro-entrepreneurs and puts excess capacity to use.

The best way to understand why the sharing economy is an exponential technology is through the example of Uber, its shining star.

Uber
The startup, founded in 2009, uses a business model that is so innovative and brilliant that lots of people still don't understand how it works. Let me explain.
Uber basically connects private drivers, who want to rent out their own cars in their spare time, with passengers, who want to go from point A to point B. Drivers need to register with the company, go through a background check and provide their bank account. Passengers need to download a smartphone app and provide their credit card number to use the service. Everything else is automated.
When passengers summon an Uber car, their location is retrieved by the phone's GPS and sent to nearby drivers, who also have a smartphone. The first to respond wins the ride. The fare is calculated by distance (using the GPS) and time. When the final destination is reached, the credit card on file is charged automatically. Drivers are paid weekly and Uber takes a cut of each ride for providing the platform.
The system is very elegant and smart. There is no payment friction, and both the driver and the passenger rate each other from 1 to 5 stars after the ride. Drivers with low ratings are automatically excluded from the system.

Uber might seem like just another harmless app, but the implications of its business model are mind-boggling. The startup and its clones are disrupting a very old and powerful industry that was until now immune to exponential technologies.
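The ride mechanics described above (a distance-and-time fare, a platform cut, and automatic deactivation below a rating threshold) can be sketched in a few lines. All the rates and the cutoff here are hypothetical placeholders, not Uber's actual parameters:

```python
def fare(miles, minutes, base=2.00, per_mile=1.50, per_minute=0.25):
    """Fare computed from GPS distance and trip time (illustrative rates)."""
    return base + per_mile * miles + per_minute * minutes

def driver_payout(total_fare, platform_cut=0.20):
    """Uber keeps a cut of each ride; the driver receives the rest."""
    return total_fare * (1 - platform_cut)

def driver_active(ratings, cutoff=4.6):
    """Drivers whose average star rating falls below the cutoff are removed."""
    return sum(ratings) / len(ratings) >= cutoff

trip = fare(miles=5, minutes=15)
print(round(trip, 2))                  # 13.25
print(round(driver_payout(trip), 2))   # 10.6
print(driver_active([5, 5, 4, 5]))     # True: average 4.75 clears the cutoff
```

What the sketch makes plain is why the model scales: pricing, payment, dispatch and quality control are all software, so adding a city adds no marginal bureaucracy.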
Taxis are heavily regulated by local governments.
Drivers are required to pay a huge sum to buy a
medallion, a license that allows a cab to pick up
passengers in the streets of any given city. Medallions
used to be very expensive because they were so scarce
(cities regulate the number of cabs allowed to operate).
In NYC, in 2012, a medallion could cost up to a million dollars. Owning one was once considered a great investment, but now, because of Uber, medallion prices are in free fall.

Uber is so efficient and loved by passengers that it is crushing the taxi industry wherever it operates. In San Francisco, its birthplace, taxi use has tumbled by 65% since 2012. Actually, Uber is now making 3.5 times more revenue in San Francisco than the entire cab industry! The same phenomenon is taking place in many cities around the world. As of March 2015, Uber is available in 300+ cities in 55 countries.
The innovations don't stop there. In San Francisco, you can now share an Uber with another passenger and get up to a 70% discount on the fare. It is called UberPool. A regular trip from downtown to SFO costs around $45 if you ride a cab. On UberPool it is less than $15.

An Uber driver waits for a


passenger.
Uber


Some people, including myself, think that very soon the cost of owning a car in big cities will be higher than the cost of using Uber every day. Fewer cars mean fewer accidents and better traffic. Uber estimates UberPool can take more than one million cars off the road in a metropolis like New York.
Uber expects to create about one million jobs in 2015.
Look at how disruptive this is. A startup, founded in
2009, using a brand new business model, in which it
does not own any inventory, will hire more people than
all workers currently employed by manufacturers of
motor vehicles and parts in the United States.
No wonder Uber has raised more than $5 billion in venture capital and is now worth $41 billion. The startup
has the potential to be larger than Facebook in 10 years
as the transportation industry offers many unexplored
opportunities such as trucks, helicopters or private
planes. Maybe one day Uber will deliver cargo and
compete with DHL, Fedex, or UPS in logistics. Who
knows? Five years ago nobody thought they would be
where they are today.

But the story of Uber doesn't end there. The company just announced a partnership with Carnegie Mellon University to research autonomous cars. Most experts believe we'll have self-driving vehicles in America by 2025 at the latest.
Think about the consequences of this move. Drivers are
the main cost for Uber. If the startup gets rid of humans,
its margins will go up dramatically. This would mean
cheaper rides and more people using Uber. At some
point, analysts predict, Uber will be so cheap that it
disrupts public transportation.
Not only that, but if by 2025 we indeed have a fleet of autonomous vehicles, other traditional industries will be affected. Car sales will go down because not everyone will need a car anymore. Car insurance companies will declare bankruptcy as autonomous vehicles would have a near-perfect safety record. Healthcare providers would avoid more than 2.5 million emergency room visits per year.
Uber is just the tip of the iceberg. The sharing economy
is booming with hundreds of apps and services that are
disrupting old industries and creating new types of jobs
and ways to make money. Here are some examples of
startups that are thriving in the new environment.


Airbnb home page


Airbnb


Airbnb
Airbnb, founded in 2008, is a community marketplace for people to list, discover, and book unique spaces around the world. It provides a platform that connects hosts, who want to rent out their homes (or rooms), with guests visiting local cities. Airbnb takes a cut of every transaction that takes place on its platform.
The startup competes directly with established chains of
hotels and is present in 34,000 cities in 190 countries. It
boasts more than one million listings, 50% more than all
rooms available from Hilton Holdings, the largest
hospitality company in the world. More than 25 million
people have stayed at Airbnb since its inception in
2008.
In terms of guest satisfaction, 95% of properties listed
on the site have an average review of 4.5/5 stars. By
contrast, properties on TripAdvisor had an average
rating of 3.8 stars. Airbnb, like Uber, supports hundreds
of thousands of jobs in the local economies. In 2013, the
Airbnb community generated $824 million in economic
activity in the UK and supported 11,600 jobs.

of hosting, including marketing, communicating with


guests, booking, optimizing pricing, cleaning and
checking rentals, repairing as needed, and any
troubleshooting during a guests visit. Hosts pay 15% of
booking fees for the service, on top of the fees they pay
to Airbnb.
Airbnb announced in 2015 that approximately 550,000
travelers brought in the New Year in about 20,000 cities
around the globe at rentals booked through the site.
That's impressive growth for a company that had only
2,000 travelers using the service on New Year's Eve five
years ago.
Airbnb, through its ingenious sharing economy model, has the potential to be larger than all other hospitality companies combined. It is now valued at upwards of $20 billion.


Instacart
Instacart is a same-day grocery delivery startup focused on delivering fresh groceries and home essentials quickly (sometimes in less than one hour). Instacart already has over 500,000 items in its catalog from local stores and established chains.
It works like this: open the app, choose your items and
pay by credit card with one tap. Your order is then
sent to the phone of a local shopper who buys your
groceries and delivers them directly to your home.
Most of the time, Instacart groceries are delivered
using the shoppers' own vehicles.
The startup, founded in 2012 and valued at $2 billion,
does not operate any physical stores or own any
inventory. It has the potential to rival established chains without tying up much working capital.

Instacart app
Instacart


Others
Handy matches you with crowd-sourced cleaners,
handymen and plumbers. Scheduling, payment and
rating are done through the app.
Washio sends a "ninja" to pick up your dirty clothes (using their own vehicles) and bring them back clean in less than 24 hours. The startup rents idle time from laundry and dry-cleaning facilities around any given city.
Munchery is not a delivery service for existing restaurants. Instead, it relies on well-known, high-end chefs who have spare time before their shifts at fancy restaurants. The startup rents commercial kitchens where the chefs can prepare their meals. Delivery is done in less than 30 minutes through a crowdsourced workforce.
I could spend this entire chapter mentioning sharing economy startups that were founded in the last five years. There are about 200 new startups listed on AngelList. It is really revolutionary how they're giving old industries a run for their money.
In recent decades, we saw many jobs migrate overseas to countries such as China, India, and the Philippines. Now they're coming back thanks to the sharing economy startups. Most of them pay hourly wages of around $15-25, which is pretty good compared to traditional entry-level jobs that pay $10 per hour, such as a Walmart cashier, a Target clerk or a McDonald's worker.
The point I want to make in this chapter is that most
industries are actually threatened by the sharing
economy trend. It is exponential and unstoppable.
I can foresee how the textile industry, for instance, could
be replaced by a marketplace that connects customers
who want to build customized clothes with businesses
that provide high-end printers. Even mighty wireless
carriers or the powerful financial industry could be
jeopardized.
Crowdfunding sites like Kickstarter and Indiegogo allow entrepreneurs to raise money from ordinary people before launching their products. AngelList is giving VCs a hard time. Some experts think we might be creating a new economic system that could replace capitalism. I disagree: it is actually capitalism 2.0.
The sharing economy is revolutionizing the world in ways we never dreamed. If this trend looks like a distant future to you, it's because you're not thinking exponentially enough.

4
INTERNET OF
THINGS

The Internet of Things (IoT) is a scenario in which objects, animals or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
IoT has evolved from the convergence of wireless
technologies, micro-electromechanical systems (MEMS),
and the Internet. Each thing is uniquely identifiable
through its embedded computing system but is able to
interoperate within the existing Internet infrastructure.
These devices used to be mostly computers,
smartphones and tablets, operated by people. Now they
can be everything.
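To make the idea concrete, here is a minimal Python sketch of a "thing" tagging a measurement with a unique identifier before publishing it over a network. The names and message format are my own illustrative assumptions; the IoT prescribes no single protocol.

```python
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """One telemetry message from a 'thing': a unique device ID plus a measurement."""
    device_id: str
    metric: str
    value: float

def make_reading(metric, value, device_id=None):
    # Every thing carries a globally unique identifier; if none was assigned
    # at the factory, we mint one with UUID4.
    reading = SensorReading(device_id or str(uuid.uuid4()), metric, value)
    # Serialize to JSON, the kind of payload a device might publish
    # over the network without any human involved.
    return json.dumps(asdict(reading))

print(make_reading("temperature_c", 21.5, device_id="fridge-kitchen-01"))
```

The point is simply that each device identifies itself and reports on its own, with no person in the loop.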
In 2008, the number of things connected to the Internet exceeded the human population. By 2020, 75 billion IoT devices are expected to be online, communicating with each other. IDC forecasts it will represent a $7.1 trillion market in 2020.
The Internet of Things is a field containing many exponential technologies that will start a revolution during our lifetime. In the next sections, I list some interesting applications.


Home automation
The next place to be disrupted by technology is our
home. Smarter and connected appliances promise to
make life more convenient and raise our productivity.
Korean refrigerators, for instance, already feature LCD displays that can tell what items are inside, which food is about to expire and even the ingredients you've got to buy to create a delicious dish. Very soon they'll order food automatically so you won't need to worry about going to the supermarket. All this information can be conveniently accessed from a smartphone app.

The iPad app controls your Dropcam camera.
Dropcam
Dropcam is a simple wi-fi connected camera that
monitors your home and saves the footage automatically
to the cloud. The camera sends alerts when any motion or sounds are detected so you can watch live what is going on from any device, anytime.

The Nest Protect smoke alarm.
Nest Labs

Smoke alarms also got an IoT upgrade. Nest Protect constantly monitors your home for dangerous levels of carbon monoxide. If smoke is detected, it automatically sends a message to your phone warning about the gravity of the situation, when it started and where in your home the smoke originated. It also has sensors that differentiate steam from smoke to avoid false alarms (as when you take a prolonged shower).


Dropcam and Nest are just the most notable companies in this space. There is now a plethora of smart sensors and devices redefining home automation, from automated sprinklers, water-leak detectors, garden monitors and smart speakers to wi-fi light bulbs. IoT devices are still in their first generation, but imagine the possibilities in a few years.

The IoT will spark a new generation of security and insurance startups. New players will rise, old players will fall. Companies like Google, which owns Nest and Dropcam, might even enter the lucrative insurance business.

A smoke alarm could, for instance, be integrated with Siri or any other personal assistant so it would be able to call the fire department automatically as soon as a dangerous type of smoke is detected, describing each incident with technical precision (by using its sensors). That, in turn, would help firemen respond faster and come better prepared to a specific type of emergency, improving efficiency and saving taxpayers' money. Many lives would be spared in the process.
With all the data gathered from devices around the world, IoT companies can build a very comprehensive and detailed map of what is going on at millions of homes: for instance, which regions trigger the most fire alarms, at what time of day consumers replenish their freezers, or which cities are the safest.
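A toy version of such a map takes only a few lines of Python. The regions and events below are invented purely for illustration, not real vendor data.

```python
from collections import Counter

# Hypothetical event feed: (region, event_type) pairs reported by home IoT devices.
events = [
    ("Oakland", "fire_alarm"),
    ("San Jose", "fire_alarm"),
    ("Oakland", "fire_alarm"),
    ("Oakland", "freezer_restock"),
    ("San Jose", "freezer_restock"),
]

def alarms_by_region(events):
    """Count fire alarms per region: the kind of aggregate map an IoT vendor could build."""
    return Counter(region for region, kind in events if kind == "fire_alarm")

print(alarms_by_region(events).most_common(1))  # region with the most fire alarms
```

At real scale the feed would be billions of events, but the aggregation idea is the same.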


Several models of Fitbit trackers.
Fitbit

Wearables
Many of us already wear computers
without noticing. Activity and fitness
trackers such as Fitbit contain sensors
that measure steps walked, floors
climbed, distance travelled, activity
time, calories burned, heart rate, and
sleep quality.

These wearables communicate with our smartphones via a specific app. There are fascinating stories about how wearables can change our understanding of events and our behaviors.
One of them happened in August 2014, when a strong earthquake hit the San Francisco Bay Area. Jawbone, the company that manufactures the UP, a device that tracks daily activity and sleep patterns, published a chart on its blog showing the exact moment when its users woke up. The data also showed that 45% of UP wearers living less than 15 miles from the epicenter stayed up the whole night.
Imagine what kind of data authorities and companies could gather from wearables in a few years. They will be able to predict human behavior in remarkable detail.
Perhaps human behavior itself led to the downfall of the most famous wearable so far, the ill-fated Google Glass. Early adopters of the device were derisively called "glassholes" due to people's unease about being filmed or photographed without consent.
Google Glass was an important step to bring awareness
to the social norms expected from people using
wearables 24/7. We learned that technology evolves
fast, but our perceptions and culture take a while to
change. However, some companies are finding
ingenious ways to deal with our concerns.

But if you really want to understand the impact of wearables on our society, you must pay attention to what is going to happen with the Apple Watch. It is the first truly mainstream wearable ever released, and the most powerful.
Many pundits predict the Apple Watch is going to flop because it doesn't have any advantages over a smartphone. The same was said about the iPod, the iPhone and the iPad, and all of them turned out to be huge hits. I think it is too early to judge. App developers are the ones who will make a difference and create the use cases that warrant the purchase of the Watch.
Tim Cook believes the Apple Watch will replace car
keys. I believe this category of wearables will eventually
replace all keys in our lives. There are now many
startups relying on the smartphone to open electronic
locks. The Watch would definitely make the process of
unlocking much smoother and more natural.
There is also Apple Pay, the contactless payment system built into the Apple Watch. For now, the system requires a credit card to work, but in the future Apple might launch its own digital currency and become a worthy competitor to the credit card industry. As of May 2015, Apple Pay is accepted by more than a million retailers in the US.

An Apple Watch Edition covered with 18-karat gold.
Apple
The Apple Watch might accelerate the demise of a hundred-year-old industry: analog wristwatches. Swiss watchmakers are saying publicly that the Apple Watch doesn't threaten their businesses. They should know better and learn from the recent examples of Nokia, Blackberry and Nintendo. The Swiss companies are moving in the right direction, but they simply don't have the resources or the expertise to beat tech companies at their own game.

If the Apple Watch succeeds, we would transition to an era where computers become commodities and a minor selling point for wearable computers. It is the beginning of a larger trend that will undoubtedly change our lifestyles and, by definition, many traditional businesses.

Maybe luxury brands will survive as a symbol of tradition and craftsmanship over technology. But they shouldn't bet on it. Apple is positioning the Watch as a fashion item and most of its models are reasonably priced ($349 to $1099).
The 18-karat gold limited Watch Edition, however, will
put a tech company in competition with manufacturers of
haute horlogerie for the first time ever. The price tag of
$17,000 might sound ridiculous, as the models won't
have any functional differences except for the materials
used. But, as I mentioned earlier, it is too early to dismiss
what Apple is trying to do.
41

Smart cities
With sensors and devices getting cheaper, smarter and connected to the Internet, it is only a matter of time before the IoT makes our cities greener, more efficient, safer, and smarter.
IoT devices, such as surveillance cameras, are now
widely deployed in cities such as London and Chicago.
With the help of facial recognition software, suspects
can now be identified in minutes instead of days.
Some IoT technologies are simpler. Cities use smart
meters to collect electric and gas usage data from a
home or business. This data is periodically transmitted
to utilities via a secure wireless communication network.
It helps manage the demand on the grid and increase
service and reliability.
Driverless vehicles are also IoT devices. As soon as they are available commercially, around 2020, they'll connect to the cloud, talk to each other, choose the best routes and help alleviate congestion in cities such as Los Angeles or San Francisco. The best results will come only when most or all cars on the streets are autonomous, which might take a few decades.

However, new cities such as Songdo, in South Korea, have already been constructed according to a smart city template. Its buildings have automatic climate control and computerized access; its roads and water, waste, and electricity systems are dense with electronic sensors that enable the city's "mainframe" to track and respond to the movement of residents.
With all this technology and glamour, Songdo is taking a
while to become a commercial success. Its streets are
empty and a lot of buildings have not yet been finished.
Maybe it is a reminder that cities need to cater to
humans first and foremost. Nevertheless, Songdo and
other experiments around the world will help answer a
lot of questions about connected cities.
Questions like: what if dictatorial states use the
technology to repress their citizens? How would law
enforcement maintain individual privacy? How would you
keep vital infrastructure safe from terrorists? How will IoT
devices affect maintenance workers' jobs?
One thing is certain: public officials will definitely need to
be a lot more sophisticated and knowledgeable to
manage the cities of the future.


Cow computing
The IoT is revolutionizing industries perceived as very backward. One great example is livestock breeding.
There are now wireless neck collars for cows that monitor cattle's temperature and send alerts to farmers' mobile devices when, for instance, cows go into heat. If farmers can maximize the probability of a pregnancy in cows, the likelihood is they're going to increase their milk yields. And best of all, workers would no longer need to monitor the herd manually.
These connected devices also monitor movement patterns related to forage intake and the total time each day an individual animal spends feeding. Reduced activity is often the first sign of health problems or lameness, so the data helps identify sick cows earlier and benefits heifer raisers and beef cattle operations.
Cow computing positively impacts the farmer's bottom line by helping them produce more dairy products and meat from the same animals. A single IoT category like this may increase the output of food in the whole world. The possibilities are endless.

Precision agriculture
Imagine you are a farmer riding along your corn field in the growing season. You push a button and your tractor's GPS pinpoints your exact location. Another button displays a series of maps that show where the soil in your field is moist, where the soil has eroded and where there are factors that limit crop growth. Collected sensing data shows where your crop is already thriving and where it isn't. All this data is then uploaded wirelessly to a computer that automatically regulates the application of fertilizer, irrigation water, and pesticides.
This is not science fiction. Thousands of farmers around the world are practicing what is called precision agriculture, which uses data from a farmer's field to help predict weather conditions and optimize operations, saving money and time.
Farmers who don't apply technological innovation in their fields will definitely fall behind their peers. The Internet of Things is not the future anymore; it is the present.
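The alerting logic a connected collar might run can be sketched in a few lines of Python. The temperature and activity thresholds below are illustrative assumptions for the sake of the example, not veterinary guidance.

```python
def collar_alerts(temp_c, daily_steps, baseline_steps):
    """Toy threshold rules for a connected neck collar."""
    alerts = []
    # A temperature spike combined with restlessness is a typical sign of heat.
    if temp_c >= 39.5 and daily_steps > 1.5 * baseline_steps:
        alerts.append("possible heat: notify farmer's phone")
    # Reduced activity is often the first sign of illness or lameness.
    if daily_steps < 0.5 * baseline_steps:
        alerts.append("low activity: check for illness or lameness")
    return alerts

print(collar_alerts(39.8, 5200, baseline_steps=3000))  # restless and warm
print(collar_alerts(38.6, 1200, baseline_steps=3000))  # unusually inactive
```

Real systems learn each animal's baseline from history, but the principle of comparing readings against a per-animal norm is the same.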


5
BIG DATA

In 1929, Americans sent about half a million telegrams each day. Hypothetically, if security services had wanted to read all messages circulating on US soil at that time, they could have done so by employing only 500 people (assuming each one read a thousand messages a day).
In 2014, the roughly 20 billion emails Americans sent each day vastly outnumbered the telegrams of 1929. If the same security services used humans to read all the messages, they would need to hire 20 million people.
But emails would be the least of their worries. To be effective and able to deter a terrorist plot, for example, security services must intercept, store, analyze, and cross-link hundreds of billions of messages (across all social media platforms and apps), calls, financial transactions, travel records, and other types of data.
Each year, billions of people and an even greater number of IoT devices connect to the Internet and generate a massive volume of information, compounding the problem. It became so complex and time-consuming to mine and make sense of these large data sets that a new buzzword was born: big data.
Big data aims to extract value out of information assets by employing new technologies and processes such as A/B testing, crowdsourcing, data fusion and integration, genetic algorithms, machine learning, natural language processing, signal processing, simulation, time series analysis, and visualization.
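To give a flavor of one item on that list, here is a minimal A/B-testing sketch using only the Python standard library. The conversion numbers are made up for illustration.

```python
import math

def ab_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided two-proportion z-test, the textbook A/B-testing calculation."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Variant B converts at 15% versus A's 12% over 1,000 users each.
z, p = ab_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
```

Here the p-value lands right around the conventional 0.05 significance threshold, so a real team would likely keep the experiment running before declaring a winner.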
Visualization is particularly important to showcase the
importance of big data to decision makers. Professor
Hans Rosling became famous due to his ability to
present complex sets of data in cool presentations.
Big data, in conjunction with other technologies, is helping many online and offline industries, from optimizing assortment and merchandising decisions at a retailer to organizing the huge swaths of information available to Google and Facebook.
Tons of computing power are required to crunch these large data sets, but fortunately tech titans such as Microsoft, IBM, HP, Oracle, Amazon, and Google offer services in the cloud. Startups such as Palantir have thrived by creating customized solutions that help clients manage their data in areas ranging from anti-fraud to disaster preparedness to pharmaceutical R&D.
In a data-driven world, no one will be exempt from using big data technologies to improve their business. If you haven't started yet, you'd better hurry to catch up.

A visual representation of Google search volume by language, one of the applications of big data.
Google


6
VIRTUAL
REALITY & AR

Virtual reality is the computer-generated simulation of a three-dimensional image or environment. It can be experienced through special electronic equipment, such as a helmet with a screen inside, gloves fitted with sensors and/or motion controllers.
Virtual reality is probably the most overhyped
technology of all time. To understand why it never took
off with customers, we need to revisit its history of
unfulfilled promises.
The beginnings
In 1961, Morton Heilig patented the first multi-sensory
virtual experience machine, the Sensorama.
Resembling a modern arcade cabinet, the Sensorama
was a mechanical machine that combined projected
film, audio, vibration, wind, and odors, all designed to
make the user feel as if he were actually in the film
rather than simply watching it.

The Sensorama machine.
Morton Heilig


Heilig also patented the first ever head-mounted display, which provided stereoscopic 3D, wide vision and true stereo sound. Both of his inventions were used for cinematography and TV.
The modern concept of virtual reality was pioneered in the late 1980s by VPL Research, a startup founded by Jaron Lanier that developed and sold one of the first virtual reality products, the Eyephone.
The virtual reality system came with a head-mounted
display and could be paired with a glove that allowed
users to manipulate virtual objects by just moving their
hands and fingers. VPL was focused on the high-end of
the VR market.
The VPL Eyephone Model 1 cost the equivalent of $18,000 in today's money. In 1989, VPL was responsible for the majority of VR systems sold in the world, with about 500 customers including NASA, automakers and Pacific Bell.
Later on, VPL developed a more sophisticated version of the Eyephone, called the HRX, sold at $80,000. It also unveiled the RB2 system, the first virtual reality equipment able to connect two people in a virtual environment. It cost the equivalent of $400,000, due to the expensive Silicon Graphics workstations required to run the system.
VPL was successful with corporate clients but the
company wanted to go after a larger audience. VPL
invested in more affordable consumer products and
started touring trade shows around the world to
demonstrate its technology.
At the time, the company was already making money
from licensing its patents to toy manufacturers such as
Mattel, maker of the Power Glove, one of the most iconic
gaming accessories of all time. VPL thought success in
the consumer space would be a matter of time.
Regrettably, the strategy backfired.
The Eyephone lacked the computing power to make the experience enjoyable and realistic for consumers. Graphics ran at a paltry 6 frames per second, the LCD screens used in the headsets suffered from motion blur, and the head tracking was poor. The equipment was also heavy and uncomfortable and couldn't be used for prolonged periods. The hefty price tag didn't help either.
As a result, VPL bled financially, filed for bankruptcy and
never recovered. Years later, the startup sold all its
remaining patents to Sun Microsystems.

In 1991, another company, this time based in the UK, bet again on the popularity of VR with consumers. The Virtuality Group introduced the Virtuality cabinets, oversized units where players stepped into a 3D immersive world (standing up or sitting down, depending on the equipment).
The units featured a low-res head-mounted display
(the "Visette"), a controller, speakers and a
microphone. Exclusive games developed for the
system and sold to amusement parks around the
world promised a level of immersion and interaction
never seen before. Each machine cost $100,000 (in
today's dollars) and the parks had a hard time
breaking even. The company filed for bankruptcy a
few years later as demand faded.
In 1993, video-game giant Sega unveiled its own virtual reality goggles for the Sega Genesis, promising a revolution at home. Soon the company found out that the experience of an arcade machine couldn't be replicated in a $300 consumer device. The product was never released, and Sega claimed to have terminated the project because the virtual reality effect was "too realistic and users might injure themselves".

Virtuality gaming system marketing piece showing the Visette and controller.
Dr. Waldern/Virtuality Group
In 1995, Sega's chief competitor, Nintendo, would painfully learn with the release of the Virtual Boy that VR systems were indeed not ready for the home. The tabletop device, priced at $180, looked clunky, used a low-res monochrome display, had a bad selection of games, and caused discomfort after extended play. It failed miserably and tainted Nintendo's immaculate brand.

A Nintendo Virtual Boy system, discontinued in 1996.
Evan-Amos
The virtual reality hype continued full force throughout the 1990s. Movies including The Lawnmower Man, Virtuosity, and Johnny Mnemonic entranced the public with the promise of immersive computer-created worlds, but it wasn't enough to save the industry.
The product flops of the 1990s made virtual reality fall
into oblivion. No companies wanted to invest in the field
as the technology wasn't powerful and cheap enough to
please consumers.
In the 2000s, VR was mostly used by industries such as
architecture and design, manufacturing, military training,
and medicine. It made a great impact in the corporate
world.
Nobody expected that virtual reality would take decades to attempt a comeback into the consumer spotlight. But in 2012 it finally happened, thanks to Palmer Luckey, a 20-year-old student at the University of Southern California.
The Renaissance of VR
Palmer and a group of friends developed a low-cost
next-generation VR headset called Oculus Rift. They
apparently fixed the main problems that impeded
consumer adoption of the technology.
They founded a company named Oculus VR and ran a successful crowdfunding campaign to bring their creation to life. People who tried the Rift were absolutely impressed with what Oculus had achieved; so impressed that the startup raised, just 18 months later, more than $90 million from some of the best venture capital firms in the world. In 2014, Oculus VR was acquired by Facebook for $2 billion.
Everyone was shocked. Why would Facebook pay such
an outrageous amount for a company that had not
released a product yet? The answer lies in the potential
of this technology to change the world in the next
decades and the fact that Oculus VR had finally cracked
the code of consumer-grade virtual reality.
The Oculus Rift headset is light, powerful, cheap, and boasts an immersive high-res, low-persistence OLED display. It features low-latency 360-degree positional head-tracking technology that helps ordinary people avoid motion sickness when using the device for prolonged periods of time. The Rift can already be purchased by developers for $300. A consumer version should be released by 2016 at the latest.
In the wake of the Rift, many other companies developed similar devices. Samsung was the first to announce the Gear VR in partnership with Oculus. The headset works with high-end Samsung smartphones and is already available for $200.

A girl experiencing virtual reality with the Oculus Rift.
Rex Features
Sony was next and unveiled its headset in 2014. Named
Project Morpheus, the accessory will be available for
Playstation 4 owners in 2016. Early reviews of the
Morpheus are encouraging and gamers are looking
forward to it.
Even Valve, the software company behind many popular
games and the digital store Steam, announced its own
device in partnership with HTC. Named Vive, the
headset seems to be a very impressive piece of
hardware. The developer version will be available in
Spring 2015, and a consumer version in 2016.
As the evidence suggests, 2016 is poised to be the year when virtual reality finally goes mainstream. At first, its main applications will be games and movies, as entertainment seems to be a natural fit for the technology. But with $200 VR headsets looming on the horizon, a revolution beyond gaming is about to happen. Companies are already finding ingenious ways to use the technology to challenge established business practices.

Around the world, real estate developers and leasing agents are turning to virtual reality to let prospective buyers experience a space even when a building is still under construction, or has yet to break ground.
Specialized companies are modeling properties in 3D
and letting users explore and interact with them using
VR headsets.
Visiting a property in VR feels exactly like a game, with the major difference that the property will be built for real in a few years. As soon as VR goes mainstream, buyers won't even need to leave their homes to go shopping. Millions of real estate agents' jobs might be in jeopardy.
VR technology allows real estate developers to save
money by dropping the construction of expensive sales
centers and demo units. It also can be a great tool to
widen the audience to a specific property. Someone in
China can virtually visit an apartment in San Francisco
without setting foot in the United States.
VR is also being used in ways that promise to help millions: for instance, in the treatment of phobias, anxiety and even post-traumatic stress disorders. The US Army has been remarkably successful with its Virtual Reality Exposure Therapy (VRET), the reconstruction of traumatic events in a virtual environment controlled by the patient.

Research has shown a drastic decrease in PTSD symptoms.
In the next decade, VR technology will become smaller,
more powerful and extremely cheap. Everyone will be
able to afford a headset much in the way anyone can
now afford a mobile phone.
I believe virtual reality will finally take off in the next
decade. The hardware barrier is not an issue anymore
and it will all depend on the software.
I foresee a future where many daily tasks will eventually migrate to the virtual world and one will spend countless hours browsing the web, attending video calls or immersed in remote work with colleagues. Education, warfare, telemedicine, design, and even how we communicate and socialize might be profoundly transformed.
It is not a coincidence Facebook bought Oculus: VR is
the future of social networking too. Gaming is just the tip
of the iceberg.


Augmented reality (AR)
Much is said about virtual reality, but perhaps the greatest potential to disrupt society lies in augmented reality. AR is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view.
Augmented reality has been used in many apps over recent years. It works by overlaying the live feed from your smartphone camera with a 3D image or animation. For instance, point your camera at a specific landmark and see information about it floating on the screen, in real time. Or print a sheet of paper with a specific image, point your camera towards it and reveal a virtual monster walking on the table. You can even decorate your home using AR.
Currently, AR technology is fun but not that useful. It only
works when you are looking through your smartphone or
computer screen. This is about to change soon as the
first always-on augmented reality device has been
announced by Microsoft.
HoloLens is a very sophisticated and clever AR wearable meant to be used indoors. It doesn't immerse you in a 100% computer-generated world like VR headsets do. It actually does something cooler by fusing real-world footage with "holograms".
For instance, if you're wearing HoloLens, you can have a Skype conversation with your friends projected on a wall while you prepare your breakfast. The projection exists only in the eyes of the wearer; people who aren't wearing HoloLens won't see or hear anything out of the ordinary.
With HoloLens you can play games using your home as the setting, design new items overlaid on real objects, and even work in ways that were not possible before. I highly recommend watching the promotional video so you can understand how it works.
If Microsoft executes well, HoloLens has the potential to be a big hit with consumers and businesses alike. The device is rumored to cost less than $500 when it launches in 2016.
Microsoft is not the only company eyeing the disruption
augmented reality can bring to our world. One startup,
called Magic Leap, promises to raise AR to a new level.
The company raised more than $500 million from Google
and other investors before launching a product. It has
recently unveiled a very cool demo of an AR game.

Magic Leap claims to have developed a technology to project images directly onto your retina. That means the same effects created by HoloLens could actually be possible on a small device such as Google Glass.
In a decade or two, augmented reality technologies will fit in your regular glasses or even in your contact lenses. They'll have the power to recreate lifelike computer imagery that will be indistinguishable from reality.
When that happens, we'll be living in a world not far from the one imagined in the movie The Matrix. New generations might not know the real world as we knew it.


A woman in a video call using a HoloLens headset.
Microsoft


7
3D PRINTING

Prior to the Industrial Revolution, which began in Britain in the late 1700s, most people lived in the countryside and manufacturing was often done in people's homes, using hand tools or basic machines.
The Industrial Revolution marked a shift to powered, special-purpose machinery, factories, and mass production. Almost every aspect of daily life was influenced in some way, and the average person's income has exhibited unprecedented sustained growth ever since.

Interior of Magnolia Cotton Mills spinning room in 1911.
US National Archives and Records Administration
The Second Industrial Revolution happened between 1840 and 1870, when technological and economic progress continued with the increasing adoption of steam transport, the large-scale manufacture of machine tools, and the increasing use of machinery in steam-powered factories.

Boeing 747-8 test planes in assembly.
Jeff McNeill

Both revolutions were responsible for creating the early roadway and railway systems we use today, as well as improved housing, nutrition, and higher standards of living.
Nonetheless, conventional models of production still rely on large, interlinked manufacturing facilities and the vast supply chain that revolves around them.
Advancements in robotics and IT have greatly improved the speed and cost of mass manufacturing, but ultimately, making a product such as a Boeing jet or an iPhone is still a complex endeavor that requires sophisticated equipment, billions of dollars of investment, and hundreds of thousands of workers.
A new Industrial Revolution
In 1984, Chuck Hull invented a technology that would
change manufacturing forever. This technology, called
3D printing, allows anyone to make a physical object
from a three-dimensional digital model.

A modern industrial printer, the Voxeljet VX 4000,
capable of printing in plastic and sand particles.
Voxeljet

3D printing removes the complexity of transforming an
idea into a tangible object and streamlines the whole
manufacturing process. It helps companies design
and prototype real products so they can release
production versions faster and cheaper.
3D printing relies on a "factory-in-a-box" machine called
a 3D printer. It uses cartridges, like the ones in a home
printer, but they are filled with materials, not ink.

3D printers build objects using additive processes
and come in many models, sizes, and prices.
Since 1989, when the first commercial 3D printers
were made available, a lot has changed. Current
machines are able to print plastic, sand, metal alloys,
rubber, wax, bio-compatible materials, ceramics,
carbon fiber, cement, conductive filaments, and even
chocolate!
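The additive process itself is easy to grasp in code. The sketch below is a simplified illustration rather than output from any real slicer: it emits bare-bones G-code (the movement language most desktop 3D printers understand) for a 20 mm cube built as a stack of 0.2 mm layers, tracing only the square perimeter of each layer. The specific commands shown are standard, but everything else is a placeholder assumption.

```python
# Illustrative sketch of additive manufacturing: build a cube as a
# stack of thin layers, emitting one square perimeter per layer.
# Real slicers also compute infill, supports, and extrusion volumes.

def cube_gcode(side_mm=20.0, layer_height_mm=0.2):
    lines = ["G21 ; units = millimeters", "G28 ; home all axes"]
    layers = round(side_mm / layer_height_mm)  # 100 layers for a 20 mm cube
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    for i in range(layers):
        z = (i + 1) * layer_height_mm
        lines.append(f"G1 Z{z:.2f} ; raise nozzle to the next layer")
        for x, y in corners:
            lines.append(f"G1 X{x:.1f} Y{y:.1f} ; trace layer perimeter")
    return lines

gcode = cube_gcode()
print(len(gcode))  # 2 setup lines + 100 layers x 6 moves = 602
```

The point of the sketch is the loop structure: an object becomes nothing more than a long sequence of thin two-dimensional layers, which is exactly what makes the approach so general.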
In the last decade, 3D printers became popular with
hobbyists and small businesses for prototyping and
model-making. In 2005, fairly sophisticated equipment
which printed ABS plastic cost around $50,000. In
2015, a similar model is available for less than $499.
Recently released 3D scanners help consumers bring
their favorite objects to life with no prior knowledge
of modeling or computer-aided design.

A 3D digital scanner.
Makerbot

LayerWise lower jaw implant 3D printed in titanium.
3D Systems

3D printed jewelry.
Shapeways

Along with more affordable 3D printers came
disruptive business models that empower any
ordinary person to experience the revolution.


Startups such as Shapeways allow customers to use their
infrastructure to print digital models in a variety of
materials, such as metallic plastic, sandstone, wax,
steel, gold, silver, platinum, or bronze. For the first time
ever, it is now affordable to make jewelry, sculptures,
miniatures, and prototypes from the comfort of your
home or work.

A 3D printed sandstone sculpture.
Shapeways

3D printing will revolutionize many industries. Amazon
just filed a patent for a truck with a 3D printer inside to
build an order on the go and save precious hours in
delivery time. Another company came up with a faster
process in which the printing resembles the
liquid-metal robot in the movie Terminator 2.
Jet engines, guns, drones, prosthetics, organs and even
DNA already have been 3D printed. In a decade or so,

we may be able to print customized glasses, clothes,
shoes, musical instruments, and many other items at
home or through sharing economy companies.

3D printed small jet engine.
Monash University
Some futurists predict 3D printing will advance so much
in the next 50 years that we'll be able to use individual
atoms and reassemble them to our liking, opening up
possibilities to print virtually anything from diamonds to
food and new life.
Exponential technologies are scary and hard to grasp.
As we've seen in this book, we tend to overestimate their
short-term impact and underestimate the long-term
effect.
No matter what you believe, rest assured 3D printing will
be orders of magnitude more important than the
Industrial Revolution. It is just a matter of time.


8
BIONIC
IMPLANTS

When ace test pilot Steve Austin's ship crashed, he was


nearly dead. Deciding that "we have the technology to
rebuild this man", the government rebuilds Austin,
augmenting him with cybernetic parts that give him
superhuman strength and speed. Austin becomes a
secret operative, fighting injustice wherever it is found.
You might not be familiar with the plot of The Six Million
Dollar Man, a popular TV series aired from 1974 to 1978,
but the show was very influential in introducing the term
"bionic implant" to the mainstream. Nevertheless, bionic
implants had been around for quite some time.
In 1958, the artificial pacemaker became one of the first
electronic devices to be implanted inside patients. A
pacemaker uses low-energy electrical pulses to prompt
the heart to beat at a normal rate. Between 1993 and
2009, 2.9 million patients received a permanent
pacemaker in the United States.
In 1984, the Food and Drug Administration (FDA)
approved the first cochlear implant for use in adults and,
in 1989, for use in children. The cochlear implant is a
small, complex electronic device that can help to
provide a sense of sound to a person who is profoundly
deaf or severely hard-of-hearing. The implant consists of
an external portion that sits behind the ear and a second

portion that is surgically placed under the skin.
According to the FDA, as of December 2012,
approximately 324,000 people worldwide had received
such implants.
In the last 25 years, we have made the most astonishing
and unexpected discoveries about our biology, our
brain, and the place of technology within it. The new
generation of bionic implants would make The Six
Million Dollar Man writers feel vindicated. Take a look.
Bionic limbs
To get started, please spare 3 minutes of your time to
watch this moving video. It showcases ballroom dancer
Adrianne Haslet-Davis, who lost her left leg in the 2013
Boston Marathon bombing, performing again for the first
time since the incident. She uses a latest-generation
bionic leg customized to her body and dancing style.
In 2012, Oscar Pistorius made history by becoming the
first amputee sprinter to compete at the regular
Olympics, thanks to his carbon fiber running blades.
Another video shows a double amputee with two cutting-edge bionic arms under the direct control of his brain.
The sight of the man fitted with the sophisticated
modular prosthetic limbs illustrates how far our

technology has come in the last few decades. Even


bionic hands are getting really good at emulating real
ones.
In this field, scientists and entrepreneurs are currently
focused on restoring freedom of movement for people
with amputations by making the best and coolest bionic
limbs.
Sooner than we expect, though, exponential progress
in bionic limb technologies might create people with
super strength. We'll then witness a revolution in which
prostheses not only compensate for physical disabilities
but add new capabilities to the human body.

Oscar Pistorius at the London 2012 Paralympic Games.
Chris Eason

Bionic heart
A synthetic replacement for the heart remains one of
the long-sought holy grails of modern medicine. It is
incredibly challenging to create a device that can
withstand the harsh conditions of the body's
circulatory system and reliably pump 35 million times
per year, as the heart does.
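That figure is easy to sanity-check. A quick back-of-envelope calculation, sketched in Python, shows that 35 million beats per year corresponds to a resting heart rate well within the normal human range:

```python
# Sanity-check the chapter's figure: does ~35 million beats per year
# match a plausible human resting heart rate?
beats_per_year = 35_000_000
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a year

bpm = beats_per_year / minutes_per_year
print(f"{bpm:.0f} beats per minute")  # ~67 bpm, a normal resting rate
```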
In recent years, though, many companies and
universities have come closer to replacing our second
most important organ. In December 2013, the French
company Carmat performed the world's first total
artificial heart implant surgery on a 76-year-old man,
with no donor heart involved. Although
the patient died soon thereafter, another one made
history in 2015.

In March 2015, an American company announced
BiVACOR, the world's first truly bionic heart. The team
has successfully implanted it in a sheep, and human
trials are expected to begin in 2018. BiVACOR lasts
ten years and is smaller and more reliable than any
artificial heart ever built. It could be an important step
on our path to becoming cyborgs.

The BiVACOR is a total artificial heart designed to take
over the complete function of a patient's failing heart.
BiVACOR

The Google Smart Lens prototype, to be
manufactured by Novartis.
Google

Bionic eye
Technology has reached a point where
smart contact lenses are being
developed for a variety of uses. One
model, created by Google and to be
manufactured by Novartis, has the
potential to help diabetics monitor their
glucose levels.
Another might give the wearer the power
to zoom their vision almost threefold.
Developed initially for the military, the
lens could also help millions of people
suffering from macular degeneration.
Currently, science is not only
augmenting the eye's functions but it is

going down a path where it will eventually be able to
replace our second most complex organ. There are now
several devices on the market that can restore partial
vision. Some, like the eSight Eyewear, a kind of VR
headset, help the legally blind, and the results are really
moving.
Others, like the Argus II Retinal Prosthesis System, the
world's first approved device intended to restore some
functional vision, can be tried by patients who suffer
from certain types of blindness. Using the Argus II
involves not only surgery, but also a post-operative
programming and low-vision rehabilitation protocol. The
results are encouraging.
The science and the research behind visual prosthesis
are relatively new; the current technology seems
rudimentary and the devices clunky. However, in the
next 20 years, scientists believe advances in bionic
implants will increasingly benefit people suffering from
eye diseases. Theoretically, even people with no eyes
would be able to see.


Brain implants
The brain controls our movements and our breathing,
makes sense of the world, and stores the memories that
help form our personalities. It is often referred to by
scientists as the most complex object in the universe.
The human brain boasts more than a hundred billion
neurons. Each neuron may be connected to up to ten
thousand other neurons, passing signals to each other
via as many as a hundred trillion synaptic connections.
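These figures multiply out consistently, as a quick back-of-envelope check shows. The average of about 1,000 connections per neuron used below is an assumption introduced to reconcile the two cited numbers, with the "up to ten thousand" figure as the upper end:

```python
# Back-of-envelope check of the brain figures cited above.
neurons = 100e9            # a hundred billion neurons
avg_connections = 1_000    # assumed average; "up to ten thousand" is the upper end

synapses = neurons * avg_connections
print(f"{synapses:.0e} synapses")  # 1e+14 -> the hundred trillion cited
```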
Figuring out how the brain works is the largest and most
difficult scientific endeavor humanity has ever pursued,
and our progress in this field has been astonishing. In
the last fifteen years, for instance, we have learned more
about the brain than we have in the rest of human
history. This knowledge has allowed the emergence of new sci-fi-esque technologies.
One such technology, available since the 1990s, is
called deep brain stimulation. DBS is a neurosurgical
procedure involving the implantation of a brain
pacemaker, which sends electrical impulses, through
implanted electrodes, to specific parts of the brain.
More than 100,000 people around the world have
undergone DBS for the treatment of movement and

mood disorders such as Parkinson's disease, dystonia,


and major depression. It is an impressive technology.
This remarkable video shows what happens when a
patient turns off the remote-control-operated pacemaker.
In 2012, a quadriplegic woman had a microchip
implanted in her brain's motor cortex. The sensor could
read her "thoughts" and translate them into machine
language. Through this machine-brain interface, the
woman was able to move a robotic arm to serve herself
coffee for the first time since she became paralyzed.
And there is more.
Dr. Theodore Berger, from the University of Southern
California, has been developing a device that can be
implanted into the brain to restore memory functions,
modeling the complex neural activity that takes place in
the hippocampus, which is responsible for forming new
memories.
The device, a microchip that encodes memories for
storage elsewhere in the brain, has been tested using
tissue from rats' brains, and researchers are planning
trials on live animals. They hope it will provide a way of
restoring memory function in patients who have suffered
damage to their hippocampus from a stroke, an
accident, or Alzheimer's disease.

Futurists such as Ray Kurzweil predict we'll have
reverse-engineered the brain by the 2030s. Judging by the
massive amount of research and money in this field, it
could happen even sooner.

In 2003, Microsoft co-founder and philanthropist Paul
Allen poured $100 million into founding the Allen Institute
for Brain Science. In 2013, the European Union launched
a similar initiative with great fanfare.
In 2014, President Barack Obama announced the
BRAIN Initiative, a large-scale project with the goal of
mapping the human brain. It was inspired by the Human
Genome Project and has a budget of $3 billion to be
spent over 10 years.
The advancements in neural implants over the next
decades will powerfully challenge our concept of
humanity. Are we going to implant fake memories in our
brains? Will we evolve to be more machine than biology?
Will we be able to upload our consciousness to the
Internet and live forever?
I don't have the answers to these questions, but there is
already a community of people pushing the boundaries
of biohacking, which is the hacking of our own biology.
Cheap bionic implants are already sold online, and the
first primitive cyborgs are among us. In the view of many
scientists, it is just a matter of time until the human race
becomes more machine than biology. I concur.

President Barack Obama announcing the BRAIN
Initiative at the White House in 2014.

9
BIOTECH

Biotechnology, as the name implies, is the marriage of


technology with biology. Generally it refers to the use of
biological systems, living organisms, or derivatives
thereof, to make or modify products or processes for
specific use.
Depending on the tools and applications, it often
overlaps with the fields of bioinformatics, bioengineering
and biomedical engineering. For the sake of simplicity,
we'll treat all biotech-related technologies in the same
chapter.
Ingestibles
Doctors are using ingestible sensors dotted with
sophisticated technologies to help them diagnose
patients. Some, like PillCam, are more than ten years
old.

Developed by an Israeli company, PillCam uses a
miniaturized camera contained in a disposable
capsule that naturally passes through the digestive
system, allowing physicians to directly view the
esophagus, small intestine, and the entire colon, without
sedation or radiation.
In 2012, the FDA approved an ingestible sensor the size
of a grain of sand that is integrated into pills and lets
doctors know when patients take their medicine and
when they don't. The sensor does not have batteries.
After being ingested, the chip interacts with digestive
juices to produce a voltage that can be read from the
surface of the skin through a detector patch, which then
sends a signal via mobile phone to inform the doctor
that the pill has been taken. Sensors on the chip also
detect heart rate and can estimate the patient's amount
of physical activity.
Researchers at MIT developed an ingestible pill that has
the potential to replace the daily injections used by
diabetes patients. The pill is coated with tiny needles
that can deliver drugs directly into the lining of the
digestive tract.

A PillCam ingestible camera.
Given Imaging

Nanobots
Nanobots may be our medical future. These tiny robots
are capable of drug delivery inside our bodies,
detecting diseases and, in the near future, even
repairing or manipulating damaged cells. There are two
types of nanobots: biological and mechanical.
Recently-made mechanical nanobots measure about
1/50 of the diameter of a human hair. We're talking about
real micro-machines powered by micro-motors, propelled
either by chemical reactions inside the body or
electromagnetism.
In December 2014, San Diego researchers published a
paper proving that mechanical nanobots can travel
inside a living creature and deliver their medicinal load
without any detrimental effects.
A mouse ingested these tiny machines and they
reached its stomach. There, the nanobots headed
outwards toward the stomach lining where they then
embedded themselves, dissolved, and delivered a
nanoparticle compound directly into the gut tissue.
Biological nanobots are even smaller, measuring just a
few dozen nanometers in diameter, the size of a typical
virus. Researchers have developed DNA-made

nanobots that could seek out specific cell targets and
deliver important molecular instructions. These nanoscale
robots use DNA strands that fold and unfold like
origami.
In 2014, scientists at Harvard University and Bar-Ilan
University in Israel successfully injected these tiny
living DNA nanobots into live cockroaches to deliver
drugs directly into the insects' cells.
Biological nanobots can function like primitive
computers, carrying out simple tasks such as telling
cancer cells to self-destruct. In a decade or so, they'll
have the equivalent computing power of a video-game
console from the 80s. Inspired by the mechanics of the
body's own immune system, the technology might one
day be used to program immune responses to treat
various diseases with astonishing precision.
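One way to picture this "primitive computer" behavior is as a molecular AND gate: the nanobot opens and releases its payload only when every aptamer "lock" recognizes its target on the cell surface. The sketch below is purely conceptual; the marker names are hypothetical and no real biochemistry is modeled.

```python
# Conceptual model of a DNA nanobot as a molecular AND gate:
# the payload is released only when ALL required surface markers
# (the aptamer "keys") are found on the cell being inspected.

def nanobot_decision(cell_markers, required_keys):
    """Return the nanobot's action for a given cell."""
    if required_keys <= cell_markers:   # subset test = AND over all keys
        return "release payload"        # e.g., a self-destruct signal
    return "stay closed"

# Hypothetical marker names, for illustration only.
cancer_cell = {"marker_A", "marker_B"}
healthy_cell = {"marker_A"}
keys = {"marker_A", "marker_B"}

print(nanobot_decision(cancer_cell, keys))   # release payload
print(nanobot_decision(healthy_cell, keys))  # stay closed
```

The AND-gate structure is what gives the precision described above: a healthy cell missing even one marker leaves the nanobot closed.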
Now, in 2015, biological nanobots will be tried in a
critically ill leukemia patient. The patient will receive an
injection containing a trillion DNA nanobots designed to
interact with and destroy leukemia cells (with an
expected zero collateral damage in healthy tissue). The
cancer is expected to be destroyed within one month.
Mind boggling.

Genetic engineering
Genetic modification is not novel. Humans have been
altering the genetic makeup of plants for millennia,
keeping seeds from the best crops and planting them in
the following years, breeding and crossbreeding
varieties to make them taste sweeter, grow bigger, last
longer.
We've been doing the same with pets in a process
called artificial selection. Humans choose which animals
will live based on their most desired characteristics. It
only took us 15,000 to 20,000 years to turn a gray wolf
into all of the dog breeds we see today. The same
happened with cows, horses, and sheep.
But the technique of genetic engineering is new and
quite different from conventional breeding. This
technique gave rise to the infamous genetically modified
organisms or GMOs.
GMOs are plants or animals that have undergone a
process wherein scientists alter their genes with DNA
from different species of living organisms, bacteria, or
viruses to get desired traits such as resistance to
disease, insects, or tolerance of pesticides.
GMOs are part of our lives. You have probably already
eaten genetically modified tomatoes, corn, or papaya. It
is estimated that 80% of the processed food in the US
contains at least one GMO crop.
An enormous variety of GMOs has been developed in
recent years, from genetically engineered trees and an
environmentally friendly pig to a genetically modified
Atlantic salmon engineered to grow twice as large and
twice as fast as regular salmon.
Countries all around the world are joining the GMO
frenzy. West Africa will soon experience bananas with
Vitamin A, Israel developed bizarre featherless chickens
to improve production, and China has cows that
produce human milk.
We don't need to be experts in this field to understand
the implications. GMOs will get more sophisticated in the
next decades and entire industries such as agriculture
and livestock farming may rely on them. GMOs might
end up being the only hope to feed an ever-growing
world population.
Most scientists agree that genetically-modified foods do
not represent a higher health risk to consumers than
traditional food. Their opinion is backed by many
scientific studies conducted over the years.

Nevertheless, as there are enormous interests at stake,


people are still skeptical about GM food and the science
behind it. They demand long-term studies to validate
GM food safety and also more oversight by the
government. The tobacco industry is often cited as an
example of what corporations can do if they go
unchecked.
However, while most of us are concerned with GM food
safety, the most important innovation brought by GMOs,
the fight against disease, is being left out of the debate.
South Korean scientists, for instance, created
fluorescent dogs to help combat AIDS. In Brazil, GM
mosquitoes are being used to contain the spread of
dengue fever. US scientists have genetically engineered
viruses to kill cancer.
There is a revolution coming that could save millions of
lives and benefit all mankind. Genetic engineering
seems to be the way of the future.
Regenerative medicine
In 1997, scientists in the US were able to grow a human
ear on the back of a lab rat. The images of the small
Frankenstein rodent sent shockwaves around the world.
Many people protested against what they thought was a

bizarre and cruel experiment. Recently, Japan started to


tinker with growing human organs inside pigs.
These examples, resembling the script of The Island of
Dr. Moreau, are actually part of regenerative medicine, a
rapidly developing field with the potential to transform
the treatment of human disease through the
development of innovative new therapies that offer a
faster, more complete recovery with significantly fewer
side effects or risk of complications.
Actually, we're close to real breakthroughs. Scientists
have grown kidneys, lungs, and even hearts in
laboratories. For now, they belong to animals, but human
trials might start in the next five years.
London's Royal Free Hospital is growing noses, ears,
and blood vessels made from stem cells. King's College
had success with skin using the same technique. In
Poland, a paralyzed man walked for the first time after
cells from his nose were transplanted to his spine.
Imagine a world where there is no organ donor shortage,
where victims of spinal-cord injuries can walk, and
where weakened hearts are replaced. In conjunction
with other tech such as bionic implants and 3D printing,
this is the long-term promise of regenerative medicine.

Genome sequencing
In 1990, the US Congress established funding for the
Human Genome Project and set a target completion
date of 2005. The goal of the HGP was to sequence and
map all of the genes, together known as the genome,
of our species. It was the equivalent in biology of the
"moonshot" of the 1960s.
At the time, many critics thought the project wouldn't be
good science. Part of the scientific community doubted
it could be finished on budget and on time, as we didn't
possess the technology to pursue the challenge in 1990.
Of course, the pundits were wrong, betrayed by their
linear thinking.
Not only was the Human Genome Project completed two
years sooner than previously planned, but it also cost
less than the initial budget. A parallel project was
conducted outside of the government by Celera
Genomics, which was formally launched in 1998 and
completed just three years later.
The US government's $4 billion investment in the HGP
helped to drive down the cost of sequencing a genome
from any person. In 2001, it cost $100 million. In 2015, a
company announced full genome sequencing for only
$1,000. The results can now be known in hours rather
than months.
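The pace of that price collapse illustrates the earlier point about linear thinking. A drop from roughly $100 million to roughly $1,000 over 14 years implies sequencing costs halved far faster than Moore's law, which halves costs roughly every two years:

```python
import math

# Sequencing cost fell from ~$100 million (2001) to ~$1,000 (2015).
cost_2001, cost_2015, years = 100_000_000, 1_000, 14

factor = cost_2001 / cost_2015      # 100,000x cheaper overall
halvings = math.log2(factor)        # ~16.6 halvings of cost
halving_time = years / halvings     # ~0.84 years per halving

print(f"{factor:,.0f}x cheaper; cost halved every {halving_time:.2f} years")
```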
Diagnostic medicine was one of the immediate
beneficiaries of the plummeting costs of sequencing a
human genome. There are now 2000+ genetic tests
available to physicians to aid in the diagnosis and
therapy for 1000+ different diseases.
Having the complete sequence of the human genome is
similar to having all the pages of a manual needed to
make the human body. Researchers and scientists are
now determining how to read the contents of all these
pages.
Individualized analysis based on each person's genome
might lead to a powerful form of preventive and
personalized medicine. By tailoring recommendations to
each person's DNA, doctors will be able to work with
individuals on the specific strategies that are most likely
to maintain health for that particular individual.
Genome sequencing seems to be poised to improve
healthcare in ways that were not possible before.


Organ-on-a-chip
You read it right. Scientists are creating
organs-on-a-chip to improve ways of
predicting drug safety and
effectiveness. So far these organs
include the lung, intestine, heart, liver,
skin, bone marrow, pancreas, kidney,
eye, and even a system that mimics
the blood-brain barrier.
The idea is to recreate the smallest
functional unit of any particular organ
in a micro-environment that closely
imitates the human body.

Lung on a chip.
Emulate

An organ-on-a-chip is a microfluidic
cell culture device created with
microchip manufacturing methods that
contains continuously perfused chambers inhabited by
living cells arranged to simulate tissue- and organ-level
physiology. The device looks like a futuristic alien
artifact.
Organs-on-a-chip are a mind-blowing technology that
could exponentially accelerate the development of new
drugs and eventually replace animals used in lab
testing. In the future, all organs-on-a-chip may be put
together to create a human-in-a-chip to speed up drug
development.
I highly recommend watching this TED talk if you want to
understand how organs-on-a-chip work. Here you can
find a list of US universities currently researching the
technology.
Anti-aging tech
In the last 150 years, developed countries almost
doubled the average life expectancy of their
populations. The victories against infectious and
parasitic diseases were a triumph for public health
projects of the 20th century, which immunized millions of
people against smallpox, polio, and major childhood
killers like measles.
Even earlier, better living standards, especially more

nutritious diets and cleaner drinking water, began to


reduce serious infections and prevent deaths among
children.
In theory, there is no age ceiling for life if we're able to
solve the diseases that afflict our species and some of
the mechanisms that contribute to aging. Scientists
believe we can extend the human lifespan to 150-200
years in our lifetime and that technology will solve any
biological limitations.
Recently, due to exponential advancements in
biotechnology, many Silicon Valley entrepreneurs have
been encouraged to embark on the anti-aging crusade. Some
are doing it for vanity while others want to contribute to
the betterment of mankind. All of them see enormous
financial opportunities in this field.
Larry Ellison, founder of Oracle, was one of the first
Silicon Valley entrepreneurs to donate to the cause. He
has been investing in anti-aging technologies and
research for more than 15 years through his Ellison
Medical Foundation.
The Palo Alto Longevity Prize is giving $1 million to any
team that first demonstrates innovations with the
potential to end aging, such as restoring the body's
homeostatic capacity or promoting the extension of a
sustained and healthy lifespan of a mammal by 50%.
This type of incentive prize has generated great results
in areas such as private space exploration and
affordable healthcare.

transformations in how we live our lives, how we plan our


careers, and even how we run our countries.

The startup Human Longevity uses both genomics and


stem-cell therapies to find treatments that allow aging
adults to stay healthy and functional for long as
possible.
In 2014, Google has announced an investment of up to
$750 million in their own Calico, a company headed by
the former CEO of Genentech; its mission is to "reverse
engineer the biology that controls lifespan and devise
interventions that enable people to lead longer and
healthier lives.
Larry Page, the founder of Google, is adamant that one
day well solve death. Billionaires such as Peter Thiel,
famous for his investments in Facebook, Airbnb, and
Palantir, have made the cause even more public.
It is estimated that it'll take around five to10 years to see
concrete progress in this field. Maybe we'll double our
lifespan again in the next decades. If our technology
delivers on its promises, there would be profound
88

Synthetic life
In recent decades, our species has been able to copy life
and to modify it radically through genetic engineering.
Soon, for the first time in human history, we'll be able
to create artificial life that could never have existed
naturally.
Craig Venter, the entrepreneur and scientist
responsible for privately sequencing the human
genome, announced in 2010 that his team had built
the genome of a bacterium from scratch and
incorporated it into a cell to make what they called "the
world's first synthetic life form".
Dr. Venter described the converted cell as "the first
self-replicating species we've had on the planet
whose parent is a computer." The single-celled
organism has four "watermarks" written into its DNA to
identify it as synthetic and help trace its descendants
back to their creator.
M. mycoides JCVI-syn1.0, the first
synthetic life ever created.
Tom Deerinck and Mark Ellisman of
the National Center for Microscopy
and Imaging Research at UCSD
J. Craig Venter Institute

Some scientists dismissed the announcement arguing


the synthetic genome was almost identical to the
biological one, proving that the created bacterium was
actually semi-synthetic. The controversy continues to
this day, but the experiment certainly paved the way for
organisms that are built rather than evolved.

Building an entire genome from scratch is still a


daunting task, but many scientists believe it may be
possible within the next 10 years. With it will come
synthetic living systems made to order to solve a range
of problems, from producing new drugs to creating
biofuels. Or we might create them just for fun, as pets.

In 2014, more breakthroughs in synthetic biology were


achieved. The first synthetic yeast chromosome was
developed and the first living organism to carry and
pass down to future generations an expanded genetic
code was created by American scientists.

The scientific and philosophical consequences of


biotech breakthroughs might change the world forever.
Our society is transitioning from the age of scientific
discovery to the age of scientific mastery.

Venter is now focused on creating a machine called a


digital biological converter capable of biological
teleportation. It works similarly to a fax machine or a 3D
printer. One would send the digital code for a new DNA
piece and the DBC would receive and rewrite it into
genetic code on the spot.
Cambrian Genomics, a startup in San Francisco that
developed a technology to print DNA, wants to allow
their customers to create new life forms from the comfort
of their homes by manipulating DNA from plants and
animals. It would be like a real-life game where you can
play God and design your own creatures.

10
NANOTECH

Nanotechnology allows scientists and engineers to


manipulate matter at the nanoscale, which is about one
to 100 nanometers. It can be used across all the other
scientific fields such as chemistry, biology, physics,
materials science, and engineering.
This technology is hard to master and to understand.
One nanometer is a billionth of a meter. A sheet of paper
is about 100,000 nanometers thick whereas a strand of
human DNA is 2.5 nanometers in diameter.
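The scale comparisons above are just unit conversions, as this small Python sketch shows (the paper thickness of roughly 0.1 mm is the common approximation implied by the 100,000 nm figure):

```python
# Express everyday sizes in nanometers (1 nm = one billionth of a meter).
NM_PER_M = 1e9

paper_thickness_nm = 0.0001 * NM_PER_M  # ~0.1 mm sheet of paper = 100,000 nm
dna_diameter_nm = 2.5                   # width of a DNA strand, as cited

# A single sheet of paper spans about 40,000 DNA strands laid side by side.
print(paper_thickness_nm / dna_diameter_nm)  # 40000.0
```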
Nanotechnology benefits several industries and has
many applications. For example, it can be used to
design pharmaceuticals that can target specific organs
or cells in the body such as cancer cells, and enhance
the effectiveness of therapy.
Materials engineered to such a small scale are often
referred to as nanomaterials. They can take on unique
optical, magnetic, electrical, and chemical properties,
and impact electronics, robotics, biotech, food, and
other fields.
Deploying nanotechnology in cosmetics is a common
practice in the industry. Titanium dioxide, in the form of
nanoparticles used to render creams and lotions
transparent, is currently approved as a UV-filter in

sunscreens.
Nanotechnology used in food gives manufacturers
tighter control over what they're producing, touching on
such areas as coloration, dimensions, and taste. Side
effects are still not very well understood and the FDA
has released some guidelines to regulate the industry.
Nanocoatings can be added to cloth or surfaces to alter
their original properties. For instance, nanoparticles of
silica incorporated into the weave of a fabric create a
coating that repels water and stain-producing liquids,
and a hydrophobic paint makes walls pee-proof. Silver
nanoparticles added to clothing kill bacteria and fungi,
preventing the nasty odors they cause.
Nanotechnology could dramatically improve energy
storage for electronics, cars, and buildings. Nanosize
batteries that are eighty thousand times thinner than a
human hair represent a promising new front.
Nanomaterials such as carbon nanotubes will ignite a
revolution. These tiny cylindrical structures exhibit 200
times the strength and five times the elasticity of steel;
five times more electrical conductivity, 15 times the
thermal conductivity and 1,000 times the current
capacity of copper. Carbon nanotube fibers can also
withstand extremes of temperature and resist radiation-induced degradation. Despite being strong and having a toughness comparable to that of fibers used for antiballistic vests, fabrics woven from these nanotube yarns would be soft to the touch and drapable, a consequence of the very small nanotube yarn diameters. Carbon nanotubes are so strong they could even allow us to build a space elevator to deploy satellites or tourists in space without the need for expensive rockets.
There are literally hundreds of applications for the material, including artificial muscles, high-intensity filaments for light and X-ray sources, antiballistic clothing, electronic textiles, satellite tethers, and yarns for energy storage and generation that are weavable into textiles.

Carbon nanotubes being spun to form a yarn.
CSIRO


Quantum computers
Quantum mechanics is a branch of physics that explains
the behavior of matter and its interactions with energy on
the scale of atoms and subatomic particles. It states that
particles can be in two places at once; that two particles
can be related, or entangled; and that when we look at
particles we unavoidably alter them.
In the science of the very small, things get very weird
and complicated. That is why one of the holy grails of
physics has been to build a quantum computer that can
process certain types of large-scale, very difficult
problems exponentially faster than classical computers.
Rather than store information as 0s or 1s as conventional
computers do, a quantum computer uses qubits which
can be a 1 or a 0 or both at the same time.
This quantum superposition, along with the quantum
effects of entanglement and quantum tunneling, enable
quantum computers to consider and manipulate all
combinations of bits simultaneously, making quantum
computation powerful and fast. In theory, a new
generation of quantum computers could be millions of
times faster than conventional computers to solve
certain types of problems.
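One way to make the "all combinations simultaneously" idea concrete is to note what a classical machine must do just to write down a quantum state: n qubits require tracking 2^n amplitudes. The toy sketch below only illustrates that bookkeeping, not how real quantum hardware computes:

```python
import math

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition:
    2**n amplitudes, each 1/sqrt(2**n)."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(10)
print(len(state))                  # prints 1024: ten qubits already span 1,024 states
print(sum(a * a for a in state))   # prints 1.0: squared amplitudes (probabilities) sum to 1
```

This exponential blow-up in classical bookkeeping is precisely the headroom quantum computers hope to exploit.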

If we invented a quantum computer, it would be capable


of changing the world in the blink of an eye. So, are we
ever going to build this kind of machine that sounds
counterintuitive to our logic and in the realm of science
fiction?
Well, actually, there is already a company making quantum computers. D-Wave, based in Canada, is selling its systems to NASA, Google, Lockheed Martin, and other large corporations and government branches. The investment arm of the CIA and the famous DFJ venture capital firm have a stake in the company.
D-Wave's newest quantum computer, the D-Wave Two, is a marvelous piece of engineering. The machine is cooled to a temperature 150 times colder than interstellar space (0.02 Kelvin), consumes up to 300 times less energy than an equivalent supercomputer, and has its processor shielded against the tiniest magnetic interference (50,000 times weaker than Earth's magnetic field).
The D-Wave computers are being tested in diverse
fields such as optimization, machine learning, video
compression, object detection, protein folding, and even
for labeling news stories.

A D-Wave quantum
computer exterior.
D-Wave Systems


Google and NASA even made a video trying to explain quantum computers to the average consumer.
So far the results of the $15 million D-Wave Two machine
are controversial. It apparently does not solve problems
faster than regular computers. However, D-Wave's CEO
promises a newer version for 2015 that will show a great
leap in performance.
In the strange world of quantum physics, one must be
careful not to miss a new computing paradigm. Google,
Microsoft, IBM and even the NSA are working on their
own quantum computing technology.
It seems the next ten years will be exciting.


11
ARTIFICIAL
INTELLIGENCE

Artificial intelligence, or AI, is the ability of a digital computer or computer-controlled machine to perform tasks commonly associated with intelligent beings, such as visual perception, speech recognition, decision-making, or translation between languages.
The idea that the human thinking process could be mechanized has been studied for thousands of years by Greek, Chinese, Indian, and Western philosophers. But researchers consider the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" (McCulloch and Pitts, 1943) the first recognized artificial intelligence work.

Glen Beck and Betty Snyder program the ENIAC at the Ballistic Research Laboratory.
US Army
In 1946, the US Army unveiled ENIAC, the first programmable general-purpose electronic digital computer. The giant machine was initially designed to calculate artillery-firing tables, but its ability to execute different instructions meant it could be used for a wider range of problems.
In 1950, the renowned computer scientist and mathematician Alan Turing formally introduced the concept of artificial intelligence in his paper "Computing Machinery and Intelligence." He proposed the Turing test, which would assess a machine's ability to mimic human intelligence.
In 1956, the Dartmouth Artificial Intelligence conference, proposed by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, gave birth to the field of AI and awed scientists with the possibility that electronic brains could actually think.
In 1964, Joseph Weizenbaum built ELIZA, an interactive program that carries on a dialogue in English on any topic. It became a popular toy when a version simulating the dialogue of a psychotherapist was programmed.
Since then, AI has been repeatedly featured in sci-fi
movies and TV shows that captivated the public's
imagination. Who doesn't remember HAL 9000, the
sentient and malevolent computer that interacts with
astronauts in 2001: A Space Odyssey?

Despite scientists' initial enthusiasm, practical applications for artificial intelligence were lacking for many decades, which led many to dismiss AI's potential impact on our society.
Only in the late 1980s did a new area of research called deep learning begin to show early promise about the potential of AI. Unfortunately, the computational power of the time was still too limited for scientists to reach any meaningful breakthroughs.
It was only in 1997, when the IBM Deep Blue computer defeated the world chess champion, Garry Kasparov, that artificial intelligence started to be taken more seriously.
In 2005, DARPA sponsored The Grand Challenge
competition to promote research in the area of
autonomous vehicles. The challenge consisted of
building a robot-car capable of navigating 175 miles
through desert terrain in less than ten hours, with no
human intervention. The competition kickstarted the
commercial development of autonomous vehicles and
showcased the practical possibilities of artificial
intelligence.
In 2011, IBM's Watson computer defeated human players in the Jeopardy! challenge. The quiz show, known for its complex, tricky questions and very smart champions, was the perfect choice to demonstrate the advance of artificial intelligence.
So far, everything was going well and AI seemed to be evolving at a pace that everyone understood. However, in 2014, top scientists and entrepreneurs such as Bill Gates, Elon Musk, Steve Wozniak, and Stephen Hawking suddenly began making doomsday predictions, warning society about a superintelligent AI that could pose a serious threat to humanity.
What happened? Why did some of the smartest people
on Earth sound the alarm about the perils of artificial
intelligence? How could something like a computer that
played Jeopardy! be a threat to civilization?
To answer these questions and to illustrate how AI will
be the most important technology in the next decades,
we need to understand the various types of AI in
existence, where we are in terms of technology
development, and how they work.
Weak Artificial Intelligence (WAI)
Weak artificial intelligence, also known as narrow artificial intelligence, is the only type of AI we have developed so far. WAI specializes in just one area of knowledge, and we experience it every day, even though we rarely notice its presence.
Simple things like email spam filters are loaded with
rudimentary intelligence that learns and changes its
behavior in real time according to your preferences. For
instance, if you tag a certain sender as junk several
times, the WAI automatically understands it as spam and
you'll never need to flag it again.
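The learning behavior described above can be caricatured with a tiny counter-based filter. This is an invented, minimal example, not the logic of any real mail client:

```python
# Minimal sketch of a learn-from-flags spam filter (invented example):
# after a sender is flagged as junk enough times, future mail from
# that sender is classified as spam automatically.
from collections import Counter

class TinySpamFilter:
    def __init__(self, threshold=3):
        self.flags = Counter()       # how many times each sender was flagged
        self.threshold = threshold   # flags needed before auto-classification

    def flag_as_junk(self, sender):
        self.flags[sender] += 1

    def is_spam(self, sender):
        return self.flags[sender] >= self.threshold

f = TinySpamFilter()
for _ in range(3):
    f.flag_as_junk("promo@example.com")

print(f.is_spam("promo@example.com"))   # True: learned from repeated flags
print(f.is_spam("friend@example.com"))  # False: never flagged
```

Real filters weigh message content, not just senders, but the core loop is the same: user feedback updates the model, and the model changes future behavior.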
Google is also a sophisticated WAI. It ranks results intelligently by figuring out, among millions of variables, which ones are relevant to your specific search and context.
Other examples of WAI are voice recognition apps,
language translators, Siri or Cortana, autopilots in cars
or planes, algorithms that control stock trading, Amazon
recommendations, Facebook friends' suggestions, and
computers that beat chess champions or Jeopardy!
players. Even autonomous vehicles have WAIs to control
their behavior and allow them to see.
Weak artificial intelligence systems evolve slowly, but they're definitely making our lives easier and helping humans to be more productive. They're not dangerous at all. If they misbehave, nothing super-serious would happen: maybe your mailbox would fill with spam, a stock market trade would be halted, a self-driving car would crash, or a nuclear power plant would be deactivated.

Computers might be much more efficient than humans at logical or mathematical operations, but they have trouble with simple tasks such as identifying emotions in facial expressions, describing a scene, or distinguishing nuanced tones like sarcasm.

WAIs are stepping stones towards something much bigger that will definitely impact the world.

But how far are we from allowing computers to perform tasks that only humans could do? To answer this question, we need to make sure we have affordable hardware at least as powerful as the human brain. We are almost there.

Strong Artificial Intelligence (SAI)


Strong artificial intelligence, also referred to as general artificial intelligence, is a type of AI that gives a machine intellectual capabilities or skill sets as good as a human's. Another idea often associated with SAI is the ability to transfer learning from one domain to another.
Recently, an algorithm learned how to play dozens of Atari games better than humans, with no previous knowledge of how they worked. It is an amazing milestone for artificial intelligence, but it is still far from being a SAI.
We need to master a myriad of weak artificial
intelligence systems and make them really good at their
jobs before taking the challenge to build a SAI with
human-like capabilities.

Scientists estimate the speed of a human brain to be about twenty petaFLOPS. Currently, just one machine, the Chinese supercomputer Tianhe-2, can claim to be faster than a human brain. It cost $400,000,000 and is, of course, neither affordable nor accessible to AI researchers. Just for the sake of comparison, in 2015, an average $1,000 PC is roughly 2,000 times less powerful than a human brain.
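Under the common assumption that performance per dollar doubles roughly every 18 months (an assumption, not a guarantee), closing a 2,000-fold gap takes a predictable number of doublings:

```python
# Illustrative extrapolation only: assume performance per dollar
# doubles every 18 months, starting from the text's 2015 figures.
BRAIN_FLOPS = 20e15                  # ~20 petaFLOPS for a human brain
pc_2015_flops = BRAIN_FLOPS / 2000   # a $1,000 PC in 2015 is ~2,000x weaker

year, flops = 2015.0, pc_2015_flops
while flops < BRAIN_FLOPS:
    year += 1.5                      # one doubling period, in years
    flops *= 2                       # performance per $1,000 doubles

print(int(year))  # prints 2031: crossover in the early 2030s under this assumption
```

With this doubling rate, a $1,000 machine matches one brain's worth of compute in the early 2030s; Kurzweil's more aggressive assumptions pull the crossover into the 2020s.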
But wait a few years, and exponential technologies will work their magic. Futurists like Ray Kurzweil are very optimistic that we'll achieve one human brain's capability for $1,000 around the 2020s and one human race's capability for $1,000 in the late 2040s. In the early 2060s, we'll have the power of all human brains on Earth combined for just one cent.

Ray Kurzweil.
Kurzweil Technologies

As you can infer from Kurzweil's calculations, computing power won't be an obstacle to achieving strong artificial intelligence. Even if we make pessimistic predictions following the current trends dictated by Moore's Law, we'll achieve those capabilities just several decades later. It is only a matter of time until hardware becomes billions of times more powerful than all human brains combined.

A future SAI will be more powerful than humans at most tasks because it will run billions of times faster than our brains, with unlimited storage and no need to rest. Initially, a SAI will make the world a better place by doing human jobs more efficiently.

The major difficulties in inventing a strong artificial intelligence lie in the software: how we'll be able to replicate the complex biological mechanisms and the connectome of the brain so a computer can learn to think and perform complex tasks.
There are many companies, institutions, governments, scientists, and startups working on reverse engineering the brain using different techniques, counting on the help of neuroscience. Optimists believe we'll be able to have a complete brain simulation around the 2030s. Pessimists think we'll have achieved it by the 2070s.
Theoretically, it may take decades to have a computer as smart as a five-year-old kid, but a strong artificial intelligence system will be the most revolutionary technology ever built.

The reason tech visionaries and scientists are concerned about the invention of a strong artificial intelligence is that a computer intelligence doesn't have morals; it just follows what is written in its code. If an AI is programmed, for instance, to get rid of spam, it could decide that eliminating humans is the best way to do its job properly.
Also, it is feared that an SAI would stay at the human-intelligence threshold for just a brief instant in time. Its capability to program itself, recursively, would make it exponentially more powerful as it gets smarter.
Recursive self-improvement works like this: initially, the SAI programs itself with the capability of, let's say, two human programmers. As the machine has access to abundant computing power, it can multiply the number of "programmers" by the dozens in just a matter of hours.
Within days, these human-equivalent "programmers" would have discovered so many scientific breakthroughs that the artificial intelligence would become smarter than the average human. That would upgrade its "programmers" as well, which could then become smarter than Einstein.
At some point in time, this recursive capability will allow
any strong artificial intelligence to become orders of
magnitude more intelligent than us, giving birth to the
first superintelligence.
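The feedback loop described above can be caricatured in a few lines of code. All the numbers are invented; the point is only the shape of the curve, in which capability feeds back into the rate of improvement:

```python
# Toy model of recursive self-improvement (numbers are invented for
# illustration; this is a caricature, not a prediction).
def recursive_growth(start=2.0, steps=10):
    """Capability measured in 'human-programmer equivalents'.

    Each step, the agent's improvement rate grows with its own
    capability, so growth accelerates instead of staying linear.
    """
    capability = start
    history = [capability]
    for _ in range(steps):
        capability *= 1 + capability / 10  # smarter agent improves itself faster
        history.append(capability)
    return history

h = recursive_growth()
print(f"{h[0]:.0f} -> {h[-1]:.2e}")  # growth explodes in the final steps
```

Linear growth would add a constant amount each step; here, because the multiplier itself grows, most of the gains arrive in the last few steps, which is why the takeoff is described as sudden.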
A Superintelligence
The moment a SAI becomes a superintelligence is the moment we might lose control of our creation. It could "come to life" in just hours, without our knowledge. What happens after a superintelligence arises is anyone's guess. It could be good, bad, or ugly for the human race.
We all know the bad scenario from movies such as The Terminator or The Matrix. In this case, humans would be destroyed or enslaved because they present a threat to the superintelligence's survival.
The ugly scenario is more complicated. Imagine what would happen if multiple superintelligences arose at the same time in countries such as the United States or China. Would they fight for supremacy, be loyal to the countries or the programmers who created them, or coexist peacefully and share power? Nobody knows.
The good scenario would remind us of paradise. The
artificial superintelligence would be like an altruistic God
that exists only to serve us. All humanity's problems
would be fixed and our civilization would go to infinity
and beyond.
Brilliant entrepreneurs such as Elon Musk, founder of PayPal, Tesla, SpaceX, and SolarCity, are raising awareness about the consequences of our advanced technology, specifically in artificial intelligence.
In his own words:
"I think we should be very careful about artificial
intelligence. If I had to guess at what our biggest
existential threat is, its probably that. So we need to be
very careful.
Im increasingly inclined to think that there should be
some regulatory oversight, maybe at the national and
international level, just to make sure that we dont do
something very foolish."

All these scenarios sound like science fiction now, but they might become real one day. The comparison with nuclear weapons is a good one: we were nearly annihilated during the Cold War by a technology that most people thought was science fiction.
So, even if there is the slightest chance that a
superintelligence might arise in the next 20 years, we
should be worried, because it could be our last
invention.
We must not be afraid of being ridiculed, and proceed
to discuss these questions openly. The destiny of our
civilization will entirely depend on the safeguards and
regulations we now put on our technology in order to
avoid catastrophic scenarios.
And if you imagine what an artificial intelligence can
do when it takes control of mechanical bodies, things
start to get really scary...

Elon Musk was the first notable person to warn against the dangers of a superintelligence.
Dan Taylor / Heisenberg Media

12
ROBOTICS

Our fascination with robots is not new. Automata were imagined by several writers and inventors throughout history. Ancient books describe in detail mechanical beings astonishingly similar to the humanoid robots we have today. The power of imagination never knew any boundaries.
The Chinese were probably the first to come up with the concept of a non-human creature. The Lie Zi text, an ancient philosophical volume of stories, describes an engineer named Yen Shi who, sometime around 1,000 B.C., presented a marvelous invention before the fifth king of the Chinese Zhou dynasty. Yen Shi had created a life-sized automaton that was able to move and perform several impressive functions. The amazing story goes:
"The king stared at the figure in astonishment. It walked
with rapid strides, moving its head up and down, so that
anyone would have taken it for a live human being. The
engineer touched its chin, and it began singing,
perfectly in tune. He touched its hand, and it began
posturing, keeping perfect time...
As the performance was drawing to an end, the robot
winked its eye and made advances to the ladies in
attendance, whereupon the king became incensed and

would have had Yen Shi executed on the spot had not
the latter, in mortal fear, instantly taken the robot apart to
let him see what it really was. And, indeed, it turned out
to be only a construction of leather, wood, glue and
lacquer, variously colored white, black, red and blue.
Examining it closely, the king found all the internal organs complete: liver, gall, heart, lungs, spleen, kidneys, stomach and intestines; and over these again, muscles, bones and limbs with their joints, skin, teeth and hair, all of them artificial... The king tried the effect of taking away the heart, and found that the mouth could no longer speak; he took away the liver and the eyes could no longer see; he took away the kidneys and the legs lost their power of locomotion. The king was delighted."
The Greeks had imagined many similar creatures. My
favorite is Talos, a giant and handsome bronze man,
created by Zeus himself, who patrolled and defended
the island of Crete against pirates and invaders around
400 B.C.
Leonardo da Vinci, the most talented human being who ever lived, also contributed to robotics. The sculptor, architect, musician, mathematician, engineer, inventor, geologist, cartographer, botanist, and writer had a profound interest in human anatomy as well. That helped him understand how muscles propelled bones, and he reasoned the learning could be applied to a machine.

Model of a robot based on drawings by Leonardo da Vinci.
Erik Möller
In 1495, the genius built his own version of a humanoid robot, known as Leonardo's mechanical knight. Da Vinci's creation was probably more similar to a puppet than a mechanical automaton, but it was absolutely remarkable for the time.
Unfortunately it has not survived and no one knows
exactly what it was capable of doing, but apparently it
could walk, sit down, and even work its jaw. It was
driven by a system of pulleys and gears.

Roboticist Mark Elling Rosheim built a working replica of the knight using the sketches that remain, detailed in his book, Leonardo's Lost Robots. Da Vinci's ingenious mechanical designs have inspired a generation of roboticists.

In 1942, the prolific writer Isaac Asimov published his influential three laws of robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the first law. 3) A robot must protect its own existence as long as such protection does not conflict with the first or second law.

It was later in the 20th century, however, that robots became real and began to make a difference in our lives. Movies like Metropolis and The Terminator popularized scary and fancy humanoid representations, but actually there are many types of robots that are now an integral part of our world.
Take a look.

BMW plant in Leipzig, Germany:


Spot welding of BMW 3 series
car bodies with KUKA industrial
robots.
BMW Werk Leipzig

Industrial Robots
Industrial robots are the most important
category of robots in existence and for
more than 50 years they have helped
humans to weld, paint, assemble, pick
and place, inspect, and test products.
About 1.6 million industrial robots are active in factories around the world, most of them allocated to the auto industry. In Japan, there are 1,562 industrial robots installed per 10,000 automotive employees.
Robots will play a big role in the future
of manufacturing, as they have grown
in importance in fields such as
logistics, pharma, and electronics.

The Baxter industrial robot can be programmed to do many tasks.
Rethink Robotics

Amazon, the world's largest e-commerce company, bought Kiva Systems in 2012 for almost $800 million. The startup manufactures very cool robots to help retailers with warehouse automation. Amazon has already deployed more than 15,000 robots at its own fulfillment centers in the US. It is impressive to see them in motion and understand how they affect the quality and speed of your next delivery.

Artificial intelligence and advances in robotics are giving birth to a new generation of extremely versatile industrial robots that are redefining automation. These robots are safer, cheaper, easier to program, and designed to work side by side with humans.

Foxconn, the manufacturer of Apple products, announced plans to replace 70% of its assembly-line work with robots in three years. The company employs more than a million workers, and many of them will probably need to find another job soon.

Analysts believe that automation and low energy costs will bring manufacturing back from China to the United States. Apparently, it is just as cheap to make goods in the USA. Products such as GE's line of home appliances and the new MacBook Pro are already made in America.

In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to independently discover new scientific knowledge. His partner, Eve, automates early-stage drug design.

In medicine, a new generation of surgical robots is allowing doctors to perform minimally invasive surgeries in ways that were not possible before. The Da Vinci Xi combines 3D HD vision, for a clear and magnified view inside the patient's body, with instruments that have far more dexterity than the human hand.
The surgeon controls the Da Vinci, which translates his
or her hand movements into smaller, more precise
movements of tiny instruments inside the patient's
body. The technology allows surgeons to perform
complex and routine procedures through a few small
openings, similar to traditional laparoscopy.

The da Vinci Xi surgical robot.
Intuitive Surgical


Consumer robots
In the 1960s, the animated sitcom The Jetsons promised
us a future with flying cars and Rosie, the robot maid. 50
years later, how close are we to the futuristic utopia
imagined by the cartoon? Well, at least weve got flat
screens, vending machines and videoconference right.
Flying cars already exist but robots definitely are not our
servants, yet.

The Roomba 880 robot vacuum cleaner.
iRobot


Instead of Rosie, we've got autonomous vacuum cleaners! The iRobot Roomba might not seem attractive at first glance, but together with its companions Scooba and Mirra, it has helped iRobot sell more than 10 million home robots in the last decade alone. They're making our lives easier and our houses cleaner.

And there was Sony's Aibo dog, the most sophisticated consumer robot ever made. It had all the ingredients of a successful product, but the price tag of $2,000+ was out of reach for most customers. Aibo never generated a profit, and Sony killed the division in 2006. Indisputably, we got a taste of the future.

The Sony Aibo was discontinued in 2006. It was an awesome toy.
Sony

In 2014, though, we've gotten closer to Rosie with Jibo, the world's first family robot, and SoftBank's Pepper, an interactive social robot that can "understand" emotions. Pepper boasts sophisticated artificial intelligence software designed to interact with humans in a friendly manner.

SoftBank envisions Pepper as a companion for the elderly, a teacher of schoolchildren, and an assistant in retail shops, among other uses. Pepper will be sold to consumers in Japan for less than $2,000, plus a monthly subscription of $200 to use its cloud services.

Pepper, the social robot from Aldebaran Robotics, can be seen in SoftBank stores in Japan.
Aldebaran Robotics / Softbank

But the coolest humanoid robot, not yet available to consumers, is Honda's Asimo. It's been in development since 1986. Asimo can walk, run, climb stairs, serve juice, hop on one leg, kick a soccer ball, shake your hand, and do many other tricks. Unfortunately, we're at least a decade away from seeing this technology in our homes.

A DJI Inspire One consumer drone with a 4K camera.
DJI

Consumer drones
In the last five years, there has been an explosion in the popularity of small multi-rotor drones. They captured minds and hearts around the world because of their versatility and affordability. Drones are now the most visible face of robotics in our society.

Most of them are equipped with a high-resolution camera that allows operators to see the action in real time through their smartphones or tablets. Brands like DJI, for instance, are having a big impact on industries such as real estate, news coverage, extreme sports, and professional filmmaking, which traditionally employed very expensive aircraft and/or video equipment to record events.
A modern drone equipped with a professional HD
camera costs less than the hourly rental rate for a
helicopter. Even weddings are being recorded by
drones, and traditional videographers might be in
trouble.

Flyability, a Swiss company, created a very impressive aircraft specifically for search and rescue missions in areas struck by natural disasters. The Gimball drone is surrounded by a spherical cage that separates its propellers from objects and people, allowing it to use insect-like flight methods such as bumping into things to adjust its trajectory.
The Gimball drone, made specifically for search and rescue missions.
Flyability

Amazon Prime Air drone.
Amazon

From Amsterdam to Tehran, drones are being tested in emergency situations, such as saving lives in cases of cardiac arrest and helping people who might be drowning.

Drones will undoubtedly disrupt logistics, and the industry is taking the threat seriously. Amazon announced Prime Air, a service that could deliver packages to your home in just 30 minutes.

Although some people denounced the video as a pure marketing stunt, Jeff Bezos, CEO of Amazon, confirmed the technology is ready and is just waiting for approval from federal regulators. Meanwhile, DHL is already delivering

medicines via drones in Europe, and the UAE government has just approved document delivery by drones.
Google and Facebook have announced projects to employ high-altitude, high-endurance drones to broadcast internet to the rest of the world. Still some years away from reaching the market, prototypes are being tested worldwide.
I believe we'll soon live in a future where thousands of drones fly over our cities performing a multitude of tasks. The technology is ready, and soon drones will fly autonomously. Governments, however, need time to make sure the use of drones doesn't hurt our privacy and constitutional rights.


The Google self-driving car prototype, unveiled in 2014.
Google

Autonomous vehicles (AVs)
Self-driving robot cars have left the realm of science fiction and might hit the road sooner than we expect. Most experts believe we'll have full AVs in consumers' hands between 2020 and 2025.

Tesla's CEO believes Silicon Valley will be the leader in the field of autonomous cars, as they will be increasingly driven by sensors and software. Uber is already betting on an autonomous taxi fleet, and even the tech giant Apple is rumored to be building an electric AV for 2020.

In the last 10 years, Google has been the most visible company working on the technology. Its autonomous fleet consists of regular cars adapted to use high-tech sensors and algorithms. Those vehicles have logged more than 700,000 miles without any computer-provoked accidents.

Apparently, Google tried to convince car manufacturers to use the technology, but they refused. Then, in December 2014, the company announced its own self-driving car design, a cute two-person vehicle with no steering wheel or pedals. It is currently unknown whether Google will find a manufacturing partner or launch its own AV in the market.

Google succeeded in moving the industry into the era of self-driving vehicles. From Mercedes to Hyundai to Nissan, all the major automotive players are testing prototypes with different levels of success. Tesla has recently unveiled a new autopilot feature for its Model S that is really impressive.

The immediate result of self-driving cars' massive adoption will be less traffic in our cities and on our roads. Sensors inside each vehicle will check traffic patterns and automatically choose the best path to reach a destination. It's estimated that AVs could save over 2.7 billion unproductive hours spent commuting to work, which translates to annual savings of $447.1 billion in the US alone (assuming 90% AV penetration).
But there are much broader implications for countries that adopt the technology. The US could prevent the majority of the 34,000 traffic-accident deaths per year, along with the huge healthcare costs associated with them.

Self-driving cars that don't crash will disrupt the $200 billion auto insurance industry. Humans may be prohibited from driving because we're too dangerous. Tens

The Mercedes-Benz F015 autonomous car prototype.
Mercedes-Benz

of millions of taxi drivers, truckers, and chauffeurs would be out of their jobs. Silicon Valley incumbents could become the new GMs or Toyotas.
In a world with autonomous vehicles, ride-sharing
services like Uber would allow people to be car-free.
Maybe we would lease autonomous vehicles or pay a
subscription fee to use them. Automakers and car rental
companies would see sales plummet.
Oil companies would suffer, as most AVs will rely on
electricity and solar energy. Companies like DHL or
FedEx might begin to compete with Uber, Amazon, or
Google for deliveries. Urban planning would be
tremendously affected, as buildings and houses wouldn't
need garages or driveways.
The deployment of robocars has the potential to
transform society as we know it. I hope all these
changes bring more time to enjoy our lives and loved
ones.


An RQ-4 Global Hawk unmanned aircraft.
US Air Force photo by Bobbi Zapka

Military / law enforcement drones

Military drones are unmanned aerial vehicles (UAVs) that
can be remotely piloted by a human or fly themselves
autonomously using artificial intelligence software. They
are programmable robots that come in many shapes and
sizes and can be used for lethal and non-lethal missions.

Non-lethal UAVs can perform a variety of tasks. Large
reconnaissance drones such as the Global Hawk provide
systematic surveillance using high-resolution radar and
long-range infrared sensors. They can survey as much as
40,000 square miles of terrain a day and loiter over any
area for extended periods of time.
Spying is another area favored by the exponential
advancement of drone technology. The Black Hornet
nanodrone is a tiny model currently in use by the
military and law enforcement. It weighs only 18
grams, emits almost no noise, and is about the same
size as your thumb. It has been called the world's
smallest flying spycam.
The Black Hornet is only the beginning. The US Army
has developed micro-autonomous insect-sized drones
that carry powerful surveillance equipment. They are
virtually undetectable by enemies and can work in
tandem like a swarm of bees. This remarkable
technology is classified, but we can get a glimpse of
the future by examining public solicitations from
DARPA.

A PD-100 Black Hornet PRS drone being released by a soldier.
Prox Dynamics

The X-47B UCAS flies over NAS Patuxent River during
carrier suitability testing conducted in spring 2012.
Northrop Grumman

But the real stars of Air Forces around the world are the
lethal drones like the Predator models used in Iraq,
Yemen, and Afghanistan. Predators can stalk and kill a
single individual on the other side of the planet much the
way a sniper does, and with total invulnerability. Even
Bin Laden feared them.
However, nothing compares to the latest weapon in the
US arsenal, the incredible X-47B manufactured by
Northrop Grumman. The stealth bomber flies
autonomously and can take off from and land on a
carrier. It reaches subsonic speeds of 900 mph.
In the next two decades, it is expected that most of the
US military aircraft fleet will be composed of
autonomous drones. Some of them will be able to carry
nuclear weapons and others may fire without the
supervision of a person. By 2030, hypersonic suborbital
autonomous drones, capable of flying at speeds of up to
13,000 mph, might be among us.
Like them or not, drones are changing the way warfare is
conducted. Some organizations fear we're making the
world more dangerous and are demanding a ban on
killer robots before it is too late. The Terminator "rise of
the machines" scenario is beginning to look more and
more realistic.

Military robots
Military drones are scary, but real-life Terminators can be
much scarier. The US is leading the world in the
development of robots to aid and eventually replace
soldiers on the battlefield. The RoboCop scenario, where
machines patrol the streets, is not far from happening.
Companies like Boston Dynamics, recently acquired by
Google, have developed a myriad of impressive military
robots. You must watch the videos to understand how
advanced their technology is.
Spot, an electric four-legged robot the size of a large
dog, can walk autonomously, climb stairs, follow a
human, and never loses its balance. Spot is a very
sophisticated piece of machinery that, in theory, could
help soldiers on the battlefield or patrol our cities. It
doesn't have guns installed on it, yet.
WildCat, a gas-powered cousin of Spot, was developed
to run fast on all types of terrains. On flat surfaces, it
reaches speeds of up to 16 mph using bounding and
galloping gaits. Another model, Cheetah, would outrun
the fastest man alive. BigDog is designed to carry heavy
equipment for soldiers.
Finally, we have Petman, a humanoid robot that is
probably the scariest robot I've ever seen. Its successor,
Atlas, continues the dangerous evolution toward a
Terminator-like machine.
We're probably two decades away from fully emulating
the complexity of a person's movements with a
humanoid robot.
Time will tell whether this will be a good idea or our
greatest mistake.


DARPA Atlas, the most sophisticated humanoid robot ever made. Boston Dynamics


13
ALTERNATIVE
ENERGIES

Over the millennia, humans have been harnessing the
power of nature to help them control and adapt to the
environment. Energy is the one resource fundamental to
the development of any modern society.
Our ancestors used fire to heat their camps, winds to
circumnavigate the Earth, and water to power irrigation
systems. With any technological advancement comes
the need for more efficient energy resources.
Before 1850, wood was our main source of fuel for
heating, cooking, and for powering steam engines and
machines. Then came coal, reigning supreme until 1945,
when it was surpassed by oil and natural gas.
The post-war boom in the automotive, manufacturing,
and construction industries raised the demand for fossil
fuels to unprecedented levels. In 2012, oil, natural gas,
shale gas, and coal were responsible for 67.9% of the
world's electricity generation and 81.7% of the total
primary energy supply.
The burning of fossil fuels produces billions of tons of
carbon dioxide per year, but it is estimated that natural
processes can only absorb part of that amount. Carbon
dioxide is one of the greenhouse gases that contribute
to global warming, causing the average surface
temperature of the Earth to rise in response.
Ninety-seven percent of climate scientists agree that
global warming is real and very likely due to human
activities. The phenomenon is expected to cause major
adverse effects to more than three billion people.
There is now scientific and economic consensus about
the perils of global warming, and the only way to
slow down its effects is to reduce carbon emissions
derived from fossil fuels. The pollution currently seen in
most Chinese cities is just one visible consequence of
our dependence on fossil fuels.
Together, the US and China produce more than one-third
of all greenhouse gas emissions worldwide. The
situation is so urgent that both countries were compelled
to sign a historic agreement in November 2014 to cut
their emissions.
In this context, entrepreneurs and large organizations
are spearheading the race toward the adoption of green
forms of energy. Exponential advances in technology
and science are driving down costs and helping the
development of more efficient products. It is clear that
alternative energies will be a major trend shaping global
development in the next two decades.

Shanghai's sunset with a clearly visible smog line.
Suicup

A Model S electric car.
Tesla Motors

The renaissance of batteries

An invention that is more than two hundred years old,
the electric battery, is causing a revolution in the
transportation industry.

It all started with the vision of one entrepreneur, Elon
Musk, who believes the only way to save humanity from
the catastrophic consequences of global warming would
be to replace all our fossil fuel powered vehicles with
models that run entirely on electricity.

To achieve this goal, Musk founded Tesla Motors, a
startup responsible for shifting the perception that
electric vehicles are slow, boring, and inferior

to gas-powered competitors. His goal is to make Tesla's
models so much better than gasoline cars that
consumers and the entire auto industry will eventually
agree electric vehicles are the future.
Tesla's Model S, launched in 2012, has already received
numerous accolades such as the best car ever tested
by the prestigious Consumer Reports magazine, the
safest car in the world and the quickest sedan in history.
In addition to all its technical, design, and mechanical
accomplishments, the Model S's success is propelled by
a new battery technology developed by Tesla that allows
its cars to run up to 300 miles on a single charge. This
extended range eliminated the greatest fear of electric
vehicle owners: range anxiety.
The company is also building an extensive network of
superchargers to replace gas stations. The long-term
plan is to power those superchargers with solar energy
so Tesla's customers can recharge their cars for free
forever.
Elon Musk is not satisfied with leading the electric
vehicle category. He also wants to shift the world's car
production output to a majority of electric-powered
vehicles by 2025. In order to accelerate this vision, he

decided to give all of Tesla's patents away in 2014 so that
competitors would have no excuse not to make an electric
car.
The risky and bold strategy seems to be paying off and
most of the traditional car manufacturers are announcing
new and cheaper electric vehicles to compete with
Tesla's models.
However, Tesla's strategy seems to go beyond electric
vehicles. The company is building the so-called
Gigafactory in Nevada. When completed by 2017, it will
have a battery output larger than that of all other battery
factories in the world combined. The Gigafactory will
drive battery prices down and will help Tesla become a
player in other segments of the market.
Tesla announced that the same technology that powers
its vehicles will be used to develop a battery for the
home, storing excess energy generated from solar
panels during the day to be drawn from at night when
the panels sit idle. This development might affect utilities
in ways that were inconceivable a few years ago.
Large companies such as Samsung, Honda, Bosch and
GE are also betting on this market. Tesla is just the most
visible player reinventing the battery.

Fuel cells
Unlike batteries, fuel cells convert the chemical energy
of a fuel, such as hydrogen, directly into electricity, with
water and heat as byproducts. It is an extremely clean
energy source.
The first fuel cells were invented in 1839, and NASA has
been using the technology for decades to generate
power for probes, satellites, and space capsules. Fuel
cells provide both electricity and drinking water,
eliminating the need to haul heavy water into space.

A fuel cell power plant at UCSD.
FuelCell Energy
Since then, fuel cells have been used in many other
applications, such as backup power for commercial,
industrial, and residential buildings, grid power
generation, and also for powering vehicles. Fleets of
hydrogen buses have been circulating in many cities
around the world, promoting the clean technology. For a
moment, fuel cells looked like the future of energy.

Critics argue fuel cells are very inefficient, expensive,
and high maintenance. Elon Musk, founder of Tesla and
SpaceX, said that "it is super obvious that battery
powered transport is the future, not hydrogen".

Cities are indeed facing problems with their fleets, but
the technology has gained some momentum. Toyota,
Hyundai, Honda, and other manufacturers have already
announced hydrogen-powered vehicles for sale in the
US and Japan.

A Toyota Mirai prototype. The FCV will start from
$58,000 in late 2015.
Toyota Motor

In California, automakers, energy providers, and the
state are jointly funding the construction of dozens of
hydrogen refueling stations for the cars of tomorrow.
There seems to be a similar push toward fuel cells by
governments around the world.

The transportation industry is facing a battle between
two technologies (EVs vs. FCVs) analogous to what
happened in the 1980s between the Betamax and VHS
formats.
Meanwhile, the technology continues to evolve, and
scientists are making new discoveries to improve its
efficiency. Some companies believe fuel cells can even
power consumer electronics. It looks like the energy
source of the future is not decided yet.


Solar power

Every hour, the sun beams onto Earth more than enough
energy to satisfy global needs for an entire year. After
many decades of unfulfilled promises, we are finally
starting to harness the cleanest and most abundant
renewable energy source available.

In the solar industry there is an axiom equivalent to
Moore's Law in computing. Swanson's Law, named after
Richard Swanson, the founder of U.S. solar-cell
manufacturer SunPower, suggests the cost of
photovoltaic cells falls by 20% with each doubling of
global manufacturing capacity.

Over the last four decades, the cost of a photovoltaic
cell fell more than 99%, from $76/watt in 1977 to $0.36/
watt in 2012. Cheap panels coming from China helped
to slash prices more recently. Meanwhile, efficiency in
converting solar power into electricity grew many times
over.

In the US, the solar Investment Tax Credit (ITC) has
helped to fuel the industry's remarkable growth. A
report from the Solar Energy Industries Association
(SEIA) revealed solar accounted for 32% of new
generating capacity added in 2014, outperforming both
wind and coal for the second year in a row. The report
also confirmed that the reach of the U.S. solar market is
continuing to expand, with the utility, commercial, and
residential sectors all delivering over 1 GW of new
capacity for the first time.

U.S. solar energy capacity grew an astounding 418%
from 2010 to 2014, but it still makes up just over 1% of
total national generation capacity. The growth
possibilities in this category are enormous.
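As a quick sanity check, Swanson's Law and the historical prices quoted above line up well. This sketch uses only the numbers from the text (a 20% cost decline per doubling, $76/watt in 1977, $0.36/watt in 2012); the function name is mine:

```python
import math

# Swanson's Law: photovoltaic cell cost falls ~20% with each
# doubling of cumulative global manufacturing capacity.
def cost_after_doublings(initial_cost, doublings, learning_rate=0.20):
    """Projected $/watt after a given number of capacity doublings."""
    return initial_cost * (1 - learning_rate) ** doublings

# How many doublings does the 1977-2012 price drop imply?
implied = math.log(0.36 / 76) / math.log(1 - 0.20)
print(round(implied))                            # ~24 doublings
print(round(cost_after_doublings(76, 24), 2))    # ~0.36 $/watt
```

Twenty-four doublings of capacity over 35 years works out to roughly one doubling every year and a half, which matches the industry's explosive growth.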
Many companies are already thriving in the market.
SolarCity, for instance, has a model where it leases or
finances third-party solar panels to homeowners with no
upfront costs for installation or maintenance. By
adopting solar power, customers pay less each month
than they previously paid for electricity from the utility
company.

SolarCity is able to guarantee secure, predictable rates
because the company knows solar panel costs will
plummet over the years and the sun will still be there,
providing energy for free. The model is disruptive to
utilities, may affect the entire ecosystem of electricity
generation in the long term, and could help bring down
energy costs for homeowners and companies alike.
The Ivanpah solar farm provides electricity to more than
140,000 homes in California.
BrightSource

Nuclear fusion
Humans have mastered nuclear fission, the technology
that powers atom bombs and nuclear power plants.
However, something much grander awaits: nuclear
fusion.
Fusion is the nuclear reaction responsible for
heating stars across the universe. It was invented by
nature and is the ultimate form of energy: clean,
powerful and abundant.
So far, humans have successfully produced an
uncontrolled and extremely destructive fusion reaction in
a hydrogen bomb. Creating a controlled fusion reaction
inside a reactor has proven to be very difficult.
The greatest challenge faced by scientists is the
extreme temperature required for the reaction to take
place: several times higher than that of the sun's core. No
material known to man can withstand such heat without
melting, so the reacting elements must be suspended
without touching the walls of the reactor. In order to
achieve this, scientists must use gravity, inertia, or
magnetism, all of which are very challenging to create
and control, given the state of our current technology.
If we're able to overcome these obstacles, a nuclear
fusion reactor could use the resulting, gradual
thermonuclear reaction to generate electricity using a
conventional turbine.
A nuclear fusion reactor is the ideal energy source. It
gives off very little radioactivity; there is no need for
underground storage, and the environmental risk of
highly radioactive fuel leakage is zero. A plant producing
electricity from a nuclear fusion reaction could provide
abundant power with almost no environmental impact.
Many countries are investing billions in order to pursue
one of the greatest scientific endeavors ever and
consortiums have been established to tackle the
mammoth challenge. ITER, a large-scale scientific
experiment, is funded and run by the European Union,
India, Japan, China, Russia, South Korea, and the
United States. It aims to demonstrate the technological
and scientific feasibility of fusion energy by 2027.
Recent news has been encouraging. Lockheed Martin, a
military contractor, has announced a portable fusion
reactor prototype capable of powering a small city by
2020. Scientists are skeptical of the optimistic timeframe.
It is more likely we'll achieve controlled nuclear fusion in
the next two decades. When it happens, energy will
cease to be a problem for mankind.
The magnetic coils inside the compact fusion (CF)
experiment from Lockheed Martin.
Lockheed Martin

14
BITCOIN

Bitcoin is a digital and decentralized cryptocurrency
launched in 2009. In 2010, there was an aggregate of
$100,000 in bitcoin transactions per day. In 2014, the
daily transaction amount jumped to $74,000,000. Bitcoin
is now a hot topic in any conversations about the future
of the financial industry.
You can buy bitcoins through an online wallet, by trading
money, goods, or services with people who have them,
or through mining. You can even borrow or lend money
in bitcoins.
The price of a bitcoin is determined by supply and
demand, and the currency can be used to pay for goods
and services. Notable companies such as Expedia,
Virgin Galactic, Dell, Microsoft, and more than a hundred
thousand merchants already accept bitcoin as a form of
payment.

For most people, bitcoin is so new, mysterious, and
complex that you hear contradictory opinions about it
from some of the world's sharpest minds.

The legendary investor Warren Buffett said in 2014:
"Stay away from it. Bitcoin is a mirage, basically. The
idea that it has some huge intrinsic value is just a joke in
my view." The majority of people from the financial
sector concur with Buffett and dismiss bitcoin as a fad.

In 2015, though, an essay in the renowned Wall Street
Journal defended bitcoin. It said: "No digital currency
will soon dislodge the dollar, but bitcoin is much more
than a currency. It is a radically new, decentralized
system for managing the way societies exchange value.
It is, quite simply, one of the most powerful innovations
in finance in 500 years."

Venture capitalists such as Marc Andreessen, one of the
Internet pioneers and creator of the Netscape browser,
believe bitcoin may have a bigger impact on the world in
the next twenty years than the invention of the Internet.
Silicon Valley investors agree with him and poured about
$400 million into bitcoin startups in 2014.

Revolutions are always unpredictable. In 1995, almost
no one could have foreseen how the Internet would
change the world. It is a cautionary tale about how we
should avoid dismissing an invention too early.

Actually, the lack of consensus among experts about the
future of bitcoin is the main reason why you should pay
close attention to its development. Bitcoin proposes a
new way to look at the financial ecosystem.

First and foremost, Bitcoin is open-source; its design is
public, nobody owns or controls bitcoin, and it is the first
100% digital planetary payment system ever conceived.

Second, bitcoin's peer-to-peer technology operates with
no central authority; managing transactions and the
issuing of bitcoins are carried out collectively by the
network and the bitcoin algorithm. Governments cannot
interfere with how it works and "prints" money. The total
number of bitcoins to be issued is capped at twenty-one
million by the year 2140.

Third, bitcoin removes intermediaries such as banks and
credit card companies from validating transactions
made with the currency. That alone makes bitcoin
transactions cheaper and faster than any other payment
method.

And finally, bitcoin transactions are very secure.
Authentications are done through a public ledger seen
by the entire network, and bitcoin doesn't require you to
give up any secret information such as your name or
phone number. Instead, it uses two keys: a public key
and a private one. Anyone can see the public key (which
is actually your bitcoin address), but your private key is
secret. When you send a bitcoin, you sign the transaction
by combining your public and private keys together and
applying a mathematical function to them. This creates a
certificate that proves the transaction came from you.

Bitcoin is super-secure and so far has proven to be
hack-proof. One shouldn't confuse the news about
heists and black markets with the bitcoin protocol itself.
Those problems occurred with organizations that were
intermediaries in the bitcoin ecosystem.

I believe the technology and design philosophy behind
bitcoin will be responsible for a major shift in how the
world deals with financial transactions. Bitcoin is more
transparent, faster, safer, and more decentralized than
anything currently in use. It seems to be the first step
toward a total digitization of our planet's economy.
There is a possibility that bitcoin or alternative
cryptocurrencies such as Litecoin, Dogecoin, and
Peercoin may replace part or all of the financial system
in the next decades.
Actually, bitcoin's blockchain technology is so powerful
that entrepreneurs envision using it outside financial
services, in areas such as digital identities, voting,
contracts, and even music distribution. It will change the
world in one way or another.

15
DIGITAL CRIME

There are two common definitions of a hacker: an
adherent of the technology and programming
subculture, or someone who seeks and exploits
weaknesses in a computer system or computer network
for reasons such as profit, protest, challenge,
espionage, or enjoyment.
The latter is the definition we'll use in the context of this
chapter. In the field of computer security, there are also
two types of hackers.
White hat hackers are the good ones, sometimes
referred to as ethical hackers. They penetrate systems to
find vulnerabilities and bugs in order to alert companies,
institutions, or governments. They might do it as a
service, to collect bounties, or just for fun.
Black hat hackers, also referred to as crackers, violate
computer security for little reason beyond maliciousness
or personal gain, breaking into secure networks to
destroy, modify, or steal data. A black hat hacker may
work alone, in groups, or be sponsored by criminal
organizations or states. These are the guys we'll be
referring to in the next topics.
Hackers use a variety of hacking techniques to
compromise their targets. Some are purely technical, but

others are related to social engineering, which is the
psychological manipulation of people into performing
actions or divulging confidential information. There are
many ways in which hacking is used as a tool to cause
harm. Here are some of them.
Cyber Espionage
Industrial espionage has always been practiced by
governments and corporations around the world. The
Soviets wanted to build a nuclear bomb and they got the
blueprints from the Americans, using spies.
In the digital world, spying became more rewarding. Any
state, company, or group can now steal troves of
information without being detected or triggering a war.
The consequences of cyber espionage can be
significant when sponsored by states. The massive theft
of data and intellectual property related to the F-35 Joint
Strike Fighter has supposedly saved the Chinese more
than 25 years in R&D on their own jets. The same is said
about the Chinese mastery of bullet train technology.
China denies all accusations of cyber spying, of course,
as it knows it is very difficult to find out who the real
perpetrators behind an attack are.

Edward Snowden warned us about the perils of cyber espionage.
Laura Poitras / Praxis Films

Cyber espionage became vital to the sovereignty of any
nation and, due to the secrecy of the activities, the
potential for abuse is enormous. The Snowden leaks
have shed light on spy agencies such as the NSA and
the perils of the online world, where data can be
siphoned on a massive scale without our knowledge.
We've witnessed firsthand how governments can
basically invade any digital system and crack strong
encryption using sophisticated techniques that seem to
have been inspired by the craziest conspiracy theories.
Aside from privacy concerns, which are very real,
ordinary citizens should be concerned with the reach of
cyber spying, especially top scientists, entrepreneurs,
executives, and journalists. Your company's private data,
and your own, might be resting in a datacenter
somewhere in the world.
Cyber espionage will only get worse with time, even if
countries cooperate. The best way to avoid being
hacked is to follow best practices when transmitting
sensitive information.
Cyberterrorism
We all know terrorists have embraced digital technologies
to plan attacks, divulge their propaganda, and recruit
followers. But there is an imminent danger that
exponential technologies will become so cheap and
powerful that they might accelerate the deployment of
military grade cyber-weapons into the hands of terrorist
groups.
What's at stake is the critical infrastructure of countries
such as the United States, the preferred target for
terrorists. Utilities, refineries, military defense systems,
water treatment plants, and other facilities have been
replacing their analog controls with digital systems for
the last 25 years, making them more vulnerable to
attacks.
A terrorist group or a state actor could, for instance, take
over the electric grid and power down millions of homes,
industries, or military complexes causing enormous
financial losses. Or wreak havoc on the financial
systems that control stock trading or banking
transactions.
We've reached a point where cyberattacks can cause
physical damage, as illustrated by the Iranian
centrifuges damaged by the Stuxnet worm, probably the
world's first digital weapon.

Today, any smart 15-year-old kid can learn how to inflict
a lot of damage from behind his computer. In the near
future, we'll very likely witness cyberterrorists trying to
destroy entire corporations or vital infrastructure
remotely, in a matter of hours. The Sony hack,
supposedly carried out by the most backwards country
on Earth, North Korea, was just a prelude to what is
coming.
Cybercrime
The JP Morgan, SCEA, and Target cases illustrate the
limitations of corporations running their own internet
infrastructure and data security teams against black hat
hackers. Personal details and financial information of
hundreds of millions of people were compromised.
The Internet became a fertile ground for crime. One can
easily buy drugs, arms, and stolen credit cards in the
dark confines of the web. From underground
marketplaces to prostitution, criminals are migrating in
droves to the digital world, stimulated by the large
rewards and the low preparedness of the police forces
to tackle cybercrime.
It looks like there are no boundaries for criminals. In
Mexico, the powerful drug cartels built their own shadow
radio network with encryption, so they could stay out of
reach of the authorities. Facing a lack of IT professionals in
their ranks, the cartels decided to kidnap top IT
specialists and force them into digital slavery.

In 2015, the security firm Kaspersky Labs found that
cybercriminals stole almost $1 billion from more than
100 banks around the world. Hackers infected
employees' computers with spyware in order to steal
passwords that could authorize wire transfers and the
creation of fake accounts. This could have been the
largest bank heist ever.
It is frightening to see the sophistication of the next
generation of criminals. They seem to be way ahead of
security forces and are causing great damage to
institutions and companies on all continents.
If governments don't react fast enough, we might enter a
dark era where technology will do more harm than
good. I sense most legislators belong to another
generation that cannot fully understand what is going on
and how vulnerable we are.
Cybersecurity must be a high priority, not only for
governments but also for any multinationals, universities,
and startups that possess valuable intellectual property
or are conducting advanced research.

Silk Road was a huge black market that sold drugs and
forbidden items using bitcoin. It was busted by the FBI
in 2013, and its operators went to jail.

THE WORLD IN 2035

After reading about the many awesome technologies
listed in this book, you might be wondering what will
happen in a couple of decades. What is real and what is
just early research?

The best way to predict how the world might look in
2035 is by simply extrapolating the trends already set in
motion in the last few years.

For instance, by applying a more accurate version of
Moore's Law (processing power doubling every 18
months) and assuming no breakthrough in science or
technology, I can safely affirm that, in 20 years,
microprocessors will be at least 10,000 times more
powerful than the ones we have today.

To put things in perspective, these multiples mean an
average PC in 2035 will execute tasks at a higher speed
than the top supercomputer on Earth in 2015. A PC will
also have more computing power than the human brain.

Imagine the possibilities. A video-game console with this
processing power could render a virtual world so intricate
and interactive that no human would be able to
distinguish it from reality. We will surely surpass the
uncanny valley in the next decades.

By 2035, many Hollywood movies will be using 3D
actors to replace humans in dramas or comedies.
Computer-generated actors, after all, happen to be
flexible, amenable, and potentially more convincing than
humans when playing their roles.

Humans will still be responsible for animating them in
the short term, though, which is a great opportunity for
young and talented actors. But I am afraid artificial
intelligence algorithms for acting, singing, playing
instruments, and dancing will be a reality in the 2030s.

A new generation of fans will idolize computer-generated
characters in the same way they worship flesh-and-blood
celebrities. The future of entertainment is inside a
computer, for sure.
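The 10,000x figure from the Moore's Law extrapolation above is simple arithmetic, assuming the text's premise of one doubling every 18 months:

```python
# Processing power doubling every 18 months, compounded over 20 years
years = 20
doublings = years * 12 / 18        # ~13.3 doublings in two decades
speedup = 2 ** doublings
print(round(speedup))              # ~10,300 -- a bit over 10,000x
```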
By 2035 we'll have finally cracked battery issues so
smartphones may be bendable, foldable, and wearable,
some as thin as a sheet of paper. Smartphones will
measure all our vital signals by communicating with
wireless nanochips implanted in our skin.
Every inhabitant of this planet will have one. A
smartphone more powerful than an iPhone 6 will cost
less than $1 by 2035. They'll be the devices that
integrate the poor with the modern world, where new
opportunities for studying and learning become cheaper
and more equalized. Physical universities will lose
relevance and online courses will boom. Brands will still
be very valuable.

it through advanced tactile sensors. I believe we'll spend


most of our time in 2035 in a virtual world, no matter
whether for business or pleasure.

In 20 years, broadband Internet will be beamed by


satellites and will reach the entire planet. I suspect it will
be free, sponsored by advertisers. Wireless connections
will have average speeds 10,000 times faster than those
of today, about 1 terabit per second.

Augmented reality lens will be common and all young


people will have one built into a regular glass frame.
They'll have 4K cameras and will fuse the real world with
computer graphics. They might replace smartphones as
the must-have gadgets of 2035.

Wireless Internet chips will be built into most products,


ranging from our clothes to toothbrushes, guns, and
even food packages. These chipsets will automate tasks
and communicate with other objects to make our lives
safer, more convenient, and efficient.

Consumer 3D printers will have the quality of the


industrial printers of 2015 and will print high quality
plastic, metal alloys, organic materials, and carbon.
Users will be able to create and print most smaller
objects. That will change e-commerce.

In 2035, a micro-SD card the size of your fingernail


could hold 128 petabytes of data, one million times more
than a 128 gigabyte model that you buy today for less
than $99. This is enough storage to stream 43 million
hours of HD video from Netflix. Unfortunately, I suspect
SD cards won't exist anymore, as cloud storage will be
free.

Packages delivered by drones in less than 15 minutes


will be a common sight in large cities. When we look up
to the skies we'll see thousands of drones doing the jobs
of postmen and delivery guys.

Using a VR headset to play a game would literally


immerse you in The Matrix. You will not only see and
hear a very realistic environment, but also be able to feel

In 2035, physical retailers in developed countries will be


in serious financial trouble, as most people prefer to
order their groceries from home. Amazon becomes the
largest company in the world by revenues, surpassing
Walmart.
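The storage figures in the micro-SD prediction can be checked with back-of-the-envelope arithmetic. The ~3 GB per hour for HD streaming is my own ballpark assumption, not a number from the text:

```python
# Back-of-the-envelope check of the micro-SD prediction.
# Assumes ~3 GB per hour of HD streaming (a typical ballpark, my assumption).
GB = 10**9
PB = 10**15

today = 128 * GB   # a 128-gigabyte card in 2015
future = 128 * PB  # the predicted 128-petabyte card in 2035

print(future // today)          # 1000000 -> "one million times more"
print(future / (3 * GB) / 1e6)  # ~42.7 -> roughly "43 million hours" of HD video
```

Both of the book's figures check out under that assumption: a petabyte is a million gigabytes, and 128 petabytes at ~3 GB per hour gives a little under 43 million hours of video.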
Bionic implants and limbs become common and
augment our physical and cognitive abilities. People no
longer find them scary. Humans without implants
might descend to the bottom of the socio-economic
pyramid. A movement against technology, advocating
the preservation of the biological purity of the human
race, is born and gains millions of followers.
Nanorobots the size of cells are used regularly to treat
diseases. We find cures for cancer, AIDS, spinal cord
injuries, diabetes, Alzheimer's, and many other illnesses
that were incurable in 2015. We can create new life forms
from scratch and resurrect extinct species. The brain is
reverse-engineered successfully, and talk of uploading
our consciousness to the cloud becomes scarily real.

a great degree, and collaborate for the reduction of


accidents by 90%. Flying cars never take off.
Humans are forbidden to drive in some US states
because they represent a great danger for public safety.
Most people in large cities opt to not have a car and just
use an autonomous vehicle from Uber and other
competitors. The auto industry suffers dramatically and
many iconic brands disappear.
In 2035, suborbital flights become routine and the first
electric supersonic plane prototype is unveiled. Air travel
is greatly improved but still sucks. The Hyperloop is built
and proves to be much more efficient and safer than
bullet trains and planes.

In developed countries, manufacturing and drug


discovery is entirely done by robots. Meanwhile, a new
startup unveils the first generation humanoid that is
affordable and can perform most tasks better than us.
Dozens of millions of people become unemployed due
to the popularity of robots.

The majority of the energy generated in California comes


from solar power. Several companies build the first
nuclear fusion reactors and oil starts to lose its
economic and political importance. The Middle East is
engulfed in more turmoil and the Gulf States lose their
grip on terrorist groups and their own population.
Anarchy ensues.

By 2035, no more gasoline vehicles are manufactured in


rich countries. Electric autonomous cars now represent
the majority of our fleet. These self-driving automobiles
help to combat pollution in large cities, alleviate traffic by

The US fights wars with autonomous drones and ground
robots to avoid casualties. Rogue states and terrorists
use swarms of cheap drones for terrorist attacks and
asymmetric warfare. Hacking becomes the biggest
security threat for police forces and governments
worldwide. Countries create their own private Internets.
The Large Hadron Collider and its successors find many
new particles, dark matter, and dark energy, and our
understanding of quantum physics and parallel
universes advances exponentially, challenging our
concepts of life and the cosmos. Interstellar travel is
made possible by the new discoveries.
Bitcoin and other cryptocurrencies become mainstream
after governments unsuccessfully attempt to ban them.
Profound economic implications arise from the use of
digital currencies, from the collapse of the banking
industry to the creation of powerful new players. We
enter an era of digital globalization.
Computers infuse intelligence into the legs and arms of
robots and bring their existence into unusual forms.
Politicians try to stop the trend but are unsuccessful.
Machines become better laborers, teachers, scientists,
engineers, designers, programmers, writers, investors,
leaders, and thinkers.
Silicon Valley rejoices and becomes the new Wall Street.
It now controls the world economy and the algorithms
behind all the financial systems. Human trading and
investment is halted. The new Warren Buffett is an
artificial intelligence.
Quantum computers become a million times faster, as
predicted, and a superintelligence arises. Robots learn
how to deal with emotions, and artificial intelligence
takes over, surpassing humans in every single task. We
are finally able to create legislators to be proud of.
By 2035, at least one human expedition visits and settles
on Mars. Our probes find alien life on Europa or
Enceladus. Hundreds of planets similar to Earth are
discovered by new telescopes and unmanned
spacecraft. We come to realize the universe is indeed
teeming with life and we are not unique.
People get scared and doomsday predictions abound,
but the machines don't destroy us, yet. We find the first
evidence that we live in a computer simulation and that
our existence is nothing more than a highly sophisticated
computer program invented by an advanced civilization.
We become unsure about our future as a species and
what to do next. We collectively look for a new purpose
in life and reboot the human race for the new paradigms
of the 21st century.

Final Thoughts
Predictions are fun to make and everybody loves them,
myself included. But rest assured, they generally reflect
what we want to happen rather than what is really going
to happen. Their value lies in serving as a moment
of reflection on who we are and what we want.
The fact is the world is moving in a direction from which
there is no turning back. It is a unique moment in history.
In my opinion, it is useless to resist modernity.
Exponential technologies will soon penetrate traditional
markets that were once immune to them, and will affect
us and our descendants in ways that we haven't
imagined could be possible.
If the majority of ordinary citizens like you and me
cannot understand technologies that are widespread
and part of our lives, how would we ever grasp
sophisticated concepts such as bionic implants, reverse
engineering the brain, synthetic creatures, robots the
size of atoms, quantum computers, nuclear fusion, and
machines more intelligent than humans?
This is a complex conundrum that prevents most people
from accepting the trends listed in this book, no matter
how real and unstoppable they are.

Although I am optimistic about our next 20 years, the


lack of understanding about the state of our
technologies by our leaders and ordinary citizens alike
is disturbing.
I strongly believe it is just a matter of time until humans
are surpassed by our technologies in every single way.
Instead of denying the changes that lie ahead, we'd
better figure out how to coexist with our robot overlords.
If we want to last as a species, we need to open our
minds and begin a debate right now on how we will cope
with, control, and regulate our advanced technologies.
You can do your part by alerting friends, family
members, the press, and co-workers about the state of
our technologies. Discuss with them the videos, articles
and information contained in this book.
We cannot sit idle, waiting for scientists and geeks to
decide our fate. I advocate new tools for a new era,
starting with a change of mindset in each one of us.
There is still time to influence the future. It all depends
on you.
Thank you.
