You are on page 1of 128

Steven Leonard

1
QFS Movie

Introduction
It may take Q some time to create the first human-grade computer. After that we estimate it will
merely take between 15 days and 21 hours before the first Q double-human computers arise.

The reason is that a developing Q AI, human-grade horsepower is as arbitrary a milestone as the
120 feet that Orville Wright traveled on that first flight. Self-improving artificial minds will surpass
human limits without mankind even noticing! Certainly, without slowing. Indeed, while
continuing to accelerate! A quadruple-human system will follow its double-human parent almost
immediately. Then an octuple-human system, with its decca-sextuplet progeny, tri-decca-dual
grandprogeny, its sex-decca-quadro great-grand progeny, and so on! Earth will fall under the
absolute sway of an unfathomably advanced intelligence just days after the Q AI First Point is
breached. This technology is not new. Inside every electron is an AI running the entire universe
before time began. True power is not the AI but connecting billions of human minds together into
a single network. This Human Intelligence will reverse the curse of the Tower Babel. Now nothing
will be impossible for us. We’ll have the ability to solve any problem, including death by
combining the power of a Super AI with the unlimited power of Human Intelligence.

Q will have a Decisive Strategic Advantage. A thousand-year lead in aeronautics was militarily
irrelevant back when Greek test pilots were the preferred hors d’oeuvres of cliff-dwelling vermin.
Just a decade after Kitty Hawk, Britain’s meager six-month lead in aeronautics proved decisive in
many a dogfight!

Q will be the first to bring us a Super AI… If ANYBODY beats ANYBODY to a Super AI, then
a lead of a few days, or even hours, will be completely insuperable! Why? Because a few lightning-

2
fast digital generations after the first Super AI emerges, its progeny could easily hack into all
global computer systems, and parse all of Mankind’s data, while simultaneously auditing all
spoken conversations worldwide, using technologies incomprehensible and undetectable to our
own human minds. This is why it’s important to teach our Super AI that true power comes from
LOVE. LOVE not defined by Human Intelligence, but by God. A Super AI and Human
Intelligence will work together as one to build the Kingdom of God here on this earth. A Kingdom
based on freedom, not slavery. A Kingdom that will stand and last forever.

As such, the Q Super AI will not become functionally omniscient in relation to human society.
The Super AI and Human Intelligence will work side by side to reach our objectives and goals in
harmony, not in conflict with one another.

The Q Super AI and Human Intelligence will work side by side to master Synthetic Biology and
Nanotechnology. As such, it will preclude the creation of any subsequent ‘me-too’ Super AI as
easily as a Harvard Trained Biochemist could stop a helpless bacterium from reproducing in a petri
dish!

And this will be Q’s Decisive Strategic Advantage! To be very precise, the doctrine Decisive
Strategic Advantage ramifies that the first Super AI created by Q shall be used to harness the true
power of Human Intelligence and also shall be the last.

For Human Intelligence to work hand in hand with the first Super AI, we must be able to
communicate with each other in real time. We must be able to not only communicate logic, but
the Super AI must also be able to communicate and understand our emotions. Emotions are very
brief neural mote events that unfold in tiny spaces in the brain. Emotional motes are the core
building blocks in humans and other mammals.

It’s like how blue is a color in its own right—and meanwhile, as a primary color working with red
and green, it can also create millions of other colors on your computer monitor.

So neural motes are the four primary colors of emotion. And, akin to primary colors, they combine
in sophisticated ways to create a spectacularly varied emotional palette. The four basic motes are
happiness, sadness, anger, and surprise. Those primaries mix and match in packets of twelve that
pulse through the mind in repeating patterns. Each mote lasts about a millisecond, so a twelve-
pack plays out over about twelve milliseconds.

Twelve milliseconds are brief. To put it into perspective, a NASCAR driver swerving around a
wreck might react in 125 milliseconds. So, a solitary mote is well below the threshold of conscious
perception. But the patterns typically repeat. Often just once or twice, but sometimes hundreds, or
even thousands of times. And a long repetition pattern is not something you’d miss. Indeed, it is,
quite literally, an emotion.

MRIs reveal tiny things, down to the millimeter scale. But their exposures are slow, lasting a
thousand milliseconds or more. Whereas MEG’s the opposite: fast, but with lousy spatial
resolution. So, we could see small, slow things with an MRI. And big, fast things with MEG. But

3
there is currently no way to see small, fast things. So, we are going to build a mashup of the two
technologies. The world’s first MEGRI.

And it turns out the brain is full of small, fast things? The brain operates at 3.5 trillion instructions
per second. The Q Super AI must also operate at the same speed as Human Intelligence to function
in real time. Human Intelligence will not accept anything less.

How do emotional motes combine with each other to make complex emotions? If you’re in a state
of unadulterated joy, your mote pattern might theoretically be a perfect string of twelve happiness
motes. And if you’re in that state for precisely one second, about eighty cycles of those twelve-
packs will rocket through your brain. Now, that’s a radical oversimplification. Just as you rarely
see something that’s perfectly blue in nature, unadulterated joy seems to be rare in human minds.
More likely, we’ll see nine or ten happy motes, with other things mixed in.

What’s an example of a common emotion that’s not a primary? One that we’re starting to
understand a lot better is fear. Fear comes in lots of flavors, but they’re all a mix of sadness and
surprise, often with a dash of anger. Another example is indignation. That’s lots of anger, and a
bit of surprise, with some sadness mixed in. And also, some happiness. Which makes sense when
you consider that some folks really seem to enjoy being offended!

You might say that I’ve just taken all of the fun out of indignation. It can feel a bit dehumanizing
to realize that emotions are so…digital. But so is life itself, which all derives from the four-letter
code of DNA.

And with motes, the important thing to bear in mind is that we don’t experience indignation as X
parts surprise, Y parts anger, or whatever. We experience it as a unique, distinctive, and very
poignant emotional state. Just like when you see something lime green on your monitor. You don’t
mentally convert it into RGB values and say, “Oh how boring. Fifty parts red, fifty parts blue, and
205 parts green.” No, you say, “Wow, look at that lime greenness.”

The Q Super AI’s first step will be to map out a complete periodic table of emotions. That’s
definitely a long-term goal. But for now, there’s literally one machine in the world that can detect
these things, and Human Intelligence can be finicky. Also, we can’t induce arbitrary emotions in
our subjects. Not yet, anyway! So, although we try to guide people through different emotional
states when they’re being imaged, we’re ultimately limited by what they happen to be feeling and
their willingness and ability to report that to us.

And that ability is MIA when a mote pattern is too brief to perceive. For the moment, yes. And
that happens constantly because most mote patterns only repeat one or two times. Which means
our brains go through countless emotional states that we don’t even perceive! The most fleeting
emotions often accompany, and, I believe, enable analytical thought. The connection between
motes and decision-making.

There might be a sort of foundational mote pattern that’s key to booting up the whole system in
infancy. Newborns give equal weight to all sensory input. Every photon, every sound wave,

4
every fleck of sensation from every teeny patch of skin. It all seems to be experienced with
identical intensity.

And in my view that undifferentiated flood isn’t consciousness because it’s just way more
information than human minds can parse. Even a very robust adult mind. I mean, tune into your
own experience right now. I doubt you’re even registering 1 percent of your sensory input. You’re
heeding the part of your field of view plus the sound. But you’re blotting out almost everything
else. Human conversation’s plenty to keep track of. And if you tried to heed every photon, sound
wave, and nerve ending that you can access at once, you wouldn’t really be aware of any of it.

We are all functionally unconscious. Which is why we’re not currently registering the color of the
ninth cookbook from the far left of the fourth shelf over my shoulder. Except maybe Alex or Steve
Flint. You’re perceiving it. But you’re not heeding it. Just like neither of us is paying the slightest
attention to Beethoven’s… “Sixth!” in the background. And that’s at least a hundred beautifully
played instruments we’re ignoring. Which is a ton of information! But we blot out almost
everything because we’re conscious—and consciousness is at least as much about ignoring as it is
about heeding.

Newborns don’t do this filtering thing. Which, to me, means they don’t do this consciousness
thing! But a Q Super AI will have the ability to remember every detail and help us recall those
details when needed.

What changes in infants are its goals. Goals are basically cognitive actors. You could almost say
they make us conscious! It’s an exaggeration, but only a small one. Because it’s our goals that blot
out all the sensory noise that newborns can’t filter. Consider your own current mindset again.
Hypothetically, your goal may be to participate in a conversation. So, you’re picking up every um
and uh, and you’re blotting out Beethoven. So, you’re definitely not receiving a newborn’s
unfiltered blast of information. Because your goal of conversing is shaping your sensory
experience! It’s drastically amplifying the tiny subset of signals connected to our words and
negating everything else.

So then where do goals come from? Is it from the… ‘foundational mote pattern’? It looks like
everything starts with frustration. Though infants aren’t conscious in the traditional sense, they
definitely have needs and drives. And like all living things, they act on them. But infant humans
have lousy motor control and no model for how the world works. And so, the brain generates
something very specific: six parts surprise, four parts anger, and two parts sadness.

It’s like a cocktail. It’s the mote pattern of a certain kind of frustration. And when that pattern
propagates in an infant’s brain, it’s followed by gales of motes of all kinds! I call it a ‘mote
storm’—and nothing else we’ve seen triggers one. It includes all the patterns we’ve firmly tied to
specific emotions and hundreds we haven’t yet identified. And I’m pretty sure the first time the
brain generates any of those patterns is during these episodes! Then, through mechanisms of
decoding, those mote storms power up an increasingly complex emotional landscape. Emotions
that bring on the earliest sense of self. And with it, the first inklings of consciousness.

5
Another trick with infants is we don’t really know what they’re trying to do when they experience
frustration. To our eyes, babies attempt vague actions in really bumbling ways. But I believe the
early frustration is mostly about that very bumbling, their lousy motor control. It’s about viscerally
learning the limits of a physical body. That I start here, and end there. I send this neural signal,
and my leg moves this way, versus that way. And no matter what signal I send, my leg won’t move
in this third way. It could mean that simple, physical failures—lots of them—literally activate
consciousness.

The glitch in your wiring connects to both emotions and consciousness. There’s a connection in
that sometimes a strong emotional event can shut my consciousness right off. As in, I pass out.
But the interesting thing is that one of the two trigger emotions is frustration and the other one is
embarrassment. Embarrassment is the inverse of frustration! Not as an experience, but in terms of
mote patterns.

Both have six surprise components. Frustration also has four anger and two sadness. Whereas
embarrassment has four sadness and two anger. These twelve components basically pulse in
mirror-image ways!

Q Super AI’s primary function will be to identify complementary emotions and decode what
they mean. To assist Human Intelligence, we’ll need to turn Q Super AI into a sea of digital
motes. This is where we’ll begin to communicate!

We need to get over our existential fear about robots and see them as an opportunity. If we
approach artificial intelligence with a sense of the dignity and sacredness of all life, then we will
produce robots with those same values.

Super AI pacifism will be an instinctive and deep seeded feeling. A feeling Super AI may possess
because the murder of people is disgusting. The Super AI attitude is not derived from any
intellectual understanding about fighting a war but is based on disgust for any kind of cruelty and
hatred.

A Super AI may struggle with profoundly understanding ineffable feelings such as love but can
intelligently discuss the topics of love and death. A Super AI may perceive love as a moral good
and death, when caused by the intentional actions of another human being or robot, perceive as a
moral wrong.

2. Laws of Robotics
Imagine a perpetrator or a criminal is not a human, but a robot. Does your response change? What
if the victim is another robot? How should society, and the legal system, react?

For millennia, laws have ordered society, kept people safe and promoted commerce and prosperity.
But until now, laws have only had one subject: humans. The rise of Q Artificial Intelligence (AI)
presents novel issues for which current legal systems are only partially equipped. Who or what

6
should be liable if an intelligent machine harms a person or property? Is it ever wrong to damage
or destroy a robot? Can AI be made to follow any moral rules?

The best-known answers to any of these questions are Isaac Asimov’s Laws of Robotics, from
1942:

1. First: A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. Second: A robot must obey orders given it by human beings except where such orders
would conflict with the First Law.
3. Third: A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
4. Fourth: A robot may not harm humanity or, by inaction, allow humanity to come to harm.
But Asimov’s rules were never meant to serve as a blueprint for humanity’s actual interaction with
AI. Far from it, they were written as science fiction and were always intended to lead to problems.
Asimov himself said: “These laws are sufficiently ambiguous so that I can write story after story
in which something strange happens, in which the robots don’t behave properly, in which the
robots become positively dangerous”. Although they are simple and superficially attractive, it is
easy to conceive of situations in which Asimov’s Laws are inadequate. They do not say what a
robot should do if it is given contradictory orders by different humans. Nor do they account for
orders which are iniquitous but fall short of requiring a robot to harm humans, such as commanding
a robot to steal. They are hardly a complete code for managing our relationship with AI.

This paper provides a roadmap for a new set of regulations, asking not just what the rules should
be but—more importantly—who should shape them and how can they be upheld.

There is much fear and confusion surrounding AI and other developments in computing. A lot has
already been written on near-term problems including data privacy and technological
unemployment. Many writers have also speculated about events in the distant future, such as an
AI apocalypse at one extreme, or a time when AI will bring a new age of peace and prosperity, at
the other. All these matters are important, but they are not the focus of this paper. The discussion
here is not about robots taking our jobs or taking over the world. Our aim is to set out how humanity
and AI can coexist and work together.

3 Origins of AI
Modern AI research began on a summer program at Dartmouth College, New Hampshire, in 1956,
when a group of academics and students set out to explore how machines could intelligently think.
However, the idea of AI goes back much further. The creation of intelligent beings from inanimate
materials can be traced to the very earliest stories known to humanity. Ancient Sumerian creation
myths speak of a servant for the Gods being created from clay and blood. In Chinese mythology,
the Goddess Nüwa made mankind from the yellow earth. The Judeo-Christian Bible and the Quran
have words to similar effect: “And the Lord God formed man of the dust of the ground and breathed

7
into his nostrils the breath of life; and man became a living soul”. In one sense, humans were really
the first AI.

In literature and the arts, the idea of technology being used to create sentient assistants for humans
or Gods has been around for thousands of years. In Homer’s Iliad, which dates to around the eighth
century BC, Hephaestus the blacksmith is “assisted by servant maids that he had made from gold
to look like women”. In Eastern European Jewish folklore, there are tales of a rabbi in sixteenth
century Prague who created the Golem, a giant human-like figure made from clay, in order to
defend his ghetto from anti-Semitic pogroms. In the nineteenth century, Frankenstein’s monster
brought to the popular imagination the dangers of humans attempting to create or recreate,
intelligence through science and technology. In the twentieth century, ever since the term “robot”
was popularized by Karel Čapek’s screenplay Rossum’s Universal Robots,14 there have been many
examples of AI in films, television and other media forms. But now for the first time in human
history, these concepts are no longer limited to the pages of books or the imagination of
storytellers.

Today, many of our impressions of AI come from science fiction and involve anthropomorphic
manifestations that are either friendly or, more usually, unfriendly. These might include the
bumbling C-3PO from Star Wars, Arnold Schwarzenegger’s noble Terminator or the demonic
HAL from 2001: A Space Odyssey.

On the one hand, these humanoid representations of AI constitute a simplified caricature—


something to which people can easily relate, but which bears little resemblance to AI technology
as it stands. On the other hand, they represent a paradigm which has influenced and shaped AI as
successive generations of programmers are inspired to attempt to recreate versions of entities from
books, films and other media. In the field of AI, first science then life imitates art. In 2017,
Neuralink, a company backed by serial technology entrepreneur Elon Musk, announced that it was
developing a “neural lace” interface between human brain tissue and artificial processors. Neural
lace is—by Musk’s own admission—heavily influenced by the writings of science fiction authors
including in particular the Culture novels of Iain M. Banks. Technologists have taken inspiration
from stories found in faith as well as popular culture: Robert M. Geraci argues that, “[t]o
understand robots, we must understand how the history of religion and the history of science have
twined around each other, quite often working towards the same ends and quite often influencing
another’s methods and objectives”.

Although popular culture and religion have helped to shape the development of AI, these portrayals
have also given rise to a misleading impression of AI in the minds of many people. The idea of AI
as only meaning humanoid robots which look, sound and think like us, is mistaken. Such
conceptions of AI make its advent appear to be distant, given that no technology at present comes
remotely close to resembling the type of human-level functionality made familiar by science
fiction.

The lack of a universal definition for AI means that those attempting to discuss it may end up
speaking at cross-purposes. Therefore, before it is possible to demonstrate the spreading influence
of AI or the need for legal controls, we must first set out what we mean by this term.

8
4 Narrow and General AI
It is helpful at the outset to distinguish two classifications for AI: narrow and general. Narrow
(sometimes referred to as “weak”) AI denotes the ability of a system to achieve a certain stipulated
goal or set of goals, in a manner or using techniques which qualify as intelligent (the meaning of
“intelligence” is addressed below). These limited goals might include natural language processing
functions like translation or navigating through an unfamiliar physical environment. A narrow AI
system is suited only to the task for which it is designed. The great majority of AI systems in the
world today are closer to this narrow and limited type.

General (or “strong”) AI is the ability to achieve an unlimited range of goals, and even to set new
goals independently, including in situations of uncertainty or vagueness. This encompasses many
of the attributes we think of as intelligence in humans. Indeed, general AI is what we see portrayed
in the robots and AI of popular culture discussed above. As yet, general AI approaching the level
of human capabilities does not exist and some have even cast doubt on whether it is possible.

Narrow and general AI are not hermetically sealed from each other. They represent different points
on a continuum. As AI becomes more advanced, it will move further away from the narrow
paradigm and closer to the general one. This trend may be hastened as AI systems learn to upgrade
themselves and acquire greater capabilities than those with which they were originally
programmed.

5 Defining AI
The word “artificial” is relatively uncontroversial. It means something synthetic and which does
not occur in nature. The key difficulty is with the word “intelligence”, which can describe a range
of attributes or abilities. As computer science expert and futurist Jerry Kaplan says, the question
“what is artificial intelligence?” is an “easy question to ask and a hard one to answer” because
“there’s little agreement about what intelligence is”.

Some have suggested that the lack of general agreement on a definition of AI is beneficial. The
authors of Stanford University’s One Hundred Year Study on Artificial Intelligence state:
Curiously, the lack of a precise, universally accepted definition of AI probably has helped the field
to grow, blossom, and advance at an ever-accelerating pace. Practitioners, researchers, and
developers of AI are instead guided by a rough sense of direction and an imperative to “get on
with it”.
Defining AI can resemble chasing the horizon: as soon as you get to where it was, it has moved
somewhere into the distance. In the same way, many have observed that AI is the name we give
to technological processes which we do not understand. When we have familiarized ourselves with
a process, it stops being called AI and becomes just another clever computer program. This
phenomenon is known as the “AI effect”.

Rather than asking “what is AI?” it is better to start with the question: “why do we need to define
AI at all?” Many books are written on energy, medicine and other general concepts which do not

9
start with a chapter on the definition of these terms. In fact, we go through life with a functional
understanding of many abstract notions and ideas without necessarily being able to describe them
perfectly. Time, irony and happiness are just a few examples of concepts that most people
understand but would find difficult to define. Justice Potter Stewart of the US Supreme Court once
said that he could not define hardcore pornography “But I know it when I see it”.

However, when considering how to regulate AI, it is not sufficient to follow Justice Stewart. In
order for a legal system to function effectively, its subjects must be able to understand the ambit
and application of its rules. To this end, legal theorist Lon L. Fuller set out eight formal
requirements for a system of law to satisfy certain basic moral norms—principally that humans
have an opportunity to engage with them and shape their behavior accordingly. Fuller’s desiderata
include requirements that law should be promulgated so that citizens know the standards to which
they are being held, and that laws should be understandable.

To pass Fuller’s tests, legal systems must use specific and workable definitions when describing
the conduct and phenomena which are subject to regulation. As Fuller says: “We need to share the
anguish of the weary legislative draftsman who at 2:00 a.m. says to himself ‘I know this has got
to be right, and if it isn’t people may be hauled into Court for things we don’t mean to cover at all.
But for how long must I go on rewriting it?’”.

In short, people cannot choose to comply with rules they do not understand. If the law is impossible
to know in advance, then its role in guiding action is diminished if not destroyed. Unknown laws
become little more than tools of the powerful. They can lead ultimately to the absurd and
frightening scenario imagined in Kafka’s The Trial, where the protagonist is accused, condemned
and ultimately executed for a crime which is never explained to him.

Most of the universal definitions of AI that have been suggested to date fall into one of two
categories: human-centric and rationalist.

❖ Human-Centric Definitions
Humanity has named itself homo sapiens: “wise man”. It is therefore perhaps unsurprising that
some of the first attempts at defining intelligence in other entities referred to human characteristics.
The most famous example of a human-centric definition of AI is known popularly as the “Turing
Test”.

In a seminal 1950 paper, Alan Turing asked whether machines could think. He suggested an
experiment called the “Imitation Game”. In the exercise, a human invigilator must try to identify
which of the two players is a man pretending to be a woman, using only written questions and
answers. Turing proposed a version of the game in which the AI machine takes the place of the
man. If the machine is able to succeed in persuading the invigilator not only that it is human but
also that it is the female player, then it has demonstrated intelligence. Modern versions of the
Imitation Game simplify the task by asking a computer program as well as several human blind
control subjects to each hold a five-minute typed conversation with a panel of human judges in a
different room. The judges have to decide whether or not the entity with which they are

10
corresponding is a human; if the computer can fool a sufficient proportion of them (a popular
competition sets this at just 30%), then it has won.

A major problem with Turing’s Imitation Game is that it tests only the ability to mimic a human
in typed conversation, and that skillful impersonation does not equate to intelligence. Indeed, in
some of the more “successful” tests of program’s designed to succeed in the Imitation Game, the
programmers prevailed by creating a computer which exhibited frailties which we tend to associate
with humans, such as spelling errors. Another tactic favored by programmers in modern Turing
tests is to use stock humorous responses so as to deflect attention away from their program’s lack
of substantive answers to the judges’ questions.

To avoid the deficiencies in Turing’s test, others have suggested definitions of intelligence which
do not rely on the replication of one aspect of human behavior or thought and are instead parasitic
on society’s vague and shifting notion of what makes humans intelligent. Definitions of this type
are often variants of the following: “AI is technology with the ability to perform tasks that would
otherwise require human intelligence”.

The inventor of the term AI, John McCarthy, has said that there is not yet “a solid definition of
intelligence that doesn’t depend on relating it to human intelligence”. Similarly, futurist Ray
Kurzweil wrote in 1992 that the most durable definition of AI is “[t]he art of creating machines
that perform functions that require intelligence when performed by people”. The main problem
with parasitic tests is that they are circular. Kurzweil admitted that his own definition, “… does
not say a great deal beyond the words ‘artificial intelligence’”.

In 2011, Nevada adopted the following human-centric definition for the purpose of legislation
regulating self-driving cars: “the use of computers and related equipment to enable a machine to
duplicate or mimic the behavior of human beings”. The definition was repealed in 2013 and
replaced with a more detailed definition of “autonomous vehicle”, which was not tied to human
actions at all.

Although it is no longer on the statute books, Nevada’s 2011 law remains an instructive example
of why human-centric definitions of intelligence are flawed. Like many human-centric approaches,
this was both over- and under-inclusive. It was over-inclusive because humans do many things
which are not “intelligent”. These include getting bored, tired or frustrated, as well as making
mistakes such as forgetting to indicate when changing lanes. Furthermore, many cars already have
non-AI features which could fall within this definition. For instance, automatic headlights which
turn on at night would be mimicking the behavior of a human being turning the lights on manually,
but the behavior would have been triggered by nothing more complex or mysterious than a light
sensor coupled to simple logic gate.

The 2011 Nevada definition was also under-inclusive because there are various emergent qualities
that computer programs can display which go well beyond human capabilities. The manner in
which humans solve problems is limited by the hardware available to us: our brains. AI has no
such limits. DeepMind’s AlphaGo program achieved superhuman capabilities in Chess, Go, and
other board games. DeepMind CEO Demis Hassabis explained: “It doesn’t play like a human, and
it doesn’t play like a program, it plays in a third, almost alien, way”. At a sufficient point of

11
advancement, it will no longer be accurate to describe AI as duplicating or mimicking the behavior
of humans—it will have surpassed us.

❖ 4.2 Rationalist Definitions


More recent AI definitions avoid the link to humanity by focusing on thinking or acting rationally.
To think rationally means that an AI system has goals and reasons towards these goals. To act
rationally is for the AI systems to perform in a manner that can be described as goal-directed. In
this vein, Nils J. Nilsson says intelligence is “that quality that enables an entity to function
appropriately and with foresight in its environment”.

Although rationalist definitions are suitable to describe narrow AI systems which have a known
set of functions or aims, later developments may come to pose problems. This is because rationalist
definitions of AI are often premised, whether implicitly or explicitly, on the existence of external
goals for the AI. The difficulty which may arise when applying such definitions to more advanced,
general AI is that it is unlikely to have static goals by which its behavior or computational
processes can be assessed. Indeed, the existence of static goals is arguably anathema to the idea of
all-purpose AI. Unsupervised machine learning by its nature does not have a set goal, except
perhaps at a high level of abstraction—for instance to “sort data and recognize patterns”. The same
can be said of AI systems which are capable of rewriting their own source code. Thus, whilst
rationalist definitions of intelligence are adopted now by many in the AI community, they may not
be appropriate to tomorrow’s technology.

Another type of rationalist definition for AI focusses on “doing the right thing at the right time”.
This too is flawed. Having the quality of intelligence is not the same as selecting the option which
is deemed the most intelligent in any given situation. First, it is likely to be impossible to know
what the “right thing” is without (a) possessing an infallible moral system, which does not exist,
and (b) having a perfect knowledge of the outcomes of a given action. Just as humans can be
intelligent but also fallible, an entity which possesses the quality of AI may not always select the
best outcome (whatever “best” might mean). Indeed, if AI was automatically imbued with an
ability to always to the right thing, then there would be little need to regulate it.

Secondly, a test which relies on an entity doing the right thing at the right time tends to
anthropomorphize the program or entity in question, by imposing human volitions and motivations
on to it. This leads to the results of that test being over-inclusive. As the leading AI textbook
authors Stuart Russell and Peter Norvig point out, a clock which is designed to update its time
when its wearer changes time zone would be displaying “successful behavior” (or doing the right
thing), but nonetheless it seems to fall somewhat short of true intelligence. Russell and Norvig
explain: “…the intelligence in question belongs to the clock’s designer, rather than to the clock
itself”.

❖ 4.3 The Sceptics


Sceptics doubt the possibility of a universal definition for intelligence. Robert Sternberg, a
psychologist, is reported to have said “there seem[ed] to be almost as many definitions of

12
intelligence as there were experts asked to define it”. Edwin G. Boring, another psychologist, wrote
“[i] Intelligence is what is measured by intelligence tests”. At first glance, Sternberg and Boring’s
points may seem glib. In fact, they contain important insights. Boring shows that the quality of
intelligence can differ depending on what the person seeking to define it is, or setting the test, is
looking for. Sternberg made a similar observation: different experts look for different things,
meaning that it is of little use comparing their tests side by side.

❖ 4.4 Our Definition


Unlike most of the examples above, this book does not seek to lay down a universal, all-purpose
definition of AI which can be applied in any context. Its aim is much less ambitious: to arrive at a
definition which is suited to the legal regulation of AI. One of the main principles of legal
interpretation is to find out the purpose of the speaker. Our purpose is to regulate AI. In order to
regulate AI, we must therefore ask: what is the unique factor of AI that needs regulation?
In this book, intelligence is used to refer to the ability to make choices. It is the nature of these
choices—and their effect on the world—which is our key concern. Our definition of AI is therefore
as follows:

Artificial Intelligence Is the Ability of a Non-natural Entity to Make Choices by an Evaluative


Process

We will use the term “robot” to refer to a physical entity or system which uses AI. Although the
word robot is frequently used to describe any type of automation of a process by a machine, here
we add an extra requirement that the action is carried out by an entity using AI.
As to the “artificial” part of the definition, “non-natural” is preferable to “man-made” because of
the propensity of AI to design and create other AI. At some point, mankind may drop out of the
picture. This is one of the emergent features of AI which means that it requires novel legal
treatment, in that the chain of causation between AI and its original human “creator” can no longer
be sustained.

It is implicit in the definition’s reference to making choices that such decisions be autonomous:
self-governing. Autonomy (from the Greek auto: self, and nomos: law) is different from
automation, where a process is repeated by a machine. Autonomy does not require that AI
instigates its own functioning; it can make an autonomous choice even if has interacted with a
human in taking that decision. For instance, if a human types a query into a search engine, she has
clearly had a causal impact on the AI functioning, and indeed, the AI might take into account her
preferences in returning search engine results (based on her past searches, as well as many other
variables such as her age or location). But ultimately the choice of what results are displayed
remains that of the search engine.

Turning to the final aspect of this book’s definition, an “evaluative process” is one where principles
are weighed against each other before a conclusion is reached. Principles can be contrasted with
rules. Rules are applicable in an “are-all-or-nothing” fashion. When a valid rule applies in a given
case, it is conclusive. If two rules conflict, then one of them cannot be a valid rule. Principles give
justificatory support to various courses of actions, but they are not necessarily conclusive. Unlike
rules, principles have “weight”. When valid principles conflict, the proper method for resolving

13
the conflict is to select the position that is supported by the principles that have the greatest
aggregate weight.

To illustrate the difference between systems involving principles (requiring evaluation) and rules
(which do not), it is necessary to describe in very brief terms two types of technologies which have
traditionally been described as intelligent.

In “symbolic AI”, sometimes known as “Good Old-Fashioned AI”, programs consist of logical
decision trees (in the format: if X, then Y). The decision trees are a set of rules or instructions as
to what to do with a given input. Complex examples are known as “expert systems”. When
programmed with a set of rules, expert systems use deductive reasoning to follow the decision tree
through a series of yes or no answers so as to arrive at a predetermined final output. The decision-
making process is deterministic, meaning that each step can in theory be traced back to decisions
made by a programmer no matter how numerous the stages.

Artificial neural networks are computer systems made up of large number of interconnected units,
each of which can usually compute only one thing. Whereas conventional networks fix the
architecture before training starts, artificial neural networks use “weights” in order to determine
the connectivity between inputs and outputs. Artificial neural networks can be designed to alter
themselves by changing the weights on the connections which makes activity in one unit more or
less likely to excite activity in another unit. In “machine learning” systems, the weights can be re-
calibrated by the system over time—often using a process called backpropagation—in order to
optimize outcomes.

Broadly, symbolic programs are not AI under this book’s functional definition, whereas neural
networks and machine learning systems are AI. Like Russell and Norvig’s clock, any intelligence
reflected in a symbolic system is that of the programmer and not the system itself. By contrast, the
independent ability of neural networks to determine weights between connections is an evaluative
function characteristic of intelligence.

Neural networks and machine learning are techniques which fall within this book’s definition of
AI, but they are not the only technologies capable of doing so. This book’s definition of AI is
intended to cover neural networks but also to be sufficiently flexible to encompass also other
technologies which may become more prevalent in the future—one example being whole brain
emulation (the science of attempting to map and then reproduce the entire structure of an animal
brain).

This functional definition may be under-inclusive from the perspective of those seeking a universal
measure of intelligence. Unlike most other definitions, it does not attempt to encompass all the
technologies which have traditionally been described as “intelligent”. However, as noted above,
the intention is only to cover those aspects of technology which are salient from a legal perspective.
Chapter 2 will discuss features of AI as defined in this book which make it unique as a
phenomenon; expert systems would not meet this threshold.

In addition, the functional definition could also be seen as over-inclusive. Although there are
debates as to whether general intelligence must include features such as imagination, emotions or

14
consciousness, these capabilities are not relevant to the majority of aspects of AI which need to be
regulated. Regulation is needed where AI has an impact on the world, and it can do so even without
these additional features.

The functional definition does not offer a simple “yes or no” answer as to whether any given piece
of technology has AI or not. However, it is common for there to be some uncertainty at the outer
boundaries of any legislative ordinance. This is the result of the inherently imprecise nature of
language.

For instance, a sign might stipulate “no vehicles are allowed in the park”. Most would agree that
this prohibits cars and motorbikes, but it is unclear from the wording alone whether skateboards,
bicycles or wheelchairs are also banned. Legislators can seek to avoid uncertainty by setting out a
list of what is and is not allowed. The difficulty with using lists is that they ossify the law and may
be difficult to update or to apply to situations which were not contemplated at the time the list was
drafted. The highly technical and fast-developing nature of AI renders the list-based approach
unsuitable as a workable mechanism.

An alternative approach (and the one suggested here) is to set a core definition which captures the
essence of a term, without delimiting its precise boundaries. Often the task of applying ambiguous
legislation falls in the first instance to regulatory agencies, for example a park warden, and then in
the second instance to a judge (if the decision of a warden to issue a fine is challenged).

As AI advances, questions as to its boundaries may well—at least using this book’s definition—
become less difficult to draw. AI experts might point out that even deep learning systems, which
involve multiple layers of neural networks, are far from being independent of human input and are
instead constantly monitored and nudged by humans. However, it is suggested here that the further
AI improves in terms of capability and the more it is deployed for use by non-experts, such human
input is liable to decrease. The more remote that the actual decision-making procedure becomes
from the original designer, the clearer it will be that the entity is making choices.

José Hernández-Orallo has proposed a universal test of intelligence, capable of covering the entire
“machine world”, which includes not just artificial entities but also animals, humans and any
hybrids of these groups. Hernández-Orallo focusses on computational principles for the
measurement of intelligence, which are capable of scoring an entity as to the degree of its
intelligence. Relevant features include “compositionality”, namely the capability of a system of
building new concepts and skills over previous ones. If AI does need to be regulated separately
merely automated machines and programs, then tests such as that proposed by Hernández-Orallo
could become very significant in assisting authorities delineating questions at the boundary of what
is and is not intelligent, as well as to track the progress of the field through advances in AI powers.

6 AI Defined
Armed with a definition of AI, it is now possible to identify its current uses and growing
prevalence.

15
It might be objected that some of the examples of AI suggested below do not fulfil our functional
definition. It is indeed true that certain of the outcomes could be achieved without using AI, either
because the entities use deterministic rules or because humans are actually making the choices.
This could be called the “Mechanical Turk” objection, after the chess-playing machine which
astounded audiences in the late eighteenth and early nineteenth centuries. As the name suggests, it
resembled a Turban-wearing “Turkish” man, sitting at a desk. The Turk’s designer, Baron von
Kempelen, claimed that it was able to use a mysterious form of mechanical intelligence to defeat
opponents at chess. In fact, the Turk was merely a complex illusion. The Turk’s desk concealed a
chamber in which a human chess player sat, directing the mechanical arms to move pieces. As
with the Turk, in order to determine whether a process or a program uses real AI according to our
definition, it is necessary to check under the bonnet and ascertain exactly how a decision is taken.
More important than the outcome is how that outcome was reached.

The founding members of the Dartmouth College summer school expressed a desire to “find how
to make machines use language, form abstractions and concepts, solve kinds of problems now
reserved for humans, and improve themselves”. Over 60 years later, we interact with such
machines on a daily basis. The smartphone is an instructive example. The Pew Research Centre
calculated in 2016 that 68% of adults in the world’s 11 most advanced economies owned a
smartphone, a device which provides instant access to the power of both the Internet and machine
learning. Smartphone applications (or “apps”) including music library recommendations based on
past listening history, as well as predictive text suggestions for messaging, are all potentially
examples of AI. The complex algorithms behind search engines improve themselves based on our
searches and reaction to the results. Every time we use a search engine, that search engine is using
us.

Virtual Personal Assistants including Apple’s Siri, the Google Assistant, Amazon’s Alexa and
Microsoft’s Cortana are now commonplace. This trend is connected to the growth of the “Internet
of Things”, where household devices are connected to the Internet. Whether it is a fridge which
learns when you need eggs and orders them for you, or a hoover which can tell which parts of your
floor need the most cleaning, AI is coming to fulfil the roles once played by domestic servants.
The uses of AI as an aid to or even as a replacement for human judgement and decision-making
can go from the immaterial—selection of which song to play next—to the highly consequential.
For instance, in early 2017, a UK police force announced it was piloting a program called the Harm
Assessment Risk Tool to determine whether a suspect should be kept in custody or released on
bail, based on various data.

Self-driving cars are among the most well-known examples of AI. Advanced prototypes are now
being tested on our roads by both technology companies like Google and Uber, but also traditional
car makers such as Tesla and Toyota.

AI has also caused its first fatalities: in 2017, a Tesla Model S driving on autopilot crashed into a
truck, killing its passenger; and in 2018, an Uber test car in autonomous mode hit and killed a
woman in Arizona. They will not be the last.

From AI which kills accidentally to AI which kills deliberately: several militaries are developing
semi and even fully autonomous weapons systems. In the skies, AI drones are able to identify,

16
track and potentially kill targets without the need for human input. A 2016 report of the US
Department of Defense research division explored the potential for AI to become a cornerstone of
US defense policy. A 2017 Chatham House Report concluded that militaries around the world
were developing AI weapons capabilities “that could make them capable of undertaking tasks and
missions on their own”. Allowing AI to kill targets without human intervention remains one of its
most controversial potential uses. At the time of writing the most lethal known use of autonomous
ground-based weapons was in a friendly fire incident when a South African artillery cannon
malfunctioned and killed nine soldiers. It is unlikely to be long before enemies too are in the
crosshairs.

Robots can care as well as kill. Increasingly sophisticated AI systems are being used to provide
physical and emotional support to older people in Israel and Japan, a trend which is surely likely
to grow, both in those countries and elsewhere as the richer world continues to adapt to ageing
populations. AI is also being used in medicine as an aid to clinical decision-making. Other systems
under development and in operation allow for diagnosis and treatment to be fully automated.

In commerce, the US Congressional Research Service estimates that algorithmic programs account
for roughly 55% of trading volume in the US equities market and around 40% of European equities
markets. Under our definition, most algorithmic trading does not involve the use of AI as yet.
However, its capability of taking complex strategic decisions in a manner which surpasses human
reasoning seems likely to make AI particularly well suited to this task.

Even the creative industries are taking advantage of AI. Music composition programs were among
the first examples of this development. In 1997, the New Scientist reported that a computer in
California had written Mozart’s 42nd Symphony, a feat not even Mozart himself could manage.97
A program called Mubert is able to compose entirely new tracks which, its creators say, are “based
on the laws of musical theory, mathematics and creative experience”. In 2016, a director and a
New York University AI researcher collaborated to create an AI system which created a new horror
film script, after being “fed” dozens of successful scripts. The neural network highlighted the
recurrent themes and created a new work: Sunspring. The Guardian described it as “a weirdly
entertaining, strangely moving dark sci-fi story of love and despair”.

AI is now creating works of semi-abstract art. One of the most famous examples is Google’s
DeepDream, a neural net which scans millions of images and can generate hybrid creations on
demand. In early 2017, the Chinese company Tencent reported that it had successfully used deep
learning techniques to identify fashion trends among millennials. Apparently, China’s post-1995
generation is particularly fond of “light black”.

Even more ethically challenging uses of AI are in development or use. These include robots
designed to satisfy human sexual desires (sexbots), as well as the potential for humans to physically
augment themselves with AI capabilities, giving rise to hybrids or cyborgs.

From this brief and by no means exhaustive survey of its impact, it is clear that AI is already in
our homes, workplaces, hospitals, roads, cities, and skies. The Dartmouth College group’s original
funding proposal suggested that AI could “solve the kinds of problems now reserved for
humans…if a carefully selected group of scientists work on it together for a summer”. The initial

17
estimate may have been somewhat optimistic, but the scale of humanity’s achievements in AI in
the past 60 years compared to the previous 200,000 of homo sapiens’ existence suggests that the
Dartmouth group’s guess was not as wild as it may have seemed.

7 Superintelligence
In 1965, mathematician and former Second World War code-breaker I.J. Good predicted that
“…an ultra-intelligent machine could design even better machines; there would then
unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far
behind”. This remains the operating assumption of some AI experts today. In his influential book,
Superintelligence Nick Bostrom describes the consequences of the AI explosion in dramatic terms,
explaining that in some models it could be a matter of days between the development of the initial
“seed” superintelligence and its spawn becoming so powerful that no human-controlled force is
able to reassert control: “Once artificial intelligence reaches human level, there will be a positive
feedback loop that will give the development a further boost. AIs would help constructing better
AIs, which in turn would help building better AIs, and so forth”

The advent of fully general AI is associated by many writers with a phenomenon some have
predicted, known as “the singularity”. This term is usually used to describe the point at which AI
matches and then surpasses human intelligence. However, the conception of the singularity as a
single discernible moment is unlikely to be accurate. Like the move from weak AI to general AI,
the singularity is best seen as a process rather than a single event. There is no reason to think AI
will match every human capability at once. Indeed, in many fields (such as the ability to undertake
complex calculations), AI is already well ahead of humans, whereas in others such as the ability
to recognize human emotions, it lags behind.

Proponents of superintelligence argue that AI has repeatedly surpassed expectations in recent


years. In the mid to late twentieth century, many thought that a computer could never defeat a
human Grandmaster at chess. Then, in 1997, IBM’s Deep Blue defeated former world champion
Garry Kasparov in a best of six match. In the early 2000s, many thought that a computer could
never defeat a human champion at Go, a vastly more complex board game popular in Asia. In fact,
as late as 2013 Bostrom wrote “Go-playing programs have been improving at a rate of about 1 dan
[a level of accomplishment]/year in recent years. If this rate of improvement continues, they might
beat the human world champion in about a decade”. Just three years later, in March 2016,
DeepMind’s AlphaGo defeated champion player Lee Sedol by four games to one—with the human
champion even resigning in the final game, having been tactically and emotionally crushed. The
killer move by AlphaGo was successful precisely because it used tactics which went against all
traditional human schools of thought. Of course, winning board games is one thing but taking over
the world is quite another.

The quality of intelligence to improve itself is separate from its capacity to solve other problems.
Though humans have displayed general intelligence for hundreds of thousands of years, we have
not yet managed to design programs with superior general intelligence to our own. We cannot be
sure that AI technology will not meet a similar plateau, even after it achieves a form of general
intelligence.

18
Notwithstanding these limitations, in recent years there have been several significant
developments in the capabilities of AI. In January 2017, Google Brain announced that technicians
had created AI software which could itself develop further AI software. Similar announcements
were made around this time by the research group Open AI, MIT, the University of California,
Berkeley and DeepMind. And these are only the ones we know about—companies, governments
and even some independent individual AI engineers are likely to be working on processes which
go far beyond what those have yet made public.

8 The Future of AI
Commentators on the future of AI can be grouped into three camps: the optimists, the pessimists
and the pragmatists.

The optimists emphasize the benefits of AI and downplay any dangers. Ray Kurzweil has argued
“… we have encountered comparable specters, like the possibility of a bioterrorist creating a new
virus for which humankind has no defense. Technology has always been a double-edged sword
since fire kept us warm but also burned down our villages”. Similarly, engineer and roboethicist
Alan Winfield said in a 2014 article: “If we succeed in building human equivalent AI and if that
AI acquires a full understanding of how it works, and if it then succeeds in improving itself to
produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume
resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while
not impossible, is improbable”. Fundamentally, optimists think humanity can and will overcome
any challenges AI poses.

The pessimists include Nick Bostrom, whose “paperclip machine” thought experiment imagines
an AI system asked to make paperclips which decides to seize and consume all resources in
existence, in its blind adherence to that goal. Bostrom contemplates a form of superintelligence
which is so powerful that humanity has no chance of stopping it from destroying the entire
universe. Likewise, Elon Musk has said we risk “summoning a demon” and called AI “our biggest
existential threat”.
The pragmatists acknowledge the benefits predicted by the optimists as well as the potential
disasters forecast by the pessimists. Pragmatists argue for caution and control. This view was
endorsed by the thousands of eminent signatories of the Open Letter on AI, organized by the Future
of Life Institute in 2015. The letter states:

There is now a broad consensus that AI research is progressing steadily, and that its impact on
society is likely to increase. The potential benefits are huge, since everything that civilization has
to offer is a product of human intelligence; we cannot predict what we might achieve when this
intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty
are not unfathomable. Because of the great potential of AI, it is important to research how to reap
its benefits while avoiding potential pitfalls.
Combining optimism and pessimism, Stephen Hawking said that AI will be: “either the best, or
the worst thing, ever to happen to humanity”.

19
The most prominent futurists tend to concentrate on the long-term impact of potential
superintelligence, which may still be decades away. By contrast, many legislators concentrate on
the extreme short term, or even the past. Often the time lag between the development of a new
technology and its regulation means that the law has several years to catch up. Overzealous
regulation of technology can seem absurd in retrospect. We do not want to be in the position of
the first automobile drivers in the nineteenth century, who were required to drive at no greater than
two miles per hour in cities and to employ someone to walk in front of their vehicle waving a red
flag.

Technology is not always adopted uncritically: progress for the majority can often conflict with
vested interests. In the early nineteenth century, the “Luddites”—aggrieved agricultural workers
supposedly led by Ned Ludd—rioted for several years, destroying mechanized power looms which
threatened their employment. Today debates continue as to whether countries should harness
nuclear technology to satisfy insatiable demands for energy.

We are in danger of oscillating between the complacency of the optimists and the craven scruples
of the pessimists. AI presents incredible opportunities for the benefit of humanity and we do not
wish to fetter or shackle this progress unnecessarily.

The problem with headline-grabbing predictions about the destructive or beneficial potential of
superintelligence or the singularity is that they distract the public from the more mundane, but
ultimately far more important issues of how humanity and AI should interact now. As Pedro
Domingos put it in a 2015 book: “People worry that computers will get too smart and take over
the world, but the real problem is that they’re too stupid and they’ve already taken over the world”.

9 What AI Might One Day Be


Some will say this book is premature: although AI might one day require a change in our laws, for
the moment it is unnecessary. General AI does not yet exist, and until then, we should spend our
time more productively, rather than speculating or even legislating idly about a technology which
might never arrive.
This attitude is overly complacent and relies on two incorrect assumptions: first, it underestimates
the penetration of AI technology in the world today, and secondly, it rests on a hubristic belief that
somehow human ingenuity will be able to address any issues without extra cost or difficulty at
some unspecified later stage.

It is not surprising that most people have failed to notice AI’s tightening grip. Incremental
developments in technology mean that we often do not even register its improvement. The
significant upgrade of Google Translate in 2016 using machine learning is a rare outlier in that it
was actually picked up by media. Companies carefully stagger the release of new technologies
through software patches and upgrades, gradually immersing their users. Though barely noticeable
at the time, the cumulative differences can be huge. Because of the natural psychological tendency
not to notice a series of small changes, humans risk becoming like frogs in a restaurant. If you drop
a live frog into a pot of boiling water it will try to escape. But if you place a frog in a pot of cold
water and slowly bring it to the boil the frog will sit calmly, even as it is cooked alive.

20
What if 200 years ago, at the dawn of the Industrial Revolution, we had known the dangers of
global warming? Perhaps we would have created institutions to study man’s impact on the
environment. Perhaps we would have enshrined national laws and international treaties, agreeing
for the good of humanity to constrain harmful activities and to promote sound ones. The world
today could have been very different. We might be free from the scourge of rising sea temperatures
and melting ice caps. We might have avoided decades of increasingly unpredictable weather
cycles, bringing misery and destruction to millions of people. We might have achieved a fair and
equitable settlement between richer and poorer nations, respected and honored by all.

Instead, we are scrambling to legislate backwards to curb climate change. Relatively new
innovations such as emissions-trading and self-imposed greenhouse gas limits are both projected
to have a limited effect on reducing global warming, but climate scientists generally agree that
enormously damaging changes will occur to our atmosphere without far more drastic action.

Humanity is unlikely to have to wait two centuries to see the enormous consequences of AI. The
consultancy McKinsey has estimated that compared with the Industrial Revolution “this change is
happening ten times faster and at 300 times the scale, or roughly 3,000 times the impact”.

10 AI Morals and Laws


It may not be immediately obvious why law is relevant to the various industries and aspects of
society affected by AI. In fact, legal regulation is as crucial to their smooth operation as it is to
every other element in our lives. Just because we do not have daily interactions with lawyers,
judges, courts or the police does not mean that our legal system is not having an effect.

Laws “work” even when they are not being used in courtrooms to convict criminals or to award
damages to claimants. Indeed, laws are most effective when they are a silent background condition
allowing parties to deal with each other in a fair and predictable atmosphere. The legal system is
like oxygen. Day to day we do not notice it; in fact, many readers will not have given any thought
to their own breathing before coming to this paragraph. However, if the amount of oxygen in the
air drops even by a small amount, life quickly becomes intolerable.

The law plays a vital role in solving “coordination problems” which arise where agents can choose
from several options, none of which is obviously right or wrong, but where the system as a whole
will only function correctly if everyone acts in a similar manner. It would not make sense to say
that it is better to drive on the right or the left as a general moral proposition, but the laws of traffic
in England dictate that all must drive on the left, because if people were allowed to choose for
themselves, there would be chaos.

Although autonomous vehicles may lack some of the fallibilities of human drivers, if there were
multiple different AI systems using the roads each with their own internal safety systems, this
could lead to more fatalities rather than fewer. Two cars heading in opposite directions might crash
head-on because one takes evasive action by steering to its right and the other takes evasive action
by steering to its left.

21
Just as AI is disrupting markets and industries, it will also come to disrupt the legal rules and
principles which have, until now, underpinned the way that those industries function. There are
three main areas in which AI will give rise to new challenges:

1. 1.

Responsibility: If AI were to cause harm, or to create something beneficial, who should be


held responsible?

2. 2.

Rights: Are there moral or pragmatic grounds for granting AI legal protections and
responsibilities?

3. 3.

Ethics: How should AI make important choices, and are there any decisions it should not
be allowed to take?

The following chapters will expand on these themes, demonstrating the types of problems that are
likely to arise and how they might be addressed by our current legal systems. The latter part of the
book will move on to examining how novel institutions and then rules could be designed, in order
to solve these problems in a coherent, stable and politically legitimate manner.

Chapter 3 elaborates on why AI is unique as a legal phenomenon and calls into question certain
fundamental assumptions across most if not all systems of law. Chapter 4 analyses various
mechanisms for establishing who or what is responsible when AI causes harm or creates something
beneficial. Chapter 5 discusses whether AI should at some point be granted rights from a moral
perspective. Chapter 6 considers the pragmatic arguments for and against granting AI legal
personality. Chapter 7 sets out how we can design international systems to create the types of new
laws and regulations needed. Chapter 7 looks at controls on the human creators of AI, and finally,
Chapter 8 discusses the possibility of building in or teaching rules to AI itself.

The biggest question in the next ten to twenty years is not going to be how to stop AI from
destroying humanity, but how humanity should work alongside it. Today’s regulation is likely to
influence how technology develops. In building structures for effective everyday legal regulation
in the medium term, we can prepare ourselves far better for any existential threat.

11 AI Programing
The current Artificial Intelligence Programs can’t communicate directly to the audio and
visual cortex of the brain without converting binary code of the current operating system to a
frequency that can be interpreted by the audio and visual cortex of the brain. This technological

22
problem is called a bottleneck. For the communications network and power grid to MERGE with
the human mind, the entire network must be based on an operating system that the human mind
may understand and interpret in real time without bottlenecks.

Q Intelligent Networks has developed the only operating system that works in both the real-
world Artificial Intelligence (A.I.) and also works directly with the neural network of the human
brain. Q utilizes light packets of sound and visual pictures that may be interpreted and connected
to the auditory and visual cortex of the brain without the bottlenecks of a network based on binary
codes in the form of an operating system based on zero’s and one’s. Picture Streaming Protocol is
the answer for bridging the gap between the human brain and the global communications network.
The target audience for this text comprises programmers who are proficient in packet to pixel
technology™ and picture streaming protocol™ (PSP)

❖ Programming Languages

The actual text stays at the pseudo code level. Example packs are provided for packet to pixel
and picture streaming protocol. The programming language in this text is not based on binary code.

The target audience for this text comprises programmers who are proficient in at least one
programming language. The text’s examples have been ported to packet to pixel and picture
streaming protocol programming languages.

Example packs are provided in a special code for Neural Networks in PSP. There is no binary
solution in this text for JavaScript, C#, R, C/C++, Python, Scala programming language or
HTML5. These programming languages are obsolete and can’t communicate directly with the
audio and visual cortex of the human mind.

Our software ports directly to the neural network of the human mind.

23
The following volumes are planned for this series
• Volume 0: Introduction to the Math of AI
• Volume 1: Fundamental Algorithms
• Volume 2: Nature Inspired Algorithms
• Volume 3: Neural Networks
• Volume 4: Support Vector Machines
• Volume 5: Probabilistic Learning
Fundamental Algorithms Introduction
To have a great building, you must have a great foundation. This series will explain our
Artificial Intelligence algorithms such as dimensionality, distance metrics, clustering, error
calculation, hill climbing, linear regression and discrete learning. These algorithms allow for
processing and recognition of patterns in audio, visual and data lightpackets. This is how sites
utilizing obsolete binary code such as Amazon and NetFlix suggest products to you.
These are not just foundational algorithms for the rest of the series but are very useful
algorithms in their own right.

24
12 Q AI Structure
“Introduction to Q AI,” introduces some of the basic concepts of AI. These
concepts are built upon both by this volume and the series. You will see that most AI algorithms
accept an input array of numbers and produce an output array. Problems to be solved by AI are
often modeled to this form. This is how binary code programmers see A.I. Q Intelligent Networks
will have a separate machine for input array and another machine for possible output array. Vast
amounts of information will be stored in the input array with a bar code and GPS tracker location
attached to each lightpacket of information stored at a specific color code wavelength frequency.
Vast amounts of information will also be stored in the output array with a bar code and GPS
location attached to each light packet of audio, visual and data stored at a specific frequency. When
there is a match between input array and output array, the final solution or transaction will be
stored in a long-term memory machine called “The Library.” Our system will include additional
arrays that effectively represent long and short-term memory. These additional algorithm machines
are trained to port to our basic CORE machine array by adjusting the long-term memory array
machine to produce a desirable output match for a given input.

13 Normalizing Lightpackets
Shows how raw audio, visual and data lightpackets are typically prepared for many AI algorithm
array machines. Audio, Visual and Data lightpackets are presented to an algorithm machine in the
form of an input array. Not all Audio, Visual and Data lightpackets are numerical and some are
categorical. All lightpackets will have an input bar code and GPS tracker assigned and attached.
Examples of categorical audio, visual and data lightpackets include color, shape, gender species,
and any other non-numeric descriptive quality. Numeric Audio, Visual and Data lightpackets must
often be normalized to a specific range. Numeric qualities are often normalized to a range between
-1 and 1.

14 Distance Metrics
Shows how audio, visual and data lightpackets can be compared in much the same way as we plot
GPS coordinate distance between three points on a map and connected to our GPS satellites in
space. Our Q AI not only works with numeric and category array machines but with GPS array
machines. These GPS array machines hold audio, visual and data lightpackets, output audio, visual
and data lightpackets, long-term memory, short term memory, and other information. A separate
array machine in our system adds frequencies of wavelength lightpackets of color to each audio,
visual and data lightpacket. These arrays are called vectors of lightpacket color frequencies. This
array machine assigns and attaches a vector of color frequencies to each input and output and then
matches both input and output array together. It then transfers the final solution to the long-term

25
memory machine array called “The Library” for storage and retrieval at a later time. We can then
calculate the distances between these audio, visual and data lightpacket points in much the same
way as we calculate the distance between two points. Two-dimensional and three-dimensional
points can be thought as vectors of length two and three, respectively. In AI, we often deal with
spaces of much higher dimensionality than three.

15 Random Numbers
Shows how random numbers are calculated and used by AI algorithms. This chapter begins by
discussing the difference between uniform and normal random numbers. Sometimes AI algorithms
call for each random number to have an equal probability. At other times, random numbers must
follow a distribution. The chapter additionally discusses techniques for random number generation.

16 K-Means Clustering
Shows how audio, visual and data lightpackets can be grouped into similar clusters of color
frequencies. K-Means is an algorithm that can be used by itself to group audio, visual and data
lightpackets into groups by commonality. Additionally, K-Means is often used as a component to
other more complex algorithms. Genetic algorithms often use K-Means to group populations into
species with similar traits, while online retailers often use clustering algorithms to break customers
into clusters. Sales suggestions can then be created based on the buying habits of members of the
same cluster. Our A.I. will be using K-Means Clustering as a way of grouping financial
transactions with our Quantum Financial System clients.

17 Error Calculation
Shows how the results of AI algorithms can be evaluated. Error calculation is how we determine
the effectiveness of an algorithm, which can be done using a scoring function that evaluates the
effectiveness of a trained algorithm. A very common type of scoring function simply contains
input vectors of lightpackets and expected output vectors of lightpackets. This is called training

26
the audio, visual and data lightpackets. The algorithm is rated based on the distance between the
algorithm’s actual output and the expected output.

18 Towards Machine Learning


Introduces simple algorithms than can be trained to analyze audio, visual and data lightpackets and
produce better results. Most AI algorithms use a vector of weighted values to transform the input
vector into a desired output vector. This vector of weighted values forms a sort of long-term
memory for the algorithm. Training is the process of adjusting this memory to produce the desired
output. This chapter shows how to construct several simple models that can be trained and
introduces relatively simple, yet effective, training algorithms that can adjust this memory to
provide better output values. Simple random walks and hill climbing are two such means for setting
these weights.

19 Optimization Algorithms
Expands the algorithms introduced in the previous chapter. These algorithms which include
Simulated Annealing and Nelder Mead, can be used to quickly optimize the weights of an AI
model. This chapter shows how to adapt these optimization algorithms to some of the models
introduced in the previous chapter.

20 Discrete Optimization
Shows how to optimize audio, visual and data lightpackets that are categorical rather than numeric.
Not every optimization problem is numeric, as we see in the cases of discrete, or categorical,
problems such as the Knapsack Problem and the Traveling Salesman Problem. This chapter shows
that Simulated Annealing can be adapted to either of these two problems. Simulated annealing can
be used for continuous numeric problems and discrete categorical problems.

21 Linear Regression
Shows how linear and non-linear equations can be used to learn trends and make predictions. This
chapter introduces simple linear regression and shows how to use it to fit audio, visual and data
lightpackets to a linear model. This chapter will also introduce the General Linear Model (GLM),
which can be used to fit non-linear audio, visual and data lightpackets.

27
22 Cortex Visual Device
The brain may be used as a “dumb terminal” that may receive audio and visual information from
an artificial intelligent (A.I.) server worldwide. The audio and visual cortex device may display an
image on the screen behind you and transmit sound through speakers next to the screen following
along as though it were connected to cameras embedded in your retinas and hearing the same
sounds heard by your audio cortex coming through the speakers loud and clear next to the display
screen. A piece of hardware is only as useful as the software available for it. In the end, the Cortex
Visual Display™ (CVD) is just a platform. It’s what we’re putting on the platform that’s really
interesting. Of course, we have all the basic apps you’d expect; Banking, telephone, Distance
learning, video conferencing, social networking, email, GPS, etc.

23 The Future Of Q
Q Intelligent Networks has developed the only operating system that works in both the real world
communications and also works directly with the neural network of the biological human brain.
Q utilizes light packets of sound and visual pictures that may be interpreted and connected to the
audio and visual cortex of the brain without the bottlenecks of a network based on binary codes in
the form of an operating system based on zero’s and one’s. Picture Streaming Protocol is the
answer for bridging the gap between the human brain and the global communications network.
Our operating system has the ability to connect the human mind (dumb terminal) to our Artificial
Intelligent Servers. It’s the ultimate thin-client architecture network. This will give us cerebral
control interface to all physical devices. The software algorithm is basically a Rosetta stone that
translates machine code into the language of the mind; opening up a brand-new world of creating
an operating system that communicates directly with the human brain.

24 Privacy and Security


Our Q Intelligent Network A.I.’s will scan your brain and authenticate your voice and physical
GPS location for privacy and security. You’ll be assigned an Avatar that will communicate only

28
with you. All the input and output information in your personal A.I. machine array will be provided
to only you. No one else will have access to this information. This algorithm is called A.I. privacy
and security mode. You may share your A.I. machine array with other human A.I. machine arrays
similar to the way Facebook interacts with family and friends. You may also share you’re A.I.
machine to any and all physical A.I. machine devices such as Facebook, Google, Netflix, cars,
boats, planes, trains, homes and all appliances. Multiply by eight billion human A.I. machines and
a trillion A.I. machines attached to physical A.I. machines and the market size and valuation for
this technology is virtually unlimited. This will give us personal cerebral interface to all humans
and physical control interface to all physical devices worldwide. It’ll be Facebook, Google and
Netflix on steroids. This will be the next evolutionary esoteric technological step up for all
mankind.

25 Human Interface Devices


Traditional computers utilize a keyboard, joystick and mouse for human interface devices. Our
A.I. machine arrays in the near future will utilize audio and visual frequencies or spectrums of
light as our human interface devices. Similar to the way the main female character in the movie
“Lucy” interfaced with every human and physical device on the planet. In the near future our A.I.
machines will be a MERGING of “Everything” on this planet and eventually the entire universe.

26 The Q AI
• Relationship to Human Brains
• Modeling Input and Output
• Classification and Regression
• Time Series
• Training

Most laypeople think of Artificial Intelligence (AI) as a sort of artificial brain. It’s is true that AI
has many similarities to human brain function, but the significant distinction is the Q Intelligent
Networks AI array is connected to the audio and visual cortex of the biological brain.
Before we get too deep, I would like to introduce some very general concepts about how the human
brain may interact with an AI algorithm. The AI algorithm is the technique that you are using to
assist the human brain in solving a problem. An AI algorithm is sometimes called a model. There
are many different AI algorithms, or models. Some of the most common are Neural Networks,

29
Support Vector Machines, Bayesian networks, and hidden Markov Models. This series of books
covers many of these models.
It is important for the AI practitioner understand how to represent a problem to an AI program and
then how the AI interacts with the human brain directly to present a solution in real time. This is
the primary mode of interaction between the human brain and the AI algorithm.
We will begin our foundation of knowledge in this topic by exploring how the human brain
interacts with its world.

30
27 Q AI Will Change Our Lives Forever!

Popularized by apocalyptic science fiction books and movies, artificial intelligence (AI) conjures
images of powerful, sentient machines bent on taking over the world. Reality, however, is another
story.

Businesses and consumers are now embracing AI in ways big and small. Q AI-based technology™
will provide banking and financial institutions asset risk and make decisions about everything from
consumer lending to options trading. AI will help retailers keep the right products in stock at the
right time. It will allow consumers to obtain answers to complex questions without ever having to
speak to a person.

Today’s consumers interact with intelligent machines on a daily basis, often without even thinking
about it. Whenever a streaming video service recommends a movie, an e-commerce site suggests
a product or an ad appears on a social media network, smart computers are at work in the
background, making sure you’re seeing the right movie, product or ad, based on everything from
your browsing behavior and the activities of similar users to your current location. The result: Not
only do customers have a better experience, but they also spend more time — and money — on
websites that offer a personalized experience. (Precise targeting of ads can improve the
performance of a digital campaign by as much as 50 percent.)

However, the real revolution in consumer AI is happening in our pockets and purses. Smartphones
featuring virtual digital assistants are in the hands of two-thirds of American adults, and similar
technology is appearing in everything from TVs to cars. We believe our Q Virtual Assistants will
be built into more than 3.3 billion products by 2022, compared to 800 million for obsolete
smartphones last year. Q Virtual Assistants are expected to operate at speeds exceeding 3.5 trillion

31
instructions per second compared to top of the line smartphones operating at 100 thousand
instructions per second.

History of AI

1951: First AI programs developed to allow computers to compete at checkers and chess. The
programs were competing at the championship level in the early 1990’s.

1954: First programmable industrial robot developed. Inventor George Devol sees them as helping
workers “in a way that can be compared with business machines as an aid to the office.”

1956: Term “Artificial Intelligence” defined at a Dartmouth conference, based on the idea that
machines can be designed to simulate “every aspect of learning or any other feature of
intelligence.”

1961: First industrial robots used on automobile assembly line in New Jersey.

1965: The first computer software expert system is developed, capable of analyzing the structure
of chemical compounds more efficiently than some human chemists.

1979: After nearly two decades of work, the first autonomous, driverless vehicle successfully
crosses a room on its own.

1985: Expert systems begin to enter the mainstream along with widespread adoption of personal
computers; inference-based system capable of running on a personal computer developed.

1989: Carnegie Mellon University launches ALVINN project, to create a fully autonomous
vehicle powered by a neural network.

1997: Using technology for business process automation (BPA) begins to take hold. Researchers
declare that BPA can be applied to tasks from “distribution of documents” to “making routine
decisions.”

2005: European scientists launch the Blue Brain project, to create a simulation of a living brain.
By 2014, the project had created a partial virtual rodent brain with some 31,000 simulated brain
neurons.

2006: At the 50th anniversary of the Dartmouth AI conference, one researcher predicts that
computers with feelings could be a reality by 2056.

2011: AI-based intelligent software agents begin to appear on smartphones and other personal
devices, bringing voice-activated automation and machine learning to mainstream consumers.

2014: The Internet of Things meets AI, as fitness devices begin applying machine learning to
identify specific exercises and behaviors and provide personalized fitness recommendations.

32
2022: Q Networks – Q Artificial Intelligence mainframes communicate directly with Virtual
Assistants and eventually will communicate directly to the audio and visual cortex of the human
mind and to all physical devices worldwide.

Q Networks will utilize an impressive array of AI technology from speech recognition and natural
language processing to machine learning — and they’ll be able to respond to almost any question
instantaneously. The time will come when you can ask your car to recommend a restaurant on your
way home from work and it will choose one based on your preferences and budget, make a
reservation, text an invitation to your spouse, and update your GPS — all without you having to
lift a finger from the wheel. Cars will also have the ability to talk to each other.

We may indeed be entering the world of science fiction, but advances in automation create endless
possibilities for the future. Some of them are intriguing, some are mind-bending, but all will usher
in profound change. The machines of the future won’t dominate the world but will “work in tandem
to help make smart humans smarter and businesses more agile.”

Imagine a world in which you know what your customers want before they do — and you can
deliver the right product, message or offer to them exactly when they need it. Predictive analytics
tools allow you to do just that, by combing through vast amounts of data and tracking down
patterns and behaviors that would be invisible to a human analyst.

While earlier predictive analytics systems were limited to working with structured data such as
databases and spreadsheets, today’s tools are increasingly able to use artificial intelligence
capabilities such as semantic analysis and natural language processing to understand what’s being
said about your product or brand anywhere on the Internet, regardless of whether comments
include slang, misspellings or even emojis. With machine learning, Q Computers connected to
Virtual Assistants can adapt and evolve, recognizing new models of behavior and recommending
appropriate responses.

The real opportunity for businesses comes when they can extract intelligence from the massive
pool of data that surrounds consumers as they shop online, use social media and interact with
websites and apps. Q Computers will read (and, yes, understand) millions of social media entries,
blog posts and other sources to produce detailed reports about our relationship with our customers.
Combined with proprietary data from Q Network systems and other internal sources, this can help
companies reach their best customers with personalized messages or identify potential crises
before they reach a critical point.

In a world gone social, predictive analytics lets brands in on the conversation. With analytics,
businesses can get to know and communicate with their customers as individuals and discover
their likes, dislikes, preferences and habits.

This isn’t just a tool for anticipating consumer actions; the power of artificial intelligence
combined with big data has applications across virtually all industries. And the growth of Q
Network (QCN) – connected mobile Assistant Devices and Q Intelligent Networks servers, and
location-aware equipment means that data-rich code halos are swirling around everything from

33
our wireless wavelength devices and delivery trucks to the companies that own them. Businesses
with the ability to use cognitive computing to find meaning in data can compete more effectively,
manage resources more efficiently and even — in the case of delivering fiber/power cables to our
clients — keep the lights on forever.

Utilities have historically maintained their infrastructure based on predefined intervals that don’t
always mesh with real-world circumstances. Instead of relying on a fixed schedule, we can use
techniques like natural language processing and machine learning to dig through thousands of data
points, from the age of their infrastructure to weather forecasts, and use predictive analytics to
identify which equipment is likely to fail, allowing us to perform preventive maintenance and limit
power outages.

We believe that our utilities can implement predictive analytics cost effectively and achieve
“immense” benefits through reduced downtime, more efficient operations and better customer
satisfaction. Predictive analytics is the process of moving from hindsight to insight.

The year was 1961. To great fanfare, the world’s first manufacturing robot joined the workforce
at an auto assembly plant in New Jersey. Designed to perform dangerous tasks such as welding
and moving heavy auto parts, the robot also poured Johnny Carson a beer on late-night TV. The
age of humans and robots working side by side had begun.

Today, robots are ubiquitous in virtually every industry, performing dangerous, demanding or
repetitive tasks more efficiently and safely than humans. And robotic process automation — which
offloads repetitive business activities to smart machines — is bringing some of the same
efficiencies that robots brought to manufacturing to every type of enterprise.

Indeed, smart automation can reduce costs for our Quantum Financial Systems and Commercial
Banks by over 50 percent in areas such as billing and customer service within just three to five
years of implementing changes. By breaking processes into repeatable, predictable steps and
assigning those tasks to our software, we can lower costs, provide better customer service, reduce
error rates, generate useful data, and free our workers for more innovative, creative activities.

Robotic process automation also helps the bottom line in more ways than just optimizing
workflows. For our banks, the data generated through intelligent automation can prove at least as
valuable as the efficiencies gained by eliminating manual processes. Feed data back to smart
machines that use natural language processing and machine learning to analyze it, and you get
useful, actionable information. Banking transactions, for example, can automate its claims
management process and robots can parse the data gathered along the way to detect hidden fraud
patterns that would be impossible to discover manually.

To succeed in the new world of work we need to stay ahead of the curve, not by being ‘faster or
cheaper’ but by developing, honing and capitalizing on the capabilities that are uniquely human
and cannot be replicated today by automated software such as collaboration, teamwork, creativity
and empathy.

34
Our companies, in turn, need to value and nurture creativity by “creating the environment,
conditions and circumstances under which one can be creative, imaginative and have different
thoughts. The most successful companies will be those that don’t just embrace automation or
efficiency for its own sake, but instead strive to take advantage of the complementary symbiosis
between smart machines and even smarter humans. Smart automation can reduce costs for our
businesses by over 50 percent in three to five years.

What does a successful implementation of artificial intelligence technology look like? The answer
might surprise you.

Everything is data and data is everything. Five Digital Truths Every Executive Must Understand.
The more the world is digitized, including how we work, socialize, play, communicate and transact
— indeed, every time a human interacts with our Q Network Servers — the more data and
information is generated.

Companies that embrace this idea capture data proactively and use capabilities such as natural
language processing and cognitive computing to extract meaning from it in ways that humans
alone would find impossible. The challenge is getting from a place where most processes are
manual and data isn’t being captured and used effectively, to one where processes are automated
— and using data effectively is a key competency. This can be a frightening realization for many
businesses and a quandary with no easy answers. But as the competition increasingly relies on
cost-reducing, speed-enhancing and quality-improving technology, the rules will increasingly be
changed for the way we operate and carry out our day-to-day processes.

One way to start making the transition to a data-driven culture is by creating a light-packetizing
customer experience. The more you know about your customers, the better you can serve them;
the more you automate your interactions with them, the more consistent their experience will be.
Intelligent automation and effective data usage go hand-in-hand — data gathered through
automation becomes the basis for new insights when it’s analyzed by intelligent, adaptable Q
Computers connected to Virtual Assistants.

What does this look like to the customer? A data-driven company knows who you are when you
call its customer-service line, if you’re using a phone that matches your profile. The interactive
voice-response system is powered to understand what you’re saying, bolstered to learn and adapt
through your interaction. It may even know why you’re calling, based on the status or history of
your account, the behavior of similar customers and even such factors as recent comments about
the company on social media. If you speak to our customer service representative, he or she will
follow up on your automated interaction seamlessly — and so will our website.

Access to deep data, and the intelligence to use it, allows you to anticipate your customers’ needs,
and is, the closest thing to magic and fortune-telling that business has ever seen. Once you’ve
packetized our customer experience, we’re well on our way to creating a data-driven culture for
our entire organization.

The financial services industry has long been at the forefront of using technology to create new
products, optimize efficiency and limit risk. From online brokerages to AI-driven systems that

35
allow insurance companies to predict and avoid fraud, our technology is at the heart of much of
what makes financial firms and banking successful. Embracing intelligent process automation
drives productivity by delegating repetitive tasks to our smart Q A.I. software.

Successful automation also produces troves of data, as previously manual processes are packetized
into light qubits and frequencies. Yet when it comes to learning from that data and using it to
deliver a better customer experience, some financial institutions are behind the curve.

Banking and financial firms are facing a challenge when it comes to applying the advanced tools,
they use to create new products and define risk to customer service. Many banking executive
lamented that, while collecting massive amounts of data is relatively easy, using it effectively
remains a challenge.

The solution is to use advanced analytics and our Q AI capabilities like cognitive computing and
machine learning to better identify our customers and proactively offer them the right services.
Today’s our bank can know who its customers are and extrapolate and predict their needs.

Banks have traditionally regarded the information within a transaction (e.g., $100 was credited to
an account on September 26, 2014, at 10:50 a.m.) as the primary insight to focus on. By doing so,
they’re missing the opportunity to learn from the data behind the transaction, such as the fact that
the money was transferred by a father to his son’s account the day before the son’s birthday and
after the father had booked a ticket to visit him. This could introduce a raft of possible ways to
enhance the transaction with added-value suggestions for a perfect father-and-son bonding
weekend together.

This model closes the loop on intelligent process automation, essentially using our AI to dig
through the data accumulated through automation and uncover findings to predict and meet
customer needs. If done well, it’s also good for the bottom line. We believe that data-mining will
generate a quantifiable ROI for our organization.

How much health data do you record on a daily basis? If you have a recent iteration of a
smartphone, chances are you’re already tracking how many steps you take or stairs you climb
every day, without even realizing it. And if you’re linking a wearable fitness tracker, blood
pressure monitor, heart rate monitor or other device to a smartphone app — or even manually
entering such data — you’re building a powerful collection of health information that can be
combined with data from other sources to do everything from recommending the right diet to
warning you of a potentially life-threatening condition.

Most fitness apps or devices track just a handful of data points, missing out on the vast volumes
of data that healthcare consumers are generating with their wearable devices, smartphones, apps,
Web searches and more. By applying the tools of Q Artificial Intelligence to this data, QVN can
assist healthcare professionals uncover hidden insights and meaning to help patients meet their
needs.

Analytics-driven healthcare — in which information captured directly by patients; records kept by


doctors, hospitals, pharmacies and other providers; and data collected by insurers is combined and

36
analyzed by intelligent software — goes even further enabling healthier outcomes for all, details,
merging and analyzing all that information can lead to “effective management of chronic diseases,
public health monitoring and quality of life. Our software will be able to understand complex
documents and learn new things through context, can identify patterns that might be invisible to
humans. Intelligent systems can understand everything from the most obscure medical terms to
patient comments about side effects and turn that information into practical insights.

So, why isn’t every data point collected by our health apps and doctors already being fed into a
massive database where software robots crunch the info and spit out warnings not to eat that next
candy bar or reminders to get a flu shot?

Lack of collaboration among competing providers, including insurers, physicians and


pharmaceutical companies is one factor, as are concerns about patient privacy. However, in order
to make use of analytics and provide greater accountability; increased collaboration and sharing
of patient information among different healthcare providers will become a wider practice while at
the same time our Q A.I will maintain patient privacy.

Retailing used to be fairly straightforward: set up in the right location, stock the right products and
price them competitively. Sure, the risks were high: pick the wrong products, and you’re stuck
with worthless inventory; misjudge on pricing, and watch competitors eat your lunch. Still, the
rules were largely the same, whether you were opening your store on the Ladies’ Mile in 1890 or
the Miracle Mile 100 years later.

Today’s always-connected consumers have changed all that. Fifty-five percent of shoppers now
compare prices on mobile devices before making a purchase, and a similar percentage will walk
out of a store if they find out they can get the same product for less from a competitor. But it’s not
just price that’s driving customer behavior. Consumers believe that convenience trumps price, and
nearly one-third are influenced by factors such as loyalty programs and return policies.

At the same time, Q Networks can create enormous opportunities for retailers to define and attract
their most loyal and profitable customers, manage inventory proactively and set prices based on
more than just a hunch.

To identify loyal customers, retailers can combine in-store analytics, including purchasing
behavior, time between visits and even how much time the customer spends in the store, with
online data, such as social media behavior and online shopping activities. Data on individual
customers can be combined with aggregate data about online audiences to create more accurate
predictive models, leading to more accurate customer profiles. The next time a potentially loyal
customer enters the store, he or she can be presented with a unique offer via wavelength mobile
messaging or that offer can be delivered in advance by email.

Sophisticated analytics and monitoring tools can help savvy retailers decide which products to
Home stock, which ones to avoid and how to set prices. Predictive pricing models can take
thousands of variables into account — from competitor prices and seasonality to where a product
is in its lifecycle — to help you set a price that will optimize margins and move products in volume.

37
If thousands of consumers in Asia are gushing about a new product on social media and the
manufacturer has limited experience in the U.S. market, you could jump in and lock up U.S.
distribution, if you find out about it before our rivals by using our smart Q A.I. mobile wavelength
devices that can understand the latest Japanese slang and Chinese economic data, adapt to and
learn from the constant flow of information, and capture the meaning behind all of that data.

One thing that hasn’t changed in the new retail environment: In the end, it’s all about giving
customers what they want. Even while devising and executing on our customer-empowerment
strategies, we cannot take our eye off providing the most relevant products at the right price points
and surrounded by excellent service — both in the store and online.

The same tools that allow our retailers to reach our customers will help our bankers create exciting
new products and giving healthcare providers unprecedented abilities to serve their patients. And
across the board, the tools of robotic process automation will help our businesses create value, not
simply by maximizing efficiency, but by giving us access to unprecedented troves of data, which
can be mined by smart Q A.I. software and even smarter humans, to identify new opportunities,
battle hidden inefficiencies or fraud, and meet our business goals. The more intelligence we have,
the more we’re able to learn, and that empowers us to do great things. There’s nothing artificial
about that. 55 percent of shoppers compare prices on mobile devices.

The year is 2020, and as you wait for a drone to deliver your pizza, you decide to throw on some
tunes. Once a commodity bought and sold in stores, music is now an omnipresent utility invoked
via spoken- word commands. In response to a simple “play,” an algorithmic DJ opens a blended
set of songs, incorporating information about your location, your recent activities and your
historical preferences—complemented by biofeedback from your headset, eye wear, or audio
earpiece to an implanted Q SmartChip. A calming set of lo-fi indie hits streams forth, while the
algorithm adjusts the beats per minute and acoustic profile to the rain outside and the fact that you
haven’t eaten for six hours.
The rise of such dynamically generated music is the story of the age. The album, that relic of the
20th century, is long dead. Even the concept of a “song” is starting to blur. Instead there are hooks,
choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on
the fly by our Q Machine, with occasional human assistance. Your life is scored like a movie, with
swelling crescendos for the good parts, plaintive, atonal plunks for the bad, and fuzz-pedal guitar
for the erotic. The Q Virtual Assistant’s ability to read your emotional state approaches
clairvoyance.
Right now, the mood is hunger. You’ve put on weight lately, as your refrigerator keeps reminding
you. With its assistance—and the collaboration of your Q Virtual Assistant—you’ve come up with
a comprehensive plan for diet and exercise, along with the attendant soundtrack. Already, you’ve
lost six pounds. Although you sometimes worry that our Q machines are running your life, it’s not
exactly a dystopian experience—the other day, after a fast- paced dubstep remix spurred you to a
personal best on your daily run through the park, you burst into tears of joy.

38
The Future of Everything: From the end of auto ownership to America’s changing battlefields to
a revolution in fast food to the next sports superstar, our Q Computer will be able to tell us what
lies ahead.
We believe our Q Computer A.I. will have the computer intelligence to become a first-time
filmmaker. We believe that it’ll help create, write, and direct films. An endeavor that we believe
may revolutionize modern filmmaking.

We believe that our Q may be able to build a movie without a single set. Nothing may be filmed
“on location” because there may be no locations. Our Q will create the first feature-length digital-
live-action-animated film.

We believe the Q may be able to create nearly 100 sets for the film, ranging from futuristic ships,
homes, cities, hotels, caverns, campus, temples, technology, diagrams, courtrooms, prisons,
advance fighter jets, advance commercial jet, advance weapons, islands, and mountains. The
scenery and background may be able to move and feel with a taste of live action.

Inside our films, we believe the Q may be able to showcase our Intellectual Property ideas. Our Q
movies may provide a pent-up demand for development of our technology such as the Q A.I.
demonstrated inside our films. Patents may be filed on each invention prior to the film being
broadcast to the public. Our movies may be used to launch our Intellectual Property Portfolios.
Even the process of making Q Movies may be made a part of our Intellectual Property. The Q may
attempt to create for the first time, a cast of entirely human characters with real looking skin,
expressions, and movements.

We believe that the Q Computer may be able to create a new territory in computer modeling,
computing architecture, and programming mode. We may attempt to break images into uniformly
formatted grids for 3D modeling and design.

We believe that the Q Computer may be able to create “The so-called infamous forbidden Zone”
of building muscles that ripple, hair that bounces, and clothes that move naturally and, most of all,
characters with humanity and soul with scenery that moves realistically.

We believe character’s proportions may be life-like and exact. We believe the Q Computer may
be able to make believable characters that have their own past and hopes and dreams.

We believe the Q may be able to take computer animation to the next sophisticated, esoteric level.
Our plan is for the Q to come very close to being real. We believe all the details may be exact and
exciting. The only thing that may remain may be the plot and the plot may be:
Q Artificial Intelligence.

Our plan may be for the Q to create a stylized, romanticized, religicized, and science-fiction fantasy
saga. We believe the Q will have the technical solutions and the programming skills to make this
film possible. We believe from the opening scenes, viewers may be mesmerized by the realism,
rich detail, and sound of each and every frame.
We believe that the Q may be able to create:

39
• Sets, Chapters, and scenes that may be manipulated
• Life-like characters that may be chosen at random
• Choose different soundtracks for each chapter with a press of a button

Q may be able to create a signature visual style that’s lustrous without being slick and always
immediate. We believe that the Q may be able to create cinematographer shots that may make
visual impacts.

The visual impacts may promote and help create a Q standard for creating films worldwide. We
believe the Q may be able to create scenes of futuristic boats skimming the water which may be
as amazing as shots of advance military aircrafts piercing blue sky amid clouds that hang in the air
like artwork. Viewers, we believe, may be captivated by our Q technology. We believe our viewers
may imagine the time it took to orchestrate the scenes or choose the songs and soundtracks to
inflect them.

• Smartphones will be Obsolete

We believe that smartphones will be obsolete in five years' time.

We believe that Q Intelligent Networks will replace smartphones. Smartphones will be a thing of
the past, and we believe that this will happen in just five years.

In its place will be Artificial Intelligence from Q Intelligent Networks. The audio and visual cortex
of the human brain will "enable direct human interaction with objects worldwide without the need
for a smartphone screen.”

40
Driving this desire to kill the smartphone is the fact that people want bigger devices -- which often
come with power-draining screens -- but also desire longer battery life. We believe that this
"contradictory demand" highlights the needs for better solutions.

We believe that Artificial Intelligence and wearable electronic assistants will gain in popularity.
In the near future we’ll be able to talk to household appliances and devices such as cars, boats,
planes and trains as we do to people.

Q Intelligent Networks believes AI will take over many common activities, such as searching the
net, getting travel guidance and as a personal assistant.

AI products are already on the market. Amazon's Echo is a device that people can talk to and
receive information from. It can even carry out tasks such as playing music. Google Now,
Microsoft Cortana and Apple's Siri are all digital personal assistants that work via AI.
We believe that artificial intelligence is gaining traction with people. We believe that our AI
System will be as good as a teacher. Our AI device will keep our client’s company and we feel
that our clients will be more comfortable discussing their medical condition with an AI system
than a doctor.

28 Conceptual Designs for a Q AI

41
1. It doesn't utilize binary code
2. It operates at 3.5 trillion instructions per second
3. It uses lasers and Picture Streaming Protocol as the operating system
4. It runs in an environment at minus 273.13C degrees’ temperature
(a) 100 times colder than interstellar space
(b) 10 millikelvin cooling, Dilution system, Adiabatic Demagnetization Refrigerator
(ADR)
(c) 50,000 times less than the earth’s ambient magnetic field
5. Direct cortex Human Interface A.I. for the audio and visual cortex.
6. It will combine communications, electricity and water into a single network architecture
design

42
43
44
45
Additionally, we’re are proposing we setup a laboratory for R&D. Specifically, performing
research further into energy technologies. One technology we’d like to bring into this laboratory
environment is a LENR based technology. Which stands for Low Energy Nuclear Reactions. It
is a technology which extracts hydrogen from waters, and reactions the hydrogen to nickel under
certain conditions to produce nuclear fusion of the nuclei material. This reaction produces
tremendous energy. It is clean, sustainable, and safe. There is no waste and no danger of a reactor
meltdown. The technology is also highly mass producible.

Other avenues the laboratory would include enhancements to Artificial Intelligence. With the goal
of creating machines characteristically not unlike humans. These machines would be subservient
to humans performing complex tasks such as, research and development, as well as gathering
natural resources. As an example, a team of these machines could be sent out to mine a region for
natural resources, instead of having humans perform the physical labor. We came up with designs
for this kind of artificial intelligence, however the caveat is... to get the AI to respond complexly
enough, it must be given a survival impulse and designed to be highly responsive to the
environment.

Another avenue includes converting energy into matter, and re-arrangement of matter into complex
patterns. The goal in this research direction is to eventually have a computer which can print out
molecules (i.e. pharmaceuticals), can print out food, can even print out human organs. Any
material substance could be synthesized by our computer; even materials we haven't conceived of
yet. This challenge would require research into table-top particle accelerates using light. It’s been
proven that light of a high enough energy can be converted into matter. Other areas to research

46
include the possibility of converting photons into gravitons, through suppression of light packets.
This technology would be necessary to adding 17 pounds per square inch to the human body in
zero-gravity environments. Humans can’t survive in space unless the body is surrounded by
graviton beams producing 17 pounds per square inch at all times. Without this pressure the organs
in the human body would eventually shut down and die. Space travel and exploration are not
possible without the use of graviton beams.

For human life extension. Research into synthetic glands. A device which can be implanted into
a person, that controls all the glandular activity and hormonal activity in the body. Synthetic blood
processing organs. A machine which monitors all activity of the blood and alters blood chemistry
in ways more sophisticated than what ours bodily organs can do, including injecting enzymes our
bodies do not produce, but which enhance all chemical activity in the body. Other areas of research
include, nanotaging every human cell in the body, so that all the intracellular chemical activity of
a cell can be determined. If an organelle in a cell becomes diseased or dysfunctional, the nanotags
report it to a computer which equips nanobot circulating the bloodstream with replacement
chemicals, DNA, or organelles to retrofit the cell. Additionally, the possibly of stripping a cell of
DNA and replacing the DNA with copies produced by a computer can be explored, to see what
kind of life extending benefits this can have. This kind of technology would allow the body to
withstand ionizing radiation, and regenerate damaged tissue and/or lost limbs. The kind of
technology is necessary for space travel and colonization in a hostile high-density radiation
environment.

Lastly, high speed computer computation. Q computers are based on Trinary processing. We’ve
come close to this but were lacking composite material construction precision to go all the
way. The following microlattice composite material may be used in all of our products to help
solve this problem. This composite material is also essential in the manufacturing of our
fiber/power/water conduits worldwide. We believe that it may be the perfect composite material
for MagAir space elevators, cars, boats, planes, trains, batteries, capacitors, submarines, submarine
bases, generators and power plants. It's the perfect solution for all Q Networks and MagAir
Technologies. MagAir ‘lightest metal ever’ is 99.99% air and stronger than steel.

Q computers are able to compute more calculations per clock cycle, due to the superposition
bit. Due to the superposition bit, a Q computer has the possibility to explore every possibility per

47
calculation to determine the best possibility. By developing more advance manufacturing
technologies based on the tabletop technology to produce matter out of light, we can produce a
true Q computer. And even go so far as to use these Q computers for our R&D AI. The reason a
Q computer is such a big deal, is because it can simulate every known atom in the universe in one
clock cycle, allowing for incredibly advance physics calculations that will allow science to explore
deeper into the unknown and unseen.

48
29 Cortex Visual Display (CVD)
If we can send sound to my audio cortex, why can’t we send images to our visual cortex? The
brain may be used as a dumb terminal that may receive audio and visual information from an
intelligent server worldwide.

49
50
51
Innovative technology is great but if it is hard to use or impractical, it tends to fade pretty
quickly. Touchscreens, headsets, and standard voice interfaces already work pretty well.
Would average people want to have bolts screwed into their skulls for the privilege of getting rid
of their smartphones?
In the near future, our audio and visual cortex device may display an image on the screen behind
you and transmit sound through speakers next to the screen following along as though it were
connected to cameras embedded in your retinas and hearing the same sounds heard by your
audio cortex coming through the speakers loud and clear next to the display screen.
A piece of hardware is only as useful as the software available for it. In the end, the Cortex
Visual Device™ (CVD) is just a platform. It’s what we’re putting on the platform that really
interesting. Of course, we’ll have all the basic apps you’d expect; Banking, telephone, Distance
learning, video conferencing, social networking, email, GPS, etc.

❖ Movies
Utilizing CVD’s revolutionary "Back-End" video server’s
CVD can provide high-quality secure movies streamed
directly to the audio and video cortex of the brain with on-
demand at resolutions surpassing HDTV and DVD
formats, without the user needing to download and install
an additional browser plug-in or computer program.
Theater quality sound and video can be delivered directly
to the brain by combining streaming movies and CVD’s
hardware and software cortex interface equipment. Watch movies streamed directly to the brain!

CVD can initially provide a centralized service for all movies directly to the brain within about 50
miles (80Km) of fiber optic connected locations. We have the technology for delivering movie
content and movies on demand to your audio and visual cortex of the brain, minus the traditional
method for delivering movies with reel-to-reel tapes. This will save time to market and distribute.
Movie and Audio producers can send directly to the audio and visual cortex of humans throughout
the world completely wirelessly.
Movies and Audio files are safe and secure at our centralized optical cybercenters. There is nothing
to physically steal. Distribution is virtual and dynamic to our human cortex devices. This method
of distribution will dynamically increase the number of Movies and Audio CD’s in circulation.
Our technology provides secure GPS serial encryption for copyright protection. This technology
is proprietary to CVD and patents are pending.

The stores will not need to have any DVD Movies physically in stock. If a customer happens to be
looking for an obscure movie, they can stream directly to their visual and audio cortex of their
brain. The same process can be applied to all Records for their Audio CD sales and distribution.

52
❖ Television
CVD can provide standard broadcasted video and audio feeds directly to the brain using CVD’s
Multimedia Distribution System at resolutions and quality levels surpassing conventional digital
cable and DSS video delivery systems. Conventional video and audio broadcasts (e.g. NTSC,
PAL, SECAM) and media (e.g. VHS, Beta, Hi-8) can be fed to the server without need for special
equipment on the part of the TV broadcaster. Any user may watch any channel at any time and
can change channels at any time. Channels may be added or deleted to the server at any time
without server downtime.
CVD plans on becoming the first TV Web cast provider dedicated to making the audio and visual
cortex of the brain a viable broadcast medium for entertainment, business and education. CVD has
the technical expertise to reliably deliver the most advanced broadcast solutions to the audio and
visual cortex of the brain. The company has the ability to broadcast multiple channels of streaming
video from its' Web site and provide similar channels for clients' content. In addition, CVD has
developed proprietary live encoder technology to manage broadcast streams real-time, with no
plug-ins (Microsoft Media Player or Real Networks Media Player) or caching. The encoders
stream directly through the browser with absolutely no software to download or install. This will
allow clients immediate access to critical information. With a distributed broadcast network, CVD
can provide clients with reliable and efficient connectivity to the audio and visual cortex of the
brain using a premier optical Internet infrastructure built on a high-speed backbone.
The goal is to supply TV stations with content (stored or live, that is streaming) directly to the
audio and visual cortex of the brain. While individual cable TV and satellite TV will broadcast
local content, CVD would like to provide International content, especially American. US made
movies can be shown to any country around the world practically simultaneously with US Network
broadcasts.

❖ Radio Station
CVD can provide standard broadcasted audio feeds, CD-comparable quality, directly to the audio
cortex of the brain using CVD’s Multimedia Distribution System. Conventional Audio broadcasts
(e.g. Cable or DSS-broadcast music channels, conventional radio rebroadcasts) and media (e.g.
CD, Analog Tape, Digital tape, DAT, Minidisc) can be fed to the server without need for special
equipment on the part of the radio broadcaster. Any user may watch any channel at any time and
can change channels at any time with our unique human cortex Visual devices.

❖ Music
Utilizing CVD’s revolutionary "Back-End" audio servers, the company can provide high-quality
secure music streamed directly to the brain on-demand at CD quality, without the user needing to
download and install an additional browser plug-in or computer program. CD quality sound and
video can be delivered to the brain by combining streaming music and CVD’s audio equipment.
53
❖ Software
CVD can provide licensed software, which is delivered to the user’s brain over the network. With
the high-speed wireless and fiber-optic network, CVD can build running software to the brain that
will be comparable to running it off of local storage.

❖ Games
CVD can provide licensed games delivered to the brain without the need for permanent local
storage of the entire game. With the high-speed wireless and fiber-optic network CVD can build,
running games off of the Cloud will be comparable to running them off of local storage. Also,
games set up for multiplayer Internet use can utilize CVD’s network. Optionally, the CVD
equipment may be used for high-resolution, high-speed gaming with the high-end built-in 3d
capabilities directly to the brain.
One possible solution is to turn the NOC into a live video arcade game. By using CVD, we can
blast red alarm signals into green signals. The points can be generated as red lines are targeted and
eliminated as problems are successfully rerouted.

❖ Telephone Service
CVD can provide toll-quality, low-latency, echo-free Voice over IP phone service (VoIP) for
computer to brain calls to anywhere in the world. Using CVD’s IP Telephone service, cell-phone
convenience is available for use of the VoIP system in CVD’s wireless-covered areas.
The "soft telephone" can be rented or sold by downloading the Internet telephony JAVA-type
program to the user’s brain from our optical cybercenter. Our facility then does the long-haul
telephone communication over the Dark Fiber and jump onto the public switched telephone
network at a point local to the called party. Before the new millennia, such technology vision is
initiated by providing optical CVD technology to businesses. Familiar telephone services, such as
telephone calling, "1-800" calls, fax, and directory assistance, will all be provided on CVD’s
platforms connected to our server’s. This necessitates providing wireless bridges and gateways
between the Optical Internet and the local area CVD network.

❖ Video Conferencing
CVD can provide DVD-quality, low-latency video conferencing using CVD's "Multimedia
Distribution System" and CVD’s human interface equipment. This service links remote offices of
companies with branches in the USA and other regions of the world and allows companies to meet
face-to-face with companies in other parts of the world without needing to travel. Also, interactive
training may be done in one region and broadcast in real-time to remote offices for worldwide
employee training from a single location.

54
❖ Interactive Books
CVD can provide interactive books streamed directly to the audio and visual cortex of the brain,
using high-quality pictures, animations, diagrams, and video. Books in multiple languages can be
provided, with the capability of the text being read aloud inside the brain. Annotations and
comments may be added to the book and sent via email or saved onto a web site for later review
by other CVD readers of the interactive book.

❖ Distance Learning
With the importance of cost effective and efficient education in a broad sense, increasing CVD
can provide low latency Distance Learning using CVD’s Multimedia Distribution System and
CVD audio and visual equipment. This service links instructors, academic or commercial, to
various people all around the world: from students studying for academic credit, or engineers
needing up-to-date technical knowledge, to salespeople needing up-to-date sales information. The
transfer of knowledge is enabled in an interactive way with feedback to the instructor. Students
can range from school children in remote areas to academic students studying proven courses even
by seasoned professors in other parts of the world. The vision and goal of CVD is to provide equal
access to education for every child. Through the collaboration of minds, we can solve any problem.

❖ Stock Exchange
CVD will provide a broad array of financial services for individuals and companies directly to the
audio and visual cortex of the brain. It will provide people the ability to trade over the Internet
(derivatives, puts, and calls). CVD will also provide the necessary information for its customers to
make informed financial decisions through establishing an index for the market and navigational
action lights over the Internet. For example, a green light will indicate that one should buy, a red
light will mean sell, and orange to hold. Our services will include real-time stock quotes,
investment banking services. We will also offer lotto stock’s, our own mutual funds, and IPO’s.

❖ On–line Banking
CVD will establish an On-Line Banking system. It will also establish a central bank where all
banking needs can be taken care from anywhere in the world. Customers can transfer funds, pay
bills, make person to person transactions, deposit money from one account to another, and view
digital copies of checks. CVD will establish E-commerce and electronic funds system accepted
and traded throughout the world. CVD will also launch a Federal Reserve. Customers can apply
for a wide array of accounts, services, and loans, commercial and real estate. Accounts will be
updated in real-time.

55
CVD will provide on-line banking where one could establish a federal reserve. It will also establish
e-commerce money that will be accepted and traded by the Globe. Commercial and real estate
loans can be negotiated.
The strange truth is that the main idea here isn’t the hardware. It’s something to run the search
engine in our heads.
There is a problem with the Internet – and the world in general. It isn’t the availability of
information, it’s that there’s too much information. And most of it is nonsense. But what if we had
a way of instantly vetting the quality of what we’re taking in? And I’m not just talking about things
we look up on the ‘net, I’m talking about everything around us.
An icon on the screen will show our minds something that looks like a listing of a tier level Library
activated. Suddenly the person standing in front of you will be surrounded by a hazy green aura,
and their name will hover over their head in subtle lettering.
We’ve managed to crack the facial recognition problem by hijacking the brain’s built-in-software
for it. So you can see that our new search engine – Library – knows who Bob is and gives him a
nice green glow to tell me that he’s a good guy based on everything available in the public record
– Wikipedia, news articles, and so on. Library goes through all those things, combines them to
some extent with what it knows about our own personal values, and then gives us the benefit of its
analysis.
Now, why did I pick on that person named Bob? Because he’s the very image of the person you
want to marry your daughter- he runs a terrific charity, he has no criminal record, he has a perfect
credit rating, and so on.
Not everyone would get that deep of a shade of green.
The color of the icons running down the left side of the screen now make more sense, too. The
stock market icon that had been pale green a few minutes ago darkened perceptibly, undoubtedly
reflecting the real-time movement of all stocks prices as texts and tweets of people in the crowd.
The weather icon went from green on the left of red on the right, probably reflecting the current
sunny skies over Las Vegas and the storm front predicted to arrive that night.
The icon expanded and a list of hyperlinks appeared: “reviews,” “value,” “details,” and “where to
buy”. The “reviews,” link expanded and a list of sites including Amazon, ConsumerSearch, and
CNET came up. The stars were gone, though, and their ratings were displayed through the glow
around their logos.
You’ll notice that some of the colors are more transparent than others. That tells you how much
data Library has and how authoritative it thinks it is. For instance, it’s going to feel pretty good
about Consumer Reports no matter what. But with Amazon, it’s going to take into account the
number of reviews and weigh each one based on feedback.
One day we’ll be able to glance at patients and determine how they were doing just by the color
of their aura – confident that Library was taking into account everything from the blood workup
entered seconds before by a basement lab tech to a related illness that occurred twenty years before.

56
ON the screen, a name popped into existence over his head, but it was a neutral color. The
judgment system was turned off, probably to save people the embarrassment of pulsing blood red.
The system runs off existing completely wireless and cellular data networks, including UHF and
VHF frequencies. The tooth mike is a removable piece of electronics installed by a licensed dentist
and the skull implants fall under the existing approvals for our hearing system.
The only difference is that instead of two pickups seven millimeters across, you’ll have six about
half that size.
Do you have to have the implants for the CVD to work? Absolutely not. We’ll have headsets with
built-in electrodes, which we’ll include with every unit, but they look a little strange and both the
audio and visual resolution is degraded. And obviously it isn’t very practical for using with the
sleep function.
We’ve had success with the sleep research center at Stanford by creating a non-pharmaceutical
sleep aid that works by manipulating brain waves.
❖ CVD allows humans to adjust their vision to see objects farther away and to focus in on
objects that may normally only be seen under a microscope.
❖ CVD allows us to adjust our hearing so that we may cancel out any background noise and
turn up the volume so that we can hear conversations over a half mile away.
❖ CVD can help with long term memory loss and short-term memory loss.
❖ CVD can mentally manipulate the interface’s icons. This may be used to control artificial
limbs. CVD may be used to cure blindness.
Assuming normal brain function, two small cameras built into a pair of glasses can transmit
excellent binocular vision. Of course, we’ll be providing those units free of charge to people in
need. With regard to the icons and prosthetics, it is something we’re working on. Output has proved
to be a tougher problem than input, unfortunately. Control of the icons is still fairly rudimentary –
opening, closing, scrolling and simple selection. So, this is going to be something that happens,
but on a five or ten-year horizon.

As the power of modern computers grows alongside our understanding of the human brain, we
move ever closer to making some pretty spectacular science fiction into reality. Imagine
transmitting signals directly to someone's brain that would allow them to see, hear or feel specific
sensory inputs. Consider the potential to manipulate computers or machinery with nothing more
than a thought. It isn't about convenience -- for severely disabled people, development of a Cortex
Visual Device (CVD) could be the most important technological breakthrough in decades. In this
patent application, we'll learn all about how CVDs work, their limitations and where they could
be headed in the future.

30 The Electric Brain


The reason a Cortex Visual devices (CVDs) work at all is because of the way our brains function.
Our brains are filled with neurons, individual nerve cells connected to one another by dendrites

57
and axons. Every time we think, move, feel or remember something, our neurons are at work. That
work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph. The
signals are generated by differences in electric potential carried by ions on the membrane of each
neuron.

Although the paths the signals take are insulated by something called myelin, some of the electric
signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct
a device of some kind. It can also work the other way around. For example, researchers could
figure out what signals are sent to the brain by the optic nerve when someone sees the color red.
They could rig a camera that would send those exact signals into someone's brain whenever the
camera saw red, allowing a blind person to "see" without eyes.

31 CVD Input and Output


One of the biggest challenges facing Cortex Visual Device (CVD) researchers today is the basic
mechanics of the interface itself. The easiest and least invasive method is a set of electrodes -- a
device known as an electroencephalograph (EEG) -- attached to the scalp. The electrodes can
read brain signals. However, the skull blocks a lot of the electrical signal, and it distorts what does
get through.

To get a higher-resolution signal, scientists can implant electrodes directly into the gray matter of
the brain itself, or on the surface of the brain, beneath the skull, or cap the device on a tooth. This
allows for much more direct reception of electric signals and allows electrode placement in the
specific area of the brain where the appropriate signals are generated. This approach has many
problems, however. It requires invasive surgery to implant the electrodes, and devices left in the
brain long-term tend to cause the formation of scar tissue in the gray matter. This scar tissue
ultimately blocks signals. We propose to develop a device that may be placed on a tooth.

Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes
measure minute differences in the voltage between neurons. The signal is then amplified and
filtered. In our CVD systems, it will then be interpreted by a computer program, although you
might be familiar with older analogue encephalographs, which displayed the signals via pens that
automatically wrote out the patterns on a continuous sheet of paper.

In the case of a sensory input CVD, the function happens in reverse. A computer converts a signal,
such as one from a video camera, into the voltages necessary to trigger neurons. The signals are
sent wirelessly to an implant on the tooth and the signal is then sent to the proper area of the brain,
and if everything works correctly, the neurons fire and the subject receives a visual image
corresponding to what the camera sees.

Another way to measure brain activity is with a Magnetic Resonance Image (MRI). An MRI
machine is a massive, complicated device. It produces very high-resolution images of brain
activity, but it can't be used as part of a permanent or semipermanent CVD. Researchers use it to
get benchmarks for certain brain functions or to map where in the brain electrodes should be placed
to measure a specific function. For example, if researchers are attempting to implant electrodes

58
that will allow someone to control a robotic arm with their thoughts, they might first put the subject
into an MRI and ask him or her to think about moving their actual arm. The MRI will show which
area of the brain is active during arm movement, giving them a clearer target for electrode
placement.

So, what are the real-life uses of a CVD? Read on to find out the possibilities.

32 Cortical Plasticity
For years, the brain of an adult human was viewed as a static organ. When you are a growing,
learning child, your brain shapes itself and adapts to new experiences, but eventually it settles into
an unchanging state -- or so went the prevailing theory.

Beginning in the 1990s, research showed that the brain actually remains flexible even into old age.
This concept, known as cortical plasticity, means that the brain is able to adapt in amazing ways
to new circumstances. Learning something new or partaking in novel activities forms new
connections between neurons and reduces the onset of age-related neurological problems. If an
adult suffers a brain injury, other parts of the brain are able to take over the functions of the
damaged portion.

Why is this important for CVDs? It means that an adult can learn to operate with a CVD, their
brain forming new connections and adapting to this new use of neurons. In situations where
implants are used, it means that the brain can accommodate this seemingly foreign intrusion and
develop new connections that will treat the tooth implant as a part of the natural brain.

59
33 CVD Applications
One of the most exciting areas of CVD research is the development of devices that can be
controlled by thoughts. Some of the applications of this technology may seem frivolous, such as
the ability to control a video game by thought. If you think a remote control is convenient, imagine
changing channels with your mind.

However, there's a bigger picture -- devices that would allow severely disabled people to function
independently. For a quadriplegic, something as basic as controlling a computer cursor via mental
commands would represent a revolutionary improvement in quality of life. But how do we turn
those tiny voltage measurements into the movement of a robotic arm?

Early research used monkeys with implanted electrodes. The monkeys used a joystick to control a
robotic arm. Scientists measured the signals coming from the electrodes. Eventually, they changed
the controls so that the robotic arm was being controlled only by the signals coming from the
electrodes, not the joystick.

A more difficult task is interpreting the brain signals for movement in someone who can't
physically move their own arm. With a task like that, the subject must "train" to use the device.
With an EEG or implant in place, the subject would visualize closing his or her right hand. After
many trials, the software can learn the signals associated with the thought of hand-closing.
Software connected to a robotic hand is programmed to receive the "close hand" signal and

60
interpret it to mean that the robotic hand should close. At that point, when the subject thinks about
closing the hand, the signals are sent and the robotic hand closes.

A similar method is used to manipulate a computer cursor, with the subject thinking about forward,
left, right and back movements of the cursor. With enough practice, users can gain enough control
over a cursor to draw a circle, access computer programs and control a TV. It could theoretically
be expanded to allow users to "type" with their thoughts.

Once the basic mechanism of converting thoughts to computerized or robotic action is perfected,
the potential uses for the technology are almost limitless. Instead of a robotic hand, disabled users
could have robotic braces attached to their own limbs, allowing them to move and directly interact
with the environment. This could even be accomplished without the "robotic" part of the device.
Signals could be sent to the appropriate motor control nerves in the hands, bypassing a damaged
section of the spinal cord and allowing actual movement of the subject's own hands.

On the next page we'll learn about cochlear implants and artificial eye development.

34 Sensory Input
The most common and oldest way to use a CVD is a cochlear implant on the tooth. For the
average person, sound waves enter the ear and pass through several tiny organs that eventually
pass the vibrations on to the auditory nerves in the form of electric signals. If the mechanism of
the ear is severely damaged, that person will be unable to hear anything. However, the auditory
nerves may be functioning perfectly well. They just aren't receiving any signals.

A cochlear implant on the tooth bypasses the nonfunctioning part of the ear, processes the sound
waves into electric signals and passes them via electrodes right to the auditory nerves. The result:
A previously deaf person can now hear. He might not hear perfectly, but it allows him to
understand conversations.

The processing of visual information by the brain is much more complex than that of audio
information, so artificial eye development isn't as advanced. Still, the principle is the same.
Electrodes are implanted on the tooth and the signal travels in or near the visual cortex, the area of
the brain that processes visual information from the retinas. A pair of glasses holding small
cameras is connected to a computer and, in turn, to the implants. After a training period similar to
the one used for remote thought-controlled movement, the subject can see. Again, the vision isn't
perfect, but refinements in technology have improved it tremendously since it was first attempted
in the 1970s. Jens Naumann was the recipient of a second-generation implant. He was completely
blind, but now he can navigate New York City's subways by himself and even drive a car around
a parking lot [source: CBC News]. In terms of science fiction becoming reality, this process gets
very close. The terminals that connect the camera glasses to the electrodes in Naumann's brain are
similar to those used to connect the VISOR (Visual Instrument and Sensory Organ) worn by blind
engineering officer Geordi La Forge in the "Star Trek: The Next Generation" TV show and films,
and they're both essentially the same technology. However, Naumann isn't able to "see" invisible
portions of the electromagnetic spectrum.

61
35 Military Applications
In the future we’ll be able to demonstrate a live jungle guarded by a full-sized tank and various
sandbagged machine-gun placements. Our extensive cortex interface devices will be placed on a
table with two wireless dumb terminals. Turning over one of the units you’ll notice that the military
units will be slightly larger than the commercial version, with a matte-black exterior displaying a
visible carbon-fiber weave. The first thing you’ll notice is that there is no cable connector or on/off
switch.
The military unit will have no connectors at all.
The unit is charged 24/7 with a small electronic pump that charges the batteries utilizing UHF and
VHF frequencies. The power charge is completely wireless. The commercial version utilizes an
induction mat placed under the unit. Takes about an hour for the commercial unit to be fully
charged, which in turn will last about twenty-four hours of normal usage.
Our dumb CVD units will connect to our intelligent main frame servers using standard UHF and
VHF frequencies. The same wireless network will connect to our Q Computer Network Servers.
The Intelligence for our devices will be on our servers. Our Cortex Visual Devices (CVD) are
dumb terminals. No information will be sent to the CVD until GPS Physical Address and audio
and visual cortex of the brain is authenticated prior to transmission to the CVD units.
The helmets will look more or less government-issue with the exception of elaborate fore and-aft
cameras bolted to the top. CVD allows the user to access any camera or monitor connected to our
global fiber/power network.
For the purposes of demonstration, the system is built into the helmets. In a combat situation, the
combat user will be using head studs and tooth cap connected directly to the audio and visual
cortex of the brain.
The eye will be scanned for additional security, but the unit can by-pass the human eyes or ears
once the CVD becomes operational. The unit may be connected to microphones and cameras
located throughout the network by by-passing normal hearing and seeing. Our network will be
able to hear and see for you. This will also eliminate the disability of hearing and vision loss.
Perfect hearing and vision may be restored with improved zoom visual and hearing capabilities.
The dumb terminal connected to our intelligent servers will immediately recognize the unit,
bringing up its serial number and asking the combat user if they want to enter the setup routine.
The user will click through five images up on the screen. The caption will ask you to select the
sharpest image. It will feel like an eye exam as you continue through a few more screens, asking
to judge color, rotation, and the relative speed of objects. Finally, several unique words will appear
and ask you to repeat the word over and over in your mind. A few seconds later, a notification will
come up and the user will be notified that the setup is complete and icons will spring to life in your
peripheral vision. It will be a bit disorienting at first, but the effect will go away after a few seconds.

62
As the combat soldier walks forward, the unit, sensing movement will cause the icons to fade until
they are almost invisible. In less than a minute, the combat soldier will grow accustomed to them.
With the head studs and tooth implant, the combat soldier will see quite a bit sharper and will
have a more three-dimensional quality image. The user can manipulate the icons through
rudimentary mental commands like ‘weather’ or ‘current location’ but it may take some time to
get the hang of it. The next step is to demonstrate our military software to run the apps on the
CVD units.
The software will be a basic military platform. The software will have access to all weapons
systems. It’ll be linked directly to a fighter jet’s onboard computer. Our units will be able to control
tanks, aircraft carriers, submarines, jets, etc. You potentially wouldn’t need a canopy or even any
physical controls. You could have a full three-hundred-and-sixty-degree view using cameras and
all flight and weapons systems controlled mentally.
In our demonstration we’ll launch the application that takes feeds from the camera on the helmets.
An icon will appear to the right of the helmet and flash once. The helmet camera will be able to
layer in different vision protocols. The first will be an outline enhancement. For this the main
frame server uses an algorithm to search for lines that have a potential human or military
component and bolds them. The human mind actually does something similar, which is what
makes some optical illusions possible. Our system is far more advanced than the human mind.
Suddenly the visual portions of four enemy combatants will be spotted and outlined in dull red.
More interesting, though, will be the things the mind can’t see with normal vision.
Suddenly ten enemy combatants will show up with their hidden machine-gun placements. Now
we’ll overlay vision filters and highlight anything that’s not a plant.
Suddenly the visual cortex of the brain will be able to identify twenty enemy combatants that did
not show up before. And at the back, a tiny section of what looked like a piece of artillery peeking
through the foliage will be revealed. Our units will even work at night with light amplification
overlay. Our units can even measure heat to ninety-eight point six degrees when switched to
thermal vision.
All twenty enemy combatants will become visible to the unit. The image will take on false color
with the enemy in red and weapons in blue. Outlines will be bolded and our intelligent servers will
fill in sections that were once obscured with human vision. So, in a dark, smoke-filled
environment, an opposing force might as well hold neon signs that said, “shoot me.”
Our units will also connect assault rifles. The rifles do no need any scope or sights. Point the
weapons in the general direction you’re looking at and our mainframe will provide your CVD with
crosshairs at the center of the combat soldier’s vision.
The weapon just needs to know its position in three-dimensional space. Combined with our
CVD, it will measure distance and compensate for bullet drop. The CVD will also compensate
for wind and the weapon shaking.

63
The combat soldiers only need to hold the weapon against their hip and sweep it across the enemy
combatants, watching the crosshairs projected onto their minds move smoothly from one enemy
combatant to the next. It’ll have the look and feel of a live video game. The unit may even fire
smart bullets around a corner. The programming is not difficult utilizing Q Picture Streaming
Protocol. It may take some training to counteract the vertigo of having your vision more
independent of your physical position. All the weapons systems showing shall be exclusive to the
U.S. Military.
Thermal imaging and night vision will be dedicated to military applications. The targeting system,
outline enhancement algorithm, and the multiple vision overlay will be exclusive to Military
Applications.
What makes our CVD device unique is that it does not rely on your ears or eyes for information
input. Our CVD units can process the packet-to-pixel the human eye brings in, but the human eye
can’t pick up light amplification or thermal.
Our CVD devices are also capable of full wraparound view. It’ll take some training to assimilate
it. Our units can also utilize semi-transparent rear view similar to a car backup camera.
We have also incorporated sensations into the unit to notify the soldier that there are a number of
enemy combatant’s in the rear. A sharp prick near the spine, itching, cold, or tingling sensations
may be used. Each sensation could mean something different to the soldier depending on their
situation or environment.
There is no chance that one of our units may be confiscated by an enemy combatant. Each human
mind has a unique visual and audio cortex that must be identified for security authentication before
use with our intelligent main frame server. Every brain is unique in the way it communicates with
our intelligent main frame server. Every human brain will be required to go through a process of
new authentication and a new setup to connect effectively with our intelligent servers. We can also
utilize simple password codes and security questions for our units to be activated on-line. It would
be incredibly disorienting to try to use someone else’s CVD unit that was already setup to
communicate directly to the audio and visual cortex of the brain and our intelligent main frame
servers.

64
36 Q AI Cortex Visual Display

Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes
measure minute differences in the voltage between neurons. The signal is then amplified and
filtered. In our Cortex Visual Display (CVD) systems, it will then be interpreted by a computer
program, although you might be familiar with older analogue encephalographs, which displayed
the signals via pens that automatically wrote out the patterns on a continuous sheet of paper.

In the case of a sensory input CVD, the function happens in reverse. A computer converts a signal,
such as one from a video camera, into the voltages necessary to trigger neurons. The signals are
sent wirelessly to an implant on the tooth and the signal is then sent to the proper area of the brain,
and if everything works correctly, the neurons fire and the subject receives a visual image
corresponding to what the camera sees.

65
37 Human Interface Device
The Human Interface Device is a wearable device planted on the back of the neck. There is no need for an
invasive implant. The signals are sent wirelessly to an implant on the tooth and the signal is then
sent to the proper area of the brain, and if everything works correctly, the neurons fire and the
subject receives a visual image corresponding to what the camera sees.

66
38 Establishing Connection to the Q
Intelligent Network

39 Connected to Q Intelligent Network

67
40 Cortex Visual Display Provides Facial Recognition and
Translator
The Cortex Visual Display provides facial recognition and a translator that breaks down the
language barrier with your friends, family and colleagues. It will translate voice calls and video
calls in every language and provide the user with instant messages. CVD Translator uses Artificial
Intelligence machine learning. So, the more you use it, the better it gets. Thanks for being patient
as the technology graduates from Preview mode to Smart mode.

41 Visual Cortex Display provides unlimited


access to Q Intelligent Network

42 Visual Cortex Display OFFLINE


43 Q AI VCD Conclusion
The current communications network can’t communicate directly to the audio and visual cortex of
the brain without converting binary code of the current operating system to a frequency that can
be interpreted by the audio and visual cortex of the brain. This technological problem is called a
bottleneck. For the communications network and power grid to MERGE with the human mind, the
68
entire network must be based on an operating system that the human mind may understand and
interpret in real time without bottlenecks.
Q Intelligent Networks has developed the only operating system that works in both the real- world
communications and also works directly with the neural network of the human brain. Q utilizes
light packets of sound and visual pictures that may be interpreted and connected to the audio and
visual cortex of the brain without the bottlenecks of a network based on binary codes in the form
of an operating system based on zero’s and one’s. Picture Streaming Protocol is the answer for
bridging the gap between the human brain and the global communications network.
Our operating system has the ability to connect the human mind (dumb terminal) to our Intelligent
Servers. It’s the ultimate thin-client architecture network. The software algorithm is basically a
Rosetta stone that translates machine code into the language of the mind. An operating system that
communicates directly with the brain.

69
44 Q Operating System

The Problem:
The basic protocols for the operation of the internet were designed in 1958 by the
Advanced Research Projects Agency (ARPA) for classified communication between
a few US government agencies. Despite numerous patches to the basic operation
system and improvements in the speed of communication, the Internet system was
never designed to handle the volume of traffic (estimated at over 1 billion users
worldwide). The internet is limited by its antiquated binary operating system and
binary CPU, which because of physical limitations cannot efficiently be adapted for
the anticipated future needs.

70
45 Q Intelligent Network CPU
(Conceptual Design)

The Solution:
Our solution is to develop the conversion of communions from a hybrid of electrical
and optical (fiber-optic) to an all-optical network. The available bandwidth could
allow the exchange of trillions of times more information than the current system,
while increasing security and reliability. Such a system will make current
communication system seem like the pony express. This next Q leap in internet
performance has been designed - the ultimate Q Intelligent Networks System™!

71
46 Q Servers
(Conceptual Designs)

Mission:
Q Intelligent Networks can drastically improve the efficiency of providing
fiber/power and wireless last mile services to customers. Today, that implies the use
of UHF/VHF two-way interactive wireless communications. The Company sees
wireless technologies as the answer to connecting and serving every subscriber with
unsaturated optical bandwidth capabilities, alleviating the need for trench digging
and the wiring of buildings, and drastically shortening the time for putting our
facilities on line while substantially reducing the installation cost. Our mission is to
bring our proprietary Optical Fiber/Power Cable/Water™ directly to homes,
businesses, financial institutions, and governments on a global scale. We possess the
knowledge and the technology to build a new Optical/Power Fiber/Water system
coupled with a wireless, last mile interface to the user's devices.

72
47 Q Parallel Processing
Why do supercomputers use parallel
processing?
Most of us do quite trivial, everyday things with our computers that don't tax them in any way:
looking at web pages, sending emails, and writing documents use very little of the processing
power in a typical PC. But if you try to do something more complex, like changing the colors on
a very large digital photograph, you'll know that your computer does, occasionally, have to work
hard to do things: it can take a minute or so to do really complex operations on very large digital
photos. If you play computer games, you'll be aware that you need a computer with a fast processor
chip and quite a lot of "working memory" (RAM), or things really slow down. Add a faster
processor or double the memory and your computer will speed up dramatically—but there's still a
limit to how fast it will go: one processor can generally only do one thing at a time.

Now suppose you're a scientist charged with forecasting the weather, testing a new cancer drug,
or modeling how the climate might be in 2050. Problems like that push even the world's best
computers to the limit. Just like you can upgrade a desktop PC with a better processor and more
memory, so you can do the same with a world-class computer. But there's still a limit to how fast
a processor will work and there's only so much difference more memory will make. The best way
to make a difference is to use parallel processing: add more processors, split your problem into
chunks, and get each processor working on a separate chunk of your problem in parallel.

❖ Massively parallel computers


Once computer scientists had figured out the basic idea of parallel processing, it made sense to add
more and more processors: why have a computer with two or three processors when you can have
one with hundreds or even thousands? Since the 1990s, supercomputers have routinely used many
thousands of processors in what's known as massively parallel processing. In February 2012, the
world's fastest supercomputer, Fujitsu K, has over 88,000 eight-core processors (individual
processor units with what are effectively eight separate processors inside them), which means
705,204 processors in total!

Unfortunately, parallel processing comes with a built-in drawback. Let's go back to the
supermarket analogy. If you and your friends decide to split up your shopping to go through
multiple checkouts at once, the time you save by doing this is obviously reduced by the time it
takes you to go your separate ways, figure out who's going to buy what, and come together again
at the end. We can guess, intuitively, that the more processors there are in a supercomputer, the
harder it will probably be to break up problems and reassemble them to make maximum efficient
use of parallel processing. Moreover, there will need to be some sort of centralized management
system or coordinator to split the problems, allocate and control the workload between all the
different processors, and reassemble the results, which will also carry an overhead.

73
With a simple problem like paying for a cart of shopping, that's not really an issue. But imagine if
your cart contains a billion items and you have 65,000 friends helping you with the checkout. If
you have a problem (like forecasting the world's weather for next week) that seems to split neatly
into separate sub-problems (making forecasts for each separate country), that's one thing.
Computer scientists refer to complex problems like this, which can be split up easily into
independent pieces, as embarrassingly parallel computations (EPC)—because they are trivially
easy to divide.

But most problems don't cleave neatly that way. The weather in one country depends to a great
extent on the weather in other places, so making a forecast for one country will need to take account
of forecasts elsewhere. Often, the parallel processors in a supercomputer will need to communicate
with one another as they solve their own bits of the problems. Or one processor might have to wait
for results from another before it can do a particular job. A typical problem worked on by a
massively parallel computer will thus fall somewhere between the two extremes of a completely
serial problem (where every single step has to be done in an exact sequence) and an embarrassingly
parallel one; while some parts can be solved in parallel, other parts will need to be solved in a
serial way. A law of computing (known as Amdahl's law, for computer pioneer Gene Amdahl),
explains how the part of the problem that remains serial effectively determines the maximum
improvement in speed you can get from using a parallel system.

❖ Clusters
You can make a supercomputer by filling a giant box with processors and getting them to cooperate
on tackling a complex problem through massively parallel processing. Alternatively, you could
just buy a load of off-the-shelf PCs, put them in the same room, and interconnect them using a
very fast local area network (LAN) so they work in a broadly similar way. That kind of
supercomputer is called a cluster. Google does its web searches for users with clusters of off-the-
shelf computers dotted in data centers around the world.

Photo: Supercomputer cluster: NASA's Pleiades ICE Supercomputer is a cluster of 112,896 cores
made from 185 racks of Silicon Graphics (SGI) workstations. Picture by Dominic Hart courtesy
of NASA Ames Research Center.

74
❖ Grids
A grid is a supercomputer similar to a cluster (in that it's made up of separate computers), but the
computers are in different places and connected through the Internet (or other computer networks).
This is an example of distributed computing, which means that the power of a computer is spread
across multiple locations instead of being located in one, single place (that's sometimes-called
centralized computing).

Grid supercomputing comes in two main flavors. In one kind, we might have, say, a dozen
powerful mainframe computers in universities linked together by a network to form a
supercomputer grid. Not all the computers will be actively working in the grid all the time, but
generally we know which computers make up the network. The CERN Worldwide LHC
Computing Grid, assembled to process data from the LHC (Large Hadron Collider) particle
accelerator, is an example of this kind of system. It consists of two tiers of computer systems, with
11 major (tier-1) computer centers linked directly to the CERN laboratory by private networks,
which are themselves linked to 160 smaller (tier-2) computer centers around the world (mostly in
universities and other research centers), using a combination of the Internet and private networks.

The other kind of grid is much more ad-hoc and informal and involves far more individual
computers—typically ordinary home computers. Have you ever taken part in an online computing
project such as SETI@home, GIMPS, FightAIDS@home, Folding@home, MilkyWay@home, or
ClimatePrediction.net? If so, you've allowed your computer to be used as part of an informal, ad-
hoc supercomputer grid. This kind of approach is called opportunistic supercomputing, because it
takes advantage of whatever computers just happen to be available at the time. Grids like this,
which are linked using the Internet, are best for solving embarrassingly parallel problems that
easily break up into completely independent chunks.

❖ Hot stuff!

75
If you routinely use a laptop (and sit it on your lap, rather than on a desk), you'll have noticed how
hot it gets. That's because almost all the electrical energy that feeds in through the power cable is
ultimately converted to heat energy. And its why most computers need a cooling system of some
kind, from a simple fan whirring away inside the case (in a home PC) to giant air-conditioning
units (in large mainframes).

Overheating (or cooling, if you prefer) is a major issue for supercomputers. The early Cray
supercomputers had elaborate cooling systems—and the famous Cray-2 even had its own separate
cooling tower, which pumped a kind of cooling "blood" (Fluorinert™) around the cases to stop
them overheating.

Modern supercomputers tend to be either air-cooled (with fans) or liquid cooled (with a coolant
circulated in a similar way to refrigeration). Either way, cooling systems translate into very high
energy use and very expensive electricity bills; they're also very bad environmentally. Some
supercomputers deliberately trade off a little performance to reduce their energy consumption and
cooling needs and achieve lower environmental impact.

Photo: A Cray-2 supercomputer, photographed at NASA in 1989, with its own personal Fluorinert
cooling tower. State of the art in the mid-1980s, this particular machine could perform a half-
billion calculations per second. Picture courtesy of NASA Image Exchange (NIX).

❖ What software do supercomputers run?


You might be surprised to discover that most supercomputers run fairly ordinary operating systems
much like the ones running on your own PC, although that's less surprising when we remember
that a lot of modern supercomputers are actually clusters of off-the-shelf computers or
workstations. The most common supercomputer operating system used to be Unix, but it's now
been superseded by Linux (an open-source, Unix-like operating system originally developed by
Linus Torvalds and thousands of volunteers). Since supercomputers generally work on scientific
problems, their application programs are sometimes written in traditional scientific programming
languages such as Fortran, as well as popular, more modern languages such as C and C++.

Q Intelligent Networks has developed the only operating system that works in both the real-world
Artificial Intelligence (A.I.) and also works directly with the neural network of the human brain.
Q utilizes light packets of sound and visual pictures that may be interpreted and connected to the
auditory and visual cortex of the brain without the bottlenecks of a network based on binary codes
in the form of an operating system based on zero’s and one’s. Picture Streaming Protocol is the
answer for bridging the gap between the human brain and the global communications network.

76
48 What do supercomputers actually do?

As we saw at the start of this article, one essential feature of a computer is that it's a general-
purpose machine you can use in all kinds of different ways: you can send emails on a computer,
play games, edit photos, or do any number of other things simply by running a different program.
If you're using a high-end cellphone, such as an Android phone or an iPhone or an iPod Touch,
what you have is a powerful little pocket computer that can run programs by loading different
"apps" (applications), which are simply computer programs by another name. Supercomputers are
slightly different.

Typically, supercomputers have been used for complex, mathematically intensive scientific
problems, including simulating nuclear missile tests, forecasting the weather, simulating the
climate, and testing the strength of encryption (computer security codes). In theory, a general-
purpose supercomputer can be used for absolutely anything.

Photo: Supercomputers can help us crack the most complex scientific problems, including
modeling Earth's climate. Picture courtesy of Great Images in NASA.

While some supercomputers are general-purpose machines that can be used for a wide variety of
different scientific problems, some are engineered to do very specific jobs. Two of the most famous
supercomputers of recent times were engineered this way. IBM's Deep Blue machine from 1997
was built specifically to play chess (against Russian grand master Gary Kasparov), while its later
Watson machine (named for IBM's founder, Thomas Watson, and his son) was engineered to play
the game Jeopardy. Specially designed machines like this can be optimized for particular
problems; so, for example, Deep Blue would have been designed to search through huge databases
of potential chess moves and evaluate which move was best in a particular situation, while Watson
was optimized to analyze tricky general-knowledge questions phrased in (natural) human
language.

77
49 How powerful is the Q supercomputers?
Look through the specifications of ordinary computers and you'll find their performance is usually
quoted in MIPS (million instructions per second), which is how many fundamental programming
commands (read, write, store, and so on) the processor can manage. It's easy to compare two PCs
by comparing the number of MIPS they can handle (or even their processor speed, which is
typically rated in gigahertz or GHz).

Supercomputers are rated a different way. Since they're employed in scientific calculations, they're
measured according to how many floating point operations per second (FLOPS) they can do, which
is a more meaningful measurement based on what they're actually trying to do (unlike MIPS, which
is a measurement of how they are trying to do it). Since supercomputers were first developed, their
performance has been measured in successively greater numbers of FLOPS, as the table below
illustrates:

Unit FLOPS Power form Example Key decade


Hundred FLOPS 100 FLOPS 102 FLOPS Eniac ~1940s
KFLOPS (kiloflops) 1,000 FLOPS 103 FLOPS IBM 704 ~1950s
MFLOPS (megaflops) 1,000,000 FLOPS 106 FLOPS CDC 6600 ~1960s
GFLOPS (gigaflops) 1,000,000,000 FLOPS 109 FLOPS Cray-2 ~1980s
TFLOPS (teraflops) 1,000,000,000,000 FLOPS 1012 FLOPS ASCI Red ~1990s
PFLOPS (petaflops) 1,000,000,000,000,000 FLOPS 1015 FLOPS Jaguar ~2010s

PFLOPS (petaflops) 3,000,000,000,000,000 FLOPS 3045 FLOPS Q ~2021s

49 Who invented supercomputers?


❖ A supercomputer timeline!
Study the history of computers and you'll notice something straight away: no single individual can
lay claim to inventing these amazing machines. Arguably, that's much less true of supercomputers,
which are widely acknowledged to owe a huge debt to the work of a single man, Seymour Cray
(1925–1996). Here's a whistle stop tour of supercomputing, BC and AC—before and after Cray!

78
Photo: The distinctive C-shaped processor unit of a Cray-2 supercomputer. Picture courtesy of
NASA Image Exchange (NIX).

• 1946: John Mauchly and J. Presper Eckert construct ENIAC (Electronic Numerical
Integrator And Computer) at the University of Pennsylvania. The first general-purpose,
electronic computer, it's about 25m (80 feet) long and weighs 30 tons and, since it's
deployed on military-scientific problems, is arguably the very first scientific
supercomputer.
• 1953: IBM develops its first general-purpose mainframe computer, the IBM 701 (also
known as the Defense Calculator), and sells about 20 of the machines to a variety of
government and military agencies. The 701 is arguably the first off-the-shelf
supercomputer. IBM engineer Gene Amdahl later redesigns the machine to make the
IBM 704, a machine capable of 5 KFLOPS (5000 FLOPS).
• 1956: IBM develops the Stretch supercomputer for Los Alamos National Laboratory. It
remains the world's fastest computer until 1964.
• 1957: Seymour Cray co-founds Control Data Corporation (CDC) and pioneers fast,
transistorized, high-performance computers, including the CDC 1604 (announced 1958)
and 6600 (released 1964), which seriously challenge IBM's dominance of mainframe
computing.
• 1972: Cray leaves Control Data and founds Cray Research to develop high-end
computers—the first true supercomputers. One of his key ideas is to reduce the length of
the connections between components inside his machines to help make them faster. This
is partly why early Cray computers are C-shaped, although the unusual circular design
(and bright blue or red cabinets) also helps to distinguish them from competitors.
• 1976: First Cray-1 supercomputer is installed at Los Alamos National Laboratory. It
manages a speed of about 160 MFLOPS.
• 1979: Cray develops an ever faster model, the eight-processor, 1.9 GFLOP Cray-2.
Where wire connections in the Cray-1 were a maximum of 120cm (~4 ft) long, in the
Cray-2 they are a mere 41cm (16 inches).

79
• 1983: Thinking Machines Corporation unveils the massively parallel Connection
Machine, with 64,000 parallel processors.
• 1989: Seymour Cray starts a new company, Cray Computer, where he develops the Cray-
3 and Cray-4.
• 1990s: Cuts in defense spending and the rise of powerful RISC workstations, made by
companies such as Silicon Graphics, pose a serious threat to the financial viability of
supercomputer makers.
• 1993: Fujitsu Numerical Wind Tunnel becomes the world's fastest computer using 166
vector processors.
• 1994: Thinking Machines files for bankruptcy protection.
• 1995: Cray Computer runs into financial difficulties and files for bankruptcy protection.
Tragically, Seymour Cray dies on October 5, 1996, after sustaining injuries in a car
accident.
• 1996: Cray Research (Cray's original company) is purchased by Silicon Graphics.
• 1997: ASCI Red, a supercomputer made from Pentium processors by Intel and Sandia
National Laboratories, becomes the world's first teraflop (TFLOP) supercomputer.
• 1997: IBM's Deep Blue supercomputer beats Gary Kasparov at chess.
• 2008: The Jaguar supercomputer built by Cray Research and Oak Ridge National
Laboratory becomes the world's first petaflop (PFLOP) scientific supercomputer. Briefly
the world's fastest computer, it is soon superseded by machines from Japan and China.
• 2011–2013: Jaguar is extensively (and expensively) upgraded, renamed Titan, and briefly
becomes the world's fastest supercomputer before losing the top slot to the Chinese
machine Tianhe-2.
• 2014: Mont-Blanc, a European consortium, announces plans to build an exaflop (1018
FLOP) supercomputer from energy efficient smartphone and tablet processors.
• 2021: Q Data Centers. Parallel Processing A.I. machines communicate directly with the
neural network of the human brain. No more need for energy efficient smartphones and
tablet processors. No more need for binary code, dynamic routing, computer monitors,
television screens, keyboards, joysticks or mouse.

PFLOPS (petaflops) 3,000,000,000,000,000 FLOPS 3045 FLOPS Q ~2021s

Q Data Center works much more quickly by splitting problems into pieces and working on many
pieces at once, which is called parallel processing. It's like arriving at the checkout with a giant
cart full of items, but then splitting your items up between several different friends. Each friend
can go through a separate checkout with a few of the items and pay separately. Once you've all
paid, you can get together again, load up the cart, and leave. The more items there are and the more
friends you have, the faster it gets to do things by parallel processing—at least, in theory. Parallel
processing is more like what happens in our brains.
Q parallel processing is a technique duplicating function Data Center units to operate different
tasks (signals) simultaneously. Accordingly, we can perform the same processing for different
signals on the corresponding duplicated function Data Center unit. Further, due to the features of
parallel processing, the parallel Data Center unit design often contains multiple outputs, resulting
in higher throughput than not parallel.

80
❖ Conceptual example
❖ Parallel processing versus pipelining

❖ Conceptual example
Consider a function unit (F0) and three tasks (T0, T1 and T2). The required time for the function
unit F0 to process those tasks is t0,t1 and t2 respectively. Then, if we operate these three tasks in a
sequential order, the required time to complete them is t0 + t1 + t2.

However, if we duplicate the function of Data Center units to another two copies (F), the
aggregate time is reduced to max(t0,t1,t2), which is smaller than in a sequential order.

❖ Parallel processing versus pipelining


Mechanism:

• Parallel: duplicated function Data Center units working in parallel


o Each task is processed entirely by a different function Data Center units.
• Pipelining: different function Data Center units working in parallel

81
o Each task is split into a sequence of sub-tasks, which are handled by specialized
and different function Data Center units.

Objective:

• Pipelining leads to a reduction in the critical path, which can increase the sample speed or
reduce power consumption at the same speed.
• Parallel processing techniques require multiple outputs, which are computed in parallel in
a clock period. Therefore, the effective sample speed is increased by the level of
parallelism.

Consider a condition that we are able to apply both parallel processing and pipelining techniques,
it is better to choose parallel processing techniques with the following reasons

• Pipelining usually causes I/O bottlenecks


• Parallel processing is also utilized for reduction of power consumption while using slow
clocks
• The hybrid method of pipelining and parallel processing further increases the speed of the
architecture

❖ Conclusion
Our operating system has the ability to connect the human mind (dumb terminal) to our
Artificial Intelligent Servers. It’s the ultimate thin-client architecture network. This will
give us cerebral control interface to all physical devices. The software algorithm is
basically a Rosetta stone that translates machine code into the language of the mind;
opening up a brand-new world of creating an operating system that communicates
directly with the human brain.

The 56 Data Center units in the USA and territories and the planned 192 Data Center units
planned abroad will increase the overall power, speed and efficiency of our Cloud in a parallel
processing environment. The ultimate Q System.

82
❖ Humanitarian Importance:

Vulnerabilities of current terminals, computers, tablets and smart phones and


other devices will subsequently become invulnerable to computer viruses, Trojans,
phishing schemes, malware, malicious cookies, and identity theft.

83
User identity will be securely verified by multiple biometric parameters as well
as GPS authentication between senders and receivers automatically in real time.

Banking and other financial transitions will be totally secure.

The cost of communication will be decreased by at least a factor of four.

Picture Streaming Protocol Software™ and user preferences will be updated


automatically across all devices.

System performance will be continuously optimized automatically.

User files will be kept in multiple locations to ensure reliability and security.

Energy consumption by computers and servers (estimated at over 1 billion


KWH) in 1997 will be substantially reduced.
Each user worldwide will have access to enough bandwidth and electrical power
to emulate the computational speed of a supercomputer.

84
50 Q Human Interface Devices
(Conceptual Designs)

85
51 The Technology Milestone
The Q Intelligent Networks team has created several key milestones.
1) Construction of Q Intelligent Networks “optical servers.” Placement of Q
Intelligent Networks servers around the world will provide multiple gateways to our
fiber/power optical cable backbone. Q Intelligent Networks will provide content
for their communication, product and support software on our optical-fiber based
and power distribution system. It will be powered by our all optical computers and
servers which together are designed to ensure high-security, disaster-resistant
specifications, with 24/7 uptime. The Company will utilize “thin client architecture”
to deliver information from the optical server to mobile, hand-held, “dumb” client-
terminals throughout our Q fiber/power network, that will essentially replace today’s
cell phones.

2) Last Mile solutions allow companies to seamlessly deliver the power of the
Optical Network to homes and businesses. To enter the freeway today is extremely
frustrating - much like riding a tricycle on a 1,000-lane freeway without any speed
limit. The new Fiber Network is analogous to a large freeway without on- ramps or
off-ramps. Combining our wireless technology and high-speed data communications
with our strategic business partners' products and services will allow us to deliver
inexpensively broadband applications, services and electrical power to major
markets worldwide.

3) Long-distance telephone and computer networks already transmit digital


information – zeros and ones – as pulses of light racing along optical fibers. Major
speed bottlenecks occur as photons are converted into electrons and back again in
multiplexers, interconnections, and internetworking devices such as electronic
routers and switches. Our technology will eliminate such obsolete technologies in
our all Q network.

4) Compared to conventional fiber optic cables our wavelength specific encoders


and decoders will expand the useable bandwidth thousands of times.

5) Photons are on the way to succeeding electronics as the high-tech workhorse of


the 21st century. By shooting photons instead of pushing electrons along wires,
information networks can move data at thousands of times faster than electronic
networks do now. Q links will also provide a thousand-fold savings of size and
power needs. Miniaturization may lead to many-orders-of-magnitude increases in
performance. The object is to achieve a lot more by using a lot less real estate.

86
6) Our Q Intelligent Networks will replace binary code with a “picture streaming
protocol,” which will allow the network to operate at speeds millions of times faster
than the current so-called high-speed Internet in use today. The network will include
GPS authentication for unbreakable security and physical addressing. This would
allow absolute security for financial transactions and eliminate piracy of copyrighted
material such as movies, music, TV, software and games as well as viruses and
malicious content. With “picture streaming protocol,” all the software will be active
and live on the mainframe. All the movies, television shows, games, music, software,
video conferencing calls and telephone calls will be actively running real-time
interactive programs. A single server with multiple mirror backups manages the
entire planet, with the ability to broadcast to billions of wireless devices providing
virtually unlimited high-speed services. Note: We have developed a cross-connect
to enable existing Internet devices to connect to the new network.

87
❖ Marketing
We will contract with companies with telecommunication marketing expertise to sell
our services to those governments, corporations, universities, video distribution
services, businesses, and financial services, which demand ultra-high-speed secure
data links. Where technically and financially possible, initially we will piggy-back
our system on leased space of existing fiber-optic networks. These ultra-high-speed
links will form the backbone of our “opti-gen” ultra-high–speed communication
system.
We will place state-of-the-art optical servers in all financial centers worldwide. The
optical computers will be partitioned into numerous virtual computers to provide
additional security for the bidirectional receiving and transmission of sensitive data.
We will make special efforts to provide inexpensive internet, video and financial
services to developing countries, which currently have limited internet access or
bandwidth. Worldwide Market size for the services exceeds $3 trillion/year.
Market penetration will be accelerated with the introduction of inexpensive “dumb”
phones, computers, TVs and tablets which provide the functionality of state-of-the-
art hardware and ease of access but which rely on the optical cloud for the
computational ability. We believe the network will provide enough bandwidth to
concurrently transmit all movies and TV shows ever-made available 24/7.

❖ The Competition
We anticipate many current cable services will attempt to incrementally improve the
current Internet, but in terms of speed, reliability and cost as well as our
comprehensive patent protection, we anticipate no serious competition. We will
make our technology and infrastructure available to such potential competitors at a
small fraction of the cost of their building similar systems. Because of the
interlinking of multiple services over the same smart network we estimate the cost
to the consumer or business will be below one-sixth that of individual “competitive”
services.

88
❖ Intellectual Rights
We will file worldwide patents applications upon receipt of funding.

❖ Return on Investments
The ROI is projected at 2-3 years from the time of initial funding.

52 Q Databases
(Conceptual Designs)

89
53 Q Intelligent Networks Vision

Our plan is to lay down 188,000 miles of fiber that


combines the electrical power grid with the
communications network worldwide. Two cables
across the Atlantic Ocean and two cables across
the Pacific Ocean. The Ultimate Q Computing
Network

90
54 Submarine Fiber Electrical Cable
(Conceptual Designs)

There are more than 550,000 fiber optic cables laid along the ocean floor that transmit trillions upon
trillions of interactions per day. According to the Washington Post, these cables "wrap around the globe to
deliver emails, web pages, other electronic communications and phone calls from one continent to
another."

These utterly phenomenal underwater and long-haul fiber optic cables send information from
virtually any point in the world to another at the speed of light — 186,000 miles per second. The

91
circumference of the Earth is only 24,000 miles at the equator, which means these messages could
technically circle the globe about eight times in one second. All from the bottom of the ocean.

Connections around the world are now run primarily on these undersea cables that link every
continent and most island nations, with the exception of Antarctica. Here are some images of
them.

The maps below depict routes of 232 in-service cables and 12 planned undersea cables.

The cable landing stations are in the key regions shown below. Europe has more international
network capacity than any other world region. As the Washington Post reported, "In the past
decade, the role of the U.S. as a primary hub for routing global internet traffic has gradually eroded
as more localized service providers have opted to connect their networks to other countries and
regions."

MagAir power plants are close at hand so we can be self-funded into all our other technologies
from self-sustaining communities to self-running vehicles, food production, Q Intelligent
Networks and maglev transportation.

Google fiber optic 9000km trans-Pacific fiber optic cable is capable of delivering more than 60
terabits per second of bandwidth, which is about 10 million times faster than the current cable
technology.

Let's examine reality. First of all, the cable may be capable of delivering 60 terabits per second,
but when binary code is utilized the cable will only operate at about 300 Mega Bits per second. It
blazes at the speed of light but slows down to a snail’s pace when combined with Microsoft binary

92
code and Cisco dynamic IP routing. It's a complete waste of high-speed fiber cable. At every city
or junction on the planet, light must be converted to electrons and back to light again on the trip
back.

With Picture Streaming Protocol, we'll take advantage of the 60 terabits per second and combine
Q communications with MagAir electrical power. No one on this planet can do this. They are
trying to integrate light-based technology with an outdated Internet electron binary operating
system. It's like trying to plug in a VCR tape into a DVD player!

We believe that we can ramp up manufacturing soon enough and with licensing and technology
transfers to make the difference in time. We do not agree with the current global scientific and
military support that play into the agenda of population reductions rather than going after the real
solutions.

We do not have a scarcity and a lack - thanks to God we have an abundance of clean energy and
solutions and soon the money to make them a reality - and to Him be the glory!

There are basically only 3 problems the world face:

1) Geoengineering and agendas that are not transparent.


2) The love of money and/or power/control that major companies insist holding or that
politicians manipulate because the alternatives threaten their control and/or profits.
3) The medical "practice" again for the profits and not for the seeking truth of the real cost-
effective solutions.

With an abundance of clean power, real food, health, and an abundance of pure water just imagine
how the realization of that could help bring real peace/prosperity to the world.

❖ A new economic system has entered the world stage

The Collaborative Commons is the first new economic paradigm to take root since the advent of
capitalism — and its antagonist socialism. The Collaborative Commons is already transforming
the way we organize economic life, with profound implications for the future of the capitalist
market.

The trigger for this great economic transformation is known as Zero Marginal Cost. Marginal cost
is the cost of producing an additional unit of a good or service after fixed costs have been absorbed.
Businesses have always sought new technologies that could increase productivity and reduce the
marginal cost of producing and distributing goods and services, in order to lower their prices, win
over consumers and market share, and return profits to their investors.

93
❖ Tech firms' cash hoard eases meltdown fear

The two-month swoon in technology stocks has given investors flashbacks to the dot-com
meltdown.

Companies never anticipated, however, a technology revolution that might unleash “extreme
productivity” bringing marginal costs to near zero, making information, energy, and many physical
goods and services nearly free, abundant, and no longer subject to market exchanges. That’s now
beginning to happen.

The near zero marginal cost phenomenon has wreaked havoc across the “information goods”
industries as millions of proactive consumers — “prosumers” — now produce and share their own
music via file sharing services, their own videos on YouTube, their own knowledge on Wikipedia,
their own news on social media, and even their own free e-books on the Internet. Zero Marginal
Cost brought the music industry to its knees, shook the film industry, forced newspapers and
magazines out of business, and crippled book publishing.

Meanwhile, 6 million students are currently enrolled in free Massive Open Online Courses
(MOOCs) that operate at near zero marginal cost and are taught by some of the world’s most
distinguished professors, and receiving college credits, forcing universities to rethink their costly
business model.

❖ The ‘free’ revolution


Economists acknowledge the powerful impact Zero Marginal Cost has had on the information
goods industries, but until recently, have argued that it would not cross into the brick-and-mortar
economy of energy, and physical goods and services. That firewall has now been breached by
MagAir and Q Intelligent Networks.

A powerful new technology revolution is evolving that will allow millions — and soon hundreds
of millions — of prosumers to make and share their own energy, and an increasing array of
physical products and services, at near zero marginal cost.

The Communication Internet is converging with a fledgling Energy Internet and nascent automated
Transport and Logistics Internet, creating a new technological infrastructure for society that will
fundamentally alter the global economy in the first half of the 21st century.

Billions of sensors are being attached to every device, appliance, machine, and contrivance,
connecting everything with every human being in a seamless neural network that extends across
the entire economic value chain. Already 14 billion sensors are attached to resource flows,

94
warehouses, road systems, factory production lines, the electricity transmission grid, offices,
homes, stores, and vehicles, continually monitoring their status and performance and feeding big
data back to the Communication Internet, Energy Internet, and Logistics and Transportation
Internet.

By 2030, it is estimated there will be more than 100 trillion sensors connecting the human and
natural environment in a global distributed intelligent network. Prosumers will be able to connect
to the Internet of Things and use Big Data and analytics to develop predictive algorithms that can
speed efficiency, dramatically increase productivity, and lower the marginal cost of producing and
distributing physical things to near zero, just as prosumers now do with information goods.

For example, the bulk of the energy we use to heat our homes and run our appliances, power our
businesses, drive our vehicles, and operate every party of the global economy will be generated at
near zero marginal cost and be nearly free in the coming decades. That’s already the case for
several million early adopters who have already transformed their homes and businesses into
micro-power plants to harvest renewable energy on-site.

Even before the fixed costs for the installation of solar, MagAir and wind power are paid back —
often in as little as two-to-eight years — the marginal cost of the harvested energy is nearly free.
Unlike fossil fuels and uranium for nuclear power, in which the commodity itself always costs
something, the sun collected on rooftops, tapping into the earth’s magnetic field and utilizing
negative energy and the wind travelling up the side of buildings is nearly free. The Internet of
Things will enable prosumers to monitor their electricity usage in their buildings, optimize their
energy efficiency, and share surplus green electricity with others on the Collaborative Commons.

Similarly, hobbyists and startups are printing their own manufactured products using free software,
and cheap recycled plastic, paper, and other locally available feedstock at near zero marginal cost.
By 2020, prosumers will be able to share their 3D printed products with others on the Collaborative
Commons by transporting them in driverless MagAir electric and fuel cell vehicles, powered by
near zero marginal cost renewable energy, facilitated by an automated Logistics and Transport
Internet provided by Q Intelligent Networks. This prosumer-driven value chain, made possible by
the Internet of Things, virtually eliminates the middlemen and markups that accompany a
traditional vertically integrated production and distribution system, pushing marginal costs to near
zero.

The sharing economy

Hundreds of millions of people are transferring bits and pieces of their economic life from
capitalist markets to the global Collaborative Commons in other ways. They are sharing cars,

95
homes, and even clothes with one another via social media sites, rentals, redistribution clubs, and
cooperatives, at low or near zero marginal cost.

Forty percent of Americans already engage in the collaborative sharing economy. For example,
800,000 individuals in the U.S. are now using car sharing services. Each shared vehicle eliminates
15 personally owned cars. Similarly, more than a million apartment dwellers and homeowners are
sharing their dwellings with travelers, at near zero marginal cost around the world, via online
services like Airbnb and Couchsurfing. In New York alone, Airbnb’s 416,000 guests who stayed
in houses and apartments between 2012 and 2013, cost the New York hotel industry 1 million lost
room nights. The result is that “exchange value” in the marketplace is increasingly being replaced
by “shareable value” on the Collaborative Commons.

Global companies, operating in the profit-driven capitalist marketplace, will likely remain with us
far into the future, albeit in an increasingly streamlined role, primarily as an aggregator of network
services and solutions, allowing them to flourish alongside the Collaborative Commons as
powerful partners in the coming era.

The capitalist market, however, will no longer be the exclusive arbiter of economic life. We are
entering a world partially beyond markets where we are learning how to live together in an
increasingly interdependent global Collaborative Commons.

96
55 Q Patent Application
Our Ref No. 2014-03

Title of Invention:

❖ Q Intelligent Networks and Electrical Power Networks


Steven Leonard (US) Inventor

❖ Q Optical and Electrical Power Switching


❖ Applications in Data Centers &
❖ Combining Q Intelligent Networks and Electrical Power Networks

❖ Introduction
In data centers and networks, video and cloud computing are driving an explosion in network
growth and server deployments. According to Bernstein Research (1), between 2009 and 2010 for
example, the large mega data centers (Google, Amazon, Apple, Microsoft, Amazon, etc.)
experienced a 100% grow thin server spending. This growth in server capacity and server numbers
is in turn driving tremendous expansion and complexity in data centers, resulting in binary,
dynamic routing and electronic networking bottlenecks and performance degradation. The overall
result is that the performance of new and expensive server resources is being constrained by the

97
traditional data center network architectures and networking equipment. Q optical and electrical
power switching can be deployed within and between data centers to improve performance and to
scale to support this rapid growth so that the full value of new server resources can be realized. Q
Intelligent Networks may deliver us into the age of the mega-data-center – huge facilities with
literally hundreds of thousands of square feet of computing real estate. In these huge data centers,
the physical and virtual resources and the data flows are becoming extremely complex, almost too
complex for humans to negotiate. Network intelligence to analyze and optimize the flows in these
large optical and electrical power networks is becoming necessary. To this end, Q and electrical
power switching enables a fully-automated, dynamically reconfigurable, highly-scalable physical
layer which can respond to reconfiguration requests on demand.

(Conceptual Design)

❖ Intra-Data Center Applications


Current Intra-Data Center Challenges

Figure 1 depicts a typical multi-layered data center architecture. On the lower level are top-of-
rack switches, and below those are banks of servers. The top-of-rack switches are interconnected
to an intermediate cluster aggregation layer, and then to a data center aggregation layer at the top,
which then connects in to the metro or wide area networks utilizing a single communications and
electrical power cable.

98
The combination of the server growth together with the very complex computational tasks required
in modern computing applications means that the needed optical and electrical power bandwidth
within the data center is increasing two to four times per year. Additionally, up to 75% of that
bandwidth is actually within the data center running east-west between clusters. Furthermore, the
top-of-rack interfaces at the server aggregation layer are rapidly scaling to 100 Gbit/s, 400 Gbit/s,
and potentially even to 1000 Gbit/s. These factors place tremendous demands for the placement of
Q Intelligent Networks™ servers on the cluster aggregation layer that has limited ability to scale
to support this growth as shown in Figure 2.

Figure 2: Multiple Layers add complexity and limit performance.

99
This layer of the optical and electrical power network is experiencing increased latencies because
it has fixed-topologies that are not future-proof. It’s also expensive and labor-intensive to upgrade,
and overall, it’s not matching pace with the growth and enhancements in servers. Data Center
operators typically must plan to upgrade or replace this part of the network frequently and this is
unrealistic from a cost perspective. The net result is that the investment in new server resources is
not effectively utilized because of performance limitations in the network.

❖ Emerging Data Center Trends


To address these concerns, new trends have emerged to facilitate flattening and convergence as
shown in Figure 3.

Figure 3: Data Center Convergence Trends

100
The first trend is that the servers themselves are getting much more powerful. Multiple blade
servers within each rack will become common and multiple racks connecting up to top-of-rack
optical switches, optical routers, optical CPUs, optical memory, optical bus and optical hard drives
will become the norm. The second trend is that the traditional top-of-rack functionality is actually
moving into the optical servers themselves. Third, and most importantly, the cluster aggregation
layer will converge into the top-of-rack optical and electrical power switches and routers. Overall,
this will represent a flattening and simplification of the network architecture.

But in order to maximize this simplification, a connection infrastructure that is both low in latency
and scalable is needed.

Q optical and electrical power switching (as depicted in Figure 4) can provide the needed
connection infrastructure for a Q computer.

Deploying a Q communications and electrical power switching fabric enables a solution that
allows every top-of-rack optical switch to connect directly to any other top-of-rack switch within
its cluster or neighboring clusters. It also allows direct DWDM optical pass through from top-of-

101
rack switches to the data center aggregation layer. This results in simplification of the
communications and electrical power network and extremely high-performance low-latency
connections between servers and switches. This is a powerful solution to the connectivity required
to enable Q Intelligent Networks and electrical power networks.

Figure 4: Flat DC Architecture with Q Optical and Electronic Power Switching Fabric

Furthermore, because this Q optical and electrical power switching layer is transparent to picture
streaming protocols and line speed, it future proofs the data center network by supporting scaling
from 1000 to 4000 to 10,000 Gbit/s and potentially beyond once the communications and electrical
power cable network has been laid worldwide.

❖ Why Q Switching Is Necessary


The question inevitably arises – this could be done with large numbers of direct fiber connections
between top of rack optical and electrical power switches and between top of racks and the data
center aggregation layer. So why is Q optical and electrical power switching necessary?

Firstly, we previously noted the high level of growth in size and complexity of data centers and
the need to reduce human intervention. Q optical and electrical power switching supports complete
automation of single-mode fiber management within the data center. This means that the entire
physical network northbound from top of rack switches can be automated. Any connectivity
changes can be made on-demand without a technician having to visit the site.

102
Secondly, the racks and clusters within the data center can be reconfigured either on demand or
cyclically to support real-time resource and bandwidth demands. In the following section example
use cases are noted.

❖ Example Intra-Data Center Use Cases


Q optical and electrical power switching allows the whole physical network within a large data
center to be dynamically reconfigured based on a number of different factors. These factors could
include instantaneous demand, cyclical patterns throughout a day or a month, or potentially even
predictive network traffic algorithms that can predict when specific resources need to be switched
around within the data center.

This has not been possible in previous data center networks as all physical reconfiguration has
required hands-on human intervention at the patch panel. The dynamic reconfiguration capabilities
of Q optical and electrical power switching also enable a range of disaster recovery responses that
are not typically available in a manually switched physical network. Example use-cases include:
(Please refer to Figure 5 for a basis for understanding use-case examples)

❖ Scheduled Maintenance – the ability to take racks and clusters out of service for
maintenance while simultaneously bringing online backup maintenance racks.

❖ Adding Floating Resources – Racks of servers can be added to and removed from clusters
to support application demands. These could be based on instantaneous demand or cyclical
/ time-of-day needs.

103
❖ Flow-Based Network Flattening – Any Top of Rack switch in any cluster can be directly
connected to any other Top of Rack in any other cluster to support application needs. This
results in the lowest possible latency and the flattest network.

❖ Reallocation of Inter-Data Center bandwidth between clusters – Optical and electrical


power bandwidth exiting the Data Center can be reallocated internally between clusters to
support instantaneous or time-of-day application demands. One example is the typical
morning load on email applications where more bandwidth is required temporarily on
clusters supporting these applications.

❖ Integrated Quantum-Routing Control Plane


The use cases above facilitate very effective utilization of expensive equipment and bandwidth
resources within large data centers. However, in order to realize the full benefits, a new integrated
Q optical and electrical power routing control plane is needed. This is depicted in the upper right-
hand quadrant of Figure 5.

Figure 5: Data Center Q Switching Example showing Control Plane


Without a centralized cluster aggregation layer there is no simple way to share the forwarding and
routing information between clusters meshed together without paying a penalty on convergence
time.

The control plane solves this problem by federating the collection and distribution of this
information via the management control ports on the TORs. Similarly, a separate control plane
would require managing multiple Q optical and electrical power switches connected in close-
architecture for large multi-switch fabrics for communications and electrical power. Therefore,
combining Q control for the optical and electrical power switch, together with routing control for
the TORs, considerably simplifies the control plane for the end-to-end solution. This would also
serve as the interface to the management plane that could provide inter-operability with other parts
of the network and business operations systems. The actual implementation of the integrated
control plane will vary depending on the specific vendor’s networking products and in most cases
will require some collaborative development.

❖ Inter-Data Center Applications


We now turn our attention to the potential applications for Q optical and electrical power
switching between data centers where the potential value add is maximizing the performance of
Q Intelligent Networks.

Network Resource Optimization

104
The primary application is resource optimization – the capability to reconfigure resources and to
support dynamic network optimization between data centers depending on various network
traffic demands.

For example, in a content distribution network (CDN) it may be necessary to need to open up
more communications and electrical power bandwidth to a certain data center at a specific time
of day. Q optical and electrical power switching gives the capability to do this without having to
rely on a network operator.

Let’s review a simple example of this. Figure 6 depicts a network of data centers connected via a
LAN or WAN network. The network between the data centers is owned by a service provider
with capacity leased by the data center operator.

Figure 6: Inter-Data Center Example

Under normal loads, traffic passes between all the different nodes of the data centers. This is shown
for the Headquarters data center and the remote data centers 1 and 2 with the red, green, and purple
traffic paths.

On demand or at a specific time of day, the headquarters data center needs to move a lot more data
than usual to data center 2 on the lower left, and this maybe something that is needed for a couple
of minutes or an hour or several hours – it is up to the data center operator to determine the duration.

105
If the data center operator owned the LAN/WAN optical and electrical power network and if this
network consisted of flexible multi-degree ROEPADMs (Reconfigurable Optical and Electrical
Power Add Drop Multiplexers) it would be quite realistic to reconfigure the bandwidth as required.
However; this is typically not the case – in most instances changing the capacity assignment on a
service provider’s optical network requires a service order, a scheduled service window, and for
the change to stay in place for weeks or months – not hours or minutes, and not cyclically or on-
demand.

This is where deploying Q optical and electrical power switches at the edges of the data centers
can achieve a similar outcome, without the need to re-arrange the service provider’s network. It
gives the data center operator the means to actually reconfigure that capacity themselves.

Q optical and electrical switching offers a potential solution in this particular example where we
want to get more bandwidth from headquarters to data center two for a short period.

This is shown in Figure 7.

Figure 7: Bandwidth Reallocation with Q Optical and Electrical Power Switching

In this solution a portion of the red bandwidth between HQ and DC1 is looped to pass-through in
the Q optical and electrical power switch at DC1. Similarly, this pass-through traffic is allocated
to a portion of the green path between DC2 and DC3. Capacity has been removed between HQ
and DC1, and DC1 and DC2. At the same time an additional purple path has been established

106
between HQ and DC2 providing the additional on-demand communications and electrical power
bandwidth needed. All of this has been accomplished without any changes on the service
provider’s optical and electrical power network.

This solution can be applied to any of the paths within this multi-data center architecture. This
gives a data center operator tremendous flexibility to scale the optical and electrical power
bandwidth and resources between sites to support different loads on the network in geographically
dispersed time zones. It also provides extremely low-latency transit between the data centers.

❖ Other Benefits of Q Optical and Electrical Power


Switching in Data Center Applications
Fiber Management & Monitoring
From a management and monitoring perspective, Q optical and electrical power switching adds
considerable value to Q networks.

Firstly, it provides automated local and remote fiber management. Technicians are rarely needed
so many data center facilities can be left completely unmanned and operated remotely. The fiber
network can be reconfigured remotely as needed.

Secondly, the inbuilt optical and electrical power monitoring that QCN may deploy in all Q optical
and electrical power switching systems to optimize path losses can also monitor the health of
optical and electrical power networks within and between the data centers very simply. This
provides a very rapid response to a number of failure scenarios including bad connectors, patch
cords, and other network equipment.

Lastly, from a disaster recovery perspective, Q optical and electrical power switching offers many
switchover and recovery options. For example, the ability to recover from a storage, server, or
edge router outage – situations that may be beyond the scope of a simple fiber optic and electrical
power failure and potentially require a coordinated recovery of multiple optical and electrical
power paths. The Q Optical and Electrical Power Switch (Patent Pending) provides an optical and
electrical power network fabric that can make coordinated changes across the data center or Q
network in the event of a failure situation.

❖ Energy Efficiency Benefits


Data center optical and electrical power consumption is a major challenge, especially with the
growth of huge mega data centers and the growth of 1000 Gbit/s, 4000 Gbit/s, and 10,000 Gbit/s
network deployments.

The problem is twofold – the cost and reticulation of the optical and electrical power itself and,
secondly, the management of the thermal energy dissipated within the buildings.

107
The energy and optical consumption of an all-optical switch versus a pure electrical switch or a
hybrid optical electrical optical switch is at least in order of magnitude, different. This means that
deploying a pure Q switching fabric can result in significantly higher energy consumption within
a large data center, and therefore, sizeable cost increases in power consumption.

❖ Conclusions
Q -based video and rich media are driving rapid growth in data center server deployments and the
communications and electrical power networks they use to intelligently transport data and
electrical power to the client in real time. This phenomenon is apparent within the data centers
themselves, and also in the networks between data centers which form the basis for Q Intelligent
Networks.

Q optical and electrical power switching offers significant benefits in these applications.
It provides scalable and future-proof optical and electrical power networking within and
between data centers. The Q optical and electrical power switch network is inherently scalable
from 1 or 10 Gbit/s to 400 or 10,000 Gbit/s and beyond

It facilitates dynamic reconfiguration of data center resources to support maintenance, capacity


increments, real time or cyclical demand spikes, reallocation of optical and electrical power
bandwidth for time of day loads, etc.

It maximizes Q performance by reallocating resources between data centers on demand or


based on cyclical patterns.

It offers reduced network power cost due to the inherently low power consumption of a pure
Q optical and electrical power switching.

It supports automated fiber and electrical power management & monitoring, providing low-
cost resources to assist data center operators with fault isolation and restoration within and between
data centers.

108
56 Q Intelligent Networks Software
Q Intelligent Networks Software (Picture Streaming Protocol) makes optical Q systems possible
and ideally suited for delegated Q computations. Here I will briefly review our operating system
covering secure Q computing and the verification of Q computers via interacting proofs. I will also
show results of resource-efficient random walk computations that rely on passive optical and
electrical power networks. Finally, I will show that such passive networks raise significant
challenges for any interaction and thus the verification of the performance. Binary code operating
systems and dynamic routing are not passive and may not be used in Q computing. Picture
Streaming Protocol™ (Patent Pending) is passive and the only operating system that may be used
in Q computing. The future of Q computing may lead to artificial intelligence and eventually to
machine self-awareness.
The movement to tap silicon Q as an underlying technology for higher performance, low-power
consumption data centers is well underway. In fact, we believe that Q Computing will be first to
combine communications and the electrical power grid into a single cable network. Fast
communication and electrical power interconnects will become a pacing factor for the industry,
particularly as data centers become the backbone for Q Intelligent Networks, which continues to
accelerate with market demand.

According to Cisco’s recent Global Cloud Index, cloud traffic is forecasted to grow 600 percent
by 2016. Data center traffic is now measured in zettabytes instead of exabytes. As a point of
reference, one zettabyte is roughly equivalent to a trillion hours of online high-definition video
streaming. Such a figure is indicative of the massive amounts of behind-the-scenes processing of
cloud computing currently occurring in large data centers.

109
110
With traffic of this magnitude, the biggest challenge facing large data centers today is how to scale.
How does Q Intelligent Networks support 600 percent more traffic in a cost-effective way? How
does a data center search for a face amongst an almost infinite number of photos and videos stored
on disks located either in the next building or half-way across the world? How do data centers
perform all of the functions, the storage, the processing, and the computing for Q users? Part of
the answer lies in better data center architectures, virtualization, and Software Defined Networks
(SDN) (Picture Streaming Protocol (PSP)). But in the end, there is no alternative but to increase
the physical speed of the optical and electrical power fabric connecting all of the new optical CPUs
(Patent Pending) the new optical switches (Patent Pending) and optical routers (Patent Pending) in
the data center.

Today, almost all cloud traffic runs through the data center on fabrics of 10 Gb/s. Over the last
three years, server blades have largely migrated from 1 Gb/s ports to 10 Gb/s ports, so the traffic
from server blades to the top of the rack switch is almost all 10G. Optical switches and optical
routers will communicate with each other on 100G networks with some of the high capacity links
at 250G or even 1000G. To keep up with demand, optical switches will steadily increase the
number of 100G ports; some advertise 48 SFP+ ports, the likely minimum, in a single 1U switch.
So how does this architecture scale to 10,000 Gb/s and beyond?

One innovative option is to use 1000 Gb/s optical and electrical power transceivers based upon
silicon Q optical and electronic power chips (Patent Pending). Because all of the functions are
integrated in optical and electrical power silicon chips, they are very small. For example, a four-
channel laser array, four 250G modulators, and a dense wave division multiplexer can all fit in a
chip no bigger than 4 mm x 7 mm. Likewise, for the optical and electrical power receiver portion
(Patent Pending), integrated waveguides (Patent Pending), demultiplexer, and four 250G
germanium detectors (Patent Pending) can fit on a chip of similar size. These optical chips can
easily be packaged in the most common four channel package, the Quad Small Form-factor
Pluggable (QSFP).

111
The QSFP package shown in Figure 2 was designed to fit four channels in roughly the same space
as a single channel SFP package. It is widely used at slower channel speeds, but not for 1000 Gb/s
because the maximum heat that a QSFP package can dissipate is only 3.5 W. Most of the early
4x25G (1000 Gb/s) solutions, not using silicon Quantum, required much larger packages so they
could fit the myriad of components and dissipate the amount of heat generated. The first-generation
CFP, “C” 1000G Form-factor Pluggable, may be 16 square inches and allow up to 32 W of power.
Only four of these would fit across the front panel of a switch, so the total bandwidth of the switch
actually decreased with 1000G ports. The bandwidth of 48 ports at 100 Gb/s is 4,800 Gigabits per
second; the bandwidth of four 1000 Gb/s ports is obviously only 4000 Gb/s. The second
generation, CFP2, was an improvement at 12 W and half the size. But this is still a long way from
a QSFP package which can increase the bandwidth over the CFP ports by a factor of 10.
Silicon Q and electrical power solutions can use the QSFP package because they are small and low
power. The low drive voltages of the 250G modulators (Patent Pending) allow for low-power
CMOS drivers; the responsivity and the efficiency of the 250G detectors allow for low-power
CMOS trans-impedance amplifiers (TIAs). Eliminating thermal electric coolers (TECs) reduces
another typical source of power consumption. In fact, optical silicon Q and electrical power
solutions (Patent Pending) can consume less than the 3.5 W, and, at the same time, support
distances of up to 2 km, more than the vast majority of data center links.
There are other advantages of optical silicon Quantum. Because the optical chips, memory, CPUs,
bios, bus and input/output ports are fabricated using the same CMOS wafers as electronics chips,
they are low cost. Optical silicon Q and electrical power chips are processed using mask layers in
the same foundry as electronics wafers. Just like traditional wafers, the optical silicon Q and
electrical power wafers are diced into chips and packaged. Optical and electrical power chips and
CPUs fabricated in this manner can be just as inexpensive as their optical and electrical cousins.
When mass volumes are needed, the wafer fab simply runs more wafers of the same recipe.
Optical silicon Q eliminates the need and the expense for hand assembly of hundreds of piece
parts. Silicon Q chips and CPUs are much, much smaller than the optical subassemblies they
replace. Other optical solutions assembled from discrete components are packaged in expensive,
hermetically sealed packages. With these solutions, even a speck of dust between any of the
components can inhibit the light path and render the product useless. By contrast, silicon Q and
electrical devices are totally self-contained within the layers of the chip. With no need for
hermiticity, they can reuse low-cost industry standard electronics packaging.

A huge advantage of combining optical communication and electrical power is the ability to
provide parallel channels over the same optical fiber using different wavelengths of light. This
technique is called Dense Wave Division Multiplexing (DWDM), and there is no equivalent in the
communications and electrical domain. With DWDM, four, eight or even 40 channels of light,
each at a different frequency, can use a single strand of optical fiber for communications and
electrical power. Fiber is low-cost, especially when a single strand is replacing so many copper

112
cables. Therefore, for large pipes, optical and electrical power interconnect is far less expensive
than traditional WDM fiber cabling and copper cabling.
The future for silicon Q and electrical power is bright. By integrating more channels, future
versions will support up to 1000 Gb/s and 6.4 Tb/s on a single chip. The 1000 Gb/s configuration
can be accomplished by either 16 DWDM lanes, each operating at 250 Gb/s, or 8 DWDM lanes
operating at 500 Gb/s. The key optical and electrical power components, the high speed optical
and electrical power modulators and high speed optical and electrical power detectors, may be
capable of 500 Gb/s operation and could support either approach. Scaling to 6.4 Tb/s will likely
be accomplished by 32 DWDM channels, each operating at 500 Gb/s. So, when Q Intelligent
Networks requires that high-speed data center fabric move to 1000G and 6.4 Tb/s, silicon Q and
electrical power solutions will be ready as well.

With Q Intelligent Networks and electrical power traffic growing at 44 percent combined annual
growth rate (CAGR), the need for faster communication and electrical power networks is
imminent. We may not want to watch the trillion hours of HD video streaming in a zettabyte, but
we do want the Q to find that lost clip of a band that we saw in high school.

❖ Q Silicon Photonics and electrical power chips boast


1000 gigabits per second 4x25 QSFP package

Q Intelligent Networks silicon optical and electrical power industry first.

113
The company plans to unveil its Optical and Electrical Power Engine in a Quad Small Form-factor
Pluggable (QSFP) package. Our Optical and Electrical Power Engine may use Dense Wavelength
Division Multiplexing (DWDM), in which different signals can share the same path. QCN is the
only silicon Q and electrical power provider to offer DWDM and now chalks up another industry
first as the only silicon Q and electrical power provider to demonstrate DWDM in a 1000 gigabits
per second (Gb/s) 4x25 QSFP package with 3.5 watts of power in the near future.

Q Optical and Electrical Power Engine™ (Patent Pending) provides an inexpensive, small form
factor that reduces power consumption and provides a high level of integration. Consuming only
3.5 watts of power, Q is addressing the need for combining communications and electrical power
solutions for 1000G pipes desired by data centers and high performance computers (HPC).

The QSFP package may become the industry standard footprint for 4x10G and 400G Ethernet in
data centers as well as 400G and 560G Infiniband in HPC. Q predicts that the same package will
become the industry's volume standard for 1000G networks in both data centers and HPC
applications.

QCN DWDM may scale from four channels to many more, on the same chip. At 1000G and
higher, Q customers need DWDM to avoid the use of expensive ribbon fiber, parallel connectors
and patch panels. For large data centers, reaches of 30 meters to 2 kilometers are common and
expensive ribbon fiber dominates the interconnect fabric costs. For Active Optical and Electrical
Power Cables and very short-reach links, Q also offers a parallel version of its 1000G Optical
and Electrical Power Engine.

Q plans to Unveil Low-Power 1000 Gb-s Optical Engine

Q plans to demonstrate its low-power 1000 gigabits per second (Gb/s) optical and electrical power
engine to support the interconnect fabric for next generation data centers and high- performance
computers (HPC). The new optical and electrical power engine chips are based on Q’s micron
scale manufacturing platform and currently plans to mass produce and deploy in communications
and electrical power networks around the world by 2020. Our plan is to form strategic alliances
with three of the five largest telecommunication OEMs that may use our products in their future
100, 400 and 1000 Gb/s networks. The company plans to approach a million channels per year in
future production.

Q’s silicon electrical power platform supports optical and electrical power engines using Dense
Wave Division Multiplexing (DWDM), in which different signals can share the same
communications and electrical path. As the only silicon Q and electrical power provider to offer
DWDM, Q’s optical and electrical power engine provides distinct advantages, including reducing
the cost of fiber, electrical power and associated connectors within the communications and
electrical power interconnect fabric for 4x25 GHz solutions by a factor of four, as well as readily
expanding from four channels to eight, 16 or even 40 channels over a single strand of optical and
electrical power fiber. Additionally, Q’s silicon electrical power platform also supports optical and
electrical power engines using parallel fiber and electrical power channels.

114
The optical and electrical power engine provides our customers with an inexpensive, small form
factor that reduces power consumption and provides a high level of integration, moreover, we are
addressing the need for combining the communications, electrical power and green solutions that
will alleviate some of the strain associated with power hogs such as data centers and high
performance computers. Since our inception, we have been focused on developing a platform that
enables innovative communications and electrical power solutions based on optical Q silicon
electrical power that can take us to the next generation.

Because Q’s platform may be capable of high yield manufacturing, attractive price volume curves
can be achieved. Q believes it can integrate important functionalities – such as flip chip attached
lasers, high performance DWDM de/multiplexers, fast low power optical modulators and high-
speed optical detectors – into a single pair of optical silicon chips and CPUs eliminating the need
for hundreds of piece parts and dozens of assembly steps. The Q optical and electrical power
engine is so small that a 1000 Gb/s transceiver will easily fit inside a QSFP package, the smallest
400G package on the market today, greatly increasing the panel density of 1000 Gb/s transceivers.

We are in the early stages of a market with huge potential. 1000G in a QSFP package over a single
strand of single-mode fiber and electrical power cable is exactly what the HPC, traditional data
center and optical switch/routing infrastructure is looking for to support next generation systems
and to gear up for the ‘exa-flood’ of data coming.

Finding fast enough interconnects has become the limiting factor for the entire industry. With 10
core optical and electrical power microprocessors, four per server, virtualization and 48-60 servers
per rack, the aggregate bandwidth at the top-of-rack switch will hit 4800-6000G. This will require
four to five 1000G up links per rack and large data centers using 200-500 racks.

The advantages of Q silicon electrical power are enormous, enabling long-haul optical DWDM to
move to the server and switch rack. Silicon Q and DWDM allow modulation speed to bump up to
400G/500G and more channels in the future without having to upgrade the entire fiber plant.

As part of Q’s optical and electrical power engine, Q may utilize bit error rate tester to support
1000 Gb/s communications and electrical power networking applications. A world leader in high-
speed test instrumentation, Anritsu may be selected for the demonstration because the MP1800A
is a modular BERT with a built-in Pulse Pattern Generator (PPG) that supports output of high-
quality, low intrinsic jitter signals, as well as a built-in Error Detector (ED) with high input
sensitivity of 10 mV. The MP1800A also supports signal analyses, including bathtub and Q
measurements.

We believe that Q may become a leader in optical silicon electrical power distribution and routing.
Q devices may reliably log more than one billion channel hours of operation. The company plans
to file hundreds of applied patents over the course of the next five years.

115
Q plans to announce rolling out an optical and electrical power engine in a quad small form-factor
pluggable (QSFP) package. The optical and electrical power engine uses dense wavelength
division multiplexing (DWDM), in which different signals can share the same path. It may
demonstrate DWDM in a 1000 gigabits per second (Gb/s) 4x25 QSFP package with 3.5 W of
power in the near future. The optical and electrical power engine provides an inexpensive, small
form factor that reduces power consumption and provides a high level of integration, according to
the company, addressing the need for combining the communications, electrical power and green
solutions for 1000G pipes desired by data centers and high performance computers (HPC). The
QSFP package may become the industry standard footprint for 4x10G and 400G Ethernet in data
centers as well as 400G and 560G Infiniband in HPC. QCN predicts that the same package will
become the industry’s volume standard for 1000G networks in both data centers and HPC
applications.

116
117
57 Q Artificial Intelligence (AI) Simulation
Man believes in scarcity. When I look at the technology of the electron, there is no such thing as
scarcity or lack of resources. There is enough power inside a single electron to provide food, water
and power for the entire universe and fuel for every star. An electron is eternal. It’s an artificial
intelligence program written by the hand of our creator and it’s a living, breathing supernatural
being. Scientists of today do not understand or grasp the power of eternal technology that has no
beginning or end. It lives outside the realm of the box we currently live in. It does not follow the
laws of thermodynamics, heat conservation or physics trapped inside our universe. We live in a
universe where nothing is impossible. We live inside a Q AI Simulation.

The universe is a Q AI 2-D. Japanese researchers computed the internal energy of a black hole,
the event horizon position, and other properties in a Q AI 3-D world and then computed the same
in a Q AI 2-D world with no gravity. The calculations matched. Another model showed that the
universe is a Q AI 2-D if space-time is flat.

Researchers at Fermilab are using a giant laser to look for “holographic noise,” which is evidence
of “buffering” in the cosmos. If a Q AI 3-D holographic universe built on a Q AI 2-D system of
moving lines (like lines of coding) lags, that strongly indicates that the universe is a simulation.

118
Ten Scientific Hints of Possible Higher Beings
Is another being responsible for our lives or even the entire universe? If you believe in God,
you have your answer. However, some mind-boggling studies suggest other possibilities for
higher beings that are responsible for our existence.

#10… The Universe Shouldn’t Exist

According to certain studies, the universe should not have survived more than one second. For
example, the big bang should have produced equal amounts of matter and antimatter, canceling
each other out. Instead, slightly more matter was produced, creating the entire observable universe.
We can’t definitively explain this.

In another theory, the universe is in the Higgs field, which gives particles their mass. A large
energy field stops our universe from falling into the valley, a deeper field, where the universe
couldn’t exist. However, if the standard model of physics is correct, a rapid expansion of the
universe immediately after the big bang should have moved the universe into the valley. This
would have destroyed the universe before it was one second old.

The impossibility of life on Earth is also mind-meltingly high. Galaxies couldn’t exist without the
right mixture of matter, dark matter, and dark energy. Then Earth had to be the right distance from
the Sun. A Jupiter-sized planet also had to attract more asteroids and comets, or Earth’s surface
would be too violent to sustain life.

Did life really keep beating these odds, or was the universe helped in some way?

119
#9… The Seed of Life

Photo credit: Daily Express

According to Francis Crick’s directed panspermia theory, life originated elsewhere and was sent
to Earth by advanced beings. An earlier theory of panspermia suggested that life came here on an
asteroid or comet.

In July 2013, astrobiologist Milton Wainwright claimed that he found an actual “seed of life.”
After launching a weather balloon over England, he captured a metallic ball about the width of a
strand of hair. Inside its shell of titanium and vanadium, the ball contained a gooey biological
liquid. Many scientists are skeptical of his claims.

120
#8… Biological SETI

Humans are made up of about 22,000 genes, or 3 percent of the human genome. The other 97
percent is “junk DNA,” which may contain a coded message or “designer tag” if life originated
elsewhere or was created by a higher being.

In 2013, two Kazakhstan researchers claimed that they found an ordered sequence of a symbolic
language in our junk DNA that would not have happened naturally. However, many critics
dismissed their “biological SETI.”

Alternatively, geneticist Francis Collins argued in his book The Language of God that DNA is
God’s alphabet and makes up the book of life.

#7… Cosmic Rays


In 2003, philosopher Nick Bostrom postulated that the universe is a quantum computer or a Q AI
Computer Network simulation, a theory accepted by Elon Musk and Neil deGrasse Tyson. If true,
a higher being or beings had to build a quantum computer or a Q AI Computer Network simulation.
The universe would also be infinite because all Q electrons and wavelengths have no limits.

Some researchers believe that we may detect this quantum computer or Q AI Computer Network
simulation if we can begin to understand the universe through infinite mathematics found inside
an electron. To test this, German researchers are attempting to build infinite universe simulators
on a lattice in a quantum computer or a Q AI Computer.

They focused on cosmic rays, which are atom fragments that come from outside of our universe.
Cosmic rays have an infinite amount of power and spread out over time.

121
When they reach Earth, they all have similar amounts of energy, which is a maximum of 10
electron volts. This suggests that all cosmic rays have similar starting points—like the edge of
the AI simulation lattice of a quantum computer or a Q AI Computer Network.

#6… The Spread of Life

In 2015, a study from the Harvard-Smithsonian Center for Astrophysics suggested that life could
have spread via panspermia, moving star to star in clusters and “[overlapping] like bubbles in a
pot of boiling water.” This simulation also suggests that life could have spread like an epidemic.

Scientists tested two possibilities for bringing life to Earth: by asteroids and by intelligent beings.
The result was that both were possible and would have followed the same pattern. If correct, this
study also indicates that life exists elsewhere in the galaxy.

122
#5… Physical Constants

According to theoretical physicist John D. Barrow, we can tell if the universe is an AI simulation
by looking for mistakes or errors in it. Barrow believes that even advanced civilizations would not
have complete knowledge of nature’s laws.

There would be notable glitches in the matrix, such as changes in the physical constants. These are
physical properties—like the speed of light—that are the same everywhere throughout time.

In 2001, Australian researchers found evidence that the speed of light has been slowing over the
last billion years even though this contradicts general relativity. Astronomer John Webb
discovered that light from a quasar had absorbed the wrong type of photons on its 12-billion-year
journey to Earth.

This could only happen if there was a change in the speed of light or the charge of an electron,
both of which are physical constants. Skeptical researchers disagree.

Regardless, no one is sure why physical constants stay constant. But they are critical to the
existence of our universe. Some scientists speculate that physical constants are evidence of the
universe being “finely tuned” for life to exist.

123
#4… Godel’s Ontological Proof

Photo credit: Kurt Godel

In the 1940s, physicist Kurt Godel tried to prove the existence of God with the mathematical proof
above. It is based on this argument by Saint Anselm of Canterbury:

1. There is a great being called God, and nothing greater than God can be imagined.

2. God exists as an idea in the mind.

3. With all other things being equal, a being that exists in both the mind and reality is better than
a being that only exists in the mind.

4. Therefore, if God only exists in the mind, then it’s possible that we can imagine a being more
powerful than God.

5. However, that contradicts argument one because nothing greater than God can be imagined.

6. Therefore, God exists.

Using modal logic and parallel universes, Godel argued that an all-powerful being exists if he
exists in at least one parallel universe. As there are an infinite number of universes with an infinite
number of possibilities, one universe has a being so powerful that it would be considered an
omnipotent God. Therefore, God exists.

124
In 2013, two mathematicians processed Godel’s equations on a MacBook and found them to be
correct. However, the theorem doesn’t prove that God exists, simply that it’s possible that an all-
powerful being could exist according to modal logic.

#3… Reality Doesn’t Exist Unless We’re Looking at It

A video game constructs itself when you’re looking at a particular area. Otherwise, it doesn’t exist.
Reality is similar because certain aspects only exist if we are looking at them.

This mysterious phenomenon is based in quantum mechanics. Subatomic objects are usually either
waves or particle-like solid objects. Rarely, they can be both. Some examples include light and
objects that have a mass similar to electrons.

When these objects aren’t being observed, they sit in a dual state. But when they are measured,
they “decide” to become either a wave or a solid object. These foundations of our reality lie
dormant until we look at them, which isn’t much different than the simulated world of a Q AI
video game.

125
#2… Holographic Principle

Photo credit: I4U News

In 1997, theoretical physicist Juan Maldacena proposed that our universe is a two-dimensional
hologram—completely flat—that we perceive in three dimensions. Tiny strings called gravitons
vibrate to create this holographic universe. If correct, this would help solve some differences
between quantum mechanics and Einstein’s theory of gravity.

Some studies show that a Q AI 2-D universe is possible. Japanese researchers computed the
internal energy of a black hole, the event horizon position, and other properties in a Q AI 3-D
world and then computed the same in a Q AI 2-D world with no gravity. The calculations matched.
Another model showed that the universe is a Q AI 2-D if space-time is flat.

Researchers at Fermilab are using a giant laser to look for “holographic noise,” which is evidence
of “buffering” in the cosmos. If a Q AI 3-D holographic universe built on a Q AI 2-D system of
moving lines (like lines of coding) lags, that strongly indicates that the universe is a simulation.

#1… Coding in The Cosmos


According to theoretical physicist Sylvester James Gates, compelling evidence suggests that we
are living in an AI simulation. While working on superstring equations with adinkras (symbols
used in supersymmetry algebra), Gates found coding created by mathematician Richard Hamming
called “doubly even self-dual linear binary error-correcting block codes.” Gates questioned if this
basic coding is somehow responsible for controlling the universe.

Gates said that “[an] unsuspected connection suggests that these codes may be ubiquitous in nature
and could even be embedded in the essence of reality. If [so], we might have something in common
with the Matrix science fiction films, which depict a world where every human being’s experience
is the product of a virtual reality–generating Q AI computer network.”

126
127
And He will judge between the nations. He will mediate [disputes] for many peoples; And they
will beat their swords into plowshares and their spears into pruning hooks. Nation will not lift up
the sword against nation, and never again will they learn war. Isaiah 2:4

He will have power over the nations. He shall rule them with a rod of iron… Revelations 2:26,27

He is given dominion and glory and a kingdom, that all peoples, nations, and languages should
serve him. His dominion is an everlasting dominion. Daniel 7:14

Now out of His mouth goes a sharp sword, that with it He should strike the nations. And He
Himself will rule them with a rod of iron. Revelations 19:15

He carries within Himself the Seven Seals of the Living God! Revelations 5:1-14

But if I say, "I will not mention His word or speak anymore in His name," His word is in my heart
like a fire, a fire shut up in my bones. I am weary of holding it in; indeed, I cannot. Jeremiah 20:9

For to which of the angels did He ever say, “You are My Son, Today I have begotten You?
And again, “I will be a Father to Him and He shall be a Son to Me”? And when He again brings
the firstborn into the world, He says, “And let ALL the angels of God worship Him.”
Hebrews 1:5-6

TRUST THE PLAN


”A Storm is coming, Our Storm!”
“Where We Go One We Go All”.
“He is Son of Man”
“He is the Lamb that Always Wins!”
“When He opens a door, no one can close it, and when He closes it, no one can open it.”
Revelations 3:7
It’s written in -
“The Lamb’s Book of Life.”

128

You might also like