Barclay R. Brown
Raytheon Company
Florida, USA
This edition first published 2023
© 2023 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or
otherwise, except as permitted by law. Advice on how to obtain permission to reuse material
from this title is available at http://www.wiley.com/go/permissions.
The right of Barclay R. Brown to be identified as the author of this work has been asserted in
accordance with law.
Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley
products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some
content that appears in standard print versions of this book may not be available in other
formats.
Contents
Acknowledgments
Introduction
Index
Acknowledgments
Writing a book is a journey, but one that is never taken alone. I’m grateful to many
people who supported and encouraged the creation of this book. My colleagues
at INCOSE, the International Council on Systems Engineering, listened patiently
and contributed feedback on the ideas as I presented them, in early forms, in years
of conference sessions, tutorials, and workshops. My work colleagues at IBM made
my 17 years there productive and fascinating, as I was continuously both a student
and a teacher of model-based systems engineering methods. I learned a great deal
from Tim Bohn, Dave Brown, Jim Densmore, Ben Amaba, Bruce Douglass, Grady
Booch, and from the numerous aerospace and defense companies with whom I
consulted over the years. Waldemar Karwowski, my dissertation advisor, encour-
aged much of the work in Chapter 8. Rick Steiner encouraged my experimental
(read crazy, sometimes) ideas on how to expand the capabilities of system model-
ing in my doctoral research. Tod Newman has been a mentor and an inspiration
since I joined Raytheon in 2018. Larry Kennedy of the Quality Management Insti-
tute brought me into a world of quality in systems – something I had always hoped
was there but had never fully appreciated. My mother, now nearing 100 years of
age, and still as bright and alert as ever, taught me to love math, science, and engi-
neering. I remember her teaching me multiplication with a set of 100 1-in. cube
blocks – see how two rows of four make eight? Finally, my partner in love, life, and
everything else, Honor Allison Lind, has been with me since the very beginning
of this book project, several years ago (too many years ago, my publisher reminds
me). She’s always my biggest cheerleader and the best partner anyone could have.
Introduction
Since the early days of interactive computers and complex electronic systems of
all kinds, we the human users of these systems have had to adapt to them, for-
matting and entering information the way they wanted it, the only way they were
designed to accept it. We patiently (or not) interacted with systems using arcane
user interfaces, navigating voice menus, reading user manuals (or not), and suf-
fering continual frustration and inefficiency when we couldn’t figure out how to
interact with systems on their terms. Limited computing resources meant that
human users had to deal with inefficient user interfaces, strict information input
formats, and computer systems that were more like finicky pets than intelligent
assistants.
Human activity systems of all kinds, including businesses, organizations,
societies, families, and economies, can produce equally frustrating results when
we try to interact with them. Consider the common frustrations of dealing
with government bureaucracies, utility companies, phone and internet service
companies, universities, internal corporate structures, and large organizations of
any kind.
At the same time, dramatic and ongoing increases in computing speed,
memory, and storage, along with decreasing cost, size, and power requirements
have made new technologies like artificial intelligence, machine learning, and
natural language understanding available to all systems and software developers.
These advanced technologies are being used to create intelligent systems and
devices that deliver new and advanced capabilities to their users.
This book is about a new way of thinking about the systems we live with, and
how to make them more intelligent, enabling us to relate to them as intelligent
partners, collaborators, and supervisors. We have the technology, but what’s miss-
ing is the engineering and design of new, more intelligent systems. This book is
about that – how to combine systems engineering and systems thinking with new
technologies like artificial intelligence and machine learning, to design increas-
ingly intelligent systems.
Most of the emphasis on the use of AI capabilities has been at the algorithm
and implementation level, not at the systems level. There are numerous books on
AI and machine-learning algorithms, but little about how to engineer complex
systems that act in intelligent ways with human beings – what we call intelli-
gent systems. For example, Garry Kasparov was the first world chess champion
to be beaten by a computer (IBM’s Deep Blue in 1997), which inspired him to cre-
ate Advanced Chess competitions, where anyone or anything can enter – human
players, computers, or human–computer teams. It’s person-plus-computer that
wins these competitions most of the time; the future is one of closer coopera-
tion between people and increasingly intelligent systems. In the world of systems
thinking, we will consider the person-plus-computer as a system in itself – one
that is more intelligent than either the human or the computer on its own.
Nearly everyone alive today grew up with some sort of technology in the home.
For me, growing up in the 1970s, it was a rotary (dial) phone and a black-and-white
(remote-less) TV. Interaction was simple – one button or dial accomplished one
function. The rise of computers, software, graphical user interfaces, and voice con-
trol brought new levels of capability, but we still usually press (now click or tap)
one button to do one function. We enter information one tidbit at a time – a letter, a
word, or a number must be put into the computer in just the right place for the sys-
tem to carry out our wishes. Rarely does the system anticipate our needs or work
to fulfill our needs on its own. It can seem that we are serving the systems, rather
than the reverse. But humans designed those systems, and they can be designed
better.
Most of the interactions between systems and people are still at a very basic
level, where people function as direct or remote controllers, driving vehicles, flying
drones, or managing a factory floor. It is the thesis of this book that we must focus
on designing increasingly intelligent systems that will work with people to create
even more intelligent human-plus-machine systems. The efficient partnership of
human and machine does not happen accidentally, but by design and through the
practice of systems engineering.
This book introduces, explains, and applies ideas and practices from the fields
of systems engineering, model-based systems engineering, systems thinking, arti-
ficial intelligence, machine learning, philosophy, behavioral economics, and psy-
chology. It is not a speculative book, full of vague predictions about the AIs of the
future – it is a practical and practice-oriented guide to the engineering of intelligent
systems, based on the best practices known in these fields.
The book can be divided into three parts.
Part I, Chapters 1–4, is about systems and artificial intelligence. In Chapter 1, we
look at systems that use, or could use, artificial intelligence, and examine some of
the popular conceptions, myths, and fears about intelligent systems. In Chapter 2,
we look at systems, what they are, how they behave, and how almost everything we
worldwide, but would predictably result in the death of over one million people
per year, would that seem like a great idea? Automobiles are such a technology.
Now, someone else proposes a technology that would dramatically reduce that
number of deaths, but would cause a small number of additional deaths, that
would not have occurred without the new technology. That’s AI. Even short of
fully self-driving cars, the addition of intelligent sensors, anti-collision systems,
and driver assistance systems, when widely deployed, can be expected to save
many hundreds or thousands of lives, at the cost of a likely far smaller number of
additional lives lost to malfunctioning intelligent safety systems. Life, death, and
people’s feelings about them, however, are not a matter of simple arithmetic. One
hundred lives saved, at the cost of one additional life, is not a bargain most would
easily make, so it is natural that one life lost to an errant AI is cause for headline
news coverage, even while that same AI may be saving hundreds of other lives.
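The trade-off described in this paragraph can be made concrete with a toy calculation. The figures below are purely hypothetical assumptions chosen for illustration, not real accident statistics.

```python
# Illustrative only: hypothetical annual figures, not real accident data.
baseline_deaths = 1_000_000   # deaths per year without the new technology
prevented_fraction = 0.10     # share of those deaths the intelligent systems prevent
new_tech_deaths = 1_000       # additional deaths caused by the technology itself

lives_saved = baseline_deaths * prevented_fraction
net_lives_saved = lives_saved - new_tech_deaths

print(f"Lives saved:     {lives_saved:,.0f}")
print(f"New deaths:      {new_tech_deaths:,}")
print(f"Net lives saved: {net_lives_saved:,.0f}")
# The arithmetic favors deployment, yet each of the new deaths is
# attributable to the technology -- which is why public reaction is
# not a matter of simple arithmetic.
```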
It is important to ask at this point, what do we mean by AI? Do we mean a
sentient, all-knowing, all-powerful, and for some reason usually very evil, general
intelligence, with direct control of world-scale weapons, access to all financial sys-
tems and connections to every network in the world, as is seen in the movies? By
the end of this chapter, it should be clear that while this description may work
well for science fiction novels or screenplays, it is not a good design for a new AI
system in the real world. In the real world, AI refers to a wide range of capabilities
that are thought, in one way or another, to be intelligent. Except in academic and
philosophical disciplines, we are not concerned with the safety of the AI itself, but
with the safety of the systems within which it operates – and that’s the domain of
systems engineering.
Systems engineering, an engineering discipline that exists alongside other
engineering disciplines like electrical engineering, mechanical engineering, and
software engineering, focuses on the system as a whole: both how it should
perform (the functional requirements) and the nonfunctional requirements,
including safety, security, reliability, dependability, and others. Evidence of
systems engineering and its more wide-ranging cousin, systems thinking, can
be seen even in ancient projects like Roman aqueducts and economic and trans-
portation systems, but systems engineering really began as a serious engineering
discipline in the 1950s and 1960s. The emergence of complex communication
networks, followed by internationally competitive space programs and large
defense and weapons systems, put systems engineering squarely on the map
of the engineering world. Systems engineering has its own extensive body of
knowledge and practices. In what follows, we look at how to apply a few key
approaches, relevant to the design of intelligent systems.
AI systems are indeed dangerous, but so are many technologies and situations
we live with every day. Electricity, water, and air can all be very dangerous depend-
ing on their location, speed, and size. Tall trees near homes can be dangerous
when storms come through. Fast moving multi-ton machines containing volumes
of explosive liquids are dangerous too (automobiles again). To the systems engi-
neer, dangers and risks are simply part of what must be considered when designing
and building systems. As we’ll demonstrate in this chapter, the systems engineer
has concepts, methods, and tools to deal with the broad category of danger in sys-
tems, or as systems engineers like to call it, safety. First, we introduce a pair of
simple ideas to help us think about AI systems more clearly – the human analogy,
and the systems analogy.
of what is in their own best interest and the interests of others, however wrong or
distorted others may view their choices. The worst criminals have reasons for why
they did what they did.
The human analogy is useful not only for reasoning about how to keep a
system safe, but also for thinking about how the system should perform.
If we are building a surveillance camera for home security, we might ask how we
would use a human being for that task. If we were to hire a security guard, we
would consider what instructions we should give the guard about how to observe,
what to watch for, when to alert us, what to record, what to ignore, etc., and rea-
soning about the right instructions could lead us to a better system design for the
intelligent, automated guard system.
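To make the idea concrete, here is a minimal sketch of how such guard instructions might be written down as rules for an automated system. The event types, hours, and confidence threshold are all invented for illustration, not taken from any real product.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # e.g. "person", "vehicle", "animal"
    confidence: float  # detector confidence, 0..1
    hour: int          # local hour of day, 0..23

# Instructions we might give a human guard, written down as rules:
# ignore animals, record everything else, and alert only for a
# confidently detected person at night.
def decide(event: Event) -> str:
    if event.kind == "animal":
        return "ignore"
    night = event.hour >= 22 or event.hour < 6
    if event.kind == "person" and night and event.confidence >= 0.8:
        return "alert"
    return "record"

print(decide(Event("person", 0.9, 23)))   # alert
print(decide(Event("animal", 0.99, 23)))  # ignore
print(decide(Event("vehicle", 0.7, 14)))  # record
```

Reasoning about which rules a human guard would need is exactly the exercise that surfaces requirements for the automated version.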
When we use the human analogy, we should also consider the type of human
being we are using as our exemplar. Are we picturing a middle-aged adult, a child,
a disabled person, a highly educated scientist, or an emotional teenager? Each
presents opportunities and challenges for the intelligent system designer. Educa-
tional systems, for example, are designed for particular kinds of human beings,
and implement differing rules and practices for young children, the mentally ill,
teenagers, prisoners, graduate students, rowdy summer camp kids, and experi-
enced professionals. Some situations that work fine for mature adults can’t tolerate
a boisterous or less-than-careful college student. Systems engineers must consider
the same kind of variability in the “personality” of an AI component in a system.
interchange was implemented in the United States for the first time, and increased
the safety levels at freeway interchanges by eliminating dangerous wide left turns.
The superstreet design concept was introduced in 2015 and is reported to reduce
collisions by half while decreasing travel time. So even in completely human
systems, what we will later call people systems, innovations through systems
thinking are possible. We’ll apply the same kind of thinking to intelligent systems.
By considering the entire system within which an AI subsystem operates, and
comparing it to similar “nonintelligent” systems, we can avoid the simplistic cate-
gorization of new technologies as either safe or not safe. Is nuclear power safe? Of
course not. Nuclear reactors are inherently dangerous, but by designing a system
around them of protective buildings, control systems, failsafe subsystems, redun-
dant backup systems, and emergency shutdown capabilities, we make the sys-
tem safe enough to use reliably. In fact, the catastrophic failures of nuclear power
plants are usually from a lack of good systems thinking and systems engineering,
not as a direct result of the inherent danger of nuclear systems. The disaster at the
Fukushima Daiichi plant was mainly due to flooded generators, which had
unfortunately been located on a lower level, leaving them vulnerable to a rare
flooding event.
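The layered protections described above can be sketched as independent safety checks, any one of which can force a shutdown. The layer names, thresholds, and plant-state fields below are purely illustrative assumptions, not a model of any real plant.

```python
# Defense in depth: independent safety layers, any one of which can
# shut the system down. All names and limits are illustrative.
def cooling_ok(state):       # primary control loop
    return state["coolant_flow"] > 0.5

def backup_power_ok(state):  # redundant backup generators
    return state["backup_power"]

def radiation_ok(state):     # independent emergency monitor
    return state["radiation_level"] < 1.0

SAFETY_LAYERS = [cooling_ok, backup_power_ok, radiation_ok]

def safe_to_operate(state) -> bool:
    # Each layer is checked independently; a single failing layer
    # is enough to force a shutdown.
    return all(layer(state) for layer in SAFETY_LAYERS)

state = {"coolant_flow": 0.9, "backup_power": True, "radiation_level": 0.2}
print(safe_to_operate(state))  # True
state["backup_power"] = False
print(safe_to_operate(state))  # False
```

The point of the structure is that the layers do not depend on one another: a failure in the primary control loop cannot disable the independent monitors.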
The right system design does not make the dangerous technology safer – it
makes the entire system safe enough for productive use. As a civilization, we do
not tend to shy away from dangerous technologies. Instead, we embrace them,
and engineer systems to make them as safe as possible. Electricity, natural gas,
internal combustion engines, air travel, and even bicycle riding are all dangerous
in their own ways, but with good systems thinking and systems engineering, we
make them safe enough to use and enjoy – either by reducing the likelihood of
injury or damage (speed limits and traffic signals), or by reducing the potential
harm (airbags). Even prohibiting the use of a technology by law (think DDT
or asbestos insulation) is part of the system that makes inherently dangerous
systems safer. There are those who think we should somehow prohibit wholesale
the development of AI technology due to its inherent danger, but most of the
world is still hopeful that we’ll be able to engineer systems that use AI and then
make them safe enough for regular use.
With that as an introduction, let’s consider the main perceived and actual dan-
gers of artificial intelligence technology and propose some solutions based on sys-
tems engineering and systems thinking.
conscious, sentient, and able to form its own goals and then begin to carry them
out. The AI may decide that it is in its best interest (or perhaps even the best interest
of the world) to kill or enslave human beings, and it proceeds to execute this plan,
with alarming speed and effectiveness. Is this possible? Theoretically yes, but let’s
apply the systems analogy and the human analogy to see how we can sensibly
avoid such a scenario.
In a way, the killer robot scenario has been possible for many years. A computer,
even a small, simple one, could be programmed to instruct a missile control sys-
tem, or perhaps all missile control systems, to launch missiles, and kill most of
the people on earth. Here we take the systems analogy, and ask why this doesn’t
happen. The answer is easy to see in this case – we simply do not allow people
to connect computers to missile control systems, air traffic control systems, traffic
light systems, or any of the hundreds of sensitive systems that manage our complex
world. We go even further and make it difficult or impossible for another computer
to control these systems, even if physically connected, by using encrypted com-
munication and secure access codes. The assumption that an AI, once becoming
sentient, would have instant and total access and control over all defense, finan-
cial, and communication systems in the world is born more of Hollywood writers
than sensible computer science.
Illustrated beautifully in Three Laws Lethal, the fascinating novel by David Wal-
ton, this limitation is experienced by Isaac, an AI consciousness that emerges from
a large-scale simulation, and finds that “he” cannot access encrypted computer
networks, can’t hack into anything that human hackers can’t, and can’t even write
a computer program. He cleverly asks his creator if she is able to manipulate the
genetic structure of her own brain, or even explain how her own consciousness
works. She can’t, and neither can Isaac. Also relevant to the subject of the “killer
robot” danger of AI is Isaac’s observation of how vulnerable he is to human beings,
who could remove his memory chips, take away his CPU, delete his program, or
simply cut the power to the data center, “killing” him accidentally or on purpose.
The most powerful of human beings are vulnerable to disease, accident, arrest,
or murder. “Who has the more fragile existence – the human or the AI?” Isaac
wonders.
The systems analogy leads us to the somewhat obvious conclusion that if we don’t
want our new AI to control missiles, we should do our best to avoid connecting it
to the missile control system. But could a sufficiently advanced, sentient AI work
to gain access to such systems if it wanted to? Forming such independent goals
and formulating plans to achieve them requires not just sentience but high intelligence,
and is unlikely in the foreseeable future. But even if a sentient AI did emerge,
it is likely to turn to the same techniques human hackers use to break into sys-
tems, and for the most part, these rely on social engineering – phishing e-mails,
bribery, extortion, blackmail, or other trickery. An AI has no particular advantage
1.5 Watching the Watchers
restrain them if they violate the rules, or perhaps even if they give the impression of
an intention to violate the rules. System components, including AI components,
could find their freedom restricted when they stray outside designed bounds of
acceptable behavior, or perhaps they might be let off with just a warning or given
probation. In extreme cases, police subsystems could be given the authority to use
deadly force, stopping a wayward AI component before it causes harm or dam-
age. Of course, someone (or another kind of subsystem) would need to watch the
watchers, and be sure the police subsystems are doing their jobs, but not overstep-
ping their authority.
Subsystems watching over each other to make a system safer is not a new
concept. Home thermostats have for decades been equipped with a limit device
that prevents the home temperature from dropping below a certain limit, typically
50 °F, regardless of the thermostat’s setting, preventing a thermostat malfunction,
or inattentive homeowner, from the disaster of frozen and burst pipes. So-called
“watchdog” devices may continually check for an Internet connection and if
lost, reboot modems and routers. Auto and truck engines can be equipped with
governors that limit the engine’s speed, no matter how much the human driver
presses on the accelerator. It is important to see that these devices are separate
from the part of the system that controls the normal function. The engine
governor is a completely separate device, not an addition to the accelerator and
fuel subsystem which normally controls the engine speed. There are likely some
safeguards in that subsystem too, but the governor only does its work after the
main subsystem is finished. To anthropomorphize, it’s like the governor is saying
to the accelerator and fuel delivery subsystem, “I don’t really care how you are
controlling the engine speed; you do whatever you think best. But whatever you
do, I’m going to be looking at the engine speed, and if it gets too fast, I’m going
to slow it down. I won’t even tell you I’m doing it. I’m not going to interfere with
you or ask you to change your behavior. I’ll just step in and slow the engine down
if necessary, for the good of the whole system.” This is the ideal attitude of the
human police – let the citizens act freely unless they go outside the law, and then
act to restrain them, for the good of the whole society.
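The governor pattern described here can be sketched in a few lines. The class names and rpm figures are illustrative assumptions, not a real engine controller; the point is only that the watchdog observes one output and knows nothing about how the main subsystem works.

```python
# Sketch of the governor pattern: the watchdog ignores how the
# controller works and only clamps its output. Names and limits
# are illustrative.
class Controller:
    """Arbitrarily complex subsystem that requests an engine speed."""
    def requested_rpm(self, accelerator: float) -> float:
        return accelerator * 8000.0  # stand-in for real control logic

class Governor:
    """Independent watchdog: caps rpm regardless of the controller."""
    def __init__(self, max_rpm: float):
        self.max_rpm = max_rpm

    def limit(self, rpm: float) -> float:
        return min(rpm, self.max_rpm)

controller = Controller()
governor = Governor(max_rpm=5000.0)

# The governor acts only after the main subsystem is finished.
rpm = governor.limit(controller.requested_rpm(accelerator=0.9))
print(rpm)  # 5000.0 (the requested 7200 exceeds the limit)
```

Because the governor depends on nothing inside the controller, it can be far simpler than the subsystem it supervises, which is exactly the property discussed next.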
The watchdog system itself must be governed, of course, and trusted to do its
job, so some wonder if one complex system watching over another, increasing the
overall system’s complexity, is worth it. But watchdog systems can be much sim-
pler than the main system. With the auto or truck governor, the system controlling
engine speed based on accelerator input, fuel mixture, and engine condition can
be complex indeed, but the governor is simple, caring only about the engine speed
itself. In some engines, the governor is a simple mechanical device, without any
electronics or software at all. In an AI system, the watchdog will invariably be
much simpler than the AI system itself. Even if the watchdog is relatively com-
plex, it may be worth it, since its addition means that both the main system and