
Complex Engineered Systems

Contents

Engineered Systems Overview


Technology Systems
Setting The Context
Stakeholder Analysis
Complex Engineered Systems
Information & Control Systems
Socio-Technical Systems

Self-Organization
Self-Organization
Systems Dynamics
Path dependency
Multi-Level Systems

Networks
Technology Networks
Distributed technologies
Multiplex Networks
Network diffusion

Adaptation & Evolution


Adaptive Systems
Robustness
Evolution
Industrial Ecology
Overview

Every time we buy a product in the supermarket, take a flight, recharge our phone or send an email, we are forming part of what we call complex engineered systems. From supply chain networks to power grids and cities, our everyday lives are embedded within and enabled by these complex networks of technology and services. This is a world in a state of rapid change: in the industrial age we built individual systems; with the advent of information technology and globalization, a new world of integrated networked systems that cut across specific domains is emerging. These systems bring a whole new paradigm to our technology infrastructure and challenge our engineering capacity, and understanding them is more important than ever. This course is an overview of the new area of research called complex engineered systems, which applies models from complexity theory to analyzing the technology infrastructure that runs our high-tech global economy.

The book is broken down into four major sections: We will start by applying systems theory to
understand the fundamentals of our engineered environment, using it to give us a basic model of
technology before we start adding complexity to it. We will go on to talk about information systems
and sociotechnical systems. Next, we will be looking at nonlinearity and self-organization within our technology landscape, discussing how no one really designs these complex engineered systems; instead, they are created out of local interactions, feedback loops and attractors that give rise to the emergence of global patterns of organization.


We will go on to apply network analysis, discussing how IT and alternative technologies are working to create a new generation of highly integrated but also distributed systems, flipping our traditional centralized model on its head as new technologies like 3D printing, solar cells and mesh networks enable end-users to become producers. In the last section of the course we will cover the topics of adaptation and system robustness, as we look at the evolutionary process through which complex engineered systems are created, their vulnerabilities, and their capacity to adapt to a changing environment.

Throughout the course we will be following a number of major trends that are having a powerful, transformative effect on our technology infrastructure, including the rise of sustainability, globalization, the services revolution and, of course, the information revolution, which continues to be the most pervasive and radically disruptive force as it works to fundamentally re-architect our traditional industrial-age systems, breaking down barriers between silos and networking them into integrated systems as disparate technologies increasingly converge. We will see how all of these major themes are working to take us into a new world of complexity as we go further into the 21st century, presenting engineers with a set of daunting technical challenges as they try to develop this next generation of integrated, smart and sustainable technology solutions.

This book will not teach you about design or engineering; it is a course on technology analysis through the lens of complexity theory. Also, this course is not an introduction: you will be expected to be familiar with basic concepts within complexity theory, science, engineering, and technology. It is an overview of a broad subject, so we will not be drilling down into technical engineering details. The course is non-mathematical, but you will be exposed to the abstract models that are required to make a proper analysis of these very complex systems of technology.
Technology Systems

In this module, we are going to lay down a working definition for what exactly we mean by the terms engineered system and technology, as we will be using the two interchangeably. What we want to do is get a grasp on some of the fundamental characteristics that remain invariant whether we are talking about a very simple technology, like a shovel, or a very complex one, like an airport. Many factors will change with the scale and complexity of the system, but the fundamental features of technology will remain continuous, and this will give us something to ground our analysis in. In this world of globalization, information technology, and large infrastructure systems, things can get very abstract and complex very quickly, and we don't want to lose sight of the fundamentals of what we are dealing with.
So let's start from the beginning, and the beginning is us, human beings. Unlike other systems within the natural environment, such as stars, stones or plants, that are governed and shaped by natural processes, technology is not; it is almost exclusively created by human beings, for human beings, and thus it is governed and defined by a logic that reflects the condition that we are under. So what is this condition? It consists of the fact that we are biological creatures within a physical environment. Like all biological creatures we require a constant input of energy and resources, and we are persistently and actively engaged in trying to maintain and develop our access to the resources required for self-preservation and development. Like all creatures we manipulate and alter our environment in order to achieve what is called homeostasis, that is to say, an environmental context that is optimal for our self-preservation and well-being. Unlike other creatures, though, we have developed the capacity for advanced cognitive processing, which has made us somewhat distinct from other creatures and is ultimately the foundation that has enabled civilization.
With this advanced cognitive capacity, we are able to create complex models of our environment and understand a wide array of cause-and-effect interactions. We can conceive of desired optimal future states and use logic to try to achieve them through a sequence of strategic actions, and this is the very abstract foundation of technology and engineering. The word engineering derives from the Latin ingenium, meaning cleverness; in its essence it is about developing systematic methods for solving some given set of constraints. Technology is then the embodiment of this systematic solution within a physical form that can perform the set of stages required to resolve the constraints whenever required. The economist W. Brian Arthur defines technology in a similarly broad way, as "a means to fulfill a human purpose", but we will try to give this a bit more definition by creating a simple systems model that will aid our reasoning.
In this model we have a current state A and a desired future state B. This could be anything from being on one side of a river (state A) and wanting to get to the other side (state B), to being cold and wanting to be warm. We then have some kind of environmental constraints between states A and B. These environmental constraints generate the problem space that we need to resolve in order to get to state B. Engineering, then, is the development of an algorithmic process that resolves this problem space, that is to say, a set of steps that need to be performed in order to get to our desired state. Technology is the actual system that performs this process, taking us from A to B. In this way, we can then think about technologies as systems, in that they have some input of resources and they perform a function on these inputs in order to produce some desired output.
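To make this input-function-output model concrete, here is a minimal sketch in Python. The names (`Technology`, `bridge`) and the string-based "states" are purely illustrative assumptions, not anything defined in the text:

```python
from typing import Callable

class Technology:
    """A technology modeled as a system: it accepts an input of
    resources and applies an automated function to produce an output,
    moving us from state A toward desired state B."""

    def __init__(self, name: str, function: Callable[[str], str]):
        self.name = name
        self.function = function  # the rationalized, automated solution

    def operate(self, resource_input: str) -> str:
        # Running the system performs the function on the input
        return self.function(resource_input)

# A bridge "solves" the constraint of the river between states A and B.
bridge = Technology("bridge", lambda state: state.replace("bank A", "bank B"))
print(bridge.operate("traveler on bank A"))  # -> "traveler on bank B"
```

The point of the sketch is only that the technology encapsulates the solution: the operator supplies an input and receives the desired output without re-solving the problem each time.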

Although technologies are different from natural systems, as we have noted, we can only properly give them context by understanding them also as an extension of natural processes. Technologies are both a very organic part of human beings and an extension of us; almost all technologies can be traced back to some initial process that was performed by our unaided natural physiology, whether we are talking about transportation, performed originally by walking, or telecommunications, performed originally by our vocal system.
Through endless iteration of these natural processes and innovation, we have rationalized them and embodied them in external automated systems that can typically perform the process more efficiently and effectively. By automated we mean that, having rationalized the process, we no longer have to look for a solution to it every time; the technology is the solution, we just have to operate it. The word automatic means acting by itself; in other words, the technology has automated part of the process. I don't need to think about how I am going to get to work each day; I simply get in my car and drive. I don't even need to know how the car is converting the input to the system, liquid fuel, into the functional output, personal mobility, because the technology was designed specifically to automate this process.
Efficiency
All systems operate with some degree of efficiency, and efficiency is a central concept within engineering. The second law of thermodynamics tells us that in this processing of energy or resources there will always be an increase in entropy; that is to say, whenever we run a technology it will produce some waste product, which either remains in the system, degrading its functionality over time, or gets exported to its environment. When I operate my car, only about 25-30% of the energy released by the fuel is used to move the vehicle; the vast majority is rejected as heat without being turned into useful work, and this entropy must be exported from the system or else it will damage its functioning. This entropy, of course, does not just disappear; the heat, along with other forms of waste such as noise and gas emissions, goes into the system's environment. From this, we can define the system's efficiency and begin to reason about its sustainability.
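As a minimal worked example of that efficiency calculation, using the rough 25-30% figure quoted above (the energy quantities themselves are invented for illustration):

```python
# Efficiency = useful work out / energy in.
fuel_energy_in_mj = 100.0   # energy released by the fuel (illustrative)
useful_work_mj = 27.0       # energy that actually moves the vehicle

efficiency = useful_work_mj / fuel_energy_in_mj
waste_exported_mj = fuel_energy_in_mj - useful_work_mj  # heat, noise, emissions

print(f"efficiency: {efficiency:.0%}")               # -> efficiency: 27%
print(f"entropy exported: {waste_exported_mj} MJ")   # -> entropy exported: 73.0 MJ
```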
Sustainability is a very abstract concept, but in its essence it describes the relationship between a system and its environment. It is essentially a function of, on the one hand, the volume of resources the system requires, coupled with the environment's stock of resources accessible to the system, and on the other hand, the amount of entropy the system produces, combined with its environment's capacity to absorb that entropy without degrading its capacity to continue providing the system with a future supply of resources.
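One toy way to express this two-sided relation in code. This is a sketch under our own simplifying assumptions (all quantities treated as flows per unit time, with a simple pass/fail rule); it is not a formula given in the text:

```python
def sustainable(resource_demand: float, resource_renewal: float,
                entropy_output: float, absorption_capacity: float) -> bool:
    """Toy sustainability check: a system is sustainable only if its
    environment can both renew the resources it draws (supply side)
    and absorb the entropy it exports (demand side)."""
    supply_side_ok = resource_demand <= resource_renewal
    demand_side_ok = entropy_output <= absorption_capacity
    return supply_side_ok and demand_side_ok

# e.g. a system drawing 10 units/yr from an environment that renews 8/yr:
print(sustainable(10, 8, 3, 5))  # -> False: it degrades its own environment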
In trying to overcome some environmental constraint, we also manipulate the environment, altering it according to our set of instructions. An artificial system is one that is designed according to a set of principles that do not integrate with natural processes and thus work to disintegrate the natural environment. This might also be called hacking: the re-engineering of a subsystem within a larger integrated system in order to optimize it according to a set of principles that do not integrate with the overall pattern of organization, and thus work to disintegrate the macro system and reduce its sustainability. Optimizing a computer's processing unit for speed, what is called overclocking, is an example of hacking: we are optimizing a subcomponent and thus breaking, or disintegrating, the computer's overall design pattern, which will ultimately work to reduce its long-term functionality and sustainability.
When we talk about the sustainability of a technical system within its environment, we are no longer just dealing with the quantitative technical efficiency of the technology; we also have to ask the qualitative question of why we use the technology in the first place. When we run a system it produces some output. If this output is immediately consumed without becoming the input to a new process, then we can refer to this as dissipation, meaning that the energy is dispersed or scattered, thus increasing entropy and decreasing its capacity to do work; the resource has been used up and is no longer available for work, at least not at the same level of functionality as before. If I use my iPhone to play computer games, the output of this process cannot be used to enable another function; the resources inputted to enable the operation of that technology have been dissipated. Dissipative processes generate entropy and are typically time-irreversible: we can't take the output and feed it back in as the input again, because it has been degraded during the operation.
Inversely, the output of one system might be used to fuel another; for example, the processing of crude petroleum within a refinery is required in order to produce the input for a vehicle of transport. This is an example of an anabolic process, that is, one that requires an input of energy in order to refine or synthesize basic resources into resources of a higher quality. The assembly of parts on a production line into a finished product is another example; it requires work to be done, that is to say, the system performing a function in order to produce some throughput of a higher value. The conversion of coal into electricity is yet another example, as electricity is a much higher-quality form of energy than coal.
With functionality and throughput we get what we call emergence: when we are supported by a system of technologies that are working effectively, they enable us to function at a new level of organization. Technology offers the possibility for us to be more productive and live a better quality of life. Infrastructure systems are a good example of this. Unlike consumer goods, the throughput of infrastructure systems like transportation and electrical power networks enables other technologies to function more efficiently. In this way we get emergence as we move up the different levels of our technological substrate, and thus infrastructure systems that form the base of this can have a powerful leveraging effect, where investing one dollar in infrastructure can return roughly three dollars' worth of overall economic value.
New levels of organization in our systems of technology emerge as we go from basic tools to industrial machines to information technologies. When these systems work properly and are abstracted away through encapsulation, we can sit on a high-speed train sipping our coffee and surfing the web, completely oblivious to the many layers of technology that are required to enable this to happen. When we get functionality, throughput and the emergence of a multilevel system, with everything properly abstracted away, we get the smoothly running infrastructure systems of places like Hong Kong or Switzerland. Consumer goods like iPhones and sports cars may be the celebrities of technology, but they are enabled by a multi-tiered infrastructure that makes our globalized world go round.
To summarize, then: no matter how complex, sophisticated or large the technology we are talking about, whether we are trying to peel a potato, build a website or move millions of people around a city every day, we are always dealing with the same basic features of technology that we have been discussing in this module. That is, we wish to get from one state to another, more desirable state, and there will be some environmental constraints that we need to overcome in order to achieve this. There will be a large possibility space for how we do this, but through engineering we rationalize the process to develop an optimal, automated solution that we call the technology. This technology is a system that performs a function; it performs this function only ever to a limited extent and thus generates entropy during its operation. Its degree of efficiency and the environmental conditions define its degree of sustainability. When many technologies work together as an integrated system we get emergence, as new levels of organization arise to create multi-level platform technologies, and this is the technical substrate that our societies depend upon.
Setting the Context

In this module, we will be taking a look at the major trends that are working to shape our technological infrastructure and drive its increased complexity as we transition further into the 21st century. These major trends can be encapsulated within the overarching transition that advanced economies are currently going through, from industrial economies to post-industrial services and information economies. This is a major transformation of their deep structure and architecture, one that is happening at an unprecedented speed. Needless to say, the macro environment of the 21st century is not one of standardized, predictable business as usual. It is marked by what business analysts and managers call the VUCA world, an acronym standing for Volatility, Uncertainty, Complexity and Ambiguity. A full analysis of this context is beyond the scope of this course. What we are interested in here is how this fundamental restructuring process plays out within our systems of technology. In order to get some kind of traction on these very big ideas, we will break them down into four distinct vectors of change: the rise of environmental awareness, the information revolution, economic globalization and the services revolution.
Firstly, sustainability. When we are looking at the factors shaping the development of technology in the 21st century, one of the key factors is the paradigm of sustainability. This paradigm describes a new way of seeing the world that has, over the past few decades, moved from the fringes to the center of our collective consciousness. It recognizes the unsustainability of the industrial model of economic growth and the need for a fundamental transformation in our technological infrastructure. Environmental awareness and sustainability are the product of a simple equation that simply doesn't add up whichever way we look at it: a nexus between a diminishing supply of resources and environmental degradation on the one hand and a growing demand for energy and resources on the other, as the majority of the world's population comes into the global economy. Within just the next 15 years, the global middle class of consumers is projected to approximately double to almost 5 billion.
Sustainability is both a supply-side and a demand-side equation. On the supply side, it means accessing a wider spectrum of energy and materials in a multiplicity of different ways. A whole new set of technologies is emerging that goes beyond the industrial paradigm to create a new distributed architecture for our infrastructure. Distributed generation, electric vehicles, thin-film solar cells, organic farming and many other alternative technologies run contrary to the industrial-age model of dependency upon a very limited set of energy sources at a mass scale. Our energy architecture to date has been significantly dependent upon the very high-quality energy source of liquid petroleum. These alternative technologies are of a much lower energy quality, again driving this move towards more distributed and pervasive energy sources as we expand the spectrum of input sources for energy.
On the demand side, it means using these resources more efficiently, both on the micro level, as energy efficiency becomes a key consideration in the design of everything from light bulbs to washing machines and houses, but also on the macro scale, because sustainability is not the property of a thing. Things cannot really be sustainable in isolation. It is about integrated systems, looking at how different systems interoperate and how to create synergies between them. This again is a major disruption to our traditional industrial model, which is very much focused on optimizing subsystems in isolation. Key to achieving sustainability is the shift from a linear system to a nonlinear, circular economy, and this is about seeing across domains and across levels in order to create connections and processes for recycling materials between disparate systems.
Sustainability is a major transformative vector along which the technological substrate that supports our economies will be fundamentally altered, as its whole architecture moves from linear to cyclical, likely running on a much lower-grade set of distributed and pervasive energy sources. It takes us into the world of complexity in that it requires us to include many more factors in the design of these technologies, not just economic and technical factors but now a whole new set of environmental factors. It also requires us to understand how the systems we develop integrate with other systems in their environment, in order to achieve not just subsystem optimization but efficiency on the macro scale, requiring an integrated and more holistic approach to design engineering.
The information revolution may have started out as just another extension of the industrial model, with expensive, massive mainframe computers and highly centralized networks, but it has turned out to be very different. The revolution in information processing and telecommunications of the past few decades is having a fundamental and pervasive effect on all areas of society and our technological substrate, ushering in a new architectural paradigm as our centralized, hierarchical systems of industrial organization become unbundled and distributed out into networks. The process of industrialization was, and is, one of centralization and standardization in order to achieve the batch processing required for mass production, distribution and consumption, and this mass centralizing process continues around the world today through rapid urbanization into so-called megacities.
But unlike the 19th and 20th centuries, which were largely one-directional processes towards centralization, the 21st century has strong forces going in both directions: on many levels information technology is having a radical redistribution effect, while on other levels traditional industrial processes of convergence and centralization continue at an unprecedented scale, not set to slow down for many decades. This is a key characteristic of a complex, that is, a system composed of two irreducible components. 21st-century technology and systems of organization are, and will be, at least partly characterized by this interplay of two different patterns of organization: formal centralized organizations and distributed informal networks. Developing methods and design patterns to integrate the two is a key consideration going forward.
As the exponential growth in processing power starts to slow down, the value from IT is moving up
the value chain. Value is no longer gained so much from making computers faster and smaller,
although this of course continues. It is instead currently moving up to the level of large information
systems in the form of big data, cloud computing, advanced analytics and the Internet of Things, all
of which are at the forefront of the information revolution today.
These technologies of cloud computing, advanced analytics, mobile computing and the Internet of Things are all converging as a major force reshaping the technology landscape, and it is happening rapidly as we speak. These technologies hold out huge potential for making our world smarter, more adaptive, efficient, process-orientated and real-time. But they also engender some of the most difficult engineering challenges and, above all, many risks, including increased unknown interdependencies as different systems converge upon a common cloud computing platform, and mass automation, with increased dependence upon automated algorithms and control systems, which raises many security concerns, without even mentioning the social concerns. Information technology is another vector taking us into the world of complex engineered systems as it networks our world, making systems more adaptive, responsive and, again, distributed.
Next, economic globalization. Globalization is a very complex phenomenon, of which economic globalization is just one dimension; it involves the proliferation of economic relations on a global basis. Through these relations, our global economy is restructured away from being defined by a set of national economies and becomes a multi-dimensional set of global networks through which goods, services, people and capital can flow without restriction to anywhere that is integrated into the network. It is driven primarily by multinational corporations, and in particular financial institutions, that have been very effective in leveraging new information technologies.
Modern industrial economies and their infrastructure were developed within the context of the nation-state and were typically monopolistic and monolithic. But globalization has driven the process of privatization within many post-industrial economies. From England to New Zealand, infrastructure systems have become disaggregated, decoupled from the nation-state and increasingly reintegrated into global networks that are managed by multinational corporations. This is resulting in a much more complex landscape with many different actors, both private and public, as infrastructure systems such as the telecommunication and power grids of Western Europe no longer stop at borders but increasingly form part of multinational networks composed of many different stakeholders.

Advanced infrastructure in the age of globalization is no longer the exclusive domain of Western countries. Cities like Dubai and Shenzhen have shown that high-quality infrastructure can be rapidly developed anywhere on the planet: given the right context and ingredients of sound political administration and financial viability, the technology and technical expertise can be deployed relatively easily, at unprecedented scale and speed.
Lastly, we will talk about the rise of the services economy. The industrial-age economy was one of mechanization and mass production of tangible products in order to provide for the basic physical needs of a mass society. Within developed economies, these basic needs have in many ways been met, with the middle-class consumer now forming the majority within these societies. The post-industrial world is marked by a reduction in industrial production and a steep rise in the services sector. As manufactured goods have become commodities, services have risen to dominate the developed economies through an ongoing process that we might call servicization. Servicization is a transaction through which value is provided by a combination of products and services, in which the satisfaction of customer needs is achieved either by selling the function of the product rather than the product itself or by increasing the service component of a product offering.
Services are fundamentally different in nature from products. Whereas products are typically physical objects, services are instead intangible systems that integrate different products in order to provide a customer solution, often in the form of some kind of process that endures over time. The word service means 'the action of helping or doing work for someone', and services are fundamentally about people. Today, post-industrial societies that have achieved a certain standard of material well-being are becoming more aware that what people really want from their economic infrastructure is not just high GDP but actual quality of life and well-being. This is a much more complex thing, which requires us to first recognize the social dimension to the whole situation and to recognize that ultimately this is about providing services for a society that are affordable, accessible, available and acceptable to all stakeholders.
The rise of services is another dimension to the growth in complexity of our engineered systems. It requires again that we build not just one-off technologies but integrated service systems, that we put people at the center of these service ecosystems, and that we understand how to design the processes and integrated sets of services required for them to achieve their well-being. Services require us not only to cut across domains to integrate disparate systems but, more fundamentally, to bridge the divide between the social and the technical.
In summary, then, we have taken a quick overview of the context and major trends that are fundamentally reshaping our economic infrastructure. We have talked about this process as a shift from an industrial economy to a post-industrial services and information economy. We have outlined four major trends that are part of this process: the rise of the paradigm of sustainability and its focus on the integrated systems required to enable the circular economy; the information revolution and how it is networking our world, making it distributed, smarter and more responsive; economic globalization, which has worked to disaggregate our once monolithic infrastructure systems and increasingly turn them into multinational, multi-stakeholder networks; and lastly the rise of the services economy, as discrete technologies become integrated into service systems that are focused around the end-user. Of course, all of these trends are interconnected and interdependent. Globalization would not be happening without information technology; servicization is important to achieving sustainability; and so on. But the net result of all of this is the emergence of the complex engineered systems we will be discussing in this course.
Complex Engineered Systems Overview

In this video, we will be giving an outline of what we mean by the term complex engineered system. Like all systems, technologies can be simple linear systems or complex nonlinear systems. To understand the difference, we will first discuss the classical characteristics of simple linear systems before moving on to itemize the defining features of complex engineered systems, with some examples.
In a previous module we talked about and defined technologies as automated systems for solving some given environmental constraint. As such, they are physical systems that automatically act out an algorithm for solving a problem. A screwdriver is a system that automates part of the process required to overcome the physical constraints of putting a screw into a material. A bridge is an automated system for solving the problem of getting from one side of a river to the other. These are examples of simple linear systems. To illustrate some of the characteristics of a simple linear technology, we will take a common household toaster as an example.
Firstly, linear systems are composed of a limited, finite number of interacting components. Our toaster may have at most a few hundred components. Thus, it is possible to itemize and describe each component in the system. Because of this, it is often possible to define a fixed boundary for these simple systems; they are what we call well bounded. By this, I mean that we can tell exactly what is part of the system and what is not. We can put our toaster in a box and say that what is in the box is part of the toaster and what is outside the box is not. This may sound like a trivial observation, but it is certainly not always the case that we can do this, and it is typically only with these simple linear systems that we can really define a fixed, meaningful boundary.
Linear systems have a relatively low level of connectivity between components and with other systems in their environment. Added to this, components interact in a well-defined linear fashion. There is a limited number of interactions in our toaster, and it is possible for us to draw direct cause-and-effect interactions between any two components that are connected. Also, we can define and quantify exactly the operational inputs and outputs of the toaster: a single source of electricity and bread go in, with toast coming out. And there is a single, simple parameter to the system's operation: one dial for varying how well we want our toast cooked.
Next, linear systems are homogeneous, meaning the system performs one single function and all components are designed towards that same function. Our toaster makes toast; you can't use it to cook omelets or make telephone calls. A corollary to this is that these systems are monolithic, meaning that all subsystems and components are constrained by one top-down design pattern. Individual components have a well-defined function that is constrained within strict operating parameters governed by the system's overall design. One engineering team designed our toaster. They designed the whole system, choosing each component in order to optimize the functioning of the whole. Because of this, these simple engineered systems are well-defined engineered objects, and they are expected to be well behaved in their functioning, meaning they operate in a standardized, routine and predictable fashion.
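A minimal sketch of these characteristics in code, with the component list, dial and inputs invented purely for illustration:

```python
class Toaster:
    """A simple linear system: well bounded, homogeneous, monolithic.
    Every component can be itemized, and there is one function,
    defined inputs and outputs, and a single operating parameter."""

    COMPONENTS = ["heating element", "spring", "timer", "lever", "casing"]

    def __init__(self, browning_dial: int = 3):
        self.browning_dial = browning_dial  # the single control parameter

    def operate(self, bread: str, electricity: bool) -> str:
        # Defined inputs (bread + electricity) map linearly to one output.
        if not electricity:
            return bread  # no input energy, no function performed
        return f"toast (browning level {self.browning_dial}) from {bread}"

print(Toaster(browning_dial=4).operate("white bread", electricity=True))
```

The contrast with what follows is the point: for a city, no such finite component list, single function or single design team exists.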
Complex engineered systems are qualitatively different from these linear technologies. To help illustrate this, we will take the classical example of a city.

Firstly, complex engineered systems are open systems, meaning that they have such a high level of interaction with their environment that their boundary is not well defined. Added to this, they are composed of very many elements; we may be talking about millions, billions or even too many components for us to be able to quantify in any meaningful way. And it may be very difficult to say which of these components are part of the system and which are not. For example, metropolitan areas may span large geographical areas as different urban centers morph into each other. We might draw lines on a map in order to define jurisdictions, but from an engineering perspective they are largely arbitrary. Asking how many components there are in the system is again a somewhat arbitrary question; for all intents and purposes, it is essentially infinite. We cannot itemize each component in the system. Added to this, components are leaving and joining, coupling and decoupling from the system in a dynamic fashion. Metropolitan areas provide critical services that a whole region's or country's infrastructure is dependent upon. They are deeply interconnected and interdependent with their environment. At this critical level of connectivity and interdependency, the system becomes more open than closed, and it is defined less by its boundary and more by the flow of resources through it.
As opposed to simple systems, where components interact in a linear fashion, in complex systems components interact in a nonlinear pattern. Processes don't just take place from start to finish independently along a single chain. Instead, many different processes and functions take place within a parallel architecture, interacting across and between processes and domains in a network fashion. A metro area is a composite of many overlapping, parallel infrastructure systems, from transportation and water supply to the electrical power grid and the telecommunication networks.
The components in the system are not just interacting across domains but also across scales.
Complex engineered systems are what are called systems of systems. They have a multilayered
hierarchical structure, as elements form part of subsystems, which form part of larger systems,
which in turn form part of the whole system of systems. A subway train is part of the mass transit
system, which is part of the transportation system, which in turn combines with many other
systems to form the whole urban environment.
What is important when we are looking at the whole system is how these different subsystems interrelate, that is, whether they interact in a constructive or a destructive fashion. For example, is the airport in our metropolis built right beside a major residential area, resulting in noise pollution, a destructive relationship that reduces the functionality of the whole system? Processes that were designed in isolation to function in a linear fashion lack integration with other systems in their environment. We get a dead-end effect and the production of waste that will be destructive to some other subsystem. For example, when we build large tarmac surfaces that can't absorb rainwater, the result is a high level of runoff that needs to be dealt with by the wastewater system. This is often the case when we use a reductionist design paradigm: it results in a focus on the individual components without full regard for how these components integrate to give us the functionality of the whole system. Thus, we often end up with optimal solutions on the micro level but sub-optimal solutions on the macro scale.
When components interact in a constructive fashion, we get synergies. They complement and
enable each other. We can think about the use of greenways and parks to absorb carbon
emissions in a city, the coordination between consumers and producers of electricity on a smart
grid, or the schedule coordination between different modes of transportation. These are examples
of synergies.
Through synergies, we get the emergence of new levels of organization and global functionality. Ultimately, all of this technology and infrastructure that constitutes our metro areas is about delivering a material quality of life to citizens. Users don't own most of these infrastructure systems; they just have access to the service. Material quality of life is not a single product or thing. It is about everything working together so that we get the emergence of a seamless set of services enabling end users to live a high material quality of life. And these different types of relations and interactions are a defining factor in whether these infrastructure systems can deliver the emergent macro-scale functionality that everybody wants. Synergies within complex engineered systems require both intelligent design and the use of information technology to coordinate different systems in real time.
Next, we will talk about networks. Unlike our toaster, which was a homogeneous system, these complex engineered systems are not. As mentioned, they are really composite entities made up of many different elements and subsystems: heterogeneous components that were never really designed to work together. The system is distributed out, no one is really in control, and the whole thing is really just a network of connections. Our city is the product of thousands or even millions of different actors: businesses deciding which project to invest in, public administrators deciding which project to support, citizens choosing where to live and where to send their children to school. All of these different actors and subsystems are only loosely associated with each other. Think about the Internet of Things, where many billions of devices, from smartphones to tractors to hospitals, couple and decouple from the system dynamically and operate under their own internal logic. These complex engineered systems are really networks that link up many heterogeneous subsystems and components.
And this helps to emphasize an important fact about autonomy: components are largely autonomous; they are not fully constrained by the system. This runs very much contrary to our traditional idea of engineering, where control over the entire system is thought to be a prerequisite, with systems designed in a top-down fashion. But this is not how the Internet was created, nor electrical power grids, nor our metro area. They all started small and evolved to become the complex systems they are today. Evolution is a process of development that acts on technologies at all scales. The electrical power grid is a good example: since its inception in the industrial age, electrical grids have evolved from local systems that serviced a particular geographic area to wider, expansive networks that incorporate multiple areas, typically covering a whole nation. At no point was there the option to simply build the whole national electrical infrastructure from scratch as a homogeneous system. The U.S. power transmission grid, for example, consists of about 300,000 km of lines operated by approximately 500 companies. And through distributed generation it is rapidly evolving into a next-generation smart grid that will drastically expand the number of producers, making for many, many actors acting and reacting to each other's behavior as the entire system evolves over time, which is an example of what is called a complex adaptive system.
So this is an outline of what we mean when we use the term complex engineered system: large technologies composed of many very diverse subsystems, densely interacting in a nonlinear fashion to create a multi-tiered network system that evolves over time. To give some other examples, we might cite airports, logistics networks, telecommunications networks, enterprise information systems, IoT platforms and the Internet itself, hospitals and healthcare systems, the global air transportation network, and all types of infrastructure systems from mass transit to water supplies.
Information & Control Systems

Information technology is an integral part of the whole theory and science of complex systems. Due to the scale and number of components in these complex systems, it is essentially the only way through which we can access data, process it and infer patterns within them. This is also true of complex engineered systems: although they are much more than just information technology, they are the product of it and virtually impossible without it. All of the technology that we have developed has to be managed, operated and maintained in some way, and information technology is today essentially the only method for managing the massive technological infrastructure of advanced economies. In the same way that researchers can't study and interact with vast networks without computation, it is also becoming increasingly unviable for us to interact with these complex engineered systems without being enabled by information systems. The two are critically intertwined.
The information revolution is in many ways the backbone of the current rapid proliferation and transformation in our technology landscape. As we have already noted, the information revolution is evolving as the focus moves up from the micro level of individual computerized devices to whole systems of people, technology and information. We may have a good grasp of the individual computerized components and their internal workings, but we have very limited understanding of the systems that emerge out of the interaction between all these digital devices, people and physical technologies, and these are what we call information systems.
When we scale the basic operations of computing, the storage, manipulation and exchange of information, up to the macro scale, datasets become big data: no longer a single file on a hard disk but a cloud of data points streamed from a network of different sources. This data is often unstructured and noisy, such as the millions of images uploaded to the Internet every day, the hashtags on social media, or data from financial trading platforms. At this macro scale, computer programs become advanced analytics: a set of algorithms that include sophisticated statistical models, deep learning and other advanced data mining techniques for revealing patterns in large data sets. We can't go into the details of how these algorithms work here, but at a very high level they automate the process of turning data into information that can be acted upon.
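As a toy illustration of "turning data into information", here is a minimal sketch using a simple statistical rule rather than the deep learning methods mentioned above; the sensor readings are invented for the example:

```python
import statistics

# Raw, noisy data: e.g. hourly readings streamed from networked sensors.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 14.7, 10.2, 10.1]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# "Information that can be acted upon": flag readings more than
# two standard deviations from the mean as anomalies.
anomalies = [(i, r) for i, r in enumerate(readings)
             if abs(r - mean) > 2 * stdev]
print(anomalies)  # -> [(5, 14.7)]
```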
These machine learning and deep learning algorithms are cutting-edge technologies that have only really come of age in the past one or two years, with companies like Google scrambling to get their hands on people with expertise in this area. We have never before had the capacity to automatically turn very large data sets into valuable information. This technology already provides solutions to many problems in image recognition, speech recognition and natural language processing, and it will very likely be the engine behind a lot of innovative, groundbreaking technologies in the coming years. This is the forefront of the information revolution today, and it is still a radically disruptive force.
The last technologies that we will briefly mention are social networking and mobile computing. Social networking gives people a presence in this world of information systems and makes our actions explicit. Computing is becoming increasingly pervasive as the digital and physical worlds converge. Social networking and gamification are coming out of their box and will become increasingly a part of, and overlaid on top of, the physical world, giving everything a social dimension. This again is very much cutting-edge technology, but there is growing research on the convergence of the Internet of Things and social networking to give us what is called the social network of things. All of these different technologies combine to give us the acronym SMAC, standing for social, mobile, analytics and cloud, and these technologies are currently at the forefront of shaping our I.T. infrastructure.
The formal definition of an information system is the combination of users, technology and processes to complete a given goal. Information systems collect, store, process and exchange information. Today they are used in all forms of organization of any size, from enterprise information systems to manufacturing, transportation and urban information systems. Information systems serve a number of critical functions within complex engineered systems: basic control, automation, and coordination between different systems. We will discuss each of these functions separately.
A control system is a specialized subsystem for controlling another system. A very high-level generic model of this would consist of a sensor for receiving information about the system being controlled and its environment, a logic unit that processes this information according to some set of instructions, and an actuator that executes the controller's instructions. In order to control a system, all of these elements need to be present and working together.
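A minimal sketch of this sensor-logic-actuator loop in code, using a thermostat as the controlled example; the function names, setpoint and step size are illustrative assumptions:

```python
def sensor(environment: dict) -> float:
    # Receives information about the controlled system's environment.
    return environment["temperature"]

def logic_unit(reading: float, setpoint: float = 20.0) -> str:
    # Processes the sensed information according to a set of instructions.
    return "heat_on" if reading < setpoint else "heat_off"

def actuator(command: str, environment: dict) -> None:
    # Executes the controller's instruction on the system.
    environment["temperature"] += 0.5 if command == "heat_on" else -0.5

# All three elements working together form the control loop.
room = {"temperature": 18.0}
for _ in range(6):
    actuator(logic_unit(sensor(room)), room)
print(round(room["temperature"], 1))  # -> 20.0, held near the setpoint
```

Remove any one of the three elements and control is lost: without the sensor the logic is blind, without the actuator its instructions have no effect.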
How we control technology has, of course, evolved over time. If we think about a hand tool like a shovel, all of these control functions are performed by the person operating the technology, who is also inputting the physical energy into the system. Industrial technologies and new energy sources removed humans from direct physical control over the system as it became mediated through mechanical levers. The electrical revolution gave us electronic interfaces, but still largely controlled by a human operator. With the advent of information technology, basic control processes, such as those on production lines and other industrial processes, have become automated.
There are only so many technologies that a single human can interact with and manage. At a relatively low level of technological saturation, such as in pre-modern societies, we can interact with and directly control all the technologies we own. But as we increase the number of technologies and the complexity of the technological infrastructure, this is no longer possible. Developing the large infrastructure of the industrial age required a certain level of automation, allowing any single individual to be enabled by many more, and more diverse, technologies. It was no longer possible for us to manually interact with, directly control or even understand all of these technologies.
The more technology we have and the more complex this infrastructure becomes, the more we need information in order to interact with it and manage it. The advent of digital computing and advanced telecommunications is driving a new level of automation that is required to manage the ever-growing complexity of the technological infrastructure that supports post-industrial societies. A single premium-class automobile may contain close to 100 million lines of software code, executed on 70 to 100 electronic control units networked throughout the body of the car. The physical operations of whole mass transit rail systems, such as that of Dubai, have been automated, and many other basic control processes have become automated as well.
Event-Driven Architecture
When supply chains or manufacturing processes become networked and can respond immediately to events occurring in other systems, when prices on the electrical grid can adapt in real time to supply and demand, then things become more contingent upon time, and events play out through processes. This is not just about making things faster, because when a system reaches a critical level of dynamism the whole paradigm changes towards one that is process-orientated, where systems are driven by event signals in time. And the aim of analytics is to find statistical patterns in these processes so that we are no longer reacting to things that have already occurred but can be preemptive, preventing them from occurring in the first place.
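A minimal sketch of what "driven by event signals" can look like in code: a toy publish-subscribe bus where systems react when signals arrive rather than polling. The event name and subscribers are invented for illustration:

```python
from collections import defaultdict

class EventBus:
    """Toy event-driven architecture: systems subscribe to event
    types and react as signals arrive, rather than polling state."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()
# A pricing system reacts immediately to a demand spike on the grid...
bus.subscribe("demand_spike", lambda kw: print(f"raise price, load {kw} kW"))
# ...and a supply system reacts to the same signal in parallel.
bus.subscribe("demand_spike", lambda kw: print(f"dispatch reserve, {kw} kW"))

bus.publish("demand_spike", 5000)
```

The design point is the decoupling: the publisher of the event does not need to know which systems respond, which is what lets processes in different domains coordinate in real time.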
Lastly, before wrapping up, we will touch upon the subject of security, which is, of course, a major issue here. We should not be naive about the scale of the risk involved as our critical infrastructure becomes automated, networked and remotely controlled via common IP platforms. Today a typical car's airbag, steering and brakes can all be hacked and controlled through the Internet for malicious ends. Control systems in nuclear power plants can be broken into, and with the rollout of IoT platforms, software will soon be permeating all types of technologies as our critical infrastructure becomes increasingly dependent upon it. We can think about security with respect to control in terms of either access to control or the use of control.
Access
Distributed systems like the Internet and the IoT drive a new form of security. Traditional security is built around having something to secure, some well-defined information or system that typically belongs to some organization, and we can employ a professional I.T. security team to build a security wall around it because we know what is part of the system and what is not. But in this world of distributed systems like the Internet, we are dealing with billions of devices that may belong to end-users with little awareness of or concern for security, and these many exposed and vulnerable end-user devices can be harnessed for distributed attacks. In this way, security can become a tragedy of the commons. It may seem of little consequence for me to leave the default password on my router, but when millions of other people do likewise, the net result can be a macro-scale security issue, with many devices vulnerable to being harnessed for an attack. This is just one example of the nature of security within these distributed systems.
But as mentioned, security is more than just preventing hackers from breaking into a system. It is ultimately about the appropriate use of control and power, and with this next generation of information systems we are consolidating and handing over an extraordinary amount of power to automated algorithms. A system is only really in control when awareness, responsibility and power are all aligned. This means exercising control through a multi-tier framework, with more intelligent and aware systems guiding systems that are lower in their capability for information and knowledge processing.
Whereas information and data may be growing at an exponential rate, this only works to make intelligence an increasingly scarce resource. Information technology, on the one hand, commoditizes information and data, driving their value right down; but because of this, it also increases the value of knowledge and intelligence, making them scarce resources. Wherever there is demand for a scarce resource, there is a hierarchy based on access to that resource. This drives a new kind of hierarchical structure that is emerging out of the information revolution, captured in the acronym DIKW, which stands for data, information, knowledge and wisdom. Controlling these systems in a long-term, sustainable and secure way means understanding this hierarchy and building it into our systems of technology, so that this world of complex engineered systems that we are going into is governed and controlled by true knowledge and, ultimately, some form of wisdom.
In summary, then, the information systems that have developed over the past few decades are both a massive added source of complexity within our technologies and also the solution to complexity for end-users. These information systems enable us to harvest vast amounts of data, harnessing it to coordinate and optimize systems. Through the convergence and integration of a number of technologies like cloud computing, analytics, pervasive sensing and social networking, we are reshaping our technology infrastructure to make it more adaptive, process-orientated, dynamic and real-time. This gives us the capacity to greatly increase the efficiency of our systems of technology through automation and real-time coordination within a new event-driven architecture. But it also brings many security concerns that require intelligent design and management to achieve long-term, sustainable solutions.
Socio-Technical Systems

So far in this section of the book, we have talked about basic technologies, the complex engineered systems we get out of the aggregation of many different technologies, and the information systems that increasingly control and operate all this technology. In this module, we will complete the first section of the course by adding the social domain to this model. Today the information revolution is driving rapid technological change, as we live in environments that are increasingly technology-saturated. This makes the question of the relationship between people and technology more explicit than ever, and it is this relationship between the two that the domain of sociotechnical systems tries to understand, model and optimize.

Socio-technical systems is an approach to the study and design of complex organizations and technologies that recognizes the interaction between people and technology as a defining factor in their overall makeup and functioning. This applies both on the micro level of how an individual interacts with a particular technology in a linear fashion, where we are interested in interface design and user experience, and on the macro level, referring to the complex nonlinear interactions between society's infrastructure and its socio-cultural domains. The ultimate functioning of almost all technologies will involve the interaction between people and technology. Whether we are talking about a wheelbarrow, a car or a subway station, the end throughput of the whole process requires these two elements to function together. And socio-technical systems thinking builds upon systems theory to look at how the whole thing works together in effecting a joint outcome.

To give some examples of this, think about the current state of genetic engineering. Scientists and engineers may spend decades researching and developing the technologies, but if society decides it will not adopt them because of ethical and environmental concerns, then the whole exercise is somewhat futile, which is exactly what has happened in the European Union. Or, to take another example, web developers in Silicon Valley may build software with all sorts of bells and whistles, expecting everyone to be tech-savvy, but if a large percentage of the users are in fact elderly and unaccustomed to the interface, then again the actual throughput of the whole system will be significantly reduced. And this is what we are interested in with the domain of sociotechnical systems: the actual throughput of the whole system, not just its technical dimension.

Our traditional design engineering practice is based upon the use of reductionism, which involves breaking complex systems down into components and focusing on optimizing these components in isolation. The social world of people and the technical world of technology run on very different principles, and often the first stage in this process of breaking a system down is the division between people and technology. So let's first think about how these two domains differ in their nature.

Firstly, the technical domain is conceived of and designed by a relatively small number of
scientists and engineers who actually understand how things work and, importantly, are
responsible for making things work. Most of society has very limited understanding of this and
largely takes these technologies for granted. As we have previously discussed, technology is the
product of a process of rationalization through which we come to a well-defined and logical
sequence of steps for efficiently solving a particular problem, and then embody this in some
physical object or work process. Behind every large organization today there is a mass of logical
processes being performed by our systems of technology. From this perspective, humans look like
they go around with their heads in the clouds, wondering who to marry or what color shoes to buy,
without a clue as to what is really going on.

Logic is not something that typically comes naturally to human beings. Your average person is
driven by a mass of physiological, emotional and ideological needs and desires. Logically
analyzing something requires a certain amount of energy and focus that most people will typically
avoid unless specifically required. We use all sorts of heuristics and shortcuts to maintain a certain
flow to our lives. In the US only 15% of graduates are in the STEM areas of science, technology,
engineering, and math. In short, most people aren't engineers or computer geeks. They just want
to get on with pursuing their interests. The last thing they want to do is have to read and follow
each step in the instruction manual. Not only do people actively try to avoid the use of logic, they
often actually feel threatened by it. From the perspective of most people, the vast systems of logic
that support us appear empty and mechanical. They threaten our sense of meaning, values and
identity, and make us feel powerless in a world of incomprehensible complexity, where we long for
some form of unity and simplicity.

This is a somewhat hyperbolic picture of the dichotomy between the social and technical domains,
but it helps to illustrate the core constraint here. To sum it up in just a few words, we might say it is
the dichotomy between the qualitative, continuous nature of people and the quantitative, discrete
nature of technology. This divide between the fundamental natures of humans and technology,
and the friction it creates, permeates all areas of our systems of organization, from the design of
user interfaces and health care systems to people's uneasy feelings about robotics. One way of
understanding and giving structure to this dichotomy is through the DIKW framework that we
previously touched upon. This framework, which describes the hierarchical structure of
information, increasingly captures the divide between technology, which operates on the level of
data and information, and people, who in advanced economies are increasingly required to
perform knowledge work.
When we over-emphasize the technical domain we may end up with a technically efficient system,
but it will also be alienating, leaving people feeling disenfranchised and ultimately resulting in
disengagement. Conversely, when we give precedence to the social domain we can get a lack of
technical efficiency, an incapacity to automate basic processes, and a lack of technological
capabilities. Developing integrated socio-technical systems requires a balance of both and,
importantly, integration between them through interfaces that are able to translate the language of
one domain into the other.

Interfaces like the dashboard in our car, the signs in an airport, or the graphical user interface on
your computer are the contact points between the two systems that have to communicate in order
to effect the joint outcome. Interfaces are the way of communicating to people the set of
procedures required for operating the technology. They reflect the underlying logic and algorithms
through which the technology functions, but are expected to do so in a fashion that makes sense
to the end-user. On the social side, they use symbols, metaphors, and stories that people
instinctively relate to. The point of an interface is to translate the language and functionality of the
system into the language of the end user and vice versa.

But technology is typically designed within engineering-specific domains. A building may be
composed of a hydraulic system, an electrical system, heating and so on. It is the same on the
macro scale, with different companies operating airports, subways, and motorways. The services
revolution is about networking these technologies and domains into a process that end users
interact with through digital interfaces, and this is increasingly the structure of our complex
engineered systems: from end-user to digital app, to service process, to physical technology, a
multi-tiered system that works to integrate the functioning of people with our physical technologies
and reflects the hierarchy of information.
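
To make this multi-tiered structure concrete, here is a minimal Python sketch. The class and method names (App, Service, Device, tap, fulfil, run) are invented for illustration rather than taken from any real system; the point is only how a request descends from the end-user's digital interface through a service process down to a physical technology, and how the result travels back up.

```python
class Device:
    """Physical technology: executes concrete operations."""
    def run(self, command: str) -> str:
        return f"device executed '{command}'"


class Service:
    """Service process: aggregates technologies into a workflow."""
    def __init__(self) -> None:
        self.device = Device()

    def fulfil(self, request: str) -> str:
        return self.device.run(f"workflow step for '{request}'")


class App:
    """Digital interface: speaks the end-user's language."""
    def __init__(self) -> None:
        self.service = Service()

    def tap(self, need: str) -> str:
        return self.service.fulfil(need)


# the request descends the tiers; the result returns to the user
print(App().tap("book a ride"))
```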

The domain of socio-technical systems is not just about how people interact with preexisting
technologies, but also about how organizations adapt to, and evolve with, new technologies. Ever
since the advent of the industrial revolution, social organizations have been subjected to increasing
technological change that requires us to adapt to new systems and new ways of working on a
regular basis. There are a number of aspects to this change process that need to be considered in
order to achieve a desired outcome: identifying and setting the system's objectives, which requires
us to take into consideration the perspectives of multiple stakeholders; the training of the operators;
the integration of the subsystem or process into the whole system; and the stabilization of this new
pattern of working. And of course, there are many points along the way where inertia and power
dynamics can distort or even reverse the whole process.
People don't always behave rationally, particularly when a technology or new process touches
upon aspects of culture that involve a sense of identity and narrative. Some technologies get
adopted while others go by the wayside, not because of their technical efficiency but simply
because they did or did not fit with that society's beliefs, values, and identity. As our earlier
example with genetically modified crops illustrated, technologies like GMOs bring with them a
whole set of ethical and moral considerations that a society or group of people may not yet have
resolved, and thus they will either have to reject the technology or adopt it without being able to
integrate it into their value system, which will likely create further problems down the line. This is
currently the situation with many new technologies such as biotech, nanotech, artificial intelligence
and cognitive technologies, for none of which our society yet has the philosophical framework
needed to fully contextualize them and their ethical considerations.
Added to this, the way a system was designed to be used and how it is really used may well differ
significantly, reducing its overall realized functionality. Think about urban development in a city
like Rio de Janeiro, where informal shantytowns have sprouted up around the city with little regard
for the intentions of the urban planners.

The industrial age was one of standardization and mechanization. In order to reduce the
complexity of the interaction between people and technology, people were simply expected to fit in
with abstract mechanistic procedures, processes, and systems of organization, as exemplified by
the industrial age model of education and factory work. But this industrial economy is rapidly
becoming a thing of the past as manufacturing and basic information processing have become
commoditized. Post-industrial service and information economies require a new set of skills and
human capital based around innovation, entrepreneurship, education and knowledge, none of
which really happens without the engagement of the subjective and qualitative dimension of
people. This requires us to go beyond the technocratic paradigm of industrialism and recognize the
importance of the social dimension within our engineered environment. Building this next
generation of complex sociotechnical systems, in turn, requires engineering based upon diverse
skill sets and cross-domain competencies spanning the technical domains, the social sciences, and
the humanities. It requires inter-domain engineering teams. Again, this is another vector that greatly
increases the complexity of our engineered environment through an increase in the nonlinear,
networked interactions between the social and technical domains.

In this module, we have been taking an overview of the area of sociotechnical systems, which we
defined as an approach to the study and design of complex organizations and technologies that
recognizes the interaction between people and technology as a defining factor in the system's
overall makeup and functioning. We talked about the dichotomy between the social and technical
domains as one marked by the divide between the continuous, qualitative nature of social
systems and the discrete, quantitative nature of the technical domains. We discussed how the
DIKW framework can be used as a model through which to interpret and give some structure to
these sociotechnical systems. We looked at how interfaces work as the link between different
domains and how a combination of service design and information technology is
working to create a new pattern of organization, where end-users interface with digital devices to
access a service process that aggregates different technologies. Finally, we briefly touched upon
the process of technological change within organizations and how it significantly depends upon
sociocultural dynamics.
Self-Organization
In this video we will be talking about the process of self-organization within complex engineered
systems. We will firstly talk about why self-organization is a key feature and characteristic of these
technologies. We will then go on to discuss the basic ingredients that are required and the process
through which these systems self-organize. Finally, we will wrap up by mentioning how this
process leads to sub-optimal but often robust solutions.

Firstly, to refresh our memory: self-organization is the spontaneous creation of a globally coherent
pattern out of the local interactions between initially independent components. There are
essentially just two ways in which a system of technology can achieve the global coordination that
is required for it to function. Either the design pattern is imposed on the system by some external
influence, or it emerges out of the interactions between the different components internal to the
system itself. The latter is self-organization.

With the linear technologies that we previously talked about, such as bridges, chairs, and toasters,
the system is designed and controlled by the engineer and operator, who imposes an external
pattern of organization on the system in order to make it operate in a predefined, optimal fashion.
This is our traditional industrial paradigm, which we typically try to apply to all areas, from
governments running a country's infrastructure to designing residential areas and parks. With this
centralized design, the components are coordinated through a linear, top-down design pattern that
constrains the components and requires them to function in a well-defined, specific manner in order
to achieve global functionality.

This paradigm is so familiar to us that the idea of getting global coordination within a system
without some centralized plan or design may sound impossible, or like magic, but it is not. It
actually happens all the time. Think about people crossing the street. There is no one orchestrating
the process, but through the local interactions between people, they become differentiated into a
distinct pattern, with the formation of coordinated lanes going in each direction. Without formal
design and centralized coordination, components interact in a nonlinear, networked fashion. The
type of organization and patterns that result are defined by the type of interactions between the
components, not by a design pattern imposed on the system.
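
As a toy illustration, here is a minimal Python sketch of this lane formation, under invented assumptions: walkers heading in opposite directions share a set of lanes, and each one simply shifts to an adjacent lane whenever that lane has less oncoming traffic. There is no coordinator anywhere in the code, yet direction-segregated lanes emerge from the local rule alone.

```python
import random

ROWS, WALKERS, STEPS = 6, 300, 50
agents = [{"row": random.randrange(ROWS), "dir": random.choice([-1, 1])}
          for _ in range(WALKERS)]

def oncoming(row, direction):
    # local "friction": walkers in this lane heading against `direction`
    return sum(1 for a in agents if a["row"] == row and a["dir"] != direction)

for _ in range(STEPS):
    for walker in agents:
        here = oncoming(walker["row"], walker["dir"])
        nearby = [r for r in (walker["row"] - 1, walker["row"] + 1) if 0 <= r < ROWS]
        best = min(nearby, key=lambda r: oncoming(r, walker["dir"]))
        if oncoming(best, walker["dir"]) < here:   # move only if it reduces friction
            walker["row"] = best

for r in range(ROWS):
    dirs = [a["dir"] for a in agents if a["row"] == r]
    print(f"lane {r}: {dirs.count(1):3d} going right, {dirs.count(-1):3d} going left")
```

After a few dozen sweeps, most lanes end up dominated by a single direction, a global pattern that no individual walker intended.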

Whether we are talking about the Internet, a smart power grid, transportation networks or logistics
networks, in these complex engineered systems the elements have some form of autonomy to
choose their course of action. For this reason, these systems are not the product of our traditional
top-down methods of design. The overall structure of the system is a product of the local
interactions between components, with global patterns emerging out of these local interactions.
There are a number of basic prerequisites that need to be present for this process of self-
organization to play out: some form of randomness in the initial state of the system, variation
amongst the states of the components, elements with the autonomy to act, and local nonlinear
connections between the elements. We will discuss each of these conditions and why they are
important separately.

The first important condition is some form of randomness in the initial state of the system, or at
least a lack of centralized coordination. There is no possibility for self-organization when the
system is already designed and held in some well-defined and orderly structure. When a city is
centrally planned and regulated in a top-down fashion, there will be little space for citizens to
self-organize. But when immigrants move into unregulated areas around large cities, we get self-
organization in the form of shantytowns, specifically because this space is unregulated. Wherever
there is a lack of centralized control and adaptive actors interacting, we will get some form of
spontaneous emergence of order, such as the flow of automotive traffic in cities or the flow of IP
packets on the Internet.
Secondly, the components in the system need to be heterogeneous. Heterogeneity of states is
important because if all components already have the same state, then they are synchronized and
there is no need for self-organization. If all the people crossing the street are crossing it in the
same direction, then they are already coordinated. It is only when there are people going in
different directions, that is to say, components occupying heterogeneous states, that we get friction
in the system as they bump into each other, and there is the possibility to optimize and increase
the efficiency of the system by synchronizing the components' activities in order to reduce this
friction. It is often this friction created by variation that drives self-organization.

Thirdly, the capacity for components to adapt and respond to events in their environment is
important. We can get self-organization without adaptation, as can be seen in substances like
boiling water, but adaptation is a major accelerator and key element within the process. If things
can adapt, then they can easily and quickly alter their state in order to synchronize with other
components, and this is part of why information technology is driving a whole new architecture for
our technological infrastructure. Because of their lack of capacity to adapt, our traditional industrial
technologies had to be designed in a top-down fashion where everything is well-defined,
predetermined and remains static over the course of its life cycle. But as we begin to embed
communication devices in all types of technologies through shared protocols, they can synchronize
their states and self-organize to find new and optimal solutions to dynamically changing
environments.

Lastly, all these components need to be able to interact. It is only through nonlinear interactions
that the agents in the system can synchronize their states and coordinate their activities. Typically
this requires some form of protocol and interoperable platform. For example, smart cities are
enabled by open-standard IoT platforms that allow different devices to share information and
coordinate. Through open traffic control platforms, cars can communicate with parking lots in order
for the system to self-organize. By layering a telecommunications network on top of the power grid,
producers and consumers can adapt to each other and self-organize to optimize the load balance
on the system. Dense nonlinear, networked interaction between components is another key
ingredient in fostering the emergence of a globally distributed pattern of organization.

So these are the basic ingredients that facilitate the process of self-organization: some form of
initial randomness, heterogeneity amongst the components, the capacity for autonomous
adaptation, and dense interactions. But now let's think about the actual process through which
self-organization takes place. Firstly, as mentioned, we have some environment without
centralized regulation. We might call this an open platform. Examples could be a smart power grid
where anyone can join as a producer or consumer, a land area that is unregulated by urban
planners where people are free to construct buildings as they wish, or an unregulated mesh
communications network. Heterogeneous agents then join this network or platform and start to interact.

Next, components that interact most frequently and intensely begin to synchronize their states in
order to reduce friction, creating what is called an attractor. Because some small subset of
components has formed a pattern of organization and reduced friction, it will be more effective at
performing its function, and this will attract other components to this more effective pattern of
organization. For example, once a number of people have decided to occupy the same location, a
town or city forms with its own set of specialized and coordinated relations, which then enables it
to be more effective at developing large infrastructure, conducting trade and other economic
activities that increase its relative wealth per capita. This creates a positive feedback loop that
results in the attraction of others to this particular coordinated state in order to gain the benefits of
being part of this organization, as the city becomes larger and more populous.

Lastly, once the different components are aligned within different local-level attractors, these local
attractors may stay stable for a prolonged period of time. But ultimately evolution will act on the
system, and they will have to either cooperate or compete in order to create a global pattern of
organization, or the whole system will become vulnerable to being subsumed by some more
efficient external pattern of organization. The net result of this cooperation or competition is that all
elements become aligned within a global pattern of organization, and the process of
self-organization is complete. This new pattern of organization both enables the components to
function more efficiently and constrains them, by making them comply with the globally accepted
protocols of communication and behavior. As examples of this, we could cite the development of
most technologies, from video cassette tapes to bicycles and mobile phones. Different design
patterns for the technology were initially developed and competed or converged, with one design
paradigm ultimately subsuming all the others to become the default.

The net result of this process of self-organization is systems that are not so orderly and are often
sub-optimal. If we look at ecosystems, which are self-organizing, they are hugely sub-optimal with
respect to the processing of energy. Only a minute fraction of the energy that enters the system is
processed all the way up the food chain. The vast majority is wasted. The Internet is another
example. It often simply fails to deliver packets. It holds a massive amount of redundant and
virtually inaccessible information. Unlike our traditional centrally designed systems that are
specifically designed to be optimal, self-organizing systems are not designed to be optimal, and
they are typically not orderly or well behaved. But they do work, and they are often robust and
sustainable in the long term.

In these distributed complex engineered systems, none of the components has a global vision of
the whole thing. They are acting and reacting to local information. When a new person moves into
a shantytown, a new computer connects to the Internet or a new vehicle enters an urban
transportation network, they have no vision of the whole system and are only interested in their
particular local environment. And this is all the system is made of: agents following their own rules
according to the information they have within their local environment. No one is in control. There
may be someone creating the platform and protocols, but no one can control or know what is
going to emerge out of these nonlinear interactions. Tim Berners-Lee may have created the World
Wide Web, but he has no control over how it will develop, nor is anyone really capable of
predicting it. The net result of all these agents acting and reacting to each other is inevitably going
to be a far-from-optimal situation for the whole system. But self-organizing systems promote
engagement. People feel more engaged because, after all, they truly create the system. People
feel empowered and actively engaged with their social network. Although someone has created
the platform and defined the rules under which they interact, they still get to create their network
and shape its development.

Self-organizing technologies are a whole new paradigm in systems design. The networked,
IT-enabled technologies of the 21st century are increasingly distributed and adaptive. As we start
to place embedded devices in all kinds of things and network these things together, this provides
the enabling platform for the process of self-organization to operate on our technologies. Our
traditional industrial paradigm, in which technology needs to be fully designed and managed, will
likely become less dominant and have to share space with a new model in which designers and
engineers develop the platforms and protocols through which different technologies interact and
synchronize in order to find solutions. And as always, it is the interplay between the two, top-down
formal design and bottom-up self-organization, that creates the core constraint within these
complex adaptive systems.

To summarize, we have been talking about how complex engineered systems create global
coordination through the process of self-organization, which requires an initial state of randomness
and lack of control, adaptive components and dense interactions as an enabling context. It is a
process through which initially heterogeneous components come into contact, interact and
synchronize their states to create attractors, with these attractors then cooperating or competing to
give rise to a global pattern of organization that is typically characterized as being sub-optimal and
with no one in control, but also robust and sustainable.
Engineering Systems Dynamics

In this video, we will be taking a look at the nonlinear feedback loops that drive the dynamics
behind complex engineered systems. We will firstly talk about the difference between linear and
nonlinear interactions, before going on to introduce you to positive and negative feedback loops
with examples of how they relate to technology. We will present the causal loop diagram, a model
that tries to capture the set of feedback loops driving a system's behavior. Finally, we will look at
how feedback plays a critical role in the process of self-organization and the emergence of
distributed coordination. The main aim of this module is to introduce you to some of the basic ideas
within nonlinear systems theory and how they apply to complex engineered systems.

Systems dynamics is a modeling method for the analysis of complex systems that are
characterized by feedback loops and time delays. It is a holistic approach in that it gives us a view
of the primary interactions, and their feedback effects, that drive the functioning of the entire
system. It is used in many areas, from trying to model ecosystems to economies and large
engineering projects.

In simple linear systems, cause-and-effect interactions happen through a unidirectional process.
Cause A creates effect B: we hit a ball with a bat and the ball moves off in the direction we hit it.
We buy a product, use it and throw it away. These are linear processes involving cause-and-effect
interactions that are characteristic of simple systems with a low level of interdependency. We
typically tend to think of things as chains of cause and effect, often ignoring the time delays
between them and the way a change in one component may feed back to affect its source. This
linear cause-and-effect paradigm is part of the analytical method of reasoning that focuses on the
individual components in a system and uses linear equations to describe how they interact in a
well-defined manner.

But in complex systems where the components are highly interconnected and interdependent,
cause-and-effect interactions are no longer one-way, but in fact feed back on each other. In these
highly interconnected systems, a change in one component will not only affect another, or possibly
several others, but how it affects the other components will, in turn, feed back to affect its own
future state. The big idea here is that of interdependence, which is a key characteristic of complex
systems. Because of heightened connectivity, everything affects everything else and this effect
doesn't just disappear. Sooner or later it feeds back to its source.
We can't just put large amounts of CO2 into the atmosphere and expect it to disappear. It will
sooner or later feed back to affect the source, and it is these macro-scale feedback loops that
really govern the system's overall development in the long term. I say in the long term because in
the short term the system's environment may be large enough to absorb and retain this effect
without it feeding back to its source. After all, our industrial economies managed to put a lot of CO2
into the atmosphere before this effect started to really feed back and affect the economy itself, but
in the end, it has. Thus, we use the term time delay, as it may take time for these effects to feed
back to their source.

Systems dynamics recognizes relations of interdependency and provides us with models that
focus on these two-way interactions, through what are called causal loop diagrams. Causal loop
diagrams, or CLDs, are maps that model the set of relations within a system. They try to capture
how a change in a variable associated with one component in the system will affect another, and
these causal relations are called causal links. When causal links between related variables feed
back on each other, we have what is called a feedback loop. With feedback loops, we are asking
how the relations between two variables affect each other, and there are really just two types of
feedback loops: positive and negative feedback.
A positive causal link is one where the variables associated with two components move in the
same direction. If one goes up, the other goes up also; if it goes down, the other does likewise. A
positive feedback loop is when an increase in one variable affects another in the same direction,
and this change then also feeds back to change the source variable in the same direction, making
it a self-reinforcing loop. So, for example, the relationship between honeybees and flowering plants
is a positive feedback loop. The more bees we have the more plants we can have, and the more
plants we get the more bees that can be supported. In other words, more begets more. Such
feedback loops generate exponentially escalating behavior, which can be very beneficial or very
detrimental. Traffic jams are another example of positive feedback. The more cars that join the
traffic jam the slower it will move, and the slower it moves the more cars that will join. More begets
more.

This example of traffic gridlock is called a vicious cycle, because each iteration of the cycle
reinforces the previous one, continuing the cycle in the direction of its momentum until an external
factor intervenes and breaks it. Hyperinflation is another good example of a vicious cycle: through
positive feedback, the value of the currency keeps spiraling down. Investment in infrastructure
might be an example of a virtuous cycle. The more we invest in infrastructure, the more efficient
our economy may become in the future, meaning we can retain more public revenue for
reinvesting in infrastructure, and the cycle begins again. But of course it can't go on forever, and
that is why positive feedback loops are typically seen to be unsustainable. We might think of
urbanization as a positive feedback loop. The more people that move into urban centers, the more
resources are concentrated in them, allowing them to leverage economies of scale to become
more economically efficient. But this also has a negative externality in that it reduces the rural
population, and thus the capacity to provide desired services in the countryside. Both of these, the
positive feedback and the negative externality, work to attract more people into the city. But again
this creates an unsustainable dynamic, and we end up with overpopulated and under-serviced
megacities like Jakarta and Lagos.

Negative feedback loops are relations where the variables associated with two components feed
back to affect each other in the opposite direction: the more of one, the less of the other. The more
fishing we do the fewer fish there will be, which will feed back to reduce our capacity to fish during
the next season. This is also called a balancing loop. Thermostats use negative feedback to
balance the temperature of your house, as do all forms of control systems. In order to maintain the
state of the system, they regulate around some optimal equilibrium state. The ballcock or float
valve is another example of a control system that uses negative feedback, in this case to control
the water level in a cistern. When the water rises, the float rises and cuts off the inflow, and vice
versa, holding the water level within some set of parameters.
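
To see the two loop types side by side numerically, here is a small Python sketch with made-up parameters: a self-reinforcing adoption loop where each adopter recruits more adopters, and a thermostat-style balancing loop that steers temperature back toward a set point.

```python
adopters, temperature = 10.0, 15.0
GROWTH = 0.2                  # recruits per adopter per step (illustrative)
SET_POINT, GAIN = 20.0, 0.3   # thermostat target and correction strength

for t in range(10):
    adopters += GROWTH * adopters                     # positive loop: more begets more
    temperature += GAIN * (SET_POINT - temperature)   # negative loop: the gap is damped
    print(f"t={t:2d}  adopters={adopters:7.1f}  temperature={temperature:5.2f}")
```

Run it and the adopter count escalates exponentially while the temperature converges smoothly on its set point, the characteristic signatures of reinforcing and balancing loops.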

Within complex systems, these causal loops do not exist in isolation. There are in fact many
different positive and negative feedback loops and links interacting, and thus we need to draw a
whole map of these different causal interactions and loops in order to understand the system's
overall dynamics. Real-world complex systems are typically held in their current configuration, or
manifest a certain behavior, because they are subject to many positive and negative feedback
loops at once, with the strength of these different loops changing over time to create some
dynamic state.
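
One simple way to make such a map computable is to store the causal links as signed edges and read off each loop's polarity as the product of its link signs: a positive product means a reinforcing loop, a negative product a balancing one. A minimal Python sketch with an invented population example:

```python
# causal links as signed edges: +1 = variables move together, -1 = oppositely
links = {
    ("births", "population"): +1,
    ("population", "births"): +1,   # together these form a reinforcing loop
    ("population", "deaths"): +1,
    ("deaths", "population"): -1,   # and these form a balancing loop
}

def loop_polarity(cycle):
    """Multiply the link signs around a closed cycle of variable names."""
    sign = 1
    for src, dst in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= links[(src, dst)]
    return sign

print(loop_polarity(["births", "population"]))   # +1: reinforcing, more begets more
print(loop_polarity(["population", "deaths"]))   # -1: balancing, more begets less
```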

System dynamics models capture aggregate variables. As such, they give us general models of
behavior and trajectory, answering questions such as: what is the general shape of the graph
generated by the system over time? Will it be oscillatory when negative feedback dominates, or
will we get nonlinear exponential growth and collapse when positive feedback dominates? The
outcomes of these models should therefore not be interpreted as predictions but as general
overviews, and this is often the best we can hope for when dealing with these very complex
engineered systems. They help us to reason about what underlying structures need to be changed
in order to change the system's actual mode of behavior.
This is classical systems thinking, and it can be of great value because our traditional, more
analytical modeling methods are often very brittle: being based on purely quantitative methods,
they are either exactly correct or they give us figures that blind us to the overall trajectory. The
classic example of this is macroeconomic analysis based upon linear models, which may be
exactly correct when it tells us that the Chinese economy will grow by 6.8 percent next year, but
completely fails to predict the massive nonlinear changes brought about by a financial crisis. The
nonlinear models of system dynamics keep us from being blinded by overly analytical methods
and help us to anticipate the nonlinear changes that are driven by positive feedback. These
nonlinear dynamics are behind many important phenomena within complex engineered systems,
such as disruptive innovation, the emergence of industry standards, lock-in effects, and economies
of scale within manufacturing.

System dynamics models help us to try and focus on the real drivers of change and sources of
problems. They also let engineers and policy makers experiment with simulations in order to
model what effect some intervention in the system will have on other variables and on the whole
dynamic. They have been used for everything from modeling the workings of the energy sector, to
the development of innovation and environmental policy making, to the modeling of urban
dynamics and critical infrastructure protection. Systems dynamics was used as the modeling
framework for the Limits to Growth publication, where researchers looked at how the variables of
population, industrialization, pollution, food production and resource depletion interacted to create
the long-term trajectory of the global economy and a possible limit to its growth.

Researchers at Delft University in the Netherlands use system dynamics models to try to
understand and simulate the behavior of Europe's electrical power grid, as different actors such as
household consumers, power generators and policy makers, all with different agendas, interact to
define the state of the system. By understanding the different motives the actors are under and the
causal links within the system, it is possible to start creating computer simulations and altering
certain parameters in order to see what happens. We might change government subsidies to see
how this affects the price producers set per kilowatt-hour, what effect that has on consumer
demand, and how this might feed back to affect the producer's price again.
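
A heavily simplified sketch of that kind of experiment in Python. All the numbers and response rules here are invented for illustration and bear no relation to the actual Delft models: a subsidy lowers the consumer price, demand drifts in response to the price, and demand in turn feeds back on the producer's price.

```python
def simulate(subsidy, steps=50):
    price, demand = 0.32, 100.0                           # illustrative starting values
    for _ in range(steps):
        consumer_price = price - subsidy                  # the policy lever
        target = 100.0 - 400.0 * (consumer_price - 0.25)  # toy demand curve
        demand += 0.5 * (target - demand)                 # consumers adjust gradually
        price += 0.0005 * (demand - 100.0)                # more demand -> higher price
        price = max(price, 0.01)
    return price, demand

for subsidy in (0.00, 0.05, 0.10):
    price, demand = simulate(subsidy)
    print(f"subsidy {subsidy:.2f}: producer price={price:.3f}, demand={demand:.1f}")
```

In this toy, the producer's equilibrium price simply absorbs the subsidy while demand settles back to its baseline; the point is not the particular result but the ability to rerun the same loop under different policies and watch the feedback play out.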

Most of all, causal loop diagrams help us to understand self-organization. As we saw in the
previous module, complex engineered systems are distributed systems without centralized control.
Patterns of order emerge through the process of self-organization, and feedback loops are key to
the dynamics of this process. Without nonlinear interactions and feedback loops, self-organization
doesn't really happen. For example, negative feedback is a self-organizing mechanism for load
balancing. If we think about toll booths on a highway, the system has a certain load, and as new
cars approach, they navigate towards the booth with the least load. Because of negative feedback,
the more cars there are at a particular booth, the less likely a new car will be to enter that lane, and
vice versa. There is no one coordinating this process. The load balance is an emergent
phenomenon of self-organization through negative feedback, and it works to balance and stabilize
the system.
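
The toll booth example fits in a few lines of Python. The arrival and service rates below are invented; the only rule is the negative feedback itself, namely that each arriving car joins the shortest queue it can see.

```python
import random

BOOTHS = 4
queues = [0] * BOOTHS

for tick in range(500):
    for _ in range(random.randint(2, 5)):            # a burst of arriving cars
        # negative feedback: the longer a queue, the more it repels newcomers
        shortest = min(range(BOOTHS), key=lambda b: queues[b])
        queues[shortest] += 1
    queues = [max(0, q - 1) for q in queues]         # each booth serves one car per tick

print("queue lengths after 500 ticks:", queues)      # loads stay roughly equal
```

No dispatcher assigns cars to booths; each arrival reacts only to the queue lengths in front of it, and the balanced load emerges.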

The network effect that is behind the rapid adoption of many new media technologies like
Facebook and Twitter is an example of self-organization through positive feedback. The network
effect describes how the value of a technology or system depends upon the number of users of
that technology, the telephone being a classic example. Without any users it has no value, and
every time a new person joins the network, this adds some value to the technology for all other
users. The more people that join the network, the more attractive it becomes to the next
prospective user. A new technology that can leverage this positive feedback loop can develop very
rapidly to become a default solution, and this is one reason why the networked technologies of IT
are highly disruptive and can scale at an unprecedented rate. A thing to note here is that the value
of the technology is really in the network of connections, not so much in the technology itself.
Facebook may not be the best social network in terms of design, but everyone joins it because of
the wealth of the network.
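
As a rough numerical sketch of this, assume a Metcalfe-style value proportional to the number of possible connections, n(n-1)/2, and let each non-user's chance of joining rise with that value. Every figure below is invented for illustration.

```python
import random

random.seed(1)
users, population = 5, 10_000
history = []
for day in range(80):
    value = users * (users - 1) / 2          # possible connections: n(n-1)/2
    p_join = min(1.0, value / 1_000_000)     # each non-user's daily odds of joining
    users += sum(1 for _ in range(population - users) if random.random() < p_join)
    users = min(users, population)
    history.append(users)

print(history[::10])   # a long slow start, an explosive middle, then saturation
```

The trajectory is the classic S-curve of network-effect adoption: almost nothing happens for weeks, then positive feedback takes hold and the remaining population joins within days.
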
Lastly, complex engineered systems like cities, logistics networks, and the Internet are nonlinear,
and nonlinear systems are deeply counter-intuitive. They exhibit many phenomena such as the
butterfly effect, emergence, long-tail distributions and so on, all of which means they can produce
radically disruptive events, what are called black swans, which are statistically virtually impossible
within linear systems but do occur in nonlinear systems. We are programmed to think in a linear
fashion. As soon as we identify some stable pattern, we believe we can predict the future by
projecting that pattern forward in an incremental fashion, and for linear systems this is often
possible. But complex systems, with all of their nonlinear interactions, iteration and feedback, don't
develop in an incremental fashion. Instead, periods of transient incremental progress and stability
are punctuated by rapid seismic shifts, what are called phase transitions, on the other side of
which the system is very different. Because of these phase transitions, the future emerges in ways
that are often very difficult to predict with any accuracy, and chaos theory has taught us that there
is a deep uncertainty to the development of nonlinear systems.

To wrap up then, we have been talking about system dynamics, a modeling language that tries to
capture the nonlinear feedback loops that are a key feature of complex engineered systems. We
talked about the two different types: positive feedback, where more begets more as it works to
amplify change, and negative feedback, where more begets less, working to dampen down
change. We talked about causal loop diagrams, which try to model the macro-scale set of links
and feedback loops in order to capture the key drivers behind the system's functioning, and how
these models can be used to run simulations of the system under different conditions. Lastly, we
looked at how feedback loops are a key component in the process of self-organization and a
distributed method of self-regulation, but also how they can produce black swan events and rapid
change through phase transitions.
Path-Dependency & Attractors
In this module, we will be continuing our discussion of self-organization and nonlinearity within
complex engineered systems, as we talk about path dependency and attractors. We will look at
how the process of self-organization and evolution results in systems that are sub-optimal and
contingent on historical events. We will talk about the network effect and negative externalities and
how they lead to lock-in. The key thing to take away from this module is an understanding of the
process through which attractors are created and work as defining factors in the evolution of our
technology infrastructure.

Because we can fully control and design simple linear technologies like a refrigerator, we can take
our design and, by simply inputting enough resources, produce the technology all in one go, rolling
units off the production line. But complex engineered systems like cities and the Internet don't just
pop into existence like this. Instead they start out simple and go through a process of evolution to
become complex. The important thing to note here is that they are the product of their evolutionary
history, and they carry this history with them in their DNA and overall makeup. In the same way
that the fridge is the expression of the design that created it, these complex engineered systems
are the expression of the evolutionary process that created them.

We often think of engineering and technology as driven by efficiency, and this may be the case on
the micro level where we can fully control the design and production process for our refrigerator,
but on the macro scale, with self-organization, we often arrive at sub-optimal solutions due to
inertia and economic and socio-political dynamics. To illustrate this, we might think about the fact
that these complex engineered systems typically involve large amounts of fixed capital and sunk
costs. A city gets one chance to build its expressway out to the airport and that is it. It will be there
for the next fifty years or more. The same is true of the Internet: now that we have built it, there is
no option to build another one just because it is in some way suboptimal. History has played out to
create these things and there is no going back. We have to live with and work within the context
they created, and this is what we call path dependency.

Path dependency describes how the set of decisions one faces in the present is limited by the
decisions one has made in the past, even though past circumstances may no longer be relevant.
Even where previous choices were made by chance or on limited information, and better options
are now available, it is still easier to continue along a pre-existing sub-optimal path than to create
an entirely new one. In other words, the present is never a clean slate where we are free to make
any decision. It is in fact contingent on how we got to this point. In a very broad sense, it means
that history matters.

When we look around us we can see that our systems of technology are very much the product of
a path-dependent process. Why do we still have the QWERTY keyboard that was designed for
typewriters when it is not the most efficient layout for today's keyboards? Why do we still use the
standard-gauge train track, designed two centuries ago for horse-drawn coal carts, to run today's
powerful trains when it is far from optimal? Why is it so difficult for us to switch to renewable energy
sources? Why do businesses all cluster in a particular area like Silicon Valley when there is nothing
special about that particular location? In all of these examples, the choices made in the past about
which technology to adopt shape the choices we make today.

Path dependency is particularly acute in complex systems because of their high degree of
interconnectivity and, more importantly, interdependency. Things don't happen in isolation. During
the system's development, parts of the system interact with others and co-evolve to become
interdependent. Path dependency is also particularly acute in these complex engineered systems
because of their hierarchical structure. They are multi-tiered, with end-user technologies depending
upon infrastructure technologies lower down the solutions stack, what are called platform
technologies.

When a new platform technology is adopted, like Microsoft's Windows operating system in the
1990s, over time many new technologies are built on top of it and become dependent upon it: a
whole ecosystem of new applications, programming languages, firmware, hardware, vendors,
instructors, technicians and so on. This means that small changes in the platform technology may
have a large effect across the ecosystem that has been specifically designed for it. And this is
often the case for infrastructure systems, like transport networks and electrical power grids. They
are deeply embedded within the socio-economic and technological fabric of a society, with many
deep dependencies.

The basic theory of path dependency is that it is the product of a self-organizing process in which
some small initial event, often somewhat arbitrary in nature, comes through positive feedback to
create a lock-in effect. This lock-in effect leads to negative externalities and inertia, and drives a
particular course of events that is difficult to change in the future. So let's analyze this process a
bit further to try and understand it better.

Path dependency maintains that the starting point, as well as feedback loops along the way, affect
and shape the end outcome of the technologies of today. In the language of chaos theory, this is
called sensitivity to initial conditions, more popularly known as the butterfly effect. Because of
feedback loops, some small, possibly random event in the past can in fact turn out to have very
significant consequences in the present or future, and we cannot predict this process a priori. We
have to run, or simulate the running of, the system in order to understand its future state. An
example of this might be the initiation of the First World War through a relatively small event in
Bosnia. There was no way of knowing that this small event would lead to a world war and the
reshaping of Europe's borders, because the phenomenon emerged out of the nonlinear
interactions during the system's development.

Next, positive feedback and negative externalities take hold to drive the system's development.
Economies of scale are a good illustration of this. The more users there are of a particular
technology, the more we can leverage economies of scale to reduce its price, which will in turn
feed back to attract more users. This is positive feedback, and it is how some companies achieve
exponential growth as they ride this wave of positive feedback during the early stage of a new
technology's life cycle. Added to this, we have the network effect. The network effect arises
because the value of many technologies lies in their capacity to interoperate with other users.
Urban mass transit systems exhibit the network effect. Every time we build a new station it adds
value, not just for users of that particular station, but for the entire network, as everyone now has
more possible destinations.

Both positive feedback and the network effect are powerful forces that, once they take hold of a
particular technology, will amplify it. To this we can add negative externalities, meaning that when
someone chooses to use a particular technology, that choice decreases the value of a competing
technology for all of its users. Once a particular industry or company adopts a standard, this will
crowd out others, because the more a particular technology or standard grows, the greater the
cost to other people of choosing not to use it. This makes it very difficult to change a technology or
standard once it has taken hold, even if alternatives may be more efficient. Thus, this preexisting
technology is essentially being subsidized by the network effect and by the negative externality of
not being able to interoperate with others if you change.
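
This dynamic is captured by a classic increasing-returns model of competing technologies, in the spirit of Brian Arthur's work on lock-in and the Polya urn. In the Python sketch below, with invented parameters, each new adopter picks technology A or B with odds that rise faster than linearly with the installed base, so tiny random differences among the early adopters get amplified into near-total lock-in, and which technology wins varies from run to run.

```python
import random

def run_market(seed, adopters=10_000):
    rng = random.Random(seed)
    a, b = 1, 1                                  # one early adopter each
    for _ in range(adopters):
        # superlinear (increasing-returns) preference for the larger installed base
        if rng.random() < a**2 / (a**2 + b**2):
            a += 1
        else:
            b += 1
    return a / (a + b)

for seed in range(5):
    print(f"run {seed}: technology A ends with {run_market(seed):.0%} of the market")
```

Every run starts from the same 50/50 position, yet almost every run ends with one technology holding nearly the whole market, and the winner depends only on early chance events: path dependency in miniature.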

This positive feedback and these negative externalities combine to create an attractor, meaning
that once they have taken hold around a technology, they work to subsidize that technology and
make it an easier solution than any number of other possible solutions, as it becomes the default.
Because the other options are now more costly or difficult, this technology has an attractor built
around it. In non-mathematical terms, an attractor is a set of states towards which a system will
naturally gravitate from any given initial state, and within which it will remain unless significantly
perturbed. This is essentially the same thing as a default, where a default means a value or state
of a system that is automatically selected if no other option is specified. New entrants to the
industry, or new adopters of the technology, without specific reason to do otherwise, will adopt this
default technology because of the attraction around it. This subsidizing of a technology solution
that comes with the network effect and negative externalities, and the attractor space it creates,
results in inertia, the resistance to change.

An example of this might be what is called carbon lock-in, referring to the self-perpetuating inertia
created by large fossil-fuel-based energy systems that inhibits the adoption of alternative energy
technologies. Now that we have built up sophisticated machinery for extracting and processing
petroleum and the combustion engine has become a default technology, the industry is being
subsidized by economies of scale and the network effect, meaning that because of historical
events we can produce a barrel of oil very cheaply. And if you have a barrel of oil, you can use it
to do almost anything, from making raincoats to greasing your car's wheels to trading it on the
futures market. It is interoperable across a wide set of technologies, giving it the network effect, an
attractor and thus inertia.

All of these, positive feedback, the network effect, and negative externalities, mean that once you
have decided to go down a particular path, the choice is self-reinforcing and excludes other
possibilities in the future, creating the inertia of the lock-in effect. Breaking out of this will require
either a greatly more efficient technology coming along or very effective organization enabling
people to cooperate on switching to better available solutions. And cooperation is a very important
aspect of this. It would take widespread cooperation for us to globally standardize the electrical
plug or the train track gauge, and this is an example of how the sociopolitical domain influences
the development of the technical domain.

Thus, this inertia of the lock-in effect is not just a technical phenomenon but also a socio-cultural
one. Ways of doing things become embedded within a culture, and there will be resistance to
change. In the 19th century, horses mattered. Today no one really cares about horses; instead
cars matter and have significance to people. Advertising companies create stories about them,
and they become part of our culture and way of life. People like to think that their lives have
meaning and that things are the way they are for some reason. Most people don't like the idea that
their lives and the world around them are in some way arbitrary. No matter how impersonal these
technologies might seem, they are part of our lives, and we create stories around these things to
give them meaning. Added to this is the uncertainty of change. Most people don't like uncertainty,
and they will stay with a particular pattern of organization, technology or solution because it is
known and predictable, again creating inertia due to socio-cultural factors.

In this module, we have been talking about path dependency, as we looked at how complex
engineered systems are the product of a self-organizing evolutionary process, and how this
process may involve sensitivity to some relatively arbitrary initial condition. Due to the highly
interconnected and interdependent nature of these systems, this initial event can become
amplified as the network effect and negative externalities kick in to create an attractor, with the
technology becoming a default standard and inertia emerging. The net result of all of this is an
outcome that is far from an optimal equilibrium solution, and instead contingent on the system's
history, which is what we call path dependency.
Multi-Level Engineered Systems
In this module, we will be looking at the process of emergence, one of the central concepts within
systems theory. We will first define what we mean when we use this term. We will talk about how it
gives rise to what are called integrative levels and multi-tier systems that have distinct, irreducible
levels to them. We will look at how this process of emergence can lead to macro-scale fractal
patterns of organization, as a complex dynamic between bottom-up self-organization and top-down
coordination develops. The main learning objective of this module is to understand how
emergence creates a macro-scale, multi-tier architecture to our engineered environment.

Emergence is a process whereby larger entities or patterns of organization arise through
synergistic interactions among smaller or simpler entities that themselves do not exhibit such
properties. Emergent properties are thus those that belong to whole systems. They only arise
when all the parts are put together in a particular manner. The functionality of a system of
technologies that has any degree of complexity is in many ways an emergent property. It is only
when we put all the parts of our car together that we get the functionality of a vehicle of
transportation. And many properties of this car, such as safety, security, and reliability, are
emergent properties, meaning we have to wait until the car is fully put together before we can start
to do testing on it. There is no point in taking the car for a test drive when it is half built, because
the functionality that we wish to test has not emerged yet. These emergent properties cannot be
studied by physically decomposing a system and looking at the parts in isolation, what is called
reductionism. They can, however, be studied by looking at each of the parts in the context of the
system as a whole. A microprocessor can only be properly understood by looking at its function
within the whole computer system it is a part of.

Emergent properties are a product of the interactions between the components within a system
and typically cannot be deduced by reference to the properties of the parts. Thus, emergence
typically produces novel phenomena that we could not have predicted until we ran the system and
all the parts interacted. An example of this might be our concerns around technologies like genetic
engineering or algorithmic trading. We can fully analyze and understand how an individual trading
algorithm behaves, or what effect altering the genes in a plant has on that plant in isolation. But
because ecosystems and financial markets are complex systems in which the behavior of the
whole is an emergent product of the interactions between the parts, we do not know what
emergent behavior will arise from having many different algorithms interacting, or different
genetically modified plants coevolving within a whole ecosystem. The net result is an emergent
phenomenon, and we cannot deduce it from analyzing the parts in isolation.

As a side note, we will just mention that emergence leads to one of the key concepts within
complexity theory, that of uncertainty. The fact that the future emerges is a key source of the
fundamental uncertainty within complex systems. If we take something like the Internet, we don't
know what future technologies will be built on the network or, more importantly, how those
technologies will combine to form new possibilities. In this world of complexity, the future is not just
unknown. It may well be unknowable, and this fundamental uncertainty changes our whole
approach to the future. This is a little bit outside our discussion here, so we will move on for the
moment.

Emergence gives rise to new levels of organization, what are called integrative levels. The theory
of integrative levels describes how new levels of organization emerge out of lower levels of
complexity. To understand the relevance of this to technology, we might think about how we
needed to have the agricultural revolution before we could have the industrial revolution, and in
turn needed to have both of these to have the information revolution. A mobile phone without any
farms to feed us would be pretty useless. The terms platform technology or multi-tier architecture
are used to capture how technological systems are built on top of and enabled by others. A
functioning urban center that provides a high quality of life to its citizens is an emergent property of
multiple layers of technological infrastructure. Each layer needs to be properly integrated to enable
the technologies it supports.

Although emergent properties can arise without self-organization, as in our former example of the
car, emergence is also a product of self-organization. With self-organization, individual components
interact and synchronize to form patterns, and out of this emerges a new level of organization. This
process of emergence doesn't just stop at one level. Elements can interact and self-organize, with
new levels of organization emerging; this new system then starts to interact with other systems in
its environment, with the net result being that yet another level emerges. The parts in our car give
rise to the global functionality of the car, but this car is then put into operation within a
transportation system and interacts with other cars, and we get the emergence of traffic. Whereas
the car was produced through a formal design process, traffic emerges through the self-
organization of all the individual cars interacting. Thus, through this process of emergence, we get
the development of multi-tier systems.

This multi-tier, hierarchical structure can be seen in the formation of urban centers. From the
perspective of technology analysis, urban networks are really the fabric of our engineered
environment on the macro scale. They are dense concentrations of integrated infrastructure,
technology and services that have emerged over a long process of evolution, and they have a
hierarchical structure to them. From villages to towns to cities to metropolitan areas, this hierarchy
is described by central place theory, a geographical theory that seeks to explain the number, size,
and distribution of urban centers.
It describes how certain differentiated services emerge at a certain scale threshold. A village can provide some set of basic services; with a collection of villages, we can get the emergence of a town that will provide certain differentiated services in its functioning as a regional hub. And again
with a dense enough concentration of towns, we will get the emergence of a city and so on, all the
way up until we get globally differentiated metropolitan areas where one can access certain
advanced services that are not available anywhere else, as would be the case with global cities
like New York, London or Tokyo. This is an example of an emergent multi-tier system through
distributed self-organization.

As we go up this hierarchy through different tiers, the economic infrastructure fundamentally changes, from serving the function of primary production to manufacturing to services, information and knowledge activities. This is the three-sector hypothesis of the development of our economic infrastructure, and it describes the macro-scale structure of our global economy. Each level of this hierarchy
allows for greater specialization and differentiation. A small village serving a few hundred people
can only really provide the basic services that the mass of people need, but a global city like
Singapore can provide highly specialized financial products because it plays a differentiated role
within South East Asia and the global economy as a trade and financial hub.

As we go up this hierarchy, there will be thresholds and tipping points beyond which we get the emergence of a new phenomenon. One way to think about tipping points is that many emergent phenomena are discrete, meaning either you have them or you don’t; either a city has an airport or it doesn’t. You can’t have half an airport. But many factors are also continuous, like the population of a city. You don’t go from one million people to two million. There are many small steps along the way. When we combine these two kinds of variables, because one is changing continuously and the other in a discrete fashion, we get tipping points. To illustrate this, imagine the government in a country makes a policy that once a city reaches a threshold of a million people then it will fund the building of an airport. The result is that the population may grow steadily for many decades or even centuries without any airport. Then just a few more people are added to reach a million and we suddenly get a flip within the discrete variable, from a city without an airport to one with an airport, and this flip came about through a very small change to the continuous variable. This is a somewhat stylized example, but it should help to illustrate the dynamics behind thresholds and phase transitions.
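
To make this dynamic concrete, here is a minimal sketch in Python; the one-million threshold and the growth rate are illustrative assumptions, not data from any real city:

# A continuously growing population (continuous variable) triggers a
# discrete flip in airport status once it crosses a policy threshold.
THRESHOLD = 1_000_000  # policy: fund an airport at one million people

population = 900_000
has_airport = False
for year in range(1, 31):
    population = int(population * 1.004)  # slow, steady growth
    if not has_airport and population >= THRESHOLD:
        has_airport = True
        print(f"Year {year}: population {population:,}, airport built")

For decades the discrete variable stays flat while the continuous one creeps upward; the flip then happens in a single step.
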
This hierarchical structure that emerges within complex systems often creates patterns that repeat themselves at various levels of scale, what are called fractals, such as can be seen in seashells, the emergent patterns of a snowflake and the macro-scale structure of our engineered environment. The emergent pattern described by central place theory is a fractal with this scale-invariant property, where we find the same network pattern on the micro level of an individual village as on the macro level of a national urban network.
Characteristic of these fractals are power law distributions, which describe a power relationship between the frequency of occurrence of a phenomenon and the scale of that occurrence.
Urban networks have been shown to follow this power law relationship between the size of a city
and how many there are of them. This is quite remarkable, that through a somewhat chaotic, self-
organizing, evolutionary process we get this macro-scale pattern of organization that has a
quantifiable regularity to it. This fractal structure is a very economical way to create a macro-scale
pattern. Through iteration of some simple rule, we get the same structure on the macro and micro
level.
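
As a hedged illustration of this regularity, the following Python sketch samples synthetic "city sizes" from a power law and recovers the scaling exponent from the rank-size relationship; the data here is generated, not empirical:

import numpy as np

# Sample synthetic "city sizes" from a Pareto (power law) distribution
# and estimate the slope of the log-log rank-size (Zipf) plot.
rng = np.random.default_rng(42)
sizes = np.sort((rng.pareto(1.0, 10_000) + 1) * 1_000)[::-1]  # largest first
ranks = np.arange(1, len(sizes) + 1)

# A straight line in log-log space indicates scale invariance.
slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"Estimated rank-size exponent: {slope:.2f}")  # close to -1, as in Zipf's law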

But this scale invariance does not mean that the system is the same on its different levels. It is
simply a global structural pattern that emerges within nonlinear systems. Through emergence and
phase transitions properties describing one level of a complex system do not necessarily explain
another level, despite how intrinsically connected the two may be. At each level of complexity, new
laws, properties, and phenomena arise with their own internal dynamics that are specific to that
level of organization and cannot be reduced to simple aggregates of lower level phenomena.
Different functional levels of our economic infrastructure run on very different principles. The
primary sector, industrial sector, and services sector are all governed by their own internal
dynamics and set of rules, thus applying an industrial logic to services simply doesn’t work. One
may emerge out of the other, but they are based on fundamentally different rules.

With this process of emergence and the creation of a multi-tier system, we get a complex dynamic
forming between the bottom-up process of organization and the global pattern that has emerged,
as it feeds back to enable and constrain the components on the local level. Being part of a city and its macro-scale pattern of organization enables us to do more, supported by a vast technology infrastructure that gives us access to a wide array of services. But it also constrains us, as
there are global standards and rules that have to be followed in order to coordinate the system on
the macro scale.

As an example, we might think about the phenomenon of urban gardening in Detroit, USA, where
due to a mass exodus of people, there is a significant amount of unused land within the city. Locals
have moved in to start small garden farms on these open spaces. This is a bottom-up self-
organizing process where people are simply reacting to local phenomena, but it is in strong tension
with the macro-scale pattern of organization, as an industrial city like Detroit plays a differentiated
role within a region as a manufacturing, commercial, residential and economic hub. The macro-
scale pattern of organization that has emerged is not designed to accommodate this local self-
organizing phenomenon. Because of the irreducible nature of emergence, this tension between bottom-up and top-down organization is a fundamental phenomenon within complex multi-tier systems, and trying to resolve it is a key design and engineering challenge.

In this video we have been talking about the process of emergence, how novel properties of whole systems emerge through the interaction of their parts, properties that cannot be deduced by analyzing the parts in isolation, how this process of emergence gives rise to new levels of organization called integrative levels, and the formation of multi-tier systems. We looked at how this process involves differentiation, tipping points and may result in fractals, as structural patterns repeat themselves on various scales. Finally, we discussed the irreducible nature of emergence and the complex dynamic between bottom-up and top-down forms of organization that is an inherent part of these complex engineered systems.
Technology Networks

In this video, we will be taking an overview of technology networks and network analysis, an
approach to the analysis of technology that is based on network theory and focused on interpreting
our engineered environment in terms of connectivity and network structure. We will be looking at
how globalization and the Internet of Things are driving heightened connectivity, and the
emergence of a new architecture, what we will call technology networks, where our engineered
systems become unbundled from their traditional vertically integrated structure, distributed out and
reconfigured through networks. This module is an overview. We won’t be going into any of the details of network analysis here. Our main learning objective is to understand how heightened
connectivity is fundamentally reshaping our technology infrastructure and giving rise to the
emergence of these technology networks.

Over the past few decades, with the rise of information technology and globalization, we have networked our world, with global logistics networks enabling global manufacturing networks, multinational gas and power grids where energy gets traded across borders, and dense multi-modal urban transportation networks. With all of these networks being supported by telecommunication networks on all levels from the local to the global, today our daily lives are embedded within and enabled by a mass of technology networks, and as we move farther into the 21st century, this is only set to increase, as it has become apparent that networks are the fundamental organizational structure of the information age.

Understanding these networks is central to analyzing our globalized world of what is sometimes
called hyper-connectivity, brought about by the exponential growth in connectivity across almost all
areas. But this huge and rapid proliferation in connectivity has left us in a world of often unknown
interconnections and interdependencies that we are still scrambling to make sense of. Modeling and analyzing these networks is the subject of the domain of network science. The study of
networks has in just a few decades gone from almost complete obscurity to one of the hottest
areas of research today by combining the formal mathematical language of graph theory with
network analysis software and new data sources.

As we go from a system with a relatively low level of connectivity to one with a very high level of
connectivity, the make-up and behavior of the system change fundamentally. In relatively isolated
systems, our focus is on the components and their properties. Due to the high cost of interaction,
the system is typically bound into a centralized monolithic configuration to reduce the
organization’s overall cost of transactions. But when we reduce the cost of interaction, as IT,
transport and other innovations have done, then connectivity increases and the system can
become unbundled from this centralized configuration as components become distributed out and
re-coordinated through the network. As an example, we might think about manufacturing.
Traditionally, the majority of components for a technology were manufactured in a single factory, or at least by a single company. As transportation and outsourcing costs have dropped, manufacturing
processes have started to span the globe integrating many diverse producers to deliver a finished
product. Going forward, manufacturing is set to become ever more distributed as it becomes
Internet-based with the rise of digital manufacturing and 3D printing.

Connectivity is really a very abstract concept and in many ways, it is quite counter-intuitive,
because it really requires us to see things in a different way. Network analysis is a very practical
tool but it is also a new paradigm. It gives us a more appropriate way of looking at these highly
interconnected systems, one that is less focused on the static components in the system and more
on the nexus of relations and how this shapes and defines the components. This is a very different way of seeing things from our traditional analytical approach. It is a paradigm shift that is central to
understanding this networked world.
With technology network analysis, we are asking how the technology component is created by the
network. At a certain level of connectivity, we stop asking how the components create the
connections, and things become flipped around as we start to ask how the network creates the
components. When the level of integration is high enough and the cost of interaction low enough,
there will be very many interactions as we get the emergence of an integrated system that needs
different functions performed, and this feeds back to reshape the components. This is quite
abstract so we will take some examples to solidify it.

We might ask how the city of Dubai has gone from complete insignificance as an air transportation center to becoming a global hub, surpassing London and Tokyo, all within just a few years. To explain this, we need to understand the network. Dubai lies at an important stop on the trade routes between Europe and Asia, a key point on aviation’s new Silk Road, and it is within an eight-hour flight of two-thirds of the world’s population. Dubai’s rise as an air transportation hub is largely
because it connected into the global air transportation network and performed a specific
differentiated function that the rest of the network required. This differentiated node was created by
the network. To go back to our manufacturing example, China’s rise as a manufacturing center
happened because it connected into the global logistic network and performed a specific function
that was required by the rest of the network.

The point that is being illustrated here is that technologies don’t just happen in isolation. They are
the product of a network that is delivering some service, and they emerge out of this because they
perform some function that the network requires to fulfill that service. Although we have been
talking about this on the macro level in the form of globalization, it is of course also a micro-level
phenomenon, as it describes the emerging paradigm of the Internet of Things, a technology
revolution currently taking place that will very likely reshape the whole architecture of our engineered environment on the micro level. To give it some more formal terminology, we might
borrow this definition from the European Research Cluster on the Internet of Things: “The Internet
of Things (IoT) is a concept and a paradigm that considers pervasive presence in the environment
of a variety of things/objects that through wireless and wired connections and unique addressing
schemes are able to interact with each other and cooperate with other things/objects to create new
applications/services and reach common goals.” In many ways, we can think about the Internet of
Things as the information revolution brought to our engineered environment as it networks our
technologies in the same way that the web has networked our society.

Within this paradigm, technology is less about things and more about platforms. The Internet of
Things is not a device or object. It is a platform or network that integrates components in order to
deliver functionality. Vertically defined stand-alone products and applications will become
increasingly part of large networked horizontal systems, and defined by their role within that
network. When systems become unbundled, devices and technologies are available for
reconfiguration through different networks depending on the context, and thus the context defines
the service and the service defines the network, which brings together the technology components.
As we embed chips into technologies, they are capable of adapting and configuring themselves in
real time. This is driving a new networked architecture that is based on services, called service-oriented architecture, or SOA.

There are many definitions for SOA but basically, it is an architectural approach to creating
systems built from autonomous services that are aggregated through a network. SOA supports the
integration of various services through defined protocols and procedures to enable the construction
of composite functions that draw from many different components to achieve their goals. It requires
the unbundling of monolithic systems and the conversion of the individual components into
services that are then made available to be reconfigured for different applications. SOA originates in information systems design, but with IoT, information technology is starting to permeate and reshape all areas of technology and make this SOA paradigm more pervasive, as it reflects the emerging next generation structure of our technology landscape built on IoT. Of course, we still
need physical technologies, machines, devices, turbines, and airplanes. But as we network our world, the next generation of technologies is not so much about these things in the way that it was during the industrial age, but instead about services enabled by networks that connect up things to deliver functionality.
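
As a loose sketch of this idea, the following Python fragment shows autonomous services registered on a shared platform and aggregated into a composite application. The registry and service names here are hypothetical; a real SOA stack would add protocols, service discovery, contracts and fault handling:

registry = {}

def service(name):
    """Register a function as a named, autonomous service."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@service("geocode")
def geocode(address):
    return (51.5, -0.1)  # stub: address -> coordinates

@service("route")
def route(origin, destination):
    return [origin, destination]  # stub: a trivial path

def navigation_app(address_a, address_b):
    # A composite function aggregates existing services through the
    # registry rather than owning a monolithic implementation.
    a = registry["geocode"](address_a)
    b = registry["geocode"](address_b)
    return registry["route"](a, b)

print(navigation_app("10 Downing St", "Heathrow"))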

In this short module we have been taking an overview of technology networks as we talked about
the network paradigm that focuses our view on connectivity and network structure, how when a
technology system reaches a critical level of connectivity its whole architecture changes, as the
components become increasingly defined by the network instead of their properties in isolation. We
talked about how globalization and IoT are driving this next generation of technology networks. By giving examples on both the macro and micro level, we have tried to show how this transition into a networked information age is truly an omnipresent and structural phenomenon. We finally looked
at how this shift to a network paradigm gives rise to a SOA framework, an architectural approach to
creating systems built from autonomous services that are aggregated through networks.
Distributed Systems
In this video, we will be looking at distributed systems as we analyze some of the factors that have
given rise to a new set of distributed IT enabled technologies, which are increasingly becoming
mainstream solutions and having a highly disruptive effect across almost all of the technology
industries. We will talk about some of the key characteristics of these alternative technologies and
how they flip our traditional industrial age model around, resulting in what is called inverse
infrastructure. Finally, we will look at the emergence of a new hybrid platform model for integrating
centralized and distributed infrastructure systems.

As we have previously discussed, our traditional industrial systems of technology are based upon a centralized model designed to leverage economies of scale through batch processing, as centrally controlled systems like power generation plants, factories, and broadcast media produced technologies and services that were pushed out to end users. Although distributed technologies
have always been there on the fringes, today a number of factors are working to fundamentally
change this centralized model to a more distributed one, where capabilities and production can
also take place at the edges of the network by many different actors.

Factors enabling this include: Firstly, the emergence of alternative technologies like solar cells, wind turbines and 3D printers that are now efficient enough to compete and become mainstream
products; Secondly, information technology that allows end-users to set up their own networks of
coordination and collaboration at very low costs; And lastly, the deregulation and privatization of
many previously state-owned monopoly infrastructure industries that is taking place around the
world, as previously vertically integrated national systems are being unbundled allowing for a
multiplicity of private actors to enter the value chain, as producers, traders, brokers, retailers and
many more actors in a complex ecosystem.

A good example of the shift from centralized to distributed is mobile telephony. If we look at the emerging infrastructures in rural Africa or Asia today, they often bypass the centralized copper telephone network altogether, instead implementing a decentralized cellular network. This helps to demonstrate how distributed technologies thrive particularly in areas with low population that lack the critical mass required for traditional centralized batch processing systems, and also where there is a lack of formal administration and pre-existing incumbents, and this goes back to our discussion about the need for a lack of regulation in order to enable distributed self-organization. The
emergence of renewable smart grids is another example of this. Information systems and distributed technologies are working to fundamentally re-architect the network away from the centrally controlled and operated traditional power grids that delivered electricity to end-users, towards a distributed architecture where end-users are central as both consumers and producers, and also as managers of the systems through smart devices and information about prices and consumption.

In a recent article in Bloomberg Magazine, David Crane, the CEO of a wholesale power company on the East Coast of the U.S., had this to say about the reshaping of the U.S. power grid:
What’s afoot is a confluence of green energy and computer technology, deregulation, cheap
natural gas, and political pressure that poses “a mortal threat to the existing utility system.” He
says that in about the time it has taken cell phones to supplant landlines in most U.S. homes, the
grid will become increasingly irrelevant as customers move toward decentralized homegrown
green energy. Rooftop solar, in particular, is turning tens of thousands of businesses and
households into power producers. Such distributed generation is certain to grow. He also adds that
some customers, particularly in the sunny West and high-cost Northeast, already realized that
“they don’t need the power industry at all.”
Mr. Crane may be slightly overstating things, but the same factors that are driving the re-emergence of distributed systems in power grids as viable competitors to the centralized model are
emerging across all domains from digital fabrication in manufacturing to organic farming in
agriculture, to car sharing services in transportation and Voice Over IP services within the
telephone industry. Distributed systems are no longer the fringe phenomena that they once were, but are increasingly accepted into the mainstream as viable and scalable solutions. These
distributed technologies have a number of common features to them, including the fact that they
are typically user-generated, informal networks, involve non-professional producers, with no one in
control of the entire network system. We will take a look at each of these characteristics separately.

Firstly, they are typically user generated. Centrally designed infrastructures are part of a whole
paradigm of industrial age organization based around the nation state and bureaucratic, top-down
organization, where the end-user is seen as a passive recipient of the service. They are defined as
‘consumers’ and that is essentially their role within the system. These distributed technologies flip
this paradigm on its head, actively engaging end-users to become both producers, capable of
managing their own resources and capable of setting up their own networks of collaboration
through IT.

These distributed networks are informal arrangements. Large mass processing industrial systems
like motorways, broadcast media or factories require a significant level of social, political and
economic organization over a prolonged period of time through a very formal process of
development and management. Due to this, they have a very high threshold to entry as a producer.
In contrast to this, these distributed systems are informal with very low barriers to entry. For the
cost of a solar panel, individual homes can become power producers. They can also become media producers through new media. They can become manufacturers through digital
manufacturing systems. They can become transportation providers through car sharing, all at very
low levels of capital investment and organization management, meaning you can do it without
needing to be a formal well-defined well-capitalized organization.

There is also a shift from professional to non-professional. The formal centralized model required
very high levels of technical capability and specialization in order to produce, operate and maintain.
As technologies become commoditized, they become cheaper, more accessible and easier to
operate. Distributed manufacturing is an example of this. Digital manufacturing processes such as 3D printers and CNC (computer numerical control) technologies were first used by a minority of technicians in factories. As the technology matures and comes to the consumer market, they are increasingly accessible to any end-user without professional expertise in manufacturing or engineering. Again,
this goes back to a reduction in barriers to entry. A recent study done by IBM showed that the cost
of capital to start a new factory is going to be reduced by 90% in the next decade, which will
inevitably drive a democratization of manufacturing. Many distributed technologies like photovoltaic
cells are enabled by sophisticated science and engineering but are also accessible for use by non-technical end-users.

Lastly, no one is really in control of these distributed systems. The centralized infrastructure
systems of the nation state were centrally controlled and managed in a hierarchical fashion. The
privatization of infrastructures like water, road and rail networks added many more actors to this, as
they became managed through market mechanisms. But with truly distributed technologies,
management of the system may become fully distributed out to the local level of the end-user. In
some circumstances, clear ownership may not even apply. Fully distributed technologies are also
managed in a distributed self-organizing fashion. Mesh networks and peer-to-peer file sharing are
examples of this. Every user supports the provision of the network service. They have a swarm-like
dynamic. Control of the system’s behavior and functionality is in the hands of many.

Distributed technologies are also called inverse infrastructure, as they have an almost inverse effect on the make-up of our infrastructure compared to that of the industrial model. If you have spent any time in the technology industries, you will have heard the word ‘disruptive’ repeatedly, and it is
this structural transformation that is a large part of where this disruption is coming from. Because
these alternative technologies simply do not fit into our pre-existing model, and they are not about
to simply go away, something has to give. The whole landscape has to change, and that is a scale
of disruption that we haven’t had for a number of centuries since the industrial revolution.

The key to this transformation is designing new frameworks for integrating centralized top-down
systems with these bottom-up distributed technologies. All of these previously mentioned factors
that differentiate the two architectural paradigms create a tension between a top-down and a
bottom-up model. Progressive companies are leading in creating a new architecture that is better
suited to this emerging reality, where traditional large centralized players reinvent themselves as
platform services providers, providing the infrastructure, protocols and other coordination
mechanisms for individuals and small organizations to co-create solutions.

Going back to our original theme of the smart power grid, a new model for power grids is emerging
called virtual power plants, where centralized wholesale power companies use information
technology to integrate and coordinate a number of distributed micro-generators, thus providing a
stable supply of energy to the market at the large scale required. This is an example of a hybrid
model that may well represent some kind of resolution to the top-down bottom-up core constraint.
It is built on the design paradigm of platform technologies that we previously mentioned. Apple’s
App Store is a good example of a platform technology. The centralized organization provides the
platform for end-users to create the solutions. Other examples might include Zipcar, who provide
the platform for car sharing.
These are examples of engineered systems that have managed to integrate a centralized and
distributed model to create a highly successful hybrid system. And of course, probably the most
successful technology of our time, the Internet, is a classical example of a distributed modular
system. With the advent of web 2.0 and technologies like social networking, over a billion people
contribute to its making, with highly successful companies like Google working by providing
centralized platforms for accessing this mass of distributed user-generated content.

Distributed technologies are having a radical, pervasive and unstoppable effect on our technology landscape. The infrastructure systems of the 21st century will not have the centralized structure of the 20th century, but they are also unlikely to be fully distributed. They will be a much more complex
interplay of both. Within the industrial model, we had mainstream centralized systems like factories
and motorways that dominated, and we had alternative distributed systems that lay on the fringes
like organic farming and artisan production. It was a strong dichotomy. The two never really fit
together. But with the move of distributed systems into the mainstream, this strong dichotomy is
becoming fuzzier. Going forward, there will very likely be a whole spectrum from highly centralized
systems like mega cities, to radically distributed systems like Bitcoin, and all kinds of hybrids in-between as we develop new methods for integrating them. As we previously mentioned, this
breakdown of the barrier and greater interconnectivity between the two models is a key source of
complexity.

In this module, we have been talking about distributed engineered systems, as increased efficiency
in alternative technologies and the rise of information technology have worked to bring them into
the mainstream of our technology infrastructure, as they are increasingly able to provide solutions
on the large scale that our global economy of the 21st century requires. We looked at some of the
common features of these distributed systems, including the fact that they are typically user-
generated, informal networks, involving non-professional producers, with no one really in control of
the entire network system. Finally, we talked about platform technologies and new hybrid models
that are emerging to integrate legacy centralized industrial systems with these new distributed
networks through IT-enabled platforms.
Technology Convergence
Following on from our previous discussion on distributed systems, we are going to look at the
somewhat inverse phenomenon of technology convergence. In this video, we will talk about how
technology has evolved through a process of increased differentiation and specialization and how
over the past few decades, information technology and services design are enabling the
phenomenon of mass convergence as disparate technologies and previously separate functions
become merged into single devices and systems. We will talk about the challenges and benefits of
this and why it is critical to dealing with the complexity of our next generation infrastructure.

Since the advent of early stone and wood tools, technology has become increasingly differentiated
into specialized functions, as today one can find many millions of technologies and services
available on the market. Whenever we have a major technological transformation, or new energy
source like the advent of petroleum combustion, we get the emergence of a whole new ecosystem
of technologies. If we look at the development of new consumer technologies in the 20th century, it
was largely a history of different ways to apply the newfound energy source of electricity and
petroleum-based plastics, with the result being the proliferation of ever more consumer
technologies filling ever more specialized functions. Today we are on the edge of a new
technological revolution in the form of nanotechnology, the engineering of physical systems on the
scale of atoms which is opening up a whole new dimension to our world that can be engineered,
and again a proliferation of a new wave of technologies enabled by smart materials. Thus, this process of technological differentiation continues today with ever more specialized technologies, but at the same time the last few decades have also given rise to a quite radical technological convergence.

Technological convergence is, as the name implies, the convergence of a number of disparate
technologies or functions into a single integrated system. The Internet and digital convergence are
classical examples of this. Virtually all modes of telecommunication are rapidly converging upon
the Internet protocol as a single standard for telecommunications. Digital convergence refers to the
merging of four distinct industries into one conglomerate: Information Technologies,
Telecommunication, Consumer Electronics, and Entertainment. The digital format is driving rapid
convergence, as a previously disparate array of media from books to television to films and
photographs are all converging upon a single format. Electricity cables are starting to become information transporters as the Internet’s infrastructure and power grids merge, and as cars become electric, they too are becoming merged into the power grid. Some of the major factors behind this
convergence that we will discuss are primarily the information revolution, and also globalization,
sustainable technologies and service design.

Information technology is central to this process of convergence, as it is increasingly becoming the interface between people and the technical infrastructure that supports us. Devices such as smart
phones or tablets allow us to access a wide variety of functions through a single interface. As we
previously discussed, the Internet of Things is driving massive convergence of hugely disparate
technologies from tractors, to washing machines to factories. They are all becoming operated
through the Internet protocol suit and often accessible through mobile devices, and of course
behind of all this is the so-called “cloud” and computer virtualization as data is converging into
centralized data centers. This architecture to next generation I.T., with on the one hand the
centralized systems of cloud computing, big data and analytics, and on the other the massive
proliferation of mobile devices and Internet of Things devices, is a good example of the complex
interplay between increased convergence and divergence. As the CEO of Gartner, the I.T. research and consulting company, put it: ‘In 2020 ... consumer I.T. and organization I.T. are one. Digital capabilities all through your enterprise are one. Digital is the business, the business is digital’.

The services revolution is an economic paradigm shift that moves the focus of economic activity
from the provision of discrete products and technologies to providing end-users with integrated
services, called product service systems. Within this paradigm the focus is on the end user and providing them not with a one-off product but instead with a seamless service experience. This is a
much more sophisticated business and marketing model that adds value and differentiation, and it
is aligned with the dominance of the services sector within post-industrial economies. It is a key
factor leading to convergence in that it is focused on integrating disparate technologies into single
seamless service systems.

Sustainability is also another driver of convergence but more on the macro scale, as it requires us
to focus on how different subsystems interoperate in order to make whole systems more efficient
and sustainable over their full life cycle. Sustainable cities are urban environments that identify how
their different subsystems can work together, and this means designing integrated systems. For
example this might involve asking how the structure of the urban environment relates to its
transportation systems and how their energy consumption relates to the city’s air quality and so on.
Again, this has a networking effect as we cut across traditional domains, and it drives the
convergence of different functions and silos onto common platforms.

Lastly, globalization is also a factor, creating a geographic convergence as infrastructure systems no longer stop at borders but become networked on a multinational and regional or even global
level. Information technology and financial and economic frameworks allow us to create
multinational markets where something like electricity can be traded across borders seamlessly, creating interdependency between previously autonomous national infrastructure systems, the best
example of this being the European Union as it starts to create the regulatory frameworks for
transport and utility systems to span the entire continent.

This process of rapidly increasing convergence creates both challenges and solutions within our
technology infrastructure. On the positive side we have dematerialization, as convergence typically
means we can do more with less physical resources required, and dematerialization is very
important to achieving sustainability. Convergence is critical to dealing with the complexity of our
next generation of technology and it also enables seamless processes instead of discrete silo
technologies and functions. But there are also many challenges including the heightened
complexity of engineering these multi-functional systems that require a much higher level of
abstraction. Dealing with fuzzy ill-defined borders and system boundaries is another challenge.
Creating increased security risks and ensuring basic functionality despite greater complexity is an
added engineering challenge. We will go over each of these separately.

Convergence requires the use of significant abstraction. Abstraction is where the system as a
whole becomes increasingly removed from any individual instance of its functionality. An example
of this might be a computer’s operating system. Early mainframe computers had no operating
systems. One simply wrote an application and put it into the computer to be run and then the next
person came along with their application and put that in. Of course, each of these application
developers had to write a lot of low-level code for managing the computers’ hardware resources.
Over time the modern operating system evolved in order to provide a generic platform for
performing basic common hardware resource management. The operating systems provided a
layer of abstracting so that the computer’s hardware can be easily used for many different
applications while also being independent of any particular instance. In order to enable
convergence, new abstractions have to be invented. They have to work and be implemented.

The complexity of designing and managing these complex systems is not trivial. Convergence
means that our traditionally well-defined borders become fuzzy. When things overlap, it is no
longer clear who is responsible or whose jurisdiction we are in. Traditional regulation and
management structures that were designed as domain specific start to erode and appear less
relevant. The overlapping and integration of many different systems, and being able to access all these systems through a single protocol, creates significant security challenges. This
interconnectivity and interdependency creates heightened risk of cascading failure. It becomes
very difficult to understand and model all of the linkages and interdependencies within these
complex networks. With convergence, the end user wants to be able to switch seamlessly between
different functions and domains. Building firewalls, buffers and redundancy in order to reduce
failure propagations often reduces their capacity to do this, and thus security may become more of
a concern and more difficult to deal with.

Abstraction is the engine behind convergence. As mentioned, these convergent technologies need
many layers of abstraction in order to enable the many different functions they may be required to
perform. Abstraction creates engineering challenges but it also increases functionality challenges
because it typically involves a much higher number of dependencies within the system. As the
number of functions in a single device escalates, the ability of that device to serve its original basic
function decreases. For example, the iPhone (which by its name implies that its primary function is
that of a phone) can perform many different tasks, and thus the telephone’s functionality is
dependent upon many levels of abstraction involving millions of lines of code, with many possible
errors occurring. Compared to this, a simple mono-functional telephone will likely be much more
reliable and trustworthy.

So those are some of the challenges to convergence. We will now talk about some of its benefits.
Convergence and abstraction are in many ways central to the solution space of dealing with the complexity of our industrial systems. As Oliver Wendell Holmes once said, ‘I would not give a fig for the
simplicity on this side of complexity, but I would give my life for the simplicity on the other side of
complexity.’ Convergence is in many ways the only hope for the simplicity on the other side of
complexity. These factors driving convergence, in particular, product service systems integrating
disparate technologies into seamless processes, and information technology providing a single
interface are critical to dealing with the complexity. They are central to encapsulating the
complexity of next generation infrastructure and providing end-users with integrated solutions, in so
doing enabling technologies to disappear into the background and higher value added activities to
come to the forefront.

A corollary to this is that convergence enables processes instead of discrete functions. We live in a world where we have to keep switching between devices and systems. With convergent technologies, functions can be strung together into processes, and processes are really how our life works. Even in quite simple ones, like going to pick up the children from school, we interact with many technologies in the course of the activity. From our coat to the house security system to the car, the transportation systems to one’s mobile phone and so on, all of these systems are completely ignorant of the fact that they are part of a process. We spend our time continuously coordinating these different technologies into the processes that we are performing. Convergence through product service systems and information technology offers the possibility of taking us into a much more process-orientated world.

In this video we have talked about some of the drivers of the process of technology convergence, as IT and service design increasingly cut across and break down traditional silos between individual technologies and functional domains. We looked at some of the benefits and challenges of this process, including fuzzy borders, security issues and increased abstraction.
Convergence illustrates again one of the major themes of complexity, that is, a core tension between integration on the systems level and differentiation of components on the local level. Our interface with these systems of technology may become integrated as functionality converges, while at the same time, underneath this, the actual technologies that enable it may become increasingly differentiated and specialized.
Robustness
In this video, we will be talking about the nature of fault tolerance within complex engineered
systems as we discuss their robustness and resilience. We will be largely talking about this from
the perspective of network theory as it provides us with one of the best tools for analyzing failure
propagation within our highly interconnected infrastructure networks. We will try to give some
context to the subject by talking about some of the limitations of our traditional industrial-age infrastructure systems, and start digging into some of the key factors surrounding systems robustness by breaking it down into internal and external factors.
Importance
With the modeling of infrastructure robustness, we are interested in how failures occur, how they
spread within the system and how resilient the system is to those failures, and researchers are
particularly interested in critical infrastructure for obvious reasons. We are so dependent upon
these infrastructure systems that we hardly notice them until a fault occurs. Therefore, the ability to
model and analyze the behavior of these critical infrastructures and their interdependencies is of
vital importance.

Critical infrastructures are defined by US Homeland Security as such: ‘Critical infrastructure is the
backbone of our nation's economy, security, and health. We know it as the power we use in our
homes, the water we drink, the transportation that moves us, and the communication systems we
rely on to stay in touch with friends and family. Critical infrastructure is the assets, systems, and
networks, whether physical or virtual, so vital to the United States that their incapacitation or
destruction would have a debilitating effect on security, national economic security, national public
health or safety, or any combination thereof.’

Due to a number of features of the industrial age model of design and technology development, our industrial infrastructure has evolved to become highly unsustainable and, along many dimensions, we might say fragile. Key factors of the industrial age model that have contributed to
this are its linear model of take, make and dispose that requires a high input of resources from the
environment, also its centralized structure that creates critical hubs and its model of batch
processing that requires standardization, thus reducing the diversity in the system.

Added to this, globalization and information technology have networked our world creating many
interdependencies between different infrastructure systems. The Amsterdam electricity exchange,
for example, was the first power exchange to be entirely conducted through the Internet, making the electrical infrastructure dependent upon the IT infrastructure. And today almost all of our
products depend upon the working of a globally distributed supply network. Thus, we are
increasingly dependent upon global networks whose complex inter-linkages and interdependencies
we only partially understand. With every new shock to the system like the financial crisis of 2008,
we become more aware of these global networks and the need to be able to properly model and
analyze them.

So what we are really interested in is the continued functioning of these infrastructure systems, and this is what we call their resilience. Resiliency is the capacity for a system to maintain functionality
despite the occurrence of some internal or external perturbation to the system, which is very similar
to robustness, the ability to withstand or overcome adverse conditions. We can understand
robustness along a number of different parameters primarily relating to the system’s dependency
upon its external environment and the internal structure and make-up of the system.

In terms of the system’s dependency on its environment, we are asking: What inputs or range of
inputs does the system require? Because the technology infrastructure that runs our global
economy is a dynamical system, like all dynamical systems it requires an almost constant input of
resources to maintain that dynamical state 24/7 around the globe. These infrastructure systems
need a constant input of resources and energy from the environment. Without it, they will start to
degrade very quickly. Like all dynamical systems, they are in a precarious situation, with engineers and administrators trying to maintain their high level of functionality and resource throughput when things can go wrong at any time.

As we all know, our modern infrastructure systems have developed to become highly dependent
upon a particular subset of energy and resource inputs. This has become a key source of
vulnerability as everything from plastic to shampoo to hairspray to all forms of manufactured
products are dependent upon the stable input of petroleum, and of course all forms of energy
likewise from heating, to transportation to electrical generation. As an analogy we might think of a
tree that receives all of its nutrients from its trunk that then ramifies out to all the branches. Being
so dependent upon a single input value is a vulnerability that reduces the system’s robustness.
Moving towards distributed generation will help to diversify this set of input values and increase its
dependability. Moving towards a circular economy is another factor that reduces dependency upon
the input of raw materials into the system.

We can also think about this in terms of connectivity. Can the dynamical system ensure its
continued access to sufficient resources required for its functioning? Thus, we are interested in
what will happen if we remove one or more of these linkages. With the advent of network science, much of this analysis can now be done using network theory, as a system with a high level of dependency upon a single input would be a centralized network, whilst diversifying these dependencies would result in a distributed network, which is known to be more robust. Network analysis of infrastructure systems is
becoming a key tool and rising topic of research.

Next, we want to consider the internal structure and makeup of the system. Again, we can represent this as a network. We want to know how centralized the network is, as a centralized system, such as a hub-and-spoke air traffic network, will be susceptible to strategic attack: taking down one major hub will drastically reduce the network’s level of connectivity and may result in its disintegration. This is why distributed systems like peer-to-peer file sharing networks are
typically very robust. They often come under attack from law enforcement agencies due to
copyright violations, but because the system is distributed there is no single node or cluster of
major nodes through which you can damage the entire network. These distributed networks
typically have a low level of specialization between components, meaning any node’s function can
be easily replaced by another or simply duplicated to another location. The first generation of Internet peer-to-peer networks like Napster relied on a central server. Due to this, it was possible to take the network down. The second and third generations of P2P networks are able to operate
without any central server, thus eliminating the central vulnerability by connecting users directly to
each other remotely.

This kind of distributed network has a very low level of criticality. They are extremely resilient and
can be, for all intents and purposes, virtually impossible to destroy. And this is in strong contrast to
many of our centralized industrial systems such as broadcast media, cities and airports, that all
exhibit a high level of criticality because the networks are dependent upon centralized nodes. But it
is not just dependence upon a single set of major hubs that is important to robustness, but also
dependence upon a limited number of linkages. These critical linkages between nodes are called
bridging connections. Peer-to-peer networks also have a high level of resilience, owing to their low
level of linkage criticality. Any linkage between two computers can be replaced by using a proxy
server as an alternative pathway, meaning the network is not dependent upon any specific
connection. This independence from any particular node or edge is central to achieving
robustness.
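
Assuming the networkx library is available, a minimal sketch of this difference might compare a hub-and-spoke network with a distributed random network under a targeted attack on the best-connected node:

import networkx as nx

def attack_hub(G):
    """Remove the highest-degree node and return the surviving fraction
    of nodes in the largest connected component."""
    hub = max(G.degree, key=lambda nd: nd[1])[0]
    G = G.copy()
    G.remove_node(hub)
    giant = max(nx.connected_components(G), key=len)
    return len(giant) / G.number_of_nodes()

star = nx.star_graph(99)                        # centralized: 1 hub, 99 spokes
mesh = nx.erdos_renyi_graph(100, 0.08, seed=1)  # distributed: ~8 links per node

print(f"star after attack: {attack_hub(star):.2f}")  # collapses into isolates
print(f"mesh after attack: {attack_hub(mesh):.2f}")  # remains largely intact

The star disintegrates when its hub is removed, while the distributed mesh barely notices the loss of its best-connected node.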

Next, we need to consider how failures spread within the system. A primary consideration here is
the overall degree of connectivity to the network. With a relatively isolated system like a small farm
in a rural community, failures don’t spread very far. Isolation through low connectivity is the most basic barrier to contagion. But if we take an urban center like central Hong Kong, a dense network of many interconnected infrastructure systems has to be working for it to run smoothly. Small glitches propagate quickly. In these highly interconnected and coordinated systems, we can
also get positive feedback loops that can work
to amplify some small change into a large effect. This is the butterfly effect that we previously
mentioned, and it is often the source of major systemic shocks such as bank runs or cascading
failures in power grids.

Key barriers to disaster propagation are redundancy and buffers. These can be engineered into the
network, and are also an emergent phenomenon of maintaining diversity within the system. There
is often a trade-off between diversity and optimization. Supply chain networks are a good example
of this. Holding just the right amount of inventory is crucial to optimizing costs. After all, inventory costs are incurred every hour of every day in areas including warehouse storage, heating and electricity, staffing, product decay and obsolescence, making for a strong drive towards ever increasing optimization and just-in-time practices. This can lead to self-organized criticality, where we reduce the diversity of the components and the buffers between them to such a low level that we position the entire network at a critical point where a small event can trigger an avalanche of failures. There is a core tension here between optimization of components and the system’s
overall robustness. It takes intelligent design and management to integrate both, thus maintaining
an efficient, sustainable system.
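
A toy model of this trade-off, with made-up numbers, might pass a supply shock down a chain of suppliers, each absorbing what its inventory buffer allows:

def cascade(buffers, shock):
    """Return how many nodes fail before the shock is absorbed."""
    failed = 0
    for buffer in buffers:       # upstream to downstream
        if shock <= buffer:
            return failed        # shock fully absorbed
        shock -= buffer          # shortfall propagates onward
        failed += 1
    return failed

lean = [1, 1, 1, 1, 1]     # just-in-time: highly optimized buffers
padded = [4, 4, 4, 4, 4]   # costly redundancy

print(cascade(lean, 6))    # 5: the whole chain fails
print(cascade(padded, 6))  # 1: the cascade is contained almost immediately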

As different technologies and systems converge, the interconnections and interdependencies across different infrastructure systems increase, and so does the level of unknown linkages. A basic premise of complexity theory is that we never know all of the linkages in these
complex systems. As an example of unknown interdependencies, we might think of the 2011 flooding in Thailand, a country that accounts for approximately 25% of the world’s computer hard disk production. This flooding caused a disruption to the manufacturing supply chains for automobile production and a global shortage of hard disks, which lasted throughout 2012. Now some people know that Thailand is a major producer of hard drives. Fewer people know these hard drives are in our cars, and very few know the dependency of the automotive industry on this critical supply linkage.

This is an example of a tiny linkage in the vast complex system of our global economy, a system that is deeply interdependent but that no one manages or fully understands. There is no instruction manual where every interdependency is listed. This is the nature of our distributed globalized world, where IT
enables people to set up their own networks. Companies, financial institutions, engineers, software
developers, criminal gangs, government security agencies, and hackers – they just set up these
connections. They don’t have to tell anyone, there is no government of globalization to keep track
of it all. We simply do not know all of these connections, and often we only really find out about all
these linkages when the system breaks down. Because we can never know all of the linkages
within a complex system, we can never say it is fully fault tolerant and instead often the best option
is building robustness into the system through diversity.

In this video, we have been looking at the nature of fault tolerance and robustness in complex
engineered systems through the lens of network theory. We talked about how a system’s
robustness is a product of both external factors as to the system’s dependency upon its
environment, and internal factors as to the system’s network structure, its overall degree of
connectivity and its dependence upon centralized hubs and critical bridging links. We briefly
mentioned the importance of diversity as a barrier to failure propagation, looking at the tension
between subsystem optimization and resilience. Finally, we talked about the fact that we often do
not know all of the linkages within these complex distributed networks, and thus cannot ensure
complete fault tolerance, making inherent diversity an important security strategy.
Evolution & Technology Development

In this video we will be covering the topic of technology development, as we talk about a number of different models that help us understand the macro-scale process of change within complex engineered systems. We will first discuss why evolution is a key feature of complex systems, and talk about what exactly technology development is. We will go on to briefly introduce the model of a fitness landscape, looking at the dynamics of evolution, before turning to the adaptive cycle model and disruptive innovation.

Consider the continental U.S. power transmission grid, which consists of about 300,000 km of lines operated by approximately 500 companies. Most power grids in Western Europe and the U.S. started out as local enterprises, but over the course of time, due to demand, they have had to evolve to become the integrated national and multinational networks they are today. The industrial systems that we inherit today, like power grids, were not designed as integrated systems, but gradually evolved over time and are thus best understood as the product of hundreds or even thousands of years of technological evolution patched together. We can see this most clearly within emerging markets, where, due to recent rapid economic growth, pre-industrial technologies coexist with industrial and post-industrial information technologies all in one big mash-up.

These massive networks, like power grids and global supply chains, illustrate why evolution is very
important within complex systems. Because they are too complex to build from scratch, we never
get a clean slate. One person or organization could not create the Internet with all its content.
These things only really get created by many different actors with different local level agendas. We
can’t just come in, smack down our design and build the whole thing just like that from scratch.
These things get built over a prolonged period of time, primarily due to the local incentives of
individuals and local organizations as they act and react to each other’s behavior, self-organizing
to create patterns of coordination, which both compete and cooperate to eventually give us some
kind of emergent global coordination. And all the time, evolution is acting on the system in order to
define which patterns of organization are best suited and which are not. Whereas self-organization
is an internal process, evolution is an external force that acts on the whole system. It is a macro-
scale phenomenon.

Evolution is a particular type of systems development. So before we begin to analyze it, let's first define what we mean by the development of technology. As previously discussed, a technology is a system that performs the function of efficiently solving a particular constraint. We can then think about the development of technology as an increase in its efficiency, where efficiency is defined as the ratio of the solution outputted to the resources inputted, those resources including both natural capital and human labor. Rationalization is the making of a process or system more efficient. We can then define a simple parameter spanning from low efficiency to high efficiency, with rationalization being the function that maps the system to a higher value along this metric. This gives us some basic context to what we mean when we talk about the development of technology.
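
As a minimal sketch of this definition, with the class, names and numbers all hypothetical and chosen purely for illustration, we can express efficiency as an output-to-input ratio and rationalization as a function that maps a technology to a higher value on that metric:

```python
# Minimal sketch (illustrative only): efficiency as output/input, and
# rationalization as a mapping that raises a technology's efficiency.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Technology:
    output: float        # e.g. kg of flour produced per day
    resources_in: float  # combined natural capital and labor inputs

    @property
    def efficiency(self) -> float:
        return self.output / self.resources_in

def rationalize(tech: Technology, gain: float = 1.1) -> Technology:
    """Map a technology to a higher point on the efficiency metric
    by reducing the resources it requires for the same output."""
    return replace(tech, resources_in=tech.resources_in / gain)

stone_mill = Technology(output=50.0, resources_in=100.0)
improved = rationalize(stone_mill)
print(stone_mill.efficiency, improved.efficiency)  # 0.5 -> 0.55
```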

To illustrate this, let's think about a technology for grinding flour: what we call a flour mill. At the low end of this spectrum, we will have a technology that requires a high input of resources with limited throughput; we might have one of the original stone mills developed a few thousand years ago, driven by manual labor with a low throughput of flour. Through the process of rationalization we have made this system more efficient at grinding flour, and thus at the high end of the spectrum we have a contemporary mill that is automated, with a high throughput of flour relative to the energy inputted. This process of technological rationalization has not only increased the rate of throughput of the system but also, by automating it, reduced the requirement for physical resources and human capital. At the beginning of this process of rationalization, the system was very inefficient and thus there was still a lot of value to be gained by rationalizing it. But by the time we get to the end of the process, the system may be very efficient, and thus there is often very little value to be gained by further rationalization.

But a technology doesn't exist in isolation. It is part of a whole ecosystem of other technologies, and its utility is also defined by how well it fits into that environment. As we have previously discussed, technologies today rarely stand alone. More often they form part of service networks that deliver functionality, and thus their effectiveness also lies largely in their capacity to interoperate with other technologies and provide a required, differentiated function within these service systems.

In order to capture these two factors, how efficient a technology is and its relation to other technologies, we can use the model of what is called a fitness landscape, which is a three-dimensional representation of the technology landscape. Like a rugged mountain range, it has points of different elevation, with these different elevations representing how efficient that technology is. The higher up one of these mountains you are, the better or fitter the technology is at solving the problem at hand; similar technologies that interoperate are also clustered together on the landscape. Through innovation, rationalization and evolution, technologies try to climb to higher peaks on this landscape.
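
To give a feel for this model, here is a minimal hill-climbing sketch; the toy landscape function and step rule are my own invention, not anything from the source. A technology "design" takes small random steps and keeps only those that raise its fitness, typically settling on a nearby peak rather than the global optimum:

```python
# Minimal sketch (illustrative only): local search on a rugged fitness
# landscape -- keep any small design change that raises fitness.

import math
import random

def fitness(x: float, y: float) -> float:
    # A toy rugged landscape: two broad peaks plus small-scale ripples.
    return (math.exp(-((x - 2) ** 2 + (y - 2) ** 2))
            + 1.5 * math.exp(-((x + 1) ** 2 + (y + 1) ** 2))
            + 0.05 * math.sin(5 * x) * math.cos(5 * y))

def hill_climb(x, y, steps=2000, step_size=0.1):
    best = fitness(x, y)
    for _ in range(steps):
        nx, ny = x + random.gauss(0, step_size), y + random.gauss(0, step_size)
        if fitness(nx, ny) > best:  # accept only improvements
            x, y, best = nx, ny, fitness(nx, ny)
    return x, y, best

print(hill_climb(0.0, 0.0))  # usually ends on a nearby peak, not
                             # necessarily the highest one on the landscape
```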

Because different technologies need to interoperate, they are interdependent, meaning the fitness of one technology is dependent upon others. For example, there are over 25,000 companies developing technologies with Bluetooth capabilities; if the protocol were to be significantly altered or even discontinued, this would affect the entire ecosystem. Because of this interdependency, the landscape is not static but in fact dancing around in response to all the small and large changes being made to the individual technologies. The actual problem that the technologies are trying to solve is also changing: at different stages of technological development, new possibilities and challenges emerge, fundamentally altering the landscape.
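
One well-known way to formalize such coupled, "dancing" landscapes is Kauffman's NK model, which the source does not mention by name but which captures exactly this interdependency. In the rough sketch below, the fitness contribution of each component depends on K other components, so changing one component shifts the contributions of everything coupled to it:

```python
# Rough sketch of an NK-style coupled fitness landscape (my choice of
# illustration): each component's contribution depends on K neighbours,
# so one local change moves many parts of the landscape at once.

import itertools
import random

N, K = 6, 2
random.seed(1)
# Each component i depends on itself plus K randomly chosen others.
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
# Random contribution table: one value per component per local configuration.
table = [{bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
         for _ in range(N)]

def nk_fitness(genome):
    return sum(table[i][(genome[i], *(genome[j] for j in neighbours[i]))]
               for i in range(N)) / N

genome = tuple(random.randint(0, 1) for _ in range(N))
flipped = genome[:1] + (1 - genome[1],) + genome[2:]  # change one component
print(nk_fitness(genome), nk_fitness(flipped))  # one change, many shifted contributions
```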

The model of a fitness landscape is a powerful model for understanding the evolution of complex adaptive systems. But it gives us a somewhat narrow vision of the evolution of technology, because new technologies and ideas can create whole new industries and landscapes. Technology is just one part of the broader technical framework of what is called STEM, which stands for Science, Technology, Engineering, and Mathematics. As we know, technological development is intimately interconnected with and dependent upon these other domains. This acronym should really read MSET, in order to represent the process through which our technical body of knowledge develops and the set of dependencies between its domains: technology is dependent upon engineering, which is dependent upon science, which is dependent upon the formal systems, primarily mathematics. There may be many nonlinear cross-pollinations within the domains of technology and engineering that drive innovation, but ultimately major new technological paradigm shifts require breakthroughs in math and fundamental science.

This is most evident when we look at how the breakthrough of the modern scientific revolution gave birth to a new set of engineering methods and the industrial revolution. These major paradigm shifts result in the whole landscape changing. Not only is the set of solutions redefined, that is to say, the set of engineering methods and technologies, but the actual problem space itself may also be redefined, because that is what theory and science do: they redefine how we see the world, and thus what exactly the problem we are trying to solve is. We might call this thinking outside the box. We are not just trying to define what the solution is but actually redefining the problem. The two can co-evolve, because ultimately what we are trying to do here is solve problems, and we can do that by changing the problem or changing the solution. For example, the shift from a pre-modern to a modern view of the world based upon science redefined the problem space that we are trying to solve, and that was a real paradigm shift and a change in the whole landscape.

Evolution is a search over this landscape in order to find new and better solutions to the given environmental challenges. Evolution involves a number of key stages: first, the production of a variety of solutions to the given problem; second, the application of these variants to the problem to see which are best suited; and third, selection, in order to remove the variants that were least effective and make the efficient solutions more prevalent in the next life cycle of the system. Lastly, we need to be able to iterate on this process for a number of life cycles, with each iteration changing the location of individual technologies on the landscape.
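
These stages map naturally onto a simple evolutionary algorithm. The sketch below runs the variation, evaluation, selection and iteration loop on a population of bit-string "designs"; the encoding, fitness function and parameters are arbitrary choices for illustration:

```python
# Minimal sketch (illustrative only) of the variation -> evaluation ->
# selection -> iteration cycle described above.

import random

def evaluate(design):            # stand-in fitness: count of 1-bits
    return sum(design)

def mutate(design, rate=0.05):   # variation: small random design changes
    return [b ^ (random.random() < rate) for b in design]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):     # iterate over many life cycles
    scored = sorted(population, key=evaluate, reverse=True)
    survivors = scored[: len(scored) // 2]  # selection: drop the weakest half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(len(scored) - len(survivors))]

print(evaluate(max(population, key=evaluate)))  # fitness climbs over generations
```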
The adaptive cycle gives a visual representation of the stages in this process of evolution. It
is a model used to capture the different stages that ecosystems go through during the course of
their evolution, but it is equally applicable to all complex adaptive systems from social
organizations to the development of new industries and technologies. It defines the macro-scale state of the system as it moves through four distinct stages of development: growth, conservation, release (collapse), and reorganization.

In the growth phase, new scientific or fundamental engineering knowledge provides fertile new ground on which innovation can happen. Without incumbents, many new possible solutions can emerge. An example of this growth phase at the moment might be the era of 3D printing: without an industry established enough to support any big players, it is full of small tinkerers and startups created out of garages. In the conservation stage, some technologies have proven more effective and, by leveraging the positive feedback loop of economies of scale, are able to outperform any newcomers to the industry. Economies of scale create high barriers to entry as the industry becomes consolidated and mature. This is a period of maximum efficiency and minimum flexibility, with all available resources held within a productive configuration, making the environment conservative towards change.

During the release phase, some external environmental disturbance, such as a disruptive innovation, eventually triggers the collapse of the system, as its elements have become inflexible from over-exploiting a single niche. The relationships between the elements are broken, and the resources they held are released. The elements that remain after the release stage will reorganize. In this stage, the connectedness of the system is low but the potential is very high, and therefore novelty arises; foreign elements that would in other stages be out-competed can establish themselves at this point. The growth stage then follows and a new cycle begins. The adaptive cycle is a very generalized model and we are far from fully understanding the dynamics behind it, but it does capture many of the macro-scale stages that are characteristic of the development of adaptive systems, which develop through an evolutionary process engendered in some dynamic between order and chaos.
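
As a compact way of summarizing the model, we can represent the adaptive cycle as a cyclic sequence of phases, each characterized by the system's connectedness and its stored potential; the qualitative labels below are my own reading of the model, not definitive values:

```python
# Compact sketch (qualitative labels are my own reading of the model):
# the adaptive cycle as a repeating sequence of four phases.

PHASES = ["growth", "conservation", "release", "reorganization"]

CHARACTER = {
    "growth":         {"connectedness": "rising",   "potential": "rising"},
    "conservation":   {"connectedness": "high",     "potential": "high, locked in"},
    "release":        {"connectedness": "breaking", "potential": "being freed"},
    "reorganization": {"connectedness": "low",      "potential": "high, enabling novelty"},
}

def next_phase(phase: str) -> str:
    return PHASES[(PHASES.index(phase) + 1) % len(PHASES)]

phase = "growth"
for _ in range(5):  # walk one full cycle and into the next
    print(phase, CHARACTER[phase])
    phase = next_phase(phase)
```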

In this video, we have been looking at some of the key factors involved in the process of evolution within our technology landscape, discussing why evolution is important and what we mean by technology development. We have talked about the fitness landscape model, and how paradigm shifts in science can result in a whole new problem space emerging, along with an ensuing new set of engineering solutions. We looked at how evolution can be understood in terms of a set of stages through which variety is created; these variants are left to adapt before having selection performed upon them, and through countless iterations of this cycle, with one cycle building upon the previous, we can get the development of complex systems without anyone having designed them. This evolutionary approach to the development of technology is very different to our traditional industrial paradigm, but through new technologies like biotech, nanotech and information technology it is increasingly one we are learning to harness.
