To cite this article: David Blockley (2020) Practical wisdom in an age of computerisation, Civil Engineering and Environmental Systems, 37:4, 197-213, DOI: 10.1080/10286608.2020.1810675

Published online: 25 Aug 2020.

CIVIL ENGINEERING AND ENVIRONMENTAL SYSTEMS
2020, VOL. 37, NO. 4, 197–213
https://doi.org/10.1080/10286608.2020.1810675

Practical wisdom in an age of computerisation


David Blockley
Department of Civil Engineering, University of Bristol, Bristol, UK

ABSTRACT
All professions are facing some formidable challenges including the 'big three' – climate change, pandemics and increasing computerisation. The potential threats include new uncertainties about unexpected extreme events, widespread predictions of job losses and the dangers of inappropriate unsafe automation. The opportunities include new types of jobs and new understanding of what civil engineers do and how they do it, together with large scale improvements in effectiveness. Meeting these challenges requires us first to understand better and re-evaluate the overview of the services we provide. Secondly, we need to get a better handle on how we structure our problems to encompass both the 'big picture' and the detail, and to use that structuring to develop workflow modelling tools that handle new sources of uncertain evidence such as AI and the IoT. In this paper I argue that we need to change the conversation about AI to one about AAI or Assistive Artificial Intelligence, whilst recognising that the term AI will probably remain in general use. To do that we have to understand and communicate what is special about professional engineering expertise. We need a principle of ingenuity that says 'value, nurture and develop practical wisdom' because it contains the crucial necessary qualities of professional engineering that cannot be computerised in the foreseeable future.

ARTICLE HISTORY
Received 28 June 2020
Accepted 12 August 2020

KEYWORDS
Systems; project management; information technology

Introduction
All professions, including civil engineering, are facing some formidable challenges.
We are:

(1) changing the natural world in ways we don’t fully understand;


(2) connecting with each other, sharing information at a rate never known before;
(3) realising that we know less than we thought we knew and in particular we don’t have a
firm grasp on the nature of uncertainty;
(4) experiencing the effects of the unintended consequences of our decisions as never
before;
(5) experiencing a major loss of public confidence in ’expertise’ and authority;
(6) experiencing developments in digital technology that offer systems that potentially
can replace human decision making – leading to what some call a fourth industrial
revolution integrating the physical, digital and biological, including robotics, artificial
intelligence, nanotechnology, quantum computing, biotechnology, the Internet of
Things (IoT), 3D printing and autonomous vehicles;
(7) replacing products that once were straightforward and understandable to the layper-
son with products that are opaque and understandable only by specialists.

Some of the questions these challenges pose are:

(a) Will there be enough work in the future to maintain full employment, and if so what
will that work be?
(b) Which occupations will thrive, and which ones will wither?
(c) What are the potential implications for skills and wages as machines perform some of
the tasks that humans now do?

The purposes of this paper are fourfold. First to identify some of the potential threats
and opportunities to professional civil engineering practice inherent in these challenges.
Second to suggest that in order to manage them we need to understand better and re-
evaluate the overview of the services we provide. Third to improve the ways in which
we deliver those services through the processes we civil engineers undertake by
effective use of the modelling tools and the AAI (assistive artificial intelligence) available
to us. Fourth to understand and nurture the essence of practical intelligence and
wisdom that cannot easily be computerised (at least in the foreseeable future).

The challenges, threats and opportunities


The challenges and the questions identified above quite obviously contain the three over-
whelmingly important and interdependent problems of the twenty-first century i.e.
climate change (numbers 1, 3 and 4), pandemics (1, 2, 3, 4) and the increasing role of
digital technology and computers (numbers 2, 6, 7). Media reports about the impact of
artificial intelligence on our lives and on jobs suggest that changes could be dramatic
and likely benefits and challenges considerable (BBC 2019; Kiersz 2015; MIT Review
2013). Reports of driverless cars, computer that can beat humans at chess and other
games as well as systems that can diagnose cancer cells better than doctors are impress-
ive. McKinsey Global Institute (2017) have analysed the likely numbers of jobs that may be
lost and gained through automation and forecast that by 2030, 75 million to 375 million
workers (3–14 percent of the global workforce) will need to switch occupations. All
workers will need to adapt as their jobs change to incorporate increasingly capable
machines. Some will also need help to develop the social and emotional skills, creativity
and cognitive abilities that are hard to automate.
Civil engineering jobs may seem, to those outside our profession, to be vulnerable
because there is a widespread perception that engineers simply apply science that is
'true' with little or no creativity or judgement. The argument may be 'If computers can
control cars, they surely can make buildings'. Of course, as engineers we routinely use
software to analyse the performance of our designs. More recently we are developing systems
such as BIM to facilitate collaborative working. AI and robotics promise even more compu-
terisation of the design, construction and maintenance processes.

Susskind and Susskind (2015) assert that in the past people have effectively established a
’grand bargain’ with the professions. The professions provide certain types of services for
which the people entrust them to act as gatekeepers of their particular expertise. Doctors
look after medical practical expertise, lawyers look after legal practical expertise, and
engineers look after engineering practice, and so on. But, they argue, as digital information
technology advances it is time to revisit the grand bargain and ask whether it is still fit for
purpose. They conclude that it isn't because we now have the means to share practical
expertise much more widely. Their central question is 'How do we share expertise in
society?' In other words they, and others, are challenging our profession to justify itself as
never before.
One possible consequence of an inadequate response to these challenges could be that
the big digital technology companies may think of moving into the construction industry.
They may see an opportunity to automate and profit from it as they have done in other
industries such as retailing, manufacturing and telecommunications.
But the picture is not all doom and gloom. The ICE State of the Nation report 2017 states
that an improved use of smart technology, data and analytics offers a way forward in
improving the performance of UK infrastructure and addressing the UK’s ongoing pro-
ductivity deficit.
New technologies are 'already transforming the built environment, but also the shape of our
industry itself. They will also change expectations and demands around the level of trust
required in technologies … autonomous vehicles, machine learning, the internet of things,
artificial intelligence … will change how our existing infrastructure is used … as well as the
skills required to design, build and maintain it.'

An Arup report in 2017 is again cautionary. It states that


The civil engineer is no stranger to digital innovation … but modest progress … with isolated
instances of innovation, focusing on efficiently producing outputs, blinkered to individual
infrastructure and the short term. Civil engineers paint digital innovation as an unappealing
pursuit … high risk, limited value, and a poor understanding of their role in its delivery … If
the civil engineering profession continues its current approach … no matter the vigour, we
predict new competitors … and shrinking influence. Civil engineers need to perceive the
wider potential of digital innovation … . The change needed is substantial in areas that lie
outside the profession’s comfort zone … rewrite its strategy to digital innovation. It must
broaden its world view, invest in soft infrastructures, and draw on its roots as an innovative
profession that brings great societal good.

Shanks (2018) calls for structural engineers to look beyond their discipline to invest in
long-term relationships and innovate through better understanding of how industry
works.
As more engineering tasks are automated some people have begun to question what it is
that engineers contribute. In one discussion (Thornton Tomasetti 2019) one opinion was that
A lot of work that architects and engineers do is straight production: that’s going to go away
first. Then basic decision-making is going to go away—you structural engineers are probably
the first to go because you’ve been so rigorous about proceduralising all your knowledge and
your work.

An American structural engineering report (ASCE, SEI 2013) states


Today we see ourselves in a shrinking space because many of the technical tasks that a struc-
tural engineer used to do are now being done automatically by computers or completed
overseas. We have further limited the space by developing standards and codes that attempt
to define the design parameters of upwards of 95% of the structures being built today.

What jobs are most at threat? The chief economist of the Bank of England has said that
administrative, clerical and production tasks were most vulnerable (Frey 2018). Frey and
Osborne (2013) have estimated the probability of computerisation (by which they mean
computers taking over a significant proportion of the work) for 702 detailed occupations.
They conclude that, at that time, about 47 percent of total US employment was at risk,
with wages and educational attainment showing a strong negative relationship with an
occupation's probability of computerisation. Civil engineering ranks 84th (with a probability of computerisation p = 0.019) and
civil engineering technicians come in at number 413 (p = 0.75). If we contrast these figures
with those for sales engineers (who rank as 14th) we get some insight into the assump-
tions made in their analysis. The jobs that are the most unlikely to be computerised,
according to the authors, are those that help others, involve negotiation, are creative or
require cognitive skills. They quote examples such as therapists, surgeons, choreographers,
pre-school teachers, and clergy. Those that are most likely to be computerised are
those that involve assembling and manipulating objects.

Should civil engineers be worried for their jobs?


Both the McKinsey report (2017) and the work of Frey and Osborne (2013) are under-
pinned by the data contained in the Occupational Information Network
(O*NET Content, Summary Model 2019) – developed under the sponsorship of the U.S.
Department of Labor/Employment and Training Administration. O*NET is one example
of big data – but is it big enough to capture the important subtleties that are so important
to good engineering? If cars can be driverless (even though car accidents kill large
numbers of people) the argument may be that although engineering failures can also
kill people, they can still be computerised. More familiar data is held by UK-SPEC (2014),
the UK standard for professional engineering competence and the USA Civil Engineering
Body of Knowledge (CEBOK 2019) as statements of civil engineering competence and
commitment. These data set standards and are central to defining what civil engineers
do. They make common sense assumptions about personal qualities such as what it is
to understand, to apply and to use engineering knowledge. However, they do not, in
any depth, identify the special skills that demarcate engineers from others – apart from
our specialist knowledge. Two factors are important in driving a need to examine these
skills in more depth. The first is the poor understanding of many outside of the profession
about what civil engineers do and what is special about what they bring to the ‘construc-
tion-party’. The second is the important uses to which these data may be put in speculat-
ing and making business investment decisions – particularly concerning possible future
computerisation. Both factors have a potential for large scale adverse consequences
from inappropriate use of these data if the special skills of practical understanding, intelli-
gence, wisdom and ethics are not sufficiently identified and valued.

What is behind the new computerisation claims?


The short answer is algorithms that ’learn’ to recognise patterns in big data sets. O*NET, as
we have seen, is an example. More widely known perhaps are the artificial neural nets
(ANN) that are the basis for many applications such as driverless cars. A longer answer is
that an ANN consists of a very large graph of nodes connected by links, with numerical
weightings attached to the links. The patterns of weights form a transfer
function between a set of inputs and a set of outputs. The patterns are established using
algorithms that ’train’ the ANN. These training algorithms operate on examples of known
inputs that lead to known outputs to adjust the weightings of the ANN. Many thousands of
training examples may be needed. The algorithms are basically simple, but the massive
numbers of interactions used in the training make the overall effect quite complex and
totally opaque. The goal of the training is to find an effective transfer function that
allows new outputs (predictions) from new inputs.
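As a purely illustrative sketch of this training loop (a toy two-layer network learning the XOR function; the layer sizes, learning rate, sigmoid activation and random seed are my assumptions, not drawn from any system cited here), the idea might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training examples of known inputs that lead to known outputs (here XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights and biases for a small 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """The learned transfer function from inputs to outputs."""
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

loss_before = np.mean((predict(X) - y) ** 2)

# 'Training': repeatedly adjust the weightings so that the known inputs
# reproduce their known outputs (simple gradient-descent backpropagation).
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer error signal
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

loss_after = np.mean((predict(X) - y) ** 2)
print(loss_before, '->', loss_after)  # the error falls as the weights are trained
```

Even in this toy, the trained weights offer no account of why a given input maps to a given output: the opacity is visible with four training examples, let alone many thousands.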
The ANN transfer functions are surprisingly effective but totally opaque because they
are unable to explain or, more importantly, justify their conclusions; moreover their range
of applicability and numbers of parameters are rarely stated. Unless and until new research
provides ways in which this can be done, an ANN cannot in any way be compared to a
professional human decision maker. Without an ability to justify a decision to exercise a
professional duty of care for public safety under the law of tort, no ANN can replace a pro-
fessional decision maker. For a detailed overview of the various implementations of neural
nets see (Neural Nets 2019).

Understanding and re-evaluating the services that civil engineers provide


Members of the general public who are not technically qualified could be justified in
asking us civil engineers 'If computers can control driverless cars, might they not be
able to design and build a skyscraper building?’ It remains to be seen how many
people will travel in driverless cars – the answer may well depend on perceived safety
levels and numbers of accidents. Our reply to the question might be 'If you are willing
to travel in a driverless car, would you live on the 100th floor of a building designed by
AI and built by robots?’ If the answer is yes, then we need to be able to explain why
that may not be a great idea.
Public understanding of what engineers do is known to be poor (Blockley 2020). As
engineers we think of ourselves as technical problem solvers. Our culture developed
over generations and passed through our education and training focusses us on the
behaviour of physical systems and how we put them together in new ways to provide
for some human need. We rarely describe engineering as a 'people profession' – one that
provides 'things' and systems of 'things' that are intended to improve the human condition.
Those ‘things’ vary from places to live and work to modes of transport to means of com-
municating and to attacking and defending ourselves. Historically therefore we have
developed our use of computers to analyse physical systems. We have done that very suc-
cessfully through techniques such as the finite element method. Only recently have we
started to think about how we use computers to help our workflow with systems such
as BIM. Progress on systems like BIM is slow for many complex reasons not the least of
which is that our industry is very fragmented. However, another important factor is that
we have a poor grasp of how to structure our problems, model our workflow, and
manage the impact of uncertainties and the inevitable unintended consequences of our
decisions.

It is unsurprising therefore that many people, inside and outside of the profession see
the potential for improvement. But how do we guard against inappropriate change that
may promise much but deliver very little? History tells us that people and organisations
are adaptable and that new types of work will emerge. Therefore, the emphasis should
perhaps be on how we are resilient and prepare for the inevitable changes.
In order to assess the impact of computerisation on jobs we need to look much further
and deeper than Frey and Osborne's overview analysis, and we need to develop what
UK-SPEC and CEBOK mean by being able to understand, apply and use engineering
knowledge to solve engineering problems. What are the human attributes that are necessary for
good and safe engineering that cannot be computerised?
The common sense understanding of understanding is, at its simplest, 'being able to
restate a problem in one’s own words’. A more nuanced definition might differentiate
between understanding conceptually and mathematically. Still further as well as being
able to explain, we should add interpreting, applying, placing in context or perspective,
empathising and having self-knowledge. UK-SPEC and CEBOK include examples of
how these levels of understanding might be demonstrated.
A different, but potentially more revealing, approach taken by Blockley and Robertson
(1983) was to analyse the characteristics of a good civil engineer. There is not the space
here to compare this analysis with the O*NET database and UK-SPEC or CEBOK, but a
casual glance demonstrates that the qualities required of a civil engineer in practice are
much more subtle than a database (even of big data) can capture. In other words, we
can largely discount the conclusions of Frey and Osborne other than as an overview com-
parison. In the next sections of this paper I will simply focus on three high level essential
attributes, suggested by Blockley and Robertson (1983) that cannot easily be compu-
terised. These are technical soundness, professionalism and personal qualities. Specifically,
I will focus on an important single aspect of each of the three as: understanding as a
necessary quality of technical soundness, practical wisdom as an under-recognised
attribute of professionalism, and ethics as a necessary personal quality. First, however, I will
examine how we need to get better at the way we formulate our problems and the
role of computers and AI in assisting workflow.

Delivering workflow with AAI – Assistive Artificial Intelligence


As already stated, at its simplest, to understand is to be able to restate a problem in one’s
own words. More deeply it is the ability to think and act flexibly with some phenomena. It
is manifest when put to work i.e. to explain, to perform and to predict. A person who
understands can do things that a person who does not have that understanding cannot
do. The first proposition we must establish is that ‘AI from big data cannot yet provide
understanding’ at any level comparable to a human being and specifically that of a pro-
fessionally qualified engineer. Perhaps at some time in the future computers may be
able to provide explanations by associating patterns of connections between inputs
and outputs. But as any statistician knows such associations are not necessarily causal.
They will contain important contextual and latent relationships and a potential for
future unintended consequences. AI may be able to perform by associating inputs with
actions as outputs as in robotics and driverless cars but those predictions may not be prac-
tically intelligent or wise at the level required of a professional.

The reason is simple and complex at the same time. Artificial intelligence is presently a
very primitive and very specific form of intelligence – good at some things and poor at
others. AI is good at perceiving and learning but very poor at abstracting and reasoning.
AI systems can occasionally come up with bizarre answers that no human would counte-
nance – for example identifying a toothbrush as a baseball bat. The systems can also be
tricked into giving wrong answers by overlaying invisible distorting patterns designed
by a malevolent expert. Bias or distortion, intended or unintended, is built into the data
– for example gender, racial or other cultural assumptions. Any human being can
recognise the face of another human being after one glimpse, whereas an AI programme
or ANN requires many thousands of training examples – it is far from mimicking human
intelligence. As one acerbic commentator put it, if a computer ANN had to learn how not
to get killed by a car then it would have to get killed before knowing how not to get killed.
Of course, this is not to say that AI cannot be useful. It may perhaps be better to interpret
it as AAI or Assistive Artificial Intelligence (although the term AI is so embedded it will
probably continue to be used). In this way we use it just as we use software for finite
element analysis or as a BIM process. But the modern context has changed. The challenges
outlined at the start of this paper are complex with a high degree of uncertainty. To tackle
them with the help of AAI we need to improve the ways in which we structure our pro-
blems and capture the workflow needed to deliver our objectives. We need a much
firmer grasp of the nature of the resilience we will require to cope with unexpected and
unintended outcomes. We need to be robust in defending ourselves against inappropriate
automation and to make sure that AAI, data from the IoT and robotic interventions are
controlled through a human duty of care. In other words, we need a theory of
workflow, incorporating AI, which is suitable for an age of increasing computerisation.
We need not just better workflow models but also a theory behind the models
which captures the essence of practical wisdom and capitalises on AAI without compro-
mising professional ethics of duty of care.
Workflow models derive from the way we structure our problems. The latter has been
driven by software and systems analysts and merits much more attention by civil engin-
eers. For further reading see (Blockley and Godfrey 2017 Section 2.3). The simplest
workflow models are flowcharts (used widely for writing computer code) which are
simple, easily understood but very limited in scope and much too prescriptive. Currently
the most developed workflow models are those of IDEF (2019) but they are rarely used in
construction probably because whilst they are less prescriptive than simple flowcharts,
they rapidly become difficult to manage for large complex projects. Of course, the most
widely used, but perhaps not usually described as workflow models, are critical path
networks, typically translated into Gantt bar charts. Whilst critical path analyses are
undoubtedly useful, they are about just one aspect – time – and not 'rich' enough to capture all
necessary aspects of ‘joined-up’ workflow.
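The time-only nature of critical path analysis shows in how little information it needs. The following sketch (the task names, durations and dependencies are invented purely for illustration) computes the project duration and critical path as a longest-path calculation over the dependency network:

```python
# Tasks: name -> (duration in days, list of predecessor tasks).
# This small network is invented for illustration only.
tasks = {
    'design':     (10, []),
    'procure':    (15, ['design']),
    'groundwork': (12, ['design']),
    'structure':  (20, ['procure', 'groundwork']),
    'cladding':   (8,  ['structure']),
    'services':   (10, ['structure']),
    'handover':   (2,  ['cladding', 'services']),
}

earliest_finish = {}

def finish(name):
    """Earliest finish = duration + latest earliest-finish of predecessors."""
    if name not in earliest_finish:
        duration, preds = tasks[name]
        start = max((finish(p) for p in preds), default=0)
        earliest_finish[name] = start + duration
    return earliest_finish[name]

project_end = max(finish(t) for t in tasks)

# Walk back through the predecessors that govern each finish time
# to recover the critical path.
path, current = [], max(tasks, key=finish)
while current:
    path.append(current)
    preds = tasks[current][1]
    current = max(preds, key=finish) if preds else None
print(project_end, list(reversed(path)))
# 57 ['design', 'procure', 'structure', 'services', 'handover']
```

Everything here is a duration; there is no room in the model for who, why, or how dependable the evidence behind each task is, which is exactly the poverty the text describes.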
In order to introduce a new approach to problem structuring and modelling of
workflow based on a sound theoretical and philosophical foundation, Blockley and
Godfrey (2000, 2017) introduced the interacting objects process model (IOPM). The meth-
odology was first introduced to tackle the challenges of the Egan report (DETR 1998) as
well as those mentioned earlier. The IOPM is based on 5 axioms, 7 principles and 17 Cor-
ollaries (Engineering Synergy 2019) and on a new and extended understanding of what
constitutes a systemic process. In all other workflow models a process is simply a
transformation of inputs to outputs. In the IOPM a process is a much richer idea. It is a 'sys-
temic’ process of a potential that drives a flow of change against an impedance. The
potential derives from answers to questions ‘why’. Examples of potential in hard physical
systems are changing velocity and voltage. In soft systems potential is purpose. The flow of
change is captured in answers to questions, ‘who, what, where and when’. Examples of
flow in hard systems are force and electrical current. Flow in soft systems is change –
changes in people (who), changes in states of affairs (what), in context (where) and in
time (when). Impedance is dissipation of energy (friction, damping, resistance), storage
of potential (capacitance) and storage of flow (inductance) – and all are captured by
models based on answers to questions ‘how’.
The IOPM approach:

(1) facilitates an integrated (joined-up) view of 'big-picture' high level models with
detailed models;
(2) is based on a strong theoretical and philosophical foundation;
(3) permits the user to navigate through layers of complexity;
(4) provides a common structure of concepts and relationships for all aspects or ‘views’ of
the system;
(5) models the drivers and flows of change in ‘active data’;
(6) is equally applicable to ‘hard’ physical systems as well as ‘soft’ human systems;
(7) can model all aspects of uncertainty including fuzziness and incompleteness;
(8) highlights the importance of good leadership.
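As a hedged sketch of how such layered interacting object processes might be represented in software (the data structure, the example numbers and the roll-up rule – weakest 'for', strongest 'against' across jointly necessary children – are my own illustrative assumptions, not Blockley and Godfrey's actual calculus):

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """One interacting object process in an IOPM layer."""
    name: str               # what
    owner: str              # every process must have a named owner (who)
    why: str                # the potential driving the process
    green: float = 0.0      # Italian Flag: evidence for success
    red: float = 0.0        # Italian Flag: evidence against success
    children: list = field(default_factory=list)  # sub-processes one layer down

    @property
    def white(self):
        # Incompleteness: the extent to which we don't know.
        return 1.0 - self.green - self.red

    def rolled_up(self):
        """Roll evidence up from the children (illustrative rule only):
        success of all children is jointly necessary for the parent,
        so take the weakest 'for' and the strongest 'against'."""
        if not self.children:
            return self.green, self.red
        flags = [c.rolled_up() for c in self.children]
        return min(g for g, r in flags), max(r for g, r in flags)

# Illustrative layered model: owners and flag values are invented.
erect = Process('erecting the structure', 'Ann', 'provide the frame', green=0.2, red=0.6)
fitout = Process('fitting out', 'Bob', 'make the building usable', green=0.7, red=0.1)
build = Process('constructing the building', 'Carol', 'deliver the project',
                green=0.9, red=0.0, children=[erect, fitout])

# The owner's own flag (0.9 green) disagrees with the rolled-up evidence
# from the layer below - the model's cue that the process owners need to talk.
print(build.green, build.rolled_up())  # 0.9 (0.2, 0.6)
```

The point of the sketch is only that the structure carries the who, why and what alongside the evidence, so a mismatch between layers is detectable rather than buried.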

Figures 1–6 show examples. Figure 1 illustrates how models can be built in layers of pro-
cesses so that the top layers allow a big picture overview whilst lower layers capture detail.
The layers are logically connected in that success in a group of processes at one level is
deemed to be jointly necessary and sufficient for success of the process in the layer above
to which the group is connected. The processes within each layer are connected in time so
that a critical path may be calculated and shown on a horizontal time axis. Of course, in the
diagram there is space only to show a few processes but in software each layer could of
course fill a number of complete computer screens. Figure 2 illustrates the differing views
and information about a single interacting object capturing the potential as answers to

Figure 1. Layers of connected interacting object processes.


CIVIL ENGINEERING AND ENVIRONMENTAL SYSTEMS 205

Figure 2. Simultaneous projected views of the attributes of an interacting object process.

questions ‘why’, the flow as answers to questions (‘who, what, where and when’) and the
transformations as answers to the questions ‘how’. The diagram also shows the colours of
an ‘Italian Flag’ (Blockley and Godfrey 2017). The flag expresses the level of confidence that
the process owner has in the dependability of the available evidence that this single
process will be successful – green represents evidence for, red represents evidence
against and white represents the incompleteness of the evidence or the extent to
which we don’t know. Note that the relationships between this single interacting object
and other objects is the same for all attributes so that the structure of relationships
between people (who) and things (what) is identical. This is important because it
reduces the likelihood of confusions where structurally related but differing aspects (such
as job titles, organisational charts, functional diagrams and process/activity diagrams) are
often currently modelled in different ways.

Figure 3. Making sandwiches.

Figure 4. Some attributes of 'Making sandwiches'.

Figures 3 and 4 show a simple example of a
process of 'making sandwiches' with associated critical paths in Figure 3 to illustrate
some of the 'when' attributes. Of course, the detail of information about why, how,
who, what, where and when can be much fuller than shown in Figure 4 and held in
separate linked files. Figure 5 shows a simple IOPM for 'constructing a building' with some
'when' (time) attributes for illustration.

Figure 5. Constructing a building.

It is worth noting that in this methodology every process must have a named process
owner. One of the important responsibilities of the process owner is to ensure that the
data is continually updated – including the Italian Flag. It is clear from Figure 5 that the
process owner for 'erecting the structure' has understood that there is a 'cladding'
problem (a high degree of red). However, that is not being recognised by the process
owner for 'constructing the building' (high degree of green) so the model is telling
them they need to talk.

Figure 6. An interacting object process model (IOPM) of a beam element.
Finally, Figure 6 shows an IOPM for a physical process – in this case a typical beam
element in a structure. The element has neighbouring IOPM beam elements either side
as well as a loading IOPM. The beam IOPM is receiving force and displacement ‘messages’
(values of shear force, bending moment and displacements – only forces are shown for
clarity) from adjacent elements. It then sends out messages based on familiar equilibrium,
compatibility and constitutive equations to its neighbours. The channels of communi-
cation are the degrees of freedom of the element. For more on the IOPM for modelling
physical hard systems see Blockley (1995).
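The element's 'messages' can be sketched with the standard Euler-Bernoulli beam element stiffness relation (the numerical values, the single-element mesh and the cantilever boundary conditions below are illustrative assumptions, not taken from Figure 6):

```python
import numpy as np

def beam_stiffness(EI, L):
    """Standard Euler-Bernoulli beam element stiffness matrix relating end
    displacements/rotations [v1, th1, v2, th2] to end shears/moments."""
    return (EI / L**3) * np.array([
        [ 12,    6*L,  -12,    6*L  ],
        [ 6*L,  4*L*L, -6*L,  2*L*L ],
        [-12,   -6*L,   12,   -6*L  ],
        [ 6*L,  2*L*L, -6*L,  4*L*L ],
    ])

EI, L, P = 1.0, 2.0, 5.0          # illustrative values only
K = beam_stiffness(EI, L)

# 'Messages in': a tip load on the free-end degrees of freedom of a
# cantilever (node 1 fixed, so only the [v2, th2] channels are active).
K_free = K[np.ix_([2, 3], [2, 3])]
v2, th2 = np.linalg.solve(K_free, [P, 0.0])

# 'Messages out': the element reports end shears and moments consistent
# with equilibrium via its constitutive (stiffness) relation.
end_forces = K @ np.array([0.0, 0.0, v2, th2])

print(v2, P * L**3 / (3 * EI))    # tip deflection matches P*L^3/(3*EI)
```

In an IOPM assembly each such element would exchange these force and displacement values with its neighbouring element objects through its degrees of freedom, rather than being assembled into one global matrix.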

Understanding and nurturing ingenuity and practical wisdom


With an improved system for problem structuring and workflow modelling we can begin
to see how data and evidence from whatever source (measurements, AAI, physical systems
analysis, other computer models but also crucially human professional judgement) can be
accommodated better and, crucially, what is special about what qualified professionals 'bring to
the party'. The key to that 'specialness' is in the Latin root of the word engineer – ingenuity
(Blockley 2012). I have previously argued that we need a principle of ingenuity (Blockley
2020) which says ‘value, nurture and develop practical wisdom’.
The first person to understand the special nature of practical wisdom was Aristotle – he
called it phronesis (Blockley 2019) – an idea unfortunately lost in the mists of time. Of
course, we cannot simply apply ideas from that long ago, but we can use them to build
our own understanding. Aristotle saw five ways of arriving at the truth – he called them
art (ars, techne), science (episteme), intuition (nous), wisdom (sophia), and practical
wisdom – sometimes translated as prudence (phronesis). Ars or techne (from which we get
the words art and technical, technique and technology) was concerned with production
but not action. The Greeks did not distinguish the fine arts as the work of an inspired indi-
vidual – that came only after the Renaissance. So techne as the modern idea of mere tech-
nique or rule-following was only one part of what Aristotle was referring to. Episteme (from
which we get the word epistemology or knowledge) was, to the Greeks, of necessity and
eternal; it was knowledge that cannot come into being or cease to be; it was demonstrable
and teachable and depends on first principles. Practical wisdom or phronesis was an intel-
lectual virtue of perceiving and understanding in effective ways and acting benevolently
and beneficently. It was not an art but necessarily involved ethics, not static but always
changing, individual but also social and cultural. Aristotle thought of human activity in three categories: praxis (action), poiesis (production – from which we get the word poetry), and theoria (contemplation – from which we get the word theory).
As I see it phronesis is a means towards an end arrived at through moral virtue. It is con-
cerned with the capacity for determining what is good for both the individual and the
community. It is a virtue and a competence, an ability to deliberate rightly about what
is good in general, about discerning and judging what is true and right but it excludes
specific competences (like deliberating about how to build a bridge or how to make a
person healthy). It is purposeful, contextual but not rule-following. It is not routine or
even well-trained behaviour, but rather intentional conduct based on tacit knowledge
and experience, using longer time horizons than usual, and considering more aspects,
more ways of knowing, more viewpoints, coupled with an ability to generalise beyond
narrow subject areas. Phronesis was not considered a science by Aristotle because it is vari-
able and context dependent. It was not an art because it is about action and generically
different from production.
Thomas Homer-Dixon (1995) wants to close the ‘ingenuity gap’ between those who adapt well to complex changes and those who do not. He wants to see us move from an authoritative, top-down, command-and-control culture to a mature collaborative culture that focuses on enabling others to be successful.
Of course, we humans have shown much ingenuity in the past but there is a danger of a
kind of ’technical triumphalism’ that claims we can meet our challenges simply by carrying
on as we have done. We need to understand that practical rigour is not the same as logical
rigour. Rigour is the strict enforcement of rules to an end. Mathematical logic is the ulti-
mate form of logical rigour: it has one value – truth. Theorems are deduced using
axioms (rules) which are true-by-definition. Physical science aims at precise truth but
truth as a correspondence to the facts, which have to be set in a context. Practical
rigour is much more complex. It is meeting a need by setting clear objectives involving
many values (some in conflict) and reaching those objectives in a demonstrably depend-
able and justifiable way. Practical rigour is about using dependable evidence to move a
process towards a specific goal. That is why the Italian Flag of evidence is so important to the IOPM. Practical rigour implies practical intelligence, which in turn implies practical
experience. In other words, experience is necessary but not sufficient for practical intelli-
gence – a capacity to learn, reason and understand practical matters. And practical intelli-
gence is necessary but not sufficient for practical rigour. That is because practical
intelligence and rigour require reflective learning and development on that experience.
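The Italian Flag measure of evidence can be given a minimal computational form. The sketch below follows the interval-probability reading of the flag – green for dependable evidence in favour of a proposition, red for evidence against, white for what remains unknown – and is an illustration under that assumption; the class and attribute names are invented here, not taken from this paper or any library.

```python
# A minimal sketch of the 'Italian Flag' measure of evidence, assuming
# the interval-probability reading: green = evidence for a proposition,
# red = evidence against, white = residual (unresolved) uncertainty.

from dataclasses import dataclass

@dataclass
class ItalianFlag:
    support_for: float       # dependable evidence for the proposition
    support_against: float   # dependable evidence against the proposition

    def __post_init__(self):
        # the two bodies of dependable evidence cannot exceed certainty
        assert 0.0 <= self.support_for and 0.0 <= self.support_against
        assert self.support_for + self.support_against <= 1.0

    @property
    def green(self):
        return self.support_for

    @property
    def red(self):
        return self.support_against

    @property
    def white(self):
        # incompleteness: whatever the evidence does not yet resolve
        return 1.0 - self.support_for - self.support_against

# 'The beam is safe': some evidence for, a little against, much unknown.
flag = ItalianFlag(support_for=0.6, support_against=0.1)
print(flag.green, flag.white, flag.red)
```

The white region is what makes the representation useful for practical rigour: it keeps incompleteness of evidence explicit instead of forcing every judgement into a single point probability.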

Practical wisdom is ethical


I see practical wisdom as having seven attributes – necessarily encompassing ethical value judgements:

(1) making it work – creating practical solutions to meet explicit needs and delivering a
system valued in a variety of ways, not just cost;
(2) creating appropriate models – understanding – working with nature – making sensible
approximations that respect nature. Far from being the cause of loss of rigour, as our
academic accusers may hold, the approximations of our models are the sources of the
practical rigour required to create a solution that meets the needs. Practical rigour
requires diligence and a duty of care that leaves no stone unturned, with no sloppy or slipshod thinking;
(3) considering the whole as well as the parts. The scientific approach is one where we
look at a problem, break it down into its separate components, take out the difficult
bits that we do not know how to solve and focus on what we can solve. It is a
process of selective inattention. Practical rigour does not have that luxury – it requires
a rigour that deals with the bits of the problem that we do not always understand too
well;
(4) making judgements. Professional opinions are not arbitrary; they are based on Karl Popper’s objective ‘world 3’ evidence (1979) of varying dependability. Opinion based
on experience may be less dependable than measurement or standard theory, but
it has to be testable against world 3 objective knowledge, ultimately, perhaps, in
the courts;
(5) exercising creative foresight. Practice requires the creativity to imagine what might
happen – how physical things will respond and how people might behave in future
situations or scenarios;
(6) developing and evaluating dependable evidence. The only clear way to judge the dependability of evidence is to subject it to as many tests as seem appropriate;
(7) feedback and learning. One of the seven habits of highly effective people identified by Covey (2005) is learning to improve or self-renewal.

As Sellman (2012) states, a wise practitioner (a phronimos) will recognise not only our competencies, i.e. knowing what we know, but also our uncompetencies, i.e. not knowing what we do not know. He writes
The competent practitioner is not concerned with merely getting through the work, not even
with mere skills acquisition; rather, he/she aspires toward the Aristotelian ideal of doing the
right thing to the right person at the right time in the right way for the right reason.

The virtues required are integrity, openness and honesty with the humility necessary to
acknowledge that there are things we do not know we do not know and the willingness
to act so as to rectify any identified deficits that threaten our claim to competent practice.
Ethics, as the moral rules that determine conduct, are central to wise practice as phronesis. Morals are concerned with what is good or bad, with what we ought to do rather than what is. Ethics is about values. Values are the worth we give to something – not simply financial but in all of the relevant ways. Phronesis is action after deliberation based on values, using
practical judgement informed by reflection and is pragmatic, variable, and dependent on context.
Can AAI based on big data be ethical? The value of data lies in what it ‘means’ in a given
context. In other words, how the data is interpreted – and that depends on who is doing
the interpretation. Meaning is the significance, importance or intention of the data for a
purpose and is highly contextual. So often in everyday life the meaning is uncontroversial
and clear – for example your height and weight. However, changes in those data will have
different implications according to context – for example weight loss can be one factor,
acting with others, as a warning sign to a medical doctor making a diagnosis. A crack in a brick wall or a reinforced concrete beam can stimulate very different interpretations in a member of the public worried about their property and in a structural engineer who judges it to be benign.
The professional issues around the interpretation of big data centre on the competence of the person or AI system doing the interpretation. Whilst the ethics of a wise practitioner lie within that practitioner, the ethics in AI lie in the often-unstated ethics assumed as the data is collected – the possibilities are many, profound and deeply embedded. The obvious ones are assumptions about race and gender. The more enigmatic are contextual assumptions contained in the theories and modelling used to create the data in the first place.
To understand the power of this point, consider what the philosopher Karl Popper, in his book Conjectures and Refutations (1976), states:
Twenty-five years ago I (began) a lecture with the following instructions: ’Take pencil and
paper; carefully observe, and write down what you have observed!’ They asked, of course,
what I wanted them to observe. Clearly the instruction, ’Observe!’ is absurd … . Observation
is always selective. It needs a chosen object, a definite task, an interest, a point of view, a
problem. And its description presupposes a descriptive language, with property words; it pre-
supposes similarity and classification, which in their turn presuppose interests, points of view,
and problems.

Popper’s point is that all observations contain assumptions which may or may not be made clear. Whilst those assumptions can be appreciated by a wise practitioner, they will be embedded and unknown in any AI system that relies purely on the data itself.

Summary and conclusions

(1) All professions are facing some formidable challenges from climate change and
increasing computerisation.
(2) The potential threats include widespread predictions of job losses and the dangers of
inappropriate unsafe automation. All workers will need to adapt, as their jobs change
to incorporate increasingly capable machines. Some will need help to develop new
social and emotional skills that are essential but hard to automate.
(3) As seen from the outside civil engineering jobs may seem to be vulnerable because
there is a widespread perception that engineers simply apply science that is ’true’
with little or no creativity and judgement.
(4) Some commentators go so far as to argue that we now have the means to share practical expertise much more widely, and they want to know how that could be done.
(5) Inappropriate business investment decisions based on misunderstandings of engineering expertise have a potential for large-scale adverse consequences.
(6) To counter these trends, we need to understand, articulate, identify, communicate
and value much more clearly the special skills of engineering ingenuity through prac-
tical understanding, intelligence and wisdom.
(7) Ethics is a basic constituent of practical wisdom. Its role in civil engineering needs to
be much more widely understood, appreciated and developed.
(8) Increasing computerisation will lead to new types of jobs developed through a
better, wider public understanding of what civil engineers do, how they do it as
well as large scale improvements in effectiveness in the way we structure problems
and workflows.
(9) New problem structuring methods must encompass and integrate both the ‘big
picture’ and the detail. Workflow modelling tools must derive from those methods
and be able to handle new types of uncertainty, such as incompleteness and unin-
tended consequences as well as incorporate new sources of evidence such as AI
and the IoT.
(10) The current emphasis on data is unhelpful. Data is not information. For data to become information it has to be interpreted, and that is always done within a decision process, at various levels of detail, in a defined and well-understood context.
(11) The conversation about AI needs to be changed to one about AAI or Assistive Artifi-
cial Intelligence whilst recognising that the term AI will continue in general use.
(12) We need a principle of ingenuity – ‘value, nurture and develop practical wisdom’ – because these are important, necessary qualities that cannot easily be computerised.
(13) Aristotle was the first to identify and describe practical wisdom. He called it phron-
esis and described it as a means towards an end arrived at through moral virtue. It
is the capacity for determining what is good for both the individual and the com-
munity. It is a virtue and a competence, an ability to deliberate rightly about what
is good in general, about discerning and judging what is true and right. This virtue, which has been lost in the mists of time through the rise of science and technical rationality, needs to be reclaimed since many practising engineers have it without articulating it.
(14) AI systems based on large data sets (big data) cannot yet provide understanding and
hence cannot exercise decision making equivalent to a professional duty of care.
(15) Big data contains unstated ethics, the effects of which may be many and profound
and deeply embedded. The obvious ones are assumptions about race and gender.
The more enigmatic are contextual assumptions contained in the theories and mod-
elling to create the data in the first place. Whilst those assumptions can be appreci-
ated by a wise practitioner they will be embedded and unknown in any AI system
that relies purely on the data itself.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributor
David Blockley is an Emeritus Professor of Civil Engineering and one-time Head of the Department of
Civil Engineering and Dean of Engineering at the University of Bristol, UK. He graduated from the University of Sheffield in 1964 and gained a PhD in 1967. He worked for the British Constructional Steelwork
Association in London before moving to Bristol. He holds a DSc from the University of Bristol, is a
Fellow of the Royal Academy of Engineering, of the Institution of Civil Engineers, and of the Insti-
tution of Structural Engineers. He was President of the Institution of Structural Engineers 2001–02.
He was a Non-Executive Director of Bristol Water plc 2003–2009.

References
Arup, University of Bristol, ICE. 2017. How can Civil Engineering Thrive in a Smart City World? London,
UK: Arup.
ASCE, SEI. 2013. “A Vision for the Future of Structural Engineering and Structural Engineers: A case for
change.” American Society of Civil Engineers, Structural Engineering Institute, USA.
BBC. 2019. “Will a Robot Take Your Job?” BBC, UK. Accessed April, 2019. https://www.bbc.co.uk/news/
technology-3406694.
Blockley, D. I. 1995. “Computers in Engineering Risk and Hazard Management.” Archives of
Computational Methods in Engineering 2 (2): 67–94.
ICE. 2017. State of the Nation 2017: Digital Transformation. London: Institution of Civil Engineers.
Blockley, D. I. 2019. “Practical Wisdom and Why we Need to Value It.” Accessed May, 2019. https://
blog.oup.com/2014/07/practical-wisdom-vsi/.
Blockley, D. I. 2020. Creativity, Problem Solving and Aesthetics in Engineering. Switzerland: Springer
Nature.
Blockley, D. I. 2012. Engineering: A Very Short Introduction. Oxford: Oxford University Press.
Blockley, D. I., and P. S. G. Godfrey. 2000. Doing it Differently. London: Thomas Telford.
Blockley, D. I., and P. S. G. Godfrey. 2017. Doing it Differently. 2nd ed. London: ICE.
Blockley, D. I., and C. Robertson. 1983. “An Analysis of the Characteristics of a Good Civil Engineer.”
Proceedings of the Institution of Civil Engineers, Part 2 75: 77–94.
ASCE. 2019. Civil Engineering Body of Knowledge: Preparing the Future Civil Engineer. 3rd ed. Accessed May, 2019. https://ascelibrary.org/doi/pdf/10.1061/9780784415221.
Covey, S. 2005. The 7 Habits of Highly Effective People. Simon & Schuster, USA.
DETR. 1998. Rethinking Construction. London: Department of the Environment, Transport and the
Regions.
Engineering Council. 2014. “UK-SPEC: UK Standard for Professional Engineering Competence.” 3rd ed. Accessed May, 2019. https://www.engc.org.uk/engcdocuments/internet/Website/UK%20SPEC%20third%20edition%20(1).pdf.
Engineering Synergy. 2019. Accessed May, 2019. http://myengineeringsystems.co.uk/5-axioms/.
Frey, C. B., Bank of England. 2018. “Attitudes Towards Automation Past Present and Future.” Accessed
April, 2019. https://www.bankofengland.co.uk/-/media/boe/files/events/2018/may/attitudes-
towards-automation-past-present-and-future.
Frey, C. B., and M. A. Osborne. 2013. The Future of Employment: How Susceptible are Jobs to Computerisation? Oxford: Oxford Martin School, University of Oxford.
Homer-Dixon, T. 1995. The Ingenuity Gap. Accessed May, 2019. https://homerdixon.com/wp-content/
uploads/2017/05/Homer-Dixon-The-Ingenuity-Gap-1995.pdf.
IDEF. 2019. Accessed May, 2019. http://www.idef.com/.
Kiersz, A. 2015. You’ve Got a Big Problem if you Lack ‘Interpersonal Skills’. USA: Business Insider.
McKinsey Global Institute. 2017. “Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation.” Accessed April, 2019. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/Future%20of%20Organizations/What%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/MGI-Jobs-Lost-Jobs-Gained-Report-December-6-2017.ashx.
MIT Technology Review. 2013. “How Technology is Destroying Jobs.” Accessed April, 2019. https://
www.technologyreview.com/s/515926/how-technology-is-destroying-jobs/.
Neural Nets. 2019. “A Basic Introduction To Neural Networks.” Accessed May, 2019. http://pages.cs.
wisc.edu/~bolo/shipyard/neural/local.html.
O*NET Content model. 2019. Accessed April, 2019. https://www.onetcenter.org/content.html.
Popper, K. 1976. Conjectures and Refutations. London: Routledge and Kegan Paul.
Popper, K. 1979. Objective Knowledge: An Evolutionary Approach. Rev. ed. Oxford: Clarendon Press.
Sellman, D. 2012. “Reclaiming Competence for Professional Phronesis.” In Phronesis as Professional
Knowledge: Practical Wisdom in the Professions, edited by E. A. Kinsella and A. Pitman, 115–130.
Alberta, Canada: Sense publishers. Chapter 9.
Shanks, J. 2018. “New Means of Construction: What Will be the Next Big Leap?” The Structural
Engineer. Accessed May, 2019. https://www.istructe.org/resources/blog/new-means-of-
construction-next-big-leap/.
Susskind, R., and D. Susskind. 2015. The Future of the Professions: How Technology Will Transform the
Work. Oxford: OUP.
Thornton Tomasetti. 2019. How Far Can Automation Go? Accessed April, 2019. https://www.
thorntontomasetti.com/how_far_can_automation_go/.
