
14/58 HUMAN FACTORS AND HUMAN ERROR

Table 14.15 General estimates of error probability used in the Rasmussen Report (Atomic Energy Commission,
1975)
Estimated error probability   Activity

10⁻⁴   Selection of a key-operated switch rather than a non-key switch (this value does not include the error of decision where the operator misinterprets situation and believes key switch is correct choice)

10⁻³   Selection of a switch (or pair of switches) dissimilar in shape or location to the desired switch (or pair of switches), assuming no decision error. For example, operator actuates large-handled switch rather than small switch

3 × 10⁻³   General human error of commission, e.g. misreading label and therefore selecting wrong switch

10⁻²   General human error of omission where there is no display in the control room of the status of the item omitted, e.g. failure to return manually operated test valve to proper configuration after maintenance

3 × 10⁻³   Errors of omission, where the items being omitted are embedded in a procedure rather than at the end as above

3 × 10⁻²   Simple arithmetic errors with self-checking but without repeating the calculation by re-doing it on another piece of paper

1/x   Given that an operator is reaching for an incorrect switch (or pair of switches), he selects a particular similar appearing switch (or pair of switches), where x = the number of incorrect switches (or pairs of switches) adjacent to the desired switch (or pair of switches). The 1/x applies up to 5 or 6 items. After that point the error rate would be lower because the operator would take more time to search. With up to 5 or 6 items he does not expect to be wrong and therefore is more likely to do less deliberate searching

10⁻¹   Given that an operator is reaching for a wrong motor operated valve (MOV) switch (or pair of switches), he fails to note from the indicator lamps that the MOV(s) is (are) already in the desired state and merely changes the status of the MOV(s) without recognizing he had selected the wrong switch(es)

1.0   Same as above, except that the state(s) of the incorrect switch(es) is (are) not the desired state

1.0   If an operator fails to operate correctly one of two closely coupled valves or switches in a procedural step, he also fails to correctly operate the other valve

10⁻¹   Monitor or inspector fails to recognize initial error by operator. Note: with continuing feedback of the error on the annunciator panel, this high error rate would not apply

10⁻¹   Personnel on different work shift fail to check condition of hardware unless required by checklist or written directive

5 × 10⁻¹   Monitor fails to detect undesired position of valves, etc., during general walk-around inspections, assuming no checklist is used

0.2–0.3   General error rate given very high stress levels where dangerous activities are occurring rapidly

2^(n−1)x   Given severe time stress, as in trying to compensate for an error made in an emergency situation, the initial error rate, x, for an activity doubles for each attempt, n, after a previous incorrect attempt, until the limiting condition of an error rate of 1.0 is reached or until time runs out. This limiting condition corresponds to an individual's becoming completely disorganized or ineffective

1.0   Operator fails to act correctly in first 60 seconds after the onset of an extremely high stress condition, e.g. a large LOCA

9 × 10⁻¹   Operator fails to act correctly after the first 5 minutes after the onset of an extremely high stress condition

10⁻¹   Operator fails to act correctly after the first 30 minutes in an extreme stress condition

10⁻²   Operator fails to act correctly after the first several hours in a high stress condition

x   After 7 days after a large LOCA, there is a complete recovery to the normal error rate, x, for any task
Notes:
(1) Modifications of these underlying (basic) probabilities were made on the basis of individual factors pertaining to
the tasks evaluated.
(2) Unless otherwise indicated, estimates of error rates assume no undue time pressures or stresses related to accidents.
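The time-stress entries in Table 14.15 embody a simple doubling rule: under severe time stress the initial error rate x doubles on each attempt after a previous incorrect one, capped at 1.0. The rule can be written out directly (a sketch for illustration only; the starting error rate of 0.1 is an assumed example value, not a figure from the report):

```python
def stressed_error_rate(x, n):
    """Error rate on attempt n under severe time stress (Table 14.15).

    The initial error rate x doubles for each attempt after a previous
    incorrect attempt: rate = 2**(n - 1) * x, capped at the limiting
    value of 1.0.
    """
    return min(2 ** (n - 1) * x, 1.0)

# Example: an activity with an assumed initial error rate of 0.1
for n in range(1, 6):
    print(n, stressed_error_rate(0.1, n))
```

On this model an operator starting at a 10% error rate reaches the limiting condition (complete disorganization, error rate 1.0) by the fifth attempt.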

14.25.8 Human error probabilities: Rasmussen Report
One of the earliest sets of estimates for the probability of human error was that given in the Rasmussen Report as shown in Table 14.15. These estimates are mostly of the type used in THERP, but they include the time reliability correlation just referred to.

14.25.9 Human error probabilities: hazard analysis
Estimates of human error probability are given in a number of fault trees. Operator error occurs in the tree as errors which either initiate or enable the fault sequence and as errors which constitute failures of protection. Table 14.16 is a summary by Lees (1983a)

07:22 7/11/00 Ref: 3723 LEES – Loss Prevention in the Process Industries Chapter 14 Page No. 58

Table 14.16 Some estimates(a) of operator error used in fault tree analysis (Lees, 1983a; after Lawley, 1974, 1980)
(Courtesy of the Institution of Chemical Engineers)
Crystallizer plant
Probability
Operator fails to observe level indicator or take action 0.04
Operator fails to observe level alarm or take action 0.03
Frequency
(events/year)
Manual isolation valve wrongly closed (p) 0.05 and 0.1
Control valve fails to open or misdirected open 0.5
Control valve fails shut or misdirected shut (l) 0.5
Propane pipeline
Operator fails to take action: Time available Probability
To isolate pipeline at planned shut-down 0.001
To isolate pipeline at emergency shut-down 0.005
Opposite spurious tank blowdown given alarms and flare header signals 30 min 0.002
Opposite tank low level alarm 0.01
Opposite tank level high given alarm with 5–10 min
(a) controller misdirected or bypassed when on manual 0.025
(b) level measurement failure 0.05
(c) level controller failure 0.05
(d) control valve or valve positioner 0.1
Opposite slowly developing blockage on heat exchanger revealed as heat transfer limitation 0.04
Opposite pipeline fluid low temperature given alarm 5 min 0.05
Opposite level loss in tank supplying heat transfer medium pump given no measurement (p) 5 min 0.2
Opposite tank blowdown without prior pipeline isolation given alarms 30 min
which operator would not regard as significant and pipework icing
(a) emergency blowdown 0.2
(b) planned blowdown 0.6
Opposite pipeline fluid temperature low given alarm 0.4
Opposite pipeline fluid temperature low given alarm Limited 0.8
Opposite backflow in pipeline given alarm Extremely short 0.8
Opposite temperature low at outlet of heat exchanger given failure of measuring instrument common to control loop and alarm 1
Misvalving in changeover of two-pump set (stand-by pump left valved 0.0025/changeover
open, working pump left valved in)
Pump in single or double operation stopped manually without isolating pipeline 0.01/shut-down
Low pressure steam supply failure by fracture, blockage or isolation error (p) 0.1/year
Misdirection of controller when on manual (assumed small proportion of time) 1/year
Notes:
(a) l, literature value; p, plant value; other values are assumptions.

of the estimates used in two fault trees published by Lawley (1974b, 1980). Errors which result in failure of protection (expressed as probabilities) predominate over errors which initiate or enable the fault sequence (expressed as frequencies). Initiating and enabling errors tend to be associated with an item of equipment, and protection errors with a process variable. Such a variable may have (1) no measurement, (2) measurement only, or (3) measurement and alarm, and this is an important feature influencing the error rate.

The great majority of figures given are assumed values, with a few values being obtained from the literature and a few from the works. Engineering judgement is used in arriving at these values and the values selected take into account relevant influencing factors such as variable measurement and alarm and time available for action. The way in which allowance is made for these influencing factors is illustrated by the following extracts from the supporting notes:

There is therefore a very high probability that the operator would be made aware of a spurious blowdown condition by the alarms and this would be augmented by observation of excessive flaring and header noise which would highlight the cause of the problem. Because alarms will be set quite close to normal operating pressure and level, there would be almost 30 min available for action before the pipeline is chilled to −15°C.


and
The probabilities quoted are based on experience assuming that 5 min would be available for action, and including allowance for failure of the alarm. They take into account factors such as whether or not the operator would be in close attendance at the time of the fault, ease of diagnosis of the problem, whether or not the fault could be corrected from the control room or only by outside action, reluctance to shut down the export pumps until correction of the fault has been attempted because back-up trip protection is provided, etc.
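In a fault tree the two kinds of estimate in Table 14.16 combine multiplicatively: an initiating error expressed as a frequency and a protection failure expressed as a probability give a hazard frequency. The pairing below is an illustrative sketch using two values from the table, not one of Lawley's actual tree branches:

```python
# Hypothetical fault tree branch (illustration only): an initiating error
# expressed as a frequency combines with a protection failure expressed
# as a probability to give a hazard frequency.
valve_misdirected = 0.5        # events/year: control valve misdirected open
operator_misses_alarm = 0.03   # probability: operator fails to observe level alarm

hazard_frequency = valve_misdirected * operator_misses_alarm
print(hazard_frequency, "events/year")
```

This is the reason the table is careful to label each entry as a probability or a frequency: mixing the two units in an AND gate is only meaningful when at most one input is a frequency.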

14.26 Assessment of Human Error: Qualitative Methods
The foregoing account has described some early
approaches to human reliability assessment in support
of probabilistic risk assessment. It is now necessary to
backtrack a little and revert to a consideration of
qualitative methods for analysis of human error.
Methods are available which may be utilized to identify
and so reduce error-likely situations and also to support
incident investigation. The elements of such methods are
the underlying model of human reliability, the taxonomy
of human error and the analysis technique.
These methods may be used in their own right to
reduce error or as a stage in a human reliability
assessment method. Their significance in the latter
application is that they provide a structured approach to
gaining understanding of the problem. In the absence of
a high quality technique for this essential preliminary
stage, quantification is premature.
Task analysis may be regarded as the prime technique, or rather family of techniques, but there are also a
number of others. Early work in this area was that of
J.R. Taylor (1979), who described a variety of
approaches.
A method based on hierarchical task analysis is
predictive human error analysis, described below.
Another method is the work analysis technique
described by Pedersen (1985).
A fuller discussion of human error analysis methods is
given by Kirwan (1990) and the CCPS (1994/17).

14.26.1 Some error analysis strategies


Against the background of a long-term programme of work on human error, J.R. Taylor (1979) has developed a set of four error analysis strategies:
(1) action error method;
(2) pattern search method;
(3) THERP;
(4) sneak path analysis.
Figure 14.23 Outline structure of action error method (J.R. Taylor, 1979. Reproduced by permission.)

The third of these has already been outlined and is considered further below. The others are now described.

14.26.2 Action error method
The action error method is applicable to a sequence of operator actions which constitute intervention on the plant. The structure of the sequence takes the form: action/effect on plant/action/effect on plant... An outline of the structure of the procedure in the form of a cause–consequence diagram is shown in Figure 14.23. The range of errors handled is shown in Table 14.17. Usually it is found that for any reasonably large operating procedure it is practical to take into account


only single initial errors, although in a few cases it may be possible to use heuristic rules to identify double errors which it is worthwhile to explore. For example, one error may result in material being left in a vessel, while a second error may result in an accident arising from this.

Taylor states that the method is not suitable for quantitative assessment, because the spread of error rates on the individual elements is considered to be too wide. Factors mentioned as influencing these error rates are cues, feedback and type of procedure (freely planned, trained).

Table 14.17 Operator errors addressed in action error method (J.R. Taylor, 1979. Reproduced by permission.)

Cessation of a procedure
Excessive delay in carrying out an action or omission of an action
Premature execution of an action – too early
Premature execution of an action – preconditions not fulfilled
Execution on wrong object of action
Single extraneous action
In making a decision explicitly included in a procedure, taking the wrong alternative
In making an adjustment or an instrument reading, an error outside tolerance limits

14.26.3 Pattern search method
The pattern search method is addressed to the problem that an accident is typically the result of a combination of operator errors. For such cases detailed analysis at the task element level is impractical for two reasons. One is the combinatorial explosion of the number of sequences. The other is that the error rates, and above all the error rate dependencies, are not determinable.

An important feature of such accidents is that they may have a relatively long sequence of errors, say 3 to 5, which have a common cause, such as error in decision-making, in work procedure or in plant state assessment. Often the sequence is associated with an unrevealed plant failure.

The pattern search method is based on identifying a common cause error, developing its consequence, perhaps using an event tree, and using the results to `steer' the construction of the fault tree.

14.26.4 Sneak path analysis
Sneak path analysis is concerned with the identification of potential accident situations. It is so called by analogy with sneak circuits. It seeks to identify sources of hazard such as energy or toxins and targets such as people, critical equipment or reactive substances. The standpoint of the analysis is similar to the accident process model of Houston (1971) described in Chapter 2.

For an accident to occur it is necessary for there to be some operator action, operator error, equipment failure or technical sequence. A search is made to determine whether any of these necessary events can occur. In examining operator error attention is directed particularly to actions which are `near' to the necessary error. Nearness may be temporal, spatial or psychological. Often such an action is very near to the normal operator action.

14.26.5 Predictive human error analysis (PHEA)
Predictive human error analysis (PHEA) is described by Embrey and co-workers (e.g. Murgatroyd and Tait, 1987; Embrey, 1990) and by the CCPS (1994/17). PHEA uses hierarchical task analysis to discover the plan involved in the task, combined with the error classification shown in Table 14.18. The task is then analysed step by step in terms of the task type, error type, task description, consequences, recovery and error reduction strategy.

Table 14.18 Error classification for predictive human error analysis (Center for Chemical Process Safety, 1994/17) (Courtesy of the American Institute of Chemical Engineers)

Action
A1 Action too long/short
A2 Action mistimed
A3 Action in wrong direction
A4 Action too little/too much
A5 Misalign
A6 Right action on wrong object
A7 Wrong action on right object
A8 Action omitted
A9 Action incomplete
A10 Wrong action on wrong object

Checking
C1 Checking omitted
C2 Check incomplete
C3 Right check on wrong object
C4 Wrong check on right object
C5 Check mistimed
C6 Wrong check on wrong object

Retrieval
R1 Information not obtained
R2 Wrong information obtained
R3 Information retrieval incomplete

Transmission
T1 Information not transmitted
T2 Wrong information transmitted
T3 Information transmission incomplete

Selection
S1 Selection omitted
S2 Wrong selection made

Plan
P1 Plan preconditions ignored
P2 Incorrect plan executed
P3 Correct but inappropriate plan executed
P4 Correct plan executed too soon/too late
P5 Correct plan executed in wrong order

In a validation study of PHEA, Murgatroyd and Tait (1987) found that the proportion of errors with potentially


significant consequences which actually occurred in an equipment calibration task over a 5 year period was 98%.

14.26.6 System for predictive error analysis and reduction (SPEAR)
The system for predictive error analysis and reduction (SPEAR) is a set of qualitative techniques, of which PHEA is one. It is described by the CCPS (1994/17). SPEAR comprises the following techniques: (1) task analysis, (2) performance influencing factor (PIF) analysis, (3) PHEA, (4) consequence analysis and (5) error reduction analysis.

Consequence analysis involves consideration not just of the consequences of failure to perform the task but also of the consequences of any side-effects which may occur whether or not the task is executed. Error reduction analysis is concerned with measures to reduce those errors which do not have a high probability of recovery. Task analysis and PHEA have already been described and PIF analysis is treated in Section 14.33.

14.27 Assessment of Human Error: Human Reliability Analysis Handbook
The first systematic approach to the treatment of human error within a probabilistic risk assessment (PRA) was the Technique for Human Error Rate Prediction (THERP). An early account of THERP was given by Swain (1972). Its origins were work done at Sandia Laboratories on the assessment of human error in assembly tasks. This work was then extended to human error in process control tasks, with particular reference to nuclear reactors. This extension to nuclear reactor control was used in the Rasmussen Report, or WASH-1400 (AEC, 1975), which contains generic PRAs for US commercial nuclear reactors.

The accident at Three Mile Island gave impetus to work in this area and led to the publication of the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Final Report (the HRA Handbook) by Swain and Guttmann (1983). This report was widely circulated in draft form in 1980 and many literature references are to the draft. Further work is described in Accident Sequence Evaluation Program. Human Reliability Analysis Procedure (Swain, 1987a).

The HRA Handbook gives a complete methodology for addressing the human error aspects of a PRA, but THERP is central to the approach, and the methodology as a whole is generally referred to by that acronym. However, the Handbook represents a considerable extension of the original THERP methodology, particularly in respect of its adoption of the time reliability correlation method.

A further review of human reliability analysis (HRA) and THERP is given by P. Miller and Swain (1987). Table 14.19 gives the contents of the HRA Handbook and Figure 14.24 shows the structure of the principal data tables. An account is now given of THERP. The HRA Handbook should be consulted for a more detailed treatment.

Table 14.19 HRA Handbook: contents (Swain and Guttmann, 1983)

1. Introduction
2. Explanation of Some Basic Terms
3. Some Performance Shaping Factors Affecting Human Reliability
4. Man–Machine Systems Analysis
5. A Technique for Human Reliability Analysis
6. Sources of Human Performance Estimates
7. Distribution of Human Performance and Uncertainty Bounds
8. Use of Expert Opinion in Probabilistic Risk Assessment
9. Unavailability
10. Dependence
11. Displays
12. Diagnosis of Abnormal Events
13. Manual Controls
14. Locally Operated Valves
15. Oral Instructions and Written Procedures
16. Management and Administrative Control
17. Stress
18. Staffing and Experience Levels
19. Recovery Factors
20. Tables of Estimated Human Error Probabilities
21. Examples and Case Studies
22. Concluding Comments

Appendices
A Methods for Propagating Uncertainty Bounds in a Human Reliability Analysis and for Determining Uncertainty Bounds for Dependent Human Activities
B An Alternative Method for Estimating the Effects of Dependence
C Calculations of Mean and Median Trials to Detection
D Calculations of Basic Walk-around Inspections as a Function of Period between Successive Walk-arounds
E Reviews of the Draft Handbook
F A Comparison of the October 1980 and Present Versions of the Handbook

14.27.1 Overall approach
The overall approach used in the Handbook is shown in Figure 14.25. The tasks to be performed are identified as part of the main PRA. The HRA involves the assessment of the reliability of performance of these tasks.

14.27.2 Technique for human error rate prediction (THERP)
The starting point is a task analysis for each of the tasks to be performed. The method is based on the original THERP technique and uses a task analytic approach in which the task is broken down into its constituent elements along the general lines described above. The basic assumption is that the task being performed is a planned one.

The task is described in terms of an event tree as shown in Figure 14.26. This figure gives the event tree for two tasks A and B which are performed sequentially and which constitute elements of a larger overall task. In reliability terms the relationship of the two constituent tasks to the overall task may be a series or a parallel one.


Figure 14.24 HRA Handbook: structure of principal data tables (Swain and Guttmann, 1983). CR, control room; HEP, human error probability; MOV, motor operated valve

The probability that task A will be performed successfully is a and the probability that it will not be performed successfully is A. Since task A is the first in the sequence it is assumed to be independent of any other task and the probabilities a and A are therefore unconditional probabilities. The probability that task B will be performed successfully is b and the probability that it will not be performed successfully is B. Since task B is performed after task A it is assumed to be dependent in some degree on task A and the probabilities b and B are therefore conditional probabilities. Thus for b it is necessary to distinguish between b|a and b|A, and for B between B|a and B|A. The probabilities to be used are therefore as shown in Figure 14.26.
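The arithmetic of such a two-task event tree can be sketched numerically; the HEP values below are assumed for illustration and are not taken from the Handbook:

```python
# Two-task THERP event tree (cf. Figure 14.26): a/A are the unconditional
# success/failure probabilities of task A; b and B are conditional on the
# outcome of A. All numerical values here are assumed example figures.
a = 0.99            # Pr(task A succeeds)
A = 1 - a           # Pr(task A fails)
b_given_a = 0.98    # Pr(task B succeeds | A succeeded)
b_given_A = 0.90    # Pr(task B succeeds | A failed)

# Series case: the overall task succeeds only if both tasks succeed,
# i.e. the single path a followed by b|a.
p_success_series = a * b_given_a

# Parallel case: one success suffices, so the only failure path is
# A followed by B|A.
p_success_parallel = 1 - A * (1 - b_given_A)

print(p_success_series, p_success_parallel)
```

The series case has a single success path through the tree, while the parallel case has three, which is the point made in the text that follows.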


Figure 14.25 HRA Handbook: methodology for human reliability analysis (Swain and Guttmann, 1983)

The structure of the tree is the same whether the configuration is a series or parallel one but the status of the outcomes is different. As shown in Figure 14.26, for a series system in which it is necessary for both tasks to be successful, there is only one outcome which rates as a success, whilst for a parallel system in which it is sufficient for only one of the tasks to be successful, there are three outcomes which rate as success.

A distinction is made between step-by-step tasks and dynamic tasks. The latter involve a higher degree of decision-making. The approach just described is most readily justified for step-by-step tasks.

Figure 14.26 HRA Handbook: event tree for two tasks in sequence illustrating conditional probabilities (Swain and Guttmann, 1983); F, failure; S, success

14.27.3 Human error probability
The event tree so produced, and the corresponding equations, are then used to determine the probability of failure for the overall task. For this, estimates are required of the human error probability (HEP) for each of the constituent tasks.

Several different human error probabilities are distinguished: nominal, basic, conditional and joint. A nominal HEP is a generic value before application of any performance shaping factors. A basic HEP (BHEP) is the basic unconditional HEP after application of performance shaping factors. A conditional HEP (CHEP) is a basic HEP adjusted to take account of dependency. A joint HEP (JHEP) is the HEP for the overall task.

The simple application of HEP values which make no allowance for dependence tends to give very low probabilities of failure which do not accord with experience and carry little conviction. Allowance for dependence is therefore important. There is in any event some value of the HEP below which the estimate is no longer credible. A cut-off value is therefore applied. Reference is made to a cut-off of about 5 × 10⁻⁵.

Many of the HEPs in the Handbook are expressed as log-normal distributions, quoted in terms of the two parameters median and error factor.

14.27.4 Dependence model
The dependence model is an important feature of the methodology. A significant proportion of the Handbook is concerned with dependency. There are two basic forms of dependence: dependence between tasks and dependence between people.

Where two tasks are performed in sequence the second task may be influenced by the first. Dependence is likely, for example, where an operator has to change two valves one after the other.

The other situation is where two people are involved in the same task. The form of the involvement may vary. The two persons may be involved in a joint task such as calibrating an instrument. They may perform separate but closely linked functions such as those of two operators sharing a control room but controlling different sections of the plant. The work of one person may be subject to the general supervision of another person. Or it may be formally checked by another.

There are various approaches which may be used to quantify dependence, including data and expert judgement. There are few relevant data. Expert judgement can be used to a limited extent. An example is given of the use of expert estimates of dependence in a calibration task.

The approach used in the Handbook, therefore, is the development of a generalized dependence model. The degrees of dependence used are zero, low, medium, high and complete. The determination of the appropriate degree of dependence depends on the situation under


consideration. The Handbook gives quite extensive guidance. Here it is possible only to give a few examples.

In some cases there may be judged to be zero dependence. An example given of a situation where zero dependence between two tasks would be assumed is check-reading of one display followed by check-reading of another display as part of periodic scanning of the displays. Zero dependence is not normally assumed for persons working as a team or for one person checking another's performance.

The assumption of even a low level of dependence tends to result in an appreciably higher HEP than that of zero dependence. If there is any doubt, it is conservative to use low dependence rather than zero dependence.

A level of dependence between people which would be assessed as low is illustrated by the checking of the work of a new operator by a shift supervisor. In this situation the shift supervisor has an expectation that the new operator may make errors and is more than usually alert.

A moderate level of dependence is usually assessed between the shift supervisor and the operators for tasks where the supervisor is expected to interact with them.

A high level of dependence, or even complete dependence, would be assigned for the case where the shift supervisor takes over a task from a less experienced operator, since the latter may well defer to the supervisor both because of his greater experience and his seniority.

The other aspect of the dependence model is the quantification of the adjustment to be made given that the degree of dependency has been determined. The adjustment is made to the basic HEP (BHEP). The relation used is:

Pc = (1 + kPb)/(k + 1)          [14.27.1]

where Pb is the basic HEP and Pc is the conditional HEP. k is a constant which has the following values: low dependency k = 19; medium dependency k = 6; high dependency k = 1.

Equation 14.27.1 and the values of the constant k are selected to give CHEPs of approximately 0.05, 0.15 and 0.50 for low, medium and high dependency, respectively, where BHEP ≤ 0.01. Where BHEP > 0.01 the effective multiplying factor is slightly different. Thus, for example, for a BHEP of 0.1 the values of the CHEP are 0.15, 0.23 and 0.55, respectively, for these three levels of dependency.

14.27.5 Displays, annunciators and controls
Common tasks in process control are obtaining information from displays, responding to annunciators and manipulating controls. This is very much the home ground of human factors and there is a good deal of
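Equation 14.27.1 can be checked numerically against the CHEP figures quoted for a BHEP of 0.1; the sketch below also covers the zero and complete dependence cases, for which no adjustment formula is needed:

```python
def conditional_hep(bhep, level):
    """Adjust a basic HEP for dependence (Equation 14.27.1).

    For zero dependence the CHEP equals the BHEP; for complete
    dependence it is 1.0; otherwise Pc = (1 + k*Pb)/(k + 1).
    """
    k = {"low": 19, "medium": 6, "high": 1}
    if level == "zero":
        return bhep
    if level == "complete":
        return 1.0
    return (1 + k[level] * bhep) / (k[level] + 1)

# For BHEP = 0.1 this gives approximately 0.15, 0.23 and 0.55,
# the values quoted in the text
for level in ("low", "medium", "high"):
    print(level, conditional_hep(0.1, level))
```

For a very small BHEP the formula is dominated by the 1/(k + 1) term, which is why the CHEPs tend towards roughly 0.05, 0.15 and 0.50 when BHEP ≤ 0.01.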

Table 14.20 HRA Handbook: human error probability (HEP) estimates and error factors (EFs) for oral instructions(a) (after Swain and Guttmann, 1983)

Item(b)   Number of oral instruction items or perceptual units   (a) Pr(F) to recall item `N', order of recall not important (HEP, EF)   (b) Pr(F) to recall all items, order of recall not important (HEP, EF)   (c) Pr(F) to recall all items, order of recall is important (HEP, EF)

Oral instructions are detailed:
(1) 1(c) 0.001 3 0.001 3 0.001 3
(2) 2 0.003 3 0.004 3 0.006 3
(3) 3 0.01 3 0.02 5 0.03 5
(4) 4 0.03 5 0.04 5 0.1 5
(5) 5 0.1 5 0.2 5 0.4 5

Oral instructions are general:


(6) 1(c) 0.001 3 0.001 3 0.001 3
(7) 2 0.006 3 0.007 3 0.01 3
(8) 3 0.02 5 0.03 5 0.06 5
(9) 4 0.06 5 0.09 5 0.2 5
(10) 5 0.2 5 0.3 5 0.7 5
(a) It is assumed that if more than five oral instruction items or perceptual units are to be remembered, the recipient will write them down. If oral instructions are written down, use Table 20-5 in the Handbook for errors in preparation of written procedures and Table 20-7 for errors in their use.
The first column of HEPs (a) is for individual oral instruction items, e.g. the second entry, 0.003 (item 2a), is the Pr(F) to recall the second of two items, given that one item was recalled, and order is not important. The HEPs in the other columns for two or more oral instruction items are joint HEPs, e.g. the second entry, 0.004, in the second column of HEPs is the Pr(F) to recall both of two items to be remembered, when order is not important. The 0.006 in the third column of HEPs is the Pr(F) to recall both of two items to be remembered in the order of performance specified. For all columns, the EFs are taken from Table 20-20.
(b) The term `item' for this column is the usual designator for tabled entries and does not refer to an oral instruction item.
(c) The Pr(F) values in rows 1 and 6 are the same as the Pr(F) to initiate the task.
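The relationship between the per-item HEPs in column (a) and the joint HEPs in column (b) can be verified directly. For two detailed oral instruction items the joint failure probability follows from treating the items as successive recall opportunities (a sketch of the arithmetic implied by the footnote above):

```python
# Per-item Pr(F) values for two detailed oral instruction items
# (Table 14.20, column (a)): 0.001 for the first item and, given that
# the first was recalled, 0.003 for the second.
p_fail_1 = 0.001
p_fail_2 = 0.003

# Joint Pr(F) to recall all items, order not important (column (b)):
# failure occurs unless both items are recalled.
p_fail_all = 1 - (1 - p_fail_1) * (1 - p_fail_2)
print(p_fail_all)  # approximately 0.004, the tabled column (b) value
```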


Table 14.21 HRA Handbook: human error probability (HEP) estimates and error factors (EFs) for written procedures
(after Swain and Guttmann, 1983)
A Preparation of written procedures(a)

Item   Potential errors   HEP   EF
(1) Omitting a step or important instruction from a formal or ad hoc procedure(b) or a tag from a set of tags   0.003   5
(2) Omitting a step or important instruction from written notes taken in response to oral instructions(c)   Negligible
(3) Writing an item incorrectly in a formal or ad hoc procedure or a tag   0.003   5
(4) Writing an item incorrectly in written notes made in response to oral instructions(c)   Negligible
(a) Except for simple reading and writing errors, errors of providing incomplete or misleading technical information are not addressed in the Handbook. The estimates are exclusive of recovery factors, which may greatly reduce the nominal HEPs.
(b) Formal written procedures are those intended for long-time use; ad hoc written procedures are one-of-a-kind, informally prepared procedures for some special purpose.
(c) A maximum of five items is assumed. If more than five items are to be written, use 0.001 (EF = 5) for each item in the list.
B Neglect of written procedures
Item Task HEP EF
(1) Carry out a plant policy or scheduled tasks such as periodic tests 0.01 5
or maintenance performed weekly, monthly or at longer intervals
(2) Initiate a scheduled shiftly checking or inspection functiona 0.001 3

Use written operations procedures under:


(3) Normal operating conditions 0.01 3
(4) Abnormal operating conditions 0.005 10

(5) Use a valve change or restoration list 0.01 3


(6) Use written test or calibration procedures 0.05 5
(7) Use written maintenance procedures 0.3 5
(8) Use a checklist properlyb 0.5 5
a
Assumptions for the periodicity and type of control room scans are discussed in Chapter 11 of the Handbook in the
section, `A General Display Scanning Model'. Assumptions for the periodicity of the basic walk-around inspection are
discussed in Chapter 19 of the Handbook in the section, `Basic Walk-around Inspection'.
b
Read a single item, perform the task, check off the item on the list. For any item in which a display reading or other
entry must be written, assume correct use of the checklist for that item.
C Use of written proceduresa
Itemb Omission of item HEP EF
c
When procedures with check-off provisions are correctly used:
(1) Short list,  10 items 0.001 3
(2) Long list, > 10 items 0.003 3
When procedures without check-off provisions are used, or when check-off provisions are
incorrectly used:d
(3) Short list  10 items 0.003 3
(4) Long list, > 10 items 0.01 3
(5) When written procedures are available and should be used but are not usedd 0.05e 5
a
The estimates for each item (or perceptual unit) presume zero dependence among the items (or units) and must be
modified by using the dependence model when a nonzero level of dependence is assumed.
b
The term `item' for this column is the usual designator for tabled entries and does not refer to an item of instruction
in a procedure.
c
Correct use of check-off provisions is assumed for items in which written entries such as numerical values are
required for the user.
d
Table 20±6 in the Handbook lists the estimated probabilities of incorrect use of check-off provisions and of non-use of
available written procedures.
e
If the task is judged to be `second nature', use the lower uncertainty bound for 0.05, i.e. use 0.01 (EF = 5).
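Footnote a to part C states that the per-item estimates presume zero dependence among the items. Under that assumption the chance of omitting at least one item from a whole list follows directly; a small illustrative sketch (the list length of 20 is chosen arbitrarily, not taken from the Handbook):

```python
def p_at_least_one_omission(per_item_hep, n_items):
    """Probability of omitting at least one of n items, assuming the
    zero-dependence case of footnote a (independent per-item HEPs)."""
    return 1 - (1 - per_item_hep) ** n_items

# Long list (> 10 items) without check-off provisions: per-item HEP 0.01.
# Over a 20-item list this compounds to roughly 0.18.
print(p_at_least_one_omission(0.01, 20))
```

In practice the dependence model mentioned in footnote a would moderate this figure, since omissions of adjacent items are rarely independent.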


Table 14.22 HRA Handbook: human error probability (HEP) estimates and error factors (EFs) for manipulation and checking of locally operated valves (after Swain and Guttmann, 1983)

A Selection of valve

Item  Potential errors  HEP  EF
Making an error of selection in changing or restoring a locally operated valve when the valve to be manipulated is:
(1)  Clearly and unambiguously labelled, set apart from valves that are similar in all of the following: size and shape, state, and presence of tags^a  0.001  3
(2)  Clearly and unambiguously labelled, part of a group of two or more valves that are similar in one of the following: size and shape, state, or presence of tags^a  0.003  3
(3)  Unclearly or ambiguously labelled, set apart from valves that are similar in all of the following: size and shape, state, and presence of tags^a  0.005  3
(4)  Unclearly or ambiguously labelled, part of a group of two or more valves that are similar in one of the following: size and shape, state, or presence of tags^a  0.008  3
(5)  Unclearly or ambiguously labelled, part of a group of two or more valves that are similar in all of the following: size and shape, state, and presence of tags^a  0.01  3

^a Unless otherwise specified, level 2 tagging is presumed. If other levels of tagging are assessed, adjust the tabled HEPs according to Table 20–15 in the Handbook.

B Detection of stuck valves

Item  Potential errors  HEP  EF
Given that a locally operated valve sticks as it is being changed or restored,^a the operator fails to notice the sticking valve, when it has:
(1)  A position indicator^b only  0.001  3
(2)  A position indicator^b and a rising stem  0.002  3
(3)  A rising stem but no position indicator^b  0.005  3
(4)  Neither rising stem nor position indicator^b  0.01  3

^a Equipment reliability specialists have estimated that the probability of a valve's sticking in this manner is approximately 0.001 per manipulation, with an error factor of 10.
^b A position indicator incorporates a scale that indicates the position of the valve relative to a fully opened or fully closed position. A rising stem qualifies as a position indicator if there is a scale associated with it.

C Checking, including valves^a

Item  Potential errors  HEP  EF
(1)  Checking routine tasks, checker using written materials (includes over-the-shoulder inspections, verifying position of locally operated valves, switches, circuit breakers, connectors, etc., and checking written lists, tags, or procedures for accuracy)  0.1  5
(2)  Same as above, but without written materials  0.2  5
(3)  Special short-term, one-of-a-kind checking with alerting factors  0.05  5
(4)  Checking that involves active participation, such as special measurements  0.01  5
Given that the position of a locally operated valve is checked (item (1) above), noticing that it is not completely opened or closed:
(5)  Position indicator^b only  0.1  5
(6)  Position indicator^b and a rising stem  0.5  5
(7)  Neither a position indicator^b nor a rising stem  0.9  5
(8)  Checking by reader/checker of the task performer in a two-man team, or checking by a second checker, routine task (no credit for more than 2 checkers)  0.5  5
(9)  Checking the status of equipment if that status affects one's safety when performing his tasks  0.001  5
(10)  An operator checks change or restoration tasks performed by a maintainer  Above HEPs/2  5

^a This table applies to cases during normal operating conditions in which a person is directed to check the work performed by others either as the work is being performed or after its completion.
^b A position indicator incorporates a scale that indicates the position of the valve relative to a fully opened or fully closed position. A rising stem qualifies as a position indicator if there is a scale associated with it.
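The sticking probability in footnote a of part B (about 0.001 per manipulation) can be combined with the detection HEPs in part B to estimate how often a stuck valve goes unnoticed; a sketch that assumes sticking and failure of detection are independent (an assumption of this example, not a Handbook statement):

```python
P_STICK = 0.001  # probability of sticking per manipulation (footnote a, EF 10)

# HEP for failing to notice the sticking valve (Table 14.22, part B)
detect_fail = {
    "position indicator only": 0.001,
    "position indicator and rising stem": 0.002,
    "rising stem only": 0.005,
    "neither": 0.01,
}

# Probability per manipulation that the valve sticks and this is missed
for config, hep in detect_fail.items():
    print(config, P_STICK * hep)
```

Even in the worst configuration the joint probability is of order 10⁻⁵ per manipulation, which illustrates why detection of sticking contributes little compared with selection and reversal errors.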


Figure 14.27 Some time–reliability correlations relevant to nuclear power plants (R.E. Hall, Fragola and Wreathall, 1982). LOCA, loss of coolant accident; SI, safety interlock. Notation <x, y> denotes median m, error factor f of log-normal distribution

information available on response times and probabilities of error in such tasks. The Handbook gives a number of tables for HEPs for these tasks.
For unannunciated displays, the Handbook gives the following HEP values for selection of a display (Table 20–9). The HEP depends on the existence of similar adjacent displays. It is assumed to be negligible if the display is dissimilar to adjacent displays and the operator knows the characteristics of the display he requires. The HEP is taken as 0.005 (error factor (EF) 10) if it is from a group of similar displays on a panel with clearly drawn mimic lines which include the displays; as 0.001 (EF 3) if it is from a group of similar displays which are part of a well-delineated functional group on the panel; and as 0.003 (EF 3) if it is from an array of similar displays identified by label only. These HEP values do not include recovery from any error. The probability that this will occur is high if the reading obtained is grossly different from that expected.
For check reading from displays the HEPs given are as follows (Table 20–11). The HEP is taken as 0.001 for digital displays and for analogue meters with easily seen limit marks; as 0.002 for analogue meters with limit marks which are difficult to see and for analogue chart recorders with limit marks; as 0.003 for analogue meters without limit marks; and as 0.006 for analogue chart recorders without limit marks. In all cases the EF is taken as 3. The HEP for confirming a status change on a status lamp and that for misinterpreting the indications of an indicator lamp are assumed to be negligible. These HEPs apply to the individual checking of a display for some specific purpose.

14.27.6 Oral instructions and written procedures
Communication in process control includes both oral instructions and written procedures. For oral instructions a distinction is made between general and detailed instructions. Table 14.20 gives some HEP estimates for these two cases.
For written instructions the types of HEP treated include error in the preparation of the instructions, failure to refer to them and error in their use. Table 14.21 gives some HEP estimates for these three cases, respectively.

14.27.7 Locally operated valves
Another common task in process control is the manipulation of locally operated valves (LOVs). This task therefore receives special treatment in the Handbook. The valves concerned are manually operated. They include valves with or without a rising stem and with or without position indicators.
Three principal errors are distinguished. One is selection of the wrong valve. Here the base case is a single isolated valve. Where there are other valves present, the possibility exists that the wrong valve will be selected. Another type of error is reversal, or moving the valve in the wrong direction. The operator opens it instead of closing it, or vice versa. One form of this error is to reverse the state of a valve which is in fact already in the desired state. The third type of error is failure to detect that the valve is stuck. A common form of this error is to fail to effect complete closure of a valve. Table 14.22 gives some HEP estimates for selection of a valve, for detection of a stuck valve and for checking a valve.
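The check-reading HEPs quoted above from Table 20–11 (all with EF 3) can be collected into a simple lookup; the category labels here are paraphrases for illustration, not the Handbook's exact wording:

```python
# HEP for check reading a display, as quoted from Handbook Table 20-11.
# All entries carry an error factor (EF) of 3.
CHECK_READING_HEP = {
    "digital display": 0.001,
    "analogue meter, easily seen limit marks": 0.001,
    "analogue meter, limit marks difficult to see": 0.002,
    "analogue chart recorder, with limit marks": 0.002,
    "analogue meter, no limit marks": 0.003,
    "analogue chart recorder, no limit marks": 0.006,
}

print(CHECK_READING_HEP["analogue meter, no limit marks"])
```

A lookup of this kind is how such nominal HEPs are typically embedded in an HRA event-tree calculation before any performance shaping factors are applied.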


Figure 14.28 HRA Handbook: nominal diagnosis model (Swain and Guttmann, 1983): human error probability (HEP)
of diagnosis of one abnormal event by control room personnel

The manipulation of valves is a particular case where there may exist a strong dependence, or coupling, between two tasks. This case is one which was taken into account in the Rasmussen Report. An operator may be required to cut off flow by closing two valves, closure of either of which is sufficient to stop the flow. If the probability of error in closing one valve is 10⁻² and there is zero dependence, the probability of error in the overall task is 10⁻⁴ (10⁻² × 10⁻²). On the other hand, if there is complete coupling, the probability of error is 10⁻² (10⁻² × 1). These two cases represent the two extremes and constitute lower and upper bounds on the probability of failure. For the more realistic case of loose coupling, the approach used in WASH-1400 was to take for the probability of error the log-normal median, or square root, of the product of the two bounds: (10⁻² × 10⁻⁴)^½ = 10⁻³. The dependence model given in the Handbook allows the use of more levels of dependence.

14.27.8 Time–reliability correlation
As already mentioned, there has been an increasing tendency, associated mainly with experimental work with operators on simulators, to correlate the probability of operator failure with time. In particular, use is made of the time–reliability correlation (TRC) to obtain human error probabilities for complex, or non-routine, tasks, including handling an emergency. The assumption underlying such a TRC is that, although there are in principle other factors which affect operator performance in such tasks, time is the dominant one.
An aspect of operator performance of particular concern to the nuclear industry is behaviour following the initial event. One of the main methods used to study such behaviour is the use of simulators. Work on simulators has shown that the probability of success in post-event behaviour correlates strongly with time. Early work on this was done by R.E. Hall, Fragola and Wreathall (1982). A number of TRC models have since been produced based on simulator results.
Figure 14.27 shows some TRCs given by Hall, Fragola and Wreathall for operator vigilance and for particular events to which a nuclear reactor operator may have to respond.

14.27.9 Nominal diagnosis model
The task analysis approach on which THERP is based is not well adapted to handling the behaviour of the operators in an abnormal situation. For this use is made of the TRC approach. Several TRCs are given in the Handbook. Two are used for screening: one for diagnosis and one for post-diagnosis performance. The main TRC is the nominal diagnosis model, which is applicable to diagnosis only and not to post-diagnosis performance. This TRC is shown in Figure 14.28. In contrast to the other HEP relations, which refer to individuals, the TRCs refer to the behaviour of a team. The nominal diagnosis model therefore implies a particular manning model. This is described in the next section.
Figure 14.28 includes curves for the upper and lower bounds. The Handbook gives guidance on the choice of


Table 14.23 HRA Handbook: manning model for nominal diagnosis model – illustrative example (after Dougherty and Fragola, 1988, from Swain and Guttmann, 1983)

Time (min)  Team member  Conditional probability of error  Joint probability of error  TRC value

10  Operator 1  0.1 (basic probability)
    Operator 2  1.0 (complete dependence)
    Shift supervisor  0.55 (high dependence)
    Shift technical adviser  1.0 (no credit)  0.055  0.1

20  Operator 1  0.1 (basic probability)
    Operator 2  0.55 (high dependence)
    Shift supervisor  0.23 (moderate dependence)
    Shift technical adviser  0.55 (high dependence)  0.007  0.01

30  Operator 1  0.1 (basic probability)
    Operator 2  0.55 (high dependence)
    Shift supervisor  0.15 (low dependence)
    Shift technical adviser  0.15 (low dependence)  0.0012  0.001
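If the tabled conditional probabilities follow the standard THERP dependence expressions (complete dependence gives a conditional HEP of 1.0; high, (1 + N)/2; moderate, (1 + 6N)/7; low, (1 + 19N)/20, with N the basic HEP), the joint probabilities can be reproduced; a minimal sketch, with the tabled JHEPs of 0.055, 0.007 and 0.0012 recovered after rounding:

```python
def chep(level, bhep):
    """Conditional HEP of a team member given the previous member has
    failed, under the THERP dependence model (Swain and Guttmann, 1983)."""
    return {
        "zero": bhep,
        "low": (1 + 19 * bhep) / 20,
        "moderate": (1 + 6 * bhep) / 7,
        "high": (1 + bhep) / 2,
        "complete": 1.0,
        "no credit": 1.0,  # member not yet credited with any recovery
    }[level]

BHEP = 0.1  # basic HEP of operator 1 (Table 14.23)

# Dependence levels for operator 2, shift supervisor and shift technical
# adviser at 10, 20 and 30 minutes into the incident.
team = {
    10: ["complete", "high", "no credit"],
    20: ["high", "moderate", "high"],
    30: ["high", "low", "low"],
}

results = {}
for t, levels in team.items():
    jhep = BHEP
    for level in levels:
        jhep *= chep(level, BHEP)
    results[t] = jhep

print(results)  # close to the tabled JHEPs 0.055, 0.007 and 0.0012
```

The small residual differences arise because the table multiplies CHEPs already rounded to two decimal places.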

curve. Essentially the lower bound is applicable if the event is a well-recognized one and the operators have practised on a simulator and demonstrate by interview that they know how to handle it. The upper bound is applicable if the event is not covered by training or is covered only in initial training or if the operators demonstrate by interview that they do not know how to handle it. The main, or nominal, curve is applicable if the operators have practised the event only on simulator requalification exercises or if none of the rules for the lower or upper bound apply.
The nominal diagnosis TRC does not itself fit a log-normal distribution, but it may be approximated by such a distribution. The parameters of the log-normal distribution approximating to this TRC have been estimated by Dougherty and Fragola (1988), who obtain values for the median and the error factor of m = 4 and f = 3.2, respectively. The relation between the nominal diagnosis TRC and the approximating log-normal distribution is:

Time (min)  Nominal diagnosis TRC  Approximating log-normal distribution
5           0.9                    0.4
10          0.1                    0.1
20          0.01                   0.01
30          0.001                  0.002
60          0.0001                 0.00006

14.27.10 Manning model
The nominal diagnosis TRC just described applies to a whole team and is in fact postulated on a particular team composition. In other words there is an implied manning model.
The manning model used is shown in Table 14.23. The team consists of operators 1 and 2, the shift supervisor and the shift technical adviser. At 10 minutes into the incident for operator 1 the BHEP is 0.1. Operator 2 has complete dependence. The shift supervisor has high dependence. At this stage no credit is taken for the shift technical adviser. For operator 1 the BHEP remains constant at 0.1. At 20 minutes for operator 2 the dependence reduces to the high level and for the shift supervisor it reduces to the moderate level. The shift technical adviser is now taken into account with a high dependence. At 30 minutes for operator 2 the dependence remains at the high level but for the shift supervisor it reduces again to the low level. For the shift technical adviser the dependence reduces to the low level. The CHEPs shown are those given by Equation 14.27.1. The JHEPs shown are the products of the BHEP for operator 1 and the CHEPs for the other members of the team. These JHEPs are then rounded to give the actual values used in the nominal diagnosis TRC.

14.27.11 Recovery model
Some errors are not recoverable, but many are, and the recovery model is therefore another important feature of the methodology. The probability of recovery depends on the opportunities for detection, the use made of these opportunities and the effectiveness of the recovery action. Recovery mechanisms include:

(1) human actions – checking;
(2) plant states – panel indications;
(3) equipment states – inspections.

Recovery is treated under the headings:

(1) human redundancy;
(2) annunciated indications;
(3) active inspections;
(4) passive inspections.

Human redundancy is essentially the checking of one person's work by another person. For checking the


Figure 14.29 HRA Handbook: event tree for task of handling loss of steam generator feed in a nuclear power plant,
illustrating recovery from error (Swain and Guttmann, 1983)

Handbook gives the following HEP values (Table 20–22). The HEP for checking is taken as being determined by two distinct errors, failure to execute the check at all and error in performing it. The HEP is taken as 0.1 if a written procedure is used; as 0.2 if a written procedure is not used; as 0.05 if the check is a one-off with alerting factors; and as 0.01 if the check involves active participation. The HEP is taken as 0.5 for checking by a second member of a two-man team or by a second checker. It is taken as 0.001 for checking of equipment which affects the safety of the checker. In all cases the EF is taken as 5. These HEPs apply where a person is directed to check the work of others, either as the work is being performed or after its completion. Credit for checking is limited to the use of two checkers. The HEP of the second checker in a routine task is taken as 0.5.
Recognition is given to a number of problems associated with checking. Checking is particularly affected by psychological considerations. There is an expectation that an experienced person will not make errors. Conversely, there is an expectation that an inexperienced person may well do so.


Table 14.24 HRA Handbook: some performance shaping factors (PSFs)^a (after Swain and Guttmann, 1983)

External PSFs

Situational characteristics (those PSFs general to one or more jobs in a work situation):
Architectural features; quality of environment (temperature, humidity, air quality, and radiation; lighting; noise and vibration; degree of general cleanliness); work hours/work breaks; shift rotation; availability/adequacy of special equipment, tools and supplies; manning parameters; organizational structure (e.g. authority, responsibility, communication channels); actions by supervisors, co-workers, union representatives, and regulatory personnel; rewards, recognition, benefits

Task and equipment characteristics (those PSFs specific to tasks in a job):
Perceptual requirements; motor requirements (speed, strength, precision); control–display relationships; anticipatory requirements; interpretation; decision-making; complexity (information load); narrowness of task; frequency and repetitiveness; task criticality; long- and short-term memory; calculational requirements; feedback (knowledge of results); dynamic vs step-by-step activities; team structure and communication; man–machine interface factors (design of prime equipment, test equipment, manufacturing equipment, job aids, tools, fixtures)

Job and task instructions (single most important tool for most tasks):
Procedures required (written or not written); written or oral communications; cautions and warnings; work methods; plant policies (shop practices)

Stressor PSFs

Psychological stressors (PSFs which directly affect mental stress):
Suddenness of onset; duration of stress; task speed; task load; high jeopardy risk; threats (of failure, loss of job); monotonous, degrading, or meaningless work; long, uneventful vigilance periods; conflicts of motives about job performance; reinforcement absent or negative; sensory deprivation; distractions (noise, glare, movement, flicker, colour); inconsistent cueing

Physiological stressors (PSFs which directly affect physical stress):
Duration of stress; fatigue; pain or discomfort; hunger or thirst; temperature extremes; radiation; G-force extremes; atmospheric pressure extremes; oxygen insufficiency; vibration; movement constriction; lack of physical exercise; disruption of circadian rhythm

Internal PSFs

Organismic factors (characteristics of people resulting from internal and external influences):
Previous training/experience; state of current practice or skill; personality and intelligence variables; motivation and attitudes; emotional state; stress (mental or bodily tension); knowledge of required performance standards; sex differences; physical condition; attitudes based on influence of family and other outside persons or agencies; group identifications

^a Some of the tabled PSFs are not encountered in present-day nuclear power plants (e.g. G-force extremes), but are listed for application to other man–machine systems.

It is often suggested that if a person knows that his work is to be checked he may perform it with less care and that the end result may be a lower task reliability than if checking were not employed. This view is rejected. It is argued that for any credible values of the basic HEP and the conditional HEP for checking, the joint HEP will be lower with checking.
The possibility exists that on a particular plant checking may have fallen into disuse. This is one feature in particular which it is prudent for the analyst to observe and check. There is a tendency in some situations for the task and its checking to become elided and for the whole to become a joint operation. Where this occurs, there is no longer an independent check.
Annunciated indicators, or alarms, are treated at two levels. The HEP for taking the prescribed corrective action in response to a single alarm is 0.0001, but this may be drastically modified for other alarm situations.
An annunciator response model is used which applies to multiple alarms and is expressed by two equations. The probability Pi of failure to initiate action in response to the ith alarm in a group of n alarms is:

Pi = 10⁻⁴              i = 1        [14.27.2a]
Pi = 2^(i−2) × 10⁻³    1 < i ≤ 10   [14.27.2b]
Pi = 0.25              i > 10       [14.27.2c]
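Equations 14.27.2a–c translate directly into a small helper; a sketch of the doubling progression (note that at i = 10 the formula gives 0.256, marginally above the 0.25 ceiling used for i > 10):

```python
def p_fail_alarm(i):
    """Pi: probability of failure to initiate action in response to the
    i-th alarm in a group (Equations 14.27.2a-c)."""
    if i == 1:
        return 1e-4
    if i <= 10:
        return 2 ** (i - 2) * 1e-3
    return 0.25

# Failure probabilities for the first five alarms of a group:
# 0.0001, then doubling from 0.001 (0.001, 0.002, 0.004, 0.008)
print([p_fail_alarm(i) for i in range(1, 6)])
```

The model captures the observation that an operator's chance of missing an alarm rises steeply as alarms accumulate, saturating once there are more than ten.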


The probability Pr of failure to initiate action in response to a randomly selected alarm in a group of n alarms is:

Pr = (1/n) Σ Pi (summed over i = 1 to n)     [14.27.3]

Active inspection is defined as inspection for a specific purpose. The main forms are prescribed periodic logging of readings and prescribed audit of indications with written instructions, both in the control room, and a walk-around inspection on the plant with instructions. The inspection may be based on oral instructions or written instructions. The HEP for an active inspection is that applicable to oral instructions and written procedures, already described.
Passive inspection is defined as a casual type of inspection. There are no written instructions and no instructions to look for any particular feature. The main forms are scanning of the control room displays and a walk-around on the plant.
As described above, HEPs are given for detection of deviant unannunciated displays in a periodic scan.
For passive inspection by walk-around the Handbook gives the following HEP values (Table 20–27). The event concerned is failure to detect a particular deviant state within 30 days. It is assumed that there is one inspection per shift. The HEPs are taken as 0.52, 0.25, 0.05, 0.003, 0.002, 0.001 and 0.001 for periods between walk-arounds of 1, 2, 3, 4, 5, 6 and 7 days, respectively.
For a planned task recovery may be introduced into the task event tree. This is illustrated in the event tree shown in Figure 14.29, which shows recoveries from error represented by tasks C and G. C is a recovery from failure at task B. G is a recovery from failure at task E and also at task H.

14.27.12 Performance shaping factors
The performance shaping factors (PSFs) are divided into the following classes:

(1) external factors:
(a) situational characteristics;
(b) task and equipment characteristics;
(c) job and task instructions;
(2) internal factors;
(3) stressors.

Stressors are treated separately and are considered in the next section.
The performance shaping factors listed in the Handbook are shown in Table 14.24. Each PSF is discussed in some detail. The Handbook does not, however, appear to give any simple method of adjusting the nominal HEPs by way of a multiplying factor or otherwise. It is evidently up to the analyst to judge the quality of a particular PSF for the situation concerned and to make a suitable adjustment.
However, some estimates, described as speculative and conservative, are given of the potential benefit of the adoption of good ergonomic practices. The authors state that in nuclear power plants violations of conventional human factors practices are the rule rather than the exception. The Handbook indicates that a reduction in HEP by a factor in the range 2–10 might be attained by adoption of good human factors practices in the design of displays and controls, and a similar reduction can be achieved by the use of checklists and well-written procedures instead of narrative procedures (Table 3–8). The error factors given alongside the nominal HEPs also provide further guidance for the analyst.

14.27.13 Stress
As already described, stress is an important determinant of performance and must be taken into account. It is one of the performance shaping factors, but is accorded special treatment. Stress may be caused by both external and internal factors. Some of these stressors are listed in Table 14.24. A full treatment of stress could potentially be rather complex.
The approach adopted in the Handbook is to simplify and to treat stress as a function of workload. The assumption underlying this approach is that, although there are in principle other factors which affect stress, workload is the dominant one.
At a very low workload performance is less than optimal. There is some higher workload at which it is optimal. At a higher workload yet, performance again deteriorates. Finally, the situation may induce threat stress, which is qualitatively different and is accorded separate treatment.
The four levels of stress are therefore defined as:

(1) very low task load;
(2) optimum task load;
(3) heavy task load;
(4) threat stress.

A heavy task load is one approaching the limit of human capacity.
For the first three levels of stress the Handbook gives multiplying factors which are applied to the nominal HEP (Table 20–16). The multiplying factors for an experienced operator are: 2 for a very low task load and for a heavy task load of a step-by-step task; and 5 for a heavy task load of a dynamic task and for a threat stress condition of a step-by-step task. The multiplier for optimum workload is unity. Different factors are given for an inexperienced operator.
A situation which can arise is where an error is made and recognized and an attempt is then made to perform the task correctly. Under conditions of heavy task load the probability of failure tends to rise with each attempt as confidence deteriorates. For this situation the doubling rule is applied. The HEP is doubled for the second attempt and doubled again for each attempt thereafter, until a value of unity is reached. There is some support for this in the work of Siegel and Wolf (1969) described above.
For a dynamic task or for diagnosis under threat stress the approach is different. A multiplier is not used, but instead a HEP value is given. The HEP for these cases is taken as 0.25 for an experienced operator and as 0.5 for an inexperienced one (both EF 5). The Handbook gives guidance on the assignment of the levels of workload, and hence stress.
The basis of the HEP value of 0.25 is the work on behaviour in emergencies by Ronan and Berkun already described. The probability of ineffective behaviour from the work on in-flight emergencies is about 0.15. The training of a pilot is particularly intensive. Operators are not expected to perform as well. Hence the HEP value of 0.25 is used for operators.

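The doubling rule described above for repeated attempts under heavy task load can be sketched as follows (the nominal HEP value in the example is illustrative, not a Handbook figure):

```python
def retry_hep(nominal_hep, attempt):
    """Doubling rule: under heavy task load the HEP doubles on the second
    attempt and again on each subsequent attempt, capped at 1.0."""
    if attempt < 1:
        raise ValueError("attempt numbers start at 1")
    return min(1.0, nominal_hep * 2 ** (attempt - 1))

# e.g. an illustrative nominal HEP of 0.05 over attempts 1-6
print([retry_hep(0.05, k) for k in range(1, 7)])
```

The cap at unity reflects the rule's stopping condition: once the doubled value reaches 1.0, failure on further attempts is taken as certain.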

This HEP for threat stress conditions applies to dynamic tasks and to diagnosis, not to step-by-step tasks, for which, as already stated, a multiplier of 5 is used. A different treatment again is applied to a loss-of-coolant accident (LOCA).

14.27.14 Sources of human performance estimates
Ideally, the data on human performance used in this study would have been obtained from the nuclear industry alone. In fact the data from this source were very limited and a much wider range of sources was used, as follows: nuclear power plants (NPPs); NPP simulators; process plants; other industrial and military sources; experiments and field studies on real tasks; and experiments on artificial tasks. This list is in order of decreasing relevance but, unfortunately, of increasing data availability. Some 29 'experts' on human error were approached for assistance in providing HEP estimates, but virtually none were forthcoming.
For the nuclear industry the main potential source of human error data is licensee event reports (LERs). These contain an entry 'Personnel error'. HEPs were determined from these LERs for tasks involving operation, maintenance and testing of manual isolation valves, motor operated valves (MOVs) and pumps. The HEP values obtained were low. Work by Speaker, Thompson and Luckas (1982) on valves has shown, however, that for every LER classification of 'Personnel error' there were some 5 to 7 additional reportable events which in their judgement involved human error. Multiplication of the original HEP estimates by a correction factor of 6 brought them much closer to those from other sources.
Most studies on NPP and other simulators have not yielded usable human error data. The first systematic study found which does was that of Beare et al. (1983). This work came too late for incorporation in the Handbook but was used as a cross-check. Extensive use was made of HEP data from the process industries given by Kletz and Whitaker (1973), E. Edwards and Lees (1974) and the ICI Safety Newsletter (1979). Other industrial HEP data mentioned are those of Rook (1962), Rigby and Edelman (1968a,b) and Rigby and Swain (1968, 1975) on the production and testing of military systems.
A number of field studies and experiments in industrial settings were conducted and yielded usable data. These were subject, however, to the usual caution that the tolerances are often very narrow. Such data tend to be a poor indicator of absolute performance in real situations, but are a much more reliable guide to comparative performance. The correction which needs to be applied to such data to allow for the broader tolerances in industrial situations was the subject of a study by Payne and Altman (1962), who obtained an average correction factor of 0.008. The Handbook states that using this factor the HEP values obtained are similar to those found in field operations.
Expert judgement was utilized extensively to obtain HEP estimates where hard data were not available. Use was made of scaling techniques to calibrate HEPs for tasks estimated by the experts against known task HEPs. For the HEP in an emergency or highly stressed situation use was made of the work of Ronan (1953) and Berkun and co-workers (Berkun et al., 1962; Berkun, 1964).
The Handbook discusses the estimation of HEPs where these are a function of time. For such tasks the three relevant features are: the time to begin the task, essentially the response time, and the time required for diagnosis, if any; the time required to do the task correctly; and the time available to do the task correctly. Data on the time to perform the task were obtained from operating records and from experts. The time available was often determined by the characteristics of the plant.
The Handbook also gives an account of the determination of HEPs for: displays and controls; locally operated valves; oral instructions and written procedures; administrative controls; and abnormal events.

14.27.15 Human performance estimates from expert judgement
The estimates of human performance given in the Handbook, whether for human error probabilities or performance shaping factors, are based on expert judgement. A discussion of expert judgement techniques applicable to human factors work is given in the Handbook by Weston (1983). He discusses the following methods: (1) paired comparisons, (2) ranking and rating procedures, (3) direct numerical estimation and (4) indirect numerical estimation. In dealing with estimates of human error it is particularly important to give a full definition of the task for which the estimate is to be made. If the definition is poor, the estimates obtained are very liable to exhibit wide differences.
fact that an experiment is being conducted tends to A study is quoted by Seaver and Sitwell (1983) in
distort the results. which these methods were compared in respect of six
There is a large amount of experimental data on criteria. For the three criteria selected by Weston
artificial tasks such as those conducted in laboratories. (quality of judgements, difficulty of data collection and
This work suffers from the fact that not only is it empirical support) the rankings obtained by these
artificial, but also the limits of acceptable performance workers were as follows: (1, best; 4, worst):

Criterion Type of procedure

Paired Ranking/ Direct Indirect


comparisons rating numerical numerical
estimation estimation
Quality of judgement 1 2 4 3
Difficulty of data collection 4 1 2 3
Empirical support 3 4 1 1

07:23 7/11/00 Ref: 3723 LEES ± Loss Prevention in the Process Industries Chapter 14 Page No. 75
14/76 HUMAN FACTORS AND HUMAN ERROR

14.27.16 Uncertainty bounds and sensitivity analysis
It is normal to include in a hazard assessment a sensitivity analysis and this creates a requirement to express an estimate of human error probability not just as a point value but as a distribution. The distribution which is generally used is the log-normal distribution. As described in Chapter 7, the log-normal distribution is characterized by the two parameters m* and σ. Alternatively, it may be defined instead in terms of the log-normal median m and the error factor f. Often only a point value is available, and generalized values of σ or f are used to give the spread.

The log-normal distribution is that used in the Handbook. It is admitted that the basis for preferring this distribution is not strong. The experimental support that exists relates to distributions of response times. On the other hand, it is argued that the choice of distribution does not appear critical. It is also the case that the log-normal distribution is a convenient one to use. Uncertainty arises from (1) lack of data, (2) deficiencies in the models used, (3) the effects of the PSFs and (4) the variable quality of analysts.

Using the log-normal distribution, characterized by the log-normal median m and the error factor f, the uncertainty bounds (UCBs) are expressed in terms of the error factor. As an illustration, consider the following case:

Nominal HEP = 0.01
Lower UCB = 0.003
Upper UCB = 0.03
Error factor f = (0.03/0.003)^(1/2) ≈ 3

For the most part symmetrical UCBs are used, but there are exceptions. If the median value is low, the use of symmetrical UCBs may give a lower bound which is below the HEP cut-off, whilst at the other extreme for an HEP ≥ 0.25 it may give an upper bound which exceeds unity. In these cases asymmetrical bounds are used.

The general guidelines given in the Handbook for estimating the error factor (Table 20-20) are given in Table 14.25.

14.27.17 Validation
A methodology of the type just described is clearly difficult to validate. There is a good deal of information provided in the Handbook in support of individual HEP estimates and some of this is described above. An account is also given of validation exercises carried out in support of the original THERP methodology, but these relate to tasks such as calibration and testing rather than process control.

14.28 Assessment of Human Error: Success Likelihood Index Method (SLIM)
A method of obtaining HEP estimates based on PSFs is the success likelihood index (SLI), which is incorporated in the SLI method (SLIM). Accounts are given in SLIM-MAUD: An Approach to Assessing Human Error Probabilities using Structured Judgement by Embrey et al. (1984 NUREG/CR-3518) and by Embrey (1983a,b) and Kirwan (1990).

SLIM treats not only the quality of the individual PSFs but also the weighting of these in the task. It is thus a complete method for the assessment of human error, and not merely a technique for determining values of the PSFs.

The basic premise of SLIM is that the HEP depends on the combined effects of the PSFs. A systematic approach is used to obtain the quality weightings and relevancy factors for the PSFs, utilizing structured expert judgement. From these PSFs the SLI for the task is obtained. As defined by Embrey (1983a) the SLI for n PSFs is:

SLI = Σ ri wi   (i = 1, …, n)   [14.28.1]

where r is a relevancy factor and w a quality weighting. Thus the SLI approach makes explicit the distinction between quality and relevance.

The quality weighting is obtained from the judgement of a panel of experts and is assigned a value on the scale 1-9. The relevancy factor, which again is obtained from the judgement of the expert panel, is a measure of the contribution of that PSF, the sum of the relevancy factors being unity. The SLI so obtained is a relative value.

In order to convert an SLI into an HEP it is necessary to calibrate it against tasks for which the HEP is known. The relation used is:

log10(HEP) = a SLI + b   [14.28.2]

where a and b are constants. These constants are obtained from two tasks of known HEP. Due to the logarithmic relationship, SLI values which do not differ greatly (e.g. 5.5 and 5.75) may correspond to very different HEPs.

The SLI methodology has two modules. The first is the multi-attribute utility decomposition (MAUD), usually referred to as SLIM-MAUD. The second is the systematic approach to the reliability assessment of humans (SARAH). The former is used to obtain the SLI and the latter to perform the calibrations. Both are embodied in computer programs. Accounts of work done to validate this approach have been given by Embrey (1983a) and Embrey and Kirwan (1983). Illustrative examples of SLIM are given by Dougherty and Fragola (1988), as described below, and Kirwan (1990).

14.29 Assessment of Human Error: Human Error Assessment and Reduction Technique (HEART)
In the human error assessment and reduction technique (HEART), described by J.C. Williams (1986, 1988a,b, 1992), the human error probability of the task is treated as a function of the type of task and of associated error-producing conditions (EPCs), effectively PSFs. The method is based on a classification of tasks into the generic types shown in Section A of Table 14.26, which also gives the proposed nominal human unreliabilities for execution of the tasks. There is an associated set of EPCs, for each of which is given an estimate of the maximum predicted nominal amount by which the unreliability might change going from 'good' conditions to 'bad'. Section B of the table shows the first 17 EPCs listed, those with the strongest influence; entries 18-38 list further EPCs with a weaker influence.

In applying an EPC, use is made of a weighting for the proportion of the EPC which is effective. Thus for a task of type D with a nominal unreliability of 0.09, a single EPC of ×4 and a weighting of 0.5, the resultant unreliability is:

0.09 × [(4 − 1) × 0.5 + 1] = 0.23
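The SLI relations, equations [14.28.1] and [14.28.2], can be sketched numerically. This is an illustrative sketch only: the relevancy factors, quality weightings and the two calibration tasks below are invented for the example, whereas SLIM-MAUD itself obtains them from a structured expert panel.

```python
import math

def sli(relevancies, qualities):
    """Success likelihood index, eq. [14.28.1]: SLI = sum of r_i * w_i.
    Relevancy factors must sum to unity; qualities are on the 1-9 scale."""
    assert abs(sum(relevancies) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(relevancies, qualities))

def calibrate(sli1, hep1, sli2, hep2):
    """Fit log10(HEP) = a*SLI + b, eq. [14.28.2], from two tasks of known HEP."""
    a = (math.log10(hep1) - math.log10(hep2)) / (sli1 - sli2)
    b = math.log10(hep1) - a * sli1
    return a, b

# Two hypothetical calibration tasks: a well-supported task (SLI 8, HEP 1e-4)
# and a poorly supported one (SLI 3, HEP 1e-2).
a, b = calibrate(8.0, 1e-4, 3.0, 1e-2)

# Hypothetical task with three PSFs (relevancies sum to one, qualities 1-9):
hep = 10 ** (a * sli([0.5, 0.3, 0.2], [7, 5, 4]) + b)
```

With these invented anchors, a = -0.4 and b = -0.8, so an SLI of 5.8 gives an HEP of about 7.6 x 10^-4.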


Table 14.25 HRA Handbook: general guidelines on estimation of the error factor (EF)^a (after Swain and Guttmann, 1983)

Item   Task and HEP guidelines^b                                                EF^c

Task consists of performance of step-by-step procedure^d conducted under routine circumstances (e.g. a test, maintenance, or calibration task); stress level is optimal:
(1)   Estimated HEP < 0.001                                                     10
(2)   Estimated HEP 0.001-0.01                                                   3
(3)   Estimated HEP > 0.01                                                       5

Task consists of performance of step-by-step procedure^d but carried out in non-routine circumstances such as those involving a potential turbine/reactor trip; stress level is moderately high:
(4)   Estimated HEP < 0.001                                                     10
(5)   Estimated HEP ≥ 0.001                                                      5

Task consists of relatively dynamic^d interplay between operator and system indications, under routine conditions, e.g. increasing or reducing power; stress level is optimal:
(6)   Estimated HEP < 0.001                                                     10
(7)   Estimated HEP ≥ 0.001                                                      5

(8)   Task consists of relatively dynamic^d interplay between operator and system indications but carried out in non-routine circumstances; stress level is moderately high   10
(9)   Any task performed under extremely high stress conditions, e.g. large LOCA, conditions in which the status of ESFs is not perfectly clear, or conditions in which the initial operator responses have proved to be inadequate and now severe time pressure is felt (see text of Handbook for rationale for EF = 5)   5

^a The estimates in this table apply to experienced personnel. The performance of novices is discussed in Chapter 18 of the Handbook.
^b For UCBs for HEPs based on the dependence model, see Table 7-3 of the Handbook.
^c The highest upper UCB is 1.0. See Appendix A to calculate the UCBs for Pr(F_T), the total-failure term of an HRA event tree.
^d See Table 18-1 of the Handbook for definitions of step-by-step and dynamic procedures.
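The error-factor arithmetic of the worked example above, together with the capping of the upper UCB at 1.0 noted in footnote c of Table 14.25, can be sketched as follows (the function names are illustrative, not the Handbook's notation):

```python
import math

def error_factor(lower_ucb, upper_ucb):
    """Error factor from symmetrical bounds: f = (upper/lower)**(1/2)."""
    return math.sqrt(upper_ucb / lower_ucb)

def symmetric_bounds(median_hep, f):
    """Symmetrical UCBs about the median, m/f and m*f; the upper bound
    is capped at 1.0, the highest upper UCB in the Handbook."""
    return median_hep / f, min(median_hep * f, 1.0)

f = error_factor(0.003, 0.03)          # the worked example: f = sqrt(10) ~ 3.2
lo, hi = symmetric_bounds(0.01, 3.0)   # (~0.0033, 0.03)
```

Note that for an HEP of 0.25 and f = 5 the raw upper bound would be 1.25, which is why asymmetrical bounds are used in such cases.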

Table 14.26 Classification of generic tasks and associated unreliability estimates (J.C. Williams, 1986)

A Generic classifications^a

Generic task                                                                    Proposed nominal human unreliability (5th-95th percentile boundaries)
A  Totally unfamiliar, performed at speed with no real idea of likely consequences   0.55 (0.35-0.97)
B  Shift or restore system to a new or original state on a single attempt without supervision or procedures   0.26 (0.14-0.42)
C  Complex task requiring high level of comprehension and skill                 0.16 (0.12-0.28)
D  Fairly simple task performed rapidly or given scant attention                0.09 (0.06-0.13)
E  Routine, highly practised, rapid task involving relatively low level of skill   0.02 (0.007-0.045)
F  Restore or shift a system to original or new state following procedures, with some checking   0.003 (0.0008-0.007)
G  Completely familiar, well-designed, highly practised, routine task occurring several times per hour, performed to highest possible standards by highly motivated, highly trained and experienced person, totally aware of implications of failure, with time to correct potential error, but without the benefit of significant job aids   0.0004 (0.00008-0.009)
H  Respond correctly to system command even when there is an augmented or automated supervisory system providing accurate interpretation of system state   0.00002 (0.000006-0.00009)

M  Miscellaneous task for which no description can be found (nominal 5th to 95th percentile data spreads were chosen on the basis of experience suggesting log-normality)   0.03 (0.008-0.11)

B Error-producing conditions^b

Error-producing condition                                                       Maximum predicted nominal amount by which unreliability might change, going from 'good' conditions to 'bad'
1. Unfamiliarity with a situation which is potentially important but which only occurs infrequently or which is novel   ×17
2. A shortage of time available for error detection and correction   ×11
3. A low signal-to-noise ratio   ×10
4. A means of suppressing or overriding information or features which is too easily accessible   ×9
5. No means of conveying spatial and functional information to operators in a form which they can readily assimilate   ×8
6. A mismatch between an operator's model of the world and that imagined by the designer   ×8
7. No obvious means of reversing an unintended action   ×8
8. A channel capacity overload, particularly one caused by simultaneous presentation of non-redundant information   ×6
9. A need to unlearn a technique and apply one which requires the application of an opposing philosophy   ×6
10. The need to transfer specific knowledge from task to task without loss   ×5.5
11. Ambiguity in the required performance standards   ×5
12. A mismatch between perceived and real risk   ×4
13. Poor, ambiguous or ill-matched system feedback   ×4
14. No clear direct and timely confirmation of an intended action from the portion of the system over which control is to be exerted   ×3
15. Operator inexperienced (e.g. a newly qualified tradesman, but not an 'expert')   ×3
16. An impoverished quality of information conveyed by procedures and person-person interaction   ×3
17. Little or no independent checking or testing of output   ×3
18. A conflict between immediate and long-term objectives   ×2.5
19. No diversity of information input for veracity checks   ×2.5
20. A mismatch between the educational achievement level of an individual and the requirements of the task   ×2
21. An incentive to use other more dangerous procedures   ×2
22. Little opportunity to exercise mind and body outside the immediate confines of the job   ×1.8
23. Unreliable instrumentation (enough that it is noticed)   ×1.6
24. A need for absolute judgements which are beyond the capabilities or experience of an operator   ×1.6
25. Unclear allocation of function and responsibility   ×1.6
26. No obvious way to keep track of progress during an activity   ×1.4
27. A danger that finite physical capabilities will be exceeded   ×1.4
28. Little or no intrinsic meaning in a task   ×1.4
29. High-level emotional stress   ×1.3
30. Evidence of ill-health amongst operatives, especially fever   ×1.2
31. Low workforce morale   ×1.2
32. Inconsistency of meaning of displays and procedures   ×1.2
33. A poor or hostile environment (below 75% of health or life-threatening severity)   ×1.15
34. Prolonged inactivity or highly repetitious cycling of low mental workload tasks   ×1.1 for first half-hour, ×1.05 for each hour thereafter
35. Disruption of normal work-sleep cycles   ×1.1
36. Task pacing caused by the intervention of others   ×1.06
37. Additional team members over and above those necessary to perform task normally and satisfactorily   ×1.03 per additional man
38. Age of personnel performing perceptual tasks   ×1.02
^a If none of the task descriptions A-H fits the task under consideration, the values given under M may be taken as reference points.
^b Conditions 18-38 are presented simply because they are frequently mentioned in the human factors literature as being of some importance in human reliability assessment. To a human factors engineer, who is sometimes concerned about performance differences of as little as 3%, all these factors are important, but to engineers who are usually concerned with differences of more than 300%, they are not very significant. The factors are identified so that engineers can decide whether or not to take account of them after the initial screening.


The method also includes a set of associated remedial measures to be applied to improve the reliability.

HEART has been designed as a practical method, and is easy to understand and use. It was one of the principal techniques used in the quantitative risk assessment for Sizewell B, as described in Section 14.39.

14.30 Assessment of Human Error: Method of Dougherty and Fragola
The deficiencies of THERP in respect of non-routine behaviour have led to the development of alternative methods. One of these methods is that described in Human Reliability Analysis by Dougherty and Fragola (1988). Like THERP, this method has been developed essentially as an adjunct to fault tree analysis, but the approach taken is rather different.

The earlier approach associated human errors essentially with equipment or procedure failures. One effect was the generation of a large number of human events linked to the equipment failures. Another was the neglect of more significant but complex human errors. The approach taken in the human reliability analysis (HRA) of Dougherty and Fragola is to concentrate on a smaller number of more significant failures and human errors. There is also strong emphasis on integrating the HRA into the probabilistic risk assessment (PRA).

Figure 14.30 Method of Dougherty and Fragola: methodology for human reliability analysis (Dougherty and Fragola, 1988) (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)


14.30.1 Overall approach
The HRA process is shown in Figure 14.30. The generation of the human events to be considered is part of the PRA. As with physical hazards, identification has a good claim to be the most difficult stage of the process. Error classification schemes have been developed as an aid, but give no guarantee of completeness. The human events are identified, characterized and quantified according to the scheme shown in Figure 14.31.

Certain principles have been identified to guide the development of the PRA so that there is compatibility with the HRA. In broad outline these guidelines include the following:

(1) The description of the human event should refer to failure of a function rather than to some lower level of abstraction.
(2) The human events should be confined to the three categories of pre-initiator (or latent) events, human-induced initiator events and post-initiator events.
(3) Human events in the latent category should be incorporated in the fault tree at the highest appropriate level.
(4) A human-induced initiator event should be subsumed in the initiator type which includes the human event. The data required should be expressed as a probability and not as a frequency.

Figure 14.31 Method of Dougherty and Fragola: classification of human events (Dougherty and Fragola, 1988) (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)


(5) A human event in the post-initiator category which relates to failure of a system should be modelled as a single event under the gate below the top event of the fault tree for that system, the gate being (a) an OR gate if the system is manually activated or (b) an AND gate if the system is automatically activated.

14.30.2 Task analysis
The starting point is again a task analysis. A typical task analysis is outlined in Table 14.27.

Table 14.27 Task analysis in support of a human reliability analysis for a task in nuclear power plant operation (Dougherty and Fragola, 1988) (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)

Goal
Obtain reactor core system water make-up and cooling following a small loss of coolant accident (LOCA)

Step                                      Means

Diagnose event
Detect plant upset condition              Several alarms
Observe RCS level indicator               Pressurizer or reactor level
Observe decreasing RCS pressure           Pressure indicator
Observe sump level increasing             Level indicator
Observe containment pressure increasing   Pressure indicators
Observe no secondary side radiation       Radiation monitors

Isolate the LOCA
Close PORV block valve 1                  One valve control
Close PORV block valve 2                  Another valve control
Close letdown line                        One valve control
Close RCP seal isolation valve 1          One valve control
Close RCP seal isolation valve 2          Another valve control

Verify safety system actuation
Observe HPI pump meters                   Two flow indicators
Start HPI pumps                           Two pump start controls
Observe AFW pump meters                   Two flow indicators
Start AFW pumps                           Two pump start controls

Obtain long-term cooling
Await low level tank alarm                One level indicator
Open sump valve 1                         One valve control
Open sump valve 2                         One valve control
Open tank valve 1                         One valve control
Open tank valve 2                         One valve control

AFW, auxiliary feed water; HPI, high pressure injection; PORV, power operated relief valve; RCP, reactor coolant pump; RCS, reactor cooling system.

14.30.3 Response and recovery events
Human events which occur in the post-initiator period are treated as either response or recovery events. The nuclear industry has gone to some pains to provide a rather comprehensive set of emergency response guidelines. Following such guidelines, a planned activity is classed as a response, whilst an unplanned activity is classed as a recovery. Recovery activity is applicable only to those events from which recovery is possible; there are some events for which there is no recovery.

The incorporation of recovery is not undertaken during the development of the main fault tree but is deferred until the tree, and its cut sets, are available. There are two reasons for this. One is that introduction of recovery during the main synthesis tends to result in an undue increase in the size of the tree. The other is that recovery analysis is an iteration through much of the PRA/HRA process, embraces system and human aspects, and tends to be highly specific to the set of events from which the recovery is to be made. It is therefore better to consider the set of events and the associated recovery as a separate exercise at the end.

14.30.4 Operator action event trees
Some of the activity of the operator consists of planned tasks, but he may also have to respond to an abnormal condition on the plant. Thus, although some events can be incorporated into the main fault tree of the PRA, post-initiator events involving action by the operator require the introduction of a specific event tree, the operator action event tree (OAET).

The response of the operator to an abnormal condition may be described in terms of an event tree. The use of event trees for this purpose has been formalized as the OAET or, simply, the operator action tree (OAT) technique.

The OAET method has been described by R.E. Hall, Fragola and Wreathall (1982). Three basic features are recognized: (1) perception, (2) diagnosis and (3) response. Figure 14.32 shows a typical OAET for a nuclear reactor coolant pump seal LOCA. The OAET tends to be used in conjunction with a time-reliability correlation.

14.30.5 Modelling of human events
The classification of human events and the associated event models used in the HRA are shown in Figure 14.31. The human activity is divided into three classes which are specifiable and one which is not. The three definable activities are planned tasks, response and recovery. Each of these definable activities is further divided into slips and mistakes, and these in turn are divided into errors of omission or commission.

In each case a slip is modelled using a modified version of THERP. Mistakes of response and recovery are modelled using TRCs. Other mistakes are not modelled, except that response mistakes of commission may be modelled. Recovery mistakes of commission may be estimated as described below.

14.30.6 Filtering of human events
The human events identified are subjected to both a qualitative and a quantitative filter. This process is assisted if the analysis of the hardware and human aspects proceeds simultaneously. This is the most effective way to apply a qualitative filter to the large proportion of the human events identified.

A quantitative filter is then applied. This means that an approximate estimate is made of the probability of each


Figure 14.32 Operator action event tree for a nuclear reactor pump seal loss-of-coolant accident (LOCA) (Dougherty and Fragola, 1988). BWST, borated water storage tank; RCP, reactor coolant pump; RCS, reactor cooling system; SI, safety interlock; SW, seal water (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)

human event and, if at this level of probability the event has negligible effect on the fault tree, it is not pursued. The approach used draws on THERP. A probability of 0.001 is used as a basic screening value for the probability of a human event for latent or human-induced initiator failures. If there is redundancy so that a second error must occur for the failure to occur, a conditional probability of 0.1 is used, corresponding to moderate dependency. For post-initiator events use is made of the appropriate TRC, taking the probability at 5 minutes as the screening value.

14.30.7 Modelling of slips
Slips are errors in an activity which has to some degree been planned. This applies even to recovery, where the operator first formulates and then executes a plan. The method used to model slips is a modified version of THERP. One modification is to consider only one slip per task. The probability of the slip is taken as a basic value of 0.001 or, if there is redundancy, 0.0001. These values are the same as those used for screening. They are then adjusted using appropriate performance shaping factors. The authors also describe alternative, more complex methods of estimating the probability of slips.

14.30.8 Modelling of mistakes
Response and recovery mistakes are modelled using time-reliability correlations. The underlying assumption in this approach is that, although the probability of success depends, in principle, on many factors, the time available is the dominant one.

Four separate TRCs are used. These are for the cases:

(1) response without hesitancy;
(2) response with hesitancy;
(3) recovery without hesitancy;
(4) recovery with hesitancy.

Hesitancy is associated with the burden of the task, which in turn depends on a number of factors as described below. Account is also taken of the effect of performance shaping factors. The incorporation of these two features into the TRCs is described below.
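The slip values just described amount to a simple lookup. A minimal sketch; the function name and the multiplier standing in for the PSF adjustment are illustrative, not the authors' notation:

```python
def slip_probability(redundant=False, psf_multiplier=1.0):
    """Basic slip values of Dougherty and Fragola: 0.001 per task, or
    0.0001 where redundancy requires a second error to occur as well
    (0.001 combined with a conditional probability of 0.1, moderate
    dependency). These basic values are also used for screening. The
    psf_multiplier is a stand-in for the subsequent adjustment by
    performance shaping factors."""
    base = 0.0001 if redundant else 0.001
    return base * psf_multiplier

p1 = slip_probability()                 # 0.001, also the screening value
p2 = slip_probability(redundant=True)   # 0.0001
```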


14.30.9 Time-reliability correlations
It is found empirically that response times tend to fit a log-normal distribution. The TRC models used are based on the assumption that the log-normal distribution is applicable. They are characterized by a log-normal median m and an error factor f and give a straight line when plotted on log-probability paper.

The basic response time τr is given by:

τr = k1 k2 τ   [14.30.1]

where τ is the median response time, τr is the adjusted median response time and k1 and k2 are adjustment factors. The value of m used is the adjusted median response time.

The first factor k1 takes account of the availability or otherwise of a rule, in other words of the difference between response and recovery. It has the values:

k1 = 1   rule available   [14.30.2a]
k1 = 0   no rule available   [14.30.2b]

The second factor k2 takes account of the performance shaping factors as measured by the success likelihood index (SLI). It is assumed that at best (SLI = 1) the median response time is halved and at worst (SLI = 0) it is doubled. The factor is defined as:

k2 = 2^(1 − 2x)   [14.30.3]

where x is the SLI.

The base TRC is the nominal diagnosis curve given in THERP. This curve has a median m of 4 and an error factor f of 3.2. It therefore has a relatively high median and low error factor. The high median corresponds to a recovery rather than a response and the low error factor to absence of hesitancy. On this basis this curve is taken as that for recovery without hesitancy.

The other TRC curves are then obtained from this basic curve. The curve for response without hesitancy has the parameters m = 2 and f = 3.2; that for response with hesitancy m = 2 and f = 6.4; and that for recovery with hesitancy m = 4 and f = 6.4.

The TRC curves so derived are shown in Figure 14.33. These curves already incorporate the factor k1 but not the factor k2. The response time obtained from the curves should be multiplied by k2.

Figure 14.33 Method of Dougherty and Fragola: time-reliability correlations (Dougherty and Fragola, 1988) (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)

14.30.10 Performance shaping factors
Performance shaping factors are treated using SLIM. In the format given by the authors the SLI is defined as:

SLI = Σ rni qi   (i = 1, …, n)   [14.30.4]

with

rni = ri / Σ ri   [14.30.5]

where q is the quality, r the rank and rn the normalized rank.

In general, q has a value in the range 0-1. This is so if the PSF can have either a bad or a good influence. If it can have only a bad influence, the range of q is restricted to 0-0.5, and if it can have only a good influence the range is restricted to 0.5-1. The determination of the SLI is illustrated in Table 14.28, which gives the SLI for a human error in a recovery mistake. The SLI can be used to adjust probabilities of slips or mistakes. In the latter case it is applied to the median response time of the TRC, as described below.

14.30.11 Recovery mistakes of commission
In an incident situation the possibility exists that the operator will take some action which actually makes the situation worse. As so far described, the methodology does not include a model for such an action, which is called a 'recovery mistake of commission'.

A method is given, however, for estimating the probability that such an error will occur. This probability is the product of three probabilities. The first is the probability of a significant and extended commission error, the second the probability that the emergency response guidelines (ERGs) do not cover the resulting situation, and the third the probability that the senior reactor operator (SRO) or other personnel fail to recover the situation.

The values of these three probabilities are estimated as follows. At the time of writing there had been 10 000 reactor scrams. Two involved misdiagnoses which led to core melt, including Chernobyl. The probability of a significant and extended commission error is thus estimated as 0.0002 (2/10 000). It is assumed that ERGs would effect a reduction in probability of between one and three orders of magnitude and a reduction of two orders of magnitude is selected, so that the second probability is estimated as 0.01. The action of the SRO, whose function is to stand back and monitor plant status, is assumed to have only low dependency with that of the crew and the probability that he will fail to effect recovery is estimated as 0.05. From these figures the

07:23 7/11/00 Ref: 3723 LEES – Loss Prevention in the Process Industries Chapter 14 Page No. 83
14/84 HUMAN FACTORS AND HUMAN ERROR

Table 14.28 Success likelihood index calculation for a recirculation event in nuclear power plant operation (Dougherty and Fragola, 1988) (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)

Influence                   Type   Rank   Relative rank   Quality   Product   %
Competing resources         bad      10   0.05            0.4       0.02       3
Tank level indication       good     50   0.23            0.9       0.21      31
Size of LOCA                both     10   0.05            0.2       0.01       1
Expectation of failure      both     50   0.23            0.3       0.07      10
Training on contingencies   both    100   0.45            0.8       0.36      54

Total                               220                             SLI = 0.67
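The arithmetic of Table 14.28 can be reproduced in a short sketch; the two-decimal rounding mirrors the tabulated relative ranks and products:

```python
# Re-computation of the SLI from the ranks and qualities in Table 14.28.
influences = {                        # influence: (rank, quality)
    "Competing resources":       (10, 0.4),
    "Tank level indication":     (50, 0.9),
    "Size of LOCA":              (10, 0.2),
    "Expectation of failure":    (50, 0.3),
    "Training on contingencies": (100, 0.8),
}
total_rank = sum(rank for rank, _ in influences.values())    # 220
products = [round(round(rank / total_rank, 2) * quality, 2)  # relative rank x quality
            for rank, quality in influences.values()]
sli = round(sum(products), 2)
print(total_rank, sli)  # 220 0.67
```

The SLI of 0.67 matches the table; without the intermediate rounding the value comes out at about 0.66, which illustrates how sensitive such indices are to presentation.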

Table 14.29 Proforma for a mistake in a recovery event in nuclear power plant operation (Dougherty and Fragola, 1988) (Reproduced with permission of John Wiley & Sons Ltd from Human Reliability Analysis by E.M. Dougherty and J.R. Fragola, 1988, Copyright ©)

Event designators: NDXOVERH          Event type: Recovery
Event description:
The crew fails to realign equipment following recirculation hardware failures

Option information: Screening value: 4E-1
Rule based? No
Hesitancy? No
SLI calculated? Yes
Standard TRC? Yes

Influences                         Rank   Normed rank   Quality   Product
1. Display adequacy                  10   0.06          70          4.2
2. Procedure adequacy                40   0.24          30          7.2
3. Team effectiveness                20   0.12          80          9.6
4. Communication effectiveness       10   0.06          80          4.8
5. Workload                          40   0.24          30          7.2
6. Training adequacy                 50   0.29          70         20.3
                                                        Total      53.3

SLI                               53.3
Available time (min)              20
Mean probability and statistics   1E-2
Lower bound                       4E-4
Upper bound                       4E-2
Median time (min)                 4.0
Error factor                      3.2

(E-n denotes 10^-n, e.g. 4E-1 = 0.4)
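A hedged sketch of how the proforma's median time and error factor can reproduce the quoted mean probability, assuming the common lognormal TRC convention in which the error factor is the ratio of the 95th-percentile response time to the median (that convention is my assumption, not stated in the table):

```python
# Lognormal time-reliability correlation: probability that the response
# time exceeds the available time, given a median time and error factor.
from math import erf, log, sqrt

def trc_failure_prob(median_min, error_factor, available_min):
    sigma = log(error_factor) / 1.645           # lognormal shape (assumed convention)
    z = log(available_min / median_min) / sigma
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF
    return 1.0 - cdf                            # P(response time > available time)

p = trc_failure_prob(median_min=4.0, error_factor=3.2, available_min=20.0)
print(f"{p:.1e}")  # ~1.1e-02, consistent with the 1E-2 mean probability above
```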

authors derive a figure for the frequency of unrecovered mistakes of commission.

14.30.12 Computer aids
The documentation required of the HRA by the PRA is quite extensive and a computer program ORCA (Operator Reliability and Assessment) has been developed to assist in producing it. Table 14.29 shows a typical document for a recovery mistake.

14.30.13 Application to nuclear power plants
Detailed illustrations of the application of the HRA technique to particular nuclear power plants are given by the authors.

14.30.14 Validation
The HRA method just outlined is described by its authors as speculative. In other words, it lacks validation. In this it is on a par with most of the techniques used for quantifying human error in nuclear and process plants. Nevertheless, this general type of approach represents the most systematic method currently available for the treatment of the human error aspects of a PRA.

14.31 Assessment of Human Error: CCPS Method

Another methodology for human reliability assessment is that described in the CCPS Human Error Prevention Guidelines (1994/17). The structure of the method is shown in Figure 14.34. The core is the four stages of (1) critical human interaction identification and screening, (2) qualitative prediction of human error, (3) representation of event development and (4) quantification of significant human errors.
The qualitative error prediction stage utilizes the SPEAR method involving task analysis, PIF analysis, PHEA, consequence analysis and error reduction analysis. The representation of event development typically takes the form of fault trees and event trees. The techniques described for the quantitative error prediction stage are THERP, SLIM and influence diagram analysis (IDA). The latter is described in the next section.
The CCPS HRA methodology has the important characteristic that it requires the analyst to start by acquiring a thorough understanding of the system, first by critical human interaction identification and then by detailed qualitative analysis. Only then does the work progress to the use of the quantitative methods.

14.32 Assessment of Human Error: Other Methods


The following methods also merit mention. A brief
account of each is given in the Second Report of the
Study Group on Human Factors of the ACSNI (1991).
Accounts are also given by J.C. Williams (1985), Waters
(1989) and Brazendale (1990 SRD R510).

14.32.1 TESEO method


A simple model for the estimation of the probability of
operator error is that used in TESEO (Tecnica Empirica
Stima Errori Operatori) developed by Bello and
Colombari (1980). The probability, q, of error is assumed
to be the product of five parameters K1–K5 as follows:

q = K1 K2 K3 K4 K5        [14.32.1]
Definitions and values of the parameters are given in
Table 14.30.
An illustration of the use of TESEO has been given by
Kletz (1991e). He considers a daily task of filling a tank
by watching the level and closing a valve when the tank
is full. He suggests that a reasonable estimate of failure
is 1 in 1000, or once in 3 years. In practice, men operate
such systems without incident for periods of 5 years.
Using the TESEO approach he sets K1 as 0.001, K2 as
0.5 and the other parameters as unity, obtaining a
probability of failure of 1 in 2000, or once every 6 years.
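Kletz's figures can be multiplied out directly; the parameter values below are the ones quoted in the text:

```python
# TESEO estimate q = K1*K2*K3*K4*K5 for the daily tank-filling task.
from math import prod

K = {
    "K1": 0.001,  # simple, routine activity
    "K2": 0.5,    # temporary stress factor (about 20 s available)
    "K3": 1.0,    # average knowledge and training
    "K4": 1.0,    # normal situation
    "K5": 1.0,    # good microclimate and interface
}
q = prod(K.values())
print(q)  # 0.0005, i.e. about 1 failure in 2000 operations
```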

14.32.2 Absolute probability judgement (APJ) method


In the absolute probability judgement (APJ) method, described by Seaver and Stillwell (1983), experts are asked to make direct estimates of human error probabilities for the task.
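As a minimal illustration of how such direct estimates might be pooled (the geometric mean is one common aggregation rule in expert judgement work; both the rule and the numbers here are illustrative assumptions, not taken from the cited authors):

```python
# Geometric-mean pooling of direct expert HEP estimates (illustrative).
from math import prod

estimates = [3e-3, 1e-2, 5e-3]          # hypothetical HEPs from three experts
gm = prod(estimates) ** (1 / len(estimates))
print(f"{gm:.1e}")  # ~5.3e-03
```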

14.32.3 Method of paired comparisons (PCs)


The use of the method of paired comparisons (PCs) in
expert judgement was outlined in Chapter 9. It may be
applied to the estimation of human error probabilities, as
described by Blanchard et al. (1966) and Hunns (1980,
1982).

14.32.4 Influence diagram approach (IDA)


Influence diagram analysis (IDA) is a tool developed in the context of decision analysis (R.A. Howard and Matheson, 1980). It has been adapted for work on human factors by L.D. Phillips, Humphreys and Embrey (1983). Essentially it is a form of logic tree showing the relations between particular performance shaping factors. As such it may be used to obtain quantitative estimates based on expert judgement of the effects of these factors.

Figure 14.34 A methodology for human reliability assessment (CCPS, 1994/17) (Courtesy of the American Institute of Chemical Engineers)

14.32.5 Human Cognitive Reliability (HCR) correlation
The human cognitive reliability (HCR) correlation of Hannaman and co-workers (Hannaman et al., 1985; Hannaman and Worledge, 1987) is a method in which the actions of the operating crew are represented in the form of an extended operator action tree and the probability of failure to respond is assessed using a set of three time–reliability correlations.

14.32.6 Systematic human error reduction and prediction approach (SHERPA)
The systematic human error reduction and prediction approach (SHERPA), and its application to human performance in ultrasonic inspection, has been described by Murgatroyd et al. (1986). SHERPA involves the three stages of identification of the set of tasks, hierarchical task analysis and human error analysis. Hierarchical task analysis (HTA) is used to identify the two main types of task element handled: those involving manual skills, and those involving the application of condition–action rules. The human error analysis utilizes a flowchart technique to identify the external error modes of the actions comprising the task and the psychological error mechanisms which give rise to these modes. Other types of task element are dealt with by some other appropriate method.
SHERPA is one of the two methods cited in the SRDA Operating Procedures Guide (Bardsley and Jenkins, 1991 SRDA-R1) for the treatment of human error in the development of operating procedures.

14.32.7 Critical action and decision approach (CADA)
The other method quoted by these authors is the critical action and decision approach (CADA) of Gall (1990). This is a technique for systematic examination of decision-making tasks and is thus complementary to SHERPA. CADA utilizes checklists to classify and examine decision errors and to assess their likelihood.

14.32.8 Maintenance personnel performance simulator (MAPPS)
The maintenance personnel performance simulator (MAPPS) method is concerned principally with the effect of manning levels on maintenance tasks. The basic premise is that the probability of failure is a function of the loading on the personnel.

14.32.9 Comparative evaluations
Comparative evaluations of sets of human reliability techniques have been given by several authors, including Brune, Weinstein and Fitzwater (1983), J.C. Williams (1983, 1985a), Bersini, Devooght and Smidts (1988), Humphreys (1988a,b) and Kirwan (1988). J.C. Williams (1985b) has compared six methods of human reliability assessment, including the AIR data bank, APJ, PC and SLIM.
The methods treated in the comparative evaluation by Humphreys (1988a,b), described more fully in Section 14.35, are APJ, PC, TESEO, THERP, HEART, IDA, SLIM and HCR. Another account of this work is given by Kirwan (1988).
A comparative evaluation was also made in the Benchmark Exercise described in Section 14.36.

Table 14.30 Operator error probability parameters used in TESEO (Bello and Colombari, 1980) (Courtesy of Elsevier Applied Science Publishers Ltd)

Type of activity                                        K1
Simple, routine                                         0.001
Requiring attention, routine                            0.01
Not routine                                             0.1

Temporary stress factor for routine activities
Time available (s)                                      K2
2                                                       10
10                                                      1
20                                                      0.5

Temporary stress factor for non-routine activities
Time available (s)                                      K2
3                                                       10
30                                                      1
45                                                      0.3
60                                                      0.1

Operator qualities                                      K3
Carefully selected, expert, well trained                0.5
Average knowledge and training                          1
Little knowledge, poorly trained                        3

Activity anxiety factor                                 K4
Situation of grave emergency                            3
Situation of potential emergency                        2
Normal situation                                        1

Activity ergonomic factor                               K5
Excellent microclimate, excellent interface with plant  0.7
Good microclimate, good interface with plant            1
Discrete microclimate, discrete interface with plant    3
Discrete microclimate, poor interface with plant        7
Worst microclimate, poor interface with plant           10

14.33 Assessment of Human Error: Performance Shaping Factors

14.33.1 THERP method
The use of PSFs in the THERP methodology as given in the HRA Handbook has been outlined above. The factors are listed and described but a formal quantitative method of determining for each PSF an adjustment factor to be applied to the human error probability does not appear to be used.


14.33.2 SLIM
PSFs are the basis of SLIM, as described above. In this method the value, or quality, of a PSF is determined by structured expert judgement. The influence diagram technique is used to show the structure of the relationship between PSFs.

14.33.3 HEART
HEART is another method in which PSFs play a central role. The classification of the generic tasks is itself based on PSF-like distinctions and the error producing conditions (EPCs) which are applied to the basic generic task reliability estimates are in effect PSFs.

14.33.4 White method
R.F. White (1984 SRD R254) has described a form of PSF, the observable operational attribute (OOA), in which, as the name implies, the emphasis is on the observability of the attribute. He provides a checklist of attributes, broken down into (1) plant attributes and (2) maintenance attributes. Examples of attributes listed as observable are:

(1) What is the time-scale of a filling operation of one tank (or two at a time)? (e.g. Within one shift? Longer than one shift?)
(2) Is formal training given at instrument fitter level?

14.33.5 Whalley model
A methodology for identifying error causes and making the link between them and the performance shaping factors has been developed by Whalley (1987). The classification structure of the PSFs used is shown in Figure 14.35. A total of 146 PSFs is defined. The PSFs influence the error causes. Each error cause may be affected by several PSFs and each PSF may affect several error causes.
The method utilizes classifications of (1) task types, (2) response types, (3) error types, (4) error mechanisms and (5) error causes. There are seven task types (TTs), which are shown in Table 14.31, Section A. There are also seven response types (RTs) as shown in Section B of the table. The ten error types (ETs) are shown in Section C. The error mechanisms and error causes are shown in Figure 14.36. There are 10 error mechanisms (EMs) and 37 error causes (ECs) (including causes 23a and 30a).
The task types may be summarized as follows. TT 1 is response to a familiar input and requires essentially no decision-making. TT 2 is response to several familiar inputs, matches the mental model and requires no decision-making. TT 3 is interpretation of, and response to, a developing situation using the mental model. TT 4 is a pre-determined response to a recognized situation. TT 5 is a self-determined activity which may involve planning. TT 6 is selection of one of several alternative plans. TT 7 is correction of error, and thus differs from the other types. For the response types the primary

Figure 14.35 Classification structure of performance shaping factors (Whalley, 1987. Reproduced by permission.)


Table 14.31 Some classifications of task, response and error for the human operator (Whalley, 1987. Reproduced with permission.)

A Task types

1. Stimulus
2. Integration
3. Interpretation
4. Requirement
5. Self-generation
6. Choice
7. Correction required

B Response types(a)

                     Discrete   Sequence
Action               Y          Y
No action            Y          N
Give information     Y          Y
Get information      Y          Y

C Error types

1. Not done
2. Less than
3. More than
4. As well as
5. Other than
6. Repeated

Timing errors
7. Sooner than
8. Later than
9. Misordered
10. Part of

(a) Y, valid combination; N, invalid combination

classification is into discrete or sequence responses, the former involving a single unit of performance, the latter a sequence. The secondary classification is into action and communication activities. The error types are cast as guidewords which are broadly similar to those used in hazop studies.
The overall structure of the method is shown in Figure 14.37. The primary linkages are from task type, response type and error type to possible error causes. Each task type can be mapped to a set of error causes, either through an information processing chain (IPC) (TT → IPC → EC) or through a set of error mechanisms (TT → EM → EC). Each response type can be mapped to a set of error causes either through a set of error types (RT → ET → EC) or directly (RT → EC).
The analyst uses these various routes to identify the error causes for the task under consideration and then determines for each cause the relevant PSFs and their impact. A computer aid has been developed to assist in identifying the error causes, linking these to the PSFs and quantifying the effect of the latter.

14.33.6 CCPS method
Another set of performance shaping factors are those given in the Human Error Prevention Guidelines by the CCPS (1994/17), which refers to them as performance influencing factors (PIFs). The PIF classification structure is shown in Table 14.32. The Guidelines give a detailed commentary on each of these factors.

14.34 Assessment of Human Error: Human Error Data

The various methods described for quantitative assessment of human reliability give rise to a demand for data on human error. Some aspects of this are now considered. Human error data and its acquisition has been discussed by a number of workers. The account given in Human Error Prevention Guidelines by the CCPS (1994/17) deals with the essentials.

14.34.1 Human error data
Data on human error may be acquired by a number of methods. One approach involves study of the task from documentation, which may be aided by task analysis. Another group of methods are those based on some form of direct observation. This may be informal or may utilize formal techniques such as activity sampling, verbal protocol, or withholding of information.
A third group of methods are based on debriefing. One technique here is the critical incident technique, originally used by Flanagan (1954), in which a person who has experienced a near miss is debriefed. Debriefing may also be used to gain information about more normal tasks and situations. A fourth approach is the elicitation of information from experts.
The situation studied may be that on real plant or on a simulator. Simulation is widely used to present situations representative of real life in a compressed time-scale. A review of these and other methods of acquisition of data on human error is given by the CCPS.

14.34.2 Human error data collection
The acquisition of high quality data on human error is clearly of central importance. Human error data collection and data collection systems are also treated by the CCPS. The account deals with (1) types of data collection system, (2) design principles for data collection systems, (3) organizational and cultural aspects of data collection, (4) types of data collected, (5) methods of data collection and storage and (6) data interpretation.
The CCPS distinguishes between consequence-driven and causally oriented systems. The traditional reporting system in the process industries is one triggered by incidents. It is often largely a matter of chance, however, whether or not an error has significant consequences. For this reason the Guidelines concentrate on causally oriented systems. Some types of data collection system which are described are (1) the incident reporting and investigation system (IRIS), (2) the root cause analysis system (RCAS), (3) the near miss reporting system (NMRS) and (4) the quantitative human reliability data collection system (QHRDCS).
It is primarily the latter type of system which has the potential to generate data for HRA. There is little evidence of the development of such systems. The CCPS sees them as most likely to emerge in the first instance at in-house level.
The type of data collected will depend on the perspective dominant in the organization, on the human error model implied and on the associated error


Figure 14.36 Structure of error mechanisms and error causes (Whalley, 1987. Reproduced by permission.)


classification. The CCPS list the following types of causal data. Data may refer to (1) event sequence and structure, (2) human error tendencies, (3) performance-influencing factors and (4) organisational issues.

Figure 14.37 Relations between task and response types, error causes and other quantities (Whalley, 1987. Reproduced by permission.)

14.34.3 Human reliability data banks
Human error data banks were a relatively early development. The data entered into these data banks were of the type necessary to support techniques such as THERP. One such data bank is the American Institute for Research (AIR) data bank developed by Altman and co-workers (Payne and Altman, 1962; Altman, 1964) and also described by Meister (1964) and others (Swain, 1968; De Greene, 1970). The store provides data for the execution time and probability of success of task `elements' or `steps'. Each step has an input, a mediating and an output component. Typical input components are indicators such as scales and lights, typical output components are controls such as knobs and push buttons, and the two mediating components are identification/recognition and manipulation. Each component has several parameters. Those for a light include size, number, and brightness. Each parameter has several discrete levels or `dimensions'. Data on time and reliability for each parameter are recorded as functions of its dimensions. The time and reliability of the component are estimated by summing and multiplying, respectively, the time and reliability of the parameters. The time and reliability of the element are obtained from those of the components in a similar way.
Two other data banks from this era are the Aerojet-General data bank, described by Irwin and co-workers (Irwin, Levitz and Freed, 1964; Irwin, Levitz and Ford, 1964), and the Bunker-Ramo data bank, described by Meister (1967). These three data banks have been reproduced in Topmiller, Eckel and Kozinsky (1982). Another somewhat similar data bank is the Sandia Human Error Rate Bank (SHERB) described by Rigby (1967) and Swain (1970). It will be clear that the development of these data banks was largely driven by the needs of the defence and aerospace industries.
Work on the cognitive and socio-technical aspects of human error is almost certainly too recent for the emergence of data banks utilizing classifications based on these approaches. Nor is it clear whether the needs of the process industries are perceived as sufficiently urgent to support the creation of data banks which are both based on this more recent work and applicable to those industries.

14.34.4 Human error database (HED)
The Human Error Database (HED), described by Kirwan (1988), is based on the human error probability data given in the Rasmussen Report, tempered by expert judgement. In that it derives from that report, it has similarities to THERP, but it is less decompositional and is not dependent on any specific model.

14.34.5 Accident and human error classification system (TAXAC)
A design for an accident and human error classification system, TAXAC, as the basis of a human error data bank, has been given by Brazendale (1990 SRD R510). The accident classification is obtained as a function of the accident signature (a skeletal account) S, and the accident causes and conducive factors. The features which contribute to the latter are grouped under the headings: activity A, man M, organization O and plant P. Accidents are also categorized by industry I. Checklists are given for features I, S, A, M, O and P. The author also reviews the requirements and prospects for a human error data bank. Such a venture should not be an isolated entity but should be part of a wide activity on human performance.

14.35 Assessment of Human Error: SRD Human Error Guides

Guidance on several aspects of human error in design and assessment has been issued by the Safety and Reliability Directorate (SRD).
Design is dealt with in Guide to Reducing Human Error in Process Operation (short version) by the Human Factors in Reliability Group (HFRG) (1985 SRD R347) (the HFRG Guide) and The Guide to Reducing Human Error in Process Operation by P. Ball (1991 SRDA-R3), the short and full versions.
The treatment of human error in hazard assessment is covered in A Suggested Method for the Treatment of Human Error in the Assessment of Major Hazards by R.F. White (1984 SRD R254), Human Reliability Assessors Guide by Humphreys (1988a) and Human Error in Risk Assessment by Brazendale (1990 SRD/HSE/510).
A related publication is Developing Best Operating Procedures by Bardsley and Jenkins (1991 SRDA-R1). This is considered in Chapter 20.
The HFRG Guide (1985 SRD R347), as its title indicates, is concerned with qualitative measures for reducing human error rather than with predicting it. It is essentially a collection of checklists under the following headings: (1) operator–process interface, (2) procedures, (3) workplace and working environment, (4) training and (5) task design and job organization.
The Guide distinguishes three meanings of procedure: (1) general guidance, (2) an aid and (3) prescribed behaviour. With regard to the latter, it states the following principles: (1) there should be no ambiguity about when a procedure is to be used, (2) if a procedure is mandatory there should be no incentive to use another

Table 14.32 Classification structure of performance influencing factors (CCPS, 1994/17) (Courtesy of the American Institute of Chemical Engineers)

Operating environment

Chemical process environment:
  Frequency of personnel involvement
  Complexity of process events
  Perceived danger
  Time dependency
  Suddenness of onset of events

Physical work environment:
  Noise
  Lighting
  Thermal conditions
  Atmospheric conditions

Work pattern:
  Work hours/rest hours
  Shift rotation

Task characteristics

Equipment design:
  Location/access
  Labelling
  Personal protective equipment

Control panel design:
  Content and relevance of information
  Identification of displays and controls
  Compatibility with user expectations
  Grouping of information
  Overview of critical information and alarms

Job aids and procedures:
  Clarity of instructions
  Level of description
  Specification of entry/exit conditions
  Quality of checks and warnings
  Degree of fault diagnostic support
  Compatibility with operational experience
  Frequency of updating

Training:
  Clarity of safety and production requirements
  Training in using new equipment
  Practice with unfamiliar situations
  Training in using emergency procedures
  Training in using automatic systems

Operator characteristics

Experience:
  Degree of skill
  Experience with stressful process events

Personality:
  Motivation
  Risk taking
  Risk homeostasis
  Locus of control
  Emotional control
  Type A versus type B

Physical condition and age

Organizational and social factors

Teamwork and communications:
  Distribution of workload
  Clarity of responsibilities
  Communications
  Authority and leadership
  Group planning and orientation

Management policies:
  General safety policy
  Systems of work
  Learning from operational experience
  Policies for procedures and training
  Design policies

method, (3) where possible a procedure should support the operator's skill and discretion rather than replace them and (4) a procedure should be easy to understand and to follow.
R.F. White (1984 SRD R254) describes the use of event tree and fault tree methods to identify the points at which a human action occurs which has an effect on the outcomes. As described earlier, he uses a form of PSF, the observable operational attribute (OOA), which is distinguished by the fact that it is observable and for which he provides a checklist. He gives as an illustrative example the analysis of the filling of a liquefied natural gas tank.
The Human Reliability Assessors Guide (Humphreys, 1988a), given in overview by Humphreys (1988b), is in two parts, the first of which gives a summary of eight techniques for human reliability assessment and the second evaluation criteria for selection and a comparative evaluation based on these criteria. The eight techniques are APJ, PC, TESEO, THERP, HEART, IDA, SLIM and HCRM. For each technique the Guide gives a description and a statement of advantages and disadvantages. The evaluation criteria used relate to (1) accuracy, (2) validity, (3) usefulness, (4) effective use of resources, (5) acceptability and (6) maturity. Accuracy has to do with correspondence with reality and with consistency;


validity with incorporation of human factors knowledge and of the effect of appropriate PSFs.
The report by Brazendale (1990 SRD/HSE/510) presents a taxonomy for an accident classification scheme, TAXAC, intended for use in connection with a human error data bank. The classification is preceded by a discussion of models of human error. An account of this work is given in Section 14.34.

14.36 Assessment of Human Error: Benchmark Exercise

A benchmark exercise of human reliability is described in HF-RBE: Human Factors Reliability Benchmark Exercise: Summary Contributions of Participants (Poucet, 1989). The study is one of a series of benchmark exercises conducted by the Commission of the European Communities (CEC) and organized by the Joint Research Centre (JRC) at Ispra, Italy. The exercise was carried out over 1986–88 with 13 participants. The system studied was the emergency feedwater system (EFS) on a nuclear reactor at the Kraftwerk Union (KWU) site at Grohnde. Two studies were performed: a routine test and an operational transient.
The study carried out by SRD is described in Poucet's volume by Waters. The work showed the importance of the qualitative modelling stage prior to any quantification and the value of event trees as a tool for such modelling. Methods investigated included APJ, TESEO, THERP, HEART and SLIM. No single method appeared superior in all applications.

14.37 Assessment of Human Error: ACSNI Study Group Report

The accounts given of attempts to assess human error illustrate the difficulties and raise the question of whether such assessment is even feasible. The use of quantitative risk assessment has created a demand for the development of techniques for the assessment of human error and this demand has been satisfied by the development of methods, which have just been described, but the validation of these methods leaves much to be desired.
This problem is addressed in the Second Report of the ACSNI (1991) entitled Human Reliability Assessment – A Critical Review. The study deals with the control of the process, but emphasizes that the question is wider than this, embracing also maintenance and other activities. Although it is a report to the nuclear industry, its findings are in large part applicable to the process industries also.
As systems become more automated and reliable, but still vulnerable to human error, the relative importance of human error increases.
The report quotes the view expressed to the Sizewell B Inquiry by the Nuclear Installations Inspectorate (HSE, 1983e) that it was `considered that comprehensive quantification of the reliability of human actions was not, with current knowledge, meaningful or required', and considers how far the situation has changed since that time.
The report describes the basic procedure of human reliability assessment (HRA). This is the breakdown of the task into a series of events, each of which has two alternative outcomes, and the assignment to these events of basic error probabilities (HEPs). These human error probabilities are then modified in respect of dependencies and of performance shaping factors. The HEP for the task is then determined. The robustness of the result is assessed by means of a sensitivity analysis.
The variations on this basic procedure lie in the areas of the data sources, the rules for combination and the handling of time. The different techniques vary in the use which they make of field data, data banks, and expert judgement. They also differ in the way in which they combine the influencing factors. In one method some factors may be taken into account by making distinctions in the basic tasks, whilst in another they may be handled as PSFs. The effect of time may be taken into account by treating it simply as another PSF, or, alternatively, it may be accorded a special status. In this latter case the favoured method is the time–response correlation, in which time is the principal independent variable determining the HEP, the HEP decreasing as the time available increases.
The report examines some of the areas of HRA in which problems arise or over which care must be taken. These include (1) classification of errors, (2) modelling and auditing, (3) operator error probabilities, (4) maintenance error probabilities, (5) interaction between PSFs, (6) sensitivity analysis and (7) changes in management and organization.
It adopts the distinction between skill-based, rule-based and knowledge-based behaviour and that between slips and mistakes. Essentially it argues that slips, associated with skilled behaviour, are amenable to prediction, but mistakes, associated with rule-based behaviour, are harder to predict, whilst for errors in knowledge-based behaviour there is currently no method available. It also draws attention to wilful actions, or violations, which again are not well covered in current methods.
The first step in an assessment of HEPs is the modelling of the task. There is evidence that, for skill-based errors at least, the variability between techniques is less than that between analysts. This points to the need for training of analysts and for an independent check. For auditing it is necessary that the analyst record sufficient detail on his procedures and reasoning.
The report cites estimates of HEPs given in the literature such as the Rasmussen Report, described below. It draws attention to the importance of dependency between human actions. It is characteristic of human error that the probability of a further error following an initial one is often not independent but conditional on, and increased by, the occurrence of the first error. On the other hand, humans have the capability to recover from error. It is also necessary to consider the HEP for tasks where more than one operator is involved.
With regard to maintenance error, the report makes the point that hardware failure rate data already include the effects of maintenance errors. Nevertheless, it is prudent to make some assessment of the maintenance error. In particular, there is the possibility that such errors may be a source of dependent failures, as described in Chapter 9.
Most methods make the assumption that the PSFs are independent of each other. Yet the effects of lack of supervision and of training, for example, are likely to be

07:23 7/11/00 Ref: 3723 LEES – Loss Prevention in the Process Industries Chapter 14 Page No. 92

greater when they occur in combination. An exception is SLIM, which does allow for this effect. The HEP estimates should be explored using sensitivity analysis. The HRA is likely to be more affected than the PRA as a whole by changes in the management and organization of the company, but the assessment of the effect of such changes on HEPs is not straightforward. It seems likely that certain types of error will be more affected than others and, therefore, that their relative importance will change. Skill-level errors may be less influenced by management changes than knowledge-level ones. Violations might well be sensitive to such changes.

The report reviews the strengths and limitations of HRA. It is accepted that PRA has an essential role to play in improving reliability; HRA is a logical and vital extension. Its use has been given impetus by the retrospective assessments conducted after the Three Mile Island incident. An HRA contributes to system design in three ways: it provides a benchmark for designs and safety cases; it gives a quantitative assessment of alternative design or organizational solutions; and it provides a means and a justification for searching out the weak points in a system.

The limitations of HRA principally considered are those of HEP data and of validation. There is a wealth of data from various kinds of human factors experiments, but the situations in which they are obtained are usually to a degree artificial and their applicability is questionable. They provide guidance on the relative importance of different PSFs but generally need to be supplemented by information from other sources. The quantity and quality of field data are relatively low. Those data which do exist are predominantly for slips rather than mistakes or violations. Again they are a guide to the relative importance of different PSFs, but need to be supplemented. A third source of data is expert judgement. The methods for maximizing the quality of assessments and avoiding pitfalls are discussed. The importance is emphasized of the experts having full information on the operational context, such as normal operation or abnormal conditions.

The report addresses the question of the lack of validation of HRA methods. It quotes the following critique by J.C. Williams (1985b): 'It must seem quite extraordinary to most scientists engaged in research in other areas of the physical and technological world that there has been little or no attempt made by human reliability experts to validate the human reliability assessment techniques which they so freely propagate, modify and disseminate.'

Four kinds of validity are considered by the authors: (1) predictive validity, (2) convergent validity, (3) content validity and (4) construct validity. Essentially, for the prediction of a given analyst, predictive validity is concerned with agreement with the real situation, convergent validity with agreement with the predictions of other analysts, content validity with agreement between the elements of the model and the features which are critical in real life, and construct validity with agreement between the structure of the model and that of the real-life situation.

A study of predictive validity has been made by Kirwan (1988). He compared six HRA methods in respect of their ability to predict accident data from experimental, simulator and plant sources, and found that on some criteria some methods seemed reasonably successful. The work was limited, however, to the assessment of the HEP for specified errors and did not deal with modelling of tasks. Two studies of convergent validity, by Brune, Weinstein and Fitzwater (1983) and Bersini, Devooght and Smidts (1988), have found that different assessors give widely differing estimates, even when using the same method. Content validity is probably best assessed by peer review. A study by Humphreys (1988a) based on a comparison of eight methods against a checklist concluded that the relative performance of the methods depends on the problem and that few are completely comprehensive. The report states that construct validity does not appear to have been applied in HRA and that it may not be applicable at this stage of development.

Considering the areas where work is required, the report suggests that one important topic is the development of an improved model of operator behaviour. Here it is the high level decision-making processes which are of prime importance, since it is failures at this level which have the most serious consequences. There is a hierarchy of patterns of learned behaviour, highly learned at the lower levels but less so towards the upper ones. It is the conservatism of these processes which is responsible for behaviour such as 'mind set' and the switch to 'automatic pilot' and for slips when the mind 'jumps the points' and then continues with a whole series of inappropriate actions. Another area where work is needed is the creation of a data set which may be used for validation studies.

The report supports the development and use of methods of HRA. It is an essential element of PRA and is beneficial in its own right. But the techniques available for predicting slips are better than those for mistakes, and particular caution should be exercised with the latter. It should be recognized that HRA is still in its infancy.

It rejects the view that HRA in the nuclear industry should be optional and supports a requirement for its use. HRA provides a structured and systematic consideration of human error. It is already a valuable tool and has the potential to become an invaluable one. A requirement for its use is necessary to provide the impetus for this. The report emphasizes, however, that effort should not be concentrated exclusively on assessment; there is equal need for a systematic approach to the reduction of human error.

The report contains appendices (Appendices 1–4) on HRA methods and their evaluation, dependability of human error data, attempts to establish the validity of HRA methods, and approaches to reducing human error. Some of the procedures reviewed in Appendix 1 are APJ, IDA, TESEO, THERP, HEART, SLIM, TRCs, MAPPS and HED.

Appendix 4 of the report gives guidance on approaches to the reduction of human error. It discusses accident chains and latent and active (enabling and initiating) failures and gives the accident model shown in Figure 2.5 which shows the 'shells of influence' for the PSFs. The approach is based on the identification by the HRA, including the sensitivity analysis, of the features where human error is critical, the application of human factors methods and the implementation of the improvements


indicated by these methods. The role of human factors both in system design and in the design of the man–machine interface and the workplace is described.

14.38 CCPS Human Error Prevention Guidelines

14.38.1 CCPS Guidelines for Preventing Human Error in Process Safety
The prevention of human error on process plants is addressed in the Guidelines for Preventing Human Error in Process Safety edited by Embrey for the CCPS (1994/17) (the CCPS Human Error Prevention Guidelines).
The Human Error Prevention Guidelines are arranged under the following headings: (1) the role of human error in chemical process safety, (2) understanding human performance and error, (3) factors affecting human performance in the chemical industry, (4) analytical methods for predicting and reducing human error, (5) qualitative and quantitative prediction of human error in risk assessment, (6) data collection and incident analysis methods, (7) case studies and (8) setting up an error reduction programme in the plant.

14.38.2 Approaches to human error
The Guidelines distinguish four basic perspectives on human error, which they term: (1) the traditional safety engineering approach, which treats the problem as one of human behaviour and seeks improvement by attempting to modify that behaviour; (2) the human factors engineering and ergonomics (HF/E) approach, which regards human error as arising from the work situation, which it therefore seeks to improve; (3) the cognitive engineering approach, which again accords primacy to the work situation, but places its main emphasis on the cognitive aspects; and (4) the socio-technical systems approach, which treats human error as conditioned by social and management factors. Until quite recently it has been the HF/E approach which has made the running. In the last decade, however, the last two approaches, cognitive engineering and socio-technical systems, have emerged strongly.
The stance of the Guidelines is one of system-induced error. This is akin to the work situation approach but enhanced to encompass the cognitive and socio-technical approaches.

14.38.3 Performance influencing factors
The factors shaping human performance are referred to in the Guidelines as performance influencing factors (PIFs). They give a PIF classification structure and a detailed commentary on each factor.

14.38.4 Methods of predicting and reducing human error
The Guidelines deal with methods for predicting and reducing human error in terms of (1) data acquisition techniques, (2) task analysis, (3) human error analysis and (4) ergonomics checklists.
They describe a number of methods of task analysis, grouped into action-oriented techniques and cognitive techniques. The former include hierarchical task analysis (HTA) and operator action event trees (OAETs). The two representatives of the latter are the critical action and decision evaluation technique (CADET) and the influence modelling and assessment system (IMAS). A critical review is given of each method.
Human error analysis is represented by predictive human error analysis (PHEA) and work analysis. The PHEA method is that utilized in the Guidelines' methodology for HRA.

14.38.5 Methods of predicting human error probability for risk assessment
The Guidelines describe a methodology for HRA, as part of quantitative risk assessment, utilizing both qualitative and quantitative methods for predicting human error. They begin with an illustrative example of fault tree analysis in which the prime contributors are human error events.
The HRA methodology presented in the Guidelines is that already described in Section 14.31 and outlined in Figure 14.34. A detailed commentary is given on each of the stages involved. The core of the method is the four stages of (1) critical human interaction identification and screening, (2) qualitative prediction of human error, (3) representation of event development and (4) quantification of significant human errors.
It is a feature of this HRA methodology that it directs the analyst to acquire understanding of the system and its problems by critical human interaction identification and to undertake a detailed qualitative analysis to further enhance this understanding before embarking on the use of the quantitative methods. It is this, rather than the introduction of any new quantitative technique, which is the most distinctive characteristic of the method.

14.38.6 Methods for data collection and incident analysis
The Guidelines review the collection of data on human error. The treatment given is outlined in Section 14.34.
The Guidelines give a number of methods of incident analysis. They are (1) the causal tree/variation diagram, (2) the management oversight and risk tree (MORT), (3) the sequentially timed events plotting procedure (STEP), (4) root cause coding, (5) the human performance investigation process (HPIP) and (6) change analysis. They also refer to the CCPS Incident Investigation Guidelines, where most of these techniques are described. A relatively detailed account is given of HPIP by Paradies, Unger and Ramey-Smith (1991); HPIP is a hybrid technique combining several of those just mentioned. The CCPS treatment of incident investigation both in these Guidelines and in the Incident Investigation Guidelines is described in Chapter 27.

14.38.7 Case studies
A feature of the Human Error Prevention Guidelines is the number of case histories given. The section on case studies gives five such studies: (1) incident analysis of a hydrocarbon leak from a pipe (Piper Alpha); (2) incident investigation of mischarging of solvent in a batch plant; (3) design of standard operating procedures for the task in Case Study 2; (4) design of VDUs for a computer controlled plant; and (5) audit of offshore emergency blowdown operations.
Other case studies occur throughout the text. One illustrates system-induced error. Other scene-setting case studies cover (1) errors occurring during plant changes and stressful situations, (2) inadequate human–machine


interface design, (3) failures due to false assumptions, (4) poor operating procedures, (5) routine violations, (6) ineffective organization of work, (7) failure explicitly to allocate responsibility and (8) organizational failures.
There are case studies illustrating the application of models of human error such as the step ladder and sequential models.
The importance of human error in QRA is illustrated by the case study, already mentioned, dealing with the prevalence of human error in fault trees. Other case studies illustrate HTA, SPEAR, THERP and SLIM.

14.38.8 Error reduction programmes
The Guidelines provide guidance on the implementation of an error reduction programme in a process plant. A necessary precondition for such a programme is a management culture which provides the background and support for such initiatives. The general approach is essentially that given in the CCPS Process Safety Management Guidelines described in Chapter 6.
Since both safety-related and quality-related errors tend to have the same causes, an error reduction programme may well run in parallel with the quality programme. An error reduction programme should address both existing systems and system design. The tools for such a programme given in the Human Error Prevention Guidelines include (1) critical task identification, (2) task analysis, (3) PIF analysis and (4) error analysis, as described in Sections 14.38.3 and 14.38.4. System design should also address allocation of function. Error reduction strategies for these two cases are presented in the Guidelines.

14.39 Human Factors at Sizewell B

14.39.1 Sizewell B Inquiry
The potential contribution of human factors to the design and operation of nuclear power stations was urged in evidence to the Sizewell B Inquiry by the Ergonomics Society, as described by Whitfield (1994).
The Society saw this contribution as being in the areas of: (1) setting operational goals; (2) allocation of function between humans, hardware and software; (3) definition of operator tasks; (4) job design; (5) overall performance assessment and monitoring of operational experience; (6) operator–plant interface and workplace conditions; (7) operator support documentation; (8) selection and training of operating staff; (9) human reliability assessment; and (10) construction and quality assurance. The Nuclear Installations Inspectorate also laid emphasis on human factors aspects (HSE, 1983e).
The report of the Inquiry Inspector (Layfield, 1987) states: 'I regard human factors as of outstanding significance in assessing the safety of Sizewell B since they impinge on all stages from design to manufacture, construction, operation and maintenance' (paragraph 25.90). It recommended the involvement of the discipline.
The plan put forward by the Central Electricity Generating Board in support of its licence application included commitments to an extensive schedule of training and to the use of probabilistic safety assessment, including quantification of human error.

14.39.2 Human factors studies
A review of the applicability of ergonomics at Sizewell B was undertaken by Singleton (1986), who identified applications in control room design, use of VDU displays, documentation, fault diagnosis, maintenance and task analysis. He stated: 'We know that human operators can achieve superb performance if they are given the right conditions. Appropriate conditions in this context can be listed within the four categories: the information presentations, the training, the support systems and the working conditions and environment.' An overview of the application of human factors at Sizewell B has been given by Whitfield (1994).
Human factors specialists are involved in extensive work on training, supported by task analysis, as described below. In the main control room one feature is the use of a plant overview panel, separate from the other displays, for the monitoring of safety critical parameters. Another area of involvement is in operating instructions. Use is made of 'event-based' instructions to diagnose a fault and to initiate recovery. But in addition, for safety critical functions, there are 'function-based' procedures, which assist in restoring the plant to a safe condition if for some reason the event-based instructions are inappropriate. Further details are given by McIntyre (1992).
Human reliability analysis within the PSA utilizes OAETs and the HEART method, with some use of THERP. An account is given by Whitworth (1987). The application of task analysis at Sizewell B is described by Ainsworth (1994). The programme for this comprised five stages:

(1) preliminary task analysis of critical tasks;
(2) task analysis of selected safety critical tasks;
(3) preliminary talk-through/walk-through evaluations of procedures in a control room mock-up;
(4) validation of procedures on a control room simulator;
(5) task analysis of tasks outside the main control room.

One crucial task on which task analysis was performed was the cooling down and depressurization of the reactor. This was a major study involving some 60 task elements and taking some 44 person-weeks. Time line analysis was used to address issues such as manning.
A review of procedures revealed a number of defects. Besides obvious typographical errors, they included: (1) incorrect instrument numbering (in procedures); (2) incorrect instrument labelling (on panels and in procedures); (3) omission of important clarifiers such as 'all', 'or', 'either' and 'if available'; (4) omission of important cautions and warnings; (5) requirements for additional information; and (6) lack of consistency between procedures, panels and VDU displays. Overall, the task analyses identified a number of mismatches between task requirements and man–machine interfaces.
Ainsworth makes the point that the ergonomists were often better at identifying problems than at devising solutions, those which they proposed often being impractical, but that it is possible to achieve a mode of working in which these problems are communicated to the designers, who take them on board and come up with effective solutions.


14.40 Notation

Section 14.3
H(s)  operator transfer function
K  gain
τd  reaction time (s)
τI  compensatory lag time constant (s)
τL  lead time constant (s)
τN  neuromuscular lag time constant (s)

Section 14.27

Subsection 14.27.4
k  constant
Pb  basic human error probability
Pc  conditional human error probability

Subsection 14.27.11
n  number of alarms in group
Pi  probability of failure to initiate action in response to ith alarm
Pr  probability of failure to initiate action in response to a randomly selected alarm

Section 14.28
a, b  constants
HEP  human error probability
n  number of performance shaping factors
r  relevancy factor
SLI  Success Likelihood Index
w  quality weighting

Section 14.30
f  error factor
k1, k2  constants
m  median
q  quality
r  rank
rn  normalized rank
x  Success Likelihood Index
μ  mean response time
μr  adjusted median response time

Section 14.32
K1–K5  parameters
q  probability of error
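The notation for Sections 14.28 and 14.32 reflects two of the quantification methods discussed in this chapter: SLIM, in which the Success Likelihood Index (SLI) is a weighted sum of PSF ratings converted to an HEP through the calibration log10 HEP = a SLI + b, and TESEO, in which the probability of error q is the product of the five parameters K1 to K5. A minimal sketch follows; the weights, ratings, calibration constants and K values are illustrative assumptions, not tabulated values from either method.

```python
# Sketch of the SLIM calibration and the TESEO product model.
# All numerical inputs are illustrative assumptions.

def sli(weights, ratings):
    """Success Likelihood Index: normalized weighted sum of PSF ratings."""
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

def slim_hep(sli_value, a, b):
    """SLIM calibration: log10(HEP) = a * SLI + b."""
    return 10.0 ** (a * sli_value + b)

def teseo_hep(k1, k2, k3, k4, k5):
    """TESEO: probability of error q as the product of the five K parameters."""
    return k1 * k2 * k3 * k4 * k5

# Three PSFs with weights summing to 1; calibration constants chosen so that
# SLI = 0 gives HEP = 0.1 and SLI = 1 gives HEP = 0.001.
x = sli([0.5, 0.3, 0.2], [0.9, 0.6, 0.4])
print(f"SLI = {x:.2f}, HEP = {slim_hep(x, a=-2.0, b=-1.0):.4f}")
print(f"TESEO q = {teseo_hep(0.01, 0.5, 1.0, 1.0, 2.0):.3f}")
```

The calibration requires at least two tasks of known HEP to fix a and b; the anchor points assumed here (HEP 0.1 at SLI 0 and 0.001 at SLI 1) are purely for illustration.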

15 Emission and Dispersion

Contents

15.1 Emission 15/2
15.2 Two-phase Flow 15/14
15.3 Two-phase Flow: Fauske Models 15/27
15.4 Two-phase Flow: Leung Models 15/31
15.5 Vessel Depressurization 15/35
15.6 Pressure Relief Valves 15/40
15.7 Vessel Blowdown 15/44
15.8 Vessel Rupture 15/45
15.9 Pipeline Rupture 15/56
15.10 Vaporisation 15/58
15.11 Dispersion 15/70
15.12 Meteorology 15/77
15.13 Topography 15/101
15.14 Dispersion Modelling 15/102
15.15 Passive Dispersion 15/104
15.16 Passive Dispersion: Models 15/106
15.17 Passive Dispersion: Dispersion over Particular Surfaces 15/119
15.18 Passive Dispersion: Dispersion in Particular Conditions 15/123
15.19 Passive Dispersion: Dispersion Parameters 15/124
15.20 Dispersion of Jets and Plumes 15/136
15.21 Dispersion of Two-phase Flashing Jets 15/154
15.22 Dense Gas Dispersion 15/159
15.23 Dispersion of Dense Gas: Source Terms 15/163
15.24 Dispersion of Dense Gas: Models and Modelling 15/167
15.25 Dispersion of Dense Gas: Modified Conventional Models 15/171
15.26 Dispersion of Dense Gas: Van Ulden Model 15/171
15.27 Dispersion of Dense Gas: British Gas/Cremer and Warner Model 15/175
15.28 Dispersion of Dense Gas: DENZ and CRUNCH 15/178
15.29 Dispersion of Dense Gas: SIGMET 15/182
15.30 Dispersion of Dense Gas: SLAB and FEM3 15/184
15.31 Dispersion of Dense Gas: HEGADAS and Related Models 15/186
15.32 Dispersion of Dense Gas: DEGADIS 15/192
15.33 Dispersion of Dense Gas: SLUMP and HEAVYGAS 15/194
15.34 Dispersion of Dense Gas: Workbook Model 15/195
15.35 Dispersion of Dense Gas: DRIFT and Related Models 15/204
15.36 Dispersion of Dense Gas: Some Other Models and Reviews 15/205
15.37 Dispersion of Dense Gas: Field Trials 15/208
15.38 Dispersion of Dense Gas: Thorney Island Trials 15/223
15.39 Dispersion of Dense Gas: Physical Modelling 15/228
15.40 Dispersion of Dense Gas: Terrain, Obstructions and Buildings 15/243
15.41 Dispersion of Dense Gas: Validation and Comparison 15/252
15.42 Dispersion of Dense Gas: Particular Gases 15/257
15.43 Dispersion of Dense Gas: Plumes from Elevated Sources 15/265
15.44 Dispersion of Dense Gas: Plumes from Elevated Sources – PLUME 15/271
15.45 Concentration and Concentration Fluctuations 15/276
15.46 Flammable Gas Clouds 15/285
15.47 Toxic Gas Clouds 15/293
15.48 Dispersion over Short Distances 15/297
15.49 Hazard Ranges for Dispersion 15/300
15.50 Transformation and Removal Processes 15/302
15.51 Infiltration into Buildings 15/309
15.52 Source and Dispersion Modelling: CCPS Guidelines 15/313
15.53 Vapour Release Mitigation: Containment and Barriers 15/314
15.54 Vapour Cloud Mitigation: CCPS Guidelines 15/326
15.55 Fugitive Emissions 15/328
15.56 Leaks and Spillages 15/332
15.57 Notation 15/333
