HUMAN DETECTION AND DIAGNOSIS OF SYSTEM FAILURES
NATO CONFERENCE SERIES
I Ecology
II Systems Science
III Human Factors
IV Marine Sciences
V Air-Sea Interactions
VI Materials Science
Edited by
Jens Rasmussen
Risø National Laboratory
Roskilde, Denmark
and
William B. Rouse
University of Illinois
Urbana, Illinois
PREFACE
Jens Rasmussen
William B. Rouse
CONTENTS
Introduction 1
TRAINING
Chairman: A. Shepherd 541
Participants 695
A SET OF ISSUES
CHAIRMAN: D. L. PARKS
SECRETARY: A. WINTER
REAL LIFE PERSPECTIVES
Donald L. Parks
CHAIRMAN'S REVIEW
10 D. L. PARKS
REAL-LIFE PERSPECTIVES
Methods, Models, and Diagnostic Techniques
Fig. 1. Discussion overview: author, subject area, discussion vehicle, and tools/information presented.
DISCUSSION OVERVIEW
Sheridan
Thompson
Bond
Brooke
REFERENCES
Thomas B. Sheridan
20 T. B. SHERIDAN
2. Cause of Error
3. Classifying Errors
6. Correction of Error
8. Staged Responsibility
The data base for most types of machine error is large and
Figure: the supervisory controller, relating the human's actual motions and observations (M, S) to internal models — human (m, s) and computer (m', s').
1. Procedures
M0  motion to consult a procedure (or to modify it)
S0  what is read in procedure
m0  effort to remember procedure (or to modify memory)
s0  what is remembered in procedure

2. Controls
M1  motion to activate a specific control
S1  what is observed as to which and how control is activated
m1  making assumption of move to activate specific control
s1  decision of what would be observed if m1

3. System State
M2  motion to observe system state display
S2  what is observed from system state display
m2  alteration of m1 + s2 model to conform to M1 + S2 experience
s2  decision of how system state would change if m1

4. Consequences
M3  motion to observe alarms, consult with other persons
S3  what is observed from alarms, other persons
m3  alteration of m1 + s3 model to conform to M1 + S3 experience
s3  decision of what consequences would result if m1
actual and model systems and thereby identify the new parameters, small adjustments to the old parameters of the human
operator's internal model will not suffice. Other means will
be required, such as computer aiding. This is discussed in
the next section.
G_a(X_i X_j X_k) = G_b(X_l X_m X_n) = G_c(X_q X_r X_s)
determine what settings provide the best fit to recorded data from
the actual plant. Because of the complexity and non-linearity of
nuclear plants the choice of parameter variations for such
fast-time identification experiments must be guided by hints from
other sources as to which parameters may have gone awry. There are
far too many parameters to vary all of them in any systematic way.
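The hint-guided, fast-time identification experiment described above can be sketched as follows. The first-order plant model, parameter values, and the single suspect parameter are invented for illustration and are far simpler than any nuclear plant model:

```python
def plant_response(gain, lag, steps=50, dt=0.1):
    """Toy first-order plant, x' = (gain*u - x)/lag, driven by a unit step."""
    x, trace = 0.0, []
    for _ in range(steps):
        x += dt * (gain * 1.0 - x) / lag
        trace.append(x)
    return trace

def fit_error(candidate, recorded):
    """Sum of squared deviations between candidate model and recorded data."""
    return sum((m - r) ** 2 for m, r in zip(plant_response(*candidate), recorded))

# "Recorded" plant data; the true gain has drifted from its nominal 2.0 to 2.6.
recorded = plant_response(gain=2.6, lag=1.0)

# Hints from other sources say the gain is the suspect parameter, so the
# fast-time experiments vary only the gain rather than every parameter.
candidates = [(g / 10.0, 1.0) for g in range(10, 41)]  # gains 1.0 .. 4.0
best = min(candidates, key=lambda c: fit_error(c, recorded))
print(best)  # (2.6, 1.0)
```

With many parameters the candidate grid explodes combinatorially, which is exactly why the text insists the search be guided by hints about which parameters may have gone awry.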
PLANT
AGGREGATED MODEL
DISAGGREGATED MODEL
CONCLUSION
REFERENCES
INTRODUCTION
37
38 D. A. THOMPSON
Figure 1. British Aerospace's advanced flight deck design simulator is built around seven CRTs portraying flight, engine and systems information (North, 1979).
REFERENCES
Van Gigch, J. P., "A Model Used for Measuring the Information
Processing Rates and Mental Load of Complex Activities",
J. Can. Operational Res. Soc. 8, 116, 1970.
Veitengruber, J. E., and G. P. Boucek, Collation and Analysis of
Alerting Systems Data, Summary Program Plan, FAA Contract
DOT-FA73WA-3233 (Mod 11), Document D6-44199, Boeing Commer-
cial Airplane Company, Seattle, Washington, October, 1976.
Vreuls, D., et al. "All Weather Landing Flight Director and Fault
Warning Display Simulator Studies", Human Factors Society
5th Annual Symposium Proceedings, Los Angeles, CA, June,
1968, (Western Periodicals, Co.), pp. 79-93.
SHIP NAVIGATIONAL FAILURE DETECTION AND DIAGNOSIS
INTRODUCTION
x)
The author is strictly responsible for this paper; it does not
necessarily represent U. S. Coast Guard official positions or
policy.
50 J. S. GARDENIER
Ships are high technology systems, but they have only a few
rather simple navigational sensors, controls, and indicators. With
a few exceptions, these are sufficient to allow knowledgeable,
skilled, and alert crews and pilots to navigate ships safely
wherever they need to go in the world's waters. Despite this,
ships are lost worldwide at the rate of about one a day, and less
disastrous casualties occur more frequently in United States
waters alone (U.S. Coast Guard, annual).
TASK: Operates the radar and fathometer in order to detect and identify navigational hazards and aids to navigation.

• Selects the optimum combination of range scales, sector search, intensity, etc., for the most accurate and prompt detection of navigational hazards and aids to navigation.
• Accurately detects various aids to navigation and navigational hazards on radar.
• Accurately detects any navigational hazards on fathometer.

• How to manipulate radar unit, i.e., vary range scales, sector search selector, intensity, range and bearing circles and lines, true or relative motion mode, etc.
• How to manipulate fathometer unit, i.e., vary depth scale, intensity, etc.
• How to detect navigational hazards and aids to navigation on radar and fathometer.
• How to identify navigational hazards and aids to navigation on radar and fathometer.

Numerical:
• In 100% of the cases, all necessary navigational aids and all navigational hazards are detected.

Specific:
• Knowledge of navigational aids along track, or man-made and geophysical characteristics which present good radar targets.
• Knowledge of special hazards known along route which present radar targets.
• Knowledge of individual ship's particular radar unit.
• Knowledge of individual ship's particular fathometer unit.

Fig. 1. Radar/fathometer lookout task.
NAVIGATIONAL FAILURE DETECTION 53
A Ship Collision
on the bridge with the Navy helmsman and numerous other personnel.
The local pilot was to testify later that the large number
(seventeen on duty plus some unspecified number of others
off-duty) of naval personnel on the bridge hampered his view of
indicators and the external scene.
Let us recap the errors: the L.Y. SPEAR pilot let the helm
remain on 10° right rudder too long at an inappropriate time; he
may have forgotten that order was operative. He claimed later that
a shear current pulled the ship to the right. The rudder setting,
however, seems sufficient to explain the rightward movement of the
ship especially at the 18-20 knot speed the pilot had ordered.
Even if a shear current were experienced, as is plausible, he
should have allowed and compensated for such currents. The pilot
also failed to compensate for stern swing until reminded to do so
by the C.O.
Figure 3. Structure of the observer/controller/decision model. The supervised system (input u_c(k), system dynamics, system state x(k) including the automatic controller and the display interface, disturbances d(k), observed output y(k)) is coupled to an observer/controller/decision-making model of the human supervisor, comprising a control decision part, an observation decision part, a dynamic observer producing the state estimate z(k) with error variance Q(k), and observation noises v_x(k) and v_o(k).
A POINT OF VIEW
REFERENCES
A SUCCESS STORY
INTRODUCTION
But many success stories can be cited too. There are several
areas wherein complicated equipments are well maintained, and are
quickly returned to service when they fail. To name just five of
these areas, consider broadcasting companies, the TV repair shops,
NASA space instrumentation program, physical and chemical labora-
tory operations, and large commercial computer centers. In all
76 N. A. BOND
ADMINISTRATIVE CONSIDERATIONS
FINDING TROUBLES
RECAPITULATION
REFERENCES
Bryan, G. L., Bond, N. A., Jr., LaPorte, H. R., Jr., and Hoffman,
L., 1956, "Electronics Troubleshooting: A Behavioral
Analysis," University of Southern California, Los Angeles,
California.
Csikszentmihalyi, M., 1975, "Beyond Boredom and Anxiety", Jossey-
Bass, San Francisco, California.
Davis, L. E., and Taylor, J. C., 1979, "Design of Jobs", Goodyear,
Santa Monica, California.
Foley, J. P., 1978, "Executive Summary Concerning the Impact of
Advanced Maintenance Data and Task-oriented Training
Technologies in Maintenance, Personnel, and Training
Systems", Wright-Patterson Air Force Base, Dayton, Ohio.
TROUBLESHOOTING: A SUCCESS STORY 85
J.B. Brooke
INTRODUCTION
88 J. B. BROOKE
TYPES OF ERROR
a) Syntactic errors.
could be done. Thus the selection of job aids that are considered
is necessarily biased towards those where some assessment has been
done. Furthermore, the author is primarily a psychologist and not
a computer scientist; thus the discussion, especially of the
direct job aids, is conducted in general terms, since the nature
of hardware and software is so volatile in the present
technological climate. This may mean that descriptions of certain
types of aid do not fully describe facilities that are available.
However, in justification of this approach, many programmers are
now working, and are likely to continue to work under the types of
operating system software described, at least for the near future.
a) Syntax Checking
d) Testing Packages
The Stanford BIP system (Barr, Beard and Atkinson, 1976) has
already been mentioned in other contexts. This is a tutorial
system which teaches students BASIC programming by setting tasks
and comparing their solutions to model solutions. Among its
debugging features are error diagnostics which can be expanded on
request through several levels of detail (finally providing a
reference in a book!). Hints on the correct way to solve problems,
based on the model solutions are given, if the student requests
them. A facility is also provided that interactively traces the
execution of the program written by the student. This latter
feature is rather different to the interactive debugging tools
mentioned earlier in that a portion of the text of the program is
displayed and a pointer indicates the instruction currently being
executed. Up to six of the program variables can be selected and
their current values displayed simultaneously with the program
text. Iterative loops and conditional jumps are graphically
represented by an arrow pointing from the end of the loop or the
conditional statement to the destination instruction.
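A trace facility of this kind — a pointer to the instruction being executed plus the current values of a handful of selected variables — can be sketched in a few lines of modern Python with `sys.settrace`. This is an illustrative reconstruction of the idea, not BIP's implementation:

```python
import sys

WATCH = ("i", "total")          # a handful of selected variables, as in BIP

trace_log = []

def tracer(frame, event, arg):
    """Record the line number being executed plus the current values of the
    watched variables, mimicking a pointer-plus-variables trace display."""
    if event == "line" and frame.f_code.co_name == "demo":
        shown = {v: frame.f_locals[v] for v in WATCH if v in frame.f_locals}
        trace_log.append((frame.f_lineno, shown))
    return tracer

def demo():
    total = 0
    for i in range(3):          # loop jumps show up as repeated line events
        total += i
    return total

sys.settrace(tracer)
result = demo()
sys.settrace(None)
print(result, len(trace_log) > 0)   # 3 True
```

A teaching system would render `trace_log` graphically — the arrow for loops and conditional jumps described above — rather than as a raw list.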
designing and debugging the user's programs. Thus the user can
avoid errors in the design stage by consulting SPADE-0 and can ask
for help when errors are made.
a) Language Design
b) Writing Techniques
d) Diagrammatic Representations
Figure 1

Figure: schematic of a petrol pump — input switches, output switches, buttons, selector valve, price-per-unit settings, and flow and finished-status signals connected to an information processing unit.
Figure: flowchart fragments — repeated decision boxes of the form "Do contents of memory pointed to by R1 = 0?" with YES/NO branches and a final test on the contents of R3.
ACKNOWLEDGEMENTS
REFERENCES
INTRODUCTION
Notice:

This paper will also appear as a chapter in a forthcoming book
entitled: What Every Engineer Should Know About Human Factors
Engineering, to be published by Marcel Dekker, Inc., New York and
Basel. Permission of the authors and Marcel Dekker, Inc. to
include this material in this volume is appreciated.
112 J. M. CHRISTENSEN AND J. M. HOWARD
LCC = IC + Σ (i = 1 to L) OC_i / (1 + d)^i

where,
IC = initial cost
OC = operating cost
L  = life in years
d  = discount rate
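In code, the discounted life-cycle cost formula reads as follows (the equipment figures are hypothetical):

```python
def life_cycle_cost(initial_cost, operating_cost, life_years, discount_rate):
    """Initial cost plus the discounted stream of annual operating costs."""
    discounted = sum(operating_cost / (1.0 + discount_rate) ** i
                     for i in range(1, life_years + 1))
    return initial_cost + discounted

# Hypothetical equipment: $10,000 to buy, $1,500/year to run, 8-year life, 10%.
print(round(life_cycle_cost(10_000, 1_500, 8, 0.10), 2))  # 18002.39
```

The discounting makes the design trade visible: a maintainability feature that raises initial cost can still lower life-cycle cost if it trims the annual operating term.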
FIELD EXPERIENCE IN MAINTENANCE 113
Figure: maintenance-related error rates over a representative equipment life cycle — assembly errors before acceptance, installation errors around acceptance, and maintenance errors dominating from acceptance to phase-out.
Cause                                      % of Total
Loose nuts/fittings                            14
Incorrect installation                         28
Dials and controls (mis-read, mis-set)         38
Inaccessibility                                 3
Misc.                                          17
                                              100
The Burrows figures agree very well with those given to the
undersigned by a maintenance officer in the ground forces who
states that 40 percent of the tank engines could have been
repaired locally rather than sent back to a depot.
Installation Errors
Environmental Factors
a. more experience
b. higher aptitude
c. greater emotional stability
d. fewer reports of fatigue
e. greater satisfaction with the work group
f. higher morale
a. years of experience
b. time in career field
c. ability to handle responsibility
d. morale
a. anxiety level
b. fatigue symptoms
Maintenance Records
Troubleshooting Strategies
SUMMARY
REFERENCES
Anon, "One Way to Minimize NC Downtime", Iron Age, 222, May, 1979.
Blandow, R.W., "A Maintenance Overview of CAM Technology", Manu-
facturing Engineering, July 1979.
Christensen, J.M., "Human Factors Considerations in Design for
Reliability and Maintainability", In Pew, R.W., "Human Fac-
tors Engineering", Course No. 7936, Engineering Summer Con-
ferences, The University of Michigan, 1979.
Crawford, B.M. and Altman, J.W., "Designing for Maintainability"
in VanCott, H.P. and Kinkade, R.G. (eds.), Human Engi-
neering Guide to Equipment Design, Washington, D.C.: U.S.
Government Printing Office, 1972.
Feineman, G., "How To Live with Reliability Engineers", Spectrum,
Spring, 1978.
Geise, J. and Haller, W.W. (eds.), Maintainability Engineering,
Martin-Marietta Corporation and Duke University, 1965.
Goldstein, D.B. and Rosenfeld, A.T., Energy Conservation in Home
Appliances Through Comparison Shopping: Facts and Fact
Sheets, LBL-5236, Energy Extension Services, State of
California, 1977.
King, J.B. (Chm.), Safety Recommendations A-79-98 through 105,
National Transportation Safety Board, Washington, D.C.,
December 21, 1979.
Kirkman, J., "Controlled English Avoids Multi-Translations",
Industrial Engineering, February, 1978.
Leuba, H.R., "Maintainability Prediction - The Next Generation",
Proceedings of the Spring Session on Reliability, Main-
tainability, etc., IIIE, Boston, 1967.
1. Accessibility (general)
2. Accessibility (specific)
2.1. Access openings
2.1.1. Sufficient number
2.1.2. Sufficient size (one large versus two small)
2.1.3. Not hidden by other components
2.1.4. Same plane as related controls and displays
2.1.5. Convenient height, reach, posture, etc.
requirements
2.1.6. When required, safe, sturdy, convenient stands,
ladders, etc.
2.1.7. Convenient removal (drawers, hinged units,
etc.); power assists for removal of units over
100 pounds (or availability of two or more
persons).
2.2. Access covers
2.2.1. Hinged or tongue and slot
2.2.2. Easily opened
2.2.3. Easily held open
2.2.4. Positive indication of improper securing
3. Packaging
3.1. Clear identification (number codes, color codes,
etc.)
3.2. Convenient size, shape, weight, etc.
3.3. Convenient number of units/package
3.4. Handles for units over 10 pounds
3.5. Logical, consistent flow, if troubleshooting
important
3.6. Component grouping, if mass replacement used
3.7. Protection from vibration, heat, cold, dust, etc.
3.8. Easily removed and replaced (plug-in, rollers,
roll-out drawers, etc.)
3.9. Error-free replacement (guides, keys, alignment
pins, etc.)
3.10 Easy inspection
3.11 Easy servicing (tightening, lubricating, etc.)
4. Connectors
4.1. Labelled
4.2. Physically accessible
4.3. Visually accessible
4.4. Hand-operated (at least no special tools)
4.5. Quick disconnect
4.6. Screw terminals (rather than solder)
4.7. U-lugs rather than O-lugs
4.8. Alignment aids (keys, colored strips, asymmetry,
etc.)
4.9. Unique (prevent mismating by differences in
color, size, number of pins, pattern of pins,
etc.)
4.10 Receptacle "hot", plugs "cold"
4.11 Self-locking
5. Conductors
5.1. Labelled or coded
5.2. Avoid sharp edges, etc. in routing
5.3. Automatic rewind, if appropriate
5.4. Out of the way (clamps, tie-downs, etc.)
5.5. Ample length without stretching
FIELD EXPERIENCE IN MAINTENANCE 129
6. Displays
6.1. Use of characteristic odors, sounds, etc.
6.2. Consider maintenance display requirements inde-
pendently of, and co-equally with, operator dis-
play requirements
6.3. See also MIL-STD-1472B
7. Controls
7.1. Consider maintenance control requirements inde-
pendently of, and co-equally with, operator con-
trol requirements
7.2. Non-shared controls under quick access covers
7.3. No special tools
7.4. See also MIL-STD-1472B
8.3. Tools
8.3.1. Minimum of different kinds of tools
8.3.2. Few (preferably none) special tools
8.3.3. Adequate grips
8.3.4. Insulated handles
8.3.5. Easily positioned
9. Maintenance Procedures
9.1. All materials consistent with knowledge and
skills of users
9.2. Written supplemented with diagrams, schematics,
etc.
9.3. Brief but clear
9.4. Procedures that give unambiguous results
9.5. Cross-checks where possible
9.6. Realistic tolerances
A. Requirements Phase
E. Design Verification
CHAIRMAN: T. B. SHERIDAN
SECRETARY: A. MICHAELS
THEORIES AND MODELS
Thomas B. Sheridan
138 T. B. SHERIDAN
some defense. One was the contention that Kantowitz and Hanson
miss the point of optimal models, that any precise theory carries
with it a normative model of how people should behave (if they
conform to that theoretical norm) and experimental comparisons
naturally show some discrepancy from that precisely specified
theory. (This is in contrast to the descriptive model, of course,
wherein the best model is the simplest yet reasonable summary of
the data). The optimal control model is not quite simple, but it
certainly is robust.
INTRODUCTION
144 A. R. EPHRATH AND L. R. YOUNG
Not so, may say those who favour keeping the human in the
control loop. Systems with potentially-catastrophic failure modes
are normally designed with a high degree of reliability; during
that extremely long mean-time-between-failures the human, with
nothing to do but monitor a perfectly-normal system, may become so
bored and complacent that he might miss the rare failure when it
does occur. Furthermore, even in the event that a failure is
correctly detected and identified, the operator needs to switch
from monitoring to manual control to assume the role of the
malfunctioning automatic device. Such mode-shifts are rarely
instantaneous, especially after extremely long periods of
monitoring. Humans need time to adapt to the role of an active
control element; in an emergency, that time may not be available.
edge) from the control loop and observing his own error signal as
recorded during the first set of experiments. It was verified in
post-experiment debriefings that none of the subjects was aware of
this fact.
MULTI-LOOP CONTROL
This experiment was carried out in a static cockpit
simulator and utilized fifteen professional airline pilots as
subjects. The simulator was a mock-up of the captain's station in
a Boeing transport aircraft, and it was programmed to duplicate
the dynamics of a large transport aircraft in the landing-approach
flight envelope.
d) Fully manual.
a) No wind.
a) No failure.
Figure 1: Workload index for the four participation modes, P1 (fully automatic) through P4 (manual).
MONITORING AIRCRAFT CONTROL FAILURES 149
Figure 2: Detection Times of Longitudinal (Pitch) Failures — detection time (seconds) plotted against workload index; ○ automatic, △ manual.
Figure: detection times (seconds) plotted against workload index (10-100); ○ automatic, △ manual.
                              Gust Level
Participation Mode       1       2       3     Overall
Monitor                 0.      0.      0.      0.
Manual Yaw              0.      0.      0.      0.
Manual Pitch           12.5    14.3    12.5    13.0
Manual Control         12.5    14.3    37.5    21.7

                              Gust Level
Participation Mode       1       2       3     Overall
Monitor                 0.      0.      0.      0.
Manual Yaw             37.5    14.3    37.5    30.4
Manual Pitch            0.      0.      0.      0.
Manual Control         14.3     0.     14.3     9.1
REFERENCES
INTRODUCTION
156 C. D. WICKENS AND C. KESSEL
Figure: block diagram of the tracking task — disturbance and noise enter a visual display of error; the human operator's output (plus remnant) drives the dynamic system through the control response; an AV-mode switch selects the configuration.
The extent to which each task varied with changes in its own
demand is reflected by coherence measures computed between primary
demand and primary performance. The mean coherence between
detection performance and detection difficulty was high (ρ² =
.82). That between tracking performance and difficulty was
considerably lower (ρ² = .40).
REFERENCES
Renwick E. Curry
INTRODUCTION
172 R. E. CURRY
"The real concern here is not that the operators "did not"
but rather "why did they not" follow the written procedures.
First, they had a poor direct indication of leakage, by
design. While there were other indirect means and observa-
tions by which they could have inferred leakage from these
valves, it is not necessarily reasonable to expect the
required analyses and deductions under conditions that then
HUMAN FAULT DETECTION 173
Figure: FAA instrument approach chart (VOR/DME RWY 13, Salinas Municipal, California; AL-363), with approach, tower and ground control frequencies, radar vectoring, missed-approach procedure (climbing right turn to 2000, then 4000 on SNS R-275 to MOVER Int/DME and hold), and circling minima, followed by the standard legend for instrument approach procedure charts (planview symbols, obstructions, procedure turn, holding patterns, radio aids, final approach, missed approach point, aerodrome sketch).
(1)
EXAMPLE CALCULATION
Figure: pressure rising over time toward the thresholds P* and Pf.
P = P + v(t) (7)
z(t) (8)
Experience Level
pressure (r C r 2) Ie; 2 2
Results
Figure: likelihood functions at P* and Pf for the experienced and the inexperienced monitor.
rate). The plots were made under the assumption that all
indicators are equally likely to be observed. If, for example, the
annunciator is not readily visible from the operator's station, or
if he is wearing cushioned shoes, then these indicators will
contribute less to the averaged observed likelihood function.
Also, correct instructions to the inexperienced operator about the
vibration and the pressure range of the cooling system activation
will increase his effectiveness.
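The idea of an averaged likelihood function in which poorly observed indicators carry less weight can be sketched as follows. The indicator names, probabilities, and weights are hypothetical, invented for illustration rather than taken from the example above:

```python
import math

def averaged_log_likelihood_ratio(indicators, weights):
    """Weighted average of per-indicator log likelihood ratios
    log[p(obs | failure) / p(obs | normal)]; an indicator the operator
    rarely observes gets a small weight and contributes little."""
    total_w = sum(weights.values())
    return sum(weights[name] * math.log(p_fail / p_norm)
               for name, (p_fail, p_norm) in indicators.items()) / total_w

# Hypothetical indicator readings for a cooling-system failure.
indicators = {
    "annunciator": (0.9, 0.1),   # strongly suggests failure
    "vibration":   (0.6, 0.4),
    "gauge":       (0.7, 0.3),
}
equal = {"annunciator": 1.0, "vibration": 1.0, "gauge": 1.0}
# Annunciator not readily visible from the operator's station:
restricted = {"annunciator": 0.2, "vibration": 1.0, "gauge": 1.0}

print(averaged_log_likelihood_ratio(indicators, equal) >
      averaged_log_likelihood_ratio(indicators, restricted))   # True
```

Down-weighting the most diagnostic indicator lowers the averaged evidence for failure, which is the effect the paragraph above describes.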
SUMMARY
REFERENCES
Neville Moray
University of Stirling
Scotland
INTRODUCTION
186 N. MORAY
would rather have turned off all the warning lights, since they
were providing no useful diagnostic information.
Given these properties, how does man use attention to detect and
diagnose errors?
SAMPLING BEHAVIOUR
where t is the time since the last observation, and the constants
depend upon the number of items to be remembered, and the amount
and type of training of the observer; and Loftus, 1979, reports
similar results for the recall of ATC messages. Hence if we assume
with Senders that man samples in order to reduce his uncertainty,
low bandwidth sources will tend to be sampled at intervals shorter
than the Nyquist interval due to the loss of information from
memory.
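The bandwidth argument can be sketched numerically. The exponential forgetting term and its half-life threshold below are illustrative assumptions of this sketch, not the memory formula quoted above:

```python
import math

def sampling_interval(bandwidth_hz, forgetting_rate=0.0):
    """Nyquist interval 1/(2W), shortened when stored information decays.
    The forgetting model (resample once half the observation is lost from
    memory, with exponential decay) is an illustrative assumption."""
    nyquist = 1.0 / (2.0 * bandwidth_hz)
    if forgetting_rate <= 0.0:
        return nyquist
    loss_time = -math.log(0.5) / forgetting_rate   # time to lose half
    return min(nyquist, loss_time)

# A slow 0.05 Hz source has a 10 s Nyquist interval, but with forgetting
# the observer returns to it sooner than bandwidth alone would require.
print(sampling_interval(0.05))                                # 10.0
print(sampling_interval(0.05, forgetting_rate=0.2) < 10.0)    # True
```

This reproduces the qualitative claim: low-bandwidth sources end up sampled at intervals shorter than the Nyquist interval because memory, not the signal, becomes the limiting factor.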
DATA ACQUISITION
DECISION CRITERIA
2. Let the system monitor the monitor. With modern compu-
ters, and especially with displays where the human monitor must
call up the variables which he wishes to examine, the answer to
the classical question, "Quis custodiet ipsos custodes?" is at
last available. The system should keep a log of the intervals at
which its variables are inspected, and should prompt the human if
he has ignored some state variable for too long a time.
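A minimal sketch of such a log-and-prompt scheme follows. The variable names, neglect limit, and the injected fake clock are hypothetical; a real implementation would use a monotonic system clock and hook `record_inspection` into the display call-up commands:

```python
class SamplingWatchdog:
    """Log the times at which each state variable is called up for
    inspection and name any variable ignored longer than a limit."""
    def __init__(self, variables, max_neglect, clock):
        self.clock = clock
        self.last_seen = {v: clock() for v in variables}
        self.max_neglect = max_neglect

    def record_inspection(self, variable):
        self.last_seen[variable] = self.clock()

    def neglected(self):
        now = self.clock()
        return [v for v, t in self.last_seen.items()
                if now - t > self.max_neglect]

# A fake clock keeps the sketch deterministic.
t = {"now": 0.0}
clock = lambda: t["now"]
watchdog = SamplingWatchdog(["reactor_level", "feed_flow"],
                            max_neglect=60.0, clock=clock)
t["now"] = 90.0
watchdog.record_inspection("reactor_level")
t["now"] = 100.0
print(watchdog.neglected())        # ['feed_flow']
```

The prompt itself could be as simple as highlighting the neglected variable on the display the next time the operator calls anything up.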
REFERENCES
Sheridan, T.B., 1970, "On how often the supervisor should sample."
IEEE Trans. SSC-6, 140-145.
Shiffrin, R.M., and Schneider, W., 1977, "Controlled and automatic
human information processing: II." Psychological Review, 84, 127-190.
Taylor, F.J., 1975, "Finite fading memory filtering." IEEE Trans.
SMC-5, 134-137.
Wiener, E.L., 1977, "Controlled flight into terrain accidents:
System-induced errors." Human Factors, 19, 171-181.
EXPERIMENTAL STUDIES AND MATHEMATICAL MODELS OF HUMAN
William B. Rouse
INTRODUCTION
200 W. B. ROUSE
question, various bits and pieces of an answer have emerged and
will be briefly discussed here.
** 22,30 = 1
** 23,30 = 1
** 30,38 = 1
** 31,38 = 0
** 24,31 = 1
** 25,31 = 1
* FAILURE? 31
RIGHT!

* 20,25 = 1
* 13,24 = 0
* 15,13 = 0
* 8,15 = 0
* 1,25 = 0
* FAILURE?
RIGHT!
EXPERIMENTS
Experiment One
Experiment Two
Experiment Three
Experiment Four
Experiment Five
them rather than simply accepting the aid as a means of making the
task easy).
Experiment Six
Experiment Seven
Experiment Eight
Rule-Based Models
While the fuzzy set model has proven useful, one wonders if
an even simpler explanation of human problem solving performance
would not be satisfactory. With this goal in mind, a second type
of model has been developed (Pellegrino, 1979; Rouse, Rouse, and
Pellegrino, 1980). It is based on a fairly simple idea. Namely, it
starts with the assumption that fault diagnosis involves the use
of a set of rules-of-thumb (or heuristics) from which the human
selects, using some type of priority structure.
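One way to sketch such a rule-based model in code — the specific heuristics, symptom structure, and priority order below are invented for illustration, not taken from the cited model:

```python
def diagnose(symptoms, rules):
    """Apply the first applicable rule-of-thumb in priority order;
    each rule is a (name, applicability test, suggested action) triple."""
    for name, applicable, suggestion in rules:
        if applicable(symptoms):
            return name, suggestion(symptoms)
    return "give-up", None

# Hypothetical heuristics for a network fault-diagnosis task, listed in
# the order the model assumes the human prefers them.
rules = [
    ("trace-back", lambda s: bool(s["bad_outputs"]),
     lambda s: f"test the component feeding output {min(s['bad_outputs'])}"),
    ("check-power", lambda s: s["power_suspect"],
     lambda s: "test the power supply"),
]

symptoms = {"bad_outputs": [31, 38], "power_suspect": True}
print(diagnose(symptoms, rules))
# ('trace-back', 'test the component feeding output 31')
```

Fitting such a model then amounts to inferring which rules a subject uses, and in what priority order, from the sequence of tests the subject actually makes.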
EXPERIMENTAL STUDIES AND MATHEMATICAL MODELS 213
CONCLUSIONS
ACKNOWLEDGEMENTS
REFERENCES
Joseph G. Wohl
INTRODUCTION
BACKGROUND
The basis for this paper arose out of two separate but
related activities. The first summarized the results of experimen-
tal studies on the effects of packaging design on equipment repair
time (Wohl 1961). These studies indicated that repair times taken
under laboratory conditions were exponentially distributed, in
contrast to data for many other equipments taken under field
conditions, which were found to be more or less lognormally
distributed. One study in which repair time data for the same
equipment (an FPS-20 radar) were taken in both laboratory and
field environments (Kennedy 1960) indicated an exponential
distribution for the laboratory data but a non-exponential
218 J.G. WOHL
distribution for the field data. This work suggested that a model
of the interaction process between man and machine in a mainten-
ance situation might be developed. It also indicated that such a
model would have to account for differences between maintenance
environments.
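The laboratory/field contrast can be illustrated numerically: synthetic lognormally distributed "field" repair times are fit better by a lognormal than by an exponential under maximum likelihood. This is a sketch with invented parameters, not a reanalysis of the cited data:

```python
import math, random

def exp_loglik(data):
    """Log likelihood of the maximum-likelihood exponential fit (rate = 1/mean)."""
    rate = len(data) / sum(data)
    return sum(math.log(rate) - rate * t for t in data)

def lognorm_loglik(data):
    """Log likelihood of the maximum-likelihood lognormal fit,
    via the mean and variance of log t."""
    logs = [math.log(t) for t in data]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return sum(-math.log(t * math.sqrt(2 * math.pi * var))
               - (math.log(t) - mu) ** 2 / (2 * var) for t in data)

# Synthetic "field" repair times drawn from a lognormal distribution:
random.seed(1)
field = [random.lognormvariate(0.5, 1.0) for _ in range(500)]
print(lognorm_loglik(field) > exp_loglik(field))   # True
```

Run the same comparison on exponentially generated "laboratory" times and the ordering reverses, mirroring the distributional difference the paragraph describes.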
DISCUSSION
Figure: cumulative repair-time distribution for Equipment "A" (r = 282 repairs, MTR = 3.24 hr), plotted against repair time t (0.1 to 100 hours) on a logarithmic scale.
within 2 hours. Similar results have been found to hold for almost
all repair time data taken under field conditions, with one
exception; namely, flight-line maintenance of highly modularized
airborne equipment for which α = 1.0.
INITIAL HYPOTHESIS
Figure: fragment of a circuit schematic (capacitor C5, resistor R2, lead to loudspeaker).
Table 1
Frequency Distributions of Component Lead Density (CLD) and
Junction Point Lead Density (JPLD) for Two Equipments

      Equipment 1                  Equipment 2
   CLD         JPLD            CLD            JPLD
  2   15      2   11          2  5926        2  1370
  3    5      3    3          3  1278        3   951
  4    0      4    2          4     0        4   660
  5    2      5    1          5   210        5   458
  6    0      6    0          6   786        6   318
  7    0                      7   546        7+   31
  8    0
  9    0
 10    0
 11    1
Total = 22   Total = 18      Total = 8746   Total = 3788
N = 2.5      M+1 = 3.05      N = 2.5        M+1 = 3.28
I = NM = 5.12                I = NM = 5.70
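The indices quoted under the table follow from I = N·M, the table listing M + 1 rather than M; a quick arithmetic check:

```python
def complexity_index(N, M_plus_1):
    """Complexity index I = N * M, where the table lists M + 1."""
    return N * (M_plus_1 - 1)

print(round(complexity_index(2.5, 3.05), 2))  # 5.12 (first equipment)
print(round(complexity_index(2.5, 3.28), 2))  # 5.7  (second equipment)
```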
TEST OF HYPOTHESIS
IMPROVED HYPOTHESIS
A PREDICTIVE THEORY 225
Figure: repair-time data plotted against F(t) as computed from the improved hypothesis (I = 5.7, λ = 3.33 hr⁻¹, α = 0.85); repair time t from 0.1 to 100 hours on a logarithmic scale.
Curve 1: logical packaging of radar simulator, laboratory environment.
Curve 2: standard packaging of radar simulator, laboratory environment.
Curve 3: FPS-20 radar, laboratory environment.
Curve 4: FPS-20 radar, field environment.
Parameter values shown include λ = 13.3 and 4.0 hr⁻¹, τ = 0.23, 0.85, and 1.93 hr, I = 5.7, and α = 0.85; all parameter values other than the circled ones are estimates.

Figure 4. CFD Data for Radar Simulator and for FPS-20 Radar
Plotted Against F(t) as Computed from Improved
Hypothesis (Appendix C)
(1) For a = 1,
T =
T = ∞
(3) For 1 > a > (~)
I
1
2 -1 - 1 + a]
-a (2 - a)1n [ I a - a1n1 - 1a(1 - a)(1 - t)
1 2] (1 _ a)2
- -)
I
It is clear from cases (1) and (3) above that the general
relationship
T = G(I, a)
may be useful in predicting T for various types of equipment under
both laboratory and field conditions.
Table 2

T = mean active repair time, hours
τ = average test time per component, hours (= 1/λ)
I = complexity index
λ = average diagnostic rate per component, hr⁻¹
F = data taken under field conditions
L = data taken under laboratory conditions
CONCLUSION
ACKNOWLEDGEMENT
REFERENCES
Berndt Brehmer
Department of Psychology
University of Uppsala
Box 227
S-751 04 Uppsala, Sweden
232 B. BREHMER
the subject will, of course, not detect that his judgments do not
follow the rules he intends to use (Brehmer, Hagafors, &
Johansson, 1980).
REFERENCES
Jens Rasmussen
INTRODUCTION
242 J. RASMUSSEN
Emotional state & biological history

must be found. Descriptions of human mental
functions typically depend on situation
analysis and information process models.
Descriptions of subjective values and pref-
erences typically depend on factor and scal-
ing analysis and emotional state models.
Diagram: diagnostic search strategies. Topographic search identifies paths or fields; symptomatic search proceeds by pattern recognition, by a data-driven search in a network described in terms of cause, effect, event, state, action, etc., or by tactical rules; a search strategy generates hypotheses, which are accepted or, if rejected, returned to the search strategy or passed to other search strategies.
CONCLUSION
REFERENCES
Detection — Operator's task and strategy: verification of automatic safety actions and of overall plant state; recognition in direct relation to standard/reference values and decision tables. Computer's support of operator: presentation of patterns of critical variables as visually perceptible patterns. Computer's automatic task and strategy: monitoring of critical variables; automatic actions according to stored decision tables; simulation of safety action sequences and related state patterns.

Plant evaluation — Operator: identify endangered critical variables. Computer support: initiate operator's attention and action. Computer: monitoring of measured variables; derivation of "normal" reference data; detection of discrepancies.

Identification — Operator: identify disturbed function, e.g., by topographic search in flow structures. Computer support: guide operators by cues about informative displays to attend to; present maps of flow paths with the state of flows related to normal, for direct visual search. Computer: automatic topographic search in mass/energy balance structures for identification of faulty/disturbed balances; conversion of all measured information into flow and level information related to balances.

Correction — Operator: select proper control facility to counteract disturbance. Computer support: indicate possible control points on flow maps; identify momentarily not-used paths. Computer: simulate system's response; support verification of decisions.

Table 2. Illustrative example of operator and computer roles during different
diagnostic task situations in a process plant control room.
PROCESS PLANT DIAGNOSIS 257
Lisanne Bainbridge
Department of Psychology
University College London
London WC1E 6BT, England
INTRODUCTION
259
260 L. BAINBRIDGE
Future Targets
Problem Solving
On-Line Behaviour
the end of each of the control choice routines. One might model
this by returning, at the end of each routine, to some "executive"
which decides what would be the most effective way for the
operator to spend the next few moments. In the unacceptable-power-
-usage context there is a choice between refining the action
choice, sampling the power usage, or making a control action, and
the choice between these is a function of the urgency of the
control state at the last sample. Utility "calculations" are
presumably used in these "executive" decisions, but play only a
partial role within a more complex structure.
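A minimal sketch may make this "executive" idea concrete. The urgency thresholds and activity names below are purely illustrative assumptions; the chapter gives no numbers, only the three alternatives and their dependence on the urgency of the control state at the last sample.

```python
# Hedged sketch (not Bainbridge's model): at the end of each control-choice
# routine, an "executive" picks the next activity from the urgency of the
# control state at the last sample. Thresholds are invented for illustration.

def executive(urgency):
    """Return the next activity for an urgency value in [0, 1]."""
    if urgency > 0.7:
        return "make control action"    # act now; no time to refine
    elif urgency > 0.3:
        return "sample power usage"     # check whether the state has drifted
    else:
        return "refine action choice"   # spare capacity: improve the plan
```

The utility "calculation" here is degenerate (a threshold comparison), which is consistent with the point in the text: utilities play only a partial role inside a larger decision structure.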
Evaluation
Conclusion
INPUT PROCESSES
have not received much accurate analysis, although there are many
sensible design recommendations, so the discussion here will
concentrate on concepts which do appear in several models, and on
comparing these concepts with actual operator behaviour, to
suggest aspects which need to be included in a fuller understand-
ing of how the process operator takes in information.
Summary
(These extracts come from the same operator in the same session).
This suggests that predicting and comparing actions may be done
primarily during the development of new behaviour in unfamiliar
situations, which would occur particularly during learning, or
when something has gone wrong with the process. Here is an extract
from some operators having difficulty with starting up a turbine
(Rasmussen and Goodstein, personal communication):
Summary
GENERAL CONCLUSIONS
REFERENCES
Jacques Leplat
INTRODUCTION
287
288 J. LEPLAT
(Figure: relations between the goal, the procedure P, the task T
and the operator's activity, under conditions C. Legend: P0 is the
largest procedure internalized by the operator; > : more summary
than; < : more detailed than.)
3. "Task-activity" dialectics
being the greater, the more finely the task is described in terms
of sub-tasks. Consequently, work analysis frequently proceeds in a
spiral movement; better task approximation leads to better
activity approximation, and inversely. Progressively the analyses
can enrich each other until reaching the desired quality of
precision.
Gaps between the prescribed and effective task may involve goals
or sub-goals. The first important origin of gaps is a function of
the permitted tolerance. Some operators accept larger deviations
of the system than those officially stated; e.g., they intervene
only when indications deviate appreciably from the zone considered
as "normal". Analysing the origin of these allowances may lead to
the conclusion that the cause is simple ignorance. It may also
result from knowledge of the installation's functioning, showing
for instance that, all other conditions being equal, the gap should
decrease rapidly without intervention (De Keyser, 1972). Gaps can
reveal a strategy.
Task analysis and models are very useful but appear limited
if not enlightened by activity analysis, as we shall see in the
following chapter.
CONCLUSION
REFERENCES
INTRODUCTION
301
302 B. H. KANTOWITZ AND R. H. HANSON
Perhaps some concrete examples may help make this point more
salient. Imagine an operator faced with a visual display
consisting of ten green squares. If any square turns red the
operator is required to detect this stimulus change and to respond
by pushing a large button labelled STOP. Since there is but a
single response, this situation is close to an ideal detection
task. There is no response information, although there is stimulus
information of up to 10 bits depending upon the probability
distribution over the stimulus set. (The next paragraph explains
why more than one stimulus was used in this example). Since there
is more information in the stimulus set than in the response set,
this situation is an example of a many:1 stimulus-response
mapping. There is virtually no diagnosis or response selection
imposed upon the operator. Once a change in the stimulus set is
detected the requisite action has already been pre-selected by the
task structure.
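The information-theoretic bookkeeping in this example can be sketched in a few lines. This is an illustration, not part of the chapter: the "up to 10 bits" ceiling corresponds to treating all 2^10 on/off patterns of the ten squares as equally likely, while a single square turning red, chosen uniformly from ten, carries only log2(10) ≈ 3.32 bits.

```python
import math

def stimulus_information(probs):
    """Shannon information (bits) of a distribution over the stimulus set."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One of 10 equally likely squares turning red: log2(10) ≈ 3.32 bits.
one_red = stimulus_information([0.1] * 10)

# All 2**10 equally likely on/off patterns: the 10-bit ceiling.
all_patterns = stimulus_information([1 / 1024] * 1024)
```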
Queueing Models
Information Theory
Control Theory
into those that have slow signal arrival rates on the order of one
event per minute or slower, and rapid rates where stimuli occur
more often than one per minute. Thus, signal rate is an index of
the temporal density of signals. Event predictability is divided,
like Caesar's Gaul, into three parts: random processes, partially
predictable processes and completely predictable events. Note that
this tripartite division refers only to the type of stimulus and
not to its temporal uncertainty. Hence an experiment with only one
known signal that occurred at random times would be classified as
a completely predictable event.
before one can state that any given increase in detection accuracy
was offset by an increase in latency. Furthermore, a more
appropriate statistical data analysis would have been a multiple
analysis of variance of detection accuracy and latency together,
rather than separate univariate analyses.
CONCLUSIONS
REFERENCES
INTRODUCTION
317
318 J. S. BROWN AND J. de KLEER
METHODOLOGY
TECHNICAL ISSUES 1)
Figure 1: Buzzer (battery, switch, coil, clapper)
"(...)*" indicates that "..." may occur an arbitrary number of
times. The <definitional-part> can be used in two ways: the first
concerns its use as a criterion for determining whether the
component is in a given state, and the second concerns its use as
an imperative: given that the component is declared to be in a
particular state (criterion), then the statements made in the
<definitional-part> are asserted to be true (imperative) about the
component's behaviour. These assertions can then be examined by an
inferential process to determine their ramifications. In simple
cases these ramifications can be determined by examining the tests
of the current state of every component model.
SWITCH  OPEN:
          battery is disconnected,
          if coil is not pulling, switch will become CLOSED.
        CLOSED:
          battery is connected,
          if coil is pulling, switch will become OPEN.
COIL    ON:
          coil is pulling,
          if battery is disconnected, coil will become OFF.
        OFF:
          coil is not pulling,
          if battery is connected, coil will become ON.
2)
For more complex devices this choice cannot be arbitrary since
some of the composite device states may be either contradictory
or inaccessible - requiring several initial states. A composite
state is contradictory if the definitional parts of two of its
component states make contradictory assertions.
3)
However, many of these difficulties can be avoided by detecting
and eliminating self-contradictory device states and invoking
other deductive machinery. It is the responsibility of the
simulation process Pl to identify the ambiguities, but it is
also the responsibility of Pl to prune as many of the resulting
envisionments as possible based on local considerations.
QUALITATIVE REASONING ABOUT MECHANISMS 327
(Figures 5-8: successive circuit states at T = 0, 1, 2 and 3, with
wire 1 marked.)
Figure 5: Enablement
Figure 6: Cause
Figure 7: Propagate
Figure 8: Antagonism
Both state changes SC1 and SC2 cannot hold simultaneously and are
therefore termed antagonistic. For example, the switch state open
is antagonistic to the switch state closed.
CONCLUSIONS
REFERENCES
CHAIRMAN: H. G. STASSEN
SECRETARY: H. TALMON
SYSTEM DESIGN AND OPERATOR SUPPORT
Henk G. Stassen
INTRODUCTION
339
340 H. G. STASSEN
SURVEY PAPERS
PAPERS ON METHODS
APPLICATION PAPERS
(Table: survey matrix relating the papers of this session to their
topics - decentralisation; task allocation; level of intervention;
three-level human behaviour model; internal model; fault coverage;
failure transparency; redundancy management; fault propagation;
fault tolerance; reliability/safety; pattern recognition; fault
tree cause/consequence diagnosis; symptom equations; flow models;
disturbance analysis systems.)
CONCLUDING REMARKS
Discussion (continued)
REFERENCES
DECENTRALIZED SYSTEMS
Gunnar Johannsen
INTRODUCTION
353
354 G. JOHANNSEN
(Figure: Controller 1 and Controller 2, exchanging u1(t) and y1(t)
with the system, coupled through an information link and supervised
by a coordinator.)
SENSING AND PRESENTING RELEVANT INFORMATION
ABOUT PRESENT PROCESS STATE
display data to human operator in given format + + + +
find and display data which meets given criterion + + + +
apply given measure (extrapolation, correlation, etc) + +
find best sensory process to meet criterion + +
make diagnosis of measured symptoms + +
EVALUATING ALTERNATIVE ACTIONS
indicate to operator command doesn't meet criterion + +
determine model response to given test input + +
determine which control is best by given criterion + +
test whether actual response matches model response +
suggest an action to human operator + + + +
request data from operator, process recommendation + + + +
IMPLEMENTING ACTIONS
request data from human, process it for action + +
take certain action when operator gives signal + +
take control action unless operator gives signal + +
take control action independently of human operation + +
Figure 3. Block Diagram of a Supervised Decentralized Control
System (human supervisor over the decentralized controllers).
CONTROL OF DECENTRALIZED SYSTEMS 361
AN EXAMPLE
(Figure: example displays for Control (SS, PC, HO), Diagnosis
(SS, PC, HO) and Correction (suggestion by the PC). Legend:
a = actual value; x = predicted value; blinking information.)
REFERENCES
F.P. Lees
INTRODUCTION
Modern process plants are very large and complex and their
control involves large flows of information. In fault conditions
there is a severe problem of the interpretation of this
information in order to diagnose the fault. The task of fault
diagnosis is normally assigned, if only by default, to the process
operator, but the use of the process computer to assist him in
this task appears a natural development. The process computer
normally takes in measurements from the plant, compares them with
alarm limits and generates alarms. Clearly this task may in
principle be extended to that of analysing the alarms to diagnose
the fault.
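The process computer's basic task as described, taking in measurements, comparing them with alarm limits and generating alarms, can be sketched as follows. Variable names and limits are illustrative.

```python
# Sketch of measurement scanning against alarm limits. A disturbance
# analysis would then take the list of alarms as its input.

def check_alarms(measurements, limits):
    """measurements: name -> value; limits: name -> (low, high).
    Returns a list of (name, 'LOW' | 'HIGH') alarms."""
    alarms = []
    for name, value in measurements.items():
        low, high = limits[name]
        if value < low:
            alarms.append((name, "LOW"))
        elif value > high:
            alarms.append((name, "HIGH"))
    return alarms
```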
369
370 F. P. LEES
the synthesis of large fault trees from digraph models for process
plants. The input data are the system topography and a single
digraph model for each unit. This work has given rise to a
considerable discussion in the literature. This discussion is of
interest in that it brings out a number of important points in the
technology of representing fault propagation.
(Figure 1: pipework example - pipes 5-8, valve V3, set point SP.)

Unit models (Figure 2):

Pipe (A to B):
  FB = f(PA, -PB)
  dPA/dt = f(FA, -FB)

Open tank (level LC):
  dLC/dt = f(FA, -FB)
  FB = f(LC, -PB)

Control valve (assumption: air-to-open):
  FB = f(PA, -PB, ZC)
  dPA/dt = f(FA, -FB)

Controller and sensor in a control loop:
  ZB = f(ZA)

MF  Flow alarm
ML  Level alarm
FOUT LO
As already stated the fault trees are not developed and held
in the program in advance. When a set of alarms occurs in real
time, however, one alarm is selected as the top event and the
fault tree is developed.
Figure 8 after Andow and Lees (1978) shows a fault tree for
part of the system given in Figures 1 and 2 constructed using
mini-fault trees such as that shown in Figure 7.
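The on-demand development of a fault tree from a selected top event can be sketched as below. The data structure is invented for illustration, a mini-fault-tree reduced to an OR list of immediate causes per event, and is not Andow and Lees's actual digraph formalism; event names are hypothetical.

```python
# Hedged sketch: starting from the alarm chosen as top event, expand each
# event through its mini-fault-tree (here: a list of immediate OR-causes)
# until only basic events remain.

MINI_TREES = {                       # event -> immediate OR-causes
    "flow_B_low": ["valve_V3_closed", "pressure_A_low"],
    "pressure_A_low": ["leak_pipe_5", "pump_failed"],
}

def develop_tree(top):
    """Recursively develop the fault tree below the chosen top event."""
    causes = MINI_TREES.get(top, [])
    return {top: [develop_tree(c) for c in causes]} if causes else top
```

Because the tree is only developed when a set of alarms actually occurs, nothing has to be stored in advance beyond the unit models themselves.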
(Figure 8: fault tree fragment, including the event "flow FI high"
and events within a control loop.)
failure in a control loop. Like other parts of the plant the
elements in the control loop are described by unit models. A fault
tree representation of a control loop generally includes an AND
gate. Method 2 generates an AND gate for a control loop by the
normal application of the rules.
The process alarm network has been stored in the process
computer using RTL/2. The plant has then been operated so that
alarms are generated. The work has been limited in extent, but the
alarm analysis program has worked satisfactorily.
ACKNOWLEDGEMENT
REFERENCES
Andow, P.K., 1973, "A Method for Process Computer Alarm Analysis",
        Ph.D. thesis, Loughborough University of Technology.
Andow, P.K., 1979, Private communication.
Andow, P.K., 1980a, "Difficulties in Fault Tree Synthesis for
        Process Plant", IEEE Trans. Reliab., in press.
Andow, P.K., 1980b, "Real-Time Analysis of Process Plant Alarms
        Using a Mini-Computer", Computers in Chem. Engng., in press.
Powers, G.J. and Tompkins, F.C., 1974a, "A Synthesis Strategy for
        Fault Trees in Chemical Processing Systems", in "Loss
        Prevention", Vol. 8, Am. Inst. Chem. Engrs., New York, p. 91.
Powers, G.J. and Tompkins, F.C., 1974b, "Fault Tree Synthesis for
        Chemical Processes", AIChE J., 20:376.
Powers, G.J. and Tompkins, F.C., 1976, "Computer-Aided Synthesis
        of Fault Trees for Complex Processing Systems", in "Generic
        Techniques in Systems Reliability Assessment" (edited by
        E.J. Henley and J.W. Lynn), Noordhoff, Amsterdam, p. 307.
Rasmussen, J., 1968, "On the Communication Between Operators and
        Instrumentation in Automatic Process Plants", Danish Atomic
        Energy Comm., Res. Est., Risø, Denmark, Rep. Risø-M-686.
Rasmussen, J. and Jensen, Aa., 1973, "A Study of Mental Procedures
        in Electronic Trouble-Shooting", Danish Atomic Energy
        Comm., Res. Est., Risø, Denmark, Rep. Risø-M-1582.
Salem, S.L., Apostolakis, G.E., and Okrent, D.L., 1976, "Computer-
        Oriented Approach to Fault-Tree Construction", Univ. of
        Calif., Los Angeles, Rep. UCLA-ENG-7635.
Taylor, J.R., 1974, "A Semi-Automatic Method for Qualitative
        Failure Mode Analysis", Danish Atomic Energy Comm.,
        Res. Est., Risø, Denmark, Rep. Risø-M-1707.
Welbourne, D., 1965, "Data Processing and Control by a Computer at
        Wylfa Nuclear Power Station", in "Advances in Automatic
        Control", Instn. Mech. Engrs., London, p. 92.
Welbourne, D., 1968, "Alarm Analysis and Display at Wylfa Nuclear
        Power Station", Proc. IEE, 115:1726.
APPLICATION OF PATTERN RECOGNITION TO FAILURE
L. F. Pau
INTRODUCTION
389
390 L. F. PAU
PATTERN MEASUREMENT
Specific Aspects
Methods
LEARNING
Specific Aspects
Methods
FEATURE EXTRACTION
Specific Aspects
Methods
CLASSIFICATION
Specific Aspects
Methods
REFERENCES
Morten Lind
INTRODUCTION
411
412 M. LIND
Measurements
Inferences
Conditions
(The figure shows measurements X1, X2, X3, X4 entering data
transformations, the inferences drawn from them, and the conditions
under which each inference holds. Legend: circled symbols mark
conditions; open circles mark inferred information; the rest is
measured information.)
Fig. 1. Plant variable inferences and corresponding assumption
tree, an example.
FLOW MODELS FOR AUTOMATED PLANT DIAGNOSIS 417
State Categories

(Table: the state categories FN, FD, N and D cross-tabulated with
steady (S) and transient (T) plant states.)
DISTURBANCE COMPENSATION
SEARCH STRATEGIES
ACKNOWLEDGEMENT
REFERENCES
APPENDIX A
Storage Processes
Transport Processes
Boundaries
Conditioned Processes
Flow Structures
(Figure: flow-model symbols. A storage process is drawn as a
content within a boundary (an imaginary surface), characterized by
an intensive variable i and an extensive variable e. A transport
process is a flow characterized by an across variable (potential)
and a through variable (flux) at a boundary interface. A dashed
connection marks the conditioning of a process. Further symbols
denote an aggregate, material source and sink, and energy source
and sink.)
APPENDIX B
The mass balance for a flow structure is

  Σ(n=1..N) m_n = 0                                        (1)

and the energy balance is

  Σ(n=1..N) u_n m_n + Σ(i=1..M) e_i = 0                    (2)

where

  m_n     is the mass flow rate of material flow n,
  u_n m_n is the energy flow rate associated with material flow n
          (u_n is the energy per mass unit),
  e_i     is the flow rate of pure energy flow i.

An unknown pure energy flow e_r then follows as the residual of (2):

  e_r = -[Σ(n=1..N) u_n m_n + Σ(i=1..M, i≠r) e_i]

and an unknown material flow m_j (with u_j known) as

  u_j m_j = -[Σ(n=1..N, n≠j) u_n m_n + Σ(i=1..M) e_i]

or, when two material flows m_j and m_r are unknown, from (1) and
(2) together:

  m_j + m_r = -Σ(n≠j,r) m_n

  u_j m_j + u_r m_r = -[Σ(n≠j,r) u_n m_n + Σ(i=1..M) e_i]
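The two balance equations can be checked numerically for any flow structure. The sketch below is illustrative (the values are invented); it only evaluates the residuals of (1) and (2), the quantities the diagnosis uses to detect a disturbed balance.

```python
# Residuals of the mass balance (1) and energy balance (2) for one flow
# structure. Sign convention: flows into the structure are positive.

def mass_residual(m):
    """Sum of mass flow rates m_n; zero when the mass balance holds."""
    return sum(m)

def energy_residual(m, u, e):
    """Sum of u_n*m_n plus pure energy flows e_i; zero when balanced."""
    return sum(un * mn for un, mn in zip(u, m)) + sum(e)

m = [2.0, -2.0]       # one material inflow, one outflow (kg/s)
u = [100.0, 150.0]    # energy per unit mass (J/kg)
e = [100.0]           # pure energy flow closing the balance (W)
```

A nonzero residual on a structure is exactly the kind of discrepancy the topographic search localizes.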
L.P. Goodstein
INTRODUCTION
433
434 L. P. GOODSTEIN
TASK SPECTRUM
(Figure: the task spectrum - frequency of task plotted against risk
level. Routine, familiar tasks are handled by learned, empirical
rules; preplanned tasks by prescribed instructions and procedures;
new tasks by problem solving and improvisation.)
involve using information from the system to "check that -" and
"see whether -" in accordance with the particular search strategy
being used and the associated reference model of the system.
Low or falling pressurizer pressure and level:

- Abnormally low steam pressure in one or both steam generators
  indicates a steam break. Verify by checking for (1) lower than
  normal steam generator levels, and (2) a possible first-out
  annunciation of (a) steam/feedwater flow mismatch or (b) low-low
  steam generator water level. Go to detailed recovery procedure
  XXX.

- Rising or normal steam pressure in both steam generators
  indicates loss of coolant or tube rupture:
  - Either increasing containment pressure, a containment high
    radiation alarm, or rising sump water level indicates a
    possible loss of coolant. Go to detailed recovery procedure
    YYY.
  - An air ejector radiation alarm, a steam generator blowdown
    radiation alarm, or an observed differential rate of rise of
    steam generator levels indicates a possible tube rupture. Go to
    detailed recovery procedure ZZZ.
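The diagnostic logic of such a procedure fragment is essentially a small decision tree over symptom patterns. The sketch below paraphrases the symptoms as booleans; the function and the procedure labels are illustrative, not part of any actual plant procedure.

```python
# Rule-based classifier over the symptom pattern following low or falling
# pressurizer pressure and level. Symptom names paraphrase the procedure text.

def classify(steam_pressure_low, containment_symptom, radiation_symptom):
    """steam_pressure_low: abnormally low steam pressure in a generator;
    containment_symptom: rising containment pressure/radiation or sump level;
    radiation_symptom: air ejector or blowdown radiation alarm."""
    if steam_pressure_low:
        return "steam break -> recovery procedure XXX"
    if containment_symptom:
        return "loss of coolant -> recovery procedure YYY"
    if radiation_symptom:
        return "tube rupture -> recovery procedure ZZZ"
    return "continue monitoring"
```

Encoding the branching this way makes the point in the text visible: the operator's task is reliable discrimination among a few preplanned patterns, not open-ended problem solving.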
A SUITABLE MODEL
(Figure: the skill-, rule- and knowledge-based behaviour model
plotted against frequency of task - knowledge-based behaviour
directed by goals, rule-based behaviour, and skill-based behaviour
coupling sensory inputs and time-space information to actions.)
Skill-Based Behaviour
Rule-Based Behaviour
Knowledge-Based Behaviour
DISCRIMINATION
kind of "screening" of the plant to find deviations from normal x),
an immediate possibility is to display updated sets of information
as patterns which are amenable to reliable perception by the
operator as a prerequisite to activating a conscious response.
x) "Normal" is often relatively undefined - compared with the
degree of refinement (and effort) which goes into defining
abnormalities. It seems reasonable to expect that a combination
of design intentions coloured with a good portion of
operational experience would permit a usable clarification to
be made.
DISPLAY SUPPORT FOR PROCESS OPERATORS 443
Figure 5. State patterns for: normal operation; loss-of-cooling
accident; primary-to-secondary leak.
SYMBOL REPERTOIRE
- source/sink node
- transport node
- storage node
- conditioning node
Figure 6
(Figure 7: flow structure for the plant - reactor, steam
generators, turbine-generator, condenser and cooling tower, with
primary and secondary containment and primary and secondary
inventories; solid lines mark mass flow, dashed lines energy flow.
A disturbance in the steam generator energy balance is shown by
colour change and blink.)
Figure 7
(Figure: energy distribution in the main process - the structures
are the same; white is actual flow magnitude, grey ends are normal
flows, dashed lines are zero flow.)
Power control
Branchings, feedback of energy
Levels of energy "accumulation" as reflected in state of
critical variables
Means for control and routing
Inventory control
Supply and loss
Levels of accumulation
Means for control and routing
CONCLUSION
ACKNOWLEDGEMENTS
REFERENCES
Andow, R.K. and Lees, F.P., 1974, "Process Plant Alarm Systems -
General Considerations", in Buschmann, C.H. (ed.), Loss
Prevention and Safety Promotion in the Process Industries,
Amsterdam (Elsevier).
Coekin, J.A., 1969, "A Versatile Presentation of Parameters for
Rapid Recognition of Total State", International Symposium
on Man-Machine Systems, Sept. 8-12, Cambridge, IEEE Conf.
Record 69 C58-MMS.
Goodstein, L.P. and Rasmussen, J., 1980, "Man-Machine System
Design Criteria in Computerized Control Rooms", in "ASSOPO
80", IFIP/IFAC Symposium in Trondheim, June 16-18, to be
published by North Holland.
Lind, M., 1980, "The Use of Flow Models for Automated Plant
Diagnosis" (this volume).
Oversight Hearings, 1979, "Accident at the Three Mile Island
Nuclear Power Plant", Washington DC, May 9, 10, 11, 15,
Document Serial No. 96-8 Part I, US Government Printing
Office.
Rasmussen, J., 1980, "Models of Mental Strategies in Process Plant
Diagnosis" (this volume).
Rasmussen, J., 1980, "Some Trends in Man-Machine Interface Design
        for Industrial Process Plants", in "ASSOPO 80", IFIP/IFAC
        Symposium in Trondheim, June 16-18, to be published by
        North Holland; also Risø-M-2228.
DISTURBANCE ANALYSIS SYSTEMS
GENERAL
Notice:
This study was prepared by Gesellschaft für Reaktorsicherheit,
sponsored in part by the Commission of the European Communities,
Joint Research Centre, Ispra establishment.
451
452 W. BASTL AND L. FELKEL
OBJECTIVES
CEGB Approach
deduced alarm
fault messages associated with deduced alarms.
EPRI Approach
(Figure: the analyser is fed from off-line engineering analysis.)
Preprocessor
Disturbance Analyser
The most important and the most commonly used level seems to
be the level two (Cause-consequence analysis). After completion of
the multi-level analysis some clean-up is performed and so-called
"second-best" messages are attached to those events which are
activated but which could not be adequately analysed by the DAS.
The system comprises two major data bases, one is the model
data base and the other is the dynamic data base. The model data
base contains the plant models in form of cause-consequence trees.
The dynamic data base contains all those variables whose values
are sampled by the preprocessor, and which have to be checked
against pre-set limits.
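The interplay of the two data bases can be sketched as below. The structures and names are illustrative assumptions: the model data base is reduced to a dictionary of cause-consequence links, and the dynamic data base to the latest sampled values checked against pre-set limits.

```python
# Hedged sketch of a DAS scan cycle: sampled variables (dynamic data base)
# are checked against limits; confirmed events are looked up in the model
# data base (cause-consequence trees) to report expected consequences.

MODEL_DB = {  # event -> consequences if the event is confirmed
    "level_1st_limit": ["tube_break_suspected", "level_2nd_limit"],
    "level_2nd_limit": ["feedheater_isolated"],
}
LIMITS = {"feedheater_level": 2.5}

def scan(dynamic_db):
    """Return (event, expected consequences) for each exceeded limit."""
    events = []
    if dynamic_db["feedheater_level"] > LIMITS["feedheater_level"]:
        events.append("level_1st_limit")
    return [(e, MODEL_DB[e]) for e in events]
```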
(Figure: example cause-consequence logic combining the alarm states
V1/HIGH, V2/HIGH, V3/LOW, V4/OFF, V5/ON, V6/AUTO and V7/MANUAL; the
plant feeds a plant data base, from which systems analysis derives
corrective actions.)
The preprocessing module also stores all the data collected for
disturbance analysis on a magnetic tape to allow replay and
off-line evaluation of the disturbance analysis results.
(Figure: module structure of the disturbance analysis system.)

Data acquisition: plant and operator.
Data preprocessing: data selection; limit checking; data validity
checking; data storage on magnetic tape.
Alarm monitoring: alarm grouping according to plant systems.
Disturbance analysis: prime causes; present status; possible
propagation; recommended corrective action.
Communication: editing of results; interactive communication;
information retrieval.
(Figure: cause-consequence tree for the low pressure feed-heater
example - hotwell, start-up and shut-down pumps; among the events:
level in LP feed heater A3/ND2 exceeds the 2nd limit (alarm, event
12); automatic shutdown of all condensate pumps if feed heater
A3/ND2 is not bypassed (message, event 13); feed heater bypass in
operation (question to operator, event 14); continued plant
operation with feed heater bypass (message, event 15); level in LP
feed heater A3/ND2 exceeds the 3rd limit (secondary cause, event
16); shutdown of main condensate pumps 1, 2 and 3 (alarms, events
17-19).)
The basic assumption is that the drains pump for the low
pressure feedwater is either switched-off or in repair (event 3).
There may be a tube break in the low pressure feedheater or a
spray degasifier in the feedwater tank is lost. Subsequently,
there will be an increase in the water level of the feedheater.
This increase will continue until the first limit is reached
(event 8). This is the first observable event from which the
disturbance analysis system will conclude that the primary cause
can only be the tube break in the low pressure feedheater (event
4). At this point appropriate corrective actions could already be
taken. In case the corrective action is not taken, the water level
will continue to rise and will exceed a second limit (event 12).
The control system automatically isolates the defective low
pressure feedheater after the second limit has been exceeded and
the appropriate pre-heater will be by-passed. This may not be
successful, however, in which case an emergency shutdown of all
main condensate pumps is imminent. The operator will be notified
about this situation by appropriate messages. In the Grafen-
rheinfeld nuclear power plant the status of the valves for the
pre-heater by-pass is not available on the process computer, and
only the operator knows whether or not the pre-heater by-pass was
successful. The disturbance analysis system therefore asks the
operator an appropriate question (event 14). The question is
answered by means of a special keyboard in the disturbance
analysis system. If the pre-heater by-pass was unsuccessful, the
water level in the low pressure feedheater will continue to rise;
once a third limit is exceeded, this causes the automatic shutdown
of the two operational main condensate pumps and prevents the
stand-by condensate pump from being started. As a consequence the
feedwater tank will gradually be emptied, which eventually leads
to the shutdown of the main feedpumps and a reactor scram.
(Figure: alarm analysis system for the WWR-SM reactor. Measurements
pass through a primary data processing system into a data base; an
alarm analyser (ANAL) works from a library of alarm tree
descriptions; alarm tree generation (ALGEN) and the alarm display
system (ALDYS) serve the operator's alarm console and display, with
connections to the control system.)
Keypoints of the general layout of a DAS are taken and the systems
under consideration are examined for similarities and differences.
Hardware requirements
Software
Methodologies used
REFERENCES
Max Syrbe
INTRODUCTION
475
476 M. SYRBE
(Figure: a system is decomposed into aggregates and modules
(decomposition, aggregation, synthesis); fault tolerance is
obtained either by static redundancy (fault masking, e.g. 2 out of
3) or by dynamic, function-sharing redundancy (reconfiguration and
recovery after fault access).)
Fig. 3. Fault-tolerant system structures
(Figure: processing capacity over time after a fault - error
diagnosis time, reconfiguration time, and disposable repair time.)
Fig. 4. Processing capacity and partial task control
the I/O parallel bus where the various process I/O modules
(digital I/O, analogue I/O) as well as an operator control
panel for the station (CPL) are located. Here actual values
of the process signals, set points, signal limits or other
parameters can be put into the data base of the station.
Thus a simplified operation of the station can be guaranteed
in spite of a breakdown of the communication
Fig. 6. The distributed, fault-tolerant RDC system (I/O section;
modules of an RDC station; test cycle; pause telegram, CRC).
Fig. 12. Microcomputer station's control panel.
AUTOMATIC ERROR DETECTION 485
Fig. 13. Adjoined matrix for determination of single fault parts
(rows are dated error telegrams C100/0001, C100/0002, C100/F013,
C101/1111, C101/F011, and so on; crosses mark the parts each
telegram implicates).
ACKNOWLEDGEMENT
REFERENCES
SYSTEM RECOVERY
W. J. Dellner
INTRODUCTION
APPROACH
487
488 W. J. DELLNER
After doing the above, the foundation will have been laid for
designing the user's role for a specific fault tolerant system.
Design factors which will be reviewed include: research on
displays, needs for system testing and needs for gaining and
maintaining expertise.
The ESS system designer has specified the criteria and means
for fault detection and recovery to the good processor. The ESS
indicates, via a display, the processor which is in control. From
the display, one can infer if a fault detection module is
"insane", since it shows ping-pong of control from processor 1 to
processor 2 and back and forth. This latter, infrequent state
requires manual override to lock one processor into control
(LaCava, 1978).
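The inference from the display can be sketched as a simple oscillation check. The window length and switch threshold are illustrative assumptions; the paper only states that repeated ping-ponging of control between the processors signals an insane fault-detection module.

```python
# Sketch: infer an "insane" fault-detection module from the sequence of
# processors shown in control. Rapid alternation suggests manual override
# is needed to lock one processor into control.

def ping_pong(control_history, max_switches=3):
    """control_history: processor ids in control over a recent window.
    Returns True when control switched at least max_switches times."""
    switches = sum(1 for a, b in zip(control_history, control_history[1:])
                   if a != b)
    return switches >= max_switches
```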
GENERIC MODEL
Machine Subsystem
SUMMARY
REFERENCES
APPENDIX
(Figure 1: switchers 1 and 2, each serving customer lines and
connected to each other by trunks.)
(program store) and the temporary memory (call store) which acts
as an electronic slate. When the call is completed, the slate is
wiped clean of information pertaining to that call.
(Figure 2: central control connected via scan and distributor
interfaces to the call store (temporary) and the program store
(permanent).)
(Figure 3: duplicated central controls, cross-checked by a match
circuit over the data buses.)
David A. Lihou
INTRODUCTION
501
502 D. A. LIHOU
Null AND null, No, Less or More gives null, No, Less or More,
respectively;
Less AND Less gives Less;
More AND More gives More;
No AND Less gives Less;
No AND More gives Less;
Less AND More gives null.
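These combination rules form a small symmetric algebra over deviations, which can be written down directly as a lookup table. Only the combinations stated above are included; anything else raises a KeyError rather than guessing a result.

```python
# The AND combination rules for deviations, exactly as listed in the text,
# implemented as a symmetric (unordered-pair) lookup table.

RULES = {
    frozenset(["null"]): "null",
    frozenset(["null", "No"]): "No",
    frozenset(["null", "Less"]): "Less",
    frozenset(["null", "More"]): "More",
    frozenset(["Less"]): "Less",
    frozenset(["More"]): "More",
    frozenset(["No", "Less"]): "Less",
    frozenset(["No", "More"]): "Less",
    frozenset(["Less", "More"]): "null",
}

def combine(a, b):
    """AND-combine two deviations; order of arguments does not matter."""
    return RULES[frozenset([a, b])]
```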
Fault Trees
Case Study
OPERABILITY STUDIES
In the method presented here, for storing all the cause and
effect combinations which are considered during an Operability
Study, the Key Words are identified by number indices which are
listed in brackets, following the line identification. This is
followed by a string of causes, comprising the Cause Equation,
separated by + to indicate OR and * to indicate AND. The indices
used to define the Key Words are shown in Table 1. Also shown on
this table are the components which occur in the acetone recovery
process, which is used as a case study. When describing the
deviation (Flow, As well as) it is convenient to identify the
contaminating component by a third number. Similarly, concen-
tration deviations should have a component number.
Index  First (property)  Second (deviation)  Third (component)
1      Flow              No                  Acetone
2      Temperature       Less                Water
3      Pressure          More                Steam
4      Level             As well as          Air
5      Concentration     Part of             Ammonia
6      Heat              Reverse
7      Cool              Other than
Nodes
L303(13) = FE101(-1)+V61(1)+V59(0)+FIC101(1)+FCV101(1)
           +V3(1)*V1(1)*V2(1)
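Decoding the bracketed number indices against Table 1 is mechanical and can be sketched as below; the function name is an illustrative assumption. The first digit names the property, the second the deviation, and an optional third digit the contaminating component.

```python
# Decoder for the key-word indices of Table 1: e.g. (13) -> (Flow, More),
# (32) -> (Pressure, Less), (531) -> (Concentration, More, Acetone).

PROPERTY = {1: "Flow", 2: "Temperature", 3: "Pressure",
            4: "Level", 5: "Concentration", 6: "Heat", 7: "Cool"}
DEVIATION = {1: "No", 2: "Less", 3: "More", 4: "As well as",
             5: "Part of", 6: "Reverse", 7: "Other than"}
COMPONENT = {1: "Acetone", 2: "Water", 3: "Steam", 4: "Air", 5: "Ammonia"}

def decode(index):
    """Turn a numeric key-word index into its (property, deviation[, component])."""
    digits = [int(d) for d in str(index)]
    out = (PROPERTY[digits[0]], DEVIATION[digits[1]])
    if len(digits) == 3:
        out += (COMPONENT[digits[2]],)
    return out
```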
(Figure: flowsheet of the acetone recovery process, showing lines
L312, L314, L315, L316A, L323 and L338, pumps P102A and P102B,
valves including V1, V2, V19, V21, V105 and V117, instrument FR102,
and the node numbers used in the equations. Equipment state index
numbers: 0, -1, 1 and L, e.g. 0 = closed/failed shut,
-1 = insufficiently open, L = leaking.)
L302(32) = TP6(32)+V40(0)*{V38(-1)+FCV103(-1)+V39(-1)}
This equation means Line 302 (Press., Less) is caused by Terminal
Point 6 (Press., Less) OR V40 closed AND {V38 OR FCV103 OR V39
insufficiently open}.
L302(12)+N20(12)*N20(32)*N18(12)*N17(12)*N19(12)*N19(22)*N19(32)
This equation means that Line 302 (Flow, Less) will cause the
following symptoms Node 20 (Flow, Less) AND Node 20 (Press., Less)
AND Node 18 (Flow, Less) AND Node 17 (Flow, Less) AND Node 19
(Flow, Less) AND Node 19 (Temp., Less) AND Node 19 (Press., Less).
FAULT FINDING AND CORRECTIVE ACTION 509
HE101(L)+N18(13)*N20(13)*N19(12)
This equation means that HE101 (Leaking) will cause the following
symptoms Node 18 (Flow, More) AND Node 20 (Flow, More) AND Node 19
(Flow, Less).
Note from the Appendix that the Cause Equation for L330(531)
is N2(22); i.e. Node 2 (Temp., Less). Node 2 is listed in the
Symptom Equations for C101, and the only two faults producing
N2(22) in this set of Fault Symptom Matrices are L329(11) and
L329(12). Similarly, the Cause Equations for L329(11) and L329(12)
contain only N18(11) and N18(12), respectively. Searching the
Symptom Equations of HE101 for these nodal responses at N18
produces the first row of causes on Figures 4 and 5, respectively.
(Figures 4 and 5: fault trees for deviations in line 329; the
legend distinguishes OR and AND gates.)
Event Probabilities
η = μ/Γ(1 + 1/β)                                          (2)
(3 )
(Table: fault symptom matrices. For each postulated fault - e.g.
(1) TTr101(1) causing L302(12), and (2) L302(12) causing L329(12),
L330(531) and L333(531) - the entries give the expected deviations
of Flow, Temperature, Pressure and Level (L = Less, 0 = no
deviation) at nodes 2, 3, 4, 6, 8, 9, 10, 11, 12, 17, 18, 19
and 20.)
Go to LIC102 = 3 min.
Check V75 open, open V77; is V75(0)? = 1.5 min.
Set LIC102(-1) and go to LCV102 = 2 min.
Go directly to LCV102 = 2 min.
Open V19 = 0.25 min.
Total time to check LCV102 stuck = 12.7 min.
Prob.{LCV102 stuck} = 0.393
(Figure 10: fault-finding flowchart - open V19; check whether V17
and V18 readings are high or falling; distinguish a spurious alarm
(close V19 and check LAH102) from LCV102 stuck (control the level
by V19); if LXSL102A or P102A have failed, prepare to start P102B
by closing V82 and opening V117 and V14; check V13.)
ACKNOWLEDGEMENT
REFERENCES
APPENDIX
Cause Equations
L302(11) = FE103(0)+V40(0)*{V38(0)+FCV103(0)+V39(0)}+TP6(11)
L302(12) = FE103(1)+V95(0)+TIC101(-1)+TTr101(1)+V40(0)*{V38(-1)+FCV103(-1)+V39(-1)}
L302(32) = TP6(32)+V40(0)*{V38(-1)+FCV103(-1)+V39(-1)}
L303(33) = TP1(33)+HE104(-1)+L306(33)
L312(23) = TIC102(1)+TCV102(-1)+HE102(72)+TP8(23)+TP8(12)
L313(11) = D101(41)+GO101(0)
Note: GO101 is the grille in the bottom of D101 at the inlet to L313.
L313(23) = L312(23)
L316A(11) = L313(11)+V11(0)+LF103(0)+P102A(0)+LSXL102A(0)+V13(0)
L316A(12) = V11(-1)+LF103(-1)+P102A(-1)+V13(-1)
L316B(11) = L313(11)+V14(0)+LF104(0)+P102B(0)+LSXL102B(0)+V16(0)
L316B(12) = V14(-1)+LF104(-1)+P102B(-1)+V16(-1)
L317(11) = L316A(11)*L316B(11)+V19(0)*{V17(0)+LCV102(0)+V18(0)}
L317(12) = L316A(12)+L316B(12)+LIC102(1)+V76(0)+V19(0)*{V17(-1)+LCV102(-1)+V18(-1)}
L328(11) = N2(11)+HE101(0)
L328(23) = N2(23)
L329(11) = N18(11)
L329(12) = N18(12)
L330(531) = N2(22)
L330(32) = P101A(-1)
P101A(-1) = N2(23)*{N2(32)+V23(-1)+LF101(-1)}+V89(1)
L332(531) = N2(22)
L332(32) = P101B(-1)
P101B(-1) = N2(23)*{N2(32)+V26(-1)+LF102(-1)}+V90(1)
L333(32) = {L330(32)+V25(-1)}*{L332(32)+V28(-1)}+V29(-1)*V30(1)+V29(1)*V30(1)*V31(1)
L333(531) = L330(531)+L332(531)+HE104(1)*{L303(33)+L333(32)}
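Cause equations of this kind read "+" as OR and "*" as AND over discrete deviation events, so they can be evaluated mechanically once the set of active events is known. The sketch below is an assumed illustration, not the author's software: it evaluates one equation, L317(11), for two hypothetical event sets.

```python
# Evaluating one cause equation, reading "+" as OR and "*" as AND over
# deviation events. Event names follow the Appendix; the evaluator itself
# is an assumed sketch for illustration.

def ev(name, active):
    """True if the named deviation event is present in the active set."""
    return name in active

def l317_11(active):
    # L317(11) = L316A(11)*L316B(11) + V19(0)*{V17(0)+LCV102(0)+V18(0)}
    return (ev("L316A(11)", active) and ev("L316B(11)", active)) or \
           (ev("V19(0)", active) and (ev("V17(0)", active) or
                                      ev("LCV102(0)", active) or
                                      ev("V18(0)", active)))

print(l317_11({"V19(0)", "LCV102(0)"}))  # True: V19 shut and LCV102 stuck
print(l317_11({"V17(0)"}))               # False: no co-occurring V19 event
```

The same pattern extends to the full equation set: each equation becomes a small Boolean function, and symptom equations can then be searched for the nodal responses discussed in the text.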
Symptom Equations
INTRODUCTION
524 W. B. GADDES AND L. R. BRADY
Method Objectives
b. Definition of the manual: information, procedures and
decision logic required of the human to enable fault
detection, isolation and repair in those cases where
diagnostic programs, operational readiness tests, and
built-in tests default.
[Flow chart: ORT and related inputs feeding organizations, tech manuals, training and logistics.]
Fig. 1. Methodology for determining human performance requirements for manual maintenance.
a. Systems Engineering
1. Programming Performance Specifications (PPSs):
. Avionics Operational Programs (AOPs)
. Maintenance Test Programs (MTPs)
. Operational Readiness Tests (ORTs)
2. Cable Interconnection Data
3. Troubleshooting logic diagrams
4. System mission functional requirements
b. Reliability Engineering
1. Reliability block diagrams
2. Failure modes effects criticality analyses (FMECAs)
OPTIMIZING HUMAN PERFORMANCE 531
c. Maintainability Engineering
1. Maintainability analyses, predictions
2. Maintenance test definitions
d. Logistics Engineering
1. Logistics support analysis (LSA)
2. Special tools, test equipments
e. Maintenance Engineering
1. Maintenance task descriptions, task times, other maintainer characterizations
g. Software Engineering/Development
1. Maintenance test program descriptions
2. MTP logic diagrams
[Hierarchy: SUBSYSTEM; FUNCTIONAL GROUP; FUNCTION; OPERATIONAL MODE.]
Fig. 2. Sample mission avionics system analysis structure.
[Flow diagram: signal data processing group (1.6 DATA PROCESSING SET NO. 2) with connector data identifiers CP4, CP5, CP31 and CP32; * systems maintenance test (MTP); updated diagram.]
Fig. 3. SAMM flow diagram development.
CONCLUSIONS
REFERENCES
Andrew Shepherd
CHAIRMAN'S REVIEW
INTRODUCTION
(i) the set of faults to be distinguished are known and can all
be included in the training programme;
INSTRUCTIONAL ISSUES
Pre-Instructional Issues
Task Simulation
Knowledge of Results
CONCLUDING REMARKS
DISCUSSIONS OVERVIEW
The distinction was drawn between those rare events that are
merely infrequent and those that are unforeseen. This prompted
discussion of the distinction between the operator, who would try
to cope with problems as they arise, and the specialist, who would
provide a more "expert" back-up. It was argued that training for
such specialists, e.g. plant superintendents or engineers, should
be much more within the context of the task or plant than the
rather general theoretical training such people typically receive,
e.g. university or college education.
MOTIVATING TRAINEES
ends. This was just one of several occasions when the need to
consider diagnosis training within the context of overall task
training or overall human factors considerations was emphasised.
K.D. Duncan
University of Wales
Institute of Science and Technology
Cardiff, South Glamorgan, CF3 7UX
INTRODUCTION
554 K. D. DUNCAN
Algorithmic Procedures
[Algorithmic procedure chart: a prioritised sequence of alarm checks, e.g. "LIC 30 high?", "LIC 29 high?", "LIC 29 low?", each Yes/No branch leading to a diagnosis such as "R2 evaporator reboiler steam supply".]
In the event of several alarms at once select the one which is highest in the above list
Task Analysis
[Diagram: a pressure-leaf filter (pressure vessel with slurry inlet, clarified liquor outlet, sight glass and filter leaves), with the associated tasks: remove leaves, clean leaves, inspect leaves, replace leaves.]
(a) Scan the panel to locate the general area of failure, i.e. Feed,
Reactor/Heat-Exchange complex, Column A or Column B.
(b) Check all control loops in the affected area. Are there any
anomalous valve positions?
(c) High level in a vessel and low flow in associated take-off line
indicates either a pump failure or valve failed 'closed'. If
valves OK (see b), then pump failure is probable diagnosis.
(d) High temperature and pressure in column head associated with low
level in reflux drum indicates overhead condenser failure - provided
all pumps and valves are working correctly (rules b and c).
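Rules (a) to (d) above have the structure of a small rule base over panel readings. The sketch below is an assumed illustration of rules (b) to (d); the instrument names and reading values are invented, and only the rule logic follows the text.

```python
# The panel-scanning rules above, cast as a minimal rule-based diagnosis
# sketch. Reading names and values are assumed for illustration; the rule
# logic follows rules (b), (c) and (d) in the text.

def diagnose(readings, valves_ok, pumps_ok):
    """Apply rules (b)-(d) to a dict of panel readings."""
    if not valves_ok:
        return "valve failed closed"                       # rule (b)
    if readings.get("level") == "high" and readings.get("takeoff_flow") == "low":
        return "pump failure"                              # rule (c)
    if (readings.get("head_temp") == "high"
            and readings.get("head_press") == "high"
            and readings.get("reflux_level") == "low" and pumps_ok):
        return "overhead condenser failure"                # rule (d)
    return "no diagnosis"

print(diagnose({"level": "high", "takeoff_flow": "low"}, True, True))
# prints "pump failure"
```

Because the rules are checked in a fixed order, valve faults are excluded first, mirroring the "if valves OK (see b)" proviso in rule (c).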
SIMULATION FIDELITY
[Bar chart: mean correct diagnoses (scale 0 to 8) for OLD and NEW failures across training conditions.]
CRITERIA
CONCLUSION
ACKNOWLEDGEMENTS
REFERENCES
PLANT OPERATORS
INTRODUCTION
576 E. C. MARSHALL AND A. SHEPHERD
Simulation
"plant" and show only the panel instrumentation. Using video films
or photographs of this simulation, in conjunction with withheld
practice trainees can, in a very short time, achieve 100% diagno-
sis scores on the level control loop. We believe that this
constitutes a useful introduction to the mysteries of plant
instrumentation and will help to prepare trainees for dealing with
large arrays of instruments.
[Panel diagram: PROCESS CONTROL TRAINER, AUTOMATIC LEVEL CONTROL; feed line, tank, level controller LIC1 (0-100 scale), level alarms LAH and LAL, recorder TR1 and flow alarm FAL.]
they can understand and apply the material they have just learnt.
FIELD TRIALS
Technical Colleges
Industrial Manufacturers
(ii) Check all pressure controllers to make sure that they are
functioning correctly.
(v) and (vi) These rules require the operator to check flows and
levels in other equipment further upstream in the process.
tests had not been seen by trainees during any training practice.
A strict time limit of two minutes was allowed for diagnosis of
each failure. The test was thus extremely rigorous as the trainees
had little time to contemplate this large array of presented
information.
(iii) Validation Test C - This test was given on the last day of
the course. Trainees had by now practised the diagnostic rules
especially developed for the oxidation panel, and they had
practised applying these rules in both the withheld and presented
modes. They had all attempted a test where they had been given 20
presented failures to identify. The 20 failures comprised a
mixture of 10 failures they had practised and 10 that they had
only practised previously in the withheld mode. All operators
achieved high scores on this test; the average for the 24
operators was over 90% accuracy. In the final validation test C
operators were presented with 10 failures they had never seen
before. They achieved an average accuracy of 63% which represents
a considerable and significant improvement over performance in
the first two validation tests.
ACKNOWLEDGEMENTS
are also grateful for the assistance received from colleges and
industrial organisations. In particular we are indebted to
Imperial Chemical Industries Ltd., for their cooperation in the
development of plant specific training in fault diagnosis.
REFERENCES
INTRODUCTION
590 J. PATRICK AND R. B. STAMMERS
training (Duncan & Gray, 1975b) suggests that such skills should
not be learnt in a vacuum. The relationships between functional
elements, technical diagrams and actual components have to be
learned to some degree. Consequently the fast generation of alter-
native representations of the same task with a computer may be an
important instructional feature for training and particularly for
transfer to the real situation. At present there is no evidence to
support this popular notion, although these principles are embodied
in, for example, the AIDE system (Towne & Rigney, 1979). The ECII
simulation, which has been evaluated in a number of maintenance
contexts (e.g. McGuirk et al., 1975), can provide various combinations
of task representation under computer control. Simulations for
problem diagnosis training may have varying degrees of physical
and psychological fidelity. At one extreme, full scale realistic
simulations have been developed for complex systems such as
nuclear power plants. At the other extreme, the trainee can
interact with a teleprinter to practise fault finding. In between
can be found examples where the trainee interacts with schematic
representations of the task, by pressing buttons on a panel, e.g.
early versions of the ECII device (Finch, 1971), or where the
interaction is with a display of system components, e.g. a circuit
diagram (May et al., 1977), questions being input via a keyboard.
REFERENCES
Leon H. Nawrocki 2
606 L. H. NAWROCKI
The format for this paper is to first address the nature and
scope of the problem of maintenance in the US military (henceforth
references to military or military agencies should be
assumed to refer to the United States military unless otherwise
indicated). The unique characteristics of military maintenance,
including constraints, will be reviewed, followed by a discussion
of alternative approaches to improving the current system. These
approaches generally fall into three clusters: those dealing with
the management procedures and processes within the maintenance
establishment, those dealing with hardware solutions, and those
dealing with training solutions. In the last category, a case will
be made for employing the confluence of technologies in computer
science, instructional development and simulation technology.
Finally, a brief review will be provided regarding the current
efforts within computer based maintenance training simulation and
thoughts concerning directions which appear fruitful to pursue.
ORGANIZATIONAL FACTORS
HARDWARE ALTERNATIVE
Recently, though, King (1978) and King and Hemel (1978) have
pointed out that ATE is far from a panacea. Not only is the
current and expected cost likely to exceed the cost of more
conventional approaches but, ironically, one of the biggest
problems with ATE is the repair and maintenance requirement for
ATE itself! For example, the largest use of ATE to date has been
for the Navy's Versatile Avionics Shop Test (VAST). However,
following the introduction of this ATE system, substantial new and
additional training, training materials, and even a new skill area
had to be added in order to optimize the use of the system.
Similarly, another ATE system in the Air Force, a Converter/Flight
Control Test Station, was found to require a high degree of
training in its use as well as in maintaining the system itself
(Baum et al., 1979). Moreover, the cost of the system, its low
reliability and student safety factors resulted in the need for a
training simulator to teach the operation and use of the ATE.
By now, the alert reader will have guessed that the third
approach to improving diagnosis is by training the technician. The
sheer magnitude of the training cost within the military as
reviewed earlier in this paper is sufficient evidence that the
need for training is recognized and that it will continue despite
partial solutions such as ATE and aids. On the other hand, two
issues have become of increasing concern in the training arena.
First there is the problem of using actual equipment for training,
and second there is the issue of task specific training versus
general knowledge and procedures.
REFERENCES
Douglas M. Towne
INTRODUCTION
622 D. M. TOWNE
Functional Analysis
[Hierarchy diagram: TOP LEVEL (SYSTEM); EQUIPMENT; SCENE LEVEL.]
Figure 1. Hierarchical System Representation
COMPLEX DIAGNOSIS AND TROUBLESHOOTING 625
Malfunction Analysis
Data Base
Visual Representations
HARDWARE
The dual disc drives are employed such that each student
"owns" a disc for the duration of a course. At the completion of
each problem, whether successfully solved or not, data are written
to the student's disc summarizing the final status of the problem
and effectiveness with which the problem was handled. A list of
items recorded is presented in a later section.
Student action: changing a switch setting.
Accomplished by touching: the desired (new) setting of the switch.
Result: a new image, with the switch in the new position (and all other indicators displaying as they would in that equipment state).
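The trainer's response to a touched switch is thus a lookup: the new equipment state selects a stored image. The sketch below is a hypothetical miniature of that behaviour; the state, switch and image names are all invented.

```python
# A minimal sketch of the image-retrieval behaviour described above:
# touching a new switch setting updates the equipment state and selects
# the stored photograph for the resulting state. All names are hypothetical.

images = {
    ("power", "ON"):  "panel_power_on.img",
    ("power", "OFF"): "panel_power_off.img",
}

state = {"power": "OFF"}

def touch_switch(switch, new_setting):
    """Update the equipment state and return the image to display."""
    state[switch] = new_setting
    return images[(switch, new_setting)]

print(touch_switch("power", "ON"))  # prints "panel_power_on.img"
```

Because every indicator is captured in the stored image for a state, a single lookup keeps the whole display consistent, which is the point the table above makes.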
SOFTWARE
• Machine-readable data
- Normal operation
- Malfunction effects
• Color Photographs
- Front panels
- Test equipments
- Internals
SPECIFIC-STUDENT DATA-BASE
• Progress (Problems/Time)
• Measures of Effectiveness
• Last-Problem details
FUNCTIONAL CHARACTERISTICS
Practice Mode
Test Mode
Data Recording
The BTL trainer records the following data for each problem
attempted:
Problem number
Final solution state (solved, not solved, interrupted)
Number of incorrect claims of problem solution
Number of replacements made
Elements incorrectly replaced
Total time spent on problem
Number of usages of support functions (practice mode only)
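The per-problem record listed above maps naturally onto a small data structure that the trainer could write to the student's disc at the end of each problem. The field names below are assumed, not the author's; only the list of recorded items comes from the text.

```python
# The per-problem record from the text, sketched as a data structure.
# Field names are assumed for illustration.

from dataclasses import dataclass, field

@dataclass
class ProblemRecord:
    problem_number: int
    solution_state: str            # "solved", "not solved" or "interrupted"
    incorrect_claims: int = 0      # incorrect claims of problem solution
    replacements: int = 0          # number of replacements made
    incorrect_replacements: list = field(default_factory=list)
    total_time_s: float = 0.0      # total time spent on problem
    support_uses: int = 0          # support functions (practice mode only)

rec = ProblemRecord(17, "solved", replacements=3,
                    incorrect_replacements=["A2"], total_time_s=412.0)
print(rec.solution_state, rec.replacements)  # prints "solved 3"
```

One record per attempted problem is enough to reconstruct the effectiveness measures summarised on the student's disc.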
Over the past two years, three unique systems have been
implemented on the trainer/simulator, as follows:
REFERENCES
Perceptronics, Inc.
Woodland Hills, CA 91367
OVERVIEW
ACKNOWLEDGEMENT
638 A. FREEDY AND L. F. LUCACCINI
Individualized Instruction
Adaptive Instruction
[Block diagram: performance evaluation and interactive instructions coupled to the student and a task simulator; within the instructional logic, a student decision model and an instructor decision model.]
Gain_i = Sum_j Pij (M - Mij) / M

Cost: C_i

MAU: MAU_i = Sum_k w_k u_ik, k = 1, ..., K
ACTS uses the MAU model not only as the description of the
student's decision making but also as the basis for estimating
changes in his knowledge as inferred from his decision behaviour.
A technique of artificial intelligence, known as the learning
network approach to pattern classification, is used to estimate
the student's utilities in the EU model (Crooks, Kuppin and
Freedy, 1977). The utility estimator observes the student's
choices among the possible decision alternatives, viewing his
decision making as a process of classifying patterns of event
probabilities. The utility estimator then attempts to classify the
event probability patterns by means of a multi-attribute discrimi-
nant function. These classifications are compared with the
student's choices and an adaptive error-correction training
algorithm is used to adjust pattern weights, which correspond to
utilities, whenever the classifications are incorrect. This
utility estimator operates concurrently in real time as the
student performs troubleshooting operations; thus, the MAU model
continuously tracks the student's decision performance as it
changes during the course of training.
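The error-correction scheme described above can be sketched in perceptron style: weights stand in for utilities, patterns are event probabilities, and weights are nudged only when the predicted choice disagrees with the student's actual choice. This is a simplified assumed illustration, not the ACTS estimator itself; the learning rate, alternatives and feature values are invented.

```python
# A simplified, perceptron-style sketch of the adaptive error-correction
# training described above. Weights play the role of utilities; they are
# adjusted only when the classification disagrees with the student's choice.
# Learning rate and data are assumed for illustration.

def update(weights, alternatives, chosen, lr=0.1):
    """alternatives: {name: feature vector}; chosen: the student's pick."""
    score = lambda x: sum(w * xi for w, xi in zip(weights, x))
    predicted = max(alternatives, key=lambda a: score(alternatives[a]))
    if predicted != chosen:  # error-correction step
        for i, xi in enumerate(alternatives[chosen]):
            weights[i] += lr * xi
        for i, xi in enumerate(alternatives[predicted]):
            weights[i] -= lr * xi
    return weights

w = [0.0, 0.0]
alts = {"replace_A2": [0.1, 0.8], "test_TP9": [0.9, 0.2]}
for _ in range(5):                      # the student consistently tests TP9
    w = update(w, alts, chosen="test_TP9")
print(w)  # weights now favour test_TP9
```

After a single correction the weights already rank "test_TP9" above "replace_A2", mirroring how the estimator tracks a consistent student in real time.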
Pij = [ Sum_{k in Qij} Pk ] / [ Sum_{k in S} Pk ]

where S is the current set of faults and Qij is the subset of S for
which the outcome of action i is the j'th outcome. The a priori
probabilities are obtained from an expert technician during the
development of the task fault model.
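The outcome probability above is a ratio of summed fault priors, so it is a one-line computation once the fault set and its subset are known. The sketch below assumes illustrative fault names and priors; only the formula comes from the text.

```python
# The a priori outcome probability described above:
# Pij = (sum of Pk over k in Qij) / (sum of Pk over k in S).
# The fault set and prior values are invented for illustration.

def outcome_prob(priors, S, Q_ij):
    """priors: {fault: Pk}; S: current fault set; Q_ij: subset giving outcome j."""
    return sum(priors[k] for k in Q_ij) / sum(priors[k] for k in S)

priors = {"R1_open": 0.2, "C3_short": 0.5, "Q2_fail": 0.3}
S = {"R1_open", "C3_short", "Q2_fail"}
Q = {"C3_short", "Q2_fail"}         # faults for which action i gives outcome j
print(outcome_prob(priors, S, Q))   # approximately 0.8
```

As faults are eliminated, S shrinks and the same formula renormalises the remaining priors automatically.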
INSTRUCTIONAL APPROACH
[Sample display: a circuit schematic with output, A.C. input, common and TPA reference points, and test points TP 1 through TP 9 and TP A. The aiding message reads: UNDER THESE CIRCUMSTANCES, THE INSTRUCTOR WOULD CONSIDER THE FOLLOWING FOUR ACTIONS: TP9DCVR, TPADCVR, TP8DCCR, TP9DCCR. TO CONTINUE PRESS "RETURN".]
EVALUATION
expected. It was found that the expected value (EV) model quickly
converged on the decision behaviour of students who exhibited
consistent decision strategies. In these initial studies, students
varied widely in rate of decision making and consistency of
approach. When aiding (provision of the alternatives an expert
would consider) and feedback (identification of the alternative an
expert would choose) were given, students solved circuit fault
problems at lower cost than without such assistance (Crooks,
Kuppin and Freedy, 1977).
[Graph: commercial gain utility and cost utility plotted against number of training problems (4 to 20), converging toward the expert model's values.]
Figure 3. Convergence of students' utilities to those of expert model during ACTS training
TRAINING SYSTEMS (ACTS) 653
[Charts: cumulative number of troubleshooting actions (0 to 150) and cumulative cost of troubleshooting in dollars (0 to 1800) over 4-trial blocks 1 to 5, for the ACTS group and the actual equipment group.]
[Charts: average number of troubleshooting actions and average cost of troubleshooting (dollars) on test problems 1 to 5, for the ACTS group and the actual equipment group.]
Figure 5. POST-TRAINING: Troubleshooting performance of students on test problems after training on ACTS or actual equipment, compared to expert model
[Charts: average number of troubleshooting actions (0 to 8) and average cost of troubleshooting (100 to 300 dollars) for the ACTS group and the actual equipment group.]
REFERENCES
INTRODUCTION
660 T. SVANES AND J. R. DELANY
Scenarios
System Control
Save/Restore Feature
[Network diagram: subscribers connected by access lines to switch nodes, with internodal links between nodes.]
CONTROLLING A NETWORK
[Network Status Map display: time elapsed, scenario events, network controls, priority distribution of active traffic, system status, and common equipment at each switch node.]
THE DIAGONAL OF THE TRIANGULAR MATRIX REPRESENTS THE NETWORK SWITCH NODES, AND THE
MATRIX ELEMENTS REPRESENT THE INTERNODAL LINKS. AN ENTRY OF -1 IN THE MATRIX INDICATES NON-
CONNECTIVITY BETWEEN THE NODES INTERSECTING AT THAT POINT. FOR EACH NODE ALONG THE
DIAGONAL, THE NODE ID IS LISTED. ABOVE EACH ID, IN PARENTHESES USUALLY, ARE THE NUMBER OF CALLS
CURRENTLY ACTIVE AT THAT NODE. THE PARENTHESES ARE REPLACED BY OTHER STATUS INDICATORS IF
THE NODE IS NOT IN A NORMAL MODE OF OPERATION. FOR EACH INTERNODAL LINK IN THE MATRIX, THE
NUMBER OF CHANNELS CURRENTLY OPERATIONAL AND NOT BEING USED BY A CALL IS LISTED.
AT THE TOP OF THE DISPLAY, THE CURRENT SIMULATED TIME, IN MINUTES, APPEARS. SCENARIO EVENTS
AND NETWORK CONTROLS ARE LISTED ALONG WITH THEIR TIMES OF ACTIVATION. THE TRAFFIC CURRENTLY
ACTIVE AT EACH NODE IS DECOMPOSED INTO THE FIVE PRECEDENCE LEVELS, AND THE NUMBERS OF EACH OF
THE HIGHEST FOUR LEVELS AT EACH NODE ARE PRINTED TO THE LEFT OF THE VERTICAL COLUMN OF NODE
ID'S. TO THE RIGHT OF THE VERTICAL COLUMN OF NODE ID'S, THE CURRENTLY INACTIVE, BUT OPERATIONAL,
COMMON EQUIPMENTS AT EACH NODE ARE ENUMERATED.
THE ENTIRE DISPLAY IS UPDATED AND RECREATED AT EACH OPERATOR INTERFACE PERIOD. THE
OPERATOR SELECTS THE NODES DISPLAYED ON THE NETWORK STATUS MAP INTERACTIVELY.
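The triangular matrix described in the legend lends itself to a simple text rendering: node IDs on the diagonal, free-channel counts on the links below it, and -1 for non-connectivity. The toy sketch below assumes invented node names and channel counts purely to show the layout.

```python
# A toy rendering of the triangular network-status matrix described above:
# node IDs on the diagonal, free-channel counts on the off-diagonal links,
# -1 marking non-connectivity. Node names and counts are invented.

nodes = ["N1", "N2", "N3"]
free_channels = {("N1", "N2"): 6, ("N1", "N3"): -1, ("N2", "N3"): 4}

def status_matrix(nodes, free_channels):
    rows = []
    for i, ni in enumerate(nodes):
        cells = [str(free_channels[(nodes[j], ni)]) for j in range(i)]
        cells.append(ni)               # the diagonal entry is the node ID
        rows.append("  ".join(cells))
    return "\n".join(rows)

print(status_matrix(nodes, free_channels))
```

A full implementation would also place call counts or status indicators above each node ID, as the legend describes, but the triangular link layout is the core of the display.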
The status reports are formed from the SCAT data base at the
time they are requested. Consequently, they reflect the current
status of the parameters being displayed. The Network Status Map,
Figure 3, has been discussed previously. Other status reports list
the current status of access lines by node, internodal trunks by
link, common equipment queueing by node and type of equipment, and
scheduled simulation event queueing by time and call identity.
These last status reports have been included primarily for
SYSTEM CONTROL ANALYSIS AND TRAINING 673
MULTIPLE TERMINALS
REFERENCES
L.P. Goodstein
682 L. P. GOODSTEIN
Terminology
Attention
The workshop groups were asked to devote the first two and a
half hour session to identifying for each topic the important
issues which thereafter were to be discussed and, if possible,
resolved during a second two and a half hour session on the
following day. This was of course done in an optimistic spirit
since the main emphasis was on providing a catalyst for the debate
and discussion process itself which could feed on each partici-
pant's interests, experience, enthusiasm and, perhaps, scepticism
about alternate viewpoints.
the two activities in perspective:

[Diagram: models of the system and models of the human set side by side, with the system's physical processes (chemical, electrical, mechanical); the Wohl and Moray models are indicated.]

1. For a brief description of the referenced model see Goodstein (this volume).
[Table fragments: TEST HYPOTHESIS (from suggestions in list; signals for test from hypothesis); 6. CHOOSE ACTION, SELECT ACTION MODEL (compensation, correction), NO CHOICE: selection process, reflex or habit; 7. MOVEMENT OR ACTION.]
Figure 1. Classification table describing human diagnostic behaviour
Definitions of terms.
Purposes of modelling.
Classes of systems.
Classes of models.
Purpose of Models
Classes of Models
Example
[Diagram: a nuclear power plant controller located on three axes: structural complexity, functional complexity and man-machine interaction complexity.]
3. See contributions by these authors in this volume.
4. See also the previous workshop summary.
1. Scanning
2. Injecting signals
3. Symptom interpretation
4. Search (especially information-cost trade offs)
5. Component replacement/adjustment/repair
Andersson, H. Ergonomrad AB
Box 10032
S-650 10 Karlstad 10
Sweden
Bergman, H. Rijksuniversitet
Psychologisch Lab.
Varkenmarkt 2
Utrecht
Holland
696 PARTICIPANTS
Kessel, C. Hanasher 3
Nve Rom
Ramat Hasharon
Israel
Marshall, E. UWIST
Dept. of Applied Psychology
Llwyn-y-Grant
Penylan, Cardiff CF3 7UX
Wales
Skans, S. LUTAB
Snormakarvagen 29
S-161 47 Bromma
Sweden
Vees, C. TU Berlin
Inst. für Luft- und Raumfahrt
Sekr. F3
Marchstrasse 14
D-1000 Berlin 10
West Germany
Wahlstrom, B. VTT/SAH
Vuorimiehentie 5
SF-02150 Espoo 15
Finland
706 AUTHOR INDEX
Lind
Marshall & Shepherd
Rasmussen
Rouse
Wohl
Ship navigation, 49 et seq.
Gardenier
Short term memory, 40, 176, 188
Curry
Moray
Thompson
Signal detection theory, 196, 308
Kantowitz & Hanson
Moray
Simulators, computer, 204 et seq., 593 et seq., 612 et seq., 621 et seq., 637 et seq., 659 et seq.
Freedy & Lucaccini
Nawrocki
Patrick & Stammers
Rouse
Svanes & Delaney
Towne
Simulators, paper and pencil, 576 et seq.
Marshall & Shepherd
Steel industry, 262, 479
Bainbridge
Syrbe
Supervisory control, 358 et seq.
Johannsen
System representation, 100 et seq., 324 et seq., 424 et seq.
Brooke
de Kleer & Brown
Lind
Systems reliability, 361 et seq., 487 et seq.
Dellner
Johannsen
Task analysis, 288 et seq., 557 et seq., 595 et seq.
Duncan
Leplat
Patrick & Stammers
Team training, 60 et seq.
Gardenier
Tracking task, 144 et seq., 155 et seq., 311
Ephrath & Young
Kantowitz & Hanson
Wickens & Kessel
Utility plant, 488
Dellner
Utility theory, 638, 640, 643 et seq.
Freedy & Lucaccini
Verbal protocols, 243, 262, 265, 276-7, 280 et seq.
Bainbridge
Rasmussen
Workload, 25-6, 147 et seq., 163 et seq.
Ephrath & Young
Sheridan
Wickens & Kessel