Applied Ergonomics
journal homepage: www.elsevier.com/locate/apergo
Article history:
Received 27 May 2013
Accepted 7 April 2014
Available online 2 May 2014

Keywords:
Strategies Analysis Diagram
Cognitive Work Analysis
Validation

Abstract: The Strategies Analysis Diagram (SAD) is a recently developed method to model the range of possible strategies available for activities in complex sociotechnical systems. Previous applications of the new method have shown that it can effectively identify a comprehensive range of strategies available to humans performing activity within a particular system. A recurring criticism of Ergonomics methods is, however, that substantive evidence regarding their performance is lacking. For a method to be widely used by other practitioners such evaluations are necessary. This article presents an evaluation of the criterion-referenced validity and test-retest reliability of the SAD method when used by novice analysts. The findings show that individual analyst performance was average. However, pooling the individual analyst outputs into a group model increased the reliability and validity of the method. It is concluded that the SAD method's reliability and validity can be assured through the use of a structured process in which analysts first construct an individual model, followed by either another analyst pooling the individual results or a group process pooling individual models into an agreed group model.

© 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
http://dx.doi.org/10.1016/j.apergo.2014.04.010
0003-6870/© 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
M. Cornelissen et al. / Applied Ergonomics 45 (2014) 1484–1494
Validation studies can benefit Ergonomics methods by providing clear evaluation and empirical evidence of their performance and value (Stanton and Young, 1999, 2003). Further, a key requisite of ergonomics methods is that they are usable by non-experts and that they achieve acceptable levels of performance when used by other analysts (Stanton and Young, 1999, 2003). Without this, uptake of the method by ergonomics practitioners, designers and engineers may be limited.

The aim of the study reported in this article was to provide a more formal reliability and validity analysis of CWA. This study was conducted using non-expert analysts to provide an evaluation of CWA's level of performance when used by other analysts and ensure uptake of the method by ergonomics practitioners.

1.1. Cognitive Work Analysis

The CWA framework comprises five phases (Vicente, 1999). Each phase models a different set of constraints. First, WDA models the system constraints by describing what the system is trying to achieve and how and with what it achieves its purpose. Second, Control Task Analysis models situational constraints and decision-making requirements. Third, Strategies Analysis models ways in which activities within the system can be carried out. Fourth, social organisation and cooperation analysis models communication and coordination demands imposed by organisational constraints. Fifth, worker competencies analysis describes the skills, rules and knowledge required for the activities possible within the system.

To date, applications have focussed on the development and application of the first two phases: WDA and Control Task Analysis. The third phase, Strategies Analysis, is useful for providing insight into the different response options that enable a system's adaptive capacity. This phase has traditionally not been as well developed, nor applied as often as the earlier phases. Recently, the Strategies Analysis has regained attention (Cornelissen et al., 2012, 2013; Hassall and Sanderson, 2012; Hilliard et al., 2008). In particular, Cornelissen et al. (2013) developed and applied a structured method, the Strategies Analysis Diagram (SAD), to model strategies available in complex sociotechnical systems. Initial evaluations of this method have gathered evidence of the method's effectiveness in identifying possible strategies (Cornelissen et al., 2012, 2013); however, the method would benefit from more formal evaluations. This paper is a response to this requirement.

1.1.1. Strategies Analysis Diagram

The SAD method models how activities can potentially be executed within a system's constraints. It also models criteria for when or why work will be executed in a certain way. SAD builds upon the first phase of the CWA framework by adding verbs and criteria to the constraints identified. This allows further specification of the courses of action possible within the system's constraints as well as the criteria influencing the employment of courses of action.

The SAD, see Fig. 1, is a networked hierarchical diagram using means-ends links to represent 'how' and 'why' relationships between the different levels of the diagram. Links upwards explain why a certain object or function is there, whereas links downwards explain how a system works to achieve its purpose or execute its functions. The levels transferred from the WDA are illustrated in light grey and the SAD-specific levels are dark grey.

The lower half of the SAD models how activities can potentially be executed. The diagram describes, bottom up, verbs that describe potential interactions with or manipulations of objects in the system (e.g. follow), physical objects present in the system (e.g. lane markings), object related processes afforded by the physical objects (e.g. display information) and purpose related functions describing
activities that need to be carried out for the system to achieve its purpose (e.g. determine path). The top half of the diagram describes why certain ways of work, or courses of action, are seen within the system. The top of the diagram describes, top-down, the functional purpose, that is, the reason why the system exists (e.g. support negotiation of right hand turns by road users), values and priority measures evaluating the effectiveness of a system and driving behaviour (e.g. safety), and criteria describing when certain courses of action at the lower end of the diagram are valid or likely to be chosen (e.g. high traffic volume). Together, nodes from the different levels provide analysts with a syntax for strategy definition. For example, bottom up, a strategy could be defined as 'assess' 'road users' 'show behaviour' when 'avoiding conflict with other road users'. Assess whether the 'road user is unfriendly', for 'safety' purposes and to 'support negotiation of right hand turns by road users'.

1.2. Evaluating the Strategies Analysis Diagram

Based on initial evaluations of the SAD method (Cornelissen et al., 2012; Cornelissen et al., 2013) the evidence suggests that, when used by its developers, it can effectively identify a comprehensive range of strategies available to humans performing activity within a particular system. Following this, the next critical step is to more formally evaluate the method's usefulness and ensure that the SAD can be applied by other practitioners. The method's reliability and validity when used by analysts novice in the SAD method should be established.

The aim of the study reported in this article was to evaluate the performance of the SAD method when used by analysts not already skilled in the CWA framework. Specifically, the study aimed to evaluate SAD in terms of its reliability and validity when used by novice analysts to identify the range of strategies available to different road user groups at intersections. This also allows for a wider validation and reliability evaluation of formative methods.

1.2.1. Reliability and validity analysis

Evaluating the method's criterion-referenced validity entails ensuring that the method allows analysts to come up with a SAD that contains accurate, and not too much irrelevant, content. Evaluating its test-retest reliability comprises ensuring that the method produces a similar model when used by the same analyst for the same system more than once.

1.2.1.1. Assessing validity. Studies assessing the validity of Ergonomics methods have been reported in the literature (Baber and Stanton, 1996; Stanton et al., 2009; Stanton and Young, 2003). Many of those have focussed on human reliability and error prediction methods (Baysari et al., 2011; Kirwan et al., 1997; Stanton and Young, 2003). In those studies, the validity of methods was assessed by comparing a method's results (e.g. errors predicted) against actual observations (e.g. errors observed). Since the CWA framework is formative and models possible behaviour, including behaviour beyond that currently prescribed or actually seen within a system, a comparison of the method's results with actual observations would not provide sufficient conclusive evidence about the validity of the method.

Alternatively, CWA results could be compared to results from a similar but validated method. However, CWA is unique in that it is a constraints-based approach and is one of few formative methods, and substantial validation studies of such methods remain absent from the literature to date. Therefore, no method is available that could be used for SAD's validation.

Expert models are used when other validated standards are not available (Gordon et al., 2005). Expert models in the context of CWA are models developed by expert analysts who are highly experienced in applying the methods from the CWA framework. It is worth noting that expert CWA models are not normative models representing the expert's knowledge of a system, but rather well developed formative CWA analyses conducted by expert analysts. The assumption is that the expert model is most accurate and highly valuable in circumstances in which uncertainty exists and behaviour has yet to occur, is noisy or complex (Bolger and Wright, 1992). In the absence of other objective standards, a model developed by expert analysts with no time constraints is likely to be the best standard against which to compare other analysts' CWA results.

Once the standard against which the novice analysts' results will be assessed is established, measures to assess the quality of the novice results have to be determined. Quantitative methods to compare expert results with novice results (or predicted versus actual outcomes) are often based on the use of signal detection theory to calculate the sensitivity of the method under analysis (Baber and Stanton, 1994; Stanton et al., 2009; Stanton and Young, 2003). Signal detection theory sorts the method's outputs into hits, misses, false alarms and correct rejections. Hits represent items identified by novice analysts that were also identified by expert analysts. Misses represent items that were identified by expert analysts but not by novice analysts. False alarms refer to those items identified by novice analysts but not the expert analysts. Correct rejections are those items identified by neither the expert nor the novice analysts. The signal detection theory metrics are commonly used to assess the reliability and validity of ergonomics methods such as human error prediction (Stanton et al., 2009); however, not all of the metrics may usefully apply to formative methods.

Since formative methods describe things that could be possible within a particular system, it is difficult to use the correct rejections category because the correct rejections are unknown and possibly infinite. That is, theoretically the total number of items that could be included is anything in the world as we know it, as formative methods are not restrained by what is currently happening or should be happening. Therefore, any number chosen to represent the total set of items in the world would be artificial, and it is hard to know what items were actively rejected from this large pool of items by expert and novice analysts. While others (Stanton and Baber, 2002; Stanton and Stevenage, 1998) have been able to argue for a theoretical maximum based on a set number of tasks and error categories provided by a taxonomy, such a theoretical maximum would be artificially inflated when used for formative methods. Therefore, it is argued that measures using correct rejections are not suitable for assessing the validity of the SAD method.

Measures involving hits, misses or false alarms can be used for the evaluation of CWA. Such measures include hit rate (hits divided by hits plus misses), which allows comparison of items identified by novice analysts versus items identified by expert analysts. The false alarm rate cannot be used here as it involves correct rejections. To still account for the number of false alarms a novice analyst identifies, a measure often used in clinical or recall studies can be applied: positive predictive value (Descatha et al., 2009; Lindegård Andersson and Ekman, 2008). Predictive value (hits divided by hits plus false alarms) reflects the number of items identified by novice analysts that were also identified by expert analysts compared to the total number of items identified by novice analysts.

1.2.1.2. Assessing reliability. Reliability of Ergonomics methods is often assessed using a test-retest paradigm (Baysari et al., 2011). Measures include percentage agreement (Baber and Stanton, 1996; Baysari et al., 2011) and Pearson's correlation (Harris et al., 2005; Stanton and Young, 2003). As discussed above, the formative nature of CWA provides some challenges to traditional Ergonomics reliability and validity measures. To avoid unwarranted overcomplication of the analysis, percentage agreement was applied here to evaluate the test-retest reliability. The reliability measure compares the items a novice analyst identified at two different times.

1.2.2. Multi-analyst approach

CWA is a resource intensive method. The formative nature of the method requires participants to go beyond what they currently know, which is a challenging activity. Further, the analysis concerns a complex system of which analysts may understand a part in great detail while lacking knowledge of other parts. A multi-analyst approach, using more than one analyst to conduct the analysis, is therefore practicable for a method such as CWA to decrease the resources required and compensate for the shortfalls of individual analysts (Stanton et al., 2009).

Other validation studies have pooled individual results (Harris et al., 2005) or proposed multiple analyst approaches (Stanton et al., 2009). These approaches are suggested to increase the validity and comprehensiveness of the methods' results.

Unfortunately, one of the main weaknesses of a multi-analyst approach is the increase in false alarm rate (e.g. items identified that are not accurate) (Stanton et al., 2009). To ensure that the benefits of a multi-analyst approach outweigh the cost of introducing false alarms, it is worth exploring strategies to reduce the false alarms and ensure the quality of the output. For example, it is assumed that if a method is accurate, relevant content should be identified more consistently than irrelevant content. A practical solution for a multi-analyst approach to CWA would be to only include items if more than one novice analyst generated that item, or if multiple analysts agreed on the relevance of that item in a group discussion. Therefore, the present validity analysis includes both an unadjusted multi-analyst model (collating all individual items) and an adjusted pooled multi-analyst model (collating all individual items into one model, but eliminating all items that have only been identified by one novice analyst) to evaluate the value of a multi-analyst approach to SAD.

Fig. 2 summarises the approach taken to evaluate the SAD method. Validity was measured by quantifying the hit rate and predictive value of participants' results. Reliability was measured using a test-retest paradigm. Results were analysed for individual analysts as well as for a pooled multi-analyst approach.

There were a number of hypotheses. It is expected that if the SAD method is valid, novice analysts' models will be similar to the expert analyst's model. It is expected, however, that novice analysts' models will fail to produce complete coverage of the expert analyst's model within the constraints of the study. Reasons for this include the use of novice analysts, the time constraints of a reliability and validity study, and the semi-structured formative approach of SAD. It is expected that by using a multi-analyst approach, and especially an adjusted multi-analyst model, results will improve and resemble the expert analyst's model better. If the SAD method can be used reliably to build a SAD and elicit strategies, novice analysts' models over time are expected to be similar.

2. Method

2.1. Participants

17 transportation safety professionals aged between 27 and 61 (M = 38, SD = 11) took part in the study, see Tables 1 and 2. While the participants were all professionals with an interest in Human Factors, only 4 of them had applied CWA once or twice before, but never in full (i.e. using all five phases). None of the participants had
Table 1
Participant background.

Participants (n = 17)                             n
Gender                   Female                   8
                         Male                     9
Educational background   Human Factors           10
                         Engineering              7
Employment background    Academia                 5
                         Government agency        8
                         Industry                 4

Fig. 3. Example on-road and pedestrian crossing route for right hand turns at Australian intersections.
Participants were once more prompted about the formative nature of the method and urged to identify possible verbs and criteria, not only those currently seen in our intersection system. Participants were given an hour and 45 min to build a SAD.

After a short break, participants were provided with a SAD prepared for them and were asked to elicit strategies from this diagram. Participants elicited strategies for a simple and a complex scenario, see Table 3. Participants were given 30 min to elicit strategies for the simple scenario and an hour for the complex scenario.

2.5.2. Workshop 2: data for test-retest reliability

A month after the first workshop participants returned for a second workshop. In workshop two, participants received refresher training familiarising them again with the method. After the refresher training, participants started with the main exercise. This was the same exercise as in workshop one. Once finished with the exercise, participants filled out a feedback form and were thanked for their participation and reimbursed for their expenses.

2.6. Data analysis

The expert model was developed by one analyst, and reviewed by two analysts, all with extensive experience in the CWA framework and the SAD method.

The validity of SAD was assessed by comparing the verbs, criteria and strategies in the participants' models (novice analysts) with those in the expert model (expert analyst). Measures of validity included hit rate and predictive value. The analysis is depicted in Fig. 5. The validity of the method using a multi-analyst approach was analysed by pooling individual participants' analyses into a group model. That is, the raw data was combined into one group model and the verbs, criteria and strategies identified in this group model were compared to the expert analyst's model. In addition, an adjusted multi-analyst model was tested to evaluate whether a practical solution could be found to counter the increase of false alarms in a multi-analyst approach. In the adjusted multi-analyst model, items were eliminated from the group model if only one novice analyst from the overall group of 17 analysts generated it. The output of this approach is referred to in the text as the adjusted pooled model.

The reliability of SAD was assessed by comparing participants' results at two different times, using a test-retest paradigm. The verbs, criteria and strategies identified by participants at T1 were compared with the verbs, criteria and strategies identified at T2. To assess the reliability of a multi-analyst approach, items identified by all 17 novice analysts at T1 were combined into one group model and compared to the items identified by all 17 novice analysts at T2. In the adjusted multi-analyst model, items that were only identified by one of the 17 novice analysts were eliminated from the group model.

3. Results

3.1. Criterion-referenced validity

3.1.1. Building the Strategies Analysis Diagram

3.1.1.1. Individual analyst approach. All participants defined verbs and criteria. Not all participants linked the nodes in the SAD, while some connected all nodes with each other. Therefore the definition of verbs and criteria is analysed, but the linking of the nodes was not analysed further.

Table 3
Scenarios for strategy elicitation.

Simple scenario: The road user (driver or cyclist) has entered the right hand turning lane. There is no traffic in front. The road user wants to position him/herself at the traffic lights aiming to activate the traffic light sensor embedded in the tarmac. What strategies could the road user apply to position him or herself at the traffic light sensor?

Complex scenario: The road user (cyclist) has entered the right hand turning lane. Traffic has banked up in the turning lane. The cyclist is approaching the stopped traffic and is deciding whether to filter to the front of the traffic queue. What strategies could the road user apply to decide whether to filter to the front?
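The validity measures and the adjusted pooling procedure described in the data analysis section can be sketched in code. This is a minimal illustration, not the study's actual analysis pipeline; the item names below (verbs such as "follow", "assess") are invented for demonstration:

```python
from collections import Counter


def hit_rate(novice: set, expert: set) -> float:
    """Hits / (hits + misses): share of expert-model items the novice model found."""
    hits = len(novice & expert)
    misses = len(expert - novice)
    return hits / (hits + misses)


def predictive_value(novice: set, expert: set) -> float:
    """Hits / (hits + false alarms): share of novice items also in the expert model."""
    hits = len(novice & expert)
    false_alarms = len(novice - expert)
    return hits / (hits + false_alarms)


def pooled_models(analyst_models: list) -> tuple:
    """Return (unadjusted, adjusted) group models from a list of per-analyst sets.

    The adjusted pooled model drops items generated by only one analyst.
    """
    counts = Counter(item for model in analyst_models for item in model)
    unadjusted = set(counts)
    adjusted = {item for item, n in counts.items() if n >= 2}
    return unadjusted, adjusted


# Hypothetical expert model and three novice models of SAD verbs.
expert = {"follow", "assess", "position", "accelerate"}
novices = [{"follow", "assess", "overtake"},
           {"follow", "position"},
           {"assess", "position", "wait"}]

unadjusted, adjusted = pooled_models(novices)
print(hit_rate(unadjusted, expert))        # -> 0.75 (pooled model misses "accelerate")
print(predictive_value(adjusted, expert))  # -> 1.0 (adjustment removed both false alarms)
```

Note how, as in the reported results, pooling raises the hit rate while the adjustment step recovers predictive value by discarding single-analyst items.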
Fig. 5. Example analysis of evaluating the accuracy of a novice model compared to the expert model; identifying verbs and criteria.

It was found that the accuracy of the verbs and criteria identified by individual participants, produced within the time constraints set, compared to the verbs and criteria identified by the expert analysts was average, see Fig. 3. The results showed that only 21–24% of the verbs and criteria described in the expert model were identified by the novice analysts.

3.1.1.2. Multi-analyst approach. The novices' group model, produced by pooling individual verbs and criteria into one model, improved the accuracy of the verbs and criteria identified when compared with the expert model, see Fig. 6. Pooled results resemble the expert model well, identifying 81–88% of the verbs and criteria defined in the expert model.

Unfortunately, pooling participants' results into a group model increases the number of false alarms due to the pooling of error data. Therefore the predictive value of the pooled model is low (29–33%).

3.1.1.3. Adjusted pooled model. The rise in false alarms was countered in the adjusted pooled model by removing items that were identified by one novice analyst only. This reduced the irrelevant verbs and criteria significantly while only removing a small number of verbs and criteria that were in fact relevant. This improved the predictive value of the pooled model to 48 and 66% respectively, see Fig. 6.

3.1.2. Eliciting strategies

Next, participants' ability to accurately elicit strategies from the SAD diagram was assessed. This involved evaluating whether novice analysts' courses of action (comprising verbs, physical objects and object related processes) were similar to the courses of action identified by the expert analyst in their model.

3.1.2.1. Individual analyst approach. The results showed that novices' individual results for eliciting courses of action did not achieve high levels of accuracy, see Fig. 7. The novice analysts identified only 8–13% of the strategies identified in the expert model.

3.1.2.2. Multi-analyst approach. Pooling the courses of action identified by individual participants into a group model improved the results, see Fig. 4. As a group, the novice analysts identified 57–58% of the strategies in the expert model.

3.1.2.3. Adjusted pooled model. Again, strategies identified by one participant only were removed. This resulted in an adjusted pooled model. This adjustment reduced the accuracy of the group model, with the group now only identifying 32–43% of the strategies identified in the expert model; see Fig. 7. Participants elicited a great variety of courses of action and therefore many relevant strategies were removed. Adjusting the pooled model did increase the predictive value. Hence, the chances that a course of action that was identified was relevant were greater for the adjusted than the non-adjusted pooled model.

Fig. 6. Individual (Mdn and IQR) and pooled results for verbs and criteria.

Fig. 7. Individual (Mdn and IQR) and pooled results for eliciting strategies.

3.2.2. Eliciting strategies from the Strategies Analysis Diagram

3.2.2.1. Individual analyst approach. Reliability for eliciting strategies was lower. There was only slight agreement for individuals eliciting strategies, see Fig. 9. Between .11 and .13 of the strategies elicited at T1 were also elicited at T2.
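The test-retest figures above are proportions of items re-identified across the two workshops. A minimal sketch of percentage agreement computed this way, using invented strategy names and assuming (as one plausible convention) the T1 set as the denominator:

```python
def percentage_agreement(t1: set, t2: set) -> float:
    """Proportion of items identified at T1 that were identified again at T2."""
    if not t1:
        return 0.0
    return len(t1 & t2) / len(t1)


# Hypothetical strategies one analyst elicited at the two workshops.
t1 = {"filter left", "wait in queue", "mount kerb", "overtake right"}
t2 = {"wait in queue", "filter left", "dismount"}

print(percentage_agreement(t1, t2))  # -> 0.5 (2 of the 4 T1 strategies recur at T2)
```

On this measure, a value of .11–.13, as reported for individual strategy elicitation, means roughly one in eight T1 strategies recurred at T2.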
whether more appropriate methods could be developed for reliability and validity evaluations of formative methods. Moreover, establishing the reliability and validity of Ergonomics methods is a key endeavour for all within the discipline, particularly to support the increased involvement of Ergonomics methods and practitioners in the design and evaluation of safety critical systems. Continued discussion and investigation in this area is therefore encouraged.

Future directions should focus on providing more detailed guidance on the SAD process and on how to define verbs, criteria and strategies, similar to the directions provided on conducting a WDA by Naikar et al. (2005). While in the current study the pooling exercise was conducted by a separate analyst, future directions include exploring the difference between such an approach and allowing the individuals who build their own models to pool their models into a group model through a structured group process.

Acknowledgement

Miranda Cornelissen's contribution to this article was conducted as part of her PhD candidature. This was funded by a Monash Graduate Scholarship and a Monash International Postgraduate Research Scholarship. Paul Salmon's contribution to this article was funded through his Australian National Health and Medical Research Council post doctoral fellowship.

References

Ahlstrom, U., 2005. Work domain analysis for air traffic controller weather displays. J. Saf. Res. 36, 159–169.
Baber, C., Stanton, N.A., 1994. Task analysis for error identification: a methodology for designing error-tolerant consumer products. Ergonomics 37 (11), 1923–1941.
Baber, C., Stanton, N.A., 1996. Human error identification techniques applied to public technology: predictions compared with observed use. Appl. Ergon. 27 (2), 119–131.
Baysari, M.T., Caponecchia, C., McIntosh, A.S., 2011. A reliability and usability study of TRACEr-RAV: the technique for the retrospective analysis of cognitive errors – for rail, Australian version. Appl. Ergon. 42 (6), 852–859.
Birrell, S.A., Young, M.S., Jenkins, D.P., Stanton, N.A., 2011. Cognitive Work Analysis for safe and efficient driving. Theor. Issues Ergon. Sci., 1–20.
Bolger, F., Wright, G., 1992. Reliability and validity in expert judgment. In: Wright, G., Bolger, F. (Eds.), Expertise and Decision Support. Plenum Press, New York, pp. 47–76.
Burns, C.M., Bisantz, A.M., Roth, E.M., 2004. Lessons from a comparison of work domain models: representational choices and their implications. Hum. Factors 46 (4), 711–727.
Cornelissen, M., Salmon, P.M., Jenkins, D.P., Lenné, M.G., 2012. A structured approach to the strategies analysis phase of cognitive work analysis. Theor. Issues Ergon. Sci., 1–19.
Cornelissen, M., Salmon, P.M., McClure, R., Stanton, N.A., 2013. Using cognitive work analysis and the strategies analysis diagram to understand variability in road user behaviour at intersections. Ergonomics, 1–17.
Descatha, A., Roquelaure, Y., Caroly, S., Evanoff, B., Cyr, D., Mariel, J., Leclerc, A., 2009. Self-administered questionnaire and direct observation by checklist: comparing two methods for physical exposure surveillance in a highly repetitive tasks plant. Appl. Ergon. 40 (2), 194–198.
Embrey, D.E., 1986. SHERPA: a systematic human error reduction and prediction approach. In: Paper Presented at the International Meeting of Advances in Nuclear Power Systems, Knoxville, Tennessee.
Gordon, R., Flin, R., Mearns, K., 2005. Designing and evaluating a human factors investigation tool (HFIT) for accident analysis. Saf. Sci. 43 (3), 147–171.
Harris, D., Stanton, N.A., Marshall, A., Young, M.S., Demagalski, J., Salmon, P., 2005. Using SHERPA to predict design-induced error on the flight deck. Aerosp. Sci. Technol. 9 (6), 525–532.
Hassall, M.E., Sanderson, P.M., 2012. A formative approach to the strategies analysis phase of cognitive work analysis. Theor. Issues Ergon. Sci., 1–47. http://dx.doi.org/10.1080/1463922x.2012.725781.
Hilliard, A., Thompson, L., Ngo, C., 2008. Demonstrating CWA strategies analysis: a case study of municipal winter maintenance. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 52 (4), 262–266.
Hollnagel, E., 2002. Understanding accidents: from root causes to performance variability. In: Paper Presented at the IEEE 7th Human Factors Meeting, Scottsdale, Arizona.
Hollnagel, E., 2004. Barriers and Accident Prevention. Ashgate Publishing, Aldershot.
Hollnagel, E., 2006. Resilience: the challenge of the unstable. In: Hollnagel, E., Woods, D.D., Leveson, N. (Eds.), Resilience Engineering: Concepts and Precepts. Ashgate, Aldershot, United Kingdom.
Jenkins, D.P., Stanton, N.A., Salmon, P.M., Walker, G.H., Young, M.S., 2008. Using cognitive work analysis to explore activity allocation within military domains. Ergonomics 51 (6), 798–815.
Kirwan, B., Kennedy, R., Taylor-Adams, S., Lambert, B., 1997. The validation of three Human Reliability Quantification techniques – THERP, HEART and JHEDI: part II – results of validation exercise. Appl. Ergon. 28 (1), 17–25.
Lindegård Andersson, A., Ekman, A., 2008. Reply to the short communication paper by T.P. Hutchinson regarding "Concordance between VDU-users' ratings of comfort and perceived exertion with experts' observations of workplace layout and working postures", Applied Ergonomics (2005) 36, 319–325. Appl. Ergon. 39 (1), 133–134.
Naikar, N., 2005. A methodology for work domain analysis, the first phase of cognitive work analysis. In: Paper Presented at the Human Factors and Ergonomics Society 49th Annual Meeting, Orlando, Florida.
Naikar, N., Hopcroft, R., Moylan, A., 2005. Work Domain Analysis: Theoretical Concepts and Methodology. DSTO, Fishermans Bend.
Rasmussen, J., Pejtersen, A.M., Goodstein, L.P., 1994. Cognitive Systems Engineering. Wiley, New York.
Stanton, N.A., 2006. Hierarchical task analysis: developments, applications, and extensions. Appl. Ergon. 37 (1), 55–79.
Stanton, N.A., Baber, C., 2002. Error by design: methods for predicting device usability. Des. Stud. 23 (4), 363–384.
Stanton, N.A., Salmon, P.M., Harris, D., Marshall, A., Demagalski, J., Young, M.S., Dekker, S.W.A., 2009. Predicting pilot error: testing a new methodology and a multi-methods and analysts approach. Appl. Ergon. 40 (3), 464–471.
Stanton, N.A., Salmon, P.M., Rafferty, L., Walker, G.H., Baber, C., Jenkins, D.P., 2013. Human Factors Methods: A Practical Guide for Engineering and Design, second ed. Ashgate Publishing, Aldershot, United Kingdom.
Stanton, N.A., Stevenage, S.V., 1998. Learning to predict human error: issues of acceptability, reliability and validity. Ergonomics 41 (11), 1737–1756.
Stanton, N.A., Young, M.S., 1998. Is utility in the mind of the beholder? A study of ergonomics methods. Appl. Ergon. 29 (1), 41–54.
Stanton, N.A., Young, M.S., 1999. What price ergonomics? Nature 399, 197–198.
Stanton, N.A., Young, M.S., 2003. Giving ergonomics away? The application of ergonomics methods by novices. Appl. Ergon. 34 (5), 479–490.
Vicente, K.J., 1999. Cognitive Work Analysis: Toward Safe, Productive and Healthy Computer-based Work. Lawrence Erlbaum Associates, Mahwah, New Jersey.
Woods, D.D., 1988. Coping with complexity: the psychology of human behaviour in complex systems. In: Goodstein, L.P., Andersen, H.P., Olsen, S.E. (Eds.), Tasks, Errors, and Mental Models. Taylor & Francis, London, pp. 128–148.