The American Journal of Surgery (2016) 211, 214-225

Review

An overview of research priorities in surgical simulation: what the literature shows has been achieved during the 21st century and what remains
Maximilian J. Johnston, M.B., B.Ch., M.R.C.S.a,*,
John T. Paige, M.D., F.A.C.S.b,
Rajesh Aggarwal, M.B.B.S., M.A., Ph.D., F.R.C.S.c,
Dimitrios Stefanidis, M.D., Ph.D., F.A.C.S.d,
Shawn Tsuda, M.D., F.A.C.S.e, Ankur Khajuria, B.Sc. (Hons)f,
Sonal Arora, Ph.D., M.R.C.S.a, on behalf of the Association for Surgical
Education Simulation Committee
aImperial Patient Safety Translational Research Centre, Department of Surgery and Cancer, Imperial College London, Room 5.03, 5th floor, Wright-Fleming Building, St Mary's Campus, Norfolk Place, London, W2 1PG, UK; bDepartment of Surgery, Louisiana State University Health Sciences Center, New Orleans, LA, USA; cDepartment of Surgery, Faculty of Medicine, McGill University, Montreal, Canada; dDepartment of Surgery, Carolinas Medical Center, Charlotte, NC, USA; eDepartment of Surgery, University of Nevada School of Medicine, NV, USA; fDepartment of Surgery and Cancer, Imperial College London, London, UK

KEYWORDS:
Simulation;
Training;
Education;
Curriculum;
Skills;
Competence

Abstract
BACKGROUND: Key research priorities for surgical simulation have been identified in recent years.
The aim of this study was to establish the progress that has been made within each research priority and
what still remains to be achieved.
METHODS: Members of the Association for Surgical Education Simulation Committee conducted
individualized literature reviews for each research priority that were brought together by an expert panel.
RESULTS: Excellent progress has been made in the assessment of individual and teamwork skills in
simulation. The best methods of feedback and debriefing have not yet been established. Progress in
answering more complex questions related to competence and transfer of training is slower than for other
questions. A link between simulation training and patient outcomes remains elusive.

M.J.J. and S.A. are associated with the National Institute for Health Research Imperial Patient Safety Translational Research Centre (grant number
40490). The views expressed are those of the authors. R.A. is a consultant for Applied Medical. J.T.P. co-edited Simulation in Radiology.
The authors declare no conflicts of interest.
* Corresponding author. Tel.: +44 (0) 207-594-7925; fax: +44 (0) 207-594-3137.
E-mail address: m.johnston@imperial.ac.uk
Manuscript received April 27, 2015
0002-9610/$ - see front matter © 2016 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amjsurg.2015.06.014


CONCLUSIONS: Progress has been made in skills assessment, curricula development, debriefing and
decision making in surgery. The impact of simulation training on patient outcomes represents the focus
of simulation research in the years to come.
© 2016 Elsevier Inc. All rights reserved.

Since the turn of the century, simulation-based education and training has become an important topic of research and study. Since 2000, the 10 highest impact surgical journals have published more than 350 articles in which simulation is a major methodology. This large volume of research has contributed significantly to the field of surgery, leading to advances in surgeons' operative,1 crisis management,2 and critical care3 skills. In addition, it has led to important breakthroughs in surgery and simulation-based science.
This explosion of surgical simulation research, however, has
taken place in an uncoordinated fashion. As a result, the
progress made in the different areas of surgical simulation
research is uneven and remains relatively unknown. To bring a
degree of focus and direction to this important area of research,
the Association for Surgical Education (ASE) Simulation
Committee applied a systematic approach (ie, a Delphi process)
to identify the most important research priorities in surgical
simulation.4 With this article, the ASE Simulation Committee
aims to report on the progress made since the turn of the century
in each research priority for surgical simulation and to identify
prime areas for ongoing research.

Methods
Participants and identification of relevant literature
Select members of the ASE Simulation Committee
conducted individualized narrative literature reviews for
each of the 10 most important research priorities identified
in the previous Delphi study.4 An expert panel then collated
the results to give a comprehensive overview of the current
landscape for each priority.

Assimilation of literature into progress reports


The ASE Simulation Committee meets bi-annually.
Each investigator presented a summary of the literature
they had reviewed when answering their research priority
before the Committee to ensure that important publications
and reports had not been missed. Once the Committee had
approved all reviews, a core team of investigators collated
the results of each review into the following report.

Results
Each research priority is listed subsequently as a
question and answered according to the findings identified
in the published literature (see Table 1).

How should a simulator curriculum be designed and evaluated?
A curriculum for training an individual, a predefined group, or all the staff within a hospital system should be founded on 2 key aspects. The first is to perform a needs analysis to determine where the delta lies for improvement, who the learners would be, in which setting they should be taught, and when (ie, to what level) the curriculum should be offered.5 The second aspect is to then define the learning objectives for the curriculum, which provide a measure of its efficacy. A curriculum may be based around a simple task, such as removal of an abdominal drain, a complex set of tasks related to the management of the complex trauma patient, or an entire skills curriculum.6 It is an absolute must that the curriculum sets out the knowledge, skills, and attitudes that the learner will achieve. The curriculum can be a single event, lasting for 1 to 2 hours (the so-called "little c" curriculum), or it can be delivered over a period of many years (the "big C" curriculum).7,8
With regard to a simulator curriculum, the process should be as follows (a schematic code sketch of these stages is given after the list):9

1. Planning: the purpose of the curriculum should be stated, including its links and appropriateness to other stages of the learner's education, for example, to learn basic colonoscopy skills before a rotation on the colorectal service.
2. Content: the curriculum must set out the general, professional, and specialty-specific content to be mastered, for example, to learn the knowledge, skills, and behaviors required to safely perform a colonoscopy on a patient.
3. Delivery: indication should be given of how curriculum implementation will be managed and assured locally, for example, through standardization of a colonoscopy curriculum across a region, learners and trainers can travel across geographical boundaries to participate in the curriculum.
4. Outcomes: robust assessment against transparent criteria must be undertaken, with relevant feedback to trainees, which feeds into standards for classification and documentation, for example, the colonoscopy training curriculum has an end point, which relates to expansion of one's clinical activities.
5. Review: plans for curriculum evaluation and monitoring must be set out, with resources identified to support trainee learning and assessment, for example, the effectiveness of the colonoscopy curriculum should be measured through predefined outcomes, such as cecal intubation rates, and anyone who has the ability and wishes to learn basic colonoscopy should have the opportunity to engage in the curriculum.
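Put concretely, these 5 stages can be treated as a structured checklist. The minimal sketch below (in Python; the class, field names, and example values are illustrative rather than drawn from the GMC standards) shows one way of recording a curriculum against the stages and flagging any stage left undefined:

from dataclasses import dataclass, fields

# Illustrative curriculum record mirroring the 5 stages described above.
@dataclass
class SimulationCurriculum:
    planning: str   # purpose and links to other stages of training
    content: str    # general, professional, and specialty-specific content
    delivery: str   # how implementation is managed and assured locally
    outcomes: str   # assessment criteria and feedback arrangements
    review: str     # evaluation plan and predefined outcome measures

def missing_stages(curriculum: SimulationCurriculum) -> list[str]:
    # Return the names of any stages left empty.
    return [f.name for f in fields(curriculum)
            if not getattr(curriculum, f.name).strip()]

colonoscopy = SimulationCurriculum(
    planning="Basic colonoscopy skills before a colorectal rotation",
    content="Knowledge, skills, and behaviors for safe colonoscopy",
    delivery="Standardized curriculum shared across a region",
    outcomes="Assessment against transparent criteria with feedback",
    review="",  # evaluation plan not yet defined
)
print(missing_stages(colonoscopy))  # ['review']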

Table 1. Achievements and areas for future work for each research priority

Research priority 1. How should a simulator curriculum be designed and evaluated?
Key achievements: (1) Curricula developed for procedural skills and laparoscopic operations using simulation. (2) Curricula incorporated into residency programs in multiple countries.
Areas for future research: (1) Determine which simulation delivery methods are effective in skill acquisition. (2) Identify quality indicators and outcomes to measure curriculum effectiveness.

Research priority 2. What are the best methods/metrics to assess technical and nontechnical performance on simulators and in the OR?
Key achievements: (1) Development of simple metrics for technical skill measurement and rating scales for nontechnical skills. (2) Publication of multicenter studies and statewide registries for remote rating of surgical skill.
Areas for future research: (1) Determine the relevance of current metrics. (2) Establish whether measures differ based on the procedure or skill being measured.

Research priority 3. What are the performance criteria that surgical trainees need to achieve to be competent based on their training level (on a national level)?
Key achievements: (1) Learning curve of 275 procedures calculated for colonoscopy. (2) Performance scoring criteria developed for gynecology.
Areas for future research: (1) Research performance criteria using consistent methodology, taking level of training into account. (2) Implement multicenter studies at a national level.

Research priority 4. What is the role of simulation for the certification of residents and practicing surgeons?
Key achievements: (1) Research published showing that recommendations for certification are not yet an accurate reflection of surgical skills. (2) Pilot research from the United States and Canada demonstrated that scores for clinical skills using simulated patients were valid.
Areas for future research: (1) Comparison of different tools to assess feasibility, reliability, and validity. (2) Investigate certification at all clinical levels (student to independent practitioner).

Research priority 5. How do we train and assess teams effectively using simulation?
Key achievements: (1) Multiple scales developed and validated for teamwork skills assessment in the simulated setting. (2) Teamwork assessment has been embraced by OR teams.
Areas for future research: (1) Further evidence for feasibility of these scales for use in real time in the clinical environment. (2) Evaluate team-training programs using multiple methods.

Research priority 6. How can we use simulation to teach and assess judgment and decision making in routine and crisis situations?
Key achievements: (1) High-fidelity simulation used to assess decision making in vascular and general surgery for a decade. (2) Virtual reality simulation used in trainee curricula to teach decision-making skills.
Areas for future research: (1) Explore the use of cognitive task analysis for procedure-specific simulation. (2) Correlate crisis management skills with other core management skills taught in trainee curricula.

Research priority 7. What type and method of feedback is most effective to improve performance on simulators?
Key achievements: (1) TeamGAINS developed as a debriefing tool for team-based surgical simulation training. (2) OSAD scale produced to assess the quality of debriefing.
Areas for future research: (1) Continue research exploring the efficacy of within-team and self-debriefing. (2) Compare available debriefing tools to establish efficacy.

Research priority 8. Does documented simulator competence equal clinical competence?
Key achievements: (1) FLS training linked to improved performance in laparoscopic general surgical procedures. (2) Advances made in simulation training for microsurgery and ophthalmology.
Areas for future research: (1) Link simulation training to surrogate markers of competence such as time to completion of a program. (2) Encourage a global study aiming to correlate simulator and clinical performance of surgical tasks.

Research priority 9. Does training on simulators transfer to improved clinical performance?
Key achievements: (1) Wide availability of FLS and virtual reality simulation. (2) 29 of 34 studies using simulation training resulted in improved performance of a surgical task.
Areas for future research: (1) Ensure consistency in methods of assessing performance. (2) Address methodologic limitations of research in this area.

Research priority 10. Does simulator training lead to improved patient outcomes, safety and quality of care?
Key achievements: (1) Links between simulation training and improved outcomes demonstrated in hernia repair and central line insertion. (2) Published evidence of improved neonatal outcomes after simulation training in obstetrics.
Areas for future research: (1) Aim to link simulation training with improved mortality using a collaborative approach. (2) Establish the lessons of simulation training within both the intraoperative and postoperative phases of care.

FLS = Fundamentals of Laparoscopic Surgery; OR = operating room; OSAD = Objective Structured Assessment of Debriefing.

These basic principles can assist surgical educators to design, develop, and implement robust simulation-based curricula. It is also important to recognize that this strategy is not focused exclusively on the learning of procedural skills but it can be applied to curricula for nonprocedural skills, such as patient-doctor communication in defined settings, such as end-of-life care, through the application of simulated patients.10 The overall evaluation of any simulation-based curricula must then be applied to measures of clinical safety, quality of care, and patient experience.
A key research priority moving forward in simulator curriculum development is the determination of which delivery methods (ie, high vs low technology, hybrid vs virtual reality vs standardized patient) are most effective in the acquisition of technical and/or nontechnical (ie, behavioral) skills.11 Other areas for focus include identification of useful, easy to use quality and outcomes measures by which to measure curriculum effectiveness and the incorporation of standardized, systematic curriculum evaluation models into research protocols to provide a common reference point for comparing the utility and efficacy of different curricula.

What are the best methods/metrics to assess technical and nontechnical performance on simulators and in the operating room?
The simplest and most directly relevant metric for high-quality surgical practice is the measurement of patient-derived outcomes. Thus, if we were to undergo a surgical intervention, we would like the end result to have fixed the problem, with minimal upset to our well-being and daily activities and certainly no risk of death. Because of the multitude of events, locations, and caregivers involved in a single patient journey, researchers tend to compartmentalize the measurement of surgical performance. As the research question suggests, this compartmentalization can be broadly divided into technical and nontechnical performance.
Technical performance in surgery can be measured with simple metrics, such as time and dexterity, through use of error-based checklists and global rating scales, end product analysis, such as burst pressure testing, or higher level cognition, such as eye-tracking and functional brain imaging.12 Simulation-based training and assessment, by definition of being in a controlled and standardized environment, espouses the use of empirical metrics to measure technical ability. In the operating room (OR), the assessment tools used must not interfere with clinical activity and, thus, more often use rating scales as their mode of measure. The issue here is that rating must be performed either live (with the risk of a Hawthorne effect) or retrospectively through video review, which is both time consuming and may miss some important aspects of care.
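To make the simple metrics concrete, the sketch below computes task time, path length, and economy of motion from tracked instrument-tip positions. It assumes a simulator or tracking system that exports positions sampled at a fixed rate; the function name, metric names, and synthetic trajectory are invented for illustration, not any particular simulator's output:

import numpy as np

def motion_metrics(positions: np.ndarray, dt: float) -> dict:
    # positions: an N x 3 array of instrument-tip coordinates (mm),
    # sampled every dt seconds. Illustrative metrics only.
    steps = np.diff(positions, axis=0)                 # displacement per sample
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    task_time = (len(positions) - 1) * dt
    return {
        "task_time_s": task_time,
        "path_length_mm": path_length,
        "economy_of_motion_mm_per_s": path_length / task_time,
    }

# Example: 200 samples of a synthetic instrument trajectory at 20 Hz
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(scale=0.5, size=(200, 3)), axis=0)
print(motion_metrics(track, dt=0.05))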
Nontechnical performance assessment has almost exclusively used rating scales and checklists, for example, Observational Teamwork Assessment for Surgery (OTAS), Oxford Non-Technical Skills (NOTECHS), State Trait Anxiety Inventory, Utterance Frequency, etc.13 The benefit of these tools is that they can be used within both simulation-based and clinical environments, enabling comparative effectiveness studies to be performed. Once again, however, they rely on live observation or post hoc video review, with the concomitant limitations alluded to above.
More recently, a statewide surgical registry has been
used to attempt to correlate technical and safety measures
(the latter as a proxy for nontechnical skills [NTS]) with
patient-derived outcomes.14 Birkmeyer et al15 reported
higher scores on a global measure of skill on surgical
videos of gastric bypass procedures to be indicative of
fewer postoperative complications and lower rates of reoperation, readmission, and visits to the emergency department. In a similar vein, a 22-hospital survey of 53
surgeons, 102 nurses, and 29 OR administrators with regard
to safety culture (comprising measures for quality of teamwork, co-ordination, and communication) revealed significant correlations with rates of serious surgical
complications in bariatric surgery.
Given such findings, the optimal measures of technical
and nontechnical performance, therefore, may already
exist. The research challenge is to determine their clinical
relevance and whether such measures differ based on the
procedure or activity being evaluated.

What are the performance criteria that surgical trainees need to achieve to be competent based on their training level (on a national level)?
To date, few studies have tried to address this research question in the literature. A study by Jelovsek et al16 published in 2010 established performance cutoff scores
for vaginal hysterectomy based on prospectively collected
data from 27 trainees in 2 institutions. The authors used
the modified Angoff method and 2 previously validated
performance scales, including the Global Rating Scale
and Vaginal Surgical Skills Index, to demonstrate that the
cutoff scores for competency on these scales should be 18
(16.5 to 20.3) and 32 (27.7 to 35.5), respectively. They
also established that, in their sample, trainees met these cutoff scores after performing 21 and 27 vaginal hysterectomies, respectively. Another study by Sedlack published
in 2011 established a minimum number of 275 colonoscopies to achieve competency using the contrasting groups
method and the ratings of 41 Gastroenterology Fellows
from the Mayo Colonoscopy Skills Assessment Tool.
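As an illustration of the contrasting groups method named above, the sketch below estimates a pass/fail cutoff as the score at which the distributions of a known-novice and a known-competent group intersect. It assumes approximately normal scores and uses synthetic data; published studies apply their own variants of the method:

import numpy as np
from scipy.stats import norm

def contrasting_groups_cutoff(novice_scores, competent_scores, grid_points=1000):
    # Fit a normal density to each group and find the score between the
    # group means where the two densities cross. Illustrative sketch only.
    n_mu, n_sd = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    c_mu, c_sd = np.mean(competent_scores), np.std(competent_scores, ddof=1)
    grid = np.linspace(min(n_mu, c_mu), max(n_mu, c_mu), grid_points)
    diff = np.abs(norm.pdf(grid, n_mu, n_sd) - norm.pdf(grid, c_mu, c_sd))
    return float(grid[np.argmin(diff)])

# Synthetic ratings on a hypothetical 0-100 assessment scale
rng = np.random.default_rng(1)
novices = rng.normal(55, 8, 40)
experts = rng.normal(75, 6, 40)
print(round(contrasting_groups_cutoff(novices, experts), 1))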
More recently, Mackenzie et al17 used risk-adjusted cumulative sum curves to evaluate proficiency gain curves and
learning rates of colorectal surgery fellows in laparoscopic
colorectal surgery. They demonstrated that 40 cases were
required for the fellows to feel confident to perform most
laparoscopic colectomy tasks except dissection of the mesorectum and splenic flexure, which required further
training.
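The cumulative sum (CUSUM) technique underlying such proficiency gain curves can be sketched briefly. The version below is a basic, non-risk-adjusted log-likelihood-ratio CUSUM with invented acceptable (p0) and unacceptable (p1) failure rates; Mackenzie et al used a risk-adjusted variant with procedure-specific parameters:

import math

def cusum(outcomes, p0=0.10, p1=0.20):
    # outcomes: sequence of case results (1 = failure, 0 = success).
    # Each failure raises the running score; each success lowers it.
    # A persistently falling curve suggests performance better than p0.
    s_fail = math.log(p1 / p0)              # increment on failure (+0.69)
    s_succ = math.log((1 - p1) / (1 - p0))  # decrement on success (-0.12)
    score, curve = 0.0, []
    for failed in outcomes:
        score += s_fail if failed else s_succ
        curve.append(score)
    return curve

# Synthetic trainee: failures become rarer as experience accumulates
outcomes = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
for case, score in enumerate(cusum(outcomes), start=1):
    print(f"case {case:2d}: CUSUM = {score:+.2f}")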
Thus, the available literature offers initial attempts at
establishing performance criteria that define competency
for surgical trainees. Each of the published studies to date,
however, uses a different methodology to arrive at the
recommended criteria or number of operative cases. In
addition, they do not specify performance criteria based on
level of training, and their results come from 1 or 2
institutions, minimizing their generalizability to a national
level. Furthermore, none of the published criteria have been
applied prospectively at a national level to assess their
appropriateness. For this research question to be answered
adequately, therefore, several more studies will be required.
They will have to identify criteria for competent performance of trainees in all surgical disciplines and at a
national level. Multi-institutional, multidisciplinary
research designs are, thus, needed to address issues related
to this question.

What is the role of simulation for the certification of residents and practicing surgeons?
There is a paucity of empirical data that support the use
of simulation for the certification of trainees and practicing
surgeons. Recently, Ahn et al18 used simulation to assess
the appropriateness of the Accreditation Council for Graduate Medical Education (ACGME) recommendation that
residents perform 6 cardiac pacing attempts during training
to qualify for graduation; they found that the actual number
of cardiac pacing procedures (transcutaneous and transvenous) in the simulated environment needed by residents
to become competent was 50% higher than the ACGME
requirement. Hafford et al19 assessed laparoscopic performance of 83 practicing general surgeons using the Fundamentals of Laparoscopic Surgery (FLS) skills- and
knowledge-based examinations and documented a 33%
failure rate at baseline; the authors proposed, therefore,
that FLS certification (currently mandated to sit for the
American Board of Surgery [ABS] Qualifying Examination) should also be considered for practicing surgeons to
ensure their competency. In an editorial piece by the president of the ABS published in 2010, Buyske20 opined that
simulation-based evaluation could play an important role
for the ongoing assessment of surgical trainees and surgeons. It could complement the ACGME/ABS milestones
project rather than become a component of the Certifying
Examination of the ABS. On the other hand, the use of
simulation in the form of simulated (ie, standardized) patients has been used successfully for high-stakes assessment
of clinical skills for medical licensure and certification purposes in the United States and Canada. Boulet et al21
demonstrated recently in a review article that the validity
and generalizability of the test scores used in these
simulation-based assessments are supported by a substantial number of research studies and are effective in
providing a fair and equitable assessment of the clinical
skills of their test populations. The authors predicted an
increasing use of additional simulation-based modalities
for competency assessment in the near future. In a recent
study, de Montbrun et al22 developed the Colorectal Objective Structured Assessment of Technical Skill, a simulationbased objective method of technical skills assessment for
graduating subspecialists in colorectal surgery. They
demonstrated that this examination effectively discriminated between graduating general surgery chief residents
and colorectal fellows and suggested that it should be incorporated into the Colorectal Board Examination.
Thus, the review of the available literature indicates that
very little empirical data exist that address the role of
simulation for the certification of residents and practicing
surgeons. As indicated by the high ranking of this Delphi project research question,4 additional good quality studies that demonstrate the best way to incorporate simulation-based assessments in the certification process of surgical trainees and practicing surgeons are needed.
Key research priorities toward such a goal include
comparing different assessment tools for feasibility, ease
of use, and validity and reliability related to evaluating
competency for specific procedures and behavioral skill
sets.

How do we train and assess teams effectively using simulation?
Individual technical skills are necessary but not sufficient
for optimal surgical performance.23 Breakdowns in team-based skills, such as communication, situation awareness,
and decision making, have been convincingly identified as
key root causes of patient harm and near misses in the OR.24 Ineffective operative teams, especially in crisis scenarios, can be more prone to error, potentially compromising
patient safety and operative outcomes.25 Other high-risk industries, such as aviation and the military, have, thus, developed Crew Resource Management team training programs
to help create more cohesion and effective responses among
teams; these types of training programs, however, remain
niche players in surgical education and practice.25 Given
that surgeons are fairly accurate at self-assessing their technical skills but lack the necessary insight to self-assess their
own NTS,26 team training and assessment may act as the
bridge between learning technical skills on bench-top/virtual
reality simulation and overall competence in the OR.
Indeed, implementation of team training programs has
been associated with a reduction in surgical mortality.27
Regarding simulation-based training of surgical teams,
research has previously shown that multidisciplinary crisis simulations for team training using a high-fidelity, immersive, and reproducible simulated operating theatre are feasible and well received by surgical teams.25 In this setting, a standardized theatre team is exposed to validated crisis scenarios, such as hemorrhagic shock or cardiac arrest. Using a computer-based mannequin simulator, the trainers in the control room can simultaneously manipulate the mannequin's hemodynamic parameters. All simulations are video recorded for the purposes of assessment and debriefing, conducted by a surgeon for technical skills and a human factors expert for NTS in the example of Undre et al.25
For team assessment, 2 approaches have been pursued in
the literature: (1) measuring the skill of the individual
within the team and/or (2) evaluating the team as a whole.
Intraoperative NTS of the individual surgeon are assessed by skills taxonomies, such as the Non-Technical Skills for Surgeons taxonomy, which is feasible and reliable and demonstrates content and construct validity.28 Additionally, the NOTECHS scale is a psychometrically robust tool to assess both the individual surgeon's and the overall team's NTS. Originally developed for Crew Resource Management training in aviation, it was revised by Sevdalis et al29 to make it more applicable for use in the OR for assessing nontechnical skills, especially in the context of surgical crisis scenarios. NOTECHS is feasible and differentiates between good and poor behaviors when used by surgeons, anesthetists, and nurses.28 Another assessment tool is the Anaesthetists' Non-Technical Skills scale, which assesses similar domains, is reliable, and has demonstrated validity in assessing anaesthetists.28
For measuring the performance of the entire OR team,
the OTAS has been developed as a tool for comprehensive
assessment of interprofessional teamwork in the OR.11
Although OTAS has demonstrated construct validity
and reliability, questions remain about feasibility and
cost-effectiveness because assessment relies on real-time
observation.28 Moreover, novice assessors themselves
must be adequately trained on using such assessment tools
because a poor correlation exists between novice and expert
rater pairs.11

Future research should, therefore, continue to expand on
the many individual- and team-based assessment tools
through their validation in different learner populations
and training settings. In addition, investigators should focus
on identifying the best assessment tools to use for a
particular team training program to elucidate which strategy is most effective in improving and optimizing surgical
performance. Through the use of multiple assessment
methods, team-training programs would undergo more
rigorous evaluation and different programs could be more
equitably compared with one another in terms of
effectiveness.

How can we use simulation to teach and assess judgment and decision making in routine and crisis situations?
To answer this research question comprehensively, 2
separate conditions must be met, each of which is necessary
but not sufficient for addressing this area of research: (1)
the existence of simulators/simulation scenarios that have
enough fidelity to create situations in which the learner/
participant must exercise surgical judgment and/or decision
making and (2) the availability of adequately validated
assessment tools for evaluating surgical judgment and
decision making. This targeted review of the literature
over the last 10 years looking at decision making, surgery,
assessment, and simulation reveals that both, to a limited
extent, do exist, but gaps are present for targeting future
investigations.
Before reviewing the literature, it is important to stress
that surgical judgment and decision making are not
synonyms for all NTS or team-based behaviors. True,
both good judgment and intelligent decision making are
important components of highly reliable team function.
They are not substitutes, however, for such team-based
competencies as open communication, leadership, and
back-up behavior. Thus, although much work has been
conducted in NTS-related topics for surgical teams, it does
not necessarily correspond with teaching and assessing
judgment and decision making in routine and crisis
situations.
Simulators and simulation-based scenarios have been
developed over the last 10 years that allow teaching and
assessment of surgical judgment and decision making.
Standardized patients were first used to assist in assessing
surgical decision making a decade ago.30 Both Dale et al31
and Hemmerich et al32 successfully adapted a simple
screen-based computer program to evaluate vascular surgeons' decision making related to the timing of operation for abdominal aortic aneurysm repair. The American College of Surgeons has developed a similar screen-based, virtual reality teaching tool for residents known as the Fundamentals of Surgery Curriculum, and the Society for Surgery of the Alimentary Tract is currently working with a Web-based computer program to develop a virtual patient evaluation tool.33 Such computer-based simulations provide
learners with scenarios of varying difficulty for exercising
judgment and eliciting clinical decisions related to patient
care.
Another successful approach to using simulation for
teaching and assessing surgical judgment and decision
making has been to layer both psychomotor and cognitive
training together to mimic the multitasking environment of
the practicing surgeon.34 Deka et al35 added cognitive loads
onto a virtual reality peg transfer psychomotor task to
demonstrate improved task transfer and learning. Pugh
et al36,37 developed a low technology laparoscopic ventral
hernia trainer using cognitive task analysis to force participants to make intraoperative decisions based on key aspects of such a repair (eg, port placement, mesh selection).
Notwithstanding the caveat related to NTS mentioned
earlier, several groups have developed in situ38 and center-based25,39 high-fidelity simulation-based team training
models that can be used to teach and assess judgment and
decision making in addition to team-based competencies.
Typically, they have focused on crisis scenarios (eg, intraoperative hemorrhage, trauma), but Andrew et al40
have created a scenario for assessing team decision making
based on a routine laparoscopic ventral hernia repair,
whereas crisis and routine scenarios have been developed
by Black et al41 for carotid endarterectomy.
Some assessment tools for surgical judgment and decision making have been developed. Pugh et al42 developed
an error-based intraoperative decision-making checklist using cognitive task analysis for ventral hernia simulations.
Krishnamurthy et al43 have developed an intraoperative
assessment scale for strategic management simulations designed to elicit decision making in a real-world task environment. They were able to correlate performance based
on this tool with the American Board of Surgery In-Training Examination scores.
Continued development and refinement of procedure-specific simulators and situation-specific scenarios is needed with concomitant development of more assessment tools. Cognitive task analysis and human reliability analysis are promising approaches. Finally, the effectiveness of curricula designed to teach judgment and decision making must be systematically evaluated using an established model such as Kirkpatrick's 4 levels to help create a more comprehensive evaluation of the learner.
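Kirkpatrick's model evaluates training at 4 ascending levels: reaction, learning, behavior, and results. The minimal sketch below pairs each level with an example measure for a judgment and decision-making curriculum; the mappings are hypothetical rather than drawn from any cited study:

# Kirkpatrick's 4 evaluation levels with example (invented) measures
# for a judgment/decision-making curriculum.
KIRKPATRICK = {
    1: ("Reaction", "post-course satisfaction survey"),
    2: ("Learning", "pre/post decision-making checklist scores"),
    3: ("Behavior", "intraoperative decisions rated in the OR"),
    4: ("Results", "complication and reoperation rates"),
}
for level, (name, example) in KIRKPATRICK.items():
    print(f"Level {level} ({name}): e.g., {example}")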

What type and method of feedback is most effective to improve performance on simulators?
Feedback is often regarded as the most crucial component of simulation-based education to optimize learning.44 Debriefing forms the core of Kolb's experiential learning cycle and Schön's reflective practice, facilitating exchange of reflective dialog between the trainer and trainee.45 This practice consolidates learning and has been shown to improve the surgeon's technical skills and reduce adverse events.46 However, debriefing culture in surgery remains sparse compared with other high-risk organizations.44 Recent literature has highlighted core, evidence-based features of effective debriefing.45 These have served as the basis for the birth of simulation-based feedback models, such as TeamGAINS (guided team self-correction, advocacy-inquiry, systemic constructivist), to enhance the quality and quantity of feedback.
TeamGAINS is a debriefing tool for simulation-based team training that amalgamates 3 debriefing approaches: guided team self-correction, advocacy-inquiry, and systemic-constructivist debriefing.47 Guided team self-correction fosters a detailed self-analysis of the trainee with the trainer assuming a neutral, nonjudgmental position.48 Conversely, advocacy-inquiry focuses on instructor-led critical feedback with trainee reflection.49 Finally, the systemic-constructivist technique focuses on interaction and relationships rather than isolated individual behaviors.47
Exploiting the advantages of these 3 established
debriefing approaches, TeamGAINS aims to optimize the
debriefing exercise, studying interactions between causes
and effects of team behavior.47 However, a major limitation
is that instructor-led debriefing is cost and resource
intensive.50 To address this limitation, self-debriefing
and within-team debriefing have recently been investigated as viable alternative debriefing strategies.
Self-debriefing is an exercise whereby an individual
identifies his/her own strengths and weaknesses with
reflection after assessment through the use of a grading
tool (eg, Non-Technical Skills for Surgeons). Indeed, this
strategy has been shown to improve performance in
simulated crisis scenarios in the absence of instructor-led
debriefing.51 Moreover, Boet et al50 have also demonstrated
that within-team debriefing after a simulated crisis scenario, led by team members themselves without an
instructor, might be as effective as instructor-led debriefing
in improving team performance. Both self-debriefing and
within-team debriefing strategies have the additional potential advantage of facilitating resource utilization and
enhancing feasibility of team simulation and debriefing.50
To determine which method of feedback is effective in
improving performance on simulators, the quality of
debriefings needs to be assessed. Tools fulfilling this role
include (1) Objective Structured Assessment of Debriefing,
(2) Debriefing Assessment for Simulation in Healthcare,
and (3) Self-report Debriefing Quality scale.
Objective Structured Assessment of Debriefing is a
psychometrically robust, 5-point scale comprising 8 key
evidence-based domains to assess debriefing quality after
simulation scenarios.45 It is feasible, valid, and reliable, and
these properties have also been demonstrated for assessment of debriefings in the OR.52 Likewise, Debriefing
Assessment for Simulation in Healthcare is a reliable and
validated behaviorally anchored rating scale to assess debriefings in a variety of simulation contexts.53 These tools also have the potential to provide feedback to the inexperienced trainer, with the aim to enhance his/her ability to
effectively debrief, in turn improving the quality of debriefings and learning in simulation.
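By way of illustration, an OSAD-style score can be computed as the sum of 8 domain ratings on a 5-point scale (total range 8 to 40), with low-scoring domains flagged as targets for trainer feedback. The domain names in this sketch are placeholders rather than the published OSAD wording:

# Illustrative OSAD-style scoring: 8 domains, each rated 1-5.
# Domain names below are placeholders, not the published instrument.
DOMAINS = [
    "approach", "environment", "engagement", "reaction",
    "reflection", "analysis", "diagnosis", "application",
]

def score_debriefing(ratings: dict[str, int]) -> tuple[int, list[str]]:
    assert set(ratings) == set(DOMAINS), "rate all 8 domains"
    assert all(1 <= r <= 5 for r in ratings.values()), "ratings are 1-5"
    total = sum(ratings.values())
    weak = [d for d, r in ratings.items() if r <= 2]  # feedback targets
    return total, weak

total, weak = score_debriefing({
    "approach": 4, "environment": 5, "engagement": 4, "reaction": 3,
    "reflection": 2, "analysis": 4, "diagnosis": 3, "application": 2,
})
print(total, weak)  # 27 ['reflection', 'application']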
Although several established methods for debriefing
exist, there are currently no studies that have used the
available, validated assessment tools to compare the
efficacy of different debriefing methods. This is an
important avenue for future research.

Does documented simulator competence equal clinical competence?
This question is one of the most difficult to answer in
terms of research because its target, competence, is a rather
loaded term, meaning different things to different people. A
more precise definition, therefore, is required to ensure a
common starting point for answering the question. The
ACGME defines competencies as "[s]pecific knowledge, skills, behaviors and attitudes and the appropriate educational experiences required of residents to complete GME programs. These include patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice."54
Given the earlier mentioned definition, the question
essentially is targeting whether simulation-based training
affects an individual's clinical practice patterns. That is, it attempts to link simulation education to what a surgeon is doing when no one is looking (ie, when [s]he is not being assessed). As such, it attempts to look at the higher order components of training effectiveness (ie, Kirkpatrick55 level 3 [behavior] and level 4 [outcomes] results or Miller's "Does" level56). Measuring these types of performance markers is difficult in the best of circumstances! Making such a linkage, however, is essential because, as Chris de Gara has warned, "competency-based [simulation] programs do not necessarily ensure competence." In fact, he argues that simulation-based training in isolation can decontextualize surgical skills, creating a false sense of security among learners and faculty who are "enchanted" by the method.57
Focusing on the linkage between training and actual
clinical behaviors and outcomes, proficiency-driven,
simulation-based curricula have been demonstrated to
make an impact over the last 5 years (see "Does simulator training lead to improved patient outcomes, safety and quality of care?" section). Areas of particular strength
include technical training for minimally invasive surgical
procedures. Although Hogle et al58 did not succeed in making a link between training and clinical performance of
laparoscopic cholecystectomy, subsequent researchers
have demonstrated a link between proficiency training in
FLS and improved performance of laparoscopic cholecystectomy,59 right colectomy,60 and inguinal hernia repair.61
This last study reaches Kirkpatrick Level 4 training effectiveness and can serve as an example of how to
approach demonstrating this oftentimes-elusive target.
Other noteworthy nonminimally invasive surgical studies
that have linked simulation-based training to better clinical
performance/outcomes include mastery training in central
venous catheter placement leading to improved performance.62 In addition, simulation-based training in cataract
surgery has been linked to decreased capsular tears in clinical practice63 and training in microsurgical techniques to
improved performance.64
Although the earlier cited studies do make a connection
between simulation-based training and clinical performance, they do not necessarily prove clinical competence because the term implies an overall gestalt related to
performance. Here lies the gap in the research to date. To
declare that simulator training to competence is equivalent
to clinical competence, a more global, multisource evaluation is needed. Fernandez et al65 have moved in such a direction by trying to correlate performance during an intern
boot camp with subsequent American Board of Surgery
In-Training Examination, in-training evaluations, and operative assessment scores. Linking simulation-based activities
with parameters related to milestones within a program is
another potential target for demonstrating that simulator
competence leads to clinical competence. Only through
this comprehensive approach will researchers be able to
convince program directors and the public that
simulation-based competence equals clinical competence.

Does training on simulators transfer to improved clinical performance?
Simulators were developed to supplement clinical
training in an effort to improve patient safety while
maximizing training. Although skills acquisition with
simulation-based training has been established, evidence
is also growing to demonstrate skills transfer to the clinical
setting.
The effectiveness of simulator training has been mostly
investigated for laparoscopic cholecystectomy and endoscopy. Grantcharov et al66 demonstrated faster operative
times and improvement in error and movement economy
during laparoscopic cholecystectomies among virtual reality-trained participants compared with nontrained participants. In a study by Cohen et al,67 virtual reality training
resulted in improved skills competence and patient comfort
levels for colonoscopy. However, the durability of skills
transfer remains in question. Interestingly, Sedlack and
Kolars68 demonstrated that while clinical performance
improved in the initial phase of virtual reality training, no
difference existed in the quality of the colonoscopies performed between trained and untrained fellows by the end
of the study.
Increasingly, studies are exploring the utility of simulators in other laparoscopic and endoscopic procedures, such
as Nissen's fundoplication, hemicolectomy, tubal ligation,
inguinal hernia repair, esophagogastroduodenoscopy, endoscopic sinus surgery, and transnasal flexible laryngoscopy.
In a recent systematic review of 27 randomized trials and 7
comparative studies in various laparoscopic and endoscopic
procedures, participants who received simulation training
overall performed better in a patient-based setting than
those who did not receive simulation training. Only 5 of the
34 studies failed to show improvement in performance, and
none showed poorer performance after simulation-based
training. Because the majority of these studies used virtual
reality simulators and the Fundamentals of Laparoscopic
Surgery system, the generalizability is limited to these
specific types of simulators.69
Currently, clinical performance is assessed by various
means. Many studies define clinical performance by scores
from global measures of performance, time to procedure
completion, or achievement of procedural tasks. Clinical
performance may also be affected by the quality of work,
autonomy and competence, and patient discomfort or
satisfaction. Clinical performance is typically evaluated
by direct observation or video-recording assessments.
Although consistent criteria for clinical performance should
be established, it is important not to measure clinical
performance in areas that are beyond the capabilities of the
simulator.
In addition, many factors determine skills transfer
including simulator design and functionality, debriefing or
feedback methods, level of mentorship, and inherent
learning style. These factors should also be evaluated as
it is difficult to differentiate between skills transfer from
simulator use alone and from deliberately designed training
curricula using a simulator. In an educational setting,
simulator training and assessments should be chosen
appropriately for the intended goal or audience.
Although many studies show supportive results, many
studies also lack consistency or completeness. Additional
limitations to most studies include single-institution investigation, small sample size, and disparity in training
duration and intensity.69 Investigators involved in future
work in this field should address such issues.

Does simulator training lead to improved patient outcomes, safety and quality of care?
This question can be considered the "holy grail" of surgical simulation research. To answer it, some distinctions must be made and the definition of what constitutes patient outcome must be considered. Both Donabedian and Davies believe that a positive outcome can be defined as achieving health and satisfaction.70 Therefore, the more traditional view of outcomes simply being measures of life or time (ie, mortality or length of stay) can be expanded to include complications, as a patient who is discharged from hospital having had an unexpected complication may well not be satisfied.


Furthermore, defining "improved patient safety" and "improved quality of care" is equally difficult. Patient safety is defined by the Institute of Medicine as the "prevention of harm to patients."71 To answer the question of whether simulation-based training can improve patient safety, the definition that will be used in this article must be the reduction of harm to patients, rather than prevention, as the published literature does not include many articles reporting adverse event rates or complication rates of 0%.
Similarly, the Institute of Medicine defines quality of care as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes."71 As desired health outcomes include the avoidance of mortality, complications, and prolonged length of stay, these are the indicators that could be used to answer this question.
Regarding complications, bloodstream infections represent a serious risk to life and a health burden for patients
and health care providers.72 Only recently has the link between simulation-based education and the number of
catheter-related bloodstream infections been formally and
rigorously explored. Barsuk et al73,74 showed that a
simulation-based mastery-learning program in central
venous catheter insertion skills, delivered to residents,
significantly reduced the rate of catheter-related bloodstream infections in 2 separate studies. Further work by
Zendejas et al61 outlined the positive effect of mastery learning
on resident skills in inguinal hernia repair, and significant
improvements in the rate of intraoperative and postoperative complications were reported.
These studies are to be commended for aiming to link
complications to simulation but are limited either by their
observational nature or the fact that they were conducted in
a single center. There are also published articles from the
specialty of obstetrics and gynecology that report improved
outcomes in neonates; however, they share some of the
same limitations as those from surgical specialties.75-77
Regarding length of stay, a study by Azari-Rad et al78
explored the link between discrete event simulation
modeling and the number of elective operation cancellations based on each surgeon's patients' average length of
stay. Unfortunately, this study did not explore length of
stay itself as an outcome. The study by Zendejas et al61
also reported a lower likelihood of an overnight stay after
hernia repair by a simulation-trained resident. However,
the multiplicity of factors that affect a patient's length of
stay means that firmly and reliably linking a simulation
intervention to this outcome will remain a significant
challenge.
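For readers unfamiliar with discrete event simulation, the sketch below illustrates the general technique rather than Azari-Rad et al's model. Using the simpy library, elective cases compete for a fixed pool of ward beds, a case is cancelled when no bed is free at its scheduled start, and each admitted patient holds a bed from surgery to discharge; all parameters are invented:

import random
import simpy

random.seed(42)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=8)   # hypothetical ward bed pool
cancelled = completed = 0

def elective_case(env, beds):
    # A case is cancelled outright if every bed is already occupied;
    # otherwise it holds a bed through surgery and the inpatient stay.
    global cancelled, completed
    if beds.count >= beds.capacity:
        cancelled += 1
        return
    with beds.request() as bed:
        yield bed
        yield env.timeout(random.expovariate(1 / 2.0))   # operating time (h)
        yield env.timeout(random.expovariate(1 / 72.0))  # length of stay (h)
        completed += 1

def scheduler(env, beds):
    # Book a new elective case every 6 hours.
    while True:
        env.process(elective_case(env, beds))
        yield env.timeout(6)

env.process(scheduler(env, beds))
env.run(until=24 * 30)  # simulate 30 days
print(f"completed={completed}, cancelled={cancelled}")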
The literature is even sparser when it comes to
considering the impact of simulation on patient mortality.
Mortality tends to be the "headline grabber" of any study
exploring patient outcomes, so it is safe to assume that any
group of researchers who had successfully linked a
simulation intervention to reduced mortality would have
reported it in a high-impact publication. It is highly likely
that a robust, multicenter study involving collaboration between leaders in this field will be required to establish
this link. To date, there are no studies that can claim this. A
plethora of studies have shown that simulation can translate
to improved simulated performance and, to a lesser degree,
clinical performance but the definitive link with patient
outcome measures remains out of reach. This statement is
supported by several articles, which have reported similar
findings.79,80

Conclusions
Simulation appears to be a very valuable educational
tool. There are numerous examples of its role in curricula,
technical and nontechnical performance, decision-making,
individual and team training, feedback, and, to a degree,
clinical performance. In this safety-conscious era of
checklists and proficiency-based progression from undergraduate to attending,81,82 simulation is a vital component
of surgical training. Furthermore, simulation can protect
the patient from clinicians who have not yet reached proficiency.83 However, the subject of using simulation for certification of surgeons remains up for debate.
The paucity of literature exploring the relationship
among simulation-based training, quality indicators, and
patient outcomes will require input from collaborating
investigators and assistance from funding bodies; it should,
however, remain a major focus of future research.
The extension of the use of simulation from the OR84 to
the surgical ward85 and beyond has strengthened its usefulness to the surgical community. Simulation-based training
is becoming well established in surgical education and
will, no doubt, continue to be used for many years to come.

Acknowledgments
The authors would like to thank all members of the
Association for Surgical Education Simulation Committee
for their assistance.

References

1. Aggarwal R, Mytton OT, Derbrew M, et al. Training and simulation for patient safety. Qual Saf Health Care 2010;19(Suppl 2):i34-43.
2. Arora S, Cox C, Davies S, et al. Towards the next frontier for simulation-based training: full-hospital simulation across the entire patient pathway. Ann Surg 2013;260:252-8.
3. Frengley RW, Weller JM, Torrie J, et al. The effect of a simulation-based training intervention on the performance of established critical care unit teams. Crit Care Med 2011;39:2605-11.
4. Stefanidis D, Arora S, Parrack DM, et al. Research priorities in surgical simulation for the 21st century. Am J Surg 2012;203:49-53.
5. Phillips AW, Madhavan A. A critical evaluation of the Intercollegiate Surgical Curriculum and comparison with its predecessor the "Calman" curriculum. J Surg Educ 2013;70:557-62.
6. Stefanidis D, Coker AP, Green JM, et al. Feasibility and value of a procedural workshop for surgery residents based on phase II of the APDS/ACS national skills curriculum. J Surg Educ 2012;69:735-9.

7. Aggarwal R, Crochet P, Dias A, et al. Development of a virtual reality training curriculum for laparoscopic cholecystectomy. Br J Surg 2009;96:1086-93.
8. Buchholz J, Vollmer CM, Miyasaka KW, et al. Design, development and implementation of a surgical simulation pathway curriculum for biliary disease. Surg Endosc 2015;29:68-76.
9. The General Medical Council. Standards for Curricula and Assessment Systems. Available at: http://www.gmc-uk.org/education/postgraduate/standards_for_curricula_and_assessment_systems.asp; 2010. Accessed October 14, 2014.
10. Barnato AE, Mohan D, Lane RK, et al. Advance care planning norms may contribute to hospital variation in end-of-life ICU use: a simulation study. Med Decis Making 2014;34:473-84.
11. Sevdalis N, Lyons M, Healey AN, et al. Observational teamwork assessment for surgery: construct validation with expert versus novice raters. Ann Surg 2009;249:1047-51.
12. van Hove PD, Tuijthof GJ, Verdaasdonk EG, et al. Objective assessment of technical surgical skills. Br J Surg 2010;97:972-87.
13. Hull L, Arora S, Aggarwal R, et al. The impact of nontechnical skills on technical performance in surgery: a systematic review. J Am Coll Surg 2012;214:214-30.
14. Birkmeyer JD, Finks JF, O'Reilly A, et al. Surgical skill and complication rates after bariatric surgery. N Engl J Med 2013;369:1434-42.
15. Birkmeyer NJ, Finks JF, Greenberg CK, et al. Safety culture and complications after bariatric surgery. Ann Surg 2013;257:260-5.
16. Jelovsek JE, Walters MD, Korn A, et al. Establishing cutoff scores on assessments of surgical skills to determine surgical competence. Am J Obstet Gynecol 2010;203:81.e1-6.
17. Mackenzie H, Miskovic D, Ni M, et al. Clinical and educational proficiency gain of supervised laparoscopic colorectal surgical trainees. Surg Endosc 2013;27:2704-11.
18. Ahn J, Kharasch M, Aronwald R, et al. Assessing the accreditation council for graduate medical education requirement for temporary cardiac pacing procedural competency through simulation. Simul Healthc 2013;8:78-83.
19. Hafford ML, Van Sickle KR, Willis RE, et al. Ensuring competency: are fundamentals of laparoscopic surgery training and certification necessary for practicing surgeons and operating room personnel? Surg Endosc 2013;27:118-26.
20. Buyske J. The role of simulation in certification. Surg Clin North Am 2010;90:619-21.
21. Boulet JR, Smee SM, Dillon GF, et al. The use of standardized patient assessments for certification and licensure decisions. Simul Healthc 2009;4:35-42.
22. de Montbrun SL, Roberts PL, Lowry AC, et al. A novel approach to assessing technical competence of colorectal surgery residents: the development and evaluation of the Colorectal Objective Structured Assessment of Technical Skill (COSATS). Ann Surg 2013;258:1001-6.
23. Vincent C, Moorthy K, Sarker SK, et al. Systems approaches to surgical quality and safety: from concept to measurement. Ann Surg 2004;239:475-82.
24. Gawande AA, Zinner MJ, Studdert DM, et al. Analysis of errors reported by surgeons at three teaching hospitals. Surgery 2003;133:614-21.
25. Undre S, Koutantji M, Sevdalis N, et al. Multidisciplinary crisis simulations: the way forward for training surgical teams. World J Surg 2007;31:1843-53.
26. Arora S, Sevdalis N, Ahmed M, et al. Safety skills training for surgeons: a half-day intervention improves knowledge, attitudes and awareness of patient safety. Surgery 2012;152:26-31.
27. Neily J, Mills PD, Young-Xu Y, et al. Association between implementation of a medical team training program and surgical mortality. JAMA 2010;304:1693-700.
28. Sharma B, Mishra A, Aggarwal R, et al. Non-technical skills assessment in surgery. Surg Oncol 2011;20:169-77.
29. Sevdalis N, Davis R, Koutantji M, et al. Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg 2008;196:184-90.

30. Clever SL, Novack DH, Cohen DG, et al. Evaluating surgeons' informed decision making skills: pilot test using a videoconferenced standardised patient. Med Educ 2003;37:1094-9.
31. Dale W, Hemmerich J, Ghini EA, et al. Can induced anxiety from a negative earlier experience influence vascular surgeons' statistical decision-making? A randomized field experiment with an abdominal aortic aneurysm analog. J Am Coll Surg 2006;203:642-52.
32. Hemmerich JA, Elstein AS, Schwarze ML, et al. Risk as feelings in the effect of patient outcomes on physicians' future treatment decisions: a randomized trial and manipulation validation. Soc Sci Med 2012;75:367-76.
33. Andersen DK. How can educators use simulation applications to teach and assess surgical judgment? Acad Med 2012;87:934-41.
34. Kahol K, Vankipuram M, Smith ML. Cognitive simulators for medical education and training. J Biomed Inform 2009;42:593-604.
35. Deka V, Kahol K, Smith M, et al. Honing a surgeon's decision making skills in the presence of mechanical tasks. Am J Surg 2011;202:492-9.
36. Pugh C, Plachta S, Auyang E, et al. Outcome measures for surgical simulators: is the focus on technical skills the best approach? Surgery 2010;147:646-54.
37. Pugh CM, DaRosa DA, Santacaterina S, et al. Faculty evaluation of simulation-based modules for assessment of intraoperative decision making. Surgery 2011;149:534-42.
38. Paige JT, Kozmenko V, Yang T, et al. High-fidelity, simulation-based, interdisciplinary operating room team training at the point of care. Surgery 2009;145:138-46.
39. Powers KA, Rehrig ST, Irias N, et al. Simulated laparoscopic operating room crisis: an approach to enhance the surgical team performance. Surg Endosc 2008;22:885-900.
40. Andrew B, Plachta S, Salud L, et al. Development and evaluation of a decision-based simulation for assessment of team skills. Surgery 2012;152:152-7.
41. Black SA, Nestel DF, Kneebone RL, et al. Assessment of surgical competence at carotid endarterectomy under local anaesthesia in a simulated operating theatre. Br J Surg 2010;97:511-6.
42. Pugh CM, DaRosa DA. Use of cognitive task analysis to guide the development of performance-based assessments for intraoperative decision making. Mil Med 2013;178:22-7.
43. Krishnamurthy S, Satish U, Foster T, et al. Components of critical decision making and ABSITE assessment: toward a more comprehensive evaluation. J Grad Med Educ 2009;1:273-7.
44. Issenberg SB, McGaghie WC, Petrusa ER, et al. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10-28.
45. Arora S, Ahmed M, Paige J, et al. Objective structured assessment of debriefing: bringing science to the art of debriefing in surgery. Ann Surg 2012;256:982-8.
46. Hamad GG, Brown MT, Clavijo-Alvarez JA. Postoperative video debriefing reduces technical errors in laparoscopic surgery. Am J Surg 2007;194:110-4.
47. Kolbe M, Weiss M, Grote G, et al. TeamGAINS: a tool for structured debriefings for simulation-based team trainings. BMJ Qual Saf 2013;22:541-53.
48. Smith-Jentsch KA, Kraiger K, Cannon-Bowers JA, et al. Do familiar teammates request and accept more backup? Transactive memory in air traffic control. Hum Factors 2009;51:181-92.
49. Rudolph JW, Simon R, Rivard P, et al. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin 2007;25:361-76.
50. Boet S, Bould MD, Sharma B, et al. Within-team debriefing versus instructor-led debriefing for simulation-based education: a randomized controlled trial. Ann Surg 2013;258:53-8.
51. Neira VM, Bould MD, Nakajima A, et al. GIOSAT: a tool to assess CanMEDS competencies during simulated crises. Can J Anaesth 2013;60:280-9.
52. Ahmed M, Sevdalis N, Paige J, et al. Identifying best practice guidelines for debriefing in surgery: a tri-continental study. Am J Surg 2012;203:523-9.

53. Brett-Fleegler M, Rudolph J, Eppich W, et al. Debriefing assessment for simulation in healthcare: development and psychometric properties. Simul Healthc 2012;7:288-94.
54. Glossary of Terms. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/ab_ACGMEglossary.pdf; 2013. Accessed May 08, 2013.
55. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. 2nd ed. San Francisco, CA: Berrett-Koehler; 1998.
56. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65:S63-7.
57. Brindley PG, Jones DB, Grantcharov T, et al. Canadian Association of University Surgeons Annual Symposium. Surgical simulation: the solution to safe training or a promise unfulfilled? Can J Surg 2012;55:S200-6.
58. Hogle NJ, Chang L, Strong VE, et al. Validation of laparoscopic surgical skills training outside the operating room: a long road. Surg Endosc 2009;23:1476-82.
59. Sroka G, Feldman LS, Vassiliou MC, et al. Fundamentals of laparoscopic surgery simulator training to proficiency improves laparoscopic performance in the operating room: a randomized controlled trial. Am J Surg 2010;199:115-20.
60. Palter VN, Grantcharov TP. Development and validation of a comprehensive curriculum to teach an advanced minimally invasive procedure: a randomized controlled trial. Ann Surg 2012;256:25-32.
61. Zendejas B, Cook DA, Bingener J, et al. Simulation-based mastery learning improves patient outcomes in laparoscopic inguinal hernia repair: a randomized controlled trial. Ann Surg 2011;254:502-9; discussion 509-11.
62. Evans LV, Dodge KL, Shah TD, et al. Simulation training in central venous catheter insertion: improved performance in clinical practice. Acad Med 2010;85:1462-9.
63. McCannel CA, Reed DC, Goldman DR. Ophthalmic surgery simulator training improves resident performance of capsulorhexis in the operating room. Ophthalmology 2013;120:2456-61.
64. Ghanem AM, Hachach-Haram N, Leung CC, et al. A systematic review of evidence for education and training interventions in microsurgery. Arch Plast Surg 2013;40:312-9.
65. Fernandez GL, Page DW, Coe NP, et al. Boot cAMP: educational outcomes after 4 successive years of preparatory simulation-based training at onset of internship. J Surg Educ 2012;69:242-8.
66. Grantcharov TP, Kristiansen VB, Bendix J, et al. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg 2004;91:146-50.
67. Cohen J, Cohen SA, Vora KC, et al. Multicenter, randomized, controlled trial of virtual-reality simulator training in acquisition of competency in colonoscopy. Gastrointest Endosc 2006;64:361-8.
68. Sedlack RE, Kolars JC. Computer simulator training enhances the competency of gastroenterology fellows at colonoscopy: results of a pilot study. Am J Gastroenterol 2004;99:33-7.

69. Dawe SR, Pena GN, Windsor JA, et al. Systematic review of skills transfer after surgical simulation-based training. Br J Surg 2014;101:1063-76.
70. Davies AR. Patient defined outcomes. Qual Health Care 1994;3(Suppl):6-9.
71. Mitchell PH. Defining patient safety and quality care. In: Hughes RG, ed. Patient Safety and Quality: An Evidence-based Handbook for Nurses. Rockville, MD: Agency for Healthcare Research and Quality; 2008.
72. Lillie PJ, Allen J, Hall C, et al. Long-term mortality following bloodstream infection. Clin Microbiol Infect 2013;19:955-60.
73. Barsuk JH, Cohen ER, Feinglass J, et al. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med 2009;169:1420-3.
74. Barsuk JH, Cohen ER, Potts S, et al. Dissemination of a simulation-based mastery learning intervention reduces central line-associated bloodstream infections. BMJ Qual Saf 2014;23:749-56.
75. Draycott TJ, Crofts JF, Ash JP, et al. Improving neonatal outcome through practical shoulder dystocia training. Obstet Gynecol 2008;112:14-20.
76. Riley W, Davis S, Miller K, et al. Didactic and simulation nontechnical skills team training to improve perinatal patient outcomes in a community hospital. Jt Comm J Qual Patient Saf 2011;37:357-64.
77. Phipps MG, Lindquist DG, McConaughey E, et al. Outcomes from a labor and delivery team training program with simulation component. Am J Obstet Gynecol 2012;206:3-9.
78. Azari-Rad S, Yontef AL, Aleman DM, et al. Reducing elective general surgery cancellations at a Canadian hospital. Can J Surg 2013;56:113-8.
79. Shear TD, Greenberg SB, Tokarczyk A. Does training with human patient simulation translate to improved patient safety and outcome? Curr Opin Anaesthesiol 2013;26:159-63.
80. Stefanidis D, Sevdalis N, Paige J, et al. Simulation in surgery: what's needed next? Ann Surg 2014;261:846-53.
81. Stefanidis D, Korndorffer Jr JR, Black FW, et al. Psychomotor testing predicts rate of skill acquisition for proficiency-based laparoscopic skills training. Surgery 2006;140:252-62.
82. Russ SJ, Sevdalis N, Moorthy K, et al. A qualitative evaluation of the barriers and facilitators toward implementation of the WHO surgical safety checklist across hospitals in England: lessons from the "Surgical checklist implementation project". Ann Surg 2014;261:81-91.
83. Lateef F. Simulation-based learning: just like the real thing. J Emerg Trauma Shock 2010;3:348-52.
84. Arora S, Sevdalis N, Aggarwal R, et al. Stress impairs psychomotor performance in novice laparoscopic surgeons. Surg Endosc 2010;24:2588-93.
85. Pucher PH, Aggarwal R, Srisatkunam T, et al. Validation of the simulated ward environment for assessment of ward-based surgical care. Ann Surg 2013;259:215-21.
