
Original Article

Anastomosis Lapse Index (ALI): A Validated End Product Assessment Tool for Simulation Microsurgery Training
Ali M. Ghanem, MRCS, MD, PhD1; Yasser Al Omran, BSc (Hons)1; Bashar Shatta, MD, MRCS1; Eunsol Kim, BSc (Hons), MBBS1; Simon Myers, MBBS, FRCS (Plast), PhD1

1 Academic Plastic Surgery Group, Barts and The London School of Medicine and Dentistry, London, United Kingdom

J Reconstr Microsurg 2016;32:233–241. DOI: 10.1055/s-0035-1568157. Received March 31, 2015; accepted after revision October 1, 2015; published online December 8, 2015.

Address for correspondence: Ali M. Ghanem, MRCS, MD, PhD, Academic Plastic Surgery Group, Centre for Cutaneous Research, Blizard Institute, Barts and The London School of Medicine and Dentistry, London, United Kingdom (e-mail: a.ghanem@qmul.ac.uk).

Abstract

Background: Over the last decade, simulation has become a principal training method in microsurgery. With an increasing move toward the use of nonliving models, there is a need to develop methods for assessing microvascular anastomosis skill acquisition that can substitute for the traditional patency rate. The authors present and validate a novel microvascular anastomosis assessment tool for formative and summative skills competency assessment.

Methods: In this study, 29 trainees with varying levels of experience in microsurgery undertook a 5-day microsurgery course. Two consecutive end-to-end microvascular anastomoses of cryopreserved rat aortas performed on day 3 and day 5 of the course were longitudinally split and photographed for randomized, blinded qualitative evaluation. Four consecutive anastomoses by two experienced microsurgeons were analyzed as expert controls. Errors potentially leading to anastomotic leak or thrombosis were identified and logged. Statistical analysis using the Kruskal–Wallis analysis of variance (ANOVA) and a two-way repeated-measures ANOVA was used to measure construct and predictive validity, respectively.

Results: A total of 128 microvascular anastomoses were analyzed for both the student and control groups. Ten errors were identified and indexed. There was a statistically significant difference in average errors per anastomosis between groups (p < 0.05). Average errors per anastomosis decreased significantly on day 5 of the course compared with day 3 (p < 0.001).

Conclusion: Evaluation of anastomosis structural patency and quality in nonliving models is possible. The proposed error list showed construct and predictive validity. The anastomosis lapse index can serve as a formative and summative assessment tool during microvascular training.

Keywords: microsurgery; education; educational intervention; technique

“See one, do one, teach one!” For years, this has been the axiom of the “Halstedian” apprenticeship model; a model that has been used widely in postgraduate surgical training.1 It involves giving trainees “increased responsibility,” until they can perform the operation unassisted; it forms the way in which many of today's surgeons have acquired their technical skill and flourished into competent surgeons. With surgical training programs witnessing increasing pressures to reduce clinical training hours, the Halstedian model is becoming less feasible and is being


superseded by competency-based surgical teaching with variable reliance on nonclinical simulation environments
worldwide.2 Microvascular anastomosis involves surgery of
small vessels down to less than 1 mm in diameter. This
technique is now widely employed by many surgical
specialties, including those relating to plastic and recon-
structive surgery.3,4 The refined movements, precision, and stamina that microsurgery requires make it technically challenging and, as a result, associated with a very steep learning curve.5,6 The development of a competency-
based curriculum in microsurgical simulation training
would facilitate optimum training and patient safety.
Many courses have been developed with both didactic
and practical components to equip surgical trainees with

principles of microsurgical techniques.7 At the end of these
courses, participants are awarded a “certificate of completion” as opposed to a “certificate of competence.”3,6 In high-
fidelity live animal model courses, assessment of microvas-
cular anastomosis success is easily quantifiable by means of
patency rate.8 These courses are both expensive and may be
ethically questionable when used for early training in
microsurgery.1 For beginner trainees, simulated low-
fidelity microvascular models can be as effective as higher
fidelity ones.9 Nonetheless, nonliving low-fidelity model courses, although more widely distributed globally, lack the value of feedback on skill levels when compared with live high-fidelity model courses.7 To address this gap, a feedback system for nonliving low-fidelity model courses is required.
The aim of this study was to develop a simple, valid, objective, and low-cost assessment method that can serve as a formative and summative evaluation index of microvascular anastomosis skill level in both living and nonliving simulation training environments.

Methods

Following Institutional Review Board approval, informed consent to participate in this study was obtained from students and postgraduate trainees who registered to take part in the 5-day basic microsurgical training course at Queen Mary, University of London. Two microsurgery educators who organize and instruct on established microsurgery courses and have extensive experience in experimental microsurgery were recruited to form a positive expert control (EC) group. All participants completed a questionnaire detailing previous microsurgical experience and their current level of training, and were placed into distinct cohorts.

A low-fidelity, cryopreserved rat aorta model was used for the purpose of this study.7 With the exception of the ECs, each participant performed four end-to-end microvascular anastomoses using a standard microsurgery set and 9/0 nylon suture; two anastomoses were performed on each assessment day (day 3 and day 5 of the course) using both biangulation and triangulation techniques (►Fig. 1). Each EC performed six end-to-end anastomoses on the same cryopreserved rat aorta model using their preferred technique. Three of the six anastomoses performed by the ECs were undertaken at one time point, followed by a 2-day delay before completing the other set of three (to match the assessment environment of the other candidates).

Fig. 1 Schematic diagrams showing (A) biangulation and (B) triangulation techniques.

Upon completion of each anastomosis, the cryopreserved rat aorta was cut longitudinally to view the intimal side and was photographed at 25× magnification. Anastomosis photographs were randomized using the MS Excel random number generator. Two expert reviewers, blinded to the participant's identity and the sequence of the anastomoses, analyzed the photographs and identified all anastomosis errors. Identified errors were further reviewed and indexed, and their association with the participant's level of training (pretest) and course progress (posttest) was assessed.

Validity Assessment

In broad terms, validity refers to the extent to which an assessment tool can measure what it aims to evaluate.10 In the context of microsurgery skills' evaluation, the assessment tool is required to measure not only skill level but also changes occurring in these levels as a result of training (skill acquisition) and/or pattern of practice and continuous development (skills' maintenance and loss).

Two types of validity were assessed: predictive validity and construct validity. Predictive validity evaluates the extent to which an assessment tool can detect future performance. In the context of this study, this parameter was used to evaluate the change in the levels of competence as a result of training. Construct validity evaluates whether an assessment tool can consistently identify with theoretical concepts; in this case, whether it can distinguish the participant with more microsurgical experience from that with less experience.





Fig. 2 (A) Images demonstrating an error free anastomosis, (B) an anastomosis with multiple errors, and (C–L) anastomoses highlighting the 10
identified errors: (C) Error 1: Disruption of the anastomosis line (dotted line) created by the opposed vessel ends. (D) Error 2: Inadvertently
catching the back- or sidewall when taking suture bites (white arrows) causing occlusion of the lumen. (E) Error 3: Placing of an oblique stitch
(white arrow) causing tissue distortion. (F) Error 4: Taking too wide a bite (white arrow) that causes tissue infoldment. (G) Error 5: Placing of a
stitch that does not go through the full thickness of the vessel (partial thickness stitch: white arrow), allowing some of the intimal layer to conceal a
part of the stitch. (H) Error 6: Unequal distancing of sutures (between parallel lines) that is more than twice what is expected. (I) Error 7: Causing a
visible tear in the vessel wall (white arrow). (J) Error 8: Excessively tight suture (white arrow) that strangulates the tissue. (K) Error 9: Threads left in
the lumen (white arrow). (L) Error 10: Allowing for a large edge overlap (white arrow).
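The 10 errors illustrated in ►Fig. 2 formed the checklist that the two blinded reviewers applied to each randomized photograph. Purely as an illustration of that randomization and blinding step (the authors used the MS Excel random number generator rather than code, and the file names and column labels below are hypothetical), the workflow might be sketched as follows:

```python
# Hypothetical sketch of the randomize-and-blind step: photographs are shuffled,
# given anonymous review codes, and split into a review sheet (for the blinded
# reviewers) and an unblinding key (kept by the course organizer).
import csv
import random

photos = [
    {"file": "P01_day3_a.jpg", "participant": "P01", "day": 3},
    {"file": "P01_day3_b.jpg", "participant": "P01", "day": 3},
    {"file": "P01_day5_a.jpg", "participant": "P01", "day": 5},
    {"file": "P02_day3_a.jpg", "participant": "P02", "day": 3},
    # ... one entry per photographed anastomosis
]

random.seed(42)          # fixed seed so the allocation can be reproduced
random.shuffle(photos)   # random presentation order for the reviewers

with open("review_sheet.csv", "w", newline="") as sheet, \
     open("unblinding_key.csv", "w", newline="") as key:
    sheet_writer = csv.writer(sheet)
    key_writer = csv.writer(key)
    sheet_writer.writerow(["code"] + [f"error_{i}" for i in range(1, 11)])
    key_writer.writerow(["code", "file", "participant", "day"])
    for index, photo in enumerate(photos, start=1):
        code = f"A{index:03d}"
        sheet_writer.writerow([code] + [""] * 10)  # reviewers tick the errors found
        key_writer.writerow([code, photo["file"], photo["participant"], photo["day"]])
```

The review sheet carries only the anonymous codes and the 10 error columns; the key linking each code back to participant and course day is set aside until all scoring is complete.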




To determine predictive validity, the average errors per anastomosis identified on day 3 and day 5 for each participant were compared using a two-way analysis of variance (ANOVA) with repeated measures, followed by the post hoc Bonferroni test for multiple comparisons. The average errors per anastomosis of the experts' first two anastomoses were compared with those of the following set performed 2 days later. To explore construct validity, a Kruskal–Wallis ANOVA was used to evaluate the average errors per anastomosis of the different cohorts, followed by the Dunn multiple comparison test. The use of each of these statistical tests was validated with a Shapiro–Wilk test for normality and with the Levene test for homogeneity of variance. Results are presented as the mean ± standard deviation, and statistical significance was assumed when differences were at p < 0.05.

Results

In all, 31 participants were recruited into the study: 11 were medical students (MSs), 9 were basic surgical trainees (BSTs), 9 were higher surgical trainees (HSTs), and 2 were ECs.

Anastomosis Lapse Index Errors Identification

A total of 128 anastomoses were analyzed. Ten anastomosis errors were identified, and these were indexed in the anastomosis lapse index (ALI) error list (►Fig. 2). These errors were disruptions of the anastomosis line; back-wall catches; oblique stitches that cause distortion; wide bites causing tissue enfoldment; partial-thickness stitches; vessel tears or cheese wiring; a large (more than twice the average distance) interstitch gap; tight stitches causing tissue strangulation; suture thread protrusion into the lumen; and, finally, large edge overlaps. The frequency of these errors varied, with distortion of the anastomosis line being the most commonly identified error and catching the back wall the least identified. ►Fig. 3 presents a summary of error distribution within the study sample.

Validity Assessment

Construct validity was demonstrated by comparing the average total errors per anastomosis between the study cohorts at two time points of the 5-day training course (day 3 and day 5 for the trainees) (►Fig. 4). On day 3 of the course, and using the first time point of the expert group, the average errors per anastomosis made by the various cohorts varied significantly, ranging from 7.4 ± 1.0 and 7.1 ± 1.3 errors per anastomosis for the MSs and BSTs down to 5.4 ± 1.2 and 2.0 ± 2.5 for the HSTs and ECs, respectively. This variability in average errors per anastomosis continued until the end of the course (day 5 for the trainees compared with the later time point of the expert group), with the EC cohort making fewer errors than the trainee groups (2.5 ± 1.6 errors per anastomosis for the experts vs. 3.8 ± 1.0, 3.9 ± 0.7, and 3.5 ± 1.0 errors per anastomosis for the MSs, BSTs, and HSTs, respectively). The variability in errors per anastomosis made by the different cohorts at their respective later time points was not statistically significant.

Fig. 3 Frequency of errors from all anastomoses examined. Black, high frequency; gray, medium frequency; white, low frequency.
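The between-group comparison above, and the day 3 versus day 5 comparison that follows, rest on the statistical workflow described in the Methods. The sketch below reproduces the shape of that workflow in Python on made-up error counts; as a simplification it substitutes Bonferroni-corrected pairwise Mann–Whitney U tests for the Dunn post hoc test and a paired Wilcoxon test for the two-way repeated-measures ANOVA, so it is an illustrative stand-in rather than the authors' actual analysis:

```python
# Illustrative only: hypothetical average errors per anastomosis (one value per
# participant) for four cohorts at two time points of the course.
from itertools import combinations
from scipy import stats

day3 = {
    "MS":  [7.2, 8.1, 6.9, 7.5, 7.8],
    "BST": [6.8, 7.4, 7.0, 7.3],
    "HST": [5.1, 5.9, 5.0, 5.6],
    "EC":  [1.4, 2.6, 1.9, 2.2],
}
day5 = {
    "MS":  [3.6, 4.2, 3.5, 3.9, 4.0],
    "BST": [3.8, 4.1, 3.7, 4.0],
    "HST": [3.2, 3.9, 3.3, 3.6],
    "EC":  [2.1, 2.8, 2.4, 2.6],
}

# Checks used to justify (or reject) parametric testing.
for group, values in day3.items():
    w, p = stats.shapiro(values)
    print(f"{group}: Shapiro-Wilk p = {p:.3f}")
stat, p = stats.levene(*day3.values())
print(f"Levene p = {p:.3f}")

# Construct validity: do the cohorts differ on day 3?
h, p = stats.kruskal(*day3.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Post hoc pairwise comparisons, Bonferroni-corrected (stand-in for Dunn's test).
pairs = list(combinations(day3, 2))
for a, b in pairs:
    u, p = stats.mannwhitneyu(day3[a], day3[b], alternative="two-sided")
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.3f}")

# Predictive validity: within-group change between day 3 and day 5
# (paired Wilcoxon as a nonparametric stand-in for the repeated-measures ANOVA).
for group in day3:
    w, p = stats.wilcoxon(day3[group], day5[group])
    print(f"{group}: day 3 vs day 5 p = {p:.3f}")
```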




By exploring this significant variability through post hoc tests, the average errors per anastomosis were found to depend on the level of expertise, with the EC cohort making significantly fewer errors than the MSs and BSTs (p = 0.019 and 0.047, respectively). Similarly, the senior trainee cohort (HSTs) made significantly fewer errors per anastomosis than the MS cohort (p = 0.031; ►Fig. 4).

Fig. 4 The average total errors per anastomosis on day 3 and day 5 between the groups were evaluated individually. (A) On day 3, statistical significance was seen between medical students and higher surgical trainees (p = 0.031), medical students and expert controls (p = 0.019), and basic surgical trainees and expert controls (p = 0.047). (B) On day 5, there was no statistically significant difference between the groups.

Predictive validity was assessed by comparing the average errors per anastomosis made by each cohort at different time points (day 3 and day 5 of the 5-day training course for the trainees and two different time points for the EC cohort). The MSs' average number of errors per anastomosis fell from 7.4 ± 1.0 on day 3 to 3.8 ± 1.0 on day 5. A similar significant reduction in errors was observed in the other trainee cohorts, with the average number of errors per anastomosis decreasing from 7.1 ± 1.3 to 3.9 ± 0.7 for the BSTs and from 5.4 ± 1.2 to 3.5 ± 1.0 for the HSTs. The reduction in average errors per anastomosis between day 3 and day 5 within the trainee groups was statistically significant (p < 0.0001; ►Fig. 5).

For the EC cohort, the average number of errors per anastomosis for the earlier set of consecutive anastomoses performed at the first time point was 2.0 ± 2.5. In the later set of consecutive anastomoses, the average number of errors per anastomosis was 2.5 ± 1.6. There was no statistically significant difference between the two sets (►Fig. 5).

Discussion

From replantation of amputated parts,11 microsurgery has evolved to facilitate free tissue transfer for wound and defect reconstruction and, recently, total composite tissue allotransplantation of the hand and face.12,13 As evidenced in this study and in others,5,6,14 success in microsurgery requires an extensive range of technical skills that are associated with a steep learning curve.3

In modern times, many microsurgical simulation models and educational interventions have been developed to facilitate assessment and development of proficiency in the bundle of technical skills contained in microsurgery.3,9,15 The aim is to place trainees in an environment that replicates, as much as possible, the experiences to be encountered in the operating room. Therefore, microsurgery training and assessment models can be categorized as higher or lower fidelity according to how closely they are able to recreate the training environment.

At the beginning of the trainee's learning curve, low-fidelity models are usually used to practice the basic skills of microsurgery, including instrument handling and suturing of latex gloves, simple polyurethane cards,16 and silicone tubing.17 As trainees' skills advance and the trainee has conceptualized the general principles of microsurgery, higher fidelity models are introduced in advanced courses that allow in vivo practice on live animal models such as anesthetized rats or other larger mammals.9 High-fidelity models not only have the advantage of recreating a realistic training environment simulating all aspects of in vivo microvascular anastomosis but also present an excellent feedback and assessment system in which both the trainees and their trainers can evaluate the outcome of skills acquisition when the end product, the anastomosis, flows or fails.

However, higher fidelity models present significant challenges to microsurgery education because of the ethical and logistical issues they raise, the lack of evidence supporting their use,9 or even their justification as valid educational interventions.18 Therefore, low-fidelity ex vivo models, such as chicken legs, porcine coronary arteries, and cryoprotected rat aortas, remain the most common training models used in basic microsurgery courses worldwide.7 However, other models have been developed and may soon be introduced.19–21




Fig. 5 The average errors per anastomosis of each group on day 3 were compared with those in day 5, except in the expert control whereby the
average errors for the first three anastomoses were compared with the second three anastomoses. Asterisks indicate significance within groups,
between the two time points. All apart from the expert controls show statistical significance between the time points (p < 0.0001).

The aim of this study was to devise and validate a nonliving microsurgery tool that can be used to evaluate and quantify skill levels in the early stages of microsurgery training for the purpose of formative and summative assessment. For this purpose, we used a cryoprotected rat aorta model, as it represents an example of the popular nonliving vessel models commonly used in basic microsurgery courses.7 By opening the vessel at the end of the procedure and examining the regularity and structural architecture of the end product, we were able to assess microsurgical skill through the identification of 10 anastomotic errors. The rationale behind the identification of these particular errors was based, on the one hand, on the end product assessment components present in most microsurgery global rating scales3,4 and, on the other, on the stipulation that such an error may lead to anastomotic failure such as leaking or thrombosis. Whether or not these identified errors have face validity as a true measure of in vivo anastomotic patency rate and/or microsurgery skill requires further investigation, as “the actual training device, system, simulator, or curriculum requires an assessment of its own value for effectiveness.”2

Nevertheless, to evaluate the validity of this identified 10-error list, the average errors per anastomosis of the participants were compared among a group of operators at various stages of their surgical training to see whether the findings exhibit construct validity. The distribution of errors committed by the various cohorts of these operators varied significantly on the third day of the 5-day microsurgery training course, when the novice MSs and BSTs were at a relatively early stage of their skills learning curves. Operators with greater microsurgery experience made fewer errors and performed better than those with less experience (construct validity).

Furthermore, following the level of skill as evaluated by this 10-error list from the third day of the basic microsurgery course to the fifth day, we observed a significant improvement in the performance of the trainee operators. In contrast, the expert cohort did not demonstrate any significant change in level between its two time points.

The system continued to show a relationship between the level of experience of the operator and the average number of errors per anastomosis made, with the MSs making the largest number of errors per anastomosis on the final day of the course and the experts maintaining their position as the ones with the fewest errors. However, statistically significant variance in performance was no longer demonstrated at the end of the 5-day basic course, indicating both the advantage of this system in demonstrating skill acquisition with respect to anastomosis architecture and its limitation to this context (anastomosis architecture) only, as we already know that the level of novice operators would not have reached that of their more senior counterparts by the end of a 5-day course.

Another advantage of this method in the assessment of basic microsurgery skills lies in its simplicity and availability. A common problem with microsurgical training courses is that competency is not formally assessed. Although trainees receive guidance and feedback throughout the course, their level of microsurgical competency is not usually formally assessed, and the student receives a certificate of attendance or completion of the course rather than a measure of their skill at the end of the course.3 Criteria-based observations and global rating scales (GRSs) can provide a valid means of assessing microsurgical competency.





Fig. 6 The anastomosis lapse index (ALI) assessment tool.

Grober et al22 developed (and Atkins et al23 later modified) the first GRS adapted for the assessment of microsurgical skills. GRSs incorporate several important outcome measures, such as tissue and instrument handling and time and hand motions, and are graded on a Likert scale.

The principal disadvantage of GRSs in this regard is their time-consuming and relatively complex requirement for special equipment (e.g., video recording for blinded assessment and a monitor display) as well as for the presence of a trained reviewer to provide feedback on the trainees' performance. Thus, GRSs represent a labor-intensive method that may not be simply and readily available to enhance learning and feedback in conventional 5-day courses.

Hand motion analysis addresses some of the challenges associated with GRSs. However, despite excellent construct, concurrent, and predictive validity, hand motion analysis has not been popularized globally because of its own set of disadvantages, not to mention its prohibitive cost.3 Other simpler alternatives for assessing microsurgical skill are still to be found.
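One candidate for such a simple alternative is the end product checklist evaluated here: the ALI card shown in ►Fig. 6 reduces to 10 indexed errors and a count. The snippet below is a hypothetical sketch of how that checklist and the per-anastomosis score could be captured programmatically; the error wording paraphrases ►Fig. 2, and none of the codes or values come from the study data:

```python
# Hypothetical representation of the ALI 10-error checklist and its score
# (number of indexed errors found in one opened anastomosis).
ALI_ERRORS = {
    1: "Disruption of the anastomosis line",
    2: "Back- or sidewall caught by a suture bite",
    3: "Oblique stitch causing tissue distortion",
    4: "Excessively wide bite causing infolding",
    5: "Partial-thickness stitch",
    6: "Interstitch gap more than twice the expected distance",
    7: "Visible tear in the vessel wall",
    8: "Overtight suture strangulating the tissue",
    9: "Suture thread left in the lumen",
    10: "Large edge overlap",
}

def ali_score(errors_found):
    """Return the number of indexed errors identified in one anastomosis."""
    unknown = set(errors_found) - set(ALI_ERRORS)
    if unknown:
        raise ValueError(f"Unknown error codes: {sorted(unknown)}")
    return len(set(errors_found))

# One blinded assessment per anastomosis (keys are blinded review codes).
assessments = {
    "A001": {1, 3, 6},        # three errors identified
    "A002": {1, 2, 4, 5, 7},  # five errors identified
    "A003": set(),            # error-free anastomosis
}

scores = {code: ali_score(found) for code, found in assessments.items()}
average = sum(scores.values()) / len(scores)
print(scores, f"average errors per anastomosis = {average:.1f}")
```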




Based on the principles of cognitive psychology, Kaufman et al24 noted that mastery of a technical skill is achieved in three critical stages: cognitive, associative, and autonomous. The cognitive stage refers to the understanding and development of the skill; the associative stage relates to repetition and practice to the point where the autonomous stage is reached, whereby performance of the skill no longer requires external cognitive intervention. The error list used in this study addresses the cognitive stage of this three-step process to proficiency. By supplementing guidance and feedback with this error index measure, course tutors may provide participants with a more conceptualized understanding of the errors they performed and, more importantly, how they may improve. This will, in theory, prime participants for the associative stage of skill development and reduce the steepness of their learning curve.

Therefore, the 10-error list is a simple, readily available tool that demonstrated very good construct and concurrent validity when used to evaluate the skill required to produce a structurally sound anastomosis at the early stages of microsurgery training. On this basis, the use of this index at this stage as a feedback tool to aid learning and/or facilitate assessment can be justified. To facilitate this, a specific assessment tool was generated by combining the 10 identified errors in a card named the ALI.

The ALI provides a pictorial representation of each error and a simple index to refer to the level of competency achieved (►Fig. 6). We have demonstrated earlier that deliberate practice is one of the most important tenets of a competency-based curriculum in microsurgery.9 By using the ALI during basic microsurgery training in either ex vivo or live animal models, participants can easily identify and learn from their errors upon evaluation of the lumen of their anastomosis, with or without the aid of a more senior trainer. This may lead to focused, error-centered deliberate practice that helps trainees avoid previously made errors, supporting competency-based accelerated learning.

Despite the potential benefits of this study, it is not without limitations. First, this study did not evaluate whether the index does indeed facilitate reduced anastomotic errors; although in theory it should, this is an important criterion that needs to be investigated. Second, concurrent validity, that is, the extent to which an assessment tool correlates with the “gold standard,” was only determined by means of the level of pre-existing microsurgery skill. It will be important to correlate the ALI with other “gold standard” indicators of microsurgery skill such as vessel patency, flap survival, or even hand motion analysis.3 Furthermore, predictive validity was assessed by assuming increased skill at day 5 versus day 3 of a 5-day microsurgery course; this assumption makes the index a semiquantitative tool that cannot replace trainer judgment or objective assessment.

Finally, the ALI's assessment deals with one set of parameters (errors performed and identified in the finished product of an end-to-end microvascular anastomosis) and does not take into account other parameters of microsurgery skill such as tissue handling, pedicle dissection, hand movements, and time to complete the anastomosis. These parameters are very important indicators of microsurgery skill at an advanced level and cannot be ignored.3,25,26 The ALI is an end product assessment tool for early microsurgery skill, and it should not be generalized or assumed to extend beyond this limited use. Thus, for comprehensive formative and summative feedback to trainees, it is important to couple the ALI with other validated methods such as skills GRSs and hand motion analysis. Further educational research will be necessary to address these limitations.

Conclusion

The ALI is a simple low-fidelity training model feedback system developed and validated for the teaching and assessment of early microsurgery skills. The system has good construct and predictive validity and has the potential to provide semiquantitative immediate feedback on microsurgery skill level and its development. Its simplicity and availability may facilitate early microsurgery training and positively contribute to trainees' learning experience.

Funding
This study was funded by STeLI London Deanery.

Note
This article was presented at the XII Congress of the European Federation of Societies for Microsurgery, Barcelona, Spain, April 3–5, 2014.

References
1 Reznick RK, MacRae H. Teaching surgical skills—changes in the wind. N Engl J Med 2006;355(25):2664–2669
2 Tsuda S, Scott D, Doyle J, Jones DB. Surgical skills training and simulation. Curr Probl Surg 2009;46(4):271–370
3 Ramachandran S, Ghanem AM, Myers SR. Assessment of microsurgery competency—where are we now? Microsurgery 2013;33(5):406–415
4 Temple CL, Ross DC. A new, validated instrument to evaluate competency in microsurgery: the University of Western Ontario Microsurgical Skills Acquisition/Assessment instrument [outcomes article]. Plast Reconstr Surg 2011;127(1):215–222
5 Chan WY, Srinivasan JR, Ramakrishnan VV. Microsurgery training today and future. J Plast Reconstr Aesthet Surg 2010;63(6):1061–1063
6 Lascar I, Totir D, Cinca A, et al. Training program and learning curve in experimental microsurgery during the residency in plastic surgery. Microsurgery 2007;27(4):263–267
7 Leung CC, Ghanem AM, Tos P, Ionac M, Froschauer S, Myers SR. Towards a global understanding and standardisation of education and training in microsurgery. Arch Plast Surg 2013;40(4):304–311
8 Starkes JL, Payk I, Hodges NJ. Developing a standardized test for the assessment of suturing skill in novice microsurgeons. Microsurgery 1998;18(1):19–22

9 Ghanem AM, Hachach-Haram N, Leung CC, Myers SR. A systematic review of evidence for education and training interventions in microsurgery. Arch Plast Surg 2013;40(4):312–319
10 McDougall EM. Validation of surgical simulators. J Endourol 2007;21(3):244–247
11 Kleinert HE, Tsai TM. Microvascular repair in replantation. Clin Orthop Relat Res 1978;(133):205–211
12 Dorafshar AH, Bojovic B, Christy MR, et al. Total face, double jaw, and tongue transplantation: an evolutionary concept. Plast Reconstr Surg 2013;131(2):241–251
13 Holmes WJ, Williams A, Everitt KJ, Kay SP, Bourke G. Cross-over limb replantation: a case report. J Plast Reconstr Aesthet Surg 2013;66(10):1428–1431
14 Hui KC, Zhang F, Shaw WW, et al. Learning curve of microvascular venous anastomosis: a never ending struggle? Microsurgery 2000;20(1):22–24
15 Singh M, Ziolkowski N, Ramachandran S, Myers SR, Ghanem AM. Development of a five-day basic microsurgery simulation training course: a cost analysis. Arch Plast Surg 2014;41(3):213–217
16 Usón J, Calles MC. Design of a new suture practice card for microsurgical training. Microsurgery 2002;22(8):324–328
17 Weber D, Moser N, Rösslein R. A synthetic model for microsurgical training: a surgical contribution to reduce the number of animal experiments. Eur J Pediatr Surg 1997;7(4):204–206
18 Grober ED, Hamstra SJ, Wanzel KR, et al. The educational impact of bench model fidelity on the acquisition of technical skill: the use of clinically relevant outcome measures. Ann Surg 2004;240(2):374–381
19 Bates BJ, Wimalawansa SM, Monson B, Rymer MC, Shapiro R, Johnson RM. A simple cost-effective method of microsurgical simulation training: the turkey wing model. J Reconstr Microsurg 2013;29(9):615–618
20 Sener S, Menovsky T, Maas AI. Use of bubble wrap for microsurgical training. J Reconstr Microsurg 2013;29(9):635–636
21 Nam SM, Shin HS, Kim YB, Park ES, Choi CY. Microsurgical training with porcine thigh infusion model. J Reconstr Microsurg 2013;29(5):303–306
22 Grober ED, Hamstra SJ, Wanzel KR, et al. Validation of novel and objective measures of microsurgical skill: hand-motion analysis and stereoscopic visual acuity. Microsurgery 2003;23(4):317–322
23 Atkins JL, Kalu PU, Lannon DA, Green CJ, Butler PE. Training in microsurgical skills: does course-based learning deliver? Microsurgery 2005;25(6):481–485
24 Kaufman HH, Wiegand RL, Tunick RH. Teaching surgeons to operate—principles of psychomotor skills training. Acta Neurochir (Wien) 1987;87(1–2):1–7
25 Moulton CA, Dubrowski A, Macrae H, Graham B, Grober E, Reznick R. Teaching surgical skills: what kind of practice makes perfect? A randomized, controlled trial. Ann Surg 2006;244(3):400–409
26 Ilie V, Ilie V, Ghetu N, Popescu S, Grosu O, Pieptu D. Assessment of the microsurgical skills: 30 minutes versus 2 weeks patency. Microsurgery 2007;27(5):451–454

