Acta Ophthalmologica 2017

Letter to the Editor

Virtual reality-based proficiency test in direct ophthalmoscopy

Nanna Jo Borgersen,1,2,3 Ann Sofia Skou Thomsen,1,3,4 Lars Konge,1,3 Torben Lykke Sørensen2,3 and Yousif Subhi2,3

1Copenhagen Academy for Medical Education and Simulation, Copenhagen, Denmark; 2Department of Ophthalmology, Zealand University Hospital, Roskilde, Denmark; 3Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; 4Department of Ophthalmology, Copenhagen University Hospital Rigshospital-Glostrup, Copenhagen, Denmark

doi: 10.1111/aos.13546

Editor,

Direct ophthalmoscopy is poorly mastered among young physicians and is avoided owing to a lack of proficiency and confidence in the skill (Gupta & Lam 2006). It is traditionally taught by an instructor guiding students through the procedure without actually seeing what the student sees through the ophthalmoscope. This limits the ability to deliver contextual feedback (Borgersen et al. 2016). Virtual reality simulators have the potential to deliver important contextual feedback not otherwise possible with traditional teaching. However, an evidence-based approach is essential when implementing costly simulators, as a lack of validity evidence may lead to expensive but irrelevant training at best and misdiagnosis at worst.

We conducted a prospective validity study of the automated assessments provided by the EyeSi Direct Ophthalmoscope Simulator (v1.4, VRmagic, Mannheim, Germany) on 13 novices (medical students with no previous experience in direct ophthalmoscopy) and eight experienced participants (consultants in ophthalmology). The simulator provided scores for different aspects of the fundoscopic examination through prespecified modules A to D (Fig. 1A–C). Based on expert recommendations, two modifications were made to improve the alignment of the assessment content with the construct: we removed questions covering general knowledge in ophthalmology from module D because they were unrelated to the skill of direct ophthalmoscopy, and we turned off the score adjustment for the percentage of total retinal area examined because experienced individuals examine in a more focused manner. All participants signed informed consent. The principles of the Declaration of Helsinki were followed. The ethics committee deemed that approval was unnecessary (waiver no. 16025789). Validity evidence was evaluated using Messick's framework (Thomsen et al. 2015):

Content

Experienced participants considered the simulator realistic and found that the training programme met the training needs for learning how to perform direct ophthalmoscopy (Fig. 1D).

Response process

A pilot study was conducted to address potential sources of bias in data collection. The simulator automatically collected all scores. The same instructor interacted with all participants using a predefined information script. To account for familiarization effects, participants underwent a 20-minute warm-up before data collection.

Internal structure

Internal consistency between the simulator modules was high (Cronbach's α = 0.79).

Relations to other variables

Each of the four module scores, the total score and the total training time differed significantly between novices and experienced participants (Fig. 1E).

Consequences

Based on the sum score of all modules, a pass/fail score of 2615 was calculated using the contrasting groups method (Fig. 1F). The consequences of applying this standard were that all novices failed and all experienced participants passed (Fig. 1G).

To our knowledge, this is the first study to explore and present validity evidence for a direct ophthalmoscope virtual reality simulator. Overall, we found strong evidence of validity: content was aligned with direct ophthalmoscopy skills as evaluated by experienced consultants. Response process was ensured through a pilot study and by standardizing all given instructions. Internal structure showed a high level of reliability, with intermodule reliability approaching the level considered eligible for certification (0.8) (Downing 2004). Relations to other variables showed that the experienced participants significantly outperformed the novices. The consequences of applying a pass/fail standard showed excellent discriminatory ability, with no false positives or false negatives.

The pass/fail standard of 2615 points represents the point at which a trainee has received sufficient training, whether trained on the simulator or otherwise. Hence, our results not only establish a proficiency test for direct ophthalmoscopy training on the simulator – the test can also be used for other purposes (e.g. examinations for medical students).

Future studies should explore the details of the weighting and composition of the modules. Whether shorter programmes provide a better balance between time consumption and reliability remains unexplored. Cheaper eye models exist and should be considered when appropriate and when relevant validity evidence exists (Wu et al. 2014).

Fig. 1. (A) The EyeSi Direct Ophthalmoscope Simulator is a virtual reality-based interactive simulator. (B) Modules A and B consist of cases in which the trainee practises handling the ophthalmoscope, examining the fundus and identifying objects. (C) Modules C and D consist of cases in which the trainee practises the interpretation and documentation of healthy and pathological retinas. (D) Content questionnaire including only the experienced participants, who found the content to align with the construct of interest. (E) Scores and training time in relation to experience showed positive discriminatory ability for all modules. (F) The contrasting groups method was applied to determine a pass/fail level of ophthalmoscopy skills: the proficiency level was defined by the intersection of the mean sum scores for the novices (blue) and the experienced (green). The curves show normal distributions using the means and standard deviations of both groups. (G) An investigation of the consequences of the determined pass/fail level showed that all novices failed (no false positives) and that all experts passed (no false negatives).

References

Borgersen NJ, Henriksen MJ, Konge L, Sørensen TL, Thomsen AS & Subhi Y (2016): Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos' content and approach to visualization. Clin Ophthalmol 10: 1535–1541.

Downing SM (2004): Reliability: on the reproducibility of assessment data. Med Educ 38: 1006–1012.

Gupta RR & Lam WC (2006): Medical students' self-confidence in performing direct ophthalmoscopy in clinical training. Can J Ophthalmol 41: 169–174.

Thomsen AS, Subhi Y, Kiilgaard JF, la Cour M & Konge L (2015): Update on simulation-based surgical training and assessment in ophthalmology: a systematic review. Ophthalmology 122: 1111–1130.

Wu GT, Kang JM, Bidwell AE, Gray JP & Mirza RG (2014): The use and evaluation of an inexpensive eye model in direct ophthalmoscopy training. JAO 7: e21–e25.

Correspondence:
Yousif Subhi, MD
Department of Ophthalmology
Zealand University Hospital
Vestermarksvej 23
DK-4000 Roskilde
Denmark
Tel: +45 4732 3900
Fax: +45 4636 2645
Email: ysubhi@gmail.com
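The two statistics reported in the letter – internal consistency (Cronbach's α) and the contrasting groups pass/fail cutoff – can be sketched as follows. This is a minimal Python illustration, not the authors' software: all scores below are made up (the letter does not publish raw data), and the density intersection is found here by simple bisection, which is one of several ways to implement the method.

```python
import math
from statistics import mean, stdev, variance

def cronbach_alpha(modules):
    """Cronbach's alpha from per-module score columns.

    modules: list of k lists, each holding one module's scores for the
    same n participants (sample variances used throughout).
    """
    k = len(modules)
    totals = [sum(scores) for scores in zip(*modules)]
    return k / (k - 1) * (1 - sum(variance(m) for m in modules) / variance(totals))

def contrasting_groups_cutoff(novices, experienced):
    """Pass/fail cutoff: the point between the two group means where the
    fitted normal densities intersect (novices assumed to score lower)."""
    mu_n, sd_n = mean(novices), stdev(novices)
    mu_e, sd_e = mean(experienced), stdev(experienced)

    def npdf(x, mu, sd):  # normal probability density
        return math.exp(-((x - mu) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

    lo, hi = mu_n, mu_e
    for _ in range(200):  # bisection: the densities cross between the means
        mid = (lo + hi) / 2
        if npdf(mid, mu_n, sd_n) > npdf(mid, mu_e, sd_e):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Entirely hypothetical total scores (13 novices, 8 experienced):
novices = [1800, 1950, 2100, 2200, 2050, 1900, 2150, 2000, 1850, 2250, 2120, 1980, 2060]
experienced = [3000, 3150, 2900, 3100, 3050, 2950, 3200, 3080]
cutoff = contrasting_groups_cutoff(novices, experienced)  # lies between the group means
```

With real data, any participant whose sum score falls below the intersection point fails; the letter's cutoff of 2615 was derived in this way from the observed novice and consultant distributions.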
