Article in Advances in Health Sciences Education · August 2014
DOI: 10.1007/s10459-014-9534-4 · Source: PubMed



Faculty development for educators: a realist evaluation

Olanrewaju O. Sorinola • Jill Thistlethwaite • David Davies •

Ed Peile

Received: 23 June 2013 / Accepted: 22 July 2014 
© Springer Science+Business Media Dordrecht 2014

Abstract The effectiveness of faculty development (FD) activities for educators in UK
medical schools remains underexplored. This study used a realist approach to evaluate FD
and to test the hypothesis that motivation, engagement and perception are key mechanisms
of effective FD activities. The authors observed and interviewed 33 course participants at
one UK medical school in 2012. An observed engagement scale scored participants’
engagement while interviews explored motivation for attendance, engagement during the
course and perception of relevance/usefulness. Six months later, using the realist frame-
work, 12 interviews explored impact on learning outcomes/behavioural changes, the
mechanisms that led to the changes and the context that facilitated those mechanisms. The
authors derived bi-axial constructs for motivation, engagement and perception from two
data-sources. The predominant motivation was individualistic rather than altruistic with no
difference between external and internal motives. Realist evaluation showed engagement
to be the key mechanism influencing learning; the contextual factor was participatory
learning during the course. Six months later, engagement remained the key mechanism
influencing learning/behavioural changes; the context was reflective practice. The main
outcome reported was increased confidence in teaching and empowerment to utilise previously unrecognised teaching opportunities. Individual motivation drives FD participation; however, engagement is the key causal mechanism underpinning learning, as it induces deeper learning with different facilitating contexts at various time points. The metrics of motivation, engagement and perception, combined with the realist framework, offer FD developers the potential to understand ‘what works for whom, in what context and why’.

Keywords Engagement · Faculty development · Medical education · Medical school · Motivation · Perception · Realist evaluation

O. O. Sorinola · D. Davies · E. Peile
Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK

J. Thistlethwaite
University of Technology Sydney, Sydney, Australia

O. O. Sorinola et al.


The central mission of every medical school is education; however, most teachers have not
received formal teacher training (McLean et al. 2008). Doctors are experts in what they
teach but most have had little or no training in how to teach, hence the need for faculty
development (FD) with its principal purpose of improving teaching and ultimately, patient
care (Boelen 1999; Branch et al. 1997; MacDougall and Drummond 2005).
Faculty development is defined as a planned programme to prepare institutions and
faculty members for their roles in the areas of teaching, research, administration and career
management (Bland et al. 1990). FD in medicine has taken place since the late 1970s
stimulated by the growing demand for more innovative teaching (Herrmann et al. 2007).
While there has been continued emphasis on teaching in FD, there is now broader coverage
of other faculty roles such as organizational and leadership development (McLean et al.
2008). Attendance by people from over 70 countries at the 1st international conference on
FD in the health professions held in Toronto, Canada in May 2011 reflected an increasing
emphasis on FD. The conference highlighted the need for systematic, developmental FD in
medical education (2011). The 2nd conference on FD in August 2013 was held in Prague,
Czech Republic (2013).
Although the current widespread investment in FD is predicated on the belief that it
enhances the effectiveness of teaching and student learning, evidence of long-term impact
is still limited (Steinert et al. 2006). The fact that there are only a few generalizable
evaluations of FD activities available to help FD developers remains a notable finding
(McLeod and Steinert 2010).
Out of context, it is pointless to focus on answering the question ‘is FD effective?’
Sorinola and Thistlethwaite (2013), in a systematic review, have shown the importance of
context and environment on the success of educational initiatives. Context is important, as
FD needs to address both individual and organizational needs; understanding the context of
an intervention would be beneficial to those who might wish to replicate successful
interventions and provide clarification on how and under what conditions an intervention works.
Although design, objectives and measured outcomes have varied between studies, most
have reported a positive correlation with motivation and engagement influencing learning
and study behaviour, academic performance and success (Sobral 2004; Wilkinson et al.
2007). Sorinola et al. (2013) in a review of educators’ learning found motivation and
engagement to be very important in adult learners. Mattick and Knight (2009) in the UK
found that motivation to be a good doctor and avoid harm to patients is related to a
vocational approach to study, suggesting that learners are motivated to gain knowledge that
will help them in their practice of medicine. Kusurkar et al. (2011) constructed motivation
as an independent and dependent variable and suggested that the learning environment
plays an important role in enhancing motivation.
Therefore we suggest that realist evaluation, which uses a ‘context-mechanism-outcome
pattern configuration’ (Pawson and Tilley 1997), is a more useful evaluative framework, as
it answers ‘what works, how, in which conditions, and for whom in FD’, rather than ‘does
FD work?’ Realism utilises contextual thinking to address the issues of ‘for whom’ and ‘in
what circumstances’ a programme will work, as it is axiomatic that certain contexts will be
supportive to the intervention and some will not (Pawson and Tilley 2004). Realist evaluation is now considered one of the ‘theory-driven inquiry’ schools (Marchal et al.
2012). The process of how subjects learn from, interpret and act upon the intervention
stratagem is known as the mechanism, while context describes those features of the conditions in which interventions are introduced that are relevant to the operation of the
programme mechanism.
This study focuses on evaluating FD using the realist framework. Our hypothesis is that
motivation, engagement and perception are three key mechanisms that influence learning
that derives from FD activity. Motivation, engagement, and perception as defined by various
authors (Martin 2007, 2008; Reeve 2001; Schaufeli et al. 2002) are detailed in Box 1. The
study was designed to address two key objectives:
1. To test our hypothesis that motivation, engagement and perception are the mechanisms
involved in effective learning during FD activities and, if so, to explore which is the
key mechanism of the learning process.
2. To reassess, by realist evaluation, the context, mechanism and outcome as reported by
medical educators six months following the FD activity.


Intervention and setting

This study was carried out with participants attending a 3-day FD course accredited by the
UK Higher Education Academy (HEA) at the Warwick Medical School. There were 16
participants in the January cohort and 17 in the April cohort attending the ‘Essentials of
Clinical Education’ course. In total there were thirty physicians and three nurses. The
taught element of the course was followed by a post course assessment which included a
self-directed learning component (whereby participants had to decide, read, reflect and
write an action plan on how they wanted to develop further as educators). Further details of
the course are described in Table 1.

Observation and interviews during the course

Thirty-three participants were observed and interviewed during the 3-day course. Demo-
graphic data including gender, seniority, funding source, job type and previous attendance
at a teaching course were collected on the 33 participants. An observed engagement scale
piloted on a previous course had been shown to accurately represent participants’

Box 1 Definitions of motivation, engagement and perception (Martin 2007, 2008; Reeve 2001; Schaufeli
et al. 2002)
Motivation is defined as a set of interrelated beliefs/emotions that influence and direct behaviour.
Motivation is the impetus behind what a person actually does, the interior mental state that leads to
action. It influences what people choose to do, how well, and for how long. Motivation is generally
thought to be that which gives behaviour its energy and direction
Engagement is defined as a positive and fulfilling learning-related state of mind that is characterised by
vigour, dedication and absorption. Engagement is the link between the inner mental states of
motivational and prosocial orientation and learning success. It involves a sustained,
thoughtful attention to learning with cognitive, behavioural and emotional components
Perception (from the Latin perceptio, percipio) is defined as the organization, identification, and
interpretation of information in order to represent and understand the environment or context in which
it happens. It is an intuitive recognition or appreciation of the qualities (moral, psychological, aesthetic,
etc.) associated with information from the external world


engagement. The observed engagement scale was devised by condensing three descriptor
categories (behavioural, cognitive and emotional) into a 5-point engagement scale, where 1
represents no engagement and 5 represents full engagement (Table 2). Engagement scores were filled
in by the observer, not by participants. The scores for individual participants were recorded
session by session over the 3-day period to see the trend in individual participants’
engagement over the 3 days, as well as to determine if there was a group effect (e.g. low
engagement) for any particular session. The scale was not given to participants, however
participants were asked at intervals (breaks, lunch, end of the day, etc.) to describe in their
own words their engagement during the session(s). The observations focused on individual
participants as well as the interaction among the participants (e.g. at their tables, during
tasks, discussions, etc.), but the key focus of the observation was on participants’
engagement. Observation was carried out by the same researcher every day to maintain
consistency and reduce inter-observer variation. Observation of two groups (January and
April) allowed comparative analysis and improved validity. Descriptive statistics including
mean, median and range are reported for each group as well as the overall observed
engagement score mean for the 33 participants.
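The per-cohort summaries above (mean, median, range) can be sketched with Python's standard `statistics` module. The scores below are illustrative placeholders only, not the study's raw session-by-session data, which are not published here:

```python
import statistics

def summarise(scores):
    """Mean, median and range for a list of observed engagement scores (1-5)."""
    return {
        "mean": round(statistics.mean(scores), 2),
        "median": statistics.median(scores),
        "range": (min(scores), max(scores)),
    }

# Illustrative scores only: 16 (January) and 17 (April) invented values
january = [4, 5, 3, 4, 5, 4, 2, 4, 5, 4, 3, 4, 5, 4, 1, 5]
april = [3, 4, 2, 3, 4, 3, 1, 4, 3, 3, 4, 2, 5, 3, 4, 3, 3]

print("January:", summarise(january))
print("April:", summarise(april))
```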
Participants were interviewed during the course to explore their motivation for attendance, their engagement during the sessions and their perception of the course's relevance and usefulness. The interview data of the 33 participants was then coded using
descriptive and versus coding (Saldaña 2013) to derive the bi-axial constructs for each
mechanism under exploration: motivation, engagement and perception. Fisher’s exact test
was used to analyse the descriptors in each construct for statistical significance.
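Fisher's exact test on a 2×2 table can be computed directly from the hypergeometric distribution. Since the paper does not report its full contingency tables, the sketch below uses an illustrative table; it is a minimal standard-library implementation of the two-sided test, not the authors' analysis code:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    With the margins fixed, cell (1,1) follows a hypergeometric
    distribution; the two-sided p value sums the probabilities of all
    tables no more likely than the observed one.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def table_prob(x):
        # P(cell (1,1) == x) under the hypergeometric distribution
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    observed = table_prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    total = 0.0
    for x in range(lo, hi + 1):
        p = table_prob(x)
        if p <= observed * (1 + 1e-9):  # tolerance guards floating-point ties
            total += p
    return total

# Illustrative table only -- not the study's data
print(round(fisher_exact_2x2(3, 1, 1, 3), 4))  # -> 0.4857
```

This is the same two-sided convention used by common statistics packages: all tables with the observed margins whose probability does not exceed that of the observed table contribute to the p value.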

Post course interviews

Six participants from each course were selected for follow up interviews 6 months after the
course. Selection was based on engagement score (above average, average and below
average), seniority and gender. The one-hour, in-depth, semi-structured interviews (based on the key research questions) were audiotaped to explore the longer-term impact on learning/behavioural changes. They were transcribed and analysed qualitatively. We used
an eclectic coding style involving the application of a combination of descriptive, evaluation and causation coding to segments of data, revising the code list as patterns in the data
became more apparent as well as having ‘conversations’ with the data (Bazeley 2007;
Saldaña 2013). This involved moving back and forth between the data sources, the codes,
and the analytic framework (realist evaluation). With this concept serving as our inter-
pretative lens, we iteratively formed assertions to capture the insights gained from the data
about the impact of the programme on individuals, and more importantly what aspect
works and in what context.



Results

The two groups were similar. The demographic data of the 33 participants are summarised
in Table 3. For the January cohort, the mean observed engagement score was 3.88, and
median was 3.7 (range 1–5). For the April cohort, the mean was 3.22, and median 3.4
(range 1–5). There was no statistically significant difference between the two groups. The
median values showed that the majority of participants were ‘partially engaged’ to ‘mostly

Table 1 Details of the Essentials of Clinical Education course

Day 1
1. Getting to know you: welcome, introductions and overview of the module
2. Being a clinical educator
3. How to teach a practical skill
4. Research and scholarship in clinical education
5. Assessment, evaluation and feedback
6. Reflection and feedback on day 1

Day 2
7. On-the-job teaching
8. E-learning
9. Planning a teaching session
10. Strategies for active and interactive learning
11. Reflection and feedback on day 2
Includes guidance on module assessment

Day 3
12. Teaching and learning in large groups
13. Portfolios for learning and assessment
14. Facilitating learning in small groups
15. What next? Developing and evidencing teaching practice
16. Reflection and feedback on day 3
17. Module evaluation session
Includes portfolio assessment Q&A session

Post-course assessment
Assessment is based on a reflective teaching portfolio comprising seven key tasks:
1. Teaching observations: two teaching observations with reflective comments (one as teacher, one as observer)
2. Giving feedback to learners: reflective comments on giving feedback to students on at least three occasions
3. Evaluation: evidence of collecting and using feedback from students
4. Lesson planning: ensuring effective learning in clinical settings
5. Appropriate use of technology to enhance learning: review of a learning event
6. Commitment to the HEA Professional Values Statements: a reflective overview
7. Action plan: future professional development as an educator

Table 2 Observed engagement descriptors and scale
Category Descriptors

A. Behavioural Vigour Participation
Intensity Attention
Absorption Task completion
B. Emotional Dedication Vitality
Enthusiasm Anxiety
Interest Boredom
C. Cognitive Investment Metacognition–Summarising
Interactivity Inquiry–Asking/answering questions
5-Point observed engagement scale
1. No Engagement: Inattentive and unresponsive
2. Minimal Engagement: Emerging/fleeting low level of engagement, some evidence of awareness
3. Partial Engagement: Emerging engagement but not sustained or unpredictable
4. Mostly Engaged: Engagement occurs the majority of the time
5. Full Engagement: Completely engaged

Table 3 Demographic details of the 33 participants

                          Jan cohort   Apr cohort   Total
Gender
  Male                    8            6            14
  Female                  8            11           19
Previous teaching course attendance
  Yes                     5            10           15
  No                      11           7            18
Funding source
  Self                    6            6            12
  Others                  10           11           21
Career stage^a
  Early                   4            3            7
  Mid                     6            6            12
  Established             6            8            14

^a Early: training/less than 5 years post qualification; Mid: 5-10 years post qualification; Established: more than 10 years post qualification

engaged’ during the sessions. The overall observed engagement score mean was 3.46 for
the 33 participants. Two participants (one from each cohort) had the lowest scores in their
cohort. Their median score of 2, compared with the cohort medians (3.7 in Jan and 3.4 in
Apr), was statistically significant. These two participants are discussed further under
engagement constructs.


Motivation, engagement and perception constructs

The numbers reported here are the numbers of participants coded under each descriptor.
Motivation was analysed on a bi-axial construct: external versus internal and individual-
istic versus altruistic. An example of participant response coded as altruistic is: ‘‘Teaching
is like performing, it’s quite a good rush in a way. You get that sort of warmth in you from
helping the students, and if your students like it, then it’s a good feeling’’ (10A–code for
participant number 10 in the April cohort). Another is, ‘‘Teaching is something that I really
enjoy that’s why I’ve ended up doing it. I think it is an important part of everyone’s job… I
think you have to do it well and actually one of the things about teaching is that it reminds
you, about being the best doctor doesn’t it really?’’ (17A).

[Fig. 1 Motivation, engagement and perception constructs. (a) Motivation: individual motivation is the most common. (b) Engagement: engaged 31, unengaged 2. (c) Perception: most participants found FD useful and relevant]
For the January cohort, the predominant motivation for attending the course was
individualistic, i.e. related to personal need (n = 14 participants vs. 2 participants altru-
istic), with no difference between external (9) and internal (7) motivating factors. Similarly,
for the April cohort, the motivation for attending the course was predominantly individualistic rather than altruistic (16 vs. 1), again with no difference between external (9) and
internal (8) motivating factors. Thus, in total a statistically significant majority of 30 out of
33 (p = 0.012) participants studied across both courses reported a predominantly indi-
vidualistic motivation for attending. However, as Fig. 1a shows, the external (18) and
internal (15) motivating factors were similar (p = 0.579). Further analysis showed no
correlation between motivating factor, funding source, previous attendance on a teaching
course or seniority status.
For engagement, the bi-axial constructs were informative versus repetitive and inter-
active versus intense. In the January cohort, 12 participants were fully engaged in the
informative/interactive quadrant, three participants were partially engaged (informative/
intense) and one participant was unengaged reporting the course to be repetitive. In the
April cohort, 13 participants were fully engaged in the informative/interactive quadrant,
three partially engaged and one unengaged. Overall, 31 participants found the sessions
engaging (informative quadrants) and only two were unengaged (Fig. 1b); this was
statistically significant (p = 0.011). Participatory learning utilising an interactive
multimodal process was the most commonly reported contextual factor in maintaining
engagement during the course.
The bi-axial constructs for perception were useful versus unproductive and relevant
versus irrelevant. Most participants (29: 13 from January and 16 from April) perceived the
FD for teaching as useful/relevant on the basis that it gave them the confidence to practice
various teaching methods and improve their teaching skills. Two participants (one from each
cohort) felt some of the sessions were partly irrelevant to them but overall found the FD
useful; only two participants (one from each cohort) found the course irrelevant and
unproductive. Figure 1c summarises the perception construct showing 31 participants found
the FD activity useful and only two found it unproductive (p = 0.011). There was good
correlation between the two data sources (the observed engagement scale and the constructs
derived from the interview data) as the two participants describing the session as irrelevant/
unproductive were the two already identified above as appearing disengaged to the observers.

Realist evaluation

The interview data was coded under Context (C), Mechanism (M) and Outcome (O) in line
with the CMO theory development of the realist framework (Pawson 2013). In the sections
below, we have summarised the findings under each subheading and further qualitative
data has been provided in the Appendix.
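The CMO coding described above can be represented as a simple record. A minimal sketch follows; the class `CMOConfiguration` and its field names are our illustrative construct, with values paraphrased from the study's summary row in Table 4:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CMOConfiguration:
    """One context-mechanism-outcome pattern configuration (Pawson and Tilley 1997)."""
    context: tuple[str, ...]    # conditions that enable the mechanism to fire
    mechanism: str              # what makes the programme work
    outcome: tuple[str, ...]    # the resulting changes

# The study's summary configuration, paraphrased from Table 4
fd_course = CMOConfiguration(
    context=("participatory learning", "reflective practice"),
    mechanism="engagement (informative and interactive)",
    outcome=("increased confidence in teaching", "empowerment"),
)

print(" + ".join(fd_course.context), "+", fd_course.mechanism,
      "->", ", ".join(fd_course.outcome))
```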


Contexts

Context refers to those features of the conditions in which interventions are introduced that
are relevant to the operation of the programme mechanism, i.e. facilitate the effectiveness
of the programme. During the course, there were many contextual factors to consider
including peer learning within a small group of highly educated individuals, experienced
facilitators giving feedback to participants, and lack of pedagogy (17 participants during
the course interviews reported lack of pedagogical knowledge on education and learning
theories). Further details of the contextual factors are provided in Table 4 and qualitative
interview data quotes are provided in the appendix. However, the key contextual factor
influencing learners’ engagement was participatory learning, as the majority of participants reported that the interactive mutual learning process with actual practice and
learning from each other’s experience was key to their engagement throughout the course.
It was also noted in the contemporaneous observer notes that participants were involved in
interactive tasks and discussions in their groups and tables at least 50 % of the time.
Quotes from two participants clearly expressed this participatory context:
So I enjoyed the sessions as they were short and participatory. I would have found
the sessions very dry and quite unhelpful if they were totally lecture based and the
delivery was not participatory (3A).
I need to be doing it and learning it, and that works well for me. I’m not one of those
people who can say ‘oh I can remember that from 10 years ago’…, for me to remember
things I need to be doing it and learning it, that’s me, so that’s why it works, that’s how
it works for me. Whether it works as well for everybody I don’t know (2J).

Six months later, the key contextual factor maintaining learners’ engagement with
teaching was reflective practice. Initially this was through the post-course assessment


Table 4 Realist evaluation: contexts (C), mechanisms (M) and outcomes (O)

Contexts
Learning with peers; Participatory learning; Reflective practice; Positive experience of FD; Feedback; Lack of pedagogy

Mechanisms
Motivation: career development; personal teaching interest; personal development; specific instructional skills; theoretical knowledge; optional module
Engagement:
  Multimodal approach: variety of teaching methods used to appeal to different learners (visual, auditory, kinaesthetic)
  Metacognition: reflective approach making participants think
  Interactivity: highly interactive, task based
Perception: relevance

Outcomes
Confidence: defined in this context as the ability to be certain; mentioned in three key areas: delivery, design and discussion of teaching episodes
Empowerment: in managing and/or utilising teaching

CMO: Context + Mechanism = Outcome
Realist theory: FD using participatory learning and reflective practice (C) + engagement, informative and interactive (M) = increased confidence in teaching and empowerment (O)

(reflective portfolio, teaching practice, peer observation, feedback, etc.), with action plans
developed by the participants that were relevant to their own teaching, and later reflection
was fostered through experiential practice in their working environment. This was
expressed by all twelve participants interviewed at six months as the single most important
factor that kept them further engaged and learning beyond the course. Below are detailed
quotes from three participants:
But really the big thing that changed my practice was the assessment. Err so having,
you know going away spending all that time doing the reading myself, actually
reflecting, you know putting something in practice into your own teaching, thinking
about it, is the thing that’s really changed my… so if one thing has changed my
practice it was the reflective process of assessment (7A).
Going through my personal reflection side of it, it was important to consolidate and
further my knowledge. In terms of knowledge through attending the course alone, I
gained, but even more so through the write up, the directed reading and the reflective
practice we did as part of the learning portfolio. I think the reflective practice helped
me to gain even more to further myself as it encourages you to reflect more per-
sonally on yourself as an educator rather than just within a group setting, which is
more generalised. The personal reflection that comes in afterwards with the learning
portfolio is where the individual development and direction took place. Without the
reflective practice at the end the application of your knowledge would not be as
thorough. You are forced to focus individually on your own teaching environment so
it helps you to put into practice and apply it to your own role. By being forced to go
through these hoops if you like, you are forced to apply that knowledge. Kind of
shape your teaching skills through your own experience and understanding (6 J).
Even two participants, who initially found reflection a difficult concept to understand or
accept, later became advocates of reflective practice and eulogised its importance to
their learning.
The issue of reflection you know, it wasn’t sort of a bad word that I was not interested in
knowing about yeah. But you know that is something that has been forced into me.
Initially it was being forced into me but I have come to the understanding that it is
important to sit back and think about, how did it go? What else can be done? Yeah, so I
do know that it is important. For me to consolidate my understanding, it was useful to
do the reflective practice. The reflective practice helped my learning because I had to
actually sit back and then think about how I could do it as I had to sort of, plan
something myself, there in practice. I needed to plan this; I mean the whole teaching
session. So I had to put it into practice… so that was important (21A).


Mechanisms

Mechanism refers to that aspect of the programme which makes it work. Motivation was
important in the decision making to attend the course. Four participants (two from each
cohort) were attending the course on a mandatory basis because of their contractual
requirement (two lecturers and two GP clinical tutors); the rest were attending voluntarily.
Motivation was mostly individualistic and the top three individual motivations were career
development (improving CV, obtaining a qualification, getting on in the health service),
personal teaching interest (enjoy teaching, interest in teaching, opportunity to update) and
personal development (becoming more confident, relief in having some training and validation that what they are doing is right), as detailed in Table 4. This last point, for which we
coined the term ‘peer validation’, was neatly summed up by one participant: ‘‘I don’t know
if I am teaching correctly, I enjoy teaching, but is my teaching effective? That is a
completely different kettle of fish. Am I doing it the way I‘m supposed to?’’ (24A). While
motivation was important for attendance, other factors were more important for the
learning process once the course got underway. As noted in the observer comments and the
interview data, when asked what was responsible for their learning during the FD, par-
ticipants’ responses no longer reflected the aforementioned individual factors as the main
reason for learning during the course.
The causal mechanism that extended, deepened, and affirmed learning during and after
the course was engagement. Further exploration of engagement showed three key com-
ponents to be important: multimodal, metacognitive and interactive approach (Table 4).
The multimodal approach, including mini-lecture, role-play, participant presentations and
small group discussions, was commented on by various participants as very engaging. The

Realist evaluation of FD

contemporaneous observer comments also documented that participants were involved in a
lot of task-based activities, which they found interesting, enjoyable and interactive. They
readily reported this as a key factor in maintaining engagement throughout the course
during their interviews. Towards the end of each day there was time spent in reflective
period to check whether the learning outcomes have been achieved using metacognitive
skills including questioning, summarising, etc. As described above under contexts, edu-
cators’ engagement with teaching was activated by different contextual factors during the
course and six months later.


Outcomes

The main outcome was the increased level of confidence in teaching, which empowered the
educators to recognise and utilise various teaching opportunities (Table 4). Confidence was
viewed as the ability to be certain and this was expressed in various ways by interviewees,
especially in terms of delivery methods, design of teaching sessions and discussion on
teaching. When explored further, confidence was independent of the seniority or level of
the participant. A possible explanation is that confidence was not to do with content
knowledge but with confidence in other aspects of the teaching process.
In the contemporaneous observer notes, we commented that in both cohorts (with very
few exceptions) there seemed to be a degree of doubt and uncertainty on the part of the
educators regarding the design and delivery of their teaching. Some later expressed this as
a feeling of uncharted territory regarding the principles underpinning the learning
process, hence the relief, later followed by confidence, in this process.
I feel as an educator, I feel a lot more confident in myself and not necessarily with the
content of you know what I am teaching because my knowledge of that has not
changed (14A).
It definitely did improve my confidence like I said before; I do feel a confident
teacher now. I also take the responsibility for updating myself but perhaps the way
in which I do it and the dialogue that I can have with colleagues regarding the
education process, I feel much, much more confident (6J).
I felt better because as I said I had more confidence, I felt more relaxed (during
teaching) and as one of my colleagues said, I was more in control you can say (18J).
But to be confident as a teacher does matter you know, the students don’t find it easy
to learn from a teacher who doesn’t appear to be sure (12A).


Discussion

Main findings

There was good correlation between the three data sources (the observed engagement
scale, the interviews during the course and the 6 months post course interviews) used in
exploring the three mechanisms under evaluation. The participants who appeared
disengaged to the observers also had the lowest observed engagement scores in their
cohort, and their perception of the course was that it had been unproductive. This
supports our assertion that motivation, engagement and perception are the underlying
mechanisms key to participants’ learning. While the importance of motivation, engagement and perception in

O. O. Sorinola et al.

learning, behaviour and education is well researched in general education, this is not the
case in medical education, nor specifically in FD.
A significant finding in our study is that the main motivating factor is ‘individual’ rather
than ‘altruistic’, whereas there is no difference between ‘external’ and ‘internal’
motivation. Previously, the focus has been on internal and external motivators as the key drivers.
Knowles et al. (1998), and Merriam and Caffarella (1999), suggested that while adults are
responsive to some external motivators (better jobs, promotions, higher salaries, etc.) the
most potent motivators are internal such as the desire for increased job satisfaction, self-
esteem, quality of life, etc. However, others have argued that to construe motivation as a
simple internal or external phenomenon is to deny the very complexity of the human mind
(Brissette and Howes 2010; Misch 2002). Our view is that motivation is a multifaceted,
multidimensional, mutually interactive and dynamic concept. Thus a person can move
between different types of motivation depending on the situation.
While motivation was important, we found engagement to be the key mechanism that
made the programme work. Engagement is the bridge between the learner and their
learning target, i.e. between the inner mental states of motivational orientation and
learning success. Without engagement, there is no deep learning (Hargreaves 2006),
effective teaching, meaningful outcome, real attainment or progress (Carpenter 2010).
Although the set of social rules, norms, values and interrelationships present in an
institution contribute to the prevailing contextual conditions, we found two key con-
textual factors that activated engagement. During the course, the participatory learning
approach using a multimodal, interactive style was the key context, while after the course
it was reflective practice (experiential practice with peer observation of teaching and
feedback). This is probably not surprising if reflective practice is viewed as a
metacognitive process in which individuals engage to explore their experiences in order to
develop a greater understanding of both self and the situation. According to the par-
ticipants, the experiential practice followed by reflection and development of action
points became the driver for individual learning. This self-directed learning (or directed
self-learning) is a type of self-regulated learning activity which has been shown to lead
to a deeper approach to learning and improved performance (Baumeister and Vohs
2004). Furthermore, engagement was linked with participants’ perception of course
usefulness as they believed it would improve their teaching skills.
While the triad of motivation, engagement and perception provides useful mechanisms to
explore in evaluating FD, the context-mechanism-outcome (CMO) realist framework allowed us to
further explore which mechanism was most influential in the learning process. Our study
has shown that engagement is the key mechanism which led to the principal outcome of
increased confidence in teaching and empowerment to utilise previously unrecognised
teaching opportunities as well as to practise various teaching methods. Therefore,
engaging the learner is the most important thing an educator can do, as there is no deep
learning without engagement. Furthermore, understanding the contextual factors that
activate engagement is critical. Our realist theory for faculty development is that
participatory learning and reflective practice (C) facilitate engagement (M), which leads
to increased confidence in teaching and empowerment (O).

Strengths and limitations

Our study had a number of strengths: the use of multiple data sources (observations and
interviews) with two similar cohorts of educators. Rather than limiting the qualitative data
collection to interviews, we carried out direct observation, which is fundamental to

Realist evaluation of FD

understanding the context and setting, and a valuable source of information (Safman and
Sobal 2004). The process of observation enabled us to understand the context of FD, to be
inductive, to move beyond perception-based data (e.g. opinions in interviews) and access
personal knowledge. The case for such observational data in the understanding of causal
processes is powerful (Cohen et al. 2011), as it is sensitive to contexts and demonstrates
strong ecological validity (Moyles 2002). Using observational data also answers some of
the criticisms levelled at qualitative research by Silverman (2006) in that interviews (which
already involve a great deal of interpretation on the part of the interviewee) are often the
sole method employed to gather data, which is further interpreted by the researcher. In
addition, the use of the observed engagement scale to standardise data collection provided a
coherent and consistent structure to the observation, enhancing methodological quality.
Lastly, inductively deriving the constructs for the three mechanisms from the participants’
data gave credence to what was important to them as adult learners.
However, the study has limitations. Firstly, the engagement scale was based on
descriptors that are external and observable; hence it could be argued that internal
engagement would not be captured. While this may be so, we believe the multidimensional
descriptors used over the 3-day period were enough to capture a participant’s engagement
pattern. This view is supported by other authors: although the past decade has produced a
wealth of research on engagement, much of the early work defined engagement in one of
three unidimensional categories: behavioural, cognitive or emotional (Fredericks et al.
2004). The field is now shifting to a more multidimensional view that combines more than
one of these dimensions within a single study (Yonezawa et al. 2009). Glanville and Wildhagen
(2007) conducted confirmatory factor analysis (CFA) on previous research on engagement
to test whether the indicators used were acceptable measures of the types of engagement
they claimed to measure, and concluded that a multidimensional concept of engagement was
more robust. Such a conception may also help to remedy the methodological difficulties
still apparent within the engagement research literature. Our view is that this shift to a
multidimensional view of engagement is a step forward in enriching the development of the
engagement concept.
Secondly, the study was carried out in one medical school involving only one type of
FD activity which focused on teaching. We need to see whether the constructs for the three
mechanisms adequately evaluate various types of FD activities in other contexts. However,
we believe that engaging adult learners will always be important for deep learning to occur.

Comparison with existing literature

While previous studies on motivation and/or engagement have varied in design, objectives
and measured outcomes, most have reported motivation positively influencing learning and
study behaviour, academic performance and success (Sobral 2004; Wilkinson et al. 2007).
However, these studies have focused mostly on intrinsic motivation, and none answered the
questions: what are the mechanisms that cause this relationship, and does it change over
time? Our study identified this gap in the literature. We
found engagement at the time of the course to be the key causal mechanism that made
the programme work. Engagement was still the mechanism 6 months later, but the
contextual factor activating it had changed. Identifying the contextual factors
that positively influence engagement could help medical educators incorporate them into
curriculum design and into the development of their institute’s teaching culture and
learning environment. Sorinola and Thistlethwaite (2013) in a systematic review reported
the impact of context and environment on the success of FD initiatives in family medicine.


This study was important for trialling metrics of motivation, engagement and perception
across clinicians attending a FD course. The realist evaluative framework is useful for FD
developers as it helps to answer the question ‘what works for whom, in what context and
why’. While individual motivation drives participation, engagement is the mechanism that
makes the programme work in the presence of the facilitating contextual factors.

Acknowledgments The authors wish to thank all the participants at the essentials of clinical education
course who took part in this study.

Conflict of interest None.

Ethical standard The study was approved by the University of Warwick, UK, research ethics committee.

Appendix: realist evaluation interview data


Examples of positive experiences of FD

Yeah I really enjoyed it a lot more than I expected I would. I was interested in the
education side of things but really got an awful lot more out of it. I didn’t think…
Well I thought I’d enjoy it but I really, really enjoyed it, and I think it started me
thinking about it as a career (16A).
I thought the course was very useful because it was very practical and I could see that
I have been able to implement a lot of the ideas. I can see that it has influenced my
teaching positively so I have a positive attitude towards the course and I’m glad I
went on it (6J).
It’s been absolutely valuable for the work that I do as an educator in the Trust. The
insight that I now have in terms of education has been really, really enlightening
experience to have actually done the module. All in all it’s been really useful, really
invaluable experience (14J).

Examples of lack of pedagogical knowledge

Prior to the course, I think one of the bigger gaps was the educational theory because
I hadn’t had the time, opportunity or direction to look into it and I wasn’t aware of
some of the literature that was out there and a lot of the broad principles. Now I am
better grounded in the theoretical side of medical education and now I feel I can
ground my teaching… it gave me a framework for my teaching. It has given me the
underlying principles and theory where there was a big gap before (6J).


I didn’t realise there was so much theory behind education and educational practice. I
really didn’t realise there was this much theory behind the way students are taught at
the medical school and the reason we do bedside teaching… I just thought it just kind
of evolved, but there actually is some basis behind the method (16A).
So teaching in the university we don’t have any pedagogical knowledge or theory. I
teach medical students in the first term of the first year. Personally, I am relieved to
have some formal training in education; to learn these theories and different styles
because I’ve never had any training before (18J).

Examples of reflective practice

But I guess the personal reflection comes in afterwards with the learning portfolio
and that’s where the individual development and direction can take place (4A).
I’m in a fortunate position where I do quite a lot of teaching and so I constantly
reflect on what I’m doing (10J).
In terms of knowledge, through the course itself but even more so through the write
up and the directed reading and the exercises we do as part of the learning portfolio.
I think without that write up at the end, the application of your knowledge would not
be as thorough (22A).
Also I found at some points learning very challenging. You sort of reflect on your
own, on how you learn in your own way and so, it was to identify, in many ways it
was a self-analysis. It was identifying how I learnt, why I learnt like that, then oh
goodness me there’s other ways of learning that other learners might use! And that
sort of makes sense, you know (14J).

Example of feedback

Yes, certainly there were gaps in my knowledge and practice and one thing clear to
me is certainly this issue of giving feedback which I, I have worked on. I’ve come to
appreciate how important it is and in a sense I know, I now know how to provide it.
So it is also much easier for me to do and of course knowing that it is important (9J).


Examples of perception (usefulness and relevance)

Personally I’ve always found staff development to be great fun and really worthwhile.
Fantastic, FD is very important. It keeps me up to date, it makes me a better teacher
doesn’t it, the more you do the better you get at it ‘in theory’. I think that’s it, I think
in many ways, it just makes you a better all-rounder doesn’t it? (2A).


I would say it’s very important because, it’s really helped me to develop as a
practitioner you know to do the things that I’m teaching the students to do, doing
them myself. I find having the students around is generally invigorating. They
question what you’re doing and they make you think ‘why am I doing what I’m
doing?’ so they improve your quality. It just gives you something different to focus
on. Actually teaching it’s a very different thing from clinical practice (5J).
I think they’ve been very effective for me and I think that they are particularly
effective if, as I say, they are linked with the problems that you are dealing with at the
time and the things you’re trying to sort out (18A).


I really learnt a lot about education theory. I learned a lot about, you know, how to
teach a student through the different modalities that are there. You know, it gave me
experience about how to teach them and how to give a lecture, how to write lectures,
how to set out an exam, how to organise a module of a course even (11A).
I mean the faculty development just allowed me to… has allowed me to know what
to do. I knew things needed to change but it’s sort of helped me make the changes


References

1st International Conference on Faculty Development in The Health Professions. (2011). Retrieved
21.03.2012, from
2nd International Conference on Faculty Development in The Health Professions. (2013). Retrieved
09.09.2013, from
Baumeister, R. F., & Vohs, K. D. (Eds.). (2004). Handbook of self-regulation: Research, theory, and
applications (2nd ed.). New York: Guilford.
Bazeley, P. (2007). Qualitative data analysis with NViVo. London: Sage.
Bland, C. J., Schmitz, C. C., Stritter, F. T., Henry, R. C., & Aluise, J. J. (1990). Successful faculty in
academic medicine: Essential skills and how to acquire them. New York: Springer.
Boelen, C. (1999). Adapting health care institutions and medical schools to societies’ needs. Academic
Medicine, 74(8), S11–S20.
Branch, W., Kroenke, K., & Levinson, W. (1997). The clinician-educator present and future roles. Journal
of General Internal Medicine, 12(Suppl 2), S1–S4.
Brissette, A., & Howes, D. (2010). Motivation in medical education: A systematic review. Webmed Central
Medical Education, 1(12), WMC001261.
Carpenter, B. (2010). A vision for the 21st century special school. London: Specialist Schools and Acad-
emies Trust.
Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). London: Routledge.
Fredericks, J. A., Blumenfeld, P., & Paris, A. H. (2004). School engagement: Potential of the concept, state
of the evidence. Review of Educational Research, 74(1), 59–109.
Glanville, J. L., & Wildhagen, T. (2007). The measurement of school engagement: Assessing dimensionality
and measurement invariance across race and ethnicity. Educational and Psychological Measurement,
67, 1019–1041.
Hargreaves, D. (2006). Personalising learning 6: The final gateway: School design and organisation.
London: Specialist Schools Trust.
HEA (Higher Education Academy). (2006). Available online at:


Herrmann, M., Lichte, T., Von Unger, H., Gulich, M., Waechtler, H., Donner-Banzhoff, N., et al. (2007).
Faculty development in general practice in Germany: Experiences, evaluations, perspectives. Medical
Teacher, 29(2–3), 219–224.
Knowles, M. S., Holton, E., & Swanson, R. A. (1998). The adult learner: The definitive classic in adult
education and human resource development (5th ed.). Houston: Gulf Publishing Company.
Kusurkar, R. A., Ten Cate, T. J., Van Asperen, M., & Croiset, G. (2011). Motivation as an independent and a
dependent variable in medical education: A review of the literature. Medical Teacher, 33, e242–e262.
MacDougall, J., & Drummond, M. J. (2005). The development of medical teachers: An enquiry into the
learning histories of 10 experienced medical teachers. Medical Education, 39, 1213–1220.
Marchal, B., van Belle, S., van Olmen, J., Hoerée, T., & Kegels, G. (2012). Is realist evaluation keeping its
promise? A review of published empirical studies in the field of health systems research. Evaluation,
18(2), 192–212.
Martin, A. J. (2007). Examining a multidimensional model of student motivation and engagement using a
construct validation approach. British Journal of Educational Psychology, 77(2), 413–440.
Martin, A. J. (2008). Enhancing student motivation and engagement: The effects of a multidimensional
intervention. Contemporary Educational Psychology, 33(2), 239–269.
Mattick, K., & Knight, L. (2009). The importance of vocational and social aspects of approaches to learning
for medical students. Advances in Health Sciences Education, 14, 629–644.
McLean, M., Cilliers, F., & Van Wyk, J. (2008). Faculty development: Yesterday, today and tomorrow.
Medical Teacher, 30, 555–584.
McLeod, P. J., & Steinert, Y. (2010). The evolution of faculty development in Canada since the 1980s:
Coming of age or time for a change? Medical Teacher, 32, e31–e35.
Merriam, S. B., & Caffarella, R. S. (1999). Learning in adulthood: A comprehensive guide (2nd ed.). San
Francisco: Jossey-Bass Publishers.
Misch, D. A. (2002). Andragogy and medical education: Are medical students internally motivated to learn?
Advances in Health Sciences Education, 7(2), 153–160.
Moyles, J. (2002). Observation as a research tool. In M. Coleman & A. J. Briggs (Eds.), Research methods in
education (pp. 172–191). London: Paul Chapman.
Pawson, R. (2013). The science of evaluation: A realist manifesto. London: Sage.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
Pawson, R., & Tilley, N. (2004). Realist evaluation. Retrieved Dec, 2012, from http://www.
Reeve, J. (2001). Understanding motivation and emotion (3rd ed.). Fort Worth: Harcourt.
Safman, R. M., & Sobal, J. (2004). Qualitative sample extensiveness in health education research. Health
Education & Behavior, 31(1), 9–21.
Saldaña, J. (2013). The coding manual for qualitative researchers. London: Sage.
Schaufeli, W. B., Martı́nez, I. M., Pinto, A. M., Salanova, M., & Bakker, A. B. (2002). Burnout and
engagement in university students: A cross-national study. Journal of Cross-Cultural Psychology,
33(5), 464–481.
Silverman, D. (2006). Interpreting qualitative data (3rd ed.). London: Sage Publications.
Sobral, D. T. (2004). What kind of motivation drives medical students’ learning quests? Medical Education,
38, 950–957.
Sorinola, O., & Thistlethwaite, J. (2013). A systematic review of faculty development activities in family
medicine. Medical Teacher. doi:10.3109/0142159X.2013.770132.
Sorinola, O., Thistlethwaite, J., & Davies, D. (2013). Motivation to engage in personal development of the
educator. Education for Primary Care, 24(4), 226–229.
Steinert, Y., Mann, K., Centeno, A., Dolmans, D., Spencer, J., Gelula, M., et al. (2006). A systematic review
of faculty development initiatives designed to improve teaching effectiveness in medical education:
BEME Guide No. 8. Medical Teacher, 28(6), 497–526.
Wilkinson, T. J., Wells, J. E., & Bushnell, J. A. (2007). Medical student characteristics associated with time
in study: Is spending more time always a good thing? Medical Teacher, 29, 106–110.
Yonezawa, S., Jones, M., & Joselowsky, F. (2009). Youth engagement in high schools: Developing a
multidimensional critical approach to improving engagement for all students. Journal of Educational
Change, 10(2), 191–209.
