
PERSONNEL PSYCHOLOGY

2013, 66, 975–1008

HELPING TEAMS TO HELP THEMSELVES:
COMPARING TWO TEAM-LED DEBRIEFING METHODS
ERIK R. EDDY
Siena College

SCOTT I. TANNENBAUM
The Group for Organizational Effectiveness (gOE), Inc.

JOHN E. MATHIEU
University of Connecticut

Team-based structures have become more widely used in organizations.
Therefore, it is important for team members to perform well in their
current team and to build skills and enthusiasm for working on future
teams. This study examined team debriefing, an intervention in which
team members reflect on recent experiences to prepare for subsequent
tasks. Prior researchers have shown that facilitated team debriefs work,
but they have not examined how to enable teams to conduct their own
debriefs or studied how debriefs affect individual level outcomes. There-
fore, we compared 2 team-led debriefing techniques: (a) an unguided
debrief and (b) a guided debrief designed to incorporate lessons learned
from prior debriefs. We collected data from 174 business students who
were members of 35 teams from 9 sections of a Strategic Management
course. Class sections were randomly assigned to one of the debriefing
conditions, and teams completed 4 business cases over 10 weeks. A mul-
tilevel design was employed and a multistage model building approach
was used to test the hypotheses using hierarchical linear modeling tech-
niques. Results of this cluster randomized, quasi-experimental design
suggest that the team-led guided debrief intervention resulted in supe-
rior team processes as compared to the unguided debriefing method.
Team processes, in turn, related significantly to greater team perfor-
mance and increased individual readiness for teamwork and enthusiasm
for teaming. Implications for future research and practice are discussed.

Organizational effectiveness relies more heavily on team members
working well together than ever before. Unfortunately, simply being on a
team together does not mean the team will learn to work well together be-
cause experience alone does not ensure team success (DeRue & Wellman,

Correspondence and requests for reprints should be addressed to Erik R. Eddy, Associate
Professor of Management, Siena College, Colbeth Hall Room 209, 515 Loudon Road,
Loudonville, NY 12211; EEddy@Siena.edu.


© 2013 Wiley Periodicals, Inc. doi: 10.1111/peps.12041


2009; Kim, 1997; Littlepage, Robison, & Reddington, 1997). Research has
explored myriad ways to enhance team effectiveness (Mathieu, Maynard,
Rapp, & Gilson, 2008). For example, choosing the best mix of members,
training them as a whole, and shaping the environment to support them
are all potentially powerful means of improving team effectiveness (Ilgen,
Hollenbeck, Johnson, & Jundt, 2005). However, as described below, there
are challenges inherent in each of these options.
Certainly it is preferable to have an ideal composition of members
from the start, but timing and geographic constraints, membership dy-
namics, and competing demands often make it difficult to compose the
perfect team (Tannenbaum, Mathieu, Salas, & Cohen, 2012). Team build-
ing interventions can help to address some but not all such shortcomings
(Tannenbaum, Beard, & Salas, 1992). And though team training is usu-
ally effective (Salas et al., 2008), it is expensive and time consuming to
develop and conduct effectively. Furthermore, research finds that less than
10% of competency acquisition in organizational settings occurs in formal
training (Tannenbaum, 1997), so ideally, organizations should find ways
to accelerate learning from experience.
Debriefing is one of the most promising methods for accelerating
learning from experience. Research has consistently shown that teams
that conduct debriefs outperform those that do not, with a recent meta-
analysis reporting that, on average, debriefs enhance team effectiveness
by over 20% (Tannenbaum & Cerasoli, 2013). Most of the research to
date has been conducted on facilitator-led debriefs with the few reported
examples of unfacilitated debriefs appearing to be far less effective. Un-
fortunately, not many people have been trained to facilitate debriefs, and
simply possessing good interpersonal skills does not enable someone to
lead an effective debrief (Dismukes, Jobe, & McDonnell, 2000). This
greatly limits the widespread adoption of team debriefs. Thus, though
debriefs hold great promise for accelerating team learning, what is needed
are tested techniques and tools that teams can employ to conduct debriefs
on their own, without a trained facilitator.
A recent review of debriefing research concluded that a few “obvious
gaps that deserve study were identified, such as comparing debriefing
techniques” (Raemer et al., 2011, p. 52). Historically, studies have exam-
ined debriefing versus no debriefing, but only a few studies have compared
different debriefing techniques (e.g., Ellis & Davidi, 2005), and to the best
of our knowledge none have compared team-led debrief techniques. The
purpose of this investigation is to compare two types of team-led debriefs,
an unguided version and a guided version that incorporates lessons learned
from research and practice. The study examines how team-led debriefs
impact team processes and, in turn, team effectiveness and individual-level
outcomes.

Team Debriefs—Theoretical Foundation

A team debrief session (sometimes referred to as an after action review
or after event review) is a relatively inexpensive intervention designed to
promote learning from experience. During a debrief, team members reflect
upon a recent experience, discuss what happened, and identify opportu-
nities for improvement. Team debrief sessions have been used in the
military for over 30 years (Morrison & Meliza, 1999) and, more recently,
have been used with medical teams (Fanning & Gaba, 2007), fire fighting
teams (Allen, Baran, & Scott, 2010), and a variety of management, sales,
retail, and project teams (e.g., Kinni, 2003). A debrief can be conducted
after any performance episode (e.g., at the conclusion of a work shift,
meeting, or training event; at a key point during a project; or after any
team “action”). Research studies have shown that well-conducted debriefs
enhance team performance (DeVita, Schaefer, Lutz, Wang, & Dongilli,
2005; Smith-Jentsch, Cannon-Bowers, Tannenbaum, & Salas, 2008; Tan-
nenbaum & Cerasoli, 2013).
There is strong theoretical support for the effectiveness of debriefs.
For example, Ellis and Davidi (2005) and Tannenbaum, Beard, and Cera-
soli (2013) described how key functions that take place during a debrief
help promote learning and self-correction. Tannenbaum et al.’s (2013)
theoretical framework suggests that debriefs work because they promote
(a) reflection and self-explanation; (b) data verification, feedback, and in-
formation sharing; and (c) goal-setting/action planning. Tannenbaum and
colleagues argue that collectively these three facets enhance teams’ abil-
ity to process new information and to effectively coordinate their future
actions.
Reflection and self-explanation. According to Barmeyer (2004), “in
the heart of all learning, lies the way in which experience is processed,
in particular, the critical reflection of the experience” (p. 580). Reflec-
tion involves looking back at an experience, comparing what happened
against desired results, and assessing the consequences (Marsick & Volpe,
1999). Though reflection is the process of determining what happened,
knowledge of the past is insufficient. To improve team effectiveness, indi-
viduals must understand why certain events transpired and at times may
need to challenge implicit assumptions. Self-explanation involves a con-
scious attempt to make sense of one’s experiences, which can involve
self-critiquing or recognizing why something is working or not working
(Schon, 1987). A debrief provides a structure for shifting from automatic
and habitual information processing to more conscious and deliberate
processing (DeRue, Nahrgang, Hollenbeck, & Workman, 2012). It is this
actively driven, deeper understanding that can lead to the identification of
potential improvements.

Data verification, feedback, and information sharing. Debriefs provide
a forum for discussion and an opportunity to verify data against personal
beliefs, enabling individuals and the team to “recalibrate” as needed (Ellis
& Kruglanski, 1992). For knowledge-type tasks that have an objectively
correct solution, teams that can verify data against one another tend to
make better decisions (Stasser & Titus, 1985). For judgment-type tasks,
performance may improve if team members can share the rationale they
used to arrive at a particular decision (Kerr, Maccoun, & Kramer, 1996).
In any case, when individuals can verify either their observed data or
personal judgment processes with others, they can calibrate and align their
mental models of the situation, which can produce superior performance
(DeChurch & Mesmer-Magnus, 2010).
Debriefs also provide a relatively safe forum for feedback that can
emanate from team members, a team leader, a facilitator, or even from
the task itself. Group discussions act as a form of informal feedback to
team members (Tindale, 1989), and feedback provides individuals with
information needed to extract appropriate lessons from their experiences
(Balzer, Doherty, & O’Connor, 1989).
Debriefs are also designed to promote information sharing among
team members. Sharing of information and ideas is what defines a team
as “information processors” (Hinsz, Tindale, & Vollrath, 1997). Informa-
tion elaboration is related to higher quality decision making (van Ginkel,
Tindale, & van Knippenberg, 2009) and better team performance (Homan
et al., 2008). There is also strong meta-analytic evidence that information
sharing is an important enabler of team performance (Mesmer-Magnus &
DeChurch, 2009).
Goal setting/action planning. The final function that well-designed
debriefs fulfill is the establishment of intentions or agreements in the
form of goals or action plans (DeShon, Kozlowski, Schmidt, Milner, &
Wiechmann, 2004). Participants who set goals for behavior change during
training should be more likely to demonstrate subsequent improvements
and individuals who vocalize commitments in front of others are more
likely to take action on them (Hollenbeck, Williams, & Klein, 1989).
Moreover, when feedback is combined with goal-setting, it is far more
likely to enhance performance (Bandura & Cervone, 1983). So teams
that establish forward-looking agreements, goals, or plans about how
they want to work together should be more focused and motivated
to implement self-corrections or make adjustments to team processes
(DeChurch & Haas, 2008).
There is solid empirical evidence that debriefs work and a strong the-
oretical foundation for why they work. However, if left on their own,
teams often fail to debrief, and even when they do, their natural infor-
mation processing tendencies can inhibit the quality of the debrief. For
example, research reveals that teams tend to discuss information that their
members already share rather than unique information and perspectives
(Stasser & Titus, 1985; Wittenbaum & Stasser, 1996). This is particularly
true in constrained time periods (Karau & Kelly, 1992) as is the case
with debriefs. That research is consistent with the observation that during
debriefs teams tend to gravitate toward discussing taskwork (that they are
all familiar with and discuss regularly) rather than teamwork, which they
generally feel less comfortable discussing (Tannenbaum & Goldhaber-
Fiebert, 2012). In addition, groups often wait until late in a discussion
before they bring up unshared information (Larson, Foster-Fishman, &
Keys, 1994), which explains why we have frequently observed debriefs
in which the participants spend 95% of the session discussing topics they
already agree about and only a few minutes on the issues they really need
to discuss. Further, teams often fail to fully integrate unique information
when making decisions (Gigone & Hastie, 1993), so action plans may
not be based on input from all team members. Finally, one or two team
members may dominate the discussion, and less experienced or shy team
members may fail to speak up (Rowe & Baker, 1984), which means some
key observations may be missed. Moreover, without active participation,
team members are less likely to take responsibility for improvements
(Cooper & Wood, 1974).
When trained facilitators lead debriefs, they attempt to conduct it
in a way that overcomes the team’s information processing limitations
and emphasizes the key functions that drive debriefing effectiveness. Un-
fortunately, there are not enough trained facilitators in organizations to
support all teams. Moreover, there are some potential advantages to team-
led debriefs. Self-generated and process-oriented feedback may reduce
emotions that interfere with learning (DeRue et al., 2012), and teams are
more likely to assume “ownership” of self-generated plans and goals. But
because unguided team debriefs suffer from the problems noted above, it
is important to develop a team-led debrief approach that enables teams
to conduct debriefs effectively. Based on the key debriefing functions
and known limitations in team information processing, we would suggest
that an effective team-led debrief approach should include the follow-
ing five features: (a) allow team members to reflect independently and
anonymously, for psychological safety and to avoid being influenced by
the most vocal team member; (b) ensure all team members provide in-
put to enhance their sense of ownership and capture all perspectives; (c)
focus attention on teamwork and not just taskwork, because teamwork
also drives team effectiveness and groups tend not to discuss it; (d) guide
the team to discuss divergent or high priority needs early in the debrief
and not simply areas of agreement or comfortable topics; and (e) lead to
the formation of future-looking action plans and agreements. The guided
debrief employed in this study incorporates these five features.

This study. In this study we compare two team-led debrief techniques:
an unguided debrief and a guided debriefing technique that incorporates
the lessons from prior research. Almost all team-led debriefs that we
have observed in organizational settings have been unstructured or, at
best, have used a semistructured approach that is akin to a plus/delta
session where team members reflect on their work together and then
identify and discuss what went well and what they could do better (Eckes,
2002). Such designs are likely to fall prey to the types of process losses
discussed above. So as to provide a realistic point of comparison, the
unguided debriefing condition in this study makes use of a semistructured
approach based on the plus/delta technique. In contrast, the guided debrief
condition employs an interactive computer interface that ensures input is
gathered from all team members, and based on that input guides the team
to discuss and reach agreements regarding specific, prioritized teamwork
challenges. Teams in both conditions conduct debriefs based on self-
generated feedback without any external facilitation support. Below we
advance specific hypotheses, and then describe our quasi-experimental
study used to test our hypotheses. We conclude with a discussion of our
findings and their implications for future research and practice.

Hypotheses

Team Debriefs and Team Processes

Team processes are the means by which members convert resources
and their efforts into meaningful outcomes (Ilgen et al., 2005; Marks,
Mathieu, & Zaccaro, 2001; Mathieu et al., 2008). Marks et al. (2001)
articulated an episodic theory of team effectiveness that highlighted three
general types of team processes: (a) action, (b) transition, and (c) inter-
personal. For them, performance episodes were meaningful periods of
time during which members work to achieve shared goals and feedback
becomes available. Action processes occur during performance episodes
and include specific processes such as monitoring resources and progress
toward goals, coordinating efforts with others, and monitoring and back-
ing up team members. Transition processes occur as teams cycle from
one performance episode to another. During these times, teams reflect on
how they have previously functioned (a backward glance) and develop
plans for future efforts (a forward glance). Examples of specific transition
processes include mission analysis, strategy formulation and planning,
and goal specification. Marks et al. (2001) also suggested that teams need
to manage interpersonal processes in order to remain effective over time.
These interpersonal processes can occur at any time during the team’s

life cycle and include managing members’ conflict, motivation, and affect
levels.
Meta-analytic findings illustrate the importance of these team pro-
cesses for effective team functioning (LePine, Piccolo, Jackson, Math-
ieu, & Saul, 2008). Therefore, a goal of a team debrief should be to
allow the team to reflect upon those processes and make adjustments
as needed. The guided debrief has several elements that should increase
the likelihood that the intervention will result in better team processes.
Most critically, the guided debrief explicitly targets relevant teamwork
processes. To combat the natural inclination to discuss taskwork, the
guided debrief directs members to discuss different facets of their team
processes. The reflection phase of a guided debrief focuses discussion
on transition processes where individuals conduct a “backwards glance”
and consider what previously went well and what warrants improve-
ment. It also prompts a discussion about how members will orchestrate
their actions during task accomplishment and encourages them to develop
strategies and establish action plans for the future, including matters such
as monitoring resources and each other, performing backup behavior,
and coordinating their efforts (i.e., action processes). And finally, the
guided debrief guides the group to consider and discuss any dysfunc-
tional interpersonal processes, in an honest, psychologically safe, and
developmental manner. In addition, the technique makes it clear to them
that the discussion points were identified based on feedback from all
the team members, ensuring that all team members have had input into
the debrief. Collectively, these elements should increase the likelihood
that a team identifies, commits to, and makes improvements in their team-
work processes in a comprehensive and synergistic fashion. Therefore, we
hypothesize:

Hypothesis 1: Teams that conduct guided debriefs will demonstrate better team processes than will those conducting unguided debriefs.

Team Processes and Team Performance

Whereas Marks et al.’s (2001) taxonomy outlines three superordinate
team process dimensions and 10 more specific ones, these typically
work synergistically over time. In other words, cumulated over episodes,
teams that exhibit better transition processes will be better positioned
to coordinate their activities during action phases and thereby encounter
fewer interpersonal process challenges. Although there may be instances
where it is advantageous to discern specific nuanced effects, our belief is
that a well-conducted debrief will serve to enhance all team processes.

Indeed, LePine et al. (2008) found high correlations among the 10 specific
and three higher-order dimensions of Marks et al.’s theory using meta-
analyses, and submitted that “[w]hen theory focuses on relationships with
the overall quality of team processes, hypotheses could be tested with
broad measures of teamwork.” (p. 294). Accordingly, we concentrate on
an overall measure of team processes as have previous researchers (e.g.,
Mathieu, Maynard, Taylor, Gilson, & Ruddy, 2007) and consistent with
previous research we hypothesize:
Hypothesis 2a: Team processes will exhibit a significant positive cor-
relation with subsequent team performance.
Given that Hypothesis 1 predicts that guided team debriefs would
improve team processes, and Hypothesis 2a predicts that team processes
would, in turn, relate positively to team performance, we are implicitly
hypothesizing a fully mediated relationship. This follows from the fact
that all aspects of the guided debrief are focused on improving teamwork
processes. Stated formally:
Hypothesis 2b: Team processes will fully mediate the relationship
between the debrief condition and team performance.

Team Debriefs and Individual Outcomes

Hackman (1987) suggested that an assessment of team success must
consider both current and future team effectiveness. Teams need to suc-
cessfully complete their current task, but it is also important for team
members to feel they can be successful in the future (Barrick, Stew-
art, Neubert, & Mount, 1998). Further, given the trend in organizations
to assign members to multiple teams and to move team members from
one team to another, it is important that individuals are both prepared
and enthusiastic to work well on teams in the future. In other words,
there are both team and individual-level outcomes associated with cur-
rent team functioning (Mathieu et al., 2008). Individual outcomes have
often been overlooked in prior team research. However, building on
Hackman’s work, recent studies have begun to explore the importance
of individual-level outcomes of working in teams (e.g., Kukenberger,
Mathieu, & Ruddy, in press). This study examines two future-directed,
individual-level outcomes, readiness for teamwork and enthusiasm for
teaming.
Hackman (1982) proposed a theory of social influence that has been
further advanced by Chen and his colleagues (e.g., Chen & Gogus, 2008;
Chen & Kanfer, 2006; Chen, Kanfer, DeShon, Mathieu, & Kozlowski,
2009). These authors argue that teams represent a particularly salient

context for individual members and will likely impact their work-related
attitudes and beliefs. For example, Chen and Gogus (2008, p. 286) sub-
mitted that “the interdependent nature of work in teams makes individual
members especially susceptible to contextual influences of team processes
. . . [and] . . . studying motivation in the context of teams is important, as
teams constitute a proximal social environment influencing individuals at
work (Hackman, 1982).” As detailed below, we feature team processes,
as influenced by guided debriefs, as salient team-level stimuli influencing
individuals’ readiness for teamwork and enthusiasm for teaming. Partic-
ipation in generic team training interventions has been demonstrated to
enhance individuals’ teamwork-related declarative knowledge and skills
(Ellis, Bell, Ployhart, Hollenbeck, & Ilgen, 2005; Rapp & Mathieu, 2007).
However, the mechanisms by which these outcomes arise have yet to be
specified. We submit that enhanced team processes serve as a critical me-
diating mechanism linking participation in guided debriefs with members’
preparation for, and personal views toward, working in teams in the future.
Individual readiness for teamwork. Modern-day team-based organiza-
tions often have individuals working simultaneously in multiple teams
and changing team memberships over time (Ellis et al., 2003; Hirst, 2009;
Tannenbaum et al., 2012). Therefore, it is critical that team experiences not
only help the current team improve but also help individual team members
build personal, transportable teamwork competencies that they can use in
future team assignments, including teamwork-related skills and attitudes
(Cannon-Bowers, Tannenbaum, Salas, & Volpe, 1995). In other words, it
is important to continue to develop their “readiness” for teamwork.
By readiness for teamwork, we mean that individuals are better pre-
pared to work effectively in teams in the future. In this sense, readiness for
teamwork may include enhanced individual competencies (i.e., Cannon-
Bowers et al., 1995) as well as related teamwork self-efficacy percep-
tions (e.g., Tasa, Taggar, & Seijts, 2007). Importantly, whereas teamwork
self-efficacy is a situated perception concerning individuals’ perceptions
that they can contribute effectively to a specific current team, readiness
for teamwork refers to a more generalized sense that an individual can
contribute to other teams in the future. That is, we envision readiness
for teamwork as a more portable individual state stemming from the de-
velopment of team-related competencies through being a member of a
well-functioning current team.
Team members who participate in team discussions and, in turn, expe-
rience effective team processes are well-positioned to learn what it takes
to operate effectively as a team. At the team level, team processes have
been shown to be related to team-level potency (LePine et al., 2008).
Similarly, on an individual level, participating on a team that works well
together should help team members build teamwork skills and a sense of

personal readiness for teaming in the future. The guided debrief process
is hypothesized to increase the likelihood that a team will exhibit better
team processes, which help build a sense of future readiness for teaming.
However, if a team does not subsequently experience effective processes,
then team members are less likely to build a greater sense of competence
and readiness for teamwork. Therefore, we hypothesize:

Hypothesis 3: Team processes will fully mediate the relationship between guided debriefs and the development of individuals’ readiness for teamwork.

Individual enthusiasm for teaming. Being on a team that works well
together should be perceived as a more positive experience than being on
one that is plagued with poor teamwork (LePine et al., 2008). As such,
participating in a team that demonstrates effective team processes should
enhance team members’ enthusiasm for teaming in the future. Here again,
our focus is on resulting individual attitudes that may positively or nega-
tively predispose people toward working in teams in the future. Affective
reactions to working in a team, such as organizational satisfaction (e.g.,
Janz, Colquitt, & Noe, 1997) and commitment (e.g., Tesluk & Math-
ieu, 1999), have received substantial research attention. However, those
attitudes refer to outcomes of previous team experiences with limited ref-
erence toward working in teams in the future. Similarly, team viability
has been extensively studied and refers to individuals’ collective sense
of belonging to a current team. Viability has also been used to refer to
whether individuals wish to participate in the team in the future or the
extent to which the team is a sustainable entity in the future (see Bell &
Marentette, 2011; Mathieu et al., 2008). Thus, though viability does have
a future orientation, it is limited to members’ ongoing participation in
the current team. In contrast, enthusiasm for teaming refers to individuals’
desire to work on any teams in the future.
We believe that it is important that individuals build a sense of enthusi-
asm for teaming from their team experiences. To the extent that members
become jaded or disillusioned from a miserable team experience, they
are not likely to approach new team opportunities with motivation and
enthusiasm. This, in turn, may well spark a self-fulfilling prophecy as
unmotivated members are not likely to perform effectively. In contrast,
to the extent that members approach future team experiences with mo-
tivation and zeal, they are more likely to have positive experiences and
successes and thereby spark positive performance spirals. The guided de-
brief process is hypothesized to increase the likelihood that a team will
exhibit better team processes, which should enhance individual members’
enthusiasm for teaming in the future. But if a team fails to work well

together, that experience is likely to dampen individuals’ enthusiasm for
teaming. Therefore, we hypothesize:

Hypothesis 4: Team processes will fully mediate the relationship between guided debriefs and the development of individuals’ enthusiasm for teaming.

Method

Sample

Data were collected from 174 business students enrolled in nine class
sections of a Strategic Management course in a small northeastern uni-
versity. Within classes, members were randomly assigned to 35 teams, al-
though an attempt was made to have an equal representation of majors (i.e.,
marketing, management, finance, accounting, economics) while compos-
ing the teams. There were 33 five-member teams, one four-member team,
and one six-member team. On average, participants were 21.6 years old,
91% Caucasian, and 48% were women. Their average academic compe-
tence (grade point average) was 3.06 (SD = .38). The representations of
majors were: 25% accounting, 20% finance, 31% marketing, 21% man-
agement, and 3% economics.
Typically, the teams examined in this study would be referred to
as “project teams” or “student teams” (Sundstrom, 1999). However, as
Tannenbaum, Mathieu, Salas, and Cohen (2012) suggest, teams are chang-
ing, and our description of teams must be more precise. Hollenbeck,
Beersma, and Schouten (2012) provide a dimensional scaling conceptu-
alization for describing teams. Using the Hollenbeck et al. dimensions of
skill differentiation, authority differentiation, and temporal stability, we
would describe the current teams as follows: high in skill differentiation—
teammates were not easily interchangeable and students on teams repre-
sented various functions (i.e., management, marketing, finance, account-
ing, and economics); low in authority differentiation—no one person held
a position of formal authority or leader on the team; and moderate in
temporal stability—as a student project team, teammates only worked
together for 15 weeks.
During the course of their work together, teams completed four case analyses
(e.g., Harvard Business School case studies) of four separate organizations.
Teams reviewed the company facts, analyzed the current situation, and
developed a 5-year recommended strategic plan for the company. Team 1
presented the case analysis for the first case, whereas Teams 2, 3, and 4
wrote a case analysis. Team 2 presented the case analysis for the second
case, whereas Teams 1, 3, and 4 wrote a case analysis. This continued for

each case. Though there were no differences in course requirements, there
were differences in the order of requirements.
The student teams completed four business cases over 10 weeks of
the semester. Each team also gave an oral presentation of one of its
cases; presentations were distributed across the semester. In other words,
25% of the teams presented orally on each of the four cases. Survey data were
collected at two points in time. First, following the delivery of their second
case study, team members completed a survey that included measures of
their team processes along with demographic information (i.e., gender,
age, ethnicity, major) and their current academic grade point average.
Following the fourth case, team members completed a second survey
that assessed their individual readiness for teamwork and enthusiasm for
teaming scales, as well as some additional items concerning time spent on
the debriefs for comparability analyses (see below). Usable surveys were
obtained from all but one participant.

Measures

Participants completed surveys at two points during the study. All
items were answered using five-point Likert-type response scales that
ranged from 1 = not at all to 5 = to a very great extent. We created scale
scores for multi-item measures by averaging item responses per construct.
Team processes. Team processes were measured on the first survey
at approximately the midpoint of the semester and indexed using multi-
item scales developed by Mathieu and Marks (2006), which correspond
to Marks et al.’s (2001) three super-ordinate categories. James, Demaree,
and Wolf’s (1984) rwg agreement index was used to justify aggregating
individual members’ responses to the team level. Median rwg values >.70
are generally considered sufficient agreement to warrant aggregation. We
also report intraclass correlations (ICCs). ICC(1) represents the percentage of individual-level variance that is attributable to team membership, whereas ICC(2) represents a reliability index of team mean scores. These indices provide information concerning the relative variance of responses within and between teams and classes. Finally, we calculated and report the team-
level internal consistencies using the average item response per team as
the inputs. This strategy aligns the measurement reliability information
with the level of analysis used in the substantive tests (cf. Chen, Mathieu,
& Bliese, 2004).
The three scales each exhibited acceptable psychometric properties:
Transition Processes (six items, e.g., “To what extent has our team worked
to prioritize and agree upon our goals and tasks?”; median rwg = .94;
ICC1 = .36; ICC2 = .74; α = .93); Action Processes (14 items, e.g.,
“To what extent has our team worked to monitor and manage our time wisely?”; median rwg = .94; ICC1 = .30; ICC2 = .68; α = .97); and
Interpersonal Processes (seven items, e.g., “To what extent has our team worked to encourage healthy debate and exchange of ideas?”; median rwg = .96; ICC1 = .29; ICC2 = .67; α = .97). The agreement indices
were uniformly high, justifying aggregation. These three subscales were
also highly correlated (rs = .82 to .93, p < .01) so we averaged them to
form a composite team process score (median rwg = .98; ICC1 = .34;
ICC2 = .72; α = .98). LePine et al. (2008) found support for a single,
higher-order process dimension underlying the three separate subscales,
thereby justifying this approach.
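For reference, both the rwg(j) agreement index and the one-way ANOVA ICCs reported above can be computed directly from raw member responses. The sketch below uses the standard James, Demaree, and Wolf (1984) formulas and the usual mean-square definitions of ICC(1) and ICC(2); it is a generic illustration with invented function names, not the authors' code:

```python
def rwg_j(ratings, n_options=5):
    """rwg(j) agreement for a J-item scale (James, Demaree, & Wolf, 1984),
    tested against a uniform null distribution over n_options scale points.
    ratings: list of member response lists (members x items); assumes the
    mean observed item variance is below the null variance."""
    sigma_e = (n_options ** 2 - 1) / 12          # variance of the uniform null
    n, j = len(ratings), len(ratings[0])
    item_vars = []
    for item in range(j):
        col = [row[item] for row in ratings]
        m = sum(col) / n
        item_vars.append(sum((x - m) ** 2 for x in col) / (n - 1))
    ratio = (sum(item_vars) / j) / sigma_e       # observed / null variance
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

def icc1_icc2(scores, teams):
    """ICC(1) and ICC(2) from one-way ANOVA mean squares, using the
    average team size k for (possibly) unbalanced teams."""
    groups = {}
    for score, team in zip(scores, teams):
        groups.setdefault(team, []).append(score)
    k = len(scores) / len(groups)                # mean team size
    grand = sum(scores) / len(scores)
    ms_b = sum(len(g) * (sum(g) / len(g) - grand) ** 2
               for g in groups.values()) / (len(groups) - 1)
    ms_w = sum((x - sum(g) / len(g)) ** 2
               for g in groups.values() for x in g) / (len(scores) - len(groups))
    icc1 = (ms_b - ms_w) / (ms_b + (k - 1) * ms_w)
    icc2 = (ms_b - ms_w) / ms_b
    return icc1, icc2
```

Perfect within-team agreement yields rwg(j) = 1, and teams whose members agree internally but differ from other teams drive both ICCs toward 1.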
Team performance. Whereas all teams completed case studies, the
specific assignments were not identical across classes. Moreover, the oral
presentations of the cases were staggered over the course of the semester.
In other words, though at the end of the semester all teams had completed
a comparable body of work, we do not have equivalent performance
measures that fit a neat repeated-measures design. Therefore, at the
end of the semester we had professors rate each team’s overall performance
using the following scale: 1 = horrible; 2 = poor; 3 = below average;
4 = average; 5 = above average; 6 = great; or 7 = extraordinary (mean =
5.62, SD = .23). Much of the research on team performance outcomes
has utilized supervisor ratings as a measure of team performance (LePine
et al., 2008). For instance, Tesluk and Mathieu (1999) examined supervisor
rated performance for construction and maintenance road crews. Langfred
(2000) used supervisors’ ratings of the accuracy of work performed by
social service teams, and Lester, Meglino, and Korsgaard (2002) used
instructor-rated performance scores to measure performance outcomes. To
gauge the construct validity of professors’ ratings in the current context, we
also had them rank-order their teams from best to worst within each class.
The rank-order correlation between professors’ team ratings and rankings
was ρ = .80, p < .001. Moreover, the correlation between professors’
ratings and the teams’ average case study grades was r = .40, p < .05.
These findings provide evidence of the validity of the professors’ ratings
for use as a performance criterion.
Individual-level outcomes. Given the lack of well-established mea-
sures, we developed three-item scales for readiness for teamwork and
enthusiasm for teaming. We developed items with the intention of measur-
ing individuals’ perceptions that their team-related skills and knowledge,
as well as their enthusiasm, had improved as a result of being a member
of the team. Readiness for teamwork was assessed using the following
three items (α = .88): (a) I feel better prepared to lead teams in the future
as a result of my experiences with this team; (b) Being a part of this team
will help me be a more effective member of teams in the future; and (c)
I learned about teamwork by participating in this team. Enthusiasm for
teaming was indexed using the following three reverse-coded items (α =
.84): (a) Being on this team has decreased my enthusiasm for working
in team settings in the future; (b) Given my experience with this team, I
would prefer to work alone in the future; and (c) If I could have left this
team, I would have done so.
Given that both of these newly developed measures were collected
from team members at the same time, we conducted a confirmatory factor
analysis (CFA) using MPlus (Muthen & Muthen, 2007) to evaluate their
discriminant validity. To gauge model fit, we report the standardized root
mean square residual (SRMR) and the comparative fit index (CFI). We
also report chi-square values that provide a statistical basis for comparing
the relative fit. We adopted the following guidelines advocated by Mathieu
and Taylor (2006): Models with CFI values <.90 and SRMR values >.10
are deficient, those with CFI ≥ .90 to < .95 and SRMR > .08 to ≤ .10
are acceptable, and those with CFI ≥ .95 and SRMR ≤ .08 are excellent.
The two-factor CFA model yielded excellent fit indices (χ²(8) = 27.60,
p < .01; CFI = .95; SRMR = .04). All items had significant (p < .05)
relationships with their intended latent variable. Moreover, the two-factor
CFA model fit significantly better (Δχ²(1) = 76.33, p < .001) than did a single-factor model (χ²(9) = 103.93, p < .001; CFI = .77; SRMR = .10), lending additional evidence of discriminant validity.
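As a quick arithmetic check, the chi-square difference test above follows directly from the two reported fit statistics. A minimal sketch using the values from the text (for a 1-df difference, the chi-square survival function reduces to erfc(√(x/2))):

```python
import math

# Nested-model comparison: the difference between the chi-square statistics
# of two nested CFA models is itself chi-square distributed, with df equal
# to the difference in model df. Values below are taken from the text.
chi2_one_factor, df_one = 103.93, 9   # single-factor CFA
chi2_two_factor, df_two = 27.60, 8    # two-factor CFA

delta_chi2 = chi2_one_factor - chi2_two_factor   # 76.33
delta_df = df_one - df_two                       # 1

# For df = 1, P(chi-square > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(delta_chi2 / 2))
print(round(delta_chi2, 2), p_value < .001)      # 76.33 True
```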

Debrief Intervention

Student teams were assigned to one of two conditions based on class section. In other words, all teams within a given class were assigned to the
unguided or to the guided debrief condition. Because we believed that the
debrief condition would influence team performance and thereby student
grades, we did not want to disadvantage any teams within a class for ethical
reasons. Moreover, we wished to minimize threats to internal validity
such as diffusion of treatments and compensatory rivalry by respondents’
receiving less desirable treatments that may arise when members are aware
of the fact that they are in different conditions (see Cook & Campbell,
1979). Therefore, we randomly assigned all teams in a given class to one
of the two debriefing conditions, or what is more formally known as a
cluster randomized quasi-experimental (treatment) design (Raudenbush,
1997).
Teams in both conditions were instructed to conduct team-led de-
briefs after their first and third case assignments. All teams received email
reminders when it was time to debrief, and all were instructed to hold
face-to-face team discussions. The following paragraphs provide greater
detail on these conditions.
Unguided debrief condition. Teams in this condition were asked to debrief utilizing an unguided, team-led process. They were given an in-
struction sheet that told them when to debrief and provided a set of general
questions to discuss, such as “What went well in our recent case?,” “What didn’t go so well in our recent case?,” and “What changes can we make to improve our performance on the next case?” This format
is similar to a “plus/delta” technique (Wong, 2007), commonly used to
determine what is working (“plus”) and what needs to change (“delta”).
All teams in the unguided debrief condition were asked to discuss the
same set of questions and were reminded via e-mail to debrief with their
team using those questions as guidance.
Guided debrief condition. Teams in the guided debrief condition were
aided by an online debriefing tool called DebriefNow (Group for Orga-
nizational Effectiveness, 2011). The tool was originally developed for
and used in practice with medical teams and was then refined and ex-
tended for use in this study with student project teams. The tool guides
a team to reflect on recent work experiences, specifically focusing on
those teamwork factors that have been shown to be important for team
effectiveness (e.g., Marks et al., 2001) and that subject matter experts
identified as consistently problematic for the specific type of team in
question (e.g., for a surgical team performing an operation or for a stu-
dent team working on a strategic planning project). The debriefing pro-
cess employed in this condition incorporates all of the recommended
attributes of team-led debriefs described in our earlier review of the
literature.
The tool captures each team member’s perceptions about the team’s
recent work experiences independently and anonymously, ensuring that
all team members provide candid input. Each team member answered nine
questions about their recent work as a team. The nine questions were devel-
oped with input from the instructors who teach the class. Consistent with
the Marks et al. (2001) framework, the questions examined team processes
that can affect team performance, including transitional processes (e.g.,
“we understood the Professor’s expectations”), action processes (e.g., “we
used our meeting time wisely”), and interpersonal processes (e.g., “mem-
bers of the team were open to ideas and input from other team members”).
To ensure that the debrief discussion focused on the team’s most
important needs, the tool analyzed team member responses; categorized
issues as high, medium or low priority; and produced a customized debrief
guide containing a prioritized and ordered list of questions the team should
discuss. This ensured the team would discuss the most important issues
first and avoid discussing issues unnecessarily.
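The tool's prioritization rules are described only at this level of detail. Purely as an illustration, a guide in this spirit could be assembled as follows; the function name, cutoff values, and priority rules here are invented for the sketch and are not DebriefNow's actual algorithm:

```python
from statistics import mean, pstdev

def build_debrief_guide(questions, responses, low_cut=3.5, mid_cut=4.5):
    """Hypothetical sketch: summarize each topic's anonymous member ratings
    (1-5 agreement), flag topics as high/medium/low priority, and sort so
    the most pressing issues are discussed first."""
    guide = []
    for question, scores in zip(questions, responses):
        m, spread = mean(scores), pstdev(scores)
        if m < low_cut or spread >= 1.0:   # weak agreement, or members disagree
            priority = "high"
        elif m < mid_cut:
            priority = "medium"
        else:
            priority = "low"
        guide.append((priority, m, question))
    rank = {"high": 0, "medium": 1, "low": 2}
    guide.sort(key=lambda item: (rank[item[0]], item[1]))  # worst topics first
    return guide
```

Under rules like these, a topic such as "we used our meeting time wisely" that members rated low, or rated very differently from one another, would appear at the top of the team's customized guide.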
To encourage the team to establish forward-looking action plans or agreements, the guided debrief also contained prompts asking how members intended to work together to address any teamwork concerns they identified. The Appendix shows the nine topics and provides examples of two questions (regarding clarity of expectations and use of meeting times) that could appear in a debrief guide.

Comparability Analyses

Records showed that 100% of team members in the guided debrief condition completed the online debrief tool at both times. We asked par-
ticipants in the guided debrief condition whether they agreed with the
following item on both surveys: “As a team, we discussed the results
of the online debriefing tool.” On the first survey, 78% agreed with the
statement and 75% agreed with the statement on the second survey. In
addition, we asked participants in both conditions to respond to two ques-
tions concerning how much time they devoted to discussing teamwork
during the semester. We compared their answers to the two questions to
discern whether any condition differences might have been attributable to
differing amounts of time spent on debriefs. Neither question evidenced a
significant difference: (a) “We met regularly to review how we were, and
should be, working together as a team” (t[172] = 1.82, ns) and (b) “We
allocated time during the semester specifically to review and discuss our
teamwork” (t[172] = 1.16, ns). Therefore, any debriefing condition effects cannot be attributed to differing amounts of time devoted to debriefs across conditions.
We also collected survey data from 147 students who took the same
course and worked in 35 teams during the previous semester at the same
university. These teams did not engage in any form of debrief (neither
unguided nor guided). Data were collected in a time frame that paralleled
the second survey administration for our focal teams and can serve as a
quasi-control condition. Naturally there are a number of factors that differ
across semesters, even at the same university and course, and neither indi-
viduals nor teams were randomly assigned across semesters. Nevertheless,
these data do help to gauge the impact of the two debriefing conditions.
At the individual level of analysis, members in the guided debrief condi-
tion reported significantly higher (p < .05) readiness for teamwork (M =
4.24, SD = .75) than did individuals in the unguided debrief (M = 3.89,
SD = .92) or quasi-control conditions (M = 4.03, SD = .83), though the lat-
ter two conditions did not differ significantly from each other. Similarly,
members in the guided debrief condition reported significantly higher
(p < .05) enthusiasm for teaming (M = 3.93, SD = 1.00) than did indi-
viduals in the unguided debrief (M = 3.34, SD = 1.07) or quasi-control
conditions (M = 3.55, SD = 1.01). And again, the latter two conditions
did not differ significantly from each other.
At the team level of analysis, teams in the guided debrief condition
reported significantly better (p < .05) overall team processes (M = 4.02,
SD = .39) than did teams in the unguided debrief (M = 3.72, SD = .56)
or quasi-control conditions (M = 3.79, SD = .41), with the latter two
conditions not differing significantly. Finally, the performance of teams
in the guided debrief condition was rated significantly (p < .05) higher
(M = 3.88, SD = .72) than the quasi-control teams (M = 3.29, SD = 1.13)
but not significantly better than the unguided debrief teams (M = 3.68,
SD = 1.06). And, the quasi-control and unguided debrief conditions did
not differ significantly.
In sum, these results suggest a consistent pattern in that, with the
exception of team performance, the guided debrief condition evidenced
better outcomes at both the individual and team levels of analysis than did
the unguided debrief or quasi-control conditions. The unguided debrief
condition appears to have offered little benefit, as it did not differ signifi-
cantly from the quasi-controls on any variable at either level of analysis.
In effect, the unguided debrief condition might be viewed as equivalent to
a quasi-control condition.

Analytic Framework

Given our cluster randomized quasi-experimental (treatment) design,
as well as the multilevel nature of our sample, we used hierarchical linear
modeling (HLM) techniques to account for the lack of independence of
observations within teams and classes (Raudenbush, 1997). For testing
team-level effects, therefore, this constitutes a two-level study with team
variables (i.e., average member academic competence, team processes,
team performance) residing at the lower level and the debriefing conditions
representing the upper (class) level. For testing individual-level effects,
our study represents a three-level design with individuals’ readiness for
teamwork and enthusiasm for teaming as the lowest-level criteria, along
with individuals’ academic competence as a lowest-level covariate. We
have included individuals’ academic competence, as indexed by their
grade point average, to control for its demonstrated influence on debriefing-related outcomes at both levels of analysis (Burke & Hutchins, 2007).
Team variables, therefore, reside at the middle level (i.e., Level 2) for testing their influences on individual outcomes, and the debriefing condition resides at the highest (class) level (i.e., Level 3) for these analyses.
We followed a multistage model building approach to test the hypoth-
esized relationships (Raudenbush, Bryk, Congdon, & du Toit, 2004). We
first fit a baseline or “null” model to determine the percentage of outcome
variance that resided within- and between-levels (Pituch, Murphy, & Tate,
2010). In the next stage we introduced substantive predictors from the
TABLE 1
Study Correlations and Descriptive Statistics

1 2 3 4 5 6 7
Class level
1. Debrief conditiona —
Team level
2. Team processes .49∗ –
3. Mean academic competence .24∗ .24∗ –
4. Team performance −.06 .31∗ .35∗ –
Individual level
5. Academic competence .12 .11 .50∗∗ .18∗ –
6. Readiness for teamwork .20∗∗ .37∗∗ .13 .15∗ −.01 –
7. Enthusiasm for teaming .22∗∗ .36∗∗ .11 .19∗ −.02 .41∗∗ –
Mean 1.46 3.79 3.07 5.62 3.06 4.05 3.67
SD .51 .46 .19 .23 .38 .86 1.23

Note. Rows 1–4, N = 35 teams. Rows 5–7, N = 175 individuals. Team scores were assigned
down to individual members and correlations have not been adjusted for nonindependence.
a Coded: unguided debrief = 1, guided debrief = 2.
∗p < .05. ∗∗p < .01.

same level of analysis into the equation to test their relationships. For
example, we introduced members’ average academic competence and
team processes as predictors of team performance. For the individual-
level outcomes, we introduced individuals’ academic competence and the
other individual dependent variable into their respective equations at this
stage (the latter because the two individual-level outcomes were corre-
lated significantly, r = .41, p < .001). In other words, when readiness for
teamwork was the criterion, we controlled for enthusiasm for teaming,
and when enthusiasm for teaming was the criterion, we controlled for
readiness for teamwork. We then successively added substantive predic-
tors from higher levels of analysis to test their influences. Specifically,
the debriefing condition code was introduced into the team performance
equation. For the individual-level outcomes, the team-level variables (i.e.,
average member academic competence, team processes, and team perfor-
mance) were added in one step, and then the debriefing condition code
was added in a later step. Although overall effect size estimates are tenu-
ous in multilevel models, we report ∼R2 (see Snijders & Bosker, 1999)
for comparison purposes. We employ p < .05 as our significance level
throughout.
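Because ∼R2 values recur throughout the results, the Snijders and Bosker (1999) statistic is worth unpacking: it is the proportional reduction in total residual variance relative to the intercept-only model. A minimal two-level sketch; the variance components in the example are illustrative, not estimates from this study:

```python
def pseudo_r2(null_within, null_between, model_within, model_between):
    """Snijders & Bosker (1999) ~R2 for a two-level model: proportional
    reduction in total residual variance (within + between components)
    relative to the intercept-only null model."""
    return 1.0 - (model_within + model_between) / (null_within + null_between)

# Illustrative: a predictor that cuts between-group variance from .25 to .15
# while leaving within-group variance at .75 reduces total variance by 10%
print(round(pseudo_r2(0.75, 0.25, 0.75, 0.15), 2))  # 0.1
```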

Results

Table 1 reports descriptive statistics and correlations among all variables. Notably, at the team level of analysis, team processes and members’ average academic competence, but not the debrief intervention, correlated
TABLE 2
Results of Two-Level HLM Analyses Predicting Team Outcomes

Team processes Team performance

Predictors 1 2 1 2
Team level
1. Mean academic competence .17 (.15) .12 (.15) .37 (.18)∗ .36 (.18)∗
2. Team processes – – .29 (.13)∗ .42 (.15)∗∗
Class level
3. Debriefing conditiona .92 (.25)∗∗ −.53 (.36)
Δ∼R2 .01 .10∗∗ .14∗∗ .03
Total ∼R2 .01 .11∗∗ .14∗∗ .17∗∗

Note. N = 35 teams in 9 classes. Table values are parameter estimates with standard errors
within parentheses.
a Coded: unguided debrief = 1, guided debrief = 2.
∗p < .05. ∗∗p < .01.

significantly (p < .05) with performance. The debrief intervention did, however, correlate significantly with team process. At the individual level
of analysis, all variables except individuals’ academic competence cor-
related significantly (p < .05) with enthusiasm for teaming, whereas all
variables except individual and team average academic competence cor-
related significantly with readiness for teamwork. Tests of our hypotheses using HLM are summarized below; parameter estimates with standard errors for the team-level outcome analyses are shown in Table 2, and those for the individual-level outcomes are presented in Table 3.

Team-Level Outcomes

The baseline analyses revealed that 25.5% of the variance in team processes resided between classes (χ²(8) = 20.93, p < .01), with 74.5%
occurring within classes. These results indicate that the nesting of teams in
classes should be taken into account in the substantive tests. We regressed team processes onto members’ average academic competence, which failed to account for any significant variance (∼R2 = 1%; β = .17, SE = .15, ns). However, adding the debriefing code (γ = .92, SE = .25, p < .05) produced a significant positive ∼R2 of 10%. These results are
consistent with Hypothesis 1.
Next, we regressed team performance onto team processes (β = .29, SE = .13, p < .05) and average academic competence (β = .37, SE = .18, p < .05), which collectively accounted for a significant (χ²(2) = 7.43, p < .05) 14% of criterion variance. The team process results are consistent
with Hypothesis 2a. We then added the debrief condition dummy code to
the equation, which failed to account for significant additional criterion
TABLE 3
Results of Three-Level HLM Analyses Predicting Individual Outcomes
Readiness for teamwork Enthusiasm for teaming

Predictors 1 2 3 1 2 3
Individual level
1. Academic −.05 (.08) −.06 (.08) −.06 (.08) −.04 (.07) −.07 (.07) −.07 (.07)
competence
2. Readiness for – – – .35 (.07)∗∗ .30 (.07)∗∗ .30 (.07)∗∗
teamwork
3. Enthusiasm for .40 (.07)∗∗ .32 (.07)∗∗ .32 (.07)∗∗ – – –
teaming
Team level
4. Mean academic .03 (.07) .03 (.07) −.01 (.08) −.03 (.08)
competence
5. Team processes .25 (.08)∗∗ .25 (.09)∗∗ .21 (.09)∗ .18 (.09)∗
6. Team performance .00 (.07) .00 (.08) .10 (.08) .12 (.08)
Class level
7. Debriefing .02 (.16) .17 (.22)
conditiona
Δ∼R2 .18∗∗ .04∗∗ .00 .17∗∗ .06∗ .00
Total ∼R2 .18∗∗ .22∗∗ .22∗∗ .17∗∗ .23∗∗ .23∗∗

Note. N = 174 individuals in 35 teams. Table values are parameter estimates with standard
errors within parentheses.
a Coded: unguided debrief = 1, guided debrief = 2.
∗p < .05. ∗∗p < .01.

variance (χ²(1) = 2.38, ns). We tested whether team processes mediated the relationship between the debriefing manipulation and team performance using Selig and Preacher’s (2008) bootstrapping procedure. The bootstrapping procedure used the parameter estimates associated with the debriefing intervention–team processes and the team processes–team performance linkages, along with their respective standard errors, and generated 20,000 versions of their product term. The 95% confidence interval of these estimates (.03–.63) did not include zero, which is evidence consistent with a mediational effect. Notably, however, the debriefing conditions did not
exhibit a significant effect on team performance when considered alone,
akin to a zero-order correlation with the correct error term (γ = –.12,
SE = .24, ns). Therefore, the relationship between the two is perhaps
more accurately referred to as an indirect effect (see Mathieu & Taylor,
2006, 2007) rather than full mediation as anticipated in Hypothesis 2b.
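The Selig and Preacher (2008) procedure used above is a Monte Carlo method: each path coefficient is drawn repeatedly from a normal distribution centered on its estimate with its standard error, the draws are multiplied, and percentile bounds of the products form the confidence interval. A sketch, fed the Table 2 estimates for the two paths (the function name is ours, and the resulting bounds will differ somewhat from the reported .03–.63 because the draws are random):

```python
import random

def monte_carlo_indirect_ci(a, se_a, b, se_b, n=20000, alpha=0.05, seed=42):
    """Monte Carlo confidence interval for an indirect effect a*b (in the
    spirit of Selig & Preacher, 2008): simulate both paths, form the product,
    and take the alpha/2 and 1 - alpha/2 percentile bounds."""
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b) for _ in range(n))
    lo = products[int(n * alpha / 2)]
    hi = products[int(n * (1 - alpha / 2)) - 1]
    return lo, hi

# Debrief condition -> team processes (gamma = .92, SE = .25) and
# team processes -> team performance (beta = .42, SE = .15), per Table 2
lo, hi = monte_carlo_indirect_ci(0.92, 0.25, 0.42, 0.15)
# An interval that excludes zero is evidence for the indirect effect
```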

Individual-Level Outcomes

Readiness for teamwork. Table 3 presents a summary of the three-level HLM models used to test hypothesized influences on the individual
outcomes. The baseline analyses revealed that 85% of the total variance in
members’ readiness for teamwork scores resided at the individual level (Level 1) of analysis, 8% was observed between teams at Level 2, and the remaining
7% resided at Level 3. These findings indicate that the three-level nesting
of individuals in teams and classes should be modeled. We next regressed readiness for teamwork onto individuals’ academic competence (β = –.05, SE = .08, ns) and enthusiasm for teaming (β = .40, SE = .07, p < .01) at the individual level, which collectively accounted for a significant (χ²(2) = 28.96, p < .01) 18% of criterion variance. We added the three team-level predictors to the equation, which combined yielded a significant ∼R2 = .04, p < .05 (χ²(3) = 12.12, p < .01). Inspection of the unique effects revealed that team processes (γ = .25, SE = .08, p < .01) had a significant positive cross-level influence, supporting Hypothesis 3. Neither average academic competence (γ = .03, SE = .07, ns) nor team performance (γ = .00, SE = .07, ns) had significant unique effects. Finally, adding the debriefing condition code (γ = .02, SE = .16, ns) from Level 3 failed to produce a significant increment, ∼R2 = .00, ns (χ²(1) = .01,
ns). However, we tested the anticipated cascading cross-level mediation effect of the debriefing condition (Level 3) on individuals’ readiness for teamwork (Level 1) via team processes (Level 2) using Selig and Preacher’s (2008) bootstrapping procedure. The 95% confidence interval of these estimates did not include zero (.05–.51), which is consistent with an inference of cross-level full mediation, as proposed in Hypothesis 3. Notably, we also regressed readiness for teamwork onto just the
debriefing manipulation and found a significant cross-level relationship
(γ = .41, SE = .16, p < .05). Again, this analysis is akin to a zero-order
correlation between a Level-3 predictor (i.e., the debriefing condition)
and a Level-1 outcome (i.e., individual teamwork readiness) using the
correct error term and is consistent with an inference of cross-level full mediation (Mathieu & Taylor, 2007).
Enthusiasm for teaming. Table 3 also summarizes the findings for in-
fluences on individuals’ enthusiasm for teaming. The baseline analyses
revealed that 76% of the total variance in individuals’ enthusiasm for teaming scores resided at the individual level (Level 1) of analysis, 17% was observed between teams at Level 2, and the remaining 7% resided at
Level 3. Again, these findings indicate that the three-level nesting of in-
dividuals in teams and classes should be modeled. We then regressed en-
thusiasm for teaming onto individuals’ academic competence (β = –.04,
SE = .07, ns) and readiness for teamwork (β = .35, SE = .07, p <
.01) at the individual level, which collectively accounted for a significant (χ²(2) = 24.18, p < .01) 17% of criterion variance. Adding the three team-level predictors to the equation produced a significant ∼R2 = .06, p < .05 (χ²(3) = 9.81, p < .05) that was solely attributable to the impact
of team processes (γ = .21, SE = .09, p < .05; average academic competence: γ = –.01, SE = .08, ns; team performance: γ = .10, SE = .08, ns).
Finally, adding the debriefing condition code (γ = .17, SE = .22, ns) did not produce a significant increment, ∼R2 = .00, ns (χ²(1) = .56, ns). Once more
we tested the cascading cross-level mediation effect from the debriefing
condition (Level 3) via team processes (Level 2) on individuals’ enthusi-
asm for teaming (Level 1) using Selig and Preacher’s (2008) bootstrapping
procedure. The 95% confidence interval of these estimates did not include
zero (.01–.42), suggesting a modest full cross-level mediation effect as
anticipated in Hypothesis 4. Regressing enthusiasm for teaming onto just
the debriefing manipulation revealed a significant cross-level relationship
(γ = .44, SE = .22, p < .05), which is further evidence consistent with a
significant cross-level full mediation inference.

Discussion

Though prior researchers have clearly shown that well-facilitated team debriefs work, they have not examined how to enable teams to
conduct team-led debriefs, adequately compared different debrief tech-
niques, or studied the mechanisms through which debriefs affect team-
and individual-level outcomes. The lack of research-validated, team-led
debriefing techniques is a problem because it is unlikely that there will
ever be enough trained debrief facilitators to support all the teams that
could benefit from debriefs. Developing a better understanding of how
teams can conduct effective team-led debriefs may be the key to their
widespread adoption.
This study revealed how enhancements to an unguided, team-led de-
brief can significantly impact subsequent team processes and thereby
improve both team and individual outcomes. Our results suggest that the
guided debrief technique yielded significantly better results than did a
basic debrief approach. Teams in both conditions in this study had the
opportunity to conduct two team-led debriefs during a series of perfor-
mance episodes. Both were provided with instructions that encouraged
them to reflect upon their recent past experiences. However, based on
the common problems seen in prior team debriefs and the common team
information processing limitations revealed in prior research, the guided
debriefing process led them to consider key team processes, ensured input
from all team members, provided prioritized discussion questions, and
directed them to consider future actions. This was not an “on” versus
“off” comparison of debriefing versus no debriefing as was the case in
most prior studies; instead, fairly subtle changes in debriefing technique
were tested and shown to yield statistically significant differences at both
the team and individual levels of analysis. That said, comparisons against
teams in a quasi-control condition suggested that the unguided debrief
was not particularly beneficial. It appears that, at least in this case, if left
to their own devices, teams do not conduct effective debriefs.
The study also reconfirmed the central role that team processes play
in team effectiveness. As numerous prior studies attest (LePine et al.,
2008), team processes were significantly related to team performance.
Team processes were also related to two key individual outcomes and
fully mediated the effect of debrief condition on the dependent variables.
Together, these results suggest that the way in which the guided debrief
operates is by focusing attention on teamwork processes and enabling
members to self-correct the way they work together, which in turn can
boost performance and enhance members’ readiness for teamwork and enthusiasm for teaming.
Notably, we did not observe any direct effects between the debriefing
condition and team performance. We believe that this is likely attributable
to two factors. First, the guided debriefing condition was designed to help
team members better coordinate their efforts in terms of team processes.
The feedback and guidance were not driven by task considerations per se but rather were designed to help members focus upon and understand how
they worked with one another and coordinated their efforts. In other words,
the focus of the intervention was on team processes, not performance. Sec-
ond, we employed instructors’ performance evaluations as our team-level
criterion so as to have a consistent team outcome measure across classes.
However, raters are subject to contextual effects such as rating individu-
als relative to other group members (Yammarino, Dubinsky, & Hartley,
1987), or in this context, rating teams relative to others in the same class.
To the extent that instructors implicitly “curved” ratings within classes, between-group comparisons would be attenuated. The HLM analyses that
we employed serve to control for rater effects (Lahuis & Avis, 2007),
although the influence of any implicit curving will not be eradicated. Of
course, to the extent that any such rater effects might be operating, they
would serve to attenuate observed relationships, rendering our findings conservative.
Team performance is a traditional and important outcome, but given
the trend in organizations to move team members on and off project
teams and for people to be members of multiple teams simultaneously
(see Mathieu et al., 2008), individual readiness and enthusiasm for future
teaming may be just as important as immediate team performance. Our
findings show that a positive team experience not only enhances current
team performance but can better prepare team members for their future
team assignments and engender positive affect toward working in teams.
A well-designed debrief intervention may help build the type of human
capital (in the form of personal readiness and enthusiasm for teamwork)
needed for future team assignments.

Limitations

This study has several positive features. It was a multilevel experiment, incorporating a quasi-experimental manipulation, with participants who
had reason to care about their performance and who worked together
for a period of 10 weeks. Team process and performance measures were
collected from different sources, and the design allowed for the testing
of multilevel hypotheses. We found significant effects based on a small
experimental manipulation and accounted for 11% of the variance in team processes, 17% in team performance, and 22% and 23% in the individual outcomes.
However, like most studies, this one had a few limitations worth noting.
We sampled students who worked on a course-related task, which
could raise questions about the generalizability of the findings. However,
Anderson, Lindsay, and Bushman (1999) found that there is typically
high correspondence between findings in lab and field-based settings.
Moreover, though the tasks were academic in this study, the motivational
dynamics and temporal period of performance can be seen as analogous
to those exhibited by many project teams in organizations. Team member
rewards (in the form of grades) were linked to their team’s performance,
so they had a meaningful incentive to perform well as a team. In addition,
they performed together over a period of weeks (rather than hours), which
is similar to how project teams typically operate. Finally, the debrief
intervention in both conditions was the same as those used by teams
in organizational environments and relevant to the work the teams were
performing. Thus, the experiment had a high level of psychological fidelity,
so the primary question about generalizability is probably
whether the results would be similar for teams that perform significantly
different types of tasks.
Though we conducted a cluster randomized field experiment, neither
individuals nor teams were randomly assigned to conditions. Rather, given
ethical and logistical concerns, we randomly assigned classes to debrief
conditions. This design is not unlike ones employed in education, political
science, marketing, and other disciplines where “lower level” units within
clusters must be treated similarly. However, the design does necessitate
additional modeling considerations and a boundary for generalizability
(namely, the nature of the higher-level clusters). Nevertheless, this design
permitted us to introduce an intervention in a natural setting while main-
taining an even playing field for participants and minimizing many threats
to internal validity. In a related vein, given the relatively modest class and
team sample sizes, there are concerns about statistical power to discern
fully from partially mediated cross-level effects (Mathieu & Taylor, 2007).
However, we did obtain significant indirect effects in this design, and given
the nonsignificant and minuscule direct effects of the debriefing condi-
tion (after accounting for team processes) on team performance and the
individual-level outcomes, even enormous sample sizes would not likely
lead to an inference of partial mediation. Naturally, however, additional
studies that test the replicability of these findings are encouraged.
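The inference of full (rather than partial) mediation above rests on obtaining significant indirect effects alongside negligible direct effects. One common way to test such indirect effects, the Monte Carlo method of Selig and Preacher (2008) listed in the references, can be sketched as follows. The path estimates and variable roles below are hypothetical placeholders for illustration, not the study's actual coefficients:

```python
import random

def monte_carlo_indirect_ci(a, se_a, b, se_b, reps=20000, seed=42):
    """Monte Carlo confidence interval for an indirect effect a*b
    (Selig & Preacher, 2008): repeatedly draw each path coefficient
    from its sampling distribution and take percentiles of the product."""
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                      for _ in range(reps))
    lo = products[int(0.025 * reps)]   # 2.5th percentile
    hi = products[int(0.975 * reps)]   # 97.5th percentile
    return lo, hi

# Hypothetical path estimates (illustrative only):
# a: debrief condition -> team processes; b: team processes -> performance.
low, high = monte_carlo_indirect_ci(a=0.50, se_a=0.10, b=0.60, se_b=0.12)
# An indirect effect is inferred when the 95% CI excludes zero.
```

If the resulting interval excludes zero while the direct effect of the condition (controlling for processes) is near zero, full mediation is the more parsimonious inference, which mirrors the pattern reported here.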
Given that this was a quasi-experimental design with relatively few
teams, we chose to focus primarily on the impact of the debriefing ma-
nipulation. Naturally, however, additional variables could be considered.
For example, teams may have adopted different work designs that could
have influenced their processes and performance. Or, perhaps different
team compositions might interact with debriefing or other interventions.
For example, it might be that diverse groups would benefit more from a
debriefing manipulation than would more homogeneous ones.1 Debrief-
ing effects might combine with other potential interventions in interesting
ways. For example, determining whether debriefing proves to be redun-
dant with, or complementary to, early team interventions such as team
charters (Mathieu & Rapp, 2009) offers both theoretical and practical
implications that warrant investigation.

Implications for Practice and Future Research

Implications for practice. This study has several implications for prac-
tice. It illustrates that team-led debriefs can be readily deployed and that
a small adjustment in technique matters. The results reveal that “guided”
team-led debriefs can yield greater benefits than unguided debriefs. More-
over, it appears that simply providing a team with time and basic instruc-
tions to discuss what is going well and poorly may be insufficient for
boosting team processes and performance. For team-led debriefs to work,
teams need more guidance.

1 Notably, we did derive team-level composites based on members’ demographics and
correlated them with the debriefing manipulation, and with team processes and performance.
Specifically, we indexed the (a) percentage of men, (b) percentage of members who reported
their ethnicity as White (other categories collectively had little representation), (c) average
age, (d) age variation, and (e) major diversity, along with average academic competence.
The findings revealed the significant academic competence–team performance correlation
(r = .35, p < .05) and a curious (r = –.34, p < .05) correlation between major diversity and
the debrief condition. However, including major diversity as a predictor in either the team
process equation (γ = –.12, SE = .21, ns) or the team performance equation (γ = .03,
SE = .15, ns) failed to unearth any significant effects, while the other predictors retained
their same effects. Further details are available from the authors.
Though this study used an online tool to operationalize the features
of the enhanced debriefs, this is not necessarily a “technology solution.”
The guided debrief gathered perceptions from team members, integrated
them, and used the results to steer members to consider particularly salient
team processes and establish future action plans. These same functions
could be performed by a trained leader or using more conventional paper-
and-pencil techniques. Whereas computer-enabled guided debriefs are
certainly timely and efficient, and may reduce the need to involve skilled
facilitators in some cases, their effectiveness versus other delivery systems
that incorporate the same key features remains a question for future re-
search. In any case, our results suggest that teams can effectively conduct
team-led debriefs if provided with these tools.
The findings regarding individual outcomes are also of practical im-
port. It appears that debriefs that enhance a team’s way of working together
have the potential to positively affect individuals’ attitudes. For those or-
ganizations that tend to form and disband project teams regularly, these
findings provide a further rationale for equipping teams to conduct peri-
odic debriefs. Not only can well-designed debriefs help a team have more
positive team experiences and better performance, but it is possible that,
over time, those positive experiences will build employees’ enthusiasm
and readiness for future team assignments. In effect, positive team ex-
periences can become force multipliers and pay dividends many times
over as better skilled and motivated members work on teams in the future
(Kukenberger et al., in press).
Finally, for professors who assign students to project teams that will
work together over a period of several weeks, the study revealed a few
potential benefits for using debriefs in educational settings. Academic
teams are notorious for being dysfunctional and yielding jaded students,
with limited teamwork skills, who dread working on teams in the future.
Accordingly, we would encourage professors to provide student teams
with guidance that enables them to conduct their own team debriefs and
learn how to discuss and self-correct teamwork deficiencies. It could help
students have better team experiences and may increase their readiness to
work on teams after they graduate.
Future research needs. This study confirmed the potential value of
debriefs and demonstrated how a guided team-led method was superior
to an unguided team-led approach. There is now a solid body of research
that shows that, overall, debriefs can work (cf. Tannenbaum & Cerasoli,
2013), so future research should seek to clarify why and how debriefs
work, what inhibits their effectiveness in certain conditions, and how
best to enhance their utility. Programmatic, theory-based research can
provide insights about debriefing and team effectiveness. Such a program
of research could examine four key areas: (a) the boundary conditions
under which debriefs are likely to work, (b) the focus of debriefs, (c)
the features or attributes of the debriefing process, and (d) organizational
implementation issues.
It will be important to better understand the boundary conditions under
which debriefs are likely to work. For example, this study involved a
sample of students working on a multimonth team project. In recent years,
there has been a wave of research on medical debriefs, mostly focusing
on debriefing after simulation training. A key question for future research
is the extent to which team type, stability, interdependence, and purpose
influence whether and how a debrief will be effective, and how these factors
might interact with other debriefing characteristics.
To date, researchers have not studied how the focus of or content
covered in a debrief influences debriefing outcomes. In this study, we
featured teamwork processes as a primary focus of the guided debriefs,
and, based on our previous observations of teams debriefing in naturalistic
settings, we assumed that teams in the unguided condition would gravitate
more toward discussing taskwork. Future research should more system-
atically examine how a focus on teamwork and/or taskwork influences
the outcome of debriefs. In addition, in this study we examined team-
work processes as a single integrated dependent variable. Future research
should more carefully parse out the different types of team processes (cf.
Marks et al., 2001) to better understand how the focus of a debrief may
differentially influence unique aspects of team processes and, in turn, team
performance.
Future research should also examine the features or attributes of the
debriefing process. For example, we are not aware of research that com-
pares facilitator-led and team-led debriefing methods. Conceptually, all
else being equal, a debrief that is led by an impartial third party should
allow all team members to be actively engaged in the debrief discussion.
In contrast, a team-led debrief might be hypothesized to produce a greater
sense of ownership and commitment to change. Which of these hypotheses
is valid under which conditions? Similarly, to what extent do the
skill of the person leading the debrief and the amount of structure and
guidance built into the debriefing process drive debriefing dynamics
beyond that accounted for by whether the session is facilitator-led or
team-led? Is debrief effectiveness more a function of skills and structure
or of the position of the person leading it?
The guided debrief condition in this study attempted to overcome prior
concerns by incorporating features such as anonymous input from all team
members, automated prioritization of team needs, and encouragement to
establish future agreements. As we studied these features in combination,
we cannot definitively state which are essential or even most important.
The guided debrief technique was designed to provide greater structure and
encourage reflection; promote data verification, feedback, and information
sharing; and lead to goal-setting/action planning. Future research should
decompose the elements to yield a better understanding of how much and
what type of debriefing structure is needed under what conditions.
Finally, debriefing is fundamentally an intervention, and as such it
can be beneficial to view and research it from an organizational change
perspective. Future research should examine how key implementation is-
sues can influence debriefing effectiveness, including how it is introduced,
the readiness of key stakeholders, timing of use, and long-term effects.
For example, to what extent do team leader readiness and openness
influence effectiveness, and what can be done to introduce debriefing to
enhance team receptivity? Future research might also extend this study
by tracking teams that participate in debriefs over time. Does the impact,
and perhaps novelty, of the intervention wane over time or gain power as it
becomes second nature? Are there any benefits to periodically introducing
live facilitators or process consultants to guide the discussion?
A few research questions would need to be examined at the organization
or business unit level. What, if any, organizational benefits accrue
from conducting regular team debriefs? Are individuals who participate
in effective debriefs as team members more apt to lead debriefs when they
are team leaders? Do business units that conduct debriefs regularly build
a cadre of individuals who are more ready and enthusiastic to participate
in teams? Over time, can debriefs help build a “culture” of teamwork in
an organization where individuals naturally discuss teamwork and feel a
sense of efficacy when given team assignments?

Conclusion

Team-based structures have become more widely used in organizations,
and the trend is for individuals to participate in many teams during
their career. Therefore, it is critical that team members both perform well
in their current team and also build skills and enthusiasm for working on
future teams. This study explored a team-led team debrief that organiza-
tions can implement to enhance team performance and build individual
readiness and enthusiasm. It is clear that the guided team debrief exam-
ined in this study provided substantial benefits to teams and team members
above and beyond the unguided debrief. Organizations that are looking
for a relatively easy to deploy intervention to improve teamwork should
give serious consideration to the use of guided team debriefs.

REFERENCES

Allen JA, Baran BE, Scott CW. (2010). After-action reviews: A venue for the promotion
of safety climate. Accident Analysis and Prevention, 42, 750–757.
Anderson CA, Lindsay JJ, Bushman BJ. (1999). Research in the psychological laboratory:
Truth or triviality? Current Directions in Psychological Science, 8, 3–9.
Balzer WK, Doherty ME, O’Connor R. (1989). Effects of cognitive feedback on perfor-
mance. Psychological Bulletin, 106, 410–433.
Bandura A, Cervone D. (1983). Self-evaluative and self-efficacy mechanisms governing the
motivational effects of goal systems. Journal of Personality and Social Psychology,
45, 1017–1028.
Barmeyer CI. (2004). Learning styles and their impact on cross-cultural training: An in-
ternational comparison in France, Germany, and Quebec. International Journal of
Intercultural Relations, 28, 577–594.
Barrick MR, Stewart GL, Neubert MJ, Mount MK. (1998). Relating member ability and
personality to work-team processes and team effectiveness. Journal of Applied
Psychology, 83, 377–391.
Bell ST, Marentette BJ. (2011). Team viability for long-term and ongoing organizational
teams. Organizational Psychology Review, 1, 275–292.
Burke LA, Hutchins HM. (2007). Training transfer: An integrative literature review. Human
Resource Developmental Review, 6, 263–296.
Cannon-Bowers JA, Tannenbaum SI, Salas E, Volpe CE. (1995). Defining team compe-
tencies: Implications for training requirements and strategies. In Guzzo R, Salas E
(Eds.), Team effectiveness and decision making in organizations (pp. 333–380). San
Francisco, CA: Jossey-Bass.
Chen G, Gogus CI. (2008). Motivation in and of work teams: A multilevel perspective. In
Kanfer R, Chen G, Pritchard RD (Eds.), Work motivation: Past, present, and future
(pp. 285–318). New York, NY: Routledge.
Chen G, Kanfer R. (2006). Toward a systems theory of motivated behavior in work teams.
Research in Organizational Behavior, 27, 223–267.
Chen G, Kanfer R, DeShon RP, Mathieu JE, Kozlowski SWJ. (2009). The motivating
potential of teams: Test and extension of Chen and Kanfer’s (2006) cross-level model
of motivation in teams. Organizational Behavior and Human Decision Processes,
110, 45–55.
Chen G, Mathieu JE, Bliese PD. (2004). A framework for conducting multilevel construct
validation. In Dansereau FJ, Yammarino F (Eds.), Research in multi-level issues:
The many faces of multi-level issues (Vol. 3, pp. 273–303). Oxford, UK: Elsevier
Science.
Cook TD, Campbell DT. (1979). Quasi-experimentation: Design and analysis issues for
field settings. Boston, MA: Houghton Mifflin Co.
Cooper MR, Wood MT. (1974). Effects of member participation and commitment in group
decision making on influence, satisfaction, and decision riskiness. Journal of Applied
Psychology, 59, 127–134.
DeChurch LA, Haas CD. (2008). Examining team planning through an episodic lens effects
of deliberate, contingency, and reactive planning on team effectiveness. Small Group
Research, 39, 542–568.
DeChurch LA, Mesmer-Magnus JR. (2010). Measuring shared team mental models: A
meta-analysis. Group Dynamics, 14, 1–14.
DeRue DS, Nahrgang JD, Hollenbeck JR, Workman K. (2012). A quasi-experimental study
of after-event reviews and leadership development. Journal of Applied Psychology,
97, 997–1015.
DeRue DS, Wellman N. (2009). Developing leaders via experience: The role of develop-
mental challenge, learning orientation, and feedback availability. Journal of Applied
Psychology, 94, 859–875.
DeShon RP, Kozlowski SWJ, Schmidt AM, Milner KR, Wiechmann D. (2004). A multiple-
goal, multilevel model of feedback effects on the regulation of individual and team
performance. Journal of Applied Psychology, 89, 1035–1056.
DeVita MA, Schaefer J, Lutz J, Wang H, Dongilli T. (2005). Improving medical emergency
team (MET) performance using a novel curriculum and a computerized human
patient simulator. Quality and Safety in Healthcare, 14, 326–331.
Dismukes RK, Jobe KK, McDonnell LK. (2000). Facilitating LOFT debriefings: A critical
analysis. In Dismukes RK, Smith GM (Eds.), Facilitation in aviation training and
operations (pp. 13–25). Aldershot, UK: Ashgate.
Eckes G. (2002). Six sigma team dynamics: The elusive key to project success. Hoboken,
NJ: Wiley.
Ellis APJ, Bell BS, Ployhart RE, Hollenbeck JR, Ilgen DR. (2005). An evaluation of
generic teamwork skills training with action teams: Effects on cognitive and skill-
based outcomes. Personnel Psychology, 58, 641–672.
Ellis APJ, Hollenbeck JR, Ilgen DR, Porter CO, West BJ, Moon H. (2003). Team learn-
ing: Collectively connecting the dots. Journal of Applied Psychology, 88, 821–
835.
Ellis S, Davidi I. (2005). After-event reviews: Drawing lessons from successful and failed
experience. Journal of Applied Psychology, 90, 857–871.
Ellis S, Kruglanski AW. (1992). Self as an epistemic authority: Effects on experiential and
instructional learning. Social Cognition, 10, 357–375.
Fanning RM, Gaba DM. (2007). The role of debriefing in simulation-based learning.
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare,
2, 115–125.
Gigone D, Hastie R. (1993). The common knowledge effect: Information sharing and group
judgment. Journal of Personality and Social Psychology, 65, 959–974.
Group for Organizational Effectiveness, Inc. (2011). Debriefs: A powerful tool for enhanc-
ing team effectiveness (White Paper). Albany, NY: gOE Inc.
Hackman JR. (1982). Studying organizations: Innovations in methodology (six mono-
graphs). Beverly Hills, CA: Sage.
Hackman JR. (1987). The design of work teams. Englewood Cliffs, NJ: Prentice Hall.
Hinsz VB, Tindale RS, Vollrath DA. (1997). The emerging conceptualization of groups as
information processors. Psychological Bulletin, 121, 43–64.
Hirst G. (2009). Effects of membership change on open discussion and team performance:
The moderating role of team tenure. European Journal of Work and Organizational
Psychology, 18, 231–249.
Hollenbeck JR, Beersma B, Schouten ME. (2012). Beyond team types and taxonomies: A
dimensional scaling conceptualization for team description. Academy of Manage-
ment Review, 37, 82–106.
Hollenbeck JR, Williams CR, Klein HJ. (1989). An empirical examination of the an-
tecedents of commitment to difficult goals. Journal of Applied Psychology, 74,
18–23.
Homan AC, Hollenbeck JR, Humphrey SE, van Knippenberg D, Ilgen DR, van Kleef GA.
(2008). Facing differences with an open mind: Openness to experience, salience
of intragroup differences, and performance of diverse work groups. Academy of
Management Journal, 51, 1204–1222.
Ilgen DR, Hollenbeck JR, Johnson M, Jundt D. (2005). Teams in organizations: From I-P-O
models to IMOI models. Annual Review of Psychology, 56, 517–543.
James LR, Demaree RG, Wolf G. (1984). Estimating within-group interrater reliability with
and without response bias. Journal of Applied Psychology, 69, 85–98.
Janz BD, Colquitt JA, Noe RA. (1997). Knowledge worker team effectiveness: The role of
autonomy, interdependence, team development, and contextual support variables.
Personnel Psychology, 50, 877–904.
Karau SJ, Kelly JR. (1992). The effects of time scarcity and time abundance on group perfor-
mance quality and interaction process. Journal of Experimental Social Psychology,
28, 542–571.
Kerr NL, MacCoun RJ, Kramer GP. (1996). Bias in judgment: Comparing individuals and
groups. Psychological Review, 103, 687–719.
Kim PH. (1997). When what you know can hurt you: A study of experiential effects on
group discussion and performance. Organizational Behavior and Human Decision
Processes, 69, 165–177.
Kinni T. (2003). Getting smarter every day: How can you turn organizational learn-
ing into continual on-the-job behavior? Harvard Management Update, February,
3–4.
Kukenberger MR, Mathieu JE, Ruddy TM. (in press). Cross-level test of empowerment and
process influences on members’ informal learning and team commitment. Journal
of Management.
Lahuis DM, Avis JM. (2007). Using multilevel random coefficient modeling to investigate
rater effects in performance ratings. Organizational Research Methods, 10, 97–
107.
Langfred CW. (2000). The paradox of self-management: Individual and group autonomy
in work groups. Journal of Organizational Behavior, 21, 563–585.
Larson JR Jr., Foster-Fishman PG, Keys CB. (1994). Discussion of shared and unshared in-
formation in decision making groups. Journal of Personality and Social Psychology,
67, 446–461.
LePine JA, Piccolo RF, Jackson CL, Mathieu JE, Saul JR. (2008). A meta-analysis of
teamwork processes: Tests of a multidimensional model and relationships with
team effectiveness criteria. Personnel Psychology, 61, 273–307.
Lester SW, Meglino BM, Korsgaard MA. (2002). The antecedents and consequences of
group potency: A longitudinal investigation of newly formed work groups. Academy
of Management Journal, 45, 352–368.
Littlepage G, Robison W, Reddington K. (1997). Effects of task experience and group
experience on group performance, member ability, and recognition of expertise.
Organizational Behavior and Human Decision Processes, 69, 133–147.
Marks MA, Mathieu JE, Zaccaro SJ. (2001). A temporally based framework and taxonomy
of team processes. Academy of Management Review, 26, 356–376.
Marsick VJ, Volpe M. (1999). The nature and need for informal learning. Advances in
Developing Human Resources, 1, 1–9.
Mathieu J, Maynard MT, Rapp T, Gilson L. (2008). Team effectiveness 1997–2007: A
review of recent advancements and a glimpse into the future. Journal of Management,
34, 410–476.
Mathieu JE, Marks MA. (2006). Team process items. Storrs, CT: University of Connecticut.
Mathieu JE, Maynard MT, Taylor SR, Gilson LL, Ruddy TM. (2007). An examination
of the effects of organizational district and team contexts on team processes and
performance: A meso-mediational model. Journal of Organizational Behavior, 28,
891–910.
Mathieu JE, Rapp TL. (2009). Laying the foundation for successful team performance tra-
jectories: The roles of team charters and performance strategies. Journal of Applied
Psychology, 94, 90–103.
Mathieu JE, Taylor S. (2006). Clarifying conditions and decision points for mediational
type inferences in organizational behavior. Journal of Organizational Behavior, 27,
1031–1056.
Mathieu JE, Taylor S. (2007). A framework for testing meso-mediational relationships in
organizational behavior. Journal of Organizational Behavior, 28, 141–172.
Mesmer-Magnus JR, DeChurch LA. (2009). Information sharing and team performance:
A meta-analysis. Journal of Applied Psychology, 94, 535–546.
Morrison JE, Meliza LL. (1999). Foundations of the after action review process. U.S.
Army Research Institute for the Behavioral and Social Sciences. Special Report 42,
contract DASW01-98-C-0067, 1–71.
Muthen LK, Muthen BO. (2007). Mplus user’s guide (3rd ed.). Los Angeles, CA: Muthen
& Muthen.
Pituch KA, Murphy DL, Tate RL. (2010). Three-level models for indirect effects in school-
and class- randomized experiments in education. The Journal of Experimental Ed-
ucation, 78, 60–95.
Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. (2011). Research
regarding debriefing as part of the learning process. Simulation in Healthcare: The
Journal of the Society for Simulation in Healthcare, 6, S52–S57.
Rapp TL, Mathieu JE. (2007). Evaluating an individually self-administered generic team-
work skills training program across time and levels. Small Group Research, 38,
532–555.
Raudenbush SW. (1997). Statistical analysis and optimal design for cluster randomized
trials. Psychological Methods, 2, 173–185.
Raudenbush SW, Bryk AS, Cheong YF, Congdon RT, du Toit M. (2004). HLM6: Hi-
erarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software
International, Inc.
Rowe MP, Baker M. (1984). Are you hearing enough employee concerns? Harvard Business
Review, May-June, 27–35.
Salas E, DiazGranados D, Klein C, Burke CS, Stagl KC, Goodwin GF, Halpin SM. (2008).
Does team training improve team performance? A meta-analysis. Human Factors,
50, 903–933.
Schon D. (1987). Educating the reflective practitioner. San Francisco, CA: Jossey-Bass.
Selig JP, Preacher KJ. (2008). Monte Carlo method for assessing mediation: An interactive
tool for creating confidence intervals for indirect effects. [Computer software].
Retrieved from http://quantpsy.org/
Smith-Jentsch KA, Cannon-Bowers JA, Tannenbaum SI, Salas E. (2008). Guided team self-
correction: Impacts on team mental models, processes, and effectiveness. Small
Group Research, 39, 303–327.
Snijders TAB, Bosker RJ. (1999). Multilevel analysis: An introduction to basic and ad-
vanced multilevel modeling. London, UK: Sage.
Stasser G, Titus W. (1985). Pooling of unshared information in group decision making:
Biased information sampling during discussion. Journal of Personality and Social
Psychology, 48, 1467–1478.
Sundstrom E. (1999). The challenges of supporting work team effectiveness. In Sundstrom
E (Ed.), Supporting work team effectiveness (pp. 3–23). San Francisco, CA: Jossey-
Bass.
Tannenbaum SI. (1997). Enhancing continuous learning: Diagnostic findings from multiple
companies. Human Resource Management, 36, 437–452.
Tannenbaum SI, Beard RL, Cerasoli CP. (2013). Conducting team debriefs that work:
Lessons from research and practice. In Salas E, Tannenbaum SI, Cohen D, Latham
GG (Eds.), Developing and enhancing high-performance teams: Evidence-based
practices and advice. San Francisco, CA: Jossey-Bass.
Tannenbaum SI, Beard RL, Salas E. (1992). Team building and team effectiveness: An
examination of conceptual and empirical developments. In Kelley K (Ed.), Issues,
theory, and research in industrial/organizational psychology (pp. 117–153). Ams-
terdam: Elsevier.
Tannenbaum SI, Cerasoli C. (2013). Do team and individual debriefs enhance performance?
A meta-analysis. Human Factors, 55, 231–245.
Tannenbaum SI, Goldhaber-Fiebert S. (2012). Medical team debriefs: Simple, powerful,
underutilized. In Salas E, Frush K (Eds.), Improving patient safety through teamwork
and team training (pp. 249–256). New York, NY: Oxford University Press.
Tannenbaum SI, Mathieu JE, Salas E, Cohen D. (2012). Teams are changing – Are research
and practice evolving fast enough? Industrial and Organizational Psychology: Per-
spectives on Science and Practice, 5, 2–24.
Tasa K, Taggar S, Seijts GH. (2007). The development of collective efficacy in teams: A
multi-level and longitudinal perspective. Journal of Applied Psychology, 92, 17–27.
Tesluk P, Mathieu JE. (1999). Overcoming roadblocks to effectiveness: Incorporating man-
agement of performance barriers into models of work group effectiveness. Journal
of Applied Psychology, 84, 200–217.
Tindale RS. (1989). Group vs. individual information processing: The effects of outcome
feedback on decision-making. Organizational Behavior and Human Decision Pro-
cesses, 44, 454–473.
van Ginkel WP, Tindale RS, van Knippenberg D. (2009). Team reflexivity, development of
shared task representations, and the use of distributed information in group decision
making. Group Dynamics: Theory, Research, and Practice, 13, 265–280.
Wittenbaum GM, Stasser G. (1996). Management of information in small groups. In Nye
JL, Brower AR (Eds.), What’s social about social task representations? Research
on socially shared task representations in small groups (pp. 3–28). Thousand Oaks,
CA: Sage.
Wong Z. (2007). Human factors in project management: Concepts, tools, and techniques
for inspiring teamwork and motivation. San Francisco, CA: Jossey-Bass.
Yammarino FJ, Dubinsky AJ, Hartley SW. (1987). An approach for assessing individual
versus group effects in performance evaluations. Journal of Occupational Psychol-
ogy, 60, 157–167.

APPENDIX

Guided Debrief Themes, Sample Questions, and Discussion Probes

Themes
• Understanding professor’s expectations
• Clarity of team expectations and norms
• Completing case analysis with enough time to review and revise
• Using meeting time wisely
• Offering to assist and help one another
• Effort of team members
• Openness to ideas and input from others
• Willingness to and effectiveness of challenging one another
• Frustration and getting along with one another
Sample Questions and Discussion Probes

Question: Which best describes your team expectations and norms?

Response options:
• We have established clear expectations/norms (e.g., about effort, attending meetings on time, completing assignments), and if a teammate doesn’t live up to these expectations we address it.
• We have established clear expectations, but we often don’t address problems when they occur.
• We have not established clear expectations or norms for our team.

Related discussion probes:
Moderate concern: Team Expectations: Your team members’ responses suggest that either your team has not established clear expectations or that when someone breaks a norm the team does not address it with that teammate.
High concern: Team Expectations: At least half of your team thought that your team has not established clear expectations or that when someone breaks a norm the team does not address it with that teammate.
Discussion probes: Have we established clear expectations for our team?
• If we have clear expectations, are we holding each other accountable when someone slips? What might be keeping us from holding others accountable?
• Do we need to establish or clarify our team expectations (e.g., be on time for meetings, communicate with teammates, pull your own weight)? If so, how can we clarify our expectations?
• How should we let them know when someone doesn’t live up to our team’s expectations? What should we do? Do we need to be willing to say something?

Question: We used our meeting time wisely.

Response options:
• Strongly Agree
• Agree
• Disagree
• Strongly Disagree

Related discussion probes:
Moderate concern: Meeting Time: At least 20% of your team felt that meeting time was not used wisely.
High concern: Meeting Time: At least half of your team felt that meeting time was not used wisely.
Discussion probes: How effective were our meetings?
• What did we do well and what took us off track during our meeting? Why did this happen?
• How should we spend our time when we meet as a team? What types of work/discussion/issues should we focus on?
• What work needs to be completed prior to the meeting to ensure that we use our meeting time more wisely?
• What should we do to ensure that our meeting time is used more wisely for the next case? If we are not using our time wisely during a meeting, what should we do?
