A Conceptual Review of Emergent State Measurement: Current Problems, Future Solutions

Coultas et al.
Small Group Research, 2014, Vol. 45(6), 671–703
© The Author(s) 2014
DOI: 10.1177/1046496414552285
Abstract
Team research increasingly incorporates emergent states as an integral
mediator between team inputs and outcomes. In conjunction with this, we
have witnessed a proliferation and fragmentation of measurement techniques
associated with emergent states. This inconsistency in measurement presents
a problem for scientists and practitioners alike. For the scientist, it becomes
difficult to better understand the nature and effects of various emergent
states on team processes and outcomes. For the practitioner, it complicates
the process of measurement development, selection, and implementation.
To address these issues, we review the literature on emergent states, focusing
on the various measurement strategies employed, in order to unpack best
practices. In so doing, we highlight existing research that suggests innovative
solutions to the conceptual, methodological, and logistical problems that
consistently plague emergent state research. Our aim is to enhance emergent
state theory by applying psychometric principles to the measurement
techniques associated with these states.
Corresponding Author:
Chris W. Coultas, University of Central Florida, 4000 Central Florida Blvd., Psychology Bldg.
99, Ste. 320, Orlando, FL 32816, USA.
Email: ccoultas@ist.ucf.edu
This article is part of the special issue: 2014 Annual Review Issue, Small Group Research,
Volume 45(6).
Keywords
cohesion, collective efficacy, multilevel, team cognition, transactive memory
systems
The past 30 years have witnessed a surge in team research (Wuchty, Jones, &
Uzzi, 2007). Accordingly, there is now a large body of evidence that points to
the critical drivers of team performance (e.g., Bell, 2007; De Dreu &
Weingart, 2003; Gully, Incalcaterra, Joshi, & Beaubien, 2002). While initial
research focused on identifying antecedents to team performance (i.e., team,
individual, and task characteristics), more recent research has begun to
unpack the black box within input–mediator–output (IMO) models of team
performance (Mathieu, Maynard, Rapp, & Gilson, 2008), with emergent
states theorized to be a primary explanatory variable mediating the relation-
ship between team inputs and outcomes. Over the past decade, emergent
states—relatively dynamic, collective-level characteristics that “vary as a
function of team context, inputs, processes, and outcomes” (Marks, Mathieu,
& Zaccaro, 2001, p. 357)—have been consistently demonstrated to influence
desirable team outcomes (e.g., Kozlowski & Ilgen, 2006; Mathieu et al.,
2008; Rico, Sánchez-Manzanares, Gil, & Gibson, 2010). Given the impor-
tance of emergent states across many disciplines (e.g., organizational, sports,
military), assessment of these constructs is paramount. However, emergent
state measurement suffers from fragmented definitions, operationalizations,
aggregation techniques, and disjointed methodologies (cf. Lewis & Herndon,
2011; Mohammed, Klimoski, & Rentsch, 2000). This article reviews com-
mon emergent state measurement practices, gauging adherence to psycho-
metric best practices (e.g., Nunnally & Bernstein, 1994), which may have
implications for the internal validity and generalizability of emergent state
research. Issues addressed will relate to construct clarity, measure develop-
ment, multilevel aggregation, and measurement over time. We will highlight
instances where research has fallen short, and where it has made significant
advancements. Ultimately, this targeted conceptual review will advance
emergent state research by providing practical and insightful guidelines to
enhance researchers’ and practitioners’ ability to assess and address team
emergent states.
Research Methodology
To document the emergent state measurement science, the authors conducted
a multipronged search of several databases (i.e., PsycINFO, PsycARTICLES,
Business Source Premier, Military and Government Collection, ERIC,
that each performed differently depending on the criterion. Despite the
availability and diversity of multiple agreement indices, researchers typically
justify aggregation simply by appealing to rwg cutoffs. Indeed, this defaulting to
arbitrary cutoffs continues to occur despite recent advances in this very area
that address some of its most problematic issues (Lance et al., 2006). Rather,
the more appropriate practice would be to use significance testing to deter-
mine 95% critical values for setting agreement cutoffs (Dunlap, Burke, &
Smith-Crowe, 2003; LeBreton, James, & Lindell, 2005). Despite this, of the
259 emergent state articles reviewed, only 1 (Wholey et al., 2011) mentioned
using the critical value approach to determine cutoff scores. This is not to
claim that the critical value approach has not made an impact—a quick
Google Scholar search shows 135 references to the Dunlap and colleagues
article, but the majority of empirical studies citing it were on organizational climate
(e.g., McKay, Avery, & Morris, 2009), team processes (e.g., Vecchio, Justin,
& Pearce, 2010), or static team characteristics (Murphy, Cronin, & Tam,
2003; Van Mierlo, Rutte, Vermunt, Kompier, & Doorewaard, 2006).
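The critical-value logic just described can be sketched in code. In this minimal sketch, the function names and the example critical value are ours for illustration only; in practice, the critical value would be looked up in Dunlap, Burke, and Smith-Crowe's (2003) published tables for the relevant group size and number of scale points.

```python
def rwg(ratings, scale_points):
    """Single-item within-group agreement index (James, Demaree, & Wolf, 1984),
    using the uniform (rectangular) distribution as the no-agreement null."""
    n = len(ratings)
    mean = sum(ratings) / n
    s2 = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # observed within-group variance
    sigma_eu2 = (scale_points ** 2 - 1) / 12.0            # expected variance under a uniform null
    return 1.0 - s2 / sigma_eu2

def aggregation_justified(ratings, scale_points, critical_value):
    """Compare observed rwg against a significance-based critical value rather
    than an arbitrary cutoff such as .70; the appropriate critical value depends
    on group size and number of scale points."""
    return rwg(ratings, scale_points) >= critical_value
```

For example, five raters responding 3, 4, 3, 4, 3 on a 5-point scale yield rwg = .85, which would then be compared with the tabled critical value rather than with a conventional cutoff.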
Several articles used (or appeared to use) strictly additive models (i.e.,
aggregating to the collective with mean indices without referencing any check
for agreement). Additive models are appropriate if within-group variance is
irrelevant (Chan, 1998; Kozlowski & Klein, 2000); this may be the case in
loosely interdependent collectives (Molleman, 2009; Saavedra, Earley, & Van
Dyne, 1993; Steiner, 1972). Most emergent state research that does not report
agreement studied loose collectives (e.g., neighborhoods) where there was no
specific collaborative task (e.g., Sherrieb, Norris, & Galea, 2010; Tendulkar,
Koenen, Dunn, Buka, & Subramanian, 2012). However, we also noticed sev-
eral studies that failed to mention agreement indices even though they studied
collectives with highly interdependent structures and tasks (e.g., non-profit
boards, various therapy groups). Given that best practice is to clearly
explicate the rationale and method for aggregation, it would be helpful for
future researchers to state explicitly why they chose to use an additive model,
especially when the task is somewhat interdependent.
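When an additive model is chosen deliberately, that choice can at least be made explicit in the analysis code. The sketch below uses hypothetical function names and pairs mean aggregation with an optional agreement check based on the average deviation (AD) index and the commonly used A/6 rule of thumb for an A-point scale; the check is skipped only when within-group variance is genuinely irrelevant, as in loosely interdependent collectives.

```python
def average_deviation(ratings):
    """Average absolute deviation about the group mean (the AD agreement index)."""
    mean = sum(ratings) / len(ratings)
    return sum(abs(x - mean) for x in ratings) / len(ratings)

def additive_aggregate(ratings, scale_points, check_agreement=True):
    """Mean (additive) aggregation to the collective level. By default, refuse
    to aggregate when AD exceeds the common A/6 rule of thumb; pass
    check_agreement=False only when within-group variance is irrelevant
    (e.g., loosely interdependent collectives with no collaborative task)."""
    if check_agreement and average_deviation(ratings) > scale_points / 6.0:
        raise ValueError("Insufficient within-group agreement for an additive model")
    return sum(ratings) / len(ratings)
```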
Cole and colleagues’ work does not yet appear to be widely cited: a Google
Scholar search of articles citing this work returned only 15 hits; of these,
only one dealt with an emergent state—trust. De Jong and Dirks (2012) found
that mean trust, trust dispersion, and their interaction term all significantly
predicted team performance.
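De Jong and Dirks's finding implies a team-level operationalization that keeps dispersion in the model rather than treating it purely as a hurdle to clear before aggregating. A minimal sketch (the variable names are ours) of building such dispersion-composition predictors:

```python
import statistics

def dispersion_composition(ratings):
    """Team-level predictors in the spirit of dispersion-composition models
    (cf. Cole et al., 2011; De Jong & Dirks, 2012): the team mean, the
    within-team dispersion, and their product as an interaction term."""
    mean = statistics.mean(ratings)
    dispersion = statistics.stdev(ratings)  # sample SD across team members
    return {"mean": mean, "dispersion": dispersion, "interaction": mean * dispersion}
```

Each of the three resulting terms would then enter the team-level regression as a separate predictor of performance.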
Convergence over time. Arthur, Bell, and Edwards (2007) found support for the
hypothesis that within-team agreement on measures of collective efficacy
should increase, especially when using referent-shift measures. Their argu-
ment, which applies to all emergent states, was that “continued interaction
among team members provides a basis for which the team members can better
estimate” (Arthur et al., 2007, p. 39) the presence of an emergent state. Grow-
ing convergence over time was also evidenced in other studies (Dunlop, Falk,
& Beauchamp, 2013; Goncalo, Polman, & Maslach, 2010; Hommes et al.,
2014; Kanawattanachai & Yoo, 2007; Lee, Zhang, & Yin, 2011). Accordingly,
the general consensus in the literature seems to be that teams do trend toward
agreement over time. However, Kozlowski, Ployhart, and Lim (2010, cited in
Kozlowski & Chao, 2012) measured teams consistently (using experience
sampling) over an 8-week period and found that some teams converged toward
Summary
Teams and teamwork are increasingly important in modern society; accord-
ingly, research interest in measuring emergent states has grown considerably.
Yet despite the importance of these variables for predicting and improving
team performance, the extant emergent state literature remains somewhat
nascent. Furthermore, from our review of the emergent state literature, we
noted several problematic trends.
First, constructs are frequently either not defined sufficiently or defined
inconsistently across different studies (Lewis & Herndon, 2011). This obfus-
cates trends that may be apparent across different streams of research. It com-
plicates theory building, because cohesion or efficacy in one domain might
not mean the same thing in another domain. We urge researchers to intention-
ally distinguish the exact nature of their emergent state of interest and resist
the temptation to infer generalizability across research domains with diver-
gent definitions of a given construct. To do this, we suggest that researchers
also understand the evolution of the construct of interest over time. Because
emergent state research is relatively recent, many constructs have fluid defi-
nitions. Building theory and making inferences across studies and situations
can be problematic when a construct meant something different 50 years ago.
Understanding the breadth (across research domains) and depth (over time)
of the construct of interest should not only enable better integration of find-
ings across studies but also facilitate more nuanced and insightful theory
building and research design.
Second, and relatedly, there is no clear criterion for developing appropriate
item-specific operationalizations of different constructs (Lewis & Herndon,
2011). This issue is especially complicated by the fact that the meaning and
impact of various emergent states can
change somewhat depending on the referent, size of the group, and the team’s
task. Regarding team task type, we noted that different emergent states are
impacted differently by task interdependence. Although we did not notice
clear trends for understanding how interdependence affects specific emergent
states, we encourage researchers to pay attention to this potentially moderat-
ing factor.
Third, even when items are developed or selected correctly, researchers
tend to be fairly limited in the ways in which they operationalize the con-
structs at different levels of analysis. We encourage researchers to more con-
sistently use complex models to represent emergent states—accounting for
both emergent state level (e.g., individual, collective) and method of
aggregation (e.g., sharedness, dispersion, structure). It has been at least 15
years since researchers began highlighting the importance of multilevel
modeling and the complex forms that emergence can take (e.g., Chan, 1998;
Kozlowski & Klein, 2000). However, team-level and multilevel models rarely
incorporate configural elements in their representations of emergent states
(see Cole et al., 2011). We have presented several recent articles that we believe can and
should continue to make a strong impact on the field (e.g., Carron et al.,
2004; Cole et al., 2011; Dunlap et al., 2003). These works may help research-
ers better conceptualize levels of analysis in their theoretical and statistical
models. Doing so will facilitate more accurate and insightful multilevel
models, allowing researchers to generate and answer new and important
research questions, which will become increasingly important as organizations
look to different kinds of teams (e.g., distributed, cross-functional, multiteam
systems) to achieve objectives.
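To make this third point concrete, the same individual-level ratings can be composed to the team level in several distinct ways; which one represents "the" emergent state is a theoretical choice, not a statistical default. A sketch with illustrative names, loosely following Chan's (1998) typology:

```python
def team_operationalizations(ratings):
    """Alternative team-level representations of one set of member ratings:
    additive (mean), dispersion (here, the range), and simple configural
    summaries such as the minimum (a 'weakest link' representation)."""
    return {
        "additive": sum(ratings) / len(ratings),
        "dispersion": max(ratings) - min(ratings),
        "configural_min": min(ratings),
        "configural_max": max(ratings),
    }
```

Reporting which of these representations was modeled, and why, is precisely the kind of explicit rationale our review found lacking.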
Finally, research has yet to give consistent attention to the role of tempo-
rality and the dynamic emergence of various constructs. Recently, research-
ers have begun developing non-obtrusive methods for measuring some
emergent states, which may facilitate more frequent and less cumbersome
measurement. We recommend that researchers leverage these measurement
advances to further research in all areas of emergent state measurement.
Furthermore, to help address the role of time, we theoretically tie Kozlowski
and colleagues’ (2009) team development phases to emergent state develop-
ment to suggest a few ways in which these states may shift over time.
In an effort to synthesize the literature on emergent state measurement, we
have provided recommendations regarding what we view as the central issues
of the day. These recommendations are intended to act as guideposts for
researchers and practitioners alike. Practically, many of these recommendations
can act as standalone best practices that can and should be immediately
implemented into practice. Some of these recommendations are best prac-
tices that have already been acknowledged and developed elsewhere in semi-
nal works on measurement and multilevel theory (e.g., Chan, 1998; Kozlowski
& Klein, 2000; Nunnally & Bernstein, 1994). Nonetheless, our review high-
lights that some best practices are not being consistently followed. We point
to some of these inconsistent practices to help narrow the gap between where
we should be as a science and where we currently are. Specifically, better
abiding by these best practices will increase construct clarity, facilitate
research across domains, and strengthen the validity and generalizability of
findings, among other benefits. From a theoretical standpoint, these recom-
mendations are intended to stimulate debate on emergent state measurement
and act as a jumping-off point for future critical analysis and research. More
research on the role of agreement across different emergent states (e.g.,
Carron et al., 2004), the nature of various types of swift emergent states (e.g.,
Dufresne, 2013; Meyerson et al., 1996), and the role of time in emergent state
emergence is needed. We also encourage researchers to continue developing
and using innovative ways to unobtrusively assess various emergent states.
As we seek to understand the development and performance of collectives in
increasingly complex environments, these methodologies will become
increasingly important. The importance of understanding emergent states
will only grow as we continue to rely on teamwork to accomplish societal and
organizational goals; it is therefore essential that we not only better under-
stand these states, but that we better understand how to measure them. This
work represents one step toward the goal of continuing to improve the sci-
ence of emergent state measurement.
Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research was supported by an Army
Research Institute grant (Contract No. W5J9CQ-11-D-0002, Task Order 11-10002) to
Dr. Christina Curnow of ICF International.
References
Adams, B. D., Bruyn, L. E., & Chung-Yan, G. (2004). Creating a measure of
trust in small military teams. Retrieved from http://www.dtic.mil/cgi-bin/
GetTRDoc?AD=ADA436363
Alavi, S., & McCormick, J. (2008). The roles of perceived task interdependence and
group members’ interdependence in the development of collective efficacy in
university student group contexts. British Journal of Educational Psychology, 78,
375-393. doi:10.1348/000709907X240471
Alexandrov, A., Babakus, E., & Yavas, U. (2007). The effects of perceived man-
agement concern for frontline employees and customers on turnover intentions
moderating role of employment status. Journal of Service Research, 9, 356-371.
doi:10.1177/1094670507299378
Allen, M. S., Jones, M. V., & Sheffield, D. (2009). Attribution, emotion, and collec-
tive efficacy in sports teams. Group Dynamics: Theory, Research, and Practice,
13, 205-217. doi:10.1037/a0015149
Arthur, W. R., Bell, S. T., & Edwards, B. D. (2007). A longitudinal examination of
the comparative criterion-related validity of additive and referent-shift consen-
sus operationalizations of team efficacy. Organizational Research Methods, 10,
35-58. doi:10.1177/1094428106287574
Aryee, S., Chen, Z. X., & Budhwar, P. S. (2004). Exchange fairness and employee
performance: An examination of the relationship between organizational politics
and procedural justice. Organizational Behavior and Human Decision Processes,
94, 1-14. doi:10.1016/j.obhdp.2004.03.002
Austin, J. R. (2003). Transactive memory in organizational groups: The effects of
content, consensus, specialization, and accuracy on group performance. Journal
of Applied Psychology, 88, 866-878. doi:10.1037/0021-9010.88.5.866
Avnet, M. S., & Weigel, A. L. (2013). The structural approach to shared knowl-
edge: An application to engineering design teams. Human Factors, 55, 581-594.
doi:10.1177/0018720812462388
Baer, M., & Frese, M. (2003). Innovation is not enough: Climates for initiative and
psychological safety, process innovations, and firm performance. Journal of
Organizational Behavior, 24, 45-68. doi:10.1002/job.179
Bain, P. G., Mann, L., & Pirola-Merlo, A. (2001). The innovation impera-
tive: The relationships between team climate, innovation, and performance
in research and development teams. Small Group Research, 32, 55-73.
doi:10.1177/104649640103200103
Barrick, M. R., Bradley, B. H., Kristof-Brown, A. L., & Colbert, A. (2007). The
moderating role of top management team interdependence: Implications for
real teams and working groups. Academy of Management Journal, 50, 544-557.
doi:10.5465/AMJ.2007.25525781
Bartram, D. (2005). The great eight competencies: A criterion-centric approach to
validation. Journal of Applied Psychology, 90, 1185-1203. doi:10.1037/0021-
9010.90.6.1185
Bayazit, M., & Mannix, E. A. (2003). Should I stay or should I go? Predicting team
members’ intent to remain in the team. Small Group Research, 34, 290-321.
doi:10.1177/1046496403034003002
Bell, S. T. (2007). Deep-level composition variables as predictors of team per-
formance: A meta-analysis. Journal of Applied Psychology, 92, 595-615.
doi:10.1037/0021-9010.92.3.595
Biemann, T., Ellwart, T., & Rack, O. (2014). Quantifying similarity of team men-
tal models: An introduction of the rRG index. Group Processes & Intergroup
Relations, 17, 125-140. doi:10.1177/1368430213485993
Blecharz, J., Luszczynska, A., Scholz, U., Schwarzer, R., Siekanska, M., & Cieslak,
R. (2014). Predicting performance and performance satisfaction: Mindfulness
and beliefs about the ability to deal with social barriers in sport. Anxiety, Stress
& Coping: An International Journal, 27, 270-287. doi:10.1080/10615806.2013
.839989
Bradley, B. H., Baur, J. E., Banford, C. G., & Postlethwaite, B. E. (2013). Team play-
ers and collective performance: How agreeableness affects team performance
over time. Small Group Research, 44, 680-711. doi:10.1177/1046496413507609
Brahm, T., & Kunze, F. (2012). The role of trust climate in virtual teams. Journal of
Managerial Psychology, 27, 595-614. doi:10.1108/02683941211252446
Brown, R. D., & Hauenstein, N. A. (2005). Interrater agreement reconsidered: An
alternative to the rwg indices. Organizational Research Methods, 8, 165-184.
doi:10.1177/1094428105275376
Burke, C. S., Sims, D. E., Lazzara, E. H., & Salas, E. (2007). Trust in leadership:
A multi-level review and integration. Leadership Quarterly, 18, 606-632.
doi:10.1016/j.leaqua.2007.09.006
Butler, J. K., Jr. (1991). Toward understanding and measuring conditions of trust:
Evolution of a Conditions of Trust Inventory. Journal of Management, 17, 643-
663. doi:10.1177/014920639101700307
Cannon-Bowers, J. A., Tannenbaum, S. I., Salas, E., & Volpe, C. E. (1995). Defining
competencies and establishing team training requirements. In R. A. Guzzo & E.
Salas (Eds.), Team effectiveness and decision making in organizations (pp. 333-
380). San Francisco, CA: Jossey-Bass.
Carron, A. V., & Brawley, L. R. (2000). Cohesion: Conceptual and measurement
issues. Small Group Research, 31, 89-106. doi:10.1177/1046496412468072
Carron, A. V., Brawley, L. R., Bray, S. R., Eys, M. A., Dorsch, K. D., Estabrooks,
P. A., & Terry, P. C. (2004). Using consensus as a criterion for groupness:
Implications for the cohesion-group success relationship. Small Group Research,
35, 466-491. doi:10.1177/1046496404263923
Carron, A. V., Widmeyer, W. N., & Brawley, L. R. (1985). The development of
an instrument to assess cohesion in sport teams: The Group Environment
Questionnaire. Journal of Sport Psychology, 7, 244-266.
Chan, D. (1998). Functional relations among constructs in the same content domain
at different levels of analysis: A typology of composition models. Journal of
Applied Psychology, 83, 234-246. doi:10.1037/0021-9010.83.2.234
Chen, G., Bliese, P. D., & Mathieu, J. E. (2005). Conceptual framework and
statistical procedures for delineating and testing multilevel theories
for homology. Organizational Research Methods, 8, 375-409. doi:10.1177/
1094428105280056
Chen, G., Gully, S. M., & Eden. D. (2001). Validation of a new General Self-Efficacy
Scale. Organizational Research Methods, 4, 62-83. doi:10.1177/109442810141004
Chiocchio, F., & Essiembre, H. (2009). Cohesion and performance: A meta-analytic
review of disparities between project teams, production teams, and service teams.
Small Group Research, 40, 382-420. doi:10.1177/1046496409335103
Chou, H. W., Lin, Y. H., & Chou, S. B. (2012). Team recognition, collective effi-
cacy, and performance in strategic decision-making teams. Social Behavior and
Personality, 40, 381-394. doi:10.2224/sbp.2012.40.3.381
Clariana, R. B., & Wallace, P. (2007). A computer-based approach for deriving and
measuring individual and team knowledge structure from essay questions. Journal
of Educational Computing Research, 37, 211-227. doi:10.2190/EC.37.3.a
Cohen, A., Ben-Tura, E., & Vashdi, D. R. (2012). The relationship between
social exchange variables, OCB, and performance: What happens when
you consider group characteristics? Personnel Review, 41, 705-731.
doi:10.1108/00483481211263638
Cohen, A., Doveh, E., & Nahum-Shani, I. (2009). Testing agreement for multi-item
scales with the indices rWG(J) and ADM(J). Organizational Research Methods,
12, 148-164. doi:10.1177/1094428107300365
Cole, M. S., Bedeian, A. G., Hirschfeld, R. R., & Vogel, B. (2011). Dispersion-
composition models in multilevel research: A data-analytic framework.
Organizational Research Methods, 14, 718-734. doi:10.1177/1094428110389078
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust
propensity: A meta-analytic test of their unique relationship with risk taking and
performance. Journal of Applied Psychology, 92, 909-927. doi:10.1037/0021-
9010.92.4.909
Comu, S., Iorio, J., Taylor, J. E., & Dossick, C. (2013). Quantifying the impact of
facilitation on transactive memory system formation in global virtual project net-
works. Journal of Construction Engineering and Management, 139, 294-303.
doi:10.1061/(ASCE)CO.1943-7862.0000610
Cooke, N. J., Gorman, J. C., Myers, C. W., & Duran, J. L. (2013). Interactive team
cognition. Cognitive Science, 37, 255-285. doi:10.1111/cogs.12009
Cooke, N. J., Gorman, J. C., & Winner, J. L. (2007). Team cognition. In T. F.
Durso, R. S. Nickerson, S. T. Dumais, S. Lewandowsky, & T. J. Perfect (Eds.),
Handbook of applied cognition (2nd ed., pp. 239-268). Hoboken, NJ: John Wiley.
doi:10.1002/9780470713181.ch10
Costa, P. L., Graca, A. M., Marques-Quinteiro, P., Santos, C. M., Caetano, A., &
Passos, A. M. (2013). Multilevel research in the field of organizational behav-
ior: An empirical look at 10 years of theory and research. Sage Open, 1, 3-17.
doi:10.1177/2158244013498244
Crawford, E., & LePine, J. (2012). A configural theory of team processes: Accounting
for the structure of taskwork and teamwork. Academy of Management Review,
38, 32-48. doi:10.5465/amr.2011.0206
DeChurch, L. A., & Mesmer-Magnus, J. R. (2010a). The cognitive underpinnings of
effective teamwork: A meta-analysis. Journal of Applied Psychology, 95, 32-53.
doi:10.1037/a0017328
DeChurch, L. A., & Mesmer-Magnus, J. R. (2010b). Measuring shared team mental
models: A meta-analysis. Group Dynamics: Theory, Research, and Practice, 14,
1-14. doi:10.1037/a0017455
De Dreu, C. K. W., & Weingart, L. R. (2003). Task versus relationship conflict, team
performance, and team member satisfaction: A meta-analysis. Journal of Applied
Psychology, 88, 741-749. doi:10.1037/0021-9010.88.4.741
De Jong, B. A., & Dirks, K. T. (2012). Beyond shared perceptions of trust and moni-
toring in teams: Implications of asymmetry and dissensus. Journal of Applied
Psychology, 97, 391. doi:10.1037/a0026483
DeRue, D. S., Hollenbeck, J., Ilgen, D., & Feltz, D. (2010). Efficacy dispersion in
teams: Moving beyond agreement and aggregation. Personnel Psychology, 63,
1-40. doi:10.1111/j.1744-6570.2009.01161.x
Dion, K. L. (2000). Group cohesion: From “field of forces” to multidimen-
sional construct. Group Dynamics: Theory, Research, and Practice, 4, 7-26.
doi:10.1037/1089-2699.4.1.7
Dionne, S. D., Sayama, H., Hao, C., & Bush, B. (2010). The role of leadership in
shared mental model convergence and team performance improvement: An
agent-based computational model. Leadership Quarterly, 21, 1035-1049.
doi:10.1016/j.leaqua.2010.10.007
Dithurbide, L., Sullivan, P., & Chow, G. (2009). Examining the influence of team-ref-
erent causal attributions and team performance on collective efficacy: A multilevel
analysis. Small Group Research, 40, 491-507. doi:10.1177/1046496409340328
Dufresne, R. (2013). Learning from critical incidents by ad hoc teams: The impact
of storytelling on psychological safety. Academy of Management Proceedings.
doi:10.5465/AMBPP.2013.13939abstract. Retrieved from http://proceedings.
aom.org/content/2013/1/13939.short
Dunlap, W. P., Burke, M. J., & Smith-Crowe, K. (2003). Accurate tests of statis-
tical significance for rWG and average deviation interrater agreement indexes.
Journal of Applied Psychology, 88, 356-362. doi:10.1037/0021-9010.88.2.356
Dunlop, W. L., Falk, C. F., & Beauchamp, M. R. (2013). How dynamic are exercise
group dynamics? Examining changes in cohesion within class-based exercise
programs. Health Psychology, 32, 1240-1243. doi:10.1037/t01866-000
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams.
Administrative Science Quarterly, 44, 350-383. doi:10.2307/2666999
Edmondson, A. C. (2004). Psychological safety, trust, and learning in organizations:
A group-level lens. In R. M. Kramer & K. S. Cook (Eds.), Trust and distrust in
organizations: Dilemmas and approaches (pp. 239-271). New York, NY: Russell
Sage Foundation.
Edwards, J. R. (2001). Multidimensional constructs in organizational behav-
ior research: An integrative analytical framework. Organizational Research
Methods, 4, 144-192. doi:10.1177/109442810142004
Ellwart, T., Konradt, U., & Rack, O. (2014). Team mental models of expertise loca-
tion: Validation of a field survey measure. Small Group Research, 45, 119-153.
doi:10.1177/1046496414521303
Espinosa, J., & Clark, M. A. (2014). Team knowledge representation: A network per-
spective. Human Factors, 56, 333-348. doi:10.1177/0018720813494093
Fullagar, C. J., & Egleston, D. O. (2008). Norming and performing: Using micro-
worlds to understand the relationship between team cohesiveness and perfor-
mance. Journal of Applied Social Psychology, 38, 2574-2593. doi:10.1111/
j.1559-1816.2008.00404
Funk, C. A., & Kulik, B. W. (2012). Happily ever after: Toward a theory of late
stage group performance. Group & Organization Management, 37, 36-66.
doi:10.1177/1059601111426008
Gibson, C. B., Randel, A. E., & Earley, P. (2000). Understanding group efficacy:
An empirical test of multiple assessment methods. Group & Organization
Management, 25, 67-97. doi:10.1177/1059601100251005
Gill, J. (2003). Hierarchical linear models. In K. Kempf-Leonard (Ed.), Encyclopedia
of social measurement (pp. 209-214). Amsterdam, The Netherlands: Elsevier.
Goddard, R. D. (2001). Collective efficacy: A neglected construct in the study of
schools and student achievement. Journal of Educational Psychology, 93, 467-
476. doi:10.1037/0022-0663.93.3.467
Goncalo, J. A., Polman, E., & Maslach, C. (2010). Can confidence come too soon?
Collective efficacy, conflict and group performance over time. Organizational
Behavior and Human Decision Processes, 113, 13-24. doi:10.1016/j.
obhdp.2010.05.001
Gonzales, A. L., Hancock, J. T., & Pennebaker, J. W. (2010). Language style match-
ing as a predictor of social dynamics in small groups. Communication Research,
37, 3-19. doi:10.1177/0093650209351468
Gully, S. M., Incalcaterra, K. A., Joshi, A., & Beaubien, J. (2002). A meta-analysis of
team-efficacy, potency, and performance: Interdependence and level of analysis
as moderators of observed relationships. Journal of Applied Psychology, 87, 819-
832. doi:10.1037/0021-9010.87.5.819
Harrison, D. A., & Klein, K. J. (2007). What’s the difference? Diversity constructs
as separation, variety, or disparity in organizations. Academy of Management
Review, 32, 1199-1228. doi:10.5465/AMR.2007.26586096
Heuze, J. P., Raimbault, N., & Fontayne, P. (2006). Relationships between cohesion,
collective efficacy and performance in professional basketball teams:
An examination of mediating effects. Journal of Sports Sciences, 24, 59-68.
doi:10.1080/02640410500127736
Hirak, R., Peng, A., Carmeli, A., & Schaubroeck, J. M. (2012). Linking leader inclu-
siveness to work unit performance: The importance of psychological safety
and learning from failures. Leadership Quarterly, 23, 107-117. doi:10.1016/j.
leaqua.2011.11.009
Hommes, J. J., Bossche, P. P., Grave, W. W., Bos, G. G., Schuwirth, L. L., &
Scherpbier, A. A. (2014). Understanding the effects of time on collaborative
learning processes in problem based learning: A mixed methods study. Advances
in Health Sciences Education. Advance online publication. doi:10.1007/s10459-
013-9487-z
Hornsey, M. J., Dwyer, L., & Oei, T. S. (2007). Beyond cohesiveness: Reconceptualizing
the link between group processes and outcomes in group psychotherapy. Small
Group Research, 38, 567-592. doi:10.1177/1046496407304336
Huang, R., Kahai, S., & Jestice, R. (2010). The contingent effects of leadership on
team collaboration in virtual teams. Computers in Human Behavior, 26, 1098-
1110. doi:10.1016/j.chb.2010.03.014
Huber, G. P., & Lewis, K. (2010). Cross-understanding: Implications for group cogni-
tion and performance. Academy of Management Review, 35, 6-26. doi:10.5465/
AMR.2010.45577787
Idris, M., Dollard, M. F., Coward, J., & Dormann, C. (2012). Psychosocial safety
climate: Conceptual distinctiveness and effect on job demands and worker psy-
chological health. Safety Science, 50, 19-28. doi:10.1016/j.ssci.2011.06.005
Kanawattanachai, P., & Yoo, Y. (2007). The impact of knowledge coordination on
virtual team performance over time. MIS Quarterly, 31, 783-808. doi:10.1108/
eb028933
Kennedy, D. M., & McComb, S. A. (2010). Merging internal and external pro-
cesses: Examining the mental model convergence process through team
communication. Theoretical Issues in Ergonomics Science, 11, 340-358.
doi:10.1080/14639221003729193
Klein, K., & Kozlowski, S. W. (2000). From micro to meso: Critical steps in concep-
tualizing and conducting multilevel research. Organizational Research Methods,
3, 211-236. doi:10.1177/109442810033001
Kozlowski, S. W. J., & Chao, G. T. (2012). The dynamics of emergence: Cognition
and cohesion in work teams. Managerial and Decision Economics, 33, 335-354.
doi:10.1002/mde.2552
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work
groups and teams. Psychological Science in the Public Interest, 7, 77-124.
doi:10.1111/j.1529-1006.2006.00030.x
Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and
research in organizations: Contextual, temporal, and emergent processes. In K.
J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods
in organizations: Foundations, extensions, and new directions (pp. 3-90). San
Francisco, CA: Jossey-Bass.
Kozlowski, S. W. J., Watola, D. J., Jensen, J. M., Kim, B. H., & Botero, I. C. (2009).
Developing adaptive teams: A theory of dynamic team leadership. In E. Salas, G.
F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations:
Cross-disciplinary perspectives and approaches (pp. 113-155). New York, NY:
Routledge.
Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly
reported cutoff criteria: What did they really say? Organizational Research
Methods, 9, 202-220. doi:10.1177/1094428105284919
Langan-Fox, J., Anglim, J., & Wilson, J. R. (2004). Mental models, team mental
models, and performance: Process, development, and future directions. Human
Factors and Ergonomics in Manufacturing & Service Industries, 14, 331-352.
doi:10.1002/hfm.20004
LeBreton, J. M., James, L. R., & Lindell, M. K. (2005). Recent issues regarding
rWG, r*WG, rWG(J), and r*WG(J). Organizational Research Methods, 8, 128-138.
doi:10.1177/1094428104272181
Lee, J., Zhang, Z., & Yin, H. (2011). A multilevel analysis of the impact of a
professional learning community, faculty trust in colleagues and collective
efficacy on teacher commitment to students. Teaching and Teacher Education,
27, 820-830. doi:10.1016/j.tate.2011.01.006
Lewis, K. (2003). Measuring transactive memory systems in the field: Scale
development and validation. Journal of Applied Psychology, 88, 587-604.
doi:10.1037/0021-9010.88.4.587
Lewis, K., & Herndon, B. (2011). Transactive memory systems: Current issues and
future research directions. Organization Science, 22, 1254-1265. doi:10.1287/
orsc.1110.0647
Li, J., & Roe, R. A. (2012). Introducing an intrateam longitudinal approach to the
study of team process dynamics. European Journal of Work & Organizational
Psychology, 21, 718-748. doi:10.1080/1359432X.2012.660749
Lusher, D., Kremer, P., & Robins, G. (2014). Cooperative and competitive
structures of trust relations in teams. Small Group Research, 45, 3-36.
doi:10.1177/1046496413510362
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework
and taxonomy of team processes. Academy of Management Review, 26, 356-376.
doi:10.2307/259182
Mathieu, J., Maynard, M. T., Rapp, T., & Gilson, L. (2008). Team effectiveness 1997-
2007: A review of recent advancements and a glimpse into the future. Journal of
Management, 34, 410-476. doi:10.1177/0149206308316061
May, D. R., Gilson, R. L., & Harter, L. M. (2004). The psychological conditions of
meaningfulness, safety, and availability and the engagement of the human spirit
at work. Journal of Occupational and Organizational Psychology, 77, 11-37.
doi:10.1348/096317904322915892
McComb, S., Kennedy, D., Perryman, R., Warner, N., & Letsky, M. (2010).
Temporal patterns of mental model convergence: Implications for distributed
teams interacting in electronic collaboration spaces. Human Factors, 52, 264-
281. doi:10.1177/0018720810370458
McKay, P. F., Avery, D. R., & Morris, M. A. (2009). A tale of two climates: Diversity
climate from subordinates’ and managers’ perspectives and their role in store
unit sales performance. Personnel Psychology, 62, 767-791. doi:10.1111/j.1744-
6570.2009.01157.x
Mesmer-Magnus, J. R., & DeChurch, L. A. (2009). Information sharing and team
performance: A meta-analysis. Journal of Applied Psychology, 94, 535-546.
doi:10.1037/a0013773
Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary
groups. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers
of theory and research (pp. 166-195). Thousand Oaks, CA: Sage.
Mohammed, S., Ferzandi, L., & Hamilton, K. (2010). Metaphor no more: A 15-year
review of the team mental model construct. Journal of Management, 36, 876-
910. doi:10.1177/0149206309356804
Mohammed, S., Klimoski, R., & Rentsch, J. R. (2000). The measurement of team
mental models: We have no shared schema. Organizational Research Methods,
3, 123-165. doi:10.1177/109442810032001
Molleman, E. (2009). Attitudes toward flexibility: The role of task
characteristics. Group & Organization Management, 34, 241-268.
doi:10.1177/1059601108330090
Mullen, B., & Copper, C. (1994). The relation between group cohesiveness and
performance: An integration. Psychological Bulletin, 115, 210-227.
doi:10.1037/0033-2909.115.2.210
Murase, T., Doty, D., Wax, A., DeChurch, L. A., & Contractor, N. S. (2012).
Teams are changing: Time to “think networks.” Industrial and Organizational
Psychology, 5, 41-44. doi:10.1111/j.1754-9434.2011.01402.x
Murphy, K. R., Cronin, B. E., & Tam, A. P. (2003). Controversy and consensus
regarding the use of cognitive ability testing in organizations. Journal of Applied
Psychology, 88, 660-671. doi:10.1037/0021-9010.88.4.660
Murrell, A. J., & Gaertner, S. L. (1992). Cohesion and sport team effectiveness: The
benefit of a common group identity. Journal of Sport & Social Issues, 16, 1-14.
doi:10.1177/019372359201600101
Myers, N. D., Payment, C. A., & Feltz, D. L. (2004). Reciprocal relationships
between collective efficacy and team performance in women’s ice hockey. Group
Dynamics: Theory, Research, and Practice, 8, 182-195. doi:10.1037/1089-
2699.8.3.182
Newman, D. A., & Sin, H.-P. (2009). How do missing data bias estimates of
within-group agreement? Sensitivity of SDWG, CVWG, rWG(J), r*WG(J), and ICC
to systematic nonresponse. Organizational Research Methods, 12, 113-147.
doi:10.1177/1094428106298969
Ng, K., & Van Dyne, L. (2005). Antecedents and performance consequences of
helping behavior in work groups: A multilevel analysis. Group & Organization
Management, 30, 514-540. doi:10.1177/1059601104269107
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York, NY:
McGraw-Hill.
Oliver, L. W., Harman, J., Hoover, E., Hayes, S. M., & Pandhi, N. A. (1999). A
quantitative integration of the military cohesion literature. Military
Psychology, 11, 57-83. doi:10.1207/s15327876mp1101_4
Owens, J. E. (2003). Part 1: Cohesion: Explaining party cohesion and discipline
in democratic legislatures: Purposiveness and contexts. Journal of Legislative
Studies, 9, 12-40. doi:10.1080/1357233042000306236
Pain, M. A., & Harwood, C. G. (2008). The performance environment of the England
youth soccer teams: A quantitative investigation. Journal of Sports Sciences, 26,
1157-1169. doi:10.1080/02640410802101835
Quintane, E., Pattison, P. E., Robins, G. L., & Mol, J. M. (2013). Short- and long-term
stability in organizational networks: Temporal structures of project teams. Social
Networks, 35, 528-540. doi:10.1016/j.socnet.2013.07.001
Scherbaum, C. A., & Ferreter, J. M. (2009). Estimating statistical power and
required sample sizes for organizational research using multilevel modeling.
Organizational Research Methods, 12, 347-367. doi:10.1177/1094428107308906
Schoorman, F. D., Mayer, R. C., & Davis, J. H. (1996). Organizational trust:
Philosophical perspectives and conceptual definitions. Academy of Management
Review, 21, 337-340. doi:10.5465/AMR.1996.27003218
Sherrieb, K., Norris, F. H., & Galea, S. (2010). Measuring capacities for community
resilience. Social Indicators Research, 99, 227-247. doi:10.1007/s11205-010-
9576-9
Siebold, G. L. (2006). Military group cohesion. In T. W. Britt, C. A. Castro, & A. B.
Adler (Eds.), Military life: The psychology of serving in peace and combat (pp.
185-201). Westport, CT: Praeger.
Smith-Jentsch, K. A., Cannon-Bowers, J. A., Tannenbaum, S. I., & Salas, E. (2008).
Guided team self-correction: Impacts on team mental models, processes, and
effectiveness. Small Group Research, 39, 303-327. doi:10.1177/1046496408317794
Smith-Jentsch, K. A., Kraiger, K., Cannon-Bowers, J. A., & Salas, E. (2009). Do
familiar teammates request and accept more backup? Transactive memory in air
traffic control. Human Factors, 51, 181-192. doi:10.1177/0018720809335367
Solanas, A., Manolov, R., Leiva, D., & Andres, A. (2013). A measure of group
dissimilarity for psychological attributes. Psicologica, 32, 343-364.
Sorensen, L. J., & Stanton, N. A. (2011). Is SA shared or distributed in team work?
An exploratory study in an intelligence analysis task. International Journal of
Industrial Ergonomics, 41, 677-687. doi:10.1016/j.ergon.2011.08.001
Stajkovic, A. D., Lee, D., & Nyberg, A. J. (2009). Collective efficacy, group
potency, and group performance: Meta-analyses of their relationships, and
test of a mediation model. Journal of Applied Psychology, 94, 814-828.
doi:10.1037/a0015659
Staples, D. S., & Webster, J. (2008). Exploring the effects of trust, task
interdependence and virtualness on knowledge sharing in teams. Information
Systems Journal, 18, 617-640. doi:10.1111/j.1365-2575.2007.00244.x
Steiner, I. (1972). Group processes and productivity. New York, NY: Academic
Press.
Suddaby, R. (2010). Challenges for institutional theory. Journal of Management
Inquiry, 19, 14-20. doi:10.1177/1056492609347564
Susskind, A. M., Kacmar, K. M., & Borchgrevink, C. P. (2003). Customer service
providers’ attitudes relating to customer service and customer satisfaction in
the customer-server exchange. Journal of Applied Psychology, 88, 179-187.
doi:10.1037/0021-9010.88.1.179
Swaab, R. I., Postmes, T., Neijens, P., Kiers, M. H., & Dumay, A. M. (2002).
Multiparty negotiation support: The role of visualization’s influence on the
development of shared mental models. Journal of Management Information
Systems, 19, 129-150.
Tendulkar, S. A., Koenen, K. C., Dunn, E. C., Buka, S., & Subramanian, S. V. (2012).
Neighborhood influences on perceived social support among parents: Findings
from the project on human development in Chicago neighborhoods. PLoS ONE,
7(4), 1-9. doi:10.1371/journal.pone.0034235
Author Biographies
Chris W. Coultas is a research and consulting psychologist at Leadership Worth
Following in Dallas, Texas, USA, where he conducts research on coaching
effectiveness and leadership development. He received his doctorate in
industrial/organizational psychology from the University of Central Florida in 2014.
Tripp Driskell is a research scientist at Florida Maxima Corporation in Orlando,
Florida, USA. He received his doctorate in human factors psychology from the
University of Central Florida in 2013.
C. Shawn Burke is an associate professor (research) at the Institute for Simulation
and Training of the University of Central Florida, USA. Her expertise includes teams,
leadership, team adaptability, team training, measurement, evaluation, and team
effectiveness. She earned her doctorate in industrial/organizational psychology from
George Mason University.
Eduardo Salas is Pegasus & Trustee Chair Professor of psychology at the University
of Central Florida, USA, where he also holds an appointment as program director for
the Human Systems Integration Research Department at the Institute for Simulation
and Training. He earned his doctorate in industrial/organizational psychology from
Old Dominion University.