Article

Small Group Research
2014, Vol. 45(6) 671–703
© The Author(s) 2014
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1046496414552285
sgr.sagepub.com

A Conceptual Review of Emergent State Measurement: Current Problems, Future Solutions

Chris W. Coultas1, Tripp Driskell2, C. Shawn Burke1, and Eduardo Salas1

Abstract
Team research increasingly incorporates emergent states as an integral
mediator between team inputs and outcomes. In conjunction with this, we
have witnessed a proliferation and fragmentation of measurement techniques
associated with emergent states. This inconsistency in measurement presents
a problem for scientists and practitioners alike. For the scientist, it becomes
difficult to better understand the nature and effects of various emergent
states on team processes and outcomes. For the practitioner, it complicates
the process of measurement development, selection, and implementation.
To address these issues, we review the literature on emergent states focusing
on various measurement strategies, to better unpack best practices. In so
doing, we highlight existing research that suggests innovative solutions to the
conceptual, methodological, and logistical problems that consistently plague
emergent state research. Our aim is to enhance emergent state theory by
applying psychometric principles to the measurement techniques associated
with them.

1University of Central Florida, Orlando, USA
2The Florida Maxima Corporation, Orlando, USA

Corresponding Author:
Chris W. Coultas, University of Central Florida, 4000 Central Florida Blvd., Psychology Bldg.
99, Ste. 320, Orlando, FL 32816, USA.
Email: ccoultas@ist.ucf.edu

This article is part of the special issue: 2014 Annual Review Issue, Small Group Research,
Volume 45(6).

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016



Keywords
cohesion, collective efficacy, multilevel, team cognition, transactive memory
systems

The past 30 years have witnessed a surge in team research (Wuchty, Jones, &
Uzzi, 2007). Accordingly, there is now a large body of evidence that points to
the critical drivers of team performance (e.g., Bell, 2007; De Dreu &
Weingart, 2003; Gully, Incalcaterra, Joshi, & Beaubien, 2002). While initial
research focused on identifying antecedents to team performance (i.e., team,
individual, and task characteristics), more recent research has begun to
unpack the black box within input–mediator–output (IMO) models of team
performance (Mathieu, Maynard, Rapp, & Gilson, 2008), with emergent
states theorized to be a primary explanatory variable mediating the relation-
ship between team inputs and outcomes. Over the past decade, emergent
states—relatively dynamic, collective-level characteristics that “vary as a
function of team context, inputs, processes, and outcomes” (Marks, Mathieu,
& Zaccaro, 2001, p. 357)—have been consistently demonstrated to influence
desirable team outcomes (e.g., Kozlowski & Ilgen, 2006; Mathieu et al.,
2008; Rico, Sánchez-Manzanares, Gil, & Gibson, 2010). Given the impor-
tance of emergent states across many disciplines (e.g., organizational, sports,
military), assessment of these constructs is paramount. However, emergent
state measurement suffers from fragmented definitions, operationalizations,
aggregation techniques, and disjointed methodologies (cf. Lewis & Herndon,
2011; Mohammed, Klimoski, & Rentsch, 2000). This article reviews com-
mon emergent state measurement practices, gauging adherence to psycho-
metric best practices (e.g., Nunnally & Bernstein, 1994), which may have
implications for the internal validity and generalizability of emergent state
research. Issues addressed will relate to construct clarity, measure develop-
ment, multilevel aggregation, and measurement over time. We will highlight
instances where research has fallen short, and where it has made significant
advancements. Ultimately, this targeted conceptual review shall advance
emergent state research by providing practical and insightful guidelines to
enhance researchers’ and practitioners’ ability to assess and address team
emergent states.

Research Methodology
To document the emergent state measurement science, the authors conducted
a multipronged search of several databases (i.e., PsycINFO, PsycARTICLES,
Business Source Premier, Military and Government Collection, ERIC, MEDLINE, Human Resources Abstracts). We paired search terms relating to different emergent states (i.e., transactive memory, mental model, situation(al)
awareness, shared cognition, team knowledge, psychological safety, cohe-
sion, shared/group trust, collective/group efficacy), terms for measurement
(i.e., measurement, assessment, monitoring), and terms indicating a collec-
tive level of analysis (i.e., team, collective, group, multilevel). To facilitate
relevance and parsimony, we limited search results to those published from
the year 2000 and on. This search strategy yielded 703 unique articles. We
leveraged insights from subject matter experts in teams and organizational
research to eliminate some articles and identify others we had missed. Articles
that were deemed to not include relevant information (e.g., unclear explica-
tion of measurement practices, did not study the focal constructs, or used
non-adult or individual-level samples) were removed. These articles were
reviewed by the authors and used for the basis of claims made throughout this
article. Finally, to further enhance relevance, we conducted targeted searches
to identify studies that used innovative solutions to address problems within
emergent state measurement practices. Ultimately, we reviewed 259 articles;
due to the conceptual nature of the review, and for reasons of parsimony, only
a subset of these articles is directly referenced in the article. We reference
articles that engage in good examples of emergent state measurement prac-
tices, and draw special attention to work that offers innovative solutions to
particularly troublesome measurement and research conundrums. We review
articles with an eye toward internal validity and generalizability (i.e., good
methods and research) rather than focusing on specific metrics (e.g., correla-
tion/regression coefficients, consistency/agreement indices)—as specific
metrics may be suppressed or inflated depending on the nature of the meth-
odological violation (Cooke, Gorman, & Winner, 2007; Nunnally &
Bernstein, 1994). A full list of reviewed articles is available from the first
author.

Issues in Emergent State Measurement


Clearly Defining the Construct
Lack of construct clarity was identified as a common problem. As Suddaby
(2010) noted, “perhaps the most common definitional issue in manu-
scripts is that others simply fail to define their constructs” (p. 347).
Conceptually ambiguous, multifaceted, cross-disciplinary constructs—as
are common with emergent state research (Edwards, 2001; Hornsey,
Dwyer, & Oei, 2007)—make systematic research difficult. Indeed, the
inability to converge on consistent definitions has led some researchers to suggest that some constructs be abandoned altogether (Hornsey et al.,
2007). Despite this, advancements have been made recently (e.g.,
Mohammed, Ferzandi, & Hamilton, 2010; Mohammed et al., 2000), but
we see room for much improvement across all emergent states. Lack of
definitional clarity has been identified as a significant impediment to
emergent state measurement (Mohammed et al., 2010). To effectively
measure constructs, all definitions/operationalizations must be enumerated. This is particularly germane for emergent states, given the only fairly recent differentiation of emergent states from team processes (Marks
et al., 2001). Transactive memory theory, for instance, originated from
social psychology and intimate dyadic relationships (Wegner, 1987); it
has since focused on “the cognitive processes in groups, the factors that
affect those processes, and the group performance outcomes that result”
(Lewis & Herndon, 2011, p. 1254, emphasis added). Similarly, cohesion
has been studied in sports teams (Carron, Widmeyer, & Brawley, 1985;
Pain & Harwood, 2008), virtual teams (Huang, Kahai, & Jestice, 2010),
military teams (Oliver, Harman, Hoover, Hayes, & Pandhi, 1999), and
even political parties (Owens, 2003; Rice, 1925). However, more modern
conceptualizations define cohesion as an affective attraction to the task
and/or group, shifting from initial behavioral conceptualizations (e.g.,
degree to which political party members voted collectively, rather than
individually).
Slight definitional shifts are important to understand when operationaliz-
ing and theorizing emergent states; beyond simply confusing researchers and
practitioners, definitional nuances can be impactful in numerous ways. First,
definitions may range in degree of specificity. For example, collective effi-
cacy is often operationalized either as beliefs regarding a collective’s ability
to either meet an overarching goal (e.g., “defeat the enemy”; Chen, Gully, &
Eden, 2001), or multiple specific goals (e.g., “communicate effectively,”
“minimize unnecessary casualties”; Heuze, Raimbault, & Fontayne, 2006;
Myers, Payment, & Feltz, 2004). Despite these differences (which have been
shown to be empirically meaningful; see Stajkovic, Lee, & Nyberg, 2009),
both are often considered the same construct. Without considering divergent
emergent state definitions/operationalizations, researchers risk developing or
selecting unsuitable measures. Second, related constructs may not be appro-
priately distinguished. For example, transactive memory systems (TMS),
shared mental models (SMM), and team situation awareness (TSA) have
been positioned under the umbrella of team cognition theory (Cooke,
Gorman, Myers, & Duran, 2013; Cooke et al., 2007). In addition, group
learning, strategic consensus, cross-understanding, and shared task understanding may also fall under this purview; however, the exact relationships between these myriad concepts remain unclear. Consider TMS compared
with cross-understanding. Although theoretically and operationally distinct,
Huber and Lewis (2010) admitted that they “are similar in that they are both
composed in some way of members’ understandings” (p. 9). If a researcher
uses superficial definitions of TMS—who knows what—these concepts
would be very difficult to differentiate. Similar issues are apparent for non-
cognitive emergent states. For example, reviewing the literature on team inti-
macy and cohesion, Rosh, Offermann, and Van Diest (2012) noted that these
have been often confused, merged, and used interchangeably. Ultimately,
misappropriated definitions may yield flawed research and misinterpreted
results, as previous reviews of cohesion (Hornsey et al., 2007) and TMS
(Lewis & Herndon, 2011) have pointed out.
Nonetheless, progress has been made toward greater construct clarity in
numerous emergent states. Meta-analysis facilitates greater construct clarity
despite multiple different definitions. For example, while trust is typically
defined in terms of its antecedents (Adams, Bruyn, & Chung-Yan, 2004;
Schoorman, Mayer, & Davis, 1996), it has been operationalized with 3
(Rempel, Holmes, & Zanna, 1985), 4 (Schoorman et al., 1996), or even 10
(Butler, 1991) factors. Despite this variation, Colquitt, Scott, and LePine
(2007) were able to meta-analyze the literature because researchers explicitly
defined/operationalized trust, providing strong support for the validity of a
4-factor structure. However, meta-analyses are only as accurate as the data
they integrate. Considering the recent uptick in meta-analytic research (e.g., Gully et al., 2002; Mesmer-Magnus & DeChurch, 2009; Stajkovic et al., 2009), this can be particularly problematic.

Recommendation 1: Develop a complete understanding of an emergent state's construct space by better understanding its evolution over time and
across disciplines and, in practice, clearly state which operationalization
of an emergent state is being used, including the specific facets/dimen-
sions that are important to the research.

Generating Items and Collecting Data


Generating/selecting a measurement tool and collecting data are no easy
tasks. However, these tasks can be made easier—at least more efficient—if
several factors beyond simple construct clarity are taken into consideration.
These factors largely relate to the contextual nature of emergent states, whose meaning can shift depending on (a) the type of entity that the construct references, (b) the size of the collective, and (c) the type of collective task.

Construct referent. When developing emergent state measures, the referent
should be clarified, because construct meaning can vary accordingly (e.g., by
tasks, groups, people). For instance, cohesion and psychological safety within
a long-standing group may mean qualitatively different things than in larger,
more nebulous groups (Edmondson, 2004). Similarly, trust operates differ-
ently when referring to different trustees; research has found meaningfully
different trust effects when accounting for the referent (Colquitt et al., 2007;
Robertson, Gockel, & Brauner, 2013). In a similar manner, the referent
should match the intended level of performance measurement. That is, when
examining the relationship between collective efficacy and performance in
an eight-person military squad, both collective efficacy and performance
should be measured at the team level. This is evidenced by Gully et al.’s
(2002) meta-analysis, which found stronger relationships between both team
efficacy and potency and performance at the team level as opposed to the
individual level. That is, when measuring emergent states as explicitly team
level (with team as the referent), these perceptions tend to predict team-level
performance more so than individual-level performance. Some researchers
even go so far as to suggest that emergent states, specifically team cognition,
should only be measured at the team level and, contrary to the majority of
past research, in real time from team interactions (Cooke et al., 2013).

Recommendation 2: Select measurement strategies that are most relevant for the entity to which the construct is referring and, if necessary, modify
measurement strategies such that item wordings are relevant for the even-
tual level of analysis.

Collective size. Traditional guidance within the literature suggests that researchers should avoid common method bias by triangulating measurement. Although this is foundational science, adherence may be difficult
depending on context, such as the size of the collective. Our review revealed
variability in the formats available for measuring cognitive states. However,
these measurement tools are typically time-consuming to develop and imple-
ment. For example, measurement complexity increases as the size of the col-
lective to be measured increases. In larger collectives, methods that require
more laborious data aggregation (e.g., establishing consensus via intraclass
correlation [ICC], rwg, rwg(j)) are less practical. Moreover, the size of the col-
lective can also affect data collection. Austin’s (2003) measure of TMS, for
instance, is constructed on a matrix comprising team size and the number of
skills identified. For example, a team of 4 members and 4 skills requires 16
ratings per member, whereas a team of 25 members and 4 skills requires 100
ratings per member. There is less variation and difficulty associated with non-cognitive emergent states; however, development of these measures can
be time-intensive due to their task-specific nature. In short, measurement
selection depends in great part on the size of the collective being measured: methods that are easier to develop, distribute, complete, and assess (e.g., questionnaires, scenario-based measures) are suited to both larger and smaller groups, whereas more cumbersome methods (e.g., observations, card-sorting) are better suited to smaller groups. Difficulties associated with emergent state measurement have driven the development of innovative,
non-obtrusive measures such as social network analysis (SNA), sociometric
badges, vocal recognition, content analysis, and archival data analysis. For
example, SNA has recently been used to measure SMM (Avnet & Weigel,
2013) and cohesion (Wise, 2014), while content analytic approaches have
been used to measure cohesion (Gonzales, Hancock, & Pennebaker, 2010)
and collective cognition (Clariana & Wallace, 2007).
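To make the scaling problem concrete, the rating burden of a matrix-based measure such as Austin's (2003) can be sketched in a few lines of Python (the function names are ours, purely for illustration):

```python
# Illustrative sketch of the rating burden in a matrix-based TMS measure,
# where each member rates every member on every identified skill.
def ratings_per_member(team_size: int, n_skills: int) -> int:
    return team_size * n_skills

def total_ratings(team_size: int, n_skills: int) -> int:
    # Every member completes the full member-by-skill matrix.
    return team_size * ratings_per_member(team_size, n_skills)

print(ratings_per_member(4, 4))   # 16 ratings per member
print(ratings_per_member(25, 4))  # 100 ratings per member
print(total_ratings(25, 4))       # 2,500 ratings across the whole team
```

The quadratic growth in total ratings illustrates why such methods quickly become impractical as the collective grows.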

Recommendation 3: Avoid common method bias and minimize time requirements for measurement by leveraging multiple measurement strategies, including non-obtrusive measurement strategies.

Differences in task. Beyond demonstrating alignment between referent and intended performance source, emergent states with significant task implications should be measured such that unique task components are reflected (cf.
Bartram, 2005). Collective efficacy, for example, is often defined as collec-
tive belief that specific levels and components of task performance can be
attained (Blecharz et al., 2014; Heuze et al., 2006; Stajkovic et al., 2009).
Moreover, group cohesion partially depends on the extent to which group
members are attracted to specific group tasks (Carron & Brawley, 2000; Mul-
len & Copper, 1994). Teams can develop mental models about specific sub-
tasks within a larger performance environment (McComb, Kennedy,
Perryman, Warner, & Letsky, 2010). Accordingly, team task analyses should
be conducted prior to emergent state measurement (Mohammed et al., 2010),
making emergent state measurement relatively context-specific, and not eas-
ily adaptable to other domains (or even different tasks in similar domains).
However, there have been steps to mitigate this difficulty. Webber, Chen,
Payne, Marsh, and Zaccaro (2000) developed and validated a measure of
strategic team mental models that can be easily adapted to construct a generic
measure of team mental models.
Team task may also guide appropriate measurement technique. For exam-
ple, team tasks may be characterized by declarative, procedural, and/or stra-
tegic knowledge. Mohammed and colleagues (2010) have shown recently
that certain SMM measurement techniques are more effective for assessing different types of knowledge content (Cannon-Bowers, Tannenbaum, Salas,
& Volpe, 1995; Webber et al., 2000). In addition, DeChurch and Mesmer-
Magnus (2010a) demonstrated that compositional (e.g., SMM) and compila-
tional cognition (e.g., transactive memory) were differently predictive of
performance depending on team task type and emergent state conceptualiza-
tion. Relatedly, task interdependence affects how emergent states operate.
For example, stronger performance relationships have been found between
cohesion (Barrick, Bradley, Kristof-Brown, & Colbert, 2007) and team effi-
cacy (Gully et al., 2002) when interdependence was high. Staples and Webster
(2008) demonstrated a stronger trust–performance relationship when interde-
pendence was low. DeChurch and Mesmer-Magnus (2010b) found that mod-
erate and high task interdependence benefited compilational cognition (e.g.,
transactive memory), but the cognition–performance relationship was stron-
ger for compositional cognition (e.g., SMM) when interdependence was
moderate as opposed to high. Consequently, researchers and practitioners
should pay close attention to the type of task(s) and the level of task interde-
pendence under investigation because this can serve as a decision aid when
selecting emergent state measures.

Recommendation 4: Identify all performance components of the collective tasks, determine why an emergent state might influence these specific
components, and develop/select measures that are theoretically linked to
these components.

Aggregating Measures to the Collective


Having addressed issues of construct clarity and operationalization, the
researcher must consider questions of aggregation and level of analysis.
Assuming the construct is truly group level, the researcher must still deter-
mine how it should be conceptualized and analyzed at the group level. Various
researchers (Chan, 1998; Chen, Bliese, & Mathieu, 2005; Kozlowski &
Chao, 2012; Kozlowski & Klein, 2000) have discussed theoretical elements
and best practices for measuring and modeling multilevel constructs.
Although a complete review of the tenets of multilevel theory is beyond the
scope of this article, a brief synopsis is appropriate. One of the most basic
tenets is that multilevel constructs (of which emergent states are a class) must
have similar manifestations at individual and collective levels of analysis,
though this similarity may range from loosely metaphoric to essentially iden-
tical (Chen et al., 2005). Related to this is whether the emergent state con-
struct should be conceptualized as shared or configural. Shared and configural
properties both emerge from individual team members, though shared constructs are similarly perceived by all members while configural properties
are unshared. Shared constructs are operationalized at the group level through
mean or average levels, while configural conceptualizations are concerned
with dispersion and structure (e.g., dispersion, patterns; Kozlowski & Klein,
2000). With these questions in mind, we focus this portion of our review on
current and future issues in emergent state aggregation.

Shared conceptualizations. In our review, we noticed that the overwhelming majority of non-cognitive emergent states were conceptualized as shared.
That is, individual perceptions were aggregated to the collective level using
additive or mean models (and typically, checking for sharedness with rwg and/
or ICC indices). This is common practice when measuring and aggregating
the majority of non-cognitive emergent states such as trust (Burke, Sims,
Lazzara, & Salas, 2007), collective efficacy (Stajkovic et al., 2009), psycho-
logical safety (Edmondson, 1999; May, Gilson, & Harter, 2004), and team
climate (M. Baer & Frese, 2003; Bain, Mann, & Pirola-Merlo, 2001). Certain
aspects of cognitive emergent states—such as team mental model (TMM)
accuracy (Smith-Jentsch, Cannon-Bowers, Tannenbaum, & Salas, 2008) and
TMS specialization, credibility, and coordination (Lewis, 2003)—have been
assessed with shared/compositional models by averaging the degree to which
an individual team member’s mental model overlaps with that of an expert’s.
Overall, shared models are most appropriate when (a) units hold relatively
homogeneous views on the construct of interest, (b) there are no substantial
subgroups/faultlines, and (c) the construct of interest has little to no meaning
at the dyadic level (Chan, 1998; Cole, Bedeian, Hirschfeld, & Vogel, 2011;
Kozlowski & Klein, 2000). This emphasis on sharedness when modeling
emergent states represents a continuing trend noted by researchers through-
out at least the past decade (Chan, 1998; Cole et al., 2011; Klein & Kozlowski,
2000). Conceptual advances have been made in the past decade, as research-
ers and theoreticians discuss issues of temporality (Kozlowski & Chao,
2012), agreement indices (Lance, Butts, & Michels, 2006), and accounting
for level and sharedness simultaneously (Cole et al., 2011), yet we did not see
much research actually applying this work.
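As an illustration of the shared (compositional) approach, the sketch below is a minimal implementation of ICC(1) from a one-way analysis of variance, one common statistic used to justify mean aggregation; it is our own simplification and assumes equal team sizes:

```python
# Hedged sketch (our own minimal implementation, assuming equal team sizes):
# ICC(1) from a one-way ANOVA, ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW).
def icc1(teams):
    """teams: list of equal-sized lists of individual members' ratings."""
    k = len(teams[0])                       # members per team
    n = len(teams)                          # number of teams
    grand = sum(sum(t) for t in teams) / (n * k)
    means = [sum(t) / k for t in teams]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for t, m in zip(teams, means) for x in t) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect within-team consensus plus between-team differences -> ICC(1) = 1.0
print(icc1([[2, 2, 2], [4, 4, 4], [6, 6, 6]]))  # 1.0
```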
When modeling shared emergent states, researchers typically justify
aggregation by assessing within-team agreement. Recently, new agreement
indices have been proposed, such as within-group agreement (awg, Brown &
Hauenstein, 2005), absolute deviation (ADm, Cohen, Doveh, & Nahum-
Shani, 2009), team-specific agreement (rrg, Biemann, Ellwart, & Rack, 2014),
and group dissimilarity (Solanas, Manolov, Leiva, & Andres, 2013). One
study even used Cohen’s kappa as an agreement index (Rau, 2006). Roberson,
Sturman, and Simons (2007) compared multiple agreement indices, finding
that each performed differently depending on the criterion. Despite the availability of these diverse agreement indices, researchers typically justify aggregation simply by appealing to rwg cutoffs. Indeed, this defaulting to
arbitrary cutoffs continues to occur, despite recent advances in this very area
that address some of the most problematic issues (Lance et al., 2006). Rather,
the more appropriate practice would be to use significance testing to deter-
mine 95% critical values for setting agreement cutoffs (Dunlap, Burke, &
Smith-Crowe, 2003; Lebreton, James, & Lindell, 2005). Despite this, of the
259 emergent state articles reviewed, only 1 (Wholey et al., 2011) mentioned
using the critical value approach to determine cutoff scores. This is not to
claim that the critical value approach has not made an impact—a quick
Google Scholar search shows 135 references to the Dunlap and colleagues
article; but the majority of empirical studies were on organizational climate
(e.g., McKay, Avery, & Morris, 2009), team processes (e.g., Vecchio, Justin,
& Pearce, 2010), or static team characteristics (Murphy, Cronin, & Tam,
2003; Van Mierlo, Rutte, Vermunt, Kompier, & Doorewaard, 2006).
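To illustrate the critical value approach, the sketch below computes single-item rwg and derives a 95% chance benchmark by simulation; the simulation details are our own simplification of the general idea, not a reproduction of any published procedure:

```python
import random

# Hedged sketch (our own simplification): single-item rwg, with a simulated
# 95% critical value under random responding, in the spirit of the critical
# value approach, rather than an arbitrary fixed cutoff such as .70.
def rwg(ratings, scale_points):
    """rwg = 1 - (observed variance / variance of a uniform null)."""
    n = len(ratings)
    mean = sum(ratings) / n
    s2 = sum((x - mean) ** 2 for x in ratings) / (n - 1)
    sigma2_e = (scale_points ** 2 - 1) / 12.0   # uniform-null variance
    return 1 - s2 / sigma2_e

def critical_rwg(group_size, scale_points, sims=20000, seed=1):
    """95th percentile of rwg when members respond completely at random."""
    rng = random.Random(seed)
    values = sorted(
        rwg([rng.randint(1, scale_points) for _ in range(group_size)], scale_points)
        for _ in range(sims)
    )
    return values[int(0.95 * sims)]

team = [4, 4, 5, 4, 5]                    # five members, one 5-point item
print(rwg(team, 5))                       # ~0.85: high observed agreement
print(rwg(team, 5) > critical_rwg(5, 5))  # True: exceeds the chance benchmark
```

Agreement is then judged against what random responding could produce for that group size and scale, rather than against a one-size-fits-all cutoff.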

Recommendation 5a: Select theoretically appropriate agreement indices; abandon using arbitrary cutoff scores in favor of critical value significance
tests.

Several articles used (or appeared to use) strictly additive models (i.e.,
aggregating to the collective with mean indices without referencing any check
for agreement). Additive models are appropriate if within-group variance is
irrelevant (Chan, 1998; Kozlowski & Klein, 2000); this may be the case in
loosely interdependent collectives (Molleman, 2009; Saavedra, Earley, & Van
Dyne, 1993; Steiner, 1972). Most emergent state research that does not report
agreement studied loose collectives (e.g., neighborhoods) where there was no
specific collaborative task (e.g., Sherrieb, Norris, & Galea, 2010; Tendulkar,
Koenen, Dunn, Buka, & Subramanian, 2012). However, we also noticed sev-
eral studies that failed to mention agreement indices even though they studied
collectives with highly interdependent structures and tasks (e.g., non-profit
boards, various therapy groups). Given that common practice is to clearly explicate the rationale and method for aggregation, future researchers should clearly point out why they chose to use an additive model, especially when the task is somewhat interdependent.

Recommendation 5b: When aggregating emergent states using mean indices, report agreement, unless studying very loose collectives; furthermore, when not reporting agreement, clarify why agreement is not theoretically necessary.

Configural conceptualizations. Configural conceptualizations are important to
consider when the presence of subgroups or any other systematic variations
in related constructs elicit non-normal (e.g., bimodal) distributions of the focal construct (cf. Alexandrov, Babakus, & Yavas, 2007; Cole et al., 2011; Murrell
& Gaertner, 1992). Kozlowski and Klein (2000) noted that “compilation-
based emergent processes are relatively little explored from a multilevel per-
spective” (p. 18). Nearly 15 years later, this is mostly still true. Notable
exceptions include research on TMM similarity and TMS, which are consis-
tently modeled configurally (e.g., Austin, 2003; Ellwart, Konradt, & Rack,
2014; Smith-Jentsch, Kraiger, Cannon-Bowers, & Salas, 2009; Swaab, Post-
mes, Neijens, Kiers, & Dumay, 2002). This is not entirely surprising, as
TMM similarity and TMS dispersion/patterning are inherently meaningful,
not just a prerequisite for aggregation (Chan, 1998; Mohammed et al., 2010).
Beyond TMM similarity, other emergent states may be appropriately mod-
eled through dispersion under certain conditions, though these seem to be
less frequently researched (e.g., Goddard, 2001; Sorensen & Stanton, 2011).
Emergent states such as cohesion, psychological safety, and trust seem to be
underrepresented with configural conceptualizations, possibly because the-
ory tends to label some emergent states as inherently compositional and oth-
ers as compilational (Kozlowski & Chao, 2012). Indeed, it may be time for
researchers to incorporate configural indices rather than simply discarding
low-agreement teams, as is fairly common practice (e.g., Aryee, Chen, & Budhwar, 2004; Rentsch & Klimoski, 2001; Susskind, Kacmar, & Borchgrevink,
2003). Unfortunately, this solution comes with problems of its own (Carron
et al., 2004; Cole et al., 2011). By focusing only on teams that achieve a cer-
tain level of sharedness, researchers assume, for example, that trust does not
exist in teams that lack this sharedness, even though it may be present within
subgroups without there being a consistent level of generic team trust. In
some studies we reviewed, researchers acknowledged low levels of agree-
ment, but opted to keep all teams in, explaining that removing teams reduces
power. This is preferable, but when significant disagreement exists, research-
ers should consider incorporating configural conceptualizations into their
research models.
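As a minimal illustration of reporting level and dispersion together, the sketch below pairs a team mean with the average deviation about that mean; a consensual team and a polarized team can have similar means while differing sharply in dispersion:

```python
# Hedged sketch: pair a team's mean with a dispersion index (here the average
# deviation about the team mean) instead of discarding low-agreement teams.
def team_mean(ratings):
    return sum(ratings) / len(ratings)

def average_deviation(ratings):
    """Average absolute deviation about the team mean; higher = more disagreement."""
    m = team_mean(ratings)
    return sum(abs(x - m) for x in ratings) / len(ratings)

consensual_team = [4, 4, 4, 4]   # genuinely shared perception
polarized_team = [1, 1, 5, 5]    # two subgroups; trust exists only within them

print(team_mean(consensual_team), average_deviation(consensual_team))  # 4.0 0.0
print(team_mean(polarized_team), average_deviation(polarized_team))    # 3.0 2.0
```

Reporting both indices distinguishes a genuinely shared perception from a polarized one that a mean alone would obscure.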
An anonymous reviewer noted that one reason for this underrepresentation
may be that dispersion indices are more heavily impacted by missing data than
are means. Newman and Sin (2009) suggested several strategies for correcting
measures of within-team agreement when there is missing data. We refer the
reader to their research for an in-depth discussion and formulae for correcting
for missing data. Cole and colleagues (2011) argued that researchers should
begin including both mean and dispersion indices in multilevel models. We
echo this sentiment by Cole and colleagues (and refer the reader to their work for specifically how to include both). Finally, we emphasize that a theoretically appropriate dispersion index should be selected (see above).

Recommendation 6a: Incorporate theoretically appropriate configural indices when low agreement makes it necessary; use correction formulae
when missing data are a problem.

Shared or configural? Despite the widespread use of shared/consensus aggregation techniques, researchers emphasize that this approach should not be
utilized without a strong theoretical rationale (Burke et al., 2007; Chan, 1998;
Dion, 2000). Deciding when to conceptualize an emergent state as shared or
configural is a difficult task and there does not seem to have been significant
advances made in the field (Burke et al., 2007; Chan, 1998; Dion, 2000; Gib-
son, Randel, & Earley, 2000). However, our review highlights factors—
including task characteristics, team structure, and construct effects—that
recent research has identified as being important for multilevel modeling.
Gully and colleagues’ (2002) meta-analysis found that collective efficacy
was more strongly related to performance when teams were more interdependent, suggesting that mean aggregation may be more appropriate when interdependence is higher. Conversely, other interdependence structures are more configural: in conjunctive tasks, for example, if just one team member performs poorly due to low perceived levels of an emergent state, the entire team may perform more poorly (Klein & Kozlowski, 2000; Saavedra et al., 1993). This would then mean that the minimum level of said construct would be its most meaningful collective index (maximum indices would, conversely, be relevant if the highest level of a variable is most relevant for performance; for example, Ng & Van Dyne, 2005). For further
insight on incorporating these indices into multilevel models, see Harrison
and Klein’s (2007) discussion of team disparity.
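The alternative collective indices discussed above can be contrasted on toy data. This is a sketch with invented teams and ratings, not data from any cited study:

```python
import pandas as pd

# Hypothetical individual-level efficacy ratings for two three-person teams.
ratings = pd.DataFrame({
    "team": ["A", "A", "A", "B", "B", "B"],
    "efficacy": [4, 4, 4, 5, 5, 2],
})

# The same construct under different team-level operationalizations:
agg = ratings.groupby("team")["efficacy"].agg(
    mean="mean",   # additive/consensus composition
    min="min",     # conjunctive tasks: the weakest member constrains the team
    max="max",     # disjunctive tasks: the strongest member can carry the team
    var="var",     # configural/dispersion operationalization
)
# Teams A and B have identical means (4.0) but very different minima,
# so a conjunctive-task model would treat them quite differently.
```

The point of the example is that the choice among these columns is a theoretical one: a mean hides exactly the member-level information that a conjunctive or disjunctive task structure makes consequential.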
Obviously, the nature of the emergent state construct itself determines
how it should be aggregated. This is clear with TMS and SMM, which are
primarily concerned with assessing the degree of sharedness (Austin, 2003;
Mesmer-Magnus & DeChurch, 2009); as such, most reviewed studies that
used configural indices assessed cognitive emergent states (Austin, 2003;
Smith-Jentsch et al., 2009; Sorensen & Stanton, 2011; Swaab et al., 2002).
However, certain emergent states may necessitate different aggregation tech-
niques, depending on their operationalization. For example, Carron and col-
leagues (2004) studied cohesion using different rwg cutoffs, finding that more
lenient cutoffs reduced the cohesion–performance relationship when opera-
tionalized more individually; when operationalized more collectively, strin-
gent cutoff scores increased this relationship.

Recommendation 6b: Use theory to guide the selection of aggregate-level conceptualizations—whether it be mean, minimum/maximum, variance, or otherwise.

Complex conceptualizations. Ultimately, a major difficulty with multilevel research is that whether one chooses a shared/compositional or configural/compilational approach to modeling multilevel data, some information is lost
compilational approach to modeling multilevel data, some information is lost
at higher levels of analysis. To circumvent these issues, researchers have pro-
posed several more complex solutions, including hierarchical linear model-
ing (HLM), network analysis, and consensus-dispersion models. HLM
addresses several problems commonly present in multilevel research. It
accounts for multicollinearity within aggregates, deals with heteroscedastic-
ity due to uneven group numbers, can accommodate missing data at Level 1,
and tests hypotheses at the aggregate level (Gill, 2003; Wright & Benson,
2011). There are some difficulties associated with HLM, such as requiring
larger sample sizes, but this can be circumvented to an extent by sampling
more groups with fewer individuals per group (Scherbaum & Ferreter, 2009;
Woltman, Feldstain, MacKay, & Rocchi, 2012). This is particularly germane
for field researchers, but may also be something to consider when designing
and conducting multilevel studies. To determine whether HLM has been
gaining in popularity since 2000, we conducted a targeted literature search
using the same construct search terms as in our broader searches, but adding
HLM and hierarchical linear, yielding 118 articles ranging in publication
date from 1979 to 2014. There has been an exponential increase in published
articles, with articles being published at rates of 0.7/year (1979-2002), 4.57/
year (2003-2008), and 11.33/year (2009-2014). Recent research has used
HLM techniques to assess the multilevel effects of collective efficacy (Baya-
zit & Mannix, 2003; Dithurbide, Sullivan, & Chow, 2009), cohesion (Cohen,
Ben-Tura, & Vashdi, 2012; Fullagar & Egleston, 2008), psychological safety
(Idris, Dollard, Coward, & Dormann, 2012), and transactive memory (Yuan,
Carboni, & Ehrlich, 2014; Yuan, Fulk, Monge, & Contractor, 2010). Interest-
ingly, the majority of articles identified as leveraging HLM studied collective
efficacy or cohesion (i.e., other emergent states were either rare or absent).
One limitation of HLM is that researchers are still left to determine whether
to conceptualize group-level variables with either consensus or configural
operationalizations, and the majority of HLM studies continue to justify
aggregation with rwg indices (e.g., Bayazit & Mannix, 2003; Cohen et al.,
2012; Idris et al., 2012).
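As a concrete sketch of the random-intercept logic underlying HLM, the following simulates nested team data and fits a two-level model with statsmodels (a commonly used Python library assumed here; variable names, effect sizes, and the seed are all invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_teams, n_members = 40, 5

# Level-2 (team) random effects plus a Level-1 (individual) predictor and noise.
team = np.repeat(np.arange(n_teams), n_members)
team_effect = rng.normal(0.0, 1.0, n_teams)[team]
efficacy = rng.normal(0.0, 1.0, n_teams * n_members)
perf = 0.5 * efficacy + team_effect + rng.normal(0.0, 1.0, n_teams * n_members)
df = pd.DataFrame({"team": team, "efficacy": efficacy, "perf": perf})

# Random-intercept model: performance regressed on efficacy, with
# intercepts allowed to vary across teams (the core HLM setup).
fit = smf.mixedlm("perf ~ efficacy", df, groups=df["team"]).fit()

# ICC: the share of unexplained variance attributable to team membership.
icc = fit.cov_re.iloc[0, 0] / (fit.cov_re.iloc[0, 0] + fit.scale)
```

The ICC computed this way is the usual justification for modeling the data multilevel at all; the sampling advice above (more teams, fewer members per team) maps directly onto the precision of the `cov_re` variance component.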
Researchers have also argued for the adoption of network models when
conducting multilevel research (Crawford & Lepine, 2012; Murase, Doty,
Wax, DeChurch, & Contractor, 2012). Network models are compilational
techniques for measuring team variables that assess interrelationships
between all individuals in a given team (Crawford & Lepine, 2012; Murase
et al., 2012). It should be noted that much research on social networks focuses
on the network itself (i.e., general connectedness between individuals within
a collective), and may even explore how the network influences a mean
aggregated emergent state (e.g., Espinosa & Clark, 2014; Tirado, Hernando,
& Aguaded, 2012; Zhong, Huang, Davison, Yang, & Chen, 2012). We see
this as a missed opportunity for exploring the structural dynamics of the
emergent state; indeed, network analysis techniques can be utilized to study
emergent states by modeling individual-level constructs and assessing simi-
larities within all dyadic connections (see Espinosa & Clark, 2014; Salmon
et al., 2009; Walker et al., 2009). Social network analysis (SNA) enables researchers to measure “structures and systems that would be nearly impossible to describe without relational concepts,” and allows for “the testing of hypotheses about the networks’ structural properties” (Comu, Iorio, Taylor, & Dossick, 2013, p. 298).
Espinosa and Clark (2014) illustrated the importance of SNA to modeling
cognitive emergent states, noting that when “team knowledge constructs
[are] more complex . . . simple averages provide an incomplete picture” (p.
333). Resick and colleagues (2010) showed that a network approach to modeling team cognition predicted performance better than did other metrics of team cognition. In our review, we noted a recent uptick in articles
using network analyses to study emergent states such as team trust (Lusher,
Kremer, & Robins, 2014), cohesion (Tirado et al., 2012; Wise, 2014; Zaheer
& Soda, 2009), affective climate (Yuan et al., 2014), team mental models
(Avnet & Weigel, 2013; Dionne, Sayama, Hao, & Bush, 2010), TMS (Comu
et al., 2013; Espinosa & Clark, 2014), and situational awareness (Sorensen &
Stanton, 2011). Network operationalizations are most relevant when a con-
struct may have meaningful intradyadic variance, such that the felt presence
of a given emergent state may differ from dyad to dyad. Finally, although a
full review of the nuances of SNA is outside the scope of this work, it is worth
mentioning m-slices, a specific, little-used SNA technique (Rodríguez,
Sicilia, Sánchez-Alonso, Lezcano, & García-Barriocanal, 2011). This tech-
nique can complement SNA by identifying clusters of related perceptions
within a social network. Although Rodríguez and colleagues used m-slicing to identify interest areas in an e-learning environment, applying this technique to the measurement of emergent states such as cohesion and mental models is appealing.
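As a minimal illustration of a network (rather than mean-based) operationalization, the sketch below builds a strong-trust graph from hypothetical dyadic ratings using networkx. The member names, ratings, and the “rating ≥ 4 counts as a strong tie” threshold are all invented for illustration:

```python
import itertools
import networkx as nx

# Hypothetical dyadic data: trust[i][j] = how much member i trusts member j.
trust = {
    "ana": {"ben": 5, "cal": 4, "dee": 5},
    "ben": {"ana": 5, "cal": 2, "dee": 4},
    "cal": {"ana": 4, "ben": 2, "dee": 3},
    "dee": {"ana": 5, "ben": 4, "cal": 3},
}

# Keep a dyad only when strong trust (>= 4) is reciprocated, preserving
# relational structure that a team mean would collapse away.
G = nx.Graph()
G.add_nodes_from(trust)
for i, j in itertools.combinations(trust, 2):
    if trust[i][j] >= 4 and trust[j][i] >= 4:
        G.add_edge(i, j)

density = nx.density(G)  # proportion of possible strong-trust dyads realized
```

Two teams with identical mean trust could differ sharply on density, reciprocity, or centralization here, which is exactly the information a dyad-to-dyad operationalization is meant to retain.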
Finally, Cole and colleagues (2011) recently argued for the use of consen-
sus-dispersion models. Although a full summary of this work is not appropri-
ate here, they essentially outline a methodology for simultaneously modeling
consensus (i.e., mean) and configural (i.e., dispersion) effects, while also
accounting for multicollinearity between means and dispersion. Unfortunately,
Cole and colleagues’ work does not appear to be widely cited: a Google Scholar search of articles citing this work returned only 15 hits, and of these, only one dealt with an emergent state—trust. De Jong and Dirks (2012) found
that mean trust, trust dispersion, and their interaction term all significantly
predicted team performance.
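The consensus-dispersion idea can be sketched as an ordinary regression entering a team’s mean, its dispersion, and their product. This is a simulation sketch only; the coefficients and seed are invented and are not De Jong and Dirks’s estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n_teams = 60

# Simulated team-level summaries: mean trust helps performance, dispersion
# hurts it, and dispersion dampens the benefit of mean trust.
trust_mean = rng.normal(0.0, 1.0, n_teams)
trust_sd = np.abs(rng.normal(0.0, 1.0, n_teams))
perf = (0.6 * trust_mean - 0.4 * trust_sd
        - 0.3 * trust_mean * trust_sd
        + rng.normal(0.0, 0.5, n_teams))

# Consensus-dispersion model: mean, dispersion, and their interaction.
X = np.column_stack([np.ones(n_teams), trust_mean, trust_sd,
                     trust_mean * trust_sd])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
b0, b_mean, b_sd, b_interact = beta
```

The interaction term is the distinctive piece: it tests whether the effect of the team’s average level depends on how much members disagree, which neither a pure consensus nor a pure configural model can capture alone.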

Recommendation 6c: Leverage recent advances in complex multilevel analysis methodologies to more effectively model individual- and collective-level effects, as well as mean and dispersion indices.

Incorporating Time Into Emergent State Models


Kennedy and McComb (2010) noted that “little is known about how the
[team cognition] convergence process occurs in a team domain” (p. 340).
Similar sentiments have been echoed by other researchers (Costas et al.,
2013; Kozlowski & Chao, 2012; Roe, Gockel, & Meyer, 2012). We there-
fore conducted targeted literature searches for research exploring emergent
states from a temporal or longitudinal perspective. To do this, we added the
terms longitudinal, curve modeling, growth curve, and over time to our set
of broader search terms; from these hits, we identified 44 additional articles
that in some way discussed temporal issues in relation to emergent states.
Our goal was partly to validate claims made by past researchers regarding the inadequacy of temporal research, but also to identify recent advances and to highlight where the field needs to go. Although there is
certainly a lack of research on temporal issues and emergent states, we high-
light a few key findings here relevant to emergent state research and
measurement.

Convergence over time. Arthur, Bell, and Edwards (2007) found support for the
hypothesis that within-team agreement on measures of collective efficacy
should increase, especially when using referent-shift measures. Their argu-
ment, which applies to all emergent states, was that “continued interaction
among team members provides a basis for which the team members can better
estimate” (Arthur et al., 2007, p. 39) the presence of an emergent state. Grow-
ing convergence over time was also evidenced in other studies (Dunlop, Falk,
& Beauchamp, 2013; Goncalo, Polman, & Maslach, 2010; Hommes et al.,
2014; Kanawattanachai & Yoo, 2007; Lee, Zhang, & Yin, 2011). Accordingly,
the general consensus in the literature seems to be that teams do trend toward
agreement over time. However, Kozlowski, Ployhart, and Lim (2010, cited in
Kozlowski & Chao, 2012) measured teams consistently (using experience
sampling) over an 8-week period and found that some teams converged toward
common cohesion perceptions, while others converged then diverged cyclically.
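Convergence of this kind is straightforward to track empirically: within-team dispersion should shrink across measurement occasions. A toy sketch with invented ratings:

```python
import numpy as np

# Hypothetical cohesion ratings for one four-person team at three occasions.
ratings_by_time = {
    "t1": [2, 5, 3, 4],   # early: members disagree
    "t2": [3, 4, 3, 4],
    "t3": [4, 4, 3, 4],   # later: perceptions have largely converged
}

# Emergence-as-convergence appears as a declining within-team SD.
dispersion = {t: float(np.std(r, ddof=1)) for t, r in ratings_by_time.items()}
```

Teams like those described by Kozlowski and colleagues that converge and then diverge cyclically would instead show a non-monotonic dispersion series, which is why repeated (e.g., experience-sampling) measurement matters.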

Timing of measurement. The general consensus from our reviewed articles is that it takes time for emergent states to develop and converge. Nonetheless, a
plethora of lab studies exist that examine emergent states in ad hoc teams,
and these short-lived teams continue to demonstrate acceptable agreement
indices. Taken together, it seems as if emergent states might indeed exist, at
least in some form, early in group life. Should emergent state researchers
really worry about time if they can find decent convergence early on? Empiri-
cal and conceptual work may shed some light on this issue. Bradley, Baur,
Banford, and Postlethwaite (2013) looked at cohesion in teams spanning 4
months, finding that later cohesion was more strongly linked to performance
than was cohesion measured earlier. Siebold (2006), reviewing years of mili-
tary cohesion research, noted that cohesion tends to be volatile (and down
trending) early in group life, and only stabilizes much later. Kanawattanachai
and Yoo (2007) found that it took several weeks for TMS to develop, but once
it was developed, it was stable and was a strong predictor of team
performance.
Conceptually, even when agreement is reached, it is simply intuitive that
emergent states may be qualitatively different later on in the team’s life, even
if numerical indicators remain constant. That is, moderate levels of cohesion
likely mean quite different things in teams existing for 3 hours as opposed to
3 years. Indeed, Chiocchio and Essiembre (2009) argued that teams need to
interact for at least 4 weeks before cohesion can truly emerge, meaning that
studies that measure cohesion in ad hoc, short-term teams may not actually be
assessing cohesion (despite convergence). Furthermore, the existence of
swift versions of emergent states such as cohesion (Meyerson, Weick, &
Kramer, 1996) and psychological safety (Dufresne, 2013) suggests that con-
structs measured early on in collective life may be qualitatively different
from the same construct measured at a later period in team development. For
example, Arthur and colleagues (2007) found that after accounting for interim
performance, only initial measures of collective efficacy predicted final per-
formance. In other words, pre-task collective efficacy was meaningfully dif-
ferent from collective efficacy in situ (which was essentially equivalent to
teams’ actual ongoing performance).

Recommendation 7a: Account for the effects of time in emergent state research, understanding that teams tend to progress toward convergence, and that findings from a group in one phase may not generalize to groups in other phases.

New temporal constructs. Recently, DeRue, Hollenbeck, Ilgen, and Feltz (2010) have argued that another component of team-level conceptualization
should be the trajectory of emergent states over time; that is, teams with simi-
lar means and dispersions of a construct might experience said construct in
different ways, if one is moving toward greater convergence while the other
experiences growing divergence. Li and Roe (2012) showed that incorporat-
ing trajectory indices into regression models adds significant predictive
power. Quintane, Pattison, Robins, and Mol (2013) showed that time horizon
may influence the nature (and appropriate measurement strategy) of cohe-
sion. Specifically, they note that in teams with a more distal time horizon,
closure and reciprocity (typical SNA indices associated with cohesion) more
commonly occur, while in teams with a shorter time frame, adaptation pro-
cesses are more prevalent. This suggests that time horizon, and perhaps per-
ceived time horizon (see Molleman, 2009), may be an important construct to
consider when theorizing about and measuring emergent states.
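A trajectory index of the kind DeRue and colleagues describe can be computed as a per-team slope over repeated measurements. In this sketch the two teams and their scores are invented to show similar averages with opposite trajectories:

```python
import numpy as np

# Mean cohesion for two hypothetical teams over five weekly measurements.
weeks = np.arange(5)
team_a = np.array([3.0, 3.2, 3.5, 3.7, 4.0])   # trending upward
team_b = np.array([4.0, 3.8, 3.5, 3.3, 3.0])   # similar average, declining

# Trajectory index: the per-team linear slope of the emergent state.
slope_a = np.polyfit(weeks, team_a, 1)[0]
slope_b = np.polyfit(weeks, team_b, 1)[0]
# A single cross-sectional snapshot would treat these teams as nearly
# identical; the slopes reveal opposite developmental trends.
```

Entering such slopes alongside means and dispersions is the regression strategy Li and Roe (2012) found to add predictive power.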

Recommendation 7b: Incorporate other temporal elements such as trajectory and time horizon into emergent state research.

Phases in emergent state development. Roe and colleagues (2012) reviewed team process research and concluded that researchers tend to acknowledge
the importance of temporality while only examining differences between
teams at different points in time. This is an important distinction from truly
temporal research, which would study differences within teams across time.
In our review, we also noticed that while longitudinal research is growing in
frequency, it tends to focus on two time points, and rarely separated by more
than a few months. This research design does little to tell us about the evolu-
tion of emergent states, and is typically more focused on whether a given
psychological state at one time influences another variable at another time
(e.g., Allen, Jones, & Sheffield, 2009; Blecharz et al., 2014; Chen et al., 2005;
Hirak, Peng, Carmeli, & Schaubroeck, 2012). Even when research is con-
ducted over the course of an extended period of time, the evolution of the
emergent state tends not to be the focal point (e.g., Bradley et al., 2013;
Brahm & Kunze, 2012; H. W. Chou, Lin, & Chou, 2012). These research
designs inherently assume that the construct being measured at multiple
points in time is qualitatively the same, which is problematic.
Various researchers have argued that emergent states develop in a phase/
process manner (Langan-Fox, Anglim, & Wilson, 2004). Most process-based
theories of emergence argue that phases involve the following: (a) team members orienting themselves to each other, (b) gathering information about team members (e.g., roles, trustworthiness), and, finally, (c) compiling
construct-relevant information after extended team interaction. These phases
substantially overlap existing team development theories (Kozlowski,
Watola, Jensen, Kim, & Botero, 2009; Tuckman & Jensen, 1977). Because the teams and multilevel literatures typically discuss team development phases conceptually (if at all), we did not notice any articles that empirically showed
state emergence across specific development phases. That notwithstanding,
we offer a few thoughts on applying a development framework when think-
ing about measuring emergent states.
At early phases in team development, team members become oriented to
each other and the task (Kozlowski et al., 2009); they attain a base level of
interpersonal and task knowledge but are also often characterized by dis-
agreements (Tuckman & Jensen, 1977). Social bonding, interpersonal learn-
ing, and task practice opportunities occur, which yield initial levels of social
cohesion (Siebold, 2006) and team cognition (Kanawattanachai & Yoo,
2007). Convergence tends to be lower in earlier phases, making strict consensus/compositional models inappropriate. This means that accounting for dispersion (through either consensus-dispersion or configural conceptualizations) may be more important in early phases than in other phases.
As teams persist, tasks, roles, and overall team identity (e.g., norms) are
clarified. Team communication/clarification processes yield a common cog-
nitive framework for social and task interactions (Kozlowski et al., 2009).
These processes also drive the development of other non-cognitive emergent
states, such as psychological safety (e.g., Alavi & McCormick, 2008; Bradley
et al., 2013; Brahm & Kunze, 2012; Hommes et al., 2014). Although means
and dispersion indices will of course be important here, we suggest that tra-
jectory (DeRue et al., 2010; Li & Roe, 2012) is particularly important during
these phases. The literature seems to suggest that emergent states are either
most predictive early (e.g., Arthur et al., 2007) or later in team life, before
performance is measured (Goncalo et al., 2010; Salanova, Rodríguez-
Sanchez, Schaufeli, & Cifre, 2014). Initial volatility in team perceptions
would render trajectory indices likely unreliable, but trajectory assessed at
middle phases could be used to predict final levels of the emergent state. This
may be especially helpful in fast-paced teams where data collection is diffi-
cult immediately prior to task performance.
Our review indicates a consensus that emergent states typically move
toward agreement over time, meaning that emergent state changes at later
phases should be relatively small. Funk and Kulik (2012) highlighted key
characteristics of late-stage groups including behavioral stability and an aver-
sion toward mental model changes; they argued that studying the social net-
works within these teams is essential to diagnosing their performance. And
although these teams may be less likely to change, when they focus on
addressing sources of low performance (Kozlowski et al., 2009; Tuckman &
Jensen, 1977), emergent states may change as a result. If this happens, emer-
gent states that were initially stable may change and not immediately con-
verge; in fact, perceptions may oscillate between convergence and divergence
as mature teams work to arrive at sustainable solutions (e.g., Kozlowski et al.,
2010).

Recommendation 8: Mean levels and dispersion indices are important to measure in all phases of team life; nonetheless, dispersion is especially important early on, while trajectories and networks are important later in group life.

Summary
Teams and teamwork are increasingly important in modern society; accord-
ingly, research interest in measuring emergent states has grown considerably.
Yet despite the importance of these variables for predicting and improving
team performance, the extant emergent state literature remains somewhat
nascent. Furthermore, from our review of the emergent state literature, we
noted several problematic trends.
First, constructs are frequently either not defined sufficiently or defined
inconsistently across different studies (Lewis & Herndon, 2011). This obfus-
cates trends that may be apparent across different streams of research. It com-
plicates theory building, because cohesion or efficacy in one domain might
not mean the same thing in another domain. We urge researchers to intention-
ally distinguish the exact nature of their emergent state of interest and resist
the temptation to infer generalizability across research domains with diver-
gent definitions of a given construct. To do this, we suggest that researchers
also understand the evolution of the construct of interest over time. Because
emergent state research is relatively recent, many constructs have fluid defi-
nitions. Building theory and making inferences across studies and situations
can be problematic when a construct meant something different 50 years ago.
Understanding the breadth (across research domains) and depth (over time)
of the construct of interest should not only enable better integration of find-
ings across studies but also facilitate more nuanced and insightful theory
building and research design.
Second, and a related problem, is the observation that there is no clear
criterion for developing appropriate item-specific operationalizations of dif-
ferent constructs (Lewis & Herndon, 2011). This issue is especially compli-
cated by the fact that the meaning and impact of various emergent states can
change somewhat depending on the referent, size of the group, and the team’s
task. Regarding team task type, we noted that different emergent states are
impacted differently by task interdependence. Although we did not notice
clear trends for understanding how interdependence affects specific emergent
states, we encourage researchers to pay attention to this potentially moderat-
ing factor.
Third, even when items are developed or selected correctly, researchers
tend to be fairly limited in the ways in which they operationalize the con-
structs at different levels of analysis. We encourage researchers to more con-
sistently use complex models to represent emergent states—accounting for
both emergent state level (e.g., individual, collective) and method of aggrega-
tion (e.g., sharedness, dispersion, structure). It has been at least 15 years since
researchers began highlighting the importance of multilevel modeling, and
the complex ways that this can happen (e.g., Chan, 1998; Kozlowski & Klein,
2000). However, team-level and multilevel studies rarely incorporate configural elements in their models of emergent states (see Cole et al.,
2011). We have presented several recent articles that we believe can and
should continue to make a strong impact on the field (e.g., Carron et al.,
2004; Cole et al., 2011; Dunlop et al., 2013). These works may help researchers better conceptualize levels of analysis in their theoretical and statistical
models. Doing so will facilitate more accurate and insightful multilevel mod-
els, allowing researchers to generate and answer new and important research
questions, which will be increasingly important as organizations look to
different kinds of teams (e.g., distributed, cross-functional, multiteam sys-
tems) to achieve objectives.
Finally, research has yet to give consistent attention to the role of tempo-
rality and the dynamic emergence of various constructs. Recently, research-
ers have begun developing non-obtrusive methods for measuring some
emergent states, which may facilitate more frequent and less cumbersome
measurement. We recommend that researchers leverage these measurement
advances to further research in all areas of emergent state measurement.
Furthermore, to help address the role of time, we theoretically tie Kozlowski
and colleagues’ (2009) team development phases to emergent state develop-
ment to suggest a few ways in which these states may shift over time.
In an effort to synthesize the literature on emergent state measurement, we
have provided recommendations regarding what we view as the central issues of the day. These recommendations are intended to act as guideposts for researchers and practitioners alike. Practically, many of these recommendations can act as standalone best practices that can and should be immediately
implemented into practice. Some of these recommendations are best prac-
tices that have already been acknowledged and developed elsewhere in semi-
nal works on measurement and multilevel theory (e.g., Chan, 1998; Kozlowski
& Klein, 2000; Nunnally & Bernstein, 1994). Nonetheless, our review high-
lights that some best practices are not being consistently followed. We point
to some of these inconsistent practices to help narrow the gap between where
we should be as a science and where we currently are. Specifically, better
abiding by these best practices will increase construct clarity, facilitate
research across domains, and strengthen the validity and generalizability of
findings, among other benefits. From a theoretical standpoint, these recom-
mendations are intended to stimulate debate on emergent state measurement
and act as a jumping-off point for future critical analysis and research. More
research on the role of agreement across different emergent states (e.g.,
Carron et al., 2004), the nature of various types of swift emergent states (e.g.,
Dufresne, 2013; Meyerson et al., 1996), and the role of time in state emergence is needed. We also encourage researchers to continue developing
and using innovative ways to unobtrusively assess various emergent states.
As we seek to understand the development and performance of collectives in
increasingly complex environments, these methodologies will become
increasingly important. The importance of understanding emergent states
will only grow as we continue to rely on teamwork to accomplish societal and
organizational goals; it is therefore essential that we not only better under-
stand these states, but that we better understand how to measure them. This
work represents one step toward the goal of continuing to improve the sci-
ence of emergent state measurement.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research was supported by an Army
Research Institute grant (Contract No. W5J9CQ-11-D-0002, Task Order 11-10002) to
Dr. Christina Curnow of ICF International.

References
Adams, B. D., Bruyn, L. E., & Chung-Yan, G. (2004). Creating a measure of
trust in small military teams. Retrieved from http://www.dtic.mil/cgi-bin/
GetTRDoc?AD=ADA436363
Alavi, S., & McCormick, J. (2008). The roles of perceived task interdependence and
group members’ interdependence in the development of collective efficacy in
university student group contexts. British Journal of Educational Psychology, 78,
375-393. doi:10.1348/000709907X240471
Alexandrov, A., Babakus, E., & Yavas, U. (2007). The effects of perceived man-
agement concern for frontline employees and customers on turnover intentions
moderating role of employment status. Journal of Service Research, 9, 356-371.
doi:10.1177/1094670507299378
Allen, M. S., Jones, M. V., & Sheffield, D. (2009). Attribution, emotion, and collec-
tive efficacy in sports teams. Group Dynamics: Theory, Research, and Practice,
13, 205-217. doi:10.1037/a0015149
Arthur, W. R., Bell, S. T., & Edwards, B. D. (2007). A longitudinal examination of
the comparative criterion-related validity of additive and referent-shift consen-
sus operationalizations of team efficacy. Organizational Research Methods, 10,
35-58. doi:10.1177/1094428106287574
Aryee, S., Chen, Z. X., & Budhwar, P. S. (2004). Exchange fairness and employee
performance: An examination of the relationship between organizational politics
and procedural justice. Organizational Behavior and Human Decision Processes,
94, 1-14. doi:10.1016/j.obhdp.2004.03.002
Austin, J. R. (2003). Transactive memory in organizational groups: The effects of
content, consensus, specialization, and accuracy on group performance. Journal
of Applied Psychology, 88, 866-878. doi:10.1037/0021-9010.88.5.866
Avnet, M. S., & Weigel, A. L. (2013). The structural approach to shared knowl-
edge: An application to engineering design teams. Human Factors, 55, 581-594.
doi:10.1177/0018720812462388
Baer, M., & Frese, M. (2003). Innovation is not enough: Climates for initiative and
psychological safety, process innovations, and firm performance. Journal of
Organizational Behavior, 24, 45-68. doi:10.1002/job.179
Bain, P. G., Mann, L., & Pirola-Merlo, A. (2001). The innovation impera-
tive: The relationships between team climate, innovation, and performance
in research and development teams. Small Group Research, 32, 55-73.
doi:10.1177/104649640103200103
Barrick, M. R., Bradley, B. H., Kristof-Brown, A. L., & Colbert, A. (2007). The
moderating role of top management team interdependence: Implications for
real teams and working groups. Academy of Management Journal, 50, 544-557.
doi:10.5465/AMJ.2007.25525781
Bartram, D. (2005). The great eight competencies: A criterion-centric approach to
validation. Journal of Applied Psychology, 90, 1185-1203. doi:10.1037/0021-
9010.90.6.1185
Bayazit, M., & Mannix, E. A. (2003). Should I stay or should I go? Predicting team
members’ intent to remain in the team. Small Group Research, 34, 290-321.
doi:10.1177/1046496403034003002
Bell, S. T. (2007). Deep-level composition variables as predictors of team per-
formance: A meta-analysis. Journal of Applied Psychology, 92, 595-615.
doi:10.1037/0021-9010.92.3.595
Biemann, T., Ellwart, T., & Rack, O. (2014). Quantifying similarity of team men-
tal models: An introduction of the rRG index. Group Processes & Intergroup
Relations, 17, 125-140. doi:10.1177/1368430213485993
Blecharz, J., Luszczynska, A., Scholz, U., Schwarzer, R., Siekanska, M., & Cieslak,
R. (2014). Predicting performance and performance satisfaction: Mindfulness
and beliefs about the ability to deal with social barriers in sport. Anxiety, Stress
& Coping: An International Journal, 27, 270-287. doi:10.1080/10615806.2013
.839989
Bradley, B. H., Baur, J. E., Banford, C. G., & Postlethwaite, B. E. (2013). Team play-
ers and collective performance: How agreeableness affects team performance
over time. Small Group Research, 44, 680-711. doi:10.1177/1046496413507609
Brahm, T., & Kunze, F. (2012). The role of trust climate in virtual teams. Journal of
Managerial Psychology, 27, 595-614. doi:10.1108/02683941211252446
Brown, R. D., & Hauenstein, N. A. (2005). Interrater agreement reconsidered: An
alternative to the rwg indices. Organizational Research Methods, 8, 165-184.
doi:10.1177/1094428105275376
Burke, C. S., Sims, D. E., Lazzara, E. H., & Salas, E. (2007). Trust in leadership:
A multi-level review and integration. Leadership Quarterly, 18, 606-632.
doi:10.1016/j.leaqua.2007.09.006
Butler, J. K., Jr. (1991). Toward understanding and measuring conditions of trust:
Evolution of a Conditions of Trust Inventory. Journal of Management, 17, 643-
663. doi:10.1177/014920639101700307
Cannon-Bowers, J. A., Tannenbaum, S. I., Salas, E., & Volpe, C. E. (1995). Defining
competencies and establishing team training requirements. In R. A. Guzzo & E.
Salas (Eds.), Team effectiveness and decision making in organizations (pp. 333-
380). San Francisco, CA: Jossey-Bass.
Carron, A. V., & Brawley, L. R. (2000). Cohesion: Conceptual and measurement
issues. Small Group Research, 31, 89-106. doi:10.1177/1046496412468072
Carron, A. V., Brawley, L. R., Bray, S. R., Eys, M. A., Dorsch, K. D., Estabrooks,
P. A., & Terry, P. C. (2004). Using consensus as a criterion for groupness:
Implications for the cohesion-group success relationship. Small Group Research,
35, 466-491. doi:10.1177/1046496404263923
Carron, A. V., Widmeyer, W. N., & Brawley, L. R. (1985). The development of
an instrument to assess cohesion in sport teams: The Group Environment
Questionnaire. Journal of Sport Psychology, 7, 244-266.
Chan, D. (1998). Functional relations among constructs in the same content domain
at different levels of analysis: A typology of composition models. Journal of
Applied Psychology, 83, 234-246. doi:10.1037/0021-9010.83.2.234
Chen, G., Bliese, P. D., & Mathieu, J. E. (2005). Conceptual framework and
statistical procedures for delineating and testing multilevel theories
for homology. Organizational Research Methods, 8, 375-409. doi:10.1177/
1094428105280056
Chen, G., Gully, S. M., & Eden. D. (2001). Validation of a new General Self-Efficacy
Scale. Organizational Research Methods, 4, 62-83. doi:10.1177/109442810141004
Chiocchio, F., & Essiembre, H. (2009). Cohesion and performance: A meta-analytic
review of disparities between project teams, production teams, and service teams.
Small Group Research, 40, 382-420. doi:10.1177/1046496409335103

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


694 Small Group Research 45(6)

Chou, H. W., Lin, Y. H., & Chou, S. B. (2012). Team recognition, collective effi-
cacy, and performance in strategic decision-making teams. Social Behavior and
Personality, 40, 381-394. doi:10.2224/sbp.2012.40.3.381
Clariana, R. B., & Wallace, P. (2007). A computer-based approach for deriving and
measuring individual and team knowledge structure from essay questions. Journal
of Educational Computing Research, 37, 211-227. doi:10.2190/EC.37.3.a
Cohen, A., Ben-Tura, E., & Vashdi, D. R. (2012). The relationship between
social exchange variables, OCB, and performance: What happens when
you consider group characteristics? Personnel Review, 41, 705-731.
doi:10.1108/00483481211263638
Cohen, A., Doveh, E., & Nahum-Shani, I. (2009). Testing agreement for multi-item
scales with the indices rWG(J) and AD m(J). Organizational Research Methods,
12, 148-164. doi:10.1177/1094428107300365
Cole, M. S., Bedeian, A. G., Hirschfeld, R. R., & Vogel, B. (2011). Dispersion-
composition models in multilevel research: A data-analytic framework.
Organizational Research Methods, 14, 718-734. doi:10.1177/1094428110389078
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust
propensity: A meta-analytic test of their unique relationship with risk taking and
performance. Journal of Applied Psychology, 92, 909-927. doi:10.1037/0021-
9010.92.4.909
Comu, S., Iorio, J., Taylor, J. E., & Dossick, C. (2013). Quantifying the impact of
facilitation on transactive memory system formation in global virtual project net-
works. Journal of Construction Engineering and Management, 139, 294-303.
doi:10.1061/(ASCE)CO.1943-7862.0000610
Cooke, N. J., Gorman, J. C., Myers, C. W., & Duran, J. L. (2013). Interactive team
cognition. Cognitive Science, 37, 255-285. doi:10.1111/cogs.12009
Cooke, N. J., Gorman, J. C., & Winner, J. L. (2007). Team cognition. In T. F.
Durso, R. S. Nickerson, S. T. Dumais, S. Lewandowsky, & T. J. Perfect (Eds.),
Handbook of applied cognition (2nd ed., pp. 239-268). Hoboken, NJ: John Wiley.
doi:10.1002/9780470713181.ch10
Costa, P. L., Graca, A. M., Marques-Quinteiro, P., Santos, C. M., Caetano, A., &
Passos, A. M. (2013). Multilevel research in the field of organizational behav-
ior: An empirical look at 10 years of theory and research. Sage Open, 1, 3-17.
doi:10.1177/2158244013498244
Crawford, E., & LePine, J. (2012). A configural theory of team processes: Accounting
for the structure of taskwork and teamwork. Academy of Management Review,
38, 32-48. doi:10.5465/amr.2011.0206
DeChurch, L. A., & Mesmer-Magnus, J. R. (2010a). The cognitive underpinnings of
effective teamwork: A meta-analysis. Journal of Applied Psychology, 95, 32-53.
doi:10.1037/a0017328
DeChurch, L. A., & Mesmer-Magnus, J. R. (2010b). Measuring shared team mental
models: A meta-analysis. Group Dynamics: Theory, Research, and Practice, 14,
1-14. doi:10.1037/a0017455

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


Coultas et al. 695

De Dreu, C. K. W., & Weingart, L. R. (2003). Task versus relationship conflict, team
performance, and team member satisfaction: A meta-analysis. Journal of Applied
Psychology, 88, 741-749. doi:10.1037/0021-9010.88.4.741
De Jong, B. A., & Dirks, K. T. (2012). Beyond shared perceptions of trust and moni-
toring in teams: Implications of asymmetry and dissensus. Journal of Applied
Psychology, 97, 391. doi:10.1037/a0026483
DeRue, D. S., Hollenbeck, J., Ilgen, D., & Feltz, D. (2010). Efficacy dispersion in
teams: Moving beyond agreement and aggregation. Personnel Psychology, 63,
1-40. doi:10.1111/j.1744-6570.2009.01161.x
Dion, K. L. (2000). Group cohesion: From “field of forces” to multidimen-
sional construct. Group Dynamics: Theory, Research, and Practice, 4, 7-26.
doi:10.1037/1089-2699.4.1.7
Dionne, S. D., Sayama, H., Hao, C., & Bush, B. (2010). The role of leadership in
shared mental model convergence and team performance improvement: An
agent-based computational model. Leadership Quarterly, 21, 1035-1049.
doi:10.1016/j.leaqua.2010.10.007
Dithurbide, L., Sullivan, P., & Chow, G. (2009). Examining the influence of team-ref-
erent causal attributions and team performance on collective efficacy: A multilevel
analysis. Small Group Research, 40, 491-507. doi:10.1177/1046496409340328
Dufresne, R. (2013). Learning from critical incidents by ad hoc teams: The impact
of storytelling on psychological safety. Academy of Management Proceedings.
doi:10.5465/AMBPP.2013.13939abstract. Retrieved form http://proceedings.
aom.org/content/2013/1/13939.short
Dunlap, W. P., Burke, M. J., & Smith-Crowe, K. (2003). Accurate tests of statis-
tical significance for rWG and average deviation interrater agreement indexes.
Journal of Applied Psychology, 88, 356-362. doi:10.1037/0021-9010.88.2.356
Dunlop, W. L., Falk, C. F., & Beauchamp, M. R. (2013). How dynamic are exercise
group dynamics? Examining changes in cohesion within class-based exercise
programs. Health Psychology, 32, 1240-1243. doi:10.1037/t01866-000
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams.
Administrative Science Quarterly, 44, 350-383. doi:10.2307/2666999
Edmondson, A. C. (2004). Psychological safety, trust, and learning in organizations:
A group-level lens. In R. M. Kramer & K. S. Cook (Eds.), Trust and distrust in
organizations: Dilemmas and approaches (pp. 239-271). New York, NY: Russell
Sage Foundation.
Edwards, J. R. (2001). Multidimensional constructs in organizational behav-
ior research: An integrative analytical framework. Organizational Research
Methods, 4, 144-192. doi:10.1177/109442810142004
Ellwart, T., Konradt, U., & Rack, O. (2014). Team mental models of expertise loca-
tion: Validation of a field survey measure. Small Group Research, 45, 119-153.
doi:10.1177/1046496414521303
Espinosa, J., & Clark, M. A. (2014). Team knowledge representation: A network per-
spective. Human Factors, 56, 333-348. doi:10.1177/0018720813494093

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


696 Small Group Research 45(6)

Fullagar, C. J., & Egleston, D. O. (2008). Norming and performing: Using micro-
worlds to understand the relationship between team cohesiveness and perfor-
mance. Journal of Applied Social Psychology, 38, 2574-2593. doi:10.1111/
j.1559-1816.2008.00404
Funk, C. A., & Kulik, B. W. (2012). Happily ever after: Toward a theory of late
stage group performance. Group & Organization Management, 37, 36-66.
doi:10.1177/1059601111426008
Gibson, C. B., Randel, A. E., & Earley, P. (2000). Understanding group efficacy:
An empirical test of multiple assessment methods. Group & Organization
Management, 25, 67-97. doi:10.1177/1059601100251005
Gill, J. (2003). Hierarchical linear models. In Kimberly Kempf-Leonard (Ed.), Encyclopedia
of social measurement (pp. 209-214). Amsterdam, The Netherlands: Elsevier.
Goddard, R. D. (2001). Collective efficacy: A neglected construct in the study of
schools and student achievement. Journal of Educational Psychology, 93, 467-
476. doi:10.1037/0022-0663.93.3.467
Goncalo, J. A., Polman, E., & Maslach, C. (2010). Can confidence come too soon?
Collective efficacy, conflict and group performance over time. Organizational
Behavior and Human Decision Processes, 113, 13-24. doi:10.1016/j.
obhdp.2010.05.001
Gonzales, A. L., Hancock, J. T., & Pennebaker, J. W. (2010). Language style match-
ing as a predictor of social dynamics in small groups. Communication Research,
37, 3-19. doi:10.1177/0093650209351468
Gully, S. M., Incalcaterra, K. A., Joshi, A., & Beaubien, J. (2002). A meta-analysis of
team-efficacy, potency, and performance: Interdependence and level of analysis
as moderators of observed relationships. Journal of Applied Psychology, 87, 819-
832. doi:10.1037/0021-9010.87.5.819
Harrison, D. A., & Klein, K. J. (2007). What’s the difference? Diversity constructs
as separation, variety, or disparity in organizations. Academy of Management
Review, 32, 1199-1228. doi:10.5465/AMR.2007.26586096
Heuze, J. P. Raimbault, N., & Fontayne, P. (2006). Relationships between cohe-
sion, collective efficacy and performance in professional basketball teams:
An examination of mediating effects. Journal of Sports Sciences, 24, 59-68.
doi:10.1080/02640410500127736
Hirak, R., Peng, A., Carmeli, A., & Schaubroeck, J. M. (2012). Linking leader inclu-
siveness to work unit performance: The importance of psychological safety
and learning from failures. Leadership Quarterly, 23, 107-117. doi:10.1016/j.
leaqua.2011.11.009
Hommes, J. J., Bossche, P. P., Grave, W. W., Bos, G. G., Schuwirth, L. L., &
Scherpbier, A. A. (2014). Understanding the effects of time on collaborative
learning processes in problem based learning: A mixed methods study. Advances
in Health Sciences Education. Advance online publication. doi:10.1007/s10459-
013-9487-z
Hornsey, M. J., Dwyer, L., & Oei, T. S. (2007). Beyond cohesiveness: Reconceptualizing
the link between group processes and outcomes in group psychotherapy. Small
Group Research, 38, 567-592. doi:10.1177/1046496407304336

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


Coultas et al. 697

Huang, R., Kahai, S., & Jestice, R. (2010). The contingent effects of leadership on
team collaboration in virtual teams. Computers in Human Behavior, 26, 1098-
1110. doi:10.1016/j.chb.2010.03.014
Huber, G. P., & Lewis, K. (2010). Cross-understanding: Implications for group cogni-
tion and performance. Academy of Management Review, 35, 6-26. doi:10.5465/
AMR.2010.45577787
Idris, M., Dollard, M. F., Coward, J., & Dormann, C. (2012). Psychosocial safety
climate: Conceptual distinctiveness and effect on job demands and worker psy-
chological health. Safety Science, 50, 19-28. doi:10.1016/j.ssci.2011.06.005
Kanawattanachai, P., & Yoo, Y. (2007). The impact of knowledge coordination on
virtual tem performance over time. MIS Quarterly, 31, 783-808. doi:10.1108/
eb028933
Kennedy, D. M., & McComb, S. A. (2010). Merging internal and external pro-
cesses: Examining the mental model convergence process through team
communication. Theoretical Issues in Ergonomics Science, 11, 340-358.
doi:10.1080/14639221003729193
Klein, K., & Kozlowski, S. W. (2000). From micro to meso: Critical steps in concep-
tualizing and conducting multilevel research. Organizational Research Methods,
3, 211-236. doi:10.1177/109442810033001
Kozlowski, S. W. J., & Chao, G. T. (2012). The dynamics of emergence: Cognition
and cohesion in work teams. Managerial and Decision Economics, 33, 335-354.
doi:10.1002/mde.2552
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work
groups and teams. Psychological Science in the Public Interest, 7, 77-124.
doi:10.1111/j.1529-1006.2006.00030.x
Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and
research in organizations: Contextual, temporal, and emergent processes. In K.
J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods
in organizations: Foundations, extensions, and new directions (pp. 3-90). San
Francisco, CA: Jossey-Bass.
Kozlowski, S. W. J., Watola, D. J., Jensen, J. M., Kim, B. H., & Botero, I. C. (2009).
Developing adaptive teams: A theory of dynamic team leadership. In E. Salas, G.
F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations:
Cross-disciplinary perspectives and approaches (pp. 113-155). New York, NY:
Routledge.
Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly
reported cutoff criteria: What did they really say? Organizational Research
Methods, 9, 202-220. doi:10.1177/1094428105284919
Langan-Fox, J., Anglim, J., & Wilson, J. R. (2004). Mental models, team mental
models, and performance: Process, development, and future directions. Human
Factors and Ergonomics in Manufacturing & Service Industries, 14, 331-352.
doi:10.1002/hfm.20004
LeBreton, J. M., James, L. R., & Lindell, M. K. (2005). Recent issues regarding rWG,
rWG, rWG(J), and rWG(J). Organizational Research Methods, 8, 128-138.
doi:10.1177/1094428104272181

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


698 Small Group Research 45(6)

Lee, J., Zhang, Z., & Yin, H. (2011). A multilevel analysis of the impact of a profes-
sional learning community, faculty trust in colleagues and collective efficacy on
teacher commitment to students. Teaching and Teacher Education, 27, 820-830.
doi:10.1016/j.tate.2011.01.006
Lewis, K. (2003). Measuring transactive memory systems in the field: Scale
development and validation. Journal of Applied Psychology, 88, 587-604.
doi:10.137/0021-9010.88.4.587
Lewis, K., & Herndon, B. (2011). Transactive memory systems: Current issues and
future research directions. Organization Science, 22, 1254-1265. doi:10.1287/
orsc.1110.0647
Li, J., & Roe, R. A. (2012). Introducing an intrateam longitudinal approach to the
study of team process dynamics. European Journal of Work & Organizational
Psychology, 21, 718-748. doi:10.1080/1359432X.2012.660749
Lusher, D., Kremer, P., & Robins, G. (2014). Cooperative and competi-
tive structures of trust relations in teams. Small Group Research, 45, 3-36.
doi:10.1177/1046496413510362
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework
and taxonomy of team processes. Academy of Management Review, 26, 356-376.
doi:10.2307/259182
Mathieu, J., Maynard, M. T., Rapp, T., & Gilson, L. (2008). Team effectiveness 1997-
2007: A review of recent advancements and a glimpse into the future. Journal of
Management, 34, 410-476. doi:10.1177/0149206308316061
May, D. R., Gilson, R. L., & Harter, L. M. (2004). The psychological conditions of
meaningfulness, safety, and availability and the engagement of the human spirit
at work. Journal of Occupational and Organizational Psychology, 77, 11-37.
doi:10.1348/096317904322915892
McComb, S., Kennedy, D., Perryman, R., Warner, N., & Letsky, M. (2010).
Temporal patterns of mental model convergence: Implications for distributed
teams interacting in electronic collaboration spaces. Human Factors, 52, 264-
281. doi:10.1177/0018720810370458
McKay, P. F., Avery, D. R., & Morris, M. A. (2009). A tale of two climates: Diversity
climate from subordinates’ and managers’ perspectives and their role in store
unit sales performance. Personnel Psychology, 62, 767-791. doi:10.1111/j.1744-
6570.2009.01157.x
Mesmer-Magnus, J. R., & DeChurch, L. A. (2009). Information sharing and team
performance: A meta-analysis. Journal of Applied Psychology, 94, 535-546.
doi:10.1037/a0013773
Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary
groups. In R. M. Kramer & T. R. Tyler (Eds.), Trust in Organizations: Frontiers
of theory and research (pp. 166-195). Thousand Oaks, CA: Sage.
Mohammed, S., Ferzandi, L., & Hamilton, K. (2010). Metaphor no more: A 15-year
review of the team mental model construct. Journal of Management, 36, 876-
910. doi:10.1177/0149206309356804

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


Coultas et al. 699

Mohammed, S., Klimoski, R., & Rentsch, J. R. (2000). The measurement of team
mental models: We have no shared schema. Organizational Research Methods,
3, 123-165. doi:10.1177/109442810032001
Molleman, E. (2009). Attitudes toward flexibility: The role of task characteris-
tics. Group & Organization Management, 34, 241-268. doi:10.1177/105960
1108330090
Mullen, B., & Copper, C. (1994). The relation between group cohesiveness and perfor-
mance: An integration. Psychological Bulletin, 115, 210-227. doi:10.1037/0033-
2909.115.2.210
Murase, T., Doty, D., Wax, A. M. Y., Dechurch, L. A., & Contractor, N. S. (2012).
Teams are changing: Time to “think networks.” Industrial and Organizational
Psychology, 5, 41-44. doi:10.1111/j.1754-9434.2011.01402.x
Murphy, K. R., Cronin, B. E., & Tam, A. P. (2003). Controversy and consensus
regarding the use of cognitive ability testing in organizations. Journal of Applied
Psychology, 88, 660-671. doi:10.1037/0021-9010.88.4.660
Murrell, A. J., & Gaertner, S. L. (1992). Cohesion and sport team effectiveness: The
benefit of a common group identity. Journal of Sport & Social Issues, 16, 1-14.
doi:10.1177/019372359201600101
Myers, N. D., Payment, C. A., & Feltz, D. L. (2004). Reciprocal relationships
between collective efficacy and team performance in women’s ice hockey. Group
Dynamics: Theory, Research, and Practice, 8, 183-195. doi:10.1037/1089-
2699.8.3.182
Newman, D. A., & Sin, H.-P. (2009). How do missing data bias estimates of within-
group agreement? Sensitivity of SDWG, CVWG, rWG(J), rWG(J)*, and ICC to system-
atic nonresponse. Organizational Research Methods, 12, 113-147. doi:10.1177/
1094428106298969
Ng, K., & Van Dyne, L. (2005). Antecedents and performance consequences of
helping behavior in work groups: A multilevel analysis. Group & Organization
Management, 30, 514-540. doi:10.1177/1059601104269107
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York, NY:
McGraw-Hill.
Oliver, L. W., Harman, J., Hoover, E., Hayes, S. M., & Pandhi, N. A. (1999). A quan-
titative integration of the military cohesion literature. Military Psychology, 11,
57-83. doi:10.1207/s15327876mp1101_4
Owens, J. E. (2003). Part 1: Cohesion: Explaining party cohesion and discipline
in democratic legislatures: Purposiveness and contexts. Journal of Legislative
Studies, 9, 12-40. doi:10.1080/1357233042000306236
Pain, M. A., & Harwood, C. G. (2008). The performance environment of the England
youth soccer teams: A quantitative investigation. Journal of Sports Sciences, 26,
1157-1169. doi:10.1080/02640410802101835
Quintane, E., Pattison, P. E., Robins, G. L., & Mol, J. M. (2013). Short- and long-term
stability in organizational networks: Temporal structures of project teams. Social
Networks, 35, 528-540. doi:10.1016/j.socnet.2013.07.001

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


700 Small Group Research 45(6)

Rau, D. (2006). Top management team transactive memory, information gather-


ing, and perceptual accuracy. Journal of Business Research, 59, 416-424.
doi:10.1016/j.jbusres.2005.07.001
Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal
of Personality and Social Psychology, 49, 95-112. doi:10.1037/0022-3514.49.1.95
Rentsch, J. R., & Klimoski, R. J. (2001). Why do “great minds” think alike?
Antecedents of team member schema agreement. Journal of Organizational
Behavior, 22, 107-120. doi:10.1002/job.81
Resick, C. J., Murase, T., Bedwell, W. L., Sanz, E., Jiménez, M., & DeChurch, L. A.
(2010). Mental model metrics and team adaptability: A multi-facet multi-method
examination. Group Dynamics: Theory, Research, and Practice, 14, 332-349.
doi:10.1037/a0018822
Rice, S. A. (1925). The behavior of legislative groups: A method of measurement.
Political Science Quarterly, 40, 60-72.
Rico, R., Sánchez-Manzanares, M., Gil, F., & Gibson, C. (2010). Team implicit
coordination processes: A team knowledge-based approach. In J. Wagner & J.
R. Hollenbeck (Eds.), Readings in organizational behavior (pp. 254-279). New
York, NY: Routledge.
Roberson, Q. M., Sturman, M. C., & Simons, T. L. (2007). Does the measure of
dispersion matter in multilevel research? A comparison of the relative perfor-
mance of dispersion indexes. Organizational Research Methods, 10, 564-588.
doi:10.1177/1094428106294746
Robertson, C., Gockel, R., & Brauner, E. (2013). Trust your teammates or bosses?
Differential effects of trust on transactive memory, job satisfaction, and per-
formance. Employee Relations, 35, 222-242. doi:10.1108/01425451311287880
Rodríguez, D., Sicilia, M., Sánchez-Alonso, S., Lezcano, L., & García-Barriocanal,
E. (2011). Exploring affiliation network models as a collaborative filtering
mechanism in e-learning. Interactive Learning Environments, 19, 317-331.
doi:10.1080/10494820903148610
Roe, R. A., Gockel, C., & Meyer, B. (2012). Time and change in teams: Where we
are and where we are moving. European Journal of Work and Organizational
Psychology, 21, 629-656. doi:10.1080/1359432X.2012.729821
Rosh, L., Offermann, L. R., & Van Diest, R. (2012). Too close for comfort?
Distinguishing between team intimacy and team cohesion. Human Resource
Management Review, 22, 116-127. doi:10.1016/j.hrmr.2011.11.004
Saavedra, R., Earley, C. P., & Van Dyne, L. (1993). Complex interdependence in task-
performing groups. Journal of Applied Psychology, 78, 61-72. doi:10.1037/0021-
9010.78.1.61
Salanova, M., Rodríguez-Sánchez, A. M., Schaufeli, W. B., & Cifre, E. (2014).
Flowing together: A longitudinal study of collective efficacy and collective flow
among workgroups. Journal of Psychology, 148, 435-455. doi:10.1080/002239
80.2013.806290
Salmon, P. M., Stanton, N. A., Walker, G. H., Jenkins, D., Ladva, D., Rafferty,
L., & Young, M. (2009). Measuring situation awareness in complex systems:
Comparison of measures study. International Journal of Industrial Ergonomics,
39, 490-500. doi:10.1016/j.ergon.2008.10.010

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


Coultas et al. 701

Scherbaum, C. A., & Ferreter, J. M. (2009). Estimating statistical power and required
sample sizes for organizational research using multilevel modeling. Organizational
Research Methods, 12, 347-367. doi:10.1177/1094428107308906.
Schoorman, F. D., Mayer, R. C., & Davis, J. H. (1996). Organizational trust:
Philosophical perspectives and conceptual definitions. Academy of Management
Review, 21, 337-340. doi:10.5465/AMR.1996.27003218
Sherrieb, K., Norris, F. H., & Galea, S. (2010). Measuring capacities for community
resilience. Social Indicators Research, 99, 227-247. doi:10.1007/s11205-010-
9576-9
Siebold, G. L. (2006). Military group cohesion. In T. W. Britt, C. A. Castro, & A. B.
Adler (Eds.), Military life: The psychology of serving in peace and combat (pp.
185-201). Westport, CT: Praeger.
Smith-Jentsch, K. A., Cannon-Bowers, J. A., Tannenbaum, S. I., & Salas, E. (2008).
Guided team self-correction: Impacts on team mental models, processes, and effec-
tiveness. Small Group Research, 39, 303-327. doi:10.1177/1046496408317794
Smith-Jentsch, K. A., Kraiger, K., Cannon-Bowers, J. A., & Salas, E. (2009). Do
familiar teammates request and accept more backup? Transactive memory in air
traffic control. Human Factors, 51, 181-192. doi:10.1177/0018720809335367
Solanas, A., Manolov, R., Leiva, D., & Andres, A. (2013). A measure of group dis-
similarity for psychological attributes. Psicologica, 32, 343-364.
Sorensen, L. J., & Stanton, N. A. (2011). Is SA shared or distributed in team work?
An exploratory study in an intelligence analysis task. International Journal of
Industrial Ergonomics, 41, 677-687. doi:10.1016/j.ergon.2011.08.001
Stajkovic, A. D., Lee, D., & Nyberg, A. J. (2009). Collective efficacy, group
potency, and group performance: Meta-analyses of their relationships, and
test of a mediation model. Journal of Applied Psychology, 94, 814-828.
doi:10.1037/a0015659
Staples, D. S., & Webster, J. (2008). Exploring the effects of trust, task interde-
pendence and virtualness on knowledge sharing in teams. Information Systems
Journal, 18, 617-640. doi:10.1111/j.1365-2575.2007.00244.x
Steiner, I. (1972). Group processes and productivity. New York, NY: Academic
Press.
Suddaby, R. (2010). Challenges for institutional theory. Journal of Management
Inquiry, 19, 14-20. doi:10.1177/1056492609347564
Susskind, A. M., Kacmar, K. M., & Borchgrevink, C. P. (2003). Customer service
providers’ attitudes relating to customer service and customer satisfaction in
the customer-server exchange. Journal of Applied Psychology, 88, 179-187.
doi:10.1037/0021-9010.88.1.179
Swaab, R. I., Postmes, T., Neijens, P., Kiers, M. H., & Dumay, A. M. (2002).
Multiparty negotiation support: The role of visualization’s influence on the
development of shared mental models. Journal of Management Information
Systems, 19, 129-150.
Tendulkar, S. A., Koenen, K. C., Dunn, E. C., Buka, S., & Subramanian, S. V. (2012).
Neighborhood influences on perceived social support among parents: Findings
from the project on human development in Chicago neighborhoods. PLoS ONE,
7(4), 1-9. doi:10.1371/journal.pone.0034235

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


702 Small Group Research 45(6)

Tirado, R., Hernando, A. & Aguaded, J. I. (2012). The effect of centralization


and cohesion on the social construction of knowledge in discussion forums.
Interactive Learning Environments. Advance online publication. doi:10.1080/
10494820.2012.745437
Tuckman, B. W., & Jensen, M. A. C. (1977). Stages of small-group devel-
opment revisited. Group & Organization Management, 2, 419-427.
doi:10.1177/105960117700200404
Van Mierlo, H., Rutte, C. G., Vermunt, J. K., Kompier, M. A. J., & Doorewaard, J. A.
M. C. (2006). Individual autonomy in work teams: The role of team autonomy,
self-efficacy, and social support. European Journal of Work & Organizational
Psychology, 15, 281-299. doi:10.1080/13594320500412249
Vecchio, R. P., Justin, J. E., & Pearce, C. L. (2010). Empowering leadership: An
examination of mediating mechanisms within a hierarchical structure. Leadership
Quarterly, 21, 530-542. doi:10.1016/j.leaqua.2010.03.014
Walker, G. H., Stanton, N. A., Stewart, R., Jenkins, D., Wells, L., Salmon, P., &
Baber, C. (2009). Using an integrated methods approach to analyse the emergent
properties of military command and control. Applied Ergonomics, 40, 636-647.
doi:10.1016/j.apergo.2008.05.003
Webber, S., Chen, G., Payne, S. C., Marsh, S. M., & Zaccaro, S. J. (2000). Enhancing
team mental model measurement with performance appraisal practices.
Organizational Research Methods, 3, 307-322. doi:10.1177/109442810034001
Wegner, D. M. (1987). Transactive memory: A contemporary analysis of the group
mind. In B. Mullen & G. R. Goethals (Eds.), Theories of group behavior
(pp. 185-208). New York, NY: Springer.
Wholey, D. R., Zhu, X., Knoke, D., Shah, P., Bruhn-Zellmer, M., & Witheridge,
T. F. (2011) The Team Work Assertive Community Treatment (TACT) scale:
Development and validation. Psychiatric Services, 63, 1108-1117. doi:10.1176/
appi.ps.201100338
Wise, S. (2014). Can a team have too much cohesion? The dark side to network density.
European Management Journal, 32, 703-711. doi:10.1016/j.emj.2013.12.005
Woltman, H., Feldstain, A., MacKay, J. C., & Rocchi, M. (2012). An introduction to
hierarchical linear modeling. Tutorials in Quantitative Methods for Psychology,
8, 52-69.
Wright, E. M., & Benson, M. L. (2011). Clarifying the effects of neighborhood con-
text on violence “behind closed doors.” Justice Quarterly, 28, 775-798. doi:10.
1080/07418825.2010.533687
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in pro-
duction of knowledge. Science, 316, 1036-1039. doi:10.1126/science.1136099
Yuan, Y., Carboni, I., & Ehrlich, K. (2014). The impact of interpersonal affective
relationships and awareness on expertise seeking: A multilevel network investi-
gation. European Journal of Work & Organizational Psychology, 23, 554-569.
doi:10.1080/1359432X.2013.766393
Yuan, Y., Fulk, J., Monge, P. R., & Contractor, N. (2010). Expertise directory devel-
opment, shared task interdependence, and strength of communication network

Downloaded from sgr.sagepub.com at Western Sydney University on February 21, 2016


Coultas et al. 703

ties as multilevel predictors of expertise exchange in transactive memory work


groups. Communication Research, 37, 20-47. doi:10.1177/009365020351469
Zaheer, A., & Soda, G. (2009). Network evolution: The origins of structural holes.
Administrative Science Quarterly, 54, 1-31. doi:10.2189/asqu.2009.54.1.1
Zhong, X., Huang, Q., Davison, R. M., Yang, X., & Chen, H. (2012). Empowering
teams through social network ties. International Journal of Information
Management, 32, 209-220. doi:10.1016/j.ijinfomgt.2011.11.001

Author Biographies
Chris W. Coultas is a research and consulting psychologist at Leadership Worth
Following in Dallas, Texas, USA, where he conducts research on coaching effective-
ness and leadership development. He received his doctorate in industrial/organiza-
tional psychology from the University of Central Florida in 2014.
Tripp Driskell is a research scientist at Florida Maxima Corporation in Orlando,
Florida, USA. He received his doctorate in human factors psychology from the
University of Central Florida in 2013.
C. Shawn Burke is an associate professor (research) at the Institute for Simulation
and Training of the University of Central Florida, USA. Her expertise includes teams,
leadership, team adaptability, team training, measurement, evaluation, and team
effectiveness. She earned her doctorate in industrial/organizational psychology from
George Mason University.
Eduardo Salas is Pegasus & Trustee Chair Professor of psychology at the University
of Central Florida, USA, where he also holds an appointment as program director for
the Human Systems Integration Research Department at the Institute for Simulation
and Training. He earned his doctorate in industrial/organizational psychology from
Old Dominion University.
