
Organizational Behavior and Human Decision Processes

Vol. 80, No. 1, October, pp. 3–20, 1999


Article ID obhd.1999.2851, available online at http://www.idealibrary.com on

A Primer of Social Decision Scheme Theory:
Models of Group Influence, Competitive
Model-Testing, and Prospective Modeling

Garold Stasser
Miami University

The basic elements of social decision scheme (SDS) theory are
individual preferences, group preference compositions (distin-
guishable distributions), patterns of group influence (decision
schemes, social combination rules), and collective responses
(group decisions, judgments, solutions, and the like). The theory
provides a framework for addressing two fundamental questions
in the study of group performance: How are individual resources
combined to yield a group response (the individual-into-group
problem)? What are the implications of empirical observations
under one set of circumstances for other conditions where data
do not exist (the sparse data problem)? Several prescriptions for
how to conduct fruitful group research are contained in the SDS
tradition: make precise theoretical statements, provide strong
and competitive tests of theories, and interpret empirical find-
ings in the context of robust process models. © 1999 Academic Press

This article attempts three tasks. First, it describes the basic elements of
social decision scheme (SDS) theory. These elements include individual prefer-
ences, group preference compositions (distinguishable distributions), patterns
of group influence (decision schemes, social combination rules), and collective
responses (group decisions, judgments, solutions, and the like). Second, it ex-
plores how SDS theory and methodology provide a toehold for answering two
fundamental questions in the study of group process. One is “How are individual
inclinations combined to yield a group response (the individual-into-group prob-
lem)?” The second is “What are the implications of findings obtained under one
set of circumstances for other conditions of interest (the sparse data problem)?”
Third, this article examines some ideas about how to conduct fruitful group

Preparation of this article was supported by National Science Foundation Grant SBR-9410584.
Address correspondence and reprint requests to Garold Stasser, Department of Psychology,
Miami University, Oxford, OH 45056. E-mail: stassegl@muohio.edu.
0749-5978/99 $30.00
Copyright © 1999 by Academic Press
All rights of reproduction in any form reserved.

research: make precise theoretical statements, provide strong and competitive
tests of theories, evaluate and shape theories with data (lots of it), and interpret
empirical findings in the context of robust process models. To illustrate these
points, I rely on several simple examples. Occasionally, the examples are overly
simplistic, but I have taken care to point to published articles that not only
further illustrate the points but also convey more fully the richness and texture
of the SDS tradition.

THE BASICS

SDS theory is composed of four basic elements: individual preferences, group
composition (distinguishable distributions), group influence processes (decision
schemes, social combination rules), and collective responses (group decisions,
judgments, solutions, verdicts, etc.). As depicted schematically in Fig. 1, individ-
ual preferences are the ingredients of group composition, and consensus pro-
cesses act on preferences within a group to yield a collective response.

Individual Preference
The term preference is used here in the sense of an inclination of an individual
to select one option from a set of response alternatives. As Davis (1973) noted
in the original presentation of SDS, the term preference takes on different
shades of meaning depending on the context. In the context of collective choice,
preference takes on its most common meaning—an inclination to choose one
decision alternative over others. SDS theory has also been applied to problem-
solving tasks (e.g., Davis, 1973; Laughlin & Ellis, 1986) and collective recall
(e.g., Hinsz, 1990). For these types of tasks, preference is a choice among possible

FIG. 1. Schematic of the basic components of SDS theory.



solutions or answers and often indicates a belief that one response is right (or,
at least, the best among available options).
More formally, and in the notation used by Davis (1973), let a denote a finite
set of discrete and mutually exclusive response options, a = {a1, a2, a3, . . . ,
an}, where n is the number of response options. Define two companion vectors.
The vector p is a distribution of probabilities, p = {p1, p2, p3, . . . , pn}, where
pi is the probability that an individual will prefer response ai. The vector r
contains the distribution of preferences within a group of size r, r = {r1, r2, r3,
. . . , rn}, where ri is the number of members that prefer ai. Note that r = Σ ri.
To illustrate, consider a six-person criminal jury—that is, r = 6. In this case,
the response options are guilty (a1) and not guilty (a2). The vector p contains the
probabilities of randomly selecting a juror (from some well-specified population
such as members of the jury roll in West Dell) who favors each of the response
options (before participating in the jury but presumably after hearing the
evidence). For example, p = {.6, .4} denotes a case and a population of potential
jurors for which individuals are slightly more likely to favor conviction than
acquittal. The vector r contains any one of the possible patterns of preference
that can occur within a six-person jury. For instance, r = {4, 2} denotes a jury
of four who favor a guilty verdict and two who favor a not-guilty verdict.
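To make the notation concrete, here is a minimal Python sketch of the jury example's p and r vectors. This is my own illustration, not part of the article; the variable names are arbitrary.

```python
# Sketch (illustrative, not from the article): Davis's notation in Python.
# p holds individual preference probabilities; r holds a group's head count.

# Response options for a criminal jury: a1 = guilty, a2 = not guilty.
options = ["guilty", "not guilty"]

# p = {.6, .4}: a juror is slightly more likely to favor conviction.
p = [0.6, 0.4]
assert abs(sum(p) - 1.0) < 1e-9  # preference probabilities must sum to 1

# r = {4, 2}: a six-person jury with four guilty advocates and two dissenters.
r = [4, 2]
group_size = sum(r)  # group size r is the sum of the r_i, here 6
```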

Group Composition: Distinguishable Distributions

In SDS theory, group composition is typically represented as the preference
distribution among members—that is, the distinguishable distribution, r. The
number of possible distinguishable distributions depends on the number of
preference alternatives, n, and group size, r. As Davis (1973) noted, there are
(n+r−1)Cr distinguishable distributions, where C is the binomial coefficient (or,
perhaps more familiar to some, the counting rule for combinations from finite
mathematics). Thus, for a six-person jury there are 7C6 = 7 distinguishable
distributions, whereas for a four-person group solving a problem with three
decision options the number of distinguishable distributions is (3+4−1)C4 =
6C4 = 15. By way of illustration, Table 1 presents the possible distinguishable
distributions for these two cases.
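The counting rule is easy to check in code. The following Python sketch (my illustration, not from the article) computes (n+r−1)Cr with the standard library:

```python
from math import comb

def num_distinguishable(n, r):
    """Number of distinguishable distributions for n response options and
    group size r: (n + r - 1) C r, combinations with repetition."""
    return comb(n + r - 1, r)

print(num_distinguishable(2, 6))  # six-person jury, two verdicts -> 7
print(num_distinguishable(3, 4))  # four-person group, three options -> 15
```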
Each distinguishable distribution represents a different alignment of support
for the various response alternatives. Stasser, Kerr, and Davis (1989) suggested
that the patterns of support within a group often tell us much about the social
influence climate. Majorities frequently enjoy an advantage in winning converts
to their positions. This advantage may stem from either informational or nor-
mative power. On the one hand, larger factions may simply know more, in a
collective sense, than smaller factions and be able to summon more facts and
arguments to defend and sell their points of view. On the other hand, larger
factions may exert more social pressure to conform to their views than do
smaller factions. This pressure can stem from the threat of social censure or
from the promise of social rewards. More subtly and probably more typically,
social pressures can also stem from normative prescriptions of fairness (e.g.,
“majority wins” notions of fair process) or from strategic assessments of the

TABLE 1
Examples of Distinguishable Distributions (r) and the Probabilities of Their
Occurrence under Different Preference Distributions, p

Six-person jury: r = 6 & n = 2

      r
(rG, rNG)    p = (.2, .8)   p = (.4, .6)   p = (.5, .5)   p = (.7, .3)
  (6, 0)         .000           .004           .016           .118
  (5, 1)         .004           .037           .094           .302
  (4, 2)         .033           .138           .234           .324
  (3, 3)         .132           .276           .312           .185
  (2, 4)         .297           .311           .234           .060
  (1, 5)         .356           .187           .094           .010
  (0, 6)         .178           .047           .016           .001

Four-person problem-solving group: r = 4 & n = 3

      r
(rA, rB, rC)   p = (.2, .8, .0)   p = (.4, .3, .3)   p = (.5, .0, .5)
  (4, 0, 0)          .002               .026               .062
  (3, 1, 0)          .026               .077               .000
  (3, 0, 1)          .000               .077               .250
  (2, 2, 0)          .154               .086               .000
  (2, 0, 2)          .000               .086               .375
  (2, 1, 1)          .000               .173               .000
  (1, 3, 0)          .410               .043               .000
  (1, 2, 1)          .000               .130               .000
  (1, 1, 2)          .000               .130               .000
  (1, 0, 3)          .000               .043               .250
  (0, 4, 0)          .410               .008               .000
  (0, 3, 1)          .000               .032               .000
  (0, 2, 2)          .000               .049               .000
  (0, 1, 3)          .000               .032               .000
  (0, 0, 4)          .000               .008               .062

likelihood that a position will ultimately emerge as the group’s choice (Kerr &
Watts, 1982). Dissenters may promote only their most cherished views when
they have little chance of prevailing in the group.
However, social influence may not be simply a matter of numbers. Sometimes
one position is more easily defended than others. The leniency bias observed
in jury decision making (MacCoun & Kerr, 1988; Stasser, Kerr, & Bray, 1982)
suggests that acquittal is easier to defend than conviction. In problem solving
or collective recall, correct options frequently win with only one or two support-
ers in the group (Laughlin & Ellis, 1986), particularly when correct members
are confident in their choice (Hinsz, 1990).
A central premise in SDS theory is that knowing where a group starts (its
distinguishable distribution) foretells where it is likely to end up (its collective
response). In practice, distinguishable distributions can be assessed, manipu-
lated, or predicted. One can directly assess the alignment of opinion in the
group by polling group members before the group convenes or at the onset of
discussion. Or, based on such polls, one can manipulate group composition to
obtain target distinguishable distributions.
Predicting the probability or rate of occurrence of distinguishable distribu-
tions requires knowing or estimating the distribution of preferences, p, in the
population of potential group members. To continue the jury example, suppose
that a random sample of 50 jurors from a jury roster heard a murder case,
and 40 (80%) favored guilt. From this sample, the estimate of p is p̂ = {40/50
= .8, 10/50 = .2}. Suppose that six-person juries were randomly composed from
this population (i.e., the recruitment process is random selection). Let πj denote
the probability that a specific preference composition, rj, will occur. For the two
decision alternatives (n = 2) in a criminal jury task and random recruitment,
πj can be estimated using the binomial function rule or a table of binomial
probabilities. For example, if the probability of sampling a guilty-sayer is .8,
then the probability of getting exactly four guilty-sayers (and two favoring
not guilty) in a six-person jury is about .25. For tasks involving more than two
alternatives, one can use the multinomial function rule to generate the probabil-
ity of each possible distinguishable distribution (see Davis, 1973, or Stasser et
al., 1989, for more detail). Table 1 gives, for selected population preference
distributions (p), the probabilities of the possible distinguishable distributions
for a six-person group and two alternatives (jury case) and for a four-person
group and three alternatives. For convenience of expression, it is useful to
organize the probabilities of the possible distinguishable distributions in a
vector, π = {π1, π2, π3, . . . , πm}, where m is the number of possible distinguish-
able distributions.
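Under random composition, the vector of distinguishable-distribution probabilities can be generated by enumerating the distributions and applying the multinomial rule. A Python sketch (my own illustration, not from the article; it reproduces the p = (.4, .6) column of Table 1):

```python
from math import factorial
from itertools import combinations_with_replacement
from collections import Counter

def distinguishable(n, r):
    """Yield every distinguishable distribution: counts over n options
    that sum to r (each multiset of member preferences)."""
    for combo in combinations_with_replacement(range(n), r):
        counts = Counter(combo)
        yield tuple(counts.get(i, 0) for i in range(n))

def multinomial_prob(rvec, p):
    """Multinomial probability of one distinguishable distribution under p."""
    coef = factorial(sum(rvec))
    for ri in rvec:
        coef //= factorial(ri)
    prob = float(coef)
    for ri, pi in zip(rvec, p):
        prob *= pi ** ri
    return prob

# Six-person jury with p = (.4, .6): seven distinguishable distributions.
pi_vec = {rv: multinomial_prob(rv, (0.4, 0.6)) for rv in distinguishable(2, 6)}
print(round(pi_vec[(4, 2)], 3))  # binomial case: C(6,4) .4^4 .6^2 = .138
```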

Collective Response
Let A denote the finite set of discrete and mutually exclusive response options
for the group, A = {A1, A2, A3, . . . , AN}, where N is the number of group response
options. In many applications of SDS theory, the response options for the group
are identical to those for the individual. That is, it is often the case that
A = a. For example, if a board of directors is to choose among three employee
health plans, both the individual- and the group-level decision set would consist
of the three specified plans. However, this equality is not required and, in some
cases, is not desirable. Most notably, it is customary in applications to criminal
juries to include “hung” as a group “response,” recognizing the fact that juries,
in practice, occasionally fail to reach a verdict. More specifically, for the typical
criminal jury, A = {guilty, not guilty, hung}.

Consensus Processes
The social decision scheme matrix (D) summarizes the relationships among
initial alignments of support (r) and the possible group responses (elements
of A). More specifically, each element of D, dij, is the probability that the ith
distinguishable distribution (ri) will lead to the jth collective response (Aj).
Consider the following social decision scheme matrix for a six-person jury:

      r
(rG, rNG)    A1 (G)   A2 (NG)   A3 (H)
  (6, 0)      1.0       0.0      0.0
  (5, 1)      1.0       0.0      0.0
  (4, 2)      0.6       0.1      0.3
  (3, 3)      0.0       0.5      0.5
  (2, 4)      0.0       0.9      0.1
  (1, 5)      0.0       1.0      0.0
  (0, 6)      0.0       1.0      0.0

This D represents a majority process with strong leniency bias superimposed.
When five or all of the six jurors favor guilty—(6, 0) and (5, 1)—the jury is
certain to convict. The probability of conviction drops off rapidly as the number
initially favoring guilty decreases from five to three. An initial even split
(3, 3) will never result in conviction and has an equal chance of producing
either an acquittal or a hung jury.
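As a quick sanity check, any D can be coded as a row-stochastic matrix: each row is a probability distribution over group responses and must sum to 1. A Python sketch of the majority/leniency D (my own illustration):

```python
# The majority/leniency D as a row-stochastic matrix (illustrative sketch).
# Rows index distinguishable distributions (6,0) ... (0,6); columns index
# the group responses: guilty, not guilty, hung.

D_ml = [
    [1.0, 0.0, 0.0],  # (6, 0)
    [1.0, 0.0, 0.0],  # (5, 1)
    [0.6, 0.1, 0.3],  # (4, 2)
    [0.0, 0.5, 0.5],  # (3, 3)
    [0.0, 0.9, 0.1],  # (2, 4)
    [0.0, 1.0, 0.0],  # (1, 5)
    [0.0, 1.0, 0.0],  # (0, 6)
]

# Every row of a valid D must sum to 1.
for row in D_ml:
    assert abs(sum(row) - 1.0) < 1e-9
```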
It is often convenient to attach verbal labels to different versions of D. The
foregoing D might be dubbed “majority wins with leniency.” Such labels rarely
capture the nuances of a particular D matrix. It is easy to imagine D’s that
differ from the one given but that also depict variants of defendant protection
superimposed on a majority process. In practice, the aim is often to explore a
range of distinctly different processes leading to alternate instantiations of the
D matrix. For example, we could conceive of a pattern of evidence or a radical
“law and order” culture for which the jury process would tilt toward conviction.
For such a case, we could use the label “conviction-supported wins.”
To push this counterexample further, suppose that the prosecution case turns
on one compelling but complex argument and, without this argument, the
defendant looks like a saint. Thus, the prosecution needs at least one juror to
stay awake long enough during closing to comprehend this critical argument.
If this one juror is sufficiently eloquent and motivated (some risk here, not
everyone possesses these qualities), he/she can convince the others during
deliberation. If two jurors comprehend this critical argument, it is virtually
certain that at least one of them (or the two together) will possess the necessary
verbal skills and impetus to convince the others. The D matrix would take on
a distinctly different look in this case:

      r
(rG, rNG)    A1 (G)   A2 (NG)   A3 (H)
  (6, 0)      1.0       0.0      0.0
  (5, 1)      1.0       0.0      0.0
  (4, 2)      1.0       0.0      0.0
  (3, 3)      1.0       0.0      0.0
  (2, 4)      1.0       0.0      0.0
  (1, 5)      0.0       0.0      1.0
  (0, 6)      0.0       1.0      0.0

Notice that in this construction of the consensus process, the one guilty advocate
in the (1, 5) case may not be able to convince the others but also will not
conform to the majority. The ideas expressed in this D matrix are borrowed from
applications to group problem solving where “truth-supported” is frequently
descriptive of the social combination processes (see, for example, Laughlin &
Ellis, 1986). (Rather than “conviction-supported wins,” we might also dub this
jury process the “Henry Fonda” effect in recognition of his role in the classic
film Twelve Angry Men.)
In generating each of these contrasting examples of D, I imagined a social
process and gave it explicit expression in the pattern of entries in the matrix,
going from verbal theory to D. In an empirical science, the next step is to collect
data and, thereby, garner support for one construction of the process over
others. Kerr, Stasser, and Davis (1979) called this approach model testing: (a)
propose a set of possible D’s (based on theory and prior evidence), (b) assess
or predict the relative frequency of each distinguishable distribution, (c) predict
the distribution of group responses using each version of D in turn, and (d) assess
the fit of these predictions to the observed group responses. Model fitting
reverses the approach. Based on data, aggregated within or across studies,
estimate a D matrix. In the simplest case, such an estimate is based on tabu-
lating how many groups starting with a particular distinguishable distribution
yielded each of the possible group responses. For example, suppose that 20
juries in a mock trial started with a (4, 2) split and 12 convicted, 2 acquitted,
and 6 failed to reach a verdict. The estimates based on these data would yield
the third row of the “majority/leniency” D, displayed earlier, namely,
{.6, .1, .3}. If the data are sufficiently dense so that the estimates of the entries
in D are overdetermined, it is possible not only to estimate D but also to assess
statistically its goodness of fit (see Laughlin et al., 1976, for an example). To
understand these approaches better, it helps to be more explicit about the
mathematical relationships among the elements of SDS theory.

SDS MODEL AND MODEL TESTING

If the probability distributions of distinguishable distributions, π, and of
group responses, P, are known, then an appropriate choice for D must satisfy

P = π D,                                                                (1)

where P and π are row vectors. In practice, we rarely know π and P but must
work with estimates of these quantities derived from empirical observations.
In some contexts, we might want to distinguish these estimates from the
theoretical quantities in our notation (e.g., by placing a carat over the symbol
to signify an estimate). However, in this paper, it should be clear when the
symbols refer to estimates.
To illustrate more concretely a model testing approach, consider the two D
matrices already developed. Let Dml denote the “majority wins with leniency
bias” model and Dcs denote the “conviction-supported wins” model. Suppose
that, in an independent sample, 40% of jurors felt the defendant was guilty
after hearing the case. Randomly composing six-person juries from a population
containing 40% guilty advocates would yield an estimated π = {.004, .037,
.138, .276, .311, .187, .047}. Under these conditions, we expect that a (6, 0)
distinguishable distribution will rarely occur (.004) whereas we expect a (2, 4)
alignment about a third of the time (.311). Given this estimate of π, the predicted
distribution of jury outcomes under the majority/leniency model would be

Pml = π Dml

                                           | 1.0  0.0  0.0 |
                                           | 1.0  0.0  0.0 |
                                           | 0.6  0.1  0.3 |
    = {.004 .037 .138 .276 .311 .187 .047} | 0.0  0.5  0.5 |
                                           | 0.0  0.9  0.1 |
                                           | 0.0  1.0  0.0 |
                                           | 0.0  1.0  0.0 |

    = {.124 .665 .211}.

That is, the majority/leniency model predicts that about 12% of six-person
juries will convict, 66% will acquit, and 21% will be “hung” when 40% of jurors
initially favor conviction. The comparable computations with the conviction-
supported model yield Pcs = {.766, .047, .187}. In this population of jurors (40%
favoring conviction initially), the conviction-supported model predicts a high
conviction rate whereas the majority/leniency model predicts a fairly high
acquittal rate.
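The matrix product in Eq. (1) can be sketched in a few lines of Python (my illustration, not from the article; small discrepancies from the reported values reflect rounding of π):

```python
# Eq. (1), P = pi D, for the two jury models discussed in the text.

pi = [0.004, 0.037, 0.138, 0.276, 0.311, 0.187, 0.047]  # 40% guilty advocates

D_ml = [  # majority wins with leniency
    [1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.6, 0.1, 0.3], [0.0, 0.5, 0.5],
    [0.0, 0.9, 0.1], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0],
]
D_cs = [  # conviction-supported wins
    [1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0],
]

def predict(pi, D):
    """Row vector times matrix: P_j = sum over i of pi_i * d_ij."""
    return [sum(pi_i * row[j] for pi_i, row in zip(pi, D))
            for j in range(len(D[0]))]

P_ml = predict(pi, D_ml)
P_cs = predict(pi, D_cs)
print([round(x, 3) for x in P_ml])  # close to the {.124, .665, .211} in the text
print([round(x, 3) for x in P_cs])  # close to the {.766, .047, .187} in the text
```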
Having these predictions in hand, we can compose six-person juries from
this population and ask them to decide the case. Let O be the observed relative
frequency of jury decisions. Then, we can address two related questions: (a)
does either model provide an adequate account of the observations (model
test) and (b) which model provides a better account of the data (competitive
model test)?
Suppose that in a sample of 30 juries, 8 convicted, 16 acquitted, and 6 were
hung. Then, O = {.267, .533, .200}. This result appears to be inconsistent with
the predictions of the conviction-supported model, and, of the two models, the
majority/leniency model seemingly provides a better account. But how well
does the majority/leniency model do? Are the discrepancies between O and Pml
too large to attribute to sampling error? A number of goodness-of-fit statistics
can assist in answering these questions. Most familiar among these statistics
is the chi-square (χ²) statistic (Hays, 1988). In the current example, χ²(2) =
5.75 for the majority/leniency model and 160.54 for the conviction-supported
model. These values confirm our impression that the majority/leniency model
is doing much better, but leave some uncertainty about the conclusions regard-
ing its adequacy. On the one hand, a value of 5.75 is not sufficiently rare under
a chi-square distribution with two degrees of freedom to reject that model at
the .05 level often used in conventional null hypothesis testing. On the other
hand, there is still less than a 1 in 10 chance of obtaining an observed distribu-
tion so discrepant from the predicted due solely to sampling error. Whereas
exploring the practices and pitfalls of using goodness of fit in model testing is
beyond the scope of this article (see Davis, 1973, and Hastie & Stasser, in
press, for more in-depth discussions), suffice it to say that such tests are most
aptly treated as ways of conveniently summarizing the fit of models. One needs
a context to properly evaluate goodness of fit.
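For readers who want to reproduce the chi-square values, here is a Python sketch of the Pearson goodness-of-fit computation (my illustration; it assumes expected counts are the model's predicted probabilities times the 30 juries, so the results differ slightly from the article's figures because the predictions are rounded):

```python
# Pearson goodness-of-fit for observed jury outcomes against model predictions.

def chi_square(observed_counts, predicted_probs):
    """Sum of (observed - expected)^2 / expected over response categories."""
    n = sum(observed_counts)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed_counts, predicted_probs))

O_counts = [8, 16, 6]         # 30 juries: convicted, acquitted, hung
P_ml = [0.124, 0.665, 0.211]  # majority/leniency prediction
P_cs = [0.766, 0.047, 0.187]  # conviction-supported prediction

print(round(chi_square(O_counts, P_ml), 2))  # modest misfit, about 5.7
print(round(chi_square(O_counts, P_cs), 2))  # severe misfit, about 160
```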

COMPETITIVE MODEL TESTS

The model testing strategy most commonly used in the SDS tradition is to
posit a range of plausible models and to include some that are not so plausible.
One hopes that one can confidently dismiss the implausible models and several
of the a priori plausible ones, leaving only one or a few models that remain
plausible in light of the data. Thus, my current example using only two models
is much too restricted. Davis (1973) and Kerr et al. (1976) provide nice illustra-
tions of this approach using mock juries. For example, Kerr et al. (1976) tested
13 models using the data from 36 six-person and 36 twelve-person juries.
Competitive model testing has several advantages (Bentler & Bonnett,
1980; Hastie & Stasser, in press; Stasser, 1988). First, it encourages one to
consider explicitly, based on available theory and data, what models are plausi-
ble a priori and discourages one from limiting attention to the most favored
set of models. Second, having multiple models provides a context for evaluating
the adequacy of each. Because the outcome of a goodness-of-fit test of any one
model can be ambiguous, it is helpful to know how a model is doing relative
to its theoretical competitors and to clearly inadequate models (see Bentler &
Bonnett, 1980, and Stasser, 1988, for an elaboration of this point). Third, in
the event that no model fits well, one can often examine the patterns of fit
across models and identify what characteristics a good model should have.
Fourth, using multiple models (especially ones that vary in plausibility)
allows one to assess the sensitivity of one’s empirical approach. If one cannot
confidently reject on the basis of the data even the most ridiculous model, one
cannot gain much satisfaction from failing to reject a theoretically favored
model. A frequent reason for insensitive model tests is small samples, which
yield low statistical power and noisy data. This is particularly problematic
when group-level data are needed, as in SDS theory. The temptation is to rely
on too few groups because the number of participants required is so large.
Thus, a constant concern is that tests of SDS models will not be sufficiently
sensitive and any half-baked model will “fit” the data.
Finally, whereas too little power can be a problem, too much statistical power
can also produce misleading tests if consideration is limited to one or two
models. Under conditions of high statistical power very good models can look
terrible on the basis of a single goodness-of-fit test. In these cases, it is informa-
tive to know how well theoretically plausible models are doing relative to
implausible “baseline” models. For example, in the jury case, we might posit
a “random decision” model which arbitrarily asserts that a jury will be equally
likely to convict, acquit, and hang and compare the fit of our theory-driven
models with this theoretically vacuous model (see Stasser & Taylor, 1991, for
an example of this approach).
Just as testing a single model can be uninformative and misleading, testing
models under one set of conditions is a dangerous strategy. Given a set of
models, it is instructive to examine how each portrays the relationship between
individual preferences and group responses. Think of a graph that tracks the
predicted probability of a group response (e.g., conviction) as a function of
individual response tendencies. Whereas models may predict similar outcomes
in some regions of such a graph, each model will generate its own “signature”
across the range of individual response tendencies. Consider, for example, the
predictions graphed in Fig. 2. Predicted jury conviction rates are plotted as a
function of the probability that an individual juror will favor conviction at
the onset of deliberation for the majority/leniency (M/L) and the conviction-
supported (C-S) models. These models generate strikingly different predictions
over a wide range of individual response tendencies. However, to illustrate the
point, consider the predictions of a simple majority (Maj) model without a
leniency component. This model predicts that a majority will always win and,
if a majority does not exist, the jury will be hung. Note that there are regions
of the graph where the M/L and Maj functions nearly overlap. If these models
were competitively tested within these regions, the tests would necessarily be
inconclusive. Similarly, conceptually distinct models are often virtually indis-
tinguishable for certain group sizes. For example, when the M/L and Maj
models are applied to four-person juries, the functions are nearly identical
across all individual conviction rates. The lesson is clear. Use the SDS models

FIG. 2. Predicted jury conviction rate as a function of individual juror conviction rate for six-
person juries and three SDS models: majority/leniency, simple majority, and conviction-supported.

to design the empirical tests, and focus the empirical efforts on regions where
models diverge in their predictions.
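The "signature" curves of Fig. 2 can be traced numerically. The following Python sketch (my own illustration) computes the predicted conviction rate for each model across individual conviction rates, using only the conviction column of each D:

```python
# Predicted jury conviction rate as a function of individual conviction rate
# for the M/L, Maj, and C-S models (illustrative sketch).
from math import comb

def pi_binomial(p_guilty, size=6):
    """Probabilities of distinguishable distributions, ordered (rG = size .. 0)."""
    return [comb(size, g) * p_guilty**g * (1 - p_guilty)**(size - g)
            for g in range(size, -1, -1)]

# Conviction column of each D matrix, rows (6,0) ... (0,6).
CONVICT = {
    "M/L": [1.0, 1.0, 0.6, 0.0, 0.0, 0.0, 0.0],
    "Maj": [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # (3,3) hangs, never convicts
    "C-S": [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
}

def conviction_rate(model, p_guilty):
    pi = pi_binomial(p_guilty)
    return sum(a * b for a, b in zip(pi, CONVICT[model]))

for p in (0.2, 0.5, 0.8):
    print(p, {m: round(conviction_rate(m, p), 3) for m in CONVICT})
```

Sweeping p over a grid and comparing the three curves locates the regions where the models' predictions diverge, which is where empirical tests should be concentrated.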

INDIVIDUALS INTO GROUP

Davis (1969) contrasted three types of questions that are relevant to group
performance: How does being in a group affect individual performance (individ-
ual in the group)? How does group performance compare to individual perfor-
mance (individual versus group)? And, how are individual resources combined
to yield the group product (individual into group)? The individual-into-group
question is at the heart of SDS theory: How do members’ individual response
inclinations map onto group responses? Possible answers to the individual-
into-group question are embodied in the social decision scheme matrix, D.
Davis (1969) noted that there has been considerable interest in the group-
versus-individual question. Are juries more likely to convict than jurors? Are
groups better than individuals at solving problems? Do groups make more or
less biased judgments than individuals? Are group judgments more extreme
(polarized) than individual judgments of attitude or risk? Interest in such
questions has not abated (e.g., Gigone & Hastie, 1996; Kerr, MacCoun, &
Kramer, 1996; Tindale, Smith, Thomas, Filkins, & Sheffey, 1996). Nonetheless,
Davis (1969) asserted that the more fundamental question is the individual-
into-group question. If we know (even roughly) how individual responses are
combined to yield a group response, we often can answer the group-versus-
individual question. Indeed, armed with a reasonably accurate SDS model, we
can not only provide an answer but also show that the answer is frequently
contingent on the individual response distribution and group size.
Consider again the predictions for the M/L, Maj, and C-S models in Fig. 2.
According to these models, are six-person juries more likely to convict than
individual jurors? Only for the process represented by the C-S model is the
answer simply “Yes.” For both variants of majority process, the answer is “It
depends.” For the simple majority model, juries are more likely to convict when
more than 65% of jurors favor guilty but less likely to convict when fewer than
65% of jurors favor guilt. The M/L model has a similarly qualified answer but
the crossover point has moved from the region of 65% to about 80% individual
conviction rate.
Interestingly, intuition suggests that the predictions for the simple majority
model would yield a classic polarization pattern—the group process enhances
the most dominant response of individuals. Thus, one might expect that,
under a simple majority process, juries would be more likely than individuals
to convict when the individual conviction rate was greater than .5 and less
likely when the individual rate was less than .5. This polarization pattern would
obtain only if “hung” juries did not occur, and juries with a (3, 3) distinguishable
distribution were equally likely to convict and acquit. This fact highlights the
importance of “subschemes” in SDS theory. The common notion of “majority
rules” is sufficient to generate most of the entries in Dmaj but fails when
applied to the (3, 3) distinguishable distribution. Nonetheless, the nature of
the social process when the group is equally divided can have a substantial
impact on the predicted distribution of group responses.
More than providing sophisticated answers to individual-versus-group ques-
tions, individual-into-group models can change how we interpret individual
and group comparisons. Group problem solving has a long and rich history in
the study of group performance (see Davis, 1969; Hill, 1982; and Shaw, 1976,
for reviews of this literature). A common finding is that groups are more likely
to solve a problem than are individuals working alone. Such findings of group
“superiority” in the social psychological literature date back to Marjorie Shaw’s
(1932) classic study comparing the performance of four-person groups with the
performance of individuals on several “brainteasers.” She found that groups
were much more likely to solve than individuals. For example, on one class of
problems, she found that 8% of individuals solved whereas 53% of groups
solved. Such apparent superiority of groups is often attributed to emergent
qualities of the interaction that improve members’ ability to solve problems—
mutual error-checking, pooling of complementary knowledge, or, more mysteri-
ously, synergy. That is, on the face of it, the data seem to demand an answer
to the question Why are groups better?
Building on earlier work by Lorge and Solomon (1962) and Steiner (1972),
work within the SDS tradition (Davis, 1973; Laughlin, 1980; Laughlin & Ellis,
1986) has reframed these findings of group “superiority” by using a simple
group process as a baseline of comparison. Suppose, the reasoning goes, that
a group will solve if at least one of the members is a solver. That is, if a member
can solve the problem, she will be able to convince the rest of the group to
adopt her solution. Such a process has been dubbed the “best-person” or “truth-
wins” model. The D matrix for such a process is easily expressed if we represent
all incorrect answers in one response category. Thus, individuals can be correct
or incorrect, and groups can be correct or incorrect: a = {correct, incorrect} and
A = {correct, incorrect}. Then, Dt-w for a four-person group would be as follows:
      r
(rC, rI)    A1 (C)   A2 (I)
 (4, 0)      1.0      0.0
 (3, 1)      1.0      0.0
 (2, 2)      1.0      0.0
 (1, 3)      1.0      0.0
 (0, 4)      0.0      1.0
Returning to Shaw’s (1932) data, it is informative to ask what a truth-wins
model predicts for group solution rates when 8% of individuals are able to solve.
Assuming random assignment to four-person groups and using the binomial
function rule, the 8% individual solution rate yields an estimate of p = {.000,
.002, .033, .249, .716}. That is, about 72% of groups will not contain a solver
and will not solve. However, 28% will contain one or more solvers. Given that
53% of Shaw’s groups solved, her groups apparently did better than simply
adopting a solver’s solution. For her data, there is reason to explore
the emergent benefits of having individuals work together on problems.
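The arithmetic behind these figures is just a matrix product of the composition distribution and D. A minimal sketch (Python; the variable names are mine, not notation from the article) reproduces the truth-wins prediction for Shaw’s 8% individual solution rate:

```python
from math import comb

def composition_probs(p, n):
    """P(r solvers, n - r nonsolvers) under random assignment: binomial,
    listed from (n, 0) down to (0, n) to match the D matrix rows."""
    return [comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n, -1, -1)]

# Truth-wins D matrix for four-person groups: solve iff at least one solver.
D_truth_wins = [
    [1.0, 0.0],  # (4, 0)
    [1.0, 0.0],  # (3, 1)
    [1.0, 0.0],  # (2, 2)
    [1.0, 0.0],  # (1, 3)
    [0.0, 1.0],  # (0, 4)
]

pi = composition_probs(0.08, 4)  # ~ {.000, .002, .033, .249, .716}
P_group = [sum(pi[i] * D_truth_wins[i][j] for i in range(5)) for j in range(2)]
print(round(P_group[0], 3))  # predicted group solution rate: 0.284, well below .53
```

Because every row but the last sends all probability to the correct response, the prediction collapses to 1 − (1 − p)^n, the familiar Lorge–Solomon baseline.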
However, groups do not always perform better than their best member (Hill,
1982; Steiner, 1972). A typical finding is that the group solution rate is better
than the individual solution rate but not as good as the truth-wins model
predicts. For example, Laughlin and Ellis (1986) gave five-person groups
multiple-choice mathematics problems. Although truth-wins gave a reasonably
good account of their data, they found that groups with only one solver did not
always get the correct answer—especially for problems that involved elemen-
tary probability theory. Their conclusion was that truth-wins holds when condi-
tions of task demonstrability are met: (a) the solution is based on a mutually
shared system of inference, (b) there is sufficient information to generate a
solution, (c) nonsolvers are able to understand the reasoning that leads to the
correct answer, and (d) solvers are able and sufficiently motivated to present
the rationale leading to the solution. As Laughlin and Ellis (1986) noted, brain-
teasers that have a “eureka” quality are highly demonstrable because the
correct answers are obvious once presented. Eureka problems require only that
the solver present the solution; the self-affirming nature of the solution
does the rest. Other kinds of problems are less demonstrable. Take, for instance,
their probability problems for which one solver in a group did not guarantee
group success. As teachers of elementary statistics and probability can attest,
demonstrating the correctness of the solution to a probability problem can be
taxing. Indeed, for many people, basic concepts in probability are difficult to
grasp and apply. Thus, probability problems may fail to meet fully the ability
and motivation requirements implied in (c) and (d) in the aforementioned
conditions of demonstrability.
For other kinds of problems, the solver may not have the necessary resources
at hand to demonstrate the correct answer. For example, Hinsz (1990) gave
six-person groups a recognition memory task (true/false questions about a
previously viewed video of a simulated job interview). Groups got 85% correct
on average whereas individuals got 68% correct. Based on this simple compari-
son, we might be tempted to conclude that something about the group interac-
tion helped members recall more information (e.g., mutual cuing, error correct-
ing, and the like). However, the SDS analysis cast a different light on the
results. Out of 16 a priori decision schemes, Hinsz found that a “plurality-
correct” scheme best predicted the groups’ performances. Indeed, when a group
had only one correct member, fewer than half responded correctly. Interestingly,
however, for items on which individual solvers tended to be highly confident,
the solution rates for groups with one correct member were about 60%. Thus,
for Hinsz’s recognition memory task, correct members needed either to be in
the plurality or to be highly confident to get the group to adopt their response.
The point here is that the interesting questions about process change when
viewed from the vantage point of the SDS models. Rather than wondering why
groups were doing better than individuals on the recognition task, we wonder
why groups were not performing up to the level of their best member (and, in
many cases, the level of their best two members). When group-versus-individual
comparisons are recast using social combination models, our attention often
shifts from explaining what initially appears as group process gain to explaining
process loss (Steiner, 1972). Why did the group not perform up to the level
permitted by the resources of its members?
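The gap between truth-wins and what Hinsz observed can be illustrated with a back-of-the-envelope calculation. The sketch below is not Hinsz’s analysis: it substitutes a simple majority-correct scheme (with 3–3 ties split at random, an assumption of mine) for his plurality-correct scheme, using the 68% individual accuracy reported above.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability that exactly k of n members are correct."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p, n = 0.68, 6  # individual accuracy and group size from Hinsz (1990)

# Truth-wins: the group is correct iff at least one member is correct.
truth_wins = 1 - (1 - p)**n

# Majority-correct stand-in: correct with 4+ correct members; 3-3 ties split 50/50.
majority = (sum(binom_pmf(k, n, p) for k in range(4, n + 1))
            + 0.5 * binom_pmf(3, n, p))

print(round(truth_wins, 3), round(majority, 3))  # 0.999 vs. 0.809; observed: 0.85
```

The observed 85% falls between the two baselines, consistent with the finding that correct members prevailed when they were in the plurality or highly confident, but not reliably otherwise.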

THE SPARSE DATA PROBLEM AND PROSPECTING

One benefit of having a model of group process is that it permits one to
explore territory that is not covered by existing data (see Abelson, 1968; Hastie,
1988; Ostrom, 1988; and Stasser, 1990, for examples and added discussion of
this point). Davis and Kerr (1986) noted that this benefit is particularly relevant
in the study of group process because of the “sparseness” of data that address
fundamental issues in group process and organizational behavior. On the one
hand, SDS theory provides a framework for expressing ideas about group
process and competitively testing these ideas against data. On the other hand,
it provides a tool for extrapolating beyond and interpolating between existing
data sets—prospective modeling.
Knowing that a model holds under some circumstances, one can readily
address two types of questions. What is the effect on group response rates:
(a) when individuals’ preferences change and (b) when group size changes? The
preference question returns to the basic individual-into-group question: How
do changes in the individual response rates affect the distribution of group
responses? For example, given that a majority/leniency model adequately ac-
counts for the decisions of six-person juries when 50% of jurors favor conviction,
how would jury conviction rates change if an intervention (e.g., pretrial public-
ity) shifted the juror conviction rate from .5 to .7? The individual-into-group
functions, like those depicted in Fig. 2, address such questions. Davis and Kerr
(1986) and Tindale and Nagao (1986) provide nice illustrations of using SDS
theory to explore such questions.

FIG. 3. Predicted jury conviction rate as a function of individual juror conviction rate for
4-, 6-, and 12-person juries under a simple majority model.
The group size question asks how much the individual-into-group signature
changes as group size changes. For example, Figs. 3 and 4 contrast the individ-
ual-into-group functions for 4-, 6-, and 12-person juries for the simple majority
and majority/leniency models. A couple of interesting patterns emerge. For the
majority/leniency model, size has relatively little impact on predicted jury
conviction rates regardless of the individual juror conviction rate. The picture is
different, however, for the simple majority process. There are small differences
when juror conviction rates are below .5, but the larger groups are more likely
to convict when juror conviction rates are in the .6 to .85 range. Such patterns
are nearly impossible to intuit or predict from verbally expressed ideas about
group process.
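These size effects can be checked numerically. The sketch below (Python; all names are mine) traces the individual-into-group function for a simple majority scheme at jury sizes 4, 6, and 12, under the assumption that an even split is credited as a .5 chance of conviction so that even-sized juries yield a single rate; the deadlock subschemes actually fit in the literature differ.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability that exactly k of n jurors favor conviction."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def majority_convicts(p, n):
    """Predicted jury conviction rate under a simple majority scheme.
    Assumption: an even (n/2, n/2) split counts as a .5 chance of conviction."""
    rate = sum(binom_pmf(k, n, p) for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        rate += 0.5 * binom_pmf(n // 2, n, p)
    return rate

# Larger juries amplify the majority tendency: above p = .5 they convict
# more often than small juries; below p = .5, less often.
for p in (0.4, 0.5, 0.6, 0.7, 0.85):
    print(p, [round(majority_convicts(p, n), 3) for n in (4, 6, 12)])
```

Evaluating the function over a grid of juror conviction rates reproduces the crossover pattern in Fig. 3 without any new data collection.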
Another way of framing the group size question is to ask, “Does the same
social process (as represented in a D matrix) yield distinctly different relation-
ships between individual inputs and group responses for different-sized
groups?” As Fig. 2 shows, simple majority and majority/leniency processes do
not produce large differences in jury conviction rates for 6-person juries. The
maximum difference is about .1 (when the individual conviction rate is around
.6 to .7). Figure 5 gives the same comparisons for 12-person juries. Here, the
differences in the predictions of the simple majority and the majority/leniency
models are much larger, particularly when the juror conviction rate is in the
.6 to .7 range. The comparable graph for 4-person juries is not presented, but
suffice it to say that the differences in predictions from the two majority models
virtually disappear when group size is 4. Such explorations can be useful not
only for embedding empirical results in a larger context but also for planning
future empirical studies. For example, the foregoing comparisons suggest that
it would be difficult, at best, to detect the effects of an intervention designed
to dampen the leniency component when using small juries or using a criminal
case that has a juror conviction rate below .5.

FIG. 4. Predicted jury conviction rate as a function of individual juror conviction rate for
4-, 6-, and 12-person juries under a majority/leniency model.

FIG. 5. Predicted jury conviction rate as a function of individual juror conviction rate for
12-person juries and three SDS models: majority/leniency, simple majority, and
conviction-supported.

CONCLUSIONS

SDS theory provides an inductive and deductive tool. On the inductive side,
we strive to summarize and understand empirical observations in terms of
underlying consensus models. On the deductive side, we can use the framework to
explore regions where no data exist—prospective modeling. Moreover, the SDS
tradition encourages attitudes and practices that are characteristic of “good”
science and useful in the study of group process and performance. First, it
requires that we be precise about our theoretical ideas, replacing verbal
vagueness with models that produce point predictions. Second, it encourages
us to extend our considerations beyond our pet models to include a range of
plausible and implausible models. Third, the SDS tradition asks that we think
in terms of how individual inputs are transformed into group outputs. We
should be inclined to probe beyond simple individual-versus-group compari-
sons. Fourth, SDS model testing and fitting are data-demanding enterprises.
Test models with data—as much as you can get your hands on. Fifth, prospect.
Seek to fill gaps in the empirical record. If a model fits the observations obtained
for six-person groups, what does it imply for three-person groups?

REFERENCES

Abelson, R. P. (1968). Simulation of social behavior. In G. Lindzey & E. Aronson (Eds.), The
handbook of social psychology (2nd ed., Vol. 2, pp. 274–356). Reading, MA: Addison–Wesley.
Bentler, P. M., & Bonnett, D. G. (1980). Significance testing and goodness of fit in the analysis of
covariance structures. Psychological Bulletin, 88, 588–606.
Davis, J. H. (1969). Group performance. Reading, MA: Addison–Wesley.
Davis, J. H. (1973). Group decision and social interaction: A theory of social decision schemes.
Psychological Review, 80, 97–125.
Davis, J. H., & Kerr, N. L. (1986). Thought experiments and the problem of sparse data in
small-group research. In P. Goodman (Ed.), Designing effective work groups (pp. 305–349). New
York: Jossey–Bass.
Gigone, D., & Hastie, R. (1996). The impact of information on group judgment: A model and
computer simulation. In E. H. Witte & J. H. Davis (Eds.), Understanding group behavior:
Consensual action by small groups (Vol. I, pp. 221–251). Hillsdale, NJ: Erlbaum.
Gigone, D., & Hastie, R. (1997). Proper analysis of the accuracy of group judgments. Psychological
Bulletin, 121, 149–167.
Hastie, R. (1988). A computer simulation model of person memory. Journal of Experimental Social
Psychology, 24, 423–447.
Hastie, R., & Stasser, G. (in press). Computer simulation methods in social psychology. In H.
Reis & C. Judd (Eds.), Handbook of research methods in social psychology. New York: Cambridge
Univ. Press.
Hays, W. L. (1988). Statistics (4th ed.). New York: Holt, Rinehart & Winston.
Hill, G. W. (1982). Group versus individual performances: Are N + 1 heads better than one?
Psychological Bulletin, 91, 517–539.
Hinsz, V. B. (1990). Cognitive and consensus processes in group recognition memory performance.
Journal of Personality and Social Psychology, 59, 705–718.
Kerr, N. L., Stasser, G., & Davis, J. H. (1979). Model testing, model fitting, and social decision
schemes. Organizational Behavior and Human Performance, 23, 399–410.
Kerr, N. L., Atkins, R. S., Stasser, G., Meek, D., Holt, R. W., & Davis, J. H. (1976). Guilt beyond
a reasonable doubt: Effects of concept definition and assigned decision rule. Journal of Personality
and Social Psychology, 34, 282–294.
Kerr, N. L., MacCoun, R. J., & Kramer, G. P. (1996). “When are N heads better (or worse) than
one?”: Biased judgment in individuals versus groups. In E. H. Witte & J. H. Davis (Eds.),
Understanding group behavior: Consensual action by small groups (Vol. I, pp. 105–136). Hillsdale,
NJ: Erlbaum.
Kerr, N. L., & Watts, B. L. (1982). After division, before decision: Group faction size and predelibera-
tion thinking. Social Psychology Quarterly, 45, 198–205.
Laughlin, P. R. (1980). Social combination processes of cooperative problem-solving groups on
verbal intellective tasks. In M. Fishbein (Ed.), Progress in social psychology (Vol. 1, pp. 127–155).
Hillsdale, NJ: Erlbaum.
Laughlin, P. R., & Ellis, A. L. (1986). Demonstrability and social combination processes on mathe-
matical intellective tasks. Journal of Experimental Social Psychology, 22, 177–189.
Laughlin, P. R., Kerr, N. L., Munch, M. M., & Haggarty, C. A. (1976). Social decision schemes and
the same four-persons on two different intellective tasks. Journal of Personality and Social
Psychology, 33, 80–88.
Lorge, I., & Solomon, H. (1962). Group and individual behavior in free-recall verbal learning. In
J. H. Criswell, H. Solomon, & P. Suppes (Eds.), Mathematical methods in small group processes
(pp. 221–231). Stanford, CA: Stanford Univ. Press.
MacCoun, R. J., & Kerr, N. L. (1988). Asymmetric influence in mock jury deliberation: Jurors’
bias for leniency. Journal of Personality and Social Psychology, 54, 21–33.
Ostrom, T. M. (1988). Computer simulation: The third symbol system. Journal of Experimental
Social Psychology, 24, 381–392.
Shaw, M. E. (1932). Comparison of individuals and small groups in the rational solution of complex
problems. American Journal of Psychology, 44, 491–504.
Shaw, M. E. (1976). Group dynamics: The psychology of small group behavior. New York: McGraw–
Hill.
Stasser, G. (1988). Computer simulation as a research tool: The DISCUSS model of group decision
making. Journal of Experimental Social Psychology, 24, 393–422.
Stasser, G. (1990). Computer simulation of social interaction. In C. Hendrick & M. S. Clark (Eds.),
Research methods in personality and social psychology (Vol. 11, pp. 120–141). Newbury Park,
CA: Sage.
Stasser, G., Kerr, N. L., & Bray, R. M. (1982). The social psychology of jury deliberation: Structure,
process, and product. In N. L. Kerr & R. M. Bray (Eds.), The psychology of the courtroom.
Orlando, FL: Academic Press.
Stasser, G., Kerr, N. L., & Davis, J. H. (1989). Influence processes and consensus models in decision-
making groups. In P. Paulus (Ed.), Psychology of group influence (2nd ed., pp. 279–326). Hillsdale,
NJ: Erlbaum.
Stasser, G., & Taylor, L. A. (1991). Speaking turns in face-to-face discussions. Journal of Personality
and Social Psychology, 60, 675–684.
Steiner, I. D. (1972). Group process and productivity. New York: Academic Press.
Tindale, R. S., & Nagao, D. H. (1986). An assessment of the potential utility of “scientific jury
selection”: A “thought experiment” approach. Organizational Behavior and Human Decision
Processes, 37, 409–425.
Tindale, R. S., Smith, C. M., Thomas, L. S., Filkins, J., & Sheffey, S. (1996). Shared representations
and asymmetric social influence processes in small groups. In E. H. Witte & J. H. Davis (Eds.),
Understanding group behavior: Consensual action by small groups (Vol. I, pp. 81–103). Hillsdale,
NJ: Erlbaum.

Received April 1999
