Measuring Trust
Paul C. Bauer and Markus Freitag
The Oxford Handbook of Social and Political Trust
Edited by Eric M. Uslaner

Subject: Political Science, Political Methodology
Online Publication Date: Jan 2017
DOI: 10.1093/oxfordhb/9780190274801.013.1

Abstract and Keywords

This article focuses on the measurement of trust. First, we start with a brief
conceptualization of trust, contrasting it with the concept of generalized trust. Second,
we survey developments in trust measurement since the 1960s. Third, we summarize and
try to systematize a number of measurement debates that have taken place. Fourth, we
outline how trust measurement may develop in the future, discuss how differently
formulated survey questions may abate some of the debates within the field, and present
empirical data that follow some of these directions. Essentially we argue that trust—as
opposed to generalized trust—should be measured through reliance on a set of more
specific questions that measure expectations across a series of different situations.

Keywords: measurement, trust, trust measurement, generalized trust

Introduction
The concept of trust is regarded as one of the essential building blocks of social science
theory. It has been studied across a variety of disciplines and has even been equated with
the broader concept of social capital (Whiteley 2000, 450). However, while many scholars
agree on the essential role trust plays as a concept in social theory, they do not
necessarily agree on its meaning (Bacharach and Gambetta 2001; Gambetta 1988; Hardin
2002; Misztal 2013; Nooteboom 2002; Seligman 2000; Uslaner 2002; Warren 1999). To
the contrary, trust research has produced an impressive number of definitions that all too
often diverge in important aspects (Bauer 2014; Rousseau et al. 1998). This conceptual
diversity did not result in a common way of measuring the two concepts (Cook and
Cooper 2003; Lyon et al. 2012). And despite the fact that most empirical work is based on
a small set of measures, measurement remains a contested area. This article is guided by
the following question: How was trust measured in the past and how can or should we
measure trust in the future? In answering our research question we start, first, with a
short conceptual discussion. Second, we review developments and innovations in trust
measurement, starting with studies from the 1960s. Third, we summarize measurement
debates that have occurred in the field of trust research. Fourth, we outline directions
through which trust measurement may develop in the future, recognizing the deficiencies
that characterize current measurement.

Defining Trust
Measurement requires a clear conception of trust. In our view, the term “trust” first and
foremost designates an expectation and not a behavior (e.g., Hardin 2002). Mixing the
two conflates trust with cooperative behavior (Cook and Cooper 2003, 213). Thus, we
recommend calling the latter “trusting behavior” or “behaviorally exhibited trust” (Barr
2003; Fehr et al. 2002).

Moreover, trust is situation-specific. Such situations can be parsimoniously described by
using a few parameters (Baier 1986; Hardin 1992, 154; Luhmann 1979, 27; Sztompka
1999, 55). When speaking about trust, we essentially speak about a truster A that trusts
(judges the trustworthiness of) a trustee B with regard to some behavior X in context Y
(Bauer 2014, 2–3) at time t.1 Adding time t clarifies that trust may change, that is, a
truster may adapt his expectations over time. Subsequently, we can replace these
parameters with different real-life trustees, behaviors, and contexts. Sometimes
specifying one parameter makes another one redundant. For instance, Hans (A) may trust
his brother (B) to return borrowed money (X), regardless of the context Y in which the
interaction takes place. However, for strangers it probably matters where and at what
time we meet them. In our experience, any discussion of trust becomes much more
systematic when one starts from the conceptual statement above.
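
As a compact illustration of this parameterization (our sketch of one possible notation, anticipating the probabilistic reading of expectations proposed later in this chapter; it is not a formalism taken from the works cited above):

```latex
% Trust as a situation-specific expectation: truster A's subjective probability
% that trustee B shows behavior X in context Y at time t (0 = will not occur,
% 1 = will certainly occur).
\[
  \mathrm{Trust}_{A}(B, X, Y, t) \;=\; p_{A}\!\left(B \text{ does } X \mid Y, t\right),
  \qquad p_{A} \in [0,1].
\]
```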

The reduced statement “A trusts” describes the idea that individuals possess some
generalized situation-independent expectation. That is, independently of parameters B, X,
and Y (and t), that is, across a wide variety of situations, some individuals are simply
more trusting than others. It seems helpful to conceive it as some basic, “stable” starting
level from which situational expectations may deviate in a positive or negative direction.
Different accounts reflect this idea. See, for instance, Uslaner’s (2002) “generalized
trust”; Erikson’s (1959, 57) concept of “basic trust”; Rotter’s (1967, 653) idea of a
“generalized expectancy”; or Coleman’s idea that persons possess a “standard estimate
of the probability of trustworthiness, p*, for the average person he [or she]
meets” (Coleman 1990, 104). It also seems inherent in the concepts of “trust propensity”
or “trait trust” that are closely linked to the concept of personality (Colquitt et al. 2007;
McCrae and Costa 2003; Mooradian et al. 2006).2

The main difference between social and political trust lies in the specification of both
trustee and expected behavior. The latter concept entails that the trustee is a political
actor, for example, a government. It seems rather unsurprising that expectations toward
governments differ from expectations toward fellow citizens (e.g., Newton 2001, 203).
Importantly, in our view, to speak of trust, it is unnecessary that truster and trustee know
each other personally (see Hardin 2002 for a contrasting view). In the next section we
give an overview of various significant developments and innovations in the area of trust
measurement.

Developments and Innovations in Measurement
The field of trust measurement is so extensive that it is impossible to discuss all
developments and innovations. Hence, our review is necessarily selective, and we focus more
strongly on social trust than on political trust. In doing so we can draw on a set of
insightful studies that have previously reviewed measures of trust (Cook
and Cooper 2003; Hardin 2002; Levi and Stoker 2000; Lyon et al. 2012; Nannestad 2008;
Nooteboom 2002; Sztompka 1999; Uslaner 2002).

Direct measures of the concept let subjects self-report their trust. Indirect measures try
to infer trusting expectations by observing individuals’ decisions, behavior, and reactions.
Behavioral scholars have gone to great lengths to construct lab experiments that allow
for capturing behavior that is caused by trust and not by alternative motivations. For
instance, coining the classic trust game, Berg et al. (1995, 137) suggest that the “double-
blind and one-shot controls used in this design strengthen our conclusion that self-
interest alone cannot explain our results.” Early on the suspicion that behavior in such
games may be driven by self-interest or a competitive spirit was one of the motivations
for Rotter (1967) to develop trust survey measures (Cook and Cooper 2003, 214).

Trust measurement—as systematic measurement across a large number of units—started
in the first half of the twentieth century. Self-report measurement predates behavioral
measurement in lab experiments and seemingly started in the 1940s. The first record of a
version of the most-people (trust) question3—the most popular measure of trust—may be
a questionnaire from 1942 (Bauer 2015a, 16).4

Rosenberg (1956, 690)—seemingly the first to construct a systematic measurement
instrument—probably coined the balanced version of the most-people question: “Some
people say that most people can be trusted. Others say you can’t be too careful in your
dealings with people. How do you feel about it?” Rosenberg (1956) combined multiple
items and constructed a faith-in-people Guttman scale.5 In his study, Rosenberg (1956)
was interested in the relationship between “faith in people” and individuals’ political
ideologies as well as evaluations/views of specific political questions. Later on,
Rosenberg’s questions were used by Almond and Verba (1963) in their seminal
comparative study on the civic culture. Modified versions of these questions are still used
today and have found their way into many longitudinal as well as comparative surveys.

In contrast to Rosenberg’s self-report measures, Deutsch (1960) observes participants’
behavior while letting them play the prisoner’s dilemma in a laboratory setting. Deutsch
paved the way for behavioral measurement of trust (Cook and Cooper 2003).

One of the first to measure political trust was Stokes (1962), who followed his interest in
measuring basic evaluative orientations toward political actors and developed a
corresponding set of questions (Levi and Stoker 2000). The concept of political trust
never figured into Stokes’s analysis. However, later on his questions came to be known as
the “trust-in-government questions” (Levi and Stoker 2000, 477) and they were included
in the American National Election Studies (ANES) starting in 1964 (Citrin and Muste
1999, 477; A. H. Miller 1974).6 The questions are introduced as follows: “People have
different ideas about the government in Washington. These ideas don’t refer to
Democrats or Republicans in particular, but just to the government in general. We want
to see how you feel about these ideas. For example… ,” followed by five items that
measure trust in government.7

Thereafter, interest in political trust rose massively, triggered by the works of Easton
(1965) and Gamson (1968) (Levi and Stoker 2000, 477). Nowadays, many surveys contain
questions that have the following basic structure: “Using this card, please tell me on a
score of 0–10 how much you personally trust each of the institutions I read out. 0 means
you do not trust an institution at all, and 10 means you have complete trust. Firstly, the
legal system?” (European Social Survey 2012). Questions are mostly located in batteries
and list a number of institutions that can be rated by the respondent.

In the same decade, Rotter (1967) developed a measurement instrument for interpersonal
trust that contains twenty-five questions and fifteen filler questions.8 Rotter (1967, 654)
(see also Rotter and Stein 1971) was dissatisfied with social psychologists’ focus on the
prisoner’s dilemma and wanted to measure trust as a personality factor that predicts
cooperative behavior in a wide range of settings (Cook and Cooper 2003, 214). Because
Rotter was suspicious that such games measure competitive behavior, he tested the
validity of his scale against sociometric ratings of the participants by their student peers
(Rotter 1967, 653). But in the end Rosenberg’s (1956) questions remained more popular,
with the shorter length of Rosenberg’s measurement instrument being one of the potential
reasons.

Within the experimental research tradition that Rotter viewed critically, Berg et al. (1995) designed an
investment game that later came to be known as the “classical trust game.”9 Berg et al.
(1995) aimed at controlling for alternative explanations of behavior such as reputation
effects, contractual precommitments, and punishment threats. The structure is the
following: Truster A is given a certain amount of money. A then chooses to send all, some,
or none of this amount of money to the trustee (recipient), which is called the “amount
sent.” The “amount sent” is multiplied by some factor and received by trustee B. A keeps
the rest to himself. B, the recipient, chooses to send all, some, or none of the received
money back to the sender, which is called the “amount returned” (Berg et al. 1995,
123). Trust is simply equated and measured with the (average) amount sent across
trusters; and trustworthiness is equated with the (average) amount returned across
trustees. The more A sends, the higher is A’s trust; the more B returns, the higher is B’s
trustworthiness. To this day, the classic trust game is immensely popular and used
extensively, sometimes with slight modifications of the original rules. While the work of
Berg et al. (1995) is seminal for behavioral measurement, it does not discuss in detail the
lines of reasoning or expectations individuals may follow in their decision to trust in the game.
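
To make the mechanics just described concrete, the following sketch implements the payoff logic of a Berg-et-al.-style investment game and the way trust and trustworthiness are read off the choices. The multiplier of 3 and the toy numbers are assumptions for illustration, not details reported in this chapter.

```python
# Sketch of the payoff logic in a classic (Berg et al. 1995-style) trust game.
# Assumption: the "amount sent" is tripled; the chapter only says it is
# "multiplied by some factor."

from statistics import mean

def play_round(endowment: float, amount_sent: float, amount_returned: float,
               multiplier: float = 3.0) -> dict:
    """Payoffs for one truster (A) / trustee (B) pair."""
    assert 0 <= amount_sent <= endowment
    received_by_b = amount_sent * multiplier
    assert 0 <= amount_returned <= received_by_b
    return {
        "payoff_A": endowment - amount_sent + amount_returned,
        "payoff_B": received_by_b - amount_returned,
    }

print(play_round(endowment=10, amount_sent=5, amount_returned=8))

# Behavioral trust is equated with the (average) amount sent across trusters,
# trustworthiness with the (average) amount returned across trustees.
rounds = [(10, 5, 8), (10, 10, 12), (10, 0, 0)]  # (endowment, sent, returned), toy data
behavioral_trust = mean(sent for _, sent, _ in rounds)
trustworthiness = mean(returned for _, _, returned in rounds)
print(behavioral_trust, trustworthiness)
```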

Integrating experiments with surveys, Glaeser et al. (2000) systematically contrast self-
report measures with behavioral measures. Glaeser et al. (2000) show to what extent
trusting behavior in an experiment—measured with a modified version of the Berg et al.
(1995) game as well as an envelope drop experiment—is predicted by trust self-reports
and self-reports of past trusting behavior.10 Thereby the authors test a wide variety of
self-report measures such as the trust questions included in the General Social Survey,11
the Faith-in-People Scale (Rosenberg 1956), the Interpersonal Trust Scale (Rotter 1967),
and questions querying past trusting behavior. The authors find that self-report measures
“of past trusting behavior are better than [the] abstract attitudinal questions in
predicting subjects’ experimental choices” (Glaeser et al. 2000, 813). However, to date,
evidence on which trust questions are the best predictors of trusting behavior in
experiments is mixed (Capra et al. 2008; Ermisch et al. 2009; Fehr et al. 2002).

Following a different avenue, Buskens and Weesie (2000) investigate a concrete trust
situation, namely the situation in which a buyer wants to buy a used car from a car
dealer. Buskens and Weesie (2000) measure trust as a decision between two vignettes,
that is, descriptions of situations. Relying on this and similar methods allows us to
investigate the impact of all sorts of hypothetical scenarios on trust judgments or trusting
decisions (in this case the decision to buy). One such contextual characteristic could be
that the Auto Shop is a well-known garage and has many customers in the buyer’s
neighborhood. The authors assume that “the larger the probability that the dealer abuses
trust, the smaller the probability that the buyer will take the risk of placing
trust” (Buskens and Weesie 2000, 228).

Probably the first to integrate a behavioral experiment into a “representative” large-scale
survey are Fehr et al. (2002). Experiments in which subjects do not interact with each
other can be added to surveys more easily. However, the significant step forward
provided by Fehr et al. (2002) is to develop a method suitable for implementing a sequential
game within a survey in a simultaneous manner, without the use of the strategy method.
Fehr et al. (2002) use decisions in an investment game to measure behavioral trust as
well as different survey questions to measure self-reported trust.12 Consequently, the
authors can identify which survey questions correlate well with behaviorally exhibited
trust (i.e., decisions in the experiment). Moreover, in contrast to Glaeser et al. (2000),
their sample is more informative in that it is not confined to students.

Introducing the implicit-association test (IAT), Burns et al. (2006) investigate the safety
culture at a U.K. gas plant. Arguing that self-report measures may be biased by
respondents’ motivations of self-presentation, the authors try to measure trust implicitly.
Implicit measures were originally developed to measure prejudices (e.g., Fazio and Olson
2003). Participants are shown different categories of people on a screen (e.g., the word
“Workmates”). These terms may or may not trigger an automatic attitude. Subsequently,
participants are shown a trust-related or distrust-related target word (e.g., “Caring”) and
have to press a key labeled “trust” or “distrust” as quickly as possible. The idea is that
the presence of an automatic attitude will impact the latency time of participants’
answers. In other words, if a participant has an automatic attitude toward a certain
trustee category that mirrors trust or distrust, respectively, the participant will be
quicker to push the respective button labeled with trust or distrust. Burns et al. (2006,
1149f.) carefully outline various potential problems with this measurement. For instance,
automatic attitudes should only matter when the motivation or opportunity to deliberate
is low. In many real-life situations that require trust, individuals presumably do have
time for deliberation. Moreover, it is unclear to what extent the basic motivation behind
this measurement approach—self-presentation bias—matters as strongly for self-reported
trust as it does for prejudices.

Contributing to classic self-report measurement, Soroka et al. (2007) develop the so-
called wallet questions that mirror a field experiment conducted by the Reader’s Digest
described in Knack and Keefer (1997). In this experiment, wallets were dropped in a
number of cities across the world, and Knack and Keefer (1997) find that the percentage
of wallets returned in each country strongly correlates with answers to the most-people
question on the country level. Soroka et al. (2007, 99) explicitly ask, “If you lost a wallet
or purse that contained two hundred dollars, how likely is it to be returned with the
money in it if it was found by…” and provide four different trustee categories.13 These
questions specify a situation—namely an expected behavior (returning the wallet)—as
well as specific groups of people. Thereby, the authors make a major step toward
measuring situational trust as we describe it below.

In their visual stimuli experiments, Todorov et al. (2008) use pictures of faces with
previously predicted levels of trustworthiness as stimuli in a functional magnetic
resonance imaging study. They use an implicit task and let participants do a face memory
task in which they are presented with blocks of faces and are asked to indicate whether a
test face was presented in the block. Participants did not engage in explicit evaluation of
the faces’ trustworthiness, but their neural response was measured. Results indicate that
the amygdala response changed as a function of facial trustworthiness.

Within experimental research, Ermisch et al. (2009) publish a critique of the classical
version of the trust game suggested by Berg et al. (1995). Among other aspects, they
point out that the game does not properly reflect trust situations in real life. Despite the
attempt to isolate trust as motivation for behavior, the observed behavior in the classical
trust game may be due to different motivations such as gift-giving (Ermisch et al. 2009,
753). They develop a game—termed the “binary” trust game—with modified rules that
reflects their criticism and integrate it into a survey similar to Fehr et al. (2002). The
central aspect in their game is its binary nature. Truster as well as trustee have only two
behavioral options (keeping vs. sending money). By giving the trustee no discretion in the
amount he can return, that is, by giving him only two options, it is not left to the trustee
to define which behavior can be regarded as trustworthy. In the classic game a trustee
may perceive that any amount returned is a display of trustworthiness on his side.

Naef and Schupp (2009) contribute to the debate on the behavioral relevance of survey
measures by developing two questions (termed “SOEP-trust”): “How much do you trust
strangers you meet for the first time” and “When dealing with strangers, it’s better to be
cautious before trusting them.”14 They contrast these questions with a series of more
specific questions and find that trust in strangers loads on an independent component.
Moreover, they find that an experimental measure (a behavioral measure based on Fehr
et al. 2002) of trust is significantly correlated with SOEP-trust, which is specifically aimed
at measuring trust in strangers.

To sum up, various self-report and behavioral measures have been introduced during the
last decades. Despite some innovations, researchers today primarily use modified
versions of questions that were introduced in the 1940s and 1950s for social trust and in
the 1960s for political trust. The most widely used question to measure generalized trust
is a modified version of the most-people question presumably introduced in 1942.
Regarding lab game experiments, researchers started out with the prisoner’s dilemma
(see Deutsch 1960) and now mainly rely on the classic trust game (Berg et al. 1995). In
contrast, the innovations introduced (e.g., vignette experiments, brain imaging, IAT)
remain confined to a few studies. As discussed in the following section, the validity of our
standard measures is increasingly scrutinized, and so we will outline some of the debates
that have happened in the self-report tradition.

Debates and Questions Regarding Measurement
New measures should ideally remedy flaws of old measures. In this section we briefly
summarize some of the debates that surround current self-report measurement. Thereby
we lean on a previous discussion by Uslaner (2011). In the subsequent section we present
some solutions to those problems.

The behavioral-relevance debate concerns the question of whether self-reports of trust
(expectations) are really linked to behavior (Fehr et al. 2002; Glaeser et al. 2000; Naef
and Schupp 2009; Uslaner 2012).15 This debate has to be delimited from the conceptual
debate on whether trust is a behavior or rather an expectation. Following the paradigm
“talk is cheap,” it is often questioned whether self-reports are behaviorally relevant (Gächter
et al. 2004; Glaeser et al. 2000; Uslaner 2012). Glaeser et al. (2000, 841) suggest that
“standard survey questions about trust do not appear to measure [behavioral] trust”;
however, self-reports of past trusting behavior seem to have some predictive power for
trusting behavior (Glaeser et al. 2000). Gächter et al. (2004) find that the most-people
question does not predict behavior in a public goods game. However, the General Social
Survey (GSS) trust index seems to have predictive power (see also Ahn et al. 2003). In
general, findings still seem inconsistent (Capra et al. 2008).

The item-number debate (Uslaner 2012) contrasts single-item measures of generalized
trust with multi-item (or scale) measures. Multi-item measures of latent concepts are
generally hailed because they allow for a correction of measurement error. A
respondent’s “wrong” self-placement on one scale can be mitigated by “right” placements
on other scales. Generalized trust, for instance, is often measured using the “three-item
misanthropy scale,” known as the General Social Survey (GSS) index (Brehm and Rahn
1997; Zmerli and Newton 2008). Uslaner (2011, 75–76) suggests that an increase in
indicators may also decrease validity if some of the indicators do not tap into the same
underlying concept. Moreover, while self-reported measurement originated in psychology
where researchers commonly use many items to measure latent concepts (Uslaner
2012, 76), it is precisely their length that made such measurement instruments
unattractive for large-scale population surveys with limited space (e.g., instruments such
as Rotter 1967, 654).

A related discussion—the dimensions debate or forms debate—concerns the question
whether there are different forms of trust, that is, whether items measuring trust can be
reduced to fewer latent dimensions such as generalized trust or particularized trust
(Freitag and Bauer 2013; Freitag and Traunmüller 2009; Newton and Zmerli 2011;
Omodei and McLennan 2000; Whiteley 2000; Wollebæk et al. 2012; Yamagishi and
Yamagishi 1994). A similar debate has occurred in the subfield of political trust between
Fisher et al. (2010, 2011) and Hooghe (2011). If a subset of trust scales (among a larger
set) correlates with each other but not with other ones, it signals that this subset of
scales measures something different. Depending on the set of scales that are analyzed,
results show that one should at least differentiate between two or three dimensions
(Freitag and Bauer 2013; Freitag and Traunmüller 2009; Newton and Zmerli 2011;
Wollebæk et al. 2012). However, with more scales that are more refined, we are likely to
find more dimensions. Importantly, from a technical point of view both the item-number
and the dimensions debate are related to the question whether respondents choose the
same (or similar) scale points on groups of trust scales, that is, whether their positions
correlate. If so, methods such as principal components analysis or confirmatory factor
analysis will reveal that the underlying variation can be described by fewer latent factors.
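
The following sketch illustrates this logic with simulated answers (the item names and the two underlying factors are invented for the example): if respondents choose correlated positions on subsets of trust scales, a principal components analysis shows that a small number of components accounts for most of the variation.

```python
# Illustrative sketch: how correlated trust items collapse onto fewer latent dimensions.
# Data and item names are simulated/hypothetical; real analyses would also inspect
# loadings and typically follow up with (multigroup) confirmatory factor analysis.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n = 500
particularized = rng.normal(size=n)   # latent trust in known others
generalized = rng.normal(size=n)      # latent trust in unknown others

items = pd.DataFrame({
    "trust_family":     particularized + rng.normal(scale=0.5, size=n),
    "trust_friends":    particularized + rng.normal(scale=0.5, size=n),
    "trust_neighbors":  particularized + rng.normal(scale=0.7, size=n),
    "trust_strangers":  generalized + rng.normal(scale=0.5, size=n),
    "trust_foreigners": generalized + rng.normal(scale=0.6, size=n),
})

pca = PCA().fit(items)
# Two components dominate here, i.e., two "forms" of trust in this simulated battery.
print(pca.explained_variance_ratio_.round(2))
```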

The scale-length debate (Lundmark et al. 2015; Uslaner 2009; Uslaner 2012) surrounds
the use of different answer scales (e.g., dichotomous vs. longer answer scales). The topic
of scale length has long occupied survey methodologists (Krosnick and Presser 2010).
While the dichotomous version of the most-people question was the standard for a long
time, several surveys have changed to longer answer scales when measuring generalized
trust (e.g., Swiss Household Panel; Citizenship, Involvement, Democracy; European
Social Survey). Regarding the most-people question, Uslaner (2011) argued in
favor of the classic dichotomous version, but new evidence suggests that longer scales
may be advantageous (e.g., Lundmark et al. 2015).

The equivalence debate questions whether scales and concepts in questions are
interpreted in the same way across respondents (Davidov 2009; Freitag and Bauer 2013;
Reeskens and Hooghe 2008). In other words, finding a difference between two
respondents could be related to a true difference in trust or to the fact that they interpret
the question differently (referred to as interpersonal incomparability or measurement
inequivalence). For instance, it is suggested that concepts such as “trusting most people”
or “being cautious” may have a different meaning for different respondents (Miller and
Mitamura 2003). Currently researchers mainly use two approaches to investigate this
problem: they probe respondents as to what they were thinking while answering, using a
think-aloud approach or a follow-up open-ended question; or they use multigroup
confirmatory factor analysis.

Following the first approach, Uslaner (2002, 18–19, 73, n. 7) analyzed think-aloud
responses to the most-people question in the ANES 2000 pilot and concludes that the
“question on trust brings up general evaluations of society.” Uslaner compares the most-
people question to the two other trust questions that belong to the GSS trust index:
“Would you say that most of the time people try to be helpful, or that they are just looking
out for themselves?” and “Do you think most people would try to take advantage of you if
they got the chance or would they try to be fair?” and argues that the most-people
question, in comparison, fares best. Sturgis and Smith (2010) conduct a similar analysis
for the most-people question and a second question measuring trust in people in the
local area. They conclude that differences in the interpretation of the trustee categories B
—specifically most people and people in your local area—may lead to a bias in
responses.16

In Figure 1 we replicate and illustrate this problem using two student samples that were
given the classic most-people question (eleven-point scale), followed by a question probing
whom they had in mind when considering the category “most people.”17 The graph
illustrates that in this homogeneous sample respondents do not necessarily tend to think of
strangers or people that are unknown to them. Many think of situations (e.g., meeting
someone in the train/street) or of people they know (e.g., friends, family members, etc.).
These results are in line with previous findings (Sturgis and Smith 2010; Uslaner 2002).18

Figure 1 Associations with “most people.”

Other scholars follow the second approach. Instead of probing questions, they use
structural equation models to assess measurement equivalence of latent trust constructs
(André 2013; Davidov 2009; Freitag and Bauer 2013; Poznyak et al. 2014; Reeskens and
Hooghe 2008; Van der Veld and Saris 2011). Evidence is mixed, with more encouraging
results at the subnational level and less encouraging results across countries. However,
ultimately the probing strategy seems to be the preferable strategy for identifying such
problems in survey questions.

The Future of Trust Measurement


Given the debates just mentioned, we suggest that the field of trust research may benefit
from a new set of differently formulated survey questions that follow the statement A
trusts B to do X in context Y (see Bauer 2014).

First, questions should be more specific and contain explicit references to single trustees
or trustee groups that are sufficiently precise (e.g., “your parents and your siblings”
instead of “family”). There is always a trade-off between generality and specificity. From
a measurement perspective, it is an advantage if a trustee category comprises a clear set
of persons. The finding that a respondent A2 has a lower level of trust in his family than a
respondent A1 could be due to the wider circle of persons A2 is thinking of (e.g., uncles,
aunts, and cousins).

Second, questions should be more specific in that they explicitly refer to some kind of
behavior X that truster A expects of a trustee B. The level of trust depends on the content
of the trust “relationship.” If left unspecified, respondents may fill in different
specifications, and it makes a difference whether a trustee is expected to keep a secret or to
return a large amount of borrowed money. Relatedly, Hetherington and Husser (2012)
find that people evaluate governmental trustworthiness considering different issues,
depending on the salience of these issues. In our view their results stem from the fact
that no X is specified in the classical standard political trust questions. Querying
respondents’ trust in government with regard to a specific X, such as “will lower the
unemployment rate,” should solve this problem.

Third, we think that measurement would benefit if questions refer to a more concrete
context Y. Our probing data reveal that a considerable share of respondents think of
concrete situations, such as “people in the bus or train,” “people that I ask for the time,”
“people that I meet in the street,” or “strangers in the train.” In other words, holding the
trustee and an expected behavior constant, the imagined context—a sunny park or a dark
street—may still vary across respondents.

Finally, we suggest eliciting a subjective probability regarding whether a specified
behavior by a trustee B in a context Y will or will not occur. While we are less firm on this
point, it has several advantages. To start, it ensures that our measure is aligned with a
clearer conception of trust, namely trust as subjective probability, an idea that is behind
various accounts of trust (see Bauer 2014). Besides, it ensures that the scale has clear,
quantitative, and balanced endpoints. Endpoint quantifiers such as “complete trust” or
“you cannot be too careful in dealing with people” can be regarded as vague. In contrast,
an endpoint of probability 0 means that an event will not occur; 1 means that it will
occur. Such an interpretation should be understood equivalently across respondents. This
idea is in line with recommendations by Tourangeau et al. (2000, 47f., 61) to use scales
with absolute quantifiers. While the concepts of “probability” or “percent chance” travel
much better, the term “trust” has “many and varied meanings” in vernacular application
(Hardin 2002, 20) and even more so across languages. We have also seen that evidence
on measurement equivalence is mixed, with probing strategies warranting greater
caution. Moreover, Clinton and Manski (2002, 2) argue that respondents seem to have
little difficulty using probabilities to express the likelihood of future events if they are
adequately introduced to the “percent chance” scale. In principle it should also be
possible to give respondents test questions to see whether they understand a simple
probability scale.19 Finally, using such scales may provide a solution for the conceptual
controversy around trust and distrust that is linked to the endpoints we choose for trust
scales (Cook et al. 2005; Hardin 2002; Lagace and Gassenheimer 1989; Lewicki and
Brinsfield 2012; Wrightsman and Wuescher 1974). We would argue that mistrust/distrust
is an antonym for trust with the scale reversed; for example, a low estimated probability
simultaneously reflects a low level of trust and a high level of distrust.
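
Formally, and as our gloss rather than a formula proposed in the literature, this amounts to reading distrust as the same probability scale reversed:

```latex
% If trust in a given situation is elicited as a subjective probability p that
% B does X in context Y, distrust can be read as the same scale reversed.
\[
  \mathrm{Distrust}_{A}(B, X, Y) \;=\; 1 - p_{A}\!\left(B \text{ does } X \mid Y\right).
\]
```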

If followed, these recommendations provide a natural solution to the behavioral-
relevance debate. It is not surprising that general survey questions (e.g., the most-people
question) do not predict situation-specific behavior such as the choice of sending x dollars
in a lab. If we measure trust as a specific expectation, it should be related to such
behavior. Ermisch et al. (2009, 751) mention the “expectation that the trustee will do X,
framed in terms of a probability” as one component that leads to trusting behavior and
find that a “person’s expectation of the chances of return is strongly related to their
experimental trust decision” (Ermisch et al. 2009, 760). Fehr et al. (2002, 532 and Table
4) measure a similar expectation (about the amount returned) and find that this
expectation predicts behavior. Accordingly, Sapienza et al. (2013, 3) suggest that the
“best measure… is not the amount sent, but the expectation about the amount returned
for large amounts sent.” Starting from the idea that a trustee’s behavior is binary, that
is, he either fulfills trust or not (Ermisch et al. 2009, 753), we suggest measuring
expectations about such binary behaviors.

The item-number debate relates to a decision between notions of reflective or formative
indicators (e.g., Diamantopoulos and Winklhofer 2001). In the former approach we test
whether a latent concept is reflected by different indicators, that is, whether they
correlate to a certain degree and can be reduced to a lower number of higher-order
dimensions. Potentially, the more interesting approach for the future is the latter one.
Here we predefine that a certain concept comprises a set of indicators and decide about
some aggregation rule for the values of the single indicators. In our view we could try to
measure trust as the average across situation-specific trust indicators—let’s call it cross-
situational trust—following a formative measurement strategy. This idea comes close to
various conceptual definitions (Coleman 1990; Rotter 1967; Uslaner 2002), and we can
investigate how this measure relates to classical, generalized trust measures. However,
the construction of such a measure requires an elaborate discussion of which situations
(i.e., which Bs, Xs, and Ys) should be included.
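
A minimal sketch of this formative strategy, with hypothetical item names and equal weights as the (assumed) aggregation rule; the substantive work of choosing which Bs, Xs, and Ys to include is exactly what the sketch leaves open:

```python
# Formative "cross-situational trust": the construct is defined as an aggregate of
# situation-specific expectations (subjective probabilities on a 0-100 scale),
# rather than inferred as a latent factor behind them. Item names are placeholders.

import numpy as np
import pandas as pd

respondents = pd.DataFrame({
    "p_stranger_returns_wallet": [40, 80, 10],
    "p_neighbor_keeps_secret":   [90, 70, 60],
    "p_colleague_repays_loan":   [85, 95, 30],
})

# Equal weights here; any other theoretically justified aggregation rule would do.
weights = np.full(respondents.shape[1], 1 / respondents.shape[1])
respondents["cross_situational_trust"] = respondents.to_numpy() @ weights
print(respondents)
```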

Moreover, as we use a higher number of more specific questions we will be able to
identify more forms of trust (Freitag and Bauer 2013, 41). Respondents should have
consistent trust patterns for certain situations, that is, they should have consistent
expectations for certain trustee categories, certain expected behaviors, or certain
contexts. For instance, certain people are skeptical when it comes to money regardless of
the trustee. Regarding scale length, longer scales are advantageous (Krosnick and
Presser 2010; Lundmark et al. 2015). However, there is a caveat. An eleven-point scale
requires that respondents can count to eleven. A probability scale requires that
respondents understand the concept of probability (Clinton and Manski 2002). Thus,
when moving to contexts where those preconditions are not fulfilled, we should adapt our
measures accordingly. Finally, we are convinced that more specific questions provide the
way forward in terms of equivalence. They allow less variation in question interpretation
on the respondent’s part.

We would like to round off our discussion by illustrating it with some data. In 2012 we collected
survey data in Switzerland, in part building on the rationales provided above (see also Bauer
2014) as well as inspired by Soroka et al. (2007):20 “The next questions deal with future
events. Please imagine a probability scale running from 0 to 100%. 0% means that the
event will not occur, 100% means that the event will certainly occur. Imagine losing your
wallet (with identity card) containing, among other things, 200 Swiss Francs. On a scale
from 0 to 100%, how probable is it that the wallet will be returned to you including its
content, if it is found by… ?” Subsequently respondents answer these questions for a
series of categories: “a relative,” “one of your friends,” “neighbor,” “a stranger, that you
don’t know,” “someone who speaks the same language as you,” “someone of the same
nationality as you,” “a co-worker,” “a friend from your association or club.”
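
Represented as data, the battery amounts to one question template combined with a list of trustee categories, presented in random order; a sketch (the wording is restated from the text above, the randomization seed is arbitrary):

```python
# The 2012 wallet-question battery as data: one probability-scale template filled
# with trustee categories, asked in random order (as described in the text).

import random

TEMPLATE = (
    "On a scale from 0 to 100%, how probable is it that the wallet will be "
    "returned to you including its content, if it is found by {trustee}?"
)

TRUSTEE_CATEGORIES = [
    "a relative", "one of your friends", "a neighbor",
    "a stranger that you don't know",
    "someone who speaks the same language as you",
    "someone of the same nationality as you",
    "a co-worker", "a friend from your association or club",
]

random.seed(2012)
for trustee in random.sample(TRUSTEE_CATEGORIES, k=len(TRUSTEE_CATEGORIES)):
    print(TEMPLATE.format(trustee=trustee))
```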

While these questions reflect some of the ideas above, the corresponding measures are
by no means ideal. They do explicitly mention a behavior X (returning a wallet with identity
card and 200 Swiss Francs), some of the trustee categories are a bit more specific (though
probably not specific enough), and the questions elicit a subjective probability. However,
they do not specify a context (e.g., finding a wallet in the neighborhood vs. a train
station). And they were located in a battery, but asked in a random order. Before this
battery of questions, respondents answered the most-people question (eleven-point
scale),21 which allows us to compare the results we get for the new questions with results
we get for the most-people question.

Table 1 displays summary statistics. Scales that have trustee categories that presumably
encompass people known to the truster have high means above 80 (scales 2, 3, 4, 8, and
9 in Table 1). In addition, the variance on these scales is lower as indicated by the
standard deviation and the interquartile range. In contrast, scales that refer to categories
that do not necessarily trigger the consideration of known people have lower means
between 44 and 55 (scales 5, 6, and 7 in Table 1). Also both standard deviation and
interquartile range are higher on these scales. The most-people question has a mean of
6.12, or 61.2 when rescaled to 0–100.

Table 1 Summary Statistics

Nr  Statistic                          N      Mean   St. Dev.  Min   Pctl (25)  Pctl (75)  Max
1   Trust in most people               1,153  6.12   2.03      0     5          8          10
2   Trust in relative                  1,150  93.54  15.33     0     90         100        100
3   Trust in friend                    1,153  92.84  15.01     0     90         100        100
4   Trust in neighbor                  1,147  83.76  21.50     0     80         100        100
5   Trust in stranger                  1,135  44.24  23.81     0     25         60         100
6   Trust in s.o. w/ same language     1,122  51.85  23.42     0     40         70         100
7   Trust in s.o. w/ same nationality  1,127  52.59  23.36     0     40         70         100
8   Trust in colleague from work       1,093  85.56  18.96     0     80         100        100
9   Trust in friend from assoc./club   802    85.56  18.22     0     80         100        100
10  Trust in most people (rescaled)    1,153  61.18  20.32     0     50         80         100
11  Average (across questions 2–9)     750    74.64  13.00     5.12  67.66      83.72      100.00


Figure 2 displays distributions. They follow a shape that we would probably expect. They
are skewed to the left for trustees the respondent knows (e.g., friends, relatives,
neighbors, colleagues from work, and associations). In contrast, the distributions for
trustee categories such as most people, strangers, and people with another nationality
are more spread out. Generalized trust is often equated with trust in strangers (Torpe
and Lolle 2010). However, we find that on our trust-in-strangers scale (Question 5 in
Table 1), which explicitly refers to strangers and a particular X, many more people locate
themselves at the lower end of the scale, in contrast to answers to the most-people
question.22

Figure 2 Distribution on trust scales.

As displayed in Figure 3, the correlation between trust in strangers and trust in most
people is 0.3. This is not a particularly high value for two indicators that are presumed to
measure the same concept. At the same time, it is not surprising given that we contrast a
more general question with a very specific question detailing B and X. Generally, trust
scales that refer to categories potentially eliciting trust in people the truster knows
correlate more highly with each other. The same is true for scales that should measure
abstract categories (strangers, another nationality, another language).

Figure 3 Correlation between trust scales.

We suggested measuring generalized trust as an average of expectations across a large
number of situations. Consequently, we may simply take the average across our
probabilistic trust scales. We calculated such an average for all our probabilistic trust
scales (mean = 75, sd = 13). The problem is, however, that we did not formulate our trust
questions with the aim of covering a wide variety of situations; for instance, there is an
imbalance in favor of trustee categories that refer to groups potentially known to the
truster (friends, etc.). Hence, another, theoretically derived set of questions would be
necessary. Finally, if we equate generalized trust with trust in strangers, we may use a
set of questions that queries trust in strangers in different situations and average across
this set.

Conclusion
In this chapter we survey the development of trust measurement and discuss ways in
which we may measure trust in the future. Our contribution should not be perceived as a
damning critique of past empirical research that is based on current standard measures—
including our own. Rather, it represents a careful attempt at thinking beyond the
measures used so far and at incorporating the lessons we have learned.

First, we start by conceptualizing trust as a situation-specific expectation—which
differentiates it from generalized trust. It is an expectation that is bound to vary
depending on different situational parameters such as the trustee, the behavior we
expect of him/her, and some context.

Second, we discuss various standard measures as well as more innovative measures that
were used in the past. Strikingly, the lion’s share of empirical research is solely based on
a few measures (e.g., the most-people question and the GSS index in the case of social
trust). Third, we outlined various measurement debates that have taken place with
regard to self-report measurement. Among other things, we analyzed some recent data
regarding the interpretation of the most-people question. A considerable share of
respondents filled in situational specifications (i.e., they thought of concrete situations). To us,
this suggests that it is a fruitful way forward to specify situations in our trust questions
that are as concrete as possible.

Fourth, we suggested ways in which trust may be measured in the future, namely through
more specific questions that specify situations through a trustee, an expected behavior,
and ideally some context. The clear advantage is that such questions should fare better
when it comes to criticism that has been voiced in the various debates. As an alternative
to current standard measures (e.g., the most-people question), one could try to measure
generalized trust as an average across many situations entailing a variety of trustees,
expected behaviors, and contexts. To avoid confusion, we suggest using the term “cross-
situational trust.” Arguing for the use of a bigger set of questions seems infeasible in a
world in which researchers go to war over including single items in large-scale surveys.
However, we stand on the verge of a world where surveys are easily administered to
millions of smartphone users at almost no cost. As a result, we are not subject to the
same time and space constraints and have much more freedom regarding the length of
our survey instruments. Fifth, we presented some data that were collected in Switzerland in
2012. In using those questions, we realized only some of the recommendations made in
this chapter; still, to us the data seem encouraging. When properly introduced,
individuals seem to be able to locate themselves on such scales, but a certain level of
literacy is necessary.

A potential move toward more specific questions also matters for the wider debate on the
relationship between predispositions, experiences, and trust. So far we have studied the
corresponding questions relying on our standard survey items. And evidence regarding
the impact of experiences on generalized trust is inconclusive (Bauer 2015b; Glanville et
al. 2013; Oskarsson et al. 2016; Paxton and Glanville 2015; Uslaner 2002). Experiences
that humans collect are situation-specific, that is, we gather experiences with specific
trustees, specific expected behaviors, and in specific contexts. Imagine a situation where
a truster A is asked for the time in a train station and subsequently robbed of his watch. A
will probably adapt his situation-specific expectations and ignore future queries for the
time in this context. However, A does not necessarily generalize these expectations to
other situations.

If we want to push trust research in a more policy-relevant direction—in which we explain
cooperative behavior in concrete contexts such as neighborhoods or workplaces and in
which we are better able to quantify the costs of low levels of trust—it seems inevitable
that we use more specific questions. Studies such as Yuki et al. (2005) go in this
direction. Such a move will allow us to uncover a rich variation in expectations regarding
different trustees, behaviors, and contexts—expectations that are more closely linked to
actual cooperative behaviors.

References
Ahn, T. K., E. Ostrom, D. Schmidt, and J. Walker. 2003. Trust in two-person games: Game
structures and linkage. In E. Ostrom and J. Walker, eds., Trust and Reciprocity:
Interdisciplinary Lessons from Experimental Research, 323–351. New York: Russell Sage.

Algan, Y., and P. Cahuc. 2013. Trust and Growth. Annual Review of Economics 5(1): 521–
549.

Almond, G. A., and S. Verba. 1963. The Civic Culture: Political Attitudes and Democracy
in Five Nations. Princeton: Princeton University Press.

André, S. 2013. Does Trust Mean the Same for Migrants and Natives? Testing
Measurement Models of Political Trust with Multi-group Confirmatory Factor Analysis.
Social Indicators Research 115(3): 963–982.

Bacharach, M., and D. Gambetta. 2001. Trust in Signs. In K. S. Cook, ed., Trust in
Society, 148–184. New York: Russell Sage Foundation.

Baier, A. 1986. Trust and Antitrust. Ethics 96: 231–260.

Barr, A. 2003. Trust and Expected Trustworthiness: Experimental Evidence from
Zimbabwean Villages. The Economic Journal 113(489): 614–630.

Bauer, P. C. 2014. Conceptualizing and Measuring Trust and Trustworthiness. Political
Concepts: Committee on Concepts and Methods Working Paper Series 61: 1–27.

Bauer, P. C. 2015a. Three Essays on the Concept of Trust and Its Foundations. Bern:
University of Bern.

Bauer, P. C. 2015b. Negative Experiences and Trust: A Causal Analysis of the Effects of
Victimization on Generalized Trust. European Sociological Review 31(4): 397–417.

Bauer, P. C., P. Barbera, K. Ackermann, and A. Venetz. 2016. Is the left-right scale a valid
measure of ideology? Individual-level variation in associations with “left” and “right” and
left-right self-placement. Political Behavior. doi:10.1007/s11109-016-9368-2.

Berg, J., J. Dickhaut, and K. McCabe. 1995. Trust, Reciprocity, and Social History. Games
and Economic Behavior 10: 122–142.

Brehm, J., and W. Rahn. 1997. Individual-level evidence for the causes and consequences
of social capital. American Journal of Political Science 41(3): 999–1023.

Burns, C., K. Mearns, and P. McGeorge. 2006. Explicit and implicit trust within safety
culture. Risk Analysis: An Official Publication of the Society for Risk Analysis 26(5): 1139–
1150.

Buskens, V., and J. Weesie. 2000. An Experiment on the Effects of Embeddedness in Trust
Situations: Buying a Used Car. Rationality and Society 12(2): 227–253.

Butler, J. V., P. Giuliano, and L. Guiso. 2015. Trust, values, and false consensus.
International Economic Review 56(3): 889–915.

Camerer, C. F. 2003. Behavioral Game Theory: Experiments in Strategic Interaction.
Princeton: Princeton University Press.

Camerer, C., and K. Weigelt. 1988. Experimental Tests of a Sequential Equilibrium
Reputation Model. Econometrica: Journal of the Econometric Society 56(1): 1–36.

Capra, C. M., K. Lanier, and S. Meer. 2008. Attitudinal and Behavioral Measures of Trust:
a New Comparison. Unpublished manuscript. Emory University Department of
Economics.

Citrin, J. S., and C. Muste. 1999. Trust in government. In J. P. Robinson, P. R. Shaver, and
L. S. Wrightsman, eds., Measures of Political Attitudes, 465–532. New York: Academic
Press.

Clinton, J. D., and C. F. Manski. 2002. Empirical probability scales for verbal expectations
data, with Application to Expectations of Job Loss. Unpublished paper.

Coleman, J. S. 1990. Foundations of Social Theory. Cambridge: Harvard University Press.

Colquitt, J. A., B. A. Scott, and J. A. LePine. 2007. Trust, Trustworthiness, and Trust
Propensity: A Meta-Analytic Test of Their Unique Relationships with Risk Taking and Job
Performance. Journal of Applied Psychology 92(4): 909–927.

Cook, K. S., and R. M. Cooper. 2003. Experimental studies of cooperation, trust, and social exchange. In E. Ostrom and J. Walker, eds., Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research, 209–244. New York: Russell Sage Foundation.

Cook, K. S., R. Hardin, and M. Levi. 2005. Cooperation without Trust? New York: Russell
Sage Foundation.

Davidov, E. 2009. Measurement Equivalence of Nationalism and Constructive Patriotism in the ISSP: 34 Countries in a Comparative Perspective. Political Analysis 17: 64–82.

Delhey, J., K. Newton, and C. Welzel. 2011. How General Is Trust in “Most People”?
Solving the Radius of Trust Problem. American Sociological Review 76(5): 786–807.

Deutsch, M. 1960. The Effect of Motivational Orientation upon Trust and Suspicion.
Human Relations: Studies towards the Integration of the Social Sciences 13: 123–139.

Diamantopoulos, A., and H. M. Winklhofer. 2001. Index Construction with Formative Indicators: An Alternative to Scale Development. Journal of Marketing Research 38(2): 269–277.

Easton, D. 1965. A Framework for Political Analysis. Vol. 25. Englewood Cliffs, NJ:
Prentice-Hall.

Erikson, E. H. 1959. Growth and crisis of the healthy personality. In E. H. Erikson, ed.,
Psychological Issues: Selected Papers, Volume 1, Issue 1, 51–107. New York:
International Universities Press.

Ermisch, J., D. Gambetta, H. Laurie, T. Siedler, and S. C. N. Uhrig. 2009. Measuring people’s trust. Journal of the Royal Statistical Society 172(4): 749–769.

European Social Survey. 2012. ESS Round 6 Source Questionnaire. London: Centre for
Comparative Social Surveys, City University.

Fazio, R. H., and M. A. Olson. 2003. Implicit measures in social cognition research: Their
meaning and use. Annual Review of Psychology 54: 297–327.

Fehr, E., U. Fischbacher, B. von Rosenbladt, J. Schupp, and G. G. Wagner. 2002. A Nation-Wide Laboratory: Examining trust and trustworthiness by integrating behavioral experiments into representative surveys. Schmollers Jahrbuch 122(4): 519–542.

Fisher, J., J. van Heerde-Hudson, and A. Tucker. 2011. Why Both Theory and Empirics
Suggest There Is More Than One Form of Trust: A Response to Hooghe. British Journal of
Politics and International Relations 13: 276–281.

Fisher, J., J. van Heerde, and A. Tucker. 2010. Does one trust judgement fit all? Linking
theory and empirics. British Journal of Politics and International Relations 12(2): 161–
188.

Freitag, M., and P. C. Bauer. 2013. Testing for measurement equivalence in surveys:
Dimensions of Social Trust across Cultural Contexts. Public Opinion Quarterly 77(S1): 24–
44.

Freitag, M., and P. C. Bauer. 2016. Personality traits and the propensity to trust friends
and strangers. The Social Science Journal 53(4): 467–476.

Freitag, M., and R. Traunmüller. 2009. Spheres of trust: An empirical analysis of the
foundations of particularised and generalised trust. European Journal of Political
Research 48: 782–803.

Gächter, S., B. Herrmann, and C. Thöni. 2004. Trust, voluntary cooperation, and socio-
economic background: survey and experimental evidence. Journal of Economic Behavior
and Organization 55(4): 505–531.

Gambetta, D. 1988. Can We Trust Trust? In D. Gambetta, ed., Trust: Making and
Breaking Cooperative Relations, 213–237. Cambridge: Basil Blackwell.

Gamson, W. A. 1968. Power and Discontent. Homewood, IL: Dorsey Press.

Glaeser, E. L., D. I. Laibson, J. A. Scheinkman, and C. L. Soutter. 2000. Measuring Trust. Quarterly Journal of Economics 115(3): 811–846.

Glanville, J. L., M. A. Andersson, and P. Paxton. 2013. Do Social Connections Create Trust? An Examination Using New Longitudinal Data. Social Forces 92(2): 545–562.

Hardin, R. 1992. The Street-Level Epistemology of Trust. Analyse and Kritik 14: 152–176.

Hardin, R. 2002. Trust and Trustworthiness. New York: Russell Sage Foundation.

Hetherington, M. J., and J. A. Husser. 2012. How Trust Matters: The Changing Political
Relevance of Political Trust. American Journal of Political Science 56(2): 312–325.

Hooghe, M. 2011. Why There Is Basically Only One Form of Political Trust. British Journal
of Politics and International Relations 13(2): 269–275.

Johnson, N. D., and A. A. Mislin. 2011. Trust games: A meta-analysis. Journal of Economic
Psychology 32: 865–889.

Knack, S., and P. Keefer. 1997. Does Social Capital Have an Economic Payoff? A Cross-
Country Investigation. Quarterly Journal of Economics 112(4): 1251–1288.

Krosnick, J. A., and S. Presser. 2010. Question and questionnaire design. In P. V. Marsden
and J. D. Wright, eds., Handbook of Survey Research, vol. 2, 263–313. Bingley, UK:
Emerald Group Publishing.

Lagace, R. R., and J. B. Gassenheimer. 1989. A measure of global trust and suspicion:
Replication. Psychological Reports 65(2): 473–474.

Levi, M., and L. Stoker. 2000. Political Trust and Trustworthiness. Annual Review of
Political Science 3: 475–507.

Lewicki, R. J., and C. Brinsfield. 2012. Measuring trust beliefs and behaviours. In F. Lyon,
G. Mollering, and M. N. K. Saunders, eds., Handbook of Research Methods on Trust, 29–
39. Cheltenham: Edward Elgar.

Luhmann, N. 1979. Trust and Power. Chichester: Wiley.

Lundmark, S., M. Gilljam, and S. Dahlberg. 2015. Measuring Generalized Trust: An Examination of Question Wording and the Number of Scale Points. Public Opinion Quarterly 80(1): 26–43.

Lyon, F., G. Möllering, and M. Saunders. 2012. Handbook of Research Methods on Trust.
Cheltenham: Edward Elgar.

Manski, C. F., and F. Molinari. 2010. Rounding Probabilistic Expectations in Surveys. Journal of Business and Economic Statistics 28(2): 219–231.

McCrae, R. R., and P. T. Costa Jr. 2003. Personality in Adulthood: A Five-Factor Theory
Perspective. New York: Guilford Press.

Miller, A. H. 1974. Political Issues and Trust in Government—1964–1970—Rejoinder. American Political Science Review 68(3): 989–1001.

Miller, A. S., and T. Mitamura. 2003. Are Surveys on Trust Trustworthy? Social
Psychology Quarterly 66(1): 62–70.

Misztal, B. 2013. Trust in Modern Societies: The Search for the Bases of Social Order.
Hoboken, NJ: John Wiley and Sons.

Mooradian, T., B. Renzl, and K. Matzler. 2006. Who trusts? Personality, trust and
knowledge sharing. Management Learning 37(4): 523–540.

Naef, M., and J. Schupp. 2009. Measuring trust: Experiments and surveys in contrast and
combination. IZA Discussion Paper 4087: 1–44.

Nannestad, P. 2008. What Have We Learned about Generalized Trust, if Anything? Annual Review of Political Science 11: 413–436.

Newton, K. 2001. Trust, Social Capital, Civil Society, and Democracy. International
Political Science Review 22(2): 201–214.

Newton, K., and S. Zmerli. 2011. Three Forms of Trust and Their Association. European
Political Science Review 3: 169–200.

Nooteboom, B. 2002. Trust: Forms, Foundations, Functions, Failures and Figures. Cheltenham: Edward Elgar.

Omodei, M. M., and J. McLennan. 2000. Conceptualizing and Measuring Global Interpersonal Mistrust-Trust. Journal of Social Psychology 140(3): 279–294.

OPOR. 1942. Survey 813: War. Office of Public Opinion Research.

Oskarsson, S., P. T. Dinesen, C. T. Dawes, M. Johanneson, and P. K. E. Magnusson. 2016. Education and Social Trust: Testing a Causal Hypothesis Using the Discordant Twin Design. Political Psychology. Early View.

Paxton, P., and J. L. Glanville. 2015. Is Trust Rigid or Malleable? A Laboratory Experiment. Social Psychology Quarterly 78(2): 194–204.

Petersen, T. 2014. Personal email communication with Paul C. Bauer.

Pickett, J. T., T. A. Loughran, and S. Bushway. 2015. On the Measurement and Properties
of Ambiguity in Probabilistic Expectations. Sociological Methods and Research 44(4):
636–676.

Poznyak, D., B. Meuleman, K. Abts, and G. F. Bishop. 2014. Trust in American Government: Longitudinal Measurement Equivalence in the ANES, 1964–2008. Social Indicators Research 118(2): 741–758.

Reeskens, T., and M. Hooghe. 2008. Cross-cultural measurement equivalence of generalized trust: Evidence from the European Social Survey 2002 and 2004. Social Indicators Research 85(3): 515–532.

Rosenberg, M. 1956. Misanthropy and Political Ideology. American Sociological Review 21(6): 690–695.

Rotter, J. B. 1967. A new scale for the measurement of interpersonal trust. Journal of
Personality 35(4): 651–665.

Rotter, J. B., and D. K. Stein. 1971. Public Attitudes toward the Trustworthiness,
Competence, and Altruism of Twenty Selected Occupations. Journal of Applied Social
Psychology 1(4): 334–343.

Rousseau, D. M., S. B. Sitkin, R. S. Burt, and C. Camerer. 1998. Not So Different After All: A Cross-Discipline View of Trust. Academy of Management Review 23(3): 393–404.

Sapienza, P., A. Toldra-Simats, and L. Zingales. 2013. Understanding Trust. Economic Journal 123(573): 1313–1332.

Seligman, A. B. 2000. The Problem of Trust. Princeton: Princeton University Press.

Soroka, S., J. F. Helliwell, and R. Johnston. 2007. Measuring and modelling interpersonal
trust. In S. Soroka, J. Helliwell, and R. Johnston, eds., Social Capital, Diversity and the
Welfare State, 95–132. Vancouver: University of British Columbia Press.

Stokes, D. E. 1962. Popular evaluations of government: An empirical assessment. In H. Cleveland and H. D. Lasswell, eds., Ethics and Bigness: Scientific, Academic, Religious, Political, and Military, 61–72. New York: Harper and Brothers.

Sturgis, P., and P. Smith. 2010. Assessing the validity of generalized trust questions:
What kind of trust are we measuring? International Journal of Public Opinion Research
22(1): 74–92.

Sztompka, P. 1999. Trust: A Sociological Theory. Cambridge, UK: Cambridge University Press.

Todorov, A., S. G. Baron, and N. N. Oosterhof. 2008. Evaluating face trustworthiness: A model-based approach. Social Cognitive and Affective Neuroscience 3(2): 119–127.

Torpe, L., and H. Lolle. 2010. Identifying Social Trust in Cross-Country Analysis: Do We
Really Measure the Same? Social Indicators Research 103(3): 481–500.

Tourangeau, R., L. J. Rips, and K. Rasinski. 2000. The Psychology of Survey Response.
Cambridge, UK: Cambridge University Press.

Uslaner, E. 2009. Is eleven really a lucky number? Measuring trust and the problem of
clumping. Unpublished manuscript, University of Maryland, College Park.

Uslaner, E. M. 2002. The Moral Foundations of Trust. Cambridge, UK: Cambridge University Press.

Uslaner, E. M. 2012. Measuring generalized trust: In defense of the “standard” question. In F. Lyon, G. Möllering, and M. N. K. Saunders, eds., Handbook of Research Methods on Trust, 72–82. Cheltenham: Edward Elgar Publishing.

Van der Veld, W., and W. E. Saris. 2011. Causes of Generalized Social Trust: An
Innovative Cross-National Evaluation. In E. Davidov, P. Schmidt, and J. Billiet, eds.,
Cross-Cultural Analysis: Methods and Applications, 207–247. New York: Routledge.

Warren, M. E. 1999. Democracy and Trust. Cambridge, UK: Cambridge University Press.

Whiteley, P. F. 2000. Economic growth and social capital. Political Studies 48: 443–466.

Wollebæk, D., S. W. Lundåsen, and L. Trägårdh. 2012. Three forms of interpersonal trust:
evidence from Swedish municipalities. Scandinavian Political Studies 35(4): 319–346.

Wrightsman, L. S., and M. L. Wuescher. 1974. Assumptions about human nature: A social
psychological approach. Monterey, CA: Brooks/Cole.

Yamagishi, T., and M. Yamagishi. 1994. Trust and commitment in the United States and
Japan. Motivation and Emotion 18: 129–166.

Yuki, M., W. W. Maddux, M. B. Brewer, and K. Takemura. 2005. Cross-cultural differences in relationship- and group-based trust. Personality and Social Psychology Bulletin 31(1): 48–62.

Zmerli, S., and K. Newton. 2008. Social Trust and Attitudes toward Democracy. Public
Opinion Quarterly 72(4): 706–724.

Notes:

(1) Turning this statement around, we may speak of a trustee B who is trustworthy with regard to some (non)behavior X, context Y, a truster A (Bauer 2014, 2–3), and time t, i.e., the point in time when we measure trust.
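
Written out as a relation, this parameterization can be rendered as follows; the notation is an illustrative shorthand introduced here, not taken from the chapter or from Bauer (2014):

```latex
% Illustrative shorthand (not from the chapter): trust and trustworthiness as
% five-place relations over truster A, trustee B, (non)behavior X, context Y, time t.
\[
  \operatorname{Trusts}(A, B, X, Y, t)
  \quad\text{and, turned around,}\quad
  \operatorname{Trustworthy}(B, A, X, Y, t).
\]
```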

(2) Self-rated trust has long been an item within personality research, subsumed under
the factor agreeableness (McCrae and Costa 2003).

(3) We use the term “most-people question” to refer to the classic measure coined by
Rosenberg (1956) in its various forms before and after Rosenberg.

(4) The respective survey contains the question: “Do you think most people can be trusted?” with the possible answers “yes,” “no,” “no opinion,” and a “qualified answer” (OPOR 1942). The question does not seem to have originated with Elisabeth Noelle-Neumann or Almond and Verba (1963) (Algan and Cahuc 2013; Zmerli and Newton 2008, 709). Compare the email exchange with Thomas Petersen of the Allensbach Institute founded by Noelle-Neumann (Petersen 2014).

(5) Questions: 1. Some people say that most people can be trusted. Others say you can’t
be too careful in your dealings with people. How do you feel about it? 2. Would you say
that most people are more inclined to help others or more inclined to look out for
themselves? 3. If you don’t watch yourself, people will take advantage of you. 4. No one is
going to care much what happens to you, when you get right down to it. 5. Human nature
is fundamentally cooperative (Rosenberg 1956, 690).

(6) See Citrin and Muste (1999) for a review of measures tapping evaluations of political
institutions.

(7) Questions: 1. How much of the time do you think you can trust the government in
Washington to do what is right: Just about always/most of the time/or only some of the
time; 2. Would you say the government is: Pretty much run by a few big interests looking
out for themselves/or that it is run for the benefit of all the people; 3. Do you think that
people in government: Waste a lot of the money we pay in taxes/waste some of it/or don’t
waste very much of it; 4. Do you feel that: Almost all of the people running the
government are smart people who usually know what they are doing/or do you think that
quite a few of them don’t seem to know what they’re doing; 5. Do you think that: Quite a
few of the people running the government are a little crooked/not very many are/ or do
you think hardly any of them are crooked at all (Citrin and Muste 1999, 483).

(8) Examples are: 1. In dealing with strangers one is better off to be cautious until they
have provided evidence that they are trustworthy. 2. Parents usually can be relied upon
to keep their promises. 3. Parents and teachers are likely to say what they believe
themselves and not just what they think is good for the child to hear. Answer scales range from 1 (strongly agree) to 5 (strongly disagree) (Rotter 1967, 654).

(9) Butler et al. (2015, 891) suggest that the trust game literature begins as early as Camerer and Weigelt (1988). See Camerer (2003) for a review of lab game research and
Johnson and Mislin (2011) for a meta-analysis of research based on the classic trust
game.

(10) In the envelope drop experiment, subjects can place a value in an envelope that is
addressed to themselves and subsequently dropped by the experimenter. Subjects had to
evaluate different conditions (e.g., different places where the envelope could be dropped)
and an average was taken. The higher the amount a subject places, the higher the level of
trust.
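
As a minimal sketch of how such a score could be computed (the condition labels, amounts, and scoring rule are invented for illustration; the original study’s exact coding may differ), one could average the amount a subject is willing to place across the drop conditions:

```python
# Hypothetical sketch: score the envelope-drop measure as the mean amount a
# subject places in the envelope across drop conditions (higher = more trust).
def envelope_trust_score(amounts_by_condition):
    """Average the placed amounts over all evaluated conditions."""
    amounts = list(amounts_by_condition.values())
    return sum(amounts) / len(amounts)

# Invented example data for one subject (amounts in the local currency).
subject = {
    "dropped near home": 20.0,
    "dropped downtown": 10.0,
    "dropped on campus": 15.0,
}

print(envelope_trust_score(subject))  # 15.0
```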

(11) That includes the most-people question as well as questions concerning expected
fairness (“Do you think most people would try to take advantage of you if they got the
chance, or would they try to be fair?”) and helpfulness (“Would you say that most of the
time people try to be helpful, or that they are mostly just looking out for themselves?”)
(Glaeser et al. 2000, 825).

(12) Questions: 1. Do you think that most people would try to take advantage of you if they got a chance or would they try to be fair?; 2. Would you say that most of the time people try to be helpful or that they are mostly just looking out for themselves?; 3. a) In general, one can trust people b) In these days you can’t rely on anybody else c) When dealing with strangers it is better to be careful before you trust them; 4. In the following you are asked to which persons, groups and institutions you have more or less trust; 5. Have you ever spontaneously benefited from a person you did not know before?; 6. How often does it happen a) that you lend personal possessions to your friends (CDs, books, your car, bicycle, etc.)? b) that you lend money to your friends? c) that you leave your door unlocked? (Fehr et al. 2002, 530–532).

(13) Trustee categories: “By someone who lives close by,” “by a clerk at the grocery store
where you do most of your shopping,” “by a police officer,” and “by a complete stranger.”

(14) The answer scales are “no trust at all,” “little trust,” “quite a bit of trust,” and “a lot
of trust” for the first question; and “disagree strongly,” “disagree somewhat,” “agree
somewhat,” or “agree strongly” for the second question.

(15) It is also debated to what extent the classic trust game is a valid measure of
behavioral trust (Ermisch et al. 2009). More generally, using sender behavior as a
measure of trust has become controversial (Butler et al. 2015, Footnote 7).

(16) For another approach see Delhey et al. (2011).

(17) Students were participants in the lecture “Social Capital in Switzerland” at the University of Bern. In 2015, 67 out of 124 students answered our probing question; and in 2016, 81 out of 94 students answered the probing question. For Figure 1 we simply
combined the data of the two student samples after some initial preprocessing of their
open-ended answers.

(18) See Bauer et al. (2016) for an overview of similar studies.

(19) In situations in which a respondent is unable to give a precise response on the trust scale, we could query an interval. In some situations this interval may even cover the complete trust scale. Pickett, Loughran, and Bushway (2015) provide a more elaborate discussion of these ideas.
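
A minimal sketch of how such interval answers could be recorded on an 11-point (0–10) trust scale; the data structure and example values are assumptions for illustration, not a fielded instrument:

```python
# Hypothetical sketch: record a trust answer as an interval on a 0-10 scale;
# a point answer is the special case lower == upper, and a completely
# uncertain respondent may report the full scale [0, 10].
from dataclasses import dataclass

SCALE_MIN, SCALE_MAX = 0, 10

@dataclass
class IntervalResponse:
    lower: int  # lowest scale value the respondent considers plausible
    upper: int  # highest scale value the respondent considers plausible

    def __post_init__(self):
        if not (SCALE_MIN <= self.lower <= self.upper <= SCALE_MAX):
            raise ValueError("interval must be ordered and lie within the scale")

    @property
    def width(self) -> int:
        """0 for a precise answer; 10 if the whole scale was reported."""
        return self.upper - self.lower

responses = [IntervalResponse(7, 7), IntervalResponse(4, 8), IntervalResponse(0, 10)]
print([r.width for r in responses])  # [0, 4, 10]
```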

(20) This data was used in Freitag and Bauer (2016).

(21) Question: Broadly speaking, do you think that most people can be trusted, or that you can’t be too careful in dealing with people? Taking a scale on which 0 means that you can’t be too careful in dealing with people and 10 means that most people can be trusted, where would you rank yourself on that scale?

(22) There are indications that respondents tend to round to the nearest ten on such scales. This is a well-known phenomenon; see Manski and Molinari (2010) for a potential way to tackle it.
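
One minimal way to accommodate such heaping, loosely in the spirit of treating rounded reports as intervals: the 0–100 scale, the rounding step, and the widening rule below are simplifying assumptions for illustration, not the procedure of Manski and Molinari (2010).

```python
# Hypothetical simplification: treat a report that is a multiple of the
# rounding step as potentially rounded and replace it with the interval of
# values that would round to it; keep other reports as point values.
def as_interval(report, scale_max=100, rounding_step=10):
    """Return (lower, upper) bounds implied by possible rounding of a report."""
    if report % rounding_step == 0:
        half = rounding_step / 2
        return (max(0, report - half), min(scale_max, report + half))
    return (report, report)

for r in [37, 40, 0, 100]:
    print(r, "->", as_interval(r))
# 37 -> (37, 37), 40 -> (35.0, 45.0), 0 -> (0, 5.0), 100 -> (95.0, 100)
```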

Paul C. Bauer
European University Institute

Markus Freitag
University of Bern
