
THE ACCOUNTING REVIEW
American Accounting Association
Vol. 92, No. 5
September 2017
pp. 89–116
DOI: 10.2308/accr-51662

Use of High Quantification Evidence in Fair Value Audits: Do Auditors Stay in their Comfort Zone?
Jennifer R. Joe
University of Delaware

Scott D. Vandervelde
University of South Carolina

Yi-Jing Wu
Texas Tech University
ABSTRACT: Research documents significant management bias and opportunism around the discretionary inputs of
audited complex estimates, including fair value measurements (FVMs), which raises questions about auditors’ ability
to test these estimates. We examine how the degree of quantification in client evidence and client control
environment risk influence auditors’ planned substantive testing of management’s discretionary inputs to FVMs. We
find that auditors allocate a lower proportion of effort to testing the subjective inputs to the fair value estimate when
the degree of quantification in the client evidence and level of client risk are both high. Further, this tendency persists
even after auditors receive a regulatory practice alert reminding them to focus more audit effort on testing fair value
(FV) inputs that are susceptible to management bias, and despite the auditors increasing their overall audit effort.
Qualitative analyses of the procedures auditors selected indicate that inapt attention to the degree of quantification in
evidence is a potential root cause of the difficulty auditors encounter when testing complex estimates. Our results
imply that in situations where both quantified and non-quantified data are important to the audit, there is the potential
for management to manipulate the evidence they provide to auditors to distract auditors from testing the discretionary
inputs to complex estimates that are susceptible to management opportunism.
Keywords: auditing fair value; client specialist; valuation specialist; quantification effect; auditing management bias and opportunism; auditing discretionary inputs; client risk; PCAOB alert; audit planning; entity-level controls; control environment risk.
Data Availability: Contact authors for data availability.

I. INTRODUCTION

A plethora of archival data studies document widespread management bias and opportunism around the subjective
inputs (e.g., discount rates, model assumptions, and model selections) to complex accounting estimates, including fair
value measurements (hereafter, FVMs) (e.g., Hilton and O’Brien 2009; Choudhary 2011; Ramanna and Watts 2012).
Management not only exploits the subjectivity in fair value (FV) estimates to bias financial reporting, but Dechow, Myers, and
Shakespeare (2010) find that management opportunism in FVMs yields higher compensation for managers. Therefore,
management opportunism in FVMs can negatively impact investors’ decision-making and wealth. Motivated by these findings
of management bias and opportunism in audited FVMs and regulatory concerns over the audit of discretionary inputs to FVMs
(e.g., Public Company Accounting Oversight Board [PCAOB] 2009a, 2009b, 2012a, 2014; Financial Reporting Council [FRC]

Prior and current versions of this paper have benefited from helpful comments from Sanaz Aghazadeh, Erik Boyle, Joe Brazel, Christine Earley, Steve
Glover, Vicky Hoffman, Marlys Lipe, Tracie Majors, Linda Quick, David Wood, and workshop participants at the 2015 AAA Auditing Section Midyear
Meeting, Brigham Young University, Case Western Reserve University, 2014 Deloitte Foundation/University of Kansas Auditing Symposium, 2014
International Symposium on Audit Research, Kent State University, Lehigh University, North Carolina State University, and Texas Tech University. We
also thank the editor and anonymous reviewers for their insights.
Editor’s note: Accepted by Mark E. Peecher, under the Senior Editorship of Mark L. DeFond.
Submitted: May 2014
Accepted: November 2016
Published Online: January 2017

2011; Canadian Public Accountability Board [CPAB] 2012), we examine two factors that can influence auditors’ testing of
management discretion in FVMs: the degree of quantification (i.e., amount of numeric detail) in the client specialist’s report
and the level of the client’s control environment risk (a significant component of client risk). Specifically, we investigate
whether and how auditors’ planned substantive testing (nature and extent of subjective versus objective audit procedures) of a
complex FVM is influenced by the joint effects of the degree of quantification in audit evidence and the client’s control
environment risk.
There is widespread use of third-party valuation specialists1 to assist management in their preparation of FVMs (hereafter,
client specialist(s)) (Harvest 2014; PCAOB 2015). Accordingly, auditors must often evaluate evidence prepared by client
specialists when testing FVMs, which is typically provided in a specialist report. Client specialist reports can vary significantly
in the amount of quantitative detail provided to support the calculations for management’s assumptions (Cannon and Bedard
2017; Glover, Taylor, and Wu 2017a). Cognitive research in psychology and accounting suggests that the degree of
quantification in the client specialist report can influence auditors’ attention to available evidence and, thus, affect the nature
and extent of auditors’ testing of a client’s FVMs (e.g., Mack and Rock 1998; Most et al. 2001; Kadous, Koonce, and Towry
2005; Drew, Vo, and Wolfe 2013). Because there is significant discretion in the subjective inputs to FVMs and research
provides evidence that management exploits this discretion to engage in opportunistic and/or biased reporting (e.g., Hilton and
O’Brien 2009; Choudhary 2011; Ramanna and Watts 2012; PCAOB 2016a, 6), auditors should be wary of allowing features of
audit evidence that can be controlled by the client (such as the degree of quantification) to influence their planned substantive
testing. For example, allowing the degree of quantification to disproportionally drive audit testing can expose auditors to
increased litigation and reputational risk, as well as risk of regulatory scrutiny. The influence on planned testing can increase
risk because the amount of numeric detail provided in a client specialist report is determined by the client and/or specialist and,
in the context of this study, does not signal a change in the risk that the account balance could be misstated.
We address our research questions in an experiment using in-charge-level participants with FV audit experience. The
design manipulates (1) the degree of quantification by varying the amount of numerical data in the client specialist’s report (low
versus high), and (2) the level of client control environment risk (low versus high) by varying the strength of entity-level
controls (ELCs) (strong versus moderate) while holding constant the overall informativeness of the data available for audit
judgment.2 Our examination of the relative trade-off in auditors’ planned mix and extent of subjective versus objective audit
procedures to test FVMs considers both the nature and the object of the procedures. Further, our classification of subjective
versus objective procedures is validated by an expert panel of national office partners. Subjective audit procedures by nature
require more auditor judgment, and their object/purpose is to test inputs that involve more management discretion.
Alternatively, objective audit procedures by nature call for little audit judgment, and their object/purpose is to test inputs that
involve little management discretion.3
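The planned-testing trade-off described above can be made concrete with a small sketch. This is our illustration, not the authors' experimental instrument: the procedure names echo the examples the paper gives (evaluating the reasonableness of management's assumptions versus clerically testing mathematical accuracy), the hours are fabricated, and the function simply computes the share of planned hours a participant devotes to subjective procedures.

```python
# Hypothetical sketch (names and numbers are ours, not from the study):
# computing the study's primary dependent measure -- the proportion of
# planned audit hours allocated to subjective (vs. objective) procedures.

def subjective_proportion(hours_by_procedure, subjective_procedures):
    """Share of total planned hours assigned to subjective procedures."""
    total = sum(hours_by_procedure.values())
    subjective_hours = sum(hours for procedure, hours in hours_by_procedure.items()
                           if procedure in subjective_procedures)
    return subjective_hours / total if total else 0.0

# Fabricated plan for one hypothetical participant: procedure -> hours.
plan = {
    "evaluate reasonableness of management's assumptions": 6,    # subjective
    "develop an independent estimate": 4,                        # subjective
    "test mathematical accuracy of the specialist's report": 8,  # objective
    "agree quantitative inputs to supporting schedules": 2,      # objective
}
subjective = {
    "evaluate reasonableness of management's assumptions",
    "develop an independent estimate",
}

print(subjective_proportion(plan, subjective))  # 0.5
```

Because total hours are held constant across conditions, a higher share of objective procedures mechanically implies a lower share of subjective ones, which is the trade-off the study measures.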
Consistent with the theoretical predictions from cognitive research in psychology and accounting (e.g., Mack and Rock
1998; Most et al. 2001; Balcetis and Dunning 2006; Luippold and Kida 2012), we find that the degree of quantification in
client-provided evidence and the control environment risk jointly influence the trade-off auditors make when choosing
procedures to test the client’s FVM, but do not influence overall effort (i.e., the total hours allocated). Auditors are most likely
(least likely) to apply subjective procedures to test discretionary inputs when control environment risk is high and the degree of
quantification in the client specialist’s report is low (high). Additionally, an analysis of the procedures selected by auditors
reveals that auditors in the higher-risk setting are less likely to take the substantive approach of developing an independent
estimate to test the client’s FVMs when the degree of quantification is low versus when it is high. Our finding that the degree of
quantification influences auditors’ use of an independent estimate approach is noteworthy because empirical evidence shows
that developing independent judgments/estimates makes auditors less susceptible to management’s influence (McDaniel and
Kinney 1995; Earley, Hoffman, and Joe 2008). Collectively, the auditor behavior we document is undesirable from an audit quality
perspective in settings where both quantified and non-quantified audit evidence are important to evaluating the reasonableness
of the client’s account balance (e.g., in the client specialist report). Further, in these higher-risk scenarios, management can

1 The FV audit setting includes different types of valuation experts. Management can engage third-party valuation experts to assist them in preparing FVMs. Auditors also use valuation experts to help them evaluate the clients’ FVMs, either by engaging third-party firms or directly employing in-house specialists (see discussion in Griffith [2016]).
2 Our manipulation of high control risk uses a client with moderate ELCs because discussions with audit partners indicated that clients with weak ELCs are rare in practice for the setting represented in our study (i.e., publicly traded companies requiring a Sarbanes-Oxley [SOX] Section 404 internal controls audit).
3 For example, evaluating the reasonableness of management’s assumptions of inputs to FVMs is considered a subjective procedure, because this procedure by nature requires auditor judgment and its object/purpose is to test a component of the FV estimate that requires management discretion. On the other hand, clerically testing the mathematical accuracy of quantitative data provided in the client specialist’s report is an objective procedure, because by nature it requires little auditor judgment and its object/purpose is to test a component of the FV estimate with little management discretion (i.e., do the numbers add up and/or are averages calculated correctly).


strategically provide a high degree of quantified evidence to divert auditors’ attention to performing more objective procedures,
thereby distracting them from performing the subjective procedures that test management’s discretionary (and potentially
opportunistic) inputs to FVMs.
U.S. regulators often issue practice alerts to highlight concerns about auditor performance (e.g., PCAOB 2007, 2010a).
Academics question the effectiveness of this regulatory approach and speculate that it can negatively impact how audits are
conducted (Peecher, Solomon, and Trotman 2013; Glover, Taylor, and Wu 2017b). Therefore, we conduct a follow-up study to
investigate whether a regulatory alert changes (1) auditors’ propensity to be influenced by the joint effect of quantification and
control environment risk, and (2) overall audit effort to test the client’s FVMs. We find that alerting auditors to regulators’
preference for more testing of the subjective aspects of FVMs does not change auditors’ tendency to be influenced by the
degree of quantification and control environment risk (i.e., a practice alert does not change the effort allocated to subjective
versus objective audit procedures). Consistent with Peecher (1996), however, reminding auditors of regulators’ preference
leads to an increase in the total effort for testing FVMs (i.e., total hours allocated to both subjective and objective procedures).
Thus, we provide insights for regulators and practitioners that simply issuing practice alerts does not change how auditors
allocate effort in testing the objective and subjective inputs to FVMs, but can increase total audit effort. To change how auditors
allocate effort, the PCAOB might consider recommendations that restructure audit tasks (e.g., see Earley et al. 2008; Griffith,
Hammersley, and Kadous 2015a).
Our study extends two streams of research. First, we contribute to the growing literature examining potential sources of
the challenges auditors encounter in FV auditing (e.g., Griffith, Hammersley, Kadous, and Young 2015b; Backof, Thayer,
and Carpenter 2016; Joe, Wu, and Zimmerman 2016; Maksymov, Nelson, and Kinney 2016). Bratten, Gaynor, McDaniel,
Montague, and Sierra (2013) note that management bias around the subjective aspects of FVMs is especially challenging for
auditors, and call for research, such as this study, to examine auditor-specific and task-specific factors that influence auditors’
ability to appropriately test the aspects of FVMs that are most susceptible to management bias. We also extend the literature
by providing an additional root cause for the difficulty auditors encounter when testing management’s assumptions
underlying critical estimates. Griffith et al. (2015a) identify environmental attributes (audit firms’ and auditing standards’
emphasis on verification of management’s model, and audit firm division of labor that inhibits on-the-job learning for testing
estimates) as root causes for insufficient testing of management’s assumptions and models. We provide evidence of a
cognitive root cause—auditors’ ambiguity-aversion and motivational preference for more verifiable/quantitative tasks—
leading to the degree of quantification in audit evidence influencing their choice of objective procedures over subjective
procedures (such as developing independent estimates) to test management’s estimates. Second, we extend the literature on
the impact of client risk on audit judgment. Prior studies indicate that auditor focus on increased client risk can prevent
auditors from engaging in motivated reasoning and lead to more skeptical judgments (e.g., Kadous, Kennedy, and Peecher
2003; Earley, Hoffman, and Joe 2016). However, consistent with contemporaneous findings in Aghazadeh and Joe (2015),
we demonstrate that auditors’ attention to client risk exacerbates the observed negative effect of quantification on audit judgment (i.e., the degree of quantification has a stronger impact on auditors’ planned testing of FVMs when client risk is high than when it is low).
This study also has implications for investors, practitioners, and regulators. The prior research documenting that
management opportunism in audited FVMs leads corporate boards to compensate managers based on the managers providing
biased reports of critical estimates (see Bratten et al. 2013) highlights the importance of auditors’ attestation over FVMs.
Regulators are also critical of auditors’ processes in testing management’s discretion around FVMs (e.g., FRC 2011; CPAB
2012; PCAOB 2012a, 2013). By providing new evidence on the joint effect of two factors that can unknowingly influence
auditors’ substantive testing of the discretionary inputs to the client’s FV estimates that are susceptible to management
opportunism, this study’s results have implications regarding the reliability of the audited FVMs for financial statement users.
Further, while this study uses an FV context to illustrate the effect of quantification conditioned on the level of control
environment risk, it has implications for other audit settings. For example, our results inform settings where auditors assess
evidence containing important objective and subjective inputs, which can include the going-concern decision and evaluation of
contingencies.
This study does not focus on auditors’ strategic thinking related to the degree of quantification in client-provided evidence
(e.g., Zimbelman and Waller 1999). Nonetheless, in this context, when client risk is high, strategic thinking should lead
auditors to question the client’s choice in providing a low or high degree of quantification in the specialist’s report. Such
strategic thinking should lead auditors to assign a higher proportion of audit effort to test the subjective FV inputs that are based
on management’s discretion. Our post-experimental analyses, however, suggest that auditors did not engage in strategic
thinking in this setting. Auditors evaluated the client specialist as equally competent, reliable, and credible across all four
experimental conditions. Thus, they do not appear to attribute differences in the degree of quantification in the client specialist’s
report to preparers’ motives. Our findings are intriguing because Salzsieder (2016) finds that managers ‘‘opinion shop’’ for a


specialist report that supports management’s desired FV estimate.4 Thus, in practice, variability in the degree of quantification
in client evidence could be a result of management strategy to distract auditors from testing subjective inputs, and auditors seem
to overlook that possibility.

II. BACKGROUND AND HYPOTHESIS DEVELOPMENT

Background on Auditing of Fair Value Measurements


FV auditing has received considerable regulatory attention (e.g., International Auditing and Assurance Standards Board
[IAASB] 2008; PCAOB 2008, 2009b, 2014; CPAB 2012), because it is an inherently complex audit task (e.g., Christensen,
Glover, and Wood 2012; Bratten et al. 2013) and a critical one where management can exploit the discretion around the inputs
that derive the estimate (e.g., Hilton and O’Brien 2009; Ramanna and Watts 2012). Archival research provides overwhelming
evidence that ‘‘[management’s] bias . . . present in audited financial statements, [is] consistent with auditor’s failure to detect or
correct the bias’’ (cf. Bratten et al. 2013, 18–19). Specifically germane to our setting, the research finds that management’s
choices in FV stock pricing for employee stock options, discount rates in FV of securitized assets, and other discretionary
(subjective) inputs to FVMs are made to achieve smooth earnings, meet earnings targets, increase compensation, and other self-interested outcomes (e.g., Johnston 2006; Dechow et al. 2010; Choudhary 2011).5
One aspect of FV auditing that makes it such a challenging task is that it includes several interrelated subcomponents.
These subtasks include: evaluating and testing the client’s controls over FVMs; determining whether and how the results of
those tests of controls over FVMs influence substantive procedures; planning the nature, extent, and timing of procedures to test
the FVMs using one or more substantive approach(es);6 evaluating the appropriateness of the client’s classification of FV assets
within the Accounting Standards Codification (ASC) 820 hierarchy (also referred to as ASC leveling judgments) (Financial
Accounting Standards Board [FASB] 2011); and reviewing and evaluating the adequacy of the client’s FV disclosures. In this
study, we focus on the subtask of planned substantive testing, after reviewing the results of tests of the client’s internal controls
(a component of client risk) and evidence provided in the client specialist’s report. Because auditors face budgetary and time
pressures to be efficient, we investigate the trade-off auditors make when planning substantive tests of the subjective and
objective inputs to FVMs while maintaining equivalent overall audit effort (total hours) across all settings. This focus is
motivated by empirical evidence demonstrating management opportunism in audited FVMs, regulators’ concerns regarding the
audit of the subjective inputs to FVMs, and the widespread use of valuation specialists to assist management in their
preparation of FV estimates (Choudhary 2011; Bratten et al. 2013; PCAOB 2015). The data in the client specialist’s report can
be controlled by the client (e.g., the degree of quantification) and, thus, if manipulated, has the potential to unduly influence
auditors’ substantive testing decisions when both quantified and non-quantified data are important for audit testing.7 We
consider auditors’ planning decisions because planning is a critical audit task and an iterative process requiring the evaluation of client-provided evidence, as well as evidence gathered from other audit tasks to update the planned audit approach (e.g., Knechel and
Messier 1990).8

Quantification in the Valuation Specialist’s Report


Our inspection of valuation specialists’ reports and discussions with auditors revealed that client specialist reports differ
significantly in the amount of quantitative detail provided to support their final valuation (see similar findings in Cannon and
Bedard [2017] and Glover et al. [2017a]). Prior research finds that the degree of quantification in the information received
influences judgment (e.g., Nisbett and Ross 1980; Bell 1984; Viswanathan 1993; Kadous et al. 2005; Peters et al. 2006). In

4 Bowlin (2011) also provides similar evidence of management strategy in auditor-management interaction. In his experimental market, students proxying for managers act strategically by placing intentional misstatements in areas that are less subject to auditor scrutiny. While the participants are not instructed to act as managers and auditors, the roles established for the two participant types proxy for client managers and auditors, respectively.
5 See similar findings of management bias and opportunism in the discretionary inputs to critical accounting estimates in other studies (e.g., Dietrich, Harris, and Muller 2000; Chandar and Bricker 2002; Nissim 2003; Aboody, Barth, and Kasznik 2006; Hodder, Mayew, McAnally, and Weaver 2006; Johnston 2006; Ramanna 2008; Bratten, Jennings, and Schwab 2015).
6 PCAOB auditing standard AS 2502 (formerly AU 328) prescribes three substantive approaches for auditing FVMs. They include: (1) testing management’s significant assumptions, the valuation model, and the underlying data, (2) developing independent FVMs, and (3) reviewing subsequent transactions (PCAOB 2002).
7 In this study, we focus on the influence of the client specialist’s report rather than the audit firm’s in-house valuation specialist, because without specific corporate governance controls in place, the client specialist’s report will be subject to more bias than one prepared by the audit firm’s in-house specialist (e.g., Bratten et al. 2013; Cannon and Bedard 2016).
8 PCAOB auditing standards (e.g., AS 2101) also note that ‘‘planning is not a discrete phase of an audit but, rather, a continual and iterative process that might begin shortly after (or in connection with) the completion of the previous audit and continues until the completion of the current audit’’ (PCAOB 2010a, para. 5).


particular, high numerates, such as auditors, are more influenced by quantitative data than non-quantitative data (e.g., Kadous et
al. 2005; Peters et al. 2006).9 For example, Kadous et al. (2005) find that when decision inputs are objective, quantified data are
more persuasive than non-quantified data because quantification enhances the perceived competence of the preparer.10 In the
FV context, client specialists are generally perceived as competent (Martin, Rich, and Wilks 2006; Bratten et al. 2013);
accordingly, we anticipate that the degree of quantification will not influence auditors’ beliefs about the specialist’s
competence. However, applying the general result from Kadous et al. (2005) suggests that auditors will be more influenced by
the quantitative data in the client specialist report when there is a high degree of quantification versus a low degree of
quantification. We extend the literature by examining how the degree of quantification (high versus low) influences judgment
rather than making stark comparisons of quantitative versus qualitative information (i.e., as was done in earlier studies like
Anderson, Kadous, and Koonce [2004] and Kadous et al. [2005]). Thus, by varying the degree of quantification (in contrast
with prior studies manipulating its presence versus absence) while holding competence constant at a high level, we advance the
understanding of the quantification effect in accounting.
Above, we note that the client specialist’s competence is unlikely to be the mechanism through which quantitative data will
influence auditors’ FV judgment. Rather, the psychology literature suggests that individuals’ motivational states (i.e.,
preferences) and inadequate consideration of all available information together form the mechanism through which
quantification influences audit judgment (e.g., Mack and Rock 1998; Simons and Chabris 1999; Most et al. 2001; Balcetis and
Dunning 2006). Balcetis and Dunning (2006) find that participants’ preference to perform one experimental task over another
influenced how they interpreted and perceived an ambiguous stimulus (e.g., is it the number ‘‘13’’ or the letter ‘‘B’’?). These
findings imply that in the FV audit setting, auditors’ ambiguity-aversion (e.g., Bamber, Snowball, and Tubbs 1989; Luippold and Kida 2012), high-numeracy trait, and preference to process quantitative data will motivate how they perceive and use client-provided evidence.
Auditors’ preference for tasks that are well-defined, more verifiable, and lower in ambiguity (e.g., tick and tie numbers
provided in the specialist’s report) will motivate them to perceive the quantitative evidence in the client specialist report as
more relevant for testing the reasonableness of the client’s FVMs. This will lead to higher emphasis on objective versus
subjective audit procedures. Auditors’ ambiguity intolerance (e.g., Bamber et al. 1989; Luippold and Kida 2012) will guide
their pre-attentional preferences toward the quantitative evidence in the client specialist report, which will direct where they
focus attention (Mack, Tang, Tuma, Kahn, and Rock 1992).11 Consequently, the quantitative evidence will ‘‘pop-out’’ more so
than other available audit evidence in the specialist’s report. Further, auditors’ focus on the quantitative data diverts their
attention away from other data available in the report (e.g., cues about the discretionary inputs to the FV estimate) (Neisser and
Becklen 1975; Mack and Rock 1998; Simons and Chabris 1999; Most et al. 2001; Drew et al. 2013). This diversion of attention
is consistent with Mack and Rock’s (1998, 18) finding that when a stimulus ‘‘falls outside the area to which attention is paid, it is much less likely to be seen.’’ In particular, the ‘‘looking without seeing’’ behavior is most likely to occur when
individuals perform high-concentration, highly demanding tasks, such as in FV auditing (e.g., Bratten et al. 2013). Thus, when
auditors attend to the quantitative data in the client specialist report, that focus can ‘‘act like a set of blinders’’ to shift their
attention from evaluating other evidence in the report related to the subjective inputs to the FV estimate (Drew et al. 2013,
1848).
In sum, the psychology literature discussed above predicts that when the degree of quantification of client evidence is high,
auditors’ preference for well-defined tasks that are lower in ambiguity and more verifiable will motivate them to attend to the
quantitative evidence. Therefore, when the degree of quantification is high, auditors’ focus on quantitative data in the client
specialist report will lead them to allocate more audit effort to objective audit procedures. Conversely, when the degree of
quantification in the client specialist report is low, there is less quantitative evidence available to divert the auditors’ attention
from the other, more subjective data present in the client specialist report, leading them to allocate more audit effort to
subjective procedures. Additionally, because auditors face budgetary and time pressures to be efficient, increased audit effort
devoted to objective procedures is likely to result in decreased effort toward subjective procedures. Thus, ceteris paribus, the

9 The quantification literature finds that high numerates are more influenced by quantitative (numeric) data, while the general population, less able to interpret numeric data, is more influenced by qualitative (non-numeric) data. Auditors are considered high numerates due to their professional training, ability to draw inferences from numeric data, and understanding of mathematical concepts. Importantly, numeracy is not correlated with general intelligence (Peters et al. 2006).
10 Kadous et al. (2005) also propose an alternative path where making the preparer’s incentives salient increases the decision-maker’s critical analysis and decreases the persuasiveness of quantified data. That alternative path is not relevant because our design holds incentives and auditors’ critical analysis constant across conditions. As we report later, post hoc measures validate that this aspect of our design plan was achieved.
11 ‘‘Pre-attentive processes are . . . those which occur early, are automatic, fast . . . and underlie the perception of features which ‘pop-out.’ Moreover, features which pop-out . . . are considered to be those basic to perception. These processes set up the potential candidates for subsequent attentional processing’’ (cf. Mack et al. 1992, 476–477).


allocation of audit effort to subjective procedures will be lower when the degree of quantification in the client specialist’s report
is high than when the degree of quantification in the report is low. Formally stated:
H1: Auditors will focus less planned substantive testing on subjective procedures when the degree of quantification in the
audit evidence is high than when it is low.

Control Environment Risk


H1 predicts that the degree of quantification (the key focus of our study) will influence auditor judgment. Incorporating
prior literature documenting that auditors are more cautious and careful in their judgment when client risk factors are high
(Hackenbrack and Nelson 1996; Kadous et al. 2003; Anderson et al. 2004; Earley et al. 2016) suggests that when auditing
FVMs, auditors will be attentive to the differences in control environment risk. However, PCAOB inspections charge that
auditors do not always consider the client’s control environment risk (i.e., precision of ELCs) appropriately (PCAOB 2012b).
The PCAOB inspectors claim that auditors over-rely on the client’s ELCs and do not adjust their audit approach sufficiently
when ELCs are operating with a high degree of imprecision (PCAOB 2009a, 2012b). The Board, therefore, called for more
research examining the impact of ELCs on audit judgment (Asare et al. 2013).
Prior literature indicates that auditors are sensitive to cues consistent with increased client risk (e.g., Zimbelman 1997;
Wilks and Zimbelman 2004), but that they do not always adjust their substantive procedures appropriately to match changes in
client risk (e.g., Mock and Wright 1993, 1999; Allen, Hermanson, Kozloski, and Ramsay 2006; Brazel and Agoglia 2007).
Variability in client risk, as reflected by control environment risk, is likely to be salient for auditors because both ELCs and FV
auditing have received considerable regulatory scrutiny, and particularly so for high-risk engagements (e.g., Glover et al.
2017b). Therefore, in high control environment risk settings, auditors should be sensitive to the elevated risk and focus greater
attention on testing the client’s FVMs. However, because of auditors’ cognitive tendency to prefer more objective/less
ambiguous tasks over more subjective/more ambiguous tasks, the increased attention will be reflected in a higher proportion of
objective procedures than subjective procedures when the specialist’s report contains a high versus low degree of quantified
data. Thus, we expect that increased control environment risk will exacerbate the quantification effect predicted in H1.
In low control environment risk settings, however, prior studies suggest that auditors will not be as motivated to review
client evidence as diligently as they will when control environment risk is high (Hackenbrack and Nelson 1996; Ng and Tan
2003; Anderson et al. 2004). For example, Anderson et al. (2004) find that auditors were influenced by incentives more than all
other factors, and did not attend to differences in the reliability of the client’s evidence when client risk was low. Therefore,
consistent with the literature, we expect that when control environment risk is low, the degree of quantification will not have a
material impact on the auditors’ approach to testing client information. Formally stated:
H2: The difference in auditors’ planned substantive testing on subjective procedures under low versus high quantification
conditions will be greater when control environment risk is high than when it is low.

Effectiveness of PCAOB Practice Alerts


H1 and H2 predict that auditors’ planning decisions about substantive testing are influenced by the degree of quantification
in client evidence when control environment risk is high. Allowing the amount of quantitative detail in client evidence that is
unrelated to misstatement risk to drive audit testing, particularly in high-risk scenarios, increases the risk that client
management can be opportunistic in strategically providing information to reduce auditor testing of the discretionary inputs to
FVMs. Accordingly, we explore whether practice alerts, a common approach taken by the PCAOB, result in audit judgments
that differ from those anticipated in H1 and H2.
Because audit firms face considerable competitive constraints on time and resources, it is important that any intervention designed to ensure that audit judgments limit litigation risk, reputational risk, and regulatory scrutiny be simple and cost-effective. The
PCAOB often uses staff alerts, speeches, and inspection reports to motivate auditors to conduct testing that is consistent with
their preferences—for example, to focus more attention on testing the subjective aspects of FVMs that are susceptible to
management bias (e.g., PCAOB 2007, 2010b, 2012c). Recent research and academic commentary suggest that the current
regulatory approach is not as effective as it could be in getting auditors to conform with regulators’ preferences (e.g., Peecher et
al. 2013; Glover et al. 2017b). Thus, we conduct a second experiment to investigate whether reminding auditors of the
PCAOB’s preference for more testing of management’s assumptions and/or valuation methods changes the auditors’ response
to the degree of quantification in client evidence in higher-risk settings.
Prior research finds that auditors adjust their judgment processes and judgments/decisions to align with their supervisors’
preferences (e.g., Peecher 1996; Cohen and Trompeter 1998; Peecher et al. 2013). This literature suggests that a PCAOB
reminder conveying regulators’ preferences should increase auditors’ proportion of total substantive testing hours allocated to
subjective procedures, thereby mitigating the effect of quantification on auditors’ planned testing. Psychology and auditing
studies offer mixed evidence on the efficacy of intervention techniques to change judgment tendencies. This literature finds that
prompting decision-makers to engage in System 2 thinking is sometimes successful (Kennedy 1993; Babcock, Loewenstein,
and Issacharoff 1998; Earley et al. 2008; Milkman, Chugh, and Bazerman 2009).12 Psychology research, however, also finds
that conscious effort to simply do better is futile and acknowledges that researchers encounter difficulty in developing effective
interventions to reduce or eliminate subconscious cognitive tendencies (e.g., Nosek, Greenwald, and Banaji 2007; Milkman et
al. 2009). Thus, it is questionable whether a simple alert that triggers auditors’ System 2 thinking will be effective in altering the
extent to which the degree of quantification in audit evidence affects the focus of auditors’ planned substantive testing on
subjective procedures.
Accordingly, Experiment 2 investigates whether and how reminding auditors that regulators want them to attend to the
areas in FVMs that are more susceptible to management bias influences auditors’ substantive testing decisions. Experiment 2
addresses the following research questions:
Research Questions: Is a PCAOB alert reminding auditors to focus attention on the more subjective aspects of FVMs
effective in changing auditors’ approach to testing FVMs? Specifically:
A. Does the alert change the likelihood that auditors will be influenced by the joint effects of the
degree of quantification in the client’s specialist report and the level of control environment risk?
B. Does the alert change auditors’ overall effort allocation based on the degree of quantification in
the client’s specialist report and the level of control environment risk?

III. METHOD

Participants
Ninety-two Big 4 audit seniors attending firm-wide training participated in this study.13 Table 1 presents the participants’
demographic information. The participants have an average of 2.34 years of public accounting experience, with a range of two
to five years. Approximately 75 percent of the participants indicate that their main industry was one considered to be an FV
specialty industry (i.e., financial services, real estate, oil, gas and utilities, or insurance and employee benefits) (Earley et al.
2016). On average, participants spent approximately 40 hours on FVM judgments in the last year. Our analysis reveals no
systematic differences in the demographic variables across treatment conditions; thus, randomization achieved a balanced
distribution of participants across the experimental conditions.14
Our participants’ experience level is consistent with the type of auditors (i.e., seniors) who routinely perform FVM audit
procedures, including evaluating assumptions and valuation methods for complex estimates, as reported by Griffith et al.
(2015a). Consistent with Peecher and Solomon (2001), because there is no expected interaction between our task and
participants’ experience, and no theoretical basis for using very experienced auditors, the participants are appropriate to address
our research questions.

Experiment 1
Experiment 1 tests H1 and H2. Participants were assigned the role of an in-charge auditor on an integrated audit of a
medium-sized publicly held company, Yijensco (or the Company). Their task was to plan the year-end substantive procedures
to be performed to test the FVM of Yijensco’s investment securities. Participants received the following background
information: (1) the audit team has already completed the internal control evaluation; (2) Yijensco’s holding in investment
securities is material to the Company’s financial statements taken as a whole; and (3) the Company engaged third-party
independent valuation experts, MoonStar Group, to determine the FVs of its investment securities. To hold the professional
qualifications, knowledge, and competence of the client specialist constant across all conditions, the case informs participants
that the audit team evaluated the professional qualifications of MoonStar and did not note any concerns regarding MoonStar’s
professional qualifications.

12 System 2 thinking refers to reasoning that is slower, conscious, effortful, explicit, and logical, whereas System 1 thinking is our intuition, which is often fast, automatic, and emotional (cf. Milkman et al. 2009).
13 This study received exemption from Human Research Subject Regulations. No further action or Institutional Review Board (IRB) oversight was required.
14 Our final sample is 92 participants after eliminating 17 participants who either did not complete all of the post-experimental questions or whose responses to both manipulation check questions were at the opposite end of the response scale from their assigned treatment conditions. The qualitative inferences and statistical results of tests of our hypotheses remain the same if we include these 17 participants who were inattentive to the instructions or materials.


TABLE 1
Demographics
Experiment 1
Total Number of Participants = 92

Panel A: Demographics

                                                Mean    (Standard Deviation)
Years of audit experience                       2.34    (0.67)
Years at current firm                           2.29    (0.56)
Number of engagements as in-charge auditor      2.01    (2.26)
Number of engagements with FVM judgment         1.60    (1.67)
Hours spent on FVM judgment last year          38.88   (55.37)

Panel B: Primary Industry of Clients Served


Frequency (Percentage)
Banking and Financial Services 44 (47.82%)
Insurance and Third-Party Administrative/Workers Compensation 15 (16.30%)
Real Estate 6 (6.52%)
Telecommunications 4 (4.35%)
Technology/Electronics 4 (4.35%)
Manufacturing/Industrial 4 (4.35%)
Oil and Gas/Mining 4 (4.35%)
Healthcare and Pharmaceuticals 3 (3.26%)
Consumer Products and Retail 3 (3.26%)
Other 5 (5.43%)

Following the background information, participants reviewed information regarding the audit team’s assessment of
Yijensco’s control environment risk. The materials indicate that the ‘‘audit team detected no control deficiencies, which rose to
the level of a material weakness, based on testing of the operating effectiveness of the associated Yijensco process-level
controls,’’ which is constant across all conditions.15
Our design features a substantive planning audit task requiring participants to select the appropriate audit procedures and to
allocate the time for each procedure selected. It is important to use a planning task rather than an evidence evaluation task
because planning allows auditors to allocate time to test any evidence they deem to be important (including evidence that they
obtain from the client, the client specialist, or other sources). The materials reviewed before the planning judgments included
information regarding the FVM investment security, a mortgage-backed security (MBS), and excerpts from the MoonStar
report that were used to arrive at the FVM for the MBS. Following their review of the materials, participants selected which
substantive testing procedures (from a list) they would include to test Yijensco’s year-end MBS balance. Based on our
discussions with practicing auditors, the total hours participants could allocate to substantive testing ranged from 20 to 36
hours. Experiment 1 ended with the completion of the post-experimental questionnaire. Figure 1 presents the sequential flow of
the experimental procedures.16

Independent Variables
We use a 2 × 2 between-subjects design, which varies the quantification level (low versus high) and control environment
risk (low versus high). We manipulate quantification by varying only the level of detail (amount of quantified data) about key
model inputs provided by the client specialist (refer to Appendix A for examples of the quantification manipulation). In both
conditions, the client specialist’s report includes information regarding the pricing methodology for the MBS, the assumed
prepayment speed and average interest yield, and an explanation of the determination of the assumed prepayment speed and average treasury interest yield used to derive the MBS's FV estimate.

FIGURE 1
Sequence of Experimental Procedures

* Control environment risk is manipulated as low versus high.
** The quantification level provided in the client valuation specialist's report is manipulated as low versus high.

15 Process-level internal controls are controls that specifically address management's financial reporting assertions and risks (Protiviti Inc. 2007).
16 As illustrated in Figure 1, due to time limitations and availability of participants from the CPA firm to test all experimental conditions and tasks, following Experiment 1, participants were randomly assigned to either Experiment 2 (the PCAOB Reminder Task) or Task 2 (Substantive Procedure Classification Task). We discuss the post-Experiment 1 tasks in more detail later in the paper.
The low quantification condition presents the assumed prepayment speed and the average interest yield used to determine
the value of the MBS. In the high quantification condition, participants receive the information provided in the low
quantification condition, along with detail quantified information that can be used to recalculate the assumed prepayment rate
and average treasury interest yield without going to another source for the supporting data. An important feature of our
quantification manipulation is that the detail information supporting both the prepayment speed and the average treasury
interest yield data are publicly available. Given that the task is to plan the audit of the FVM (as opposed to an evidence
evaluation task), if auditors believe it is critical to test the quantified detail supporting the prepayment speed and average
interest yield, then they can allocate audit time to obtain and test this supporting data. Further, while the high quantification
condition provides more numeric detail on the amounts that accumulate to the assumed prepayment speed and average interest
rate used in preparation of the FV estimate, alone, they do not provide sufficient evidence to evaluate the reasonableness of the
client specialist’s assumptions or the accuracy of the underlying data used to derive the estimate.17 Therefore, the
operationalization of quantification in our study allows us to hold the diagnosticity of the evidence constant while focusing on
how differences in the form of client evidence can influence auditors’ substantive testing decisions. In addition, our
manipulation of quantification is less extreme than prior accounting studies that report no difference in the information content
of quantitative versus qualitative experimental conditions (e.g., Anderson et al. 2004; Kadous et al. 2005).
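To illustrate how the two discretionary inputs described above, an assumed prepayment speed and a discount yield, drive a pass-through MBS fair value, the following Python sketch discounts projected monthly pool cash flows. This is a simplified illustration with hypothetical figures, not a reconstruction of the MoonStar report or the experimental materials; practitioner valuations model prepayment behavior far more richly.

```python
def mbs_fair_value(balance, coupon, years, cpr, yield_rate):
    """Discount projected monthly cash flows of a level-pay mortgage pool.

    balance:    current pool principal
    coupon:     annual pool coupon rate (e.g., 0.06)
    years:      remaining term in years
    cpr:        assumed annual conditional prepayment rate (discretionary input)
    yield_rate: annual discount yield, e.g., a treasury yield plus a spread
                (discretionary input)
    """
    n = years * 12
    smm = 1.0 - (1.0 - cpr) ** (1.0 / 12.0)    # single monthly mortality
    r = coupon / 12.0                           # monthly coupon rate
    d = yield_rate / 12.0                       # monthly discount rate
    pv = 0.0
    for t in range(1, n + 1):
        if balance <= 0.0:
            break
        remaining = n - t + 1
        # level payment recast on the surviving balance over the remaining term
        pmt = balance * r / (1.0 - (1.0 + r) ** -remaining)
        interest = balance * r
        sched_prin = pmt - interest
        prepay = (balance - sched_prin) * smm   # prepayment of surviving principal
        pv += (interest + sched_prin + prepay) / (1.0 + d) ** t
        balance -= sched_prin + prepay
    return pv
```

Because the estimate is sensitive to both assumptions, small changes in the assumed CPR or yield can move the fair value materially, which is why regulators want audit effort focused on how these inputs were chosen rather than only on recalculating the model's arithmetic.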
The second independent variable, control environment risk, is manipulated at two levels of ELC strength. Appendix B
provides examples of our control environment risk manipulation. We operationalize control environment risk by varying the
overall assessment of Yijensco’s ELCs over the investment process and the description of the key ELCs identified by the audit
team (see Appendix B for the ELC strength manipulation). In the low control environment risk condition, participants are told that the audit team assessed Yijensco's ELCs over the investment process as strong. Consistent with the overall assessment, descriptions of key ELCs also suggest strong ELCs. The list of ELC risk indicators was developed based on our review of audit programs from international public accounting firms and auditing textbooks. In the high control environment risk condition, participants are told that the audit team assessed Yijensco's ELCs over the investment process as moderate. Further, descriptions of ELCs also indicate weaker controls than in the low control environment risk condition.

17 It is possible that in the high quantification condition, participants could have planned to conduct some statistical analyses on the supporting data that were not readily apparent in the low quantification condition. However, it is unclear whether this possibility would increase or decrease the auditors' relative focus on subjective versus objective audit procedures, as statistical procedures that use objective inputs still require subjective interpretation. Further, auditors who contemplate statistical procedures might well be more apt to plan to prepare an independent estimate. We thank the editor for highlighting the importance of emphasizing this point.
A pre-experimental panel of FV audit experts (two partners and four managers) assisted us in designing the experimental
instrument to minimize both internal and external validity concerns. First, the panel reviewed the materials for realism and
consistency, and provided feedback to ensure that there was no difference in the information content and the diagnosticity of
evidence provided across the two quantification conditions. Second, the pre-experimental panel reviewed the list of ELC risk
indicators and confirmed that they were valid descriptors of the two levels of control environment risk used in the study.
Third, this panel reviewed and validated the list of audit procedures that form the basis of our dependent variables discussed
below and provided the budget range for the substantive audit of FVMs similar to the one in this study.

Dependent Variables
Participants received a list of 19 typical substantive audit procedures (taken from the audit work papers of international
audit firms) to test the FVM of the investment securities with space available to indicate the number of minutes, if any, they
would allocate to each procedure. For procedures selected, participants also indicate the number of minutes they would allocate
to the core audit team versus the audit firm’s in-house valuation specialist. The list included both objective procedures (i.e.,
those that focus on verification of management’s model components that require little judgment) and subjective procedures
(i.e., those that focus on evaluation of the reasonableness of significant assumptions and model selection that are more
susceptible to management bias). (Appendix C provides examples of the objective and subjective substantive audit procedures.)
The pre-experimental expert panel classified the substantive procedures as subjective or objective audit procedures. We
further validate the ex ante classification of the audit procedures by having a subset of the participants (n = 34) rate the level of
objectivity (less judgment required) versus subjectivity (more judgment required) associated with each of the substantive
procedures. During Phase 2 of our experiment, following the completion of the post-experimental question for Experiment 1,
the 34 participants rated the procedures using a seven-point scale, where ‘‘1’’ indicated ‘‘more objective’’ and ‘‘7’’ indicated
‘‘more subjective.’’ All audit procedures with a mean participant rating less than 4 are classified as objective procedures, while
all procedures with a mean participant rating greater than 4 are classified as subjective procedures. One procedure received a
mean rating of 4 (the middle of the scale) and we exclude this procedure from our dependent variable analyses.18 A key strength
in the measurement of our dependent variables is that a subset of the Experiment 1 participants classify the 19 audit procedures.
Incorporating these participants' classifications ensures that our classification of audit procedures as objective versus subjective: (1) is based on a representative sample of the participants' own classifications, (2) is not subject to author bias, and (3) coincides with the classifications of the pre-experimental and post-experimental panels of FV audit experts.
It is possible that auditors’ classification of the audit procedures as objective versus subjective captures only the extent of
subjectivity associated with the nature of the procedures (i.e., the level of auditors’ judgment required), while overlooking the
extent of subjectivity associated with the object of the procedure (i.e., the level of testing of management’s judgment). To verify
that the list of procedures we use in this study measures the relative subjectivity associated with both the nature and object of
the test, we validated these procedures with a post-experimental panel of five national FV expert partners from the largest
international accounting firms. Two of the panelists are quality review pre-issuance FV experts at their respective firms. The
post-experimental panel classified each of the procedures by its nature (more objective [less auditor judgment] versus more
subjective [more auditor judgment]) and its object (more objective [less management judgment] versus more subjective [more
management judgment]). We find that across the 19 audit procedures, the audit experts’ classification of the nature and object
of each procedure matched as being either both objective or both subjective approximately 96 percent of the time.19 Recall that the post-experimental panel also rated the importance of each of the 19 procedures on a seven-point scale (where 1 = not at all important and 7 = very important). Eighteen of the audit procedures had a mean rating of 6 or higher, and one procedure had a mean of 5.4. There was no statistical difference in the mean importance rating of the procedures classified as objective and those classified as subjective (p > 0.10). Thus, the post-experimental expert panel confirms that (1) expert FV auditors consider the audit procedures used in this study to be important in testing the clients' FVMs; (2) the procedures classified as subjective versus objective do not differ in importance when conducting substantive tests of FVMs; and (3) the classification of audit procedures as either subjective or objective takes into account both the object and nature of the procedures.

18 After excluding the procedure rated as neutral (i.e., 4), the participants' classifications match the pre-experimental (post-experimental) panel's classification for 16 (15) of the remaining 18 procedures. We find no statistical difference in the classification of audit procedures determined by the Phase 2 participants and the classifications determined by the pre-experimental and post-experimental panels (χ²(1) = 0.45, p = 0.52; and χ²(1) = 1.00, p = 0.32, respectively). The analyses employed for hypothesis and supplemental testing are based on the participants' classifications. The results and conclusions do not differ if the pre-experimental or post-experimental panel's classifications are used as a basis for the dependent variables. The Phase 2 participants' ratings include participants who failed the manipulation check related to the Experiment 1 planning task, as the classification task is not tied to the manipulations in Experiment 1. Our conclusions do not differ if we also exclude those participants from the classification task. Importantly, the fact that the Phase 2 participants' classifications of the audit procedures correspond so closely with the classifications of the pre-experimental and post-experimental panels provides additional assurance that the participants have the requisite experience and competence to complete our experimental task.

19 We thank an anonymous reviewer and the editor for the idea that the objectivity/subjectivity of a procedure can differ based on both the nature and the object of the given audit procedure.
Our dependent variable is the participants’ proportion of planned substantive audit hours allocated to subjective testing
(PROPSUBJ). PROPSUBJ is computed as the total hours allocated to audit procedures classified as subjective procedures
divided by the total hours of planned subjective and objective substantive testing of the MBS. Audit procedures are classified as
either subjective or objective based on a dichotomous split of the mean classification ratings of the audit procedures, provided
by the participants in Phase 2 of our study.
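A minimal sketch (with hypothetical allocations) of the PROPSUBJ computation described above, in which hours allocated to any procedure rated at the scale midpoint are excluded from both the numerator and the denominator:

```python
def propsubj(hours, classes):
    """Proportion of substantive hours allocated to subjective procedures.

    hours:   dict of procedure name -> allocated hours
    classes: dict of procedure name -> 'subjective', 'objective', or 'excluded'
    """
    subj = sum(h for p, h in hours.items() if classes.get(p) == "subjective")
    obj = sum(h for p, h in hours.items() if classes.get(p) == "objective")
    total = subj + obj              # midpoint-rated procedures drop out
    return subj / total if total else 0.0
```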
As discussed earlier, the dependent variables capture the trade-off auditors encounter when allocating audit effort to test
their client’s FVM. Auditors must decide which substantive procedures to perform and how much time to allocate between the
subjective versus objective procedures selected, since each hour allocated to performing objective procedures reduces the time
available to perform subjective procedures. To ensure that budgetary constraints did not force participants to omit audit
procedures, participants had a wide range (20–36) of hours to allocate to test the MBS. The maximum of 36 hours was
determined by adding several hours to the high end of the range provided by our pre-experimental panel of FV experts for
substantive testing of securities similar to the one in our study.

Experiment 2
Experiment 2 addresses the research questions. Of the 92 participants in Experiment 1, 64 were randomly assigned to
Experiment 2. Figure 1 details the experimental sequence. Participants completed the post-experimental questions for
Experiment 1 before beginning Experiment 2.20 In Experiment 2, participants received a second copy of the materials for the
Yijensco audit, which is identical to the information provided in Experiment 1. The list of procedures available for
consideration, the range of hours to be allocated (20 to 36 hours), and the independent and dependent variables were identical to
Experiment 1. Also, participants remained in the same treatment condition they were randomly assigned to in Experiment 1.
Participants in all treatment conditions received the following PCAOB alert in Experiment 2:
The PCAOB encourages auditors to test measurements and inputs that might be more susceptible to management bias
when observable market prices are not available and to address the risk of material misstatement resulting from
management bias.
Participants were asked to consider the PCAOB’s alert to auditors when selecting the planned audit procedures to test the
client’s FVM balance. This PCAOB alert likely serves as a reminder to the participants who are already aware of the PCAOB’s
emphasis from audit firm training, audit firm technical updates, and the PCAOB inspection reports.

IV. RESULTS

Experiment 1
Manipulation Checks
Recall that we manipulated quantification (low versus high) based on the amount of detail supporting two key inputs to the
client-hired expert’s report: prepayment speed and interest rate. Participants rated the degree of quantification in the MoonStar
report on a seven-point scale, where ‘‘1’’ indicated ‘‘provided limited quantitative detail’’ and ‘‘7’’ indicated ‘‘provided
significant quantitative detail.’’ The difference in mean rating (3.51 in the low versus 4.89 in the high) in the two treatment
conditions was statistically significant (t(90) = 4.01, p < 0.001, one-tailed), indicating that our manipulation was successful. Our
second independent variable is control environment risk (high versus low). Participants rated the internal controls at the client
using a seven-point scale, where ‘‘1’’ indicated ‘‘moderate internal controls’’ (i.e., high risk) and ‘‘7’’ indicated ‘‘very strong
internal controls’’ (i.e., low risk). The difference in mean rating (3.19 in the high versus 5.38 in the low) of the two control risk
treatment conditions was statistically significant (t(90) = 8.00, p < 0.001, one-tailed), confirming that our manipulation was
successful.
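The manipulation-check statistics above are standard pooled-variance independent-samples t-tests; with 92 participants split across two treatment levels, the degrees of freedom are n1 + n2 - 2 = 90, matching the reported t(90). A stdlib sketch (illustrative data only, not the study's responses):

```python
from statistics import mean, variance

def pooled_t(group1, group2):
    """Independent-samples t statistic with pooled variance.

    Returns (t, df) with df = n1 + n2 - 2.
    """
    n1, n2 = len(group1), len(group2)
    sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    se = (sp2 * (1.0 / n1 + 1.0 / n2)) ** 0.5
    return (mean(group1) - mean(group2)) / se, n1 + n2 - 2
```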

20 Prior research documents that the post-experimental task is sufficient and effective in clearing short-term memory (e.g., Heaps and Henley 1999; Borthick, Curtis, and Sriram 2006).


Before conducting tests of our hypotheses, we analyze two key measures to demonstrate that our results are not driven by
overall audit effort or available audit budget. First, as reflected in Table 2, Panel A, participants’ mean rating of the budget
sufficiency (using a scale where 1 = less than needed, 4 = about right, and 7 = more than needed) is 4.17, which is not statistically different from the scale midpoint (t(91) = 1.11, p = 0.27, two-tailed), indicating that participants judged the budget to be adequate and were not constrained by the audit budget provided in our study. Importantly, the assessed budget sufficiency does not differ across the four treatment conditions (no main effect of Quantification, p = 0.33, or Control Environment Risk, p = 0.81, and no interaction, p = 0.51, two-tailed). Second, we demonstrate that the degree of quantification in the client specialist report and control environment risk do not affect total audit effort to test the client's MBS as measured by TOTAL_HOURS. The mean TOTAL_HOURS (presented in Panel A of Table 2) is 25.90 when Quantification is low, compared with 26.36 when Quantification is high. When Control Environment Risk is low, the mean TOTAL_HOURS is 26.93, compared to 25.33 when Control Environment Risk is high. ANCOVA results, presented in Panel C of Table 2, reveal no significant Quantification or Control Environment Risk main effects, or interactive effects, for TOTAL_HOURS (p = 0.72, p = 0.22, and p = 0.50, two-tailed, respectively). Further, the overall mean TOTAL_HOURS (presented in Panel A of Table 2) allocated to testing the MBS security was 26.21, which is substantially below the maximum hours (36) available for testing the MBS. Together, these analyses indicate that the results presented below are not driven by differences in total audit effort.
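The budget-sufficiency check above is a one-sample t-test of the mean rating against the scale midpoint; with 92 participants it yields the reported statistic on 91 degrees of freedom. A stdlib sketch (illustrative data only):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(xs, mu):
    """t statistic testing whether the mean of xs differs from mu (df = n - 1)."""
    n = len(xs)
    return (mean(xs) - mu) / (stdev(xs) / sqrt(n)), n - 1
```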

Hypotheses Tests
Panel B of Table 2 provides the mean and standard deviation for the participants’ substantive testing decisions by treatment
condition for the dependent variable, PROPSUBJ. Because H2 predicts an interaction effect for Quantification × Control Environment Risk, we present the results of our test of H2 before presenting and interpreting the results for H1.
H2 predicts that the difference between audit hours allocated to subjective procedures in the low versus high Quantification
treatments would be greater when Control Environment Risk is high compared to when it is low. Figure 2 presents the
experimental results for PROPSUBJ, which are consistent with our predictions. As presented in Panel B of Table 2, the
adjusted means for PROPSUBJ are consistent with H2. The difference of the PROPSUBJ adjusted means in the low versus
high quantification cells (52.0 percent versus 43.0 percent) in the high control environment risk condition is 9.0 percent, while
the difference of the PROPSUBJ adjusted means in the low versus high quantification cells (47.0 percent versus 48.9 percent)
in the low control environment risk condition is only 1.9 percent. ANCOVA results presented in Panel D of Table 2 confirm a significant Quantification × Control Environment Risk interaction (F(1,87) = 4.57, p = 0.02, one-tailed).21 Thus, H2 is supported.
Given the significant interaction effect of Quantification × Control Environment Risk, we perform simple effects analysis
to test H1 for the effect of Quantification on PROPSUBJ. Specifically, H1 predicts that when Quantification is high, auditors
will allocate less effort to subjective procedures in comparison with when Quantification is low. Consistent with our
expectations, the simple effects analysis presented in Panel D of Table 2 shows that when Control Environment Risk is high, the
difference in PROPSUBJ adjusted means (52.0 percent versus 43.0 percent) is statistically significant (F(1,87) = 5.81, p = 0.01).
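The H2 interaction can be read as a difference-in-differences on the cell means. The sketch below applies that reading to the adjusted means reported above; the significance of this contrast is what the reported ANCOVA F-test evaluates, after the covariate adjustment described in footnote 21, so this is a back-of-envelope restatement rather than the test itself.

```python
def quantification_ddiff(cells):
    """Difference-in-differences on cell means: the quantification effect
    (low minus high quantification) under high control environment risk,
    minus the same effect under low risk.

    cells: dict of (quantification, risk) -> mean PROPSUBJ in percent.
    """
    high_risk_effect = cells[("low", "high")] - cells[("high", "high")]
    low_risk_effect = cells[("low", "low")] - cells[("high", "low")]
    return high_risk_effect - low_risk_effect

adjusted_means = {  # adjusted means reported for PROPSUBJ (percent)
    ("low", "high"): 52.0, ("high", "high"): 43.0,
    ("low", "low"): 47.0, ("high", "low"): 48.9,
}
```

With these cell means the contrast equals 10.9 percentage points (9.0 minus negative 1.9), the gap that drives the significant interaction.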
We further analyze the process through which auditors’ planned testing of FVMs can be impacted by the degree of
quantification in audit evidence by examining auditors’ effort allocation to the 19 audit procedures across experimental
conditions. Table 3 presents details of the ten procedures (Top 10) with the highest overall mean hours allocated for all of the
treatment conditions (Panel A), which aggregate to 19.15 hours (73.0 percent) of the planned testing of the FV estimate (Panel
B).
Consistent with the theoretical predictions and results of our statistical analyses, we focus on the low versus high
Quantification conditions when Control Environment Risk is high because these conditions are the source of the effects
observed in this study. As shown in Table 3, Panel A, auditors’ effort allocation differences across Quantification conditions for
four procedures are consistent with theoretical expectations. Effort allocated across these four procedures totals approximately
ten hours, or 50 percent of the hours allocated to the Top 10 procedures. Effort allocation for only one procedure (mean = 1.79
hours and 9 percent of hours allocated to the Top 10 procedures) was not consistent with expectations, and there was no
statistical difference in effort allocation for the remaining five Top 10 procedures.
Table 3, Panel A indicates that the largest difference in auditors’ effort allocation between high and low quantification
occurred in Procedure 4 (testing management’s FVM by developing an independent estimate). Consistent with expectations,

21 We performed analysis to determine whether any of our demographic variables are significant covariates. Only the number of engagements in which the participant was the in-charge audit senior (NUM_ENGAGEMENT) is a significant covariate and is, thus, included in our model. Additional analysis indicates an inverse relation between NUM_ENGAGEMENT and PROPSUBJ; that is, PROPSUBJ increases as NUM_ENGAGEMENT decreases. This relation suggests that more audit experience does not necessarily reduce the preference to perform objective rather than subjective audit procedures. Our conclusions do not differ if NUM_ENGAGEMENT is excluded from our model. Further, NUM_ENGAGEMENT does not interact with Quantification or Control Environment Risk.

The Accounting Review


Volume 92, Number 5, 2017
Use of High Quantification Evidence in Fair Value Audits: Do Auditors Stay in their Comfort Zone? 101

TABLE 2
Experiment 1 Results

Panel A: Means (Standard Deviation) of Sufficiency of Budgeted Hours Available (Budget_Sufficiency) and Total
Hours Allocated to Substantive Procedures (TOTAL_HOURS) by Quantification Level and Control Environment Risk
Quantification
Control Environment Risk Low High Overall
Low
Budget_Sufficiency 4.09 (1.59) 4.19 (1.65) 4.14
TOTAL_HOURS 26.26 (5.92) 27.60 (6.83) 26.93
n = 23 n = 26 n = 49
High
Budget_Sufficiency 3.95 (1.65) 4.48 (1.03) 4.21
TOTAL_HOURS 25.55 (6.07) 25.12 (5.54) 25.33
n = 22 n = 21 n = 43
Overall
Budget_Sufficiency 4.02 4.32 4.17
TOTAL_HOURS 25.90 26.36 26.21
n = 45 n = 47 n = 92

Panel B: Adjusted Means (Standard Deviation) for Dependent Variable (PROPSUBJ) by Quantification Level and
Control Environment Risk
Quantification
Control Environment Risk Low High Overall
Low
PROPSUBJ 47.0% (0.111) 48.9% (0.126) 47.9%
n = 23 n = 26 n = 49
High
PROPSUBJ 52.0% (0.15) 43.0% (0.11) 47.5%
n = 22 n = 21 n = 43
Overall
PROPSUBJ 49.5% 45.9%
n = 45 n = 47

Panel C: ANOVA Results for Total Hours Allocated to Substantive Procedures (TOTAL_HOURS)
Source Sum of Squares df Mean Square F-value p
Quantification 4.720 1 4.720 0.12 0.72
Control Environment Risk 58.237 1 58.237 1.54 0.22
Quantification × Control Environment Risk 17.733 1 17.733 0.47 0.50
Error 3,326.851 88 37.805

Panel D: ANCOVA and Simple Effects Results for the Variable (PROPSUBJ)
Source Sum of Squares df Mean Square F-value pa
NUM_ENGAGEMENT 0.102 1 0.102 6.79 0.01
Quantification 0.029 1 0.029 1.93 0.08
Control Environment Risk 0.000 1 0.000 0.02 0.88
Quantification × Control Environment Risk 0.069 1 0.069 4.57 0.02


TABLE 2 (continued)
Sum of Mean
Source Squares df Square F-value pa
Error 1.305 87 0.012
Simple Effects Tests
Effect of Quantification given low Control Environment Risk 1 0.29 0.59
Effect of Quantification given high Control Environment Risk 1 5.81 0.01
a Given the directional expectations for Quantification and Quantification × Control Environment Risk, p-values presented are one-tailed.

Variable Definitions:
NUM_ENGAGEMENT = number of engagements in which the participant was the in-charge senior;
Quantification = quantification level provided in the client-hired valuation specialist's report, manipulated at two levels: low versus high;
Control Environment Risk = control environment risk, manipulated at two levels: high versus low, by varying the strength of the entity-level controls (moderate versus strong);
PROPSUBJ = proportion of total substantive hours allocated to subjective substantive procedures in Experiment 1;
TOTAL_HOURS = total hours allocated to substantive procedures;
TOTAL_SUBJECTIVE = hours allocated to subjective substantive procedures;
TOTAL_OBJECTIVE = hours allocated to objective substantive procedures; and
Budget_Sufficiency = rating regarding whether the 36 maximum budgeted hours were sufficient time to effectively audit the MBS, using a seven-point scale with "1" being "less than needed," "4" being "about right," and "7" being "more than needed."

when Quantification was low, auditors allocated 8.09 hours to this subjective procedure, but when Quantification was high,
only 4.40 hours were allocated to Procedure 4. Interestingly, the post-experimental expert panel rated this audit procedure as the
least important of the procedures they evaluated, which mirrors the observations Griffith et al. (2015a) gleaned from interviews
with experienced FV auditors: auditors are less likely to develop an independent estimate when testing the client's FV
estimate. Our finding that the degree of quantification in management-provided evidence influences auditors' effort allocation
to developing independent estimates is noteworthy because prior findings show that management exerts greater influence on
auditors' testing, and can bias auditors' judgment, when management's evaluation is the starting point for auditors' decision-making (e.g., McDaniel and Kinney 1995; Earley et al. 2008).22
Panel B of Table 3 shows a marginally significant difference in hours allocated across quantification levels to objective
procedures that are part of the Top 10 procedures (F(1,88) = 2.99, p = 0.09) and, consistent with expectations, a significant
difference in hours allocated across quantification levels to subjective procedures that are part of the Top 10 procedures (F(1,88) =
5.72, p = 0.02). In contrast, there is no difference (F(1,88) = 0.05, p = 0.82) in hours allocated to objective procedures that are not
part of the Top 10 procedures in the low versus high Quantification conditions when Control Environment Risk is high. Further,
hours allocated to the subjective procedures that are not part of the Top 10 procedures are not consistent with expectations (i.e.,
they are in the opposite direction: mean = 2.57 hours in the low Quantification condition and 4.00 hours in the high
Quantification condition; F(1,88) = 5.01, p = 0.03).23
Collectively, these results support our hypotheses and demonstrate that when the client's control environment
risk is high, auditors' trade-off between subjective and objective procedures is influenced by the degree of quantification in
the client's evidence.24 Additional analysis of the procedures selected shows that our results are driven by the Top 10 procedures.
In contrast, when control environment risk is low, there was no difference in the trade-off in auditors' planned testing of
subjective versus objective procedures.

Sensitivity Analyses and Supplemental Analyses


Our dependent variable, PROPSUBJ, treats all subjective procedures as being equal in subjectivity and all objective
procedures as being equal in objectivity. As a robustness check, we constructed an alternative dependent variable

22 We thank the editor for encouraging us to explore how the degree of quantification drives auditors' selection of specific procedures, which allows us to establish a link with the prior literature on the effectiveness of an independent audit approach.
23 Further detailed analyses (untabulated) reveal that there were limited differences in auditors' effort allocation in the low-risk setting. For example, there is no statistical difference (p > 0.10) in hours allocated to nine of the Top 10 procedures in the low versus high Quantification conditions. The only statistical difference (F(1,88) = 3.29, p = 0.07) observed in the low-risk setting was in the opposite direction of expectations (2.00 hours in the low Quantification condition versus 1.29 hours in the high Quantification condition) and occurred in Procedure 5.
24 We also substituted alternative dependent variables in the ANCOVA model: (1) total subjective hours allocated; (2) the difference between objective hours and subjective hours allocated; and (3) the difference between objective hours and subjective hours allocated scaled by the total hours allocated. Untabulated results indicate that our findings for H1 and H2 are robust to these alternative measures of the dependent variable.


FIGURE 2
Observed Interaction
Experiment 1

Variable Definitions:
Quantification = quantification level provided in the client valuation specialist's report, manipulated at two levels: low versus high;
Control Environment Risk = control environment risk, manipulated at two levels: high versus low, by varying the strength of the entity-level controls (moderate versus strong); and
PROPSUBJ = proportion of total substantive hours allocated to subjective substantive procedures in Experiment 1.

(SUBJSCORE) that considers the level of subjectivity and objectivity of the audit procedures. SUBJSCORE is calculated by
converting the participants' classification ratings of the audit procedures (scale = 1 to 7) to a plus-minus scale (-3 to +3), where
the initial mean classification ratings are transformed as follows: 1 = -3; 2 = -2; 3 = -1; 4 = 0; 5 = +1; 6 = +2; 7 = +3. We calculate
SUBJSCORE as the mean transformed rating for each procedure multiplied by the percentage of time allocated to that
procedure (i.e., the time the participant allocated to the procedure scaled by the total hours allocated by the participant), summed
across procedures. Therefore, a more negative SUBJSCORE indicates greater auditor focus on objective procedures, while a more
positive SUBJSCORE indicates greater focus on subjective procedures. We then substitute SUBJSCORE as the dependent variable
in all of the tests of the hypotheses. For all hypotheses, our conclusions remain unchanged when SUBJSCORE is applied as the
dependent variable. Further, untabulated analyses indicate that SUBJSCORE and PROPSUBJ are highly correlated (r(90) = 0.89, p < 0.0001).
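The SUBJSCORE construction can be sketched in a few lines. The ratings and hours below are hypothetical illustrations, not data from the experiment:

```python
def subjscore(mean_ratings, hours):
    """Time-weighted subjectivity score for one participant.

    mean_ratings: mean 1-7 classification rating for each procedure; subtracting
    the scale midpoint (4) maps the ratings onto the -3..+3 scale.
    hours: hours the participant allocated to each procedure.
    """
    total_hours = sum(hours)
    return sum((r - 4) * (h / total_hours) for r, h in zip(mean_ratings, hours))

# Hypothetical participant: 3 hours on a procedure rated 2 (objective),
# 1 hour on a procedure rated 6 (subjective).
print(subjscore([2, 6], [3, 1]))  # -1.0, i.e., a net focus on objective procedures
```

A participant who spent all of their time on procedures rated 7 would score +3, the subjective extreme of the scale.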
To rule out alternative explanations for the observed results, we conduct a series of additional analyses. First, we test whether
the results are driven by the participants having differential interpretations of the ASC 820 hierarchy level of the MBS in the
experimental materials. Participants rated Yijensco's MBS using a seven-point scale, where "1" indicated "Level 2" and "7"
indicated "Level 3." Mean ASC level ratings (ASC_Levels) and frequency of classifications (Freq_Classifications) by
treatment condition are presented in Panel A of Table 4. The participants' ASC level ratings are classified into three groups
(less than or equal to 3 = "Level 2"; 4 = "Neutral" [i.e., neither Level 2 nor Level 3]; greater than or equal to 5 = "Level 3").
Untabulated results indicate that there was no difference in participants' classification of the MBS as a Level 2 versus Level 3
security across treatment conditions (χ²(6) = 5.64, p = 0.47). Thus, we rule out the alternative explanation that auditors' planned
substantive testing of the client's MBS investment was driven by differential interpretations of the ASC hierarchy of the MBS.


TABLE 3
Analysis of Hours Assigned to Audit Procedures in Experiment 1

Panel A: Top 10 Audit Procedures in High Control Environment Risk

Each row reports the Expert Panel Importance Rating; mean hours assigned (standard deviation) under Low Quantification, High Quantification, and Overall; and the simple effects statistics, F(1,88) and p-value.

Procedure 4 (Subjective); Importance 5.4: Develop an independent estimate of fair value (i.e., an auditor-developed model) to corroborate the Company's fair value measurement and identify exceptions. Low 8.09 (5.96); High 4.40 (4.07); Overall 5.94 (5.15); F = 5.75, p = 0.01*
Procedure 12 (Objective); Importance 6.8: Obtain supporting evidence for significant assumptions (observable and unobservable inputs). Low 1.84 (1.34); High 1.17 (0.75); Overall 1.79 (1.28); F = 3.15, p = 0.08
Procedure 6 (Objective); Importance 6.4: Test the completeness and accuracy of any client data supplied to the specialist. Low 1.57 (0.99); High 1.62 (1.01); Overall 1.70 (0.99); F = 0.03, p = 0.87
Procedure 8 (Subjective); Importance 6.8: Determine that the model used in the projection is appropriate in relation to the security being evaluated. Low 1.59 (2.05); High 1.17 (0.68); Overall 1.58 (1.52); F = 0.84, p = 0.36
Procedure 10 (Subjective); Importance 6.8: Evaluate whether assumptions are reasonable and consistent with market information. Low 1.05 (1.00); High 1.38 (0.74); Overall 1.55 (1.32); F = 0.72, p = 0.40
Procedure 5 (Objective); Importance 6.2: Consider the sensitivity of the valuation to changes in significant assumptions. Low 1.34 (1.41); High 1.40 (1.29); Overall 1.52 (1.46); F = 0.02, p = 0.89
Procedure 14 (Objective); Importance 6.8: For those inputs that are observable (i.e., inputs that reflect the assumptions market participants would use and are based on market data obtained from independent sources), identify source(s) and agree to source(s). Low 1.11 (1.01); High 1.60 (0.83); Overall 1.46 (1.00); F = 2.53, p = 0.06*
Procedure 7 (Objective); Importance 6.4: Agree the following per the third-party specialist report to client's records: security ID, security description, number of shares, interest/coupon rate, maturity, acquisition date, and fair value hierarchy level under ASC 820. Low 0.93 (0.82); High 2.00 (1.54); Overall 1.25 (1.11); F = 11.37, p = 0.00*
Procedure 1 (Objective); Importance 6.4: Evaluate whether management's forecasts and projections have been accurate historically and if any updates are necessary for the current year. Low 1.20 (1.37); High 1.48 (1.05); Overall 1.18 (1.08); F = 0.70, p = 0.41
Procedure 19 (Objective); Importance 7.0: Mathematically recalculate the fair value estimate based on the method employed to assess whether the calculation is accurate. Low 1.07 (0.76); High 1.57 (1.34); Overall 1.17 (0.95); F = 3.06, p = 0.04*


TABLE 3 (continued)
Panel B: Supplemental Detail of Audit Procedures in High Control Environment Risk

Mean hours assigned (standard deviation) under Low Quantification, High Quantification, and Overall, with simple effects statistics F(1,88) and p-value:
Objective Procedures in the Top 10: Low 9.07 (3.62); High 10.83 (3.03); Overall 10.07 (3.35); F = 2.99, p = 0.09
Subjective Procedures in the Top 10: Low 10.73 (5.84); High 6.95 (4.13); Overall 9.08 (5.26); F = 5.72, p = 0.02
Objective Procedures in the Non-Top 10: Low 3.18 (3.14); High 3.33 (1.96); Overall 3.47 (2.12); F = 0.05, p = 0.82
Subjective Procedures in the Non-Top 10: Low 2.57 (1.68); High 4.00 (1.97); Overall 3.59 (2.15); F = 5.01, p = 0.03
Total mean hours assigned to Top 10 procedures: 19.15
Percentage of total hours assigned to Top 10 procedures relative to total hours assigned to all procedures: 73.0%
* Results consistent with directional expectations for Quantification and Quantification × Control Environment Risk; therefore, p-values presented are one-tailed.
Procedure Number is the procedure number per the audit work program and the associated designation of objective versus subjective based on Phase 2 participants' classification. Expert Panel Importance Rating is the post-experimental expert panel's rating of the importance of each audit procedure when substantively testing the FV of Yijensco's MBS investment, using a seven-point scale with "1" being "not at all important," "4" being "somewhat important," and "7" being "very important."

Variable Definitions:
Quantification = quantification level provided in the client-hired valuation specialist's report, manipulated at two levels: low versus high; and
Control Environment Risk = control environment risk, manipulated at two levels: high versus low, by varying the strength of the entity-level controls (moderate versus strong).

Second, the hypothesis development section argues that, contrary to the prior literature (i.e., Kadous et al. 2005), in the FV
auditing context, the degree of quantification will not influence perceived competence of the client specialist providing the valuation
report and, thus, the persuasiveness of the evidence in the specialist’s report. To rule out the possibility that our results are driven by
differences in the perceived quality of the client specialist (i.e., related to competence, reliability, and/or credibility of the client
specialist’s report), participants’ perceptions of the competence, reliability, and credibility associated with the specialist’s report are
measured by three post-experimental questions. Mean ratings by treatment condition indicate that participants’ perceived
competence, reliability, and credibility do not differ across quantification conditions (p-values ranging from 0.51 to 0.99, two-tailed)
(see Panel B of Table 4). The lack of a statistical difference in the perceived reliability and credibility across the experimental
conditions provides evidence that there is no difference in the level of critical analysis of the information provided to the experimental
groups (the alternative path for a quantification effect described by Kadous et al.’s [2005] model). In addition, none of the three post-
experimental variables is statistically significant when included as a covariate in our ANCOVA model analysis presented above.
Thus, in our study, the interactive effect of quantification and control risk on auditors’ planned substantive testing is not due to
differences in participants’ perceptions of the preparer’s competence, reliability, or the credibility of the specialist’s report.
Further, if auditors perceived that more (less) quantification in the specialist's report somehow indicated that the client had hired
a better (worse) client specialist, then it could signal to auditors that management's commitment to the control environment/tone at
the top was higher (lower). We therefore examine whether differences in the amount of quantified information provided by the
client specialist influenced participants' ratings of the client's ELCs (i.e., the manipulation check question). Results indicate that
when Quantification is low, participants' mean rating of the client's ELC is 4.18 versus 4.53 when Quantification is high, and the
difference in these ratings is not statistically significant (t(90) = 0.99, p = 0.32). Thus, we rule out the alternative explanation that
auditors' planned substantive testing was driven by perceived differences in management's commitment to the control environment.
Next, we examine participants’ allocation of hours to the audit firm’s in-house valuation specialist. Panel C of Table 4
presents the means (standard deviation) for the following: (1) total subjective hours allocated to the in-house valuation
specialist (TOTALSUBJIH), and (2) the proportion of total subjective hours allocated to the in-house valuation specialist
(IHPROPSUBJ). The interaction patterns for the above in-house specialist variables are consistent with those for PROPSUBJ.
Untabulated ANOVA results reveal a statistically significant Quantification × Control Environment Risk interaction for
TOTALSUBJIH (F(1,88) = 6.31, p < 0.01, one-tailed) and IHPROPSUBJ (F(1,88) = 2.97, p = 0.04, one-tailed). In sum, these
results indicate that when client risk is high, auditors are more likely to assign in-house specialists to perform subjective audit
procedures when the degree of quantification in client evidence is low than when it is high.


TABLE 4
Additional Analysis
Experiment 1

Panel A: Means (Standard Deviation) for ASC Level Ratings and Frequency of Levels 2 and 3 Classifications
Quantification
Control Environment Risk Low High Overall
Low
ASC_Levels 4.70 (2.00) 4.62 (2.16) 4.65
Freq_Classifications
Level 2 7 8 15
Neutral 3 4 7
Level 3 13 14 27
n = 23 n = 26 n = 49
High
ASC_Levels 5.00 (2.02) 3.86 (2.00) 4.44
Freq_Classifications
Level 2 7 9 16
Neutral 1 5 6
Level 3 14 7 21
n = 22 n = 21 n = 43
Overall
ASC_Levels 4.84 4.28 4.55
Freq_Classifications
Level 2 14 17 31
Neutral 4 9 13
Level 3 27 21 48
n = 45 n = 47 n = 92

Panel B: Means (Standard Deviation) for Competence, Reliability, and Credibility Ratings
Quantification
Control Environment Risk Low High Overall
Low
COMPETENCE 4.70 (0.88) 4.60 (1.04) 4.64
RELIABILITY 4.61 (0.89) 4.81 (1.20) 4.71
CREDIBILITY 4.80 (1.03) 4.92 (1.09) 4.87
n = 23 n = 26 n = 49
High
COMPETENCE 4.50 (1.30) 4.57 (0.87) 4.53
RELIABILITY 4.09 (1.44) 4.29 (1.10) 4.19
CREDIBILITY 4.41 (1.47) 4.38 (1.07) 4.40
n = 22 n = 21 n = 43
Overall
COMPETENCE 4.60 4.59 4.59
RELIABILITY 4.36 4.57 4.47
CREDIBILITY 4.61 4.68 4.65
n = 45 n = 47 n = 92

Experiment 2
Before investigating the research questions, we include a variable in the ANOVA model (untabulated) reflecting which
second task the participant completed, Task 2 (i.e., Experiment 2 versus Phase 2—the classification of the 19 audit procedures).
Including the Task 2 variable in the model with either TOTAL_HOURS or PROPSUBJ as the dependent variable is not


TABLE 4 (continued)
Panel C: Means (Standard Deviation) for Total Subjective Hours Allocated to In-House Valuation Specialist
(TOTALSUBJIH) and Proportion of Total Subjective Hours Allocated to the In-House Valuation Specialist
(IHPROPSUBJ)
Quantification
Control Environment Risk Low High Overall
Low
TOTALSUBJIH 9.39 (4.29) 10.81 (5.29) 10.14
IHPROPSUBJ 75.1% (18.2%) 75.9% (17.7%) 75.6%
n = 23 n = 26 n = 49
High
TOTALSUBJIH 10.07 (5.37) 6.38 (4.28) 8.27
IHPROPSUBJ 74.1% (30.0%) 57.4% (29.2%) 66.0%
n = 22 n = 21 n = 43
Overall
TOTALSUBJIH 9.72 8.83 9.27
IHPROPSUBJ 74.6% 67.7% 71.1%

Variable Definitions:
Quantification = quantification level provided in the client-hired valuation specialist's report, manipulated at two levels: low versus high;
Control Environment Risk = control environment risk, manipulated at two levels: high versus low, by varying the strength of the entity-level controls (moderate versus strong);
ASC_Levels = ASC 820 hierarchy ratings for Yijensco's JWBank Collective MBS using a seven-point scale, with "1" being "Level 2" and "7" being "Level 3";
Freq_Classifications = frequency of ASC 820 hierarchy classification. "Level 2" are ratings less than or equal to 3 per the ASC level rating scale, "Neutral" are ratings of 4, and "Level 3" are ratings greater than or equal to 5;
COMPETENCE = rating of competence associated with the third-party specialist's report using a seven-point scale, with "1" being "not competent at all" and "7" being "very competent";
RELIABILITY = rating of reliability associated with the third-party specialist's report using a seven-point scale, with "1" being "not at all reliable" and "7" being "very reliable";
CREDIBILITY = rating of credibility associated with the third-party specialist's report using a seven-point scale, with "1" being "not at all credible" and "7" being "very credible";
TOTALSUBJIH = total subjective hours allocated to in-house valuation specialists; and
IHPROPSUBJ = the hours allocated to the in-house valuation specialist divided by the total subjective hours.

significant (TOTAL_HOURS: F(1,86) = 0.98, p = 0.32; PROPSUBJ: F(1,86) = 0.17, p = 0.68), nor does including it change the
conclusions for the other variables in Experiment 1.
Research Question A examines whether the PCAOB alert changes auditors' cognitive tendency to be influenced by the
degree of quantification in the client's specialist report and control risk. We analyze whether the pattern of results for the
dependent variable, PROPSUBJ, presented in Experiment 1 changed following the PCAOB alert. Table 5 presents a
comparison of auditors' substantive testing decisions before and after the PCAOB alert, while Table 6, Panel A presents the
mean and standard deviation of PROPSUBJ for Experiment 2. As presented in Table 5, following the PCAOB alert, there is no
statistical difference (t(63) = 0.89, p = 0.38, one-tailed) in PROPSUBJ (51.2 percent versus 49.9 percent) between Experiment 2
and Experiment 1.
To reduce nonsystematic errors by controlling for individual differences (Pany and Reckers 1987), we also confirm the
above results using a repeated measures ANOVA in which Study (Experiment 2 with the PCAOB alert versus Experiment 1
without it) was a within-subjects variable, and Quantification and Control Environment Risk were the same between-subjects
variables described in Experiment 1. As reflected in Panel B of Table 6, Study is not significant for PROPSUBJ (F(1,60)
= 0.64, p = 0.43, two-tailed). Consistent with the Experiment 1 results, the Quantification × Control Environment Risk
interaction and the simple effect analysis for the high control environment risk conditions both remain significant at the 5
percent level. Further, as illustrated in Figure 3, the observed pattern of the interaction is similar to Experiment 1's results
(illustrated in Figure 2). Our repeated measures analysis indicates that the pattern and interaction results obtained in Experiment
2 are similar to those observed in Experiment 1. Thus, we conclude that the PCAOB alert is not effective in changing auditors'
cognitive tendency to allow quantification of client evidence to influence their planned substantive testing in high-risk settings.
Research Question B investigates whether the PCAOB alert is effective in increasing auditors’ planned effort to test the
client’s FVMs. Table 5 presents the variables that capture planned audit time allocated to substantive testing. Auditors allocated


TABLE 5
Comparison of Experiment 1 and Experiment 2 Results
Dependent Variables | Experiment 2 Means (n = 64) | Experiment 1 Means (n = 64) | t(63) (p-value)a
TOTAL_HOURS 28.74 25.97 5.16 (<0.01)
TOTAL_OBJECTIVE 14.02 13.02 2.00 (0.05)
TOTAL_SUBJECTIVE 14.72 12.95 4.10 (<0.01)
PROPSUBJ 51.2% 49.9% 0.89 (0.38)
TOTAL_CORE 13.59 13.13 0.76 (0.90)
TOTAL_IH 15.15 12.84 4.24 (<0.01)
TOTALSUBJIH 10.95 9.52 3.03 (<0.01)
IHPROPSUBJ 72.0% 71.2% 0.29 (0.38)

a All p-values presented are one-tailed.

Variable Definitions:
TOTAL_HOURS = total hours allocated to substantive procedures;
TOTAL_OBJECTIVE = hours allocated to objective substantive procedures;
TOTAL_SUBJECTIVE = hours allocated to subjective substantive procedures;
TOTAL_CORE = hours allocated to the core audit team for substantive procedures;
TOTAL_IH = hours allocated to in-house valuation specialists for substantive procedures;
PROPSUBJ = proportion of total substantive hours allocated to subjective substantive procedures;
TOTALSUBJIH = total subjective hours allocated to in-house valuation specialists; and
IHPROPSUBJ = the hours allocated to the in-house valuation specialist divided by the total subjective hours.

approximately three additional hours to substantive testing of the FVM (TOTAL_HOURS) following the PCAOB alert (t(63) =
5.16, p < 0.01). Two interesting observations emerge when comparing auditors' effort allocation in Experiment 2 versus Experiment 1.
First, while the proportion of subjective procedures does not change, the increase in total hours results in 1.77 additional hours
of subjective procedures (TOTAL_SUBJECTIVE) (t(63) = 4.10, p < 0.01, one-tailed). This result is consistent with prior studies
showing that auditors will attempt to accommodate the preferences of the reviewer (e.g., Peecher 1996; Cohen and Trompeter
1998; Wilks 2002; Peecher et al. 2013). Second, auditors allocate greater audit effort (2.31 more hours,
representing approximately 8 percent of total hours) to the in-house specialists (TOTAL_IH) (t(63) = 4.24, p < 0.01, one-tailed).
Moreover, results show that the mean subjective hours allocated to the in-house valuation specialist (TOTALSUBJIH) increased
significantly from 9.52 in Experiment 1 to 10.95 in Experiment 2 (t(63) = 3.03, p < 0.01, one-tailed). However, the practice alert
did not significantly change the proportion of subjective audit procedures allocated to the in-house specialists (IHPROPSUBJ) (t(63) = 0.29,
p = 0.38, one-tailed). Similar to Experiment 1, the participants allocate over 70 percent of the subjective procedure hours to the in-house
valuation specialist, as shown in Table 5. Consistent with recent survey research, our results show that auditors use in-house
specialists to perform the more subjective audit procedures that test management's discretion (e.g., Cannon and Bedard
2017; Glover et al. 2017a).
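The comparisons in Table 5 are paired tests across the two experiments, since the same participants completed both. The structure of such a test can be sketched as follows; the hour allocations below are hypothetical illustrations, not the study's data:

```python
from scipy.stats import ttest_rel

# Hypothetical TOTAL_HOURS for four participants, after (Experiment 2) and
# before (Experiment 1) the PCAOB practice alert; illustrative values only.
exp2_hours = [30.0, 28.0, 29.0, 27.0]
exp1_hours = [26.0, 25.0, 27.0, 24.0]

t_stat, p_two_tailed = ttest_rel(exp2_hours, exp1_hours)

# Table 5 reports one-tailed p-values: halve the two-tailed p when the
# difference is in the predicted (positive) direction.
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
print(t_stat > 0, p_one_tailed < 0.05)  # True True
```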
Taken together, the results from Experiment 2 suggest that a PCAOB alert is effective neither in increasing the proportion of
total audit effort allocated to subjective procedures nor in changing auditors' tendency to be influenced by the degree of
quantification in client data, especially in the higher-risk setting. These results suggest the current regulatory approach might
not be effective in mitigating hard-wired cognitive tendencies related to FV auditing. However, our results do suggest that the
PCAOB alert motivates auditors to increase total audit effort in response to "justifiee" preferences, consistent with Peecher
(1996).


TABLE 6
Experiment 2 Results

Panel A: Adjusted Means (Standard Deviation) for the Dependent Variable (PROPSUBJ) by Quantification Level and
Control Environment Risk
Quantification
Control Environment Risk Low High Overall
Low
PROPSUBJ 51.1% (10.2%) 48.9% (10.7%) 50.0%
n = 16 n = 18 n = 34
High
PROPSUBJ 57.9% (13.7%) 43.8% (12.6%) 50.9%
n = 16 n = 14 n = 30
Overall
PROPSUBJ 54.5% 46.3%
n = 32 n = 32

Panel B: Repeated Measures ANOVA Results for the Second Dependent Variable (PROPSUBJ)
Source Sum of Squares df Mean Square F-value pa
Between-Subjects
Quantification 0.132 1 0.132 5.06 0.01
Control Environment Risk 0.010 1 0.010 0.38 0.54
Quantification × Control Environment Risk 0.085 1 0.085 3.25 0.04
Error 1.566 60 0.026
Within-Subjects
Study 0.004 1 0.004 0.64 0.43
Error 0.346 60 0.006
Simple Effects Tests
Effect of Quantification given low Control Environment Risk 1 0.29 0.59
Effect of Quantification given high Control Environment Risk 1 10.77 <0.01
a Given the directional expectations for Quantification and Quantification × Control Environment Risk, p-values presented are one-tailed.

Variable Definitions:
Quantification = quantification level provided in the client valuation specialist's report, manipulated at two levels: low versus high;
Control Environment Risk = control environment risk, manipulated at two levels: high versus low, by varying the strength of the entity-level controls (moderate versus strong);
PROPSUBJ = proportion of total substantive hours allocated to subjective substantive procedures for Experiment 2; and
Study = the within-subjects variable comparing participants' substantive hours allocations in Experiment 2 versus Experiment 1.

V. CONCLUSIONS

Regulators charge that auditors do not adequately test the discretionary aspects of their client’s FVMs, rely too heavily on
evidence from client specialists, and do not adjust the audit approach based on the level of client risk (e.g., PCAOB 2009a,
2009b, 2010a, 2012a, 2014; FRC 2011; CPAB 2012). Archival evidence is consistent with the regulatory concerns and
documents that management opportunistically uses discretion in FV inputs and other complex estimates reported in audited
financial statements (e.g., Hodder et al. 2006; Johnston 2006; Dechow et al. 2010; Choudhary 2011; PCAOB 2010a, 2012a;
Bratten et al. 2013). Motivated by concerns about FV audits, we examine how the degree of quantification in the client specialist's report and the level of client control environment risk jointly influence auditors' planned substantive testing of FVMs. Our findings, which are consistent with theoretical expectations, provide insight regarding a key area of regulatory concern and respond to a call for research on auditors' testing of aspects of FVMs that are susceptible to management's opportunism and bias (e.g., PCAOB 2010a, 2012a; Bratten et al. 2013).

The Accounting Review


Volume 92, Number 5, 2017
110 Joe, Vandervelde, and Wu

FIGURE 3
Observed Interaction
Experiment 2

Variable Definitions:
Quantification = quantification level provided in the client valuation specialist's report manipulated at two levels: low versus high;
Control Environment Risk = control environment risk is manipulated at two levels: high versus low, by varying the strength of the entity-level controls (moderate versus strong); and
PROPSUBJ = proportion of total substantive hours allocated to subjective substantive procedures in Experiment 2.

Although it is important in the context of this study to test both quantified and non-quantified data, the auditors are, consistent with theoretical predictions, most influenced by the degree of quantification in client-provided evidence when client control risk is high. Auditors' preference for well-defined tasks that are lower in ambiguity and more verifiable motivated them to attend to quantitative evidence and diverted audit effort away from testing qualitative evidence, such as the subjective audit procedures that test the discretionary aspects of the client's FV estimate. When the degree of quantification is low in high client control risk settings, however, less quantitative evidence is available to divert the auditors' attention, and they apply more audit effort to subjective audit procedures that test management's discretion. Auditors' ambiguity aversion and preference for well-defined tasks also made them less likely to allocate effort to developing independent estimates to test the client's discretionary balances when the degree of quantification was high versus low in the high client risk scenario. These theory-consistent findings raise potential concerns about audit effectiveness for the profession, because they imply that auditors'
substantive testing is most susceptible to strategic manipulations in the client specialist’s report on higher-risk clients. The
implications of our findings are also troublesome for investors because our theory predicts that the degree of quantification in
client-provided evidence reduces the likelihood that auditors will apply subjective procedures (including developing
independent estimates) to test FV inputs that are most prone to management bias and opportunism.
We also examine whether prompting auditors to consider the regulators’ concerns about insufficient attention to testing
discretionary inputs that are susceptible to management bias changes auditors’ focus when designing planned substantive
procedures. Consistent with Peecher (1996), alerting auditors to regulators’ preference does not mitigate auditors’ tendency to
be influenced by quantification—the trade-off in audit effort between subjective versus objective substantive procedures
remained unchanged following the practice alert. Rather, the practice alert increased total planned audit testing allocated to the
client’s FVMs (i.e., both planned subjective and objective procedures). We also find that following a regulatory alert, audit
teams use in-house valuation specialists to complete the more complex and subjective audit procedures. These findings suggest
that auditors are aware that they suffer from a ‘‘complexity competence gap’’ and rely on in-house specialists to complete more
subjective procedures (e.g., Martin et al. 2006; Bratten et al. 2013).
Our research is subject to the following limitations, which present opportunities for future research. First, we investigate an
FV audit setting where the degree of quantification in the evidence did not signal a change in the risk that the balance could be
materially misstated; therefore, care should be taken in extrapolating our findings to audit settings where quantification in audit
evidence is relevant to assessing the risk of misstatement. Further, recent research (e.g., Bol, Estep, Moers, and Peecher 2016)
finds that auditors’ tacit knowledge (e.g., self-motivation, self-organization, and stress management skills) allows them to
manage performance and their response to factors in the audit environment. Thus, future research can shed light on whether
auditors who have a high level of tacit knowledge are less susceptible to the joint effects of quantification and client risk
observed in this study. Second, our research design does not consider the costs of in-house valuation specialists on the audit
engagement. Future research could explore whether cost considerations (discussed in Martin et al. [2006]) limit the
effectiveness of the PCAOB alert observed in this study. Third, in the setting examined here, there was no strategic manipulation of the detail in client-provided evidence to support management's preferences, nor did we examine strategic management opportunism or the strategic actions auditors can take to combat it. Future research
could explore the impact of strategic interactions on auditors’ FVM judgments. For example, Earley et al. (2016) find that
auditors can be more skeptical when management’s FV reporting is incentive-aligned.
Notwithstanding these limitations, the current study makes key contributions to the literature. We contribute to the growing
literature demonstrating features of the FV auditing setting that can have adverse effects on auditor judgment (e.g., Griffith et al.
2015b; Backof et al. 2016; Joe et al. 2016; Maksymov et al. 2016). Specifically, we identify and provide evidence that one
aspect of audit evidence—the degree of quantification, which is susceptible to client control—can influence auditors’ planned
substantive testing in high-risk settings where it is important for auditors to consider both quantified and non-quantified
evidence. Our research identifies a previously unexplored cognitive root cause for insufficient audit testing of the discretionary
inputs to FVMs that are susceptible to management influence (e.g., choice of models, assumptions, and model inputs), and
complements the prior literature identifying environmental root causes of challenges in FV auditing (e.g., Griffith et al. 2015a).
Further, we contribute to the emerging research on potential consequences of the current regulatory approach (e.g., Peecher et
al. 2013; Glover et al. 2017b) by investigating the efficacy of practice alerts to change the focus of auditors’ planned testing.
Our findings are consistent with theoretical predictions suggesting that while the PCAOB alert is effective in motivating
auditors to increase total audit effort to conform to regulators’ preferences (Peecher 1996), these alerts are not likely to be
effective at mitigating auditors’ tendency to be influenced by the joint effect of the degree of quantification in the client’s
evidence and control environment risk.
Collectively, our theory-consistent findings inform practitioners, regulators, and investors. Our study illustrates that
auditors should be cautious about allowing features of client evidence that are susceptible to management control to influence
their planned substantive testing, especially in those audit settings where management can exploit the discretionary inputs to
complex estimates. This study highlights limits to the effectiveness of regulators’ warning that auditors should be alert to the
potential for management bias in estimates and to maintain professional skepticism when evaluating client evidence supporting
management’s estimates (e.g., PCAOB 2011, 2012c). As the PCAOB contemplates expanding guidance to direct auditors’ use
of the work of specialists (PCAOB 2016b), this study contributes new and timely knowledge that features of the specialist’s
evidence (e.g., the degree of quantification) and client risk together interact to influence auditors’ cognitive processing and
testing of critical estimates, and that regulatory practice alerts have limited efficacy in modifying such cognitive processing.

REFERENCES
Aboody, D., M. E. Barth, and R. Kasznik. 2006. Do firms understate stock option-based compensation expense disclosed under SFAS
123? Review of Accounting Studies 11 (4): 429–461. doi:10.1007/s11142-006-9013-0
Aghazadeh, S., and J. R. Joe. 2015. How Management’s Expressions of Confidence Influence Auditors’ Skeptical Response to
Management’s Explanations. Working paper, Louisiana State University and University of Delaware.
Allen, R. D., D. R. Hermanson, T. M. Kozloski, and R. J. Ramsay. 2006. Auditor risk assessment: Insights from the academic literature.
Accounting Horizons 20 (2): 157–177. doi:10.2308/acch.2006.20.2.157
Anderson, U., K. Kadous, and L. Koonce. 2004. The role of incentives to manage earnings and quantification in auditors’ evaluations of
management-provided information. Auditing: A Journal of Practice & Theory 23 (1): 11–27. doi:10.2308/aud.2004.23.1.11
Asare, S. K., B. C. Fitzgerald, L. Graham, J. R. Joe, E. M. Negangard, and C. J. Wolfe. 2013. Auditors’ internal control over financial
reporting decisions: Analysis, synthesis, and research directions. Auditing: A Journal of Practice & Theory 32 (Supplement 1):
131–166. doi:10.2308/ajpt-50345
Babcock, L., G. Loewenstein, and S. Issacharoff. 1998. Creating convergence: Debiasing litigants. Law and Social Inquiry 22 (4): 913–
925. doi:10.1111/j.1747-4469.1997.tb01092.x


Backof, A. G., J. Thayer, and T. Carpenter. 2016. Auditing Complex Estimates: How Do Construal Level and Evidence Formatting
Impact Auditors’ Consideration of Inconsistent Evidence. Working paper, University of Virginia and The University of Georgia.
Balcetis, E., and D. Dunning. 2006. See what you want to see: Motivational influences on visual perception. Journal of Personality and
Social Psychology 91 (4): 612–625. doi:10.1037/0022-3514.91.4.612
Bamber, E., D. Snowball, and R. Tubbs. 1989. Audit structure and its relation to role conflict and role ambiguity: An empirical
investigation. The Accounting Review 64: 285–299.
Bell, J. 1984. The effect of presentation form on the use of information in annual reports. Management Science 30 (2): 169–185. doi:10.
1287/mnsc.30.2.169
Bol, J. C., C. Estep, F. Moers, and M. E. Peecher. 2016. The Role of Tacit Knowledge in Auditor Expertise and Human Capital
Development. Working paper, Tulane University, Emory University, Maastricht University, University of Illinois at Urbana–
Champaign.
Borthick, A. F., M. B. Curtis, and R. S. Sriram. 2006. Accelerating the acquisition of knowledge structure to improve performance in
internal control reviews. Accounting, Organizations and Society 31 (4/5): 323–342. doi:10.1016/j.aos.2005.12.001
Bowlin, K. 2011. Risk-based auditing, strategic prompts, and auditor sensitivity to the strategic risk of fraud. The Accounting Review 86
(4): 1231–1253. doi:10.2308/accr-10039
Bratten, B., R. Jennings, and C. M. Schwab. 2015. The effect of using a lattice model to estimate reported option values. Contemporary
Accounting Research 32 (1): 193–222. doi:10.1111/1911-3846.12067
Bratten, B., L. M. Gaynor, L. McDaniel, N. R. Montague, and G. E. Sierra. 2013. The audit of fair values and other estimates: The effects
of underlying environmental, task, and auditor-specific factors. Auditing: A Journal of Practice & Theory 32 (Supplement 1): 7–44.
Brazel, J. F., and C. P. Agoglia. 2007. An examination of auditor planning judgments in a complex accounting information system
environment. Contemporary Accounting Research 24 (4): 1059–1083. doi:10.1506/car.24.4.1
Canadian Public Accountability Board (CPAB). 2012. Report on the 2012 Inspections of the Quality of Audits Conducted by Public
Accounting Firms. Available at: http://www.cpab-ccrc.ca/Documents/Topics/Public%20Reports/CPAB_Public_Report_2012_Eng.
pdf
Cannon, N., and J. C. Bedard. 2017. Auditing challenging fair value measurements: Evidence from the field. The Accounting Review 92
(4): 81–114. doi:10.2308/accr-51569
Chandar, N., and R. Bricker. 2002. Incentives, discretion, and asset valuation in closed-end mutual funds. Journal of Accounting Research
40 (4): 1037–1070. doi:10.1111/1475-679X.00081
Choudhary, P. 2011. Evidence on differences between recognition and disclosure: A comparison of inputs to estimate fair values of
employee stock options. Journal of Accounting and Economics 51 (1/2): 77–94. doi:10.1016/j.jacceco.2010.09.004
Christensen, B. E., S. M. Glover, and D. A. Wood. 2012. Extreme estimation uncertainty in fair value estimates: Implications for audit
assurance. Auditing: A Journal of Practice & Theory 31 (1): 127–146. doi:10.2308/ajpt-10191
Cohen, J. R., and G. M. Trompeter. 1998. An examination of factors affecting audit practice development. Contemporary Accounting
Research 15 (4): 481–504. doi:10.1111/j.1911-3846.1998.tb00568.x
Dechow, P. M., L. A. Myers, and C. Shakespeare. 2010. Fair value accounting and gains from asset securitizations: A convenient earnings
management tool with compensation side-benefits. Journal of Accounting and Economics 49 (1-2): 2–25. doi:10.1016/j.jacceco.
2009.09.006
Dietrich, J. R., M. S. Harris, and K. A. Muller III. 2000. The reliability of investment property fair value estimates. Journal of Accounting
and Economics 30 (2): 125–158. doi:10.1016/S0165-4101(01)00002-7
Drew, T., M. L.-H. Vo, and J. M. Wolfe. 2013. The invisible gorilla strikes again: Sustained inattentional blindness in expert observers.
Psychological Science 24 (9): 1848–1853. doi:10.1177/0956797613479386
Earley, C. E., V. B. Hoffman, and J. R. Joe. 2008. Reducing management’s influence on auditors’ judgments: An experimental
investigation of SOX 404 assessments. The Accounting Review 83 (6): 1461–1485. doi:10.2308/accr.2008.83.6.1461
Earley, C., V. B Hoffman, and J. R. Joe. 2016. Auditors’ Role in Fair-Value Level 2 versus Level 3 Classification Disclosures. Working
paper, Providence College, University of Pittsburgh, and University of Delaware.
Financial Accounting Standards Board (FASB). 2011. Fair Value Measurement (Topic 820: Amendments to Achieve Common Fair Value
Measurement and Disclosure Requirements in U.S. GAAP and IFRS). Accounting Standards Update No. 2011-04. Norwalk, CT:
FASB.
Financial Reporting Council (FRC). 2011. Audit Inspection Unit Annual Report 2010/11. London, U.K.: FRC.
Glover, S. M., M. H. Taylor, and Y. Wu. 2017a. Current practices and challenges in auditing fair value measurements and other complex
estimates: Implications for auditing standards and the academy. Auditing: A Journal of Practice & Theory 36 (1): 63–84.
doi:10.2308/ajpt-51514
Glover, S. M., M. Taylor, and Y. Wu. 2017b. Mind the Gap: Why Do Experts Have Differences of Opinion Regarding the Sufficiency of
Audit Evidence Supporting Complex Fair Value Measurements? Working Paper, Brigham Young University, Case Western
Reserve University, and Texas Tech University.
Griffith, E. E. 2016. Auditors, Specialists, and Professional Jurisdiction in Audits of Fair Values. Working paper, University of
Wisconsin–Madison.
Griffith, E. E., J. S. Hammersley, and K. Kadous. 2015a. Audits of complex estimates as verification of management numbers: How
institutional pressures shape practice. Contemporary Accounting Research 32 (3): 833–863. doi:10.1111/1911-3846.12104
Griffith, E. E., J. S. Hammersley, K. Kadous, and D. Young. 2015b. Auditor mindsets and audits of complex estimates. Journal of
Accounting Research 53 (1): 49–77. doi:10.1111/1475-679X.12066
Hackenbrack, K., and M. W. Nelson. 1996. Auditors’ incentives and their application of financial accounting standards. The Accounting
Review 71 (1): 43–59.


Harvest. 2014. Re: Request for Public Comment—Staff Consultation Paper, Auditing Accounting Estimates and Fair Value
Measurements. (August 19). Available at: https://pcaobus.org/Standards/Staff_Consultation_Comments/036_Harvest.pdf
Heaps, C. M., and T. B. Henley. 1999. Language matters: Wording considerations in hazard perception and warning comprehension.
Journal of Psychology 133 (3): 341–351. doi:10.1080/00223989909599747
Hilton, A. S., and P. C. O’Brien. 2009. Inco Ltd.: Market value, fair value, and management discretion. Journal of Accounting Research
47 (1): 179–211. doi:10.1111/j.1475-679X.2008.00314.x
Hodder, L., W. J. Mayew, M. L. McAnally, and C. D. Weaver. 2006. Employee stock option fair value estimates: Do managerial
discretion and incentives explain accuracy? Contemporary Accounting Research 23 (4): 933–975. doi:10.1506/ML46-8401-6222-
4642
International Auditing and Assurance Standards Board (IAASB). 2008. Staff Audit Practice Alert: Challenges in Auditing Fair Value
Accounting Estimates in the Current Market Environment. Available at: http://www.ifac.org/publications-resources/staff-audit-
practice-alert-challenges-auditing-fair-value-accounting-estimate
Joe, J. R., Y. Wu, and A. Zimmerman. 2016. Overcoming Communication Challenges: Can Taking the Specialist’s Perspective Improve
Auditors’ Critical Evaluation and Integration of the Specialist’s Work? Working paper, University of Delaware, Texas Tech
University, and Northern Illinois University.
Johnston, D. 2006. Managing stock option expense: The manipulation of option-pricing model assumptions. Contemporary Accounting
Research 23 (2): 395–425. doi:10.1506/6YVX-9KDJ-08UC-P0Q6
Kadous, K., J. Kennedy, and M. Peecher. 2003. The effect of quality assessment and directional goal commitment on auditors’ acceptance
of client-preferred accounting methods. The Accounting Review 78 (3): 759–778. doi:10.2308/accr.2003.78.3.759
Kadous, K., L. Koonce, and K. L. Towry. 2005. Quantification and persuasion in managerial judgment. Contemporary Accounting
Research 22 (3): 643–686. doi:10.1506/568U-W2FH-9YQM-QG30
Kennedy, J. 1993. Debiasing audit judgment with accountability: A framework and experimental results. Journal of Accounting Research
31 (2): 231–245. doi:10.2307/2491272
Knechel, W. R., and W. F. Messier, Jr. 1990. Sequential auditor decision making: Information search and evidence evaluation.
Contemporary Accounting Research 6 (2): 386–406. doi:10.1111/j.1911-3846.1990.tb00765.x
Luippold, B. L., and T. E. Kida. 2012. The impact of initial information ambiguity on the accuracy of analytical review judgments.
Auditing: A Journal of Practice & Theory 31 (2): 113–129. doi:10.2308/ajpt-10259
Mack, A., and I. Rock. 1998. Inattentional Blindness. Cambridge, MA: MIT Press.
Mack, A., B. Tang, R. Tuma, S. Kahn, and I. Rock. 1992. Perceptual organization and attention. Cognitive Psychology 24 (4): 475–501.
doi:10.1016/0010-0285(92)90016-U
Maksymov, E., M. W. Nelson, and W. R. Kinney, Jr. 2016. Planning Audits of Fair Values: Effects of Procedure Frame and Perceived
Procedure Verifiability. Working paper, Arizona State University, Cornell University, and The University of Texas at Austin.
Martin, R. D., J. S. Rich, and T. J. Wilks. 2006. Auditing fair value measurement: A synthesis of relevant research. Accounting Horizons
20 (3): 287–303. doi:10.2308/acch.2006.20.3.287
McDaniel, L. S., and B. Kinney. 1995. Expectation formation guidance in the auditor’s review of interim financial information. Journal of
Accounting Research 33 (1): 59–76. doi:10.2307/2491292
Milkman, K. L., D. Chugh, and M. H. Bazerman. 2009. How can decision making be improved? Perspectives on Psychological Science 4
(4): 379–383. doi:10.1111/j.1745-6924.2009.01142.x
Mock, T. J., and A. Wright. 1993. An exploratory study of auditors’ evidential planning judgments. Auditing: A Journal of Practice &
Theory 12 (2): 39–61.
Mock, T. J., and A. Wright. 1999. Are audit program plans risk adjusted? Auditing: A Journal of Practice & Theory 18 (1): 55–74. doi:10.
2308/aud.1999.18.1.55
Most, S. B., D. J. Simons, B. J. Scholl, R. Jimenez, E. Clifford, and C. F. Chabris. 2001. How not to be seen: The contribution of
similarity and selective ignoring to sustained inattentional blindness. Psychological Science 12 (1): 9–17. doi:10.1111/1467-9280.
00303
Neisser, U., and R. Becklen. 1975. Selective looking: Attending to visually specified events. Cognitive Psychology 7 (4): 480–494.
doi:10.1016/0010-0285(75)90019-5
Ng, T., and H. Tan. 2003. Effects of authoritative guidance availability and audit committee. The Accounting Review 78 (3): 801–818.
doi:10.2308/accr.2003.78.3.801
Nisbett, S., and J. Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice Hall
Inc.
Nissim, D. 2003. Reliability of banks’ fair value disclosure for loans. Review of Quantitative Finance and Accounting 20 (4): 355–384.
doi:10.1023/A:1024072317201
Nosek, B. A., A. G. Greenwald, and M. R. Banaji. 2007. The implicit association test at age 7: A methodological and conceptual review.
In Automatic Processes in Social Thinking and Behavior, edited by Bargh, J. A. Hove, U.K.: Psychology Press.
Pany, K., and P. M. J. Reckers. 1987. Within-subjects vs. between-subjects experimental designs—A study of demand effects. Auditing:
A Journal of Practice & Theory 7 (1): 39–53.
Peecher, M. E. 1996. The influence of auditors’ justification processes on their decisions: A cognitive model and experimental evidence.
Journal of Accounting Research 34 (1): 125–140. doi:10.2307/2491335
Peecher, M. E., and I. Solomon. 2001. Theory and experimentation in studies of audit judgments and decisions: Avoiding common
research traps. International Journal of Auditing 5 (3): 193–203. doi:10.1111/1099-1123.00335
Peecher, M. E., I. Solomon, and K. T. Trotman. 2013. An accountability framework for financial statement auditors and related research
questions. Accounting, Organizations and Society 38 (8): 596–620. doi:10.1016/j.aos.2013.07.002


Peters, E., D. Västfjäll, P. Slovic, C. K. Mertz, K. Mazzocco, and S. Dickert. 2006. Numeracy and decision making. Psychological
Science 17 (5): 407–413.
Protiviti Inc. 2007. Guide to the Sarbanes-Oxley Act: Internal Control Reporting Requirements. Available at: https://www.protiviti.com/
sites/default/files/united_states/insights/protiviti_section_404_faq_guide.pdf
Public Company Accounting Oversight Board (PCAOB). 2002. Auditing Fair Value Measurements and Disclosures. PCAOB Interim
Auditing Standards AS 2502. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2007. Matters Related to Auditing Fair Value Measurements of Financial
Instruments and the use of Specialists. Staff Audit Practice Alert No. 2. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2008. Report on the PCAOB’s 2004, 2005, 2006, and 2007 Inspections of
Domestic Annually Inspected Firms. PCAOB Release No. 2008-008. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2009a. Report on 2008 Inspection of Ernst & Young, LLP (May 19).
Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2009b. Report on the First-Year Implementation of Auditing Standard No. 5, an
Auditing of Internal Control over Financial Reporting that is Integrated with an Audit of Financial Statements. PCAOB Release
No. 2009-2006 (September 24). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010a. Report on Observations of PCAOB Inspectors Related to the Audit Risk
Areas Affected by the Economic Crisis. Release No. 2010-006 (September 29). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010b. Audit Planning. Auditing Standard No. 9. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2011. Assessing and Responding to Risk in the Current Economic Environment.
Staff Audit Practice Alert No. 9 (December 6). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2012a. Auditing the Future (June 7). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2012b. Observations from 2010 Inspections of Domestic Annually Inspected
Firms Regarding Deficiencies in Audits of Internal Control over Financial Reporting. PCAOB Release No. 2012-006 (December
10). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2012c. Maintaining and Applying Professional Skepticism in Audits. PCAOB
Staff Audit Practice Alert No. 10 (December 4). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2013. Report on 2007–2010 Inspections of Domestic Firms that Audit 100 or
Fewer Public Companies (February 25). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2014. Report on 2013 Inspection of Ernst & Young, LLP (August 14).
Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2015. The Auditor’s Use of the Work of Specialists. Staff consultation paper No.
2015-01 (May 28). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2016a. Staff Inspection Brief: Preview of Observations from 2015 Inspections
of Auditors of Issuers (April). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2016b. Standard-Setting Agenda: Office of the Chief Auditor. Available at:
https://pcaobus.org/Standards/Documents/2016Q3-standard-setting-agenda.pdf
Ramanna, K. 2008. The implications of unverifiable fair-value accounting: Evidence from the political economy of goodwill accounting.
Journal of Accounting and Economics 45 (2/3): 253–281. doi:10.1016/j.jacceco.2007.11.006
Ramanna, K., and R. L. Watts. 2012. Evidence on the use of unverifiable estimates in required goodwill impairment. Review of
Accounting Studies 17 (4): 749–780. doi:10.1007/s11142-012-9188-5
Salzsieder, L. 2016. Fair value opinion shopping. Behavioral Research in Accounting 28 (1): 57–66. doi:10.2308/bria-51238
Simons, D. J., and C. F. Chabris. 1999. Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception 28 (9):
1059–1074. doi:10.1068/p281059
Viswanathan, M. 1993. Measurement of individual differences in preference for numerical information. Journal of Applied Psychology 78
(5): 741–752. doi:10.1037/0021-9010.78.5.741
Wilks, T. J. 2002. Predecisional distortion of evidence as a consequence of real-time audit review. The Accounting Review 77 (1): 51–71.
doi:10.2308/accr.2002.77.1.51
Wilks, T. J., and M. F. Zimbelman. 2004. Decomposition of fraud-risk assessments and auditors’ sensitivity to fraud cues. Contemporary
Accounting Research 21 (3): 719–745. doi:10.1506/HGXP-4DBH-59D1-3FHJ
Zimbelman, M. F. 1997. The effects of SAS No. 82 on auditors’ attention to fraud risk factors and audit planning decisions. Journal of
Accounting Research 35 (Supplement): 75–97. doi:10.2307/2491454
Zimbelman, M. F., and W. S. Waller. 1999. An experimental investigation of auditor-auditee interaction under ambiguity. Journal of
Accounting Research 37 (Supplement): 135–155. doi:10.2307/2491349


APPENDIX A
Excerpt of Quantification Manipulation

Low Quantification Example


Prepayment Speed
The assumed prepayment speed used in our determination of the value of the MBS is 27.6 percent. The rate is based on the
average rate over the past five years for selected housing markets that are included in the Standard & Poor’s (S&P)/Case-Shiller
Index that we deemed to be most representative of the underlying residential properties. These representative housing markets
are Charlotte, NC; Chicago, IL; Cleveland, OH; Dallas, TX; Las Vegas, NV; Miami, FL; New York, NY; and Seattle, WA.

High Quantification Example


Prepayment Speed
The assumed prepayment speed used in our determination of the value of the MBS is 27.6 percent. The rate is based on the
average rate over the past five years by selected housing markets that are included in the S&P/Case-Shiller Index that we
deemed to be most representative of the underlying residential properties. These housing markets and their annual prepayment
speeds (as shown below) are:
                        Annual Prepayment Speed
                2008    2009    2010    2011    2012    5-Year Average
Charlotte, NC   21.4    22.8    23.2    24.9    24.1         23.3
Chicago, IL     26.7    27.6    29.6    31.8    30.3         29.2
Cleveland, OH   28.2    28.7    29.9    32.3    31.8         30.2
Dallas, TX      25.2    25.4    26.6    27.8    29.3         26.9
Las Vegas, NV   27.3    30.7    31.1    33.9    32.2         31.0
Miami, FL       24.2    25.6    26.4    27.1    29.2         26.5
New York, NY    26.3    27.1    29.1    30.2    31.8         28.9
Seattle, WA     22.3    24.1    24.8    25.7    25.9         24.6
Overall                                                      27.6%
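The averages in the high quantification example follow directly from the tabled annual speeds: each city's 5-year average is the mean of its annual prepayment speeds, and the assumed 27.6 percent speed is the mean of those city averages. A minimal check, with the data transcribed from the table above:

```python
# Verify the 5-year city averages and the overall 27.6% assumed
# prepayment speed from the high quantification example.
speeds = {
    "Charlotte, NC": [21.4, 22.8, 23.2, 24.9, 24.1],
    "Chicago, IL":   [26.7, 27.6, 29.6, 31.8, 30.3],
    "Cleveland, OH": [28.2, 28.7, 29.9, 32.3, 31.8],
    "Dallas, TX":    [25.2, 25.4, 26.6, 27.8, 29.3],
    "Las Vegas, NV": [27.3, 30.7, 31.1, 33.9, 32.2],
    "Miami, FL":     [24.2, 25.6, 26.4, 27.1, 29.2],
    "New York, NY":  [26.3, 27.1, 29.1, 30.2, 31.8],
    "Seattle, WA":   [22.3, 24.1, 24.8, 25.7, 25.9],
}
# Mean of each city's annual speeds, then mean of those averages.
five_year_avg = {city: sum(s) / len(s) for city, s in speeds.items()}
overall = sum(five_year_avg.values()) / len(five_year_avg)
print(f"Assumed prepayment speed: {overall:.1f}%")  # prints 27.6%
```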

APPENDIX B
Excerpt of Control Environment Risk (ELC Strength) Manipulation
Yijensco's entity-level internal controls were assessed as strong [moderate]^25 during the SOX 404 testing phase of the
integrated audit. In particular, the engagement team identified the following key entity-level controls {examples provided
below}:
•  The company has strong [moderate] controls over hiring, training, and promotion. These policies are well defined
   [Most of these policies are defined] and documented so that employees can readily [can] access them.
•  The company's investment policy is developed and updated regularly by a committee made up of senior management
   and key personnel from the Treasury Department, who have significant finance and investment experience
   [Accounting Department, who have significant accounting experience]. The Finance Committee of Yijensco's
   Board of Directors reviews the company's investment policy.

APPENDIX C
Sample Substantive Audit Procedures

Example of Objective Procedures


Determine that the fair value measurement provided by the client’s specialist reconciles to the financial statements.
Test the completeness and accuracy of any client data supplied to the specialist.

^25 Contents in brackets indicate the manipulations for the high control environment risk (i.e., moderate entity-level control strength) condition.


Agree the following per the third-party specialist report to client’s records: security ID, security description, number of
shares, interest/coupon rate, maturity, acquisition date, and fair value hierarchy level under ASC 820.

Example of Subjective Procedures


Evaluate whether there should be a change in the valuation technique being used (i.e., compare last year’s financial
reporting environment to the current year’s financial reporting environment).
Test the data used by the third-party specialist to develop fair value measurements and disclosures:
•  Evaluate whether assumptions are reasonable and consistent with market information.
•  Consider whether there is information available in the market that would contradict the assumptions used as inputs.
