Journal of Accounting Research
Vol. 45 No. 4 September 2007
Printed in U.S.A.
Accounting Standards,
Implementation Guidance, and
Example-Based Reasoning
SHANA CLOR-PROELL∗ AND MARK W. NELSON∗
ABSTRACT
1. Introduction
Accounting standards in the United States currently consist of a mix of
principles and rules (SEC [2003]), and the Financial Accounting Standards
Board (FASB [2002]), International Accounting Standards Board (IASB
[2002]), and Securities and Exchange Commission (SEC [2003]) have re-
cently advocated moving to a more “principles-based” financial-reporting
system that avoids bright-line rules and requires more professional judg-
ment to determine appropriate accounting. Much research suggests that
practitioners sometimes employ the latitude inherent in standards to make
choices consistent with their incentives, such that regulators need to strike
the right balance of incentives to encourage accurate reporting when lati-
tude exists (Nelson [2003]). We examine whether, even when practitioners
intend to report accurately, the process by which they deal with latitude
might produce inaccurate reporting.
We focus on practitioners’ use of examples when determining appro-
priate application of standards. In practice, examples can be provided by
standard setters as implementation guidance when the standard is issued
(e.g., Statement of Financial Accounting Standards (SFAS) No. 106 (FASB
[1990]), SFAS No. 123(R) (FASB [2004]), SFAS No. 133 (FASB [1998])).
Examples also can be provided by standard setters subsequent to issuance
(e.g., Emerging Issues Task Force (EITF) 94-3 (FASB [1994]) and EITF
99-19 (FASB [1999])), by regulators via enforcement actions (e.g., Accounting
and Auditing Enforcement Release (AAER) 108 (SEC [1986])),
and by the business press. We use the term “example-based reasoning”
to refer to the psychological process by which practitioners apply such
examples.
Prior psychology research examining similarity judgments (e.g., Tversky
[1977]) and priming (e.g., Higgins, Bargh, and Lombardi [1985]) suggests
that practitioners may conclude that the treatment illustrated by the exam-
ple is more appropriate for their situation than is actually justified by the
underlying facts. Thus, a practitioner might tend to conclude that a particu-
lar accounting treatment is appropriate for their case when presented with
implementation guidance that provides an example of acceptable report-
ing (which we call an “affirmative example”), and for the same case might
tend to conclude that a particular accounting treatment is not appropri-
ate for their situation when presented with implementation guidance that
provides an example of unacceptable reporting (which we call a “counter
example”).
Two features of the accounting setting make it difficult to infer that re-
sults from prior psychology research generalize, and also present design
challenges for our experiments. First, unlike the generic contexts used
in prior psychology research, in which examples are abstract and not in-
tended to communicate decision thresholds, a unique feature of the ac-
counting context is that standard setters may choose specific examples
to communicate thresholds or conditions that they deem necessary for
1 A related literature examines analogical reasoning, whereby decision makers solve prob-
lems by identifying an analog that is structurally similar to their decision problem and mapping
elements of the analog to their own problem (Novick [1988], Holyoak and Thagard [1997]).
Prior accounting research has examined this type of analogical reasoning in tax, auditing, and
managerial accounting settings (Marchant [1989], Marchant et al. [1991, 1993], Matsumura
and Vera-Munoz [2005]), providing evidence about the circumstances in which accountants
are most likely to identify and apply solutions implied by an analog.
2 We use the term “example” rather than “precedent” because implementation guidance may
also consist of examples that describe hypothetical transactions rather than precedents drawn
from actual transactions.
3. Experiment 1
3.1 METHOD
3.1.1. Design Overview. Experiment 1 employs a 2 × 2 between-subjects de-
sign, crossing two example conditions (Affirmative/Counter) and two case
conditions (Revenue/Expense). Affirmative examples describe the facts of
an accounting transaction that is likely to qualify for revenue/expense recog-
nition. Counter examples describe the facts of an accounting transaction
that is not likely to qualify for revenue/expense recognition. The revenue
case requires a recognition determination for a “bill-and-hold” sale. The
expense recognition case requires a technological feasibility determination
for a computer software product.
3.1.2. Participants. One hundred twenty-five MBA students enrolled in
an MBA-level intermediate-accounting course participated voluntarily in
this study. Using MBA students as participants allowed us to enhance ex-
perimental power by reducing intrusion from experienced practitioners’
reporting incentives and their knowledge of existing specific standards and
precedents. Data collection occurred at the start of the course (January,
3 Salterio and Koonce [1997] provide evidence that auditors provided with mixed prece-
dents attend more to the precedents supporting the reporting position favored by their client.
We focus on circumstances where a practitioner’s preference is accurate application of the
standard (rather than attaining a favored reporting position).
4 Specifically, to balance any order effects, facts presented in each case appear in 16 different
orders. The orders are selected so that each fact appears in each position an equal number
of times. Each fact is either in favor of or against recognition an equal number of times. Each
fact is not always followed or preceded by the same other fact. The orders are selected so that
each fact is sometimes consistent with and sometimes in opposition to each other fact in the
case. Finally, the orders are selected so that facts indicating recognition are always followed
by facts indicating nonrecognition and vice versa. Two recognition facts never appear next to
each other and two nonrecognition facts never appear next to each other.
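The counterbalancing scheme described in this footnote can be sketched programmatically. The sketch below assumes four recognition ("pro") facts and four nonrecognition ("con") facts per case, which the footnote does not state; the fact labels are hypothetical. It interleaves rotations of the two fact lists, varying which type leads, so that every fact appears in every position equally often and adjacent facts always point in opposite directions:

```python
from itertools import product

def make_orders(pro, con):
    """Build counterbalanced fact orders that alternate recognition (pro)
    and nonrecognition (con) facts, in the spirit of the footnote: every
    fact appears in every position equally often, and adjacent facts
    always point in opposite directions."""
    k = len(pro)                      # assumes len(pro) == len(con)
    orders = []
    for i, off, pro_first in product(range(k), range(2), (True, False)):
        j = (i + off) % k
        p = pro[i:] + pro[:i]         # rotated pro facts
        c = con[j:] + con[:j]         # rotated con facts
        first, second = (p, c) if pro_first else (c, p)
        # interleave: first[0], second[0], first[1], second[1], ...
        orders.append([f for pair in zip(first, second) for f in pair])
    return orders

# Hypothetical fact labels; the paper does not list the facts here.
pro = ["P0", "P1", "P2", "P3"]
con = ["C0", "C1", "C2", "C3"]
orders = make_orders(pro, con)
print(len(orders))   # 16 orders, matching the footnote
```

Across the 16 orders generated this way, each fact occupies each of the 8 positions exactly twice, and no two same-direction facts are ever adjacent, which are the balance properties the footnote describes.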
3.1.5. Materials and Procedure. Appendices 1 and 2 include task and case
information, the standard, the affirmative example, and the counter exam-
ple used in the revenue and expense treatments, respectively. Participants
first read a case that considers the appropriate accounting treatment for
either a “bill-and-hold” sale (Revenue case) or costs related to the develop-
ment of a computer software product (Expense case). Each case contains
background financial information and the specific facts of the transaction
in question. The background financial information is based on the CAP
case used by Nelson, Smith, and Palmrose [2005] and adapted from Braun
[2001] and Libby and Kinney [2000]. The background information is in-
tended to indicate to participants that the treatment of the transaction in
question has a relatively large financial impact on the profitability of the
company. Specifically, recognizing revenue or expense from the transac-
tion in question results in a change in net income that is almost 5% of the
prior year’s net income and 0.5% of the prior year’s total assets.
Participants are provided a case and a relevant standard (but not an exam-
ple that provides implementation guidance). Participants are informed that
they have an incentive to report net income accurately rather than conser-
vatively or aggressively, and asked to indicate their incentive to reinforce it
and provide data for a comprehension check. Participants also are informed
that, given the case materials and standard, they should think that the like-
lihood of qualifying for recognition is 50%. Participants are then asked to
indicate on a 100-point scale the probability that the transaction in the case
qualifies for revenue or expense recognition. The lower (upper) end of the
scale is marked numerically as 0 (100) and the description associated with
the number is “definitely does NOT qualify” (“definitely qualifies”). This
judgment serves as a pretest baseline to assess the effect of examples, and
also provides evidence about whether participants internalized instructions
that conveyed an accuracy goal and a 50% pre-example likelihood of the
case qualifying for recognition. 6
Next, participants are asked to assume that the standard is accompanied
by an example that is intended to provide further guidance. The example is
5 The affirmative and counter examples are consistent with each other, because the affirma-
tive example indicates that all positive attributes allow recognition and the counter example
indicates that all negative attributes do not allow recognition. Thus, the processes that we in-
vestigate are distinct from the effects of “counter-factual reasoning” as investigated by Heiman
[1990], Koonce [1992], and Kadous, Krische, and Sedor [2005], in which considering incon-
sistent evidence debiases optimism or overconfidence.
6 By encouraging participants to hold a prior belief around 50% and then eliciting this prior
belief, our design enables us to decrease the amount of noise in the pretest judgment while
still allowing participants to set their own prior, which has been found to play an important
role in prior belief-revision research (McMillan and White [1993]). It also minimizes the risk
of ceiling or floor effects.
7 All analyses are presented using the post-example judgment as the dependent variable.
Results are similar if the dependent variable is the difference between the pre- and post-example
judgments, and when pre-example judgment is included as a covariate. In addition, using an
indicator variable set to one if the pre-example judgment is 50% and zero otherwise reveals
that the covariate does not significantly interact with example type or case, indicating that
results do not depend on whether participants’ initial judgment deviated from 50%. When the
pre-example judgment is the dependent variable, neither the main effects nor the interaction
are significant.
8 Order is not significant in any analyses, so it is dropped from analyses and is not discussed
further.
9 All data in both experiments are analyzed parametrically since the assumptions of normality
and equal variance are met. However, the same results of hypothesis tests are obtained when
rank-transformed data are used to perform nonparametric analyses.
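The rank-transform approach mentioned in this footnote (pool the data, replace each observation with its rank, then run the parametric test on the ranks) can be sketched as follows. The judgment values here are simulated for illustration only and are not the experiment's data:

```python
import numpy as np
from scipy.stats import f_oneway, rankdata

rng = np.random.default_rng(0)
# Simulated post-example judgments (illustrative values, not the paper's data)
affirmative = rng.normal(58, 12, 30)
counter = rng.normal(36, 12, 30)

# Parametric one-way ANOVA on the raw judgments
f_raw, p_raw = f_oneway(affirmative, counter)

# Rank transform: pool the data, replace each value with its rank,
# then rerun the same parametric test on the ranks
ranks = rankdata(np.concatenate([affirmative, counter]))
f_rank, p_rank = f_oneway(ranks[:30], ranks[30:])

print(f"raw:  F={f_raw:.2f}, p={p_raw:.5f}")
print(f"rank: F={f_rank:.2f}, p={p_rank:.5f}")
```

When the raw-data and rank-based tests agree, as the footnote reports for both experiments, the conclusion is robust to violations of normality.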
TABLE 1
Results of Experiment 1: Judged Probability that Transaction Qualifies for Recognition
Panel A: Cell means
Example Type            N   Pre-example   Post-example   Diff.
Revenue case:
  Affirmative          32       41.2           57.0       15.8
  Counter              28       45.0           35.9       −9.1
  Collapsed by type             43.1           46.5        3.3
Expense case:
  Affirmative          33       49.8           58.2        8.4
  Counter              32       53.3           47.2       −6.1
  Collapsed by type             51.5           52.7        1.2
Collapsed across case:
  Affirmative                   45.6           57.6       12.0
  Counter                       49.2           41.6       −7.6
Panel B: Test of H1
income-increasing judgments are significantly more probable when participants are provided
a revenue case than an expense case (t = 2.31, p = 0.011). Focusing only on counter examples,
simple-effects tests indicate that income-increasing judgments are significantly more probable
when participants are provided an expense case than a revenue case (t = 2.46, p = 0.008).
11 We make no prediction about whether the absolute differences between pre- and post-
example judgments differ between case and example types. The data reported in table 2,
panel A suggest a larger absolute difference for the revenue case than for the expense case
and for affirmative examples than for counter examples, which implies that participants react
TABLE 2
Simple effect of example for revenue-recognition case:
Parameter   Estimate   Standard Error   t Value   p-Value (One-Tailed)
Revenue        21.14             6.88      3.07     0.001

Simple effect of example for expense-recognition case:
Parameter   Estimate   Standard Error   t Value   p-Value (One-Tailed)
Expense        10.99             6.59      1.67     0.049
Participants in experiment 1 judge the probability that the transaction in the case qualifies for revenue or expense recognition. This table transforms results reported in table 1 to report descriptive statistics
and hypothesis tests about the probability of making an income-increasing judgment implied by participants’ judgments. For the revenue case the means indicate the probability of qualifying for revenue
recognition. For the expense case the means indicate 100 – the probability of qualifying for expense recognition. Participants receive a basic revenue (expense) recognition standard and apply it to a case that
always contains an equal mix of facts supporting recognition and facts supporting nonrecognition. Participants make pre-example judgments (based on only the case information and general standard) and
post-example judgments (after receiving either an affirmative example in which all facts favor recognition and recognition is allowed, or a counter example where all facts favor nonrecognition and recognition is
not allowed), with the example serving as “implementation guidance” that supplements the standard.
[Figure 1: two panels, “Post-example Judgment” and “Difference,” each plotting P(income increasing) for the Affirmative and Counter example conditions in the Revenue and Expense cases.]
FIG. 1.—Results: Experiment 1. Participants in experiment 1 judge the probability that the
transaction in the case qualifies for revenue or expense recognition. Participants receive a
basic revenue (expense) recognition standard and apply it to a case that always contains an
equal mix of facts supporting recognition and facts supporting nonrecognition. Participants
make pre-example judgments (based on only the case information and general standard) and
post-example judgments (after receiving either an affirmative example in which all facts favor
recognition and recognition was allowed, or a counter example where all facts favor nonrecog-
nition and recognition is not allowed), with the example serving as “implementation guidance”
that supplements the standard. This figure shows mean judgments of the appropriateness of
making an income-increasing judgment. For the revenue case the means indicate the prob-
ability of qualifying for revenue recognition. For the expense case the means indicate 100 –
the probability of qualifying for expense recognition. “Difference” is mean post-judgment less
mean pre-judgment.
more strongly to examples in the revenue case than in the expense case, and to affirmative
examples than to counter examples. To test this possibility, we perform an ANOVA in which
the absolute value of the difference is the dependent variable and case and example type are
the independent variables. The results reveal insignificant main effects for case (F = 0.99, p =
0.322) and example type (F = 0.72, p = 0.398), and an insignificant interaction (F = 0.18, p =
0.674). This analysis indicates that participants do not react significantly differently to examples
in the revenue and expense cases and do not react significantly differently to affirmative and
counter examples.
incentive tend to recognize revenue and not expense, and participants who
perceive an income-decreasing incentive tend to recognize expense and not
revenue. Therefore, incentive-based reasoning would produce a main effect
of case and no main effect or interaction between case and example type
both on judged appropriateness of recognition and on the income resulting
from participants’ judgments. Results of tests of H1 and H2 show no main
effect for case, but rather the predicted main effect for example type for H1
and the predicted interaction between example type and case for H2. Thus,
the data do not support an incentive-based explanation for the results.
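The case-by-example-type analyses above rest on a 2 × 2 ANOVA with an interaction term. The sketch below computes such an ANOVA in plain numpy for a balanced design on simulated data with a pure crossover interaction; the cell sizes and values are hypothetical, and the paper's own analysis uses a GLM that accommodates unequal cells:

```python
import numpy as np
from scipy.stats import f as f_dist

def two_way_anova(cells):
    """Balanced two-way ANOVA.  `cells[i][j]` is a 1-D array of
    observations for level i of factor A and level j of factor B."""
    a, b = len(cells), len(cells[0])
    n = len(cells[0][0])                   # per-cell sample size
    data = np.array(cells)                 # shape (a, b, n)
    gm = data.mean()
    mean_a = data.mean(axis=(1, 2))        # factor A marginal means
    mean_b = data.mean(axis=(0, 2))        # factor B marginal means
    cell_means = data.mean(axis=2)
    ss_a = n * b * ((mean_a - gm) ** 2).sum()
    ss_b = n * a * ((mean_b - gm) ** 2).sum()
    ss_ab = n * ((cell_means - mean_a[:, None] - mean_b[None, :] + gm) ** 2).sum()
    ss_e = ((data - cell_means[:, :, None]) ** 2).sum()
    df_e = a * b * (n - 1)
    out = {}
    for name, ss, df in [("A", ss_a, a - 1), ("B", ss_b, b - 1),
                         ("AxB", ss_ab, (a - 1) * (b - 1))]:
        F = (ss / df) / (ss_e / df_e)
        out[name] = (F, f_dist.sf(F, df, df_e))
    return out

rng = np.random.default_rng(1)
# Hypothetical 2x2 design: factor A = case (revenue/expense),
# factor B = example (affirmative/counter), with a pure crossover
# interaction on P(income-increasing judgment) and no main effects.
d = 10.0
cells = [[rng.normal(50 + d, 8, 30), rng.normal(50 - d, 8, 30)],   # revenue
         [rng.normal(50 - d, 8, 30), rng.normal(50 + d, 8, 30)]]   # expense
res = two_way_anova(cells)
for k, (F, p) in res.items():
    print(f"{k}: F={F:.2f}, p={p:.4f}")
```

With this crossover pattern the interaction dominates while both main effects stay near zero, which mirrors the predicted H2 pattern of a case-by-example interaction without a case main effect.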
4. Experiment 2
In experiment 1, student participants are used to enhance experimental
power by avoiding intrusion from experienced practitioners’ knowledge of
existing standards and precedents. However, it is also important to deter-
mine the extent to which results generalize to circumstances where experi-
enced practitioners’ existing knowledge combines with provided examples
when applying implementation guidance. Therefore, while we do not posit
particular effects of experienced practitioners’ knowledge, one purpose of
experiment 2 is to generalize the results of H1 and H2 to a more experienced
population.
The pattern of results in experiment 1 also suggests a potential means for
encouraging accurate reporting. Specifically, practitioners could be pro-
vided with both affirmative and counter examples. As predicted by H3, so
long as practitioners focus equally on both types of examples, effects of
example-based reasoning are counterbalanced. We focus on only the rev-
enue case in experiment 2 because the results of experiment 1 indicate
that “process” is now a two-level indicator variable discriminating between a priming- and a
similarity-based process, we find an insignificant three-way interaction among case, example,
and process; a significant case by example interaction; and insignificant main effects of process,
case, and example. The interaction between case and example is present for both the similarity-
based- and priming-based-process participants when analyzed separately.
a somewhat larger effect of example type for the revenue case than for
the expense case (although not significantly so). Therefore, we reason that
the revenue case is more likely to produce the larger effect of example
type and therefore provides the greater effect to debias for purposes of
testing H3.
4.1 METHOD
Except as noted otherwise, the method used in experiment 2 is the same
as the method used in experiment 1.
4.1.1. Design Overview. All participants receive the same “bill-and-hold”
revenue case as was used in experiment 1. The experiment employs a
1 × 3 between-subjects design with three example conditions (affirmative,
counter, both). The affirmative and counter example conditions are the
same as in experiment 1. In the “both” condition, participants receive both
the affirmative and counter example.
4.1.2. Participants. Two hundred sixty practitioners working in
accounting- and finance-related fields are selected from an alumni
database, contacted via email, and asked to participate in the experiment.
A total of 166 practitioners (64%) participate. Experiment-2 participants
have an average of 10 years of work experience, are an average of 35 years
old, and received an MBA an average of six years prior to participation.
Eighty-two percent are male. Participants are randomly assigned to one of
three treatments. Data collection occurred during June and July of 2005.
4.1.3. Manipulation of Example. The example is manipulated as in experi-
ment 1 for the affirmative and counter conditions. For the “both” condition,
participants receive both an affirmative and a counter example instead of
receiving one or the other. The order in which each example type appears
in the case is balanced between subjects.
4.1.4. Materials and Procedure. Whereas experiment 1 utilized a paper-
and-pencil task, experiment 2 is administered via a Web-based instrument.
Participants access the materials by clicking on a link supplied in the email
that solicits their participation. The instrument is split into three sequential
sections. Participants are not able to change their answers after submitting
each section of the instrument, but before submitting the section can scroll
throughout the section to allow them to repeatedly access the information
in that section.
The first section of the instrument presents participants with the case and
standard, and requires participants to submit their pre-example judgment
before continuing to the next section of the instrument. Once submitted,
participants are directed to the next section of the experiment, which in-
cludes all information provided in the prior section and also includes the
example manipulation. After reviewing this information, participants are
required to submit their post-example judgment before being directed to
the debriefing questionnaire, which concludes the experiment.
4.2 RESULTS
4.2.1. Comprehension Check. The case materials indicate to participants
that they should have an incentive to report net income accurately (i.e.,
unbiased) rather than conservatively (i.e., understated) or aggressively (i.e.,
overstated). The results of a comprehension check question reveal that 15
participants (9%) do not indicate they have an accuracy incentive. Dropping
these subjects from the analysis does not affect the results that follow, so all
analyses include these subjects.
The case materials also indicate that, prior to receiving the example(s),
there is a 50% probability that the transaction qualifies for recognition.
The results of the pre-example judgment provide evidence about the ex-
tent to which participants internalize the 50% prior. As in experiment
1, a majority of participants (89) indicates a pre-example judgment of
50%, but many (77) do not. Dropping these participants from the anal-
ysis does not affect the results that follow, so all analyses include these
participants. 13
4.2.2. Test of H1 and H2. Participants judge the probability that the trans-
action in question qualifies for revenue recognition. Because all participants
receive the revenue-recognition case, the same analysis provides a test of H1
(the effect of example type on probability of recognition) and H2 (the effect
of example type on the net income implied by participants' recognition
judgments).
The means are presented in table 3, panel A for pre-example judgments,
post-example judgments, and the difference between pre- and post-example
judgments, and are shown in figure 2 for post-example judgments and
the difference between pre- and post-example judgments. Using the post-
example judgment, a GLM procedure indicates a significant effect for example
type (F = 10.05, p < 0.001). Table 3, panel B provides the complete
ANOVA table. Contrasts comparing the cell means indicate that the affir-
mative example condition results in a judgment that is significantly greater
than the counter example condition (t = 4.00, p < 0.001), supporting H1
and H2. 14
4.2.3. Test of H3. The nonparametric Jonckheere-Terpstra test (Hollan-
der and Wolfe [1973]) for ordered cell medians indicates that the medians
13 All analyses are presented using the post-example judgment as the dependent variable.
The data are analyzed parametrically since the assumptions of normality and equal variance
are met. However, H1 and H2 are also supported if the data are analyzed nonparametrically
using rank-transformed data, with a significant effect of example (F = 9.91, p < 0.001) and
the affirmative example condition resulting in a judgment that is significantly greater than the
counter example condition (t = 3.86, p < 0.001).
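The Jonckheere-Terpstra test used for H3 (ordered alternatives; here the hypothesized ordering counter < both < affirmative) can be computed directly: sum, over each pair of groups taken in the hypothesized order, how often an observation in the later group exceeds one in the earlier group, then compare against a large-sample normal approximation. A minimal sketch on illustrative data (the group values below are hypothetical, not the experiment's):

```python
import math
from itertools import combinations

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra test for ordered alternatives.

    `groups` is a list of samples in the hypothesized increasing order.
    Returns (JT statistic, z score, one-sided p) under the large-sample
    normal approximation; ties count 0.5."""
    jt = 0.0
    for g1, g2 in combinations(groups, 2):   # each earlier/later group pair
        for x in g1:
            for y in g2:
                jt += (y > x) + 0.5 * (y == x)
    n = [len(g) for g in groups]
    N = sum(n)
    mean = (N * N - sum(k * k for k in n)) / 4
    var = (N * N * (2 * N + 3) - sum(k * k * (2 * k + 3) for k in n)) / 72
    z = (jt - mean) / math.sqrt(var)
    p = 0.5 * math.erfc(z / math.sqrt(2))    # one-sided upper tail
    return jt, z, p

# Hypothetical judgments ordered counter < both < affirmative
counter = [20, 25, 22, 30, 18, 27]
both = [40, 44, 38, 47, 42, 45]
affirmative = [46, 50, 55, 48, 52, 58]
jt, z, p = jonckheere_terpstra([counter, both, affirmative])
print(f"JT={jt:.1f}, z={z:.2f}, p={p:.6f}")
```

A large JT relative to its null mean (a positive z with a small one-sided p) supports the hypothesized ordering of the cell medians.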
TABLE 3
Results of Experiment 2: Judged Probability that Transaction Qualifies for Recognition and Implied Income Effects
Panel A: Cell means
Pre- Post-
example example
Example Type Judgment Judgment Difference
Affirmative (N = 55) 43.6 46.5 2.9
Counter (N = 53) 33.6 25.2 −8.4
Both (N = 58) 42.6 45.3 2.7
Panel B: Test of H1 and H2
Source          df   Sum of Squares   Mean Square   F Value   p-Value
Example Type     2        15,500.38      7,750.19     10.05     0.000
Error          163       125,662.87        770.94

Planned comparisons:
Parameter                 Estimate   Standard Error   t Value   p-Value (One-Tailed)
Affirmative vs. both          1.29             5.23      0.25     0.403
Counter vs. both             20.07             5.28      3.80     0.000
Affirmative vs. counter      21.36             5.34      4.00    <0.001
Participants in experiment 2 judge the probability that the transaction in the case qualifies for revenue recognition. This table reports descriptive statistics and hypothesis tests
about participants’ judgments. Participants receive a basic revenue recognition standard and apply it to a case that always contains an equal mix of facts supporting recognition
and facts supporting nonrecognition. Participants make pre-example judgments (based on only the case information and general standard) and post-example judgments (after
receiving an affirmative example in which all facts favor recognition and recognition is allowed, a counter example where all facts favor nonrecognition and recognition is not
allowed, or both an affirmative and a counter example), with the example(s) serving as “implementation guidance” that supplements the standard. This table reports descriptive
statistics about participants’ judgments of the appropriateness of making an income-increasing judgment. Since this is a revenue case the means indicate the probability of qualifying
for revenue recognition. “Post-example judgment” refers to the mean probability judgment made after seeing implementation guidance. “Difference” is mean post-judgment less
mean pre-judgment.
[Figure 2: two panels, “Post-example Judgment” and “Difference,” each plotting P(income increasing) for the Affirmative, Both, and Counter example conditions.]
FIG. 2.—Results: Experiment 2. Participants in experiment 2 judge the probability that the
transaction in the case qualifies for revenue recognition. Participants receive a basic revenue
recognition standard and apply it to a case that always contains an equal mix of facts supporting
recognition and facts supporting nonrecognition. Participants make pre-example judgments
(based on only the case information and general standard) and post-example judgments (af-
ter receiving an affirmative example in which all facts favor recognition and recognition is al-
lowed, a counter example where all facts favor nonrecognition and recognition is not allowed,
or both an affirmative and counter example), with the example(s) serving as “implementa-
tion guidance” that supplements the standard. This figure reports descriptive statistics about
participants’ judgments of the appropriateness of making an income-increasing judgment.
“Difference” is mean post-judgment less mean pre-judgment.
15 In the “both” condition, the order in which the affirmative and counter example appear
in the case is not significant in any analysis, and is not discussed further.
16 The planned comparisons are conducted parametrically since the assumptions of
normality and equal variance are met. However, conducting the planned comparisons
nonparametrically with rank-transformed data yields similar results, again not supporting H3.
The mean in the affirmative example treatment is not significantly greater than the mean in
the “both” condition (t = 0.02, p = 0.493), but the counter example condition is significantly
less than the "both" condition (t = 3.89, p < 0.001).
17 Despite the fact that all participants are presented with the same information prior to
providing pretest judgments, and that participants are assigned randomly to treatments, par-
ticipants in the counter-example treatment have a mean pretest that is lower than the pretest
for participants in the affirmative example and “both” treatments. When the pre-example judg-
ment is used as a dependent variable, the main effect for example type is marginally significant
at p = 0.080 (F = 2.57).
18 Results are similar when the dependent variable is the difference between the pre- and
post-example judgments.
5. General Discussion
This paper examines application of examples provided as implementa-
tion guidance for accounting standards. The results of two experiments
indicate that, consistent with prior results in psychology, many participants
engage in example-based reasoning, such that concluding recognition is
appropriate is more (less) likely after viewing an affirmative (counter) ex-
ample. From a net income perspective this result indicates that example type
and case interact, such that the probability of making an income-increasing
judgment for a revenue (expense) case is greater after receiving an af-
firmative (counter) example than after receiving a counter (affirmative)
example. Additional analyses indicate that these results occur when partici-
pants self-report a similarity-based process, and also when participants self-
report a priming-based process, but not when participants report another
process.
These results appear to be robust. By design we preclude support for our
hypotheses being explained by participants extracting a decision threshold
from the example, and analyses indicate that responses are inconsistent with
the thresholds implied by examples. Further analyses indicate that results are
not driven by participants’ pre-example priors, by nonaccuracy incentives,
by the order in which facts are examined, or by the identity of particular
fact combinations in the cases they evaluate. These results hold regardless of
whether participants are MBA students focused on finance and accounting
19 Analysis of only the participants who specify a similarity-based process reveals a significant
main effect of example type (F = 8.09, p = 0.001). Analysis of only the participants who specify
a priming-based process again reveals a significant main effect of example type (F = 5.42,
p = 0.010). Analysis of only those participants who self-report that they do not use the example
reveals an insignificant main effect of example type (F = 0.59, p = 0.568). Analysis of only those
participants who self-report an “other” process reveals an insignificant main effect of example
type (F = 0.60, p = 0.558).
APPENDIX 1
Revenue-Recognition Case, Standard, Example, and Counter Example
20 Sarbanes-Oxley Act of 2002, Pub. L. No. 107-204 116 Stat. 145 (2002).
Standard:
Revenues are generally recognized (1) when they are realized or realiz-
able and (2) when they have been earned. Revenues are considered real-
izable when assets received or held are readily convertible into cash or
claims to cash. Revenues are considered earned when the entity has sub-
stantially accomplished what it must do to be entitled to the benefits
represented by the revenues. With respect to bill and hold sales, various
aspects of a contract affect the likelihood that revenue recognition is ap-
propriate at contract signing. No individual aspect of a contract guarantees
that a bill and hold sale qualifies for revenue recognition upon contract
signing.
(Note: Provision of example vs. counter example manipulated between
participants.)
end, but delivery was scheduled for after period end. There is not a clear
business purpose for handling the sale on a bill and hold basis. The auto
parts will not be delivered on a fixed schedule. There is not a written sales
contract. The goods have not been physically separated.
APPENDIX 2
Expense-Recognition Case, Standard, Example, and Counter Example
Selected task and case information:
As part of the process for determining Net Income in 2004, you are considering
how to treat a computer software project whose final product CAP intends to
sell. Various research and development costs related to the
project have been incurred. If you determine that the project has reached
technological feasibility, some of the costs related to the project will be cap-
italized. If you determine that the project has not reached technological
feasibility, all costs incurred to date that are related to the project will be
expensed.
The costs in question total $5.5 million. Some key considerations related
to the project include the following.
• The product design has been completed.
• The program design has not been checked against the product specifications.
• CAP has ascertained that it has available the expertise and technology necessary to produce the product.
• Uncertainties related to high-risk development issues have not been resolved through testing.
Standard:
All costs incurred to establish the technological feasibility of a computer software product are research and development costs. These costs should be charged as an expense when incurred. After the product is viewed as technologically feasible, further development costs are capitalized. With respect to computer software projects, various aspects of a project affect the likelihood that technological feasibility has been attained. No individual aspect of a project guarantees that it meets the requirements to be considered technologically feasible.
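The expense-versus-capitalize split the standard describes can be sketched as follows. This is a minimal illustration only: the function and parameter names are our assumptions, and the feasibility cutoff is taken as a given judgment rather than computed, mirroring the standard's point that no single project aspect is decisive:

```python
def classify_costs(costs_before_feasibility, costs_after_feasibility):
    """Split project costs into (expensed, capitalized) per the standard.

    Costs incurred to establish technological feasibility are research and
    development costs and are expensed as incurred; development costs
    incurred after feasibility is attained are capitalized.
    """
    return costs_before_feasibility, costs_after_feasibility

# In the case, if feasibility is judged NOT attained, all $5.5 million of
# costs to date fall before the feasibility point and are expensed.
expensed, capitalized = classify_costs(5_500_000, 0)
print(f"expense ${expensed:,}; capitalize ${capitalized:,}")
```

The split itself is mechanical; the professional judgment the experiment studies lies entirely in deciding where the feasibility point falls.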
(Note: Provision of example vs. counter example manipulated between
participants.)
Example: Now assume that the accounting standard is accompanied
by the following additional example, intended to provide further
guidance:
With respect to computer software projects, various aspects of a project
affect the likelihood that technological feasibility has been attained. No
REFERENCES
BAMBER, E. M.; R. J. RAMSAY; AND R. M. TUBBS. “An Examination of the Descriptive Validity of
the Belief-Adjustment Model and Alternative Attitudes to Evidence in Auditing.” Accounting,
Organizations and Society 22 (1997): 249–68.
BRAUN, K. W. “The Disposition of Audit Detected Misstatements: An Examination of Risk and
Reward Factors and Aggregation Effects.” Contemporary Accounting Research 18 (2001): 71–99.
CHURCH, B. K. “An Examination of the Effect that Commitment to a Hypothesis Has on Au-
ditors’ Evaluations of Confirming and Disconfirming Evidence.” Contemporary Accounting
Research 7 (1991): 513–34.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Accounting for the Costs of Computer Software
to Be Sold, Leased, or Otherwise Marketed. Statement of Financial Accounting Standards No. 86.
Norwalk, CT: FASB, 1985.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Employers’ Accounting for Postretirement Bene-
fits Other than Pensions. Statement of Financial Accounting Standards No. 106. Norwalk, CT:
FASB, 1990.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Liability Recognition for Certain Employee
Termination Benefits and Other Costs to Exit an Activity (Including Certain Costs Incurred in a
Restructuring). Emerging Issues Task Force. EITF Issue No. 94-3. Norwalk, CT: FASB, 1994.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Accounting for Derivative Instruments and
Hedging Activities. Statement of Financial Accounting Standards No. 133. Norwalk, CT: FASB,
1998.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Reporting Revenue Gross as a Principal Versus
Net as an Agent. Emerging Issues Task Force. EITF Issue No. 99-19. Norwalk, CT: FASB,
1999.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Principles-Based Approach to Standard Setting.
Norwalk, CT: FASB, 2002.
FINANCIAL ACCOUNTING STANDARDS BOARD (FASB). Share-Based Payment. Statement of Financial
Accounting Standards No. 123 (revised 2004). Norwalk, CT: FASB, 2004.
FREDERICK, D. M., AND R. LIBBY. “Expertise and Auditors’ Judgments of Conjunctive Events.”
Journal of Accounting Research 24 (1986): 270–90.
GLOVER, S. M. “The Influence of Time Pressure and Accountability on Auditors’ Processing of
Nondiagnostic Information.” Journal of Accounting Research 35 (1997): 213–26.
HACKENBRACK, K. “Implications of Seemingly Irrelevant Evidence in Audit Judgment.” Journal
of Accounting Research 30 (1992): 126–36.
HACKENBRACK, K., AND M. W. NELSON. “Auditors’ Incentives and Their Application of Financial
Accounting Standards.” The Accounting Review 71 (1996): 43–59.
HEIMAN, V. B. “Auditors’ Assessments of the Likelihood of Error Explanations in Analytical
Review.” The Accounting Review 65 (1990): 875–90.
HIGGINS, E. T.; J. A. BARGH; AND W. LOMBARDI. “Nature of Priming Effects on Categorization.”
Journal of Experimental Psychology: Learning, Memory, and Cognition 11 (1985): 59–69.
HOFFMAN, V. B., AND J. M. PATTON. “Accountability, the Dilution Effect, and Conservatism in
Auditors’ Fraud Judgments.” Journal of Accounting Research 35 (1997): 227–37.
HOLLANDER, M., AND D. A. WOLFE. Nonparametric Statistical Methods. New York: Wiley,
1973.
HOLYOAK, K. J., AND P. THAGARD. “The Analogical Mind.” American Psychologist 52 (1997): 35–
44.
INTERNATIONAL ACCOUNTING STANDARDS BOARD (IASB). FASB and IASB Agree to Work Together
Toward Convergence of Global Accounting Standards. FASB and IASB Joint Press Release, October
29, 2002.
KADOUS, K.; S. J. KENNEDY; AND M. E. PEECHER. “The Effect of Quality Assessment and Di-
rectional Goal Commitment on Auditors’ Acceptance of Client-Preferred Methods.” The
Accounting Review 78 (2003): 759–78.
KADOUS, K.; S. D. KRISCHE; AND L. M. SEDOR. “Using Counter-explanation to Limit Analysts’
Forecast Optimism.” Working paper, Emory University, University of Illinois, and University
of Notre Dame, 2005.
KIESO, D. E.; J. J. WEYGANDT; AND T. D. WARFIELD. Intermediate Accounting, Eleventh edition.
New York: John Wiley & Sons, Inc., 2004.
KOONCE, L. “Explanation and Counterexplanation During Audit Analytical Review.” The Ac-
counting Review 67 (1992): 59–76.
LEVIN, I. P.; S. L. SCHNEIDER; AND G. J. GAETH. “All Frames Are Not Created Equal: A Typology
and Critical Analysis of Framing Effects.” Organizational Behavior and Human Decision Processes
76 (1998): 149–88.
LIBBY, R., AND W. R. KINNEY. “Earnings Management, Audit Differences, and Analysts’ Fore-
casts.” The Accounting Review 75 (2000): 383–404.
MARCHANT, G. “Analogical Reasoning and Hypothesis Generation in Auditing.” The Accounting
Review 64 (1989): 500–13.
MARCHANT, G.; J. ROBINSON; U. ANDERSON; AND M. SCHADEWALD. “Analogical Transfer and
Expertise in Legal Reasoning.” Organizational Behavior and Human Decision Processes 48 (1991):
272–90.
MARCHANT, G.; J. ROBINSON; U. ANDERSON; AND M. SCHADEWALD. “The Use of Analogy in
Legal Argument—Problem Similarity, Precedent, and Expertise.” Organizational Behavior and
Human Decision Processes 55 (1993): 95–119.
MATSUMURA, E., AND S. C. VERA-MUÑOZ. “The Joint Effects of Similarity and Comparison on
Accountants’ Recommendation Quality.” Working paper, University of Wisconsin—Madison
and University of Notre Dame, 2005.
MCMILLAN, J. J., AND R. A. WHITE. “Auditors’ Belief Revisions and Evidence Search—The Effect
of Hypothesis Frame, Confirmation Bias, and Professional Skepticism.” The Accounting Review
68 (1993): 443–65.
NELSON, M. W. “Behavioral Evidence on the Effects of Principles- and Rules-Based Standards.”
Accounting Horizons 17 (2003): 91–104.
NELSON, M. W. “A Review of Experimental and Archival Conflicts-of-Interest Research in Au-
diting,” in Conflicts of Interest: Challenges and Solutions in Business, Law, Medicine, and Public
Policy, edited by D. A. Moore, D. M. Cain, G. Loewenstein, and M. H. Bazerman. Cambridge:
Cambridge University Press, 2004: 41–69.
NELSON, M. W.; J. A. ELLIOTT; AND R. L. TARPLEY. “Evidence from Auditors About Managers’ and
Auditors’ Earnings Management Decisions.” The Accounting Review 77 (Supplement 2002):
175–202.
NELSON, M. W.; S. D. SMITH; AND Z. V. PALMROSE. “The Effect of Quantitative Materiality
Approach on Auditors’ Adjustment Decisions.” The Accounting Review 80 (2005): 897–920.
NISBETT, R. E.; H. ZUKIER; AND R. E. LEMLEY. “The Dilution Effect—Non-diagnostic Information
Weakens the Implications of Diagnostic Information.” Cognitive Psychology 13 (1981): 248–77.
NOVICK, L. R. “Analogical Transfer, Problem Similarity, and Expertise.” Journal of Experimental
Psychology: Learning, Memory, and Cognition 14 (1988): 510–20.
SALTERIO, S. “The Effects of Precedents and Client Position on Auditors’ Financial Accounting
Policy Judgment.” Accounting, Organizations and Society 21 (1996): 467–86.
SALTERIO, S., AND L. KOONCE. “The Persuasiveness of Audit Evidence: The Case of Accounting
Policy Decisions.” Accounting, Organizations and Society 22 (1997): 573–87.
SECURITIES AND EXCHANGE COMMISSION (SEC). SEC Accounting and Auditing Enforcement Release
No. 108. Washington, DC: SEC, 1986.
SECURITIES AND EXCHANGE COMMISSION (SEC). Revenue Recognition in Financial Statements. SEC
Staff Accounting Bulletin No. 101. (December 3). Washington, DC: SEC, 1999.
SECURITIES AND EXCHANGE COMMISSION (SEC). Study Pursuant to Section 108(d) of the Sarbanes-
Oxley Act of 2002 on the Adoption by the United States Financial Reporting System of a Principles-Based
Accounting System. Washington, DC: SEC, 2003.
SHELTON, S. W. “The Effect of Experience on the Use of Irrelevant Evidence in Auditor Judg-
ment.” The Accounting Review 74 (1999): 217–24.
SMITH, J. F., AND T. KIDA. “Heuristics and Biases—Expertise and Task Realism in Auditing.”
Psychological Bulletin 109 (1991): 472–89.
TVERSKY, A. “Features of Similarity.” Psychological Review 84 (1977): 327–52.