
International Journal of Accounting Information Systems 8 (2007) 92–116

Audit support systems and decision aids: Current practice and opportunities for future research
Carlin Dowling ⁎, Stewart Leech
The University of Melbourne, Australia
Received 5 July 2006; received in revised form 25 March 2007; accepted 5 April 2007

Abstract

Technological changes and audit firm mergers over the last decade raise the question as to whether the
decision aids reported in prior research are representative of the types of decision support currently employed
in audit firms. To address this issue, a study was conducted of the audit support systems used at five
international audit firms and the types of decision support embedded within their audit support systems. The
concepts of system restrictiveness and audit structure were combined to develop a definition of audit support
system restrictiveness, and the firms' systems were classified using this definition. Substantial differences in
audit support system restrictiveness were found to be associated with the type of decision support embedded
within these systems. In order to guide future research, existing audit decision aid studies were mapped to the
types of embedded decision support and several future research opportunities were identified.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Audit support systems; Decision aids; System restrictiveness

1. Introduction

Audit support systems are the key technology application deployed by audit firms to facilitate
efficient and effective audits. These systems include electronic workpapers, extensive help files,
accounting and auditing standards, relevant legislation, and decision aids. Although several
studies have investigated issues surrounding auditor use of decision aids (e.g., Anderson et al.,
1995, 2003; Bedard and Graham, 2002; Boatsman et al., 1997; Eining et al., 1997; Jennings et al.,
1993; Kachelmeier and Messier, 1990; Lowe and Reckers, 2000; Lowe et al., 2002; Mueller and
Anderson, 2002; Murphy and Brown, 1992; Murphy and Yetmar, 1996; Swinney, 1999; Ye and

⁎ Corresponding author. Tel.: +61 3 8344 3415; fax: +61 3 9349 2397.
E-mail addresses: carlin@unimelb.edu.au (C. Dowling), saleech@unimelb.edu.au (S. Leech).

1467-0895/$ - see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.accinf.2007.04.001

Johnson, 1995), limited evidence of the decision aids used in audit practice is available, and most
of this is dated and/or firm specific (e.g., Abdolmohammadi and Usoff, 2001; Abdolmohammadi,
1987, 1999; Bell et al., 2002; Brown and Murphy, 1990; Brown and Phillips, 1991; Connell,
1987; Graham et al., 1991; Shpilberg and Graham, 1986). The aim of this study is to support
future research by documenting the audit support systems and decision aids employed in audit
firms and mapping these to prior research. Although it is important for epistemological reasons
that future research is informed by and extends existing research, the transferability of decision
aid research findings to practice is likely to be enhanced by studies investigating issues relevant to
the types of decision aids used in practice and the issues encountered in their use.
This study documents and compares the audit support systems deployed in five international
audit firms. The systems are compared according to their location of development, use policy, the
extent to which a client's file is manually or automatically tailored, the extent to which decision
outcomes in an earlier audit phase are manually or automatically integrated into later audit phases,
and the role of the audit support system within the audit process. Significant differences in audit support
system design were found, including the extent to which an audit support system is viewed as an
enabler or enforcer of a firm's audit process.
The findings reported in this study have the potential to significantly influence the direction of
future audit support system and audit decision aid research. In addition to identifying several
research opportunities, this study highlights the importance of system restrictiveness, a construct
that has not been extensively investigated in the literature. This study combined the concepts of
system restrictiveness (from the information systems literature) and audit structure (from the audit
literature) to develop a definition of audit support system restrictiveness. This is an important
contribution to the literature because, as the findings reported in this study indicate, the extent of
system restrictiveness is an important design feature of audit support systems associated with the
type of decision support embedded within a system and the way in which an audit firm views their
audit support system. The significant differences in audit support system design documented in
this study provide important insights that can be used to generalize and integrate the findings of
studies investigating audit support systems.
The remainder of this paper is divided into three sections. The first section presents the
research method, compares the audit support systems and discusses the types of decision support
embedded in these systems. In the second section, opportunities for future research are identified
by mapping extant studies to the types of decision aids currently deployed in audit firms. In the
third section, the implications for research and the limitations of the study are discussed.

2. Audit support systems and decision aids currently used in practice

2.1. Research method

Semi-structured interviews were conducted with four partners and four managers from five audit firms: the Big 4 and one large mid-tier international audit firm.¹ The purpose was to obtain information regarding the audit support systems and the decision aids deployed in these firms during 2004/2005. Each of these firms has a proprietary audit support system with decision aids embedded within it. All of the partners and managers interviewed have extensive knowledge of their firm's audit support system² and decision aids. In three firms, the partners and/or managers were members of their audit firm's global or national audit technology development group. The partners and managers interviewed in the other two firms were responsible for the deployment of audit technologies and for staff training.

¹ To satisfy confidentiality agreements, each firm is referred to as A, B, C, D, or E. These alpha codes are used in the text and tables to facilitate cross-referencing.
Each interview began by defining a decision aid to ensure that the interviewers and the interviewees shared a common understanding. The partners/managers were then asked to systematically walk through the decision aids used in each stage of the audit process, the typical users, and the output obtained. Questions and prompts for further information were asked throughout the interviews.³

All interviews were voice recorded and transcribed. The interview transcriptions were used to code the data for each audit firm.⁴ For each firm, tables were compiled that contained an overview of the firm's audit support system, the decision aids used in each phase of the audit, and the perceived benefits and limitations of using decision aids. The interviewees were sent the tables for their firm and asked to confirm the contents or make any necessary adjustments.⁵

2.2. The audit support systems

Table 1 provides a comparison of the audit support systems deployed in the five audit firms.
The key similarities and differences across the audit firms are discussed below.⁶

2.2.1. Development and system use policies


All five firms' audit support systems are developed globally and tailored to achieve compliance with country-specific standards. In two firms (C and D), the audit support systems are supplemented with stand-alone tools. Use of the system is mandatory for all clients in two firms (B and E) and is mandatory for all but very small clients in two other firms (A and C). For the one firm (D) in which use is voluntary, the production of working papers in a format similar to those produced by the system is mandatory.⁷ According to a partner at Firm D, voluntary use of the system has the potential to impair audit efficiency and effectiveness through the development of inappropriate audit plans that require the collection of too much or insufficient audit evidence. Using the audit support system reduces this risk by assisting the auditor to tailor the audit program to the client's circumstances.

² The terms audit support system and system are used interchangeably for ease of readability.
³ Some firms provided less information than others, despite prompting from the researchers. This variability is reflected in the analysis throughout the paper.
⁴ One researcher coded the data and compiled summary tables. The compiled tables were independently reviewed by the other researcher and then forwarded to the audit firms for verification. The coding process was highly objective because of the descriptive nature of the data. It was considered redundant for two researchers to independently code the data to obtain a measure of inter-rater reliability because independent verification was obtained from the audit firms.
⁵ A change in audit technology committee membership in one firm meant that our tables were verified by a different individual from the one we interviewed. For all other firms, at least one interviewee confirmed the data. The audit firms did not verify the extent to which their firm's system was classified as restrictive. System restrictiveness is a relative measure (Silver, 1988a,b); a meaningful assessment requires knowledge of the other firms' systems, to which the interviewees did not have access.
⁶ To satisfy confidentiality agreements, screen shots of the decision aids cannot be provided because they would identify the audit firms.
⁷ In 2006, this firm rolled out an enhanced audit support system. Use of the enhanced system was mandatory during the planning phase of the audit.

Table 1
Comparison of audit support systems

|                                                        | Firm A               | Firm B    | Firm C                 | Firm D    | Firm E    |
| Developed globally                                     | Yes                  | Yes       | Mostly                 | Yes       | Yes       |
| Supplemented with other tools                          | No                   | No        | Yes                    | Yes       | No        |
| Use policy for large clients                           | Mandatory            | Mandatory | Generally mandatory    | Voluntary | Mandatory |
| Use policy for small clients                           | Voluntary            | Mandatory | Some aspects mandatory | Voluntary | Mandatory |
| Industry tailoring                                     | Yes                  | Yes       | Yes                    | Yes       | Yes       |
| Tailoring of client file                               | Predominantly manual | Automatic | Manual                 | Manual    | Automatic |
| Extent of automated integration across audit phases ᵃ  | Medium               | High      | None                   | Low       | High      |
| System viewed as enabler of audit process              | Yes                  | Yes       | Yes                    | Yes       | Yes       |
| System enforces compliance with audit methodology      | No                   | Yes       | No                     | No        | Yes       |
| Decision aids embedded                                 | Yes                  | Yes       | Yes                    | Yes       | Yes       |
| Extent of automated decision support ᵇ                 | Low                  | High      | Low                    | Low       | High      |
| Extent of system restrictiveness ᶜ                     | Low                  | High      | Low                    | Low       | High      |

ᵃ High = output in one phase of the system is automatically integrated as input into a following phase; Medium = combination of manual and automatic integration within the system; Low = integration is mostly completed manually; None = all integration is completed manually.
ᵇ High = system provides recommendations based on user input; Low = embedded decision aids generally only prompt users (e.g., checklists).
ᶜ High = system significantly restricts the extent to which the user is free to choose how the audit is performed and to make certain judgments, through influencing how the user interacts with the system and conducts the audit; Low = system does not significantly constrain the user's interaction with the system or enforce how the audit should be performed.

2.2.2. Automated vs manual tailoring


The audit support systems used at all of the firms contain industry “packs” which are used to tailor each client's audit plan to address industry risks. Tailoring is one of the first tasks completed in the planning phase and is important for controlling the work completed in the field. In two firms (B and E), decision aids embedded within each firm's system automatically tailor a client's file using an auditor's responses to a set of standard questions.⁸ Auditors respond to the standard questions by selecting the appropriate response from a set of options (e.g., yes, no, unknown). The embedded decision aids use the selected response(s) to identify and automatically upload the relevant key risks and processes into the client's file. In the other three firms (A, C and D), auditors manually tailor a client's file using checklists and lists of industry-specific risks accessed from the audit support system. For example, in Firm D's system an auditor is required to select the client's industry, which triggers the production of a list of risks relevant to that industry. The auditor selects the risks relevant to the client from the generated list. According to a manager at this firm, tailoring is important to stop audit staff from pursuing further work in areas that are not relevant to the client and/or do not have a significant impact on the client's financial statements.
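The difference between the two tailoring styles can be sketched in a few lines of code. The sketch below is purely illustrative: the firms' systems are proprietary, so every question name, risk name, and data structure here is our own invention, not a detail disclosed by any interviewee.

```python
# Hypothetical illustration only: all question and risk names are invented.

# Automatic tailoring (in the style attributed to Firms B and E): each rule
# maps an auditor's answer to a standard question onto the risks it triggers,
# which are then uploaded into the client's file without manual selection.
TAILORING_RULES = {
    ("operates_in_regulated_industry", "yes"): ["regulatory_compliance_risk"],
    ("significant_foreign_sales", "yes"): ["foreign_exchange_risk"],
    ("new_it_system_this_year", "unknown"): ["it_change_control_risk"],
}

def tailor_client_file(responses):
    """Derive the client file's risk list directly from questionnaire
    responses, mimicking automatic upload of relevant key risks."""
    risks = []
    for (question, answer), triggered in TAILORING_RULES.items():
        if responses.get(question) == answer:
            risks.extend(triggered)
    return risks

# Manual tailoring (in the style attributed to Firm D): the system only
# produces an industry risk list; the auditor chooses which risks to import.
INDUSTRY_RISK_PACKS = {
    "banking": ["credit_risk", "regulatory_compliance_risk"],
    "retail": ["inventory_obsolescence_risk", "revenue_recognition_risk"],
}

def industry_risk_list(industry):
    return INDUSTRY_RISK_PACKS.get(industry, [])
```

The contrast is where judgment enters: under the first function the system selects the risks and the auditor reviews them; under the second the system merely supplies the candidate list and the selection itself remains a manual judgment.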

⁸ The manager at one of these firms described the questions as having two levels, and auditors are free to answer at either level: “Auditors can choose to answer the high level primary questions or they can drill down to sub-questions which can help them answer the primary level questions”.

2.2.3. Automated integration across audit phases


As indicated in Table 1, the audit support systems used in the five firms also differ in the level
of integration across the audit phases. This difference is associated with whether the audit
support system is manually or automatically tailored. In the two firms (B and E) whose systems
are automatically tailored, the impact that a decision made in one part of the audit has on a
subsequent part of the audit is automatically incorporated throughout the system. For example, if
a specific risk factor is relevant to the client, the system automatically adds the appropriate audit
steps into the audit program. The partners/managers at these two firms stressed that although
there is a high level of automatic tailoring and integration, at all stages auditors are required to
consider and if necessary modify the system's output. As one manager said, “the auditor has the
final judgment … [the system's recommendations] are not prescriptive, at best they are a strong
‘you should do’ … [it is the firm's policy] that the auditor must use judgment based upon all
circumstances”.
The audit partners at two firms (A and C), which have low levels of automated integration, expressed mixed views on the benefits of automated integration. A partner at Firm A stated that seven to ten years ago the firm's system was highly structured and integrated. The firm moved away from this type of system because auditors were over-relying on it, “keying in and doing what it said … rather than thinking about what is required”. This partner viewed low levels of automation as superior. However, this view is not shared by all partners. For example, a manager at Firm C, whose system has no automatic integration, noted that this can lead to excessive testing and collection of audit evidence. In this firm's system the results of control testing are not fed through to substantive test selection. This means that even if the results of the control testing provide most of the audit evidence, all substantive tests are available for selection, not just the analytical tests, which can result in over-auditing and inefficiency if a user selects unnecessary substantive tests.
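The Firm C manager's efficiency argument can be made concrete with a small sketch. Again this is hypothetical: the test names and the filtering rule are invented for illustration and are not taken from any firm's system.

```python
# Hypothetical sketch of automated integration across audit phases: in an
# integrated system the control-testing result feeds forward and constrains
# which substantive tests are offered; in a non-integrated system every test
# stays available regardless of that result.

SUBSTANTIVE_TESTS = {
    "revenue": {
        "analytical": ["revenue_trend_analysis", "gross_margin_ratio"],
        "detailed": ["invoice_vouching", "sales_cutoff_testing"],
    },
}

def offered_tests(area, controls_effective, integrated):
    """Return the substantive tests the system offers for an audit area.
    When the system is integrated and controls tested effective, only
    analytical procedures are offered; otherwise all tests are offered,
    which is the over-auditing exposure the Firm C manager describes."""
    tests = SUBSTANTIVE_TESTS[area]
    if integrated and controls_effective:
        return tests["analytical"]
    return tests["analytical"] + tests["detailed"]
```

Under this sketch, a non-integrated system (`integrated=False`) presents the detailed tests even when control testing has already supplied most of the evidence, leaving it to the auditor's discipline, rather than the system, to avoid selecting unnecessary tests.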

2.2.4. Enabler vs enforcer of the firm's audit methodology


Although all firms view their audit support system as an enabler of the audit process, firms
differ on whether they view their audit support system as an enforcer of the firm's audit
methodology. The two firms (B and E) who reported a high level of automatic tailoring and
integration also reported having specifically designed their audit support systems to enforce
compliance with their firm's audit methodology by tightly interweaving their firm's policies into
the audit support system. These systems were designed to guide auditors through each firm's
audit methodology and in so doing enforce compliance with the methodology. Consistent with the
prescriptive control embedded in these systems, these two firms are the only two who have
mandated use of the system for all clients.
In contrast to viewing the audit support system as an enforcer of the firm's audit methodology,
a partner at Firm A reported that their audit methodology provides important guidance to users
regarding how the system and the embedded decision aids should be used. This firm's audit
methodology contains the literal description of the judgments underlying the templates included
in the audit support system, such as sample size selection and how control testing should be
related to substantive testing. A further contrast was highlighted by a partner at Firm C who noted
that their firm's audit methodology is more prescriptive than that enforced by the system. He gave
the example that the firm's methodology states that “testing scope should be X% of tolerable error
… but there is no way of enforcing compliance … there is a disconnect between the level of
prescription in the methodology and what is actually done in practice”.

The examples above indicate that the five audit firms hold differing views on whether the
role of their audit support system is to enforce their firm's audit methodology. From Table 1 it
can be seen that these differences are correlated with other design differences. As summarized
in Table 1, Firms B and E, the two firms that view their audit support system as an enforcer of
their firm's audit methodology, are also the only two firms in which system use is mandatory
for all clients, the system provides a high level of automated decision support, automatically
tailors a client's engagement file and is highly integrated across the various audit phases. In
contrast, the systems used at the other three firms (A, C, and D), who do not view their system
as an enforcer of their firm's methodology, provide relatively low levels of decision support,
require auditors to manually tailor the file and are not automatically integrated across the audit
phases.
The differences in audit support system design highlighted in Table 1 and discussed
above are akin to the differences in audit firm structure (see for example, Bowrin, 1998;
Cushing and Loebbecke, 1986; Prawitt, 1995). A structured audit approach is “a systematic
approach to auditing characterized by a prescribed, logical sequence of procedures, de-
cisions, and documentation steps, and by a comprehensive and integrated set of audit
policies and tools designed to assist the auditor in conducting the audit” (Cushing and
Loebbecke, 1986:32). Audit support systems have become the “face” of a firm's audit
methodology and thus the firm's audit approach; as one manager said “auditors don't read
the methodology, they don't need to, the system informs them”. To the extent audit support
systems reflect a firm's audit structure, the differences in system design discussed above are contrary to the belief that audit firms have converged on the adoption of similar semi-structured audit approaches (Bowrin, 1998). Although all of the audit support systems
described in this study enable and (to some extent) structure the audit process, the audit
support systems used at two of the five firms have been clearly designed to structure and
control the audit process to a greater extent than the audit support systems deployed at the
other three firms.

2.2.5. System restrictiveness


To provide generalizable insights into the similarities and differences of the decision support
embedded within audit support systems, the five firms' audit support systems are classified using
the concept of system restrictiveness (Silver, 1988a,b, 1990). System restrictiveness was initially
defined in the literature as the “degree to which and the manner in which a decision support
system restricts its users' decision making processes to a particular subset of all possible
processes” (Silver, 1988a:912). The concept of system restrictiveness has subsequently been
broadened and applied to several types of information systems (DeSanctis and Poole, 1994; Lynch and Gomaa, 2003; Salisbury and Stollak, 1999; Wheeler and Valacich, 1996). Although
there are different schools of thought regarding system restrictiveness, one view is that it is a
structural feature of a system which influences how a system is used (DeSanctis and Poole, 1994).
Structural features are the “specific types of rules and resources, or capabilities, offered by the
system” that enable and/or constrain how individuals behave and interact with a technology
(DeSanctis and Poole, 1994:126).
The concepts of system restrictiveness and audit structure are combined to develop a definition of audit support system restrictiveness, and each firm's audit support system is classified as having either a “high” or “low” level of restrictiveness. This study defines audit
support system restrictiveness as the extent to which an audit support system constrains auditor
behaviour through prescribing, organizing and controlling the audit approach. A high level of

restrictiveness limits the degree to which an auditor is able to choose how the audit is performed
and the degree to which the auditor is free to make certain judgments, such as choosing which
audit tests should be performed. Based upon the design features documented in Table 1 and the
previous discussion of the key features of these firms' systems, the audit support systems used at
Firms B and E are classified as having a “high” level of restrictiveness and the audit support
systems used at the other three firms (A, C and D) are classified as having a “low” level of
restrictiveness.
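This classification can be restated as a simple rule over the design features summarized in Table 1. The sketch below is our own informal reading of that table; the feature names and the all-or-nothing rule are illustrative, not a formal instrument applied in the study.

```python
# Informal restatement of the Table 1 classification. Feature names are
# invented labels for the design features discussed in the text.

RESTRICTIVE_FEATURES = (
    "use_mandatory_for_all_clients",
    "automatic_tailoring_of_client_file",
    "high_automated_integration_across_phases",
    "enforces_compliance_with_methodology",
    "high_automated_decision_support",
)

def classify_restrictiveness(system):
    """Rate a system "high" when every restrictive design feature is
    present (as for Firms B and E in Table 1), otherwise "low"."""
    present = all(system.get(f, False) for f in RESTRICTIVE_FEATURES)
    return "high" if present else "low"

# Stylized profiles matching Table 1: B/E exhibit all features, A/C/D do not.
firm_b_profile = dict.fromkeys(RESTRICTIVE_FEATURES, True)
firm_d_profile = dict.fromkeys(RESTRICTIVE_FEATURES, False)
```

The point of the restatement is that restrictiveness is assessed here as a bundle of co-occurring design choices rather than any single feature, which is why the five systems separate cleanly into two groups.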

2.3. Decision support embedded within the audit support systems

Although decision aids are embedded within all firms' audit support systems, the type of decision support provided varies across the firms. These differences are correlated with the extent to which a system is classified as having a “high” or “low” level of system restrictiveness. Table 2 provides an overview of the decision support provided during the key audit phases (Panels A to D) for the audit support systems classified as having a “high” or “low” level of system restrictiveness.
Consistent with the “high” level of audit support system restrictiveness in the systems used at
Firms B and E, the decision support embedded within these systems is more structured and
prescriptive than the decision support embedded in the systems classified as having a “low” level
of system restrictiveness (Firms A, C and D). The type of decision support embedded within the “low” restrictive systems is predominantly checklists that do not provide recommendations. These
checklists are designed to structure a user's information search by prompting users to consider
certain items. For example, during the client acceptance and understanding the client phase, the
checklists used at Firm C structure the information to be obtained by prompting a user to consider
the client's integrity, the risk profile and the industry in which the client operates. In contrast, the
decision aids embedded in the “high” restrictive audit support systems (Firms B and E) use an
auditor's responses to questions in checklists as the basis to tailor the audit file and provide
recommendations.
From the classification scheme used in Table 2 (Panels A to D) it can be seen that for the
majority of decisions across the various audit phases, systems classified as having a “high” level of
system restrictiveness contain decision aids that provide recommendations, including identifying
the risks, recommending the audit strategy, identifying the control objectives, recommending
relevant control tests, assessing the effectiveness of the controls, and recommending substantive
audit tests. In contrast, the decision aids embedded within the “low” restrictive systems generally
do not provide recommendations. For example, during the control and substantive testing phase
(Table 2, Panel C), all five firms make extensive use of test banks that contain lists of
recommended audit procedures. The test banks in the “high” restrictive systems (B and E) are used
by the audit support system to recommend the appropriate audit steps based upon the auditor's
input. In contrast, manual test banks are used at the three firms classified as having “low”
restrictive systems (A, C and D). These test banks include a minimal number of mandatory audit
steps and several other steps from which auditors manually select and import the applicable tests
into a client's file.
Table 2
Comparison of decision support provided, by audit support system classification and audit phase (Firms B and E are classified in Table 1 as having a high level of system restrictiveness; Firms A, C and D a low level)

Panel A: Decision support provided for client acceptance/understanding the client
Types of decision aids:
- Firm B: risk identification questionnaire; confirmation of independence requirements; need for specialists to be involved
- Firm E: eligibility to act checklist; need for second partner review; materiality calculator
- Firm A: automated risk measure (Z score)
- Firm C: guide to structure the information to be attained
- Firm D: acceptance and retention checklist
Automated risk identification: B Yes; E Yes; A Yes; C No; D No
Automated recommendation of audit strategy: B Partial; E Yes; A No; C No; D No

Panel B: Decision support for understanding the client's control environment
Types of decision aids:
- Firm B: control assessment tool; assess need for IT specialist
- Firm E: control assessment tool based on COSO framework
- Firm A: internal control documentation and testing tool
- Firm C: questionnaires/checklists
- Firm D: identification of risk factors; need to involve IT specialist
Control objectives recommended by decision aids: B Yes; E Yes; A No; C No; D No
Control tests recommended by decision aids: B Yes; E Yes; A No; C No; D No

Panel C: Decision support for control and substantive testing
Types of decision aids:
- Firm B: no decision aids are used for the actual testing; however, the output of the decision aids used in previous phases informs the tests conducted
- Firm E: assessment tool produces program for control testing; sample size calculator; evaluation calculator to extrapolate results to the population
- Firm A: sample size selection; ratio calculation; list of audit steps to choose from
- Firm C: attribute testing support tool; guidance for how to test controls; analytical tools used to extract and analyze data
- Firm D: industry and process specific banks of expected controls and tests; sample size selection (Excel); audit program templates
Effectiveness of controls assessed by decision aid: B Yes for CIS (used by IT specialists); E Yes; A No; C No; D No
Automated recommendation of audit tests: B Yes; E Yes; A No; C No; D No

Panel D: Decision support provided for reviewing the audit file and forming an audit opinion
Types of decision aids:
- Firm B: electronic ‘file check’ highlights incomplete areas; identification of incomplete review notes
- Firm E: disclosure checklist; workpaper status identification
- Firm A: disclosure checklist; compliance with GAAP checklist
- Firm C: standardized reporting templates; completeness check tool
- Firm D: disclosure checklists
Embedded red flags to assist review: B Yes; E Yes; A Yes; C No; D Yes
Recommended audit opinion: No for all five firms

An exception to the pattern for firms whose audit support systems are classified as having “low” levels of system restrictiveness is the automated decision support in Firm A's system. This decision aid provides a standardized risk measure (Z score) that informs the auditor's decision to accept or reject the client and influences the audit approach by generating a list of relevant generic risks. However, this decision aid's output is not binding; clients identified as high-risk can be accepted and the audit approach tailored accordingly. The relevant risk factors are imported into the audit support system and auditors are required to outline the procedures that will be used to address the risks. The partner reported that while this aid is reliable and stable, it is limited in that the standard questions do not always cover all risks for a specific client. Therefore, users need to consider each client's specific circumstances and whether other risks not covered by the decision aid need to be considered. This limitation is also likely to apply to the risks and audit strategy recommended by the systems used at Firms B and D. A partner at Firm B stressed that the pervasive risks identified by the system from an auditor's responses to the standard questions need to be supplemented with auditor-designated client-specific risks.
The other significant difference from the pattern of decision support provided by systems classified as having “high” vs “low” levels of system restrictiveness is that none of the five firms has a decision aid which forms the audit opinion for a client; this is clearly an area requiring the application of an auditor's professional judgment. The firms do, however, use decision aids which assist auditors in reviewing the audit file for completeness, and tools for determining that the financial statements include the required disclosures. Red flags embedded within the system highlight key areas not completed and/or reviewed in the systems used at Firms A, B, D and E. Consistent with providing a high level of structure and prescription, the completeness
tools embedded within Firm B and D's audit support systems enable auditors to match the risk factors identified up front in the audit process with how they have been addressed throughout the audit.

Table 3
Perceived benefits and limitations of providing automated decision support

Benefits:
- Enhances audit quality through compliance with auditing standards and audit methodology
- Increases audit efficiency
- Consistent audit approach across clients
- Improves risk management
- Facilitates documentation
- Controls junior staff

Limitations:
- Auditors can over-rely on recommendations made by the system
- Mechanistic behavior (emphasis on ticking the box rather than exercising judgment)
- Significant amount of training required
- Stability of the technology
- Not cost efficient on very small jobs
- Perceived complexity of the system can result in auditors not adopting the technology, or working around it by using word documents

2.3.1. Perceived benefits and limitations of providing automated decision support


As a final question, the partners/managers were asked their opinions on the benefits and
limitations of providing decision support. Table 3 summarizes the key items discussed by the
partners/managers. Although no attempt was made to discuss the benefits and limitations reported in
prior studies with the partners/managers, the discussion below compares the responses in Table 3 to
prior studies to assess whether the partners'/managers' concerns in 2004/2005 are consistent with the
benefits and limitations previously reported.

2.3.2. Benefits
The partners identified that decision aids can enhance audit quality through promoting
compliance with accounting standards and the firm's methodology. Although no mention was
made of whether the use of decision aids increases the degree of structure of the audit process, as
suggested by Ashton and Willingham (1988), the classification of the audit support systems in
the previous section suggests that the type of decision support provided and how it is integrated
within a firm's audit support system increases the structure of the audit process. The benefit of
consistency across clients and facilitation of documentation is consistent with claims that
documentation, as one aspect of justifying decisions, may lead to increased consistency (Ashton
and Willingham, 1988). The partners indicated that embedding decision aids within audit
support systems that tailor the client's audit program improves audit efficiency, controls junior
staff and improves risk management. These benefits are consistent with prior claims that decision
aids can improve efficiency through decreasing decision time and structuring the information
search to gather facts specifically relevant to each audit engagement (Ashton and Willingham,
1988; Elliott and Kielich, 1985). However, the benefits of “controlling junior staff and
improving risk management” are not explicit in the prior literature.9 The benefits of using
decision aids to improve knowledge sharing and staff training mentioned in the prior literature
(Ashton and Willingham, 1988; Elliott and Kielich, 1985) were not mentioned by the partners/
managers.

9. In a subsequent interview with one of the participating managers, the manager raised the concern that staff shortages
are driving an increasing need for tools to be developed that will structure the audit process further to enable audit firms
to employ para-professionals to complete basic audit tasks.

2.3.3. Limitations
The risks associated with the use of decision aids identified by the partners include the
potential for auditors to over rely on system recommendations, mechanistic behavior, and the
time decision aids consume, both on the job (they are often not suitable for small audits) and in
training users. The dangers of mechanistic use and over-reliance have
been noted before in the literature (for example, Arnold and Sutton, 1998; Ashton and
Willingham, 1988; Rose, 2002).

Table 4
Prior research which has examined decision aids similar to those currently used in practice

Client acceptance and understanding the client
• Automated and manual checklists, including acceptance/retention, eligibility to act, independence, risk identification, and need for IT specialists (Bedard and Graham, 2002; Boatsman et al., 1997; Eining et al., 1997; Johnson and Kaplan, 1996; Lowe et al., 2002; Pincus, 1989)
• Materiality calculator (Jennings et al., 1993)
• System recommended audit strategy
• Guide to structure information to be attained

Understanding the control environment
• System recommended control objectives and/or testing
• IS control assessment tool
• Documentation tool
• Questionnaires and checklists (Bonner et al., 1996)
• Risk factor identification (Eining and Dorr, 1991; Hornik and Ruf, 1997; Mascha, 2001; Murphy and Yetmar, 1996; Odom and Dorr, 1995; Pei et al., 1994; Smedley and Sutton, 2004; Steinbart and Accola, 1994)
• Need for IT specialists

Control and substantive testing
• Audit program recommended by system
• Sample selection tools (manual and computerized) (Kachelmeier and Messier, 1990; Messier et al., 2001)
• Evaluation calculator (Butler, 1995)
• Ratio calculation
• Analytical procedure tools (Anderson et al., 1995, 2003; Kaplan et al., 2001; Mueller and Anderson, 2002; Swinney, 1999; Ye and Johnson, 1995)
• Attribute testing support
• Test banks: guidance for how to test controls; industry or generic processes
• Audit program templates

Review and forming an audit opinion
• Disclosure/compliance checklists (Lowe and Reckers, 2000; Murphy, 1990)
• Identification of completed/uncompleted workpapers
• Identification of the review status of workpapers
• Extract identified risk factors and how they have been addressed

Consistent with previously stated concerns regarding the
development and maintenance costs (for example, Elliott and Kielich, 1985), the partners raised
the question of the stability of the technology given changes in hardware, software, auditing
standards and methodology. They also implied that the perceived complexity of the system may
lead auditors to not embrace the technology, which can result in inappropriate use, including
auditors finding ways to work around the system. This concern is consistent with concerns about
users “circumventing the aid” (Ashton and Willingham, 1988), “working backwards” (Messier
and Hansen, 1987; Messier et al., 2001), or “working around” (Bedard et al., 2005a,b) the
system. Although the prior literature has raised concerns regarding the long-term use of decision
support, including the de-skilling of auditors' abilities (Arnold and Sutton, 1998), increased
competition from non-accountants and a decline in the demand for junior staff (Ashton and
Willingham, 1988; Elliott and Kielich, 1985), these concerns were not mentioned by the partners
and managers.
The discussion presented in this section has highlighted that the type of decision support
embedded within a firm's audit support system is an important design feature associated
with the extent to which a system is classified as having a “high” or “low” level of system
restrictiveness. In the next section, the implications of these findings for future research are
discussed.

3. Mapping extant research to current audit decision aids: opportunities for future research

To identify future research opportunities the extant research on audit decision aids is mapped
to the types of decision support used in practice. This mapping is summarized in Table 4, and a
detailed summary of the prior studies is provided in Appendix A. Because audit tasks and the
type of decision support provided vary across audit phases, the mapping is undertaken using
the four major phases of an audit: client acceptance and understanding the client, understanding
the control environment, control and substantive testing, and audit review and opinion
formulation. This mapping is followed by a discussion of studies that have investigated audit
support systems.

3.1. Client acceptance and understanding the client

The extant studies reported in Table 4 that have examined decision aid use in the client
acceptance and understanding the client phase have predominantly used manual checklists.
Although manual checklists are predominantly used in audit support systems classified as having
“low” levels of system restrictiveness, audit support systems classified as having “high” levels of
system restrictiveness typically incorporate automated checklists that provide a recommendation
based upon a user's input. Differences in reliance and decision performance have been found to be
associated with different types of decision aids (Eining et al., 1997). This raises the question of
whether users rely differently on manual compared to automated checklists depending upon the
restrictiveness of the audit support system in which they are embedded. Future research could also
investigate what factors influence an audit firm's decision to provide automated or manual
checklists. For example, how important is a firm's client portfolio in this decision? Audit support
systems cannot be designed to meet the needs of all audit engagements. Therefore, do the client
portfolios of firms that deploy highly restrictive systems differ from firms that deploy less
restrictive systems?
Overall, very few studies have investigated the types of decision aids used during the client
acceptance and understanding the client phase. The current study identified that three of the five

audit firms (A, B and E) use decision aids to inform the client acceptance decision and assess client
risk. However, the type of output provided by the decision aids differs across the firms. In one firm
the aid provides a standardized risk measure (Z score), whereas in the other two firms a decision to
reject/accept the client is provided. These differences in output suggest opportunities for future
research. For example, what are the implications of providing a Z score, a one-line recommendation
or a more extensive recommendation? The provision of explanations has been found to increase
user acceptance of decision aids (Ye and Johnson, 1995). Future research could investigate
whether the provision of extensive recommendations influences user acceptance, reliance and/or
decision making outcomes.
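The firm's actual standardized risk measure is not disclosed in the interviews; it is reported only as "a Z score". As a purely illustrative stand-in, a client-risk aid of this kind might compute Altman's (1968) well-known Z-score from financial statement ratios:

```python
# Illustrative only: Altman's (1968) Z-score for public manufacturers,
# used as a stand-in for the undisclosed risk measure one firm reported.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Return a standardized financial distress score (higher = safer)."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / total_assets)
```

A one-line accept/reject aid, by contrast, would apply a threshold to such a score internally and surface only the decision, which is precisely the output design difference the research questions above target.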
Audit support systems also differ in terms of whether the system identifies and recommends the
risks. Understanding the relevant risks for a client's engagement is an important first step within an
audit which significantly influences the audit approach. The differences in the design of the
systems used at the participating audit firms provide opportunities to test concerns that long-term
use of decision aids has a de-skilling effect on auditor expertise (Arnold and Sutton, 1998).
Although several studies have investigated short-term learning transfer from decision aids to users
(predominantly using student subjects and internal control assessment decision aids) (e.g., Eining
and Dorr, 1991; Odom and Dorr, 1995; Smedley and Sutton, 2004), no known published study has
investigated the long-term effects of decision aid use on auditor knowledge acquisition. Because
the audit support systems and decision aids reported in this study have been deployed for a
reasonable period of time, researchers are now in a position to begin investigating any long-term
effects of providing decision support. For example, a study could be designed to investigate
whether the design differences of audit support systems identified in this study are associated with
differences in auditors' knowledge.

3.2. Understanding the control environment

Although several studies have used decision aids to evaluate internal controls (e.g., Eining
and Dorr, 1991; Hornik and Ruf, 1997; Mascha, 2001; Murphy and Yetmar, 1996; Odom and
Dorr, 1995; Pei et al., 1994; Smedley and Sutton, 2004; Steinbart and Accola, 1994), the
focus has been on investigating knowledge transfer from expert systems to novice users. As
such, these studies provide limited insights into the use of decision aids during this audit
phase.
Although all of the five audit firms provide their auditors with decision support for assessing
a client's control environment, the decision support provided and the way in which it is
incorporated within the firms' audit support systems differ in terms of whether a client's file is
tailored by the user (i.e., manually) or by the system (i.e., automatically). These design
differences, which are associated with the extent of system restrictiveness embedded within a
firm's system (Table 1), provide opportunities for future research. For example, does manual vs
automatic tailoring affect auditor knowledge and their ability to develop an appropriate audit
strategy when the auditor does not have access to their firm's audit support system? If so, what
are the implications when auditors review workpapers? O'Leary (2003) found that staff and
manager level auditors make different environmental assessments which result in differences in
user inputs. Does this mean that reviewers who have been trained on a system that tailors a
client's file automatically are less able to identify incorrect inputs by subordinates and
inappropriate audit plan tailoring? Or do they over rely on the system's automatic tailoring?
Manual audit file tailoring also has risks which present opportunities for future research. For
example, are manually tailored audit programs less efficient than automatically tailored

programs? And are risks adequately covered or do auditors tend to adopt a ‘same as last year’
approach?

3.3. Control and substantive testing

Several studies were identified that used decision aids similar to those used in practice
during the control and substantive testing phase. Two studies (Kachelmeier and Messier, 1990;
Messier et al., 2001) that examined the use of sample selection tools found that auditors ‘work
backwards’ by adjusting inputs to obtain desired outputs. The current study identified
differences in the way decision aids assisted auditors determine an appropriate sample size. For
example, the tool used at Firm A is a manual set of tables from which auditors select the
appropriate sample size from suggested “ranges” depending on the extent of “comfort”
required. In contrast, the tools provided in two systems classified as having a “high” level of
system restrictiveness are computerized. For example, at Firm E, auditors input the population
size, key items, materiality level, control and inherent risk into the sample selection tool. The
tool uses these inputs to recommend the exact sample size. Auditors at this firm also have
access to a tool which assists in the extrapolation of the sample test results to the population.
Future research could investigate whether the type of output (“range” vs “exact sample size”)
impacts the propensity of auditors to “work backwards” or the ease of which they can “work
backwards”. A manual sample size calculation was used in Kachelmeier and Messier (1990)
and Messier et al. (2001). The open nature of this type of decision aid increases a user's ability
to understand how the aid works and thus how inputs can be altered to achieve the desired
output. In contrast, many of the sample size selection tools used in practice are electronic and
provide an exact recommended sample size. This raises the question as to whether this
prevents or hinders a user from “working backwards”, for example, because the calculation is
‘hidden’ from a user; or, does it take a user longer to obtain a sufficient understanding of the
relationship between the inputs into the aid and the aid's output to acquire the knowledge to
“work backwards”?
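As a concrete, deliberately simplified illustration of the contrast between Firm A's manual range tables and Firm E's exact computed recommendation, the sketch below implements a monetary-unit-sampling style calculation. The risk-to-factor mapping and factor values are hypothetical assumptions for illustration, not any participating firm's actual tool:

```python
import math

# Hypothetical reliability factors: higher combined (inherent x control)
# risk requires more assurance from substantive sampling.
CONFIDENCE_FACTORS = {
    "low": 1.2,
    "moderate": 2.3,
    "high": 3.0,
}

def recommended_sample_size(population_value, key_item_value,
                            tolerable_misstatement, combined_risk):
    """Recommend an exact sample size for the residual population
    (population book value minus individually tested key items)."""
    residual = population_value - key_item_value
    if residual <= 0:
        return 0  # key items cover the population; they are tested 100%
    factor = CONFIDENCE_FACTORS[combined_risk]
    # classic MUS form: n = residual value x factor / tolerable misstatement
    return math.ceil(residual * factor / tolerable_misstatement)
```

Because the computation sits behind an exact recommendation, a user must first infer, for example, that lowering the materiality input inflates the sample size before they can "work backwards", whereas a printed range table exposes that relationship directly.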
Decision tools used during analytical procedures have been extensively investigated in the
prior literature. The focus has been on understanding auditor reliance on these aids. Studies
have found that auditors over rely on erroneous or insufficient decision aid outputs (Anderson
et al., 2003; Swinney, 1999). One reason for this is that auditors perceive intelligent decision
aids to be ‘experts’ that incorporate audit firm knowledge and therefore they must be relied
upon (Swinney, 1999). However, other studies have found that under-reliance is also a concern.
Factors influencing under and over-reliance on decision aids have been examined extensively in
the decision aid literature. Although not all of these studies have used auditors or examined
auditor judgments, the overall findings are potentially relevant for audit decision aid research.
Reliance on a decision aid has been found to be influenced by incentives (Ashton, 1990),
feedback (Ashton, 1990), disclosure of the aid's predictive ability (Kaplan et al., 2001), the
decision aid's face validity (Ashton, 1990), the user's locus of control (Kaplan et al., 2001),
and the provision of explanations (Ye and Johnson, 1995). Reliance on decision aids is an
important issue for audit firms, because underutilization of decision aids increases perceptions
of auditor liability (Anderson et al., 1995).
Studies have also identified that the design of an aid is important. Mueller and Anderson
(2002) examined the way goal framing influences how auditors use a decision aid. They found
that when an “inclusion” goal frame is built within the instructions, auditors select a smaller set
of relevant explanations from a provided list than auditors instructed with an “exclusion” goal

frame. The current study has documented significant differences in the prescriptiveness of the
decision support embedded within the five firms' audit support systems. Future research could
investigate if such design differences and the extent of system restrictiveness embedded within
an audit support system are associated with differences in perceptions of auditor liability if
auditors underutilize these aids. The outcomes of such research have the potential to provide
important information to audit firms as they continually develop and refine their firm's audit
support systems.
Although this study identified that test banks are used extensively in practice, no previously
published study was identified that had examined their use. In practice, test banks are used
manually (where auditors select from lists of possible tests) or they are automated (where the test
bank recommends the relevant tests based upon an auditor's responses to standard questions). The
test banks also differ in relation to the way in which the information is organized, with some test
banks grouped according to industry and/or processes. There are potentially many research
opportunities related to test banks. For instance, does the design/grouping of tests, such as by
industry, process or class of transactions, influence audit efficiency and/or effectiveness? Are there
audit efficiency and/or effectiveness implications when using a manual test bank? And, compared
with an automated test bank, does a manual test bank increase the likelihood that the tests
will be the ‘same as last year'?
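An automated test bank of the kind described reduces, at its core, to a mapping from an auditor's answers to standard questions onto suggested tests. In this sketch, the process names, question keys, and test names are invented for illustration; real firms' test banks are proprietary:

```python
# Hypothetical automated test bank: all keys and test names are invented
# for illustration and are not drawn from any firm's system.
TEST_BANK = {
    ("revenue", "automated"): ["IT general controls review",
                               "interface reconciliation test"],
    ("revenue", "manual"): ["invoice approval walkthrough",
                            "credit-limit override review"],
}

def recommend_tests(process, answers):
    """Recommend relevant tests from the bank based on standard-question
    answers, as an automated test bank would. A manual bank instead shows
    the full list and leaves the selection to the auditor."""
    control_type = "automated" if answers.get("controls_automated") else "manual"
    return TEST_BANK.get((process, control_type), [])
```

The design question raised above maps onto the grouping key: organizing the bank by industry, process, or class of transactions changes which tests an auditor sees first, and hence potentially audit efficiency and effectiveness.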

3.4. Audit review and opinion formulation

Two prior studies which have examined decision aids used during the audit review and
opinion formulation phases were identified. Both of these studies examined the use of
compliance checklists. Lowe and Reckers (2000) found that the design of a compliance
checklist alters an auditor's thought processes to be more aligned with how the decision would
be evaluated ex post. The design has a framing effect, which can decrease the potential legal
liability implications in the case of audit failure (Lowe and Reckers, 2000). Murphy (1990)
examined whether the design of an expert system, developed by a Big 6 audit firm to assess
compliance with an accounting standard, impacted learning. The results indicated that learning
is highest when a semi-structured, non-automated aid is used compared with an expert system
(with or without explanations). No study was identified which has investigated the use of red
flags during the audit review process. However, our study found that these decision aids are
used extensively in practice. Future research could investigate the implications of using red
flags during the review process. For example, what are the consequences of closing a red flag
before it is actioned? Can reviewers identify inappropriate red flag/review note closure? Or do
they over rely on the automatic red flag check? Prior studies have found that the review
process is more complex when completed electronically (Bedard et al., 2005b; Rosman et al.,
2005) and reviewers identify fewer seeded errors in an electronic environment (Bible et al.,
2005). There are future research opportunities to investigate whether red flags or other
decision aids can be designed to decrease the complexity of completing an audit review
electronically.
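The red-flag completeness checks described in this section amount, at their simplest, to a scan of workpaper status metadata before sign-off. A minimal sketch follows; the field names and status values are assumptions for illustration, not any firm's actual schema:

```python
# Hypothetical red-flag completeness check over workpaper status metadata.
def open_red_flags(workpapers):
    """Return (workpaper id, reason) pairs that should be flagged before
    sign-off: not completed, or completed but not yet reviewed."""
    flags = []
    for wp in workpapers:
        if not wp.get("completed"):
            flags.append((wp["id"], "not completed"))
        elif not wp.get("reviewed"):
            flags.append((wp["id"], "awaiting review"))
    return flags
```

The research questions above concern what happens around such a check: for example, whether a reviewer who trusts the automated scan can still detect a flag that was closed without being actioned.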

3.5. Audit support systems

The highly integrated nature of audit support systems has led researchers to begin
investigating the use of the entire systems. Banker et al. (2005) examined the economic impact of
investing in audit support systems and found that firms benefit from efficiency gains driven by

increased automation of tasks undertaken by staff auditors. In an examination of the
effectiveness outcomes of using these systems, O'Donnell and Schultz's (2003) experimental
results indicate that the design of the audit support system affects auditor decision quality.
Differences in audit firm approaches have led to differences in how audit support systems are
designed across audit
firms. O'Donnell and Schultz (2003) found that auditors using a business process oriented audit
support system identified more seeded risk conditions than auditors using a transaction cycle
oriented audit support system.
The underlying assumption of the studies reported by Banker et al. (2005) and O'Donnell
and Schultz (2003) is that the audit support systems are (1) being used and (2) being used
appropriately. Bedard et al. (2003) found that training increases preparers' acceptance of an
electronic workpaper system but does not change reviewers' perceptions of the system.
Acceptance of a system is a pre-requisite for system use (Davis et al., 1989). When auditors
have accepted an audit support system and use it in practice, the crucial issue becomes whether
they are using it appropriately. Partners interviewed in this study indicated that one of the
limitations of decision aids is that individuals “work around” them. Bedard et al. (2005a)
provide empirical evidence that auditors also “work around” audit support systems. In
particular, they found auditors print workpapers rather than use them electronically. “Work
around” behavior could be driven by the increased complexity in some tasks in an electronic
workpaper system compared with a paper environment (Bedard et al., 2005b; Bible et al., 2005;
Rosman et al., 2005).
Research investigating audit support systems is in its infancy. There are audit efficiency
and effectiveness implications if auditors do not accept these systems and/or do not use these
systems in an appropriate manner. There are many opportunities to investigate issues related
to the acceptance and use of audit support systems in practice. For example, this study has
documented significant differences in the design of these systems. Future research could
examine whether design differences influence auditor acceptance and use.

4. Conclusion

The aim of this study was to support future research in audit support systems and audit
decision aids by documenting and comparing the types of systems and aids deployed in
five major international audit firms. Significant differences between these firms' audit
support systems were identified. The concept of system restrictiveness (Silver, 1988a,b,
1990) was combined with Cushing and Loebbecke's (1986) definition of audit structure to
develop a definition of audit support system restrictiveness. Five firms' audit support
systems were then classified as having either a “high” or “low” level of audit support
system restrictiveness. The audit support systems classified in the “high” category are
viewed by their audit firm as an enforcer of their firm's audit methodology. Use of these
systems is mandatory for all clients, client files are automatically tailored, the work
components across the audit phases are automatically integrated and high levels of automated
decision support are embedded within these systems. Although all firms view their system as
an enabler of the audit process, the systems classified as having “low” audit support system
restrictiveness are not viewed as an enforcer of their firm's audit methodology. Use of the
systems in the “low” restrictiveness category is voluntary for small engagements, auditors
manually tailor the client files, the work components predominantly require manual
integration and manual checklists are the main type of decision support embedded in these

systems. The differences in audit support systems and the types of decision support embedded
within these systems provide many opportunities for future research, many of which were
identified in this study by mapping the extant decision aid literature to the types of decision
support used in practice.
Several limitations of this study need to be taken into account. The use of semi-structured
interviews with the partners and managers of the five firms limited the amount of information
collected on the audit support systems and decision aids. Data were not collected from users.
Even so, we are confident that we gained access to the best experts in the firms since the
partners/managers were members of their audit firm's global or national audit technology
development group or responsible for deployment and staff training related to audit
technologies. The purpose was to gain an insight into the audit support systems and the
types of decision support embedded within them, not whether or how these systems are used. A
second limitation is that the study was limited to five firms. However, they are the biggest
international audit firms and were selected because they developed and deployed their own
audit support systems and decision aids. Finally, the mapping of prior studies to current audit
decision aids has its limitations because many of the studies used decision aids with task
content that varies from current aids.
Given the above caveats, this study provides crucial guidance for future research in audit
support systems and the types of decision support embedded in these systems. The
comparison of the audit support systems reported in this study clearly indicates significant
differences in the types of systems used in practice. This finding has important implications
for past and future research. Although several studies have investigated audit support systems
(e.g., Banker et al., 2005; Bedard et al., 2005a; O'Donnell and Schultz, 2003; Rosman et al.,
2005), with the exception of O'Donnell and Schultz (2003), these studies have investigated a
single audit support system. The fact that audit firms deploy different kinds of audit support
systems limits the generalizability of studies investigating a single support system. This is not
to imply that insights cannot be gained from studies investigating a single support system,
and depending upon the research question, controlling for system design could be vital.
However, as the body of literature investigating audit support systems grows, it is important
that links are made between the studies so that generalizable conclusions can be drawn. The
audit support systems used by five international audit firms in this study provide a common
basis from which prior and future studies investigating audit support systems can be
classified. This study documented that significant differences in the role of audit support
systems and the types of decision support embedded within these systems are correlated with
whether an audit support system is classified as having a “high” or “low” level of system
restrictiveness. This definition and the criteria developed in this study can be used to
classify the types of audit support systems investigated in prior and future studies as a basis
for generalizing research findings and for enhancing the transferability of a study's findings to
audit practice.

Acknowledgements

We gratefully acknowledge the generous support provided by the participating audit
firms and helpful comments received from Vicky Arnold, Carlos Ferran, Steve Sutton,
participants at the 2006 AAA Annual Meeting, Washington, participants at the Seventh
International Research Symposium on Accounting Information Systems, Milwaukee, and the
anonymous reviewers.
Appendix A

Summary of auditing related decision aid studies (in alphabetical order)

C. Dowling, S. Leech / International Journal of Accounting Information Systems 8 (2007) 92–116


Study Research Decision aid type Participants Task Dependent Independent variable(s) Findings
design variable(s) ⁎ = significant

Anderson Experiment Analytical diagnostic tool 45 judges Assess auditor Perceived liability Full utilization of aid vs no Under utilization of
et al. (1995) (between that recommends liability of auditor decision aid decision aid increases
subject) procedures to investigate Financial statements Full utilization vs under perceptions of auditor
unusual ratio overstated by 9% utilization of decision aid⁎ liability.
due to an inventory error
Anderson Experiment Participants 51 Big 5 Provided with explanations Sufficiency Source of explanation Insufficient explanations
et al. (2003) (between provided with auditors for unusual fluctuations from of provided (aid or client)⁎ accepted from decision
subject) decision aid output either the firm's decision aid explanations Prior ratio analysis aid more than client,
or a client. Provided with experience⁎ suggesting auditors rely
information that all upon an aid's output.
explanations were
insufficient.
Ashton (1990) Experiment Output of aid 182 KPMG Bond rating predictions Classification Financial incentives External factors influence use of
(between provided. Users auditors for 16 firms accuracy [tournament] (FI) aid. Average performance
subject) informed aid was Performance feedback (PF) decreased in the presence of an
accurate in 50% Justification apparently valid aid when
of cases requirement (JR) tournament financial incentives,
Presence of decision feedback & justification were
aid ( (DA) ⁎ required.
FI × DA⁎
PF × DA⁎
JR × DA⁎ Lower face validity
FI × PF × DA decreases reliance.
Face validity⁎
Bedard and Experiment Client risk 46 auditors Identify risk factors, Number of Orientation of decision aid A higher number of risk
Graham (matched pair) assessment tool assess risk levels, negative risk (negative [emphasis on factors are identified
(2002) and plan audit tests factors client risk factors] or when a negatively
identified positive [no emphasis]) orientated decision aid is
Substantive testing Risk level of client used for high-risk clients.
performed. (high or low)
Orientation × risk level⁎
Prior engagement
experience with the client⁎
Boatsman Experiment Fraud 118 senior Audit planning Compare initial Decision consequences Higher non-reliance or shifting
et al. (1997) (within and assessment auditors judgment based on assessment (no aid) (severity of monetary away from aid's recommendation
between tool assessed level of with final decision payoffs/penalties)⁎ in the presence of decision
subject) management fraud (made after provided consequences.
with aid's
recommendation)

109
(continued on next page )
110
Appendix ) (continued )
(continued A

Study Research Decision aid type Participants Task Dependent Independent variable(s) Findings
design variable(s) ⁎ = significant

C. Dowling, S. Leech / International Journal of Accounting Information Systems 8 (2007) 92–116


Bonner et al. Experiment Checklist 105 auditors Abstract audit Conditional Presence/absence Use of both types of checklists
(1996) (between planning task-estimation probability of checklist⁎ improved auditors' judgments.
and within of error frequencies judgment of Design of checklist Use of the mechanical
subject) error frequency (list⁎ vs mechanical aggregation (list) aid
aggregation) improved judgment
performance greatly (slightly).
Butler (1995) Experiment Decision rule 18 auditors Assessment of Accuracy of risk Number of errors Use of decision aid
(within sampling risk assessment Sample size improved accuracy of risk
subject) Tolerable error assessment.
Aid vs No aid⁎
Eining and Dorr (1991) · Experiment (between subject) · Expert system (with/without explanations); checklist/questionnaire · 191 undergraduate accounting students
Task: payroll internal control evaluation.
Dependent: accuracy; efficiency.
Independent: decision aid group (no aid, questionnaire, expert system with no explanations⁎, expert system with explanations⁎); feedback (outcome vs outcome and task properties).
Findings: expert system users were more efficient and accurate in an unaided post-test, and developed higher levels of experiential knowledge. Experiment conducted over 5 weeks.
Eining et al. (1997) · Experiment (between subject for decision aids; within subject for subsequent audit plan decision) · Three types of decision aids: checklist, logit model and expert system (all delivered via the same computer interface) · 93 auditors (an additional 24 subjects were in an extended expert system group; results consistent with those reported)
Task: assess risk of management fraud.
Dependent: reliance.
Independent: type of aid (checklist vs unaided; logit⁎ vs checklist and unaided; expert system⁎ vs logit; expert system⁎ vs all other groups).
Findings: expert system users have the highest levels of discrimination between fraud levels, consistency, agreement with the aid, and correspondence between risk assessment and audit plan. Type of aid influences reliance and performance.
Hornik and Ruf (1997) · Experiment (within subject) · Expert system · 63 undergraduate accounting students
Task: internal control evaluation for the payroll cycle.
Dependent: knowledge transfer; overconfidence.
Independent: no explanations; explanations (if–then rule based); explanations (reflection/contrast)⁎.
Findings: explanation type influences performance. The reflection/contrast explanation group had the highest accuracy and mean absolute accuracy. Level of internal control was identified as an important factor.
Jennings et al. (1993) · Experiment (between subject) · Calculation of materiality level (based on a prescriptive calculation used by KPMG) · 82 US judges
Task: assess auditor liability (inventory overstated).
Dependent: perceived level of auditor culpability and legal liability.
Independent: presence of a decision aid; large vs small levels of non-compliance with the aid's materiality level; size of error × presence of aid; precase jurists' attitudes⁎; precase jurists' attitudes × presence of aid⁎.
Findings: the existence of a decision aid mitigates the judges' prior views. The presence of an aid provides a different evaluation point for the judges.
Johnson and Kaplan (1996) · Experiment (between subject) · Checklist · 77 audit seniors
Task: analytical procedures (audit planning stage).
Dependent: error explanation likelihood assessment.
Independent: length of list⁎; response format (individual or aggregate assessment of errors); de-biasing approach (none, exemplar or retrieval).
Findings: design of checklist is important. Explanations were assessed as more likely when presented in the short vs the longer list. Auditors assessing risks individually were more likely to overweight the error explanation than auditors making an aggregate assessment.
Kachelmeier and Messier (1990) · Experiment (between subject) · AICPA 1983 audit sampling guide (non-statistical aid) · 152 audit seniors
Task: sample size selection.
Dependent: sample size; dispersion of auditor decisions.
Independent: decision aid⁎; input information only (no computing output of inputs into aid)⁎; level of internal control (weak and strong)⁎.
Findings: evidence of users working 'backwards': input parameters were altered to obtain a desired sample size. Evidence is consistent with inappropriate decision aid use. Auditors from a firm with an unstructured audit methodology were more likely to work backward.
Kaplan et al. (2001) · Experiment (between subject) · Statistical decision aid · 91 senior auditors; 61 MBA students
Task: bond rating task.
Dependent: reliance.
Independent: Experiment 1: locus of control⁎; non-disclosure of aid's predictive ability⁎. Experiment 2: choice of user inputs (involvement)⁎; locus of control; locus of control × involvement⁎.
Findings: reliance is higher when the aid's predictive ability is not disclosed; users overestimate the aid's accuracy. Individuals with an external locus of control are more likely to rely on an aid, except when they have a choice of information inputs.
Lowe and Reckers (2000) · Experiment · Checklist (questions to be considered and answers justified) · 131 audit seniors from a Big 5 accounting firm
Task: assess the likelihood that an adjustment for inventory obsolescence would be necessary to issue an unqualified audit opinion.
Dependent: inventory obsolescence assessment.
Independent: foresight vs hindsight⁎; foresight vs hindsight & 1 outcome⁎; hindsight vs foresight & 1 outcome; foresight vs foresight & 1 outcome⁎; foresight vs hindsight & multiple outcomes⁎; hindsight vs hindsight & multiple outcomes.
Findings: the design of a decision aid influences users' judgments. A decision aid which directed users to consider the most damaging legal scenario was effective in mitigating hindsight effects.
Lowe et al. (2002) · Experiment (between subject) · Description of a fraud assessment decision aid and of auditor use of the decision aid · 149 jurors
Task: hypothetical audit lawsuit.
Dependent: jurors' assessment of audit firm liability.
Independent: reliability of decision aid⁎; auditor use of decision aid; use × reliability⁎.
Findings: the provision of a decision aid with a high level of perceived reliability increases perceptions of auditor liability when an auditor overrides the decision aid's recommendation, and decreases perceptions of liability when an auditor relies upon such an aid, even if the aid is found to be incorrect.
Mascha (2001) · Experiment (between subject) · Expert system · 124 undergraduate accounting students
Task: payroll internal control evaluation.
Dependent: procedural knowledge acquired.
Independent: presence/absence of expert system⁎; task complexity (high vs low)⁎; type of feedback (rule based vs detailed text with example)⁎; feedback × task complexity⁎.
Findings: users of the expert system acquired more procedural knowledge than non-users when task complexity was high.

Messier et al. (2001) · Experiment (between subject) · AICPA 1999 audit sampling guide (non-statistical aid) · 44 staff, 82 senior and 20 manager level auditors (from two Big 6 audit firms)
Task: sample size for inventory substantive testing.
Dependent: sample size; variance.
Independent: supplemental worksheet for decision aid; population size; input parameter only specification⁎.
Findings: evidence of working backwards: auditors adjust inputs to obtain the desired decision aid output. Slight evidence that the worksheet increased sample sizes, but it does not eliminate 'working backward'. Use of the worksheet reduces variance. The study confirms earlier evidence of inappropriate use of decision aids.
Mueller and Anderson (2002) · Experiment (between subject) · Screen shot of analytical review decision aid · 65 auditors
Task: selection of explanations.
Dependent: reduced set size of explanations; likelihood criterion.
Independent: goal framing (exclusion vs inclusion)⁎; environment (high vs low risk)⁎; goal frame × environment risk.
Findings: the number of explanations is influenced by goal frame and risk environment. More error explanations were provided in the high-risk environment.
Murphy (1990) · Experiment (pretest–posttest, between subject) · An audit firm's expert system with/without explanation facility · 65 students
Task: evaluate a client's compliance with SFAS 91.
Dependent: number of correct treatment classifications; knowledge of SFAS 91.
Independent: use of decision aid in developing expertise (semi-structured non-automated aid [designed to direct self-learning]⁎, expert system with no explanations, expert system with explanations at end of consultation).
Findings: subjects using the semi-structured non-automated decision aid demonstrated a higher level of learning.
Murphy and Yetmar (1996) · Experiment (within subject; between subject) · Internal control evaluation aid developed by an audit firm, used to design the case (participants did not have physical access to the decision aid) · 74 auditors
Task: review of staff auditors' internal control evaluations.
Dependent: credibility; likelihood; frequency of agreement; perceived validity of subordinate's conclusion.
Independent: decision aid⁎; internal control test; confidence level⁎; staff reliability⁎; subordinate's conclusion.
Findings: reviewers were more likely to rely on a subordinate's recommendation when the subordinate had access to a decision aid. Higher levels of reliance are inversely related to experience. No information was provided as to whether the subordinate had relied upon the aid, suggesting that reviewers equate provision of an aid with reliance upon it and assume that the user used the aid appropriately. A firm effect was found: higher levels of reliance occurred when reviewers were from unstructured audit firms.
O'Leary (2003) · Experiment (via survey instrument of different scenarios) · Audit expert system · 78 auditors at a Big 5 audit firm
Task: N/A. Dependent: N/A. Independent: N/A.
Findings: exploratory study. Found that assessment of the environment to determine appropriate decision aid inputs varied between partner/manager and lower level staff. Staff level auditors also had a higher level of variance in identified inputs. The results suggest that decision aid outputs are likely to differ across user levels because of differences in input assessments.
Odom and Dorr (1995) · Experiment (between subject) · Expert system · 126 undergraduate students
Task: payroll internal control evaluation.
Dependent: accuracy (declarative knowledge); accuracy (procedural knowledge).
Independent: specificity of explanations (general vs specific to problem); placement of explanations (continuous vs end placement); specificity of explanations × placement⁎.
Findings: the type and placement of explanations significantly affected declarative knowledge development but did not impact procedural knowledge development.
Pei et al. (1994) · Experiment (between subject) · Expert system · 107 undergraduate accounting students
Task: internal control evaluation.
Dependent: declarative knowledge transfer (recognition & recall); procedural knowledge transfer (context free & context specific).
Independent: presence or absence of prompts to make the user think about judgment strategy⁎; judgment strategy⁎; prompts × judgment strategy⁎.
Findings: the dependent variable measures were obtained immediately following the experiment and one week after; the effect of the independent variables differed across periods. Overall, prompting affects the transfer of declarative knowledge (recall) and judgment strategy impacted procedural knowledge (context specific).
Pincus (1989) · Experiment (between subject) · Checklist · 137 auditors
Task: evaluation of fraud risk.
Dependent: evaluation of fraud risk indicator.
Independent: fraud vs no fraud case; use vs non-use of a red flags questionnaire⁎.
Findings: use of the questionnaire increased comprehensiveness and uniformity of data.
Smedley and Sutton (2004) · Experiment · Expert system · 294 accounting students
Task: internal control evaluation.
Dependent: declarative knowledge acquisition; procedural knowledge performance acquisition.
Independent: declarative knowledge explanations⁎; procedural knowledge explanations.
Findings: use of declarative knowledge explanations improved declarative and procedural knowledge acquisition.
Steinbart and Accola (1994) · Experiment (between subject) · Expert system · 78 undergraduate accounting students
Task: internal control evaluation.
Dependent: learning; satisfaction.
Independent: explanation type (rule trace or justification); involvement (evaluation of system's recommendation)⁎.
Findings: explanation type did not affect learning or user satisfaction. Involvement did not affect learning but it increased satisfaction.
Swinney (1999) · Experiment (between subject) · Expert system developed by a Big 6 accounting firm · 29 auditors (from 3 firms)
Task: loan loss evaluation.
Dependent: judgment of loan loss.
Independent: erroneous expert system judgment (positive or negative)⁎; control group (no aid).
Findings: users inappropriately rely upon an aid's erroneous output when a negative judgment is provided by the decision aid.
Ye and Johnson (1995) · Experiment (within subject) · Simulated aid (output of a simulated expert system for an analytical review process) · 20 auditors
Task: evaluation of an expert system.
Dependent: change in users' perception of the reasonableness of the system's conclusions (pre-test vs post-test measure).
Independent: provision of explanation facility⁎; three kinds of explanations: trace, justification⁎, strategy.
Findings: the provision of explanations improved users' perceptions of the decision aid.


References

Abdolmohammadi MJ. Decision support and expert systems in auditing: a review and research directions. Account Bus
Res 1987:173–85 [Spring].
Abdolmohammadi MJ. A comprehensive taxonomy of audit task structure, professional rank and decision aids for
behavioral research. Behav Res Account 1999;11:51–92.
Abdolmohammadi MJ, Usoff C. A longitudinal study of applicable decision aids for detailed tasks in a financial audit. Int J
Intell Syst Account Financ Manag 2001;10:139–54.
Anderson JC, Jennings M, Kaplan SE, Reckers PM. The effect of using diagnostic decision aids for analytical procedures
on judges' liability judgments. J Account Public Policy 1995;14:33–62.
Anderson JC, Moreno KK, Mueller J. The effect of client vs decision aid as a source of explanations upon auditors'
sufficiency judgments: a research note. Behav Res Account 2003;15:1–11.
Arnold V, Sutton SG. The theory of technology dominance: understanding the impact of intelligent decision aids on decision makers'
judgments. Adv Account Behav Res 1998;1:175–94.
Ashton RH. Pressure and performance in accounting decision settings: paradoxical effects of incentives, feedback and
justification. J Account Res 1990;28:148–80.
Ashton RH, Willingham JJ. Using and evaluating audit decision aids. Auditing Symposium IX: Proc of the 1988 Touche
Ross/University of Kansas Symposium on Auditing Problems. Kansas: School of Business, University of Kansas; 1988.
Banker RD, Chang H, Kao Y. Impact of information technology on public accounting firm productivity. J Inf Syst
2005;16:209–22.
Bedard JC, Graham LE. The effects of decision aid orientation on risk factor identification and audit test planning. Audit J
Pract Theory 2002;21:40–56.
Bedard JC, Jackson C, Ettredge ML, Johnstone KM. The effect of training on auditors' acceptance of an electronic work
system. Int J Account Inf Syst 2003;4:227–50.
Bedard JC, Ettredge ML, Jackson C, Johnstone KM. Electronic media in the audit work process: perceptions, intentions
and use. Working paper. Boston: Northeastern University; 2005a.
Bedard JC, Ettredge ML, Johnstone KM. Adopting electronic workpaper systems: task analysis, transition and learning
issues, and auditor resistance. Working Paper. Boston: Northeastern University; 2005b.
Bell TB, Bedard JC, Johnstone KM, Smith EF. KRiskSM: a computerized decision aid for client acceptance and continuance
risk assessments. Audit J Pract Theory 2002;21:97–113.
Bible L, Graham L, Rosman A. The effect of electronic audit environments on performance. J Account Audit Financ
2005;20:27–42.
Boatsman JR, Moeckel C, Pei BKW. The effects of decision consequences on auditors' reliance on decision aids in audit
planning. Organ Behav Hum Decis Process 1997;21:211–47.
Bonner SE, Libby R, Nelson MW. Using decision aids to improve auditors' conditional probability judgments. Account
Rev 1996;71:221–40.
Bowrin AR. Review and synthesis of audit structure literature. J Account Lit 1998;17:40–71.
Brown CE, Murphy DS. The use of auditing expert systems in public accounting. J Inf Syst 1990:63–72 [Fall].
Brown CE, Phillips ME. Expert systems for internal auditing. Intern Aud 1991:23–8 [August].
Butler SA. Application of a decision aid in the judgmental evaluation of substantive test of details samples. J Account Res
1995;23:513–26.
Connell NAD. Expert systems in accountancy: a review of some recent applications. Account Bus Res 1987;17:221–33.
Cushing BE, Loebbecke JK. Comparison of audit methodologies of large accounting firms. USA: American Accounting
Association; 1986.
Davis FD, Bagozzi RP, Warshaw PR. User acceptance of information technology: a comparison of two theoretical models.
Manage Sci 1989;35:982–1003.
DeSanctis G, Poole MS. Capturing the complexity in advanced technology use: adaptive structuration theory. Organ Sci
1994;5:121–47.
Eining MM, Dorr PB. The impact of expert system usage on experiential learning in an auditing setting. J Inf Syst 1991:1–16.
Eining MM, Jones DR, Loebbecke JK. Reliance on decision aids: an examination of auditors' assessment of management
fraud. Audit J Pract Theory 1997;16:1–19.
Elliott RK, Kielich JA. Expert systems for accountants. J Account 1985:126–34 [September].
Graham LE, Damens J, Van Ness G. Developing Risk AdvisorSM: an expert system for risk identification. Audit J Pract
Theory 1991;10:69–96.
Hornik S, Ruf BM. Expert systems usage and knowledge acquisition: an empirical assessment of analogical reasoning in
the evaluation of internal controls. J Inf Syst 1997;11:57–74.

Jennings M, Kneer DC, Reckers PM. The significance of audit decision aids and precase jurists' attitudes on perceptions of
audit firm culpability and liability. Contemp Account Res 1993;9:489–507.
Johnson VE, Kaplan SE. Auditors' decision-aided probability assessments: an analysis of the effects of list length and
response format. J Inf Syst 1996;10:87–101.
Kachelmeier SJ, Messier WF. An investigation of the influence of a nonstatistical decision aid on auditor sample size
decisions. Account Rev 1990;65:209–26.
Kaplan SE, Reneau JH, Whitecotton S. The effects of predictive ability information, locus of control and decision maker
involvement on decision aid reliance. J Behav Decis Mak 2001;14:35–50.
Lowe DJ, Reckers PM. The use of foresight decision aids in auditors' judgments. Behav Res Account 2000;12:97–118.
Lowe DJ, Reckers PM, Whitecotton S. The effects of decision-aid use and reliability on jurors' evaluations of auditor
liability. Account Rev 2002;77:185–202.
Lynch A, Gomaa M. Understanding the potential impact of information technology on the susceptibility of organizations
to fraudulent employee behaviour. Int J Account Inf Syst 2003;4:295–308.
Mascha MF. The effect of task complexity and expert system type on acquisition of procedural knowledge — some new
evidence. Int J Account Inf Syst 2001;2:103–24.
Messier WF, Hansen JV. Expert systems in auditing: the state of the art. Audit J Pract Theory 1987;7:94–105.
Messier WF, Kachelmeier SJ, Jensen KL. An experimental assessment of recent professional developments in
nonstatistical audit sampling guidance. Audit J Pract Theory 2001;20:81–96.
Mueller JM, Anderson JC. Decision aids for generating analytical review alternatives: the impact of goal framing and
audit-risk level. Behav Res Account 2002;14:157–77.
Murphy D, Brown CE. The use of advanced information technology in audit planning. Int J Intell Syst Account Financ
Manag 1992;1:187–93.
Murphy DS. Expert system use and the development of expertise in auditing: a preliminary investigation. J Inf Syst
1990:19–35 [Fall].
Murphy DS, Yetmar SA. Auditor evidence evaluation: expert systems as credible sources. Behav Inf Technol
1996;15:14–23.
Odom MD, Dorr PB. The impact of elaboration-based expert system interfaces on de-skilling: an epistemological issue.
J Inf Syst 1995;9:1–17.
O'Donnell E, Schultz J. The influence of business-process-focused audit support software on analytical procedures
judgment. Audit J Pract Theory 2003;22:265–79.
O'Leary DE. Auditor environmental assessments. Int J Account Inf Syst 2003;4:275–94.
Pei BKW, Steinbart PJ, Reneau JH. The effects of judgment strategy and prompting on using rule-based expert systems for
knowledge transfer. J Inf Syst 1994;8:21–42.
Pincus KV. The efficacy of a red flags questionnaire for assessing the possibility of fraud. Account Organ Soc
1989;14:153–63.
Prawitt DF. Staffing assignments for judgment-oriented audit tasks: the effects of structured audit technology and
environment. Account Rev 1995;70:443–65.
Rose JM. Behavioral decision aid research: decision aid use and effects. In: Arnold V, Sutton SG, editors. Researching
accounting as an information systems discipline. Florida, USA: AAA; 2002.
Rosman A, Bible L, Graham LE, Biggs S. Investigating auditor adaptation to changing complexity in task environments: the
case of electronic workpapers. Paper presented at American Accounting Association annual conference, August 7–10,
San Francisco; 2005.
Salisbury D, Stollak MJ. Process restricted AST: an assessment of group support systems appropriation and meeting
outcomes using participant perceptions. Proc of the Twentieth International Conference on Information Systems,
Charlotte, North Carolina; 1999.
Shpilberg D, Graham LE. Developing ExperTAXSM: an expert system for corporate tax accrual and planning. Audit J Pract
Theory 1986;6:75–94.
Silver MS. Descriptive analysis for computer-based decision support. Oper Res 1988a;36:904–16.
Silver MS. User perceptions of decision support system restrictiveness: an experiment. J Manage Inf Syst
1988b;5:51–65.
Silver MS. Decision support systems: directed and undirected change. Inf Syst Res 1990;1:47–70.
Smedley GA, Sutton SG. Explanation provision in knowledge-based systems: a theory-driven approach for knowledge
transfer design. J Emerg Technol Account 2004;1:41–61.
Steinbart PJ, Accola WL. The effects of explanation type and user involvement on learning from and satisfaction with expert
systems. J Inf Syst 1994;8:1–17.

Swinney L. Consideration of the social context of auditors' reliance on expert system output during evaluation of loan loss
reserves. Int J Intell Syst Account Financ Manag 1999;8:199–213.
Wheeler BC, Valacich JS. Facilitation, GSS, and training as sources of process restrictiveness and guidance for structured
group decision making: an empirical assessment. Inf Syst Res 1996;7:429–50.
Ye LR, Johnson PE. The impact of explanation facilities on user acceptance of expert system advice. MIS Quart
1995:157–72 [June].