
Khondkar E. Karim, Assistant Professor of Accountancy, School of Professional Accountancy, Long Island University, Brookville, New York, USA
Philip H. Siegel, Professor of Accountancy, School of Professional Accountancy, Long Island University, Brookville, New York, USA

The purpose of this paper is to apply signal detection theory (SDT) to the problem of detecting management fraud. The use of SDT methodology significantly strengthens understanding of the relationships among audit technology, base rates of management fraud, costs of Type I and Type II errors, extensions of audit procedures, and risk assessments prior to and during the audit. The analysis suggests that the auditor must accept disproportionate false alarm rates in order to maintain audit effectiveness in the presence of management fraud. This condition becomes even stronger as the costs of Type II errors increase relative to the costs of Type I errors. The study also provides policy implications for audit practice and standard-setters.

I. Introduction

The responsibility of the independent auditor to detect management fraud remains a controversial issue. The American Institute of Certified Public Accountants (AICPA) recently issued a new Statement on Auditing Standards (SAS) No. 82, entitled Consideration of Fraud in a Financial Statement Audit (AICPA, 1997), which supersedes SAS No. 53 (AICPA, 1988). The new standard continues to require the auditor to plan and perform the audit to provide reasonable assurance that the financial statements are free of management fraud[1]. SAS No. 110, issued by the Auditing Standards Board of the Institute of Chartered Accountants in England and Wales, provides similar standards on fraud detection.

Costello (1991) analysed court cases concerning fraud and found that neither the generally accepted auditing standards (GAAS) nor SAS No. 53 constitute the controlling measures of an auditor's liability. He suggests that auditors design their audits to detect all types of fraud. Albrecht and Willingham (1992) reported that the public, the SEC, and the courts expect auditors to detect all material financial statement fraud; the auditor faces an alarmingly high level of audit risk when fraud is present. Management fraud is also associated with an explosion of litigation against the auditor (Palmrose, 1991).

SAS No. 82 was issued to address some of the above issues. SAS No. 82 does not change the responsibility of the auditor that previously existed for discovering fraud. The standard describes fraud and its characteristics, requires that auditors specifically assess the risk of material misstatement due to fraud, and provides categories of fraud risk factors that the auditor should consider. Additionally, the standard provides guidance for the auditor on how to respond to the results of risk assessment and how to relate audit test results to the risk of fraud. The provisions of SAS No. 82 require a significant increase in audit resources being devoted to fraud detection in order to avoid litigation on charges of negligence. The AICPA acknowledges that this may result in increased audit cost but argues that public interest benefits will outweigh the additional cost.

Managerial Auditing Journal 13/6 [1998] 367-375, MCB University Press [ISSN 0268-6902]

Auditing can detect management fraud, but audit procedures are not designed to guarantee that the financial statements are free from management fraud. Arens and Loebbecke (1997) contend that management fraud is inherently difficult to uncover because management is in a position to override internal controls and can actively conceal the misstatements. Additionally, the problem of detecting management fraud is compounded by the fact that an auditor rarely encounters fraud. Bell et al. (1993) surveyed partners from a Big Six firm. They reported that 80 per cent of the respondents, who had on average 17 years of audit experience, had encountered two or fewer incidents of management fraud, while 40 per cent of the respondents had never experienced an engagement involving fraud. Based on these data, Hansen et al. (1996) concluded that auditors probably have not adequately developed cognitive models for fraud risk assessment.

The purpose of this paper is to apply signal detection theory (SDT) to the external auditor's problem of detecting management fraud. SDT enables the auditor to examine the relationships among audit technology[2], base rates of management fraud, costs of Type I and Type II errors[3], the auditor's experience with management fraud, the extension of audit procedures, and risk assessments prior to and during the audit. This analysis is expected to provide insights into the problems faced by auditors due to increasing societal expectations to detect management fraud and the limitations of audit technology. The primary results of the analysis indicate that, given an increase in the cost of Type II errors, audit effectiveness is maintained only if the increase in the power of audit technology is matched by corresponding increases in false alarm rates. The auditor is forced to accept higher rates of false alarms, and consequent Type I errors, as the cost of Type II errors increases, which adds to the cost of the audit.

The paper also investigates the effect of the extension of audit procedures and of risk assessments prior to and during the audit on the detection of management fraud. The remainder of the paper proceeds as follows. Part II provides a brief explanation of


Khondkar E. Karim and Philip H. Siegel, A signal detection theory approach to analyzing the efficiency and effectiveness of auditing to detect management fraud, Managerial Auditing Journal 13/6 [1998] 367-375

the SDT. Part III presents the formulation and analysis of the problem of detecting management fraud using the SDT framework. Part IV extends the analysis to capture the sequential nature of auditing, including the extension of audit procedures and risk assessments. Part V discusses the implications of the results of this study in the context of empirical research done in the area. The paper concludes with a discussion of the limitations of the study.

II. Signal detection theory

SDT[4] is a model of how humans detect signals in a background of interference or noise. SDT assumes that the human observer behaves like a rational economic decision maker and attempts to balance costs and benefits to arrive at an optimum performance. The human observer considers a particular event and decides whether it is a signal or noise. SDT assumes that there is an overlap between the distributions of signal and noise, so that a particular observation may have come from either distribution. The observer then has to decide whether to accept or reject the event as a signal. There are four possibilities in the decision matrix of the observer:
1 the observer notices a noise when it is a signal (a miss);
2 the observer notices a signal when it is a signal (a hit);
3 the observer notices a noise when it is a noise (a correct identification); and
4 the observer notices a signal when it is a noise (a false alarm).
The conditional probabilities for these four possibilities are given as Pr(S|S), Pr(S|N), Pr(N|S), and Pr(N|N). This decision matrix is shown in Figure 1.

Figure 1 The observer's decision matrix

                 Observer: Noise                  Observer: Signal
Event: Signal    Pr(N|S) miss rate                Pr(S|S) hit rate
Event: Noise     Pr(N|N) correct identification   Pr(S|N) false alarm rate

The observer must make a decision concerning the event and classify it either as a signal or as noise. The decision is made by the observer in two steps: calculation of the likelihood ratio, and comparison of the likelihood ratio with the criterion value. The likelihood ratio is given by the following formula (Green and Swets, 1974),

Likelihood ratio = Hit rate / False alarm rate

It should be noted that the likelihood ratio is a number and not a probability. The observer compares the value of the likelihood ratio to a criterion value. This criterion value incorporates the prior probability of the event and the values attached by the observer to the consequences of the decisions. The formula is (Green and Swets, 1974, p. 23),

Criterion value = [(CNN + CSN) × Pr(N)] / [(CSS + CNS) × Pr(S)]

where CNN is the cost of deciding noise when the event is noise, CSN is the cost of deciding signal when the event is noise, CSS is the cost of deciding signal when the event is signal, and CNS is the cost of deciding noise when the event is signal. The observer compares the value of the likelihood ratio with the criterion value and, depending on whether the likelihood ratio is greater (less) than the criterion value, classifies the event as a signal (noise). Thus, SDT utilizes the notions of hit rates and false alarm rates in a decision-theoretic framework to analyze the decisions made by the observer. This process is described in more detail in the next section, where SDT is applied to the problem of detecting management fraud.

SDT has been applied in the areas of human perception and decision making (Green and Swets, 1974), diagnostic systems in clinical medicine (Swets, 1979), industrial quality control (Drury and Fox, 1975), and in marketing for advertisement recognition testing (Cradit et al., 1994; Singh and Churchill, 1986). The method primarily uses experimentally produced data to analyze the phenomena under investigation. However, the theoretical concepts of SDT can also be used to analyze a phenomenon for which experimental and empirical data are scarce. Sorkin and Dai (1994) used SDT to analyze the efficiency and detection performance of group decisions. They reported the application of their results to real groups, such as juries and committees. This paper likewise uses the theoretical concepts of SDT to analyze the auditor's legal responsibilities and the efficiency and effectiveness of auditing in detecting management fraud.
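The two-step decision rule described above can be sketched in a few lines of code. This is an illustrative sketch only, not from the paper; the function names and the numerical inputs are hypothetical, and the criterion formula follows the cost-and-prior form given above.

```python
# Sketch of the SDT decision rule: compute a likelihood ratio from hit and
# false alarm rates, compute a criterion from costs and the prior, compare.

def likelihood_ratio(hit_rate, false_alarm_rate):
    """Likelihood ratio = hit rate / false alarm rate (a number, not a probability)."""
    return hit_rate / false_alarm_rate

def criterion(c_nn, c_sn, c_ss, c_ns, p_signal):
    """Criterion value = [(CNN + CSN) x Pr(N)] / [(CSS + CNS) x Pr(S)]."""
    p_noise = 1.0 - p_signal
    return ((c_nn + c_sn) * p_noise) / ((c_ss + c_ns) * p_signal)

def decide(hit_rate, false_alarm_rate, c_nn, c_sn, c_ss, c_ns, p_signal):
    """Classify the event as 'signal' if L exceeds the criterion, else 'noise'."""
    L = likelihood_ratio(hit_rate, false_alarm_rate)
    return "signal" if L >= criterion(c_nn, c_sn, c_ss, c_ns, p_signal) else "noise"

# A rare signal (1 per cent prior) with symmetric costs demands strong evidence:
# L = 9 falls short of a criterion of 99, so the observer reports noise.
print(decide(0.9, 0.1, 1, 1, 1, 1, 0.01))  # prints "noise"
```

The example illustrates the point made later in the paper: with a low-prior event, even a fairly diagnostic observation (L = 9) is not enough to clear the criterion.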

III. Applying SDT to the problem of detecting management fraud

The application of the SDT framework to the problem of detecting management fraud begins with the auditor examining an



account balance. The audit objective is to determine the existence or non-existence of management fraud for that particular account. The auditor's decision rule is to accept if no management fraud exists (~MF) and reject if management fraud exists (MF). The auditor collects and evaluates the audit evidence for the particular account and, based on the audit evidence, decides whether to accept or reject the account balance. The collection and evaluation of evidence is called the audit signal in SDT terminology.

Four different types of audit signals are possible when auditing for fraud:
1 the audit does not signal MF when MF (a miss);
2 the audit signals MF when MF (a hit);
3 the audit does not signal MF when ~MF (a correct acceptance); and
4 the audit signals MF when ~MF (a false alarm).
In the SDT framework, the auditor will make the accept/reject decision utilizing these signals, as described below. This description is summarized in Figure 2. The four cells in Figure 2 are summarized by two independent values: the hit rate and the false alarm rate. To formalize hit rates and false alarm rates, two conditional probabilities are introduced, Pr(S|MF) and Pr(~S|~MF), where Pr(S|MF) is the probability that the audit signals (S) management fraud when MF and Pr(~S|~MF) is the probability that the audit does not signal (~S) management fraud when ~MF. The hit rate is expressed as Pr(S|MF), the miss rate as Pr(~S|MF), or 1 − Pr(S|MF), and the false alarm rate as Pr(S|~MF), or 1 − Pr(~S|~MF). The hit rate is related to audit effectiveness and the false alarm rate to audit efficiency. Audit technology is effective and efficient if Pr(S|MF) and Pr(~S|~MF) are close to 1; it is ineffective and inefficient if Pr(S|MF) and Pr(~S|~MF) are close to 0.5 (i.e. random chance). The audit technology is highly reliable when the values of Pr(S|MF) and Pr(~S|~MF) are 1 and unreliable when these values are 0.5.

In the SDT framework, the auditor will not base his/her decision on the raw audit signal alone. The auditor will use hit rates and false alarm rates to calculate the likelihood ratio (L). The likelihood ratio captures the likelihood that a particular account contains MF relative to the likelihood that it contains ~MF. The likelihood ratio is calculated as,

L = Pr(S|MF) / Pr(S|~MF)    (1)

The calculated value of L is compared against a fixed criterion value of the likelihood ratio called β. The decision rule for the auditor can be specified as follows: if L is greater than β then reject, and if L is less than β then accept. β is a function of the pay-offs associated with the auditor's decisions and the estimate of the prior probabilities of the occurrence of management fraud. Thus, β indirectly incorporates the decision maker's response biases, attitudes, and motives, which can be shown as,

β = k × [Pr(~MF) / Pr(MF)]    (2)

Figure 2 The auditor's decision matrix

             Auditor's decision: Accept        Auditor's decision: Reject
Event: MF    Pr(~S|MF) miss rate               Pr(S|MF) hit rate
Event: ~MF   Pr(~S|~MF) correct acceptance     Pr(S|~MF) false alarm rate

where k is a constant of proportionality and Pr(MF) and Pr(~MF) are the prior probabilities of the occurrence or non-occurrence of management fraud (Coombs et al., 1970, pp. 170-2). Empirical research in fraud detection lends some support to the above relationships. Pincus (1990) found that auditors' priors were positively related to fraud detection. Bernardi (1994) conducted an experimental study of the influence of client integrity and competence and auditor cognitive style on fraud detection. He reported that the higher the auditor's priors for the existence of fraud, the greater the fraud detection rate. However, Bernardi did not investigate the incidence of Type I errors. In the SDT framework, it can be concluded that the auditor's estimate of the prior probabilities of management fraud significantly affects the fixed criterion value (β) of the likelihood ratio. The constant k is given by the formula,

k = (Cca + CType I) / (Ccr + CType II)    (3)

where Cca is the cost associated with correct acceptance, CType I is the cost associated with a Type I error (incorrect rejection), Ccr is the cost associated with correct rejection, and CType II is the cost associated with a Type II error (incorrect acceptance).
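Equations (2) and (3) can be combined in a short sketch. This is an illustration under assumed cost figures (Cca and Ccr set to zero, Type II errors ten times as costly as Type I), not data from the paper.

```python
# Sketch of equations (2) and (3): the criterion beta from decision costs
# and the prior probability of management fraud.

def k_constant(c_ca, c_type1, c_cr, c_type2):
    """Equation (3): k = (Cca + CTypeI) / (Ccr + CTypeII)."""
    return (c_ca + c_type1) / (c_cr + c_type2)

def beta(k, p_mf):
    """Equation (2): beta = k x Pr(~MF) / Pr(MF)."""
    return k * (1.0 - p_mf) / p_mf

# Assumed costs: Type II ten times Type I; Cca and Ccr negligible (set to 0).
k = k_constant(c_ca=0, c_type1=1, c_cr=0, c_type2=10)   # k = 1/10
print(round(beta(k, p_mf=0.01), 4))   # prior odds 1:99 -> beta = 9.9
```

This reproduces the value β = 9.9 used later in the paper's Table I discussion for k = 1/10 and prior odds of 1:99.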



If the decision of an auditor is based on a fixed amount of evidence, then the auditor can only modify β. An increase in β decreases Type I risk and increases Type II risk, while a decrease in β increases Type I risk and decreases Type II risk. That is, if MF, then a decrease in β increases the effectiveness of the audit; however, if ~MF, then a decrease in β decreases the efficiency of the audit. If the signals were perfectly diagnostic, then a β could be chosen that would eliminate both Type I and Type II risks. Audit signals, however, are rarely perfectly diagnostic. In this situation, equations (1), (2), and (3) can be used to analyze the relationship between the audit technology and the base rates of management fraud.

The low base rates of management fraud have implications for audit efficiency and effectiveness. The audit technology should enable the auditor to make better decisions than simply assuming that all accounts belong to the category with the highest base rate. However, this consideration is complicated by the introduction of false alarm rates and the costs of Type I and Type II errors. Also, low prior probabilities of management fraud make false alarm rates more relevant for correct decision making. These issues are investigated by using equations (1), (2), and (3).

First, the paper discusses how the auditor forms prior odds concerning management fraud. The prior odds represent the information available to the auditor prior to the observation of L. The auditor can obtain this information based on the observed rates of management fraud cases in the population or client base. Quantitative information about the base rates of management fraud, however, is not available publicly or on a firm-wide basis (Joyce and Biddle, 1981). The consensus in the literature is that management fraud (detected or undetected) is a rare event having a low prior probability (Elliot and Jacobson, 1986; Gwilliam, 1986)[5]. Auditors can also form estimates based on their personal experiences of management fraud cases. Loebbecke et al. (1989) found that an auditor's encounter with management fraud is an uncommon event. Bell et al. (1993) indicated that a majority of auditors encounter two or fewer cases of management fraud. Based on these experiences, an auditor's estimate of the prior odds for management fraud should be low. This study assumes that the prior odds vary between 1:99 and 1:9, that is, Pr(MF) varies between 1 per cent and 10 per cent, respectively. The specific values of prior odds (base rates) chosen for this analysis are: 1:99 (1 per cent), 1:19 (5 per cent), and 1:9 (10 per cent).

Next, the study discusses the costs of Type I and Type II errors. The cost of a Type II error is generally accepted to be greater than the cost of a Type I error. The costs of Type II errors, the legal costs of a missed fraud, are well documented in the literature. However, the costs of Type I errors, extra audit hours, unwarranted audit adjustments, and the loss of clients and future audit revenue, have not been researched and are difficult to estimate. In the ensuing analysis it is assumed that the cost of Type II errors is 10 to 99 times higher than the cost of Type I errors, which is in line with the Hansen et al. (1996) study. Assuming even higher costs would only strengthen the results of this paper. If these assumptions hold, then the value of k in equation (3) will be less than 1 (Cca and Ccr will not significantly affect the calculations). This causes a shift in the value of β. As mentioned above, β captures the pay-offs and costs associated with the auditor's decisions. Auditors are trained to look for errors and fraud because of the costs of Type I and Type II errors. This causes the value of k to shift downwards even further. As k becomes smaller, the threshold value of β becomes smaller. As discussed earlier, a decrease in β increases the effectiveness of the audit if MF and decreases the efficiency of the audit if ~MF.

Now the analysis plugs the numbers into equations (1), (2), and (3) (see Table I). In the case where k equals 1/10 and the prior odds of MF are assumed to be 1:99 (1 per cent), the value of β will be 9.9[6]. Then, for the audit to be effective (efficient), L should be greater than 9.9[7]. If we assume a hit rate of 100 per cent (Pr(S|MF) equals 1), then the false alarm rate becomes approximately 10 per cent (Pr(S|~MF) equals 0.1). If k equals 1/50, then the false alarm rate becomes approximately 50 per cent. Finally, if k equals 1/99, then the false alarm rate becomes 100 per cent. In this situation the auditor achieves a 100 per cent hit rate at the cost of a 100 per cent false alarm rate. Therefore, the auditor rejects all accounts and, in doing so, finds all accounts with management fraud. In situations where the hit rate is not assumed to be 100 per cent, the results change slightly. If the hit rate is 95 per cent, then β, as earlier, will still be 9.9. The corresponding required false alarm rate should be 9.5 per cent. If k is equal to 1/50, then the false alarm rate becomes approximately 47.5 per cent. When k equals 1/99, the false alarm rate becomes 95 per cent. If the power of the audit technology is 95 per cent, then the auditor achieves a 95 per cent hit rate at the


cost of approximately a 95 per cent false alarm rate. Table I indicates that if the prior odds are 1:99, given k equals 1/99, then in order to maintain the effectiveness of the audit the false alarm rate should be equal to the power of the audit technology[8]. Similar results are obtained for k equal to 1/19 (or 1/9) if the prior odds are 1:19 (or 1:9). If the prior odds decrease, then even for smaller downward shifts in k, the false alarm rate must equal the power of the audit technology. This is necessary to maintain the effectiveness of the audit. However, this increasing false alarm rate causes a corresponding increase in Type I errors (if ~MF). Deshmukh et al. (1997) showed that if the costs of Type I and Type II errors are equal, then correct decision making requires an extremely low false alarm rate. However, as the cost of Type II errors increases, the value of L (or β) increasingly shifts downwards, permitting ever higher false alarm rates. Since the base rate of management fraud is low, the auditor is forced to accept disproportionately large Type I errors to avoid Type II errors.

These results provide an understanding of the dilemma faced by the auditor in taking responsibility for detecting management fraud. The auditor can increase the power of the audit technology to detect management fraud. Audit effectiveness is maintained only if the increase in the power of the audit technology is matched by a corresponding increase in false alarm rates. At any level of power, the auditor's correct decision making concerning MF depends on the acceptance of a very high level of false alarm rates and consequent Type I errors.
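The maximum permissible false alarm rate behind these figures follows directly from the condition L = Pr(S|MF)/Pr(S|~MF) > β. A minimal sketch (illustrative only; function names are hypothetical):

```python
# Maximum permissible false alarm rate: the audit stays effective only while
# L = hit / false-alarm exceeds beta = k x Pr(~MF)/Pr(MF), so the largest
# tolerable false alarm rate is hit_rate / beta, capped at 1.

def max_false_alarm(hit_rate, k, p_mf):
    beta = k * (1.0 - p_mf) / p_mf
    return min(hit_rate / beta, 1.0)   # a rate cannot exceed 1

# Prior odds 1:99 with a 100 per cent hit rate, for the three k values above:
for k in (1/10, 1/50, 1/99):
    print(f"k = {k:.4f}: max false alarm rate = {max_false_alarm(1.0, k, 0.01):.4f}")
```

Run with these inputs, the loop recovers the roughly 10 per cent, 50 per cent, and 100 per cent false alarm rates discussed in the text.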

IV. The sequential nature of the audit process

A significant constraint of the preceding analysis is that it treats auditing as a point process in which only a fixed amount of evidence is collected and evaluated. In reality, there is a hierarchy of evidence evaluation and decision making. The same evidence is usually evaluated many times and data are gathered sequentially, often with the same audit objective. Auditors also use risk assessment tools to modify their prior probabilities relating to the existence of management fraud. For example, Planet is an artificial intelligence tool developed by Price Waterhouse which is used to draft audit plans after considering different factors such as potential management bias, history of manipulation, the control environment, prior years' errors, and human resource issues (McFadgen, 1994). There can be an extension of audit procedures and the application of new audit procedures to the same evidence for the same audit objective. Consequently, the auditor's prior odds are modified sequentially through the audit[9].

This audit process can be captured by using SDT. In this section, the audit process is modeled by using a filter concept from SDT. The audit process can be modeled as a sequence of n filters. Each filter represents the application of a new audit procedure to the same audit evidence, an extension of audit procedures, or the use of a risk assessment tool to more accurately assess the probability of management fraud. The auditor's prior odds and the likelihood ratio are modified based on the sequential signals. This process is shown in Figure 3. The auditor's correct decision in this case depends on the statistical independence of

Table I Calculation of maximum permissible false alarm rates Pr(S|~MF) for various values of Pr(S|MF)

Prior odds   Likelihood ratio (β)   k      Pr(S|MF)=1   Pr(S|MF)=0.95   Pr(S|MF)=0.90   Pr(S|MF)=0.85
1:99         9.90                   1/10   0.1010       0.0959          0.0909          0.0858
1:99         1.98                   1/50   0.5050       0.4799          0.4545          0.4292
1:99         1.00                   1/99   1.000        0.9500          0.9000          0.8500
1:19         1.90                   1/10   0.5263       0.5000          0.4736          0.4473
1:19         1.27                   1/15   0.7874       0.7480          0.7086          0.6692
1:19         1.00                   1/19   1.000        0.9500          0.9000          0.8500
1:9          9.00                   1/1    0.1111       0.1055          0.1000          0.0944
1:9          1.80                   1/5    0.5555       0.5279          0.5000          0.4722
1:9          1.00                   1/9    1.000        0.9500          0.9000          0.8500

Note: k is assumed to be no greater than one


Figure 3 Sequential modification of odds: the auditor's prior odds of management fraud are modified in turn by the likelihood ratios L1, L2, ..., Ln of successive filters

the signals generated by each filter. Assume that all the audit signals are independent of each other. In filter 1, the auditor's prior odds are modified by signal S1, which results in the likelihood ratio L1. The formula is,

L1 = Pr(S1|MF) / Pr(S1|~MF)    (5)

The posterior odds after filter 1 are then,

Pr(MF|S1) / Pr(~MF|S1) = L1 × [Pr(MF) / Pr(~MF)]    (6)

The left-hand side of equation (6) becomes the prior odds for filter 2. After the sequence of n filters, the condition for a correct decision becomes,

∏(i=1 to n) [Pr(Si|MF) / Pr(Si|~MF)] × [Pr(MF) / Pr(~MF)] ≥ k    (7)

In this case the value of k is assumed to be 1. The probability of a correct decision improves significantly if each filter has a relatively high hit rate and low false alarm rate, given that the filters are independent. If we assume that the hit rate and the false alarm rate for each filter are 80 per cent and 20 per cent respectively, and the probability of management fraud is 1 per cent (prior odds 1:99), and if we substitute the numbers into equation (7), then four filters are sufficient to make a correct decision. The calculations work as follows: (80/20) × (80/20) × (80/20) × (80/20) × (1/99) ≈ 2.59 > k (= 1). Suppose MF (~MF); then after four filters the auditor's modified likelihood ratio (L4) > 1 and the correct decision concerning MF (~MF) can be made. That is, if management fraud exists, then the auditor will be able to detect the fraud after applying four independent audit procedures. Since the cost of a Type I error is less than that of a Type II error, k will be less than 1. As k becomes smaller, fewer filters are required to maintain the effectiveness of the audit. For k < 1, if MF, then the auditor should be able to detect fraud after applying fewer than four independent audit procedures. However, the above results apply only if statistical independence of the filters is achieved. When the signals are correlated

there are dependencies among the filters, and after the first filter there is no increase in power. In this case all the predictions about hit rates and false alarm rates described in the earlier section hold true. The higher the correlation, the lower the chance of making a correct decision. The condition of statistical independence for different procedures is difficult to achieve in practice. Because of standardized staff training, the same group of auditors performing different audit procedures usually lacks varied experience in detecting management fraud. This standardization of training and lack of experience might cause the signals emanating from different audit procedures to be correlated. These correlations might explain auditors' failures in detecting management fraud. The correlations between successive hit rates (Pr(Si|MF)) and successive false alarm rates (Pr(Si|~MF)) can be evaluated separately. The incremental discriminatory power of successive filters will be lower depending on the extent of the correlation between the successive hit and false alarm rates. However, if the hit rates are correlated and the false alarm rates are independent, then the successive filters will reduce the likelihood ratio (and vice versa), resulting in an increased probability of an incorrect decision.
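The sequential-filter calculation above can be sketched as a small loop. This is an illustration of equation (7) under the assumption of fully independent filters; the function name and default values are hypothetical.

```python
# Sketch of the sequential-filter model of equation (7): each independent
# audit procedure multiplies the running odds by its likelihood ratio
# L = hit / false-alarm, until the odds exceed the threshold k.

def filters_needed(hit_rate, false_alarm_rate, prior_odds, k=1.0):
    """Number of independent filters needed before prior_odds x L^n >= k."""
    L = hit_rate / false_alarm_rate
    odds, n = prior_odds, 0
    while odds < k:
        odds *= L
        n += 1
    return n

# 80 per cent hit rate, 20 per cent false alarm rate, prior odds 1:99:
# (80/20)^4 x (1/99) = 256/99 > 1, so four filters suffice.
print(filters_needed(0.8, 0.2, 1/99))  # prints 4
```

As the text notes, lowering k (Type II errors costlier than Type I) reduces the number of filters required; for example, `filters_needed(0.8, 0.2, 1/99, k=0.5)` yields 3.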

V. Conclusion

This paper has presented the relationship between audit technology (hit rates and false alarm rates), base rates of management fraud, and the costs of Type I and Type II errors using an SDT framework. First, the audit process was modeled as a point process in which the auditor evaluates a fixed amount of evidence and makes a decision. Next, the audit process was simulated as a sequential application of different audit procedures to achieve a particular audit objective. SDT


concepts were used to model and analyze the audit process. The modeling shows that critical levels of false alarm rates are required to maintain audit effectiveness (if MF) and audit efficiency (if ~MF). The analysis has demonstrated that as the cost of Type II errors increases, audit effectiveness is maintained only if the increase in the power of the audit technology is matched by a corresponding increase in false alarm rates. At any level of power, the auditor's correct decision making concerning MF depends on the acceptance of a very high rate of false alarms and consequent Type I errors. The sequential nature of audit tests might mitigate this problem if these tests are statistically independent of each other. However, if dependencies exist, then the prior analysis holds.

The analysis leads to certain policy recommendations. First, Davidson (1994) suggests that auditors should be exposed to forensic practice so that their priors for management fraud will be higher. This might result in better auditing in terms of fraud detection. Johnson et al. (1991) also point out that auditors whose exposure to fraud cases is higher than the average base rates for a particular industry construct a fault model of the company and can identify framing effects. Pincus (1990) found that auditors' priors were positively related to fraud detection. Thus, it is possible that auditors who correctly form expectations about prior odds can successfully integrate the audit evidence and draw correct conclusions. The Bernardi (1994) model predicts detection of fraud when the auditor's priors on the existence of fraud are above 57 per cent. Such increased priors across the board may increase audit effectiveness. However, as the earlier analysis indicates, due to the low base rates this will also result in increased Type I errors, leading to overauditing and an increased cost of the audit for the firm's total client portfolio. For example, Bernardi and Pincus (1996) found that auditors' higher priors for fraud did result in audit inefficiency.

Second, standard-setters have traditionally been concerned with audit ineffectiveness (Type II errors) and not with audit inefficiency (Type I errors). From a broader economic perspective, however, it is imperative that standard-setters be concerned with audit inefficiency. A few well-publicized cases of management fraud can damage the reputation of the profession and can lead to a demand for an increase in the auditor's responsibility for the detection of management fraud. As a result of low base rates, an increase in

the cost of Type II errors results in audit inefficiency on a broader scale due to Type I errors.

Finally, the paper calls into question the utility of external auditing in detecting management fraud. Increasing the responsibility of the auditor may not lead to the desired result of reducing the incidence of management fraud. However, it will certainly lead to an increased cost of auditing, the denial of audit services to high-risk industries, and more difficulty for small businesses in raising capital (AICPA, 1993).

The analysis in the paper has several limitations. First, at present there is little empirical evidence that auditors calculate likelihood ratios in making accept/reject decisions. Even though there is some evidence (e.g. Bernardi, 1994; Pincus, 1990) that auditors' decisions are influenced by the base rates of management fraud, further research needs to be done to understand this process. Second, the analysis in this paper is based on the assumption that the base rate of management fraud is less than 10 per cent. Future studies should empirically explore this issue. Finally, this analysis does not consider the restraining effects of the audit on management fraud.

Notes

1 SAS No. 82 defines management fraud as an intentional misstatement arising from fraudulent financial reporting or from misappropriation of assets.
2 Audit technology is operationalised by means of hit rates (the power of the audit technology) and false alarm rates. A hit occurs when the audit signals management fraud and management fraud exists; a false alarm occurs when the audit signals management fraud and management fraud does not exist.
3 Type I and Type II errors are defined as an incorrect rejection and an incorrect acceptance, respectively.
4 For an exhaustive treatment of SDT, see Green and Swets (1974).
5 At present no definitive empirical estimates of the base rates of management fraud are available. KPMG Peat Marwick (1993) fraud survey results show that false financial statements (which appear somewhat similar to management fraud as defined in SAS No. 82) occurred in only 1 per cent of fraud cases. The Association of Certified Fraud Examiners, in its Report to the Nation on Occupational Fraud and Abuse (1996), reported that financial statement fraud accounted for about 5 per cent of all occupational fraud cases. These are the only two empirical studies in the literature, and on their basis this paper assumes that base rates are below 10 per cent.


6 The decrease in the criterion due to the decrease in k will maintain the effectiveness of the audit. However, since the prior incidence of MF in the population does not change, such a decrease will result in increased Type I errors across the population.
7 For any given account the auditor faces either a Type I or a Type II risk, not both. Audit effectiveness is the appropriate concern when MF, and audit efficiency when ~MF.
8 It should be noted that if k exceeds 1/99 then the false alarm rate can increase only up to 1 (if the power of the audit technology is less than 1); however, audit effectiveness can still be maintained.
9 The audit process can also be modeled as a parallel process, in which various tests are conducted in parallel and the outputs or results are combined to make a final decision. The results of such a model are qualitatively not very different from those of the sequential process modeled in this paper. For more details see Swets and Pickett (1982, p. 61).
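The interplay of base rates and the cost ratio k in notes 6-8 can be sketched with the textbook SDT optimal likelihood-ratio criterion, beta = ((1 - p)/p) x k, combined with the equal-variance Gaussian signal model. This is an illustrative sketch under those standard assumptions, not the paper's own derivation; the sensitivity value d' = 2.0 is hypothetical.

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def optimal_beta(base_rate, k):
    """Optimal likelihood-ratio criterion: prior odds against fraud
    times the Type I / Type II cost ratio k."""
    return ((1.0 - base_rate) / base_rate) * k

def rates_at_criterion(d_prime, beta):
    """Hit and false alarm rates at criterion beta under the
    equal-variance Gaussian model (noise ~ N(0,1), signal ~ N(d',1))."""
    x_c = log(beta) / d_prime + d_prime / 2.0  # point where LR(x) = beta
    return 1.0 - norm_cdf(x_c - d_prime), 1.0 - norm_cdf(x_c)

# Base rate of 1 per cent: as Type II errors grow costlier relative to
# Type I errors (k falls), beta falls and the false alarm rate rises.
# At k = 1/99 the prior odds of 99:1 are exactly offset and beta = 1.
for k in (1.0, 0.1, 1.0 / 99.0):
    beta = optimal_beta(0.01, k)
    hit, fa = rates_at_criterion(2.0, beta)
    print(f"k={k:.4f}  beta={beta:.2f}  hit={hit:.3f}  false alarm={fa:.3f}")
```

The loop shows the trade-off in the notes numerically: maintaining the hit rate as k falls is only possible by accepting a sharply higher false alarm rate.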

References

Albrecht, W.S. and Willingham, J.J. (1992), "An evaluation of SAS No. 53, the auditor's responsibility to detect and report errors and irregularities", Proceedings of the Expectation Gap Roundtable, 11-12 May, Charleston, SC, pp. 102-21.
American Institute of Certified Public Accountants (1988), Statement on Auditing Standards No. 53, The Auditor's Responsibility to Detect and Report Errors and Irregularities, AICPA, New York, NY.
American Institute of Certified Public Accountants (1993), Meeting the Financial Reporting Needs of the Future: A Public Commitment from the Public Accounting Profession, Board of Directors of the American Institute of Certified Public Accountants, AICPA, New York, NY.
American Institute of Certified Public Accountants (1997), Statement on Auditing Standards No. 82, Consideration of Fraud in a Financial Statement Audit, AICPA, New York, NY.
Arens, A. and Loebbecke, J. (1997), Auditing: An Integrated Approach, Prentice-Hall, Englewood Cliffs, NJ.
Association of Certified Fraud Examiners (1996), Report to the Nation on Occupational Fraud and Abuse.
Bell, T., Szykowny, S. and Willingham, J. (1993), "Assessing the likelihood of fraudulent financial reporting: a cascaded logit approach", working paper, KPMG Peat Marwick, Montvale, NJ.
Bernardi, R.A. (1994), "Fraud detection: the effect of client integrity and competence and auditor cognitive style", Auditing: A Journal of Practice & Theory, Vol. 13, Supplement, pp. 68-84.

Bernardi, R.A. and Pincus, K.V. (1996), "The relationship between materiality thresholds and judgments of fraud risk", Managerial Finance, Vol. 22, Spring, pp. 1-15.
Coombs, C.H., Dawes, R.M. and Tversky, A. (1970), Mathematical Psychology, Prentice-Hall, Englewood Cliffs, NJ.
Costello, J.L. (1991), "The auditor's responsibilities for fraud detection and disclosure: do the auditing standards provide safe harbor?", Maine Law Review, Vol. 43, pp. 265-305.
Cradit, J.D., Tashchian, A. and Hofacker, C.F. (1994), "Signal detection theory and single observation designs: methods and indices for advertising recognition testing", Journal of Marketing Research, February, pp. 117-27.
Davidson, S.A. (1994), "Fraud detection: the effect of client integrity and competence and auditor cognitive style", Auditing: A Journal of Practice & Theory, Vol. 13, Supplement, pp. 84-9.
Deshmukh, A., Siegel, P. and Karim, K. (1997), "A Bayesian analysis of cost-effectiveness of auditing for small businesses", Advances in Accounting, Vol. 15, pp. 265-77.
Drury, C.G. and Fox, J.G. (1975), "The imperfect inspector", in Drury, C.G. and Fox, J.G. (Eds), Human Reliability in Quality Control, Halsted, New York, NY, pp. 11-16.
Elliot, R.K. and Jacobson, P.D. (1986), "Detecting and deterring financial statement fraud", Corporate Accounting, Fall, pp. 34-9.
Green, D.M. and Swets, J.A. (1974), Signal Detection Theory and Psychophysics, Robert E. Krieger, New York, NY.
Gwilliam, D. (1986), "Be alert to the possibility of management fraud", Accountancy, February, pp. 104-5.
Hansen, J.V., McDonald, J.B., Messier, W.F. and Bell, T.B. (1996), "A generalized qualitative-response model and the analysis of management fraud", Management Science, Vol. 42 No. 7, pp. 1022-32.
Johnson, P.E., Jamal, K. and Berryman, R.G. (1991), "Effects of framing on auditor decisions", Organizational Behavior and Human Decision Processes, Vol. 50, pp. 75-105.
Joyce, E.J. and Biddle, G.C. (1981), "Are auditors' judgments sufficiently regressive?", Journal of Accounting Research, Autumn, pp. 323-49.
KPMG Peat Marwick (1993), Fraud Survey Results, KPMG Peat Marwick, New York, NY.
Loebbecke, J.K., Eining, M.M. and Willingham, J.J. (1989), "Auditors' experience with material irregularities: frequency, nature, and detectability", Auditing: A Journal of Practice & Theory, Fall, pp. 1-28.
McFadgen, D.N. (1994), "When numbers are better than words: the joint effects of response representation and experience on inherent risk judgment", Auditing: A Journal of Practice & Theory, Vol. 13, Supplement, pp. 20-3.


Palmrose, Z. (1991), "Trials of legal disputes involving independent auditors: some empirical evidence", Journal of Accounting Research, Vol. 29, pp. 149-85.
Pincus, K.V. (1990), "Auditor individual differences and fairness of presentation judgments", Auditing: A Journal of Practice & Theory, Fall, pp. 150-66.
Singh, S.N. and Churchill, G.A. (1986), "Using the theory of signal detection to improve ad recognition testing", Journal of Marketing Research, November, pp. 327-36.

Sorkin, R.D. and Dai, H. (1994), "Signal detection analysis of the ideal group", Organizational Behavior and Human Decision Processes, Vol. 60, pp. 1-13.
Swets, J.A. (1979), "ROC analysis applied to the evaluation of medical imaging techniques", Investigative Radiology, Vol. 14 No. 2, pp. 203-6.
Swets, J.A. and Pickett, R.M. (1982), Evaluation of Diagnostic Systems: Methods from Signal Detection Theory, Academic Press, New York, NY.

