

Will credit rating agency reforms be effective?
Scott J. Boylan
Department of Accounting, Washington and Lee University, Lexington, Virginia, USA

Abstract
Purpose – The purpose of this paper is to examine the potential effectiveness of government
reforms aimed at improving the accuracy of ratings issued by credit ratings agencies in US financial
markets.
Design/methodology/approach – The paper identifies unconscious bias as a source of inaccuracy
in the credit ratings process. It examines prior behavioral research on unconscious bias,
and uses this research to identify structural issues within the credit ratings industry that give rise
to biased judgments. Finally, it examines whether government reforms will be effective in
improving the accuracy of credit ratings, and offers additional reforms aimed at combating
unconscious bias.
Findings – Recent government reforms will be most effective in curbing intentional decisions to
compromise the ratings process. However, the reforms will be less effective at mitigating unconscious
biases in judgments underlying credit ratings, because they do not adequately address relevant
structural issues. To combat unconscious bias, changes need to be made to ratings agencies’ fee
structures, business models, and risk management functions.
Practical implications – The analysis is of use to regulators who are contemplating the need for
reforms aimed at improving the accuracy of credit ratings. While focusing on events in the USA, the
analysis is relevant to any country in which credit ratings are influential in financial markets.
Originality/value – This is the first paper to examine the performance of credit ratings agencies
through the lens of behavioral psychology, and to introduce the concept of unconscious bias as a
determining factor in the accuracy of credit ratings.
Keywords Credit rating, Credit, Bias, Experiment, United States of America, Financial markets
Paper type Research paper

On April 13, 2011, the US Senate Permanent Subcommittee on Investigations (2011), chaired by Senator Carl Levin (D-Mich.), released a post-mortem analysis entitled,
Wall Street and the Financial Crisis: Anatomy of a Financial Collapse. The
subcommittee report, based on extensive interviews with US bankers, regulators,
and other market participants, concluded that one of the key contributors to the
financial crisis of 2008 was inflated credit ratings issued by agencies such as Moody’s,
Standard & Poor’s, and Fitch Ratings. The subcommittee identified several factors
contributing to the inaccurate ratings, including excessive competitive pressures
within the industry, flawed quantitative models, and bad data for those models.
Overall, the report paints a picture of an industry whose members were too eager to please the investment banks which created the securities – and paid the agencies’ fees. More specifically, the report concludes that the credit rating agencies were not sufficiently careful in designing and evaluating the tests upon which their ratings were based, and were too reluctant to revise their ratings when it was apparent that those ratings were inaccurate. In response, the subcommittee recommended that the US Securities and Exchange Commission be given expanded powers to monitor these agencies. In addition, it recommended that the SEC be permitted to impose stiff penalties for violators.
In light of the subcommittee’s report, coupled with the lingering public outrage over
the 2008 financial crisis, it is easy to envision a caricature of greedy investment
bankers colluding with unscrupulous credit raters to cook the ratings on investments
in order to line their pockets at the expense of unwary investors. However, the truth likely is more complicated, and the reasons for the poor performance of the credit rating agencies are likely more subtle. In particular, the nature of the credit ratings process, coupled with structural elements of the industry, makes for an
environment in which unconscious biases in professional judgment are likely to
arise, and which are likely to have a negative impact on the quality of credit ratings.
This type of bias is particularly difficult to deal with, because those who exhibit it do
not believe that they are making poor judgments or compromising their integrity in the
decisions that they make.
The following analysis focuses on events that occurred in the USA and in its
financial markets, and on the related US Government reforms. However, the relevant
issues are not unique to the USA. Accordingly, the analysis also applies to almost any
country in which credit ratings play an important role in financial markets.

Securitizations and credit ratings


During the first part of the 2000s, credit rating agencies played a key role in the
securitization of US residential mortgages and in the marketing of related financial
derivatives, whose values were based on those mortgages. Essentially, investment
banks like Goldman Sachs purchased residential mortgages from originators,
including commercial banks, and mortgage companies such as Countrywide. The
investment banks then repackaged these mortgages in the form of bonds and sold
them to investors. The benefit of doing so, according to the banks, was the
diversification of risk. In other words, if a particular lender held a $300,000 mortgage
note, it would stand to gain if the mortgage was paid off according to schedule, but it
would stand to lose if the borrower defaulted; essentially an all or nothing proposition.
The idea behind residential mortgage backed securities (RMBS) was that it would be
more efficient to spread this credit risk around. For instance, the local bank could sell
the mortgage to an investment bank (typically for more than face-value). If it wanted to
replace its exposure to the real-estate market, it could, for instance, buy $300,000 worth
of RMBS. The benefit of doing so is that it would receive interest payments, much like
if it held the original mortgage. But, rather than having the entire $300,000 investment
linked to a single property, its investment would be spread across hundreds, or even
thousands, of properties from different geographic regions. In the event of a default,
lots of investors would suffer very small losses, rather than one investor shouldering
the entire burden on its own.
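The diversification argument sketched above lends itself to a little arithmetic. The following Python sketch uses purely hypothetical figures (a 2 percent default probability, a total loss on default, and fully independent loans) to compare holding one $300,000 mortgage with spreading $300,000 across 1,000 loans; the expected loss is the same, but the worst-case loss shrinks dramatically. The catch, with hindsight, is the independence assumption.

```python
# Illustrative sketch only: the diversification argument behind RMBS, with
# made-up figures. Compare holding one $300,000 mortgage against spreading
# $300,000 across 1,000 loans, assuming (unrealistically) independent 2% default
# risks and total loss on default -- the independence assumption is what failed.
import numpy as np

rng = np.random.default_rng(0)
n_sims, pd = 100_000, 0.02          # simulated scenarios; hypothetical default probability

single_loss = 300_000 * rng.binomial(1, pd, n_sims)   # all-or-nothing exposure
pooled_loss = 300 * rng.binomial(1_000, pd, n_sims)   # $300 slice of 1,000 loans

for name, loss in (("single $300,000 mortgage", single_loss),
                   ("$300 slice of 1,000 mortgages", pooled_loss)):
    print(f"{name}: mean loss ${loss.mean():,.0f}, std ${loss.std():,.0f}, "
          f"worst 1% at least ${np.percentile(loss, 99):,.0f}")
```

Under these assumptions both positions lose $6,000 on average, but the single mortgage occasionally loses everything while the pooled position rarely loses much more than $9,000 – which is exactly the argument the banks made to investors.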
The investment banks devised clever ways to assign the cash flows from the
underlying mortgage payments to the related RMBS – to construct bonds with varying
degrees of credit risk. Bonds that had the highest priority for receiving payments carried
the least risk, and offered the most conservative return. Bonds with lower priorities
carried more risk, and offered higher returns. The investment banks hired credit rating
agencies to evaluate their RMBS packages, and to assign credit ratings to each
type of bond. The importance of doing so was that certain institutional investors, such as pension funds, were the source of substantial demand for bonds, and were contractually
limited to purchasing only the safest debt securities. The banks worked hard to structure
the packages of RMBS such that most of the bonds would receive a triple-A rating,
signifying a very limited risk of loss of principal. These bonds then could be sold to
institutional investors, and to others who were sufficiently risk-averse. The bonds that
did not make the grade often were combined with the riskiest pieces of other RMBS packages, and re-sliced – to further diversify the new pieces – and re-submitted to the same ratings process. Interestingly enough, the majority of these new, and increasingly
diversified, securities (known as collateralized debt obligations or CDOs) often received
the coveted triple-A rating from the agencies; also making them eligible for purchase by
pension funds and other conservative investors.
The ratings process typically commenced with an investment bank hiring a credit
rating agency to rate a package of RMBS or CDOs. In the case of RMBS, the agency would
examine the structure of the securities, including how cash flows were assigned to each
bond, and the degree of cushion built-in to the offering. For instance, a $300,000 RMBS
offering might be backed by mortgages with a face value of $306,000, providing a 2 percent
default cushion before any of the bonds would begin to suffer; the greater the cushion, the
safer the bonds, and the greater the chance of receiving a triple-A rating. In addition, the
agencies typically examined spreadsheets of loan data for the underlying mortgages,
looking for things like credit scores of the borrowers, loan to value ratios, geographic
location of the collateral, etc. Data from these spreadsheets was fed into proprietary
quantitative models to arrive at conclusions about potential losses associated with
particular bonds. Analysts for the agencies used the quantitative data as input to arrive at
their ratings. If the investment banks were not happy with, for instance, the proportion of
an offering receiving triple-A status, they had the opportunity to modify the contents, such
as increasing the default cushion, or swapping-out specific mortgages. The revised
packages would then be reanalyzed. The process would continue until both parties were
satisfied, at which time the ratings would be published and the bonds issued. The process
for CDOs was similar, with the notable exception that frequently, due to the derivative
nature of the CDOs, the credit rating agencies relied on their ratings of the underlying
pieces, rather than re-examining the data at the loan level.
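The cushion arithmetic in the $300,000/$306,000 example above can be made concrete with a toy loss waterfall. The sketch below is illustrative only: the zero recovery rate and the specific loss rates are assumptions for exposition, not the agencies’ proprietary methodology. It simply shows how the overcollateralization cushion absorbs collateral losses before bondholders suffer.

```python
# Illustrative sketch only: a toy loss waterfall using the figures from the
# text ($306,000 of mortgage collateral backing $300,000 of bonds). The zero
# recovery rate and the loss rates below are hypothetical.
def tranche_losses(collateral, cushion, default_rate, recovery=0.0):
    """Allocate collateral losses bottom-up: the cushion absorbs losses first,
    and bondholders lose only whatever is left over."""
    loss = collateral * default_rate * (1.0 - recovery)
    cushion_loss = min(loss, cushion)
    bondholder_loss = max(loss - cushion, 0.0)
    return cushion_loss, bondholder_loss

collateral = 306_000.0                 # face value of the underlying mortgages
bonds = 300_000.0                      # bonds sold to investors
cushion = collateral - bonds           # $6,000 of overcollateralization

print(f"default cushion: {cushion / bonds:.1%}")        # 2.0% of the bonds

for rate in (0.01, 0.02, 0.05):        # hypothetical loss rates on the collateral
    cl, bl = tranche_losses(collateral, cushion, rate)
    print(f"collateral losses of {rate:.0%}: cushion absorbs ${cl:,.0f}, "
          f"bondholders lose ${bl:,.0f}")
```

Under these assumptions, bondholders are untouched at a 1 percent loss rate, lose almost nothing at 2 percent, and begin to suffer meaningfully beyond that – which is why the size of the cushion figured so heavily in whether an offering received a triple-A rating.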
After ratings were issued, the agencies monitored the related securities. Negative
events, such as evidence of a large number of defaults in a particular region, might
cause the agencies to review a particular RMBS or CDO offering, and potentially
downgrade their rating. For most of the run-up of the US housing bubble, the credit
rating agencies held firm to their triple-A ratings on these securities. However, in the
summer of 2007, mounting evidence of widespread defaults on mortgages, declining
home prices, and outright mortgage fraud led the agencies to downgrade thousands of
bonds in a short period of time. Many institutional investors were forced to sell their
holdings because they were forbidden from investing in anything but low-risk
securities. In addition, investors’ appetite for purchasing these securities evaporated.
Collectively, this caused the market values of the bonds to plummet, and created huge
losses for anyone holding the investments, or anyone selling them. In addition, many
banks had these securities parked on their balance sheets as assets. This happened,
in part, because banks which created the RMBS or CDOs often retained a portion
of the securities for a variety of reasons. The decline in value of the securities retained
as investments, coupled with high degrees of financial leverage, caused many banks to risk running afoul of statutory capital requirements, necessitating even more sales, and contributing to the vicious cycle.

Government action
Part of the US Senate subcommittee’s quest was to determine why the credit rating
agencies appear to have misjudged the risk associated with the RMBS and CDOs, to
determine why they waited so long to downgrade, and to offer recommendations for
improved performance. The subcommittee concluded that the agencies generated
inaccurate ratings, and that the principal determinants for this were excessive competitive
pressures within the industry, flawed quantitative models, and bad data for those models.
Overall, the report paints a picture of an industry whose members were too eager to please
the investment banks which created the securities – and paid the agencies’ fees.
In addition, the report concludes that the credit rating agencies were not sufficiently
careful in designing the tests upon which their ratings were based, and were too reluctant
to revise their ratings when it was apparent that those ratings were inaccurate.
The recommendations of the subcommittee build on the reforms introduced in the
Dodd-Frank Wall Street Reform and Consumer Protection Act, signed into US law in
July 2010. That act gave the Securities and Exchange Commission broad powers to
register and regulate credit rating agencies, and included mandates for improving
internal controls, reducing conflicts of interest, and increasing public disclosure.
In addition, the SEC has the authority to penalize agencies for violations. Penalties
can include censure, remedial training, fines, and decertification, which means that a
specific agency would be barred from issuing ratings that are used for regulatory
purposes. In addition to those reforms, the Senate subcommittee recommends that
the US Government decrease its reliance on privately issued credit ratings altogether.
Also, it recommends that the SEC do the following:
. Rank credit rating agencies by accuracy.
. Facilitate civil lawsuits by investors when credit rating agencies knowingly or recklessly fail to conduct diligent research in support of their ratings.
. Inspect credit rating agencies and discipline them for inadequate controls over their policies and procedures.
. Verify that credit rating agencies are assigning higher risk to unusually novel or complex securities.
. Verify that credit rating agencies are complying with federal law by publicly disclosing high quality information about their organization and procedures.

Unconscious bias
Increased regulation of credit rating agencies raises a question about the effectiveness of
any proposed or enacted reforms. Essentially, the issue is whether these reforms will
increase the accuracy of credit ratings and improve the timeliness of changes in those
ratings. In considering this question, it is important to remember that, ultimately, a credit
rating is an expression of an “opinion” about the risk of loss on a bond or similar
investment. Moreover, the opinions offered by the credit rating agencies are the product of
complex analyses; full of judgments and assumptions about the causes and effects of credit
defaults.
A long history of academic research demonstrates that judgments and decisions are distinct from one another. In general, when confronted with a decision-problem, people
use available information to form judgments. Based on those judgments, people make
decisions. Bias can enter in either the judgment phase or the decision phase. The key
difference is that when it enters at the judgment phase, bias typically is unconscious.
That is, decision-makers are unaware that their judgments are biased. Examples include
well-documented tendencies for individuals to over-estimate their driving skills,
or under-estimate the amount of time it takes to complete a project. In either case,
individuals really believe that they are better than they are in fact. In the case of
under-estimating the time to complete a project, this could lead individuals to, for
example, unintentionally submit budgets with overly optimistic assumptions.
In contrast, if one believes, for example, that it will take ten hours to complete a project, but submits a budget indicating it will take eight hours, this is an
example of a (consciously) biased decision.
It is important for policy-makers and other parties to both recognize and understand
the difference between conscious and unconscious bias. One practical reason for this is
that specific policies intended to mitigate bias are likely to be more effective at mitigating
one type of bias (conscious versus unconscious) than another. For instance, harsh
penalties for providing false information to a taxing authority are likely to deter deliberate
cheating. However, if a decision-maker truly believes that a reported amount is correct
(even when it is not), he/she likely would be insensitive to the magnitude of the penalty.
With respect to credit ratings, one must ask whether agencies consciously chose to
issue ratings that they knew to understate the default risk on RMBS and CDOs
(a deliberately “biased” decision). Or, were the ratings products of “unconsciously”
biased judgments? In other words, it is possible that the agencies were not fully aware
of the fact that their ratings were too lenient. While we may never be certain about the
degree to which agencies did or did not know that their ratings were biased, research
shows that unconscious bias is highly likely to manifest in settings like the one in
which credit ratings are issued.
The source of unconscious bias is attributed in part to limitations in our ability to
effectively and efficiently process large quantities of complex information. In addition,
research shows that opinions are often unconsciously shaped by one’s own self-interest.
For instance, in a recent experiment, Boylan (2008) created pairs of subjects who were
asked to estimate the value of an item. Its value was based on the number of objects in a
container. So, if there were 437 objects, its value would be $437. In each pair, subjects
were assigned the roles of manager and auditor. The auditor subjects had financial
incentives to produce accurate valuations. The managers had incentives either to
overstate the value (e.g. assets and revenues), or understate the value (e.g. liabilities and
expenses). The results show that auditors who were paired with managers with
incentives to overstate, actually believed that the containers were more valuable. In other
words, their opinions about the true value of the item in question were biased, and they
were completely unaware of that fact. Bazerman et al. (2002) conducted a similar
experiment using professional accountants, who were provided a case study pertaining
to the potential acquisition of a company. Some of the accountants took the role of
advisor to the buyer, while others took the role of advisor to the seller. Despite having
identical information, those who advised the seller believed the company’s financial
statements were more accurate than did those who advised the buyer.
These two examples show that even in simple settings, unconscious bias is likely to creep into one’s judgments, and because it is unconscious, combatting the bias can be
problematic[1]. In addition, these examples illustrate that unconscious bias often is in
the direction of a decision-maker’s own self-interest. Given the importance and
prevalence of this issue, considerable research has attempted to categorize different
types of unconscious bias, and to provide insight into where and why bias might arise.
Three important, and well-documented, sources of bias are:
(1) Availability. Information that is readily available tends to have a
disproportionately strong influence on individuals’ judgments (Tversky and
Kahneman, 1973, 1974). In the experiments described above, a party who has a
vested interest in a particular outcome likely will make arguments in favor of his
or her position readily available. Being paired with someone of this persuasion will
contribute to unconscious bias due to the asymmetric availability of information
about a particular outcome. With regard to credit rating agencies, availability is an
issue because the agencies work closely with issuers to gather information about
the security offerings under review. Issuers obviously have a vested interest in
obtaining high ratings, and thus are likely to provide evidence and to steer
discussions in favor of their desired outcome, with no external party providing an
effective counter-weight in the process. Contributing to availability-based bias
was the lack of negative information about the US housing market, which
produced steady gains for 70 years with very low instances of credit default. In
addition, most declines were contained to relatively isolated geographic areas
(e.g. declines in San Francisco appeared independent of declines in Chicago).
(2) Representativeness. Good past performance tends to lead to excessively
optimistic projections about future performance (Tversky and Kahneman, 1974,
1982). This phenomenon has been demonstrated in a variety of settings. For
example, De Bondt and Thaler (1985) found evidence that investors tend to
systematically overvalue stocks that have strong recent past performance,
leading to abnormally poor subsequent returns. The rationale for this type of
error is that individuals erroneously interpret strong past performance as more sustainable than it really is. Thus, their judgments about future performance tend
to be inflated. Representativeness likely contributed to the poor performance of
the credit rating agencies because of the overall positive past performance of the
housing market, and in particular, the more recent run-up in home prices, coupled
with defaults that tended to lag this run-up by several years.
(3) Anchoring. Individuals tend to overweight baseline information, and
underweight incremental information when making judgments (Tversky and
Kahneman, 1974). For instance, Bernard and Thomas (1989) found evidence that
investors do not adjust earnings forecasts sufficiently in response to new
information, leading to a phenomenon known as post-earnings-announcement
drift, in which security prices tend to drift either predictably upward in response
to “good-news” or predictably downward in response to “bad-news.” The credit
rating agencies appear to have suffered from a similar problem when news of
increased defaults on home mortgages started showing up in the news, and in
their data. Information from over 70 years of data suggested that default rates
generally were very low and uncorrelated across geographic regions.
News of increased defaults, not restricted to small geographic pockets, appears to have been discounted until its prevalence in the news and the data was so
voluminous (availability?) that the agencies were forced to react by
simultaneously downgrading the credit status of thousands of RMBS and CDOs.

Structural issues
In order to improve the accuracy of credit ratings and to improve the timeliness of
revisions to those ratings, reforms must address both conscious choices made by the
agencies to do substandard work and unconscious biases that color their judgments
about the creditworthiness of companies and specific investments under review. The
Senate subcommittee’s report and recommendations, as well as the reforms set forth by
the Dodd-Frank Act seek to provide the SEC with the toolset to address the former
concern. However, several structural issues within the credit ratings industry itself
remain. Unless modified, they likely will continue to have a negative effect on the
quality of credit ratings. Three critical structural issues are:
(1) “Issuer Pays” business model. Under the current business model, when an
investment bank or other entity wants to issue debt securities, it hires a credit
rating agency to evaluate the package and issue a credit rating on the
underlying securities. This business model incentivizes the agencies to provide
ratings that please the issuer. The agencies counter by arguing that their
analysts are shielded from the sales side of their businesses, and hence immune
to the pressure of providing favorable ratings. Whether this is true, and the
degree to which separating analysts and sales people is effective, however, is
not a settled issue. At a minimum, there is at least anecdotal evidence
suggesting that ratings decisions were influenced by upper management and
by sales representatives within the agencies. For instance, a common complaint
in the subcommittee’s report is that the agencies had a misplaced emphasis on
increasing their market share by doing as many deals with the investment
banks as possible, and that there was considerable internal pressure to meet
market-share-based revenue goals. As long as this business model remains in
place, there will be a nettlesome degree of commonality between the interests of
the issuers and the interests of the ratings agencies. Accordingly, credit rating
agencies will have strong financial incentives to generate reports that please the
issuers, suggesting that the likelihood of unconscious bias in ratings will
remain high.
(2) Complexity. Providing a credit rating depends on judgments about complex
transactions, whose cash flows are spread over long periods of time, and subject to
myriad risks. Accordingly, the opinions offered via credit ratings are subjective,
and must be supported by qualitative judgments and assumptions, about which
reasonable people may differ. This environment is ripe for the development of
unconscious bias. It is one thing to ask, “what is 2 + 2?”; it is another to ask:
[. . .] what is the chance that a 30 year bond, collateralized by 1,000 mortgages issued by
four banks in fifteen geographic regions will lose value due to a 6 percent default rate in
the underlying mortgages?
Hard quantitative data and sophisticated models help prevent this from being a
totally subjective question, but at the end of the day, the answer will depend on
assumptions built into the models as well as intuition from professionals who Credit rating
evaluate the results. Given that the agencies work closely with issuers, and derive agency reforms
their revenues and profits from those issuers, it is likely that issuers will continue
to attempt to pressure the agencies for favorable ratings. As evidenced by the
psychological research discussed above, pressure from issuers likely will
unconsciously influence analysts’ judgments about the creditworthiness of securities under review. The direction of that influence will likely be in favor of the
issuers, thus leading to ratings decisions that are (unintentionally) biased in favor
of the issuers.
(3) Culture. The credit rating agencies historically have viewed themselves as
publishers, whose work is protected by the First Amendment of the US
Constitution; not as investor watch-dogs, or gatekeepers for the financial
markets. Accordingly, they do not view their role as one in which they have a
responsibility to protect the public interest. Their work typically includes
disclaimers, indicating that their ratings should not be used to make investment
decisions. And, when involved in litigation by those who relied on the ratings, the
agencies have mounted successful First Amendment defenses. In addition, as
public companies, large credit rating agencies must answer to shareholders, who
are interested in generating returns on their investments, not in owning stock in a
company that provides the most accurate ratings per se. The absence of any real
sense of obligation for protecting the public interest, coupled with financial
incentives to grow the size of their companies, suggests that a commonality of
interests between issuers and the ratings agencies is likely to persist. Thus,
despite sincere efforts on the part of credit analysts to remain impartial, research
has shown that common interests can conspire to generate unconscious bias. As
such, this type of bias is likely to continue to be a factor in the opinions offered by
the agencies.

Additional reforms
In order to combat unconscious bias, and to mitigate its effect on the quality of credit
ratings, one must address the structural issues described above. The government
reforms described above, both proposed and enacted, likely will have a positive impact
on combatting willful neglect and conscious decisions to compromise the ratings
process. Their ability to reduce unconscious bias, however, is tied to their ability to
address the structural problems that promote that sort of bias. It is likely that increased
penalties and monitoring will be helpful in changing “some” of the cultural issues in
the industry. And, requirements for improved internal controls should improve the
internal evaluation of the ratings process, and the adequacy of underlying models and
datasets. However, these reforms may not go far enough. To address the problem of
unconscious bias, the following additional reforms also should be considered.
First, the “issuer-pays” business model needs to be replaced. One possibility is a
return to a “user-pays” model. This was, in fact, the business model used by the agencies
in the USA up until the mid-1970s. Under the “user-pays” model, the ratings are not
public information. Investors pay for access to those ratings. This change would break
the financial ties between the agencies and the issuers, leading to a realignment of
interests. Under this scenario, credit rating agencies would feel less pressure to placate
issuers. By mitigating the commonality of interests that arises from the issuer-pays
business model, the move to a user-pays model likely will remove one important condition that has been demonstrated to generate unconscious biases in general, which, in turn, should reduce the likelihood of agencies issuing ratings biased in favor of
issuers. An alternative to this approach would be to retain the “issuer-pays” model,
but rather than creating engagements on a deal-by-deal basis, the issuers should be
required to hire an agency for a fixed period of time (e.g. five years), with no chance of
renewal. During that time, all of the deals would be rated by the agency. The agency, in
turn, would have no incentive to placate the issuer, because after the five year
engagement, the ratings agency would rotate-off, and be replaced by a competitor.
Second, there also should be restrictions on the hiring of ratings agency employees
by issuers; for instance a two-year cooling-off period. This would reduce the incentives
for employees of the agencies to placate issuers in hopes of being hired by them. If the
credit rating agency employees are not seeking jobs with issuers, they should be less susceptible to bias, both conscious and unconscious. As was true with moving away from
the issuer-pays business model, establishing employment restrictions also is likely to
lessen the commonality of interests between issuers and ratings agents. And, as
demonstrated by prior research, commonality of interests is a key ingredient for
unconscious bias. Clogging the employment pipeline between issuers and credit rating
agencies is important because issuers have incentives to hire ratings agency employees
to take advantage of their knowledge about the proprietary ratings processes
employed by the agencies.
Third, the organizational structure of the agencies needs to be reevaluated. Is the
public corporation the appropriate structure for an entity whose product is relied upon
by external parties to make investment decisions? This structure creates inherent
conflicts of interest between pleasing shareholders and providing the highest quality
ratings possible. Requiring these entities to be organized as partnerships would relieve
some of this tension, and concentrate decisions among a comparatively small group of
general partners, whose interests are less likely to be influenced by short-term
considerations. Of course, this type of organizational change carries with it other
economic consequences (taxes, liability issues, etc.), which would need to be part of the
decision-making calculus.
Finally, the agencies should develop and strengthen internal risk-management
functions. In particular, risk management personnel should be trained to spot and
manage areas of the business that are likely to be the incubators of unconscious bias.
Also, the chain of command should be such that they report not to the CEO, but rather
to the board of directors. This structure and related training would promote scrutiny of
proprietary models used in the ratings process, and the implementation of stress tests
designed specifically to challenge the types of assumptions that are most likely to be
affected by unconscious bias. For example, the Senate subcommittee report determined
that the agencies used data from corporate loans, which was more readily available
than data from residential mortgage loans, to develop assumptions about the
correlation between defaults of residential mortgage loans. Historical data from
corporate loans indicated a correlation of about 30 percent. The agencies assumed that correlation between defaults on residential loans would be about 40 percent. Ex post, it turns out that defaults on many residential loans were about 80-90 percent
correlated. This sequence of assumptions led to a systematic underestimation of the
default risk on many RMBS and CDO packages. Moreover, anchoring and availability
are likely culprits in this failure. Simply recognizing this, and asking, “what happens if defaults on residential mortgages are much higher than on corporate loans?” would have at least generated some serious fodder for discussion.
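The sensitivity to the correlation assumption is the sort of question a strengthened risk-management function could stress test. The sketch below is a hedged, hypothetical one-factor simulation: the 5 percent loan default probability, the 500-loan pool, and the “attachment” levels are illustrative assumptions, and the correlation parameter is a latent-factor stand-in for the default correlations cited in the report, not the agencies’ actual methodology. It shows that moving the correlation from the assumed 30-40 percent range toward the 80 percent that was ultimately observed has a modest effect on moderate pool losses but sharply inflates the probability of the extreme, pool-wide losses against which the most senior bonds were supposed to be protected.

```python
# Illustrative sketch only (hypothetical parameters, not the agencies' models):
# a one-factor default simulation showing why the assumed correlation matters
# most for the well-protected, senior portions of a deal.
import numpy as np
from scipy.stats import norm

def pool_default_rates(corr, pd=0.05, n_loans=500, n_sims=10_000, seed=0):
    """Simulate pool-wide default rates under a one-factor Gaussian copula.
    `corr` is the latent-factor correlation, a rough stand-in for the default
    correlations discussed in the subcommittee report."""
    rng = np.random.default_rng(seed)
    cutoff = norm.ppf(pd)                          # each loan defaults with prob `pd`
    z = rng.standard_normal((n_sims, 1))           # shared (systematic) factor
    eps = rng.standard_normal((n_sims, n_loans))   # loan-specific shocks
    latent = np.sqrt(corr) * z + np.sqrt(1.0 - corr) * eps
    return (latent < cutoff).mean(axis=1)          # fraction of loans defaulting

for corr in (0.3, 0.4, 0.8):                       # assumed vs. (roughly) realized
    rates = pool_default_rates(corr)
    for attach in (0.20, 0.40):                    # hypothetical protection levels
        prob = (rates > attach).mean()
        print(f"correlation {corr:.0%}, losses exceed {attach:.0%} of pool: ~{prob:.2%}")
```

Under these assumptions, the chance that pool defaults breach a deep 40 percent protection level grows by roughly an order of magnitude as the correlation rises from 30 percent to 80 percent, which is one way a well-placed stress test could have turned the anchoring problem into an explicit discussion item.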

Conclusion
Both the Dodd-Frank Wall Street Reform and Consumer Protection Act, and the more
recently released report on the financial crisis, issued by the US Senate Permanent
Subcommittee on Investigations contain provisions and recommendations aimed at
improving the accuracy of credit ratings and the timeliness of related revisions. The
effectiveness of these provisions and recommendations ultimately will rest on the degree
to which they mitigate both decisions to intentionally water down the ratings process
and unconscious biases in the judgments underlying those ratings. Unfortunately, given
the nature of the industry, it is probably infeasible to completely eliminate unconscious
bias. For instance, agencies always will need to work closely with issuers, who have
vested interests in favorable outcomes. Moreover, rating the credit risk associated with a
package of securities always will be a complex, and ultimately subjective, process. Both
of these structural elements will always be present, contributing to an environment in
which unconscious bias can arise. But, while it may not be possible to completely
eliminate unconscious bias in the credit rating process, the additional reforms proposed
in this paper should help by reducing the financial commonality of interest that
currently exists between the credit rating agencies and the issuers, and by promoting
independent and well-trained monitoring functions within the agencies. In doing so, the
proposed reforms address the root causes of unconscious bias, and should be effective at
reducing it; thus improving the quality of credit ratings.

Note
1. Bazerman et al. (1997, 2002) used this line of reasoning to critique the auditing profession
both before and after the 2001 Enron/Andersen scandal.

References
Bazerman, M., Loewenstein, G. and Moore, D. (2002), “Why good accountants do bad audits”,
Harvard Business Review, November, pp. 97-102.
Bernard, V. and Thomas, J. (1989), “Post-earnings-announcement drift: delayed price response or
risk premium?”, Journal of Accounting Research, Vol. 27, pp. 1-36.
Boylan, S. (2008), “A classroom exercise on unconscious bias in financial reporting and auditing”,
Issues in Accounting Education, Vol. 23 No. 2, pp. 229-45.
De Bondt, W. and Thaler, R. (1985), “Does the stock market overreact?”, Journal of Finance,
Vol. 40, pp. 793-805.
Permanent Subcommittee on Investigations: United States Senate (2011), “Inflated credit ratings:
case study of Moody’s and Standard & Poor’s”, Wall Street and the Financial Crisis:
Anatomy of a Financial Collapse, pp. 243-317.
Tversky, A. and Kahneman, D. (1973), “Availability: a heuristic for judging frequency and
probability”, Cognitive Psychology, Vol. 5, pp. 207-32.
Tversky, A. and Kahneman, D. (1974), “Judgment under uncertainty: heuristics and biases”,
Science, Vol. 185, pp. 1124-31.
Tversky, A. and Kahneman, D. (1982), “Judgments of and by representativeness”, in Kahneman, D., Slovic, P. and Tversky, A. (Eds), Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, New York, NY.

Further reading
Bazerman, M., Morgan, K. and Loewenstein, G. (1997), “The impossibility of auditor independence”, Sloan Management Review, Summer, pp. 89-93.

Corresponding author
Scott J. Boylan can be contacted at: boylans@wlu.edu

