
The Economics of Public Scrutiny

and Financial Misreporting


Delphine Samuels
dsamuels@mit.edu
Sloan School of Management
MIT

Daniel J. Taylor
dtayl@wharton.upenn.edu
The Wharton School
University of Pennsylvania

Robert E. Verrecchia
verrecch@wharton.upenn.edu
The Wharton School
University of Pennsylvania

Draft: December 2019

Abstract:

This paper examines how the ex ante level of public scrutiny influences a manager’s subsequent
decision to misreport. The conventional wisdom is that high levels of public scrutiny facilitate
monitoring, suggesting a negative relation between scrutiny and misreporting. However, public
scrutiny also increases the weight that investors place on earnings in valuing the firm. This in turn
increases the benefit of misreporting, suggesting a positive relation. We formalize these two
countervailing forces––“monitoring” and “valuation”––in the context of a parsimonious model of
misreporting. We show that the combination of these two forces leads to a unimodal relation.
Specifically, as the level of public scrutiny increases, misreporting first increases, reaches a peak,
and then decreases. We find evidence of such a relation across multiple empirical measures of
misreporting, multiple measures of public scrutiny, and multiple research designs.

__________________
We thank Jeremy Bertomeu, John Core, Vivian Fang, Paul Fischer, Mirko Heinle, Naveen
Kartik, John Kepler, Rick Lambert, Thomas Ruchti (discussant), Rodrigo Verdi, Ronghuo Zheng
(discussant), and seminar participants at the 2019 FARS Midyear Meeting, the University of
Michigan, MIT, Stanford, Toronto, Wharton, and University of Zurich for helpful comments. We
thank Travis Dyer for providing data on EDGAR downloads, and thank our respective schools
for financial support.



1. Introduction

Understanding the motives for misreporting is of paramount interest to investors,

regulators, and practitioners. Perhaps for this reason, a vast academic literature examines the

causes of misreporting and the various mechanisms that can mitigate it. Many studies in this

literature explicitly link the misreporting decision to the cost-benefit tradeoff facing the manager,

and seek to provide insight on settings where managers will be more (or less) likely to misreport.

In this paper, we analytically and empirically examine how the ex ante level of public scrutiny

(i.e., the level of public scrutiny prior to misreporting) alters the cost-benefit tradeoff facing

managers and influences their subsequent decision to misreport.

The conventional wisdom is that greater public scrutiny reduces the incidence of

misreporting. The intuition is compelling: greater scrutiny increases the probability of detection

(e.g., Ferri, Zheng, and Zou, 2018). If the probability of detection is higher, the expected cost is

higher and, in equilibrium, the incidence of misreporting should be lower. This intuition has led

much of the prior literature to predict that misreporting is most pronounced in settings

characterized by minimal scrutiny (e.g., Dechow, Ge, and Schrand, 2010).

However, this intuition is incomplete. Prior research shows that greater public scrutiny

increases the weight that investors place on accounting disclosures in valuing the firm. For

example, it is widely accepted that the price response to each dollar of reported earnings, or

earnings response coefficient (ERC), is larger in high-scrutiny environments (e.g., Collins and

Kothari, 1989; Teoh and Wong, 1993). The larger the ERC, the greater the valuation benefit from

a dollar of inflated earnings. If the expected benefit is higher, intuition suggests that the incidence

of misreporting should be higher (e.g., Fischer and Verrecchia, 2000; Ferri, Zheng, and Zou,

2018). We show analytically and empirically that these two economic forces––“monitoring” and



“valuation”––introduce countervailing effects that result in misreporting being greatest in a

medium-scrutiny environment and least in both high- and low-scrutiny environments.

We use a parsimonious model of misreporting based on Fischer and Verrecchia (2000) to

formalize the intuition that motivates our predictions. As is standard in the literature, we assume a

privately informed manager can misreport, or “bias,” reported earnings and that there are a variety

of different “types” of managers, where each type has a different reporting objective. We assume

investors are uncertain about the manager’s type, and consequently uncertain about the manager’s

reporting objective. In deciding whether to bias earnings, the manager trades off the benefit and

cost, and investors have rational expectations about the extent of bias and adjust price accordingly.1

In the context of Fischer and Verrecchia (2000), one can think of public scrutiny––our

theoretical construct of interest––as increasing both the ERC (through a reduction in uncertainty

about the manager’s reporting objective) and the manager’s cost of bias (through a higher

probability of detection). Figure 1 illustrates the link between public scrutiny and the various

theoretical constructs of interest. In the presence of these costs and benefits, we show that the

relation between bias and scrutiny is unimodal: there is a unique peak, below which the relation is

positive, and above which the relation is negative (see e.g., Figure 2).2

The intuition for this result is that in firms with low scrutiny, a lower expected benefit

results in lower bias. As public scrutiny increases, bias increases because the ERC increases––the

valuation channel dominates. Eventually, bias peaks at a unique threshold point and then begins

to decline because in firms with high scrutiny the cost of bias is higher––the monitoring channel

dominates. Consequently, the relation is neither unambiguously positive nor unambiguously

1
See Bertomeu et al. (2017, 2019), Fang, Huang, and Wang (2017), and Frankel and Kartik (2019) for related
models.
2
Formally, a function, y = f(x), is unimodal if for some threshold value t, it is monotonically increasing for x ≤ t and
monotonically decreasing for x ≥ t.



negative: bias is greatest in a medium-scrutiny environment and least in both high- and low-

scrutiny environments. We use the insights from this analysis to inform the design of our empirical

tests.

In principle, there are a variety of proxies we could use to measure the level of public

scrutiny. To minimize concerns that our results are specific to any one measure, and to illustrate

the generality of the economic forces we study, we focus on three proxies common in the literature:

analyst following, institutional investor following, and media coverage. While these proxies are

broad, each of them provides some insight on investors’ knowledge about the manager’s reporting

strategy. Consistent with how we conceptualize public scrutiny, a large body of prior research

indicates that each of these proxies is associated with both heightened ERCs and heightened

monitoring (e.g., Bushee, 1998; Bushman, Piotroski, and Smith 2004; Miller, 2006; Bowen,

Rajgopal, and Venkatachalam, 2008; Dyck, Morse, and Zingales, 2010). These findings provide

construct validity, and suggest that our proxies capture the countervailing forces that motivate our

analysis. Our primary measure of public scrutiny is the common component to analyst following,

institutional investor following, and media coverage, and in subsequent analyses, we assess the

robustness of our results to using EDGAR downloads as an alternative measure of public scrutiny.

We begin our empirical analysis by validating the existence of both the valuation and

monitoring channels. Consistent with the valuation channel (and with a large body of prior

literature), we find that firms with higher levels of public scrutiny have higher ERCs––measured

using both price-levels specifications and returns specifications. Next, to validate the monitoring

channel, we obtain novel FOIA data on all formal SEC investigations over our sample period. The

data contains an exhaustive list of SEC investigations and represents the

“suspect list” considered by the SEC (e.g., Blackburne et al., 2019). Consistent with the monitoring



channel––with public scrutiny increasing the probability of detection––we find a highly significant

positive relation between public scrutiny and the probability of an SEC investigation in the

subsequent year. Firms in the highest quintile of public scrutiny are more than four times as likely to be

investigated by the SEC in the subsequent year than firms in the lowest quintile: 27% (6%) of

firms in the highest (lowest) quintile of public scrutiny are under SEC investigation in the

subsequent year.

Having documented the presence of the valuation and monitoring channels, we proceed to

test our novel theoretical prediction: a unimodal relation between public scrutiny and misreporting.

An important distinguishing feature of our analysis is that we predict that the relation between

misreporting and public scrutiny takes a specific, non-linear functional form. While prior work

often frames empirical predictions in terms of linear relations, our predictions focus on the shape

of the relation. Our focus on the shape of the relation should mitigate concerns about omitted

variables and endogeneity. For example, an alternative explanation for a unimodal relation would

need to suggest an alternative theoretical construct that not only explains the relation between

misreporting and public scrutiny, but also why the relation flips sign at approximately the same

threshold point. While we cannot rule out such a possibility, it seems unlikely. In the economics

literature, this empirical approach is known as “identification by functional form” (e.g., Lewbel,

2018).

Following prior research, we focus on restatements due to intentional misrepresentation as

a measure of misreporting (e.g., Hennes, Leone, and Miller, 2008; Armstrong et al., 2013).

Consistent with extant theory, we measure “bias” in reported earnings using the difference between

earnings as originally reported and earnings as restated. In subsequent analyses, we assess the

robustness of our results to alternative measures of misreporting, including a binary indicator



variable for whether there was a restatement due to intentional misrepresentation, and a binary

indicator variable for whether the SEC issued an Accounting and Auditing Enforcement Release.

We estimate the shape of the relation between public scrutiny and misreporting using

multiple distinct sets of tests. We begin by graphically presenting the shape of the relation. In

particular, we sort firms into quintiles based on public scrutiny and present average values of

misreporting for each quintile. We find that misreporting is monotonically increasing in the first

three quintiles, peaks in the fourth quintile, and then declines sharply in the fifth quintile. We

observe a similar pattern across multiple measures of public scrutiny (including each of analyst

following, institutional following, and media coverage) and across multiple measures of

misreporting (including the amount of bias and an indicator variable for restatements due to

misrepresentation).

Next, we conduct three sets of empirical tests. First, we estimate polynomial regression

models that include both linear and 2nd-order polynomial terms. If the relation is unimodal (i.e.,

first increases and then decreases), we expect the 2nd-order polynomial term on our measure of

public scrutiny to load incremental to the linear term, and the coefficient to be negative. In

estimating these specifications, we control for ad hoc unimodal relations between misreporting

and our control variables (e.g., we control for the possibility of an atheoretic unimodal relation

between misreporting and growth options). Consistent with our predictions, we find robust

evidence of a negative coefficient on the 2nd-order polynomial term across all specifications.

Second, we estimate spline regression models that treat the shape of the relation as

piecewise linear. Specifically, we estimate linear regressions on either side of a threshold and test

whether the slope coefficient below (above) the threshold is positive (negative). One estimation

challenge with these models is that, while the theory suggests that a threshold exists, it is silent on



the location of the threshold within our sample. Accordingly, we use two different approaches to

estimate these models. In the first approach, we use the average level of scrutiny as a rough

approximation of the threshold, and estimate different slope coefficients for observations below

and above the threshold. Consistent with a unimodal relation, we find a positive (negative) relation

for firms with below- (above-) average levels of scrutiny.

In the second approach, we use the multivariate adaptive regression spline method (MARS)

to estimate simultaneously both the threshold that minimizes the mean-squared error, and the sign

of the relation on either side of the threshold (e.g., Friedman, 1991). The advantage of this

approach is that MARS can reliably identify not only the slope coefficients of the piecewise-linear

function around the threshold, but also the point at which the function is piecewise linear. We find

that the threshold is at the 61st percentile and, consistent with a unimodal relation, the slope to the

left (right) of the threshold is positive (negative). Thus, for observations in the top two quintiles of

public scrutiny, our results suggest that small increases in public scrutiny will decrease

misreporting. However, for observations in the bottom three quintiles of public scrutiny, our results

suggest that small increases will increase misreporting.

Finally, in our third set of tests, we exploit a regulatory shock that increased investors’

ability to scrutinize managers. Specifically, we examine how mandatory EDGAR reporting of

Form 4 (managers’ equity trades) affects misreporting. We use a difference-in-differences design

that allows for heterogeneous treatment effects, and predict that the sign of the treatment effect

varies with the level of public scrutiny. Consistent with a unimodal relation between public

scrutiny and misreporting, we find that mandatory EDGAR dissemination of Form 4 increases

misreporting for firms with extremely low ex ante scrutiny, i.e., those below the threshold; and

decreases misreporting for firms with extremely high ex ante scrutiny, i.e., those above the threshold.



While we cannot definitively rule out confounding factors, to explain these results, an omitted

variable would need to: (i) be positively correlated with misreporting; (ii) vary over time with the

EDGAR mandate (first difference); (iii) differentially affect treatment and control firms (second

difference); and (iv) result in the difference between treatment and control firms flipping sign

based on the level of public scrutiny (third difference).

Our findings make multiple contributions to the empirical literature. First, by formally

modeling the manager’s cost-benefit tradeoff, our analysis highlights the limitation of existing

intuition in the empirical literature on misreporting. A distinguishing feature of our analysis is that

we predict that the relation between misreporting and public scrutiny takes a specific, non-linear

functional form––a prediction that would be difficult to intuit in the absence of formal theory. In

this regard, our empirical findings suggest a more nuanced view of how public scrutiny affects the

cost-benefit tradeoff facing the manager. While prior work often frames empirical predictions in

terms of linear relations, our results suggest that a promising path for future work would be to

explore non-linear or unimodal relations. Indeed, our results suggest the relation between

misreporting and each of analyst following, institutional investor ownership, media coverage, and

EDGAR downloads is unimodal: for low (high) values, these variables are positively (negatively)

associated with misreporting. Linear specifications mask these relations.

Second, using novel data on formal SEC investigations (including investigations that did

not lead to charges), we provide evidence that public scrutiny is a leading indicator of SEC

investigations. Despite the much greater rate of SEC investigations among firms with high public

scrutiny (analyst following, institutional investor following, and media coverage), misreporting is

greatest in firms with medium levels of public scrutiny and lowest in both high- and low-scrutiny

firms.



Third, we exploit a novel regulatory shock that has heretofore gone unstudied in the

empirical literature––the requirement that managers’ equity trades be filed electronically on

EDGAR. Not only is this the first evidence that EDGAR dissemination of managers’ equity trades

affects misreporting, but also that the effect of EDGAR dissemination varies with the level of

public scrutiny in a manner consistent with our theoretical predictions. In this regard, our findings

suggest that a promising path for future work on regulatory shocks is to consider designs that allow

for heterogeneous treatment effects: narrowly focusing on the sign and significance of the “average

treatment effect” can mask important heterogeneity in the effect of the shock. For example, we

find the average treatment effect is insignificant, but the treatment effect in the top (bottom)

quintile of public scrutiny is reliably negative (positive). Collectively, the finding that our

theoretical predictions hold across multiple measures of the theoretical constructs, multiple

research designs, and multiple empirical settings underscores the generality and power of the

economic forces we study.

The remainder of our paper proceeds as follows. Section 2 formally develops our empirical

predictions. Section 3 describes the sample and measurement choices used in our empirical tests.

Section 4 describes our empirical tests and results that validate the valuation and monitoring

channels. Section 5 describes our primary research design, results, and robustness tests for a

unimodal relation. Section 6 provides concluding remarks.

2. Hypothesis Development

2.1 Overview

To formally motivate our empirical predictions, we incorporate the notion of public

scrutiny into Fischer and Verrecchia (2000), hereafter FV. As in FV, we consider a setting where



a manager privately observes true earnings, but then exercises discretion over the extent to which

reported earnings diverge from true earnings. We refer to the divergence between reported

earnings and true earnings as “bias.” In the context of FV, the divergence between reported

earnings and true earnings constitutes “misreporting” because the manager intentionally introduces

bias to obfuscate the report. As in FV, we assume bias is costly to the manager, and there is a

continuum of different types of managers, where each type of manager has a different reporting

objective. Investors are uncertain about the manager’s reporting objective, and thus cannot

perfectly anticipate the manager’s reporting strategy. The unique feature of our model, relative to

prior work, is that we incorporate the notion of public scrutiny into FV. Specifically, we assume

that public scrutiny increases investors’ ability to anticipate the manager’s reporting strategy and

increases the manager’s cost of bias.3

2.2. Setup

Unless otherwise noted, the basic setup and notation follows FV, and all random variables

are mean-zero and independently normally distributed. Henceforth, we use a ~ to denote a random

variable. To simplify the exposition, we state formal results in the Appendix.

At the beginning of the first period, the manager privately observes a noisy signal of

terminal cash flow: ẽ = ṽ + ñ. One can think of ẽ as true earnings, ṽ as terminal cash flow, and

ñ as the noise in earnings that results from a non-opportunistic application of accounting rules.

Having observed the realization of true earnings, the manager then reports earnings to the capital

market. We represent reported earnings by r = ẽ + b, where b is the bias in reported earnings and

3
FV is but one example of a class of signaling models in which agents are heterogeneous along two dimensions:
“natural action” and “gaming ability.” See Frankel and Kartik (2019) for a broad discussion of this type of model.
See Bertomeu et al. (2017, 2019) for examples of papers that structurally estimate this type of model, and Fang,
Huang, and Wang (2017) for an example of a paper that links the cost of bias to the noise in earnings and derives
empirical predictions.



is a choice variable of the manager. After observing reported earnings, risk neutral investors trade

the firm’s shares in a perfectly competitive market and price is equal to expected future cash flow,

P = E[ṽ | r]. In choosing the optimal bias, a risk neutral manager trades off the cost of bias with

the benefit of bias. We represent the manager’s objective function as:

max_b { x̃P − (1/2) C(s) b² }.                                  (1)

The first term, x̃P, represents the benefit to bias through the share price, where x̃ is

independently normally distributed with positive mean μ_x > 0, finite variance σ_x² > 0, and

precision π_x (i.e., π_x ≡ 1/σ_x²). We assume all distributions are independent and common

knowledge, and the realization of x̃ = x is known only to the manager at the beginning of the

period. One way to interpret x̃ is to suggest that there is a continuum of different “types” of

managers, where each type has a different reporting objective. As such, π_x represents investors’

knowledge about the manager’s reporting objective: larger values indicate that investors can better

anticipate the manager’s reporting strategy and better anticipate any bias.

The second term represents the cost of bias. One can think of the cost of bias as representing

an amalgamation of the effort cost of bias, the probability of detection, and the pecuniary and non-

pecuniary penalties to the manager in the event of detection. Intuitively, one can think of public

scrutiny as increasing the probability of detection. Henceforth, we refer to C(s) simply as the



“cost function.” 4

In a departure from FV, we introduce the notion of public scrutiny, represented by the

parameter s, and assume (i) investors’ knowledge about the manager’s objective is strictly positive

and increasing in s, and (ii) the cost of bias is strictly positive and increasing in s. Henceforth, we

represent the precision of investors’ knowledge about the manager’s objective by π_x(s) and the

cost associated with bias as C(s). This is the only difference between FV and our model.

2.3 Equilibrium

The Appendix shows that there is a unique linear equilibrium with price given by

P = α + β(ẽ + b). Here, β represents the ERC. The greater the ERC, the greater the price

response to reported earnings (ẽ + b), and hence the greater the benefit of bias. The Appendix

shows that ∂β/∂s > 0. This establishes the mechanism through which public scrutiny increases the

benefit of bias––by increasing the value-relevance of reported earnings. The greater the value-

relevance of reported earnings, the greater the benefit to inflating reported earnings.

Similar to FV, the Appendix shows that the expression for equilibrium expected bias is

given by E[b] = β·μ_x / C(s). The numerator of this expression encapsulates the benefit of bias

through the ERC, and the denominator encapsulates the cost of bias. This expression is similar to

FV, except that both the numerator and the denominator are increasing in s.

4
Note that it is not relevant for our analysis whether investors are uncertain about the first term of the manager’s
objective function or the second term (e.g., Bertomeu et al., 2017). In this regard, recent empirical work tends to
interpret π_x very narrowly, as speaking to pecuniary compensation (e.g., Fang, Huang, and Wang, 2017; Ferri,
Zheng, and Zou, 2018). While managerial compensation contracts are undoubtedly one source of information about
the manager’s reporting objective, they represent only a small fraction of the total available information. For
example, investors might glean information from an analysis of previous disclosures, media articles, analyst reports,
or acquire information from other intermediaries. In this regard, the theory applies broadly and is not specific to a
certain information channel.



Next, we consider the sign of the relation between expected bias and public scrutiny, i.e.,

∂E[b]/∂s. A unimodal relation is characterized by a unique “peak” or “threshold” value of public

scrutiny, denoted s*, such that: (1) for all values of public scrutiny below the threshold (s < s*),

the relation is positive (∂E[b]/∂s > 0); and (2) for all values of public scrutiny above the threshold

(s > s*), the relation is negative (∂E[b]/∂s < 0). The Appendix derives technical conditions that

guarantee unimodality, and shows that a variety of cost functions satisfy these conditions. Figure

2 illustrates the relation between expected bias and public scrutiny for standard linear and convex

cost functions. Having discussed the economic theory that motivates our predictions, next we

examine whether our predictions are empirically descriptive.
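To build intuition for the shape in Figure 2, the following sketch traces E[b] = β(s)·μ_x / C(s) under one simple, purely illustrative parameterization of our own (β(s) increasing and bounded in scrutiny, C(s) linear in scrutiny); it is not the parameterization derived in the Appendix.

```python
# Illustrative sketch of the unimodal relation between expected bias and
# public scrutiny, E[b] = beta(s) * mu_x / C(s), under assumed functional
# forms chosen only for illustration.
import numpy as np

mu_x, c0, c1 = 1.0, 1.0, 0.5        # assumed constants, for illustration only

def expected_bias(s):
    beta = s / (1.0 + s)            # ERC beta(s): increasing in scrutiny (valuation channel)
    cost = c0 + c1 * s              # cost of bias C(s): increasing in scrutiny (monitoring channel)
    return beta * mu_x / cost       # E[b] = beta(s) * mu_x / C(s)

s_grid = np.linspace(0.01, 10.0, 1000)
bias = expected_bias(s_grid)
s_star = s_grid[np.argmax(bias)]    # unique peak of the unimodal shape

print(f"peak scrutiny s* ~ {s_star:.2f}")   # sqrt(c0/c1) ~ 1.41 for this parameterization
print("increasing below s*:", bool(np.all(np.diff(bias[s_grid < s_star]) > 0)))
print("decreasing above s*:", bool(np.all(np.diff(bias[s_grid > s_star]) < 0)))
```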

3. Sample construction and variable measurement

3.1 Sample

We construct our sample using data from CRSP, Compustat, I/B/E/S, Thomson-Reuters,

RavenPack, and Audit Analytics from 2004 to 2012. Our sample begins in 2004, when data on our

measures of the level of public scrutiny and misreporting first become available, and ends in 2012

when our measure of misreporting ends. Column (1) of Panel A of Table 1 shows that there are a

total of 46,148 firm-year observations over this period on the merged CRSP/Compustat universe

with non-missing income, total assets, and market value. After requiring additional information

needed to construct the control variables used in our analysis (e.g., property, plant, and equipment,

sales growth, debt, buy-and-hold returns over the fiscal year), Column (2) shows that the sample

drops to 41,831 firm-years. However, this sample still represents 90% of the total available

Compustat/CRSP population, suggesting our sample comprises a wide variety of firms and exhibits

significant cross-sectional variation in public scrutiny.




Panel B presents descriptive statistics for the variables used in our analysis. Variable

definitions follow Armstrong et al. (2013) and are provided in Table 1. All variables are calculated

net of any restatements and winsorized at the 1st/99th percentiles. Consistent with significant cross-

sectional variation in the level of public scrutiny, Panel B suggests the interquartile range of total

assets (sales) is $146 million to $2.67 billion ($68 million to $1.6 billion), and the interquartile

range is 1 to 8 analysts, 23 to 157 institutional investors, and 9 to 27 media articles.

3.2 Measurement of key variables

3.2.1 Measuring the level of public scrutiny

A distinguishing feature of our analysis is that we predict that the relation between

misreporting and public scrutiny takes a specific, non-linear functional form. While our focus on

the shape of the relation mitigates concerns about omitted variables and reverse causality––an

alternative explanation would need to explain why the relation between misreporting and public

scrutiny flips sign––it poses other challenges. Indeed, one concern with “identification by

functional form” is that the sampling variation in the empirical measure of interest––public

scrutiny––may not be sufficiently large to replicate the entirety of the theoretical shape.

Consider how this concern might affect our empirical analysis. Figure 2 shows the shape

of the relation predicted by theory. However, ex ante, we do not know where firms in our sample

will fall on the x-axis (public scrutiny). It could be that, within our sample, investors have

sufficiently high public scrutiny such that all firms fall to the right of the “peak” or “threshold”. In

such a circumstance, we would only observe a portion of the theoretical shape. For example, if all

of the firms in our sample fall to the right (left) of the threshold, we would only observe a negative

(positive) relation. Thus, for our tests of the shape of the functional form to be meaningful, we

need to maximize sampling variation in our measures of public scrutiny: we need to observe firms



on both sides of the threshold. This requires having as broad a sample as possible. In this regard,

our empirical tests are joint tests of a unimodal shape and a sufficiently broad sample that we

observe firms on both sides of the threshold.

This concern informs how we measure public scrutiny. In principle, there are a variety of

empirical proxies for the level of public scrutiny. We consider several popular proxies in the

literature commonly thought to be associated with valuation and monitoring channels: analyst

following, institutional investor following, and media coverage. There are two key advantages of

these proxies. First, Panel B of Table 1 suggests there is considerable cross-sectional variation in

these proxies in our sample, which helps remedy concerns that the sampling variation might be

insufficient to detect a unimodal relation. Second, a large body of prior research suggests each of

these proxies is related both to greater value-relevance of reported earnings, and to greater

monitoring, and thus captures both the “valuation” and “monitoring” channels that motivate our

analysis (see Figure 1).5 In this regard, the findings in prior work establish construct validity of

our measures of public scrutiny.

To reduce the effect of measurement error, we use principal component analysis to extract

the common component to analyst following, institutional investor following and media coverage,

and use the common component as our primary measure of the level of public scrutiny (Scrutiny).

In addition, to be faithful to the underlying theory, we lag our empirical measures by one year

relative to the potential period of misreporting. This ensures we measure public scrutiny prior to

5
For example, the literature consistently finds that analysts not only provide valuable information to investors and
enhance the value-relevance of earnings, but also perform an important monitoring role that can help deter
misreporting (e.g., Bushman, Piotroski, and Smith 2004; Yu, 2008; Dyck, Morse, and Zingales, 2010). With
fiduciary responsibilities towards their clients, institutional investors play important roles in impounding information
into prices (Piotroski and Roulstone, 2004) and in monitoring managers (e.g., Bushee, 1998; Bowen, Rajgopal, and
Venkatachalam, 2008). Similar to analysts and institutions, the recent literature also suggests that media coverage
can play a dual role: it disseminates information and facilitates monitoring and deterrence (e.g., Miller and Skinner,
2015; Blankespoor, deHaan, and Zhu, 2018).



current period earnings. Table 2 presents the results from our principal component analysis.

Variables are normalized to be mean zero with standard deviation of one prior to this analysis.

Panel A shows that the first principal component explains 66.3% of the variation in the three

variables and has an eigenvalue near two. The first principal component loads positively on all

three variables (loadings of 0.413, 0.445, and 0.366 on Analysts, Institutions, and Media

respectively) and is the only component with an eigenvalue greater than one.

Panel B reports descriptive statistics for our measure. The mean and median values of

Scrutiny are 0.00 and –0.280 respectively, suggesting a right skew toward high-scrutiny firms.6

Panel C shows that Scrutiny is highly correlated with each of the underlying components. Scrutiny

has a Spearman (Pearson) correlation of 0.84 (0.87) with analyst following, 0.91 (0.90) with

institutional following, and 0.74 (0.76) with media coverage.
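For concreteness, the following sketch shows one way to construct such a first-principal-component measure; the column names and simulated data are ours, and the paper's actual inputs come from I/B/E/S, Thomson-Reuters, and RavenPack.

```python
# Sketch of building a composite scrutiny measure as the first principal
# component of standardized analyst following, institutional following,
# and media coverage. The toy data below is hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
base = rng.gamma(2.0, 2.0, n)                          # common "visibility" factor
df = pd.DataFrame({
    "Analysts":     np.round(base + rng.gamma(1.0, 1.0, n)),
    "Institutions": np.round(20 * base + rng.gamma(5.0, 5.0, n)),
    "Media":        np.round(3 * base + rng.gamma(2.0, 2.0, n)),
})

z = (df - df.mean()) / df.std()                        # normalize: mean 0, std 1
eigvals, eigvecs = np.linalg.eigh(np.cov(z.T))         # eigen-decomposition of covariance
first_pc = eigvecs[:, np.argmax(eigvals)]              # loadings of first component
first_pc = first_pc * np.sign(first_pc.sum())          # fix sign so loadings are positive
df["Scrutiny"] = z.values @ first_pc                   # composite scrutiny score

print("variance explained:", eigvals.max() / eigvals.sum())
print("loadings:", dict(zip(df.columns[:3], np.round(first_pc, 3))))
```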

As with all theory-grounded empirical tests, our analyses are joint tests of our theoretical

predictions and our measures. We predict the existence of two countervailing economic forces, not

the absence of other forces. That is, we do not predict that the common component to analyst

following, institutional investor following, and media coverage affects misreporting only through

the two countervailing effects of valuation and monitoring, but rather that such forces exist in the

data. In this regard, the failure to find a unimodal relation could mean that (i) the forces we study

are not present in the data, or (ii) our empirical proxies are sufficiently noisy proxies of the

underlying economic forces that our tests lack power. To the extent that our empirical proxies are

associated with misreporting through channels other than those discussed above––monitoring and

valuation––our tests will be biased against finding a unimodal relation.

3.2.2 Measuring financial misreporting

6
The mean of a linear combination of normalized variables is zero by construction.



In our model, financial misreporting, or “bias,” refers to the difference between reported

earnings and true earnings. In an effort to adhere closely to the theory, we measure misreporting

using the difference between earnings as originally reported and earnings as restated, and focus on

restatements due to intentional misrepresentation. We consider restatements classified by Audit

Analytics as resulting from fraud or an SEC investigation to be intentional misrepresentations (e.g.,

Hennes, Leone and Miller, 2008; Armstrong et al., 2013).

We obtain from Audit Analytics a dataset of restatements classified as resulting from

intentional misrepresentation. The database tracks Non-Reliance Restatements announced on

Form 8-K between 2004 and 2016, and provides information on the date of the announcement, the

period to which the restatement applies, and the effect of the restatement on net income. The

database contains information on 1,018 restatements due to intentional misrepresentations

announced between 2004 and 2016. We use data on restatement announcements through 2016 to

measure misreporting through 2012. In doing so, we allow for a minimum of four years between

the misstated fiscal year and the year of the restatement announcement.7

Next, we match the Audit Analytics database to our sample (described in Table 1) based

on the period that was restated. We define Bias as the amount of restated earnings due to intentional

misrepresentation, scaled by beginning total assets and expressed in basis points (i.e., 5.23% of

assets is expressed as 523). Because our theory speaks to the amount of misreporting, we use Bias as

our primary measure of misreporting. In subsequent analyses, we report results from repeating our

tests measuring misreporting using an indicator variable equal to one if Bias > 0 (Restate).
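As a concrete illustration of these definitions, the sketch below computes Bias and Restate from hypothetical firm-year columns; it mirrors the description above rather than the authors' actual Audit Analytics processing.

```python
# Sketch of the misreporting measures described above, using hypothetical
# column names: Bias is originally reported net income minus restated net
# income (for restatements flagged as intentional), scaled by beginning
# total assets and expressed in basis points; Restate flags Bias > 0.
import numpy as np
import pandas as pd

def add_misreporting_measures(df: pd.DataFrame) -> pd.DataFrame:
    """df holds one row per firm-year with ni_as_reported, ni_as_restated,
    assets_beginning, and an intentional_restatement flag (all hypothetical)."""
    out = df.copy()
    overstatement = out["ni_as_reported"] - out["ni_as_restated"]
    out["Bias"] = np.where(
        out["intentional_restatement"],
        10_000 * overstatement / out["assets_beginning"],  # basis points of assets
        0.0,                                               # no detected misreporting
    )
    out["Restate"] = (out["Bias"] > 0).astype(int)
    return out
```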

7
Firms begin disclosing 8-K item 4.02 “Non-Reliance on Previously Issued Financial Statements or a Related Audit
Report or Completed Interim Review” in August 2004. We remove from our sample intentional restatements
missing data on the announcement date, the period for which the restatement applies, or the effect of the restatement
on net income. By removing these observations from the sample, we ensure we do not erroneously classify the
observation as having zero misreporting. The Audit Analytics database provides us with a broader sample than the
GAO database used in prior work (see Karpoff et al., 2017 for a review of the different measures and databases).



Panel A of Table 3 presents the number of observations restated each year from 2004 to

2012 due to intentional misrepresentation, the probability of restatement (average Restate), and

the average values of Bias. Consistent with prior literature, Panel B shows that the probability of

restatement due to intentional misrepresentation is approximately 0.01 in every year of our sample,

and that average bias is consistently around 0.13 basis points of assets.8

Panel B presents correlations between our measures of public scrutiny, misreporting, and

key control variables. This panel shows that our measures of misreporting (Bias and Restate) are

only weakly correlated with the composite measures of public scrutiny (Scrutiny) and its

components (Analysts, Institutions, and Media). Spearman (Pearson) correlations are between 0.01

and 0.03 (0.00 and 0.01). Consistent with a unimodal relation, our subsequent tests show that for

approximately 40% (60%) of the data, the relation is significantly negative (positive), such that

the relation appears to be zero when pooling across all observations.

4. Validating the Valuation and Monitoring Channels

4.1. Valuation channel

Our model explicitly predicts that the ERC is larger for high-scrutiny firms (see Section

2.3). Accordingly, we validate our measure of public scrutiny by estimating its relation to the ERC.

Although prior work has effectively shown that the ERC is increasing in the three underlying

components of our measure (e.g., analyst following, media coverage, and institutional investor

following), we nonetheless conduct the validation test for the sake of completeness. If we do not

find a positive relation, this would suggest that the empirical measure does not adequately capture

the theoretical construct of interest.

8
Average Bias is small because 99% of observations do not misreport, and thus have zero bias. Among those who
do misreport (i.e., among firms with Restate = 1) average Bias is 21 basis points.



Table 4 presents our results. Panel A presents results from a simple price-levels

representation of the ERC. This price-levels specification is a literal representation of the ERC in

Section 2.3. Specifically, we regress price per share (Price) on reported earnings per share

(Earnings), public scrutiny (Scrutiny), and the interaction term. Prior empirical work suggests

return-based specifications of the ERC are generally more robust (e.g., Christie, 1987; Kothari and

Zimmerman, 1995). Consequently, Panel B presents results from a return-based representation of

the ERC. Specifically, we regress the market-adjusted buy and hold return over the twelve months

ending three months after the fiscal year end (BHAR) on the earnings surprise (Surprise), measured

as the forecast error from a random walk model of annual earnings scaled by beginning-of-period

price, controlling for firm risk (Beta), size (Ln(MV)), and growth opportunities (BM).9

Column (1) of Panel B presents the results from estimating the relation between the return-

based ERC and public scrutiny. Column (2) presents results after controlling for the relation

between the ERC and firm risk, size, and growth opportunities. Column (3) presents results from

a similar specification as Column (2), except that all of the independent variables are transformed

into scaled quintile ranks that range from 0 to 1. Using ranks ensures that our independent variables

are of similar scale, minimizes the effects of outliers, and allows for a meaningful comparison of

the relative economic significance of each variable (i.e., each coefficient measures the difference

in returns between the top and bottom quintile, ceteris paribus).
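A hedged sketch of the returns-based interaction specification described above is shown below; the variable names (including Ln_MV standing in for Ln(MV) and firm_id for clustering) are placeholders rather than the authors' code.

```python
# Sketch of the returns-based ERC test: regress BHAR on the earnings surprise,
# public scrutiny, and their interaction, with the surprise also interacted
# with the controls named in the text. Column names are hypothetical.
import statsmodels.formula.api as smf

def estimate_returns_erc(df):
    model = smf.ols(
        "BHAR ~ Surprise * Scrutiny + Surprise * Beta"
        " + Surprise * Ln_MV + Surprise * BM",
        data=df,
    )
    # A positive coefficient on Surprise:Scrutiny indicates a larger ERC
    # for firms facing more public scrutiny (the valuation channel).
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
```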

Consistent with the extant literature, and regardless of empirical specification, we find a

significantly positive relation between public scrutiny and the ERC. In addition, Column (4) of

9
To ensure that these variables reflect the information available to investors at the time, we compute all measures
using unrestated values (i.e., values as initially reported). This research design choice is consistent with the ERC
featured in our model, which is measured relative to reported earnings that potentially include bias. We use a
random walk model as opposed to the analyst consensus forecast because a substantial portion of our sample (24%)
is missing analyst forecasts. See Teoh and Wong (1993) and Ferri, Zheng, and Zou (2018) for related papers that use
returns-based specifications to estimate the ERC in extant theory models.



Panel B suggests that the relation is similar in economic magnitude to firm risk (Surprise*Beta

coeff. = 0.326) and twice that of firm growth (Surprise*BM coeff. = –0.184). We conclude that

our empirical measure adequately captures the level of public scrutiny and the valuation channel

that motivates our analysis.

4.2. Monitoring channel

Our model is premised on the notion that the cost of misreporting (e.g., the probability of

detection) is larger for high-scrutiny firms. In this section, we empirically validate this “monitoring

channel” by estimating the relation between public scrutiny and the probability of detection using

novel FOIA data on formal SEC investigations. We begin by obtaining data on all formal SEC

investigations during our sample period from Blackburne et al. (2019). This data provides an

exhaustive set of SEC investigations regardless of whether the SEC brought charges, and can be

thought of as the SEC’s “suspect list” of firms. We construct an indicator variable, SECInvestigate,

equal to one if the firm was under investigation by the SEC during the fiscal year.

In Panel A of Table 5, we sort firms into quintiles based on the level of public scrutiny. For

each quintile, we report the probability of an SEC investigation in the subsequent year. Consistent

with the monitoring channel, we find that the likelihood of an SEC investigation is monotonically

increasing in public scrutiny and that the increase in likelihood from each quintile to the next is

statistically significant (p-values < 0.01). These increases also appear economically significant,

ranging from 21.3% to 60.3%. Notably, Panel A presents similar results regardless of whether we

use the composite measure of public scrutiny, Scrutiny, or the individual components (Analystst,

Institutionst, and Mediat). In the top (bottom) quintile of public scrutiny, we find 27% (6%) of

firms are under investigation by the SEC.
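The quintile sort in Panel A can be reproduced with a few lines of the following form; Scrutiny and SECInvestigate_lead are hypothetical column names standing in for the variables defined above.

```python
# Sketch of the quintile sort: assign firms to scrutiny quintiles and compute
# the share under SEC investigation in the following year.
import pandas as pd

def investigation_rate_by_quintile(df: pd.DataFrame) -> pd.Series:
    quintile = pd.qcut(df["Scrutiny"], q=5, labels=[1, 2, 3, 4, 5])
    return df.groupby(quintile)["SECInvestigate_lead"].mean()
```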



Panel B of Table 5 presents results for estimating the probability of an SEC investigation,

SECInvestigatet+1, as a function of public scrutiny and control variables from the literature on

financial misreporting (e.g., Burns and Kedia, 2006; Efendi, Srivastava and Swanson, 2007;

Armstrong et al., 2013). Column (1) presents results from a regression without any control

variables, Column (2) includes control variables and year fixed effects, and Column (3) includes

control variables, and year and industry fixed effects.10 Finally, to verify that the association

between public scrutiny and the probability of an investigation is not itself unimodal, in Column

(4) we add the 2nd-order polynomial term, Scrutiny2, to the regression. Across all specifications,

the coefficient on Scrutiny is positive and highly statistically significant (t-stats range from 13.76

to 20.31), while the coefficient on Scrutiny2 is insignificantly different from zero (t-stat = 0.95).

Collectively, these findings provide direct evidence of the monitoring channel––the probability of

detection is monotonically increasing in public scrutiny.

5. Relation between Public Scrutiny and Misreporting

5.1 Univariate sorts

We begin our primary analysis by graphically presenting the shape of the relation between

the level of public scrutiny and misreporting. In Panel A of Table 6, we sort firms into quintiles

based on the level of public scrutiny. For each quintile, we report the average value of misreporting

in the subsequent year (i.e., Biast+1). Consistent with a unimodal relation, we find that misreporting

is monotonically increasing in the first four quintiles of Scrutiny, and declines sharply in the fifth

quintile. Moreover, the increase from the second to the third quintile is significant (p-value < 0.01),

10
We use a linear probability model. Inferences are unchanged if we use a probit model. All independent variables
are normalized to be mean zero and standard deviation of one, and are lagged one period relative to the dependent
variable. Industry classifications are based on Fama-French twelve industry groups.



as is the decrease from the fourth to the fifth quintile (p-value < 0.01). These differences also

appear economically significant (68.8% and -37.0%, respectively). Panel A also presents results

from repeating this procedure separately for analyst following (Analystst), institutional investor

following (Institutionst), and media coverage (Mediat). We find a similar pattern across these

measures as well. We present these results graphically in Figure 3.

In Panel B, we repeat the procedure and present average values of an indicator variable for

misreporting (Restatet+1) and find similar results. Finding a similar pattern across multiple

measures of misreporting and multiple measures of the level of public scrutiny suggests the

evidence of a unimodal relation is not an artefact of the specification of our subsequent statistical

tests: one can observe it in the raw data. Next, we conduct multiple formal statistical tests.

5.2. Polynomial regressions

We formally test for a unimodal relation between misreporting and the level of public scrutiny

by estimating polynomial regression models that include both linear and 2nd-order polynomial

terms (e.g., Scrutiny and Scrutiny2). If the relation is unimodal, we expect the 2nd-order polynomial

term to be statistically significant and negative.

We estimate the following base model:

Biast+1 = α + β1 Scrutiny2t + β2 Scrutinyt + γ Controlst + εt. (2)

where Bias and Scrutiny are as previously defined and Controls is a vector of control variables

based on the prior literature examining restatements (e.g., Burns and Kedia, 2006; Efendi,

Srivastava and Swanson, 2007; Armstrong et al., 2013). All independent variables are normalized

to be mean zero and standard deviation of one, are lagged one period relative to the measure of

misreporting, and are net of any restatements.
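For concreteness, a minimal sketch of how eqn. (2) might be estimated is shown below, assuming hypothetical column names and an abbreviated control set; it is illustrative rather than the authors' estimation code.

```python
# Sketch of eqn. (2): regress next-year misreporting on standardized scrutiny
# and its square, with controls and year/industry fixed effects. A negative,
# significant coefficient on I(Scrutiny ** 2) is the signature of the
# predicted unimodal (first up, then down) relation.
import statsmodels.formula.api as smf

def estimate_eqn2(df, controls=("ROA", "Size", "BM", "SalesGrowth", "Leverage")):
    rhs = " + ".join(["Scrutiny", "I(Scrutiny ** 2)", *controls,
                      "C(year)", "C(industry)"])
    model = smf.ols(f"Bias_lead ~ {rhs}", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
```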



Table 7 presents results from estimating eqn. (2). Column (1) presents results from a

regression without any control variables, Column (2) includes control variables and year fixed

effects, and Column (3) includes control variables, and year and industry fixed effects. Consistent

with a unimodal relation, we find a negative and significant coefficient on Scrutiny2 across all our

specifications (t-stat ranges between –3.72 and –4.57). In addition, the signs and significance of

our controls (e.g., positive coefficients on ROA, InterestCov, SalesGrowth, and negative

coefficients on Capital and Financing) are generally consistent with those in prior research (e.g.,

Armstrong et al., 2013). In Column (4), we remove the 2nd-order polynomial term and report results

from a linear specification. We find that the coefficient on the linear term Scrutiny is insignificant

(t-stat = –0.35). This is consistent with the notion that the correlation between public scrutiny and

misreporting is not significantly different from zero when pooled across all observations, due to

the existence of an underlying non-linear relation.

We next conduct two sets of tests designed to assess the robustness of our results to

controlling for other potential non-linearities in the data. First, we follow Fang, Huang, and Wang

(2017) and control for a non-linear relation between intentional restatements and two measures of

the industry rate of restatements due to error (PctError and ErrorAmount). PctError is the

percentage of observations in the respective industry-year that were restated due to unintentional

error (where unintentional errors are those restatements not classified as intentional manipulation).

ErrorAmount is the average difference between net income and restated net income due to error in

the respective industry-year, scaled by the standard deviation of net income among non-restating

firms in the industry-year. These variables control for industry earnings quality.

Panel A of Table 8 presents results. Column (1) presents results from estimating eqn. (2)

including PctError and PctError2. Column (2) presents results from estimating eqn. (2) including



ErrorAmount and ErrorAmount2. Consistent with the results in Fang, Huang, and Wang (2017),

we find a negative and significant coefficient on ErrorAmount2 (t-stats = –3.33 and –2.69),

suggesting a non-linear relation between the number of restatements due to error in the industry

and misreporting. More importantly, across all specifications, we continue to find that the

coefficient on Scrutiny2 remains negative and strongly significant (t-stats range from –4.02 to –

3.72). Moreover, the coefficient magnitudes in Table 8 are virtually unchanged from those

presented in Table 7. This suggests that our empirical findings are distinct from this industry-level non-linearity.

Second, we consider the possibility of heretofore undocumented, non-linear relations

between misreporting and our control variables. While we are not aware of any theory that would

predict a unimodal relation between misreporting and our control variables (e.g., a unimodal

relation between misreporting and growth options), we nonetheless control for this possibility.

Specifically, we estimate eqn. (2) after including both 1st- and 2nd-order polynomials of all our

control variables, for a total of 24 linear and quadratic controls.

Panel B of Table 8 presents results. We note two findings. First, the coefficients on the

non-linear control variables are generally insignificant. Only 2 of the 11 coefficients on the non-

linear control variables are statistically significant at the 10% level or lower. The exceptions are

Size2, which is statistically negative (t-stat = –2.40), and Intangibles2, which is statistically positive

(t-stat = 2.45). Second, the coefficient on Scrutiny2 remains negative and strongly significant (t-

stat = –2.74). These findings suggest that our results are not attributable to non-linear relations

between misreporting and our control variables.

5.3. Alternative measures of theoretical constructs

5.3.1 Alternative measure of public scrutiny



In this section, we assess the robustness of our results to using the number of times

professional investors downloaded the firm’s EDGAR filings during the fiscal year (EDGAR) as

an alternative measure of public scrutiny (Dyer, 2018).11 Table 9 reports results from estimating

eqn. (2) using EDGAR downloads to measure public scrutiny. Consistent with our earlier findings,

we find the coefficient on EDGAR is insignificant in the linear specification, and we find a negative

and significant coefficient on EDGAR2.

5.3.2 Alternative measures of misreporting

We next assess the robustness of our results to four alternative measures of misreporting.

First, we use the natural logarithm of one plus the dollar amount of earnings restated due to

misrepresentation (in millions), Log(1+$Biast+1). This measure is unscaled. Second, we use an

indicator variable equal to one if Biast+1 > 0 (Restatet+1). This measure is directly related to Bias,

except that it contains no information about the magnitude of the restatement. It is thus unaffected

by issues related to scale and magnitude, and is perhaps the most common measure of misreporting

used in the literature. Columns (1) to (4) of Table 10 present results from estimating eqn. (2) using

these alternative measures of misreporting. Consistent with our earlier findings, the coefficient on

Scrutiny2 is negative and significant (t-stats range from –2.12 to –3.99). This suggests our earlier

results are not an artefact of scaling the dependent variable.

A potential limitation of all measures of misreporting is that they are conditional on the

firm being caught. In this regard, they are the product of both the economic incidence of

misreporting and the detection of misreporting. While we have no reason to predict that the

probability of detection is unimodal in public scrutiny (and the evidence in Table 5 suggests it is

not), to address this potential concern, we estimate eqn. (2) within the sample of firms where

11
We thank Travis Dyer for providing data on EDGAR downloads by professional investors.



misreporting was detected. Note that this is an advantage of using a continuous variable of

misreporting: it allows us to exploit variation in the magnitude of misreporting conditional on

detection. Columns (5) and (6) present results from estimating eqn. (2) using Biast+1 as the

dependent variable within the sample of firm-years where Restate = 1. Consistent with our earlier

findings, the coefficient on Scrutiny2 is negative and significant (t-stats = –3.12 and –4.05). This

suggests that our earlier results are driven by unimodality in the underlying economic incidence of

misreporting rather than unimodality in the probability of detection.

Finally, we use a binary indicator variable for whether the respective firm-year was the

subject of an SEC Accounting and Auditing Enforcement Release (AAER). We obtain data on

AAERs from the Center for Financial Reporting and Management. Using this database reduces

our sample period to 2004-2011, and our sample size to 37,672 observations. Columns (7) and (8)

report results from estimating eqn. (2) using AAERt+1 as the dependent variable. Consistent with

our predictions, the coefficient on Scrutiny2 is negative and strongly significant (t-stats = –3.20

and –3.28). Collectively, we interpret the findings in Table 10 as suggesting that the unimodal

relation we document does not appear to be an artefact of measurement choices.

5.4 Alternative research design: Spline regressions

In this section, we aim to provide further evidence on the shape of the relation by estimating

spline regressions that treat the relation as piecewise linear. Specifically, we estimate the relation

on either side of a threshold, τ, and test whether the relation below (above) the threshold is positive

(negative):

Biast+1 = α + β1 (Scrutiny – τ < 0 ) + β2 (Scrutiny – τ ≥ 0 ) + γ Controlst + εt, (3)

where all variables are as previously defined. We predict β1 > 0 and β2 < 0.



One challenge with estimating a piecewise linear form is that the theory does not specify

the value of τ in our sample. Accordingly, we use two different approaches to estimate these

models. In the first approach, we use the mean level of Scrutiny as a rough approximation of the

threshold (τ = 0), and estimate the relation for observations below and above the mean. Because

Scrutiny is standardized to be mean zero (see Table 2), this approach is equivalent to estimating

separate slopes for positive and negative values of Scrutiny.

In the second approach, we use the multivariate adaptive regression spline method (MARS)

to simultaneously estimate both the threshold that minimizes the mean-squared error (denoted τ*),

and the sign of the relation on either side of the threshold (e.g., Friedman, 1991). The advantage of this approach is that MARS can reliably identify not only the slope coefficients of the piecewise-linear function on either side of the threshold, but also the location of the threshold itself.
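To illustrate how these two approaches can be implemented, the sketch below (ours, not the authors' code) estimates eqn. (3) with hinge terms on either side of a candidate knot and grid-searches the knot that minimizes mean-squared error, as a simplified stand-in for the MARS procedure. All variable names are illustrative placeholders, and the simulated data are used only to show the mechanics.

```python
# Sketch (not the authors' code): piecewise-linear spline of eqn. (3) with a
# grid-searched knot, a simplified stand-in for the MARS procedure.
import numpy as np
import statsmodels.api as sm

def fit_spline(y, scrutiny, controls, tau):
    """OLS of y on the hinge terms min(Scrutiny - tau, 0) and max(Scrutiny - tau, 0)."""
    below = np.minimum(scrutiny - tau, 0.0)   # slope beta_1 applies below the knot
    above = np.maximum(scrutiny - tau, 0.0)   # slope beta_2 applies above the knot
    X = sm.add_constant(np.column_stack([below, above, controls]))
    return sm.OLS(y, X).fit()

def select_knot(y, scrutiny, controls, grid):
    """Choose the knot tau* that minimizes in-sample mean-squared error."""
    fits = [(tau, fit_spline(y, scrutiny, controls, tau)) for tau in grid]
    return min(fits, key=lambda t: np.mean(t[1].resid ** 2))

# Example on simulated data with a unimodal (inverted-V) relation peaking at zero:
rng = np.random.default_rng(0)
n = 5000
scrutiny = rng.standard_normal(n)
controls = rng.standard_normal((n, 3))
bias = 0.2 * np.minimum(scrutiny, 0) - 0.05 * np.maximum(scrutiny, 0) + rng.standard_normal(n)
tau_star, best = select_knot(bias, scrutiny, controls, np.linspace(-1.0, 1.0, 41))
print(tau_star, best.params[1:3])  # recovered knot and the two slope coefficients
```

Setting τ = 0 in fit_spline corresponds to the first (mean-threshold) approach, while the grid search over candidate knots corresponds in spirit to the second (MARS) approach.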

Table 11 presents our results. Columns (1) and (2) present results from estimating

regressions where the threshold is defined at the mean of Scrutiny (i.e., τ = 0). In these

specifications, β1 measures the relation for observations with below-mean values, and β2 measures

the relation for observations with above-mean values. Column (1) excludes control variables, and

column (2) includes control variables, year fixed effects, and industry fixed effects. Consistent

with a unimodal relation, in both specifications we find a positive slope below the mean (Scrutiny

– τ < 0, t-stat = 3.46 and 3.26) and a negative slope above the mean (Scrutiny – τ ≥ 0, t-stat = –3.84

and –2.37). Notably, the positive slope is approximately three to four times steeper than the negative slope (e.g., 0.189 vs. –0.046). This is consistent with the general shape of the unimodal

relation presented in Figure 2.

Columns (3) and (4) present results from using MARS to simultaneously estimate both τ*

and the corresponding slope coefficients. There are three noteworthy findings. First, the threshold



that minimizes the mean-squared error is τ* = –0.069, which is at the 61st percentile of Scrutiny. The

location of τ* at the 61st percentile is generally consistent with the univariate plots in Figure 3,

which suggests the relation changes sign in the top two quintiles. Second, the location of τ* is not

affected by the inclusion of control variables or fixed effects. This is consistent with the notion

that, because we focus on estimating a specific non-linear functional form, omitted variables and

endogeneity are an unlikely explanation for our findings. Third, consistent with a unimodal

relation, we continue to find a positive slope below the threshold (Scrutiny – τ < 0, t-stat 3.51 and

3.35) and a negative slope above the threshold (Scrutiny – τ ≥ 0, t-stat –3.75 and –2.23). Thus, for

observations in the top two quintiles, our results suggest small increases in scrutiny will decrease

misreporting. However, for observations in the bottom three quintiles, our results suggest small

increases in scrutiny will increase misreporting. Collectively, our tests provide robust evidence of

a unimodal relation.

5.5. Alternative research design: Regulatory shock

A distinguishing feature of the preceding analysis is that we focus our tests on the shape of

the relation between misreporting and public scrutiny. Our focus on the shape of the relation should

mitigate concerns about endogeneity: an alternative explanation for a unimodal relation would

need to explain the existence of a threshold where the relation is positive on one side, and negative

on the other. While we cannot rule out such a possibility, it seems unlikely.12 Nevertheless, in this

section, we use a regulatory “shock” to public scrutiny as an alternative research design to mitigate

any residual concerns (Glaeser and Guay, 2017). Specifically, we examine how mandatory

EDGAR reporting of Form 4 (managers’ equity trades) affects misreporting.

12 Standard textbook forms of endogeneity, i.e., a non-zero correlation between public scrutiny and the regression error term, generally do not generate a specific non-linear functional form. Our empirical approach is known as "identification by functional form" in the economics literature (e.g., Lewbel, 2018).



5.5.1 Background

Prior to July 2003, firms filed Form 4 on EDGAR on a voluntary basis. Firms that did not

wish to file Form 4 on EDGAR were required to mail the form directly to the SEC. The SEC retained those forms in the basement of its Washington, D.C. headquarters and did not upload the mail-filed forms to EDGAR.13 At the time, Thomson Reuters sent a courier to the SEC to scan the mail-filed forms and provided the scans to institutional subscribers. Thus, the practice of mail-filing shielded managers from scrutiny by a large swathe of the public.14 After June 2003, all firms

were required to report Form 4 on EDGAR within two business days. Related to our focus on

mandatory EDGAR reporting of managers' equity trades, a concurrent paper by Liu (2019) tests

the predictions of our model using the creation of EDGAR in 1993. Liu (2019) finds evidence of

a positive effect (no effect) on discretionary accruals among firms with low (high) analyst and

institutional following.

Mandatory EDGAR reporting of managers’ equity trades occurs in 2003, and thus pre-

dates the availability of our primary measure of misreporting (Bias) and media coverage (Media),

which both begin in 2004. Consequently, for this analysis, we use AAERs as our measure of

misreporting and focus on analyst following and institutional investor following as our measures

of public scrutiny.15 As a result, the sample in this analysis spans 1995-2011 and covers 85,895

firm-years. Panel A of Table 12 presents descriptive statistics for this sample. Statistics are

generally similar to those for the main sample reported in Table 1.

5.5.2 Difference-in-differences design

13 https://www.sec.gov/rules/final/33-8230.htm
14 For example, HealthSouth has no Form 4s on EDGAR before 2003 despite having significant manager trades on Thomson Reuters. See, e.g., Chang and Suk (1998).
15 We obtain data on AAERs from 1995 to 2011 from the Center for Financial Reporting and Management.



We begin our analysis by using a standard difference-in-differences design to study the effect of mandatory EDGAR reporting on misreporting:

AAERt+1 = α + β1 Treated * Post + β2 Treated + γ Controlst + εt. (4)

Treated is a binary indicator variable equal to one for firms affected by the regulation (i.e., firms

that were not voluntarily filing their Form 4s on EDGAR prior to the mandate) and zero otherwise.

Post is a binary indicator variable equal to one for fiscal years ended after June 30, 2003, when the

mandate became effective. Our vector of control variables includes the same controls as in eqn.

(2), including industry and year fixed effects. Note that year fixed effects absorb the main effect of Post in our regressions. The coefficient on the interaction term, Treated * Post, represents the

average treatment effect. Our treatment (control) group is comprised of 57,207 (28,688) firm-years

for firms that were not (were) voluntarily filing their Form 4s on EDGAR.
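A minimal sketch of how eqn. (4) can be estimated appears below (ours, not the authors' code), with industry and year fixed effects and standard errors clustered by firm; the column names (aaer_lead, treated, post, firm_id, and the controls) are illustrative placeholders.

```python
# Sketch (not the authors' code): difference-in-differences estimate of eqn. (4).
import pandas as pd
import statsmodels.formula.api as smf

def estimate_did(df: pd.DataFrame, controls: list[str]):
    # Year fixed effects absorb the Post main effect, so only Treated and the
    # Treated:Post interaction (the average treatment effect) enter directly.
    rhs = " + ".join(["treated:post", "treated"] + controls + ["C(year)", "C(industry)"])
    model = smf.ols(f"aaer_lead ~ {rhs}", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# Example usage (df is a firm-year panel):
# res = estimate_did(df, ["size", "mb", "leverage", "roa"])
# print(res.params["treated:post"], res.tvalues["treated:post"])
```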

Panel B of Table 12 presents results from our difference-in-differences regressions.

Column (1) presents results from estimating the model in eqn. (4). The average treatment effect is

insignificant (t-stat = –1.44), suggesting that the increase in public scrutiny had no average effect

when pooled across all observations. One potential reason for this is that the assumption of parallel

trends might be violated. In Column (2), we explore this possibility by estimating the difference

in misreporting between treatment and control groups in the years leading up to and following the

regulatory mandate. Specifically, we estimate eqn. (4) replacing Post with the indicator variables:

I(Year ≤ 1999), I(Year = 2000), I(Year = 2001), I(Year = 2003), I(Year = 2004), and I(Year ≥ 2005). In this

specification, the year immediately preceding the regulatory mandate (Year = 2002) is the omitted

group. If parallel trends are violated, we expect to see differences between treatment and control

groups in the years leading up to the regulatory mandate.16 Column (2) shows no evidence of a

16 See Christensen, Hail, and Leuz (2016), Christensen et al. (2017), Duguay, Rauter, and Samuels (2019), and Samuels (2019) for examples of studies that use this design to test for parallel trends.



difference in misreporting between our treatment and control groups leading up to the regulatory

mandate, consistent with the parallel trends assumption.
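As an illustration of this parallel-trends specification, the sketch below (ours, not the authors' code) replaces Post with event-time indicators around the 2003 mandate, with 2002 as the omitted base year; all column names are placeholders.

```python
# Sketch (not the authors' code): event-time version of eqn. (4) used to test
# for parallel pre-trends (2002 is the omitted base year).
import pandas as pd
import statsmodels.formula.api as smf

def estimate_pretrends(df: pd.DataFrame):
    df = df.copy()
    df["le1999"] = (df["year"] <= 1999).astype(int)
    df["ge2005"] = (df["year"] >= 2005).astype(int)
    for y in (2000, 2001, 2003, 2004):
        df[f"y{y}"] = (df["year"] == y).astype(int)
    bins = ["le1999", "y2000", "y2001", "y2003", "y2004", "ge2005"]
    # Year fixed effects absorb the event-time main effects; only their
    # interactions with Treated are of interest here.
    rhs = " + ".join([f"treated:{b}" for b in bins] + ["treated", "C(year)", "C(industry)"])
    fit = smf.ols(f"aaer_lead ~ {rhs}", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
    return fit.params.filter(like="treated:"), fit
```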

5.5.3 Heterogeneous treatment effects

While most prior work on regulatory shocks focuses on the average treatment effect, a

distinguishing feature of our analysis is that we predict that the shape of the relation between public

scrutiny and misreporting is unimodal. Tests of a unimodal relation entail comparing the treatment

effect across subsamples of firms. Specifically, if the relation between public scrutiny and

misreporting is unimodal then theory would predict: (i) firms with low levels of public scrutiny

prior to the shock are on the left side of the theoretical peak in misreporting, and thus a shock that

increases scrutiny increases misreporting (moving them closer to the peak) and (ii) firms with high

levels of public scrutiny prior to the shock are on the right side of the theoretical peak in

misreporting, and thus a shock that increases scrutiny would decrease misreporting (moving them

further away from the peak). See Figure 4 for a graphical representation of these predictions. In

this regard, our analysis highlights that the standard practice of narrowly focusing on the sign and

significance of the average treatment effect can mask important heterogeneity in the effect of the

regulatory shock.

In Table 13, we estimate a difference-in-differences design that allows for heterogeneous

treatment effects. Specifically, we modify eqn. (4) to estimate different treatment effects for each

quintile of public scrutiny. Table 13 presents results from estimating eqn. (4) after interacting

Treated * Post (and all associated main effects) with indicator variables for whether the firm

appears in each of the respective quintiles prior to the EDGAR mandate (Scrutiny_Q2 through

Scrutiny_Q5, where Scrutiny_Q1 is the omitted group). In this specification, the coefficient on

Treated * Post measures the treatment effect in the first quintile of Scrutiny, and the coefficient on



Treated * Post * Scrutiny_Q5 measures the incremental treatment effect in the fifth quintile of

Scrutiny.17
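A sketch of this specification appears below (ours, not the authors' code). It uses a fully interacted formula so that the coefficient on treated:post is the quintile-1 treatment effect and the three-way interactions give the incremental effects for quintiles 2 through 5; for brevity it omits the controls and fixed effects that our estimates include, and all column names are placeholders.

```python
# Sketch (not the authors' code): heterogeneous treatment effects by pre-mandate
# scrutiny quintile, mirroring the parameterization in Table 13, Panel A.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_het_did(df: pd.DataFrame):
    # scrutiny_q holds the firm's pre-period scrutiny quintile (1 through 5);
    # quintile 1 is the omitted base group in the categorical expansion.
    fit = smf.ols("aaer_lead ~ treated * post * C(scrutiny_q)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
    return fit.params.filter(regex="treated:post"), fit
```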

In Column (1) of Panel A, we present results using analyst following to measure public

scrutiny, and in Column (2) we present results using institutional investor following. Regardless

of measure, we find: (i) that the treatment effect for firms in the first quintile of public scrutiny

(Treated * Post) is positive and highly significant (t-stats = 2.56 and 3.53), and (ii) that the

incremental treatment effect for firms in the fifth quintile of public scrutiny (Treated * Post *

Scrutiny_Q5) is negative and significant (t-stats = –3.91 and –3.62). Panel B reports the total

treatment effects for each quintile of public scrutiny. Regardless of specification, we find evidence

of a significant treatment effect only in the tails of public scrutiny. Consistent with a unimodal

relation between public scrutiny and misreporting (see Figure 4), regardless of specification, we

find the total treatment effect in the top quintile of public scrutiny is reliably negative and the total

treatment effect in the bottom quintile of public scrutiny is reliably positive (p-values<0.01).

While we cannot definitively rule out confounding factors as an explanation for these results, an omitted

variable would need to: (i) be positively correlated with misreporting; (ii) vary over time with the

regulatory mandate (first difference); (iii) vary cross-sectionally with the classification of affected

and unaffected firms (second difference); and (iv) result in the difference between the two groups

flipping sign based on the level of public scrutiny (third difference). Collectively, these results

offer the first evidence that EDGAR dissemination of managers’ equity trades affects misreporting,

and in a manner consistent with a unimodal relation between public scrutiny and misreporting.

6. Conclusion

17 For each firm we compute the average value of Analysts (and Institutions) prior to the regulation, and rank firms into quintiles based on the respective pre-period average.



In this paper we examine how the ex ante level of public scrutiny influences a manager’s

subsequent decision to misreport. The conventional wisdom in the literature is that high levels of

public scrutiny facilitate monitoring and increase the cost of misreporting by increasing the

probability of detection, suggesting a negative relation. However, public scrutiny also increases

the weight that investors place on earnings in valuing the firm. This in turn increases the valuation

benefit of misreporting, suggesting a positive relation. We formalize these two countervailing

forces––“monitoring” and “valuation”––in the context of a parsimonious model of misreporting,

and use the insights from the model to guide our subsequent empirical tests.

We show that the theoretical relation between public scrutiny and misreporting is

unimodal: there is a unique threshold, below which the relation is positive, and above which the

relation is negative. Consequently, the relation is neither unambiguously positive nor

unambiguously negative: misreporting is greatest in a medium-scrutiny environment and least in

both high- and low-scrutiny environments. We test the novel prediction of our model––that the

shape of the relation is unimodal––using multiple empirical measures of misreporting and public

scrutiny and three distinct research designs.

In the first research design, we estimate polynomial regressions that include both linear

and 2nd-order polynomial terms. Consistent with our theoretical predictions, we find robust

evidence of a negative coefficient on the 2nd-order polynomial term in public

scrutiny. In the second research design, we use state-of-the-art spline regression models that treat

the shape of the relation as piecewise linear around a threshold level of public scrutiny. Consistent

with a unimodal relation, we find that the slope to the left (right) of the threshold is reliably positive

(negative). In the third research design, we exploit a regulatory shock that increased investors’

ability to scrutinize managers. Specifically, we estimate how mandatory EDGAR reporting of



managers’ trades affects misreporting using a difference-in-differences design that allows for

heterogeneous treatment effects. Consistent with a unimodal relation between public scrutiny and

misreporting, we predict and find that the EDGAR mandate increases (decreases) misreporting for

firms with extremely low (high) levels of scrutiny prior to the mandate. Collectively, the fact

that our theoretical predictions hold across multiple measures of the theoretical constructs, multiple

designs, and multiple empirical settings underscores the generality and power of the economic

forces we study.

Our findings have wide-ranging implications. First, by formally modeling the manager's

cost-benefit tradeoff, our analysis highlights the limitation of existing intuition in the empirical

literature on misreporting. A distinguishing feature of our analysis is that we predict that the

relation between misreporting and public scrutiny takes a specific, non-linear functional form––a

prediction that would be difficult to intuit in the absence of formal theory. Our empirical findings

suggest a more nuanced view of how public scrutiny affects the cost-benefit tradeoff facing the

manager. While prior work often frames empirical predictions in terms of linear relations, our

results suggest that a promising path for future work would be to explore non-linear or unimodal

relations––relations that are masked by linear specifications. Our empirical findings also suggest

that a promising path for future work on regulatory shocks is to consider designs that allow for

heterogeneous treatment effects: narrowly focusing on the sign and significance of the “average

treatment effect” can mask important heterogeneity in the effect of the shock. Our results

underscore that increased public scrutiny can sometimes have unintended, adverse consequences.



References

Armstrong, C., Larcker, D., Ormazabal, G., Taylor, D., 2013. The relation between equity
incentives and misreporting: The role of risk-taking incentives. Journal of Financial
Economics 109, 327-350.
Bertomeu, J., Cheynel, E., Li, E., Liang, Y., 2017. A simple theory-based approach to identifying
misreporting costs. Working paper.
Bertomeu, J., Cheynel, E., Li, E., Liang, Y., 2019. How uncertain is the market about managers'
reporting objectives? Evidence from structural estimation. Working paper.
Blackburne, T., Kepler, J., Quinn, P., Taylor, D., 2019. Undisclosed SEC investigations.
Working paper.
Blankespoor, E., deHaan, E., Zhu, C., 2018. Capital market effects of media synthesis and
dissemination: Evidence from robo-journalism. Review of Accounting Studies 23, 1-36.
Bowen, R., Rajgopal, S., Venkatachalam, M., 2008. Accounting discretion, corporate
governance, and firm performance. Contemporary Accounting Research 25, 351-405.
Burns, N., Kedia, S., 2006. The impact of performance-based compensation on misreporting.
Journal of Financial Economics 79, 35-67.
Bushee, B., 1998. The influence of institutional investors on myopic R&D investment behavior.
The Accounting Review 73, 305-333.
Bushman, R., Piotroski, J., Smith, A., 2004. What determines corporate transparency? Journal of
Accounting Research 42, 207-252.
Chang, S., Suk, D., 1998. Stock prices and the secondary dissemination of information: The
Wall Street Journal's "Insider Trading Spotlight" column. The Financial Review 33, 115-
128.
Christensen, H., Hail, L., Leuz, C., 2016. Capital-market effects of securities regulation: Prior
conditions, implementation, and enforcement. Review of Financial Studies 29, 2885-2924.
Christensen, H., Floyd, E., Liu, L., Maffett, M., 2017. The real effects of mandated information
on social responsibility in financial reports: Evidence from mine-safety records. Journal of
Accounting and Economics 64, 284-304.
Christie, A., 1987. On cross-sectional analysis in accounting research. Journal of Accounting and
Economics 9, 231-258.
Collins, D., Kothari, S., 1989. An analysis of intertemporal and cross-sectional determinants of
earnings response coefficients. Journal of Accounting and Economics 11, 143-181.
Dechow, P., Ge, W., Schrand, C., 2010. Understanding earnings quality: A review of the proxies,
their determinants and their consequences. Journal of Accounting and Economics 50, 143-
181.
Duguay, R., Rauter, T., Samuels, D., 2019. The impact of open data on public procurement.
Working paper.
Dyck, A., Morse, A., Zingales, L., 2010. Who blows the whistle on corporate fraud? Journal of
Finance 65, 2213-2253.
Dyer, T., 2018. Does public information acquisition level the playing field or widen the gap? An
analysis of local and non-local investors. Working paper.
Efendi, J., Srivastava, A., Swanson, E., 2007. Why do corporate managers misstate financial
statements? The role of option compensation and other factors. Journal of Financial
Economics 85, 667-708.



Fang, V., Huang, A., Wang, W., 2017. Imperfect accounting and reporting bias. Journal of
Accounting Research 55, 919-962.
Ferri, F., Zheng, R., Zou, Y., 2018. Uncertainty in managers’ reporting objectives and investors’
response to earnings reports: Evidence from the 2006 executive compensation disclosures.
Journal of Accounting and Economics 66, 339-506.
Fischer, P., Verrecchia, R., 2000. Reporting bias. The Accounting Review 75, 229-245.
Frankel, A., Kartik, N., 2019. Muddled Information. Journal of Political Economy 127, 1739-
1775.
Friedman, J., 1991. Adaptive spline networks. Advances in Neural Information Processing
Systems, 675-683.
Glaeser, S. and Guay, W., 2017. Identification and generalizability in accounting research: A
discussion of Christensen, Floyd, Liu, and Maffett (2017). Journal of Accounting and
Economics, 64, 305-312.
Grossman, S., Stiglitz, J., 1980. On the impossibility of informationally efficient markets.
American Economic Review, 70, 393-408.
Hennes, K., Leone, A., Miller, B., 2008. The importance of distinguishing errors from
irregularities in restatement research: The case of restatements and CEO/CFO turnover.
Accounting Review 83, 1487-1519.
Karpoff, J., Koester, A., Lee, S., Martin, G., 2017. Proxies and Databases in Financial
Misconduct Research. The Accounting Review 92, 129-163.
Kothari, S.P. and Zimmerman, J., 1995. Price and return models. Journal of Accounting and
Economics 20, 155-192.
Kyle, A., 1985. Continuous auctions and insider trading. Econometrica 53, 1315-1335.
Lambert, R., Leuz, C., Verrecchia, R., 2007. Accounting information, disclosure, and the cost of
capital. Journal of Accounting Research 45, 385-420.
Lewbel, A., 2018. The identification zoo: Meanings of identification in econometrics. Journal of
Economic Literature, forthcoming.
Liu, Y., 2019. Information acquisition costs and misreporting: Evidence from the
implementation of EDGAR. Working Paper.
Miller, G., 2006. The press as a watchdog for accounting fraud. Journal of Accounting Research
44(5), 1001-1033.
Miller, G., Skinner, D., 2015. The evolving disclosure landscape: How changes in technology, the
media, and capital markets are affecting disclosure. Journal of Accounting Research, 53(2),
221-239.
Piotroski, J., Roulstone, D., 2004. The influence of analysts, institutional investors, and insiders
on the incorporation of market, industry, and firm-specific information into stock prices. The
Accounting Review 79(4), 1119-1151.
Samuels, D., 2019. Government procurement and changes in firm transparency. The Accounting
Review, forthcoming.
Teoh, S., Wong, T.J., 1993. Perceived auditor quality and the earnings response coefficient. The
Accounting Review 68(2), 346-366.
Yu, F., 2008. Analyst coverage and earnings management. Journal of Financial Economics
88(2), 245-271.



Figure 1. Theoretical Relation between Public Scrutiny and Financial Misreporting

This figure illustrates the theoretical predictions concerning the firm’s public scrutiny, the cost of
misreporting, the earnings response coefficient, and financial misreporting. On the one hand,
public scrutiny increases the probability of detection (i.e., the cost of misreporting), which
decreases misreporting. We refer to this as the “monitoring channel.” On the other hand, public
scrutiny of the firm increases the price response to earnings (i.e., the earnings response
coefficient), which increases misreporting. We refer to this as the “valuation channel.”

[Diagram: Public scrutiny (s) → (+) Cost of misreporting (C) → (–) Misreporting (E[b]), the "monitoring channel"; Public scrutiny (s) → (+) ERC (β) → (+) Misreporting (E[b]), the "valuation channel."]



Figure 2. Unimodal Relation

This figure plots the theoretical relation between expected bias (E[b]) and public scrutiny (s) in
our model. E[b] appears on the y-axis; s appears on the x-axis. The dashed red line illustrates the
shape of the relation with a convex (quadratic) cost function, and the solid black line illustrates
the shape of the relation with a linear cost function. For the purposes of this figure, and without
loss of generality, we set $\tau_x = s$, $\mu_x = 1$, $\sigma_v^2 = \sigma_n^2 = \tfrac{1}{2}$, and both cost-function parameters equal to $\tfrac{1}{2}$.



Figure 3. Empirical Relation between Public Scrutiny and Subsequent Misreporting

This figure presents the average values of Biast+1 by quintile of Scrutinyt and its components. Panel
A presents results for quintiles of Scrutiny, Panel B for quintiles of Analysts, Panel C for quintiles
of Media, and Panel D for quintiles of Institutions.

[Four bar-chart panels: Panel A. Bias by Scrutiny; Panel B. Bias by Analysts; Panel C. Bias by Media; Panel D. Bias by Institutions. Each panel plots average Biast+1 (y-axis, 0.05 to 0.25) across quintiles Q1–Q5 of the respective scrutiny measure (x-axis).]



Figure 4. Unimodal relation with a Shock to Public Scrutiny

This figure illustrates our predictions for how a regulatory shock to public scrutiny (s) affects
expected bias (E[b]). E[b] appears on the y-axis. s appears on the x-axis. The black solid line
represents the unimodal relation between the two constructs. We consider two firms: Firm A and
Firm B. Firm A’s pre-existing level of public scrutiny is low, and appears below the threshold
level of public scrutiny, s*. Consequently, for Firm A, a small increase in public scrutiny (Δs > 0)
leads to an increase in expected bias (ΔE[b] > 0). Firm B’s pre-existing level of public scrutiny is
high, and appears above the threshold level of public scrutiny, s*. Consequently, for Firm B, a
small increase in public scrutiny (Δs > 0) leads to a decrease in expected bias (ΔE[b] < 0). Thus,
in the presence of a unimodal relation between public scrutiny and misreporting, the sign of the
effect depends on whether the firm is above or below the threshold level of public scrutiny.

[Plot: expected bias E[b] (y-axis) as a unimodal function of public scrutiny s (x-axis). Firm A lies below the threshold s*, so Δs > 0 implies ΔE[b] > 0; Firm B lies above s*, so Δs > 0 implies ΔE[b] < 0.]



Table 1. Sample

Panel A presents the number of firms appearing in the CRSP/Compustat universe and the number
of firms appearing in our sample. Panel B presents descriptive statistics for the variables used in
our analysis. Acquisition is an indicator variable for whether an acquisition accounts for 20% or
more of total sales. Analysts is the number of analysts with one-year ahead earnings forecasts on
I/B/E/S as of the end of the fiscal year. $Assets is the dollar value of total assets (in millions). Beta
is the slope coefficient from a single factor market model. BHAR is the market-adjusted buy and
hold return over the twelve months ending three months after the fiscal year end. Capital is net
plant, property, and equipment scaled by total assets. Earnings is income before extraordinary
items divided by shares outstanding. Financing is total debt and equity issuance scaled by total
assets. FirmAge is the number of years the firm appears on Compustat. Institutions is the number
of institutional owners listed on Thomson Reuters as of the end of the fiscal year. Intangibles is
the ratio of research and development and advertising expense to sales. InterestCov is the ratio of
interest expense to net income. Leverage is long term debt plus short term debt, scaled by total
assets. MB is market value of equity plus book value of liabilities divided by book value of assets.
Media is the number of news releases about the firm over the fiscal year on RavenPack Analytics.
Price is share price three months after the fiscal year end. Returns is the buy-and-hold return over
the fiscal year. ROA is income before extraordinary items scaled by total assets. $Sales is the dollar
value of sales (in millions). SalesGrowth is the change in sales scaled by prior-period sales. Size
is the natural logarithm of total assets. Surprise is the forecast error from a random walk model of
annual earnings scaled by beginning of period price.

Panel A. Sample Composition


CRSP/Compustat Sample
Year Universe (requiring controls)
2004 5,654 5,105
2005 5,592 5,067
2006 5,512 4,969
2007 5,399 4,805
2008 5,135 4,695
2009 4,860 4,486
2010 4,750 4,313
2011 4,664 4,232
2012 4,582 4,159
Total 46,148 41,831



Table 1. Sample (cont’d)

Panel B. Descriptive Statistics


Variable Mean Std 25th Median 75th
Acquisition 0.030 0.171 0.000 0.000 0.000
Analysts 5.139 6.131 1.000 3.000 8.000
$Assets (in millions) 7,002.831 25,269.653 146.518 643.245 2,677.122
Beta 0.989 0.588 0.562 0.989 1.384
BHAR 0.031 0.680 –0.276 –0.052 0.190
Capital 0.220 0.242 0.031 0.122 0.332
Earnings 2.632 15.574 –0.146 0.645 1.784
Financing 0.128 0.231 0.003 0.028 0.139
FirmAge 19.634 14.464 9.000 15.000 25.000
Institutions 125.399 156.087 23.000 78.000 157.000
Intangibles 0.210 0.990 0.000 0.012 0.072
InterestCov 0.818 0.884 0.017 0.295 2.000
Leverage 0.204 0.209 0.021 0.153 0.316
MB 1.803 1.350 1.034 1.328 2.011
Media 21.545 20.854 9.000 17.000 27.000
Price 24.110 31.563 6.860 15.510 29.635
Returns 0.091 0.541 –0.233 0.036 0.306
ROA –0.020 0.207 –0.014 0.023 0.069
$Sales (in millions) 3,321.100 10,067.715 68.333 327.378 1,615.013
SalesGrowth 0.150 0.440 –0.030 0.079 0.218
Size 6.502 2.173 4.987 6.467 7.892
Surprise 0.017 0.234 –0.024 0.006 0.033



Table 2. Measuring the Level of Public Scrutiny

This table describes how we measure the level of public scrutiny (Scrutiny). We measure public
scrutiny using the first principal component from a factor analysis of analyst following (Analysts),
institutional investor following (Institutions), and media coverage (Media). All variables are
standardized prior to the factor analysis. Panel A presents the principal component output. Panel
B presents descriptive statistics for Scrutiny (Scrutiny = 0.413 * Analysts + 0.445 * Institutions +
0.366 * Media). Panel C presents the correlation coefficients between Scrutiny and its components.
Spearman (Pearson) correlations appear above (below) the diagonal.

Panel A. Principal Component Output


Cumulative
Proportion Proportion First Principal
of the variation of the variation Component
Factor Eigenvalue explained explained Weights Variables
1st 1.989 66.3% 66.3% 0.413 Analysts
2nd 0.669 22.3% 88.6% 0.445 Institutions
3rd 0.341 11.4% 100.0% 0.366 Media

Panel B. Descriptive Statistics


Variable Mean Std 25th Median 75th
Scrutiny 0.00 1.000 –0.646 –0.280 0.276

Panel C. Correlation Matrix


Scrutiny Analysts Institutions Media
Scrutiny 1.00 0.84 0.91 0.74
Analysts 0.87 1.00 0.72 0.37
Institutions 0.90 0.69 1.00 0.53
Media 0.76 0.50 0.58 1.00
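For readers who wish to construct a similar composite, the sketch below (ours, not the authors' code) extracts the first principal component of standardized analyst following, institutional ownership, and media coverage; the column names are placeholders and the resulting loadings need not match Panel A exactly.

```python
# Sketch (not the authors' code): first-principal-component composite of three
# standardized scrutiny proxies, in the spirit of Table 2.
import numpy as np
import pandas as pd

def build_scrutiny(df: pd.DataFrame) -> pd.Series:
    cols = ["analysts", "institutions", "media"]
    z = (df[cols] - df[cols].mean()) / df[cols].std()       # standardize each input
    eigvals, eigvecs = np.linalg.eigh(z.corr().to_numpy())  # eigen-decompose the correlation matrix
    w = eigvecs[:, eigvals.argmax()]                        # loadings of the first principal component
    w = w if w.sum() > 0 else -w                            # sign convention: positive loadings
    return pd.Series(z.to_numpy() @ w, index=df.index, name="scrutiny")
```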



Table 3. Measuring Misreporting

This table presents descriptive statistics for our measures of misreporting. Panel A presents
average values of misreporting by year. Column (1) presents the total number of firm-years in our
sample. Columns (2) and (3) present average values of our measures of misreporting. Restate is a
binary measure of misreporting that equals one if the respective firm-year was eventually restated
due to intentional misrepresentation, and zero otherwise, and Bias is the amount of restated
earnings due to intentional misrepresentation, scaled by beginning total assets and expressed in
basis points. The sample ends in 2012 to allow a minimum lag of four years between period of
restatement and the restatement announcement (i.e., our measures of misreporting are calculated
using restatements announced through 2016). Panel B presents correlations between our measure
of misreporting, public scrutiny, and key control variables. Pearson (Spearman) correlations
appear above (below) the diagonal.

Panel A. Measures of Misreporting


Sample
observations Avg Avg
Year (Table 1) Restate Bias
2004 5,105 0.013 0.295
2005 5,067 0.010 0.225
2006 4,969 0.007 0.134
2007 4,805 0.007 0.138
2008 4,695 0.007 0.138
2009 4,486 0.007 0.124
2010 4,313 0.007 0.138
2011 4,232 0.007 0.142
2012 4,159 0.006 0.124
Total 41,831 0.008 0.165

Panel B. Correlation Matrix


Variable (1) (2) (3) (4) (5) (6) (7) (8) (9) (10)
(1) Analysts 1.00 0.69 0.37 0.84 0.00 0.01 0.50 0.19 0.02 0.13
(2) Institutions 0.72 1.00 0.53 0.91 0.00 0.01 0.63 0.21 0.07 0.07
(3) Media 0.50 0.58 1.00 0.74 0.00 0.00 0.38 0.05 0.06 0.00
(4) Scrutiny 0.87 0.90 0.76 1.00 0.00 0.01 0.61 0.19 0.06 0.08
(5) Bias 0.01 0.03 0.01 0.02 1.00 0.93 –0.01 0.01 0.00 0.01
(6) Restate 0.01 0.03 0.01 0.02 1.00 1.00 0.01 0.01 0.01 0.00
(7) Size 0.51 0.70 0.38 0.64 0.01 0.01 1.00 0.35 0.25 –0.26
(8) ROA 0.27 0.36 0.08 0.29 0.00 0.00 0.23 1.00 –0.07 –0.13
(9) Leverage 0.07 0.15 0.09 0.13 0.01 0.01 0.37 -0.11 1.00 –0.13
(10) MB 0.26 0.24 0.16 0.26 0.01 0.01 -0.17 0.34 -0.18 1.00



Table 4. Validation of the Valuation Channel

The table presents results from estimating the earnings response coefficient as a function of the
level of public scrutiny. Panel A presents results from estimating the earnings response coefficient
using a price-level regression. Panel B presents results from estimating the earnings response
coefficient using returns regressions. Columns (1) and (2) present results from OLS regressions.
Column (3) presents results from OLS regressions using the quintile ranks of the independent
variables scaled between zero and one. t–statistics appear in parentheses and are based on standard
errors clustered by firm. ***, **, and * denote statistical significance at the 0.01, 0.05, and 0.10 levels
(two–tail), respectively. Sample of 41,379 observations from 2004 to 2012.

Panel A. Price-levels Regression


Dependent Variable:
Price
Variable (1)
Scrutiny 8.524***
(20.79)
Earnings 0.591***
(9.42)
Earnings*Scrutiny 0.248**
(2.49)
F 258.02

Panel B. Return Regressions


Dependent Variable: BHAR
Ranks
Variable (1) (2) (3)
Scrutiny 0.078*** 0.072*** 0.046*
(11.77) (11.83) (1.65)
Beta 0.118*** 0.102*** –0.047**
(11.82) (10.96) (–2.19)
Ln(MV) –0.055*** –0.049*** 0.106***
(–11.91) (–11.53) (3.24)
BM –0.134*** –0.138*** –0.154***
(–24.61) (–23.12) (–9.86)
Surprise 0.809*** 2.699*** 0.558***
(14.37) (4.94) (17.86)
Surprise*Beta . 0.467*** 0.326***
. (4.67) (7.20)
Surprise*Ln(MV) . –0.197*** –0.807***
. (–4.08) (–10.01)
Surprise*BM . –0.105*** –0.184***
. (–4.08) (–5.02)
Surprise*Scrutiny 0.160** 0.324*** 0.312***
(2.45) (2.91) (4.47)
F 215.85 220.02 388.18



Table 5. Validation of the Monitoring Channel

Panel A presents results from sorting firms into quintiles based on the level of public scrutiny and
calculating the average values of the probability of an SEC investigation in the subsequent year by
quintile. Panel B presents results from estimating the relation between the level of public scrutiny
and the probability of an SEC investigation in the subsequent year. t–statistics appear in
parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical
significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 41,831
observations from 2004 to 2012.

Panel A. SEC Investigations by Quintile of Public Scrutiny


Quintile
Variable 1 2 3 4 5
Quintiles of Analystst SECInvestigatet+1 0.104 0.104 0.139 0.163 0.258
Nobs 10,154 9,740 5,551 8,345 8,041

Quintiles of Mediat SECInvestigatet+1 0.073 0.101 0.144 0.175 0.272


Nobs 8,695 9,254 7,523 8,348 8,011

Quintiles of Institutionst SECInvestigatet+1 0.068 0.090 0.136 0.185 0.272


Nobs 8,425 8,320 8,426 8,312 8,348
Quintiles of Scrutinyt SECInvestigatet+1 0.063 0.090 0.145 0.176 0.277
Nobs 8,367 8,367 8,365 8,366 8,366
Δ(q,q–1) 0.028 0.055 0.031 0.101
%Δ(q,q–1) 0.445 0.603 0.213 0.575
p-value: Δ = 0 [<0.01] [<0.01] [<0.01] [<0.01]



Table 5. Validation of the Monitoring Channel (cont’d)

Panel B. Regression Output


Dependent variable: SECInvestigatet+1
Linear Linear Linear Polynomial

Variable (1) (2) (3) (4)


Scrutiny 0.085*** 0.092*** 0.079*** 0.073***
(20.31) (17.24) (13.76) (10.55)
Scrutiny2 0.002
(0.95)
Control variables
Size . 0.006 0.022*** 0.024***
. (1.26) (4.35) (4.68)
MB . 0.007** 0.005 0.006*
. (2.00) (1.50) (1.71)
Leverage . 0.004 0.006 0.006
. (1.09) (1.54) (1.55)
ROA . –0.000 –0.002 –0.002
. (–0.04) (–0.55) (–0.48)
Returns . –0.006*** –0.006*** –0.006***
. (–3.02) (–3.25) (–3.30)
Capital . –0.018*** –0.018*** –0.018***
. (–5.81) (–4.39) (–4.41)
Intangibles . 0.002 –0.003 –0.003
. (0.46) (–0.92) (–0.90)
Financing . 0.001 0.000 0.000
. (0.31) (0.06) (0.09)
Acquisition . –0.001 –0.003 –0.003
. (–0.73) (–1.45) (–1.40)
InterestCov . 0.031*** 0.027*** 0.027***
. (9.63) (8.51) (8.49)
FirmAge . –0.000 0.001 0.001
. (–0.05) (0.26) (0.17)
SalesGrowth . –0.001 –0.001 –0.001
. (–0.54) (–0.62) (–0.60)
Year effects No Yes Yes Yes
Industry effects No No Yes Yes
F 412.68 35.96 34.40 34.17



Table 6. Level of Public Scrutiny and Future Misreporting

This table presents results from sorting firms into quintiles based on the level of public scrutiny
and calculating the average values of our measures of misreporting by quintile. Panel A presents
results for Bias. Panel B presents results for Restate. Misreporting variables are measured in the
subsequent year (i.e., in t+1). Sample of 41,831 firm-years.

Panel A. Bias by Quintile of Public Scrutiny


Quintile
Variable 1 2 3 4 5
Quintiles of Analystst Biast+1 0.126 0.127 0.193 0.220 0.152
Nobs 10,154 6,107 9,184 8,345 8,041

Quintiles of Mediat Biast+1 0.110 0.163 0.226 0.161 0.162


Nobs 8,695 7,896 8,880 7,877 8,482

Quintiles of Institutionst Biast+1 0.103 0.118 0.196 0.237 0.170


Nobs 8,425 8,320 8,426 8,312 8,348
Quintiles of Scrutinyt Biast+1 0.100 0.125 0.211 0.238 0.150
Nobs 8,367 8,365 8,367 8,366 8,366
Δ(q,q–1) 0.025 0.086 0.027 –0.088
%Δ(q,q–1) 0.250 0.688 0.128 –0.370
p-value: Δ = 0 [0.33] [<0.01] [0.47] [<0.01]

Panel B. Restate by Quintile of Public Scrutiny


Quintile
Variable 1 2 3 4 5
Quintiles of Analystst Restate t+1 0.006 0.006 0.009 0.010 0.008
Nobs 10,154 6,107 9,184 8,345 8,041

Quintiles of Mediat Restate t+1 0.006 0.007 0.011 0.007 0.008


Nobs 8,695 7,896 8,880 7,877 8,482

Quintiles of Institutionst Restate t+1 0.005 0.005 0.009 0.012 0.009


Nobs 8,425 8,320 8,426 8,312 8,348
Quintiles of Scrutinyt Restate t+1 0.004 0.007 0.009 0.011 0.008
Nobs 8,367 8,365 8,367 8,366 8,366
Δ(q,q–1) 0.003 0.002 0.002 –0.003
%Δ(q,q–1) 0.750 0.286 0.222 –0.273
p-value: Δ = 0 [0.03] [0.05] [0.22] [0.02]



Table 7. Polynomial Regressions

This table presents results from estimating equation (2). Columns (1) through (3) present results
from estimating the relation between misreporting and 1st- and 2nd- order polynomials of the level
of public scrutiny. Column (4) presents results from estimating the linear relation between
misreporting and public scrutiny. All variables are as defined in Table 1. t–statistics appear in
parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical
significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 41,831 firm-
years.

Dependent variable: Biast+1


Polynomial Polynomial Polynomial Linear
Variable (1) (2) (3) (4)
Scrutiny2 –0.030*** –0.035*** –0.029*** .
(–3.99) (–4.57) (–3.72) .
Scrutiny 0.059** 0.098*** 0.064** –0.006
(2.49) (3.51) (2.13) (–0.35)
Control variables
Size . –0.039* –0.007 0.015
. (–1.83) (–0.27) (0.61)
MB . 0.004 0.009 0.016
. (0.24) (0.56) (1.06)
Leverage . 0.010 0.013 0.014
. (0.50) (0.71) (0.72)
ROA . 0.038** 0.033** 0.036**
. (2.50) (2.19) (2.35)
Returns . 0.010 0.006 0.005
. (0.70) (0.42) (0.35)
Capital . –0.023* –0.056*** –0.057***
. (–1.70) (–2.86) (–2.91)
Intangibles . –0.012 –0.007 –0.006
. (–1.04) (–0.54) (–0.47)
Financing . –0.019* –0.021* –0.020*
. (–1.77) (–1.85) (–1.75)
Acquisition . –0.002 –0.006 –0.005
. (–0.23) (–0.58) (–0.47)
InterestCov . 0.047*** 0.041** 0.040**
. (2.66) (2.30) (2.26)
FirmAge . –0.001 –0.011 –0.015
. (–0.05) (–0.54) (–0.74)
SalesGrowth . 0.028** 0.028** 0.028**
. (2.17) (2.13) (2.16)
Year effects No Yes Yes Yes
Industry effects No No Yes Yes
F 15.21 2.87 2.72 2.10



Table 8. Polynomial Regressions with Additional Controls

Panel A presents results from estimating equation (2), including controls for the industry rate of
restatements due to error (IndustryError). PctError is the percentage of observations restated due
to error within the industry-year. ErrorAmount is the average difference between net income and
restated net income due to error within the industry-year, scaled by the standard deviation of net
income among non-restating firms in the industry-year (ErrorAmount). Panel B presents results
from estimating equation (2), including both linear control variables and 2nd-order polynomial
transformations of all controls. Acquisition is an indicator variable and is collinear with the 2nd-
order transformation. All other variables are as defined in Table 1. t-statistics appear in parentheses
and are based on standard errors clustered by firm. ***, **, and * denote statistical significance at
the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 41,831 firm-years.

Panel A. Controlling for Industry Error Rates


Dependent Variable: Biast+1
Variable (1) (2)
***
Scrutiny2 –0.029 –0.028***
(–3.74) (–3.72)
**
Scrutiny 0.065 0.064**
(2.15) (2.12)
Controls of Industry Error Rates
PctError2 –0.000 .
(–0.05) .
PctError 0.076*** .
(3.35) .
ErrorAmount2 . –0.006***
. (–2.69)
ErrorAmount . 0.079**
. (2.20)
Controls Yes Yes
Year effects Yes Yes
Industry effects Yes Yes
F 2.70 2.70



Table 8. Polynomial Regressions with Additional Controls (cont’d)

Panel B. Controlling for Arbitrary Non-Linearities


Dependent Variable: Biast+1
Scrutiny2 –0.019***
(–2.74)
Scrutiny 0.046
(1.54)
Non-Linear Controls Linear Controls
Size2 –0.025** Size 0.005
(–2.40) (0.20)
MB2 –0.007 MB 0.045
(–0.94) (1.38)
Leverage2 –0.005 Leverage 0.010
(–0.41) (0.32)
ROA2 –0.000 ROA 0.014
(–0.01) (0.34)
Returns2 –0.003 Returns 0.008
(–0.44) (0.44)
Capital2 0.004 Capital –0.066*
(0.21) (–1.84)
Intangibles2 0.012** Intangibles –0.101**
(2.45) (–2.13)
Financing2 –0.002 Financing –0.016
(–0.38) (–0.60)
Acquisition2 . Acquisition –0.001
. (–0.68)
InterestCov2 –0.049 InterestCov 0.070**
(–1.19) (2.37)
FirmAge2 –0.011 FirmAge 0.004
(–0.75) (0.15)
SalesGrowth2 0.004 SalesGrowth 0.017
(0.75) (0.94)
Year effects Yes
Industry effects Yes
F 2.17



Table 9. Alternative Measure of Public Scrutiny: EDGAR Downloads

This table presents results from estimating equation (2) using the number of times professional
investors downloaded the firm’s filings from EDGAR during the fiscal year (EDGAR). Columns
(1) through (3) present results from estimating the relation between misreporting and 1st- and 2nd-
order polynomials of EDGAR. Column (4) presents results from estimating the linear relation
between misreporting and EDGAR. t-statistics appear in parentheses and are based on standard
errors clustered by firm. ***, **, and * denote statistical significance at the 0.01, 0.05, and 0.10 levels
(two–tail), respectively. Sample of 41,831 firm-years.

Dependent variable: Biast+1


Polynomial Polynomial Polynomial Linear
Variable (1) (2) (3) (4)
EDGAR 2 –0.026*** –0.030*** –0.021** .
(–2.69) (–2.93) (–2.01) .
EDGAR 0.075* 0.123** 0.072 –0.011
(1.69) (2.24) (1.21) (–0.46)
Control variables
Size . –0.038 0.003 0.018
. (–0.97) (0.07) (0.42)
MB . 0.029 0.032 0.038
. (0.99) (1.07) (1.27)
Leverage . –0.002 0.011 0.015
. (–0.05) (0.32) (0.45)
ROA . 0.078*** 0.070** 0.074***
. (2.77) (2.53) (2.63)
Returns . 0.015 0.009 0.008
. (0.56) (0.34) (0.29)
Capital . –0.033 –0.094*** –0.095***
. (–1.43) (–2.88) (–2.90)
Intangibles . –0.016 –0.009 –0.008
. (–0.79) (–0.38) (–0.37)
Financing . –0.031 –0.032 –0.030
. (–1.52) (–1.54) (–1.45)
Acquisition . –0.005 –0.012 –0.012
. (–0.26) (–0.63) (–0.59)
InterestCov . 0.080** 0.072** 0.074**
. (2.56) (2.29) (2.33)
FirmAge . –0.004 –0.019 –0.021
. (–0.14) (–0.60) (–0.65)
SalesGrowth . 0.065** 0.064** 0.064**
. (2.24) (2.19) (2.21)
Year effects No Yes Yes Yes
Industry effects No No Yes Yes
F 9.23 2.26 2.11 2.04



Table 10. Alternative Measures of Misreporting:
Dollar Amount of Bias, Bias Indicator, Bias within Restatement Firms, AAERs

This table presents results from estimating equation (2) using three alternative measures of misreporting. Log(1+$Biast+1) is the natural
logarithm of one plus the amount of restated earnings in millions. Restate is an indicator variable for whether the firm-year is restated due
to fraud. Bias | Restate = 1 is our primary measure of misreporting calculated only for those 330 firms with restatements due to fraud (i.e.,
estimating our results within the sample of restatements due to fraud). AAER is an indicator variable for whether the firm-year is subject to

an SEC Accounting and Auditing Enforcement Release. Data on AAERs are from the Center for Financial Reporting and Management and
span 2004 to 2011 and cover 37,672 firm-years. t-statistics appear in parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively.

Dependent variable: Dependent variable: Dependent variable: Dependent variable:


Log(1+$Biast+1) Restatet+1 Biast+1 | Restatet+1 = 1 AAERt+1

N = 330
N = 41,831 N = 41,831 from 2004 – 2012 N = 37,672
from 2004 – 2012 from 2004 – 2012 where Restatet+1 = 1 from 2004 – 2011

Variable (1) (2) (3) (4) (5) (6) (7) (8)


Scrutiny2 –0.0034*** –0.0023** –0.0012*** –0.0009** –1.2452*** –1.6300*** –0.0007*** –0.0007***
(–3.99) (–2.23) (–3.02) (–2.12) (–3.12) (–4.05) (–3.20) (–3.28)
Scrutiny 0.0105*** 0.0036 0.0028*** 0.0017 0.1095 3.9293** 0.0018** 0.0017*
(4.06) (0.73) (2.68) (1.06) (0.09) (2.46) (2.47) (1.93)
Controls No Yes No Yes No Yes No Yes
Year effects No Yes No Yes No Yes No Yes
Industry effects No Yes No Yes No Yes No Yes
F 8.87 2.47 4.62 2.32 14.99 5.56 5.11 2.32

Table 11. Alternative Research Design: Spline Regressions

This table presents results from estimating the relation between Bias and Scrutiny using a spline
regression with threshold τ:

Biast+1 = α + β1 (Scrutiny – τ < 0 ) + β2 (Scrutiny – τ ≥ 0) + γ Controlst + εt.

Columns (1) and (2) present results from estimating regressions where the threshold is defined at
the mean of Scrutiny (i.e., τ = 0). In these specifications, β1 estimates the relation for observations
below the mean and β2 estimates the relation for observations above the mean value. Columns (3)
and (4) present results from using the multivariate adaptive regression spline (MARS) method to
simultaneously estimate both the regression coefficients and the optimal threshold that minimizes
the mean-squared error (i.e., τ = τ*). All variables are as defined in Table 1. t–statistics appear in
parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical
significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively.

OLS MARS
Researcher specified Optimal threshold selection
threshold (τ) algorithm (τ*)
(1) (2) (3) (4)
Threshold τ=0 τ=0 τ* = –0.069 τ* = –0.069

Scrutiny – τ < 0 0.179*** 0.189*** 0.195*** 0.204***


(3.46) (3.26) (3.51) (3.35)

Scrutiny – τ ≥ 0 –0.061*** –0.046** –0.058*** –0.043**


(–3.84) (–2.37) (–3.75) (–2.23)

p-value: β1 – β2 = 0 <0.001 <0.001 <0.001 <0.001

Controls No Yes No Yes


Year Effects No Yes No Yes
Industry Effects No Yes No Yes
F 8.28 2.06 8.20 2.06



Table 12. Regulatory Shock to Public Scrutiny:
Mandatory Filing of Form 4 on EDGAR

This table presents results from estimating the effect of the mandatory filing of Form 4 on EDGAR
on misreporting. Panel A presents descriptive statistics for variables used in our analysis. Panel B
presents results from estimating our difference-in-differences design, equation (4). Column (1) of
Panel B presents results from estimating the average treatment effect. Column (2) of Panel B
presents results from testing for parallel trends. Treated is a binary indicator variable equal to one
for firms that were affected by the SEC’s requirement to file Form 4s on EDGAR. Post is a binary
indicator variable equal to one for fiscal years ended after June 30, 2003. I(Year = t) is an indicator
variable equal to one for observations in year t. All other variables are as defined in Table 1. t–
statistics appear in parentheses and are based on standard errors clustered by firm. ***, **, and *
denote statistical significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of
85,895 firm-years from 1995 to 2011.

Panel A. Descriptive Statistics


Variable Mean Std 25th Median 75th
AAER 0.009 0.092 0.000 0.000 0.000
Acquisition 0.042 0.202 0.000 0.000 0.000
Capital 0.237 0.238 0.044 0.153 0.360
Financing 0.131 0.241 0.001 0.026 0.143
FirmAge 17.736 13.703 8.000 13.000 23.000
Intangibles 0.178 0.815 0.000 0.010 0.066
InterestCov 0.887 0.887 0.030 0.433 2.000
Leverage 0.218 0.211 0.032 0.172 0.337
MB 1.874 1.590 1.035 1.309 2.027
Post 0.411 0.492 0.000 0.000 1.000
Returns 0.144 0.667 –0.252 0.049 0.370
ROA –0.039 0.247 –0.025 0.019 0.064
SalesGrowth 0.189 0.553 –0.027 0.087 0.241
Size 5.900 2.249 4.258 5.819 7.393
Treated 0.666 0.472 0.000 1.000 1.000



Table 12. Regulatory Shock to Public Scrutiny:
Mandatory Filing of Form 4 on EDGAR (cont’d)

Panel B. Difference-in-Differences Design


Dependent variable: AAERt+1
Test of Parallel Trends

Average Treatment Effect


Treatment by year relative to 2002
Effect base category
Variable (1) (2)
Treated*Post –0.003 .
(–1.44) .
Pre-Period
Treated * I(Year ≤ 1999) . –0.004
. (–1.46)
Treated * I(Year = 2000) . –0.002
(–0.59)
Treated * I(Year = 2001) . –0.003
(–1.27)
Post-Period
Treated * I(Year = 2003) . –0.001
. (–0.39)
Treated * I(Year = 2004) . –0.004
. (–1.37)
Treated * I(Year ≥ 2005) . –0.008***
. (–2.63)
Control variables Yes Yes
Main effects Yes Yes
Year effects Yes Yes
Industry effects Yes Yes
F 7.42 6.16



Table 13. Regulatory Shock to Public Scrutiny:
Mandatory Filing of Form 4 on EDGAR––Heterogeneous Treatment Effects

This table presents results from estimating equation (4) after allowing the treatment effect to vary
across the quintiles of public scrutiny. Panel A presents regression results and marginal effects for
each quintile, where quintile one is the base category. Column (1) of Panel A presents results using
analyst following to measure public scrutiny. Column (2) of Panel A presents results using
institutional investor following to measure public scrutiny. Panel B presents total treatment effects
for each quintile of public scrutiny. Treated is a binary indicator variable equal to one for firms
that were affected by the SEC’s requirement to file Form 4s on EDGAR. Post is a binary indicator
variable equal to one for fiscal years ended after June 30, 2003. Scrutiny_Q(q) is an indicator
variable equal to one if the observation is in the q-th quintile of the respective measure of public
scrutiny. All other variables are as defined in Table 1. t-statistics (two-tailed p-values) appear in
parentheses (brackets) and are based on standard errors clustered by firm. ***, **, and * denote
statistical significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 85,895
firm–years from 1995 to 2011.

Panel A. Difference-in-Differences Design


with Heterogeneous Treatment Effects
Dependent variable: AAERt+1
Scrutiny =
Quintiles of Quintiles of
Analysts Institutions
Variable (1) (2)
Treated * Post 0.008** 0.007***
(2.56) (3.53)
Treated * Post * Scrutiny_Q2 –0.002 –0.006
(–0.60) (–1.14)
Treated * Post * Scrutiny_Q3 –0.008 –0.007
(–1.26) (–1.54)
Treated * Post * Scrutiny_Q4 –0.012** –0.007
(–2.01) (–1.43)
Treated * Post * Scrutiny_Q5 –0.026*** –0.024***
(–3.91) (–3.62)
Control variables Yes Yes
Main effects Yes Yes
Two-way main effects Yes Yes
Year effects Yes Yes
Industry effects Yes Yes
F 4.11 4.47



Table 13. Regulatory Shock to Public Scrutiny:
Mandatory Filing of Form 4 on EDGAR ––Heterogeneous Treatment Effects (cont’d)

Panel B. Total Treatment Effects by Measures of Public Scrutiny


Measure of Public Scrutiny = Analysts
Quintiles of Analysts Quintile 1 Quintile 2 Quintile 3 Quintile 4 Quintile 5

(Treated * Post) (Treated * Post) (Treated * Post) (Treated * Post)


+ + + +
(Treated * Post * (Treated * Post * (Treated * Post * (Treated * Post *
Coefficients (Treated * Post) Scrutiny_Q2) Scrutiny_Q3) Scrutiny_Q4) Scrutiny_Q5)
Treatment Effect 0.008** 0.006** 0.000 –0.004 –0.019***
p-value [0.01] [0.03] [0.96] [0.45] [<0.01]

Measure of Public Scrutiny = Institutions


Quintiles of Institutions Quintile 1 Quintile 2 Quintile 3 Quintile 4 Quintile 5
(Treated * Post) (Treated * Post) (Treated * Post) (Treated * Post)
+ + + +
(Treated * Post * (Treated * Post * (Treated * Post * (Treated * Post *
Coefficients (Treated * Post) Scrutiny_Q2) Scrutiny_Q3) Scrutiny_Q4) Scrutiny_Q5)
Treatment Effect 0.007*** 0.001 0.000 0.000 –0.017***
p-value [<0.01] [0.83] [0.98] [0.94] [<0.01]

Appendix.

This Appendix presents the details underlying our model and derives the results. To

facilitate the discussion, our setup, notation, and analysis follow directly from Fischer and

Verrecchia (2000, FV)––the only substantive difference being that we interpret the precision of

investors’ knowledge about the manager’s objective and the cost of bias as increasing functions of

public scrutiny.

A.1 Setup

The timeline of the model is as follows: (i) a risk-neutral manager privately observes true

earnings, (ii) the manager issues a potentially biased report of earnings to the capital market, and

(iii) investors price the firm’s shares based on the report. To facilitate the discussion, henceforth

we use a tilde (i.e., ~) to denote a random variable.

We represent true earnings by $\tilde{e} = \tilde{v} + \tilde{n}$, where $\tilde{v}$ and $\tilde{n}$ are independently normally distributed with mean zero and positive, finite variances $\sigma_v^2$ and $\sigma_n^2$, respectively. Having observed the realization of true earnings, $\tilde{e} = e$, the manager reports $r = e + b$, where $b$ represents the extent to which the manager chooses to bias reported earnings. A risk-neutral, perfectly competitive market observes reported earnings and prices the firm accordingly. Price is the rational expectation of the firm's terminal value conditional on reported earnings: $P = E[\tilde{v} \mid r]$.18

Second, the manager's objective is to maximize $\tilde{x} P$, where $\tilde{x}$ is normally distributed with mean $\mu_x > 0$, finite variance $\sigma_x^2 > 0$, and precision $\tau_x$ (i.e., $\tau_x \equiv \sigma_x^{-2}$). We assume all distributions are independent and common knowledge, and the realization of $\tilde{x} = x$ is known only to the manager at the beginning of the period.

18 Similar to traditional asset pricing models (e.g., Grossman and Stiglitz, 1980; Kyle, 1985; Lambert, Leuz, and Verrecchia, 2007), FV has the feature that price can take negative values.



In a departure from FV, we introduce the notion of public scrutiny, represented by the parameter $s$. Let $S$ represent some open set on the positive half-real line (i.e., on $\mathbb{R}^{+}$): that is, $S = (s_L, s_H)$, where $0 < s_L < s_H$. We assume that both the precision of investors' common knowledge about $\tilde{x}$, $\tau_x$, and the cost associated with bias are strictly positive, increasing functions over $S$. As such, henceforth we represent $\tau_x$ by $\tau_x(s)$ and the cost associated with bias by $C(s)$, where $\tau_x(s) > 0$, $C(s) > 0$, $\frac{\partial \tau_x(s)}{\partial s} > 0$, and $\frac{\partial C(s)}{\partial s} > 0$. Hence, higher values of public scrutiny, $s$, imply investors know more about the manager's objective through $\tau_x(\cdot)$ and the manager faces a higher cost of bias through $C(\cdot)$. This is the only difference between FV and our model.

The manager chooses the bias in reported earnings in anticipation of share price. The manager's objective function, including the cost of bias, is given by:

$$\max_{b} \; \left\{ x P - \tfrac{1}{2} C(s)\, b^{2} \right\}. \qquad \text{(A1)}$$

A.2 Equilibrium

One can characterize an equilibrium to the model by a bias function, $b(e, x)$, and a pricing function, $P(r)$, that satisfy three conditions. (1) The manager's choice of bias must solve his optimization problem given his conjecture about the pricing function. (2) The market price must equal expected firm value conditional on the report and the market's conjecture about bias. (3) Both the conjectured pricing function and conjectured bias must be sustained in equilibrium. Bias and price functions of the form $b(e, x) = \beta_e e + \beta_x x + \gamma$ and $P(r) = \alpha + \beta r$ provide a unique, linear equilibrium. Here, $\beta$ represents the sensitivity of price to reported earnings, or the earnings response coefficient (ERC).



Equilibrium Bias Function. Combining the pricing function and the manager's objective function, the first-order condition implies that the optimal bias function is given by $b(e, x) = \frac{\hat{\beta}}{C(s)}x$, such that $\beta_e = 0$, $\gamma = 0$, and $\beta_x = \frac{\hat{\beta}}{C(s)}$, where we use a caret (i.e., ^) to denote a conjecture.
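As a quick check on this first-order condition, the sketch below (using sympy purely for illustration) maximizes the objective in eqn. (A1) under the conjectured linear pricing rule and recovers the optimal bias $\hat{\beta}x/C(s)$, which does not depend on $e$:

```python
import sympy as sp

x, b, e, alpha, beta_hat = sp.symbols("x b e alpha beta_hat")
C = sp.symbols("C", positive=True)   # stands in for C(s) at a given level of scrutiny

# Manager's payoff under the conjectured pricing rule P(r) = alpha + beta_hat*r with
# r = e + b, net of the quadratic cost of bias (1/2)*C(s)*b^2 from eqn. (A1).
payoff = x * (alpha + beta_hat * (e + b)) - sp.Rational(1, 2) * C * b**2

# First-order condition in b: the optimal bias is (beta_hat/C)*x, independent of e,
# which matches beta_e = 0 and gamma = 0 in the text.
b_star = sp.solve(sp.diff(payoff, b), b)[0]
print(b_star)   # beta_hat*x/C
```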

Equilibrium Pricing Function. Assuming the conjectured bias function of the form described above, the market price of the firm is given by

$$P = E[\tilde{v} \mid r] = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2 + \hat{\beta}_x^2\,\frac{1}{\tau_x(s)}}\left(r - \hat{\beta}_x \mu_x\right),$$

such that $\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2 + \hat{\beta}_x^2\,\frac{1}{\tau_x(s)}}$ and $\alpha = -\beta\,\hat{\beta}_x \mu_x$.
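The pricing coefficient $\beta$ is simply the projection coefficient of $\tilde{v}$ on the report under the conjectured bias, so a short simulation can confirm the expression above. The parameter values below are arbitrary and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_v2, sigma_n2, tau_x, beta_x_hat, mu_x = 1.0, 0.5, 2.0, 0.8, 0.5   # illustrative
N = 1_000_000

v = rng.normal(0.0, np.sqrt(sigma_v2), N)
n = rng.normal(0.0, np.sqrt(sigma_n2), N)
x = rng.normal(mu_x, np.sqrt(1.0 / tau_x), N)

# Report under the conjectured bias b = beta_x_hat * x (with beta_e = gamma = 0).
r = v + n + beta_x_hat * x

beta_sim = np.cov(v, r)[0, 1] / np.var(r, ddof=1)
beta_theory = sigma_v2 / (sigma_v2 + sigma_n2 + beta_x_hat**2 / tau_x)
print(beta_sim, beta_theory)   # the two values should be close
```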

Uniqueness. Assume all conjectures in the bias and pricing functions are correct. It is straightforward to verify that the solutions for $\beta_e$, $\gamma$, and $\alpha$ are unique functions of the pair $(\beta, \beta_x)$. Thus, to prove the existence of a unique equilibrium we need to show that there exists a unique $(\beta, \beta_x)$ that satisfies $\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2 + \beta_x^2\,\frac{1}{\tau_x(s)}}$ and $\beta_x = \frac{\beta}{C(s)}$. Substituting the latter into the former, we have $\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2 + \left(\frac{\beta}{C(s)}\right)^2\frac{1}{\tau_x(s)}}$. Thus, we need only show that this expression has a unique solution for $\beta$. Rearranging terms yields the following 3rd-order polynomial:

$$\beta^3 + \beta\left(\sigma_v^2 + \sigma_n^2\right)\tau_x(s)\,C(s)^2 - \sigma_v^2\,\tau_x(s)\,C(s)^2 = 0. \qquad \text{(A2)}$$

The interested reader will note that this expression replicates the equilibrium condition in FV (eqn. [15] on p. 236 in FV), except that now we characterize the equilibrium in terms of $\tau_x(s)$ and $C(s)$ as functions of $s$.

As in FV, the $\beta$ that solves eqn. (A2) is strictly positive because $\beta \le 0$ implies the expression is negative. Also, the solution to eqn. (A2) will be unique: for positive $\beta$, the expression is monotonically increasing in $\beta$ and tends to infinity as $\beta$ tends to infinity. Finally, the solution to eqn. (A2) is strictly less than $\frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2}$, because $\beta \ge \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2}$ results in the expression being positive. This observation stems from the fact that $\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2} < 1$ in a circumstance where the manager is forced to report truthfully (i.e., without bias). Thus, when managers bias reported earnings, the ERC is lower than in a circumstance where they are forced to report truthfully. Insofar as $\beta$ is the weight that investors place on the manager's report, the latter feature of the equilibrium is intuitive and important in establishing unimodality.
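These properties of eqn. (A2), namely a unique positive root that lies strictly below $\sigma_v^2/(\sigma_v^2+\sigma_n^2)$, are easy to confirm numerically. The sketch below solves the cubic with numpy for one arbitrary set of parameter values, using the illustrative functional forms $\tau_x(s) = s$ and $C(s) = \gamma + \delta s$ discussed at the end of this Appendix:

```python
import numpy as np

def erc(s, sigma_v2=1.0, sigma_n2=0.5, gamma=0.2, delta=0.3):
    """Solve eqn. (A2) for beta, assuming tau_x(s) = s and C(s) = gamma + delta*s."""
    tau_x, C = s, gamma + delta * s
    # beta^3 + beta*(sigma_v2 + sigma_n2)*tau_x*C^2 - sigma_v2*tau_x*C^2 = 0
    roots = np.roots([1.0, 0.0,
                      (sigma_v2 + sigma_n2) * tau_x * C**2,
                      -sigma_v2 * tau_x * C**2])
    positive_real = [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0]
    assert len(positive_real) == 1          # a unique positive root
    return positive_real[0]

beta = erc(s=1.0)
print(beta, 1.0 / (1.0 + 0.5))   # beta lies strictly below sigma_v^2/(sigma_v^2 + sigma_n^2)
```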

A.3 Comparative Static


Next, we examine the sign of the relation between expected bias, $E[b] = \frac{\beta}{C(s)}\mu_x$, and public scrutiny, $s$: in other words, $\frac{\partial E[b]}{\partial s}$. To do so, we first need to derive the implicit solution for $\frac{\partial \beta}{\partial s}$ from eqn. (A2). Although this requires a considerable amount of algebraic manipulation, it is nonetheless straightforward to show that

$$\frac{\partial \beta(s)}{\partial s} = \frac{\beta^3\left[C(s)\frac{\partial \tau_x(s)}{\partial s} + 2\tau_x(s)\frac{\partial C(s)}{\partial s}\right]}{\left[3\beta^2 + \left(\sigma_v^2 + \sigma_n^2\right)\tau_x(s)\,C(s)^2\right]\tau_x(s)\,C(s)} > 0. \qquad \text{(A3)}$$

This establishes that the ERC is increasing in $s$.
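Because eqn. (A3) follows from implicitly differentiating eqn. (A2), it can be verified symbolically. In the sketch below, sympy checks that the implicit derivative equals the expression in eqn. (A3) once $\beta^3$ is eliminated using the equilibrium condition; `tau_x_prime` and `C_prime` stand in for $\partial \tau_x(s)/\partial s$ and $\partial C(s)/\partial s$:

```python
import sympy as sp

beta, sv2, sn2, tau, C, tau_p, C_p = sp.symbols(
    "beta sigma_v2 sigma_n2 tau_x C tau_x_prime C_prime", positive=True
)

# Implicit differentiation of F(beta, s) = beta^3 + beta*(sv2+sn2)*tau*C^2 - sv2*tau*C^2 = 0:
# d(beta)/ds = -F_s / F_beta, with tau_x and C treated as functions of s.
F_beta = 3 * beta**2 + (sv2 + sn2) * tau * C**2
F_s = (beta * (sv2 + sn2) - sv2) * (tau_p * C**2 + 2 * tau * C * C_p)
dbeta_ds = -F_s / F_beta

# Expression claimed in eqn. (A3).
claimed = beta**3 * (C * tau_p + 2 * tau * C_p) / (F_beta * tau * C)

# The two coincide once beta^3 is replaced using eqn. (A2).
beta_cubed = sv2 * tau * C**2 - beta * (sv2 + sn2) * tau * C**2
print(sp.simplify(dbeta_ds - claimed.subs(beta**3, beta_cubed)))   # 0
```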

Next we solve for $\frac{\partial E[b]}{\partial s}$ and evaluate its sign:

$$\frac{\partial E[b]}{\partial s} = \frac{1}{C(s)}\frac{\partial \beta}{\partial s}\mu_x - \frac{1}{C(s)^2}\frac{\partial C(s)}{\partial s}\beta\mu_x. \qquad \text{(A4)}$$

Note that because $\mu_x > 0$, $C(s) > 0$, and $\beta > 0$, eqn. (A4) is proportional to (has the same sign as):

$$\frac{1}{\beta}\frac{\partial \beta}{\partial s} - \frac{1}{C(s)}\frac{\partial C(s)}{\partial s}. \qquad \text{(A5)}$$



Eqn. (A5) lends itself to a straightforward economic interpretation. The first term in eqn. (A5) represents the percentage change in $\beta$ for a one-unit change in $s$, and the second term represents the percentage change in $C(s)$ for a one-unit change in $s$. One can think of the former as akin to the percentage change in the marginal benefit of bias, and the latter as akin to the percentage change in the marginal cost of bias. When the percentage change in the marginal benefit of bias exceeds that of the marginal cost of bias, $\frac{\partial E[b]}{\partial s} > 0$, and when the percentage change in the marginal cost of bias exceeds that of the marginal benefit, $\frac{\partial E[b]}{\partial s} < 0$.

We can further simplify the expression for the sign of the comparative static by substituting eqn. (A3) into eqn. (A5) and multiplying by $\beta$ and $C(s)$. This yields

$$C(s)\frac{\partial \beta}{\partial s} - \beta\frac{\partial C(s)}{\partial s} = \frac{C(s)\frac{\partial \tau_x(s)}{\partial s}\left[\beta^3 - \sigma_v^2\,\tau_x(s)^2\,C(s)\,\frac{\partial C(s)/\partial s}{\partial \tau_x(s)/\partial s}\right]}{\tau_x(s)\left[3\beta^2 + \left(\sigma_v^2 + \sigma_n^2\right)\tau_x(s)\,C(s)^2\right]}. \qquad \text{(A6)}$$

Because $C(s) > 0$, $\partial \tau_x(s)/\partial s > 0$, and the entire denominator on the right-hand side of eqn. (A6) is positive, we need only consider the sign of the bracketed factor in the numerator:

$$\beta^3 - \sigma_v^2\,\tau_x(s)^2\,C(s)\,\frac{\partial C(s)/\partial s}{\partial \tau_x(s)/\partial s}. \qquad \text{(A7)}$$

Using the equilibrium condition in eqn. (A2) to express $\beta^3$, one can show that eqn. (A7) is proportional to

$$1 - \left(\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\right)\beta - \frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s}. \qquad \text{(A8)}$$
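The step from eqn. (A7) to eqn. (A8) can also be verified symbolically. The sketch below substitutes the equilibrium condition for $\beta^3$ and confirms that eqn. (A7) equals eqn. (A8) multiplied by the strictly positive factor $\sigma_v^2\,\tau_x(s)\,C(s)^2$; as before, `tau_x_prime` and `C_prime` denote derivatives in $s$:

```python
import sympy as sp

beta, sv2, sn2, tau, C, tau_p, C_p = sp.symbols(
    "beta sigma_v2 sigma_n2 tau_x C tau_x_prime C_prime", positive=True
)

# Eqn. (A7): the bracketed factor in the numerator of eqn. (A6).
A7 = beta**3 - sv2 * tau**2 * C * C_p / tau_p

# Eliminate beta^3 using the equilibrium condition (A2).
A7 = A7.subs(beta**3, sv2 * tau * C**2 - beta * (sv2 + sn2) * tau * C**2)

# Eqn. (A8), with the log-derivatives written as C_prime/C and tau_x_prime/tau_x.
A8 = 1 - ((sv2 + sn2) / sv2) * beta - (C_p / C) / (tau_p / tau)

# (A7) is sigma_v^2 * tau_x * C^2 times (A8), so the two have the same sign.
print(sp.simplify(A7 - sv2 * tau * C**2 * A8))   # 0
```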

A.4 Unimodality

Now define $f(s)$ such that

$$f(s) = 1 - \left(\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\right)\beta - \frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s}. \qquad \text{(A9)}$$

To ensure that expected bias is unimodal in $s$, $f(s)$ needs to behave as follows: (1) initially $f(s) > 0$; (2) eventually $f(s) < 0$; and (3) $f(s)$ is everywhere decreasing in $s$. This behavior ensures that there exists some $s^*$ (the "peak" or "threshold") such that $f(s^*) = 0$, $f(s) > 0$ for all $s < s^*$, and $f(s) < 0$ for all $s > s^*$. Sufficient conditions to establish these features are:


$$\lim_{s \to s_L} \frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s} = 0; \qquad \text{(A10)}$$

$$\lim_{s \to s_H} \frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s} \ge 1; \qquad \text{(A11)}$$

$$\frac{\partial}{\partial s}\left[\frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s}\right] \ge 0. \qquad \text{(A12)}$$

More specifically, note that for all $s \in S$,

$$0 < 1 - \left(\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\right)\beta < 1. \qquad \text{(A13)}$$

Thus, together with eqn. (A13), eqn. (A10) implies

$$\lim_{s \to s_L} f(s) = \lim_{s \to s_L}\left[1 - \left(\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\right)\beta - \frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s}\right] > 0. \qquad \text{(A14)}$$

Thus, eqn. (A11) implies

$$\lim_{s \to s_H} f(s) = \lim_{s \to s_H}\left[1 - \left(\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\right)\beta - \frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s}\right] < 0. \qquad \text{(A15)}$$




Finally, we know from eqn. (A3) that $\frac{\partial \beta}{\partial s} > 0$. Thus, eqn. (A12) implies

$$\frac{\partial f(s)}{\partial s} = -\left(\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\right)\frac{\partial \beta}{\partial s} - \frac{\partial}{\partial s}\left[\frac{\partial \ln C(s)/\partial s}{\partial \ln \tau_x(s)/\partial s}\right] < 0. \qquad \text{(A16)}$$

An interested reader can easily verify that $S = (0, \infty)$, $\tau_x(s) = s$, and cost functions of the form $C(s) = \gamma + \delta s$ and $C(s) = \gamma + \delta s^2$ (with $\gamma > 0$ and $\delta > 0$) satisfy these conditions. See Figure 2 for examples.
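The following numerical sketch illustrates the resulting unimodal pattern for the first of these cost functions. The parameter values are arbitrary and meant only to mimic the qualitative shape in Figure 2, not to reproduce it:

```python
import numpy as np

sigma_v2, sigma_n2, mu_x = 1.0, 0.5, 1.0      # illustrative parameter values
gamma, delta = 0.05, 0.10                      # C(s) = gamma + delta*s, tau_x(s) = s

def erc(s):
    """Unique positive root of eqn. (A2) for the functional forms above."""
    tau_x, C = s, gamma + delta * s
    roots = np.roots([1.0, 0.0,
                      (sigma_v2 + sigma_n2) * tau_x * C**2,
                      -sigma_v2 * tau_x * C**2])
    return max(r.real for r in roots if abs(r.imag) < 1e-9)

s_grid = np.linspace(0.01, 50.0, 2000)
expected_bias = np.array([erc(s) * mu_x / (gamma + delta * s) for s in s_grid])

# Expected bias rises, peaks, and then falls as public scrutiny increases.
peak = expected_bias.argmax()
assert 0 < peak < len(s_grid) - 1                            # interior peak
assert np.all(np.diff(expected_bias[: peak + 1]) >= -1e-9)   # rising before the peak
assert np.all(np.diff(expected_bias[peak:]) <= 1e-9)         # falling after the peak

# The ratio of log-derivatives starts near zero, is increasing, and approaches one,
# in the spirit of conditions (A10)-(A12).
R = delta * s_grid / (gamma + delta * s_grid)
assert R[0] < 0.05 and np.all(np.diff(R) > 0) and R[-1] > 0.95
print(f"expected bias peaks at s = {s_grid[peak]:.2f}")
```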
