Daniel J. Taylor
dtayl@wharton.upenn.edu
The Wharton School
University of Pennsylvania
Robert E. Verrecchia
verrecch@wharton.upenn.edu
The Wharton School
University of Pennsylvania
Abstract:
This paper examines how the ex ante level of public scrutiny influences a manager’s subsequent
decision to misreport. The conventional wisdom is that high levels of public scrutiny facilitate
monitoring, suggesting a negative relation between scrutiny and misreporting. However, public
scrutiny also increases the weight that investors place on earnings in valuing the firm. This in turn
increases the benefit of misreporting, suggesting a positive relation. We formalize these two
countervailing forces––“monitoring” and “valuation”––in the context of a parsimonious model of
misreporting. We show that the combination of these two forces leads to a unimodal relation.
Specifically, as the level of public scrutiny increases, misreporting first increases, reaches a peak,
and then decreases. We find evidence of such a relation across multiple empirical measures of
misreporting, multiple measures of public scrutiny, and multiple research designs.
__________________
We thank Jeremy Bertomeu, John Core, Vivian Fang, Paul Fischer, Mirko Heinle, Naveen
Kartik, John Kepler, Rick Lambert, Thomas Ruchti (discussant), Rodrigo Verdi, Ronghuo Zheng
(discussant), and seminar participants at the 2019 FARS Midyear Meeting, the University of
Michigan, MIT, Stanford, Toronto, Wharton, and University of Zurich for helpful comments. We
thank Travis Dyer for providing data on EDGAR downloads, and thank our respective schools
for financial support.
regulators, and practitioners. Perhaps for this reason, a vast academic literature examines the
causes of misreporting and the various mechanisms that can mitigate it. Many studies in this
literature explicitly link the misreporting decision to the cost-benefit tradeoff facing the manager,
and seek to provide insight on settings where managers will be more (or less) likely to misreport.
In this paper, we analytically and empirically examine how the ex ante level of public scrutiny
(i.e., the level of public scrutiny prior to misreporting) alters the cost-benefit tradeoff facing the manager.
The conventional wisdom is that greater public scrutiny reduces the incidence of
misreporting. The intuition is compelling: greater scrutiny increases the probability of detection
(e.g., Ferri, Zheng, and Zou, 2018). If the probability of detection is higher, the expected cost is
higher and, in equilibrium, the incidence of misreporting should be lower. This intuition has led
much of the prior literature to predict that misreporting is most pronounced in settings with little public scrutiny.
However, this intuition is incomplete. Prior research shows that greater public scrutiny
increases the weight that investors place on accounting disclosures in valuing the firm. For
example, it is widely accepted that the price response to each dollar of reported earnings, or
earnings response coefficient (ERC), is larger in high-scrutiny environments (e.g., Collins and
Kothari, 1989; Teoh and Wong, 1993). The larger the ERC, the greater the valuation benefit from
a dollar of inflated earnings. If the expected benefit is higher, intuition suggests that the incidence
of misreporting should be higher (e.g., Fischer and Verrecchia, 2000; Ferri, Zheng, and Zou,
2018). We show analytically and empirically that these two economic forces––“monitoring” and “valuation”––combine to produce a unimodal relation between public scrutiny and misreporting.
We begin with a parsimonious model of misreporting to formalize the intuition that motivates our predictions. As is standard in the literature, we assume a
privately informed manager can misreport, or “bias,” reported earnings and that there are a variety
of different “types” of managers, where each type has a different reporting objective. We assume
investors are uncertain about the manager’s type, and consequently uncertain about the manager’s
reporting objective. In deciding whether to bias earnings, the manager trades off the benefit and
cost, and investors have rational expectations about the extent of bias and adjust price accordingly.1
In the context of Fischer and Verrecchia (2000), one can think of public scrutiny––our
theoretical construct of interest––as increasing both the ERC (through a reduction in uncertainty
about the manager’s reporting objective) and the manager’s cost of bias (through a higher
probability of detection). Figure 1 illustrates the link between public scrutiny and the various
theoretical constructs of interest. In the presence of these costs and benefits, we show that the
relation between bias and scrutiny is unimodal: there is a unique peak, below which the relation is
positive, and above which the relation is negative (see e.g., Figure 2).2
The intuition for this result is that in firms with low scrutiny, a lower expected benefit
results in lower bias. As public scrutiny increases, bias increases because the ERC increases––the
valuation channel dominates. Eventually, bias peaks at a unique threshold point and then begins
to decline because in firms with high scrutiny the cost of bias is higher––the monitoring channel dominates.
1 See Bertomeu et al. (2017, 2019), Fang, Huang, and Wang (2017), and Frankel and Kartik (2019) for related models.
2 Formally, a function, y = f(x), is unimodal if for some threshold value t, it is monotonically increasing for x ≤ t and monotonically decreasing for x ≥ t.
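The definition in footnote 2 can be checked mechanically on a sampled function. The sketch below is purely illustrative (the function and grid are our own, not the paper's):

```python
def is_unimodal(ys):
    """Footnote 2, discretized: non-decreasing up to the peak, non-increasing after."""
    peak = max(range(len(ys)), key=lambda i: ys[i])
    rising = all(ys[i] <= ys[i + 1] for i in range(peak))
    falling = all(ys[i] >= ys[i + 1] for i in range(peak, len(ys) - 1))
    return rising and falling

xs = [i / 10.0 for i in range(21)]                     # grid on [0, 2]
inverted_u = is_unimodal([x * (2.0 - x) for x in xs])  # peak at x = 1: unimodal
u_shape = is_unimodal([(x - 1.0) ** 2 for x in xs])    # a trough, not a peak
```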
Thus, misreporting should be greatest at intermediate levels of scrutiny and lower in both low- and high-scrutiny environments. We use the insights from this analysis to inform the design of our empirical tests.
In principle, there are a variety of proxies we could use to measure the level of public
scrutiny. To minimize concerns that our results are specific to any one measure, and to illustrate
the generality of the economic forces we study, we focus on three proxies common in the literature:
analyst following, institutional investor following, and media coverage. While these proxies are
broad, each of them provides some insight on investors’ knowledge about the manager’s reporting
strategy. Consistent with how we conceptualize public scrutiny, a large body of prior research
indicates that each of these proxies is associated with both heightened ERCs and heightened
monitoring (e.g., Bushee, 1998; Bushman, Piotroski, and Smith 2004; Miller, 2006; Bowen,
Rajgopal, and Venkatachalam, 2008; Dyck, Morse, and Zingales, 2010). These findings provide
construct validity, and suggest that our proxies capture the countervailing forces that motivate our
analysis. Our primary measure of public scrutiny is the common component to analyst following,
institutional investor following, and media coverage, and in subsequent analyses, we assess the
robustness of our results to using EDGAR downloads as an alternative measure of public scrutiny.
We begin our empirical analysis by validating the existence of both the valuation and
monitoring channels. Consistent with the valuation channel (and with a large body of prior
literature), we find that firms with higher levels of public scrutiny have higher ERCs––measured
using both price-levels specifications and returns specifications. Next, to validate the monitoring
channel, we obtain novel FOIA data on all formal SEC investigations over our sample period. These data provide an exhaustive list of investigations and represent the “suspect list” considered by the SEC (e.g., Blackburne et al., 2019). Consistent with the monitoring channel, we find a positive relation between public scrutiny and the probability of an SEC investigation in the
subsequent year. Firms in the highest quintile of public scrutiny are four times more likely to be
investigated by the SEC in the subsequent year than firms in the lowest quintile: 27% (6%) of
firms in the highest (lowest) quintile of public scrutiny are under SEC investigation in the
subsequent year.
Having documented the presence of the valuation and monitoring channels, we proceed to
test our novel theoretical prediction: a unimodal relation between public scrutiny and misreporting.
An important distinguishing feature of our analysis is that we predict that the relation between
misreporting and public scrutiny takes a specific, non-linear functional form. While prior work
often frames empirical predictions in terms of linear relations, our predictions focus on the shape
of the relation. Our focus on the shape of the relation should mitigate concerns about omitted
variables and endogeneity. For example, an alternative explanation for a unimodal relation would
need to suggest an alternative theoretical construct that not only explains the relation between
misreporting and public scrutiny, but also why the relation flips sign at approximately the same
threshold point. While we cannot rule out such a possibility, it seems unlikely. In the economics
literature, this empirical approach is known as “identification by functional form” (e.g., Lewbel,
2018).
Following prior work, we use restatements due to intentional misrepresentation as a measure of misreporting (e.g., Hennes, Leone, and Miller, 2008; Armstrong et al., 2013).
Consistent with extant theory, we measure “bias” in reported earnings using the difference between
earnings as originally reported and earnings as restated. In subsequent analyses, we assess the robustness of our results to measuring misreporting using an indicator variable for whether the SEC issued an Accounting and Auditing Enforcement Release.
We estimate the shape of the relation between public scrutiny and misreporting using
multiple distinct sets of tests. We begin by graphically presenting the shape of the relation. In
particular, we sort firms into quintiles based on public scrutiny and present average values of
misreporting for each quintile. We find that misreporting is monotonically increasing in the first
three quintiles, peaks in the fourth quintile, and then declines sharply in the fifth quintile. We
observe a similar pattern across multiple measures of public scrutiny (including each of analyst
following, institutional following, and media coverage) and across multiple measures of
misreporting (including the amount of bias and an indicator variable for restatements due to
misrepresentation).
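The quintile-sort diagnostic described above is straightforward to reproduce. The sketch below uses simulated data with a built-in inverted-U relation; the variable names and data-generating process are hypothetical, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
scrutiny = rng.uniform(0.0, 2.0, n)
# Simulated inverted-U relation peaking at scrutiny = 1 (hypothetical data)
misreport = scrutiny * (2.0 - scrutiny) + rng.normal(0.0, 0.3, n)

# Sort firms into quintiles of scrutiny and average misreporting within each
edges = np.quantile(scrutiny, [0.2, 0.4, 0.6, 0.8])
quintile = np.searchsorted(edges, scrutiny)  # quintile index 0..4
means = np.array([misreport[quintile == q].mean() for q in range(5)])
# An interior peak in `means` is the graphical signature of a unimodal relation
```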
Next, we conduct three sets of empirical tests. First, we estimate polynomial regression
models that include both linear and 2nd-order polynomial terms. If the relation is unimodal (i.e.,
first increases and then decreases), we expect the 2nd-order polynomial term on our measure of
public scrutiny to load incremental to the linear term, and the coefficient to be negative. In
estimating these specifications, we control for ad hoc unimodal relations between misreporting
and our control variables (e.g., we control for the possibility of an atheoretic unimodal relation
between misreporting and growth options). Consistent with our predictions, we find robust
evidence of a negative coefficient on the 2nd-order polynomial term across all specifications.
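The polynomial specification can be sketched with simulated data as follows; this is a minimal illustration of the test logic (hypothetical names and data, no controls), not the paper's full specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
scrutiny = rng.uniform(0.0, 2.0, n)
# Simulated unimodal ("inverted-U") relation peaking at scrutiny = 1
misreport = scrutiny * (2.0 - scrutiny) + rng.normal(0.0, 0.1, n)

# OLS with linear and squared terms: misreport = a + b1*s + b2*s^2 + e
X = np.column_stack([np.ones(n), scrutiny, scrutiny**2])
coef, *_ = np.linalg.lstsq(X, misreport, rcond=None)
a, b1, b2 = coef
# A unimodal relation implies b1 > 0 and b2 < 0, with the peak at -b1/(2*b2)
peak = -b1 / (2.0 * b2)
```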
Second, we estimate spline regression models that treat the shape of the relation as
piecewise linear. Specifically, we estimate linear regressions on either side of a threshold and test
whether the slope coefficient below (above) the threshold is positive (negative). One estimation
challenge with these models is that, while the theory suggests that a threshold exists, it is silent on the location of the threshold. We use two approaches to estimate these models. In the first approach, we use the average level of scrutiny as a rough
approximation of the threshold, and estimate different slope coefficients for observations below
and above the threshold. Consistent with a unimodal relation, we find a positive (negative) relation between misreporting and public scrutiny below (above) the threshold.
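The first approach can be sketched as a continuous piecewise-linear regression with the knot fixed at the sample mean; again the data are simulated and the names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
s = rng.uniform(0.0, 2.0, n)
y = s * (2.0 - s) + rng.normal(0.0, 0.1, n)  # simulated unimodal relation

t = s.mean()  # rough threshold: the sample average of scrutiny
# Continuous piecewise-linear design: one slope below t, another above t
X = np.column_stack([np.ones(n), np.minimum(s, t), np.maximum(s - t, 0.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_below, slope_above = coef[1], coef[2]
# Unimodality predicts slope_below > 0 and slope_above < 0
```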
In the second approach, we use the multivariate adaptive regression spline method (MARS)
to estimate simultaneously both the threshold that minimizes the mean-squared error, and the sign
of the relation on either side of the threshold (e.g., Friedman, 1991). The advantage of this
approach is that MARS can reliably identify not only the slope coefficients of the piecewise-linear function on either side of the threshold, but also the location of the threshold itself. We find
that the threshold is at the 61st percentile and, consistent with a unimodal relation, the slope to the
left (right) of the threshold is positive (negative). Thus, for observations in the top two quintiles of
public scrutiny, our results suggest that small increases in public scrutiny will decrease
misreporting. However, for observations in the bottom three quintiles of public scrutiny, our results suggest that small increases in public scrutiny will increase misreporting.
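MARS itself fits knots and basis functions far more generally; the sketch below illustrates only the single-knot special case relevant here, grid-searching for the SSE-minimizing threshold on simulated data (an illustration, not the paper's estimation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
s = rng.uniform(0.0, 2.0, n)
y = s * (2.0 - s) + rng.normal(0.0, 0.1, n)  # simulated unimodal relation

def fit_with_knot(t):
    """Continuous piecewise-linear least squares with a single knot at t."""
    X = np.column_stack([np.ones_like(s), np.minimum(s, t), np.maximum(s - t, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return resid @ resid, coef

# Grid-search the knot over interior quantiles of scrutiny, minimizing SSE
candidates = np.quantile(s, np.linspace(0.05, 0.95, 50))
t_hat = min(candidates, key=lambda t: fit_with_knot(t)[0])
_, coef = fit_with_knot(t_hat)
slope_left, slope_right = coef[1], coef[2]
```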
Finally, in our third set of tests, we exploit a regulatory shock that increased investors’ scrutiny of managers’ equity trades: the mandatory dissemination of Form 4 filings on EDGAR. We use a difference-in-differences design that allows for heterogeneous treatment effects, and predict that the sign of the treatment effect
varies with the level of public scrutiny. Consistent with a unimodal relation between public
scrutiny and misreporting, we find that mandatory EDGAR dissemination of Form 4 increases
misreporting for firms with extremely low ex ante scrutiny, i.e., those below the threshold; and
decreases misreporting for firms with extremely high ex ante scrutiny, i.e., those above the threshold.
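A difference-in-differences design with a sign-flipping treatment effect can be sketched as follows; the data-generating process is simulated and the effect sizes are arbitrary, chosen only so that the pooled average effect is near zero while the subsample effects have opposite signs:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8000
treat = rng.integers(0, 2, n)        # treated by the mandate (hypothetical)
post = rng.integers(0, 2, n)         # observation after the mandate
scrutiny_hi = rng.integers(0, 2, n)  # 1 = high ex ante public scrutiny

# Built-in heterogeneous effect: +0.5 for low-scrutiny treated firms post,
# -0.5 for high-scrutiny treated firms post, so the pooled effect is ~0
effect = np.where(scrutiny_hi == 1, -0.5, 0.5)
y = effect * treat * post + rng.normal(0.0, 0.5, n)

def did(mask):
    """Difference-in-differences estimate within a subsample."""
    t, p, out = treat[mask], post[mask], y[mask]
    treated_diff = out[(t == 1) & (p == 1)].mean() - out[(t == 1) & (p == 0)].mean()
    control_diff = out[(t == 0) & (p == 1)].mean() - out[(t == 0) & (p == 0)].mean()
    return treated_diff - control_diff

effect_low = did(scrutiny_hi == 0)        # positive for low-scrutiny firms
effect_high = did(scrutiny_hi == 1)       # negative for high-scrutiny firms
effect_avg = did(np.ones(n, dtype=bool))  # pooled ATE masks the heterogeneity
```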
To explain these findings, a correlated omitted variable would need to: (i) be positively correlated with misreporting; (ii) vary over time with the
EDGAR mandate (first difference); (iii) differentially affect treatment and control firms (second
difference); and (iv) result in the difference between treatment and control firms flipping sign with the level of ex ante public scrutiny.
Our findings make multiple contributions to the empirical literature. First, by formally
modeling the managers’ cost-benefit tradeoff, our analysis highlights the limitation of existing
intuition in the empirical literature on misreporting. A distinguishing feature of our analysis is that
we predict that the relation between misreporting and public scrutiny takes a specific, non-linear
functional form––a prediction that would be difficult to intuit in the absence of formal theory. In
this regard, our empirical findings suggest a more nuanced view of how public scrutiny affects the
cost-benefit tradeoff facing the manager. While prior work often frames empirical predictions in
terms of linear relations, our results suggest that a promising path for future work would be to
explore non-linear or unimodal relations. Indeed, our results suggest the relation between
misreporting and each of analyst following, institutional investor ownership, media coverage, and
EDGAR downloads is unimodal: for low (high) values, these variables are positively (negatively) associated with misreporting.
Second, using novel data on formal SEC investigations (including investigations that did
not lead to charges), we provide evidence that public scrutiny is a leading indicator of SEC
investigations. Despite the much greater rate of SEC investigations among firms with high public
scrutiny (analyst following, institutional investor following, and media coverage), misreporting is
greatest in firms with medium levels of public scrutiny and lowest in both high- and low-scrutiny
firms.
Third, we provide evidence on the effects of disseminating managers’ equity trades via EDGAR. Not only is this the first evidence that EDGAR dissemination of managers’ equity trades
affects misreporting, but also that the effect of EDGAR dissemination varies with the level of
public scrutiny in a manner consistent with our theoretical predictions. In this regard, our findings
suggest that a promising path for future work on regulatory shocks is to consider designs that allow
for heterogeneous treatment effects: narrowly focusing on the sign and significance of the “average
treatment effect” can mask important heterogeneity in the effect of the shock. For example, we
find the average treatment effect is insignificant, but the treatment effect in the top (bottom)
quintile of public scrutiny is reliably negative (positive). Collectively, the finding that our theoretical predictions hold across multiple measures of the theoretical constructs, multiple research designs, and multiple empirical settings underscores the generality and power of the economic forces we study.
The remainder of our paper proceeds as follows. Section 2 formally develops our empirical
predictions. Section 3 describes the sample and measurement choices used in our empirical tests.
Section 4 describes our empirical tests and results that validate the valuation and monitoring
channels. Section 5 describes our primary research design, results, and robustness tests for a unimodal relation between public scrutiny and misreporting.
2. Hypothesis Development
2.1 Overview
To develop our predictions, we incorporate the notion of public scrutiny into Fischer and Verrecchia (2000), hereafter FV. As in FV, we consider a setting where
reported earnings diverge from true earnings. We refer to the divergence between reported
earnings and true earnings as “bias.” In the context of FV, the divergence between reported
earnings and true earnings constitutes “misreporting” because the manager intentionally introduces
bias to obfuscate the report. As in FV, we assume bias is costly to the manager, and there is a
continuum of different types of managers, where each type of manager has a different reporting
objective. Investors are uncertain about the manager’s reporting objective, and thus cannot
perfectly anticipate the manager’s reporting strategy. The unique feature of our model, relative to
prior work, is that we incorporate the notion of public scrutiny into FV. Specifically, we assume
that public scrutiny both increases investors’ ability to anticipate the manager’s reporting strategy and increases the manager’s cost of bias.
2.2. Setup
Unless otherwise noted, the basic setup and notation follows FV, and all random variables
are mean-zero and independently normally distributed. Henceforth, we use a ~ to denote a random variable.
At the beginning of the first period, the manager privately observes a noisy signal of
terminal cash flow: e = v + n. One can think of e as true earnings, v as terminal cash flow, and
n as the noise in earnings that results from a non-opportunistic application of accounting rules.
Having observed the realization of true earnings, the manager then reports earnings to the capital
market. We represent reported earnings by r = e + b, where b is the bias in reported earnings chosen by the manager.
3 FV is but one example of a class of signaling model in which agents are heterogeneous along two dimensions: “natural action” and “gaming ability.” See Frankel and Kartik (2019) for a broad discussion of this type of model.
See Bertomeu et al. (2017, 2019) for examples of papers that structurally estimate this type of model, and Fang,
Huang, and Wang (2017) for an example of a paper that links the cost of bias to the noise in earnings and derives
empirical predictions.
Investors trade the firm’s shares in a perfectly competitive market and price is equal to expected future cash flow, P = E[v | r]. In choosing the optimal bias, a risk-neutral manager trades off the cost of bias with the benefit of a higher price:

max_b xP − (1/2)C(s)b².   (1)

The first term represents the benefit of bias, where x is the realization of a random weight x̃ with mean πx and precision τx (i.e., τx = 1/σx²). We assume all distributions are independent and common knowledge, and the realization of x̃ is known only to the manager at the beginning of the period. One way to interpret x̃ is to suggest that there is a continuum of different “types” of managers, where each type has a different reporting objective. As such, τx represents the precision of investors’ knowledge about the manager’s reporting objective: larger values indicate that investors can better anticipate the manager’s reporting strategy and better anticipate any bias.
The second term represents the cost of bias. One can think of the cost of bias as representing
an amalgamation of the effort cost of bias, the probability of detection, and the pecuniary and non-
pecuniary penalties to the manager in the event of detection. Intuitively, one can think of public scrutiny as increasing the probability of detection, and hence the cost of bias.
In a departure from FV, we introduce the notion of public scrutiny, represented by the parameter s, and assume (i) investors’ knowledge about the manager’s objective is strictly positive and increasing in s, and (ii) the cost of bias is strictly positive and increasing in s. Henceforth, we represent the precision of investors’ knowledge about the manager’s objective by τx(s) and the cost associated with bias as C(s). This is the only difference between FV and our model.
2.3 Equilibrium
The Appendix shows that there is a unique linear equilibrium with price given by P = β(e + b), where β represents the ERC. The greater the ERC, the greater the price response to reported earnings (e + b), and hence the greater the benefit of bias. The Appendix shows that ∂β/∂s > 0. This establishes the mechanism through which public scrutiny increases the
benefit of bias––by increasing the value-relevance of reported earnings. The greater the value-
relevance of reported earnings, the greater the benefit to inflating reported earnings.
Similar to FV, the Appendix shows that the expression for equilibrium expected bias is given by E[b] = βπx / C(s). The numerator of this expression encapsulates the benefit of bias through
the ERC, and the denominator encapsulates the cost of bias. This expression is similar to FV,
except that both the numerator and the denominator are increasing in s .
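The resulting unimodality can be illustrated numerically under assumed functional forms. The forms below (β(s) = s/(1+s) and C(s) = c(1+s)) are our own illustrative choices that satisfy the model's assumptions (both increasing in s); they are not forms from the paper:

```python
import numpy as np

pi_x = 1.0  # mean weight the manager places on price (assumed)
c = 1.0     # cost scale (assumed)

s = np.linspace(0.01, 10.0, 1000)
beta = s / (1.0 + s)            # assumed ERC: increasing and bounded in s
cost = c * (1.0 + s)            # assumed C(s): increasing in s
exp_bias = beta * pi_x / cost   # E[b] = beta * pi_x / C(s)

i_peak = int(np.argmax(exp_bias))
s_star = s[i_peak]
# Here E[b] = s / (1 + s)^2, which peaks at s* = 1: expected bias is
# increasing below the threshold and decreasing above it
```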
4 Note that it is not relevant for our analysis whether investors are uncertain about the first term of the manager’s objective function or the second term (e.g., Bertomeu et al., 2017). In this regard, recent empirical work tends to interpret πx very narrowly, as speaking to pecuniary compensation (e.g., Fang, Huang, and Wang, 2017; Ferri, Zheng, and Zou, 2018). While managerial compensation contracts are undoubtedly one source of information about
the manager’s reporting objective, they represent only a small fraction of the total available information. For
example, investors might glean information from an analysis of previous disclosures, media articles, analyst reports,
or acquire information from other intermediaries. In this regard, the theory applies broadly and is not specific to a
certain information channel.
Our main theoretical prediction concerns the sign of ∂E[b]/∂s. A unimodal relation is characterized by a unique “peak” or “threshold” value of public scrutiny, denoted s*, such that: (1) for all values of public scrutiny below the threshold (s < s*), the relation is positive (∂E[b]/∂s > 0); and (2) for all values of public scrutiny above the threshold (s > s*), the relation is negative (∂E[b]/∂s < 0). The Appendix derives technical conditions that
guarantee unimodality, and shows that a variety of cost functions satisfy these conditions. Figure
2 illustrates the relation between expected bias and public scrutiny for standard linear and convex
cost functions. Having discussed the economic theory that motivates our predictions, next we turn to our sample and measurement choices.
3.1 Sample
We construct our sample using data from CRSP, Compustat, I/B/E/S, Thomson-Reuters,
RavenPack, and Audit Analytics from 2004 to 2012. Our sample begins in 2004, when data on our
measures of the level of public scrutiny and misreporting first become available, and ends in 2012
when our measure of misreporting ends. Column (1) of Panel A of Table 1 shows that there are a
total of 46,148 firm-year observations over this period on the merged CRSP/Compustat universe
with non-missing income, total assets, and market value. After requiring additional information
needed to construct the control variables used in our analysis (e.g., plant, property, and equipment,
sales growth, debt, buy-and-hold returns over the fiscal year), Column (2) shows that the sample
drops to 41,831 firm-years. However, this sample still represents 90% of the total available
Compustat/CRSP population, suggesting our sample is comprised of a wide variety of firms. Variable definitions follow Armstrong et al. (2013) and are provided in Table 1. All variables are calculated
net of any restatements and winsorized at the 1st/99th percentiles. Consistent with significant cross-
sectional variation in the level of public scrutiny, Panel A suggests the interquartile range of total
assets (sales) is $146 million to $2.67 billion ($68 million to $1.6 billion).
A distinguishing feature of our analysis is that we predict that the relation between
misreporting and public scrutiny takes a specific, non-linear functional form. While our focus on
the shape of the relation mitigates concerns about omitted variables and reverse causality––an
alternative explanation would need to explain why the relation between misreporting and public
scrutiny flips sign––it poses other challenges. Indeed, one concern with “identification by
functional form” is that the sampling variation in the empirical measure of interest ––public
scrutiny––may not be sufficiently large to replicate the entirety of the theoretical shape.
Consider how this concern might affect our empirical analysis. Figure 2 shows the shape
of the relation predicted by theory. However, ex ante, we do not know where firms in our sample
will fall on the x-axis (public scrutiny). It could be that, within our sample, investors have
sufficiently high public scrutiny such that all firms fall to the right of the “peak” or “threshold”. In
such a circumstance, we would only observe a portion of the theoretical shape. For example, if all
of the firms in our sample fall to the right (left) of the threshold, we would only observe a negative
(positive) relation. Thus, for our tests of the shape of the functional form to be meaningful, we
need to maximize sampling variation in our measures of public scrutiny: we need to observe firms on both sides of the threshold.
In this sense, our empirical tests are joint tests of a unimodal shape and a sample with sufficiently broad variation that we observe both sides of the threshold.
This concern informs how we measure public scrutiny. In principle, there are a variety of
empirical proxies for the level of public scrutiny. We consider several popular proxies in the
literature commonly thought to be associated with valuation and monitoring channels: analyst
following, institutional investor following, and media coverage. There are two key advantages of
these proxies. First, Panel B of Table 1 suggests there is considerable cross-sectional variation in
these proxies in our sample, which helps remedy concerns that the sampling variation might be
insufficient to detect a unimodal relation. Second, a large body of prior research suggests each of
these proxies is related both to greater value-relevance of reported earnings, and to greater
monitoring, and thus captures both the “valuation” and “monitoring” channels that motivate our
analysis (see Figure 1).5 In this regard, the findings in prior work establish construct validity of our proxies.
To reduce the effect of measurement error, we use principal component analysis to extract
the common component to analyst following, institutional investor following and media coverage,
and use the common component as our primary measure of the level of public scrutiny (Scrutiny).
In addition, to be faithful to the underlying theory, we lag our empirical measures by one year
relative to the potential period of misreporting. This ensures we measure public scrutiny prior to any misreporting.
5 For example, the literature consistently finds that analysts not only provide valuable information to investors and
enhance the value-relevance of earnings, but also perform an important monitoring role that can help deter
misreporting (e.g., Bushman, Piotroski, and Smith 2004; Yu, 2008; Dyck, Morse, and Zingales, 2010). With
fiduciary responsibilities towards their clients, institutional investors play important roles in impounding information
into prices (Piotroski and Roulstone, 2004) and in monitoring managers (e.g., Bushee, 1998; Bowen, Rajgopal, and
Venkatachalam, 2008). Similar to analysts and institutions, the recent literature also suggests that media coverage
can play a dual role: it disseminates information and facilitates monitoring and deterrence (e.g., Miller and Skinner,
2015; Blankespoor, deHaan, and Zhu, 2018).
Variables are normalized to be mean zero with standard deviation of one prior to this analysis.
Panel A shows that the first principal component explains 66.3% of the variation in the three
variables and has an eigenvalue near two. The first principal component loads positively on all
three variables (loadings of 0.413, 0.445, and 0.366 on Analysts, Institutions, and Media
respectively) and is the only component with an eigenvalue greater than one.
Panel B reports descriptive statistics for our measure. The mean and median values of
Scrutiny are 0.00 and –0.280 respectively, suggesting a right skew toward high-scrutiny firms.6
Panel C shows that Scrutiny is highly correlated with each of the underlying components. Scrutiny has a Spearman (Pearson) correlation of 0.84 (0.87) with analyst following and 0.91 (0.90) with institutional investor following.
As with all theory-grounded empirical tests, our analyses are joint tests of our theoretical
predictions and our measures. We predict the existence of two countervailing economic forces, not
the absence of other forces. That is, we do not predict that the common component to analyst
following, institutional investor following, and media coverage, affects misreporting only through
the two countervailing effects of valuation and monitoring, but rather that such forces exist in the
data. In this regard, the failure to find a unimodal relation could mean that (i) the forces we study
are not present in the data, or (ii) our empirical proxies are sufficiently noisy proxies of the
underlying economic forces that our tests lack power. To the extent that our empirical proxies are
associated with misreporting through channels other than those discussed above––monitoring and valuation––our tests of the shape of the relation may be confounded.
6 The mean of a linear combination of normalized variables is zero by construction.
Our theory defines bias as the divergence between reported earnings and true earnings. In an effort to adhere closely to the theory, we measure misreporting
using the difference between earnings as originally reported and earnings as restated, and focus on restatements due to intentional misrepresentation. We identify restatements using Audit Analytics, which covers restatements announced via Form 8-K between 2004 and 2016, and provides information on the date of the announcement, the
period to which the restatement applies, and the effect of the restatement on net income. The database includes all such restatements announced between 2004 and 2016. We use data on restatement announcements through 2016 to
measure misreporting through 2012. In doing so, we allow for a minimum of four years between the end of the misreporting period and any subsequent restatement announcement.
Next, we match the Audit Analytics database to our sample (described in Table 1) based
on the period that was restated. We define Bias as the amount of restated earnings due to intentional
misrepresentation, scaled by beginning total assets and expressed in basis points (i.e., 5.23% of assets is expressed as 523). Because our theory speaks to the amount of misreporting, we use Bias as
our primary measure of misreporting. In subsequent analyses, we report results from repeating our
tests measuring misreporting using an indicator variable equal to one if Bias > 0 (Restate).
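The Bias and Restate constructions reduce to a simple scaling. The sketch below uses illustrative names and numbers (the paper builds these from Audit Analytics fields):

```python
def bias_bps(original_ni, restated_ni, lag_assets):
    """Restatement-based bias, scaled by beginning assets, in basis points.

    Positive values mean originally reported earnings were overstated.
    (Names and inputs are illustrative, not Audit Analytics field names.)
    """
    return (original_ni - restated_ni) / lag_assets * 10_000

# A firm that overstated income by 5.23% of beginning assets:
bias = bias_bps(original_ni=15.23, restated_ni=10.0, lag_assets=100.0)
restate = int(bias > 0)  # the Restate indicator
# bias is 523 basis points, matching the text's example
```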
7 Firms begin disclosing 8-K item 4.02 “Non-Reliance on Previously Issued Financial Statements or a Related Audit
Report or Completed Interim Review” in August 2004. We remove from our sample intentional restatements
missing data on the announcement date, the period for which the restatement applies, or the effect of the restatement
on net income. By removing these observations from the sample, we ensure we do not erroneously classify the
observation as having zero misreporting. The Audit Analytics database provides us with a broader sample than the
GAO database used in prior work (see Karpoff et al., 2017 for a review of the different measures and databases).
For each year of our sample, we tabulate the number of restatements of earnings originally reported between 2004 and 2012 due to intentional misrepresentation, the probability of restatement (average Restate), and
the average values of Bias. Consistent with prior literature, Panel B shows that the probability of
restatement due to intentional misrepresentation is approximately 0.01 in every year of our sample,
and that average bias is consistently around 0.13 basis points of assets.8
Panel B presents correlations between our measures of public scrutiny, misreporting, and
key control variables. This panel shows that our measures of misreporting (Bias and Restate) are
only weakly correlated with the composite measures of public scrutiny (Scrutiny) and its
components (Analysts, Institutions, and Media). Spearman (Pearson) correlations are between 0.01
and 0.03 (0.00 and 0.01). Consistent with a unimodal relation, our subsequent tests show that for
approximately 40% (60%) of the data, the relation is significantly negative (positive), such that the two opposing relations largely offset in an unconditional correlation.
Our model explicitly predicts that the ERC is larger for high-scrutiny firms (see Section
2.3). Accordingly, we validate our measure of public scrutiny by estimating its relation to the ERC.
Although prior work has effectively shown that the ERC is increasing in the three underlying
components of our measure (e.g., analyst following, media coverage, and institutional investor
following), we nonetheless conduct the validation test for the sake of completeness. If we do not
find a positive relation, this would suggest that the empirical measure does not adequately capture
8 Average Bias is small because 99% of observations do not misreport, and thus have zero bias. Among those who
do misreport (i.e., among firms with Restate = 1) average Bias is 21 basis points.
We first estimate a price-levels representation of the ERC, which is a literal representation of the ERC in
Section 2.3. Specifically, we regress price per share (Price) on reported earnings per share
(Earnings), public scrutiny (Scrutiny), and the interaction term. Prior empirical work suggests
return-based specifications of the ERC are generally more robust (e.g., Christie, 1987; Kothari and
Zimmerman, 1995), so we also estimate a returns-based specification of the ERC. Specifically, we
regress the market-adjusted buy-and-hold return over the twelve months ending three months after
the fiscal year end (BHAR) on the earnings surprise (Surprise), measured as the forecast error from
a random walk model of annual earnings scaled by beginning-of-period price, controlling for firm
risk (Beta), size (Ln(MV)), and growth opportunities (BM).9
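As an illustration of this kind of returns-based interaction design, the regression can be sketched on simulated data (a sketch only: the variable names mirror the text, but the data and coefficient values are invented for illustration, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated firm-year data (illustrative only; not the paper's sample)
scrutiny = rng.standard_normal(n)           # standardized public scrutiny
surprise = rng.standard_normal(n) * 0.05    # random-walk forecast error / price
# Build returns so the ERC rises with scrutiny (assumed interaction coef = 0.5)
bhar = 2.0 * surprise + 0.5 * surprise * scrutiny + rng.standard_normal(n) * 0.1

# OLS: BHAR = a + b1*Surprise + b2*Scrutiny + b3*Surprise*Scrutiny + e
X = np.column_stack([np.ones(n), surprise, scrutiny, surprise * scrutiny])
coef, *_ = np.linalg.lstsq(X, bhar, rcond=None)
erc_base, erc_interaction = coef[1], coef[3]
```

A positive coefficient on the interaction term (`erc_interaction`) is what a valuation channel would look like in this setup: the price response to an earnings surprise grows with scrutiny.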
Column (1) of Panel B presents the results from estimating the relation between the return-
based ERC and public scrutiny. Column (2) presents results after controlling for the relation
between the ERC and firm risk, size, and growth opportunities. Column (3) presents results from
a similar specification as Column (2), except that all of the independent variables are transformed
into scaled quintile ranks that range from 0 to 1. Using ranks ensures that our independent variables
are of similar scale, minimizes the effects of outliers, and allows for a meaningful comparison of
the relative economic significance of each variable (i.e., each coefficient measures the difference
between the top and bottom quintiles).
Consistent with the extant literature, and regardless of empirical specification, we find a
significantly positive relation between public scrutiny and the ERC. In addition, Column (4) of
9 To ensure that these variables reflect the information available to investors at the time, we compute all measures
using unrestated values (i.e., values as initially reported). This research design choice is consistent with the ERC
featured in our model, which is measured relative to reported earnings that potentially include bias. We use a
random walk model as opposed to the analyst consensus forecast because a substantial portion of our sample (24%)
is missing analyst forecasts. See Teoh and Wong (1993) and Ferri, Zheng, and Zou (2018) for related papers that use
returns-based specifications to estimate the ERC in extant theory models.
coeff. = 0.326) and twice that of firm growth (Surprise*BM coeff. = –0.184). We conclude that
our empirical measure adequately captures the level of public scrutiny and the valuation channel.
Our model is premised on the notion that the cost of misreporting (e.g., the probability of
detection) is larger for high-scrutiny firms. In this section, we empirically validate this “monitoring
channel” by estimating the relation between public scrutiny and the probability of detection using
novel FOIA data on formal SEC investigations. We begin by obtaining data on all formal SEC
investigations during our sample period from Blackburne et al. (2019). These data provide an
exhaustive set of SEC investigations regardless of whether the SEC brought charges, and can be
thought of as the SEC’s “suspect list” of firms. We construct an indicator variable, SECInvestigate,
equal to one if the firm was under investigation by the SEC during the fiscal year.
In Panel A of Table 5, we sort firms into quintiles based on the level of public scrutiny. For
each quintile, we report the probability of an SEC investigation in the subsequent year. Consistent
with the monitoring channel, we find that the likelihood of an SEC investigation is monotonically
increasing in public scrutiny and that the increase in likelihood from each quintile to the next is
statistically significant (p-values < 0.01). These increases also appear economically significant,
ranging from 21.3% to 60.3%. Notably, Panel A presents similar results regardless of whether we
use the composite measure of public scrutiny, Scrutiny, or the individual components (Analystst,
Institutionst, and Mediat). In the top (bottom) quintile of public scrutiny, we find 27% (6%) of
firm-years are under SEC investigation.
In Panel B of Table 5, we estimate regressions of SECInvestigatet+1 as a function of public scrutiny and control variables from the literature on
financial misreporting (e.g., Burns and Kedia, 2006; Efendi, Srivastava and Swanson, 2007;
Armstrong et al., 2013). Column (1) presents results from a regression without any control
variables, Column (2) includes control variables and year fixed effects, and Column (3) includes
control variables, and year and industry fixed effects.10 Finally, to verify that the association
between public scrutiny and the probability of an investigation is not itself unimodal, in Column
(4) we add the 2nd-order polynomial term, Scrutiny2, to the regression. Across all specifications,
the coefficient on Scrutiny is positive and highly statistically significant (t-stats range from 13.76
to 20.31), while the coefficient on Scrutiny2 is insignificantly different from zero (t-stat = 0.95).
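The logic of this check, adding a squared term and inspecting its coefficient, can be sketched on simulated data (the data-generating values below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

s = rng.standard_normal(n)                  # standardized scrutiny
# A monotone (linear) outcome: the squared term should be near zero
y_linear = 0.3 * s + rng.standard_normal(n) * 0.5
# A unimodal (hump-shaped) outcome: the squared term should be negative
y_unimodal = -0.4 * s**2 + rng.standard_normal(n) * 0.5

# Regress each outcome on [1, s, s^2] and compare the squared-term coefficients
X = np.column_stack([np.ones(n), s, s**2])
b_lin, *_ = np.linalg.lstsq(X, y_linear, rcond=None)
b_uni, *_ = np.linalg.lstsq(X, y_unimodal, rcond=None)
```

In the monotone case the coefficient on `s**2` is indistinguishable from zero; in the unimodal case it is reliably negative, which is the signature the paper's polynomial tests look for.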
Collectively, these findings provide direct evidence of the monitoring channel––the probability of
detection is increasing in the level of public scrutiny.
We begin our primary analysis by graphically presenting the shape of the relation between
the level of public scrutiny and misreporting. In Panel A of Table 6, we sort firms into quintiles
based on the level of public scrutiny. For each quintile, we report the average value of misreporting
in the subsequent year (i.e., Biast+1). Consistent with a unimodal relation, we find that misreporting
is monotonically increasing in the first four quintiles of Scrutiny, and declines sharply in the fifth
quintile. Moreover, the increase from the second to the third quintile is significant (p-value < 0.01),
as is the decline from the fourth to the fifth quintile; both changes
10 We use a linear probability model. Inferences are unchanged if we use a probit model. All independent variables
are normalized to be mean zero and standard deviation of one, and are lagged one period relative to the dependent
variable. Industry classifications are based on Fama-French twelve industry groups.
appear economically significant (68.8% and -37.0%, respectively). Panel A also presents results
from repeating this procedure separately for analyst following (Analystst), institutional investor
following (Institutionst), and media coverage (Mediat). We find a similar pattern across these
individual components.
In Panel B, we repeat the procedure and present average values of an indicator variable for
misreporting (Restatet+1) and find similar results. Finding a similar pattern across multiple
measures of misreporting and multiple measures of the level of public scrutiny suggests the
evidence of a unimodal relation is not an artefact of the specification of our subsequent statistical
tests: one can observe it in the raw data. Next, we conduct multiple formal statistical tests.
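The quintile-sort procedure described above can be sketched as follows, on simulated data with a built-in hump shape (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

scrutiny = rng.standard_normal(n)
# Simulated bias that peaks at moderate scrutiny (hump shape, clipped at zero)
bias = np.maximum(0.2 - 0.1 * (scrutiny - 0.5)**2
                  + rng.standard_normal(n) * 0.02, 0)

# Sort observations into quintiles of scrutiny and average bias within each
edges = np.quantile(scrutiny, [0.2, 0.4, 0.6, 0.8])
quintile = np.searchsorted(edges, scrutiny)          # quintile index 0..4
mean_bias = np.array([bias[quintile == q].mean() for q in range(5)])
```

With a true hump, the quintile means rise and then fall, so the pattern is visible in the raw sorted data before any regression is run, which is the point of this kind of pre-test.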
We formally test for a unimodal relation between misreporting and the level of public scrutiny
by estimating polynomial regression models that include both linear and 2nd-order polynomial
terms (e.g., Scrutiny and Scrutiny2). If the relation is unimodal, we expect the 2nd-order polynomial
term to be negative. Specifically, we estimate:

Biast+1 = β0 + β1Scrutinyt + β2Scrutinyt2 + Γ′Controlst + εt+1,   (2)

where Bias and Scrutiny are as previously defined and Controls is a vector of control variables
based on the prior literature examining restatements (e.g., Burns and Kedia, 2006; Efendi,
Srivastava and Swanson, 2007; Armstrong et al., 2013). All independent variables are normalized
to be mean zero and standard deviation of one, and are lagged one period relative to the measure of
misreporting. Table 7 presents the results. Column (1) presents results from a regression without
any control variables, Column (2) includes control variables and year fixed effects, and Column (3)
includes control variables, and year and industry fixed effects. Consistent
with a unimodal relation, we find a negative and significant coefficient on Scrutiny2 across all our
specifications (t-stat ranges between –3.72 and –4.57). In addition, the signs and significance of
our controls (e.g., positive coefficients on ROA, InterestCov, SalesGrowth, and negative
coefficients on Capital and Financing) are generally consistent with those in prior research (e.g.,
Armstrong et al., 2013). In Column (4), we remove the 2nd-order polynomial term and report results
from a linear specification. We find that the coefficient on the linear term Scrutiny is insignificant
(t-stat = –0.35). This is consistent with the notion that the correlation between public scrutiny and
misreporting is not significantly different from zero when pooled across all observations, due to
the two countervailing forces offsetting one another.
We next conduct two sets of tests designed to assess the robustness of our results to
controlling for other potential non-linearities in the data. First, we follow Fang, Huang, and Wang
(2017) and control for a non-linear relation between intentional restatements and two measures of
the industry rate of restatements due to error (PctError and ErrorAmount). PctError is the
percentage of observations in the respective industry-year that were restated due to unintentional
error (where unintentional errors are those restatements not classified as intentional manipulation).
ErrorAmount is the average difference between net income and restated net income due to error in
the respective industry-year, scaled by the standard deviation of net income among non-restating
firms in the industry-year. These variables control for industry earnings quality.
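A minimal sketch of constructing an industry-year rate like PctError is below (simulated data; the 12 industry groups and the 3% error rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6000

industry = rng.integers(0, 12, n)            # e.g., Fama-French 12 groups
year = rng.integers(2004, 2013, n)
error_restate = rng.random(n) < 0.03         # restated due to unintentional error

# PctError: share of observations restated due to error in each industry-year,
# assigned back to every observation in that industry-year cell
pct_error = np.zeros(n)
for ind in np.unique(industry):
    for yr in np.unique(year):
        cell = (industry == ind) & (year == yr)
        if cell.any():
            pct_error[cell] = error_restate[cell].mean()
```

Each firm-year then carries its industry-year error rate as a control, so that industry-wide earnings-quality variation is held fixed when estimating the scrutiny terms.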
Panel A of Table 8 presents results. Column (1) presents results from estimating eqn. (2)
including PctError and PctError2. Column (2) presents results from estimating eqn. (2) including
ErrorAmount and ErrorAmount2. We find a negative and significant coefficient on ErrorAmount2
(t-stats = –3.33 and –2.69),
suggesting a non-linear relation between the number of restatements due to error in the industry
and misreporting. More importantly, across all specifications, we continue to find that the
coefficient on Scrutiny2 remains negative and strongly significant (t-stats range from –4.02 to
–3.72). Moreover, the coefficient magnitudes in Table 8 are virtually unchanged from those
presented in Table 7. This suggests that our empirical findings are distinct from industry-wide
variation in earnings quality.
Second, we consider the possibility of non-linear relations between misreporting and our control
variables. While we are not aware of any theory that would
predict a unimodal relation between misreporting and our control variables (e.g., a unimodal
relation between misreporting and growth options), we nonetheless control for this possibility.
Specifically, we estimate eqn. (2) after including both 1st- and 2nd-order polynomials of all our
control variables.
Panel B of Table 8 presents results. We note two findings. First, the coefficients on the
non-linear control variables are generally insignificant. Only 2 of the 11 coefficients on the non-
linear control variables are statistically significant at the 10% level or lower. The exceptions are
Size2, which is statistically negative (t-stat = –2.40), and Intangibles2, which is statistically positive
(t-stat = 2.45). Second, the coefficient on Scrutiny2 remains negative and strongly significant (t-
stat = –2.74). These findings suggest that our results are not attributable to non-linear relations
between misreporting and our control variables.
We next use the number of times professional investors downloaded the firm’s EDGAR filings
during the fiscal year (EDGAR) as an alternative measure of public scrutiny (Dyer, 2018).11 Table
9 reports results from estimating
eqn. (2) using EDGAR downloads to measure public scrutiny. Consistent with our earlier findings,
we find the coefficient on EDGAR is insignificant in the linear specification, and we find a
negative and significant coefficient on the 2nd-order polynomial term.
We next assess the robustness of our results to four alternative measures of misreporting.
First, we use the natural logarithm of one plus the dollar amount of earnings restated due to
intentional misrepresentation. Second, we use an
indicator variable equal to one if Biast+1 > 0 (Restatet+1). This measure is directly related to Bias,
except that it contains no information about the magnitude of the restatement. It is thus unaffected
by issues related to scale and magnitude, and is perhaps the most common measure of misreporting
used in the literature. Columns (1) to (4) of Table 10 present results from estimating eqn. (2) using
these alternative measures of misreporting. Consistent with our earlier findings, the coefficient on
Scrutiny2 is negative and significant (t-stats range from –2.12 to –3.99). This suggests our earlier
findings are robust to these alternative measures of misreporting.
A potential limitation of all measures of misreporting is that they are conditional on the
firm being caught. In this regard, they are the product of both the economic incidence of
misreporting and the detection of misreporting. While we have no reason to predict that the
probability of detection is unimodal in public scrutiny (and the evidence in Table 5 suggests it is
not), to address this potential concern, we estimate eqn. (2) within the sample of firms where
misreporting was detected, thereby conditioning on detection.
11 We thank Travis Dyer for providing data on EDGAR downloads by professional investors.
Columns (5) and (6) present results from estimating eqn. (2) using Biast+1 as the
dependent variable within the sample of firm-years where Restate = 1. Consistent with our earlier
findings, the coefficient on Scrutiny2 is negative and significant (t-stats = –3.12 and –4.05). This
suggests that our earlier results are driven by unimodality in the underlying economic incidence of
misreporting, rather than by detection.
Finally, we use a binary indicator variable for whether the respective firm-year was the
subject of an SEC Accounting and Auditing Enforcement Release (AAER). We obtain data on
AAERs from the Center for Financial Reporting and Management. Using this database reduces
our sample period to 2004-2011, and our sample size to 37,672 observations. Columns (7) and (8)
report results from estimating eqn. (2) using AAERt+1 as the dependent variable. Consistent with
our predictions, the coefficient on Scrutiny2 is negative and strongly significant (t-stats = –3.20
and –3.28). Collectively, we interpret the findings in Table 10 as suggesting that the unimodal
relation is robust across alternative measures of misreporting.
In this section, we aim to provide further evidence on the shape of the relation by estimating
spline regressions that treat the relation as piecewise linear. Specifically, we estimate the relation
on either side of a threshold, τ, and test whether the relation below (above) the threshold is positive
(negative):

Biast+1 = β0 + β1max(Scrutinyt – τ, 0) + β2min(Scrutinyt – τ, 0) + Γ′Controlst + εt+1,   (3)

where all variables are as previously defined. We predict β1 < 0 and β2 > 0.
We do not observe the value of τ in our sample. Accordingly, we use two different approaches to estimate these
models. In the first approach, we use the mean level of Scrutiny as a rough approximation of the
threshold (τ = 0), and estimate the relation for observations below and above the mean. Because
Scrutiny is standardized to be mean zero (see Table 2), this approach is equivalent to estimating
eqn. (3) with τ = 0.
In the second approach, we use the multivariate adaptive regression spline method (MARS)
to simultaneously estimate both the threshold that minimizes the mean-squared error (denoted τ*),
and the sign of the relation on either side of the threshold (e.g., Friedman, 1991). The advantage
of this approach is that MARS can reliably identify not only the slope coefficients of the piecewise
linear function around the threshold, but also the location of the threshold itself.
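MARS itself builds hinge-function bases adaptively; a simplified analogue that conveys the idea is a grid search over candidate thresholds, fitting a two-piece linear spline at each and keeping the threshold with the lowest mean-squared error (simulated data, with an assumed kink at 0.3 for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

s = rng.standard_normal(n)                       # public scrutiny
true_tau = 0.3                                   # assumed kink location
# Tent-shaped outcome: slope +0.5 below the kink, -0.3 above, plus noise
y = (-0.3 * np.maximum(s - true_tau, 0)
     + 0.5 * np.minimum(s - true_tau, 0)
     + rng.standard_normal(n) * 0.1)

def fit_spline(tau):
    """Two-piece linear spline with a kink at tau; returns (coef, mse)."""
    X = np.column_stack([np.ones(n),
                         np.maximum(s - tau, 0),   # slope above the threshold
                         np.minimum(s - tau, 0)])  # slope below the threshold
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((y - X @ coef) ** 2)
    return coef, mse

# Grid-search the threshold between the 10th and 90th percentiles of s
grid = np.quantile(s, np.linspace(0.1, 0.9, 41))
fits = [(tau, *fit_spline(tau)) for tau in grid]
best_tau, best_coef, _ = min(fits, key=lambda f: f[2])
```

The recovered `best_tau` sits near the true kink, with a negative slope above it and a positive slope below it, mirroring the β1 < 0 and β2 > 0 predictions in the text.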
Table 11 presents our results. Columns (1) and (2) present results from estimating
regressions where the threshold is defined at the mean of Scrutiny (i.e., τ = 0). In these
specifications, β1 measures the relation for observations with below-mean values, and β2 measures
the relation for observations with above-mean values. Column (1) excludes control variables, and
Column (2) includes control variables, year fixed effects and industry fixed effects. Consistent
with a unimodal relation, in both specifications we find a positive slope below the mean (Scrutiny
– τ < 0, t-stats = 3.46 and 3.26) and a negative slope above the mean (Scrutiny – τ ≥ 0, t-stats = –3.84
and –2.37). Notably, the positive slope is approximately three to four times steeper than
the negative slope (e.g., 0.189 vs. –0.046). This is consistent with the general shape of the unimodal
relation predicted by our model.
Columns (3) and (4) present results from using MARS to simultaneously estimate both τ*
and the corresponding slope coefficients. There are three noteworthy findings. First, the threshold
location of τ* at the 61st percentile is generally consistent with the univariate plots in Figure 3,
which suggest the relation changes sign in the top two quintiles. Second, the location of τ* is not
affected by the inclusion of control variables or fixed effects. This is consistent with the notion
that, because we focus on estimating a specific non-linear functional form, omitted variables and
endogeneity are an unlikely explanation for our findings. Third, consistent with a unimodal
relation, we continue to find a positive slope below the threshold (Scrutiny – τ < 0, t-stats = 3.51 and
3.35) and a negative slope above the threshold (Scrutiny – τ ≥ 0, t-stats = –3.75 and –2.23). Thus, for
observations in the top two quintiles, our results suggest small increases in scrutiny will decrease
misreporting. However, for observations in the bottom three quintiles, our results suggest small
increases in scrutiny will increase misreporting. Collectively, our tests provide robust evidence of
a unimodal relation.
A distinguishing feature of the preceding analysis is that we focus our tests on the shape of
the relation between misreporting and public scrutiny. Our focus on the shape of the relation should
mitigate concerns about endogeneity: an alternative explanation for a unimodal relation would
need to explain the existence of a threshold where the relation is positive on one side, and negative
on the other. While we cannot rule out such a possibility, it seems unlikely.12 Nevertheless, in this
section, we use a regulatory “shock” to public scrutiny as an alternative research design to mitigate
any residual concerns (Glaeser and Guay, 2017). Specifically, we examine how mandatory
EDGAR reporting of managers’ equity trades affects misreporting.
12 Standard textbook forms of endogeneity, i.e., a non-zero correlation between public scrutiny and the regression
error term, generally do not generate a specific non-linear functional form. Our empirical approach is known as
“identification by functional form” in the economics literature (e.g., Lewbel, 2018).
Prior to July 2003, firms filed Form 4 on EDGAR on a voluntary basis. Firms that did not
wish to file Form 4 on EDGAR were required to mail the form directly to the SEC. The SEC
retained those forms in the basement of its Washington, D.C. headquarters and did not upload the
mail-filed forms to EDGAR.13 At the time, Thomson Reuters sent a courier to the SEC to scan the
mail-filed forms and provided the scans to institutional subscribers. Thus, the practice of mail-filing
shielded managers from scrutiny by a large swathe of the public.14 After June 2003, all firms
were required to report Form 4 on EDGAR within two business days. Related to our focus on
mandatory EDGAR reporting of managers’ equity trades, a concurrent paper by Liu (2019) tests
the predictions of our model using the creation of EDGAR in 1993. Liu (2019) finds evidence of
a positive effect (no effect) on discretionary accruals among firms with low (high) analyst and
institutional following.
Mandatory EDGAR reporting of managers’ equity trades occurs in 2003, and thus pre-
dates the availability of our primary measure of misreporting (Bias) and media coverage (Media),
which both begin in 2004. Consequently, for this analysis, we use AAERs as our measure of
misreporting and focus on analyst following and institutional investor following as our measures
of public scrutiny.15 As a result, the sample in this analysis spans 1995-2011 and covers 85,895
firm-years. Panel A of Table 12 presents descriptive statistics for this sample.
13 https://www.sec.gov/rules/final/33-8230.htm
14 For example, HealthSouth has no Form 4s on EDGAR before 2003 despite having significant manager trades on
Thomson Reuters. See e.g., Chang and Suk (1998).
15 We obtain data on AAERs from 1995 to 2011 from the Center for Financial Reporting and Management.
Treated is a binary indicator variable equal to one for firms affected by the regulation (i.e., firms
that were not voluntarily filing their Form 4s on EDGAR prior to the mandate) and zero otherwise.
Post is a binary indicator variable equal to one for fiscal years ended after June 30, 2003, when the
mandate became effective. Our vector of control variables includes the same controls as in eqn.
(2), including industry and year fixed effects. Note that year fixed effects absorb the main effect
of Post in our regressions. The coefficient on the interaction term, Post*Treated, represents the
average treatment effect. Our treatment (control) group is comprised of 57,207 (28,688) firm-years
for firms that were not (were) voluntarily filing their Form 4s on EDGAR.
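The difference-in-differences specification described above can be sketched as follows (simulated data; the group shares and the 0.02 treatment effect are assumed values for illustration, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8000

treated = rng.random(n) < 0.6     # firms not voluntarily filing on EDGAR
post = rng.random(n) < 0.5        # fiscal years after the mandate
effect = 0.02                     # assumed treatment effect on misreporting
# Outcome with group and period level differences plus the interaction effect
y = (0.05 + 0.01 * treated + 0.015 * post
     + effect * (treated & post) + rng.standard_normal(n) * 0.05)

# OLS with the Post*Treated interaction; its coefficient is the DiD estimate
X = np.column_stack([np.ones(n), treated, post, treated & post]).astype(float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = coef[3]
```

The interaction coefficient recovers the assumed effect because the group-level and period-level differences are absorbed by the `treated` and `post` main effects.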
Column (1) presents results from estimating the model in eqn. (4). The average treatment effect is
insignificant (t-stat = –1.44), suggesting that the increase in public scrutiny had no average effect
when pooled across all observations. One potential reason for this is that the assumption of parallel
trends might be violated. In Column (2), we explore this possibility by estimating the difference
in misreporting between treatment and control groups in the years leading up to and following the
regulatory mandate. Specifically, we estimate eqn. (4) replacing Post with the indicator variables:
I(Year ≤ 1999), I(Year = 2000), I(Year = 2001), I(Year = 2003) and I(Year ≥ 2005). In this
specification, the year immediately preceding the regulatory mandate (Year = 2002) is the omitted
group. If parallel trends are violated, we expect to see differences between treatment and control
groups in the years leading up to the regulatory mandate.16 Column (2) shows no evidence of a
significant difference between treatment and control groups in the pre-period.
16 See Christensen, Hail, and Leuz (2016), Christensen et al. (2017), Duguay, Rauter, and Samuels (2019), and
Samuels (2019) for examples of studies that use this design to test for parallel trends.
While most prior work on regulatory shocks focuses on the average treatment effect, a
distinguishing feature of our analysis is that we predict that the shape of the relation between public
scrutiny and misreporting is unimodal. Tests of a unimodal relation entail comparing the treatment
effect across subsamples of firms. Specifically, if the relation between public scrutiny and
misreporting is unimodal then theory would predict: (i) firms with low levels of public scrutiny
prior to the shock are on the left side of the theoretical peak in misreporting, and thus a shock that
increases scrutiny increases misreporting (moving them closer to the peak) and (ii) firms with high
levels of public scrutiny prior to the shock are on the right side of the theoretical peak in
misreporting, and thus a shock that increases scrutiny would decrease misreporting (moving them
further away from the peak). See Figure 4 for a graphical representation of these predictions. In
this regard, our analysis highlights that the standard practice of narrowly focusing on the sign and
significance of the average treatment effect can mask important heterogeneity in the effect of the
regulatory shock.
We next estimate heterogeneous treatment effects. Specifically, we modify eqn. (4) to estimate different treatment effects for each
quintile of public scrutiny. Table 13 presents results from estimating eqn. (4) after interacting
Treated * Post (and all associated main effects) with indicator variables for whether the firm
appears in each of the respective quintiles prior to the EDGAR mandate (Scrutiny_Q2 through
Scrutiny_Q5, where Scrutiny_Q1 is the omitted group). In this specification, the coefficient on
Treated * Post measures the treatment effect in the first quintile of Scrutiny, and the coefficient on
Treated * Post * Scrutiny_Qk measures the incremental treatment effect in the kth quintile of
Scrutiny.17
In Column (1) of Panel A, we present results using analyst following to measure public
scrutiny, and in Column (2) we present results using institutional investor following. Regardless
of measure, we find: (i) that the treatment effect for firms in the first quintile of public scrutiny
(Treated * Post) is positive and highly significant (t-stats = 2.56 and 3.53), and (ii) that the
incremental treatment effect for firms in the fifth quintile of public scrutiny (Treated * Post *
Scrutiny_Q5) is negative and significant (t-stats = –3.91 and –3.62). Panel B reports the total
treatment effects for each quintile of public scrutiny. Regardless of specification, we find evidence
of a significant treatment effect only in the tails of public scrutiny. Consistent with a unimodal
relation between public scrutiny and misreporting (see Figure 4), regardless of specification, we
find the total treatment effect in the top quintile of public scrutiny is reliably negative and the total
treatment effect in the bottom quintile of public scrutiny is reliably positive (p-values<0.01).
While we cannot definitively rule out confounding factors to explain these results, an omitted
variable would need to: (i) be positively correlated with misreporting; (ii) vary over time with the
regulatory mandate (first difference); (iii) vary cross-sectionally with the classification of affected
and unaffected firms (second difference); and (iv) result in the difference between the two groups
flipping sign based on the level of public scrutiny (third difference). Collectively, these results
offer the first evidence that EDGAR dissemination of managers’ equity trades affects misreporting,
and in a manner consistent with a unimodal relation between public scrutiny and misreporting.
6. Conclusion
17 For each firm we compute the average value of Analysts (and Institutions) prior to the regulation, and rank firms
into quintiles based on the respective pre-period average.
This paper examines how the ex ante level of public scrutiny influences a manager’s
subsequent decision to misreport. The conventional wisdom in the literature is that high levels of
public scrutiny facilitate monitoring and increase the cost of misreporting by increasing the
probability of detection, suggesting a negative relation. However, public scrutiny also increases
the weight that investors place on earnings in valuing the firm. This in turn increases the valuation
benefit of misreporting, suggesting a positive relation. We formalize these two countervailing forces
in a parsimonious model of misreporting, and use the insights from the model to guide our
subsequent empirical tests.
We show that the theoretical relation between public scrutiny and misreporting is
unimodal: there is a unique threshold, below which the relation is positive, and above which the
relation is negative. The model thus generates predictions for misreporting in
both high- and low-scrutiny environments. We test the novel prediction of our model––that the
shape of the relation is unimodal––using multiple empirical measures of misreporting and public
scrutiny, and three complementary research designs.
In the first research design, we estimate polynomial regressions that include both linear
and 2nd-order polynomial terms. Consistent with our theoretical predictions, we find robust
evidence of a negative relation between misreporting and the 2nd-order polynomial term on public
scrutiny. In the second research design, we use state-of-the-art spline regression models that treat
the shape of the relation as piecewise linear around a threshold level of public scrutiny. Consistent
with a unimodal relation, we find that the slope to the left (right) of the threshold is reliably positive
(negative). In the third research design, we exploit a regulatory shock that increased investors’
scrutiny of managers’ equity trades, and estimate a specification that allows for
heterogeneous treatment effects. Consistent with a unimodal relation between public scrutiny and
misreporting, we predict and find that the EDGAR mandate increases (decreases) misreporting for
firms with extremely low (high) levels of scrutiny prior to the mandate. Collectively, the notion
that our theoretical predictions hold across multiple measures of the theoretical constructs, multiple
designs, and multiple empirical settings underscores the generality and power of the economic
forces we study.
Our findings have wide-ranging implications. First, by formally modeling the manager’s
cost-benefit tradeoff, our analysis highlights the limitation of existing intuition in the empirical
literature on misreporting. A distinguishing feature of our analysis is that we predict that the
relation between misreporting and public scrutiny takes a specific, non-linear functional form––a
prediction that would be difficult to intuit in the absence of formal theory. Our empirical findings
suggest a more nuanced view of how public scrutiny affects the cost-benefit tradeoff facing the
manager. While prior work often frames empirical predictions in terms of linear relations, our
results suggest that a promising path for future work would be to explore non-linear or unimodal
relations––relations that are masked by linear specifications. Our empirical findings also suggest
that a promising path for future work on regulatory shocks is to consider designs that allow for
heterogeneous treatment effects: narrowly focusing on the sign and significance of the “average
treatment effect” can mask important heterogeneity in the effect of the shock. Our results
underscore that increased public scrutiny can sometimes have unintended, adverse consequences.
Armstrong, C., Larcker, D., Ormazabal, G., Taylor, D., 2013. The relation between equity
incentives and misreporting: The role of risk-taking incentives. Journal of Financial
Economics 109, 327-350.
Bertomeu, J., Cheynel, E., Li, E., Liang, Y., 2017. A simple theory-based approach to identifying
misreporting costs. Working paper.
Bertomeu, J., Cheynel, E., Li, E., Liang, Y., 2019. How uncertain is the market about managers’
reporting objectives? Evidence from structural estimation. Working paper.
Blackburne, T., Kepler, J., Quinn, P., Taylor, D., 2019. Undisclosed SEC investigations.
Working paper.
Blankespoor, E., deHaan, E., Zhu, C., 2018. Capital market effects of media synthesis and
dissemination: Evidence from robo-journalism. Review of Accounting Studies 23, 1-36.
Bowen, R., Rajgopal, S., Venkatachalam, M., 2008. Accounting discretion, corporate
governance, and firm performance. Contemporary Accounting Research 25, 351-405.
Burns, N., Kedia, S., 2006. The impact of performance-based compensation on misreporting.
Journal of Financial Economics 79, 35-67.
Bushee, B., 1998. The influence of institutional investors on myopic R&D investment behavior.
The Accounting Review 73, 305-333.
Bushman, R., Piotroski, J., Smith, A., 2004. What determines corporate transparency? Journal of
Accounting Research 42, 207-252.
Chang, S., Suk, D., 1998. Stock prices and the secondary dissemination of information: The
Wall Street Journal’s “Insider Trading Spotlight” column. The Financial Review 33, 115-
128.
Christensen, H., Hail, L., Leuz, C., 2016. Capital-market effects of securities regulation: Prior
conditions, implementation, and enforcement. Review of Financial Studies 29, 2885-2924.
Christensen, H., Floyd, E., Liu, L., Maffett, M., 2017. The real effects of mandated information
on social responsibility in financial reports: Evidence from mine-safety records. Journal of
Accounting and Economics 64, 284-304.
Christie, A., 1987. On cross-sectional analysis in accounting research. Journal of Accounting and
Economics 9, 231-258.
Collins, D., Kothari, S., 1989. An analysis of intertemporal and cross-sectional determinants of
earnings response coefficients. Journal of Accounting and Economics 11, 143-181.
Dechow, P., Ge, W., Schrand, C., 2010. Understanding earnings quality: A review of the proxies,
their determinants and their consequences. Journal of Accounting and Economics 50, 143-
181.
Duguay, R., Rauter, T., Samuels, D., 2019. The impact of open data on public procurement.
Working paper.
Dyck, A., Morse, A., Zingales, L., 2010. Who blows the whistle on corporate fraud? Journal of
Finance 65, 2213-2253.
Dyer, T., 2018. Does public information acquisition level the playing field or widen the gap? An
analysis of local and non-local investors. Working paper.
Efendi, J., Srivastava, A., Swanson, E., 2007. Why do corporate managers misstate financial
statements? The role of option compensation and other factors. Journal of Financial
Economics 85, 667-708.
This figure illustrates the theoretical predictions concerning the firm’s public scrutiny, the cost of
misreporting, the earnings response coefficient, and financial misreporting. On the one hand,
public scrutiny increases the probability of detection (i.e., the cost of misreporting), which
decreases misreporting. We refer to this as the “monitoring channel.” On the other hand, public
scrutiny of the firm increases the price response to earnings (i.e., the earnings response
coefficient), which increases misreporting. We refer to this as the “valuation channel.”
[Figure: path diagram in which public scrutiny (s) raises the cost of misreporting, decreasing misreporting (E[b]) via the "monitoring channel," and raises the ERC (β), increasing misreporting via the "valuation channel."]
This figure plots the theoretical relation between expected bias (E[b]) and public scrutiny (s) in
our model. E[b] appears on the y-axis. s appears on the x-axis. The dashed, red line illustrates the
shape of the relation with a convex cost function (i.e., C(s) ∝ s²). The solid, black line illustrates
the shape of the relation with a linear cost function (i.e., C(s) ∝ s). For the purposes of this figure,
and without loss of generality, we set τ_x(s) = s, x̄ = 1, σ_v² = 1/2, σ_n² = 1/2, and the constant
of proportionality in each cost function to 1/2.
This table presents the average values of Biast+1 by quintile of Scrutinyt and its components. Panel
A presents results for quintiles of Scrutiny, Panel B for quintiles of Analysts, Panel C for quintiles
of Media, and Panel D for quintiles of Institutions.
[Figure: average Bias (y-axis, ranging from 0.05 to 0.20) across quintiles Q1 through Q5 (x-axis) of Scrutiny, Analysts, Media, and Institutions.]
This figure illustrates our predictions for how a regulatory shock to public scrutiny (s) affects
expected bias (E[b]). E[b] appears on the y-axis. s appears on the x-axis. The black solid line
represents the unimodal relation between the two constructs. We consider two firms: Firm A and
Firm B. Firm A’s pre-existing level of public scrutiny is low, and appears below the threshold
level of public scrutiny, s*. Consequently, for Firm A, a small increase in public scrutiny (Δs > 0)
leads to an increase in expected bias (ΔE[b] > 0). Firm B’s pre-existing level of public scrutiny is
high, and appears above the threshold level of public scrutiny, s*. Consequently, for Firm B, a
small increase in public scrutiny (Δs > 0) leads to a decrease in expected bias (ΔE[b] < 0). Thus,
in the presence of a unimodal relation between public scrutiny and misreporting, the sign of the
effect depends on whether the firm is above or below the threshold level of public scrutiny.
[Figure: unimodal curve of E[b] against s; Firm A lies below the threshold s* on the upward-sloping segment (Δs > 0 implies ΔE[b] > 0), and Firm B lies above s* on the downward-sloping segment (Δs > 0 implies ΔE[b] < 0).]
Panel A presents the number of firms appearing in the CRSP/Compustat universe and the number
of firms appearing in our sample. Panel B presents descriptive statistics for the variables used in
our analysis. Acquisition is an indicator variable for whether an acquisition accounts for 20% or
more of total sales. Analysts is the number of analysts with one-year ahead earnings forecasts on
I/B/E/S as of the end of the fiscal year. $Assets is the dollar value of total assets (in millions). Beta
is the slope coefficient from a single-factor market model. BHAR is the market-adjusted buy-and-
hold return over the twelve months ending three months after the fiscal year end. Capital is net
property, plant, and equipment scaled by total assets. Earnings is income before extraordinary
items divided by shares outstanding. Financing is total debt and equity issuance scaled by total
assets. FirmAge is the number of years the firm appears on Compustat. Institutions is the number
of institutional owners listed on Thomson Reuters as of the end of the fiscal year. Intangibles is
the ratio of research and development and advertising expense to sales. InterestCov is the ratio of
interest expense to net income. Leverage is long-term debt plus short-term debt, scaled by total
assets. MB is market value of equity plus book value of liabilities divided by book value of assets.
Media is the number of news releases about the firm over the fiscal year on RavenPack Analytics.
Price is share price three months after the fiscal year end. Returns is the buy-and-hold return over
the fiscal year. ROA is income before extraordinary items scaled by total assets. $Sales is the dollar
value of sales (in millions). SalesGrowth is the change in sales scaled by prior-period sales. Size
is the natural logarithm of total assets. Surprise is the forecast error from a random walk model of
annual earnings scaled by beginning of period price.
This table describes how we measure the level of public scrutiny (Scrutiny). We measure public
scrutiny using the first principal component from a factor analysis of analyst following (Analysts),
institutional investor following (Institutions), and media coverage (Media). All variables are
standardized prior to the factor analysis. Panel A presents the principal component output. Panel
B presents descriptive statistics for Scrutiny (Scrutiny = 0.413 * Analysts + 0.445 * Institutions +
0.366 * Media). Panel C presents the correlation coefficients between Scrutiny and its components.
Spearman (Pearson) correlations appear above (below) the diagonal.
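As a sketch, the composite can be computed directly from the reported loadings. The weights below are those from Panel B; the input data are hypothetical and for illustration only.

```python
import numpy as np

def scrutiny_score(analysts, institutions, media):
    """Combine analyst following, institutional ownership, and media coverage
    into a single scrutiny index. Each input is standardized, then weighted
    by the first-principal-component loadings reported in Panel B."""
    def standardize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()
    return (0.413 * standardize(analysts)
            + 0.445 * standardize(institutions)
            + 0.366 * standardize(media))

# Hypothetical firm-level counts, for illustration only
analysts = [2, 5, 10, 1, 8]
institutions = [30, 80, 200, 10, 150]
media = [12, 40, 90, 5, 70]
scrutiny = scrutiny_score(analysts, institutions, media)
```

Because each standardized component has mean zero, the composite is centered at zero by construction; firms that rank highly on all three dimensions receive the highest scores.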
This table presents descriptive statistics for our measures of misreporting. Panel A presents
average values of misreporting by year. Column (1) presents the total number of firm-years in our
sample. Columns (2) and (3) present average values of our measures of misreporting. Restate is a
binary measure of misreporting that equals one if the respective firm-year was eventually restated
due to intentional misrepresentation, and zero otherwise, and Bias is the amount of restated
earnings due to intentional misrepresentation, scaled by beginning total assets and expressed in
basis points. The sample ends in 2012 to allow a minimum lag of four years between the restated
period and the restatement announcement (i.e., our measures of misreporting are calculated
using restatements announced through 2016). Panel B presents correlations between our measure
of misreporting, public scrutiny, and key control variables. Pearson (Spearman) correlations
appear above (below) the diagonal.
This table presents results from estimating the earnings response coefficient as a function of the
level of public scrutiny. Panel A presents results from estimating the earnings response coefficient
using a price-level regression. Panel B presents results from estimating the earnings response
coefficient using returns regressions. Columns (1) and (2) present results from OLS regressions.
Column (3) presents results from OLS regressions using the quintile ranks of the independent
variables scaled between zero and one. t–statistics appear in parentheses and are based on standard
errors clustered by firm. ***, **, and * denote statistical significance at the 0.01, 0.05, and 0.10 levels
(two–tail), respectively. Sample of 41,379 observations from 2004 to 2012.
Panel A presents results from sorting firms into quintiles based on the level of public scrutiny and
calculating the average values of the probability of an SEC investigation in the subsequent year by
quintile. Panel B presents results from estimating the relation between the level of public scrutiny
and the probability of an SEC investigation in the subsequent year. t–statistics appear in
parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical
significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 41,831
observations from 2004 to 2012.
This table presents results from sorting firms into quintiles based on the level of public scrutiny
and calculating the average values of our measures of misreporting by quintile. Panel A presents
results for Bias. Panel B presents results for Restate. Misreporting variables are measured in the
subsequent year (i.e., in t+1). Sample of 41,831 firm-years.
This table presents results from estimating equation (2). Columns (1) through (3) present results
from estimating the relation between misreporting and 1st- and 2nd-order polynomials of the level
of public scrutiny. Column (4) presents results from estimating the linear relation between
misreporting and public scrutiny. All variables are as defined in Table 1. t–statistics appear in
parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical
significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 41,831 firm-
years.
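The quadratic specification in equation (2) can be sketched as follows; controls, fixed effects, and clustered standard errors are omitted for brevity. The data here are simulated with a built-in peak at s = 1, so the fitted turning point -b1/(2*b2) should recover it.

```python
import numpy as np

def fit_quadratic(scrutiny, bias):
    """OLS of bias on 1st- and 2nd-order polynomials of scrutiny, as in
    equation (2) (controls omitted). Returns the coefficients (b0, b1, b2)
    and the implied turning point -b1/(2*b2)."""
    X = np.column_stack([np.ones_like(scrutiny), scrutiny, scrutiny ** 2])
    b0, b1, b2 = np.linalg.lstsq(X, bias, rcond=None)[0]
    return (b0, b1, b2), -b1 / (2 * b2)

# Simulated data with a built-in unimodal relation peaking at s = 1
rng = np.random.default_rng(0)
s = rng.uniform(-2, 4, 5000)
bias = 0.05 + 0.10 * s - 0.05 * s ** 2 + rng.normal(0, 0.01, s.size)
coefs, peak = fit_quadratic(s, bias)
```

A unimodal relation corresponds to a positive coefficient on the linear term and a negative coefficient on the quadratic term, with the turning point interior to the support of scrutiny.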
Panel A presents results from estimating equation (2), including controls for the industry rate of
restatements due to error (IndustryError). PctError is the percentage of observations restated due
to error within the industry-year. ErrorAmount is the average difference between net income and
restated net income due to error within the industry-year, scaled by the standard deviation of net
income among non-restating firms in the industry-year. Panel B presents results from estimating
equation (2), including both linear control variables and 2nd-order polynomial transformations of
all controls. Acquisition is an indicator variable and is therefore collinear with its 2nd-order
transformation. All other variables are as defined in Table 1. t-statistics appear in parentheses
and are based on standard errors clustered by firm. ***, **, and * denote statistical significance at
the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 41,831 firm-years.
This table presents results from estimating equation (2) using the number of times professional
investors downloaded the firm’s filings from EDGAR during the fiscal year (EDGAR). Columns
(1) through (3) present results from estimating the relation between misreporting and 1st- and
2nd-order polynomials of EDGAR. Column (4) presents results from estimating the linear relation
between misreporting and EDGAR. t-statistics appear in parentheses and are based on standard
errors clustered by firm. ***, **, and * denote statistical significance at the 0.01, 0.05, and 0.10 levels
(two–tail), respectively. Sample of 41,831 firm-years.
This table presents results from estimating equation (2) using three alternative measures of misreporting. Log(1+$Biast+1) is the natural
logarithm of one plus the amount of restated earnings in millions. Restate is an indicator variable for whether the firm-year is restated due
to fraud. Bias | Restate = 1 is our primary measure of misreporting calculated only for those 330 firms with restatements due to fraud (i.e.,
estimating our results within the sample of restatements due to fraud). AAER is an indicator variable for whether the firm-year is subject to
an SEC Accounting and Auditing Enforcement Release. Data on AAERs are from the Center for Financial Reporting and Management and
span 2004 to 2011, and 37,672 firm-years. t-statistics appear in parentheses and are based on standard errors clustered by firm. ***, **, and
* denote statistical significance at the 0.01, 0.05, and 0.10 levels (two-tail), respectively.
Samples by column: (1) N = 41,831 from 2004-2012; (2) N = 41,831 from 2004-2012; (3) N = 330 from 2004-2012, where Restatet+1 = 1;
(4) N = 37,672 from 2004-2011.
Table 11. Alternative Research Design: Spline Regressions
This table presents results from estimating the relation between Bias and Scrutiny using a spline
regression with threshold τ:

Biast+1 = α + β1 min(Scrutinyt − τ, 0) + β2 max(Scrutinyt − τ, 0) + Controls + εt+1
Columns (1) and (2) present results from estimating regressions where the threshold is defined at
the mean of Scrutiny (i.e., τ = 0). In these specifications, β1 estimates the relation for observations
below the mean and β2 estimates the relation for observations above the mean value. Columns (3)
and (4) present results from using the multivariate adaptive regression spline (MARS) method to
simultaneously estimate both the regression coefficients and the optimal threshold that minimizes
the mean-squared error (i.e., τ = τ*). All variables are as defined in Table 1. t–statistics appear in
parentheses and are based on standard errors clustered by firm. ***, **, and * denote statistical
significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively.
                 OLS                              MARS
                 Researcher-specified             Optimal threshold selection
                 threshold (τ)                    algorithm (τ*)
                 (1)           (2)                (3)             (4)
Threshold        τ = 0         τ = 0              τ* = –0.069     τ* = –0.069
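A minimal sketch of this design: a piecewise-linear regression with a kink at τ, plus a simple grid search standing in for the MARS threshold-selection step (the actual MARS procedure is more general). The data are simulated with a true kink at 0.5.

```python
import numpy as np

def spline_fit(s, y, tau):
    """Piecewise-linear (spline) regression with a kink at threshold tau:
    y = a + b1*min(s - tau, 0) + b2*max(s - tau, 0) + e.
    b1 is the slope below tau, b2 the slope above. Returns the coefficient
    vector and the in-sample mean squared error."""
    X = np.column_stack([np.ones_like(s),
                         np.minimum(s - tau, 0.0),
                         np.maximum(s - tau, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((y - X @ coef) ** 2)
    return coef, mse

def grid_search_tau(s, y, grid):
    """Crude stand-in for the MARS step: pick the threshold tau* on a grid
    that minimizes the spline regression's mean squared error."""
    return min(grid, key=lambda tau: spline_fit(s, y, tau)[1])

rng = np.random.default_rng(1)
s = rng.uniform(-2, 2, 4000)
# Simulated unimodal data: slope +0.08 below the kink at 0.5, -0.06 above
y = (0.1 + 0.08 * np.minimum(s - 0.5, 0) - 0.06 * np.maximum(s - 0.5, 0)
     + rng.normal(0, 0.01, s.size))
tau_star = grid_search_tau(s, y, np.linspace(-1.5, 1.5, 61))
```

A unimodal relation implies a positive slope below the estimated threshold and a negative slope above it.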
This table presents results from estimating the effect of the mandatory filing of Form 4 on EDGAR
on misreporting. Panel A presents descriptive statistics for variables used in our analysis. Panel B
presents results from estimating our difference-in-differences design, equation (4). Column (1) of
Panel B presents results from estimating the average treatment effect. Column (2) of Panel B
presents results from testing for parallel trends. Treated is a binary indicator variable equal to one
for firms that were affected by the SEC’s requirement to file Form 4s on EDGAR. Post is a binary
indicator variable equal to one for fiscal years ended after June 30, 2003. I(Year = t) is an indicator
variable equal to one for observations in year t. All other variables are as defined in Table 1. t–
statistics appear in parentheses and are based on standard errors clustered by firm. ***, **, and *
denote statistical significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of
85,895 firm-years from 1995 to 2011.
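The difference-in-differences estimate in equation (4) reduces to the coefficient on Treated × Post. A minimal sketch on simulated data, with firm and year fixed effects, controls, and clustered standard errors omitted:

```python
import numpy as np

def did_estimate(y, treated, post):
    """OLS of the outcome on treated, post, and their interaction.
    The coefficient on treated*post is the difference-in-differences
    estimate of the average treatment effect."""
    X = np.column_stack([np.ones_like(y), treated, post, treated * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[3]

rng = np.random.default_rng(2)
n = 8000
treated = rng.integers(0, 2, n).astype(float)
post = rng.integers(0, 2, n).astype(float)
# Simulated outcome with a true treatment effect of -0.03 in the post period
y = (0.10 + 0.02 * treated + 0.01 * post - 0.03 * treated * post
     + rng.normal(0, 0.02, n))
ate = did_estimate(y, treated, post)
```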
This table presents results from estimating equation (4) after allowing the treatment effect to vary
across the quintiles of public scrutiny. Panel A presents regression results and marginal effects for
each quintile, where quintile one is the base category. Column (1) of Panel A presents results using
analyst following to measure public scrutiny. Column (2) of Panel A presents results using
institutional investor following to measure public scrutiny. Panel B presents total treatment effects
for each quintile of public scrutiny. Treated is a binary indicator variable equal to one for firms
that were affected by the SEC’s requirement to file Form 4s on EDGAR. Post is a binary indicator
variable equal to one for fiscal years ended after June 30, 2003. Scrutiny_Q(q) is an indicator
variable equal to one if the observation is in the q-th quintile of the respective measure of public
scrutiny. All other variables are as defined in Table 1. t-statistics (two-tailed p-values) appear in
parentheses (brackets) and are based on standard errors clustered by firm. ***, **, and * denote
statistical significance at the 0.01, 0.05, and 0.10 levels (two–tail), respectively. Sample of 85,895
firm–years from 1995 to 2011.
Appendix.
This Appendix presents the details underlying our model and derives the results. To
facilitate the discussion, our setup, notation, and analysis follow directly from Fischer and
Verrecchia (2000, FV)––the only substantive difference being that we interpret the precision of
investors’ knowledge about the manager’s objective and the cost of bias as increasing functions of
public scrutiny.
A.1 Setup
The timeline of the model is as follows: (i) a risk-neutral manager privately observes true
earnings, (ii) the manager issues a potentially biased report of earnings to the capital market, and
(iii) investors price the firm's shares based on the report. True earnings equal firm value plus
noise, $e = v + n$, where $v$ and $n$ are distributed with mean zero and positive, finite variances
$\sigma_v^2$ and $\sigma_n^2$, respectively. Having observed the realization of true earnings,
$\tilde{e} = e$, the manager reports $r = e + b$, where $b$ represents the extent to which the
manager chooses to bias reported earnings. A risk-neutral, perfectly competitive market observes
reported earnings and prices the firm accordingly. Price is the rational expectation of the firm's
terminal value conditional on reported earnings: $P = E[v \mid r]$.18 The weight the manager
places on price, $\tilde{x}$, is distributed with mean $\bar{x} > 0$, finite variance
$\sigma_x^2 > 0$, and precision $\tau_x$ (i.e., $\tau_x \equiv \sigma_x^{-2}$). We assume all
distributions are independent and common knowledge, and the realization of $\tilde{x} = x$ is
known only to the manager.
18
Similar to traditional asset pricing models (e.g., Grossman and Stiglitz, 1980; Kyle, 1985; Lambert, Leuz, and
Verrecchia, 2007), FV has the feature that price can take negative values.
We capture the level of public scrutiny of the firm via an exogenous parameter $s$. Let $S$
represent some open set on the positive half-real line (i.e., on $\mathbb{R}_+$): that is, investors'
knowledge about $\tilde{x}$, $\tau_x$, and the cost associated with bias are strictly positive,
increasing functions over $S$. As such, henceforth we represent $\tau_x$ by $\tau_x(s)$ and the
cost associated with bias as $C(s)$, where $\tau_x(s) > 0$, $C(s) > 0$,
$\frac{\partial \tau_x(s)}{\partial s} > 0$, and $\frac{\partial C(s)}{\partial s} > 0$. Hence, higher
values of public scrutiny, $s$, imply investors know more about the manager's objective through
$\tau_x$ and the manager faces a higher cost of bias through $C$. This is the only difference
between FV and our model.
The manager chooses the bias in reported earnings in anticipation of share price. The manager's
optimization problem is:

$$\max_b \; xP - \frac{1}{2}\,C(s)\,b^2. \tag{A1}$$
A.2 Equilibrium
One can characterize an equilibrium to the model by a bias function, $b(e,x)$, and a pricing
function, $P(r)$, that satisfy three conditions. (1) The manager's choice of bias must solve his
optimization problem given his conjecture about the pricing function. (2) The market price must
equal expected firm value conditional on the report and the market's conjecture about bias. (3)
Both the conjectured pricing function and conjectured bias must be sustained in equilibrium. Bias
and price functions of the form $b(e,x) = \alpha_e e + \alpha_x x$ and $P(r) = \alpha + \beta r$
provide a unique, linear equilibrium. Here, $\beta$ represents the sensitivity of price to reported
earnings, or the earnings response coefficient (ERC).

Equilibrium Bias Function. Given the manager's conjecture $\hat{\beta}$ about the pricing
function, the first-order condition implies that the optimal bias function is given by
$b(e,x) = \frac{\hat{\beta}}{C(s)}\,x$, such that $\alpha_e = 0$ and
$\alpha_x = \frac{\hat{\beta}}{C(s)}$, where we use a caret (i.e., ^) to denote a conjecture.
Equilibrium Pricing Function. Assuming the conjectured bias function is of the form
$b = \hat{\alpha}_e e + \hat{\alpha}_x x$, price equals expected firm value conditional on the
report, which implies $\alpha = -\beta\,\hat{\alpha}_x\,\bar{x}$ and

$$\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2 + \hat{\alpha}_x^2\,\dfrac{1}{\tau_x(s)}}.$$
Uniqueness. Assume all conjectures in the bias and pricing functions are correct. It is
straightforward to verify that the solutions for $\alpha_e$, $\alpha_x$, and $\alpha$ are unique
functions of the pair $(\beta, \tau_x)$. Thus, to prove the existence of a unique equilibrium we
need only show that there exists a unique solution for $\beta$. Substituting
$\hat{\alpha}_x = \beta / C(s)$ into the pricing expression, we have

$$\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2 + \dfrac{\beta^2}{C(s)^2\,\tau_x(s)}}.$$

Thus, we need only show that this expression has a unique solution for $\beta$. Rearranging yields

$$\beta^3 + (\sigma_v^2 + \sigma_n^2)\,\tau_x(s)\,C(s)^2\,\beta - \sigma_v^2\,\tau_x(s)\,C(s)^2 = 0. \tag{A2}$$

The interested reader will note that this expression replicates the equilibrium condition in FV (eqn.
[15] on p. 236 in FV), except that now we characterize the equilibrium in terms of $\tau_x(s)$ and
$C(s)$ as functions of $s$.
As in FV, the $\beta$ that solves eqn. (A2) is strictly positive because $\beta \le 0$ implies the
expression is negative. Also, the solution to eqn. (A2) will be unique: for positive $\beta$, the
expression is monotonically increasing in $\beta$ and tends to infinity as $\beta$ tends to infinity.
Finally, the solution satisfies $\beta < \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2}$. This
observation stems from the fact that $\beta = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_n^2} < 1$ in
a circumstance where the manager is forced to report truthfully (i.e., without bias). Thus, when
managers bias reported earnings, the ERC is lower than in a circumstance where they are forced
to report truthfully. Insofar as $\beta$ is the weight that investors place on the manager's report,
the latter feature of the equilibrium is intuitive.
A.3 Comparative Statics

Next, we examine the sign of the relation between expected bias,
$E[b] = \frac{\beta}{C(s)}\,\bar{x}$, and public scrutiny, $s$: in other words,
$\frac{\partial E[b]}{\partial s}$. To do so, we first need to derive the implicit solution for
$\frac{\partial \beta}{\partial s}$ from eqn. (A2). Allowing for the fact that this requires a
considerable amount of algebraic manipulation, implicit differentiation of eqn. (A2) yields

$$\frac{\partial \beta}{\partial s} = \frac{\beta^3\left[C(s)^2\,\frac{\partial \tau_x(s)}{\partial s} + 2\,\tau_x(s)\,C(s)\,\frac{\partial C(s)}{\partial s}\right]}{\tau_x(s)\,C(s)^2\left[3\beta^2 + (\sigma_v^2 + \sigma_n^2)\,\tau_x(s)\,C(s)^2\right]} > 0. \tag{A3}$$

This establishes the ERC is increasing in $s$.

Next we solve for $\frac{\partial E[b]}{\partial s}$ and evaluate its sign:

$$\frac{\partial E[b]}{\partial s} = \frac{1}{C(s)}\,\frac{\partial \beta}{\partial s}\,\bar{x} - \frac{1}{C(s)^2}\,\frac{\partial C(s)}{\partial s}\,\beta\,\bar{x}. \tag{A4}$$

Note that because $\bar{x} > 0$, $C(s) > 0$, and $\beta > 0$, eqn. (A4) is proportional to (has the
same sign as):

$$\frac{1}{\beta}\,\frac{\partial \beta}{\partial s} - \frac{1}{C(s)}\,\frac{\partial C(s)}{\partial s}. \tag{A5}$$
The first term represents the percentage change in $\beta$ for a one-unit change in $s$, and the
second term represents the percentage change in $C(s)$ for a one-unit change in $s$. One can think
of the former as akin to the percentage change in the marginal benefit of bias, and the latter as akin
to the percentage change in the marginal cost of bias. When the percentage change in the marginal
benefit of bias exceeds that of the marginal cost of bias, $\frac{\partial E[b]}{\partial s} > 0$, and
when the percentage change in the marginal cost of bias exceeds that of the marginal benefit,
$\frac{\partial E[b]}{\partial s} < 0$.
We can further simplify the expression for the sign of the comparative static by substituting
eqn. (A3) into eqn. (A5), multiplying by $\beta$, and using eqn. (A2) to collect terms. This yields

$$\frac{C(s)^2\left[\beta^3\,\frac{\partial \tau_x(s)}{\partial s} - \sigma_v^2\,\tau_x(s)^2\,C(s)\,\frac{\partial C(s)}{\partial s}\right]}{\tau_x(s)\,C(s)^2\left[3\beta^2 + (\sigma_v^2 + \sigma_n^2)\,\tau_x(s)\,C(s)^2\right]}. \tag{A6}$$

Because $C(s) > 0$, $\tau_x(s) > 0$, and the entire denominator on the right-hand side of eqn. (A6)
are positive, we need only consider the sign of the bracketed term in the numerator:

$$\beta^3\,\frac{\partial \tau_x(s)}{\partial s} - \sigma_v^2\,\tau_x(s)^2\,C(s)\,\frac{\partial C(s)}{\partial s}. \tag{A7}$$

Using the equilibrium condition in eqn. (A2) to express $\beta^3$, one can show that eqn. (A7) is
proportional to

$$1 - \frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\,\beta - \frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)}. \tag{A8}$$
A.4 Unimodality
Denote the expression in eqn. (A8) by

$$f(s) \equiv 1 - \frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\,\beta - \frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)}. \tag{A9}$$

To ensure that expected bias is unimodal in $s$, $f(s)$ needs to behave as follows: (1) $f(s)$ is
initially positive; (2) $f(s)$ is eventually negative; and (3) $f(s)$ is strictly decreasing. This
ensures that there exists some $s^*$ (this is the "peak" or "threshold") such that $f(s) > 0$ for all
$s < s^*$ and $f(s) < 0$ for all $s > s^*$. Sufficient conditions, where $s_L$ and $s_H$ denote the
lower and upper endpoints of $S$, are:

$$\lim_{s \to s_L} \frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)} = 0; \tag{A10}$$

$$\lim_{s \to s_H} \frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)} \ge 1; \tag{A11}$$

$$\frac{\partial}{\partial s}\left[\frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)}\right] > 0; \tag{A12}$$

$$0 < \frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\,\beta < 1. \tag{A13}$$

These conditions imply:

$$\lim_{s \to s_L} f(s) = \lim_{s \to s_L}\left[1 - \frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\,\beta - \frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)}\right] > 0. \tag{A14}$$

$$\lim_{s \to s_H} f(s) = \lim_{s \to s_H}\left[1 - \frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\,\beta - \frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)}\right] < 0. \tag{A15}$$

$$\frac{\partial f(s)}{\partial s} = -\frac{\sigma_v^2 + \sigma_n^2}{\sigma_v^2}\,\frac{\partial \beta}{\partial s} - \frac{\partial}{\partial s}\left[\frac{\frac{\partial}{\partial s}\ln C(s)}{\frac{\partial}{\partial s}\ln \tau_x(s)}\right] < 0. \tag{A16}$$

Together, eqns. (A14) through (A16) imply that $f(s)$, and hence
$\frac{\partial E[b]}{\partial s}$, is positive for $s < s^*$ and negative for $s > s^*$: expected bias
first increases and then decreases in public scrutiny. An interested reader can easily verify that
$S = (0, \infty)$, $\tau_x(s) = s$, and cost functions whose log-derivative ratio
$\frac{\partial}{\partial s}\ln C(s) \big/ \frac{\partial}{\partial s}\ln \tau_x(s)$ rises from zero to
above one satisfy these conditions and serve as examples.
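These conditions can be checked numerically. Under an assumed parameterization chosen for illustration, $\tau_x(s) = s$ and $C(s) = 1 + s^2$, the log-derivative ratio equals $2s^2/(1+s^2)$, which rises from 0 toward 2, so conditions (A10) through (A12) hold and expected bias should first rise and then fall in $s$:

```python
import numpy as np

def expected_bias(s, var_v=0.5, var_n=0.5, x_bar=1.0):
    """E[b] = beta(s) * x_bar / C(s), where beta(s) is the unique positive
    root of the equilibrium condition (A2). Uses the illustrative
    parameterization tau_x(s) = s and C(s) = 1 + s**2."""
    tau_x, C = s, 1.0 + s ** 2
    k = tau_x * C ** 2
    roots = np.roots([1.0, 0.0, (var_v + var_n) * k, -var_v * k])
    beta = float(roots[np.abs(roots.imag) < 1e-8].real.max())
    return beta * x_bar / C

grid = np.linspace(0.05, 5.0, 200)
eb = np.array([expected_bias(s) for s in grid])
peak = float(grid[eb.argmax()])  # interior maximizer: E[b] rises, then falls
```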