The Impact of NSF Support for Basic Research in Economics

Author(s): Ashish ARORA and Alfonso GAMBARDELLA
Source: Annales d'Économie et de Statistique, No. 79/80, Contributions in memory of Zvi Griliches (July/December 2005), pp. 91-117
Published by: GENES on behalf of ADRES
Stable URL: http://www.jstor.org/stable/20777571

The Impact of NSF Support for Basic Research in Economics

Ashish ARORA* and Alfonso GAMBARDELLA**

ABSTRACT. – This paper studies an unusually rich data set of all the 1473 applications to the NSF in economics during 1985-1990. It provides a rare opportunity to analyze not only the characteristics of the researchers whose applications were accepted (414 applications in our sample), but also those whose applications were rejected. This implies that one can investigate the impact of an NSF grant on the research output (quality-adjusted publications) of individual researchers. Using non-parametric techniques, as well as more conventional regression analyses, we find that the NSF effect is modest, except for the more junior scholars. We also address some ancillary questions, like the factors that affect the NSF selection process and the decision about the size of the grants.

The Impact of NSF Grants on Basic Research in Economics

RÉSUMÉ. – This article studies an exceptionally rich database of all the applications submitted to the NSF for economics between 1985 and 1990. It provides a rare opportunity to analyze not only the characteristics of the researchers whose applications were accepted (414 applications in our sample) but also those of the researchers whose applications were rejected. This makes it possible to study the impact of an NSF grant on research output (quality-adjusted publications). Using non-parametric techniques and more conventional regression methods, we find that the NSF effect is moderate, except for young researchers. We also address ancillary questions, such as the factors that affect the NSF selection process and the amount of the grants.

We are indebted to Ruth Williams of the NSF for help in getting access to the data, and for patiently answering our questions and queries. Dan Newlon and Lynn Pollnow educated us about the intricacies of the NSF grant procedures, and we thank them for their help and support. We thank Paul David for long and stimulating conversations, and Jon Angrist, Dan Black, Ron Ehrenberg, Seth Sanders, and members of the NBER Productivity Workshop for helpful suggestions and advice. Wei Kong provided enthusiastic and skillful research assistance. Data collection and analysis were partially supported by a grant from the Heinz School. We alone are responsible for the remaining shortcomings of this paper.

* A. Arora: Heinz School of Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213, e-mail: ashish@andrew.cmu.edu
** A. Gambardella: Università Commerciale "Luigi Bocconi", Milan, Italy, e-mail: agambardella@unibocconi.it


1 Introduction

Fifty years ago Vannevar Bush's manifesto, "The Endless Frontier", marked the beginning of a significant expansion of public support for science. This set the ground for the creation of institutions like the National Science Foundation (NSF) in the US and Research Councils in other countries, though the CNRS in France and the CNR in Italy predate Bush's manifesto. It is therefore surprising that, after so many years, we still know very little about the effects of the grant allocation process of these institutions: Have they had a "leveraging" or "crowding out" effect on the resources that scientists use to perform their research? To what extent have these grants had a differential impact on different categories of scientists (e.g. young vs more senior researchers)? Even the sociologists of science, who have been attentive to the analysis of the scientific system (e.g. see Merton [1973]), have not looked deeply into these issues.
In economics, the literature on science was initially confined to studies of the scientists' labor market (e.g. Blank and Stigler [1957]; Ehrenberg [1991] and [1992]). More recently, Dasgupta and David [1994] and David [1994] have emphasized the importance, for economic analysis, of understanding the mechanisms that govern the institutions of science. A survey by Paula Stephan [1996] does an excellent job of giving the Economics of Science the dignity of a full-fledged topic for economic research. Among other things, she argues that while a good deal of attention has been given to the productivity of scientists over their life cycle, a neglected area has been the effects of resource allocations on the production of scientific outputs, and the question of how research outputs relate to the resources provided by government or philanthropic organizations. She even suggests that this could constitute an "alternative approach to the study of scientists" (p. 1224), and goes as far as saying that
"[t]his leads one to wonder if we should not use our talents as economists to develop a different approach to the study of scientists that stresses the importance of resources in the process of discovery rather than the importance of the finiteness of life". (Stephan, [1996]: 1224)
This paper studies the relationship between NSF funding and the publications of US economists using data on 1473 applications to the NSF during 1985-1990, 414 of which were awarded a research grant. We obtained data on all the publications (quality-adjusted) of these 1473 principal investigators (PIs) five years before and five years after the NSF grant, along with other PI characteristics (e.g. sex, years since PhD, institution, referee scores on the proposal). Put simply, the central question of our paper is: Does NSF matter? That is, we empirically investigate whether, other things being equal, those who received an NSF grant produced a larger number of (quality-adjusted) publications than those who were not awarded the grant. In so doing, we also address the ancillary question of the factors that influence the NSF selection decision, and we examine the effect of NSF grants on economists at different stages of their career.
Among the few empirical studies on this topic, a pioneering one is by Jonathan and Steven Cole [1977], who examined in great depth the working of the peer review system in the late 1970s using the NSF as a test case. Their study, published as a report to the National Academy of Sciences in 1981 (and summarized in Cole [1992]), included a statistical analysis of randomized reviews of proposals that had already been peer reviewed for the purposes of NSF grants. Their statistical analysis is supplemented by interviews and provides a rich and accurate description of the working of the peer review system. However, the Cole and Cole study does not address the question of the impact of NSF funding on publication output. Tremblay [1992] compares NSF grants in economics and chemistry and finds that the unconditional probability of getting a grant is lower in economics than in chemistry and that successful grants are confined to a smaller set of institutions. However, Tremblay's sample consists of projects that are funded, and she too does not address the impact of funding. Averch [1988] estimates the determinants of the citations per dollar of NSF funding for a random sample of 93 projects in chemistry. He finds only a very modest relationship between citations per dollar and characteristics of the PIs' affiliated institutions, although PI characteristics do have some impact on citations per dollar. By contrast, for behavioral and neural sciences, Averch [1987] finds that even PI characteristics are unrelated to citations per dollar. Averch's sample also consists of projects that were funded, and is therefore potentially subject to selectivity biases. Finally, Adams and Griliches [1998] analyze the publication performance of a system of US universities, and find that research funding has direct and indirect effects on research output, the latter being mediated by graduate students.
Arora, David and Gambardella [1998] is the first study, to the best of our knowledge, that uses a coherent methodology that recognizes the endogeneity of the level of the research funding that a researcher (or research group) receives. They develop a structural model of the behavior of researchers and apply it to data on a sample of 800 Italian biotechnology proposals. The paper models the decision to apply for a grant, and, conditional upon the outcome, the decision of how much effort to invest in the research project. They find that the average elasticity of research output with respect to funding is a little less than 0.6. In other words, for this sample, doubling the research funding given to the typical research group would increase the latter's expected research output by 60%.
Although this paper follows in the spirit of Arora, David and Gambardella [1998], we do not have an explicit structural model derived from first principles. Instead, we specify the equations to be estimated and also use non-parametric techniques to discern the effect of NSF grants. Though we can observe researchers whose applications were rejected and have rich data on project and applicant characteristics, unobserved heterogeneity across researchers potentially remains an important challenge. If the NSF funds applicants that are expected to be more productive based on characteristics that we do not observe, the impact of an NSF grant is not identified.
Since it is difficult to find variables that affect the grant but not the production of publications, we cannot solve this problem. However, because the direction of the bias produced by unobserved heterogeneity is likely to imply a higher estimated effect of the NSF grant, our results set an upper bound to the NSF effect. Since we find that this effect is not big (and it is smaller the more senior the researcher), one can conclude that the NSF effect is probably modest and that, if anything, NSF grants matter only for the younger economists. Our non-parametric analysis confirms these findings. In sum, NSF support appears to be most critical for entry into the profession, and much less so later on. In turn, this is suggestive of the critical role played by the application selection decision in the case of younger economists.
The next section describes our data. Section 3 discusses the conceptual and measurement issues mentioned above. Section 4 presents our empirical results for the selection process, and the determinants of the amount of grant received. Section 5 focuses on the effects of NSF funding on the production of publications. Section 6 concludes.

2 Data

We use data on 1473 applications to the NSF during 1985-1990. Of these, slightly less than one third (414) were selected for funding. These data can be classified in three categories:
i) Characteristics of the PI or of the proposal: PI's sex and institution (name of university or other organization); type of institution (Ph.D. granting school, non-Ph.D. school, other organization); years since Ph.D.; number of co-PIs in the proposal; NSF reviewers' score for the proposal; quality-adjusted number of publications of the PI in the five year window before application.
ii) NSF decision variables: index of selection; amount of NSF grant.
iii) Production of publications: quality-adjusted number of the PI's publications during the five year window after application.
All these data, apart from publication output, were supplied by the NSF. The data on publications were obtained from Econlit. To adjust for quality, each publication was weighted by the impact factor of the top 50 economics journals published in Liebowitz and Palmer ([1984]: Table 2). The journals range from the Journal of Economic Literature (100) to the Journal of Development Economics (2.29). If a journal was not in the top 50, the publication was given an impact factor of 1. The number of publications was adjusted for co-authorship by dividing the impact factor of the publication by the number of co-authors. Thus a paper co-authored with two other people published in the JEL would count as 33.3 quality-adjusted publication units.
The Appendix describes these data in greater detail. It also reports the list of the top 50 journals along with their impact factors. Table 1 defines all the variables that will be used in this analysis. Table 2 reports descriptive statistics for these variables.
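As a concrete illustration of the quality adjustment just described, the minimal sketch below uses a made-up impact-factor table; only the two values quoted above (100 for the Journal of Economic Literature and 2.29 for the Journal of Development Economics) come from the text.

```python
# Minimal sketch of the quality adjustment described above. The impact-factor
# table is a stand-in for the Liebowitz-Palmer top-50 list; journals outside
# the list get a factor of 1, and units are split among co-authors.
impact_factors = {
    "Journal of Economic Literature": 100.0,
    "Journal of Development Economics": 2.29,
}

def quality_adjusted_units(journal: str, n_authors: int) -> float:
    """Impact factor of the journal (1 if not in the top 50), divided by the
    number of co-authors."""
    return impact_factors.get(journal, 1.0) / n_authors

print(quality_adjusted_units("Journal of Economic Literature", 3))   # 33.3, as in the text
print(quality_adjusted_units("Journal of Development Economics", 1)) # 2.29
print(quality_adjusted_units("Some unranked journal", 2))            # 0.5
```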
Two aspects of the empirical set up should be noted. First, weighting publications by citations is customary but also time consuming and expensive. We chose to weight by journal quality. The measure of journal quality used here is based on weighted citations, adjusted for the length of the journal. Citations are weighted by the quality of the journal where they occur. The citation weights and the journal impact factors are determined jointly through an iterated procedure described in Liebowitz and Palmer [1984]. Any weighting scheme is likely to yield rankings that some may find curious. As a robustness check, we re-estimated all the equations reported here using raw counts of publications (adjusted for co-authorship). The qualitative results are very similar to those reported here and are available on request.
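The general idea of such an iterated procedure can be sketched as a fixed-point computation: citation weights are set equal to the current impact factors, impacts are recomputed from length-adjusted weighted citations, and the two are iterated to convergence. The sketch below uses a made-up citation matrix and journal sizes and is only meant to convey this general idea, not Liebowitz and Palmer's exact algorithm.

```python
import numpy as np

# Sketch of an iterated impact-factor computation in the spirit of Liebowitz
# and Palmer [1984]. C[i, j] = citations from journal i to journal j; size[j]
# adjusts for journal length. All numbers are made up for illustration.
C = np.array([[0., 10., 2.],
              [8.,  0., 1.],
              [4.,  3., 0.]])
size = np.array([100., 80., 60.])

w = np.ones(C.shape[0])                      # initial citation weights
for _ in range(100):
    impact = (w @ C) / size                  # length-adjusted weighted citations received
    impact = 100 * impact / impact.max()     # normalize the top journal to 100 (as for the JEL)
    if np.allclose(impact, w, atol=1e-8):
        break
    w = impact                               # next round: weights = current impact factors

print(np.round(impact, 2))
```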
The measurement error in our output measure may raise additional concerns. The obvious one is that the measure is so noisy as to be uninformative. However, as Table 2 shows, quality-weighted publications are higher (both before and after) for projects selected by the NSF as compared to those that were not selected.

Table 1
Definition of Variables

AWARD       Dummy equal to 1 if the project was awarded an NSF grant.
PAFTER      Quality-adjusted number of publications in the 5 year window after the grant.
LAFT        Natural log of PAFTER.
PBEFORE     Quality-adjusted number of publications in the 5 year window before the grant.
LBEF        Natural log of PBEFORE.
GROWTH      PAFTER - PBEFORE.
ASSIST      Dummy equal to 1 for PIs who received their PhD less than 6 years before the grant.
ASSOC       Dummy equal to 1 for PIs who received their PhD between 6 and 12 years before the grant.
PROF        Dummy equal to 1 for PIs who received their PhD more than 12 years before the grant.
PHD         Dummy equal to 1 if the PI belongs to a PhD granting school.
NOPHD       Dummy equal to 1 if the PI belongs to a school that does not offer a PhD degree.
OTHER       Dummy equal to 1 if the PI belongs to institutions other than PHD or NOPHD (e.g. organizations like the NBER, foundations, etc.).
ELITE       Dummy equal to 1 if the PI belongs to one of the following economics departments: MIT, Harvard, Stanford, Princeton, Berkeley, Yale, Chicago, Northwestern, San Diego, Wisconsin Madison, Columbia; or to one of the following organizations: NBER, CEPR, NORC, and Cowles.
DCOPI2      Dummy equal to 1 if the project has 2 co-PIs.
DCOPI3      Dummy equal to 1 if the project has 3 or more co-PIs.
GEOG        Dummy equal to 1 if the PI's institution is in the West, Mid-West or North-East regions of the United States.
MALE        Dummy equal to 1 if the PI is male.
D8890       Dummy equal to 1 if the application was in years 1988-1990.
SCORE       Average referee score on the project. Ranges from 1 (excellent proposal) to 5 (very poor proposal).
S1-1.75     Dummy for SCORE < 1.75.
S1.75-2.25  Dummy for 1.75 <= SCORE < 2.25.
S2.25-3     Dummy for 2.25 <= SCORE < 3.
REC_DOL     Dollars received (in 10,000 dollar units).
LREC        Natural log of REC_DOL.

Indeed, on average, the output for selected projects is twice as high as for unselected projects. In other words, noisy though the measure may be, it is unlikely to be only or even predominantly noise.

Table 2
Descriptive Statistics, Full Sample (1473 obs.)

Variable        Mean    Std. Dev.   Minimum   Maximum
AWARD            0.28      0.45       0.00      1.00
PAFTER          65.08     85.68       0.00   1052.00
PBEFORE         67.21     89.67       0.00    929.00
ASSIST           0.31      0.46       0.00      1.00
ASSOC            0.24      0.42       0.00      1.00
PROF             0.45      0.50       0.00      1.00
D8890            0.77      0.42       0.00      1.00
PHD              0.80      0.40       0.00      1.00
NOPHD            0.04      0.20       0.00      1.00
DCOPI2           0.23      0.42       0.00      1.00
DCOPI3           0.04      0.20       0.00      1.00
ELITE            0.35      0.48       0.00      1.00
GEOG             0.83      0.37       0.00      1.00
MALE             0.93      0.26       0.00      1.00
SCORE            2.54      0.82       1.00      5.00
REC_DOL (*)      9.08      5.23       0.60     37.19

(*) 414 observations for AWARD=1.

A related concern, raised by a reviewer, is whether our failure to use individual citation weights (as opposed to the journal impact factor) imparts measurement error so as to induce a downward bias towards zero in the regression estimates. Since our results are similar if we merely use growth in publications (instead of using past publications as a control), and since we obtain similar results using non-parametric analysis of matched cohorts, this too does not appear to be a serious concern.
Our unit of analysis is a proposal (a grant application). Slightly more than 10% of the proposals had co-PIs listed, and the percentage showed a slight increase over time. In the results presented here, we ignored the effects of co-PI characteristics on selection. We also ignored the publication output of co-PIs1. However, we did use the number of co-PIs as control variables. To the extent that some co-PIs are successful PIs in their own right in proximate years, this creates potential measurement problems that our set up does not address. As a simplification, we also ignored the handful of cases where a PI applied for more than one grant in a particular year, as well as the dependence over time in proposals involving the same PI or co-PIs.

1 Recall that our output measure is adjusted for the number of co-authors on each paper. Estimates produced by using the total output of all the researchers named on a proposal did not give qualitatively different results for the production function of publications. Accordingly, as a first cut at the data, ignoring the prior publication output of co-PIs, and publication output not jointly authored with the PI, seemed reasonable.


3 Conceptual issues, caveats, and qualifications

3.1 Social impact

Exercises like ours can say very little about the social value, or even the social impact, of NSF funding (or of any other public research agency). In using scientific publications to measure scientific output, one is faced with an obvious constraint: the aggregate growth of publications is constrained by the growth rate of journals. Thus, at the aggregate level, one cannot say how different allocations of resources would increase the total scientific output. To take a simple example, if there are only 5 economics journals in the US that publish 20 papers per year, different allocations of resources will always produce 100 papers per year. The constraint in the number of journals makes the aggregate "social welfare" analysis of science meaningless, unless one comes up with (meaningful) measures of scientific output that are different from the number of publications2.
Although the question "By how much would aggregate scientific research output increase if one increased research funding by 10%?" cannot be meaningfully answered, one can answer the question "By how much would the publication output of a certain type of scientist increase if the NSF were to increase his or her research grant by 10%?" From the viewpoint of the individual researcher, whose career prospects depend crucially on publication output, this is an interesting and important question. The question is also important for public funding agencies such as the NSF. How much money should be spent is not the only decision that policy makers must make. Equally, if not more, important is the question of how a given amount of money should be spent. The impacts on individuals are an important component of the decision on how the NSF budget should be allocated. By estimating a production function of publications, we are also able to estimate how characteristics such as age, institutional affiliation, and regional location are related to publication performance.

3.2 Functional form

Regression analysis imposes functional form restrictions. Therefore, we will report both differences of conditional means as well as more conventional regression results. Regressions are necessary because even parsimonious representations of characteristics in the non-parametric analysis led to cells with few observations. We tried regressions with both log and level specifications. We report the log specification here, but the results are robust to the choice of specification.

2 The problem remains even if publications were weighted by citations. The reference list of any economics article is of approximately constant length. Hence, the number of citations is also constrained by the number of journals (assuming a fixed number of pages per journal). Although US-based economists are not the only ones publishing in journals, they account for a large fraction of the publications in English-language journals.


3.3 Unobserved heterogeneity

A fundamental problem in studies such as the one presented here is that it is not easy to distinguish between the different effects that are being sought. For instance, in assessing the impact of factors that determine selection or the quantity of funds supplied, one needs to find variables that affect selection and not the amount of budget granted, and vice versa. This is the classical problem of identification.
The problem is particularly difficult when we try to measure the impact of NSF support. Simply put, it is difficult to find variables that would affect selection (or the budget), but not publication output. The point is straightforward. Suppose that the NSF selects more able researchers from amongst a set of otherwise similar researchers. (Indeed, the NSF is supposed to fund more promising projects, and these are likely to be proposed by more able researchers.) These researchers will, on average, be more productive. Only a (small?) part of their greater productivity is properly attributable to the NSF; the rest is due to their greater ability. If, as is likely, ability is imperfectly measured, regression estimates would tend to overstate the impact of NSF selection on output3. Moreover, there are other aspects to the issue of unobserved heterogeneity, such as those relating to non-NSF resources, that are further discussed in section 5 below.

4 Selection and grant received

4.1 Selection

We begin by comparing the sample means of all our variables conditional upon selection (AWARD=1) and non-selection (AWARD=0) in Table 3. These are suggestive of the correlations between selection and the characteristics of the PIs. Table 3 reports the sample means of the variables for the non-selected PIs and the difference between the means for the selected and non-selected PIs.
As expected, there is a sizable difference between the sample means of PAFTER (total five year publication output after the year of application) and PBEFORE (total five year publication output before the year of application) conditional upon selection and non-selection. On average, PAFTER for a selected PI is 101.2, while for a non-selected PI it is only 51. Similarly, the sample means of PBEFORE are 108.6 and 51 respectively. Selected PIs are more productive, both before and after selection.
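A comparison of this kind, including the standard error of the difference computed as described in the note to Table 3 (an OLS regression of the variable on a constant and the award dummy), can be sketched as follows. The toy data frame is only a stand-in for the application-level file.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the application-level data (one row per proposal); the real
# file has the variables defined in Table 1 (AWARD, PAFTER, PBEFORE, ...).
df = pd.DataFrame({
    "AWARD":   [1, 0, 0, 1, 0, 1, 0, 0],
    "PAFTER":  [120.0, 40.0, 10.0, 90.0, 55.0, 60.0, 25.0, 5.0],
    "PBEFORE": [100.0, 30.0, 20.0, 80.0, 60.0, 45.0, 15.0, 0.0],
})

def award_gap(data, var):
    """Sample means by AWARD and their difference, with the standard error of
    the difference taken from an OLS regression of the variable on a constant
    and the award dummy (as in the note to Table 3)."""
    means = data.groupby("AWARD")[var].mean()
    fit = smf.ols(f"{var} ~ AWARD", data=data).fit()
    return means[1], means[0], fit.params["AWARD"], fit.bse["AWARD"]

print(award_gap(df, "PAFTER"))
print(award_gap(df, "PBEFORE"))
```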
A higher fraction of applications from ASSIST professors is rejected in the selection process, while the opposite is true for the ASSOC professors. The share of researchers with six or fewer years of experience after the Ph.D (ASSIST) is smaller in the selected sample than in the applicant sample as a whole, and this difference is statistically significant. By contrast, the share of ASSOC professors (6-12 years since Ph.D) is higher in the selected than in the non-selected sample, and this is also statistically significant.

3 For a more extensive discussion of these problems in this context, see Arora, David, and Gam
bardella [1998].


Table 3
Sample Means Conditional upon AWARD

                Sample Means,      Sample Means,      Difference
Variable        AWARD=1            AWARD=0            (AWARD=1) - (AWARD=0)
                (N. obs. = 414)    (N. obs. = 1059)
PAFTER          101.20 (5.55)       50.95 (2.06)       50.25 (5.92)
PBEFORE         108.56 (6.10)       51.04 (1.96)       57.52 (6.41)
ASSIST            0.26 (0.02)        0.34 (0.01)       -0.08 (0.03)
ASSOC             0.28 (0.02)        0.22 (0.01)        0.06 (0.03)
PROF              0.46 (0.02)        0.44 (0.02)        0.02 (0.03)
PHD               0.74 (0.02)        0.83 (0.01)       -0.09 (0.02)
NOPHD             0.03 (0.01)        0.05 (0.01)       -0.02 (0.01)
DCOPI2            0.20 (0.02)        0.24 (0.01)       -0.04 (0.02)
DCOPI3            0.03 (0.01)        0.05 (0.01)       -0.02 (0.01)
ELITE             0.51 (0.02)        0.28 (0.01)        0.23 (0.03)
GEOG              0.91 (0.01)        0.80 (0.01)        0.11 (0.02)
MALE              0.92 (0.01)        0.93 (0.01)       -0.01 (0.02)
SCORE             1.93 (0.03)        2.78 (0.02)       -0.85 (0.04)

Standard errors in parentheses. In the first two columns, these are the standard errors of the sample means, computed as the standard deviation of the variable divided by the square root of the number of observations. The standard error in the last column is the estimated standard error of the coefficient of the award dummy in an OLS regression of the variable on a constant and the award dummy, using all observations (1473).

The share of PROF conditional upon selection is not statistically different from that in the sample as a whole. As we shall also discuss below, the distribution of the scores on the proposals for the ASSIST professors tends to be dominated (in the sense of first order stochastic dominance) by the distribution of the scores for the ASSOC or PROF (more than 12 years since Ph.D). This suggests that in the selection process the NSF looks primarily at the score of the proposal (along with other factors), and does not seem to favor younger scholars.


While the percentage of MALE is not correlated with selection (but female PIs account for less than 8% of the total sample), the share of selected applicants coming from OTHER organizations, ELITE institutions, or institutions located in the West, Mid-West or North-East (GEOG dummy) is higher than in the non-selected sample, and the differences are statistically significant4. Given that we are merely comparing sample means, it is not clear whether these correlations reflect any structural relationships. These variables may well be correlated with other factors affecting selection that we do not control for in the table.
The SCORE variable is the average reviewer score of the proposal. It ranges from 1 (excellent) to 5 (very poor). Reviewer scores are an important consideration for the recommendations the review panel makes to the program director, who ultimately decides whether and at what level the program should be funded5. Table 3 shows that the average score of a selected proposal is 1.93 versus 2.78 for a non-selected proposal. To investigate further the relationship between the reviewer score and success, we divided SCORE into five size classes. Table 4a shows the resulting distribution of selected and non-selected proposals. The table points very clearly to the high correlation between SCORE and selection. More than 95% of the proposals with an average reviewer score greater than 3 (about one-third of total applications) were rejected. By contrast, almost 70% of the proposals with an average score between 1 and 1.75 (about 16% of the total) were selected, and 44% of the proposals with an average score between 1.75 and 2.25 (23% of the total sample) were selected. This also means that the NSF panel has considerable discretion for proposals with an average score around 2. In other words, for a non-negligible fraction of the proposals, PI characteristics and other factors may matter.
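A tabulation of this kind can be sketched with pandas as below. The scores and awards are simulated only to make the snippet self-contained; the "share awarded" column plays the role of the percentages quoted above (69.3%, 43.6%, and so on).

```python
import numpy as np
import pandas as pd

# Sketch of a Table 4A-style tabulation: proposals per score class, awards per
# class, and the share of each class that was awarded. Data are simulated.
rng = np.random.default_rng(0)
df = pd.DataFrame({"SCORE": rng.uniform(1, 5, 500).round(2)})
df["AWARD"] = (rng.uniform(size=500) < 1.0 / df["SCORE"] ** 2).astype(int)

bins = [1.0, 1.75, 2.25, 3.0, 3.75, 5.001]   # top edge padded so SCORE == 5 is included
labels = ["[1-1.75)", "[1.75-2.25)", "[2.25-3)", "[3-3.75)", "[3.75-5]"]
df["CLASS"] = pd.cut(df["SCORE"], bins=bins, labels=labels, right=False)

print(df.groupby("CLASS")["AWARD"]
        .agg(n_proposals="count", n_awarded="sum", share_awarded="mean"))
```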
The last column of Table 4A shows the average of PBEFORE for the PIs in the corresponding score classes. As expected, there is also a high correlation between the past reputation of the PIs and their ability to produce high quality proposals. Moreover, though not reported here, we computed the frequency distribution of the proposals in the different score classes by age cohorts (ASSIST, ASSOC, PROF). We found that ASSIST professors show a higher percentage of proposals in the higher score classes (lower quality proposals), while ASSOC and PROF show a relatively higher share in the lower score classes.
All in all, these findings are largely consistent with the experimental results reported by Cole and Cole [1977]. They find that experimental reviewers agreed with NSF reviewers on proposals in the tails, but proposals in the mid range of the score distributions tended to evoke the most disagreement. We find that the NSF panel that ultimately recommends selection is likely to agree with the reviewers on the proposals in the tails, but less so for proposals in the mid range.
In Table 4B, we investigated further the relationship between score classes and PBEFORE. The table reports the sample means of PBEFORE by score classes for the non-selected PIs and the difference with the sample mean of PBEFORE in the same score class for the selected PIs. In so doing, we want to examine whether, for given score classes, the decision to select a project also depends on the past reputation of the PI, and thus assess whether, in spite of the correlation between SCORE and PBEFORE, the latter variable contains additional information about the selection process. The table also shows the same comparison between sample means for the different age cohorts.

4 Recall, in Table 3, that OTHER is the complement to PHD and NOPHD.
5 Awards are largely based on the recommendations of a panel of economists. In making these recommendations, the panel and the program director (who is the ultimate decision making authority) rely upon written reviews and the numerical scores of outside reviewers.


Table 4a
Frequency of Score Classes

Score Class    Frequency     % of Total      Frequency,     % of Class     Average PBEFORE
               in Class      Observations    AWARD=1        Awarded        by Score Class
[1-1.75)          231           15.7%           160            69.3%           112.2
[1.75-2.25)       342           23.2            149            43.6             85.3
[2.25-3)          413           28.0             85            20.6             65.7
[3-3.75)          360           24.4             16             4.4             39.1
[3.75-5]          127            8.6              4             3.1             21.1

TOTAL            1473          100.0            414

Table 4b
Comparison between SCORE and PBEFORE

                  All PIs              ASSIST               ASSOC               PROF
Score Class     (1)      (2)         (1)      (2)         (1)      (2)         (1)      (2)
[1-1.75)       90.1     31.9       157.7    -28.3        99.9     50.7        48.7     56.1
              (11.3)   (15.6)      (29.5)   (36.8)      (14.2)   (22.0)      (10.4)   (19.2)
[1.75-2.25)    66.4     43.4        70.4     25.2        90.2     45.8        52.8     46.8
               (5.0)   (11.8)      (11.5)   (19.5)       (9.6)   (22.6)       (6.1)   (18.5)
[2.25-3)       59.6     29.6        51.1     21.9        85.2     37.4        51.9     24.7
               (3.9)   (10.1)       (6.4)   (17.1)       (8.8)   (22.7)       (5.4)   (13.4)
[3-3.75)       36.7     55.8        31.4      5.1        53.9     77.3        33.7     89.0
               (2.7)   (27.2)       (3.6)   (29.4)       (6.2)   (57.5)       (4.6)   (52.2)
[3.75-5]       21.8    -21.0        19.5    -18.5        38.7    --(*)        15.8    -15.3
               (3.1)    (3.1)       (4.5)    (4.5)       (9.3)                (3.9)    (3.9)

(1) Sample mean of PBEFORE for AWARD=0.
(2) Difference between sample means of PBEFORE, (AWARD=1) - (AWARD=0).
(*) No observations for AWARD=1.
Standard errors in parentheses. For (1), calculated as the standard deviation of PBEFORE in the class divided by the square root of the number of observations. For (2), it is the estimated standard error of the coefficient of the award dummy in an OLS regression of PBEFORE on a constant and the award dummy, using all observations in the class.

The first two columns of Table 4B suggest that there is a correlation between PBEFORE and selection even after controlling for SCORE. Interestingly enough, when we do this comparison by age cohorts, this correlation is less marked for the ASSIST professors than for the ASSOC and PROF. For ASSIST professors the score on the proposal is critical for selection and little news is contained in their past reputation. By contrast, for an ASSOC or PROF the evaluation of the proposal can be mitigated by his or her reputation: a reputed PI who produced a "bad" application may still get funded, while a less reputed PI may not be able to convince the panel that he or she can successfully complete even a potentially "good" application.


This differential treatment of younger and more senior scholars is intriguing. One possibility is that younger PIs may not yet have a sizable record of publications which the NSF could use to evaluate their reputation. The distribution of PBEFORE for the ASSIST PIs suggests considerable variability in their past publication records, but this appears not to matter as much for NSF grants given the quality of the proposal6. At this point, we can only speculate about the reasons. It is possible that, unlike the more senior PIs, the younger PIs have had a shorter career. Therefore, any evaluation of their potential for publication is based on a shorter time span, and hence subject to a greater error.
Table 5 presents the results of a probit regression with AWARD as the dependent variable. To account for possible non-linearities in the effect of SCORE, we use the dummies S1-1.75, S1.75-2.25, and S2.25-3, which correspond to the first three SCORE classes in Table 4A. (The omitted dummy is the one for SCORE ≥ 3.) Table 5 confirms many of the correlations that we found in our analysis of the sample means in Table 3. First, there is a strong association between SCORE and AWARD. From the estimated score parameters we computed the implied changes in probability for the different score classes, evaluated with respect to the baseline class of proposals with SCORE ≥ 3. For the proposals whose review scores are between 1 and 1.75, the probability of being selected is about 0.51 units greater than the baseline probability. For the score classes [1.75-2.25) and [2.25-3), the estimated changes in probability (with respect to the baseline case) are about 0.36 and 0.20.
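A probit of this kind, and the conversion of its coefficients into implied changes in the selection probability, can be sketched as follows. The data and regressors are simulated stand-ins; in the paper the regressors are the score-class dummies, the PI characteristics, and LBEF interacted with the age-cohort dummies (Table 5).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one score-class dummy and a past-publication measure.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({"S1_175": rng.integers(0, 2, n), "LBEF": rng.normal(3.0, 1.0, n)})
df["AWARD"] = (0.9 * df["S1_175"] + 0.2 * df["LBEF"] + rng.normal(size=n) > 1.5).astype(int)

probit = smf.probit("AWARD ~ S1_175 + LBEF", data=df).fit(disp=0)

# Implied change in Pr(AWARD=1) from being in the top score class rather than
# the omitted baseline, averaged over the sample (cf. the 0.51 figure above).
p1 = probit.predict(df.assign(S1_175=1)).mean()
p0 = probit.predict(df.assign(S1_175=0)).mean()
print(round(p1 - p0, 3))
```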
Table 5 also confirms that past performance, PBEFORE, has a different impact on the probability of selection for the more senior PIs (ASSOC and PROF) vis-à-vis the younger PIs (ASSIST). While the past publication record has a sizable impact on selection, the impact is much smaller for younger PIs. Table 5 also confirms that PIs coming from ELITE or OTHER institutions, or located in the West, Mid-West, and North-East (GEOG), are more likely to be selected. If one believes that SCORE and PBEFORE are unbiased measures of the quality of the project and the PI, these results are intriguing. Although it is possible that institutional affiliation or regional location are signals of ability that are not reflected in past research output, this signal is available to the reviewers as well, and indeed, reviewers are explicitly instructed to take into account the ability of the PI when judging proposals. Hence, after controlling for SCORE and PBEFORE, one does not expect to find a significant effect of ELITE, GEOG, or OTHER. To anticipate our estimates of the publication equation, we will see that these variables also have an effect on publications. This suggests that social networks, membership in which is correlated with the characteristics in question, play a role in the funding process, and also play a role in the production of publications. Whether there are any implied causal relationships is an important issue that we cannot resolve here.

4.2 Grant received

Do quality and reputation, or the other characteristics of the PIs, also affect the amount of the grant received? To address this question, Table 6 presents the results of
6 We found that the distributions of PBEFORE for ASSIST and PROF look similar. In both cases, the median is about PBEFORE = 30, and all the deciles of the two distributions are of comparable magnitudes. By contrast, the distribution of PBEFORE for ASSOC dominates (in the sense of first order stochastic dominance) the other two.


Table 5
Probit for Selection, Dependent Variable AWARD

Variable          Estimated Coefficient
ASSIST               -1.617  (0.270)
ASSOC                -2.026  (0.385)
PROF                 -1.741  (0.271)
D8890                -0.084  (0.097)
PHD                  -0.162  (0.126)
NOPHD                 0.073  (0.266)
DCOPI2               -0.152  (0.101)
DCOPI3               -0.187  (0.204)
ELITE                 0.265  (0.103)
GEOG                  0.326  (0.126)
MALE                 -0.473  (0.154)
LBEF*ASSIST           0.035  (0.043)
LBEF*ASSOC            0.179  (0.073)
LBEF*PROF             0.119  (0.036)
S1-1.75               2.110  (0.144)
S1.75-2.25            1.489  (0.132)
S2.25-3               0.839  (0.131)

Log Likelihood     -638.12
No. obs.              1473
No. positive obs.      414

Standard errors in parentheses.


Table 6
Size of Grant Equation, Dependent Variable LREC
Sample Selection, Max. Lik. Estimation Conditional upon AWARD=1

Variable          Estimated Coefficient
ASSIST                  …    (1.126)
ASSOC                 1.762  (1.243)
PROF                  1.622  (0.995)
D8890                 0.120  (0.066)
PHD                  -0.050  (0.118)
NOPHD                -0.342  (0.185)
DCOPI2                0.209  (0.107)
DCOPI3                0.277  (0.271)
ELITE                 0.015  (0.190)
GEOG                  0.326  (0.196)
MALE                  0.284  (0.117)
LBEF*ASSIST           0.057  (0.071)
LBEF*ASSOC           -0.002  (0.114)
LBEF*PROF             0.039  (0.064)

Log Likelihood     -916.60
No. obs.              1473
No. positive obs.      414

Standard errors in parentheses.
Probit equation for AWARD not shown.

a maximum likelihood sample selection estimation in which we jointly estimated the probability of obtaining the grant and the effect of our independent variables on the size of the grant (in logs, LREC) for the selected PIs. The probit equation is identical to the one in Table 5, and so are its results, which are therefore omitted from Table 6. In the regression equation we included all our control variables except

the score classes. In particular, we included the log of PBEFORE (namely LBEF), and distinguished the effect of this variable according to age cohorts.
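The paper estimates the selection and grant-size equations jointly by maximum likelihood. Purely as a rough illustration, the sketch below uses the familiar two-step (Heckman) approximation instead: a probit for AWARD, then OLS of LREC on the controls plus the inverse Mills ratio for the awarded proposals. The data and regressors are simulated stand-ins, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Two-step (Heckman) approximation to a joint ML sample-selection model, on
# simulated data: probit for selection, then OLS of the grant size on the
# controls plus the inverse Mills ratio, using "awarded" observations only.
rng = np.random.default_rng(2)
n = 1500
lbef = rng.normal(3.0, 1.0, n)
elite = rng.integers(0, 2, n).astype(float)
u = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
award = (0.4 * lbef + 0.5 * elite - 1.0 + u[:, 0] > 0).astype(int)
lrec = 2.0 + 0.1 * elite + 0.3 * u[:, 1]          # observed only when award == 1

X = sm.add_constant(pd.DataFrame({"LBEF": lbef, "ELITE": elite}))
probit = sm.Probit(award, X).fit(disp=0)
xb = np.asarray(X) @ probit.params.values         # linear index from the probit
imr = norm.pdf(xb) / norm.cdf(xb)                 # inverse Mills ratio

sel = award == 1
X2 = sm.add_constant(pd.DataFrame({"ELITE": elite[sel], "IMR": imr[sel]}))
print(sm.OLS(lrec[sel], X2).fit().params)         # IMR coefficient picks up the selection term
```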
Our purpose here is to study whether the past reputation of the PI, and the other factors that affect selection, also affect the size of the grant given selection. In fact, conversations with NSF officials and economists who have served on review panels suggest that the critical decision is selection. Once a project is selected, the amount of the grant received typically includes a few months of support for the PI, support for one research assistant, and possibly some support to buy a personal computer. In short, many of the factors that affect selection should not be that important in determining the size of the grant.
This conjecture is confirmed by our data. In particular, past publications do not affect the size of the grant. Past reputation affects selection, but not the size of the grant once the proposal has been accepted. Similarly, ELITE does not affect the size of the grant, while male PIs or PIs coming from the West, the Mid-West or the North-East tend to have larger grants. In short, unlike selection, the amount of the grant given selection appears to be a more routine decision, which is not considerably affected by the various PI characteristics or other factors.

5 Production of publications

In order to estimate the production function of publications, and the effect of NSF funding, one has to deal with two issues. The first one is that the resources of our PIs other than the NSF grant are not observed. In other words, while we measure the total outputs of the researchers, we may not measure accurately the total research support available to them. The importance of this measurement error depends on the importance of the NSF as a source of research support for economists. The extent of other research support may also be correlated with NSF funding. For instance, NSF grants could be a signal to other sources of funding about the ability of a given PI, and this may encourage additional funding from other sources ("leveraging"). Alternatively, NSF funding may "crowd out" other funding. Since we cannot empirically distinguish between the direct and indirect effects (leveraging, crowding out), we cannot measure the marginal "social" productivity of an additional dollar of NSF support. Instead, we measure the net effect of NSF funding7.
The second problem, already discussed at some length, concerns possible sources of unobserved heterogeneity in the production function of publications. If the NSF selection decision is based upon expected future publication output, then when regressing publications on PI characteristics and AWARD (or REC_DOL), the estimated coefficient of the latter has an upward bias. Without an appropriate instrument that affects selection but not research output, unobserved heterogeneity is difficult to tackle directly. We take an eclectic approach. We attempt to control

7 In essence, if NSF support is leveraged, we overestimate the true effect of increasing the research funding for a given PI, and under-estimate it if NSF support crowds out other sources (for the PI in question).


as much as possible using observed characteristics and use standard parametric techniques to control for selection. Any remaining unobserved heterogeneity will merely imply that our estimate is an upper bound for the true effect.
We begin with the results of some non-parametric analyses where we compare the average growth in publications of selected and unselected individuals belonging to groups of PIs with similar characteristics. The validity of this analysis depends on whether PIs are similar within a group, which would imply that whether they are awarded a grant or not is approximately random. By comparing the average growth in publications of selected and unselected individuals within these groups, one can then assess whether there is any specific effect produced by the NSF award. Moreover, non-parametric analyses of this sort enable us to move away from the assumption of linearity in regression analysis.

5.1 Non-parametric analysis

Following Card and Sullivan ([1988]: 504), we compare the performance of selected proposals with a "matched" sample of similar but unselected proposals. The characteristics along which proposals were matched are (i) the age cohort of the PI; (ii) SCORE; (iii) PBEFORE8.
More precisely, to construct our matched sample, we first selected the proposals with scores less than 3 and grouped them into the following score classes: [1-1.75), [1.75-2.25), [2.25-3). (We did not consider score classes greater than 3, as in these classes there are too few awarded proposals.) Then, within each age cohort, we ranked all PIs according to their past publications (PBEFORE). The distribution of PBEFORE within each age class is highly skewed, with a long right tail. We then considered three classes of PIs in terms of their past publications. For each age cohort we distinguished among those with PBEFORE falling in the bottom 40% of the distribution for that cohort (LOW); those with PBEFORE falling between the bottom 40% and the top 20% (MEDIUM); and those in the top 20% (HIGH). Our cells are then composed of PIs within a given age cohort (ASSIST, ASSOC, or PROF) and a given SCORE class, and with LOW, MEDIUM or HIGH past publications.
For all the PIs in each cell we computed the increase in publications, GROWTH, defined as the difference between PAFTER and PBEFORE. We then computed the average of GROWTH for selected and non-selected PIs in each cell, and the difference between the two. The latter is an estimate of the NSF effect in each PI class.
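The cell construction, the within-cell difference in mean GROWTH, and the award-weighted average behind Tables 7A and 7B can be sketched roughly as follows, on simulated stand-in data (cohorts, scores, and publication counts are all made up for illustration).

```python
import numpy as np
import pandas as pd

# Sketch of the matched-cell comparison: cells combine age cohort, score class,
# and a LOW/MEDIUM/HIGH split of PBEFORE (bottom 40% / middle / top 20%)
# computed within each cohort. All data here are simulated.
rng = np.random.default_rng(3)
n = 900
df = pd.DataFrame({
    "COHORT":  rng.choice(["ASSIST", "ASSOC", "PROF"], n),
    "SCORE":   rng.uniform(1, 3, n),
    "PBEFORE": rng.gamma(2.0, 30.0, n),
    "AWARD":   rng.integers(0, 2, n),
})
df["GROWTH"] = rng.normal(5.0 * df["AWARD"], 70.0, n) - 0.2 * df["PBEFORE"]

df["SCLASS"] = pd.cut(df["SCORE"], [1, 1.75, 2.25, 3], right=False,
                      labels=["[1-1.75)", "[1.75-2.25)", "[2.25-3)"])
pct = df.groupby("COHORT")["PBEFORE"].rank(pct=True)   # within-cohort percentile
df["PCLASS"] = pd.cut(pct, [0, 0.4, 0.8, 1.0], labels=["LOW", "MEDIUM", "HIGH"])

# Within-cell NSF effect: mean GROWTH of awarded minus non-awarded PIs.
effect = df.groupby(["COHORT", "SCLASS", "PCLASS"]).apply(
    lambda g: g.loc[g.AWARD == 1, "GROWTH"].mean()
              - g.loc[g.AWARD == 0, "GROWTH"].mean())

# Average NSF effect, weighting each cell by its share of all awards (Table 7B).
weights = df[df.AWARD == 1].groupby(["COHORT", "SCLASS", "PCLASS"]).size()
weights = weights / weights.sum()
print((effect * weights).sum())
```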
The results are in Tables 7A and 7B9. As shown by Table 7A, in most of our cells, but not all, the average value of GROWTH for selected PIs is greater than the average value of GROWTH for non-selected PIs. Unfortunately, by restricting our analysis to observations within well defined cells, one quickly runs out of degrees of freedom. As a result, the statistical significance of our estimated means is unimpressive. However, our point estimates suggest that on average the difference in GROWTH between AWARD and non-AWARD is positive. Clearly, there is a lot of variation across cells, and this is natural, as the purpose of non-parametric analyses is also to account for possible non-linearities in the data.

8 Moreover, we neglected all the proposals with co-PIs, and restricted our attention to the proposals with only one PI. We also tried to distinguish between male and female PIs, but this has practically no effect on the final results.
9 We also performed the same analysis using differences in logs rather than levels. The qualitative results do not change.


Table 7a
Non-Parametric Analysis, NSF Effects by Classes of PIs

PI Class                              AWARD=0                   AWARD=1                  NSF Effect
(Age cohort / score /            N. obs.   Average         N. obs.   Average         (Difference b/w
class of PBEFORE)                          GROWTH                    GROWTH           AWARD=1 & AWARD=0)

ASSIST/1-1.75/LOW                    1      8.00 (73.87)       7     77.57 (27.92)      69.57 (78.97)
ASSIST/1-1.75/MEDIUM                 5      8.40 (33.04)      13     14.31 (20.49)       5.91 (38.87)
ASSIST/1-1.75/HIGH                  13     -4.92 (20.49)      19    -23.26 (16.95)     -18.34 (26.59)
ASSIST/1.75-2.25/LOW                11     44.82 (22.27)       3     44.67 (42.65)      -0.15 (48.11)
ASSIST/1.75-2.25/MEDIUM             27     39.59 (14.22)      23     76.57 (15.40)      43.97 (20.96)
ASSIST/1.75-2.25/HIGH               14    -69.36 (19.74)      12    -69.42 (21.32)      -0.06 (29.06)
ASSIST/2.25-3/LOW                   36     48.11 (12.31)       6     54.33 (30.16)       6.22 (32.57)
ASSIST/2.25-3/MEDIUM                42     35.90 (11.40)      11     40.09 (22.27)       4.19 (25.02)
ASSIST/2.25-3/HIGH                  17    -62.47 (17.92)       5    -29.00 (33.04)      33.47 (37.58)
ASSOC/1-1.75/LOW                     7     -7.71 (27.92)       6     42.17 (30.16)      49.88 (41.10)
ASSOC/1-1.75/MEDIUM                  6    -14.83 (30.16)      15     -2.47 (19.07)      12.37 (35.68)
ASSOC/1-1.75/HIGH                    4   -117.50 (36.93)      18    -95.22 (17.41)      22.28 (40.83)
ASSOC/1.75-2.25/LOW                 14     -1.57 (19.74)      13      8.62 (20.49)      10.19 (28.45)
ASSOC/1.75-2.25/MEDIUM              24    -27.54 (15.08)      18      3.89 (17.41)      31.43 (23.03)
ASSOC/1.75-2.25/HIGH                 8    -89.00 (26.11)      15    -90.27 (19.07)      -1.27 (32.34)
ASSOC/2.25-3/LOW                    33     13.82 (12.86)       8      8.00 (26.12)      -5.82 (29.11)
ASSOC/2.25-3/MEDIUM                 33    -38.33 (12.86)       8    -21.88 (26.12)      16.46 (29.11)
ASSOC/2.25-3/HIGH                   12   -107.00 (21.32)       9   -100.33 (24.62)       6.67 (32.57)
PROF/1-1.75/LOW                     13      0.15 (20.49)      25      5.24 (14.77)       5.09 (25.26)
PROF/1-1.75/MEDIUM                  15      8.47 (19.07)      25      6.20 (14.77)      -2.27 (24.13)
PROF/1-1.75/HIGH                     7    -47.57 (27.92)      32    -45.09 (13.06)       2.48 (30.82)
PROF/1.75-2.25/LOW                  26     17.92 (14.49)      20      9.70 (16.52)      -8.22 (21.97)
PROF/1.75-2.25/MEDIUM               52     14.00 (10.24)      25      9.24 (14.77)      -4.76 (17.98)
PROF/1.75-2.25/HIGH                 17    -32.35 (17.91)      20    -52.10 (16.52)     -19.74 (24.37)
PROF/2.25-3/LOW                     53      8.83 (10.15)      10     17.30 (23.36)       8.47 (25.47)
PROF/2.25-3/MEDIUM                  73     -4.41  (8.65)      17      4.82 (17.92)       9.23 (19.89)
PROF/2.25-3/HIGH                    29    -49.62 (13.72)      11    -15.64 (22.27)      33.98 (26.16)

Standard errors of the sample means in parentheses, calculated as the standard deviation of the variable divided by the square root of the number of observations. For the NSF effect, the standard errors are the estimated standard errors of the coefficient of the award dummy in an OLS regression of GROWTH on a constant and the award dummy, using the observations in each class. Total number of observations in this sample = 986 (394 with AWARD=1). LOW, MEDIUM, HIGH correspond to the following ranking of the distribution of PBEFORE by age cohort: bottom 40%; between 40% and 20%; top 20%.

Table 7b
Average NSF Effects

Sample      Average NSF Effect
Total            8.31  (6.00)
ASSIST          14.91 (11.69)
ASSOC           15.55 (11.59)
PROF             0.47  (8.77)

Weighted averages computed over all observations with AWARD=1 (394 for the total, 99 ASSIST, 110 ASSOC, 185 PROF), using the fraction of awards in each class relative to the total number of awards in the sample. Standard errors in parentheses.


Indeed, our point estimates suggest that, on average, the NSF effect tends to be stronger for the younger PIs (ASSIST and ASSOC) than for the more senior ones (PROF).
This is confirmed by Table 7B, which shows the average NSF effect across all classes, and the age-specific NSF effects for ASSIST, ASSOC, and PROF. These averages were computed after weighting the class-specific averages in Table 7A by the fraction of awarded PIs in that class relative to the total number of awards across all classes. (For the age-specific means we used the total number of awards of the age cohort.) Following Card and Sullivan [1988], this implied giving greater weight to those NSF effects that corresponded to a relatively larger number of awards. Table 7B shows that, for the sample as a whole, the NSF effect is modest. The estimated average NSF effect is 8.3 additional publications in quality-adjusted units. Using the list of impact factors of economics journals reported in the Appendix, this corresponds to one paper in the Journal of Business. Moreover, the NSF effect differs between more junior and more senior PIs. For the ASSIST and ASSOC categories, the estimated NSF effect is 15 (equivalent to one publication in the Economic Journal). For PROF, it is negligible.
To test the robustness of these results, we performed another non-parametric exercise. We constructed a sample of "identical" individuals by matching the PIs in our sample by characteristics. We defined groups of PIs that were in the same age cohort, had the same sex, belonged to the same type of institution (PHD, NOPHD, or OTHER), were either ELITE or non-ELITE, with GEOG=1 or not, within the same score classes (using 8 score classes from 1 to 3 by 0.25), and belonging to the same decile of the distribution of PBEFORE by age group. This produced 92 "types" with at least one PI who was awarded a grant and one PI who was not awarded a grant10. Table 8 shows the differences in the type-specific means of GROWTH between selected and non-selected PIs. As before, these means were weighted by the share of awards in each class over the total number of awards in the sample. The results in Table 8 are similar to those in Table 7b. The overall average NSF effect obtained in this case is again 8.3. The effect for ASSIST is 13.5, and that for PROF is about zero, while the effect for ASSOC is slightly higher (21.3).

Table 8
Non-Parametric Analysis, "Exact Matches": 92 Classes of 'Identical' PIs

Sample      Average NSF Effect
Total            8.26  (9.65)
ASSIST          13.49 (15.07)
ASSOC           21.31 (20.83)
PROF             0.49 (14.41)

Weighted averages computed using the fraction of awards in each class relative to total awards in the sample. Standard errors in parentheses.

10 In fact, we obtained 94 "identical" types, but two were discarded as they contained obvious outliers.


5.2 Regression analysis

We now turn to the regression results. Table 9 reports estimates from an OLS regression of the log of PAFTER (LAFT), using all observations in the sample. In the first column of Table 9 we regressed LAFT on the various PI characteristics, the log of PBEFORE (LBEF), and a dummy variable for AWARD. We allowed the coefficients of LBEF and AWARD to differ across age groups (ASSIST, ASSOC, and PROF). The second column in Table 9 adds the three score classes, S1-1.75, S1.75-2.25, and S2.25-3, amongst the regressors. (The omitted dummy is for the proposals with SCORE ≥ 3.) The last two columns of the table use the log of the size of the grant received (LREC) instead of the award dummy.
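A specification of this kind can be sketched with statsmodels formulas as below, on simulated data. The interactions reproduce the structure of Table 9 (cohort dummies, with AWARD and LBEF interacted with the cohort); log1p is used here only so that zero publication counts are defined, which need not match the transformation used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of a Table 9-type specification on simulated data: LAFT on cohort
# dummies, other PI characteristics, and AWARD and LBEF interacted with cohort.
rng = np.random.default_rng(4)
n = 1473
df = pd.DataFrame({
    "COHORT":  rng.choice(["ASSIST", "ASSOC", "PROF"], n),
    "AWARD":   rng.integers(0, 2, n),
    "ELITE":   rng.integers(0, 2, n),
    "PBEFORE": rng.gamma(2.0, 30.0, n),
})
df["LBEF"] = np.log1p(df["PBEFORE"])                     # assumption: log(1 + count)
df["LAFT"] = (0.5 * df["LBEF"] + 0.1 * df["ELITE"]
              + 0.4 * df["AWARD"] * (df["COHORT"] == "ASSIST")
              + rng.normal(0.0, 1.0, n))

fit = smf.ols("LAFT ~ COHORT + ELITE + AWARD:COHORT + LBEF:COHORT - 1", data=df).fit()
print(fit.params)
```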

Table 9
Publication Equation, Dependent Variable LAFT, OLS

Variable              (1)              (2)              (3)              (4)
ASSIST              2.520 (0.213)    2.472 (0.214)    2.550 (0.213)    2.497 (0.214)
ASSOC               0.709 (0.221)    0.656 (0.225)    0.730 (0.221)    0.672 (0.225)
PROF                0.699 (0.202)    0.619 (0.206)    0.717 (0.202)    0.634 (0.206)
D8890               0.052 (0.081)    0.043 (0.081)    0.049 (0.081)    0.041 (0.081)
PHD                -0.308 (0.098)   -0.306 (0.098)   -0.302 (0.098)   -0.300 (0.099)
NOPHD              -1.000 (0.189)   -0.963 (0.192)   -0.991 (0.188)   -0.955 (0.190)
DCOPI2             -0.196 (0.070)   -0.197 (0.070)   -0.204 (0.070)   -0.203 (0.070)
DCOPI3             -0.225 (0.132)   -0.239 (0.131)   -0.228 (0.132)   -0.242 (0.131)
ELITE               0.142 (0.091)    0.133 (0.091)    0.139 (0.092)    0.131 (0.091)
GEOG                0.221 (0.085)    0.215 (0.083)    0.215 (0.085)    0.210 (0.084)
MALE                0.143 (0.129)    0.141 (0.129)    0.123 (0.130)    0.127 (0.129)
AWARD*ASSIST        0.473 (0.122)    0.419 (0.127)        -                -
AWARD*ASSOC         0.309 (0.122)    0.273 (0.126)        -                -
AWARD*PROF          0.100 (0.108)    0.067 (0.111)        -                -
LREC*ASSIST             -                -            0.245 (0.057)    0.222 (0.060)
LREC*ASSOC              -                -            0.124 (0.055)    0.113 (0.056)
LREC*PROF               -                -            0.059 (0.047)    0.051 (0.048)
LBEF*ASSIST         0.325 (0.034)    0.314 (0.035)    0.320 (0.034)    0.310 (0.035)
LBEF*ASSOC          0.658 (0.035)    0.643 (0.036)    0.660 (0.035)    0.646 (0.036)
LBEF*PROF           0.694 (0.027)    0.685 (0.027)    0.692 (0.027)    0.682 (0.027)
S1-1.75                 -            0.110 (0.104)        -            0.092 (0.104)
S1.75-2.25              -            0.226 (0.093)        -            0.218 (0.093)
S2.25-3                 -            0.198 (0.082)        -            0.199 (0.092)

Log Likelihood     -2306.17         -2301.87         -2305.40         -2301.07
No. of obs.          1473             1473             1473             1473
Adjusted R2          0.500            0.502            0.500            0.502

Standard errors in parentheses.

The results suggest that the effect of the NSF awards declines with seniority, and in particular they confirm that the effect of the NSF award is higher for ASSIST and ASSOC, and is very close to zero for senior professors. Moreover, this result is remarkably stable across the four specifications in Table 9. With the caveat that we are ignoring unobserved aspects of ability which are possibly correlated with selection, some simple experiments using the estimated coefficients of AWARD may be illuminating. Using the estimates in the second column of Table 9, a typical proposal from a junior PI (ASSIST) would produce 52% more publications if she was awarded the grant. Since the sample mean of PAFTER for this category (conditional upon selection) is 120, this implies that her output would be higher by 41 units, which corresponds roughly to two publications in the Journal of Economic Theory, or one publication in the Rand Journal. Similar experiments can be performed for the other cohorts. For instance, the sample means of awarded ASSOC and PROF are respectively 104 and 89. This implies that, if not awarded, an awarded ASSOC PI whose PAFTER is equal to the sample mean would produce 25 fewer quality-adjusted publication units, or a little less than one Review of Economic Studies paper. Similar calculations for PROF imply a reduction of 6 publication units, or a little more than one Public Finance Quarterly paper.
This content downloaded from 131.172.36.29 on Sun, 07 Feb 2016 12:49:45 UTC
All use subject to JSTOR Terms and Conditions
112 ANNALES D'?CONOMIE ET DE STATIQUE

lication units, or a littlemore than one Public Finance Quarterly paper. These are
not very large differences, especially ifone believes thata part of diese differences
is likely to be attributable to unobserved aspects of the ability of thePI rather than
to the effect ofNSF funding.
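These back-of-the-envelope figures follow mechanically from the semi-log specification; a quick check of the arithmetic, using the column-2 award coefficients and the conditional sample means of PAFTER quoted above:

    import math

    # An award multiplies expected output by exp(coefficient); the gain at the
    # awarded-sample mean is mean - mean / exp(coefficient).
    def award_gain(coefficient, awarded_mean):
        return awarded_mean - awarded_mean / math.exp(coefficient)

    print(round(math.exp(0.419) - 1, 2))      # 0.52 -> "52% more publications"
    print(round(award_gain(0.419, 120)))      # ~41 units for ASSIST
    print(round(award_gain(0.273, 104)))      # ~25 units for ASSOC
    print(round(award_gain(0.067, 89)))       # ~6 units for PROF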
The other noteworthy result in Table 9 is that ELITE, GEOG, and OTHER (organizations different from PhD- or non-PhD-granting universities) affect publication output, even after controlling for factors like SCORE or past publications. We noted in section 4 that these variables also had an independent effect upon selection. What is not clear is whether membership in these networks causally improves scientific productivity (and correspondingly also the chance of being funded), or whether it simply reflects participation in social networks that are more closely connected to the editors of the journals and possibly to the NSF panels.
Finally, Table 10 presents maximum likelihood estimates of the production function of publications conditional upon selection. The sample selection probit equation is the usual one, with AWARD as the dependent variable. The regression equation uses LAFT as the dependent variable, and this is regressed on PI characteristics, LBEF interacted with the cohort dummies, and LREC also interacted with the cohort dummies, as well as the three score classes S1-1.75, S1.75-2.25, and S2.25-3. The results indicate that, after controlling for selection, the past publications of the PI continue to be strongly related to future performance, even after conditioning on the reviewer score and other characteristics. The estimates suggest both that publication performance is persistent and that there is regression towards the mean.
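The paper does not spell out the likelihood here; as a rough illustration of the selection correction, the sketch below uses a two-step Heckman-style procedure (probit for AWARD, then OLS for LAFT on the awarded subsample with an inverse Mills ratio term) as a stand-in for the MLE actually reported in Table 10. It reuses the hypothetical DataFrame of the earlier sketch, and the regressor lists are abbreviated.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    # Step 1: selection equation (probit for AWARD) on all 1473 applications.
    sel_X = sm.add_constant(df[["ASSIST", "ASSOC", "LBEF", "ELITE", "GEOG", "MALE"]])
    probit = sm.Probit(df["AWARD"], sel_X).fit(disp=False)
    xb = np.dot(np.asarray(sel_X), np.asarray(probit.params))
    mills = norm.pdf(xb) / norm.cdf(xb)                 # inverse Mills ratio

    # Step 2: publication equation on the awarded subsample, adding the Mills ratio.
    awarded = (df["AWARD"] == 1).values
    out_X = sm.add_constant(df.loc[awarded, ["ASSIST", "ASSOC", "LREC", "LBEF"]])
    out_X["MILLS"] = mills[awarded]                     # selection-correction term
    heckit = sm.OLS(df.loc[awarded, "LAFT"], out_X).fit()
    print(heckit.summary())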

The elasticities of LREC for the different cohorts denote the marginal effect of NSF funding on the publication output of the PIs in that cohort. Note that this elasticity is sizable and significant only for the young PIs (ASSIST); it is practically zero for the intermediate category (ASSOC), and positive for PROF, although smaller than for ASSIST and statistically insignificant. Thus, for junior PIs not only is selection important, but so are additional resources. The average grant received, REC_DOL, for ASSIST is about $70,000 and PAFTER is 120. Given the estimated elasticity of NSF funding of 0.64, this means that an increase of $10,000 for such a selected PI would increase publication output by about 11 units, just under one publication in the Review of Economics and Statistics. For PROF, REC_DOL at the sample mean for awarded PIs is $98,000, and the corresponding sample mean for PAFTER is 89. The estimated elasticity of 0.18 in Table 10 would then imply that, at the margin, an addition of $10,000 to the grant would increase output by less than 2 publication units, or one publication in the Journal of Development Economics.
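Again, these magnitudes follow directly from the estimated elasticities; a quick check of the arithmetic, using the elasticities and the awarded-sample means quoted above:

    # Approximate change in publication units from a marginal grant increase:
    # delta_output ~= elasticity * (extra_dollars / mean_grant) * mean_output.
    def marginal_units(elasticity, mean_grant, mean_output, extra_dollars=10_000):
        return elasticity * (extra_dollars / mean_grant) * mean_output

    print(round(marginal_units(0.64, 70_000, 120), 1))   # ~11.0 units for ASSIST
    print(round(marginal_units(0.18, 98_000, 89), 1))    # ~1.6 units for PROF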

As was the case earlier, these differences are small. This implies that even though we do not have an exact estimate of the impact of NSF funding on the research output of individual researchers, we can state with some confidence that the impact is small, except perhaps for researchers at the start of their careers. The impact is small on average, and especially small at the margin, for researchers who have progressed beyond the sixth year of their professional life.
The results presented should be interpreted with caution. Researchers differ in ability and, rich as our data are, we cannot hope to measure all the important dimensions. If so, one may incorrectly ascribe to the NSF what is due to ability. Furthermore, the process we are trying to map is a quintessentially dynamic one. Researchers enter the profession with different reputations (perceived ability). Their credentials and luck affect their ability to obtain research support. In turn, research support affects their publication output.


Table 10
Dependent Variable LAFT; Sample Selection, MLE Conditional upon AWARD = 1

Variable                (1)               (2)
ASSIST                  2.477 (1.152)     2.460 (0.637)
ASSOC                   0.967 (1.478)     0.901 (0.740)
PROF                    0.122 (1.078)     0.075 (0.557)
D8890                   0.157 (0.134)     0.165 (0.131)
PHD                    -0.303 (0.162)    -0.298 (0.133)
NOPHD                  -0.848 (0.472)    -0.825 (0.474)
DCOPI2                 -0.051 (0.133)    -0.047 (0.124)
DCOPI3                 -0.153 (0.363)    -0.162 (0.322)
ELITE                  -0.116 (0.195)    -0.117 (0.131)
GEOG                    0.359 (0.264)     0.351 (0.227)
MALE                   -0.336 (0.250)    -0.323 (0.226)
LREC*ASSIST             0.629 (0.256)     0.635 (0.256)
LREC*ASSOC             -0.037 (0.261)    -0.027 (0.237)
LREC*PROF               0.185 (0.175)     0.181 (0.147)
LBEF*ASSIST             0.187 (0.088)     0.183 (0.065)
LBEF*ASSOC              0.743 (0.142)     0.746 (0.095)
LBEF*PROF               0.842 (0.060)     0.838 (0.043)
S1-1.75                 -                 0.036 (0.279)
S1.75-2.25              -                 0.079 (0.268)
S2.25-3                 -                 0.134 (0.260)
Log likelihood         -1234.88          -1234.64
No. of obs.             1473              1473
No. of positive obs.    414               414

Standard errors in parentheses. "-" denotes a variable excluded from that specification. Probit equation for AWARD not shown.


In the next stage, their publication output is an important part of their perceived ability, and is key to obtaining further research support.
What this means is that even the distribution of unmeasured (latent) ability may differ systematically across the age cohorts of applicants. Only the "stars" may apply to the NSF at early stages of the career. Correspondingly, lack of success may discourage researchers from applying to the NSF, especially late in their career, when they may have alternative sources of support or non-research interests11.

11. Such an interpretation would be consistent both with anecdotal evidence and with more formal theories about path dependencies in career profiles (e.g., Dasgupta and David [1994]).

6 Conclusions

Our results have interesting implications for public policy in two areas. First, they raise some questions about the way in which proposals are selected. Although the track record of the PI and the reviewer score are important determinants, as they should be, selection also appears to be related to non-cognitive characteristics (location, institutional affiliation). We most certainly do not mean to suggest that the external evaluations and scores should be the sole criterion for deciding upon selection. Indeed, the NSF panel that makes the final recommendation may well have superior information about the expected productivity of the proposals. The vexing question is why these judgments about future productivity are systematically related to observed characteristics of the PIs that were known to the reviewers and which the reviewers were explicitly instructed to use in evaluating the proposals.

There are various possibilities. It may simply be that the NSF panel has superior information about the ability of the PI or the likelihood of success that reviewers lack and that is related to these characteristics. In our judgment, this possibility does not seem a very likely one. Another possibility is that the reviewers might not be providing an unbiased estimate of the future productivity of the proposal, and the NSF panel is merely undoing these biases. Alternatively, the NSF decisions are not aimed at simply maximizing the expected publication output. It may well be that encouraging diversity, including geographical diversity, is an auxiliary objective. If so, our estimates of the production function suggest that such "affirmative action" ought to be largely limited to junior PIs.
Indeed, our estimates imply one of two possibilities. First, the majority of economists (junior PIs excepted) appear to derive little productivity gain from funding, so that research support is pure rent: the research being funded would have been undertaken regardless of the outcome. One may speculate whether these effects would differ for theorists (pure and applied) as compared to empirical economists, who have a greater need for research assistants, data sets, and computer time. Alternatively, our results could be interpreted as implying that, for the bulk of economists, NSF funding crowds out other sources of research support. Thus, those that obtain this support do not seek other support that is easily available. In turn, the availability of NSF support provides at best a very modest boost to their productivity.
In either case, it suggests that NSF funds ought to be directed towards junior PIs. Clearly, one ought to proceed with some caution, for, as noted earlier, the process we are trying to model is a dynamic and complex one. This study is a first step towards a more rigorous and quantitative examination of these issues.

References

Adams J. and Griliches Z. (1998). - « Research Productivity in a System of Universities », Annales d'Économie et de Statistique, Special Number (The Economics and Econometrics of Innovation), 49/50: pp. 128-162.
Arora A., David P.A. and Gambardella A. (1998). - « Reputation and Competence in Publicly Funded Science », Annales d'Économie et de Statistique, Special Number (The Economics and Econometrics of Innovation), 49/50: pp. 164-198.
Averch H.A. (1988). - « Exploring the Cost-Efficiency of Basic Research Funding in Chemistry », Research Policy, 18: pp. 165-172.
Averch H.A. (1987). - « Measuring the Cost-Efficiency of Basic Research: Input-Output Approaches », Journal of Policy Analysis and Management, 6: pp. 342-362.
Blank D. and Stigler G. (1957). - « The Demand and Supply of Scientific Personnel », New York: National Bureau of Economic Research.
Card D. and Sullivan D. (1988). - « Measuring the Effect of Subsidized Training Programs on Movement In and Out of Employment », Econometrica, 56: pp. 497-530.
Cole S., Rubin L. and Cole J.R. (1977). - « Peer Review and the Support of Science », Scientific American, 237, 4: pp. 30-42.
Cole S. (1992). - « Making Science: Between Nature and Society », Cambridge, Mass. and London, England: Harvard University Press.
Dasgupta P. and David P.A. (1994). - « Towards a New Economics of Science », Research Policy, 23: pp. 487-521.
David P.A. (1994). - « Positive Feedbacks and Research Productivity in Science », in Economics of Technology (ed. O. Granstrand), Amsterdam and London: North-Holland.
Ehrenberg R. (1991). - « Academic Labor Supply », in C. Clotfelter et al. (eds.), Economic Challenges in Higher Education, Chicago University Press, Chicago.
Ehrenberg R. (1992). - « The Flow of New Doctorates », Journal of Economic Literature, XXX: pp. 830-875.
Liebowitz S. and Palmer J. (1984). - « The Impacts of Economics Journals », Journal of Economic Literature, 22-1: pp. 77-88.
Merton R. (1973). - « The Sociology of Science: Theoretical and Empirical Investigations », Chicago: Chicago University Press.
OECD (1994). - « Main Science & Technology Indicators », Paris: OECD.
Stephan P. (1996). - « The Economics of Science », Journal of Economic Literature, 34: pp. 1199-1235.
Tremblay C.H. (1992). - « National Science Foundation Funding in Economics and Chemistry », Atlantic Economic Journal, 20: pp. 57-64.


APPENDIX

Publication Output

The original database used to create the publication productivity scores is ECONLIT, which contains over 300,000 records on publications in journals, books, and Ph.D. dissertations from 1969 to 1995. We only used publications in journals.
The first step in creating publication scores was to find the complete list of journal articles by NSF applicants during the 1969-1995 period. This was accomplished through matching queries that match the names of the authors in the ECONLIT records with the names of NSF applicants. This resulted in about 30,000 articles. For each article the author was given a publication score equal to the impact factor of the journal divided by the number of authors of that article.
The impact factors were taken from Liebowitz and Palmer ([1984]: table 2, column 4). These impact factors are based on impact-adjusted citations per article (1980 citations to articles published during the period 1975-1979). The list of the top 50 journals with their impact scores is given below.
If the article is published in one of the top 50 journals, the publication score for this article is the impact score of the publishing journal divided by the number of authors of this article. Any journal not among the top 50 journals is given the basic score of 1. Thus, the publication score of an article published in a non-top-50 journal is 1 divided by the number of authors of that article.
The publication scores of an NSF applicant were grouped by the year of publication and added to give annual publication scores during the period from 1976 to 1995 for each applicant. These yearly scores were used to create the variables summing publication scores during the five years before and after the fiscal year in which an NSF applicant had at least one application for NSF funding.
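The scoring rule just described is mechanical; a minimal sketch of it, with hypothetical inputs (article records as (journal, year, number of authors) tuples and a dictionary of the top-50 impact factors keyed by journal name), is given below. Whether the application's fiscal year itself falls inside the five-year windows is not specified above, so the window endpoints here are an assumption.

    from collections import defaultdict

    # Hypothetical excerpt of the top-50 impact factors (see the full list below).
    IMPACT = {"Journal of Economic Literature": 100, "Econometrica": 31.6}

    def article_score(journal, n_authors, impact=IMPACT):
        # Impact factor of the journal (1 if not in the top 50), split among co-authors.
        return impact.get(journal, 1.0) / n_authors

    def annual_scores(articles):
        # Sum article scores by publication year for one applicant.
        totals = defaultdict(float)
        for journal, year, n_authors in articles:
            totals[year] += article_score(journal, n_authors)
        return totals

    def window_score(totals, fiscal_year, before=True, width=5):
        # Publication score over the five years before (or after) the fiscal year.
        years = (range(fiscal_year - width, fiscal_year) if before
                 else range(fiscal_year + 1, fiscal_year + width + 1))
        return sum(totals.get(y, 0.0) for y in years)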

1 Journal of Economic Literature 100


2 Brookings Papers on Economic Activity 96.86
3 Journal of Financial Economics 62.15
4 Journal of Political Economy 59.12
5 Bell (Rand) Journal of Economics 39.45
6 American Economic Review 34.48
7 Journal ofMonetary Economics 33
8 Economica 31.63
9 Econometrica 31.6
10 Review of Economic Studies 30.36
11 Journal ofMathematical Economics 24.73
12 Journal of Law and Economics 22.89
13 Journal of Economic Theory 22.28
14 Journal of Public Economics 19.65
15 International Economic Review 19.04
16 Journal of Econometrics 17.32
17 Journal of Industrial Economics 16.55


18 Quarterly Journal of Economics 16.17
19 Economic Journal 14.96
20 Journal of Finance 14.63
21 American Economic Review Papers and Publications 14.613
22 Journal of International Economics 14.12
23 Journal of Human Resources 13.63
24 Review of Economics and Statistics 12.4
25 Public Finance 11.92
26 National Tax Journal 9.9
27 Journal ofMoney, Credit, and Banking 9.88
28 Canadian Journal of Economics 9.43
29 Manchester School of Economic and Social Studies 9.38
30 Industrial and Labor Relations Review 8.95
31 Journal of Legal Studies 8.43
32 Journal of Business 8.29
33 Journal ofUrban Economics 8.07
34 Economic Inquiry 7.88
35 Scandinavian Journal of Economics 7.11
36 Journal ofAccounting Research 6.98
37 Environmental Economics Review 6.66
38 Public Finance Quarterly 5.52
39 Oxford Economic Papers 4.86
40 Southern Economic Journal 4.83
41 British Journal of IndustrialRelations 4.75
42 Applied Economics 4.39
43 Kyklos 4.3
44 Journal of Environmental Economics andManagement 4.16
45 Journal of Royal Statistical Society, Series A 4.14
46 Public Choice 4.09
47 Journal of Financial and Quantitative Analysis 3.44
48 Journal of theAmerican Statistical Association 3.02
49 Inquiry 3.01
50 Journal of Development Economics 2.2
