
Academy of Management Discoveries

DOES GOVERNMENT FUND THE BEST ENTREPRENEURIAL VENTURES? THE CASE OF THE SMALL BUSINESS INNOVATION RESEARCH PROGRAM

Journal: Academy of Management Discoveries
Manuscript ID: AMD-2019-0078.R2
Manuscript Type: Revision
Keywords: Entrepreneurial Finance (IPO, Crowdfunding, Bootstrapping) < Entrepreneurship & Family Business; Small Business < Entrepreneurship & Family Business; Quasi-Experimental Designs < Research Methods; Innovation < Performance & Effectiveness; Value Creation / Value Emergence; Venture Capital < Entrepreneurship & Family Business


DOES GOVERNMENT FUND THE BEST ENTREPRENEURIAL VENTURES? THE CASE OF THE SMALL BUSINESS INNOVATION RESEARCH PROGRAM

SUPRADEEP DUTTA
School of Management
State University of New York – Buffalo
341 Jacobs Management Center
Buffalo, NY 14260
Tel: (716) 645-5237
e-mail: supradee@buffalo.edu

TIMOTHY B. FOLTA
Thomas John and Bette Wolff Family Chair in Strategic Entrepreneurship
School of Business
University of Connecticut
2100 Hillside Road
Storrs, CT 06269-1041
Tel: (860) 486-3734
e-mail: timothy.folta@uconn.edu

JENNA RODRIGUES
School of Business
University of Connecticut
2100 Hillside Road
Storrs, CT 06269-1041
Tel: (860) 486-6423
e-mail: jenna.rodrigues@uconn.edu

Acknowledgements
The authors thank the two anonymous referees and Editor Christopher Tucci for their helpful guidance. The authors are grateful for comments from Daniel Blaseg, Riitta Katila, Sheryl Winston Smith, Giovanni Valentini, and participants at the following institutions: SKEMA, University of Illinois, ESADE, and Copenhagen Business School.

DOES GOVERNMENT FUND THE BEST ENTREPRENEURIAL VENTURES? THE CASE OF THE SMALL BUSINESS INNOVATION RESEARCH PROGRAM

ABSTRACT

Policy makers around the world are increasingly interested in spurring entrepreneurship by providing capital to promising ventures and often develop government programs designed to do so. Whether governments can effectively identify and reward the most promising ventures is a topic of considerable debate. We examine the selection capabilities of the largest such government program in the United States – the Small Business Innovation Research (SBIR) grant. No prior work has systematically evaluated whether SBIR prioritizes the most promising technical and commercial ventures. We are able to do so by exploiting a quasi-natural experiment made possible by the sudden release of additional funds through the American Recovery and Reinvestment Act (ARRA). Since our sample consists of two sets of firms having received SBIR grants—one prioritized through the regular-funded process and a second group funded through the additional ARRA money—we can ascertain whether the prioritized ventures outperform the others, while controlling for the treatment effect. Our evidence supports the view that governments can effectively implement entrepreneurial programs, and we provide some insight into the capabilities underlying effective selection. Moreover, the evidence reveals that governments are capable of selecting risky ventures—the kind that might produce high impact.

Keywords: Selection Effect; SBIR Grant; Innovation; Commercialization; Natural Experiment; American Recovery and Reinvestment Act

JEL Classification: G24, L26, L53, O31, O34, O38

INTRODUCTION

In the last few decades, “communities, cities, regions, and nations throughout the world have been turning to entrepreneurship as an engine of growth, jobs, and competitiveness” (Ács et al., 2009). Undoubtedly, the United States government program committing the most capital to new ventures is the Small Business Innovation Research (SBIR) program, which allocated $2.73 billion in fiscal year 2018, and over $25 billion in the last decade (2009-2018). The Small Business Administration (SBA) manages these programs, and eleven government agencies contribute a required percentage of their R&D budgets. Tasked with the stated goal of increasing “private sector commercialization of innovations derived from federal research,” the program aims to provide early-stage funding to the most promising applicants.1 Whether these agencies select appropriately is the focus of this research. This question has not yet been answered despite its importance to the welfare of future economic growth and prosperity. Not only does our study speak to whether the U.S. government can efficiently spur entrepreneurship and innovation, but it also has implications for whether any government can productively drive economic growth by effectively distinguishing between “winners” and “losers” in the commercial marketplace.

1 Public Law 97-219, July 22, 1982.

There are compelling reasons why governments should provide capital to entrepreneurial ventures. Such funding might overcome private sector underinvestment in the social returns from research because of the externalities in the free market (Nelson, 1959; Arrow, 1962). It might spur a country’s comparative advantage by investing in commercial endeavors that push the boundaries of current technology (Lin, 2012) or inspire new technological landscapes (e.g., biotech, nanotech) (Block and Keller, 2011; Mazzucato, 2013) by connecting researchers who would not otherwise connect (Fuchs, 2010). It might also provide complementary support to private sector investment (Lin, 2012; Lanahan et al., 2016). Furthermore, government investment may convey status (Meyer and Rowan, 1977; Oliver, 1990) or certify (King et al., 2005; Sine et al., 2007) ventures to private investors and address information asymmetries that preclude investment in innovative technologies (Greenwald et al., 1984; Hall and Lerner, 2010).

There are also compelling reasons why governments should not fund entrepreneurial ventures. Lerner (2009) reports that for every public intervention that spurs entrepreneurial activity, there are many failed efforts that waste untold billions in taxpayer dollars. The leading arguments against such efforts center on whether the government is actually choosing to fund the right ventures. Agencies may be subject to regulatory capture, where government involvement is distorted by the desires of interest groups (Hegde and Mowery, 2008), or government officials may lack the expertise necessary to adequately resolve information asymmetry (Lerner, 1999). Selection process biases (Boudreau et al., 2016; Li, 2012) and outdated and static peer review processes (Azoulay et al., 2012; Bourne and Lively, 2010) are additional factors that may contribute to the suboptimal allocation of funds. Moreover, a multiplicity of program objectives may confound agencies seeking to optimize investment outcomes. The four program objectives of the SBIR grant program include the following: a) stimulate technological innovation, b) meet federal research and development needs, c) foster and encourage participation in innovation and entrepreneurship by women and socially or economically disadvantaged persons, and d) increase private-sector commercialization of innovations.2

2 Congressional objectives as stated in the Small Business Innovation Development Act (PL 97-219) and in its reauthorization of the program in 1992 (PL 102-564).

Most believe that the question of whether governments should provide capital to ventures cannot be sufficiently addressed with a “yes” or “no” answer. In this sense, we agree with Mazzucato and Perez (2015), who emphasize that the core of the policy debate is not whether policies require picking and choosing but how to enable such picking to occur in the smartest way possible. The important question is how the government should allocate its resources, such that the processes employed achieve the outcomes desired. This is our purpose. We examine whether one government agency, the National Science Foundation (NSF), chooses to fund the most deserving ventures.

Our interest in studying selection effects in the SBIR program contrasts with the work of others who have focused on program treatment effects. This distinction between selection and treatment is important because, while prior research generally finds that firms receiving government funding outperform others (e.g., Lerner, 1999; Audretsch et al., 2002; Czarnitzki and Fier, 2002; Gonzalez et al., 2005; Wessner, 2008; Link and Scott, 2009, 2010; Keller and Block, 2012), whether grant funding influences venture success or survival may be due to either the substantive impact of the awarding of the grant (i.e., a treatment effect) or the fact that the government selects strong firms (i.e., a selection effect).3 Curiously, there is no strong evidence in favor of positive treatment effects. Wallsten (2000) finds that SBIR funding has no influence on awardees’ future research and development expenditures or employment, whereas Howell (2017) finds that SBIR funding influences awardees’ future patent citations, venture capital investment, revenue, survival, and successful exit.4 As a consequence, a likely explanation for why SBIR-funded firms outperform unfunded firms is that agencies select the better firms. Unfortunately, while Wallsten (2000) and Howell (2017) control for selection effects, neither they nor any others systematically study whether government agencies select better firms.

3 Evidence suggests a strong association between federal grant programs like SBIR and firm performance on a number of fronts, including employment and sales growth (Lerner, 1999), technological innovation (Audretsch et al., 2002; Link and Scott, 2010; Wessner, 2008), commercialization of innovations (Audretsch et al., 2002; Link and Scott, 2009; Wessner, 2008), investment in research and development (Audretsch et al., 2002; Czarnitzki and Fier, 2002; Gonzalez et al., 2005), and access to subsequent external funding (Keller and Block, 2012; Lerner, 1999). Likewise, studies have noted a positive correlation between government intervention and firm performance in other international economies, including China (Guo et al., 2016; Wang et al., 2017), Japan (Inoue and Yamaguchi, 2017), and the European Union (Bronzini and Piselli, 2016).
4 Wallsten’s (2000) sample included only a small percentage of Department of Defense applicants denied grants, and he supplemented this subsample with a matched sample of non-applicants. Howell’s (2017) test included all applicants for Department of Energy SBIR grants. She also has a more robust set of performance outcomes.

One reason prior work has not explored selection effects in the SBIR program is that it is challenging to do so. Little is actually known about project selection capabilities because federal agencies do not disclose projects denied funding—limiting the diagnosis of whether agencies select the most promising projects. Even if data on unsuccessful applicants were publicly available, the best researchers could do would be to control for selection, rather than ascertain whether SBIR grants are awarded to appropriate ventures. In contrast, we are able to identify pure selection effects through a quasi-natural experiment where all firms in the sample receive the same treatment, but one set of firms is prioritized over the other, since the latter only received funding due to the American Recovery and Reinvestment Act (ARRA) of 2009, which unexpectedly provided extra money to NSF that the agency channeled into the SBIR program.5 The ARRA event can be exploited as a quasi-natural experiment because the sudden inflow of ARRA capital is an exogenous event, uncorrelated with any directives adopted by the federal agency.6 Just as Park et al. (2015) use the ARRA quasi-natural experiment to study whether NIH selects the most promising R01 research grants, we are able to use the event to ascertain whether NSF selects the most promising commercial ventures for SBIR funding. Since both ARRA-funded and regular-funded firms receive a treatment, any observed differences in post-grant performance outcomes between the two groups will be due to selection preferences.7 A belief that government agencies select appropriately would suggest that regular-funded ventures outperform ARRA-funded ventures because they were prioritized for funding. Several venture-level performance metrics are used to make this assertion, including patents granted, patent citations, venture capital infusion, and incidence of initial public offering or acquisition.

5 Note that ARRA money was also distributed to the National Institutes of Health, but the number of firms receiving Phase I SBIR grants through NIH was small. Unreported empirical analysis comparing NIH versus NSF suggested no difference in selection capabilities.
6 Thus, the selection criteria employed by the federal agencies are unlikely to have been influenced by the initiatives or biases undertaken by the agency, enabling us to rule out factors affecting selection into these two groups unrelated to venture quality.
7 ARRA mandates a significant level of transparency and accountability that requires all ARRA-funded projects to expend the funds in a timely manner and report the project details online through the ARRA website (www.FederalReporting.gov). The accurate identification of the ARRA-funded firms enables us to measure and compare the innovation output and commercial potential of firms. Projects were associated 1:1 with firms, so we use the terms interchangeably. The SBIR program is different from other federal grant programs that do not require firms to be associated with the grant proposal.

Our investigation into whether NSF selects the most qualified commercial venture applicants to fund contributes in several respects. It complements prior work focusing on treatment effects in the same important program. The question of program selection is vital for both scientific and commercial applications, but is particularly intriguing for the latter because evaluators in government agencies tend to be technical experts but generally lack experience and training in evaluating commercial potential. Unlike decisions to fund science, decisions to fund innovative commercial ventures require market expertise and due diligence, the absence of which may undermine the quality of funding decisions. Our work not only assesses whether proper selection is achieved, but it may also inspire and illuminate new processes and structures to achieve optimal selection of the best ventures. The downward trend in federal funding over the last decade (currently at 78% of the 2006 budget) and impending budget cuts proposed in 2017 imply more competition for fewer grants, which escalates the importance of the selection process. While we empirically investigate only one government agency awarding SBIR grants, NSF is one of the largest funders of non-defense grants in the program, and our findings may also speak to other federal agencies administering SBIR funds.

There are also implications of our study beyond the SBIR context, which are fundamental to communities, cities, regions, and nations throughout the world attempting to spur innovative commercial activity. Moreover, selection plays a central role in many settings; for example, organizations select new projects in which to invest, human resource teams select the most appropriate human capital, universities and colleges select faculty and students with the right credentials, and private equity investors select promising ventures. Indeed, possessing the internal organizational processes required to make the right selection decisions is a salient capability. While academic scholarship is largely centered around analyzing the potential benefits/outcomes (treatment) of an organizational activity, making the right selection decision is an early step in any activity—having direct implications for the realized outcomes.
SBIR GRANT PROGRAM

To understand whether selection processes for SBIR grants are consistent with policy guidelines, it is important to better understand program objectives and review processes at both the federal and agency levels. The SBIR program is a congressionally-mandated public/private partnership grant program that fosters R&D investment in U.S. small businesses with strong commercialization potential. All government departments and agencies with external research programs exceeding $100 million are required to dedicate a minimum percentage of their external research budgets to the SBIR program (with a 3.2% minimum requirement as of 2017). The SBIR program was authorized as a government-wide program through the Small Business Innovation Development Act of 1982. SBIR programs award three phases of funding. Phase I awards provide approximately $150,000 of funding distributed over 6-12 months, with the aim of establishing the technical merit, feasibility, and commercial potential of the proposed research, and of determining the preliminary performance quality of the associated small business prior to providing further monetary support. Phase II funding determinations are based on the results achieved in Phase I and on the scientific merit, technical merit, and commercialization potential of the project proposed in Phase II. Only Phase I awardees are eligible for Phase II, with funding that normally does not exceed $1,000,000 distributed over two years. Phase III awards are targeted towards advanced pursuit of commercialization objectives subsequent to Phases I and II.

The eleven agencies participating in the joint program include NIH (Health and Human Services), NSF, Department of Defense, NASA, Department of Energy, Environmental Protection Agency, Department of Agriculture, Department of Commerce, Department of Education, Department of Transportation, and Homeland Security. From fiscal years 2009-2011, the program distributed an average of $2 billion in grants. The first five agencies listed above accounted for nearly 97 percent of the program's grants. Out of the participating agencies, NSF and NIH allocated ARRA funds to proposals that were initially rejected in the regular budget cycle. Since NIH funded only 19 ventures across 6 existing requests for proposals, we decided to focus on ventures funded through NSF, enabling us to exploit the quasi-natural experiment made accessible through the release of ARRA funds.
REASONS UNDERLYING SUBOPTIMAL SELECTION

The effectiveness of government grant programs should not be taken for granted. Program success depends on how the agencies implement the selection process, which types of projects are given preference, and whether this aligns with the broader objectives of the program. In theory, a grant program will be effective if it selects to fund the marginal project—the project that would not be undertaken without receipt of the grant (David et al., 2000; Wallsten, 2000).

There are multiple facets concerning the effectiveness of project selection. Should we assume that federal agencies will select venture applicants with the most technical and commercial promise? Scholars have noted a number of fundamental reasons to question such an assumption. Evaluation of applications may be influenced by any number of factors beyond the “true” quality of the project, including applicant and reviewer characteristics, ties between applicants and their reviewers, and selection procedures (Lee et al., 2013). First, peer review processes are not insulated from political influence (Hegde and Mowery, 2008), and program managers may face pressure from congressional officials to fund certain types of projects over others, yielding inequality in the distribution of funds being awarded on the basis of applicant ethnicity (Ginther et al., 2011), research topic (Bisias et al., 2012), and geography (Lerner, 1999). For example, it is no secret that the overwhelming share of funds being distributed to firms in California and Massachusetts has prompted scrutiny on behalf of congressmen dissatisfied with the unbalanced geographic distribution of disbursed capital.8

8 California and Massachusetts account for 35.3 percent of all Phase I awards (Wessner, 2008).

Second, some venture applicants have been identified as “SBIR mills” because of their ability to continually attract grants, and this capability might supersede the quality of their underlying commercial potential (Lerner, 1999; Link and Scott, 2009). SBIR assessment reports conducted by the National Research Council highlight the contentious issue of SBIR mills—firms that receive many awards yet have low commercialization outcomes (Wessner, 2004). The U.S. General Accounting Office (GAO) (1992) suggests that many of these ventures have offices in Washington that focus on identifying application opportunities, but they commercialize technologies at much lower rates than other firms do.

Third, it is possible that awards are granted in an attempt to align with agency-specific procedures and policy objectives rather than purely on the basis of technical and commercial merit or long-run business sustainability, and such procedures may not always remain in congruence with the broader objectives of the SBIR program. Prior work assumes government agencies have institutionalized processes that are better at assessing scientific instead of commercial merit (Pahnke et al., 2015). Likewise, GAO (1992) found that the Department of Defense (DoD) has commercialization pathways influenced more by its agency-specific mission than other agencies like NSF, because many of the resulting technologies that the DoD supports are procured by the federal government, which raises the question of whether agencies may place different levels of emphasis on private sector commercialization (Wallsten, 2000). Indeed, Howell (2017) noted that peer review scoring criteria are centered on identification of the technological merit of proposals rather than the more subjective assessment of commercial potential, perhaps because of a lack of access to qualified commercial reviewers, processes tuned towards assessing scientific rather than commercial merit, or differences in the degree to which predicted commercial impact aligns with agency-level policy objectives.

Fourth, award selection may be subject to reviewer biases. Reviewers may prefer projects that fall within the domain of their expertise or research interest area (Li, 2012), yet may also be more critical towards projects that lie within their own domain (Boudreau et al., 2016). Reviewer biases may also surface due to conformance to organizational identity—how similar the candidates are to themselves (Lamont, 2009; Zhou, 2005)—and to evaluations made by other reviewers (Fini et al., 2018). Outdated and static review processes may inhibit adaptation to a changing scientific ecosystem, biasing reviewers away from the most promising projects led by younger researchers (Azoulay et al., 2012; Bourne and Lively, 2010).

Even if the selection processes are immune to political forces and reviewer, applicant, and agency-level biases, agencies may generally lack the experience required to select the most promising commercial opportunities. Financing innovation is fraught with the challenges of information asymmetries because of the inherent uncertainty in assessing technological feasibility, business model credibility, and product or service viability. The private equity industry has developed the capacity to overcome information asymmetries with specialized knowledge, experience, and industry contacts.9 Critics contend that the government should not be involved in such selection processes because of its relative inferiority to private markets in judging venture quality and allocating funds efficiently (Lerner, 2009). However, NSF draws expertise from external reviewers to ascertain the commercial merit of projects. As one of the NSF Program Directors noted, “We (NSF) have people brought in to give us specific feedback [...] consisting of entrepreneurs, consultants, and people familiar with regulatory tasks to assist in evaluating the commercial viability of projects.”10 This complements the technical and commercial expertise of NSF Program Directors, many of whom have prior experience as entrepreneurs and investors themselves. Given the lack of commercial expertise within federal agencies in comparison to that of private sector financiers, there is good reason to question whether selection occurs optimally within the grant selection process. In light of these considerations, ascertaining the effectiveness of the selection process of the grant program warrants empirical justification.

9 Nevertheless, the amount of capital provided by the SBIR program is dwarfed by that of venture capital ($3.2 billion versus $26.65 billion, respectively, in 2012), which suggests that many firms rely on the SBIR grant programs for support.
10 Interview with NSF Program Director. Personal phone interview. March 19, 2020.
AMERICAN RECOVERY AND REINVESTMENT ACT (ARRA)

The ARRA event constitutes our quasi-natural experiment to isolate the efficiency of selection processes for SBIR grants. In February 2009, the U.S. government allocated $831 billion for the economic stimulus package based on the ARRA enacted by the 111th U.S. Congress. This is the single largest stimulus to flow into the U.S. economy through many federal agencies, including NSF. A principal purpose of ARRA was to provide investment needed to spur technology advancements in science and health and to preserve and create new jobs. Accordingly, the legislation designated $3 billion to NSF, with $2 billion dedicated to extramural research funds. NSF allocated a fraction of this funding ($49.9 million) to the SBIR grant program.

NSF has one SBIR grant solicitation per year. The relevant solicitation in the case of ARRA was NSF 08-548 (https://www.nsf.gov/pubs/2008/nsf08548/nsf08548.htm), having due dates on June 10, 2008 and December 4, 2008, and targeting Biotech and Chemical Technologies; Electronics, Components, and Engineering Systems; and Software and Services. It is clear that ARRA money was distributed to applications that had been reviewed (and denied funding through the regular budget) for this solicitation because: (a) on March 18, 2009, NSF released a notice stating that it would consider the ARRA money for funding proposals declined after October 1, 2008; (b) notifications occur about six months after due dates, which implies that proposals submitted on June 10 were included; and finally (c) proposals submitted for the June and December 2008 proposal deadlines are associated with fiscal year 2009, the year with which all the ARRA proposals were associated.11 The ARRA funding event happened after the due dates of the regular grant solicitation cycle. An interview with an NSF Senior Program Director confirmed that NSF did not have sufficient time to run a new review when notified of receipt of the ARRA funding; therefore, NSF awarded ARRA funding to projects that had already been reviewed.12 Figure 1 provides a timeline.

11 https://www.nsf.gov/about/congress/111/hs_090319_recoveryactscience.jsp
12 Interview with NSF Senior Program Director. Personal phone interview. August 5, 2019.

[Insert Figure 1 about here]
EMPIRICAL STRATEGY

Our research design differs from most of the archival research on the impact of federal grants. Therefore, before providing the full details of the methodology, a high-level roadmap of the empirical approach is provided below.

A set of firms was identified as close contenders for an SBIR Phase I award that were not selected in the regular process of grant solicitation and ended up receiving the grant because of the exogenous capital inflow through ARRA. These firms are referred to as ARRA-funded firms. Also identified were the firms in the same application pool that received SBIR Phase I awards during the regular process of the same grant solicitation. These firms are referred to as regular-funded firms. This process results in a sample of firms receiving the SBIR award, with the regular-funded firms being prioritized above the ARRA-funded firms. A rational and unbiased process for awarding the grants should reveal that the regular-funded firms outperform the ARRA-funded firms. Annual data was also collected on each firm related to any patent filed and granted, patent citations, venture capital raised, and incidence of initial public offering (IPO) or acquisition. The analysis of this data assesses whether the priority that NSF gave to the regular-funded firms was justified based on the performance outcomes of the ventures.

There are several attractive qualities of the ARRA event as a quasi-natural experiment. First, our design enables us to identify the pure selection effect, since both regular-funded and ARRA-funded firms receive the treatment (i.e., the SBIR award). Second, the inflow of ARRA money allows us to observe which projects have the highest priority for NSF, presumably because they are the ones with potential to create the most value. The quasi-natural experimental set up of the ARRA event mimics the benefits of a regression discontinuity approach without having explicit cut-off scores—enabling us to identify firms that are below a certain threshold, who later get the funding through the exogenous windfall of the ARRA money. Third, the inflow of money was unexpected by NSF and other agencies and so could not influence the selection process during the regular cycle, which was completed before ARRA was announced. A number of sources have confirmed the unexpected nature of the ARRA funding. An interview with the NSF Senior Program Director active at the time confirmed that the selection for the regular cycle was completed prior to the ARRA announcement and that the funding objectives during the ARRA cycle were the same as during the regular cycle.13 Based on these qualities, we believe that any differences in quality between firms funded through ARRA and firms funded during the regular budget cycle reflect the priorities of NSF to first fund the highest quality ventures.

13 An interview with the NSF Senior Program Director at the time of ARRA enactment corroborates the quasi-experimental set up of the ARRA event. The Program Director at NSF noted, “We (NSF) realized we would get the funding in May of that year […] there would have been no way that we would have had time to run a new review […] we had to award projects that were already reviewed when we were notified that we would receive additional funding through ARRA. We had proposals that were evaluated that we had wanted to fund, so we used this (ARRA) money to go back and fund those projects.”
15
16 Sample Data
17
18 The sampling procedure occurred through the following steps. All 147 firms that received a
19
20 Phase I SBIR grant from NSF through the ARRA funds were identified. Data detailing the grant
21
22
23 date, firm identity (grant recipient), grant amount ($), and project description were sourced from
24
25 the ARRA website (www.FederalReporting.gov). Since ARRA funds were distributed to firms
26
27 that submitted proposals to the solicitation NSF 08-548, we identified that 620 firms, identified
28
29
30
from the SBIR website (www.SBIR.gov), were funded through the regular budget for the same
31
32 NSF grant solicitation. From the sample of 767 firms, six were eliminated that had publicly-
33
34 traded stock (one ARRA-funded; five regular-funded), 99 were eliminated that were founded
35
36 more than 10 years prior to the date of receiving the grant (3 ARRA-funded; 96 regular-funded),
37
38
39 and five regular-funded firms were eliminated because of incomplete information.14 These steps
40
41 yielded a sample of 657 awarded firms (143 ARRA-funded and 514 regular-funded).
42
43
44
45
46
47
48
49 13 An interview with NSF Senior Program Director at the time of ARRA enactment corroborates the quasi-experimental set up of
50 ARRA event. The Program Director at NSF noted, “We (NSF) realized we would get the funding in May of that year […] there
51 would have been no way that we would have had time to run a new review […] we had to award projects that were already
52 reviewed when we were notified that we would receive additional funding through ARRA. We had proposals that were evaluated
53 that we had wanted to fund, so we used this (ARRA) money to go back and fund those projects.”
14 A similar firm age cut-off was used by Hellmann and Puri (2002) to categorize startups. Small businesses are resource
54
constrained as they lack market credibility and legitimacy and suffer from the liability of newness (Stinchcombe, 1965) that
55 offers a desirable dimension of homogeneity to evaluate the selection preferences of SBIR grants on startups’ innovation and
56 commercialization. We also conducted a robustness test by relaxing the firm age cut-off criterion.
57
58
59
60
Academy of Management Discoveries Page 16 of 67

1
2
3 The final step in the data collection process augments the information with other sources
4
5
6 by matching on firm name.15 For example, innovation outcomes are measured from patent
7
8 information sourced from the USPTO patent database. Data on firm commercialization potential,
9
10 such as venture capital, IPO, and acquisition, was gathered through SDC Platinum, Thomson
11
12
13
One VentureXpert, Dun & Bradstreet, and Factiva news.16 Dun & Bradstreet provided
14
15 information on firm year of founding and industry classification. The result of these efforts is a
16
17 unique panel database of 657 NSF awarded firms observed from founding year until 2015.17
18
19
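The matching step in footnote 15 can be illustrated with a simple normalize-then-merge routine. The sketch below is a Python stand-in for the SAS code referenced in Appendix B; the normalization rules and column names are assumptions for illustration, not the authors' actual implementation.

```python
import re
import pandas as pd

def normalize_name(name: str) -> str:
    """Standardize a company name: lowercase, strip punctuation and
    common legal suffixes so permutations of the same firm collide."""
    name = name.lower()
    name = re.sub(r"[^\w\s]", " ", name)                   # drop punctuation
    name = re.sub(r"\b(inc|llc|corp|co|ltd)\b", "", name)  # drop suffixes
    return re.sub(r"\s+", " ", name).strip()

# Hypothetical inputs: sample firms and USPTO assignee records.
firms = pd.DataFrame({"firm_name": ["Acme Robotics, Inc."]})
uspto = pd.DataFrame({"assignee": ["ACME ROBOTICS INC"], "patent_id": [123]})

firms["key"] = firms["firm_name"].map(normalize_name)
uspto["key"] = uspto["assignee"].map(normalize_name)
matched = firms.merge(uspto, on="key", how="left")  # firm-to-patent links
```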
Variables

Dependent variables. The dependent variables for the analyses include performance metrics related to innovation and commercialization potential, which correspond to two of the stated objectives of the grant program.

Innovation. To ascertain innovation performance, we identify patents associated with each firm from the annual USPTO patent database and construct two measures – patent count and patent citations. Patent count is measured as the inverse hyperbolic sine transformation of a firm's annual number of patents filed (and eventually granted).18 Patent filing date is used because it more closely approximates innovation date and is unaffected by potential delays in the patent granting process (Griliches et al., 1987). Patents that are more frequently cited are typically interpreted as having more impact than less cited patents (Trajtenberg, 1990). Prior work has shown that citations have a strong correlation with the innovation's economic value (Hall et al., 2005). Consequently, consistent with prior work (Hall et al., 2001), patent citations is the inverse hyperbolic sine transformation of the number of times the firm's patents are cited by other patents in the calendar year of the patent application and three subsequent years.19 The 657 firms in our sample have 3,249 patents receiving 15,492 citations in the three-year window.

18 The results are similar for the alternative log transformed variable.
19 The results are similar for the alternative four-year window to measure patent citations. Citation intensity varies over time - a fixed shorter time window allows for consistency, but caution should be taken in the interpretation.
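Both measures rely on the inverse hyperbolic sine (IHS), log(x + sqrt(x^2 + 1)), which behaves like a log transformation for large counts but, unlike log(x), is defined at zero. A minimal sketch, assuming a pandas firm-year panel with illustrative column names:

```python
import numpy as np
import pandas as pd

# Hypothetical firm-year panel; column names are illustrative only.
panel = pd.DataFrame({
    "firm_id":       [1, 1, 2, 2],
    "year":          [2009, 2010, 2009, 2010],
    "patents_filed": [0, 3, 1, 0],    # patents filed (eventually granted)
    "citations_3yr": [0, 12, 2, 0],   # forward citations within 3 years
})

# IHS: log(x + sqrt(x^2 + 1)); defined at zero, ~log(2x) for large x.
panel["patent_count"]     = np.arcsinh(panel["patents_filed"])
panel["patent_citations"] = np.arcsinh(panel["citations_3yr"])
```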
Commercialization. Commercialization potential is approximated through two variables. Venture capital (VC) investment is a well-known signal of firm quality and growth potential (Megginson and Weiss, 1991; Stuart et al., 1999), and empirical evidence shows that startups that attract VC investment are likely to perform better than startups that do not receive VC investment (Hellmann and Puri, 2002; Hsu, 2006). Venture capital funding approximates a young firm's commercialization potential, and equals "1" in the year a firm receives venture capital after receiving the SBIR grant, and "0" otherwise. Another indicator of potential commercial success is when technical firms get acquired or have an initial public offering. IPO/Acquisition is coded "1" for the year in which the firm commercializes either through IPO or acquisition after receiving an SBIR grant, and "0" otherwise. In our sample window, 17.2% (113 out of 657 firms) received VC investment, and 9.7% (64 out of 657 firms) realized an IPO/acquisition.20

20 Recognizing that not all acquisitions are positive events, we ran models eliminating acquisitions and focusing purely on IPO events, and confirm our results hold with this specification.

Independent variables. The key independent variable is ARRA, an indicator variable that equals "1" for firms funded through ARRA, and "0" for firms supported by the regular budget. The empirical specification also controls for grant amount, industry, geographic location of the firm, and year effects.

[Insert Tables 1 and 2 about here]

Table 1 defines the measures and presents descriptive statistics for both the ARRA-funded and regular-funded firms, and Table 2 shows the correlation matrix. Compared to regular-funded firms, the average ARRA-funded firm has fewer prior patents (0.25 vs 0.43), received more grants prior to the focal grant (0.80 vs 0.61),21 and received a lower grant dollar amount (11.63 vs 11.95). The percentage of minority-owned businesses is marginally higher for regular-funded firms (0.04 vs 0.07), but the percentages of women-owned businesses are equal for both firm samples. Regular-funded firms perform better on every measure of innovation and commercialization. Whether these performance advantages are tied to selection capabilities is the focus of the multivariate analysis that follows.

21 The number of grants received before the focal grant proxies for "SBIR mills" - firms that continue to receive many awards and are more effective at getting a grant (Link and Scott, 2009). As an alternative, we used a dummy variable equal to one for firms that received at least three grants prior to the focal grant, and zero otherwise. The results are similar with both measures.

ANALYSIS

The analysis has four components. First, we look for evidence indicating whether regular-funded firms outperform ARRA-funded firms on four different dependent variables capturing innovation impact and commercial potential. If regular-funded firms outperform their ARRA counterparts, it would provide evidence that agencies prioritize and select appropriately. Second, we examine whether there might be selection preferences for "cutting-edge, high-risk" projects, because NSF has explicitly stated such a preference. Third, we attempt to ascertain observable firm attributes that could explain the difference in innovation outcomes, shedding light on some factors that NSF considers in their selection process. Finally, because our dependent variables are observed after selection, it is possible that effects we attribute to selection are actually tied to differences in how the two groups benefit after receiving the funds. We perform a difference-in-differences analysis before and after the grant was awarded to investigate the extent to which differences in innovation and commercialization are attributable to selection preferences of NSF, or whether the benefit of receiving the grant leads to a substantive difference between the two groups of firms, or both.
Innovation Impact and Commercial Potential of Regular versus ARRA Firms

Establishing whether regular-funded firms outperform ARRA-funded firms is the first step in our analysis. We expect this outcome because NSF has prioritized regular-funded firms. Any superiority of the regular-funded group on the innovation and commercialization measures would indicate NSF's preference to select higher quality projects. The empirical design compares performance outcomes of regular-funded firms and ARRA-funded firms over a five-year period after receiving the grant. The model we estimate is the following:

Yit = α + β1(ARRAi) + γ(Xit) + μt + εit    (1)

where Y is an outcome measure of firm i in year t; X is a vector of control variables including grant amount, a vector of location dummies, and a vector of industry dummies; μ includes year fixed effects; and ε is an error term. Note that additional controls potentially related to selection are not used in this estimation so as not to confound the identification of selection effects. To capture the effect of selection in its entirety, including observable and unobservable factors, the analysis is focused on the ARRA variable and does not include other firm-level controls. In subsequent models, we add additional variables to isolate which specific observable determinants influence selection.

[Insert Table 3 about here]

Model 1 is estimated in Table 3, albeit with different dependent variables and regression approaches across the columns. Patent Count and Patent Citations are estimated in columns 1 and 2 with a generalized estimating equation (GEE) regression method, appropriate for accounting for the auto-correlation that may arise because innovation measures are temporal in nature and each firm is measured repeatedly across multiple years (Liang et al., 1986).22 In columns 3 and 4, time-to-VC investment and time-to-IPO/acquisition models are estimated in the form of a parametric accelerated failure time (AFT) model, which addresses the right-censoring in the data. These are hazard (or duration) models written with log time as the dependent variable. Parametric AFT models require that we specify a distribution for log time: ln(T) = Xβ + z, and therefore T = e^(Xβ)·e^z. For example, if the exponentiated coefficient e^β = 1.10, then we say that a one-unit increase in X increases the survival time by 10%; hence the expected duration to reach the failure event increases by 10%. The advantage of the log-normal distribution is that it does not assume the hazard rate (the instantaneous probability of obtaining VC investment / exiting in the next instant given that a company has not obtained VC investment / exited so far) is either monotonic (e.g., Weibull) or constant (e.g., exponential). In the AFT model, positive (negative) coefficients indicate that the covariate increases (decreases) the survival time to the failure event, and hence, the expected duration. All models contain robust standard errors using the "Huber-Sandwich" estimator to account for any heteroscedasticity across firm years.23

22 The results are similar with the OLS specification (Appendix Table G1).
23 The results are also robust to the semi-parametric Cox model (Appendix Table G1). In the Cox model, a negative (positive) coefficient means the covariate decreases (increases) the hazard rate of achieving the failure event, and therefore increases (decreases) the expected duration. The analysis indicates that compared to regular-funded firms, ARRA-funded firms have a 60% decreased risk of receiving subsequent VC money, and an 89% decreased risk of achieving IPO/acquisition.

The results in Table 3 broadly confirm that ARRA-funded firms underperform regular-funded firms, indicating that NSF prioritizes ventures with the highest innovation and commercialization potential. The economic interpretation indicates that compared to regular-funded firms, ARRA-funded firms produce 19% fewer patents and have 12.9% fewer patent citations, their duration to receiving subsequent VC money is increased by 36%, and their duration to achieving IPO/acquisition is increased by 71%.
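To make the two estimators concrete, the sketch below fits a patent outcome by GEE and a time-to-VC outcome with a log-normal AFT model. It assumes statsmodels and lifelines as stand-ins for the authors' unnamed software, and all data and column names are simulated illustrations rather than the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from lifelines import LogNormalAFTFitter

rng = np.random.default_rng(1)
n_firms, n_years = 40, 5

# Simulated firm-year panel (illustrative column names).
panel = pd.DataFrame({
    "firm_id": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2009, 2014), n_firms),
    "arra": np.repeat(rng.integers(0, 2, n_firms), n_years),
    "grant_amount": np.repeat(rng.normal(11.8, 0.2, n_firms), n_years),
})
panel["patent_count"] = np.arcsinh(rng.poisson(1.0 - 0.3 * panel["arra"]))

# GEE (columns 1-2): firm-year observations grouped by firm with an
# exchangeable working correlation; standard errors are robust by default.
gee_res = smf.gee(
    "patent_count ~ arra + grant_amount + C(year)",
    groups="firm_id", data=panel,
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

# Log-normal AFT (columns 3-4): right-censored time to first VC round.
durations = pd.DataFrame({
    "years_to_vc": rng.uniform(1, 6, n_firms),
    "got_vc": rng.integers(0, 2, n_firms),   # 0 = right-censored
    "arra": rng.integers(0, 2, n_firms),
    "grant_amount": rng.normal(11.8, 0.2, n_firms),
})
aft = LogNormalAFTFitter().fit(
    durations, duration_col="years_to_vc", event_col="got_vc"
)
# exp(coef) > 1 on a covariate implies a longer expected duration to the
# event, matching the e^beta interpretation in the text.
```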
Examining Selection Effects for Risky Projects

Given that SBIR objectives include stimulating technological innovation and increasing private sector commercialization, it is important to also investigate whether an implementing agency's processes and capabilities prefer high impact innovation over incremental innovation. A preference for high impact innovation may be more risky because such innovation tends to involve more complexity and exploration, requiring an assimilation of knowledge from a broader array of knowledge domains (Jaffe et al., 1993; Hall et al., 2001, 2005). Consequently, the outcomes of efforts pursuing high impact innovation are far from certain. A preference for high impact innovations should yield greater variance in the outcomes of regular-funded firms compared to the outcomes of ARRA-funded firms. Indeed, NSF's implementation of SBIR specifically targets "high-risk, high-impact technologies – those that show promise but whose success hasn't yet been validated" (https://seedfund.nsf.gov/about/).

To ascertain whether NSF prioritizes funding risky technology ventures, riskiness is evaluated based on extreme outcomes in the citations of patents generated after the grant. One approach to measuring riskiness uses extreme outcomes in forward citations. Applicant firms are evaluated as having extreme outcomes if they have at least one patent receiving a 3-year forward citation rate in the upper (High Citation Patent) or lower (Low Citation Patent) ten percent of patent citations of all patents in the same patent class and application year in the entire USPTO database.24 The underlying logic is that firms with patents ultimately receiving the highest and lowest numbers of citations represent those seeking to make the most radical innovation impact. It is established that patents receiving high forward citations have the greatest impact (Griliches, 1990; Trajtenberg, 1990; Hall et al., 2001) due to their complex and radical nature. However, radical innovations are also more likely to fail, leading to few forward citations. In contrast, patents of incremental innovations have lower degrees of risk, which should more reliably produce forward citations, but at more moderate levels.

24 The results are robust to sensitivity analysis at 5% and 25% cutoffs.
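A minimal sketch of this tail-decile flagging rule, assuming a patent-level table with illustrative column names; the paper benchmarks each patent against the full USPTO class-by-year cohort, for which the toy table stands in here:

```python
import pandas as pd

# Hypothetical patent-level data; columns are illustrative.
uspto = pd.DataFrame({
    "firm_id": [1, 1, 2, 3],
    "patent_class": ["435", "435", "707", "435"],
    "app_year": [2009, 2009, 2009, 2009],
    "cites_3yr": [0, 40, 3, 5],
})

# Percentile rank of each patent's 3-year citations within its
# patent class x application year cohort.
pct = uspto.groupby(["patent_class", "app_year"])["cites_3yr"] \
           .transform(lambda s: s.rank(pct=True))

# A firm has an extreme outcome if any patent falls in a tail decile.
uspto["high_citation"] = pct >= 0.90
uspto["low_citation"] = pct <= 0.10
firm_flags = uspto.groupby("firm_id")[["high_citation", "low_citation"]].any()
```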
A second approach to measuring riskiness uses backward patent citations to appraise a patent's originality. In particular, a patent's originality is calculated as in Hall, Jaffe, and Trajtenberg (2005) - based on a Herfindahl index of the patent classes associated with the patents that a focal patent cites. By taking the inverse hyperbolic sine transformation of one minus the Herfindahl index, a higher value suggests that a patent builds on a broader set of technological areas. A firm-level value of Patent Originality is the mean of all patent originality values in each year. Firms with higher values should have a patent portfolio that is more risky.25

25 The results are robust to other alternative variations of the risk measures. For instance, we analyzed using a consolidated risk variable that represents the logical OR of the two tail-end risk variables - a dummy equal to 1 if the firm has a patent in either the left or right tail. We also considered an alternative measure using backward citations. The extent to which patents reference a broader array of technology classes outside their own domain is indicative of risky innovation moving beyond technology boundaries. Outside domain citation is the number of citations by firm i's patents that are different from the focal patent's class.
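The originality calculation can be sketched as follows; the helper function and inputs are hypothetical illustrations of the Hall, Jaffe, and Trajtenberg (2005) formula as described above:

```python
import numpy as np
import pandas as pd

def patent_originality(cited_classes):
    """Originality of one focal patent: IHS of (1 - Herfindahl index)
    over the classes of the patents it cites (backward citations)."""
    shares = pd.Series(cited_classes).value_counts(normalize=True)
    herfindahl = float((shares ** 2).sum())
    return np.arcsinh(1.0 - herfindahl)

# A patent citing four patents spread over three classes scores higher
# than one citing four patents in a single class.
print(patent_originality(["435", "707", "707", "438"]))  # diverse -> higher
print(patent_originality(["435", "435", "435", "435"]))  # concentrated -> 0.0
```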
[Insert Table 4 about here]

The measures of riskiness are employed as dependent variables in an estimation model similar to Equation (1). Table 4 presents the results of the models examining the different risk variables. The evidence suggests NSF targets ventures with riskier technologies. Compared to regular-funded firms, ARRA-funded firms are significantly less likely to generate highly cited patents by 2.2 percentage points, less likely to generate poorly cited patents by 4.4 percentage points, and generate patents with a 6.4 percent lower originality value. An unreported univariate, non-parametric test is also used to evaluate whether NSF prioritizes ventures with risky technology. It compares the dispersion (variance) in patent citations across ARRA-funded (0.32) and regular-funded (0.51) firms. One advantage of this test is its robustness to potential non-normality in the underlying distribution of the variables (Levene, 1960). It reveals a significant difference, with regular-funded firms having patents with significantly more dispersion. In summary, both tests suggest NSF targets firms with greater riskiness, as measured through patent citations.
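The dispersion comparison corresponds to a standard Levene test with median centering (the Brown-Forsythe variant). A minimal sketch with simulated citation vectors calibrated to the reported group sizes and variances:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
# Illustrative citation outcomes for the two groups; the variances
# (0.32 vs 0.51) and group sizes (143 vs 514) follow the text.
arra_citations = rng.normal(loc=0.8, scale=np.sqrt(0.32), size=143)
regular_citations = rng.normal(loc=1.0, scale=np.sqrt(0.51), size=514)

# Median centering makes the test robust to non-normality.
stat, pval = levene(arra_citations, regular_citations, center="median")
print(f"W = {stat:.2f}, p = {pval:.4f}")
```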
16 Ascertaining Correlates Explaining Selection Effects
17
18 Analysis presented above suggests NSF prioritizes projects having higher innovation and
19
20 commercialization potential, and projects representing riskier bets. In Table 5a, we re-estimate
21
22
23 equation (1) with a vector of firm control variables available to decision makers at the time of
24
25 selection: firm age, number of patents, number of grants received prior to focal grant, minority-
26
27 owned, and woman-owned. The aim of this exercise is twofold. First, it will illuminate potential
28
29
30
cues used by NSF to identify and select promising projects, to the extent that these firm-level
31
32 factors explain the differences in innovation impact between ARRA-funded and regular-funded
33
34 firms. The analysis suggests that important criteria for selection include number of patents and
35
36 number of prior SBIR grants at the time of application. The latter result confirms concerns
37
38
39 around applicants as “SBIR mills.”
40
41 A second objective of this analysis is to analyze whether the ARRA coefficient is
42
43 significant after the addition of observable venture traits. A significant ARRA coefficient would
44
45
46 suggest there may be unobservable selection capabilities unaccounted by observable firm traits.
47
48 Table 5a reveals that the ARRA coefficient continues to have a significant impact across the
49
50 models, although it appears smaller in magnitude and significance relative to Tables 3 and 4.
51
52
53
This is a strong indicator that NSF has unobservable selection capabilities.
54
55 [Insert Tables 5a and 5b about here]
In an attempt to better understand these unobservable selection capabilities, interviews were conducted with two Program Directors at NSF. Table 5b highlights several themes emanating from these interviews and hints at the sources of selection capability. The Program Directors emphasize four aspects of the process that may underlie a capability to prioritize stronger ventures. The first element is how NSF focuses on commercialization during the process, by hiring Program Directors with experience in entrepreneurship and/or venture investing, and by explicitly asking reviewers to evaluate commercial merit as depicted in applicants' commercial plans. The second element is an iterative due diligence process, in which applicants receive feedback through multiple exchanges, serving not only to improve submission quality but also to provide reviewers with more information. Another potential driver of selection capability is the diversity of technical and commercial expertise on review committees, including dynamic attempts to recruit external advisors with specific expertise pertinent to a commercial plan. A final important facet seems to be the flexibility in the process to reward ventures that appear to be exceptional along important dimensions.
Disentangling Selection from Performance Differences after Receiving the Grant

While the analysis is suggestive of NSF's selection capability, it is important to recognize alternative explanations for why performance subsequent to receiving a grant might be stronger for regular-funded firms than for ARRA-funded firms. It is possible that SBIR grants differentially benefit better firms, which merely means that treatment effects are larger for the better firms. Conversely, grants may have a higher marginal benefit for weaker firms, which might benefit asymmetrically from the government's endorsement or might have lower access to capital. To parse differences in treatment versus selection effects, a difference-in-differences (DiD) model (Abadie, 2005; Ashenfelter and Card, 1985) is used to yield separate estimates for selection and treatment effects; this approach is robust when two firms are not a perfect match in initial performance, provided their performance trends are parallel.
The starting point is creating a matched sample of ARRA-funded and regular-funded ventures. Based on prior work focusing on innovation impact (e.g., Hsu, 2006; Lerner, 1999; Pahnke et al., 2015; Wallsten, 2000), firms are matched on a broad range of applicant attributes at the time of grant submission: prior patents, prior grants, grant amount, minority-owned, women-owned, founding year, and geographical location. A coarsened exact matching (CEM) process yields a sample of 136 firms (68 matched pairs of regular-funded and ARRA-funded).26 The matching approach, detailed in Appendix Tables C1 and C2, achieves balance along the complete set of pretreatment variables.
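The coarsening logic behind CEM can be sketched as follows. This is a simplified, hand-rolled illustration with hypothetical dataset names and bin widths, not the CEM procedure of Iacus et al. (2012) actually used to build the matched sample; a one-to-one pairing within each retained stratum would follow this step.

/* Minimal sketch: coarsen continuous pre-treatment variables into strata and
   keep strata containing both ARRA-funded and regular-funded firms. */
data coarsened;
   set applicants;                               /* hypothetical applicant-level file */
   cohort_bin = floor(founding_year / 5);        /* five-year founding cohorts */
   patent_bin = min(prior_patents, 3);           /* 0, 1, 2, 3+ prior patents */
   grant_bin  = min(prior_grants, 3);
   amount_bin = round(log(grant_amount), 0.5);   /* half-log-unit grant bins */
   stratum = catx('|', cohort_bin, patent_bin, grant_bin, amount_bin,
                  minority_owned, women_owned, state);
run;

proc sql;
   create table matched as
   select *
   from coarsened
   group by stratum
   having max(arra) = 1 and min(arra) = 0;   /* strata with firms from both groups */
quit;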
This matched sample is used to assess differences in outcomes both before and after the grant (three years before and after) for both groups and to calculate the difference in these two numbers, i.e., the difference-in-differences across the two groups, using the following model:

Yit = α + β1(ARRAi) + β2(Post Grantt) + β3(ARRAi × Post Grantt) + γ(Xit) + μt + εit    (2)

where ARRA is equal to "1" for all years if firm i received the grant through ARRA, and "0" otherwise; Post Grant is an indicator variable equal to "1" if year t is after the grant was awarded, and "0" otherwise; and ARRA × Post Grant is the interaction term.27 The table below presents a 2x2 layout detailing the parameter estimates of the difference-in-differences model.

                     | Post Grant        | Pre Grant              | Difference
ARRA-Funded Firm     | α + β1 + β2 + β3  | α + β1                 | β2 + β3
Regular-Funded Firm  | α + β2            | α                      | β2
Difference           | β1 + β3           | β1 (selection effect)  | β3 (treatment effect)

26 To create the matched sample, we employed the CEM procedure (Iacus et al., 2012). For each ARRA-funded firm we attempted to find a regular-funded firm matching exactly on each criterion, yielding 68 ARRA-funded firms matched to 68 regular-funded firms. We plotted the trends for the CEM matched sample (Appendix Figure C1) to ensure the parallel trends assumption holds for the outcomes in the difference-in-differences estimation.
27 We ran models with separate time dummies for the three-year window, but they yield conclusions identical to the results presented in Table 6.
The coefficient β1 captures the difference between the two groups of firms prior to receiving the grant (the selection effect), while β3 is the difference-in-differences estimator that captures a differential treatment effect. In other words, β3 = (Yarra, post – Yarra, pre) – (Yreg, post – Yreg, pre).
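Estimation of equation (2) on the matched panel can be sketched in SAS as below, again with hypothetical names and an exchangeable working correlation as one plausible GEE choice; β1 is the coefficient on arra and β3 the coefficient on the interaction.

/* Minimal sketch: difference-in-differences on the matched firm-year panel. */
proc genmod data=did_panel;
   class firm_id year industry;
   model outcome_ihs = arra post_grant arra*post_grant
                       grant_amount year industry;
   repeated subject=firm_id / type=exch;   /* clusters = the matched firms */
run;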
[Insert Table 6 about here]
Table 6 presents the results of the DiD analysis, which helps diagnose the possibility of differential treatment effects. The differential treatment effect is illuminated in the ARRA x Post Grant coefficient. The fact that this coefficient lacks significance across all models suggests (Yarra, post – Yarra, pre) is equivalent to (Yreg, post – Yreg, pre), i.e., there is no difference in treatment effects across ARRA-funded and regular-funded ventures. It is noteworthy that this finding is somewhat consistent with Wallsten's (2000) analysis showing that firms receiving the grant do no better than firms not receiving the grant, after controlling for selection. The evidence we provide corroborates this work with different dependent variables, a more rigorous control for selection, and a different federal agency. Finally, the robust negative ARRA coefficient across all models confirms the earlier results showing that selection effects are driving the performance, and the economic effect is striking. When NSF prioritized regular-funded firms, those firms produced 13.0 percent more patents, with 11.6 percent more citations. There is also strong evidence that NSF seeks to fund high impact ventures: compared to ARRA-funded firms, regular-funded firms have a 4.6 percentage point higher probability of producing highly cited patents, a 4.0 percentage point higher probability of producing infrequently cited patents, and are 5.4 percent more likely to generate original patents.

Overall, the evidence suggests that the superior performance of regular-funded ventures is driven by selection effects.28 Moreover, ruling out differential treatment effects gives us added confidence in our earlier diagnosis that unobservable selection capabilities are important.

28 Note that we are unable to parse selection and treatment effects for the incidence of VC funding and IPO/acquisition because the dependent variable is always zero prior to selection.
Robustness Checks

The robustness of the results was tested in several additional ways. First, additional measures of venture performance were considered. Sales revenue was analyzed in the same way as Howell (2017) and is presented in Appendix Table D1. Another measure was the ability to obtain SBIR Phase II grants, since obtaining a second round of SBIR funding indicates progress towards commercialization. The analysis reveals that NSF prioritizes firms that yield higher revenues and that are more likely to obtain SBIR Phase II funding.

A second robustness test accounts for the relative timing with which firms received ARRA versus regular funds. If some regular-funded firms received the grant earlier in the fiscal year, a systematic bias may exist due to time-oriented differences contributing to differences in performance outcomes. While all results presented thus far include year fixed effects, these do not control for intra-year differences in timing across the two groups of ventures. Accordingly, we also ran the analysis giving ARRA-funded firms one extra year to assess post-grant performance outcomes. Even with this conservative approach, the analysis continues to show that ARRA-funded firms underperform regular-funded firms (Appendix Table E1). A second approach limited the sample to regular-funded firms with award dates falling within three months of the award dates of the ARRA-funded firms. This analysis also reveals that regular-funded firms outperform ARRA-funded firms on the four performance variables and undertake riskier innovations (Appendix Table E2).

A third robustness test pertains to the time lags in patent outcomes after receiving the grant. While our main model analyzes patent outcomes in the five years after receiving the grant, we conducted robustness analyses with different time-lag structures (t+2 to t+5) for patent count and citation outcomes after the firm receives the grant in year t; a sketch of the window construction appears below. The results (Appendix F) are similar to those reported in the main results, indicating that regular-funded firms outperform ARRA-funded firms.
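The lag windows can be constructed as follows, a minimal sketch with hypothetical dataset and variable names rather than the study's actual data preparation code.

/* Minimal sketch: building the t+2 to t+5 post-grant patent windows. */
data lag_windows;
   set firm_patents;                        /* one row per firm-year (hypothetical) */
   lag_from_grant = app_year - grant_year;  /* years since the focal grant */
run;

proc sql;
   create table post_counts as
   select firm_id,
          sum(patent_count * (2 <= lag_from_grant <= 5)) as patents_t2_t5,
          sum(cites        * (2 <= lag_from_grant <= 5)) as cites_t2_t5
   from lag_windows
   group by firm_id;
quit;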
Finally, because encouraging participation by small businesses founded by minority groups, as well as woman-owned businesses, is one of the overarching objectives of the SBIR grant program, we investigate whether these attributes factor into the selection preferences of NSF. The results show that minority-owned businesses have a higher likelihood of receiving the grant under the regular budget cycle than under ARRA (Appendix H). This indicates that NSF gives preference to minority-owned businesses in selecting grant recipients. We do not observe a selection preference for woman-owned firms.
DISCUSSION

This study confirms that NSF prioritizes innovative, commercially viable, and risky technology ventures within its SBIR applicant pool. It is not obvious that governments should be able to develop a selection capability for technology-oriented commercial ventures even if they have a desire to create social and economic value. The fact that no prior work has systematically investigated this issue raises the need for such scrutiny, especially given that the SBIR program represents an annual allocation of billions of dollars, and existing threats to dramatically diminish budgets will increase the importance of effective selection processes. The study also speaks more generally to whether governments around the world can effectively spur entrepreneurship and innovation. Our method of employing a quasi-natural experiment contributes more broadly to the examination of selection in government funding programs.

Our ability to parse selection effects of SBIR grants is novel in the literature and informs prior research that has controlled for selection but not investigated it in detail. The finding that ARRA-funded firms underperform regular-funded firms corroborates that NSF selects as if expecting diminishing marginal returns, awarding grants in a rational priority from strongest to weakest; any deviation from that priority would have signaled a poor selection process. The finding that the innovation of regular-funded firms is riskier than that of ARRA-funded firms reflects the emphasis NSF puts on prioritizing "cutting-edge, high-risk" projects. While some have argued that government agencies are inadequate at selecting the best commercial ventures (Lerner, 1999, 2009) and have highlighted potential biases in peer review selection processes (e.g., Lee et al., 2013; Boudreau et al., 2016), our findings suggest that current NSF selection policies are reasonable and effective in prioritizing ventures with greater technical and commercial merit, although prioritization based on other attributes, such as diversity and women ownership, appears low.
Evidence is also provided concerning the factors underlying NSF's selection capabilities. Clearly, NSF relies on observable venture characteristics as a basis for selection, but it seems the agency has also honed its process and its ability to discern venture quality beyond observable characteristics. Interviews with NSF Program Directors point to these important factors. Beyond evidence of selection capabilities, one interesting and unexpected outcome of our study revolves around who benefits more from receiving the subsidy: the marginal firm on the threshold of being denied the subsidy, or the stronger firm clearly exceeding this threshold. Indeed, one economic rationale for designing government subsidy programs is to support the innovation of the marginal firm, innovation that would not be possible without the grant money. We found no evidence of any difference in treatment effects for marginal versus stronger firms. Clearly, more work is needed to confirm the effectiveness of government intervention on the marginal firm.
Implications Beyond the SBIR Program

The fact that we observe a government agency having the capability to select the best ventures to fund has broad implications beyond the SBIR program. It certainly contrasts with a body of work suggesting that markets should be the exclusive source of venture funding because markets allocate resources more efficiently and governments do not have the processes or the capability to invest in entrepreneurship. Even among scholars who have had a positive outlook regarding governments funding entrepreneurship, our work enlightens in several ways. Our finding that governments can effectively invest directly in firms that will commercialize science differentiates us from Mazzucato (2013), who suggests that while governments can fund promising science, they generally rely on others to invest in building economic value. While Lin (2012) argues that governments should play an important complementary role with markets in encouraging entrepreneurship by upgrading industrial structure and infrastructure consistent with a country's comparative advantages, our findings suggest governments can go beyond existing comparative advantages and develop new ones by engaging in riskier endeavors that push the boundaries of current technology. While Armanios et al. (2019) emphasize that governments are increasingly asked to experiment to harness a region's high growth potential, we confirm they can generate capabilities to successfully implement experimentation through entrepreneurial investment. And while Mazzucato (2013) argues governments should pursue entrepreneurial investment as custodians of a region's long-term interests, and not cede it to private investors overly focused on short-term gains, we empirically demonstrate that governments are actually capable of identifying and funding ventures with the most promising long-term outcomes.
Our work also illuminates some less observable capabilities underlying entrepreneurial selection, such as a focus on commercialization, an iterative due diligence process, a diverse evaluation committee, and a flexible selection process. While prior literature has devoted considerable attention to firm and managerial capabilities, little work has focused on the capabilities underlying selection processes in government agencies. Certainly, one implication of our work is that these capabilities are consequential and merit more attention, particularly given the tremendous government resources devoted to spurring entrepreneurial activity. Other funding agencies might adopt similar due diligence and selection processes aligned with the objective of identifying projects with commercial merit, and in so doing alleviate concerns around adverse selection: consider, for instance, the likes of Solyndra, Abound Solar, Ener1, and Fisker, Department of Energy-supported ventures that failed to commercialize and declared bankruptcy.
Our empirical findings invite management theorists who evaluate selection capabilities and biases to extend theories dealing with the special circumstances of government interventions. There are at least two dimensions where theorizing can be fruitfully developed. First, a better understanding of selection processes is needed: how organizations rationalize them, what factors take priority, what biases may hinder them, and the mechanisms driving heterogeneity in selection capabilities. For instance, analyzing variance in selection capabilities, Capron and Mitchell (2009) show that firms that are better at selecting the appropriate modes of sourcing new capabilities tend to survive longer. One topic we did not cover is whether prior private sector experience improves the effectiveness of government officials and the agencies in which they work. Evans and colleagues suggest it does not, and that it is important to keep government officials' activities separate and distinct from the private sector (Evans, 1996; Evans and Rauch, 1999). Alternatively, research studying Ireland suggests such experience is beneficial because it seems to improve the Irish government's ability to identify and cultivate local businesses (Ó Riain, 2000a, b). Our interviews with NSF Program Directors reveal that some grant reviewers possess extensive entrepreneurial and investment experience. Certainly, future work should ascertain whether the success we observe in picking strong ventures is tied to private sector experience and, more generally, understand which capabilities matter for effective selection and which aspects of government grant programs may undermine the selection process. Theory is also needed to discern whether better or marginal firms benefit more from government intervention, and how grant programs can be designed to identify the firms that would benefit most from grant support.
25
26 There are limitations to our study worth highlighting. For example, we could have
27
28
29 widened our scope to consider performance metrics beyond patent citations, patent rates,
30
31 survival, attainment of venture capital, and SBIR Phase II awards. For example, subsequent work
32
33 might consider the attainment of crowdfunding or angel investment. Moreover, there are likely to
34
35
36
be indirect benefits to commercialization that our study does not capture. For instance, Li et al.
37
38 (2017) documented indirect effects of public research investment and subsequent patenting. Our
39
40 focus on patent citations captures some of the indirect benefits to commercialization. To the
41
42
extent that we do not capture the full impact of these indirect benefits of SBIR grants, our study
43
44
45 may underreport the full spectrum of selection effects. Important work remains around
46
47 discerning the quality of selection around objectives alternative to the technical and commercial
48
49 impact of the awardee. As noted at the outset, SBIR also prioritizes participation by women and
50
51
52 socially or economically disadvantaged applicants. Supplementary analysis suggests that priority
53
54 is given to minority-owned applicants, but we find no evidence supporting a priority for women.
Subsequent work should also investigate and compare the selection ability of the ten other agencies beyond NSF. Unfortunately, our quasi-natural experiment was constrained because ARRA funds were used only for NIH and NSF, and NIH funded very few ventures with such funds. We believe our focus on one NSF solicitation helps to resolve any unobserved heterogeneity tied to cross-agency differences or differences across solicitations within an agency. However, we acknowledge that the fact that NSF had two due dates within a single solicitation may raise concerns that differences in quality across due dates influence our results. Given our focus on NSF, it is particularly important to acknowledge that there is a great deal of industry and firm-level heterogeneity surrounding the pursuit of intellectual property. Some funded firms may choose to forgo patenting yet still conduct high quality research. Patent acquisition is endogenous to the firm's decision to pursue a patenting strategy; thus, causality claims linking patents to innovation should be viewed conservatively. We attempt to alleviate this concern by analyzing multiple alternative performance metrics that reflect firms' growth potential, like securing venture capital funding and realizing commercialization. We use "realized" innovation and performance outcomes to infer the underlying risk profile of the projects. This approach admittedly ignores the inherent uncertainty and possible idiosyncrasies of the R&D process implemented by technology startups. Nonetheless, to the extent that the noted uncertainties are equally characteristic of regular-funded and ARRA-funded firms, our results reasonably demonstrate the comparative differences in the underlying risk profiles.
Our sample is conditional on the firm being selected for either regular funding or ARRA funding. An inability to observe the full set of firms at risk of being selected hinders a complete analysis of treatment effects. Ideally, if we had information on the firms in the grant solicitation pool that did not receive the grant, we could conduct additional analysis to econometrically control for selection using a Heckman model, an instrumental variable approach, or a regression discontinuity design, and ascertain average treatment effects, similar to the approach used in Howell (2017). Moreover, there are many proposals that surpassed the threshold for scientific merit but were not selected into the set of ARRA-funded firms. Thus, the profile of firms in the ARRA comparison group is not likely representative of the underlying distribution of grant proposals, so the magnitude of the selection effects ascertained in our analysis is conservative in nature.
It is important to emphasize that while the ARRA event occurred after the economic recession of 2007-08, this does not limit the generalizability of our findings to recessionary periods; rather, it shows the results are resilient in the midst of economic shocks. The ARRA event is merely used as an exogenous shock that enables us to ascertain the selection preferences of NSF in granting SBIR awards. We also acknowledge that NSF selection processes may have changed subsequent to our sample period. Indeed, our interviews with NSF Program Directors suggest that the agency has changed the evaluation process in important ways, including placing more emphasis on the commercial potential of the venture and targeting ventures that have not received prior SBIR funding. We contend that our study and studies like ours may play an important role in such policy changes.
Finally, unlike other agencies such as the Department of Energy, NSF does not assign exact scores to applications, and there is no precise cutoff or ranking system in the selection process. A discrete ranking system would have enabled us to ascertain agency selection preferences more precisely by comparing the firms most proximal to the score cutoff: those selected just above the cutoff in the regular budget cycle against those just below the cutoff but funded later under ARRA. Nonetheless, the comparative analysis using the innovation and performance parameters we employ fairly represents the objectives of the SBIR grant program in determining selection preferences. The lack of data on rejected proposals and limited information on the review process have challenged scholars seeking to determine the true value of the grant program, as demonstrated by the mixed results in the literature. Anecdotal studies reporting high commercialization rates of SBIR firms do not articulate whether the grant was responsible for the change in a firm's performance or whether better performing firms received grants. The selection effects we report in this study reflect NSF's desire to pick winners and take us a step closer to ascertaining the marginal economic benefits federal grants may provide to a firm that might not otherwise have pursued the innovation.
FIGURE 1
NSF Regular-funded and ARRA-funded Timeline

[Figure: timeline graphic not reproduced in this extraction.]

Note: For the ARRA-funded projects, NSF considered proposals declined on or after October 1, 2008. The reversal of the decision to decline must be based on both the high quality of the reviews received on the initial submission and the lack of available funding at the time the original decision was made.
TABLE 1
Descriptive Statistics and Variable Definitions

VARIABLE | DEFINITION | Regular | ARRA
Dependent variables
Patent Count | No. of patent applications filed (and subsequently granted) by firm i in year t. Variable is inverse hyperbolic sine transformed. | 0.27 | 0.16***
Patent Citations | Patent citations to firm i's patents in year t of the patent application and three subsequent years. Variable is inverse hyperbolic sine transformed. | 0.14 | 0.05***
Venture Capital | Dummy = 1 only for the year in which a firm received the first VC investment. | 0.03 | 0.01**
IPO/Acquisition | Dummy = 1 only for the year t in which a firm commercialized through an IPO or M&A. | 0.01 | 0.003**
High Citation Patent | Dummy = 1 if at least one patent filed by firm i in year t has a citation rate that lies in the top 10 percentile of the citation distribution of all patents in the same patent class and application year. | 0.03 | 0.01*
Low Citation Patent | Dummy = 1 if all patents filed by firm i in year t have citation rates that lie in the bottom 10 percentile of the citation distribution of all patents in the same patent class and application year. | 0.10 | 0.08*
Patent Originality | 1 - Herfindahl concentration of patent class assignments associated with patents cited by the focal patent, adjusted for the bias associated with small numbers of backward citations. Variable is inverse hyperbolic sine transformed. | 0.10 | 0.06***
Control variables
Firm Age | Age in years of firm i as of year t. Variable is log transformed. | 1.85 | 1.84
Prior Patents§ | No. of patents filed by firm i before receiving the focal grant. Variable is log transformed. | 0.43 | 0.25**
Prior Grants§ | No. of grants received by firm i before receiving the focal grant. Variable is log transformed. | 0.61 | 0.80*
Minority-owned§ | Dummy = 1 if firm i founder(s) belong to a minority/economically disadvantaged group. | 0.07 | 0.04†
Women-owned§ | Dummy = 1 if firm i founder(s) is female. | 0.06 | 0.07
Grant Amount§ | Amount of grant received by firm i. Variable is log transformed. | 11.89 | 11.59***
No. of firms | | 514 | 143

Note: Firm-year unit of analysis. § denotes statistics at the firm level. ***, **, *, † indicates the difference is significant at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
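Two of the variable definitions in Table 1 can be written out explicitly. The inverse hyperbolic sine (IHS) transform is standard; for originality, the exact form of the small-sample bias correction is not spelled out in the table, so the n/(n-1) scaling shown here is one common adjustment and should be read as an assumption about its form:

% IHS transform applied to the count outcomes:
\operatorname{asinh}(x) = \ln\!\left(x + \sqrt{x^{2} + 1}\right)

% Originality of a focal patent, with the assumed small-sample correction:
\text{Originality} = \frac{n}{n-1}\left(1 - \sum_{j} s_{j}^{2}\right)

where n is the number of backward citations made by the focal patent and s_j is the share of those citations falling in patent class j.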
TABLE 2
Correlation Matrix

Variables | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
1 Patent Count | 1
2 Patent Citations | 0.56 | 1
3 Venture Capital | 0.10 | 0.09 | 1
4 IPO/Acquisition | 0.002 | 0.03 | -0.01 | 1
5 High Citation Patent | 0.48 | 0.48 | 0.04 | -0.01 | 1
6 Low Citation Patent | 0.51 | -0.04 | 0.05 | 0.01 | -0.05 | 1
7 Patent Originality | 0.74 | 0.33 | 0.09 | -0.01 | 0.36 | 0.67 | 1
8 ARRA | -0.08 | -0.07 | -0.04 | -0.04 | -0.02 | -0.03 | -0.05 | 1
9 Grant Amount | -0.05 | -0.03 | -0.04 | -0.01 | -0.03 | -0.02 | -0.05 | -0.35 | 1
10 Firm Age | 0.05 | -0.02 | -0.08 | 0.02 | 0.01 | 0.04 | 0.03 | -0.01 | 0.02 | 1
11 Prior Patents | 0.43 | 0.19 | 0.01 | 0.03 | 0.20 | 0.22 | 0.36 | -0.09 | -0.02 | 0.35 | 1
12 Prior Grants | 0.10 | 0.01 | -0.04 | -0.01 | -0.01 | 0.10 | 0.10 | 0.09 | -0.04 | 0.40 | 0.36 | 1
13 Minority-owned | 0.04 | 0.03 | -0.02 | 0.00 | 0.03 | 0.01 | 0.02 | -0.06 | 0.04 | 0.08 | 0.05 | 0.17 | 1
14 Women-owned | -0.02 | -0.02 | 0.01 | 0.01 | -0.02 | 0.01 | 0.004 | 0.01 | 0.00 | 0.07 | -0.03 | 0.08 | 0.30 | 1

Note: Variables 1-7 are dependent variables. Absolute values greater than or equal to 0.03 are significant at p < 0.05.
TABLE 3
Multivariate Analysis Comparing Regular-funded and ARRA-funded Firms

Estimation model | GEE | GEE | Accelerated Time-to-Failure | Accelerated Time-to-Failure
 | Patent Count (1) | Patent Citations (2) | Venture Capital (3) | IPO/Acquisition (4)
ARRA | -0.190*** (0.046) | -0.129*** (0.030) | 0.307* (0.135) | 0.536* (0.270)
Grant Amount | -0.259** (0.096) | -0.132* (0.068) | -0.152 (0.297) | 0.157 (0.473)
Constant | -0.184 (0.238) | 0.274 (0.254) | -0.953 (0.740) | 5.176 (1.713)
Location effects | Yes | Yes | Yes | Yes
Industry effects | Yes | Yes | Yes | Yes
Year effects | Yes | Yes | Yes | Yes
# observations (firms) | 3942 (657) | 3942 (657) | 2940 (657) | 3197 (657)
Wald Chi-square | 167.38 | 170.10 | 630.98 | 109.57

Note: Firm-year level of analysis. The dependent variables in cols. (1)-(2) are inverse hyperbolic sine transformations of patent count and patent citations. Cols. (3)-(4) report the accelerated time-to-failure analysis of time-to-VC investment (hazard type: Venture Capital) and time-to-IPO/Acquisition (hazard type: IPO/Acquisition). Positive (negative) coefficients indicate that the covariate increases (decreases) the duration to receive VC investment or to reach IPO/acquisition. The analyses are right censored. The main independent variable ARRA is a dummy variable equal to one for firms funded through ARRA, and zero otherwise. Heteroscedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicates significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
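The time-to-failure specifications in cols. (3)-(4) can be sketched in SAS as below. This is an illustration with hypothetical names; the note to Table 3 does not state the assumed survival-time distribution, so the Weibull here is an assumption.

/* Minimal sketch: accelerated failure-time model of time from grant to first
   VC investment, right-censored when no VC is observed. Names hypothetical. */
proc lifereg data=duration;
   class state industry;
   model time_to_vc*censored(1) = arra grant_amount state industry
         / dist=weibull;    /* distributional assumption, not from the paper */
run;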
TABLE 4
Multivariate Analysis on Innovation Risk

Estimation model | GEE Probit Marginal Effects | GEE Probit Marginal Effects | GEE
 | High Citation Patent (1) | Low Citation Patent (2) | Patent Originality (3)
ARRA | -0.022** (0.008) | -0.044*** (0.013) | -0.064*** (0.016)
Grant Amount | -0.035† (0.020) | -0.060† (0.032) | -0.089** (0.033)
Constant | | | -0.007 (0.083)
Location effects | Yes | Yes | Yes
Industry effects | Yes | Yes | Yes
Year effects | Yes | Yes | Yes
# observations (firms) | 3942 (657) | 3942 (657) | 3942 (657)
Wald Chi-square | 68.23 | 78.43 | 150.77

Note: Firm-year level of analysis. The table reports analyses of generating tail outcomes in innovation: high citation patent (dummy variable equal to 1 if firm i in year t generated a patent with citation rates in the top ten percentile of the citation distribution, col. 1), low citation patent (dummy variable equal to 1 if firm i in year t generated patents that lie in the bottom ten percentile of the citation distribution, col. 2), and patent originality (inverse hyperbolic sine transformed, col. 3). The main independent variable ARRA is a dummy variable equal to one for firms funded through ARRA, and zero otherwise. Heteroscedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicates significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
TABLE 5a
Correlates Explaining Key Performance Outcomes

Estimation model | GEE | GEE | Accelerated Time-to-Failure | Accelerated Time-to-Failure | GEE Probit Marginal Effects | GEE Probit Marginal Effects | GEE
 | Patent Count (1) | Patent Citations (2) | Venture Capital (3) | IPO/Acquisition (4) | High Citation Patent (5) | Low Citation Patent (6) | Patent Originality (7)
ARRA | -0.084† (0.047) | -0.079* (0.033) | 0.274* (0.127) | 0.449† (0.255) | -0.016† (0.009) | -0.027† (0.014) | -0.029† (0.015)
Firm Age | 0.015 (0.026) | 0.031 (0.023) | 0.603*** (0.134) | 0.172 (0.202) | -0.008 (0.008) | -0.013 (0.014) | -0.011 (0.010)
Prior Patents | 0.321*** (0.035) | 0.122*** (0.017) | -0.177** (0.069) | -0.190** (0.077) | 0.027*** (0.005) | 0.057*** (0.008) | 0.112*** (0.010)
Prior Grants | -0.036† (0.022) | -0.040** (0.014) | 0.018 (0.063) | 0.040 (0.088) | -0.011* (0.005) | 0.010 (0.007) | -0.005 (0.008)
Grant Amount | -0.141 (0.098) | -0.080 (0.071) | -0.077 (0.285) | 0.093 (0.476) | -0.031 (0.025) | -0.036 (0.033) | -0.050 (0.032)
Minority-owned | 0.061 (0.078) | 0.077 (0.069) | 0.325 (0.236) | 0.132 (0.239) | 0.024 (0.016) | -0.016 (0.020) | -0.004 (0.023)
Women-owned | 0.001 (0.051) | -0.043 (0.032) | -0.317† (0.166) | -0.387† (0.210) | -0.016 (0.016) | 0.035 (0.025) | 0.025 (0.024)
Constant | -0.076 (0.239) | 0.308 (0.257) | -1.486* (0.692) | 4.650** (1.575) | | | 0.048 (0.078)
Location effects | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Industry effects | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Year effects | Yes | Yes | Yes | Yes | Yes | Yes | Yes
# observations (firms) | 3942 (657) | 3942 (657) | 2940 (657) | 3197 (657) | 3942 (657) | 3942 (657) | 3942 (657)
Wald Chi-square | 284.78 | 199.09 | 635.95 | 127.09 | 96.96 | 174.20 | 275.38

Note: Firm-year level of analysis. The main independent variable ARRA is a dummy variable equal to one for firms funded through ARRA, and zero otherwise. Heteroscedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicates significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
TABLE 5b
Themes Reflecting Selection Capability

Focus on Commercialization
Selection attributes: Program Directors or external commercial reviewers explicitly evaluate commercial merit. Commercialization is a necessary criterion, with commercialization merit and intellectual merit given equal weight. A commercialization plan (business plan) is required in the grant proposal.
Remarks/Notes: "Program Directors have been entrepreneurs and investors, … use a lot of their personal experience and knowledge … to evaluate commercial potential." "Commercialization plan exploring who the competitors are, company's competitive advantage, and what the regulatory factors are. We feel these things are important to assess … influences the type of work they (company) may have to do and the needs of the customers … to be able to continue to grow beyond the grant money."

Iterative Due Diligence Process
Selection attributes: In-person meetings and email communications back and forth with applicants, seeking clarification and discovering information beyond the proposal. Detailed feedback is provided to all applicants, and applicants can resubmit applications multiple times after addressing the feedback.
Remarks/Notes: "Reviewers have an in-person meeting, and other times conduct the full review via email. Program Directors provide advice to applicants throughout the review process." "Sometimes additional diligence is conducted … discussion with a panel of experts … take their feedback into account." "We get several clarifications from the company. If we discover something new during the review process, we go back to the company for clarification."

Diversity of Review Committee Expertise
Selection attributes: The reviewer panel includes both technical and commercial experts. The review committee includes entrepreneurs, investors, consultants, and regulatory experts.
Remarks/Notes: "We have people brought in to give us specific feedback (consultants, entrepreneurs, and people familiar with regulatory tasks)." "Program Directors have extensive commercial experience. Some external commercial reviewers are brought in as well."

Subjectivity in the Proposal Assessment
Selection attributes: Proposals are not scored or ranked on any scale; each reviewer evaluation is categorized as excellent/highly competitive or poor/not competitive. Reviewers ascertain multiple facets: innovativeness; technical and commercial feasibility; management team knowledge, expertise, and involvement; the applicants' understanding of the risks; and the resources needed to achieve successful fruition. Project selection is not mapped to individual agency objectives; rather, a "random gut feel scale" approach is adopted to ensure agency objectives are balanced and met at large.
Remarks/Notes: "There is no hard cutoff (score). There is a three-tier system: highly competitive for funding, competitive, not competitive." "We look at whether the company is innovative … understands what it takes to … get the work done. We assess … is it too ambitious; do they understand what they are trying to do; how will they understand if the project is successful; how will they know if their prototype is successful." "They (reviewers) look at who is running the company to see if they are fully engaged … also look at the personal risk the founders are taking on, and what will happen to the founders if they fail. They use similar criteria to what startup investors look for." "There is no top down prioritization of the goals … we use a 'random gut feel scale' rather than parsing out which goals each proposal achieves. At large, we think about meeting the congressional SBIR criteria, but we don't try to take a particular goal and choose a proposal just to satisfy one goal."

Note: Based on phone interviews conducted with two Program Directors at NSF.
TABLE 6
Difference-in-differences Analysis on Innovation Outcome

Estimation model | GEE | GEE | Probit Marginal Effects | Probit Marginal Effects | GEE
 | Patent Count (1) | Patent Citations (2) | High Citation Patent (3) | Low Citation Patent (4) | Patent Originality (5)
ARRA | -0.130*** (0.034) | -0.116*** (0.035) | -0.046** (0.016) | -0.041* (0.020) | -0.054*** (0.014)
Post grant | 0.010 (0.046) | 0.035 (0.052) | 0.017 (0.018) | 0.017 (0.019) | 0.014 (0.025)
Post grant x ARRA | -0.014 (0.042) | -0.059 (0.060) | -0.018 (0.027) | -0.004 (0.026) | -0.010 (0.021)
Grant Amount | -0.293** (0.112) | -0.252*** (0.078) | -0.056 (0.045) | -0.073 (0.052) | -0.064† (0.038)
Constant | -0.671** (0.253) | -0.540** (0.184) | | | -0.143† (0.086)
Industry effects | Yes | Yes | Yes | Yes | Yes
Year effects | Yes | Yes | Yes | Yes | Yes
# observations (firms) | 1211 (136) | 1211 (136) | 1211 (136) | 1211 (136) | 1211 (136)
Wald Chi-square | 71.60 | 46.04 | 106.76 | 34.32 | 70.49

Note: CEM sample of 136 firms (68 matched pairs of regular-funded and ARRA-funded). Firm-year level of analysis. The analysis includes observations three years before and after the grant. The main independent variables: ARRA is a dummy variable equal to 1 for all firm-years if the firm is funded through ARRA, and 0 otherwise; Post grant is a dummy variable equal to one for years after the grant, and zero otherwise. Heteroscedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. All models include unreported year, industry, and location effects. ***, **, *, † indicates significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
Supplementary Materials for

DOES GOVERNMENT FUND THE BEST ENTREPRENEURIAL VENTURES?: THE CASE OF THE SMALL BUSINESS INNOVATION RESEARCH PROGRAM

This file includes:
Appendices A to H
APPENDIX A
SBIR Grant Review Process at the National Science Foundation (NSF)

When an application is submitted to Phase I of the National Science Foundation (NSF) SBIR grant program, it is typically placed, following an initial administrative review, into a group of 4-18 similar proposals called a panel.29 Program Directors assign three to ten experts in the proposed topic area to a panel, with a minimum of three reviewers assigned to confidentially review each proposal. These reviewers are primarily technical experts in the respective research field or proposed markets, while some reviewers have a mix of commercial and technical expertise.30 Once the assigned reviewers provide feedback, all external reviewers meet in person or engage in virtual conversations to discuss the proposals in the given panel. If the application is competitive for funding, the Program Director may conduct additional diligence by asking follow-up questions of the applicant via email.

SBIR Program Directors, assigned technical reviewers, and any additional commercial reviewers asked to sit on the Phase I panel help to evaluate the technical and commercial merit of each proposal. Reviews are based on two basic criteria set by the National Science Board (intellectual merit and the broader impacts of the proposed effort), consistent across all NSF grant reviews, and one additional criterion in accordance with the SBIR mission (technical and commercial merit). Proposals are evaluated individually, and no exact score is assigned to each proposal. There is no score-based cutoff within each panel, and it is possible that all proposals within a given panel of applications could be funded. The Program Director has significant discretion in terms of which projects get funded, and his or her opinion may not always be aligned with that of the expert reviewers on the panel. This process stands in contrast to the SBIR selection processes of several other agencies, which rank applications and fund only the proposals with scores that fall above a certain threshold.

NSF's SBIR program goal is to encourage firms to conduct cutting-edge, high-risk, high-quality research and derive social and economic benefit from scientific discovery through private-sector commercialization. In order to select projects that advance this program goal, NSF reviews projects in accordance with the selection criteria detailed in Table A1. All NSF proposals are evaluated for intellectual merit and broader impact, with SBIR proposals additionally evaluated for commercial impact. A sample NSF SBIR Phase I commercial reviewer form is detailed in Figure A1. While NSF keeps the agency priorities in mind, there is no top-down prioritization of goals when evaluating individual applications. Rather, the agency relies heavily on the personal experience of Program Directors and gut instinct to ensure that each batch of proposals selected for funding is aligned with program and agency objectives at large.31

29 https://www.nsf.gov/pubs/2017/nsf17071/nsf17071.jsp#q44
30 https://seedfund.nsf.gov/resources/review/
31 Interview with NSF Senior Program Director. Personal phone interview. August 5, 2019.
Table A1: SBIR–NSF Evaluation Criteria

Criteria | Criteria Category | Criteria Characteristics
Intellectual Merit of Proposed Activity | All NSF Proposals | Technical and commercial feasibility, unique/ingenious concepts or applications, qualifications of technical team, access to resources, likelihood of state-of-the-art advancements
Broader Impact of Proposed Activity | All NSF Proposals | Potential commercial and societal benefits, propensity to lead to a marketable product/process, balance of team's technical and business skills, prior success at commercializing technology with or without SBIR support, competitive advantage of technology, innovation potential, ability to attract further funding
High Degree of Technical Risk | Specific to SBIR | Never previously successfully attempted, still facing technical hurdles
Significant Commercial Impact/Societal Benefit | Specific to SBIR | Potential to disrupt target market segment, strong product-market fit, barriers to entry, potential for societal benefit
Figure A1: NSF SBIR Phase I Commercial Review Form

USE THIS FORM ONLY FOR PHASE I COMMERCIAL REVIEWS.

Please comment on the four areas specified. Note, these are Phase I proposals and will not have the same amount of content on the commercial potential typically found in a competitive Phase II proposal.

Key Points Regarding the PHASE I proposal requirements:
Inclusion of letters of support in the proposal from appropriate stakeholders (strategic partners, customers and/or investors) is optional.

Market Opportunity
Has the company succinctly described the market opportunity?
Have they demonstrated an understanding of a typical customer profile?
Has the company described the product or service and the "customer needs" which are being addressed?
Can you tell where they are in the development cycle?
Does there appear to be an adequate market opportunity to justify a Phase I feasibility effort?

Company/Team
Is this a seed, early stage or expanding company?
How well is the team poised to take this innovation to market?
Have they taken similar products to market previously?
Do they have additional outside advisors, mentors, partners and stakeholders?
Is the corporate structure consistent with the company's stage and vision?

Product/Technology and Competition
Has the company described the features of their product or service that are going to provide a compelling value proposition to the customer?
What validation is there from the market about the proposed value proposition?
Does the company demonstrate knowledge of the competitive landscape?
How is this company going to compete: price, performance or other?
Does the company appear to understand issues regarding IP?
Is there adequate evidence that the company knows its position in the IP landscape?
Does there appear to be a management plan for handling IP issues as they arise?

Revenue and Finance Plan
Does the company demonstrate adequate knowledge for the level of financial resources it will take to get the proposed innovation to market?
Does there appear to be a plan to bring reasonable resources to bear to get this proposed innovation to market?
Table A2: SBIR–NSF Review Timeline

Time After Application Deadline | Characteristics of Time Period
2-4 Months | Program Director conducts due diligence; applicants may be contacted with questions
4-6 Months | Funding decisions and written reviews for full merit review process proposals sent to applicants
6-7 Months | Phase I awards initiated

If applications do not pass the initial administrative review, they do not move on to the panel review stage and are returned to the applicant without review. Two to four months after the application deadline, Program Directors conduct due diligence on proposals and may contact applicants with any questions. Four to six months after the application deadline, official funding notifications are sent to applicants. Applicants whose proposals went through the entirety of the merit review process are notified of the decision and receive anonymous written reviews detailing how the proposal could be improved. These applicants can contact the Program Director with any questions that arise concerning their review in order to improve their application for the next funding cycle. Six to seven months following the application deadline, Phase I awards begin to be disbursed to the winning applicants. A summary of the timeline is detailed in Table A2.
APPENDIX B
Company Name Matching between Two Datasets

The company name matching approach we employed has similarities to the modern record linkage and disambiguation techniques adopted by researchers to link large datasets like the USPTO patent data. For instance, Li et al. (2014) and Ventura et al. (2015) disambiguate inventor names in the USPTO patent data using an iterative disambiguation process to uniquely identify inventors on their respective patents. Unlike the semi-supervised machine learning approach in Li et al. (2014), we use a name matching algorithm employing a "rule and threshold" method of exact string matching on company names (e.g., first name, second name, third name, fourth name, and their combinations) and if-else decision-making to determine matching pairs of companies across the two data sources, an approach with precedent in the literature (e.g., Singh, 2005; Fleming et al., 2007). An exact name matching algorithm was also implemented in Ventura et al. (2015). A simple name matching algorithm can be more accurate than other algorithms that utilize more information (Milojevic, 2013), and exact matching of company names reduces the likelihood of false positive errors. Given that our dataset is relatively small (N = 657), we also manually compared the unmatched company names to reduce the likelihood of false negatives.

The SAS code below provides a general template for matching company names across two data sets (Data_1 and Data_2). The per-dataset cleaning steps, which the draft repeated verbatim for each file, are consolidated into a macro; two corrections are flagged in comments.

PROC IMPORT OUT= WORK.DATA_1
    DATAFILE= "D:\Documents\DATA_1.csv"
    DBMS=CSV REPLACE;
    GETNAMES=YES;
    DATAROW=2;
    guessingrows=32767;
RUN;

PROC IMPORT OUT= WORK.DATA_2
    DATAFILE= "D:\Documents\DATA_2.csv"
    DBMS=CSV REPLACE;
    GETNAMES=YES;
    DATAROW=2;
    guessingrows=32767;
RUN;

/* Assign a sequential company identifier in each dataset */
DATA DATA_1;
    SET DATA_1;
    COUNT=_n_;
    RENAME COUNT=COMPANYID;
RUN;

DATA DATA_2;
    SET DATA_2;
    COUNT=_n_;
    RENAME COUNT=COMPANYID;
RUN;

PROC SQL;
    CREATE TABLE UnikCompany_1 AS SELECT DISTINCT COMPANYID, Company FROM DATA_1
    ORDER BY COMPANYID;
QUIT;

PROC SQL;
    CREATE TABLE UnikCompany_2 AS SELECT DISTINCT COMPANYID, Company FROM DATA_2
    ORDER BY COMPANYID;
QUIT;

%MACRO CLEANNAMES(ds);
    DATA &ds;
        SET &ds;
        /* Standardize hyphens and ampersands in the raw company name */
        org = transtrn(Company, '-', ' ');
        org = transtrn(org, '&', 'and');
        /* Strip punctuation and expand the "mfg" abbreviation */
        Co_compress2 = compress(org, "''-()=?@+;”{}–,.!„#'''/‚");
        Co_compress2 = transtrn(Co_compress2, 'mfg', 'manufacturing');
        Co_compress2 = transtrn(Co_compress2, 'Mfg', 'manufacturing');
        /* First four words of the cleaned name and their combinations */
        Companyname_trunc = lowcase(strip(scan(org, 1, ',.-')));
        Co_firstword  = lowcase(strip(scan(Co_compress2, 1, ' ')));
        Co_secondword = lowcase(strip(scan(Co_compress2, 2, ' ')));
        Co_thirdword  = lowcase(strip(scan(Co_compress2, 3, ' ')));
        Co_fourthword = lowcase(strip(scan(Co_compress2, 4, ' ')));
        Co_firstsecondword          = catx(' ', Co_firstword, Co_secondword);
        Co_firstsecondthirdword     = catx(' ', Co_firstword, Co_secondword, Co_thirdword);
        Co_firstsecondthirdfourword = catx(' ', Co_firstword, Co_secondword, Co_thirdword, Co_fourthword);
        /* Truncate the name just before a legal-form suffix. INDEXW replaces
           the draft's long OR-chains, which combined incorrectly with the
           MISSING() condition because AND binds more tightly than OR in SAS. */
        length suffixes $ 200;
        suffixes = 'inc llc corp corporation incorporation lp properties incorporated gmbh ltd limited spa sa co company ab nv bv plc srl ag pty proprietary gesellschaft fund';
        if Co_secondword ne ' ' and indexw(suffixes, Co_secondword) then Co_stripname = Co_firstword;
        else if Co_thirdword ne ' ' and indexw(suffixes, Co_thirdword) then Co_stripname = Co_firstsecondword;
        else if Co_fourthword ne ' ' and indexw(suffixes, Co_fourthword) then Co_stripname = Co_firstsecondthirdword;
        else Co_stripname = Co_firstsecondthirdfourword;
        drop suffixes;
    RUN;
%MEND CLEANNAMES;

/* Apply the identical cleaning steps to both files */
%CLEANNAMES(UnikCompany_1);
%CLEANNAMES(UnikCompany_2);

PROC SQL;
    CREATE TABLE Match_Company AS
    SELECT DISTINCT a.*, b.Company AS Company_Match, b.COMPANYID AS COMPANYID_Match
    FROM UnikCompany_1 a LEFT JOIN UnikCompany_2 b
        ON a.Co_stripname = b.Co_stripname;  /* the draft joined on Co_stripname_low,
                                                a variable the code never creates */
QUIT;

1
2
3 Co_fourthword="gmbh" or Co_fourthword="ltd" or Co_fourthword="limited" or
4 Co_fourthword="spa" or Co_fourthword="sa" or Co_fourthword="co" or
5 Co_fourthword="company" or Co_fourthword="ab" or Co_fourthword="nv" or
6 Co_fourthword="bv" or Co_fourthword="plc" or Co_fourthword="srl" or
7 Co_fourthword="ag" or Co_fourthword="pty" or Co_fourthword="proprietary" or
8 Co_fourthword="gesellschaft" or Co_fourthword="company" or Co_fourthword="co"
9 or Co_fourthword="plc" or Co_fourthword="fund" THEN Co_stripname=
10 Co_firstsecondthirdword;
11 RUN;
12
DATA UnikCompany_2;
13
SET UnikCompany_2;
14
IF MISSING(Co_stripname) THEN Co_stripname=Co_firstsecondthirdfourword;
15 RUN;
16
/* First pass: exact match on the fully stripped name */
PROC SQL;
CREATE TABLE Match_Company AS
SELECT DISTINCT a.*, b.Company AS Company_Match, b.COMPANYID AS COMPANYID_Match
FROM UnikCompany_1 a LEFT JOIN UnikCompany_2 b
ON a.Co_stripname_low=b.Co_stripname_low
ORDER BY a.COMPANYID;
QUIT;

/* Collect unmatched firms and retry on progressively shorter name fragments */
DATA Match_Company_miss;
SET Match_Company;
IF MISSING(Company_Match);
RUN;

/* Second pass: match the misses on the first three words */
PROC SQL;
CREATE TABLE Match_Company_miss_1 AS
SELECT DISTINCT a.*, b.Company AS Company_Match_1, b.COMPANYID AS COMPANYID_Match_1
FROM Match_Company_miss a LEFT JOIN UnikCompany_2 b
ON a.Co_firstsecondthirdword=b.Co_firstsecondthirdword
ORDER BY a.COMPANYID;
QUIT;

PROC SQL;
CREATE TABLE Match_Company_1 AS
SELECT DISTINCT a.*, b.Company_Match_1, b.COMPANYID_Match_1
FROM Match_Company a LEFT JOIN Match_Company_miss_1 b
ON a.Company=b.Company
ORDER BY a.COMPANYID;
QUIT;

DATA Match_Company_1;
SET Match_Company_1;
IF MISSING(Company_Match) THEN Company_Match=Company_Match_1;
IF MISSING(COMPANYID_Match) THEN COMPANYID_Match=COMPANYID_Match_1;
RUN;

DATA Match_Company_1;
SET Match_Company_1;
DROP Company_Match_1 COMPANYID_Match_1;
RUN;

DATA Match_Company_miss_1;
SET Match_Company_1;
IF MISSING(Company_Match);
RUN;

/* Third pass: match the remaining misses on the first two words */
PROC SQL;
CREATE TABLE Match_Company_miss_2 AS
SELECT DISTINCT a.*, b.Company AS Company_Match_1, b.COMPANYID AS COMPANYID_Match_1
FROM Match_Company_miss_1 a LEFT JOIN UnikCompany_2 b
ON a.Co_firstsecondword=b.Co_firstsecondword
ORDER BY a.COMPANYID;
QUIT;

PROC SQL;
CREATE TABLE Match_Company_2 AS
SELECT DISTINCT a.*, b.Company_Match_1, b.COMPANYID_Match_1
FROM Match_Company_1 a LEFT JOIN Match_Company_miss_2 b
ON a.Company=b.Company
ORDER BY a.COMPANYID;
QUIT;

DATA Match_Company_2;
SET Match_Company_2;
IF MISSING(Company_Match) THEN Company_Match=Company_Match_1;
IF MISSING(COMPANYID_Match) THEN COMPANYID_Match=COMPANYID_Match_1;
RUN;

DATA Match_Company_2;
SET Match_Company_2;
DROP Company_Match_1 COMPANYID_Match_1;
RUN;

DATA Match_Company_miss_2;
SET Match_Company_2;
IF MISSING(Company_Match);
RUN;

/* Fourth pass: match the remaining misses on the first word only */
PROC SQL;
CREATE TABLE Match_Company_miss_3 AS
SELECT DISTINCT a.*, b.Company AS Company_Match_1, b.COMPANYID AS COMPANYID_Match_1
FROM Match_Company_miss_2 a LEFT JOIN UnikCompany_2 b
ON a.Co_firstword=b.Co_firstword
ORDER BY a.COMPANYID;
QUIT;

PROC SQL;
CREATE TABLE Match_Company_3 AS
SELECT DISTINCT a.*, b.Company_Match_1, b.COMPANYID_Match_1
FROM Match_Company_2 a LEFT JOIN Match_Company_miss_3 b
ON a.Company=b.Company
ORDER BY a.COMPANYID;
QUIT;

DATA Match_Company_3;
SET Match_Company_3;
IF MISSING(Company_Match) THEN Company_Match=Company_Match_1;
IF MISSING(COMPANYID_Match) THEN COMPANYID_Match=COMPANYID_Match_1;
RUN;

DATA Match_Company_3;
SET Match_Company_3;
DROP Company_Match_1 COMPANYID_Match_1;
RUN;

DATA Match_Company_miss_3;
SET Match_Company_3;
IF MISSING(Company_Match);
RUN;

/* Manually check the matched companies and the companies that did not match */
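For the manual check, a spelling-distance screen can help prioritize review of near misses. The sketch below is illustrative rather than part of the original routine: it scores each still-unmatched stripped name against the candidate list with SAS's SPEDIS function, and the cutoff of 15 is an assumed, tunable threshold.

/* Illustrative aid for the manual check (not part of the matching routine):
   score unmatched names against candidate names with SPEDIS and keep close
   pairs for review. The cutoff (15) is an assumption. */
PROC SQL;
CREATE TABLE Review_Candidates AS
SELECT a.COMPANYID, a.Co_stripname,
       b.COMPANYID AS COMPANYID_Cand, b.Co_stripname AS Co_stripname_Cand,
       SPEDIS(a.Co_stripname, b.Co_stripname) AS Name_Distance
FROM Match_Company_miss_3 a, UnikCompany_2 b
WHERE CALCULATED Name_Distance LE 15
ORDER BY a.COMPANYID, Name_Distance;
QUIT;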
APPENDIX C

CEM Statistics

For the difference-in-differences estimation, we employed Coarsened Exact Matching (CEM) to obtain a matched sample of regular-funded and ARRA-funded firms, matched on prior patents, prior grants, grant amount, minority-owned, women-owned, founding year, and geographical location. The procedure yielded 136 matched firms (68 matched pairs of regular-funded and ARRA-funded firms).
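On the matched panel, the difference-in-differences comparison can be expressed as an interaction between ARRA status and the post-grant period. The following is a minimal sketch in the same SAS register as the code appendix; the dataset (MatchedPanel), the post-grant indicator (post), the transformed outcome (ihs_patents), and the firm identifier (firm_id) are illustrative assumptions, not the study's actual variable names.

/* Hypothetical DiD sketch on the CEM-matched panel: the coefficient on
   arra*post is the difference-in-differences estimate. All dataset and
   variable names are assumptions; fixed effects omitted for brevity. */
PROC GENMOD DATA=MatchedPanel;
CLASS firm_id;
MODEL ihs_patents = arra post arra*post / DIST=NORMAL LINK=IDENTITY;
REPEATED SUBJECT=firm_id / TYPE=IND;  /* firm-clustered sandwich SEs */
RUN;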
A Description of Coarsened Exact Matching

Coarsened Exact Matching belongs to a general class of methods termed "monotonic imbalance bounding," which has superior statistical properties compared to matched-pairs methodology and to matching models based on "equal percent bias reducing," such as propensity score matching. Iacus, King, and Porro (2012) note that CEM generates matching solutions that are better balanced, and estimates of the causal quantity of interest that have lower root mean square error, than methods in the older class based on propensity scores, nearest neighbors, and optimal matching. A key difference between CEM and other matching techniques is that, whereas methods such as propensity score matching require determining ex ante the size of the matched control sample and then ensuring balance ex post, CEM performs the balancing ex ante. The CEM algorithm temporarily "coarsens" a set of observed covariates, conducts exact matching on the coarsened data, "prunes" observations so that each matched stratum has at least one treatment and one control unit, and then runs estimations using the original (but pruned) uncoarsened data.
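As a minimal illustration of this coarsen-prune logic (not the study's actual matching code), the sketch below coarsens a few covariates into bins and keeps only strata containing both an ARRA-funded and a regular-funded firm; the dataset (Cem_Input), the treatment flag (treat), and the bin widths are assumptions.

/* Hypothetical CEM sketch: coarsen covariates, form strata by exact match
   on the bins, and prune strata lacking both a treated and a control firm.
   All dataset names, variable names, and bin widths are assumptions. */
DATA Coarsened;
SET Cem_Input;                            /* one row per firm */
year_bin   = FLOOR(founding_year/2);      /* two-year founding cohorts */
patent_bin = MIN(prior_patents,3);        /* top-code prior patents at 3 */
grant_bin  = ROUND(grant_amount,0.05);    /* grant size in coarse steps */
RUN;

PROC SQL;
CREATE TABLE Cem_Matched AS
SELECT a.*
FROM Coarsened a
INNER JOIN
  (SELECT year_bin, patent_bin, grant_bin
   FROM Coarsened
   GROUP BY year_bin, patent_bin, grant_bin
   HAVING MAX(treat)=1 AND MIN(treat)=0) b  /* strata with both groups */
ON a.year_bin=b.year_bin AND a.patent_bin=b.patent_bin
   AND a.grant_bin=b.grant_bin;
QUIT;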
Figure C1: Trend Line of Patents/Year for the CEM Matched Sample
TABLE C1: Comparing L1 Statistics Pre- and Post-Match

Pre-match. Multivariate L1 distance: 0.79
                 L1      Mean diff.   min     25%       50%    75%      max
Prior Patents    0.10    -0.75        0       0         0      -1       -53
Prior Grants     0.11    0.06         0       0         1      1        -16
Grant Amount     0.63    -0.04        -0.03   -0.05     -0.05  -0.003   -0.05
Minority-owned   0.04    -0.04        0       0         0      0        0
Women-owned      0.004   0.004        0       0         0      0        0
Founding Year    0.23    0.03         0       1         1      0        0
California       0.02    -0.02        0       0         0      0        0
Massachusetts    0.003   0.003        0       0         0      0        0
New York         0.003   0.003        0       0         0      0        0
Pennsylvania     0.007   0.007        0       0         0      0        0
Others           0.02    0.02         0       0         0      0        0

Post-match. Multivariate L1 distance: 0.39
                 L1      Mean diff.   min     25%       50%    75%      max
Prior Patents    0.05    -0.10        0       0         0      0        -1
Prior Grants     0.10    0.42         0       0         1      1        3
Grant Amount     0       4.7e-05      0.002   -1.0e-05  0      6.0e-06  0
Minority-owned   0       0            0       0         0      0        0
Women-owned      0       0            0       0         0      0        0
Founding Year    0       0.01         1       0         0      0        0
California       0       0            0       0         0      0        0
Massachusetts    0       0            0       0         0      0        0
New York         0       0            0       0         0      0        0
Pennsylvania     0       0            0       0         0      0        0
Others           0       0            0       0         0      0        0

Note: Col. 1 reports the L1 measure. Col. 2 reports the difference in means. The remaining cols. report the difference in the empirical quantiles of the distributions of the two groups.
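The multivariate L1 distance reported above is one half the sum, across the coarsened multivariate bins, of the absolute differences between the treated and control relative frequencies. A minimal sketch of that computation follows; the stratum identifier (bin_id) and treatment flag (treat) are assumed to come from a prior coarsening step.

/* Hypothetical sketch of the L1 imbalance statistic:
   L1 = 0.5 * sum over bins of |f_treated - f_control|.
   Assumes a Coarsened dataset with bin_id and treat (1 = ARRA-funded). */
PROC SQL;
CREATE TABLE BinShares AS
SELECT bin_id,
       SUM(treat=1)/(SELECT COUNT(*) FROM Coarsened WHERE treat=1) AS f_t,
       SUM(treat=0)/(SELECT COUNT(*) FROM Coarsened WHERE treat=0) AS f_c
FROM Coarsened
GROUP BY bin_id;

SELECT 0.5*SUM(ABS(f_t - f_c)) AS L1_Statistic
FROM BinShares;
QUIT;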
TABLE C2: Summary Statistics of Matching Covariates for Regular-funded and ARRA-funded Firms

Matching                          Mean     25th   50th   75th   p-value    p-value
characteristic                                                  (t-test)   (KS-test)
Prior Patents    Regular          0.42     0      0      0      0.51       1.00
                 ARRA             0.32     0      0      0
Prior Grants     Regular          1.07     0      0      1      0.32       0.10
                 ARRA             1.50     0      1      2
Grant Amount     Regular          0.12     0.10   0.10   0.15   0.99       0.73
                 ARRA             0.12     0.10   0.10   0.15
Minority-owned   Regular          0.01     0      0      0      1.00       1.00
                 ARRA             0.01     0      0      0
Women-owned      Regular          0.06     0      0      0      1.00       1.00
                 ARRA             0.06     0      0      0
Founding Year    Regular          2006.22  2005   2007   2008   0.97       1.00
                 ARRA             2006.23  2005   2007   2008
California       Regular          0.13     0      0      0      1.00       1.00
                 ARRA             0.13     0      0      0
Massachusetts    Regular          0.06     0      0      0      1.00       1.00
                 ARRA             0.06     0      0      0
New York         Regular          0.06     0      0      0      1.00       1.00
                 ARRA             0.06     0      0      0
Pennsylvania     Regular          0.04     0      0      0      1.00       1.00
                 ARRA             0.04     0      0      0
Others           Regular          0.05     0      0      0      1.00       1.00
                 ARRA             0.05     0      0      0

Note. CEM sample of 136 firms (68 matched pairs of regular-funded and ARRA-funded).
APPENDIX D

Robustness Tests with Sales Outcomes

Sales information was manually collected from Dun & Bradstreet for the most recent revenue year as of January 2016. Results using generalized estimating equations (GEE) OLS and zero-inflated negative binomial (ZINB) models are reported in Table D1. The ZINB model provides two estimates of the ARRA effect: first, a logistic portion predicting the likelihood of zero revenue, and then a count portion predicting revenue. The positive coefficient on ARRA (logistic ZINB) shows that ARRA-funded firms have higher odds of having zero revenue. Conditional on a firm not being in the certain-zeros group, the negative and significant coefficient on the ARRA variable shows that ARRA-funded firms generate less revenue than regular-funded firms.
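A minimal sketch of the ZINB specification in the same register as the code appendix follows, using PROC COUNTREG (SAS/ETS); the dataset and variable names are assumptions, and the actual models additionally include location, industry, and year effects.

/* Hypothetical ZINB sketch: a negative binomial count model for revenue
   paired with a logistic inflation model for the probability of a certain
   zero. Dataset/variable names are assumptions; fixed effects omitted. */
PROC COUNTREG DATA=FirmYears;
MODEL sales = arra grant_amount firm_age prior_patents / DIST=ZINB;
ZEROMODEL sales ~ arra;   /* logistic link is the default */
RUN;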
Table D1: Comparing Sales of Regular-funded and ARRA-funded Firms after the Grant

                        Generalized Estimating Equations (GEE)   Zero-Inflated Negative Binomial (ZINB)
                        Sales a     Sales       Sales            Sales         Sales
                        (1)         (2)         (3)              (4)           (5)
ARRA                    -0.029***   -0.027***   -0.019**         -2.146***     -1.634***
                        (0.007)     (0.007)     (0.007)          (0.422)       (0.382)
ARRA (logistic ZINB)                                             8.724***      8.853***
                                                                 (0.177)       (0.181)
Grant Amount            -0.019      -0.011      -0.001           -1.910*       -1.386*
                        (0.017)     (0.017)     (0.017)          (0.847)       (0.714)
Firm Age                                        0.004                          1.356**
                                                (0.009)                        (0.514)
Prior Patents                                   0.027***                       0.640***
                                                (0.008)                        (0.156)
Prior Grants                                    0.007                          -0.098
                                                (0.006)                        (0.115)
Minority-owned                                  -0.001                         -0.114
                                                (0.015)                        (0.269)
Women-owned                                     -0.017                         -0.718*
                                                (0.011)                        (0.294)
Constant                -0.015      0.002       0.004            -37.772***    -22.446***
                        (0.041)     (0.040)     (0.008)          (0.068)       (1.997)
Location effects        Yes         Yes         Yes              Yes           Yes
Industry effects        Yes         Yes         Yes              Yes           Yes
Year effects            Yes         Yes         Yes              Yes           Yes
# observations (firms)  3690 (615)  3942 (657)  3942 (657)       3942 (657)    3942 (657)

Note: The dependent variables are the inverse hyperbolic sine transformation of sales (cols. 1-3) and sales (cols. 4 and 5). Cols. (1)-(3) report the generalized estimating equation OLS analysis, and cols. (4) and (5) report the ZINB analysis after receiving the grant in year t. The main independent variable ARRA is a dummy variable equal to one for firms funded through ARRA, and zero otherwise. Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. a Sample is restricted to firms with revenue > 0. ***, **, *, † indicate significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
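For the GEE columns, a minimal sketch using PROC GENMOD with a REPEATED statement follows; the REPEATED statement produces the firm-clustered empirical ("Huber-sandwich") standard errors reported throughout, and the dataset and variable names are assumptions.

/* Hypothetical GEE OLS sketch on inverse-hyperbolic-sine-transformed sales.
   Dataset/variable names are assumptions; fixed effects omitted. */
DATA FirmYears;
SET FirmYears;
ihs_sales = LOG(sales + SQRT(sales*sales + 1));  /* inverse hyperbolic sine */
RUN;

PROC GENMOD DATA=FirmYears;
CLASS firm_id;
MODEL ihs_sales = arra grant_amount firm_age prior_patents
      / DIST=NORMAL LINK=IDENTITY;
REPEATED SUBJECT=firm_id / TYPE=IND;  /* empirical (sandwich) SEs */
RUN;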
APPENDIX E

Robustness Tests to Account for Grant Timeline
Table E1: Comparing Regular-funded and ARRA-funded Firms

                        Generalized Estimating Equations (GEE)   Cox Proportional Hazard
                        Patent Count   Patent Citations          Venture Capital   IPO/Acquisition
                        (1)            (2)                       (3)               (4)
ARRA                    -0.185***      -0.122***                 -0.877**          -2.227**
                        (0.044)        (0.029)                   (0.296)           (0.743)
Grant Amount            -0.246**       -0.128*                   -1.133†           -3.090**
                        (0.093)        (0.066)                   (0.687)           (1.163)
Constant                -0.153         0.281
                        (0.231)        (0.249)
Location effects        Yes            Yes                       Yes               Yes
Industry effects        Yes            Yes                       Yes               Yes
Year effects            Yes            Yes                       Yes               Yes
# observations (firms)  4085 (657)     4085 (657)                3068 (657)        3337 (657)
Wald Chi-square         198.05         168.36                    1474.07           1911.63

Note: The dependent variables for ARRA-funded firms are assessed beginning one year after receipt of the grant, giving ARRA-funded firms an extra year following the grant. Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
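A minimal sketch of the Cox proportional hazard columns with PROC PHREG follows; the duration and censoring variable names are assumptions, and the analyses are right censored.

/* Hypothetical Cox sketch: time from grant to first VC investment,
   with vc_event = 0 marking right-censored firms. Dataset/variable
   names are assumptions; stratification and effects omitted. */
PROC PHREG DATA=FirmSpells;
MODEL years_to_vc*vc_event(0) = arra grant_amount;
RUN;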
Table E2: Comparing Regular-funded and ARRA-funded Firms

                        Generalized Estimating Equations (GEE)   Cox Proportional Hazard
                        Patent Count   Patent Citations          Venture Capital   IPO/Acquisition
                        (1)            (2)                       (3)               (4)
ARRA                    -0.098***      -0.125***                 -0.876**          -2.218**
                        (0.022)        (0.031)                   (0.297)           (0.752)
Grant Amount            -0.267**       -0.119†                   -1.172†           -3.026*
                        (0.098)        (0.068)                   (0.688)           (1.225)
Constant                -0.166         0.343
                        (0.246)        (0.264)
Location effects        Yes            Yes                       Yes               Yes
Industry effects        Yes            Yes                       Yes               Yes
Year effects            Yes            Yes                       Yes               Yes
# observations (firms)  3924 (654)     3924 (654)                2926 (654)        3182 (654)
Wald Chi-square         168.56         165.19                    199.95            1682.28

Note: The sample used for analysis is matched on grant notice date (+/- 3 months). Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate that the difference is significant at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
APPENDIX F

Robustness Tests with Lagged Patent Outcomes

Table F1: Comparing Innovation of Regular-funded and ARRA-funded Firms for the Time Periods t+2 to t+5 and t+3 to t+5 after Receiving the Grant in Year t

                        Generalized Estimating Equations (GEE)
                        Patent Count   Patent Citations   Patent Count   Patent Citations
                        (1)            (2)                (3)            (4)
ARRA                    -0.170***      -0.065***          -0.179***      -0.111***
                        (0.048)        (0.015)            (0.048)        (0.027)
Grant Amount            -0.247*        -0.099             -0.258**       -0.090
                        (0.099)        (0.071)            (0.101)        (0.063)
Constant                0.079          -0.004             -0.219         0.197
                        (0.071)        (0.187)            (0.215)        (0.198)
Location effects        Yes            Yes                Yes            Yes
Industry effects        Yes            Yes                Yes            Yes
Year effects            Yes            Yes                Yes            Yes
# observations (firms)  3285 (657)     3285 (657)         2628 (657)     2628 (657)
Wald Chi-square         150.85         133.22             139.53         104.63

Note: The dependent variables are the inverse hyperbolic sine transformation of patent count and patent citations. Cols. (1) and (2) report the analysis for years t+2 to t+5, and cols. (3) and (4) report the analysis for years t+3 to t+5 after receiving the grant in year t. The main independent variable ARRA is a dummy variable equal to one for firms funded through ARRA, and zero otherwise. Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
APPENDIX G

Robustness with Alternative Empirical Models

Table G1: Multivariate Analysis Comparing Regular-funded and ARRA-funded Firms

                        OLS                               Cox Proportional Hazard
                        Patent Count   Patent Citations   Venture Capital   IPO/Acquisition
                        (1)            (2)                (3)               (4)
ARRA                    -0.205***      -0.134***          -0.905**          -2.201**
                        (0.049)        (0.032)            (0.297)           (0.757)
Grant Amount            -0.286**       -0.137†            -1.227†           -2.996*
                        (0.103)        (0.073)            (0.688)           (1.236)
Constant                -0.223         0.270
                        (0.266)        (0.269)
Location effects        Yes            Yes                Yes               Yes
Industry effects        Yes            Yes                Yes               Yes
Year effects            Yes            Yes                Yes               Yes
# observations (firms)  3942 (657)     3942 (657)         3942 (657)        3942 (657)
Wald Chi-square         166.21         150.45             196.97            1700.34

Note: Firm-year level of analysis. The dependent variables in cols. (1)-(2) are the inverse hyperbolic sine transformation of patent count and patent citations. Cols. (3)-(4) report the Cox proportional hazard analysis of VC investment (hazard type: venture capital) and IPO/acquisition (hazard type: IPO/acquisition). The analyses are right censored. Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
TABLE G2: Multivariate Analysis on Innovation Risk

                        Logit                                        OLS
                        High Citation Patent   Low Citation Patent   Patent Originality
                        (1)                    (2)                   (3)
ARRA                    -1.862**               -0.746**              -0.065**
                        (0.622)                (0.244)               (0.016)
Grant Amount            -1.993†                -0.948*               -0.092**
                        (1.054)                (0.456)               (0.034)
Constant                -8.645**               -5.039***             -0.009
                        (2.879)                (1.224)               (0.083)
Location effects        Yes                    Yes                   Yes
Industry effects        Yes                    Yes                   Yes
Year effects            Yes                    Yes                   Yes
# observations (firms)  3942 (657)             3942 (657)            3942 (657)
Wald Chi-square         54.63                  72.80                 152.47

Note: Firm-year level of analysis. High citation patent (dummy equal to 1 if firm i in year t generated a patent with citation rates in the top ten percentile of the citation distribution), low citation patent (dummy equal to 1 if firm i in year t generated patents that lie in the bottom ten percentile of the citation distribution), and patent originality (inverse hyperbolic sine transformed). Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
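A minimal sketch of the logit columns follows, modeling the probability that a firm-year produces a top-decile citation patent; the dataset and variable names are assumptions.

/* Hypothetical logit sketch for the high-citation-patent dummy.
   EVENT='1' models Pr(high_cite_patent = 1). Dataset/variable names
   are assumptions; fixed effects omitted. */
PROC LOGISTIC DATA=FirmYears;
MODEL high_cite_patent(EVENT='1') = arra grant_amount;
RUN;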
TABLE G3: Correlates Explaining Key Performance Outcomes

                  OLS                      Cox Proportional Hazard       Logit                            OLS
                  Patent      Patent       Venture      IPO/             High Citation   Low Citation    Patent
                  Count       Citations    Capital      Acquisition      Patent          Patent          Originality
                  (1)         (2)          (3)          (4)              (5)             (6)             (7)
ARRA              -0.092†     -0.084*      -0.889**     -1.956*          -1.314†         -0.462†         -0.030*
                  (0.050)     (0.035)      (0.304)      (0.786)          (0.676)         (0.239)         (0.015)
Firm Age          0.056†      0.043†       -0.987***    0.025            -0.082          -0.225          -0.003
                  (0.029)     (0.025)      (0.260)      (0.561)          (0.469)         (0.203)         (0.012)
Prior Patents     0.316***    0.115***     0.369**      0.494**          1.607***        0.842***        0.112***
                  (0.037)     (0.017)      (0.140)      (0.186)          (0.223)         (0.114)         (0.011)
Prior Grants      -0.044*     -0.044**     -0.075       -0.244           -0.704**        0.176†          -0.007
                  (0.022)     (0.014)      (0.129)      (0.239)          (0.262)         (0.098)         (0.008)
Grant Amount      -0.154      -0.085       -1.277†      -2.869*          -1.728          -0.615          -0.051
                  (0.104)     (0.076)      (0.709)      (1.344)          (1.249)         (0.469)         (0.032)
Minority-owned    0.077       0.084        -0.778       -0.266           1.086           -0.178          0.001
                  (0.079)     (0.070)      (0.547)      (0.643)          (0.768)         (0.287)         (0.024)
Women-owned       -0.005      -0.044       0.719*       0.882†           -0.768          0.455           0.022
                  (0.047)     (0.033)      (0.349)      (0.533)          (0.942)         (0.351)         (0.025)
Constant          -0.135      0.296                                      -8.556**        -4.519***       0.037
                  (0.268)     (0.272)                                    (3.340)         (1.237)         (0.078)
Location effects  Yes         Yes          Yes          Yes              Yes             Yes             Yes
Industry effects  Yes         Yes          Yes          Yes              Yes             Yes             Yes
Year effects      Yes         Yes          Yes          Yes              Yes             Yes             Yes
# observations    3942        3942         2940         3197             3942            3942            3942
(firms)           (657)       (657)        (657)        (657)            (657)           (657)           (657)
Wald Chi-square   267.57      184.88       156.06       1798.38          128.27          173.54          271.03

Note: Firm-year level of analysis. The main independent variable ARRA is a dummy variable equal to one for firms funded through ARRA, and zero otherwise. Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate significance at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
APPENDIX H

Selection Preferences for Minority- and Woman-Owned Businesses

Given that the SBIR grant program encourages participation from small businesses owned by founders who belong to minority groups, as well as from woman-owned businesses, we investigate whether these attributes factor into the selection preferences of NSF. The empirical model tests the likelihood of receiving the grant (through ARRA or the regular budget) as a function of minority ownership (dummy variable equal to one if a firm is minority-owned, and zero otherwise) and woman ownership (dummy variable equal to one if a firm is woman-owned, and zero otherwise).
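A minimal sketch of this specification in the same register as the code appendix follows, using PROC GENMOD with a probit link and firm-clustered sandwich standard errors; the dataset and variable names are assumptions.

/* Hypothetical GEE probit sketch: likelihood that a grant was ARRA-funded
   given minority ownership. DESCENDING models Pr(arra_funded = 1).
   Dataset/variable names are assumptions; fixed effects omitted. */
PROC GENMOD DATA=FirmYears DESCENDING;
CLASS firm_id;
MODEL arra_funded = minority_owned / DIST=BINOMIAL LINK=PROBIT;
REPEATED SUBJECT=firm_id / TYPE=IND;   /* empirical (sandwich) SEs */
RUN;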
Table H1 reports the results of the probit estimation for the likelihood of obtaining the grant through ARRA funds. The dependent variable, ARRA-funded, is a binary variable equal to one if a firm obtains the grant through ARRA, and zero otherwise. The coefficient estimate of the minority-owned variable is negative and significant (model 1), but the coefficient of the woman-owned variable is not significant (model 2). The results show that minority-owned businesses had a higher likelihood of receiving the grant under the regular budget cycle than under ARRA, suggesting that NSF gives preference to minority-owned businesses in selecting grant recipients. We cannot infer any selection preference for woman-owned businesses.
Table H1: Minority- and Woman-Owned Businesses

                        GEE Probit
                        ARRA-funded
                        (1)            (2)
Minority-owned          -0.353*
                        (0.176)
Woman-owned                            -0.086
                                       (0.138)
Location effects        Yes            Yes
Industry effects        Yes            Yes
Year effects            Yes            Yes
Constant                -1.781***      -1.787***
                        (0.117)        (0.116)
# observations (firms)  2940 (657)     2940 (657)
Wald Chi-square         43.83          40.73

Note: The dependent variable ARRA-funded is a dummy variable equal to one only for the year a firm receives the grant funded through ARRA, and zero otherwise. It is set to missing in the years after the grant; firm-years therefore effectively drop out of the sample for all years subsequent to the year of the grant. The main independent variables are Minority-owned (dummy equal to one if the firm's founder(s) belong to a minority/economically disadvantaged group, and zero otherwise) and Woman-owned (dummy equal to one if the firm's founder(s) is female, and zero otherwise). Heteroskedasticity-adjusted robust standard errors estimated using the "Huber-sandwich" estimator are reported in parentheses. ***, **, *, † indicate that the difference is significant at p < 0.001, p < 0.01, p < 0.05, and p < 0.1 respectively; two-tailed tests.
REFERENCES

Ács, Z. J., Audretsch, D. B., Strom, R., Strom, R. J. 2009. Entrepreneurship, growth, and public policy. Cambridge: Cambridge University Press.
Abadie, A. 2005. Semiparametric difference-in-differences estimators. The Review of Economic Studies, 72(1): 1-19; doi:10.1111/0034-6527.00321
Armanios, D. E., Lanahan, L., Yu, D. 2019. Varieties of local government experimentation: US state-led technology-based economic development policies, 2000-2015. Academy of Management Discoveries, in press; doi:10.5465/amd.2018.0014
Arrow, K. 1962. Economic welfare and the allocation of resources for invention. In: The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton, NJ: Princeton University Press, pp. 609–626.
Ashenfelter, O., Card, D. 1985. Using the longitudinal structure of earnings to estimate the effect of training programs. The Review of Economics and Statistics, 67(4): 648–660; doi:10.2307/1924810
Audretsch, D. B., Link, A. N., Scott, J. T. 2002. Public/private technology partnerships: Evaluating SBIR-supported research. Research Policy, 31(1): 145-158; doi:10.1016/S0048-7333(00)00158-X
Azoulay, P., Zivin, J. S. G., Manso, G. 2012. NIH peer review: Challenges and avenues for reform (No. w18116). National Bureau of Economic Research; doi:10.3386/w18116
Bisias, D., Lo, A. W., Watkins, J. F. 2012. Estimating the NIH efficient frontier. PLoS One, 7(5); doi:10.1371/journal.pone.0034569
Block, F. L., Keller, M. R. 2011. State of Innovation: The US Government's Role in Technology Development. Boulder, CO: Paradigm.
Boudreau, K. J., Guinan, E. C., Lakhani, K. R., Riedl, C. 2016. Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science. Management Science, 62(10): 2765-2783; doi:10.1287/mnsc.2015.2285
Bourne, H. R., Lively, M. O. 2012. Iceberg alert for NIH. Science, 390–390; doi:10.1126/science.1226460
Bronzini, R., Piselli, P. 2016. The impact of R&D subsidies on firm innovation. Research Policy, 45(2): 442-457; doi:10.1016/j.respol.2015.10.008
Capron, L., Mitchell, W. 2009. Selection capability: How capability gaps and internal social frictions affect internal and external strategic renewal. Organization Science, 20(2): 294-312; doi:10.1287/orsc.1070.0328
Czarnitzki, D., Fier, A. 2002. Do innovation subsidies crowd out private investment: Evidence from the German service sector. Applied Economics Quarterly, 48(1): 1–25.
David, P. A., Hall, B. H., Toole, A. A. 2000. Is public R&D a complement or substitute for private R&D? A review of the econometric evidence. Research Policy, 29(4-5): 497-529; doi:10.1016/S0048-7333(99)00087-6
Evans, P. 1996. Government action, social capital and development: Reviewing the evidence on synergy. World Development, 24(6): 1119–1119.
Evans, P., Rauch, J. E. 1999. Bureaucracy and growth: A cross-national analysis of the effects of "Weberian" state structures on economic growth. American Sociological Review, 748-765; doi:10.2307/2657374
Fini, R., Jourdan, J., Perkmann, M. 2018. Social valuation across multiple audiences: The interplay between ability and identity judgments. Academy of Management Journal, 61(6): 2230-2264; doi:10.5465/amj.2016.0661
Fuchs, E. R. 2010. Rethinking the role of the state in technology development: DARPA and the case for embedded network governance. Research Policy, 39(9): 1133-1147; doi:10.1016/j.respol.2010.07.003
Ginther, D. K., Schaffer, W. T., Schnell, J., Masimore, B., Liu, F., Haak, L. L., Kington, R. 2011. Race, ethnicity, and NIH research awards. Science, 333(6045): 1015-1019; doi:10.1126/science.1196783
Gonzalez, X., Jaumandreu, J., Pazo, C. 2005. Barriers to innovation and subsidy effectiveness. RAND Journal of Economics, 36(4): 930-950.
Greenwald, B. C., Stiglitz, J. E., Weiss, A. 1984. Informational imperfections in the capital market and macro-economic fluctuations. NBER Working Paper.
Griliches, Z. 1990. Patent statistics as economic indicators. Journal of Economic Literature, 92: 630-653.
Griliches, Z., Hall, B. H., Pakes, A. 1987. The value of patents as indicators of inventive activity. In: Dasgupta, P., Stoneman, P. (Eds.), Economic Policy and Technological Performance. Cambridge: Cambridge University Press.
Guo, D., Guo, Y., Jiang, K. 2016. Government-subsidized R&D and firm innovation: Evidence from China. Research Policy, 45(6): 1129-1144; doi:10.1016/j.respol.2016.03.002
Hall, B. H., Jaffe, A., Trajtenberg, M. 2001. The NBER patent citations data file: Lessons, insights and methodological tools. NBER Working Paper Series 8498; doi:10.3386/w8498
Hall, B. H., Jaffe, A., Trajtenberg, M. 2005. Market value and patent citations. RAND Journal of Economics, 36: 16-38.
Hall, B. H., Lerner, J. 2010. The financing of R&D and innovation. In: Hall, B. H., Rosenberg, N. (Eds.), The Handbook of the Economics of Innovation, vol. I. Amsterdam, NL: Elsevier, pp. 609–639.
Hegde, D., Mowery, D. C. 2008. Politics and funding in the US public biomedical R&D system. Science, 322(5909): 1797-1798; doi:10.1126/science.1158562
Hellmann, T., Puri, M. 2002. Venture capital and the professionalization of start-up firms: Empirical evidence. Journal of Finance, 57: 169–197; doi:10.1111/1540-6261.00419
Howell, S. T. 2017. Financing innovation: Evidence from R&D grants. American Economic Review, 107(4): 1136-64; doi:10.1257/aer.20150808
Hsu, D. H. 2006. Venture capitalists and cooperative start-up commercialization strategy. Management Science, 52: 204-219; doi:10.1287/mnsc.1050.0480
Iacus, S. M., King, G., Porro, G. 2012. Causal inference without balance checking: Coarsened exact matching. Political Analysis, 20(1): 1-24.
Inoue, H., Yamaguchi, E. 2017. Evaluation of the small business innovation research program in Japan. SAGE Open, 7(1); doi:10.1177/2158244017690791
Jaffe, A. B., Trajtenberg, M., Henderson, R. 1993. Geographic localization of knowledge spillovers as evidenced by patent citations. Quarterly Journal of Economics, 108(3): 577-598; doi:10.2307/2118401
Kaplan, S. N., Lerner, J. 2016. Venture capital data: Opportunities and challenges. NBER Paper.
Keller, M. R., Block, F. 2012. Explaining the transformation in the US innovation system: The impact of a small government program. Socio-Economic Review, 11(4): 629-656; doi:10.1093/ser/mws021
King, A. A., Lenox, M. J., Terlaak, A. 2005. The strategic use of decentralized institutions: Exploring certification with the ISO 14001 management standard. Academy of Management Journal, 48(6): 1091-1106; doi:10.5465/amj.2005.19573111
Lamont, M. 2009. How professors think. Cambridge, MA: Harvard University Press.
Lanahan, L., Graddy-Reed, A., Feldman, M. P. 2016. The domino effects of federal research funding. PLoS One, 11(6); doi:10.1371/journal.pone.0157325
Lee, C. J., Sugimoto, C. R., Zhang, G., Cronin, B. 2013. Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1): 2-17; doi:10.1002/asi.22784
Lerner, J. 1999. The government as venture capitalist: The long-run effects of the SBIR program. Journal of Business, 72: 228-247.
Lerner, J. 2009. Boulevard of broken dreams: Why public efforts to boost entrepreneurship and venture capital have failed and what to do about it. Princeton, NJ: Princeton University Press.
Levene, H. 1960. Robust tests for equality of variances. In: Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University Press, vol. 2, pp. 278–292.
Li, D. 2012. Information, bias, and efficiency in expert evaluation: Evidence from the NIH. Job market paper, 1-57.
Li, D., Azoulay, P., Sampat, B. N. 2017. The applied value of public investments in biomedical research. Science, 356(6333): 78-81; doi:10.1126/science.aal0010
Liang, K. Y., Zeger, S. L., Qaqish, B. 1986. Longitudinal data analysis using generalized linear models. Biometrika, 73(1): 13-22; doi:10.1093/biomet/73.1.13
Lin, J. Y. 2012. New structural economics: A framework for rethinking development and policy. The World Bank.
Link, A. N., Scott, J. T. 2009. Private investor participation and commercialization rates for government-sponsored research and development: Would a prediction market improve the performance of the SBIR program? Economica, 76(302): 264-281; doi:10.1111/j.1468-0335.2008.00740.x
Link, A. N., Scott, J. T. 2010. Government as entrepreneur: Evaluating the commercialization success of SBIR projects. Research Policy, 39(5): 589-601; doi:10.1016/j.respol.2010.02.006
Mazzucato, M. 2013. The Entrepreneurial State: Debunking Public vs. Private Sector Myths. London: Anthem Press.
Mazzucato, M., Perez, C. 2015. Innovation as growth policy. In: Fagerberg, J., Laestadius, S., Martin, B. (Eds.), The Triple Challenge: Europe in a New Age. Oxford: Oxford University Press.
Megginson, W., Weiss, K. 1991. Venture capitalist certification in initial public offerings. Journal of Finance, 46(3): 879–903; doi:10.1111/j.1540-6261.1991.tb03770.x
Meyer, J. W., Rowan, B. 1977. Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83(2): 340-363; doi:10.1086/226550
Nelson, R. R. 1959. The economics of invention: A survey of the literature. The Journal of Business, 32(2): 101–127.
Oliver, C. 1990. Determinants of interorganizational relationships: Integration and future directions. Academy of Management Review, 15(2): 241-265; doi:10.5465/amr.1990.4308156
Ó Riain, S. 2000. States and markets in an era of globalization. Annual Review of Sociology, 26(1): 187-213; doi:10.1146/annurev.soc.26.1.187
Ó Riain, S. 2000. The flexible developmental state: Globalization, information technology and the "Celtic Tiger". Politics and Society, 28(2): 157-193.
Pahnke, E. C., Katila, R., Eisenhardt, K. M. 2015. Who takes you to the dance? How partners' institutional logics influence innovation in young firms. Administrative Science Quarterly, 60(4): 596-633; doi:10.1177/0001839215592913
Park, H., Lee, J. J., Kim, B. C. 2015. Project selection in NIH: A natural experiment from ARRA. Research Policy, 44(6): 1145-1159; doi:10.1016/j.respol.2015.03.004
Sine, W. D., David, R. J., Mitsuhashi, H. 2007. From plan to plant: Effects of certification on operational start-up in the emergent independent power sector. Organization Science, 18(4): 578-594; doi:10.1287/orsc.1070.0300
Stinchcombe, A. L. 1965. Social structures and organizations. In: March, J. G. (Ed.), Handbook of Organizations, pp. 142–193. Chicago: Rand McNally.
Stuart, T. E., Hoang, H., Hybels, R. 1999. Interorganizational endorsements and the performance of entrepreneurial ventures. Administrative Science Quarterly, 44: 315-349; doi:10.2307/2666998
Trajtenberg, M. 1990. A penny for your quotes: Patent citations and the value of innovations. RAND Journal of Economics, 21(1): 172-187; doi:10.2307/2555502
U.S. General Accounting Office (GAO). 1992. Federal Research: Small Business Innovation Research Shows Success but Can Be Strengthened. GAO/RCED-92-37. Washington, D.C.: U.S. Government Printing Office.
Wallsten, S. J. 2000. The effects of government-industry R&D programs on private R&D: The case of the Small Business Innovation Research program. RAND Journal of Economics, 82-100; doi:10.2307/2601030
Wang, Y., Li, J., Furman, J. L. 2017. Firm performance and state innovation funding: Evidence from China's Innofund program. Research Policy, 46(6): 1142-1161; doi:10.1016/j.respol.2017.05.001
Wessner, C. W. 2004. SBIR Program Diversity and Assessment Challenges. National Academies Press.
Wessner, C. W. 2008. An Assessment of the SBIR Program at the National Science Foundation. National Academies Press.
Zhou, X. 2005. The institutional logic of occupational prestige ranking: Reconceptualization and reanalyses. American Journal of Sociology, 111(1): 90-140; doi:10.1086/428687
Supradeep Dutta (supradee@buffalo.edu) is an assistant professor of management at SUNY Buffalo. He received his Ph.D. from Purdue University. His research sits at the intersection of innovation, strategy, and entrepreneurship, exploring a central conundrum for the technology entrepreneur: attracting resources when there is information asymmetry about the quality of a venture.

Timothy B. Folta (timothy.folta@uconn.edu) (Ph.D., Purdue University) is the 2020-21 Chair of the Strategic Management Division (Academy of Management), Thomas John and Bette Wolff Family Chair of Strategic Entrepreneurship at the University of Connecticut, and Faculty Director of the Connecticut Center for Entrepreneurship and Innovation. His research and teaching examine entrepreneurship and corporate strategy, analyzing decisions around entry, exit, and diversification.

Jenna Rodrigues (jenna.rodrigues@uconn.edu) is a doctoral candidate in Management at the University of Connecticut. Her work focuses on innovation, entrepreneurship policy, and resource redeployment. She has a B.A. in Economics from Princeton University.