The Policy Studies Journal, Vol. 32, No. 3, 2004

Evaluating the Effectiveness of Nonprofit Fundraising
Arthur C. Brooks
In this article, I apply and discuss two alternative approaches for evaluating nonprofit fundraising practices. First, I introduce simple financial ratios, using a sample of New York state social welfare nonprofits. Then, I construct Adjusted Performance Measures, which attempt to neutralize the uncontrollable outside forces that nonprofits face in their fundraising. I compare the two types of measures statistically, and examine their respective strengths and weaknesses.

Nonprofit activity is receiving increasing policy attention. There are two main reasons for this. First, many nonprofits receive high levels of government funding: Governments at all levels provide about a third of all nonprofit revenues, which amounts to more than $200 billion annually (Brooks, forthcoming). Second, governments are increasingly using the nonprofit sector—usually social welfare organizations—to produce or deliver public goods and services, resulting in a huge wave of public-private contracting (Van Slyke, 2003).

Privately donated support (in terms of both time and money) for nonprofits is also crucial to their viability. In 2000, 20% of all nonprofit sector revenues—more than $130 billion—were donated by individuals, and the value of adult volunteer time to nonprofits in 1998 was over $225 billion (Independent Sector, 2001). According to the Social Capital and Community Benchmark Survey (Roper, 2000), 81% of U.S. households gave charitably in 2000, with an average annual gift of $1,347. Sixty-five percent of households contributed to religious organizations (average annual gift of $858), and 68% gave to nonreligious organizations ($502). Twelve percent of all individual charitable giving, about $19 billion, flowed to social welfare service providers (Brooks, 2004).

It is apparent, however, that this type of organization tends to face the most difficult circumstances in raising donated funds. As a rule, social welfare nonprofits receive smaller average donations than organizations in other areas, such as education or the arts, and thus must solicit larger numbers of donors to acquire adequate funds. One way to illustrate this point is to look at the ratio of the average donation size to the number of donations (in millions) in different nonprofit subsectors. Nationally, in 1995, this ratio was 103 for arts and culture nonprofits, 39 for education, but just 14 for social welfare organizations (Brooks, 2003).
This indicates that, even holding constant the number of donors for each type of nonprofit, the average donation to an arts firm is about seven and a half times higher than that to a social welfare provider. Potentially, this is a fiscally disadvantageous state of affairs for these providers, in that they must develop a relatively large number of small donations. Not surprisingly, fundraising effectiveness is a topic that receives significant attention among social welfare nonprofit practitioners. In fact, for-profit marketing companies have generally made their deepest inroads into the nonprofit world by bringing their expertise to bear in designing effective direct mail and media campaigns for these firms.1

0162-895X © 2004 The Policy Studies Journal. Published by Blackwell Publishing, Inc., 350 Main Street, Malden, MA 02148, USA, and 9600 Garsington Road, Oxford, OX4 2DQ.

This fundraising situation raises an area of policy interest as well, given the reliance of many states on these organizations for the delivery of crucial services (Frumkin, 2002). In any contracting relationship between governments and nonprofits, performance evaluation is key to understanding and ensuring due diligence with tax-supported funds. At present, evaluation of nonprofits tends to focus on program effectiveness. However, given the importance and difficulty of fundraising among social welfare providers, fundraising performance is an evaluation focus that governments and other funders will almost certainly increasingly adopt in their contracting and granting processes: It reflects the ability of these nonprofits to generate the resources (on top of government revenues) they need to survive. It also may show how well organizations use tax-supported income in their fundraising.

With an eye to the interests of both policymakers and nonprofit practitioners, this article describes and compares common quantitative measures of fundraising performance. After a brief discussion of the existing literature on simple fundraising ratios, I introduce two that nonprofits often calculate in assessing or demonstrating their efficiency or effectiveness: the proportion of total costs devoted to core services instead of fundraising, and the donations per fundraising dollar.
Then, I describe “adjusted performance measures” (APMs), which try to improve on simple ratios by netting out variables beyond organizations’ control, such that their true efforts can be observed. To demonstrate both of these techniques, as well as their data requirements, I use 2001 IRS Form 990 data for social service providers in upstate New York. Neither of these methods is without problems, both for technical reasons and because neither yields much useful information to nonprofit organizations regarding what they should do to achieve higher fundraising effectiveness. Thus, I close by introducing a method that is slightly more complex but that holds promise for understanding future fundraising success.

Fundraising Effectiveness, by the Numbers

Background

The use of ratios in assessing financial success, failure, or vulnerability has a controversial history among nonprofit practitioners and scholars. Beginning with the work of Tuckman and Chang (1991), financial measures such as the ratio of administrative overhead to total costs and the ratio of cash on hand to total revenues have been used in predictive modeling and performance evaluation (Greenlee & Trussel, 2000; Hager, 2001). The advantage of using these financial ratios centers on their simplicity and transparency. They are easy to calculate, interpret, and explain to managers and policymakers. On the other hand, critics object to their use for three reasons.

First, they measure averages, which do not tell us about effectiveness at the scale of operations chosen by a nonprofit. For example, a nonprofit might have a high average return on its fundraising expenses, yet be fundraising too much (if, for the last dollar it spends, the return is less than a dollar). In other words, gauging true effectiveness requires information on marginal returns (which is not usually available), not average returns.

Second, in spite of the organizations' similarity in activities, these ratios are inherently contaminated by factors beyond the firms' control. For instance, the home areas of nonprofits often vary greatly in terms of population, economic circumstances, and demographics. Thus, we might be uncomfortable saying that a nonprofit that has a low donation level per fundraising dollar, but which resides in a poor area, is necessarily performing "worse" than an organization in a wealthy area that sees a higher yield. Indeed, we would probably be surprised if this were not the case, regardless of any underlying effort and skill on the part of the organizations.

Third, internal accounting standards for fundraising are notoriously porous. Looking at the IRS Form 990, which all 501(c)(3) nonprofits with gross annual revenues over $25,000 are required to file, one gets the impression that fundraising expenditures are a clearly delineated expense, neatly recorded on line 15 of the form. However, the fact is that fundraising responsibilities are distributed broadly across organizations and personnel. Most nonprofit executives devote some of their time (and thus some of their salary) to fundraising, but the actual amount is open to imprecision and bias.

In sum, financial ratios—especially those involving fundraising expenditures—are problematic.
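To make the first objection concrete, consider a toy sketch of how a healthy average return can coexist with an unprofitable marginal dollar. The response function below is purely hypothetical, chosen only because it is concave (diminishing returns):

```python
# Toy illustration: average return on fundraising can look healthy even
# when the marginal (last) dollar raises less than a dollar.
def donations(f):
    """Hypothetical donations raised as a function of fundraising spending f."""
    return 400.0 * f ** 0.5  # concave response: diminishing returns

F = 62_500.0  # current fundraising budget (hypothetical)
avg_return = donations(F) / F                      # the D/F ratio
marginal_return = donations(F + 1) - donations(F)  # return on the last dollar

print(avg_return)       # 1.6: each fundraising dollar raises $1.60 on average
print(marginal_return)  # ~0.8: the last dollar raises well under $1
```

On average, fundraising looks profitable, yet at this scale the organization loses money on its marginal fundraising dollar; only the (usually unobserved) marginal return reveals the overspending.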
Whether we like them or dislike them, however, they are very commonly used, and their strengths and weaknesses need to be understood.

Simple Ratios

Imagine we are interested in judging the fundraising effectiveness of a social welfare nonprofit. We want to know the level of unearned income Di and fundraising expenses Fi for an organization i. However, these numbers are not really useful unless they are known in comparison to one another, relative to other organizations, and/or given the organization's total budget. Thus, we might start by constructing two measures:

1. 1 - Fi/TCi, where Fi is i's annual fundraising expenditures, and TCi is i's total expenses. This represents the proportion of total costs that go to core services instead of to fundraising.

2. Di/Fi. This ratio represents the amount of unearned revenues generated by each fundraising dollar, on average.

Obviously, each of these measures captures different phenomena and would be useful under different circumstances.2 The first might be thought of as measuring the resources an organization has left over after fundraising, which has implications for its sustainability and fundraising efficiency. The second measure can be thought of as measuring an organization's effectiveness in targeting and retaining donors. We wouldn't necessarily assume a high correlation between these measures—indeed, some might believe that a low value of 1 - Fi/TCi is desirable (e.g., Rose-Ackerman, 1982), although this view is probably unusual. The value of each measure is highest in a comparative sense. That is, we would most likely want to seek these measures in the context of a ranking of "peer" organizations, in order to establish benchmarks.

To demonstrate the use of these common measures, consider the following small sample of New York state social welfare nonprofits. This sample considers data from fiscal year 2001 on the 47 organizations that fulfill the following criteria:

1. Each organization is from Albany, Syracuse, or Rochester, New York.
2. Each organization filed an Internal Revenue Service (IRS) Form 990 for 2001.
3. Each organization is principally involved in one of the following social welfare activities: children's services, family services, financial counseling, transportation for the needy, emergency assistance, care for institutionalized populations, or services to promote the independence of vulnerable groups (the elderly, immigrants, etc.).
4. Each organization has a positive fundraising budget.

The data were assembled from IRS records by the National Center for Charitable Statistics at the Urban Institute.3 Table 1 summarizes F, TC, D, and the constructed measures 1-F/TC and D/F for the sample. It also summarizes the years since inception for organizations (ORGNAGE) and their total revenues (TR). In these data, D includes not only private contributions but also government subsidies.4 Note that this sample is assembled only as an example to illustrate the measures and techniques.
Its limited size and scope are intentional to facilitate ease in understanding the results. However, a much larger sample of firms would be necessary to conduct an actual unbiased evaluation of fundraising performance. Table 2 lists the individual organizations with respect to 1-F/TC, D/F, and the rankings of these indexes. Of the 47 organizations, 13 (28%) are faith-based; however, this designation is not as simple as it seems at first, because different organizations have very different types of religious purpose and mission (Ebaugh, Pipes, Chafetz, & Daniels, 2003). Given the nature of IRS Form 990 data, I can only record the presence of a religious affiliation, data which I use in the next section. Ideally, greater detail on the role of religion in organizations would be used.

Table 1. IRS Form 990 data for social welfare nonprofits

Variable    F          TC           D            1-F/TC   TR           D/F     ORGNAGE
Mean        $71,068    $4,269,558   $1,790,911   0.94     $4,486,429   131     32
Std. dev.   $153,904   $7,154,622   $4,493,451   0.16     $7,437,557   300     19
Minimum     $100       $43,540      $446         0        $0           0.33    3
Maximum     $830,818   $35,805,623  $30,178,984  1        $36,722,136  1,484   70

Note. The minimum level of D in this sample exceeds the minimum level of TR because TR includes earned revenues, which can be negative.

Table 2. IRS data for ranking organizations by self-sufficiency and fundraising effectiveness

Organization                                       1-F/TC     Rank    D/F        Rank
Rochester Presbyterian Home                        0.999934   1       240.71     5
Kripalu Yoga Fellowship of the Capital District    0.999730   2       4.46       41
Centro Civico Hispano Americano                    0.999457   3       1,484.37   1
Edgerton Child Care Services                       0.999456   4       1,124.09   2
Arbor Park Child Care Center                       0.999198   5       153.22     9
Trinity Institution                                0.999055   6       911.06     3
Women's Building                                   0.998788   7       220.00     6
Baden Street Settlement                            0.998544   8       638.83     4
Elmcrest Children's Center                         0.998043   9       33.32      19
NYSARC                                             0.997616   10      77.52      14
Onondaga Community Living                          0.997540   11      104.48     12
Spaulding PRAY Residence                           0.996762   12      14.32      31
YMCA of the Capital District                       0.996193   13      42.50      17
Southwest Area Neighborhood Association            0.994938   14      174.68     7
Syracuse Model Neighborhood Facility               0.994674   15      169.65     8
Twelve Corners Day Care Center                     0.994572   16      31.12      20
Parsons Child & Family Center                      0.993894   17      17.17      26
Catholic Charities of Syracuse                     0.993058   18      121.42     10
YMCA of Greater Rochester                          0.993008   19      29.53      21
Lewis Street Center                                0.991925   20      79.48      13
International Center of the Capital Region         0.990998   21      115.12     11
Transitional Living Services of Onondaga County    0.989131   22      1.36       45
Rochester Rehabilitation Center                    0.988687   23      6.53       36
YMCA of Greater Syracuse                           0.987784   24      12.03      32
Holding Our Own                                    0.985640   25      2.64       44
YWCA of Syracuse & Onondaga County                 0.983527   26      16.43      28
Huntington Family Centers                          0.982369   27      19.91      25
Lifespan of Greater Rochester                      0.976589   28      55.26      15
New York Statewide Senior Action Council           0.974517   29      37.94      18
Center for the Disabled Foundation                 0.972849   30      10.79      33
St. Paul's Day Care Center                         0.968687   31      5.43       39
Urban League of Rochester                          0.967946   32      26.23      22
Threshold Center for Alternative Youth Services    0.963780   33      25.06      23
Syracuse Jewish Family Service                     0.962368   34      20.34      24
Pirate Toy Fund                                    0.961978   35      43.43      16
Capital District Center for Independence           0.941191   36      17.15      27
Rescue Mission Alliance of Syracuse                0.940780   37      7.68       35
100 Groton Parkway                                 0.931767   38      1.23       46
Crisis Pregnancy Services                          0.925454   39      15.40      30
Jewish Home of Central New York                    0.921706   40      15.81      29
Colonie Senior Service Centers                     0.912934   41      5.45       38
New York State Coalition for the Aging             0.874191   42      3.89       42
Women's Foundation of Genesee Valley               0.842301   43      8.56       34
Rochester Children's Nursery Foundation            0.830422   44      0.33       47
Francis House                                      0.776930   45      6.14       37
Hillside Children's Center Foundation              0.631919   46      5.20       40
Kirkhaven Foundation                               0.0        47      2.72       43

The two ratios measure different phenomena but are fairly highly correlated: The rank correlation between them is 0.70. The top organization in 1-F/TC, the Rochester Presbyterian Home, is fifth in D/F; the top organization in D/F, the Hispanic American Civic Center of Albany (Centro Civico Hispano Americano), is third in 1-F/TC.5

Adjusted Performance Measures

Recall that there were several common objections to the use of simple ratios. The first objection is that they ignore marginal effects, which are most relevant in decision making. Unfortunately, this objection cannot typically be remedied with data collected by a nonprofit or governing body. Hence, we may have to rely on averages, albeit with their weaknesses in mind as we interpret them. The second objection to simple ratios—that they are contaminated by influences beyond organizations' control—can be addressed. One method intended to do this is the use of Adjusted Performance Measures (APMs). These are regression-based metrics designed to simulate measures free from environmental influences that might make raw comparisons unfair or unreliable (Stiefel, Rubenstein, & Schwartz, 1999). For the application at hand, the basic idea of APMs is as follows. The best linear model of effectiveness we can build, given available data, might be R = a + Xb + e, where R is the measure 1-F/TC or D/F, X represents a vector of measurable environmental factors (mostly beyond an organization's control) that affect fundraising, and e is a random disturbance.
The portion of R that owes to an organization's efforts is not measured by this model—indeed, it is probably unmeasurable—so it is captured in the disturbance term. This disturbance, and hence the portion of an organization's performance not described by the environmental factors, is estimated by ê = R - R̂, where R̂ = â + Xb̂. Then, the element of ê corresponding to a particular organization i, êi, is treated as an uncontaminated indicator of the true measure sought, relative to the other organizations in the sample. That is, ê is an APM. A positive value of êi indicates that an organization would tend to receive a high score in terms of R, even if all environmental factors were neutralized; a negative value of êi indicates the opposite. The units in ê arguably do not have an especially meaningful interpretation, so APM information is most useful in ordinal form.

By way of example, I turn once again to the data used in the last section, and calculate APMs for each of the ratios. I augment the IRS data (including ORGNAGE and TR, to neutralize the fact that older and larger organizations may naturally have an easier time fundraising than younger and smaller firms) with several socioeconomic variables, calculated at population averages for the zip code in which each nonprofit is located. The idea here is that a nonprofit's "home" area will tend to affect its fundraising ability. (Note that the assumption that each nonprofit's zip code adequately represents its "home" is fairly arbitrary and done for illustrative purposes only. In an actual evaluation, researchers would need to define the appropriate home area with much greater precision.) These variables are the percentage of the adult population without a high school diploma (NOHS), household income (HHINC), the percentage of the population living below the poverty line (POV), and population (POP). I include dummy variables denoting whether the organization is faith-based (FAITH) or is located in the cities of Albany (ALB) or Rochester (ROC). (The reference city is Syracuse.) I transform all of the continuous variables, including the ratios, into their natural logarithms, such that a normal distribution of the errors is a plausible assumption, the left-hand side of each equation is not censored, and consequently the models can be estimated using ordinary least squares (OLS). Table 3 summarizes these variables across the organizations in the sample.

Table 3. Socioeconomic data for APM calculations

Variable    NOHS    HHINC     POV     POP      FAITH   ALB    ROC
Mean        0.25    $28,466   0.24    14,963   0.28    0.30   0.40
Std. dev.   0.13    $15,128   0.16    7,788
Minimum     0.06    $9,692    0.01    1,683    0       0      0
Maximum     0.48    $70,744   0.47    28,094   1       1      1

Table 4 summarizes the regression equations used to build each APM.

Table 4. APM regression estimates

                  Dependent variable:         Dependent variable:
                  ln(1-F/TC)                  ln(D/F)
                  Coefficient (std. error)    Coefficient (std. error)
Intercept         -8.869 (14.786)             -44.827 (17.655)
ln(TR)             0.055 (0.091)                0.117 (0.109)
NOHS               0.163 (0.298)                0.066 (0.356)
ln(HHINC)          0.068 (0.058)                0.078 (0.069)
POV                0.861 (1.43)                 4.566 (1.707)
ln(POP)           -0.021 (0.077)                0.115 (0.092)
FAITH             -0.266 (0.463)               -0.538 (0.553)
ALB                0.114 (0.53)                 0.054 (0.633)
ROC                0.351 (0.617)                1.105 (0.737)
R2                 0.13                         0.34

Note. Regressions are estimated using ordinary least squares.

The APM for each organization is calculated as exp{Ri} - exp{R̂i}. The mean estimated APM for 1-F/TC is 0.07, with a standard deviation of 0.33. The mean value of the APM for D/F is 82.27, with a standard deviation of 281.53. Table 5 summarizes the organizations with respect to the actual ratios, as well as the calculated APMs, and their rankings. The adjusted ratios diverge from each other much more than the simple ratios do. The rank correlation between the APMs of 1-F/TC and D/F is 0.19, which is not statistically significantly different from zero.
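The regress-and-keep-the-residual construction above can be sketched in a few lines. The covariates and coefficients below are simulated stand-ins for the article's IRS and zip-code variables, not the actual data:

```python
import numpy as np

# Sketch of an APM: regress the logged ratio on environmental covariates,
# treat the residual as the adjusted measure. Data are simulated stand-ins.
rng = np.random.default_rng(0)
n = 47  # sample size, as in the article's example

X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + environment
effort = rng.normal(scale=0.5, size=n)                      # unobservable "true effort"
ln_R = X @ np.array([1.0, 0.2, -0.1, 0.3]) + effort         # observed ln(D/F), say

beta, *_ = np.linalg.lstsq(X, ln_R, rcond=None)  # OLS fit of ln(R) on environment
ln_R_hat = X @ beta

apm = np.exp(ln_R) - np.exp(ln_R_hat)      # exp{R_i} - exp{R_hat_i}, as in the text
apm_rank = (-apm).argsort().argsort() + 1  # 1 = highest adjusted performance
```

Because the units of the residual are hard to interpret, only `apm_rank` would normally be reported, consistent with the article's point that APM information is most useful in ordinal form.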


Table 5. IRS data for ranking organizations by self-sufficiency and fundraising effectiveness

Organization                                       1-F/TC (APM) [APM rank]    D/F (APM) [APM rank]
Pirate Toy Fund                                    0.962 (0.739) [1]          43.4 (39.8) [13]
St. Paul's Day Care Center                         0.969 (0.619) [2]          5.4 (0.6) [19]
Women's Foundation of Genesee Valley               0.842 (0.52) [3]           8.6 (-0.6) [24]
Rochester Children's Nursery Foundation            0.83 (0.519) [4]           0.3 (-6.6) [34]
Twelve Corners Day Care Center                     0.995 (0.482) [5]          31.1 (21.1) [15]
100 Groton Parkway                                 0.932 (0.474) [6]          1.2 (-8) [35]
Rochester Rehabilitation Center                    0.989 (0.473) [7]          6.5 (-6.6) [33]
Lifespan of Greater Rochester                      0.977 (0.453) [8]          55.3 (21.1) [16]
Southwest Area Neighborhood Association            0.995 (0.381) [9]          174.7 (134) [7]
Colonie Senior Service Centers                     0.913 (0.354) [10]         5.4 (-0.4) [23]
Crisis Pregnancy Services                          0.925 (0.313) [11]         15.4 (-3) [28]
Women's Building                                   0.999 (0.309) [12]         220 (211.1) [5]
Hillside Children's Center Foundation              0.632 (0.277) [13]         5.2 (-5.3) [31]
Threshold Center for Alternative Youth Services    0.964 (0.264) [14]         25.1 (0.7) [18]
New York Statewide Senior Action Council           0.975 (0.259) [15]         37.9 (0) [21]
Center for the Disabled Foundation                 0.973 (0.256) [16]         10.8 (-0.4) [22]
New York State Coalition for the Aging             0.874 (0.244) [17]         3.9 (-29.9) [42]
Spaulding PRAY Residence                           0.997 (0.234) [18]         14.3 (-2.1) [27]
Transitional Living Services of Onondaga County    0.989 (0.228) [19]         1.4 (-16.7) [39]
Arbor Park Child Care Center                       0.999 (0.193) [20]         153.2 (111.1) [9]
YMCA of Greater Syracuse                           0.988 (0.141) [21]         12 (-6) [32]
Edgerton Child Care Services                       0.999 (0.111) [22]         1,124.1 (1,020.6) [2]
YWCA of Syracuse & Onondaga County                 0.984 (0.11) [23]          16.4 (0.3) [20]
Elmcrest Children's Center                         0.998 (0.096) [24]         33.3 (2.5) [17]
Rescue Mission Alliance of Syracuse                0.941 (0.076) [25]         7.7 (-12) [38]
Rochester Presbyterian Home                        1 (0.025) [26]             240.7 (197) [6]
Onondaga Community Living                          0.998 (-0.029) [27]        104.5 (91.3) [11]
Catholic Charities of Syracuse                     0.993 (-0.05) [28]         121.4 (97.4) [10]
Parsons Child & Family Center                      0.994 (-0.07) [29]         17.2 (-0.8) [25]
Syracuse Model Neighborhood Facility               0.995 (-0.088) [30]        169.6 (131.5) [8]
Jewish Home of Central New York                    0.922 (-0.1) [31]          15.8 (-11.1) [37]
YMCA of Greater Rochester                          0.993 (-0.104) [32]        29.5 (-8.4) [36]
Trinity Institution                                0.999 (-0.121) [33]        911.1 (784) [3]
Holding Our Own                                    0.986 (-0.126) [34]        2.6 (-42.4) [44]
Syracuse Jewish Family Service                     0.962 (-0.137) [35]        20.3 (-4.3) [30]
Francis House                                      0.777 (-0.163) [36]        6.1 (-3) [29]
Huntington Family Centers                          0.982 (-0.178) [37]        19.9 (-17.4) [40]
Kirkhaven Foundation                               0 (-0.256) [38]            2.7 (-1.1) [26]
NYSARC                                             0.998 (-0.258) [39]        77.5 (32.3) [14]
Centro Civico Hispano Americano                    0.999 (-0.269) [40]        1,484.4 (1,376.7) [1]
Capital District Center for Independence           0.941 (-0.304) [41]        17.2 (-38.8) [43]
Kripalu Yoga Fellowship of the Capital District    1 (-0.305) [42]            4.5 (-23.5) [41]
International Center of the Capital Region         0.991 (-0.348) [43]        115.1 (61.1) [12]
Urban League of Rochester                          0.968 (-0.388) [44]        26.2 (-229) [47]
Lewis Street Center                                0.992 (-0.406) [45]        79.5 (-159.6) [45]
Baden Street Settlement-Rochester                  0.999 (-0.46) [46]         638.8 (377.3) [4]
YMCA of the Capital District                       0.996 (-0.871) [47]        42.5 (-184.1) [46]

Table 6. Rank correlations between ratios and APMs

                1-F/TC     1-F/TC (APM)    D/F        D/F (APM)
1-F/TC          1
1-F/TC (APM)    -0.31**    1
D/F             0.70***    -0.28*          1
D/F (APM)       0.51***    0.19            0.71***    1

***denotes significance at the 0.001 level, **denotes significance at the 0.05 level, and *denotes significance at the 0.10 level.

The simple ratio D/F is highly correlated with its APM, whereas 1-F/TC is not. Table 6 illustrates this by way of the rank correlations between the four measures. The correlation between 1-F/TC and its APM is -0.31; the correlation between D/F and its APM is 0.71. Both of these values are statistically significant. In other words, the simple ratio is fairly accurate in representing D/F in spite of demographic influences. In contrast, the simple measure of 1-F/TC produces a ranking that is negatively associated with its APM. One explanation might be that nonprofits in unfavorable fundraising circumstances may select revenue generation mechanisms—such as state contracts—that do not rely on a high level of F, and hence a neutralization of these circumstances pushes their predicted fundraising proportion in the other direction. If this is the case, it weakens the utility of APM rankings for judging true fundraising effectiveness and suggests that samples of nonprofits for comparison need to be more homogeneous than the one used here.

If we trust the underlying methodology of APMs, these measures are important in showing how much of the organizations' effectiveness is due to the environment, as opposed to true effort. In many cases (such as the Hispanic American Civic Center), the APM doesn't materially change the picture. However, in other cases, the difference is dramatic. The Kripalu Yoga Fellowship in Albany provides a good example of this. In "raw" service spending, 1-F/TC, it ranks near the top of the sample, at second. However, among the people living in its neighborhood, 14% have no high school diploma and only 4% live below the poverty level. Given these advantages, a score near the top half of the sample might be assumed. But when these effects are neutralized with the APM, the adjusted ratio places the organization near the bottom, at 42nd.

APMs are not without problems.
Most notably, Brooks (2000) shows how relying on an estimate of a variable that is intentionally—although unavoidably—left out of a regression model can lead to distortions from omitted variable bias. This would be especially true in cases in which the sample is small, as in the example here. However, a number of performance evaluation scholars (e.g., Rubenstein, Schwartz, & Stiefel, 2003) argue forcefully that the costs of statistical distortion in APMs are preferable to the costs of measuring a variable contaminated by environmental forces outside organizations' control. Further methodological development and applications are needed to achieve a reliable consensus on the utility of APMs.
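Rank correlations like those reported in Table 6 use Spearman's rho; a minimal implementation, assuming no tied values, is:

```python
# Spearman rank correlation (no-ties formula): rho = 1 - 6*sum(d^2)/(n(n^2-1)).
# The example vectors are illustrative, not the article's actual measures.
def ranks(xs):
    """Return 1-based ranks of xs (ascending), assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly concordant and perfectly discordant rankings:
print(spearman_rho([1, 5, 3], [10, 50, 30]))  # 1.0
print(spearman_rho([1, 2, 3], [3, 2, 1]))     # -1.0
```

Because rho depends only on orderings, it is the natural statistic for comparing measures whose units (like the APM's) lack a meaningful interpretation.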



Predicting Fundraising Success

The attraction of simple ratios and their more sophisticated APM cousins is that they allow comparative fundraising evaluation on the basis of existing data. Hence, government agencies and other potential funders can judge the efforts of social service nonprofits in an analytic way. Unfortunately, simple ratios and APMs frequently give little guidance about content: Funders and nonprofits themselves cannot judge which fundraising levels and approaches will be most appropriate unless, across a sample, generalizable patterns of successful and unsuccessful fundraising appear among the organizations ranked.

Looking into fundraising content suggests slightly more complex evaluation approaches than those using simple ratios or APMs. One possibility is the use of natural experiments, which have been employed extensively in for-profit marketing for years, and which hold the potential to enhance substantially the effectiveness of actual fundraising strategies—especially in the area of direct mail appeals. They require a fundraising "control" technique, as well as an enhancement to that control. Then, groups of organizations can be exposed to each approach, and the results compared.6

For example, imagine that social welfare nonprofits have a standard practice of sending out direct mail costing 25 cents per piece to their list of potential and current contributors. They might benefit from increases in mailer "quality," such as using faster delivery; a government body or foundation would like to mandate (or sponsor) an experiment to investigate such an increase. Imagine that this would double the cost per unit. The objective is to measure the net benefits and choose an alternative. The n nonprofits involved are separated into equal (and randomly selected) groups of n/2 organizations; the first group sends out 25-cent mailers, while the second group sends out 50-cent mailers.
After one year (or another relevant period to evaluate success), we can judge the success of the alternative approaches by first calculating Dij/Fij (or the APM of this ratio) for each organization i in group j = 1, 2 (25-cent or 50-cent mailers), and then comparing the appeals with a test of means across the two groups, under the null hypothesis that the mean of Di1/Fi1 in the first group equals the mean of Di2/Fi2 in the second. The "best" alternative will emerge as the appeal with the (statistically significantly) highest mean value.7 The strength of this experimental approach is that, instead of evaluating past effectiveness, it evaluates alternative approaches. Not only does this give nonprofits information on what will be most effective, but it also suggests to funders what type and level of fundraising they might require of their future grantees.
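The post-experiment comparison of means can be sketched as a two-sample test on the D/F yields in the two arms. The yields below are hypothetical, and Welch's t statistic is used as one reasonable choice of test:

```python
import math

# Hypothetical mean D/F yields for the 25-cent and 50-cent mailer groups.
group_25 = [12.0, 8.5, 15.2, 9.9, 11.4, 10.8, 13.1, 9.2]
group_50 = [16.3, 14.9, 18.2, 13.7, 17.5, 15.1, 19.0, 14.4]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    """Welch's t statistic for the difference in group means."""
    se = math.sqrt(sample_var(a) / len(a) + sample_var(b) / len(b))
    return (mean(b) - mean(a)) / se

t_stat = welch_t(group_25, group_50)
# A |t| comfortably above ~2 favors rejecting the null of equal mean yields,
# here pointing toward the 50-cent mailer.
```

In practice the test would be run on the full sample of participating nonprofits (or on their APMs), with the degrees of freedom and p-value computed accordingly.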




Conclusion

In an environment in which governments seek out private nonprofit partners in the production and delivery of public services, financial due diligence by the partners is extremely important. Frequently, scholars talk about efficiency and effectiveness in the production and delivery processes, but rarely in terms of how the nonprofits raise their nongovernmental funds. Policymakers may fear that nonprofits that deal with governments will become ineffective in raising contributions, as their dependence on public sector subsidies pushes their organizational resources toward the acquisition and maintenance of these subsidies.8

In this article, I have weighed the advantages and disadvantages of two alternative approaches to evaluating fundraising practices and donation levels. First, I illustrated how simple ratios (of service spending to total expenses and donations to fundraising expenditures) can be employed, using a sample of upstate New York social welfare nonprofits as an example. Then, I introduced a modified form of these ratios, called Adjusted Performance Measures (APMs), which attempted to neutralize the uncontrollable environmental forces a firm faces in its fundraising. Neither approach is without practical and technical difficulties, as I have discussed. In addition, neither gives much information about what nonprofits should do, only what they have done. Thus, I have also briefly outlined the use of natural experiments in developing fundraising standards.

My objective in this article has been to discuss ratios and APMs methodologically and provide a clear and understandable example, as opposed to introducing a comprehensive list of useful measures or executing a full-scale evaluation. In point of fact, at present, those interested in evaluating nonprofit fundraising performance and efficiency do not have an established array of ratios, APMs, or other measures to watch. A productive avenue for future research, therefore, might be to turn the insights in this article into a fuller set of measures for nonprofit managers and public policymakers tasked with evaluating various aspects of nonprofit effectiveness.
Arthur C. Brooks is an associate professor of public administration and the director of the Nonprofit Studies Program at Syracuse University's Maxwell School of Citizenship and Public Affairs. His research focuses on philanthropy, nonprofit economics, and cultural policy.

Notes

For helpful suggestions on this paper, I am grateful to George von Furstenberg. This paper was prepared for the "Evaluation Methods and Practices Appropriate for Faith-Based and Other Providers of Social Service" conference, October 2003, Indiana University.

1. See, for example, The Domain Group's website.
2. These measures are not intended to be comprehensive, or necessarily even the best measures for a given purpose; rather, I am using them to demonstrate the strengths and weaknesses of ratios per se as measures for performance evaluation.
3. The data are available at
4. This may lead to problems of interpretation, if government funding is unconnected to F, as I will show in the next section.
5. The Hispanic American Civic Center's high rankings on these measures are driven mostly by a low fundraising budget. The organization's relatively high value of D may be a function of government grants, suggesting that conflating private and public subsidies can lead to results that are hard to interpret. Actual evaluations should try to avoid this problem.



6. Group assignment could be random or follow a system of stratification by demographics or organizational characteristics.
7. More sophisticated regression-based approaches are also possible. For examples, see Meyer (1995).
8. Note that questions about this specific concern require data that, unlike my example here, disaggregate public and private contributions.

References

Brooks, A. C. (2000). The use and misuse of adjusted performance measures. Journal of Policy Analysis and Management, 19(2), 323–328.
Brooks, A. C. (2003). Do government subsidies to nonprofits crowd out donations or donors? Public Finance Review, 31(2), 166–179.
Brooks, A. C. (2004). The effects of public policy on private charity. Administration & Society, 36(2), 166–185.
Ebaugh, H. R., Pipes, P. F., Chafetz, J. S., & Daniels, M. (2003). Where's the religion? Distinguishing faith-based from secular social service agencies. Journal for the Scientific Study of Religion, 42(3), 411–426.
Frumkin, P. (2002). On being nonprofit: A conceptual and policy primer. Cambridge, MA: Harvard University Press.
Greenlee, J. S., & Trussel, J. (2000). Predicting the financial vulnerability of nonprofit organizations. Nonprofit Management and Leadership, 11, 199–210.
Hager, M. A. (2001). Financial vulnerability among arts organizations: A test of the Tuckman-Chang measures. Nonprofit and Voluntary Sector Quarterly, 30(2), 376–392.
Independent Sector. (2001). The new nonprofit almanac IN BRIEF: Facts and figures on the independent sector. Washington, DC: Author.
Meyer, B. D. (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statistics, 13(2), 151–160.
Roper Center for Public Opinion Research. (2000). Social capital community benchmark survey. Retrieved October 1, 2003.
Rose-Ackerman, S. (1982). Charitable giving and excessive fundraising. The Quarterly Journal of Economics, 97(2), 193–212.
Rubenstein, R., Schwartz, A. E., & Stiefel, L. (2003). Better than raw: A guide to measuring organizational performance with adjusted performance measures. Public Administration Review, 63(5), 607–615.
Stiefel, L., Rubenstein, R., & Schwartz, A. E. (1999). Using adjusted performance measures to evaluate resource use. Public Budgeting and Finance, 19(1), 67–87.
Tuckman, H. P., & Chang, C. F. (1991). A methodology for measuring the vulnerability of charitable nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 20, 445–460.
Van Slyke, D. M. (2003). The myth of privatization in contracting for social services. Public Administration Review, 63(3), 296–315.
