
Speaking for Themselves

Advocates’ Perspectives on Evaluation

A Research Study by Innovation Network, Inc. Commissioned by the Annie E. Casey Foundation and The Atlantic Philanthropies


Table of Contents
Foreword
Acknowledgements
Introduction
Methodology
Terminology
Organizational Characteristics
Findings: Advocacy Approaches
    Effective Strategies
    Requisite Capacities
Findings: Evaluation Practices
    Interim Measures of Success
    Resources for Documenting Progress
    Communicating Advocacy Success
Recommendations
Conclusion
Appendix One: Survey Questions
Appendix Two: Source Data


Foreword

To convey concisely what has happened over the past couple of years to strengthen advocacy evaluation, perhaps the best words are "awareness to action." This is a shift advocates know well. Before the public can be mobilized and before policymakers can be convinced to take action, they must be aware of the issue and the policy implications. "Increased awareness" is one of those interim outcomes or benchmarks that can help advocates see if a campaign is on track.

We think it's fair to say that the effort to build capacity for more effective advocacy evaluation is indeed on track and making good progress. A growing number of funders and evaluators have moved beyond awareness and are actively engaged in improving practice. Still, there has been a significant gap in the work to date: What do the advocates themselves think about evaluating their own work? What skills and resources do they need to do it? What do they think of the evaluation methods and tools now available?

As an early step toward addressing this gap, our foundations provided support for this research initiative, intended to confirm anecdotal knowledge and answer some basic questions. Designed and implemented by Innovation Network, the research captures advocates' perspectives on advocacy work and on evaluation practice.

These survey data show that only one in four organizations in this sample has evaluated its advocacy work. The report also notes, "The advocates who took the survey have mixed feelings about evaluation." Neither of these findings is a surprise. Obviously, there is still work to do in order to make advocacy evaluation a real contributor to social change. But by engaging the advocates in helping us think about how best to align evaluation practice with real-world advocacy work, we are better positioned to respond effectively.

Despite the complexities of evaluation, advocates are looking for ways to integrate it into their work and use it to inform strategy. They may be struggling with basics like articulating intermediate outcomes. They may already have some experience with evaluation, but require more advanced tools and methodologies. It is our responsibility as a field to continue developing and sharing resources to meet their needs.

Jackie Williams Kaye, Strategic Learning and Evaluation Executive, The Atlantic Philanthropies
Tom Kelly, Evaluation Manager, the Annie E. Casey Foundation

Acknowledgements

To the Annie E. Casey Foundation, The Atlantic Philanthropies, the JEHT Foundation, Beverly Buck, the Public Welfare Foundation, the staff of the GrantStation Insider, and everyone else who helped us publicize this research project: Thank you. Without your help identifying advocates to participate in the survey, we would not have been able to produce this report.

To everyone who responded to the survey: We hope you find this report useful, and that the tools and resources that come out of this research will amply repay you for taking the time to share your experience. Without your thoughtful feedback, this work could not have happened.

To our colleagues in the field of advocacy evaluation: Thank you for your contributions. This work was possible because of what each of you has completed thus far. In particular, to Julia Coffman: Thank you for your generous time and consideration in reviewing this document.

—The Innovation Network Team

This research was funded by The Atlantic Philanthropies and the Annie E. Casey Foundation. We thank them for their support, but acknowledge that the findings and conclusions presented in this report are those of the authors alone, and do not necessarily reflect the opinions of The Atlantic Philanthropies or the Annie E. Casey Foundation.


Introduction

In an age of ever-increasing social challenges and limited public and private funding, direct service alone is not enough to change the way things are. Serving hot meals to 50 families tonight will mean those people do not go to bed hungry, but they will still need dinner again tomorrow. Direct services improve lives, but do not address the systemic problems that lead people to need services in the first place. In an effort to tackle the roots of these problems, many nonprofit organizations and foundations have added advocacy and policy change efforts to their program strategies.

As advocacy efforts increase, nonprofits and their funders want to know what impact they are having. Many individuals and organizations are working to answer this question by building advocates' ability to evaluate their work. The Atlantic Philanthropies and the Annie E. Casey Foundation are among those at the forefront of the field-building efforts. Both foundations support advocacy as a way to address major social challenges. They also support using evaluation as a mechanism that empowers grantees and ensures greater effectiveness. These foundations see advocacy evaluation as a tool that can not only demonstrate impact, but also strengthen future advocacy efforts.

Advocacy evaluation is coming into its own as a specialization. Although hardly anyone was talking about advocacy evaluation ten years ago, the field now has websites, conference sessions, and professional networks devoted to it. All this attention has resulted in helpful and practical lessons for the field. A good deal of work has been done to gather and share the thoughts of funders and evaluators on this topic. From these reflections, we can distill a number of challenges and implications relative to evaluating advocacy—important factors that influence the evaluation design.1 Some of the most important of these factors are:

- Time frame. Many advocacy campaigns—or even components of an advocacy campaign—take longer than the duration of a grant award. Trying to meet reporting requirements, advocacy organizations may overstate their ability to accomplish a "big win" (e.g., a new immigration policy, a cleaner river, an improved foster care system) within a single grant period—even though such a "big win" may take decades to attain.

- Need for sustainability. Advocacy organizations need to be sustainable over the life of an issue, which, as noted above, can be decades. This kind of sustainability requires strong infrastructure and robust capacity—qualities that will keep an advocacy organization viable for as long as it takes to achieve its ends.

- Contribution, not attribution. Proving attribution or causality is costly and difficult. It can also damage alliances with like-minded organizations if it appears that one organization is trying to "own" a victory. There are multiple players, partnerships, and coalitions involved in nearly any given issue. Evaluating the contribution of an organization yields useful results without alienating partners.

- Documenting progress. Since advocacy's long-term goals are relatively far into the future, advocates need interim measures of success. These serve as milestones to show work is on track, keeping advocates informed about their own progress and helping them share success stories on the way to the "big win."

A great deal of progress has been made in advocacy evaluation, as funders and evaluators have identified and begun to craft solutions to these challenges. Nevertheless, there exists a gap in the work to date: What do the advocates themselves have to say? What do advocates think about evaluating their own work? What skills and resources do they need to do it? What do they think of the evaluation methods and tools now available?

Our hope is that this report will begin to fill the gap and add advocates' voices to the advocacy evaluation conversation. The purpose of this research is to gain a better understanding of advocates' views on evaluation, the advocacy strategies and capacities they find effective, and current evaluation practices. This research effort is an early step, designed to confirm anecdotal evidence and answer some basic questions about advocacy evaluation practice. During the analysis of these initial data, several additional questions have arisen about advocacy and evaluation. These questions, included in the conclusion of the report, suggest a direction for possible future research.

1 To read more, see The Challenge of Assessing Policy and Advocacy Activities: Part I and Part II at www.blueprintrd.com.

Methodology

Innovation Network designed this research initiative to capture a sample of advocates' perspectives. This report is based on data gathered through an online survey, attached as Appendix One. Grantees of The Atlantic Philanthropies, the Annie E. Casey Foundation, the JEHT Foundation, and the Public Welfare Foundation were invited to participate in the survey. We also linked to the survey from our website (www.innonet.org), mentioned it in newsletters (including the Advocacy Evaluation Update and the GrantStation Insider), and emailed information about it directly to advocacy organizations. The survey was available online from October 2007 through April 2008, and received 211 complete responses from nonprofit staff involved in advocacy work.

We attempted to make this sample diverse and representative of advocacy nonprofits, but by no means is the sample truly random. We cannot definitively generalize the report findings to all nonprofit advocacy organizations, but we can begin to understand likely trends and topics for deeper exploration.

Terminology

Innovation Network uses some non-traditional definitions in our advocacy evaluation work.

- Advocacy: We define advocacy as "a wide range of activities conducted to influence decision makers at various levels." This definition intentionally includes not only traditional advocacy work like litigation, lobbying, and public education, but also capacity building, network formation, relationship building, communication, and leadership development.

- Evaluation: We define evaluation as "the systematic collection of information about a program that enables stakeholders to gain better understanding of the program, improve its effectiveness, and/or make decisions about future programming." Traditional definitions of program evaluation focus on progress towards goals, processes, and impacts. "Evaluation" in the context of advocacy includes all those elements, and focuses on using evaluation lessons to inform strategy.

- Advocates: People who identify themselves as "advocates" rarely devote 100 percent of their time and resources to advocacy efforts. Most advocates, as we will explore later in the data, conduct both advocacy and direct service activities.

Organizational Characteristics

To understand the context and origin of survey answers, we asked respondents about their organizations. Responses represented groups of every size, range, and scope. Ninety-six percent of respondents were from organizations based in the United States. The remaining respondents represented six countries: there were three respondents from Canada and one respondent each from Ecuador, El Salvador, Ghana, Hungary, and South Africa.

The survey also asked about the geographic focus of the organization's advocacy efforts.2 Nearly two-thirds of respondents reported that the focus of their advocacy work is at either the community, local, or state levels. Only one in five organizations reported that their work is at the U.S. national level.

Figure 1: Geographic focus (n = 209)
    Community and Local: 33%
    State: 31%
    National (US-based): 21%
    International & National (non-US based): 11%
    Regional: 4%

2 The question was open-ended and responses were recoded for analysis.

More than 60 percent of respondents reported annual budgets of less than $1 million. Almost 50 percent had annual budgets of less than $500,000, and thirteen percent reported budgets no greater than $50,000.

Figure 2: Annual budget size (n = 209)
    Under $50,000: 13%
    $50,000–$99,999: 7%
    $100,000–$249,999: 9%
    $250,000–$499,999: 17%
    $500,000–$999,999: 16%
    $1 Million–$2.9 Million: 18%
    $3 Million–$4.9 Million: 7%
    $5 Million–$9.9 Million: 5%
    $10 Million or More: 8%

Figure 3: Annual budget, by greater than/less than $1 million (n = 209)
    Less Than $1 Million: 62%
    $1 Million or More: 38%

Advocates also varied in their programmatic focuses: The most common category was "Human Services" (39.9% of responses), followed closely by "Public/Societal Benefit" (30.8%).3

Figure 4: Programmatic focus (n = 208)
    Human Services: 40%
    Public, Societal Benefit: 31%
    Education & Research: 8%
    Arts, Culture & Humanities: 5%
    Environment & Animals: 2%
    International: 2%

The organizations represented in the sample are relatively experienced: The majority (57.0%) has engaged in advocacy for more than ten years. Only three percent of respondents reported having engaged in advocacy efforts for less than a year.

Figure 5: Advocacy experience (n = 207)
    Less Than 1 Year: 3%
    1 Year–10 Years: 40%
    More Than 10 Years: 57%

3 The question was multiple choice. Explanations for the category of "other" were recoded for analysis.

Unsurprisingly, organizations that have been involved in advocacy for some time tend to have larger budgets than the newcomers. Organizations that are more established have had more time to develop their infrastructure and grow their budgets.

Figure 6: Annual budgets, by advocacy experience (n = 189)
    Less than 1 year of experience: 66.7% under $1 million, 33.3% $1 million or more
    1 year to 10 years: 43.6% under $1 million, 56.4% $1 million or more
    More than 10 years: 19.0% under $1 million, 80.9% $1 million or more

We also asked about the percentage of resources devoted to advocacy. As mentioned earlier, many "advocacy organizations" are hybrid organizations conducting both advocacy and direct service activities. Essentially, most organizations are not primarily advocacy organizations. More than half of the respondents shared that their organization devotes 50 percent or less of its resources to advocacy. Only a third of respondents are expending the majority of their resources on advocacy-related activities.

Figure 7: Percentage of resources dedicated to advocacy (n = 206)
    0–10% of Resources: 25%
    11%–25% of Resources: 17%
    26%–50% of Resources: 12%
    51%–75% of Resources: 17%
    More than 75% of Resources: 22%
    I Don't Know: 7%

Figure 8: Percentage of resources dedicated to advocacy, by greater than/less than 50% of resources dedicated to advocacy (n = 206)
    50% or Less: 59%
    More Than 50%: 41%

Interestingly, smaller organizations tend more toward "pure" advocacy. Smaller organizations (those with an annual budget of less than $1 million) are more likely than larger organizations to devote a higher percentage of organizational resources to advocacy activities.

Figure 9: Annual budgets, by percentage of resources dedicated to advocacy (n = 177)
    Less than $1 million: 42.9% devote 50% or less, 57.1% devote more than 50%
    $1 million or more: 64.5% devote 50% or less, 35.5% devote more than 50%
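Figures 6 and 9 are simple cross-tabulations of two survey answers. As a minimal sketch of the underlying arithmetic (using invented stand-in records, not the study's actual dataset), row percentages like Figure 9's can be computed by normalizing a cross-tab within each row:

    import pandas as pd

    # Hypothetical stand-in for the recoded survey responses; the real
    # dataset is not published in this report.
    responses = pd.DataFrame({
        "budget": ["less than $1 million", "$1 million or more",
                   "less than $1 million", "less than $1 million"],
        "advocacy_share": ["50% or less", "more than 50%",
                           "more than 50%", "50% or less"],
    })

    # Cross-tabulate budget size against share of resources devoted to
    # advocacy. normalize="index" turns counts into within-row
    # percentages, which is why each row of Figure 9 sums to 100 percent.
    table = pd.crosstab(responses["budget"], responses["advocacy_share"],
                        normalize="index") * 100
    print(table.round(1))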

The survey also examined the sources from which respondents received funding to support their advocacy work.4 The majority of respondents (79.9%) reported private funding5 as the source of support for their advocacy work. The second most commonly identified source of funding was the public sector6 (26.8%). Of the 179 responses, 21 percent reported using a mix of private, public sector, member, and self-funding. The vast majority of advocacy work is supported by private funding sources: grants, individual donations, and other charitable donations.

Figure 10: Funding sources (n = 179)7
    Private funding: 143 (79.9%)
    Public funding: 48 (26.8%)
    Membership dues: 15 (8.4%)
    General operating funds: 6 (3.4%)
    Self-funded: 4 (2.2%)
    Fee-for-service: 2 (1.1%)

Many respondents qualified their responses about funding. For example, they clarified that they use public funding for general operations, whereas private, membership, and self-funding goes to programmatic and advocacy work. This differentiation is important: Respondents expressed a need to "walk the line" between advocacy and lobbying. Public funding comes with strict protocols about lobbying, and there is a perception that not all funders support advocacy work, since it can be perceived as political. As a result, respondents say it can be difficult to be explicit with funders about their advocacy goals. For example, one respondent commented that the ability to do advocacy work is hindered by restrictions on funding: Funds received are often tied to certain activities set forth by their funders. As another advocate writes:

    It is extremely difficult to walk the line between the work we do as it relates to policy change, and on the other hand not being able to include lobbying activities or activities that might be perceived as lobbying in [funder] reports. While the actual percentage of time we spend on lobbying under the IRS is within the range, it is hard to describe our work without it seeming like all of our work is lobbying.

Overall, public funding has more legal limitations when it comes to balancing advocacy and lobbying work. It is thus not surprising that the majority of respondents reported receiving advocacy funds from private funders.

4 The question was open-ended and responses were recoded for analysis.
5 Private funding includes foundations, faith-based organizations, nonprofits functioning as re-granters, and in-kind contributions from individuals and groups.
6 Funding from various levels of government.
7 Responses total more than 100 percent as respondents were allowed to give more than one response.
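As footnote 7 notes, each Figure 10 percentage is a count of mentions divided by the 179 respondents, not by the total number of mentions. A minimal sketch of that arithmetic, again with invented data:

    import pandas as pd

    # Invented multi-select answers: each respondent may name several
    # funding sources.
    funding = pd.Series([
        ["private"],
        ["private", "public"],
        ["membership", "private"],
        ["public"],
    ])

    # Count mentions per source, then divide by the number of respondents
    # (not the number of mentions). With multi-select questions the
    # resulting percentages can total more than 100.
    percents = funding.explode().value_counts() / len(funding) * 100
    print(percents.round(1))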

Findings: Advocacy Approaches

In addition to capturing advocates' perspectives on evaluation, this research effort offered advocates the opportunity to reflect upon the effectiveness of their work. To that end, the survey asked three open-ended questions:

- What are your organization's primary advocacy activities?
- Which advocacy strategies employed by your organization have been most effective? Why do you think those strategies have been most effective?
- Which of your organization's advocacy capacities have been most essential? Why have they been most essential?

Advocates' answers to these questions offer several insights for funders, evaluators, and other advocates.

Effective Strategies

In the wide range of activities used by advocates, certain approaches appear to be more commonly employed than others. For instance, legislative advocacy (56% of respondents) appears to be far more prevalent than judicial (12%) or administrative advocacy (5%). Public education (59%) and grassroots organizing (47%) are also popular.

Figure 11: Advocacy activities (n = 184)8
    Public education: 122 (58.9%)
    Legislative and public policy advocacy:9 115 (55.5%)
    Community and grassroots organizing: 98 (47.3%)
    Research and publications: 48 (23.2%)
    Capacity-building and training for advocacy: 29 (14.0%)
    Judicial advocacy: 25 (12.1%)
    Network and coalition activities: 16 (7.7%)
    Civic and community engagement: 10 (4.8%)
    Administrative advocacy: 10 (4.8%)
    Communications and media advocacy: 9 (4.3%)
    Modeling best practices: 4 (1.9%)
    Rallies, marches, and civil disobedience: 4 (1.9%)
    "All": 3 (1.4%)

When asked which strategies10 were most effective, advocates cited many of the same approaches.11 Looking across the 174 responses, the four most common themes were:

- Community or grassroots organizing (31.6%);
- Coalition building (28.7%);
- Public education (26.4%); and
- Legislative advocacy (21.8%).

The strategy most commonly reported as having been effective is grassroots organizing. Many respondents noted the unique power that grassroots action has in influencing policymakers. Of the 55 respondents who mentioned this theme, 38 percent also referenced a capacity-building element to their work: training and skills-building activities to enable community members to advocate more effectively.

Coalition building was mentioned by nearly three out of ten respondents as one of their most effective strategies, despite not being commonly referenced in response to the "activities" question. Partnering with allies can allow for a stronger and more credible voice when pursuing policy change. As one respondent noted, "This is effective because many agencies don't have the capacity, resources, or flexibility to do advocacy work on their own. This also cushions any backlash that may result from advocacy so that any one agency is not in the 'line of fire.'" Another respondent specifically referenced the benefit of including "disparate" and even "competitive" groups in one's coalition in order to "broaden perspectives" and strive for "common goals, previously regarded as territorial."

Public education and awareness building was the third most commonly referenced strategy. This category of responses includes research, publications, media outreach, public events, website development, email campaigns, and newsletters. Some organizations conduct public education efforts to sway public will. Others use outreach to expand their grassroots networks. Still others hope such efforts will shift the opinions of targeted policymakers.

8 The question was open-ended and responses were recoded for analysis. Respondents were allowed to provide more than one response.
9 Only 5.2 percent of the respondents who reported engaging in some form of legislative advocacy used the term "lobbying" when describing their activities.
10 In the survey, we describe a "strategy" as follows: "Your organization's long-term plan of action to achieve an advocacy or policy change goal. For example, a human rights organization's strategy may be to develop a coalition of organizations that will create a grassroots movement that will influence policymakers and result in a national policy change. Another example may be a community health clinic that implements a public awareness campaign through advertising and programming asking residents to get an HIV/AIDS test to combat a rising infection rate."
11 The question was open-ended and responses were recoded for analysis. Since only one-quarter of respondents reported evaluating their advocacy efforts, these responses may be based on an intuitive understanding of effectiveness, rather than on any systematic data collection.

The fourth theme referenced by respondents was legislative advocacy aimed at "concrete" policy changes. Striving for policy change is attractive because it could potentially have a very broad impact. Nearly one-third of the respondents who mentioned legislative advocacy suggested that building personal relationships with relevant policymakers and securing face-to-face meetings are essential steps in this process.

Many organizations listed a combination of strategies in response to these questions: Seven respondents (4%) specifically mentioned a multi-prong or integrated approach to advocacy. One noted: "Integrated advocacy campaigns that coordinate direct lobbying, communications, and organizing are most effective because it is critical for policymakers to hear consistent messages." Another added: "We have tried to strike a balance between organizing a grassroots network to influence policymakers, on the one hand, and designing a winnable policy proposal on the other. [Even] an incredible network can't pass an unwieldy, complex proposal."

Requisite Capacities

Creative and effective strategies are no guarantee of success. Advocates also require the capacity to engage effectively in advocacy, and to keep their organizations going until long-term goals are achieved. Funders and evaluators of advocacy efforts have recognized the importance of building the capacity of advocacy organizations. In the groundbreaking publication The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach, Blueprint Research & Design, Inc. recommended that advocates outline benchmarks for both policy change and capacity building efforts. TCC Group followed that report with research into what capacities were common across effective advocacy organizations.12 For its study, TCC Group interviewed thirteen policy advocacy experts: funders, researchers, consultants, and practitioners. TCC Group organized the findings into seven themes of common capacities among effective advocacy organizations:

- A high level of visibility, media savvy, and effective relationship-building;
- Visionary leadership;
- An actively engaged board;
- Ample staffing and skillful management;
- Strong networks;
- Polished technical skills (in the areas of research, policy analysis, budget analysis, communications, lobbying, and community organizing); and
- A collaborative and celebratory culture.

To explore this issue further, our survey asked advocates to identify which capacities, in their opinions, were most essential for effective advocacy organizations.13 One hundred and seventy respondents offered up their opinions, from which four themes rose to the top.14

Figure 12: Advocacy Capacities (n = 170)15
    Research and Communications: 71 (41.8%)
    Organizational Support for Advocacy: 70 (41.2%)
    Collaboration with External Parties: 55 (32.4%)
    Resources and Staffing for Advocacy: 39 (22.9%)

Research and Communications: As all advocates know, words and ideas are the building blocks of advocacy campaigns. Unsurprisingly, research and communications were among the most important capacities from the advocates' perspective. This category includes strategic messaging, research to bolster the cause's position and refute competing claims, production and distribution of marketing materials, and public events. One respondent emphasized the importance of communications by observing, "The communication piece is essential because the formulation of what that one voice is saying and how it is saying it is crucial. Our ability to get that common message out—in person, through the media, and through pieces like a website, listserv, and newsletter—has been vital to success."

Organizational Support for Advocacy: Respondents identified a number of infrastructural characteristics as essential to effective advocacy, including board and leadership commitment to advocacy, articulated decision-making structures, and clear and strategic plans and procedures. We have categorized these as "organizational support for advocacy." Several echoes of the capacities identified by TCC Group emerge here, including visionary leadership, board engagement, and skillful management. In the words of one advocate, "We've also invested in strengthening our internal infrastructure to maximize our impact—e.g. an outreach coordinator, enhanced communications capacity, etc. Building our internal capacity has been important to the success of most of our efforts."

Collaboration with External Parties: The survey respondents, like the informants interviewed by TCC Group, saw the ability to build strong relationships and networks as a vital capacity for effective advocacy. These relationships include grassroots and community organizing, contacts with decision makers, and connections with the media. One respondent noted, "I think being able to build a committed, highly participatory base and acting on their most relevant policy needs has been essential to our success."

Resources and Staffing for Advocacy: Advocates recognize the importance of having full-time dedicated staff with legislative and communications experience. "We started out with a lobbyist; added organizing [staff]; added policy analysts; and finally added communications and development staff. The lobbyist is the most critical…having someone in the State House is … our only way to reach legislators." Having dedicated staff (and organizational support, as mentioned) helps advocates remain flexible and agile.

The responses from advocates elicited by this survey largely validate the list of essential capacities previously compiled by TCC Group, with one notable exception: The seventh theme identified by TCC Group—"a collaborative and celebratory culture"—was not mentioned by advocates. While there is general agreement on what is a necessary capacity, it would be interesting to explore any differences in how those capacities are perceived. For example, these survey results imply that effective communications is the most critical capacity from the advocates' perspective. Would funders, evaluators, or other policy experts agree?

12 TCC Group, The California Endowment Advocacy General Operating Support Evaluation: Summary of Expert Interview Findings. June 28, 2006.
13 The question also offered the following explanation: "Advocacy capacities are the skills that allow your organization to conduct advocacy activities. Examples of advocacy capacities include communications and marketing, staff and resources, organizational commitment, articulated decision-making structures and procedures, etc."
14 The top two capacities were both listed in the example text preceding the question, so it is possible that these results are skewed upwards. On the other hand, these capacities were listed as examples precisely because they were assumed to be common, and would therefore be helpful in explaining the capacities concept to survey participants. It is unclear whether including named capacities in the example skewed the data.
15 The question was open-ended and responses were recoded for analysis. Respondents were allowed to provide more than one response. See Appendix Two for more detailed responses.

Findings: Evaluation Practices

Understanding the strategies and capacities necessary for advocacy success—such as those described in the preceding section—is valuable. It helps to illuminate the path to success for advocates and funders. To learn what works best, advocates (and/or their evaluators) must systematically assess their work. But only one in four survey respondents (24.6%) reported that their advocacy work has been evaluated.16

Figure 13: Advocacy work has been evaluated (n = 211)
    Yes: 25%
    No: 57%
    I Don't Know: 18%

Of the scant 25% of organizations that have evaluated their work, the evaluations have been conducted by external evaluators, internal evaluators, other internal staff, or a combination.

Figure 14: Evaluation of Advocacy Work (n = 50)
    Evaluator from outside of the organization: 42%
    Evaluator from within the organization: 30%
    Someone else from within the organization: 14%
    Combination of internal and external evaluation: 14%

The first two categories—internal and external evaluators—represent the two most typical types of individuals who would conduct an evaluation for a nonprofit advocacy organization. These people are likely evaluation professionals whose primary function is to plan and implement evaluation designs. Compared to the total number of organizations that participated in the study, only 17% have had their work assessed by a professional evaluator. By no means must evaluation always be done by a professional, but having an evaluator manage the process frees up valuable staff time to focus on actually doing the advocacy activities.

We also asked organizations how they benefited from the evaluation, and how their organization used what they learned.17 The most common benefit was that organizations were better able to refine their strategy or next steps (36.2%). Examples of responses include:

- "[Evaluation] helped to hone the advocacy agenda and determine next steps."
- "[Evaluation] helped find strengths and weaknesses [and] set priorities for next five years based on building the base and community identified needs."
- "[Evaluation] provided us important feedback that assisted us in developing new strategies and tactics."

Figure 15: Benefits of Evaluation (n = 47)
    Better able to refine/focus strategy or next steps: 17 (36.2%)
    Learned about progress on strengthening process or outcomes: 6 (12.8%)
    Refined program and/or evaluation plan: 4 (8.5%)
    Learned about evaluation: 3 (6.4%)
    Used to attract/maintain financial support: 3 (6.4%)
    To attract/maintain stakeholder support/engagement: 3 (6.4%)
    Learned more about how a strategy worked: 3 (6.4%)
    Informed leadership of program value: 1 (2.1%)
    Inform assessment of funding and time vs. remaining program work: 1 (2.1%)
    Implement and follow organizational strategic plan: 1 (2.1%)
    Learned more about member/client needs: 1 (2.1%)
    Members/clients who participated felt valued: 1 (2.1%)
    Other responses:18 8 (17.0%)

16 To the best of our knowledge, no field-wide data are available about the proportion of nonprofits that evaluate their work. Lacking a basis for comparison, we cannot estimate any difference in evaluation prevalence between our sample and the field.
17 For each of these questions, responses were recoded for analysis. Some responses fell into multiple categories.
18 Other responses include: "Non-specific assertion of benefit" (four responses), "The evaluation is not complete" (two responses), "No mention of learning—evaluation conducted per contract requirements" (one response), and "No mention of learning—evaluation for sake of accountability" (one response).

Responses to the question of how organizations used evaluation learnings were very similar to the answers to the "benefits" question. Several individuals' answers were in the vein of "see above." Most commonly, organizations used evaluation learnings to inform program design/redesign (40.5%)—nearly identical to the most common response of "Better able to refine/focus strategy or next steps" to the previous question.

Figure 16: Use of Evaluation Learnings (n = 42)19
    Program design/redesign: 17 (40.5%)
    Changes in staffing/staffing procedures: 6 (14.3%)
    Develop strategic plans: 5 (11.9%)
    Fundraising: 4 (9.5%)
    Plan for organizational/program changes: 3 (7.1%)
    Affirmation of effective strategies: 3 (7.1%)
    Increase/strengthen stakeholder support: 2 (4.8%)
    Gain support for next steps: 1 (2.4%)
    Refine evaluation plan: 1 (2.4%)
    Other:20 7 (16.7%)

Answers to the two questions—"benefits of evaluation" and "use of evaluation learnings"—fell into nearly identical themes: funding/fundraising, organizational/program planning, strategic planning, and increasing/strengthening stakeholder support.

Working through an evaluation is not without its frustrations, and advocates shared many challenges to evaluating advocacy work. The most common was simply finding the time or money to conduct the evaluation (23.9%). Nearly every other challenge relates to organizations' not having the evaluation capacity (experience, knowledge, and skills) necessary to conduct a meaningful evaluation.

Figure 17: Evaluation Challenges (n = 46)
    Lack of resources: time and/or money: 11 (23.9%)
    Lack of evaluation knowledge and/or tools: 6 (13.0%)
    Some work/results of the work are hard to measure: 5 (10.9%)
    Identifying impact measures and/or results: 4 (8.7%)
    Identifying contribution to a larger change: 4 (8.7%)
    Identifying interim outcomes: 3 (6.5%)
    Identifying quantitative measures and conducting analysis: 3 (6.5%)
    Focusing the evaluation: 3 (6.5%)
    Resistance to participation by constituents: 2 (4.3%)
    Weak data collection and results: 2 (4.3%)
    Getting the evaluation results to the target audience: 1 (2.2%)
    Crafting meaningful recommendations from the findings: 1 (2.2%)
    Other:21 9 (16.7%)

In the survey's open-ended general comments box, 19 respondents expressed the need for more tools, training, and technical assistance. These individuals related that they were building an evaluation strategy and wanted more support in developing their own capacity. For example:

- "We are just beginning to look at evaluation methods for advocacy. We still need some hand-holding as we go forward because it is not clear if there is a 'best practice' evaluation method as of yet. We would love all the help we can get."
- "We are seeking tools to help us better define success in advance so we know if we've achieved it."

Also in the open-ended comments segment, eleven respondents reported being beyond "Advocacy Evaluation 101" and needing advanced tools. One respondent noted, "We are looking for the next layer of tools to help measure the impact of our work." Another commented, "We are able to set up realistic benchmarks in biannual retreats that then help us to evaluate mid-term victories and final victories." A final challenge mentioned in the open-ended comments was adoption of evaluation practices. Advocates are organically developing approaches to gauge progress, but there are obstacles to formalizing these processes and making them systematic.

19 The question was open-ended and responses were recoded for analysis. Respondents were allowed to provide more than one response.
20 Other responses include: "Same response as above response" (five responses), "I don't know" (one response), and "Not applicable" (one response).
21 Other responses include: "Evaluation is not challenging" (six responses) and "Not applicable" (three responses).

Interim Measures of Success

Many advocates use long-term goals (a new immigration policy, a cleaner river, an improved foster care system) as their only benchmarks for success. But assessing advocacy work in this way only reveals whether an end goal has been achieved or not. Exclusive focus on the end goal contributes little to the advocacy campaign as it unfolds. In order to use evaluation results to inform mid-course strategy shifts, advocates need to track interim successes.

Advocacy efforts tend to take place over an extended timeframe—from at least several months to decades or even longer. With such a long timeframe, what can advocates measure along the way to make sure they stay on the path to victory? We asked the following question: "Short of overall success, how does your organization know you are making progress? Are there any specific indicators or benchmarks your organization tracks to measure progress?" Advocates responded with numerous examples of interim measures of their success. In the following figure, we have grouped the responses into categories—eight of them—to distill underlying themes.

Figure 18: Interim Measures of Success (n = 150)22

1. Building the Base: 61 (40.7%)
    - Increased participation by members or other audiences: 22 (14.7%)
    - Increased number of partners, allies, network members: 10 (6.7%)
    - Increased stakeholder participation in decision-making process: 8 (5.3%)
    - Increased number of inquiries: 9 (6.0%)
    - Increased awareness and knowledge of issue and/or organization: 6 (4.0%)
    - Number of target population reached through communications: 4 (2.7%)
    - Increased target audience support for issue: 2 (1.3%)

2. Decision Maker Support: 34 (22.7%)
    - Increased engagement and support of decision makers: 18 (12.0%)
    - Increased decision maker attention to issue: 6 (4.0%)
    - More or strengthened relationships: 4 (2.7%)
    - Number and quality of meetings with decision makers: 4 (2.7%)
    - Increased access to decision makers: 1 (0.7%)
    - Increased decision maker inquiries: 1 (0.7%)

3. Strengthened Infrastructure or Position Within Issue Movement: 23 (15.3%)
    - Increased funding: 17 (11.3%)
    - Growth or strengthening of organization reputation/recognition: 4 (2.7%)
    - Decreasing staff turnover rate: 1 (0.7%)
    - Gaining a seat at "the table": 1 (0.7%)

4. Communications: 20 (13.3%)
    - Increased media coverage: 10 (6.7%)
    - Increased reflection of organization messages: 7 (4.7%)
    - Number of materials distributed: 3 (2.0%)

5. Opening Window of Opportunity: 6 (4.0%)
    - Issue climate changing to support position: 5 (3.3%)
    - Increased campaign momentum: 1 (0.7%)

6. Issue Campaign Sustainability or Strengthening: 2 (1.3%)
    - Increased diversity of individuals and organizations in campaign: 1 (0.7%)
    - Number of leaders developed: 1 (0.7%)

7. Interim Progress Tied to Legislative Victory: 47 (31.3%)
    - Campaign, policy, or legislative victory: 39 (26.0%)
    - Legislative victory as an interim outcome towards systems change or condition change: 7 (4.7%)
    - Better condition for target population: 1 (0.7%)

8. General Mention of Evaluation: 30 (20.0%)

22 The question was open-ended and responses were recoded for analysis.

From an evaluator's perspective, some of these measures may be more helpful than others to show steps along the journey to success. One reason for a focus on interim measures is so advocates can know they are making progress all along their journey, rather than waiting until the very end. Imagine running a race blindfolded, not knowing how many competitors are ahead or even how much distance remains. Lacking these simple visual cues makes it difficult to be strategic. Now, imagine removing the blindfold: You can see each mile marker and where your fellow competitors are. Suddenly you have information from which to formulate a plan and strategically adapt your pace so you have a better chance of winning the race, or at least improving your performance.

Applying the metaphor to advocacy, interim measures are the visual cues the runner can see once the blindfold is removed. Interim measures can tell advocacy organizations whether they have veered off course, have fallen behind, or are still headed in the right direction towards a finish line victory. Therefore, measures in the first six categories are likely more helpful for understanding progress. Advocates can use these measures—and others like them—when they are in the ever-changing and ill-defined "middle" of an advocacy campaign. Systematic collection and analysis of interim measures will enable advocacy organizations to think strategically about how best to reach the finish line.

Resources for Documenting Progress

To support advocates' adoption of interim progress measures, many organizations have developed free resources. Organizational Research Services ("ORS") has created a list of pertinent interim outcome categories23 with examples of specific indicators (measurable markers that an outcome has been achieved). ORS outlines six categories, four of which focus on interim outcomes:

- Shift in social norms;
- Strengthened organizational capacity;
- Strengthened alliances; and
- Strengthened base of support.

Another resource about interim outcomes is the Advocacy and Policy Change Composite Logic Model.24 This tool was developed collaboratively by Harvard Family Research Project, The California Endowment, The Atlantic Philanthropies, and the Annie E. Casey Foundation. The Composite Logic Model is a compact yet comprehensive planning tool that focuses on activities and their interim outcomes that lead to longer-term change. The interim outcome categories in the Model are similar to those described by ORS, but the Model also includes more specific interim outcomes. It also looks at contextual factors (such as changes in political, economic, and social climate) that are especially pertinent for advocacy organizations.

In our advocacy evaluation practice, Innovation Network tends to focus on interim measures that relate to relationship building. In short, our approach is as follows: Most advocacy issues revolve around a core of decision makers with the power and influence to motivate change. Surrounding this core are "key players"—other people and organizations who are interested in an issue. An advocacy organization is a key player in its own campaign. Other players include, e.g., people and groups directly affected by the issue; the media; universities, think tanks, and research institutions; churches and other faith communities; and labor organizations. An advocacy organization builds relationships in stages: increasing awareness of an issue, gaining allies, and winning champions. These stages are repeated for key players and decision makers in order to motivate change. We recently wrote a series of articles that discuss this process in detail. The series is available at www.innonet.org/resources (free login required).
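As an illustration only (the data model and names below are a hypothetical example, not a tool from this report), the staged relationship-building approach lends itself to a very simple tracking structure:

    from enum import IntEnum

    class Stage(IntEnum):
        # Stages of relationship building named in the approach above.
        UNAWARE = 0
        AWARE = 1      # knows about the issue
        ALLY = 2       # supports the position
        CHAMPION = 3   # actively advances the issue

    # Hypothetical key players mapped to their current stage.
    key_players = {
        "local newspaper": Stage.AWARE,
        "university research center": Stage.ALLY,
        "faith coalition": Stage.CHAMPION,
    }

    # Re-assessing these stages at intervals yields an interim measure:
    # movement toward ALLY or CHAMPION shows progress before any policy win.
    progress = sum(stage >= Stage.ALLY for stage in key_players.values())
    print(f"{progress} of {len(key_players)} key players are allies or better")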

Communicating Advocacy Success

Telling the story of your campaign's success is key to winning new supporters and securing funding. Effective storytelling is not always easy. We asked advocates to tell us about the perceived obstacles they face in communicating their advocacy success to funders.

Figure 19: Obstacles to Communicating Advocacy Success (n = 162)25
    Difficulty of documenting success: 48 (29.6%)
    Funder preference not to support advocacy: 44 (27.2%)
    Low internal capacity for evaluation: 23 (14.2%)
    Lack of funds for evaluation: 18 (11.1%)
    Lobbying tensions: 16 (9.9%)
    Funder reporting structure not optimized for reporting advocacy success: 12 (7.4%)
    Unique issue environment challenges: 8 (4.9%)
    Reported that they had no obstacles: 19 (11.7%)

Nearly a third of respondents (48 responses) ranked "difficulty of documenting success" as the number one obstacle to communicating success to funders. Respondents remarked that advocacy successes that fall short of large-scale changes go unrecognized by funders. One respondent illustrated the problem as follows: "The longer term outcomes … are more complex and take a long time to achieve and measure. We therefore must rely on more immediate results, limiting our ability to demonstrate the depth of the impact we are able to make." In not recognizing interim successes, funders miss the big picture of advocacy change: that "success in our world comes slowly."

Advocates also mused about measuring changes that seem impossible to quantify. Emotions aren't beans that can be counted. Empowerment can't be measured with a thermometer. Advocates have an intuitive sense that success is occurring, but where this success isn't concrete, it often remains untracked. What you don't track, you can't report.

In the same vein, many respondents expressed concern that funders want to see the effects of the work quickly, and do not understand the long-term, incremental nature of advocacy and policy change efforts. Respondents also noted that when funders express interest in supporting legislative and policy change campaigns, they often do not support preliminary work, such as leadership development, coalition building, and public education. There is a perception among advocates that funders prefer to support the high-profile work—activities that are more directly connected to an "end goal." For example, one respondent noted that funders become involved at strategic points during election campaigns, hoping to ignite one-off fixes. Another respondent wrote, "Funders often don't believe that building a base of everyday people can create long-term systemic change. They often judge success by policy victories (even shallow ones) and number of times a group gets into the media. Therefore, the long and difficult process of building power at the base is not seen as necessary for long-term change." This comprehensive statement captures how essential preliminary work is, and underscores the perspective that expectations for quick policy fixes are unrealistic.

A slightly smaller percentage (44 responses) cited their funders' preference not to support advocacy. Respondents suggest that funders shy away from advocacy, often equating it with lobbying. The line between advocacy and lobbying is thin; respondents sometimes feel it is necessary to reframe their advocacy success so it will not be confused with lobbying. One respondent wrote, "Many funders do not separate advocacy from political activity. They are afraid to get involved for fear they may appear political." This suggests that funders and advocacy organizations alike should seek out resources about the differences between advocacy and lobbying within the parameters of nonprofit status.26

The next two categories are two sides of the capacity coin. Internally, organizations lack staff capacity to conduct evaluations; externally, they lack funding to pay for evaluations. Twenty-five percent of respondents reported that their organization lacked the capacity—whether internal or external—to evaluate their advocacy activities. For example, one respondent shared that "We don't know where to go for funds to keep the organization itself operating, much less write successful grants for [evaluation]." Another respondent observed that persuading foundations to provide operational grants to an advocacy organization was difficult enough; getting targeted grants to support advocacy initiatives was even more challenging. When funding for staff, supplies, and other general operating necessities is short, building internal capacity is not an option.

An ongoing challenge for the evaluation field is the need for grantee reporting to be responsive to a variety of funder systems. This challenge applies equally, if not more so, to advocates. Approximately seven percent of respondents reported that funders' reporting systems are an obstacle to communicating success. The reporting systems were described as out of touch with day-to-day advocacy work, and lacking guidelines for capturing various degrees of success.

The final category of challenges to communicating advocacy success is environmental: Five percent of respondents reported that changes in the milieu of their advocacy efforts can limit or delay successes. The climate can shift with one controversial speech, media coverage of a single incident, or a change in the political viability of a legislator.

As mentioned earlier, less than a quarter of organizations evaluate their advocacy work. Does that mean that only a quarter of advocacy organizations have an information feedback loop to guide strategy? What about the remaining 75 percent of advocacy organizations? On what do they base their decisions? The point of this discussion is simple: Evaluation helps organizations be more strategic. Advocacy organizations need to be strategic to be effective. Otherwise, their attempts at communication and change will be lost in the din of so many other organizations also competing for attention.

23 Reisman, Jane; Gienapp, Anne; Stachowiak, Sarah. A Guide to Measuring Advocacy and Policy. Organizational Research Services. www.organizationalresearch.com
24 The Composite Logic Model and its associated materials are available at www.innonet.org/resources (free login required), or as an interactive online tool at www.planning.continuousprogress.org.
25 The question was open-ended and responses were recoded for analysis. Respondents were allowed to provide more than one response.
26 There are many helpful resources online about the parameters around advocacy and lobbying. For example, see the Alliance for Justice website at www.afj.org, the Center for Lobbying in the Public Interest (www.clpi.org), and the Southern California Center for Nonprofit Management's lobbying FAQ, www.cnmsocal.org/ForNonprofits/FAQLobbying.html.

Recommendations

The advocates who took the survey have mixed feelings about evaluation. Advocacy work is complicated and can be unpredictable. Sudden changes in politics or the economy (or even the weather) can mean immediate strategy changes in advocacy activities. Advocacy work is also a long haul. Work plans must be flexible, and evaluation planning must reflect this flexibility. The purpose of evaluation itself is also questioned: Why evaluate, when (as many advocates see it) evaluation is how funders justify budget cuts? Who has time for evaluation when there is work to do?

At the same time, these advocates want to know how to measure success. They see that advocacy evaluation can help them improve their work, make better plans, and involve more people in the cause. On the other hand, getting started in evaluation is daunting: Advocates feel that they lack the time, skills, and tools to use evaluation in a worthwhile way.

Advocacy evaluation practice has made great strides in the past few years, but there is still work to do in order to make it a real contributor to social change. Funders, evaluators, and advocates all have roles to play in strengthening the state of advocacy evaluation.

Working Together

- Define Terms: Evaluation will not take its place as a major motivator of social change until there is broad agreement on its role. Evaluation should be viewed as a tool for continuous reflection, learning, and improvement, rather than as an audit or performance review method. Evaluation will gain increasing support if it demonstrates its worth—and to be worthwhile, it must be dynamic and relevant. Evaluation activities should happen throughout the lifetime of an advocacy effort, so advocates can learn from their own work and change course when necessary. This kind of ongoing evaluation needs to be encouraged and rewarded.

- Contribution, not Attribution: Funders, advocates, and evaluators also must shift their thinking and agree that evaluation should seek to demonstrate contribution, not attribution. Attributing success in advocacy work is like demonstrating impact for direct service: It is costly, time-consuming, and often results in inconclusive findings. "Proving" attribution—exactly who was (or was not) responsible for a final "win"—rarely helps advocates work better. It shifts the spotlight to far-off, long-term outcomes such as policy wins or systems change, instead of keeping the focus on the work happening now. Also, since almost every advocacy issue is supported and advanced by a myriad of individuals and organizations over its lifespan, pointing to a single "straw that broke the camel's back" can seem inconsequential—even arrogant or offensive—to an organization's partners. In contrast, demonstrating contribution is relatively easy to accomplish, and provides useful results that lead to immediate improvement. Evaluations conducted to show contribution are less technically demanding and more accessible to a larger audience. This lowers the barriers faced by the approximately 75 percent of advocacy organizations currently not evaluating their work.

- Outcomes Planning: The basis for being able to document contribution is planning. At the outset of advocacy campaigns, advocates should articulate the series of changes their activities are designed to create. By no means is this pathway set in stone—it should be regularly updated as the work, environment, and other factors change. By outlining the intended pathway of change, advocates can focus their evaluation efforts on tracking and assessing progress towards shorter-term and interim measures of success. Understanding progress towards these measures feeds back into campaign strategy and activities, adding value to the advocacy work. If the articulated pathway of change is believable, an organization's demonstrated interim progress connects logically to long-term success.

- Improve Interim Outcomes: A fourth piece of work shared by funders, evaluators, and advocates is to improve the understanding and use of interim outcomes to assess progress. Advocates need to set realistic expectations of what their work can accomplish in a given timeframe. Funders need to hold grantees accountable for these interim measures of success, tied to infrastructure as well as advocacy activities. Evaluators need to place more emphasis on defining, tracking, and assessing progress towards interim measures of success rather than long-term measures such as policy wins. If funders, evaluators, and advocates work together, the focus of accountability can be shifted to practical measures that provide meaningful feedback to the ongoing advocacy work.

There is more that each group can do on its own:

Evaluators:

- Distill and share know-how. The advocacy field needs evaluators to continue to collect and condense learnings from their work, create new tools based on those learnings, and make them widely available.

- Be flexible. There is no single evaluation model that suits every situation. Evaluation methods should continually change and adapt to meet advocates' and funders' needs. If evaluation is perceived as rigid, it will not win new recruits.

- Design and use interim outcomes. Advocates (and sometimes funders) need assistance developing measurable statements to assess progress. Most people have clarity around the "end goal," but finding common ground about the steps toward that goal can be more difficult. Helpful interim outcomes include measures related to advocacy success and to improved capacity and sustainability.

- Find a balance between numbers and stories. One strong case study can gain support better than pie charts and percentages. Quantitative information is valuable, provides the "facts," and may logically attract support. Qualitative information adds context and texture, moves the discussion from the abstract to the personal, and may be an emotional trigger for support. There is a time and place for both approaches—use them appropriately for maximum impact.

- Explore creative designs. New methods have been created specifically for evaluating advocacy. Examples of creative design include the Intense Period Debrief,27 the Bellwether Methodology,28 and the Policymaker Ratings exercise.29

Funders:

- Champion the cause. Already engaged in and supporting advocacy evaluation? Now is the time to become a champion: reach out to your peers in philanthropy to attract additional support.31

- Support evaluation and capacity building. Consider how to support advocates who are ready to integrate evaluation into advocacy. Keep in mind that the results of this survey show that less than seven percent of organizations have program staff who have conducted evaluation. Capacity is not easily built; it generally requires training and ongoing technical assistance. Funders should also keep in mind the sustainability and capacity necessary for advocacy success.

- Emphasize strategic learning. As noted above, the purpose of evaluation should be to funnel information back into the advocacy work so organizations can make better decisions and ultimately have more impact. Develop a rapport with grantees so they feel comfortable learning from their work—and sharing those learnings with stakeholders.

Advocates:

- If you are not evaluating, start. Understandably, getting started is a challenge, especially for organizations with no evaluation staff. Nevertheless, as more tools and resources become available,30 advocates need to start using them.

- Articulate and document assumptions. Write out the pathway of change your campaign will likely follow to reach its ultimate goal. Highlight the "steps" in the pathway that may serve as important milestones or indicators of progress.

- Develop your own measures of success. Consider the informal cues that signal success in your work, and try to convert them into measurable statements. Once you have your measurable statements, work on tracking them systematically.

- Infuse evaluation into your work. Look for ways to integrate evaluation into your everyday activities. For example, repurpose existing intake forms to collect a few additional pieces of information, or standardize meeting notes so the information is consistent and can be analyzed over time.

Advocates want to use evaluation results to inform strategy. They may be struggling with basics like articulating intermediate outcomes, or they may need more advanced tools and evaluation mechanisms. Despite the complexities of evaluation, advocates are looking for ways to integrate it into their work. It is our responsibility as a field to continue developing tools and methodologies to meet their needs.

27 Search for "Intense Period Debrief Protocol" at www.innonet.org/resources. (Free login required.)

28 Evaluating an Issue's Position on the Policy Agenda: The Bellwether Methodology, www.gse.harvard.edu/hfrp/content/eval/issue34/spring2007.pdf.

29 Coffman, Julia. Advocacy Evaluation Trends and Practical Advice, November 2007, http://depts.washington.edu/mlcenter/assets/docs/aisle/Coffman.TP.2.pdf.

30 To help advocates take this first step, Innovation Network offers an online database of free advocacy evaluation and other evaluation-related tools and resources at www.innonet.org/resources.

31 For examples of how funders can support advocacy and evaluation, see Investing in Change: Why Supporting Advocacy Makes Sense for Foundations, The Atlantic Philanthropies, May 2008, available at http://atlanticphilanthropies.org.

Conclusion

This report is a first step in building a knowledge base about advocacy organizations and their use of evaluation. It has produced a baseline, confirmed some assumptions, and answered many questions. It has also raised new questions, such as:
- How do the findings contained in this report compare with other statistics about the nonprofit community—specifically nonprofits engaged in advocacy? Which findings can be generalized to the nonprofit advocacy community?

- Most organizations devote less than half of their budget to advocacy—what other work are they doing? Are they primarily direct service organizations that do some advocacy, or are they another kind of hybrid (e.g., membership organization and advocacy; think tank and advocacy)?

- How are advocates using the strategies they find effective? Are they using multiple strategies simultaneously or consecutively?

- Fifty-two respondents said that their organization's advocacy work had been evaluated, yet nearly three times as many respondents said they were using specific indicators or benchmarks to measure progress. Is there a disconnect between evaluators and advocates about what we mean by evaluation and strategic learning, or are these interim measures an attempt at articulating an intuitive understanding of how advocacy work unfolds? Without a strategic learning process, are data gathered from interim outcomes used to inform planning?

- What are the primary motivations for organizations that evaluate their work? Are they motivated by internal benefits such as planning, or by external pressure from funders, for example?

- Ninety-six percent of our survey respondents were from the U.S. What can we learn from the international advocacy community? How do international organizations' evaluation experiences and practices compare to those of domestic organizations? Are some strategies more effective internationally?

We hope to continue researching these questions, and to keep expanding and deepening evaluation know-how in the advocacy community. Advocates change the world. Evaluation can help by telling advocates whether victory is two hundred yards or two hundred miles away. Knowing when to make an all-out sprint to the finish line and when to stick to a slow and steady pace can make all the difference.


Appendix One: Survey Questions

A. DEMOGRAPHICS

1. What is your organization's programmatic focus?
   - Arts, Culture, and Humanities
   - Education and Research
   - Environment and Animals
   - Health
   - Human Services
   - International
   - Public, Societal Benefit
   - Religion
   - Other, please specify

2. What is the geographic focus of your organization's work? [open-ended]

3. What is your organization's postal code and country? (Main office) [open-ended]

B. GENERAL ORGANIZATION INFORMATION

1. What is your organization's budget?
   - Under $50,000
   - $50,000 - $99,999
   - $100,000 - $249,999
   - $250,000 - $499,999
   - $500,000 - $999,999
   - $1 million - $2.9 million
   - $3 million - $4.9 million
   - $5 million - $9.9 million
   - $10 million or more

2. What are your organization's primary advocacy activities? Examples of possible activities may include legislative advocacy, judicial advocacy, research, public education, community organizing, etc. [open-ended]

3. For how many years has your organization been involved in advocacy?
   - Less than 1 year
   - 1 year to 10 years
   - More than 10 years

4. Who are the primary funders of your advocacy work? [open-ended]

5. What is the approximate percentage of your organization's resources that are devoted to advocacy?
   - 0% - 10%
   - 11% - 25%
   - 26% - 50%
   - 51% - 75%
   - > 75%
   - I don't know

6. Has your organization's advocacy work been evaluated?
   - Yes
   - No
   - I don't know

7. If yes, who conducted the evaluation?
   - An evaluator from within the organization
   - Someone else from within the organization
   - An evaluator from outside of the organization
   - I don't know
   - Other, please specify

8. If yes, how did your organization benefit from the evaluation? [open-ended]

9. If yes, how did your organization use learnings from the evaluation? [open-ended]

10. If yes, what were the challenges to evaluating your work? [open-ended]

C. ADVOCACY INFORMATION

1. In your opinion, which advocacy strategies employed by your organization have been most effective? Why do you think those strategies have been most effective? [open-ended]

2. Which of your organization's advocacy capacities have been most essential? Why have they been most essential? [open-ended]

3. Short of overall success, how does your organization know you are making progress? Are there any specific indicators or benchmarks your organization tracks to measure progress? [open-ended]

4. What obstacles have you encountered in communicating your advocacy success to funders? [open-ended]

5. Please share any other comments you feel would help us better understand how your organization conducts and/or evaluates its advocacy efforts. [open-ended]


Appendix Two: Source Data

Source data for Figure 12: Recoded Response Groupings by Theme. Each line shows the number of responses coded to that theme, followed by the corresponding percentage.

RESEARCH AND COMMUNICATIONS: 71 (41.8%)
- Communications and Marketing: 52 (30.6%)
- Research Skills and Capacity: 17 (10.0%)
- Providing Materials to Decision-makers: 1 (0.6%)
- Spurring Discussion at Different Levels and Outlets: 1 (0.6%)

ORGANIZATIONAL SUPPORT FOR ADVOCACY: 70 (41.2%)
- Organizational and Staff Commitment to Advocacy: 40 (23.5%)
- Making and Following a Plan: 9 (5.3%)
- Articulated Decision-making Structures: 8 (4.7%)
- Board of Directors Involvement and Support: 5 (2.9%)
- Organizational and Staff Commitment to Evaluation: 2 (1.2%)
- Patience: 2 (1.2%)
- Integrating Advocacy into All Aspects of Organization: 1 (0.6%)
- Board Member Recruitment and Retention in Service of Advocacy Agenda: 1 (0.6%)
- Internal Communication: 1 (0.6%)
- Being Prepared for Opportunity Windows: 1 (0.6%)

COLLABORATION WITH EXTERNAL PARTIES: 55 (32.3%)
- Grassroots and Community Organizing and Mobilization: 21 (12.3%)
- Networking, Partnerships, and Coalition-building: 17 (10.0%)
- Developing and Maintaining Strong Relationships with Decision Makers: 12 (7.1%)
- Relationship with the Media: 3 (1.8%)
- Engaging Unlikely Allies: 1 (0.6%)
- Participation in Task Forces: 1 (0.6%)

RESOURCES AND STAFFING FOR ADVOCACY: 39 (22.9%)
- Experienced Staff: 15 (8.8%)
- Adequate Resources for Advocacy: 5 (2.9%)
- Staff with Legislative Expertise: 5 (2.9%)
- Full-time Dedicated Advocacy Staff Person: 4 (2.3%)
- Dedicated Communications and Fundraising Staff and Resources: 4 (2.3%)
- Adequate Staffing and Volunteer Resources: 3 (1.8%)
- Diverse and Visible Leadership: 2 (1.2%)
- Resource Reassignment between Direct Service and Advocacy Work: 1 (0.6%)

Speaking for Themselves: Advocates' Perspectives on Evaluation

The following Innovation Network staff contributed to this report: Johanna Gladfelter, Ehren Reed, Laura Ostenso, Simone Parrish, Lily Zandniapour, and Andy Stamp.

This work is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/us/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


INNOVATION NETWORK, INC.
202.728.0727 phone • 202.728.0136 fax
www.innonet.org • info@innonet.org
