Mercy College
Lecture 12
Prof. Lin
Today’s Agenda:
Process research: another focus within counseling research that explores the
events that occur within the therapeutic encounter.
Major types of research:
Experimental
Correlational
Longitudinal
Qualitative
Program Evaluation:
• This is a comprehensive assessment of an existing or proposed program at
all stages of development and implementation.
• The primary purpose of program evaluation is to provide consistent quality
services to individuals in need.
• With the increased attention on empirically supported treatment and
reliance on external funding, program evaluation has become a more
prominent part of counseling.
Some questions addressed by program evaluation:
• Posavac and Carey (2007) noted several questions that program
evaluations address:
– Is a program needed?
– Who should receive services and for how long?
– Is a program implemented as planned?
– What are the program outcomes?
– Which programs produce the most favorable outcomes?
– Are program benefits maintained over time?
– What are the program costs?
– Do the benefits outweigh the costs?
Is program evaluation research?
• Not quite; although research techniques are used as part of
program evaluation, program evaluation isn’t considered
research. Here are two notable differences:
– Research (especially quantitative research) involves the systematic and
controlled investigation of phenomena; findings often have narrow
applicability or generalizability. In other words, research investigates for
the sake of knowing.
– Responsibilities of program evaluation tend to be spread amongst
different individuals with different roles; research, by contrast, can be done
in relative isolation.
Key terms of program evaluation:
Accountability: the process of providing feedback about a program to its
stakeholders. With greater competition for funding, counselors play an increasingly
large role in ensuring high levels of accountability.
Stakeholders: any individuals involved in or affected by the program. These are also
the same persons to whom counselors are accountable.
Formative evaluations: the ongoing evaluation of a program throughout its
implementation to ensure that it is being conducted as planned and that changes are
made in response to stakeholder feedback. These are generally conducted to examine the
success of a program.
Summative evaluation: assessment of the program to determine the degree to which
program goals and objectives have been met. This type of evaluation is normally used to
compare an existing program against potential alternative programs.
There are four major components or types of program
evaluation:
• Needs assessment: this type of assessment is done to decide whether a program is
necessary for a target population.
• Process evaluation: this type of assessment assesses whether an ongoing or existing
program’s activities match the initial program design.
• Outcome evaluation: this type of assessment determines how successful an ongoing
program is by comparing the outcomes of individuals in the program against those not in
the program.
• Efficiency analysis: this type of assessment is a formal term for a cost-benefit analysis
(do the benefits of the program outweigh the costs?)
Needs assessment:
• A subjective and contextual process; potential evaluators need
to be knowledgeable of similar programs and ensure that
programs are not duplicated.
• A needs assessment primarily involves identifying objectives to
plan and develop services for clients.
– This is important because it allows evaluators to understand the needs of
the client population and to develop or revise program goals/objectives
accordingly. Then, those goals/objectives can be reviewed in later
phases of the evaluation to see if they are being met.
Needs assessment (cont.):
• A successful needs assessment usually involves the formation of
an advisory committee, or a group that represents the various
stakeholder groups present. The better the representation, the
more holistic and comprehensive the evaluation will likely be.
• In preparation, the purpose of the evaluation must be determined,
and the program details outlined:
– Who is interested in the needs assessment?
– What is the political and social context of the program?
– What target population(s) are being served (or will be served) by the
program?
Needs assessment (cont.):
• Erford (2011) recommended the use of the ABCD model for developing
program objectives:
– A = audience (individuals influenced by program objective)
– B = behavior (expected action or attitude)
– C = conditions (context or mode in which behavior will occur)
– D = description (concrete performance criterion)
• Example:
– The client (A) will demonstrate reduced compulsive spending (B) by staying within a
predetermined weekly budget (C), which will be reported and verified by the client’s
designated independent financial counselor (D).
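As an illustration only (not from the slides), the four ABCD elements can be kept in a small structured record so each objective remains explicit and reviewable in later evaluation phases. The class name, fields, and assembled sentence pattern below are hypothetical conveniences, not part of Erford's model:

```python
from dataclasses import dataclass

@dataclass
class ABCDObjective:
    """One program objective in the ABCD format (Erford, 2011)."""
    audience: str     # A: individuals influenced by the program objective
    behavior: str     # B: expected action or attitude
    conditions: str   # C: context or mode in which the behavior will occur
    description: str  # D: concrete performance criterion (here, verification)

    def as_statement(self) -> str:
        # Assemble the four elements into a single objective sentence.
        return (f"{self.audience} will demonstrate {self.behavior} "
                f"by {self.conditions}, verified by {self.description}.")

# The example objective from the slide, broken into its ABCD parts:
objective = ABCDObjective(
    audience="The client",
    behavior="reduced compulsive spending",
    conditions="staying within a predetermined weekly budget",
    description="the client's designated independent financial counselor",
)
print(objective.as_statement())
```

Storing objectives this way makes it easy to revisit each criterion (the D element) when the summative evaluation checks whether goals were met.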
Needs assessment (cont.):
• Once all elements of the needs assessment are identified, an
executive summary is created and presented to the advisory
committee.
• This summary includes the following:
– Background information about the needs assessment.
– Information about the data sources and data analysis used.
– Recommendations based on the findings for program implementation
and future evaluation.
Process Evaluation:
• Also known as program monitoring.
• This evaluation examines whether the program was employed as originally
planned and whether it met expectations.
• Alternatives to the current methods are also investigated to ensure that
those being implemented are the best available.
• With respect to government-run social programs, a combination of process
evaluation and efficiency analysis (cost-benefit analysis) is used to
evaluate program performance.
Outcome evaluation:
• Measures the effectiveness of a program at the end of the
program.
• Typically accomplished via posttest measure, exit interview,
cost-benefit analysis, records review, or checklist.
• Three aspects are normally assessed at this stage:
1. Whether the program was more effective than no intervention at all.
2. Whether the program was more effective than another program.
3. The degree to which the program was more effective than another
program.
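A minimal sketch of the first two aspects above: comparing program participants against a comparison group with a simple standardized mean difference (Cohen's d). The posttest scores, scale, and group sizes below are invented purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(program_scores, comparison_scores):
    """Standardized mean difference between two groups,
    using a pooled standard deviation."""
    n1, n2 = len(program_scores), len(comparison_scores)
    s1, s2 = stdev(program_scores), stdev(comparison_scores)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(program_scores) - mean(comparison_scores)) / pooled_sd

# Hypothetical posttest well-being scores (higher = better):
program = [72, 75, 70, 78, 74, 77]      # individuals in the program
comparison = [65, 68, 63, 70, 66, 69]   # individuals not in the program

d = cohens_d(program, comparison)
print(f"Mean difference: {mean(program) - mean(comparison):.1f} points")
print(f"Cohen's d: {d:.2f}")
```

The raw mean difference answers whether the program outperformed the alternative; the effect size speaks to the third aspect, the *degree* to which it did.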
Efficiency Analysis:
• Also referred to as a cost-benefit analysis.
• Allows decision-making on a quantitative basis; weighs the
benefits of a particular course of action against its costs.
• Examples:
– Efficacy of elaborate and expensive therapies vs. outcome.
– Test preparations (cramming vs. physiological cost).
– Social research: greatest benefit at least cost (may not necessarily be
money; it could be time, amount of effort, or other measurable factors).
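As a toy illustration of the quantitative weighing described above (all figures invented), a benefit-cost comparison reduces to summing monetized benefits and costs on a common scale:

```python
def benefit_cost_ratio(benefits, costs):
    """Ratio of total benefits to total costs on a shared (monetized)
    scale; a ratio above 1 means benefits outweigh costs."""
    return sum(benefits) / sum(costs)

# Hypothetical annualized figures for a counseling program (dollars):
benefits = [40_000, 25_000]          # e.g. reduced crisis services, retained clients
costs = [30_000, 10_000, 5_000]      # e.g. staff time, materials, facility use

ratio = benefit_cost_ratio(benefits, costs)
net_benefit = sum(benefits) - sum(costs)
print(f"Benefit-cost ratio: {ratio:.2f}")
print(f"Net benefit: ${net_benefit:,}")
```

As the slide notes, the units need not be money; the same arithmetic applies to time, effort, or any other measurable factor, as long as benefits and costs are expressed on the same scale.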
How a program evaluation works:
1. Identify the program to be evaluated:
– Who wants the evaluation done and why? What are the available
resources and timeline for completion of the evaluation?
2. Plan the evaluation:
– Consider which type of research design will be used and the number of
data sources needed to strengthen the validity of findings, and examine
existing evaluations from similar programs to ensure the best plan.
3. Conduct a needs assessment and provide recommendations:
– See definition for “needs assessment” on Slide 25.
How a program evaluation works (cont.):
4. Define/determine what “success” is:
– Co-develop program goals and objectives.
– Develop measures of short-term and long-term success.
– Base these on the stakeholders' mutually agreed-upon objective
and operational goals; these may also vary by program type and
resource availability.
– Use multiple dependent (outcome) variables to examine
success, and measure each differently.
– Minimize validity threats through a repeated measures design.
How a program evaluation works (cont.):
5. Select data sources:
– Use multiple data sources to measure outcome variables.
• Participants, program staff, program records, artifacts, outside
observers.
– Useful assessments:
• Surveys, interviews, checklists, tests, observation protocols.
– Data sources can be utilized throughout the entire program
evaluation process.
How a program evaluation works (cont.):
6. Monitor and evaluate the program process (process evaluation).
7. Determine the degree to which a program is successful (outcome
evaluation).
8. Analyze the program’s efficiency (efficiency analysis).
9. Continue, revise, or stop program based on the program evaluation
findings.
Program evaluation models and strategies:
• Treatment package strategy or social science research model:
– Control and treatment groups are compared to determine if a program is
effective.
• Comparative outcome strategy:
– Two or more programs or interventions are compared to assess which is
more effective.
• Dismantling strategy:
– Various program components from different perspectives are evaluated to
determine the effective and ineffective parts of a program.
Program evaluation models and strategies (cont.):
• Constructive strategy:
– A new component is added to an already effective program and assessed
for “added value”.
• Parametric strategy:
– A program evaluated at different stages is reviewed to determine the
most appropriate times to evaluate it.
• Common factors control group strategy:
– A program is evaluated to determine whether a specific component or
specific factors of a program resulted in its effectiveness.
Program evaluation models and strategies (cont.):
• Moderation design strategy:
– Participants and other stakeholders are assessed to consider who might
benefit most from a program.
• Objectives-based evaluation model:
– The most common model, in which professional counselors
determine whether goals and objectives were met.
• Expert-opinion model:
– An outside and neutral expert examines the program process and
outcome.
Program evaluation models and strategies (cont.):
• Success case method:
– Information is sought from those individuals who benefited most from a
program.
• Improvement-focused approach:
– Ineffective program components are reviewed to figure out what went
wrong.
Applying Program
Evaluation
#FightFor15
• Will be available on Blackboard beginning at 9:00am on 5/1 (in the Week 15 & 16 folder).
• Deadline: 5/7, by 11:59pm (exam must be submitted by this date/time; no late submissions
accepted).
Reminder:
• The final version of your research proposal is due on May 10th! The Chalk
& Wire/Anthology link will be accessible starting May 3rd in the Week 15
& 16 folder.