
CNSL 673

Mercy College
Lecture 12
Prof. Lin
Today’s Agenda:

• Counseling Outcome Research
• Process Research
• Program Evaluation
• Applying Program Evaluation
Counseling Outcome Research: Does Counseling Work?
Outcome Research:
• Is classified as a focus of counseling research rather than as a standalone design.
• The most typical application of outcome research is evaluating the efficacy of treatment.
How is outcome research typically performed?

• It is typically conducted using a true experimental or quasi-experimental design, but any research design can be adapted to focus on outcomes.

• Therefore, there will usually be a comparison between a treatment group and a control group, or a comparison of different treatments.
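To make this concrete, here is a minimal sketch (in Python) of the kind of treatment-versus-control comparison an outcome study might analyze. The scores and group sizes are hypothetical, invented purely for illustration; a real study would also check assumptions and report confidence intervals.

# Hypothetical post-treatment symptom scores (lower = fewer symptoms).
import numpy as np
from scipy import stats

treatment = np.array([12, 9, 15, 10, 8, 11, 13, 9])
control = np.array([18, 16, 20, 15, 17, 19, 14, 16])

# Independent-samples t-test: did the treatment group differ from the control group?
t_stat, p_value = stats.ttest_ind(treatment, control)

# Cohen's d as a simple effect-size estimate (pooled standard deviation).
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")

The same two-group logic underlies most of the strategies listed on the next slide; what changes is which conditions are being compared.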
Ways to conduct outcome research:
• There are seven strategies presented in the text:
1. Treatment Package Strategy
2. Dismantling Strategy
3. Additive Strategy
4. Parametric Strategy
5. “Common Factor” Control Group
6. Comparative Outcome Strategy
7. Moderation Design
Treatment Package Strategy:
• Comparison of a group that undergoes a treatment/intervention versus
another group that serves as a control.
• The aim of this strategy is to identify whether a treatment/intervention has
an effect (or not).
Dismantling Strategy:
• Comparison of a full treatment/intervention against an incomplete version
of itself (with one critical component removed).
• The aim of this strategy is to identify which components are critical, and
which components are not (and could be removed).
Additive Strategy:
• An existing treatment/intervention is compared against a version of itself
with an added component.
• The aim of this strategy is to identify potential additions to an existing
treatment/intervention that could enhance efficacy.
Parametric Strategy:
• Different parameters (quantities of aspects of the treatment/intervention,
such as time, dosage, or frequency) of an existing treatment/intervention
are compared against one another.
• The aim of this strategy is to identify the optimal parameters to maximize
efficacy.
“Common Factor” Control Group:
• Effectively the psychotherapy equivalent of a pseudo-"double-blind" study in medicine: some type of "placebo control" condition is compared against an "active" treatment group.
• The aim of this strategy is to attempt to establish the efficacy of a treatment/intervention in a similar fashion to a "double-blind" study.
Comparative Outcome Strategy:
• Two (or more) different treatments/interventions for a common condition are assessed. An optional no-treatment condition can be added for comprehensiveness.
– Additional consideration must be made to ensure that critical parameters of the chosen treatments are identical (e.g., number of sessions).
• The aim of this strategy is to identify which
treatment/intervention is more effective in addressing a
particular condition.
Moderation Design:
• Examination of different clients, settings, or contexts to evaluate their potential impact on the efficacy of a treatment/intervention.
• The aim of this strategy is to answer the question: which treatments work with which types of clients?
Methodological Issues with Outcome Research:
1. Inclusion/Exclusion Criteria
– Balancing generalizability against validity.
2. Assessment of Treatment Integrity
– The treatment/intervention as delivered needs to be a valid representation of the treatment/intervention as designed.
3. Measuring Change
– What qualifies as change? What standard is being used (clinical vs. statistical significance)? See the sketch following this list for one common index.
4. Counselor Effects
– Think about the potential biases/pre-existing differences discussed last week (Investigator/Experimenter/Participant). These also exist for the counselor/therapist.
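On the "measuring change" issue, one widely used way to separate clinically meaningful change from purely statistical change is the Reliable Change Index (Jacobson & Truax, 1991). Below is a minimal sketch with hypothetical scores and reliability values; it illustrates the idea and is not a method prescribed by the text.

# Reliable Change Index: is an individual client's change larger than measurement
# error alone would produce? All numbers below are hypothetical.
import math

pre_score = 32          # client's pretest score on some symptom measure
post_score = 21         # client's posttest score
sd_pre = 7.5            # standard deviation of the measure at pretest
reliability = 0.88      # reliability of the measure (e.g., test-retest)

standard_error = sd_pre * math.sqrt(1 - reliability)
s_diff = math.sqrt(2 * standard_error ** 2)
rci = (post_score - pre_score) / s_diff

print(f"RCI = {rci:.2f}")   # |RCI| > 1.96 is conventionally treated as reliable change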
Process Research: How & Why Counseling Works


What is Process Research?
• It is another focus within counseling research that explores the events that occur within the therapeutic encounter.
• There are three major types of process research:
1. Studies that independently examine a specific process.
2. Studies that examine the relationship between different counseling processes.
3. Studies that examine the relationship between counseling processes and outcomes.
Hill (1991) identified seven behaviors that have been examined in the process area:
1. Ancillary behaviors, such as speech quality or the body posture of the counselor.
2. Verbal behaviors, such as counselor interpretation and self-disclosure.
3. Covert behaviors, such as "supportive" or "challenging" therapist intentions.
4. Content, or the examination of the topics of discussion, with a focus on client behaviors.
5. Strategies, which focus on the therapist's techniques, like identifying maladaptive cognitions or challenging client defenses.
6. Interpersonal manner, such as therapist involvement, empathy, and congruence.
7. The therapeutic relationship, such as the working alliance and the control of the topic of discussion.
The difficulty with process research is…
• How to make it fit within the context of research. Because the overall therapeutic process/intervention comprises several components, it is difficult to start building the frame for the scientific investigation.
• There are four broad questions to consider:
1. Where to start?
2. What to measure?
3. Whose perspective?
4. How much to measure?
Process research design options:

• Experimental
• Correlational
• Longitudinal
• Qualitative

Program Evaluation
Program Evaluation:
• This is a comprehensive assessment of an existing or proposed program at all stages of development and implementation.
• The primary purpose of program evaluation is to ensure that consistent, quality services are provided to individuals in need.
• With the increased attention on empirically supported treatment and
reliance on external funding, program evaluation has become a more
prominent part of counseling.
Some questions addressed by program evaluation:
• Posavac and Carey (2007) noted several questions that program
evaluations address:
– Is a program needed?
– Who should receive services and for how long?
– Is a program implemented as planned?
– What are the program outcomes?
– Which programs produce the most favorable outcomes?
– Are program benefits maintained over time?
– What are the program costs?
– Do the benefits outweigh the costs?
Is program evaluation research?
• Not quite; although research techniques are used as part of program evaluation, program evaluation isn't considered research. Here are two notable differences:
– Research (especially quantitative research) involves the systematic and controlled investigation of phenomena; findings often have narrow applicability or generalizability. In other words, research investigates for the sake of knowing.
– Responsibilities in program evaluation tend to be spread among different individuals with different roles; research can be done in relative isolation.
Key terms of program evaluation:
Accountability: refers to the process of providing feedback about a program to its stakeholders. With greater competition for funding, counselors play an increasingly larger role in ensuring high levels of accountability.
Stakeholders: any individuals involved in or affected by the program. These are also the persons to whom the counselors are accountable.
Formative evaluations: the ongoing evaluation of a program throughout its
implementation to ensure that it is being conducted as planned, and that changes are being
made as per stakeholder feedback. These are generally conducted to examine the success of
a program.
Summative evaluation: assessment of the program to determine the degree to which
program goals and objectives have been met. This type of evaluation is normally used to
compare an existing program against potential alternative programs.
There are four major components or types of program
evaluation:
• Needs assessment: this type of assessment is done to decide whether a program is
necessary or not for a target population.
• Process evaluation: this type of assessment assesses whether an ongoing or existing
program’s activities match the initial program design.
• Outcome evaluation: this type of assessment determines how successful an ongoing program is by comparing the outcomes of individuals in the program against those not in the program.
• Efficiency analysis: the formal term for a cost-benefit analysis (do the benefits of the program outweigh the costs?).
Needs assessment:
• A subjective and contextual process; potential evaluators need to be knowledgeable about similar programs and ensure that programs are not duplicated.
• A needs assessment primarily involves identifying objectives to
plan and develop services for clients.
– This is important because it allows evaluators to understand the needs of
the client population and to develop or revise program goals/objectives
accordingly. Then, those goals/objectives can be reviewed in later
phases of the evaluation to see if they are being met.
Needs assessment (cont.):
• A successful needs assessment usually involves the formation of
an advisory committee, or a group that represents the various
stakeholder groups present. The better the representation, the
more holistic and comprehensive the evaluation will likely be.
• In preparation, the purpose of the evaluation must be determined,
and the program details outlined:
– Who is interested in the needs assessment?
– What is the political and social context of the program?
– What target population(s) are being served (or will be served) by the
program?
Needs assessment (cont.):
• Erford (2011) recommended the use of the ABCD model for developing
program objectives:
– A = audience (individuals influenced by program objective)
– B = behavior (expected action or attitude)
– C = conditions (context or mode in which behavior will occur)
– D = description (concrete performance criterion)

• Example:
– The client (A) will demonstrate reduced compulsive spending (B) by staying within a
predetermined weekly budget (C), which will be reported and verified by the client’s
designated independent financial counselor (D).
Needs assessment (cont.):
• Once all elements of the needs assessment are identified, an
executive summary is created and presented to the advisory
committee.
• This summary includes the following:
– Background information about the needs assessment.
– Information about the data sources and data analysis used.
– Recommendations based on the findings for program implementation
and future evaluation.
Process Evaluation:
• Also known as program monitoring.
• This evaluation examines whether the program was employed as originally
planned and whether it met expectations.
• Alternatives to the current methods are also investigated to ensure that the
current methods being implemented are the best available.
• With respect to government-run social programs, a combination of process
evaluation and efficiency analysis (cost-benefit analysis) is used to
evaluate program performance.
Outcome evaluation:
• Measures the effectiveness of a program at the end of the
program.
• Typically accomplished via posttest measure, exit interview,
cost-benefit analysis, records review, or checklist.
• Three aspects are normally assessed at this stage:
1. Whether the program was more effective than no intervention at all.
2. Whether the program was more effective than another program.
3. The degree to which the program was more effective than another
program.
Efficiency Analysis:
• Also referred to as a cost-benefit analysis.
• Allows decision-making on a quantitative basis; weighs the
benefits of a particular course of action against its costs.
• Examples:
– Efficacy of elaborate and expensive therapies vs. outcome.
– Test preparations (cramming vs. physiological cost).
– Social research: greatest benefit at least cost (may not necessarily be
money; it could be time, amount of effort, or other measurable factors).
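As a toy illustration, the sketch below compares two hypothetical programs on benefit-cost ratio and net benefit; the numbers are invented, and "cost" could just as easily be counted in clinician hours or client time as in dollars.

# A toy efficiency (cost-benefit) comparison between two hypothetical programs.
programs = {
    "Program A": {"benefit": 150_000, "cost": 100_000},
    "Program B": {"benefit": 60_000, "cost": 30_000},
}

for name, p in programs.items():
    ratio = p["benefit"] / p["cost"]   # benefit delivered per unit of cost
    net = p["benefit"] - p["cost"]     # total benefit remaining after costs
    print(f"{name}: benefit-cost ratio = {ratio:.2f}, net benefit = {net}")

Here Program A produces the larger net benefit while Program B delivers more benefit per unit of cost, so which program looks "better" depends on the criterion stakeholders prioritize.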
How a program evaluation works:
1. Identify the program to be evaluated:
– Who wants the evaluation done and why? What are the available
resources and timeline for completion of the evaluation?
2. Plan the evaluation:
– Consider which type of research design will be used, the number of data
sources to strengthen validity of findings, and to examine existing
evaluations from similar programs to ensure the best plan.
3. Conduct a needs assessment and provide recommendations:
– See definition for “needs assessment” on Slide 25.
How a program evaluation works (cont.):
4. Define/determine what “success” is:
– Co-develop program goals and objectives.
– Development of measures for short-term and long-term success.
– Should be based on the stakeholders' mutually agreed upon objective and operational goals; these may also vary due to program type and resource availability.
– Utilization of multiple dependent (outcome) variables to examine
success, and measure each differently.
– Minimize validity threats through repeated measures design.
How a program evaluation works (cont.):
5. Select data sources:
– Use multiple data sources to measure outcome variables.
• Participants, program staff, program records, artifacts, outside
observers.
– Useful assessments:
• Surveys, interviews, checklists, tests, observation protocols.
– Data sources can be utilized throughout the entire program
evaluation process.
How a program evaluation works (cont.):
6. Monitor and evaluate the program process (process evaluation).
7. Determine the degree to which a program is successful (outcome
evaluation).
8. Analyze the program’s efficiency (efficiency analysis).
9. Continue, revise, or stop program based on the program evaluation
findings.
Program evaluation models and strategies:
• Treatment package strategy or social science research model:
– Control and treatment groups are compared to determine if a program is
effective.
• Comparative outcome strategy:
– Two or more programs or interventions are compared to assess which is more effective.
• Dismantling strategy:
– Various program components from different perspectives are evaluated to
determine the effective and ineffective parts of a program.
Program evaluation models and strategies (cont.):
• Constructive strategy:
– A new component is added to an already effective program and assessed
for “added value”.
• Parametric strategy:
– A program evaluated at different stages is reviewed to determine the
most appropriate times to evaluate it.
• Common factors control group strategy:
– A program is evaluated to determine whether a specific component or
specific factors of a program resulted in its effectiveness.
Program evaluation models and strategies (cont.):
• Moderation design strategy:
– Participants and other stakeholders are assessed to consider who might
benefit most from a program.
• Objectives-based evaluation model:
– The most common model used, in which professional counselors determine if goals and objectives were met.
• Expert-opinion model:
– An outside and neutral expert examines the program process and
outcome.
Program evaluation models and strategies (cont.):
• Success case method:
– Information is sought from those individuals who benefited most from a
program.
• Improvement-focused approach:
– Ineffective program components are reviewed to figure out what went
wrong.
Applying Program
Evaluation
#FightFor15



This hashtag…
• Represents the organization Fight for $15, a social movement that is attempting to push for a $15 minimum wage (and unionization rights) for many different groups of workers (e.g., primarily service-based part-time positions that typically do not have union representation).
What this organization is addressing:
• Is a long-standing issue of growing economic disparity.
– Prior to Fight for $15, one of the more publicized movements that also
drew attention to a similar area was Occupy Wall Street in 2011.
One potential solution that has been suggested to try to
address economic disparity:
• Is a general increase of the minimum wage from the existing level to a higher amount.
– This course of action has been done in several different states and cities
across the United States already; 21 states have a current minimum wage
equal to the federal minimum wage of $7.25.
– The method of increase varies; some are simple direct increases above
the federal minimum wage, others are more complex, incorporating
additional conditions and adjustment (most commonly a COLA [Cost of
Living Adjustment] tied to the CPI [Consumer Price Index], a measure
of economic inflation).
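For illustration only, here is a minimal sketch of how a COLA tied to the CPI is typically computed; the wage and index values below are hypothetical, not actual Seattle or federal figures.

# Hypothetical Cost of Living Adjustment (COLA) tied to the Consumer Price Index (CPI).
current_wage = 15.00       # current local minimum wage, in dollars per hour
cpi_last_year = 258.1      # hypothetical CPI for the reference year
cpi_this_year = 265.4      # hypothetical CPI for the current year

inflation_rate = (cpi_this_year - cpi_last_year) / cpi_last_year
adjusted_wage = round(current_wage * (1 + inflation_rate), 2)

print(f"Inflation: {inflation_rate:.2%} -> adjusted minimum wage: ${adjusted_wage:.2f}")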
One of the more documented implementations is the increase conducted in Seattle:
• Minimum Wage Landing Page (seattle.gov)
• Ordinance SMC 14.19 (library.municode.com)
With many other states considering (or planning to
consider) similar efforts:
• The data from the ongoing Seattle implementation have been among the most frequently analyzed and utilized for planning and decision-making purposes. Would increasing the minimum wage like Seattle (and others) be a potential option to address economic disparity (at least from a state or local perspective)?
– Does this sound like something mentioned earlier in the PowerPoint?
Next, let’s explore this idea further:
• You’re a member of an advisory committee for a state governor-elect who
has pledged to raise the minimum wage as one of his campaign platforms.
• Can program evaluation be used in this situation to address this scenario?
Let’s find out by working through the program evaluation procedure.
Recap: Program Evaluation Process
1. Identify the program to be evaluated:
– Who wants the evaluation done and why? What are the available
resources and timeline for completion of the evaluation?
2. Plan the evaluation:
– Consider which type of research design will be used, the number of data
sources to strengthen validity of findings, and to examine existing
evaluations from similar programs to ensure the best plan.
3. Conduct a needs assessment and provide recommendations:
– See definition for “needs assessment” on a previous slide.
Recap: Program Evaluation Process (cont.)
4. Define/determine what “success” is:
– Co-develop program goals and objectives.
– Development of measures for short-term and long-term success.
– Should be based on stakeholders’ mutually agreed upon objective and
operational goals; these may also vary due to program type and resource
availability.
– Utilization of multiple dependent (outcome) variables to examine
success, and measure each differently.
– Minimize validity threats through repeated measures design.
Recap: Program Evaluation Process (cont.)
5. Select data sources:
– Use multiple data sources to measure outcome variables.
• Participants, program staff, program records, artifacts, outside
observers.
– Useful assessments:
• Surveys, interviews, checklists, tests, observation protocols.
– Data sources can be utilized throughout the entire program
evaluation process.
Recap: Program Evaluation Process (cont.)
6. Monitor and evaluate the program process (process evaluation).
7. Determine the degree to which a program is successful (outcome
evaluation).
8. Analyze the program’s efficiency (efficiency analysis).
9. Continue, revise, or stop program based on the program evaluation
findings.
As you can probably see…
• Although it’s best suited for counseling-related topics, there is certainly the potential to apply program evaluation to topics outside the field of counseling.
Additional Reading (optional):
• Reich, Allegretto, & Godoey (2017)
– This frequently cited study, commissioned by the City of Seattle and conducted by UC Berkeley, is commonly used to support implementation of a minimum wage increase.

• Jardim, Long, Plotnick, van Inwegen, Vigdor, & Wething (2018)
– This working manuscript from the University of Washington suggests that the Seattle minimum wage increase has hurt low-income workers by reducing job opportunities and/or reducing work hours, in contrast to several previously completed studies on the issue.
But here’s another example to illustrate the role of
program evaluation in an applied context:
Final Exam Overview:
• Format:
– Cumulative “take home” exam
• Anything covered in any lecture might be in the exam!
– 50 multiple choice questions (2 points each)
– Timed (full class duration – three hours); exam will auto-submit once the timer runs out (unless you manually
submit before the timer ends).
– Multiple access (you can enter/exit the test as many times as you want. However, the timer will continue to
run).
– Single submission
– You will receive feedback/grades immediately after submitting the exam.

• Will be available on Blackboard beginning at 9:00am on 5/1 (in the Week 15 & 16 folder).
• Deadline: 5/7, by 11:59pm (exam must be submitted by this date/time; no late submissions
accepted).
Reminder:
• The final version of your research proposal is due on May 10th! The Chalk
& Wire/Anthology link will be accessible starting May 3rd in the Week 15
& 16 folder.

