
Tax Staff Essentials (TSE) Program Evaluation Plan


Laura Nations
Walden University

Contents

Program Analysis
Program Evaluation Contextual Factor Concept Map
Evaluation Model Table
Evaluation Criteria
Logic Map
Data Collection Design and Sampling Strategy (Audio)
Reporting Strategy Table
References

Program Analysis

The program I will be evaluating for this project is the Tax Staff Essentials (TSE)
Program that was originally released in 2012 by the American Institute of Certified Public
Accountants (AICPA). The AICPA is a not-for-profit membership organization for CPAs and
other qualified practitioners (OQPs) that has been in business for over 125 years. One of the
main revenue generators for the AICPA (other than membership dues) is the sale of continuing
education to CPAs. CPAs have to complete a certain number of continuing education credits
(CPEs) each year in order to maintain licensure. The TSE program is updated every spring and
released in two formats: onsite live group study and self-study online.
The TSE program is a part of the tax portfolio within the CPE line of products. The
program is designed in four progressive levels so that it can be utilized to provide the education
necessary for the tax professional throughout their career. Included in the four levels are courses
on soft skills, business skills, and fundamental technical tax topics (please see Appendix A for a
breakdown of all the levels and the onsite hours of each course within those levels). This mix
was created in order to provide for a well-rounded educational program. For example, you can
provide a tax professional with all the technical background in the world, but that doesn't do
them any good if they don't know how to speak with clients.
The main clients for this program are medium to large financial firms (5000+
employees). The market perception of the program has been that it is a well-rounded and
comprehensive program that provides their employees with the education they need to not only
service their clients, but also keep their CPA licensure active. This perception hasn't changed
over the years since the program's inception, and neither has revenue, which has stayed at a
steady $8 million.

There are a large number of stakeholders involved with this program. PMI defines a
stakeholder as "individuals and organizations that are actively involved in the project, or whose
interests may be positively or negatively affected by execution of the project or project
completion. They may also exert influence over the project and its deliverables" (Project
Management Institute, 2008). This list includes leadership: the VP of Member Learning &
Competency, the Director of Learning Design & Development, the Director of Content
Development, the Tax Portfolio Manager, the Program Manager, and the Project Manager. All of
these individuals want the program to meet whatever yearly revenue goal is set for it, and they
want the clients who buy the program year-after-year to be happy with the deliverable. Other
stakeholders include the direct sales staff, who want the program to meet the revenue goals because
they get a percentage of the sales. There is also the development team, including four Technical
Managers, three Editors, two Formatters, four Digital Publishing Specialists, and one Marketing
Specialist. These team members are more concerned about the development and release timeline,
considering that they work on multiple projects at once. Due to the quick annual turnaround (four
months) on updates to this product line, they want the self-study online design to be as simple as
possible and the updates to be minimal so that they can complete their work to schedule and not
get penalized.
The major political factor that influences the program is the fact that the CEOs of two
firms that are TSE clients are also on the AICPA Board of Directors. They have had custom
requests for changes to the program that we have acquiesced to, and we have even changed the
entire self-study online design output based on their requests, not on any form of evaluation output.
These relationships put the potential of the program at heavy risk because we aren't basing decisions
on what is best for the learner; we are making decisions to make two clients happy.

To close, there are a couple of potential ethical issues that will arise during the evaluation of
this program. First, the evaluator (me) is an internal employee who has also been the program
and project manager for this program since its inception. The second potential ethical issue stems
from the political issue of having two of the largest program clients on our board: we must tread
lightly during the evaluation and ensure that any bias stemming from their individual wants does
not color the evaluation findings. I believe that I can take an unbiased approach to the evaluation
because my end goal is to ensure that the participants are getting engaging, valuable
education that allows them to grow in their profession and provide their clients with the service
they need. However, I will have to level-set my ethical responsibilities with the other
stakeholders in order to reduce the chance of political bias weaving its way into the evaluation.
Our core text suggests doing this at the very beginning of the evaluation planning process and I
intend to take that advice (Fitzpatrick, Sanders, & Worthen, 2010).

Appendix A: Tax Staff Essentials Outline

Foundational

ACRO       Title                                             GS Hours
TSE.TR1    Tax Research 1                                    4
TSE.ITF    Individual Tax Fundamentals                       8
TSE.CCD    Capitalized Costs & Depreciation                  6
TSE.WPD    Working Paper Documentation                       2
TSE.SCF    S Corporation Fundamentals                        8
TSE.STD    Skills to Desk Business Writing & Communication   4
TSE.TFLP   Tax Fundamentals of LLCs and Partnerships         8

Intermediate

ACRO       Title                                             GS Hours
TSE.TR2    Tax Research 2                                    4
TSE.MST    Multistate Taxation                               8
TSE.TAI    Tax Accounting for Inventories                    2
TSE.TPT    Taxation of Property Transactions                 4
TSE.IIT    Intermediate Individual Taxation                  8
TSE.TCC    Taxation of Corporations                          8
TSE.AMP    Accounting Methods & Periods                      2
TSE.CFE    Choice of and Formation of Entity                 4

Advanced

ACRO        Title                                   GS Hours
TSE.F9B     Form 990                                8
TSE.ITA     Income Tax Accounting                   8
TSE.ETP     Estate & Tax Primer                     4
TBA         Real Estate Course TBA                  4
TSE.ATPSC   S Corporation Taxation                  8
TSE.IBA     Introduction to Business Acquisitions   8

Expert

ACRO        Title                                   GS Hours
TSE.IRS     IRS Audits                              4
TSE.APL     Advanced Taxation Partnerships & LLCs   8
TSE.INT     International Taxation                  8
TSE.ADVTA   Advanced Income Tax Accounting          8
TSE.TPSC    Tax Planning for S Corporations         4
TSE.ITP     Individual Tax Planning                 8


Program Evaluation Contextual Factor Concept Map

The Tax Staff Essentials Program is in its fifth year of development, and as an
organization we have never conducted an evaluation of the effectiveness of this program. As I
begin to plan my strategy for this program evaluation, I am thinking about the contextual factors
of the program, including the organizational culture that influences the program, the program
constraints, and the stakeholders whose values and opinions may attempt to bring bias to the
evaluation.
The organization that creates the program is a not-for-profit that has been in business for
over 125 years. They offer a diverse portfolio of educational options for finance industry
professionals. Layoffs are rare, though only one person still on staff was involved with the
original development of the program five years ago; the rest of the original group has left the
organization. This will limit the ability to look very far back for historical information, as
records are limited. The focus of the organization internally is on organizational learning and
best ISD practices. At the beginning of every project they ask themselves, "What will create the
best learning experience for the learner?" They also have a large focus on revenue generation
and customer satisfaction. They are very reactionary when a customer is unhappy; instead of
really sourcing the cause, they do whatever the customer wants in order to placate them.
Program constraints, which limit development opportunities due to budget and time, have
been used as an excuse for why a program evaluation has not occurred yet. In addition, the
program must meet NASBA compliance standards, which affect the design of the program: it
must include a certain number of review questions and exam questions, a glossary, a search
function, and full pilot testing. This eats away at the already limited time given to the program
annually for development and updating. Political relationships with certain clients who purchase
the program also need to be taken into account, because the entire design of the program
deliverables has changed in previous years due to feedback from one client who had political
ties to the leadership team. In addition, all program deliverables must meet internal quality
control standards in order to ensure minimum quality standards are upheld.
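
Because these compliance components are re-verified on every annual update, a simple automated pre-release check could save review time. The Python sketch below is a minimal illustration of that idea; the component names and course-record layout are assumptions made for the example, not NASBA's actual rule set.

```python
# Hypothetical set of components every self-study course must carry
# before release (illustrative names, not the official NASBA list).
REQUIRED_COMPONENTS = {"review_questions", "exam_questions", "glossary",
                       "search_function", "pilot_test_signoff"}

def missing_components(course):
    """Return the required components a course record has not yet satisfied."""
    present = {name for name, done in course.items() if done}
    return REQUIRED_COMPONENTS - present

# Example course record; values indicate whether the component is complete.
course = {"review_questions": True, "exam_questions": True, "glossary": True,
          "search_function": False, "pilot_test_signoff": False}
print(missing_components(course))  # -> {'search_function', 'pilot_test_signoff'}
```

A check like this could run as part of the quality control step mentioned above, flagging incomplete courses before they consume pilot-testing time.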
Finally, stakeholders are a key contextual element as well because their expectations of
the program and the evaluation can vary drastically from one another. Stakeholders for this
program include the development team, program clients, the marketing team, the direct sales
team, the portfolio manager, the leadership team, the program manager, and the project manager.
The development team is focused mainly on the human resource aspects of the program and how
much work is going to land on them. The clients want a high quality deliverable that increases
productivity and customer satisfaction for them internally because their employees are gaining
the skills they need from the program. The marketing team, sales team, portfolio manager, and
leadership team are focused on budget, revenue, and client satisfaction. The program manager
and project manager are focused on the overall program from budget to resources and from
revenue to customer satisfaction. All of this needs to be taken into consideration so that bias is
minimized; knowing these expectations will make biases easier for the evaluator to spot.

Evaluation Model Table

Expertise and Consumer-Oriented Approaches

Advantages:
- These are the oldest evaluation approaches and have been reviewed and revised for years by evaluation experts (Fitzpatrick, Sanders, & Worthen, 2010).
- Crowd sourcing (the consumer-oriented approach) allows the evaluator valuable feedback from a larger audience and real-world program enhancement suggestions from the target audience.
- Expertise-oriented evaluation leverages the expertise of SMEs, which adds great value (Fitzpatrick, Sanders, & Worthen, 2010).
- The higher education arena leverages this evaluation approach for accreditation. This has led to the creation of industry standards and best practices (Fitzpatrick, Sanders, & Worthen, 2010).

Disadvantages:
- The expertise-oriented approach is centered on the professional opinion of the experts brought in, and those experts won't always agree.
- The experts are SMEs, not necessarily evaluators. This leads to a risk of bias.
- Experts and consumers may not be searching for the same data that the program team requires, so the end value of the evaluation results might be compromised due to lack of stakeholder engagement (Fitzpatrick, Sanders, & Worthen, 2010).

Program-Oriented Evaluation Approaches

Advantages:
- Theory-based evaluation is the most rapidly growing area of evaluation (Fitzpatrick, Sanders, & Worthen, 2010).
- The objectives-oriented approach focuses on a specified activity or objective and evaluates the extent to which those activities and objectives are achieved (Fitzpatrick, Sanders, & Worthen, 2010).
- Logic models fill the gap between the program and its objectives by linking key inputs and outputs (long-term and short-term).
- Theory-based evaluation involves developing a theory for why the program should achieve the outcomes. This theory should be based on stakeholders' input, research, proven theory, and the evaluator's knowledge and expertise (Fitzpatrick, Sanders, & Worthen, 2010).

Disadvantages:
- When the logic map is not included, the evaluator does not see the links between the inputs and outputs; they don't know why the outputs are occurring.
- Bias on the evaluator's part could be very detrimental if not recognized. Theory-based evaluators may struggle to see beyond their self-imposed blinders (Fitzpatrick, Sanders, & Worthen, 2010).
- Sometimes the evaluators tend to test the theories and may neglect the information needs of the stakeholders (Fitzpatrick, Sanders, & Worthen, 2010).
- This approach has been criticized by evaluation experts for over-simplifying program delivery (Fitzpatrick, Sanders, & Worthen, 2010).

Decision-Oriented Evaluation Approaches

Advantages:
- This approach was developed to counteract evaluations being ignored and not having the necessary impact (Fitzpatrick, Sanders, & Worthen, 2010).
- This evaluation approach can assist administrators, managers, policymakers, boards, program staff, and others in leadership roles in pushing future decisions and program revision efforts in a direction that adds value (Fitzpatrick, Sanders, & Worthen, 2010).
- It incorporates both a systems-based and a people-based approach working towards the same goal.
- Context, Input, Process, and Product (CIPP) evaluations are designed to help with planning, structuring, implementation, and recycling decisions (Fitzpatrick, Sanders, & Worthen, 2010).
- This approach can be utilized prior to program completion in order to determine program flaws or room for improvement prior to official launch.
- Utilization-Focused Evaluation (UFE) is based upon the assumption that the evaluation is to inform key decisions and that the evaluation is better when a dedicated subject matter expert is involved (Fitzpatrick, Sanders, & Worthen, 2010).

Disadvantages:
- CIPP focuses on managers and those above; end users and non-leadership stakeholders, those closest to the program, are left out.
- The decision-oriented approach is too focused on decisions; stakeholders are not included in the evaluation process, even though they are valid and valuable stakeholders (Fitzpatrick, Sanders, & Worthen, 2010).
- The focus is on the decision maker's (sometimes personal) agenda; this causes the evaluator to struggle when having to answer questions that do not align with the decision maker's agenda (Fitzpatrick, Sanders, & Worthen, 2010).
- The evaluation needs to take place in a stable environment to end in valid evaluation results (Fitzpatrick, Sanders, & Worthen, 2010).

Participant-Oriented Evaluation Approaches

Advantages:
- Leverages the program and evaluation stakeholders to provide evaluation support.
- The stakeholders are closest to the program and provide the evaluator with valuable historic and in-depth program information.
- Participant-oriented evaluation approaches are geared towards the stakeholders' asks.
- Stakeholders who are involved become more invested and supportive of the evaluation (Fitzpatrick, Sanders, & Worthen, 2010).

Disadvantages:
- The two broad categories of drawbacks are (a) the feasibility, or manageability, of implementing a successful participative study; and (b) the credibility of the results to those who do not participate in the process (Fitzpatrick, Sanders, & Worthen, 2010).
- It takes more resources (time and budget) to utilize stakeholder input.
- Leveraging stakeholders who are close to the program may produce biased results (Fitzpatrick, Sanders, & Worthen, 2010).
- The stakeholders may not have the competency to conduct the evaluation (Fitzpatrick, Sanders, & Worthen, 2010).
- Participants can bring their political biases into the evaluation.
Explain your choice of model for your program evaluation: I plan on utilizing a synthesized approach
to evaluate the Tax Staff Essentials (TSE) program. It will be a blend of an objectives-oriented approach
and a participant-oriented approach. "The key role for the evaluator in an objectives-oriented evaluation is
to determine whether some or all of the program objectives are achieved and, if so, how well they are
achieved" (Fitzpatrick, Sanders, & Worthen, 2010, p. 154). Having the stakeholders involved when
assessing the program objectives is imperative because they have the history with the program and can
illuminate the objectives for the evaluator. "Evaluators have the skills to analyze the merit or worth of a
program and to make recommendations for improvement or action. But, stakeholders have knowledge
that we do not. Policymakers know their decisions, and the factors, political and otherwise, that may
influence their decisions. Program managers and staff know the details of the program and some of the
struggles and successes of the students or clients. They know what they have tried before and why it
didn't work" (Fitzpatrick, Sanders, & Worthen, 2010, p. 223). It is important to remember that everyone
involved in the evaluation brings a specific skill set and knowledge set; they are adding their own value,
and you have to approach working with stakeholders in that manner.
By utilizing aspects of these two approaches I will get the value of the stakeholders and their
knowledge of the program history and evolution, as well as information on their goals for the program,
which gives the evaluator valuable information to work from. I can then work collaboratively with the
stakeholders to determine the objectives of the program and determine whether those objectives have been
met, and, if they haven't, how the program could be modified so that those goals could be met (Fitzpatrick,
Sanders, & Worthen, 2010, p. 223).

Evaluation Criteria

The Tax Staff Essentials (TSE) Program is an internationally selling continuing education
program for tax professionals. Due to its expansive reach in terms of market and the fact that it
has now been marketed and sold for four years, it is past time to evaluate the educational value
and monetary value of the program. In order to do that, the program must be evaluated to ensure
its quality and profitability.
The purpose of the evaluation is to assess course impact and learner knowledge
retention, and to determine whether the content in the program is comprehensive enough to meet
the needs of current tax professionals. The diverse set of stakeholders wants to ensure they are
funding the development of programs that are not only adding value to industry professionals, but
that are also going to provide the organization with a profit. In order to garner the buy-in of the
stakeholders, the potential evaluation questions will be sent out for exposure and comment to all
the stakeholders, including those in leadership and those on the development team. The value in
this is that the stakeholders at these various levels may see a nuance that I don't. Also, some of the
stakeholders are SMEs, and soliciting their opinions will be of assistance at a technical level.
The following evaluation questions were determined based on the objectives of the
evaluation: to show the long-term profitability of the program, the educational value of the
program to tax practitioners in industry, and whether the content still covers all topics of value to
tax practitioners. We will not be evaluating the internal development process, as it is very lean and
has been reviewed previously and fine-tuned over the last four years.
1. Do the learners understand the tax topics that are being presented in the program
materials? Ensuring that the learners are at least able to take in the information to begin
with is imperative. They can't retain and apply the information if this step does not take
place.

2. What topics do tax practitioners face in practice that are not covered in this
program? By identifying trends in knowledge needs, we can work ahead to provide the
education that those in the tax industry need by adding to the program.

3. What percentage of customers are return customers (i.e., they purchase the TSE
program for their employees annually)? This suggests customer satisfaction with the
product and can also help project the future monetary value of the program to the
organization (see the calculation sketch after this list).

4. Have clients seen an increase in productivity and customer satisfaction in those
employees that have taken part in the program? This aligns to the need to see not only
whether the program is selling and whether the participants are getting the education they
need, but whether they are able to apply it so that their employers see an ROI when
purchasing the program.

5. How will you apply the skills you learned in this program? This question, posed to
participants in the actual program, provides feedback on what aspects of the courses they
focus on for their individual roles and thus gives us better direction in terms of content
development for the program.
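
As a concrete illustration of how question 3 could be answered from annual sales records, the Python sketch below computes the return-customer rate as the share of one year's purchasers who purchased again the following year. The data layout and firm names are hypothetical.

```python
def return_customer_rate(prior_year, current_year):
    """Percentage of prior-year customers who purchased again in the current year."""
    if not prior_year:
        return 0.0
    returning = set(prior_year) & set(current_year)  # firms present in both years
    return 100.0 * len(returning) / len(set(prior_year))

# Hypothetical client lists for two consecutive program years:
customers_2015 = ["Firm A", "Firm B", "Firm C", "Firm D"]
customers_2016 = ["Firm A", "Firm C", "Firm D", "Firm E"]

print(return_customer_rate(customers_2015, customers_2016))  # -> 75.0
```

Tracked year over year, this single figure supports both the customer-satisfaction inference and the revenue projection the question is meant to inform.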
Taking on the role of evaluator for this program, I intend to leverage data from these five
questions and a synthesized approach of the objectives-oriented and participant-oriented
approaches to evaluate the Tax Staff Essentials (TSE) program. "Evaluators have the skills to
analyze the merit or worth of a program and to make recommendations for improvement or
action. But, stakeholders have knowledge that we do not. Policymakers know their decisions,
and the factors, political and otherwise, that may influence their decisions. Program managers
and staff know the details of the program and some of the struggles and successes of the students
or clients. They know what they have tried before and why it didn't work" (Fitzpatrick, Sanders,
& Worthen, 2010, p. 223).
Next steps will be to put the evaluation questions and criteria out for comment to the
stakeholders for two weeks. I will then compile the comments and revise, and send the revision
out for three days of final comments or changes before launching the evaluation. This will ensure
that the stakeholders are invested in the success of the evaluation and that I will have the support I
need to continue with the evaluation, as this is not something our organization traditionally undertakes.

Logic Map


Data Collection Design and Sampling Strategy (Audio)
Please click the link below to access the audio presentation.

Data Collection Design & Sampling Strategy.mp3


Reporting Strategy Table

Reporting Strategy Table: Tax Staff Essentials Program Evaluation

Stakeholder(s): Executive Leadership (VP of Member Learning & Competency, Director of
Learning Design & Development, and Director of Content Development)

Reporting Strategy: The Executive Leadership will need to be included in high-level status
reports that are sent out weekly via email. They must also be included in the live presentation of
the evaluation findings and the dispersion of the final evaluation report. Executive Leadership
has a vested interest in the success of the program and will be included in key decisions such as
the design and data collection strategy; however, their focus is on revenue, not necessarily on the
educational benefit of the program. Communications should be tailored to accommodate this
(Fitzpatrick, Sanders, & Worthen, 2010).

Implications: The results of this program evaluation might lead to a large-scale redesign of the
TSE program. This would mean we would need to determine resources (both human and
monetary) to meet the redesign needs. We would also have to work with them to determine
overarching priority in the overall product pipeline.

Stakeholder Involvement: The Executive Leadership team would need to be involved with
signing off on the evaluation data collection methods and be informed of weekly progress during
the evaluation process at a high level. They would also need to be included in the presentation of
the findings so that next steps can be determined.

Stakeholder(s): Tax Portfolio Manager

Reporting Strategy: The Tax Portfolio Manager is very similar to the Executive Leadership in
terms of communication needs. They need a high-level overview of progress and will be
included in the final evaluation findings presentation and report dispersion. However, their
priorities are put below those of the other stakeholders.

Implications: If the evaluation results lead to a redesign, the Tax Portfolio Manager will have to
determine the overall budgetary allowance for the redesign and that project's priority in the
pipeline.

Stakeholder Involvement: The Tax Portfolio Manager would need to be informed of weekly
progress during the evaluation process at a high level. They would also need to be included in
the presentation of the findings so that next steps can be determined.

Stakeholder(s): Program Manager/Project Manager

Reporting Strategy: The Program Manager/Project Manager is also the Evaluator for this
program evaluation, so they will have first-hand knowledge of the evaluation as it unfolds and
will be responsible for communicating appropriately with all stakeholders.

Implications: The results of this program evaluation might lead to a large-scale redesign of the
TSE program. This would mean additional work for the Program/Project Manager.

Stakeholder Involvement: The Program Manager/Project Manager is also acting as the Evaluator
of the TSE Program.

Stakeholder(s): Project Team (Development)

Reporting Strategy: The Project Team should be utilized for their knowledge of the program and
how it is developed. They don't need to know specifics as the program evaluation takes place,
but they should be included in the distribution list for the final program evaluation findings.

Implications: The results of this program evaluation might lead to a large-scale redesign of the
TSE program. This would mean additional work or a change in the current development process
for the Project Team.

Stakeholder Involvement: The Project Team should be informed of the evaluation results once
the program evaluation has concluded, and next steps can be discussed based on the findings.

Values, Standards, and Criteria: The evaluation will be conducted according to the American
Evaluation Association's ethical standards (The Guiding Principles). In order to maintain these
standards, the program evaluation will be accurately scoped, the data collection approaches used
will be the best for this particular program evaluation, and they will be agreed upon by Executive
Leadership. There will be multiple data collection approaches in order to gain further insights, and
there will be two methods of data analysis as well. Utilizing two approaches (mixed-method and
triangulation) will allow for a more concrete validation of the results of the evaluation data
collection process (Fitzpatrick, Sanders, & Worthen, 2010).
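
To illustrate what the triangulation step could look like in practice, the Python sketch below cross-checks one quantitative source (course assessment scores) against a second source (the share of positive survey responses) and treats a finding as validated only when both sources agree. The cutoffs and inputs are assumptions made for the example, not values taken from this evaluation plan.

```python
from statistics import mean

def triangulate(assessment_scores, positive_survey_share,
                score_cutoff=75.0, survey_cutoff=0.6):
    """Cross-check two independent data sources for one course."""
    quantitative_ok = mean(assessment_scores) >= score_cutoff
    qualitative_ok = positive_survey_share >= survey_cutoff
    if quantitative_ok and qualitative_ok:
        return "validated: both sources indicate the course is effective"
    if quantitative_ok != qualitative_ok:
        return "conflict: sources disagree; investigate before reporting"
    return "validated: both sources indicate the course needs revision"

print(triangulate([82, 78, 90], 0.70))  # both positive -> validated
print(triangulate([82, 78, 90], 0.40))  # sources disagree -> investigate
```

The value of the conflict branch is that disagreements between sources are surfaced for follow-up rather than silently averaged away, which is the point of triangulating at all.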
Potential ethical issues: This evaluation, as with all evaluations, should and will be conducted
according to the ethical standards of The Guiding Principles from the American Evaluation
Association. In order to adhere to these standards, potential bias must be identified and steps must
be taken to mitigate that bias. The overarching bias stemming from Executive Leadership and the
Tax Portfolio Manager is that their focus is on revenue, not necessarily educational value; but the
program has to be evaluated at both levels, and the educational value of the program is directly
linked to its monetary value. The other bias risk stems from the Program Manager/Project
Manager, who is also acting as the evaluator for the program. The individual has been involved
with the program since its inception four years ago, which means they have a wealth of
knowledge about the program. However, it can also lead to a bias toward the value of the program,
leading to data skewed towards the program's benefit (Fitzpatrick, Sanders, & Worthen, 2010).

References

Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches
and practical guidelines (4th ed.). Boston, MA: Pearson.
Project Management Institute. (2008). A guide to the project management body of knowledge
(PMBOK Guide). Newtown Square, PA: Project Management Institute.
