with Ryan Watkins, PhD, and Doug Leigh, PhD
All rights reserved. Printed in the United States of America. No part of this material may
be reproduced or utilized in any form or by any means, electronic or mechanical,
including photocopying, recording, or by any information storage and retrieval system
without written permission from the author.
ISBN 978-1-59996-128-6
Chapter 1: The Basics of Mega Thinking and Planning—The Rationale for the
Seven Self-Assessment Instruments ..................................................................................... 5
Introduction......................................................................................................................... 5
The Societal Value-Added Perspective and Frame of Mind .............................................. 5
Guide One: The Organizational Elements Model (OEM)........................................... 6
Guide Two: Six Critical Success Factors .................................................................... 6
Guide Three: A Six-Step Problem-Solving Model ..................................................... 10
New Realities for Organizational Success.......................................................................... 11
Mega Planning Framework ......................................................................................... 12
What does all of this have to do with this book?................................................................ 13
Related References ............................................................................................................. 13
Chapter 2: Assessment Instruments: What are they and how can they be useful?........... 17
Questionnaires .................................................................................................................... 18
Considerations for Design and Development.............................................................. 18
What data should be collected? ................................................................................... 19
Questionnaire Structure ............................................................................................... 19
Length.......................................................................................................................... 20
Suggested Data Analysis Approach for These Instruments ............................................... 20
Displaying Response Data........................................................................................... 22
Interpretation and Required Action ............................................................................. 24
Related References ............................................................................................................. 24
The Assessment Book
Introduction
Determining what should be accomplished before selecting how to accomplish it is essential for
improving performance among individuals, within teams, and across organizations, as well as for
making valued contributions to external partners and society. This book includes a set of
professional self-assessment guides for achieving those goals, as well as a manual for how to
successfully use them for strategic decision making within your organization. From e-learning,
motivation, and competency development to valuable performance processes such as strategic
planning and evaluation, the self-assessments included in this book provide the necessary
questions, logical frameworks, and systematic guidance for making practical decisions about both
what should be accomplished and how those objectives can best be achieved.
It is specifically targeted to let you and your organization ask and answer the “right questions”
relative to the vital areas of individual and organizational performance improvement that lead to
systemic success. It is for practitioners and managers alike.
A second distinctive aspect is that these instruments address the issue of “what” before “how.”
Most professional surveys, books, guidelines, and support materials available today are in the
form of how-to-do-its. This how-to approach has popular appeal, but research tells us that
starting with implementation can often lead to consequences other than the desired results.
Interventions are, after all, only part of the performance improvement story. In this context lies
the reality offered many years ago by Peter Drucker that “it is more important to do what is right
rather than doing it right.” This set of self-assessment instruments goes to the heart of
Drucker’s insight. We offer
seven self-assessment instruments—validated by professionals and organizations, including IBM
and Hewlett Packard—that provide solid guidance on “what to accomplish” before deciding
“how to do it.”
This does not discount the application of how-to-do-it guides, but it does encourage individuals
and organizations to first ensure that they are headed where they want to end up before selecting
how to get there. Thus this series of self-assessment instruments does not conflict with existing
how-to-do-it guidance but rather provides a set of complementary assessments that until now have
been largely ignored.
Defining and delivering useful and measurable performance improvement for all organizations
and their associates are vital steps toward success. Usually missing, however, are cost-effective
ways for organizations to find out where they are in terms of results and consequences—necessary
information for deciding where they should be in terms of the required skills, knowledge,
attitudes, and abilities for defining and delivering success and then proving it. This leads to
the final unique aspect of these self-assessment instruments: each of the self-assessments relates
to establishing a value chain that aligns external clients and our shared society (Mega) with
organizational (Macro) and individual (Micro) contributions, and these with appropriate processes
and activities, and then with resources. This alignment of the results to be accomplished at three
levels with the processes and resources required to achieve them is the hallmark of an effective
self-assessment approach to performance improvement.
The seven self-assessment instruments provided in this book offer guides for you and your
organization to define what results and consequences you want to deliver so that you may sensibly
define the approaches, tools, and methods you should use to deliver success.
Each of these instruments uses a unique dual-response format (i.e., “What Is” and “What Should
Be”) with performance-related questions. This format readily provides you with useful data on the
gaps between current practice and best practice, measured and conveniently noted.
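As a minimal sketch of how dual-response data can be turned into gap scores (the item wordings and the 1-to-5 ratings below are invented examples, not items from the instruments themselves):

```python
# A sketch of the dual-response gap analysis described above.
# Item wordings and 1-5 ratings are hypothetical examples only.
responses = {
    "Objectives are stated in measurable terms": {"what_is": 2, "what_should_be": 5},
    "Planning starts with societal results":     {"what_is": 1, "what_should_be": 4},
    "Evaluation data guide revisions":           {"what_is": 3, "what_should_be": 4},
}

# The gap for each item: "What Should Be" minus "What Is".
gaps = {item: r["what_should_be"] - r["what_is"] for item, r in responses.items()}

# Sort so the largest discrepancies surface first.
for item, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"gap of {gap}: {item}")
```

Ranking the items by gap size is one convenient way to see at a glance where current practice falls furthest short of desired practice.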
Table I.1 identifies how each currently available self-assessment instrument relates to the 10
ISPI Standards.
Table I.1. The Relationship of each available self-assessment instrument and the International
Society for Performance Improvement’s Standards (ISPI, 2002)
While all integrate, an “X” identifies coverage and linking to more than one standard, and “XX”
indicates major focus.
(The column headings, one per ISPI Standard (1) through (10), include Add Value, Take a System
Approach, Establish Partnerships, Performance Analysis, Design to Specification, Development,
Implementation, and Improvement.)

Strategic Thinking and Planning:      XX XX X X X X X X XX
Needs Assessment:                     XX XX X XX X X X XX
Corporate Culture:                    XX XX XX X X XX XX XX
Evaluation:                           X X X XX XX
Performance Improvement Competencies: XX XX XX X XX XX XX XX XX XX
Performance Motivation:               X X XX
Readiness for E-learning:             X X XX XX XX X
Chapter 1
The Basics of Mega Thinking and Planning1—
The Rationale for the Seven
Self-Assessment Instruments
Introduction
Mega planning places a primary focus on adding value for all stakeholders. It is realistic,
practical, and ethical. Defining and then achieving sustained organizational success are possible
when they rely on some basic elements:

2. A shared determination and agreement on where to head and why: all people who can and might be
impacted by the shared objectives must agree on purposes and results criteria, and pragmatic and
basic tools.
This chapter provides the basic concepts for thinking and planning Mega in order to define and
deliver value to internal and external partners. The concepts and tools discussed here form the
basis of the seven self-assessment instruments in this book, with each assessment addressing a
particular part of the whole of individual and organizational performance improvement.
If you are not adding value to our shared society, what assurance do you have that you are
not subtracting value? Starting with results at the Mega (societal) level as the central focus,
strategic thinking provides the foundation for valued strategic planning.
1. Based in part on Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking
and planning primer. Performance Improvement Quarterly, 18(3), 8–16.

2. The process for defining and using Mega relies on the democratic process of all persons who
could be impacted by the definition of Mega coming to agreement.

3. In some writings, “social value” formal considerations are limited to adding value to the
associates working within the organization and thus might miss the external social value suggested
in Mega thinking and planning (Kaufman, R., 2005).
A central question that each and every organization should ask and answer is:
If your organization is the solution, what’s the problem?
This fundamental proposition is central to thinking and planning strategically. Using a Mega
focus represents a shift from the usual focus only on yourself, individual performance
improvement, and your organization to one that makes certain you also add value to external
clients and society.
What follows are three guides, or templates, that will help you define and achieve organizational
success. We will begin with an overview of each, and then proceed later in the chapter with more
detailed descriptions.
Table 1.1. The five levels of results, the levels of planning, and a brief description
These elements are also useful for defining the basic questions every organization must ask and
answer as provided later in this chapter.
4. Based on Kaufman, 2006a, and Kaufman, Oakley-Browne, Watkins, and Leigh, 2003.
Critical Success Factor 1: Use new and wider boundaries for thinking, planning, doing,
and evaluating/continuous improvement. Move out of today’s zones. There is evidence just
about everywhere we look that success tomorrow is not a linear projection (or a straight-line
function) of success yesterday and today. For instance, a successful car manufacturer that
squanders its dominant client base by shoving unacceptable vehicles into the market is likely to
go out of business, just as is an airline that focuses on shareholder value and ignores customer
value or safety. An increasing number of credible authors (Alvin Toffler and Peter Drucker among
them) tell us that the past is, at best, prologue and not a harbinger of what the future will be.
In fact, old paradigms can be so deceptive that Tom Peters suggests that “organizational
forgetting” must become conventional organizational culture for success now and in the future.5
Times change, and anyone who doesn’t also change appropriately is risking failure. It is
vital to use new and wider boundaries for thinking, planning, doing, and delivering. Doing
so will require that you get out of current comfort zones to change the very foundations of
the decision making that has made you successful in the past. But not doing so will likely
deliver failure.6
Critical Success Factor 2: Differentiate between ends and means. Focus on “what”
(Mega/Outcomes, Macro/Outputs, Micro/Products) before “how.” People are “doing-types.”
We want to swing right into action, and in so doing, we usually jump right into solutions—
means—before we know the results—ends—we must deliver. Writing and using measurable
performance objectives is something upon which almost all performance improvement
authors agree. Objectives correctly focus on ends and not methods, means, or resources.7
Ends—what you accomplish—should sensibly be identified and defined before you select means—how
you will accomplish those results—to get from where you are to your desired destinations. If we
don’t select our solutions, methods, resources, and interventions on the basis of the results we
are to achieve, on what basis will we select those means, resources, or activities?
Focusing on means, processes, and activities is usually a comfortable starting place for
conventional performance improvement initiatives. Starting with means for any organization and
performance improvement initiative would be as if you were handed process tools and techniques
without a clear map identifying a definite destination (along with a statement of why you want to
get to that destination in the first place). This is obviously
5. Peters, 1997.

6. Again, in Peters, 1997, he states that it is easier to kill an organization than it is to
change it.

7. Bob Mager set the original standard for measurable objectives. Later, Tom Gilbert made the
important distinction between behavior and performance (between actions and consequences).
Recently, some “Constructivists” have objected to writing objectives because they claim doing so
can reduce creativity and impose the planner’s values on the clients. This view, we believe, is
not useful. For a detailed discussion of Constructivism, please see the analysis of philosophy
professor David Gruender (Gruender, C. D. [1996]. Constructivism and learning: A philosophical
appraisal. Educational Technology, 30[3], 21–29).
risky. An additional risk of starting a performance improvement journey with means and processes
is that there would be no way of knowing whether your trip is taking you toward a useful
destination, nor would there be criteria telling you whether you were making progress.
It is vital to focus on useful ends before deciding “how” to get things done. Doing so also sets
the stage for other related Critical Success Factors, such as CSF 3 (use and align all three
levels of results) through application of the Organizational Elements Model (OEM), and CSF 4
(prepare objectives that have indicators of how you will know when you have arrived). Both the
OEM and performance objectives rely on a results focus because they define what every organization
uses, does, produces, and delivers, and the consequences of all that for external clients and
society.
Critical Success Factor 3: Use and align all three levels of planning and results. As we
noted in the previous Critical Success Factor, it is vital to prepare objectives that focus only
on ends and never on means or resources. There are three levels of results, shown in Table 1.2,
that are important to target and link:
Table 1.2. The levels of planning and results that should be linked during planning, doing,
and evaluation and continuous improvement and the three types of planning
8. The distinction between the three levels of results in terms of who is the primary client and
beneficiary is very important. Suffice it to say that when one calls every result an “Outcome,”
it tends to blur the differences among the three types of results.
There are three levels of planning and results, based on who is to be the primary client and
beneficiary of what gets planned, designed, and delivered. Each level of planning has an
associated level of results (Outcomes, Outputs, and Products, respectively).9 Strategic planning
targets society and external clients, tactical planning targets the organization itself, and
operational planning targets individuals and small groups. Use all three to ensure that the
results you accomplish lead to positive societal consequences.
Critical Success Factor 4: Prepare objectives—including those for the Ideal Vision and
Mission Objectives—that have indicators of how you will know when you have arrived
(mission statement plus success criteria). It is vital to state in precise, measurable, and
rigorous terms where you are headed and how to tell when you have arrived (i.e., what results you
want to achieve and how you will measure their accomplishment).10 Statements of objectives must be
in performance terms so that one can plan how best to get there, measure progress along the way,
and know when the end has been reached.11
Objectives at all levels of planning, activity, and results are absolutely vital. And
everything—from leadership and management to data entry and strategic direction setting—is
measurable. Don’t kid yourself into thinking you can dismiss important results as being
“intangible” or “non-measurable.” If you can name it, then you can measure it. It is only
sensible and rational, therefore, to make a commitment to measurable purposes and destinations.
Organizations throughout the world are increasingly focusing on Mega-level results.12
A simple mnemonic device for developing performance objectives is denoted by the acronym PQRS
(Leigh, 2004). First, performance requirements should specify the performer or performers who are
expected to achieve the desired result. Next, relevant qualifying criteria should be laid out,
typically indicating the time frame over which a result should be accomplished. Lastly, the
results to be accomplished should be stated, along with the standards against which the value of
a performance will be judged.13
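The PQRS pattern can be pictured as four named parts of a single objective. The sketch below is our own illustration of that structure, not Leigh's formal notation; the class, its field names, and the sample objective are all assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class PQRSObjective:
    """One performance objective, split into the four PQRS parts."""
    performer: str  # P: who is expected to achieve the desired result
    qualifier: str  # Q: qualifying criteria, e.g., the time frame
    result: str     # R: the end to be accomplished (an end, not a means)
    standard: str   # S: the criteria for judging the performance

    def statement(self) -> str:
        # Assemble the four parts into one readable objective statement.
        return f"{self.performer} will, {self.qualifier}, {self.result}, {self.standard}."

objective = PQRSObjective(
    performer="The customer-service team",
    qualifier="by the end of the fiscal year",
    result="resolve reported product faults",
    standard="with no more than 2 percent of cases reopened",
)
print(objective.statement())
```

Splitting the objective into named parts makes it easy to check that none of the four PQRS elements has been left out before the objective is put to use.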
Critical Success Factor 5: Define need as a gap between current and desired results
(not as insufficient levels of resources, means, or methods). Conventional
English-language usage would have us employ the common word need as a verb (or in a verb sense)
to identify the means, methods, activities, actions, and/or resources we desire or intend to
use.14 As a consequence, terms such as need to, need for, needing, and needed are common and
conventional, yet they run counter to useful planning. These terms commit you to a method or
means (e.g., training, more computers, bigger budgets) before deciding what results are to be
accomplished.

As hard as it is to change our own behavior (and most of us who want others to change seem to
resist it the most ourselves!), it is central to useful planning to distinguish between ends and
means (as noted in Critical Success Factor 2). To do reasonable and justifiable planning, we have
to (1) focus on ends, not means, and thus (2) use need as a noun. Need, for the sake of useful
and successful planning, is used only as a noun (i.e., as a gap between current and desired
results).

If you use need as a noun, you will be able not only to justify useful objectives but also to
justify what you do and deliver on the basis of costs-consequences analysis. You will be able to
justify everything you (or your organization) uses, does, produces, and delivers. It is, as a
result, the only sensible way to demonstrate added value.

9. It is interesting and curious that in the popular literature, all results tend to be called
“Outcomes.” This failure to distinguish among three levels of results blurs the importance of
identifying and linking all three levels in planning, doing, and evaluating/continuous
improvement.

10. An important contribution of strategic planning at the Mega level is that objectives can be
linked to justifiable purpose. Not only should one have objectives that state “where you are
headed and how you will know when you have arrived,” they should also be justified on the basis
of “why you want to get to where you are headed.” While it is true that objectives deal only with
measurable destinations, useful strategic planning adds the reasons why objectives should be
attained.

11. Note that this Critical Success Factor (CSF) also relates to CSF 2.

12. Kaufman, Watkins, Triner, and Stith, 1998, Summer.

13. Another compatible approach to setting objectives is provided in Kaufman, Oakley-Browne,
Watkins, and Leigh (2003), where they suggest expanding the attributes of objectives with the
acronym SMARTER.
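A minimal sketch of the costs-consequences screening this implies, assuming invented need statements and cost figures (neither is drawn from the book):

```python
# Hypothetical needs, each stated as a gap in results (never as a missing
# means such as "we need training"), with invented cost figures: what it
# would cost to close the gap versus what it costs to ignore the gap.
needs = [
    {"need": "close the delivered-product defect gap", "cost_to_close": 40_000, "cost_to_ignore": 150_000},
    {"need": "close the client-retention gap",         "cost_to_close": 25_000, "cost_to_ignore": 60_000},
    {"need": "close the reporting-accuracy gap",       "cost_to_close": 55_000, "cost_to_ignore": 50_000},
]

# A simple costs-consequences screen: closing a need is justified when
# ignoring it would cost more than closing it.
for n in needs:
    n["justified"] = n["cost_to_ignore"] > n["cost_to_close"]

justified = [n["need"] for n in needs if n["justified"]]
print(justified)
```

Because each need is a gap in results rather than a favored solution, the comparison justifies (or rules out) an intervention on the basis of consequences rather than enthusiasm for a particular means.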
Critical Success Factor 6: Use an Ideal Vision as the underlying basis for all planning
and doing (don’t be limited to your own organization). Critical Success Factor 6 represents
another area that requires some change from conventional ways of doing planning. An Ideal Vision
is never prepared for a single organization; rather, it identifies the kind of world we want to
help create for tomorrow's child. From this societally linked Ideal Vision, each organization can
identify which part or parts of the Ideal Vision it commits to deliver and move ever closer
toward. If we base all planning and doing on an Ideal Vision of the kind of society we want for
future generations, we can achieve “strategic alignment” among what we use, do, produce, and
deliver and the external payoffs of our Outputs.
14. Because most dictionaries provide common usage and not necessarily correct usage, they note
that need is used as a noun as well as a verb. This dual conventional usage doesn’t mean that it
is useful. Much of this book depends on a shift in paradigms about need: the shift is to use it
only as a noun—never as a verb or in a verb sense.
Figure 1.1. The six-step problem-solving process: A process for identifying and resolving
problems (and identifying opportunities)
The details and how-to’s for each of the three guides are also provided in the referenced sources
at the end of this chapter. The three basic “guides,” or templates, should be considered as
forming an integrated set of tools—like a fabric—rather than each standing on its own.15
When doing Mega planning, you and your associates will ask and answer the questions shown in
Table 1.3. The answers to these questions provide boundaries that help define the scope of your
strategic planning and organizational decision making.
Table 1.3. The basic questions every organization must ask and answer

Questions | Organizational Self-Assessment (No / Yes) | Partners (No / Yes)
15. Of course, each one is valuable. But used together, they are even more powerful.
A “yes” answer to all questions will lead you toward Mega planning and allow you to prove that
you have added value—something that is becoming increasingly important in our global economy and
society. These questions relate to Guide One, the Organizational Elements Model, which defines
each organizational element in terms of its label and the question each addresses. If you use and
do all of these, you are better able to align everything you use, do, produce, and deliver with
adding measurable value to yourself, to your organization, and to external clients and society.
Mega planning is proactive by its very nature. It requires that you begin all planning and decision
making with a societal perspective. This allows you to work with others to define and achieve
success. Many approaches to organizational improvement wait for problems to happen and then
scramble to respond. The temptation is to react to problems and never take the time to plan so
that surprises are fewer and success is defined—before problems spring up—and then systematically
achieved.
Mega thinking and planning is about defining a shared success, achieving it, and being able
to prove it. Mega thinking and planning is a focus not on one’s organization alone but on society
now and in the future. It is about adding measurable value to all stakeholders.
Mega thinking and planning has been on offer for many years, perhaps first formally in Kaufman’s
Educational System Planning (1972), further developed in Kaufman and English (1979), and
continuing through 2006. In one form or another, using a societal frame for planning and doing
has shown up in the works of respected thinkers, including Senge (1990) and, more recently,
Prahalad (2005). The migration continues from individual performance as the preferred unit of
analysis for performance improvement to one that first considers society and external
stakeholders. It is, after all, responsible, responsive, and ethical to add value to all.
Related References
Barker, J. A. (2001). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower
Distribution. Videocassette.
Brethower, D. (2006). Performance analysis: knowing what to do and how. Amherst, MA:
HRD Press, Inc.
Brethower, D. M. (2005, Feb.). Yes we can: a rejoinder to Don Winiecki’s rejoinder about
saving the world with HPT. Performance Improvement, 44(2), 19–24.
Carleton, R. (in preparation). Implementation and management of solutions. Amherst, MA:
HRD Press.
Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right
performance solutions. Atlanta, GA: CEP Press.
Kaufman, R., Guerra, I., & Platt, W. (2006). Practical evaluation for educators: Finding out
what works and what doesn’t. Thousand Oaks, CA: Corwin Press.
Kaufman, R., & Lick, D. (2000–2001, Winter). Change creation and change management: partners
in human performance improvement. Performance in Practice: 8–9.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic
planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/
Pfeiffer.
Kaufman, R., Stith, M., Triner, D., & Watkins, R. (1998). The changing corporate mind:
organizations, visions, mission, purposes, and indicators on the move toward societal
payoffs. Performance Improvement Quarterly, 11(3), 32–44.
Kaufman, R., & Unger, Z. (2003, August). Evaluation plus: beyond conventional evaluation.
Performance Improvement, 42(7), 5–8.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing,
and accomplishing. Lancaster, PA: Proactive Publishers.
Lagace, M. (2005, January). How to put meaning back into leading. Working Knowledge.
Cambridge, MA: Harvard Business School.
Leigh, D. (2004). Conducting needs assessments: A step-by-step approach. In A. R. Roberts &
K. R. Yeager (Eds.) Evidence-based practice manual: research and outcome measures in
health and human services. New York: Oxford University Press.
Peters, T. (1997). The circle of innovation: you can’t shrink your way to greatness. New York:
Alfred A. Knopf.
Peters, T. J., & Waterman, R. H., Jr. (1982). In search of excellence: lessons from
America's best-run companies. New York: Harper & Row.
Prahalad, C. K. (2005). The fortune at the bottom of the pyramid: eradicating poverty through
profits. Upper Saddle River, NJ: Wharton School Publishing/Pearson Education, Inc.
Senge, P. M. (1990). The fifth discipline: the art & practice of the learning organization. New
York: Doubleday-Currency.
Watkins, R. (2007). Performance by design: the systematic selection, design, and development of
performance technologies that produce useful results. Amherst, MA: HRD Press, Inc.
Chapter 2
Assessment Instruments:
What are they and how can they be useful?
In the case of a questionnaire, which is the type of assessment instrument we illustrate in this
book, the data desired are opinions, perceptions, or attitudes about a particular subject. The
purpose of each instrument is to learn people’s perceptions of each item within the instrument.
It is important to note that the findings of a questionnaire reflect reality according to each
individual, which is not independently verifiable. For that reason, you should triangulate
people’s perceptions with other forms of data, such as actual performance that can be measured
through observations and work products. Whatever data you are after, the methods you select
should be focused on answering the “right” question so that useful decisions can be made.
Target population characteristics, such as culture, language, education, past experiences, and
gender, are also essential to consider. Whether written questionnaires, group techniques,
interviews, or tests are used, one must understand the impact of these characteristics when
deriving questions and methods to collect data from individuals. The words in a question can mean
many different things to different people, based on a myriad of factors. In some instances, those
developing the data collection instruments can unconsciously over-rely on their own experiences
and sense of “what is.” Such is the case with questions that include colloquialisms that,
although well known to one group of people, are completely unfamiliar to others. The results from
these questions are often misleading, as the interpretations of the questions can potentially be
as numerous as the respondents. Similarly, an approach can be appropriate in one culture but
perhaps not in others. For instance, in some cultures it is considered rude to publicly disagree
with the position of others. In such cases, it may be difficult to use a standard group technique
to elicit honest responses from a group.
Other important factors to consider when selecting data collection instruments are the relative
costs, time, and expertise required to develop and/or obtain them. Once a range of suitable
alternatives has been identified based on the type of data required and their source, the
ultimate selection should be based on the relative feasibility of each alternative. While a
face-to-face interview might be the best choice in terms of the data the evaluator is after on a
given project, the sheer number of people to be interviewed might put the time and money required
beyond the scope of the project.
Questionnaires16
Considerations for Design and Development
One of the most popular data collection tools is the questionnaire. As a general guideline for
increasing the usefulness of questionnaires, the questions posed should be geared toward informed
opinions, such as those based on the target group’s personal experience, knowledge, background,
and vantage point for observation. Questionnaires should avoid items that lead respondents to
speculate about the information being requested, and you should not use a questionnaire to
confirm or shape a pre-existing bias.
For instance, you would not want to ask in a questionnaire “If you were to buy this car, how
would your spouse feel about your decision?” The respondent could only speculate in their
answer to this question. Equally, you would not want to ask a leading question such as “Do all of
the wonderful safety features included with this car make you feel safer?”
Perhaps no questionnaire can be regarded as perfect or ideal for soliciting all the information
required; in fact, most have inherent advantages as well as flaws (Rea and Parker, 1997).
However, there are factors, including professional experience and judgment, that can help secure
those advantages and reduce the effects of the inherent flaws. In developing the self-assessments
included in this book, the authors have striven to overcome many of these challenges.
Another advantage of using questionnaires, such as those provided in this book, is that they can
be completed by respondents at their own convenience and at their own pace. Though a deadline
for completion should be given to respondents, they still have sufficient time to carefully reflect,
elaborate, and if appropriate, verify their responses. Of course, the drawback here is that mail-out
or online questionnaires can require significantly more time to administer than other methods.
The sooner you get a response, the more likely it will be complete.
Perhaps one of the most important advantages is the possibility of anonymity. [17]
Questionnaires can be administered in such a way that responses cannot be traced back to
individual respondents. Explicitly communicating this to potential respondents tends to increase
the chances of their cooperation on at least two levels: (1) completing the survey to begin with,
and (2) being more forthcoming and honest in their responses. However, even though guaranteed
anonymity may increase the response rate, the overall response rate for questionnaires is usually
still lower than for other methods.
When response rates are low, follow-ups, oversampling, respondent replacement, and non-
respondent studies can contribute toward a more representative, random sample, which is critical
for generalizing findings. Still, there will usually be some bias in the sample due to self-selection;
some people, for their own reasons, might not respond to a questionnaire. But a representative
sample is a must.
[16] Based on Guerra-López, 2007.
[17] Again, there are different opinions on anonymity. Some think it vital; others suggest that people
should not hide their observations, thinking, and suggestions. Choose based on the environment in
which you are using the questionnaire.
Assessment Instruments
There are a number of characteristics on which respondents and non-respondents may differ, and
these differences can affect the findings. You want to know where people agree and where they
do not. This is another important issue to acknowledge when interpreting and presenting data
collected through questionnaires.
The self-assessments included in this book provide a baseline for each area. They have been
developed to provide useful information for most organizations. You may be tempted to
customize some of the questions or add new questions that address concerns unique to your
organization. The guides described in Chapter 1 can be valuable aids for tailoring the instruments
to your organization and its culture. Just remember to focus on ends (rather than means) and
always maintain societal results as your primary guide for making decisions.
The instruments provided are based on key issues to consider in the design, development, and/or
selection of useful questionnaires. The important variables considered in the assessment items in
this book may be reviewed in Guerra-López (2007). [18]
Questionnaire Structure
Questionnaire respondents are sensitive not only to the language used in each question but also
to the order in which the questions are asked. Keep in mind that each question can become the
context for the next. Thus, a poorly structured questionnaire may not only confuse respondents
and cause them to provide inaccurate responses, but may also lead them to abandon the
questionnaire altogether.
A well-structured questionnaire should begin with straightforward yet interesting questions that
motivate the respondent to continue. As with any relationship, it takes time for an individual to
feel comfortable sharing sensitive information; therefore, sensitive items should be saved for
later in the questionnaire. Questions that focus on the same specific issue should be presented
together to maximize the respondent’s reflection and recall. One way for both the questionnaire
designer and the respondent to get a clear picture is to cluster specific items around different
categories.
[18] For a more advanced analysis of developing and testing items within a questionnaire, see
DeVellis, R. F. (2003). Scale development: theory and applications. Thousand Oaks, CA: Sage
Publications.
Length
Simplicity is key. Nobody wants to complete a long and complicated questionnaire. The ques-
tionnaire should include exactly what is required—nothing more, nothing less. Only relevant
indicators should form the basis of a questionnaire. While there may be plenty of interesting
information that could be collected through the questionnaire, if it is not central to the indicators
being investigated, it will only be a distraction—both for the evaluators and the respondent.
In considering length, the questionnaire designer should think not only about the number of
items but also about the time the respondent will invest in completing them. As a general rule,
the entire questionnaire should take no more than 30 minutes to complete, and ideally about half
that long.
Analysis One: Discrepancy. For each question on a self-assessment, a gap analysis should
be performed by subtracting the value assigned to the “What Is” (WI) column from the value
assigned to the “What Should Be” (WSB) column (see Figure 2.1). The results of this analysis
will identify discrepancies between current and desired performance for each variable of the
assessment. The size of the gap can provide valuable information in determining the perceived
acuteness of the need or the extent to which opportunities can be capitalized upon. The results
of this analysis are, however, necessary rather than sufficient for quality decision making.
Alone, they only provide isolated values (data points) that must be put into context through
their relationships with the other analyses described below.
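The gap computation itself is trivial to automate. A minimal Python sketch follows; the item wordings are borrowed from the needs assessment instrument later in this book, but the ratings themselves are invented for illustration:

```python
# Hypothetical responses: for each item, a "What Is" (WI) and a
# "What Should Be" (WSB) rating on the 1-5 scale used in this book.
responses = {
    "We formally plan.": {"wi": 2, "wsb": 5},
    "We do needs assessment.": {"wi": 3, "wsb": 4},
    "Needs assessments are valued in our organization.": {"wi": 4, "wsb": 3},
}

# Analysis One: discrepancy = WSB minus WI for every item.
gaps = {item: r["wsb"] - r["wi"] for item, r in responses.items()}

for item, gap in gaps.items():
    print(f"{gap:+d}  {item}")
```

A positive gap suggests a need; the direction and position analyses described in the text put these isolated values into context.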
Analysis Two: Direction. For each question, the positive or negative sign of the gap should
be identified to differentiate needs (when WSB is greater than WI) from opportunities (when WI
is greater than WSB).
The distinction between needs and opportunities provides a context for discrepancy data,
which by itself only illustrates the size of the gap between “What Should Be” and “What Is.”
Based on the direction of the discrepancy, decision makers can consider which gaps illus-
trate needs that have the potential to be addressed through organizational efforts, and which
identify opportunities that the organization may want to leverage (or maintain) in order to
ensure future success.
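The direction rule reduces to a sign check. A sketch in Python (the function name is ours, not the book’s):

```python
def direction(wi: int, wsb: int) -> str:
    """Classify a gap: a need when WSB exceeds WI, an opportunity when
    WI exceeds WSB, and no gap when the two ratings are equal."""
    if wsb > wi:
        return "need"
    if wi > wsb:
        return "opportunity"
    return "no gap"

print(direction(2, 5))  # need: desired performance exceeds current
print(direction(4, 3))  # opportunity: current performance exceeds desired
```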
Analysis Three: Position. The position analysis illustrates the relative importance or priority
of discrepancies from the perspective of the respondents. While many gaps between
“What Should Be” and “What Is” may have equivalent discrepancies and be in the same
direction, the position of the discrepancy on the instrument’s Likert scale can demonstrate
its relative priority in relation to other gaps.
For example, two needs may each show a discrepancy of +3, but one reflects a gap between
WSB = 5 and WI = 2 while the other reflects WSB = 3 and WI = 0. Interpreted in relation to one
another, these discrepancies indicate a perceived prioritization of the first need over the second.
This information can be valuable in selecting which discrepancies to address when resources are
limited.
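The worked example above can be reproduced in code. Ranking first by gap size and then by position on the scale (higher “What Should Be” first) is one reasonable reading of the position analysis, not the book’s prescribed algorithm:

```python
# The two needs from the example: equal +3 discrepancies,
# but at different positions on the scale.
needs = [
    {"item": "first need", "wi": 2, "wsb": 5},
    {"item": "second need", "wi": 0, "wsb": 3},
]

# Sort by (gap, position); reverse=True puts larger gaps and
# higher scale positions first.
ranked = sorted(needs, key=lambda n: (n["wsb"] - n["wi"], n["wsb"]), reverse=True)

print([n["item"] for n in ranked])
```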
Together, the three analyses (discrepancy, direction, and position) can offer valuable data for
identifying, prioritizing, and selecting performance improvement efforts related to the complete
system being assessed.
Analysis Four: Demographic Differences (optional). Organizations may want to view the
results of the self-assessment based on demographic differences (e.g., division, location,
position type, years of experience). Analysis of the results of the self-assessment can be
reviewed by demographic variables if items related to the desired categories are added to the
instrument. If your organization has collected data regarding the demographics of those
completing the self-assessment, the analysis for discrepancy, direction, and position should
be completed for each demographic on a section, subsection, and/or item basis depending on
the level of information required for decision making.
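When demographic items are collected, the discrepancy analysis can be repeated per group. A standard-library Python sketch (the division names and ratings here are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical respondent records for a single item: a demographic
# field plus "What Is" and "What Should Be" ratings.
records = [
    {"division": "Sales", "wi": 2, "wsb": 5},
    {"division": "Sales", "wi": 3, "wsb": 5},
    {"division": "Operations", "wi": 4, "wsb": 4},
]

# Group each respondent's gap (WSB - WI) by demographic.
by_division = defaultdict(list)
for r in records:
    by_division[r["division"]].append(r["wsb"] - r["wi"])

# Report the mean gap per group.
for division, gaps in by_division.items():
    print(f"{division}: mean gap {mean(gaps)}")
```

The same grouping can be run per section, subsection, or item, matching the level of detail required for decision making.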
A graphic reporting format jointly derived by Roger Kaufman & Associates and
E-valuate-IT [19] is presented in Figure 2.2. By displaying your results in this way, you and
your associates can quickly scan for both gaps and patterns.
[19] This assessment instrument is available online at www.e-valuate-it.com/instruments/RKA for
groups and may also be customized for your particular organization. An associated analysis service
is also available.
[Figure 2.2: Graphic format for reporting self-assessment results, showing gaps and patterns at a glance.]
You may also want to collect information on why these gaps exist (via a causal or SWOT
analysis) so that the solutions and courses of action you consider have a high probability of
closing those gaps, and in turn, yielding the consequences you want and require.
Related References
Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for
performance improvement practitioners. Amherst, MA: HRD Press, Inc.
Rea, L. M., & Parker, R. A. (1997). Designing and conducting survey research: A
comprehensive guide (2nd ed.). San Francisco, CA: Jossey-Bass Publishers.
Chapter 3
Strategic Thinking and Planning—How Your
Organization Goes About It (and Finding out
What it Might Want to Change)
Roger Kaufman, PhD
Planning is an alternative to relying on good luck. Strategic planning (and strategic thinking—the
way we think when we want to create our own future) is a proactive approach. It is creating the
kind of world we want for our children and grandchildren.
When using the following assessment instrument, we ask you to rate, on the two dimensions of
“What Is” and “What Should Be,” how your organization views and goes about strategic
planning.
The following questions identify the most important variables so that you and your organization
can calibrate whether you are doing really useful strategic thinking and planning. What you
target in your planning and how precisely you develop your planning criteria will make an
important difference in your success.
This is a very specific assessment instrument, and the terms used are chosen carefully, so please
do not skim through the items. If some terms seem strange to you, please check the Glossary at
the back of this book.
This instrument is designed to provide you with information on whether or not you and your
organization are really doing strategic planning. Most organizations, in spite of the label, do not
do strategic planning but rather do tactical planning (considering and choosing methods, means,
programs, and projects) or operational planning (making sure that what is planned is kept on
target). All three levels of planning are important. The most effective planning starts with
strategic, or Mega—the societal contribution we make using our organization as the vehicle.
Thus, this instrument allows you to determine the extent to which you are being strategic. Please
pay close attention to the words in this assessment instrument.
In the first column (left side), indicate how you see our organization currently operating. In the
second column (right side), indicate how you feel our organization should be operating.
For each item, think of the phrase “In our organization, from my experience…” as you consider
the current and preferred states of the organization. If there are items about which you are
uncertain, give your best response.
[Strategic Thinking and Planning Survey, Items 1–27: each item is rated in a “What Is” column and a “What Should Be” column on a five-point scale (1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently).]
Item 1 tells you if you are future oriented (or, perhaps, just planning to make the here-and-now
more efficient). Items 2, 3, 4, 5, and 6 probe whether there is an understanding of the differences
among strategic, tactical, and operational planning. Discriminating among these is really
important, even though some might see such detail as annoying or not useful. Items 7 and 8 will
let you know the extent to which you not only understand and distinguish among strategic,
tactical, and operational planning but also link and align the plans and criteria for all three.
Items 9 and 15 identify whether all the partners who can and might be affected by what gets
planned and delivered are involved. If they are not, you risk resistance by failing to achieve what
Peter Drucker called “transfer of ownership”: the sense that the plan is “our plan” and not “their
plan.” If people don’t “own” a plan, its implementation and success will likely be seriously
limited.
Items 10 and 11 will tell you if you are planning for results or just for methods, programs, and/or
resources. Item 24 will tell you about using the plan’s criteria to define and deliver
implementation. Items 23 and 25 reveal whether rigorous measurable criteria and the needs (not
“wants”) assessment are used for evaluation and continual improvement.
Item 12 tells you if the plan is proactive. Items 13 and 22 tell you if the results of implementing
planning are used to revise the plan: continual improvement.
Items 16 and 17 relate to the validity of the data for strategic planning. If there are gaps here, the
database you use for defining needs as gaps in results (and not gaps in resources or methods) is
likely suspect.
Item 18 and parts a–j will let you know if you are doing real strategic planning: whether you are
basing your planning on Mega (measurable societal value added) or on a more usual set of
existing organizational statements of purpose (which almost always focus on the organization
and not on the external value added). Items 14 and 19 will let you know if the elements of Mega
and the Ideal Vision are really being integrated or if there is “splintering” going on. See the
elements of the Ideal Vision (a–j) as a fabric, not individual strands. And note that your
organization might not even intend to deal with all the elements.
Item 20 lets you know if planning is really used, in the sense that it actually comes before
swinging into action. Items 21, 26, and 27 relate to the plan really being used (or just serving as
window dressing or compliance).
The data are now available for objective assessment of your organizational culture, the major
variables in it, and what to change and what to continue.
There are several critical items for which a high “What Should Be” rating should be obtained if
your commitment is to sustain success. These cornerstone variables are Items 1, 2, 5, 7, 11, 14,
15, 16, 17, 19, 20, and 27; the others are part of the success tapestry. When there are no high
“What Should Be” ratings for these, it can signal that your organization is not really doing
strategic planning but might be doing tactical or operational planning and just using the label
“strategic.” This assessment instrument can guide you in creating a successful strategic thinking
and planning organization.
Related References
Barker, J. A. (2001a). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower
Distribution. Videocassette.
Barker, J. A. (2001b). The new business of paradigms (21st century ed.). St. Paul, MN: Star
Thrower Distribution. Videocassette.
Bernardez, M. (2005). Achieving business success by developing clients and community: lessons
from leading companies, emerging economies and a nine year case study. Performance
Improvement Quarterly, 18(3), 37–55.
Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right
performance solutions. Atlanta, GA: CEP Press.
Davis, I. (2005, May 26). The biggest contract. The Economist (London) 375(8428), 87.
Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.
Guerra, I. (2003). Key competencies required of performance improvement professionals.
Performance Improvement Quarterly, 16(1).
Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value:
The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3),
76–99.
Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for
performance improvement practitioners. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those
who refuse to be mediocre. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning
primer. Performance Improvement Quarterly, 18(3), 8–16.
Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks,
CA: Sage Publications. Also Planificación Mega: Herramientas practicas paral el exito
organizacional. (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la
Plana, Espana.
Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised).
Arlington, VA & Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement. Also,
published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los
Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.
Kaufman, R., & Lick, D. (Winter, 2000–2001). Change creation and change management: partners
in human performance improvement. Performance in Practice, 8–9.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic
planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/
Pfeiffer.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing,
and accomplishing. Lancaster, PA: Proactive Publishers.
Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs-consequences
analysis. Performance Improvement Quarterly, 10(3), 7–21.
Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of
effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.
Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring,
MD: International Society for Performance Improvement and the American Society for
Training and Development.
Watkins, R., Kaufman, R., & Leigh, D. (2000, April). A performance accomplishment code of
professional conduct. Performance Improvement, 35(4), 17–22.
Watkins, R., Leigh, D., & Kaufman, R. (1998, September). Needs assessment: a digest, review,
and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.
Chapter 4
Needs Assessment, Needs Determination,
and Your Organization’s Success
Roger Kaufman, PhD
This assessment instrument asks questions about needs assessment and using that performance
data for finding the direction for your organization. This determination and agreement on where
the organization is headed is central to any initiative for performance improvement and
delivering useful results.
Each organization will be different. These assessment questions and the pattern they provide are
designed to help you decide what you might want to change and what you might want to
continue relative to where you are headed and justifying why you want to get there. Needs
assessment data are used for finding out what results you should seek and what payoffs you can
expect.
Needs assessment, at its core, simply identifies where you are in terms of results and
consequences and where you should be. Poor or incomplete needs assessments can lead to poor
results and contributions.
This assessment instrument asks a lot of you. It uses words and concepts in very specific ways—
specific ways that are vital for your collecting appropriate data for decision making and
determining where your organization should head, why it should get there, and what data are
required for selecting the most effective and efficient ways to get from where you are to success.
Patience is invited for this instrument. If you skim-read the items, many of the vital, yet subtle,
distinctions might be missed. And the terms are used for a reason, not just to look different; this
approach is different.
A note on words: Some of the words used in this assessment instrument may seem new to you
or may be used differently than you typically use them. Review the list below that describes how
these words will be used:
• Management includes supervisors, leaders, or bosses to whom you report in the chain of
command. Associates are people with whom you work.
• Formally refers to doing something, such as collecting data, with a shared, rigorous
definition of the ways and means, rather than informally, which can be casual or
unstructured.
• Clients are those that some result is delivered to. They may be internal or external to
your organization.
• Internal clients are those within your organization (also referred to as internal
partners).
• External clients are those outside of your organization, such as your immediate clients
and the clients of your clients (also referred to as external partners).
• Stakeholders are those people internal and external to the organization who have some
personal interest in what gets done and delivered.
In the first column (left side), indicate how you see our organization currently operating.
In the second column (right side), indicate how you feel our organization should be operating.
For each item, think of the phrase: “In our organization, from my experience…” as you consider
the current and preferred states of the organization. If there are items about which you are
uncertain, give your best response.
[Needs Assessment Survey: each item is rated in a “What Is” column and a “What Should Be” column on a five-point scale (1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently). The first items read:]
1. We formally plan.
2. We do needs assessment.
3. Needs assessments are valued in our organization.
[20] “Hard” data are results that are independently verifiable (such as error rate, production rate,
employment, being a self-sufficient citizen).
[21] “Soft” data are results that are personal and not independently verifiable (such as perceptions,
opinions, feelings).
[22] Sometimes our external clients have clients themselves. Include these links in your response.
The responses can let you know about the beliefs, values, attitudes, and approach to needs
assessment that exist in your organization, and whether some changes are required if you are to
create the future you want.
Items 5 and 6 let you know if the needs assessments being done or contemplated really examine
gaps in results. This focus is vital to defining useful needs. Also pertaining to this are Items 12,
13, and 14. Items 15, 16, 17, and 18 let you know the scope of the use of needs assessment data.
Items 7 and 8 signal a possible problem. If people use a needs assessment focusing on activities,
they are missing the concept of need as a noun: a gap in results. The performance data on
training needs assessments and other activity-focused needs assessments advise that, while
popular, decisions based on such approaches will be wrong 80 to 90 percent of the time. This is
not good for defining and delivering organizational success.
Items 9, 10, and 11 will show you why useful needs assessments are not being used.
Now things get a bit trickier—tricky, but important. Items 21–25 examine the Organizational
Elements—those things that any organization uses, does, produces, delivers, and the impact all
of that has for external clients and our shared society. Look at these one at a time:
Item 25 lets you know if value added for external clients and society is examined.
Item 26 lets you know if all levels of results (individuals, small groups, departments, the organi-
zation itself, and external clients and society) are indeed linked. Of course, the most effective
(and safest, because it covers all important aspects of needs assessment and defining success)
needs assessment approach will get a positive “What Should Be” for Item 26; anything less
means the approach being used is partial.
Item 27 examines if hard performance data are collected for individuals’ jobs and tasks, Item 28
examines that same question for units within the organization, Item 29 for the organization itself,
and Item 30 for impact on external clients. Item 31 seeks to determine if hard data are collected
for impact on our shared society.
The best news you can get is high “What Should Be” ratings for Items 27–30.
As was done for hard data in Items 27–31, the same examinations are made for soft data.
Item 32 examines soft data for individual jobs and tasks, Item 33 for individual units within the
organization, Item 34 for the organization itself, Item 35 for external clients, and Item 36 for our
shared society and community. As before, the most valid needs assessments will collect both
hard and soft data at all organizational levels, and that is examined in Item 37.
Item 39 probes consequences of the needs assessment data for external clients, and Item 40
probes consequences for society and our shared communities.
Item 41 seeks to determine if needs assessment data are used for decisions relative to individual
performance. Item 44 seeks to determine if plans are related to departmental or section results,
Item 45 seeks to determine if plans are related to desired organizational results, Item 46 is
concerned about external clients, and Item 47 is concerned about society and community.
Items 42 and 43 revisit the tendency to use a so-called needs assessment as a “cover” for
jumping into premature solutions by seeking to find out if the organization’s plans are resource
driven and/or activity driven. If you get a high response on one or both of these items, you
should be alerted to the potential that your organization is solutions-driven and not results-
driven.
Item 48 looks at planning and whether it is based on needs assessment data, and Item 49 seeks to
find out if needs assessment data are used for linked decisions about resources, activities,
programs, and projects. Item 50 continues the inquiry concerning the extent to which needs
assessment data are used for linking organizational efforts and contributions to external clients
and society.
Item 51 provides a “red flag” if needs assessment data are not used for such vital decisions as
training, restructuring, layoffs, or the like. A high response on “What Is” signals trouble. The
alternative to this is found in Item 52.
Continuing the inquiry concerning needs assessments, needs assessment data, and their
application by the organization, Item 53 sees if they are used for Mega-level concerns (health,
safety, and well-being), and Item 54 examines if data are used at the Macro level, as with a
business plan. Item 55 examines if data are used for individual operations and performance
tasks.
Costs and consequences—what you give and what you get—are examined in Item 56. The
sensible and rational approach to using needs assessment data is to prioritize based on the costs
of meeting each need as compared to the costs of ignoring it. Item 57 seeks to find out if needs
identified in a needs assessment are used to set objectives: the “What Should Be” criteria are the
basis for measurable objectives.
Item 62 is another “red flag” item. It lets you know if needs assessments are done but the data
are not used.
Item 63 probes to see if needs assessments are seen by associates as being useful and as
providing important information.
Needs assessments, when done completely and correctly, will be invaluable for you as you
define where to head, justify why you want to get there, and provide the criteria for strategic,
tactical, and operational planning as well as sensitive and sensible evaluation and continual
improvement.
There are some items for which you should want high “What Should Be” scores. These are the
cornerstone items, and the other items are part of the tapestry of success: Items 4, 5, 12, 14, 26,
31, 36, 37, 38, 47, 48, 53, 57, 59, and 60.
There are some “red flag” items for which you will be better served by having (and maintaining)
low “What Should Be” ratings. These include Items 7, 8, 9, 10, 11, 19, 42, and 51.
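As a sketch, these two checks can be automated once responses are tallied. The item lists come from the text above; treating a rating of 4 or 5 as “high” is our assumption for illustration, not the book’s rule:

```python
# Cornerstone items should draw high "What Should Be" (WSB) ratings;
# "red flag" items should draw low ones (item lists from the text).
CORNERSTONE = {4, 5, 12, 14, 26, 31, 36, 37, 38, 47, 48, 53, 57, 59, 60}
RED_FLAG = {7, 8, 9, 10, 11, 19, 42, 51}

HIGH = 4  # assumed threshold: ratings of 4-5 count as "high"

def screen(wsb_ratings):
    """Return warnings for cornerstone items rated low and red-flag
    items rated high, given {item_number: WSB rating}."""
    warnings = []
    for item, rating in sorted(wsb_ratings.items()):
        if item in CORNERSTONE and rating < HIGH:
            warnings.append(f"Item {item}: cornerstone, but WSB is only {rating}")
        if item in RED_FLAG and rating >= HIGH:
            warnings.append(f"Item {item}: red flag, but WSB is high ({rating})")
    return warnings

for w in screen({4: 5, 7: 5, 12: 3}):
    print(w)
```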
These data may be used to identify what in your organization should productively change
relative to needs assessment. Remember, needs assessments provide the basic data and
justification for you to determine where you are headed, why you want to get there, and how to
tell when you have arrived.
Related References
Barker, J. A. (2001a). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower
Distribution. Videocassette.
Barker, J. A. (2001b). The new business of paradigms (21st century ed.). St. Paul, MN: Star
Thrower Distribution. Videocassette.
Bernardez, M. (2005). Achieving business success by developing clients and community: lessons
from leading companies, emerging economies and a nine year case study. Performance
Improvement Quarterly, 18(3), 37–55.
Clark, R. E., & Estes, F. (2002). Turning research into results: A guide to selecting the right
performance solutions. Atlanta, GA: CEP Press.
Davis, I. (2005, May 26). The biggest contract. The Economist (London) 375(8428), 87.
Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.
Guerra, I. (2003). Key competencies required of performance improvement professionals.
Performance Improvement Quarterly, 16(1).
Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value:
The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3),
76–99.
Kaufman, R. (2005). Defining and delivering measurable value: a Mega thinking and planning
primer. Performance Improvement Quarterly, 18(3), 8–16.
Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks,
CA: Sage Publications. Also Planificación Mega: Herramientas prácticas para el éxito
organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana,
España.
Kaufman, R. (1998). Strategic thinking: A guide to identifying and solving problems (revised).
Arlington, VA & Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement. Also,
published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los
Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.
Kaufman, R., & Lick, D. (2000–2001, Winter). Change, creation, and change management:
partners in human performance improvement. Performance in Practice, 8–9.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic
planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-
Bass/Pfeiffer.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: Defining,
prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.
Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs–consequences
analysis. Performance Improvement Quarterly, 10(3), 7–21.
Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of
effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.
Needs Assessment, Needs Determination, and Your Organization’s Success
Chapter 5
Culture, Our Organization, and
Our Shared Future
Roger Kaufman, PhD
Each organization is different, and these items and the response patterns they provide can help
your organization decide what you might want to change and what you might want to continue.
Organizational success depends on ensuring that everyone in the organization is heading to the
same destination and that people can work both together and independently to get from here to
there.
The items in this assessment instrument are based on the basic concepts provided in Chapter 1.
The statements for the instrument are performance-based—results-based—in keeping with the
basic concepts of the value and usefulness of strategic thinking and planning and “what it takes”
to be successful.
The responses we ask for cover both the current status and the perceived required status of organizational culture variables:
• Associates and work, including trust, ethics, decision making, cooperation, innovation,
and valuing
• Measuring success, including the nature of evaluations, the kinds of data collected, which data get used, criteria, clarity, dealing with success and failure, compliance, and the focus on the kinds of results to achieve
• Customer relations, including customer feedback, planning with and for customers,
objectives sharing, modes of data collection, vision and purposes
There are no right or wrong answers, just variables for you and your associates to consider in
terms of what in the culture is productive and what might be changed.
A note on words: Some of the words used in this survey may seem new or may be used differently than you typically use them. Review the list below that describes how these words will be used in this survey:
• Customers are those to whom you deliver a thing or service.
• Managers are bosses, supervisors, or leaders to whom you report in the chain of
command.
• Associates are people, or employees, with whom you work.
• Actual performance is when somebody actually does and produces something as
contrasted with how we recall processes or procedures that might be employed.
• Rewards are the incentives, perks, financial gains, better assignments, and recognition
that are given on the basis of what is done and delivered.
• Resources are dollars, people, equipment, and tools.
• Society refers to the world in which we all live—our near and distant neighbors.
In the first column (left side), indicate how you see our organization currently operating.
In the second column (right side), indicate how you feel our organization should be operating.
For each item, think of the phrase “In our organization, from my experience…” as you consider
the current and preferred states of the organization. If there are items about which you are
uncertain, give your best response.
[The survey items appear here in the original. Each item is rated twice—“What Is” in the left column and “What Should Be” in the right—on a five-point scale: 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently.]
Management Style
[The survey items for this and the following sections appear here, rated on the same five-point “What Is” / “What Should Be” scale.]
Customer Relationships
[The Customer Relationships survey items appear here, rated on the same five-point “What Is” / “What Should Be” scale.]
As with other assessment instruments of this type, gaps between “What Is” and “What Should
Be” of 1½ or more on the scale should attract your attention. The patterns of questions and how
they relate to your organization, or the organization you would like to create, are yours to
choose.
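The 1½-point rule above is easy to automate once each item’s responses are averaged. The sketch below is illustrative only: it assumes one mean “What Is” and one mean “What Should Be” score per item, and the function and variable names are invented.

```python
def attention_items(what_is, what_should_be, threshold=1.5):
    """Return (item, gap) pairs where the "What Should Be" minus
    "What Is" gap meets the attention threshold (1.5 by default)."""
    return sorted(
        (item, what_should_be[item] - what_is[item])
        for item in what_is
        if what_should_be[item] - what_is[item] >= threshold
    )

is_scores = {8: 3.5, 12: 1.5, 15: 2.0}      # mean "What Is" per item (hypothetical)
should_scores = {8: 4.0, 12: 4.5, 15: 4.0}  # mean "What Should Be" per item
print(attention_items(is_scores, should_scores))
# -> [(12, 3.0), (15, 2.0)]  (item 8's gap of 0.5 does not qualify)
```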
Associates and Work
If there are gaps here (people perhaps overly competing with one another), then there is a clue for what is causing the gap in recognition.
Scan for other patterns in terms of gaps. Are there gaps between “What Is” and “What Should
Be” for being specific about objectives (Item 15)? Are there gaps in information, as well as
human and physical resources (Items 1, 6, 7, 14, 18, 19)? Are there trust problems (Items 8 and
12)?
Management Style
These questions, and the gaps they reveal, give clues about how your associates and those in charge interact on their way to organizational success. Again, look for patterns.
What are the characteristics of managers? Courageous (Item 25) or timid? Are associates asked merely for compliance (Items 22, 23, 28), or are they encouraged to contribute to the organization’s purposes and efforts (Items 21, 26, 27, 29, 30)? What about the ethics of managers (Item 24)?
Look for patterns in the data. Do some groups of questions have a different pattern than others?
Do some groups of respondents describe a differing perspective than other groups?
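One way to look for differing perspectives across respondent groups is to average ratings per group and per item and compare the results side by side. The sketch below assumes each response is recorded as a (group, item, rating) tuple; the helper name and data layout are hypothetical.

```python
from collections import defaultdict
from statistics import mean

def group_item_means(responses):
    """responses: iterable of (group, item, rating) tuples.
    Returns {group: {item: mean rating}} for side-by-side comparison."""
    buckets = defaultdict(lambda: defaultdict(list))
    for group, item, rating in responses:
        buckets[group][item].append(rating)
    return {g: {i: mean(r) for i, r in items.items()}
            for g, items in buckets.items()}

# Hypothetical "What Is" responses to Item 24 from two groups
data = [("managers", 24, 5), ("managers", 24, 4),
        ("staff", 24, 2), ("staff", 24, 3)]
print(group_item_means(data))
# -> {'managers': {24: 4.5}, 'staff': {24: 2.5}}
```

A two-point spread between groups on the same item, as in this toy data, is exactly the kind of differing perspective worth discussing.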
Measuring Success
Here is where you can learn a lot about the organizational culture in terms of how it defines and measures success. Does the organization do formal evaluations (Item 31) and collect appropriate
data (Items 33, 34)? How are evaluation results used (Items 32, 35, 38)? What is prized by the
organization (Items 42, 43, 44)? Is there enough time for evaluation (Item 45)?
It is a truism that what gets rewarded gets done. The gaps between “What Is” and “What Should Be” here can tell you much about how success is defined and measured, and how rewards and incentives are used. Again, look for the patterns.
The Organization
The gaps displayed here also relate to what was found in Associates and Work, so there are
opportunities to look at the reliability of responses from two different sections. How good is the
environment for work (Items 46, 47)? Are the purposes of the organization clear (Item 48)? What drives the organization—what gets valued (Items 50, 51, 52)? Why do associates in the organization do things, and do them the way they do them (Items 53, 54, 55, 56, 57)? And how do associates work together (Items 58, 59)?
The gaps here should be considered along with the gaps in other areas revealed by this assess-
ment.
Customer Relationships
How does the organization really view and interact with the customer? Do associates interact
with customers about what to provide (Items 60, 61)? Are we responsive to customer requirements, or just to their wants (Items 62, 64, 65, 66)? How much do we tell the customer (Items 63, 67, 68)? And how do we use our performance data (Items 69 and 70)?
As before, the patterns of the gaps between “What Is” and “What Should Be” are useful and up
to you to determine.
Especially important, given the emerging agreement on and emphasis on organizations adding societal value (sometimes called Corporate Social Responsibility), are Items 81 and 82. If the “What Should Be” ratings for these are low, you might want to revisit your organization’s vision, related mission, and associated policy: not to add value to our shared society is a prescription for future corporate failure.
The future the organization seeks to create is its own to determine. The patterns obtained here can certainly help define that future and show whether associates are open to making it happen.
The data are now available for objective assessment of your organizational culture, the major
variables in it, and what to change and what to continue.
It is your choice which gaps you want to close and which are not important. The “What Should Be” ratings for the following items will be especially important as you move from your current results to ones that will deliver continual success. Many other items are also important, but they make up the tapestry of success while the following are the cornerstones of success: Items 9, 24, 31, 35, 36, 38, 42, 43, 48, 58, 61, 66, 69, 70, 71, 75, 78, 79, and 82.
Trouble spots that should serve as red flags include Items 10, 28, 50, 51, and 62.
It is your organization. Use the data from this assessment to change what should be changed and keep what is working well. Ask yourself and your associates, “What kind of organization do I want to work with?” and “What kind of organization would I want to work with if I were the customer?”
Related References
Barker, J. A. (2001a). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower
Distribution. Videocassette.
Barker, J. A. (2001b). The new business of paradigms (21st century ed.). St. Paul, MN: Star
Thrower Distribution. Videocassette.
Bernardez, M. (2005). Achieving business success by developing clients and community: lessons
from leading companies, emerging economies and a nine year case study. Performance
Improvement Quarterly, 18(3), 37–55.
Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right
performance solutions. Atlanta, GA: CEP Press.
Davis, I. (2005, May 26). The biggest contract. The Economist (London), 375(8428), 87.
Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.
Guerra, I. (2003). Key competencies required of performance improvement professionals.
Performance Improvement Quarterly, 16(1).
Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value:
The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3),
76–99.
Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those
who refuse to be mediocre. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning
primer. Performance Improvement Quarterly, 18(3), 8–16.
Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks,
CA: Sage Publications. Also Planificación Mega: Herramientas prácticas para el éxito
organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana,
España.
Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised).
Arlington, VA & Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement. Also,
published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los
Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.
Kaufman, R., & Lick, D. (2000–2001, Winter). Change creation and change management: partners
in human performance improvement. Performance in Practice, 8–9.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic
planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/
Pfeiffer.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing,
and accomplishing. Lancaster, PA: Proactive Publishers.
Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs-consequences
analysis. Performance Improvement Quarterly, 10(3), 7–21.
Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of
effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.
Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring,
MD: International Society for Performance Improvement and the American Society for
Training and Development.
Watkins, R., Kaufman, R., & Leigh, D. (2000, April). A performance accomplishment code of
professional conduct. Performance Improvement, 35(4), 17–22.
Watkins, R., Leigh, D., & Kaufman, R. (1998, September). Needs assessment: a digest, review,
and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.
Chapter 6
Evaluation, You, and Your Organization
Roger Kaufman, PhD
This material introduces a survey that asks about evaluation and evaluation-related considerations—how things get reviewed at your organization. Each organization will be different. These items and the pattern they provide are designed to help you decide what you might want to change and what you might want to continue relative to comparing your results with your intentions. Evaluation, at its core, simply compares one’s results with one’s intentions. Poor evaluation can lead to poor results and contributions. So let’s see what is going on within your organization.
A note on words: Some of the words used in this survey might seem new to you or may be used
differently than you typically use them. Review the list below that describes how these words
will be used in this survey:
• Supervisors are bosses, leaders, or management to whom you report in the organiza-
tion’s chain of command.
• Rewards (or incentives) are the perks, financial gains, better assignments, and
recognition that are given on the basis of what is done and delivered.
• Formally refers to doing something with a shared rigorous definition of the ways and
means in which we do something, such as collect data, rather than informally, which
can be casual or unstructured.
• Actual performance is when somebody actually does and produces something as con-
trasted with how we recall processes or procedures that might be employed.
• Clients are those to whom you deliver a thing or service. They may be internal or external to your organization.
• Internal clients are those within your organization (also referred to as internal
partners).
• External clients are those outside of your organization, such as your immediate clients
and your clients’ clients (also referred to as external partners).
In the first column (left side), indicate how you see our organization currently operating.
In the second column (right side), indicate how you feel our organization should be operating.
For each item, think of the phrase “In our organization, from my experience…” as you consider
the current and preferred states of the organization. If there are items about which you are
uncertain, give your best response.
[The Evaluation, You, and Your Organization Survey items begin here. Each item is rated twice—“What Is” in the left column and “What Should Be” in the right—on a five-point scale: 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently.]
5. We evaluate.
6. Evaluations include a focus on the results accomplished by individuals.
23 “Hard” data are results that are independently verifiable (such as error rate, production rate, employment, being a self-sufficient citizen). We use the term interchangeably with “actual performance” data.
24 “Soft” data are results that are personal and not independently verifiable (such as perceptions, opinions, feelings). They are the perceptions people hold about some performance results.
Items 6 through 11 examine the target of evaluations. Item 6 examines evaluations relative to
individuals, Item 7 to small groups, Item 8 to individual departments, Item 9 to the organization
itself, Item 10 to external clients, and Item 11 to society and the community. By breaking these
into individual questions, you may pinpoint where evaluations are or are not being targeted. Of
course, inclusion of all will yield the best evaluation results.
Items 12 through 14 focus on possible trouble spots. Item 12 asks whether evaluations might be skipped for lack of time, Item 13 for lack of expertise, and Item 14 for lack of knowing how to use the evaluation data. These are common traps that interfere with good and useful evaluation.
Item 15 is about management and supervisors’ orientation toward results and putting results and
methods/resources into useful perspective. Item 16 probes staff focus, and Item 17 is about the
organization’s results culture. Item 18 examines how evaluation data are used, and Item 19 looks
specifically at inclusion in project plans.
Item 21 looks to see if hard (performance) data are used concerning individuals’ jobs and tasks.
The same question is applied concerning hard (performance) data to units within the organization
in Item 22, and to the organization itself in Item 23. Item 24 lets you know about both the col-
lection and use of performance data for external clients, while Item 25 probes the Mega level of
society and community. All levels should be included in hard data collection and use.
Item 26 examines the collection and use of soft data related to individuals, Item 27 to organiza-
tional units, Item 28 to the organization itself, Item 29 to external clients, and Item 30 to society
and the community. All levels should be included in soft data collection and use.
Item 31 will tell you whether the integrity of evaluation (comparing results with intentions) is maintained by examining gaps in results. Item 32 examines whether internal staff are involved in setting objectives. Item 33 examines whether external partners are included in setting objectives. Item 34 examines whether evaluations are formal and focus on external clients.
Item 35 shifts to examine the extent to which your evaluations are informal, and then further
probes to see if informal evaluations are made for external clients. Item 36 seeks information
about informal evaluation regarding results accomplished for society or the community, while
Item 37 seeks information about informal evaluation of the organization itself, Item 38 of work
units, and Item 39 of individuals. Informal evaluations are best replaced by formal and rigorous
evaluations.
Item 40 is key. It probes to find out if evaluation provides measurable criteria and provides data
for what is to be accomplished and how to calibrate that accomplishment.
Item 42 evaluates whether plans are made on the basis of desired results. Item 43 examines if
external client desired consequences are used for planning, and Item 44 seeks the same question
for society and the community. Item 45 probes whether plans are based on desired individual
performance, Item 46 on desired resources, and Item 47 on desired programs, projects, and
activities. Item 48 examines if plans are made on the basis of desired section results, and Item 49
on desired organizational results. Item 50 examines if plans are linked to activities, programs,
and projects. Item 51 examines if plans are made on the basis of linking resources to results that
add value for clients and clients’ clients. Your evaluations will be more useful if all of these are
included.
Item 52 examines whether the organization takes action without identifying results to be
achieved. Item 53 probes to see if the organization uses evaluation data for external and societal
impact. Item 54 starts a shift to see if needs assessments (determining and prioritizing gaps in
results) data are used for impact on the organization itself, and Item 55 focuses on individual
operations and tasks.
Item 56 determines if needs are prioritized on the basis of the costs to meet the needs as com-
pared to the costs to ignore them.
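The comparison behind Item 56 can be illustrated with a small sketch: rank each need by the cost of ignoring it relative to the cost of meeting it, so the most expensive-to-ignore needs surface first. The field names and figures below are hypothetical.

```python
def prioritize(needs):
    """Rank needs: the larger the cost of ignoring a need relative to
    the cost of meeting it, the higher its priority."""
    return sorted(needs,
                  key=lambda n: n["cost_to_ignore"] / n["cost_to_meet"],
                  reverse=True)

# Hypothetical needs with estimated costs
needs = [
    {"name": "reporting gap", "cost_to_meet": 50_000, "cost_to_ignore": 40_000},
    {"name": "safety gap", "cost_to_meet": 20_000, "cost_to_ignore": 200_000},
]
print([n["name"] for n in prioritize(needs)])
# -> ['safety gap', 'reporting gap']
```

In this toy data, the safety gap costs ten times more to ignore than to meet, so it outranks the reporting gap even though the latter is costlier to fix in absolute terms.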
Items 57 through 61 start a series on criteria for evaluation. Item 57 examines if criteria for
evaluation of people and programs are only known to supervisors, which is not a good approach.
Item 58 probes if evaluation criteria for people and programs are not rigorous, and Item 59
examines the consistency of the application of evaluation criteria. Item 60 examines the fairness
of the criteria, and Item 61 is concerned with differential (and inappropriate) rewards.
Item 62 seeks to determine if evaluation data are used for continual improvement, and Item 63
for blaming and punishing.
Item 64 seeks to find if evaluation data are used to compare results with intentions. Item 65 looks
specifically at the organization itself, and Item 66 looks at individual projects or activities. Item
67 slips into the arena of company politics and whether rewards are made on the basis of politi-
cal considerations alone.
Item 68 speaks to continual improvement using evaluation data based on hard (performance)
data, and Item 69 asks the same for soft (perception) data. Of course, both hard and soft data
should be used.
The data are now available for objective assessment of your approach to evaluation, the major
variables in it, and what to change and what to continue.
Following are the cornerstone items for defining, doing, and benefiting from useful evaluations, realizing that the others are also important but are part of the fabric. These should
receive a rating of 4 or 5 on the “What Should Be” scale: Items 1, 2, 11 (particularly important if
evaluation is to look at the value added to our shared society and communities), 15, 18, 24, 25
(particularly important), 30, 34 (particularly important), 40, 44 (particularly important), 53, 56,
59 (particularly important), 60 (particularly important), 62, 64, 68, and 69.
Some items should serve as red flags if they receive high—4 or 5—“What Should Be” scores: Items 3, 12, 13, 14, 35, 36, 37, 38, 39, 41, 46, 47, 52, 57, 58, 61, 63, and 67.
From the responses to this survey, you can identify what is working and what should be changed.
Evaluation is critical to organizational success. Do it well.
Related References
Barker, J. A. (2001). The new business of paradigms (21st century ed.). St. Paul, MN: Star
Thrower Distribution. Videocassette.
Bernardez, M. (2005). Achieving business success by developing clients and community: lessons
from leading companies, emerging economies and a nine year case study. Performance
Improvement Quarterly, 18(3), 37–55.
Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right
performance solutions. Atlanta, GA: CEP Press.
Davis, I. (2005, May 26). The biggest contract. The Economist (London), 375(8428), 87.
Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.
Greenwald, H. (1973). Decision therapy. New York: Peter Wyden, Inc.
Guerra, I. (2003). Key competencies required of performance improvement professionals.
Performance Improvement Quarterly, 16(1).
Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value:
The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3),
76–99.
Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for
performance improvement practitioners. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those
who refuse to be mediocre. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning
primer. Performance Improvement Quarterly, 18(3), 8–16.
Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks,
CA: Sage Publications. Also Planificación Mega: Herramientas prácticas para el éxito
organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana,
España.
Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised).
Arlington, VA & Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement. Also,
published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los
Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.
Kaufman, R., Guerra, I., & Platt, W. A. (2006). Practical evaluation for educators: finding what
works and what doesn’t. Thousand Oaks, CA: Corwin Press/Sage.
Kaufman, R., & Lick, D. (Winter, 2000–2001). Change creation and change management: partners
in human performance improvement. Performance in Practice, 8–9.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic
planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/
Pfeiffer.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing,
and accomplishing. Lancaster, PA: Proactive Publishers.
Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs-consequences
analysis. Performance Improvement Quarterly, 10(3), 7–21.
Kirkpatrick, D. L. (1994). Evaluating training programs: the four levels. San Francisco, CA:
Berrett-Koehler.
Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of
effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.
Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring,
MD: International Society for Performance Improvement and the American Society for
Training and Development.
Sample, J. (1997, July 25). Training programs: how to avoid legal liability. Fair Employment
Practices Guidelines (Supplement #436), 1–21.
Scriven, M. (1973). Goal-free evaluation. In E. R. House (Ed.), School evaluation: The politics
and process. Berkeley, CA: McCutchan.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. M. Gagne, and M. Scriven,
Perspectives of Curriculum Evaluation (AERA Monograph Series on Curriculum
Evaluation). Chicago, IL: Rand McNally & Co.
Stufflebeam, D. L., Foley, W. J., Gephart, W. R., Hammond, R. L., Merriman, H. O., & Provus,
M. M. (1971). Educational evaluation and decision making. Itasca, IL: Peacock.
Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2000). Fundamentals of performance
technology: a guide to improving people, process, and performance. Silver Spring, MD:
International Society for Performance Improvement.
Chapter 7
Competencies for Performance
Improvement Professionals
Ingrid Guerra-López, PhD
Introduction
Today’s performance improvement practitioners represent the entire spectrum of business,
industry, and the public sector, with their functions being as diverse as the organizations they
come from (Dean, 1995). Along with eclecticism and expansion comes the threat of mediocrity
(Gayeski, 1995). Given the speed of change in today’s society and the variety of knowledge and
skill sets brought in by more specialized subsets (e.g., instructional designers, training specialists, human resource developers, organizational developers), there has been a growing disparity in practitioners’ behavior, even when they share the title of performance improvement professional (Hutchinson, 1990).
Some time ago, the International Board of Standards for Training, Performance and Instruction
(IBSTPI) released the third edition of Instructional Design Competencies: The Standards
(Richey, Fields, and Foxon, 2000), intended to provide instructional designers with a foundation
for the establishment of professional standards. Although Sanders and Ruggles (2000) conclude,
“By most accounts, HPI is an outgrowth of instructional systems design and programmed instruction,” these instructional design competencies do not cover the entire spectrum of competencies required of performance improvement professionals (Kaufman and Clark, 1999). Many of the leading figures in the performance improvement field agree that skills/knowledge is only one of several possible causes of performance problems (others include selection, motivation, environment, and ability) (Harless, 1970; Rummler and Brache, 1995; Rothwell and Kazanas, 1992).
According to some of the literature, the field of performance improvement must be analyzed to
determine what behaviors and performances are required in order for practitioners to add a
demonstrable value to the field and society as a whole (Dean, 1997; Kaufman, 2000; Kaufman,
2006; Westgaard, 1988). If they are to deliver what they promise—improved performance—a
logical place to start is to set performance standards for themselves. Stolovitch, Keeps, and
Rodrigue (1995) agree, “Performance standards can serve as a means to officially recognize the
achievements a professional has made in his or her area of practice” (683).
Thus, the Performance Improvement Competency Inventory, presented here, was rigorously
designed and validated with the purpose of informing and improving the practice of performance
improvement professionals.
Framework
Although there are many performance improvement models in practice today, most of them can
be traced back to the ADDIE model (Analysis, Design, Development, Implementation, and
Evaluation). Thus, studies have used the ADDIE model as the basis for the development of
The Assessment Book
proposed performance improvement competency models (see Harless, 1995; Stolovitch, Keeps,
and Rodrigue, 1995). For the development of this instrument, however, the ADDIE model was
modified to include assessment as a first and distinct phase in the performance improvement
process (Guerra, 2001).
Generally, some performance improvement authors have either used the terms analysis and
assessment interchangeably, or have affirmed that the term analysis is understood to include
assessment (Rossett, 1987, 1999; Rothwell, 1996; Stolovitch, Keeps, and Rodrigue, 1995). How-
ever, others have made a clear distinction between the two (Kaufman, 1992, 2000; Kaufman,
Rojas, and Mayer, 1993). Based on Kaufman’s definition (2000), needs assessment is the process
that identifies gaps in results. Needs analysis, on the other hand, breaks these identified needs or
problems into their components and seeks to identify root causes. Thus, needs analysis, when
applied to performance improvement as the first step, assumes that what is being analyzed are in
fact “needs.” Simply stated, while assessment identifies the “what,” analysis identifies the
“why.” Ignoring the distinction between these two separate, but related, steps introduces the pos-
sibility of mistaking a symptom for the problem itself. Consequently, another “A” (assessment)
was added to the conventional ADDIE model, resulting in the A²DDIE model (Guerra, 2003).
Finally, another element included in this model was that of social responsibility. Westgaard
(1988), Kaufman (1992, 2000, 2006), Dean (1993), Kaufman and Clark (1999), Farrington and
Clark (2000), and others have challenged professionals to rethink many traditional practices
within the performance improvement field and consider the ethical responsibilities associated
with its application. In the broader context of consulting, even mainstream consulting firms such
as McKinsey (Davis, 2005) caution against the perils of failing to align organizational
and societal goals. Professionals are increasingly expected to consider the environment
within which their clients exist and account for the impact organizational performance may have
on this environment. Thus, the A²DDIE model also includes societal impact of organizational
performance (Guerra, 2003).
Instrument Validation
Content validity was ensured by an expert panel consisting of four leading figures in the per-
formance improvement field. All had published extensively in the performance improvement
area, and three of the four experts were past presidents of the International Society for Perform-
ance Improvement (ISPI). Each expert panelist was sent a content validation packet, which
included a brief overview of the framework and operational definitions of the A²DDIE model
and each of its phases. They were asked to systematically review the list of competencies and
determine whether it was representative of those required of performance improvement
professionals. Using a matching task approach (Rovinelli and Hambleton, 1977, cited in Fitzpatrick,
1981), they were specifically asked to categorize each of the competencies into the correspond-
ing domain (assessment, analysis, design, development, implementation, and evaluation). Sec-
ondly, using a 5-point scale (1 = slightly relevant, 2 = somewhat relevant, 3 = relevant, 4 = very
relevant, and 5 = extremely relevant), they were asked to rate how relevant each competency was
to the corresponding domain they indicated. Lastly, they were asked general questions regarding
the adequacy of the list of competencies. Items classified into their intended category with a rat-
ing of 3 or higher by at least three of the four judges were included in the questionnaire.
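The inclusion rule just described can be expressed compactly. The sketch below is illustrative only (the function name, the tuple encoding of judgments, and the example data are assumptions, not the authors’ actual tooling): an item survives when at least three of the four judges both classify it into its intended domain and rate its relevance 3 or higher.

```python
# Illustrative sketch of the item-inclusion rule described above.
# An item is retained when at least `min_judges` of the panel both
# classified it into its intended domain and rated its relevance
# at `min_rating` or higher on the 5-point scale.

def retain_item(judgments, intended_domain, min_judges=3, min_rating=3):
    """judgments: list of (domain, rating) tuples, one per judge."""
    agreeing = sum(
        1 for domain, rating in judgments
        if domain == intended_domain and rating >= min_rating
    )
    return agreeing >= min_judges

# Example: three of four judges place the item in "analysis" with a
# relevance rating of 3 or higher, so the item is retained.
judgments = [("analysis", 4), ("analysis", 3), ("design", 2), ("analysis", 5)]
print(retain_item(judgments, "analysis"))  # True
```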
[Questionnaire items appear here in the print edition. Each of the 58 competencies is rated twice
on a five-point frequency scale (Never, Rarely, Sometimes, Usually, Always): on the left, the
frequency with which you are currently applying the competency; on the right, the frequency with
which you think you should be applying it.]
Based on Guerra, I. (2003). Key competencies required of performance improvement professionals.
Performance Improvement Quarterly, 16(1).
The gaps should be analyzed in light of the magnitude of their discrepancy, the relative position
on the Likert scale continuum, and the direction of the gap (i.e., whether it is negative or
positive). If you apply this questionnaire not just to yourself but to a group of performance
improvement professionals, an analysis of the demographic characteristics will also be very
meaningful.
For example, work responsibilities may be one of the demographic characteristics you collect
data for. Because work responsibilities of some professionals may be almost exclusively cen-
tered on a particular phase (e.g., a job specialty such as that of designer), the different
phases/dimensions of the instrument should reveal an accurate reflection of gaps in the relevant
task competencies. For instance, if you are a designer of interventions whose work in a project
begins after assessment and analysis have been conducted by another team, then you may want
to pay particular attention to the gaps in the design phase. Likewise, your practice may require
you to be a generalist and be involved throughout the entire process. In this case, you would
benefit from examining the full inventory of competencies.
• “What Should Be” responses indicate perceived importance of that competency to the
respondent. A high score indicates they think it’s very important, and conversely low
scores indicate they don’t think it’s very important.
• “What Is” responses indicate respondent perception about their current behavior. No
matter how honest respondents feel they are being in answering items, those interpreting
the results should keep in mind that this is reality according to them. Were a third
party to observe their actual behavior, the conclusions might be different.
• Large positive gaps indicate that they don’t feel they carry out these tasks as often as
they think they should.
• Large negative gaps indicate that they carry out these tasks more often than they think
they should.
Of course, the relative position of responses will also add to the complexity of the potential
interpretations.
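As a minimal sketch of the gap arithmetic above: the chapter reports responses on the Never-through-Always frequency scale, and the 1–5 numeric coding below is an assumption for illustration, as are the function and variable names.

```python
# Minimal sketch of the "What Should Be" minus "What Is" gap calculation.
# The 1-5 coding of the frequency anchors is assumed for illustration.

SCALE = {"Never": 1, "Rarely": 2, "Sometimes": 3, "Usually": 4, "Always": 5}

def competency_gap(what_is, what_should_be):
    """Positive gap: the competency is applied less often than the
    respondent thinks it should be; negative gap: more often."""
    return SCALE[what_should_be] - SCALE[what_is]

print(competency_gap("Rarely", "Always"))     # 3 (large positive gap)
print(competency_gap("Always", "Sometimes"))  # -2 (negative gap)
```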
Assessment 1–7 Large positive gaps in this phase represent a lack of focus on
identifying or verifying performance problems and opportunities.
Confirming performance problems and opportunities may be
important even for designers and developers who want to ensure
their solutions will meet the client’s requirements.
A particularly low score in Item 4 for both “What Should Be”
and “What Is” (and therefore a small or nonexistent gap) means
that the respondent sees no connection between the value of
organizational contributions and societal needs and requirements.
If the score is high for “What Should Be” and low for “What Is”
(i.e., a high positive gap), it could mean that though they think
it’s important, they do not feel empowered to address those needs
(whether because of lack of authority, resources, etc.).
Analysis 8–25 Large positive gaps here represent a lack of focus on identifying
factors causing the gaps. Because these factors are the basis for
selecting the right performance solutions, large gaps here are also
dangerous. Again, confirmation of performance problems and
their causes inform good design, development, implementation,
and evaluation.
Design 26–35 The items included in the design phase are meant to ensure a
sound design. Large gaps in any of these items can jeopardize the
utility and success of the design.
Development 36–42 These items are meant to guide the most efficient and effective
development process possible. Large gaps in any of these may
signal redundancies, unnecessary expenses, and needless steps in the
development of a solution.
Implementation 43–48 The implementation items provide a road map for how to ensure
that the developed solution will actually be adopted by the end
users and any other stakeholders. If assessment is about change
creation, implementation is about change management, in large
part. Large positive gaps in this section may signal a solution that
will fail to render desired results, no matter how appropriate and
well designed.
Evaluation 49–58 Finally, evaluation tasks are critical in ensuring that everything
done up to this point was worthwhile. If there are large gaps in
this phase, there may not be relevant, reliable, and valid data to
prove that the efforts and results (whether positive or negative)
were in fact worth the expense.
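Using the item ranges given in the table above, per-item gaps can be grouped by A²DDIE phase to support this kind of interpretation. The sketch below is illustrative only; the function and data names are assumptions.

```python
# Illustrative sketch: group per-item gap scores by A2DDIE phase,
# using the item ranges from the interpretation table above.

PHASES = {
    "Assessment":     range(1, 8),    # items 1-7
    "Analysis":       range(8, 26),   # items 8-25
    "Design":         range(26, 36),  # items 26-35
    "Development":    range(36, 43),  # items 36-42
    "Implementation": range(43, 49),  # items 43-48
    "Evaluation":     range(49, 59),  # items 49-58
}

def mean_gap_by_phase(gaps):
    """gaps: dict mapping item number -> (What Should Be - What Is)."""
    result = {}
    for phase, items in PHASES.items():
        scores = [gaps[i] for i in items if i in gaps]
        result[phase] = sum(scores) / len(scores) if scores else None
    return result

# Example: large positive gaps concentrated in the design items.
gaps = {i: 0 for i in range(1, 59)}
gaps.update({i: 3 for i in range(26, 36)})
print(mean_gap_by_phase(gaps)["Design"])  # 3.0
```

A phase-level average like this is only a starting point; as the table notes, the meaning of a gap also depends on where the individual responses sit on the scale.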
Related References
Davis, I. (2005). The McKinsey Quarterly, 3.
Dean, P. (1997). Social science and the practice of performance improvement. Performance
Improvement, 10(3), 3–6.
Dean, P. (1995). Examining the practice of human performance technology. Performance
Improvement Quarterly, 8(2), 17–39.
Dean, P. (1993). A selected review of the underpinnings of ethics for human performance
technology professionals—Part one: key ethical theories and research. Performance
Improvement Quarterly, 6(4), 6–32.
Farrington, J., & Clark, R. E. (2000). Snake oil, science, and performance products. Performance
Improvement, 39(10), 5–10.
Fitzpatrick, A. (1981). The validation of criterion-referenced tests. Amherst, MA: University of
Massachusetts.
Gayeski, D. (1995). Preface to the special issue. Performance Improvement Quarterly, 8(2),
6–16.
Guerra, I. (2003). Key competencies required of performance improvement professionals.
Performance Improvement Quarterly, 16(1).
Guerra, I. (2001). Performance improvement based on results: Is our field adding value?
Performance Improvement, 40(1), 6–10.
Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for
performance improvement practitioners. Amherst, MA: HRD Press, Inc.
Harless, J. (1995). Performance technology skills in business: implications for preparation.
Performance Improvement Quarterly, 8(4), 75–88.
Harless, J. (1970). An ounce of analysis is worth a pound of objectives. Newman, GA: Harless
Performance Guild.
Hutchinson, C. (1990). What’s a nice P.T. like you doing? Performance & Instruction, 29(9),
1–5.
Kaufman, R. (2006). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2000). Mega planning. Thousand Oaks, CA: Sage Publications.
Kaufman, R. (1992). Strategic planning plus: an organizational guide (revised). Newbury Park,
CA: Sage.
Kaufman, R., & Clark, R. (1999). Re-establishing performance improvement as a legitimate area
of inquiry, activity, and contribution: rules of the road. Performance Improvement, 38(9),
13–18.
Kaufman, R., Rojas, A., & Mayer, H. (1993). Needs assessment: a user’s guide. Englewood,
Cliffs, NJ: Educational Technology.
Richey, R., Fields, D., & Foxon, M. (2000). Instructional design competencies: the standards.
Iowa City, IA: International Board of Standards for Training, Performance, and Instruction.
Rossett, A. (1999). Analysis for human performance technology. In H. D. Stolovitch and E. J.
Keeps (Eds.) Handbook for human performance technology (2nd ed.). San Francisco, CA:
Jossey-Bass Pfeiffer.
Rossett, A. (1987). Training needs assessment. Englewood Cliffs, NJ: Educational Technology.
Rothwell, W. (1996). ASTD models for human performance improvement: roles, competencies,
and outputs. Alexandria, VA: The American Society for Training and Development.
Rothwell, W., & Kazanas, H. (1992). Mastering the instructional design process: a systematic
approach. San Francisco, CA: Jossey-Bass Publishers.
Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring,
MD: International Society for Performance Improvement and the American Society for
Training and Development.
Rummler, G. A., & Brache, A. P. (1995). Improving performance: How to manage the white
space on the organization chart (2nd ed.). San Francisco, CA: Jossey-Bass Publishers.
Sanders, E., & Ruggles, J. (2000). HPI Soup. Training & Development, 54(6).
Stolovitch, H., & Keeps, E. (1999). What is human performance technology? In H. D. Stolovitch
and E. J. Keeps (Eds.) Handbook for human performance technology (2nd ed.). San
Francisco, CA: Jossey-Bass Pfeiffer.
Stolovitch, H., Keeps, E., & Rodrigue, D. (1995). Skill sets for the human performance
technologists. Performance Improvement Quarterly, 8(2), 40–67.
Westgaard, O. (1988). A credo for performance technologists. Western Springs, IL: International
Board of Standards for Training, Performance and Instruction.
Chapter 8
Performance Motivation
Doug Leigh, PhD
Introduction
Goals are thought to influence behavior (Zaleski, 1987) and can be considered thoughts related
to results that are required in the future. Kaufman (1998, 2006) defines goal statements as
general aims, purposes, or intents in nominal or ordinal scales of measurement which, unlike
performance objectives, specify neither evaluation criteria nor the means by which the goal will
be achieved.
Also known as “subjective expected utility,” expectancy-value theory simply provides a means
for predicting behavior by considering the judgments individuals make regarding a goal’s value,
as well as their anticipated likelihood of successfully achieving that goal. Three precursors of
expectancy-value theory have been responsible for this explanation of motivation: Lewin’s
(Lewin, Dembo, Festinger, and Sears, 1944) “resultant valence theory,” Atkinson’s (1964)
“achievement motivation,” and Rotter’s (1954) “social learning theory.” All three theories share
the assumption that the actions individuals take regarding goals depend on the assumed likeli-
hood that their action will lead to the goal as well as the value individuals ascribe to the payoffs
of accomplishing that goal (Weiner, 1992).
Whereas individuals’ commitment to goals of their own creation is presumed to be at least
partially grounded in attaining desirable consequences or avoiding undesirable ones (Heider,
1958), externally imposed goals should first be evaluated in terms of both the likelihood of
successful accomplishment and their subjective value. Supporting this notion, Hollenbeck and Klein
(1987) point out that goal acceptance is a “necessary prerequisite to goal setting’s salutary effects
on performance” (2).
Performance Motivation
The decision to “accept” goals whose immediate payoff may manifest not just for the individual,
but instead benefit her or his team, organization, or community, is particularly critical in today’s
workplace. The importance of defining, linking, and taking steps to accomplish results at various
levels—individually and in teams, within the organization itself, and for external
clients and the community—is essential to the continued success of any organization (Nicholls,
1987; Senge, 1990).
Kaufman (1998, 2006) suggests that if organizations are not useful to society, they will invaria-
bly fail. Further, as House and Shamir (1993) point out, the “articulation of an ideological goal
as a vision for a better future, for which followers have a moral claim, is the sine qua non of all
charismatic, visionary theories [of leadership]” (97). Kaufman’s (1998, 2000, 2006) “Organiza-
tional Elements Model” (OEM) provides a useful framework for stratifying results according to
the differing clients and beneficiaries of organizational action. This model distinguishes what an
organization uses (Inputs) and does (Processes) from the results it yields to three distinct (but
often related) stakeholders—individual employees and the teams they work within (Micro level),
the organization as a whole (Macro level), and/or external clients and society (Mega level).
Results at the Micro level are called “Products,” at the Macro level “Outputs,” and at the Mega
level “Outcomes.”
In 1984 Naylor, Pritchard, and Ilgen coined the term performance motivation to refer to the
“multiple processes by which individuals allocate personal resources, in the form of time and
effort, in response to anticipated [results] or consequences” (159). In this vein, the instrument
presented in this chapter operationalizes performance motivation as the perceived likelihood and
utility of Input, Process, Micro (individual and team), Macro (organizational), and Mega (exter-
nal client and societal) goal statements. In keeping with this framework, the following goal-
directed commitments are measured by the instrument presented in this chapter.
[Definitions of the goal-directed commitments measured by the instrument appear here in the
print edition; only the end of one definition survives in this text: the subjective appraisals that
individuals make regarding goals whose accomplishment is intended to benefit primarily oneself.]
Expectancy—the anticipated likelihood that goal accomplishment will lead to valued conse-
quences—is measured on a 5-point response scale from 0 (“None at all”) to 4 (“A great deal”).
Valence—subjective appraisals of the value of consequences coming from goal accomplish-
ment—is measured on a 5-point response scale using the same anchors as for expectancy.
According to expectancy-value theory (Weiner, 1992), individuals are more likely to take steps
to accomplish goals with a positive multiplicative product of these two scores (a derived score of
up to 16) and take no action on goals that yield a product of zero.
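The multiplicative rule can be sketched as follows (the function name is illustrative; the 0–4 anchors are as given above):

```python
# Sketch of the expectancy-value scoring described above: expectancy
# and valence are each rated 0 ("None at all") to 4 ("A great deal"),
# so their product ranges from 0 to 16. A product of zero predicts
# that no action will be taken toward the goal.

def motivation_score(expectancy, valence):
    if not (0 <= expectancy <= 4 and 0 <= valence <= 4):
        raise ValueError("ratings must be on the 0-4 scale")
    return expectancy * valence

print(motivation_score(4, 4))  # 16: strongest predicted commitment
print(motivation_score(0, 4))  # 0: no action predicted, however valued
```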
For each of the 28 items in the questionnaire, please provide two responses for each of the goals
listed. To the left, rate how important you believe each of the goals to be. To the right, rate how
influential you believe your efforts are to the accomplishment of each of the goals listed.
[The 28 questionnaire items appear here in the print edition, each rated twice on the 0 (“None at
all”) to 4 (“A great deal”) scale: importance of the goal on the left, influence of your efforts on
the right.]
Related References
Atkinson, J. W. (1964). An introduction to motivation. Princeton, NJ: Van Nostrand.
Heider, F. (1958). The psychology of interpersonal relations. Hillsdale, NJ: Lawrence Erlbaum
Associates.
House, R. J., & Shamir, B. (1993). Toward the integration of transformational, charismatic, and
visionary theories. In Chemmers, M., & Ayman, R. (Eds.), Leadership theory and research:
perspectives and directions. San Diego, CA: Academic Press, pp. 81–108.
Hollenbeck, J. R., & Klein, H. J. (1987). Goal commitment and the goal-setting process:
problems, prospects, and proposals for future research. Journal of Applied Psychology, 72,
212–220.
Kaufman, R. (2006). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2000). Mega planning: defining and achieving success. Newbury Park, CA: Sage.
Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised).
Arlington, VA & Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement. Also,
published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los
Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: Defining,
prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.
Kuhl, J. (1982). The expectancy-value approach within the theory of social motivation:
Elaborations, extensions, critique. In N. T. Feather (Ed.), Expectations and actions:
expectancy-value models in psychology. Hillsdale, NJ: Erlbaum.
Lewin, K., Dembo, T., Festinger, L., & Sears, P. (1944). Level of aspiration. In Hunt, J. M.
(Ed.), Personality and the behavior disorders (333–338). New York: Ronald Press.
Leigh, D. (2000). Causal-utility decision analysis (CUDA): quantifying SWOTs. In Biech, E.
(Ed.), The 2000 annual, volume 2, consulting. San Francisco, CA: Jossey-Bass/Pfeiffer,
251–265.
Levenson, H. (1972). Distinctions within the concept of internal-external control: development
of a new scale. Proceedings, 80th Annual Convention, APA.
Naylor, J. C., Pritchard, R. D., & Ilgen, D. R. (1984). A theory of behavior in organizations. New
York: Academic Press.
Nicholls, J. (1987). Leadership in organisations: Meta, macro, and micro. European
Management Journal, 6(1), 16–25.
Rotter, J. B. (1954). Social learning and clinical psychology. New York: Prentice-Hall.
Senge, P. M. (1990). The fifth discipline: the art and practice of the learning organization. New
York: Doubleday-Currency.
Settoon, R. (1998). Management of organizations: management 351 (online). Available:
http://sluweb.selu.edu/Academics/Faculty/rsettoon [1999, April 5].
Weiner, B. (1992). Human motivation: metaphors, theories, and research. Newbury Park, CA:
Sage.
Zaleski, Z. (1987). Behavioral effects of self-set goals for different time ranges. International
Journal of Psychology, 22, 17–38.
Chapter 9
Organizational Readiness for
E-learning Success
Ryan Watkins, PhD
Introduction
In its many forms, e-learning has become an integral part of the business and public sector
models that shape the training and professional development services in many of today’s
organizations. As a result, there is a growing recognition that implementing an effective
e-learning program requires both a model for strategic alignment of e-learning initiatives and a
systemic framework for linking the various dimensions of e-learning together to ensure success. Whether
e-learning in your organization includes just a limited number of vendor-purchased online
courses or the year-round management of international professional development events delivered
via satellite, assessing your organization’s readiness for successful e-learning is an essential
step in shaping programs that lead to valuable accomplishments. In other words, determine that
readiness before rushing into implementation.
E-learning
In organizations around the globe, e-learning takes on many forms and functions. Although
e-learning is often closely associated with Internet-based training, the realm of learning
opportunities that are predominately mediated by electronic media (Internet, CD, DVD, satellite,
digital cable, etc.) is broad. Likewise, the learning experiences offered through e-learning
are as diverse as brown-bag lunches facilitated by desktop video and complete certification
programs delivered on DVD. For that reason, it is essential that any decisions made with regard
to the successful implementation or management of e-learning be based on systemic or holistic
business models.
Foundations
The Organizational Elements Model (Kaufman and English, 1979; Kaufman, 1998, 2000, 2006b;
Watkins, 2007; Kaufman, Oakley-Browne, Watkins, and Leigh, 2003) provides an ideal
foundational framework for assessing the readiness of an organization to begin or to later
evaluate their e-learning initiatives. The model provides five interlinked and essential elements
for which alignment is necessary if e-learning programs (and most any other programs) are going
to successfully accomplish useful results. The first three elements of the model are performance-
focused and ensure the alignment of results across the internal and external partners of the
organization, while the remaining two elements provide for the identification and linkage of
appropriate processes and resources for the accomplishment of useful results. In planning for the
successful implementation of e-learning initiatives, it is consequently critical that all five
elements be assessed and reviewed. The Organizational Elements Model (OEM) is found in
Chapter 1, Table 1.1 of this book.
The strategic objectives at the Mega, Macro, and Micro levels of planning provide the concrete
definitions of success that are necessary for any e-learning initiative. For example, a strategic
objective of an organization may be to reduce the number of consumers who permanently injure
themselves using the organization’s products to less than 1 in every 50,000 in the next 5 years
with the goal of reducing it to zero in the next 15 years. This Mega level objective can then be
used to guide both organizational decision making with regards to product design, as well as the
evaluation of successful design changes based on this variable. In addition, e-learning initiatives
within the organization may also contribute to the successful achievement of this objective by
including additional information on customer safety in online new employee orientations and/or
identifying e-learning opportunities in product engineering that focus on design safety measures.
The success of these programs can then in part be measured by the successful achievements of
the organization.
Using the model, decisions regarding e-learning can be aligned with both the long-term and
short-term objectives of the organization, thereby ensuring that all that is used, done, produced,
and delivered is adequately aligned with the successful accomplishments of clients, clients’
clients, and others in the community. The model thereby provides for a systemic or holistic per-
spective on how e-learning can be an integral part of any successful organization when aligned
with valuable accomplishments.
E-learning initiatives are complex systems with many variables that are critical to their success.
As a consequence, e-learning initiatives can be viewed from eight distinct, yet closely related,
dimensions that represent the following concise characteristics (based on Khan, 2005; Watkins,
2006):
• Interface design: focuses on all aspects of how the learner interacts with the learning
technology, instructor, and peers in the learning experience (for instance, incorporates
Web page and site design, videoconference format, content design, navigation, and
usability testing)
[The remaining six dimensions appear here in the print edition.]
• Resource support: examines issues related to online support and resources for learners,
instructors, developers, administrators, and others
These dimensions provide for a holistic view of e-learning within any organization. The alignment
of these eight dimensions is therefore critical to the success of e-learning when assisting the
organization in making valuable contributions to clients and others. Both the Organizational
Elements Model and the eight dimensional framework for e-learning are integrated into the
questions that compose the Organizational E-learning Readiness Self-Assessment (Watkins,
2006).
The self-assessment may also be completed by multiple organizational leaders and managers in
order to gain useful diverse perspectives on the related issues. From the training department to
the office for information technologies, perspectives on the many dimensions of successful
e-learning can vary greatly. Whether results of the self-assessment are later aggregated or
perspectives are analyzed for distinct implications, the self-assessment’s diverse dimensions can
be of value to most any organization.
First, you can identify Gaps or differences between “What Is” and “What Should Be” (Gap =
WSB – WI). Using the Likert-type scale, these Gaps can illustrate perceived differences between
current individual, organizational, and societal performance (WI) and the desired or required per-
formance that is necessary for individual, organizational, and societal success (WSB).
Second, Gaps can then be classified either as Needs when the WSB is determined to be greater
than the WI, or as Opportunities when the WI is greater than the WSB. Positive and negative
Gaps can then be used to inform organizational strategic direction setting as well as daily deci-
sion making.
Third, you can identify the relative and perceived prioritization of Gaps (i.e., Needs or Opportu-
nities) by examining the location of the Gaps along the Likert-type scale. For example, while the
Gaps for two distinct questions on the self-assessment may have similar values of 2 points along
the Likert-type scale, their perceived importance may vary greatly depending on whether the
Gaps are between values of WSB = 3 and WI = 1, or the values of WSB = 5 and WI = 3.
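The first two analyses, and the prioritization just described, can be sketched as follows. The 1–5 coding along the Likert-type scale and the function name are illustrative assumptions; the classification follows the chapter: a Need when WSB exceeds WI, an Opportunity when WI exceeds WSB, with the gap’s location on the scale preserved for prioritization.

```python
# Illustrative sketch of the gap analyses described above, assuming
# responses are coded 1-5 along the Likert-type scale.

def analyze(wsb, wi):
    """Classify a gap and keep its location on the scale."""
    gap = wsb - wi
    if gap > 0:
        kind = "Need"          # What Should Be exceeds What Is
    elif gap < 0:
        kind = "Opportunity"   # What Is exceeds What Should Be
    else:
        kind = "No gap"
    return {"gap": gap, "kind": kind, "location": (wi, wsb)}

# Two gaps of identical magnitude (2 points) but different locations
# on the scale, as in the example above: their priority may differ.
print(analyze(3, 1))  # {'gap': 2, 'kind': 'Need', 'location': (1, 3)}
print(analyze(5, 3))  # {'gap': 2, 'kind': 'Need', 'location': (3, 5)}
```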
Lastly, by collecting data for both dimensions of the dual-matrix self-assessment, you can assess
the distinct values of either “What Is” and/or “What Should Be.” To illustrate the value of these
analyses, it is important to consider the perspectives of those completing the self-assessment. If,
for example, it is determined that the information technology specialists who will be responsible
for maintaining the integrity and security of the e-learning platforms do not view sharing
real-time feedback with learners at the same level of importance (i.e., WSB) as instructors do,
then there is potential for miscommunication and differing program objectives when later
decisions regarding the use of peer-to-peer file sharing technologies are made.
The Organizational E-learning Readiness Self-Assessment doesn’t attempt to reduce all e-learning
decision making to a single variable, nor to a single variable within any dimension of the
foundational frameworks. As a system, e-learning in any organization is complex, evolving, and
multivariate. As a result, when completed, the self-assessment does not provide a single value
(e.g., average = 4) on which all organizational decisions regarding e-learning can or should be
based. Instead, for each group of questions within the dimensions of the self-assessment, leaders
are encouraged to review the responses of the self-assessment participants using the four analysis
strategies described above in order to determine how the data can best be used for result-focused
decision making that can lead to valuable accomplishments.
Organizational Readiness for E-learning Success Self-Assessment (sample items)

For each item, respondents use two five-point frequency scales (Never, Rarely, Sometimes, Usually, Always): on the left, indicate with what frequency you are currently applying this competency ("What Is"); on the right, indicate with what frequency you think you should be applying this competency ("What Should Be").

Organizational

The organization…
1. Is committed to the long-term successful accomplishment of its clients.

Pedagogical

Training content will be…
31. Based solely on subject-matter expert input.
32. Based on formal job/task analysis.
33. Based solely on previous materials used in similar courses.

E-learning courses…
37. Will be self-paced and without the active participation of an instructor.

Technological

The technology to be used in e-learning…
52. Is already implemented by the organization.
53. Will have to be purchased (or upgraded) in the next 12–18 months.

Interface Design

The e-learning interface will…
59. Require unique user logins and passwords.
60. Provide learners with visual information on their progress.

E-learning instructors…
70. Will have access to a variety of options for communicating with learners.

Resource Support

The associates who will be taking e-learning courses will…
87. Have access to specialized technology support personnel.

Ethical

The organization will…
95. Develop and communicate comprehensive plagiarism and/or code of conduct policies regarding e-learning.
The suggested analysis for this instrument follows that of the others previously described. Each of the four types of analyses—discrepancy, direction, position, and demographics—should be applied to gain a full understanding of each item and its associated dimension. Using the data from the four analyses of the Organizational Readiness for E-learning Success Self-Assessment, results for each of the eight e-learning dimensions should be interpreted in order to identify the strategic objectives of potential performance improvement activities. Within each section (and subsection) of the self-assessment, items should be reviewed for discrepancy, direction, position, and demographics.
Note: The following items from the Organizational Readiness for E-learning Success Self-
Assessment are recommended for reverse scoring (i.e., for these items it is often beneficial
to have a “What Should Be” score lower than a “What Is” score). You will want to review
each of these items separately in order to interpret the results of your assessment for your
organization. It should be noted, however, that for some organizations, reversed scoring of
the items may not be appropriate given their strategic objectives at the Mega, Macro, and
Micro levels.
Items: 11, 13, 21, 22, 23, 25, 27, 28, 29, 31, and 33
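As a sketch of how this transposition might be implemented (assuming the five response options are coded 1 through 5; the function name is illustrative, not from the book):

```python
# Items recommended for reverse scoring, per the note above.
REVERSE_SCORED = {11, 13, 21, 22, 23, 25, 27, 28, 29, 31, 33}
SCALE_MAX = 5  # assumed coding: 1 = Never ... 5 = Always

def adjust(item_number, value):
    """Transpose values for reverse-scored items (1<->5, 2<->4); leave others unchanged."""
    if item_number in REVERSE_SCORED:
        return SCALE_MAX + 1 - value
    return value

print(adjust(31, 5))  # reverse-scored item: 5 -> 1
print(adjust(32, 5))  # normally scored item: unchanged
```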
The Organizational Readiness for E-learning Success Self-Assessment includes a mix of items that will likely vary in direction when the analysis is complete. While many items will have an easily identified "preferred" or "better" direction that can be used in interpreting the results, the self-assessment also includes multiple items for which the positive or negative direction of the data will have to be interpreted within the context of the organization (for example, the use of internal or external technical support staff). With regard to these items, the purpose of the self-assessment is to ensure that adequate attention has been paid to these considerations and that decision makers realize the importance of the topic to the successful implementation of an e-learning initiative.
It is often helpful to aggregate scores on individual items to determine the average discrepancy,
direction, position, and demographics for sections and subsections of the self-assessment as well.
When aggregating scores, you will want to transpose the values for the reverse-scored items
before including those with the other items. While aggregated scores for sections and subsections
can be useful, they should not, however, be used as a substitute for reviewing the analysis of
each item in the self-assessment.
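A minimal sketch of such aggregation, using invented responses; the reverse-scored items are transposed before the section average is computed:

```python
SCALE_MAX = 5              # assumed coding: 1 = Never ... 5 = Always
REVERSE_SCORED = {31, 33}  # reverse-scored items in this example section

# Hypothetical responses for one section: item -> (What Is, What Should Be)
section = {31: (2, 4), 32: (3, 5), 33: (1, 3)}

def discrepancy(item, what_is, what_should_be):
    """Gap for one item, transposing both values first if reverse-scored."""
    if item in REVERSE_SCORED:
        what_is = SCALE_MAX + 1 - what_is
        what_should_be = SCALE_MAX + 1 - what_should_be
    return what_should_be - what_is

gaps = {item: discrepancy(item, wi, wsb) for item, (wi, wsb) in section.items()}
section_average = sum(gaps.values()) / len(gaps)

print(gaps)
print("section average gap:", round(section_average, 2))
```

The section average is a convenience only; as noted above, it should not substitute for reviewing each item's analysis.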
In addition to using the results of the self-assessment to guide decision making during the devel-
opment or reengineering of an e-learning program, the self-assessment can also be a valuable
tool for evaluating the results or improving the performance of any current e-learning programs.
When using the self-assessment as part of an evaluation as opposed to an assessment (see
Watkins and Kaufman, 2002), you will want to provide specific directions regarding which
e-learning programs should be the focus of the self-assessment. Accordingly, you will also want
to define how the interpretation of results from each section, subsection, and item will be utilized
to evaluate results and improve future performance. Similar analyses for discrepancy, direction,
position, and demographics will be of value when using the instrument as an evaluation tool.
The purpose of the self-assessment is to support useful decision making within organizations. As
a result, the use of the self-assessment, the analyses performed on the resulting data, and how
those results are interpreted within the organization are all processes that should be considered
within the context of the decisions being made. The extent to which additional analyses of the
results are performed and the time spent interpreting the results from individual items within the
instrument should be guided by how the findings can inform the decision-making process. For
some organizations, distinct sections or subsections of the instrument may be higher priorities
than others, and as a result, additional analyses based on demographic data may be useful in sup-
porting decision makers.
While all eight dimensions of the e-learning framework are important to the long-term success of the initiative, each organization should determine an appropriate balance among them to support its decision making. No section should go unused or unanalyzed, but the time and effort spent aggregating and interpreting results should be focused on the requirements of the organization and its decision makers.
Related References
Khan, B. (2005). Managing e-learning strategies: design, delivery, implementation and
evaluation. Hershey, PA: Information Science Publishing.
Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those
who refuse to be mediocre. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning
primer. Performance Improvement Quarterly, 18(3), 8–16.
Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications. Also published in Spanish as Planificación Mega: Herramientas prácticas para el éxito organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana, España.
Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised).
Arlington, VA & Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement. Also,
published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los
Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.
Kaufman, R., & English, F. W. (1979). Needs assessment: concept and application. Englewood
Cliffs, NJ: Educational Technology Publications.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Strategic planning for
success: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass.
Kaufman, R., Watkins, R., & Guerra, I. (2001). The future of distance education: defining and
sustaining useful results. Educational Technology, 41(3), 19–26.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing
and achieving. Lancaster, PA: Proactive Publishing.
Watkins, R. (2007). Performance by design: the systematic selection, design, and development
of performance technologies that produce results. Amherst, MA: HRD Press, Inc.
Watkins, R. (2006). Ends and means: is your organization ready for e-learning? Distance
Learning Magazine, 3(4).
Watkins, R. (2005). 75 e-learning activities: making online courses more interactive. San
Francisco, CA: Jossey-Bass/Pfeiffer.
Watkins, R. (2003). Determining if distance education is the right choice: applied strategic
thinking in education. Computers in the Schools, 20(2), 103–120. Also in Corry, M., &
Tu, C. (Eds.). Distance education: what works well. Binghamton, NY: Haworth Press.
Watkins, R. (2000). How distance education is changing workforce development. Quarterly
Review of Distance Education, 1(3), 241–246.
Watkins, R., & Corry, M. (2004). E-learning companion: a student's guide to online success.
New York: Houghton Mifflin.
Watkins, R., & Corry, M. (2002). Virtual universities: challenging the conventions of education.
In Haddad, W., & Draxler A. (Eds.). Technologies for education: potentials, parameters and
prospects. Paris: UNESCO.
Watkins, R., & Kaufman, R. (2002). Is your distance education going to accomplish useful
results? 2002 Training and Performance Sourcebook. New York: McGraw-Hill, 89–95.
Watkins, R., Leigh, D., & Triner, D. (2004). Assessing readiness for e-learning. Performance
Improvement Quarterly, 17(4), 66–79.
Watkins, R., & Schlosser, C. (2003). Conceptualizing educational research in distance education.
Quarterly Review of Distance Education, 4(3), 331–341.
Concluding Thoughts and Suggestions
The assessment instruments presented in this book may not all be appropriate for your organization. If you are going to use any of them, make sure that the instrument fits your purposes, and that part of your purpose is to use the data to create desirable change. For example, if you suspect that the strategic planning process is being significantly neglected in your organization, then the strategic planning instrument may be the best one to use. Once you have collected, analyzed, and interpreted the data, and these in fact confirm your initial suspicions (i.e., strategic planning is really not being done), be sure that your recommendations get implemented. Of course, clearly communicating the potential costs and consequences both of closing the identified gaps in results and of ignoring them plays a significant role in the decision-making process. Be sure that all involved understand what actions must be taken, why, and how.
Also worth noting is that each of the assessment instruments presented here is meant to serve as
merely one data collection tool among various options. If appropriate to your circumstances, you are encouraged to explore other data points (beyond the perceptions of those in your sample of respondents), other data sources, and other data collection methodologies. On their own, these tools tell you about people's perceptions of each of the items.
It is also important to consider that people's perceptions change over time. While your survey findings may hold today, you might find different responses using the same instrument at another point in time. In fact, that is one of the most productive ways of using these tools. Consider the first implementation of any given instrument as a baseline. If the results of the assessment instruments lead to specific actions for gap closure, be sure to allow an appropriate amount of time to transpire so that the impact of those actions can be seen. Then, apply the instrument again to track how perceptions have shifted. It is tempting to assume that any changes in perceptions you observe can automatically be attributed to your actions; however, it is always possible that other factors contributed to such a shift, either individually or collectively. Beware of jumping to conclusions without additional evidence to support your claims.
Glossary
The increasing responsibilities of professionals for the results, consequences, and payoffs of their activities have led us into a new era of professionalism. For the performance professional, this era requires a renewed focus on the scientific basis for decision making and the system approach to performance improvement and technology, as well as a consistency in language that leaves no confusion regarding the value added for individuals, organizations, and society. This glossary provides a model for defining and achieving success in the future through terms that focus on the results and payoffs for internal and external clients instead of the processes, activities, and interventions we commonly apply.
The new era we face is one of defining and achieving useful results for all stakeholders, including both internal and external partners. And we must prove the value we add in terms of empirical data about what we deliver, what it accomplishes, and what value it adds for all stakeholders (not just the value added to our team, our department, or our organization, but to the entire system of internal and external partners). We can no longer get away with "feel good" discussions of how we increased the efficiency or effectiveness of processes that may or may not add value to all of our clients, our clients' clients, and society.
The performance professional of the future has to know both how to improve performance and how to justify why an individual or organization should improve performance. For in addition to justifying what we use, do, accomplish, and deliver, the new reality is that we must all now prove that there are useful results for both the client and society. From a societal perspective, value added includes the survival, health, and well-being of all partners. Planning for and achieving results at the societal level—value added for tomorrow's child—is termed "Mega Planning" or "Strategic Thinking" (Kaufman, 1992, 1998, 2000). It is this system or super-system (society) that best begins our planning and serves as the basis for our evaluation
and continuous improvement. But to be successful in planning for and demonstrating value added, we must use words with rigor and precision.[27] Language that is crisp, to the point, and focused on results (including societal payoffs) is essential for professional success. And then we must match our promises with deeds and payoffs that measurably add value.

[27] Danny Langdon (1999) speaks to the language of work and the importance of the terms and concepts we use and understand.
System, systems, systematic, and systemic: related but not the same
To set the framework, let’s define these basic terms, relate them, and then use them to put other
vocabulary in context.
system approach: Begins with the sum total of parts working independently and together to achieve a useful set of results at the societal level, adding value for all internal and external partners. We best think of it as the large whole: picture a large oval (the system) containing many smaller bubbles (its parts). The "system" is made up of these smaller elements, or subsystems, embedded in the larger system. If we start at this smaller level, we will start with a part and not the whole. So, when someone says they are using a "systems approach," they are really focusing on one or more subsystems; they are unfortunately focusing on the parts and not the whole. When planning and doing at this level, they can only assume that the payoffs and consequences will add up to something useful to society and external clients, and this is usually a very big assumption.
systematic approach: An approach that does things in an orderly, predictable, and controlled
manner. It is a reproducible process. Doing things, however, in a systematic manner does
not ensure the achievement of useful results.
systemic approach: An approach that affects everything in the system. The definition of the
system is usually left up to the practitioner and may or may not include external clients and
society. It does not necessarily mean that when something is systemic it is also useful.
Interestingly, these terms are often used interchangeably. Yet they are not the same. Notice that when the words are used interchangeably, and/or when one starts at the systems level rather than the system level, we might not add value to external clients and society.
Semantic quibbling? We suggest just the opposite. If we talk about a “systems” approach and
don’t realize that we are focusing on splinters and not on the whole, we usually degrade what we
use, do, produce, and deliver in terms of adding value inside and outside of the organization.
When we take a “systems” approach, we risk losing a primary focus on societal survival, self-
sufficiency, and quality of life. We risk staying narrow.
What organizations that you personally do business with do you expect to really put client health, safety, and well-being at the top of the list of what they must deliver?
It is the rare individual who does not care whether or not the organizations that affect their lives
have a primary focus and accountability for survival, health, welfare, and societal payoffs. Most
people, regardless of culture, want safety, health, and well-being to be the top priority of
everyone they deal with.
What we do and deliver must be the same as what we demand of others. So, if we want Mega—
value added for society—to be at the top of the list for others (e.g., airlines, government,
software manufacturers), why don’t we do unto others as we would have them do unto us? At
best we give “lip service” to customer pleasure, profits, or satisfaction… and then go on to work
on splinters of the whole. We work on training courses for individual jobs and tasks, and then we
hope that the sum total of all of the training and trained people adds up to organizational success.
We too often don’t formally include external client survival and well-being in our performance
plans, programs, and delivery. We rarely start our plans or programs with an "outside-the-organization" Outcome[28] clearly and rigorously stated before selecting the organizational results and resources (Outputs, Products, Processes, and Inputs).
The words we use might get in the way of a societal value-added focus. To keep our performance and value-added focus, we should adjust our perspective when reviewing the literature and as we listen to speakers at meetings. Far too often we read and hear key terms used with varying (or case-specific) definitions. Many words sound familiar, and these words are often so comfortable, and so identify us as professionals, that we neglect to question the meaning or appropriateness of their use within the context. And when we apply the words and concepts inconsistently, we find that their varying definitions can abridge success.
What we communicate to ourselves and others—the words and phrases—are important since
they operationally define our profession and communicate our objectives and processes to others.
They are symbols and signs with meaning. When our words lead us away, by implication or
convention, from designing and delivering useful results for both internal and external clients,
then we must consider changing our perspectives and our definitions.
[28] As we will note later, this word is used in a fuzzy way by most people for any kind of result.
If we don’t agree on definitions and communicate with common and useful understandings, then
we will likely get a “leveling” of the concepts—and thus our resulting efforts and
contributions—to the lowest common denominator. Let’s look at some frequently used words,
define each, and see how a shift in focus to a more rigorous basis for our terms and definitions
will help us add value to internal and external clients.
The following definitions come from our review of the literature and other writings. Many of the references and related readings, from a wide variety of sources, are included at the end of the glossary. Italics provide some rationale for a possible perspective shift from the conventional and comfortable to societal value added. In addition, each definition identifies whether the word or phrase relates most to a system approach, systems approach, systematic approach, or systemic approach (or a combination). The level of approach (system, systems, etc.) provides the unit of analysis for the words and terms as they are defined in this glossary. Alternative definitions should also be analyzed based on their unit of analysis. If we are going to apply system thinking (decision making that focuses on value added at the individual, organizational, and societal levels), then definitions from that perspective should be applied in our literature, presentations, workshops, and products.
A2DDIE model: Model proposed by Ingrid Guerra-López (2007) that adds Assessment to the
ADDIE Model.
change creation: The definition and justification, proactively, of new and justified as well as
justifiable destinations. If this is done before change management, acceptance is more likely.
This is a proactive orientation for change and differs from the more usual change
management in that it identifies in advance where individuals and organizations are headed
rather than waiting for change to occur and be managed.
change management: Ensuring that whatever change is selected will be accepted and
implemented successfully by people in the organization. Change management is reactive in
that it waits until change requirements are either defined or imposed and then moves to have
the change accepted and used.
comfort zones: The psychological areas, in business or in life, where one feels secure and safe (regardless of the reality of that feeling). Change is usually painful for most people. When faced with change, many people will find reasons (usually not rational ones) for why not to make any modifications. This gives rise to Tom Peters's (1997) observation that "it is easier to kill an organization than it is to change it."
return on investment. Thus, even the calculations for standard approaches steer away from
the vital consideration of self-sufficiency, health, and well-being (Kaufman & Keller, 1994;
Kaufman, Keller, & Watkins, 1995; Kaufman, 1998, 2000).
criteria: Precise and rigorous specifications that allow one to prove what has been or has to
be accomplished. Many processes in place today do not use rigorous indicators for expected
performance. If criteria are “loose” or unclear, there is no realistic basis for evaluation and
continuous improvement. Loose criteria often meet the comfort test, but don’t allow for the
humanistic approach to care enough about others to define, with stakeholders, where you are
headed and how to tell when you have or have not arrived.
deep change: Change that extends from Mega—societal value added—downward into the
organization to define and shape Macro, Micro, Processes, and Inputs. It is termed deep
change to note that it is not superficial or just cosmetic, or even a splintered quick fix. Most
planning models do not include Mega results in the change process, and thus miss the
opportunity to find out what impact their contributions and results have on external clients
and society. The other approaches might be termed superficial change or limited change in
that they only focus on an organization or a small part of an organization.
desired results: Ends (or results) identified through needs assessments that are derived from
soft data relating to “perceived needs.” Desired indicates these are perceptual and personal
in nature.
ends: Results, achievements, consequences, payoffs, and/or impacts. The more precise the
results, the more likely that reasonable methods and means can be considered, implemented,
and evaluated. Without rigor for results statements, confusion can take the place of
successful performance.
evaluation: Compares current status (What Is) with intended status (What Was Intended) and is most commonly done only after an intervention is implemented. Unfortunately, evaluation is often used for blaming rather than fixing or improving. When blame follows evaluation, people tend to avoid the means and criteria for evaluation or leave them so loose that any result can be explained away.
external needs assessment: Determining and prioritizing gaps, then selecting problems to be
resolved at the Mega level. This level of needs assessment is most often missing from
conventional approaches. Without the data from it, one cannot be assured that there will be
strategic alignment from internal results to external value added.
hard data: Performance data that are objective and independently verifiable. This type of data is critical. It should be used along with "soft," or perception, data.
Ideal Vision: The measurable definition of the kind of world we, together with others,
commit to help deliver for tomorrow’s child. An Ideal Vision defines the Mega level of
planning. It allows an organization and all of its partners to define where they are headed
and how to tell when they are getting there or getting closer. It provides the rationality and
reasons for an organizational mission objective.
Inputs: The ingredients, raw materials, and physical and human resources that an
organization can use in its processes in order to deliver useful ends. These ingredients and
resources are often the only considerations made during planning without determining the
value they add internally and externally to the organization.
internal needs assessment: Determining and prioritizing gaps, then selecting problems to be
resolved at the Micro and Macro levels. Most needs assessment processes are of this variety
(Watkins, Leigh, Platt, & Kaufman, 1998).
Macro level of planning: Planning focused on the organization itself as the primary client
and beneficiary of what is planned and delivered. This is the conventional starting and
stopping place for existing planning approaches.
Mega thinking: Thinking about every situation, problem, or opportunity in terms of what
you use, do, produce, and deliver as having to add value to external clients and society.
Same as strategic thinking.
methods-means analysis: Identifies possible tactics and tools for meeting the needs
identified in a system analysis. The methods-means analysis identifies the possible ways and
means to meet the needs and achieve the detailed objectives that are identified in this Mega
plan, but does not select them. Interestingly, this is a comfortable place where some opera-
tional planning starts. Thus, it either assumes or ignores the requirement to measurably add
value within and outside the organization.
Micro level planning: Planning focused on individuals or small groups (such as desired and
required competencies of associates or supplier competencies). Planning for building-block
results. This also is a comfortable place where some operational planning starts. Starting
here usually assumes or ignores the requirement to measurably add value to the entire
organization as well as to outside the organization.
mission analysis: Analysis step that identifies: (1) what results and consequences are to be achieved; (2) what criteria (in interval and/or ratio scale terms) will be used to determine success; and (3) what are the building-block results, and the order of their completion (functions), required to move from current results to the desired state of affairs. Most mission objectives have not been formally linked to Mega results and consequences, and thus strategic alignment with "where the clients are" is usually missing (Kaufman, Stith, Triner, & Watkins, 1998).
need: The gap between current results and desired or required results. This is where a lot of
planning goes “off the rails.” By defining any gap as a need, one fails to distinguish between
means and ends and thus confuses what and how. If need is defined as a gap in results, then
there is a triple bonus: (1) it states the objectives (What Should Be), (2) it contains the
evaluation and continuous improvement criteria (What Should Be), and (3) it provides the
basis for justifying any proposal by using both ends of a need—What Is and What Should
Be in terms of results. Proof can be given for the costs to meet the need as well as the costs
to ignore the need.
needs analysis: Taking the determined gaps between adjacent organizational elements and finding the causes of the inability to deliver required results. A needs analysis also identifies possible ways and means to close the gaps in results—needs—but does not select them. Unfortunately, needs analysis is usually used interchangeably with needs assessment. They are not the same. How does one "analyze" something (such as a need) before knowing what should be analyzed? First assess the needs, then analyze them.
needs assessment: A formal process that identifies and documents gaps between current and desired and/or required results, arranges them in order of priority on the basis of the cost to meet each need as compared to the cost of ignoring it, and selects the problems to be resolved. By starting with a needs assessment, justifiable performance data and the gaps between What Is and What Should Be will provide the realistic and rational reasons both for what to change and for what to continue.
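The prioritization step—comparing the cost to meet a need against the cost of ignoring it—can be sketched as follows; the needs and dollar figures are invented for illustration:

```python
# Hypothetical needs (gaps in results) with estimated costs.
needs = [
    {"need": "close gap in course completion results",
     "cost_to_meet": 40_000, "cost_to_ignore": 120_000},
    {"need": "close gap in learner satisfaction results",
     "cost_to_meet": 25_000, "cost_to_ignore": 15_000},
]

# Prioritize: the larger the cost of ignoring a need relative to the
# cost of meeting it, the higher that need ranks.
ranked = sorted(needs,
                key=lambda n: n["cost_to_ignore"] - n["cost_to_meet"],
                reverse=True)

for n in ranked:
    print(n["need"], "- net cost of ignoring:",
          n["cost_to_ignore"] - n["cost_to_meet"])
```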
objectives: A precise statement of purpose, or destination: where we are headed and how we
will be able to tell when we have arrived. The four parts of an objective are (1) what result is
to be demonstrated, (2) who or what will demonstrate the result, (3) where the result will be
observed, and (4) what interval- or ratio-scale criteria will be used. Loose or process-oriented
objectives will confuse everyone (cf. Mager, 1997). A Mega level result is best stated as an
objective.
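As a rough illustration, the four parts of such an objective can be captured as a simple record. The field names and sample values here are invented for the sketch, not drawn from the text.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """The four parts of a measurable objective (field names are illustrative)."""
    result: str        # (1) what result is to be demonstrated
    demonstrator: str  # (2) who or what will demonstrate the result
    location: str      # (3) where the result will be observed
    criteria: str      # (4) the interval- or ratio-scale criteria to be used

obj = Objective(
    result="no customers lost due to dissatisfaction",
    demonstrator="the service organization",
    location="all regional offices",
    criteria="0 losses per quarter, measured from audited complaint records",
)
print(obj.criteria)
```

An objective missing any of the four fields would be the kind of loose, process-oriented statement the definition warns against.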
outcomes: Results and payoffs at the external client and societal level. Outcomes are results
that add value to society, community, and external clients of the organization. These are
results at the Mega level of planning.
outputs: The results and payoffs that an organization can or does deliver outside of itself to
external clients and society. These are results at the Macro level of planning where the
primary client and beneficiary is the organization itself. Outputs do not formally link to
outcomes and societal well-being unless they are derived from outcomes and the Ideal (Mega)
Vision.
paradigm: The framework and ground rules individuals use to filter reality and understand
the world around them (Barker, 1992). It is vital that people share common paradigms that
guide them. That is one of the functions of the Mega level of planning and outcomes:
everyone is headed to a common destination and may uniquely contribute to that journey.
products: The building-block results and payoffs of individuals and small groups that form
the basis of what an organization produces and delivers, inside as well as outside of itself,
and the payoffs for external clients and society. Products are results at the Micro level of
planning.
required results: Ends identified through needs assessment, which are derived from hard
data relating to objective performance measures.
Glossary
soft data: Personal perceptions of results. Soft data is not independently verifiable. While
people’s perceptions are reality for them, they should not be relied on without relating them
to “hard”—independently verifiable—data as well.
strategic alignment: The linking of Mega, Macro, and Micro level planning and results with
each other and with Processes and Inputs. Strategic alignment is complete when what the
organization uses, does, produces, and delivers is formally derived from Mega/external payoffs.
strategic thinking: Approaching any problem, program, project, activity, or effort by noting
that everything that is used, done, produced, and delivered must add value for external
clients and society. Strategic thinking starts with Mega.
systems analysis: Identifies the most effective and efficient ways and means to achieve
required results. It is focused on solutions and tactics, and it is an internal—inside the
organization—process.
tactical planning: Finding out what is available to get from What Is to What Should Be at
the organizational/Macro level. Tactics are best identified after the overall mission has been
selected based on its linkages and contributions to external client and societal (Ideal Vision)
results and consequences.
What Is: Current operational results and consequences. These could be for an individual, an
organization, and/or for society.
What Should Be: Desired or required operational results and consequences. These could be
for an individual, an organization, and/or society.
wishes: Desires concerning means and ends. It is important not to confuse wishes with
needs.
References
Barker, J. A. (1992). Future edge: discovering the new paradigms of success. New York:
William Morrow & Co., Inc.
Brethower, D. (2006). Performance analysis: knowing what to do and how. Amherst, MA: HRD
Press, Inc.
Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for
performance improvement practitioners. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006a). Change, choices, and consequences: a guide to Mega thinking and
planning. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2006b). 30 seconds that can change your life: a decision-making guide for those
who refuse to be mediocre. Amherst, MA: HRD Press, Inc.
Kaufman, R. (2000). Mega planning. Thousand Oaks, CA: Sage Publications.
Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised).
Arlington, VA and Washington, DC: Jointly published by the American Society for Training
and Development and the International Society for Performance Improvement.
Kaufman, R. (1992). Strategic planning plus: an organizational guide (revised). Newbury Park,
CA: Sage.
Kaufman, R., & Keller, J. (Winter, 1994). Levels of evaluation: beyond Kirkpatrick. Human
Resource Development Quarterly, 5(4), 371–380.
Kaufman, R., Keller, J., & Watkins, R. (1995). What works and what doesn’t: evaluation beyond
Kirkpatrick. Performance and Instruction, 35(2), 8–12.
Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic
planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/
Pfeiffer.
Kaufman, R., Stith, M., Triner, D., & Watkins, R. (1998). The changing corporate mind:
organizations, visions, mission, purposes, and indicators on the move toward societal
payoffs. Performance Improvement Quarterly, 11(3), 32–44.
Langdon, D., Whiteside, K., & McKenna, M. (1999). Intervention resource guide: 50
performance improvement tools. San Francisco, CA: Jossey-Bass.
Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of
effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.
Peters, T. (1997). The circle of innovation: you can’t shrink your way to greatness. New York:
Knopf.
Popcorn, F. (1991). The Popcorn report. New York: Doubleday.
Watkins, R., Leigh, D., Platt, W., & Kaufman, R. (1998). Needs assessment: A digest, review,
and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.
Watkins, R., Leigh, D., Foshay, R., & Kaufman, R. (1998). Kirkpatrick plus: evaluation and
continuous improvement with a community focus. Educational Technology Research and
Development Journal, 46(4).
Watkins, R. (2007). Performance by design: the systematic selection, design, and development
of performance technologies that produce useful results. Amherst, MA: HRD Press, Inc.
Related Readings
Banathy, B. H. (1992). A systems view of education: concepts and principles for effective
practice. Englewood Cliffs, NJ: Educational Technology Publications.
Beals, R. L. (December, 1968). Resistance and adaptation to technological change: Some
anthropological views. Human Factors.
Bertalanffy, L. Von (1968). General systems theory. New York: George Braziller.
Block, P. (1993). Stewardship. San Francisco, CA: Berrett-Koehler Publishers.
Branson, R. K. (August, 1998). Teaching-centered schooling has reached its upper limit: It
doesn’t get any better than this. Current Directions in Psychological Science, 7(4), 126–135.
Churchman, C. W. (1969, 1975). The systems approach (1st and 2nd eds.). New York: Dell
Publishing Company.
Clark, R. E., & Estes, F. (March–April, 1999). The development of authentic educational
technologies. Educational Technology, 38(5), 5–11.
Conner, D. R. (1998). Building nimble organizations. New York: John Wiley & Sons.
Deming, W. E. (1972). Code of professional conduct. International Statistical Review, 40(2), 215–219.
Deming, W. E. (1986). Out of the crisis. Cambridge, MA: MIT, Center for Advanced
Engineering Study.
Deming, W. E. (May 10, 1990). A system of profound knowledge. Washington, DC: Personal
memo.
Drucker, P. F. (1973). Management: tasks, responsibilities, practices. New York: Harper &
Row.
Kaufman, R., & Watkins, R. (1999). Using an ideal vision to guide Florida’s revision of the State
Comprehensive Plan: a sensible approach to add value for citizens. In DeHaven-Smith, L.
(ed.). Florida’s future: a guide to revising Florida’s State Comprehensive Plan. Tallahassee,
FL: Florida Institute of Government.
Kaufman, R., Watkins, R., & Guerra, I. (Winter, 2002). Getting valid and useful educational
results and payoffs: we are what we say, do, and deliver. International Journal of
Educational Reform, 11(1), 77–92.
Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing,
accomplishing. Lancaster, PA: Proactive Press.
Kaufman, R., Watkins, R., & Sims, L. (1997). Costs-consequences analysis: a case study.
Performance Improvement Quarterly, 10(3), 7–21.
Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of
Chicago Press.
LaFeur, D., & Brethower, D. (1998). The transformation: business strategies for the 21st century.
Grand Rapids, MI: IMPACTGROUPworks.
Langdon, D. (ed.). (1999). Intervention resource guide: 50 performance improvement tools. San
Francisco, CA: Jossey-Bass.
Lick, D., & Kaufman, R. (Winter, 2000–2001). Change creation: the rest of the planning story.
Planning for Higher Education, 29(2), 24–36.
Muir, M., Watkins, R., Kaufman, R., & Leigh, D. (April, 1998). Costs-consequences analysis: a
primer. Performance Improvement, 37(4), 8–17, 48.
Rummler, G. A., & Brache, A. P. (1990). Improving performance: how to manage the white
space on the organization chart. San Francisco, CA: Jossey-Bass Publishers.
Senge, P. M. (1990). The fifth discipline: the art and practice of the learning organization. New
York: Doubleday-Currency.
Triner, D., Greenberry, A., & Watkins, R. (November–December, 1996). Training needs
assessment: a contradiction in terms? Educational Technology, 36(6), 51–55.
Watkins, R., Triner, D., & Kaufman, R. (July, 1996). The death and resurrection of strategic
planning: a review of Mintzberg’s The Rise and Fall of Strategic Planning. International
Journal of Educational Reform.
Watkins, R., & Kaufman, R. (November, 1996). An update on relating needs assessment and
needs analysis. Performance Improvement, 35(10), 10–13.
About the Authors
Roger Kaufman, CPT, PhD, is professor emeritus of educational psychology and learning
systems at the Florida State University, where he served as director of the Office for Needs
Assessment and Planning and as associate director of the Learning Systems Institute, and
where he received the Professorial Excellence Award. He is also Distinguished Research
Professor at the Sonora Institute of Technology, Sonora, Mexico. In addition, Dr. Kaufman has
served as Research Professor of Engineering Management at Old Dominion University and at
the New Jersey Institute of Technology, and is associated with the faculty of industrial
engineering at the University of Central Florida. Previously he was a professor at several
universities, including Alliant International University (formerly the U.S. International
University) and Chapman University, and taught courses in strategic planning, needs
assessment, and evaluation at the University of Southern California and Pepperdine
University. He was the 1983 Haydn Williams Fellow at the Curtin University of Technology in
Perth, Australia. Dr. Kaufman also serves as vice chair of the Senior Research Advisory
Committee for Florida TaxWatch and is a member of the Business Advisory Council for
Excelsior College.
He is a Fellow of the American Psychological Association, a Fellow of the American
Academy of School Psychology, and a Diplomate of the American Board of Professional
Psychology. He has been awarded the highest honor of the International Society for Performance
Improvement (an organization for which he also served as president), being named “Member for
Life,” and has been awarded the Thomas F. Gilbert Professional Achievement Award by that
same organization. He has recently been awarded ASTD’s Distinguished Contribution to
Workplace Learning & Performance award—only the tenth individual to be so honored—and
received the U.S. Coast Guard/Department of Homeland Security medal for Meritorious Public
Service. These recognitions have come from his internationally recognized contributions in
strategic and tactical planning, needs assessment, and evaluation. Having worked for national
defense contracting companies (Boeing, Douglas, and Martin [now Lockheed Martin]) as a
scientist and manager, as well as in academia, he balances extensive scholarly expertise with a
practical understanding of requirements in both the public and private sectors.
A Certified Performance Technologist (CPT), Kaufman earned his Ph.D. in communications
from New York University, with additional graduate work in industrial engineering, psychology,
and education at the University of California at Berkeley and Johns Hopkins University (where
he earned his MA). His undergraduate work in psychology, statistics, sociology, and industrial
engineering was completed at Purdue University and George Washington University (where he
earned his B.A.).
Prior to entering higher education, he was Assistant to the Vice President for Engineering as
well as Assistant to the Vice President for Research at Douglas Aircraft Company. Before that,
he was director of training system analysis at US Industries, head of training systems for the
New York office of Bolt, Beranek & Newman, and head of human factors engineering at Martin
Baltimore; earlier, he was a human factors specialist at Boeing. He has served two terms on the
U.S. Secretary of the Navy's Advisory Board on Education and Training.
Dr. Kaufman has published 38 books, including Change, Choices, and Consequences; 30
Seconds That Can Change Your Life; Mega Planning; and Strategic Thinking—Revised, and co-
authored Useful Educational Results: Defining, Prioritizing, and Accomplishing as well as
Practical Strategic Planning: Aligning People, Performance, and Payoffs and Practical
Evaluation for Educators: Finding What Works and What Doesn’t, plus 246 articles on strategic
planning, performance improvement, distance learning, quality management and continuous
improvement, needs assessment, management, and evaluation.
Ingrid Guerra-López, PhD, is an Assistant Professor at Wayne State University, Associate
Research Professor at the Sonora Institute of Technology in Mexico, and a Senior Associate of
Roger Kaufman and Associates. Dr. Guerra-López publishes, teaches, consults (nationally and
internationally across all sectors) and conducts research in the areas of organizational
effectiveness, performance evaluation, needs assessment and analysis, and strategic alignment.
She is co-author of Practical Evaluation for Educators: Finding What Works and What Doesn’t
with Roger Kaufman and Bill Platt and has also published chapters in the 2006 Handbook for
Human Performance Technology, various editions of the Training and Performance
Sourcebooks, as well as the Organizational Development Sourcebooks. Additionally, she has
published articles in journals such as Performance Improvement, Performance Improvement
Quarterly, Human Resource Development Quarterly, Educational Technology, Quarterly Review
of Distance Education, International Public Management Review, and the International Journal
for Educational Reform, among others. She obtained her doctorate and master’s degrees from
Florida State University.
Doug Leigh, PhD, is an Associate Professor with Pepperdine University’s Graduate School of
Education and Psychology. He is coauthor of Strategic Planning for Success: Aligning People,
Performance and Payoffs and Useful Educational Results: Defining, Prioritizing, and
Accomplishing. Dr. Leigh is an associate director of Roger Kaufman & Associates, two-time
chair of the American Evaluation Association's Needs Assessment Topic Interest Group, and
past editor-in-chief of Performance Improvement journal. He currently serves as chair of the
International Society for Performance Improvement's Research Committee. Leigh's current
research, publication, and consulting interests concern cause analysis, organizational trust,
leadership visions, and dispute resolution.