
Learning & Development

Measurement Strategy

www.performitiv.com
Overview
This document provides guidance to assist organizations in designing and
implementing an effective learning and development (L&D) measurement
strategy.

Strategy Vision
Every good measurement strategy must start with a vision that is clearly articulated, yet concise. A solid vision for L&D measurement would be to “measure, communicate and improve learning programs and the people and business outcomes impacted by those programs.”

Strategic Approaches
There are two strategic approaches for L&D measurement. Your strategy should address both, as both are relevant.

Approach One: Everyday L&D programs. These include various types of programs, such as compliance, sales, leadership, onboarding and technical training. These also include various modalities, such as digital, classroom, coaching, conferences and e-learning. The goal is to ensure learning measurement is a practical, scalable and repeatable process that works efficiently and effectively within your organization’s money, time and personnel limitations. The process emphasizes roughly reasonable vs. perfect and precise information, is geared toward timely use of data, and fosters more constructive dialogue with business stakeholders to improve performance.

This approach should account for 90% of your operation and efforts.

Approach Two: Periodic events or programs that are very strategic, visible or
costly, where a special project is created to measure the impact of the program
in an in-depth manner. This may be the first run of a senior executive program, or a costly sales program with an uncertain future. The
measurement approach is a one-off, episodic effort to use more precise data of
a statistical, causal nature or of an impact, ROI nature to prove value (or lack of
value) from the program on intended results. This approach is not meant to be
scalable or repeatable and will need to have a special allocation of money, time
and personnel resources. As such, this must be done in very limited
circumstances where both the L&D team and business stakeholder mutually
agree that the cost is worth the benefit.

This approach should account for 10% or less of your operation and efforts.

Strategic Methodology
The measurement strategy should be based on credible, valid and reliable methodologies. For L&D, the most
common and recognized is Kirkpatrick Learning Levels. This model should not be used literally when doing everyday
measurement but should be adapted into your data collection and reporting process, such that the spirit of the
model is fully captured, where applicable, to glean the benefits of the methodology.

In addition, fusing Net Promoter Score (NPS) with the Kirkpatrick model is a complementary approach that modern-day L&D functions are deploying. It is a concise evaluation process focusing on use of the data to improve
performance. These are highly desirable elements of any process derived from sound strategy.

For the limited, episodic measurement studies your organization may do, we suggest a statistical causal model
approach: either The Phillips ROI Process or The Brinkerhoff Success Case Method. Each of these is highly focused and time-consuming but may yield the proof or validation desired when doing this type of exercise. We suggest
having a third-party consultant with expertise in these methodologies conduct the studies as it will not only ensure
they are done correctly but will remove any perceived or actual bias that could arise from attempting the exercise internally.

Strategic Process
The measurement strategy should outline the process that will be used for data collection, reporting and
improvement, with an emphasis on the process that is used 90% of the time for measurement. It is less important to
focus on the process you might use 10% of the time, because it is more of a project than a process and may
ultimately be done by a third-party and is much rarer in its use.

The strategic process should be broken down into three discrete sub-processes: data collection, data reporting, and
improvement actions. These sub-processes are distinct but complement each other to enable
the overall strategic measurement process.

Data Collection

Data collection should be a combination of evaluation, business data and learning utilization data. Together they
paint a complete picture in reporting and for performance improvement.

Evaluation data should be mentioned in your L&D strategy, as it is a core data collection instrument. The measurement strategy should focus on the following tenets of evaluation data collection:

1. Keep evaluations simple and concise. We suggest 10 or fewer questions.

2. Adapt the spirit of the Kirkpatrick Learning Levels methodology with the Net Promoter Score (NPS)
methodology within the instruments. We suggest the classic NPS question and then a single question in an
NPS format representing constructs such as Instructor, Environment, Content, Learning, Job Impact,
Business Results and Manager Support.

3. Create a consistent evaluation to use at the end of a learning event, experience or program milestone.

4. Create a consistent evaluation to use when participants are on-the-job.

5. Create a consistent evaluation to use for stakeholder observation of participants on-the-job (ex. manager).

6. Ensure the instruments are designed with the respondent experience in mind, so that response rates are optimized and you have active vs. passive respondents. This means focusing on User Experience (UX) and avoiding complex questions or cumbersome navigation.

7. Capture this data by tagging each data point to respondent demographics (ex. job function, years of
service, business unit).

8. Capture this data by tagging each data point to learning attributes (ex. modality, location, program,
curricula, course, instructor).

9. Preferably, automate the distribution of these instruments and integrate them with your learning
management system.
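As an illustration of how the NPS-format responses above roll up, here is a minimal Python sketch of the standard NPS calculation (respondents scoring 9-10 are promoters, 7-8 passives, 0-6 detractors); the ratings shown are hypothetical:

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings: % promoters minus % detractors.
    Standard NPS bands: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical ratings from one course's end-of-event evaluation
print(nps([10, 9, 9, 8, 7, 10, 6, 9, 5, 10]))  # -> 40
```

The same roll-up can be applied per construct (Instructor, Content, Job Impact, etc.) and sliced by the demographic and learning-attribute tags described above.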

Business data should also be a component of your strategic measurement process. Business data are key
performance metrics typically gathered on a monthly or quarterly basis and are meant to track the stakeholder’s
desired outcomes aligned to the learning program. Strategically, business metrics fall into categories, such as sales,
cost, cycle time, productivity, risk, safety, innovation, quality, customer satisfaction and employee engagement.
Gathering the metrics can be done through integration, batch upload or automated request to the operations
managers. The strategy should account for collecting this data, but it should focus less on collecting precise
measures that attempt to isolate learning impact (unless you are doing a deeper impact study) and focus more on
identifying roughly reasonable alignment of learning to the business metric.

Another piece of data to collect and note in your measurement strategy is the learning utilization data. The most
popular metric is completion rates. Typically, stakeholders do care about the coverage the program had on their
employee population, so a completion rate is important to collect. This data can come from a learning management
system or through training registration and completion records.

Data Reporting

The measurement strategy should have a section on data reporting. There should be governance over this process so that reporting is not unwieldy and does not result in poor use of the reports. Performitiv has identified three core audiences for learning measurement reports. The strategy should address how these audiences will be served by L&D measurement reporting. In addition, reporting should favor fewer, more impactful reports. This means fewer unique reports, dashboards and pivot tables, and a much smaller set of reports specific to each audience, so the reports are easier to consume for their specific needs.

The first audience is the tactical users. Examples include instructors and content designers. These important
professionals want a report that summarizes a specific learning event or group of learning experiences by the
questions on the evaluation. They want to quickly understand what elements of the learning worked and did not
work, as well as which learner demographics had positive or negative impact and experiences. Using basic
red/yellow/green highlights and logical drill downs from summary to detail are important here. The strategic intent
is to give them straightforward, timely insights to change what they do the next time the learning event is run.

The second audience to address is the operational managers. Examples include program managers, curricula
managers, course owners, location managers and instructor managers. These professionals have a critical
component of the learning operation to oversee and the best reporting for them begins with stack rankings of what
they manage, so that they can drill into the tactical reports. For example, an instructor manager might see a stack
ranking of all instructors broken out by the evaluation categories. They can drill into any specific instructor and see
the courses they taught, where they were taught, and the demographics of the participants they taught. The
strategic intent is to give them a broader view of what they manage with a clear path to pinpoint improvement
opportunities for short-term adjustments and long-term change.
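To make the stack-ranking idea concrete, here is a small sketch (the instructors and ratings are hypothetical) that ranks instructors by average evaluation score, the entry point from which a manager would drill into course- and demographic-level detail:

```python
from statistics import mean

# Hypothetical evaluation records: (instructor, course, 0-10 rating)
records = [
    ("Lee", "Safety 101", 9), ("Lee", "Safety 101", 7),
    ("Kim", "Sales Basics", 10), ("Kim", "Onboarding", 9),
    ("Ortiz", "Sales Basics", 6), ("Ortiz", "Safety 101", 8),
]

# Group ratings by instructor, then stack-rank best-first by average
by_instructor = {}
for name, course, rating in records:
    by_instructor.setdefault(name, []).append(rating)

ranking = sorted(by_instructor.items(), key=lambda kv: mean(kv[1]), reverse=True)
for name, ratings in ranking:
    print(f"{name}: avg {mean(ratings):.1f} over {len(ratings)} evaluations")
```

In a real report the drill-down would carry the learning-attribute tags (course, location) and respondent demographics captured at collection time.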

The third audience is the learning leaders and business stakeholders. Examples include the VP of Learning or the VP of Sales. Both are busy executives who don’t need to go into the details. In both cases, the measurement strategy should call
out a scorecard with a small, but balanced, set of key metrics that can easily be reviewed in a 30 to 60-minute
meeting. For example, if we created a week-long sales effectiveness program, we would want to showcase the
results to the Sales VP on a clear, concise scorecard. We suggest the scorecard have three elements: efficiency
metrics (ex. completion rates), effectiveness metrics (ex. instructor ratings, content ratings) and outcome metrics
(ex. job impact ratings, business results ratings, and business metrics such as sales growth, sales lead penetration
rates, etc.). The focus should be on the outcome section where the impact and results ratings convey alignment (or
lack thereof) to the business metrics. Each metric on the scorecard should show the actual result vs. a goal and use a
red/yellow/green color to make it simple to review.
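A sketch of the actual-vs-goal red/yellow/green logic such a scorecard might use (the 90%-of-goal yellow band and the sample metrics are assumptions for illustration):

```python
def rag_status(actual, goal, yellow_band=0.9):
    """Assumed thresholds: green at/above goal, yellow within 90% of goal, else red."""
    if actual >= goal:
        return "green"
    if actual >= goal * yellow_band:
        return "yellow"
    return "red"

# Hypothetical scorecard rows: (metric, actual, goal)
scorecard = [
    ("Completion rate %", 92, 90),    # efficiency
    ("Instructor rating", 8.1, 8.5),  # effectiveness
    ("Sales growth %", 4.2, 6.0),     # outcome
]
for metric, actual, goal in scorecard:
    print(f"{metric}: {actual} vs goal {goal} -> {rag_status(actual, goal)}")
```

Higher-is-better metrics are assumed here; a cost or cycle-time metric would invert the comparison.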

The outcome of using a scorecard like the above is not to attempt to prove the value of learning, nor to justify
learning made a precise or perfect impact, but to show where performance was positive and where performance
needs improvement. A conversation on performance is much more constructive than a conversation on validating
L&D existence or challenging the business on what value L&D added. We suggest avoiding those types of conversations and not designing a measurement strategy whose reporting provokes controversial or defensive conversations.

Improvement Actions

The most important element of a measurement strategy, yet the least accounted for, is improvement actions. The major
issue with outdated L&D measurement strategies and processes is that they intensely focus on collecting and
reporting data and stop short of using the data to improve the program and the people or results impacted by the
program. A modern-day measurement strategy must have a component around improvement actions. These are
the activities that will be done when an improvement is identified as a result of data that is below goal or negatively
trending.

There are three primary improvements resulting from L&D measures you will want to account for in your
measurement strategy. First is to improve the program. Your data collection and reporting should highlight if there
was an issue with the facilitation, content or learning environment. If so, an improvement action should be taken to
positively change this. Second is the learner. If the data reports a particular demographic (ex. job function, years of
service, etc.) had a more negative experience or impact, action should be taken to positively change this. Finally, if a
business result that should have aligned to the program is still below goal or negatively trending, an improvement
action should be taken to positively change this.

The measurement strategy should focus on how the data and reporting highlight these deficiencies that are then
prioritized for positive change through an action plan. The plan should be done regardless of whether or not
learning caused, controlled or contributed to the problem. The spirit of L&D measurement should not focus on
validating L&D existence, but on improving programs, people and results. This is especially important to emphasize in a strategy that takes a modern-day approach, prioritizing L&D integration into business performance vs. L&D isolation with a focus only on its own self-preservation.

The strategy should speak to action plans themselves. These are collaborations with people to do tasks in a visible
and accountable way, so the next time a measurement is done, the movement is positive. For example, suppose the supply chain operation had higher-than-average safety incidents, safety training was delivered and showed alignment to the metric, but incidents remain above acceptable goals. Rather than debate who caused, controlled or contributed to this, L&D and the supply chain operation should jointly create an action plan to improve it. For example, the
L&D group might create a 90-second video reinforcing major elements of safety. The supply chain operation might
do random safety audits. If both parties work together on a specific action linked to a specific under-performing
metric, it will lead to positive change.

Strategic Tools
The measurement strategy should also document the organization’s plan to support the strategic process. This
means identifying a person or small team that will provide governance over the measurement function, so
standards are properly set and followed. It also means identifying a formal budget to manage and maintain the
measurement process, because without budget, it will be a flavor of the month that is replaced by the next shiny
new thing. Finally, it should identify the tools that will support the strategic process. This includes technology to
automate data collection, reporting and improvement actions. The strategy should document all of the above in a
clear manner.

Conclusion
A solid measurement strategy focuses on what you will do 90% of the time, not 10%. It should be repeatable, practical and work within the limits of your constrained resources. The strategy should use a combination of data elements but produce only a few impactful reports. Most importantly, the strategy must emphasize using the data to drive positive performance change vs. using the data to justify, validate or defend L&D. Finally, the strategy should outline the supporting resources needed to maintain it over time, including people, budget and tools.

If the strategy has the above components, it will be a valuable addition to the overall L&D strategy and process.

About Performitiv
Performitiv is analytics software that optimizes learning impact by demonstrating value and identifying improvement
opportunities. This modern measurement system collects evidence of impact from methodology-sound surveys and
automated, secure operational data uploads. We go beyond traditional measurement tools to paint a complete picture of
impact by automating the aggregation, integration and analyses of critical predictive and prescriptive data. Performitiv has
transformed the practice of impact optimization from a tactical, reactive exercise, to a credible, cost-effective, repeatable
measurement process.

