

Is Calculating Learning ROI Worth the Effort?

Reflecting on learning evaluation and ROI, I was struck by how the models and processes to achieve it can, in fact, suck all enthusiasm for the subject out of you.

We know all too well that the reality gap between how something is written up versus the actual doing of it is large, if not yawning. The reality is that implementing a whole series of evaluation methodologies on top of the training plan is, well, tough going and potentially demoralising – no wonder few venture beyond the most rudimentary evaluation. Calculating ROI can be a test of endurance and you need commitment and resilience above all else to keep the process going.

Here are a couple of questions that can come up:

  • Does the outcome warrant all the effort?
  • Can we 'get away with' less depth in our learning evaluation efforts?

Keep those questions in mind while I summarise briefly where we are, where we want to be, and what we need to get there.

Where we are: current thinking

Even a cursory look over the current literature shows that the Kirkpatrick 4 Levels and the Phillips revision of Kirkpatrick dominate. The Kirkpatrick model has been around for over half a century, reviewed and renewed in 2016, while the Phillips iteration was first developed in 1975 and revised in 1993. Other ideas have come into focus, such as Brinkerhoff's Success Case Model, a qualitative analysis to identify the causes of success or failure, and Predictive Learning Analytics, algorithm-based feedback to improve learning rates. All these tools have their place, and all have value. But remember, tools don't rule. The meta-view has to prevail.

The Kirkpatrick and Phillips models focus on how information is acquired, retained and implemented, setting out the process in 4 (Kirkpatrick) or 5 (Phillips) levels:

  1. Reaction: how the learner responds to the training material and the trainer. Feedback is commonly gathered through smile sheets on the day, either during the training session or as it finishes.
  2. Learning: what the learner recalls. Evaluation can be by quiz in the weeks after the training.
  3. Behaviour: the learner changes their behaviour due to their retained knowledge or changed understanding. This can be assessed by peer or supervisor review, measured against agreed metrics.
  4. Results: an overall assessment of the material impacts experienced as a result of the training being done.
  5. ROI: an assessment of the financial difference brought about by the training.

The Brinkerhoff Success Case Model looks in detail at why an aspect of training either has or hasn't been implemented. This is more operational and problem-solving in focus, targeting and scrutinising identified issues and working to resolve them. As mentioned previously, this is a qualitative analysis and, as such, has a narrow remit, but it is very useful when used in conjunction with whichever evaluation schema is utilised.

While evaluation can be seen as separate from the training event, Predictive Learning Analytics (PLA) works to reduce the wastage ("scrap learning") of training initiatives by increasing uptake, retention and implementation. Estimates of training not applied on the job range from 45% to 85% (sources: Brinkerhoff, Gartner). These striking figures are central to the PLA pitch: with clever algorithms, predict which learners might need additional support, thereby improving overall performance and compressing the time between the training and any further interventions to improve individual outcomes. In this way, PLA heads off a common criticism of the ROI calculation: that it is historical, that is, by the time the impacts – positive and negative – are known, you are a year down the road and the opportunity to improve delivery has passed.
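As a very rough illustration of the kind of prediction PLA relies on (the model, features and threshold below are hypothetical assumptions, not a description of any particular PLA product), a simple classifier trained on records of past learners can flag those who may need extra support:

```python
# Hypothetical sketch of flagging learners who may need additional support.
# The features, training data and 0.5 threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past learners: [quiz_score, practice_attempts, days_since_training]
X_history = np.array([
    [0.9, 5, 10], [0.4, 1, 30], [0.7, 3, 14],
    [0.3, 0, 45], [0.8, 4, 7],  [0.5, 2, 21],
])
# 1 = went on to apply the training, 0 = "scrap learning"
y_history = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

# Score current learners; flag anyone unlikely to apply the learning unaided
X_current = np.array([[0.6, 1, 12], [0.85, 4, 9]])
p_apply = model.predict_proba(X_current)[:, 1]
for probability, flagged in zip(p_apply, p_apply < 0.5):
    print(f"p(apply) = {probability:.2f}, needs support: {flagged}")
```

The point is not the specific model but the timing: scoring learners soon after the event lets you intervene before the evaluation becomes purely historical.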

As we can see, there has been a lot of good work done, and there are many more methods each providing rigour and credibility to the process. The challenge, clearly, is in the practical application of them.

Where we want to be: ROI & effective training evaluation

If an ROI exercise is on the table, what is it for? If it is to add a useful and usable layer of analytics in support of continued business and personal growth, then it is worth investing the time and money. If it is anything other than this, I would be asking more questions to clarify the objectives and agenda.

It is also worth bearing in mind that, while an ROI number can be generated by a simple spreadsheet, the underlying processes that have got you to that point expose weaknesses as well as strengths, failures as well as successes, requiring courage and honesty from all involved in the face of intense scrutiny.

In addition, it is important to remember the caveats associated with calculating ROI. On the page, the calculation is simple: subtract costs from calculated benefits, divide the result by costs and multiply by 100 to get a nice neat percentage. There are two caveats in particular to consider. The first is in how you define, or the extent to which you calculate, both costs and benefits: how far-reaching are they? Do you take into account lost productivity during the training downtime, for instance? The second is whether you can be sure that the benefits you are attributing to the training are solely due to the training and not to other factors.

But while these caveats can weigh heavy on the reliability of the ROI calculation, they don't make them impossible, as we will see.
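On the arithmetic itself, a minimal sketch (the cost and benefit figures are invented for illustration):

```python
# Minimal ROI sketch; the figures below are invented for illustration only.
def roi_percent(benefits: float, costs: float) -> float:
    """ROI % = (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

costs = 40_000     # e.g. delivery, materials, lost productivity during downtime
benefits = 55_000  # only the benefits you can defensibly attribute to the training
print(f"ROI: {roi_percent(benefits, costs):.1f}%")  # ROI: 37.5%
```

The spreadsheet part really is that simple; the two caveats above are entirely about what goes into the costs and benefits figures.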

How we get there

So, despite the concerns, we want a learning ROI and we know the range of evaluation models available. Now, we tackle the process.

Before any training begins, the first question is: have I asked the right questions? Asking the right questions enables you to formulate and articulate:

  • The core strategic issues that need tackling
  • The KPIs that matter operationally
  • The right suite of evaluation methods
  • The map of the business, its systems, inputs and outputs

And it is in getting clarity on the answers to your questions that you are then able to act on participant feedback and to structure the post-training interventions.

Next Steps

The following graphic is the starting point for deciding what is to be achieved and the evaluation method to be utilised. The further up the pyramid you travel, two inversely related conditions apply: the strategic importance increases, while the number of events where these evaluations are useful decreases.

The five levels of the pyramid represent the levels of learning impact – to the individual, the team and the organisation – culminating in the impact on the bottom line. We will now explore each level, and the goals and limits of each.

Skills, Knowledge Acquisition & Recall

Starting at the base of the pyramid, the training event and immediate recall of content is evaluated. The methods used at this level are smile sheets and quizzes in the weeks following training.

Smile sheets are widely used either during or at the end of the training. They are limited in scope, focussing on training content, and are useful for tweaking the content for any subsequent training sessions. The feedback can be utilised quickly, but it must be remembered that testing skills acquisition and recall is only a short-term intervention.

Testing recall in the weeks following training gives some insights into the training's specific content. It can be used in conjunction with a narrowly focussed Brinkerhoff Success Case Method intervention to catch implementation issues early.

Knowledge Retention

Knowledge retention can only be gauged weeks or months after the training, measuring, not the ability to regurgitate learned facts, but the ability to understand and communicate them.

While the measurement at this level is more complex, as it is a measure of something less obvious, it is, at the same time, achievable. The primary method here would be a more in-depth test of knowledge, exploring comprehension of the subject matter. To minimise disruption to business flow, these kinds of tests can be administered through an app or bot, which increases both learner engagement and response rates.

The results of this evaluation are of more value if a benchmarking exercise is carried out prior to the training taking place. This gives you a clear measure of improvement on an individual level as well as overall.
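To make the benchmarking point concrete, here is a minimal sketch (the names, scores and 0-100 scale are invented assumptions):

```python
# Hypothetical benchmark vs post-training comprehension scores (0-100 scale);
# names and numbers are invented for illustration.
baseline = {"Asha": 55, "Ben": 62, "Carla": 48}
post_training = {"Asha": 72, "Ben": 70, "Carla": 66}

individual_gain = {name: post_training[name] - baseline[name] for name in baseline}
overall_gain = sum(individual_gain.values()) / len(individual_gain)

print(individual_gain)                             # {'Asha': 17, 'Ben': 8, 'Carla': 18}
print(f"Average gain: {overall_gain:.1f} points")  # Average gain: 14.3 points
```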

Learning Implementation

More than skills acquisition, more than knowledge retention, we want to know whether the learning is being applied accurately and in the right context. A benchmarking exercise is essential, and this would take the form of, say, a self-, peer or supervisor assessment. The supervisor has an opportunity here to prime, prepare and motivate the participant for learning.

As with measuring knowledge retention, learning implementation assessment tools, for example simulations to test the skill or behaviour, can be delivered using apps or bots, lessening interruptions to business and increasing engagement and response rates. Apps and bots are extremely useful both for gauging where the learner is and for feeding back to the global analytics, building the bigger organisational picture and measuring against agreed KPIs.

Brinkerhoff's Success Case Model can be usefully applied to understand the specifics of non-implementation, with the results being used to define further training opportunities or factors that inhibit the implementation of the skills learnt.

Interestingly, the tools used to evaluate learning implementation can become a way to enhance overall performance. Apps and bots, in testing an individual's ability to implement a skill, will also repeat and reinforce the learning if delivered constructively.

Organisational Impacts

Measuring at an organisational level can be problematic. It is possible to use a control group to directly compare the material benefits for the business gained by the group who participated in the training versus those that didn't. There are ethical considerations, however, as this approach demands that the control group is denied the personal and professional benefits of learning.

Instead, a 'deep' analysis comes to the fore here. From our initial assessment, we have a map of the business, its systems and interfaces, and an analytical schema already developed. During the process, data and insights related to the training programme, along with data and insights from across the business, have been gathered, ready for the measurement of organisational impacts using forecasting and trendline analysis.

This is the most demanding of all the evaluations, bringing together all historical analysis, all data from the training programme, data from the business to isolate training benefits and predictive analytics to build the complete picture. Whilst demanding, it is not impossible. With the right analytical engine, complexity and difficulties can be managed and meaningful results produced in a timely manner.
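As a rough illustration of the trendline element (the KPI figures and the assumption of a linear trend are invented for this example), you can extrapolate the pre-training trend of a business metric and treat the gap between the forecast and the post-training actuals as a first estimate of the training's contribution:

```python
# Hypothetical trendline sketch: fit the pre-training trend of a KPI,
# extrapolate it past the training date, and compare with the actuals.
# All figures are invented; a linear trend is an assumption.
import numpy as np

months = np.arange(1, 7)                        # six months before training
kpi_pre = np.array([100, 102, 101, 104, 105, 106])
slope, intercept = np.polyfit(months, kpi_pre, 1)

post_months = np.arange(7, 10)                  # three months after training
forecast = slope * post_months + intercept      # what the prior trend alone predicts
actual = np.array([112, 115, 118])              # observed post-training KPI

uplift = actual - forecast                      # candidate training contribution
print(uplift.round(1))                          # [4.8 6.6 8.4]
```

The gap is only a candidate figure; anything else that changed over the period sits inside it too, which is exactly the attribution caveat raised earlier.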

From this, you can see why this exercise is reserved for more strategically important initiatives. But remember that the discussions and reviews taking place at the outset will be setting clear objectives and will describe the limits of the learning evaluation. This initial exercise ensures that analytical rigour is maintained, and that the evaluation effort is proportionate to the outcomes sought.

ROI

If all the groundwork has been done to prepare for the ROI evaluation, it is a relatively straightforward calculation, since all the inputs into it are agreed and validated. The real challenge is in how to use the results and how to apply the insights to future training decisions, which is why the work done at the initial stages to explore the reasons for the exercise in the first place is so critical.

We would also say that there is a strong case for carrying out the ROI on an ongoing basis: to maintain accountability, to be able to account for and celebrate the learning successes and also to account for and remediate the failures, whether systemic or strategic. With so much of the preparatory work done, with the evaluative mechanisms in place and the analytics framework agreed, repeating it regularly is not nearly so onerous.

Final Thoughts

How far you travel up the evaluation pyramid correlates with strategic impact: the higher you go, the more connected the evaluation is, directly or indirectly, to KPIs or overall business objectives. And in deciding clear outcomes for both the training plan and the evaluation plan, you can assess the most appropriate methods, along with how and when to deploy them. Ensure, also, that you have a clear plan for the interrogation of client systems, data capture, analysis and reporting to maximise the effectiveness of the evaluations at every stage of the training delivery.

From our perspective, having the right analytics in place is a key success factor. We would also recommend a coaching programme (human or digital) running alongside the learning effectiveness initiative in order to embed, repeat and reinforce the learning, supporting people as they navigate the bumps in the road and building resilience to reach the end goal.

In summary, I would say, concentrate on what really matters, decide how far you want to go to measure learning effectiveness and put in place a system of evaluation that can either fit into the flow of daily business or at least can lessen the impact. Above all, ensure you have the right support and the right partners in place so you can deliver the best to your people.


02.04.2020
