Where we are: current thinking
Even a cursory look at the current literature shows that Kirkpatrick's Four Levels and the Phillips revision of Kirkpatrick dominate. The Kirkpatrick model has been around for over half a century, reviewed and renewed in 2016, with the Phillips iteration first developed in 1975 and revised in 1993. Other ideas have come into focus, such as Brinkerhoff's Success Case Method, a qualitative analysis to identify the causes of success or failure, and Predictive Learning Analytics, algorithm-based feedback to improve learning rates. All these tools have their place, and all have value. But remember, tools don't rule. The meta-view has to prevail.
The Kirkpatrick and Phillips models focus on how information is acquired, retained and implemented, setting out the process in four (Kirkpatrick) or five (Phillips) levels:
- Reaction: how the learner responds to the training material and the trainer. Feedback is commonly gathered through "smile sheets" on the day, either during the training session or as it finishes.
- Learning: what the learner recalls. Evaluation can be by quiz in the weeks after the training.
- Behaviour: the learner changes their behaviour due to their retained knowledge or changed understanding. This can be assessed by peer or supervisor review, measured against agreed metrics.
- Results: an overall assessment of the material impacts experienced as a result of the training.
- ROI: an assessment of the financial difference brought about by the training.
Brinkerhoff's Success Case Method looks in detail at why an aspect of training has or hasn't been implemented. This is more operational and problem-solving in focus, targeting and scrutinising identified issues and working to resolve them. As mentioned previously, this is a qualitative analysis and, as such, has a narrow remit, but it is very useful when used in conjunction with whichever evaluation schema is chosen.
While evaluation can be seen as separate from the training event, Predictive Learning Analytics (PLA) works to reduce the wastage ("scrap learning") of training initiatives by increasing uptake, retention and implementation. Estimates of training not applied on the job range from 45% to 85% (sources: Brinkerhoff, Gartner). These striking figures form part of the pitch made by the team behind PLA: use algorithms to predict which learners might need additional support, thereby improving overall performance and compressing the time between the training and any further interventions to improve individual outcomes. Through this, PLA heads off a common criticism of the ROI calculation: that it is historical. By the time the impacts, positive and negative, are known, it is a year down the road and the opportunity to improve delivery has passed.
As we can see, there has been a lot of good work done, and there are many more methods, each providing rigour and credibility to the process. The challenge, clearly, is in the practical application of them.
Where we want to be: ROI & effective training evaluation
If an ROI exercise is on the table, what is it for? If it is to add a useful and usable layer of analytics in support of continued business and personal growth, then that is worth investing the time and money. If it is anything other than this, I would be asking more questions to clarify the objectives and agenda.
It is also worth bearing in mind that, while an ROI number can be generated by a simple spreadsheet, the underlying processes that got you to that point will expose weaknesses as well as strengths, failures as well as successes, requiring courage and honesty from all involved in the face of intense scrutiny.
In addition, it is important to remember the caveats associated with calculating ROI. On the page, the arithmetic is simple: subtract costs from calculated benefits, divide the result by costs, and multiply by 100 to get a nice neat percentage. Two caveats in particular deserve consideration. The first is in how you define, or the extent to which you calculate, both costs and benefits: how far-reaching are they? Do you take into account lost productivity during training downtime, for instance? The second is whether you can be sure that the benefits you are attributing to the training are due solely to the training and not to other factors.
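The arithmetic above can be put into a minimal sketch. The function and the figures below are purely illustrative, and, as the caveats note, what you count as a "cost" or a "benefit" is a judgement call:

```python
def training_roi(benefits: float, costs: float) -> float:
    """ROI as a percentage: (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

# Hypothetical example: a programme costing 40,000 that is
# credited with 50,000 of benefit.
print(training_roi(50_000, 40_000))  # 25.0
```

The percentage is only as reliable as the two inputs; widening the definition of costs (say, adding lost productivity during downtime) or narrowing attributable benefits can swing the result substantially.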
But while these caveats can weigh heavily on the reliability of the ROI calculation, they don't make it impossible, as we will see.
How we get there
So, despite the concerns, we want a learning ROI and we know the range of evaluation models available. Now, we tackle the process.
Before any training begins, the first question is: have I asked the right questions? Asking the right questions enables you to formulate and articulate:
- The core strategic issues that need tackling
- The KPIs that matter operationally
- The right suite of evaluation methods
- The map of the business, its systems, inputs and output
And it is in getting clarity on the answers to your questions that you are then able to act on participant feedback and to structure the post-training interventions.
Next Steps
The following graphic is the starting point for deciding what is to be achieved and which evaluation method to use. As you travel further up the pyramid, two inversely related conditions apply: strategic importance increases, while the number of events for which these evaluations are useful decreases.