Measurement

Impact and ROI of Learning: Worth Pursuing or Not?

Higher levels of learning evaluation are more difficult and subjective, but they are also the most important.

Several articles in the past month have suggested the profession should step back from trying to isolate the impact of learning and the resulting return on investment. The authors argue that it is difficult or impossible to isolate the impact of learning from other factors, and that no one will believe the results anyway, so why bother. Instead, they call for showing the alignment of learning to business goals, measuring the easier-to-capture participant reaction, amount learned and application (levels 1, 2 and 3, respectively, of the Kirkpatrick Model/Phillips ROI Methodology), and focusing on employee engagement with learning (consumption of learning and completion rates).

Aligning learning to business goals and measuring levels 1 through 3 are always good ideas, so no disagreement there. And depending on your goals for learning and the type of learning, measuring average consumption and completion rates may also make sense. However, for certain types of learning there is still a need to measure impact and ROI (levels 4 and 5 of the ROI Methodology).

The primary reason is that senior corporate leaders like the CEO and CFO want to see it, and so should the department head and program director. Research by Jack Phillips, chairman of ROI Institute, in 2010 showed that CEOs most want to see impact and ROI but instead are often provided with only participation data (number of learners, number of courses), participant reaction and cost. While these measures are helpful, they don’t answer the CEO’s question of what they are getting for their investment. CEOs and CFOs want to know what difference the training made and whether it was worth the time and effort. The program director and CLO should also be curious about this, not from a defensive point of view (like proving the value of training) but from a continuous improvement perspective, always asking what was learned from this project and how to improve next time.

It is true that level 4, the isolated impact of learning on a goal, is harder to determine than levels 1 through 3. Sometimes there will be a naturally occurring control group that did not receive the training. In this case, any difference in performance between the two groups can reasonably be attributed to the training. In other cases, statistical analysis like regression may be used to estimate the isolated impact of the training.

The most common approach, however, is participant and leader estimation, which is generally good enough to roughly determine the impact of learning and definitely good enough to learn from the project and to identify opportunities for improvement. In a nutshell, the methodology calls for asking participants to estimate the impact from just the training and to also share their confidence in that estimate. The two are multiplied to provide a confidence-adjusted estimate of the isolated impact of learning.

For example, one participant may say that the training led to a 40 percent increase in performance (like higher sales), but they may be only 50 percent confident in that estimate. The confidence-adjusted impact would be 40 percent × 50 percent = 20 percent increase in performance.

Repeat for the other participants and average the results. Best practice is to ask supervisors for their estimates as well. Then share the figures with the initiative’s sponsor and other stakeholders and modify as necessary. Once the level 4 isolated impact is determined, calculating ROI is very straightforward.
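The steps above reduce to simple arithmetic. The sketch below illustrates the calculation with hypothetical figures (the participant estimates, benefit and cost numbers are invented for illustration and are not from the article or the ROI Methodology itself):

```python
def confidence_adjusted_impact(estimates):
    """Average each participant's impact estimate, discounted by their
    stated confidence in that estimate (both expressed as fractions)."""
    adjusted = [impact * confidence for impact, confidence in estimates]
    return sum(adjusted) / len(adjusted)

def roi_percent(monetary_benefit, program_cost):
    """Level 5 ROI: net benefit as a percentage of program cost."""
    return (monetary_benefit - program_cost) / program_cost * 100

# Hypothetical survey responses: (estimated impact of training, confidence).
# The first tuple is the example from the text: 40% impact at 50% confidence.
participant_estimates = [(0.40, 0.50), (0.25, 0.80), (0.30, 0.60)]

impact = confidence_adjusted_impact(participant_estimates)
# Hypothetical: the isolated impact translates to $150,000 in added sales,
# against a $50,000 program cost.
roi = roi_percent(150_000, 50_000)  # (150000 - 50000) / 50000 * 100 = 200%
```

The confidence adjustment is what keeps the estimate conservative: an enthusiastic but uncertain participant contributes much less to the average than a confident one.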

This subjective estimate of impact should be no problem for senior leaders as long as the results are presented conservatively and with humility. Simply share that you used an industry standard methodology and that you and the sponsor and stakeholders all agree the estimate is conservative. Share the estimate as being close enough to make decisions (like whether to move from pilot to full rollout or whether to repeat for next year) and to identify opportunities for improvement. Share the lessons learned.

Do not present the data as being accurate to the decimal point. Senior leaders are used to this level of uncertainty and lack of absolute precision. In fact, most of the information they get is simply the best estimate put together by the right people in the best position to make the estimate. Think about the cost of a new product launch, the impact of a price increase, the impact of advertising, the impact of research and development or the impact of competitors reacting to your latest moves. These are all estimates, subjectively made and hopefully close enough to allow the CEO to make the right decision.

In conclusion, we must not shy away from higher levels of evaluation because they are harder or more subjective. They are also the most important, definitely to your CEO and hopefully to your CLO and program director, as well. And remember, the standard is not absolute precision, but simply close enough to make the right decisions and to learn the important lessons.

David Vance is the executive director for the Center for Talent Reporting, founding and former president of Caterpillar University and author of “The Business of Learning.” Email editor@CLOmedia.com.

