The framework for learning evaluation and measurement embraced by most in the industry starts with Donald Kirkpatrick. The longtime University of Wisconsin-Madison professor wrote a series of articles in 1959 for the American Society for Training and Development that outlined what became known as the four levels of evaluation — reaction, learning, behavior and results.
Kirkpatrick is now retired and the honorary chairman of Kirkpatrick Partners LLC, a learning and evaluation consultancy led by his son, James, and his daughter-in-law, Wendy. The firm’s advisory services help clients implement the four levels, which have grown from a raw set of ideas to a structured framework. They are:
Level 1 — Reaction: “To what degree participants react favorably to the training.”
Level 2 — Learning: “To what degree participants acquire the intended knowledge, skills, attitudes, confidence and commitment based on their participation in a training event.”
Level 3 — Behavior: “To what degree participants apply what they learned during training when they are back on the job.”
Level 4 — Results: “To what degree targeted outcomes occur as a result of the training event and subsequent reinforcement.”
Most organizations have mastered using levels 1 and 2 but struggle with levels 3 and 4. James Kirkpatrick, a senior consultant at Kirkpatrick Partners, attributed this to learning leaders’ comfort level. He said a lot of what constitutes levels 1 and 2 — smile sheets and pre- and post-testing — is easily tied to classroom training, which learning executives are comfortable with thanks to its history as a focal point for corporate training.
The problem is such rudimentary levels are no longer satisfactory to executives. Kirkpatrick said the issue has grown in significance since the financial crisis, as organizations examine all functional budgets more closely. “Now everybody is under the gun, and for the most part training budgets and training providers have failed to demonstrate business value.”
Under the Kirkpatrick framework, learning leaders can attain levels 3 and 4 by designing programs with the end in mind — what are the desired results for a learning initiative, and what behaviors need to happen to satisfy the outcome? Trying to apply all four levels retroactively, which many tend to do, won’t produce fruitful results, Kirkpatrick said.
The most important indicator of value, Kirkpatrick said, is return on expectations, or ROE. This is shown, he said, by tracking improvement in the desired outcome — say, an expectation to reduce a call center worker’s cycle time — and tracing it back to behaviors learned during training. In other words, the desired outcomes defined ahead of time represent level 4, and those results are presentable for senior leaders.
Others find this insufficient, arguing learning should have a dollar value. Jack Phillips, chairman of ROI Institute Inc., a learning metrics and measurement consultancy, is a proponent of this approach. Phillips, who started the ROI Institute in 1993 and has written a number of books on the subject, adds a fifth level, return on investment, to the taxonomy.
He said because senior leaders view strategy through a financial lens, putting an isolated dollar value on training is a necessity. But “it’s possible to have the impact be positive but the project be negative, because it costs too much,” he said.
Consider this sales training example. One way to determine the financial impact of a sales training program is to compare a group of sales workers who participated in training with one that did not. After the training is completed, measure levels 1 and 2: did participants report satisfaction and acquire new skills? Then measure level 3, behavior, likely through participants’ self-assessments. To measure level 4, or impact, Phillips said learning leaders take the predetermined outcome measure. In the following example, Phillips uses new accounts, with the aim of isolating those connected to training.
Say a company’s sales staff picks up on average three new accounts a month, or 36 a year. Phillips said learning leaders should estimate what percentage of those new accounts can be attributed to staff who participated in training, along with a confidence level in that estimate on a scale from 1 to 10. “We’ve got to separate the effect of the sales training [from] other influences,” he said.
Say the company identifies that 10 percent of those 36 new accounts came from the group who participated in training — about four new accounts.
Next, Phillips said the company would determine the lifetime value of a new account. Suppose that lifetime value is $10,000.
Under this example, four new accounts translate to $40,000 in lifetime value. Then compare that figure to the cost of training using a benefit-cost ratio. If the $40,000 came from a group of 10 salespeople, and the cost to train them was $20,000, the benefit-cost ratio is 2-to-1.
Under this thinking, “for every $1 we invest, we have $2 in benefits,” Phillips said.
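The arithmetic in Phillips’ example can be sketched in a few lines of code. This is a hypothetical helper, not anything from ROI Institute; the figures are the article’s illustrative numbers, and the rounding of 3.6 accounts up to “about four” follows the article’s own rounding.

```python
def benefit_cost_ratio(annual_new_accounts: int,
                       share_from_training: float,
                       account_lifetime_value: float,
                       training_cost: float) -> tuple[float, float]:
    """Isolate the accounts attributed to training, value them,
    and compare that benefit to the cost of the training."""
    # 10 percent of 36 accounts is 3.6; the article rounds to "about four."
    accounts_from_training = round(annual_new_accounts * share_from_training)
    benefit = accounts_from_training * account_lifetime_value
    return benefit, benefit / training_cost

benefit, bcr = benefit_cost_ratio(
    annual_new_accounts=36,         # three new accounts a month
    share_from_training=0.10,       # share attributed to the trained group
    account_lifetime_value=10_000,  # assumed lifetime value per account
    training_cost=20_000,           # cost to train the 10 salespeople
)
print(benefit, bcr)  # 40000 2.0
```

In Phillips’ fifth-level terms, the corresponding ROI would be (benefit − cost) / cost, or 100 percent here, which is the same result the 2-to-1 benefit-cost ratio expresses.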
Critics of the ROI approach say that isolating training events ignores other intangible factors from a learning experience. “You might find an obvious improvement or change in the performance of people who went to training, and therefore say we should do more training. But you really don’t know if it was the training that caused it,” said Josh Bersin, principal and founder of Bersin by Deloitte, a learning and development consultancy. “Maybe the people who went to the training got some personal attention from their managers or they were higher performers in the first place.”
Robert Brinkerhoff, a professor at Western Michigan University, is also skeptical. “It’s right-spirited and it’s right-minded … but in application and execution it gets misused,” he said. “… I would be very nervous to report to a board of directors and a CEO and CFO and be taking that data forward because so much of it is based on some pretty shaky assumptions.”
Others find both ROE and ROI to be inappropriate measures. Dan Pontefract, senior director of learning and collaboration at Canadian telecommunications firm Telus Inc., said learning leaders should focus on return on employee performance, along with measures such as engagement and aggregate measures of informal interaction. Closely aligning learning outcomes with engagement, he argued, is more likely to generate higher productivity and business results.
“I don’t understand why we’re so fixated on the return on investment of our learning programs,” Pontefract said. “There should always be an investment in our people, and that comes in the form of learning — but it also comes in the form of other opportunities.”