Today’s CLOs find themselves in a measurement Catch-22: Demonstrating learning’s impact on business results requires management’s willingness to provide access to key performance and results metrics, but management is often reluctant to grant that access when it doesn’t recognize training’s strategic value in the first place. Given this stalemate, then, it is no surprise that IDC’s survey results show very little has changed on the measurement front in the past three years.
Every other month, IDC surveys Chief Learning Officer magazine’s Business Intelligence Board (BIB) on an array of topics to gauge the issues, opportunities and attitudes that make up the role of a senior learning executive. In March, this column examined how CLOs were endeavoring to staff their learning departments. This month, 239 training professionals shared their thoughts on learning measurement.
Ask a room of learning professionals to talk to you about the importance of measurement, and soon you won’t be able to hear yourself over the din. If you ask, “How many of you have successfully executed measurement programs within your own organizations?” though, the room suddenly gets a lot quieter.
There is little disagreement among those in training about the value of measurement — when done properly, measurement can empirically demonstrate training’s impact on the company’s top and bottom lines. Key metrics might include employee performance, speed to proficiency, customer satisfaction and improved sales numbers. As IDC’s survey shows, the challenge lies in gaining access to these key metrics, as well as finding the time and resources to conduct measurement consistently enough for the information to be useful.
Given these challenges, the majority of the CLO BIB members indicated a high level of dissatisfaction when asked how they feel about the extent of training measurement that occurs within their organizations (see Figure 1).
Survey respondents attributed their disappointment to a wide range of factors, including inadequate leadership support, lack of funding, a culture of indifference, lack of awareness, no means to automate the process, no dialogue with executives about which metrics they want to see and poor use of existing resources.
The general consensus indicates that within most companies, measurement efforts are inconsistent. When measurement occurs across the enterprise, it typically reaches only Kirkpatrick Levels 1 and 2. Measurement that captures impact on performance and business results (Kirkpatrick Levels 3 and 4) happens rarely and usually only at the individual level. In both cases, the difficulty is in ensuring enough consistency across the enterprise to provide an accurate assessment or build critical mass for support.
One respondent summed up the challenge very well: “Our organization’s training is decentralized, and it is difficult to get everyone on the same page when it comes to measurement. There are some areas that really don’t place any importance on measurement, while others place a lot. Right now, we are all over the board, making it extremely difficult to make an overarching case for training and education.”
Other significant factors include the lack of time and resources. Business moves so quickly that once a training initiative is completed, the stakeholders involved often have already moved on to their next project and aren’t interested in assessing the impact training had on an “old” project. Similarly, the act of measurement is usually seen as an add-on to training, when it really ought to be seen as a function unto itself, with its own resources, timeline and budget to be done properly.
But as the March 2007 Business Intelligence article on staffing and development showed, 60 percent of CLOs think they don’t have enough staff as it is to support their companies’ learning initiatives, so the likelihood that there are people available to do meaningful ROI studies is doubtful. (For more on this, see the March 2007 issue of Chief Learning Officer magazine, “Staffing Learning & Development: Doing More with Less.”)
A Stagnant State of Affairs
What all this has led to is a general state of stagnancy within the learning industry regarding measurement. IDC’s survey shows very little change over the past three years across a number of measurement-related variables. Figure 2, for example, reveals the processes for measurement have not changed significantly since 2004 — the majority of respondents still use a manually generated means to do measurement. There has been a slight increase in the percentage of respondents who indicate they use a combination of their learning management system (LMS) and enterprise resource planning (ERP) systems, which suggests integration between these platforms has improved. By and large, however, the mix is still the same.
Similarly, Figure 3 shows almost identical results for the percentage of companies that conduct post-training assessment and correlate training to things such as employee performance, business performance and customer satisfaction.
Technology’s Role Is Important
One area in which there has been significant change is in the percentage of respondents who say their technology-based learning platform has given them a greater ability to make correlations between training and performance. The capability of an LMS to automatically track who has completed training is significant, but even more valuable is its ability (when integrated with a performance management system) to run reports that compare performance levels of those who have been trained with those who have not. Even when an LMS isn’t linked to a performance management system, it can still generate automatic post-training surveys for managers and training participants to assess participants’ abilities, from which further reports can be run.
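As a rough illustration of the trained-versus-untrained comparison described above, the logic might be sketched as follows. This is a hypothetical example only — the record format, field names and ratings are assumptions for illustration, not data from the IDC survey or any particular LMS.

```python
# Hypothetical sketch: comparing average performance ratings of employees
# who completed a training program against those who did not, assuming LMS
# completion flags and performance ratings have been exported as tuples.

def average(values):
    """Simple arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

def compare_groups(records):
    """records: list of (employee_id, completed_training, rating) tuples.
    Returns (trained_average, untrained_average)."""
    trained = [rating for (_, done, rating) in records if done]
    untrained = [rating for (_, done, rating) in records if not done]
    return average(trained), average(untrained)

# Illustrative data only.
records = [
    ("e1", True, 4.2), ("e2", True, 3.9), ("e3", False, 3.1),
    ("e4", False, 3.4), ("e5", True, 4.5),
]
trained_avg, untrained_avg = compare_groups(records)
print(f"Trained avg: {trained_avg:.2f}, untrained avg: {untrained_avg:.2f}")
```

In practice the same comparison would be run as a report inside the integrated LMS/performance-management platform rather than as a standalone script; the sketch just shows the underlying calculation.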
One respondent to IDC’s survey shows how an LMS is being used effectively in this manner: “In addition to the standard smiley sheets, our LMS automatically sends out an e-mail 90 days after the training with a survey that asks, ‘Now that you have had time to put your training to use …,’ then we have a series of questions. We have found that this provides much more valid feedback.”
Some Reason for Optimism
Despite the obvious challenges, the news isn’t all bad. As Figure 1 shows, more than one-third of respondents either were satisfied or very satisfied with the extent of measurement at their companies. Members of this group typically demonstrated wider support for measurement from the senior levels within their organization, as well as enough manpower to consistently tackle measurement initiatives. One satisfied respondent said having “a dedicated team that cares for the responsibility and a leadership group that is very much interested in results” made measurement possible.
In addition, two-thirds of respondents indicated their companies have plans to implement measurement initiatives over the next six to 18 months. CLOs will be most interested in making training correlations to employee performance, overall business performance and customer satisfaction.
Ending the Stalemate
For reasons already cited, measurement will remain a significant challenge for learning executives going forward, yet CLOs can take steps to effect change on this front. In a document published last year, IDC identified a number of best practices that help increase measurement success. Three of the most significant practices are:
Companies that can incorporate even some of these ideas into their assessment methodology stand a good chance of breaking their “analysis paralysis” and seeing improvement in their measurement initiatives.
Peter McStravick is the senior research analyst for IDC’s Learning Services group, where he addresses the impact of training methodologies and business models on end-user organizations and tracks market growth and opportunities in the U.S. corporate training market. He can be reached at firstname.lastname@example.org.