Every important learning program, as well as every training function, should have a measurement and evaluation strategy. Both the measurement and the evaluation parts are important. You can certainly measure without evaluating the results (for example, just collecting level 1 data), and you can evaluate without formal measurement (for example, using the case study method or a focus group).
Examples of simple measures include number of participants, number of courses, cost, and start/completion dates (all level 0, or activity/volume, measures). By themselves, these measures don’t provide much in the way of evaluation. However, if we set a plan for each one and then compare results to the plan, we can evaluate how well we did in terms of delivery.
Kirkpatrick level 1 measures of participant satisfaction do, by themselves, help us evaluate how well a course was received, but they are even more powerful when compared to a benchmark or plan. Likewise, Kirkpatrick level 3, application of learning on the job, is a valuable measure in its own right for evaluating the likely impact of training, and it is even more meaningful when compared to a plan or benchmark. Levels 4 and 5 (impact and ROI) are key higher-level measures that are all about evaluating the effectiveness of the learning.
Measurement and evaluation are critical to running learning like a business, but they should be done for the right reasons. The best reasons are to ensure that you deliver promised results and to continuously improve. For all key programs, goals should be set for the key measures: number of participants, start and completion dates, levels 1 and 2 at a minimum, ideally level 3, and agreement with the stakeholder on the ultimate measure of success (such as impact on an organizational goal like increasing sales by 10 percent). Progress should then be measured against these goals at least monthly to make sure you are on target and to make any necessary mid-course corrections.
This disciplined use of measures will also provide great learning opportunities that you would otherwise miss. Typically, there will be a variance, or difference, between plan and actual results, which is an opportunity to explore why. Perhaps the plan or forecast was unrealistic. OK, then you have learned something about forecasting and can do a better job next time. Or, more likely, there is an issue with execution that you want to identify and understand as early in the deployment as possible so you can take corrective action (revise the content, work with the instructor, talk with the stakeholder about reinforcing application on the job, etc.). And at the conclusion of an important program, evaluate its effectiveness at level 3, 4, or 5 to determine whether the planned impact was delivered. If it was not, learn from it and do better next time.
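As a simple illustration of the plan-versus-actual comparison described above, here is a minimal sketch of a monthly scorecard check. The measure names, plan values, and the 10 percent variance tolerance are all illustrative assumptions, not figures from the text; the point is only to show how flagging shortfalls against plan might be automated.

```python
# Hypothetical monthly scorecard: compare actual results to plan and flag
# any measure that falls short of plan by more than a chosen tolerance.
# Measure names, values, and the 10% threshold are illustrative assumptions.

def flag_variances(plan, actual, threshold=0.10):
    """Return measures whose actual result misses plan by more than threshold."""
    flagged = {}
    for measure, planned in plan.items():
        result = actual.get(measure)
        if result is None:
            continue  # no actual reported yet for this measure
        variance = (result - planned) / planned  # relative variance vs. plan
        if variance < -threshold:  # shortfall beyond tolerance
            flagged[measure] = round(variance, 3)
    return flagged

plan   = {"participants": 200, "level1_score": 4.5, "level3_application": 0.60}
actual = {"participants": 150, "level1_score": 4.4, "level3_application": 0.58}

print(flag_variances(plan, actual))  # → {'participants': -0.25}
```

A flagged measure is not an answer in itself; as the text notes, it is the prompt to ask whether the forecast was unrealistic or whether execution needs a mid-course correction.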