As its use in organizations increases, e-learning is evolving from being a new initiative to becoming just another business process. Much of this paradigm shift is being driven by businesses that are moving away from envisioning the potential of e-learning and toward expecting it to deliver measurable business results.
by Site Staff
January 2, 2004
This changing dynamic poses an intriguing challenge for learning professionals, many of whom have never been required to use business metrics to quantify the success of their training programs. This maturing vision of e-learning is forcing training decision-makers to move away from proprietary training rituals and reporting to more conventional or mainstream business practices where quality programs and process improvement initiatives are the norm. This article will explore how applying Six Sigma methodology to e-learning will enable those overseeing online learning programs to use the correct metrics to gauge the success of e-learning programs and to present these metrics in a language that the CFO or CEO can understand.
The Current State
Practically all e-learning is currently developed using Instructional System Design (ISD) or some variation. The effectiveness of these programs is normally assessed from a pure training point of view, using Kirkpatrick’s four levels of evaluation. In this model the Level 1 feedback, along with attendance, the number of employees trained or the number of courses available, tends to be what is communicated to senior management. A benchmarking study of e-learning providers (performed by the Depository Trust & Clearing Corp., or DTCC) indicated that the most popular data reported to senior management about e-learning included:
- The number of employees registered for learning management systems.
- The number of courses that the companies had available.
- Scores on Level 2 evaluations.
This study included practitioners from a variety of industries, including financial services, manufacturing and insurance.
This type of information, while popular with e-learning professionals, does not necessarily reflect what is important from a business perspective, and may in fact be one of the reasons that training managers often have difficulty communicating the effectiveness of training to business leaders. A good example of how there can be a disconnect between what training professionals think is important and what businesses really care about might best be illustrated by examining the dynamics of harassment training.
Harassment Training (The Training Perspective)
When harassment training is scrutinized from a pure performance improvement or training perspective, its purpose is to change the behavior of employees (so that they do not engage in behavior that harasses other employees). The traditional training model calls for some type of survey or evaluation to be completed by the students (once they have finished the course). This survey evaluates the learners’ reaction to the training. Some type of criterion-referenced exam follows the evaluation. The exam attempts to appraise whether the participant actually learned anything while attending the training (Level 2). Six weeks to six months later, either some type of follow-up communication with the student’s supervisor or an observation of the student in his work environment should occur. The purpose of this observation is to determine whether the student’s behavior has changed as a result of the training (Level 3). Finally, six months to two years after the training, some comparison between the number of complaints or legal action taken against the company prior to the training and after the training is recommended. Performance improvement evangelists believe that the cost associated with this type of analysis is required if the business truly wants to measure the effectiveness of the training program. From a training point of view, the program is considered successful if scores on the post-class evaluations are favorable and scores on Level 2 exams are high.
If the follow-up communication determines that the behavior of the students has not changed, training professionals will quickly point out that it is the responsibility of the line manager to ensure behavior in the workplace. If the number of claims levied against the company increases, training managers will counter that there are “other factors” that affect the number of harassment complaints against a company. If, on the other hand, the results of these measures are positive, the training organization will claim responsibility for the company’s savings.
Examining this same program from a business point of view paints a different picture of the purpose of harassment training and sheds a different perspective on what the training program must accomplish to be successful.
Harassment Training (The Business Perspective)
Executives are interested in limiting the financial exposure that a company faces (as a result of legal action brought by employees claiming harassment or discrimination). If a company has an anti-harassment policy in place and can show that it provided training for all of its employees, the company’s exposure is minimized. The burning platform (from a business perspective) becomes getting all employees trained as quickly as possible. The quicker they are trained, the sooner the company is protected. There may also be some legal requirement that the training cover specific topics. From a business perspective, however, there is no justification for the types of evaluations described in the training model (or the costs associated with them). What is important is that all employees are trained quickly. The only required metrics are the number of students completing the training and the amount of time that it took. The success of the training is measured by how well the program meets these specifications.
What’s Important?
This is not to imply that performance improvement is never important to businesses, or that the reaction of the student should not be measured. If, for example, students reacted to the training so negatively that they did not complete it, the business requirement to have all employees trained would be jeopardized. What is being suggested is that training professionals must use a system or methodology that allows them to identify the appropriate data/information to report to senior management. If the required business outcome of the training is in fact performance improvement, it should be identified and agreed to by the business stakeholder and not imposed by the training representative. This is where Six Sigma is valuable.
What Is Six Sigma?
Six Sigma is a customer-focused, data-driven methodology that allows companies to achieve the highest level of quality as a result of breakthrough (as opposed to incremental) improvements. (See Figure 1.) With Six Sigma, the customer or end-user of the product, as well as the business or process partner, defines quality. Any component of a product that does not meet the requirements that the customer and the process partner have defined is considered a defect.
Figure 1: Six Sigma Savings
Centered on a powerful problem-solving and process-optimization methodology, Six Sigma is credited with saving many billions of dollars for companies such as:
| Company | Savings | Period |
| --- | --- | --- |
| Dow | $1.6 billion | 3 years |
| DuPont | $1.6 billion | 4 years |
| Ford | $1 billion | 3 years |
| GE | $12 billion | 5 years |
| Toshiba | $4 billion | 3 years |
Source: Six Sigma Academy
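In Six Sigma terms, quality is usually expressed as defects per million opportunities (DPMO); a "six sigma" process produces roughly 3.4 DPMO. The arithmetic is simple, and a minimal sketch (with hypothetical numbers, not figures from the article) looks like this:

```python
# Minimal sketch of the standard DPMO calculation used in Six Sigma.
# The figures below are hypothetical, purely for illustration.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Example: 3,000 course completions, 2 requirements (defect opportunities)
# checked per completion, 45 completions that missed a requirement.
print(dpmo(defects=45, units=3000, opportunities_per_unit=2))  # 7500.0
```

A program at 7,500 DPMO sits far below six-sigma quality, which is exactly the kind of gap the methodology is designed to surface and close.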
Applying this model to the harassment-training example might look like this: The business has identified that 3,000 students must be trained in 30 days. The end-users or customers have given feedback that they do not like courses that are more than one hour long. These factors then become the requirements that the training must meet. Any of those requirements that are not met are labeled as defects.
The Six Sigma methodology ensures that the combination of business needs and customer requirements drive decisions around measurement and reporting by identifying the things that are important to both the customer and the business. This occurs by first ascertaining what Six Sigma calls output process indicators, or measurable requirements for the process. For the harassment-training example, those indicators would be:
- 3,000 students trained.
- All students trained in 30 days.
- Course length of one hour or less.
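Once the output indicators are fixed, checking a program against them is mechanical. A sketch of that check (the record data and field names are invented for illustration) might look like:

```python
# Illustrative sketch: checking completion records against the three
# output indicators for the harassment course. Data is hypothetical.

records = [
    {"student": "s1", "days_to_complete": 12, "course_minutes": 55},
    {"student": "s2", "days_to_complete": 35, "course_minutes": 55},  # too slow
]

REQUIRED_STUDENTS = 3000   # 3,000 students trained
MAX_DAYS = 30              # all students trained in 30 days
MAX_MINUTES = 60           # course length of one hour or less

# Any record violating a requirement is a defect in Six Sigma terms.
defects = [r for r in records
           if r["days_to_complete"] > MAX_DAYS or r["course_minutes"] > MAX_MINUTES]
shortfall = REQUIRED_STUDENTS - len(records)

print(len(defects))   # 1 record violates the 30-day requirement
print(shortfall)      # 2998 students still to be trained
```

The point of the sketch is that every requirement is a concrete, testable threshold, not a matter of professional opinion.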
Using Six Sigma methodology, the customer requirements are identified as the voice of the customer (VOC), and the business requirements are known as the voice of the business (VOB). See Figure 2.
The Voice of the Customer
The end-user perspective or voice of the customer (VOC) may come from a variety of sources, including phone calls, written complaints, surveys or, what training professionals like the most, level-one evaluations. The VOC is then converted or categorized into key customer issues, which are in turn converted to critical customer requirements (CCR) or specific measurable targets. For example, complaints that e-learning programs are too long might be categorized with other similar complaints and put into a category called time. This voice then becomes the critical customer issue. Customers might then be surveyed in order to identify (from the customer perspective) how long the programs should be. That metric then becomes an output indicator that must be measured. Scores meeting or exceeding customer requirements are strengths, and conversely, scores falling short of customer specifications become areas of opportunity.
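The rolling-up of raw comments into key issues can be sketched as a simple keyword bucketing exercise. The categories and comments below are assumptions for illustration, not data from the article:

```python
# Hypothetical sketch: categorizing raw VOC comments into key customer
# issues, then surfacing the most frequent one as the critical issue.
from collections import Counter

# Assumed keyword buckets, one per candidate issue category.
ISSUE_KEYWORDS = {
    "time": ["too long", "length", "hours"],
    "navigation": ["menu", "clicks"],
}

comments = [
    "The course is too long",
    "Module length is excessive",
    "Too many clicks to reach the menu",
]

def categorize(comment: str) -> str:
    text = comment.lower()
    for issue, words in ISSUE_KEYWORDS.items():
        if any(w in text for w in words):
            return issue
    return "other"

issues = Counter(categorize(c) for c in comments)
print(issues.most_common(1)[0][0])  # "time" is the critical customer issue
```

In practice the categorization is a human judgment call, but the output is the same: a ranked list of issues, the top of which gets converted into a measurable CCR.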
The Voice of the Business
The same process is used to identify metrics from the perspective of the business. The voice of the business (VOB) is captured from interviews with business stakeholders, corporate initiatives, regulatory requirements, etc. These factors are then categorized into business issues that are in turn converted into measurables or what Six Sigma calls “critical business requirements” (CBR).
A prioritized listing of the measurable customer and business requirements then becomes what Six Sigma calls “output indicators” or the benchmark that puts the whole organization on the same page around the success or failure of e-learning. The fact that these requirements are derived in partnership with both the business and customer stakeholders ensures that the language is appropriate for all audiences.
It should be pointed out that this all happens at a strategic level and does not negate the requirement to do the tactical work of performing a needs analysis for individual e-learning projects. The results of this measurement (the output indicators) will validate whether the e-learning program is making a tangible business impact. Even if the e-learning program is falling short, however, there will now at least be a benchmark, a common understanding and a common language around what the program needs to accomplish to show positive and tangible business impact.
Analyzing Your E-Learning Program
Once the performance of the online learning program has been measured (in relation to business and customer requirements), there is a good chance that there will be some areas of opportunity. There will therefore be a need to make adjustments to the program so that business and customer specifications can be met.
Six Sigma recommends that no adjustments be made to processes until there is a complete understanding of the process and validation of what effect the changes will have on it. This understanding is gained as a result of both process analysis and data analysis. Many organizations do not track the type of data required to do a comprehensive data analysis. However, significant improvements can be made as a result of a good process analysis. There are a variety of tools in the Six Sigma tool chest available to help perform process analysis on e-learning programs; some of the most useful include SIPOC diagrams and functional deployment maps.
SIPOC
A SIPOC diagram is a tool used to identify all relevant elements of a process.
The tool name prompts the team to consider the Suppliers of the process, the Inputs to the process, the Process that is being improved, the Outputs of the process and the Customers that receive the process outputs. SIPOC takes the perspective that any procedure, no matter how complicated, can be broken down into three components: an input, a process and an output. If the output is bad, it is because there was a problem in one of the other two areas (either the input to the process or the process itself). Validating that the input is good allows the focus to be on improving the process. If on the other hand the input is bad, controls must be put in place to ensure valid and correct input. Once the input can be verified as correct, keep the process the same and then test the output. If the output is still bad, the problem is obviously with the process. In addition to the inputs, outputs and process, SIPOC has suppliers that give the input and customers who validate whether the output meets their requirements. Using this tool can help to easily identify where problems are occurring and where the corrective action needs to take place. The first step, however, is to ensure that the inputs to the process are correct. See Figure 3.
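The diagnostic logic described above reduces to a small decision rule: verify the input first, and if a verified-good input still yields a bad output, the fault must lie with the process. A minimal sketch (the function name is illustrative, not a standard API):

```python
# Minimal sketch of SIPOC's input/process/output diagnostic logic.
# If the input is verified good and the output is still bad,
# the problem must be in the process itself.

def locate_problem(input_ok: bool, output_ok: bool) -> str:
    """Return which component of input -> process -> output needs attention."""
    if output_ok:
        return "none"           # output meets customer requirements
    return "process" if input_ok else "input"

print(locate_problem(input_ok=True, output_ok=False))   # process
print(locate_problem(input_ok=False, output_ok=False))  # input
```

The value of framing it this way is sequencing: controls go on the input first, and only once the input is trustworthy does improvement effort shift to the process.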
Functional Deployment Maps
Once the work has been done to ensure that inputs to the process are good (if the outputs still fall short of business and/or customer requirements), the focus can now shift to fixing the process itself. One valuable tool that can be used to help with this is a functional deployment map. This map or flow chart graphically represents all the tasks, sequences and relationships within a process. In short, it shows who does what and when. While the map itself may seem to be a simple diagram, developing such a graphic can be quite a complicated task. This complexity may be best illustrated by examining part of a case study of a Six Sigma e-learning project done by The Depository Trust & Clearing Corp. (DTCC).
At DTCC, even though the work to complete this map was being done by people who used the process every day, the functional deployment map alone took almost four meetings (eight hours) to complete. The exercise of developing this map made it very clear to everyone involved that those working with e-learning development had a different view on what was supposed to be occurring and when it was supposed to be happening. When the map was finally completed, it displayed all of the steps in the e-learning development process as they were really occurring. It illustrated where each step was performed and who was involved. The completed map clearly illustrated for us that our process contained a series of checks and rechecks that virtually guaranteed rework. So at this point, we had validated that the problem was with our process. We also identified the steps in our process that were potentially causing the problems.
The functional deployment map is a tool that allows its user to see all of the steps in a process and thus be in a position to analyze each of the steps.
Qualitative Analysis
Once the functional deployment map is complete, the next type of process analysis that should be completed is a qualitative analysis. This analysis requires that every step in the e-learning process (as identified in the functional deployment map) be evaluated and classified as customer-value-added, operational-value-added, not-customer-value-added or not-operational-value-added. Customer-value-added is defined as an activity that the customer recognizes as valuable, that changes the product toward something that the customer expects or that is done right the first time. Operational-value-added activities are activities that are required by contract or other laws and regulations, are done right the first time or are required to sustain the workplace’s ability to perform customer-value-added activities. All of the tasks that are classified as non-value-added are then categorized based on whether they are easy to implement, fast to implement, cheap to implement, within the team’s control and easily reversible. The non-value-added activities (NVA) that meet all of these conditions are identified as quick wins. See Figure 4.
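The quick-win screen is essentially a filter over the classified steps. A sketch of that filter (the step names and classifications are invented for illustration) might look like:

```python
# Illustrative sketch: filtering non-value-added process steps down to
# "quick wins". Step names and flag values are hypothetical.

steps = [
    {"name": "second storyboard review", "value_added": False,
     "easy": True, "fast": True, "cheap": True,
     "in_control": True, "reversible": True},
    {"name": "legal sign-off", "value_added": False,
     "easy": False, "fast": False, "cheap": True,
     "in_control": False, "reversible": False},
    {"name": "author content", "value_added": True,
     "easy": False, "fast": False, "cheap": False,
     "in_control": True, "reversible": False},
]

# A quick win must satisfy all five conditions from the article.
CRITERIA = ("easy", "fast", "cheap", "in_control", "reversible")

quick_wins = [s["name"] for s in steps
              if not s["value_added"] and all(s[c] for c in CRITERIA)]
print(quick_wins)  # ['second storyboard review']
```

Note that "legal sign-off" is non-value-added to the customer but survives the filter, because it is outside the team's control and not easily reversible; only the redundant review qualifies as a quick win.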
Depending on how far away the e-learning process is from the business and customer requirements (the critical-to-quality requirements), implementing the quick wins might be enough to ensure that the e-learning program is accomplishing what it needs to, in order to satisfy what both the business and the customer feel is important.
Why Six Sigma?
Applying Six Sigma methodology to e-learning takes a lot of effort, energy and time. It requires a skill set that is new to the e-learning professional. However, the types of analysis, efforts and skills that are required by Six Sigma are currently being applied to virtually every other business process, in virtually every industry. It is now being considered as a methodology to help fight the war on terror. If e-learning is going to be looked upon as a business process, business practices must be applied. Taking the time to apply Six Sigma to e-learning ensures that the business gets what it needs, that the customers get what they want, that the language is a common business language and, maybe most importantly, that e-learning has business credibility.
Kaliym Islam is the director of instructional technologies for The Depository Trust & Clearing Corp., where he oversees all technology-based training and e-learning strategies. E-mail Kaliym at kislam@clomedia.com.