“Sometimes the technology is the simple part of all this,” said Chris von Koschembahr, executive of information transformation at IBM’s IT Education Services and worldwide director of IBM’s mobile learning program. “It’s the application of it in the appropriate manner and the right blend that’s the hard part. That’s the challenge for us: to try and articulate and drive the use of these technologies in an appropriate manner, or else it will fail, it won’t be effective, it won’t reach the people, and simulations will get a bad name.”
Because of their cost and complexity, it is imperative to evaluate how instructional simulations are used, how well they meet organizational learning and development needs, and what return on investment (ROI) they deliver. That raises the question: What techniques should you employ to measure the utility and ROI of simulations?
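Before turning to specific techniques, it helps to fix the baseline arithmetic. The standard training ROI calculation, an industry-wide convention rather than anything specific to the vendors quoted here, expresses net program benefits as a percentage of program costs. A minimal sketch, with hypothetical dollar figures:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Standard training ROI: net benefits as a percentage of program costs."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: a simulation costing $50,000 to build and deploy,
# credited with $120,000 in saved classroom time and avoided errors.
print(f"ROI: {training_roi(120_000, 50_000):.0f}%")  # -> ROI: 140%
```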
Level of Knowledge and Proficiency
One way simulations can demonstrate their effectiveness is by verifying not only how much they are used, but also whether employees attain the necessary knowledge and skills. Tracking employee learning in other environments can be tedious and problematic, especially when learners merely feign interest in the subject matter. “Just because someone has turned up and nodded their head, have they actually understood the business process or task, or the reason why they were in training?” asked Vince Lucey, director of consulting for OnDemand Software, which counts AOL Time Warner, Capgemini and Newport News Shipbuilding among its clients.
OnDemand offers simulations that verify that learners have gone through training and comprehend the subject matter, Lucey said. “It allows the end-user client to set up what they believe is a fail-or-pass rate,” he said. “It cuts out the subjectivity of the competency of training. It can test through the system to make sure that they can actually demonstrate an understanding of the process for what they’re being trained in. After the training, they can assess and quantify how effective the training has been.”
“Not only can we produce content on a step-by-step basis which explains to the users what the correct procedure is, but also we can get them to go back and test that understanding. Because we can track that on a server, that provides our clients with concrete evidence that those users were trained,” added Janice Brown, managing director of Trainers IT Services. Her company recently formed a partnership with OnDemand to provide support services to Personal Navigator users. “When they’re moving to that new system, what we in instructional development understand is what the skills gap is for the user. That’s where you see the return on the investment—when we know what they’re using at the moment and where they need to be in the new system.”
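Neither OnDemand nor Trainers IT publishes its data formats in this article, so the record layout below is purely illustrative; it only sketches the kind of per-user, per-step evidence Brown describes tracking on a server to prove users were trained:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StepResult:
    """One user's attempt at one step of a simulated business procedure."""
    user_id: str
    procedure: str
    step: int
    passed: bool
    completed_at: datetime

# Illustrative evidence trail a training manager could audit:
results = [
    StepResult("jdoe", "invoice-entry", 1, True, datetime.now(timezone.utc)),
    StepResult("jdoe", "invoice-entry", 2, True, datetime.now(timezone.utc)),
]
fully_trained = all(r.passed for r in results)  # per-user pass/fail evidence
```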
There are two established techniques to measure the competence gained through instructional simulations, according to Michele Cunningham, vice president at THINQ Learning Solutions, which provides learning platforms to the U.S. Army, the U.S. Air Force, Citibank, Boeing, Lockheed Martin and General Dynamics. The first is to conduct pre- and post-training testing. Simply put, this approach determines users’ levels of knowledge and proficiency prior to training with simulations. Upon completion, learners are retested on the same content, and the results are gauged against pre-training levels.
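Cunningham doesn’t prescribe a particular formula for scoring that comparison. One common way to do it is a normalized gain, the fraction of each learner’s available headroom that the training closed; the sketch below uses invented scores:

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the available headroom (max_score - pre) the learner closed."""
    return (post - pre) / (max_score - pre)

# Hypothetical (pre, post) test scores for a cohort trained with simulations:
scores = [(40, 75), (55, 80), (60, 90)]
gains = [normalized_gain(pre, post) for pre, post in scores]
print(f"Average normalized gain: {sum(gains) / len(gains):.2f}")  # ~0.63
```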
The other metric process Cunningham described is more experimental in nature. Essentially, it involves dividing employees into two groups and administering simulation training to only one of them. The job performance of both groups is then tracked and compared. As Cunningham pointed out, though, this method can be unfair, as it may give certain employees an inequitable lead in skills and knowledge over their peers. The question of which employees to place in which group also arises: Should it be a random sampling? Should one group be composed of more capable or less capable workers? While this method is by and large helpful in measuring the effectiveness of simulations, these implications might discourage trainers from using it.
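The article doesn’t specify how the two groups’ performance would be compared; a minimal sketch of that comparison step, with invented performance figures, might look like this:

```python
from statistics import mean

# Hypothetical post-training performance (e.g., tasks completed per hour):
simulation_group = [34, 38, 41, 36, 39]  # received simulation training
control_group = [29, 31, 33, 30, 32]     # did not

lift = mean(simulation_group) - mean(control_group)
print(f"Mean lift attributable to simulation training: {lift:.1f} tasks/hour")
# A rigorous study would randomize group assignment and test the difference
# for statistical significance before crediting it to the training.
```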
Speed of Product Rollout
The efficacy of simulations also can be measured by assessing how they affect product rollout. When a new product is released, obviously there will be a shortage of equipment with which to provide hands-on training or instructor-led demonstrations, von Koschembahr said. However, by distributing instructional simulations concurrent with or even prior to the release of new products, you can ensure users will know how to perform necessary tasks much sooner and at much less risk to themselves or the equipment.
“We’ll make a new piece of hardware or some kind of machine,” von Koschembahr said. “When those first ones roll off the line, you can be sure that those are going out to either be showcased or go to our first customers. In the training environment, we have an immediate demand from our customers to be trained on those as well, but we need those machines to train people. So what we do for the first x amount of days or weeks until we can get a critical mass of that same hardware in order to train people, is go find one of the first and grab it, and make some simulations and begin to deliver training immediately based on those simulations. This allows us to do some initial training that would not have been possible due to the limitation on hardware.”
“An application is about to be deployed, and somebody on the project says, ‘OK, now what about training?’ or ‘What about getting this to the users?’” Brown said of past experiences in training development. “The training team got the call at the 11th hour. There was a lot of midnight oil burned. Before [simulations] came about, that actually meant cutting and pasting and dropping things into word processing.” Brown said simulations have made life easier for both application developers and training developers because they require the two teams to work together from the outset and develop training concomitantly with the new product.
Enhancement of Workforce Performance
A third way simulations can establish their worth as a training tool is by improving the quantity and quality of employees’ work. This is achieved by providing learners with an almost exact replica of the work environment through a “combination of tools, techniques and services that allow the delivery of a rich learning experience to an audience,” which better prepares them for actual job tasks, Cunningham said. It also is accomplished by shortening or even eliminating training cycles, which becomes possible when simulation training is offered whenever and wherever it is needed and is blended with other learning platforms.
“The advantage is certainly the opportunity to closely mimic the operational context in which it will be used, particularly for something that is more of an individual activity,” Cunningham said. “It really does allow them to test-drive, and some of the more sophisticated ones even manage the 3-D purview of what interaction with that piece of equipment would be.”
Lucey also emphasized that many simulations are accessible regardless of time and place. “The beauty of simulations is you can use it early in the morning or late at night, bookmark it, work through it and come back to it at a later stage,” he said.
There are a few methods that can be used to measure employee performance as it pertains to simulation training, Cunningham said. The first is time-to-competence, or the amount of time it takes to get from the beginning of training, when an employee’s knowledge of the content is practically nonexistent, to an acceptable level of comprehension of the skills or knowledge being taught. This metric is particularly useful in fields with high degrees of innovation and frequent product rollouts. Other approaches include assessing reductions in employee error rates and increases in the quality of goods or services. These methods are more practical in manufacturing and other routine-based vocations.
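As a rough illustration of the first two metrics Cunningham names, the sketch below computes time-to-competence and error-rate reduction; the function names and figures are invented for illustration, not THINQ’s:

```python
from datetime import date

def days_to_competence(training_start: date, assessment_passed: date) -> int:
    """Elapsed days from the start of training to the passing assessment."""
    return (assessment_passed - training_start).days

def error_rate_reduction(errors_before: int, errors_after: int) -> float:
    """Percentage drop in errors per unit of work after training."""
    return (errors_before - errors_after) / errors_before * 100

# Hypothetical figures for a single trainee:
print(days_to_competence(date(2004, 3, 1), date(2004, 3, 15)))             # 14
print(f"{error_rate_reduction(errors_before=20, errors_after=12):.0f}%")   # 40%
```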
Another gauge of the utility of simulations is how well they work with other learning platforms. One unique advantage of simulations is that not only can they be used to supplement other training tools (and vice versa), but also the content from simulation training can be applied to other learning environments.
“Simulations can also be used as a blended learning solution,” Lucey said. “Some people learn better through touch; some people like to be sat down with a book. We normally find the better way to deliver training through blended solutions is to have hands-on and then the roll-through instructional design. Another advantage of simulations is they can actually bolster the training after you’ve given the instructor-classroom training. So if someone wants a refresher before they go into the live application or the live system, they can actually go do another little test through the Web.”
“Our experience is very much around blended learning,” Brown said. “Because you have these simulation modes, it does mean that you’re actually able to reinforce that and enable them to use these products in a classroom environment. For the client, the initial savings in terms of training is in development cost, even though there’s a bit of a shift necessary in going from classroom to blended learning. Therefore, that element of delivering may not be cheaper this time around, but two years up the road, they’ll have real cost savings.”
In addition to standard blended learning, von Koschembahr explained a procedure used in IBM’s IT Education Services that obtains application sequences from simulation training and saves them. “I call this concept ‘Develop once and deliver many ways,’” he said. Using a simulation application capture tool, segments of the simulation training are preserved and delivered to other learning platforms.
“If we develop our content with an open set of content development tools that make learning elements, then classrooms draw from those elements and the e-learning folks can draw from those same elements,” he said. “That tool is able to publish that exact same documentation that was just required for the class. The benefits are tremendous, because not only can I use that in a lab environment, but the e-world, so to speak, can now grab that and contextualize that into embedded work. It could be in a repository of learning objects, or it can be repackaged into an all-new e-learning offering, because now I’ve got the lab in an e-format. It drives cost savings. By being able to harvest that and deliver it in many different ways, you have new revenue opportunities.”
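IBM’s actual capture tool and repository aren’t specified beyond von Koschembahr’s description, so the sketch below only illustrates the “develop once, deliver many ways” idea: a single pool of learning elements that multiple delivery channels render differently. All names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LearningElement:
    """A reusable unit of captured simulation content."""
    element_id: str
    title: str
    body: str  # captured screens, steps or narration

# One authored element, stored once in a shared repository:
repository = {"seq-01": LearningElement("seq-01", "Create a purchase order", "...")}

def render_for_classroom(elem: LearningElement) -> str:
    """Package the element as printable lab documentation."""
    return f"Handout: {elem.title}\n{elem.body}"

def render_for_elearning(elem: LearningElement) -> str:
    """Package the same element for a Web-based course."""
    return f"<module title='{elem.title}'>{elem.body}</module>"

elem = repository["seq-01"]
print(render_for_classroom(elem))
print(render_for_elearning(elem))
```

Authoring happens once; each channel is just another renderer over the same elements, which is where the cost savings and new revenue opportunities von Koschembahr describes come from.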
Although there are many ways to measure the ROI of simulations, one of the most fundamental things to remember is that knowledge itself is ultimately intangible. Thus, no metric can fully illustrate the significance of simulations to learning and development. In that sense, simulations are invaluable.
“Sometimes it’s unrealistic to put a pure dollar value on everything you do,” von Koschembahr said. “How can you put a price on the fact that [learning] can now be better contextualized?”
Brian Summerfield is associate editor for Chief Learning Officer magazine. He can be reached at firstname.lastname@example.org.