Reaction Level

Reaction data captures how participants respond to the training experience: specifically, how satisfying, engaging, and relevant they find it.

 

This is the most common type of evaluation that training departments carry out today. Training practitioners often hand out 'smile sheets' (or 'happy sheets') to participants at the end of a workshop or eLearning experience, asking them to rate, on a scale of 1-5, how satisfying, relevant, and engaging they found the experience.

 

Level 1 data tells you how the participants feel about the experience, but this data is the least useful for maximizing the impact of the training program.

 

The purpose of corporate training is to improve employee performance, so while an indication that employees are enjoying the training experience may be nice, it does not tell us whether we are achieving that performance goal or helping the business.

 

That said, efforts to create a satisfying, enjoyable, and relevant training experience are worthwhile; this level of evaluation should simply consume the least time and budget. The bulk of the effort should be devoted to Levels 2, 3, and 4.

 

Kirkpatrick Level 1 Evaluation Techniques

As discussed above, the most common way to conduct Level 1 evaluation is to administer a short survey at the conclusion of a training experience. If it's an in-person experience, this may be done with a paper handout, a short interview with the facilitator, or an online survey sent as an email follow-up.

 

If the training experience is online, then you can deliver the survey via email, build it directly into the eLearning experience, or create the survey in the Learning Management System (LMS) itself.
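
To make this concrete, a Level 1 survey can be modeled as simple structured data that an eLearning module or LMS integration reads at runtime. The sketch below is a minimal, hypothetical Python representation; the SurveyQuestion structure and the question wording are illustrative and not tied to any particular LMS.

    from dataclasses import dataclass

    @dataclass
    class SurveyQuestion:
        """A single Level 1 (reaction) item rated on a 1-5 scale."""
        key: str     # identifier used when storing responses
        prompt: str  # text shown to the participant

    # A typical 'smile sheet' covering satisfaction, relevance, and engagement.
    LEVEL_1_SURVEY = [
        SurveyQuestion("satisfaction", "How satisfied are you with this training?"),
        SurveyQuestion("relevance", "How relevant was this training to your job?"),
        SurveyQuestion("engagement", "How engaging did you find this training?"),
    ]

Keeping the questions in one structure like this means the same smile sheet can be rendered on paper, in an eLearning module, or in an LMS survey without rewriting it.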

 

Common survey tools for training evaluation include Questionmark and SurveyMonkey.

 

Kirkpatrick Level 1 Evaluation Examples

Let's consider two real-life scenarios where evaluation would be necessary:

 

A large technical support call center rolled out new screen sharing software for agents to use with customers. The company is providing training to teach the agents how to use the new software.

A company that sells industrial coffee roasters to regional roasteries offers follow-up training on how to properly use and clean the machines.

In the call center example, imagine a facilitator hosting a one-hour webinar that teaches the agents when to use screen sharing, how to initiate a screen sharing session, and how to explain the legal disclaimers. At the end, the facilitator splits the group into breakout sessions for practice.

 

At the conclusion of the experience, participants are given an online survey and asked to rate, on a scale of 1 to 5, how relevant they found the training to their jobs, how engaging they found the training, and how satisfied they are with what they learned. There's also a question or two about whether they would recommend the training to a colleague and whether they're confident that they can use screen sharing on calls with live customers.
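
To illustrate what reporting on those ratings might look like, here is a minimal sketch that averages each question's 1-5 scores across participants. The response data is invented for illustration.

    # Hypothetical Level 1 responses: one dict per participant, keyed by question.
    responses = [
        {"relevance": 4, "engagement": 5, "satisfaction": 4, "would_recommend": 5, "confidence": 3},
        {"relevance": 2, "engagement": 4, "satisfaction": 3, "would_recommend": 3, "confidence": 2},
        {"relevance": 5, "engagement": 5, "satisfaction": 5, "would_recommend": 5, "confidence": 4},
    ]

    def average_scores(responses):
        """Return the mean 1-5 rating for each survey question."""
        totals = {}
        for response in responses:
            for question, score in response.items():
                totals.setdefault(question, []).append(score)
        return {question: sum(scores) / len(scores) for question, scores in totals.items()}

    for question, mean in average_scores(responses).items():
        print(f"{question}: {mean:.1f}")

A simple per-question average like this is usually all Level 1 reporting needs; the value comes from comparing questions (for example, high satisfaction but low confidence) rather than from any single number.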

 

In the coffee roasting example, imagine a facilitator delivering a live workshop on-site at a regional coffee roastery. He teaches the staff how to clean the machine, showing each step of the cleaning process and providing hands-on practice opportunities.

 

Once the workshop is complete and the facilitator leaves, the manager at the roastery asks his employees how satisfied they were with the training, whether they were engaged, and whether they're confident that they can apply what they learned to their jobs. He records some of the responses and follows up with the facilitator to provide feedback.

 

In both of these examples, efforts are made to collect data about how participants initially react to the training event. This data can inform decisions about how best to deliver the training, but it is the least valuable input for deciding how to revise it.

 

For example, if you find that the call center agents do not consider the screen sharing training relevant to their jobs, you would want to ask additional questions to determine why. Addressing such concerns in the training experience itself may provide a much better experience for participants.
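
One way to operationalize that follow-up is to flag participants whose relevance rating falls below some threshold and ask them why. A minimal sketch, using an invented response format and an illustrative cutoff of 3:

    RELEVANCE_THRESHOLD = 3  # illustrative cutoff on the 1-5 scale

    # Hypothetical Level 1 responses, one dict per participant.
    responses = [
        {"participant": "agent_01", "relevance": 5, "satisfaction": 4},
        {"participant": "agent_02", "relevance": 2, "satisfaction": 4},
        {"participant": "agent_03", "relevance": 1, "satisfaction": 3},
    ]

    def flag_for_follow_up(responses, threshold=RELEVANCE_THRESHOLD):
        """Return responses whose relevance rating warrants a follow-up question."""
        return [r for r in responses if r["relevance"] < threshold]

    # These participants would receive an open-ended prompt such as
    # "What would make this training more relevant to your job?"
    for response in flag_for_follow_up(responses):
        print(response["participant"], "-> ask why the training felt irrelevant")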
