February 24, 2007

A Baker's Dozen (+1) Design Disasters

For a training professional, it always helps to be reminded of what can go wrong while designing, developing, and delivering training courses.

Becky Pluth, in a recent article in Bob Pike's Training and Development e-Zine, lists the following as a baker's dozen (+1) training design disasters.

  1. Using language that belittles participants or puts on airs
  2. Designing the training first and then writing the terminal and enabling objectives
  3. Spending too much time on the nice-to-know versus the need-to-know
  4. Chunking the content into unmanageable learning portions
  5. Stating objectives of the session and then not meeting them
  6. Not providing a roadmap of where the session is going
  7. Telling stories that don’t match the message but are your “favourite” and everyone “loves” them
  8. Telling participants what to do versus showing them and allowing them to DO
  9. Not building activities that teach your content into the training
  10. Creating job aids that are conceptual instead of behavioural
  11. Using a PowerPoint deck as your handout
  12. Not reviewing or revisiting content throughout the session
  13. Using the same activity multiple times
  14. Overusing one form of media (DVDs, gaming, books, etc.)

For elaboration on the above points, read Becky's full article.

February 19, 2007

What Motivates Learners

For any training programme to be successful, learner motivation is an absolute must. So, what are the training characteristics that motivate learners?

Edmund Saas, a professor of education, has some answers.

In the late 1980s, Edmund conducted a study to determine student perceptions of those college classes likely to result in high and low classroom motivation. Edmund surveyed about 700 of his educational psychology students for the study.

The results of the study are discussed in Motivation in the College Classroom: What Students Tell Us, an article in the Teaching of Psychology journal (subscription required).

The following eight characteristics emerged as the top contributors to learner motivation (in order of importance):

   1. Instructor's enthusiasm
   2. Relevance of the material
   3. Organisation of the course
   4. Appropriate difficulty level of the material
   5. Active involvement of learners
   6. Variety of instructional methods
   7. Rapport between instructor and learners
   8. Use of appropriate examples

The first three characteristics received considerably higher ratings than the rest.

The top characteristics that students perceived as non-motivating were, as expected, the opposite of the motivating ones:

   1. Lack of variety in instructional methods
   2. Disorganisation of the course
   3. Little or no active learner involvement
   4. Lack of involvement of the instructor
   5. Lack of interpersonal warmth from the instructor

If you want further descriptions of the motivating characteristics, see David Gershaw’s article titled Motivating Students to Learn.

February 15, 2007

Measuring the Effectiveness of Training

How do you know that your training programme is effective? Would assessments at the end of a training programme help you measure the effectiveness of the training?

Maybe to an extent, but not quite.

Assessment Acumen: Do You Have It?, an article by Margery Weinstein in a recent issue of Training Magazine, has the following quote from Roger Chevalier.

“The biggest mistake is we follow-up with students based on whether or not they've acquired knowledge, and that's a problem. The outcome we're looking for is actually a change in behaviour,” he says. “I think all trainers need to redefine learning as not just the acquisition of knowledge but as the ability to demonstrate a desired behaviour, and that learning is everything, and anything, that contributes to the change in behaviour we’re looking for.”

As Roger rightly points out, the primary goal of training is to impact the learner’s job performance. Assessments that test for new knowledge at the end of training do not accurately convey whether the learner will demonstrate the desired behaviour on the job.

(By the way, I’m not saying that assessments at the end of training are not required. What I’m saying is that such assessments cannot be the primary means of measuring training effectiveness.)

So, to measure the effectiveness of training, we need to focus on evaluating the transfer of the learner’s new skills and knowledge to the job. This kind of evaluation corresponds to the third level in Donald Kirkpatrick’s model of evaluation. The evaluation at this level attempts to answer the question: are the newly acquired skills, knowledge, or attitude being used in the learner’s everyday environment?

Of course, conducting such evaluations is challenging. First, we need to determine when we can conduct such an evaluation. It’s not fair to expect a learner’s performance behaviour to change immediately after the training.

But when is it fair to do the evaluation? After a month? After two months? I believe that would depend on the kind and length of the training. There are no easy answers here.

And how do we plan and conduct the evaluation? Ed Mayberry has some answers in an article in Learning Circuits.

Dr Jeanne Farrington also proposes a six-step method for evaluating learning transfer. See her article, Measuring Transfer for Results and Glory, in a recent issue of the DSA newsletter.

The Assessment Acumen article referred to earlier in this post gives pointers on what some companies are doing to better evaluate whether their training programmes are working.