Personnel Evaluation

A fundamental tenet of product evaluation is that you evaluate the design, NOT the user. However, there are times when this is not necessarily true. Stufflebeam and Coryn (2014) describe what they call ghosts in the system. For example, in a program evaluation, a designer creates an instructional product that must then be implemented or delivered by someone else. Likewise, a policy must be implemented by people and followed by individuals, and educational products must be used by their intended users. Most people don’t like to be evaluated because being evaluated means being judged, yet making judgments is a fundamental aspect of evaluation. So, while we say we are evaluating the product, we are also evaluating the people implementing the program, using the product, or applying the policy (i.e., the ghosts in the system). The transition from judging the design of a product to judging the user often happens between the formative and summative evaluation phases. However, it can also happen during a needs analysis when conducting a Performance Gap Analysis.

In addition, a fundamental assumption when evaluating a product is that the user intends to use the product and is capable of benefiting from it. This may not always be the case. When conducting a formative evaluation, test subjects often attempt to be helpful. However, once the product has been implemented, users are not always so accommodating. When conducting formative evaluations of a product, you might ask whether users can use the product as intended. When conducting summative evaluations, you might instead ask whether users are willing to use the product. The difference is subtle but important. The formative evaluation of a product judges its design (e.g., can it be used?); however, once the product has been judged adequate and has been implemented, you often judge how willing the intended users are to use the product to accomplish specific learning objectives. You might also ask how capable individuals are when judging the effectiveness of a product. It is not always a question of whether the product is effective but rather how willing and interested learners are in accomplishing the expected learning.

When conducting a Performance Gap Analysis, you are essentially performing a personnel evaluation. You assess how well someone performs a task and judge whether the performance is adequate. You determine the cause of the problem and decide what action to take. Sometimes the solution is to create training or provide practice, but often the solution is to establish a policy (remove obstacles or arrange for consequences). The purpose may be framed as determining whether an instructional product is needed or what might be done to improve a product’s effectiveness (its implementation or application), but essentially, in a summative evaluation, you are evaluating people and deciding what you can do to get them to accomplish the required tasks or implement the product as intended.

Understanding this difference is important when making recommendations. We note the deficiency in users when, for example, we attribute plane crashes or automobile accidents to user error. The driver may be going too fast for the road conditions, driving impaired, or driving distracted; it is not the product that needs improvement. Likewise, many perfectly viable educational products exist. Some might need to be improved, but often the more likely reason a product is labeled “ineffective” is that it is not used at all or not used as intended. When a product would facilitate learning if used correctly, it would be ill-advised to recommend that an instructional product be abandoned or revised simply because a user is unwilling to take advantage of the opportunity.

This is boring! A School Classroom Example

For some time now, one criterion used to judge educational products and programs has been that the instruction must be interesting and engaging. As a result, students don’t always feel inclined to do anything that is not interesting or fun. However, many educational tools (and topics) are essentially uninteresting; they aren’t meant to be interesting. These tools are believed to be useful (or, in the case of a topic, essential). Even if they are interesting at first (i.e., initially novel), educational products will eventually need to be used to accomplish a task, not to entertain. Students must be intrinsically motivated to learn; they cannot always be extrinsically enticed to participate. They can, and need to, accomplish difficult, uninteresting tasks. However, when conducting summative evaluations in authentic contexts, students often comment that an activity or product is uninteresting, too complicated, or difficult to understand. In doing so, they have selected criteria that focus on certain aspects of satisfaction rather than on effectiveness and utility. Feedback of this type may really mean that users don’t want to learn what is expected of them or would prefer not to do anything that challenges their ability.

In a negative case evaluation, you often find that a product helps some individuals accomplish a task or learn a topic while it does not help others. This may not be the fault of the product’s design; it may be the result of an ineffective teacher or the learning environment, or the learner may be unwilling, resistant, unprepared, or incapable of accomplishing the expected learning. Students with an external locus of control may blame external factors for their failure rather than accept responsibility for their learning and expend the effort needed to accomplish the required learning. In these situations, care needs to be taken in deciding what recommendations to make and how to convey sensitive recommendations.

You can’t make me! A Corporate Example

Learning doesn’t just happen in schools. Corporate training benefits from instructional design just as much as classroom instruction does. Companies need to train employees to do specific tasks accurately and consistently. Unfortunately, a training program may be effective in that employees know what to do and how to do it; however, they may choose not to do what is expected for various reasons. For example, employees may become lazy or tire easily, they may not perform well in adverse situations (e.g., dealing with the unrealistic expectations of demanding customers), or they may simply not care.

After receiving training, you may assess an employee’s performance and find that they can do what is expected. You may also later observe employees fail to perform the task consistently when they don’t think they are being watched. In these cases, a performance gap analysis is needed. The issue has nothing to do with whether the training needs to be improved but with how to motivate individuals to do their job. Sometimes, implementing a policy, removing obstacles, or arranging for positive or negative consequences might solve the problem. Other times, terminating the employee may be the best option. Each of these solutions has less to do with a formative evaluation of the instructional products used to train people and more to do with a summative evaluation of the personnel.

Chapter Summary

  • Product evaluation typically focuses on evaluating the product design, not judging the user. However, sometimes summative evaluations must evaluate users, and a performance gap analysis essentially evaluates personnel, not products. 
  • When a product is found to be ineffective, it is essential to identify the actual cause of the problem.
  • Not all products need to be revised or improved.
  • Not all products need to be interesting or fun. Judgments of satisfaction must also consider whether a product is effective and efficient. Choosing appropriate evaluation criteria is essential.
  • Sometimes an evaluation finds that the problem with a seemingly ineffective product is actually user error, a lack of interest, or learner intent.
  • An instructional product may be effective, and individuals may still choose not to use what they have learned or behave (perform) as they have been trained.

Discussion Questions

  1. Think of a product that works perfectly but is not often used. Explain the reason it is not used. Consider ways you might entice potential users to use the product. 
  2. Describe an educational situation where an effective instructional product is used but students choose not to act on what they have learned. What recommendations would be appropriate?  Which recommendations might not be appropriate?

References

Stufflebeam, D. L., & Coryn, C. L. (2014). Evaluation theory, models, and applications (Vol. 50). John Wiley & Sons.
