Digest 16. Accurate ratings in performance evaluations: Restructured FOR training makes it possible!

Business photo created by katemangostar - freepik

Accurate assessments of employee performance are much needed in organizations, since they bring many benefits: superior job performance, enhanced perceptions of procedural and informational justice, increased appraisal satisfaction, and motivation to improve future job performance. Managers’ diligence is key to accurately measuring employees’ performance and providing feedback (as we have seen in our previous digest). Yet providing accurate assessments is not easy and may not come naturally. Raters may rely on heuristic‐based judgments (that is, mental shortcuts for making judgments and decisions) during the evaluation process. To mitigate these heuristics, raters can and should be trained to improve their rating accuracy, adopt organizational goals, develop feedback-delivery skills, and increase their confidence in performing assessments. One effective and frequently used rater training approach is frame‐of‐reference (FOR) training.

FOR training

FOR training assumes that people have idiosyncratic knowledge (personal schema) which differs from the more widely held, and often explicitly stated, institutional knowledge (organizational schema). The training aims to educate raters to draw on the institutional knowledge and use common evaluation standards (i.e., the organization's schema of performance) when making evaluations. To minimize the gap between personal and organizational schemas, the FOR training process provides three key elements for each performance dimension or competence:

  • A standardized definition of “good performance”, with behavioral examples;

  • Mock evaluations to practice the rating scale;

  • Feedback from an advisor (or expert) indicating how a rater's evaluation deviates from the organizational schema and standards.

FOR training has been shown to improve rating accuracy. However, it rests on an unexamined assumption: that raters can effectively replace their personal schemas with those of the organization. Moreover, the practice‐then‐feedback procedure may backfire and make evaluations less accurate by triggering another heuristic, known as the anchoring-and-adjustment effect. People start from an anchor, an initial reference point, and then, based on additional information, make incremental adjustments until they reach a final judgment. The anchor strongly influences those subsequent adjustments, so the adjustment process is typically insufficient, especially when the anchor is self‐generated rather than provided by others. In FOR training, letting raters practice the evaluation first (a self-generated anchor) and then giving them feedback (information for adjustments) may therefore still result in insufficient adjustments and inaccurate ratings.
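The insufficient-adjustment dynamic described above can be made concrete with a minimal numeric sketch. All values here are hypothetical and purely illustrative; the `step` fraction is an assumed parameter, not one estimated in the study:

```python
# Minimal sketch (hypothetical values) of the anchoring-and-adjustment
# heuristic: the rater starts from a self-generated anchor and moves only
# part of the way toward the advisor's feedback, so the final judgment
# stays biased toward the anchor.

def adjust(anchor, feedback, step=0.5):
    """One insufficient-adjustment step: move only a fraction toward feedback."""
    return anchor + step * (feedback - anchor)

self_anchor = 8.0    # rater's own first rating (self-generated anchor)
expert_score = 5.0   # feedback from the training advisor

final = adjust(self_anchor, expert_score)
print(final)  # 6.5: still 1.5 points above the expert score
```

Because each adjustment covers only a fraction of the remaining gap, a single practice-then-feedback cycle leaves a residual bias toward the rater's initial (personal-schema) judgment.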

How can FOR training be improved?

To overcome such shortcomings, Tsai, Wee, and Koh (2019) have introduced the restructured FOR training. This restructuring includes the following changes:

  • Restructured presentation: Evaluation standards are presented before participants practice rating, rather than after the practice as in the traditional practice-then-feedback procedure explained above. Hence, the anchor is formed on the organization’s evaluation standards (organizational schemas), while the accessibility of any self‐generated anchor is minimized.

  • Adjustment opportunity: A further element is introduced in which participants practice adjusting their ratings, after receiving feedback, until their evaluations are consistent with those of the training advisor (with the organizational standards).

Infographic created by REAL PAL - Steps of restructured FOR training

To test the effectiveness of the restructured FOR training, Tsai and colleagues conducted five studies using experimental designs to compare the typical FOR training with the restructured FOR training. Their results show that raters who attended the restructured FOR training provided evaluations roughly twice as accurate as those of raters who received no training. Accuracy also improved relative to the typical FOR training.

Let’s look at their findings in detail. In each experiment, raters “under training” evaluated 12 workers on workplace-relevant competencies (for example, Planning & Organization, Negotiation) using a scale from 1 (“poor”) to 10 (“excellent”). Their evaluations were then compared with those of “expert managers”. When raters received no training, their ratings differed from the experts’ by 1.13 points on average. This gap narrowed when raters received a typical FOR training (0.88 points) and dropped even further when raters received the restructured FOR training (0.59 points). In other words, the rater-versus-expert discrepancy became almost negligible after the restructured FOR training.
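The accuracy measure above, the average point gap between a rater and the expert benchmark, can be sketched as a mean absolute difference. The scores below are invented for illustration only; they are not the study's data:

```python
# Illustrative sketch (hypothetical scores): rating accuracy measured as the
# mean absolute difference between a rater's scores and expert scores for
# the 12-worker, 1-10 scale evaluation task described above.

def mean_abs_diff(rater, expert):
    """Average absolute gap between rater and expert scores."""
    assert len(rater) == len(expert)
    return sum(abs(r - e) for r, e in zip(rater, expert)) / len(rater)

expert       = [7, 4, 9, 6, 3, 8, 5, 7, 6, 9, 4, 8]  # expert benchmark
untrained    = [8, 6, 7, 8, 4, 6, 7, 6, 8, 7, 6, 7]  # wider gaps
restructured = [7, 4, 8, 6, 4, 8, 5, 7, 6, 8, 4, 8]  # close to experts

print(mean_abs_diff(untrained, expert))      # larger average discrepancy
print(mean_abs_diff(restructured, expert))   # smaller average discrepancy
```

A lower mean absolute difference corresponds to higher rating accuracy, which is how the 1.13 vs. 0.88 vs. 0.59 comparison in the study should be read.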

Organizational implications                                                                                 

Practitioners are recommended to:

  • Use this restructured method to improve the quality of ratings.

  • Educate managers and raters to recognize the anchoring-and-adjustment heuristic in their evaluation processes and to modify their behavior accordingly.

  • Ensure that organizational standards and schemas are shared with and grasped by managers and raters.

  • Understand the importance of initial responses and allow managers to emulate a benchmark as the first step of a training process.

  • Give timely feedback on managers' incorrect ratings to avoid anchoring and the interference of inconsistent information.

  • Extend one central feature of the restructured FOR training, namely the primacy of role modeling over feedback (consistent with the basic tenets of social learning theory), to other training topics, such as teamwork skills and effective team building, information communication, and managerial decision-making.

——

Reference: Tsai, M. H., Wee, S., & Koh, B. (2019). Restructured frame-of-reference training improves rating accuracy. Journal of Organizational Behavior, 40(6), 740–757. https://doi.org/10.1002/job.2368