Authors

Richard Carter

Publication Date

6-1984

Advisor(s) - Committee Chair

Raymond Mendel

Degree Program

Department of Psychology

Degree Type

Master of Arts

Abstract

Given the recent theoretical emphasis on the process of performance rating (e.g., Landy & Farr, 1980), this study tested the suggestion that better raters may use different rating processes than poorer raters. Specifically, it was designed to determine whether more accurate raters use a systematically different rating strategy than less accurate raters. Accuracy, the proximity of a rating to the ratee’s true score, was operationalized as differential accuracy (Cronbach, 1955), while rating strategies were determined through a policy-capturing method (e.g., Zedeck & Kafry, 1977).

Seventy-three subjects rated a series of videotapes, developed by Borman (e.g., Borman, 1977), of performances with known true scores. A subject’s ratings on a particular dimension were correlated with the true scores for that dimension (across ratees) to provide the subject’s differential accuracy score for that dimension. These dimensional differential accuracy scores were then converted to z scores (using Fisher’s r-to-z conversion), and the mean of each subject’s dimensional z scores was calculated and used as his/her summary accuracy index.
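
A minimal sketch of this accuracy computation, assuming a per-subject ratings array and a matching true-score array indexed by ratee and dimension (the function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def summary_accuracy(ratings, true_scores):
    """Differential accuracy index for one subject.

    ratings and true_scores are (n_ratees, n_dimensions) arrays; the
    names and shapes are assumptions for illustration only.
    """
    zs = []
    for d in range(ratings.shape[1]):
        # Correlate the subject's ratings with the true scores across ratees
        r = np.corrcoef(ratings[:, d], true_scores[:, d])[0, 1]
        zs.append(np.arctanh(r))  # Fisher's r-to-z conversion
    # Mean of the dimensional z scores serves as the summary accuracy index
    return float(np.mean(zs))
```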

The policy-capturing segment of the study required subjects to provide an overall performance rating for each of 100 hypothetical performance profiles. Each subject’s overall ratings were then regressed on the hypothetical performance profiles, providing for each subject a regression equation reflecting his/her particular rating strategy.
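
A rough sketch of the policy-capturing regression described above, assuming an ordinary least-squares formulation; the function and variable names are hypothetical:

```python
import numpy as np

def capture_policy(profiles, overall_ratings):
    """Regress one subject's overall ratings on the profile dimension scores.

    profiles:        (n_profiles, n_dimensions) hypothetical dimension scores
    overall_ratings: (n_profiles,) the subject's overall ratings
    Returns the dimension weights (the captured strategy) and R-squared,
    often read as how consistently the strategy was applied.
    """
    X = np.column_stack([np.ones(len(profiles)), profiles])   # add intercept
    coefs, *_ = np.linalg.lstsq(X, overall_ratings, rcond=None)
    predicted = X @ coefs
    ss_res = np.sum((overall_ratings - predicted) ** 2)
    ss_tot = np.sum((overall_ratings - overall_ratings.mean()) ** 2)
    return coefs[1:], 1.0 - ss_res / ss_tot
```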

The variables from each subject’s regression equation used to reflect his/her rating strategy were then correlated with that subject’s accuracy scores. The results indicated that more accurate raters were no more consistent in applying their individual rating strategies than less accurate raters, nor did they use information from more performance dimensions. Also, there was no correlation between the accuracy with which a dimension was rated and the relative weight given that dimension when providing an overall rating. Given the lack of significant relationships between rating accuracy and measures of rating strategy, it was suggested that the effect of other rating process variables (e.g., observation and memory processes) on accuracy be examined.
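
A brief illustration of this final analysis step, assuming hypothetical per-subject strategy measures (consistency and number of dimensions used) to be correlated with the summary accuracy index; all argument names are placeholders:

```python
from scipy import stats

def relate_strategy_to_accuracy(accuracy, consistency, dims_used):
    """Correlate strategy measures with the summary accuracy index.

    accuracy:    per-subject summary accuracy indices
    consistency: per-subject R-squared from the policy-capturing equations
    dims_used:   per-subject count of dimensions given meaningful weight
    All arguments are placeholders for the study's actual measures.
    """
    return {
        "accuracy_vs_consistency": stats.pearsonr(accuracy, consistency),
        "accuracy_vs_dimensions_used": stats.pearsonr(accuracy, dims_used),
    }
```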

Disciplines

Applied Behavior Analysis | Clinical Psychology | Psychology
