The authors provide a methodology for capturing policies for rating test fairness. Guided by the test fairness models that appeared in the psychological literature of the 1970s, test fairness evaluations are modeled as a function of a test's validity coefficient, the mean test score difference between the majority and minority groups, and the test score adjustment for the minority group. Fifty-seven participants with graduate education in human resource management rated the fairness of 20 stimuli. Preference modeling was used to estimate each participant's fairness policy. Results showed that the variation in fairness ratings attributable to differences in rating policies among participants (40.4%) was almost equal to the variation explained by attribute main effects, suggesting that test fairness is a contentious issue even among a homogeneous set of participants. The authors discuss the implications this type of research may have for public policy and for the use of group-based test score adjustments.
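The policy-capturing step described above can be sketched as a per-participant regression of fairness ratings on the three stimulus attributes. The sketch below uses entirely synthetic data (attribute values, rater weights, and noise levels are invented for illustration); the paper's actual estimation procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 20 stimuli, each defined by three attributes
# (validity coefficient, mean score difference, score adjustment), rated by 57 participants.
n_stimuli, n_raters = 20, 57
X = rng.uniform(0.0, 1.0, size=(n_stimuli, 3))
X1 = np.column_stack([np.ones(n_stimuli), X])  # add intercept column

# Synthetic ratings: each rater applies an unknown linear policy plus noise.
true_w = rng.normal(0.0, 1.0, size=(n_raters, 4))
ratings = X1 @ true_w.T + rng.normal(0.0, 0.5, size=(n_stimuli, n_raters))

# Policy capturing: estimate each rater's policy weights by ordinary least squares.
policies, *_ = np.linalg.lstsq(X1, ratings, rcond=None)  # shape: (4, n_raters)

# Each column of `policies` is one participant's estimated fairness policy;
# the spread of these columns across raters reflects between-participant
# policy differences of the kind the abstract quantifies.
print(policies.shape)
```

Fitting one regression per participant, rather than a single pooled model, is what allows the between-participant component of the rating variance to be separated from the attribute main effects.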