Purpose/Hypothesis: Little is known about how raters assess applicants to physical therapy (PT) programs. The purpose of this study was to investigate how faculty members and clinicians rate the cognitive and non-cognitive attributes of PT school applicants. It was hypothesized that faculty members would demonstrate greater rater severity than clinicians, that raters would apply the rating scale differently, and that the two criteria, academic preparation and additional factors, would be rated with equal severity.

Number of Subjects: 12 raters.

Materials/Methods: I used Facets software with a three-facet Rasch model (applicants, raters, criteria) to analyze the ratings that the 12 raters assigned to 186 PT applicants.

Results: Rater severity measures ranged from -1.47 to 1.31 logits. Rater fair-measure averages indicated that the most lenient rater tended to give ratings 1.26 raw-score points higher than the most severe rater. The rater separation index of 5.21 indicates that the 12 raters spanned more than five statistically distinct levels of severity, and the reliability of rater separation of .93 shows that raters were well differentiated in severity. Faculty members were the most lenient raters. Partial-credit analysis of individual raters showed great variation in use of the rating scale: some raters showed a central-tendency effect, and some used the 6-point scale for the additional factors criterion as a 3-point scale. Analysis of the rater fit statistics showed that some raters were inconsistent in their use of the rating scale, especially on the additional factors criterion.

Conclusions: In this sample, faculty members as a group were the most lenient raters. Raters showed great variation and inconsistency in their individual use of the rating scale, which contributed to unexpected ratings.
The two admissions criteria, academic preparation and additional factors, did not differ significantly in difficulty.

Clinical Relevance: The results presented here support the use of many-facet Rasch measurement (MFRM) techniques to analyze rating data from PT admissions committee members, providing detail on use of the rating scale, rater severity, and criterion difficulty. This information is of great importance given the high-stakes nature of rating PT applications. Previous research has focused on classical test theory approaches to estimate the inter-rater reliability of the ratings that admissions committee members assign when assessing applicants. The results presented here support using the Rasch model to guide PT admissions committees in constructing rating scale tools and analyzing rating data so as to provide the fairest ratings to PT school applicants.
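As a brief illustrative sketch (not part of the reported analysis), the separation statistics in the Results are mutually consistent under the standard Rasch relations: the separation ratio G can be recovered from the reliability of separation R as G = sqrt(R / (1 - R)), and the strata index H = (4G + 1) / 3 estimates the number of statistically distinct severity levels. Plugging in the reported reliability of .93 yields a strata value close to the reported 5.21; the variable names below are ours, not the study's.

```python
import math

# Reported reliability of rater separation (from the abstract)
R = 0.93

# Separation ratio: ratio of "true" spread to measurement error
G = math.sqrt(R / (1 - R))   # approximately 3.64

# Strata index: number of statistically distinct severity levels
H = (4 * G + 1) / 3          # approximately 5.19, close to the reported 5.21

print(f"separation ratio G = {G:.2f}, strata H = {H:.2f}")
```

The small gap between 5.19 and the reported 5.21 is consistent with rounding the reliability to two decimal places.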