Face-to-face interviews are often completed as part of the admissions process in order to assess both cognitive and non-cognitive variables thought to be associated with success in physical therapy education and practice. These face-to-face interviews are resource-intensive and often require applicants to physically travel for the interview, which may create an unnecessary personal or financial burden for some applicants. With the recent development of hybrid doctor of physical therapy (DPT) programs that include students from all over the country, online recorded (asynchronous) interviews are now being performed at several programs as part of the admissions process. These asynchronous interviews can be viewed at different times by different faculty, which creates the opportunity to assess the reliability of the evaluations between assessors. The primary purpose of this study was to assess the inter-rater reliability of an asynchronous interview during the application to a hybrid DPT program.
Asynchronous interviews from 40 randomly selected applicants to an entry-level DPT program were assessed. Each applicant was shown a recording of a faculty member asking 5 standardized questions. After each question, the applicant was given 20 seconds to consider their response; their camera was then activated and their response was recorded, limited to 2 minutes per question. One of 13 faculty members initially reviewed the applicant's interview using a standardized rubric. Following the initial reviews, the same applicants were reviewed by a second faculty member who was blinded to the previous rubric scores. Inter-rater reliability was analyzed with IBM SPSS Statistics (Version 26.0 for Mac; IBM Corp, Armonk, NY). The reliability of the numerical score of each section was assessed using intra-class correlation coefficients (ICC). The reliability of each question, as well as the composite interview score, was assessed using Cohen's kappa coefficient.
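The study computed these statistics in SPSS; as an illustration only, the two agreement measures named above can be sketched in Python. The function names, the choice of ICC form (ICC(2,1): two-way random effects, absolute agreement, single rater), and the example data below are assumptions for demonstration, not the study's actual code or results.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical scores (e.g., per-question rubric ratings)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)  # proportion of exact agreements
    # Expected agreement by chance, from each rater's marginal frequencies
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array of numerical scores."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # Partition total sum of squares into subject, rater, and error components
    ss_total = ((Y - grand) ** 2).sum()
    ss_subjects = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_raters
    ms_subjects = ss_subjects / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Hypothetical example: composite scores from two raters for 5 applicants
rater1 = [4, 3, 5, 2, 4]
rater2 = [4, 3, 4, 2, 5]
print(cohen_kappa(rater1, rater2))
print(icc_2_1(np.column_stack([rater1, rater2])))
```

Both functions return 1.0 when the two raters agree perfectly and values near 0 when agreement is no better than chance, which is the interpretation scale typically applied to kappa and ICC estimates.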
The inter-rater reliability analysis is currently in progress and will be completed by the time of the presentation. Results of this pilot study will yield quantitative inter-rater reliability and qualitative feasibility findings to inform future revisions and uses of the rubric, as well as of asynchronous interviews in general.
Conclusions/Relevance to the conference theme:
The results are pending complete analysis but show promise for an interview platform that can consistently measure applicant performance while minimizing the burden on prospective students. This should in turn foster equal opportunity for each applicant and thus encourage diversity in our program cohorts.