Use of a Self-Study Tool to Develop and Recognize Sites of Excellence in Clinical Education
When learners perform below expectations during full-time clinical experiences, the reasons for poor performance often relate to four domains that are important for the provision of optimal, patient-centered care: professionalism, communication, clinical reasoning and/or adaptability1. While many physical therapy learners are successful during their terminal, full-time clinical experiences, when they perform poorly or are unsuccessful, the cost is high and far-reaching, affecting the learner, the clinical instructor, the DPT program, and potentially the patient and the profession. Currently, physical therapy programs are neither formally nor consistently assessing learners in these four domains in a longitudinal fashion prior to terminal, full-time clinical experiences; as a result, learners who need additional support are not identified early and supported toward improved performance and success, which would decrease the need for remediation and its myriad related costs. An assessment tool that produces valid data is needed to identify, early2 in physical therapy education programs, learners who need educational support in professionalism, communication, clinical reasoning and adaptability. The purpose of this project is to demonstrate evidence of validity, using Messick's model of validity3,4, for data collected with the LEARN PT Rubric.
The LEARN PT Rubric is a simple assessment tool that learners, faculty and clinicians can use to efficiently and longitudinally assess learner progress in these four domains. The tool has been used by learners, faculty and clinicians associated with DPT education at the University of Kansas Medical Center since Fall 2018. The components of Messick's model of validity we are using to demonstrate evidence of validity are "response process", "internal structure" and "relations to other variables".4 For "response process", revised instructions for using the LEARN PT Rubric are being created, based on 1.5 years of pilot use, so that all users of the rubric take a consistent approach to the tool. Only if the instructions achieve their intended purpose can they contribute to evidence of validity; this will be assessed by listening to clinicians describe how they use the LEARN PT Rubric to assess learners. To assess validity evidence for "internal structure", we are examining whether the data obtained over the past 1.5 years demonstrate consistency; for example, do learner scores on the rubric increase across time as one would expect? To assess "relations to other variables", we are examining how learner scores on the LEARN PT Rubric relate to the same learners' scores on other tools, such as the Professional Behaviors Self-Assessment5 at the same timepoints and the Clinical Performance Instrument6 at later timepoints. We plan to assess "response process" in Summer 2020, have collected data to assess "internal structure" (see results), and are currently analyzing the LEARN PT Rubric scores' "relations to other variables".
Response Process: Five clinical preceptors across varied patient-care settings have been identified to assess validity evidence related to the LEARN PT Rubric instructions this summer. Internal Structure: The data from 1.5 years of pilot use demonstrate evidence of "internal structure". Across four DPT cohorts (n=233), average learner scores in each of the four domains increase at every subsequent timepoint (tp) across the curriculum (7 total timepoints). For example, average learner scores (maximum = 125) for the professionalism domain are 60.5 ± 23.4 (tp 1, DPT program entry), 71.6 ± 22.1 (tp 2, semester 2), 77.3 ± 18.2 (tp 3, semester 3), 91.6 ± 16.5 (tp 4, semester 4/5), 96.2 ± 13.9 (tp 5, semester 6), 98.0 ± 10.8 (tp 6, semester 7, didactic exit) and 115 ± 9.5 (tp 7, semester 9, DPT program exit). In addition, at every timepoint except timepoint 3, average learner scores are highest for the professionalism domain, followed by communication, then adaptability, then clinical reasoning. Average learner scores range from 60.5 ± 23.4 (tp 1) to 115 ± 9.5 (tp 7) for professionalism, 56.8 ± 22.7 (tp 1) to 111.6 ± 10.1 (tp 7) for communication, 25.9 ± 22.8 (tp 1) to 101.3 ± 12.0 (tp 7) for clinical reasoning, and 48.8 ± 24.0 (tp 1) to 107.4 ± 13.4 (tp 7) for adaptability. The greatest increase in average scores from timepoint 1 to 7 is 75.5 points, for the clinical reasoning domain; the smallest is 54.5 points, for professionalism. Similar results have been observed for clinical preceptors' average scores of these same learners. An additional semester of related data from three cohorts will be available for analysis by the end of Spring 2020. Relations to Other Variables: The relationship between LEARN PT Rubric data, concurrent Professional Behaviors Self-Assessment data, and subsequent Clinical Performance Instrument data from terminal full-time clinical experiences is currently being analyzed.
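The "internal structure" check described above (do average scores rise at every subsequent timepoint?) can be sketched in a few lines of analysis code. This is an illustrative sketch only, not the study's actual analysis pipeline; it uses the professionalism-domain means reported above, and the function name is our own.

```python
# Illustrative sketch: verify that mean LEARN PT Rubric scores rise at
# every subsequent timepoint, using the professionalism-domain means
# reported in the results (tp 1 through tp 7).
professionalism_means = [60.5, 71.6, 77.3, 91.6, 96.2, 98.0, 115.0]

def is_strictly_increasing(means):
    """Return True if every timepoint's mean exceeds the previous one."""
    return all(later > earlier for earlier, later in zip(means, means[1:]))

# Net change from program entry (tp 1) to program exit (tp 7).
increase_tp1_to_tp7 = professionalism_means[-1] - professionalism_means[0]

print(is_strictly_increasing(professionalism_means))  # True
print(increase_tp1_to_tp7)  # 54.5
```

The same check would be run per domain and per cohort; a series that fails it (a mean that dips at any timepoint) would count as evidence against "internal structure" for that domain.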
Conclusions/Relevance to the conference theme:
Evidence of validity related to "internal structure" for data obtained with the LEARN PT Rubric has been demonstrated, while additional evidence of validity related to "response process" and "relations to other variables" is under investigation and will be available by the end of Summer 2020. It is important to establish whether the learner performance data obtained with the LEARN PT Rubric are valid before using the tool to strategically identify learners who need support well before full-time clinical experiences, thereby decreasing the need for costly remediation and ultimately supporting greater success. The LEARN PT Rubric may also be useful for elevating all learners' performance, fostering excellence alongside success.