Purpose/Hypothesis: Despite the proliferation of integrated clinical experiences (ICEs) within physical therapist education programs, no performance rating tools have been validated for part-time clinical experiences. As such, academic programs have either created their own assessment tools or utilized the APTA's Clinical Performance Instrument (CPI), which has been validated for full-time clinical experiences only. For more than 18 years, university clinical instructors (CIs) have been calibrated to, and have used, the New England Consortium Performance Rating Tool (NEC) for ICEs. While detailed in its design, the NEC is more cumbersome than the CPI. This preliminary analysis examined whether existing NEC grading criteria could inform passing criteria on the CPI for use in ICEs.

Number of Subjects: 218 student assessments: 34 from the year-one spring semester, 133 from the year-two fall semester, and 51 from the year-two spring semester.

Materials/Methods: CIs assessed students using both the NEC and the CPI. Consensus among the Directors of Clinical Education matched NEC performance categories with CPI criteria. One-way ANOVA was used to identify significant differences in CPI ratings between students deemed successful or unsuccessful in the ICE based upon established NEC ratings. CPI passing criteria were set at the mean CPI score of students successful in the ICE (as determined by NEC ratings) minus one standard deviation. Sensitivity (Sn) and specificity (Sp) were calculated for each passing criterion.

Results: During the year-one spring ICE, significant differences in CPI scores between successful and unsuccessful students were identified in 15 of the 18 CPI categories. During the year-two fall and spring ICEs, significant differences were identified in all CPI categories.
Statistical analysis placed passing criteria between advanced beginner and intermediate for year-one spring (7; Sn = 0.84, Sp = 1) and between intermediate and advanced intermediate for year-two fall (10; Sn = 0.824, Sp = 1) and year-two spring (12; Sn = 0.918, Sp = 1). No false positives were identified.

Conclusions: The high sensitivity and specificity of the established CPI passing criteria for this University's ICEs suggest that the CPI may be an appropriate substitute for the currently used NEC. Analysis is ongoing to determine whether individual passing criteria should be established for each CPI category rather than a single rating applied across all categories.

Clinical Relevance: Adopting the CPI as a consistent tool throughout all phases of clinical education (part-time and full-time) may allow tracking of student progression and deeper student understanding of a single rating scale. Based on these statistical findings, the CPI may be a more efficient and streamlined tool than the NEC, capable of providing an equally accurate indicator of student performance.
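For readers wishing to replicate the passing-criterion calculation described in Materials/Methods, the following is a minimal Python sketch. The scores below are entirely hypothetical (not the study data); the cutoff is the mean CPI score of NEC-successful students minus one standard deviation, and Sn/Sp are computed against the NEC-determined outcome:

```python
import statistics

def passing_cutoff(successful_scores):
    # Cutoff = mean CPI score of NEC-successful students minus one SD
    return statistics.mean(successful_scores) - statistics.stdev(successful_scores)

def sensitivity_specificity(successful, unsuccessful, cutoff):
    # Sn: proportion of NEC-successful students at or above the CPI cutoff
    tp = sum(score >= cutoff for score in successful)
    fn = len(successful) - tp
    # Sp: proportion of NEC-unsuccessful students below the CPI cutoff
    tn = sum(score < cutoff for score in unsuccessful)
    fp = len(unsuccessful) - tn
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical CPI ratings for illustration only
successful = [12, 13, 11, 14, 12, 13]
unsuccessful = [9, 10, 8]

cutoff = passing_cutoff(successful)
sn, sp = sensitivity_specificity(successful, unsuccessful, cutoff)
print(f"cutoff = {cutoff:.2f}, Sn = {sn:.3f}, Sp = {sp:.3f}")
```

With these made-up scores the cutoff falls near 11.5, yielding Sp = 1 (no false positives), mirroring the pattern reported in the Results.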