Maryland EXCELS validation study: January 1, 2014 - June 30, 2016
The purpose of this report is to examine the validity of the Maryland EXCELS Standards and the resulting quality ratings. The analysis is guided by five evaluation questions, with an overall validation approach aligned to the four key elements described by Zellman and Fiene (2012) and further elaborated by Tout and Starr (2013). Through the validation approach described in the following sections, the report investigates these questions:

1. What are the characteristics of programs participating in Maryland EXCELS?

2. Is there variability between programs on each quality indicator of Maryland EXCELS? Do component measures that claim five scales actually have five scales? Does combining Levels 4 and 5 produce better distributions or more meaningful distinctions among programs?

3. How are the Maryland EXCELS quality indicators related to each other? Do measures of similar concepts relate more closely to each other than to other measures?

4. What relationship exists between reported Maryland EXCELS quality indicators and observed classroom interactions, as assessed by the Classroom Assessment Scoring System (CLASS(TM); Pianta, La Paro, & Hamre, 2008)? Do providers who receive more checks in Maryland EXCELS also receive higher scores on the CLASS(TM)? What relationship exists between Maryland EXCELS ratings, CLASS(TM) scores, and ERS scores?

5. How do Maryland EXCELS quality ratings change over time for programs? (author abstract)