Comparison of predictive validity and diagnostic accuracy of screening measures of reading skills

David A. Klingbeil, Jennifer McComas, Matthew K. Burns, Lori A. Helman

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Assessment data must be valid for the purpose for which educators use them. Establishing evidence of validity is an ongoing process that must be shared by test developers and test users. This study examined the predictive validity and the diagnostic accuracy of universal screening measures in reading. Scores on three different universal screening tools were compared for nearly 500 second- and third-grade students attending four public schools in a large urban district. Hierarchical regression and receiver operating characteristic curves were used to examine the criterion-related validity and diagnostic accuracy of students' oral reading fluency (ORF), Fountas and Pinnell Benchmark Assessment System (BAS) scores, and fall scores from the Measures of Academic Progress for reading (MAP). Results indicated that a combination of all three measures accounted for 65% of the variance in spring MAP scores, whereas a reduced model of ORF and MAP scores predicted 60%. ORF and BAS scores did not meet standards for diagnostic accuracy. Combining the measures improved diagnostic accuracy, depending on how criterion scores were calculated. Implications for practice and future research are discussed.
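The diagnostic-accuracy analysis summarized above rests on the sensitivity and specificity of a screening cut score against a criterion outcome. As a minimal sketch of that calculation (using entirely hypothetical ORF scores, risk labels, and a made-up cut score of 70, not the study's data):

```python
# Hypothetical illustration of screening diagnostic accuracy.
# All scores, labels, and the cut score are invented for demonstration.

def diagnostic_accuracy(screen_scores, at_risk, cut):
    """Flag students whose screening score falls below `cut`,
    then compare flags against criterion at-risk labels."""
    tp = sum(1 for s, r in zip(screen_scores, at_risk) if s < cut and r)
    fn = sum(1 for s, r in zip(screen_scores, at_risk) if s >= cut and r)
    fp = sum(1 for s, r in zip(screen_scores, at_risk) if s < cut and not r)
    tn = sum(1 for s, r in zip(screen_scores, at_risk) if s >= cut and not r)
    sensitivity = tp / (tp + fn)  # proportion of at-risk students flagged
    specificity = tn / (tn + fp)  # proportion of not-at-risk students cleared
    return sensitivity, specificity

# Hypothetical fall ORF scores (words correct per minute) and whether each
# student was at risk on the spring criterion measure.
orf = [35, 42, 55, 60, 72, 80, 90, 95, 110, 120]
risk = [True, True, True, False, True, False, False, False, False, False]

sens, spec = diagnostic_accuracy(orf, risk, cut=70)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.75, specificity=0.83
```

Sweeping the cut score across its range and plotting sensitivity against 1 − specificity yields the receiver operating characteristic curve the study used to compare measures.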

Original language: English (US)
Pages (from-to): 500-514
Number of pages: 15
Journal: Psychology in the Schools
Volume: 52
Issue number: 5
State: Published - May 1 2015

