Modeling the predictive validity of SAT mathematics items using item characteristics

Jennifer L. Kobrin, Young Koung Kim, Paul R. Sackett

Research output: Contribution to journal › Article › peer-review

Abstract

There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the relationship between SAT Mathematics (SAT-M) item characteristics and the item's ability to predict college outcomes. Using multiple regression, SAT-M item characteristics (content area, format, cognitive complexity, and abstract/concrete classification) were used to predict three outcome measures: the correlation of item score with first-year college grade point average, the correlation of item score with mathematics course grades, and the percentage of students who answered the item correctly and chose to major in a mathematics or science field. Separate models were run including and excluding item difficulty and discrimination as covariates. The results revealed that many of the item characteristics were related to the outcome measures and that item difficulty and discrimination had a mediating effect on several of the predictor variables, particularly on the effects of nonroutine/insightful items and multiple-choice items.

Original language: English (US)
Pages (from-to): 99-119
Number of pages: 21
Journal: Educational and Psychological Measurement
Volume: 72
Issue number: 1
State: Published - Feb 2012

Keywords

  • college outcomes
  • mathematics items
  • test validity
