It is generally acknowledged that alternatives such as none of the above and all of the above should be used sparingly in multiple-choice (MC) items. However, the effect that all of the above has on the reliability and validity of an MC item is unclear. This study compared a single-response (SR(a)) item format, in which all of the above was the correct response, with a multiple-response (MR) item format, which required examinees to select every correct alternative to receive credit. A crossover design was used to compare the effect of format on student performance while item content, scoring method, and student ability level were held constant. Results indicated that the SR(a) format substantially inflated examinee scores: examinees who recognized two or more alternatives as correct were cued to select all of the above. In addition, the SR(a) format significantly reduced the reliability and concurrent validity of examinee scores. In summary, the MR format was found to be superior. On the basis of this new empirical evidence, the study recommends that whenever an educator wishes to assess student understanding of an issue with multiple facets, the SR(a) format should be avoided and the MR format used instead.