Attention in the personality test faking literature has been directed toward research designs in which job applicants complete a personality test, fail to get the job, and subsequently retest. This article examines the extent to which inferences drawn from studies using this retest-after-failure design speak to the prevalence and magnitude of applicant faking. Results from several studies using the design are reviewed, revealing a wide range of findings. We simulate two aspects of the assessment context that can explain the discrepancies among previous results: the weight given to personality in the assessment battery and the selection ratio. The simulation systematically varies both factors to investigate their effects on personality retest scores. The results are useful for interpreting previous findings and for understanding the settings in which retest improvement occurs.
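
The logic of such a simulation can be sketched in a few lines. The sketch below is a minimal illustration, not the article's actual simulation: it assumes standard-normal personality and non-personality scores, a fixed test-retest reliability, top-down selection on a weighted composite, and no faking at all, to show that the mean retest gain among rejected applicants shifts with the personality weight and the selection ratio purely through selection and regression to the mean. All variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rel = 0.8  # assumed test-retest reliability of the personality measure

# Latent trait plus independent measurement error at time 1
true_p = rng.normal(size=n)
t1 = np.sqrt(rel) * true_p + np.sqrt(1 - rel) * rng.normal(size=n)
other = rng.normal(size=n)  # non-personality component of the battery

gains = {}
for w in (0.2, 0.8):        # weight on personality in the composite
    for sr in (0.1, 0.5):   # selection ratio (proportion hired)
        composite = w * t1 + (1 - w) * other
        rejected = composite < np.quantile(composite, 1 - sr)
        # Rejected applicants retest; no faking is modeled here
        t2 = (np.sqrt(rel) * true_p[rejected]
              + np.sqrt(1 - rel) * rng.normal(size=rejected.sum()))
        gains[(w, sr)] = t2.mean() - t1[rejected].mean()
        print(f"weight={w}, selection ratio={sr}: "
              f"mean retest gain={gains[(w, sr)]:.3f}")
```

Because selection screens out applicants whose time-1 errors were negative, rejected applicants improve on retest even with zero faking, and the size of that improvement grows as personality carries more weight in the composite.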