Measurement and design heterogeneity in perceived message effectiveness studies: A call for research

Seth M. Noar, Joshua Barker, Marco Yzer

Research output: Contribution to journal › Review article › peer-review


Abstract

Ratings of perceived message effectiveness (PME) are commonly used during message testing and selection, operating under the assumption that messages scoring higher on PME are more likely to affect actual message effectiveness (AME)—for instance, intentions and behaviors. Such a practice has clear utility, particularly when selecting from a large pool of messages. Recently, O’Keefe (2018) argued against the validity of PME as a basis for message selection. He conducted a meta-analysis of mean ratings of PME and AME, testing how often two messages that differ on PME similarly differ on AME, as tested in separate samples. Comparing 151 message pairs derived from 35 studies, he found that use of PME would only result in choosing a more effective message 58% of the time, which is little better than chance. On that basis, O’Keefe concluded that “message designers might dispense with questions about expected or perceived persuasiveness (PME), and instead pretest messages for actual effectiveness” (p. 135). We do not believe that the meta-analysis supports this conclusion, given the measurement and design issues in the set of studies O’Keefe analyzed.

Original language: English (US)
Pages (from-to): 990-993
Number of pages: 4
Journal: Journal of Communication
Volume: 68
Issue number: 5
DOIs
State: Published - Oct 1 2018

Bibliographical note

Funding Information:
We thank Dan O’Keefe for sharing documents related to his meta-analysis. We thank the University of North Carolina–Wake Forest Tobacco Center of Regulatory Science for a stimulating discussion that informed the commentary, and Joseph Cappella for feedback on an earlier draft of the manuscript. Grant number R03DA041869 from the National Institute on Drug Abuse and the Food and Drug Administration’s Center for Tobacco Products supported Seth Noar’s time spent writing this paper. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Food and Drug Administration.

Keywords

  • Message
  • Meta-analysis
  • Perceived effectiveness
