Assessing the Reporting of Reliability in Published Content Analyses: 1985-2010

Jennette Lovejoy, Brendan R. Watson, Stephen Lacy, Daniel Riffe

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

Content analysis is a common research method employed in communication studies. An important part of content analysis is establishing the reliability of the coding protocol, and reporting must be detailed enough to allow for replication of methodological procedures. This study employed a content analysis of published content analysis articles (N=581) in three communication journals over a 26-year period to examine changes in reliability sampling procedures and in the reporting of reliability coefficients across time. Findings indicate general improvements in the detail of reliability reporting, in the practice of reporting reliability coefficients that take chance agreement into account, and in the reporting of reliability coefficients for more than one variable. However, the explanation of the reliability sampling process and the use of a probability or census reliability sample did not change over time. In recent years, the preponderance of articles did not explain the reliability sampling method or report a reliability coefficient for all key study variables, and few utilized a census or probability sampling frame. Implications are discussed and recommendations are made for the reporting of reliability in content analysis.
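The abstract refers to "reliability coefficients that take chance into consideration." The article itself does not specify a coefficient here, but Cohen's kappa is one widely used chance-corrected statistic for two coders; as a minimal illustrative sketch (not the authors' method), it can be computed from two coders' nominal codes as follows:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders' nominal codes.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each coder's
    marginal category distribution.
    """
    if len(coder1) != len(coder2) or not coder1:
        raise ValueError("inputs must be non-empty and of equal length")
    n = len(coder1)
    # Observed agreement: share of units both coders coded identically.
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected agreement from the product of marginal proportions.
    c1, c2 = Counter(coder1), Counter(coder2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical coding decisions for eight content units.
k = cohens_kappa([1, 1, 0, 1, 0, 1, 1, 0],
                 [1, 1, 0, 0, 0, 1, 1, 1])
```

Here observed agreement is 0.75, but kappa falls to roughly 0.47 once chance agreement is discounted, which is why percent agreement alone overstates reliability.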

Original language: English (US)
Pages (from-to): 207-221
Number of pages: 15
Journal: Communication Methods and Measures
Volume: 8
Issue number: 3
State: Published - Jul 2014
