Inter-observer agreement for quality measures applied to online health information.

Smitha Sagaram, Muhammad Walji, Funda Meric-Bernstam, Craig Johnson, Elmer Bernstam

Research output: Contribution to journal › Article › peer-review

Abstract

Many quality criteria have been developed to rate the quality of online health information. However, few instruments have been validated for inter-observer reliability. Therefore, we assessed the degree to which two raters agreed on the presence or absence of information for 22 popularly cited quality criteria on a sample of 21 complementary and alternative medicine websites. Our preliminary analysis showed poor inter-rater agreement on 10 of the 22 quality criteria. In response, we created operational definitions for each criterion, reduced the number of allowed response choices, and defined a specific location in which to look for the information. As a result, 15 of the 22 quality criteria achieved a kappa > 0.6. We conclude that, even with precise definitions, some commonly used criteria for assessing the quality of online health information cannot be reliably assessed. However, inter-rater agreement can be improved by providing precise operational definitions.
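The abstract reports agreement as kappa but does not reproduce the computation or the raw ratings. As a minimal illustrative sketch (presumably Cohen's kappa, given two raters; the ratings below are entirely hypothetical and not from the paper), here is how kappa could be computed for two raters scoring the presence (1) or absence (0) of one quality criterion across 21 websites:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same rating.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal rating frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical presence/absence ratings for one criterion on 21 sites.
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0]
rater_2 = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0]

print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```

Under the common rule of thumb the paper appears to apply, a kappa above 0.6 indicates substantial agreement, while values near zero indicate agreement no better than chance.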

Original language: English (US)
Pages (from-to): 1308-1312
Number of pages: 5
Journal: Medinfo
Volume: 11
Issue number: Pt 2
State: Published - 2004
