Abstract
Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites.

Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and the prevalence- and bias-adjusted kappa (PABAK).

Results: Our uncalibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. We therefore created operational definitions for each criterion, reduced the number of assessment choices, and defined where to look for the information. As a result, 18 of the 22 quality criteria were reliably assessed (inter-rater agreement ≥ 0.6).

Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions.
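The three agreement statistics named in the abstract are straightforward to compute for two raters. Below is a minimal sketch of each formula; the rating data are invented for illustration and do not come from the study:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of items on which the two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b, categories):
    """Cohen's kappa: chance-corrected agreement, (Po - Pe) / (1 - Pe)."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected chance agreement Pe from each rater's marginal frequencies.
    pe = sum((ca[c] / n) * (cb[c] / n) for c in categories)
    return (po - pe) / (1 - pe)  # undefined when Pe == 1

def pabak(a, b, k):
    """Prevalence- and bias-adjusted kappa for k rating categories:
    (k * Po - 1) / (k - 1); for binary ratings this reduces to 2*Po - 1."""
    po = percent_agreement(a, b)
    return (k * po - 1) / (k - 1)

# Invented example: two raters scoring 10 sites on one binary criterion.
rater1 = ["y", "y", "n", "y", "n", "y", "y", "y", "n", "y"]
rater2 = ["y", "n", "n", "y", "n", "y", "y", "y", "y", "y"]
print(percent_agreement(rater1, rater2))             # 0.8
print(round(cohens_kappa(rater1, rater2, "yn"), 3))  # 0.524
print(round(pabak(rater1, rater2, 2), 3))            # 0.6
```

This illustrates why the paper reports PABAK alongside kappa: when most sites share the same rating (high prevalence of one category), Pe inflates and kappa drops even at high raw agreement, whereas PABAK depends only on the observed agreement.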
Original language | English (US) |
---|---|
Pages (from-to) | 675-683 |
Number of pages | 9 |
Journal | International Journal of Medical Informatics |
Volume | 74 |
Issue number | 7-8 |
DOIs | |
State | Published - Aug 2005 |
Bibliographical note
Funding Information: This work was supported in part by a training fellowship from the Keck Center for Computational and Structural Biology of the Gulf Coast Consortia (NLM Grant No. 5T15LM07093) (M.W.) and a grant from the Robert Wood Johnson Foundation Health-e-Technologies Initiative (E.V.B. and F.M.-B).
Copyright
Copyright 2008 Elsevier B.V. All rights reserved.
Keywords
- MeSH: Internet
- Medical informatics
- Patient education