A comparison of impact factor, clinical query filters, and pattern recognition query filters in terms of sensitivity to topic

Lawrence D. Fu, Lily Wang, Yindalon Aphinyanaphongs, Constantin F. Aliferis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Evaluating journal quality and finding high-quality articles in the biomedical literature are challenging information retrieval tasks. The most widely used method for journal evaluation is the impact factor, while newer approaches for finding articles include PubMed's clinical query filters and machine learning-based filter models. The related literature has focused on the average behavior of these methods across all topics; the present study evaluates their variability from topic to topic. We find that the impact factor and the clinical query filters are unstable across topics, whereas a topic-specific impact factor and machine learning-based filter models appear more robust. Thus, when applying the less stable methods to a specific topic, researchers should be aware that performance may diverge from the expected average. Better yet, the more stable methods should be preferred whenever applicable.
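For reference, the standard two-year impact factor divides the citations a journal receives in year Y (to items published in years Y-1 and Y-2) by the number of citable items it published in those two years. The sketch below illustrates that calculation together with a topic-restricted variant in the spirit of the topic-specific impact factor discussed in the abstract; the function names and the topic-filtering scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-year journal impact factor and a
# topic-restricted variant. Function names and the topic-filtering
# step are illustrative assumptions, not the authors' method.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Standard two-year impact factor for year Y:
    citations in Y to items from Y-1 and Y-2, divided by the
    number of citable items published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        return 0.0
    return citations_to_prev_two_years / citable_items_prev_two_years


def topic_specific_impact_factor(articles: list, topic: str) -> float:
    """Hypothetical topic-restricted variant: restrict both the
    citation counts and the citable-item counts to articles tagged
    with the topic of interest."""
    on_topic = [a for a in articles if topic in a.get("topics", [])]
    if not on_topic:
        return 0.0
    citations = sum(a.get("citations_in_year", 0) for a in on_topic)
    return citations / len(on_topic)


# Example: 120 citations to 60 citable items gives an impact factor of 2.0.
print(impact_factor(120, 60))
```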

Original language: English (US)
Title of host publication: MEDINFO 2007 - Proceedings of the 12th World Congress on Health (Medical) Informatics
Subtitle of host publication: Building Sustainable Health Systems
Pages: 716-720
Number of pages: 5
Volume: 129
State: Published - Dec 1 2007
Event: 12th World Congress on Medical Informatics, MEDINFO 2007 - Brisbane, QLD, Australia
Duration: Aug 20 2007 - Aug 24 2007

