A comparison of impact factor, clinical query filters, and pattern recognition query filters in terms of sensitivity to topic.

Lawrence D. Fu, Lily Wang, Yindalon Aphinyanaphongs, Constantin F. Aliferis

Research output: Contribution to journal › Article › peer-review

Abstract

Evaluating journal quality and finding high-quality articles in the biomedical literature are challenging information retrieval tasks. The most widely used method for journal evaluation is the impact factor, while PubMed's clinical query filters and machine learning-based filter models are novel approaches for finding articles. The related literature has focused on the average behavior of these methods across all topics. The present study evaluates how these approaches vary from topic to topic. We find that the impact factor and the clinical query filters are unstable across topics, while a topic-specific impact factor and machine learning-based filter models appear more robust. Thus, when using the less stable methods for a specific topic, researchers should be aware that performance may diverge from the expected average. Where applicable, the more stable methods should be preferred.
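For context, the standard two-year impact factor that the paper takes as its baseline is, per the usual Journal Citation Reports convention (general background, not quoted from this abstract):

    IF_y = C_y / (N_{y-1} + N_{y-2})

where C_y is the number of citations received in year y by items the journal published in years y-1 and y-2, and N_{y-1} and N_{y-2} are the counts of citable items published in those two years. A topic-specific impact factor plausibly restricts both the citation and item counts to articles on the topic of interest, though the authors' exact formulation is not given in this abstract.

A machine-learning-based filter model of the kind compared here can be sketched as a text classifier trained to score articles for relevance and quality. The sketch below is illustrative only, assuming scikit-learn and a toy labeled corpus; the paper's actual models, features, and training data are not specified in this abstract.

    # Minimal sketch of an ML-based query filter (illustrative; not the authors' code).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical training data: abstracts labeled 1 (high-quality, on-topic) or 0.
    abstracts = [
        "randomized controlled trial of drug X versus placebo in adults",
        "narrative review of historical treatment practices",
    ]
    labels = [1, 0]

    # TF-IDF features feeding a linear SVM, a common choice for MEDLINE text.
    filter_model = make_pipeline(TfidfVectorizer(), LinearSVC())
    filter_model.fit(abstracts, labels)

    # Higher decision scores indicate articles more likely to pass the filter.
    scores = filter_model.decision_function(["double-blind trial of therapy Y"])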

Original language: English (US)
Pages (from-to): 716-720
Number of pages: 5
Journal: Medinfo. MEDINFO
Volume: 12
Issue number: Pt 1
State: Published - Dec 1 2007
