
Details

Author(s) / Contributors
Title
Lost in PubMed. Factors influencing the success of medical information retrieval
Is part of
  • Expert systems with applications, 2013-08, Vol.40 (10), p.4106-4114
Place / Publisher
Amsterdam: Elsevier Ltd
Year of publication
2013
Source
Access via ScienceDirect (Elsevier)
Descriptions / Notes
  • Highlights:
    • Categorization of errors in queries submitted during an IR experiment in PubMed.
    • Identification of the factors that have a direct impact on query quality.
    • Analysis of the characteristics of the best and worst performers.
    • Language skills play an important role for non-native English searchers.
    • MeSH terms compensate for limited language skills in non-native speakers of English.
  • Abstract: With the explosion of information available on the Web, finding specific medical information efficiently has become a considerable challenge. PubMed/MEDLINE offers an alternative to free-text searching on the web, allowing searchers to perform keyword-based searches using Medical Subject Headings. However, finding relevant information within a limited time frame remains a difficult task. The current study is based on an error analysis of data from a retrieval experiment conducted at the nursing departments of two Belgian universities and a British university. We identified the main difficulties in query formulation and relevance judgment and compared the profiles of the best and worst performers in the test. For the analysis, a query collection was built from the queries submitted by our test participants. The queries in this collection are all aimed at finding the same specific information in PubMed, which allowed us to identify what exactly went wrong in the query formulation step. Another crucial aspect of efficient information retrieval is relevance judgment. Differences between the potential and actual recall of each query indicated the extent to which participants overlooked relevant citations. The test participants were divided into "worst", "average" and "best" performers based on the number of relevant citations they selected: zero, one or two, and three or more, respectively. We then examined how these three groups differed in background and in search behavior.
