Details

Author(s) / Contributors
Title
Validity evidence supporting clinical skills assessment by artificial intelligence compared with trained clinician raters
Is part of
  • Medical Education, 2024-01, Vol. 58 (1), p. 105-117
Place / Publisher
England: Wiley Subscription Services, Inc
Year of publication
2024
Link to full text
Source
Wiley Blackwell Single Titles
Descriptions/Notes
  • Background: Artificial intelligence (AI) is increasingly used in medical education, but our understanding of the validity of AI-based assessments (AIBA) compared with traditional clinical expert-based assessments (EBA) is limited. In this study, the authors aimed to compare and contrast the validity evidence for the assessment of a complex clinical skill based on scores generated by an AI and by trained clinical experts, respectively.
  • Methods: The study was conducted between September 2020 and October 2022. The authors used Kane's validity framework to prioritise and organise their evidence according to the four inferences: scoring, generalisation, extrapolation and implications. The context of the study was chorionic villus sampling performed in a simulated setting. AIBA and EBA were used to evaluate the performances of experts, intermediates and novices based on video recordings. The clinical experts used a scoring instrument developed in a previous international consensus study. The AI used convolutional neural networks to capture features from the video recordings, together with motion tracking and eye movements, to arrive at a final composite score.
  • Results: A total of 45 individuals participated in the study (22 novices, 12 intermediates and 11 experts). The authors demonstrated validity evidence for scoring, generalisation, extrapolation and implications for both EBA and AIBA. The plausibility of assumptions related to scoring, evidence of reproducibility and the relation to different training levels were examined. Issues relating to construct underrepresentation, lack of explainability and threats to robustness were identified as potential weak links in the AIBA validity argument compared with the EBA validity argument.
  • Conclusion: There were weak links in the use of AIBA compared with EBA, mainly in the representation of the underlying construct, but also regarding explainability and the ability to transfer to other datasets. However, combining AI and clinical expert-based assessments may offer complementary benefits, which is a promising subject for future research.
  • Is artificial intelligence (AI) ready to replace clinicians in clinical skills assessment? In this paper, Johnsson et al. use Kane's framework to compare validity evidence between the two.
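  • Note: The abstract only outlines the AI pipeline (convolutional features from video, fused with motion-tracking and eye-movement signals into one composite score). The following minimal Python sketch illustrates one way such a multi-stream scorer could be wired up; it is not the authors' model, and the backbone, feature dimensions and fusion head are illustrative assumptions.

    # Minimal sketch (assumed architecture, not the paper's implementation):
    # fuse CNN features from video frames with motion-tracking and
    # eye-movement features into a single composite performance score.
    import torch
    import torch.nn as nn

    class CompositeSkillScorer(nn.Module):
        def __init__(self, motion_dim: int = 12, gaze_dim: int = 8):
            super().__init__()
            # Frame-level CNN encoder (stand-in for an unspecified backbone).
            self.frame_encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Small MLPs for the motion-tracking and eye-movement streams.
            self.motion_encoder = nn.Sequential(nn.Linear(motion_dim, 32), nn.ReLU())
            self.gaze_encoder = nn.Sequential(nn.Linear(gaze_dim, 32), nn.ReLU())
            # Fusion head producing one composite score per recording.
            self.head = nn.Sequential(nn.Linear(96, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, frames, motion, gaze):
            # frames: (batch, n_frames, 3, H, W); motion/gaze: (batch, dim)
            b, t = frames.shape[:2]
            # Encode each frame, then average over time for a video-level feature.
            f = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1).mean(dim=1)
            fused = torch.cat([f, self.motion_encoder(motion), self.gaze_encoder(gaze)], dim=1)
            return self.head(fused).squeeze(-1)  # composite score per recording

    if __name__ == "__main__":
        model = CompositeSkillScorer()
        scores = model(torch.randn(2, 4, 3, 64, 64), torch.randn(2, 12), torch.randn(2, 8))
        print(scores.shape)  # torch.Size([2])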
Language
English
Identifiers
ISSN: 0308-0110
eISSN: 1365-2923
DOI: 10.1111/medu.15190
Title ID: cdi_swepub_primary_oai_prod_swepub_kib_ki_se_237615058
