
Details

Author(s) / Contributors
Title
Addressing Challenges for Reliable Machine Learning Model Updates
Place / Publisher
ProQuest Dissertations & Theses
Year of Publication
2024
Source
ProQuest Dissertations & Theses A&I
Descriptions/Notes
  • The dynamic nature of data and user expectations requires machine learning models to be updated throughout deployment for maximal performance. Updating models presents a set of challenges that must be handled carefully to ensure that performance improves. The non-convex nature of neural network training makes it challenging to update a model without introducing new errors relative to the original model, regardless of whether accuracy increases overall. Flipping previously correct predictions to incorrect ones is known as predictive churn and decreases user trust. I expand the understanding of how churn happens by explaining it through the lens of incompatible parameter updates. I use this analysis to motivate the need for a churn reduction method that does not suffer from a stability-plasticity tradeoff. Finally, I propose a solution for reducing predictive churn called accumulated model combination (AMC) that achieves state-of-the-art churn reduction performance.

    In some cases, the quality of newly gathered data may be lower than that of the initial training data. This leads to model updates decreasing performance, i.e., deterioration. To understand how deterioration occurs, I use model-dependent noise (MDN), in which samples of a particular difficulty are corrupted. I show that, depending on the sample difficulty targeted (easy, intermediate, hard), deterioration occurs mainly through two mechanisms: forgetting or learning prevention. Given that hard samples are more likely to be corrupted than easy samples, I demonstrate the potential societal implications, as difficulty-based subgroups can coincide with demographic subgroups. I also provide various insights that explain why continual learning is particularly susceptible to deterioration.

    Lastly, I introduce the feedback loop problem, which occurs in settings where data is confounded by model predictions. I investigate the role that clinician trust plays in limiting feedback loops, and also show the feasibility of detecting this phenomenon. I then present a variety of design choices that need to be adjusted to account for feedback loops in order to limit their effect. As part of a first-of-its-kind study on machine learning deployment in healthcare, I investigate to what extent clinician impressions are influenced by an early warning system, and how these predictions subsequently change behaviour and outcomes.
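    The predictive churn described in the abstract has a simple operational reading: the fraction of examples that the previous model classified correctly but the updated model gets wrong. The following minimal sketch (not the dissertation's own code, and unrelated to the AMC method itself; the function name and toy data are illustrative assumptions) shows how that quantity can be measured for two sets of predictions, and how churn can be nonzero even when overall accuracy improves.

        import numpy as np

        def predictive_churn(y_true, old_preds, new_preds):
            # Fraction of samples the old model predicted correctly
            # but the updated model now predicts incorrectly.
            y_true = np.asarray(y_true)
            old_correct = np.asarray(old_preds) == y_true
            new_wrong = np.asarray(new_preds) != y_true
            return float(np.mean(old_correct & new_wrong))

        # Toy labels and predictions (illustrative only): the update raises
        # overall accuracy from 3/5 to 4/5, yet still flips the previously
        # correct prediction at index 2, giving a churn of 0.2.
        y_true    = [0, 1, 1, 0, 1]
        old_preds = [0, 1, 1, 1, 0]
        new_preds = [0, 1, 0, 0, 1]
        print(predictive_churn(y_true, old_preds, new_preds))  # 0.2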
Language
English
Identifiers
ISBN: 9798382192109
Title ID: cdi_proquest_journals_3034140357
