
Details

Author(s) / Contributors
Title
Explaining HCV prediction using LIME model
Is part of
  • 2021 2nd International Conference on Secure Cyber Computing and Communications (ICSCCC), 2021, p.227-231
Place / Publisher
IEEE
Year of publication
2021
Source
IEEE Xplore
Descriptions / Notes
  • Machine learning is a state-of-the-art technique for turning complex data into information with high accuracy. Machine learning algorithms act as black boxes because of the high complexity and non-transparency of each data-processing layer, which makes it difficult to rationalize their outputs. Predictive models are typically judged by overly simplified accuracy measures, which alone cannot determine the quality of learning, because they reveal nothing about how an outcome was reached. The purpose of this paper is threefold. The first objective is to discuss the need for interpretability in machine learning. The second objective is to study different interpretation models. The final objective is to apply the model-interpretation framework LIME (Local Interpretable Model-Agnostic Explanations) to explain the predicted results of the HCV test.
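
  As an illustrative sketch of the workflow the abstract describes, the snippet below trains a simple classifier and asks LIME for a local explanation of one prediction. The model, feature names, and synthetic data are assumptions for demonstration only, not details taken from the paper.

  # Hypothetical sketch: explaining a single HCV prediction with LIME.
  # Model, feature names, and data are illustrative assumptions, not the paper's setup.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from lime.lime_tabular import LimeTabularExplainer

  # Synthetic stand-in for routine blood-test features and an HCV label.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(300, 5))
  y = (X[:, 1] + X[:, 3] > 0).astype(int)
  feature_names = ["ALB", "ALT", "AST", "BIL", "GGT"]  # assumed feature names

  model = RandomForestClassifier(random_state=0).fit(X[:250], y[:250])

  # LIME perturbs the instance, fits a simple local surrogate model, and
  # reports the features that drove this one prediction.
  explainer = LimeTabularExplainer(
      X[:250],
      feature_names=feature_names,
      class_names=["HCV negative", "HCV positive"],
      mode="classification",
  )
  explanation = explainer.explain_instance(X[260], model.predict_proba, num_features=5)
  print(explanation.as_list())  # feature/weight pairs for this instance

  The printed list gives, per feature condition, the weight it contributed to this particular prediction, which is the kind of per-case explanation the paper discusses for HCV test results.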
Language
English
Identifiers
DOI: 10.1109/ICSCCC51823.2021.9478092
Title ID: cdi_ieee_primary_9478092
