
Details

Author(s) / Contributors
Title
Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features
Is part of
  • Journal of Machine Learning Research, 2022-01
Year of publication
2022
Source
ACM Digital Library
Descriptions/Notes
  • Shapley values are today extensively used as a model-agnostic explanation framework to explain complex predictive machine learning models. Shapley values have desirable theoretical properties and a sound mathematical foundation in the field of cooperative game theory. Precise Shapley value estimates for dependent data rely on accurate modeling of the dependencies between all feature combinations. In this paper, we use a variational autoencoder with arbitrary conditioning (VAEAC) to model all feature dependencies simultaneously. We demonstrate through comprehensive simulation studies that our VAEAC approach to Shapley value estimation outperforms the state-of-the-art methods for a wide range of settings for both continuous and mixed dependent features. For high-dimensional settings, our VAEAC approach with a non-uniform masking scheme significantly outperforms competing methods. Finally, we apply our VAEAC approach to estimate Shapley value explanations for the Abalone data set from the UCI Machine Learning Repository.
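The abstract's core point is that accurate Shapley values for dependent features require sampling the unobserved features conditionally on the observed ones, which is what the VAEAC model provides. As a minimal illustration of the estimation loop itself, the sketch below computes Monte Carlo Shapley values for one instance, but substitutes a simple independent background sampler for the conditional step; this simplification is only valid for independent features, which is precisely the limitation the paper's VAEAC approach addresses. The model, data, and all function names here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Toy predictive model: a fixed linear function of three features.
    return X @ np.array([1.0, -2.0, 0.5])

def impute_unobserved(x, coalition, background, rng):
    # Replace features outside the coalition with values from a random
    # background row. This assumes feature independence; the paper's
    # VAEAC would instead sample them conditionally on x[coalition].
    z = background[rng.integers(len(background))].copy()
    z[coalition] = x[coalition]
    return z

def shapley_values(x, background, n_perm=2000, rng=rng):
    # Permutation-based Monte Carlo estimate of Shapley values:
    # average each feature's marginal contribution over random orderings.
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        perm = rng.permutation(d)
        coalition = np.zeros(d, dtype=bool)
        v_prev = model(impute_unobserved(x, coalition, background, rng))
        for j in perm:
            coalition[j] = True
            v_new = model(impute_unobserved(x, coalition, background, rng))
            phi[j] += v_new - v_prev
            v_prev = v_new
    return phi / n_perm

background = rng.normal(size=(500, 3))  # synthetic background data
x = np.array([1.0, 1.0, 1.0])
phi = shapley_values(x, background)
# Efficiency property: phi sums (approximately, up to Monte Carlo noise)
# to f(x) minus the average prediction over the background data.
print(phi, phi.sum())
```

The efficiency check at the end is what makes the contributions interpretable as an additive decomposition of the prediction; replacing `impute_unobserved` with a conditional generative sampler is the paper's contribution.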
Language
English
Identifiers
ISSN: 1532-4435
eISSN: 1533-7928
Titel-ID: cdi_cristin_nora_10852_97792
Format

Further reading
