
Details

Author(s) / Contributors
Title
A robust estimator of mutual information for deep learning interpretability
Part of
  • Machine learning: science and technology, 2023-06, Vol.4 (2), p.25006
Place / Publisher
Bristol: IOP Publishing
Year of publication
2023
Link to full text
Source
EZB Free E-Journals
Descriptions/Notes
  • Abstract: We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning (DL) models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced ‘Jimmie’), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient, robust to the choice of hyperparameters and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established MI estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train DL models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables, and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available in this GitHub repository.
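The plug-in idea the abstract describes (fit a Gaussian mixture to the joint samples, then average the log-density ratio log p(x,y) − log p(x) − log p(y)) can be sketched in a few lines. This is an illustrative toy using scikit-learn's `GaussianMixture`, not the authors' GMM-MI implementation; the component count, sample size, and helper name `marginal_logpdf` are assumptions made for the sketch.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data: bivariate Gaussian with correlation rho,
# whose ground-truth MI is -0.5 * log(1 - rho^2) nats.
rho = 0.8
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)

# Fit a Gaussian mixture to the joint samples (3 components is an
# arbitrary choice for this sketch; GMM-MI selects it more carefully).
gmm = GaussianMixture(n_components=3, random_state=0).fit(xy)
log_pxy = gmm.score_samples(xy)  # log joint density at each sample

def marginal_logpdf(values, dim):
    """Log-density of one coordinate: a GMM marginal is itself a
    1-D Gaussian mixture with the corresponding means/variances."""
    means = gmm.means_[:, dim]
    stds = np.sqrt(gmm.covariances_[:, dim, dim])
    comp = norm.logpdf(values[:, None], loc=means, scale=stds) + np.log(gmm.weights_)
    return np.logaddexp.reduce(comp, axis=1)

log_px = marginal_logpdf(xy[:, 0], 0)
log_py = marginal_logpdf(xy[:, 1], 1)

# Monte Carlo average of the pointwise log-density ratio.
mi_estimate = np.mean(log_pxy - log_px - log_py)
true_mi = -0.5 * np.log(1.0 - rho**2)
print(f"estimated MI: {mi_estimate:.3f} nats, ground truth: {true_mi:.3f} nats")
```

Note that the actual GMM-MI algorithm additionally bootstraps over fits to report the uncertainty on the estimate due to finite sample size, which this sketch omits.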
Language
English
Identifiers
ISSN: 2632-2153
eISSN: 2632-2153
DOI: 10.1088/2632-2153/acc444
Titel-ID: cdi_doaj_primary_oai_doaj_org_article_8563df984bc14bcc9d888b2478eaf82f
