
Details

Author(s) / Contributors
Title
Increasing Entropy to Boost Policy Gradient Performance on Personalization Tasks
Is part of
  • 2023 IEEE International Conference on Data Mining Workshops (ICDMW), 2023, p.1551-1558
Place / Publisher
IEEE
Year of publication
2023
Source
IEEE/IET Electronic Library (IEL)
Descriptions/Notes
  • In this effort, we consider the impact of regularization on the diversity of actions taken by policies generated from reinforcement learning agents trained using a policy gradient. Policy gradient agents are prone to entropy collapse, which means certain actions are seldom, if ever, selected. We augment the optimization objective function for the policy with terms constructed from various φ-divergences and Maximum Mean Discrepancy, which encourage the current policy to follow a different state-visitation and/or action-choice distribution than previously computed policies. We provide numerical experiments using the MNIST, CIFAR10, and Spotify datasets. The results demonstrate the advantage of diversity-promoting policy regularization and show that its use in gradient-based approaches significantly improves performance on a variety of personalization tasks. Furthermore, numerical evidence shows that policy regularization increases performance without losing accuracy.
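  • A minimal sketch of the idea described in the abstract, assuming a PyTorch setting: a REINFORCE-style policy-gradient loss is augmented with a KL-divergence term (one example of a φ-divergence) that rewards the current policy for diverging from a previously computed policy, discouraging entropy collapse. The function name, the coefficient beta, and the choice of KL are illustrative assumptions and are not taken from the paper.

    # Sketch only; not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def regularized_pg_loss(logits, old_logits, actions, returns, beta=0.1):
        """Policy-gradient loss with a diversity-promoting divergence bonus.

        logits:     (batch, n_actions) current policy logits
        old_logits: (batch, n_actions) logits of a previously computed policy (frozen)
        actions:    (batch,) sampled action indices (long tensor)
        returns:    (batch,) observed returns for the sampled actions
        beta:       weight of the divergence term (assumed value)
        """
        log_probs = F.log_softmax(logits, dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        pg_term = -(chosen * returns).mean()  # standard REINFORCE objective

        # KL(current || previous), used here as one example of a phi-divergence;
        # subtracting it from the loss rewards moving away from the old policy.
        probs = log_probs.exp()
        old_log_probs = F.log_softmax(old_logits.detach(), dim=-1)
        kl = (probs * (log_probs - old_log_probs)).sum(dim=-1).mean()

        return pg_term - beta * kl

    Because the divergence term is subtracted, minimizing this loss pushes the new action distribution away from the previous one; the paper's actual regularizers also include Maximum Mean Discrepancy terms over state-visitation and action-choice distributions, which this sketch does not cover.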
Language
English
Identifiers
eISSN: 2375-9259
DOI: 10.1109/ICDMW60847.2023.00197
Titel-ID: cdi_ieee_primary_10411562
