
Details

Author(s) / Contributors
Title
Stochastic gradient descent with finite samples sizes
Is part of
  • 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), 2016, p.1-6
Place / Publisher
IEEE
Year of publication
2016
Source
IEEE Electronic Library Online
Descriptions / Notes
  • The minimization of empirical risks over finite sample sizes is an important problem in large-scale machine learning. A variety of algorithms has been proposed in the literature to alleviate the computational burden per iteration at the expense of convergence speed and accuracy. Many of these approaches can be interpreted as stochastic gradient descent algorithms, where data is sampled from particular empirical distributions. In this work, we leverage this interpretation and draw from recent results in the field of online adaptation to derive new tight performance expressions for empirical implementations of stochastic gradient descent, mini-batch gradient descent, and importance sampling. The expressions are exact to first order in the step-size parameter and are tighter than existing bounds. We further quantify the performance gained from employing mini-batch solutions, and propose an optimal importance sampling algorithm to optimize performance.
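  • Illustrative sketch: the record contains no code, but the abstract names three sampling schemes for minimizing an empirical risk over a finite data set (uniform stochastic gradient descent, mini-batch gradient descent, and importance sampling). The following minimal Python sketch is not from the paper; the function name, the least-squares risk, and the norm-proportional sampling rule are assumptions chosen only to illustrate how the three schemes differ in which index distribution they sample from and how importance sampling reweights the gradient to stay unbiased.

```python
import numpy as np

def empirical_sgd(X, y, step_size=0.01, batch_size=1, probs=None,
                  num_iters=2000, seed=0):
    """Minimize the empirical least-squares risk (1/N) * sum_n (y_n - x_n^T w)^2
    by sampling indices from the finite data set at every iteration.

    probs=None          -> uniform sampling (plain SGD; batch_size>1 gives mini-batch SGD)
    probs=p of length N -> importance sampling; each gradient is reweighted by
                           1 / (N * p_n) so the update remains unbiased.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(num_iters):
        idx = rng.choice(N, size=batch_size, p=probs)
        # per-sample gradient of the squared error: -2 * (y_n - x_n^T w) * x_n
        residuals = y[idx] - X[idx] @ w
        grads = -2.0 * residuals[:, None] * X[idx]
        if probs is not None:
            # importance-sampling correction keeps the gradient estimate unbiased
            grads = grads / (N * probs[idx])[:, None]
        w -= step_size * grads.mean(axis=0)
    return w

# toy comparison (hypothetical data): uniform mini-batch SGD vs. a
# norm-proportional importance-sampling rule
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.1 * rng.standard_normal(500)

w_uniform = empirical_sgd(X, y, batch_size=8)
p = np.linalg.norm(X, axis=1) ** 2
p /= p.sum()
w_importance = empirical_sgd(X, y, probs=p)
print(np.linalg.norm(w_uniform - w_true), np.linalg.norm(w_importance - w_true))
```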
Language
English
Identifiers
DOI: 10.1109/MLSP.2016.7738878
Title ID: cdi_ieee_primary_7738878
