Data-Driven Nonsmooth Optimization

Details

Author(s) / Contributors
Title
Data-Driven Nonsmooth Optimization
Is part of
  • SIAM journal on optimization, 2020-01, Vol.30 (1), p.102-131
Year of publication
2020
Source
Alma/SFX Local Collection
Descriptions/Notes
  • In this work, we consider methods for solving large-scale optimization problems with a possibly nonsmooth objective function. The key idea is to first parametrize a class of optimization methods using a generic iterative scheme involving only linear operations and applications of proximal operators. This scheme contains some modern primal-dual first-order algorithms like the Douglas-Rachford and hybrid gradient methods as special cases. Moreover, we show weak convergence of the iterates to an optimal point for a new method which also belongs to this class. Next, we interpret the generic scheme as a neural network and use unsupervised training to learn the best set of parameters for a specific class of objective functions while imposing a fixed number of iterations. In contrast to other approaches of "learning to optimize," we present an approach which learns parameters only in the set of convergent schemes. Finally, we illustrate the approach on optimization problems arising in tomographic reconstruction and image deconvolution, and train optimization algorithms for optimal performance given a fixed number of iterations.
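
The generic scheme the abstract describes builds iterations from linear operations and proximal operators, run for a fixed number of steps with tunable parameters. As a minimal illustrative sketch (not the paper's method), the following unrolls a proximal gradient scheme for a one-dimensional nonsmooth problem; the function names and the idea of treating the step size as the "learnable" parameter are assumptions for illustration only.

```python
# Minimal sketch (not the paper's algorithm): a fixed-iteration
# proximal gradient scheme built from the ingredients the abstract
# names, i.e. linear operations plus a proximal operator.
# All names (soft_threshold, prox_grad) are illustrative assumptions.

def soft_threshold(v, tau):
    """Proximal operator of tau * |.| (soft-thresholding)."""
    if v > tau:
        return v - tau
    if v < -tau:
        return v + tau
    return 0.0

def prox_grad(a, b, lam, step, n_iters):
    """Run n_iters proximal gradient steps for
    min_x 0.5*(a*x - b)**2 + lam*|x|.

    `step` stands in for a tunable parameter: in the setting the
    abstract describes, such parameters are trained (while staying
    inside a convergent class of schemes) for best performance at a
    fixed iteration budget.
    """
    x = 0.0
    for _ in range(n_iters):
        grad = a * (a * x - b)                 # gradient of the smooth part
        x = soft_threshold(x - step * grad,    # linear (gradient) step ...
                           step * lam)         # ... followed by the prox
    return x

# For a = 1, b = 2, lam = 0.5 the exact minimizer is b - lam = 1.5.
print(prox_grad(1.0, 2.0, 0.5, 1.0, 50))  # -> 1.5
```

A fixed iteration count, as in the example above, is what makes the scheme interpretable as a (recurrent) neural network whose layers share the same parametrized update.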
Language
English
Identifiers
ISSN: 1052-6234
eISSN: 1095-7189
DOI: 10.1137/18M1207685
Title ID: cdi_swepub_primary_oai_DiVA_org_kth_278765
