Details

Author(s) / Contributors
Title
Transformed ℓ1 regularization for learning sparse deep neural networks
Is part of
  • Neural networks, 2019-11, Vol.119, p.286-298
Place / Publisher
Elsevier Ltd
Year of publication
2019
Source
Access via ScienceDirect (Elsevier)
Descriptions / Notes
  • Deep Neural Networks (DNNs) have achieved extraordinary success in numerous areas. However, DNNs often carry a large number of weight parameters, which incurs heavy memory and computation costs. Overfitting is a further challenge when training data are insufficient. These challenges severely hinder the application of DNNs on resource-constrained platforms. In fact, many network weights are redundant and can be removed without much loss of performance. In this paper, we introduce a new non-convex integrated transformed ℓ1 regularizer that promotes sparsity in DNNs by removing redundant connections and unnecessary neurons simultaneously. Specifically, we apply the transformed ℓ1 regularizer to the matrix space of network weights to remove redundant connections, and we integrate a group-sparsity term to remove unnecessary neurons. An efficient stochastic proximal gradient algorithm is presented to solve the new model. To the best of our knowledge, this is the first work to develop a non-convex regularizer in a sparse-optimization-based method that promotes connection-level and neuron-level sparsity for DNNs simultaneously. Experiments on public datasets demonstrate the effectiveness of the proposed method.
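For illustration: a common definition of the transformed ℓ1 (TL1) penalty is ρ_a(t) = (a + 1)|t| / (a + |t|) for a parameter a > 0; it approaches the ℓ0 penalty as a → 0 and the ℓ1 norm as a → ∞. The sketch below is a minimal PyTorch rendering of the two penalty terms the abstract describes, assuming a row-per-neuron weight layout for the group term; the hyperparameters (lam_tl1, lam_group, a) and the architecture are hypothetical, and plain SGD on the penalized loss stands in for the paper's stochastic proximal gradient algorithm.

```python
import torch
import torch.nn as nn

def tl1_penalty(w: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    # Transformed l1: rho_a(t) = (a + 1)|t| / (a + |t|), summed over all
    # entries. Non-convex; interpolates between l0 (a -> 0) and l1 (a -> inf).
    absw = w.abs()
    return ((a + 1.0) * absw / (a + absw)).sum()

def group_penalty(w: torch.Tensor) -> torch.Tensor:
    # Row-wise l2 norms, summed. Each row of an nn.Linear weight matrix
    # feeds one output neuron, so driving a row to zero removes that neuron.
    return w.norm(p=2, dim=1).sum()

# Hypothetical hyperparameters and architecture, for illustration only.
lam_tl1, lam_group, a = 1e-4, 1e-4, 1.0
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
criterion = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# One penalized training step on a random minibatch.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
reg = sum(lam_tl1 * tl1_penalty(m.weight, a) + lam_group * group_penalty(m.weight)
          for m in model.modules() if isinstance(m, nn.Linear))
loss = criterion(model(x), y) + reg
opt.zero_grad()
loss.backward()
opt.step()
```

Note that the paper's stochastic proximal gradient algorithm handles the non-smooth penalties through their proximal maps rather than by backpropagating through them as above; the sketch glosses over the non-differentiability of |t| at zero.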
Language
English
Identifiers
ISSN: 0893-6080
eISSN: 1879-2782
DOI: 10.1016/j.neunet.2019.08.015
Title ID: cdi_proquest_miscellaneous_2288014889
