A Survey of Sparse-Learning Methods for Deep Neural Networks
Is part of
2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), 2018, p.647-650
Place / Publisher
IEEE
Year of publication
2018
Source
IEEE Electronic Library Online
Descriptions/Notes
Deep neural networks (DNNs) have drawn considerable attention in recent years as a result of their remarkable performance in many visual and speech recognition tasks. As the scale of the tasks to be solved grows, the networks used also become wider and deeper, requiring millions or even billions of parameters. Deep and wide networks with large numbers of parameters bring many problems, including memory requirements, computational cost and overfitting, which severely hinder the application of DNNs in practice. A natural idea, therefore, is to train sparse networks with fewer parameters and floating-point operations while maintaining comparable performance. Over the past few years, a large body of research has been devoted to this area. In this paper, we survey sparsity-promoting techniques for DNNs proposed in recent years. These approaches are roughly divided into three categories: pruning, randomly reducing the complexity, and optimizing with a sparse regularizer. Pruning techniques are introduced first, and the others are described in the following sections. For each category of methods, we present the approaches in that category along with their strengths and drawbacks. Finally, we discuss the relationship between these three categories of methods.
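As a rough illustration of the first and third categories named in the abstract (pruning and sparse regularization), the following minimal Python sketch zeroes out the smallest-magnitude weights of a layer and computes an L1 penalty that could be added to a training loss. The weight matrix, sparsity level and function names are illustrative assumptions, not the specific methods surveyed in the paper.

    import numpy as np

    # Hypothetical weight matrix of a single DNN layer (size is illustrative).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 128))

    def magnitude_prune(weights, sparsity=0.9):
        """Zero out the smallest-magnitude entries so that roughly `sparsity`
        (fraction) of the weights become zero -- the basic idea behind
        magnitude-based pruning."""
        k = int(sparsity * weights.size)
        if k == 0:
            return weights.copy()
        # k-th smallest absolute value serves as the pruning threshold.
        threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
        mask = np.abs(weights) > threshold
        return weights * mask

    def l1_penalty(weights, lam=1e-4):
        """Sparse regularizer: an L1 term added to the training loss
        encourages many weights to shrink toward exactly zero."""
        return lam * np.abs(weights).sum()

    W_sparse = magnitude_prune(W, sparsity=0.9)
    print("non-zero fraction:", np.count_nonzero(W_sparse) / W_sparse.size)
    print("L1 penalty:", l1_penalty(W_sparse))

In practice, pruning is usually interleaved with retraining so the remaining weights can compensate for the removed ones; the sketch above only shows the one-shot thresholding step.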