
Details

Author(s) / Contributors
Title
PLSR: Unstructured Pruning with Layer-Wise Sparsity Ratio
Is part of
  • 2023 International Conference on Machine Learning and Applications (ICMLA), 2023, p.1-8
Place / Publisher
IEEE
Year of publication
2023
Source
IEEE/IET Electronic Library (IEL)
Descriptions/Notes
  • In the current era, as multi-modal and large models gradually reveal their potential, neural network pruning has emerged as a crucial means of model compression. It is widely recognized that models tend to be over-parameterized; pruning removes unimportant weights, improving inference speed while preserving accuracy. From early methods such as gradient-based and magnitude-based pruning to modern algorithms like iterative magnitude pruning, the lottery ticket hypothesis, and pruning at initialization (PaI), researchers have strived to increase the compression ratio of model parameters while maintaining high accuracy. Currently, mainstream algorithms focus on global pruning of neural networks using various scoring functions, followed by different pruning strategies to enhance the accuracy of the sparse model. Recent studies have shown that random pruning with varying layer-wise sparsity ratios achieves robust results for large models and out-of-distribution data. Based on this discovery, we propose a new score, FeatIO, based on module input and output feature-map sizes. As a score function used in PaI, FeatIO surpasses the performance of other PaI score functions. Additionally, we propose a novel pruning strategy, Pruning with Layer-wise Sparsity Ratio (PLSR), which combines layer-wise sparsity ratios with a magnitude-based score function, resulting in optimal evaluation performance. Almost all algorithms exhibit improved performance when using our novel pruning strategy. The combination of PLSR and FeatIO consistently outperforms other algorithms in testing, demonstrating the significant potential of our proposed approach. Our code will be available here.
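The exact FeatIO score and PLSR strategy are not specified in this record, but the abstract's core idea, unstructured magnitude pruning applied per layer with an individual sparsity ratio rather than one global threshold, can be illustrated with a minimal NumPy sketch. All function names here are hypothetical and do not come from the paper:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the
    smallest absolute magnitude (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only larger magnitudes
    return weights * mask

def prune_with_layerwise_ratios(layers, ratios):
    """Prune each layer with its own sparsity ratio, instead of
    ranking all weights globally under a single budget."""
    return [magnitude_prune(w, r) for w, r in zip(layers, ratios)]

# Example: two layers pruned at 50% and ~33% sparsity respectively.
layers = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.arange(1.0, 7.0)]
pruned = prune_with_layerwise_ratios(layers, [0.5, 1 / 3])
```

In this sketch the per-layer ratios are supplied directly; in the paper they would come from a score function such as the proposed FeatIO, which (per the abstract) derives them from each module's input and output feature-map sizes.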
Language
English
Identifiers
eISSN: 1946-0759
DOI: 10.1109/ICMLA58977.2023.00009
Title ID: cdi_ieee_primary_10459856
