2023 IEEE 41st International Conference on Computer Design (ICCD), 2023, p.499-506

Details

Author(s) / Contributors
Title
GIM: Versatile GNN Acceleration with Reconfigurable Processing-in-Memory
Is part of
  • 2023 IEEE 41st International Conference on Computer Design (ICCD), 2023, p.499-506
Place / Publisher
IEEE
Year of publication
2023
Source
IEEE Xplore
Descriptions / Notes
  • The recent boom in deep learning has revolutionized many machine learning tasks, including graph neural networks (GNNs), which are specifically designed for non-Euclidean graph data. GNNs have been widely adopted in numerous real-world applications, such as recommendation systems. However, as graph size and complexity grow, GNN performance on conventional computers is severely hindered by the memory bottleneck. This challenge has attracted wide investigation, and the processing-in-memory (PIM) architecture has emerged as one of the most promising solutions. Prior works have leveraged ReRAM crossbars as analog dot-product engines to accelerate the vector-matrix multiplications in GNNs, achieving prominent performance improvements over modern CPUs and GPUs. Nevertheless, analog computing is known to be vulnerable to device variation, which hampers GNN inference accuracy. Moreover, the mixed-signal peripherals (e.g., ADCs) are expensive in hardware and specialized for dense computation, which makes analog crossbar-based PIM a poor fit for GNN inference, whose computation is highly sparse. In this work, we propose GIM, a novel digital-PIM architecture for GNN acceleration. Its compact yet efficient digital computing paradigm greatly boosts computing parallelism at minimal cost. GIM integrates dedicated optimizations for both operand sparsity and bit sparsity to eliminate sparse computations and thus significantly boost performance. Meanwhile, at the software level, we implement data-layout optimizations that minimize inter-memory communication and maximize computing parallelism. Our design delivers prominent performance improvements over modern CPUs, GPUs, and state-of-the-art PIM-based accelerators. Compared to a modern CPU and GPU, GIM achieves average speedups of 24485× and 778×, with energy reductions of 78480× and 8906×. Compared to the state-of-the-art PIM-based GNN accelerators ReFlip and PIMGCN, GIM achieves average throughput gains of 9.0× and 73.4× with efficiency improvements of 15.2× and 95.6×.
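  • The abstract's notion of exploiting operand and bit sparsity can be illustrated with a minimal sketch. This is not GIM's actual hardware datapath (the record gives no implementation details); it is a hypothetical software model of two generic ideas the abstract names: skipping zero operands entirely, and, within a nonzero operand, performing bit-serial shift-and-add work only for the set bits.

```python
def bit_serial_dot(weights, activations):
    """Integer dot product that models digital-PIM-style sparsity skipping.

    Hypothetical illustration, not GIM's real datapath:
      - operand sparsity: a zero weight or activation contributes nothing,
        so the pair is skipped outright;
      - bit sparsity: a nonzero activation is consumed bit-serially, and a
        shifted partial product is accumulated only for its set bits.
    Assumes nonnegative integer activations.
    """
    acc = 0
    for w, a in zip(weights, activations):
        if w == 0 or a == 0:       # operand sparsity: skip the whole pair
            continue
        bits, shift = a, 0
        while bits:                # bit sparsity: only set bits do work
            if bits & 1:
                acc += w << shift  # shift-and-add partial product
            bits >>= 1
            shift += 1
    return acc

# Example: 3*4 + 0*5 + 2*1 = 14; the zero weight and the zero bits of
# 4 (binary 100) trigger no accumulation steps.
print(bit_serial_dot([3, 0, 2], [4, 5, 1]))
```

The denser the zeros in the operands and their binary representations, the more accumulation steps such a scheme elides, which is the intuition behind the sparsity-driven speedups the abstract reports.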
Language
English
Identifiers
eISSN: 2576-6996
DOI: 10.1109/ICCD58817.2023.00083
Title ID: cdi_ieee_primary_10360941
