
Details

Author(s) / Contributors
Title
Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays
Is part of
  • IEEE transactions on circuits and systems. I, Regular papers, 2019-08, Vol.66 (8), p.3064-3076
Place / Publisher
New York: IEEE
Year of publication
2019
Source
IEEE Xplore
Descriptions / Notes
  • Deep neural networks are a biologically inspired class of algorithms that have recently demonstrated state-of-the-art accuracy in large-scale classification and recognition tasks. Hardware acceleration of deep networks is of paramount importance to ensure their ubiquitous presence in future computing platforms. Indeed, a major landmark enabling efficient hardware accelerators for deep networks is recent work from the machine learning community demonstrating the viability of aggressively scaled deep binary networks. In this paper, we demonstrate how deep binary networks can be accelerated in modified von Neumann machines by enabling binary convolutions within the static random access memory (SRAM) arrays. In general, binary convolutions consist of bit-wise exclusive-NOR (XNOR) operations followed by a population count (popcount). We present two proposals: one based on a charge-sharing approach to perform vector XNOR and approximate popcount, and another based on bit-wise XNOR followed by a digital bit-tree adder for accurate popcount. We highlight the various tradeoffs in terms of circuit complexity, speed-up, and classification accuracy for both approaches. A few key techniques presented in the manuscript are the use of a low-precision, low-overhead analog-to-digital converter (ADC) to achieve a fairly accurate popcount for the charge-sharing scheme, and the sectioning of the SRAM array by adding switches onto the read-bitlines, thereby achieving improved parallelism. Our results on the benchmark image classification datasets CIFAR-10 and SVHN, on a binarized neural network architecture, show energy improvements of up to 6.1× and 2.3× for the two proposals, compared to conventional SRAM banks. In terms of latency, improvements of up to 15.8× and 8.1× were achieved for the two respective proposals.
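The XNOR-plus-popcount formulation of binary convolution mentioned in the abstract can be illustrated in software. A minimal sketch (the function name and bit-packing convention are illustrative, not from the paper), assuming {-1, +1} values are encoded as bits {0, 1}, so the dot product of two N-element binary vectors equals 2·popcount(XNOR(a, b)) − N:

```python
def xnor_popcount_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1,+1} vectors packed as integer bit masks.

    Encoding assumption (illustrative): bit 1 represents +1, bit 0 represents -1.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # bit-wise XNOR, truncated to n bits
    matches = bin(xnor).count("1")     # popcount: positions where values agree
    return 2 * matches - n             # map match count back to +/-1 arithmetic

# Example: a = (+1, -1, +1, -1), b = (+1, +1, -1, -1)
# Elementwise products: +1, -1, -1, +1  ->  dot product 0
print(xnor_popcount_dot(0b1010, 0b1100, 4))  # -> 0
```

In the paper's hardware proposals, the XNOR happens on the SRAM bitlines and the popcount is realized either analogly via charge sharing with a coarse ADC, or digitally via a bit-tree adder; the arithmetic identity above is the same in both cases.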
