Details

Author(s) / Contributors
Title
Accelerating Hyperdimensional Computing on FPGAs by Exploiting Computational Reuse
Is part of
  • IEEE transactions on computers, 2020-08, Vol.69 (8), p.1159-1171
Place / Publisher
IEEE
Year of publication
2020
Link to full text
Source
IEEE Xplore Digital Library
Descriptions/Notes
  • Brain-inspired hyperdimensional (HD) computing emulates cognition by computing with very long vectors. HD computing consists of two main modules: an encoder and an associative search. The encoder module maps inputs into high-dimensional vectors, called hypervectors. The associative search finds the closest match between the trained model (a set of class hypervectors) and a query hypervector by calculating a similarity metric. To perform the reasoning task for practical classification problems, HD needs to store a non-binary model and use costly similarity metrics such as cosine. In this article we propose an FPGA-based acceleration of HD exploiting Computational Reuse (HD-Core), which significantly improves the computation efficiency of both the encoding and associative search modules. HD-Core enables computation reuse in both the encoding and associative search modules. We observed that consecutive inputs have high similarity, which can be used to reduce the complexity of the encoding step: the previously encoded hypervector is reused to eliminate the redundant operations in encoding the current input. HD-Core additionally eliminates the majority of multiplication operations by clustering the class hypervector values and sharing the values among all the class hypervectors.
Our evaluations on several classification problems show that HD-Core can provide a 4.4× energy efficiency improvement and a 4.8× speedup over an optimized GPU implementation while ensuring the same quality of classification. HD-Core provides 2.4× more throughput than the state-of-the-art FPGA implementation; on average, 40 percent of this improvement comes directly from enabling computation reuse in the encoding module and the rest comes from computation reuse in the associative search module.
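
The abstract describes HD classification in terms of two modules: an encoder that maps inputs to hypervectors, and an associative search that compares a query against class hypervectors with a cosine metric. The Python sketch below illustrates those two modules under common HD conventions; the record-based encoding with random ID and level hypervectors, the dimensionality, and all identifiers are assumptions made for illustration and are not taken from the paper.

import numpy as np

# Illustrative sketch of the two HD modules named in the abstract.
# The encoding scheme and all parameters are assumptions, not the
# exact design of HD-Core.

D = 10_000          # hypervector dimensionality
N_FEATURES = 64     # number of input features
N_LEVELS = 32       # quantization levels for feature values

rng = np.random.default_rng(0)
id_hvs = rng.choice([-1, 1], size=(N_FEATURES, D))     # one ID hypervector per feature
level_hvs = rng.choice([-1, 1], size=(N_LEVELS, D))    # one level hypervector per value bin

def encode(x):
    # Encoder module: bind each feature's ID hypervector with the level
    # hypervector of its quantized value, then bundle (sum) across features.
    levels = np.clip((x * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    return np.sum(id_hvs * level_hvs[levels], axis=0)

def associative_search(query_hv, class_hvs):
    # Associative search module: return the class whose hypervector is
    # most similar to the query under the cosine metric.
    sims = class_hvs @ query_hv / (
        np.linalg.norm(class_hvs, axis=1) * np.linalg.norm(query_hv) + 1e-12)
    return int(np.argmax(sims))

class_hvs = rng.normal(size=(4, D))   # toy non-binary "trained model" of 4 classes
x = rng.random(N_FEATURES)
print(associative_search(encode(x), class_hvs))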
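
The first reuse idea in the abstract, reusing the previously encoded hypervector when consecutive inputs are highly similar, can be illustrated by continuing the sketch above: because the bundling is an additive sum, only the features whose quantized value changed need to be re-encoded. The incremental update shown here is an assumption for illustration, not necessarily how HD-Core realizes reuse in hardware.

def encode_with_reuse(x, prev_levels=None, prev_hv=None):
    # Start from the previously encoded hypervector and re-encode only the
    # features whose quantized level changed; each feature's contribution
    # can be swapped independently because the bundling is an additive sum.
    levels = np.clip((x * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    if prev_levels is None:                        # first input: full encode
        return np.sum(id_hvs * level_hvs[levels], axis=0), levels
    hv = prev_hv.copy()
    for f in np.flatnonzero(levels != prev_levels):
        hv -= id_hvs[f] * level_hvs[prev_levels[f]]   # drop the stale contribution
        hv += id_hvs[f] * level_hvs[levels[f]]        # add the updated contribution
    return hv, levels

# Consecutive, highly similar inputs: only the changed features are re-encoded.
x1 = rng.random(N_FEATURES)
x2 = x1.copy()
x2[:3] += 0.05                                     # small drift in three features
hv1, lv1 = encode_with_reuse(x1)
hv2, lv2 = encode_with_reuse(x2, lv1, hv1)
assert np.array_equal(hv2, encode(x2))             # identical result, far fewer operations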
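
The second idea, eliminating most multiplications by clustering the class hypervector values and sharing them across classes, can likewise be sketched on top of the code above. A simple uniform quantization stands in for the clustering step, and the query element is multiplied by each shared value only once per dimension; the actual HD-Core clustering method and datapath may differ.

def quantize_classes(class_hvs, k=16):
    # Reduce every class hypervector element to one of k shared values.
    # Uniform quantization stands in here for the clustering step.
    centroids = np.linspace(class_hvs.min(), class_hvs.max(), k)
    idx = np.argmin(np.abs(class_hvs[..., None] - centroids), axis=-1)
    return centroids, idx                          # idx has shape (n_classes, D)

def dot_products_shared(query_hv, centroids, idx):
    # Only k*D multiplications for all classes: each query element is
    # multiplied by each shared value once, and every class just looks up
    # and accumulates the precomputed products.
    products = np.outer(centroids, query_hv)       # shape (k, D)
    cols = np.arange(query_hv.shape[0])
    return products[idx, cols].sum(axis=1)         # per-class dot products

centroids, idx = quantize_classes(class_hvs)
approx = dot_products_shared(encode(x), centroids, idx)
exact = class_hvs @ encode(x)
print(np.corrcoef(approx, exact)[0, 1])            # close to 1 for a fine enough k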
Language
English
Identifiers
ISSN: 0018-9340
eISSN: 1557-9956
DOI: 10.1109/TC.2020.2992662
Title ID: cdi_ieee_primary_9088251
