IEEE transactions on parallel and distributed systems, 2021-07, Vol.32 (7), p.1677-1689
2021
Full-text access (PDF)

Details

Author(s) / Contributors
Title
FT-CNN: Algorithm-Based Fault Tolerance for Convolutional Neural Networks
Is part of
  • IEEE transactions on parallel and distributed systems, 2021-07, Vol.32 (7), p.1677-1689
Place / Publisher
New York: IEEE
Year of publication
2021
Source
IEEE Xplore
Descriptions/Notes
  • Convolutional neural networks (CNNs) are becoming more and more important for solving challenging and critical problems in many fields. CNN inference applications have been deployed in safety-critical systems, which may suffer from soft errors caused by high-energy particles, high temperature, or abnormal voltage. Of critical importance is ensuring the stability of the CNN inference process against soft errors. Traditional fault tolerance methods are not suitable for CNN inference because error-correcting code is unable to protect computational components, instruction duplication techniques incur high overhead, and existing algorithm-based fault tolerance (ABFT) techniques cannot protect all convolution implementations. In this article, we focus on how to protect the CNN inference process against soft errors as efficiently as possible, with the following three contributions. (1) We propose several systematic ABFT schemes based on checksum techniques and analyze their fault protection ability and runtime thoroughly. Unlike traditional ABFT based on matrix-matrix multiplication, our schemes support any convolution implementations. (2) We design a novel workflow integrating all the proposed schemes to obtain a high detection/correction ability with limited total runtime overhead. (3) We perform our evaluation using ImageNet with well-known CNN models including AlexNet, VGG-19, ResNet-18, and YOLOv2. Experimental results demonstrate that our implementation can handle soft errors with very limited runtime overhead (4%~8% in both error-free and error-injected situations).
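The abstract's core idea is that convolution is linear, so a checksum computed over the kernels can verify the outputs: convolving the input with the sum of all kernels must equal the sum of the individual output feature maps, and any mismatch signals a soft error. The sketch below is a minimal generic illustration of this checksum principle, not the paper's exact FT-CNN schemes; the `conv2d` helper and the injected-error values are assumptions for demonstration.

```python
import numpy as np

def conv2d(image, kernel):
    # Plain "valid" 2D cross-correlation (no padding, stride 1),
    # standing in for any convolution implementation.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = rng.standard_normal((4, 3, 3))  # 4 output channels

# Normal computation: one feature map per kernel.
outputs = np.stack([conv2d(image, k) for k in kernels])

# Checksum path: by linearity, conv(image, sum of kernels)
# equals the element-wise sum of the per-kernel feature maps.
checksum_out = conv2d(image, kernels.sum(axis=0))
assert np.allclose(outputs.sum(axis=0), checksum_out)

# Inject a soft error into one feature map; the checksum
# comparison now fails, detecting the corruption.
outputs[2, 1, 1] += 5.0
detected = not np.allclose(outputs.sum(axis=0), checksum_out)
print(detected)  # prints True
```

Because the checksum convolution adds only one extra filter regardless of the number of output channels, its relative cost shrinks as the layer widens, which is consistent with the low overhead the abstract reports.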
