Logarithmic Floating-Point Multipliers for Efficient Neural Network Training
Part of
Design and Applications of Emerging Computer Systems, p.567-587
Place / Publisher
Cham: Springer Nature Switzerland
Description / Notes
Floating-point (FP) arithmetic is favored for training neural networks (NNs) due to its wide numerical range. The computation-intensive training process requires a tremendous number of multiplications, which poses a challenge to deploying NN architectures on resource-constrained devices. This chapter presents hardware-efficient logarithmic FP multipliers (LFPMs) for NN training. By using piecewise approximations in different configurations over the applicable domains of the logarithm and anti-logarithm functions, we obtain LFPMs with varying trade-offs between accuracy and hardware cost. Benchmark NN applications are considered for the evaluation.
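The chapter's specific LFPM designs are hardware circuits, but the underlying idea can be sketched in software. The simplest piecewise-linear case uses a single segment per octave, which is Mitchell's classic approximation: log2(1 + f) ≈ f for a mantissa fraction f in [0, 1), so a multiplication becomes an addition in the log domain. The sketch below is illustrative (function names are our own, positive inputs assumed), not the chapter's actual multiplier:

```python
import math

def mitchell_log2(x):
    """Mitchell's approximation: write x = (1 + f) * 2**k with f in [0, 1);
    then log2(x) = k + log2(1 + f) is approximated as k + f."""
    m, e = math.frexp(x)       # x = m * 2**e with m in [0.5, 1), x > 0 assumed
    f = 2.0 * m - 1.0          # mantissa fraction in [0, 1)
    return (e - 1) + f

def mitchell_antilog2(y):
    """Inverse approximation: 2**(k + f) is approximated as (1 + f) * 2**k."""
    k = math.floor(y)
    f = y - k
    return (1.0 + f) * 2.0 ** k

def log_multiply(a, b):
    """Approximate a * b by adding in the log domain, replacing the
    hardware multiplier with an adder (the source of the efficiency gain)."""
    return mitchell_antilog2(mitchell_log2(a) + mitchell_log2(b))
```

For example, `log_multiply(3.0, 3.0)` returns 8.0 instead of 9.0, the worst case of this single-segment scheme (about 11.1% relative error). Piecewise approximations with more or differently placed segments, as explored in the chapter, shrink this gap at some hardware cost.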