
Details

Author(s) / Contributors
Title
Contraction of Dynamically Masked Deep Neural Networks for Efficient Video Processing
Is part of
  • IEEE transactions on circuits and systems for video technology, 2022-02, Vol.32 (2), p.621-633
Place / Publisher
New York: IEEE
Year of publication
2022
Source
IEEE/IET Electronic Library (IEL)
Descriptions / Notes
  • Sequential data such as video are characterized by spatio-temporal redundancies. As of yet, few deep learning algorithms exploit them to decrease the often massive cost during inference. This work leverages correlations in video data to reduce the size and run-time cost of deep neural networks. Drawing upon the simplicity of the typically used ReLU activation function, we replace this function by dynamically updating masks. The resulting network is a simple chain of matrix multiplications and bias additions, which can be contracted into a single weight matrix and bias vector. Inference then reduces to an affine transformation of the input sample with these contracted parameters. We show that the method is akin to approximating the neural network with a first-order Taylor expansion around a dynamically updating reference point. For triggering these updates, one static and three data-driven mechanisms are analyzed. We evaluate the proposed algorithm on a range of tasks, including pose estimation on surveillance data, road detection on KITTI driving scenes, object detection on ImageNet videos, as well as denoising MNIST digits, and obtain compression rates up to 3.6×.
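The contraction idea in the abstract can be illustrated with a minimal NumPy sketch. Once the ReLU activations are frozen into 0/1 masks at a reference input, each masked layer is affine, so the whole network collapses into one weight matrix and bias vector. All names (`relu_forward`, `contract`, the layer sizes) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Toy 3-layer MLP; weights and sizes are made up for illustration.
rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.standard_normal(m) for m in sizes[1:]]

def relu_forward(x):
    """Ordinary forward pass that also records the ReLU masks."""
    masks = []
    for Wi, bi in zip(W[:-1], b[:-1]):
        pre = Wi @ x + bi
        m = (pre > 0).astype(pre.dtype)  # ReLU viewed as a data-dependent 0/1 mask
        masks.append(m)
        x = pre * m
    return W[-1] @ x + b[-1], masks

def contract(masks):
    """Fold the masked linear layers into one weight matrix A and bias c."""
    A = np.eye(sizes[0])
    c = np.zeros(sizes[0])
    for Wi, bi, m in zip(W[:-1], b[:-1], masks):
        A = (m[:, None] * Wi) @ A          # mask rows, then compose
        c = m * (Wi @ c + bi)
    return W[-1] @ A, W[-1] @ c + b[-1]

# Contract at a reference sample; inference is now a single affine map.
x_ref = rng.standard_normal(sizes[0])
y_ref, masks = relu_forward(x_ref)
A, c = contract(masks)
assert np.allclose(A @ x_ref + c, y_ref)
```

At the reference point the contracted map reproduces the full network exactly; for nearby inputs whose masks are unchanged it stays exact, which is why the abstract relates the method to a first-order Taylor expansion around a dynamically updated reference point.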
