Rate-Distortion Optimization for Deep Image Compression
Is Part Of
2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 3737-3741
Place / Publisher
IEEE
Publication Year
2021
Source
IEL
Descriptions/Notes
Given the capabilities of massive GPU hardware, there has been a surge in the use of artificial neural networks (ANNs) for still image compression. These compression systems usually consist of convolutional layers and can be regarded as non-linear transform coding. Notably, these ANNs follow an end-to-end approach in which the encoder represents a compressed version of the image as features. In contrast, existing image and video codecs employ a block-based architecture with signal-dependent encoder optimizations. A basic requirement for designing such optimizations is estimating the impact of the quantization error on the resulting bitrate and distortion; for non-linear, multi-layered neural networks, this is a difficult problem. This paper presents a performant auto-encoder architecture for still image compression, which represents the compressed features at multiple scales. We then demonstrate how an algorithm that tests multiple feature candidates can reduce the Lagrangian cost and improve compression efficiency. The algorithm avoids multiple network executions by pre-estimating the impact of the quantization on the distortion with a higher-order polynomial.
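The candidate-testing idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the rate proxy, the polynomial coefficients, and all function names below are hypothetical placeholders. It only shows the general pattern of picking, per feature value, the quantization candidate that minimizes the Lagrangian cost J = D + λ·R, where the distortion D is pre-estimated by a polynomial of the quantization error instead of running the decoder network for every candidate.

```python
import numpy as np

def estimate_distortion(q_error, poly_coeffs):
    """Pre-estimate distortion as a higher-order polynomial of the
    absolute quantization error (coefficients are hypothetical)."""
    return np.polyval(poly_coeffs, abs(q_error))

def rate_proxy(candidate):
    """Toy rate model (assumption): larger magnitudes cost more bits."""
    return np.log2(1.0 + abs(candidate))

def rdo_select(feature, candidates, lam, poly_coeffs):
    """Return the candidate with the smallest Lagrangian cost D + lam * R."""
    best, best_cost = None, float("inf")
    for c in candidates:
        d = estimate_distortion(feature - c, poly_coeffs)  # no network pass
        r = rate_proxy(c)
        cost = d + lam * r
        if cost < best_cost:
            best, best_cost = c, cost
    return best

# Example: feature value 0.7, candidates from rounding down/up,
# quadratic distortion model D ~ 2 * e^2 (coefficients are made up).
poly = [2.0, 0.0, 0.0]
print(rdo_select(0.7, [0.0, 1.0], lam=0.1, poly_coeffs=poly))  # low lambda
print(rdo_select(0.7, [0.0, 1.0], lam=2.0, poly_coeffs=poly))  # high lambda
```

With a small λ the distortion term dominates and the nearer candidate 1.0 wins; with a large λ the rate term dominates and the cheaper candidate 0.0 is chosen, which is the basic rate-distortion trade-off the Lagrangian cost encodes.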