Hyperspectral imaging technology, which combines traditional imaging and spectroscopy to acquire spatial and spectral information simultaneously, is regarded as an intuitive medium for robust face recognition. However, the intrinsic structure of hyperspectral images is more complicated than that of ordinary gray-scale or RGB images, and how to fully exploit discriminant and correlation features when only a limited number of hyperspectral samples are available for deep learning training has not been well studied. In response to these problems, this paper proposes an end-to-end multiscale-fusion lightweight convolutional neural network (CNN) framework for hyperspectral face recognition, termed the features fusion with channel attention network (FFANet). First, to capture richer subtle details, we introduce Second-Order Efficient Channel Attention (SECA), a variant of Efficient Channel Attention (ECA), into the framework. Unlike ECA, SECA extracts second-order information from each channel, which improves the network's feature extraction ability and is better suited to the complexity of hyperspectral data. Second, we fuse multiscale information to yield comprehensive and discriminative representation learning. Finally, joint Self-Supervision and Knowledge Distillation (SSKD) is exploited to train an efficient deep model that can learn more dark knowledge from the trained teacher network. Experimental results on three benchmark hyperspectral face databases, PolyU, CMU, and UWA, show that the proposed approach achieves competitive accuracy and efficiency while significantly reducing storage space and computation overhead. These characteristics also indicate its wide applicability to edge/mobile devices.
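To make the channel-attention idea concrete, the sketch below illustrates one plausible reading of SECA in PyTorch: ECA's global average pooling is replaced by a second-order per-channel statistic (here, the per-channel variance) before the usual 1D convolution across channels and sigmoid gating. The class name `SecondOrderECA`, the choice of variance as the second-order descriptor, and the ECA-style adaptive kernel-size heuristic are assumptions for illustration, not the authors' exact formulation.

```python
import math
import torch
import torch.nn as nn

class SecondOrderECA(nn.Module):
    """Channel attention in the spirit of ECA, using a second-order
    per-channel descriptor instead of plain global average pooling.
    Illustrative sketch only, not the paper's exact SECA module."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive 1D kernel size as in ECA: k ~ |log2(C)/gamma + b/gamma|, forced odd.
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 == 1 else t + 1
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature maps, e.g. from a hyperspectral face CNN.
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)
        mu = flat.mean(dim=2, keepdim=True)                 # first-order statistic
        second = ((flat - mu) ** 2).mean(dim=2)             # per-channel variance (second-order)
        # 1D convolution over the channel dimension models local cross-channel interaction.
        attn = self.conv(second.unsqueeze(1)).squeeze(1)    # (B, C)
        attn = self.sigmoid(attn).view(b, c, 1, 1)
        return x * attn                                     # reweight channels


if __name__ == "__main__":
    # Toy usage: a batch of 4 feature maps with 33 spectral-derived channels.
    feats = torch.randn(4, 33, 32, 32)
    seca = SecondOrderECA(channels=33)
    out = seca(feats)
    print(out.shape)  # torch.Size([4, 33, 32, 32])
```

Because the descriptor is a single scalar per channel, the module adds only a small 1D convolution on top of the backbone, which is consistent with the abstract's emphasis on a lightweight design for edge/mobile deployment.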