
Details

Author(s) / Contributors
Title
Single Sample Face Recognition Under Varying Illumination via QRCP Decomposition
Is part of
  • IEEE transactions on image processing, 2019-05, Vol.28 (5), p.2624-2638
Place / Publisher
United States: IEEE
Year of publication
2019
Source
IEEE/IET Electronic Library (IEL)
Descriptions/Notes
  • In this paper, we present a novel high-frequency facial feature and a high-frequency-based sparse representation classification to tackle single sample face recognition (SSFR) under varying illumination. First, we propose the assumption that orthogonal triangular with column pivoting (QRCP) bases can represent intrinsic face surface features at different frequencies, and that their corresponding energy coefficients describe illumination intensities. Based on this assumption, we combine QRCP bases with their weighting coefficients (i.e., the major components of the energy coefficients) to develop a high-frequency facial feature of the face image, named QRCP-face. The normalized QRCP-face (NQRCP-face) further constrains illumination effects by normalizing the weighting coefficients of QRCP-face. Moreover, we propose an adaptive QRCP-face that assigns a special parameter to NQRCP-face via the illumination level estimated from the weighting coefficients. Second, we observe that differences of pixel images cannot model the intra-class variations of generic faces under illumination variations, and that the specific identification information of the generic face is redundant for the current SSFR with generic learning. To tackle these two issues, we develop a general high-frequency-based sparse representation model, with two practical variants: separated high-frequency-based sparse representation and unified high-frequency-based sparse representation. Finally, the performance of the proposed methods is verified on the Extended Yale B, CMU PIE, AR, Labeled Faces in the Wild, and our self-built Driver face databases. The experimental results indicate that the proposed methods outperform previous approaches for SSFR under varying illumination.
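  • The pivoted QR decomposition underlying the QRCP-face idea can be sketched with SciPy's `scipy.linalg.qr(..., pivoting=True)`. The sketch below is a simplified assumption for illustration only, not the paper's exact construction: it takes the magnitudes of the diagonal of R as "energy coefficients" and zeroes the first `keep_from` rows of R (the highest-energy, low-frequency components) to leave a high-frequency residual; the name `qrcp_face` and the parameter `keep_from` are hypothetical.

```python
import numpy as np
from scipy.linalg import qr


def qrcp_face(image, keep_from=2):
    """Illustrative high-frequency residual of an image via pivoted QR.

    This is a hypothetical sketch, not the paper's QRCP-face definition.
    """
    # QR with column pivoting: image[:, piv] = Q @ R,
    # with |R[0,0]| >= |R[1,1]| >= ... by the pivoting rule.
    Q, R, piv = qr(image, pivoting=True)

    # "Energy coefficients": magnitudes of the diagonal of R (assumption).
    energy = np.abs(np.diag(R))

    # Drop the top-energy (low-frequency) components, keep the rest.
    R_hf = R.copy()
    R_hf[:keep_from, :] = 0.0
    recon_pivoted = Q @ R_hf

    # Undo the column permutation so the result aligns with the input image.
    hf = np.empty_like(recon_pivoted)
    hf[:, piv] = recon_pivoted
    return hf, energy
```

With `keep_from=0` nothing is discarded and the routine reconstructs the input exactly, which is a convenient sanity check of the permutation handling.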
