
Details

Author(s) / Contributors
Title
StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads
Is part of
  • IEEE transactions on pattern analysis and machine intelligence, 2024-06, Vol.46 (6), p.4331-4347
Place / Publisher
United States: IEEE
Year of publication
2024
Source
IEEE/IET Electronic Library
Descriptions / Notes
  • Individuals have unique facial-expression and head-pose styles that reflect their personalized speaking styles. Existing one-shot talking-head methods cannot capture such personalized characteristics and therefore fail to produce diverse speaking styles in the final videos. To address this challenge, we propose a one-shot style-controllable talking-face generation method that extracts speaking styles from reference speaking videos and drives the one-shot portrait to speak with those reference styles and a separate audio clip. Our method synthesizes the style-controllable coefficients of a 3D Morphable Model (3DMM), covering facial expressions and head movements, in a unified framework. Specifically, the framework first uses a style encoder to extract the desired speaking styles from the reference videos and transform them into style codes. A style-aware decoder then synthesizes the 3DMM coefficients from the audio input and the style codes. During decoding, the framework adopts a two-branch architecture that generates the stylized facial-expression coefficients and the stylized head-movement coefficients, respectively. Once the 3DMM coefficients are obtained, an image renderer turns them into a talking-head video of the specific person. Extensive experiments demonstrate that our method generates visually authentic talking-head videos with diverse speaking styles from only one portrait image and an audio clip.
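The pipeline the abstract describes (style encoder → style code → two-branch style-aware decoder → stylized 3DMM expression and pose coefficients) can be sketched as a minimal, untrained data-flow example. Everything here is an illustrative assumption — the dimensions, function names, and random linear projections stand in for the paper's learned networks and are not its actual architecture:

```python
import numpy as np

# Assumed dimensions for illustration only (not the paper's exact sizes):
# 80-dim audio features, 128-dim style code, 64 expression + 6 pose coefficients.
AUDIO_DIM, STYLE_DIM, EXP_DIM, POSE_DIM = 80, 128, 64, 6

rng = np.random.default_rng(0)

def style_encoder(ref_coeffs):
    """Map a reference 3DMM coefficient sequence of shape (T, EXP_DIM + POSE_DIM)
    to a single style code by mean-pooling a random linear projection —
    a stand-in for the learned style encoder."""
    W = rng.standard_normal((EXP_DIM + POSE_DIM, STYLE_DIM)) * 0.01
    return np.tanh(ref_coeffs @ W).mean(axis=0)            # (STYLE_DIM,)

def style_aware_decoder(audio_feats, style_code):
    """Two-branch decoder sketch: both branches see the audio features
    conditioned on the style code; each predicts its own coefficient stream."""
    T = audio_feats.shape[0]
    cond = np.concatenate([audio_feats,
                           np.tile(style_code, (T, 1))], axis=1)
    W_exp = rng.standard_normal((AUDIO_DIM + STYLE_DIM, EXP_DIM)) * 0.01
    W_pose = rng.standard_normal((AUDIO_DIM + STYLE_DIM, POSE_DIM)) * 0.01
    return cond @ W_exp, cond @ W_pose    # stylized expression / head-pose coeffs

ref = rng.standard_normal((120, EXP_DIM + POSE_DIM))   # reference-video 3DMM coeffs
audio = rng.standard_normal((200, AUDIO_DIM))          # driving-audio features
code = style_encoder(ref)
exp, pose = style_aware_decoder(audio, code)
print(exp.shape, pose.shape)                           # (200, 64) (200, 6)
```

The sketch only shows the shapes flowing through the unified framework: one style code extracted from a whole reference sequence, then per-frame stylized coefficients for both branches, which the paper's image renderer would turn into video frames.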
