
Details

Author(s) / Contributors
Title
Animatable 3D Facial Detail Reconstruction from In-the-wild Images
Is part of
  • Proceedings of the 2024 7th International Conference on Image and Graphics Processing, 2024, p.244-251
Place / Publisher
New York, NY, USA: ACM
Year of publication
2024
Source
ACM Digital Library
Descriptions / Notes
  • With the introduction of the "metaverse" concept, digital human technology has attracted widespread attention, and rapidly modelling a high-precision 3D human face has become a research focus in the field of digital humans. To address the inability of 3D Morphable Models (3DMM) to reconstruct facial details, this paper proposes an animatable facial detail generation algorithm based on a generative adversarial loss, which recovers finer facial relief on the base face obtained from 3D facial reconstruction, achieving a highly realistic reconstruction of digital human faces. First, the paper uses displacement mapping to recover facial detail, constructing an autoencoder network that predicts displacement maps from input images and displaces the vertices of the 3D model to express finer relief detail. Second, to effectively capture mid-to-high-frequency facial detail, a generative adversarial loss is introduced to model high-frequency differences between images; it is applied jointly with photometric and other content losses to keep the generated detail accurate. Finally, to decouple static from dynamic facial detail, the paper embeds the 3DMM expression-related parameters into the displacement-map generator and adopts an identity-averaging strategy that constrains the encoder network to model only static facial detail, while dynamic detail is provided by the 3DMM parameters, enabling expression-driven generation of dynamic facial detail. Experimental results show that the proposed algorithm is competitive with other state-of-the-art facial detail generation algorithms in reconstruction quality and accuracy. In addition, expression transfer experiments verify the model's ability to construct facial detail dynamically, and ablation experiments confirm the importance of the generative adversarial loss.
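  • The following is a minimal PyTorch sketch of the kind of pipeline the abstract describes: an autoencoder that predicts a displacement map from an image, conditioned on 3DMM expression parameters, trained with a photometric loss plus a generative adversarial loss. All names (DisplacementAutoencoder, PatchDiscriminator, generator_loss), layer sizes, and weights are hypothetical illustrations, not taken from the paper.

    # Hypothetical sketch of the abstract's pipeline; not the authors' implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DisplacementAutoencoder(nn.Module):
        def __init__(self, exp_dim=64, latent_dim=128):
            super().__init__()
            # Image encoder: intended to capture only static, identity-specific detail
            # (the paper additionally uses an identity-averaging constraint for this).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, latent_dim),
            )
            # Decoder: dynamic detail enters through the 3DMM expression code,
            # concatenated with the static latent before decoding the UV displacement map.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim + exp_dim, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),  # single-channel displacement map
            )

        def forward(self, image, exp_params):
            z = self.encoder(image)
            return self.decoder(torch.cat([z, exp_params], dim=1))

    class PatchDiscriminator(nn.Module):
        """Patch-wise real/fake classifier used to model mid-to-high-frequency detail."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 4, 1, 1),  # per-patch logits
            )

        def forward(self, x):
            return self.net(x)

    def generator_loss(rendered, target, disc, w_adv=0.01):
        # Photometric (content) loss keeps the rendered detailed face close to the input image.
        photometric = F.l1_loss(rendered, target)
        # Adversarial loss pushes the rendering toward realistic high-frequency detail.
        logits = disc(rendered)
        adversarial = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
        return photometric + w_adv * adversarial

    In this sketch the displacement map would be applied to the 3DMM base mesh before differentiable rendering, and the rendered result fed to generator_loss; the relative weight w_adv is an arbitrary placeholder.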
Language
English
Identifiers
ISBN: 9798400716720
DOI: 10.1145/3647649.3647689
Title ID: cdi_acm_books_10_1145_3647649_3647689
