Details

Author(s) / Contributors
Title
Consistent View Synthesis with Pose-Guided Diffusion Models
Is part of
  • 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, p.16773-16783
Place / Publisher
IEEE
Year of publication
2023
Link to full text
Source
IEEE Xplore Digital Library
Descriptions/Notes
  • Novel view synthesis from a single image has been a cornerstone problem for many Virtual Reality applications that provide immersive experiences. However, most existing techniques can only synthesize novel views within a limited range of camera motion or fail to generate consistent and high-quality novel views under significant camera movement. In this work, we propose a pose-guided diffusion model to generate a consistent long-term video of novel views from a single image. We design an attention layer that uses epipolar lines as constraints to facilitate the association between different viewpoints. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of the proposed diffusion model against state-of-the-art transformer-based and GAN-based approaches. More qualitative results are available at https://poseguided-diffusion.github.io/.
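The abstract's central technical component is an attention layer that uses epipolar lines as constraints to associate pixels across viewpoints. The sketch below is not the authors' implementation; it only illustrates the general idea under assumed conventions (shared intrinsics `K`, relative pose `(R, t)` of the source camera with respect to the target camera, a pixel-distance `threshold`, and a shared low-resolution feature grid): it builds a boolean mask that lets a target-view pixel attend only to source-view pixels lying near its epipolar line.

```python
import torch

def epipolar_attention_mask(K, R, t, h, w, threshold=2.0):
    """Boolean (h*w, h*w) mask: target pixel i may attend to source pixel j
    only if j lies within `threshold` pixels of i's epipolar line.
    K: (3,3) intrinsics, R: (3,3) rotation, t: (3,) translation of the
    source camera relative to the target camera (assumed convention)."""
    # Skew-symmetric matrix [t]_x such that [t]_x v = t x v
    tx = torch.zeros(3, 3)
    tx[0, 1], tx[0, 2] = -t[2], t[1]
    tx[1, 0], tx[1, 2] = t[2], -t[0]
    tx[2, 0], tx[2, 1] = -t[1], t[0]

    # Fundamental matrix F = K^-T [t]_x R K^-1; l' = F x is the epipolar
    # line in the source view for a target-view pixel x.
    K_inv = torch.linalg.inv(K)
    F = K_inv.T @ tx @ R @ K_inv

    # Homogeneous coordinates of every pixel on the shared h x w grid
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs.flatten(), ys.flatten(),
                       torch.ones(h * w)], dim=-1)      # (N, 3)

    lines = pix @ F.T                                   # (N, 3) epipolar lines
    # Point-to-line distance |a*x + b*y + c| / sqrt(a^2 + b^2)
    num = (lines @ pix.T).abs()                         # (N_target, N_source)
    den = lines[:, :2].norm(dim=-1, keepdim=True).clamp(min=1e-8)
    return (num / den) <= threshold
```

In a cross-attention layer, such a mask could be applied by adding -inf to attention logits wherever it is False, so each novel-view pixel aggregates features only from source-view pixels consistent with the relative camera pose; the dense (h*w)² mask is only practical on small feature maps.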
Language
English
Identifiers
eISSN: 2575-7075
DOI: 10.1109/CVPR52729.2023.01609
Title ID: cdi_ieee_primary_10204231
