
Details

Author(s) / Contributors
Title
DRIT++: Diverse Image-to-Image Translation via Disentangled Representations
Is part of
  • International journal of computer vision, 2020-11, Vol.128 (10-11), p.2402-2417
Place / Publisher
New York: Springer US
Year of publication
2020
Link to full text
Source
SpringerLink (Online service)
Descriptions/Notes
  • Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for this task: (1) lack of aligned training pairs and (2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for generating diverse outputs without paired training images. To synthesize diverse outputs, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and attribute vectors sampled from the attribute space to synthesize diverse outputs at test time. To handle unpaired training data, we introduce a cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative evaluations, we measure realism with a user study and the Fréchet inception distance, and measure diversity with the perceptual distance metric, Jensen–Shannon divergence, and number of statistically-different bins.
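The cross-cycle consistency idea in the abstract can be illustrated with a minimal toy sketch: each image is factored into a domain-invariant content code and a domain-specific attribute code, attribute codes are swapped across domains to translate, and a second swap must reconstruct the originals. All names here are illustrative stand-ins, not the authors' implementation; a real model would use learned encoders and generators.

```python
# Toy sketch of DRIT-style cross-cycle consistency. Images are modeled as
# (content, attribute) pairs so the disentanglement is exact; in the actual
# method, encode/generate are learned neural networks.

def encode(image):
    # Split an image into a domain-invariant content code and a
    # domain-specific attribute code (toy: the image IS the pair).
    content, attribute = image
    return content, attribute

def generate(content, attribute):
    # Synthesize an image from a content code and an attribute code.
    return (content, attribute)

def cross_cycle(x, y):
    """Swap attribute codes across domains, then swap back."""
    cx, ax = encode(x)
    cy, ay = encode(y)
    # First translation: exchange attribute codes across domains.
    u = generate(cx, ay)
    v = generate(cy, ax)
    # Second translation: swap again to reconstruct the originals.
    cu, au = encode(u)
    cv, av = encode(v)
    x_rec = generate(cu, av)
    y_rec = generate(cv, au)
    return x_rec, y_rec

x = ("content_x", "attr_x")
y = ("content_y", "attr_y")
x_rec, y_rec = cross_cycle(x, y)
assert x_rec == x and y_rec == y  # cross-cycle consistency holds
```

In training, the mismatch between `x_rec` and `x` (and `y_rec` and `y`) becomes the cross-cycle consistency loss, which is what lets the model learn from unpaired data: no aligned `(x, y)` pair is ever required, only the round trip back to each input.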
Language
English
Identifiers
ISSN: 0920-5691
eISSN: 1573-1405
DOI: 10.1007/s11263-019-01284-z
Titel-ID: cdi_gale_infotracacademiconefile_A636368460
