We tackle audio-visual inpainting: the problem of completing an image so that it is consistent with the sound associated with the scene. To this end, we propose a multimodal audio-visual inpainting method (AVIN) and show how to leverage sound to reconstruct semantically consistent images. AVIN is a two-stage algorithm: it first learns the scene semantics and reconstructs low-resolution images from a probability distribution over pixels conditioned on audio, and then refines this result with a GAN-based network that increases the resolution of the reconstructed image. We show that AVIN is able to recover the original content, especially in hard cases where the missing area heavily degrades the scene semantics: it can perform cross-modal generation even when no visual context is observed at all, reconstructing visual data from sound alone. Code will be made available upon acceptance.
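To make the two-stage structure concrete, the sketch below mocks the pipeline described in the abstract with toy stand-ins: a hypothetical audio-conditioned fill for the coarse stage and a simple smoothing pass in place of the GAN refiner. All function names, the audio embedding, and the fill rule are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_coarse_reconstruction(masked_img, mask, audio_emb):
    """Stage 1 (sketch): fill the missing region from an audio-conditioned
    estimate. AVIN models a pixel distribution conditioned on audio; here a
    hypothetical projection of the audio embedding supplies a coarse fill."""
    fill = float(np.tanh(audio_emb.mean()))  # stand-in for a learned model
    coarse = masked_img.copy()
    coarse[mask] = fill
    return coarse

def stage2_refine(coarse_img):
    """Stage 2 (sketch): the GAN-based refinement network is approximated
    by a 3x3 box blur; the real method sharpens and upsamples instead."""
    h, w = coarse_img.shape
    padded = np.pad(coarse_img, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

# Toy 8x8 grayscale image with a missing 4x4 region.
img = rng.random((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
masked = img.copy()
masked[mask] = 0.0
audio_emb = rng.random(16)  # stand-in audio feature vector

result = stage2_refine(stage1_coarse_reconstruction(masked, mask, audio_emb))
```

The cross-modal generation case mentioned in the abstract corresponds to `mask` covering the whole image, so stage 1 must produce the fill from `audio_emb` alone.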