
Details

Author(s) / Contributors
Title
CLEGAN: Toward Low-Light Image Enhancement for UAVs via Self-Similarity Exploitation
Is part of
  • IEEE transactions on geoscience and remote sensing, 2023, Vol.61, p.1-14
Place / Publisher
New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Year of publication
2023
Source
IEEE Xplore
Descriptions / Notes
  • Low-light remote sensing image enhancement for unmanned aerial vehicles (UAVs) has significant scientific and practical value because unfavorable lighting conditions make image capture more difficult, resulting in degraded images. Since acquiring real-world low-light/normal-light image pairs in remote sensing is almost infeasible, performing low-light image enhancement (LLIE) in an unpaired manner is practical and valuable. However, without paired data as supervision, learning an LLIE network is challenging. To address these challenges, this article proposes a novel and effective method for unpaired LLIE, named CLEGAN, which maximizes the mutual information between low-light and restored images through self-similarity contrastive learning (SSCL) in a fully unsupervised fashion within a single generative adversarial network (GAN) framework. Instead of supervising learning with ground-truth data, the unpaired training is regularized using information extracted from the input itself. The nonlocal patch sampling strategy in SSCL naturally makes the negative samples differ from the positive samples, yielding discriminative representations. Moreover, the single GAN embeds a dual illumination perception module (DIPM) to handle the internal recurrence of information and the overall uneven illumination distribution in remote sensing images. DIPM consists of two cooperative blocks: the spatial adaptive light adjustment module (SALAM) and the global adaptive light adjustment module (GALAM). Specifically, SALAM exploits the internal recurrence of information in remote sensing images to encode a wider range of contextual information into local features and produce a proper light estimate, while GALAM enhances the most illumination-relevant channels in the feature map to further improve the light estimate. Experiments on several datasets, including a low-light remote sensing image dataset and public low-light image datasets, show that CLEGAN performs favorably against existing unpaired LLIE approaches and even outperforms several fully supervised methods.
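The self-similarity contrastive idea described in the abstract — pull a restored patch toward its corresponding input patch while pushing it away from patches sampled at other (nonlocal) locations — can be sketched as a standard InfoNCE-style loss. This is an illustrative sketch only, not the authors' implementation: the function names, the uniform nonlocal sampling, the feature dimension, and the temperature value are assumptions.

```python
import numpy as np

def sample_nonlocal_negatives(features, query_idx, num_neg, rng):
    """Pick negative patch features from spatial locations other than the
    query's. Uniform sampling over other positions is an assumption here;
    the paper's exact sampling strategy may differ.

    features: (num_patches, d) array of patch feature vectors.
    """
    candidates = [i for i in range(len(features)) if i != query_idx]
    idx = rng.choice(candidates, size=num_neg, replace=False)
    return features[idx]

def patch_nce_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE loss for one patch: the anchor (restored patch feature)
    should match its positive (same-location input patch feature) and
    differ from nonlocal negatives.

    anchor, positive: (d,) vectors; negatives: (n, d) array.
    """
    def l2norm(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

    a, p, negs = l2norm(anchor), l2norm(positive), l2norm(negatives)
    pos_logit = (a @ p) / tau           # similarity to the positive
    neg_logits = (negs @ a) / tau       # similarities to the negatives
    logits = np.concatenate([[pos_logit], neg_logits])
    # Cross-entropy with the positive at index 0: -log softmax(logits)[0]
    return -pos_logit + np.log(np.sum(np.exp(logits)))
```

A loss of this shape is minimized when the anchor aligns with its same-location positive and is dissimilar to the nonlocal negatives, which is how the negatives "naturally differ" from the positives without any paired ground truth.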
Language
English
Identifiers
ISSN: 0196-2892
eISSN: 1558-0644
DOI: 10.1109/TGRS.2023.3279826
Titel-ID: cdi_proquest_journals_2824113169
