Details

Author(s) / Contributors
Title
WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
Is part of
  • IEEE journal of selected topics in signal processing, 2022-10, Vol.16 (6), p.1505-1518
Place / Publisher
New York: IEEE
Year of publication
2022
Source
IEEE/IET Electronic Library (IEL)
Descriptions/Notes
  • Self-supervised learning (SSL) has achieved great success in speech recognition, while other speech processing tasks have seen only limited exploration. As the speech signal contains multi-faceted information including speaker identity, paralinguistics, and spoken content, learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising during pre-training. In this way, WavLM not only preserves its speech content modeling capability through masked speech prediction, but also improves its potential on non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias in its Transformer structure to better capture the sequence ordering of the input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark and brings significant improvements to various speech processing tasks on their representative benchmarks.
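The sketch below is a rough illustration of the joint pre-training recipe the abstract describes: the model sees audio with simulated noise or an overlapping utterance mixed in, spans of frames are masked, and the prediction targets for the masked frames are derived from the clean signal. All function names, mixing ratios, and masking parameters here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def overlay_interference(clean, interferer):
    """Overlay a random crop of an interfering signal (noise or a second
    utterance) onto a random region of the clean waveform -- the denoising
    side of the objective. Mixing lengths here are arbitrary illustration
    values, not WavLM's actual simulation recipe."""
    mix_len = int(len(clean) * rng.uniform(0.1, 0.5))
    start = int(rng.integers(0, len(clean) - mix_len + 1))
    noisy = clean.copy()
    noisy[start:start + mix_len] += interferer[:mix_len]
    return noisy

def sample_span_mask(num_frames, span=10, start_prob=0.065):
    """wav2vec2/HuBERT-style span masking over frame indices."""
    mask = np.zeros(num_frames, dtype=bool)
    for s in np.where(rng.random(num_frames) < start_prob)[0]:
        mask[s:s + span] = True
    return mask

# One simulated pre-training example:
clean = rng.standard_normal(16000)       # 1 s of fake audio at 16 kHz
interferer = rng.standard_normal(16000)
noisy = overlay_interference(clean, interferer)
mask = sample_span_mask(num_frames=50)   # e.g., 50 frames of 20 ms each

# Pseudo-labels (e.g., k-means cluster ids, as in HuBERT) would be derived
# from the CLEAN signal; the model consumes the NOISY signal with masked
# frames, and the prediction loss is computed only where mask is True --
# so the model must both infer masked content and discard the interference.
```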
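The abstract also credits a gated relative position bias for better sequence-order modeling. The snippet below is a minimal sketch of that idea under a simplifying assumption: a learned per-offset scalar bias is scaled by a sigmoid gate computed from each query, making the positional bias content-dependent. WavLM's actual parameterization is more elaborate; `rel_bias_table`, `u`, and the single-gate form are illustrative.

```python
import numpy as np

def gated_relative_position_bias(q, rel_bias_table, u):
    """Content-gated relative position bias (simplified sketch).

    q:              (T, d) query vectors for one attention head
    rel_bias_table: (2*T - 1,) learned scalar bias per relative offset i - j
    u:              (d,) learned gating vector

    Returns a (T, T) bias added to the attention logits. The sigmoid gate
    makes the positional bias depend on query content; this single-gate
    form is an assumption, not the paper's exact formulation."""
    T, _ = q.shape
    offsets = np.arange(T)[:, None] - np.arange(T)[None, :]  # i - j in [-(T-1), T-1]
    d_ij = rel_bias_table[offsets + T - 1]                   # (T, T) per-offset bias
    gate = 1.0 / (1.0 + np.exp(-(q @ u)))                    # (T,) per-query gate
    return gate[:, None] * d_ij

# Usage in scaled dot-product attention (toy sizes):
rng = np.random.default_rng(0)
T, d = 5, 8
q, k = rng.standard_normal((T, d)), rng.standard_normal((T, d))
bias = gated_relative_position_bias(
    q, rng.standard_normal(2 * T - 1), rng.standard_normal(d))
logits = q @ k.T / np.sqrt(d) + bias  # bias injects gated positional information
```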
Language
English
Identifiers
ISSN: 1932-4553
eISSN: 1941-0484
DOI: 10.1109/JSTSP.2022.3188113
Title ID: cdi_crossref_primary_10_1109_JSTSP_2022_3188113
