Details

Author(s) / Contributors
Title
Fast Adaptation of Deep Neural Network Based on Discriminant Codes for Speech Recognition
Is part of
  • IEEE/ACM transactions on audio, speech, and language processing, 2014-12, Vol.22 (12), p.1713-1725
Place / Publisher
Piscataway: IEEE
Year of publication
2014
Source
IEEE
Descriptions/Notes
  • Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we propose a general adaptation scheme for DNNs based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn these connection weights from training data, as well as the corresponding adaptation methods to learn a new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either the frame-level cross-entropy or the sequence-level maximum mutual information training criterion. We propose three different ways to apply this adaptation scheme based on the so-called speaker codes: i) nonlinear feature normalization in feature space; ii) direct model adaptation of the DNN based on speaker codes; iii) joint speaker-adaptive training with speaker codes. We evaluate the proposed adaptation methods on two standard speech recognition tasks, namely TIMIT phone recognition and large-vocabulary speech recognition on the Switchboard task. Experimental results show that all three methods are quite effective in adapting large DNN models using only a small amount of adaptation data. For example, the Switchboard results show that the proposed speaker-code-based adaptation methods can achieve up to 8-10% relative error reduction using only a few dozen adaptation utterances per speaker. Finally, we achieve very good performance on Switchboard (12.1% WER) after speaker adaptation with the sequence training criterion, which is very close to the best performance reported on this task ("Deep convolutional neural networks for LVCSR," T. N. Sainath, Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2013).
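The abstract describes feeding a discriminant speaker code into several layers of a pre-trained DNN through an extra set of connection weights, and then estimating a new code for each test speaker from a small amount of adaptation data. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the layer sizes, code dimension, sigmoid nonlinearity, PyTorch framing, and names such as SpeakerCodeDNN and adapt_speaker_code are assumptions made here for illustration only.

```python
# Illustrative sketch (NOT the paper's code): a speaker code s enters each
# hidden layer through an extra weight matrix A_l, so the pre-activation of
# layer l becomes  W_l h_{l-1} + b_l + A_l s.  All sizes are assumed values.
import torch
import torch.nn as nn


class SpeakerCodeDNN(nn.Module):
    """Hypothetical DNN acoustic model with speaker-code inputs to each hidden layer."""

    def __init__(self, in_dim=440, hid_dim=1024, out_dim=2000, code_dim=50, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hid_dim] * n_layers
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers)])
        # New connection weights that map the speaker code into each hidden layer.
        self.code_proj = nn.ModuleList(
            [nn.Linear(code_dim, dims[i + 1], bias=False) for i in range(n_layers)]
        )
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, x, code):
        h = x
        for lin, proj in zip(self.layers, self.code_proj):
            # Pre-activation W_l h + b_l + A_l s, followed by the usual nonlinearity.
            h = torch.sigmoid(lin(h) + proj(code))
        return self.out(h)  # frame-level state logits


def adapt_speaker_code(model, feats, targets, code_dim=50, steps=50, lr=0.1):
    """Estimate a speaker code for a new speaker from a few labelled frames.

    All pre-trained weights (including the code projections) stay frozen; only
    the code vector is updated by back-propagating a frame-level cross-entropy
    loss, mirroring the supervised adaptation setting described in the abstract.
    """
    for p in model.parameters():
        p.requires_grad_(False)
    code = torch.zeros(1, code_dim, requires_grad=True)
    opt = torch.optim.SGD([code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(feats, code.expand(feats.size(0), -1))
        loss = nn.functional.cross_entropy(logits, targets)
        loss.backward()
        opt.step()
    return code.detach()


if __name__ == "__main__":
    # Toy usage with random frames standing in for adaptation data.
    model = SpeakerCodeDNN()
    feats = torch.randn(200, 440)             # 200 adaptation frames
    targets = torch.randint(0, 2000, (200,))  # frame-level state labels
    speaker_code = adapt_speaker_code(model, feats, targets)
    print(speaker_code.shape)                 # torch.Size([1, 50])
```

The same frozen-network, trainable-code loop could in principle be driven by a sequence-level criterion instead of the frame-level cross-entropy used here; the abstract mentions both settings.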
Language
English
Identifiers
ISSN: 2329-9290
eISSN: 2329-9304
DOI: 10.1109/TASLP.2014.2346313
Title ID: cdi_ieee_primary_6874531
