
Details

Author(s) / Contributors
Title
Recognition of Audio Depression Based on Convolutional Neural Network and Generative Antagonism Network Model
Is part of
  • IEEE access, 2020, Vol.8, p.101181-101191
Place / Publisher
Piscataway: IEEE
Year of publication
2020
Link to full text
Source
EZB Electronic Journals Library
Descriptions / Notes
  • This paper proposes an audio-based depression recognition method that combines a convolutional neural network (CNN) with a generative adversarial network (GAN; rendered "generative antagonism network" in the published title). First, the dataset is preprocessed: long silent segments are removed and the remaining audio is spliced into a new file. Next, speech features such as Mel-frequency cepstral coefficients (MFCCs), short-term energy, and spectral entropy are extracted using an audio difference-normalization algorithm. The resulting feature matrices, which capture attributes unique to each subject's voice, form the basis for model training. A model named DR_AudioNet, built on the CNN-GAN combination, is then used for depression recognition: a second-stage model refines the first and performs classification using the normalized features of the two segments adjacent to the current audio segment. Experiments on the AViD-Corpus and DAIC-WOZ datasets show that the proposed method reduces depression recognition error compared with existing methods, with RMSE and MAE values on both datasets improving on the comparison algorithms by more than 5%.
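The preprocessing step the abstract describes (removing long silent segments and splicing the rest into a new audio stream) can be sketched with a simple short-term-energy threshold. This is a minimal illustration, not the paper's actual pipeline; `frame_len` and `threshold_ratio` are assumed values chosen for the example.

```python
import numpy as np

def remove_silence(signal, frame_len=400, threshold_ratio=0.05):
    """Drop non-overlapping frames whose short-term energy falls below a
    fraction of the peak frame energy, then splice the kept frames together.
    frame_len and threshold_ratio are illustrative, not taken from the paper."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    energy = (frames ** 2).sum(axis=1)              # short-term energy per frame
    keep = energy > threshold_ratio * energy.max()  # voiced-frame mask
    return frames[keep].reshape(-1)

# Synthetic example: 1 s of a 220 Hz tone padded by 1 s of silence on each side
sr = 16000
t = np.arange(sr) / sr
audio = np.concatenate([np.zeros(sr),
                        0.5 * np.sin(2 * np.pi * 220 * t),
                        np.zeros(sr)])
spliced = remove_silence(audio)  # silent frames removed, voiced frames spliced
```

After splicing, only the voiced second survives, so `spliced` is about one third the length of the padded input; the same energy values could also serve directly as one of the features the abstract lists.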
