Improving the Efficiency of Speech Emotion Recognition System by Generative Adversarial Network in Clinical Psychology

Document Type: Original Article

Authors

1 Faculty of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran

2 Department of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran

Abstract

Introduction: In psychotherapy, speech emotion recognition technology holds promise for uncovering the factors that make some psychotherapists more effective than others, insight that can substantially improve diagnosis and treatment. By identifying individuals at heightened risk of suicide or those displaying suicidal tendencies, clinicians can intervene preventively, addressing a long-standing need in psychology and ultimately reducing treatment costs. There is therefore a pressing demand for speech-based emotion recognition and for extensive emotional speech databases. However, collecting a database with a sufficient number of samples would traditionally take decades, so machine learning techniques such as data augmentation and feature selection play a pivotal role.
Methods: This paper introduces a solution to the challenge of training deep neural networks when the training data are limited and lack diversity within each class. The proposed approach is an adversarial data-augmentation network based on generative adversarial networks (GANs), consisting of a generator, an autoencoder, and a classifier. Through adversarial training, these networks combine feature vectors of each class in the feature space and add the resulting vectors to the database. In addition, a separate GAN is trained for each class, keeping the generated samples similar to the real ones while preserving the emotional distinctions between classes. To overcome vanishing gradients, which hinder training and can halt learning before the generator has captured the data distribution, the paper replaces the standard cross-entropy loss with a Wasserstein divergence loss, yielding higher-quality synthetic samples.
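The abstract does not specify the exact architecture or hyperparameters, so the following is only a minimal Python (PyTorch) sketch of the kind of per-class, feature-space GAN with a Wasserstein-style critic and gradient penalty that the Methods describe. The feature dimension (FEAT_DIM), latent size (NOISE_DIM), layer widths, optimizer settings, and the random placeholder batch are illustrative assumptions, not the authors' configuration.

# Hedged sketch: feature-space augmentation with a WGAN-GP-style
# generator/critic pair. One such pair would be trained per emotion class.
import torch
import torch.nn as nn

FEAT_DIM = 384    # assumed size of an acoustic feature vector
NOISE_DIM = 64    # assumed latent dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, FEAT_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # unbounded score, not a probability
        )
    def forward(self, x):
        return self.net(x)

def gradient_penalty(critic, real, fake):
    # Penalty that pushes the critic's gradient norm toward 1 (WGAN-GP).
    alpha = torch.rand(real.size(0), 1)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(scores.sum(), mixed, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

def train_step(gen, critic, opt_g, opt_c, real_feats, lambda_gp=10.0):
    # Critic update: maximize E[D(real)] - E[D(fake)], minus the penalty.
    z = torch.randn(real_feats.size(0), NOISE_DIM)
    fake_feats = gen(z).detach()
    loss_c = (critic(fake_feats).mean() - critic(real_feats).mean()
              + lambda_gp * gradient_penalty(critic, real_feats, fake_feats))
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()

    # Generator update: make synthetic feature vectors score like real ones.
    z = torch.randn(real_feats.size(0), NOISE_DIM)
    loss_g = -critic(gen(z)).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_c.item(), loss_g.item()

gen, critic = Generator(), Critic()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.5, 0.9))
real_batch = torch.randn(32, FEAT_DIM)  # placeholder for real class features
train_step(gen, critic, opt_g, opt_c, real_batch)

In the full system, the vectors produced by each class-specific generator would then pass through the autoencoder and classifier branches described above before being added to the training database.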
Results: The model was evaluated on the Berlin Database of Emotional Speech (EMO-DB), which served as the training, validation, and test data. Combining synthetic and real feature vectors effectively mitigated the vanishing-gradient problem and considerably shortened network training. The results show that the data generated by the proposed network improve speech emotion recognition and lead to better emotional classification.
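As a complementary sketch of the evaluation setup, the snippet below shows how generated feature vectors would typically be mixed into the training portion only, so that evaluation remains on purely real data. The random placeholder arrays (standing in for EMO-DB feature vectors), the SVM classifier, the seven-class label space, and the 80/20 split are illustrative assumptions rather than the paper's exact protocol.

# Hedged sketch: augmenting the real training set with generated feature
# vectors before fitting an emotion classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_real = rng.normal(size=(500, 384))     # placeholder real feature vectors
y_real = rng.integers(0, 7, size=500)    # placeholder labels (7 emotion classes)

X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.2, stratify=y_real, random_state=0)

# Synthetic vectors would come from the per-class generators above; only the
# training portion is augmented so the test set stays purely real.
X_synth = rng.normal(size=(200, 384))
y_synth = rng.integers(0, 7, size=200)
X_aug = np.vstack([X_train, X_synth])
y_aug = np.concatenate([y_train, y_synth])

clf = SVC(kernel="rbf", C=1.0).fit(X_aug, y_aug)
print("accuracy on held-out real data:",
      accuracy_score(y_test, clf.predict(X_test)))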

Keywords

