Document Type : Original Article
Faculty of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran
Department of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran
Introduction: In psychotherapy, speech emotion recognition technology holds promise for uncovering the factors behind the varying effectiveness of psychotherapists, insight that can significantly improve diagnosis and treatment. By identifying individuals at heightened risk of suicide or displaying suicidal tendencies, preventive measures can be taken, addressing a long-standing need in psychology and ultimately reducing treatment costs. There is therefore a pressing demand for speech-based emotion recognition and for an extensive emotional speech database. However, amassing a database with a sufficient number of samples would traditionally take decades. Machine learning techniques such as data augmentation and feature selection play a pivotal role in addressing this challenge.
Methods: This paper introduces a solution to the challenge of training deep neural networks when the training data is limited and lacks diversity within each class. The proposed approach is an adversarial data augmentation network based on generative adversarial networks (GANs), consisting of a generator, an autoencoder, and a classifier. Through adversarial training, these networks combine feature vectors from each class in the feature space and add the resulting synthetic samples to the database. In addition, a separate GAN is proposed for each class, ensuring similarity between real and generated samples while preserving emotional separation between the classes. To overcome the vanishing-gradient problem, which hinders training and halts learning before the data distribution is fully captured, the paper proposes a divergence-based loss in place of the cross-entropy loss to generate high-quality synthetic samples.
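The per-class combination of feature vectors described above can be illustrated with a simplified stand-in: instead of a trained GAN generator, a convex (mixup-style) combination of real feature vectors drawn from the same emotion class produces new same-class samples in the feature space. The function name, feature dimensions, and interpolation range below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_class_features(features, labels, n_new_per_class,
                           alpha_low=0.2, alpha_high=0.8, rng=rng):
    """Synthesize new feature vectors as convex combinations of random
    pairs of real vectors belonging to the same emotion class."""
    new_feats, new_labels = [], []
    for cls in np.unique(labels):
        cls_feats = features[labels == cls]
        for _ in range(n_new_per_class):
            # pick two real samples of this class and interpolate
            i, j = rng.choice(len(cls_feats), size=2, replace=True)
            a = rng.uniform(alpha_low, alpha_high)
            new_feats.append(a * cls_feats[i] + (1 - a) * cls_feats[j])
            new_labels.append(cls)
    return np.vstack(new_feats), np.array(new_labels)

# toy acoustic feature matrix: 10 samples, 13 MFCC-like dims, 2 emotion classes
X = rng.normal(size=(10, 13))
y = np.array([0] * 5 + [1] * 5)
X_new, y_new = augment_class_features(X, y, n_new_per_class=3)

# augmented training set = real vectors plus synthetic same-class vectors
X_aug = np.vstack([X, X_new])
y_aug = np.concatenate([y, y_new])
```

Because each synthetic vector lies on a segment between two real vectors of the same class, the class label of the generated sample is preserved by construction, mirroring the per-class generators in the proposed architecture.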
Results: The model's performance was evaluated on the Berlin Emotional Database, which was split into training, validation, and test sets. Combining artificial and real feature vectors effectively mitigated the vanishing-gradient problem and significantly shortened network training. The results demonstrate that data generated by the proposed network can enhance speech signal emotion recognition, leading to improved emotional classification capability.