Speech emotion recognition based on dynamic convolutional neural network


  • Ziyao Lin
  • Zhangfang Hu
  • Kuilin Zhu




Speech emotion recognition, Dynamic convolutional kernel, Neural network


In speech emotion recognition, deep learning algorithms that extract and classify features from audio emotion samples usually consume substantial computational resources, which makes the system complex. This paper proposes a speech emotion recognition system based on a dynamic convolutional neural network combined with a bidirectional long short-term memory (BiLSTM) network. On the one hand, the dynamic convolutional kernel allows the network to extract global dynamic emotion information, improving performance while keeping the model's computational cost in check; on the other hand, the BiLSTM enables the model to classify emotion features more effectively by exploiting temporal information. Experiments on the CASIA Chinese speech emotion dataset, the EMO-DB German emotion corpus, and the IEMOCAP English corpus yield average emotion recognition accuracies of 59.08%, 89.29%, and 71.25%, which are 1.17%, 1.36%, and 2.97% higher, respectively, than those of speech emotion recognition systems using mainstream models. These results demonstrate the effectiveness of the proposed method.
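The core idea behind a dynamic convolutional kernel is to keep a small bank of candidate kernels and mix them with input-dependent attention weights before convolving, so each utterance effectively gets its own kernel at roughly the cost of a single convolution. The sketch below is a minimal 1-D NumPy illustration of that aggregation step (not the authors' implementation; the pooling, attention parameters, and kernel-bank size are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv1d(x, kernels, attn_w, attn_b):
    """Dynamic 1-D convolution: attention over a bank of K kernels.

    x       : (T,)        input sequence (e.g. one acoustic feature track)
    kernels : (K, ksize)  bank of K candidate convolution kernels
    attn_w  : (K,)        attention projection applied to a pooled summary of x
    attn_b  : (K,)        attention bias
    Returns the convolved output and the attention weights.
    """
    # Cheap input summary via global average pooling.
    pooled = x.mean()
    # Softmax attention over the K kernels, conditioned on the input.
    pi = softmax(attn_w * pooled + attn_b)          # (K,)
    # Aggregate the kernels *before* convolving, so only one
    # convolution is executed regardless of K.
    kernel = (pi[:, None] * kernels).sum(axis=0)    # (ksize,)
    return np.convolve(x, kernel, mode="valid"), pi
```

Because the attention weights `pi` vary with the input, the effective kernel adapts per utterance, while the inference cost stays close to that of an ordinary convolution; in the full system such dynamic convolution layers would feed frame-level features into the BiLSTM for temporal classification.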









How to Cite

Lin, Z., Hu, Z., & Zhu, K. (2023). Speech emotion recognition based on dynamic convolutional neural network. Journal of Computing and Electronic Information Management, 10(1), 72-77. https://doi.org/10.54097/jceim.v10i1.5756
