Exploring Speech Emotion Recognition in Tribal Language with Deep Learning Techniques

Authors

  • Subrat Kumar Nayak, Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India
  • Ajit Kumar Nayak, Department of Computer Science and Information Technology, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India
  • Smitaprava Mishra, Department of Computer Science and Information Technology, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India
  • Prithviraj Mohanty, Department of Computer Science and Information Technology, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India
  • Nrusingha Tripathy, Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India
  • Kumar Surjeet Chaudhury, Department of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India

DOI:

https://doi.org/10.32985/ijeces.16.1.6

Keywords:

KUI Dataset, Speech Emotion Recognition, Deep Learning, Long Short-Term Memory, Data Augmentation

Abstract

Emotion is fundamental to interpersonal interaction because it aids mutual understanding, and emotion recognition is central to developing human-computer interaction and related digital products. Driven by the demands of human-computer interaction applications, deep learning models for recognizing emotion from speech have become an essential area of research. However, most speech emotion recognition systems have been developed only for European and a few Asian languages, and no such dataset exists for a low-resource tribal language like KUI. We therefore created a KUI dataset and applied augmentation techniques to increase its size. This study performs speech emotion recognition on the low-resource KUI speech dataset and compares the results obtained with and without augmentation. The dataset was recorded in a studio environment to obtain high-quality speech data, and the recordings are labeled with six perceived emotions: ସଡାଙ୍ଗି (angry), ରେହା (happy), ଆଜି (fear), ବିକାଲି (sad), ବିଜାରି (disgust), and ଡ଼େକ୍ (surprise). Mel-frequency cepstral coefficients (MFCCs) are used for feature extraction. Deep learning offers an alternative to traditional methods for recognizing speech emotion; this study uses a hybrid architecture of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) layers as the classifier. The results are compared with existing benchmark models, and the experiments demonstrate that the proposed hybrid model achieves an accuracy of 96% without augmentation and 97% with augmentation.
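
The paper itself does not include code; the following is a minimal sketch of the kind of MFCC + CNN-LSTM pipeline the abstract describes, assuming librosa for feature extraction and Keras for the model. All layer sizes, MFCC settings, and the noise-based augmentation function are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' code): MFCC features + hybrid CNN-LSTM classifier.
# Assumes librosa for audio loading/feature extraction and Keras for the model.
import numpy as np
import librosa
from tensorflow.keras import layers, models

NUM_EMOTIONS = 6          # angry, happy, fear, sad, disgust, surprise
N_MFCC = 40               # illustrative number of MFCC coefficients
MAX_FRAMES = 200          # illustrative fixed sequence length (pad/truncate)

def augment_with_noise(y: np.ndarray, noise_level: float = 0.005) -> np.ndarray:
    """Illustrative augmentation: add low-level Gaussian noise to the waveform."""
    return y + noise_level * np.random.randn(len(y))

def extract_mfcc(path: str) -> np.ndarray:
    """Load a clip and return a (MAX_FRAMES, N_MFCC) MFCC matrix."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC).T   # (frames, n_mfcc)
    if mfcc.shape[0] < MAX_FRAMES:                              # pad short clips
        mfcc = np.pad(mfcc, ((0, MAX_FRAMES - mfcc.shape[0]), (0, 0)))
    return mfcc[:MAX_FRAMES]

def build_model() -> models.Model:
    """1-D CNN front end followed by an LSTM and a softmax emotion head."""
    model = models.Sequential([
        layers.Input(shape=(MAX_FRAMES, N_MFCC)),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.LSTM(128),
        layers.Dropout(0.3),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```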

Published

2024-12-11

How to Cite

[1] S. Kumar Nayak, A. Kumar Nayak, S. Mishra, P. Mohanty, N. Tripathy, and K. Surjeet Chaudhury, “Exploring Speech Emotion Recognition in Tribal Language with Deep Learning Techniques”, IJECES, vol. 16, no. 1, pp. 53-64, Dec. 2024.

Section

Original Scientific Papers