Amazigh Spoken Digit Recognition using a Deep Learning Approach based on MFCC
DOI:
https://doi.org/10.32985/ijeces.14.7.6
Keywords:
CNN, Deep learning, digits, Amazigh, Speech, ASR
Abstract
The field of speech recognition has made human-machine voice interaction more convenient. Recognizing spoken digits is particularly useful for communication that involves numbers, such as providing a registration code, cellphone number, score, or account number. This article presents our work on Amazigh Automatic Speech Recognition (ASR) using a deep learning-based approach. Our method applies a convolutional neural network (CNN) to Mel-Frequency Cepstral Coefficients (MFCC) extracted from the audio samples, treating the resulting spectrogram-like representations as input to the network. To recognize Amazigh numerals, we collected a database of the digits zero through nine spoken by 42 native Amazigh speakers, men and women between the ages of 20 and 40. Our experimental results show that spoken Amazigh digits can be recognized with 91.75% accuracy, 93% precision, and 92% recall. These preliminary results are satisfactory given the size of the training database, which motivates us to further improve the system's performance and reach a higher recognition rate. Our findings align with those reported in the existing literature.
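The abstract does not specify the exact network architecture or feature dimensions. The sketch below is a minimal, hypothetical illustration of the described pipeline: MFCC extraction from a digit recording followed by a small 2-D CNN classifier over ten classes. The sample rate, number of coefficients, frame padding, and layer sizes are all illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch (assumed parameters, not the paper's exact pipeline):
# MFCC features via librosa, classified by a small Keras CNN over 10 digit classes.
import numpy as np
import librosa
import tensorflow as tf

SAMPLE_RATE = 16000   # assumed sampling rate
N_MFCC = 40           # assumed number of MFCC coefficients
MAX_FRAMES = 100      # pad/truncate each utterance to a fixed number of frames
NUM_CLASSES = 10      # digits zero through nine

def wav_to_mfcc(path: str) -> np.ndarray:
    """Load a WAV file and return a fixed-size (N_MFCC, MAX_FRAMES, 1) MFCC array."""
    signal, _ = librosa.load(path, sr=SAMPLE_RATE)
    mfcc = librosa.feature.mfcc(y=signal, sr=SAMPLE_RATE, n_mfcc=N_MFCC)
    # Pad or truncate along the time axis so every utterance has the same shape.
    if mfcc.shape[1] < MAX_FRAMES:
        mfcc = np.pad(mfcc, ((0, 0), (0, MAX_FRAMES - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :MAX_FRAMES]
    return mfcc[..., np.newaxis]  # add a channel dimension for the CNN

def build_model() -> tf.keras.Model:
    """Small 2-D CNN over the MFCC representation; architecture is a plausible guess."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_MFCC, MAX_FRAMES, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage, assuming lists of file paths and integer digit labels:
# X = np.stack([wav_to_mfcc(p) for p in wav_paths])
# y = np.array(labels)
# build_model().fit(X, y, epochs=30, validation_split=0.2)
```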
License
Copyright (c) 2023 International Journal of Electrical and Computer Engineering Systems
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.