International Journal of Electrical and Computer Engineering Systems
https://ijeces.ferit.hr/index.php/ijeces
<p>The International Journal of Electrical and Computer Engineering Systems publishes open-access original research in the form of original scientific papers, review papers, case studies, and preliminary communications that have not been published in, or submitted to, any other publication. It covers the theory and application of electrical and computer engineering, the synergy of computer systems and computational methods with electrical and electronic systems, and interdisciplinary research.<br /><br /></p> <h2>Review Speed</h2> <p>The average number of weeks it takes for an article to go through the editorial review process for this journal, including standard rejects and excluding desk rejects (for articles submitted in 2024):</p> <p><strong>Submission to first decision</strong><br />From manuscript submission to the initial decision on the article (accept/reject/revisions) – <strong>5.00 weeks</strong></p> <p><strong>Submission to final decision</strong><br />From manuscript submission to the final editorial decision (accept/reject) – <strong>7.14 weeks</strong></p> <p><strong>Any manuscript not prepared in accordance with the <a href="https://ijeces.ferit.hr/index.php/ijeces/about/submissions">IJECES template</a> will be rejected immediately in the first step (desk reject) and will not be sent for review.<br /><br /></strong></p> <h2>Publication Fees</h2> <p>The publication fee is <strong>500 EUR</strong> for up to <strong>8 pages</strong>, plus <strong>50 EUR</strong> for <strong>each additional page</strong>.</p> <p>The maximum length of a paper is 20 pages; the <strong>maximum publication fee is therefore 1100 EUR</strong> (500 EUR for the first 8 pages + 12 × 50 EUR for the 12 additional pages).</p> <p>We operate a <strong>No Waiver</strong> policy.</p> <p><strong><br />Published by the Faculty of Electrical Engineering, Computer Science and Information Technology, Josip Juraj Strossmayer University of Osijek, Croatia.<br /><br /></strong></p> <p><strong>The International Journal of Electrical and Computer Engineering Systems is published with the financial support of the Ministry of Science and Education of the Republic of Croatia.</strong></p>
ISSN 1847-6996

Deep Learning-Based Approach for Disease Stage Classification of Sunflower Leaf
https://ijeces.ferit.hr/index.php/ijeces/article/view/3675
<p>Accurate evaluation of disease severity is crucial for managing disease and limiting yield loss, and the classification of disease stages is essential for estimating severity. It takes cultivators and botanical researchers extensive time to meticulously examine each leaf image and identify the disease stage in order to assess severity at the field scale. Extracting the damaged leaf area is also achievable with image segmentation, although that approach has drawbacks such as threshold selection and a lack of grayscale difference. Deep learning, meanwhile, has produced recent breakthroughs in various fields, such as high-resolution image synthesis and image recognition and categorization. In this work, the stages of two diseases (Alternaria leaf blight and powdery mildew) are classified using sunflower leaf images taken from sunflower farms in the Marathwada region of India during the Rabi season. With the help of botanists, images are labeled into three disease-stage classes and one healthy class as ground truth. A series of deep convolutional neural networks (Visual Geometry Group models with 16 and 19 weight layers, VGG16 and VGG19) with a transfer learning and fine-tuning approach is trained, validated, and tested using stratified k-fold cross-validation with k = 4 and k = 5. The findings indicate that VGG16 with k-fold = 5 and fine-tuning gives the highest testing accuracy for Alternaria leaf blight, at 90.25%. For VGG19 with k-fold = 5 and fine-tuning, the highest testing accuracy for powdery mildew is 86.89%. Additionally, confidence interval calculation shows narrow intervals of 3% and 4% at a 95% confidence level for the VGG16 and VGG19 models, respectively.</p>Rupali Sarode, Arti Deshpande
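The abstract above reports confidence intervals of roughly 3% and 4% at a 95% confidence level. A minimal sketch of how such an interval can be obtained with the standard normal approximation for a binomial proportion (the paper's exact method is not stated in the abstract, and the test-set size n = 400 below is a hypothetical value):

```python
import math

def accuracy_ci(acc: float, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation confidence interval for a test accuracy
    `acc` measured on `n` samples; z = 1.96 gives a 95% interval."""
    half = z * math.sqrt(acc * (1.0 - acc) / n)
    return (acc - half, acc + half)

# Hypothetical example: 90.25% accuracy on an assumed 400 test images
lo, hi = accuracy_ci(0.9025, 400)  # half-width ~2.9 percentage points
```

The interval narrows with the square root of the test-set size, which is why larger evaluation folds yield tighter bounds.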
Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
https://creativecommons.org/licenses/by-nc-nd/4.0
2025-02-25
Vol. 16, No. 3, pp. 195–203
DOI: 10.32985/ijeces.16.3.1

Optimized Weed Image Classification via Parallel Convolutional Neural Networks Integrating an Excess Green Index Channel
https://ijeces.ferit.hr/index.php/ijeces/article/view/3612
<p>Weed management is an essential operational task to ensure the health of crops and trees. The emergence of machine vision enables convolutional neural networks (CNNs) to classify weed types automatically, which can subsequently inform a weed management strategy. A dominant approach to CNN-based weed classification is to train a network with RGB images as input, either by adopting transfer learning or by using a custom network. However, such an approach limits the incorporation of prior knowledge as a significant feature of the network to improve classification accuracy. This work proposes a novel network based on parallel convolutional neural networks (P-CNN), leveraging the excess green index (ExG) channel as an additional input to the RGB image channels. We argue that the ExG channel captures the greenness of weeds from the visible light spectrum, an important feature in many vegetation images such as leaves or green plants. The results show that the proposed P-CNN, combining ResNet50 and a custom CNN, obtains a Top-1 accuracy of 97.2% on the public weed dataset DeepWeeds, compared with only 95.7% for the baseline ResNet50 alone. The results show the significant contribution of domain-specific knowledge of green indexes in improving the classification performance on weed images. This enhancement could transform real-world weed management by enabling highly precise detection, allowing the classifier to focus on differentiating green color features between leaves with nearly identical morphology.</p>Seyed Abdollah Vaghefi, Mohd Faisal Ibrahim, Mohd Hairi Mohd Zaman, Mohd Marzuki Mustafa, Seri Mastura Mustaza, Mohd Asyraf Zulkifley
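The excess green index that the abstract builds on is commonly defined as ExG = 2g − r − b over chromaticity-normalized RGB channels. A minimal sketch assuming that standard formulation (the paper's exact normalization is not given in the abstract):

```python
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel excess green index ExG = 2g - r - b, where r, g, b
    are the RGB channels normalized so that r + g + b = 1."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2.0 * g - r - b
```

The resulting single-channel map (high for green vegetation, low for soil or red/blue clutter) can then be stacked with the RGB channels as a fourth input plane to the parallel network.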
Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
https://creativecommons.org/licenses/by-nc-nd/4.0
2025-02-28
Vol. 16, No. 3, pp. 205–216
DOI: 10.32985/ijeces.16.3.2

A Deep Learning Framework with Optimizations for Facial Expression and Emotion Recognition from Videos
https://ijeces.ferit.hr/index.php/ijeces/article/view/3544
<p>Human emotion recognition has many real-time applications in the healthcare and psychology domains. Due to the widespread use of smartphones, large volumes of video content are being produced, and a video contains both audio and video frames in the form of images. With advances in Artificial Intelligence (AI), there has been significant improvement in the development of computer vision applications. Accurately recognizing human emotions from given audio-visual content is a very challenging problem. However, with improvements in deep learning techniques, analyzing audio-visual content for emotion recognition has become possible. Existing deep learning methods focus on either audio content or video frames for emotion recognition; an integrated approach that handles audio and video frames in a single framework is needed to leverage efficiency. This paper proposes a deep learning framework with specific optimizations for facial expression and emotion recognition from videos. We propose an algorithm, Learning Human Emotion Recognition (LbHER), which exploits hybrid deep learning models that process both audio and video frames for emotion recognition. Our empirical study with the benchmark IEMOCAP dataset reveals that the proposed framework and its underlying algorithm deliver state-of-the-art human emotion recognition. Our experimental results show that the proposed algorithm outperforms many existing models, with the highest average accuracy of 94.66%. Our framework can be integrated into existing computer vision applications to recognize emotions from videos automatically.</p>Ranjit Kumar Nukathati, Uday Bhaskar Nagella, AP Siva Kumar
Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
https://creativecommons.org/licenses/by-nc-nd/4.0
2025-03-04
Vol. 16, No. 3, pp. 217–229
DOI: 10.32985/ijeces.16.3.3

Classification of Road Scenes Based on Heterogeneous Features and Machine Learning
https://ijeces.ferit.hr/index.php/ijeces/article/view/3636
<p>Rapid advances in Artificial Intelligence (AI) and Machine Learning (ML) have extensively improved the object detection capabilities of today's smart vehicles. Convolutional Neural Networks (CNNs) based on small, medium, and large networks have made significant contributions to in-vehicle navigation. At the same time, achieving higher accuracy and faster response in autonomous vehicles is still a challenge that needs special care and attention and must be addressed for human safety. Hence, this article proposes a heterogeneous-features-based machine learning framework to distinguish road scenes. The model incorporates object-based, image-based, and diverse conventional features from road scene images drawn from four distinct datasets. Object-based features are acquired using the YOLOv5m model and modified VGG19 networks, whereas image-based features are extracted using the modified VGG19 network. Conventional features are added to the object-based and blind features by applying a variety of descriptors, including matched filters, wavelets, the Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), and the Histogram of Oriented Gradients (HOG). The descriptors extract fine and coarse features to enhance the capabilities of the classifier. Experiments show that the proposed road scene classification framework performs well in distinguishing pairs of scene categories among crosswalks, parking, roads under bridges/tunnels, and highways, achieving an average classification accuracy of 97.62% and a highest of 99.85% between crosswalks and parking. A marginal improvement of approximately 1% is seen when all four categories are considered for evaluation using a multiclass SVM, compared to other competing models.</p>Sanjay Pande, Sarika Khandelwal, Pratik R. Hajare, Poonam T. Agarkar, Rajani D. Singh, Prashant R. Patil
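Among the conventional descriptors listed in the abstract above, the local binary pattern is simple enough to sketch directly. A minimal 8-neighbour LBP over a grayscale image, for illustration only (the paper's exact LBP variant, radius, and sampling scheme are not specified in the abstract):

```python
import numpy as np

def lbp_8(image: np.ndarray) -> np.ndarray:
    """Minimal 8-neighbour local binary pattern: each interior pixel
    receives a byte whose bits record which of its 8 neighbours are
    greater than or equal to the centre value."""
    img = image.astype(np.int32)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

A histogram of these codes over an image patch gives a compact texture feature that can be concatenated with the CNN-derived features before classification.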
Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
https://creativecommons.org/licenses/by-nc-nd/4.0
2025-02-27
Vol. 16, No. 3, pp. 231–242
DOI: 10.32985/ijeces.16.3.4

Application of Artificial Vision Based on Convolutional Neural Networks for Predictive Detection of Faults in Electrical Distribution Line Insulators
https://ijeces.ferit.hr/index.php/ijeces/article/view/3529
<p>Insulators play a crucial role in the transport and distribution of electrical energy. They separate the energized conductor from the metal structure and support the conductors against adverse weather such as wind and rain. However, these devices lose their insulating and mechanical properties when exposed to climatic factors such as sun exposure, rain, dust, and environmental pollution. This is due to the formation of a layer of organic matter, as well as breaks and fissures, which can trigger adverse effects such as electric arcs. For this reason, it is essential to identify these failures effectively. This research proposes an innovative solution that integrates artificial vision into uncrewed vehicles, using YOLOv5 object detection technology based on convolutional neural networks to analyze 3000 images of insulators for signs of deterioration such as the presence of organic matter, breaks, or cracks. The results show an accuracy of over 90% in detecting failures. Deploying YOLOv5 alongside an uncrewed vehicle allows faster and more accurate real-time inspection of insulators along power distribution lines. Furthermore, this artificial vision technology allows detailed data on the condition of the insulators to be collected in an automated manner, which facilitates the planning of preventive and corrective maintenance actions. This not only reduces the costs associated with the maintenance of distribution lines but also contributes to improving the reliability and efficiency of the electrical system.</p>Vicente Paul Astudillo, Pablo Catota
Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
https://creativecommons.org/licenses/by-nc-nd/4.0
2025-02-28
Vol. 16, No. 3, pp. 243–251
DOI: 10.32985/ijeces.16.3.5

A New Encryption Algorithm for Voice Messages on Social Media Using Magic Cube GF (2^8) Technology
https://ijeces.ferit.hr/index.php/ijeces/article/view/3420
<p>With the rise of multimedia technology, audio file encryption has become increasingly significant, especially for voice messages in popular social media applications such as WhatsApp. Voice messages hold great social significance, and to ensure their security they must be encrypted before being transmitted over the internet. This paper proposes an efficient algorithm to securely encrypt voice messages. The proposed algorithm is based on a magic cube and reduces the execution time of the Advanced Encryption Standard (AES) cipher by replacing the MixColumns function with a 3 × 3 × 3 magic cube over GF(2^8) with an irreducible polynomial. This reduces the execution time of the AES cryptosystem and enhances complexity by utilizing additional keys generated by the 3 × 3 × 3 magic cube to develop a block cipher algorithm that encodes audio files using two types of finite fields, GF(P) and GF(2^8). The algorithm places a key of three cells and a voice message of six cells on each face of the 3 × 3 × 3 magic cube. Time complexity and encryption quality are evaluated according to National Institute of Standards and Technology standards, and the peak signal-to-noise ratio under differential attacks is calculated. The total complexity achieved, 256^9 × 251^18 for GF(P) and 256^9 × 256^18 for GF(2^8), is measured for comparison. Simulation results demonstrate a significant reduction in execution time and increased encryption complexity. Moreover, the magic cube with three faces (3 × 3 × 3) exhibits superior performance in terms of complexity and speed compared to the third-order magic square.</p>Mohammed M. Al-Ezzi, Wang Weiping, Abdul Monem S. Rahma, Hasnain Ali Al mashhadani, Mazen R. Hassan
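The construction above works over GF(2^8), the same byte field AES itself uses. A minimal sketch of multiplication in GF(2^8) with the AES irreducible polynomial x^8 + x^4 + x^3 + x + 1; the paper's magic-cube arrangement and its choice of polynomial are its own, so this shows only the underlying field arithmetic:

```python
def gmul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reducing modulo the
    AES irreducible polynomial x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a           # add (XOR) the current shifted multiplicand
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF  # multiply a by x
        if carry:
            a ^= 0x1B        # reduce modulo the irreducible polynomial
    return p
```

The FIPS-197 specification's worked example {57} · {83} = {C1} is a handy sanity check for any such implementation.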
Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
https://creativecommons.org/licenses/by-nc-nd/4.0
2025-03-17
Vol. 16, No. 3, pp. 253–263
DOI: 10.32985/ijeces.16.3.6