https://ijeces.ferit.hr/index.php/ijeces/issue/feed International Journal of Electrical and Computer Engineering Systems 2025-04-07T00:00:00+02:00 Mario Vranješ mario.vranjes@ferit.hr Open Journal Systems <p>The International Journal of Electrical and Computer Engineering Systems publishes open-access original research in the form of original scientific papers, review papers, case studies and preliminary communications that have not been published in or submitted to another publication. It covers the theory and application of electrical and computer engineering, the synergy of computer systems and computational methods with electrical and electronic systems, as well as interdisciplinary research.</p> <h2>Review Speed</h2> <p>The average number of weeks it takes for an article to go through the editorial review process for this journal, including standard rejects and excluding desk rejects (for articles submitted in 2024):</p> <p><strong>Submission to the first decision</strong><br />From manuscript submission to the initial decision on the article (accept/reject/revisions) – <strong>5.00 weeks</strong></p> <p><strong>Submission to the final decision</strong><br />From manuscript submission to the final editorial decision (accept/reject) – <strong>7.14 weeks</strong></p> <p><strong>Any manuscript not written in accordance with the <a href="https://ijeces.ferit.hr/index.php/ijeces/about/submissions">IJECES template</a> will be rejected immediately in the first step (desk reject) and will not be sent to the review process.</strong></p> <h2>Publication Fees</h2> <p>The publication fee is <strong>500 EUR</strong> for up to <strong>8 pages</strong> and <strong>50 EUR</strong> for <strong>each additional page</strong>.</p> <p>The maximum length of a paper is 20 pages; the <strong>maximum publication fee</strong> is therefore <strong>1100 EUR</strong> (500 EUR for the first 8 pages + 12 × 50 EUR for the 12 additional pages).</p> <p>We operate a <strong>No Waiver</strong> policy.</p> <p><strong>Published by the Faculty of Electrical Engineering, Computer Science and Information Technology, Josip Juraj Strossmayer University of Osijek, Croatia.</strong></p> <p><strong>The International Journal of Electrical and Computer Engineering Systems is published with the financial support of the Ministry of Science and Education of the Republic of Croatia.</strong></p> https://ijeces.ferit.hr/index.php/ijeces/article/view/3711 A Hybrid Deep Learning Framework for Speech-to-Text Conversion as Part of Telemedicine System Integrated With 5G 2024-12-19T08:43:38+01:00 Medapati Venkata Manga Naga Sravan Sravan.medapati@gmail.com K Venkata Rao professor_venkat@yahoo.com <p>In today's world, aligning healthcare research with the third Sustainable Development Goal of the United Nations (UN) is crucial. This goal focuses on ensuring health and well-being for all. Technological innovations such as the Internet of Things (IoT) and Artificial Intelligence (AI) are vital to improving healthcare systems. Developing a technology-driven telemedicine system can have a significant impact on society. While current approaches focus on various methods for developing telemedicine modules, advancing these models with the latest technology is essential.
Our paper proposes a deep learning-based framework that allows patients to provide information through voice. The system automatically analyzes this information to provide valuable insights on the doctor's dashboard, making diagnosis and prescription easier for the patient. Our proposed hybrid deep learning framework integrates with 5G technology and focuses on speech-to-text conversion. We introduce a hybrid deep learning model to improve performance in speech-to-text conversion. Our proposed algorithm, AI-Enabled Speech-to-Text Conversion (AIE-STTC), has the potential to match and surpass many existing deep learning models. Our empirical study, conducted using a benchmark dataset, demonstrated an accuracy of 95.32%. In comparison, the baseline models showed lower accuracy rates: CNN achieved 88%, ResNet50 reached 90%, and VGG16 had 89%. Therefore, our proposed methodology has the potential to realize a technology-driven telemedicine system when integrated with other necessary modules in the future. It significantly improves remote patient healthcare, making it more accessible and cost-effective, and points toward a promising paradigm shift in healthcare services.</p> 2025-03-20T00:00:00+01:00 Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems https://ijeces.ferit.hr/index.php/ijeces/article/view/3783 Sensorless Generalized Average Modeling-Based Control for the Resonant LC-DAB Converter 2025-01-06T19:54:57+01:00 Adelina Mukhametdinova adelinam@sjtu.edu.cn Muhammad Mansoor Khan mansoor@sjtu.edu.cn Ruifeng Zhang zrf333@sjtu.edu.cn <p>The dual active bridge (DAB) converter is an efficient power conversion topology designed for applications that require galvanic isolation and bidirectional energy transfer. Among its various configurations, the resonant LC-DAB converter is notable for its ability to significantly reduce switching losses and enhance efficiency. While discrete-time control methodologies are widely employed for the design and analysis of DAB converters, it is challenging to ensure performance stability during steady-state and transient operating modes. Furthermore, high-frequency, single-point measurement of the resonant LC-DAB poses a challenge, as it may not reflect the behavior of the resonant inductor current. To address these issues, a generalized average modeling-based approach is proposed, which minimizes the output-side circulating current. This is achieved by aligning the output current with the secondary-side converter's voltage. The proposed model eliminates the need for a current sensor and demonstrates low sensitivity during transient conditions. A two-stage control loop is utilized: an inner loop for current control and an outer loop with a PI controller for voltage control.
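<p>To illustrate the two-stage control structure described above (an inner current loop driven by an outer PI voltage loop), the following minimal Python sketch shows one possible discrete-time realization. The plant model, gains, sampling period, and component values are illustrative assumptions only; they are not taken from the paper and do not reproduce its generalized average model.</p>
<pre><code>
# Minimal discrete-time sketch of a two-stage (cascaded) control loop:
# an outer PI voltage controller produces the current reference, and the
# inner current loop is abstracted as fast first-order tracking of that
# reference. All gains, time constants, and component values are assumed.

DT = 1e-4                # sampling period [s] (assumed)
KP_V, KI_V = 0.8, 40.0   # outer PI voltage-loop gains (assumed)
TAU_I = 1e-3             # closed inner current-loop time constant [s] (assumed)
C_OUT = 470e-6           # output capacitance [F] (assumed)
R_LOAD = 10.0            # resistive load [ohm] (assumed)

def simulate(v_ref=48.0, steps=2000):
    v_out, i_out, integ = 0.0, 0.0, 0.0
    for _ in range(steps):
        # Outer loop: PI controller on the voltage error sets the current reference
        v_err = v_ref - v_out
        integ += v_err * DT
        i_ref = KP_V * v_err + KI_V * integ

        # Inner loop: the current tracks its reference with a fast first-order response
        i_out += (i_ref - i_out) * DT / TAU_I

        # Output stage: capacitor charged by the current, discharged by the load
        v_out += (i_out - v_out / R_LOAD) * DT / C_OUT
    return v_out

print(f"output voltage after {2000 * DT:.2f} s: {simulate():.2f} V")
</code></pre>
<p>With these assumed values the outer loop drives the output to its 48 V reference while the inner loop keeps the current close to the reference produced by the PI controller, which is the essence of the cascaded arrangement described in the abstract.</p>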
The analysis and design procedure for the proposed control is detailed, followed by simulation and experimental results that demonstrate the effectiveness of the proposed method.</p> 2025-03-18T00:00:00+01:00 Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems https://ijeces.ferit.hr/index.php/ijeces/article/view/3750 Stream-based Identification of Gender using Noninvasive Electroencephalographic Technology 2024-12-12T13:11:32+01:00 Bat-Erdene Gotov baterdene.g@mnums.edu.mn Tengis Tserendondog tengis@must.edu.mn Uurtsaikh Luvsansambuu uurtsaikh@must.edu.mn Munkhbayar Bat-Erdene munkhbayar.b@must.edu.mn Batmunkh Amar abatmunkh@must.edu.mn Dongsung Bae paeds915@smut.ac.kr <p>Numerous studies on EEG signals have revealed differences in brain activity patterns between males and females. However, these differences are not always consistent or significant, as they can be affected by factors such as age, task engagement, and the specifics of EEG measurement. In our research, we introduce a new approach to gender detection called 'Stream-based Identification of Gender using Noninvasive Electroencephalographic Technology'. We employed this technique to investigate how male and female brains respond differently during video-streaming tasks, with the aim of exploring functional disparities between them. This study aims to advance our understanding of gender-specific brain responses. We used data collected in our previous research from 122 volunteers (85 male, 37 female). Utilizing a deep learning (DL) approach allowed us to achieve 99% accuracy in gender identification. The applications of our model extend to various fields, including advertisements, multi-level security systems, and healthcare, showcasing the potential of advanced machine learning techniques in neuroscientific research.</p> 2025-03-17T00:00:00+01:00 Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems https://ijeces.ferit.hr/index.php/ijeces/article/view/3742 LTE Coverage Planning Based on Improved Grey Wolf Optimization 2025-01-18T15:45:12+01:00 Fekar Mohammed Riyadh El Mansour fekarmohammedriyadhelmansour@gmail.com Mustapha Guezouri guezouri.mustapha@univ-oran1.dz <p>Automatic planning and dimensioning of LTE networks is one of the crucial tasks in the mobile networking community. It is well known that this process is an NP-hard problem that requires substantial computing resources. We also observe that currently proposed solutions are still inefficient in terms of scalability (handling a large number of eNodeBs) and runtime. Moreover, SINR handling and the variability of propagation-loss models across area types further complicate the coverage planning task. In this paper, we propose a swarm intelligence-based method for effectively placing and configuring the eNodeBs of an LTE network. In particular, we propose two variants of the grey wolf optimizer (GWO) for LTE coverage planning, namely a discrete version (DGWO) and an improved version (IGWO). The improved version adds a local search rule that explores regions closer to promising solutions. The approaches are simulated on an urban area with many types of clutter. The IGWO technique achieved a coverage rate of 99.0% at a 10 dB SINR threshold and 95.1% at a 12 dB SINR threshold. The obtained results show that IGWO is more effective than the discrete version and other existing metaheuristics in terms of cost and coverage rate.
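<p>For readers unfamiliar with the grey wolf optimizer underlying the DGWO and IGWO variants mentioned above, the following Python sketch shows the canonical GWO position update, in which every candidate solution moves toward the three current best solutions (alpha, beta, delta) under a coefficient that decays over the iterations. The sphere objective, dimensions, and bounds are placeholders; the discretization used in DGWO and the additional local-search rule of IGWO are specific to the paper and are not reproduced here.</p>
<pre><code>
import random

# Canonical grey wolf optimizer (GWO) on a generic continuous objective.
# The sphere function is only a placeholder; the paper applies discrete
# and improved variants of this update to eNodeB placement.

def sphere(x):
    return sum(v * v for v in x)

def encircle(leader_d, x_d, a):
    # D = |C * X_leader - X|,  X' = X_leader - A * D
    A = 2.0 * a * random.random() - a
    C = 2.0 * random.random()
    return leader_d - A * abs(C * leader_d - x_d)

def gwo(objective, dim=10, n_wolves=20, max_iter=200, lo=-10.0, hi=10.0):
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(max_iter):
        # Rank the pack and keep the three best wolves as leaders
        wolves.sort(key=objective)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / max_iter   # coefficient decays linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = []
            for d in range(dim):
                x1 = encircle(alpha[d], wolves[i][d], a)
                x2 = encircle(beta[d], wolves[i][d], a)
                x3 = encircle(delta[d], wolves[i][d], a)
                # New position: average of the moves toward the three leaders,
                # clamped to the search bounds
                new_pos.append(min(hi, max(lo, (x1 + x2 + x3) / 3.0)))
            wolves[i] = new_pos
    return min(wolves, key=objective)

best = gwo(sphere)
print("best objective value:", sphere(best))
</code></pre>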
More specifically, IGWO ensures a coverage improvement (at the 10 dB SINR threshold) of 10.6%, 10.5%, and 2.6% in comparison to DGWO, Tabu search (TS), and discrete particle swarm optimization (DPSO), respectively.</p> 2025-04-07T00:00:00+02:00 Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems https://ijeces.ferit.hr/index.php/ijeces/article/view/3827 Enhancing Energy Efficiency in GAN-based HEVC Video Compression Using Knowledge Distillation 2025-02-11T13:06:37+01:00 Hajar Hardi hardi.hajar@gmail.com Imade Fahd Eddine Fatani i.fatani@usms.ma <p>High Efficiency Video Coding (HEVC) is a widely used video coding standard that has gained broad adoption in applications such as video streaming, broadcasting, real-time conferencing, and storage. The integration of Generative Adversarial Networks (GANs) into HEVC compression has yielded significant improvements in compression performance, reducing video size while maintaining the original quality. In this work, we explore the application of Knowledge Distillation to reduce the energy consumption associated with GAN-based HEVC. By training a smaller student model that imitates the larger teacher model's behavior, we significantly improved energy efficiency. In this paper, we provide a detailed study comparing the traditional HEVC algorithm, GAN-based HEVC, and GAN-based HEVC with Knowledge Distillation. The experimental results demonstrate a reduction in energy consumption of up to 30% while preserving video quality, making it an effective solution for video streaming platforms and energy-constrained devices, and a sustainable approach to video compression that does not diminish video quality.</p> 2025-03-24T00:00:00+01:00 Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems https://ijeces.ferit.hr/index.php/ijeces/article/view/3746 MLbFA: A Machine Learning-Based Face Anti-Spoofing Detection Framework under Replay Attack 2024-12-23T11:14:46+01:00 Vijay V. Chakole chakole.vijay@gmail.com Dr. Swati R. Dixit swati.dixit@raisoni.net <p>The primary aim of the research paper is to deploy an efficient automated face anti-spoofing system that can handle replay attacks in the presence of partial occlusions. For this purpose, the article introduces a novel machine learning-based face anti-spoofing (MLbFA) framework. The system incorporates a modified version of the difference-of-Gaussians technique to compute the overall contrast of the input images, which is then used to enhance image contrast through contrast correction. In parallel, the image details, especially the edges, are enhanced with a Beltrami filter so that they contribute more strongly to the extracted features. The contrast-corrected and edge-enhanced images are averaged to obtain a finer image. Face cropping is achieved using the bounding-box algorithm to reduce computational complexity and improve classification accuracy for region-bounded feature extraction. Conventional, handcrafted features (CF/HF) are extracted from the region of interest (ROI) using various descriptors. The features are reduced in dimension using principal component analysis (PCA) and partitioned into training and testing sets with a 75%:25% ratio.
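<p>The feature-reduction and classification stage just described (PCA on handcrafted features, followed by a 75%:25% train/test split and, as the results below report, an SVM classifier) can be sketched in Python with scikit-learn as follows. The feature matrix is random placeholder data; the descriptors, Beltrami filtering, and bounding-box face cropping of the paper are not reproduced, and the PCA dimensionality and SVM kernel are assumptions.</p>
<pre><code>
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: 400 face crops x 512 handcrafted features,
# with binary labels (0 = genuine, 1 = replay attack). Real descriptors,
# feature dimensions, and PCA settings would come from the paper's pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 512))
y = rng.integers(0, 2, size=400)

# 75%:25% train/test partition, as described in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Dimensionality reduction with PCA followed by an SVM classifier
model = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
</code></pre>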
An experimental study showed that the proposed MLbFA model using a Support Vector Machine (SVM) outperforms other recent face anti-spoofing techniques, with an improvement of 0.11% in classification accuracy over the best-performing Edge-Net Autoencoder model.</p> 2025-03-31T00:00:00+02:00 Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
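<p>The knowledge distillation approach described in the GAN-based HEVC abstract above trains a smaller student model to imitate a larger teacher. This is commonly implemented with a combined objective: a task loss on the ground truth plus a softened KL-divergence term that pulls the student's outputs toward the teacher's. The Python sketch below shows that generic objective in a classification-style form for simplicity; the paper distills a GAN-based compression model, so the networks, temperature, and loss weighting shown here are illustrative assumptions and are not taken from the paper.</p>
<pre><code>
import torch
import torch.nn.functional as F

# Generic knowledge-distillation objective: a weighted sum of the student's
# ordinary task loss against the ground truth and a KL-divergence term that
# matches its softened outputs to the teacher's. Temperature T and weight
# alpha are assumed values.

def distillation_loss(student_logits, teacher_logits, target, T=4.0, alpha=0.7):
    soft_student = F.log_softmax(student_logits / T, dim=1)
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    # The KL term is scaled by T*T, the usual correction for softened gradients
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * T * T
    hard_loss = F.cross_entropy(student_logits, target)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Illustrative usage with random logits for a batch of 8 samples, 10 classes
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, target)
loss.backward()
print("distillation loss:", float(loss))
</code></pre>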