https://ijeces.ferit.hr/index.php/ijeces/issue/feedInternational Journal of Electrical and Computer Engineering Systems2025-09-15T00:00:00+02:00Mario Vranješmario.vranjes@ferit.hrOpen Journal Systems<p>The International Journal of Electrical and Computer Engineering Systems publishes open access original research in the form of original scientific papers, review papers, case studies and preliminary communications that have not been published in, or submitted to, another publication. It covers theory and application of electrical and computer engineering, the synergy of computer systems and computational methods with electrical and electronic systems, as well as interdisciplinary research.<br /><br /></p> <h2>Review Speed</h2> <p>The average number of weeks it takes for an article to go through the editorial review process for this journal, including standard rejects and excluding desk rejects (for articles submitted in 2024):</p> <p><strong>Submission to the first decision</strong><br />From manuscript submission to the initial decision on the article (accept/reject/revisions) – <strong>5.00 weeks</strong></p> <p><strong>Submission to the final decision</strong><br />From manuscript submission to the final editorial decision (accept/reject) – <strong>7.14 weeks</strong></p> <p><strong>Any manuscript not written in accordance with the <a href="https://ijeces.ferit.hr/index.php/ijeces/about/submissions">IJECES template</a> will be rejected immediately in the first step (desk reject) and will not be sent to the review process.<br /><br /></strong></p> <h2>Publication Fees</h2> <p>The publication fee is <strong>500 EUR</strong> for up to <strong>8 pages</strong> and <strong>50 EUR</strong> for <strong>each additional page</strong>.</p> <p><span style="font-size: 10.5pt; font-family: 'Noto Sans',sans-serif; color: black; background: white;">The maximum number of pages for a paper is 20, and therefore, the <strong><span style="font-family: 'Noto Sans',sans-serif;">maximum 
publication fee</span></strong><strong> is 1100 EUR</strong> (500 EUR for the first 8 pages + 12 × 50 EUR for the 12 additional pages).</span></p> <p>We operate a <strong>No Waiver</strong> policy.</p> <p><strong><br />Published by Faculty of Electrical Engineering, Computer Science and Information Technology, Josip Juraj Strossmayer University of Osijek, Croatia.<br /><br /></strong></p> <p><strong>The International Journal of Electrical and Computer Engineering Systems is published with the financial support of the Ministry of Science and Education of the Republic of Croatia.</strong></p>https://ijeces.ferit.hr/index.php/ijeces/article/view/4165Optimizing Computation Offloading in 6G Multi-Access Edge Computing Using Deep Reinforcement Learning2025-06-10T09:50:09+02:00Mamoon M. Saeedmamoon530@gmail.comRashid A. Saeedrabdelhaleem@lu.edu.qaHashim Elshafiehelshafie@kku.edu.saAla Eldin Awoudaaadam@ub.edu.saZeinab E. AhmedZeinab.e.ahmed@gmail.comMayada A. Ahmedmayadanott13@gmail.comRania A Mokhtarramohammed@tu.edu.sa<p>One of the most important technologies for future mobile networks is multi-access edge computing (MEC). Computational tasks can be redirected to edge servers rather than distant cloud servers by placing edge computing facilities at the edge of the wireless access network, meeting the needs of 6G applications that demand high reliability and low latency. At the same time, as wireless network technology develops, a variety of computationally demanding and time-sensitive 6G applications are emerging. These tasks require lower latency and higher processing priority than traditional internet traffic. This study presents a 6G multi-access edge computing network design that reduces the total system cost, formulated as a joint optimization problem. 
To tackle this problem, Joint Computation Offloading and Task Migration Optimization (JCOTM), an approach based on deep reinforcement learning, is presented. The algorithm takes several factors into consideration, such as the allocation of system computing resources, network communication capacity, and the simultaneous execution of many computation tasks. A Markov Decision Process is used to model the mixed-integer nonlinear programming problem. Experimental findings demonstrate the effectiveness of the proposed algorithm in reducing device energy consumption and task processing delays. Compared to other computation offloading techniques, it optimizes resource allocation and offloading decisions, improving system resource utilization. The presented findings are based on simulations of the JCOTM method implemented in TensorFlow with Python 3.7. Varying key parameters shows that the JCOTM algorithm converges, with the accumulated reward serving as a measure of its performance against other task offloading methods. The simulated MEC network is resource-constrained and user-aware, comprising 15 users and 4 RSUs. According to the tests, JCOTM achieves a lower average system offloading cost than local, edge, cloud, and random computing, as well as a game-theory-based technique. As the number of users and the data volume grow, JCOTM continues to manage resources effectively and processes requests quickly. 
These results indicate that JCOTM enables efficient offloading as both server loads and user demands change in MEC environments.</p>2025-09-04T00:00:00+02:00Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systemshttps://ijeces.ferit.hr/index.php/ijeces/article/view/3976Comprehensive Classification and Analysis of Malware Samples Using Feature Selection and Bayesian Optimized Logistic Regression for Cybersecurity Applications2025-04-23T10:50:08+02:00Manisankar Sannigrahimanisankar.sannigrahi2020@vitstudent.ac.inR Thandeeswaranrthandeeswaran@vit.ac.in<p>Cyberattacks are serious threats not only to individuals but also to corporations due to their rising frequency and financial impact. Malware, the main tool of cybercriminals, evolves constantly, making its detection and mitigation increasingly complicated. To counter these threats, this work proposes a Logistic Regression approach based on Bayesian Optimization. By leveraging advanced techniques such as a hybrid feature selection model, the study enhances malware detection and classification accuracy and efficiency. Bayesian Optimization fine-tunes the logistic regression model's hyperparameters, improving performance in identifying malware. The integration of a hybrid feature selection algorithm reduces dataset dimensionality, focusing on relevant features for more accurate classification and efficient resource use, which is suitable for real-time applications. The experimental results show accuracy rates of 99.94% on the Ransomware dataset and 99.98% on the CIC-Obfuscated Malware dataset. The proposed model outperforms conventional detection techniques. With its flexible feature selection and optimization techniques, it can keep pace with the dynamic landscape of cyber threats. 
It therefore provides a robust and scalable solution to current cybersecurity challenges.</p>2025-09-08T00:00:00+02:00Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systemshttps://ijeces.ferit.hr/index.php/ijeces/article/view/3975Unified Communications Model for Information Management in Peruvian Public University2025-04-08T10:21:12+02:00John Fredy Rojas Bujaicojohn.rojas@unh.edu.peWilfredo Huaman Peraleswilfredo.huaman@unh.edu.peYerson Espinoza Tumialanyerson.espinoza@unh.edu.peRafael Wilfredo Rojas Bujaicorafaelrojas@unat.edu.pe<p>This study aimed to design a unified communications model to improve information management at the National University of Huancavelica. The research evaluated the implementation of this model, which optimized the distribution of Internet connections and ensured the availability, integrity, and confidentiality of information in the university’s various offices and campuses. The analysis revealed that the existing network infrastructure, designed in an improvised manner and without considering international standards, caused slow access and data transmission errors. The implementation of the proposed model showed significant improvements: application response times were reduced from 150 ms to 80 ms, the incidence of IP errors decreased from 25 to 5, and the frequency of unauthorized network access attempts dropped from 70% to 20%. Unlike previous approaches that were limited to partial solutions, this model integrates advanced security protocols, network segmentation through VLANs, and artificial neural networks for dynamic bandwidth allocation. 
This model offers a comprehensive solution that can be replicated in other institutions facing similar challenges.</p>2025-09-15T00:00:00+02:00Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systemshttps://ijeces.ferit.hr/index.php/ijeces/article/view/3882Federated Learning Algorithm to Suppress Occurrence of Low-Accuracy Devices2025-04-29T16:13:12+02:00Koudai Sakaidasakaida.koudai@ohsuga.lab.uec.ac.jpKeiichiro Oishioishi@okayama-u.ac.jpYasuyuki Taharatahara@uec.ac.jpAkihiko Ohsugaohsuga@uec.ac.jpAndrew Jandrew.j@manipal.eduYuichi Seiseiuny@uec.ac.jp<p>In recent years, federated learning (FL), a decentralized machine learning approach, has garnered significant attention. FL enables multiple devices to collaboratively train a model without sharing their data. However, when the data across devices are non-independent and identically distributed (non-IID), performance degradation issues such as reduced accuracy, slower convergence speed, and decreased performance fairness are known to occur. Under non-IID data environments, the trained model tends to exhibit varying accuracies across different devices, often overfitting on some devices while achieving lower accuracy on others. To address these challenges, this study proposes a novel approach that integrates reinforcement learning into FL under non-IID conditions. By employing a reinforcement learning agent to select the optimal devices in each round, the proposed method effectively suppresses the emergence of low-accuracy devices compared to existing methods. Specifically, the proposed method improved the average accuracy of the bottom 10% of devices by up to 4%, without compromising the overall average accuracy. 
Furthermore, the device selection patterns revealed that devices with more diverse local data tend to be chosen more frequently.</p>2025-09-10T00:00:00+02:00Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systemshttps://ijeces.ferit.hr/index.php/ijeces/article/view/3938Integrating Squeeze-and-Excitation Network with Pretrained CNN Models for Accurate Plant Disease Detection2025-05-30T13:59:00+02:00Lafta Raheem Alil.alkhazraji@gmail.comSabah Abdulazeez Jebursabah.abdulazeez@iku.edu.iqMothefer Majeed Jahefermodafarmajed@iku.edu.iqAbbas Khalifa Nawarabbas.altimimy@iku.edu.iqZaed S. Mahdizaed.s.mahdi@uotechnology.edu.iq<p>The increasing global population and the challenges posed by climate change have intensified the demand for sustainable food production. Traditional agricultural practices are often insufficient, leading to significant crop losses due to diseases and pests, despite the widespread use of pesticides and other chemical interventions. This paper introduces a new approach that integrates deep learning techniques, specifically Convolutional Neural Networks (CNNs) with Squeeze-and-Excitation (SE) networks, to enhance the accuracy of disease detection in fig leaves. By leveraging three pre-trained CNN models (MobileNetV2, InceptionV3, and Xception), this framework addresses data scarcity issues and improves feature representation while minimizing the risk of overfitting. Data augmentation techniques were employed to counteract data imbalance, and visualization tools such as Grad-CAM and t-SNE were utilized for model interpretability. The proposed CNN-SE model was trained and evaluated on a fig leaf dataset comprising 1,196 images of healthy and diseased fig leaves, achieving an accuracy of 92.90% with MobileNetV2-SE, 91.48% with InceptionV3-SE, and 89.62% with Xception-SE. 
Our model demonstrates superior performance in detecting fig leaf diseases, offering a robust solution for sustainable agriculture through accurate, efficient, and scalable disease management in crops. The code of the proposed framework is available at https://github.com/lafta/SE-block-with-CNN-Models-for-Plant-Disease-Detection.</p>2025-09-15T00:00:00+02:00Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systemshttps://ijeces.ferit.hr/index.php/ijeces/article/view/4000Adaptive Robust Control for Maximum Power Point Tracking in Photovoltaic Systems based on Sliding Mode and Fuzzy Control2025-06-04T13:05:38+02:00Minh Van Phampvminh@uneti.edu.vn<p>Photovoltaic (PV) systems play a crucial role in renewable energy generation, but their efficiency heavily depends on accurate Maximum Power Point (MPP) tracking under varying environmental conditions. This paper applies an adaptive robust controller (ARC) to improve MPP tracking performance in PV systems, with a particular focus on enhancing robustness and reducing chattering. First, a sliding surface is defined based on the maximum power point. Then, a sliding mode controller is designed to ensure robustness against system uncertainties and external disturbances. To mitigate the chattering effect, a fuzzy logic-based controller is integrated into the ARC framework. The proposed controller is proven stable according to the Lyapunov criterion. It is validated through comparative simulations, which show faster convergence, higher tracking accuracy, and improved robustness over conventional methods. Moreover, the integration of fuzzy logic significantly mitigates chattering, enhancing system efficiency and reliability. 
Given these advantages, the proposed controller is well-suited for real-world PV energy conversion systems, particularly in environments with rapidly changing irradiance and temperature conditions.</p>2025-09-10T00:00:00+02:00Copyright (c) 2025 International Journal of Electrical and Computer Engineering Systems
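<p>To illustrate the idea behind the last abstract above (sliding-mode MPPT with a fuzzy-scaled switching gain), the following is a minimal toy sketch, not the paper's implementation: the P-V curve, the gain shape, and all names (<code>pv_power</code>, <code>fuzzy_gain</code>, <code>v_mpp</code>) are illustrative assumptions. The sliding surface is s = dP/dV, which is zero exactly at the MPP, and the triangular membership shrinks the switching gain near s = 0 to soften chattering.</p>

```python
# Illustrative sketch only: sliding-mode MPPT with a fuzzy-scaled switching
# gain, on a toy quadratic P-V curve. All names and constants are assumed
# for the example and are not taken from the paper.

def pv_power(v, v_mpp=30.0, p_max=200.0, a=0.5):
    """Toy quadratic P-V characteristic with its maximum at v_mpp."""
    return p_max - a * (v - v_mpp) ** 2

def sliding_surface(v, dv=1e-3):
    """s = dP/dV, estimated numerically; s = 0 at the MPP."""
    return (pv_power(v + dv) - pv_power(v - dv)) / (2 * dv)

def fuzzy_gain(s, k_max=0.8, width=5.0):
    """Triangular membership: the switching gain shrinks near s = 0,
    so the discontinuous term does not chatter around the MPP."""
    return k_max * min(1.0, abs(s) / width)

def mppt_step(v, dt=0.1):
    """One control update: reaching law v_dot = k(s) * sign(s)."""
    s = sliding_surface(v)
    sign = 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)
    return v + dt * fuzzy_gain(s) * sign

v = 20.0  # start away from the MPP (v_mpp = 30 V)
for _ in range(500):
    v = mppt_step(v)
print(round(v, 2))  # prints 30.0
```

<p>The sketch shows the two mechanisms the abstract combines: far from the MPP the constant-gain sign term drives the operating voltage toward the sliding surface at full speed, and close to it the fuzzy membership tapers the gain, so the voltage settles at the MPP instead of oscillating around it.</p>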