Deep Reinforcement Learning for Dynamic Task Scheduling in Edge-Cloud Environments
DOI: https://doi.org/10.32985/ijeces.15.10.3

Keywords: Task Scheduling, Edge-Cloud Environment, Recurrent Neural Network, Edge Computing, Cloud Computing, Deep Reinforcement Learning

Abstract
With the advent of the Internet of Things (IoT) and its use cases, the need for lower latency has driven the adoption of edge computing technologies. IoT applications require a cloud environment and scheduling appropriate to the requirements of a given workload. Owing to the mobility of IoT devices, resource constraints, and resource heterogeneity, scheduling IoT application tasks efficiently is a challenging problem. Existing conventional and deep learning scheduling techniques have limitations such as lack of adaptability, issues arising from their synchronous nature, and an inability to deal with temporal patterns in workloads. To address these issues, we propose a learning-based framework known as the Deep Reinforcement Learning Framework (DRLF). It is designed to exploit Deep Reinforcement Learning (DRL) with an enhanced deep network architecture based on a Recurrent Neural Network (RNN). We also propose an algorithm named Reinforcement Learning-based Dynamic Scheduling (RLbDS), which exploits different hyperparameters and DRL-based decision-making for efficient scheduling. Real-time traces of edge-cloud infrastructure are used for the empirical study. We implemented our framework by defining new classes for the CloudSim and iFogSim simulation frameworks. Our empirical study reveals that RLbDS outperforms many existing scheduling methods.
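To illustrate the general idea behind the abstract (not the authors' actual DRLF/RLbDS implementation), the following is a minimal sketch of a recurrent Q-network scheduler: a hidden state carries temporal workload patterns across tasks, and an epsilon-greedy policy dispatches each task to one of several hypothetical edge/cloud nodes. All dimensions, feature choices, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 3   # hypothetical number of edge/cloud nodes to dispatch onto
FEATS = 4     # assumed per-task features (e.g. size, deadline, node loads)
HIDDEN = 8    # recurrent hidden size (arbitrary choice)

# Tiny recurrent Q-network; weights would normally be learned via DRL.
Wx = rng.normal(0.0, 0.1, (HIDDEN, FEATS))
Wh = rng.normal(0.0, 0.1, (HIDDEN, HIDDEN))
Wq = rng.normal(0.0, 0.1, (N_NODES, HIDDEN))

def q_values(x, h):
    """One recurrent step: returns Q-values per node and the new hidden state."""
    h_new = np.tanh(Wx @ x + Wh @ h)
    return Wq @ h_new, h_new

def select_node(x, h, eps=0.1):
    """Epsilon-greedy dispatch of one task, keeping the recurrent state."""
    q, h_new = q_values(x, h)
    if rng.random() < eps:
        action = int(rng.integers(N_NODES))   # explore
    else:
        action = int(np.argmax(q))            # exploit learned preferences
    return action, h_new

# Dispatch a short stream of synthetic tasks.
h = np.zeros(HIDDEN)
for step in range(5):
    task = rng.random(FEATS)          # synthetic task feature vector
    node, h = select_node(task, h)
    print(f"task {step} -> node {node}")
```

In a full DRL setup the weights would be trained against a reward signal (e.g. negative task latency), and the recurrent state is what lets the policy react to temporal workload patterns that a feed-forward scheduler would miss.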
Copyright (c) 2024 International Journal of Electrical and Computer Engineering Systems
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.