Abstract:
Fog computing, which integrates mobile-edge and cloud resources, has emerged with the widespread adoption of Internet-of-Things (IoT) applications. Scheduling application tasks in such environments is difficult because of resource constraints, IoT device mobility, heterogeneity, the hierarchical network topology, and stochastic behavior.
Existing heuristics and reinforcement-learning-based approaches neither generalize well nor adapt quickly enough to solve this problem: they typically assume a centralized environment and cannot exploit temporal workload patterns.
Asynchronous advantage actor-critic (A3C) learning can update model parameters quickly in dynamic scenarios with little data, so we propose an A3C-based real-time scheduler for stochastic Edge-Cloud environments that enables decentralized learning across multiple agents.
A residual recurrent neural network (R2N2) architecture captures a large number of host and task parameters, together with temporal patterns, to make efficient scheduling decisions. The model's hyper-parameters can be tuned to application needs, and a sensitivity analysis explains their effects.
Experiments on a real-world data set show improvements of 14.4, 7.74, 31.9, and 4.64 percent in energy consumption, response time, SLA violations, and running cost, respectively, compared with state-of-the-art algorithms.
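To make the A3C component of the scheduler concrete, the following is a minimal sketch of the two core pieces each asynchronous agent computes: n-step returns and advantages from the critic's value estimates, and a softmax policy that turns per-host scores (e.g., produced by the recurrent network) into a scheduling distribution over hosts. The function names and the use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def n_step_advantages(rewards, values, bootstrap, gamma=0.99):
    """Compute n-step discounted returns and advantages, as in A3C.

    rewards:   rewards r_t collected over the last n steps
    values:    critic estimates V(s_t) for those steps
    bootstrap: V(s_{t+n}), the critic's estimate for the final state
    """
    R = bootstrap
    returns = []
    for r in reversed(rewards):
        R = r + gamma * R          # accumulate discounted return backwards
        returns.append(R)
    returns.reverse()
    # Advantage A(s_t) = R_t - V(s_t) drives the actor's policy-gradient step
    advantages = [g - v for g, v in zip(returns, values)]
    return returns, advantages

def softmax_policy(host_scores):
    """Turn per-host scores into a probability distribution over hosts."""
    z = np.exp(host_scores - np.max(host_scores))  # subtract max for stability
    return z / z.sum()
```

In a full scheduler, each agent would sample a target host from `softmax_policy`, observe the resulting energy/response-time reward, and use the advantages to weight its asynchronous gradient updates to the shared model.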