Abstract:
Vehicular edge computing (VEC) is a promising paradigm based on the Internet of vehicles to provide computing resources for end users and reduce cellular network traffic. This paper considers a VEC network with dynamic topologies, unstable connections, and unpredictable movements.
Vehicles can offload computation tasks to neighboring VEC clusters formed by onboard resources to reduce system energy consumption and meet task latency constraints. Existing research uses heuristic algorithms or deep reinforcement learning (DRL) for online task scheduling.
However, these algorithms scale poorly: their search efficiency is low and their convergence is slow in large-scale networks. Instead, we propose an imitation-learning-enabled online task scheduling algorithm with near-optimal initial performance.
An expert can solve the optimization problem offline with only a few samples to find the best scheduling policy. We then train agent policies online by following the expert's demonstrations, with a theoretically acceptable performance gap. Our solution outperforms the benchmark algorithms by over 50%.
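To make the imitation-learning idea concrete, the sketch below shows a minimal behavior-cloning setup: a policy network is trained to reproduce the scheduling decisions of an offline expert, then used online to assign tasks to VEC clusters. The state dimension, number of clusters, network architecture, and the randomly generated "expert" demonstrations are illustrative assumptions only, not the paper's actual design.

```python
# Minimal behavior-cloning sketch of the imitation-learning approach described above.
# All dimensions and the demonstration data are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM = 16      # assumed features per task/vehicle (channel state, queue length, ...)
NUM_CLUSTERS = 8    # assumed number of candidate VEC clusters (scheduling actions)

# Policy network: maps an observed system state to scores over candidate clusters.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_CLUSTERS),
)

# Placeholder expert demonstrations. In the described scheme these would come from
# solving the offline optimization problem on a few sampled network snapshots.
states = torch.randn(512, STATE_DIM)
expert_actions = torch.randint(0, NUM_CLUSTERS, (512,))
loader = DataLoader(TensorDataset(states, expert_actions), batch_size=64, shuffle=True)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop: the agent imitates the expert's scheduling decisions, giving it
# near-optimal initial performance instead of learning from scratch.
for epoch in range(20):
    for s, a in loader:
        logits = policy(s)
        loss = loss_fn(logits, a)   # penalize deviation from the expert's action
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At run time, the trained policy picks a VEC cluster for each incoming task.
with torch.no_grad():
    chosen_cluster = policy(torch.randn(1, STATE_DIM)).argmax(dim=1)
```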