Mobile Computing Projects

Abstract:

In underwater acoustic networks (UWANs), long propagation delays degrade throughput, making medium access control (MAC) protocol design critical. This paper develops a deep reinforcement learning (DRL)-based MAC protocol for UWANs, delayed-reward deep reinforcement learning multiple access (DR-DLMA), which maximizes network throughput by judiciously using the time slots made available by propagation delays or left unused by other nodes.
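To see why such slots arise, consider the acoustic propagation delay measured in slot units. The short sketch below is our own illustration (the distance, slot length, and the sound speed of roughly 1500 m/s are assumed example values, not figures from the paper):

```python
# Hedged illustration (not from the paper): how many time slots a single
# round trip occupies on an underwater acoustic link, assuming a sound
# speed of ~1500 m/s and example distance/slot-length values chosen here.

SOUND_SPEED_M_PER_S = 1500.0  # typical speed of sound in seawater

def round_trip_slots(distance_m: float, slot_duration_s: float) -> int:
    """Number of whole slots between sending a packet and receiving its ACK."""
    one_way_delay_s = distance_m / SOUND_SPEED_M_PER_S
    return int(2 * one_way_delay_s // slot_duration_s)

if __name__ == "__main__":
    # Example: a 3 km link with 0.5 s slots leaves about 8 slots idle per
    # exchange, slots that a MAC protocol could hand to other transmissions.
    print(round_trip_slots(3000.0, 0.5))
```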

In the DR-DLMA design, we introduce a new DRL algorithm, delayed-reward deep Q-network (DR-DQN). By defining the state, action, and reward, we formulate multiple access in UWANs as a reinforcement learning (RL) problem, which DR-DQN then solves to realize the DR-DLMA protocol.
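As a rough illustration of such a formulation, the sketch below shows one possible encoding: the state is a sliding history of past actions and channel feedback, the action is to wait or transmit in the next slot, and the reward is one unit per successfully delivered packet. The encoding, history length, and names are our own assumptions, not the paper's exact definitions:

```python
# A minimal sketch (assumptions, not the paper's exact formulation) of casting
# slotted multiple access as an RL problem: the agent observes a sliding history
# of its past actions and channel feedback, decides whether to transmit in the
# next slot, and is rewarded for each successfully delivered packet.

from collections import deque
from dataclasses import dataclass, field

WAIT, TRANSMIT = 0, 1               # action space: stay silent or send in the next slot
IDLE, SUCCESS, COLLISION = 0, 1, 2  # per-slot channel observations (assumed encoding)

@dataclass
class SlotAccessState:
    history_len: int = 8
    history: deque = field(default_factory=deque)

    def update(self, action: int, observation: int) -> tuple:
        """Append the latest (action, observation) pair and return the state vector."""
        self.history.append((action, observation))
        while len(self.history) > self.history_len:
            self.history.popleft()
        return tuple(self.history)

def reward(observation: int) -> float:
    """Assumed reward: +1 for a slot carrying a successful transmission, else 0."""
    return 1.0 if observation == SUCCESS else 0.0
```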

Traditional DRL algorithms, such as the original deep Q-network (DQN) algorithm, assume the agent receives the environment's "reward" immediately after taking an action. In our design, the "reward" (i.e., the ACK packet) only becomes available twice the one-way propagation delay after the agent transmits a data packet. DR-DQN therefore incorporates the propagation delay into the DRL framework and modifies the DRL algorithm accordingly. In addition, DR-DQN adopts a nimble training mechanism that reduces the cost of online deep neural network (DNN) training. The optimal network throughputs in various cases are also derived as benchmarks.
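The bookkeeping this implies can be sketched as follows: an experience created when a packet is transmitted cannot enter the replay buffer until its ACK (or its absence) is known, two one-way propagation delays later. The class below is only an illustrative sketch under our own assumptions; the names, the binary reward, and the buffer structure are not taken from the paper:

```python
# Illustrative sketch (our own assumptions) of delayed-reward bookkeeping for a
# DR-DQN-style agent: a transmission is held as a pending experience until its
# feedback slot arrives, and only completed experiences are stored for training.

import random
from collections import deque

class DelayedRewardReplay:
    def __init__(self, prop_delay_slots: int, capacity: int = 10000):
        self.rtt_slots = 2 * prop_delay_slots    # slots until feedback is available
        self.pending = {}                        # tx slot -> (state, action, next_state)
        self.memory = deque(maxlen=capacity)     # completed (s, a, r, s') experiences

    def on_transmit(self, slot: int, state, action, next_state) -> None:
        """Record a transmission whose reward will only be known later."""
        self.pending[slot] = (state, action, next_state)

    def on_slot(self, current_slot: int, ack_received: bool) -> None:
        """Resolve any transmission whose feedback is due in this slot."""
        tx_slot = current_slot - self.rtt_slots
        if tx_slot in self.pending:
            state, action, next_state = self.pending.pop(tx_slot)
            reward = 1.0 if ack_received else 0.0
            self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        """Mini-batch of completed experiences for DNN training."""
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```

In such a setup, only completed experiences feed the DNN updates, so reducing how often and on how much data those updates run is where a lighter training mechanism would save energy and run time.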

Simulation results show that our DR-DLMA protocol with the nimble training mechanism can (i) find the optimal transmission strategy when coexisting with other protocols in a heterogeneous environment; (ii) outperform state-of-the-art MAC protocols (e.g., slotted FAMA and DOTS) in a homogeneous environment; and (iii) greatly reduce energy consumption and run time compared with DR-DLMA using the traditional DNN training mechanism.
