Deep Reinforcement Learning for Energy-Efficient Data Dissemination Through UAV Networks
The unprecedented growth in the number of connected devices has given rise to the Internet-of-Things (IoT) and led to an increasing demand for additional computational and communication resources. Within this context, unmanned aerial vehicles (UAVs) have been shown to provide extended coverage, flexibility, and reachability. Motivated by this, in this paper, we develop a UAV-assisted data dissemination framework for IoT networks. To this end, we formulate a joint optimization problem that aims to minimize the total energy expenditure, i.e., the sum of the energy consumed by the UAV and by all the spatially distributed IoT devices. We propose a deep reinforcement learning approach to solve the joint device classification, device association, and path planning optimization problem. In particular, we 1) train a double deep Q-network (DDQN) agent to classify devices into two classes, and then, using this classification, 2) develop a device association algorithm based on the nearest-neighbor heuristic, and 3) develop a path planning algorithm based on the Lin-Kernighan heuristic. Simulation results show that the proposed approach reduces energy consumption more efficiently than the benchmark approaches, i.e., the brute-force and baseline approaches. Furthermore, the obtained results show that our approach provides a near-optimal solution in a fraction of the time required by the brute-force approach.
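To illustrate the second step, the following is a minimal sketch of a nearest-neighbor association rule. It assumes (the paper's formulation is not reproduced here) that the DDQN classifier has already split devices into two classes, here called "heads" and "members", and that association means assigning each member device to its geometrically nearest head; the function name and the 2-D coordinate representation are illustrative choices, not taken from the paper.

```python
import math

def nearest_neighbor_association(heads, members):
    """Assign each member device to its nearest head device.

    heads, members: lists of (x, y) coordinates. The split into the two
    classes is assumed to come from the upstream DDQN classifier.
    Returns a dict mapping each member index to the index of its
    nearest head (Euclidean distance).
    """
    assoc = {}
    for i, (mx, my) in enumerate(members):
        # Pick the head minimizing the Euclidean distance to this member.
        assoc[i] = min(
            range(len(heads)),
            key=lambda j: math.hypot(heads[j][0] - mx, heads[j][1] - my),
        )
    return assoc
```

Because each member is handled independently, the heuristic runs in O(|members| x |heads|) time, which is what makes it attractive compared with jointly optimizing all associations by brute force.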