Deep Reinforcement Learning-Based Online Resource Management for
UAV-Assisted Edge Computing with Dual Connectivity
Abstract
Mobile Edge Computing (MEC) is a key enabling technology for delay-sensitive
and computation-intensive applications in future cellular networks. In
this paper, we consider a multi-user, multi-server system where the
cellular base station is assisted by an unmanned aerial vehicle (UAV), both
of which provide additional MEC services to the terrestrial users. Via dual connectivity
(DC), each user can simultaneously offload tasks to the macro base
station and the UAV-mounted MEC server for parallel computing, while
also processing some tasks locally. We propose an online resource
management framework that minimizes the average power consumption of the
whole system, subject to long-term constraints on the stability and
computation delay of the queueing system. Owing to the coexistence of
the two servers, the problem is highly complex and is formulated as a
multi-stage mixed-integer non-linear programming (MINLP) problem. To
solve the MINLP with reduced computational complexity, we first adopt
Lyapunov optimization to transform the original multi-stage problem into
deterministic problems that are manageable in each time slot. Afterward,
the transformed problem is solved using an integrated
learning-optimization approach, in which model-free Deep Reinforcement
Learning (DRL) is combined with model-based optimization. Through extensive
simulations and theoretical analysis, we show that the proposed framework
is guaranteed to converge and achieves nearly the same performance as
the optimal solution obtained via exhaustive search.
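
As a rough, generic sketch (using our own placeholder notation rather than the paper's formulation), the per-slot transformation mentioned above is typically obtained via the Lyapunov drift-plus-penalty method: with $Q_i(t)$ denoting the task queue backlogs, $a_i(t)$ and $b_i(t)$ the corresponding arrivals and services, $P(t)$ the instantaneous system power, and $\Theta(t) = (Q_1(t), \ldots, Q_N(t))$ the queue state, each slot one minimizes the right-hand side of the standard bound
\[
\Delta(\Theta(t)) + V\,\mathbb{E}\{P(t)\mid\Theta(t)\}
\;\le\;
B + V\,\mathbb{E}\{P(t)\mid\Theta(t)\}
+ \sum_{i} Q_i(t)\,\mathbb{E}\{a_i(t) - b_i(t)\mid\Theta(t)\},
\]
where $\Delta(\Theta(t))$ is the one-slot conditional Lyapunov drift, $B$ is a finite constant, and the control parameter $V > 0$ trades off power consumption against queue backlog (and hence delay).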