Resource Allocation using Deep Learning in Mobile Small Cell Networks
Preprint posted on 04.02.2021, 22:18 by Saniya Zafar, Sobia Jangsher, Arafat Al-Dweik
The deployment of mobile small cells (mScs) is widely adopted to enhance the quality-of-service (QoS) for high-mobility vehicles. However, the rapidly varying interference patterns among densely deployed mScs make resource allocation (RA) highly challenging. In such scenarios, the RA problem must be solved nearly in real time, which is a drawback for most existing RA algorithms. To overcome this constraint and solve the RA problem efficiently, we use deep learning (DL) in this work, owing to its ability to leverage historical data in the RA problem and to handle computationally expensive tasks offline. More specifically, this paper considers the RA problem in a vehicular environment comprising city buses, where DL is explored to optimize network performance. Simulation results reveal that RA using the Long Short-Term Memory (LSTM) algorithm outperforms other machine learning (ML) and DL-based RA mechanisms. Moreover, RA using LSTM is less accurate than the existing Time Interval Dependent Interference Graph (TIDIG)-based and Threshold Percentage Dependent Interference Graph (TPDIG)-based RA, but shows improved results compared with RA using the Global Positioning System Dependent Interference Graph (GPSDIG). However, the proposed scheme is computationally less expensive than the TIDIG- and TPDIG-based algorithms.
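To illustrate the kind of model the abstract refers to, the sketch below implements a minimal LSTM cell forward pass in NumPy that consumes a sequence of per-resource-block interference measurements and produces allocation scores through a linear read-out. This is a hypothetical illustration of the LSTM mechanism only, not the authors' implementation; the input features, dimensions, and read-out layer are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (forward pass only), for illustration."""
    def __init__(self, input_dim, hidden_dim):
        self.hidden_dim = hidden_dim
        # one stacked weight matrix for the input, forget, cell and output gates
        scale = 1.0 / np.sqrt(hidden_dim)
        self.W = rng.uniform(-scale, scale, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)

    def step(self, x, h, c):
        # gate pre-activations from the current input and previous hidden state
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c_new = f * c + i * g          # updated cell state (the "memory")
        h_new = o * np.tanh(c_new)     # new hidden state
        return h_new, c_new

# Toy sequence: T time steps of interference features for 3 resource blocks
# (purely synthetic stand-ins for the historical data mentioned above).
T, input_dim, hidden_dim = 5, 3, 8
cell = LSTMCell(input_dim, hidden_dim)
h = np.zeros(hidden_dim)
c = np.zeros(hidden_dim)
for t in range(T):
    x_t = rng.normal(size=input_dim)   # hypothetical interference measurements
    h, c = cell.step(x_t, h, c)

# A linear read-out could map the final hidden state to per-resource-block
# allocation scores; here the read-out weights are random placeholders.
scores = rng.normal(size=(input_dim, hidden_dim)) @ h
print(scores.shape)  # (3,)
```

Because the recurrence is cheap to evaluate, a trained model of this shape could score candidate allocations online while the expensive training on historical interference traces stays offline, which is the computational advantage the abstract claims over graph-based RA.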