On Joint Offloading and Resource Allocation: A Double Deep Q-Network Approach. Fahime Khoramnejad, Melike Erol-Kantarci. IEEE Transactions on Cognitive Communications and Networking (IF 4.341), Pub. Date: 2021-09-29, DOI: 10.1109/tccn.2021.3116251
Multi-access edge computing (MEC) is an important enabling technology for 5G and 6G networks. With MEC, mobile devices can offload their computationally heavy tasks to a nearby server, which can be a simple node at a base station, a vehicle, or another device. With the increasing number of devices, slices, and multiple radio access technologies, task offloading is becoming an increasingly complex problem. Thus, traditional approaches face limitations, while machine learning algorithms emerge as promising methods. In this paper, we consider binary and partial offloading problems and aim to jointly find optimal offloading and resource allocation decisions that maximize the number of computed bits while minimizing energy consumption. This allows improved usage of uplink transmit power and local CPU resources. We propose the Deep Reinforcement Learning for Joint Resource Allocation and Offloading (DJROM) algorithm, which uses the double deep Q-network approach and models UEs as agents. We compare the proposed approach with two other machine-learning-based techniques, namely multi-agent deep Q-learning (MARL-DQL) and multi-agent deep Q-network (MARL-DQN), under fixed and mobile scenarios. Our results show that the DJROM scheme achieves higher efficiency than the other compared algorithms.
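The abstract does not spell out the DJROM update rule, but the core of any double deep Q-network method is how the learning target is formed: the online network *selects* the next action and the target network *evaluates* it, which reduces the overestimation bias of vanilla DQN. A minimal sketch of that target computation (all names and values here are illustrative, not from the paper):

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN learning target.

    next_q_online / next_q_target: Q-value vectors over actions at the next
    state, produced by the online and target networks respectively.
    Action selection and action evaluation are decoupled between the two nets.
    """
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))      # online net picks the action
    return reward + gamma * next_q_target[best_action]  # target net scores it

# Toy example: two candidate decisions for a UE agent
# (e.g., action 0 = compute locally, action 1 = offload to the MEC server)
q_online = np.array([1.0, 2.0])   # online net prefers action 1
q_target = np.array([0.5, 1.5])   # target net's value for action 1 is 1.5
y = ddqn_target(reward=0.1, next_q_online=q_online, next_q_target=q_target,
                gamma=0.9)
# y = 0.1 + 0.9 * 1.5 = 1.45
```

In a multi-agent setup such as the one described, each UE agent would maintain its own online/target network pair and compute such targets from its local observations; the exact state, action, and reward definitions are specific to the paper.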