TY - JOUR
T1 - Latency and energy aware rate maximization in MC-NOMA-based multi-access edge computing
T2 - A two-stage deep reinforcement learning approach
AU - Nduwayezu, Maurice
AU - Yun, Ji Hoon
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/4/22
Y1 - 2022/4/22
N2 - Future network services are emerging with an inevitable need for high wireless capacity along with strong computational capabilities, stringent latency and reduced energy consumption. Two technologies are promising, showing potential to support these requirements: multi-access (or mobile) edge computing (MEC) and non-orthogonal multiple access (NOMA). While MEC allows users to access the abundant computing resources at the edge of the network, NOMA technology enables an increase in the density of a cellular network. However, integrating NOMA technology into MEC systems faces challenges in terms of joint offloading decisions (remote or local computation) and inter-user interference management. In this paper, with the objective of maximizing the system-wide sum computation rate under latency and energy consumption constraints, we propose a two-stage deep reinforcement learning algorithm to solve the joint problem in a multicarrier NOMA-based MEC system, in which the first-stage agent handles offloading decisions while the second-stage agent considers the offloading decisions to determine the resource block assignments for users. Simulation results show that compared with other benchmark algorithms, the proposed algorithm improves the sum computation rate while meeting the latency and energy consumption requirements, and it outperforms the approach in which a single agent handles both offloading decisions and resource block assignments, owing to its faster convergence.
AB - Future network services are emerging with an inevitable need for high wireless capacity along with strong computational capabilities, stringent latency and reduced energy consumption. Two technologies are promising, showing potential to support these requirements: multi-access (or mobile) edge computing (MEC) and non-orthogonal multiple access (NOMA). While MEC allows users to access the abundant computing resources at the edge of the network, NOMA technology enables an increase in the density of a cellular network. However, integrating NOMA technology into MEC systems faces challenges in terms of joint offloading decisions (remote or local computation) and inter-user interference management. In this paper, with the objective of maximizing the system-wide sum computation rate under latency and energy consumption constraints, we propose a two-stage deep reinforcement learning algorithm to solve the joint problem in a multicarrier NOMA-based MEC system, in which the first-stage agent handles offloading decisions while the second-stage agent considers the offloading decisions to determine the resource block assignments for users. Simulation results show that compared with other benchmark algorithms, the proposed algorithm improves the sum computation rate while meeting the latency and energy consumption requirements, and it outperforms the approach in which a single agent handles both offloading decisions and resource block assignments, owing to its faster convergence.
KW - Deep reinforcement learning
KW - Mobile edge computing
KW - Multicarrier non-orthogonal multiple access
KW - Resource block assignment
UR - https://www.scopus.com/pages/publications/85124800380
U2 - 10.1016/j.comnet.2022.108834
DO - 10.1016/j.comnet.2022.108834
M3 - Article
AN - SCOPUS:85124800380
SN - 1389-1286
VL - 207
JO - Computer Networks
JF - Computer Networks
M1 - 108834
ER -