A collaborative research team from China Mobile and BUPT has designed a dynamic, learning-based incentive system for offloading AI training tasks in 6G digital twin networks, improving adaptability, efficiency, and cooperation between operators and mobile access points.

In the era of the sixth-generation (6G) wireless network, which expands to usage scenarios such as immersive communication, integrated artificial intelligence (AI) and communication, and hyper-reliable and low-latency communication, high-level network autonomy has become essential for managing the complexity of the network architecture.
Digital twin networks offer a solution by constructing high-fidelity virtual replicas of physical networks to predict states and validate decisions, but their accuracy relies on AI model training, a process that demands substantial resources. However, resource-limited network entities (RL-NEs) such as mobile access points (mAPs) cannot independently support AI training for digital twins, necessitating task offloading to base stations (BSs).
Further, since RL-NEs and AI service providers (the operators that own the BSs) often belong to different parties, incentive mechanisms are critical for maximizing the utility of both sides.
Study on incentive-based task offloading for digital twins
To address this challenge, a team of researchers from China Mobile Research Institute, China Mobile Communications Group Corporation, and Beijing University of Posts and Telecommunications conducted a study titled "Incentive-based task offloading for digital twins in 6G native artificial intelligence networks: a learning approach".
Stackelberg game models operator-mAP interactions
The study first formulates a Stackelberg game to capture the interaction between operators (leaders) and mAPs (followers). The operator, as leader, sets the unit price for AI training services, while each mAP decides which BS to access and how much of its task to offload based on the announced prices. The study analyzes the Stackelberg equilibrium (SE) to derive optimal pricing and offloading strategies: the operator's utility balances service revenue against BS energy costs, while each mAP's utility weighs AI training effectiveness against task delay and service payment.
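As a rough illustration of this leader-follower structure (a schematic sketch, not the paper's exact formulation), the game can be written as below, where p is the operator's announced unit price, x_i is the task volume mAP i offloads, G_i is an AI-training gain term, D_i a delay cost, and C_energy the BS energy cost:

```latex
% Schematic Stackelberg structure (illustrative placeholder terms,
% not the paper's exact utility functions).
\begin{align*}
  \text{Operator (leader):}\quad
    & \max_{p \ge 0}\; U_{\mathrm{op}} = \sum_{i} p\,x_i^*(p)
      - C_{\mathrm{energy}}\big(x_1^*(p),\dots,x_N^*(p)\big),\\
  \text{mAP $i$ (follower):}\quad
    & x_i^*(p) = \arg\max_{x_i \ge 0}\; U_i = G_i(x_i) - D_i(x_i) - p\,x_i.
\end{align*}
```

At the equilibrium, no mAP can improve its utility by unilaterally changing its offloading decision given the operator's price, and the operator's price is optimal given the mAPs' best responses.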
Deep reinforcement learning enhances adaptability
Considering the time-varying wireless network environment, the study further designs a deep reinforcement learning (DRL) algorithm based on the soft actor-critic (SAC) method. For the operator, a SAC-based pricing strategy uses channel gain as the state to dynamically adjust service prices; for the mAPs, a multi-agent SAC-based offloading strategy lets each mAP decide independently (without sharing other mAPs' information), using its own channel gain and the announced prices as its state.
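To make the decision loop concrete, the following is a minimal, self-contained Python sketch of the leader-follower interaction under the states described above. The channel model, utility shapes, and heuristic policies are illustrative assumptions, not the paper's implementation; in the actual scheme, the two heuristic policies would be replaced by trained SAC and multi-agent SAC actors.

```python
# Minimal sketch of the leader-follower decision loop described above.
# Everything here (function names, the fading model, the utility shapes)
# is an illustrative assumption, not the paper's implementation: in the
# actual scheme, price_policy would be a trained SAC actor and each
# offload_policy a trained multi-agent SAC actor.
import math
import random

N_MAPS = 4      # mobile access points (followers)
STEPS = 100     # interaction rounds

def sample_channel_gains(n):
    """Time-varying channel gains (placeholder exponential fading model)."""
    return [random.expovariate(1.0) for _ in range(n)]

def price_policy(gains):
    """Operator pricing: state = channel gains, action = unit service price."""
    return 0.5 + 0.1 * sum(gains) / len(gains)  # heuristic stand-in for SAC

def offload_policy(own_gain, price):
    """mAP offloading: uses only *local* state (own gain + announced price),
    mirroring the no-information-sharing property of the multi-agent design."""
    return max(0.0, own_gain / price - 0.2)     # heuristic stand-in for SAC

def map_utility(volume, gain, price):
    """Follower reward: training effectiveness minus delay and payment."""
    training_gain = math.log1p(volume)   # diminishing returns from more data
    delay_cost = volume / (1.0 + gain)   # weaker channel -> longer transfer
    return training_gain - delay_cost - price * volume

def operator_utility(volumes, price):
    """Leader reward: service revenue minus BS energy cost."""
    return price * sum(volumes) - 0.05 * sum(v * v for v in volumes)

u_op_total, u_map_total = 0.0, 0.0
for t in range(STEPS):
    gains = sample_channel_gains(N_MAPS)
    price = price_policy(gains)                          # leader moves first
    volumes = [offload_policy(g, price) for g in gains]  # followers respond
    u_op_total += operator_utility(volumes, price)
    u_map_total += sum(map_utility(v, g, price) for v, g in zip(volumes, gains))
    # In training, (state, action, reward) transitions would be pushed to
    # replay buffers and the SAC actor/critic networks updated here.

print(f"avg operator utility: {u_op_total / STEPS:.3f}")
print(f"avg mAP utility:      {u_map_total / (STEPS * N_MAPS):.3f}")
```

Note that each mAP's policy consumes only its own channel gain and the announced price, which is what allows the decentralized, independent execution the study highlights.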
Simulation results validate 6G offloading efficiency
Extensive simulations on a 6G native AI network (with configurations of 2 or 4 BSs and 4 or 8 mAPs) validate the proposal's effectiveness. Compared with benchmark schemes based on twin delayed deep deterministic policy gradient (TD3) and deep deterministic policy gradient (DDPG), the proposed DRL approach achieves higher utility for both operators and mAPs across varying bandwidths and computing capacities. While the static Stackelberg game scheme may yield slightly higher utility in stable environments, the DRL-based algorithm has lower inference complexity and adapts better to dynamic network changes.
Research team and publication details
The paper "Incentive-based task offloading for digital twins in 6G native artificial intelligence networks: a learning approach" is authored by Tianjiao CHEN, Xiaoyun WANG, Meihui HUA, and Qinqin TANG. Full text of the open access paper: https://link.springer.com/article/10.1631/FITEE.2400240.
Journal reference:
- Chen, T., Wang, X., Hua, M., & Tang, Q. Incentive-based task offloading for digital twins in 6G native artificial intelligence networks: a learning approach. Frontiers of Information Technology & Electronic Engineering 26, 214–229 (2025). https://doi.org/10.1631/FITEE.2400240