Abstract
Offloading heavy VR rendering to edge/cloud computing entities and streaming the rendered VR images to user headsets over networks is a promising approach to achieving widespread adoption of VR services. However, this approach faces the challenge of significant latency. To address the black borders caused by such latency in a computationally efficient manner, VR streaming requires both overfill rendering (OR) and foveated rendering (FR), which significantly influence the trade-off between visual quality and computational load depending on their configuration parameters. In this paper, we develop a visual quality metric for VR images to which OR and FR are simultaneously applied under latency; the metric models the impact of the overfill factor and the foveal layers on user-perceived visual quality. We then design a deep reinforcement learning (DRL)-based solution that maximizes a unified objective combining visual quality and computational overhead by jointly optimizing the overfill factor and the sizes of the foveal layers. Experimental results obtained on a real-world wireless VR streaming testbed running an interactive VR game demonstrate the effectiveness of the proposed solution. Compared with the benchmarks, our solution adapts its OR and FR operations to varying environmental conditions, achieving superior performance in terms of overall reward.
| Original language | English |
|---|---|
| Pages (from-to) | 143896-143913 |
| Number of pages | 18 |
| Journal | IEEE Access |
| Volume | 13 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Virtual reality
- deep reinforcement learning
- edge computing
- foveated rendering
- offloading
- overfill rendering
- streaming