Analyzing viewport prediction under different VR interactions

Tan Xu, Bo Han, Feng Qian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
Abstract

In this paper, we study the problem of predicting a user's viewport movement in a networked VR system (i.e., predicting which direction the viewer will look in shortly). This critical knowledge guides the VR system in making judicious content-fetching decisions, leading to efficient network bandwidth utilization (e.g., savings of up to 35% on LTE networks, as demonstrated by our previous work) and improved Quality of Experience (QoE). For this study, we collect viewport trajectory traces from 275 users who watched popular 360° panoramic videos for a total duration of 156 hours. Leveraging our unique datasets, we compare viewport movement patterns across different interaction modes: wearing a head-mounted device, tilting a smartphone, and dragging the mouse on a PC. We then apply diverse machine learning algorithms - from simple regression to sophisticated deep learning that leverages crowd-sourced data - to analyze the performance of viewport prediction. We find that the deep learning approach is robust across all interaction modes and yields superior performance, especially when the viewport is more challenging to predict, e.g., over a longer prediction window or with more dynamic movement. Overall, our analysis provides key insights into how to intelligently perform viewport prediction in networked VR systems.
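The "simple regression" baseline mentioned in the abstract can be sketched as linear extrapolation of recent head-orientation samples: fit a line to the last few yaw/pitch readings and evaluate it at a future timestamp. The function below is an illustrative assumption for how such a baseline might look, not the authors' actual implementation; all names and parameters are hypothetical.

```python
import numpy as np

def predict_viewport(timestamps, yaw, pitch, horizon):
    """Extrapolate head orientation `horizon` seconds past the last sample
    by least-squares fitting a line to recent yaw/pitch angles (degrees).

    Illustrative sketch of a linear-regression baseline; not the
    implementation from the paper.
    """
    t = np.asarray(timestamps, dtype=float)
    target = t[-1] + horizon
    preds = []
    for angles in (np.asarray(yaw, dtype=float), np.asarray(pitch, dtype=float)):
        slope, intercept = np.polyfit(t, angles, 1)  # degree-1 least squares
        preds.append(slope * target + intercept)
    # Yaw is periodic, so wrap the prediction back into [-180, 180)
    preds[0] = (preds[0] + 180.0) % 360.0 - 180.0
    return preds[0], preds[1]
```

A longer `horizon` amplifies any error in the fitted slope, which is consistent with the abstract's observation that longer prediction windows are harder for simple regression than for the deep learning approach.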

Original language: English (US)
Title of host publication: CoNEXT 2019 - Proceedings of the 15th International Conference on Emerging Networking Experiments and Technologies
Publisher: Association for Computing Machinery, Inc
Pages: 165-171
Number of pages: 7
ISBN (Electronic): 9781450369985
DOIs
State: Published - Dec 3 2019
Event: 15th ACM International Conference on Emerging Networking Experiments and Technologies, CoNEXT 2019 - Orlando, United States
Duration: Dec 9 2019 - Dec 12 2019

Publication series

Name: CoNEXT 2019 - Proceedings of the 15th International Conference on Emerging Networking Experiments and Technologies

Conference

Conference: 15th ACM International Conference on Emerging Networking Experiments and Technologies, CoNEXT 2019
Country: United States
City: Orlando
Period: 12/9/19 - 12/12/19

Bibliographical note

Funding Information:
We would like to thank the anonymous reviewers and our shepherd Zubair Shafiq for their valuable comments. We would also like to thank the voluntary users who participated in our user study. Feng Qian’s research was supported in part by NSF Award #1915122 and a Google Faculty Award.

Keywords

  • 360-degree video
  • Adaptive video streaming
  • Machine learning
  • VR interactions
  • Viewport prediction
