User-Adaptive Editing for 360 degree Video Streaming with Deep Reinforcement Learning

Abstract: The development of 360° video streaming is persistently hindered by the bandwidth these videos require. Spatially adapting the quality of the sphere to the user's Field of View (FoV) lowers the data rate, but requires keeping the playback buffer small, predicting the user's motion, or replacing buffered segments to keep their qualities up to date with the moving FoV, all three of which are uncertain and risky. We have previously shown that opportunistically regaining control of the FoV with active attention-driving techniques provides additional levers to ease streaming and improve Quality of Experience (QoE). Deep neural networks have recently been shown to achieve the best performance for video streaming adaptation and head motion prediction. This demo presents a further step in the investigation of deep neural network approaches to user-adaptive and network-adaptive 360° video streaming systems. We show how snap-changes, an attention-driving technique, can be automatically modulated by the user's motion to improve the streaming QoE. Snap-changes are controlled by a deep neural network trained on head motion traces with the deep reinforcement learning algorithm A3C.
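To make the idea of a learned snap-change controller concrete, the sketch below shows a minimal actor-critic network, as would be trained with A3C, that maps a short window of head-motion features to a binary decision (trigger a snap-change or not) plus a value estimate. The class name, feature dimensions, and architecture are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch (assumptions, not the paper's implementation):
# an actor-critic network suitable for A3C training, deciding whether
# to trigger a snap-change from recent head-motion features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SnapChangeActorCritic(nn.Module):
    def __init__(self, feature_dim=8, history_len=10, hidden_dim=64):
        super().__init__()
        # Encode a short window of head-motion features (e.g. angular speeds).
        self.encoder = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        # Actor head: probabilities over {no snap-change, snap-change}.
        self.actor = nn.Linear(hidden_dim, 2)
        # Critic head: estimated value of the current state (expected QoE).
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, motion_window):
        # motion_window: (batch, history_len, feature_dim)
        _, h = self.encoder(motion_window)
        h = h.squeeze(0)
        return F.softmax(self.actor(h), dim=-1), self.critic(h)

if __name__ == "__main__":
    model = SnapChangeActorCritic()
    dummy = torch.randn(1, 10, 8)       # one window of head-motion features
    action_probs, value = model(dummy)
    print(action_probs, value)
```

In A3C, several asynchronous workers would roll out this policy against a streaming simulator, using the critic's value estimate as a baseline for the policy-gradient update; the details of the reward (QoE model) are specific to the paper and not reproduced here.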

https://hal.archives-ouvertes.fr/hal-02366869

File

article_MM_2019.pdf

Citation

Lucile Sassatelli, Marco Winckler, Thomas Fisichella, Ramon Aparicio-Pardo. User-Adaptive Editing for 360 degree Video Streaming with Deep Reinforcement Learning. 27th ACM International Conference on Multimedia, Oct 2019, Nice, France. pp.2208-2210, ⟨10.1145/3343031.3350601⟩. ⟨hal-02366869⟩
