M. Almquist, V. Almquist, V. Krishnamoorthi, N. Carlsson, and D. Eager, The prefetch aggressiveness tradeoff in 360 degree video streaming, Proceedings of the 9th ACM Multimedia Systems Conference, pp.258-269, 2018.

T. Ballard, C. Griwodz, R. Steinmetz, and A. Rizk, RATS: Adaptive 360-degree live streaming, Proceedings of the 10th ACM Multimedia Systems Conference, pp.308-311, 2019.

A. Bangor, P. Kortum, and J. Miller, Determining what individual SUS scores mean: Adding an adjective rating scale, Journal of Usability Studies, vol.4, pp.114-123, 2009.

E. Bastug, M. Bennis, M. Medard, and M. Debbah, Toward Interconnected Virtual Reality: Opportunities, Challenges, and Enablers, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01781856

J. G. Beerends and F. E. De Caluwe, The influence of video quality on perceived audio quality and vice versa, J. Audio Eng. Soc., vol.47, issue.5, pp.355-362, 1999.

J. Brooke, SUS: A quick and dirty usability scale, Usability Evaluation in Industry, pp.189-194, 1996.

N. Carlsson, D. Eager, V. Krishnamoorthi, and T. Polishchuk, Optimized adaptive streaming of multi-video stream bundles, IEEE Transactions on Multimedia, vol.19, issue.7, pp.1637-1653, 2017.

E. M. Caruso, Z. C. Burns, and B. Converse, Slow motion increases perceived intent, Proceedings of the National Academy of Sciences, vol.113, pp.9250-9255, 2016.

International Data Corporation (IDC), Demand for Augmented Reality/Virtual Reality Headsets Expected to Rebound, 2018.

A. Coutrot and N. Guyader, Learning a time-dependent master saliency map from eye-tracking data in videos, 2017.

S. Dambra, G. Samela, L. Sassatelli, R. Pighetti, R. Aparicio-Pardo et al., TOUCAN-VR. Software, 2018.

S. Dambra, G. Samela, L. Sassatelli, R. Pighetti, R. Aparicio-Pardo et al., Film editing: New levers to improve VR streaming, Proceedings of the 9th ACM Multimedia Systems Conference, pp.27-39, 2018.

E. J. David, J. Gutiérrez, A. Coutrot, M. P. Da Silva, and P. Le Callet, A dataset of head and eye movements for 360 degree videos, Proceedings of the 9th ACM Multimedia Systems Conference, pp.432-437, 2018.

Y. Farmani and R. Teather, Viewpoint snapping to reduce cybersickness in virtual reality, Proceedings of Graphics Interface, 2018.

C. O. Fearghail, C. Ozcinar, S. Knorr, and A. Smolic, Director's cut: Analysis of aspects of interactive storytelling for VR films, Interactive Storytelling, pp.308-322, 2018.

V. R. Gaddam, M. Riegler, R. Eg, C. Griwodz, and P. Halvorsen, Tiling in interactive panoramic video: Approaches and evaluation, IEEE Transactions on Multimedia, vol.18, issue.9, pp.1819-1831, 2016.

B. Girod, N. Färber, and E. G. Steinbach, Adaptive playout for low latency video streaming, IEEE International Conference on Image Processing (ICIP), vol.1, pp.962-965, 2001.

Google: VR180 cameras, 2019.

M. Graf, C. Timmerer, and C. Mueller, Towards bandwidth efficient adaptive streaming of omnidirectional video over HTTP: Design, implementation, and evaluation, Proceedings of the 8th ACM on Multimedia Systems Conference, pp.261-271, 2017.

S. Grogorick, M. Stengel, E. Eisemann, and M. Magnor, Subtle gaze guidance for immersive environments, Proceedings of the ACM Symposium on Applied Perception. pp. 4:1-4:7. SAP '17, 2017.

M. Hassenzahl and N. Tractinsky, User experience -a research agenda, Behavior and Information Technology, vol.25, pp.91-97, 2006.

H. Hu, Y. Lin, M. Liu, H. Cheng, Y. Chang et al., Deep 360 pilot: Learning a deep agent for piloting through 360 sports videos, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.1396-1405, 2017.

M. Jeppsson, H. Espeland, C. Griwodz, T. Kupka, R. Langseth et al., Efficient live and on-demand tiled HEVC 360 VR video streaming, 2018 IEEE International Symposium on Multimedia (ISM), pp.81-88, 2018.

V. A. De Jesus Oliveira, L. Brayda, L. Nedel, and A. Maciel, Designing a vibrotactile head-mounted display for spatial awareness in 3D spaces, IEEE Trans. on Visualization and Computer Graphics, vol.23, issue.4, pp.1409-1417, 2017.

J. Kim and M. Hahn, Voice activity detection using an adaptive context attention model, IEEE Signal Processing Letters, vol.25, issue.8, pp.1181-1185, 2018.

J. R. Lewis and J. Sauro, The factor structure of the System Usability Scale, Human Centered Design (HCII 2009), pp.94-103, 2009.

Y. Li, A. Markopoulou, J. Apostolopoulos, and N. Bambos, Content-aware playout and packet scheduling for video streaming over wireless links, IEEE Transactions on Multimedia, vol.10, issue.5, pp.885-895, 2008.

X. Liu, B. Han, F. Qian, and M. Varvello, LIME: Understanding Commercial 360 degree Live Video Streaming Services, Proceedings of the 10th ACM Multimedia Systems Conference, pp.154-164, 2019.

J. Magliano and J. M. Zacks, The Impact of Continuity Editing in Narrative Film on Event Segmentation, Cognitive Science, vol.35, issue.8, pp.1489-1517, 2011.

G. Mather, R. J. Sharman, and T. Parsons, Visual adaptation alters the apparent speed of real-world actions, Scientific Reports, vol.7, issue.1, 2017.

MPEG: Omnidirectional Media Application Format, 2018.

A. Nguyen, Z. Yan, and K. Nahrstedt, Your attention is unique: Detecting 360-degree video saliency in head-mounted display for head movement prediction, ACM Multimedia Conference, pp.1190-1198, 2018.

O. Niamut, E. Thomas, L. D'Acunto, C. Concolato, F. Denoual et al., MPEG DASH SRD: Spatial relationship description, Proceedings of the 7th ACM Multimedia Systems Conference, 2016.
URL : https://hal.archives-ouvertes.fr/hal-02287283

Oculus: Oculus Best Practices, document no.310-30000, 2017.

A. Pavel, B. Hartmann, and M. Agrawala, Shot orientation controls for interactive cinematography with 360 video, Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, pp.289-297, 2017.

S. Petrangeli, V. Swaminathan, M. Hosseini, and F. D. Turck, An HTTP/2-based adaptive streaming framework for 360 virtual reality videos, ACM Multimedia Conf, 2017.

P. Rondao Alface, J. F. Macq, and N. Verzijp, Interactive omnidirectional video delivery: A bandwidth-effective approach, Bell Labs Technical Journal, vol.16, issue.4, pp.135-147, 2012.

L. Sassatelli, M. Winckler, T. Fisichella, A. Dezarnaud, J. Lemaire et al., TOUCAN-VR Data. Dataset, 2019.

L. Sassatelli, M. Winckler, T. Fisichella, A. Dezarnaud, J. Lemaire et al., , 2019.

L. Sassatelli, A. M. Pinna-Déry, M. Winckler, S. Dambra, G. Samela et al., Snap-changes: A dynamic editing strategy for directing viewer's attention in streaming virtual reality videos, Proceedings of the 2018 International Conference on Advanced Visual Interfaces, Article 46, 2018.

A. Serrano, V. Sitzmann, J. Ruiz-borau, G. Wetzstein, D. Gutierrez et al., Movie Editing and Cognitive Event Segmentation in Virtual Reality Video, ACM Trans. on Graphics, 2017.

A. Singla, S. Göring, A. Raake, B. Meixner, R. Koenen et al., Subjective quality evaluation of tile-based streaming for omnidirectional videos, Proceedings of the 10th ACM Multimedia Systems Conference, pp.232-242, 2019.

V. Sitzmann, A. Serrano, A. Pavel, M. Agrawala, D. Gutierrez et al., Saliency in VR: How Do People Explore Virtual Environments?, IEEE Trans. on Visualization and Computer Graphics, 2018.

A. Slavkovic, Log-likelihood and confidence intervals, Lecture notes, 2005.

International Telecommunication Union, Subjective video quality assessment methods for multimedia applications, Recommendation ITU-T P.910, 2008.

International Telecommunication Union, Methodology for the subjective assessment of the quality of television pictures, Recommendation ITU-R BT.500-13, 2012.

W. Verhelst and M. Roelands, An overlap-add technique based on waveform similarity (WSOLA) for high quality time-scale modification of speech, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol.2, pp.554-557, 1993.

N. Waldin, M. Waldner, and I. Viola, Flicker Observer Effect: Guiding Attention Through High Frequency Flicker in Images, Comput. Graph. Forum, vol.36, issue.2, pp.467-476, 2017.

M. Xiao, C. Zhou, V. Swaminathan, Y. Liu, and S. Chen, BAS-360: Exploring spatial and temporal adaptability in 360-degree videos over HTTP/2, IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, pp.953-961, 2018.

J. Yi, S. Luo, and Z. Yan, A Measurement Study of YouTube 360 degree Live Video Streaming, Proceedings of the 29th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, pp.49-54, 2019.

X. L. Zhang and D. Wang, Boosting contextual information for deep neural network based voice activity detection, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.24, issue.2, pp.252-264, 2016.