
Models for video enrichment

Benoît Encelle 1 Pierre-Antoine Champin 1 Yannick Prié 1 Olivier Aubert 1
1 SILEX - Supporting Interaction and Learning by Experience
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: Videos are commonly augmented with additional content such as captions, images, audio, hyperlinks, etc., which is rendered while the video is being played. We call the result of this rendering "enriched videos". This article details an annotation-based approach for producing enriched videos: enrichment mainly consists of textual annotations associated with temporal parts of the video, which are rendered during playback. We first introduce the key notion of enriched video and its associated concepts, and then present the models we have developed for annotating videos and for presenting annotations during playback. Finally, we give an overview of a general workflow for producing and viewing enriched videos. This workflow notably illustrates how the proposed models can be used to improve the accessibility of videos for people with sensory disabilities.
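The annotation model summarized in the abstract could be sketched as follows; this is a minimal illustration of the idea of textual annotations attached to temporal fragments of a video, not code from the paper, and all names and fields (`Annotation`, `begin`, `end`, `kind`, `body`, `active_annotations`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    begin: float  # start of the annotated temporal fragment, in seconds
    end: float    # end of the fragment, in seconds
    kind: str     # e.g. "caption", "audio-description", "hyperlink"
    body: str     # textual content rendered while the fragment plays

def active_annotations(annotations, t):
    """Return the annotations whose temporal fragment covers playback time t."""
    return [a for a in annotations if a.begin <= t < a.end]

annotations = [
    Annotation(0.0, 5.0, "caption", "Opening scene"),
    Annotation(3.0, 8.0, "audio-description", "A door opens slowly"),
]

# At playback time t = 4.0 s, both fragments overlap, so both are rendered.
print([a.body for a in active_annotations(annotations, 4.0)])
```

A player would evaluate such a query against the current playback time and render the matching annotations according to presentation rules (e.g. captions as overlaid text, audio descriptions as synthesized speech).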
Document type :
Conference papers

Contributor: Équipe Gestionnaire Des Publications Si Liris
Submitted on : Thursday, August 18, 2016 - 7:30:58 PM
Last modification on : Wednesday, June 30, 2021 - 7:12:01 PM



Benoît Encelle, Pierre-Antoine Champin, Yannick Prié, Olivier Aubert. Models for video enrichment. Document Engineering 2011 (DocEng 2011), Sep 2011, Mountain View, CA, United States. pp.85-88, ⟨10.1145/2034691.2034710⟩. ⟨hal-01354516⟩