The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents - HAL open archive
Conference paper - Year: 2016

The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents

Hazim Ekenel (Author)
Georges Quénot (Author)

Abstract

In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analyses that can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated with a media fragment from the corpus, are stored in a database and can be managed through standard authenticated interfaces. Interfaces tailored to the targeted task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed as open source.
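
The abstract describes annotations as layers of data attached to media fragments, managed through standard authenticated Web services. As an illustration only, the following minimal Python sketch shows how a client could push and query such annotations over a REST-style HTTP API; the server URL, endpoint paths, payload fields and login scheme used here are assumptions for the sake of the example, not the documented CAMOMILE API.

```python
import requests

# Hypothetical base URL and credentials for a CAMOMILE-style annotation server.
SERVER = "http://localhost:3000"
session = requests.Session()

# 1. Authenticate (endpoint name and payload are assumptions).
session.post(f"{SERVER}/login",
             json={"username": "annotator", "password": "secret"})

# 2. Create an annotation layer attached to a corpus (illustrative payload).
layer = session.post(
    f"{SERVER}/corpus/my_corpus_id/layer",
    json={"name": "person identity",
          "fragment_type": "segment",
          "data_type": "label"},
).json()

# 3. Add one annotation: a label attached to a temporal fragment of a medium.
session.post(
    f"{SERVER}/layer/{layer['_id']}/annotation",
    json={
        "id_medium": "my_medium_id",
        "fragment": {"start": 12.3, "end": 15.7},  # media fragment, in seconds
        "data": "Barack OBAMA",                    # the label itself
    },
)

# 4. Retrieve all annotations of the layer for further processing.
annotations = session.get(f"{SERVER}/layer/{layer['_id']}/annotation").json()
print(len(annotations), "annotations in layer")
```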
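
The reported active-learning result (716 manually annotated speech segments propagated to 3504 labeled tracks) rests on propagating a manual label to the other tracks grouped with it by automatic clustering. The sketch below is a generic illustration of that propagation step, assuming a precomputed mapping from tracks to clusters; it is not the paper's actual implementation.

```python
from collections import defaultdict

def propagate_labels(manual_labels, track_to_cluster):
    """Propagate manually assigned labels to every track in the same cluster.

    manual_labels    : dict mapping an annotated track id -> person label
    track_to_cluster : dict mapping every track id -> cluster id
                       (e.g. produced by speaker or face clustering)
    """
    # Collect the labels observed in each cluster.
    cluster_labels = defaultdict(set)
    for track_id, label in manual_labels.items():
        cluster_labels[track_to_cluster[track_id]].add(label)

    # Label every track whose cluster received exactly one manual label;
    # ambiguous clusters (conflicting labels) are left unlabeled.
    propagated = {}
    for track_id, cluster_id in track_to_cluster.items():
        labels = cluster_labels.get(cluster_id, set())
        if len(labels) == 1:
            propagated[track_id] = next(iter(labels))
    return propagated

# Toy usage: 2 manual annotations end up labeling 4 tracks.
clusters = {"t1": "c1", "t2": "c1", "t3": "c2", "t4": "c2", "t5": "c3"}
print(propagate_labels({"t1": "OBAMA", "t3": "MERKEL"}, clusters))
```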
Main file: 456_Paper.pdf (679.91 KB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-01350096, version 1 (29-07-2016)

Identifiers

  • HAL Id: hal-01350096, version 1

Cite

Johann Poignant, Mateusz Budnik, Hervé Bredin, Claude Barras, Mickael Stefas, et al.. The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents. LREC 2016 Conference, May 2016, Portoroz, Slovenia. ⟨hal-01350096⟩
554 Views
154 Downloads
