Journal article in Advances in Multimedia, 2013

Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

Abstract

The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semi-supervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public database and a real-world video sequence database show the gain brought by the different stages of the method.
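The temporal continuity exploited by the time regularization can be illustrated with a minimal sketch: neighboring frames of a video almost always show the same place, so isolated per-frame misclassifications can be suppressed by a sliding-window majority vote over frame-level predictions. This is a simplified stand-in for the paper's actual regularization, not the authors' formulation; the function name and window parameter are illustrative assumptions.

```python
import numpy as np

def temporal_smoothing(frame_labels, window=5):
    """Majority-vote smoothing of per-frame place labels over a sliding
    window, exploiting the temporal continuity of wearable video.
    NOTE: illustrative sketch only, not the method from the paper."""
    n = len(frame_labels)
    half = window // 2
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        # Most frequent label inside the window wins.
        vals, counts = np.unique(frame_labels[lo:hi], return_counts=True)
        smoothed.append(vals[np.argmax(counts)])
    return np.array(smoothed)

# A spurious single-frame prediction (place 2) inside a run of place 0
# is corrected by its temporal neighborhood.
labels = np.array([0, 0, 2, 0, 0, 1, 1, 1, 1])
print(temporal_smoothing(labels, window=3))  # → [0 0 0 0 0 1 1 1 1]
```

In practice the paper regularizes classifier confidence scores over time rather than hard labels, which lets genuinely ambiguous frames keep their uncertainty; the hard-label vote above is only the simplest instance of the idea.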
Main file: 175064.pdf (5.36 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-00833474 , version 1 (12-06-2013)

Identifiers

Cite

Vladislavs Dovgalecs, Rémi Megret, Yannick Berthoumieu. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video. Advances in Multimedia, 2013, Article ID 175064. ⟨10.1155/2013/175064⟩. ⟨hal-00833474⟩
225 views
87 downloads

