Learning Object Class Detectors from Weakly Annotated Video

Abstract: Object detectors are typically trained on a large set of still images annotated with bounding-boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in quality from still images taken by a good camera. Hence, we formulate learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone.
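
The joint selection step described in the abstract (one tube per video, chosen jointly over all videos) can be read as an energy minimization over tube choices. The toy sketch below, with hypothetical tube descriptors, unary scores, and a simple coordinate-descent optimizer, only illustrates that idea under assumed data; it is not the paper's actual formulation or implementation.

```python
# Illustrative sketch (not the paper's code): pick one spatio-temporal tube
# per video so that the chosen tubes score well individually and look
# similar across videos. Descriptors, scores, and the optimizer are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: for each video, a few candidate tubes, each with an appearance
# descriptor and a unary quality score (e.g. how object-like the tube is).
videos = []
for _ in range(5):
    n_tubes = int(rng.integers(3, 6))
    videos.append({
        "desc": rng.normal(size=(n_tubes, 16)),   # appearance descriptors
        "unary": rng.uniform(size=n_tubes),       # per-tube quality score
    })

def pairwise_similarity(d1, d2):
    """Cosine similarity between two tube descriptors."""
    return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-8))

def energy(selection):
    """Lower is better: reward high unary scores and cross-video similarity."""
    e = -sum(videos[v]["unary"][t] for v, t in enumerate(selection))
    for i in range(len(selection)):
        for j in range(i + 1, len(selection)):
            e -= pairwise_similarity(videos[i]["desc"][selection[i]],
                                     videos[j]["desc"][selection[j]])
    return e

# Coordinate descent: repeatedly re-pick the best tube for one video while
# keeping the others fixed, until no single change lowers the energy.
selection = [int(np.argmax(v["unary"])) for v in videos]  # init by unary score
improved = True
while improved:
    improved = False
    for v in range(len(videos)):
        best_t, best_e = selection[v], energy(selection)
        for t in range(len(videos[v]["unary"])):
            cand = selection.copy()
            cand[v] = t
            if energy(cand) < best_e:
                best_t, best_e = t, energy(cand)
        if best_t != selection[v]:
            selection[v] = best_t
            improved = True

print("Selected tube index per video:", selection)
```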
Document type: Conference papers

Cited literature: 37 references

https://hal.inria.fr/hal-00695940
Contributor: Alessandro Prest
Submitted on: Monday, July 2, 2012 - 9:59:33 AM
Last modification on: Tuesday, March 5, 2019 - 9:30:11 AM
Long-term archiving on: Thursday, December 15, 2016 - 7:44:50 PM

File: VO.pdf (files produced by the author(s))

Citation

Alessandro Prest, Christian Leistner, Javier Civera, Cordelia Schmid, Vittorio Ferrari. Learning Object Class Detectors from Weakly Annotated Video. CVPR 2012 - Conference on Computer Vision and Pattern Recognition, Jun 2012, Providence, RI, United States. pp.3282-3289, ⟨10.1109/CVPR.2012.6248065⟩. ⟨hal-00695940v2⟩
