Conference paper

TAO: A Large-Scale Benchmark for Tracking Any Object

Achal Dave 1, Tarasha Khurana 1, Pavel Tokmakov 1, Cordelia Schmid 2,3, Deva Ramanan 1,4
2 Thoth - Learning models from massive data
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
3 WILLOW - Models of visual object recognition and scene understanding
Inria de Paris, DI-ENS - Computer Science Department of the École normale supérieure
Abstract: For many years, multi-object tracking benchmarks have focused on a handful of categories. Motivated primarily by surveillance and self-driving applications, these datasets provide tracks for people, vehicles, and animals, ignoring the vast majority of objects in the world. By contrast, in the related field of object detection, the introduction of large-scale, diverse datasets (e.g., COCO) has fostered significant progress in developing highly robust solutions. To bridge this gap, we introduce a similarly diverse dataset for Tracking Any Object (TAO). It consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average. Importantly, we adopt a bottom-up approach for discovering a large vocabulary of 833 categories, an order of magnitude more than prior tracking benchmarks. To this end, we ask annotators to label objects that move at any point in the video, and to name them post factum. Our vocabulary is both significantly larger than and qualitatively different from those of existing tracking datasets. To ensure scalability of annotation, we employ a federated approach that focuses manual effort on labeling tracks for the relevant objects in a video (e.g., those that move). We perform an extensive evaluation of state-of-the-art trackers and make a number of important discoveries regarding large-vocabulary tracking in an open world. In particular, we show that existing single- and multi-object trackers struggle when applied to this scenario in the wild, and that detection-based, multi-object trackers are in fact competitive with user-initialized ones. We hope that our dataset and analysis will boost further progress in the tracking community.
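The federated annotation protocol mentioned in the abstract implies that, in any given video, only a verified subset of the 833 categories is labeled exhaustively, so trackers should not be penalized for predictions on categories that were never verified in that video. The sketch below illustrates this filtering idea in Python. It is a minimal illustration of the concept under that assumption, not the official TAO evaluation code; the data structures and the helper name `filter_predictions` are hypothetical choices for this example.

```python
# Minimal sketch of federated filtering: per video, only predictions for
# categories that annotators verified in that video are kept for scoring.
# Names and data structures here are hypothetical, not the official TAO API.
from typing import Dict, List, Set


def filter_predictions(
    predictions: List[dict],                    # each: {"video_id", "category_id", "track_id", "score"}
    verified_categories: Dict[str, Set[int]],   # video_id -> category ids annotated in that video
) -> List[dict]:
    """Drop predictions whose category was not verified for the video,
    so trackers are not penalized for detecting unannotated categories."""
    kept = []
    for pred in predictions:
        allowed = verified_categories.get(pred["video_id"], set())
        if pred["category_id"] in allowed:
            kept.append(pred)
    return kept


if __name__ == "__main__":
    # Toy example: video "v1" was only verified for categories 3 and 17.
    verified = {"v1": {3, 17}}
    preds = [
        {"video_id": "v1", "category_id": 3, "track_id": 0, "score": 0.9},
        {"video_id": "v1", "category_id": 42, "track_id": 1, "score": 0.8},  # filtered out
    ]
    print(filter_predictions(preds, verified))  # only the category-3 track remains
```

In this scheme, the remaining predictions would then be scored against the annotated tracks with a standard tracking metric; the key point is that unverified categories are simply ignored rather than counted as false positives.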

https://hal.archives-ouvertes.fr/hal-02951747


Citation

Achal Dave, Tarasha Khurana, Pavel Tokmakov, Cordelia Schmid, Deva Ramanan. TAO: A Large-Scale Benchmark for Tracking Any Object. ECCV 2020 - European Conference on Computer Vision, Aug 2020, Glasgow / Virtual, United Kingdom. pp.436-454, ⟨10.1007/978-3-030-58558-7_26⟩. ⟨hal-02951747⟩
