Towards Automated Computer Vision: Analysis of the AutoCV Challenges 2019 - Archive ouverte HAL
Journal article in Pattern Recognition Letters, Year: 2020

Towards Automated Computer Vision: Analysis of the AutoCV Challenges 2019

Abstract

We present the results of recent challenges in Automated Computer Vision (AutoCV, renamed here for clarity AutoCV1 and AutoCV2, 2019), which are part of a series of challenges on Automated Deep Learning (AutoDL). These two competitions aim at searching for fully automated solutions for classification tasks in computer vision, with an emphasis on anytime performance. The first competition was limited to image classification, while the second one included both images and videos. Our design required participants to submit their code on a challenge platform for blind testing on five datasets, where both training and testing were performed without any human intervention whatsoever. Winning solutions adopted deep learning techniques based on already published architectures, such as AutoAugment, MobileNet and ResNet, to reach state-of-the-art performance within the time budget of the challenge (only 20 minutes of GPU time). The novel contributions include strategies to deliver good preliminary results at any time during the learning process, such that a method can be stopped early and still deliver good performance. This feature is key for the adoption of such techniques by data analysts wishing to obtain preliminary results rapidly on large datasets and to speed up the development process. The soundness of our design was verified in several respects: (1) little overfitting of the on-line leaderboard providing feedback on 5 development datasets was observed, compared to the final blind testing on the 5 (separate) final test datasets, suggesting that winning solutions might generalize to other computer vision classification tasks; (2) error bars on the winners' performance allow us to say with confidence that they performed significantly better than the baseline solutions we provided; (3) the ranking of participants according to the anytime metric we designed, namely the Area under the Learning Curve, was different from that of the fixed-time metric, i.e. AUC at the end of the fixed time budget. We released all winning solutions under open-source licenses. At the end of the AutoDL challenge series, all challenge data will be made publicly available, thus providing a collection of uniformly formatted datasets, which can serve to conduct further research, particularly on meta-learning.
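
The anytime metric mentioned above, the Area under the Learning Curve (ALC), rewards methods that reach good scores early within the time budget. The sketch below (not the official challenge scoring code) illustrates the idea under simplifying assumptions: scores reported at successive timestamps are treated as a step function and integrated on a linear time axis normalized by the budget, whereas the actual challenge scoring may apply a different time normalization. The function name and the example values are illustrative only.

def area_under_learning_curve(timestamps, scores, time_budget):
    """Illustrative ALC computation (assumption: step-function integration on a
    linear time axis, normalized by the time budget; not the official metric).

    timestamps: increasing times (in seconds) at which predictions were scored
    scores: corresponding scores (e.g. normalized AUC) in [0, 1]
    time_budget: total allowed time, e.g. 20 * 60 seconds as in AutoCV
    """
    if not timestamps:
        return 0.0
    area = 0.0
    # The score achieved at time t_i is assumed to hold until the next scored
    # prediction (or the end of the budget); time before the first prediction
    # contributes zero.
    for i, (t, s) in enumerate(zip(timestamps, scores)):
        t_next = timestamps[i + 1] if i + 1 < len(timestamps) else time_budget
        area += s * (min(t_next, time_budget) - min(t, time_budget))
    return area / time_budget  # a constant perfect score over the budget gives 1.0


# Example: a method reaching a good score early obtains a higher ALC than one
# that only peaks at the end, even if the final scores are identical.
early = area_under_learning_curve([60, 300, 900], [0.6, 0.7, 0.8], 1200)  # ~0.67
late = area_under_learning_curve([1100], [0.8], 1200)                     # ~0.07
print(early, late)
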
Main file
AutoCV_Analysis_preprint.pdf (1.13 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02386805, version 1 (29-11-2019)

Identifiers

  • HAL Id: hal-02386805, version 1

Cite

Zhengying Liu, Zhen Xu, Sergio Escalera, Isabelle Guyon, Julio C S Jacques Junior, et al.. Towards Automated Computer Vision: Analysis of the AutoCV Challenges 2019. Pattern Recognition Letters, 2020, 135, pp.196-203. ⟨hal-02386805⟩
246 Views
208 Downloads
