Conference papers

ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition

Abstract: In this paper, we present two large multi-modal video datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which contains more than 50,000 gestures and was built for the "one-shot-learning" competition. To increase the potential of the original data, we designed two new, well-curated datasets covering 249 gesture labels and comprising 47,933 gestures, each manually annotated with its begin and end frames within the sequences. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for "user-independent" gesture recognition. The first challenge targets gesture spotting and recognition in continuous sequences of gestures, while the second targets gesture classification from segmented data. A baseline method based on the bag of visual words (BoVW) model is also presented.
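For readers unfamiliar with the baseline, the following is a minimal, self-contained sketch of a generic BoVW recognition pipeline: pool local descriptors from training videos, cluster them into a visual vocabulary, represent each video as a histogram of visual words, and train a classifier on the histograms. The record does not give the paper's actual baseline details (descriptor type, vocabulary size, classifier), so every concrete choice below (the synthetic descriptors, the vocabulary size, the RBF-SVM) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

N_CLASSES = 5    # stand-in for the 249 gesture labels in IsoGD/ConGD
VOCAB_SIZE = 50  # number of visual words (k-means centroids); assumed
DESC_DIM = 32    # local-descriptor dimensionality; assumed

# One synthetic "prototype" per class so the toy problem is learnable.
class_centers = rng.normal(size=(N_CLASSES, DESC_DIM))

def fake_descriptors(label, n=100):
    """Synthetic local descriptors for one video. A real pipeline would
    instead compute frame-level features such as HOG/HOF or dense
    trajectories from the RGB-D streams."""
    return class_centers[label] + rng.normal(scale=0.5, size=(n, DESC_DIM))

# 1. Pool local descriptors from all training videos.
train_labels = rng.integers(0, N_CLASSES, size=60)
train_descs = [fake_descriptors(y) for y in train_labels]

# 2. Learn the visual vocabulary by clustering the pooled descriptors.
vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=10, random_state=0)
vocab.fit(np.vstack(train_descs))

def bovw_histogram(descs):
    """Assign each descriptor to its nearest visual word and return an
    L1-normalized word-count histogram for the whole video."""
    words = vocab.predict(descs)
    hist = np.bincount(words, minlength=VOCAB_SIZE).astype(float)
    return hist / hist.sum()

# 3. Represent every video as a BoVW histogram and train a classifier.
X_train = np.array([bovw_histogram(d) for d in train_descs])
clf = SVC(kernel="rbf").fit(X_train, train_labels)

# 4. Evaluate on fresh synthetic videos.
test_labels = rng.integers(0, N_CLASSES, size=20)
X_test = np.array([bovw_histogram(fake_descriptors(y)) for y in test_labels])
print("toy accuracy:", (clf.predict(X_test) == test_labels).mean())
```

For continuous sequences (ConGD), the same video-level representation would be applied to temporally segmented windows, with gesture spotting handled as a separate step before classification.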
Complete list of metadata

https://hal.archives-ouvertes.fr/hal-01381151
Contributor: Isabelle Guyon
Submitted on: Friday, October 14, 2016 - 9:29:59 AM
Last modification on: Wednesday, September 16, 2020 - 5:11:40 PM

Identifiers

  • HAL Id: hal-01381151, version 1

Citation

Jun Wan, Yibing Zhao, Shuai Zhou, Isabelle Guyon, Sergio Escalera, et al. ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition. CVPR 2016 - IEEE Conference on Computer Vision and Pattern Recognition - Workshops, 2016, Las Vegas, United States. ⟨hal-01381151⟩
