Naive Bayesian fusion for action recognition from Kinect - Archive ouverte HAL
Conference paper, Year: 2017

Naive Bayesian fusion for action recognition from Kinect

Abstract

The recognition of human actions from three-dimensional depth data has become a very active research field in computer vision. In this paper, we study fusion at the feature and decision levels for depth data captured by a Kinect camera in order to improve action recognition. More precisely, from each depth video sequence we compute Depth Motion Maps (DMMs) from three projection views: front, side and top. Shape and texture features are then extracted from the obtained DMMs; these features are based essentially on Histogram of Oriented Gradients (HOG) and Local Binary Patterns (LBP) descriptors. We propose to use two fusion levels. The first is a feature-level fusion based on the concatenation of the HOG and LBP descriptors. The second is a score-level fusion based on the naive-Bayes combination approach, which aggregates the scores of three classifiers: a collaborative representation classifier, a sparse representation classifier and a kernel-based extreme learning machine classifier. Experimental results on two public datasets, Kinect v2 and UTD-MHAD, show that our approach achieves high recognition accuracy and outperforms several existing methods.
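The pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the side and top projections are approximated by max-projections along the width and height axes (a simplification of the binned orthogonal projections used in the DMM literature), and `naive_bayes_fusion` shows only the score-combination step, assuming each classifier outputs a per-class posterior vector.

```python
import numpy as np

def depth_motion_maps(depth_video):
    """Compute front/side/top Depth Motion Maps from a (T, H, W) depth sequence.

    Front view is the depth map itself; side and top views are approximated
    here by max-projections (a simplification, for illustration only).
    The DMM of a view accumulates absolute differences between consecutive
    projected frames: DMM_v = sum_t |P_v[t+1] - P_v[t]|.
    """
    views = {
        "front": depth_video,              # (T, H, W)
        "side":  depth_video.max(axis=2),  # (T, H) -- simplified projection
        "top":   depth_video.max(axis=1),  # (T, W) -- simplified projection
    }
    return {name: np.abs(np.diff(p, axis=0)).sum(axis=0)
            for name, p in views.items()}

def naive_bayes_fusion(classifier_scores):
    """Fuse per-class posterior vectors from several classifiers.

    Under the naive-Bayes conditional-independence assumption, the fused
    posterior is the product of the individual posteriors; we work in log
    space for numerical stability and return the winning class index.
    """
    log_post = sum(np.log(np.asarray(s) + 1e-12) for s in classifier_scores)
    return int(np.argmax(log_post))
```

In the paper's setting, the three score vectors passed to `naive_bayes_fusion` would come from the collaborative representation, sparse representation and kernel-based extreme learning machine classifiers, each trained on the concatenated HOG+LBP features of the three DMMs.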

Dates and versions

hal-01870502 , version 1 (07-09-2018)

Identifiers

Cite

Amel Ben Mahjoub, Mohamed Ibn Khedher, Mohamed Atri, Mounim El Yacoubi. Naive Bayesian fusion for action recognition from Kinect. DPPR 2017: 7th International Conference on Digital Image Processing and Pattern Recognition, Dec 2017, Sydney, Australia. pp. 53-69, ⟨10.5121/csit.2017.71606⟩. ⟨hal-01870502⟩