Grassmannian Representation of Motion Depth for 3D Human Gesture and Action Recognition
Abstract
Recently developed commodity depth sensors open up new possibilities for rich descriptors that capture geometric features of the observed scene. Here, we propose an original approach to representing geometric features extracted from the depth motion space, capturing both the geometric appearance and the dynamics of the human body simultaneously. In this approach, sequence features are modeled temporally as subspaces lying on the Grassmannian manifold. Classification is carried out by computing probability density functions on the tangent space of each class, taking advantage of the geometric structure of the Grassmannian manifold. The experimental evaluation is performed on three existing datasets containing various challenges: MSR-Action3D, UT-Kinect, and MSR-Gesture3D. Results reveal that our approach outperforms state-of-the-art methods, with accuracies of 98.21% on MSR-Gesture3D and 95.25% on UT-Kinect, and achieves a competitive 86.21% on MSR-Action3D.
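The core idea of modeling a feature sequence as a subspace on the Grassmannian manifold can be illustrated with a minimal sketch. This is not the authors' implementation; it only assumes the standard construction: a sequence of d-dimensional feature vectors is summarized by the span of its top-k left singular vectors (a point on G(k, d)), and subspaces are compared through their principal angles.

```python
import numpy as np

def grassmann_point(features, k=5):
    """Map a sequence of feature vectors (T x d) to a point on the
    Grassmann manifold G(k, d), represented by a d x k orthonormal
    basis: the top-k left singular vectors of the d x T data matrix."""
    X = np.asarray(features, dtype=float).T          # d x T data matrix
    U, _, _ = np.linalg.svd(X, full_matrices=False)  # left singular vectors
    return U[:, :k]                                  # orthonormal basis of the span

def geodesic_distance(A, B):
    """Geodesic distance between span(A) and span(B), computed from
    the principal angles (arccos of the singular values of A^T B)."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))  # principal angles in [0, pi/2]
    return np.linalg.norm(theta)
```

A nearest-class rule on such distances (or, as in the abstract, density estimation on the tangent space at each class mean) then yields the classifier; `k` and the choice of depth features are free parameters of the pipeline.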
Origin: Files produced by the author(s)