Abstract: Recent developments in affordable depth sensors open new possibilities for the action recognition problem. Depth information improves skeleton detection; therefore, many authors have focused on analyzing pose for action recognition. However, skeleton detection is still not robust and fails in more challenging scenarios, where the sensor is placed outside its optimal working range and serious occlusions occur. In this paper we investigate state-of-the-art methods designed for RGB videos, which have proven their performance. We then extend current state-of-the-art algorithms to benefit from depth information without the need for skeleton detection. We propose two novel video descriptors: the first combines motion and 3D information, and the second improves performance on actions with a low movement rate. We validate our approach on the challenging MSR DailyActivity3D dataset.
https://hal.inria.fr/hal-01054949
Michal Koperski, Piotr Bilinski, François Bremond. 3D Trajectories for Action Recognition. ICIP - The 21st IEEE International Conference on Image Processing, IEEE, Oct 2014, Paris, France. ⟨hal-01054949⟩
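The abstract's first descriptor combines motion and 3D information without relying on skeleton detection. As an illustration only (the paper publishes no code and the abstract does not give the exact descriptor definition), the sketch below assumes a dense-trajectory-style representation in which a tracked 2D point sequence is lifted to 3D by sampling the aligned depth maps, and the descriptor is built from normalized frame-to-frame displacements; the function name and all parameters are hypothetical.

```python
# Illustrative sketch, not the authors' implementation: a 2D point track is
# augmented with depth samples to form a 3D displacement descriptor.
import numpy as np

def trajectory_3d_descriptor(xy_track, depth_frames):
    """Build a normalized 3D displacement descriptor from a 2D point track.

    xy_track     : (T, 2) array of tracked (x, y) pixel positions, one per frame.
    depth_frames : list of T depth maps (H, W), registered to the RGB frames.
    """
    # Sample depth along the track to lift 2D points into (x, y, z).
    z = np.array([depth_frames[t][int(y), int(x)]
                  for t, (x, y) in enumerate(xy_track)], dtype=np.float64)
    pts = np.column_stack([xy_track, z])          # (T, 3) trajectory

    # Frame-to-frame displacements, as in 2D dense trajectories but with depth.
    disp = np.diff(pts, axis=0)                   # (T-1, 3)

    # Normalize by total displacement magnitude so the descriptor is
    # scale-invariant; guard against static (zero-motion) tracks.
    norm = np.sum(np.linalg.norm(disp, axis=1))
    if norm < 1e-9:
        return np.zeros(disp.size)
    return (disp / norm).ravel()                  # flattened descriptor

# Toy usage with random data standing in for real RGB-D input.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(size=(15, 2)), axis=0) + 100.0
depths = [rng.uniform(0.5, 4.0, size=(240, 320)) for _ in range(15)]
print(trajectory_3d_descriptor(track, depths).shape)   # (42,) = (15 - 1) * 3
```

In a full pipeline such descriptors would typically be aggregated over all trajectories in a video (e.g., with a bag-of-features encoding) before classification; the details above are assumptions for illustration, not the method reported in the paper.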