Abstract: Human action recognition is a challenging field that has been addressed with many different classification techniques, such as SVMs or Random Decision Forests, and by considering many different kinds of information: joints, key poses, joint rotation matrices, and angles, for example. This paper presents our approach to action recognition, which considers only the 3D joint positions from the skeleton and trains a two-stage random forest to classify them. We extract skeletal features by computing all angles between any triplet of joints and all distances between any pair of joints, then organizing them into a feature vector for each static pose. Complex dynamic actions are then described by sequences of such feature vectors. We evaluate our approach on the most recent and largest benchmark, the MSRC-12 Kinect Gesture Dataset, and compare our results with state-of-the-art methods on this dataset.
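The static-pose feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes joints are given as 3D coordinate tuples (the joint count and sample values are invented for the example), computes every pairwise distance, and, for every joint triplet, the angle at the middle joint.

```python
# Illustrative sketch of the pose feature vector: all pairwise joint
# distances plus, for each triplet (i, j, k), the angle at joint j.
from itertools import combinations
import math

def pose_features(joints):
    """joints: list of (x, y, z) tuples; returns one feature vector."""
    feats = []
    # distances between all pairs of joints
    for a, b in combinations(joints, 2):
        feats.append(math.dist(a, b))
    # angle at the middle joint j of each triplet (i, j, k)
    for i, j, k in combinations(range(len(joints)), 3):
        u = tuple(joints[i][d] - joints[j][d] for d in range(3))
        v = tuple(joints[k][d] - joints[j][d] for d in range(3))
        dot = sum(ud * vd for ud, vd in zip(u, v))
        nu = math.sqrt(sum(ud * ud for ud in u))
        nv = math.sqrt(sum(vd * vd for vd in v))
        # clamp to [-1, 1] to guard against floating-point drift
        feats.append(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
    return feats

# Hypothetical 4-joint pose: C(4,2) = 6 distances + C(4,3) = 4 angles
pose = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
fv = pose_features(pose)
```

With a real 20-joint Kinect skeleton this yields C(20,2) = 190 distances and C(20,3) = 1140 angles per pose; a dynamic action is then a sequence of such vectors.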