Hand-Object Contact Force Estimation From Markerless Visual Tracking
Abstract
We consider the problem of computing realistic contact forces during manipulation, backed by ground-truth measurements, using vision alone. Interaction forces are traditionally measured by mounting force transducers onto the manipulated objects or the hands. These are costly, cumbersome, and alter the objects' physical properties as well as their perception by the human sense of touch. Our work establishes that interaction forces can be estimated in a cost-effective, reliable, non-intrusive way using vision. This is a complex and challenging problem: in multi-contact, a given trajectory can generally be caused by an infinite number of possible force distributions. To alleviate the limitations of traditional models based on inverse optimization, we collect and release the first large-scale dataset on manipulation kinodynamics, comprising 3.2 hours of synchronized force and motion measurements over 193 object-grasp configurations. We learn a mapping between high-level kinematic features based on the equations of motion and the underlying manipulation forces using recurrent neural networks (RNN). The RNN predictions are consistently refined using physics-based optimization through second-order cone programming (SOCP). We show that our method can successfully capture interaction forces compatible with both the observations and the way humans naturally manipulate objects, using an acquisition setup no more complex than a single RGB-D camera.
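The physical consistency that the SOCP refinement step enforces is built on the Coulomb friction cone: at each contact, the tangential force magnitude must not exceed the friction coefficient times the normal force. As a minimal sketch of this constraint (not the paper's actual optimization, which refines forces jointly across contacts under the equations of motion), the following hypothetical helper projects a single predicted 3D contact force onto the friction cone via the standard Euclidean second-order-cone projection:

```python
import numpy as np

def project_to_friction_cone(f, n, mu):
    """Project force f (3-vector) onto the Coulomb friction cone
    {f : ||f_t|| <= mu * f_n} with unit contact normal n.
    Illustrative simplification: one contact, Euclidean projection."""
    fn = np.dot(f, n)            # normal component magnitude
    ft_vec = f - fn * n          # tangential component
    ft = np.linalg.norm(ft_vec)
    # Already inside the cone: physically consistent, keep as-is.
    if ft <= mu * fn:
        return f.copy()
    # In the polar cone: the closest feasible point is the apex (zero force).
    if fn < -mu * ft:
        return np.zeros(3)
    # Otherwise, project onto the cone boundary.
    scale = (fn + mu * ft) / (1.0 + mu ** 2)
    t_hat = ft_vec / ft
    return scale * (n + mu * t_hat)
```

For example, a purely tangential prediction `[1, 0, 0]` against a normal `[0, 0, 1]` with `mu = 1` is mapped to `[0.5, 0, 0.5]`, trading some tangential force for normal force so the result lies on the cone boundary. The full SOCP additionally couples all contacts through the object's Newton-Euler equations, which is what makes the refined distribution compatible with the observed motion.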
Origin: Files produced by the author(s)