Using vision and haptic sensing for human-humanoid haptic joint actions - HAL open archive
Conference paper. Year: 2013

Using vision and haptic sensing for human-humanoid haptic joint actions

Abstract

Human-humanoid haptic joint actions are collaborative tasks that require a sustained haptic interaction between both parties. As such, most research in this field has concentrated on using only the robot's haptic sensing to extract the human partner's intentions, and interaction controllers are designed from this information. In this paper, the addition of visual sensing is investigated, and a suitable framework is developed to accomplish this. The framework is then tested on an example haptic joint action, namely collaboratively carrying a table, with a visual task implemented on top of it. In one case, the aim is to keep the table level, taking gravity into account. In another case, a freely moving ball is balanced to keep it from falling off the table. The experimental results show that the framework properly exploits both information sources to accomplish the task.
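To make the idea of combining the two sensing modalities concrete, here is a purely illustrative sketch, not the authors' controller: a vertical hand-velocity command that blends a haptic cue (the force error at the grasp) with a visual cue (the table inclination seen by the camera). The function name, gains, and admittance-style law are all assumptions for illustration.

```python
def fused_hand_velocity(force_z, force_ref, tilt_rad,
                        k_force=0.002, k_vision=0.5):
    """Illustrative vertical hand-velocity command (m/s).

    force_z   : measured vertical force at the robot's grasp point (N)
    force_ref : nominal share of the load the robot should carry (N)
    tilt_rad  : table inclination estimated by vision (rad); positive
                means the robot's side of the table is too low
    k_force, k_vision : hypothetical gains, not taken from the paper
    """
    # Haptic term: comply with the partner's pull/push (admittance-style).
    v_haptic = k_force * (force_z - force_ref)
    # Visual term: raise or lower the hand to level the table.
    v_visual = k_vision * tilt_rad
    return v_haptic + v_visual
```

With a balanced load and a level table the command is zero; an excess measured force or a visible tilt each produces a corrective motion, which is the intuition behind using both sensors rather than haptics alone.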
Main file: 2013_cisram_agravante-using_vision_and_haptic_sensing_for_human_humanoid_joint_actions.pdf (679.92 KB)
Origin: files produced by the author(s)

Dates and versions

lirmm-00908439, version 1 (22-11-2013)

Identifiers

Cite

Don Joven Agravante, Andrea Cherubini, Abderrahmane Kheddar. Using vision and haptic sensing for human-humanoid haptic joint actions. RAM: Robotics, Automation and Mechatronics, Nov 2013, Manila, Philippines. pp.13-18, ⟨10.1109/RAM.2013.6758552⟩. ⟨lirmm-00908439⟩
376 views
342 downloads

