Zero-Shot Object Recognition Based on Haptic Attributes

Zineb Abderrahmane¹, Gowrishankar Ganesh²,¹, Andrea Cherubini¹, André Crosnier¹
1 IDH - Interactive Digital Humans
LIRMM - Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier
Abstract: Robots operating in household environments need to recognize a variety of objects. Several touch-based object recognition systems have been proposed in recent years [2]–[5]. They map haptic data to object classes using machine learning techniques, and then use the learned mapping to recognize one of the previously encountered objects. The accuracy of these methods depends on the number of training samples available for each object class. On the other hand, haptic data collection is often system (robot) specific and labour intensive. One way to cope with this problem is a knowledge-transfer-based system that exploits object relationships to share learned models between objects. However, while knowledge-transfer systems such as zero-shot learning [6] have regularly been proposed for visual object recognition, no comparable system exists for haptic recognition. Here we developed [1] the first haptic zero-shot learning system, which enables a robot to recognize, through haptic exploration alone, objects that it encounters for the first time. Our system uses the Direct Attribute Prediction (DAP) model [7] to train on a semantic representation of objects given by a list of haptic attributes, rather than on the objects themselves. The attributes (physical properties such as shape, texture and material) constitute an intermediate layer that relates objects and is used for knowledge transfer. Thanks to this layer, our system can predict the attribute-based representation of a new (previously untrained) object and use it to infer the object's identity.

A. System Overview

An overview of our system is given in Fig. 1. Given disjoint training and test object sets Y and Z, both described by an attribute basis a, we first associate a binary label a_m^o with each object o, where o ∈ Y ∪ Z and m = 1, ..., M. This results in a binary object-attribute matrix K.
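As a minimal sketch of the object-attribute matrix K described above (with a hypothetical three-object, three-attribute toy set, not the paper's actual objects or full attribute list):

```python
import numpy as np

# Hypothetical objects (Y ∪ Z) and attributes a_1..a_M -- illustrative only,
# not the paper's actual object set or full 11-attribute list.
objects = ["mug", "bottle", "box"]
attributes = ["cylindrical", "concave", "has_a_handle"]

# Binary object-attribute matrix K: K[o, m] = 1 iff object o has attribute m.
K = np.array([
    [1, 1, 1],   # mug: cylindrical, concave, has a handle
    [1, 0, 0],   # bottle: cylindrical only
    [0, 0, 0],   # box: none of these three attributes
])
```

Each row is an object's binary attribute signature; the rows for unseen (test) objects are what makes zero-shot inference possible.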
Given the attribute list, haptic data collected from Y are used during training to train a binary classifier for each attribute a_m. Then, to classify a test sample x as one of the Z objects, x is fed to each of the learned attribute classifiers, and the output attribute posteriors p(a_m | x) are used to predict the corresponding object, using the ground-truth attribute signatures available in K. This extended abstract is a summary of submission [1].

B. Experimental Setup

To collect haptic data, we use the Shadow anthropomorphic robotic hand equipped with a BioTac multimodal tactile sensor on each fingertip. We developed a force-based grasp controller that enables the hand to enclose an object. The joint encoder readings provide information on object shape, while the BioTac sensors provide information about object material, texture and compliance at each fingertip. To find an appropriate list of attributes describing our object set (illustrated in Fig. 2), we used online dictionaries to collect one or more textual definitions of each object. From these definitions, we extracted 11 haptic adjectives, i.e., descriptions that could be "felt" using our robot hand. These adjectives served as our attributes: made of porcelain, made of plastic, made of glass, made of cardboard, made of stainless steel, cylindrical, round, rectangular, concave, has a handle, has a narrow part. We grouped them into material attributes and shape attributes. During the training phase, we use the Shadow hand joint readings x_sh to train an SVM classifier for each shape attribute, and the BioTac readings x_b to train an SVM classifier for each material attribute. For each sample x, the trained SVM returns a score s_m(x) that measures how far x lies from the discriminant hyperplane. We transform this score into an attribute posterior p(a_m | x) using a sigmoid function.
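The train-then-infer pipeline can be sketched as follows. Synthetic random features stand in for the Shadow hand and BioTac readings, scikit-learn's `SVC` stands in for the SVM, and the prediction rule shown (product of matched attribute posteriors over each test object's row of K, with uniform attribute priors) is one standard DAP instantiation, not necessarily the paper's exact formula:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

M = 3                                        # number of attributes (toy value)
X_train = rng.normal(size=(60, 5))           # synthetic stand-in for x_sh / x_b
A_train = (X_train[:, :M] > 0).astype(int)   # synthetic per-sample attribute labels

# One binary SVM per attribute a_m, trained on data from the seen objects Y.
clfs = [SVC(kernel="linear").fit(X_train, A_train[:, m]) for m in range(M)]

def attribute_posteriors(x):
    """Squash each SVM's signed score s_m(x) into p(a_m | x) via a sigmoid."""
    s = np.array([c.decision_function(x.reshape(1, -1))[0] for c in clfs])
    return 1.0 / (1.0 + np.exp(-s))

# Hypothetical attribute signatures (rows of K) for two unseen test objects Z.
K_test = np.array([[1, 1, 0],
                   [0, 0, 1]])

def predict_object(x):
    """DAP-style inference: score each unseen object by the product of the
    attribute posteriors that match its signature, then take the argmax."""
    p = attribute_posteriors(x)
    scores = np.prod(np.where(K_test == 1, p, 1.0 - p), axis=1)
    return int(np.argmax(scores))
```

A test sample never seen at training time is thus assigned to whichever unseen object's binary attribute signature best explains the predicted posteriors.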
Document type :
Conference papers

Contributor: Andrea Cherubini
Submitted on : Tuesday, May 16, 2017 - 9:00:08 AM
Last modification on : Wednesday, June 19, 2019 - 2:20:18 PM
Document(s) archived on: Friday, August 18, 2017 - 12:39:22 AM




  • HAL Id : hal-01523030, version 1



Zineb Abderrahmane, Gowrishankar Ganesh, Andrea Cherubini, André Crosnier. Zero-Shot Object Recognition Based on Haptic Attributes. ICRA Workshop "The Robotic Sense of Touch", May 2017, Singapore. ⟨hal-01523030⟩


