RECIPE RECOGNITION WITH LARGE MULTIMODAL FOOD DATASET - Archive ouverte HAL
Conference paper. Year: 2015

RECIPE RECOGNITION WITH LARGE MULTIMODAL FOOD DATASET

Abstract

This paper addresses automatic recipe recognition from images. To this end, we compare and evaluate leading vision-based and text-based technologies on a new, very large multimodal dataset (UPMC Food-101) containing about 100,000 recipes across 101 food categories. Each item in the dataset is represented by an image plus textual information. We present extensive recipe-recognition experiments on this dataset using visual information, textual information, and their fusion. We also report experiments with text-based embeddings that represent any food-related word in a continuous semantic space. In addition, we compare our dataset with a twin dataset provided by ETH Zurich (ETHZ): we revisit their data-collection protocols and carry out transfer-learning experiments to highlight similarities and differences between the two datasets. Finally, we propose a practical application that lets everyday users identify recipes: a web search engine through which any mobile device can submit a query image and retrieve the most relevant recipes in our dataset.
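The abstract mentions combining visual and textual information through fusion. One common way to do this is late fusion, where each modality's classifier produces per-class scores that are converted to probabilities and averaged. The sketch below is purely illustrative and hypothetical; the paper's actual fusion scheme, models, and scores are not reproduced here.

```python
import math

def softmax(scores):
    """Convert raw classifier scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion(visual_scores, text_scores, alpha=0.5):
    """Weighted average of per-class probabilities from two modalities.

    alpha weights the visual modality; (1 - alpha) weights the text modality.
    Both score lists must cover the same classes in the same order.
    """
    p_v = softmax(visual_scores)
    p_t = softmax(text_scores)
    return [alpha * v + (1 - alpha) * t for v, t in zip(p_v, p_t)]

# Toy example over 3 hypothetical recipe classes:
visual = [2.0, 0.5, 0.1]   # e.g. scores from an image classifier
text = [0.2, 1.8, 0.3]     # e.g. scores from a text classifier
fused = late_fusion(visual, text, alpha=0.6)
predicted = max(range(len(fused)), key=fused.__getitem__)
```

The weight `alpha` would typically be tuned on a validation set; a simple unweighted average (`alpha=0.5`) is a reasonable default when neither modality is known to dominate.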
Main file
CEA_ICME2015.pdf (28.77 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01196959 , version 1 (14-09-2015)

Identifiers

Cite

Xin Wang, Devinder Kumar, Nicolas Thome, Matthieu Cord, Frederic Precioso. RECIPE RECOGNITION WITH LARGE MULTIMODAL FOOD DATASET. IEEE International Conference on Multimedia & Expo (ICME), workshop CEA, Jun 2015, Turin, Italy. ⟨10.1109/ICMEW.2015.7169757⟩. ⟨hal-01196959⟩
403 views
932 downloads
