
Cross-Modal Retrieval in the Cooking Context

Abstract: Designing powerful tools that support cooking activities has rapidly gained popularity, due both to the massive amounts of available data and to recent advances in machine learning capable of analyzing them. In this paper, we propose a cross-modal retrieval model that aligns visual and textual data (such as pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme capable of tackling large-scale problems, and validate it on the Recipe1M dataset, which contains nearly one million picture-recipe pairs. We demonstrate the effectiveness of our approach against previous state-of-the-art models and present qualitative results on computational cooking use cases.
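The abstract describes aligning image and recipe embeddings in a shared representation space for retrieval. The paper's actual objective is not reproduced on this page; as an illustrative sketch only, the snippet below shows one common way such an alignment can be set up: a bidirectional triplet margin loss over paired embeddings, with retrieval by cosine similarity. The function names, the margin value, and the loss formulation are assumptions for illustration, not the authors' model.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project embeddings onto the unit sphere so cosine similarity is a dot product."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def triplet_alignment_loss(img_emb, txt_emb, margin=0.3):
    """Bidirectional triplet margin loss over a batch of paired embeddings.

    img_emb[i] and txt_emb[i] are assumed to be a matching picture/recipe
    pair; every other row in the batch serves as a negative.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    sim = img @ txt.T                 # cosine similarity matrix, shape (n, n)
    pos = np.diag(sim)                # similarity of the true pairs
    n = sim.shape[0]
    # image -> recipe: the positive should beat each negative recipe by `margin`
    i2t = np.maximum(0.0, margin + sim - pos[:, None])
    # recipe -> image: symmetric direction
    t2i = np.maximum(0.0, margin + sim - pos[None, :])
    mask = 1.0 - np.eye(n)            # ignore the positives on the diagonal
    return ((i2t + t2i) * mask).sum() / (n * max(n - 1, 1))

def retrieve(query_img, txt_bank):
    """Rank recipe embeddings by cosine similarity to an image query."""
    scores = l2_normalize(query_img[None, :]) @ l2_normalize(txt_bank).T
    return np.argsort(-scores[0])
```

With embeddings trained this way, cross-modal retrieval in either direction (picture-to-recipe or recipe-to-picture) reduces to a nearest-neighbor search in the shared space, which is what makes the scheme scale to datasets the size of Recipe1M.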
Document type: Conference papers

Cited literature: 43 references

https://hal.archives-ouvertes.fr/hal-01839068
Contributor: Matthieu Cord
Submitted on: Friday, March 13, 2020 - 10:15:50 AM
Last modification on: Tuesday, March 23, 2021 - 9:28:03 AM
Long-term archiving on: Sunday, June 14, 2020 - 12:17:58 PM

File: 1804.11146.pdf (produced by the author(s))


Citation

Micael Carvalho, Rémi Cadène, David Picard, Laure Soulier, Nicolas Thome, et al. Cross-Modal Retrieval in the Cooking Context. SIGIR '18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Jul 2018, Ann Arbor, Michigan, United States. pp. 35-44, ⟨10.1145/3209978.3210036⟩. ⟨hal-01839068⟩


Metrics
Record views: 168
File downloads: 384