The Multi-Task Learning View of Multimodal Data

Abstract: We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how to capture the multimodal structure of the data using multi-task kernels. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning algorithms, and hence the ability to exploit multiple descriptions of the data. In particular, we formulate the multimodal learning framework using vector-valued reproducing kernel Hilbert spaces, and we derive specific multi-task kernels that can operate over multiple modalities. Finally, we analyze the vector-valued regularized least squares algorithm in this context, and demonstrate its potential in a series of experiments with a real-world multimodal data set.
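To make the abstract's ingredients concrete, here is a minimal sketch of vector-valued regularized least squares with a separable (decomposable) multi-task kernel K(x, x') = k(x, x')·A, a common instance of the operator-valued kernels that vector-valued RKHS methods build on. This is an illustrative assumption, not the paper's specific kernel construction: the RBF base kernel, the output-similarity matrix A, and all names below are hypothetical.

```python
import numpy as np

def rbf_gram(X1, X2, gamma=1.0):
    """Scalar Gaussian (RBF) Gram matrix between two sample sets."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_vector_valued_rls(X, Y, A, lam=0.1, gamma=1.0):
    """Coefficients C (n x T) of vector-valued RLS with the separable
    kernel K(x, x') = k(x, x') * A, where A (T x T, symmetric) encodes
    similarity between the T output components (e.g. modalities).
    Solves (A kron G + n*lam*I) vec(C) = vec(Y)."""
    n, T = Y.shape
    G = rbf_gram(X, X, gamma)
    M = np.kron(A, G) + n * lam * np.eye(n * T)
    # order="F" stacks columns, matching vec(G C A) = (A kron G) vec(C)
    # for symmetric A.
    c = np.linalg.solve(M, Y.reshape(-1, order="F"))
    return c.reshape(n, T, order="F")

def predict(X_new, X, C, A, gamma=1.0):
    """f(x) = sum_i k(x, x_i) * (A c_i), computed in matrix form."""
    return rbf_gram(X_new, X, gamma) @ C @ A

# Usage on synthetic data: 3 output components with uniform coupling.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Y = rng.normal(size=(50, 3))
A = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])  # hypothetical inter-task similarity
C = fit_vector_valued_rls(X, Y, A)
preds = predict(X[:5], X, C, A)
```

Setting A to the identity decouples the outputs into independent scalar kernel ridge regressions; off-diagonal entries let information flow between components, which is the multi-task effect the abstract exploits across modalities.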
Document type: Conference paper

Cited literature: 36 references

https://hal.archives-ouvertes.fr/hal-01070601
Contributor: Emilie Morvant
Submitted on: Tuesday, October 7, 2014 - 10:40:09 AM
Last modification on: Tuesday, April 2, 2019 - 1:42:40 AM
Long-term archiving on: Thursday, January 8, 2015 - 10:16:02 AM

File: Kadri13.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-01070601, version 1

Citation

Hachem Kadri, Stéphane Ayache, Cécile Capponi, Sokol Koço, François-Xavier Dupé, et al. The Multi-Task Learning View of Multimodal Data. Asian Conference on Machine Learning (ACML), Nov 2013, Canberra, Australia. pp. 261-276. ⟨hal-01070601⟩
