Conference papers

Maximum Roaming Multi-Task Learning

Abstract : Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Partitioning the parameters between the different tasks, whether into disjoint or overlapping partitions, has proven to be an efficient way to relax the optimization constraints over the shared weights. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning, while forcing the parameters to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task data sets. Experimental results suggest that the regularization brought by roaming has a greater impact on performance than the usual partition-optimization strategies. The overall method is flexible, easily applicable, provides superior regularization, and consistently achieves improved performance compared to recent multi-task learning formulations.
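The abstract describes task-wise parameter partitions that are randomly perturbed over time so that each shared parameter eventually visits every task. The following is a minimal sketch of that idea, not the paper's actual algorithm: the function names (`init_partitions`, `roam`), the mask representation, and the one-swap-per-task update rule are illustrative assumptions; the paper's regulated update frequency and exact roaming rule are not reproduced here.

```python
import numpy as np

def init_partitions(n_params, n_tasks, p=0.5, seed=0):
    # Hypothetical initialization: each task gets a random binary mask
    # over the shared parameters (overlapping partitions), keeping each
    # parameter active for a task with probability p.
    rng = np.random.default_rng(seed)
    return (rng.random((n_tasks, n_params)) < p).astype(np.int8)

def roam(masks, visited, rng):
    # One illustrative "roaming" step: for each task, deactivate one
    # currently active parameter and activate one the task has never
    # used, so parameters progressively visit more tasks while each
    # task's partition keeps a constant size.
    n_tasks, n_params = masks.shape
    for t in range(n_tasks):
        unvisited = np.flatnonzero((masks[t] == 0) & (visited[t] == 0))
        active = np.flatnonzero(masks[t] == 1)
        if unvisited.size and active.size:
            off = rng.choice(active)     # drop this parameter for task t
            on = rng.choice(unvisited)   # pick it up for the first time
            masks[t, off] = 0
            masks[t, on] = 1
            visited[t, on] = 1           # record the visit
    return masks, visited

# Usage: roam between gradient updates, spaced far enough apart that
# the network can adapt to each new partitioning.
masks = init_partitions(n_params=20, n_tasks=3, p=0.5, seed=0)
visited = masks.copy()  # a parameter has "visited" every task it served
rng = np.random.default_rng(1)
masks, visited = roam(masks, visited, rng)
```

In a real network the masks would multiply the shared weights (or activations) in each task-specific forward pass; the swap-based update keeps each partition's capacity fixed while the coverage recorded in `visited` grows monotonically.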

https://hal.archives-ouvertes.fr/hal-03044386
Contributor : Lucas Pascal
Submitted on : Monday, December 7, 2020 - 5:36:39 PM
Last modification on : Tuesday, October 26, 2021 - 4:52:24 PM
Long-term archiving on : Monday, March 8, 2021 - 7:38:17 PM

File

Maximum_Roaming.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03044386, version 1
  • ARXIV : 2006.09762

Collections

Citation

Lucas Pascal, Pietro Michiardi, Xavier Bost, Benoit Huet, Maria Zuluaga. Maximum Roaming Multi-Task Learning. 35th AAAI Conference on Artificial Intelligence, Feb 2021, Virtual, United States. pp.9331-9341. ⟨hal-03044386⟩
