Multiagent Incremental Learning in Networks

Abstract: This paper investigates incremental multiagent learning in structured networks. Learning examples are incrementally distributed among the agents, and the objective is to build a common hypothesis that is consistent with all the examples present in the system, despite communication constraints. Recently, different mechanisms have been proposed that allow groups of agents to coordinate their hypotheses. Although these mechanisms have been shown to guarantee (theoretically) convergence to globally consistent states of the system, other notions of effectiveness can be considered to assess their quality. Furthermore, this guaranteed property should not come at the price of a great loss of efficiency (for instance, a prohibitive communication cost). We explore these questions theoretically and experimentally (using various Boolean formula learning problems).
Document type: Conference paper
Pacific Rim International Workshop on Multi-Agent, Dec 2008, Hanoi, Vietnam. Springer, Pacific Rim International Workshop on Multi-Agent, 5357, pp.109-120, Lecture Notes in Computer Science. 〈10.1007/978-3-540-89674-6_14〉

https://hal.archives-ouvertes.fr/hal-01305583
Contributor: Lip6 Publications <>
Submitted on: Thursday, April 21, 2016 - 14:10:06
Last modified on: Thursday, November 22, 2018 - 14:31:11

Identifiers
Citation

Gauvain Bourgne, Amal El Fallah Seghrouchni, Nicolas Maudet, Henry Soldano. Multiagent Incremental Learning in Networks. Pacific Rim International Workshop on Multi-Agent, Dec 2008, Hanoi, Vietnam. Springer, Pacific Rim International Workshop on Multi-Agent, 5357, pp.109-120, Lecture Notes in Computer Science. 〈10.1007/978-3-540-89674-6_14〉. 〈hal-01305583〉
