Preprint, Working Paper. Year: 2020

Federated Learning in Adversarial Settings

Abstract

Federated Learning enables entities to collaboratively learn a shared prediction model while keeping their training data local. It avoids centralized data collection and aggregation and therefore mitigates the associated privacy risks. However, it remains vulnerable to various security attacks in which malicious participants aim to degrade the generated model, insert backdoors, or infer other participants' training data. This paper presents a new federated learning scheme that provides different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy. Our scheme uses biased quantization of model updates and is hence bandwidth efficient. It is also robust against state-of-the-art backdoor and model degradation attacks, even when a large proportion of the participant nodes are malicious. We propose a practical differentially private extension of this scheme that protects the whole dataset of each participating entity. We show that this extension performs as efficiently as the non-private but robust scheme, even under stringent privacy requirements, but is less robust against model degradation and backdoor attacks. This suggests a possible fundamental trade-off between differential privacy and robustness.
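As an illustration only (the paper defines the actual construction), the sketch below shows one common realization of biased quantization of model updates: 1-bit sign compression on the client side with per-coordinate majority voting on the server, which is what makes such a scheme bandwidth efficient and caps any single participant's influence. The dp_sign_quantize variant, which clips the update and adds Gaussian noise before quantizing, is a hypothetical stand-in for the differentially private extension; all function names and parameters here are assumptions, not the authors' API.

import numpy as np

def sign_quantize(update):
    # Biased 1-bit quantization: keep only the sign of each coordinate,
    # so a client sends 1 bit per parameter instead of a 32-bit float.
    return np.sign(update)

def aggregate_by_majority(quantized_updates):
    # Server side: per-coordinate majority vote over the participants'
    # sign vectors; a tie yields 0, i.e. no step for that coordinate.
    return np.sign(np.sum(quantized_updates, axis=0))

def dp_sign_quantize(update, clip_norm=1.0, sigma=1.0, rng=None):
    # Hypothetical DP-style variant: clip the update in L2 norm, add
    # Gaussian noise, then quantize. Larger sigma strengthens privacy
    # but degrades the signal, mirroring the trade-off in the abstract.
    rng = np.random.default_rng() if rng is None else rng
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noisy = clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return np.sign(noisy)

# Toy round: four honest clients and one attacker that scales and flips
# its update; majority voting typically still recovers the honest signs.
rng = np.random.default_rng(0)
honest = [np.array([0.3, -1.2, 0.8, -0.1, 2.0]) + rng.normal(0, 0.1, 5)
          for _ in range(4)]
attacker = [-10.0 * honest[0]]
votes = [sign_quantize(u) for u in honest + attacker]
print(aggregate_by_majority(votes))  # per-coordinate majority signs

Note how the attacker's ten-fold scaling is neutralized by quantization: after sign compression, every participant contributes at most one vote per coordinate, which is the intuition behind the robustness claims.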
Main file

Federated_Learning_in_Adversarial_Settings (1).pdf (701.21 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02968257, version 1 (15-10-2020)
hal-02968257, version 2 (27-10-2020)

Identifiers

  • HAL Id: hal-02968257, version 2

Cite

Raouf Kerkouche, Gergely Ács, Claude Castelluccia. Federated Learning in Adversarial Settings. 2020. ⟨hal-02968257v2⟩
