Randomization matters How to defend against strong adversarial attacks - Archive ouverte HAL
Conference paper, Year: 2020

Randomization matters How to defend against strong adversarial attacks

Abstract

Is there a classifier that ensures optimal robustness against all adversarial attacks? This paper answers this question by adopting a game-theoretic point of view. We show that adversarial attacks and defenses form an infinite zero-sum game where classical results (e.g. Sion's theorem) do not apply. We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the Adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime. Nonetheless, the question remains open in the randomized regime. We tackle this problem by showing that, under mild conditions on the dataset distribution, any deterministic classifier can be outperformed by a randomized one. This gives arguments for using randomization, and leads us to a new algorithm for building randomized classifiers that are robust to strong adversarial attacks. Empirical results validate our theoretical analysis, and show that our defense method considerably outperforms Adversarial Training against state-of-the-art attacks.
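The abstract's key point, that any deterministic classifier can be outperformed by a randomized one, corresponds to the Defender playing a mixed strategy over hypotheses instead of committing to a single classifier. Below is a minimal Python sketch of that general idea; the class name, the mixing weights, and the toy threshold classifiers are illustrative assumptions, not the authors' algorithm or released code.

import random

# A randomized (mixed-strategy) classifier: a probability distribution
# over base classifiers, sampled anew at every prediction query.
class RandomizedClassifier:
    def __init__(self, classifiers, weights):
        # weights define a probability distribution over the base classifiers
        assert len(classifiers) == len(weights)
        assert abs(sum(weights) - 1.0) < 1e-9
        self.classifiers = classifiers
        self.weights = weights

    def predict(self, x):
        # Sample one base classifier per query. Because the answering
        # hypothesis is drawn at prediction time, no single worst-case
        # perturbation works against every draw, which is the intuition
        # behind randomized defenses.
        h = random.choices(self.classifiers, weights=self.weights, k=1)[0]
        return h(x)

# Example with two toy 1-D threshold classifiers (hypothetical):
h1 = lambda x: int(x > 0.0)
h2 = lambda x: int(x > 0.5)
mix = RandomizedClassifier([h1, h2], weights=[0.8, 0.2])
print(mix.predict(0.3))  # returns h1's or h2's answer, chosen at random

Sampling per query is what distinguishes this from a fixed ensemble average: the Adversary must attack the whole distribution over classifiers rather than a single deterministic decision boundary.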
Main file: 2002.11565.pdf (617 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02892161, version 1 (07-07-2020)

Identifiers

  • HAL Id: hal-02892161, version 1

Cite

Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif. Randomization matters How to defend against strong adversarial attacks. Thirty-seventh International Conference on Machine Learning, Jul 2020, Vienna, Austria. ⟨hal-02892161⟩
131 Views
107 Downloads
