Adversarial Classification under Gaussian Mechanism: Calibrating the Attack to Sensitivity - Archive ouverte HAL
Preprint, Working Paper. Year: 2022

Adversarial Classification under Gaussian Mechanism: Calibrating the Attack to Sensitivity

Ayse Unsal, Melek Önen

Abstract

This work studies anomaly detection under differential privacy with Gaussian perturbation, using both statistical and information-theoretic tools. In our setting, the adversary aims to modify the content of a statistical dataset by inserting additional data, exploiting differential privacy to avoid detection. To this end, we first characterize, via hypothesis testing, a statistical threshold for the adversary that balances the privacy budget against the induced bias (the impact of the attack) so that the attack remains undetected. In addition, using an information-theoretic approach, we establish a privacy-distortion trade-off for the Gaussian mechanism in the sense of the well-known rate-distortion function. Accordingly, we derive an upper bound on the variance of the attacker's additional data as a function of the sensitivity and the second-order statistics of the original data. Lastly, we introduce a new privacy metric based on Chernoff information for classifying adversaries under differential privacy, as a stronger alternative to the metric induced by the Gaussian mechanism. Analytical results are supported by numerical evaluations.
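For context, the Gaussian mechanism referenced in the abstract perturbs a query output with noise whose scale is calibrated to the query's sensitivity and the privacy budget. The sketch below is a generic illustration of that mechanism using the classical calibration σ = √(2 ln(1.25/δ))·Δ/ε (valid for ε < 1); it is not the paper's specific attack analysis, and the function name and parameters are illustrative assumptions.

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=random):
    """Release `value` with Gaussian noise calibrated to `sensitivity`.

    Illustrative sketch only: uses the standard calibration
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon (epsilon < 1),
    not the paper's adversarial setting.
    """
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.gauss(0.0, sigma), sigma

# Example: release a count query with L2-sensitivity 1 under (0.5, 1e-5)-DP.
noisy_count, sigma = gaussian_mechanism(100.0, sensitivity=1.0,
                                        epsilon=0.5, delta=1e-5)
```

An attacker who inserts data shifts the mean of such a release; the paper's question is how large that shift (and the attack data's variance) can be before a detector distinguishes the perturbed statistics from the unperturbed ones.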

Dates and versions

hal-03626066 , version 1 (31-03-2022)

Identifiers

Cite

Ayse Unsal, Melek Önen. Adversarial Classification under Gaussian Mechanism: Calibrating the Attack to Sensitivity. 2022. ⟨hal-03626066⟩