Conference paper, 2021

Yes We can: Watermarking machine learning models beyond classification

Lounici Sofiane
Mohamed Njeh
Orhan Ermis
Melek Önen
Slim Trabelsi

Abstract

Since machine learning models have become a valuable asset for companies, watermarking techniques have been developed to protect the intellectual property of these models and prevent model theft. We observe that current watermarking frameworks solely target image classification tasks, neglecting a considerable part of machine learning techniques. In this paper, we propose to address this gap and study the watermarking process of various machine learning techniques such as machine translation, regression, binary image classification, and reinforcement learning models. We adapt current definitions to each specific technique and evaluate the main characteristics of the watermarking process, in particular the robustness of the models against a rational adversary. We show that watermarking models beyond classification is possible while preserving their overall performance. We further investigate various attacks and discuss the importance of the performance metric in the verification process and its impact on the success of the adversary.
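
As a rough illustration of the kind of verification step the abstract refers to (not the authors' exact procedure), the sketch below checks a suspect regression-style model against a secret trigger set and accepts ownership when a performance metric stays under a chosen threshold. The function names, the mean-absolute-error metric, and the threshold value are assumptions made for illustration only.

import numpy as np

def verify_watermark(model_predict, trigger_inputs, trigger_targets, threshold=0.1):
    """Decide whether a suspect model carries the owner's watermark.

    model_predict   : callable mapping an input to a prediction (suspect model)
    trigger_inputs  : secret watermark inputs embedded during training (hypothetical)
    trigger_targets : the out-of-distribution outputs the owner trained on them
    threshold       : maximum mean error tolerated for a positive verification (assumed value)
    """
    preds = np.asarray([model_predict(x) for x in trigger_inputs], dtype=float)
    targets = np.asarray(trigger_targets, dtype=float)
    # For a regression-style task, use mean absolute error on the trigger set;
    # a classification task would instead count exact label matches.
    error = np.mean(np.abs(preds - targets))
    return error <= threshold

The choice of metric and threshold matters: as the paper discusses, the performance metric used in verification directly affects how easily a rational adversary can evade or remove the watermark.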
Main file: publi-6532.pdf (713.22 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03220793, version 1 (07-02-2022)

Identifiers

Cite

Lounici Sofiane, Mohamed Njeh, Orhan Ermis, Melek Önen, Slim Trabelsi. Yes We can: Watermarking machine learning models beyond classification. CSF 2021, 34th IEEE Computer Security Foundations Symposium, Jun 2021, Dubrovnik, Croatia. ⟨10.1109/CSF51468.2021.00044⟩. ⟨hal-03220793⟩