An Empirical Study of Black-Box Based Membership Inference Attacks on a Real-World Dataset

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The recent advancements in artificial intelligence have driven the widespread adoption of Machine-Learning-as-a-Service platforms, which offer valuable services. However, these pervasive cloud utilities inevitably face security and privacy issues. In particular, a membership inference attack (MIA) poses a threat by recognizing whether a data sample is present in the victim model's training set. Although prior MIA approaches have repeatedly underlined privacy risks by demonstrating experimental results on standard benchmark datasets such as MNIST and CIFAR, the effectiveness of such techniques on a real-world dataset remains questionable. We are the first to perform an in-depth empirical study of black-box MIAs under realistic assumptions, covering six metric-based and three classifier-based MIAs on a high-dimensional image dataset consisting of identification (ID) cards and driving licenses. Additionally, we introduce a Siamese-based MIA that performs similarly to or better than state-of-the-art approaches, and we suggest training a shadow model with autoencoder-based reconstructed images. Our major findings show that the performance of MIA techniques may degrade when too many features are present, and that the MIA configuration or a sample's properties can impact the accuracy of membership inference on members and non-members.
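To illustrate the family of metric-based MIAs the abstract refers to, the following is a minimal sketch of one common variant: confidence thresholding, where a sample is guessed to be a training member if the black-box victim model's confidence on the sample's true label exceeds a threshold (members are typically fit more tightly than non-members). The function name, the threshold value, and the toy data are illustrative assumptions, not the paper's actual six metric-based attacks, which may rely on other statistics such as loss or entropy.

```python
import numpy as np

def confidence_mia(softmax_probs: np.ndarray, true_labels: np.ndarray,
                   threshold: float = 0.9) -> np.ndarray:
    """Hypothetical confidence-threshold MIA: predict 'member' when the
    victim model's softmax probability on the true label exceeds the
    threshold. Only black-box query outputs are needed."""
    # Pick each sample's probability for its ground-truth class.
    conf_on_true = softmax_probs[np.arange(len(true_labels)), true_labels]
    return conf_on_true > threshold

# Toy query results from a black-box victim model (illustrative only).
probs = np.array([[0.97, 0.02, 0.01],   # very confident -> likely member
                  [0.40, 0.35, 0.25]])  # uncertain      -> likely non-member
labels = np.array([0, 0])
print(confidence_mia(probs, labels).tolist())  # [True, False]
```

Classifier-based MIAs replace the fixed threshold with a learned attack model trained on a shadow model's outputs, which is where the paper's suggestion of autoencoder-based reconstructed shadow training data comes in.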

Original language: English
Title of host publication: Foundations and Practice of Security - 17th International Symposium, FPS 2024, Revised Selected Papers
Editors: Kamel Adi, Simon Bourdeau, Christel Durand, Valérie Viet Triem Tong, Alina Dulipovici, Yvon Kermarrec, Joaquin Garcia-Alfaro
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 121-137
Number of pages: 17
ISBN (Print): 9783031874956
DOIs
State: Published - 2025
Event: 17th International Symposium on Foundations and Practice of Security, FPS 2024 - Montréal, Canada
Duration: 9 Dec 2024 – 11 Dec 2024

Publication series

Name: Lecture Notes in Computer Science
Volume: 15533 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th International Symposium on Foundations and Practice of Security, FPS 2024
Country/Territory: Canada
City: Montréal
Period: 9/12/24 – 11/12/24

Keywords

  • Machine Learning
  • Membership Inference Attack
