Debiasing Classifiers by Amplifying Bias with Latent Diffusion and Large Language Models

Donggeun Ko, Dongjun Lee, Namjun Park, Wonkyeong Shim, Jaekwang Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Neural networks struggle with image classification when learned biases induce misleading correlations, degrading their generalization and performance. Previous methods require attribute labels (e.g., background, color) or utilize Generative Adversarial Networks (GANs) to mitigate biases. We introduce DiffuBias, a novel text-to-image generation pipeline that produces bias-conflict samples without any additional training. By leveraging pretrained diffusion and image-captioning models, DiffuBias generates bias-conflict samples using the top-K losses from a biased classifier (fB) to debias the main classifier. This method not only debiases effectively but also boosts the classifier's generalization capability. Our comprehensive experimental evaluations demonstrate that DiffuBias achieves state-of-the-art performance on benchmark datasets.
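The selection step the abstract describes — ranking samples by the biased classifier's loss and keeping the top-K highest-loss examples as bias-conflict candidates for captioning and re-synthesis — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the `select_bias_conflict` helper, the toy probabilities, and the labels are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def select_bias_conflict(probs, labels, k):
    """Return indices of the k samples with the highest cross-entropy
    loss under a (biased) classifier. High-loss samples are the likely
    bias-conflict examples to caption and re-synthesize with diffusion."""
    eps = 1e-12  # avoid log(0)
    # Per-sample cross-entropy: -log p(true class)
    losses = -np.log(probs[np.arange(len(labels)), labels] + eps)
    # Indices of the top-k losses, highest loss first
    return np.argsort(losses)[::-1][:k]

# Toy example: 4 samples, 2 classes, all with true label 0.
# Sample 2 is confidently misclassified, so it ranks first.
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.1, 0.9],
                  [0.7, 0.3]])
labels = np.array([0, 0, 0, 0])
top2 = select_bias_conflict(probs, labels, k=2)  # -> indices [2, 3]
```

In the full pipeline, the selected images would then be passed to a pretrained captioning model and a latent diffusion model to synthesize additional bias-conflict training samples; those components are omitted here.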

Original language: English
Title of host publication: 40th Annual ACM Symposium on Applied Computing, SAC 2025
Publisher: Association for Computing Machinery
Pages: 1290-1292
Number of pages: 3
ISBN (Electronic): 9798400706295
DOIs
State: Published - 14 May 2025
Event: 40th Annual ACM Symposium on Applied Computing, SAC 2025 - Catania, Italy
Duration: 31 Mar 2025 - 4 Apr 2025

Publication series

Name: Proceedings of the ACM Symposium on Applied Computing

Conference

Conference: 40th Annual ACM Symposium on Applied Computing, SAC 2025
Country/Territory: Italy
City: Catania
Period: 31/03/25 - 04/04/25

Keywords

  • classification
  • debiasing
  • generative model

