Mitigating inappropriate concepts in text-to-image generation with attention-guided image editing

Jiyeon Oh, Jae Yeop Jeong, Yeong Gi Hong, Jin Woo Jeong

Research output: Contribution to journal › Article › peer-review

Abstract

Text-to-image generative models have recently garnered significant attention due to their ability to produce diverse images from given text prompts. However, concerns have arisen regarding their occasional generation of inappropriate, offensive, or explicit content. To address this, we propose a simple yet effective method that leverages attention maps to selectively suppress inappropriate concepts during image generation. Unlike existing approaches, which often sacrifice the original image context or demand substantial computational overhead, our method preserves image integrity without requiring additional model training or extensive engineering effort. To evaluate our method, we conducted comprehensive quantitative assessments of inappropriateness reduction, text fidelity, image consistency, and computational cost, alongside an online human perceptual study involving 20 participants. Our statistical analysis demonstrated that the method effectively removes inappropriate content while preserving the integrity of the original images with high computational efficiency.
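The abstract does not detail the editing mechanism, but attention-guided suppression in text-to-image models is commonly realized by down-weighting the cross-attention a diffusion model assigns to prompt tokens of an unwanted concept. The sketch below is an illustration of that general idea, not the paper's actual method; the function name, array shapes, and token IDs are assumptions for the example:

```python
import numpy as np

def suppress_concept_attention(attn, token_ids, concept_ids, scale=0.0):
    """Down-weight cross-attention columns belonging to unwanted concept
    tokens, then renormalize each row so it remains a distribution.

    attn: (num_queries, num_tokens) row-stochastic attention map.
    token_ids: prompt token IDs, one per attention column.
    concept_ids: token IDs of the concept to suppress.
    scale: 0.0 removes the concept entirely; values in (0, 1) soften it.
    """
    attn = attn.copy()
    mask = np.isin(token_ids, concept_ids)  # columns to suppress
    attn[:, mask] *= scale
    # renormalize rows (assumes at least one non-suppressed token remains)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn

# Toy example: 4 spatial queries attending over 3 prompt tokens,
# where token ID 205 stands in for the inappropriate concept.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

edited = suppress_concept_attention(
    attn, token_ids=np.array([101, 205, 307]), concept_ids=[205]
)
```

In a real diffusion pipeline this edit would be applied inside the cross-attention layers at each denoising step, so that image regions stop attending to the suppressed concept while the rest of the prompt continues to guide generation.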

Original language: English
Article number: e3170
Journal: PeerJ Computer Science
Volume: 11
DOIs
State: Published - 2025

Keywords

  • Attention map
  • Deep learning
  • Inappropriateness mitigation
  • Text-to-image generation
