Abstract
When trained on biased datasets, Deep Neural Networks (DNNs) often make predictions based on attributes derived from features spuriously correlated with target labels. This is especially problematic when these irrelevant features are easier for the model to learn than the truly relevant ones. Many debiasing methods have been proposed to address this issue, but they often require predefined bias labels and incur significantly increased computational cost by incorporating additional auxiliary models. We instead offer a perspective orthogonal to existing approaches, inspired by cognitive science, specifically Global Workspace Theory (GWT). Our method, Debiasing Global Workspace (DGW), is a novel debiasing framework consisting of specialized modules and a shared workspace, allowing for increased modularity and improved debiasing performance. Furthermore, DGW improves the transparency of the decision-making process by visualizing, through attention masks, which input features the model focuses on during training and inference. We begin by proposing an instantiation of GWT for debiasing. We then describe the implementation of each component of DGW. Finally, we validate our method across various biased datasets, demonstrating its effectiveness in mitigating biases and improving model performance.
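The abstract describes a GWT-style architecture in which specialized modules exchange information through a shared workspace via attention, and the resulting attention masks are visualized. The sketch below is a minimal, hypothetical illustration of one such write/read cycle (cross-attention between module features and workspace slots); the function and parameter names are our own and are not taken from the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_workspace_step(features, slots):
    """One hypothetical write/read cycle of a GWT-style shared workspace.

    features: (n_tokens, d) outputs of specialized modules
    slots:    (n_slots, d)  current workspace state
    Returns updated slots, broadcast features, and the write-attention
    mask (n_slots, n_tokens), which could be visualized as in DGW.
    """
    d = features.shape[1]
    # Write phase: workspace slots attend over module features.
    write_attn = softmax(slots @ features.T / np.sqrt(d), axis=-1)
    new_slots = write_attn @ features
    # Read/broadcast phase: features attend over the updated workspace.
    read_attn = softmax(features @ new_slots.T / np.sqrt(d), axis=-1)
    broadcast = read_attn @ new_slots
    return new_slots, broadcast, write_attn
```

Because each row of `write_attn` is a distribution over input features, it can be rendered directly as an attention mask over the input, matching the interpretability use described above.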
| Original language | English |
|---|---|
| Journal | Proceedings of Machine Learning Research |
| Volume | 285 |
| State | Published - 2024 |
| Event | 2nd Edition of the Workshop on Unifying Representations in Neural Models, UniReps 2024 - Vancouver, Canada |
| Duration | 14 Dec 2024 → … |
Title: 'A Cognitive Framework for Learning Debiased and Interpretable Representations via Debiasing Global Workspace'