Towards maximizing feature efficiency: All-in-one image restoration via radial basis attention

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advancements in all-in-one Image Restoration (IR) have shown great promise, yet existing methods are often constrained by fixed parameters and struggle with the addition of new degradation types. In this paper, we propose RBaIR (Radial Basis Attention Image Restoration), a novel universal restoration network that significantly enhances feature efficiency and restoration performance across multiple degradations. Specifically, we design the Dynamic Radial Basis Attention (DyRBA) module, which decouples inter-channel dependencies using a Radial Basis Function Network (RBFN) to maximize feature independence. DyRBA also incorporates a data-dependent cross-attention mechanism for flexible and efficient spatial feature exploration. Then, to mitigate the attention dilution issue in attention-based models, we introduce a Mixture of Convolutional Experts (MoCE). The MoCE captures a diverse set of local and depth-dependent patterns through its multi-kernel design. Finally, we improve model generalization by introducing specialized loss functions, including the Kullback-Leibler Divergence (KLD) loss. Extensive experiments demonstrate that RBaIR achieves state-of-the-art performance, outperforming existing methods in both all-in-one and single-task restoration settings with fewer parameters. The PyTorch code is available at https://github.com/towardsDLCV/RBaIR.
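To make the core idea concrete, the following is a minimal, illustrative sketch of attention weights computed with a Gaussian radial basis function kernel in place of the usual dot-product similarity. It is an assumption-laden toy, not the paper's DyRBA module (the actual implementation, including the RBFN channel decoupling and data-dependent offsets, is in the linked repository); the function name `rbf_attention` and the bandwidth parameter `sigma` are hypothetical.

```python
import numpy as np

def rbf_attention(Q, K, V, sigma=1.0):
    """Toy radial-basis attention: similarity between a query q and a key k
    is the Gaussian RBF kernel exp(-||q - k||^2 / (2 * sigma^2)).
    Illustrative sketch only; NOT the paper's DyRBA module."""
    # Pairwise squared Euclidean distances, shape (n_queries, n_keys)
    d2 = ((Q[:, None, :] - K[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2 / (2.0 * sigma ** 2)
    # Numerically stable row-wise softmax to obtain attention weights
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V  # aggregated values, shape (n_queries, d_v)

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
out = rbf_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Because the RBF kernel depends only on distances, attention mass concentrates on keys near each query in feature space, which is one plausible reading of how a radial basis formulation promotes feature independence.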

Original language: English
Article number: 112815
Journal: Pattern Recognition
Volume: 173
DOIs
State: Published - May 2026

Keywords

  • Adaptive offsets
  • Cross-attention
  • Mixture of experts
  • Multi-task image restoration
  • Radial basis function
