GC-Fed: Gradient centralized federated learning with partial client participation

  • Jungwon Seo
  • Ferhat Ozgur Catak
  • Chunming Rong
  • Kibeom Hong
  • Minhoe Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Federated Learning (FL) enables privacy-preserving multi-source information fusion (MSIF) but suffers from client drift in highly heterogeneous data settings. Many existing approaches mitigate drift by providing clients with common reference points, typically derived from past information, to align objectives or gradient directions. However, under severe partial participation, such history-dependent references may become unreliable, as the set of client data distributions participating in each round can vary drastically. To overcome this limitation, we propose a method that mitigates client drift without relying on past information, instead constraining the update space through Gradient Centralization (GC). Specifically, we introduce Local GC and Global GC, which apply GC at the local and global update stages, respectively, and further present GC-Fed, a hybrid formulation that generalizes both. Theoretical analysis and extensive experiments on benchmark FL tasks demonstrate that GC-Fed effectively alleviates client drift and achieves up to 20% accuracy improvement under data-heterogeneous, partial-participation conditions.
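To make the mechanism in the abstract concrete, here is a minimal sketch of the Gradient Centralization operator and where "local" versus "global" application would sit in an FL round. This is an illustration based on the standard GC definition (subtracting the per-output-unit mean from a weight gradient), not the paper's actual implementation; the function names and the NumPy framing are assumptions for exposition.

```python
import numpy as np

def centralize_gradient(grad: np.ndarray) -> np.ndarray:
    """Gradient Centralization (GC): for a weight gradient of shape
    (out_units, in_units, ...), subtract from each output unit's slice
    its mean over the remaining dimensions, so every slice of the
    centralized gradient has zero mean."""
    if grad.ndim < 2:
        # GC is conventionally applied to weight matrices/tensors only,
        # leaving bias (1-D) gradients unchanged.
        return grad
    axes = tuple(range(1, grad.ndim))
    return grad - grad.mean(axis=axes, keepdims=True)

# Illustrative placement in one FL round (hypothetical helper names):
# - "Local GC"  -> apply centralize_gradient to each client's gradient
#                  before its local SGD step.
# - "Global GC" -> apply centralize_gradient to the server-side
#                  aggregated update before updating the global model.
def aggregate_with_global_gc(client_updates: list[np.ndarray]) -> np.ndarray:
    """Average client updates, then centralize the aggregate (Global GC)."""
    return centralize_gradient(np.mean(client_updates, axis=0))
```

Because centralization is a fixed linear projection of the current gradient, it needs no stored state from earlier rounds, which is the property the abstract leverages under partial participation.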

Original language: English
Article number: 104148
Journal: Information Fusion
Volume: 131
DOIs
State: Published - Jul 2026

Keywords

  • Federated learning
  • Gradient centralization
  • Machine learning
  • Multi-source information fusion
  • Optimization

