FedShift: Robust Federated Learning Aggregation Scheme in Resource Constrained Environment via Weight Shifting

Research output: Contribution to journal › Article › peer-review

Abstract

Federated Learning (FL) is a distributed machine learning framework that utilizes the data and computing power of local devices to optimize a common objective. While effective, this paradigm suffers from significant communication overhead, which limits overall training efficiency. To mitigate this, prior works have explored compression techniques such as weight quantization in the weight exchange process. In practice, clients then employ different quantization levels depending on their hardware or network constraints, necessitating a mixed-precision aggregation process at the server. This introduces additional challenges, exacerbating client drift and degrading performance. In this work, we propose FedShift, a novel aggregation methodology designed to mitigate performance degradation in FL scenarios with mixed quantization levels. FedShift employs a statistical matching mechanism based on weight shifting to align mixed-precision models, thereby reducing model divergence and addressing quantization-induced bias. Our approach functions as an add-on to existing FL optimization algorithms, enhancing their robustness and improving convergence. We provide empirical results demonstrating that FedShift effectively mitigates the negative impact of mixed-precision aggregation, yielding superior performance across various FL benchmarks, together with a theoretical analysis establishing convergence to the optimum under the FedShift algorithm.
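To make the idea of statistical matching via weight shifting concrete, here is a toy sketch in plain Python. It is not the paper's implementation; it assumes (for illustration only) that clients split into a full-precision group and a quantized group, and that the shift aligns the per-coordinate mean of the quantized group's weights with the full-precision group's mean before a FedAvg-style average.

```python
def quantize(w, bits):
    """Uniform symmetric quantization of a weight vector to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = (max(abs(x) for x in w) or 1.0) / levels
    return [round(x / scale) * scale for x in w]

def mean_vec(vecs):
    """Coordinate-wise mean of a list of equal-length weight vectors."""
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def fedshift_aggregate(full_ws, quant_ws):
    """Shift quantized models toward the full-precision mean, then average all.

    Hypothetical mean-matching variant: the shift cancels the average bias
    that quantization introduced, reducing divergence between the two groups.
    """
    mu_f = mean_vec(full_ws)
    mu_q = mean_vec(quant_ws)
    shift = [f - q for f, q in zip(mu_f, mu_q)]
    shifted = [[x + s for x, s in zip(w, shift)] for w in quant_ws]
    return mean_vec(full_ws + shifted)

# Example: two full-precision clients, two clients quantized to 4 bits.
full = [[0.10, -0.20, 0.30], [0.12, -0.18, 0.28]]
quant = [quantize(w, 4) for w in [[0.11, -0.22, 0.31], [0.09, -0.19, 0.27]]]
agg = fedshift_aggregate(full, quant)
```

After the shift, the quantized group's mean coincides with the full-precision mean, so the aggregate is no longer biased by the coarser clients' rounding error.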

Original language: English
Pages (from-to): 3708-3721
Number of pages: 14
Journal: IEEE Access
Volume: 14
DOIs
State: Published - 2026

Keywords

  • Federated learning
  • optimization
  • quantization
