Model Compression by Count Sketch for Over-the-Air Stateless Federated Learning

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Motivated by the rapidly increasing computing performance of devices and the abundance of device-generated data, federated learning (FL) has emerged as a new distributed machine learning (ML) scheme with a wide range of applications. However, FL can be severely degraded by communication overhead, as it relies heavily on communication between clients and a central server. To overcome this communication bottleneck, the wireless communication community has explored AirComp FL, which applies over-the-air computation (AirComp) to model aggregation. In this article, we introduce a novel AirComp FL algorithm, A-FedCS, which uses a count sketch (CS) for model compression. A-FedCS scales well, addressing the challenges that existing approaches face with scarce channel resources or rarely revisited clients. Experimental results demonstrate that the proposed scheme outperforms state-of-the-art schemes, including CA-DSGD and D-DSGD, and experiments across various tasks, transmission powers, bandwidths, and numbers of clients show that the improvement is more pronounced in stateless FL. Additionally, we provide a mathematical analysis of A-FedCS by deriving its convergence rate.
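To make the compression step concrete, the following is a minimal count-sketch compressor for a gradient vector. It illustrates the general CS technique the abstract refers to, not the authors' A-FedCS implementation: the sketch dimensions, hash construction, and function names (`sketch`, `unsketch`) are all illustrative assumptions. A key property for AirComp is that the sketch is a linear map, so the superposition of client sketches over the air equals the sketch of the summed gradients.

```python
import numpy as np

# Illustrative count-sketch compression of a d-dimensional gradient.
# ROWS x COLS is the sketch table size; DIM is the model dimension.
# These sizes and the hash construction are assumptions for this sketch,
# not parameters taken from the article.
ROWS, COLS, DIM = 5, 64, 1000

rng = np.random.default_rng(0)
# Shared randomness (known to clients and server): for each sketch row,
# a bucket hash and a random sign for every coordinate.
buckets = rng.integers(0, COLS, size=(ROWS, DIM))
signs = rng.choice([-1.0, 1.0], size=(ROWS, DIM))

def sketch(vec):
    """Compress a DIM-dim vector into a ROWS x COLS count-sketch table."""
    table = np.zeros((ROWS, COLS))
    for r in range(ROWS):
        # Add each signed coordinate into its hashed bucket.
        np.add.at(table[r], buckets[r], signs[r] * vec)
    return table

def unsketch(table):
    """Estimate each coordinate as the median of its ROWS row estimates."""
    est = np.stack([signs[r] * table[r, buckets[r]] for r in range(ROWS)])
    return np.median(est, axis=0)

# A sparse "gradient" with a few heavy coordinates is recovered well.
g = np.zeros(DIM)
g[10], g[200], g[777] = 5.0, -3.0, 2.0
recovered = unsketch(sketch(g))
```

Because `sketch` is linear, `sketch(g1) + sketch(g2) == sketch(g1 + g2)`, which is what lets analog over-the-air superposition perform the aggregation itself; the server then unsketches the summed table once.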

Original language: English
Pages (from-to): 21689-21703
Number of pages: 15
Journal: IEEE Internet of Things Journal
Volume: 11
Issue number: 12
DOIs
State: Published - 15 Jun 2024

Keywords

  • Count sketch (CS)
  • federated learning (FL)
  • over-the-air computation (AirComp)
  • stateless FL

