Federated Learning in Massive MIMO 6G Networks: Convergence Analysis and Communication-Efficient Design

Yuchen Mu, Navneet Garg, Tharm Ratnarajah

Research output: Contribution to journal › Article › peer-review

Abstract

In federated learning (FL), model weights must be updated at both the local users and the base station (BS). These weights are subject to uplink (UL) and downlink (DL) transmission errors due to the limited reliability of wireless channels. In this paper, we investigate the impact of imperfections in both UL and DL links. First, for a multi-user massive multiple-input multiple-output (mMIMO) 6G network employing zero-forcing (ZF) and minimum mean-squared-error (MMSE) schemes, we analyze the weight estimation errors in each round. A tighter convergence bound on the modelling error of the communication-efficient FL algorithm is derived, of the order of O(T^{-1} σ_z^2), where σ_z^2 denotes the variance of the overall communication error, including the quantization noise. The analysis shows that the reliability of DL links is more critical than that of UL links, and that the transmit power can be varied during training to reduce energy consumption. We also vary the number of local training steps, the average codeword length after quantization, and the scheduling policy to improve communication efficiency. Simulations on image classification problems with the MNIST, EMNIST and FMNIST datasets verify the derived bound and are useful for inferring the minimum SNR required for successful convergence of the FL algorithm.
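The dependence of the residual model error on the communication-noise variance can be illustrated with a toy sketch. This is not the paper's system model: it runs federated averaging on a synthetic per-user quadratic objective, with i.i.d. Gaussian noise of variance σ_z² standing in for the combined UL/DL channel and quantization error on each transmitted update, and a decaying step size so the error behaves like an O(T^{-1} σ_z^2)-type bound.

```python
import numpy as np

def noisy_fedavg(T, sigma_z, n_users=10, dim=5, seed=0):
    """Toy federated averaging under additive communication noise.

    Each user k holds a quadratic loss f_k(w) = 0.5 * ||w - c_k||^2, so the
    global optimum of the averaged loss is w_star = mean_k(c_k).  Every
    round, each user takes one gradient step and uplinks its model; Gaussian
    noise of variance sigma_z^2 per coordinate models the combined UL/DL
    channel and quantization error.  Returns the final squared model error.
    """
    rng = np.random.default_rng(seed)
    c = rng.normal(size=(n_users, dim))   # per-user optima (synthetic data)
    w_star = c.mean(axis=0)               # optimum of the averaged loss
    w = np.zeros(dim)
    for t in range(T):
        lr = 1.0 / (t + 1)                # decaying step size
        # local gradient step per user: grad f_k(w) = w - c_k
        updates = [w - lr * (w - c_k) for c_k in c]
        # noisy transmission of each update (UL + DL errors folded together)
        noisy = [u + rng.normal(scale=sigma_z, size=dim) for u in updates]
        w = np.mean(noisy, axis=0)        # BS aggregates and broadcasts
    return float(np.sum((w - w_star) ** 2))
```

With zero noise the toy problem converges exactly; for a fixed seed the final error scales with σ_z², mirroring the role the abstract assigns to the overall communication-error variance in the derived bound.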

Original language: English
Pages (from-to): 4220-4234
Journal: IEEE Transactions on Network Science and Engineering
Volume: 9
Issue number: 6
Early online date: 5 Aug 2022
DOIs
Publication status: Published - 1 Nov 2022

Keywords

  • 6G networks
  • Deep learning
  • federated learning
  • massive MIMO (mMIMO)
