Abstract
Online convex optimization in distributed systems has shown great promise for collaborative learning on data streams with a massive number of learners, for example in the coordination of robot and IoT networks. When implemented in such communication-constrained networks, distributed online convex optimization (DOCO) faces two critical yet distinct objectives: minimizing the overall regret and minimizing the communication cost. Achieving both simultaneously is challenging, especially when the number of learners $n$ and the learning horizon $T$ are prohibitively large. To address this challenge, we propose novel algorithms for the typical adversarial and stochastic settings. Compared with algorithms achieving the state-of-the-art regret, ours reduce the communication complexity by factors of $\mathcal{O}(n^{2})$ and $\tilde{\mathcal{O}}(\sqrt{nT})$ in the adversarial and stochastic settings, respectively. We are the first to achieve nearly optimal regret and communication complexity simultaneously, up to polylogarithmic factors. We validate our algorithms through experiments on real-world datasets in classification tasks. With appropriate parameters, our algorithms achieve $90\%\sim 99\%$ communication savings with accuracy close to that of existing methods in most cases.
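To make the regret-versus-communication trade-off concrete, below is a minimal, self-contained sketch of the generic DOCO setting the abstract describes: `n` learners run local online gradient descent on their own loss streams and periodically average their models. This is an illustrative toy, not the paper's algorithm; the quadratic losses, step size `eta`, and averaging period `comm_every` are assumptions chosen for clarity.

```python
# Toy DOCO simulation: n learners, local online gradient steps, and
# intermittent all-to-all model averaging. Communicating less often
# cuts message cost but typically raises regret.
import numpy as np

rng = np.random.default_rng(0)
n, T, d = 8, 200, 5          # learners, rounds, dimension (assumptions)
comm_every = 10              # rounds between communication (assumption)
eta = 0.1                    # base step size (assumption)

x = np.zeros((n, d))         # each learner's current model, one row per learner
theta = rng.normal(size=d)   # hidden target defining the loss streams
comm_cost = 0                # total messages exchanged
regret = 0.0                 # cumulative regret vs. the fixed comparator theta

for t in range(1, T + 1):
    # Learner i observes loss f_{i,t}(x) = 0.5 * ||x - target_{i,t}||^2,
    # where target_{i,t} is a noisy copy of the hidden theta.
    targets = theta + 0.1 * rng.normal(size=(n, d))
    regret += 0.5 * np.sum((x - targets) ** 2) \
            - 0.5 * np.sum((theta - targets) ** 2)
    grads = x - targets                      # gradient of each local loss at x_i
    x -= eta / np.sqrt(t) * grads            # local online gradient step

    if t % comm_every == 0:                  # intermittent communication round
        x[:] = x.mean(axis=0)                # all-to-all model averaging
        comm_cost += n * (n - 1)             # messages sent this round

print(f"cumulative regret ~ {regret:.2f}, messages sent = {comm_cost}")
```

Increasing `comm_every` in this sketch directly reduces `comm_cost` at the expense of slower consensus among learners; the algorithms in the paper are designed to push this trade-off to near-optimal regret and communication simultaneously.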
| Original language | English |
|---|---|
| Pages (from-to) | 1-13 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Parallel and Distributed Systems |
| DOIs | |
| Publication status | Published - 21 May 2024 |
Keywords
- Complexity theory
- Stochastic processes
- Robots
- Robot kinematics
- Costs
- Convex functions
- Loss measurement