An Adaptive Projected Subgradient Approach to Learning in Diffusion Networks

R. Cavalcante, I. Yamada, Bernie Mulgrew

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

We present an algorithm that asymptotically minimizes a sequence of nonnegative convex functions over diffusion networks. In the proposed algorithm, at each iteration the nodes in the network have only partial information about the cost function, yet they asymptotically reach consensus on a possible minimizer. To account for node failures, position changes, and/or reachability problems (caused by moving obstacles, jammers, etc.), the algorithm can cope with changing network topologies and cost functions, a desirable feature of online algorithms, where information arrives sequentially. Many projection-based algorithms can be straightforwardly extended to (probabilistic) diffusion networks with the proposed scheme. The system identification problem in distributed networks is given as one example of a possible application.
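The abstract's setting can be illustrated with a generic sketch: each node takes a local projection-type adaptation step (here, a normalized LMS update, which is a relaxed projection onto the node's local data hyperplane) and then combines its neighbours' estimates with convex weights. This is a minimal adapt-then-combine diffusion scheme over an assumed ring topology, not the paper's exact algorithm; the step size `mu`, noise level, and combination matrix `A` are illustrative choices.

```python
import numpy as np

# Illustrative sketch only: adapt-then-combine diffusion NLMS for
# distributed system identification, not the paper's exact method.
rng = np.random.default_rng(0)

N, M, T = 6, 4, 2000            # nodes, filter length, iterations
w_true = rng.standard_normal(M)  # unknown system to identify

# Ring topology: each node averages with itself and its two neighbours
# (rows of A are convex combination weights and sum to one).
A = np.zeros((N, N))
for k in range(N):
    A[k, [k, (k - 1) % N, (k + 1) % N]] = 1 / 3

W = np.zeros((N, M))            # per-node local estimates
mu, eps = 0.5, 1e-6             # NLMS step size and regularizer

for _ in range(T):
    # Adapt: each node performs an NLMS step with only its own data,
    # i.e. partial information about the global cost.
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)                 # local regressor
        d = u @ w_true + 0.01 * rng.standard_normal()  # noisy observation
        e = d - u @ W[k]
        psi[k] = W[k] + mu * e * u / (u @ u + eps)
    # Combine: convex combination of neighbours' intermediate estimates,
    # driving the nodes toward consensus.
    W = A @ psi

msd = np.mean((W - w_true) ** 2)
print(f"final mean-squared deviation: {msd:.2e}")
```

In this sketch all nodes converge to neighbourhoods of the same estimate of `w_true`, matching the abstract's point that purely local projection steps plus neighbour averaging yield asymptotic consensus on a minimizer.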
Original language: English
Pages (from-to): 2762-2775
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Volume: 57
Issue number: 7
DOIs
Publication status: Published - Jul 2009

Keywords / Materials (for Non-textual outputs)

  • Adaptive filtering
  • adaptive projected subgradient method
  • consensus
  • convex optimization
  • diffusion networks
  • distributed processing
