Distributed Time-Varying Stochastic Optimization and Utility-Based Communication
We devise a distributed asynchronous stochastic ε-gradient-based algorithm that enables a network of computing and communicating nodes to solve a constrained discrete-time, time-varying stochastic convex optimization problem. Each node updates its own decision variable only once per discrete time step. Under standard assumptions (among them strong convexity, Lipschitz continuity of the gradient, and persistent excitation), we prove that the algorithm converges asymptotically in expectation to an error bound whose size depends on the constant stepsize α, the temporal variability of the optimization problem, and the accuracy ε; moreover, the convergence rate is linear. We then show how each node can locally compute stochastic ε-gradients that also depend on the time-varying noise probability density functions (PDFs) of its neighboring nodes, without requiring the neighbors to transmit those PDFs at every time step. To this end, we devise utility-based policies that let each node decide whether or not to send its most up-to-date PDF, while guaranteeing a user-specified error level ε in the computation of the stochastic ε-gradient. Numerical simulations demonstrate the added value of the proposed approach and its relevance for the estimation and control of time-varying processes and networked systems.
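The tracking behavior the abstract describes can be illustrated with a minimal single-node sketch. The code below is not the paper's algorithm; it assumes a simple time-varying quadratic objective with a drifting optimum, an ε-gradient oracle (exact gradient plus a bias bounded by ε plus zero-mean noise), and a Euclidean-ball constraint set. A constant-stepsize projected step then tracks the moving optimum up to a steady-state error neighborhood whose size depends on the stepsize α, the drift rate, and ε, mirroring the stated convergence result.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_varying_optimum(k):
    # Hypothetical drifting optimum x*(k) of f_k(x) = 0.5 * ||x - x*(k)||^2
    return np.array([np.sin(0.01 * k), np.cos(0.01 * k)])

def eps_gradient(x, k, eps=0.05, noise_std=0.1):
    # Stochastic eps-gradient: exact gradient + bias bounded by eps + zero-mean noise
    exact = x - time_varying_optimum(k)
    bias = eps * rng.uniform(-1.0, 1.0, size=x.shape) / np.sqrt(x.size)
    noise = noise_std * rng.standard_normal(x.shape)
    return exact + bias + noise

def project(x, radius=2.0):
    # Euclidean projection onto the constraint set {x : ||x|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

alpha = 0.1                  # constant stepsize
x = np.zeros(2)
errors = []
for k in range(2000):
    x = project(x - alpha * eps_gradient(x, k))   # projected eps-gradient step
    errors.append(float(np.linalg.norm(x - time_varying_optimum(k))))

# After a transient, the tracking error settles into a neighborhood whose size
# is governed by alpha, the drift of x*(k), and the gradient accuracy eps.
steady_error = float(np.mean(errors[500:]))
```

Shrinking α reduces the noise-induced error but enlarges the drift-induced lag (roughly drift/α), so a constant stepsize trades off the two, which is why the bound in the abstract involves α, the problem's variability, and ε jointly.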
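The utility-based communication policy can likewise be sketched as an event-triggered rule. The implementation below is an illustrative assumption, not the paper's policy: each node models its noise PDF as a 1-D Gaussian and rebroadcasts it only when the KL divergence from the last transmitted copy exceeds a threshold tied to the error budget ε, so neighbors can keep computing ε-gradients from a stale PDF within tolerance.

```python
import numpy as np

def gaussian_kl(m0, s0, m1, s1):
    # KL divergence KL( N(m0, s0^2) || N(m1, s1^2) ) for 1-D Gaussians
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2.0 * s1**2) - 0.5

class Node:
    """Hypothetical utility-based broadcast policy: transmit the current noise
    PDF only when it has drifted far enough from the last transmitted copy
    that neighbors' eps-gradient computations could exceed the budget eps."""

    def __init__(self, eps):
        self.eps = eps
        self.last_sent = None
        self.transmissions = 0

    def maybe_send(self, mean, std):
        if self.last_sent is None or \
                gaussian_kl(mean, std, *self.last_sent) > self.eps:
            self.last_sent = (mean, std)
            self.transmissions += 1
            return True
        return False   # neighbors keep using the stale PDF, within tolerance

# Slowly drifting noise statistics: the node sends only a handful of updates
# instead of one per time step.
node = Node(eps=0.05)
sent = [node.maybe_send(0.001 * k, 1.0 + 0.0005 * k) for k in range(1000)]
```

With slow drift, the trigger fires only a few times over the horizon, which captures the abstract's point: PDFs need not be exchanged at every time step to keep the ε-gradient error below the user-specified level.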
Physics & Electronics
DSS - Distributed Sensor Systems
TS - Technical Sciences
Defence, Safety and Security