Distributed Adaptive Gradient Algorithm with Gradient Tracking for Stochastic Non-Convex Optimization
Authors: Dongyu Han, Kun Liu, Yeming Lin, Yuanqing Xia
Summary: This paper considers a distributed stochastic non-convex optimization problem, where the nodes in a network cooperatively minimize a sum of L-smooth local cost functions with sparse gradients. By adaptively adjusting the stepsizes according to the historical (possibly sparse) gradients, a distributed adaptive gradient algorithm is proposed, in which a gradient tracking estimator is used to handle the heterogeneity between different local cost functions. We establish an upper bound on the optimality gap, which indicates that the proposed algorithm can reach a first-order stationary solution dependent on the upper bound on the variance of the stochastic gradients. Finally, numerical examples are provided to illustrate the effectiveness of the algorithm.
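To make the two ingredients concrete, the following is a minimal NumPy sketch, not the paper's exact algorithm: each node keeps a gradient-tracking estimate `y_i` of the network-average gradient and an AdaGrad-style accumulator `v_i` of historical squared gradients that sets its adaptive stepsize. The network (a 3-node graph with a doubly stochastic mixing matrix `W`), the quadratic local costs, and all parameter values are illustrative assumptions; the paper treats general L-smooth non-convex costs with stochastic sparse gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 nodes, local costs f_i(x) = 0.5*(x - c_i)^2,
# so the global minimizer of the sum is the mean of c, i.e. 3.0.
c = np.array([1.0, 3.0, 5.0])
W = np.array([[0.50, 0.25, 0.25],      # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def stoch_grad(x, i):
    """Noisy gradient of node i's local cost (bounded-variance noise)."""
    return (x - c[i]) + 0.01 * rng.standard_normal()

n = 3
x = np.zeros(n)                                        # local iterates
g = np.array([stoch_grad(x[i], i) for i in range(n)])  # local stochastic gradients
y = g.copy()                                           # gradient-tracking estimates
v = np.zeros(n)                                        # accumulated squared gradients
alpha, eps = 0.5, 1e-8

for k in range(500):
    v += y**2                                          # adaptive stepsize from history
    x = W @ x - alpha * y / (np.sqrt(v) + eps)         # mix with neighbors, adaptive step
    g_new = np.array([stoch_grad(x[i], i) for i in range(n)])
    y = W @ y + g_new - g                              # tracking update: y follows avg gradient
    g = g_new

print(x)  # iterates should cluster near the global minimizer 3.0
```

The tracking update `y = W @ y + g_new - g` preserves the invariant that the average of the `y_i` equals the average of the current local gradients, which is what lets each node descend the global objective despite heterogeneous local costs.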