Abstract
Distributed optimization has attracted significant interest owing to its wide range of applications. It involves multiple agents, connected by a graph, that collaboratively minimize a total cost. In many applications, the graph connecting the agents is directed. The gradient-push algorithm is a fundamental algorithm for distributed optimization when the agents are connected by a directed graph. Despite its wide usage in the literature, its convergence has not been well established for the important case where the stepsize is constant and the domain is the entire space. This work proves that the gradient-push algorithm with stepsize α > 0 converges exponentially fast to an O(α)-neighborhood of the optimizer whenever α is below a specific threshold. For this result, we assume that each local cost is smooth and that the total cost is strongly convex. Numerical experiments are provided to support the theoretical convergence result. We also present a numerical test showing that the gradient-push algorithm may approach a small neighborhood of the minimizer faster than Push-DIGing, a variant of the gradient-push algorithm in which agents also share their gradient information.
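The paper itself does not include code; the following is a minimal sketch of the standard gradient-push iteration on a toy problem, to make the setting concrete. The directed ring graph, the quadratic local costs, the mixing matrix `A`, and the stepsize `alpha` are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy setup (assumed, not from the paper): n agents on a directed ring,
# agent i holds f_i(x) = 0.5 * (x - b[i])**2, so the total cost is
# strongly convex with minimizer mean(b).
n = 5
rng = np.random.default_rng(0)
b = rng.standard_normal(n)
opt = b.mean()  # minimizer of the total cost

# Column-stochastic mixing matrix: agent j sends to itself and to
# (j + 1) mod n with equal weight, so every column sums to 1.
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5

alpha = 0.01        # constant stepsize, assumed small enough
x = np.zeros(n)     # gradient-push states
y = np.ones(n)      # push-sum weights

for _ in range(3000):
    w = A @ x                # mix states along out-edges
    y = A @ y                # mix push-sum weights
    z = w / y                # de-biased local estimates of the average
    x = w - alpha * (z - b)  # gradient step; grad f_i(z_i) = z_i - b[i]

# Each z_i should now lie in an O(alpha)-neighborhood of the optimizer.
print(np.max(np.abs(z - opt)))
```

The push-sum weights `y` correct the bias that a column-stochastic (rather than doubly stochastic) mixing matrix introduces; on this symmetric ring they stay close to 1, but on a general directed graph the division `w / y` is essential.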
| Original language | English |
|---|---|
| Pages (from-to) | 713-736 |
| Number of pages | 24 |
| Journal | Journal of Global Optimization |
| Volume | 92 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jul 2025 |
Keywords
- Convex optimization
- Gradient-push algorithm
- Push-sum algorithm