Convergence Results of a Nested Decentralized Gradient Method for Non-strongly Convex Problems

Woocheol Choi, Doheon Kim, Seok Bae Yun

Research output: Contribution to journal › Article › peer-review


Abstract

We are concerned with the convergence of the NEAR-DGD+ (Nested Exact Alternating Recursion Distributed Gradient Descent) method, introduced to solve distributed optimization problems. Under the assumptions that the local objective functions are strongly convex and their gradients are Lipschitz continuous, linear convergence was established in Berahas et al. (IEEE Trans Autom Control 64:3141-3155, 2019). In this paper, we investigate the convergence of NEAR-DGD+ in the absence of strong convexity. More precisely, we establish convergence results in the following two cases: (1) when only convexity is assumed on the objective function; (2) when the objective function is given by the composition of a strongly convex function with a rank-deficient matrix, which falls into the class of convex and quasi-strongly convex functions. Numerical results are provided to support the convergence results.
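To make the iteration concrete, below is a minimal sketch of the NEAR-DGD+ scheme on a toy decentralized least-squares problem. Everything problem-specific here is an assumption for illustration, not from the paper: five agents on a ring graph with Metropolis mixing weights, local objectives f_i(x) = ½‖A_i x − b_i‖², a fixed step size alpha, and k consensus rounds at iteration k (the increasing communication that distinguishes the "+" variant).

```python
# Hypothetical sketch of the NEAR-DGD+ iteration; problem data, graph,
# and step size are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Local data for f_i(x) = 0.5 * ||A_i x - b_i||^2 (merely convex overall
# if the stacked matrix is rank deficient).
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix W for a ring graph (Metropolis weights:
# each node has degree 2, so neighbor weights are 1/3).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((n_agents, dim))   # row i holds agent i's local iterate
alpha = 0.01

for k in range(1, 51):
    # Local gradient step at every agent (the "computation" step).
    y = x - alpha * np.array([local_grad(i, x[i]) for i in range(n_agents)])
    # Nested consensus: k rounds of mixing at iteration k (NEAR-DGD+
    # increases the communication budget as the iteration proceeds).
    x = np.linalg.matrix_power(W, k) @ y

print("consensus disagreement:", np.linalg.norm(x - x.mean(axis=0)))
```

Under the strong-convexity assumptions of Berahas et al. this recursion converges linearly; the paper's contribution is to analyze what survives of this behavior when the local objectives are only convex or quasi-strongly convex.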

Original language: English
Pages (from-to): 172-204
Number of pages: 33
Journal: Journal of Optimization Theory and Applications
Volume: 195
Issue number: 1
DOIs
State: Published - Oct 2022

Keywords

  • Distributed gradient methods
  • NEAR-DGD
  • Quasi-strong convexity

