Perturbed Proximal Gradient ADMM for Nonconvex Composite Optimization
Abstract
This paper proposes a Perturbed Proximal Gradient ADMM (PPG-ADMM) framework for solving general nonconvex composite optimization problems in which the objective associated with each primal variable consists of a smooth nonconvex term and a nonsmooth weakly convex term. Unlike existing ADMM-based methods, which require the function associated with the last-updated primal variable to be smooth, the proposed PPG-ADMM removes this restriction by introducing a perturbation mechanism, which also dampens oscillations in the primal-dual updates and thereby improves convergence stability. By linearizing the smooth term and applying the proximal operator to the nonsmooth weakly convex term, the subproblems admit closed-form solutions, significantly reducing computational complexity. Convergence is established through a carefully constructed Lyapunov function, which guarantees sufficient descent and is bounded below. With properly chosen parameters, PPG-ADMM converges to an $\epsilon$-approximate stationary point at a sublinear rate of $\mathcal{O}(1/\sqrt{K})$. Furthermore, by appropriately tuning the perturbation parameter $\beta$, it reaches an $\epsilon$-stationary point, providing a stronger optimality guarantee. We further apply PPG-ADMM to two practical distributed nonconvex composite optimization problems, namely the distributed partial consensus problem and the resource allocation problem, where the algorithm operates in a fully decentralized manner without a central coordinating node. Finally, numerical experiments validate the effectiveness of PPG-ADMM and demonstrate its improved convergence performance.
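To make the linearization-plus-proximal idea in the abstract concrete, the following is a minimal sketch under assumed notation (not the paper's exact update rule, which additionally involves the perturbation parameter $\beta$): for a problem of the form $\min_{x,y} f(x)+g(x)+h(y)+r(y)$ subject to $Ax+By=c$, with $f$ smooth nonconvex and $g$ nonsmooth weakly convex, linearizing $f$ and the augmented-Lagrangian penalty around $x^{k}$ reduces the $x$-subproblem to a single proximal step,
\begin{equation*}
x^{k+1} = \operatorname{prox}_{\eta g}\!\Big( x^{k} - \eta \big( \nabla f(x^{k}) + A^{\top}\lambda^{k} + \rho A^{\top}(Ax^{k} + By^{k} - c) \big) \Big),
\end{equation*}
where $\lambda^{k}$ is the dual variable, $\rho>0$ the penalty parameter, and $\eta>0$ a step size; the closed-form availability of $\operatorname{prox}_{\eta g}$ is what keeps the per-iteration cost low.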