Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Continuous and Discrete
Abstract
This paper studies the point convergence of accelerated gradient methods for unconstrained convex smooth multiobjective optimization problems, covering both continuous-time gradient flows and discrete-time algorithms. In single-objective optimization, the point convergence problem of Nesterov's accelerated gradient method at the critical damping parameter $\alpha = 3$ has recently been resolved. This paper extends that theoretical framework to the multiobjective setting, focusing on the multiobjective inertial gradient system with asymptotically vanishing damping (MAVD) at $\alpha = 3$ and the multiobjective accelerated proximal gradient algorithm (MAPG). For the continuous system, we construct a Lyapunov function suited to the multiobjective setting and prove that, under appropriate assumptions, the trajectory $x(t)$ converges to a weakly Pareto optimal solution. For the discrete algorithm, we construct a corresponding discrete Lyapunov function and prove that the sequence $\{x_k\}$ generated by the algorithm converges to a weakly Pareto optimal solution.
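For context, the single-objective continuous-time model underlying this line of work is the inertial gradient system with asymptotically vanishing damping associated with Nesterov's method, in which $\alpha = 3$ is the critical damping value; a common multiobjective generalization replaces the gradient by an element of the convex hull of the objective gradients. The following is a sketch of these systems under that assumed formulation; the paper's exact definition of MAVD may differ in details.

```latex
% Single-objective inertial system with vanishing damping (alpha = 3 critical):
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0, \qquad t > t_0 > 0.
\]
% Assumed multiobjective analogue for objectives f_1, ..., f_m: the gradient is
% replaced by an element of the convex hull of the objective gradients,
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
    \in -\operatorname{conv}\bigl\{\nabla f_i(x(t)) : i = 1,\dots,m\bigr\}.
\]
```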