Abstract
We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x_k)} converges at least at the sublinear rate of k^(−ε) for some positive constant ε, where k is the iteration index. Moreover, the distances from the iterate sequence {x_k} to the set of stationary points of the optimization problem converge to zero at least sublinearly.
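The rate claims in the abstract can be restated compactly; a hedged sketch, where f̄ denotes the limit of the function values and X̄ the set of stationary points (this notation is assumed, not taken from the paper):

```latex
% Hedged restatement of the abstract's convergence claims.
% \bar{f} = \lim_{k\to\infty} f(x_k), \bar{X} = set of stationary points (assumed notation).
f(x_k) - \bar{f} = O\!\left(k^{-\varepsilon}\right)
\quad \text{for some constant } \varepsilon > 0,
\qquad
\operatorname{dist}(x_k, \bar{X}) \to 0 \ \text{at least sublinearly.}
```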
| Original language | English (US) |
|---|---|
| Pages (from-to) | 341-355 |
| Number of pages | 15 |
| Journal | Mathematical Programming, Series B |
| Volume | 88 |
| Issue number | 2 |
| DOIs | |
| State | Published - Aug 2000 |