Optimization problems and Variational Inequalities on domains given by Linear Minimization Oracle
Abstract
Classical First Order methods for large-scale convex-concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. At each iteration they require minimizing the sum of a linear form and a strongly convex (proximal) function. To make such an algorithm practical, the problem domain X should be proximal-friendly, that is, admit a strongly convex function whose linear perturbations are easy to minimize. As a byproduct, X admits a computationally cheap Linear Minimization Oracle (LMO) capable of minimizing linear forms over X. There are, however, important situations where a cheap LMO is indeed available, but X is not proximal-friendly. This motivates the search for algorithms based solely on an LMO. For smooth convex minimization there exists a classical LMO-based algorithm, the Conditional Gradient (a.k.a. Frank-Wolfe) algorithm. In contrast, the LMO-based techniques known to us for other problems with convex structure (composite minimization, nonsmooth convex minimization, convex-concave saddle point problems, and variational inequalities with monotone operators, even as simple as affine ones) are quite recent. Here we discuss some new LMO-based decomposition techniques for such problems.
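To fix ideas, a standard form of the Conditional Gradient iteration for minimizing a smooth convex function f over X, using nothing but the LMO and with the usual step size choice, reads

\[
x_{t+1} = x_t + \gamma_t\,(\widehat{x}_t - x_t),
\qquad
\widehat{x}_t \in \operatorname*{argmin}_{x \in X}\, \langle \nabla f(x_t), x \rangle,
\qquad
\gamma_t = \frac{2}{t+2},
\]

so that every iteration costs one gradient evaluation and one call to the LMO, with no projection or proximal computation on X.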