Title: Accelerating Sparse Over-parameterized Solvers via Fast Inexact Gradients
Speaker: Associate Professor 梁经纬 (Shanghai Jiao Tong University)
Time: Thursday, March 27, 2025, 9:00
Place: Room GJ303, Main Campus
Inviter: 曾晓艳
Host: Department of Mathematics, College of Sciences
Abstract: Sparse regularizers, such as the $\ell_1$-norm, (overlapping) group sparsity, and low-rank penalties, are widely used in data science and machine learning. Designing efficient numerical solvers for sparse regularization has been an extremely active research area since the new millennium. Recently, over-parameterized solvers such as VarPro [Poon & Peyré, 2023] have become particularly attractive owing to their improved conditioning and their ability to transform the original non-smooth optimization problem into a smooth one, which opens the door to high-order numerical schemes such as quasi-Newton methods. However, despite their fast global convergence, such methods incur a high per-iteration cost. To circumvent this drawback, in this talk I will introduce an inexact version of VarPro, built on an ``approximate dimension reduction'', which significantly reduces the per-iteration complexity. The error incurred at each step can be directly controlled, and the inexact gradient approach is adaptive, allowing for significant acceleration without hurting convergence. The method works seamlessly with L-BFGS and substantially enhances the performance of VarPro. Numerical experiments are provided for various sparse optimization problems.
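To illustrate the over-parameterization idea mentioned in the abstract, the sketch below is not the speaker's VarPro algorithm but a minimal, standard example of the same principle: the Hadamard factorization $x = u \odot v$ together with the identity $\|x\|_1 = \min_{u\odot v = x} \tfrac12(\|u\|^2 + \|v\|^2)$ turns the non-smooth $\ell_1$-regularized least-squares problem into a smooth one, which a quasi-Newton method such as L-BFGS can then minimize. All problem sizes and parameter values here are illustrative assumptions.

```python
# Illustrative sketch (not the speaker's method): smooth over-parameterization
# of LASSO via the Hadamard product x = u * v, solved with L-BFGS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 40, 100                      # assumed toy problem size
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = 1.0                    # 5-sparse ground truth
b = A @ x_true
lam = 0.1                           # assumed regularization weight

def f(z):
    """Smooth surrogate: 0.5*||A(u*v)-b||^2 + 0.5*lam*(||u||^2+||v||^2)."""
    u, v = z[:n], z[n:]
    r = A @ (u * v) - b
    # (||u||^2 + ||v||^2)/2 upper-bounds ||u*v||_1, with equality when |u|=|v|
    return 0.5 * r @ r + 0.5 * lam * (u @ u + v @ v)

def grad(z):
    u, v = z[:n], z[n:]
    g = A.T @ (A @ (u * v) - b)     # gradient w.r.t. x = u*v
    return np.concatenate([g * v + lam * u, g * u + lam * v])

z0 = np.full(2 * n, 0.1)            # small non-zero init so gradients flow
res = minimize(f, z0, jac=grad, method="L-BFGS-B")
x_hat = res.x[:n] * res.x[n:]       # recover x from the factorization
```

Note that every L-BFGS iteration needs the full gradient, i.e. products with $A$ and $A^\top$; the inexact-gradient approach of the talk targets exactly this per-iteration cost.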