Adaptive Regularization within Trust Region Methods for Stochastic Nonconvex Optimization

Abstract

We propose a stochastic nonconvex optimization algorithm that achieves almost sure $\tilde{\mathcal{O}}(\epsilon^{-1.5})$ iteration complexity for problems with smooth objective functions and gradients observable only with noise. The mean-zero stochastic noise is decision-dependent and has unbounded support with subexponential tails, allowing our framework to cover a broad class of problems. The improved almost sure iteration complexity is achieved with a new variant of the adaptive sampling trust-region optimization (ASTRO) algorithm augmented with an adaptively regularized local model, which we term Reg-ASTRO. Adaptive sampling ensures that the estimation precision is aligned with a measure of stationarity, so that iterates closer to stationarity trigger a higher accuracy requirement for sampling. A key analytical challenge arises because the trust-region radius and regularization are coupled and not determined prior to gradient estimation at each iteration. We further establish an almost sure $\tilde{\mathcal{O}}(\epsilon^{-4.5})$ sample complexity for Reg-ASTRO, which improves to $\tilde{\mathcal{O}}(\epsilon^{-3.5})$ under stronger regularity conditions and the use of common random numbers, substantially outperforming first-order methods both in theory and in numerical experiments.
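To illustrate the core mechanism, the sketch below shows a much-simplified adaptive-sampling trust-region loop on a toy quadratic: gradient samples are drawn until the standard error of the mean falls below a multiple of the trust-region radius, so shrinking radii (iterates nearer stationarity) automatically demand tighter estimates. This is an assumption-laden illustration, not the paper's Reg-ASTRO algorithm; all names (`kappa`, `mu`, the step rule, the acceptance threshold) are invented for the sketch, and the coupled radius/regularization analysis is replaced by a crude damped gradient step.

```python
import numpy as np

def noisy_grad(x, rng, sigma=0.1):
    # True gradient of f(x) = 0.5*||x||^2 plus mean-zero Gaussian noise
    return x + sigma * rng.standard_normal(x.shape)

def adaptive_sample_grad(x, delta, rng, kappa=1.0, n0=4, n_max=4096):
    # Draw gradient samples until the standard error of the sample mean is
    # at most kappa * delta: a smaller trust region triggers a higher
    # accuracy requirement for sampling.
    samples = [noisy_grad(x, rng) for _ in range(n0)]
    while len(samples) < n_max:
        se = np.std(samples, axis=0, ddof=1).sum() / np.sqrt(len(samples))
        if se <= kappa * delta:
            break
        samples.append(noisy_grad(x, rng))
    return np.mean(samples, axis=0), len(samples)

def reg_astro_sketch(f, x0, delta0=1.0, mu=0.1, iters=60, seed=0):
    rng = np.random.default_rng(seed)
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(iters):
        g, _ = adaptive_sample_grad(x, delta, rng)
        gnorm = np.linalg.norm(g)
        # Damped gradient step truncated to the trust region; the mu term
        # is a stand-in for the paper's adaptive regularization.
        c = min(delta / max(gnorm, 1e-12), 1.0 / (1.0 + mu))
        step = -c * g
        pred = c * gnorm ** 2           # model-predicted decrease
        actual = f(x) - f(x + step)     # noise-free f here, for clarity
        rho = actual / max(pred, 1e-12)
        if rho > 0.1:                   # accept step, expand the region
            x, delta = x + step, min(2 * delta, 10.0)
        else:                           # reject step, shrink the region
            delta *= 0.5
    return x
```

Note how rejections shrink `delta`, which in turn forces `adaptive_sample_grad` to average more samples on the next iteration, so estimation precision and stationarity progress stay aligned.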

Publication
Under Review at Mathematical Programming
Yunsoo Ha
Postdoctoral Researcher

My research interests include stochastic optimization, Monte Carlo simulation, quantum computing, and decision-focused learning.