Randomised Levenberg-Marquardt methods for nonlinear least-squares
In recent years, randomised linear algebra \cite{troop} has emerged as a powerful tool for solving problems with high computational and memory demands. Randomised sampling and randomised embeddings lie at the core of a variety of optimisation methods with random models that are suitable for many applications, including machine learning. We focus on nonlinear least-squares problems and present stochastic Levenberg-Marquardt/Gauss-Newton methods combined with a line-search strategy. We discuss randomised dimensionality reduction either in the variable dimension, through subsampling, or in the dimension of the observations, through random embedding, referred to as sketching. In the latter case, the computation of the trial step is restricted to a random subspace of reduced dimension. The proposed algorithms have lower per-iteration computational complexity than classical deterministic methods. We discuss the construction of the random models and iteration complexity bounds for driving the gradient norm below a prescribed accuracy, and then present results from our computational experience.
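To illustrate the sketching idea described above, the following is a minimal sketch (not the authors' implementation) of one sketched Levenberg-Marquardt trial step with a backtracking line search on the true objective. The Gaussian sketching matrix, the damping update rule, and the toy test problem are all illustrative assumptions; the abstract does not specify these choices.

```python
import numpy as np

def sketched_lm_step(res, Jac, mu, s, rng):
    """One sketched LM trial step: the observation dimension m is reduced
    to s via a Gaussian embedding before solving the damped normal equations.
    res: residual vector (m,);  Jac: Jacobian (m, n);  mu: LM damping."""
    m, n = Jac.shape
    S = rng.standard_normal((s, m)) / np.sqrt(s)  # sketching matrix, s << m
    SJ, Sr = S @ Jac, S @ res
    # Damped normal equations of the sketched least-squares subproblem
    return np.linalg.solve(SJ.T @ SJ + mu * np.eye(n), -SJ.T @ Sr)

def armijo(f, x, p, g, beta=0.5, c=1e-4):
    """Backtracking line search on the true objective f(x) = 0.5||r(x)||^2."""
    t = 1.0
    while f(x + t * p) > f(x) + c * t * (g @ p) and t > 1e-12:
        t *= beta
    return t

# --- illustrative toy problem: fit x in r(x) = tanh(A x) - b ---
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 2))
b = np.tanh(A @ np.array([0.5, -0.3]))
r = lambda x: np.tanh(A @ x) - b
J = lambda x: (1.0 - np.tanh(A @ x) ** 2)[:, None] * A
f = lambda x: 0.5 * np.dot(r(x), r(x))

x, mu = np.zeros(2), 1.0
f0 = f(x)
for _ in range(30):
    p = sketched_lm_step(r(x), J(x), mu, s=40, rng=rng)
    g = J(x).T @ r(x)                      # true gradient
    if g @ p < 0:                          # descent direction: try the step
        t = armijo(f, x, p, g)
        if f(x + t * p) < f(x):
            x, mu = x + t * p, max(0.5 * mu, 1e-8)
            continue
    mu *= 2.0  # reject: increase damping, resample the sketch next iteration
f_final = f(x)
```

Because the step is computed from a random low-dimensional embedding, it need not be a descent direction for the true objective at every iteration; the damping update above handles rejected steps in the usual LM fashion.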
