Convergence rates for Tikhonov regularization on compact sets: application to neural networks
In this presentation, we introduce a novel regularization method for ill-posed inverse problems, based on an increasing sequence of compact sets whose union is dense in the solution space. The approach is inspired by selection methods and the classical Ritz method. The key idea is to approximate the solution within an increasing family of compact subsets by minimizing the classical Tikhonov functional over each set. We analyze the convergence rates of the method and show that, if the compact sets become dense sufficiently fast, we recover the classical convergence rates of Tikhonov regularization. Moreover, in the linear case, we obtain optimal convergence rates under standard source conditions. Finally, we apply this framework to neural networks, which we employ in a novel way within inverse problems: as a tool to parametrize the solution, leveraging the technique of positional encoding. This leads to a new perspective on how neural networks can contribute to the regularization of ill-posed problems.
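Schematically, the construction minimizes the Tikhonov functional over each compact set. In the formulation below, the forward operator F, noisy data y^δ, reference element x_0, and regularization parameter α are illustrative notation introduced here; the abstract itself does not fix these symbols:

```latex
% Schematic formulation of Tikhonov regularization restricted to
% compact sets; F, y^delta, x_0, alpha, K_n are assumed notation.
\[
  x_{n,\alpha}^{\delta} \in \operatorname*{arg\,min}_{x \in K_n}
    \left\| F(x) - y^{\delta} \right\|^{2}
    + \alpha \left\| x - x_0 \right\|^{2},
  \qquad
  K_1 \subset K_2 \subset \cdots ,
  \quad
  \overline{\textstyle\bigcup_{n} K_n} = X .
\]
```

A neural-network parametrization of the solution with positional encoding could look roughly as follows. This is a minimal sketch assuming PyTorch; the discretized forward operator A, the architecture, and all hyperparameters are hypothetical choices, not specified by the abstract. Fixing the architecture and keeping the weights bounded is one way such a network class can play the role of a compact set K_n:

```python
# Minimal sketch (PyTorch assumed; the operator A, grid, and all
# hyperparameters below are illustrative, not from the abstract).
import torch

def positional_encoding(t, num_freqs=6):
    """Map coordinates t in [0, 1] to sin/cos features at dyadic frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi
    angles = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class SolutionNet(torch.nn.Module):
    """Parametrizes the unknown x(t) via positional encoding + a small MLP."""
    def __init__(self, num_freqs=6, width=64):
        super().__init__()
        self.num_freqs = num_freqs
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * num_freqs, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1),
        )

    def forward(self, t):
        return self.mlp(positional_encoding(t, self.num_freqs)).squeeze(-1)

# Toy linear ill-posed problem: A is a smoothing (integration) operator.
n = 128
t = torch.linspace(0.0, 1.0, n)
A = torch.tril(torch.ones(n, n)) / n            # discretized integration
x_true = torch.sin(2 * torch.pi * t)
y_delta = A @ x_true + 1e-3 * torch.randn(n)    # noisy data y^delta

net = SolutionNet()
alpha = 1e-4                                     # Tikhonov parameter (illustrative)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    x = net(t)                                   # current iterate in the network class
    # Discretized Tikhonov functional: data misfit + alpha * penalty (x_0 = 0 here)
    loss = ((A @ x - y_delta) ** 2).mean() + alpha * (x ** 2).mean()
    loss.backward()
    opt.step()
```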
