Explainable Machine Learning with Fredholm Neural Networks
Within the family of explainable machine learning, we develop and present Fredholm Neural Networks (Fredholm NNs): deep neural networks (DNNs) that replicate fixed-point iterations for the solution of linear and nonlinear Fredholm Integral Equations (FIEs) of the second kind. Other methods connecting integral equations with neural networks have been considered; our approach, however, provides insight into the values of the hyperparameters and the trainable/explainable weights and biases of the DNN by connecting them directly to the underlying mathematical theory, thereby providing a constructive approach to model development that can be used to solve both forward and inverse problems in Scientific Machine Learning. Moreover, the connection between such IEs and PDEs is established via the Boundary Integral Equation (BIE) method and the double-layer potential integral. We develop a modified representation of the double-layer potential and use Fredholm NNs, together with an additional layer that replicates this new representation, to solve elliptic PDEs. The resulting neural network achieves not only significant numerical approximation accuracy but also interpretability and consistency across both the domain and the boundary. This constructive approach shows that there exist neural network architectures that can be interpreted directly through "classical" numerical schemes, and we suggest that the general framework can find applications in fields such as Uncertainty Quantification (UQ) and explainable artificial intelligence (XAI).
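To make the underlying numerical scheme concrete, the fixed-point (successive-approximation) iteration that the Fredholm NN layers replicate can be sketched as follows for a linear FIE of the second kind, u(x) = f(x) + λ ∫ K(x, t) u(t) dt. This is a minimal illustration of the classical iteration only, not the authors' network architecture; the kernel, source term, λ, and grid size are all illustrative assumptions.

```python
import numpy as np

# Discretize the Fredholm IE of the second kind on [0, 1] with a uniform grid
# and trapezoidal quadrature weights. All specific choices below (kernel,
# source term, lam, grid size) are hypothetical examples.
n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))   # trapezoidal rule weights
w[0] *= 0.5
w[-1] *= 0.5

lam = 0.5                                        # |lam| * ||K|| < 1 => contraction
K = np.exp(-np.abs(x[:, None] - x[None, :]))     # sample kernel K(x, t)
f = np.sin(np.pi * x)                            # sample source term f(x)

# Picard iteration u_{k+1} = f + lam * integral(K(x, t) u_k(t) dt);
# each Fredholm NN layer corresponds to one such update.
u = f.copy()                                     # initial guess u_0 = f
for _ in range(200):
    u_new = f + lam * (K * w) @ u
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

# At the fixed point, the residual of the discretized equation vanishes.
residual = np.max(np.abs(u - (f + lam * (K * w) @ u)))
```

Since the iteration is a contraction for |λ|·‖K‖ < 1, the residual decays geometrically, which is what lets each network layer's weights be read off directly from the quadrature rule and the kernel.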
