Rigorous Error Certification for Neural PDE Solvers: From Empirical Residuals to Solution Guarantees

Published in Preprint, 2026

Uncertainty quantification for partial differential equations is traditionally grounded in discretization theory, where solution error is controlled via mesh or grid refinement. Physics-informed neural networks depart fundamentally from this paradigm: they approximate solutions by minimizing residual losses at collocation points, introducing new sources of error arising from optimization, sampling, representation, and overfitting. As a result, bounding the generalization error in the solution space remains an open problem. Our main theoretical contribution establishes generalization bounds that connect residual control to solution-space error. We prove that when neural approximations lie in a compact subset of the solution space, vanishing residual error guarantees convergence to the true solution. We derive deterministic and probabilistic convergence results and provide certified generalization bounds that translate residual, boundary, and initial-condition errors into explicit solution error guarantees.
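To make the residual-versus-solution-error distinction concrete, here is a minimal sketch (not taken from the paper) of the kind of empirical residual loss a physics-informed solver minimizes. The test problem, the 1D Poisson equation -u'' = f with zero boundary conditions, the midpoint collocation grid, and the finite-difference approximation of u'' are all illustrative assumptions; actual PINNs compute the residual via automatic differentiation of the network.

```python
import math

def u_exact(x):
    # True solution of -u'' = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0
    return math.sin(math.pi * x)

def f(x):
    # Source term matching u_exact
    return math.pi ** 2 * math.sin(math.pi * x)

def residual(u, x, h=1e-4):
    # PDE residual r(x) = -u''(x) - f(x); u'' approximated by central differences
    # (an illustrative stand-in for automatic differentiation of a network)
    u_pp = (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2
    return -u_pp - f(x)

def empirical_loss(u, n=100):
    # Mean squared residual at interior collocation points plus a boundary penalty,
    # mirroring the residual + boundary terms in a typical PINN training loss
    xs = [(i + 0.5) / n for i in range(n)]
    interior = sum(residual(u, x) ** 2 for x in xs) / n
    boundary = u(0.0) ** 2 + u(1.0) ** 2
    return interior + boundary

loss_exact = empirical_loss(u_exact)                   # near zero
loss_bad = empirical_loss(lambda x: x * (1.0 - x))     # wrong candidate: large
```

The paper's question is the converse direction: a small value of `empirical_loss` is only a statement about residuals at sampled points, and the certified bounds are what license the step from that quantity to a guarantee on the solution-space error itself.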

Recommended citation: Mukherjee, A., Fitzsimmons, M., Del Rey Fernandez, D.C., & Liu, J. (2026). "Rigorous Error Certification for Neural PDE Solvers: From Empirical Residuals to Solution Guarantees." arXiv:2603.19165
Download Paper