Vu-Anh Le, Mehmet Dik
Neural operators have emerged as transformative tools for learning mappings between infinite-dimensional function spaces, with applications to solving complex partial differential equations (PDEs). This paper presents a rigorous mathematical framework for analyzing the behavior of neural operators, with a focus on their stability, convergence, clustering dynamics, universality, and generalization error. Through a series of novel theorems, we establish stability bounds in Sobolev spaces and demonstrate clustering in function space via a gradient-flow interpretation, guiding neural operator design and optimization. Building on these theoretical guarantees, we aim to offer clear, unified guidance in a single setting for the future design of neural operator-based methods.
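As a rough illustration of the kind of operator the abstract refers to, the sketch below implements a single Fourier-style spectral convolution layer in PyTorch: it learns a mapping between discretized functions by filtering in frequency space. This is a minimal sketch under assumptions of my own (the class name, the mode-truncation parameter `modes`, and the 1-D setting), not the paper's construction.

```python
# Minimal, illustrative neural operator layer (not the paper's method):
# FFT -> learned multiplication on low Fourier modes -> inverse FFT.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """One Fourier layer acting on functions sampled on a uniform 1-D grid."""

    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of retained (lowest) Fourier modes
        scale = 1.0 / (in_channels * out_channels)
        # Complex weights acting on the retained modes.
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, grid_points)
        x_ft = torch.fft.rfft(x)  # transform to frequency space
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1), dtype=torch.cfloat
        )
        # Multiply the lowest `modes` frequencies by the learned weights.
        out_ft[:, :, : self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


# Usage: map a batch of discretized input functions to output functions.
layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
u = torch.randn(8, 1, 128)  # 8 sample functions on a 128-point grid
v = layer(u)                # output functions on the same grid
```

Because the learned weights act on Fourier coefficients rather than grid values, the same layer can in principle be evaluated at different discretizations of the input function, which is one reason such architectures are studied as maps between function spaces rather than between fixed-size vectors.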