Parallel-in-time (PinT) integration algorithms, such as Parareal, PFASST, or MGRIT, have proven effective for time-dependent problems. However, constructing coarse-level models for these algorithms is a challenge, demanding expertise in numerics, domain science, and high-performance computing (HPC). This study focuses on leveraging machine learning (ML) techniques, specifically neural operators (NOs), to offer a more generic and efficient approach to the development of effective coarse models.
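To make the role of the coarse model concrete, the Parareal iteration can be sketched as follows. This is a minimal illustration, not code from any of the cited frameworks: `coarse` and `fine` are placeholder propagators advancing a state between two times, and a learned neural-operator surrogate would take the role of `coarse`.

```python
import numpy as np

def parareal(u0, t0, t1, n, coarse, fine, n_iter=5):
    """Basic Parareal iteration over n time slices.

    coarse(u, ta, tb) and fine(u, ta, tb) advance the state u
    from time ta to tb; coarse is cheap, fine is accurate."""
    ts = np.linspace(t0, t1, n + 1)
    # Initial guess from a serial coarse sweep
    u = [u0]
    for i in range(n):
        u.append(coarse(u[i], ts[i], ts[i + 1]))
    for _ in range(n_iter):
        # Fine sweeps over the slices are independent -> parallel in time
        f = [fine(u[i], ts[i], ts[i + 1]) for i in range(n)]
        g_old = [coarse(u[i], ts[i], ts[i + 1]) for i in range(n)]
        u_new = [u0]
        for i in range(n):
            # Parareal correction: new coarse + fine - old coarse
            u_new.append(coarse(u_new[i], ts[i], ts[i + 1]) + f[i] - g_old[i])
        u = u_new
    return u
```

After k iterations the first k slices match the serial fine solution exactly, so the iteration converges in at most n iterations; the goal is convergence in far fewer.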
Established PinT algorithms require the creation of coarse-level models, introducing complexities that demand substantial expertise and time for implementation and tuning. The integration of ML techniques, particularly neural operators, offers a practical alternative. In their most general form, neural operators model the solution operator of partial differential equations (PDEs), mapping a set of inputs (initial conditions, boundary conditions, forcing terms, parameters) to the corresponding solution. NeuralPint formulates the solution operator as a high-dimensional neural network whose parameters are learned through a training procedure.
The ML-based solution operators enable the iterative construction of solutions to time-dependent problems over long time horizons: the solution obtained in each step serves as the initial condition for the next. This makes the ML-based approach a versatile and efficient strategy for solving complex PDEs over extended temporal domains.
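The iterative construction described above amounts to an autoregressive rollout. A minimal sketch follows, where `step` is a hypothetical stand-in for a trained neural operator that advances the state by one time window:

```python
def rollout(step, u0, n_steps):
    """Autoregressive rollout: the operator's output at each step
    becomes the initial condition for the next step."""
    states = [u0]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states
```

Because each call only depends on the previous output, the same loop works regardless of how `step` is implemented, which is what makes the learned operator easy to slot in as a coarse propagator.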
While ML-based solvers may exhibit limited accuracy when considered in isolation, their rapid evaluation times after training present a compelling advantage. Notably, their compute patterns map well onto GPU architectures, enabling efficient use of accelerators, in contrast to traditional mesh-based numerical models, which often struggle to exploit the full potential of GPUs. The combination of fast execution and reasonable accuracy positions ML-based solvers as ideal candidates for constructing effective PinT coarse models.
The combination of ML techniques with PinT algorithms not only simplifies the construction of coarse models but also addresses the computational challenges inherent in high-performance computing environments. The efficiency of ML-based solvers on GPUs aligns well with the demands of PinT coarse models, providing a good balance between speed and accuracy.
The integration of machine learning, particularly neural operators, into the realm of PinT algorithms represents a transformative approach to constructing efficient coarse models. By capitalizing on the speed and adaptability of ML-based solvers, this methodology offers a generic solution that reduces the dependence on intricate numerical expertise and accelerates the implementation and tuning process.