Machine Learning Through Physics-Informed Neural Networks: Progress and Challenges

Abstract: Physics-Informed Neural Networks (PINNs) represent a groundbreaking approach wherein neural networks (NNs) integrate model equations, such as Partial Differential Equations (PDEs), within their architecture. This innovation has become instrumental in solving diverse problem sets including PDEs, fractional equations, integro-differential equations, and stochastic PDEs. PINNs constitute a versatile multi-task learning framework that tasks NNs with fitting observed data while simultaneously minimizing PDE residuals. This paper delves into the landscape of PINNs, aiming to delineate their inherent strengths and weaknesses. Beyond exploring the fundamental characteristics of these networks, this review encompasses a wider spectrum of collocation-based physics-informed neural networks, extending beyond the core PINN model to variants such as physics-constrained neural networks (PCNNs), the variational hp-VPINN, and the conservative PINN (CPINN). The study highlights a predominant focus in research on tailoring PINNs through diverse strategies: adapting activation functions, refining gradient optimization techniques, innovating neural network structures, and enhancing loss function architectures. Although PINNs have demonstrated broad applicability, surpassing classical numerical methods such as the Finite Element Method (FEM) in certain contexts, the review highlights ongoing opportunities for advancement; in particular, persisting theoretical challenges demand resolution to ensure the continued evolution and refinement of this approach.


Introduction
Deep neural networks have excelled across diverse domains such as remote sensing [1,2], computer vision [3,4], risk prevention [5], optimization [6,7], pattern recognition [3,8,9], regression tasks [10], and hyperspectral imaging [11][12][13][14]. In applied mathematics, the application of deep learning techniques to classical problems, such as partial differential equations (PDEs), marks a significant trend. Traditional numerical approaches struggle when faced with PDEs featuring strong nonlinearities, convection dominance, or shocks. Deep learning presents a new paradigm in scientific computing, owing to the remarkable universal approximation capability and expressiveness of neural networks. Recent studies underscore the potential of deep learning in constructing meta-models for rapid predictions in dynamic systems, adeptly capturing intricate nonlinear input-output relationships. Yet high-dimensional complex systems run up against the curse of dimensionality, a hurdle notably articulated by Bellman in the context of optimal control. Nevertheless, machine learning-based algorithms offer promising prospects for tackling PDEs [15].
Forecasts suggest that machine learning-driven PDE-solving methods will remain a focal point as deep learning evolves through methodological, theoretical, and algorithmic advancements [15,16]. Early attempts at solving differential equations employed simple neural network models such as Multi-Layer Perceptrons (MLPs) with few hidden layers [17]. Contemporary approaches harness optimization frameworks and automatic differentiation, exemplified by Berg and Nyström's unified deep neural network method for estimating PDE solutions [16,18,19]. Additionally, the envisioned potential of deep neural networks to construct interpretable hybrid Earth system models for Earth and climate sciences further underscores their significance [18].
Presently, the literature lacks a standardized nomenclature for integrating prior physical knowledge with deep learning. Terms like 'physics-informed,' 'physics-based,' 'physics-guided,' and 'theory-guided' are used interchangeably. Some researchers [20] introduced a comprehensive taxonomy, termed 'informed deep learning,' organized into three core conceptual stages: delineating the type of deep neural network used, representing physical knowledge, and integrating this information. Inspired by their framework, this exploration delves into Physics-Informed Neural Networks (PINNs), introduced in 2017, elucidating how neural network features are employed, how physical information is incorporated, and which physical problems have been addressed in the literature.

Physics-Informed Neural Networks Concept
Physics-Informed Neural Networks (PINNs) serve as a potent scientific machine learning tool tailored for solving problems governed by Partial Differential Equations (PDEs). The crux of PINNs lies in approximating PDE solutions by training neural networks to minimize a loss function that encapsulates critical elements: initial and boundary conditions across the space-time domain, alongside the PDE residual evaluated at selected collocation points within the domain. After training, these deep-learning networks yield estimated solutions at designated points within a differential equation's integration domain.
What sets PINNs apart is the incorporation of a residual network encoding the governing physics equations, the hallmark novelty of the approach. The training process of PINNs functions as an unsupervised strategy, eschewing the need for labeled data from prior simulations or experiments. Essentially, the PINN algorithm operates as a mesh-free technique, transforming the direct resolution of the governing equations into an optimization problem over a loss function. The method integrates the mathematical model into the neural network and augments the loss function with a residual term derived from the governing equation, which acts as a constraint that narrows the space of acceptable solutions.
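As a minimal sketch of this idea (using PyTorch, with a 1D heat equation $u_t = \kappa u_{xx}$ as an illustrative governing equation; the network size, diffusivity value, and sampling ranges are arbitrary choices for exposition, not prescriptions from the literature), the residual term can be assembled directly from the network output via automatic differentiation at randomly sampled collocation points:

```python
import torch
import torch.nn as nn

# Small fully connected network mapping (x, t) -> u(x, t).
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def heat_residual(x, t, kappa=0.1):
    """Residual of u_t - kappa * u_xx at the collocation points (x, t)."""
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    # Derivatives come from reverse-mode automatic differentiation,
    # not from any mesh-based discretization stencil.
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - kappa * u_xx

# Collocation points are sampled at random; no mesh is required.
x = torch.rand(100, 1) * 2 - 1      # x in [-1, 1]
t = torch.rand(100, 1)              # t in [0, 1]
loss_pde = heat_residual(x, t).pow(2).mean()  # residual term of the loss
```

Because the derivatives are obtained exactly from the computational graph, the residual can be penalized at arbitrary points of the domain, which is precisely what makes the method mesh-free.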
Unlike approaches reliant solely on data-driven solutions, PINNs prioritize the underlying PDE physics over mere data fitting via neural networks. This concept traces its roots to earlier research [21], which shed light on the potential of leveraging structured prior knowledge. Some researchers [22] leveraged Gaussian process regression to construct representations of linear operator functionals, effectively inferring solutions and providing uncertainty estimates for diverse physical problems, a concept later extended in [22]. The inception of PINNs in 2017 marked a milestone: a new class of data-driven solvers introduced through a comprehensive two-part article, subsequently amalgamated in 2019 [23], which elucidated the PINN approach's prowess in solving nonlinear PDEs such as the Schrödinger, Burgers, and convection-diffusion equations [24][25][26][27][28]. This innovation extends beyond forward problems, encompassing inverse problems where model parameters are learned from observable data.
The integration of prior knowledge into machine learning algorithms is not entirely novel. Early pioneers such as Dissanayake and Phan-Thien [29] could be considered among the initial forerunners of PINNs. Building upon the universal approximation results of the late 1980s, methodologies emerged in the early 1990s proposing neural network approximations for PDEs, such as constrained neural networks [30]. These early networks comprised two hidden layers with 3, 5, or 10 nodes per layer, using point collocation to approximate the L2 error on the domain's interior and boundary. The loss function was minimized with a quasi-Newton method, while gradients were evaluated using finite differences.

Components of PINN
Physics-informed neural networks (PINNs) offer a potent solution for tackling problems characterized by limited or noisy experimental data. These networks possess the unique capability to incorporate known data while adhering rigorously to specified physical laws encoded by complex nonlinear partial differential equations, thereby operating as a versatile tool within the realm of supervised learning [31]. In essence, PINNs excel at solving differential equations across a broad spectrum of formulations, presenting a robust approach to problems of the general form

$$\mathcal{F}\big(u(z); \gamma\big) = f(z), \quad z \in \Omega, \qquad (1)$$
$$\mathcal{B}\big(u(z)\big) = g(z), \quad z \in \partial\Omega,$$

defined on the domain $\Omega \subset \mathbb{R}^d$ with boundary $\partial\Omega$. In the spatial-temporal coordinate vector $z := [x_1, \ldots, x_{d-1}; t]$, $u$ symbolizes the unknown solution, $\gamma$ stands for the parameters associated with the physics, $f$ denotes the problem data, and $\mathcal{F}$ embodies the nonlinear differential operator. Moreover, treating the initial condition as a form of Dirichlet boundary condition within the spatial-temporal domain allows $\mathcal{B}$ to denote the operator for arbitrary initial or boundary conditions associated with the problem, while $g$ represents the boundary function. These boundary conditions may be of Dirichlet, Neumann, Robin, or periodic type.
The above equation can depict a myriad of physical systems, encompassing both forward and inverse problems. In forward problems, the objective is to ascertain the function u for each z, with the parameters γ specified. In inverse problems, γ must instead be determined from the available data. A comprehensive operator-based mathematical formulation of Eq. (1) can be explored in Mishra and Molinaro's work [32].
Within the PINN framework, the computational prediction of $u(z)$ is produced by a neural network (NN) parametrized by a set of parameters $\theta$, giving rise to an approximation $\hat{u}_{\theta}(z) \approx u(z)$, where $\hat{(\cdot)}_{\theta}$ signifies an NN approximation realized with parameters $\theta$.
Since forward and inverse problems are analyzed within the same framework, and PINNs are effective at tackling both, θ represents the vector of all unknown parameters: the weights of the neural network constituting the surrogate model and, in the inverse-problem scenario, the unknown physical parameters γ as well.
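In practice, this unification can be realized by registering the unknown physical parameters alongside the network weights in a single optimizer. A minimal sketch (PyTorch; the diffusivity name kappa, its initial value, and the network size are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Surrogate network u_theta(x, t).
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

# Unknown physical parameter gamma (here an illustrative diffusivity kappa),
# declared trainable so that theta = {network weights} plus {kappa}.
kappa = nn.Parameter(torch.tensor(0.5))

# One optimizer updates the network weights and the physical parameter jointly;
# the data-misfit term of the loss then drives kappa toward the value
# consistent with the observations.
optimizer = torch.optim.Adam(list(net.parameters()) + [kappa], lr=1e-3)
```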
Within this framework, the neural network must learn to approximate the differential equations by discerning the parameters θ that define it, accomplished by minimizing a weighted loss function composed of a term for the differential equation residual, $\mathcal{L}_{\mathcal{F}}$, a term for the boundary conditions, $\mathcal{L}_{\mathcal{B}}$, and, when measurements are available, a data term, $\mathcal{L}_{\text{data}}$, each suitably weighted:

$$\theta^{*} = \arg\min_{\theta}\,\big(\omega_{\mathcal{F}}\,\mathcal{L}_{\mathcal{F}}(\theta) + \omega_{\mathcal{B}}\,\mathcal{L}_{\mathcal{B}}(\theta) + \omega_{d}\,\mathcal{L}_{\text{data}}(\theta)\big).$$
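A hedged sketch of how the three weighted terms combine in code (PyTorch; the weight values are placeholders, since weight selection is itself an active research topic):

```python
import torch

# Placeholder weights for the residual, boundary, and data terms (problem-dependent).
w_f, w_b, w_d = 1.0, 10.0, 1.0

def pinn_loss(residual, u_pred_bc, g_bc, u_pred_data=None, u_data=None):
    """Weighted PINN loss: w_f*L_F + w_b*L_B (+ w_d*L_data when data exist)."""
    loss_f = residual.pow(2).mean()             # PDE residual at collocation points
    loss_b = (u_pred_bc - g_bc).pow(2).mean()   # initial/boundary condition misfit
    loss = w_f * loss_f + w_b * loss_b
    if u_pred_data is not None:                 # supervised term (e.g., inverse problems)
        loss = loss + w_d * (u_pred_data - u_data).pow(2).mean()
    return loss
```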
In essence, PINN embodies an unsupervised learning methodology when trained exclusively on physical equations and boundary conditions to address forward problems. However, in scenarios involving inverse problems, or when certain physical properties are inferred from potentially noisy data, PINN can seamlessly transition into a supervised learning approach.

Differential Equations
The inaugural vanilla PINN [23] was engineered to solve intricate nonlinear PDEs of the form $u_t + \mathcal{F}_x[u] = 0$, where $x$ represents a spatial coordinate vector, $t$ signifies the time coordinate, and $\mathcal{F}_x$ denotes a nonlinear differential operator acting on the spatial coordinates. Initially, the PINN architecture demonstrated its proficiency in handling both forward and inverse problems. Over the subsequent years, PINNs have expanded their application scope, venturing into solving a diverse array of equations encompassing ordinary differential equations (ODEs), partial differential equations (PDEs), fractional PDEs, integro-differential equations (IDEs), and stochastic differential equations (SDEs).
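For concreteness, the viscous Burgers' equation $u_t + u u_x - \nu u_{xx} = 0$, one of the benchmark problems of the original PINN paper, fits this template with $\mathcal{F}_x[u] = u u_x - \nu u_{xx}$. A sketch of its residual via automatic differentiation (PyTorch; the architecture is an arbitrary choice, and the viscosity $\nu = 0.01/\pi$ follows the commonly used benchmark setting):

```python
import torch
import torch.nn as nn

# Surrogate u_theta(x, t) for the viscous Burgers' equation.
net = nn.Sequential(nn.Linear(2, 20), nn.Tanh(),
                    nn.Linear(20, 20), nn.Tanh(),
                    nn.Linear(20, 1))

def burgers_residual(x, t, nu=0.01 / torch.pi):
    """Residual u_t + u*u_x - nu*u_xx at collocation points (x, t)."""
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)   # second derivative: differentiate u_x again
    return u_t + u * u_x - nu * u_xx
```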
This section endeavors to spotlight the trajectory of research advancements in addressing various equation types, classifying them by structure and outlining seminal contributions from the literature that harnessed PINNs for such problem domains. The exploration commences with PINN studies tackling ODEs, progressing to investigations of steady-state PDEs, including elliptic equations, steady-state diffusion, and the Eikonal equation. Subsequently, the examination delves into the Navier-Stokes equations, followed by an array of dynamic problems such as heat transport, advection-diffusion-reaction systems, hyperbolic equations, the Euler equations, and the quantum harmonic oscillator. Concluding this overview, the section delves into Bayesian problems associated with the previously addressed PDEs, shedding light on strategies to navigate uncertainties inherent in stochastic equations.

Application and Perspective
The preceding sections delved into the neural network aspect of the PINN framework and highlighted the range of equations addressed in the existing literature. This section begins by exploring the management of physical information within the PINN framework, examining how data and models intertwine to enhance efficacy. Subsequently, it scrutinizes real-world applications of PINNs, shedding light on software packages such as DeepXDE, NeuralPDE, NeuroDiffEq, and others, all of which emerged in 2019 and have fostered advancements in PINN design.
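As an indication of how these packages condense the PINN pipeline, the sketch below sets up a 1D diffusion problem in DeepXDE. It should be read as an assumption-laden illustration rather than a definitive recipe: the module layout (e.g., dde.icbc, the iterations keyword) varies across DeepXDE versions, and the initial condition, point counts, and network sizes are arbitrary choices.

```python
import deepxde as dde
import numpy as np

# PDE residual of u_t - u_xx = 0; column 0 of x is space, column 1 is time.
def pde(x, u):
    u_t = dde.grad.jacobian(u, x, i=0, j=1)
    u_xx = dde.grad.hessian(u, x, i=0, j=0)
    return u_t - u_xx

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

# Homogeneous Dirichlet boundary and an illustrative sinusoidal initial condition.
bc = dde.icbc.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(geomtime, lambda x: np.sin(np.pi * x[:, 0:1]),
                 lambda _, on_initial: on_initial)

data = dde.data.TimePDE(geomtime, pde, [bc, ic],
                        num_domain=2500, num_boundary=100, num_initial=160)
net = dde.nn.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=10000)
```

The package handles collocation-point sampling, loss assembly, and training internally, which is precisely the design burden the vanilla PINN formulation leaves to the user.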
The future trajectory of PINNs' theoretical and applied configurations remains uncertain. Here, we evaluate open findings from the literature, scrutinize the most contentious aspects of PINNs, identify unexplored realms, and delineate intersections with other disciplines.
Despite significant strides in augmenting PINN capabilities through published works, numerous unresolved issues persist. These span a broad spectrum, from theoretical considerations, such as convergence and stability, to implementation challenges, including boundary condition management, neural network design, general PINN architecture, and optimization aspects. While PINNs and other deep learning methods leveraging physics priors hold promise in solving high-dimensional PDEs prevalent in physics, engineering, and finance, they encounter hurdles in accurately approximating solutions compared to specialized numerical methods tailored to specific PDEs. In particular, PINNs may struggle to learn intricate physical phenomena, such as solutions exhibiting multi-scale, chaotic, or turbulent behavior.

Conclusion
This review serves as an in-depth exploration of the innovation process in the field of PINNs over the past four years, transcending the boundaries of a mere research survey. Raissi's seminal research [143,144], pioneering the PINN framework, initially focused on employing PINNs to solve established physical models. These groundbreaking papers propelled the PINN methodology into the spotlight, further substantiating its original conceptual framework. Across the analyzed studies, efforts were concentrated on tailoring PINNs through adjustments in activation functions, gradient optimization techniques, neural network architectures, and the structures of loss functions.
An intriguing extension of the original PINN concept involved utilizing minimal model information within the physical loss function, bypassing typical PDE equations, while concurrently embedding the validity of initial or boundary conditions directly into the NN structure. Fewer studies delved into alternatives to automatic differentiation or grappled with convergence issues [33]. A pivotal subset of publications aimed to elevate this progression by introducing comprehensive frameworks catering to various sub-types of physical problems or multi-physics systems [33]. The foundational contribution of the initial PINN articles lies in reviving the optimization of problems with physical constraints: first approximating the unknown function with a neural network, and subsequently expanding this approach into the hybrid data-and-equation-driven methodology of contemporary research. Prior studies employed diverse methods to approximate the unknown function, such as kernel approaches or using PDE functions as constraints in optimization problems [34]. Nonetheless, PINNs are inherently grounded in physical information, derived either from data point values, usually confined to initial or boundary data, or from collocation points enforcing compliance with the physical model equation.
This survey delves into the evolution of PINNs from their inception in the pioneering works and tracks the progression of integrating physical priors into neural networks. It scrutinizes PINNs as a collocation-based method for solving differential equations with neural networks, encompassing variants such as the variational PINN (VPINN), as well as soft and hard constraint enforcement: incorporating initial and boundary conditions into the loss function, or embedding them directly within the neural network structure. The survey meticulously dissects the PINN pipeline, examining each fundamental component: the neural network, the construction of loss functions based on physical models, and the feedback mechanism. It provides an overarching analysis of examples where the PINN methodology has been applied and offers insights into concrete applications of PINNs along with the available software packages. However, despite significant advancements, numerous opportunities for improvement persist, especially regarding unresolved theoretical questions. There remains untapped potential for optimizing PINN training and extending PINN capabilities to tackle multiple equations effectively.