Learnable activation functions in physics-informed neural networks for solving partial differential equations

Afrah Farea*, Mustafa Serdar Celebi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Physics-Informed Neural Networks (PINNs) have emerged as a promising approach for solving Partial Differential Equations (PDEs). However, they face challenges related to spectral bias (the tendency to learn low-frequency components while struggling with high-frequency features) and unstable convergence dynamics (mainly stemming from the multi-objective nature of the PINN loss function). These limitations impact their accuracy for solving problems involving rapid oscillations, sharp gradients, and complex boundary behaviors. We systematically investigate learnable activation functions as a solution to these challenges, comparing Multilayer Perceptrons (MLPs) using fixed and learnable activation functions against Kolmogorov-Arnold Networks (KANs) that employ learnable basis functions. Our evaluation spans diverse PDE types, including linear and non-linear wave problems, mixed-physics systems, and fluid dynamics. Using empirical Neural Tangent Kernel (NTK) analysis and Hessian eigenvalue decomposition, we assess spectral bias and convergence stability of the models. Our results reveal a trade-off between expressivity and training convergence stability. While learnable activation functions work well in simpler architectures, they encounter scalability issues in complex networks due to the higher functional dimensionality. Counterintuitively, we find that low spectral bias alone does not guarantee better accuracy, as functions with broader NTK eigenvalue spectra may exhibit convergence instability. We demonstrate that activation function selection remains inherently problem-specific, with different bases showing distinct advantages for particular PDE characteristics. We believe these insights will help in the design of more robust neural PDE solvers.
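The abstract's central idea, an activation function whose shape is itself trainable, can be illustrated with a minimal sketch. This is not the paper's implementation, only a toy example: a parameterized activation σ(x; a, b) = a·tanh(bx) whose shape parameters a and b are updated by plain gradient descent on a simple fitting objective (matching sin(x) on collocation points), by analogy with how such parameters would be trained jointly with network weights in a PINN.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: a learnable
# activation sigma(x; a, b) = a * tanh(b * x) whose shape parameters
# a and b are trained by gradient descent on a toy fitting objective.

x = np.linspace(-np.pi, np.pi, 64)   # collocation points
target = np.sin(x)                   # toy target function

def act(x, a, b):
    return a * np.tanh(b * x)

a, b = 1.0, 1.0          # learnable shape parameters (hypothetical init)
lr = 0.05                # gradient-descent step size

mse_init = np.mean((act(x, a, b) - target) ** 2)

for _ in range(500):
    err = act(x, a, b) - target
    t = np.tanh(b * x)
    # analytic gradients of the mean-squared error w.r.t. a and b
    grad_a = np.mean(2 * err * t)
    grad_b = np.mean(2 * err * a * (1 - t ** 2) * x)
    a -= lr * grad_a
    b -= lr * grad_b

mse_final = np.mean((act(x, a, b) - target) ** 2)
```

In a PINN, the fitting loss above would be replaced by the composite PDE-residual and boundary losses, and a and b would be optimized per neuron or per layer alongside the weights, which is the source of the added functional dimensionality the abstract identifies as a scalability concern.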

Original language: English
Article number: 109753
Journal: Computer Physics Communications
Volume: 315
Publication status: Published - Oct 2025

Bibliographical note

Publisher Copyright:
© 2025 Elsevier B.V.

Keywords

  • Kolmogorov-Arnold networks
  • Learnable activation function
  • Multilayer perceptrons
  • Partial differential equations
  • Physics informed neural networks
