PACMANN: Smarter Point Placement for AI-Based PDE Solvers

A gradient-guided adaptive sampling method that repositions collocation points to cut computational cost and improve training stability, helping physics-informed neural networks handle complex, high-dimensional PDE simulations more efficiently and bringing scientific machine learning closer to practical engineering analysis and design workflows.

Why Engineers Should Care

Partial differential equations (PDEs) underpin much of engineering analysis—from aerodynamics and structural mechanics to heat transfer and fluid flow. Reliable numerical solutions are essential, but they can be computationally demanding for complex systems.

Physics-Informed Neural Networks (PINNs) have attracted attention as an alternative approach. By embedding physical laws into neural network training, PINNs can approximate PDE solutions without traditional meshing.

However, their performance depends strongly on where the governing equations are enforced in the domain. These locations, known as collocation points, play a central role in accuracy and stability. Choosing them well is not trivial, especially as dimensionality increases.
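To make the role of collocation points concrete, here is a minimal sketch (not from the paper) of a physics-informed loss: a one-parameter trial function `u_a(x) = a * sin(pi x)` stands in for a neural network, and the residual of a toy 1D Poisson equation `u''(x) = -pi^2 sin(pi x)` is penalized at a set of collocation points. The exact solution corresponds to `a = 1`.

```python
import numpy as np

# Toy illustration of a PINN-style physics loss (not the paper's code).
# The "network" is a one-parameter trial function u_a(x) = a * sin(pi x);
# the governing equation u''(x) = -pi^2 sin(pi x) is enforced only at
# the collocation points x_col.

def residual(a, x):
    # PDE residual r(x) = u''(x) + pi^2 sin(pi x)
    u_xx = -a * np.pi**2 * np.sin(np.pi * x)
    return u_xx + np.pi**2 * np.sin(np.pi * x)

def physics_loss(a, x):
    # Mean squared residual over the collocation points
    return np.mean(residual(a, x) ** 2)

x_col = np.random.default_rng(0).uniform(0.0, 1.0, size=100)
print(physics_loss(1.0, x_col))  # exact solution: residual vanishes
print(physics_loss(0.5, x_col))  # poor fit: large residual
```

Because the loss is evaluated only where collocation points sit, regions without points contribute nothing to training, which is why their placement matters so much.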

What Came Before: Adaptive Sampling’s Trade-Off

Prior research showed that adaptive placement of collocation points improves PINN accuracy. Methods such as Residual-based Adaptive Refinement (RAR) and Residual-based Adaptive Distribution (RAD) allocate more points where errors are high.

This strategy is effective but costly. Identifying high-error regions often requires evaluating the PDE residual at many candidate points. For high-dimensional problems, this step alone can become a bottleneck.

Other approaches introduced secondary neural networks or generative models to guide sampling. While these can improve accuracy, they also increase implementation complexity and computational overhead.

The Core Question

The authors ask a practical question relevant to many engineering users:

Can collocation points be adapted efficiently while keeping computational cost manageable in high-dimensional problems?

Their objective is not to redesign PINNs, but to provide a method that is simple to integrate, computationally modest, and effective across problem types.

PACMANN in a Nutshell

PACMANN (Point Adaptive Collocation Method for Artificial Neural Networks) takes a different view of adaptive sampling. Instead of repeatedly generating new points, it moves existing collocation points.

The process works as follows:

  1. Train the PINN for a set number of iterations.
  2. Pause training.
  3. Compute the gradient of the squared PDE residual with respect to each point’s location.
  4. Move points toward higher-residual regions using gradient-based optimizers such as Adam.
  5. Resume training and repeat periodically.

In effect, the method concentrates points where the model currently struggles.
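The repositioning step (steps 3 and 4 above) can be sketched as follows. This is an illustrative NumPy version, not the authors' code: a placeholder squared-residual function with a known analytic gradient stands in for the PDE residual of a partially trained PINN, where the gradient would instead come from the framework's automatic differentiation. A hand-rolled Adam update moves points uphill on the squared residual.

```python
import numpy as np

# Sketch of PACMANN-style point movement: gradient *ascent* on the
# squared residual with respect to each point's coordinate, using Adam.
# The residual below is a placeholder peaked at x = 0.5.

def sq_residual(x):
    return np.exp(-10.0 * (x - 0.5) ** 2)

def grad_sq_residual(x):
    # Analytic d/dx of the placeholder squared residual; a real PINN
    # would obtain this via autodiff through the network.
    return -20.0 * (x - 0.5) * sq_residual(x)

def move_points(x, steps=5, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad_sq_residual(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        m_hat = m / (1 - b1**t)
        v_hat = v / (1 - b2**t)
        x = x + lr * m_hat / (np.sqrt(v_hat) + eps)  # ascent, not descent
    return np.clip(x, 0.0, 1.0)  # keep points inside the domain

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 1.0, size=200)
x1 = move_points(x0)
print(sq_residual(x0).mean(), sq_residual(x1).mean())
```

After repositioning, the mean squared residual seen by the points rises, meaning the fixed budget of points now samples the regions where the model is currently worst; the number of inner steps (five here) and the step size mirror the hyperparameters the paper studies.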

Because this relies on gradients already available in PINN frameworks, PACMANN avoids large global searches or auxiliary networks. The authors emphasize its low overhead and ease of integration into existing pipelines.

What the Experiments Show

The study evaluates PACMANN on multiple forward and inverse problems, including:

  • 1D Burgers’ equation
  • 1D Allen–Cahn equation
  • 5D Poisson equation
  • 2D and 3D Navier–Stokes problems
  • Geometrically challenging domains such as a re-entrant corner and a perforated plate

Key observations reported by the authors:

  • For low-dimensional problems, PACMANN achieves accuracy–efficiency trade-offs comparable to state-of-the-art methods.
  • For several high-dimensional and low-regularity problems, PACMANN outperforms the compared adaptive methods in terms of the reported accuracy–cost balance.
  • The step size used when moving points has a strong influence on performance.
  • Around five movement steps often provide a good balance between accuracy and cost.
  • Among tested optimizers, Adam consistently performs well.

These findings are presented as empirical results for the tested cases rather than universal guarantees.

Implications for Engineering Practice

For engineers exploring PINNs in simulation workflows, PACMANN offers a pragmatic adaptive strategy. Its main appeal lies in:

  • Minimal modification to standard PINN training loops
  • No secondary networks
  • Controlled computational overhead

This makes it relevant for higher-dimensional problems where classical adaptive sampling becomes expensive.

The authors also note directions for further work, such as improved handling of boundary regions and studying behavior across different PDE classes.

A Measured Takeaway

PACMANN does not replace existing solvers, nor does it claim to solve all sampling challenges. What it demonstrates is that repositioning points using residual gradients can be an effective and economical adaptive strategy.

For the engineering community, this contributes a practical tool and a useful perspective on adaptive sampling in scientific machine learning.

Acknowledgment
Thanks to Coen Visser, Alexander Heinlein, and Bianca Giovanardi for their contributions. Readers interested in implementation details can consult the paper and code repository.

Reference
Visser, C., Heinlein, A., & Giovanardi, B. (2026). PACMANN: Point adaptive collocation method for artificial neural networks. Computer Methods in Applied Mechanics and Engineering, 452, 118723.

Code: https://github.com/CoenVisser/PACMANN
