# Greedy PDE Solvers with Shallow Neural Networks
Fast, transparent baselines for solving Poisson‑type PDEs with shallow neural‑network dictionaries using greedy methods. The current code targets a 1D boundary‑value problem on the interval $[0,\pi]$ with homogeneous Dirichlet boundary conditions and provides ready‑made plotting utilities.
## Highlights
- Orthogonal Greedy Algorithm (OGA) for a 1D Poisson problem.
- Two boundary strategies:
  - Penalized boundary via $\tfrac{1}{2\delta}\,\big(u(0)^2 + u(\pi)^2\big)$ in the objective.
  - Exact boundary via a fixed shape function $\psi(x)$ with $\psi(0)=\psi(\pi)=0$.
- Shallow dictionary atoms $g(x)=\psi(x)\,\sigma(wx+b)$, with activation options `relu1`, `relu2`, `relu3` (ReLU$^p$), `tanh`, and `sin` (see the short sketch below).
- Diagnostics out of the box: $L^2$/$L^1$ error curves (if an exact solution is provided), greedy correlation magnitude $\lvert R'(u_n)[g_n]\rvert$, and selected parameters $(w_n,b_n)$ over iterations.
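
For concreteness, here is a minimal sketch of what the activation options denote (illustrative only; the keys mirror the `activation` argument, and `relu1`/`relu2`/`relu3` mean $\sigma(t)=\max(t,0)^p$ with $p=1,2,3$):

```python
import numpy as np

# Illustrative definitions matching the `activation` options.
ACTIVATIONS = {
    "relu1": lambda t: np.maximum(t, 0.0),        # ReLU^1
    "relu2": lambda t: np.maximum(t, 0.0) ** 2,   # ReLU^2
    "relu3": lambda t: np.maximum(t, 0.0) ** 3,   # ReLU^3
    "tanh":  np.tanh,
    "sin":   np.sin,
}

t = np.linspace(-1.0, 1.0, 5)
print(ACTIVATIONS["relu3"](t))   # [0, 0, 0, 0.125, 1]
```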
## Installation
### Create a virtual environment (recommended)

macOS/Linux:

```bash
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
```
### Install dependencies

```bash
pip install -r requirements.txt
```
### `requirements.txt` (baseline)

Minimal runtime dependencies:

```text
numpy>=1.24
scipy>=1.10
matplotlib>=3.7
```

If you prefer pinned, tested versions:

```text
numpy==1.26.4
scipy==1.11.4
matplotlib==3.8.4
```

Optional development tools:

```text
black
ruff
jupyter
```
## Quick start (OGA, 1D Poisson)
```python
import numpy as np
import matplotlib.pyplot as plt

# adjust the import path to your project layout
from your_package.greedy import GreedyPDE1D_OGA_PenalizedTestPlots

if __name__ == "__main__":
    # Right-hand side f(x) = 1 and exact solution u*(x) = 0.5*x*(pi - x)
    f_rhs = lambda x: np.ones_like(x)
    u_star = lambda x: 0.5 * x * (np.pi - x)

    oga = GreedyPDE1D_OGA_PenalizedTestPlots(
        f=f_rhs,
        u_exact=u_star,
        N=5000,                 # quadrature grid size on [0, pi]
        delta=1e-5,             # boundary penalty (ignored if use_form=True)
        activation="relu3",     # ReLU^3 dictionary atoms
        w_vals=(-1.0, 1.0),     # slopes for atoms
        b_min=-np.pi, b_max=np.pi,
        Nb_grid=1000,           # coarse b-grid for argmax initialization
        use_form=True,          # exact boundary via psi(x) = x*(pi - x)
    )
    oga.solve(n_steps=300, verbose=True)

    # Diagnostics
    oga.plot_errors()           # L2 and L1 errors with reference slopes
    oga.plot_corr()             # |R'(u)[g_n]|
    oga.plot_selected_params()  # selected w_n and b_n per step
    plt.show()
```
### What you will see

- Error curves vs. iteration (with $n^{-1}$, $n^{-2}$, $n^{-1/2}$ reference lines).
- Correlation magnitude $\lvert R'(u_n)[g_n]\rvert$ (a greedy progress indicator).
- Selected dictionary parameters $(w_n, b_n)$ across iterations.
## API overview

### `GreedyPDE1D_OGA_PenalizedTestPlots`

A compact OGA solver with plotting helpers.
#### Constructor (key arguments)

- `f`: callable `f(x)` returning a `numpy.ndarray` of shape `(N,)`.
- `u_exact`: optional callable for an exact/reference solution, used for error plots.
- `N`: number of grid points on $[0,\pi]$ (default `2000`).
- `delta`: boundary penalty strength for the penalized boundary (smaller means harder enforcement).
- `activation`: one of `{"relu1", "relu2", "relu3", "tanh", "sin"}`.
- `w_vals`: iterable of slope values for the dictionary atoms (e.g., `(-1.0, 1.0)`).
- `b_min`, `b_max`: search interval for the bias `b`.
- `Nb_grid`: coarse grid size for initializing the argmax search in `b`.
- `use_form`: if `True`, uses $\psi(x)=x(\pi-x)$, which enforces $u=0$ at the boundary exactly; the penalty is then unnecessary.
- `argmax_tol`: tolerance for detecting repeated directions.
#### Main methods

- `solve(n_steps: int, verbose: bool = True)`: run `n_steps` greedy steps with a fully corrective update each time.
- `plot_errors()`: $L^2$/$L^1$ error plots vs. iteration (requires `u_exact`).
- `plot_corr()`: absolute greedy score $\lvert R'(u)[g_n]\rvert$ vs. iteration.
- `plot_selected_params()`: selected $(w, b)$ vs. iteration.
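
For comparison with the quick start (which uses the exact-boundary shape function), a penalized-boundary run might look like the sketch below. It reuses the hypothetical import path from the quick start and only the constructor arguments documented above; treat it as an illustration, not a guaranteed snippet of the API.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical import path -- adjust to your project layout.
from your_package.greedy import GreedyPDE1D_OGA_PenalizedTestPlots

# Same Poisson problem as the quick start, but with the boundary enforced
# via the penalty term instead of the shape function psi(x) = x*(pi - x).
oga_pen = GreedyPDE1D_OGA_PenalizedTestPlots(
    f=lambda x: np.ones_like(x),
    u_exact=lambda x: 0.5 * x * (np.pi - x),
    N=2000,               # default grid size on [0, pi]
    delta=1e-5,           # smaller delta -> harder boundary enforcement
    activation="relu3",
    w_vals=(-1.0, 1.0),
    b_min=-np.pi, b_max=np.pi,
    Nb_grid=1000,
    use_form=False,       # penalized boundary; delta is now active
)
oga_pen.solve(n_steps=100, verbose=False)
oga_pen.plot_errors()
plt.show()
```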
## Mathematical formulation
We solve a 1D Poisson problem on $\Omega=(0,\pi)$ with homogeneous Dirichlet boundary conditions. In penalized form, we minimize
$$
R_\delta(u)
= \tfrac12 \int_{0}^{\pi} |u'(x)|^2\,dx
- \int_{0}^{\pi} f(x)\,u(x)\,dx
+ \tfrac{1}{2\delta}\big(u(0)^2 + u(\pi)^2\big).
$$
Alternatively, we enforce the boundary exactly by writing $u(x)=\psi(x)\,\hat u(x)$ with a fixed $\psi\in H_0^1(\Omega)$ (e.g., $\psi(x)=x(\pi-x)$), and minimizing the unpenalized energy over $\hat u$.
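
As a concrete, purely illustrative discretization of this functional (not the repository's internal quadrature), the penalized energy can be approximated on a uniform grid with finite differences and the trapezoid rule:

```python
import numpy as np

def penalized_energy(u_vals, f_vals, x, delta):
    """Approximate R_delta(u) on a uniform grid x over [0, pi]:
    1/2 * int |u'|^2 dx - int f*u dx + 1/(2*delta) * (u(0)^2 + u(pi)^2)."""
    du = np.gradient(u_vals, x)                               # u'(x) by finite differences
    energy = 0.5 * np.trapz(du**2, x)                         # stiffness term
    energy -= np.trapz(f_vals * u_vals, x)                    # load term
    energy += (u_vals[0]**2 + u_vals[-1]**2) / (2.0 * delta)  # boundary penalty
    return energy

x = np.linspace(0.0, np.pi, 2001)
u_star = 0.5 * x * (np.pi - x)                                # exact solution for f = 1
print(penalized_energy(u_star, np.ones_like(x), x, delta=1e-5))
```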
### Dictionary and greedy iterate

Atoms are

$$
g_{w,b}(x) = \psi(x)\,\sigma(wx+b),
$$

and the $n$-th iterate is

$$
u_n(x) = \sum_{j=1}^{n} c^{(n)}_j\, g_{w_j,b_j}(x).
$$
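
A minimal sketch of evaluating one such atom, assuming the `relu3` activation and the default shape function $\psi(x)=x(\pi-x)$ (function names here are illustrative, not the repository's API):

```python
import numpy as np

def psi(x):
    """Shape function vanishing at x = 0 and x = pi."""
    return x * (np.pi - x)

def relu_p(t, p=3):
    """ReLU^p activation: max(t, 0) ** p."""
    return np.maximum(t, 0.0) ** p

def atom(x, w, b, p=3):
    """Dictionary atom g_{w,b}(x) = psi(x) * sigma(w*x + b)."""
    return psi(x) * relu_p(w * x + b, p)

x = np.linspace(0.0, np.pi, 1001)
g = atom(x, w=1.0, b=-0.5)   # one atom; u_n is a linear combination of such atoms
```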
### Orthogonal Greedy step

Given $u_{n-1}$, choose $(w_n,b_n)$ and a sign $s_n\in\{\pm1\}$ maximizing the directional decrease

$$
\big\lvert R'(u_{n-1})[s_n\,g_{w,b}] \big\rvert.
$$

Include $g_n := s_n\,g_{w_n,b_n}$ and perform a fully corrective update by solving

$$
A^{(n)} \mathbf{c}^{(n)} = \mathbf{b}^{(n)},
$$

with

$$
A^{(n)}_{ij}=\int_0^{\pi} g'_i(x)\,g'_j(x)\,dx
+\frac{1}{\delta}\big(g_i(0)\,g_j(0)+g_i(\pi)\,g_j(\pi)\big),
\qquad
b^{(n)}_i=\int_0^{\pi} f(x)\,g_i(x)\,dx.
$$

(For the exact-boundary variant with $\psi\in H_0^1$, drop the boundary terms.) Then set

$$
u_n(x)=\sum_{j=1}^{n} c^{(n)}_j\,g_j(x).
$$
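
A schematic version of this fully corrective solve, using the same illustrative discretization as the energy sketch above (trapezoid quadrature and finite-difference derivatives; the repository's implementation may assemble these quantities differently):

```python
import numpy as np

def corrective_update(atoms, f_vals, x, delta=None):
    """Solve A c = b for the currently selected atoms.

    atoms  : list of arrays, each holding the values of one g_j on the grid x
    f_vals : right-hand side f evaluated on x
    delta  : boundary penalty; pass None for the exact-boundary variant
    """
    n = len(atoms)
    d_atoms = [np.gradient(g, x) for g in atoms]              # g_j'(x)
    A = np.empty((n, n))
    b = np.empty(n)
    for i in range(n):
        b[i] = np.trapz(f_vals * atoms[i], x)                 # int f g_i dx
        for j in range(n):
            A[i, j] = np.trapz(d_atoms[i] * d_atoms[j], x)    # int g_i' g_j' dx
            if delta is not None:                             # penalized boundary terms
                A[i, j] += (atoms[i][0] * atoms[j][0]
                            + atoms[i][-1] * atoms[j][-1]) / delta
    c = np.linalg.solve(A, b)
    return c   # u_n = sum_j c[j] * atoms[j]
```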
### Diagnostics

We track $R(u_n)$, $\lvert R'(u_n)[g_n]\rvert$, and the errors $\lVert u_n-u^*\rVert_{L^2(0,\pi)}$ and $\lVert u_n-u^*\rVert_{L^1(0,\pi)}$ when a reference $u^*$ is available.
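
On a quadrature grid these error norms reduce to two one-liners, for instance (a sketch, not the code behind `plot_errors()`):

```python
import numpy as np

def l2_l1_errors(u_n_vals, u_star_vals, x):
    """Discrete L2 and L1 errors of u_n against a reference u* on grid x."""
    diff = u_n_vals - u_star_vals
    err_l2 = np.sqrt(np.trapz(diff**2, x))    # || u_n - u* ||_{L^2}
    err_l1 = np.trapz(np.abs(diff), x)        # || u_n - u* ||_{L^1}
    return err_l2, err_l1
```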
## Tips and gotchas

- Bias grid density (`Nb_grid`): a denser coarse grid gives a more robust initializer for the local optimizer.
- Penalty weight (`delta`): smaller values enforce the boundary harder but can stiffen the system; if `use_form=True`, the penalty is typically unnecessary.
- Grid size (`N`): larger `N` improves quadrature accuracy at a higher cost.
- Activations: `relu3` often works well for this toy problem; `tanh` and `sin` are available for experimentation.
## Citation
If you use this repository in academic work, please cite the following greedy‑training paper, which motivates greedy algorithms for neural networks and PDE applications:
```bibtex
@article{Siegel2023Greedy,
  author  = {Jonathan W. Siegel and Qingguo Hong and Xianlin Jin and Wenrui Hao and Jinchao Xu},
  title   = {Greedy Training Algorithms for Neural Networks and Applications to PDEs},
  journal = {arXiv preprint arXiv:2107.04466},
  year    = {2023},
  note    = {Submitted to Journal of Computational Physics},
  url     = {https://arxiv.org/pdf/2107.04466}
}
```
## Acknowledgments
Feedback and pull requests are welcome. If something is unclear or you would like an additional example (e.g., a different right‑hand side or activation), please open an issue.