# RGA-PDE (1D) — Greedy solvers for Poisson with boundary penalty
This project contains clean, well-commented Python implementations of greedy solvers for the 1-D Poisson problem with a penalized boundary term, together with Jupyter notebooks that reproduce the debug experiments and plots used in the write-up.

The core idea: build an approximation $u_k$ as a linear combination of dictionary atoms $g_{w,b}(x)=\sigma(wx+b)^2$ (with $\sigma$ the ReLU), choose an atom greedily, and update either with the classical Relaxed Greedy Algorithm (RGA) step or with an exact line search along the chosen atom. The notebooks visualize gradient residuals, variation-norm budgets, $L^2$ errors, and the selected $w$/$b$ traces over iterations (3×2 summary figure).
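Concretely, the iterates and the penalized energy have roughly the shape below. This is a sketch only: the exact scaling of the boundary penalty in the code is controlled by `delta` and may differ by constants.

$$
u_k(x) = \sum_{j=1}^{k} c_j\, g_{w_j,b_j}(x),
\qquad
g_{w,b}(x) = \sigma(wx+b)^2,
\qquad
\mathcal R(u) = \tfrac12\int_0^\pi |u'(x)|^2\,dx \;-\; \int_0^\pi f(x)\,u(x)\,dx \;+\; \tfrac{1}{2\delta}\bigl(u(0)^2 + u(\pi)^2\bigr).
$$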
## What's in this repo
Python modules (each ships with a small "dummy main" so you can run them directly):

- `RGA_debug.py` — RGA (relaxed greedy) for 1-D Poisson with extensive logging and the 3×2 summary plot. Atoms may be $H_\delta$-normalized; the per-iteration step uses a schedule $\alpha_k$ and a budget schedule $M_k$.
- `RGA_linesearch.py` — greedy line-search variant: at iteration $k$, pick $g_k$ and take the exact one-dimensional minimizer $\lambda_k = \mathcal R'(u_k)[g_k]/\|g_k\|^2$ (see the derivation sketch after this list). Same logging/plots as the debug class.
- `RGA_1d_exact_argmax.py` — a minimal solver that computes the exact arg-max in $b$ for the ReLU² atom (using interval partitioning by hinge locations) and demonstrates the effect on convergence.
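For reference, the line-search step is the standard exact minimizer of a quadratic energy along a single atom. A sketch, assuming $\mathcal R(u)=\tfrac12 a(u,u)-\ell(u)$ with $a$ the penalized bilinear form (the overall sign depends on the convention the code uses for $\mathcal R'$):

$$
0 = \frac{d}{d\lambda}\,\mathcal R(u_k+\lambda g_k)
  = a(u_k,g_k) - \ell(g_k) + \lambda\, a(g_k,g_k)
\quad\Longrightarrow\quad
\lambda_k = -\frac{a(u_k,g_k)-\ell(g_k)}{a(g_k,g_k)}
          = -\frac{\mathcal R'(u_k)[g_k]}{\|g_k\|_{H_\delta}^2}.
$$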
Each file has:

- the solver class,
- an embedded `if __name__ == "__main__":` block with dummy parameters, so `python <file>.py` runs a short illustrative demo and pops up the 3×2 summary figure.
Notebooks (for the paper's figures and ablations):

- `RGA_debug.ipynb` — runs the S0–S5 experiments (fixed budgets, normalization on/off, larger dictionaries, different step schedules) and renders the 3×2 plots.
- `RGA_linesearch_debug.ipynb` — runs S6 (line search) with optional clipping of each step and shows how/why the error eventually drops.
- `RGA_1d_exact_argmax.ipynb` — uses the exact arg-max implementation to compare against coarse-grid search plus local refinement.

The notebooks call the classes from the `.py` files and keep all plotting/printing on the notebook side, so code and experiments are cleanly separated.
## Installation
Use a local virtual environment and the provided `requirements.txt`.

```bash
# create and activate a venv (Linux/macOS)
python -m venv .venv
source .venv/bin/activate

# on Windows PowerShell
# python -m venv .venv
# .venv\Scripts\Activate.ps1

pip install -r requirements.txt
```

That's it. The requirements are light: `numpy`, `scipy`, `pandas`, `matplotlib` (and `ipykernel` for the notebooks).
## Quick start
### 1) Run a dummy demo from a module

Each module has a tiny main. For example, the classical RGA debug run:

```bash
python RGA_debug.py
```

This will:

- instantiate the solver with dummy parameters (right-hand side $f(x)=1$),
- run a short number of iterations,
- display the 3×2 summary: gradient residual, budget vs. variation, $L^2$ error with reference slopes, $u_k$ vs. $u^*$, and the traces of the selected $w$ and $b$.
Similarly, try the line-search variant:

```bash
python RGA_linesearch.py
```
### 2) Reproduce the paper's debug experiments in notebooks

Open the notebooks in VS Code or Jupyter and run all cells:

- `RGA_debug.ipynb` (S0–S5): fixed budget, normalization, large dictionary, variable budget, $\alpha_k\sim 1/\sqrt{k}$, etc.
- `RGA_linesearch_debug.ipynb` (S6): exact line search with optional per-step clipping.

Each experiment cell returns a `pandas.DataFrame` log and renders the 3×2 plot. The plots match the ones in the write-up.
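If you want to keep a log around for later comparison, here is a minimal sketch of persisting it. The column names and filename below are purely illustrative; in practice `df` would be the `pandas.DataFrame` returned by an experiment cell as described above.

```python
import pandas as pd

# Stand-in for the per-iteration log an experiment cell returns (hypothetical columns).
df = pd.DataFrame({"iter": [1, 2, 3], "l2_err": [0.5, 0.3, 0.2]})

df.to_csv("S0_fixed_budget_log.csv", index=False)  # illustrative filename
print(df.tail())                                   # inspect the last few logged iterations
```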
## Typical usage patterns (import in your own notebook)
```python
import numpy as np
from RGA_debug import RGA1D, alpha_schedule, M_schedule

f_rhs  = lambda x: np.ones_like(x)        # right-hand side f(x) = 1
u_star = lambda x: 0.5 * x * (np.pi - x)  # exact solution, used for diagnostics

solver = RGA1D(
    f_fun=f_rhs,
    u_exact=u_star,
    norm_atoms=True,
    alpha_fun=alpha_schedule("warm"),
    M_fun=M_schedule(M0=10, Mmax=10, kfull=4000, mode="linear"),
    w_vals=(-1.0, 1.0), Nb=400, delta=1e-2, N=1501,
)
solver.run(iters=25_000, label="MyRun", prt=1000)  # prints logs and shows the 3×2 plot
```
Line search (same idea, but the step size is chosen exactly along the atom):

```python
from RGA_linesearch import GreedyLineSearch1D

solver = GreedyLineSearch1D(
    f_fun=f_rhs,
    u_exact=u_star,
    w_vals=(-1.0, 1.0), Nb=400, delta=1e-2, N=2001,
    M_cap=None,  # or a float to clip each |lambda_k|
)
solver.run(iters=100_000, label="S6_LineSearch", prt=1000)
```
## Changing the PDE

- Right-hand side: set `f_fun` to any callable, e.g. `lambda x: np.sin(x)` (see the sketch after this list).
- Exact solution (for diagnostics): set `u_exact` accordingly, or pass `None` to disable the $L^2$ panel.
- Boundary penalty: `delta` controls the strength of the penalization of `u(0)` and `u(pi)`.
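A minimal sketch of swapping in a different right-hand side, reusing the constructor arguments from the quick-start example above. For $-u'' = \sin x$ with homogeneous Dirichlet data on $(0,\pi)$ the unpenalized exact solution is $u^*(x)=\sin x$; with the boundary penalty it serves only as a reference for the $L^2$ panel.

```python
import numpy as np
from RGA_debug import RGA1D, alpha_schedule, M_schedule

solver = RGA1D(
    f_fun=lambda x: np.sin(x),    # new right-hand side
    u_exact=lambda x: np.sin(x),  # reference solution, or None to drop the L^2 panel
    norm_atoms=True,
    alpha_fun=alpha_schedule("warm"),
    M_fun=M_schedule(M0=10, Mmax=10, kfull=4000, mode="linear"),
    w_vals=(-1.0, 1.0), Nb=400, delta=1e-2, N=1501,
)
solver.run(iters=5_000, label="SinRHS", prt=500)
```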
## The 3×2 summary plot (what you see)
- Gradient residual: $|\langle g_k, \nabla\mathcal{R}(u_k)\rangle|$ on log–log axes.
- Variation vs. budget: the accumulated variation proxy $k_1(u_k)$ against the budget $M_k$.
- $L^2$ convergence: $\|u_k-u^*\|_{L^2}$ with reference slopes $n^{-1}$, $n^{-1/2}$, $n^{-1/4}$.
- Exact vs. approximation: $u_k$ and $u^*$.
- $w$ trace: selected slopes over time (typically toggling between $\pm 1$).
- $b$ trace: selected hinge locations over time.
## Tips and troubleshooting
- Quadrature vs. iterations: for theory-faithful rates, grow the grid `N` in line with the iteration count; too coarse a quadrature can cause early plateaus in the $L^2$ panel.
- Large dictionaries: add more `w` values (e.g. `(-5, …, -1, 1, …, 5)`) and increase `Nb` if you want the arg-max to discover different scales/hinges (see the sketch after this list).
- Performance: the arg-max uses a coarse scan in `b` plus a local L-BFGS refinement; `Nb` controls the coarse pass. Larger `Nb` means more accuracy but slower iterations.
- No exact solution: pass `u_exact=None` to suppress the $L^2$ panel.
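A hedged sketch of a larger-dictionary run; the concrete slope values and grid sizes below are illustrative, and only the constructor keywords already shown above are assumed:

```python
solver = RGA1D(
    f_fun=f_rhs,
    u_exact=u_star,
    norm_atoms=True,
    alpha_fun=alpha_schedule("warm"),
    M_fun=M_schedule(M0=10, Mmax=10, kfull=4000, mode="linear"),
    w_vals=(-5.0, -3.0, -1.0, 1.0, 3.0, 5.0),  # more slopes than the default (-1, 1)
    Nb=1200,                                   # finer coarse b-scan before the local refinement
    delta=1e-2, N=3001,
)
```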
## License and citation
Add your preferred license here. If you use this code in a publication, please cite your paper and reference the section where the greedy objective and boundary penalty are defined.
## Acknowledgments
Thanks to the theoretical RGA framework this project builds on. The implementations aim to mirror the assumptions of the proof (atom normalization, variation budget) and to make deviations (e.g. line search without budget clipping) explicit in the experiments.
## Updated project layout (current)

```text
RGA-PDE/
├── requirements.txt
├── RGA_debug.py                # RGA1D (relaxed greedy, argmax over (w,b), 3×2 plots)
├── RGA_1d_exact_argmax.py      # GreedyPDE1D (exact argmax), 1D Poisson -u'' = 1
├── RGA_linesearch.py           # GreedyLineSearch1D (line-search on lambda), same plots
├── RGA_exact_debug.ipynb       # experiments for the exact-argmax solver
├── RGA_linesearch_debug.ipynb  # experiments for the line-search solver
├── pde_rga_2d_debug.ipynb      # optional 2D scratchpad
├── .venv/                      # local virtual environment (optional)
└── __pycache__/                # Python bytecode cache
```
## Run the scripts
```bash
# create & activate a venv
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt

# relaxed greedy (argmax over (w, b))
python RGA_debug.py

# exact argmax solver
python RGA_1d_exact_argmax.py

# greedy with line search
python RGA_linesearch.py
```
Each script has a small `main` with dummy parameters and renders the unified 3×2 summary figure. In notebooks, you can import the classes directly:

```python
from RGA_debug import RGA1D, alpha_schedule, M_schedule
from RGA_1d_exact_argmax import GreedyPDE1D
from RGA_linesearch import GreedyLineSearch1D, run_case_ls
```
Tip: for theory-aligned runs, pick the quadrature size `N` large enough relative to the iteration count (often proportional to `iters^2` in the proofs) and set `Nb` so the `b`-grid is fine enough for the local refinement to start from a good point.