
[WIP] Avoid side-effect: Undo the scaling of rho in the computePhi #3445

Open · wants to merge 2 commits into base: development
Conversation

@RemiLehe RemiLehe commented Oct 3, 2022

When calling computePhi, the array rho is currently modified in place (it is scaled by -1/epsilon_0).
This PR avoids that side-effect by undoing the scaling before returning.

@RemiLehe RemiLehe requested a review from ax3l October 3, 2022 01:40
@@ -317,6 +317,11 @@ computePhi (amrex::Vector<amrex::MultiFab*> const & rho,
         post_phi_calculation.value()(mlmg, lev);

     } // loop over lev(els)

+    // Undo the scaling of rho
+    for (int lev=0; lev<=finest_level; lev++) {
Review comment (Member):
This is not always necessary since the rho passed in is often a temporary. Perhaps add a flag to computePhi whether to reset rho and have it default to true. Then the calling routine can turn it off when not needed and save a little bit of time.

Along the same lines, the initial scaling done above is not always needed either, since sometimes rho is zero (for example in AddBoundaryField), and it could be skipped as well to save even more time.

@ax3l added labels on Oct 3, 2022: enhancement (New feature or request), cleaning (Clean code, improve readability), component: electrostatic (electrostatic solver), component: ABLASTR (components shared with other PIC codes)
@@ -161,7 +161,7 @@ computePhi (amrex::Vector<amrex::MultiFab*> const & rho,
// scale rho appropriately; also determine if rho is zero everywhere
amrex::Real max_norm_b = 0.0;
for (int lev=0; lev<=finest_level; lev++) {
-        rho[lev]->mult(-1._rt/PhysConst::ep0); // TODO: when do we "un-multiply" this? We need to document this side-effect!
+        rho[lev]->mult(-1._rt/PhysConst::ep0);
@ax3l (Member) commented Oct 3, 2022:
Proposal: let's remove this here, make rho an amrex::Vector<amrex::MultiFab const*> const & named source, and explicitly scale it before we call computePhi (and undo the scaling afterwards, in case we did not pass a temporary)?

Reply (Member):
The advantage of doing the scaling here is that it reduces code duplication for most use cases.

@ax3l ax3l self-assigned this Oct 3, 2022
RemiLehe commented Oct 3, 2022

Thanks for the feedback!
I updated this PR to include the -epsilon_0 factor directly in the definition of the operator that we apply to phi.

@RemiLehe RemiLehe changed the title Avoid side-effect: Undo the scaling of rho in the computePhi [WIP] Avoid side-effect: Undo the scaling of rho in the computePhi Oct 3, 2022
RemiLehe commented Oct 3, 2022

OK, that was a bad idea: it seems that the MLMG solver diverges in this case:

MLMG: Initial rhs               = 6.56305248e-05
MLMG: Initial residual (resid0) = 6.56305248e-05
MLMG: Iteration   1 Fine resid/bnorm = 5.799033105e+54

@ax3l ax3l requested a review from WeiqunZhang October 4, 2022 16:18
@ax3l ax3l assigned WeiqunZhang and unassigned ax3l Oct 4, 2022
@@ -226,12 +225,14 @@ computePhi (amrex::Vector<amrex::MultiFab*> const & rho,
// one of the axes of the grid, i.e. that only *one* of the
// components of `beta` is non-negligible. // we use this
#if defined(WARPX_DIM_RZ)
-    linop.setSigma({0._rt, 1._rt-beta_solver[1]*beta_solver[1]});
+    linop.setSigma({
+        -PhysConst::ep0,
Review comment (Member):
If you're intending to multiply through by ep0, this looks like a mistake - the first entry would still be 0, wouldn't it?

@WeiqunZhang (Member) commented:

AMReX-Codes/amrex#2986

@WeiqunZhang (Member) commented:

> OK, that was a bad idea: it seems that the MLMG solver diverges in this case:
>
> MLMG: Initial rhs               = 6.56305248e-05
> MLMG: Initial residual (resid0) = 6.56305248e-05
> MLMG: Iteration   1 Fine resid/bnorm = 5.799033105e+54

@RemiLehe What test is this?

@WeiqunZhang (Member) commented:

Also, we might need to adjust the absolute tolerance, because the solver compares it with the residual, and the residual has now been scaled by a factor.
