Compute emf #642

Draft · AstroKriel wants to merge 37 commits into base: development

Conversation

@AstroKriel (Collaborator) commented Jun 7, 2024

Description

Added infrastructure to compute EMF components on cell edges.

Still in progress:

  • Close the compute-loop and update magnetic field components on each face.
  • Post a flow chart of steps performed in ComputeEMF().
  • Implement an Alfvén wave test problem.

Related issues

#526

Checklist

  • I have added a description (see above).
  • I have added a link to any related issues (see above).
  • I have read the Contributing Guide.
  • I have added tests for any new physics that this PR adds to the code.
  • I have tested this PR on my local computer and all tests pass.
  • I have manually triggered the GPU tests with the magic comment /azp run.
  • I have requested a reviewer for this PR.

auto &stateNew_cc = state_inter_cc_;

auto const &stateOld_fc = state_old_fc_tmp;
auto &stateNew_fc = state_inter_fc_;

Check notice (Code scanning / CodeQL, Note): Unused local variable. Variable stateNew_fc is not used.
auto const &stateInter = state_inter_cc_;
auto &stateFinal = state_new_cc_[lev];
auto [fluxArrays, faceVel] = computeHydroFluxes(stateInter, ncompHydro_, lev);
auto [fluxArrays, faceVel, fspds] = computeHydroFluxes(stateInter, ncompHydro_, lev);

Check notice (Code scanning / CodeQL, Note): Unused local variable. Variable fspds is not used.
auto const &stateOld_cc = state_old_cc_tmp;
auto &stateNew_cc = state_inter_cc_;

auto const &stateOld_fc = state_old_fc_tmp;

Check notice (Code scanning / CodeQL, Note): Unused local variable. Variable stateOld_fc is not used.
Collaborator:

When MHD is disabled, you should call amrex::ignore_unused(stateOld_fc) to suppress this warning.
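A minimal sketch of that suggestion (amrex::ignore_unused is a variadic no-op that marks its arguments as used; the compile-time guard and its flag name are illustrative, not Quokka's actual build option):

auto const &stateOld_fc = state_old_fc_tmp;
#ifndef QUOKKA_ENABLE_MHD // hypothetical build flag guarding the MHD code path
amrex::ignore_unused(stateOld_fc); // suppresses the unused-variable warning when MHD is disabled
#endif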

auto const &stateOld_fc = state_old_fc_tmp;
auto &stateNew_fc = state_inter_fc_;

auto [fluxArrays, faceVel, fspds] = computeHydroFluxes(stateOld_cc, ncompHydro_, lev);

Check notice (Code scanning / CodeQL, Note): Unused local variable. Variable fspds is not used.
// bx = 0 for testing purposes
// TODO(Neco): pass correct bx value once magnetic fields are enabled
F_canonical = quokka::Riemann::HLLD<problem_t, nscalars_, nmscalars_, nvar_>(sL, sR, gamma_, 0.0);
auto [F_canonical, fspd_m, fspd_p] = quokka::Riemann::HLLD<problem_t, nscalars_, nmscalars_, nvar_>(sL, sR, gamma_, 0.0);

Check notice (Code scanning / CodeQL, Note): Unused local variable. Variable F_canonical is not used.
@BenWibking marked this pull request as draft June 7, 2024 15:47

@BenWibking (Collaborator):

I've marked this as a draft PR since you've indicated in the description that there are a few items you're still working on. Ping me and @markkrumholz when it's ready.

@BenWibking (Collaborator):

Just FYI: I've added #526 to "Related Issues" in the PR description.

@BenWibking (Collaborator) left a review:

Overall, it looks like a good start.

My main comment is that the variable names are excessively short and unreadable. There are also several unnecessary variable renamings in existing code in RadhydroSimulation that should (IMO) be avoided.

Just FYI, it will be necessary to save the wavespeeds for LLF when MHD is used, because LLF is always the fallback solver when first-order flux correction (FOFC) is activated. Some additional changes will be necessary to make the EMF update consistent when FOFC is activated.

@@ -1550,13 +1602,13 @@ void RadhydroSimulation<problem_t>::hydroFOFluxFunction(amrex::MultiFab const &p
// donor-cell reconstruction
HydroSystem<problem_t>::template ReconstructStatesConstant<DIR>(primVar, leftState, rightState, ng_reconstruct, nvars);
// LLF solver
HydroSystem<problem_t>::template ComputeFluxes<RiemannSolver::LLF, DIR>(flux, faceVel, leftState, rightState, primVar, artificialViscosityK_);
HydroSystem<problem_t>::template ComputeFluxes<RiemannSolver::LLF, DIR>(flux, faceVel, leftState, rightState, primVar, artificialViscosityK_, nullptr);
Collaborator:

It is actually necessary to save the wavespeeds from the LLF Riemann solver -- this is used whenever first-order flux correction is activated, regardless of whether MHD is used or not.
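A sketch of what that might look like, passing the wavespeed MultiFab instead of the nullptr default (fspds is assumed to be the std::array of face-centered MultiFabs introduced elsewhere in this PR, and idim the integer index corresponding to DIR):

// save the LLF signal speeds so FOFC can reuse them
HydroSystem<problem_t>::template ComputeFluxes<RiemannSolver::LLF, DIR>(flux, faceVel, leftState, rightState, primVar, artificialViscosityK_, &fspds[idim]);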

@@ -96,6 +96,8 @@ AMREX_FORCE_INLINE AMREX_GPU_DEVICE auto HLLD(quokka::HydroState<N_scalars, N_ms
const double cfs_R = FastMagnetoSonicSpeed(gamma, sR, bx);
spds[0] = std::min(sL.u - cfs_L, sR.u - cfs_R);
spds[4] = std::max(sL.u + cfs_L, sR.u + cfs_R);
const double fspd_m = spds[0];
const double fspd_p = spds[4];
Collaborator:

I would strongly prefer to avoid Fortran-style four-character variable names. In my opinion, there is no reason in the year 2024 not to use descriptive variable names made of full words.

for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) {
auto ba_face = amrex::convert(ba, amrex::IntVect::TheDimensionVector(idim));
auto ba_fc = amrex::convert(ba_cc, amrex::IntVect::TheDimensionVector(idim));
Collaborator:

Is it necessary to rename ba_face -> ba_fc? ba_face is readable, and ba_fc shortens "face" to "fc" for no good reason.

rightState[idim] = amrex::MultiFab(ba_face, dm, nvars, reconstructGhost);
flux[idim] = amrex::MultiFab(ba_face, dm, nvars, 0);
facevel[idim] = amrex::MultiFab(ba_face, dm, 1, 0);
auto ba_fc = amrex::convert(ba_cc, amrex::IntVect::TheDimensionVector(idim));
Collaborator:

Changing ba_face to ba_fc makes the variable name less understandable. There is no good reason for this change.

@AstroKriel (Collaborator, Author):

I had only changed it so that the variable naming was consistent across the codebase, but I am happy to change it back if you like.

@BenWibking (Collaborator) commented Jun 28, 2024:

Ah, okay. Making it consistent is a good reason.

@@ -116,7 +118,8 @@ template <typename problem_t> class HydroSystem : public HyperbolicSystem<proble

template <RiemannSolver RIEMANN, FluxDir DIR>
static void ComputeFluxes(amrex::MultiFab &x1Flux_mf, amrex::MultiFab &x1FaceVel_mf, amrex::MultiFab const &x1LeftState_mf,
amrex::MultiFab const &x1RightState_mf, amrex::MultiFab const &primVar_mf, amrex::Real K_visc);
amrex::MultiFab const &x1RightState_mf, amrex::MultiFab const &primVar_mf, amrex::Real K_visc,
amrex::MultiFab *x1FSpds_mf = nullptr);
Collaborator:

It is necessary to save the wavespeeds for LLF when MHD is used.

const int reconstructionOrder)
{
if (reconstructionOrder == 3) {
switch (dir) {
Collaborator:

This switch statement is not necessary if you make dir a template parameter.
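A self-contained sketch of the suggested pattern (illustrative names, not the PR's actual functions): the direction becomes a template parameter resolved at compile time, so no runtime switch is needed at the call site.

enum class FluxDir { X1, X2, X3 };

template <FluxDir DIR> void reconstructStates(/* ... */)
{
	// the branch is resolved at compile time; dead branches are discarded
	if constexpr (DIR == FluxDir::X1) {
		// reconstruct along x1
	} else if constexpr (DIR == FluxDir::X2) {
		// reconstruct along x2
	} else {
		// reconstruct along x3
	}
}

// call sites instantiate the direction explicitly:
// reconstructStates<FluxDir::X1>(/* ... */);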

break;
}
} else if (reconstructionOrder == 2) {
switch (dir) {
Collaborator:

This switch statement is not necessary. Make dir a template parameter instead.

break;
}
} else if (reconstructionOrder == 1) {
switch (dir) {
Collaborator:

This switch statement is not necessary. Make dir a template parameter instead.

iquad1 = (iface == 0) ? 3 : 2;
}
ec_fabs_Ui_q[icomp][iquad0].atomicAdd(ec_fabs_U_ieside[0], 0, 0, 1);
ec_fabs_Ui_q[icomp][iquad1].atomicAdd(ec_fabs_U_ieside[1], 0, 0, 1);
Collaborator:

Why is atomicAdd necessary here?

void MHDSystem<problem_t>::ComputeEMF(std::array<std::array<amrex::MultiFab, 2>, AMREX_SPACEDIM> &ec_mf_emf_comps, amrex::MultiFab const &cc_mf_cVars,
std::array<amrex::MultiFab, AMREX_SPACEDIM> const &fcx_mf_cVars,
std::array<amrex::MultiFab, AMREX_SPACEDIM> const &fcx_mf_fspds, const int nghost_fc)
{
Collaborator:

There are a lot of very small (one-line or similar) GPU kernels in this function. This will have a significant negative performance impact. Once test problems are working, I would strongly recommend seeing whether the kernels in this function can be combined into a much smaller number of ParallelFors.
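An illustrative sketch of the fusion idea (the Array4 names here are hypothetical placeholders, not the PR's actual variables): instead of launching one tiny ParallelFor per arithmetic step, do all the per-edge work inside a single lambda.

// before: several one-line kernels, each paying its own launch overhead
// after: one fused kernel over the edge-centered box bx
amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE(int i, int j, int k) {
	const amrex::Real u_avg = 0.5 * (u_side0(i, j, k) + u_side1(i, j, k)); // was kernel 1
	const amrex::Real b_avg = 0.5 * (b_side0(i, j, k) + b_side1(i, j, k)); // was kernel 2
	emf(i, j, k) = u_avg * b_avg;                                          // was kernel 3
});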

emf_comps_ec[idim][1].define(
amrex::convert(ba_cc, amrex::IntVect::TheDimensionVector(idim) + amrex::IntVect::TheDimensionVector((idim + 2) % 3)), dm, 1,
nghost_fc_);
fspds[idim].FillBoundary(geom[lev].periodicity());
Collaborator:

Just a note for the future: this will have to be modified for AMR to use FillPatchTwoLevels instead (since the wavespeeds may be at a coarse-fine boundary).

nghost_fc_);
fspds[idim].FillBoundary(geom[lev].periodicity());
}
MHDSystem<problem_t>::ComputeEMF(emf_comps_ec, stateOld_cc, stateOld_fc, fspds, nghost_fc_);
Collaborator:

WeightedSync should be called here (#526).

@@ -1212,8 +1258,8 @@ auto RadhydroSimulation<problem_t>::advanceHydroAtLevel(amrex::MultiFab &state_o
redoFlag.setVal(quokka::redoFlag::none);

@BenWibking (Collaborator) commented Jun 10, 2024:

It looks like the RK stage 2 update is missing. MHDSystem<problem_t>::ComputeEMF should be called here, followed by WeightedSync.
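A hedged sketch of the missing call, mirroring the stage-1 update shown earlier in this diff (the intermediate-state names are assumptions):

// RK stage 2: recompute the EMF from the intermediate states
MHDSystem<problem_t>::ComputeEMF(emf_comps_ec, stateInter_cc, stateInter_fc, fspds, nghost_fc_);
// ...then synchronize EMFs at coarse-fine boundaries via WeightedSync (#526)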

std::array<std::array<amrex::MultiFab, 2>, AMREX_SPACEDIM> emf_comps_ec;
for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) {
emf_comps_ec[idim][0].define(
amrex::convert(ba_cc, amrex::IntVect::TheDimensionVector(idim) + amrex::IntVect::TheDimensionVector((idim + 1) % 3)), dm, 1,
Collaborator:

I don't think this is correct. There should only be 1 EMF MultiFab per spatial dimension. The EMF multifabs should be identical to the E multifabs here: https://github.com/WeiqunZhang/amrex-devtests/blob/8ceb0e5e9ba61f553499bf19b21fcac21e25e7c7/edgefluxregister/main.cpp#L48
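A sketch of that layout, following the linked amrex example (ba_cc, dm, and nghost_fc_ as used elsewhere in this PR): one edge-centered MultiFab per dimension, nodal in the two transverse directions.

std::array<amrex::MultiFab, AMREX_SPACEDIM> emf_ec;
for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) {
	// edge type for E_idim: nodal in every direction except idim,
	// e.g. (0,1,1) for E_x in 3D
	const amrex::IntVect edge_type = amrex::IntVect::TheUnitVector() - amrex::IntVect::TheDimensionVector(idim);
	emf_ec[idim].define(amrex::convert(ba_cc, edge_type), dm, 1, nghost_fc_);
}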

@BenWibking added the MHD label Jun 20, 2024

sonarcloud bot commented Sep 30, 2024:

Quality Gate failed

Failed condition: Maintainability Rating on New Code is C (required ≥ A)

See analysis details on SonarCloud

