
VL+CT support gravity source terms #186

Merged (15 commits) Jun 16, 2022

Conversation


@mabruzzo mabruzzo commented May 30, 2022

This PR introduces gravity source terms to the VL+CT solver.

Overview

The implementation is very simplistic:

  • the user should schedule methods to compute the acceleration field at the start of the timestep
  • then, within the VL+CT solver, the source terms are added (using that acceleration field) at the end of the full timestep.

The implementation should have the same temporal order as the PPM solver. I adapted the source-term formulas from Enzo's Runge-Kutta solver.

Tests

Implemented Test

At this time, I have only implemented a single test. Essentially we run an inclined stable Jeans wave over one period and compute the L1 error norm of the final snapshot compared to the initial snapshot (this is the same procedure that is used by our inclined HD and MHD tests). While introducing this test I made some changes to the infrastructure in EnzoInitialInclinedWave (there's still a lot more that could be improved), but I took care to make sure I wasn't breaking other tests.
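The error metric itself is simple; here is a minimal Python sketch of the procedure (the arrays and function name are illustrative, not the actual Enzo-E test code, and a single field stands in for the full set of conserved fields):

```python
import numpy as np

def l1_error_norm(final, initial):
    """Cell-averaged L1 error norm between two snapshots of a field."""
    return np.mean(np.abs(final - initial))

# Toy example: a linear wave advected for exactly one period should
# return (almost) exactly to its initial state.
x = np.linspace(0.0, 1.0, 128, endpoint=False)
initial = 1.0 + 1e-6 * np.sin(2 * np.pi * x)
final = 1.0 + 1e-6 * np.sin(2 * np.pi * (x - 1.0))  # shifted one full period
print(l1_error_norm(final, initial))  # round-off level
```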

The following shows the evolution of a high-resolution simulation on a uniform grid with a resolution of (256,128,128):

[figure: wave-profile snapshots from the high-resolution run]

The above figure was run with a much earlier version of this branch, but from everything I've seen the results shouldn't have changed much. It includes a data point for every cell in the simulation, plotted at its position along the (inclined) axis that the wave propagates along.

I have included a plot of the L1 error norm as a function of the number of cells. This used a more recent version of the code (from just before I rebased the branch onto the cmake branch, so compiler settings were slightly different). Following the convention that Athena/Athena++ use for their inclined-wave convergence study, a domain of (256,128,128) corresponds to N=128:

[figure: L1 error norm vs. N]

The dashed line shows 2nd order convergence. I'm not sure why we are getting higher order convergence (presumably this is just related to this particular test problem). I tried changing Method:gravity:order from 4 (the default) to 2 and saw no difference (beyond round-off error) in this plot.
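One way to quantify the observed order from a plot like this is the log-log slope of the error versus resolution. A quick sketch (the data below are fabricated to illustrate the calculation, not taken from the actual runs):

```python
import numpy as np

def convergence_order(N, err):
    """Observed convergence order: negative slope of log(err) vs log(N)."""
    slope, _ = np.polyfit(np.log(np.asarray(N, float)),
                          np.log(np.asarray(err, float)), 1)
    return -slope

# Fabricated second-order data: error falls by 4x per doubling of N.
N = [16, 32, 64, 128]
err = [4e-4 / (n / 16) ** 2 for n in N]
print(convergence_order(N, err))  # 2.0 (to round-off)
```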

As I've mentioned on Slack, the PPM solver encounters issues with this test problem. I'll open an issue to document this.

Other Tests

We could definitely use more tests (especially ones with analytic solutions).

An obvious choice is the unstable Jeans wave (since EnzoInitialInclinedWave can already generate the initial conditions). The following shows the early-time evolution when $\lambda/\lambda_J = 1.5$.

[figure: early-time evolution of the unstable Jeans wave]

To finish implementing this test we would need to compare against the analytic solution (presumably we could reuse/adapt the machinery from the Shock Tube tests). The analytic solution is the same as the one used in Athena.
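For context, the stable/unstable distinction comes from the linear Jeans criterion: a wave with wavelength below the Jeans wavelength lambda_J = c_s * sqrt(pi / (G * rho)) oscillates, while a longer wave collapses. A sketch in toy code units (G is chosen here just to hit the quoted lambda/lambda_J = 1.5; these are not the actual test parameters):

```python
import math

def jeans_length(c_s, G, rho):
    """Jeans wavelength: lambda_J = c_s * sqrt(pi / (G * rho))."""
    return c_s * math.sqrt(math.pi / (G * rho))

def is_stable(wavelength, c_s, G, rho):
    """Linear Jeans stability: stable (oscillatory) iff lambda < lambda_J."""
    return wavelength < jeans_length(c_s, G, rho)

# Choose G so that lambda / lambda_J = 1.5 (the unstable case shown above).
c_s, rho, wavelength = 1.0, 1.0, 1.0
G = math.pi * (1.5 * c_s / wavelength) ** 2 / rho
print(jeans_length(c_s, G, rho))           # 2/3, so lambda/lambda_J = 1.5
print(is_stable(wavelength, c_s, G, rho))  # False: this wave collapses
```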

Alternative implementation

As an aside: other choices could be made for integrating the hydro solver with the gravity solver.

As a reminder: a van Leer predictor-corrector scheme (the VL in VL+CT) employs the following procedure:

  • use hydro quantities from the beginning of the timestep, u_begin, to compute fluxes, f_begin
  • predict the hydro quantities at the half timestep, u_half (from u_begin and f_begin)
  • use the values of u_half to estimate the fluxes at the half timestep, f_half
  • compute the quantities at the end of the timestep from u_begin and f_half
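The four steps above, with the gravity source added only in the full-step update (as in this PR), can be sketched for a 1D scalar law with upwind fluxes. This is a toy analogue, not the actual VL+CT implementation:

```python
import numpy as np

def upwind_flux(u, v=1.0):
    """Upwind flux for linear advection with positive speed v (a toy
    stand-in for the Riemann-solver fluxes of the real scheme)."""
    return v * u

def vl_step(u_begin, accel, dx, dt):
    """One van Leer predictor-corrector step with the gravity source
    applied only in the full-step update, mirroring this PR's approach."""
    # 1. fluxes from the beginning-of-step state
    f_begin = upwind_flux(u_begin)
    # 2. predict the half-timestep state (periodic BCs via np.roll)
    u_half = u_begin - (0.5 * dt / dx) * (f_begin - np.roll(f_begin, 1))
    # 3. fluxes from the half-timestep state
    f_half = upwind_flux(u_half)
    # 4. full-step update from u_begin and f_half, plus the source term
    return u_begin - (dt / dx) * (f_half - np.roll(f_half, 1)) + dt * accel

# Periodic wave on 64 cells; with accel = 0 the total is conserved.
u0 = 1.0 + 0.1 * np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
u1 = vl_step(u0, accel=0.0, dx=1 / 64, dt=0.5 / 64)
print(np.isclose(u1.sum(), u0.sum()))  # True
```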

Alternative 1

A simple way to try to improve the solver’s integration with gravity is to reuse the same acceleration source terms in both the partial and full time steps. It’s super easy to adapt the code to do this, but I think we should have some more test cases before we do this (it’s not completely obvious to me how well this should work).

This is probably what we want to be doing in the short-term.

Alternative 2

The optimal treatment, with higher temporal order (what Athena++ does), would:

  • use the acceleration field for source terms during the half timestep
  • recompute the acceleration field at the half timestep from the gravitational potential of u_half["density"] (and the expected particle positions at the half timestep)
  • use the recomputed acceleration from the half timestep to compute the source terms in the full-timestep update
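Relative to the simple approach, the structural change here is that the acceleration becomes a function of the evolving state. Schematically, in the same toy 1D setup, with a hypothetical accel_of callback standing in for a re-solve of the gravitational potential:

```python
import numpy as np

def upwind_flux(u, v=1.0):
    """Upwind flux for linear advection (toy stand-in for Riemann fluxes)."""
    return v * u

def vl_step_alt2(u_begin, accel_of, dx, dt):
    """Van Leer step where the acceleration is recomputed from the
    half-step state (sketch of the Athena++-style coupling described
    above; `accel_of` is a hypothetical state -> acceleration solve)."""
    f_begin = upwind_flux(u_begin)
    # the half step now includes its own source term...
    u_half = (u_begin - (0.5 * dt / dx) * (f_begin - np.roll(f_begin, 1))
              + 0.5 * dt * accel_of(u_begin))
    f_half = upwind_flux(u_half)
    # ...and the full step uses the acceleration recomputed from u_half
    return (u_begin - (dt / dx) * (f_half - np.roll(f_half, 1))
            + dt * accel_of(u_half))

u0 = 1.0 + 0.1 * np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
u1 = vl_step_alt2(u0, accel_of=lambda u: np.zeros_like(u),
                  dx=1 / 64, dt=0.5 / 64)
print(np.isclose(u1.sum(), u0.sum()))  # True: reduces to the plain scheme
```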

As in Athena++, it would be fairly trivial to use the momentum source terms described by Mullen+ 2021 under this approach, which ensures momentum conservation.

I’ve been thinking about this sort of solution for a while (it has applications for other source terms like radiative cooling) and have some ideas for how to approach it. Unfortunately, it opens a can of worms:

  • we really should perform flux corrections at the partial timestep (so that the mass that sources the gravitational potential is constant)
  • Complications will also arise from adaptive time-stepping
  • Complications also arise if we want the "mhd_vlct" and "ppm" methods to remain drop-in replacements for each other.

So, I’ve been holding off on this for the short term.

mabruzzo added 7 commits May 30, 2022 09:23
For testing purposes, I have introduced support for an inclined linear Jeans wave (users can choose to make it stable or unstable based on the value of the gravitational constant).
While I was here, I decided to refactor the circularly polarized Alfvén wave so that it uses lambda functions to define its vector initializers. Doing this makes the code a lot easier to read: all of the code specific to initializing a circularly polarized Alfvén wave is now located in a single spot (previously it was split into two different places).

I would like to clean this all up a little more, but it's not a great use of my time
- describe the changes to the inclined_wave initializer
- describe the actual Jeans wave test

mabruzzo commented Jun 2, 2022

@peeples this is ready for review

@peeples peeples requested review from pgrete, gregbryan and fearmayo and removed request for pgrete June 2, 2022 14:47

@stefanarridge stefanarridge left a comment


Generally looks good to me.

Would it be worth making an EnzoSource base class, and having EnzoSourceGravity (and potentially other types of sources) be classes derived from EnzoSource? I understand this might be too much work for now and you would prefer to leave that for a future PR. Could feedback and accretion from/onto star/sink particles be handled in a similar way?

(review comment on src/Enzo/enzo_EnzoMethodMHDVlct.cpp, resolved)
@stefanarridge stefanarridge mentioned this pull request Jun 10, 2022

peeples commented Jun 10, 2022

@stefanarridge this looks ready to merge, modulo that one conflict?

@stefanarridge

@stefanarridge this looks ready to merge, modulo that one conflict?

Yes I'd say so.

mabruzzo added 2 commits June 11, 2022 09:21
- fixed an oversight in EnzoMethodMHDVlct::compute where we never initialized the map of acceleration fields
- fixed an oversight in EnzoSourceGravity where we were accidentally trying to read acceleration fields from the wrong map.
- fixed an oversight in the default constructor for array_StringIndRdOnlyMap that was revealed during debugging. Previously, when we would default construct an array, we would not properly initialize the size and capacity members to zero.

This test problem appears to break in a similar way to the PPM test problem. For that reason I have removed the Jeans wave from the automated testing infrastructure.

mabruzzo commented Jun 11, 2022

After merging with main, I discovered a massive oversight. It turns out we weren't actually adding the gravity source terms in the VL+CT solver (I had forgotten to initialize the array map that holds the acceleration fields)...

This revealed a few obvious minor bugs in my implementation that I have now fixed. Previously, the test problem must have been evolving a pure hydro wave (which explains the mysterious second order error convergence).

After fixing this, an x-axis-aligned Jeans wave with the VL+CT solver now looks something like this:

[figure: aligned Jeans wave, VL+CT solver]

This looks a lot like the result for the PPM solver:

[figure: aligned Jeans wave, PPM solver]

All of this seems to suggest that there is probably an issue with my initial conditions (or possibly a problem with the Gravity solver, but that seems a lot less likely...). You can use these input files to reproduce these results: grav_input.tar.gz.

Would it be worth making an EnzoSource base class, and having EnzoSourceGravity (and potentially other types of sources) be classes derived from EnzoSource? I understand this might be too much work for now and you would prefer to leave that for a future PR. Could feedback and accretion from/onto star/sink particles be handled in a similar way?

Yep, that's exactly what I had in mind, but I was leaving that for a future PR. (As an aside, there is also a source term for the dual energy formalism, but that has a different signature, so it probably wouldn't be subclassed from the same base class.) We should talk more about this next week, because it may make more sense to keep certain types of source terms in separate methods.


mabruzzo commented Jun 11, 2022

Uh, actually I just figured it all out (based on a comment @jwise77 made a while back during his review of PR #89).

Everything works out extremely nicely when I set Method:pm_deposit:alpha = 0.

Here's the aligned wave for VL+CT:

[figure: aligned wave, VL+CT]

Here's the aligned wave for PPM:

[figure: aligned wave, PPM]

This also cleared up a weird inconsistency I was getting about choosing the correct stopping time.


stefanarridge commented Jun 13, 2022

As I mentioned, I have done some isothermal collapse runs with the PPM and VL+CT methods and gravity.

Here is a link to a Google drive folder with a couple of gifs. They show the evolution of the density field and the position of the sink particle. https://drive.google.com/drive/folders/105ha9f_DmbcWKA6zZxcl09K3UpuKEzdy?usp=sharing

I am also attaching a couple of tarballs with more results:
vlct_shu_collapse_n_128.tar.gz
ppm_shu_collapse_n_128.tar.gz

Unfortunately there are currently some missing files which I am unable to access at the moment due to technical difficulties.

Both contain a "shu.in" which is the parameter file used to run the problem.

In the "ppm" directory, there is a "radial_profiles.png", and a "mass.png". These are attempts to reproduce Figures 10 and 9 from this paper:
https://arxiv.org/pdf/1001.4456.pdf.

The "vlct" directory contains "shu.out", which is the Enzo-E output file. It shows that the simulation crashed at some point.

Unfortunately I don't have access to the python scripts used to make the figures at the moment.

Edit: I have managed to get some of the missing files now, namely the python scripts and the "mass" and "radial profile" plots for the vlct run, which you can extract from this tarball:
missing_files.tar.gz

I will try and do these runs again with the "alpha" parameter set to zero.

To run with the same version of Enzo-E, check out commit 23b44da from this repository: https://github.com/stefanarridge/enzo-e. This branch is the subject of PR #200.

Let me know if you have any questions.


@gregbryan gregbryan left a comment


This looks great to me -- congrats on getting that to work! I left a few very minor comments/questions, but it seems ready to go to me.

(review comments on doc/source/param/initial.incl, doc/source/tests/gravity.rst, and test/CMakeLists.txt, all resolved)
@jwise77 jwise77 added this to the v1.0 milestone Jun 15, 2022
mabruzzo and others added 2 commits June 15, 2022 11:15
Co-authored-by: Greg Bryan <gbryan@astro.columbia.edu>
@mabruzzo

I updated the answer test and added tests for the PPM solver. Here is the L1 error-norm scaling. Both hydro solvers achieve first-order convergence.

[figure: L1 error-norm scaling for both solvers]

If all tests pass, let's merge.

As I mentioned, I have done some isothermal collapse runs with the PPM and VL+CT methods and gravity. [...]

Let me know if you reproduced this issue or if you don't have time to do that. If this remains an issue, I think we should address this outside of this PR.

@stefanarridge

Let me know if you reproduced this issue or if you don't have time to do that. If this remains an issue, I think we should address this outside of this PR.

If you're referring to the issue of how the sphere collapses into a cross shape, I suspect this may have something to do with the initial conditions, and I am going to test this soon.

In any case, this does not need to be addressed in this PR.

@gregbryan gregbryan merged commit 2c03396 into enzo-project:main Jun 16, 2022
mabruzzo added a commit to mabruzzo/enzo-e that referenced this pull request Jun 27, 2022
Previously the drift time for the gas density was set to the value of `alpha` (taken from `Method:pm_deposit:alpha`). As @stefanarridge pointed out in PR enzo-project#89 this didn't make a lot of sense. I also ran into issues with introducing a Stable Jeans Wave regression test in PR enzo-project#186 that required me to set `Method:pm_deposit:alpha` to 0.

For comparison, the drift time for the gravitating mass particles is set to `alpha * dt / cosmo_a`, in which:
- `alpha` again comes from `Method:pm_deposit:alpha`
- `dt` is the time-step during the current cycle
- `cosmo_a` is the scale factor computed at `time + alpha * dt` (where `time` is the time at the start of the current cycle)

To investigate this issue I checked the original code from enzo-dev in `Grid_DepositBaryons.C`. I concluded that enzo-dev sets the gas density's drift timestep in exactly the same way as the particles' drift timestep. However, when using the PPM and Zeus hydro solvers, the drift timestep is set to 0 (this is consistent with a comment @johnwise made during his review of PR enzo-project#189). Since our two primary hydro solvers in Enzo-E (Ppm and VL+CT) currently handle these source terms with the same temporal order as these solvers, we now force the gas-deposition drift timestep to be 0 in `EnzoMethodPmDeposit`.

We will need to revisit this exact behavior when we eventually modify the VL+CT solver to handle self-gravity in a higher temporal-order manner.
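For concreteness, the two drift-time rules described above can be written out as follows (`scale_factor` is a hypothetical a(t) callable standing in for the cosmology machinery; non-cosmological runs effectively use a = 1):

```python
def particle_drift_time(alpha, dt, time, scale_factor):
    """Drift time for the gravitating mass particles:
    alpha * dt / cosmo_a, with cosmo_a evaluated at time + alpha * dt."""
    cosmo_a = scale_factor(time + alpha * dt)
    return alpha * dt / cosmo_a

def gas_drift_time(alpha, dt, time, scale_factor):
    """After this change the gas-deposition drift time is forced to 0,
    matching enzo-dev's behavior for the PPM and Zeus solvers."""
    return 0.0

# Non-cosmological example: a(t) = 1 for all t.
print(particle_drift_time(0.5, 0.01, 0.0, lambda t: 1.0))  # 0.005
print(gas_drift_time(0.5, 0.01, 0.0, lambda t: 1.0))       # 0.0
```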
@mabruzzo mabruzzo deleted the vlct-gravity branch July 1, 2022 13:13