
WIP: Rename regression_test.py to use pytest #279

Open · wants to merge 5 commits into master

Conversation

rjleveque
Member

Changes at around the 1e-3 level are probably due to different handling of
refinement regions and the resulting patches.
* master: (25 commits)
  Update some python code to remove errors
  Add *.data to gitignore
  Add basic CI script
  fix euler_3d_radial/qinit.f90 to agree with v4.x version
  create gauge filenames that allow more than 5 digits also in 1d
  check that abs(gaugeno) < 2**31 in data.py
  create gauge filenames that allow more than 5 digits also in 3d
  create gauge filenames that allow more than 5 digits in gauge number
  update examples/README.txt regarding env variables
  modify examples/README.txt with new instructions
  move run_examples.py to clawutil
  rename run_tests.py -> run_examples.py
  clean up run_tests.py
  fix typo in .gitignore
  cleaned up amrclaw files with SHARED do warnings
  new features: checkpt style 4 and added STOP feature
  bug fix for cases when only x is periodic
  Remove extra paren typo
  Drop Python 2 support, remove imports of six and __future__
  Add meson.build.
  ...
The gauges disagree slightly with the old data at later times.
Running older versions of the code, such as v5.8.0, gives the same results
as this new data, so I am not sure when it diverged.

--------
@rjleveque
Member Author

Two tests are failing, tests/acoustics_2d_radial and tests/acoustics_2d_adjoint. The former ran fine on my laptop; the latter did not until I updated the regression_data in commit e1ff225, after which it also runs fine locally.

@ketch or @mandli, could you try running pytest with this branch and see how these tests behave for you?
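
For reference, a minimal sketch of running only the two suspect tests via pytest's Python entry point (paths taken from the discussion above; running `pytest -v tests/acoustics_2d_radial tests/acoustics_2d_adjoint` from the repo root is equivalent):

```python
# Minimal sketch: invoke pytest on the two failing test directories.
import sys
import pytest

exit_code = pytest.main([
    "-v",
    "tests/acoustics_2d_radial",
    "tests/acoustics_2d_adjoint",
])
sys.exit(int(exit_code))
```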

@mandli
Member

mandli commented Jun 12, 2024

I get the following failures; the full pytest output is attached below.

  • acoustics_2d_radial
E           AssertionError:
E           Not equal to tolerance rtol=1e-14, atol=1e-08
E
E           Mismatched elements: 18 / 44 (40.9%)
E           Max absolute difference: 0.0021096
E           Max relative difference: 0.02010603
  • acoustics_2d_adjoint
E           AssertionError:
E           Not equal to tolerance rtol=1e-14, atol=1e-08
E
E           Mismatched elements: 3 / 51 (5.88%)
E           Max absolute difference: 0.00021523
E           Max relative difference: 0.00756248
E           Gauge Match Failed for gauge = 1
E             failures in fields: 0

pytest_output.txt
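
Those messages have the shape of numpy's `assert_allclose` report; a minimal sketch of the kind of comparison that produces them (file paths are hypothetical placeholders, tolerances taken from the output above):

```python
import numpy as np

# Archived regression values vs. freshly computed gauge output
# (both paths are hypothetical placeholders).
expected = np.loadtxt("regression_data/gauge0001.txt")
actual = np.loadtxt("_output/gauge0001.txt")

# Any element differing by more than atol + rtol*|expected| triggers the
# "Not equal to tolerance rtol=1e-14, atol=1e-08" AssertionError quoted above.
np.testing.assert_allclose(actual, expected, rtol=1e-14, atol=1e-08)
```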

@mandli
Member

mandli commented Jun 12, 2024

I noticed that some of the tests were behaving oddly and made some adjustments to the adjoint tests. Unfortunately, while I think that fixed one problem, it caused another to surface in the 1D version.

@rjleveque
Member Author

I tried running it on a Linux machine and got the same results as @mandli. Digging into the tests/acoustics_2d_radial results, the level 3 grids it creates are slightly larger, so the results start to diverge near the patch boundaries. I thought it might be because I was using the ifort compiler, but redoing it with gfortran on Linux didn't change the results (still different from gfortran on my MacBook). In both tests I used a fresh clone of the repo plus this PR, so all the code should have been identical.

Maybe there's a cell with a value that's within rounding error of the flagging tolerance, and if it gets flagged, another row of cells is required in the patch.
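
A toy illustration of that hypothesis (not amrclaw's actual flagging code): an error estimate one ulp on either side of the tolerance flips the flag, which is all it takes to grow a patch by one row of cells.

```python
import numpy as np

tol = 1e-3  # hypothetical flagging tolerance

# Two builds computing the "same" estimate, differing only in the last bit:
est_above = np.nextafter(tol, 1.0)  # one ulp above the tolerance
est_below = np.nextafter(tol, 0.0)  # one ulp below the tolerance

print(est_above > tol)  # True  -> cell flagged, patch grows by a row
print(est_below > tol)  # False -> cell not flagged
```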

But strangely, it's not just in one patch but in all five of the level 3 grids in the figure below. In one computation they all start at y = -0.26 and in the other at y = -0.24.

@mjberger is this something worth pursuing, or should we just change the tolerance a bit for example and hope it goes away?

[figure frame1: the level 3 grids from the two computations]

@mjberger
Contributor

mjberger commented Jun 20, 2024 via email

@rjleveque
Member Author

It's a small unit test problem. Looking at fort.amr after setting verbosity and verbosity_regrid to 3, I see that the number of cells flagged already differs at time 0, on both levels 2 and 3. So it seems it must be something dumb I'm missing...
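
For anyone reproducing this, a minimal sketch (assuming amrclaw's standard setrun.py rundata layout; only the two attribute names come from the comment above) of the verbosity settings used to get the per-level flagging counts into fort.amr:

```python
# Hypothetical helper for a test's setrun.py; verbosity values range 0-3.
def enable_regrid_diagnostics(rundata):
    amrdata = rundata.amrdata
    amrdata.verbosity = 3         # detailed per-level output during integration
    amrdata.verbosity_regrid = 3  # report cells flagged at each regrid
    return rundata
```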
