
Longer tests, logging, warnings #54

Merged: 4 commits into DeltaRCM:develop from longer_tests, Jun 9, 2020
Conversation

@amoodie (Member) commented Jun 7, 2020

This PR implements some additional consistency testing. I was reluctant to add many tests, as was initially suggested in discussion and in #51, because if there are many tests and you make a known breaking change, it's quite a bit of homework to go back and make all those tests pass again. I added two more tests, such that on a breaking change the dev would have to go back and change 4 lines to match the new bed elevations.

I also worked on the I/O file logger and on warning changes and testing. Previously the logger wrote to a file in the cwd; now the log is written to the location defined by out_dir in the .yml config file. Every time something happens (at a few markers through the code), the action is written to the log regardless of verbosity (only one item is written to the log per timestep, so this doesn't appreciably slow things down). The verbose flag controls whether these messages are also printed to the console. These actions could be wrapped into a function to simplify the code (I'll open an issue if you all like these changes).
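For reference, the pattern is roughly this (a minimal sketch only; the names init_logger and log_info are illustrative, not the exact functions in the diff):

import logging
import os


def init_logger(out_dir, timestamp):
    # records always go to a file inside the configured out_dir
    logger = logging.getLogger('deltamodel')
    logger.setLevel(logging.INFO)
    log_path = os.path.join(out_dir, 'pyDeltaRCM_' + timestamp + '.log')
    logger.addHandler(logging.FileHandler(log_path))
    return logger


def log_info(logger, message, verbose=False):
    # the action is always written to the log; the verbose flag
    # only controls whether it is also printed to the console
    logger.info(message)
    if verbose:
        print(message)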

This PR relates to but does not complete #51, and addresses most of #45. Between this PR and #52, most of #35 would be implemented too (but #35 still needs to be completely addressed once we rewrite to use HDF in #13).

Will rebase the code once #52 is merged.

@ericbarefoot (Collaborator) commented:
the dev would have to go back and change 4 lines to match the new bed elevations.

Can this be automated somehow? Like can there be a private function to go and change consistency tests to match the new reality if we make a breaking change we want to pass?

@amoodie (Member, Author) commented Jun 7, 2020

Can this be automated somehow?

Yeah, that's an interesting idea. It could probably be automated somehow. Automating would require editing the Python code via a script, though, so it would probably need to be a bash script: something with grep to pull the new values from the failed test output, and then sed -i to splice those values back into the test script. But this does not sound like an easy task to me.
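A Python stand-in for the grep/sed step might look something like this (update_expected_values is hypothetical, and it assumes a wrapper has already scraped the old and new array literals out of the pytest failure report):

def update_expected_values(test_file, old_literal, new_literal):
    # old_literal/new_literal are the full hard-coded np.array(...) strings,
    # e.g. pulled from the failed test output by a wrapper script
    with open(test_file, 'r') as f:
        src = f.read()
    with open(test_file, 'w') as f:
        f.write(src.replace(old_literal, new_literal))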

Automating that would certainly make it easier to write a ton more tests though.

@amoodie amoodie mentioned this pull request Jun 8, 2020
@amoodie amoodie marked this pull request as ready for review June 9, 2020 15:44
@amoodie (Member, Author) commented Jun 9, 2020

hi @elbeejay, is this what you had in mind here? Or were you thinking of a long, serious model run?

The latter could be configured as another "job" on Travis that calls a Python script to run the model for a few hundred timesteps (jobs time out at some point...).
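Something like this, maybe (just a sketch; the import path and config file name are placeholders, and it assumes the DeltaModel interface used in the tests in this PR):

# run_long_test.py, called by a dedicated CI job
from pyDeltaRCM import DeltaModel  # import path is illustrative

delta = DeltaModel(input_file='long_run.yaml')
for _ in range(300):  # "a few hundred timesteps", kept under the job time limit
    delta.update()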

If you mean the latter, then this should be disconnected from #51 before merging, or it will accidentally close the issue.

Either way, I think this can be merged now if you are good with it 👍

@amoodie amoodie added the "enhancement" (New feature or request) and "testing" labels Jun 9, 2020
@elbeejay (Member) left a comment

hi @elbeejay, is this what you had in mind here? Or were you thinking like a long, serious, model run?

The latter could be configured as another "job" on travis, to call some python script that runs the model for a few hundred timesteps (jobs time out at some point...).

I like the effort and think this is definitely a step forward for the unit testing. I just can't wrap my head around how we would go about testing the model's ability (as we continue to alter it) to transition from the initial jet to a reasonable channel network. This (at least to me) is one of the critical things DeltaRCM has to do, and there is no obvious or intuitive way to test it without a model run of many timesteps (the point about Travis timing out is well taken).

I don't know how we get around it, but that is my concern: if we break the model and it no longer forms a channel network, then we have to revert to an older version of the code with no real knowledge of where the 'breaking' change was made.

If you mean the latter, then this should be disconnected from #51 before merging, or it will accidentally close the issue.

If you don't mind leaving #51 open, I think we could improve upon it. Maybe initializing tests from various pre-developed topographies would help get around running super long tests; not sure, just an early thought.
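Something along these lines, maybe (entirely a sketch; the saved-array file and overwriting delta.eta directly are assumptions on my part):

import numpy as np


def start_from_developed_bed(delta, path='tests/data/developed_eta.npy'):
    # load a pre-developed bed elevation and overwrite the initial one,
    # so tests can probe channel-network behavior without hundreds of
    # spin-up timesteps in CI
    delta.eta[:] = np.load(path)
    return delta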

Either way, I think this can be merged now if you are good with it 👍

Yes I agree, these are nice changes that continue to move us forward in the right direction 😄

Comment on lines +235 to +239
This test throws an error by trying to index cell 1800 of a 30x60 array.
That index exceeds the bounds of the array. I suspect this is a bug with
the unravel in shared_tools.

The xfail should be removed when the bug is fixed.
Member:
Thanks for flagging this test as well as leaving the segfault issues in the below commented out tests. Will be helpful when debugging to see if these get fixed.
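For anyone following along, the marker in question is pytest's xfail; a minimal sketch of the failing pattern (the test name and reason string here are illustrative):

import numpy as np
import pytest


@pytest.mark.xfail(raises=IndexError,
                   reason='suspected unravel bug in shared_tools')
def test_flat_index_out_of_bounds():
    # a 30x60 array has flat indices 0..1799, so cell 1800 is out of bounds
    arr = np.zeros((30, 60))
    arr.flat[1800]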

Comment on lines +38 to +69
def test_long_multi_validation(tmp_path):
    # IndexError on corner.
    file_name = 'user_parameters.yaml'
    p, f = utilities.create_temporary_file(tmp_path, file_name)
    utilities.write_parameter_to_file(f, 'seed', 42)
    utilities.write_parameter_to_file(f, 'Length', 600.)
    utilities.write_parameter_to_file(f, 'Width', 600.)
    utilities.write_parameter_to_file(f, 'dx', 5)
    utilities.write_parameter_to_file(f, 'Np_water', 10)
    utilities.write_parameter_to_file(f, 'Np_sed', 10)
    utilities.write_parameter_to_file(f, 'f_bedload', 0.05)
    f.close()
    delta = DeltaModel(input_file=p)

    for _ in range(0, 3):
        delta.update()

    # slice is: test_DeltaModel.eta[:5, 62]
    # print(delta.eta[:5, 62])
    _exp1 = np.array([-4.971009, -3.722004, -4.973, -3.7240038, -3.7250037])
    assert np.all(delta.eta[:5, 62] == pytest.approx(_exp1))

    for _ in range(0, 10):
        delta.update()

    # slice is: test_DeltaModel.eta[:5, 62]
    # print(delta.eta[:5, 62])
    _exp2 = np.array([-4.971052, -2.0813923, -2.0824013, -4.6614914, -1.5570664])
    assert np.all(delta.eta[:5, 62] == pytest.approx(_exp2))
Member:

This is of course longer than a single-step test. But I guess my concern is about reproducing one of the aspects that makes the model unique: the ability to form a distributary channel network. So while this is definitely a step ahead and an improvement over our previous checks, I think we could do better.

Member (Author):

yeah, I agree. I'm going to unlink the issue so it stays open.

@amoodie amoodie merged commit e9edc36 into DeltaRCM:develop Jun 9, 2020
amoodie added a commit to amoodie/pyDeltaRCM that referenced this pull request Feb 25, 2021
@amoodie amoodie deleted the longer_tests branch February 25, 2021 17:54