added new tests #179
Conversation
#173 should solve the doc issue if it is ready to be merged (it seems ready to me), so that we can then work on this PR.
#173 is not a PR to fix the docs, and actually it is not ready; it still does not do what I originally wanted. The problem you have is different. From the doc generation output:
so something was probably changed/corrected at the astropy level. You just need to change
I meant this:
BTW, in the meantime I will fix the astropy issue, thanks!
Unfortunately, that error is still not solved in #173. It is not critical, because the docs build fine even if that error is present. I will try to make it work in the lstchain branch; maybe it is a problem with the Python version.
Anyway, I produced this PR to try to fix the issue we discovered with the tests (with HaST and lstchain 0.10): some tests should now fail if no output file has been produced in a step (the older tests won't fail, but we cannot test, e.g., the DL2 script if we have no DL1 data).
The docs issue seems to be just a version 'inconsistency' in the setup (it could be due to one or more dependencies that require a too-old setuptools version). Maybe the newer version could be forced (I thought your PR did this), or we can update everything we can (new ctapipe -> new pyirf...) and see if it is solved automatically.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##           master     #179      +/-   ##
==========================================
+ Coverage   76.33%   76.83%   +0.49%
==========================================
  Files          21       21
  Lines        2502     2556      +54
==========================================
+ Hits         1910     1964      +54
  Misses        592      592

☔ View full report in Codecov by Sentry.
@Elisa-Visentin I think I found a nicer way to do this (looking at how ctapipe handles this very same issue). Since the files are created by running the scripts as subprocesses, we can simply check the exit code of each run. An example:

import subprocess

import pytest


@pytest.fixture(scope="session")
def M2_l1_monly(temp_DL1_M_monly, dl0_m2, config_monly):
    """
    Produce a DL1 file
    """
    # maxmonly is assumed to be defined elsewhere in the test module
    process_run = subprocess.run(
        [
            "magic_calib_to_dl1",
            f"-i{str(dl0_m2[0])}",
            f"-o{str(temp_DL1_M_monly)}",
            f"-c{str(config_monly)}",
            f"-m{int(maxmonly)}",
            "--process-run",
        ]
    )
    # A non-zero exit code means the script did not complete successfully
    assert process_run.returncode == 0
    return temp_DL1_M_monly
This also works when exceptions are raised, because Python will then exit with the generic code 1.
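For illustration, a minimal runnable sketch of that behavior (the inline child script here is hypothetical): an unhandled exception makes the Python interpreter exit with code 1, which the returncode assert in the fixture above would catch.

import subprocess
import sys

# Run a child Python process that dies on an unhandled exception;
# the interpreter then exits with the generic code 1.
result = subprocess.run([sys.executable, "-c", "raise RuntimeError('step failed')"])
print(result.returncode)  # prints 1, so `assert result.returncode == 0` would fail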
@aleberti I implemented them as separate tests to try, in the near future, a little upgrade: skip all the tests that make no sense because files were not produced in a previous step (e.g., skip DL2 tests if no DL2 file exists). Maybe we can do this even with your version, but it would be really useful if the tests returned a failure instead of an error (I'm playing a bit with pytest: a global variable and an 'if failed' check should let us save a lot of time; see the sketch below).
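A hedged sketch of that idea (paths and names are hypothetical, not the actual PR code): check for the file produced by the previous step and skip, rather than error out, when it is missing.

from pathlib import Path

import pytest

# Hypothetical output of a previous analysis step
DL1_FILE = Path("dl1_output.h5")


def test_dl1_step_produced_output():
    assert DL1_FILE.exists(), "DL1 step produced no output file"


def test_dl2_step():
    # Skip (instead of erroring) when the input of this step is missing
    if not DL1_FILE.exists():
        pytest.skip("no DL1 file available, DL2 step cannot be tested")
    ...  # run the DL2 script on DL1_FILE and check its output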
@aleberti I tried to change the structure a bit to group tests and, in case of non-processed/deleted/moved files (so that we don't have all the files we should have), skip the related tests. For now it is implemented only for the stereo MC tests in the MAGIC-only analysis (just to try it). What about this? BTW, we must check the combo types in this branch again before the merge; it seems that something was lost in the merge (but I will check this).
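A sketch of the grouping pattern described above, with hypothetical names: an autouse fixture shared by the class skips every test in the group when the expected file is missing.

from pathlib import Path

import pytest


class TestStereoMCMagicOnly:
    @pytest.fixture(autouse=True)
    def _require_stereo_file(self):
        # Hypothetical path to the stereo MC file from a previous step
        stereo_file = Path("dl1_stereo_mc.h5")
        if not stereo_file.exists():
            pytest.skip("stereo MC file missing, skipping the whole group")

    def test_stereo_contents(self):
        ...  # checks on the stereo file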
Thanks @Elisa-Visentin. To understand better: with the change you made, the skipped tests will be included in the summary pytest creates at the end, right? Then, will failures and errors still be shown? Looking at the documentation, when using -r the default is fE (failures + errors), but now it is xXs. At this point, maybe we can just use -ra (see https://docs.pytest.org/en/6.2.x/usage.html#detailed-summary-report). By chance, do you have an example of how the change works if some files were not created and therefore trigger the skipping of some tests?
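For reference, a minimal illustration of the -ra behavior (hypothetical test file): running pytest -ra on it adds a 'short test summary info' section that lists the skip together with its reason, while failures and errors keep being reported as usual.

# demo_skip.py (hypothetical)
import pytest


def test_needs_missing_file():
    pytest.skip("required input file was not produced")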
OK, then from my side I think that this PR is good. Let's discuss on Wednesday how to proceed, but I think it can be merged already, unless you need to add anything else.
I have to 'duplicate' the same structure (classes + dependencies) for the other tests (both in test_io and in test_io_monly), but I will do this in a few days if you agree on this new structure.
Something strange happened again with the combos in master. I am trying to fix it now.
I think some of the tests are unnecessarily repeated for MAGIC-only and can be safely removed.
I left a few more specific comments.
Thank you Julian, I will check this. Just a minute, because I am a bit lost in the combinations (there is still something wrong). BTW, thanks Alessio, but go get some rest :)
Now the combinations are fixed (right?) and we can work on the duplicates...
I'm working on all your comments and suggestions. Hopefully, tomorrow we will be ready for the merge and the new tagged version for the school.
@jsitarek can you quickly check the latest changes by Elisa? I think they are fine and address your comments. After that she can merge this and I can make the new release.
@aleberti - done
Added new tests to check whether output files have been produced in each of the analysis steps (this should solve the issue of no failed tests when data are not adequately processed by the scripts). @aleberti, could this be a solution, or should I change/add other tests?
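A minimal sketch of what such a check can look like (the file name is hypothetical): after a step runs, assert that the expected output file exists and is non-empty.

from pathlib import Path

# Hypothetical output path of one analysis step
output_file = Path("dl2_output.h5")

# Fail if the step silently produced nothing
assert output_file.exists(), f"{output_file} was not produced"
assert output_file.stat().st_size > 0, f"{output_file} is empty"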