i.eodag: add testsuite #1163
Conversation
Force-pushed from 1a625e2 to 5b4486a
Very good to see the testsuite added! My only concern is that some tests are a bit too strict and easily fail with subtle changes on the provider side, even if those changes do not impact the use / users of the module... And then the tests would have to be adjusted every time such a change occurs.
I would thus loosen some criteria, or what do @veroandreo and @lucadelu think?
quiet=True,
stdout_=PIPE,
)
self.assertEqual(i_eodag.outputs["stdout"].value.strip(), output)
If we have too strict test criteria (here assertEqual for the entire output, or an md5 test for the output file), e.g. a reprocessing of the products with a new processing baseline would cause the test to fail. I would probably rather test if the requested product type is returned, if the sensing time is within start and end, and stuff like that...
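For illustration, a loosened check along those lines (a minimal sketch, untested; the way each stdout line is split is an assumption, since the real output format of i.eodag determines it):

from datetime import datetime

def assert_products_in_window(stdout, start, end):
    """Loose checks: product type instead of exact IDs, sensing time in window."""
    for line in stdout.strip().splitlines():
        # Assumed format: product ID and ISO sensing time as the first
        # two whitespace-separated columns; adjust to the real output.
        product_id, sensing_time = line.split()[:2]
        assert product_id.startswith("S2"), product_id
        assert start <= datetime.fromisoformat(sensing_time) <= end, line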
Yes, just in case something changes on their side (highly likely based on 10+ years of the Sentinel data being out), I'd check if the time is within start and end, and if the product starts with S2 or so... The question is what happens with these tests if the site is down? They would all fail, and that would not be a problem of the tool but of the server side...
The question is what happens with these tests if the site is down? They would all fail, and that would not be a problem of the tool but of the server side...
If we unify all the tests to use "peps" as the provider, maybe we can run a pretty basic test of the module in the setUpClass method, and then save in a flag whether this test fails or not. If it fails, then we can always return success in all of the tests, because the server is down or a connection could not be established for some reason; and if it passes, we run the tests normally.
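A rough sketch of that idea (untested, hypothetical names; it assumes the grass.gunittest framework and the i.eodag options already used elsewhere in this thread):

import grass.script as gs
from grass.exceptions import CalledModuleError
from grass.gunittest.case import TestCase


class TestEodag(TestCase):
    provider_reachable = False

    @classmethod
    def setUpClass(cls):
        # Probe peps once with a minimal search and remember the outcome
        # in a flag, so a dead server does not fail every single test.
        try:
            gs.run_command(
                "i.eodag",
                flags="l",
                provider="peps",
                producttype="S2_MSI_L1C",
                start="2022-05-01",
                end="2022-06-01",
                quiet=True,
            )
            cls.provider_reachable = True
        except CalledModuleError:
            cls.provider_reachable = False

    def setUp(self):
        # Skipping is arguably cleaner than returning success: the test
        # report then distinguishes "server down" from "tests passed".
        if not self.provider_reachable:
            self.skipTest("peps unreachable; server-side, not a tool problem")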
# i.eodag -l provider=peps producttype=S2_MSI_L1C start=2022-05-01 end=2022-06-01 footprints=s2_footprints
# This is not working...?
# v.import seems to complain
What is the background for these lines?
If you try to run this test case individually, it will give you an error, or at least that's what happens for me.
But if you run the same command from the terminal, it runs fine, so I am not sure what the issue could be here.
i.eodag -l provider=peps producttype=S2_MSI_L1C start=2022-05-01 end=2022-06-01 map=durham footprints=s2_footprints
Is it because in your terminal, you already have something downloaded or extra env vars?
Hi @echoix
I have tried switching devices and using a virtual environment to check if that was the issue, but still no luck; maybe there is some other factor that I couldn't eliminate when testing this?
Anyhow, the error seems to be that it couldn't create a temporary location. The error from v.in.ogr:
ERROR: Unable to create new location <tmp_v_import_location_debian_hp_6941>
I agree with @ninsbl that the tests might be too strict, leading them to fail because of changes in the providers and not because the tool is not working. So, I wonder if instead of comparing the stdout output we could check:
- for the limit option, whether the number of scenes found is less than or equal to limit, but higher than 0
- for queryables, whether they exist in the JSON output
- for cloud cover, whether the values are less than or equal to cloudcover
I'm not a specialist on this; does that make any sense to you guys, @ninsbl @lucadelu? (See the sketch below.)
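Perhaps something like this (a hedged sketch; it assumes the module can print the search result as a GeoJSON FeatureCollection, which is what eodag's SearchResult serializes to, with cloud cover stored under properties as in eodag's conventions):

import json

def check_search_result(stdout_json, limit, max_cloudcover):
    features = json.loads(stdout_json)["features"]
    # limit option: something was found, but never more than requested
    assert 0 < len(features) <= limit
    for feature in features:
        # per-scene cloud cover at or below the requested maximum
        assert feature["properties"]["cloudCover"] <= max_cloudcover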
Docstrings need to be explicit about what is being tested.
There's a mismatch between the provider in the comments and the provider used in the commands. Since creodias is a special case requiring TOTP, I'd say we stick to peps instead, no?
Commits: Catch exception in test_sample_one; Add header
Force-pushed from 978736f to 0768882
It makes sense; however, we have to trust the provider: if it doesn't work, our tests will fail. This was reported by @marisn in this comment #1033 (comment)
The approach seems very good.
I do have some suggestions for things you can consider. They are not tested, so please handle with care...
(apply changes from code review)
Thanks @ninsbl, I have applied the suggestions from the code review.
I ran the tests and I got just one failure because...
Changes look good!
Notice that, in addition to the error @lucadelu gets locally, tests are failing early in CI because eodag is not installed for the GitHub action.
The latter can be fixed by adding eodag here:
.github/workflows/extra_requirements.txt
https://github.com/OSGeo/grass-addons/actions/runs/10481142386/job/29030104572?pr=1163#step:17:164
That should be the last hurdle for this PR!
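Assuming extra_requirements.txt is a plain pip requirements list (one package per line), the addition would simply be:

eodag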
Tests are failing: https://github.com/OSGeo/grass-addons/actions/runs/10487387701/job/29050543474#step:17:155
Please try adding eodag there.
Thanks, that passed the checks.
Hi @HamedElgizery, unfortunately, with the current release branch (and Python 3.10), some eodag tests fail due to isoformat parsing (with time zone and microseconds)... See: https://github.com/OSGeo/grass-addons/actions/runs/10498051963/job/29082199054?pr=1163#step:17:155
I think it passes now (I found no traces of a failing i.eodag test). The problem was the datetime parsing behavior of Python 3.10. I had to slightly change the format of the timestamp to be compatible with it, by removing the 'Z' character from the end of the timestamp.
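For illustration, the kind of incompatibility involved (a minimal sketch, not the actual i.eodag code):

from datetime import datetime

ts = "2022-05-01T10:30:21.024Z"

# Python <= 3.10: fromisoformat() rejects the trailing 'Z' designator
# (support for 'Z' only arrived in 3.11), so either strip it...
parsed = datetime.fromisoformat(ts.rstrip("Z"))

# ...or replace it with an explicit UTC offset, which 3.7+ accepts:
parsed_utc = datetime.fromisoformat(ts.replace("Z", "+00:00"))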