test: s3fs pytest unraisable exception #1012
Conversation
…ec-optional-backends * origin/source-futures-submit:
* Future init
* do not use named arguments for the path/url
* annotation
* update submit interface
The diff is quite reasonable, but the list of commits is very long! Were you debugging in CI? (Not that I haven't done it myself...)
Thanks for fixing this. It looks done now, so I'll merge it.
Yes, before I gave up on fixing the underlying issue I tried many different things. I'm curious: is there a problem I'm missing with the long list of commits? It'll get squashed, so it'll just be a very long commit message, am I right?
That's right. It will be a long description; the title of the commit will be the title of the PR plus " (#1012)". It's only a problem for mouseover in the GitHub web interface, which shows the long description of the commit you're hovering over.
* install optional fsspec backends in the CI for some python versions
* update submit interface
* annotation
* do not use named arguments for the path/url
* Future init
* add s3fs and sshfs as test dependencies (fsspec backends)
* test fsspec s3 for more combination of parameters
* remove pip install from ci
* style: pre-commit fixes
* revert test order
* remove dependencies as a test
* add s3fs to test
* exclude s3fs to python version 3.12
* add sshfs test (skipped)
* fix pytest
* asyncio not available in 3.12
* asyncio not available in 3.12
* add comment for fsspec threads
* attempt to close resources
* handle s3fs case separate for now
* attempt to pass tests
* attempt to pass tests
* simplified
* remove support for use_threads option, run non-async fs in threads using asyncio
* stop the loop on resource shutdown
* add skip for xrootd due to server issues
* remove skip for xrootd
* remove shutdown
* understand ci fail
* enable test that uses s3fs
* skip test for CI
* skip tests on problematic python versions
* test S3Source only in its test file
* fix pytest parameter
* rename test
* rename test function (duplicate name)
* imports
* rerun on timeout
* rerun on timeout
* remove minio
* skip s3fs test instead
* rename test
* no s3 in test names
* improve pytest regex
* python 3.8 skip
* skip s3_fail when timeout
* error tuple
* line jump

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jim Pivarski <jpivarski@users.noreply.github.com>
if sys.version_info < (3, 11):
    pytest.skip(
        "https://github.com/scikit-hep/uproot5/pull/1012",
    )
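The skip above gates the s3fs test on the interpreter version. A minimal sketch of the same check, isolated from pytest (the helper name is hypothetical; only the 3.11 threshold comes from the diff):

```python
import sys

# Hypothetical helper, not part of uproot: mirrors the version gate from
# the diff, which skips the s3fs test on interpreters older than 3.11.
def s3fs_test_enabled(version_info=sys.version_info):
    # Compare only (major, minor), matching `sys.version_info < (3, 11)`.
    return tuple(version_info[:2]) >= (3, 11)

print(s3fs_test_enabled((3, 10, 12)))  # → False
print(s3fs_test_enabled((3, 11, 0)))   # → True
```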
@lobis why specifically versions less than 3.11?
No particular reason, it just failed on older versions, I never could find out why.
This PR doesn't fix the underlying cause of this error in pytest, but it makes it go away in the CI.
This PR enables the S3Source tests. There is an issue with the test that checks for an exception when the file does not exist: many times this throws a timeout that doesn't go away with retries, so I will just add the timeout exceptions to the list of valid exceptions. It also enables the s3fs test, but only on Python 3.11 and above; on older Python versions we have this mysterious error. I'm 90% confident that this mysterious error won't cause a problem when using uproot with s3 and fsspec, but it's the reason I didn't set it as default in #1023. Now that I think about it, I don't think it was a good idea to not set the s3 default to s3fs. It probably works fine, and doing so will probably let us know if this is the case, so I will do it (in another PR).
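The "add the timeout exceptions to the list of valid exceptions" idea can be sketched as follows (a hypothetical stand-alone version, not uproot's actual test code; all names are made up):

```python
# Hypothetical sketch: a missing-file check that also accepts a timeout,
# since a flaky S3 request may time out instead of raising "not found".
VALID_EXCEPTIONS = (FileNotFoundError, TimeoutError)

def check_missing_file(open_fn):
    """Call open_fn and report which acceptable exception it raised."""
    try:
        open_fn()
    except VALID_EXCEPTIONS as err:
        return type(err).__name__
    raise AssertionError("expected an exception for a missing file")

def flaky_open():
    # Stand-in for opening a nonexistent S3 object on a slow endpoint.
    raise TimeoutError("S3 request timed out")

print(check_missing_file(flaky_open))  # → TimeoutError
```

With pytest, the same tuple would go straight into `pytest.raises((FileNotFoundError, TimeoutError))`, so either outcome passes the test.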