Tests for module-level functions with units #3493
Conversation
This should be ready for review and merge, unless anyone wants tests for […]. I will add tests for […]. Edit: the failing […]
Thanks a lot as ever @keewis!
While it won't cover all the use cases, check out https://github.com/pydata/xarray/blob/master/xarray/tests/test_variable.py#L1819 when you get a chance; it's possible that inheriting from that test with a pint array might give you some tests for free.
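The inheritance idea can be sketched in plain Python. All names here (`Variable`, `UnitVariable`, `BaseVariableTests`) are illustrative stand-ins, not xarray's actual classes: a base class defines each test once against a `cls` hook, and a subclass swaps in a different constructor to rerun the whole suite.

```python
class Variable:
    """Stand-in for the real class under test."""

    def __init__(self, data):
        self.data = list(data)

    def mean(self):
        return sum(self.data) / len(self.data)


class UnitVariable(Variable):
    """Stand-in for a pint-backed variant that carries a unit."""

    def __init__(self, data, unit="m"):
        super().__init__(data)
        self.unit = unit


class BaseVariableTests:
    # Subclasses override `cls` to reuse every test with a different backend.
    cls = staticmethod(Variable)

    def test_mean(self):
        v = self.cls([1.0, 2.0, 3.0])
        assert v.mean() == 2.0


class TestUnitVariable(BaseVariableTests):
    # Only the constructor changes; test_mean is inherited unchanged.
    cls = staticmethod(UnitVariable)
```

The `staticmethod` wrapper is what makes `self.cls(...)` call the constructor directly instead of binding it as a method.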
Thanks for reviewing, @max-sixty. I did not plan to add tests for […]. I don't think inheritance will help much, but that example could definitely be used as a reference / inspiration.
I was predominantly suggesting that as a way of saving your time & code on the margin. Aside from any time or code savings, I think that it's not strictly necessary to test […]
Hmm... well, I certainly agree the tests are often quite verbose (maybe too verbose) and sometimes also test functionality of pint (e.g. when incompatible, compatible and identical units are tried). I didn't check, but I don't remember any overlaps with tests from […]. To reduce the code, it might be worth only testing compatible units. We could also try to use helper functions for data creation, but while that reduces the code it also makes understanding it a little bit harder. If […] Reusing tests from […]
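The "only test compatible units" idea can be illustrated with a table-driven sketch. The `convert` helper and its unit table are hypothetical (the real tests use pint, which raises `DimensionalityError` for incompatible units), but the shape is the same: one row per unit combination, with incompatible pairs expected to error.

```python
# Hypothetical conversion table; pint derives this from unit dimensions.
CONVERSIONS = {("m", "m"): 1.0, ("km", "m"): 1000.0, ("m", "km"): 0.001}


def convert(value, src, dst):
    """Convert `value` from `src` to `dst`, erroring on incompatible units."""
    try:
        return value * CONVERSIONS[(src, dst)]
    except KeyError:
        raise ValueError(f"incompatible units: {src} -> {dst}")


# One row per unit combination; in pytest this would be
# @pytest.mark.parametrize over the same table.
cases = [
    ("m", "m", 1.0),      # identical units
    ("km", "m", 1000.0),  # compatible units
    ("s", "m", None),     # incompatible units: expect an error
]

results = []
for src, dst, expected in cases:
    try:
        results.append(convert(1.0, src, dst))
    except ValueError:
        results.append("error")
```

Dropping the incompatible rows (which mostly exercise pint itself) would shrink the table without losing xarray-specific coverage.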
Overall the tests are great, and the breadth of coverage is impressive. That's more important than their form! The way I was thinking about leveraging existing tests is that there are […]. Any opportunities to use existing code would be on (a). In the above-linked […]
Yes, there's some repetition. Did we go back & forth before re putting some of the duplicated setup in fixtures? That could cut down some boilerplate if there's a lot of overlap (though if there's only partial overlap, it could also increase complication, as you point out).
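A minimal sketch of the shared-setup idea, framework-free: in pytest this would typically be an `@pytest.fixture`, but a plain helper shows the same trade-off. `make_data` and both test functions are hypothetical names, assuming a suite where many tests build the same (data, unit) pair by hand.

```python
def make_data(n=5, unit="m"):
    """Build the (data, unit) pair that many tests would otherwise
    construct by hand; one place to change when the shape changes."""
    return list(range(n)), unit


def test_length():
    data, _ = make_data()
    assert len(data) == 5


def test_unit_default():
    _, unit = make_data()
    assert unit == "m"
```

The cost is indirection: a reader of `test_length` now has to look up `make_data` to see what the data actually is, which is the "increase complication" caveat above.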
Ahh, now this `cls = staticmethod(lambda *args: Variable(*args).chunk())` makes a lot more sense; I didn't understand the purpose of […] before. We didn't go back & forth re fixtures yet. I'll also investigate the overlap, but I think it's only partial.
* upstream/master:
  - Added fill_value for unstack (pydata#3541)
  - Add DatasetGroupBy.quantile (pydata#3527)
  - ensure rename does not change index type (pydata#3532)
  - Leave empty slot when not using accessors
  - interpolate_na: Add max_gap support. (pydata#3302)
  - units & deprecation merge (pydata#3530)
  - Fix set_index when an existing dimension becomes a level (pydata#3520)
  - add Variable._replace (pydata#3528)
  - Tests for module-level functions with units (pydata#3493)
  - Harmonize `FillValue` and `missing_value` during encoding and decoding steps (pydata#3502)
  - FUNDING.yml (pydata#3523)
  - Allow appending datetime & boolean variables to zarr stores (pydata#3504)
  - warn if dim is passed to rolling operations. (pydata#3513)
  - Deprecate allow_lazy (pydata#3435)
  - Recursive tokenization (pydata#3515)
This PR adds tests that cover the module-level functions of the public API, similar to #3238 and #3447.

- Passes `black . && mypy . && flake8`
- `whats-new.rst` for all changes and `api.rst` for new API

As a reference for myself, these are the functions listed by the docs:

- `apply_ufunc`
- `align`
- `broadcast`
- `concat`
- `merge`
- `combine_by_coords`
- `combine_nested`
- `auto_combine` (deprecated)
- `where`
- `full_like`, `ones_like`, `zeros_like`
- `dot`
- `map_blocks`

Functions not covered by this PR:

- `auto_combine` (deprecated)
- `map_blocks` (dask specific, should be the same as `apply_ufunc` without dask)
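For illustration, a rough sketch of what a units test for one module-level function looks like. `Quantity` and `concat` here are tiny hypothetical stand-ins for `pint.Quantity` and `xarray.concat` (not the real APIs), kept self-contained on purpose; the pattern is the same: call the function with unit-carrying inputs and check that both values and units survive.

```python
class Quantity:
    """Minimal stand-in for a pint-like value with a unit attached."""

    def __init__(self, values, unit):
        self.values, self.unit = list(values), unit


def concat(quantities):
    """Stand-in for a module-level function that must propagate units."""
    unit = quantities[0].unit
    if any(q.unit != unit for q in quantities):
        raise ValueError("cannot concatenate quantities with different units")
    merged = [v for q in quantities for v in q.values]
    return Quantity(merged, unit)


def test_concat_preserves_units():
    a = Quantity([1, 2], "m")
    b = Quantity([3], "m")
    result = concat([a, b])
    # Both the data and the unit must survive the operation.
    assert result.values == [1, 2, 3]
    assert result.unit == "m"
```

The real tests in this PR do the same thing per function, additionally parametrizing over unit combinations.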