enable loading remote hdf5 files #2782

Merged · 12 commits · Mar 16, 2019
Conversation

@scottyhq (Contributor) commented Feb 20, 2019

Enable loading remote hdf5 files. Will require h5py>2.9.0 and some changes to https://github.com/shoyer/h5netcdf. I've currently just made a quick hack to backends/api.py, so further tests are needed. Pinging @jhamman, @mrocklin, and @rabernat for thoughts on this.

Here is a short notebook demonstrating how this works:
https://gist.github.com/scottyhq/790bf19c7811b5c6243ce37aae252ca1
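
For reference, a minimal sketch of the workflow the notebook demonstrates (the bucket and key below are made up, and s3fs is just one way to get the seekable file-like object h5py needs):

import s3fs
import xarray as xr

# anonymous access to a public bucket; the path is purely illustrative
fs = s3fs.S3FileSystem(anon=True)
with fs.open('some-bucket/some-file.h5') as f:
    ds = xr.open_dataset(f, engine='h5netcdf')
    print(ds)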

@mrocklin (Contributor)

I'm glad to see this. I'll also be curious to see what the performance will look like.

cc @llllllllll

@shoyer (Member) commented Feb 22, 2019

This looks great!

I'll note one minor extension: you could look at the first few bytes of the file (the "magic number") to determine if it's a netCDF3 or netCDF4 file, and hence whether it can be opened with scipy or h5netcdf:

  • CDF\001 or CDF\002 would indicate netCDF3 (use scipy)
  • \211HDF\r\n\032\n would indicate netCDF4/HDF5 (use h5netcdf)
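
A minimal sketch of that dispatch (the names here are illustrative; the PR eventually factored this logic into a _get_engine_from_magic_number helper in backends/api.py):

def guess_engine_from_header(filename_or_obj):
    # peek at the first 8 bytes, then rewind so the chosen backend
    # starts reading from the beginning of the file
    if isinstance(filename_or_obj, bytes):
        magic_number = filename_or_obj[:8]
    else:
        filename_or_obj.seek(0)
        magic_number = filename_or_obj.read(8)
        filename_or_obj.seek(0)
    if magic_number.startswith(b'CDF'):
        return 'scipy'      # CDF\001 or CDF\002 -> netCDF3
    if magic_number.startswith(b'\211HDF\r\n\032\n'):
        return 'h5netcdf'   # netCDF4/HDF5
    raise ValueError('header does not match netCDF3 or netCDF4/HDF5')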

@scottyhq (Contributor, Author) commented Mar 5, 2019

@shoyer, it would be great to have your feedback on these recent changes now that h5netcdf 0.7 is out. There's a bit more logic required in api.py now that scipy isn't the only backend able to read file-like objects (and people may not specify engine= when opening datasets).

test_backends.py passes locally for me except for TestValidateAttrs.test_validating_attrs... not sure why.

Also, per your comment here: h5netcdf/h5netcdf#51 (comment), I think it would be great to get a few small netcdf4/hdf test files in https://github.com/pydata/xarray-data.

@shoyer (Member) left a review:

I have some very minor suggestions but generally this looks good to me.

        lock=lock, **backend_kwargs)
else:
    raise ValueError("byte header doesn't match netCDF3 or "
                     "netCDF4/HDF5: {}".format(magic_number))
@shoyer (Member):

I suspect this is one of those rare cases where it's best not to report all the details -- most users probably don't know about magic numbers. Maybe something like:

  • "file-like object is not a netCDF file: {}".format(filename_or_obj)`, or
  • "bytes do not represent in-memory netCDF file: {}. (Pass a string or pathlib.Path object to read a filename from disk.)".format(filename_or_obj[:80] + b'...' if len(filename_or_obj) > 80 else b'')

@scottyhq (Contributor, Author):

went with the first, more concise option

@@ -1955,6 +1955,38 @@ def test_dump_encodings_h5py(self):
        assert actual.x.encoding['compression_opts'] is None


# Requires h5py>2.9.0
@shoyer (Member):

Can you add a pytest.mark.skipif based on the version number? (The test on Travis-CI is failing on Python 3.5 because it has an old version of h5py installed)

@scottyhq (Contributor, Author):

i think i did this correctly (added some lines to tests/__init__.py)
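
For context, a sketch of what such a version gate can look like (helper and flag names are illustrative, not necessarily what landed in tests/__init__.py):

import pytest
from distutils.version import LooseVersion

try:
    import h5py
    # mirrors the '# Requires h5py>2.9.0' comment on the test above
    has_h5py_filelike = LooseVersion(h5py.__version__) > LooseVersion('2.9.0')
except ImportError:
    has_h5py_filelike = False

requires_h5py_filelike = pytest.mark.skipif(
    not has_h5py_filelike, reason='requires h5py > 2.9.0')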

    ds['scalar'] = v
bio.seek(0)
with xr.open_dataset(bio) as ds:
    v = ds['scalar']
@shoyer (Member):

prefer using assert_identical and comparing to another expected dataset object.

@scottyhq (Contributor, Author) replied Mar 6, 2019:

i've changed that test to use assert_identical, and am using with raises_regex() to make sure the new exceptions are hit
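
Roughly, the revised pattern looks like this (a sketch under the same setup as the diff above, not the exact merged test):

from io import BytesIO
import h5py
import xarray as xr
from xarray.testing import assert_identical

expected = xr.Dataset({'scalar': 123})
bio = BytesIO()
with h5py.File(bio) as f:
    f['scalar'] = 123        # write directly with h5py
bio.seek(0)                  # rewind before handing the buffer to xarray
with xr.open_dataset(bio, engine='h5netcdf') as actual:
    assert_identical(expected['scalar'], actual['scalar'])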

def test_h5bytes(self):
    import h5py
    bio = BytesIO()
    with h5py.File(bio) as ds:
@shoyer (Member):

Wouldn't it be nice if we supported writing to file-like objects, too? :)

(But don't do that now, this PR is a nice logical size already.)

@scottyhq (Contributor, Author):

agreed. hopefully someone else could pick that up!

@pep8speaks commented Mar 6, 2019

Hello @scottyhq! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2019-03-15 23:35:28 UTC

else:
    print(magic_number)
    raise ValueError("file-like object is not a netCDF file: {}"
                     .format(filename_or_obj))
@shoyer (Member):

I'm a little concerned about giving users an error message about file-like objects if they passed in a bytes object, e.g., consider xarray.open_dataset(b'garbage').

Ideally this should give a useful error message, something like: ValueError: b'garbage' is not a valid netCDF file, did you mean to pass a string for a path instead?, not ValueError: file-like object is not a netCDF file: <_io.BytesIO at 0x105663888>.
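
In code, the distinction being asked for is roughly (a sketch, not the final wording):

if isinstance(filename_or_obj, bytes):
    raise ValueError("{} is not a valid netCDF file, did you mean to "
                     "pass a string for a path instead?"
                     .format(filename_or_obj[:80]))
else:
    raise ValueError("file-like object is not a netCDF file: {}"
                     .format(filename_or_obj))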

@shoyer (Member) commented Mar 6, 2019 via email

@scottyhq (Contributor, Author) commented Mar 8, 2019

thanks for the input @shoyer, I attempted to tidy up a bit and in the process re-ordered some things, such as adding an 'engine' check at the top of open_dataset(). backend tests are passing locally on my machine. hopefully i didn't add too much here or overstep!

@jhamman (Member) commented Mar 8, 2019

@scottyhq - can you add a note to the what's new page?

From what I can tell, I don't think the failing tests are related to this PR.

@shoyer (Member) left a review:

I like the look of this, thanks for refactoring this logic!

elif magic_number.startswith(b'\211HDF\r\n\032\n'):
    engine = 'h5netcdf'
    if isinstance(filename_or_obj, bytes):
        raise ValueError("can't open netCDF4/HDF5 as bytes "
@shoyer (Member):

Just a note: we could support this in the future, by wrapping bytes in an io.BytesIO object (like we do for the scipy backend). But no need to add it now -- I like explicitly providing file objects.
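
The wrapping described here would be roughly (hypothetical, explicitly not part of this PR):

import io

if isinstance(filename_or_obj, bytes):
    # hand the h5netcdf backend a file object instead of raw bytes
    filename_or_obj = io.BytesIO(filename_or_obj)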

with raises_regex(ValueError, 'read/write pointer not at zero'):
    with create_tmp_file() as tmp_file:
        expected.to_netcdf(tmp_file, engine='h5netcdf')
        f = open(tmp_file, 'rb')
@shoyer (Member):

There is a real test failure on Windows (see the Appveyor CI results), likely because this file never gets closed. You should use a context manager here instead.

@scottyhq (Contributor, Author):

that test was for the case where the file isn't closed before reopening, but it looks like on windows the error is different compared to linux (PermissionError versus ValueError), so I added a check for the windows error: PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
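
One generic way to express that platform-dependent expectation (an illustrative sketch; the merged test's exact structure may differ):

import sys
import pytest

# Windows surfaces PermissionError (WinError 32) where Linux raises
# ValueError, so parametrize the expected exception by platform
expected_error = PermissionError if sys.platform == 'win32' else ValueError

def reuse_unclosed_file():
    # stand-in for the unclosed-file steps in the diff above; raises the
    # platform's error so the sketch stays runnable on its own
    raise expected_error('file is still open')

with pytest.raises(expected_error):
    reuse_unclosed_file()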

@jhamman (Member) commented Mar 16, 2019

I think we're good here. I made one minor tweak to the windows fix @scottyhq implemented. I plan to merge this on Monday if I don't hear any objections.

@shoyer merged commit 225868d into pydata:master Mar 16, 2019
@shoyer (Member) commented Mar 16, 2019

thanks @scottyhq !

dcherian added a commit to yohai/xarray that referenced this pull request Mar 18, 2019
* upstream/master:
  Rework whats-new for 0.12
  Add whats-new for 0.12.1
  Release 0.12.0
  enable loading remote hdf5 files (pydata#2782)
  Push back finalizing deprecations for 0.12 (pydata#2809)
  Drop failing tests writing multi-dimensional arrays as attributes (pydata#2810)
  some docs updates (pydata#2746)
  Add support for cftime.datetime coordinates with coarsen (pydata#2778)
  Don't use deprecated np.asscalar() (pydata#2800)
  Improve name concat (pydata#2792)
  Add `Dataset.drop_dims` (pydata#2767)
  Quarter offset implemented (base is now latest pydata-master). (pydata#2721)
  Add use_cftime option to open_dataset (pydata#2759)
  Bugfix/reduce no axis (pydata#2769)
  'standard' now refers to 'gregorian' in cftime_range (pydata#2771)
pletchm pushed a commit to pletchm/xarray that referenced this pull request Mar 21, 2019
* attempt at loading remote hdf5

* added a couple tests

* rewind bytes after reading header

* addressed comments for tests and error message

* fixed pep8 formatting

* created _get_engine_from_magic_number function, new tests

* added description in whats-new

* fixed test failure on windows

* same error on windows and nix
shoyer pushed a commit that referenced this pull request Mar 26, 2019
Allow expand_dims() method to support inserting/broadcasting dimensions with size>1 (#2757)

* Quarter offset implemented (base is now latest pydata-master). (#2721)

* Quarter offset implemented (base is now latest pydata-master).

* Fixed issues raised in review (#2721 (review))

* Updated whats-new.rst with info on quarter offset support.

* Updated whats-new.rst with info on quarter offset support.

* Update doc/whats-new.rst

Co-Authored-By: jwenfai <jwenfai@gmail.com>

* Added support for quarter frequencies when resampling CFTimeIndex. Less redundancy in CFTimeIndex resampling tests.

* Removed normalization code (unnecessary for cftime_range) in cftime_offsets.py. Removed redundant lines in whats-new.rst.

* Removed invalid option from _get_day_of_month docstring. Added tests back in that raises ValueError when resampling (base=24 when resampling to daily freq, e.g., '8D').

* Minor edits to docstrings/comments

* lint

* Add `Dataset.drop_dims` (#2767)

* ENH: Add Dataset.drop_dims()

* Drops full dimensions and any corresponding variables in a
  Dataset
* Fixes GH1949

* DOC: Add Dataset.drop_dims() documentation

* Improve name concat (#2792)

* Added tests of desired name inferring behaviour

* Infers names

* updated what's new

* Don't use deprecated np.asscalar() (#2800)

It got deprecated in numpy 1.16 and throws a ton of warnings due to
that.
All the function does is returning .item() anyway, which is why it got
deprecated.

* Add support for cftime.datetime coordinates with coarsen (#2778)

* some docs updates (#2746)

* Friendlier io title.

* Fix lists.

* Fix *args, **kwargs

"inline emphasis..."

* misc

* Reference xarray_extras for csv writing. Closes #2289

* Add metpy accessor. Closes #461

* fix transpose docstring. Closes #2576

* Revert "Fix lists."

This reverts commit 39983a5.

* Revert "Fix *args, **kwargs"

This reverts commit 1b9da35.

* Add MetPy to related projects.

* Add Weather and Climate specific page.

* Add hvplot.

* Note open_dataset, mfdataset open files as read-only (closes #2345).

* Update metpy 1

Co-Authored-By: dcherian <dcherian@users.noreply.github.com>

* Update doc/weather-climate.rst

Co-Authored-By: dcherian <dcherian@users.noreply.github.com>

* Drop failing tests writing multi-dimensional arrays as attributes (#2810)

These aren't valid for netCDF files.

Fixes GH2803

* Push back finalizing deprecations for 0.12 (#2809)

0.12 will already have a big change in dropping Python 2.7 support. I'd rather
wait a bit longer to finalize these deprecations to minimize the impact on
users.

* enable loading remote hdf5 files (#2782)

* Release 0.12.0

* Add whats-new for 0.12.1

* Rework whats-new for 0.12

* DOC: Update donation links

* DOC: remove outdated warning (#2818)

* Allow expand_dims() method to support inserting/broadcasting dimensions with size>1 (#2757)

 * Move enhancement description up to 0.12.1

 * use .size attribute to determine the size of a dimension, rather than converting to a list, which can be slow for large iterables

 * Make using dim_kwargs for python 3.5 illegal -- a ValueError is thrown

 * dataset.expand_dims() method takes a dict-like object where values represent the length of dimensions or coordinates of dimensions

 * dataarray.expand_dims() method takes a dict-like object where values represent the length of dimensions or coordinates of dimensions

 * Add alternative option to passing a dict to the dim argument, which is now an optional kwarg, passing in each new dimension as its own kwarg

 * Add expand_dims enhancement from issue 2710 to whats-new.rst

 * Fix test_dataarray.TestDataArray.test_expand_dims_with_greater_dim_size tests to pass in python 3.5 using ordered dicts instead of regular dicts. This was needed because python 3.5 and earlier did not maintain insertion order for dicts

 * Restrict core logic to use 'dim' as a dict--it will be converted into a dict on entry if it is a str or a sequence of str

 * Don't cast dim values (coords) as a list since IndexVariable/Variable will internally convert it into a numpy.ndarray. So just use IndexVariable((k,), v)

 * TypeErrors should be raised for invalid input types, rather than ValueErrors.

 * Force 'dim' to be OrderedDict for python 3.5
@scottyhq mentioned this pull request Apr 4, 2019