
Can't call open_mfdataset without creating chunked dask arrays #9038

Open
TomNicholas opened this issue May 21, 2024 · 3 comments · May be fixed by #5704
Labels
bug io topic-chunked-arrays Managing different chunked backends, e.g. dask

Comments

TomNicholas (Member) commented May 21, 2024

What happened?

Passing chunks=None to xr.open_dataset/open_mfdataset is supposed to avoid using dask at all, returning lazily-indexed numpy arrays even if dask is installed. However, chunks=None doesn't currently work for xr.open_mfdataset: it gets silently coerced internally to chunks={}, which creates dask chunks aligned with the on-disk files.

Offending line of code:

open_kwargs = dict(engine=engine, chunks=chunks or {}, **kwargs)
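The coercion happens because Python's `or` treats `None` as falsy, so by the time the kwargs reach the backend an explicit `chunks=None` is indistinguishable from the empty-dict default. A minimal illustration of the truthiness behaviour (plain Python, not xarray code):

```python
# `chunks or {}` replaces any falsy value with {}, so a user's explicit
# chunks=None is silently turned into chunks={} before open_dataset sees it.
for chunks in (None, {}, {"time": 100}):
    print(repr(chunks), "->", repr(chunks or {}))
# None and {} both coerce to {}; only a non-empty dict survives unchanged.
```

Preserving the `None` sentinel would require an explicit `is None` check rather than relying on truthiness.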

What did you expect to happen?

Passing chunks=None to open_mfdataset should return lazily-indexed numpy arrays, like open_dataset does.

Minimal Complete Verifiable Example

import xarray as xr

ds = xr.tutorial.open_dataset("air_temperature")

ds1 = ds.isel(time=slice(None, 1000))
ds2 = ds.isel(time=slice(1000, None))

ds1.to_netcdf('air1.nc')
ds2.to_netcdf('air2.nc')

combined = xr.open_mfdataset(['air1.nc', 'air2.nc'], chunks=None)

print(type(combined['air'].data))

MVCE confirmation

  • Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • Complete example — the example is self-contained, including all data and the text of any traceback.
  • Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • New issue — a search of GitHub Issues suggests this is not a duplicate.
  • Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

<class 'dask.array.core.Array'>

Anything else we need to know?

As the default is chunks=None, fixing this behaviour without also changing the default would be a breaking change. But the current behaviour is also not what was intended.

Environment

main

@TomNicholas TomNicholas added bug io topic-chunked-arrays Managing different chunked backends, e.g. dask labels May 21, 2024
dcherian (Contributor) commented May 21, 2024

> Passing chunks=None to open_mfdataset should return lazily-indexed numpy arrays, like open_dataset does.

Can't do this without virtual concat machinery (#4628) which someone decided to implement elsewhere 🙄 ;)

We could change the default to chunks={} in anticipation though.

TomNicholas (Member, Author) commented

> Can't do this without virtual concat machinery (#4628) which someone decided to implement elsewhere 🙄 ;)

😅

It's still broken at the moment though: I had a (ridiculous) case where I don't care that concat will load everything into memory; I just want to completely avoid creating dask.array objects, and right now there is no input option to open_mfdataset that does that.

> We could change the default to chunks={} in anticipation though.

That's probably more useful, as well as actually being consistent.

Illviljan (Contributor) commented

See #5704 for changing the default to chunks={} and further discussion.
