concat prealigned objects #1413

Closed · wants to merge 2 commits

Conversation

rabernat (Contributor)

This is an initial PR to bypass index alignment and coordinate checking when concatenating datasets.

@rabernat (Contributor, Author) commented May 18, 2017

Let me expand on what this does.

Many netCDF datasets consist of multiple files with identical coordinates, except for one (e.g. time). With xarray we can open these datasets with open_mfdataset, which calls concat on the list of individual dataset objects. concat calls align, which loads all of the dimension indices (and, optionally, non-dimension coordinates) from each file and checks them for consistency / alignment.

This align step is potentially quite expensive for big collections of files with large indices. For example, an unstructured grid or particle-based dataset would just have a single dimension coordinate, with the same length as the data variables. If the user knows that the datasets are already aligned, this PR enables the alignment step to be skipped by passing the argument prealigned=True to concat. My goal is to avoid touching the disk as much as possible.
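
For concreteness, the proposed usage would look something like this (prealigned is the keyword this PR adds, not part of any released xarray; file names are made up):

    import xarray as xr

    # Proposed usage from this PR: skip index loading and alignment checks
    # because the user asserts the datasets are already aligned.
    files = ['run.0001.nc', 'run.0002.nc']
    datasets = [xr.open_dataset(f) for f in files]
    combined = xr.concat(datasets, dim='time', prealigned=True)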

This PR is a work in progress. I still need to propagate the prealigned argument up to auto_combine and open_mfdataset.

An alternative API would be to add another option to the coords keyword, i.e. coords='prealigned'.

Feedback welcome.

    if not prealigned:
        datasets = align(*datasets, join='outer', copy=False, exclude=[dim])
    else:
        coords = 'minimal'
Member:

It's bad form to unilaterally override an argument with another value -- it's better to raise an error (or maybe a warning).

The only value of coords that really breaks here is 'different', and even that value could conceivably make sense.

Contributor (Author):

What about just adding the option coords='prealigned'?

My initial thought was that, for prealigned data, all coords should just be drawn from the first object. But on second thought, what if there are other coords in the later datasets that do need to be concatenated, e.g. concatenating over time with an auxiliary coordinate iteration_number along the time dimension?
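
A toy example of that situation (all names here are made up):

    import numpy as np
    import xarray as xr

    # An auxiliary coordinate that varies along the concat dimension and
    # therefore must itself be concatenated, not taken from the first object.
    ds1 = xr.Dataset(
        {'temp': ('time', np.random.rand(2))},
        coords={'time': [0, 1], 'iteration_number': ('time', [10, 11])},
    )
    ds2 = xr.Dataset(
        {'temp': ('time', np.random.rand(2))},
        coords={'time': [2, 3], 'iteration_number': ('time', [12, 13])},
    )
    combined = xr.concat([ds1, ds2], dim='time')
    # combined.iteration_number has length 4; drawing all coords from the
    # first object would silently drop the later values.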

It definitely doesn't work with coords='different'. I have not tried all the other options. I have a hard time conceptualizing what the different coords options do. Some guidance would be very welcome. I don't really understand what the function _calc_concat_over does.

@shoyer (Member) commented May 18, 2017

This enhancement makes a lot of sense to me.

Two things worth considering:

  1. Given a collection of datasets, how do I know whether setting prealigned=True will work? This is where my PR adding xr.align(..., join='exact') could help (I can finish that up). Maybe it's worth adding xr.is_aligned or something similar; a rough sketch follows below this list.
  2. What happens if things go wrong? It's okay if the behavior is undefined (or could give wrong results), but we should document that. Ideally we would raise sensible errors at some later time, e.g., when the dask arrays are computed. This may or may not be possible to do efficiently with dask, if the results of all the equality checks are consolidated and added into the dask graphs of the results.
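
As a rough sketch, such a helper could look like this (is_aligned is hypothetical, and I'm assuming the proposed join='exact' mode raises ValueError on mismatched indexes):

    import xarray as xr

    # Hypothetical helper built on the proposed join='exact': alignment
    # succeeds only if all shared indexes already match exactly.
    def is_aligned(*objects):
        try:
            xr.align(*objects, join='exact')
        except ValueError:  # assumed failure mode for mismatched indexes
            return False
        return True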

@rabernat (Contributor, Author) commented May 19, 2017

Given a collection of datasets, how do I know if setting prealigned=True will work?

I guess we would want to check that (a) the necessary variables and dimensions exist in all datasets and (b) the dimensions have the same length. We would want to bypass the actual reading of the indices. I agree it would be nicer to subsume this logic into align.
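
Something like this (purely illustrative; check_prealigned is not a real function) would cover (a) and (b) without reading any index values:

    # Verify (a) matching variables and (b) matching dimension sizes across
    # datasets, using only metadata -- no index values are loaded from disk.
    def check_prealigned(datasets, concat_dim):
        first = datasets[0]
        for ds in datasets[1:]:
            if set(ds.variables) != set(first.variables):
                raise ValueError('datasets have different variables')
            for name, size in ds.sizes.items():
                if name != concat_dim and size != first.sizes.get(name):
                    raise ValueError('dimension %r has inconsistent size' % name)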

What is xr.align(..., join='exact') supposed to do?

What happens if things go wrong?

I can add more careful checks once we sort out the align question.

@shoyer (Member) commented May 19, 2017

What is xr.align(..., join='exact') supposed to do?

It verifies that all dimensions have the same length, and coordinates along all dimensions (used for indexing) also match. Unlike the normal version of align, it doesn't do any indexing -- the outputs are always the same as the inputs.

It does not check that the necessary dimensions and variables exist in all datasets. But we should do that as part of the logic in concat anyways, since the xarray data model always requires knowing variables and their dimensions.
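
For example, assuming that behavior:

    import xarray as xr

    a = xr.Dataset(coords={'x': [0, 1, 2]})
    b = xr.Dataset(coords={'x': [0, 1, 2]})
    xr.align(a, b, join='exact')   # indexes match: inputs returned as-is

    c = xr.Dataset(coords={'x': [0, 1, 3]})
    # xr.align(a, c, join='exact') # mismatched index: would raise an error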

@rabernat (Contributor, Author)

As I think about this further, I realize it might be futile to avoid reading the dimensions from all the files. This is a basic part of how open_dataset works.

@shoyer (Member) commented May 19, 2017 via email

@rabernat (Contributor, Author)

Since the expensive part (for me) is actually reading all the coordinates, I'm not sure that this PR makes sense any more.

The same thing I am going for here could probably be accomplished by allowing the user to pass join='exact' via open_mfdataset. A related optimization would be to allow the user to pass coords='minimal' (or other concat coords options) via open_mfdataset.
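
For example (neither keyword is currently forwarded by open_mfdataset, so this is a sketch of the proposal; the file pattern is made up):

    import xarray as xr

    ds = xr.open_mfdataset(
        'output/*.nc',
        concat_dim='time',
        coords='minimal',   # only concatenate coords that contain 'time'
        # join='exact',     # proposed: verify alignment, never reindex
    )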

For really big datasets, I think we will want to take the NCML approach, generating the xarray metadata as a pre-processing step. Then we could add a function like open_ncml_dataset to xarray, which would parse this metadata and construct the dataset in a more efficient way (i.e. not reading redundant coordinates).

@shoyer (Member) commented May 20, 2017 via email

@jhamman (Member) commented Jul 13, 2017

@rabernat - I'm just catching up on this issue. Is your last comment indicating that we should close this PR?

@rabernat (Contributor, Author) commented Jul 14, 2017

Yes, I think it should be closed. There are better ways to accomplish the desired goals.

Specifically, allowing the user to pass kwargs to concat via open_mfdataset would be useful.

@jhamman (Member) commented Jul 17, 2017

Okay thanks, closing now. We can always reopen this if necessary.
