open_mfdataset reads coords from disk multiple times #1521

Closed
crusaderky opened this issue Aug 24, 2017 · 14 comments

@crusaderky
Contributor

I have 200x of the below dataset, split on the 'scenario' axis:

<xarray.Dataset>
Dimensions:      (fx_id: 39, instr_id: 16095, scenario: 2501)
Coordinates:
    currency     (instr_id) object 'GBP' 'USD' 'GBP' 'GBP' 'GBP' 'EUR' 'CHF' ...
  * fx_id        (fx_id) object 'USD' 'EUR' 'JPY' 'ARS' 'AUD' 'BRL' 'CAD' ...
  * instr_id     (instr_id) object 'property_standard_gbp' ...
  * scenario     (scenario) object 'Base Scenario' 'SSMC_1' 'SSMC_2' ...
    type         (instr_id) object 'Common Stock' 'Fixed Amortizing Bond' ...
Data variables:
    fx_rates     (fx_id, scenario) float64 1.236 1.191 1.481 1.12 1.264 ...
    instruments  (instr_id, scenario) float64 1.0 1.143 0.9443 1.013 1.176 ...
Attributes:
    base_currency:  GBP

I individually dump them to disk with Dataset.to_netcdf(fname, engine='h5netcdf').
Then I try loading them back up with open_mfdataset, but it is painfully slow:

%%time
xarray.open_mfdataset('*.nc', engine='h5netcdf')

Wall time: 30.3 s

The problem is caused by the coords being read from disk multiple times.
Workaround:

%%time
def load_coords(ds):
    for coord in ds.coords.values():
        coord.load()
    return ds
xarray.open_mfdataset('*.nc', engine='h5netcdf', preprocess=load_coords)
Wall time: 12.3 s

Proposed solutions:

  1. Implement the above workaround directly inside open_mfdataset()
  2. Change open_dataset() to always eagerly load the coords into memory, regardless of the chunks parameter. Is there any valid use case where lazy coords are actually desirable?

An additional, more radical observation: very frequently, a user knows in advance that all coords are aligned. In that case, the user could explicitly ask xarray to blindly trust this assumption, and thus skip loading, in every dataset beyond the first, the coords that are not based on concat_dim.
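
For illustration only, here is a hedged sketch of that idea built on the existing public API (the '*.nc' pattern, the h5netcdf engine and the 'scenario' dimension come from the example above; drop_shared_coords is a hypothetical helper, not an xarray feature). It drops the non-index coords while loading and re-attaches them from a single file afterwards, so each of them is read from disk only once:

import glob
import xarray

concat_dim = 'scenario'

def drop_shared_coords(ds):
    # Hypothetical helper: drop non-index coords that do not depend on the
    # concat dimension, trusting that they are identical across files.
    shared = [name for name, coord in ds.coords.items()
              if concat_dim not in coord.dims and name not in ds.dims]
    return ds.drop(shared)

combined = xarray.open_mfdataset('*.nc', engine='h5netcdf',
                                 preprocess=drop_shared_coords)

# Re-attach the dropped coords, reading them from a single file.
template = xarray.open_dataset(sorted(glob.glob('*.nc'))[0], engine='h5netcdf')
for name, coord in template.coords.items():
    if concat_dim not in coord.dims and name not in template.dims:
        combined.coords[name] = coord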

@crusaderky
Contributor Author

crusaderky commented Aug 24, 2017

change open_dataset() to always eagerly load the coords to memory, regardless of the chunks parameter. Is there any valid use case where lazy coords are actually desirable?

This also leads to another inefficiency of open_dataset(chunks=...), where you may have your data e.g. shape=(50000, 2**30), chunks=(1, 2**30). If you pass the chunks above to open_dataset, it will break down the coords on the first dim into dask arrays of 1 element - which hardly benefits anybody. Things get worse if the dataset is compressed with zlib or whatever, but only the data vars were chunked at the moment of writing. Am I correct in understanding that the whole coord var will be read from disk 50000 times over?

@shoyer
Member

shoyer commented Aug 24, 2017

change open_dataset() to always eagerly load the coords to memory, regardless of the chunks parameter. Is there any valid use case where lazy coords are actually desirable?

In principle, coords can have the same shape as data variables. In those cases, you probably want to use the same chunking scheme.
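
For example (an illustrative toy, not taken from the thread), a dataset on a curvilinear grid carries 2-D lat/lon coordinates that are exactly as large as the data variable, so eagerly loading them would defeat the purpose of chunking:

import numpy
import xarray

data = numpy.zeros((1000, 1000))
ds = xarray.Dataset(
    data_vars={'temperature': (('y', 'x'), data)},
    # 2-D coords with the same shape (and memory footprint) as the data
    coords={'lat': (('y', 'x'), data),
            'lon': (('y', 'x'), data)})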

An additional, more radical observation is that, very frequently, a user knows in advance that all coords are aligned. In this use case, the user could explicitly request xarray to blindly trust this assumption, and thus skip loading the coords not based on concat_dim in all datasets beyond the first.

@rabernat is interested in this use case. See #1385 and #1413 for discussion.

This also leads to another inefficiency of open_dataset(chunks=...), where you may have your data e.g. shape=(50000, 2**30), chunks=(1, 2**30). If you pass the chunks above to open_dataset, it will break down the coords on the first dim into dask arrays of 1 element - which hardly benefits anybody. Things get worse if the dataset is compressed with zlib or whatever, but only the data vars were chunked at the moment of writing. Am I correct in understanding that the whole coord var will be read from disk 50000 times over?

Yes, I think you're correct here as well. This is also an annoying inefficiency, but the API design is a little tricky.
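
In the meantime, one hedged mitigation (a sketch reusing the preprocess-style workaround above, not an xarray feature; 'data.nc' and the chunk sizes are placeholders) is to load the coords before applying the chunking, so that even if chunk() later wraps them in many tiny dask chunks, each coord is only read from disk once:

import xarray

def open_with_eager_coords(path, chunks):
    # Open lazily without dask, pull the coords into memory, then chunk.
    # The coords may still end up as small dask chunks, but they are built
    # from in-memory numpy arrays rather than re-read from the file.
    ds = xarray.open_dataset(path)
    for coord in ds.coords.values():
        coord.load()
    return ds.chunk(chunks)

ds = open_with_eager_coords('data.nc', chunks={'scenario': 1})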

crusaderky mentioned this issue Aug 28, 2017
@crusaderky
Contributor Author

crusaderky commented Sep 2, 2017

As suspected, the problem is caused specifically by non-index coords:

import xarray
import numpy

data = numpy.random.randint(1<<63, size=1000000)

for r in range(50):
    ds = xarray.Dataset(
        coords={'r': [r], 'c': data, 'otherindex': data},
        data_vars={'data': (('r', 'c'), data.reshape(1, data.size))})
    ds.to_netcdf('fast.%02d.nc' % r)
    del ds['otherindex']
    ds.coords['nonindex'] = ('c', data)
    ds.to_netcdf('slow.%02d.nc' % r)

def load_coords(ds):
    for coord in ds.coords.values():
        coord.load()
    return ds
%time xarray.open_mfdataset('fast.*.nc')
%time xarray.open_mfdataset('fast.*.nc', preprocess=load_coords)
%time xarray.open_mfdataset('slow.*.nc')
%time xarray.open_mfdataset('slow.*.nc', preprocess=load_coords)

output:

CPU times: user 332 ms, sys: 88 ms, total: 420 ms
Wall time: 420 ms
CPU times: user 348 ms, sys: 84 ms, total: 432 ms
Wall time: 430 ms
CPU times: user 1.13 s, sys: 200 ms, total: 1.33 s
Wall time: 1.07 s
CPU times: user 596 ms, sys: 104 ms, total: 700 ms
Wall time: 697 ms

@crusaderky
Contributor Author

Getting closer. The problem is in xarray.concat, which resolves non-index dask coords TWICE, even though it should not resolve them at all (alignment should be done on index coords only?).

import xarray
import numpy
import dask.array

def kernel(label):
    print("Kernel [%s] invoked!" % label)
    return numpy.array([1, 2])

a = dask.array.Array(name='a', dask={('a', 0): (kernel, 'a')}, chunks=((2, ), ), dtype=int)
b = dask.array.Array(name='b', dask={('b', 0): (kernel, 'b')}, chunks=((2, ), ), dtype=int)

ds0 = xarray.Dataset(coords={'x': ('x', [1, 2]), 'y': ('x', a)})
ds1 = xarray.Dataset(coords={'x': ('x', [1, 2]), 'y': ('x', b)})
xarray.concat([ds0, ds1], dim='z')

Output:

Kernel [a] invoked!
Kernel [b] invoked!
Kernel [b] invoked!Kernel [a] invoked!

<xarray.Dataset>
Dimensions:  (x: 2)
Coordinates:
  * x        (x) int64 1 2
    y        (x) int64 dask.array<shape=(2,), chunksize=(2,)>
Data variables:
    *empty*

@shoyer
Member

shoyer commented Sep 4, 2017

The problem is these lines in combine.py:

if opt == 'different':
    def differs(vname):
        # simple helper function which compares a variable
        # across all datasets and indicates whether that
        # variable differs or not.
        v = datasets[0].variables[vname]
        return any(not ds.variables[vname].equals(v)
                   for ds in datasets[1:])
    # all nonindexes that are not the same in each dataset
    concat_new = set(k for k in getattr(datasets[0], subset)
                     if k not in concat_over and differs(k))

We compare coordinates for equality in order to decide whether to ignore redundant coordinates or to stack them up. This happens when coords='different', which is the default choice; it was convenient before we supported dask, but is now a source of performance trouble, as you point out.
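
A possible user-side mitigation in the meantime (not proposed explicitly in the thread): pass coords='minimal' to concat(), which skips the per-variable 'different' check above, although the separate compat check at the end of _dataset_concat may still compare the variables, as noted further down:

import xarray

# 'datasets' stands for the list of already-opened per-file datasets;
# 'scenario' is the concat dimension from the example at the top.
combined = xarray.concat(datasets, dim='scenario', coords='minimal')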

@shoyer
Member

shoyer commented Sep 4, 2017

So, to be more precise, I think the problem is that the first variable is computed many times over (once per comparison), inside the differs helper function above.

A very simple fix, slightly more conservative than loading every coordinate into memory, is to compute the reference variable from the first dataset up front, e.g., v = datasets[0].variables[vname] -> v = datasets[0].variables[vname].compute(). I am slightly nervous about the potential memory overhead of loading all coordinates into memory.
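
A minimal sketch of that change against the combine.py snippet above (not the actual patch):

def differs(vname):
    # Compute the reference variable from the first dataset once, so each
    # .equals() call compares against an in-memory array instead of
    # re-triggering the same dask graph for every other dataset.
    v = datasets[0].variables[vname].compute()
    return any(not ds.variables[vname].equals(v)
               for ds in datasets[1:])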

@crusaderky
Contributor Author

I just realised that you can call concat(data_vars='different') and hit the exact same problem on data_vars :|

Also, with "different" I realised that you're comparing the variable contents twice, once in _calc_concat_over and another time at the end of _dataset_concat. This also slows down concat on pure-numpy backend.

I need more time to work on this. I'll be on holiday for the next week with no access to my PC; I should be able to continue after 12 Sept.

@crusaderky
Contributor Author

P.S. I need to put #1522 as a prerequisite in order not to lose my sanity, as this change touches very much the same code.

@shoyer
Member

shoyer commented Sep 6, 2017 via email

@rabernat
Contributor

rabernat commented Sep 6, 2017

This is closely related to #1385 and my aborted attempted fix in #1413.

@crusaderky
Contributor Author

Back to banging my head on this. Expect a heavy rewrite of combine.py. I can't give an ETA, but it will take a fair number of hours.

@fmaussion
Member

Thanks @crusaderky for looking into this, I think this is very important.

@jhamman
Member

jhamman commented Sep 21, 2017

@crusaderky - happy to help with this. Maybe you can get a PR open and then I can provide some ASV benchmarking.

@crusaderky
Contributor Author

@jhamman There's already #1551 open but I need to heavily rethink it to cater for all the various use cases offered by the data_vars and coords parameters of concat().
