tolerance for alignment #2217

Open
naomi-henderson opened this issue Jun 5, 2018 · 23 comments · May be fixed by #4489
Labels
topic-combine combine/concat/merge

Comments

@naomi-henderson

When using open_mfdataset on files which 'should' share a grid, there is often a small mismatch which results in the grids not aligning properly. This happens frequently when reading data from large climate models split across multiple files of the same variable, on the same lon/lat grid, over different time intervals. Because this behavior is silent, I always have to check the sizes of the lon/lat grids whenever I rely on open_mfdataset to concatenate the data in time.

Here is an example in which I create two 1d DataArrays which have slightly different coordinates:

import xarray as xr
import numpy as np
from glob import glob

tol = 1e-14
x1 = np.arange(1, 6) + tol*np.random.rand(5)
da1 = xr.DataArray([9, 0, 2, 1, 0], dims=['x'], coords={'x': x1})

x2 = np.arange(1, 6) + tol*np.random.rand(5)
da2 = da1.copy()
da2['x'] = x2

print(da1.x, '\n', da2.x)
<xarray.DataArray 'x' (x: 5)>
array([1., 2., 3., 4., 5.])
Coordinates:
  * x        (x) float64 1.0 2.0 3.0 4.0 5.0 
 <xarray.DataArray 'x' (x: 5)>
array([1., 2., 3., 4., 5.])
Coordinates:
  * x        (x) float64 1.0 2.0 3.0 4.0 5.0

First I save both DataArrays as netCDF files and then use open_mfdataset to load them:

da1.to_netcdf('da1.nc', encoding={'x': {'dtype': 'float64'}})
da2.to_netcdf('da2.nc', encoding={'x': {'dtype': 'float64'}})

db = xr.open_mfdataset(glob('da?.nc'))

db
<xarray.Dataset>
Dimensions:                        (x: 10)
Coordinates:
  * x                              (x) float64 1.0 2.0 3.0 4.0 5.0 1.0 2.0 ...
Data variables:
    __xarray_dataarray_variable__  (x) int64 dask.array<shape=(10,), chunksize=(5,)>

So the x grid is now twice the size. This behavior is the same if I just use align with join='outer':

xr.align(da1,da2,join='outer')
(<xarray.DataArray (x: 10)>
 array([nan,  9., nan,  0.,  2., nan, nan,  1.,  0., nan])
 Coordinates:
   * x        (x) float64 1.0 1.0 2.0 2.0 3.0 3.0 4.0 4.0 5.0 5.0,
 <xarray.DataArray (x: 10)>
 array([ 9., nan,  0., nan, nan,  2.,  1., nan, nan,  0.])
 Coordinates:
   * x        (x) float64 1.0 1.0 2.0 2.0 3.0 3.0 4.0 4.0 5.0 5.0)

Request / suggestion

What is needed is a user-specified tolerance that can be given to open_mfdataset and passed on to align, which would then accept these grids as the same.

Possibly related to #2215

xr.__version__  # '0.10.4'

thanks, Naomi

@shoyer
Member

shoyer commented Jun 6, 2018

I agree that this would be useful.

One option that works currently would be to determine the proper grid (e.g., from one file) and then use the preprocess argument of open_mfdataset to reindex() each dataset to the desired grid.
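
For instance, a rough sketch of that workaround for the time-concatenation use case (the file names, variable layout, and tolerance value here are all hypothetical):

import xarray as xr

# take the "proper" lon/lat grid from one file
target = xr.open_dataset('model_2000.nc')

def snap_to_target(ds):
    # snap each file onto the target grid, matching coordinate labels
    # that are within tolerance of each other
    return ds.reindex(lon=target.lon, lat=target.lat,
                      method='nearest', tolerance=1e-10)

ds = xr.open_mfdataset('model_*.nc', preprocess=snap_to_target)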

To do this systematically in xarray, we would want to update xarray.align to be capable of approximate alignment. This would in turn require approximate versions of pandas.Index.union (for join='outer') and pandas.Index.intersection (for join='inner').

Ideally, we would do this work upstream in pandas, and utilize it downstream in xarray. Either way, someone will need to figure out and implement the appropriate algorithm to take an approximate union of two sets of points. This could be somewhat tricky when you start to consider sets where some but not all points are within tolerance of each other (e.g., {0, 1, 2, 3, 4, 5} with tolerance=1.5).
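
To make the trickiness concrete, here is a greedy sketch of an approximate union over sorted float arrays (a hypothetical helper, not pandas API); note how the chained case collapses:

import numpy as np

def approx_union(a, b, tol):
    # union of two sorted 1-D float arrays: drop any value within
    # `tol` of its predecessor in the merged sort, so each run of
    # near-equal points keeps only its smallest member
    merged = np.sort(np.concatenate([a, b]))
    keep = np.empty(len(merged), dtype=bool)
    keep[0] = True
    keep[1:] = np.diff(merged) > tol
    return merged[keep]

print(approx_union(np.array([1., 2., 3.]),
                   np.array([1., 2. + 1e-14, 3.]), tol=1e-10))
# [1. 2. 3.]

# the chained case from above: every point is within tolerance of its
# neighbor, so this greedy pass collapses the whole run to one point
print(approx_union(np.array([0., 2., 4.]), np.array([1., 3., 5.]), tol=1.5))
# [0.]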

@rabernat
Contributor

rabernat commented Jun 6, 2018

An alternative approach to fixing this issue would be the long-discussed idea of a "fast path" for open_mfdataset (#1823). In this case, @naomi-henderson knows a priori that the coordinates for these files should be the same, numerical noise notwithstanding. There should be a way to skip the alignment check completely and override the coordinates with the values from the first file.

For example

xr.align(da1, da2, join='override')

This would just check that the shapes of the different coordinates match and then replace da2's coordinates with those from da1.

@shoyer
Member

shoyer commented Jun 6, 2018

For example
xr.align(da1, da2, join='override')
This would just check that the shapes of the different coordinates match and then replace da2's coordinates with those from da1.

I like this idea! This would certainly be much easier to implement than general-purpose approximate alignment.

@WeatherGod
Contributor

I was just pointed to this issue yesterday, and I have an immediate need for this feature in xarray for a work project. I'll take responsibility to implement this feature tomorrow.

@WeatherGod
Contributor

To be clear, my use-case would not be solved by join='override' (isn't that just join='left'?). I have moving nests of coordinates that can have some floating-point noise in them, but are otherwise identical.

@shoyer
Member

shoyer commented Jun 21, 2018

To be clear, my use-case would not be solved by join='override' (isn't that just join='left'?). I have moving nests of coordinates that can have some floating-point noise in them, but are otherwise identical.

join='left' will reindex all arguments to match the coordinates of the first object. In practice, that means that if coordinates differ by floating-point noise, the second object would end up converted to all NaNs.

join='override' would just relabel coordinates, assuming that the shapes match. The data wouldn't change at all.

I guess another way to do this would be to include method and tolerance arguments from reindex on align, and only allow them when join='left' or join='right'. But this would be a little trickier to pass on through other functions like open_mfdataset().
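
To illustrate the difference, a minimal sketch using noisy grids like those at the top of this thread (join='override' is still a proposal here, so only the join='left' behavior can be shown):

import numpy as np
import xarray as xr

x1 = np.arange(1., 6.)
x2 = x1 + 1e-14  # floating-point noise
da1 = xr.DataArray([9, 0, 2, 1, 0], dims=['x'], coords={'x': x1})
da2 = xr.DataArray([9, 0, 2, 1, 0], dims=['x'], coords={'x': x2})

# join='left' reindexes da2 onto da1's labels; no label matches
# exactly, so da2 comes back as all NaN
_, b = xr.align(da1, da2, join='left')
print(b.values)  # [nan nan nan nan nan]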

@WeatherGod
Contributor

Well, I need this to work for join='outer', so it's going to happen one way or another...

One concept I was toying with today was a distinction between aligning coords (which is what it does now) and aligning bounding boxes.

@WeatherGod
Contributor

@shoyer, I am thinking your original intuition was right about needing to improve the Index classes, perhaps to work with an optional epsilon argument to their constructors. How receptive do you think pandas would be to that? And even if they would accept such a feature, we would probably need to implement a bit of it ourselves for situations where older pandas versions are used.

@shoyer
Member

shoyer commented Jun 22, 2018

I think a tolerance argument for set-methods like Index.union would be an easier sell than an epsilon argument for the Index constructor. You'd still need to figure out the right algorithmic approach, though.
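
In the same spirit as the union sketch above, an approximate Index.intersection over sorted float arrays might look like this (a hypothetical helper, not pandas API; the values match the toy demo further down):

import numpy as np

def approx_intersection(a, b, tol):
    # keep each element of sorted array `a` that has some element of
    # sorted array `b` within `tol` of it
    idx = np.searchsorted(b, a)
    left = np.clip(idx - 1, 0, len(b) - 1)
    right = np.clip(idx, 0, len(b) - 1)
    near = np.minimum(np.abs(a - b[left]), np.abs(a - b[right])) <= tol
    return a[near]

print(approx_intersection(np.array([0.1, 0.2, 0.3, 0.4]),
                          np.array([0.301, 0.401, 0.501, 0.601]), tol=0.01))
# [0.3 0.4]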

@shoyer
Member

shoyer commented Jun 22, 2018

See pandas-dev/pandas#9817 and pandas-dev/pandas#9530 for the relevant pandas issues.

@WeatherGod
Contributor

Ok, I see how you implemented it for pandas's reindex. You essentially inserted an inexact filter within .get_indexer(), and intersection() and union() use these methods, so in theory one could pipe a tolerance argument through them (as well as through the other set operations). The work needs to be expanded a bit, though, as get_indexer_non_unique() needs the tolerance parameter too, I think.

For xarray, though, I think we can work around backwards compatibility by having Dataset hold specialized subclasses of Index for floating-point dtypes that carry the needed changes to the Index class. We could give this specialized class some default tolerance (say, 100*finfo(dtype).resolution?), and its methods would use the stored tolerance by default, so it should be completely transparent to the end user (hopefully). This way, xr.open_mfdataset() would "just work".
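
For reference, that suggested default works out like this (assuming float64 coordinates):

import numpy as np

# 100 * the resolution of the dtype, the suggested default tolerance
print(100 * np.finfo(np.float64).resolution)  # 1e-13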

@shoyer
Member

shoyer commented Jun 22, 2018 via email

@WeatherGod
Contributor

Actually, I disagree. pandas's set-operation methods are mostly indexer-based. For union and intersection there is an optimization that dives down into some C code when the Indexes are monotonic, but everywhere else it all works off of results from get_indexer(). I have made a quick toy demo that seems to work. Note that I didn't know how to properly write a constructor for a subclassed Index, so I added the tolerance attribute after construction just for the purposes of this demo.

from __future__ import print_function
import warnings
from pandas import Index
import numpy as np

from pandas.indexes.base import is_object_dtype, algos, is_dtype_equal
from pandas.indexes.base import _ensure_index, _concat, _values_from_object, _unsortable_types
from pandas.indexes.numeric import Float64Index


def _choose_tolerance(this, that, tolerance):
    if tolerance is None:
        tolerance = max(this.tolerance, getattr(that, 'tolerance', 0.0))
    return tolerance


class ImpreciseIndex(Float64Index):
    def astype(self, dtype, copy=True):
        return ImpreciseIndex(self.values.astype(dtype=dtype, copy=copy), name=self.name,
                              dtype=dtype)

    @property
    def tolerance(self):
        return self._tolerance

    @tolerance.setter
    def tolerance(self, tolerance):
        self._tolerance = self._convert_tolerance(tolerance)

    def union(self, other, tolerance=None):
        self._assert_can_do_setop(other)
        other = _ensure_index(other)

        if len(other) == 0 or self.equals(other, tolerance=tolerance):
            return self._get_consensus_name(other)

        if len(self) == 0:
            return other._get_consensus_name(self)

        if not is_dtype_equal(self.dtype, other.dtype):
            this = self.astype('O')
            other = other.astype('O')
            return this.union(other, tolerance=tolerance)

        tolerance = _choose_tolerance(self, other, tolerance)

        indexer = self.get_indexer(other, tolerance=tolerance)
        indexer, = (indexer == -1).nonzero()

        if len(indexer) > 0:
            other_diff = algos.take_nd(other._values, indexer,
                                       allow_fill=False)
            result = _concat._concat_compat((self._values, other_diff))

            try:
                self._values[0] < other_diff[0]
            except TypeError as e:
                warnings.warn("%s, sort order is undefined for "
                              "incomparable objects" % e, RuntimeWarning,
                              stacklevel=3)
            else:
                types = frozenset((self.inferred_type,
                                   other.inferred_type))
                if not types & _unsortable_types:
                    result.sort()
        else:
            result = self._values

            try:
                result = np.sort(result)
            except TypeError as e:
                warnings.warn("%s, sort order is undefined for "
                              "incomparable objects" % e, RuntimeWarning,
                              stacklevel=3)

        # for subclasses
        return self._wrap_union_result(other, result)


    def equals(self, other, tolerance=None):
        if self.is_(other):
            return True

        if not isinstance(other, Index):
            return False

        if is_object_dtype(self) and not is_object_dtype(other):
            # if other is not object, use other's logic for coercion
            if isinstance(other, ImpreciseIndex):
                return other.equals(self, tolerance=tolerance)
            else:
                return other.equals(self)

        if len(self) != len(other):
            return False

        tolerance = _choose_tolerance(self, other, tolerance)
        diff = np.abs(_values_from_object(self) -
                      _values_from_object(other))
        return np.all(diff < tolerance)

    def intersection(self, other, tolerance=None):
        self._assert_can_do_setop(other)
        other = _ensure_index(other)

        if self.equals(other, tolerance=tolerance):
            return self._get_consensus_name(other)

        if not is_dtype_equal(self.dtype, other.dtype):
            this = self.astype('O')
            other = other.astype('O')
            return this.intersection(other, tolerance=tolerance)

        tolerance = _choose_tolerance(self, other, tolerance)
        try:
            indexer = self.get_indexer(other._values, tolerance=tolerance)
            indexer = indexer.take((indexer != -1).nonzero()[0])
        except:
            # duplicates
            # FIXME: get_indexer_non_unique() doesn't take a tolerance argument
            indexer = Index(self._values).get_indexer_non_unique(
                other._values)[0].unique()
            indexer = indexer[indexer != -1]

        taken = self.take(indexer)
        if self.name != other.name:
            taken.name = None
        return taken

    # TODO: Do I need to re-implement _get_unique_index()?

    def get_loc(self, key, method=None, tolerance=None):
        if tolerance is None:
            tolerance = self.tolerance
        if tolerance > 0 and method is None:
            method = 'nearest'
        return super(ImpreciseIndex, self).get_loc(key, method, tolerance)

    def get_indexer(self, target, method=None, limit=None, tolerance=None):
        if tolerance is None:
            tolerance = self.tolerance
        if tolerance > 0 and method is None:
            method = 'nearest'
        return super(ImpreciseIndex, self).get_indexer(target, method, limit, tolerance)


if __name__ == '__main__':
    a = ImpreciseIndex([0.1, 0.2, 0.3, 0.4])
    a.tolerance = 0.01
    b = ImpreciseIndex([0.301, 0.401, 0.501, 0.601])
    b.tolerance = 0.025
    print(a, b)
    print("a | b  :", a.union(b))
    print("a & b  :", a.intersection(b))
    print("a.get_indexer(b):", a.get_indexer(b))
    print("b.get_indexer(a):", b.get_indexer(a))

Run this and get the following results:

ImpreciseIndex([0.1, 0.2, 0.3, 0.4], dtype='float64') ImpreciseIndex([0.301, 0.401, 0.501, 0.601], dtype='float64')
a | b  : ImpreciseIndex([0.1, 0.2, 0.3, 0.4, 0.501, 0.601], dtype='float64')
a & b  : ImpreciseIndex([0.3, 0.4], dtype='float64')
a.get_indexer(b): [ 2  3 -1 -1]
b.get_indexer(a): [-1 -1  0  1]

This is mostly lifted from the Index base-class methods, just with the monotonic optimization path taken out and the tolerance argument supplied to the respective calls to get_indexer. Unless a tolerance is provided as a keyword argument, an operation uses the larger of the two objects' tolerances (falling back to 0.0 when the other object isn't an ImpreciseIndex).

@shoyer
Member

shoyer commented Jun 22, 2018

@WeatherGod One problem with your definition of tolerance is that it isn't commutative, even if both indexes have the same tolerance:

a = ImpreciseIndex([0.1, 0.2, 0.3, 0.4])
a.tolerance = 0.1
b = ImpreciseIndex([0.301, 0.401, 0.501, 0.601])
b.tolerance = 0.1
print(a.union(b))  # ImpreciseIndex([0.1, 0.2, 0.3, 0.4, 0.501, 0.601], dtype='float64')
print(b.union(a))  # ImpreciseIndex([0.1, 0.2, 0.301, 0.401, 0.501, 0.601], dtype='float64')

If you try a little harder, you could even have cases where the result has a different size, e.g.,

a = ImpreciseIndex([1, 2, 3])
a.tolerance = 0.5
b = ImpreciseIndex([1, 1.9, 2.1, 3])
b.tolerance = 0.5
print(a.union(b))  # ImpreciseIndex([1.0, 2.0, 3.0], dtype='float64')
print(b.union(a))  # ImpreciseIndex([1.0, 1.9, 2.1, 3.0], dtype='float64')

Maybe these aren't really problems in practice, but it's at least a little strange/surprising.

@WeatherGod
Contributor

WeatherGod commented Jun 22, 2018 via email

@shoyer
Member

shoyer commented Jun 23, 2018 via email

@WeatherGod
Contributor

Do we want to dive straight to that? Or would it make more sense first to submit some PRs piping support for a tolerance kwarg through more of the API? Or perhaps we should propose a "tolerance" attribute as an optional attribute that methods like get_indexer() could always check for? Not being a pandas dev, I am not sure how piecemeal we should approach this.

In addition, we are likely going to have to implement a decent chunk of code ourselves for compatibility's sake, I think.

@shoyer
Member

shoyer commented Jun 25, 2018 via email

@WeatherGod
Contributor

I have created a PR for my work-in-progress: pandas-dev/pandas#22043

@maschull

Any workaround for this issue?

@dcherian
Contributor

Yes: on xarray>=0.13.0, use xr.open_mfdataset(..., join="override"), assuming that all files are on the same coordinate system.
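
For example (the file pattern is hypothetical, and combine is specified explicitly here):

import xarray as xr

# files share the same lon/lat grid up to float noise and differ in
# time; requires xarray >= 0.13 per the comment above
ds = xr.open_mfdataset(
    'model_*.nc',
    combine='by_coords',
    join='override',   # take non-concatenated coords from the first file
)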

@maschull

ah wonderful! I will update to 0.13.0

@dcherian
Contributor

Reopening since we have a PR to fix this properly.
