
Parallel interpolation #108

Merged — 101 commits merged into master from parallel-interpolation on Aug 29, 2020
Conversation

johnomotani (Collaborator)

Provides methods: BoutDataArray.getHighRes() gets a version of a variable interpolated in the parallel direction to increase the poloidal resolution, and BoutDataset.getHighResVars(['var1', 'var2', ...]) gets a Dataset with high-resolution versions of a list of variables. An example from TORPEX simulations is shown in the attached figures: "base" (before interpolation) and "highres" (after).
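A minimal usage sketch (the file path, variable names, and resolution factor are illustrative, and the accessor-style .bout API is assumed):

```python
from xbout import open_boutdataset

# Open a BOUT++ dataset (path is illustrative)
ds = open_boutdataset("data/BOUT.dmp.*.nc")

# High-resolution version of a single variable
# (the factor-of-8 argument is an assumption)
n_highres = ds["n"].bout.getHighRes(8)

# Dataset containing high-resolution versions of several variables
ds_highres = ds.bout.getHighResVars(["n", "T"])
```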

Also adds a feature to the tests: a test can be marked as long with @pytest.mark.long, in which case it is skipped by default. Long tests are run if the --long argument is passed to pytest, and the Travis tests do run them.
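For reference, this kind of marker is usually wired up in conftest.py with the standard pytest pattern; a sketch under that assumption (not necessarily the exact code in this PR):

```python
# conftest.py - sketch of the usual --long option pattern
import pytest

def pytest_addoption(parser):
    parser.addoption("--long", action="store_true", default=False,
                     help="run tests marked as long")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--long"):
        return  # --long given on the command line: run everything
    skip_long = pytest.mark.skip(reason="need --long option to run")
    for item in items:
        if "long" in item.keywords:
            item.add_marker(skip_long)
```

A test decorated with @pytest.mark.long is then skipped unless pytest is invoked with --long.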

Includes #107; this PR will be only 1,384 additions and 363 deletions once that is merged.

Need to re-chunk grid after using xr.concat to re-join the grid Dataset
with upper boundary points removed. Otherwise 'y' is re-chunked into at
least two parts.
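A sketch of the re-chunking described above (names are illustrative):

```python
import xarray as xr

# Re-join the grid pieces with upper boundary points removed, then restore a
# single chunk in y; without the explicit chunk() call, 'y' ends up split
# into at least two chunks
grid = xr.concat([lower_part, upper_part], dim="y")
grid = grid.chunk({"y": grid.sizes["y"]})
```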
If set to false, does not store the results in the Dataset, in order to
save memory.
Making zShift a coordinate means that the Dataset is no longer required
for the interpolation, so move the method to the BoutDataArray.
Provides a workaround for when the boundary cells were not saved in the
data file.
Use coordinate ranges stored in Region instead.
Instead, will provide functionality to save the high-resolution
variables into a new Dataset.
Distribute the output points of the high-resolution field like a
standard BOUT++ cell-centred variable.
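A sketch of the point placement (all names and the factor n are illustrative):

```python
import numpy as np

# n output points per input cell, placed like BOUT++ cell-centred values:
# at (i + 0.5) * dy_fine from the lower edge, not at the cell edges
n = 8
dy_fine = dy / n
y_fine = y_lower_edge + (np.arange(ny * n) + 0.5) * dy_fine
```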
Increase jyseps*, ny, ny_inner, MYSUB to reflect the new resolution.
This method only actually does one thing, and is not required to be
called before parallel interpolation, so rename it to be clearer, and make
the 'n' argument non-optional.
By converting the DataArrays in each region into Datasets, they can be
combined with xarray.combine_by_coords. It is then natural to return a
Dataset, making it more straightforward to merge the results of calling
this method for several variables into a single Dataset.
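A sketch of the pattern (illustrative names):

```python
import xarray as xr

# Interpolate each region separately, promote each DataArray to a Dataset,
# then let xarray line the pieces up by their coordinates
parts = [da_region.to_dataset(name="n") for da_region in interpolated_regions]
result = xr.combine_by_coords(parts)  # a Dataset covering all regions
```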
Information in regions is not correct for the high-res variable, and
needs to be recalculated later.
In BoutDataArray.highParallelRes(), copy the attrs from the first part
of the variable to the combined Dataset.
After interpolating to higher parallel resolution, a Dataset has the
correct coordinates, but no regions. This commit makes
add_toroidal_geometry_coords and add_s_alpha_geometry_coords skip
adding coordinates if the coordinates are already present, so that the
functions can be applied again to interpolated Datasets. At the moment,
the only thing this does is to re-create the regions.
Need to slice hthe with 'y' instead of 'theta' if it was read from a grid
file.
Adding 'dy' as a coordinate allows it to be assembled correctly when
DataArrays are combined with combine_by_coords, which is much more
straightforward than recalculating it from the y-coordinate. When
initialising a Dataset from the interpolated variables, will demote 'dy'
to a variable again.
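A sketch of the dy handling (illustrative names):

```python
import xarray as xr

# Promote 'dy' to a coordinate so combine_by_coords assembles it correctly...
pieces = [piece.set_coords("dy") for piece in region_datasets]
combined = xr.combine_by_coords(pieces)

# ...then demote it to an ordinary variable again when building the Dataset
combined = combined.reset_coords("dy")
```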
Add method BoutDataset.getHighParallelResVars() that takes a list of
variables, and returns a new BoutDataset containing those variables with
an increased parallel resolution. The new Dataset is a fully valid
BoutDataset, so all plotting methods, etc. work.
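A usage sketch (variable names are made up, the accessor-style API is assumed, and animate2D is shown only as an illustration of a plotting method):

```python
# Interpolate two variables to higher parallel resolution
ds_highres = ds.bout.getHighParallelResVars(["n", "T"])

# The result is a fully valid BoutDataset, so plotting/animation
# methods work on it as usual
ds_highres["n"].bout.animate2D()
```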
xarray.testing also provides an assert_allclose function, so it is
clearer to be explicit about which module the function belongs to.
Attributes like 'direction_y' only make sense for a particular
DataArray, not the whole Dataset.
Some coordinates corresponding to x (calculated from the index), y
(calculated from dy) and z (calculated from ZMIN and ZMAX) can always be
created, although they might be named differently. So create them in the
top-level apply_geometry() function, not the registered functions for
particular geometries.
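A sketch of the always-constructible coordinates (names and details illustrative):

```python
import numpy as np

# Coordinates that can always be created, whatever the geometry:
x = np.arange(nx)                              # from the index
y = np.cumsum(dy) - dy / 2                     # cell centres, integrated from dy
z = ZMIN + (ZMAX - ZMIN) * np.arange(nz) / nz  # from ZMIN and ZMAX
```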
Previously were off by half a grid-cell.
Needed to pass checks in toFieldAligned().
For interpolation, where there is a physical boundary, want the limit of
the coordinate (that is stored in the region) to be the global
coordinate value at the boundary, not at the grid edge (which was what
was stored previously).
Conflicts:
    xbout/boutdataset.py
    xbout/geometries.py
    xbout/tests/test_boutdataset.py
The issue with merging attrs has been fixed in xarray-0.16.0, so the
workaround can be removed, as well as fixing a problem with inconsistent
regions under the new default compat="no_conflicts" for xarray's
combine_by_coords().
The result to be returned is updated_ds; checking ds meant always adding
a new xcoord to updated_ds, even if one had already been added by
add_geometry_coords().
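A sketch of the fix (names follow the commit message; details are illustrative):

```python
# Before (buggy): membership was tested on the input 'ds', so 'xcoord' was
# added to 'updated_ds' even when add_geometry_coords() had already added it.
# After: test the Dataset that is actually returned.
if xcoord not in updated_ds.coords:
    updated_ds = updated_ds.assign_coords({xcoord: xcoord_values})
```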
Ensure 'metadata', 'options', 'regions' and 'geometry' attributes are
always added to all coordinates. Ensures consistency between original
and saved-and-reloaded Datasets, allowing some workarounds in tests to
be removed.
@johnomotani (Collaborator, Author)

Merge conflicts are fixed now. I also updated some handling of attrs, because xarray-0.16.0 fixed some attrs propagation, allowing the removal of some workarounds. In the process, I've made all 'coordinate' variables always have "metadata", "options", "regions" and "geometry" attrs, like the 'data_vars' variables do, which makes things more consistent between a BoutDataset opened from *.dmp.*.nc files and one saved and re-loaded by xBOUT, simplifying the save-and-reload tests.

Adding attrs to the 'ycoord' coordinate in
d062fa9 made interpolate_parallel()
very slow. Don't understand why, but adding 'da = da.compute()' before
the interpolation restores the speed.
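A sketch of where the workaround sits (names are illustrative):

```python
# Workaround: force computation before interpolating; for reasons not yet
# understood, this restores the speed of the subsequent interp() call
da = da.compute()
da = da.interp({ycoord: y_fine}, assume_sorted=True)
```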
@johnomotani (Collaborator, Author)

That was a very strange issue. Tests failed on d062fa9 because they timed out. It turns out adding attrs to the theta coordinate made .interp() really slow in BoutDataArray.interpolate_parallel(). Adding a da = da.compute() before the da.interp() call fixed the slow-down. I guess adding attrs to the theta coordinate somehow messed up the dask task-graph, but I don't even begin to understand how, or why calling da = da.compute() would fix it!

xarray-0.16.0 is required now; older versions will fail the tests.
xarray requires a dask version less than 6 months old, so 0.16.0 requires
dask-2.10.
codecov-commenter commented Jul 30, 2020

Codecov Report

Merging #108 into master will increase coverage by 0.46%.
The diff coverage is 81.59%.


@@            Coverage Diff             @@
##           master     #108      +/-   ##
==========================================
+ Coverage   70.83%   71.30%   +0.46%     
==========================================
  Files          14       14              
  Lines        1519     1697     +178     
  Branches      306      359      +53     
==========================================
+ Hits         1076     1210     +134     
- Misses        353      382      +29     
- Partials       90      105      +15     
| Impacted Files | Coverage Δ |
|---|---|
| xbout/plotting/animate.py | 42.02% <0.00%> (ø) |
| xbout/plotting/plotfuncs.py | 6.54% <0.00%> (ø) |
| xbout/plotting/utils.py | 16.66% <18.18%> (ø) |
| xbout/load.py | 81.39% <50.00%> (-0.28%) ⬇️ |
| xbout/boutdataset.py | 74.68% <75.43%> (-3.22%) ⬇️ |
| xbout/boutdataarray.py | 82.65% <87.01%> (+2.65%) ⬆️ |
| xbout/utils.py | 92.00% <87.09%> (-8.00%) ⬇️ |
| xbout/geometries.py | 78.57% <88.23%> (+4.46%) ⬆️ |
| xbout/region.py | 88.40% <90.24%> (-2.41%) ⬇️ |

... and 1 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update a89feb6...8e3e14a.

Removing attrs from y-coordinate means we do not need to call
da.compute(), which would load the entire result into memory. It is
better not to, as the result may be sliced or processed somehow later
and we don't want to force loading in case the variable is too large to
fit in memory.
This was intended to be moved from add_toroidal_geometry_coords() into
apply_geometry(), but ended up being added back into
add_toroidal_geometry_coords() in a merge.
The coordinates of a DataArray that has been interpolated will have
attrs that are not consistent with the new DataArray. This commit
updates _update_metadata_increased_resolution() to also replace the
attrs of the DataArray's coords with the attrs of the new DataArray.
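A sketch of the shape of that change (not the exact implementation):

```python
import copy

def _update_metadata_increased_resolution(da, n):
    # ... existing updates for the factor-n resolution increase ...

    # Also replace each coordinate's attrs with the new DataArray's attrs,
    # so they stay consistent after interpolation
    for coord in da.coords:
        da[coord].attrs = copy.copy(da.attrs)
    return da
```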
@johnomotani (Collaborator, Author)

As @TomNicholas noticed when discussing the performance regression issue today, using da = da.compute() forces computing and loading into memory the whole result, which we'd like to avoid if possible - especially if the variable being interpolated is too big to fit into memory after interpolation. I poked a bit more, and found that deleting the "regions" attr (which is a dict of Region objects, so I guess relatively big) on the 'theta' coordinate also fixed the performance regression. Coordinates always have their data in numpy arrays, not dask arrays. I still can't manage to make a simple reproducer, but apparently attrs on a numpy-backed variable passed to dask.array.map_blocks() (which is used down in the xarray interp methods) can slow things down by a factor of 10. This was for a unit test case with stupidly small arrays and stupidly small chunks, so it might not translate to a normal dataset, but there aren't that many chunks, and the test was taking ~35s in all, which seems high for the tiny array it's working on (although there are a lot of variables that I added as coordinates that also need interpolating). With the fix, the unit test only takes ~11s. Some of the cases of the same test (see below) with vars_to_interpolate=... (so all variables in the Dataset are interpolated) were taking ~6mins before the fix, and ~30s now.

@TomNicholas - I'd like to put this as an FYI issue on xarray, but without being able to reproduce it for the devs there it seems like a waste of time. The case I've been checking is `time pytest --long -s -k "test_interpolate_parallel[vars_to_interpolate0-False-False-guards0]"`, and commenting out this bit

```python
# This prevents da.interp() from being very slow.
# Apparently large attrs (i.e. regions) on a coordinate which is passed as an
# argument to dask.array.map_blocks() slow things down, maybe because coordinates
# are numpy arrays, not dask arrays?
# Slow-down was introduced in d062fa9e75c02fbfdd46e5d1104b9b12f034448f when
# _add_attrs_to_var(updated_ds, ycoord) was added in geometries.py
da[ycoord].attrs = {}
```

should re-introduce the slow-down if anyone has time to take a look.

@TomNicholas (Collaborator)

Thanks for this report @johnomotani. This behaviour is definitely something I would like to understand, and ideally flag up with an issue on xarray, but I think that would need a reproducible example. With this I can come back to it later at least.

Do you want me to review this so you can merge it and move on?

@johnomotani (Collaborator, Author)

> Do you want me to review this so you can merge it and move on?

That would be great! 👍

Staggered-grid cases are not implemented yet; they would need to use
zShift_CELL_XLOW or zShift_CELL_YLOW (which may or may not be present in
the Dataset, depending on the PhysicsModel).
Not all dump files (especially older ones) have cell_location attrs
written, so if none is present, assume it's OK to do toFieldAligned and
fromFieldAligned with the cell-centre zShift since we cannot check.
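A sketch of the kind of lookup the staggered cases would eventually need, plus the cell-centre fallback described above (the mapping and fallback are assumptions based on the commit messages):

```python
# Pick the zShift variable matching the DataArray's cell location; older dump
# files may lack the cell_location attr, in which case assume CELL_CENTRE
location = da.attrs.get("cell_location", "CELL_CENTRE")
zshift_name = {
    "CELL_CENTRE": "zShift",
    "CELL_XLOW": "zShift_CELL_XLOW",
    "CELL_YLOW": "zShift_CELL_YLOW",
}.get(location)
if zshift_name is None or zshift_name not in ds:
    raise ValueError(f"No zShift available for cell_location={location}")
```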
The index-value coordinates are now added for dimensions without
coordinates after the geometry is applied, so no 'x' coordinate has been
created to drop.
@johnomotani johnomotani force-pushed the parallel-interpolation branch from 43fe97b to 585330f Compare August 17, 2020 14:43
The cell_location attribute only makes sense for a DataArray, not a whole
Dataset, so remove it in the to_dataset() method.
Seems to be required at the moment to avoid an import error in the
minimum-package-versions test.
@johnomotani (Collaborator, Author)

Tests pass now! Any more review comments before I merge?

@johnomotani johnomotani merged commit 2c8ae2a into master Aug 29, 2020
@johnomotani johnomotani deleted the parallel-interpolation branch August 29, 2020 18:24
Labels: enhancement (New feature or request)