Merge changes since fork #1
Merged
Conversation
* Fix precision drop when indexing datetime64 arrays. * minor style fix. * Add a link to github issue in numpy/numpy
* add main sections to toc * move whats new to "help and references" section
* Add seaborn import to toy weather data example. Fixes GH1944. It looks like this got inadvertently removed with the flake8 fix in GH1925. * Tweak stickler config * Tweak stickler again * temp tweak * more tweaking of config * more config tweak * try again * once again * alt disable * add comment * final line break
* isort configs * all files sorted * default to thirdparty * sorting after defaulting to third party * don't ignore sorting errors in Stickler (though I don't think it'll check anyway!) * lots of gaps? * double negative * @stickler-ci * isort home stretch * unsure why this is being picked up now * ok fun is fading now * Fixing style errors. * random char * lint
* Plot discrete colormap with proportional colorbar. Using 'cbar_kwargs' it is possible to customize colorbar ticks. * Fixing style errors. * Fixing style errors. * Fixed the line-length issue. Shortened intro texts. * added absolute_import to make stickler happy, can be removed later. * Updated script with changes requested. Renamed the script, script info updated. * Cosmetic changes
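For context, a minimal sketch of passing colorbar options through `cbar_kwargs` with a discrete colormap; the data here is hypothetical toy data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(10, 20), dims=("y", "x"))
# levels makes the colormap discrete; cbar_kwargs is forwarded to
# matplotlib's colorbar, so spacing and ticks can be customized there.
da.plot(levels=5, cbar_kwargs={"spacing": "proportional"})
```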
* flake8 only, without stepping on @jhamman's toes; flake8 takes 5.4s to run, so we can probably cut to just running `flake8` rather than an explanation of the diffs, etc. * isort instructions
* rolling_window for np.ndarray * Add pad method to Variable * Added rolling_window to DataArray and Dataset * remove pad_value option. Support dask.rolling_window * Refactor rolling.reduce * add as_strided to npcompat. Tests added for reduce(np.nanmean) * Support boolean in maybe_promote * move rolling_window into duck_array_op. Make DataArray.rolling_window public. * Added to_dataarray and to_dataset to rolling object. * Use pad in rolling to make it compatible with pandas. Expose pad_with_fill_value to public. * Refactor rolling * flake8 * Added a comment for dask's pad. * Use fastpath in rolling.to_dataarray * Doc added. * Revert not to use fastpath * Remove maybe_promote for Boolean. Some improvements based on @shoyer's review. * Update test. * Bug fix in test_rolling_count_correct * fill_value for boolean array * rolling_window(array, axis, window) -> rolling_window(array, window, axis) * support stride in rolling.to_dataarray * flake8 * Improve doc. Add DataArrayRolling to api.rst * Improve docs in common.rolling. * Expose groupby docs to public * Default fill_value=dtypes.NA, stride=1. Add comment for DataArrayRolling. * Add fill_value option to rolling.to_dataarray * Convert non-numeric array in reduce. * Fill_value = False for boolean array in rolling.reduce * Support old numpy plus bottleneck combination. Suppress warning for all-nan slice reduce. * flake8 * Add benchmark * Dataset.count. Benchmark * Classize benchmark * Decoratorize for asv benchmark * Classize benchmarks/indexing.py * Working with nanreduce * Support .sum for object dtype. * Remove unused if-statements. * Default skipna for rolling.reduce * Pass tests. Test added to make sure consistency with pandas' behavior. * Delete duplicate file. flake8 * flake8 again * Working with numpy<1.13 * Revert "Classize benchmarks/indexing.py" This reverts commit 4189d71. * rolling_window with dask.ghost * Optimize rolling.count. * Fixing style errors. * Remove unused npcompat.nansum etc * flake8 * require_dask -> has_dask * npcompat -> np * flake8 * Skip tests for old numpy. * Improve doc. Optimize missing._get_valid_fill_mask * to_dataarray -> construct * remove assert_allclose_with_nan * Fixing style errors. * typo * `to_dataset` -> `construct` * Update doc * Change boundary and add comments for dask_rolling_window. * Refactor dask_array_ops.rolling_window and np_utils.rolling_window * flake8 * Simplify tests * flake8 again. * cleanup rolling_window for dask. * remove duplicates * remove duplicate * flake8 * delete unnecessary file.
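A minimal sketch of the rolling `construct` API this work introduced, using hypothetical toy data: each window is exposed along a new dimension that can then be reduced.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="time")
# View each length-3 window along a new "window" dimension, then reduce it.
windowed = da.rolling(time=3).construct("window")
rolling_mean = windowed.mean("window")
```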
* Fix doc for missing values. * Remove some methods from api-hidden * Fix for dask page.
* Update numpy on RTD * update mpl too
* Add x,y kwargs for plot.line(). Supports both 1D and 2D DataArrays as input. Change variable names to make code clearer: 1. set xplt, yplt to be values that are passed to ax.plot() 2. xlabel, ylabel are axes labels 3. xdim, ydim are dimension names * Respond to comments. * Only allow one of x or y to be specified. * fix docs * Follow suggestions. * Follow suggestions.
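A minimal sketch of the new `x`/`y` keywords to `plot.line()`, with hypothetical toy data; for a 2D array, the named coordinate goes on the x axis and one line is drawn per value of the remaining dimension.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.random.randn(3, 10),
    dims=("lat", "time"),
    coords={"lat": [10, 20, 30], "time": np.arange(10)},
)
# Put the "time" coordinate on the x axis; one line per "lat" value.
da.plot.line(x="time")
```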
* Check for minimum Zarr version. ws * Exclude version check from coverage. additional ws.
* Start working * First support of lazy vectorized indexing. * Some optimization. * Use unique to decompose vectorized indexing. * Consolidate vectorizedIndexing * Support vectorized_indexing in h5py * Refactoring backend array. Added indexing.decompose_indexers. Drop unwrap_explicit_indexers. * typo * bugfix and typo * Fix based on @WeatherGod comments. * Use enum-like object to indicate indexing-support types. * Update test_decompose_indexers. * Bugfix and benchmarks. * fix: support outer/basic indexer in LazilyVectorizedIndexedArray * More comments. * Fixing style errors. * Remove unintended duplicate * combine indexers for on-memory np.ndarray. * fix whats new * fix pydap * Update comments. * Support VectorizedIndexing for rasterio. Some bugfix. * flake8 * More tests * Use LazilyIndexedArray for scalar array instead of loading. * Support negative step slice in rasterio. * Make slice-step always positive * Bugfix in slice-slice * Add pydap support. * Rename LazilyIndexedArray -> LazilyOuterIndexedArray. Remove duplicate in zarr.py * flake8 * Added transpose to LazilyOuterIndexedArray
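For context, a minimal sketch of the user-facing vectorized (pointwise) indexing that this work makes lazy on backends; the toy data is hypothetical.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12).reshape(3, 4), dims=("x", "y"))
# Pointwise selection: the indexers share a new "points" dimension, so three
# individual (x, y) locations are picked out rather than a full outer product.
points = da.isel(
    x=xr.DataArray([0, 1, 2], dims="points"),
    y=xr.DataArray([1, 3, 0], dims="points"),
)
```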
…#1963) * Better error message for vectorize=True in apply_ufunc with old numpy * Typo otype -> otypes * add missing __future__ imports * all_output_core_dims -> all_core_dims
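For context, a minimal sketch of the `vectorize=True` mode of `xr.apply_ufunc` that the improved error message covers; the scalar function and data here are hypothetical.

```python
import numpy as np
import xarray as xr

def add_one(scalar):
    return scalar + 1

da = xr.DataArray(np.arange(4.0), dims="x")
# vectorize=True wraps the scalar function (via np.vectorize) so it is
# applied elementwise over the array.
result = xr.apply_ufunc(add_one, da, vectorize=True)
```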
* Resolve more warnings in the xarray test suite Currently down from 255 to 104 warnings. * Silence irrelevant warning from RasterIO * add whats-new for rasterio warnings * remove redundant import
* use stand-alone netcdftime from conda-forge * remove temporary netcdftime development build * more docs, bump for tests * update docs a bit
* distributed tests that write dask arrays * Change zarr test to synchronous API * initial go at __setitem__ on array wrappers * fixes for scipy * cleanup after merging with upstream/master * needless duplication of tests to work around pytest bug * use netcdf_variable instead of get_array() * use synchronous dask.distributed test harness * cleanup tests * per scheduler locks and autoclose behavior for writes * HDF5_LOCK and CombinedLock * integration test for distributed locks * more tests and set isopen to false when pickling * Fixing style errors. * ds property on DataStorePickleMixin * stickler-ci * compat fixes for other backends * HDF5_USE_FILE_LOCKING = False in test_distributed * style fix * update tests to only expect netcdf4 to work, docstrings, and some cleanup in to_netcdf * Fixing style errors. * fix imports after merge * fix more import bugs * update docs * fix for pynio * cleanup locks and use pytest monkeypatch for environment variable * fix failing test using combined lock
* einsum for xarray * whats new * Support dask for xr.dot. * flake8. Add some error messages. * fix for stickler-ci * Use counter * Always allow dims=None for xr.dot. * Simplify logic. More comments. * Support variable in xr.dot * bug fix due to the undefined order of set * Remove unused casting to set
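A minimal sketch of `xr.dot` with hypothetical toy arrays; with no dimensions given it contracts over the shared dimension, much like an einsum over matching labels.

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.random.randn(3, 4), dims=("x", "y"))
b = xr.DataArray(np.random.randn(4, 5), dims=("y", "z"))
# Contracts over the shared "y" dimension, returning dims ("x", "z").
c = xr.dot(a, b)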
* Support __array_ufunc__ for xarray objects. This means NumPy ufuncs are now supported directly on xarray.Dataset objects, and opens the door to supporting computation on new data types, such as sparse arrays or arrays with units. Fixes GH1617 * add TODO note on xarray objects in out argument * Satisfy stickler for __eq__ overload * Move dummy arithmetic implementations to SupportsArithmetic * Try again to disable flake8 warning * Disable py3k tool on stickler-ci * Move arithmetic to its own file. * Remove unused imports * Add note on backwards incompatible changes from apply_ufunc
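A minimal sketch of the resulting behaviour, with a hypothetical Dataset: NumPy ufuncs dispatch to xarray objects directly via `__array_ufunc__`.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"angle": ("x", np.array([0.0, np.pi / 2, np.pi]))})
# The ufunc is applied to each data variable and an xarray.Dataset is returned.
result = np.sin(ds)
```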
* Docs and other minor updates for v0.10.2. * Update build deps * Misc doc fixes * Remove failure on warning in doc build
* Construct slices lazily. * Additional speedup * Move some lines in DataArrayRolling into __iter__. Added a benchmark for long arrays. * Bugfix in benchmark * remove underscores.
We made this change several years ago now -- it's no longer timely news to share with users.
* ENH: Plotting for groupby_bins. DataArrays created with e.g. groupby_bins have coords containing pd._libs.interval.Interval objects. For plotting, the pd._libs.interval.Interval is replaced with the interval's center point. '_center' is appended to the label * changed pd._libs.interval.Interval to pd.Interval * Assign new variable with _interval_to_mid_points instead of mutating original variable. Note that this changes the type of xplt from DataArray to np.array in the line function. * '_center' added to label only for 1d plot * added tests * missing whitespace * Simplified test * simplified tests once more * 1d plots now default to a step plot. New bool keyword `interval_step_plot` to turn it off. * non-uniform bin spacing for pcolormesh * Added step plot function * bugfix: linestyle == '' results in no line plotted * Adapted to upstream changes * Added _resolve_intervals_2dplot function, simplified code * Added documentation * typo in documentation * Fixed bug introduced by upstream change * Refactor out utility functions. * Fix test. * Add whats-new. * Remove duplicate whats new entry. :/ * Make things neater.
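A minimal sketch of the case this enables, using hypothetical toy data: the binned coordinate holds `pd.Interval` objects, which plotting now resolves to bin centers (defaulting to a step plot for 1D data).

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.random.randn(100), dims="x", coords={"x": np.linspace(0, 10, 100)}
)
# The resulting "x_bins" coordinate contains pd.Interval objects.
binned = da.groupby_bins("x", bins=5).mean()
binned.plot()
```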
* iterate over data_vars * whats new
* facetgrid: properly support cbar_kwargs. * Update doc/plotting.rst * Update xarray/plot/facetgrid.py * Update whats-new.rst
* Remove .T as shortcut for transpose() * fix whats-new * remove Dataset.__dir__ * Update whats-new.rst * Update whats-new.rst
…lues (#2520) Fixes GH1267
* Added a global option to always keep or discard attrs. * Updated docs and options docstring to describe new keep_attrs global option * Updated all default keep_attrs arguments to check global option * New test to check attributes are retained properly * Implemented shoyer's suggestion so attribute permanence test now passes for reduce methods * Added tests to explicitly check that attrs are propagated correctly * Updated what's new with global keep_attrs option * Bugfix to stop failing tests in test_dataset * Test class now inherits from object for python2 compatibility * Fixes to documentation * Removed some unnecessary checks of the global keep_attrs option * Removed whitespace typo I just created * Removed some more unnecessary checks of global keep_attrs option (pointed out by dcherian)
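A minimal sketch of the global option in use, with a hypothetical Dataset; inside the context manager, reductions keep attributes instead of dropping them.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"t": ("time", np.arange(5.0))}, attrs={"title": "demo"})
# Reductions normally drop attrs; the global option changes the default.
with xr.set_options(keep_attrs=True):
    reduced = ds.mean("time")
print(reduced.attrs)  # {'title': 'demo'}
```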
* Switch enable_cftimeindex to True by default * Add a friendlier error message when plotting cftime objects * Mention that the non-standard calendars are used in climate science * Add GH issue references to docs * Deprecate enable_cftimeindex option * Add CFTimeIndex.to_datetimeindex method * Add friendlier error message for resample * lint * Address review comments * Take into account microsecond attribute in cftime_to_nptime * Add test for decoding dates with microsecond-resolution units This would have failed before including the microsecond attribute of each date in cftime_to_nptime in eaa4a44. * Fix typo in time-series.rst * Formatting * Fix test_decode_cf_datetime_non_iso_strings * Prevent warning emitted from set_options.__exit__
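A minimal sketch of the new `CFTimeIndex.to_datetimeindex` method, assuming the cftime library is installed; `cftime_range` is used only to build example non-standard-calendar dates.

```python
import xarray as xr

# Dates in a non-standard calendar are decoded to a CFTimeIndex; when the
# dates are representable as nanosecond datetime64, it can be converted.
times = xr.cftime_range("2000-01-01", periods=4, freq="D", calendar="noleap")
index = times.to_datetimeindex()
```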
* fixed typo * added test for saving opened zarr dataset * modified test for saving opened zarr dataset * allow different last chunk * removed whitespace * modified error messages * fixed pep8 issues * updated whats-new
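A minimal sketch of the round-trip being tested here, assuming the zarr package is installed; the store paths are hypothetical.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(10))})
ds.to_zarr("example.zarr")
# Re-open the store and write its contents out to a second store.
reopened = xr.open_zarr("example.zarr")
reopened.to_zarr("copy.zarr")
```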
* Start deprecating inplace. * remove warnings from tests. * this commit silences nearly all warnings. * Add whats-new. * Add a default kwarg to _check_inplace and use for Dataset.update. * Major fix! * Add stacklevel * Tests: Less aggressive warning filter + fix unnecessary inplace. * revert changes to _calculate_binary_op
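A minimal sketch of the pattern the deprecation encourages, using `rename` as one representative method and a hypothetical Dataset.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(3))})
# Instead of ds.rename({"a": "b"}, inplace=True), assign the returned object.
ds = ds.rename({"a": "b"})
```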
This has been deprecated since xarray 0.10. I also added support for passing a mapping ``{dim: freq}`` as the first argument.
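Assuming this refers to `resample`, a minimal sketch of the keyword and mapping forms with hypothetical toy data; the mapping form is convenient when the dimension name is held in a variable.

```python
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset(
    {"t": ("time", np.arange(60.0))},
    coords={"time": pd.date_range("2000-01-01", periods=60)},
)
# The two spellings are equivalent.
monthly = ds.resample(time="1M").mean()
monthly_too = ds.resample({"time": "1M"}).mean()
```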
* putting up for discussion: stop loading tutorial data by default * add tutorial.open_dataset * fix typo * add test for cached tutorial data and minor doc fixes
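A minimal sketch of the new entry point; the dataset is downloaded on first use and cached locally, so subsequent calls reuse the cached copy.

```python
import xarray as xr

# Fetches (or reuses the cached copy of) the named example dataset.
ds = xr.tutorial.open_dataset("air_temperature")
```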
…2555) * Add libnetcdf, libhdf5, pydap and cfgrib to xarray.show_versions() * More fixup * Show full Python version string
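A minimal sketch of the diagnostic helper being extended here.

```python
import xarray as xr

# Prints the versions of xarray and its dependencies (now including
# libnetcdf, libhdf5, pydap and cfgrib) for inclusion in bug reports.
xr.show_versions()
```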