Add histogram method #4610
Update on this: in a PR to xhistogram we have a rough proof-of-principle for a dask-parallelized, axis-aware implementation of N-dimensional histogram calculations, suitable for eventual integration into xarray. We still need to complete the work over in xhistogram, but for now I want to suggest what I think the eventual API should be for this functionality within xarray:

**Top-level function**: xhistogram's xarray API is essentially one top-level `histogram` function.

**New methods**: we could also add a corresponding `.hist()` method.

**The existing** ...
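For reference, here is a minimal sketch of how xhistogram's existing top-level function is called today (the function the proposal above builds on); the data, bin edges, and dimension names are made up for illustration:

```python
import numpy as np
import xarray as xr
from xhistogram.xarray import histogram

# Synthetic (time, lat, lon) field; the DataArray needs a name because
# xhistogram uses it to label the output bin dimension.
temp = xr.DataArray(
    np.random.randn(100, 20, 30),
    dims=["time", "lat", "lon"],
    name="temperature",
)

temp_bins = np.linspace(-4, 4, 33)  # explicit bin edges

# Histogram over time and lon only, keeping lat as a dimension
hist = histogram(temp, bins=[temp_bins], dim=["time", "lon"])
print(hist.dims)  # e.g. ('lat', 'temperature_bin')
```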
Should be fine I think. Matplotlib explains how to plot from precomputed counts:

```python
counts, bins = np.histogram(data)
plt.hist(bins[:-1], bins, weights=counts)
```

Some reading if you want to do the plot by hand:
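A self-contained version of that snippet, assuming plain in-memory NumPy data (the bin count of 50 is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(10_000)

# Compute the histogram once, then hand the precomputed counts to matplotlib.
counts, bins = np.histogram(data, bins=50)
plt.hist(bins[:-1], bins, weights=counts)
plt.xlabel("value")
plt.ylabel("count")
plt.show()
```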
What about a list of ...? @dougiesquire implemented this in https://github.com/xarray-contrib/xskillscore/blob/2217b58c536ec1b3d2c42265ed6689a740c2b3bf/xskillscore/core/utils.py#L133

EDIT: seeing now that this issue and #5400 aim to implement xr.DataArray.hist only. xr.Dataset would also be nice :)
@aaronspring I'm a bit confused by your comment. The (proposed) API in #5400 does have a Dataset method:

```python
ds = open(file, vars=['temperature', 'salinity'])
ds.hist()  # creates 2D temperature-salinity histogram
```

That's not the same thing as using Datasets as bins though, and I'm not really sure I understand the use case for that or what it allows. You can already choose different bins for each input variable; are you saying it would be neater if you could assign bins to input variables via a dict-like dataset, rather than the arguments being in the corresponding positions in a list? The example you linked doesn't pass datasets as bins either; it just loops over multiple input datasets and assumes you want to calculate joint histograms between those datasets.
I tried to show in https://gist.github.com/aaronspring/251553f132202cc91aadde03f2a452f9 how I would like to use xr.Datasets as inputs. I also tried to show in the gist that it could be nice to allow xr.Datasets as bins if the inputs are xr.Datasets.
I cannot find this in #5400; I should check out and run the code locally. Yep, the example xskillscore code posted doesn't allow ND bins, forgot that. Correct, in my head thinking about the future it does. https://github.com/xarray-contrib/xskillscore/blob/6f7be06098eefa1cdb90f7319f577c274621301c/xskillscore/core/probabilistic.py#L498 takes xr.Datasets as bins, and in a previous version we used
#5400 is right now just a skeleton; it won't compute anything other than a
One of the bullets above is for N-dimensional bins, passed as xr.DataArrays. If we allow multidimensional xr.DataArrays as bins, then you could pass bins which change at each quantile in that way. What I'm unclear about is what you want to achieve by inputting an xarray.Dataset that couldn't be done with ND xr.DataArrays as both data and bins.
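To make the ND-bins idea concrete, here is a minimal NumPy-only sketch (synthetic data) of what "bin edges that vary along another dimension" means; a native implementation would presumably avoid the explicit Python loop:

```python
import numpy as np

rng = np.random.default_rng(0)
# Data with shape (level, sample), where each level has a different spread
data = rng.normal(size=(5, 1000)) * np.arange(1, 6)[:, None]

# One set of bin edges per level -> a (level, nbins+1) array, i.e. "2D bins"
edges = np.stack([np.linspace(-4 * s, 4 * s, 17) for s in range(1, 6)])

# What ND-bin support would compute, written as an explicit loop for clarity
counts = np.stack(
    [np.histogram(data[k], bins=edges[k])[0] for k in range(data.shape[0])]
)
print(counts.shape)  # (5, 16)
```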
With Dataset bins I want to have different bin_edges for each dataset. If bins can only be a DataArray, I cannot have this. Can I?
For each dataset in what? Do you mean for each input DataArray? I'm proposing an API in which you either pass multiple DataArrays as data (what xhistogram currently accepts), or you can call `.hist()` on a Dataset.

If bins can be a list of multiple DataArrays then you can have this, right? i.e. `histogram(da1, da2, bins=[bins_for_da1, bins_for_da2])`, where
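For what it's worth, this pattern already works with 1D per-variable bin edges in xhistogram today; a quick sketch with made-up data (the open question above is whether those per-variable bins can themselves be N-dimensional):

```python
import numpy as np
import xarray as xr
from xhistogram.xarray import histogram

temp = xr.DataArray(np.random.randn(500), dims=["sample"], name="temperature")
salt = xr.DataArray(np.random.randn(500), dims=["sample"], name="salinity")

# Different 1D bin edges for each input variable, matched by position
temp_bins = np.linspace(-3, 3, 31)
salt_bins = np.linspace(-5, 5, 21)

joint = histogram(temp, salt, bins=[temp_bins, salt_bins])
print(joint.dims)  # e.g. ('temperature_bin', 'salinity_bin')
```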
I am unsure about this and cannot manage to put my thoughts down precisely. Calculating a contingency table, for instance, between two multi-variable inputs: maybe @dougiesquire can phrase this more precisely.
We have a very thin wrapper of xhistogram in xskillscore for calculating histograms from Datasets. It simply calculates the histograms independently for all variables that exist in all Datasets. This makes sense in the context of calculating skill scores, where the first Dataset corresponds to observations and the second to forecasts, and we want to calculate the histograms between matched variables in each dataset. However, this might be quite a specific use case and is probably not what we'd want to do in the general case. I like @TomNicholas's proposal for Dataset functionality. Is this what you're getting at @aaronspring? Or am I misunderstanding?
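A rough sketch of that thin-wrapper behaviour (variable names and bins are made up): loop over the variables common to both Datasets and compute a joint observation/forecast histogram for each one, e.g. with xhistogram:

```python
import numpy as np
import xarray as xr
from xhistogram.xarray import histogram

obs = xr.Dataset({"t2m": ("time", np.random.randn(365)),
                  "precip": ("time", np.random.rand(365))})
fcst = xr.Dataset({"t2m": ("time", np.random.randn(365)),
                   "precip": ("time", np.random.rand(365))})

bins = {"t2m": np.linspace(-3, 3, 31), "precip": np.linspace(0, 1, 11)}

# One joint obs/fcst histogram per variable present in both Datasets;
# renaming avoids the two inputs sharing a bin-dimension name.
hists = xr.Dataset({
    var: histogram(obs[var].rename(f"{var}_obs"),
                   fcst[var].rename(f"{var}_fcst"),
                   bins=[bins[var], bins[var]])
    for var in set(obs.data_vars) & set(fcst.data_vars)
})
```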
I like your explanation of the two different inputs @dougiesquire, and for multi-dim datasets these must be xr.Datasets. My point about the
This makes sense, but it sounds like this suggestion (of accepting Datasets, not just DataArrays) is mostly a convenience tool for quickly applying histograms to particular variables across multiple datasets. It's not fundamentally different from picking and choosing the variables you want from multiple datasets and feeding them into the top-level `histogram` function.

I think we should focus on including features that enable analyses that would otherwise be difficult or impossible, for example ND bins: without allowing bins to be >1D at a low level internally, it would be fairly difficult to replicate the same functionality just by wrapping
Agree.
Looking forward to the PR.
Okay great, thanks for the patient explanation @aaronspring! Will tag you when this has progressed to the point that you can try it out.
Given the performance I found in xgcm/xhistogram#60, I think we probably want to use the
**Q: Use xhistogram approach or flox-powered approach?**

@dcherian recently showed how his flox package can perform histograms as groupby-like reductions. This raises the question of which approach would be better to use in a histogram function in xarray. (This is related to, but better than, what we had tried previously with xarray groupby and numpy_groupies.) Here's a WIP notebook comparing the two approaches. Both approaches can feasibly do:
Pros of using flox-powered reductions:
Pros of using xhistogram's blockwise bincount approach:
Other thoughts:
Nah, in my experience the overhead is "factorizing" (pd.cut/np.digitize) or converting to integer bins, and then converting the nD problem to a 1D problem for bincount. numba doesn't really help.

3-4x is a lot bigger than I expected; I was hoping for under 2x because flox is more general. I think the problem is

We could swap that out easily here: https://github.com/xarray-contrib/flox/blob/daebc868c13dad74a55d74f3e5d24e0f6bbbc118/flox/core.py#L473

I think the one special case to consider is binning datetimes, and that digitize and pd.cut have different defaults for
Ideally As a workaround, we could replace
Yup, unlikely to help here.
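To make the "factorize, then turn the nD problem into a 1D problem for bincount" idea above concrete, here is a toy NumPy-only sketch (this is not xhistogram's or flox's actual code; the helper name and the offset trick are just illustrative):

```python
import numpy as np

def blockwise_hist(data, bin_edges, reduce_axes):
    """Toy factorize-then-bincount: digitize to integer bin labels, move the
    kept axes to the front, flatten the rest, and count with one bincount."""
    nbins = len(bin_edges) - 1
    labels = np.digitize(data, bin_edges) - 1
    # Send out-of-range values to a dummy bin that is dropped at the end
    labels[(labels < 0) | (labels >= nbins)] = nbins

    keep_axes = tuple(ax for ax in range(data.ndim) if ax not in reduce_axes)
    moved = np.moveaxis(labels, keep_axes, range(len(keep_axes)))
    nkeep = int(np.prod([data.shape[ax] for ax in keep_axes], dtype=int))
    flat = moved.reshape(nkeep, -1)

    # Offset each kept "row" so a single 1D bincount counts all rows at once
    offsets = np.arange(nkeep)[:, None] * (nbins + 1)
    counts = np.bincount((flat + offsets).ravel(), minlength=nkeep * (nbins + 1))
    return counts.reshape(nkeep, nbins + 1)[:, :nbins]

data = np.random.randn(4, 1000)
print(blockwise_hist(data, np.linspace(-3, 3, 13), reduce_axes=(1,)).shape)  # (4, 12)
```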
Could you show the example that's this slow, @TomNicholas? So I can play around with it too. One thing I noticed in your notebook is that you haven't used
I think I just timed the difference in the (unweighted) "real" example I gave in the notebook. (Not the weighted one because that didn't give the right answer with flox for some reason.)
Fair point, worth trying.
Can we not just test the in-memory performance by
This could basically be something like
We'll need #9522 + some skipping of

Actually this doesn't handle the
On today's dev call, we discussed the possible role that numpy_groupies could play in xarray (#4540). I noted that many of the use cases for advanced grouping overlap significantly with histogram-type operations. A major need that we have is to take [weighted] histograms over some, but not all, axes of DataArrays. Since groupby doesn't allow this (see #1013), we started the standalone xhistogram package.
Given the broad usefulness of this feature, I suggested that we might want to deprecate xhistogram and move the histogram function to xarray. We may want to also reimplement it using numpy_groupies, which I think is smarter than our implementation in xhistogram.
I've opened this issue to keep track of the idea.
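As a rough illustration of that weighted, some-but-not-all-axes use case, and of how numpy_groupies could act as the reduction engine, here is a minimal sketch with synthetic data (the per-row Python loop just stands in for whatever vectorised dispatch a real implementation would use):

```python
import numpy as np
import numpy_groupies as npg

rng = np.random.default_rng(0)
data = rng.normal(size=(4, 1000))      # histogram over axis 1, keep axis 0
weights = rng.random(size=(4, 1000))
edges = np.linspace(-3, 3, 13)
nbins = len(edges) - 1

# Integer bin label per element; out-of-range values go to a dummy bin
labels = np.digitize(data, edges) - 1
labels[(labels < 0) | (labels >= nbins)] = nbins

# Weighted counts per kept row, using numpy_groupies as the reducer
hist = np.stack([
    npg.aggregate(labels[i], weights[i], func="sum", size=nbins + 1)[:nbins]
    for i in range(data.shape[0])
])
print(hist.shape)  # (4, 12)
```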