Extend sparse broadcast to VecOrMats #20102
Conversation
@nanosoldier
Just an idea, not thought through to the end, so I don't know whether it's feasible, but could we use
_containertype(::Type{T}) where (T<:AbstractArray{Te,N} where Te) where N = AbstractArray{Te,N} where Te
Then we wouldn't need
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
@martinholters Actually, my first idea was to go with
@nanosoldier
@martinholters I tried to implement your idea and it felt somewhat convoluted, but I removed the
I made a mistake above, so here it is again: @nanosoldier
base/sparse/higherorderfns.jl
Outdated
@@ -861,8 +862,10 @@ broadcast_indices(::Type{AbstractSparseArray}, A) = indices(A)
# broadcast container type promotion for combinations of sparse arrays and other types
_containertype{T<:SparseVecOrMat}(::Type{T}) = AbstractSparseArray
# combinations of sparse arrays with broadcast scalars should yield sparse arrays
promote_containertype(::Type{Any}, ::Type{AbstractSparseArray}) = AbstractSparseArray
promote_containertype(::Type{AbstractSparseArray}, ::Type{Any}) = AbstractSparseArray
promote_containertype(::Type{AbstractSparseArray}, ::ScalarType) = AbstractSparseArray
I do not know whether sparse map[!]/broadcast[!] handles Nullables correctly. (Hence the absence of Nullables in the relevant code so far.) Have you confirmed that sparse map[!]/broadcast[!] handles Nullables correctly? Thanks!
It does not work, so I will remove it for now.
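For readers following along: the rules in the diff above belong to Base.Broadcast's (since-replaced) containertype promotion mechanism. Below is a minimal, self-contained toy of the idea, with hypothetical names (_ctype, _promote_ctype) standing in for the internal _containertype/promote_containertype; it is a sketch of the concept, not the actual implementation.

```julia
using SparseArrays

# Map each broadcast argument to a "container type" tag (toy stand-ins only).
_ctype(::Number)              = Any                   # broadcast scalars
_ctype(::Array)               = Array
_ctype(::AbstractSparseArray) = AbstractSparseArray

# Combine tags pairwise. The rule the diff above expresses is that a scalar
# combined with a sparse container keeps the sparse container, so the call
# still reaches sparse broadcast.
_promote_ctype(::Type{Any}, ::Type{Any}) = Any
_promote_ctype(::Type{Any}, ::Type{AbstractSparseArray}) = AbstractSparseArray
_promote_ctype(::Type{AbstractSparseArray}, ::Type{Any}) = AbstractSparseArray
_promote_ctype(::Type{AbstractSparseArray}, ::Type{AbstractSparseArray}) = AbstractSparseArray

S = sparse([1.0 0.0; 0.0 3.0])
_promote_ctype(_ctype(2), _ctype(S))   # AbstractSparseArray => sparse broadcast path
```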
@inline _sparsifystructured(A::AbstractVector) = SparseVector(A)
@inline _sparsifystructured(A::AbstractSparseMatrix) = SparseMatrixCSC(A)
@inline _sparsifystructured(A::AbstractSparseVector) = SparseVector(A)
@inline _sparsifystructured(S::SparseVecOrMat) = S
Good call. This combination of definitions (_sparsifystructured for AbstractSparseMatrix, AbstractSparseVector, and SparseVecOrMat) is safer than the definition that combination supplants (_sparsifystructured for AbstractSparseArray). We should make certain this change goes in one way or another. Thanks!
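As a hedged illustration of what these conversions accomplish, using only the public SparseArrays constructors (the helper itself is internal to this PR):

```julia
using SparseArrays

v  = [1.0, 0.0, 2.0]             # a dense vector argument...
sv = sparse(v)                   # ...becomes a SparseVector, mirroring _sparsifystructured(::AbstractVector)
S  = sparse([1.0 0.0; 0.0 3.0])  # a SparseMatrixCSC
typeof(S) <: SparseMatrixCSC     # true; already-concrete sparse arguments pass through unchanged
```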
base/sparse/higherorderfns.jl
Outdated
@@ -871,11 +874,11 @@ promote_containertype(::Type{AbstractSparseArray}, ::Type{Tuple}) = Array

# broadcast[!] entry points for combinations of sparse arrays and other (scalar) types
@inline function broadcast_c{N}(f, ::Type{AbstractSparseArray}, mixedargs::Vararg{Any,N})
    parevalf, passedargstup = capturescalars(f, mixedargs)
    parevalf, passedargstup = capturescalars(f, map(_sparsifystructured, mixedargs))
Given the capturescalars pathway's standing shortcomings, sending sparse broadcast[!] calls through capturescalars that need not go through it makes me uneasy. (Overcoming capturescalars's shortcomings would be wonderful. Any thoughts on that front?)
I have added a capturescalars method that should take care of this for now, until we find a better alternative. (I have thought of having a method that iterates over columns for AbstractArrays in general and doing something like _broadcast! with _broadcast_getindex! so we don't need this mechanism, but that would have to wait.)
I have thought of having a method that iterates over columns for AbstractArrays in general and doing something like _broadcast! with _broadcast_getindex! so we don't need this mechanism

Could you clarify? If you mean something along the lines of extending the common interface over SparseVectors and SparseMatrixCSCs in sparse broadcast[!] to other types (e.g. scalars, thereby obviating the capturescalars mechanism), then that matches my plan for evolving these as well.
Yeah, a mechanism for avoiding the need for capturescalars (similar to the way things work for Arrays) is what I meant, but we can discuss that later.
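Since capturescalars comes up repeatedly in this thread, the following is a minimal, self-contained sketch of the underlying idea, not Base's actual capturescalars (which is written far more carefully for inference): peel scalar arguments out of a broadcast call by closing over them, so the remaining call only sees array arguments.

```julia
# Sketch only: split args into (arrays, rebuild), where rebuild re-interleaves
# element values with the captured scalars in their original positions.
split_args(::Tuple{}) = ((), (ys...) -> ())
function split_args(args::Tuple)
    a, rest = args[1], Base.tail(args)
    arrays, rebuild = split_args(rest)
    if a isa Number                                      # treat plain numbers as broadcast scalars
        return arrays, (ys...) -> (a, rebuild(ys...)...)
    else
        return (a, arrays...), (y, ys...) -> (y, rebuild(ys...)...)
    end
end

# capture_scalars(f, args...) returns (g, arrays) with the scalars baked into g,
# so that broadcast(f, args...) == broadcast(g, arrays...).
function capture_scalars(f, args...)
    arrays, rebuild = split_args(args)
    return (ys...) -> f(rebuild(ys...)...), arrays
end

g, arrays = capture_scalars(+, 1, [1, 2, 3], 10)
broadcast(g, arrays...)   # == [12, 13, 14]
```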
Much thanks for working on this @pabloferz! If I'm not mistaken, this version will not correctly funnel certain argument combinations to sparse broadcast[!]. For example, suppose you have an argument tuple including scalars, Vectors, and SparseMatrixCSCs. For certain argument orderings, it seems promote_containertype may receive e.g. an Any-VecOrMat pair and return Array. Downstream, the broadcast[!] operation will then hit AbstractArray broadcast[!] rather than sparse broadcast[!]. (This issue is part of why disambiguating Array in Broadcast's containertype promotion mechanism may be worthwhile.) To properly handle such cases, you need additional machinery to capture combinations identified as Array by Broadcast's containertype promotion mechanism and examine them more critically to determine whether sparse broadcast[!] should handle them or not (as in e.g. #20009; #20007 takes a different approach). Best!
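To make the failure mode described above concrete, here is a toy left fold over pairwise containertype promotions. The rules are hypothetical stand-ins, not the actual Base.Broadcast definitions, and the Array/AbstractSparseArray rule is deliberately written the lossy way to show how a scalar/Vector pair resolving to Array can push the whole call onto the dense path.

```julia
using SparseArrays

# Hypothetical pairwise promotion rules (stand-ins only).
pct(::Type{Any},   ::Type{Any})   = Any
pct(::Type{Any},   T::Type)       = T
pct(T::Type,       ::Type{Any})   = T
pct(::Type{Array}, ::Type{AbstractSparseArray}) = Array   # loses sparsity: the problem
pct(::Type{AbstractSparseArray}, ::Type{Array}) = Array

# The containertype of an argument tuple is a left fold of pairwise promotions.
foldct(T::Type) = T
foldct(T::Type, S::Type, rest::Type...) = foldct(pct(T, S), rest...)

# scalar (Any), dense Vector (Array), SparseMatrixCSC (AbstractSparseArray):
foldct(Any, Array, AbstractSparseArray)   # == Array, so AbstractArray broadcast handles the call
```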
Partially! :) On the one hand, replacing

Over the last few days I've thought through the issues #20007 highlights. I hope to write those thoughts out in the next few days; please look for more from me soon. Thanks again!
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
Force-pushed from f513300 to 5c571aa.
Fixed
Now, if you don't need to distinguish between
Also, this uncovered a bug in calls to
@JeffBezanson @vtjnash I'm seeing some of the #11840 tests failing here; do you have any idea why?
#19986 (pending merge) fixes this issue. Best!
I had missed that one. I'd say that should go in first and I'll rebase on top of it.
Unfortunately, the latest iteration suffers from other correctness issues. And as I conveyed gently above, I have misgivings about the direction in which this pull request evolves.

How to best structure

Thoughts? Thanks and best!
Not related to this PR (only), but in response to @Sacha0: I think that post 0.6 it is worth collecting all the lessons learned, documented in various issues and PRs, and rethinking the whole
If you have the time, could you give an example of what would fail with this approach (so I can take it into account in the future)? With respect to not modifying the
@martinholters That's an excellent idea and I look forward to doing this 👍
Absolutely. The concerns I recall immediately from when I skimmed this iteration are:
The
The
If I'm not mistaken, this iteration funnels some combinations that do not involve scalars but do involve structured matrices through
Some additional concern regarding collapsing
Seconded! Best!
Closing in favor of #23939
This is yet another alternative to #20007 and #20009 that also addresses #11474. The difference from those PRs is that, hopefully, this one addresses the concerns @Sacha0 pointed out on them.
I have not included tests here, as the previous PRs already propose tests that we can keep.