Proposal for AbstractWrappedArray in order to mitigate "AbstractArray-fallback" #31563

Closed
wants to merge 5 commits into from

Conversation

@KlausC (Contributor) commented Mar 30, 2019

The new abstract type AbstractWrappedArray{T,N,P,B} <: AbstractArray{T,N} is inserted into the type hierarchy of the wrapping array types: Adjoint <: AbstractWrappedArray, and likewise Symmetric, SubArray, etc.
The purpose is to make it possible to avoid the "AbstractArray-fallback" trap by defining additional methods that use the extra type parameter B (the base type) for dispatch.

Example:

  1. foo(A::AbstractMatrix) = ... using getindex massively ...
  2. foo(A::SparseMatrixCSC) = ... exploiting sparsity of A - not using getindex ...

When calling foo(Adjoint(sparse([1 2; 3 4]))), method 1 is invoked; for really big sparse matrices that is prohibitively slow (the "AbstractArray-fallback" case).

Adding the following method improves the situation considerably:
3. foo(A::AbstractWrappedArray{<:Any,2,<:Any,<:SparseMatrixCSC}) = foo(sparse(A))

Now the wrapped matrix A is converted to a SparseMatrixCSC by sparse, and then method 2 is invoked. This way the "AbstractArray-fallback" is replaced by a "SparseMatrixCSC-fallback", which has much better performance.

Note that the wrapped types may be arbitrarily deeply nested; sparse is now reasonably fast after PR #30552.
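
To make the mechanism concrete, here is a minimal, self-contained sketch; MyWrapper is a toy stand-in for Adjoint/Symmetric/SubArray (it is not part of the PR), and the foo bodies are made-up placeholders mirroring 1.-3. above:

using SparseArrays

# Sketch of the proposed abstract type: P is the immediate parent type,
# B the innermost ("base") array type of a possibly nested wrapper.
abstract type AbstractWrappedArray{T,N,P,B} <: AbstractArray{T,N} end

# Toy wrapper standing in for Adjoint, Symmetric, SubArray, ...
struct MyWrapper{T,N,P,B} <: AbstractWrappedArray{T,N,P,B}
    parent::P
end
MyWrapper(A::AbstractArray{T,N}) where {T,N} =
    MyWrapper{T,N,typeof(A),typeof(A)}(A)            # plain array: base type is the array itself
MyWrapper(A::AbstractWrappedArray{T,N,P,B}) where {T,N,P,B} =
    MyWrapper{T,N,typeof(A),B}(A)                    # nested wrapper: keep the original base type

Base.size(A::MyWrapper) = size(A.parent)
Base.getindex(A::MyWrapper, I...) = A.parent[I...]

foo(A::AbstractMatrix)  = sum(A[i, j] for i in axes(A, 1), j in axes(A, 2))   # 1. getindex-heavy fallback
foo(A::SparseMatrixCSC) = sum(nonzeros(A))                                    # 2. exploits sparsity
foo(A::AbstractWrappedArray{<:Any,2,<:Any,<:SparseMatrixCSC}) = foo(sparse(A))  # 3. the new fallback

foo(MyWrapper(MyWrapper(sparse([1 2; 3 4]))))   # hits 3., then 2.; never the getindex fallback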

@KlausC (Contributor, Author) commented Apr 2, 2019

@ViralBShah, @StefanKarpinski, @andreasnoack, I would like to bring this PR to your attention. It introduces a new abstract type into the standard type hierarchy and is slightly breaking.
On the other hand, it enables near-future performance improvements in quite a few SparseArrays methods without changing existing methods.

@KlausC closed this Apr 4, 2019
@KlausC reopened this Apr 4, 2019
@mbauman added the arrays label Apr 4, 2019
@mbauman (Member) commented Apr 4, 2019

Interesting. This seems to be a competing option to doing something like #25558 — that is, instead of traits this encodes the "basetype" as a parameter in the type tree.

@KlausC (Contributor, Author) commented Apr 5, 2019

@mbauman, yes, I also first considered a traits solution, but I favored the additional-type-parameter approach for these reasons:

  1. it is not necessary to modify the existing foo(::AbstractArray) or foo(::SparseMatrixCSC) methods; at least the first would have to be modified for the trait solution (see the sketch below).
  2. the wrapper property of some concrete types deserves a manifestation in the type hierarchy, IMO. As long as we don't have "interfaces" or "multiple inheritance", extending the type hierarchy where appropriate seems a cleaner solution than traits.
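
For context, a hypothetical sketch of what reason 1 refers to: with a trait, the existing generic method itself has to be turned into a dispatcher. The names basestyle, SparseBase, and DenseBase are made up for illustration (they are not from #25558), and the foo bodies are stand-ins:

using LinearAlgebra, SparseArrays

abstract type BaseStyle end
struct SparseBase <: BaseStyle end
struct DenseBase  <: BaseStyle end

basestyle(::Type{<:AbstractArray})          = DenseBase()
basestyle(::Type{<:SparseMatrixCSC})        = SparseBase()
basestyle(::Type{Adjoint{T,P}}) where {T,P} = basestyle(P)   # recurse through the wrapper

foo(A::SparseMatrixCSC) = sum(nonzeros(A))                   # sparse-aware method, unchanged
foo(A::AbstractMatrix)  = foo(basestyle(typeof(A)), A)       # existing generic method must be rewritten
foo(::SparseBase, A)    = foo(sparse(A))                     # unwrap, then use the sparse method
foo(::DenseBase,  A)    = sum(A[i, j] for i in axes(A, 1), j in axes(A, 2))  # old getindex fallback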

@KlausC closed this Apr 8, 2019
@KlausC reopened this Apr 8, 2019
@KlausC (Contributor, Author) commented Apr 8, 2019

Failing test seems unrelated.

@StefanKarpinski (Member):

@mbauman or @KristofferC, please merge if this looks good to you.

@fredrikekre (Member):

Isn't this heavily breaking since it adds type parameters?

@mbauman (Member) commented Apr 25, 2019

Yes, I think this needs a more careful design review with an eye towards what we want our final state to be. I've simply not had a chance to do so myself at this point.

@KlausC (Contributor, Author) commented May 2, 2019

The appended type parameter is a breaking change, because we have to replace, for example, typeof(A) == Symmetric{Float64, Matrix{Float64}} by typeof(A) <: Symmetric{Float64, Matrix{Float64}}. Nevertheless, such changes surface as errors in user code rather than as silent changes of program behavior.
Omitting the additional parameter (but still introducing the abstract type AbstractWrappedArray{T,N,P}) would also improve the situation, though in a less convenient way; we would have to replace

3. `foo(A::AbstractWrappedArray{<:Any,2,<:Any,<:SparseMatrixCSC}) = foo(sparse(A))`

by

3a.
    function foo(A::T) where T<:AbstractWrappedArray{<:Any,2,<:Any}
        basetype(T) <: SparseMatrixCSC ? foo(sparse(A)) : invoke(foo, Tuple{AbstractMatrix}, A)
    end

(I am not sure whether the invoke has a performance impact.)
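
To make the breaking part concrete, a small illustration; the three-parameter Symmetric described above is hypothetical, and today both checks below return true:

using LinearAlgebra

A = Symmetric(rand(2, 2))

# Today Symmetric has two parameters, so the exact-type comparison holds:
typeof(A) == Symmetric{Float64, Matrix{Float64}}    # true

# If a base-type parameter were appended, typeof(A) would become
# Symmetric{Float64, Matrix{Float64}, Matrix{Float64}}, the `==` above would no
# longer hold, and Symmetric{Float64, Matrix{Float64}} would be a UnionAll, so
# exact-type checks would need to be rewritten as subtype tests:
typeof(A) <: Symmetric{Float64, Matrix{Float64}}    # true both before and after the change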

@KlausC (Contributor, Author) commented May 14, 2019

bump ;-)

@mbauman added the design label May 14, 2019
@KlausC (Contributor, Author) commented May 24, 2019

bummpp

@mbauman (Member) commented May 24, 2019

What about a definition like this:

abstract type AbstractWrappedArray{T,N,P} <: AbstractArray{T,N}; end
basetype(::Type{<:AbstractWrappedArray{T,N,P}}) where {T,N,P} = basetype(P)

We'd only store the "one-level up" parent in the type but allow basetype to recurse. The advantage is that we wouldn't need to change any type parameters of the subtypes — which is a breaking change. The disadvantage is that you shouldn't dispatch directly on the type — which would be very tempting — and instead need to use basetype like a trait.

We can discuss it on an upcoming triage call.

Edit: ah, yes, that's what you suggested in #31563 (comment). foo needn't be quite so awkward, though — it just needs an additional function hop:

foo(A::AbstractWrappedArray) = _foo(basetype(typeof(A)), A)
_foo(::Type{<:SparseMatrixCSC}, A) = ...
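
For completeness, the recursion needs a base case that the snippet leaves implicit; a guess at what is intended (the nested example below assumes Adjoint and SubArray would subtype AbstractWrappedArray, which today they do not):

# Terminal case: an array that is not a wrapper is its own base type.
basetype(::Type{T}) where {T<:AbstractArray} = T

# With the wrappers subtyping AbstractWrappedArray{T,N,P}, a nested wrapper would
# then resolve to its innermost parent, e.g. (hypothetically):
#   basetype(typeof(view(sparse([1 2; 3 4])', 1:2, 1:2)))   # ==> SparseMatrixCSC{Int64, Int64}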

@mbauman added the breaking and triage labels May 24, 2019
@KlausC (Contributor, Author) commented May 26, 2019

— it just needs an additional function hop:

foo(A::AbstractWrappedArray) = _foo(basetype(typeof(A)), A)
_foo(::Type{<:SparseMatrixCSC}, A) = ...

The complete story would look like:

 foo(A::AbstractWrappedArray) = _foo(basetype(typeof(A)), A)
 _foo(::Type{<:SparseMatrixCSC}, A) = foo(sparse(A))
 _foo(::Type{<:AbstractMatrix}, A) = invoke(foo, Tuple{AbstractMatrix}, A) # avoid dispatch loop foo(A)

if we want to re-use the pre-existing foo(A::AbstractMatrix).
For me it looks like a trade-off between

A. an additional type parameter in the wrapped types + one additional method per use case
B. staying non-breaking + three additional methods per use case + the need to use invoke

In both cases the new AbstractWrappedArray type is inserted into the type hierarchy.

@JeffBezanson (Member):

Inserting new abstract types into the hierarchy is allowed and considered backwards-compatible. Adding the type parameters, as observed, is breaking.

@KlausC (Contributor, Author) commented Jun 21, 2019

So I will backpedal on the additional type parameter, but insert AbstractWrappedArray{T,N,P} into the type hierarchy without changing type parameters or layout of existing wrapped types:

abstract type AbstractWrappedArray{T,N,P} <: AbstractArray{T,N} end

struct Transpose{T,P} <: AbstractWrappedArray{T,2,P}; parent::P; end
struct SubArray{T,N,P,I,L} <: AbstractWrappedArray{T,N,P}; parent::P; ... end
...

@KlausC (Contributor, Author) commented Jun 21, 2019

BTW, is there any chance of getting such a breaking change approved in the future?
Actually, I inserted the additional base-type parameter B because I wanted to make the predicate "the innermost type in a nested wrapped structure is a sparse matrix" available for method dispatch without the traits pattern. That would have allowed defining specialized methods for wrapped sparse matrices without touching the existing AbstractMatrix methods of the same functions.
Is there a plan to support "traits" (via a syntax extension) that would allow dispatching on such secondary properties of types without having to provide and use the extra method argument? Does it make sense to investigate this direction? I am aware of the complexity of the dispatch mechanism and the danger of undecidability.
I was inspired by this quotation from @JeffBezanson's thesis: "One approach would be to include a list of satisfied predicates in a type. However, to the extent such a system is decidable, it is essentially equivalent to multiple inheritance."

@tanhevg (Contributor) commented Feb 20, 2020

I see there have been no comments on this PR for a while, so I wonder whether any decision has been reached. The problem being addressed here, the "AbstractArray-fallback", can occur in various situations, not just for sparse matrices. I stumbled into it when trying to train a deep learning model on a GPU with Zygote. The problem causes the broadcast result of a computation involving GPU arrays to be materialised on the CPU, so the remainder of the gradient backward pass (which should take place on the GPU) blows up. See JuliaGPU/GPUArrays.jl#244 and linked issues.

This issue can easily cause errors downstream that are hard to debug. Either a general solution (not necessarily this one) needs to be adopted, or individual patches for the various downstream use cases (e.g. sparse matrices, broadcasting, GPU, Zygote, etc.) need to be implemented.

@timholy (Member) commented Feb 20, 2020

The issue needs fixing, but I think several of us are skeptical that this is the right approach. Approaches that might work include (1) submitting specialized methods for the cases you're hitting, and/or (2) resurrecting https://github.com/timholy/ArrayIteration.jl and using it to implement the generic fallbacks.

Either a general solution (not necessarily this one) needs to be adopted, or individual patches for various downstream use cases (e.g. sparse matrices, broadcasting, GPU, Zygote, etc..) need to be implemented.

Given that you need it, I would love it if you picked up ArrayIteration and pushed it over the finish line. If memory serves, you probably want the teh/stored2 branch. The big problem with that approach was temporary allocations due to heap-allocated wrappers (see https://stackoverflow.com/questions/47590839/unexpected-memory-allocation-when-using-array-views-julia/47607539#47607539), but the optimizer & inlining have gotten so much better that this may not be the issue it once was.

@AriMKatz commented Feb 26, 2020

https://github.com/timholy/ArrayIteration.jl is very interesting. It has similar goals to @ChrisRackauckas's https://github.com/JuliaDiffEq/ArrayInterface.jl and @vchuravy's https://github.com/JuliaGPU/KernelAbstractions.jl, so I wonder if it makes sense to coordinate on next-generation array work, which could solve the problems raised in this thread.

@ChrisRackauckas (Member):

The problem that this could fix is that it would allow you to flip functions to act below the wrapper by default. The problem comes up when you compose wrappers. Examples of this are Adjoint{CuArray} and the like, where reshaping an adjoint of a CuArray gives you a ReshapedArray{Adjoint{CuArray}}, because it's unable to hit the specialization on CuArray that would turn it back into an Adjoint{CuArray} and be fast again. However, with an abstract type, we could do:

reshape(a::AbstractWrappedArray,sz...) = rewrap(typeof(a),reshape(parent(a),sz...))
reinterpret(::Type{T}, a::AbstractWrappedArray) where {T} = rewrap(typeof(a), reinterpret(T, parent(a)))

However, this points to the fact that the PR is missing a few pieces of an AbstractWrappedArray interface that would be necessary (a sketch follows the list):

  • parent(a): Returns the array that is being wrapped
  • rewrap(T,a): "Rewraps" the array, i.e. makes it an Adjoint again, without having to know that it was an Adjoint.
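
A rough sketch of what rewrap could look like for a couple of wrappers; the name comes from the list above, but the method bodies and the CUDA.jl cu call in the usage comment are illustrative assumptions:

using LinearAlgebra

# Reapply a wrapper of the given type to a new parent, without naming the
# wrapper explicitly at the call site.
rewrap(::Type{<:Adjoint},   p::AbstractVecOrMat) = adjoint(p)
rewrap(::Type{<:Transpose}, p::AbstractVecOrMat) = transpose(p)
rewrap(::Type{<:Symmetric}, p::AbstractMatrix)   = Symmetric(p)

# e.g. rewrap(typeof(A'), cu(parent(A'))) would again be an Adjoint, now wrapping a CuArray.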

@vchuravy (Member):

rewrap(T,a): "Rewraps" the array, i.e. make it Adjoint again, without having to know that it was Adjoint.

See https://github.com/JuliaGPU/Adapt.jl/blob/master/src/Adapt.jl for a rewrap infrastructure that has worked out rather well in the wild.

@ChrisRackauckas (Member):

Yes, Adapt.jl is great, but what it's doing should be what happens by default, since right now, if you don't use Adapt, you end up with these issues. Since a lot of Julia users don't even know about Adapt, this is probably the blocker I run into most often in generic code for why GPUs or TrackedArrays don't "just work".

@EricForgy:
This is interesting 👍

I've done something similar recently in a private package I'm developing. I have a package with an abstract type AbstractMyType with some concrete implementations, but these concrete implementations are meant to be wrapped, so I introduced AbstractMyTypeWrapper. This saves me from a lot of redundant code in each extension.

Since it is a very common pattern to wrap arrays, it makes sense to me to have an AbstractArrayWrapper in Base.

I'm actually working on a tensor algebra package now to illustrate some points made in this issue:

JuliaLang/LinearAlgebra.jl#729

Having an AbstractArrayWrapper would help with tensors and help us get natural contravariance and covariance for LinearAlgebra.

Small comment: I would probably prefer AbstractArrayWrapper to AbstractWrappedArray 😊

@maleadt (Member) commented Sep 28, 2020

There's another potential advantage of this PR: latency. I've been seeing significant package load-time regressions now that I've started using the WrappedArray union from Adapt.jl more heavily, but also when using simpler unions like StridedCuArray from CUDA.jl (mimicking Base's StridedArray). See JuliaGPU/CUDA.jl#453.

That means, however, that it would need to be possible to use AbstractWrappedArray for dispatch, which IIUC means we can't just store the one-level-up parent and use basetype to recurse. That implies this would be breaking, as it changes type parameters.

@vtjnash (Member) commented Oct 27, 2023

If I understand correctly, this is now handled in the dispatch system for ldiv! and friends thanks to @dkarrasch's recent work, and it just needs to be adopted by packages that care about latency, if they haven't already.

@vtjnash closed this Oct 27, 2023
@maleadt (Member) commented Oct 28, 2023

This PR proposes a principled approach that would work with all arrays, not just a handful of operations. I don't think it should have been closed. Sadly, closing it removed the upstream branch, so it cannot be re-opened.

Labels: arrays, breaking, design