
WIP:Add Transpose immutable #6837

Closed
wants to merge 1 commit into from

Conversation

@andreasnoack (Member)

Here is a sketch of how a Transpose type could work. Before continuing with redefining all the methods for the new type, I would like to hear what people think about it. Basically, I am just introducing the immutable

immutable Transpose{T, N, S<:AbstractArray{T, N}, Conj} <: AbstractArray{T, N}
    data::S
end

and although there is an N there, this proposal only deals with arrays of dimension one and two. That is what can be transposed right now, and I think handling higher dimensions is beyond the scope of this proposal.

This is basically a kind of lazy transpose which we can use for dispatch to optimised methods. If the idea is accepted, we can remove all the Ax_mul_Bx! methods and just have mul! and similarly for ldiv/rdiv, see #5332.
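For concreteness, here is a minimal sketch of such a lazy wrapper in modern Julia syntax (the PR itself uses the pre-1.0 `immutable` keyword; the names `LazyTranspose` and `mymul` are invented here for illustration):

```julia
# Hypothetical sketch of a lazy transpose wrapper. Indexing swaps the
# indices instead of copying the data.
struct LazyTranspose{T,S<:AbstractMatrix{T}} <: AbstractMatrix{T}
    data::S
end
LazyTranspose(A::AbstractMatrix{T}) where {T} = LazyTranspose{T,typeof(A)}(A)

Base.size(t::LazyTranspose) = reverse(size(t.data))
Base.getindex(t::LazyTranspose, i::Int, j::Int) = t.data[j, i]

# A single multiplication function can now dispatch on the wrapper,
# replacing the At_mul_B family. Here we just materialize for the sketch;
# a real method would call BLAS with the 'T' flag.
mymul(A::LazyTranspose, B::AbstractVecOrMat) = permutedims(A.data) * B
mymul(A::AbstractMatrix, B::AbstractVecOrMat) = A * B
```

The point is that `A'` becomes a cheap type-level tag carrying the parent array, and the choice of optimized kernel happens via ordinary dispatch.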

With this proposal, transpose(Vector{T}) is a VectorTranspose{T,Vector{T},false}, which is still a one dimensional thing, but with it we can have

julia> A=randn(3,3);x=randn(3);

julia> x'A*x
-3.0853326038138893

julia> x'x
3.0143512744908234

which was discussed in #4774. Another useful consequence would be that we can evaluate something like A*B*C' more efficiently. Right now, B*C' calls A_mul_Bc without actually forming C', but A*B*C' calls *(A,B,ctranspose(C)), so for more complicated matrix expressions there is more copying than necessary.

cc: @jiahao, @toivoh

@tkelman (Contributor) commented May 14, 2014

I like the general idea of lazy transpose. The tricky part is setindex!: does writing into a transpose object change the original matrix? I guess you just document really clearly that transpose returns a view, and use B = full(A') (or should that be B = copy(A')?) when you need a copy. Same concern as for slices: great for performance, but a temporary disruption that could cause bugs in existing code.
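The view question can be made concrete. A hedged sketch (invented names, modern syntax) of what a write-through setindex! would mean:

```julia
# Hypothetical transpose view whose setindex! writes through to the parent.
struct TransposeView{T,S<:AbstractMatrix{T}} <: AbstractMatrix{T}
    data::S
end
TransposeView(A::AbstractMatrix{T}) where {T} = TransposeView{T,typeof(A)}(A)

Base.size(t::TransposeView) = reverse(size(t.data))
Base.getindex(t::TransposeView, i::Int, j::Int) = t.data[j, i]
# Writing into the view mutates the original matrix:
Base.setindex!(t::TransposeView, v, i::Int, j::Int) = (t.data[j, i] = v; t)
```

With this design, `copy(t)` materializes an independent Matrix, which is one way to settle the full(A') versus copy(A') question with plain copy semantics.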

Properly supporting sparse here basically means introducing a SparseMatrixCSR type: transpose outputs the same data as the opposite type, and the current sparse transpose implementation becomes the conversion operation between the two types.

@jiahao (Member) commented May 14, 2014

What does transpose mean for N>2?

@toivoh (Contributor) commented May 14, 2014

Since we want a transposed vector to be a row vector (as opposed to a row matrix), I guess that the only reasonable definition of transposing higher dimensional arrays would be to swap the first two dimensions. But I agree that starting out with N <= 2 makes a lot of sense.

Also, thanks for trying this out!

@jiahao (Member) commented May 14, 2014

I guess that the only reasonable definition of transposing higher dimensional arrays would be to swap the first two dimensions.

I don't see a unique definition for N>2; you could easily argue for swapping the last two dimensions, or reversing the order of all the dimensions.
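For concreteness, the competing generalizations correspond to different `permutedims` calls (standard Julia, shown here only to make the ambiguity explicit):

```julia
A = reshape(1:24, 2, 3, 4)

# Two equally defensible "transposes" of a rank-3 array:
size(permutedims(A, (2, 1, 3)))  # swap the first two dimensions
size(permutedims(A, (3, 2, 1)))  # reverse all dimensions
```

Neither choice is canonical, which is the point: a Transpose type for N>2 would have to record which permutation it means.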

@kmsquire (Member)

Python Pandas defines a version of transpose for N=3 that allows arbitrary swapping of dimensions, which seems (to me) like a reasonable definition.

Do LAPACK routines allow for arbitrary strides along each dimension, or only the major stride?

@mbauman (Member) commented May 14, 2014

Numpy's stock transpose (x.T) reverses the dimensions by default. Matlab refuses and punts to permute in its error message. I think erroring on N>2 is sensible.

@tknopp (Contributor) commented May 14, 2014

+1 for reversing the dimensions. This basically gives us row major arrays.

@jiahao (Member) commented May 14, 2014

Matlab refuses and punts to permute in its error message.

That's really my point, that for Transpose{N>2} to be sensible, it needs to store a permutation (or similar combinatorial object) to distinguish between all the possible generalizations.

@StefanKarpinski (Member)

Reversing the dimensions is ok as long as Transpose{Vector} is treated like a row matrix the same way that Vector is treated like a column matrix. Making all of that consistent is going to take some effort.

@toivoh (Contributor) commented May 14, 2014

Reversing the dimensions would mean that the transpose of a column vector would be the same column vector. What we want is that a transposed vector should behave as if the first and second dimensions were swapped, just as with a matrix. Swapping the last two dimensions of a column vector would swap the nonexistent zeroth dimension with the first, again unlike the matrix case.

But I agree that transpose in more than two dimensions doesn't really make sense unless you store a permutation (which seems hard to unify with the row-vector kind of transposition).


@andreasnoack (Member, Author)

@tkelman We had better document this well. The general shift to views will probably give some surprises in terms of broken code, but hopefully also faster code.

@kmsquire LAPACK requires the storage to be contiguous along the major stride. BLAS allows arbitrary strides for vector arguments, i.e. BLAS levels 1 & 2.

@jiahao I'm not familiar with transposing arrays with N>2. In this sketch, I have made transpose and ctranspose throw an error when N>2, although the type is actually defined in general. Based on the discussion in this thread, my feeling is that we could start out as it is now and then consider adding a permutation property to the type when we feel ready for higher dimensions.

I think the permutation vector approach can be combined with the covector behaviour. Although the permutation of [1] might feel a little trivial, we would still have the Transpose type to dispatch on to control how the transposed vector should behave.

But again, right now we only support transposing when N<3, and I think this proposal is an improvement for that, maybe limited, case. If anyone finds it problematic to change the way we handle transposes now, please raise your voice; otherwise I'll continue to work on this proposal.

Finally, I would be happy if someone with Scheme skills could have a look at the lines I have commented out in julia-syntax.scm.

@simonbyrne (Contributor)

I think this is an excellent idea (I particularly like that u' v could replace dot(u,v)), and a very elegant solution to the A_mul_B mess.

I think it is worth considering making conjugate-transpose a distinct type, say CTranspose{T<:Complex, N, S <: AbstractArray{T,N}}, as I feel that would make dispatch a little cleaner.

@StefanKarpinski (Member)

I agree about conjugate transpose. If we're going to have Transpose it seems like we'd really want CTranspose as well.

@simonbyrne (Contributor)

There is one other issue with the A_mul_B! methods that this doesn't address: at the moment the 2-argument form of A_mul_B!(A,B) still doesn't specify which matrix should be overwritten. Most methods overwrite B in this case, but there are some exceptions.

Some possible solutions:

  • Define mul!l and mul!r (for overwriting the left and right argument, respectively)
  • Create an Inplace{N} type, which could be the first argument of mul!, i.e. mul!(Inplace{1},A,B) would overwrite A, and mul!(Inplace{2},A,B) would overwrite B.
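A hedged sketch of the second option (all names invented here; the fallback below just materializes the product rather than working truly in place):

```julia
# Hypothetical Inplace{N} marker: the type parameter selects which argument
# is overwritten, so a single generic name covers both conventions.
struct Inplace{N} end

function inplace_mul!(::Type{Inplace{1}}, A::AbstractMatrix, B::AbstractMatrix)
    copyto!(A, A * B)   # naive fallback; overwrite the left argument
end
function inplace_mul!(::Type{Inplace{2}}, A::AbstractMatrix, B::AbstractMatrix)
    copyto!(B, A * B)   # overwrite the right argument
end
```

The appeal of this design is that the overwrite target is explicit at every call site, instead of being a per-method convention.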

@simonbyrne (Contributor)

To follow up, I threw together a package for providing a convenient syntax for in-place operations, e.g.

@in2! X' \ y

will call Ac_mul_B!(X,y). It duplicates some of the functionality proposed in this PR, so it will shrink considerably if this gets accepted.

@Jutho (Contributor) commented May 22, 2014

While I like the general attitude of switching to views where possible, and also see the advantage of having a Transpose type for the matrix multiplication zoo, one might want to consider how to deal with the corresponding CTranspose type. Since this involves an additional complex conjugation, it is not simply a view. If the CTranspose type is used outside the context of matrix multiplication, when will the conjugation be applied? Every time the user accesses a specific entry of the CTranspose object. If this happens many times, it could become much more inefficient than just computing the complex conjugated matrix or Hermitian conjugated matrix once.

@StefanKarpinski (Member)

The CTranspose type isn't entirely necessary: ctranspose of real arrays could apply the lazy Transpose construction, while ctranspose of complex arrays could do the actual eager ctranspose. Of course, that introduces a problem for generic programming, in that for some array types a' is a view of a and modifying it will affect a, whereas for others it creates a copy and you can safely modify a' without affecting a. If you weren't allowed to modify lazy transposes, this would be less problematic, but I think that having a CTranspose type that does this might be best:

setindex!{M<:Matrix}(t::CTranspose{M}, v, i, j) = (t.array[j,i] = v')
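A hedged sketch of the dispatch split Stefan describes, in modern syntax (the name `myctranspose` is invented; today's `transpose` happens to already be a lazy wrapper of the kind this PR proposes):

```julia
# Real element types: conjugation is a no-op, so a lazy transpose view is safe.
myctranspose(A::AbstractMatrix{<:Real}) = transpose(A)              # view of A
# Complex element types: do the eager conjugate transpose, returning a copy.
myctranspose(A::AbstractMatrix{<:Complex}) = conj.(permutedims(A))
```

The generic-programming wart is visible here: mutating myctranspose(A) affects A in the real case but not in the complex one.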

@toivoh (Contributor) commented May 22, 2014

I would be inclined to think that negating the imaginary part of a complex number is pretty cheap compared to loading from / storing to memory.

@StefanKarpinski (Member)

Yes, I suspect that's true.

@pygy mentioned this pull request May 24, 2014
@Jutho (Contributor) commented May 25, 2014

That's probably true indeed. I just started wondering about this because I noticed that the current code for getindex in this PR does not take the complex conjugation into account. But that's perfectly fine; it is WIP after all.

@simonbyrne (Contributor)

I was playing around with this idea over the weekend, and while it makes things drastically simpler in general (linalg/matmul.jl can be reduced to a handful of methods), some new problems are introduced, all due to having Transpose{T,1} <: AbstractVector{T}.

Whereas Transpose{T,2} still behaves exactly like a Matrix{T}, a Transpose{T,1} can act differently from a Vector{T} (e.g. *(a::Vector{T}, b::Transpose{T,1}) and *(a::Transpose{T,1}, b::Vector) return very different objects), and so almost any time an AbstractVector occurs, additional methods are required to differentiate between the two behaviours. I don't think this approach is sustainable over the whole Julia ecosystem.

As a result, we would probably need to split Transpose into separate types, say TransposeVector and TransposeMatrix. There are two options that I've thought of:

  1. Make TransposeVector{T} <: AbstractMatrix{T}: if we were to follow this strictly, it would mean that we could no longer have u' v == dot(u,v).
  2. Make TransposeVector{T} <: AbstractArray{T,-1} (or some other non-standard array type), to emphasise that its behaviour differs from a standard Array.
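The tension in option 1 is easy to state in today's Julia, where this design question eventually landed as RowVector and later Adjoint: u' v must be a scalar while u * v' is a matrix, so a transposed vector cannot simply behave as a 1×n AbstractMatrix:

```julia
u = [1.0, 2.0]
v = [3.0, 4.0]

u' * v   # a scalar, i.e. dot(u, v)
u * v'   # a 2×2 outer-product matrix
```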

@simonbyrne (Contributor)

To follow up, here's my attempt: 1919d59 (branched from @andreasnoackjensen's code). It defines separate types Transpose/ConjTranspose (for matrices), and Covector/ConjCovector (for vectors).

It almost halves the number of lines in matmul.jl, and significantly reduces duplicated code (e.g. each BLAS call appears only once).

@StefanKarpinski (Member)

@simonbyrne – that seems like a fairly sane, targeted approach. I guess it feels a bit limited not to make the transpose construction more general, but since we a) can't agree on what a higher order transpose should mean and b) it causes all kinds of generic programming problems, as you outlined, maybe the targeted approach for just vectors and matrices is best.

It also occurs to me that we could use Array{T,-n} as the actual transposed types rather than using a wrapper. Arrays with negative dimensionality would count their dimensions downward instead of upward. Then Array{T,-1} could be covector, Array{T,-2} a "comatrix", and so on. A comatrix would basically be a row-major matrix.

@simonbyrne (Contributor)

@StefanKarpinski That's perhaps worth considering, though it would mean that every AbstractMatrix would need to define a corresponding type for its transpose. And we would still need a Conjugate wrapper type for dealing with complex-transpose.

@lindahua (Contributor)

We should be careful about the idea of Array{T,-N}. Many functions defined over Array are not supposed to work with such types.

The Transpose type is cleaner and more explicit.

@jiahao (Member) commented May 28, 2014

I'm not a fan of Array{T,-N}: it doesn't generalize correctly beyond N=2, since for higher ranks you can have nontrivial behaviors associated with different dimensions being "up" or "down". I'd rather just have TransposeVector be its own thing than try to shoehorn it into the type hierarchy.

@StefanKarpinski (Member)

I generally agree that the Transpose and Covector parametric types are probably a safer way to go.


@simonbyrne (Contributor)

One more brainteaser: what should be the (conjugate)transpose of a matrix of matrices? Should we also transpose the individual entries?
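For the record, the answer Julia eventually settled on is recursive transposition: transposing an array of matrices also transposes the individual entries. In a modern Julia session:

```julia
M = [[1 2; 3 4]]      # a 1-element Vector of 2×2 matrices
t = transpose(M)      # lazy, and recursive: entries are transposed too
t[1, 1]               # behaves as [1 3; 2 4]
```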

ctransposeblock!(B,A,m,n-newn,offseti,offsetj+newn)
end
return B
end
Review comment (Member):

Boy, there's a lot of near-duplicated code here. I'd really prefer a generic implementation in which you passed transpose or ctranspose as an argument.
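A hedged sketch of the generic version the comment asks for, with the element operation passed as an argument (the name `transpose_op!` is invented; a real implementation would keep the cache-blocking of the original):

```julia
# One loop serves both cases: f = identity for transpose, f = conj for ctranspose.
function transpose_op!(f, B::AbstractMatrix, A::AbstractMatrix)
    m, n = size(A)
    size(B) == (n, m) || throw(DimensionMismatch("B must be $n×$m"))
    for j in 1:n, i in 1:m
        B[j, i] = f(A[i, j])
    end
    return B
end
```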

@timholy (Member) commented Apr 29, 2016

Nice!

How did the benchmarks work out? Naturally, I'm only concerned about performance for element-by-element access patterns. Does LLVM hoist the conjugated test out of a loop? If not, any chance that changing it to an if statement would cause it to be hoisted? (Without hoisting, your ifelse is much better.) If there's no hoisting, what's the performance hit?
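On the hoisting question: when the conjugation flag is a type parameter, as in the Transpose immutable above, the branch is resolved while each specialization is compiled, so there is no runtime test left to hoist. A hedged sketch with invented names:

```julia
# The Conj flag is part of the type, so `Conj ? ... : ...` is folded away
# during specialization; no per-element runtime branch remains.
struct LazyAdj{T,S<:AbstractMatrix{T},Conj} <: AbstractMatrix{T}
    data::S
end
lazyadj(A::AbstractMatrix{T}, conjugated::Bool) where {T} =
    LazyAdj{T,typeof(A),conjugated}(A)

Base.size(t::LazyAdj) = reverse(size(t.data))
Base.getindex(t::LazyAdj{T,S,Conj}, i::Int, j::Int) where {T,S,Conj} =
    Conj ? conj(t.data[j, i]) : t.data[j, i]
```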

@andreasnoack (Member, Author) commented Apr 29, 2016

Thanks for the comments; I'll look into them. There is still work to do here, and more benchmarks are needed to check whether the new methods are efficient. Right now, I wanted to direct attention to the compile-time issue, which is quite visible in

https://travis-ci.org/JuliaLang/julia/jobs/126553560#L1648

My feeling is that unless something can be done about the compile time, this change is simply not feasible.

@timholy (Member) commented Apr 29, 2016

Yeah, I noticed that. Quite a bummer.

@StefanKarpinski (Member)

Bump. Needs to happen soon to go into 0.6.

@JeffBezanson (Member)

Is this still blocked on a compiler performance problem? Is there an issue for it?

@andreasnoack (Member, Author)

I don't think the compile time is a blocker anymore. The huge number of method compilations could be avoided by making sure that elements in A, B, and C get promoted before calling the actual multiplication routine.
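A hedged sketch of that strategy (names invented here): promote once at the boundary, so every mixed element-type combination funnels into one homogeneous kernel signature instead of compiling a kernel per combination:

```julia
function promoted_mul(A::AbstractMatrix, B::AbstractMatrix)
    T = promote_type(eltype(A), eltype(B))
    return _mul_kernel(convert(AbstractMatrix{T}, A),
                       convert(AbstractMatrix{T}, B))
end

# Only this homogeneous signature is ever specialized:
_mul_kernel(A::AbstractMatrix{T}, B::AbstractMatrix{T}) where {T} = A * B
```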

@StefanKarpinski (Member)

That sounds promising. It would be great if this could get done in time for the 0.6 feature freeze at end of year.

@andyferris (Member)

I said it in the line comments, but I would prefer that we punt entirely on conjugation here. I'm hoping that between my TransposedVector, this Transpose/TransposedMatrix and a new Conj wrapper, we could replace the Ac_mul_Bc class of functions with methods on *.

I feel that making the conjugation more modular would be a conceptual advantage (and probably better for developers). For instance, you can have Conj of a higher-dimensional array (but not a transpose).
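A hedged sketch of such a wrapper (the name `ConjArray` is used here for illustration): unlike a transpose, it is well defined for any number of dimensions:

```julia
# Lazy elementwise conjugate of any array, independent of dimensionality.
struct ConjArray{T,N,S<:AbstractArray{T,N}} <: AbstractArray{T,N}
    data::S
end
ConjArray(A::AbstractArray{T,N}) where {T,N} = ConjArray{T,N,typeof(A)}(A)

Base.size(c::ConjArray) = size(c.data)
Base.getindex(c::ConjArray, I::Int...) = conj(c.data[I...])
```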

@JeffBezanson (Member)

I believe we're aiming for RowVector in 0.6, and TransposedMatrix later, so changing milestone.

@JeffBezanson modified the milestones: 1.x, 0.6.0, 1.0 on Jan 4, 2017
@ViralBShah (Member) commented Nov 19, 2017

@andreasnoack Are we likely to be able to make this by feature freeze?

@ViralBShah added the "triage" label (This should be discussed on a triage call) on Nov 19, 2017
@mbauman (Member) commented Nov 19, 2017

A much more up-to-date version is in #23424.

@andreasnoack (Member, Author)

Yes. This is not relevant anymore. The current efforts are in #23424.

@andreasnoack deleted the anj/transpose branch on November 19, 2017
@JeffBezanson removed the "triage" label on Nov 20, 2017
Labels: linear algebra