
At_mul_B! has different methods for sparse and dense matrices #9930

Closed
hofmannmartin opened this issue Jan 26, 2015 · 21 comments
Assignees
Labels
linear algebra Linear algebra

Comments

@hofmannmartin

Just recently I wanted to implement an algorithm. I used the method At_mul_B!(α::Number, A::SparseMatrixCSC{Tv,Ti<:Integer}, x::AbstractArray{T,1}, β::Number, y::AbstractArray{T,1}), which works fine with sparse matrices but has no counterpart for dense matrices. Its dense counterpart, BLAS.gemv!, does not work with sparse matrices. As a workaround I added my own dispatch for At_mul_B!.

Is this behaviour intended (I am using 0.3.6), or did I get terribly lost?

In my opinion it would be great to have basic linear algebra working on dense and sparse matrices with the same syntax.
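A minimal sketch of the kind of workaround described, assuming Julia 0.3-era syntax (the method and its forwarding are illustrative, not the actual patch):

```julia
# Hypothetical dense counterpart to the sparse five-argument At_mul_B!:
# forward to BLAS.gemv!, which computes y = α*A'x + β*y when tA = 'T'.
function At_mul_B!{T<:Union(Float32,Float64)}(α::Number, A::StridedMatrix{T},
                                              x::StridedVector{T},
                                              β::Number, y::StridedVector{T})
    Base.LinAlg.BLAS.gemv!('T', convert(T, α), A, x, convert(T, β), y)
end
```

With such a method in place, the same At_mul_B!(α, A, x, β, y) call works whether A is sparse or dense.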

@simonster
Member

@andreasnoack has commented on this before. I think it would be good to have A_mul_B! etc. for all relevant types that accept α and β as in gemv/gemm, although I'd put the output array first.

@ViralBShah ViralBShah added the linear algebra Linear algebra label Jan 26, 2015
@tkelman
Contributor

tkelman commented Jan 26, 2015

I thought we were trying to get rid of A_mul_B! eventually...

@timholy
Sponsor Member

timholy commented Jan 26, 2015

I thought we were trying to get rid of A_mul_B! eventually...

I know it has been complained about, but those complaints are just wrong. 😜 While it is sometimes used to work around the (formerly) high cost of memory allocation, and that might be considered a "performance hack," at its root that is only one of this function's many uses. Here's one I've used myself:

A = load_10^12-by-2_matrix_from_disk_by_mmap()
B = rand(2,2)
C = A*B  # oops, crash---not enough memory to hold the result
C = mmap_array(Float64, (10^12, 2), iostream)
A_mul_B!(C, A, B)   # works just dandily

Or, what if you want the result to be a SharedArray? A DArray? The issue is that using the inputs to control the type of the output---possibly dispatching to different algorithms---is too powerful a trick to throw away.

I'll go farther and say most nontrivial computational routines should be written in ! format, with a non-! wrapper as a convenient option---not the other way around. That's as true today as it was before we had this awesome new GC 😄.

@tkelman
Contributor

tkelman commented Jan 26, 2015

To clarify, the issue with A_mul_B! etc is not the in-place variants, it's the inelegant naming zoo due to how it handles transpose and conjugation that I thought we were planning on fixing. Of course some form of in-place matrix multiplication is going to be necessary for very large-scale problems. It's a natural mapping to BLAS gemm that way as well. But the A_mul_B! names are something that only we do, and I don't think anyone likes them.

And if you want to do sparse-sparse in-place matmul, then you want a different low-level API. You would need to split the symbolic phase from the numeric phase, since most applications of sparse matrices are repeatedly using arrays with the same nonzero structure but different values.
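The symbolic/numeric split could be sketched roughly like this (every name here is hypothetical; nothing like this exists in Base):

```julia
# Hypothetical two-phase sparse-sparse multiply: compute the nonzero
# pattern of C = A*B once, then refill the values as A and B change,
# with no further allocation.
immutable SpGEMMPlan{Ti<:Integer}
    colptr::Vector{Ti}   # column pointers of the result pattern
    rowval::Vector{Ti}   # row indices of the result pattern
end

plan = spgemm_symbolic(A, B)      # expensive, done once per sparsity pattern
C = spgemm_numeric!(plan, A, B)   # cheap, repeated as the values change
```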

@StefanKarpinski
Sponsor Member

I'd be perfectly happy to have mul! that does in-place multiplication. It's the explosion of names as Tony said that's the problem.

@hofmannmartin
Author

I personally do not dislike A_mul_B!.

And I agree that it would be nice to have a unified naming convention for linear algebra. That is what I wanted to point out. Thanks for the quick response.

@timholy
Sponsor Member

timholy commented Jan 26, 2015

Ah, sure, name-condensation is a perfectly reasonable thing to wish for. I'd be +1 for that.

And if you want to do sparse-sparse in-place matmul, then you want a different low-level API

Perhaps, although one wonders whether the output argument could be a container that includes the results of the symbolic phase. But I haven't thought about this particular case seriously.

@jiahao
Member

jiahao commented Jan 26, 2015

Related: #5332

I believe that for dense matrices and vectors (and BigFloats and BigInts for that matter), the question of how to reuse intermediaries is equivalent to a register allocation problem. In principle one could write an abstract interpreter to do both shape analysis (to infer dimensions of arrays) and liveness analysis, and then one would have all the information needed to reuse allocated arrays aggressively. I think to some extent an algorithm for solving the reuse problem for dense matrices can be reused for sparse, but computing the resulting shapes due to fill-in would be more complicated (and hence a more interesting research problem :) ).

@andreasnoack
Member

@hofmannmartin I think you are completely right and that all the dense multiplication methods should take α and β arguments that default to one and zero, respectively. I actually implemented this a while ago, but it never got merged and the branch is probably very outdated now; I'll take a look and see how bad it is.

@tkelman is right that the Sparse*Sparse case should probably be handled differently, but I guess we could use the same structure for Dense = αSparse*Dense + βDense without problems.

Finally, just to clarify the "get rid of" part of Ax_mul_Bx!: we'll probably get a lazy transpose type for at least matrices, and in that case we don't need the Ax_ and Bx_ parts but can simply write something like mul!(2.0, Transpose(A), B, 0.0, C) if I am to decide, or mul!(C, 0.0, 2.0, Transpose(A), B) (where should β go?) if @simonster and @timholy are to decide.

@simonster
Member

IMO the most logical signature would be mul!(C, A, B, α=1, β=0).

@StefanKarpinski
Sponsor Member

IMO the most logical signature would be mul!(C, A, B, α=1, β=0).

+1

@timholy
Sponsor Member

timholy commented Jan 26, 2015

I like that one too, but (s)he who writes the code gets at least two votes, three if it's an odd-numbered Thursday.

@hofmannmartin
Author

@simonster is it intended to be a paradigm throughout Julia that foo! takes the variable to be changed first, or is that just for linear algebra? In the latter case one might argue that the same input ordering as BLAS makes sense.

@ALL Love programming in Julia. Thanks to all.

@simonster
Member

@hofmannmartin Yes, generally Julia functions that perform operations in place take the destination array first (further discussion in #8246), although function-valued arguments have precedence since they need to come first for do block syntax (e.g. map!(function, destination, collection...)).

@andreasnoack
Member

@hofmannmartin We have had this discussion in a couple of places, but in short my arguments are that

  1. BLAS is well established for matrix multiplication and triangular division
  2. Sometimes it is awkward to have the output first, e.g. applying rotations or reflectors from the left with A_mul_B!(G,A), where A is updated in place. The same applies to triangular multiplication.
  3. The ordering of the arguments in GEMM is the same as when writing the expression αAB+βC. This argument allows for some other variant, e.g. the one I had in my last post.

So my conclusion is that having a strict rule for the position of the output is not desirable.
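The rotation case above can be sketched as follows (using the modern LinearAlgebra spelling lmul!(G, A); in 0.3-era Julia this was A_mul_B!(G, A)):

```julia
using LinearAlgebra

A = [4.0 1.0;
     3.0 2.0]
# Build a Givens rotation that zeroes A[2,1] when applied from the left.
G, r = givens(A[1,1], A[2,1], 1, 2)
lmul!(G, A)   # updates A in place; the mutated argument comes second
```

Here the mutated array is the second argument, which is exactly the awkwardness for a strict output-first rule.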

@timholy
Sponsor Member

timholy commented Jan 27, 2015

I think it's not even possible to have a strict rule. collect!(output, inputs...) needs to have the output first, but distribute!(input, outputs...) needs to have the outputs last. It's just that in julia, collect! seems much more common than distribute!, so usually the output comes first.

As a separate point, there's a part of me that gets really annoyed at the two-argument forms A_mul_B!(G, A), especially when the output comes second (which is opposite the 3-argument form). But there's also a certain logic for them (aliasing matrix multiplication and all that).

@RaulDurand

What about a macro:

@mul C = A*B
@mul C = A*B' 
@mul C = A'*B' 
@mul D += A*B*C' 

to call the right function, treating transposes and avoiding temporaries?

@andreasnoack
Member

See https://github.com/simonbyrne/InplaceOps.jl, but to my knowledge, it is not possible to do the fourth calculation in place.

@andreasnoack
Member

I think we should do #9930 (comment) because it aligns with the three argument version we already have

@andreasnoack andreasnoack self-assigned this Sep 1, 2017
@tkelman
Contributor

tkelman commented Sep 1, 2017

we should consider giving the scalar multiples more descriptive names, and possibly making them keyword arguments

@oscardssmith
Member

We've done this, right?
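For reference, the signature the thread converged on is essentially what shipped: since Julia 1.3, LinearAlgebra's five-argument mul!(C, A, B, α, β) computes C = A*B*α + C*β in place, uniformly for dense and sparse operands, and lazy transposes replaced the Ax_mul_Bx! names:

```julia
using LinearAlgebra, SparseArrays

A = sprandn(100, 100, 0.05)   # sparse
B = randn(100, 3)             # dense
C = zeros(100, 3)

mul!(C, transpose(A), B)             # C = Aᵀ*B, in place
mul!(C, transpose(A), B, 2.0, 1.0)   # C = 2Aᵀ*B + C (five-argument form)
```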
