
Always return sparse matrices for spre, spost, and sprepost #162

Merged: 5 commits into qutip:main on Jun 10, 2024

Conversation

@ytdHuang (Member) commented on Jun 9, 2024

Currently, the matrix type returned by spre depends on the input, while spost and sprepost always return a sparse matrix.

For superoperators, sparse matrices are more practical for large systems. Therefore, I suggest that spre should always return a sparse matrix as well.

I also found that if A is already a sparse matrix, sparse(A) makes a copy of it. Since we apply the Kronecker product right away when generating superoperators, I don't think we need this copy.
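The copy behavior mentioned above can be checked directly with SparseArrays alone (a minimal sketch, not part of the PR):

```julia
using SparseArrays

# Build a small sparse matrix.
S = sparse([0.0 1.0; 1.0 0.0])

# Calling sparse() on an already-sparse matrix allocates a new copy...
S2 = sparse(S)
@assert S2 == S    # same values
@assert S2 !== S   # ...but a different object (an extra allocation)

# ...whereas a sparse input can be fed to kron() directly, with no copy.
L = kron(S, S)
@assert size(L) == (4, 4)
```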

Therefore, I added internal functions for spre, spost, and sprepost that use multiple dispatch to handle the different input types (dense or sparse) of the Operator.

I believe this could improve the performance when creating the Liouvillian.
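The dispatch idea could be sketched as follows (the helper name `_spre_mat` is hypothetical and the identity handling is simplified; this is not the actual QuantumToolbox implementation):

```julia
using SparseArrays
using LinearAlgebra: I

# Dense input: convert to sparse once, then build the superoperator matrix.
_spre_mat(A::AbstractMatrix) = kron(sparse(I, size(A, 1), size(A, 1)), sparse(A))

# Sparse input: use it directly -- no sparse(A) call, so no extra copy.
_spre_mat(A::SparseMatrixCSC) = kron(sparse(I, size(A, 1), size(A, 1)), A)

# Both paths give the same sparse result, regardless of the input type.
A_dense  = [0.0 1.0; 1.0 0.0]
A_sparse = sparse(A_dense)
@assert _spre_mat(A_dense) == _spre_mat(A_sparse)
@assert _spre_mat(A_dense) isa SparseMatrixCSC
```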

codecov bot commented on Jun 9, 2024

Codecov Report

Attention: Patch coverage is 95.65217% with 1 line in your changes missing coverage. Please review.

Project coverage is 92.94%. Comparing base (39b436d) to head (f8e90de).

Files                        Patch %   Lines
src/qobj/superoperators.jl   95.65%    1 Missing ⚠️

Additional details and impacted files:
@@            Coverage Diff             @@
##             main     #162      +/-   ##
==========================================
+ Coverage   92.89%   92.94%   +0.05%     
==========================================
  Files          27       28       +1     
  Lines        1956     1971      +15     
==========================================
+ Hits         1817     1832      +15     
  Misses        139      139              

☔ View full report in Codecov by Sentry.

@ytdHuang (Member, Author) commented on Jun 9, 2024

It seems that kron is not supported for Transpose matrices before Julia v1.10; I will add a multiple-dispatch fallback later.

@ytdHuang ytdHuang closed this Jun 9, 2024
@ytdHuang ytdHuang deleted the dev/superoperator branch June 9, 2024 10:26
@ytdHuang ytdHuang restored the dev/superoperator branch June 9, 2024 10:26
@ytdHuang ytdHuang reopened this Jun 9, 2024
@albertomercurio (Member) left a comment

So the main change is to define sparse(transpose(sparse(…))) or transpose(sparse(…)) depending on the Julia version?

And also to convert a dense matrix to a sparse one.

Are these the main changes?

Are these changes compatible with GPU arrays? As far as I remember, sparse(transpose(sparse(…))) was needed for GPUs.

src/qobj/superoperators.jl — review thread (outdated, resolved)
@ytdHuang (Member, Author) commented on Jun 10, 2024

> So the main change is to define sparse(transpose(sparse(…))) or transpose(sparse(…)) depending on the Julia version?
>
> And also to convert a dense matrix to a sparse one.
>
> Are these the main changes?
>
> Are these changes compatible with GPU arrays? As far as I remember, sparse(transpose(sparse(…))) was needed for GPUs.

The main change is to remove the unnecessary sparse() call, because this function makes a copy if the input matrix is already sparse. We only need to convert dense matrices to sparse ones.

The kron function supports the Transpose input type starting from Julia v1.10.

That is, instead of calling

kron(sparse(transpose(sparse(A))), B)

as we did before, starting from Julia v1.10 we can directly call

kron(transpose(sparse(A)), B)

We can even drop the sparse() call entirely if A is already a sparse matrix:

kron(transpose(A), B)

But we still need to support older Julia versions, so I added multiple dispatch depending on the Julia version.
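The version-dependent branch described above could be sketched like this (the helper `_spost_kron` is hypothetical; the real PR code may differ):

```julia
using SparseArrays
using LinearAlgebra: I

# Build kron(transpose(B), Id), as used for a spost-style superoperator.
@static if VERSION >= v"1.10"
    # kron accepts a Transpose-wrapped sparse matrix from Julia v1.10 on.
    _spost_kron(B, Id) = kron(transpose(sparse(B)), Id)
else
    # Older versions need the transpose materialized by an extra sparse() call.
    _spost_kron(B, Id) = kron(sparse(transpose(sparse(B))), Id)
end

B   = [0.0 1.0; 0.0 0.0]
Id2 = sparse(I, 2, 2)

# Either branch agrees with the plain dense computation.
expected = kron(Matrix(transpose(B)), Matrix(Id2))
@assert Matrix(_spost_kron(B, Id2)) == expected
```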

As for GPU support, I have merged from main (to include the CUDA runtests added in #163).
I've tested on my local PC, and all CUDA extension tests pass.
Since the example in the runtests includes mesolve, I think the code here supports GPU computing.

Maybe they extended some methods in CUDA.CUSPARSE so that it is now available.

@albertomercurio (Member) commented

Ok. Is it also supported for GPUs?

@ytdHuang (Member, Author) commented

> Ok. Is it also supported for GPUs?

I also tried the following cases:

using CUDA, QuantumToolbox
Xs = cu(sigmax()) # sparse
Xd = cu(sparse_to_dense(sigmax())) # dense

spre(Xs)
spre(Xd)
spost(Xs)
spost(Xd)

They all worked, and each returned

Quantum Object:   type=SuperOperator   dims=[2]   size=(4, 4)
4×4 CUDA.CUSPARSE.CuSparseMatrixCSC{ComplexF64, Int32} with 4 stored entries:
      ⋅            ⋅       1.0 + 0.0im       ⋅
      ⋅            ⋅            ⋅       1.0 + 0.0im
 1.0 + 0.0im       ⋅            ⋅            ⋅
      ⋅       1.0 + 0.0im       ⋅            ⋅

on my local PC.

@albertomercurio (Member) commented

Perfect. Let's wait for the runtests and then we can merge it.

@ytdHuang (Member, Author) commented

@albertomercurio
Some instability issues occurred in the ODE solvers under macOS.
I don't know why, but I think this is beyond our scope. Maybe we can just merge it, since the other runtests look fine.

@albertomercurio albertomercurio merged commit 0d9cf1b into qutip:main Jun 10, 2024
12 of 14 checks passed
@ytdHuang ytdHuang deleted the dev/superoperator branch June 11, 2024 08:11