
Storage type #23

Merged
merged 3 commits into from
Apr 11, 2024
Conversation

@hakkelt (Contributor) commented Apr 10, 2024

After an embarrassingly long delay, I finally managed to allocate time to implement the modifications discussed in #17. :)

The initial goal was to make it possible to use AbstractOperators with CUDA.jl or other GPU packages; the main obstacle was that operators and their combinations allocated buffers on the CPU. With this modification, one can override two new functions, domainStorageType and codomainStorageType, which determine the type of the buffers/outputs. That way, one can implement CPU ↔ GPU and GPU ↔ GPU operators, or even CPU ↔ CPU operators that operate on AbstractArrays other than Array (e.g. NamedDimsArray).

The default implementation of domainStorageType/codomainStorageType for AbstractOperators returns Array (or ArrayPartition when the domain/codomain size of the operator is a tuple of tuples); therefore, no breaking changes are introduced, and all tests pass without modification.
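As a rough sketch of how an override might look (the operator type `MyGPUOp` and the exact method signatures are hypothetical; only the function names `domainStorageType`/`codomainStorageType` come from this PR):

```julia
using AbstractOperators, CUDA

# Hypothetical operator whose forward map lives on the GPU.
struct MyGPUOp <: AbstractOperator
    sz::NTuple{2,Int}
end

# Override the two new hooks so that internal buffers and outputs are
# allocated as CuArrays instead of plain Arrays.
# (Signatures assumed here; check the merged code for the actual API.)
AbstractOperators.domainStorageType(::MyGPUOp) = CuArray{Float32}
AbstractOperators.codomainStorageType(::MyGPUOp) = CuArray{Float32}
```

With defaults returning Array, existing CPU operators are unaffected; only operators that opt in via these overrides change their allocation behavior.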

@nantonel (Member) commented:
Hey thanks for this, looks good to me! Did you actually manage to run AbstractOperators on GPUs?

@nantonel nantonel merged commit 7413876 into kul-optec:master Apr 11, 2024
7 checks passed
@hakkelt (Contributor, Author) commented Apr 14, 2024

No, I haven't tried it yet. Maybe I'll do it in the next couple of days.
