Transition GPUArrays to KernelAbstractions
leios committed Jul 23, 2024
1 parent d0492a2 commit 3560049
Showing 27 changed files with 374 additions and 689 deletions.
11 changes: 7 additions & 4 deletions .buildkite/pipeline.yml
@@ -10,7 +10,7 @@ steps:
println("--- :julia: Instantiating project")
Pkg.develop(; path=pwd())
Pkg.develop(; name="CUDA")
Pkg.add(; url="https://github.com/leios/CUDA.jl/", rev="GtK_trans")
println("+++ :julia: Running tests")
Pkg.test("CUDA"; coverage=true)'
@@ -31,10 +31,13 @@ steps:
println("--- :julia: Instantiating project")
Pkg.develop(; path=pwd())
Pkg.develop(; name="oneAPI")
Pkg.add(; url="https://github.com/leios/oneAPI.jl/", rev="GtK_transition")
println("+++ :julia: Building support library")
include(joinpath(Pkg.devdir(), "oneAPI", "deps", "build_ci.jl"))
filename = Base.find_package("oneAPI")
filename = filename[1:findfirst("oneAPI.jl", filename)[1]-1]
filename *= "../deps/build_ci.jl"
include(filename)
Pkg.activate()
println("+++ :julia: Running tests")
@@ -56,7 +59,7 @@ steps:
println("--- :julia: Instantiating project")
Pkg.develop(; path=pwd())
Pkg.develop(; name="Metal")
Pkg.add(; url="https://github.com/leios/Metal.jl/", rev="GtK_transition")
println("+++ :julia: Running tests")
Pkg.test("Metal"; coverage=true)'
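
For reference, the oneAPI hunk above locates `build_ci.jl` by trimming the string returned from `Base.find_package`. A more direct sketch of the same lookup (an illustration only, assuming the standard package layout where `find_package` resolves to `.../oneAPI/src/oneAPI.jl`):

```julia
# Resolve .../oneAPI/src/oneAPI.jl to the package root, then to the build script.
pkg_src  = Base.find_package("oneAPI")
pkg_root = dirname(dirname(pkg_src))
include(joinpath(pkg_root, "deps", "build_ci.jl"))
```
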
1 change: 1 addition & 0 deletions Project.toml
@@ -5,6 +5,7 @@ version = "10.3.0"
 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
+KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
 LLVM = "929cbde3-209d-540e-8aea-75f648917ca0"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
5 changes: 2 additions & 3 deletions docs/src/index.md
@@ -9,10 +9,9 @@ will get a lot of functionality for free. This will allow to have multiple GPUAr
 implementation for different purposes, while maximizing the ability to share code.
 
 **This package is not intended for end users!** Instead, you should use one of the packages
-that builds on GPUArrays.jl. There is currently only a single package that actively builds
-on these interfaces, namely [CuArrays.jl](https://github.com/JuliaGPU/CuArrays.jl).
+that build on GPUArrays.jl, such as [CUDA](https://github.com/JuliaGPU/CUDA.jl), [AMDGPU](https://github.com/JuliaGPU/AMDGPU.jl), [oneAPI](https://github.com/JuliaGPU/oneAPI.jl), or [Metal](https://github.com/JuliaGPU/Metal.jl).
 
-In this documentation, you will find more information on the interface that you are expected
+This documentation is meant for developers who wish to implement GPUArrays support for another GPU backend; it covers the features you will need
 to implement, the functionality you gain by doing so, and the test suite that is available
 to verify your implementation. GPUArrays.jl also provides a reference implementation of
 these interfaces on the CPU: The `JLArray` array type uses Julia's parallel programming
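
As the docs note, `JLArray` is the CPU reference implementation of these interfaces. A quick usage sketch (hedged; this assumes the JLArrays sub-package shipped in this repository):

```julia
using JLArrays

a = JLArray(rand(Float32, 16))  # host-backed reference GPU array
b = a .+ 1f0                    # broadcast runs through the GPUArrays machinery
Array(b)                        # copy back to a plain Array
```
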
57 changes: 18 additions & 39 deletions docs/src/interface.md
@@ -1,53 +1,32 @@
 # Interface
 
 To extend the above functionality to a new array type, you should use the types and
-implement the interfaces listed on this page. GPUArrays is design around having two
-different array types to represent a GPU array: one that only ever lives on the host, and
+implement the interfaces listed on this page. GPUArrays is designed around having two
+different array types to represent a GPU array: one that exists only on the host, and
 one that actually can be instantiated on the device (i.e. in kernels).
+Device functionality is then handled by [KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl).
 
-## Device functionality
+## Host abstractions
 
-Several types and interfaces are related to the device and execution of code on it. First of
-all, you need to provide a type that represents your execution back-end and a way to call
-kernels:
+You should provide an array type that builds on the `AbstractGPUArray` supertype, such as:
 
-```@docs
-GPUArrays.AbstractGPUBackend
-GPUArrays.AbstractKernelContext
-GPUArrays.gpu_call
-GPUArrays.thread_block_heuristic
-```
+```
+mutable struct CustomArray{T, N} <: AbstractGPUArray{T, N}
+    data::DataRef{Vector{UInt8}}
+    offset::Int
+    dims::Dims{N}
+    ...
+end
+```
 
-You then need to provide implementations of certain methods that will be executed on the
-device itself:
-
-```@docs
-GPUArrays.AbstractDeviceArray
-GPUArrays.LocalMemory
-GPUArrays.synchronize_threads
-GPUArrays.blockidx
-GPUArrays.blockdim
-GPUArrays.threadidx
-GPUArrays.griddim
-```
-
-## Host abstractions
-
-You should provide an array type that builds on the `AbstractGPUArray` supertype:
-
-```@docs
-AbstractGPUArray
-```
-
-First of all, you should implement operations that are expected to be defined for any
-`AbstractArray` type. Refer to the Julia manual for more details, or look at the `JLArray`
-reference implementation.
-
-To be able to actually use the functionality that is defined for `AbstractGPUArray`s, you
-should provide implementations of the following interfaces:
-
-```@docs
-GPUArrays.backend
-```
+This will allow your defined type (in this case `CustomArray`) to use the GPUArrays interface where available.
+To be able to actually use the functionality that is defined for `AbstractGPUArray`s, you need to define the backend, like so:
+
+```
+import KernelAbstractions
+struct CustomBackend <: KernelAbstractions.GPU end
+KernelAbstractions.get_backend(a::CA) where CA <: CustomArray = CustomBackend()
+```
+
+There are numerous examples of potential interfaces for GPUArrays, such as with [JLArrays](https://github.com/JuliaGPU/GPUArrays.jl/blob/master/lib/JLArrays/src/JLArrays.jl), [CuArrays](https://github.com/JuliaGPU/CUDA.jl/blob/master/src/gpuarrays.jl), and [ROCArrays](https://github.com/JuliaGPU/AMDGPU.jl/blob/master/src/gpuarrays.jl).
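
Although not spelled out in the docs above, the payoff of defining a backend is that device code is now written once against KernelAbstractions instead of per-backend intrinsics like `gpu_call` or `threadidx`. A minimal sketch (the `add_one!` kernel is invented for illustration; `@kernel`, `@index`, `get_backend`, and `synchronize` are standard KernelAbstractions API):

```julia
using KernelAbstractions

# A trivial elementwise kernel: written once, runnable on any KA backend.
@kernel function add_one!(a)
    i = @index(Global, Linear)
    @inbounds a[i] += 1
end

# Launching against whichever backend owns an array `a`:
#   backend = KernelAbstractions.get_backend(a)
#   add_one!(backend)(a; ndrange = length(a))  # instantiate for the backend, then launch
#   KernelAbstractions.synchronize(backend)    # wait for the kernel to finish
```
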
6 changes: 3 additions & 3 deletions lib/GPUArraysCore/src/GPUArraysCore.jl
@@ -218,10 +218,10 @@ end
 Gets the GPUArrays back-end responsible for managing arrays of type `T`.
 """
-backend(::Type) = error("This object is not a GPU array") # COV_EXCL_LINE
-backend(x) = backend(typeof(x))
+get_backend(::Type) = error("This object is not a GPU array") # COV_EXCL_LINE
+get_backend(x) = get_backend(typeof(x))
 
 # WrappedArray from Adapt for Base wrappers.
-backend(::Type{WA}) where WA<:WrappedArray = backend(unwrap_type(WA))
+get_backend(::Type{WA}) where WA<:WrappedArray = get_backend(unwrap_type(WA))
 
 end # module GPUArraysCore
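
To make the renamed dispatch chain concrete, here is a hedged sketch; `ToyArray` and `ToyBackend` are invented stand-ins, and only the `get_backend` methods above are real:

```julia
using GPUArraysCore
using LinearAlgebra: transpose

# Minimal made-up array type, just enough to demonstrate backend lookup.
struct ToyArray{T, N} <: AbstractGPUArray{T, N}
    data::Array{T, N}
end
Base.size(a::ToyArray) = size(a.data)

struct ToyBackend end
GPUArraysCore.get_backend(::Type{<:ToyArray}) = ToyBackend()

a = ToyArray(rand(Float32, 4, 4))
GPUArraysCore.get_backend(a)             # ToyBackend(), via get_backend(typeof(a))
GPUArraysCore.get_backend(transpose(a))  # same backend: the wrapper is unwrapped first
# GPUArraysCore.get_backend(zeros(4))    # would error: "This object is not a GPU array"
```

The `WrappedArray` method is what lets views, transposes, and other Base wrappers report the backend of the array they wrap.
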
3 changes: 2 additions & 1 deletion lib/JLArrays/Project.toml
@@ -6,10 +6,11 @@ version = "0.1.5"
 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 GPUArrays = "0c68f7d7-f131-5f86-a1c3-88cf8149b2d7"
+KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 
 [compat]
 Adapt = "2.0, 3.0, 4.0"
 GPUArrays = "10"
-julia = "1.8"
 Random = "1"
+julia = "1.8"