AmgXWrapper does what it claims to, but requires the user to write special code to call it. This sort of functionality can instead be implemented as a PETSc PC, in which case an existing user would not need to modify or even recompile their code; they would just use the run-time option -pc_type amgx. (Note that a PC is allowed to do its own inner iterations, which might be desirable here if AmgX has no zero-copy way to access vector state on the device. If such an interface exists, or is added, it could become zero-copy with PETSc's CUDA vectors and would thus run efficiently with all PETSc Krylov solvers.) This PC implementation could be distributed with PETSc or independently as a source or binary plugin that PETSc loads at run-time. This would be significantly more convenient, especially when solving nonlinear and time-dependent problems using PETSc SNES or TS, and when solving coupled problems using PCFieldSplit.
This would be a major refactor, but actually not that onerous. You would need an initializer that calls PCRegister("amgx", PCCreate_AmgX), a function PCCreate_AmgX(PC) that sets up function pointers, and functions PCSetUp_AmgX(PC) and PCApply_AmgX(PC, Vec, Vec) that call your setA and solve methods. There are lots of examples in PETSc, and we would be happy to provide further details, as well as distribution tips if you prefer to distribute it independently of PETSc.
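To make that concrete, here is a minimal sketch of the skeleton, not a working implementation: AmgXSetA and AmgXSolve below are hypothetical names standing in for your setA and solve bindings and appear only as commented placeholders; the PETSc calls are the real API.

```c
#include <petsc/private/pcimpl.h>  /* exposes the PC struct and its ops table */

typedef struct {
  void *amgx;  /* handle to an AmgX solver instance would live here */
} PC_AmgX;

/* Called by PETSc when the operator is (re)set; hand pc->pmat to AmgX. */
static PetscErrorCode PCSetUp_AmgX(PC pc)
{
  PC_AmgX *ctx = (PC_AmgX *)pc->data;

  PetscFunctionBegin;
  (void)ctx; /* placeholder: pass the operator to AmgX here, e.g.
                AmgXSetA(ctx->amgx, pc->pmat);  (hypothetical binding) */
  PetscFunctionReturn(0);
}

/* One preconditioner application: y = M^{-1} x. AmgX may iterate
   internally, which PETSc explicitly permits for a PC. */
static PetscErrorCode PCApply_AmgX(PC pc, Vec x, Vec y)
{
  PC_AmgX *ctx = (PC_AmgX *)pc->data;

  PetscFunctionBegin;
  (void)ctx; /* placeholder: run the AmgX solve reading x, writing y, e.g.
                AmgXSolve(ctx->amgx, y, x);  (hypothetical binding) */
  PetscFunctionReturn(0);
}

static PetscErrorCode PCDestroy_AmgX(PC pc)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* destroy the AmgX instance here, then free the context */
  ierr = PetscFree(pc->data);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Constructor installed via PCRegister("amgx", PCCreate_AmgX); fills in
   the function-pointer table so -pc_type amgx dispatches to the above. */
PETSC_EXTERN PetscErrorCode PCCreate_AmgX(PC pc)
{
  PC_AmgX       *ctx;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscNew(&ctx);CHKERRQ(ierr);
  pc->data         = (void *)ctx;
  pc->ops->setup   = PCSetUp_AmgX;
  pc->ops->apply   = PCApply_AmgX;
  pc->ops->destroy = PCDestroy_AmgX;
  PetscFunctionReturn(0);
}
```

An application (or a plugin's initializer) would then call PCRegister("amgx", PCCreate_AmgX) once after PetscInitialize, and -pc_type amgx becomes selectable from the command line for any KSP, SNES, or TS solve.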
Agreed. This is actually what we planned to do at the beginning: develop the interface directly in PETSc. That way, we wouldn't even have to change any code in PETSc applications.
But at that time we were not familiar with advanced PETSc usage, and we needed something we could quickly use in our application code, so we decided to write this wrapper.
We'll definitely look into this. Do you have any suggestions about which example code we should look at first?
I would suggest starting with src/ksp/pc/impls/saviennacl/saviennacl.cxx in PETSc 'master', which interfaces to a (GPU-capable) smoothed aggregation solver in ViennaCL. @karlrupp could comment further about CUDA-specific details.