AmgXWrapper as a PETSc PC #17

Open
jedbrown opened this issue Aug 8, 2017 · 2 comments


jedbrown commented Aug 8, 2017

AmgXWrapper does what it claims to, but it requires the user to write special code to call it. This sort of functionality can instead be implemented as a PETSc PC, in which case an existing user would not need to modify or even recompile their code; they would just use the run-time option -pc_type amgx. (Note that a PC is allowed to do its own inner iterations, which might be desirable here if AmgX has no zero-copy way to access vector state on the device. If such an interface exists, or is added, it could become zero-copy with PETSc's CUDA vectors and would thus run efficiently with all PETSc Krylov solvers.) This PC implementation could be distributed with PETSc, or independently as a source or binary plugin that PETSc loads at run time. This would be significantly more convenient, especially when solving nonlinear and time-dependent problems using PETSc SNES or TS, and when solving coupled problems using PCFieldSplit.
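
For illustration, assuming the PC were registered under the name "amgx" as described above, an existing PETSc application (here a hypothetical ./myapp) could switch preconditioners purely at run time:

```console
# Hypothetical invocation: no source changes or recompilation, only run-time options
mpiexec -n 4 ./myapp -ksp_type fgmres -pc_type amgx -ksp_monitor
```

FGMRES is shown because a PC that runs its own inner iterations acts as a variable preconditioner, which flexible Krylov methods tolerate.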

This would be a major refactor, but actually not that onerous. You would need an initializer that calls PCRegister("amgx", PCCreate_AmgX), a function PCCreate_AmgX(PC) that sets up the function pointers, and functions PCSetUp_AmgX(PC) and PCApply_AmgX(PC, Vec, Vec) that call your setA and solve methods. There are lots of examples in PETSc, and we would be happy to provide further details, as well as distribution tips if you prefer to distribute independently of PETSc.
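
For concreteness, a rough sketch of what such a PC implementation could look like follows. The file placement, the PC_AmgX context struct, and the PCRegisterAmgX helper are illustrative assumptions rather than an actual implementation, and the AmgX-specific work (the wrapper's setA and solve) is only indicated in comments.

```c
/* Hypothetical sketch of a PCAMGX implementation (e.g. src/ksp/pc/impls/amgx/amgx.cxx) */
#include <petsc/private/pcimpl.h>   /* private PC header: gives access to pc->data and pc->ops */

typedef struct {
  void *solver;                     /* would hold the AmgXSolver instance from AmgXWrapper */
} PC_AmgX;

static PetscErrorCode PCSetUp_AmgX(PC pc)
{
  PC_AmgX *amgx = (PC_AmgX*)pc->data;

  PetscFunctionBegin;
  /* Initialize AmgX (once) and hand it the preconditioning matrix,
     i.e. roughly solver->setA(pc->pmat) in AmgXWrapper terms. */
  (void)amgx;
  PetscFunctionReturn(0);
}

static PetscErrorCode PCApply_AmgX(PC pc, Vec x, Vec y)
{
  PC_AmgX *amgx = (PC_AmgX*)pc->data;

  PetscFunctionBegin;
  /* Apply the preconditioner: roughly solver->solve(y, x).
     A PC is allowed to run its own inner iterations here. */
  (void)amgx; (void)x; (void)y;
  PetscFunctionReturn(0);
}

static PetscErrorCode PCDestroy_AmgX(PC pc)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Finalize AmgX here, then free the PC context. */
  ierr = PetscFree(pc->data);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

PETSC_EXTERN PetscErrorCode PCCreate_AmgX(PC pc)
{
  PC_AmgX        *amgx;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscNew(&amgx);CHKERRQ(ierr);
  pc->data         = (void*)amgx;
  pc->ops->setup   = PCSetUp_AmgX;
  pc->ops->apply   = PCApply_AmgX;
  pc->ops->destroy = PCDestroy_AmgX;
  PetscFunctionReturn(0);
}

/* Registration: call once at startup, e.g. from PCInitializePackage() if merged
   into PETSc, or from a plugin's PetscDLLibraryRegister_<libname>() entry point
   if distributed independently. (PCRegisterAmgX is an illustrative name.) */
PetscErrorCode PCRegisterAmgX(void)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PCRegister("amgx", PCCreate_AmgX);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

After registration, -pc_type amgx selects this implementation just like any built-in PC.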


piyueh commented Aug 9, 2017

Agreed. This is actually what we planned to do at the beginning: develop the interface directly in PETSc. That way, we wouldn't even have to change any code in PETSc applications.

But at that time we were not familiar with advanced PETSc usage, and we needed something we could quickly use in our application code, so we decided to write this wrapper.

We'll definitely look into this. Do you have any suggestions about which example code we should look at first?


jedbrown commented Aug 9, 2017

I would suggest starting with src/ksp/pc/impls/saviennacl/saviennacl.cxx in PETSc 'master', which interfaces to a (GPU-capable) smoothed aggregation solver in ViennaCL. @karlrupp could comment further about CUDA-specific details.
