Timeline for team level dense linear algebra #39
What are the problem sizes of interest? When you mention a team- or thread-level functor interface for dense linear algebra, you probably want to solve small or mid-range problems. Depending on the problem sizes, the implementation may differ in how it uses fast memory. Do you also need to solve same-size problems, or different problem sizes across teams? From my experience, a team-level interface is effective on GPUs, but on KNL, MKL already provides good performance for almost all problem sizes (except for tiny problems whose dimensions are 3, 5, 10, etc.). Please let me know the application and workflow scenario. The advantage of using kokkoskernels lies in understanding the workflow, not in providing generic versions of libraries that already exist.
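To make "team-level functor interface" concrete, here is a minimal sketch of such a kernel in Kokkos. The `team_gemm` name and signature are illustrative assumptions, not an existing Kokkos Kernels API:

```c++
#include <Kokkos_Core.hpp>

// Hypothetical team-level GEMM: a team computes C = beta*C + alpha*A*B for
// its own (possibly small) matrices, entirely inside the team. Illustrative
// sketch only, not a Kokkos Kernels function.
template <typename MemberType, typename ViewType>
KOKKOS_INLINE_FUNCTION
void team_gemm(const MemberType& member, double alpha,
               const ViewType& A, const ViewType& B,
               double beta, const ViewType& C) {
  const int m = (int)C.extent(0), n = (int)C.extent(1), k = (int)A.extent(1);
  // Parallelize rows of C over the team's threads.
  Kokkos::parallel_for(Kokkos::TeamThreadRange(member, m), [&](const int i) {
    for (int j = 0; j < n; ++j) {
      double sum = 0.0;
      for (int l = 0; l < k; ++l) sum += A(i, l) * B(l, j);
      C(i, j) = beta * C(i, j) + alpha * sum;
    }
  });
}
```

Each team in a `Kokkos::TeamPolicy` launch would call this on its own subviews, so different teams can work on different matrices concurrently.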
Each team must solve a block tri-diagonal linear system with non-uniform block sizes (the size of block row 1 could differ from the size of block row 2, etc.). Those sizes tend to range from ~10 up to ~1000. While at some point in the problem each team will have the same-sized block-tridiagonal linear system, later on those sizes could differ, so it is probably best to assume that each team is solving a different-sized matrix. I currently use MKL for the LU decomposition (dgetrf and dgetrs), but I use a hand-written team-level function for dgemm and dgemv. However, I might go back to using MKL for everything, as I have been running into issues on machines that have more than 1 thread/core, despite enforcing a team size of 1 (non-deterministic failures, difficult to reproduce in tests, etc.).
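For context, the per-system recurrence being described, sketched with host LAPACKE/CBLAS calls (the wrapper function, array layout, and names here are illustrative assumptions, not the actual application code; only the factorization sweep is shown, the solve phase is omitted):

```c++
#include <lapacke.h>
#include <cblas.h>
#include <vector>

// One block-tridiagonal forward elimination with non-uniform block sizes.
// D[i], L[i], U[i] hold the diagonal, sub-, and super-diagonal blocks in
// row-major storage; n[i] is the size of block row i.
void block_tridiag_lu(int nblocks, const std::vector<int>& n,
                      std::vector<double*>& D,   // n[i]   x n[i]
                      std::vector<double*>& L,   // n[i+1] x n[i]
                      std::vector<double*>& U,   // n[i]   x n[i+1]
                      std::vector<std::vector<lapack_int>>& ipiv) {
  for (int i = 0; i < nblocks; ++i) {
    // Factor the diagonal block with partial pivoting (dgetrf).
    LAPACKE_dgetrf(LAPACK_ROW_MAJOR, n[i], n[i], D[i], n[i], ipiv[i].data());
    if (i + 1 == nblocks) break;
    // Solve D[i] * X = U[i] in place (dgetrs), i.e. U[i] := D[i]^{-1} U[i].
    LAPACKE_dgetrs(LAPACK_ROW_MAJOR, 'N', n[i], n[i + 1], D[i], n[i],
                   ipiv[i].data(), U[i], n[i + 1]);
    // Schur-complement update: D[i+1] -= L[i] * U[i]  (dgemm).
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n[i + 1], n[i + 1], n[i],
                -1.0, L[i], n[i], U[i], n[i + 1],
                1.0, D[i + 1], n[i + 1]);
  }
}
```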
I see.
Do you get any performance benefit from your hand-written team-level code compared to MKL? Since this is a dense tridiagonal factorization and solve, do you measure the performance on KNL in terms of gflop/s? We can move to email for detailed information.
Recent versions of MKL have batched BLAS, for DGEMM at least. You might just be able to call that.
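A minimal sketch of what calling MKL's batched DGEMM could look like; `cblas_dgemm_batch` is a real MKL entry point, but the wrapper and the single equal-size group below are illustrative:

```c++
#include <mkl.h>

// One "group" of batch_count same-sized n x n problems, C[b] = A[b] * B[b].
// Problems within a group must share dimensions and leading dimensions,
// which is exactly the restriction discussed later in this thread.
void batched_gemm(MKL_INT batch_count, MKL_INT n,
                  const double** A, const double** B, double** C) {
  CBLAS_TRANSPOSE trans = CblasNoTrans;
  double alpha = 1.0, beta = 0.0;
  cblas_dgemm_batch(CblasRowMajor, &trans, &trans,
                    &n, &n, &n, &alpha, A, &n, B, &n,
                    &beta, C, &n, /*group_count=*/1, &batch_count);
}
```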
I vote emails for much of this. But to answer some questions:
The majority of the time is spent calculating the matrix elements, so it is difficult to say, but either way the performance difference between MKL and my version is in the noise of the total calculation time. This is because the matrix build costs roughly (large constant) * N * N, while the matrix solve costs roughly (small constant) * N * N * N; when N is small, the large constant keeps the build the more expensive step. This project started with the idea of using batched BLAS, but we have since moved away because we cannot always rely on each team having the same matrix sizes.
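To make the constant-factor argument concrete (the constants below are purely illustrative, not measured):

```
build cost ≈ c_b · N²    (c_b large: each matrix element is expensive to compute)
solve cost ≈ c_s · N³    (c_s small: tuned LU/solve kernels)
crossover: c_b · N² = c_s · N³  ⇒  N* = c_b / c_s
e.g. c_b = 500, c_s = 1  ⇒  the O(N³) solve only dominates once N > 500
```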
@mhoemmen Batched BLAS does not make sense for this tridiagonal factorization. A batch operation applies a BLAS operation to "a set of matrices". Multiple (parallel) tridiagonal factorizations can be implemented as a sequence of batched GETRF, TRSM, and GEMM calls, but with batched BLAS we do not exploit data locality at all, even though the sequence of operations completely reuses previous computation results. That is why we need a functor-level interface that can be called inside a parallel_for. We have a compact batched implementation for tridiagonal factorization (LU is implemented without pivoting, since the tridiagonal factorization is used as a preconditioner; do you really need pivoting?). That version is optimized for problem sizes < 32. For problem sizes in the range 100 to 1000, I need to repack data (this is not yet implemented).
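A rough sketch of the fused, functor-level alternative being described. Uniform block size and serial per-team math (via `Kokkos::single`) are simplifications for brevity, and the views, layout, and function name are assumptions; the point is that all three steps of a system run inside one team, so the Schur update of step i is still in fast memory when step i+1 factors it:

```c++
#include <Kokkos_Core.hpp>

// Each team factors one whole block-tridiagonal system end-to-end.
// D, L, U are (system, block, b, b) views; b is the uniform block size.
void fused_block_tridiag_lu(int nsystems, int nblocks, int b,
                            Kokkos::View<double****> D,
                            Kokkos::View<double****> L,
                            Kokkos::View<double****> U) {
  using policy = Kokkos::TeamPolicy<>;
  Kokkos::parallel_for(policy(nsystems, Kokkos::AUTO),
    KOKKOS_LAMBDA(const policy::member_type& member) {
      const int s = member.league_rank();  // one system per team
      Kokkos::single(Kokkos::PerTeam(member), [&]() {
        for (int blk = 0; blk < nblocks; ++blk) {
          auto Dk = Kokkos::subview(D, s, blk, Kokkos::ALL(), Kokkos::ALL());
          // 1) GETRF step: unblocked LU of the diagonal block, no pivoting.
          for (int k = 0; k < b; ++k)
            for (int i = k + 1; i < b; ++i) {
              Dk(i, k) /= Dk(k, k);
              for (int j = k + 1; j < b; ++j)
                Dk(i, j) -= Dk(i, k) * Dk(k, j);
            }
          if (blk + 1 == nblocks) break;
          auto Uk = Kokkos::subview(U, s, blk, Kokkos::ALL(), Kokkos::ALL());
          // 2) TRSM step: Uk := Dk^{-1} Uk (forward, then backward solve).
          for (int c = 0; c < b; ++c) {
            for (int i = 1; i < b; ++i)                 // L y = u (unit lower)
              for (int k = 0; k < i; ++k) Uk(i, c) -= Dk(i, k) * Uk(k, c);
            for (int i = b - 1; i >= 0; --i) {          // U x = y
              for (int k = i + 1; k < b; ++k) Uk(i, c) -= Dk(i, k) * Uk(k, c);
              Uk(i, c) /= Dk(i, i);
            }
          }
          auto Lk = Kokkos::subview(L, s, blk, Kokkos::ALL(), Kokkos::ALL());
          auto Dn = Kokkos::subview(D, s, blk + 1, Kokkos::ALL(), Kokkos::ALL());
          // 3) GEMM step (Schur complement): D_{blk+1} -= Lk * Uk.
          for (int i = 0; i < b; ++i)
            for (int j = 0; j < b; ++j) {
              double sum = 0.0;
              for (int k = 0; k < b; ++k) sum += Lk(i, k) * Uk(k, j);
              Dn(i, j) -= sum;
            }
        }
      });
    });
}
```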
While we could get away without pivoting in most cases, it would be preferable to have pivoting. Also, @kyungjoo-kim, I sent you an email. @mhoemmen, do you wish to be included in the emails? There were ways to include batching, but it eats up one of our levels of parallelism (each thread team gets a batch of inputs rather than a single set of inputs). When certain physics is enabled, each element of the batch can have a different matrix size and structure, removing the ability to use batched calls.
@dholladay00: Can we discuss this more in e-mails? Include me in the e-mail chain with @kyungjoo-kim. This will help us plan for Kokkoskernels.
include me as well |
@dholladay00 You're welcome to include me if you like. |
@kyungjoo-kim, is it possible to get a timeline on Kokkos-Kernels team level dense linear algebra?
It's not critical at the moment, but it would be useful to have an estimate on when that capability will be available. We can discuss offline further if need be.