Timeline for team level dense linear algebra #39

Open
dholladay00 opened this issue Jul 24, 2017 · 10 comments

@dholladay00

@kyungjoo-kim, is it possible to get a timeline on Kokkos-Kernels team level dense linear algebra?

It's not critical at the moment, but it would be useful to have an estimate on when that capability will be available. We can discuss offline further if need be.

@kyungjoo-kim
Contributor

What are the problem sizes of interest? When you mention a team- or thread-level functor interface for dense linear algebra, you probably want to solve small or mid-range problems. Depending on the problem sizes, the implementation may differ in how it uses fast memory. Do all teams solve the same problem size, or different sizes across teams?

From my experience, a team-level interface is effective on GPUs, but on KNL, MKL already provides good performance for almost all problem sizes (except for tiny problems whose dimensions are 3, 5, 10, etc.).

Please let me know the application and workflow scenario. The advantage of using KokkosKernels comes from understanding the workflow (not from providing generic versions of libraries that already exist).

@dholladay00
Author

Each team must solve a block tri-diagonal linear system with non-uniform block sizes (the size of block row 1 could be different from the size of block row 2, etc.). Those sizes tend to range from ~10 up to ~1000. While at some point in the problem, each team will have the same sized block-tridiagonal linear system, later on those sizes could be different, so it is probably best to assume that each team is solving a different sized matrix.

I currently use MKL for the LU decomposition (dgetrf and dgetrs), but I use hand-written team-level functions for dgemm and dgemv. However, I might go back to using MKL for everything, as I have been running into issues on machines that have more than 1 thread/core despite enforcing a team size of 1 (non-deterministic behavior, difficult to reproduce in tests, etc.).
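For context, a minimal sketch of what a hand-rolled team-level dgemv looks like under a Kokkos TeamPolicy; the view names, layout, and function name here are illustrative placeholders, not the application's actual code:

```c++
// Sketch of a hand-rolled team-level dgemv (y = alpha*A*x + beta*y) for one block,
// meant to be called from inside a Kokkos::parallel_for over a TeamPolicy.
// View names and layout are illustrative assumptions.
#include <Kokkos_Core.hpp>

using member_type = Kokkos::TeamPolicy<>::member_type;
using matrix_view = Kokkos::View<double**, Kokkos::LayoutLeft>;
using vector_view = Kokkos::View<double*>;

KOKKOS_INLINE_FUNCTION
void team_gemv(const member_type& member, double alpha, const matrix_view& A,
               const vector_view& x, double beta, const vector_view& y) {
  const int m = static_cast<int>(A.extent(0));
  const int n = static_cast<int>(A.extent(1));
  // Rows are distributed over the team's threads; each row's dot product is
  // reduced over the thread's vector lanes.
  Kokkos::parallel_for(Kokkos::TeamThreadRange(member, m), [&](const int i) {
    double sum = 0.0;
    Kokkos::parallel_reduce(
        Kokkos::ThreadVectorRange(member, n),
        [&](const int j, double& lsum) { lsum += A(i, j) * x(j); }, sum);
    Kokkos::single(Kokkos::PerThread(member),
                   [&]() { y(i) = beta * y(i) + alpha * sum; });
  });
}
```

Each team would call something like `team_gemv` on its own block from the outer `parallel_for`; with a team size of 1 this degenerates to a serial loop per team.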

@kyungjoo-kim
Contributor

I see.

  1. There are multiple tridiagonal systems (a parallel for can be used).
  2. Each tridiagonal system is composed of irregular blocks ranging between 10 and 100.
  3. However, those tridiagonal systems have the same length and the same internal pattern (which possibly allows stacking and vectorizing across those tridiagonal systems).

Do you get any performance benefit from your hand-written team-level code compared to MKL? Since this is a dense block-tridiagonal factorization and solve, do you measure the performance on KNL in terms of gflop/s? We can move to email for detailed information.

@mhoemmen
Contributor

Recent versions of MKL have batched BLAS for DGEMM at least. You might just be able to call that.
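For reference, a sketch of what that call looks like with `cblas_dgemm_batch` for a batch of same-sized matrices; the sizes, pointer arrays, and function name below are placeholders to show the calling convention, not application code:

```c++
// Sketch: one batched DGEMM call computing C[i] = A[i] * B[i] for a batch of
// same-sized n-by-n matrices (sizes and data here are placeholders).
#include <mkl.h>
#include <vector>

void batched_gemm_sketch(std::vector<const double*>& A_ptrs,
                         std::vector<const double*>& B_ptrs,
                         std::vector<double*>& C_ptrs,
                         MKL_INT n) {
  const MKL_INT group_count = 1;                       // one group: every problem has the same size
  const MKL_INT group_size  = (MKL_INT)A_ptrs.size();  // number of matrices in that group

  CBLAS_TRANSPOSE transa = CblasNoTrans, transb = CblasNoTrans;
  double alpha = 1.0, beta = 0.0;

  // With group_count == 1, each per-group parameter array has a single entry.
  cblas_dgemm_batch(CblasColMajor,
                    &transa, &transb,
                    /*m*/ &n, /*n*/ &n, /*k*/ &n,
                    &alpha,
                    A_ptrs.data(), /*lda*/ &n,
                    B_ptrs.data(), /*ldb*/ &n,
                    &beta,
                    C_ptrs.data(), /*ldc*/ &n,
                    group_count, &group_size);
}
```

Different sizes can in principle be split into separate groups, but the comments below explain why that does not fit this application's parallelization.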

@dholladay00
Author

I vote for email for much of this.

But to answer some questions:

  1. Yes, I am using a parallel for with a team policy.
  2. That is roughly correct; sizes could be > 100 but are probably < 1000.
  3. I'll send an email regarding this, as it's somewhat complicated.

The majority of the time is spent calculating the matrix elements, so the solve cost is difficult to isolate, but either way the performance differences between MKL and my version are in the noise of the total calculation time. This is because the matrix build cost scales like (large constant) * N * N while the matrix solve scales like (small constant) * N * N * N; when N is small, the large build constant means the build still takes more time.
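To spell out that scaling argument with purely illustrative constants (not measured numbers):

```latex
% Illustrative cost model; c_b and c_s are made-up constants, not measurements.
\[
  t_{\text{build}} \approx c_b N^2, \qquad
  t_{\text{solve}} \approx c_s N^3, \qquad
  \frac{t_{\text{build}}}{t_{\text{solve}}} = \frac{c_b}{c_s\,N},
\]
% so the matrix build dominates whenever N < c_b / c_s, i.e. for all block sizes
% below the ratio of the two constants.
```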

This project started with the idea of using batched BLAS, but we have since moved away because we cannot always rely on each team having the same matrix sizes.

@kyungjoo-kim
Contributor

kyungjoo-kim commented Jul 24, 2017

@mhoemmen Batched BLAS does not make sense for this tridiagonal factorization. A batch operation applies a BLAS operation to "a set of matrices". Multiple (parallel) tridiagonal factorizations can be implemented as a sequence of batched GETRF, TRSM, and GEMM calls, but with batched BLAS we do not exploit data locality at all, even though the sequence of operations completely reuses previous computation results. That is why we need a functor-level interface inside the parallel for.
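To make the functor-level point concrete, here is a rough sketch of the block elimination loop one team would run over its own block-tridiagonal system. The `team_getrf`/`team_trsm_*`/`team_gemm_minus` calls are hypothetical placeholders (not an existing KokkosKernels API), named only to show where the GETRF -> TRSM -> GEMM data reuse happens:

```c++
// Sketch of the operation sequence one team would run to factor its own
// block-tridiagonal system: D[i] are diagonal blocks, L[i] sub-diagonal blocks,
// U[i] super-diagonal blocks. Keeping the whole sweep inside one team functor
// preserves locality; separate batched GETRF/TRSM/GEMM calls would round-trip
// each block through memory between steps.
#include <Kokkos_Core.hpp>

using member_type = Kokkos::TeamPolicy<>::member_type;
using block_view  = Kokkos::View<double**, Kokkos::LayoutLeft>;

// Hypothetical team-level kernels (stand-ins, not an existing API).
KOKKOS_INLINE_FUNCTION void team_getrf(const member_type&, const block_view& A);       // A = L*U in place, no pivoting
KOKKOS_INLINE_FUNCTION void team_trsm_lower(const member_type&, const block_view& LU,
                                            const block_view& B);                      // B <- L^{-1} B
KOKKOS_INLINE_FUNCTION void team_trsm_upper(const member_type&, const block_view& LU,
                                            const block_view& B);                      // B <- B U^{-1}
KOKKOS_INLINE_FUNCTION void team_gemm_minus(const member_type&, const block_view& A,
                                            const block_view& B, const block_view& C); // C -= A*B

KOKKOS_INLINE_FUNCTION
void team_block_tridiag_lu(const member_type& member, int nblocks,
                           block_view* D, block_view* L, block_view* U) {
  for (int i = 0; i < nblocks - 1; ++i) {
    team_getrf(member, D[i]);                    // factor diagonal block i
    team_trsm_lower(member, D[i], U[i]);         // U[i] <- L_i^{-1} U[i]
    team_trsm_upper(member, D[i], L[i]);         // L[i] <- L[i] U_i^{-1}
    team_gemm_minus(member, L[i], U[i], D[i+1]); // Schur update of the next diagonal block
  }
  team_getrf(member, D[nblocks - 1]);            // factor the last diagonal block
}
```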

We have a compact batched implementation for tridiagonal factorization (LU is implemented without pivoting, since the tridiagonal factorization is used as a preconditioner; do you really need pivoting?). It is optimized for problem sizes < 32. For problem sizes between 100 and 1000, I would need to repack data (this is not yet implemented).

@dholladay00
Author

While we could get away without pivoting in most cases, it would be preferable to have pivoting. Also, @kyungjoo-kim I sent you an email. @mhoemmen do you wish to be included in the emails?

There were ways to include batching, but it eats up one of our levels of parallelism (each thread team gets a batch of inputs rather than a single set of inputs). When certain physics is enabled, each element of the batch can have a different matrix size and structure, which removes the ability to use batched calls.

@srajama1
Contributor

@dholladay00 : Can we discuss this more in e-mails ? Include me in the e-mail chain with @kyungjoo-kim . This will help us plan for Kokkoskernels.

@crtrott
Member

crtrott commented Jul 24, 2017

include me as well

@mhoemmen
Contributor

@dholladay00 You're welcome to include me if you like.
