TestUtils #18

Closed

willtebbutt opened this issue Sep 22, 2020 · 3 comments

Comments

willtebbutt (Member) commented Sep 22, 2020

I think before implementing any more likelihoods we should think carefully about what standardised tests we want to run on them, similar to what I've just done in this KernelFunctions.jl PR: JuliaGaussianProcesses/KernelFunctions.jl#159

Looking at the various open PRs (#17, #16, #15, and maybe #14) and the already-implemented likelihoods (Gaussian and Poisson), there appears to be quite a lot of repeated code in the tests. It would be great to get a handle on this before it becomes too large a job to refactor easily.

Pulling out some common themes from these, it looks like we want to be testing that (see the sketch after this list):

  1. the API is satisfied
  2. the length is correct
  3. functor works properly
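
For concreteness, here is a rough sketch of what a shared test helper covering those three points might look like. The helper name and the assumed likelihood interface (callable on a vector of latent values, returning a multivariate distribution, and supporting Functors.jl) are illustrative assumptions, not existing API:

```julia
using Test, Distributions, Functors

function test_likelihood_interface(lik, latent::AbstractVector{<:Real})
    @testset "likelihood interface" begin
        # 1. the API is satisfied: calling the likelihood on latent values
        #    yields a (multivariate) distribution over observations
        dist = lik(latent)
        @test dist isa Distribution

        # 2. the length is correct: one observation dimension per latent value
        @test length(dist) == length(latent)

        # 3. functor works properly: destructuring and reconstructing the
        #    likelihood round-trips to an object of the same type
        params, re = Functors.functor(lik)
        @test re(params) isa typeof(lik)
    end
end
```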

@sharanry, you've done most of the work on this so far. Is there anything else obvious to you that we're missing?

There's also quite a lot of code that manually generates input and output data and constructs appropriate "latent" AbstractGPs at the moment; in my opinion it would be a very good idea to factor this code out as well.
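
Something along these lines, say (the function name and return layout are just illustrative, not an existing utility):

```julia
using AbstractGPs, Random

function generate_latent_data(rng::AbstractRNG=Random.default_rng(); N::Int=10)
    x = collect(range(0.0, 10.0; length=N))  # input locations
    f = GP(SqExponentialKernel())            # the "latent" AbstractGP
    fx = f(x, 1e-6)                          # finite projection with a little jitter
    y = rand(rng, fx)                        # a sample of latent function values
    return x, y, fx
end
```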

sharanry (Contributor) commented Sep 22, 2020

Totally agree! I also think the three things you mentioned are the main points to cover.

We could also start testing different ADs, as in KernelFunctions.jl. I think this needs to be done in AbstractGPs too.

willtebbutt (Member, Author) commented Sep 22, 2020

Yeah, we really need to figure out the "correct" way to test AD in general. Currently everyone writes their own code to test a given AD, which is just bad; the ADs should provide utilities to do this.

For now we're probably stuck with code duplication though.

edit: by "correct", I mean a way that covers all of the technical points correctly and maximises code reuse between packages.
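
For illustration, the kind of per-package check everyone currently ends up writing looks roughly like this (the helper name is made up; it just compares an AD gradient against finite differences):

```julia
using Test, Zygote, FiniteDifferences

function test_ad_against_fd(f, θ::AbstractVector{<:Real}; atol=1e-6)
    # gradient of a scalar-valued function f via Zygote
    ad_grad = only(Zygote.gradient(f, θ))
    # reference gradient via a 5th-order central finite-difference method
    fd_grad = only(FiniteDifferences.grad(central_fdm(5, 1), f, θ))
    @test isapprox(ad_grad, fd_grad; atol=atol)
end

# e.g. test_ad_against_fd(θ -> sum(abs2, θ), randn(5))
```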

theogf (Member) commented Jan 4, 2022

Sounds like this is solved, no?
