Bayes Factor Estimation #665
Comments
Hey! I don't know much about Bayes factors, but I might point you towards this work from @karink520 (building off some of @junpenglao's work). It has been held up for a while trying to figure out the right library to live in and the abstractions that library would need to support, but it should mostly work with just posterior samples (and access to the transformations that were used).
Hey @ColCarroll! Thanks so much for sending this my way, I was having a look for something similar but hadn't had much luck. I am going to give it a go this afternoon, I really appreciate it. :)
@cameron-sql I'm not very familiar with Bayes factors either. If you can share an example of what you do in PyMC, I could give more help. However, if you know how to get what you want with a PyMC model, you can also do that with a Bambi model: a Bambi model always holds an instance of a PyMC model internally. So if you want to use SMC (which I'm not familiar with), you can access the underlying PyMC model through the Bambi model's backend and sample it directly.
What kind of model do you have in mind? ArviZ supports computing Bayes factors via arviz.plot_bf, see the docs: https://python.arviz.org/en/stable/api/generated/arviz.plot_bf.html
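For intuition, arviz.plot_bf is based on the Savage–Dickey density ratio, which compares the posterior and prior densities at a point null value. Here is a minimal standalone sketch of that idea using only NumPy/SciPy; the prior, the fake posterior samples, and the reference value are all made-up placeholders, not taken from the thread:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Hypothetical setup: prior theta ~ Normal(0, 1); in a real analysis the
# posterior samples would come from a fitted model. Here we fake them.
rng = np.random.default_rng(42)
posterior_samples = rng.normal(loc=1.0, scale=0.5, size=5000)

ref_val = 0.0  # point null hypothesis: theta == 0

# Savage-Dickey density ratio: BF01 = p(theta=ref | data) / p(theta=ref).
# The posterior density at ref_val is estimated with a Gaussian KDE.
posterior_density_at_ref = gaussian_kde(posterior_samples)(ref_val)[0]
prior_density_at_ref = norm(0.0, 1.0).pdf(ref_val)

bf_01 = posterior_density_at_ref / prior_density_at_ref
bf_10 = 1.0 / bf_01  # evidence for the alternative over the null
print(f"BF01 = {bf_01:.3f}, BF10 = {bf_10:.3f}")
```

Since the fake posterior is centered away from zero, BF10 comes out greater than 1, i.e. the data (here, the simulated samples) favor the alternative.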
The main motivation for the SMC implementation in PyMC was dealing with multimodal posteriors and with models for which gradients are not available, making NUTS an invalid option. SMC uses an Independent Metropolis-Hastings kernel; while this is usually much better than the MH sampler, it still inherits some of its limitations, and compared to NUTS it can have a harder time fitting some complex geometries, like those we usually observe for hierarchical models. To some extent, this can be alleviated by increasing the number of draws/particles. We are working on bringing an SMC with a Hamiltonian Monte Carlo kernel to PyMC, which will make SMC more robust.
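One detail worth spelling out for the comparison the original question asks about: once SMC has produced a (log) marginal likelihood for each model, the Bayes factor is just the exponentiated difference. A tiny sketch with made-up numbers (the values below are purely illustrative, not from any real run):

```python
import numpy as np

# Hypothetical log marginal likelihoods reported by SMC runs for two
# competing models (made-up values for illustration only).
log_ml_model_1 = -104.2
log_ml_model_2 = -107.8

# Bayes factor of model 1 over model 2. Working in log space avoids
# underflow, since raw marginal likelihoods are often astronomically small.
log_bf_12 = log_ml_model_1 - log_ml_model_2
bf_12 = np.exp(log_bf_12)
print(f"log BF12 = {log_bf_12:.1f}, BF12 = {bf_12:.1f}")
```

With these numbers, BF12 is roughly 37, which on the usual interpretive scales would count as strong evidence for model 1 over model 2.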
Hey @tomicapretto and @aloctavodia, thanks for getting back to me -- sorry, I don't check my account as much as I should! I didn't know about the backend exposing the underlying PyMC model like that. Similarly, thank you so much for the reference to the ArviZ documentation, that is actually exactly what I was looking for! In terms of the model / data, it is nothing very high level; we just have many separate samples of financial data that we are looking to compare to a larger reference population. We have been exploring a couple of different approaches and this was brought up as a potential option. Thank you all again!
Hi,

I've been a PyMC user for a while who has been getting into Bambi a bit more (and loving it, really great stuff!). In PyMC at the moment, if I want to calculate the Bayes factor between two models, I would use the pymc.smc.sample_smc function:

trace = pymc.smc.sample_smc(...)

and access the marginal likelihoods like:

trace.report.marginal_likelihood

Is there similar functionality in Bambi, or is this reliant on the use of the PyMC Sequential Monte Carlo sampler? Given two Bambi fit() calls, can the resulting ArviZ inference objects even be used to calculate marginal likelihoods / Bayes factors? Thank you so much for your consideration and time.