Hi, this is a question about the relationship between natural gradient boosting and variational inference. In the most general sense, any optimization method that approximates a density can be considered a form of variational inference (rather than strictly the approximation of a posterior). In practice, most VI methods minimize the KL divergence by maximizing a surrogate objective, the ELBO. NGBoost looks quite similar to mean-field VI, which assumes the latent variables are mutually independent, but I'm struggling to make the link between the two methods precise. It would be great if anyone here has looked into a similar question and could share some insights. Thanks in advance!
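To make the comparison I have in mind concrete, here is a rough side-by-side sketch in my own notation (the symbols $\lambda$, $F$, $\theta$, $S$, $\mathcal{I}_S$ are just my labels, not taken from the NGBoost code, so please correct me if I'm misrepresenting either method):

```latex
% Natural-gradient VI: maximize the ELBO over variational parameters \lambda,
% preconditioning the gradient with the Fisher information F(\lambda) of q_\lambda.
\[
  \mathcal{L}(\lambda) = \mathbb{E}_{q_\lambda(z)}\bigl[\log p(x, z) - \log q_\lambda(z)\bigr],
  \qquad
  \lambda \leftarrow \lambda + \eta\, F(\lambda)^{-1}\, \nabla_\lambda \mathcal{L}(\lambda).
\]

% NGBoost: each boosting stage fits base learners to the per-example natural
% gradient of a proper scoring rule S (e.g. the log score, in which case
% \mathcal{I}_S is the Fisher information) with respect to the distributional
% parameters \theta(x).
\[
  \theta^{(m)}(x) \;=\; \theta^{(m-1)}(x)
  \;-\; \eta\, \mathcal{I}_S\bigl(\theta^{(m-1)}(x)\bigr)^{-1}\,
  \nabla_\theta S\bigl(\theta^{(m-1)}(x),\, y\bigr).
\]
```

Both look like natural-gradient steps in the parameter space of a distribution, which is what suggests the similarity to me; the part I can't pin down is whether NGBoost's scoring-rule objective over observed labels corresponds to an ELBO over latent variables in any formal sense.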