
Functional laplace #55

Closed · wants to merge 94 commits

Conversation

metodj (Contributor)

No description provided.

Metod Jazbec and others added 20 commits December 30, 2022 11:50
* gp calibration notebook

* minor

* inducing points CIFAR experiment

* transfer model from bnn-preds repo

* inducing points FMNIST CNN

* gp calibration example

* gp continue

* subset_of_weights=all experiment

* fixed prior precision

* fixed prior precision

* ensure that input is differentiable

* further optimize delta experiment

* run for larger delta

* last-layer debug

* inference speed-up

* minor

* einsum memory investigation

* CV working

* rebuild on Sigma_inv

* clean

* validate no_grad

Co-authored-by: Metod Jazbec <metodjazbec@Metods-MacBook-Pro.local>
Co-authored-by: Metod Jazbec <metodjazbec@wcw-staff-145-109-82-97.wireless.uva.nl>
Co-authored-by: Metod Jazbec <metodjazbec@uvavpn-byodm-145-18-162-61.vpn.uva.nl>
Co-authored-by: Metod Jazbec <metodjazbec@uvavpn-byodm-145-18-160-65.vpn.uva.nl>
Co-authored-by: Metod Jazbec <metodjazbec@uvavpn-byodm-145-18-162-52.vpn.uva.nl>
Co-authored-by: Metod Jazbec <metodjazbec@wcw-staff-145-109-84-166.wireless.uva.nl>
Co-authored-by: Metod Jazbec <metodjazbec@wcw-staff-145-109-89-247.wireless.uva.nl>
metodj (Contributor) commented Jan 12, 2023

Main changes since last time:

  • addressed underfitting of FunctionalLaplace in regression_example.py by fixing an issue with the marginal likelihood: Functional laplace #55 (comment); see the sketch after this list
  • added calibration_gp_example.py, where FunctionalLaplace is used on a pre-trained FMNIST classifier (the same experiment as in the linearised Laplace paper)
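For context on the marginal-likelihood point: under the linearised-network view, FunctionalLaplace's evidence is the standard GP log marginal likelihood, with the Gram matrix built from network Jacobians and the prior covariance. A minimal sketch of that quantity (the function name and shape conventions are illustrative, not this PR's API):

```python
import torch

def gp_log_marginal_likelihood(K, y, sigma_noise):
    """Standard GP evidence log p(y | X) for regression.

    K:           (N, N) Gram matrix; in the functional-Laplace setting
                 K[i, j] = J_i @ Sigma_0 @ J_j.T, with Jacobians J and
                 prior covariance Sigma_0 (illustrative convention).
    y:           (N,) training targets.
    sigma_noise: observation-noise standard deviation.
    """
    N = y.shape[0]
    Ky = K + sigma_noise**2 * torch.eye(N)
    L = torch.linalg.cholesky(Ky)                     # Ky = L @ L.T
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)  # Ky^{-1} y
    data_fit = -0.5 * y @ alpha.squeeze(-1)
    log_det = -torch.log(torch.diagonal(L)).sum()     # -0.5 * log|Ky|
    const = -0.5 * N * torch.log(torch.tensor(2.0 * torch.pi))
    return data_fit + log_det + const
```

A mistake in the noise term or the log-determinant here directly skews hyperparameter tuning, which would be consistent with the underfitting symptom described above.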

metodj (Contributor) commented Jan 12, 2023

I also benchmarked the code against the BNN-predictions repo in terms of speed, and the code here is 2-3x slower. After investigating, I believe the main reason is that the code here only allows smaller batch sizes before running into memory issues when doing GP inference (b=128 here compared to e.g. b=512 in the BNN-predictions repo).

When it comes to memory, the bottleneck seems to be the torch.einsum calls used in the kernel methods of FunctionalLaplace. To get around this, I also tested an implementation without torch.einsum; however, the memory issues then popped up in the backpack backend instead. A sketch of the trade-off is below.

Further benchmarking and a faster implementation can be left for future PRs imo.
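To make the memory trade-off concrete, here is a sketch (not this PR's actual kernel code) of two ways to contract batched Jacobians into the block kernel K[i, c, j, d] = prior_var * <J_x[i, c], J_y[j, d]>; the looped variant mirrors the "for loop over output dimensions" commits above in spirit:

```python
import torch

# Jacobians of a C-output network w.r.t. its P parameters for a batch of M
# inputs, in (batch, outputs, params) layout. All names are illustrative.
M, C, P = 128, 10, 2_000
Jx = torch.randn(M, C, P)
Jy = torch.randn(M, C, P)
prior_var = 1.0  # isotropic prior covariance, for simplicity

# One-shot einsum: a single contraction produces the full (M, C, M, C)
# block kernel, but every operand and the output live in memory at once.
K_full = prior_var * torch.einsum('icp,jdp->icjd', Jx, Jy)

# Loop over output dimensions: identical result, but each einsum call is a
# factor C smaller, which bounds the per-call working set (and, in the
# PR's setting, decides whether a given batch size fits on the GPU).
K_loop = torch.empty(M, C, M, C)
for c in range(C):
    K_loop[:, c] = prior_var * torch.einsum('ip,jdp->ijd', Jx[:, c], Jy)

assert torch.allclose(K_full, K_loop, rtol=1e-4, atol=1e-4)
```

The library's kernel methods also fold in the prior covariance and likelihood terms; the sketch only isolates the contraction identified as the bottleneck above.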

metodj and others added 4 commits February 21, 2023 15:22
* memory investigation start

* add for loop over output dimensions

* improve memory footprint

* minor

---------

Co-authored-by: Metod Jazbec <mjazbec@ivi-cn011.ivi.local>
* memory investigation start

* add for loop over output dimensions

* improve memory footprint

* minor

* fix tests

---------

Co-authored-by: Metod Jazbec <mjazbec@ivi-cn011.ivi.local>
metodj (Contributor) commented Feb 21, 2023

Improved memory performance (see the changes in the _kernel methods in FunctionalLaplace and in the jacobians method in BackPackInterface). We can now use larger batch sizes (e.g. b=256 on an Nvidia 3090 GPU), which results in around a 2x speedup, so more or less on par with the implementation in the BNN-predictions repo. A sketch of the per-output-dimension Jacobian idea is below.
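The jacobians change itself goes through the BackPack library and isn't reproduced here; the following is an independent sketch of the same per-output-dimension idea in plain torch.func (all names and the exact decomposition are mine, not the PR's):

```python
import torch
from torch.func import functional_call, jacrev, vmap

def jacobians_per_output(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample Jacobians of shape (N, C, P), one output dimension at a time.

    Differentiating a single scalar output per pass keeps the autodiff
    working set at roughly 1/C of a fully vectorised Jacobian, trading
    memory for C passes, analogous to the loop introduced in this PR.
    """
    params = dict(model.named_parameters())
    N = x.shape[0]
    C = model(x[:1]).shape[-1]
    jacs = []
    for c in range(C):
        def f_c(p, xi):
            # c-th output of the model on a single input xi
            return functional_call(model, p, (xi.unsqueeze(0),)).squeeze(0)[c]
        # vmap over the batch, jacrev over the parameter dict
        g = vmap(jacrev(f_c), in_dims=(None, 0))(params, x)
        jacs.append(torch.cat([v.reshape(N, -1) for v in g.values()], dim=1))
    return torch.stack(jacs, dim=1)  # (N, C, P)

# Example: an 8-sample batch through a tiny 3-output MLP (P = 147 here).
model = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.Tanh(), torch.nn.Linear(16, 3))
print(jacobians_per_output(model, torch.randn(8, 5)).shape)  # torch.Size([8, 3, 147])
```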

wiseodd (Collaborator) commented Jun 23, 2024

Replaced by #192

wiseodd closed this Jun 23, 2024
Labels: enhancement (New feature or request)
4 participants