Official repository of the paper (UAI 2021) "Graph Reparameterizations for Enabling 1000+ Monte Carlo Iterations in Bayesian Deep Neural Networks".
Video presentation: youtube
The Monte Carlo (MC) estimator is a core concept in Bayesian Variational Inference. A higher number of MC samples leads to lower variance of the MC estimator and higher accuracy. However, with a direct implementation of the MC estimator for the KL term, increasing the number of MC samples results in GPU memory explosion in deep Bayesian neural networks.
We present a new scheme to compute the MC estimator of the KL term in the Bayesian VI setting with almost no GPU memory cost, regardless of the number of samples (even 1000+), while significantly improving run time (figure below).
The main idea of our method is to reparameterize the MC expression to avoid computation graph explosion.
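To make the idea concrete, here is a minimal sketch for a single fully factorized Gaussian posterior with a standard normal prior (function names are illustrative, and this simplified derivation is in the spirit of the paper's reparameterization rather than its exact implementation): the naive estimator adds autograd nodes per sample, while aggregating the gradient-free noise statistics first keeps the graph size constant in the number of samples.

```python
import torch
import torch.nn.functional as F

def kl_mc_naive(mu, rho, n_mc_iter):
    # Naive MC estimator: each sample touches mu/sigma, so the autograd
    # graph (and GPU memory) grows linearly with n_mc_iter.
    sigma = F.softplus(rho)
    kl = 0.0
    for _ in range(n_mc_iter):
        eps = torch.randn_like(mu)
        w = mu + sigma * eps
        log_q = -torch.log(sigma) - 0.5 * eps**2  # log N(w; mu, sigma), const dropped
        log_p = -0.5 * w**2                       # log N(w; 0, 1), const dropped
        kl = kl + (log_q - log_p).sum()
    return kl / n_mc_iter

def kl_mc_repar(mu, rho, n_mc_iter):
    # Reparameterized estimator: aggregate the (gradient-free) noise
    # statistics first, then touch mu/sigma once -- constant graph size.
    sigma = F.softplus(rho)
    eps = torch.randn(n_mc_iter, *mu.shape, device=mu.device)
    m1, m2 = eps.mean(0), (eps**2).mean(0)        # sufficient statistics of the noise
    log_q = -torch.log(sigma) - 0.5 * m2
    log_p = -0.5 * (mu**2 + 2 * mu * sigma * m1 + sigma**2 * m2)
    return (log_q - log_p).sum()
```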
In addition, we provide an implementation framework to make your deterministic network Bayesian in PyTorch.
If you like our work, please give us a star. If you use our code in your research projects, please cite our paper as
```bibtex
@inproceedings{nazarovs2021graph,
  title={Graph reparameterizations for enabling 1000+ Monte Carlo iterations in Bayesian deep neural networks},
  author={Nazarovs, Jurijs and Mehta, Ronak R and Lokhande, Vishnu Suresh and Singh, Vikas},
  booktitle={Uncertainty in Artificial Intelligence},
  pages={118--128},
  year={2021},
  organization={PMLR}
}
```
Note: there are 2 ways to run the Bayesian network from our project:
- You can use the established code for the appropriate problem in the section Current implementation of networks for different problems
- If we do not have an appropriate network for you, you can Bayesify your own Deterministic Neural Network
Note: a Bayesian neural network usually has double the number of parameters compared to its deterministic version. That is, even if a deterministic ResNet-18 fits in your RAM/GPU memory, it does not guarantee that the Bayesian version fits as well. In that case you would need to create your own BNN.
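A quick sanity check before committing to a model size (the model names here are placeholders):

```python
# Rough rule of thumb: each Bayesian layer stores two tensors (mu, sigma)
# per weight tensor, so expect roughly twice the parameter count.
n_det = sum(p.numel() for p in deterministic_model.parameters())
n_bayes = sum(p.numel() for p in bayesian_model.parameters())  # ~= 2 * n_det
```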
Remember to adjust the main Bayesian arguments:
- `--approx_post`: approximate posterior: Gaus, Radial (use Radial if you have no preference)
- `--kl_method`: method to compute the KL term: repar (always use this one), direct, closed
- `--n_mc_iter`: number of MC iterations to approximate the KL (a higher number means lower variance but longer run time; 100 is a usual choice)
- `--n_test_iter`: number of test iterations to estimate uncertainty. Used in the testing script to compute statistics, such as the mean/std of samples from the posterior distribution and credible intervals.
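A sketch of how these flags might be declared with argparse (the actual main.py may declare them differently; defaults here follow the recommendations above):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--approx_post', default='Radial', choices=['Gaus', 'Radial'])
parser.add_argument('--kl_method', default='repar', choices=['repar', 'direct', 'closed'])
parser.add_argument('--n_mc_iter', type=int, default=100)
parser.add_argument('--n_test_iter', type=int, default=100)
args = parser.parse_args()
```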
There are 3 main files which help you to Bayesify your deterministic network:
- `bayes_layers.py` - contains a Bayesian implementation of convolution (1d, 2d, 3d, transpose) and linear layers, according to an approximate posterior from the Location-Scale family, i.e., one with the 2 parameters mu and sigma. This file contains the general definitions, independent of a specific distribution, as long as the distribution has the 2 parameters mu and sigma. It uses the forward method defined in the `vi_posteriors.py` file. One of the main arguments of the redefined classes is `approx_post`, which determines which posterior class from `vi_posteriors.py` to use. Please specify this name exactly as the class is defined in `vi_posteriors.py`. For example, if `vi_posteriors.py` contains the class Gaus, then `approx_post='Gaus'`.
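A minimal usage sketch, assuming the layer accepts the Bayesian arguments from above as keyword arguments (exact signatures may differ):

```python
import torch
import bayes_layers as bl

layer = bl.Linear(128, 10, approx_post='Radial',
                  kl_method='repar', n_mc_iter=100)
out, kl = layer(torch.randn(32, 128))  # Bayesian layers return (output, KL term)
```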
- `vi_posteriors.py` - describes the forward method, including the KL term, for different approximate posterior distributions. The current implementation contains the following distributions:
  - Radial (more stable training than Gaussian; use it if you have no preference)
  - Gaus

  If you would like to implement your own distribution class, copy one of the classes defined in `vi_posteriors.py` and redefine the following functions: `forward(obj, x, fun="")` and `get_kl(obj, n_mc_iter, device)`.
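A hypothetical skeleton of such a class (the bodies are placeholders; copy a real class, e.g. Gaus, as a template):

```python
# hypothetical skeleton for a new posterior class in vi_posteriors.py
class MyPosterior:
    @staticmethod
    def forward(obj, x, fun=""):
        # sample weights from the posterior held by the layer `obj`,
        # apply the layer operation selected by `fun`,
        # and return (output, kl) like the existing classes
        raise NotImplementedError

    @staticmethod
    def get_kl(obj, n_mc_iter, device):
        # return an MC estimate of KL(q || prior) from n_mc_iter samples
        raise NotImplementedError
```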
It also contains a useful Utils class which provides:
- the loss functions `get_loss_categorical` and `get_loss_normal`,
- different beta coefficients for the KL term via `get_beta`, and
- a switch to turn computing the KL term on/off via `set_compute_kl`. This is useful during testing/evaluation, when the KL term does not need to be computed; skipping it accelerates computation.
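A hypothetical usage of these Utils helpers (the exact signatures of `get_beta` and `set_compute_kl` are assumptions; check `vi_posteriors.py`):

```python
import vi_posteriors as vi

beta = vi.Utils.get_beta(epoch, n_epochs)  # assumed signature: KL weight schedule
vi.Utils.set_compute_kl(model, False)      # assumed signature: skip KL at eval time
```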
Below is an example of how to Bayesify your own network. Note the forward method, which handles the situation where a layer is not of a Bayesian type and therefore does not return a KL term, e.g., ReLU(x).
```python
import torch.nn as nn
import bayes_layers as bl  # provides the Bayesian layer definitions

class YourBayesNet(nn.Module):
    def __init__(self, num_classes, in_channels, **bayes_args):
        super(YourBayesNet, self).__init__()
        self.conv1 = bl.Conv2d(in_channels, 64,
                               kernel_size=11, stride=4,
                               padding=5,
                               **bayes_args)
        self.classifier = bl.Linear(1*1*128,  # must match the flattened conv output
                                    num_classes,
                                    **bayes_args)
        # the classifier is applied after flattening, so it is not in this list
        self.layers = [self.conv1, nn.ReLU()]

    def forward(self, x):
        kl = 0
        for layer in self.layers:
            tmp = layer(x)
            if isinstance(tmp, tuple):
                # Bayesian layers return (output, KL term)
                x, kl_ = tmp
                kl += kl_
            else:
                # non-Bayesian layers, e.g. ReLU, return only the output
                x = tmp
        x = x.view(x.size(0), -1)
        logits, _kl = self.classifier(x)
        kl += _kl
        return logits, kl
```
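A hypothetical instantiation, forwarding the Bayesian arguments from the flags above into every layer via `**bayes_args` (the values are illustrative):

```python
model = YourBayesNet(num_classes=10, in_channels=3,
                     approx_post='Radial', kl_method='repar', n_mc_iter=100)
```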
Then, later in the main file during training, you can either use one of the loss functions defined in the Utils class, as follows:
```python
import vi_posteriors as vi

output, kl = model(inputs)
kl = kl.mean()  # if several GPUs are used to split the minibatch
loss, _ = vi.Utils.get_loss_categorical(kl, output, targets, beta=beta)
# loss, _ = vi.Utils.get_loss_normal(kl, output, targets, beta=beta)
loss.backward()
```
or design your own, e.g.

```python
loss = kl_coef * kl - loglikelihood
loss.backward()
```
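For instance, a hypothetical categorical version of such a custom loss (variables as in the snippet above):

```python
import torch.nn.functional as F

# summed log-likelihood of the targets under the predicted class distribution
loglikelihood = -F.cross_entropy(output, targets, reduction='sum')
loss = kl_coef * kl - loglikelihood  # kl_coef plays the role of beta
loss.backward()
```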
- `uncertainty_estimate.py` - describes a set of functions to perform uncertainty estimation, e.g.:
  - `get_prediction_class` - returns the most common class across test iterations
  - `summary_class` - creates a summary file with statistics
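For example, a minimal test-time loop in this spirit (the helpers above are assumed to aggregate exactly such samples):

```python
import torch

# draw n_test_iter samples from the posterior predictive and aggregate them
with torch.no_grad():
    logits = torch.stack([model(inputs)[0] for _ in range(n_test_iter)])
mean, std = logits.mean(0), logits.std(0)  # per-class mean/std over samples
pred = logits.argmax(-1).mode(0).values    # most common class across iterations
```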
The script bayesian_dnn_class/main.py is the main executable code, and all standard DNN models are located in bayesian_dnn_class/models:
- AlexNet
- Fully Connected
- DenseNet
- ResNet
- VGG
To understand the sizes of the Bayesian versions of these networks, refer to Figure 2 (right) in the paper.