
Support BatchNormalization Layer #493

Merged

Conversation

tagomaru
Contributor

Signed-off-by: tagomaru <tagomaru@users.noreply.github.com>

This PR adds support for the Batch Normalization layer.

The linear2-3_bn1-linear3-1.onnx model used for the tests was created with PyTorch as follows.

import torch
import torch.nn as nn

# model
class BNModel(nn.Module):
    def __init__(self):
        super(BNModel, self).__init__()
        self.linear1 = nn.Linear(2, 3)
        self.bn1 = nn.BatchNorm1d(3)
        self.linear2 = nn.Linear(3, 1)

    def forward(self, x):
        out = self.linear1(x)
        out = self.bn1(out)
        out = self.linear2(out)
        return out

# fix seed
torch.manual_seed(42)
torch.cuda.manual_seed_all(42)
torch.backends.cudnn.deterministic = True

# instantiate model
bnmodel = BNModel()

# set parameters of BatchNorm
bnmodel.state_dict()['linear1.bias'].copy_(torch.zeros(3))
bnmodel.state_dict()['linear1.weight'].copy_(torch.ones([3, 2]))
bnmodel.state_dict()['bn1.bias'].copy_(torch.Tensor([1, 2, 3]))
bnmodel.state_dict()['bn1.weight'].copy_(torch.Tensor([1, 0, 1]))
bnmodel.state_dict()['linear2.bias'].copy_(torch.zeros(1))
bnmodel.state_dict()['linear2.weight'].copy_(torch.ones([1, 3]))
print(list(bnmodel.named_parameters()))

# save model in onnx format
bnmodel.train(False)
x = torch.ones(1, 2)
torch.onnx.export(bnmodel, x, '../models/linear2-3_bn1-linear3-1.onnx', input_names=['X'], output_names=['Y'])

# forward
print("\noutput:%.10f" % bnmodel(x))

The output:

[('linear1.weight', Parameter containing:
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]], requires_grad=True)), ('linear1.bias', Parameter containing:
tensor([0., 0., 0.], requires_grad=True)), ('bn1.weight', Parameter containing:
tensor([1., 0., 1.], requires_grad=True)), ('bn1.bias', Parameter containing:
tensor([1., 2., 3.], requires_grad=True)), ('linear2.weight', Parameter containing:
tensor([[1., 1., 1.]], requires_grad=True)), ('linear2.bias', Parameter containing:
tensor([0.], requires_grad=True))]

output:9.9999799728
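
For reference, the reported value can be reproduced by hand from the standard BatchNorm inference formula. This is only a sanity-check sketch; it assumes the layer's default running statistics (running_mean = 0, running_var = 1) and the default eps = 1e-5, since the script above never runs a training step that would update them.

# Sanity check of the reported output, assuming eval-mode BatchNorm with
# default running statistics (mean = 0, var = 1) and eps = 1e-5.
import math

x = [1.0, 1.0]

# linear1: weight = ones(3, 2), bias = zeros(3)  ->  h = [2, 2, 2]
h = [sum(x) for _ in range(3)]

# bn1 in eval mode: y_i = gamma_i * (h_i - mean_i) / sqrt(var_i + eps) + beta_i
gamma, beta = [1.0, 0.0, 1.0], [1.0, 2.0, 3.0]
mean, var, eps = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 1e-5
y = [g * (v - m) / math.sqrt(s + eps) + b
     for v, g, b, m, s in zip(h, gamma, beta, mean, var)]

# linear2: weight = ones(1, 3), bias = zeros(1)  ->  sum of y
out = sum(y)
print("%.10f" % out)  # ~9.9999800001, matching 9.9999799728 up to float32 rounding

Because eval-mode BatchNorm reduces to an elementwise affine transform, it composes cleanly with the surrounding linear layers, which is what makes the expected output easy to compute by hand.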

@wu-haoze wu-haoze merged commit ffd353b into NeuralNetworkVerification:master Dec 14, 2021
omriisack pushed a commit to omriisack/Marabou that referenced this pull request Feb 6, 2022
* Add batch normalization layer

Signed-off-by: tagomaru <tagomaru@users.noreply.github.com>

* Add line

Signed-off-by: tagomaru <tagomaru@users.noreply.github.com>

* update version of onnx*

Signed-off-by: tagomaru <tagomaru@users.noreply.github.com>