This package provides an implementation of a Deep Differential Network. This
network architecture is a variant of a fully connected network that, in addition
to computing the function value $f(\mathbf{x};\,\theta)$, also computes the network Jacobian
$\partial f(\mathbf{x};\,\theta) / \partial \mathbf{x}$ within a single forward pass.

Let $\mathbf{h}^{i}$ be the output of the $i$-th layer. The partial derivative w.r.t.
the previous layer can be computed for the fully connected layer, i.e.,

$$\mathbf{h}^{i} = g\left(\mathbf{W}^{i}\,\mathbf{h}^{i-1} + \mathbf{b}^{i}\right), \qquad \frac{\partial \mathbf{h}^{i}}{\partial \mathbf{h}^{i-1}} = \operatorname{diag}\left(g'\left(\mathbf{W}^{i}\,\mathbf{h}^{i-1} + \mathbf{b}^{i}\right)\right)\mathbf{W}^{i},$$

with the non-linearity $g$. Applying the chain rule to these per-layer Jacobians yields the Jacobian of the network output w.r.t. the network input.
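As a minimal sketch (assuming PyTorch; this is not the API of this package, and the class names `DiffLayer` and `DiffNet` are illustrative only), such a forward pass can propagate the per-layer Jacobians alongside the activations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffLayer(nn.Module):
    """Fully connected layer returning its output and its Jacobian w.r.t. the network input."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)

    def forward(self, h, dh_dx):
        # h:     (batch, n_in)         output of the previous layer
        # dh_dx: (batch, n_in, n_dim)  Jacobian of the previous layer w.r.t. the input
        a = self.linear(h)              # pre-activation W h + b
        h_out = F.softplus(a)           # g(a)
        g_prime = torch.sigmoid(a)      # g'(a), the derivative of SoftPlus
        # dh/dx = diag(g'(a)) W dh_prev/dx, evaluated batch-wise without forming diag()
        dh_out_dx = g_prime.unsqueeze(2) * torch.matmul(self.linear.weight, dh_dx)
        return h_out, dh_out_dx


class DiffNet(nn.Module):
    """Stack of DiffLayers with a linear read-out, returning f(x) and df/dx."""

    def __init__(self, n_dim, n_hidden, n_out):
        super().__init__()
        self.layers = nn.ModuleList([DiffLayer(n_dim, n_hidden), DiffLayer(n_hidden, n_hidden)])
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        # The Jacobian of the input w.r.t. itself is the identity.
        h = x
        dh_dx = torch.eye(x.shape[1]).expand(x.shape[0], -1, -1)
        for layer in self.layers:
            h, dh_dx = layer(h, dh_dx)
        # Linear output layer: f = W h + b and df/dx = W dh/dx.
        f = self.head(h)
        df_dx = torch.matmul(self.head.weight, dh_dx)
        return f, df_dx
```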
This Deep Differential Network architecture was introduced in the ICLR 2019 paper Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning. Within this paper, the differential network was used to represent the kinetic and potential energy of a rigid body. These energies, as well as their Jacobians, were embedded within the Euler-Lagrange differential equation, and the physically plausible network parameters were learned by minimising the residual of this differential equation on recorded data.
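For reference, the residual being minimised is that of the standard Euler-Lagrange equation with generalised coordinates $\mathbf{q}$, generalised forces $\boldsymbol{\tau}$, and the Lagrangian $L = T - V$ composed of the kinetic and potential energy represented by the differential networks:

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}} - \frac{\partial L}{\partial \mathbf{q}} = \boldsymbol{\tau}.$$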
The deep differential network architecture was used in the following papers:

- Lutter et al. (2019). Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning, International Conference on Learning Representations (ICLR).
- Lutter et al. (2019). Deep Lagrangian Networks for End-to-End Learning of Energy-Based Control for Under-Actuated Systems, International Conference on Intelligent Robots & Systems (IROS).
- Lutter et al. (2019). HJB Optimal Feedback Control with Deep Differential Value Functions and Action Constraints, Conference on Robot Learning (CoRL).
If you use this implementation within your paper, please cite:
@inproceedings{lutter2019deep,
author = "Lutter, M. and Ritter, C. and Peters, J.",
year = "2019",
title = "Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning",
booktitle = "International Conference on Learning Representations (ICLR)",
}
The example scripts 1d_example_diff_net.py & 2d_example_diff_net.py provide examples for training a differential network to approximate a 1d and a 2d test function, respectively.
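A hedged sketch of what such a training script could look like, reusing the illustrative `DiffNet` class from the sketch above and a `sin(x)` stand-in for the actual test function (the shipped scripts may differ in the target function, network size, and loss):

```python
import torch

torch.manual_seed(0)
net = DiffNet(n_dim=1, n_hidden=64, n_out=1)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)   # (batch, 1) inputs
y = torch.sin(x)                                  # target function values
dy_dx = torch.cos(x).unsqueeze(2)                 # analytic Jacobian, (batch, 1, 1)

for step in range(2000):
    optimizer.zero_grad()
    f, df_dx = net(x)
    # Joint loss on the function value and on the Jacobian.
    loss = torch.mean((f - y) ** 2) + torch.mean((df_dx - dy_dx) ** 2)
    loss.backward()
    optimizer.step()
```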
The ReLu differential network and the SoftPlus differential network are able to approximate the functions very accurately. However, only the SoftPlus network yields a smooth Jacobian, whereas the Jacobian of the ReLu network is piecewise constant due to the non-differentiable point of the ReLu activation.
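This difference follows directly from the activation derivatives that enter each layer Jacobian $\operatorname{diag}(g'(\cdot))\,\mathbf{W}^{i}$:

$$g_{\mathrm{ReLu}}(x) = \max(0, x), \qquad g'_{\mathrm{ReLu}}(x) = \mathbb{1}[x > 0],$$

$$g_{\mathrm{SoftPlus}}(x) = \log\left(1 + e^{x}\right), \qquad g'_{\mathrm{SoftPlus}}(x) = \frac{1}{1 + e^{-x}}.$$

Since $g'_{\mathrm{ReLu}}$ is a step function, the resulting network Jacobian is piecewise constant, whereas the smooth $g'_{\mathrm{SoftPlus}}$ yields a smooth Jacobian.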
Performance of the Different Models:
| | Type 1 Loss / ReLu | Type 1 Loss / SoftPlus | Type 1 Loss / TanH | Type 1 Loss / Cos | Type 2 Loss / ReLu | Type 2 Loss / SoftPlus | Type 2 Loss / TanH | Type 2 Loss / Cos |
|---|---|---|---|---|---|---|---|---|
| MSE $f(x)$ | 9.792e-6 | 6.909e-6 | 8.239e-7 | 1.957e-7 | 5.077e-7 | 8.666e-6 | 7.563e-6 | 1.746e-8 |
| MSE $\partial f / \partial x$ | 4.514e-4 | 1.807e-4 | 3.099e-5 | 2.189e-6 | 2.661e-3 | 1.494e-4 | 4.345e-4 | 5.731e-7 |
| MSE $\partial^2 f / \partial x^2$ | 4.950e-1 | 5.930e-3 | 1.625e-3 | 5.451e-5 | 4.950e-1 | 4.519e-3 | 8.642e-3 | 1.209e-5 |
Figures: ReLu, SoftPlus, TanH, and Cosine Differential Network.
The ReLu differential network and the SoftPlus differential network are able to approximate the functions very accurately. However, only the SoftPlus network yields a smooth Jacobian, whereas the Jacobian of the ReLu network is piecewise constant due to the non-differentiable point of the ReLu activation.
Performance of the Different Models:
| | Type 1 Loss / ReLu | Type 1 Loss / SoftPlus | Type 1 Loss / TanH | Type 1 Loss / Cos | Type 2 Loss / ReLu | Type 2 Loss / SoftPlus | Type 2 Loss / TanH | Type 2 Loss / Cos |
|---|---|---|---|---|---|---|---|---|
| MSE $f(\mathbf{x})$ | 4.151e-5 | 2.851e-6 | 2.583e-6 | 2.929e-7 | 3.293e-5 | 4.282e-5 | 8.361e-6 | 2.118e-6 |
| MSE $\partial f / \partial \mathbf{x}$ | 2.014e-3 | 3.656e-5 | 1.525e-5 | 4.344e-6 | 4.962e-3 | 8.128e-4 | 4.349e-4 | 7.038e-5 |
| MSE $\partial^2 f / \partial \mathbf{x}^2$ | 9.996e-1 | 1.288e-3 | 8.535e-4 | 1.709e-4 | 9.996e-1 | 1.468e-2 | 1.163e-2 | 1.316e-3 |
Figures: ReLu, SoftPlus, TanH, and Cosine Differential Network.
For installation, this Python package can be cloned and installed via pip:
git clone https://github.com/milutter/deep_differential_network.git deep_differential_network
pip install ./deep_differential_network
If you have any further questions or suggestions, feel free to reach out to me via
michael AT robot-learning DOT de