# ilploss

`ilploss` is a PyTorch-based library that lets you train models with an Integer Linear Programming (ILP) output layer, without ever calling an ILP solver during training. It implements techniques from the paper *A Solver-Free Framework for Scalable Learning in Neural ILP Architectures*, accepted at NeurIPS 2022. Full code for the results in the paper can be found here.
In a Python environment with `torch` and `gurobi` (not `gurobipy`) installed:

```
pip install git+https://github.com/rishabh-ranjan/ilploss
```
The ilploss
library provides the following modules:
ilploss.encoders
: get ILP from inputilploss.samplers
: sample negativesilploss.losses
: compute and balance loss termsilploss.solvers
: solve batched ILPs at inference, in parallel
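To give a feel for how these pieces interact during training, here is a minimal, self-contained sketch of the solver-free idea: the gold solution is pulled to satisfy every constraint with a margin, while sampled negatives are pushed to either violate some constraint or cost more than the gold. This is an illustrative simplification, not the library's API; the shapes, names, and exact hinge formulation below are assumptions, and the real loss terms and their balancing live in `ilploss.losses`.

```python
import torch

# Illustrative only: a simplified margin loss in the spirit of solver-free
# training. The library's actual loss terms and balancing are in
# ilploss.losses; names and shapes here are assumptions for the sketch.

a = torch.randn(8, 16, requires_grad=True)  # 8 learned constraints over 16 vars
b = torch.randn(8, requires_grad=True)
c = torch.randn(16, requires_grad=True)     # learned cost vector

z_gold = torch.randint(0, 2, (16,)).float()    # known gold solution
z_neg = torch.randint(0, 2, (32, 16)).float()  # 32 sampled negatives
margin = 0.1

# Positive term: the gold solution should satisfy every constraint,
# i.e. a @ z_gold + b >= 0, with a margin.
pos_loss = torch.relu(margin - (a @ z_gold + b)).sum()

# Negative term: each negative should be infeasible (some slack < 0)
# or costlier than the gold solution.
slack = z_neg @ a.T + b            # (32, 8) constraint slacks per negative
cost_gap = z_neg @ c - z_gold @ c  # (32,) extra cost vs. gold
neg_ok = torch.maximum(-slack.min(dim=-1).values, cost_gap)
neg_loss = torch.relu(margin - neg_ok).sum()

(pos_loss + neg_loss).backward()   # gradients flow to a, b, c -- no solver call
```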
ILP instances are specified by `a`, `b`, `c` under the convention that for a solution vector `z`, the cost `c^T z` is to be minimized under the constraints `a @ z + b >= 0` (`@` denotes matrix multiply).
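As a concrete (toy) illustration of this convention, the snippet below brute-forces a two-variable binary ILP; it is not part of the library, which solves batched instances via `ilploss.solvers` at inference.

```python
import itertools
import torch

# Toy ILP in the (a, b, c) convention: minimize c^T z s.t. a @ z + b >= 0.
# z is binary here so all candidates can be enumerated.
a = torch.tensor([[ 1.,  1.],   # z0 + z1 - 1 >= 0  => at least one of z0, z1
                  [-1., -1.]])  # -z0 - z1 + 1 >= 0 => at most one of z0, z1
b = torch.tensor([-1., 1.])
c = torch.tensor([2., 1.])      # z1 is cheaper than z0

best_z, best_cost = None, float("inf")
for bits in itertools.product([0.0, 1.0], repeat=2):
    z = torch.tensor(bits)
    if (a @ z + b >= 0).all() and c @ z < best_cost:  # feasible and cheaper
        best_z, best_cost = z, (c @ z).item()

print(best_z, best_cost)  # tensor([0., 1.]) 1.0 -- exactly one var, the cheap one
```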
Check out the source code for further details.
A demo script is provided here. This replicates the ILP-Loss experiments on random constraints from Table 2 of the paper. To run the demo:
```
git clone https://github.com/rishabh-ranjan/ilploss
cd ilploss
tests/random.py tests/data/random/dense/16_dim/8_const/0/dataset.pt
```

You can choose any file from `tests/data` as the argument.
If you use this library, please cite:

```bibtex
@inproceedings{ilploss,
  author    = {Nandwani, Yatin and Ranjan, Rishabh and Mausam and Singla, Parag},
  title     = {A Solver-Free Framework for Scalable Learning in Neural ILP Architectures},
  booktitle = {Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022},
  year      = {2022},
}
```