
Milp encoding #288

Merged: 64 commits from the milp branch into NeuralNetworkVerification:master on Jun 19, 2020

Conversation

guykatzz
Collaborator

No description provided.

guykatzz and others added 30 commits April 1, 2020 16:24
Review comments were left on the following files (all threads resolved):

src/nlr/MILPFormulator.cpp (outdated)
src/nlr/LPFormulator.cpp
src/nlr/LPFormulator.cpp
src/nlr/MILPFormulator.cpp (outdated)
@guykatzz guykatzz merged commit 553b780 into NeuralNetworkVerification:master Jun 19, 2020
@guykatzz guykatzz deleted the milp branch June 19, 2020 18:17
AleksandarZeljic pushed a commit to AleksandarZeljic/Marabou that referenced this pull request Oct 9, 2020
* types for PL constraints

* first attempt at automatically constructing the NLR

* first unit test, couple of bug fixes

* use PiecewiseLinearFunctionType

* cleanup

* some cleanup, and store more information about discovered neurons

* wip

* complete the construction of the NLR

* bug fixes

* bug fix

* dumping functionality

* bug fix

* test

* changes per Aleks' comments

* minor

* python bindings

* move nlr to separate dir, with namespace

* move neuron index to separate class

* basic 'evaluate' functionality for non-consecutive layers

* layer class

* cleanup

* oops

* wip

* wip

* mechanism for extracting the bounds

* SBT for weighted sum layers

* symbolic bound propagation over a relu layer (sketched after this commit list)

* fix an issue from the merge

* even with master

* cleanup

* restore cache

* oops

* wip

* oops

* move the error handling into the gurobi wrapper

* apply valid splits after LP/MILP steps

* an incremental mode for solving LP relaxations - achieves better performance than non-incremental

* introduced a cutoff mechanism for the LP relaxations

* add incremental solving and cutoffs to the MILP formulator (sketched after this commit list)

* placeholders

* Remove faulty ASSERT

* new logger stuff

* make all 4 kinds of MILP-based encoding accessible through a configuration option (the ReLU LP and MILP encodings are sketched after this commit list)

* optimize the incremental milp mode

Co-authored-by: Guy Katz <guykatz@cs.huji.ac.il>
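
Several commits above ("SBT for weighted sum layers", "symbolic bound propagation over a relu layer") concern symbolic bound tightening in the new NLR. The Python sketch below illustrates the general technique only; it is not Marabou's C++ implementation under src/nlr, and all names in it are made up for the example. A symbolic bound is an affine function of the network inputs that is pushed exactly through a weighted-sum layer and then relaxed neuron-wise at a ReLU.

```python
# Minimal sketch of symbolic bound tightening (illustrative only).
import numpy as np

# A symbolic bound is stored as a row of shape (n_inputs + 1):
# input coefficients followed by a constant term.

def weighted_sum(W, b, sym_lb, sym_ub):
    """Push symbolic bounds through y = W x + b (exact)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lb = Wp @ sym_lb + Wn @ sym_ub
    ub = Wp @ sym_ub + Wn @ sym_lb
    lb[:, -1] += b
    ub[:, -1] += b
    return lb, ub

def concretize(sym, in_lb, in_ub, lower):
    """Evaluate an affine bound over the input box to get a scalar bound."""
    c, c0 = sym[:, :-1], sym[:, -1]
    cp, cn = np.maximum(c, 0.0), np.minimum(c, 0.0)
    if lower:
        return cp @ in_lb + cn @ in_ub + c0
    return cp @ in_ub + cn @ in_lb + c0

def relu(sym_lb, sym_ub, in_lb, in_ub):
    """Relax y = max(0, x) neuron-wise using concrete pre-activation bounds."""
    lo = concretize(sym_lb, in_lb, in_ub, lower=True)
    hi = concretize(sym_ub, in_lb, in_ub, lower=False)
    out_lb, out_ub = sym_lb.copy(), sym_ub.copy()
    for i in range(len(lo)):
        if hi[i] <= 0:                     # provably inactive: y = 0
            out_lb[i], out_ub[i] = 0.0, 0.0
        elif lo[i] >= 0:                   # provably active: y = x
            continue
        else:                              # unstable neuron
            s = hi[i] / (hi[i] - lo[i])    # slope of the triangle upper bound
            out_ub[i] = s * sym_ub[i]
            out_ub[i, -1] -= s * lo[i]     # y <= s * (x - lo)
            out_lb[i] = 0.0                # conservative lower bound
    return out_lb, out_ub
```

Starting from identity bounds on the input layer and alternating these two steps yields concrete bounds for every neuron, which is roughly the kind of information the "mechanism for extracting the bounds" commit exposes to the engine.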
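
The LPFormulator/MILPFormulator commits encode the network for Gurobi. As an illustration only, using gurobipy rather than Marabou's C++ GurobiWrapper and with hypothetical helper names (relu_lp, relu_milp), here is how a single ReLU y = max(0, x) with pre-activation bounds l < 0 < u is commonly encoded as a linear relaxation versus an exact mixed-integer constraint:

```python
import gurobipy as gp
from gurobipy import GRB

def relu_lp(model, x, l, u):
    """LP (triangle) relaxation of y = max(0, x), assuming l < 0 < u."""
    y = model.addVar(lb=0.0, ub=u)
    model.addConstr(y >= x)
    model.addConstr(y <= (u / (u - l)) * (x - l))   # chord from (l, 0) to (u, u)
    return y

def relu_milp(model, x, l, u):
    """Exact big-M encoding with a binary phase indicator a (a = 1: active)."""
    y = model.addVar(lb=0.0, ub=u)
    a = model.addVar(vtype=GRB.BINARY)
    model.addConstr(y >= x)
    model.addConstr(y <= x - l * (1 - a))   # a = 1 forces y <= x
    model.addConstr(y <= u * a)             # a = 0 forces y = 0
    return y

# Tightening an upper bound on the ReLU output over the input box:
m = gp.Model()
m.Params.OutputFlag = 0
x = m.addVar(lb=-1.0, ub=2.0)
y = relu_lp(m, x, -1.0, 2.0)     # or relu_milp for the exact (costlier) bound
m.setObjective(y, GRB.MAXIMIZE)
m.optimize()
print("derived upper bound on the ReLU output:", m.ObjVal)
```

Swapping the triangle relaxation for the big-M constraints trades solve time for exact bounds, which is broadly the trade-off behind offering both LP- and MILP-based encodings as configuration options.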
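
The "incremental mode" and "cutoff mechanism" commits reuse one solver instance across many bound queries instead of rebuilding the encoding per neuron, and abort a query early once its outcome is decided. The sketch below shows one plausible shape of that loop in gurobipy; the function and the `neuron_vars` mapping are hypothetical, and the concrete stopping criterion in LPFormulator/MILPFormulator may differ.

```python
import gurobipy as gp
from gurobipy import GRB

def tighten_lower_bounds(model, neuron_vars, cutoff=0.0):
    """Minimize each neuron on the same model, stopping a solve early once a
    feasible point below `cutoff` is found (the sign is then already known)."""
    model.Params.OutputFlag = 0
    model.Params.BestObjStop = cutoff        # early-stop threshold for MIP solves
    bounds = {}
    for name, v in neuron_vars.items():
        model.setObjective(v, GRB.MINIMIZE)  # only the objective changes,
        model.optimize()                     # so Gurobi warm-starts each solve
        if model.Status == GRB.OPTIMAL:
            bounds[name] = model.ObjVal      # exact minimum of this neuron
        elif model.Status == GRB.USER_OBJ_LIMIT:
            bounds[name] = model.ObjBound    # still a sound lower bound (MIP case)
    return bounds
```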
matanost pushed a commit that referenced this pull request Nov 2, 2021