
Auto construction of a network level reasoner #253

Merged
guykatzz merged 30 commits into NeuralNetworkVerification:master from topology on May 20, 2020

Conversation

guykatzz (Collaborator)

When a user has not provided a network-level reasoner as part of the input query, Marabou will attempt to figure out the topology of the network automatically and construct one itself. This is needed especially for symbolic bound tightening.
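As a rough sketch of the idea (hypothetical names and simplifications, not the PR's actual code), the inference can be phrased as a fixpoint loop: starting from the input variables, repeatedly peel off the equations whose participants are all already placed; their output variables form the next layer.

// Hypothetical sketch, not Marabou's actual implementation.
#include <algorithm>
#include <set>
#include <vector>

struct Eq
{
    std::set<unsigned> rhsVars; // participants of the weighted sum
    unsigned outputVar;         // the variable the sum defines
};

std::vector<std::set<unsigned>> inferLayers( const std::set<unsigned> &inputVars,
                                             std::vector<Eq> eqs )
{
    std::vector<std::set<unsigned>> layers = { inputVars };
    std::set<unsigned> discovered = inputVars;

    while ( !eqs.empty() )
    {
        // An equation is "ready" when all its participants are placed.
        auto ready = [&discovered]( const Eq &eq ) {
            return std::all_of( eq.rhsVars.begin(), eq.rhsVars.end(),
                                [&discovered]( unsigned v )
                                { return discovered.count( v ) > 0; } );
        };

        std::set<unsigned> nextLayer;
        for ( const Eq &eq : eqs )
            if ( ready( eq ) )
                nextLayer.insert( eq.outputVar );

        if ( nextLayer.empty() )
            break; // cannot resolve further; not a simple feed-forward net

        eqs.erase( std::remove_if( eqs.begin(), eqs.end(), ready ), eqs.end() );
        discovered.insert( nextLayer.begin(), nextLayer.end() );
        layers.push_back( nextLayer );
    }
    return layers;
}

The real implementation must also account for the piecewise-linear constraints (e.g. ReLU pairs) connecting consecutive layers, which is what the conversation below is about.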

if ( eq._type != Equation::EQ )
    continue;

Set<unsigned> eqVars = eq.getParticipatingVariables();
AleksandarZeljic (Collaborator)

Storing this set across iterations might be more efficient than repeatedly attempting to reduce it to one, even though this will probably be a rare case.

This would also address my next comment, because it would then be sufficient to eliminate just the processed variables, since the effects would accumulate across layer iterations.

guykatzz (Collaborator, Author)

I didn't understand this comment. What does it mean for the topology if an equation's variables appear across multiple layers?

AleksandarZeljic (Collaborator), May 19, 2020

Maybe I made an assumption that's incorrect:

I assumed that each equation has the form weighted-sum-of-inputs = backward-facing variable. If some variables skip levels, there is no guarantee that all members of the weighted sum belong to the same level, so they may be discharged at different times. What I was suggesting was a mapping {vars} -> backward-facing variable: through the iterations we eliminate variables from the set, and once the set is empty we know the position of the activation function and can remove the equation from consideration.
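A hypothetical sketch of this bookkeeping (the names are illustrative, not from the PR): map each backward-facing variable to the participants of its equation that have not yet been assigned to a layer, erase variables as each layer is processed, and drop an equation once its set empties.

// Hypothetical sketch of the suggested {vars} -> backward-facing-variable scheme.
#include <cstdio>
#include <map>
#include <set>

void processLayer( unsigned layerIndex,
                   const std::set<unsigned> &layerVars,
                   std::map<unsigned, std::set<unsigned>> &pending )
{
    for ( auto it = pending.begin(); it != pending.end(); )
    {
        // Erase the variables placed by this layer from the equation's set.
        for ( unsigned v : layerVars )
            it->second.erase( v );

        if ( it->second.empty() )
        {
            // All participants placed: the activation fed by variable
            // it->first sits right after this layer; drop the equation.
            printf( "Activation on x%u placed after layer %u\n",
                    it->first, layerIndex );
            it = pending.erase( it );
        }
        else
            ++it;
    }
}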

AleksandarZeljic (Collaborator) left a comment:

The algorithm LGTM. I've added some comments that cover some corner cases and improve efficiency a bit. See what you think is worth addressing, and I can take another look.

guykatzz (Collaborator, Author)

Hi Aleks,
Could you please have a look at the changes?

AleksandarZeljic (Collaborator) left a comment:

LGTM

if ( bias > 0 )
    printf( " + %.2lf", bias );
else
    printf( " - %.2lf", -bias );
AleksandarZeljic (Collaborator)

This can also be compacted by moving the sign into the format string.
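For instance, printf's standard '+' conversion flag makes it emit the sign itself, so the branch collapses to one call (the spacing differs slightly: " +3.00" rather than " + 3.00"):

// '+' flag: print a sign for both positive and negative values.
printf( " %+.2lf", bias );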

guykatzz merged commit cf65e39 into NeuralNetworkVerification:master on May 20, 2020
guykatzz deleted the topology branch on May 20, 2020
AleksandarZeljic pushed a commit to AleksandarZeljic/Marabou that referenced this pull request on Oct 9, 2020: Auto construction of a network level reasoner (NeuralNetworkVerification#253)

* types for PL constraints

* first attempt at automatically constructing the NLR

* first unit test, couple of bug fixes

* use PiecewiseLinearFunctionType

* cleanup

* some cleanup, and store more information about discovered neurons

* wip

* complete the construction of the NLR

* bug fixes

* bug fix

* dumping functionality

* bug fix

* test

* changes per Aleks' comments

* minor

* python bindings

* minor

Co-authored-by: Guy Katz <guykatz@cs.huji.ac.il>
matanost pushed a commit that referenced this pull request on Nov 2, 2021 (same commit messages as above).