More robust pivot selection (#81)
* initial work on sparse matrix representation

* store/restore functionality

* addLastRow functionality

* getRow and getColumn

* column-merging functionality

* added an interface class

* also introducing sparse vectors

* added addLastColumn functionality

* another unittest

* get sparse columns/matrices in dense form

* WIP on storing the constraint matrix inside the tableau in sparse form

* more WIP, fixed a few bugs, still have a couple of failing tests

* fixed some test issues

* initialization

* resize _a along with the rest

* sparse lu factors

* also store the transposed versions of F and V

* starting work on the sparse GE

* some work on changing the values within an existing sparse
representation, needed for sparse factorization.
still WIP

* refactoring and new functionality for CSRMatrix: any kind of
insertions and deletions

* support for empty initialization and counting elements

* sparse GE is now working. minor bug fixes elsewhere

* compute Ft and Vt as part of the G-elimination process

* tests

* basis oracles can also return sparse columns, not just dense ones

* sparse LU factorization class

* switch to using the sparse factorization in the engine/tableau

* bug fix in mergeColumns, and nicer printing

* bug fix

* bug fix

* bug fix: merging columns does not delete the actual column, just
leaves it empty

* configuration changes

* optimization: since the sparse columns of A are needed all the time,
just compute them once-and-for-all

* a more efficient implementation of sparse vectors

* comments and unit tests

* cleanup

* keep _A in dense column-major format, too, instead of repeatedly
invoking toDense() to get its columns

* bad deletes

* bug fix in test

* bug fixes: relu constraint propagation, and the handling of merged
variables in the preprocessor

* new test

* compute Ft incrementally, use it whenever sparse columns are required

* did the todo

* valgrind fixes

* debugging

* un-initialized memory

* cleanup

* fixing an issue with cost function degradation

* reinstating the anti-looping trick

* debug prints

* verify invariants

* debug info for crash

* fixing a numerical stability issue
new assertions

* cleanup

* cleanup

* removing the code with the failing assertion

* weaken a couple of too-strong assertions

* bug fix and some fine-tuning of PSE

* bug fix in LU factorization
optimizations to PSE

* WIP

* cleanup

* cleanup

* fine-tuning the computation of basic costs

* more robust comparisons

* more stable cost function computation

* changes to how the cost function is adjusted at the beginning of every
loop, due to degeneracy

* made the decision on which non-basics are eligible for entry more robust

* minor

* more robust calculations

* more robust PSE computation

* if stuck with an unstable pivot, refresh the basis factorization and
try again
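The headline change, refreshing the basis factorization when a simplex step is stuck with an unstable pivot, is implemented in the Engine.cpp hunk below. As a reading aid, here is a minimal sketch of that fallback; pickBestPivot() is a hypothetical helper, while the _tableau and _statistics calls are the ones that appear in the diff.

void Engine::performSimplexStepSketch()
{
    // Hypothetical helper: best pivot entry found over several candidate searches.
    double bestPivotEntry = pickBestPivot();
    bool fakePivot = _tableau->performingFakePivot();

    if ( !fakePivot &&
         bestPivotEntry < GlobalConfiguration::ACCEPTABLE_SIMPLEX_PIVOT_THRESHOLD )
    {
        // Stuck with a small pivot. If the basis factorization is not fresh,
        // refresh it and end this step - the next iteration may find a better pivot.
        if ( !_tableau->basisMatrixAvailable() )
        {
            _tableau->refreshBasisFactorization();
            return;
        }

        // The factorization is already fresh: accept the unstable pivot, but count it.
        _statistics.incNumSimplexUnstablePivots();
    }

    // ... perform the (possibly unstable) pivot as usual ...
}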
guykatzz authored Jul 29, 2018
1 parent b2a5119 commit bfdb0d8
Showing 4 changed files with 30 additions and 11 deletions.
2 changes: 2 additions & 0 deletions src/configuration/GlobalConfiguration.cpp
@@ -41,6 +41,8 @@ const bool GlobalConfiguration::PREPROCESSOR_PL_CONSTRAINTS_ADD_AUX_EQUATIONS =

const unsigned GlobalConfiguration::PSE_ITERATIONS_BEFORE_RESET = 1000;
const double GlobalConfiguration::PSE_GAMMA_ERROR_THRESHOLD = 0.001;
const double GlobalConfiguration::PSE_GAMMA_UPDATE_TOLERANCE = 0.000000001;


const double GlobalConfiguration::RELU_CONSTRAINT_COMPARISON_TOLERANCE = 0.001;

3 changes: 3 additions & 0 deletions src/configuration/GlobalConfiguration.h
@@ -94,6 +94,9 @@ class GlobalConfiguration
// An error threshold which, when crossed, causes projected steepest edge to reset the reference space
static const double PSE_GAMMA_ERROR_THRESHOLD;

// PSE's Gamma function's update tolerance
static const double PSE_GAMMA_UPDATE_TOLERANCE;

// The tolerance for checking whether f = Relu( b ), to determine a ReLU's statisfaction
static const double RELU_CONSTRAINT_COMPARISON_TOLERANCE;
/*
29 changes: 21 additions & 8 deletions src/engine/Engine.cpp
@@ -308,17 +308,17 @@ void Engine::performSimplexStep()
}
});

// Obtain all eligible entering varaibles
List<unsigned> enteringVariableCandidates;
_tableau->getEntryCandidates( enteringVariableCandidates );

unsigned bestLeaving = 0;
double bestChangeRatio = 0.0;
Set<unsigned> excludedEnteringVariables;
bool haveCandidate = false;
unsigned bestEntering = 0;
double bestPivotEntry = 0.0;
unsigned tries = GlobalConfiguration::MAX_SIMPLEX_PIVOT_SEARCH_ITERATIONS;
Set<unsigned> excludedEnteringVariables;
unsigned bestLeaving = 0;
double bestChangeRatio = 0.0;

// Obtain all eligible entering varaibles
List<unsigned> enteringVariableCandidates;
_tableau->getEntryCandidates( enteringVariableCandidates );

while ( tries > 0 )
{
@@ -413,8 +413,21 @@
bool fakePivot = _tableau->performingFakePivot();

if ( !fakePivot &&
FloatUtils::lt( bestPivotEntry, GlobalConfiguration::ACCEPTABLE_SIMPLEX_PIVOT_THRESHOLD ) )
bestPivotEntry < GlobalConfiguration::ACCEPTABLE_SIMPLEX_PIVOT_THRESHOLD )
{
/*
Despite our efforts, we are stuck with a small pivot. If basis factorization
isn't fresh, refresh it and terminate this step - perhaps in the next iteration
a better pivot will be found
*/
if ( !_tableau->basisMatrixAvailable() )
{
_tableau->refreshBasisFactorization();
return;
}

_statistics.incNumSimplexUnstablePivots();
}

if ( !fakePivot )
{
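Several of the bullets above ("more robust comparisons", "more robust calculations") come down to replacing tolerance-based comparisons with exact ones where a hard cutoff is intended, as in the threshold check in the hunk above, where FloatUtils::lt( bestPivotEntry, ... ) became a plain <. A tiny self-contained illustration of the difference; epsilonLt stands in for a typical tolerance-based comparison and is not the project's actual FloatUtils implementation, whose semantics are not shown in this diff, and the threshold value is likewise made up.

#include <cassert>

// Stand-in for a typical tolerance-based "less than"; not the project's FloatUtils.
static const double EPSILON = 0.0000001;

static bool epsilonLt( double x, double y )
{
    return y - x > EPSILON; // x is "less than" y only if the gap exceeds the tolerance
}

int main()
{
    const double threshold = 0.0001; // stand-in for ACCEPTABLE_SIMPLEX_PIVOT_THRESHOLD
    double pivot = threshold - 1e-9; // barely below the threshold

    assert( !epsilonLt( pivot, threshold ) ); // tolerance-based check: not flagged as unstable
    assert( pivot < threshold );              // exact check: flagged as unstable
    return 0;
}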
7 changes: 4 additions & 3 deletions src/engine/ProjectedSteepestEdge.cpp
@@ -225,7 +225,8 @@ void ProjectedSteepestEdgeRule::prePivotHook( const ITableau &tableau, bool fake
if ( i == enteringIndex )
continue;

if ( FloatUtils::isZero( pivotRow[i], 1e-9 ) )
if ( ( -GlobalConfiguration::PSE_GAMMA_UPDATE_TOLERANCE < pivotRow[i] ) &&
( pivotRow[i] < +GlobalConfiguration::PSE_GAMMA_UPDATE_TOLERANCE ) )
continue;

r = pivotRow[i] / -changeColumn[leavingIndex];
@@ -243,7 +244,7 @@
t1 = _gamma[i] + r * ( r * accurateGamma + s + s );
t2 = ( ( _referenceSpace[nonBasic] ? 1.0 : 0.0 ) +
( ( _referenceSpace[entering] ? 1.0 : 0.0 ) * r * r ) );
_gamma[i] = FloatUtils::max( t1, t2 );
_gamma[i] = ( t1 > t2 ? t1 : t2 );
}

log( "PrePivotHook done" );
Expand Down Expand Up @@ -290,7 +291,7 @@ void ProjectedSteepestEdgeRule::postPivotHook( const ITableau &tableau, bool fak
}

// If the error is too great, reset the reference space.
if ( FloatUtils::gt( _errorInGamma, GlobalConfiguration::PSE_GAMMA_ERROR_THRESHOLD ) )
if ( _errorInGamma > GlobalConfiguration::PSE_GAMMA_ERROR_THRESHOLD )
{
log( Stringf( "PostPivotHook reseting ref space (degradation). Error = %.15lf", _errorInGamma ) );
resetReferenceSpace( tableau );
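The ProjectedSteepestEdge change above swaps a hard-coded 1e-9 zero test and a tolerance-based max for an explicit, named tolerance band and a plain comparison. A condensed, self-contained sketch of the per-entry gamma update; the function signature and the free-standing constant are assumptions, and only the body mirrors the hunk.

#include <vector>

static const double PSE_GAMMA_UPDATE_TOLERANCE = 0.000000001;

void updateGammaEntry( std::vector<double> &gamma,
                       const std::vector<double> &pivotRow,
                       const std::vector<double> &changeColumn,
                       const std::vector<bool> &referenceSpace,
                       unsigned i, unsigned nonBasic, unsigned entering,
                       unsigned leavingIndex,
                       double accurateGamma, double s )
{
    // Pivot-row entries inside the tolerance band are treated as zero: skip them.
    if ( ( -PSE_GAMMA_UPDATE_TOLERANCE < pivotRow[i] ) &&
         ( pivotRow[i] < +PSE_GAMMA_UPDATE_TOLERANCE ) )
        return;

    double r = pivotRow[i] / -changeColumn[leavingIndex];
    double t1 = gamma[i] + r * ( r * accurateGamma + s + s );
    double t2 = ( referenceSpace[nonBasic] ? 1.0 : 0.0 ) +
                ( referenceSpace[entering] ? 1.0 : 0.0 ) * r * r;

    // Plain max instead of a tolerance-based FloatUtils::max.
    gamma[i] = ( t1 > t2 ? t1 : t2 );
}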
