compute bound multipliers for fixed variables treated as parameter
- extended ResortBoundMultipliers to compute z_L and z_U for fixed
  variables if the treatment is make_parameter, by assembling the
  gradient of the Lagrangian from evaluations of grad_f and jac_g
  (see the sketch after this list)
- added option fixed_variable_treatment=make_parameter_nodual to restore
  original behavior (return 0 for duals of fixed vars)
- extended TNLP::get_curr_iterate to get correct duals also if requesting
  a scaled solution
- extended getcurr test
- fixes #308
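
For reference, a sketch of the recovery that the first bullet describes, assuming Ipopt's convention that the gradient of the Lagrangian is $\nabla f(x) + \nabla g(x)^T \lambda - z_L + z_U$ (the split of the residual by sign is an assumption about the implementation, not quoted from it): for a variable $x_i$ fixed via make_parameter, stationarity requires

\[ z_{L,i} - z_{U,i} \;=\; \nabla_i f(x) + \sum_j \lambda_j \nabla_i g_j(x), \]

so after evaluating grad_f and jac_g, $z_{L,i}$ can be set to the positive part and $z_{U,i}$ to the negative part of the right-hand side, keeping both multipliers nonnegative.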
svigerske committed Apr 21, 2021
1 parent 483e4cd commit d852154
Showing 9 changed files with 268 additions and 52 deletions.
6 changes: 6 additions & 0 deletions ChangeLog
@@ -83,6 +83,12 @@ commit history.
been extended by corresponding functions, too. The hs071 examples
have been extended to show use of the new functions. Added test
getcurr to test new functions. [#382, #451]
- Bound multipliers are now computed for fixed variables if
fixed_variable_treatment is set to make_parameter (the default).
    This can trigger a reevaluation of the gradient of the objective
function or the Jacobian of the constraint functions. If this
is not desired, then option fixed_variable_treatment can be set
to the new value make_parameter_nodual.

2021-0x-yy: 3.13.5
- Allow to use --without-pardiso to disable check for MKL Pardiso [#454]
3 changes: 2 additions & 1 deletion doc/options.dox
@@ -334,10 +334,11 @@ Possible values:
\anchor OPT_fixed_variable_treatment
<strong>fixed_variable_treatment</strong>: Determines how fixed variables should be handled.
<blockquote>
The main difference between those options is that the starting point in the "make_constraint" case still has the fixed variables at their given values, whereas in the case "make_parameter" the functions are always evaluated with the fixed values for those variables. Also, for "relax_bounds", the fixing bound constraints are relaxed (according to" bound_relax_factor"). For both "make_constraints" and "relax_bounds", bound multipliers are computed for the fixed variables. The default value for this string option is "make_parameter".
The main difference between those options is that the starting point in the "make_constraint" case still has the fixed variables at their given values, whereas in the case "make_parameter(_nodual)" the functions are always evaluated with the fixed values for those variables. Also, for "relax_bounds", the fixing bound constraints are relaxed (according to "bound_relax_factor"). For all but "make_parameter_nodual", bound multipliers are computed for the fixed variables. The default value for this string option is "make_parameter".

Possible values:
- make_parameter: Remove fixed variable from optimization variables
- make_parameter_nodual: Remove fixed variable from optimization variables and do not compute bound multipliers for fixed variables
- make_constraint: Add equality constraints fixing variables
- relax_bounds: Relax fixing bound constraints
</blockquote>
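
For reference, a minimal sketch of selecting the new option value through the C++ interface; the option name and values come from the documentation above, while the surrounding setup is standard IpoptApplication boilerplate (a sketch, not taken from this commit):

#include "IpIpoptApplication.hpp"

using namespace Ipopt;

int main()
{
   SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

   // restore the pre-change behavior: fixed variables are removed from the
   // problem and zeros are returned for their bound multipliers
   app->Options()->SetStringValue("fixed_variable_treatment", "make_parameter_nodual");

   if( app->Initialize() != Solve_Succeeded )
      return 1;

   // ... construct a TNLP and call app->OptimizeTNLP(...) as usual
   return 0;
}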
8 changes: 5 additions & 3 deletions src/Interfaces/IpStdCInterface.h
@@ -343,12 +343,15 @@ IPOPTLIB_EXPORT IPOPT_EXPORT(enum ApplicationReturnStatus) IpoptSolve(
* For the correspondence between scaled and unscaled solutions, see the detailed description of OrigIpoptNLP.
* If Ipopt is in restoration mode, it maps the current iterate of restoration %NLP (see RestoIpoptNLP) back to the original TNLP.
*
* If there are fixed variables and fixed_variable_treatment=make_parameter, then requesting z_L and z_U can trigger a reevaluation of
* the gradient of the objective function and the Jacobian of the constraint functions.
*
* @param ipopt_problem (in) Problem that is currently optimized.
* @param n (in) the number of variables \f$x\f$ in the problem; can be arbitrary if skipping x, z_L, and z_U
* @param scaled (in) whether to retrieve scaled or unscaled iterate
* @param x (out) buffer to store value of primal variables \f$x\f$, must have length at least n; pass NULL to skip retrieving x
* @param z_L (out) buffer to store the lower bound multipliers \f$z_L\f$, must have length at least n; pass NULL to skip retrieving z_L
* @param z_U (out) buffer to store the upper bound multipliers \f$z_U\f$, must have length at least n; pass NULL to skip retrieving z_U
* @param z_L (out) buffer to store the lower bound multipliers \f$z_L\f$, must have length at least n; pass NULL to skip retrieving z_L and z_U
* @param z_U (out) buffer to store the upper bound multipliers \f$z_U\f$, must have length at least n; pass NULL to skip retrieving z_L and z_U
* @param m (in) the number of constraints \f$g(x)\f$; can be arbitrary if skipping g and lambda
* @param g (out) buffer to store the constraint values \f$g(x)\f$, must have length at least m; pass NULL to skip retrieving g
* @param lambda (out) buffer to store the constraint multipliers \f$\lambda\f$, must have length at least m; pass NULL to skip retrieving lambda
@@ -378,7 +381,6 @@ IPOPTLIB_EXPORT IPOPT_EXPORT(Bool) GetIpoptCurrentIterate(
* from ip_cq of the internal NLP representation available into the form used by the TNLP.
* If Ipopt is in restoration mode, it maps the current iterate of restoration %NLP (see RestoIpoptNLP) back to the original TNLP.
*
* @note If fixed variables are treated as parameters (the default), then their corresponding entry in the derivative of the Lagrangian is set to 0.
* @note If in restoration phase, then requesting grad_lag_x can trigger a call to Eval_F_CB.
*
* @param ipopt_problem (in) Problem that is currently optimized.
71 changes: 61 additions & 10 deletions src/Interfaces/IpTNLP.cpp
@@ -530,21 +530,70 @@ bool TNLP::get_curr_iterate(
return false;

tnlp_adapter->GetFullDimensions(n_full, m_full);
if( n != n_full && (x != NULL || z_L != NULL || z_U != NULL) )
if( n != n_full && (x != NULL || (z_L != NULL && z_U != NULL)) )
THROW_EXCEPTION(IpoptException, "Incorrect dimension of x given to TNLP::get_curr_iterate().\n");
if( m != m_full && (lambda != NULL || g != NULL) )
THROW_EXCEPTION(IpoptException, "Incorrect dimension of g(x) given to TNLP::get_curr_iterate().\n");

SmartPtr<const DenseVector> intern_x;
SmartPtr<const DenseVector> intern_y_c;
SmartPtr<const DenseVector> intern_y_d;

if( x != NULL || (z_L != NULL && z_U != NULL) )
intern_x = curr_x(ip_data, ip_cq, orignlp, restonlp, scaled);

if( (z_L != NULL && z_U != NULL) || lambda != NULL )
{
intern_y_c = curr_y_c(ip_data, ip_cq, orignlp, restonlp, scaled);
intern_y_d = curr_y_d(ip_data, ip_cq, orignlp, restonlp, scaled);
}

// resort Ipopt-internal x to TNLP-version of x, i.e., reinsert fixed variables
if( x != NULL )
tnlp_adapter->ResortX(*curr_x(ip_data, ip_cq, orignlp, restonlp, scaled), x);
tnlp_adapter->ResortX(*intern_x, x);

// resort Ipopt-internal variable duals to TNLP-version
if( z_L != NULL || z_U != NULL )
tnlp_adapter->ResortBoundMultipliers(
*curr_y_c(ip_data, ip_cq, orignlp, restonlp, scaled),
*curr_z_L(ip_data, ip_cq, orignlp, restonlp, scaled), z_L,
*curr_z_U(ip_data, ip_cq, orignlp, restonlp, scaled), z_U);
if( z_L != NULL && z_U != NULL )
{
int n_x_fixed;
Index* x_fixed_map;
TNLPAdapter::FixedVariableTreatmentEnum fixed_variable_treatment;
tnlp_adapter->GetFixedVariables(n_x_fixed, x_fixed_map, fixed_variable_treatment);

if( !scaled || n_x_fixed == 0 || fixed_variable_treatment != TNLPAdapter::MAKE_PARAMETER )
tnlp_adapter->ResortBoundMultipliers(
*intern_x, *intern_y_c, *intern_y_d,
*curr_z_L(ip_data, ip_cq, orignlp, restonlp, scaled), z_L,
*curr_z_U(ip_data, ip_cq, orignlp, restonlp, scaled), z_U);
else
{
// ResortBoundMultipliers() doesn't work for scaled input on x, y_c, and y_d in this case
// so we pass on unscaled values of x, y_c, and y_d and then scale entries of z_L and z_U for fixed vars manually
tnlp_adapter->ResortBoundMultipliers(
*curr_x(ip_data, ip_cq, orignlp, restonlp, false),
*curr_y_c(ip_data, ip_cq, orignlp, restonlp, false),
*curr_y_d(ip_data, ip_cq, orignlp, restonlp, false),
*curr_z_L(ip_data, ip_cq, orignlp, restonlp, true), z_L,
*curr_z_U(ip_data, ip_cq, orignlp, restonlp, true), z_U);
Number obj_scal = orignlp->NLP_scaling()->apply_obj_scaling(1.0);
if( obj_scal != 1.0 )
for( Index i = 0; i < n_x_fixed; ++i )
{
if( obj_scal > 0.0 )
{
z_L[x_fixed_map[i]] *= obj_scal;
z_U[x_fixed_map[i]] *= obj_scal;
}
else
{
// need to swap between z_L and z_U in this case
Number tmp = -z_L[x_fixed_map[i]] * obj_scal;
z_L[x_fixed_map[i]] = -z_U[x_fixed_map[i]] * obj_scal;
z_U[x_fixed_map[i]] = tmp;
}
}
}
}

// resort Ipopt-internal constraint activity to TNLP-version
if( g != NULL )
@@ -570,7 +619,7 @@

// resort Ipopt-internal constraint duals to TNLP-version
if( lambda != NULL )
tnlp_adapter->ResortG(*curr_y_c(ip_data, ip_cq, orignlp, restonlp, scaled), *curr_y_d(ip_data, ip_cq, orignlp, restonlp, scaled), lambda);
tnlp_adapter->ResortG(*intern_y_c, *intern_y_d, lambda);

return true;
}
@@ -666,8 +715,10 @@ bool TNLP::get_curr_violations(

if( grad_lag_x != NULL )
{
// this will set the derivative of the Lagrangian w.r.t. fixed variables to 0 (for any fixed_variables_treatment)
// the actual values are nowhere stored within Ipopt Data or CQ, since TNLPAdapter does not seem to pass them on
// this will set the derivative of the Lagrangian w.r.t. fixed variables to 0 if fixed_variable_treatment is make_parameter(_nodual),
// since the actual values are not computed within Ipopt;
// but for fixed_variable_treatment=make_parameter, the bound multipliers (z_L and z_U) are computed by TNLPAdapter::ResortBoundMultipliers()
// such that the gradient of the Lagrangian will be zero, so leaving them at 0 is correct here
tnlp_adapter->ResortX(*curr_grad_lag_x(ip_data, ip_cq, orignlp, restonlp, scaled), grad_lag_x, false);

// if fixed_variable_treatment is make_constraint, then fixed variable contribute y_c*x to the Lagrangian
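
The manual rescaling of z_L and z_U for fixed variables in get_curr_iterate above amounts to the following; a sketch, assuming the objective scaling factor $\sigma$ returned by apply_obj_scaling multiplies the objective and hence the bound multipliers of the scaled problem:

\[ \tilde z_L = \sigma z_L, \qquad \tilde z_U = \sigma z_U \qquad (\sigma > 0), \]
\[ \tilde z_L = -\sigma z_U, \qquad \tilde z_U = -\sigma z_L \qquad (\sigma < 0). \]

For $\sigma < 0$ the scaled problem is effectively a maximization, so lower and upper bound multipliers swap roles; the minus sign keeps both nonnegative, which matches the swap implemented in the loop above.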
12 changes: 5 additions & 7 deletions src/Interfaces/IpTNLP.hpp
@@ -501,10 +501,6 @@ class IPOPTLIB_EXPORT TNLP : public ReferencedObject
/** @name Solution Methods */
///@{
/** This method is called when the algorithm has finished (successfully or not) so the TNLP can digest the outcome, e.g., store/write the solution, if any.
*
* @note If fixed_variable_treatment is make_parameter (the default), then zero is returned for the bound multipliers of fixed variables.
* Therefore, in an optimal solution, the gradient of the Lagrangian w.r.t. fixed variables may not appear to be zero.
* If this is a problem, setting parameter fixed_variable_treatment to make_constraint could be workaround.
*
* @param status @parblock (in) gives the status of the algorithm
* - SUCCESS: Algorithm terminated successfully at a locally optimal
@@ -709,13 +705,16 @@ class IPOPTLIB_EXPORT TNLP : public ReferencedObject
* For the correspondence between scaled and unscaled solutions, see the detailed description of OrigIpoptNLP.
* If Ipopt is in restoration mode, it maps the current iterate of restoration %NLP (see RestoIpoptNLP) back to the original TNLP.
*
* If there are fixed variables and fixed_variable_treatment=make_parameter, then requesting z_L and z_U can trigger a reevaluation of
* the gradient of the objective function and the Jacobian of the constraint functions.
*
* @param ip_data (in) Ipopt Data
* @param ip_cq (in) Ipopt Calculated Quantities
* @param scaled (in) whether to retrieve scaled or unscaled iterate
* @param n (in) the number of variables \f$x\f$ in the problem; can be arbitrary if skipping x, z_L, and z_U
* @param x (out) buffer to store value of primal variables \f$x\f$, must have length at least n; pass NULL to skip retrieving x
* @param z_L (out) buffer to store the lower bound multipliers \f$z_L\f$, must have length at least n; pass NULL to skip retrieving z_L
* @param z_U (out) buffer to store the upper bound multipliers \f$z_U\f$, must have length at least n; pass NULL to skip retrieving z_U
* @param z_L (out) buffer to store the lower bound multipliers \f$z_L\f$, must have length at least n; pass NULL to skip retrieving z_L and z_U
* @param z_U (out) buffer to store the upper bound multipliers \f$z_U\f$, must have length at least n; pass NULL to skip retrieving z_L and z_U
* @param m (in) the number of constraints \f$g(x)\f$; can be arbitrary if skipping g and lambda
* @param g (out) buffer to store the constraint values \f$g(x)\f$, must have length at least m; pass NULL to skip retrieving g
* @param lambda (out) buffer to store the constraint multipliers \f$\lambda\f$, must have length at least m; pass NULL to skip retrieving lambda
@@ -748,7 +747,6 @@ class IPOPTLIB_EXPORT TNLP : public ReferencedObject
* from ip_cq of the internal NLP representation available into the form used by the TNLP.
* If Ipopt is in restoration mode, it maps the current iterate of restoration %NLP (see RestoIpoptNLP) back to the original TNLP.
*
* @note If fixed variables are treated as parameters (the default), then their corresponding entry in the derivative of the Lagrangian is set to 0.
* @note If in restoration phase, then requesting grad_lag_x can trigger a call to eval_grad_f().
*
* @param ip_data (in) Ipopt Data
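
A minimal sketch of how a TNLP implementation might call the extended get_curr_iterate from its intermediate_callback, following the parameter order documented above; the dimensions (n = 4, m = 2) are borrowed from the hs071 example mentioned in the ChangeLog, and error handling is omitted:

// inside a class deriving from Ipopt::TNLP
bool intermediate_callback(
   AlgorithmMode              mode,
   Index                      iter,
   Number                     obj_value,
   Number                     inf_pr,
   Number                     inf_du,
   Number                     mu,
   Number                     d_norm,
   Number                     regularization_size,
   Number                     alpha_du,
   Number                     alpha_pr,
   Index                      ls_trials,
   const IpoptData*           ip_data,
   IpoptCalculatedQuantities* ip_cq
)
{
   Number x[4], z_L[4], z_U[4], g[2], lambda[2];

   // retrieve the unscaled current iterate; with fixed variables and
   // fixed_variable_treatment=make_parameter, filling z_L and z_U may
   // trigger a reevaluation of grad_f and jac_g
   if( get_curr_iterate(ip_data, ip_cq, false, 4, x, z_L, z_U, 2, g, lambda) )
   {
      // use x, z_L, z_U, g, lambda ...
   }

   return true; // continue iterating
}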