Revisit fixed variables treatment #360
base: master
Conversation
* Change the behavior of `MakeParameter`, depending on the callback:
  - `SparseCallback`: remove the fixed variables and reduce the problem's dimension in MadNLP.
  - `DenseCallback`: keep the fixed variables frozen (as done previously).
* Add a new `fixed_handler` for problems with no fixed variable: `NoFixedVariable`.
* Ensure we always interact with the nonlinear program `nlp` through the callback wrapper `cb` to avoid side effects.
* Remove the function `get_index_constraint` and store the indexes directly in the callback wrapper `cb`.
* Fix the computation of the bound multipliers for fixed variables.
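The sparse strategy above can be sketched as follows. This is a hypothetical illustration with made-up data, not MadNLP's actual API: a variable is fixed when its lower and upper bounds coincide, and a mapping from the full variable space to the reduced ("free") space is built.

```julia
# Minimal sketch of detecting fixed variables and building the
# full-to-free index mapping (illustrative data, hypothetical names).
lvar = [0.0, 1.0, -Inf, 3.0]
uvar = [0.0, 2.0,  Inf, 3.0]

ind_fixed = findall(lvar .== uvar)    # variables pinned by their bounds
ind_free  = findall(lvar .!= uvar)

# map_full_to_free[i] == 0 means variable i was removed from the problem
map_full_to_free = zeros(Int, length(lvar))
map_full_to_free[ind_free] .= 1:length(ind_free)

@assert ind_fixed == [1, 4]
@assert map_full_to_free == [0, 1, 0, 2]
```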
Results on CUTEst are out (comparing this PR with MadNLP 0.8.4). It looks like we only improve the total number of iterations very slightly. In terms of computation time, we are slower, and we are also less robust than before. Currently investigating what's going on.

The benchmark improves if we apply the fix from #361 in this PR.
LGTM. I just suggest we use `copyto!` whenever we can. Thanks for working on this PR @frapac!
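As a hedged illustration of the suggestion (plain Julia, not MadNLP code): `copyto!` writes into a preallocated buffer in place, which for a plain copy is equivalent to the broadcast `dest .= src` but makes the intent explicit.

```julia
# copyto! copies src into the existing buffer dest without allocating
# a new array.
src = [1.0, 2.0, 3.0]
dest = zeros(3)
copyto!(dest, src)
@assert dest == src

# The broadcast form performs the same in-place copy:
dest2 = zeros(3)
dest2 .= src
@assert dest2 == dest
```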
```julia
nnzh = length(ind_hess_free)
Hi, Hj = similar(hess_I, nnzh), similar(hess_J, nnzh)
Hi .= map_full_to_free[hess_I[ind_hess_free]]
```
views and `copyto!` here?
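A sketch of what views could look like for the snippet above. The names `hess_I`, `ind_hess_free`, and `map_full_to_free` come from the diff; the values are illustrative only, and this is one possible reading of the suggestion, not the PR's implementation.

```julia
# Chained views index lazily, so only the final broadcast assignment
# writes memory; the intermediate array hess_I[ind_hess_free] is avoided.
hess_I = [1, 3, 4, 6]
ind_hess_free = [1, 3]
map_full_to_free = [1, 0, 2, 3, 0, 4]   # 0 marks a fixed (removed) variable

Hi = similar(hess_I, length(ind_hess_free))
Hi .= view(map_full_to_free, view(hess_I, ind_hess_free))
@assert Hi == [1, 3]
```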
```julia
nnzj = length(ind_jac_free)
Ji, Jj = similar(jac_I, nnzj), similar(jac_J, nnzj)
Ji .= jac_I[ind_jac_free]
```
views and `copyto!` here?
```julia
if m > 0
    if equality_treatment == EnforceEquality
        ind_eq = findall(lcon .== ucon)
```
```julia
is_equality = (lcon .== ucon)
ind_eq = findall(is_equality)
ind_ineq = findall(!, is_equality)
```
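With toy data (illustrative values only), the suggested partition behaves like this; `findall(!, is_equality)` applies the negation predicate element-wise and returns the complementary indices.

```julia
# Partition constraint indices into equalities (lcon == ucon) and
# inequalities, using a single boolean mask.
lcon = [0.0, -1.0, 2.0]
ucon = [0.0,  1.0, 2.0]
is_equality = (lcon .== ucon)
ind_eq   = findall(is_equality)     # indices where the mask is true
ind_ineq = findall(!, is_equality)  # indices where the mask is false
@assert ind_eq == [1, 3]
@assert ind_ineq == [2]
```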
```julia
end
```
```julia
ind_llb = findall((lvar .!= -Inf) .* (uvar .== Inf))
ind_uub = findall((lvar .== -Inf) .* (uvar .!= Inf))
```
```julia
is_lower_unbounded = (lvar .== -Inf)
is_upper_unbounded = (uvar .== Inf)
ind_llb = findall(.!is_lower_unbounded .& is_upper_unbounded)
ind_uub = findall(is_lower_unbounded .& .!is_upper_unbounded)
```
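A toy check of the mask-based indexing (values illustrative only). Note that `findall` is still needed to turn the boolean masks back into index vectors, as in the original code.

```julia
# Classify variables: only a lower bound (ind_llb) vs. only an upper
# bound (ind_uub).
lvar = [-Inf, 0.0, -Inf, 1.0]
uvar = [ Inf, Inf,  5.0, 2.0]
is_lower_unbounded = (lvar .== -Inf)
is_upper_unbounded = (uvar .== Inf)
ind_llb = findall(.!is_lower_unbounded .& is_upper_unbounded)
ind_uub = findall(is_lower_unbounded .& .!is_upper_unbounded)
@assert ind_llb == [2]   # lower bounded, upper unbounded
@assert ind_uub == [3]   # lower unbounded, upper bounded
```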
```julia
equality_handler = equality_treatment()

# Get indexing
```
If this is a repetition from `SparseCallback`, I suggest writing it as a function.
Change the behavior of `MakeParameter`, depending on the callback:

- `SparseCallback`: remove the fixed variables and reduce the problem's dimension in MadNLP.
- `DenseCallback`: keep the fixed variables frozen (as done previously). Contrary to `SparseCallback`, we don't remove the fixed variables, as this would require duplicating the Hessian and Jacobian matrices to evaluate the model, which could be expensive if we are memory bound.

Also:

- Add a new `fixed_handler` for problems with no fixed variable: `NoFixedVariable`.
- Ensure we always interact with the nonlinear program `nlp` through the callback wrapper `cb` to avoid side effects.
- Remove the function `get_index_constraint` and store the indexes directly in the callback wrapper `cb`.

Solves #70, #229.

Warning: this PR adds a small breaking change in `create_kkt_system` (we remove `ind_cons` from the arguments), but I think it's worth the change, as the previous situation was error prone. Now we make sure we store all the problem's information inside the callback wrapper `cb`.

Preliminary results on `space_shuttle_reentry` show that the current PR slightly improves convergence (fewer iterations, less feasibility restoration); see the logs in attachment. Benchmark on CUTEst is under way.