
Refactor objective, gradient and evaluate_fun_with_grad methods #128

Merged: 3 commits into mechmotum:master on Aug 26, 2021

Conversation

@jhelgert (Contributor) commented Aug 16, 2021

This small PR is motivated by the discussion here. It refactors the evaluate_fun_with_grad, objective and gradient methods in the scipy interface and handles the jac=True option for the objective function similarly to how the constraints are handled, by using the MemoizeJac decorator.

Should we leave the evaluate_fun_with_grad method as it is, or do you think deprecating it is the better way?
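For illustration, this is roughly what the jac=True pattern looks like from the user's side (a sketch, not from the PR itself; minimize_ipopt is cyipopt's SciPy-style entry point, and the Rosenbrock objective is just an example):

import numpy as np
from cyipopt import minimize_ipopt

def rosen_with_grad(x):
    # Returns both the objective value and its gradient in one call.
    f = 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
    grad = np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])
    return f, grad

# jac=True signals that the objective returns a (value, gradient) tuple.
res = minimize_ipopt(rosen_with_grad, x0=np.array([0.5, 0.5]), jac=True)
print(res.x)  # should converge towards [1.0, 1.0]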

@moorepants (Collaborator) commented Aug 24, 2021

Can you provide more information? I'm not sure what this is doing or what you want to deprecate.

@jhelgert (Contributor, Author) commented

Sure! While discussing my last PR, @brocksam proposed refactoring the relevant parts of the evaluate_fun_with_grad() method into the objective() and gradient() methods. The constraints already use the MemoizeJac decorator:

for con in constraints:
    con_fun = con['fun']
    con_jac = con.get('jac', None)
    con_args = con.get('args', [])
    con_hessian = con.get('hess', None)
    con_kwargs = con.get('kwargs', {})
    if con_jac is None:
        con_jac = lambda x0, *args, **kwargs: approx_fprime(
            x0, con_fun, eps, *args, **kwargs)
    elif con_jac is True:
        con_fun = MemoizeJac(con_fun)
        con_jac = con_fun.derivative

So there is no need to check, inside the constraint() and jacobian() methods, whether the constraint function returns both the constraint value and the Jacobian.

This PR does the same for the objective. We therefore no longer need to call evaluate_fun_with_grad() inside objective() and gradient(), since the case where the objective returns a tuple containing the objective value and the gradient is already handled by the MemoizeJac decorator. I hope this makes things a bit clearer.
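For context, MemoizeJac caches the most recent (value, gradient) pair so that the value and the gradient can be requested separately while the wrapped function is only evaluated once per point. A minimal sketch of the idea (my own illustration, not SciPy's exact implementation):

import numpy as np

class MemoizeJacSketch:
    # Wraps a function returning (value, gradient) so the two pieces
    # can be queried independently without recomputing.
    def __init__(self, fun):
        self.fun = fun
        self.x = None
        self.value = None
        self.jac = None

    def _compute_if_needed(self, x, *args):
        # Re-evaluate only when called with a new point.
        if self.x is None or not np.array_equal(x, self.x):
            self.x = np.asarray(x).copy()
            self.value, self.jac = self.fun(x, *args)

    def __call__(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.value

    def derivative(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.jac

wrapped = MemoizeJacSketch(lambda x: (np.sum(x**2), 2 * x))
x = np.array([1.0, 2.0])
print(wrapped(x), wrapped.derivative(x))  # second call reuses the cache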

@moorepants (Collaborator) commented

I think we can just leave the evaluate_fun_with_grad() method and not worry about trying to deprecate.

@@ -60,8 +60,6 @@ class IpoptProblemWrapper(object):
         for more information.
     eps : float, optional
         Epsilon used in finite differences.
-    n : int, optional
-        Total number of variables.
@moorepants (Collaborator) commented on the diff

It isn't clear why this is removed.

@jhelgert (Contributor, Author) commented

This should have been removed in this commit in the last PR. Sorry for the mess!

@moorepants merged commit f517a83 into mechmotum:master on Aug 26, 2021