
Representing gradient of constraints as tangents on a power manifold #185

Closed
mateuszbaran opened this issue Dec 7, 2022 · 5 comments · Fixed by #386


@mateuszbaran
Member

It would be nice to support a vectorized gradient of the constraints, i.e. instead of passing a vector of functions for the gradients, pass a single function that returns all gradients as one tangent vector on a power manifold, in a user-selected representation. It would likely be faster and would make interfacing via Optimization.jl easier; see https://docs.sciml.ai/Optimization/stable/API/optimization_function/ .
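A rough sketch of what I mean, using Manifolds.jl (the base manifold, the constraints, and the exact signature are placeholders, not a proposal for the final API):

```julia
using Manifolds

M = Sphere(2)   # placeholder base manifold
n = 2           # number of inequality constraints
N = PowerManifold(M, NestedPowerRepresentation(), n)

# Instead of a vector of n gradient functions, a single function that
# returns all constraint gradients as one tangent vector on the power
# manifold N (here in the nested representation):
function grad_g(M, p)
    return [
        project(M, p, [1.0, 0.0, 0.0]),  # placeholder gradient of g_1
        project(M, p, [0.0, 1.0, 0.0]),  # placeholder gradient of g_2
    ]
end

p = [0.0, 0.0, 1.0]
X = grad_g(M, p)  # one tangent vector on N at the point fill(p, n)
```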

@kellertuer
Member

Currently there are

  • a set of functions, each returning one gradient / tangent vector,
  • one function returning a vector of tangent vectors, which is what you want, I think? The problem with this variant is the effort of evaluating it when you only need a single gradient, and that would equally be the case if you turned it into a gradient on the power manifold. So I would always prefer the first one.

Besides that, the second variant would just need a change of representation to be one tangent vector on a power manifold (it already is one, in the nested sense).
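To make the two variants concrete, a small sketch (with placeholder constraint gradients):

```julia
using Manifolds

M = Sphere(2)
p = [0.0, 0.0, 1.0]

# Variant 1: a vector of functions, one per constraint gradient;
# each gradient can be evaluated on its own.
grad_g1(M, p) = project(M, p, [1.0, 0.0, 0.0])  # placeholder
grad_g2(M, p) = project(M, p, [0.0, 1.0, 0.0])  # placeholder
grad_g = [grad_g1, grad_g2]
X1 = grad_g[1](M, p)  # evaluates only the first gradient

# Variant 2: one function returning a vector of tangent vectors;
# asking for a single gradient still computes all of them.
grad_g_all(M, p) = [grad_g1(M, p), grad_g2(M, p)]
X1_again = grad_g_all(M, p)[1]
```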

@mateuszbaran
Member Author

I mostly meant the array power representation of the power manifold, but maybe Optimization.jl would accept Jacobians represented by nested arrays; I will have to check.
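For reference, the two representations would look roughly like this (a sketch, on Sphere(2) with two constraints):

```julia
using Manifolds

M = Sphere(2)
n = 2

# Nested representation: points and tangent vectors are vectors of
# n points / tangent vectors, i.e. a "nested array" Jacobian.
N_nested = PowerManifold(M, NestedPowerRepresentation(), n)

# Array representation: points and tangent vectors are single 3×n
# arrays, i.e. the full Jacobian as one matrix.
N_array = PowerManifold(M, ArrayPowerRepresentation(), n)
```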

@kellertuer
Member

Sure. The reason it is the way it is, is that both currently implemented variants (yours would then be a third) yield the same result, and the one with a vector of functions is quite beneficial for the gradient of the exact penalty method, where not all gradients need to be evaluated but only a (maybe even small) subset. That is where a single large function might be slow.
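Sketched, with placeholder constraints (the activity test is only illustrative):

```julia
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
g(M, p) = [p[1] - 0.5, p[2] - 0.5]  # placeholder constraints g_i(p) ≤ 0
grad_g = [
    (M, p) -> project(M, p, [1.0, 0.0, 0.0]),  # placeholder gradient of g_1
    (M, p) -> project(M, p, [0.0, 1.0, 0.0]),  # placeholder gradient of g_2
]

# Only the active constraints contribute to the exact penalty gradient,
# so with a vector of functions only their gradients are evaluated:
g_vals = g(M, p)
active = findall(>=(0), g_vals)                    # here only constraint 1
penalty_grads = [grad_g[i](M, p) for i in active]  # one call instead of n
```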

@mateuszbaran
Member Author

I see. I think I will skip such constraints in OptimizationManopt.jl for the first version.

@kellertuer
Member

To add to this: since I am rewriting parts of the constrained objective for another task anyway, it might also be nice to keep f, g, and h (the cost, the inequality constraints, and the equality constraints) as their own objectives within the constrained objective, so that it is a bit more flexible which information we store for each of them. Most prominently a Hessian of f, but second-order information for g and h would also be “nice to have”.
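Schematically, something like this (all names hypothetical, not the final design):

```julia
# Hypothetical sketch: the constrained objective keeps f, g, and h as
# individual objectives, each optionally carrying first- and second-order
# information.
struct SketchObjective{TC,TG,TH}
    cost::TC       # the function itself
    gradient::TG   # its gradient, or nothing
    hessian::TH    # its Hessian, or nothing
end

struct SketchConstrainedObjective{TF,TG,TH}
    objective::TF              # f as its own objective
    inequality_constraints::TG # g as its own objective
    equality_constraints::TH   # h as its own objective
end
```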
