Biased cost evaluation due to double computations #572

Closed
jcoupey opened this issue Sep 13, 2021 · 1 comment · Fixed by #573
Comments

jcoupey (Collaborator) commented Sep 13, 2021

When trying to add unassigned jobs to routes from try_job_additions, we weigh the cost of adding a task against the "regret" of not doing so, which is determined by the cost it would take to insert the same task into another route:

```cpp
const double eval = static_cast<double>(addition_cost) -
                    regret_coeff * static_cast<double>(regret_cost);
```

All is well on the whiteboard. Now, when several tasks can only be assigned to a single route, their regret_cost is std::numeric_limits<Gain>::max(). It turns out that small differences in addition_cost can then result in the same evaluation, courtesy of the joys of floating-point arithmetic.
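Here is a minimal standalone sketch of that precision loss, assuming Gain is a 64-bit signed integer and using a regret coefficient of 1 (both assumptions for illustration): two addition costs that differ by 500 cost units yield the exact same double evaluation.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
  using Gain = std::int64_t; // assumption: Gain is a 64-bit signed integer

  const Gain regret_cost = std::numeric_limits<Gain>::max();
  const double regret_coeff = 1.0;

  // Two addition costs differing by 500 cost units.
  const Gain addition_cost_a = 1000;
  const Gain addition_cost_b = 1500;

  const double eval_a = static_cast<double>(addition_cost_a) -
                        regret_coeff * static_cast<double>(regret_cost);
  const double eval_b = static_cast<double>(addition_cost_b) -
                        regret_coeff * static_cast<double>(regret_cost);

  // Around 9.2e18 the spacing between consecutive doubles is 1024, so the
  // 500-unit difference in addition costs is rounded away and both
  // evaluations collapse to the same value.
  std::cout << std::boolalpha << (eval_a == eval_b) << '\n'; // prints true
}
```

At that point the comparison between candidates no longer reflects the actual addition costs, so the insertion picked can depend on arbitrary tie-breaking.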

The fun part is that simply scaling the costs can trigger different behaviors (as I discovered while working on #555): it's then possible to get a different insertion choice in try_job_additions, leading to a different solving path and solution.

I don't think this is too critical as:

  • it only happens in the specific context of several tasks being only assignable to a single route;
  • it's "just" yet another sub-optimal choice, which the local search often fixes later on in the tests I've made.

Yet we should find a way to fix this, and the impact on #555 actually feels embarrassing: simply scaling the costs should not lead to a different solution.

jcoupey added this to the v1.11.0 milestone Sep 13, 2021
jcoupey (Collaborator, Author) commented Sep 14, 2021

I don't think we can easily solve the problem by picking another arbitrary high value here. Reducing the default will only mitigate the problem, and tuning it to cope with any cost magnitude seems out of reach. So an instance-based value would probably make more sense. Fortunately, we already compute an upper bound for solution cost. For now it's only used to prevent overflows while solving (see #50), but it's actually a perfect candidate because:

  • it is a (quite loose) upper bound on any addition cost, so it will take over easily in the above case;
  • it is many orders of magnitude smaller than std::numeric_limits<Gain>::max(), so I expect the double-precision bias to vanish.
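As a rough sketch of the idea (not the actual implementation), the per-job regret could be seeded with that instance-based bound instead of the type's maximum; init_regrets and cost_upper_bound below are hypothetical names used for illustration:

```cpp
#include <cstdint>
#include <vector>

using Gain = std::int64_t; // assumption: Gain is a 64-bit signed integer

// Hypothetical sketch: seed regrets with an instance-based upper bound on
// solution cost rather than std::numeric_limits<Gain>::max().
std::vector<Gain> init_regrets(std::size_t nb_jobs, Gain cost_upper_bound) {
  // For jobs only insertable in a single route, this bound still dominates
  // any real addition cost, yet it is small enough that the double
  // subtraction in try_job_additions keeps cost differences distinguishable.
  return std::vector<Gain>(nb_jobs, cost_upper_bound);
}
```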
