Time-out option #594
The `-x` flag defining the exploration level is a way to set a trade-off between computing time and solution quality. But in some situations (think huge instances) this parameter can be hard to tune. Also, no matter the instance size, it is sometimes interesting to aim for a time-constrained solution ("give me the best solution you can get in X seconds").

We could add a `-l LIMIT` flag to pass as input the number of seconds the optimization run is allowed to span.

A few thoughts:

- The simplest way to get on with this is to add a stopping condition based on the `LIMIT` value, but I don't think it makes sense at all to stop the heuristic process of getting initial routes. This means that the actual computing time may be higher than `LIMIT` if the heuristic process has not completed by then. My guess is that in most situations this would really only happen if `LIMIT` is set to an unrealistically low value.
- Worth noticing that introducing such a parameter may change the resulting solution. Also, of course, hardware and system load while computing will impact how far we get in a given amount of time.
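To make the proposal more concrete, here is a minimal C++ sketch of such a time-limit stopping condition, under the assumptions described above. This is not VROOM's actual code: `Solution`, `initial_heuristic`, `local_search_step` and `solve_with_time_limit` are placeholder names invented for illustration. The heuristic phase runs to completion, then local search keeps improving the solution until it converges or the elapsed time exceeds the limit.

```cpp
#include <chrono>
#include <iostream>
#include <optional>

// Minimal stand-ins for illustration only; these are not VROOM types.
struct Solution {
  double cost = 1000.0;
};

// Builds initial routes; in this sketch it is never interrupted.
Solution initial_heuristic() { return Solution{}; }

// One local-search improvement pass; returns true while it keeps improving.
bool local_search_step(Solution& sol) {
  if (sol.cost <= 900.0) return false;  // pretend we converged
  sol.cost -= 1.0;
  return true;
}

Solution solve_with_time_limit(std::optional<double> limit_seconds) {
  const auto start = std::chrono::steady_clock::now();
  auto elapsed = [&] {
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
  };

  // The heuristic phase always runs to completion, so total runtime may
  // exceed the limit if this phase alone takes longer than LIMIT.
  Solution sol = initial_heuristic();

  // Local search stops when no improvement is found or the limit is hit.
  while (local_search_step(sol)) {
    if (limit_seconds && elapsed() >= *limit_seconds) {
      break;  // stopping condition based on the LIMIT value
    }
  }
  return sol;
}

int main() {
  auto sol = solve_with_time_limit(2.0);  // "best solution you can get in 2 seconds"
  std::cout << "final cost: " << sol.cost << "\n";
}
```

Run with a 2-second budget, this returns whatever the local search managed to reach within that window, which is the "best solution you can get in X seconds" behaviour described above.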
Comments

Actually, based on the number of searches per thread and the timeout, it's possible to decide how much time we want to allocate to each search. The result is still highly dependent on system load and concurrency handling, but this way all concurrent searches are equal in terms of available time. This is much better than having a few long-running searches eating up all the time while the remaining ones are stopped very early due to the timeout.
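A rough sketch of how such a per-search budget could be derived, assuming a fixed batch of searches spread over a pool of threads; `SearchPlan` and `per_search_budget` are hypothetical names, not VROOM internals:

```cpp
#include <chrono>
#include <iostream>

// Hypothetical description of the parallel search setup, for illustration only.
struct SearchPlan {
  unsigned nb_threads;
  unsigned nb_searches;  // total number of exploration searches to run
};

// Split the global time limit evenly across the searches each thread has to
// run in sequence, so no single long-running search can eat the whole budget.
std::chrono::duration<double>
per_search_budget(const SearchPlan& plan, std::chrono::duration<double> global_limit) {
  const unsigned searches_per_thread =
      (plan.nb_searches + plan.nb_threads - 1) / plan.nb_threads;  // round up
  return global_limit / searches_per_thread;
}

int main() {
  SearchPlan plan{4, 32};  // 4 threads, 32 searches overall
  auto budget = per_search_budget(plan, std::chrono::duration<double>(60.0));
  std::cout << "per-search budget: " << budget.count() << " s\n";  // 7.5 s
}
```

With 32 searches on 4 threads and a 60 s limit, every search gets roughly 7.5 s, so concurrent searches are treated equally even though actual wall-clock behaviour still depends on system load.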
Hi Julien,

This matches the way I did it in #595.

From the tests I've performed, this works as expected, with a total running time really close to the `LIMIT` value.
This should be in a working state now in #595, happy to have any feedback! |