Support restarting Nomad without restarting nspawn containers #17
Comments
Hello @mateuszlewko, if not, this is definitely a bug and I'll fix it!
I double-checked, and the issue is also present in the latest version. I will make sure to fix this 👍
When `RecoverTask` is called, we initially tried to recover the `TaskConfig` for the given task. This was blindly copied from the nomad-driver-skeleton project, and it turns out we make no use of it at all. Since it also caused issue #17, we simply removed it. Recovering tasks when a Nomad client is restarted now works again.
@mateuszlewko I just published a new release, 0.4.1, which contains a fix for this issue.
Confirmed that it's working now. Thank you!
It seems that restarting the Nomad service (for example, when upgrading Nomad or reloading its configuration) restarts jobs run by the nspawn driver. Docker jobs stay alive and are not restarted when Nomad is restarted.
I observed the following errors in the logs:
Failed jobs are then reallocated and run fine; however, it's undesirable that they are restarted at all.
Would it be hard to support this?