Describe the bug
The signals page in the docs says that sending a HUP signal is a way to reload tags. Changing tags is currently the only way to take a node out of service for maintenance (see #239). When I send a HUP signal via kill -HUP <pid>, the Dkron log shows that the configuration was in fact reloaded, however the Dkron Dashboard shows the old tag value, and jobs are scheduled on my node even though the tag criteria should preclude it.
The only way I know of to take a Dkron node out of service is to stop the dkron service, which forcibly terminates all of its running jobs.
To Reproduce
Steps to reproduce the behavior:
1. Configure a tag in /etc/dkron.yml, such as schedule_jobs_enabled: true
2. Start Dkron
3. Schedule jobs with the tag criterion schedule_jobs_enabled:1
4. Remove the tag from /etc/dkron.yml
5. Tell Dkron to reload its config with kill -HUP <pid>
6. Go to the Dkron Dashboard, find the current node in the node list, and observe that the old tag still exists.
Expected behavior
Tags can be changed without killing the Dkron process or its running jobs, so that a node can be drained of jobs and taken out of service.
I have a similar problem. If you send a reload (HUP signal) to the Dkron master, it picks up the new configuration, but jobs are no longer executed. It is related to tags: the Dkron master no longer sees agents with the tag from its config, even though they exist.
The only workaround is to change the master.
We are experiencing the same issue with version dkron-pro-3.1.3-1.x86_64.
Is there any update on this topic?
It would be great to be able to reload or modify the tags without an entire stop and start of the dkron service.
Specifications:
Additional context
In the original commit https://github.com/distribworks/dkron/pull/143/files on Jun 15, 2016, any reload of the config was followed by propagating the tag changes to Serf. By the time this code was relocated to cmd/agent.go in Apr 2018 (4c71145#diff-8465439516f3cfcc6e18a5601c91e491), that propagation step was gone.