Proposal: Periodic tasks #87
The architecture we have for Containerbuddy hasn't been particularly optimized for high performance, and forking off lots of processes at subsecond intervals might prove costly. I would suggest that if we want to allow subsecond intervals, we first do some performance testing of that design.
This also brings up the question of the semantics of the frequency. Under the current design, for tasks that run longer than their frequency, is this still the correct behavior?
I think so. It usually doesn't make sense for periodic tasks to overlap; if they do, it's probably a mistake.
Perhaps the semantics could be configurable. Consider the backup use case: if the first backup didn't finish, should we wait for it to complete, or kill it and start another? Probably the former. In the case of pushing metrics, however, if the last push didn't complete, we should probably just kill it since it's stale now anyway. We don't want to fork too many processes, so I don't think we should support scheduled tasks that overlap and continue to spawn more and more processes that never exit. Perhaps both behaviors can be accomplished with a timeout on the scheduled task (which might default to the frequency?):

```json
{
  "onScheduled": [
    { "frequency": "1s",  "command": [ "/bin/push_metrics.sh" ] },
    { "frequency": "10s", "command": [ "/bin/push_other_metrics.sh" ], "timeout": "5s" }
  ]
}
```
I like this idea. To support this we may need to update (or replace)

@justenwalker I'm planning on tackling #27 as my next major task for Containerbuddy. Do you want to take ownership of this project?
Sure 😋 🍪
Just an update @tgross: I started a WIP branch if you want to follow it. It's not ready for a PR yet, but perhaps we can discuss the implementation I'm going with. Also, I didn't split out the module yet, but I'll work on that too.
Cool, I've got https://github.com/tgross/containerbuddy/tree/gh27_metrics in the works myself, and I started with splitting the module out just as a "let's make sure that I can call it correctly" exercise. I realize you're early in the process, but you may find what you've done with
More thoughts on module split-up here: #83 (comment)
@justenwalker just because the timing is inconvenient on that first-stage refactor, I'm going to try to push it out early Monday morning so that we can use it to base our new packages on. This way it doesn't get delayed until we both end up having to rework sections of metrics and tasks to suit. And, as noted in #83, it'll give us a chance to make sure it's the right abstraction before refactoring the rest of the modules.
Actually that turned out to be a much smaller intervention than I'd thought, so I've opened #118.
Once we get a green build on master back from TravisCI, I'll cut release 2.1.0 with this in it. |
Currently, we have `preStart`, `preStop`, and `postStart` events. As the container is running, though, it may be useful to have periodic tasks execute to report status to external systems, separate from the health checks that report to the service discovery backend. The primary use case would be a logical extension point for push-style metrics without having to build any backends into Containerbuddy directly. (See #27 for discussion.)
Configuration may look something like:
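One possible shape, sketched from the `onScheduled` example discussed in this thread (field names and the script path are illustrative, not a committed schema):

```json
{
  "onScheduled": [
    { "frequency": "30s", "command": [ "/bin/report_status.sh" ], "timeout": "10s" }
  ]
}
```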
Some other things to consider: