Allow service specific values for health checks and service discovery prioritization #418
Comments
This seems like a reasonable idea to me, but we would need to do so in a backwards compatible way :)
A Check already has an "Output" field, which can be filled via the ?note parameter.
The note field seems to be the ideal mechanism, but its intended purpose seems to be a human-readable reason, rather than a metric to be sorted on for load-balancing purposes. I suppose the application doing service discovery can sort on the note fields, but this would promote a kind of overloading of the note's meaning.
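To make the trade-off concrete, here is a minimal Go sketch of the workaround described above: a discovery client parsing a numeric load value out of the check's Output field and sorting on it. The `ServiceEntry` shape is a simplified stand-in for illustration, not Consul's actual API types.

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// ServiceEntry is a simplified stand-in for one result of a health query;
// the real Consul API returns richer structures.
type ServiceEntry struct {
	Node   string
	Output string // check output, here overloaded to carry a load metric
}

// sortByOutputLoad orders entries by a float parsed from the check's
// Output field, lowest load first. Entries whose Output is not numeric
// (i.e. ordinary human-readable notes) sort last.
func sortByOutputLoad(entries []ServiceEntry) {
	sort.SliceStable(entries, func(i, j int) bool {
		li, errI := strconv.ParseFloat(entries[i].Output, 64)
		lj, errJ := strconv.ParseFloat(entries[j].Output, 64)
		if errI != nil {
			return false // non-numeric i never sorts before j
		}
		if errJ != nil {
			return true // numeric i sorts before non-numeric j
		}
		return li < lj
	})
}

func main() {
	entries := []ServiceEntry{
		{Node: "a", Output: "0.9"},
		{Node: "b", Output: "0.2"},
		{Node: "c", Output: "healthy"}, // human-readable note, not a metric
	}
	sortByOutputLoad(entries)
	for _, e := range entries {
		fmt.Println(e.Node, e.Output)
	}
}
```

Pushing non-numeric notes to the end keeps the sort from breaking when some services use Output for its documented, human-readable purpose, which is exactly the overloading concern raised above.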
+1
We have a use case for this around dynamic topology discovery. An example would be a MySQL service that could change from a master role to a slave role, or could serve a different shard after some time. Service tags are currently the best place to describe topology information, but they're not dynamic. Currently, if a node is involved in a service topology change (e.g. MySQL switches from master to slave), your options are:
These options seem like they're doing unnecessary duplicate work, and are racy as they could de-publish the service from the node, at least temporarily. Any other options I'm missing?
+1 for having some kind of load-value propagation option, but it will put a lot more write load on the servers, which is currently kept under control by check_update_interval. I don't even know whether check notes are updated between these intervals currently. I'm wondering whether some Serf-event-based mechanism would be better suited to this problem.
Another idea came to mind: what about a special check type that receives a service name (e.g. the associated load-balancer service) as an argument and forwards the check values (about the local service's performance) to that service's endpoints? This would avoid putting write load from these performance metrics on the KV store, because this is not the kind of data that needs consistent storage properties; it is periodically regenerated. The load-balancer service could be registered anywhere, even on any consul agent, so Consul could provide a built-in integration for it. Of course, a custom service could be registered to receive these updates as well. I can even imagine consul agents gathering any kind of performance data locally and forwarding it to central services, enabling much more informed balancing or alerting decisions, but in my opinion this must be independent of the KV store and should be completely abstracted away.
Closing this as a dup of #252. |
At the moment, health checks are binary (passing/failing), which is great for finding available services but burdens the application with handling any kind of load balancing.
If health checks were also to incorporate a way for the application to respond with an additional value during the check (such as a float), this value could be stored alongside the health check for the service. Then, during service discovery, the available services could be ordered according to this additional value. The value would be application specific.
Since the consul agent is already running on the machine anyway and has visibility into CPU/memory load, a similar query could be provided so that service discovery also orders the available services by those metrics.
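Putting the proposal together, a discovery query could first filter on today's binary health status and then order the survivors by the application-specific value. This is a hypothetical sketch: the `Load` field below is the proposed addition and does not exist in Consul's API.

```go
package main

import (
	"fmt"
	"sort"
)

// Instance is a hypothetical discovery result carrying both the binary
// health status Consul has today and the proposed app-specific value.
type Instance struct {
	Node    string
	Passing bool
	Load    float64 // hypothetical field proposed in this issue
}

// discover filters out failing instances (today's behavior) and then
// orders the rest by the application-supplied load value (the proposal).
func discover(all []Instance) []Instance {
	var out []Instance
	for _, in := range all {
		if in.Passing {
			out = append(out, in)
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Load < out[j].Load })
	return out
}

func main() {
	got := discover([]Instance{
		{Node: "a", Passing: true, Load: 0.8},
		{Node: "b", Passing: false, Load: 0.1}, // failing: excluded regardless of load
		{Node: "c", Passing: true, Load: 0.3},
	})
	for _, in := range got {
		fmt.Println(in.Node, in.Load)
	}
}
```

Whether `Load` comes from the application's check response or from agent-observed CPU/memory, the query shape is the same; only the source of the value differs.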