
Allow service specific values for health checks and service discovery prioritization #418

Closed
janitha opened this issue Oct 22, 2014 · 8 comments

Comments

@janitha

janitha commented Oct 22, 2014

At the moment, health checks are binary (passing/failing), which is great for finding available services, but it burdens the application with handling any kind of load balancing.

If the health checks could also incorporate a way for the application to respond with an additional value during the check (such as a float), that value could be stored alongside the health check for the service. Then, during service discovery, the available services could be ordered according to this additional value. The value would be application specific.

Since the Consul agent is already running on the machine anyway and has visibility into CPU/memory load, a similar query could be provided so that service discovery can also order the available services by those metrics.
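The proposal above can be sketched as a TTL-style check update whose Output field carries an application-supplied load value. This is a hypothetical illustration, not an existing Consul feature: the `load=` convention and the helper names are assumptions; Consul treats check output as an opaque string, so the float would simply ride along as text in the body sent to `PUT /v1/agent/check/update/<check_id>`.

```python
import json

def build_check_update(status, load_value):
    """Build the JSON body for a check update carrying a load metric
    in the free-form Output field (the load= convention is assumed)."""
    return json.dumps({"Status": status, "Output": f"load={load_value:.2f}"})

def parse_load(output):
    """Recover the load value a discovery client would sort on."""
    for part in output.split():
        if part.startswith("load="):
            return float(part[len("load="):])
    return float("inf")  # unknown load sorts last

body = build_check_update("passing", 0.37)
print(parse_load(json.loads(body)["Output"]))  # 0.37
```

A discovery client that understands the same convention could then parse this value back out and use it to rank otherwise-equivalent healthy instances.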

@sethvargo
Contributor

This seems like a reasonable idea to me, but we would need to do so in a backwards compatible way :)

@oliora
Contributor

oliora commented Oct 22, 2014

A check already has an "Output" field, which can be filled via the ?note parameter.
And any additional data can easily be stored in KV.
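The KV-store route mentioned above might look like the following sketch. The `service-load/<service>/<node>` key layout is an assumption for illustration, not a Consul convention; what is real is that KV reads return the stored value base64-encoded in the entry's "Value" field.

```python
import base64

def kv_key(service, node):
    """Conventional (assumed) key under which a node's load metric lives."""
    return f"service-load/{service}/{node}"

def decode_kv_value(entry):
    """Decode the base64 Value field of a KV read response entry."""
    return float(base64.b64decode(entry["Value"]).decode())

entry = {
    "Key": kv_key("web", "node1"),
    "Value": base64.b64encode(b"0.82").decode(),
}
print(decode_kv_value(entry))  # 0.82
```

As later comments in the thread note, the drawback of this approach is the write load it puts on the consistent KV store for data that is ephemeral anyway.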

@janitha
Author

janitha commented Oct 22, 2014

The note field seems to be the ideal mechanism, but its intended purpose seems to be a human-readable reason rather than a metric to sort on for load balancing. I suppose the application doing service discovery could sort on the note fields, but this would promote a kind of overloading of the note's meaning.
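The "overloaded note" workaround discussed here can be sketched as a client-side sort of discovery results by a metric smuggled into the check output. The entry shape below mirrors what a health query for a service returns (a list of node/service/checks objects); the `load=` convention is an assumption for illustration.

```python
def load_of(entry):
    """Extract an assumed load= metric from an entry's check outputs."""
    for check in entry.get("Checks", []):
        out = check.get("Output", "")
        if out.startswith("load="):
            return float(out[len("load="):])
    return float("inf")  # entries without the metric sort last

def order_by_load(entries):
    """Order discovery results so the least-loaded instance comes first."""
    return sorted(entries, key=load_of)

entries = [
    {"Node": {"Node": "a"}, "Checks": [{"Output": "load=0.9"}]},
    {"Node": {"Node": "b"}, "Checks": [{"Output": "load=0.1"}]},
]
print([e["Node"]["Node"] for e in order_by_load(entries)])  # ['b', 'a']
```

This works, but it illustrates janitha's objection: every consumer must know the ad-hoc encoding, because the note carries no schema.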

@drauschenbach

+1

@mfischer-zd
Contributor

We have a use case for this around dynamic topology discovery. An example would be a MySQL service that could change from a master role to a slave role, or could serve a different shard after some time. Service tags are currently the best place to describe topology information, but they're not dynamic.

Currently, if a node is involved in a service topology change (e.g. MySQL switches from master to slave), your options are:

  1. Make an external watcher replace the config file describing the service with new tags on change, and reload Consul;
  2. Make an external watcher update the agent configuration using the HTTP API.

These options seem to do unnecessary duplicate work, and they are racy: they could de-publish the service from the node, at least temporarily.
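Option 2 above can be sketched as an external watcher re-registering the service with updated tags via the agent's service-register endpoint (re-registering with the same ID updates the entry in place). The service names and role tags here are illustrative; as noted, the transition window is where the raciness lives.

```python
import json

def reregister_payload(service_id, name, port, role):
    """Body for PUT /v1/agent/service/register, swapping the role tag
    (e.g. 'master' -> 'slave' after a failover)."""
    return json.dumps({
        "ID": service_id,
        "Name": name,
        "Port": port,
        "Tags": [role],
    })

body = reregister_payload("mysql-1", "mysql", 3306, "slave")
print(json.loads(body)["Tags"])  # ['slave']
```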

Any other options I'm missing?

@pepov

pepov commented Mar 17, 2015

+1 for having some kind of load-value propagation option, but it would put a lot more write load on the servers, which is currently kept under control by check_update_interval. I don't even know whether check notes are updated between these intervals currently.
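For reference, check_update_interval is an agent-level option that batches non-status changes to check output before they are synced to the servers; a minimal agent config fragment would look like this (the value shown is illustrative):

```json
{
  "check_update_interval": "5m"
}
```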

I'm wondering whether some Serf-event-based mechanism would be better suited to this problem.

@pepov

pepov commented Apr 18, 2015

Another idea came to mind: what about a special check type that receives a specific service name (e.g. the associated load balancer service) as an argument and forwards the check values (about the local service's performance) to that service's endpoints? This would avoid putting write load from these performance metrics on the KV store, since this is not the kind of data that needs consistent-store properties (it is periodically regenerated anyway).

The load balancer service could be registered anywhere, even on any Consul agent, so Consul could provide built-in integration for it. Of course, a custom service could be registered to receive these updates as well.

I can even imagine Consul agents gathering any kind of performance data locally and forwarding it to central services, enabling much better-informed balancing or alerting decisions, but in my opinion this must be independent of the KV store and should be completely abstracted away.
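The forwarding mechanism proposed above might be sketched like this: an agent-local collector batches performance samples and pushes them directly to the receiving service's endpoints, bypassing the KV store. The payload shape and the pluggable `send` transport are assumptions for illustration, not anything Consul provides.

```python
import json
import time

def build_metrics_payload(node, service, samples):
    """Batch of local performance samples destined for a receiver service."""
    return json.dumps({
        "node": node,
        "service": service,
        "ts": int(time.time()),
        "samples": samples,  # e.g. recent latency or load readings
    })

def forward(payload, endpoints, send):
    """Push the payload to every receiver endpoint via a caller-supplied
    send(endpoint, payload) function (HTTP POST, UDP, etc.)."""
    for ep in endpoints:
        send(ep, payload)

sent = []
forward(build_metrics_payload("node1", "web", [0.2, 0.3]),
        ["10.0.0.5:9000"], lambda ep, p: sent.append(ep))
print(sent)  # ['10.0.0.5:9000']
```

Keeping the transport pluggable matches the comment's point that this data needs no consistency guarantees and should stay abstracted away from the KV store.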

@slackpad
Contributor

slackpad commented May 1, 2017

Closing this as a dup of #252.

@slackpad slackpad closed this as completed May 1, 2017
duckhan pushed a commit to duckhan/consul that referenced this issue Oct 24, 2021
* Remove support for the -default-protocol and -enable-central-config flags passed to the consul-k8s inject-connect command. These flags would cause Consul to write a service-defaults config entry that would set the protocol of the service.

* Remove support for the consul.hashicorp.com/connect-service-protocol annotation. Connect pods that have this annotation will not be injected, and the injector will return an error.