Connect Envoy Command #4735
Conversation
Good for now but your CLI needs tests :)
Willing to punt that a bit to get to a clean tree though.
I did leave a couple comments worth thinking about.
Going to punt on the CLI tests for now because this PR chain is blocking a couple other tasks.
Hey - trying to understand how envoy maps traffic from a local port to a service destination. It seems like that is configured in the example config. Is this scoped to the agent, i.e. more or less global?
Hey @roboll. We'll be writing up the real docs to go with this over the next few days so that should help.

Currently, as in this example, the config is passed as part of the (proxy) service registration, so it's entirely local - every envoy sidecar could configure upstreams differently and on different local ports etc. In practice we'd expect this model to be used with a config management tool or scheduler doing that registration, so it would be "centrally" managed that way. FYI, we have plans in the near future to make this kind of stuff more centrally managed and pushed out via Consul itself, but that won't be in this release.

Its scope is local to the agent and the proxy instance - you may have many different Envoys on a single node/agent with different configs and identities by registering them as different services.

Finally, we provide an "escape hatch" mechanism where you can provide a completely custom Envoy listener config and we'll just inject certs and the authz filter. Later we'll support richer config options in a nicer/more native way, but this should allow more flexibility for advanced usages right away - that is still provided through the service registration. Details to come soon in docs!
@banks Cool, thanks. As far as having port mappings scoped to the sidecar as opposed to the agent, 👍 that's what I hoped. I like the explicit nature of shipping a config file with apps that declares their dependencies and how they should be accessed (over which local port, I mean). Love the escape hatch for advanced use cases. 💯
This seems to be running the consul agent with this config, so I assumed it was scoped to the agent. Whereas here, the envoy sidecar is just referencing config that the agent has:
Assuming agent-per-node, we'll definitely have many envoys per agent, and it seems like the agent will need to be aware of all of the service definitions with the local port mappings. I'm not sure what the commands there should look like to scope it to the sidecar only (e.g. pod A may declare a dependency on service C at port 1000, and pod B may declare a dependency on service C at port 2000). If docs are coming this week I can wait for them though. 😄
That config is only setting the agent's gRPC server port (since it's not enabled by default); actually, since this PR landed, that's redundant. I'll wait for docs to go into more detail, but for many services and sidecars per node it would look something like this:
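For illustration only, here's a rough sketch of that kind of registration. The service names, ports, file path, and the `sidecar_service` shorthand are assumptions based on the released Connect docs, not taken from this PR:

```sh
# Hypothetical config dropped in by a config management tool or scheduler.
# Each service gets its own sidecar proxy registration; with an empty
# sidecar_service block the proxy's public port is auto-selected from the
# configured sidecar port range.
cat > /etc/consul.d/services.hcl <<'EOF'
services {
  name = "api"
  port = 8080
  connect { sidecar_service {} }
}

services {
  name = "cache"
  port = 8081
  connect { sidecar_service {} }
}

services {
  name = "db"
  port = 8181
  connect { sidecar_service {} }
}
EOF
```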
That defines 3 different service instances and 3 different sidecar proxy registrations. Then each proxy would be run with:
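Something along these lines, for example (the `-sidecar-for` and `-admin-bind` flags are from the released command's docs and may differ slightly from this PR's first cut):

```sh
# One invocation per sidecar, each as its own process. The admin listener is
# moved off the default for the second and third Envoys so they don't collide
# on the same host.
consul connect envoy -sidecar-for api
consul connect envoy -sidecar-for cache -admin-bind localhost:19001
consul connect envoy -sidecar-for db -admin-bind localhost:19002
```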
And each envoy will get configured with whatever defaults are chosen for the specific sidecar instance. If upstreams are specified, they are specific to a proxy instance. In this magical-defaults case, the proxy public TLS ports themselves are being auto-selected from a configurable range (21000, 21500).

Note that in this model, anything that can talk to the upstream listeners of a proxy is effectively being granted permission to "act" as that service per the security model, so you'd only deploy them unprotected on a node where all services share common access rights and can be treated as one from a security perspective, or you need to deploy the sidecar and application into network namespaces or otherwise use iptables rules to ensure only the specific app and sidecar can talk to each other.

Hope that helps.
I think I left out a critical piece of information: we're running on Kubernetes, so our consul agent doesn't know what services the pods on the host could potentially be running. In your example it appears it has to. It seems there would need to be a consistent global service config that all agents are configured with, as opposed to each consumer (sidecar) choosing how to expose things. Is my understanding correct? (Feel free to tell me to wait for the docs if this is explained in there.) FWIW: an alternative model (and what I was actually looking for) might be something like…
@roboll This is exactly the problem that our Connect injector will solve via https://github.com/hashicorp/consul-k8s (and installable via our helm chart). And why we did #4732, so that pods can have a postStart hook to register and a preStop hook to deregister, and the sidecar…
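As a rough sketch of that lifecycle-hook flow (the file path, service ID, and exact `consul services` usage here are assumptions for illustration, not taken from consul-k8s):

```sh
# postStart: register the pod's service (and its sidecar) with the local agent
# from a service definition file mounted into the pod.
consul services register /consul/config/service.hcl

# preStop: deregister it again so the catalog doesn't keep a dead instance.
consul services deregister -id=web-1234
```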
Yeah, the workflow Mitchell describes should enable what you envisage. The reason we do it this way, rather than just requiring a config file for each Envoy invocation directly, is that later we want to support much more centralised config via Consul, so we want the envoy process to declare its purpose/identity and let Consul decide what config it should have. The current config during instance registration will always work too though, for cases where different instances of the same service actually need different configs, but Mitchell's k8s work should get you pretty close right away - that should all be documented and ready within a week or two.
Cool, glad to see it has been considered. I'm still working on grokking the consul-k8s repo, but if anything isn't covered I'll open an issue. Thanks both!
* Plumb xDS server and proxycfg into the agent startup
* Add `consul connect envoy` command to allow running Envoy as a connect sidecar.
* Add test for help tabs; typos and style fixups from review
Note: this PR targets merging into f-envoy and depends directly on #4731. This is part of a series of PRs that were developed together but split for easier review.
This PR includes:
* Plumbing the `proxycfg.Manager` (Proxy Config Manager #4729) and xDS server (xDS Server Implementation #4731) into the agent startup
* gRPC logs from the `gRPC.Server`
* The `consul connect envoy` command

Not Included

* `proxycfg.Manager` actually removes that, since any resources needed by currently registered proxies are constantly blocked on and so won't be considered inactive. So we could set TTLs much more aggressively?
* Serving from `cached` data: not a show stopper but we'll gain full fail-static behaviour when we do this.

TODO:
Todo items before landing:
Demo 🎉
Register a couple of services with sidecar proxies:
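The registration itself isn't shown above; a sketch along these lines (the file name, ports, and `sidecar_service` syntax are assumptions based on the released docs rather than this PR) sets up the rest of the demo:

```sh
# Hypothetical envoy_demo.hcl: "web" with a sidecar whose "db" upstream is
# bound on local port 9191, and "db" with a plain sidecar.
cat > envoy_demo.hcl <<'EOF'
services {
  name = "web"
  port = 8080
  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "db"
            local_bind_port  = 9191
          }
        ]
      }
    }
  }
}

services {
  name = "db"
  port = 8181
  connect { sidecar_service {} }
}
EOF
```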
Run the agent with gRPC enabled:
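Possibly something like this (the `-hcl` fragment and port 8502 are assumptions; `-dev` plus a config file is just one way to load the registrations above):

```sh
# -dev runs a single in-memory agent. The -hcl fragment enables Connect and
# the agent's gRPC (xDS) port, which `consul connect envoy` talks to.
consul agent -dev -config-file envoy_demo.hcl \
  -hcl 'connect { enabled = true } ports { grpc = 8502 }'
```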
In another terminal, run the "web" envoy sidecar (this assumes envoy 1.7.1 or later is installed in your $PATH):
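Presumably something like (using the released command's `-sidecar-for` flag):

```sh
# Bootstraps Envoy and execs it as the sidecar registered for "web".
consul connect envoy -sidecar-for web
```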
And in another terminal run the db sidecar (note that the admin port needs overriding if you are on the same host, since it needs to bind its own admin listener):
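For example (the `-admin-bind` flag and port are assumptions; any free local port avoids the clash with the first Envoy's default admin listener):

```sh
# Second Envoy on the same host, so move its admin listener off the default
# 127.0.0.1:19000.
consul connect envoy -sidecar-for db -admin-bind localhost:19001
```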
Note, you can add `-- -l debug` to the envoy invocations to pass arbitrary flags to envoy - this one will show debug level logs including all the config received via xDS.

In another terminal run an echo server:
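For instance (port 8181 matches the `db` port assumed in the registration sketch above):

```sh
# A trivial TCP echo server standing in for the "db" service.
socat -v tcp-l:8181,fork exec:"/bin/cat"
```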
You should now be able to connect to it as the "web" app by talking to the web sidecar's upstream:
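With the ports assumed above, that is the web sidecar's local upstream listener on 9191; anything typed should be echoed back via web sidecar → mTLS → db sidecar → socat:

```sh
nc 127.0.0.1 9191
hello
hello
```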
You can verify connect AuthZ by closing the netcat session, and using:
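Likely an intention denying web → db, after which a fresh netcat session should be rejected by the db sidecar's authorization check:

```sh
consul intention create -deny web db
```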
Notice that unlike with the built-in proxy, netcat won't return until you actually try to send something - this is just how Envoy handles TCP proxying and TLS handshakes it seems. In the built-in proxy we explicitly force an immediate handshake.
You can get back to a working state with:
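Presumably by removing that intention again, e.g.:

```sh
consul intention delete web db
```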