Add example using grpc and http2 #18
Comments
From @aledbf on December 30, 2016 3:24 Something like this should work: https://github.com/caiofilipini/grpc-weather/
|
From @philipithomas on June 28, 2017 21:58 Can a gRPC server listen on port 80? More specifically, how can ssl-passthrough be configured for port 80? |
From @aledbf on June 28, 2017 21:59 @philipithomas I just answered this in your issue :) |
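For context, since the actual answer lives in the other issue: with the nginx ingress controller, SSL passthrough has to be switched on for the controller itself before the per-Ingress annotation (shown in the next comment) has any effect, and because passthrough works by inspecting the SNI of the TLS handshake it only applies to TLS traffic on port 443, not to plaintext port 80. A rough sketch of the controller side, with a hypothetical image tag:
containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0   # hypothetical version
  args:
  - /nginx-ingress-controller
  # Passthrough is off by default; without this flag the ssl-passthrough annotation is ignored.
  - --enable-ssl-passthrough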
From @nlamirault on September 6, 2017 16:06 I would like to use an nginx ingress controller to expose a grpc-gateway service. The gRPC services are on 8080 and the REST gateway on 9090.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
  name: diablo-http
  namespace: nimbus
spec:
  rules:
  - host: diablo.caas.net
    http:
      paths:
      - path: /
        backend:
          serviceName: diablo
          servicePort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-passthrough: "true"
  name: diablo-grpc
  namespace: nimbus
spec:
  rules:
  - host: diablo-rpc.caas.net
    http:
      paths:
      - path: /
        backend:
          serviceName: diablo
          servicePort: 8080
An HTTP request to diablo.caas.net works fine, but the CLI, which uses the gRPC backend, is not working with diablo-rpc.caas.net. |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
Any luck getting this to work, @bowei? I am having the same issue. |
I am also interested in exposing internal gRPC services. Looks like some work to support HTTP/2 is already in progress here :) |
cc @agau4779 |
Is there anyone who can speak to the priority of completing this, or anyone who can provide a working example since #146 has been merged? I am struggling to configure HTTPS (HTTP/2) ingress on GKE in front of the Cloud Endpoints ESP for a gRPC service. To clarify, I'm trying to achieve the following architecture: client → HTTPS LB (HTTP/2) → Cloud Endpoints ESP → gRPC service.
Does anything seem wrong or unreasonable with that? I have referred to the existing documentation and samples.
At best, I am able to get ESP working with my gRPC service and can connect via the pod IPs on the private network. The health check created by the ingress does not seem to be working. Is there any way to configure it for HTTP/2 or TCP instead of HTTP? I would also welcome any alternative suggestions, including manually creating an LB or anything I might orchestrate with Terraform. Thank you. |
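For anyone landing here later: GKE eventually grew a way to override the inferred health check through a BackendConfig attached to the Service. This is only a sketch with made-up names, and it assumes a cluster recent enough to have the cloud.google.com/v1 BackendConfig CRD; note that the HTTP(S) LB offers HTTP, HTTPS and HTTP2 health checks, not TCP.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: esp-grpc-hc            # hypothetical name
spec:
  healthCheck:
    type: HTTP2                # HTTP, HTTPS or HTTP2; TCP is not an option here
    requestPath: /healthz      # hypothetical path served by the backend
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: esp-grpc               # hypothetical name
  annotations:
    # Attach the BackendConfig above to this Service's ports.
    cloud.google.com/backend-config: '{"default": "esp-grpc-hc"}'
spec:
  type: NodePort
  selector:
    app: esp-grpc              # hypothetical label
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080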
Well, after much knob twiddling, I was able to access a gRPC + Cloud Endpoints (ESP) service through an HTTPS LB. I ended up configuring everything manually via the web console, but will probably write a script so I can at least replicate the process. A few notes that may help others:
Unfortunately, I was never able to make an Ingress work. |
@agau4779 -- did we ever update the OSS docs? |
Not yet, in progress. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
@bgetsug I was able to override the LB health check to point to the ESP http_port, and the probe then sees it as healthy. However, it gets overridden (I guess by GLBC) after a while, back to the main service port (HTTP/2). I assume GLBC sets the probe to Ingress/spec/backend/servicePort and, hence, any other probe will not really work. I was not able to figure out how the health probe is set by GLBC. After shaving yaks trying to build GLBC (which uses non-standard vendoring paths, with GoLand stubbornly not accepting anything else), I wonder if I could at least figure out whether setting a different probe (e.g. TCP) would help. Also see https://benguild.com/2018/11/11/quickstart-golang-kubernetes-grpc-tls-lets-encrypt/. Now the second problem: while the probe works, it looks like the backend over HTTP/2 won't.
Regarding the above: perhaps that's because gRPC downgrades HTTP/2 to h2c? Also note that while HTTP/2 is not supported, the docs mention it (although until a few hours ago the link returned a 404). I'm referring to this doc: https://cloud.google.com/load-balancing/docs/https/ and https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 <- this one was broken. The remaining problem is that the HTTP/2 backend will likely not downgrade to h2c the way gRPC does. |
@gnarea That's an interesting question but I don't have a great answer for you there. I don't see a way to do this without tweaking your app. |
It seems like unencrypted HTTP/2 is not that common -- https://http2.github.io/faq/#does-http2-require-encryption -- so that is probably why support for it on the backend is not possible right now. Generating the self-signed cert is straightforward and can be done with an init container running a script, but you really have to watch out for cert expiry.
|
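A minimal sketch of the init-container idea described above (not from the thread; names and image are hypothetical, and the short -days value is there precisely because of the expiry caveat): the init container writes a self-signed cert and key into an emptyDir volume that the gRPC server container then serves TLS from.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-self-signed-example       # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-self-signed-example
  template:
    metadata:
      labels:
        app: grpc-self-signed-example
    spec:
      volumes:
      - name: tls
        emptyDir: {}
      initContainers:
      - name: gen-self-signed-cert
        image: alpine/openssl          # any image with openssl will do
        command: ["/bin/sh", "-c"]
        # The 30-day validity keeps the example honest about expiry: something has to
        # recreate the pod (and hence the cert) before this runs out.
        args:
        - >-
          openssl req -x509 -newkey rsa:2048 -nodes
          -keyout /tls/key.pem -out /tls/cert.pem
          -days 30 -subj "/CN=grpc-backend"
        volumeMounts:
        - name: tls
          mountPath: /tls
      containers:
      - name: grpc-server                      # hypothetical; must load /tls/cert.pem and /tls/key.pem
        image: example.com/grpc-server:latest  # hypothetical image
        ports:
        - name: grpc
          containerPort: 8080
        volumeMounts:
        - name: tls
          mountPath: /tls
          readOnly: true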
Thanks @rramkumar1 and @bowei! Putting a TLS reverse proxy in front of the gRPC server fixed that problem, but it's uncovered another problem: The load balancer is failing to connect to the proxy and I suspect it's because the LB is refusing the self-issued certificate (not necessarily because it's self-issued, but for a different reason). Here's what's happening:
How can I tell why the LB is aborting during the TLS handshake? Are there any logs I can check? Is the LB expecting the backend certificate to have a specific Common Name or Subject Alternative Name? I'm currently using the IP of the pod (as the Common Name). |
@gnarea Is your backend trying to do mTLS? If so, its not supported on GCLB. Your errors look very similar to other errors I've seen where folks are trying to use mTLS. |
@rramkumar1, no, no mutual TLS. I've managed to solve this issue by ditching the TLS proxy and getting the gRPC server to do TLS with a self-issued certificate. I think the problem is that the TLS proxy was generating a self-issued certificate with a CommonName (set to the IP address of the pod) but wasn't setting the SubjectAlternativeName extension, which turns out to be important. So now I have some custom code to create the certificate and set the SAN.
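The snippet itself isn't preserved in this thread, and the author did this inside the gRPC server rather than in Kubernetes. As a hypothetical equivalent outside the application, the init container from the sketch earlier in the thread could set the SAN with OpenSSL >= 1.1.1's -addext flag, reusing the same downward-API env var that the deployment below calls SERVER_IP_ADDRESS:
initContainers:
- name: gen-self-signed-cert
  image: alpine/openssl                # needs OpenSSL >= 1.1.1 for -addext
  env:
  - name: SERVER_IP_ADDRESS
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  command: ["/bin/sh", "-c"]
  # CommonName alone was not enough; the pod IP also goes into the SubjectAlternativeName.
  args:
  - >-
    openssl req -x509 -newkey rsa:2048 -nodes
    -keyout /tls/key.pem -out /tls/cert.pem -days 30
    -subj "/CN=$SERVER_IP_ADDRESS"
    -addext "subjectAltName=IP:$SERVER_IP_ADDRESS"
  volumeMounts:
  - name: tls
    mountPath: /tls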
I can't believe this is finally working. GCP folks, some feedback:
|
Here's the final deployment in case it helps anyone:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gw-test-relaynet-internet-gateway-cogrpc
  labels:
    helm.sh/chart: relaynet-internet-gateway-0.1.0
    app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
    app.kubernetes.io/instance: gw-test
    app.kubernetes.io/version: "1.6.3"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
      app.kubernetes.io/instance: gw-test
  template:
    metadata:
      labels:
        app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
        app.kubernetes.io/instance: gw-test
    spec:
      containers:
      - name: cogrpc
        image: "<YOUR-IMAGE>"
        imagePullPolicy: IfNotPresent
        env:
        - name: SERVER_IP_ADDRESS
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - name: grpc
          containerPort: 8080
          protocol: TCP
      - name: cogrpc-health-check
        image: salrashid123/grpc_health_proxy:1.0.0
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/grpc_health_proxy"
        - "-http-listen-addr"
        - "0.0.0.0:8082"
        - "-grpcaddr"
        - "127.0.0.1:8080"
        - "-service-name"
        - "relaynet.cogrpc.CargoRelay"
        - "-grpctls"
        - "-grpc-tls-no-verify"
        - "--logtostderr=1"
        - "-v"
        - "10"
        ports:
        - name: health-check
          containerPort: 8082
          protocol: TCP
        livenessProbe:
          httpGet:
            port: "health-check"
        readinessProbe:
          httpGet:
            port: "health-check"
|
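The Service and Ingress that go with the Deployment above aren't shown in the comment. A hypothetical sketch of how such a backend is typically wired to GCLB (names, host and secret are made up), assuming the cloud.google.com/app-protocols annotation so the load balancer speaks HTTP/2 over TLS to the pod:
apiVersion: v1
kind: Service
metadata:
  name: gw-test-relaynet-internet-gateway-cogrpc       # hypothetical
  annotations:
    # Ask the GCE ingress controller to use HTTP/2 (over TLS) towards this port.
    cloud.google.com/app-protocols: '{"grpc": "HTTP2"}'
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
    app.kubernetes.io/instance: gw-test
  ports:
  - name: grpc
    port: 8080
    targetPort: grpc
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gw-test-cogrpc                                  # hypothetical
spec:
  tls:
  - secretName: cogrpc-frontend-tls                     # hypothetical cert for the LB frontend
  rules:
  - host: cogrpc.example.com                            # hypothetical host
    http:
      paths:
      - path: /*
        backend:
          serviceName: gw-test-relaynet-internet-gateway-cogrpc
          servicePort: 8080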
While this would be a useful component in the short term, I'd say the 'real' fix would be to allow using h2c (HTTP/2 without HTTPS) between the load balancer and the service 'directly'. |
I totally agree with @raboof: GCP LBs should be able to handle TLS termination regardless of whether the backend uses HTTP 1 or 2. Here's an issue to improve the logging in case anyone's interested: https://issuetracker.google.com/issues/168884858 |
We will send this to the L7 proxy folks -- in the meantime, comment on the issue tracker -- it's a good way to show there is interest as well. |
@gnarea You can use Envoy to proxy the TLS intended for the gRPC service. Here is an example of that, which I just updated, plus an easy-button way to generate the SNI cert from a subordinate cert. |
Indeed. BTW, I've just created a separate issue for the gRPC health checks: https://issuetracker.google.com/issues/168994852 |
@bowei, is there an issue for the TLS termination on the L7 proxy? I'd like to star it and get updates. |
@gnarea I just noticed that this issue is pretty old and not specific to support for unencrypted HTTP/2 backends. It is probably better to open an issue for that topic so the activity can be tracked more easily. |
* fix(CogRPC): Self-issue TLS certificate (as one of the few workarounds for kubernetes/ingress-gce#18)
* Update URL to CogRPC server in functional test suite
* Fix COGRPC_ADDRESS in functional tests
* Document need to issue cert
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
For all of you coming here as of today, this is what I did to get a gRPC plaintext application working on GKE behind GCLB. Thanks @salrashid123 and @gnarea for providing insights. I used the docker-hitch sidecar from here instead of Envoy (much simpler to me), which generates TLS certificates on the fly, picking up the pod IP address at startup. I slightly modified it to accommodate HTTP/2 and ALPN with the correct ciphers supported by GCLB (here is the fork; it might be better to move the params to Dockerfile variables, but anyway...). Here you can see redacted manifests for k8s:
Hope this helps guys! Thanks to everyone for your posts. This is a good resource as well. |
Which was used as a workaround for kubernetes/ingress-gce#18 (comment)
From @aledbf on December 1, 2016 22:39
Copied from original issue: kubernetes/ingress-nginx#39