Ingress Healthcheck Configuration #42
From @k8s-ci-robot on May 15, 2017 21:25: @freehan: These labels do not exist in this repository. In response to this: […]
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
From @tonglil on August 14, 2017 20:08: For option 3, is the "default healthcheck" hitting the "default-backend"?
I'm in favor of option 1. I'm sure there is a minimum subset of features common to all cloud providers that makes sense to include in the Ingress spec, and it would improve the current situation a lot.
+1
For those in favor of option 1, please read this conversation: #28.
Reading that conversation -- I think configuring the healthcheck via annotations would be great.
@bowei I think that kubernetes-retired/contrib#325 is related to this?
I ran into this as well and found this post: kubernetes/kubernetes#20555 (comment)
"on Ingress creation, ingress controller will scan all backend pods and pick the first ReadinessProbe it encounters and configure healthcheck accordingly" I'm not seeing this, the health check will always point to default path "/" with:
|
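(The commenter's manifest did not survive extraction. As a rough illustration, a ReadinessProbe of the kind the quoted behavior says the controller should pick up might look like the sketch below; the container name, image, and path are hypothetical.)

```yaml
# Hypothetical pod template snippet. Per the behavior quoted above, the
# ingress controller should copy this probe's path into the generated
# GCE health check; commenters report it keeps using "/" instead.
containers:
  - name: app              # hypothetical name
    image: example/app:1.0 # hypothetical image
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz     # hypothetical health path
        port: 8080
```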
I got the same issue as @matti. When I create an Ingress […]. I've been trying for a while to find information on this topic, but I feel like it's not very well explained and documented. If anyone finds a solution for this it would be much appreciated!
@Gogoro thanks, opened a new issue, because this issue is for the semantic discussion.
Healthcheck configuration should be provided via the BackendConfig CRD, and the readiness probe approach should be deprecated and eventually removed.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/help-wanted
@rramkumar1: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this: […]
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is an nginx server that does a redirect to a zoom room used by the team for meetings like standup, retro, and everyone-is-wfh-because-covid-19.

You may wonder: why do the redirect with an HTML page? You can do an HTTP 301 redirect with just nginx redirect rules. And you would be correct! But then other people's opinions get pushed onto you and you have to do hacky things to get around them.

tldr: the ingress comes with a health checker that is not smart. See kubernetes/ingress-gce#42 for more details.

Longer answer: the ingress used by GKE sets up a health check that ONLY hits '/' on your pod. Because my initial nginx.conf just did 301 redirects, this health check was forever failing, because it would never get an HTTP 200. I tried to get around this by having nginx listen on another port for health checking and always return HTTP 200 on that port. I then had to expose both ports to the ingress controller. The ingress controller decided to health-check BOTH ports that were exposed on the container and send all traffic to port 9000, where it was getting the HTTP 200s. At this point I threw my hands up and wrote some HTML to do the redirect for me.

Signed-off-by: Taylor Silva <tsilva@pivotal.io>
Can HealthCheckConfig (added in this PR) configure the L7 LB to pass health checks for gRPC applications?
Hi everyone, please take a look at #1029, which implements the healthchecking overrides on the BackendConfig for a service.
@bowei thanks a lot for your PR. I think this should work well with gRPC applications, since we can use a custom path for healthchecks. Can you tell us in which versions of GKE this addition will be available?
Thanks @bowei! But are GCE-Ingress health check overrides available today? I'm not seeing any related docs in https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig. How can one keep track of whether it is available in a given version of GKE (e.g. the stable channel)?
Can someone post here what the recommended fix is? What does a BackendConfig with a health check look like? What is the min Kubernetes version where this feature is supported?
the docs will be updated very soon -- @spencerhance
An example BackendConfig using this feature:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  healthCheck:
    checkIntervalSec: 20
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 3
    type: TCP
    # defaults to serving port
    # port:
    # only for HTTP/HTTPS type
    # path:
```

ref: https://godoc.org/k8s.io/ingress-gce/pkg/apis/backendconfig/v1#HealthCheckConfig

It doesn't appear to work, though (the health check is still created as type HTTP with path "/", regardless of what I configure in the BackendConfig). Is there any GKE release that supports this yet?
I've tried the following setup with no success, sadly. My app (nakama) exposes two ports: 7110 for gRPC (HTTP/2) and 7111 for gRPC-gateway (HTTP 1.1 with […]). This is running on a GKE instance, version 1.16.8-gke.10.

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nakama3
  namespace: heroiclabs
  labels:
    project: heroiclabs
  annotations:
    cloud.google.com/app-protocols: '{"nakama-api":"HTTP","nakama-grpc-api":"HTTP2"}'
    beta.cloud.google.com/backend-config: >-
      {"ports":{"nakama-api":"backendconfig","nakama-grpc-api":"backendconfig"}, "default": "backendconfig"}
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
    - name: nakama-grpc-api
      protocol: TCP
      port: 7110
      targetPort: 7110
    - name: nakama-api
      protocol: TCP
      port: 7111
      targetPort: 7111
  selector:
    app: nakama
  type: NodePort
```

backendconfig.yml

```yaml
kind: BackendConfig
apiVersion: cloud.google.com/v1
metadata:
  labels:
    project: heroiclabs
  name: backendconfig
  namespace: heroiclabs
spec:
  connectionDraining:
    drainingTimeoutSec: 5
  logging:
    sampleRate: 0.0
  timeoutSec: 86400
  healthCheck:
    port: 7111
    checkIntervalSec: 10
```

ingress.yml

```yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: gundam3
  namespace: heroiclabs
  labels:
    project: heroiclabs
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: heroiclabs
    kubernetes.io/ingress.allow-http: 'false'
spec:
  backend:
    serviceName: gundam3
    servicePort: 7110
```

The result is the following: [screenshot lost in extraction]

Apologies for tagging you individually, guys, but @bowei or @rramkumar1, can you shed some light on what might be going wrong here?
cc: @spencerhance
This feature has not rolled out yet, but will be available in the next release (1.17.6-gke.7) next week, with the exception of port configuration. That will require a bug fix that should roll out a few weeks after. Additionally, this feature won't be available in 1.16 clusters until about a month after it has been released in 1.17.
Thanks @spencerhance and @bowei for the prompt response - I'll keep a look out for that.
I can keep the ingress healthy for around 10 minutes by manually setting a custom health check (pointing at the node port assigned to the readinessProbe port) on the backend service config that the load balancer creates for the ingress. After about 10 minutes, the health check reverts to looking at the default node port and I get 502s again on the external IP. (Using the screenshot of @mofirouz, but the same config field I mentioned above.) Unfortunately, the docs warn against updating the load balancer config manually, but without doing anything manually I cannot reach my service from the load balancer's assigned IP. By the way, I didn't try updating the actuator's health endpoint to […].

Edit: As mentioned here, I tried giving the readinessProbe exactly the same port as the container port.
Re-creating the Ingress then makes the load balancer's health check look for the new path instead.
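(A minimal sketch of that workaround, assuming a hypothetical app and health path: the probe's port must exactly match the containerPort for the controller to adopt the path.)

```yaml
# Hypothetical container spec illustrating the workaround above: the
# controller only adopts the probe's path when the probe port exactly
# matches the containerPort.
ports:
  - containerPort: 8080
readinessProbe:
  httpGet:
    path: /health   # hypothetical path; adopted after re-creating the Ingress
    port: 8080      # must equal containerPort; a mismatch leaves the
                    # health check on the default "/"
```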
@ahmetgeymen -- hopefully after the healthcheck feature is available, the need to edit the healthcheck settings manually will go away. Let us know if there is anything that remains that makes custom configuration necessary.
This seems available in beta. I am on 1.17.6-gke.11. I don't need any change in the Ingress.

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: config-default
spec:
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 3
    requestPath: /healthz
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-healthz
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    beta.cloud.google.com/backend-config: '{"default": "config-default"}'
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: test-healthz
  type: NodePort
```
I'm using v1.17.6-gke.7 but can't get this to work with gRPC. I basically want to use TCP (not HTTP2) health checks, because those HTTP2 health checks don't work at all with gRPC. Here are the resources I have: […]

The old HTTP2 healthcheck is still used. In fact, I can't see the healthcheck that should've been created by the BackendConfig. Any idea what I'm doing wrong?
@gnarea […]
Thanks @Gatsby-Lee! I think that link helps with the structure of the data of the BackendConfig.

I also gave up on the TCP probe because I couldn't get it to work and, even if I eventually did, it'd be far too unreliable for a health check. Instead, per the suggestion of a GCP support agent, I've now created a new container in the pod, which is an HTTP app with a single endpoint that in turn pings my gRPC service using the gRPC health check protocol. This approach is basically Option 3 in https://kubernetes.io/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/ (except that I'm doing HTTP probes instead of […]).

To sum up, I want to configure the gRPC backend in the LB in such a way that the health check points to the HTTP proxy containers but the actual backend uses the gRPC containers. This is the kind of thing you'd be able to do with the fix in this issue, right? If so, how can I configure the […]?

Here's my current service, backendconfig and deployment in case that's useful: […]

All pods are running properly, as you can see in the deployment status. And as you'll see below, the healthcheck for this backend is connecting to the gRPC service (…).
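(For what it's worth, a sketch of the shape being described, with all names, images, ports, and paths hypothetical: the BackendConfig points the LB health check at the HTTP sidecar's port while the Service keeps routing traffic to the gRPC port.)

```yaml
# Hypothetical pod template: gRPC app plus an HTTP sidecar that answers
# the LB health check by calling the gRPC health-check protocol locally.
containers:
  - name: grpc-app               # hypothetical
    image: example/grpc-app      # hypothetical
    ports:
      - containerPort: 50051     # gRPC serving port
  - name: health-proxy           # hypothetical HTTP sidecar
    image: example/health-proxy  # hypothetical
    ports:
      - containerPort: 8080      # HTTP port for the LB health check
---
# Hypothetical BackendConfig pointing the health check at the sidecar
# port instead of the gRPC serving port.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: grpc-backendconfig       # hypothetical
spec:
  healthCheck:
    type: HTTP
    port: 8080                   # sidecar port, not the gRPC port
    requestPath: /healthz        # hypothetical sidecar endpoint
```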
I was wrong. I don't think setting the healthcheck through BackendConfig works. And even if a custom healthcheck works by bringing up the Ingress after the Service, […]. This is what @nicksardo explained before in a different msg thread. BTW, the ingress doesn't have to be removed to use a custom healthcheck.
May I suggest/request adding this to the Feature Comparison table at https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features? I see custom health checks are marked as being in beta (for Internal Ingress [1]), but there's no version number, so it's not clear how to "activate" the beta feature. [1] I assume this includes ingresses used for IAP, given the healthCheck attribute doesn't have an effect in BackendConfigs used there.
Is there any reason why we can't set the Host header with the BackendConfig CRD?
Hey guys, currently I have a service that publishes port 12345. The Ingress controller looks like this: […]

The service listens for websocket connections on port 12345, but I've also spun up a small python server that runs in the same pod, experimentally, just for the sole purpose of passing the health check. It listens for HTTP requests on port 80. Is there a way for me to configure the health check so that it checks against port 80, without having to publish it as the service port on the same path as the port I want to route traffic to? I really want to stay in the GCE ingress ecosystem, so I would love to get this to work if possible. (A sketch of how this could look with BackendConfig follows below.)
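(With the BackendConfig healthCheck support discussed above, that setup could look roughly like the following sketch; the config name and request path are hypothetical, and the ports come from the comment.)

```yaml
# Hypothetical BackendConfig: the LB health check hits the helper HTTP
# server on port 80 while traffic keeps flowing to the websocket port 12345.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: websocket-backendconfig  # hypothetical name
spec:
  healthCheck:
    type: HTTP
    port: 80            # the helper server's port, not the serving port
    requestPath: /      # hypothetical path that returns HTTP 200
```

(The Service would then reference it via the beta.cloud.google.com/backend-config annotation, as in the earlier examples.)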
@lelandhwu, can you please create a new issue with the same content as #42 (comment)? This seems like a separate issue from healthcheck configuration. The remaining ask on this issue is to add the Host header to the BackendConfig; please open a different issue if this FR is still desired. Since this issue is specific to custom healthchecks, we are closing it out. Custom healthchecks are configurable through the BackendConfig CRD: #42 (comment).
From @freehan on May 15, 2017 21:25
On GCE, the ingress controller sets up a default healthcheck for backends. The healthcheck points to the nodeport of the backend services on every node. Currently, there is no way to describe the detailed configuration of the healthcheck in Ingress, while each application may want to handle healthchecks differently. To bypass this limitation, on Ingress creation the ingress controller scans all backend pods, picks the first ReadinessProbe it encounters, and configures the healthcheck accordingly. However, the healthcheck will not be updated if the ReadinessProbe is updated. (Refer: kubernetes/ingress-nginx#582)
I see 3 options going forward with healthcheck:

1. Expand the Ingress or Service spec to include more configuration for healthcheck. It should include the capabilities provided by the major cloud providers (GCP, AWS, ...).
2. Keep using the readiness probe for healthcheck configuration:
   a) Keep today's behavior and communicate clearly regarding the expectation. However, this still breaks the abstraction and declarative nature of k8s.
   b) Let the ingress controller watch the backend pods for any updates to the ReadinessProbe. This seems expensive and complicated.
3. Only set up a default healthcheck for ingresses. The ingress controller will only periodically ensure that the healthcheck exists, but will not care about its detailed configuration. Users can configure it directly through the cloud provider.
I am in favor of option 3. There are always more bells and whistles on different cloud providers, and the higher in the stack we go, the more features we can utilize. For an L7 LB, there is no clean, simple way to describe every intention, and the same is true for health checks. To ensure a smooth experience, k8s still sets up the basics; for advanced use cases, users will have to configure it through the cloud provider.
Thoughts? @kubernetes/sig-network-misc
Copied from original issue: kubernetes/ingress-nginx#720