This repository has been archived by the owner on Apr 17, 2019. It is now read-only.

use annotations to redefine health check endpoints #325

Closed
bprashanth opened this issue Dec 11, 2015 · 6 comments
Assignees
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bprashanth

Currently all backends need to serve a 200 on /
https://github.com/kubernetes/contrib/tree/master/Ingress/controllers/gce#health-checks

This makes it hard to run stock apps like WordPress that don't serve a 200 on /. We should make this configurable; it doesn't need to be blocked on the API exposing health checks.

@jeremywadsack

Updated link to relevant documentation: https://github.com/kubernetes/ingress/tree/master/controllers/gce#health-checks

Also, at this time there appear to be two documented options:

  1. Respond with a 200 on '/'. The content does not matter.
  2. Expose an arbitrary url as a readiness probe on the pods backing the Service (see doc for limitations).
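A minimal sketch of option 2, assuming a hypothetical pod (names and ports invented here for illustration) whose app serves its health endpoint at /healthz rather than /:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-backend        # hypothetical pod backing the Service
  labels:
    app: wordpress
spec:
  containers:
    - name: app
      image: wordpress:latest
      ports:
        - containerPort: 8080
      readinessProbe:            # ingress-gce derives the health check from this probe
        httpGet:
          path: /healthz         # custom path used instead of the default /
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
```

Per the linked doc, ingress-gce picks up the readiness probe's path when configuring the GCE health check, subject to the limitations described there.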

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 21, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close

@wadadli

wadadli commented Nov 13, 2018

/reopen

Consider a Deployment containing a container with two ports, exposed through two Service resources, service A and service B, neither of which responds with a 200 on GET / and hence runs into issues with ingress-gce. To get around this, the user configures a readiness probe using a path that they know will return 200 for one of the services, i.e.

          ports:
            - name: webadmin     # Service A
              containerPort: 8443
              protocol: TCP
            - name: transformer  # Service B
              containerPort: 9001
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: webadmin
            initialDelaySeconds: 30
            failureThreshold: 30
          readinessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: webadmin
            initialDelaySeconds: 30
            failureThreshold: 30

If we add these services to an Ingress resource, ingress-gce will use the readinessProbe from the Deployment to configure the health checks for both Service resources. The issue is that the manifest gives us no way to specify a different health-check path for each Service.

I believe an annotation on the Service resource would help, or even kubernetes/kubernetes#37218
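A sketch of what such an annotation might look like on service B; the annotation key here is invented for illustration and was not an implemented ingress-gce feature at the time:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: transformer
  annotations:
    # Hypothetical annotation: lets this Service declare its own
    # health-check path instead of inheriting the pod's readinessProbe.
    ingress.gcp.io/healthcheck-path: /transformer/status
spec:
  selector:
    app: myapp                 # assumed pod label
  ports:
    - name: transformer
      port: 9001
      targetPort: 9001
```

This would let each Service behind the same pod advertise a distinct health-check path, which the readiness-probe mechanism alone cannot express.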

@k8s-ci-robot
Contributor

@wadadli: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Consider the following scenario.

A Deployment with two Service resources, service A and service B, which both select the same pod. The pod does not respond with a 200 on GET /, so the user configures a readiness probe using a path that they know will return 200 for one of the services, i.e.

         ports:
           - name: webadmin     # Service A
             containerPort: 8443
             protocol: TCP
           - name: transformer  # Service B
             containerPort: 9001
             protocol: TCP
         livenessProbe:
           httpGet:
             scheme: HTTPS
             path: /
             port: webadmin
           initialDelaySeconds: 30
           failureThreshold: 30
         readinessProbe:
           httpGet:
             scheme: HTTPS
             path: /
             port: webadmin
           initialDelaySeconds: 30
           failureThreshold: 30

If we add these services to an Ingress resource, ingress-gce will use the readinessProbe from the Deployment to configure the health checks for both Service resources. The issue is that the manifest gives us no way to specify a different health-check path for each Service.

I believe an annotation on the service resource would help here, assuming separate service resources are created.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
