controllers/gce/README.md doc review #45

Closed
bowei opened this issue Oct 11, 2017 · 8 comments
bowei commented Oct 11, 2017

From @ensonic on July 12, 2017 9:54

https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#the-ingress

Though glbc doesn't support HTTPS yet, security configs would also be global.

You probably want to say that it does not support https when communicating with the backends. There is a chapter on TLS termination below.


https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#creation

kubectl create -f rc.yaml
replicationcontroller "glbc" created

This seems to be outdated:

kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/gce/rc.yaml
service "default-http-backend" created
replicationcontroller "l7-lb-controller" created

This also means that the log commands are outdated and should be updated to e.g. kubectl logs --follow l7-lb-controller-fw4ps l7-lb-controller

Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel

There is no HTTPLoadbalancing panel, but there is this page:
https://pantheon.corp.google.com/networking/loadbalancing/loadBalancers/list


https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#updates

Say you don't want a default backend ...

If you omit the default backend you seem to get an implicit default backend, which is always unhealthy since it returns 404. Having a default that has a readiness check would be nice so that GLBC would actually use it.


https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#paths

some yaml is shown as plaintext.


There is probably more; let's fix it iteratively.

Copied from original issue: kubernetes/ingress-nginx#951

bowei commented Oct 11, 2017

From @ensonic on July 12, 2017 12:23

Please double check that one needs to actually deploy the
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/gce/rc.yaml

In my GCE installation I do have a deployment in the kube-system namespace called 'l7-default-backend', but there is no glbc deployment. Still, if I don't deploy the rc.yaml everything works (that is, the ingress gets created, the load balancer gets created too), except that I can't get logging for l7-lb-controller-fw4ps and there are fewer errors if I run kubectl describe ing.

There is some discussion about this in kubernetes-retired/contrib#1733

bowei commented Oct 11, 2017

From @nicksardo on July 12, 2017 17:11

Agreed that the GCE readme is a mess. It's quite verbose yet unhelpful at the same time. You're welcome to help clean it up.

You probably want to say that it does not support https when communicating with the backends. There is a chapter on TLS termination below.

Actually, front and backend HTTPS is supported. This sentence needs to be removed.

Regarding the outdated replicationcontroller and commands, would you like to tackle this in a PR?

If you omit the default backend you seem to get some implicit default backend, which is always unhealthy - since it returns 404. Having a default that has a readyness check would be nice so that GLBC would actually use it.

The default backend does not have a readinessProbe section because the ingress controller sets the default health check path to /healthz only for the default backend (specified via flag). We can add a readinessprobe section at https://github.com/kubernetes/kubernetes/blob/4a01f44b7378fb0c58450898e1bca30ea8a5158a/cluster/addons/cluster-loadbalancing/glbc/default-svc-controller.yaml.
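A readinessProbe stanza for that default-backend manifest might look like the following sketch (container name, image, and port are illustrative assumptions, not taken from the actual file; /healthz matches the default health check path mentioned above):

```yaml
# Illustrative only: a readinessProbe for the default backend container.
containers:
- name: default-http-backend        # hypothetical container name
  image: k8s.gcr.io/defaultbackend:1.0   # assumed image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz   # the controller's default health check path for the default backend
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
```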

Please double check that one needs to actually deploy the
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/gce/rc.yaml

Are you running a cluster on GKE (managed K8s) or GCE (self-managed)? This question needs to dominate the readme. If users are running GKE, then the GLBC controller is already running on their master - they do not need to deploy anything. While the default-backend is visible to users, the ingress controller will not appear as it's a static pod on the master.

bowei commented Oct 11, 2017

From @ensonic on July 13, 2017 7:38

I've started a PR here: kubernetes/ingress-nginx#962

For the "replication-controller and commands" - I am running on GKE and hence I don't think there are equivalents for these commands - or is there a way to look at the logs of the ingress controller? I think the logs are quite useful (e.g. to discover the SNI issue). Once we have confirmed this I'll update the PR to clarify this.

Wrt to the health check:
https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#health-checks
says one needs to reply with 200 on '/' or implement a readinessProbe. The default backend you link to configures a livenessProbe. Right now I am having trouble with a grpc backend that both defines readinessProbe and livenessProbe, but the glbc only exercises

"GET / HTTP/1.1" 404 200 "-" "GoogleHC/1.0"

and declares it unhealthy. Here again the log of the controller might be helpful.
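If the controller does pick up the health check path from a readinessProbe, as the README's health-checks section suggests, a sketch for a backend that cannot serve 200 on / might look like this (container name, ports, and path are assumptions; note the GoogleHC request above is plain HTTP, so a gRPC-only server would still need an HTTP endpoint for the check):

```yaml
# Illustrative only: expose an HTTP health endpoint alongside the gRPC port
# so the GCE health checker (which speaks HTTP) can get a 200 response.
containers:
- name: grpc-backend        # hypothetical container name
  ports:
  - containerPort: 50051    # gRPC traffic
  - containerPort: 8080     # HTTP health endpoint
  readinessProbe:
    httpGet:
      path: /healthz        # assumed path; must return 200
      port: 8080
```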

bowei commented Oct 11, 2017

From @ensonic on July 19, 2017 17:24

I am happy to send more PRs if you could please reply.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 9, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 10, 2018

tonglil commented Feb 12, 2018

/remove-lifecycle rotten

The docs can still use work; I will try to tackle them sometime.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 12, 2018
@nicksardo

Duping to #249
/close
