diff --git a/README.md b/README.md index 7dd2f2f2ef..3da47c0bfb 100644 --- a/README.md +++ b/README.md @@ -17,630 +17,18 @@ Please read the [beta limitations](BETA_LIMITATIONS.md) doc to before using this ## Overview -**GCP HTTP(S) Load Balancer**: Google Compute Platform does not have a single resource that represents a L7 loadbalancer. When a user request comes in, it is first handled by the global forwarding rule, which sends the traffic to an HTTP proxy service that sends the traffic to a URL map that parses the URL to see which backend service will handle the request. Each backend service is assigned a set of virtual machine instances grouped into instance groups, or sets of IP addresses and ports named [network endpoint groups](https://cloud.google.com/load-balancing/docs/negs/) (NEGs). +Please visit [here](https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress) for core use-cases and [here](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress) for other cool features. -**Services**: A Kubernetes Service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name. This IP defaults to a cluster VIP in a private address range. You can direct ingress traffic to a particular Service by setting its `Type` to NodePort or LoadBalancer. NodePort opens up a port on *every* node in your cluster and proxies traffic to the endpoints of your service, while LoadBalancer allocates an L4 cloud loadbalancer. +## Releases +Please visit the [changelog](CHANGELOG.md) for both high-level release notes and a detailed changelog. -### What is an Ingress Controller? +## GKE Version Mapping -Configuring a webserver or loadbalancer is harder than it should be. Most webserver configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part you can apply the same logic to them and achieve a desired result. 
- -The Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a loadbalancer, or a more complicated setup of frontends that provide GSLB, DDoS protection, etc). - -An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's `/ingresses` endpoint for updates to the [Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/). Its job is to satisfy requests for Ingresses. - - -### L7 Load balancing on Kubernetes - -To achieve L7 loadbalancing through Kubernetes, we employ a resource called `Ingress`. The Ingress is consumed by this loadbalancer controller, which creates the following GCE resource graph: - -[Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules) -> [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies) -> [URL Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map) -> [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) -> [Instance Group](https://cloud.google.com/compute/docs/instance-groups/) or [Network Endpoint Group](https://cloud.google.com/load-balancing/docs/negs/) - -The controller (GLBC) manages the lifecycle of each component in the graph. It uses the Kubernetes resources as a spec for the desired state, and the GCE cloud resources as the observed state, and drives the observed to the desired. If an edge is disconnected, it fixes it. Each Ingress translates to a new GCE L7, and the rules on the Ingress become paths in the GCE URL Map. This allows you to route traffic to various backend Kubernetes Services (or directly to Pods when using NEGs) through a single public IP, which is in contrast to `Type=LoadBalancer`, which allocates a public IP *per* Kubernetes Service. 
For this to work, the Kubernetes Service *must* have Type=NodePort. - -### The Ingress - -An Ingress in Kubernetes is a REST object, similar to a Service. A minimal Ingress might look like: - -```yaml -01. apiVersion: extensions/v1beta1 -02. kind: Ingress -03. metadata: -04. name: hostlessendpoint -05. spec: -06. rules: -07. - http: -08. paths: -09. - path: /hostless -10. backend: -11. serviceName: test -12. servicePort: 80 -``` - -`POST` calls to the Kubernetes API server would cause GLBC to create a GCE L7 that routes all traffic sent to `http://ip-of-loadbalancer/hostless` to :80 of the service named `test`. If the service doesn't exist yet or isn't type NodePort, then GLBC will allocate an IP and wait until it does. Once the Service shows up, it will create the required path rules to route traffic. - -__Lines 1-4__: Resource metadata used to tag GCE resources. For example, if you go to the console you would see a URL Map called: `k8-fw-default-hostlessendpoint`, where default is the namespace and `hostlessendpoint` is the name of the resource. The Kubernetes API server ensures that namespace/name is unique so there will never be any collisions. - -__Lines 5-7__: Ingress Spec has all the information needed to configure a GCE L7. Most importantly, it contains a list of `rules`. A rule can take many forms, but the only rule relevant to GLBC is the `http` rule. - -__Lines 8-9__: Each HTTP rule contains the following information: A host (eg: foo.bar.com, defaults to `*` in this example), a list of paths (eg: `/hostless`) each of which has an associated backend (`test:80`). Both the `host` and `path` must match the content of an incoming request before the L7 directs traffic to the `backend`. - -__Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule. 
The `port` is the desired `spec.ports[*].port` from the Service Spec -- Note, though, that the L7 actually directs traffic to the port's corresponding `NodePort`, unless configured as a NEG. - -__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of GLBC. - - -## Load Balancer Management - -You can manage a GCE L7 by creating, updating, or deleting the associated Kubernetes Ingress. - -### Creation - -Before you can start creating Ingress you need to start up GLBC. We can use the examples/deployment/gce-ingress-controller.yaml: -```shell -$ kubectl create -f examples/deployment/gce-ingress-controller.yaml -replicationcontroller "glbc" created -$ kubectl get pods -NAME READY STATUS RESTARTS AGE -glbc-6m6b6 2/2 Running 0 21s - -``` - -A couple of things to note about this controller: -* It has an intentionally long `terminationGracePeriod`, this is only required with the --delete-all-on-quit flag (see [Deletion](#deletion)) -* Don't start 2 instances of the controller in a single cluster, they will fight each other. - -The loadbalancer controller will watch for Services, Nodes and Ingress. Nodes already exist (the nodes in your cluster). We need to create the other 2. For example, create the Service with examples/multi-path/svc.yaml and the Ingress with examples/multi-path/gce-multi-path-ingress.yaml. - -A couple of things to note about the Service: -* It creates a Replication Controller for a simple "echoserver" application, with 1 replica. 
-* It creates 2 services for the same application pod: echoheaders[x, y] - -Something to note about the Ingress: -* It creates an Ingress with 2 hostnames and 3 endpoints (foo.bar.com{/foo} and bar.baz.com{/foo, /bar}) that access the given service - -```shell -$ kubectl create -f examples/http-svc.yaml examples/multi-path/gce-multi-path-ingress.yaml -$ kubectl get svc -NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE -echoheadersx 10.0.126.10 nodes 80/TCP app=echoheaders 16m -echoheadersy 10.0.134.238 nodes 80/TCP app=echoheaders 16m -Kubernetes 10.0.0.1 443/TCP 21h - -$ kubectl get ing -NAME RULE BACKEND ADDRESS -echomap - echoheadersx:80 - foo.bar.com - /foo echoheadersx:80 - bar.baz.com - /bar echoheadersy:80 - /foo echoheadersx:80 -``` - -You can tail the logs of the controller to observe its progress: -``` -$ kubectl logs --follow glbc-6m6b6 l7-lb-controller -I1005 22:11:26.731845 1 instances.go:48] Creating instance group k8-ig-foo -I1005 22:11:34.360689 1 controller.go:152] Created new loadbalancer controller -I1005 22:11:34.360737 1 controller.go:172] Starting loadbalancer controller -I1005 22:11:34.380757 1 controller.go:206] Syncing default/echomap -I1005 22:11:34.380763 1 loadbalancer.go:134] Syncing loadbalancers [default/echomap] -I1005 22:11:34.380810 1 loadbalancer.go:100] Creating l7 default-echomap -I1005 22:11:34.385161 1 utils.go:83] Syncing e2e-test-beeps-minion-ugv1 -... 
-``` - -When it's done, it will update the status of the Ingress with the IP of the L7 it created: -```shell -$ kubectl get ing -NAME RULE BACKEND ADDRESS -echomap - echoheadersdefault:80 107.178.254.239 - foo.bar.com - /foo echoheadersx:80 - bar.baz.com - /bar echoheadersy:80 - /foo echoheadersx:80 -``` - -Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel: -* Global Forwarding Rule -* URL Map -* TargetHTTPProxy -* Backend Services (one for each Kubernetes NodePort service) -* An Instance Group (with ports corresponding to the Backend Services) - -The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks, wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes healthchecks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy: - -```shell -$ curl --resolve foo.bar.com:80:107.178.245.239 http://foo.bar.com/foo -CLIENT VALUES: -client_address=('10.240.29.196', 56401) (10.240.29.196) -command=GET -path=/echoheadersx -real path=/echoheadersx -query= -request_version=HTTP/1.1 - -SERVER VALUES: -server_version=BaseHTTP/0.6 -sys_version=Python/3.4.3 -protocol_version=HTTP/1.0 - -HEADERS RECEIVED: -Accept=*/* -Connection=Keep-Alive -Host=107.178.254.239 -User-Agent=curl/7.35.0 -Via=1.1 google -X-Forwarded-For=216.239.45.73, 107.178.254.239 -X-Forwarded-Proto=http -``` - -You can also edit `/etc/hosts` instead of using `--resolve`. - -#### Updates - -Say you don't want a default backend and you'd like to allow all traffic hitting your loadbalancer at `/foo` to reach your echoheaders backend service, not just the traffic for foo.bar.com. 
You can modify the Ingress Spec: - -```yaml -spec: - rules: - - http: - paths: - - path: /foo -.. -``` - -and replace the existing Ingress: - -``` -$ kubectl replace -f examples/multi-path/gce-multi-path-ingress.yaml -ingress "echomap" replaced - -$ curl http://107.178.254.239/foo -CLIENT VALUES: -client_address=('10.240.143.179', 59546) (10.240.143.179) -command=GET -path=/foo -real path=/foo -... - -$ curl http://107.178.254.239/ -
-INTRODUCTION
-============
-This is an nginx webserver for simple loadbalancer testing. It works well
-for me but it might not have some of the features you want. If you would
-...
-```
-
-A couple of things to note about this particular update:
-* An Ingress without a default backend inherits the backend of the Ingress controller.
-* An IngressRule without a host gets the wildcard host. This is controller-specific; some loadbalancer controllers do not respect anything but a DNS subdomain as the host. You *cannot* set the host to a regular expression.
-* Avoid deleting and then re-creating an Ingress, as it will result in the controller tearing down and recreating the loadbalancer.
-
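-
-As an illustrative sketch of the two bullets above (this manifest is an assumption, not a file shipped with this repo; it reuses the echoheaders services from earlier), an Ingress can pair a hostless (wildcard-host) rule with an explicit default backend:
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: echomap
-spec:
-  # Explicit default backend: requests that match no rule are sent here
-  # instead of to the controller's built-in default backend.
-  backend:
-    serviceName: echoheadersx
-    servicePort: 80
-  rules:
-  - http:  # no `host` field, so this rule gets the wildcard host
-      paths:
-      - path: /foo
-        backend:
-          serviceName: echoheadersx
-          servicePort: 80
-```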
-### Paths
-
-Until now, our examples were simplified in that they hit an endpoint with a catch-all path. Most real-world backends have sub-resources. Let's create a service to test how the loadbalancer handles paths:
-```yaml
-apiVersion: v1
-kind: ReplicationController
-metadata:
-  name: nginxtest
-spec:
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        app: nginxtest
-    spec:
-      containers:
-      - name: nginxtest
-        image: bprashanth/nginxtest:1.0
-        ports:
-        - containerPort: 80
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: nginxtest
-  labels:
-    app: nginxtest
-spec:
-  type: NodePort
-  ports:
-  - port: 80
-    targetPort: 80
-    protocol: TCP
-    name: http
-  selector:
-    app: nginxtest
-```
-
-Running `kubectl create` against this manifest will give you a service with multiple endpoints:
-```shell
-$ kubectl get svc nginxtest -o yaml | grep -i nodeport:
-    nodePort: 30404
-$ curl nodeip:30404/
-ENDPOINTS
-=========
- hostname: An endpoint to query the hostname.
- stress: An endpoint to stress the host.
- fs: A file system for static content.
-
-```
-You can put the nodeip:port into your browser and play around with the endpoints so you're familiar with what to expect. We will test the `/hostname` and `/fs/files/nginx.html` endpoints. Modify/create your Ingress:
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: nginxtest-ingress
-spec:
-  rules:
-  - http:
-      paths:
-      - path: /hostname
-        backend:
-          serviceName: nginxtest
-          servicePort: 80
-```
-
-And check the endpoint (you will have to wait until the update takes effect; this can take a few minutes):
-```shell
-$ kubectl replace -f ingress.yaml
-$ curl loadbalancerip/hostname
-nginx-tester-pod-name
-```
-
-Note what just happened: the endpoint exposes /hostname, and the loadbalancer forwarded the entire matching URL to the endpoint -- the path is not rewritten. This means that if your Ingress path were '/foo' and a request for /foo/hostname arrived, your endpoint would receive /foo/hostname, not /hostname, and would not know how to route it. Now update the Ingress to access static content via the /fs endpoint:
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: nginxtest-ingress
-spec:
-  rules:
-  - http:
-      paths:
-      - path: /fs/*
-        backend:
-          serviceName: nginxtest
-          servicePort: 80
-```
-
-As before, wait a while for the update to take effect, and try accessing `loadbalancerip/fs/files/nginx.html`.
-
-#### Deletion
-
-Deleting a loadbalancer controller pod will not affect the loadbalancers themselves; this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl:
-```shell
-$ kubectl delete ing echomap
-$ kubectl logs --follow glbc-6m6b6 l7-lb-controller
-I1007 00:25:45.099429       1 loadbalancer.go:144] Deleting lb default-echomap
-I1007 00:25:45.099432       1 loadbalancer.go:437] Deleting global forwarding rule k8-fw-default-echomap
-I1007 00:25:54.885823       1 loadbalancer.go:444] Deleting target proxy k8-tp-default-echomap
-I1007 00:25:58.446941       1 loadbalancer.go:451] Deleting url map k8-um-default-echomap
-I1007 00:26:02.043065       1 backends.go:176] Deleting backends []
-I1007 00:26:02.043188       1 backends.go:134] Deleting backend k8-be-30301
-I1007 00:26:05.591140       1 backends.go:134] Deleting backend k8-be-30284
-I1007 00:26:09.159016       1 controller.go:232] Finished syncing default/echomap
-```
-Note that it takes ~30 seconds per ingress to purge cloud resources. This may not be a sufficient cleanup because you might have deleted the Ingress while GLBC was down, in which case it would leak cloud resources. You can delete the GLBC and purge cloud resources in two more ways:
-
-__The dev/test way__: If you want to delete everything in the cloud when the loadbalancer controller pod dies, start it with the `--delete-all-on-quit` flag. When a pod is killed, it is first sent a SIGTERM, followed by a grace period (set to 10 minutes for loadbalancer controllers), followed by a SIGKILL. The controller pod uses this time to delete cloud resources. Be careful with `--delete-all-on-quit`: if you're running a production GLBC and the scheduler re-schedules your pod for some reason, it will result in a loss of availability. You can do this because your rc.yaml has:
-```yaml
-args:
-# auto quit requires a high termination grace period.
-- --delete-all-on-quit=true
-```
-
-So simply delete the replication controller:
-```shell
-$ kubectl get rc glbc
-CONTROLLER   CONTAINER(S)           IMAGE(S)                                      SELECTOR                    REPLICAS   AGE
-glbc         default-http-backend   gcr.io/google_containers/defaultbackend-amd64:1.5   k8s-app=glbc,version=v0.5   1          2m
-             l7-lb-controller       gcr.io/google_containers/glbc:0.9.7
-
-$ kubectl delete rc glbc
-replicationcontroller "glbc" deleted
-
-$ kubectl get pods
-NAME                    READY     STATUS        RESTARTS   AGE
-glbc-6m6b6              1/1       Terminating   0          13m
-```
-
-__The prod way__: If you didn't start the controller with `--delete-all-on-quit`, you can execute a GET on the `/delete-all-and-quit` endpoint. This endpoint is deliberately not exposed outside the pod.
-
-```shell
-$ kubectl exec -it glbc-6m6b6  -- wget -q -O- http://localhost:8081/delete-all-and-quit
-..Hangs till quit is done..
-
-$ kubectl logs glbc-6m6b6  --follow
-I1007 00:26:09.159016       1 controller.go:232] Finished syncing default/echomap
-I1007 00:29:30.321419       1 controller.go:192] Shutting down controller queues.
-I1007 00:29:30.321970       1 controller.go:199] Shutting down cluster manager.
-I1007 00:29:30.321574       1 controller.go:178] Shutting down Loadbalancer Controller
-I1007 00:29:30.322378       1 main.go:160] Handled quit, awaiting pod deletion.
-I1007 00:29:30.321977       1 loadbalancer.go:154] Creating loadbalancers []
-I1007 00:29:30.322617       1 loadbalancer.go:192] Loadbalancer pool shutdown.
-I1007 00:29:30.322622       1 backends.go:176] Deleting backends []
-I1007 00:30:00.322528       1 main.go:160] Handled quit, awaiting pod deletion.
-I1007 00:30:30.322751       1 main.go:160] Handled quit, awaiting pod deletion
-```
-
-You just instructed the loadbalancer controller to quit. However, if the pod exited right away, the replication controller would simply create another one, so it waits around until you delete the rc.
-
-#### Health checks
-
-Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP(S) health checks sent to them from the GCE loadbalancer:
-1. Respond with a 200 on '/'. The content does not matter.
-2. Expose an arbitrary URL as a `readiness` probe on the pods backing the Service.
-
-The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](/examples/health-checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
-
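-
-As an illustrative sketch (the pod name, image tag, and probe path below are assumptions, not taken from this repo), a readiness probe that the controller could adopt as the loadbalancer's health check might look like:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: echoserver
-  labels:
-    app: echoheaders
-spec:
-  containers:
-  - name: echoserver
-    image: gcr.io/google_containers/echoserver:1.4  # hypothetical image tag
-    ports:
-    - containerPort: 8080
-    readinessProbe:
-      # A plain httpGet probe with no custom headers is compatible,
-      # so the controller can point the GCE health check at this path/port.
-      httpGet:
-        path: /healthz
-        port: 8080
-```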
-## Frontend HTTPS
-
-For encrypted communication between the client and the load balancer, you need to specify a TLS private key and certificate to be used by the ingress controller.
-
-Version 1.1 of GLBC now supports (as a beta feature) using more than one SSL certificate in a single Ingress for request termination (aka Multiple-TLS).
-With this change, keep in mind that GCP's limit is 10. Take a look at GCP's [documentation](https://cloud.google.com/load-balancing/docs/ssl/)
-on SSL certificates for more information on how they are supported in L7 load balancing.
-
-The Ingress controller can read the private key and certificate from 2 sources:
-* Kubernetes [secret](http://kubernetes.io/docs/user-guide/secrets).
-   * [Example](/examples/https)
-* [GCP SSL
-  certificate](https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates).
-
-Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-
-### Secret
-
-For the ingress controller to use the certificate and private key stored in a
-Kubernetes secret, the user needs to specify the secret name in the TLS configuration section
-of their ingress spec. The secret is assumed to exist in the same namespace as the ingress.
-
-The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
-```shell
-$ kubectl create secret tls testsecret --key /tmp/tls.key --cert /tmp/tls.crt
-```
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: testsecret
-  namespace: default
-type: kubernetes.io/tls
-data:
-  tls.crt: base64 encoded cert
-  tls.key: base64 encoded key
-```
-
-Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS.
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: no-rules-map
-spec:
-  tls:
-  - secretName: testsecret
-  backend:
-    serviceName: s1
-    servicePort: 80
-```
-
-This creates 2 GCE forwarding rules that use a single static IP. Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress.
-
-Specifying multiple secrets can be done as follows:
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-spec:
-  tls:
-  - secretName: svc1-certificate
-  - secretName: svc2-certificate
-  backend:
-    serviceName: svc1
-    servicePort: svc1-port
-  rules:
-  - host: svc1.example.com
-    http:
-      paths:
-      - path: /*
-        backend:
-          serviceName: svc1
-          servicePort: svc1-port
-  - host: svc2.example.com
-    http:
-      paths:
-      - path: /*
-        backend:
-          serviceName: svc2
-          servicePort: svc2-port
-```
-
-In this example, svc1-certificate should ideally contain a certificate for the hostname svc1.example.com, and svc2-certificate one for svc2.example.com.
-Then, when a client request indicates a hostname of svc1.example.com, the certificate contained in secret svc1-certificate will be served.
-
-Keep in mind that if you downgrade to a version that does not support Multiple-TLS (< 1.1), then you will need to manually clean up the created certificates in GCP.
-
-### GCP SSL Cert
-
-For the ingress controller to use the certificate and private key stored in a
-GCP SSL cert, the user needs to specify the SSL cert name using the `ingress.gcp.kubernetes.io/pre-shared-cert` annotation.
-The certificate in this case is managed by the user and it is their responsibility to create/delete it. The Ingress controller assigns the SSL certificate with this name to the target proxies of the Ingress.
-
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: no-rules-map
-  annotations:
-      ingress.gcp.kubernetes.io/pre-shared-cert: 'my-certificate'
-spec:
-...
-```
-
-Multiple pre-shared certs can be specified as follows:
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: no-rules-map
-  annotations:
-      ingress.gcp.kubernetes.io/pre-shared-cert: "my-certificate-1, my-certificate-2, my-certificate-3"
-spec:
-...
-```
-
-Certificates specified via the annotation take precedence over certificates specified via the secret. In other words, if both methods are used, the certificates specified via the annotation will be used while the ones specified via the secret are ignored.
-
-#### Ingress cannot redirect HTTP to HTTPS
-
-The GCP HTTP Load Balancer does not support redirect rules. Your application must perform the redirection itself. With an nginx server, this is as simple as adding the following lines to your config:
-```nginx
-# Replace '_' with your hostname.
-server_name _;
-if ($http_x_forwarded_proto = "http") {
-    return 301 https://$host$request_uri;
-}
-```
-
-#### Blocking HTTP
-
-You can block traffic on `:80` through an annotation. You might want to do this if all your clients are only going to hit the loadbalancer through HTTPS and you don't want to waste the extra GCE forwarding rule, eg:
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: test
-  annotations:
-    kubernetes.io/ingress.allow-http: "false"
-...
-```
-
-And curling `:80` should just `404`:
-```console
-$ curl 130.211.10.121
-...

404. That’s an error. - -$ curl https://130.211.10.121 -k -... -SERVER VALUES: -server_version=nginx: 1.9.11 - lua: 10001 -``` - -## Backend HTTPS - -For encrypted communication between the load balancer and your Kubernetes service, you need to decorate the service's port as expecting HTTPS. There's an alpha [Service annotation](examples/backside-https/app.yaml) for specifying the expected protocol per service port. Upon seeing the protocol as HTTPS, the ingress controller will assemble a GCP L7 load balancer with an HTTPS backend-service with an HTTPS health check. - -The annotation value is a JSON map of port-name to "HTTPS" or "HTTP". If you do not specify the port, "HTTP" is assumed. -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-echo-svc - annotations: - service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}' - labels: - app: echo -spec: - type: NodePort - ports: - - port: 443 - protocol: TCP - name: my-https-port - selector: - app: echo -``` - -## Troubleshooting: - -This controller is complicated because it exposes a tangled set of external resources as a single logical abstraction. It's recommended that you are at least *aware* of how one creates a GCE L7 [without a kubernetes Ingress](https://cloud.google.com/container-engine/docs/tutorials/http-balancer). 
If weird things happen, here are some basic debugging guidelines: - -* Check loadbalancer controller pod logs via kubectl -A typical sign of trouble is repeated retries in the logs: -```shell -I1006 18:58:53.451869 1 loadbalancer.go:268] Forwarding rule k8-fw-default-echomap already exists -I1006 18:58:53.451955 1 backends.go:162] Syncing backends [30301 30284 30301] -I1006 18:58:53.451998 1 backends.go:134] Deleting backend k8-be-30302 -E1006 18:58:57.029253 1 utils.go:71] Requeuing default/echomap, err googleapi: Error 400: The backendService resource 'projects/Kubernetesdev/global/backendServices/k8-be-30302' is already being used by 'projects/Kubernetesdev/global/urlMaps/k8-um-default-echomap' -I1006 18:58:57.029336 1 utils.go:83] Syncing default/echomap -``` - -This could be a bug or quota limitation. In the case of the former, please head over to slack or github. - -* If you see a GET hanging, followed by a 502 with the following response: - -``` - - -502 Server Error - - -

Error: Server Error

The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
- -``` -The loadbalancer is probably bootstrapping itself. - -* If a GET responds with a 404 and the following response: -``` - -

404. That’s an error.

The requested URL /hostless was not found on this server. That’s all we know. -``` -It means you have lost your IP somehow, or just typed in the wrong IP. - -* If you see requests taking an abnormal amount of time, run the echoheaders pod and look for the client address -```shell -CLIENT VALUES: -client_address=('10.240.29.196', 56401) (10.240.29.196) -``` - -Then head over to the GCE node with internal IP 10.240.29.196 and check that the [Service is functioning](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/debugging-services.md) as expected. Remember that the GCE L7 is routing you through the NodePort service, and try to trace back. - -* Check if you can access the backend service directly via nodeip:nodeport -* Check the GCE console -* Make sure you only have a single loadbalancer controller running -* Make sure the initial GCE health checks have passed -* A crash loop looks like: -```shell -$ kubectl get pods -glbc-fjtlq 0/1 CrashLoopBackOff 17 1h -``` -If you hit that it means the controller isn't even starting. Re-check your input flags, especially the required ones. - -## GLBC Implementation Details - -For the curious, here is a high level overview of how the GCE LoadBalancer controller manages cloud resources. - -The controller manages cloud resources through a notion of pools. Each pool is the representation of the last known state of a logical cloud resource. Pools are periodically synced with the desired state, as reflected by the Kubernetes api. When you create a new Ingress, the following happens: -* Updates instance groups to reflect all nodes in the cluster. -* Creates Backend Service for each Kubernetes service referenced in the ingress spec. -* Adds named-port for each Backend Service to each instance group. -* Creates a URL Map, TargetHttpProxy, and ForwardingRule. -* Updates the URL Map according to the Ingress. 
- -Periodically, each pool checks that it has a valid connection to the next hop in the above resource graph. So for example, the backend pool will check that each backend is connected to the instance group and that the node ports match, the instance group will check that all the Kubernetes nodes are a part of the instance group, and so on. Since Backend Services are a limited resource, they're shared (well, everything is limited by your quota, this applies doubly to Backend Services). This means you can setup N Ingress' exposing M services through different paths and the controller will only create M backends. When all the Ingress' are deleted, the backend pool GCs the backend. - -## GCE + GKE Version Mapping - -The table below describes what version of GLBC is running on GKE. Note that these versions are simply the defaults. Users still have the power to change the version manually if they want to (see deploy/). +The table below describes what version of Ingress-GCE is running on GKE. Note that these versions are simply the defaults. Users still have the power to change the version manually if they want to (see deploy/). *Format: k8s version -> glbc version* - * GKE: * 1.9.6-gke.2 -> 0.9.7 * 1.9.7-gke.5 -> 0.9.7 * 1.10.4-gke.0 -> v1.1.1 @@ -651,14 +39,4 @@ The table below describes what version of GLBC is running on GKE. 
Note that thes * 1.11.2-gke.4 -> v1.3.3 * 1.11.3-gke.14 -> v1.4.0 -## Wish list: - -* More E2e, integration tests -* Better events -* Detect leaked resources even if the Ingress has been deleted when the controller isn't around -* Specify health checks (currently we just rely on kubernetes service/pod liveness probes and force pods to have a `/` endpoint that responds with 200 for GCE) -* Async pool management of backends/L7s etc -* Retry back-off when GCE Quota is done -* GCE Quota integration - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/contrib/service-loadbalancer/gce/README.md?pixel)]() diff --git a/docs/README.md b/docs/README.md deleted file mode 100644 index 2c0de03bfd..0000000000 --- a/docs/README.md +++ /dev/null @@ -1,20 +0,0 @@ -# Ingress Documentation and Examples - -This directory contains documentation. - -## File naming convention - -Try to create a README file in every directory containing documentation and index -out from there, that's what readers will notice first. Use lower case for other -file names unless you have a reason to draw someone's attention to it. -Avoid CamelCase. - -Rationale: - -* Files that are common to all controllers, or heavily index other files, are -named using ALL CAPS. This is done to indicate to the user that they should -visit these files first. Examples include PREREQUISITES and README. - -* Files specific to a controller, or files that contain information about -various controllers, are named using all lower case. Examples include -configuration and catalog files. diff --git a/docs/admin.md b/docs/admin.md deleted file mode 100644 index 38e9339663..0000000000 --- a/docs/admin.md +++ /dev/null @@ -1,33 +0,0 @@ -# Ingress Admin Guide - -This is a guide to the different deployment styles of an Ingress controller. - -## Vanillla deployments - -__GKE__: On GKE, the Ingress controller runs on the -master. 
If you wish to stop this controller and run another instance on your -nodes instead, you can do so by following this [example](/examples/deployment/gce). - -__Generic__: You can deploy a generic (nginx or haproxy) Ingress controller by simply -running it as a pod in your cluster, as shown in the [examples](/examples/deployment). -Please note that you must specify the `ingress.class` -[annotation](/examples/PREREQUISITES.md#ingress-class) if you're running on a -cloudprovider, or the cloudprovider controller will fight the nginx controller -for the Ingress. - -## Stacked deployments - -__Behind a LoadBalancer Service__: You can deploy a generic controller behind a -Service of `Type=LoadBalancer`, by following this [example](/examples/static-ip/nginx#acquiring-an-ip). -More specifically, first create a LoadBalancer Service that selects the generic -controller pods, then start the generic controller with the `--publish-service` -flag. - - -__Behind another Ingress__: Sometimes it is desirable to deploy a stack of -Ingresses, like the GCE Ingress -> nginx Ingress -> application. You might -want to do this because the GCE HTTP lb offers some features that the GCE -network LB does not, like a global static IP or CDN, but doesn't offer all the -features of nginx, like URL rewriting or redirects. - -TODO: Write an example diff --git a/docs/annotations.md b/docs/annotations.md deleted file mode 100644 index d066392350..0000000000 --- a/docs/annotations.md +++ /dev/null @@ -1,61 +0,0 @@ -## Ingress-GCE Supported Annotations -These are annotations on the Kubernetes **Ingress** resource which are only relevant to the ingress-gce -controller. Do not expect other controllers to evaluate them. Likewise, do not expect other controllers' -annotations to work with ingress-gce unless specified in this list.
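The class-matching rule spelled out under "Ingress Class" below can be sketched as a small predicate. This is a hypothetical helper for illustration, not code from this repo: ingress-gce claims an Ingress when the annotation is unset or set to `gce`.

```shell
# Hypothetical sketch of the ingress.class matching rule:
# ingress-gce acts on an Ingress when the annotation is unset or equals "gce".
is_claimed_by_gce() {
  class="$1"
  [ -z "$class" ] || [ "$class" = "gce" ]
}

is_claimed_by_gce ""      && echo "unset: claimed by ingress-gce"
is_claimed_by_gce "gce"   && echo "gce: claimed by ingress-gce"
is_claimed_by_gce "nginx" || echo "nginx: left to the nginx controller"
```

Any other non-empty value (e.g. `nginx`) causes ingress-gce to ignore the Ingress entirely.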
- -#### Ingress Class -`kubernetes.io/ingress.class` - -The ingress-gce controller will only operate on ingresses with non-set ingress.class values or when -the value equals 'gce'. If you are using another ingress controller, be sure to set this to the -respective controller's key, such as 'nginx'. - -#### Disable HTTP Front-End -`kubernetes.io/ingress.allow-http` default: `"true"` - -This flag indicates whether the controller should create an HTTP Forwarding Rule and Target Proxy -for the GCP Load Balancer. If either unset or true, the controller will create these resources. If -set to `"false"`, the controller will only create HTTPS resources assuming TLS for the ingress is -configured. - -#### Use GCP SSL Certificate -`ingress.gcp.kubernetes.io/pre-shared-cert` - -Instead of storing certificates and keys in Kubernetes secrets, you can upload them to GCP and -reference them by name through this annotation. - -#### Specify Reserved GCP address -`kubernetes.io/ingress.global-static-ip-name` - -Provide the name of a GCP Address (Global) through this annotation and all forwarding rules for this -ingress will utilize this IP. -[Example YAML](/examples/static-ip) - - -## Service Annotations -These are annotations on the Kubernetes **Service** resource which are only relevant to the ingress-gce -controller. Do not expect other controllers to evaluate them. - -#### Set Protocol of Service Ports -`service.alpha.kubernetes.io/app-protocols` - -Provide a mapping of service-port name to either `HTTP`, `HTTPS`, or `HTTP2` to indicate what protocol -the GCP Backend Service should use. - -Example Value: `'{"my-https-port":"HTTPS"}'` -[Example YAML](/examples/backside-https) - -#### Set BackendConfig of Service Ports -`beta.cloud.google.com/backend-config` - -Provide a mapping between ports and BackendConfig objects. You can provide configuration for a Cloud load -balancer by associating Service ports with BackendConfig objects. 
For more details please visit -https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig. - -#### Enable Network Endpoint Group -`cloud.google.com/neg` - -Set `ingress` to enable the NEG feature for the Ingress referencing this service. -Optionally provide a map of service ports that should be exposed as stand-alone NEGs. - -Example Value: `'{"ingress":true,"exposed_ports":{"80":{}, "443":{}}}'` diff --git a/docs/troubleshooting/troubleshooting.md b/docs/troubleshooting/troubleshooting.md index 522eb2e4c4..09c9a2c5c5 100644 --- a/docs/troubleshooting/troubleshooting.md +++ b/docs/troubleshooting/troubleshooting.md @@ -8,6 +8,64 @@ Do not move it without providing redirects. # Troubleshooting +## General + +This controller is complicated because it exposes a tangled set of external resources as a single logical abstraction. It's recommended that you are at least *aware* of how one creates a GCE L7 [without a kubernetes Ingress](https://cloud.google.com/container-engine/docs/tutorials/http-balancer). If weird things happen, here are some basic debugging guidelines: + +* Check loadbalancer controller pod logs via kubectl. +A typical sign of trouble is repeated retries in the logs: +```shell +I1006 18:58:53.451869 1 loadbalancer.go:268] Forwarding rule k8-fw-default-echomap already exists +I1006 18:58:53.451955 1 backends.go:162] Syncing backends [30301 30284 30301] +I1006 18:58:53.451998 1 backends.go:134] Deleting backend k8-be-30302 +E1006 18:58:57.029253 1 utils.go:71] Requeuing default/echomap, err googleapi: Error 400: The backendService resource 'projects/Kubernetesdev/global/backendServices/k8-be-30302' is already being used by 'projects/Kubernetesdev/global/urlMaps/k8-um-default-echomap' +I1006 18:58:57.029336 1 utils.go:83] Syncing default/echomap +``` + +This could be a bug or a quota limitation. In the case of the former, please head over to Slack or GitHub.
+ +* If you see a GET hanging, followed by a 502 with the following response: + +``` + + +502 Server Error + + +

Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
+ +``` +The loadbalancer is probably bootstrapping itself. + +* If a GET responds with a 404 and the following response: +``` + +

404. That’s an error. +

The requested URL /hostless was not found on this server. That’s all we know. +``` +It means you have lost your IP somehow, or just typed in the wrong IP. + +* If you see requests taking an abnormal amount of time, run the echoheaders pod and look for the client address +```shell +CLIENT VALUES: +client_address=('10.240.29.196', 56401) (10.240.29.196) +``` + +Then head over to the GCE node with internal IP 10.240.29.196 and check that the [Service is functioning](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/debugging-services.md) as expected. Remember that the GCE L7 is routing you through the NodePort service, and try to trace back. + +* Check if you can access the backend service directly via nodeip:nodeport +* Check the GCE console +* Make sure you only have a single loadbalancer controller running +* Make sure the initial GCE health checks have passed +* A crash loop looks like: +```shell +$ kubectl get pods +glbc-fjtlq 0/1 CrashLoopBackOff 17 1h +``` +If you hit that it means the controller isn't even starting. Re-check your input flags, especially the required ones. + ## Authentication to the Kubernetes API Server diff --git a/examples/PREREQUISITES.md b/examples/PREREQUISITES.md deleted file mode 100644 index 7c6bf2fd62..0000000000 --- a/examples/PREREQUISITES.md +++ /dev/null @@ -1,239 +0,0 @@ -# Prerequisites - -Many of the examples in this directory have common prerequisites. - -## Deploying a controller - -Unless you're running on a cloudprovider that supports Ingress out of the box -(eg: GCE/GKE), you will need to deploy a controller. You can do so following -[these instructions](/examples/deployment). - -## Firewall rules - -If you're using a generic controller (eg the nginx ingress controller), you -will need to create a firewall rule that targets port 80/443 on the specific VMs -the nginx controller is running on. On cloudproviders, the respective backend -will auto-create firewall rules for your Ingress. 
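For the manual case above, a sketch of a corresponding `gcloud` invocation follows. The rule name, instance tag, and source range are illustrative assumptions; substitute the network tag your nginx controller VMs actually carry.

```shell
# Compose the firewall rule described above: open 80/443 on the VMs running
# the nginx controller. NODE_TAG and the rule name are hypothetical.
NODE_TAG="nginx-ingress-node"
FIREWALL_CMD="gcloud compute firewall-rules create allow-nginx-ingress \
  --allow tcp:80,tcp:443 \
  --target-tags ${NODE_TAG} \
  --source-ranges 0.0.0.0/0"
echo "${FIREWALL_CMD}"
```

Run the printed command against the project that hosts your cluster; narrowing `--source-ranges` to the load balancer's address ranges is usually preferable to `0.0.0.0/0`.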
- -If you'd like to auto-create firewall rules for an Ingress controller, -you can put it behind a Service of `Type=Loadbalancer` as shown in -[this example](/examples/static-ip/nginx#acquiring-an-ip). - -## TLS certificates - -Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA -key/cert pair with an arbitrarily chosen hostname, created as follows: - -```console -$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc" -Generating a 2048 bit RSA private key -................+++ -................+++ -writing new private key to 'tls.key' ------ - -$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt -secret "tls-secret" created -``` - -## CA Authentication -You can act as your very own CA, or use an existing one. As a learning exercise, we're going to generate our -own CA, and also generate a client certificate. - -These instructions are based on the CoreOS OpenSSL [instructions](https://coreos.com/kubernetes/docs/latest/openssl.html). - -### Generating a CA - -First of all, you have to generate a CA. This is the certificate that will sign your client certificates. -In the real world, you may encounter CAs with intermediate certificates, like the following: - -```console -$ openssl s_client -connect www.google.com:443 -[...]
---- -Certificate chain - 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com - i:/C=US/O=Google Inc/CN=Google Internet Authority G2 - 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2 - i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA - 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA - i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority - -``` - -To generate our CA certificate, we have to run the following commands: - -```console -$ openssl genrsa -out ca.key 2048 -$ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=example-ca" -``` - -This will generate two files: a private key (ca.key) and a public key (ca.crt). This CA is valid for 10000 days. -The ca.crt can be used later when creating the CA authentication secret. - -### Generating the client certificate -The following steps generate a client certificate signed by the CA generated above. This client certificate can be -used to authenticate against a tls-auth configured ingress. - -First, we need to generate an 'openssl.cnf' file that will be used while signing the keys: - -``` -[req] -req_extensions = v3_req -distinguished_name = req_distinguished_name -[req_distinguished_name] -[ v3_req ] -basicConstraints = CA:FALSE -keyUsage = nonRepudiation, digitalSignature, keyEncipherment -``` - -Then, the user generates their own private key (which they need to keep secret) -and a CSR (Certificate Signing Request) that will be sent to the CA to sign and generate a certificate.
- -```console -$ openssl genrsa -out client1.key 2048 -$ openssl req -new -key client1.key -out client1.csr -subj "/CN=client1" -config openssl.cnf -``` - -Once the CA receives the generated 'client1.csr' file, it signs it and generates a client1.crt certificate: - -```console -$ openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1.crt -days 365 -extensions v3_req -extfile openssl.cnf -``` - -You'll then have three files: client1.key (the user's private key), client1.crt (the user's certificate) and client1.csr (a disposable CSR). - - -### Creating the CA Authentication secret -If you're using the CA Authentication feature, you need to generate a secret containing -all the authorized CAs. You must download them from your CA site in PEM format (like the following): - -``` ------BEGIN CERTIFICATE----- -[....] ------END CERTIFICATE----- -``` - -You can have as many certificates as you want. If they're in the binary DER format, -you can convert them as follows: - -```console -$ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem -``` - -Then, you have to concatenate them all into a single file named 'ca.crt', as follows: - - -```console -$ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt -``` - -The final step is to create a secret with the content of this file.
This secret is going to be used in -the TLS Auth directive: - -```console -$ kubectl create secret generic caingress --namespace=default --from-file=ca.crt= -``` - -Note: You can also generate the CA Authentication Secret along with the TLS Secret by using: -```console -$ kubectl create secret generic caingress --namespace=default --from-file=ca.crt= --from-file=tls.crt= --from-file=tls.key= -``` - -## Test HTTP Service - -All examples that require a test HTTP Service use the standard http-svc pod, -which you can deploy as follows - -```console -$ kubectl create -f http-svc.yaml -service "http-svc" created -replicationcontroller "http-svc" created - -$ kubectl get po -NAME READY STATUS RESTARTS AGE -http-svc-p1t3t 1/1 Running 0 1d - -$ kubectl get svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -http-svc 10.0.122.116 80:30301/TCP 1d -``` - -You can test that the HTTP Service works by exposing it temporarily -```console -$ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}' -"http-svc" patched - -$ kubectl get svc http-svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -http-svc 10.0.122.116 80:30301/TCP 1d - -$ kubectl describe svc http-svc -Name: http-svc -Namespace: default -Labels: app=http-svc -Selector: app=http-svc -Type: LoadBalancer -IP: 10.0.122.116 -LoadBalancer Ingress: 108.59.87.136 -Port: http 80/TCP -NodePort: http 30301/TCP -Endpoints: 10.180.1.6:8080 -Session Affinity: None -Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer - 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer - 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer - -$ curl 108.59.87.126 -CLIENT VALUES: -client_address=10.240.0.3 -command=GET -real path=/ -query=nil -request_version=1.1 -request_uri=http://108.59.87.136:8080/ - -SERVER VALUES: 
-server_version=nginx: 1.9.11 - lua: 10001 - -HEADERS RECEIVED: -accept=*/* -host=108.59.87.136 -user-agent=curl/7.46.0 -BODY: --no body in request- - -$ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}' -"http-svc" patched -``` - -## Ingress Class - -If you have multiple Ingress controllers in a single cluster, you can pick one -by specifying the `ingress.class` annotation, eg creating an Ingress with an -annotation like - -```yaml -metadata: - name: foo - annotations: - kubernetes.io/ingress.class: "gce" -``` - -will target the GCE controller, forcing the nginx controller to ignore it, while -an annotation like - -```yaml -metadata: - name: foo - annotations: - kubernetes.io/ingress.class: "nginx" -``` - -will target the nginx controller, forcing the GCE controller to ignore it. - -__Note__: Deploying multiple ingress controller and not specifying the -annotation will result in both controllers fighting to satisfy the Ingress. diff --git a/examples/backside-https/app.yaml b/examples/backside-https/app.yaml deleted file mode 100644 index 6a01803a7d..0000000000 --- a/examples/backside-https/app.yaml +++ /dev/null @@ -1,50 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: my-echo-deploy -spec: - replicas: 2 - template: - metadata: - labels: - app: echo - spec: - containers: - - name: echoserver - image: nicksardo/echoserver:latest - imagePullPolicy: Always - ports: - - name: echo-443 - containerPort: 443 - # readinessProbe: # Health check settings can be retrieved from an HTTPS readinessProbe as well - # httpGet: - # path: /healthcheck # Custom health check path for testing - # scheme: HTTPS - # port: echo-443 ---- -apiVersion: v1 -kind: Service -metadata: - name: my-echo-svc - annotations: - service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}' # Must map port-name to HTTPS for the GCP ingress controller - labels: - app: echo -spec: - type: NodePort - ports: - - port: 12345 # Port doesn't matter as nodeport is 
used for Ingress - targetPort: echo-443 - protocol: TCP - name: my-https-port - selector: - app: echo ---- -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: my-echo-ingress -spec: - backend: - serviceName: my-echo-svc - servicePort: my-https-port diff --git a/examples/deployment/README.md b/examples/deployment/README.md deleted file mode 100644 index c428be4d56..0000000000 --- a/examples/deployment/README.md +++ /dev/null @@ -1,68 +0,0 @@ -# Deploying the GCE Ingress controller - -This example demonstrates the deployment of a GCE Ingress controller. - -Note: __all GCE/GKE clusters already have an Ingress controller running -on the master. The only reason to deploy another GCE controller is if you want -to debug or otherwise observe its operation via logs.__ - -__Before deploying another one in your cluster, make sure you disable the master controller.__ - -## Disabling the master controller - -See the hard disable options [here](/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller). 
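The log lines in the walkthrough below mention GCE resources the controller names after the service node port and the cluster UID. A small sketch of that convention, derived from the log output shown here (treat it as observational rather than an API guarantee):

```shell
# Resource names observed in the controller logs: backends and health checks
# are k8s-be-<nodePort>--<clusterUID>; the instance group is k8s-ig--<clusterUID>.
CLUSTER_UID="32658fa96c080068"   # from "Using saved cluster uid" in the logs
NODE_PORT=30301                  # node port of http-svc

echo "k8s-be-${NODE_PORT}--${CLUSTER_UID}"   # backend / health check name
echo "k8s-ig--${CLUSTER_UID}"                # instance group name
```

Knowing this scheme makes it much easier to match `gcloud compute` output back to a specific Service while debugging.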
- -## Deploying a new controller - -The following command deploys a GCE Ingress controller in your cluster: - -```console -$ kubectl create -f gce-ingress-controller.yaml -service "default-http-backend" created -replicationcontroller "l7-lb-controller" created - -$ kubectl get po -l name=glbc -NAME READY STATUS RESTARTS AGE -l7-lb-controller-1s22c 2/2 Running 0 27s -``` - -Now you can create an Ingress and observe the controller: - -```console -$ kubectl create -f gce-tls-ingress.yaml -ingress "test" created - -$ kubectl logs l7-lb-controller-1s22c -c l7-lb-controller -I0201 01:03:17.387548 1 main.go:179] Starting GLBC image: glbc:0.9.2, cluster name -I0201 01:03:18.459740 1 main.go:291] Using saved cluster uid "32658fa96c080068" -I0201 01:03:18.459771 1 utils.go:122] Changing cluster name from to 32658fa96c080068 -I0201 01:03:18.461652 1 gce.go:331] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)} -I0201 01:03:18.553142 1 cluster_manager.go:264] Created GCE client without a config file -I0201 01:03:18.553773 1 controller.go:234] Starting loadbalancer controller -I0201 01:04:58.314271 1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test", UID:"73549716-e81a-11e6-a8c5-42010af00002", APIVersion:"extensions", ResourceVersion:"673016", FieldPath:""}): type: 'Normal' reason: 'ADD' default/test -I0201 01:04:58.413616 1 instances.go:76] Creating instance group k8s-ig--32658fa96c080068 in zone us-central1-b -I0201 01:05:01.998169 1 gce.go:2084] Adding port 30301 to instance group k8s-ig--32658fa96c080068 with 0 ports -I0201 01:05:02.444014 1 backends.go:149] Creating backend for 1 instance groups, port 30301 named port &{port30301 30301 []} -I0201 01:05:02.444175 1 utils.go:495] No pod in service http-svc with node port 30301 has declared a matching readiness probe for health checks. 
-I0201 01:05:02.555599 1 healthchecks.go:62] Creating health check k8s-be-30301--32658fa96c080068 -I0201 01:05:11.300165 1 gce.go:2084] Adding port 31938 to instance group k8s-ig--32658fa96c080068 with 1 ports -I0201 01:05:11.743914 1 backends.go:149] Creating backend for 1 instance groups, port 31938 named port &{port31938 31938 []} -I0201 01:05:11.744008 1 utils.go:495] No pod in service default-http-backend with node port 31938 has declared a matching readiness probe for health checks. -I0201 01:05:11.811972 1 healthchecks.go:62] Creating health check k8s-be-31938--32658fa96c080068 -I0201 01:05:19.871791 1 loadbalancers.go:121] Creating l7 default-test--32658fa96c080068 -... - -$ kubectl get ing test -NAME HOSTS ADDRESS PORTS AGE -test * 35.186.208.106 80, 443 4m - -$ curl 35.186.208.106 -kL -CLIENT VALUES: -client_address=10.180.3.1 -command=GET -real path=/ -query=nil -request_version=1.1 -request_uri=http://35.186.208.106:8080/ -... -``` diff --git a/examples/deployment/gce-ingress-controller.yaml b/examples/deployment/gce-ingress-controller.yaml deleted file mode 100644 index 61f74ec7d7..0000000000 --- a/examples/deployment/gce-ingress-controller.yaml +++ /dev/null @@ -1,81 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - # This must match the --default-backend-service argument of the l7 lb - # controller and is required because GCE mandates a default backend. - name: default-http-backend - labels: - k8s-app: glbc -spec: - # The default backend must be of type NodePort. - type: NodePort - ports: - - port: 80 - targetPort: 8080 - protocol: TCP - name: http - selector: - k8s-app: glbc ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: l7-lb-controller - labels: - k8s-app: glbc - version: v0.9.0 -spec: - # There should never be more than 1 controller alive simultaneously. 
- replicas: 1 - selector: - k8s-app: glbc - version: v0.9.0 - template: - metadata: - labels: - k8s-app: glbc - version: v0.9.0 - name: glbc - spec: - terminationGracePeriodSeconds: 600 - containers: - - name: default-http-backend - # Any image is permissible as long as: - # 1. It serves a 404 page at / - # 2. It serves 200 on a /healthz endpoint - image: k8s.gcr.io/defaultbackend-amd64:1.5 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - ports: - - containerPort: 8080 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi - - image: gcr.io/google_containers/glbc:0.9.2 - livenessProbe: - httpGet: - path: /healthz - port: 8081 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - name: l7-lb-controller - resources: - limits: - cpu: 100m - memory: 100Mi - requests: - cpu: 100m - memory: 50Mi - args: - - --default-backend-service=default/default-http-backend - - --sync-period=300s diff --git a/examples/deployment/gce-tls-ingress.yaml b/examples/deployment/gce-tls-ingress.yaml deleted file mode 100644 index 705a17d36e..0000000000 --- a/examples/deployment/gce-tls-ingress.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: test - annotations: - kubernetes.io/ingress.class: "gce" -spec: - tls: - # This assumes tls-secret exists. - - secretName: tls-secret - backend: - # This assumes http-svc exists and routes to healthy endpoints. - serviceName: http-svc - servicePort: 80 - diff --git a/examples/health-checks/README.md b/examples/health-checks/README.md deleted file mode 100644 index dbf171bedf..0000000000 --- a/examples/health-checks/README.md +++ /dev/null @@ -1,74 +0,0 @@ -# Simple HTTP health check example - -The GCE Ingress controller adopts the readiness probe from the matching endpoints, provided the readiness probe doesn't require HTTPS or special headers.
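A minimal probe the controller can adopt is sketched below, written to a temp file for inspection. The path and timings mirror the app used in this example; the controller copies the HTTP path and translates the probe timeouts into GCE health check settings.

```shell
# Write a minimal HTTP readiness probe (plain HTTP, no custom headers) -- the
# shape the controller can adopt. Values mirror health_check_app.yaml below.
cat > /tmp/probe.yaml <<'EOF'
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 1
  timeoutSeconds: 1
EOF
# The controller reads the httpGet path/port and the timing fields.
grep 'path:' /tmp/probe.yaml
```

An HTTPS probe or one that sets request headers does not qualify, and the controller falls back to its default `/` health check.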
- -Create the following app: -```console -$ kubectl create -f health_check_app.yaml -replicationcontroller "echoheaders" created -You have exposed your service on an external port on all nodes in your -cluster. If you want to expose this service to the external internet, you may -need to set up firewall rules for the service port(s) (tcp:31165) to serve traffic. - -See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details. -service "echoheadersx" created -You have exposed your service on an external port on all nodes in your -cluster. If you want to expose this service to the external internet, you may -need to set up firewall rules for the service port(s) (tcp:31020) to serve traffic. - -See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details. -service "echoheadersy" created -ingress "echomap" created -``` - -You should soon find an Ingress that is backed by a GCE Loadbalancer. - -```console -$ kubectl describe ing echomap -Name: echomap -Namespace: default -Address: 107.178.255.228 -Default backend: default-http-backend:80 (10.180.0.9:8080,10.240.0.2:8080) -Rules: - Host Path Backends - ---- ---- -------- - foo.bar.com - /foo echoheadersx:80 () - bar.baz.com - /bar echoheadersy:80 () - /foo echoheadersx:80 () -Annotations: - target-proxy: k8s-tp-default-echomap--a9d60e8176d933ee - url-map: k8s-um-default-echomap--a9d60e8176d933ee - backends: {"k8s-be-31020--a9d60e8176d933ee":"HEALTHY","k8s-be-31165--a9d60e8176d933ee":"HEALTHY","k8s-be-31686--a9d60e8176d933ee":"HEALTHY"} - forwarding-rule: k8s-fw-default-echomap--a9d60e8176d933ee -Events: - FirstSeen LastSeen Count From SubobjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 17m 17m 1 {loadbalancer-controller } Normal ADD default/echomap - 15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 107.178.255.228 - -$ curl 107.178.255.228/foo -H 'Host:foo.bar.com' -CLIENT VALUES: -client_address=10.240.0.5 
-command=GET -real path=/foo -query=nil -request_version=1.1 -request_uri=http://foo.bar.com:8080/foo -... -``` - -You can confirm which health check it's using in one of two ways: -* Through the cloud console: Compute > Health checks > look up your health check. It takes the form k8s-be-nodePort-hash, where nodePort in the example above is 31165 and 31020, as shown by the kubectl output. -* Through gcloud: run `gcloud compute http-health-checks list` - -## Limitations - -A few points to note: -* The pod's `containerPort` field must be defined -* The service's `targetPort` field must point to the pod port's `containerPort` value or `name`. Note that the `targetPort` defaults to the `port` value if not defined -* The readiness probe must be exposed on the port matching the `servicePort` specified in the Ingress -* The readiness probe cannot have special requirements like headers -* The probe timeouts are translated to GCE health check timeouts -* You must create the pods backing the endpoints with the given readiness probe. This *will not* work if you update the replication controller with a different readiness probe.
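The port-matching limitations above can be sanity-checked mechanically. This is a hypothetical check (the controller does not expose such a tool); the port values mirror the echoheaders app.

```shell
# Verify the Service targetPort lines up with the pod's containerPort -- a
# precondition for the controller adopting the readiness probe.
CONTAINER_PORT=8080   # pod's containerPort (must be defined)
TARGET_PORT=8080      # Service targetPort (value or port name must match)

if [ "$TARGET_PORT" = "$CONTAINER_PORT" ]; then
  echo "targetPort matches containerPort: probe can be adopted"
else
  echo "mismatch: controller falls back to the default health check"
fi
```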
diff --git a/examples/health-checks/health_check_app.yaml b/examples/health-checks/health_check_app.yaml deleted file mode 100644 index b8d36bf38d..0000000000 --- a/examples/health-checks/health_check_app.yaml +++ /dev/null @@ -1,100 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: echoheaders -spec: - replicas: 1 - template: - metadata: - labels: - app: echoheaders - spec: - containers: - - name: echoheaders - image: gcr.io/google_containers/echoserver:1.8 - ports: - - containerPort: 8080 - readinessProbe: - httpGet: - path: /healthz - port: 8080 - periodSeconds: 1 - timeoutSeconds: 1 - successThreshold: 1 - failureThreshold: 10 - env: - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - ---- -apiVersion: v1 -kind: Service -metadata: - name: echoheadersx - labels: - app: echoheaders -spec: - type: NodePort - ports: - - port: 80 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: echoheaders ---- -apiVersion: v1 -kind: Service -metadata: - name: echoheadersy - labels: - app: echoheaders -spec: - type: NodePort - ports: - - port: 80 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: echoheaders ---- -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: echomap -spec: - rules: - - host: foo.bar.com - http: - paths: - - path: /foo - backend: - serviceName: echoheadersx - servicePort: 80 - - host: bar.baz.com - http: - paths: - - path: /bar - backend: - serviceName: echoheadersy - servicePort: 80 - - path: /foo - backend: - serviceName: echoheadersx - servicePort: 80 - diff --git a/examples/http-svc.yaml b/examples/http-svc.yaml deleted file mode 100644 index ff25ab0048..0000000000 --- a/examples/http-svc.yaml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: v1 -kind: 
Service -metadata: - name: http-svc - labels: - app: http-svc -spec: - type: NodePort - ports: - - port: 80 - # This port needs to be available on all nodes in the cluster - nodePort: 30301 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: http-svc ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: http-svc -spec: - replicas: 1 - template: - metadata: - labels: - app: http-svc - spec: - containers: - - name: http-svc - image: gcr.io/google_containers/echoserver:1.8 - ports: - - containerPort: 8080 - env: - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP diff --git a/examples/https/Makefile b/examples/https/Makefile deleted file mode 100644 index d4fd4deecf..0000000000 --- a/examples/https/Makefile +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright 2016 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -all: - -KEY=/tmp/tls.key -CERT=/tmp/tls.crt -HOST=example.com -NAME=tls-secret - -keys: - # The CName used here is specific to the service specified in nginx-app.yaml. 
- openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=$(HOST)/O=$(HOST)" - -clean: - rm $(KEY) - rm $(CERT) diff --git a/examples/https/README.md b/examples/https/README.md deleted file mode 100644 index 8b791a06c3..0000000000 --- a/examples/https/README.md +++ /dev/null @@ -1,33 +0,0 @@ -# Simple TLS example - -Create secret -```console -$ make keys -$ kubectl create secret tls foo-secret --key /tmp/tls.key --cert /tmp/tls.crt -``` - -Make sure you have the l7 controller running: -```console -$ kubectl --namespace=kube-system get pod -l name=glbc -NAME -l7-lb-controller-v0.6.0-1770t ... -``` -Also make sure you have a [firewall rule](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md#creating-the-fir-glbc-health-checks) for the node port of the Service. - -Create Ingress: -```console -$ kubectl create -f tls-app.yaml -``` - -Test reachability: -```console -$ curl --resolve example.com:443:130.211.21.233 https://example.com --cacert /tmp/tls.crt -CLIENT VALUES: -client_address=10.240.0.4 -command=GET -real path=/ -query=nil -request_version=1.1 -request_uri=http://bitrot.com:8080/ -... 
-``` diff --git a/examples/https/tls-app.yaml b/examples/https/tls-app.yaml deleted file mode 100644 index 7de14d8660..0000000000 --- a/examples/https/tls-app.yaml +++ /dev/null @@ -1,46 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: echoheaders-https - labels: - app: echoheaders-https -spec: - type: NodePort - ports: - - port: 80 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: echoheaders-https ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: echoheaders-https -spec: - replicas: 2 - template: - metadata: - labels: - app: echoheaders-https - spec: - containers: - - name: echoheaders-https - image: gcr.io/google_containers/echoserver:1.3 - ports: - - containerPort: 8080 ---- -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: test -spec: - tls: - # This assumes tls-secret exists. - # To generate it run the make in this directory. - - secretName: tls-secret - backend: - serviceName: echoheaders-https - servicePort: 80 - diff --git a/examples/multi-path/gce-multi-path-ingress.yaml b/examples/multi-path/gce-multi-path-ingress.yaml deleted file mode 100644 index 94aba6a946..0000000000 --- a/examples/multi-path/gce-multi-path-ingress.yaml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: echomap -spec: - backend: - # Re-use echoheadersx as the default backend so we stay under the default - # quota for gce BackendServices. 
- serviceName: echoheadersx - servicePort: 80 - rules: - - host: foo.bar.com - http: - paths: - - path: /foo - backend: - serviceName: echoheadersx - servicePort: 80 - - host: bar.baz.com - http: - paths: - - path: /bar - backend: - serviceName: echoheadersy - servicePort: 80 - - path: /foo - backend: - serviceName: echoheadersx - servicePort: 80 diff --git a/examples/multi-path/svc.yaml b/examples/multi-path/svc.yaml deleted file mode 100644 index 78b6debb3b..0000000000 --- a/examples/multi-path/svc.yaml +++ /dev/null @@ -1,69 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: echoheadersx - labels: - app: echoheaders -spec: - type: NodePort - ports: - - port: 80 - nodePort: 30301 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: echoheaders ---- -apiVersion: v1 -kind: Service -metadata: - name: echoheadersy - labels: - app: echoheaders -spec: - type: NodePort - ports: - - port: 80 - nodePort: 30284 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: echoheaders ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: echoheaders -spec: - replicas: 1 - template: - metadata: - labels: - app: echoheaders - spec: - containers: - - name: echoheaders - image: gcr.io/google_containers/echoserver:1.8 - ports: - - containerPort: 8080 - env: - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - ---- diff --git a/examples/static-ip/README.md b/examples/static-ip/README.md deleted file mode 100644 index f05b66195e..0000000000 --- a/examples/static-ip/README.md +++ /dev/null @@ -1,128 +0,0 @@ -# Static IPs - -This example demonstrates how to assign a [static-ip](https://cloud.google.com/compute/docs/configure-instance-ip-addresses#reserve_new_static) to an Ingress on GCE. 
- -## Prerequisites - -You need a [TLS cert](/examples/PREREQUISITES.md#tls-certificates) and a [test HTTP service](/examples/PREREQUISITES.md#test-http-service) for this example. -You will also need to make sure your Ingress targets exactly one Ingress -controller by specifying the [ingress.class annotation](/examples/PREREQUISITES.md#ingress-class), -and that you have an ingress controller [running](/examples/deployment) in your cluster. - -## Acquiring a static IP - -In GCE, a static IP belongs to a given project until the owner decides to release -it. If you create a static IP and assign it to an Ingress, deleting the Ingress -or tearing down the GKE cluster *will not* delete the static IP. You can check -the static IPs you have as follows: - -```console -$ gcloud compute addresses list --global -NAME REGION ADDRESS STATUS -test-ip 35.186.221.137 RESERVED - -$ gcloud compute addresses list -NAME REGION ADDRESS STATUS -test-ip 35.186.221.137 RESERVED -test-ip us-central1 35.184.21.228 RESERVED -``` - -Note the difference between a regional and a global static IP. Only global -static IPs will work with Ingress. If you don't already have an IP, you can -create one: - -```console -$ gcloud compute addresses create test-ip --global -Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/global/addresses/test-ip].
--- -address: 35.186.221.137 -creationTimestamp: '2017-01-31T10:32:29.889-08:00' -description: '' -id: '9221457935391876818' -kind: compute#address -name: test-ip -selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/global/addresses/test-ip -status: RESERVED -``` - -## Assigning a static IP to an Ingress - -You can now add the static IP from the previous step to an Ingress -by specifying the `kubernetes.io/ingress.global-static-ip-name` annotation. -The example yaml in this directory already has it set to `test-ip`: - -```console -$ kubectl create -f gce-static-ip-ingress.yaml -ingress "static-ip" created - -$ gcloud compute addresses list test-ip -NAME REGION ADDRESS STATUS -test-ip 35.186.221.137 IN_USE -test-ip us-central1 35.184.21.228 RESERVED - -$ kubectl get ing -NAME HOSTS ADDRESS PORTS AGE -static-ip * 35.186.221.137 80, 443 1m - -$ curl 35.186.221.137 -Lk -CLIENT VALUES: -client_address=10.180.1.1 -command=GET -real path=/ -query=nil -request_version=1.1 -request_uri=http://35.186.221.137:8080/ -... -``` - -## Retaining the static IP - -You can test retention by deleting the Ingress: - -```console -$ kubectl delete -f gce-static-ip-ingress.yaml -ingress "static-ip" deleted - -$ kubectl get ing -No resources found. - -$ gcloud compute addresses list test-ip --global -NAME REGION ADDRESS STATUS -test-ip 35.186.221.137 RESERVED -``` - -## Promote ephemeral to static IP - -If you simply create an HTTP Ingress resource, it gets an ephemeral IP: - -```console -$ kubectl create -f gce-http-ingress.yaml -ingress "http-ingress" created - -$ kubectl get ing -NAME HOSTS ADDRESS PORTS AGE -http-ingress * 35.186.195.33 80 1h - -$ gcloud compute forwarding-rules list -NAME REGION IP_ADDRESS IP_PROTOCOL TARGET -k8s-fw-default-http-ingress--32658fa96c080068 35.186.195.33 TCP k8s-tp-default-http-ingress--32658fa96c080068 -``` - -Note that because this is an ephemeral IP, it won't show up in the output of -`gcloud compute addresses list`.
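An ephemeral IP like this can also be promoted to a static address in place, rather than reserving a fresh one up front. A sketch, assuming the ephemeral address `35.186.195.33` from the output above; the address name `promoted-ip` is illustrative:

```console
$ gcloud compute addresses create promoted-ip --global --addresses 35.186.195.33
```

Once promoted, the address shows up in `gcloud compute addresses list --global` and survives deletion of the Ingress, just like any other reserved static IP.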
- -If you either directly create an Ingress with a TLS section, or modify an HTTP -Ingress to have a TLS section, it gets a static IP. - -```console -$ kubectl patch ing http-ingress -p '{"spec":{"tls":[{"secretName":"tls-secret"}]}}' -"http-ingress" patched - -$ kubectl get ing -NAME HOSTS ADDRESS PORTS AGE -http-ingress * 35.186.195.33 80, 443 1h - -$ gcloud compute addresses list -NAME REGION ADDRESS STATUS -k8s-fw-default-http-ingress--32658fa96c080068 35.186.195.33 IN_USE -``` diff --git a/examples/static-ip/gce-http-ingress.yaml b/examples/static-ip/gce-http-ingress.yaml deleted file mode 100644 index ca0e34ca57..0000000000 --- a/examples/static-ip/gce-http-ingress.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: http-ingress - annotations: - kubernetes.io/ingress.class: "gce" -spec: - backend: - # This assumes http-svc exists and routes to healthy endpoints. - serviceName: http-svc - servicePort: 80 - diff --git a/examples/static-ip/gce-static-ip-ingress.yaml b/examples/static-ip/gce-static-ip-ingress.yaml deleted file mode 100644 index f91d0cf535..0000000000 --- a/examples/static-ip/gce-static-ip-ingress.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: static-ip - # Assumes a global static IP with the same name exists. - # You can acquire a static IP by running - # gcloud compute addresses create test-ip --global - annotations: - kubernetes.io/ingress.global-static-ip-name: "test-ip" -spec: - tls: - # This assumes tls-secret exists. - - secretName: tls-secret - backend: - # This assumes http-svc exists and routes to healthy endpoints.
- serviceName: http-svc - servicePort: 80 diff --git a/examples/tls-termination/README.md b/examples/tls-termination/README.md deleted file mode 100644 index 8a711a2bf1..0000000000 --- a/examples/tls-termination/README.md +++ /dev/null @@ -1,78 +0,0 @@ -# TLS termination - -This example demonstrates how to terminate TLS through the GCE Ingress controller. - -## Prerequisites - -You need a [TLS cert](/examples/PREREQUISITES.md#tls-certificates) and a [test HTTP service](/examples/PREREQUISITES.md#test-http-service) for this example. -You will also need to make sure your Ingress targets exactly one Ingress -controller by specifying the [ingress.class annotation](/examples/PREREQUISITES.md#ingress-class), -and that you have an ingress controller [running](/examples/deployment) in your cluster. - -## Deployment - -The following command instructs the controller to terminate traffic using -the provided TLS cert, and forward unencrypted HTTP traffic to the test -HTTP service. - -```console -$ kubectl create -f gce-tls-ingress.yaml -``` - -## Validation - -You can confirm that the Ingress works.
- -```console -$ kubectl describe ing test -Name: test -Namespace: default -Address: 35.186.221.137 -Default backend: http-svc:80 (10.180.1.9:8080,10.180.3.6:8080) -TLS: - tls-secret terminates -Rules: - Host Path Backends - ---- ---- -------- - * * http-svc:80 (10.180.1.9:8080,10.180.3.6:8080) -Annotations: - target-proxy: k8s-tp-default-test--32658fa96c080068 - url-map: k8s-um-default-test--32658fa96c080068 - backends: {"k8s-be-30301--32658fa96c080068":"Unknown"} - forwarding-rule: k8s-fw-default-test--32658fa96c080068 - https-forwarding-rule: k8s-fws-default-test--32658fa96c080068 - https-target-proxy: k8s-tps-default-test--32658fa96c080068 - static-ip: k8s-fw-default-test--32658fa96c080068 -Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 2m 2m 1 {loadbalancer-controller } Normal ADD default/test - 1m 1m 1 {loadbalancer-controller } Normal CREATE ip: 35.186.221.137 - 1m 1m 3 {loadbalancer-controller } Normal Service default backend set to http-svc:30301 - -$ curl 35.186.221.137 -curl: (60) SSL certificate problem: self signed certificate -More details here: http://curl.haxx.se/docs/sslcerts.html - -$ curl 35.186.221.137 -kl -CLIENT VALUES: -client_address=10.240.0.3 -command=GET -real path=/ -query=nil -request_version=1.1 -request_uri=http://35.186.221.137:8080/ - -SERVER VALUES: -server_version=nginx: 1.9.11 - lua: 10001 - -HEADERS RECEIVED: -accept=*/* -connection=Keep-Alive -host=35.186.221.137 -user-agent=curl/7.46.0 -via=1.1 google -x-cloud-trace-context=bfa123130fd623989cca0192e43d9ba4/8610689379063045825 -x-forwarded-for=104.132.0.80, 35.186.221.137 -x-forwarded-proto=https -``` diff --git a/examples/tls-termination/gce-tls-ingress.yaml b/examples/tls-termination/gce-tls-ingress.yaml deleted file mode 100644 index 705a17d36e..0000000000 --- a/examples/tls-termination/gce-tls-ingress.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: 
Ingress -metadata: - name: test - annotations: - kubernetes.io/ingress.class: "gce" -spec: - tls: - # This assumes tls-secret exists. - - secretName: tls-secret - backend: - # This assumes http-svc exists and routes to healthy endpoints. - serviceName: http-svc - servicePort: 80 - diff --git a/examples/websocket/Dockerfile b/examples/websocket/Dockerfile deleted file mode 100644 index b5e679af22..0000000000 --- a/examples/websocket/Dockerfile +++ /dev/null @@ -1,5 +0,0 @@ -FROM alpine:3.5 - -COPY wsserver /wsserver - -CMD ["/wsserver"] diff --git a/examples/websocket/README.md b/examples/websocket/README.md deleted file mode 100644 index 7735af94b0..0000000000 --- a/examples/websocket/README.md +++ /dev/null @@ -1,83 +0,0 @@ -# Simple Websocket Example - -Any websocket server will suffice; however, for the purpose of demonstration, we'll use the gorilla/websocket package in a Go binary. - -### Build -```shell -➜ CGO_ENABLED=0 go build -o wsserver -``` - -### Containerize -```shell -➜ docker build -t [YOUR_IMAGE] . -... -➜ docker push [YOUR_IMAGE] -... -``` - -### Deploy -Update the image in the `Deployment` to your newly created image: -```shell -➜ vi deployment.yaml -# Change image to your own -``` - -```shell -➜ kubectl create -f deployment.yaml -deployment "ws-example" created -service "ws-example-svc" created -ingress "ws-example-ing" created - -``` - -### Test -Retrieve the ingress external IP: -```shell -➜ kubectl get ing/ws-example-ing -NAME HOSTS ADDRESS PORTS AGE -ws-example-ing * xxx.xxx.xxx.xxx 80 3m -``` - -Wait for the loadbalancer to be created and functioning. Visit http://xxx.xxx.xxx.xxx and click 'Connect'. You should receive messages from the server with timestamps. - -### Change backend timeout - -At this point, the websocket connection will be destroyed by the HTTP(S) Load Balancer after 30 seconds, which is the default timeout. Note: this timeout is not an idle timeout - it's a timeout on the connection lifetime.
- -Currently, the GCE ingress controller does not provide a way to set this timeout via the Ingress specification. You'll need to change this value either through the GCP Cloud Console or through the gcloud CLI. - - -```shell -➜ kubectl describe ingress/ws-example-ing -Name: ws-example-ing -Namespace: default -Address: xxxxxxxxxxxx -Default backend: ws-example-svc:80 (10.48.10.12:8080,10.48.5.14:8080,10.48.7.11:8080) -Rules: - Host Path Backends - ---- ---- -------- - * * ws-example-svc:80 (10.48.10.12:8080,10.48.5.14:8080,10.48.7.11:8080) -Annotations: - target-proxy: k8s-tp-default-ws-example-ing--52aa8ae8221ffa9c - url-map: k8s-um-default-ws-example-ing--52aa8ae8221ffa9c - backends: {"k8s-be-31127--52aa8ae8221ffa9c":"HEALTHY"} - forwarding-rule: k8s-fw-default-ws-example-ing--52aa8ae8221ffa9c -Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 12m 12m 1 loadbalancer-controller Normal ADD default/ws-example-ing - 11m 11m 1 loadbalancer-controller Normal CREATE ip: xxxxxxxxxxxx - 11m 9m 5 loadbalancer-controller Normal Service default backend set to ws-example-svc:31127 -``` - -Retrieve the name of the backend service from the `Annotations` section. - -Update the timeout field for every backend that needs a higher timeout. - -```shell -➜ export BACKEND=k8s-be-31127--52aa8ae8221ffa9c -➜ gcloud compute backend-services update $BACKEND --global --timeout=86400 # seconds -Updated [https://www.googleapis.com/compute/v1/projects/xxxxxxxxx/global/backendServices/k8s-be-31127--52aa8ae8221ffa9c]. -``` - -Wait up to twenty minutes for this change to propagate.
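Rather than waiting for a dropped connection to confirm the change, you can read the timeout back from the backend service. A sketch, assuming the same `$BACKEND` exported above; `timeoutSec` is the field name used by the compute API:

```shell
➜ gcloud compute backend-services describe $BACKEND --global --format='value(timeoutSec)'
86400
```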
diff --git a/examples/websocket/deployment.yaml b/examples/websocket/deployment.yaml deleted file mode 100644 index 06984740f3..0000000000 --- a/examples/websocket/deployment.yaml +++ /dev/null @@ -1,47 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: ws-example -spec: - replicas: 3 - template: - metadata: - labels: - app: wseg - spec: - containers: - - name: websocketexample - image: [YOUR_IMAGE] - imagePullPolicy: Always - ports: - - name: http - containerPort: 8080 - env: - - name: podname - valueFrom: - fieldRef: - fieldPath: metadata.name ---- -apiVersion: v1 -kind: Service -metadata: - name: ws-example-svc - labels: - app: wseg -spec: - type: NodePort - ports: - - port: 80 - targetPort: 8080 - protocol: TCP - selector: - app: wseg ---- -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: ws-example-ing -spec: - backend: - serviceName: ws-example-svc - servicePort: 80 diff --git a/examples/websocket/server.go b/examples/websocket/server.go deleted file mode 100644 index af9100c1a9..0000000000 --- a/examples/websocket/server.go +++ /dev/null @@ -1,192 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package main - -import ( - "fmt" - "html/template" - "log" - "net/http" - "os" - "time" - - "github.com/gorilla/websocket" -) - -var podName string -var upgrader = websocket.Upgrader{ - CheckOrigin: func(r *http.Request) bool { - return true // Ignore http origin - }, -} - -func init() { - podName = os.Getenv("podname") - if podName == "" { - podName = "UNKNOWN" - } -} - -func main() { - log.Println("Starting on :8080") - http.HandleFunc("/ws", ws) - http.HandleFunc("/", root) - log.Fatal(http.ListenAndServe(":8080", nil)) -} - -func ws(w http.ResponseWriter, r *http.Request) { - log.Println("Received request", r.RemoteAddr) - c, err := upgrader.Upgrade(w, r, nil) - if err != nil { - log.Println("failed to upgrade:", err) - return - } - defer c.Close() - - s := fmt.Sprintf("Connected to %v", podName) - if err := c.WriteMessage(websocket.TextMessage, []byte(s)); err != nil { - log.Println("err:", err) - } - handleWSConn(c) -} - -func handleWSConn(c *websocket.Conn) { - stop := make(chan struct{}) - in := make(chan string) - ticker := time.NewTicker(5 * time.Second) - - go func() { - for { - _, message, err := c.ReadMessage() - if err != nil { - log.Println("Error while reading:", err) - close(stop) - break - } - in <- string(message) - } - log.Println("Stop reading of connection from", c.RemoteAddr()) - }() - - for { - var msg string - select { - case t := <-ticker.C: - msg = fmt.Sprintf("%s reports time: %v", podName, t.String()) - case m := <-in: - msg = m - case <-stop: - break - } - - if err := c.WriteMessage(websocket.TextMessage, []byte(msg)); err != nil { - log.Println("Error while writing:", err) - break - } - } - log.Println("Stop handling of connection from", c.RemoteAddr()) -} - -func root(w http.ResponseWriter, r *http.Request) { - if r.URL.Path != "/" { - http.NotFound(w, r) - return - } - s := struct { - Host string - PodName string - }{ - Host: r.Host, - PodName: podName, - } - testPage.Execute(w, s) -} - -var testPage = 
template.Must(template.New("").Parse(` -Websocket Server - -Page served by pod '{{.PodName}}' - -(remaining test-page markup omitted here: the Connect button, message log, and client-side websocket script) -`))