Merge master into release-1.10
chenopis committed Mar 16, 2018
2 parents 39d6d27 + 4ac2583 commit a6fd58d
Showing 24 changed files with 341 additions and 42 deletions.
3 changes: 1 addition & 2 deletions _data/setup.yml
@@ -27,9 +27,8 @@ toc:
- docs/getting-started-guides/alternatives.md

- title: Hosted Solutions
landing_page: /docs/setup/hosted-solutions/overview/
landing_page: /docs/setup/pick-right-solution/#hosted-solutions
section:
- docs/setup/hosted-solutions/overview.md
- title: Running Kubernetes on Google Kubernetes Engine
path: https://cloud.google.com/kubernetes-engine/docs/before-you-begin/
- title: Running Kubernetes on Azure Container Service
1 change: 1 addition & 0 deletions _data/tasks.yml
@@ -152,6 +152,7 @@ toc:
- docs/tasks/administer-cluster/kubeadm-upgrade-1-7.md
- docs/tasks/administer-cluster/kubeadm-upgrade-1-8.md
- docs/tasks/administer-cluster/kubeadm-upgrade-1-9.md
- docs/tasks/administer-cluster/kubeadm-upgrade-ha.md
- docs/tasks/administer-cluster/namespaces.md
- docs/tasks/administer-cluster/namespaces-walkthrough.md
- docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
1 change: 1 addition & 0 deletions _includes/footer-scripts.html
@@ -32,6 +32,7 @@

function hideNav(toc){
if (!toc) toc = document.querySelector('#docsToc')
if (!toc) return
var container = toc.querySelector('.container')

// container is built dynamically, so it may not be present on the first runloop
2 changes: 1 addition & 1 deletion docs/admin/admission-controllers.md
@@ -330,7 +330,7 @@ When the admission controller sets a compute resource request, it does this by *
the pod spec rather than mutating the `container.resources` fields.
The annotations added contain the information on what compute resources were auto-populated.

See the [InitialResouces proposal](https://git.k8s.io/community/contributors/design-proposals/autoscaling/initial-resources.md) for more details.
See the [InitialResources proposal](https://git.k8s.io/community/contributors/design-proposals/autoscaling/initial-resources.md) for more details.

### LimitPodHardAntiAffinityTopology

2 changes: 1 addition & 1 deletion docs/admin/authentication.md
@@ -373,7 +373,7 @@ users:
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc
```
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `kube/.config`.
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.


##### Option 2 - Use the `--token` Option
170 changes: 169 additions & 1 deletion docs/admin/extensible-admission-controllers.md
@@ -4,6 +4,7 @@ reviewers:
- lavalamp
- whitlockjc
- caesarxuchao
- deads2k
title: Dynamic Admission Control
---

@@ -20,11 +21,175 @@ the following:
* They need to be compiled into kube-apiserver.
* They are only configurable when the apiserver starts up.

Two features, *Admission Webhooks* (beta in 1.9) and *Initializers* (alpha),
address these limitations. They allow admission controllers to be developed
out-of-tree and configured at runtime.

This page describes how to use Admission Webhooks and Initializers.

## Admission Webhooks

### What are admission webhooks?

Admission webhooks are HTTP callbacks that receive admission requests and do
something with them. You can define two types of admission webhooks,
[ValidatingAdmissionWebhooks](/docs/admin/admission-controllers.md#validatingadmissionwebhook-alpha-in-18-beta-in-19)
and
[MutatingAdmissionWebhooks](/docs/admin/admission-controllers.md#mutatingadmissionwebhook-beta-in-19).
With `ValidatingAdmissionWebhooks`, you may reject requests to enforce custom
admission policies. With `MutatingAdmissionWebhooks`, you may change requests to
enforce custom defaults.

### Experimenting with admission webhooks

Admission webhooks are essentially part of the cluster control-plane. You should
write and deploy them with great caution. Please read the [user
guides](https://github.com/kubernetes/website/pull/6836/files)(WIP) for
instructions if you intend to write/deploy production-grade admission webhooks.
In the following, we describe how to quickly experiment with admission webhooks.

### Prerequisites

* Ensure that the Kubernetes cluster is running version 1.9 or later.

* Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook
admission controllers are enabled.
[Here](/docs/admin/admission-controllers.md#is-there-a-recommended-set-of-admission-controllers-to-use)
is a recommended set of admission controllers to enable in general.

* Ensure that the `admissionregistration.k8s.io/v1beta1` API is enabled.

### Write an admission webhook server

Please refer to the implementation of the [admission webhook
server](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/images/webhook/main.go)
that is validated in a Kubernetes e2e test. The webhook handles the
`admissionReview` requests sent by the apiservers, and sends back its decision
wrapped in `admissionResponse`.
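
For orientation, the exchange has roughly the following shape. This is a sketch rendered as YAML for readability; the real payloads are JSON, and the `uid` and object contents below are hypothetical:

```yaml
# AdmissionReview request sent by the apiserver (abridged)
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
request:
  uid: "705ab4f5-6393-11e8-b7cc-42010a800002"  # hypothetical request UID
  kind:
    group: ""
    version: v1
    kind: Pod
  operation: CREATE
  object:
    # the full object under admission appears inline here (elided)
---
# AdmissionReview response returned by the webhook (abridged)
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
response:
  uid: "705ab4f5-6393-11e8-b7cc-42010a800002"  # must echo the request UID
  allowed: false
  status:
    message: "rejected by example policy"
```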

The example admission webhook server leaves the `ClientAuth` field
[empty](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/images/webhook/config.go#L48-L49),
which defaults to `NoClientCert`. This means that the webhook server does not
authenticate the identity of its clients, which are presumed to be apiservers. If you need
mutual TLS or another way to authenticate the clients, see
how to [authenticate apiservers](#authenticate-apiservers).

### Deploy the admission webhook service

The webhook server in the e2e test is deployed in the Kubernetes cluster via
the [deployment API](/docs/api-reference/{{page.version}}/#deployment-v1beta1-apps).
The test also creates a [service](/docs/api-reference/{{page.version}}/#service-v1-core)
as the front-end of the webhook server. See
[code](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/e2e/apimachinery/webhook.go#L196).

You may also deploy your webhooks outside of the cluster. You will need to update
your [webhook client configurations](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/staging/src/k8s.io/api/admissionregistration/v1beta1/types.go#L218) accordingly.
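
As a rough in-cluster sketch, the webhook could be fronted by a Deployment and a Service along the following lines; every name, namespace, label, and image here is hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-server        # hypothetical name
  namespace: webhook-ns       # hypothetical namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-server
  template:
    metadata:
      labels:
        app: webhook-server
    spec:
      containers:
      - name: webhook
        image: example.com/webhook-server:1.0  # hypothetical image serving TLS on 443
        ports:
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: webhook-service       # the front-end referenced by the webhook configuration
  namespace: webhook-ns
spec:
  selector:
    app: webhook-server
  ports:
  - port: 443
    targetPort: 443
```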

### Configure admission webhooks on the fly

You can dynamically configure what resources are subject to what admission
webhooks via
[ValidatingWebhookConfiguration](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/staging/src/k8s.io/api/admissionregistration/v1beta1/types.go#L68)
or
[MutatingWebhookConfiguration](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/staging/src/k8s.io/api/admissionregistration/v1beta1/types.go#L98).

The following is an example `ValidatingWebhookConfiguration`; a mutating webhook
configuration is similar.

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: <name of this configuration object>
webhooks:
- name: <webhook name, e.g., pod-policy.example.io>
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
  clientConfig:
    service:
      namespace: <namespace of the front-end service>
      name: <name of the front-end service>
    caBundle: <pem encoded ca cert that signs the server cert used by the webhook>
```
When an apiserver receives a request that matches one of the `rules`, the
apiserver sends an `admissionReview` request to the webhook as specified in the
`clientConfig`.

After you create the webhook configuration, the system will take a few seconds
to honor the new configuration.

### Authenticate apiservers

If your admission webhooks require authentication, you can configure the
apiservers to use basic auth, a bearer token, or a cert to authenticate themselves to
the webhooks. There are three steps to complete the configuration.

* When starting the apiserver, specify the location of the admission control
configuration file via the `--admission-control-config-file` flag.

* In the admission control configuration file, specify where the
MutatingAdmissionWebhook controller and ValidatingAdmissionWebhook controller
should read the credentials. The credentials are stored in kubeConfig files
(yes, the same schema that's used by kubectl), so the field name is
`kubeConfigFile`. Here is an example admission control configuration file:

```yaml
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1alpha1
    kind: WebhookAdmission
    kubeConfigFile: <path-to-kubeconfig-file>
- name: MutatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1alpha1
    kind: WebhookAdmission
    kubeConfigFile: <path-to-kubeconfig-file>
```

The schema of `AdmissionConfiguration` is defined
[here](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.0/staging/src/k8s.io/apiserver/pkg/apis/apiserver/v1alpha1/types.go#L27).

* In the kubeConfig file, provide the credentials:

```yaml
apiVersion: v1
kind: Config
users:
# DNS name of webhook service, i.e., <service name>.<namespace>.svc, or the URL
# of the webhook server.
- name: 'webhook1.ns1.svc'
  user:
    client-certificate-data: <pem encoded certificate>
    client-key-data: <pem encoded key>
# The `name` supports using `*` to wildcard-match prefix segments.
- name: '*.webhook-company.org'
  user:
    password: <password>
    username: <name>
# '*' is the default match.
- name: '*'
  user:
    token: <token>
```
Of course, you also need to set up the webhook server to handle these authentication requests.

## Initializers
@@ -135,6 +300,7 @@ the pods will be stuck in an uninitialized state.

Make sure that all expansions of the `<apiGroup, apiVersions, resources>` tuple
in a `rule` are valid. If they are not, separate them in different `rules`.
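
For example, a sketch of that separation in an `InitializerConfiguration` could look like the following; the initializer and configuration names are hypothetical:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-initializer-config   # hypothetical name
initializers:
- name: podimage.example.com         # hypothetical initializer name
  rules:
  # Rather than one rule mixing core-group pods with apps-group
  # deployments, each valid tuple gets its own rule:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    resources:
    - pods
  - apiGroups:
    - apps
    apiVersions:
    - v1beta1
    resources:
    - deployments
```
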
2 changes: 1 addition & 1 deletion docs/admin/high-availability/building.md
@@ -219,7 +219,7 @@ endpoints. You can switch to the new reconciler by adding the flag
{% include feature-state-alpha.md %}

If you want to know more, you can check the following resources:
- [issue kubernetes/kuberenetes#22609](https://github.com/kubernetes/kubernetes/issues/22609),
- [issue kubernetes/kubernetes#22609](https://github.com/kubernetes/kubernetes/issues/22609),
which gives additional context
- [master/reconcilers/mastercount.go](https://github.com/kubernetes/kubernetes/blob/dd9981d038012c120525c9e6df98b3beb3ef19e1/pkg/master/reconcilers/mastercount.go#L63),
the implementation of the master count reconciler
2 changes: 1 addition & 1 deletion docs/admin/multiple-zones.md
@@ -36,7 +36,7 @@ zone information.

Kubernetes will automatically spread the pods in a replication controller
or service across nodes in a single-zone cluster (to reduce the impact of
failures.) With multiple-zone clusters, this spreading behaviour is
failures.) With multiple-zone clusters, this spreading behavior is
extended across zones (to reduce the impact of zone failures.) (This is
achieved via `SelectorSpreadPriority`). This is a best-effort
placement, and so if the zones in your cluster are heterogeneous
2 changes: 1 addition & 1 deletion docs/concepts/cluster-administration/device-plugins.md
@@ -139,7 +139,7 @@ For examples of device plugin implementations, see:
* it requires using [nvidia-docker 2.0](https://github.com/NVIDIA/nvidia-docker) which allows you to run GPU enabled docker containers
* The [NVIDIA GPU device plugin for COS base OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu).
* The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin)

* The [Solarflare device plugin](https://github.com/vikaschoudhary16/sfc-device-plugin)
{% endcapture %}

{% include templates/concept.md %}
2 changes: 1 addition & 1 deletion docs/concepts/cluster-administration/networking.md
@@ -216,7 +216,7 @@ The Nuage platform uses overlays to provide seamless policy-based networking bet

### OpenVSwitch

[OpenVSwitch](/docs/admin/ovs-networking) is a somewhat more mature but also
[OpenVSwitch](https://www.openvswitch.org/) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.

6 changes: 3 additions & 3 deletions docs/concepts/configuration/assign-pod-node.md
@@ -88,7 +88,7 @@ feature, currently in beta, greatly expands the types of constraints you can exp
3. you can constrain against labels on other pods running on the node (or other topological domain),
rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located

The affinity feature consists of two types of affinity, "node affinity" and "inter-pod affinity/anti-affinity."
The affinity feature consists of two types of affinity, "node affinity" and "inter-pod affinity/anti-affinity".
Node affinity is like the existing `nodeSelector` (but with the first two benefits listed above),
while inter-pod affinity/anti-affinity constrains against pod labels rather than node labels, as
described in the third item listed above, in addition to having the first and second properties listed above.
@@ -147,7 +147,7 @@ For more information on node affinity, see the design doc
Inter-pod affinity and anti-affinity were introduced in Kubernetes 1.4.
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled *based on
labels on pods that are already running on the node* rather than based on labels on nodes. The rules are of the form "this pod should (or, in the case of
anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y." Y is expressed
anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y". Y is expressed
as a LabelSelector with an associated list of namespaces (or "all" namespaces); unlike nodes, because pods are namespaced
(and therefore the labels on pods are implicitly namespaced),
a label selector over pod labels must specify which namespaces the selector should apply to. Conceptually X is a topology domain
@@ -203,7 +203,7 @@ empty `topologyKey` is not allowed.
In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
of namespaces which the `labelSelector` should match against (this goes at the same level of the definition as `labelSelector` and `topologyKey`).
If omitted, it defaults to the namespace of the pod where the affinity/anti-affinity definition appears.
If defined but empty, it means "all namespaces."
If defined but empty, it means "all namespaces".

All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity and anti-affinity
must be satisfied for the pod to be scheduled onto a node.
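
As an illustrative sketch, a pod that uses required inter-pod anti-affinity to stay off nodes already running pods labeled `app=web` might look like this; the names, labels, and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-anti-affinity   # hypothetical name
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: kubernetes.io/hostname   # X = the node, in the terms above
  containers:
  - name: main
    image: k8s.gcr.io/pause   # placeholder image
```
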
2 changes: 1 addition & 1 deletion docs/concepts/policy/resource-quotas.md
@@ -25,7 +25,7 @@ Resource quotas work like this:
status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated.
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
the LimitRange admission controller to force defaults for pods that make no compute resource requirements.
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
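
For reference, a minimal `ResourceQuota` that enables compute-resource quota in a namespace might look like the following sketch; the name, namespace, and amounts are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota     # hypothetical name
  namespace: example-ns   # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```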

Examples of policies that could be created using namespaces and quotas are:
@@ -185,10 +185,10 @@ Following are the manual steps to follow in case you run into problems running m

```shell
#create a public private key pair
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
#convert the keys to base64 encoding
cat /d/tmp/nginx.crt | base 64
cat /d/tmp/nginx.key | base 64
cat /d/tmp/nginx.crt | base64
cat /d/tmp/nginx.key | base64
```
Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.
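
A sketch of what such a file might look like, assuming a hypothetical Secret name and keeping the values as placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginxsecret   # hypothetical name
type: Opaque
data:
  nginx.crt: <base64 encoded cert from the previous step, on one line>
  nginx.key: <base64 encoded key from the previous step, on one line>
```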

2 changes: 1 addition & 1 deletion docs/concepts/services-networking/ingress.md
@@ -16,7 +16,7 @@ Throughout this doc you will see a few terms that are sometimes used interchange
* Node: A single virtual or physical machine in a Kubernetes cluster.
* Cluster: A group of nodes firewalled from the internet, that are the primary compute resources managed by Kubernetes.
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](/docs/concepts/cluster-administration/networking/). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](/docs/admin/ovs-networking/).
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](/docs/concepts/cluster-administration/networking/). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](https://www.openvswitch.org/).
* Service: A Kubernetes [Service](/docs/concepts/services-networking/service/) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.

## What is Ingress?
1 change: 1 addition & 0 deletions docs/concepts/services-networking/network-policies.md
@@ -186,3 +186,4 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will

- See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
walkthrough for further examples.
- See more [Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
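
For context, the default-deny pattern referenced above is typically expressed as a policy like the following sketch (the name is arbitrary); the empty `podSelector` selects every pod in the namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny   # arbitrary name
spec:
  podSelector: {}      # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
```
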
5 changes: 3 additions & 2 deletions docs/getting-started-guides/kubespray.md
@@ -10,7 +10,7 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in

* a highly available cluster
* composable attributes
* support for most popular Linux distributions (CoreOS, Debian Jessie, Ubuntu 16.04, CentOS/RHEL 7)
* support for most popular Linux distributions (CoreOS, Debian Jessie, Ubuntu 16.04, CentOS/RHEL 7, Fedora/CentOS Atomic)
* continuous integration tests

To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).
@@ -79,7 +79,8 @@ Kubespray provides additional playbooks to manage your cluster: _scale_ and _upg

### Scale your cluster

You can scale your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can add worker nodes to your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes)".

### Upgrade your cluster
