Merge branch 'master' into release-1.8
steveperry-53 committed Aug 9, 2017
2 parents b2eebbb + 25f0f09 commit 1025476
Showing 25 changed files with 198 additions and 106 deletions.
9 changes: 3 additions & 6 deletions OWNERS
Original file line number Diff line number Diff line change
@@ -1,6 +1,3 @@
approvers:
- smarterclayton
- janetkuo
- pwittrock
- kelseyhightower
- jaredbhatti
reviewers:
- chenopis
- zacharysarah
6 changes: 4 additions & 2 deletions _data/tutorials.yml
@@ -29,8 +29,11 @@ toc:
section:
- docs/tutorials/kubernetes-basics/update-intro.html
- docs/tutorials/kubernetes-basics/update-interactive.html
- title: Online Training Course
- title: Online Training Courses
  section:
  - title: Scalable Microservices with Kubernetes (Udacity)
    path: https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
  - title: Introduction to Kubernetes (edX)
    path: https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#
- docs/tutorials/stateless-application/hello-minikube.md
- title: Configuration
section:
@@ -61,4 +64,3 @@ toc:
- title: Services
section:
- docs/tutorials/services/source-ip.md

2 changes: 1 addition & 1 deletion docs/concepts/architecture/master-node-communication.md
@@ -77,7 +77,7 @@ To verify this connection, use the `--kubelet-certificate-authority` flag to
provide the apiserver with a root certificates bundle to use to verify the
kubelet's serving certificate.

If that is not possible, use [SSH tunneling](/docs/admin/master-node-communication/#ssh-tunnels)
If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels)
between the apiserver and kubelet if required to avoid connecting over an
untrusted or public network.
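As a rough illustration of the first option, the relevant apiserver flags might look like this in a static Pod manifest (a sketch — the file paths shown are assumptions, not values from this page):

```yaml
# Fragment of a kube-apiserver manifest; only the relevant flags are shown.
command:
- kube-apiserver
# Root CA bundle used to verify the serving certificate each kubelet presents.
- --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt
# Client credentials the apiserver presents to the kubelet.
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```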

2 changes: 1 addition & 1 deletion docs/concepts/overview/working-with-objects/namespaces.md
@@ -82,7 +82,7 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
## Not All Objects are in a Namespace

Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
in some namespace. However namespace resources are not themselves in a namespace.
in some namespaces. However namespace resources are not themselves in a namespace.
And low-level resources, such as [nodes](/docs/admin/node) and
persistentVolumes, are not in any namespace. Events are an exception: they may or may not
have a namespace, depending on the object the event is about.
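For example, a namespaced object carries a `metadata.namespace` field, while a cluster-level object such as a Node does not (a minimal sketch with illustrative names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default    # Pods live in a namespace
---
apiVersion: v1
kind: Node
metadata:
  name: worker-1        # Nodes are not in any namespace
```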
@@ -101,7 +101,7 @@ Kubernetes supports 2 primary modes of finding a Service - environment variables

### Environment Variables

When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods (your pod name will be different):
When a Pod runs on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods (your pod name will be different):

```shell
$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
2 changes: 1 addition & 1 deletion docs/concepts/workloads/controllers/daemonset.md
@@ -93,7 +93,7 @@ Normally, the machine that a pod runs on is selected by the Kubernetes scheduler
created by the Daemon controller have the machine already selected (`.spec.nodeName` is specified
when the pod is created, so it is ignored by the scheduler). Therefore:

- the [`unschedulable`](/docs/admin/node/#manual-node-administration) field of a node is not respected
- The [`unschedulable`](/docs/admin/node/#manual-node-administration) field of a node is not respected
by the DaemonSet controller.
- DaemonSet controller can make pods even when the scheduler has not been started, which can help cluster
bootstrap.
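For illustration, a Pod created by the DaemonSet controller might look roughly like this (the name and image are hypothetical), with `.spec.nodeName` already filled in:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-elasticsearch-x8k2p   # illustrative name
spec:
  nodeName: node-1    # pre-set by the DaemonSet controller, so the scheduler ignores this Pod
  containers:
  - name: fluentd-elasticsearch
    image: gcr.io/google-containers/fluentd-elasticsearch:1.20
```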
6 changes: 3 additions & 3 deletions docs/concepts/workloads/controllers/statefulset.md
@@ -94,10 +94,9 @@ spec:
volumeClaimTemplates:
- metadata:
name: www
annotations:
volume.beta.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: my-storage-class
resources:
requests:
storage: 1Gi
@@ -143,7 +142,8 @@ Note that Cluster Domain will be set to `cluster.local` unless

Kubernetes creates one [PersistentVolume](/docs/concepts/storage/volumes/) for each
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
with a storage class of `anything` and 1 Gib of provisioned storage. When a Pod is (re)scheduled
with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that the PersistentVolumes associated with the
Pods' PersistentVolume Claims are not deleted when the Pods, or StatefulSet are deleted.
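Assuming the StatefulSet in the nginx example is named `web`, the claim bound to its first Pod would look roughly like this (a sketch; StatefulSet claims follow the `<template-name>-<pod-name>` naming convention):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-web-0   # <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 1Gi
```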
2 changes: 1 addition & 1 deletion docs/concepts/workloads/pods/init-containers.md
@@ -83,7 +83,7 @@ Here are some ideas for how to use Init Containers:
* Wait for some time before starting the app Container with a command like `sleep 60`.
* Clone a git repository into a volume.
* Place values into a configuration file and run a template tool to dynamically
generate a configuration file for the the main app Container. For example,
generate a configuration file for the main app Container. For example,
place the POD_IP value in a configuration and generate the main app
configuration file using Jinja.
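The first idea above can be sketched as follows (a minimal example; the Service name `myservice` and the busybox image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  # Block until the Service's DNS name resolves, then let the app start.
  - name: wait-for-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
```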

138 changes: 73 additions & 65 deletions docs/getting-started-guides/kubespray.md
@@ -4,94 +4,102 @@ title: Installing Kubernetes On-premises/Cloud Providers with Kubespray

## Overview

This quickstart helps to install a Kubernetes cluster hosted
on GCE, Azure, OpenStack, AWS or Baremetal with
[`Kubespray`](https://github.com/kubernetes-incubator/kubespray) tool.

Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks,
[inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md)
generation CLI tools and domain knowledge for generic OS/Kubernetes
clusters configuration management tasks. It provides:

* [High available cluster](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md)
* [Composable](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vars.md)
(Choice of the network plugin, for instance)
* Support most popular Linux
[distributions](https://github.com/kubernetes-incubator/kubespray#supported-linux-distributions)
* Continuous integration tests

To choose a tool which fits your use case the best, you may want to read this
[comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md)
to [kubeadm](../kubeadm) and [kops](../kops).
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, or Baremetal with [Kubespray](https://github.com/kubernetes-incubator/kubespray).

Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:

* a highly available cluster
* composable attributes
* support for most popular Linux distributions
* continuous integration tests

To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](../kubeadm) and [kops](../kops).

## Creating a cluster

### (1/4) Ensure the underlay [requirements](https://github.com/kubernetes-incubator/kubespray#requirements) are met
### (1/5) Meet the underlay [requirements](https://github.com/kubernetes-incubator/kubespray#requirements)

Provision servers with the following requirements:

* `Ansible v2.3` (or newer)
* `Jinja 2.9` (or newer)
* `python-netaddr` installed on the machine that runs Ansible commands
* Target servers must have access to the Internet in order to pull Docker images
* Target servers are configured to allow IPv4 forwarding
* Target servers have SSH connectivity (tcp/22) directly to your nodes or through a bastion host/SSH jump box
* Target servers have a privileged user
* Your SSH key must be copied to all the servers that are part of your inventory
* Firewall rules configured properly to allow Ansible and Kubernetes components to communicate
* If using a cloud provider, you must have the appropriate credentials available and exported as environment variables

Kubespray provides the following utilities to help provision your environment:

* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
* [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws)
* [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack)
* [kubespray-cli](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md)

**Note:** kubespray-cli is no longer actively maintained.
{: .note}

### (2/5) Compose an inventory file

After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
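One possible shape for such an inventory, using Ansible's YAML inventory format (host names, addresses, and group membership are illustrative — check the Kubespray docs linked above for the expected group names):

```yaml
all:
  hosts:
    node1: {ansible_host: 10.0.0.1}
    node2: {ansible_host: 10.0.0.2}
    node3: {ansible_host: 10.0.0.3}
  children:
    kube-master:        # nodes that run the control plane
      hosts: {node1: {}, node2: {}}
    etcd:               # nodes that run etcd (should be an odd number)
      hosts: {node1: {}, node2: {}, node3: {}}
    kube-node:          # nodes that run workloads
      hosts: {node2: {}, node3: {}}
    k8s-cluster:
      children: {kube-master: {}, kube-node: {}}
```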

### (3/5) Plan your cluster deployment

Kubespray provides the ability to customize many aspects of the deployment:

* CNI (networking) plugins
* DNS configuration
* Choice of control plane: native/binary, or containerized with docker or rkt
* Component versions
* Calico route reflectors
* Component runtime options
* Certificate generation methods

Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
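For example, a variable override might look roughly like this (a sketch; the variable names and values should be verified against the Kubespray variable docs):

```yaml
# e.g. inventory/group_vars/k8s-cluster.yml (path is illustrative)
kube_network_plugin: calico    # CNI (networking) plugin choice
kube_version: v1.8.0           # version of Kubernetes components to deploy
dns_mode: kubedns              # DNS configuration
```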

### (4/5) Deploy a Cluster

#### Checklist
Next, deploy your cluster with one of two methods:

* You must have cloud instances or baremetal nodes running for your future Kubernetes cluster.
A way to achieve that is to use the
[kubespray-cli tool](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md).
* Or provision baremetal hosts with a tool-of-your-choice or launch cloud instances,
then create an inventory file for Ansible with this [tool](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py).
* [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).
* [kubespray-cli tool](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md)

### (2/4) Compose the deployment
**Note:** kubespray-cli is no longer actively maintained.
{: .note}

#### Checklist
Both methods run the default [cluster definition file](https://github.com/kubernetes-incubator/kubespray/blob/master/cluster.yml).

* Customize your deployment by usual Ansible meanings, which is
[generating inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)
and overriding default data [variables](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vars.md).
Or just stick with default values (Kubespray will choose Calico networking plugin for you
then). This includes steps like deciding on the:
* DNS [configuration options](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-stack.md)
* [Networking plugin](https://github.com/kubernetes-incubator/kubespray#network-plugins) to use
* [Versions](https://github.com/kubernetes-incubator/kubespray#versions-of-supported-components)
of components.
* Additional node groups like [bastion hosts](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md#bastion-host) or
[Calico BGP route reflectors](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/calico.md#optional--bgp-peering-with-border-routers).
* Plan custom deployment steps, if any, or use the default composition layer in the
[cluster definition file](https://github.com/kubernetes-incubator/kubespray/blob/master/cluster.yml).
Taking the best from Ansible world, Kubespray allows users to execute arbitrary steps via the
``ansible-playbook`` with given inventory, playbooks, data overrides and tags, limits, batches
of nodes to deploy and so on.
* For large deployments (100+ nodes), you may want to
[tweak things](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md)
for best results.
Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results.

### (3/4) Run the deployment
### (5/5) Verify the deployment

#### Checklist
Kubespray provides a way to verify inter-pod connectivity and DNS resolution with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md). The netchecker-agent pods verify that they can resolve DNS requests and ping each other within the default namespace. Those pods mimic the behavior of the rest of the workloads and serve as cluster health indicators.

* Apply deployment with
[kubespray-cli tool](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md)
or ``ansible-playbook``
[manual commands](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).
## Cluster operations

### (4/4) (Optional) verify inter-pods connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md)
Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.

#### Checklist
### Scale your cluster

* Ensure the netchecker-agent's pods can resolve DNS requests and ping each over within the default namespace.
Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.
You can scale your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#Adding-nodes)".

## Explore contributed add-ons
### Upgrade your cluster

See the [list of contributed playbooks](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib)
to explore other deployment options.
You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md)".

## What's next

Kubespray has quite a few [marks on the radar](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md).
Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md).

## Cleanup

To delete your scratch cluster, you can apply the
[reset role](https://github.com/kubernetes-incubator/kubespray/blob/master/roles/reset/tasks/main.yml)
with the manual ``ansible-playbook`` command.
You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml).

Note, that it is highly unrecommended to delete production clusters with the reset playbook!
**Caution:** When running the reset playbook, be sure not to accidentally target your production cluster!
{: .caution}

## Feedback

9 changes: 6 additions & 3 deletions docs/getting-started-guides/mesos/index.md
@@ -228,13 +228,16 @@ Note that we have passed these two values already as parameter to the apiserver

A template for a replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/kubedns-controller.yaml.in][12] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:

- replace `{% raw %}{{ pillar['dns_replicas'] }}{% endraw %}` with `1`
- replace `{% raw %}{{ pillar['dns_domain'] }}{% endraw %}` with `cluster.local.`
{% assign dns_replicas = "{{ pillar['dns_replicas'] }}" %}
{% assign dns_domain = "{{ pillar['dns_domain'] }}" %}
- replace `{{ dns_replicas }}` with `1`
- replace `{{ dns_domain }}` with `cluster.local.`
- add `--kube_master_url=${KUBERNETES_MASTER}` parameter to the kube2sky container command.

In addition the service template at [cluster/addons/dns/kubedns-controller.yaml.in][12] needs the following replacement:

- `{% raw %}{{ pillar['dns_server'] }}{% endraw %}` with `10.10.10.10`.
{% assign dns_server = "{{ pillar['dns_server'] }}" %}
- `{{ dns_server }}` with `10.10.10.10`.

To do this automatically:

2 changes: 1 addition & 1 deletion docs/getting-started-guides/ubuntu/upgrades.md
@@ -40,7 +40,7 @@ The Kubernetes Charms use snap channels to drive payloads. The channels are defi
| beta | Latest alpha or beta of Kubernetes for that minor release |
| edge | Nightly builds of that minor release of Kubernetes |

If a release isn't available, the next highest channel is used. For example, 1.6/beta will load `/candidate` or `/stable` depending on availablility of release. Development versions of Kubernetes are available in that minor releases edge channel. There is no guarantee that edge or master will work with the current charms.
If a release isn't available, the next highest channel is used. For example, 1.6/beta will load `/candidate` or `/stable` depending on availability of release. Development versions of Kubernetes are available in that minor releases edge channel. There is no guarantee that edge or master will work with the current charms.

## Master Upgrades

2 changes: 1 addition & 1 deletion docs/home/contribute/review-issues.md
@@ -55,7 +55,7 @@ The following labels and definitions should be used to prioritize issues. If you
## Handling special issue types

### Duplicate issues
If a single problem has one or more issues open for it, the problem should be consolodated into a single issue. You should decide which issue to keep open (or open a new issue), port over all relevant information, link related issues, and close all the other issues that describe the same problem. Only having a single issue to work on will help reduce confusion and avoid duplicating work on the same problem.
If a single problem has one or more issues open for it, the problem should be consolidated into a single issue. You should decide which issue to keep open (or open a new issue), port over all relevant information, link related issues, and close all the other issues that describe the same problem. Only having a single issue to work on will help reduce confusion and avoid duplicating work on the same problem.

### Dead link issues
Depending on where the dead link is reported, different actions are required to resolve the issue. Dead links in the API and Kubectl docs are automation issues and should be assigned a P1 until the problem can be fully understood. All other dead links are issues that need to be manually fixed and can be assigned a P3.
2 changes: 1 addition & 1 deletion docs/home/contribute/style-guide.md
@@ -231,7 +231,7 @@ I didn't read the stlye guide.

### Ordered Lists

Callouts will interupt numbered lists unless you indent three spaces before the notice and the tag.
Callouts will interrupt numbered lists unless you indent three spaces before the notice and the tag.

For example:

@@ -127,7 +127,7 @@ $ kubectl get crontab -o json
"uid": "6f65e7a3-8601-11e6-a23e-42010af0000c"
}
}
]
],
"kind": "List",
"metadata": {},
"resourceVersion": "",
2 changes: 1 addition & 1 deletion docs/tasks/administer-cluster/encrypt-data.md
@@ -148,7 +148,7 @@ program to retrieve the contents of your secret.
Since secrets are encrypted on write, performing an update on a secret will encrypt that content.

```
kubectl get secrets -o json | kubectl replace -f -
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```
The command above reads all secrets and then updates them to apply server-side encryption.
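For context, the encryption described on this page is driven by a configuration file passed to the apiserver (a sketch of the general shape; the provider and key name are illustrative, and the secret must be a base64-encoded 32-byte key):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
  - secrets
  providers:
  # aescbc encrypts new writes; identity allows reading not-yet-rewritten data.
  - aescbc:
      keys:
      - name: key1
        secret: <BASE64-ENCODED 32-BYTE KEY>
  - identity: {}
```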