Add kubeadm upgrade docs #4770
Conversation
Before the docs review, please revert the addition of the extra spacing in the ToC.
_data/tasks.yml
Outdated
@@ -1,15 +1,15 @@
bigheader: "Tasks"
abstract: "Step-by-step instructions for performing operations with Kubernetes."
bigheader: "Tasks"
Please revert the addition of the extra spacing.
Oops, sorry, didn't notice this. It's due to an Atom package that automatically formats YAML files.
Some initial points.
@mhausenblas it would be cool if you wrote down the notes about upgrading the kubelets as well that we talked about
cc @lukemarsden as well -- here's user-facing docs about our newest feature
@@ -0,0 +1,54 @@
---
approvers:
- pipejakob
and luxas ;)
and @roberthbailey and @jbeda I guess...
he he, a copy and paste thing, yes makes sense …
{% endcapture %}

{% capture prerequisites %}
You need to have a Kubernetes cluster running version 1.7.x in order to use the process described here. Note that only one minor version upgrade is supported, that is, you can only upgrade from, say, 1.7 to 1.8, not from 1.7 to 1.9.
nit: v1.7.0 or higher
Maybe we can move the note down to the out-of-scope section?
- Any app-level state, for example, a database an app might depend on (like MySQL or MongoDB) must be backed up beforehand.

Note that `kubeadm upgrade` is 'eventually idempotent', that is, you can run it over and over again if you find yourself in a bad state and it should be able to recover.
Can you move this down to the recovery section?

## On the master

1. Upgrade `kubectl` using [curl](/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl). Note: DO NOT use `apt` or `yum` or any other package manager to upgrade it.
only upgrading kubeadm manually is actually required.
kubelet will be upgraded later using the debs/rpms
Hmmm, now I'm a bit confused. At least that was the state when we discussed it. Has this changed or did I misunderstand you here?

{% capture steps %}

## On the master
Can we name this `Upgrading your control plane`?
Then remind the user that this has to be done on the master.

2. Install the most recent version of `kubeadm` using curl.

3. On the master node, run `kubeadm upgrade plan`, which tells you what versions are available.
Please add these in code blocks:
```
$ kubeadm upgrade plan
```
and paste some preliminary output (I have that on the upgrades PR).
Please explain what this command does as well, like:
- It checks that the cluster is in an upgradeable state
- It fetches the versions available to upgrade to in a user-friendly way

3. On the master node, run `kubeadm upgrade plan`, which tells you what versions are available.

4. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply --version v1.7.3`.
Add this in a code block as well so it's more discoverable; and with some simple outputs.
Please tell the user what's happening:
- It checks that the cluster is in an upgradeable state (API Server reachable, all nodes in the Ready state, control plane healthy)
- It enforces the version skew policies
- It makes sure the control plane images are available or available to pull to the machine
- It upgrades the control plane components or rollbacks if any of them fails to come up
- It applies the new kube-dns and kube-proxy manifests and enforces all necessary RBAC rules are created

## Recovering from a bad state

You can use `kubeadm upgrade` to change a running cluster with `x.x.x --> x.x.x` with `--force`, which can be used to recover from a bad state.
nit: If `kubeadm upgrade` somehow fails and fails to roll back (due to an unexpected shutdown during execution, for instance), you may run `kubeadm upgrade` again, as it is idempotent and should eventually make sure the actual state is the desired state you are declaring.
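For illustration only, a recovery rerun along those lines might look like the following sketch; the target version is the example used elsewhere in this draft, and `--force` is the flag described above, so treat this as an assumption rather than a prescribed invocation:

```shell
# Hypothetical recovery run on the master: re-apply the same target version.
# --force re-runs the upgrade even though the cluster may already report this version.
$ kubeadm upgrade apply --version v1.7.3 --force
```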
The following is out of scope for `kubeadm upgrade`, that is, you need to take care of it yourself:

- No etcd upgrades are performed. You can, for example, use `etcdctl` to take care of this.
- Any app-level state, for example, a database an app might depend on (like MySQL or MongoDB) must be backed up beforehand.
Can we break this out into a section with the general advice: "As a best practice, you should back up what's important to you," or something like that?
We should clarify that the upgrade should not touch any workloads, just the k8s internal components, but it's always good to be on the safe side.
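As one purely illustrative sketch of that "back up what's important" advice (this assumes an etcd v3 data store reachable on the default local endpoint, which is an assumption and not something specified in this PR):

```shell
# Hedged example: snapshot etcd before upgrading; endpoint and output path are assumptions.
$ ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 snapshot save /var/backups/etcd-pre-upgrade.db
```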
@mhausenblas please ping me when ready so I know (no rush though)
@luxas sure thing, scheduled for beginning of next week. Will ping you then …
SUGGESTED CHANGES TO THE TEXT

Opening paragraph
- Remove comma between cluster and currently.

Before you Begin
- Change "You need to have a Kubernetes cluster" to "You need to have a self-hosted Kubernetes cluster" or "You need to have a kubeadm Kubernetes cluster".
- Put the note about one minor version somewhere outside of the Before you begin section.
- Does upgrading etcd have to be done before you begin? If not, move the note out of the Before you begin section.
- Make the steps more concise. For example:
- Move the note about being eventually idempotent outside of the Before you begin section.

On the master
- Include a link to using curl to install the most recent version of kubeadm.
- Spell out Software Defined Network. Spell out Container Network Interface once.
- No need to say, "After kubeadm upgrade you need to". Just say "Manually upgrade your Software Defined Network."

Recovering from a bad state
- I don't understand what's being said here. Maybe rephrase and blend in the statement about kubeadm upgrade being eventually idempotent.
@mhausenblas any updates here?
@Kargakis I'm on it
Feedback from @chenopis, @luxas, and @steveperry-53 addressed with this commit
Deploy preview ready! Built with commit be8d67a https://deploy-preview-4770--kubernetes-io-vnext-staging.netlify.com
much better, thanks!
We still need new CLI output, I can get that for you though soon-ish.
Please add a section with "how to upgrade your kubelets" as well
Before proceeding:

- You need to have a `kubeadm` Kubernetes cluster running version 1.7.0 or higher in order to use the process described here.
- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v180-alpha2) carefully.
we don't have to pin this to alpha2

You have to carry out the following steps on the master:

1. Upgrade `kubectl` using [curl](/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl). Note: DO NOT use `apt` or `yum` or any other package manager to upgrade it.
This is unnecessary. Only kubeadm is needed

1. Upgrade `kubectl` using [curl](/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl). Note: DO NOT use `apt` or `yum` or any other package manager to upgrade it.

2. Install the most recent version of `kubeadm` using curl.
Please tell us how to do that:
$ export VERSION=v1.8.0 # or any given released k8s version
$ export ARCH=amd64 # or arm, arm64, ppc64le or s390x
$ curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm

## Upgrading your control plane

You have to carry out the following steps on the master:
by executing these commands on your master node

Before proceeding:

- You need to have a `kubeadm` Kubernetes cluster running version 1.7.0 or higher in order to use the process described here.
...a functional `kubeadm` Kubernetes... ?
[upgrade/config] Reading configuration from the cluster (you can get this with 'kubectl -n kube-system get cm kubeadm-config -oyaml')
[upgrade] Fetching available versions to upgrade to:
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0-alpha.2.789+11f48dc291fe93
Let's fake all these values to use v1.8.0 :)
4. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply` as follows:

```shell
$ kubeadm upgrade apply --version v1.7.3
Let's upgrade to v1.8.0 here in the docs
[apiclient] Found 0 Pods for label selector component=kube-scheduler
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[self-hosted] Created TLS secret "ca" from ca.crt and ca.key
Should not be a self-hosted cluster in the example
[addons] Applied essential addon: kube-dns
```

The `kubeadm upgrade apply` does the following:
remove `The`?
2nd iteration review comments by @luxas
Some comments
@steveperry-53 PTAL for wording as well
_data/tasks.yml
Outdated
@@ -122,6 +122,7 @@ toc:
- docs/tasks/administer-cluster/cluster-management.md
- docs/tasks/administer-cluster/upgrade-1-6.md
- docs/tasks/administer-cluster/kubeadm-upgrade-1-7.md
- docs/tasks/administer-cluster/kubeadm-upgrade-cmd.md
`kubeadm-upgrade-1.8` please?

{% capture overview %}

This guide is for upgrading `kubeadm` clusters from version 1.7.x to 1.8.x.
say that this also goes for upgrading clusters from `v1.7.x` to `v1.7.y` and `v1.8.x` to `v1.8.y`, where y>x
1. On the master node, run the following:

```shell
$ kubeadm upgrade plan
First of all, you have to remember what CLI args you passed to `kubeadm init` the first time.
If you used flags, run `kubeadm config upload from-flags [flags]` (flags can be empty).
If you used a config file, run `kubeadm config upload from-file --config [config]` (config is mandatory).
This is to preserve the config for future upgrades. Note that this only has to be done the first time you use `kubeadm upgrade`.
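For illustration only, the two variants might look like the following sketch (the config file path is a hypothetical example, not a value from this PR):

```shell
# If kubeadm init was originally run with flags (flags can be empty, per the note above):
$ kubeadm config upload from-flags

# If kubeadm init was originally run with a config file (--config is mandatory; the path is a hypothetical example):
$ kubeadm config upload from-file --config /etc/kubernetes/kubeadm-init.yaml
```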

```shell
$ kubeadm upgrade plan
[upgrade] Making sure the cluster is healthy:
Replace with
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to:
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0
[upgrade/versions] Latest stable version: v1.8.0
[upgrade/versions] Latest version in the v1.7 series: v1.7.6
Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.7.1 v1.7.6
Upgrade to the latest version in the v1.7 series:
COMPONENT CURRENT AVAILABLE
API Server v1.7.1 v1.7.6
Controller Manager v1.7.1 v1.7.6
Scheduler v1.7.1 v1.7.6
Kube Proxy v1.7.1 v1.7.6
Kube DNS 1.14.4 1.14.4
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.7.6
_____________________________________________________________________
Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.7.1 v1.8.0
Upgrade to the latest experimental version:
COMPONENT CURRENT AVAILABLE
API Server v1.7.1 v1.8.0
Controller Manager v1.7.1 v1.8.0
Scheduler v1.7.1 v1.8.0
Kube Proxy v1.7.1 v1.8.0
Kube DNS 1.14.4 1.14.4
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.8.0
Note: Before you do can perform this upgrade, you have to update kubeadm to v1.8.0
_____________________________________________________________________
1. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply` as follows:

```shell
$ kubeadm upgrade apply --version v1.8.0
now only `kubeadm upgrade apply v1.8.0`

```shell
$ kubeadm upgrade apply --version v1.8.0
[upgrade] Making sure the cluster is healthy:
Change output to
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to upgrade to version "v1.8.0"
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.8.0"...
[upgrade/staticpods] Writing upgraded Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-scheduler.yaml"
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.8.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.

```shell
$ sudo systemctl stop kubelet
$ curl -s -L -o kubelet \
This won't work, as the kubelet drop-in manifest also gets updated.
At this stage, run the full `apt-get upgrade` or `yum update` and write that here.
You may use tabs for this; look at the main kubeadm doc for how to do so.
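A minimal sketch of what that could look like (the package-manager invocations are assumptions based on the standard kubeadm package install, not commands taken from this PR):

```shell
# Debian/Ubuntu nodes: run the full upgrade so the kubelet package and its drop-in are both updated.
$ apt-get update && apt-get upgrade -y

# CentOS/RHEL nodes:
$ yum update -y
```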
@luxas I've addressed your latest round of comments now
Thanks!
/lgtm
One last nit; we should link to this page from the main kubeadm getting started guide. Can be done in a follow-up though

kubeadm upgrade apply v1.8.0

Note: Before you do can perform this upgrade, you have to update kubeadm to v1.8.0
nit: this line should be removed, missed that when I posted this to you

For each worker node (referred to as `$WORKER` below) in your cluster, upgrade `kubelet` by executing the following commands:

1. Prepare the node for maintenance, marking it unschedulable and evicting the workload:
nit: this has to be done on the master
Hmmm, if I understand you right then it's probably more than a nit ;)
So, are you saying this has to be done on the master as well, or only on the master? If the former, then no problem; if the latter, then this shouldn't be in a separate section at all but in the previous section, with all references to worker nodes removed. Please advise @luxas …
I'm saying that this step might be of use if you want to upgrade your master kubelet, yes.
But in particular, I meant that these `kubectl` commands can only be run on the master.
After you've run drain/cordon WORKER1 on the master, go ahead and do `apt-get install kubelet` on WORKER1. Finally, go back to the master and uncordon.
Makes sense?
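Sketching that flow end to end, for illustration only (WORKER1 and the apt-based install come from the comment above; everything else, including the extra drain flag, is an assumption):

```shell
# On the master: evict the worker's workload; drain also cordons the node first.
# Flags such as --ignore-daemonsets may be needed depending on what is running there.
$ kubectl drain WORKER1

# On WORKER1 itself: upgrade the kubelet package (Debian/Ubuntu example).
$ apt-get update && apt-get install -y kubelet

# Back on the master: make the worker schedulable again.
$ kubectl uncordon WORKER1
```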
Merging this based on @chenopis docs review.

```shell
$ kubectl cordon $WORKER
$ kubectl drain $WORKER
I think `drain` is cordoning the node.
addresses /issues/4689
note this is the first draft, just to get the ball rolling ;)
CC: @luxas